https://arxiv.org/abs/1903.03908
Asymptotic Density of Graphs Excluding Disconnected Minors
For a graph $H$, let $$c_{\infty}(H)= \lim_{n \to \infty}\max\frac{|E(G)|}{n},$$ where the maximum is taken over all graphs $G$ on $n$ vertices not containing $H$ as a minor. Thus $c_{\infty}(H)$ is the asymptotic maximum density of graphs not containing $H$ as a minor. Employing a structural lemma due to Eppstein, we prove new upper bounds on $c_{\infty}(H)$ for disconnected graphs $H$. In particular, we determine $c_{\infty}(H)$ whenever $H$ is a union of cycles. Finally, we investigate the behaviour of $c_\infty(sK_r)$ for fixed $r$, where $sK_r$ denotes the union of $s$ disjoint copies of the complete graph on $r$ vertices. Improving on a result of Thomason, we show that $$c_\infty(sK_r)=s(r-1)-1 \mathrm{\; for \;} s ={\Omega}\left(\frac{\log{r}}{\log\log{r}}\right),$$ and $$c_\infty(sK_r)>s(r-1)-1 \mathrm{\; for \;} s ={o}\left(\frac{\log{r}}{\log\log{r}}\right).$$
\section{Introduction} A graph $H$ is \emph{a minor} of a graph $G$ if a graph isomorphic to $H$ can be obtained from a subgraph of $G$ by contracting edges. A well-studied extremal question in graph minor theory is determining the maximum density of graphs $G$ not containing $H$ as a minor. We denote by $v(G)$ and $e(G)$ the number of vertices and edges of a graph $G$, respectively, and by $d(G)=e(G)/v(G)$ the \emph{density} of a non-null graph $G$. Following Myers and Thomason~\cite{MyeTho05}, for a graph $H$ with $v(H) \geq 2$ we define \emph{the extremal function $c(H)$ of $H$} as the supremum of $d(G)$ taken over all non-null graphs $G$ not containing $H$ as a minor. The asymptotic behaviour of $c(K_r)$, where $K_r$ denotes the complete graph on $r$ vertices, was studied in~\cite{Kostochka82,Kostochka84,Thomason84}, and was determined precisely by Thomason~\cite{Thomason01}, who showed that \begin{equation}\label{e:Thomason} c(K_r)=(\lambda+o_r(1))r\sqrt{\log{r}}, \end{equation} where $$\lambda = \max_{\alpha >0} \frac{1-e^{-\alpha}}{2\sqrt{\alpha}}=0.319\ldots$$ is an explicit constant, which we will refer to as \emph{Thomason's constant}. In~\cite{Tho08} Thomason defined an asymptotic variant of the extremal function as $$c_{\infty}(H)= \lim_{n \to \infty}\max_{v(G)=n} d(G),$$ where the maximum is taken over all graphs $G$ on $n$ vertices not containing $H$ as a minor. We refer to $c_{\infty}(H)$ as the \emph{asymptotic extremal function of $H$.} Clearly, $c_{\infty}(H) \leq c(H)$. When $H$ is connected, as observed in~\cite{Tho08}, $c(H)=c_{\infty}(H)$, because in this case one can replace an $H$-minor free graph $G$ by a disjoint union of many copies of $G$ to obtain arbitrarily large $H$-minor free graphs with the same density as $G$. For disconnected graphs $H$ the parameters $c_{\infty}(H)$ and $c(H)$ frequently differ. Let $lH$ denote the union of $l$ disjoint copies of a graph $H$. The following theorem is the main result of~\cite{Tho08}.
\begin{thm}[Thomason~\cite{Tho08}]\label{t:Thomason} $\:$ \begin{description} \item[a)] $c_{\infty}(lK_r)= (1+o_r(1))c(K_r)$ for fixed $l$, \item[b)] $c_{\infty}(lK_r)= l(r-1)-1$ for $l \geq 20c(K_r)$. \end{description} \end{thm} Powerful structural tools of graph minor theory become available when one considers large graphs in minor-closed graph classes, and, in particular, when one investigates $c_{\infty}(H)$ rather than $c(H)$. The main goal of this paper is to use one such tool, a lemma proved by Eppstein~\cite{Eppstein10}, to derive several new bounds on the asymptotic density of graphs excluding disconnected minors. In particular, we improve the bounds in Theorem~\ref{t:Thomason}. Let us first present a natural lower bound on $c_{\infty}(H)$. Let $\tau(H)$ denote \emph{the vertex cover number} of the graph $H$, that is, the minimum size of a set $X \subseteq V(H)$ such that $H - X$ is edgeless. Let $\bar{K}_{s,t}$ denote the graph obtained from the disjoint union of a complete graph $K_s$ and an edgeless graph $E_t$ on $t$ vertices by making every vertex of $K_s$ adjacent to every vertex of $E_t$. Then $\tau(\bar{K}_{s,t})=s$ for $t \geq 1$, and $\lim_{t \to \infty} d(\bar{K}_{s,t}) = s$. As the vertex cover number of any minor of a graph $G$ does not exceed $\tau(G)$, it follows that $H$ is not a minor of the graph $\bar{K}_{s,t}$ for any $s < \tau(H)$ and any $t$. Thus \begin{equation}\label{e:tau} c_{\infty}(H) \geq \tau(H)-1 \end{equation} for every graph $H$. We say that a graph $H$ is \emph{well-behaved} if (\ref{e:tau}) holds with equality. Dirac~\cite{Dirac64}, Mader~\cite{Mader68}, J{\o}rgensen~\cite{Jorgensen94}, and Song and Thomas~\cite{SonTho06} proved that $c(K_r)=r-2$ for $r \leq 5$, $r \leq 7$, $r=8$ and $r=9$, respectively. Thus $K_r$ is well-behaved for $r \leq 9$; however, (\ref{e:Thomason}) implies that $K_r$ is far from being well-behaved for large $r$.
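The lower bound (\ref{e:tau}) is easy to check numerically. The following illustrative Python sketch (not part of the paper's argument) computes the density of $\bar{K}_{s,t}$, which has $\binom{s}{2}+st$ edges on $s+t$ vertices, and shows it increasing toward $s$ as $t$ grows:

```python
from math import comb

def density_bar_K(s, t):
    # \bar{K}_{s,t}: a clique K_s completely joined to t independent vertices,
    # so e = C(s,2) + s*t and v = s + t
    return (comb(s, 2) + s * t) / (s + t)

# taking s = tau(H) - 1, the density tends to tau(H) - 1,
# which yields the lower bound c_infinity(H) >= tau(H) - 1
for t in (10, 100, 100000):
    print(density_bar_K(4, t))
```

Since $d(\bar{K}_{4,t}) = 4 - 10/(4+t)$, the printed values increase monotonically to $4$.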
On the other hand, Theorem~\ref{t:Thomason} b) implies that $lK_r$ is well-behaved for fixed $r$ and large $l$. The results of this paper imply that many classes of disconnected graphs are well-behaved, or are close to being well-behaved. Our first result provides a general upper bound on $c_{\infty}(H)$ for a disconnected graph $H$ in terms of the asymptotic extremal function and the vertex cover number of its components. \begin{thm} \label{thm:infty+} Let $H$ be the disjoint union of non-null graphs $H_1$ and $H_2$. Then \begin{equation}\label{e:tau2} c_{\infty}(H) \leq \max\{c_\infty(H_2), c_{\infty}(H_1) + \tau(H_2)\}. \end{equation} In particular, \begin{equation}\label{e:sum} c_\infty(H) \leq c_\infty(H_1)+c_\infty(H_2)+1. \end{equation} \end{thm} Note that $c_\infty(H)+l-1 \leq c_\infty(lH)$ for any positive integer $l$ and non-null graph $H$. Theorem~\ref{thm:infty+} together with this observation immediately implies the following corollary, which establishes Theorem~\ref{t:Thomason} a) in a strong form, and provides upper and lower bounds for $c_\infty(lK_r)$ in terms of $c(K_r)$ which differ by at most a multiplicative factor of two. \begin{cor}\label{c:lKrLoose} For all positive integers $l$ and $r$ we have $$ \max\{c(K_r)+l-1, l(r-1)-1\} \leq c_\infty(lK_r) \leq c(K_r) + (l-1)(r-1). $$ \end{cor} Theorem~\ref{thm:infty+} also implies that if $H_1$ is a well-behaved graph, and a graph $H_2$ satisfies $c_\infty(H_2)~\leq~c_\infty(H_1)~+~\tau(H_2)$, then the disjoint union of $H_1$ and $H_2$ is well-behaved. Thus the disjoint union of cliques of size nine or less is well-behaved. The inequality (\ref{e:tau2}) does not necessarily hold with $c_{\infty}$ replaced by $c$. However, it was conjectured by the third author that (\ref{e:sum}) still holds. A weaker form of this conjecture has been verified by Cs\'{o}ka et al.~\cite{CLNWY}, who have shown the following.
\begin{thm}[Cs\'{o}ka et al.~\cite{CLNWY}]\label{t:clnwy} Let $H$ be a disjoint union of $2$-connected graphs $H_1$,$H_2$,\ldots,$H_k$. Then $$ c(H) \leq c(H_1)+c(H_2)+\ldots+c(H_k)+k-1. $$ \end{thm} The proof of Theorem~\ref{t:clnwy} relies on extremal graph theory techniques, in particular, on a lemma about partitioning graphs into parts with prescribed average degree, the proof of which requires extensive calculations. In contrast, the proof of Theorem~\ref{thm:infty+} is very short, modulo the aforementioned lemma by Eppstein. In~\cite{CLNWY} Theorem~\ref{t:clnwy} is used to prove the following upper bound on the extremal function of a union of cycles, verifying conjectures of Reed and Wood~\cite{ReeWoo15}, and Harvey and Wood~\cite{HarWoo15}. \begin{thm}\label{thm:cycles} Let $H$ be a disjoint union of cycles, and let $\brm{comp}(H)$ denote the number of components of $H$. Then \begin{equation}\label{e:cycledensity} c(H) \leq \frac{v(H)+\brm{comp}(H)}{2}-1. \end{equation} \end{thm} In the case when $H$ is a union of odd cycles, the right side of (\ref{e:cycledensity}) is equal to $\tau(H)-1$, and thus a union of odd cycles is well-behaved. In most of the remaining cases for a union of cycles $H$, the exact value of $c(H)$ remains undetermined, but our next result completely determines $c_{\infty}(H)$. \begin{thm}\label{thm:2r} Let $H$ be a $2$-regular graph with $\brm{odd}(H)$ odd components. Then $$c_{\infty}(H)= \frac{v(H)+\brm{odd}(H)}{2}-1,$$ unless $H=C_{2l}$, in which case $c_{\infty}(H)=l -\frac 12,$ or $H=kC_4$, in which case $c_{\infty}(H)=2k -\frac{1}{2}.$ \end{thm} Next we turn to investigating unions of large cliques. Theorem~\ref{t:Thomason} b) and Theorem~\ref{thm:infty+} imply that for every $r$ there exists $l_0=l_0(r) \leq 20c(K_r)$ such that $lK_r$ is not well-behaved for $l<l_0$ and $lK_r$ is well-behaved for $l \geq l_0$. It follows from (\ref{e:Thomason}) that $l_0(r) \geq (\lambda+o_r(1))\sqrt{\log{r}}$.
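The constant $\lambda$ appearing in these estimates is straightforward to evaluate numerically. The following illustrative Python sketch (not part of the paper's argument) approximates $\lambda = \max_{\alpha>0} (1-e^{-\alpha})/(2\sqrt{\alpha})$ by a grid search:

```python
import math

def f(alpha):
    # the function whose maximum over alpha > 0 defines Thomason's constant
    return (1 - math.exp(-alpha)) / (2 * math.sqrt(alpha))

# coarse grid search on (0, 5]; the maximizer lies near alpha ~ 1.26
lam, alpha_star = max((f(i / 10000), i / 10000) for i in range(1, 50001))
print(f"lambda ~ {lam:.4f} at alpha ~ {alpha_star:.3f}")
```

The search recovers $\lambda \approx 0.319$, matching the value quoted after (\ref{e:Thomason}).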
Thomason mentions in~\cite{Tho07,Tho08} that it is likely that $l_0(r) = \Theta(\sqrt{\log{r}})$. This prediction is motivated by the belief that for large enough $r$ and any $l$, the extremal examples should either be ``close" to being $K_r$-minor free or of the form $\bar{K}_{l(r-1)-1,n}$ for some $n$. We show that Thomason's prediction is almost, but not quite, correct, as the next theorem implies that $l_0(r)={\Theta}(\log {r}/\log\log{r})$. The main reason for the discrepancy is that for a certain range of $l$ we exhibit extremal examples which do not have the structure suggested in~\cite{Tho08}, but are obtained by gluing certain non-uniform random graphs. \begin{thm}\label{thm:main} There exist constants $c,C>0$ such that for every positive integer $r$ \begin{flalign*} &a)\qquad c_\infty(lK_r) > l(r-1)-1 \qquad \mathrm{for} \qquad l \leq c\frac{\log r}{\log\log r} \\ &b)\qquad c_\infty(lK_r) = l(r-1)-1 \qquad \mathrm{for} \qquad l \geq C\frac{\log r}{\log\log r}. \end{flalign*} \end{thm} Additionally, the next two theorems provide upper and lower bounds on $c_\infty(lK_r)$, which allow us to approximate the error term $c_\infty(lK_r) - lr$ in the range where this term is substantial, i.e. $l = o(\log {r}/\log\log{r})$. \begin{thm}\label{thm:lower} Let $\lambda$ be Thomason's constant. For $l=\omega(\sqrt{\log r})$ and $l = o(\log {r}/\log\log{r})$ we have $$c_\infty(lK_r) \geq lr + (1-o(1)) \frac{\lambda^2r\log r}{4l}.$$ \end{thm} \begin{thm}\label{thm:upper} There exists a constant $C_u > 0$ such that for all positive integers $l,r$ $$c_\infty(lK_r) \leq lr + C_u\frac{r\log r}{l}.$$ \end{thm} As one of the ingredients in the proof of Theorem~\ref{thm:upper} we need an upper bound on the extremal function $c(K_{s,t})$ which is within a constant factor of optimal. This extremal function has been extensively investigated in the past.
It follows from the results of Kostochka~\cite{Kostochka84} and Thomason~\cite{Thomason84} that $c(K_{s,t}) = O(t\sqrt{\log t})$ for all $s\leq t$. Myers~\cite{Myers03} considered $c(K_{s,t})$ for $s \ll t$ and conjectured that $c(K_{s,t}) \leq c_s t$ for some constant $c_s$ independent of $t$. K\"uhn and Osthus~\cite{KuhOst05} and Kostochka and Prince~\cite{KosPri08} have independently proved this conjecture by showing that $c(K_{s,t}) =(1/2+o(1))t$ for $s \ll t$. Unfortunately, none of the above bounds suffice for our purpose, and we prove the following result, which is tighter in the regime $s=\omega(t/\log t)$ and $s = o(t)$. \begin{thm}\label{t:bipartite} Let $t\geq s \geq 2$ be positive integers. Then $$c(K_{s,t}) \leq 40 (\sqrt{st \log s}+s+t).$$ \end{thm} Theorem~\ref{t:bipartite} additionally answers a question of Harvey and Wood~\cite{HarWooAverage15}. They have asked whether there exists a constant $\varepsilon > 0$ such that for every graph $H$ on $n$ vertices we have \begin{equation}\label{e:HW} c(H) \geq \varepsilon n \sqrt{d(H - S)}\end{equation} for some set $S \subseteq V(H)$ such that $|S| \leq \frac{n}{\varepsilon \log n}$. By Theorem~\ref{t:bipartite} the answer is negative. Indeed, if $s = \omega (t/\log t)$ then $d(K_{s,t}-S) \geq s/2$ for every set $S$ as above. Therefore, if additionally $s=o(t)$, then the bound given by Theorem~\ref{t:bipartite} is smaller than the bound in (\ref{e:HW}) by a factor of roughly $\sqrt{t/s}$. The rest of the paper is structured as follows. In Section~\ref{s:eppstein} we introduce the lemma of Eppstein~\cite{Eppstein10}, which will serve as our main tool, and prove several additional preliminary lemmas. We prove Theorem~\ref{thm:infty+} in Section~\ref{s:infty+}, and Theorem~\ref{thm:2r} in Section~\ref{s:2r}. In Section~\ref{s:lower} we prove a general lower bound on $c_{\infty}(lK_r)$ attained by a random construction and derive Theorems~\ref{thm:main}~a) and~\ref{thm:lower} from this bound.
In Section~\ref{s:tools} we introduce several additional tools we need for proving the upper bounds on $c_{\infty}(lK_r)$. In particular, we prove Theorem~\ref{t:bipartite}. In Section~\ref{s:main} we prove Theorems~\ref{thm:main}~b) and~\ref{thm:upper}. Section~\ref{s:conclude} contains the concluding remarks. \section{Blades, fans and Eppstein's lemma}\label{s:eppstein} In this section we define blades and fans and present a lemma of Eppstein~\cite{Eppstein10}, which will provide the framework for proving our results. We say that a pair $(G,S)$ is \emph{a blade} if $G$ is a graph and $S \subsetneq V(G)$. Given a blade $\mc{B}=(G,S)$ and a positive integer $k$, let ${\mathrm{Fan}}(\mc{B},k)$ or ${\mathrm{Fan}}(G,S,k)$ denote the graph obtained from $k$ copies of $G$ by identifying the corresponding copies of the vertices in $S$. For example, $\bar{K}_{s,t}$ can be considered as ${\mathrm{Fan}}(K_{s+1},S,t)$, where $S$ is a subset of vertices of $K_{s+1}$ of size $s$. It is easy to see that $$\lim_{k \to \infty} d({\mathrm{Fan}}(G,S,k)) = \frac{e(G)-e(G[S])}{v(G)-|S|},$$ and we define the \emph{density of a blade $\mc{B}=(G,S)$} as $$d(\mc{B}) = d(G,S) = \frac{e(G)-e(G[S])}{v(G)-|S|}. $$ We say that a blade $(G,S)$ is \emph{semiregular} if \begin{itemize} \item $G[S]$ is complete, \item $G \setminus S$ is connected, \item each vertex of $S$ has a neighbor in $ V(G) - S$. \end{itemize} We say that a semiregular blade $\mc{B}$ is \emph{regular} if ${\mathrm{deg}}(v) \geq d(G,S)$ for every $v \in V(G\setminus S)$. Given a graph $H$ and a blade $\mc{B}$, we say that $H$ is \emph{a minor of $\mc{B}$} if $H$ is a minor of ${\mathrm{Fan}}(\mc{B},k)$ for some $k$, and we say that $\mc{B}$ is \emph{$H$-minor free} otherwise. We are now ready to state the key lemma, which is proven in~\cite{Eppstein10} for general minor-closed classes of graphs. For convenience we state only a weaker version for classes of graphs with a single excluded minor.
\begin{lem}[Eppstein~\cite{Eppstein10}] \label{lem:formfan} Let $H$ be a graph. Then for any $\epsilon>0$ there exists a regular $H$-minor free blade $\mc{B}$ such that $d(\mc{B}) \geq c_\infty(H) - \epsilon$. \end{lem} (In~\cite{Eppstein10}, it is only shown that a semiregular blade as above exists. However, it is easy to see that if a blade $(G,S)$ satisfying the conclusion of the lemma is chosen so that $d(G,S)$ is maximum, and subject to that $v(G)$ is minimum, then $(G,S)$ is regular.) Essentially, Lemma~\ref{lem:formfan} allows us to restrict our attention to fans when proving upper bounds on $c_{\infty}(H)$. The following convenient corollary is immediately implied by Lemma~\ref{lem:formfan}. \begin{cor}\label{c:fanden} Let $H$ be a graph, and let $c\in \bb{R}$ be such that $d(\mc{B}) \leq c$ for every regular $H$-minor free blade $\mc{B}$. Then $c_{\infty}(H) \leq c$. Conversely, if $\mc{B}$ is an $H$-minor-free blade then $d(\mc{B}) \leq c_{\infty}(H).$ \end{cor} We finish this section by introducing additional notation and several easy, but useful, lemmas. Let $\mc{B}=(G,S)$ be a blade. For $S' \subseteq S$, we denote by $\mc{B}[S']$ the blade $(G \setminus (S-S'),S')$ obtained from $\mc{B}$ by deleting the vertices in $S - S'$. \begin{lem}\label{l:bladedensity} Let $\mc{B}=(G,S)$ be a blade, and let $S' \subseteq S$. Then $d(\mc{B}[S']) \geq d(\mc{B}) - |S|+|S'|$. \end{lem} \begin{proof} We have $$ (e(G)-e(G[S]))-(e(G - (S - S'))-e(G[S'])) \leq (|S|-|S'|)(v(G)-|S|), $$ implying the desired inequality by the definition of the blade density. \end{proof} \begin{lem}\label{l:bladetau} Let $(G,S)$ be a semiregular blade, and let $H$ be a graph. If $|S| \geq \tau(H)$ then $H$ is a minor of $(G,S)$. \end{lem} \begin{proof} Note that $\bar{K}_{|S|,k}$ is a minor of $(G,S)$ for every $k$. On the other hand, $H$ is isomorphic to a subgraph of $\bar{K}_{\tau(H),v(H) - \tau(H)}$. The desired conclusion follows.
\end{proof} Showing that a graph $G$ contains a graph $H$ as a minor typically involves constructing a model of $H$ in $G$, defined as follows. We say that a map $\mu$ is a \emph{blueprint of $H$ in $G$} if $\mu$ maps vertices of $H$ to disjoint subsets of vertices of $G$, called \emph{bags of $\mu$}. We will use $\mu(H)$ to denote $\cup_{v \in V(H)}\mu(v)$. We say that a blueprint is a \emph{premodel} if for every edge $\{u,v\} \in E(H)$ there exists an edge of $G$ with one end in $\mu(u)$ and another in $\mu(v)$. Finally, we say that a premodel is a \emph{model} if $G[\mu(v)]$ is connected for every $v \in V(H)$. The following useful observation is well known. \begin{obs}\label{o:model} A graph $H$ is a minor of a graph $G$ if and only if there exists a model of $H$ in $G$. \end{obs} Observation~\ref{o:model} is used, in particular, in the proofs of the remaining lemmas of this section. \begin{lem}\label{l:subblade} Let $\mc{B}=(G,S)$ be a blade, let $H_1, H_2, \ldots, H_t$ be vertex disjoint graphs, and let $H$ be their union. Then the following are equivalent: \begin{enumerate} \item $H$ is a minor of $\mc{B}$, and \item there exist disjoint $S_1,\ldots, S_t \subseteq S$ such that $H_i$ is a minor of $\mc{B}[S_i]$ for every $1 \leq i \leq t$. \end{enumerate} \end{lem} \begin{proof} We start by showing that the first condition implies the second. By Observation~\ref{o:model}, there exists a model $\mu$ of $H$ in ${\mathrm{Fan}}(G,S,k)$ for some positive integer $k$. Equivalently, there exist models $\mu_i$ of $H_i$ in ${\mathrm{Fan}}(G,S,k)$ for $1 \leq i \leq t$ such that $\mu_i(H_i) \cap \mu_j(H_j) = \emptyset$ for all $i \neq j$. Let $S_i = \mu_i(H_i) \cap S$; then the second condition clearly holds. The proof of the other implication is similar. \end{proof} \begin{lem}\label{l:minorcomplete} Let $\mc{B}=(G,S)$ be a semiregular blade. If $K_r$ is a minor of $\mc{B}$ then $K_r$ is a minor of $G$.
\end{lem} \begin{proof} Let $\mu$ be a model of $K_r$ in ${\mathrm{Fan}}(G,S,k)$ for some positive integer $k$. If $|S| \geq r$ then $K_r$ is a subgraph of $G[S]$, and so the lemma holds. Otherwise, there exists a $v \in V(K_r)$ such that $\mu(v) \subseteq V(G') \setminus S$ for some copy $G'$ of $G$ in ${\mathrm{Fan}}(G,S,k)$. Then $\mu(u) \cap V(G') \neq \emptyset$ for every $u \in V(K_r)$, and it is easy to see that the restriction of $\mu$ to $V(G')$ is a model of $K_r$ in $G'$. \end{proof} \section{Proof of Theorem~\ref{thm:infty+}}\label{s:infty+} Let $c= \max\{c_\infty(H_2), c_{\infty}(H_1) + \tau(H_2)\}$. By Corollary~\ref{c:fanden} it suffices to show that $d(\mc{B}) \leq c$ for every $H$-minor free regular blade $\mc{B}=(G,S)$. We number the vertices in $S=\{v_1,v_2,\dots,v_s\}$, where $s=|S|$. Let $S_i=\{v_1,\dots,v_i\}$, $\overline{S_i}=S-S_i$. Choose $i$ minimum such that $H_1$ is a minor of $\mc{B}[S_i]$. (If no such $i$ exists, then $\mc{B}$ is $H_1$-minor-free, and thus $d(\mc{B}) \leq c_\infty(H_1) \leq c$ by Corollary~\ref{c:fanden}, as desired.) Thus $\mc{B}[\overline{S_i}]$ is $H_2$-minor-free by Lemma~\ref{l:subblade}, and therefore $d(\mc{B}[\overline{S_i}]) \leq c_\infty(H_2)$ by Corollary~\ref{c:fanden}. In particular, if $i=0$ then $d(G,S) \leq c_{\infty}(H_2) \leq c$, as desired. Thus we assume $i>0$. By Lemma~\ref{l:bladetau} we have \begin{equation}\label{e:h2} s-i \leq \tau(H_2)-1. \end{equation} By the minimality of $i$, $\mc{B}[S_{i-1}]$ is $H_1$-minor-free. Therefore $c_\infty(H_1) \geq d(\mc{B}[S_{i-1}])$. By Lemma~\ref{l:bladedensity} and (\ref{e:h2}), we have $$d(G,S) \leq d(\mc{B}[S_{i-1}]) + s-i+1 \leq c_\infty(H_1) + \tau(H_2),$$ as desired. \section{Proof of Theorem~\ref{thm:2r}}\label{s:2r} The following classical result of Erd\H{o}s and Gallai implies that \begin{equation}\label{e:singlecycle} c_\infty(C_l) \leq \frac{l-1}2 \end{equation} for every $l \geq 3$. \begin{thm}[Erd\H{o}s and Gallai~\cite{ErdGal59}]\label{thm:ErdGal} Let $l \geq 3$ be an integer and let $G$ be a graph with $n$ vertices and more than $(l-1)(n-1)/2$ edges. Then $G$ contains a cycle of length at least $l$.
\end{thm} We prove Theorem~\ref{thm:2r} by induction on $v(H)$. By (\ref{e:singlecycle}) we may assume that $H$ has at least two components. Let \[ d_0=\begin{cases} 2m-\frac{1}{2}, \text{ if $H=mC_4$;}\\ \frac{v(H)+{\mathrm{odd}}(H)}{2}-1, \text{ otherwise.} \end{cases} \] By Corollary~\ref{c:fanden} it suffices to show that $d(G,S) \leq d_0$ for every $H$-minor-free regular blade $\mc{B}=(G,S)$. Let $C$ be a longest cycle in $H$, let $l = v(C)$, and let $H$ be the disjoint union of $C$ and a graph $H_1$. If $H_1$ is a minor of $G \setminus S$ then $(G,S)$ is $C$-minor-free, and so by (\ref{e:singlecycle}) we have $d(G,S) \leq \frac{(l-1)}{2} \leq d_0$, as desired. Thus $H_1$ is not a minor of $G \setminus S$, and by the induction hypothesis $$d(G\setminus S)\leq\frac{v(H_1)+{\mathrm{odd}}(H_1)}{2}-\frac{1}{2}.$$ Suppose that $|S| \leq (l-1)/2$. Then \begin{align*} d(G,S)&\leq |S|+ d(G\setminus S) \leq |S|+ \frac{v(H_1)+{\mathrm{odd}}(H_1)}{2}-\frac{1}{2} \\ &\leq \frac{v(H_1)+v(C)+{\mathrm{odd}}(H_1)}{2}-1 \leq \frac{v(H)+{\mathrm{odd}}(H)}{2}-1 \leq d_0. \end{align*} Thus we assume that $|S| \geq l/2$. Suppose next that there exists $v \in V(G) - S$ such that $v$ is the only vertex in $V(G)-S$ adjacent to a vertex in $S$. Then $e(G) - e(G[S]) \leq e(G \setminus S) +|S|$. If $|S| \geq |V(G)-S|$ then \begin{align*} d(G,S)&\leq d(G\setminus S) + \frac{|S|}{v(G)-|S|} \\ &\leq \frac{v(G)-|S|-1}{2} +\frac{|S|}{v(G)-|S|} \\ &\leq |S| \leq \tau(H)-1 \leq d_0, \end{align*} where the second-to-last inequality uses Lemma~\ref{l:bladetau}. Otherwise, \begin{align*} d(G,S)&\leq d(G\setminus S) + \frac{|S|}{v(G)-|S|} \leq d(G\setminus S) + 1 \\& \leq \frac{v(H_1)+{\mathrm{odd}}(H_1)}{2}+\frac{1}{2} \leq \frac{v(H)+{\mathrm{odd}}(H)}{2}-1 \leq d_0. \end{align*} Thus we may assume that there exist distinct $u_1,u_2 \in S$ and distinct $v_1,v_2\in V(G)\setminus S$ such that $u_1v_1,u_2v_2\in E(G)$. Let $S' \subseteq S$ be such that $u_1,u_2 \in S'$, and let $k=|S'|$.
We show that $C_{2k+2}$ is a minor of $\mc{B}[S']$. Let $S'=\{u_1,u_2,\ldots,u_k\}$. We say that a path $P$ in $G$ is an \emph{$S'$-jump} if both ends of $P$ are in $S'$, and $P$ is otherwise disjoint from $S$. By taking a path joining $v_1$ and $v_2$ in $G \setminus S$ we obtain an $S'$-jump $P_1$ with ends $u_1$ and $u_2$ and at least $3$ edges. If $k=2$, then taking the union of two copies of $P_1$ in ${\mathrm{Fan}}(\mc{B}[S'],2)$ we obtain a cycle of length at least six, as desired. Thus we assume $k \geq 3$. Let $v_3$ be a neighbor of $u_3$ in $V(G) - S$, and assume without loss of generality that $v_3 \neq v_2$. Let $P_2$ be an $S'$-jump of length at least three with ends $u_2$ and $u_3$. For $i=3,\ldots,k$, let $P_i$ be an $S'$-jump of length at least two with ends $u_i$ and $u_{i+1}$, where $u_{k+1}=u_1$ by convention. By taking the union of copies of the paths $P_1,\ldots,P_k$, each chosen from a separate copy of $G$, we obtain a cycle of length at least $2k+2$ in ${\mathrm{Fan}}(\mc{B}[S'],k)$, as desired. We finish the proof by considering two cases. Suppose first that $H=mC_4$, and let $S'$ with $|S'|=2$ be as in the previous paragraph. Then $H_1=(m-1)C_4$ is not a minor of $\mc{B}[S-S']$, as $C_4$ is a minor of $\mc{B}[S']$ and $\mc{B}$ is $H$-minor-free, and therefore by Corollary~\ref{c:fanden}, Lemma~\ref{l:bladedensity} and the induction hypothesis we have \begin{equation*} d(\mc{B}) \leq c_\infty(H_1)+|S'| \leq 2(m-1)-\frac{1}{2} +2 = d_0. \end{equation*} Thus we assume that at least one cycle in $H$ has length not equal to four. If $l\geq5$, then by the claim above there exists $S' \subseteq S$ such that $|S'| \leq \lceil l/2\rceil -1 = (v(C)+{\mathrm{odd}}(C))/2 -1$, and $C$ is a minor of $\mc{B}[S']$. Again it follows that \begin{align*} d(\mc{B}) &\leq c_\infty(H_1)+|S'| \leq\frac{v(H_1)+{\mathrm{odd}}(H_1)}{2}-\frac{1}{2} +|S'| \\ & \leq\frac{v(H)+{\mathrm{odd}}(H)}{2}-\frac{3}{2} < d_0. \end{align*} It remains to consider the case where $l \leq 4$ and $H$ contains at least one cycle of length not equal to four.
It follows that $c_\infty(H_1)\leq\frac{v(H_1)+{\mathrm{odd}}(H_1)}{2}-1$ by the induction hypothesis, and choosing $S' \subseteq S$ with $|S'|=2$, we once again have \begin{equation*} d(\mc{B}) \leq c_\infty(H_1)+|S'| \leq\frac{v(H_1)+{\mathrm{odd}}(H_1)}{2} +1 \leq\frac{v(H)+{\mathrm{odd}}(H)}{2}-1 =d_0, \end{equation*} finishing the proof. \section{A lower bound on $c_\infty(lK_r)$}\label{s:lower} Our constructions of dense blades with no $lK_r$ minor are random. Let ${\bf G}(a,b,p,q)$ be a random graph with $V({\bf G}(a,b,p,q))=A \cup B$, where $A$ and $B$ are disjoint sets with $|A|=a$, $|B|=b$, the vertices of $B$ form a clique, and the edges are chosen independently at random so that every edge with both ends in $A$ is present with probability $p$ and an edge joining a vertex in $A$ to a vertex in $B$ is present with probability $q$. The next lemma is a technical variation of a computation which, to the best of our knowledge, was first used by Bollob\'as, Catlin and Erd\H{o}s~\cite{BCE80} to compute the size of the largest complete minor in a random graph. \begin{lem}\label{l:construction} Let positive integers $a,b$ and $r$, and reals $\alpha,\beta >0$ be such that $a+b \leq r^2$, $r \leq 2b$ and\begin{equation} \alpha(r-b)b(\log (r-b)-\log\log r -3) \geq (\alpha a+\beta b)^2. \label{e:ab2/r2} \end{equation} Then $$\Pr [K_{r} \mathrm{\;is \;a \:minor \:of\;} {\bf G}(a,b,1-e^{-\alpha},1-e^{-\beta}) ] \leq e^{-2r\log{r}}.$$ \end{lem} \begin{proof} We denote the random graph ${\bf G}(a,b,1-e^{-\alpha},1-e^{-\beta})$ by $G$ for brevity. There are at most $$(a+b)^{r} {\leq} r^{2r} = e^{2r\log{r}}$$ blueprints $\mu$ of $K_r$ in $G$. Thus it suffices to show that the probability that a fixed blueprint $\mu$ is a premodel of $K_r$ is at most $e^{-4r\log{r}}.$ Let $\mc{K}_a$ be the collection of all bags of $\mu$ which lie completely in $A$, and let $\mc{K}_b$ be the collection of the remaining bags.
Let $\mc{K}_a = \{X_1,X_2,\ldots,X_s\}$, and let $x_i = |X_i|$ for $1 \leq i \leq s$. Note that $s\geq r-b$. Let $\mc{K}_b = \{U_1,U_2,\ldots,U_{r-s}\}$, and let $Y_i = U_i \cap A$, $Z_i = U_i \cap B$, $y_i = |Y_i|, z_i = |Z_i|$ for $1 \leq i \leq r-s$. Note that the probability that $X_i$ and $X_j$ are adjacent in $G$ is $1-e^{-\alpha x_ix_j}$, and the probability that $X_i$ is adjacent to $U_j$ is $1-e^{-\alpha x_iy_j-\beta x_iz_j}$. Suppose first that $s > b$. We upper bound the probability that $\mu$ is a premodel of $K_r$ by the probability that the bags in $\mc{K}_a$ are pairwise adjacent, which is $$ \prod_{1 \leq i < j \leq s}(1-e^{-\alpha x_ix_j}) \leq \exp\left(-\sum_{1 \leq i < j \leq s}e^{-\alpha x_ix_j}\right). $$ Thus it suffices to show that $$\sum_{1 \leq i < j \leq s}e^{-\alpha x_ix_j} \geq 4r\log{r}.$$ As $b \geq r-b$, the condition (\ref{e:ab2/r2}) implies that \begin{equation}\label{e:a2/b2} b^2(\log b-\log\log r - 3) \geq \alpha a^2. \end{equation} By the AM-GM inequality \begin{align*} \sum_{1 \leq i < j \leq s}e^{-\alpha x_ix_j} &\geq \binom{s}{2}\exp\left(-\frac{\alpha}{\binom{s}{2}}\sum_{1 \leq i < j \leq s}x_ix_j \right) \geq \frac{b^2}{2}\exp\left(-\alpha \left(\frac{a}{b} \right)^2 \right) \\ & \stackrel{(\ref{e:a2/b2})}{\geq} \frac{b^2}{2}\exp\left(-\log b+\log\log r + 3\right)\geq \frac{ 20b \log r}{2}{\geq} 4r\log{r}, \end{align*} as desired. Thus we assume that $s \leq b$. Now we upper bound the probability that every set in $\mc{K}_a$ is adjacent to every set in $\mc{K}_b$. Repeating the beginning of the argument in the previous case, we see that it suffices to show that $$\sum_{1 \leq i \leq s} \sum_{1 \leq j \leq r-s}e^{-x_i (\alpha y_j + \beta z_j)} \geq 4r\log{r}.$$ Let $x =\sum_{1 \leq i \leq s} x_i$.
Applying the AM-GM inequality as before we obtain \begin{align*} \sum_{1 \leq i \leq s} &\sum_{1 \leq j \leq r-s}e^{-x_i (\alpha y_j + \beta z_j)} \\ &\geq s(r-s) \exp\left(-\frac{1}{s(r-s)}\sum_{1 \leq i \leq s} x_i \left(\sum_{1 \leq j \leq r-s} \alpha y_j + \sum_{1 \leq j \leq r-s} \beta z_j \right)\right) \\ &\geq b(r-b) \exp\left(-\frac{x(\alpha(a-x)+\beta b)}{b(r-b)} \right) \geq b(r-b) \exp\left(-\frac{(\alpha a+\beta b)^2}{\alpha b(r-b)} \right)\\&\stackrel{(\ref{e:ab2/r2})}{\geq} b(r-b)\exp\left(-\log (r-b)+\log\log r + 3\right) = e^3 b \log r\geq 4r\log r, \end{align*} as desired. \end{proof} \begin{thm}\label{t:lKrLower} Let $\lambda$ be Thomason's constant. There exists $\xi>0$ so that for every $0\leq \varepsilon \leq 1/2$, $r \gg 1/\varepsilon$ and $ \sqrt{\log r}/\varepsilon \leq l \leq \log r$, we have \begin{equation} \label{e:lKrLower2} c_\infty(lK_r) \geq lr + (1-\varepsilon)\frac{\lambda^2r\log r}{4l} -lr\exp\left(-\frac{\xi\varepsilon\log{r}}{l}\right). \end{equation} \end{thm} \begin{proof} Consider $a,b,\alpha$ and $\beta$ satisfying the conditions of Lemma~\ref{l:construction}. Let $G={\bf G}(a,l(b+1)-1,1-e^{-\alpha},1-e^{-\beta})$ be a random graph, and let the vertex sets $A$ and $B$ be as in the definition of this random graph. Consider the blade $\mc{B}=(G,B)$. If $lK_r$ is a minor of $\mc{B}$ then by Lemma~\ref{l:subblade} there exists $B' \subseteq B$ with $|B'| \leq \lfloor |B|/l \rfloor = b$ such that $K_r$ is a minor of $\mc{B}[B']$. From Lemma~\ref{l:minorcomplete} it follows that $K_r$ is a minor of $G[A \cup B']$. However, by Lemma~\ref{l:construction} the probability that $K_r$ is a minor of $G[A \cup B']$ for some $B' \subseteq B$ with $|B'|=b$ is at most $$|B|^{b}\exp(-2r\log r) \leq (lr)^r \exp(-2r\log r) \leq \exp(r(\log\log r -\log r)) \leq e^{-r}.$$ Thus the probability that $lK_r$ is a minor of $\mc{B}$ is at most $e^{-r}$.
Let $$D(a,b,\alpha,\beta) = \frac{a}{2}(1-e^{-\alpha}) + lb(1-e^{-\beta}).$$ An easy computation shows that $$\bb{E}[d(\mc{B})] = \frac{(a-1)}{2}(1-e^{-\alpha}) + (l(b+1)-1)(1-e^{-\beta}) \geq D(a,b,\alpha,\beta) + 1. $$ As $$\Pr [d(\mc{B}) \geq \bb{E}[d(\mc{B})] -1 ] \geq \frac{1}{a+l(b+1)} \geq \frac{1}{r^2} \geq e^{-r},$$ it follows that there exists an $lK_r$-minor-free blade $\mc{B}$ with density at least $D(a,b,\alpha,\beta)$, i.e. $c_\infty(lK_r) \geq D(a,b,\alpha,\beta)$. It remains to choose $a,b,\alpha$ and $\beta$ satisfying the conditions of Lemma~\ref{l:construction} so that $$D(a,b,\alpha,\beta) \geq lr + (1-2\varepsilon)\frac{\lambda^2r\log r}{4l} -lr\exp\left(-\frac{2\xi\varepsilon\log{r}}{l}\right).$$ (Note that we replaced $\varepsilon$ by $2\varepsilon$ for later convenience.) Let the constant $\alpha>0$ be chosen to maximize $\frac{1-e^{-\alpha}}{2\sqrt{\alpha}}$, i.e. $\lambda=\frac{1-e^{-\alpha}}{2\sqrt{\alpha}}$, and let \begin{align*} &\gamma=\frac{\lambda(1-\varepsilon)}{2}, &\sigma = \frac{\gamma r\log{r}}{l}, \\&k = \frac{\gamma^2r\log{r}}{l^2} =\frac{\gamma\sigma}{l}, &b = \lceil r-k \rceil, \\ &\beta =\frac{\varepsilon\sqrt{\alpha}\sigma}{2r}, &a=\left\lceil \frac{(1-\varepsilon)\sigma}{\sqrt{\alpha}} \right\rceil. \end{align*} Note that by the choice of $l$ we have \begin{equation} \gamma^2 \frac{r}{\log r} \leq k \leq \gamma^2\varepsilon^2 r \leq \frac{\varepsilon r}{2}. \end{equation} Let us first verify that $a,b,\alpha$ and $\beta$ satisfy (\ref{e:ab2/r2}). For $r \gg 1/\varepsilon$, we have \begin{align*} \alpha&(r-b)b(\log (r-b)-\log\log r -3) \\ &\geq (1-\varepsilon/2)^2\alpha rk\log r \\&=\left(\sqrt{\alpha}(1-\varepsilon/{2})\sigma \right)^2. \end{align*} Thus it suffices to show that $\alpha a + \beta b \leq \sqrt{\alpha}(1-\varepsilon/{2})\sigma$, which is immediate from the definitions. We now return to the computation of $D(a,b,\alpha,\beta)$ for $a,b,\alpha$ and $\beta$ as above.
Let $\xi = \sqrt{\alpha}\lambda/16$. We have \begin{align*} D(a,b,\alpha,\beta) &\geq \frac{(1-\varepsilon)\sigma}{2\sqrt{\alpha}}(1-e^{-\alpha}) + l\left(r-\frac{\gamma\sigma}{l}\right)(1-e^{-\beta}) \\ &\geq 2\gamma\sigma +lr - \gamma\sigma -lr\exp\left(-\frac{\varepsilon\sqrt{\alpha}\lambda(1-\varepsilon)\log{r}}{4l}\right) \\ &= lr + \left(\frac{\lambda(1-\varepsilon)}{2}\right)^2\frac{r \log r}{l} -lr\exp\left(-\frac{\varepsilon\sqrt{\alpha}\lambda(1-\varepsilon)\log{r}}{4l}\right) \\ &\geq lr + (1-2\varepsilon)\frac{\lambda^2r\log r}{4l} -lr\exp\left(-\frac{2\xi\varepsilon\log{r}}{l}\right), \end{align*} which finishes the proof of the theorem. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main} a)] Let $\xi$ be as in Theorem~\ref{t:lKrLower}, and let $2 \sqrt{\log r} \leq l \leq \frac{\xi\log r}{2\log\log r}$. Thus $l = c\log r/\log\log r$ for some $c \leq \xi/2$. It suffices to show that $c_\infty(lK_r) \geq lr$. By Theorem~\ref{t:lKrLower} applied with $\varepsilon =1/2$ we have \begin{align*} c_\infty(lK_r) - lr &\geq \frac{\lambda^2r\log r}{8l} -lr\exp\left(-\frac{\xi\log{r}}{2l}\right) \\ &= \frac{\lambda^2}{8c}r\log\log r - \frac{cr\log r}{\log\log r}e^{-\frac{\xi\log\log{r}}{2c}} \\ &\geq \frac{\lambda^2}{8c}r\log\log r - \frac{cr}{\log\log r} \geq 0, \end{align*} as desired. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:lower}] The inequality (\ref{e:lKrLower2}) gives the required bound, as long as we show that for every $0<\varepsilon \leq 1$ there exists $\delta >0$ so that for $l \leq \delta\log r/\log\log r$ we have $$ lr\exp\left(-\frac{\xi\varepsilon\log{r}}{l}\right) \leq \varepsilon \frac{r\log r}{l}. $$ Let $\delta = \operatorname{min} \{\xi\varepsilon,\sqrt{\varepsilon}\}$. 
Then \begin{align*}\exp&\left(-\frac{\xi\varepsilon\log {r}}{l}\right) \leq \exp\left(-\frac{\xi\varepsilon\log\log r}{\delta}\right)\leq \frac{1}{\log r} \leq\frac{\varepsilon (\log\log r)^2 }{\delta^2\log r} \leq \varepsilon\frac{\log r}{l^2}, \end{align*} as desired. \end{proof} \section{Hefty graphs}\label{s:tools} In this section we introduce the tools which will be subsequently used to upper bound $c_\infty(lK_r)$. These tools are built around the concept of hefty graphs. We say that a graph $H$ is \emph{hefty} if $H=K_2$, or ${\mathrm{deg}}(v) \geq 0.65|V(H)|$ for every $v \in V(H)$. (Our choice of constant $0.65$ is motivated by Lemma~\ref{lem:ReedWood} below.) Classes of graphs with similar properties are considered in many proofs of upper bounds on the extremal function and the following lemmas demonstrate some of the ways in which they are used. The first lemma allows one to replace any graph by a hefty graph at a cost of a constant fraction of density. It is a variant of a result first proved by Mader~\cite{Mader68}, and appears in a slightly stronger form than the one stated below in Reed and Wood~\cite{ReeWoo15}. \begin{lem}\label{lem:ReedWood} Let $G$ be a graph such that $d(G)\neq 0$. Then there exists a hefty minor $H$ of $G$ such that $|V(H)| \geq d(G)/2$. \end{lem} Next lemma shows that if a hefty graph $G$ contains a small model of a graph $H$ and a graph $H'$ is obtained from $H$ by adding a few edges then $G$ contains a model of $H'$. We say that a set $F$ of pairs of vertices of $G$ is a \emph{completion} of a blueprint $\mu$ of a graph $H$ in a graph $G$ if $\mu$ is a model of $H$ in a graph obtained from $G$ by adding $F$ to $E(G)$. The \emph{defect} of a blueprint $\mu$ is the minimum size of a completion of $\mu$. \begin{lem}\label{lem:heftymodel2} Let $G$ be a hefty graph, and let $\mu$ be a blueprint of a graph $H$ in $G$ with defect at most $0.3|V(G)|-|\mu(H)|$. Then $\mu$ extends to a model of $H$ in $G$. 
\end{lem} \begin{proof} We prove the lemma by induction on the defect $c$ of $\mu$. The base case $c=0$ is immediate. For the induction step, let $F$ be a completion of $\mu$ with $|F|=c\geq 1$ and consider an arbitrary $f=\{u,v\} \in F$. Note that $u$ and $v$ have at least $0.3|V(G)|$ common neighbors in $G$, and so there exists $w \in V(G) - \mu (H)$ adjacent to both $u$ and $v$. Let $x \in V(H)$ be such that $u \in \mu(x)$. Adding $w$ to $\mu(x)$ we obtain a blueprint $\mu'$ of $H$ in $G$ such that $F \setminus \{f\}$ is a completion of $\mu'$. By the induction hypothesis $\mu'$ extends to a model of $H$ in $G$ as desired. \end{proof} As a first application of the above lemmas we prove Theorem~\ref{t:bipartite}. The technical part of the proof is contained in the following lemma. \begin{lem} \label{l:heftyminor3} Let $G$ be a hefty graph on $a$ vertices. Let $s,t,k,l$ be positive integers such that $sk+tl \leq 3a/20$ and $(l-2)k-2 \geq \log_2 s$. Then $K_{s,t}$ is a minor of $G$. \end{lem} \begin{proof} Let $d=0.65$. For every $v \in V(G)$ and a set $X \subseteq V(G)\setminus\{v\}$ of size $l$ chosen uniformly at random, the probability that $v$ has no neighbor in $X$ is at most $(1 - d)^l$. Thus for a set $X$ as above the expected number of vertices in $V(G)-X$ with no neighbor in $X$ is at most $a(1-d)^l$. We say that a set $X$ is \emph{good} if at most $3a(1-d)^l$ vertices in $V(G)-X$ have no neighbor in $X$. By Markov's inequality the probability that $X$ is good is at least $2/3$. Given a good set $X$, if a set $Y$ of size $k$ is selected from $V(G)-X$ uniformly at random, then the probability that no vertex of $Y$ is adjacent to a vertex of $X$ is at most $(4(1-d)^{l})^k < (1/2)^{(l-2)k}$. We now select disjoint subsets $X_1,X_2,\ldots,X_{2t},Y_1,Y_2,\ldots,Y_{s}$ of $V(G)$ such that $|X_i|=l$, $|Y_j|=k$ uniformly at random. We say that a pair $(i,j)$ is \emph{fulfilled} if there exists an edge $\{u,v\} \in E(G)$ with $u \in X_i$, $v \in Y_j$.
We say that $X_i$ is \emph{perfect} if $(i,j)$ is fulfilled for every $j$, and we say that $X_i$ is \emph{flawed} otherwise. By the calculations above the probability that $X_i$ is good but flawed is at most $s(1/2)^{(l-2)k} \leq 1/4$. Therefore the probability that $X_i$ is perfect is at least $1/2$. Thus there exists a choice of subsets as above such that at least $t$ of the subsets $X_1,X_2,\ldots,X_{2t}$ are perfect. If, say, $X_1,\ldots,X_t$ are these subsets then $Y_1,Y_2,\ldots,Y_{s},X_1,X_2,\ldots,X_{t}$ form a blueprint $\mu$ of $K_{s,t}$ which can be extended to a model by Lemma~\ref{lem:heftymodel2}, as $2|\mu(K_{s,t})| \leq 2(sk+tl) \leq 3a/10$. \end{proof} \begin{proof}[Proof of Theorem~\ref{t:bipartite}] Let $d= 40 (\sqrt{st \log s}+s+t)$, and let $G$ be a graph with $d(G) \geq d$. By Lemma~\ref{lem:ReedWood} there exists a hefty minor $H$ of $G$ with $a=|V(H)|\geq d/2$. Let $p = \sqrt{st \log_2 s}$, $k =\lceil p/s\rceil+2$, and $l =\lceil p/t\rceil+2$. Then we have \begin{align*} (l-2)k- 2&\geq (k-2)(l-2) \geq \frac{p^2}{st}=\log_2 s, \qquad \mathrm{and}\\ sk+tl &< s(p/s+3)+ t(p/t+3) \\&= 2\sqrt{st \log_2 s}+3s+3t \leq 3d/40 \leq 3a/20. \end{align*} Thus $s,t,k$ and $l$ satisfy the conditions of Lemma~\ref{l:heftyminor3}. It follows that $K_{s,t}$ is a minor of $H$ as desired. \end{proof} Next we prove a counterpart of Lemma~\ref{l:construction}. We will show that if a graph has a structure similar to that of the random examples of $K_r$ minor-free graphs considered in that lemma, but is somewhat denser, then it has a $K_r$ minor. To make the above statement precise we need a definition. We say that a partition $(A,B)$ of the vertices of a graph $G$ is \emph{$(a,b,\delta)$-semicomplete} if $|A|=a$, $|B|=b$, $G[A]$ is hefty, $G[B]$ is complete and every $v \in B$ has at least $(1 -\delta)a$ neighbors in $A$. We say that $G$ is \emph{$(a,b,\delta)$-semicomplete} if $V(G)$ admits an $(a,b,\delta)$-semicomplete partition.
We will investigate the range of parameters which guarantee the presence of a $K_r$ minor in an $(a,b,\delta)$-semicomplete graph. First, we need an easy lemma. \begin{lem}\label{l:easyrandom} Let $G$ be a graph, let $n=v(G)$, let $d=e(G)/\binom{n}{2}$ and let $X \subseteq V(G)$, $|X|=k$ be chosen uniformly at random. Then $$\Pr \left[e(G[X]) \geq \left(d-\frac 12\right)\binom{k}{2}\right] \geq \frac{1}{2}.$$ \end{lem} \begin{proof} Note that the expected value of $e(G[X])$ is $d \binom{k}{2}$. If $d < 1/2$ the statement is trivial. Otherwise, applying Markov's inequality to the random variable $\binom{k}{2}-e(G[X])$ gives $\Pr[e(G[X]) \leq (2d-1)\binom{k}{2}] \leq 1/2$, and the lemma follows, as $2d-1 \geq d-\frac12$. \end{proof} We are now ready to prove the first of the main results on minors in semicomplete graphs. \begin{lem}\label{l:semicomplete1} There exists $\varepsilon >0$ satisfying the following. Let $a,k,r$ be positive integers and $\delta>0$ be real such that \begin{align} k \cdot \max \left\{\sqrt{\log k},-\frac{\log r}{\log{\delta}} \right\} < \varepsilon a. \label{e:condition1} \end{align} Then every $(a,r-k,\delta)$-semicomplete graph has a $K_r$ minor. \end{lem} \begin{proof} Let $(A,B)$ be an $(a,r-k,\delta)$-semicomplete partition of vertices of a graph $G$. Let $0.05 \leq c \leq 0.1$ be such that $s=ca/k$ is an integer. We say that $X \subseteq A$ with $|X|=s$ is \emph{bad} if some vertex of $B$ has no neighbors in $X$, and \emph{good} otherwise. Then the probability that a set $X$ chosen uniformly at random is bad is at most $$r \delta^s \leq r\delta^{\frac{a}{20k}} \leq \frac{1}{3},$$ where the last inequality follows from (\ref{e:condition1}) when $\varepsilon$ is sufficiently small. We now choose disjoint subsets $X_1,X_2,\ldots,X_{3k},Z$ of $A$ such that $|X_i|=s$, $|Z|=ks$ uniformly at random. By the computation above, with probability greater than $1/2$, at least $k$ of the sets $X_1,X_2,\ldots,X_{3k}$ are good.
By Lemma~\ref{l:easyrandom} with probability at least $1/2$ we have $d(G[Z]) \geq 0.15\cdot(a/20 -1) \geq a/200.$ It follows that for some choice as above, $X_1,\ldots,X_k$ are good and $d(G[Z]) \geq a/200.$ By (\ref{e:condition1}) and (\ref{e:Thomason}) if $\varepsilon$ is sufficiently small then there exists a model $\mu$ of $K_k$ in $G[Z]$. Assume for convenience that $V(K_k)=\{1,2,\ldots,k\}$, and extend $\mu$ to a blueprint $\mu'$ of $K_k$ in $G[A]$ by adding $X_i$ to $\mu(i)$. Then $|\mu'(K_k)| \leq 2ca \leq 0.2a$ and the defect of $\mu'$ is at most $ca$. By Lemma~\ref{lem:heftymodel2} the blueprint $\mu'$ extends to a model $\mu''$ of $K_k$ in $G[A]$, and by the choice of $X_1,\ldots,X_k$ every vertex in $B$ has a neighbor in $\mu''(i)$ for every $i$. Therefore adding each vertex of $B$ as a new bag to $\mu''$ produces a model of $K_r$ in $G$, as desired. \end{proof} The next lemma differs from Lemma~\ref{l:semicomplete1} by the restriction on parameters and the construction of the model of $K_r$. \begin{lem}\label{l:semicomplete2} There exists $\varepsilon >0$ satisfying the following. Let $a,r\geq k \geq 2$ be positive integers such that \begin{equation}\label{e:condition4} \max\{r,\sqrt{rk\log{r}}\} < \varepsilon a. \end{equation} Then every $(a,r-k,0.8)$-semicomplete graph has a $K_r$ minor. \end{lem} \begin{proof} Let $(A,B)$ be an $(a,r-k,0.8)$-semicomplete partition of vertices of a graph $G$. As in the proof of Lemma~\ref{l:semicomplete1} we can find $Z \subseteq A$ such that $d(G[Z]) \geq a/200$ and $|Z| \leq a/10$. (In fact, the constants can be significantly improved, if needed.) By Theorem~\ref{t:bipartite} and (\ref{e:condition4}) if $\varepsilon$ is sufficiently small then $G[Z]$ contains a model of $K_{k,r}$ and thus a model $\mu$ of $\bar{K}_{k,r-k}$. Let the vertices of the independent set of $\bar{K}_{k,r-k}$ be $v_1,v_2,\ldots,v_{r-k}$ and let $B=\{u_1,u_2,\ldots,u_{r-k}\}$.
As every vertex in $B$ has at least $a/5$ neighbors in $A$ and $|B| \leq a/10$, there exist distinct $x_1,\ldots,x_{r-k}$ in $A \setminus \mu(\bar{K}_{k,r-k})$ such that $x_i$ is adjacent to $u_i$. By Lemma~\ref{lem:heftymodel2} the model $\mu$ extends to a model $\mu'$ of $\bar{K}_{k,r-k}$ in $G[A]$ such that $x_i \in \mu'(v_i)$ for $1 \leq i \leq r-k$. Adding $u_i$ to $\mu'(v_i)$ for each $i$ produces the desired model of $K_r$ in $G$. \end{proof} \section{Proof of Theorem~\ref{thm:main} b) and Theorem~\ref{thm:upper}}\label{s:main} We start this section by introducing a crucial lemma which will allow us to apply the results of the previous section. Recall that by Lemma~\ref{lem:ReedWood} every graph can be replaced with a hefty minor while losing only a constant fraction of the density. Given a blade $(G,S)$, we would like to apply this lemma to the graph $G - S$ while controlling the loss of the density of the blade. We can do this if we first ensure that every vertex of $G-S$ has a large number of neighbors in $S$. This is accomplished by the next lemma. First, let us recall some standard definitions, which are used in the proof. A \emph{separation} of a graph $G$ is a pair $(A,B)$ such that $A \cup B = V(G)$ and no edge of $G$ has one end in $A-B$ and the other in $B-A$. The \emph{order} of a separation $(A,B)$ is $|A \cap B|$. For $X, Y \subseteq V(G)$ an \emph{$(X,Y)$-linkage} is a set of vertex disjoint paths, each with one end in $X$ and the other end in $Y$. By Menger's theorem the maximum order of an $(X,Y)$-linkage in $G$ is equal to the minimum order of a separation $(A,B)$ of $G$ such that $X \subseteq A$, $Y \subseteq B$. \begin{lem}\label{lem:densemodel} For every graph $G$ there exist a graph $H$ and a model $\mu$ of $H$ in $G$ such that for every $v \in V(H)$ there exists $u \in \mu(v)$ such that ${\mathrm{deg}}_G(u) \leq 96d(H)+24$. \end{lem} \begin{proof} Let $G_1\subseteq G$ be chosen such that $d(G_1)$ is maximum. Let $d=d(G_1)$.
Let $R$ be the set of all vertices of $G$ of degree at most $12d$, and let $\mathcal{P}$ be a $(V(G_1),R)$-linkage in $G$ of maximum order. Let $x=\abs{\mathcal{P}}$ and $n=\abs{V(G_1)}$. As noted above, by Menger's theorem there exists a separation $(A,B)$ of $G$ such that $V(G_1)\subseteq A,R\subseteq B$ and $\abs{A\cap B}=x$. Let $G_2=G[A]$ and $n'=\abs{A}$. Then $\abs{A-B}=n'-x$ and every vertex in $A-B$ has degree at least $12d$. By the choice of $G_1$ we have $$d\geq d(G_2) =\frac{e(G_2)}{v(G_2)}\geq\frac{6d(n'-x)}{n'}.$$ Thus $x\geq \frac{5}{6}n' \geq \frac{5}{6}n$. Let $Q$ be the set of the ends in $V(G_1)$ of the paths in $\mathcal{P}$; then $\abs{Q}=x$. Let $G_3=G[V(G_1)-Q]$; then $$e(G_3)\leq d v(G_3)=d(v(G_1)-\abs{Q})\leq dn/6.$$ Let $G_4=G_1 \setminus E(G_3)$; then $\abs{E(G_4)}\geq \frac{5}{6}dn$. Let $S$ be the set of vertices in $V(G_4)-Q$ with degree at least $2d$. We claim that there exists a matching $M$ in $G_4$ such that each vertex of $S$ is joined by an edge of $M$ to a vertex in $Q$. Suppose not. Then by Hall's theorem there exist a set $S_0 \subseteq S$ and a set $S' \subseteq Q$ with $|S'| < |S_0|$ such that all the edges of $G_4$ incident with vertices of $S_0$ have their other end in $S'$. It follows that $|E(G[S_0 \cup S'])| \geq 2d|S_0| > d|S_0 \cup S'|$, contradicting the choice of $d$ and proving our claim. For every edge $e \in M$ with an end $q \in Q$ extend the path $P$ in $\mathcal{P}$ which ends in $q$ to include $e$. We are now ready to construct the graph $H$ satisfying the lemma. Let $G_5=G_4[Q \cup S]$, let $V(H)= \mathcal{P}$, and let $P',P'' \in V(H)$ be adjacent in $H$ if some edge of $G_5$ joins a vertex of $P'$ to a vertex of $P''$. Then the map $\mu$ assigning to each path in $\mathcal{P}$ its vertex set is a model of $H$ in $G$. Next we estimate $d(H)$. Note that $|V(P) \cap V(G_5)| \leq 2$ for every $P \in \mc{P}$, and every vertex of $G_5$ is a vertex of some path in $\mc{P}$. It follows that $e(H) \geq \frac{e(G_5)-v(H)}{4}$.
Moreover, $$e(G_5) \geq e(G_4) - 2d(v(G_4)-|Q|-|S|) \geq \frac{5}{6}dn - 2d\frac{n}{6} \geq \frac{dv(H)}{2}.$$ Thus $d(H) \geq \frac{d-2}{8}.$ Finally, by the choice of $\mc{P}$, for every $v \in V(H)$ there exists $u \in V(\mu(v))$ such that ${\mathrm{deg}}_G(u) \leq 12d \leq 96d(H)+24$. \end{proof} We say that a blade $(G,S)$ is \emph{$(a,m)$-hefty} if \begin{itemize} \item $(G,S)$ is semiregular, \item $G\setminus S$ is hefty, \item $a=|V(G) - S|$, \item there are at least $m$ edges joining vertices of $S$ to vertices of $G\setminus S$. \end{itemize} We say that a blade $(G',S')$ is a \emph{minor} of a blade $(G,S)$ if $G'$ is obtained from $G$ by repeatedly deleting vertices and deleting and contracting edges with both ends in $V(G)\setminus S$. Lemmas~\ref{lem:densemodel} and~\ref{lem:ReedWood} imply the following. \begin{lem}\label{lem:heftyblade} There exists a constant $D$ satisfying the following. Let $\mc{B}=(G,S)$ be a regular blade such that $|V(G)-S|>1$. Then $\mc{B}$ has an $(a,d(\mc{B})a - Da^2)$-hefty minor for some positive integer $a \geq 2$. \end{lem} \begin{proof} We show that $D=204$ satisfies the lemma. By Lemma~\ref{lem:densemodel} there exists a graph $H$ and a model $\mu$ of $H$ in $G \setminus S$ such that for every $v \in V(H)$ there exists $u \in \mu(v)$ such that ${\mathrm{deg}}_{G \setminus S}(u) \leq 96d(H)+24$. As $\mc{B}$ is regular, it follows that each such vertex $u$ has at least $d(\mc{B}) - 96d(H)-24$ neighbors in $S$. Contracting the bags of $\mu$ to single vertices we obtain a minor $(G',S)$ of $\mc{B}$ such that $G' \setminus S$ is isomorphic to $H$ and every vertex in $V(G') \setminus S $ has at least $d(\mc{B}) - 96d(H)-24$ neighbors in $S$. Applying Lemma~\ref{lem:ReedWood} to $G' \setminus S$ we obtain a minor $(G'',S)$ of $\mc{B}$ such that $G'' \setminus S $ is hefty, $a=|V(G'')-S| \geq d(H)/2$ and every vertex in $V(G'') \setminus S $ has at least $d(\mc{B}) - 96d(H)-24 \geq d(\mc{B}) - 204a$ neighbors in $S$.
Let $Z$ be the set of vertices in $S$ with no neighbors in $V(G'')-S$; then $(G'' - Z, S-Z)$ is $(a,d(\mc{B})a - 204a^2)$-hefty, as desired. \end{proof} Lemma~\ref{lem:heftyblade} allows us to restrict our attention to hefty blades during the investigation of $c_\infty(lK_r)$ at the expense of an error term linear in the size of $V(G) \setminus S$ in such a blade $(G,S)$. Meanwhile, Lemmas~\ref{l:semicomplete1} and~\ref{l:semicomplete2} seem tailored for finding disjoint complete minors in hefty blades. Our next lemma makes the connection explicit. \begin{lem}\label{lem:heftyblade1} Let $a,r \geq k \geq 2$ be positive integers. Suppose that every $(a,r-k,\frac{k-1}{r-1})$-semicomplete graph has a $K_r$ minor. Then for every $l \geq 1$, every $(a,(l(r-k)+k-1)a)$-hefty blade has an $lK_{r}$ minor. \end{lem} \begin{proof} Let $\mc{B}=(G,S)$ be an $(a,(l(r-k)+k-1)a)$-hefty blade. Let $H=G\setminus S$. Let $\delta=(k-1)/(r-1)$, and let $S'$ be the set of all vertices in $S$ with at least $a(1 -\delta)$ neighbors in $V(H)$. Then for every subset $T$ of $S'$ with $|T|=r-k$ the underlying graph of the blade $\mc{B}[T]$ is $(a,r-k,\delta)$-semicomplete, and so $\mc{B}[T]$ contains a $K_r$ minor by the assumption of the lemma. Let $x= \lfloor|S'|/(r-k) \rfloor$. Then there exist disjoint $T_1,T_2,\ldots,T_x \subseteq S'$ such that $|T_i|=r-k$ for $1 \leq i \leq x$. Let $S''= S \setminus \cup_{i=1}^x T_i$. Suppose that $|S''| \geq (r-1)(l-x)$. Then there exist disjoint $T_{x+1},\ldots,T_{l} \subseteq S''$, such that $|T_i|=r-1$ for $x+1 \leq i \leq l$. Contracting $H$ to a single vertex gives a model of $K_r$ in $\mc{B}[T_i]$ for $x+1 \leq i \leq l$. Thus by Lemma~\ref{l:subblade} the blade $\mc{B}$ has an $lK_r$ minor. Therefore we may assume for a contradiction that $|S''| \leq (r-1)(l-x)$.
We have $|S' \cap S''| \leq r-k-1$ and so the total number of edges of $G$ with one end in $S''$ and another in $V(H)$ is at most $$a(r-k)+ ((r-1)(l-x)-(r-k))a(1-\delta).$$ Adding the edges with one end in $S \setminus S''$, we obtain the following upper bound on the number of edges from $S$ to $V(H)$: \begin{align*} x(r-k)a+a(r-k)+ ((r-1)(l-x)-(r-k))a(1-\delta) \\ =a (x(r-k -(r-1)(1-\delta))+ l(r-1)(1-\delta)+\delta(r-k)) \\ = a \left(l(r-k)+\frac{(k-1)(r-k)}{r-1} \right) \\ < a (l(r-k)+k-1), \end{align*} contradicting the assumption that $\mc{B}$ is $(a,a(l(r-k)+k-1))$-hefty. \end{proof} We now have all the ingredients in place for the proofs of our main theorems. \begin{proof}[Proof of Theorem~\ref{thm:main} b)] Let $D$ be as in Lemma~\ref{lem:heftyblade}, let $\varepsilon$ be as in Lemma~\ref{l:semicomplete1}, and let $\lambda^*$ be such that every graph $H$ with $d(H) \geq \lambda^* r \sqrt {\log r}$ contains a $K_r$ minor. Assuming $C \gg \lambda^*,D,1/\varepsilon$, we will show that $c_\infty(lK_r) \leq l(r-1)-1$ for all $l \geq C \log r/ \log\log r$. By Corollary~\ref{c:fanden} it suffices to show that if $\mc{B}'=(G',S')$ is a regular blade with $d(\mc{B}') > l(r-1)-1$ then $lK_r$ is a minor of $\mc{B}'$. If $|S'| \geq l(r-1)$, then $\mc{B}'$ has an $lK_r$ minor by Lemma~\ref{l:bladetau}, and so we assume $|S'| < l(r-1)$. Therefore $|V(G')-S'| \geq 2$, and by Lemma~\ref{lem:heftyblade}, $\mc{B}'$ contains an $(a, (l(r-1)-1- Da)a)$-hefty minor $\mc{B}=(G,S)$ for some integer $a \geq 2$. We will show that $\mc{B}=(G,S)$ contains an $lK_r$ minor. If $a \geq 2\lambda^* r\sqrt{\log r}$ then $G - S$ has a $K_r$ minor, and so $\mc{B}'$ contains an $nK_r$ minor for any integer $n>0$. Thus we assume \begin{equation}\label{e:asmall} \varepsilon a \leq 2\lambda^*r\sqrt{\log r}. \end{equation} Suppose next that $l \geq 2Da^3$. Then $G$ contains at least $(l(r -2) + Da^2)a$ edges joining vertices of $S$ to vertices in $V(G)-S$, and so $|S| \geq l(r-2)+Da^2$.
Moreover, $|S| \leq l(r-1)$, and therefore at most $Da^2$ vertices in $S$ have a non-neighbor in $V(G)-S$. Thus there exists a set $S' \subseteq S$ such that $|S'| \geq l(r-2)$ and every $v \in S'$ is adjacent to every vertex of $V(G)-S$. Let $S_1,S_2,\ldots,S_l$ be disjoint subsets of $S'$ such that $|S_i|=r-2$ for $1 \leq i \leq l$. Then $\mc{B}[S_i]$ contains $K_r$ as a subgraph, and so $\mc{B}$ has an $lK_r$ minor by Lemma~\ref{l:subblade}. Thus we may assume that $2Da^3 \geq l \geq C$, implying $a\gg 1$, which in turn implies $r\gg 1$ by (\ref{e:asmall}). Suppose that there exists an integer $2 \leq k\leq r$ such that \begin{align} k \cdot \max \left\{\sqrt{\log k},2\frac{\log r}{\log{r}-\log{k}} \right\} < \varepsilon a, \label{e:condition11}\\ l(r-1)- 1-Da \geq l(r-k)+k-1. \label{e:condition13} \end{align} Then by Lemma~\ref{l:semicomplete1} every $(a,r-k,(k-1)/(r-1))$-semicomplete graph has a $K_r$ minor, and thus by Lemma~\ref{lem:heftyblade1} every $(a,(l(r-k)+k-1)a)$-hefty blade has an $lK_r$ minor. Meanwhile, the last condition implies that $\mc{B}$ is $(a,(l(r-k)+k-1)a)$-hefty. Thus it remains to find $k$ satisfying the above. Let $ k= \lceil 2Da/l+1 \rceil.$ Then $(k-1)(l-1) \geq Da+1$ and so (\ref{e:condition13}) holds. If $k \leq 3$ then (\ref{e:condition11}) also holds, since $a, r \gg 1$. Otherwise, $k \leq 4Da/l$. By (\ref{e:asmall}), we have \begin{align*}\log l &\geq \log C + \log \log r - \log\log\log r \\ &\geq \frac{1}{3} \log \log r -\log r +\log a + \log 4D,\end{align*} and so $\log k \leq \log r - \frac{1}{3}\log\log r$. Thus the left side of (\ref{e:condition11}) is at most $$a \cdot \frac{4D \log\log r}{C\log r} \cdot \frac{2\log r}{\frac{1}{3}\log\log r} = \frac{24D}{C}a < \varepsilon a,$$ as desired. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:upper}] The argument is very similar to the proof of Theorem~\ref{thm:main} b) above, except that we use Lemma~\ref{l:semicomplete2} in place of Lemma~\ref{l:semicomplete1}.
Let $D$ be as in Lemma~\ref{lem:heftyblade}, let $\varepsilon$ be as in Lemma~\ref{l:semicomplete2}, let $\lambda^*$ be such that every graph $H$ with $d(H) \geq \lambda^* r \sqrt{ \log r}$ contains a $K_r$ minor, and let $C$ be as in Theorem~\ref{thm:main} b). We show that the theorem holds as long as $C_u \gg C, \lambda^*,D,1/\varepsilon$. Let $\Delta = C_u r \log r /l$. As in the proof of Theorem~\ref{thm:main}, by Lemma~\ref{lem:heftyblade} it suffices to show that if $\mc{B}=(G,S)$ is an $(a, (l(r-1) - 1 + \Delta - Da)a)$-hefty blade for some integer $a \geq 2$ then $\mc{B}$ contains an $lK_r$ minor. By Theorem~\ref{thm:main} b) we may assume that $l \leq C\log r/ \log \log r$. As in the previous proof we may assume that $|S| < l(r-1)$ and that (\ref{e:asmall}) holds. The first of these conditions implies $Da \geq \Delta$, that is \begin{equation}\label{e:alarge} a \geq \frac{C_u r \log r}{Dl}. \end{equation} Substituting the upper bound on $l$, we have $a > r/\varepsilon$. As a consequence of (\ref{e:asmall}) and (\ref{e:alarge}) we have $r \gg 1$ and \begin{equation}\label{e:llarge} l > 6D\lambda^*\sqrt{\log r}. \end{equation} (The constants in the above inequalities may seem arbitrary, but are chosen for later use.) As in the proof of Theorem~\ref{thm:main}, successively applying Lemma~\ref{l:semicomplete2} and Lemma~\ref{lem:heftyblade1} we see that it suffices to find a positive integer $k\geq 2$ satisfying \begin{align} \max \{r,\sqrt{rk\log r}\} < \varepsilon a, \label{e:condition21}\\ 0.2 \leq \frac{k-1}{r-1}, \label{e:condition22}\\ l(r-1) - Da \geq l(r-k)+k-1. \label{e:condition23} \end{align} Choose $k = cDa/l$ for some $2 < c <3$ such that $k$ is an integer. Then $lk \geq 2Da$ and (\ref{e:condition23}) holds. The condition (\ref{e:condition22}) holds by (\ref{e:asmall}) and (\ref{e:llarge}). It remains to show that $\sqrt{rk\log r} < \varepsilon a$, i.e. $$\frac{cDa}{l}r\log r < \varepsilon^2 a^2,$$ which follows directly from (\ref{e:alarge}).
\end{proof} \section{Concluding remarks}\label{s:conclude} In this paper we explored applications of the structural lemma of Eppstein~\cite{Eppstein10} to bounds on the asymptotic extremal function $c_\infty(H)$ for disconnected graphs $H$. In particular, a large portion of the paper is dedicated to proving bounds on $c_\infty(lK_r)$. In this direction the following interesting questions remain open. \begin{que}\label{q:1} How large is $c_\infty(2K_r) - c_\infty(K_r)$? \end{que} Clearly, $c_\infty(2K_r) - c_\infty(K_r) \geq 1$, and we have $c_\infty(2K_r) - c_\infty(K_r) \leq r-1$ by Theorem~\ref{thm:infty+}, but we cannot improve on either of the bounds. Giving a precise answer to Question~\ref{q:1} might be out of reach of the current techniques, as it seems likely to involve obtaining estimates on $c(K_r)$ with additive error sublinear in $r$. In contrast, we believe that it is possible that a refinement of the tools presented in this paper is sufficient to answer the following two questions. \begin{que}\label{q:2} Give an estimate on $c_\infty(lK_r)$ which is asymptotically tight for all $l,r$ such that $l+r \to \infty$. \end{que} As noted in the introduction, we have $$\frac{1}{2}-o(1) \leq \frac{c_\infty(lK_r)}{\lambda r \sqrt{\log r} + l(r-1) } \leq 1 + o(1),$$ but can one improve on the estimate in the denominator to remove the gap between the bounds? \begin{que}\label{q:3} Give a tight estimate of $c_\infty(lK_r) - l(r-1)$ in the range $l=\omega(\sqrt{\log r})$ and $l=o(\log r/ \log\log r)$. \end{que} Theorems~\ref{thm:lower} and~\ref{thm:upper} provide bounds on the above difference which differ by a constant factor. We believe that the lower bound is tight. There are also many natural questions which could be asked about the behaviour of $c_\infty(lH)$ for non-complete graphs $H$.
For example, define the \emph{excess} of $H$ by $$\brm{exc}(H)=\lim_{l \to \infty} (c_\infty(lH)-l\tau(H)+1).$$ By (\ref{e:tau}) and Theorem~\ref{thm:infty+}, $\brm{exc}(H)$ is well-defined and is non-negative for every graph $H$. By Theorem~\ref{t:Thomason} we have $\brm{exc}(K_r)=0$ for every $r$. By Theorem~\ref{thm:cycles} we have $\brm{exc}(C_l)=0$ for every $l \neq 4$, while $\brm{exc}(C_4)=1/2$. \begin{que}\label{q:4} Describe $\brm{exc}(H)$ in terms of other (natural) parameters of the graph $H$. \end{que} Finally, note once again that $c_\infty((l+1)H)-c_\infty(lH) \leq \tau(H)$ for all $H$ and all $l \geq 1$ by Theorem~\ref{thm:infty+}. It is possible to show that for fixed $H$ and large enough $l$ the above inequality holds with equality. Hence one might consider the following question. \begin{que}\label{q:5} For a fixed graph $H$, is the sequence $c_\infty((l+1)H)-c_\infty(lH)$ unimodal? Is it non-decreasing? \end{que} Note that the answer to Question~\ref{q:5} might shed light on Questions~\ref{q:1} and~\ref{q:2}. \bibliographystyle{alpha}
https://arxiv.org/abs/1504.03910
Loading Relativistic Maxwell Distributions in Particle Simulations
Numerical algorithms to load relativistic Maxwell distributions in particle-in-cell (PIC) and Monte Carlo simulations are presented. For the stationary relativistic Maxwellian, the inverse transform method and the Sobol algorithm are reviewed. To boost particles to obtain a relativistic shifted-Maxwellian, two rejection methods are proposed in a physically transparent manner. Their acceptance efficiencies are ${\approx}50\%$ for generic cases and $100\%$ for symmetric distributions. They can be combined with arbitrary base algorithms.
\section{INTRODUCTION} Because of an increasing demand in high-energy astrophysics, numerical modeling of relativistic kinetic plasmas has been growing in importance. To date, many simulations on relativistic kinetic processes have been performed, such as the Rankine-Hugoniot problem across a relativistic shock \citep{gallant92}, magnetic reconnection and kinetic instabilities \citep{zeni07} in a relativistically hot current sheet,\citep{harris,hoh66} and the kinetic Kelvin-Helmholtz instability in a relativistic flow shear \citep{alves12}. In these simulations, one has to carefully set up ultrarelativistic bulk flows and/or relativistically hot plasmas in their rest frame. Loading a velocity distribution function, i.e., initializing particle velocities by using random variables according to a relativistic distribution function, is essential. In nonrelativistic particle simulations, it is quite natural to begin with a Maxwell-Boltzmann distribution (Maxwellian in short). To load the Maxwellian, the Box--Muller algorithm is widely used.\citep{bm58} One can easily initialize a distribution with a bulk drift velocity by applying an offset to the particle velocities. In relativistic simulations, it is natural to begin with a relativistic Maxwellian, also known as the J\"{u}ttner-Synge distribution function.\citep{jut11,synge} To load it, the \citet{sobol76} algorithm is perhaps the most popular, at least in the Monte Carlo simulation community. The algorithm was formally proposed by \citet{sobol76} in a Russian proceedings article. Its key results are outlined in \citet{pod77,pod83}. Meanwhile, it is not so clear how to initialize particles according to the relativistic shifted-Maxwellian or a moving population of other distributions. To the best of our knowledge, the algorithms for the J\"{u}ttner-Synge distribution have not been applied to the relativistic shifted-Maxwellian. Several alternative algorithms have been proposed.
\citet{swisdak13} applied a rejection method for a log-concave distribution function. \citet{melzani13} utilized a numerical cumulative distribution function and cylindrical transformation. In this research note, we describe numerical methods to load relativistic Maxwellians in particle simulations. We first describe two base algorithms to load stationary relativistic Maxwellian, the inverse transform method and the Sobol method.\citep{sobol76} Next we apply the Lorentz transformation to obtain the relativistic shifted-Maxwellian. Simple rejection methods are proposed to deal with the spatial part of the Lorentz transformation. We validate the algorithms by test problems, followed by discussions. \section{Stationary relativistic Maxwellian} We consider relativistic Maxwell distributions (J\"{u}ttner-Synge distribution\citep{jut11,synge}) in the following form, \begin{equation} \label{eq:JS} f(\vec{u})d^3{u} = \frac{N}{4\pi m^2c T K_2 (mc^2/T)} \exp \Big( -\frac{ \gamma mc^2 }{T} \Big) d^3{u}, \end{equation} where $\vec{u}= \gamma\vec{v}$ is the spatial components of the 4-velocity, $\vec{v}$ is the velocity, $\gamma=[1-(\vec{v}/c)^2]^{-1/2}$ is the Lorentz factor, $m$ is the rest mass, $c$ is the light speed, $T$ is the temperature, and $K_2(x)$ is the modified Bessel function of the second kind. The normalization constant is set such that the number density is $N \equiv \int f(\vec{u})d^3{u}$. Hereafter we set $m=1$ and $c=1$ for simplicity. We use uppercases for fluid quantities and lowercases for particle properties throughout the paper. To generate $\vec{u}$, we consider the spherical transformation $(u_x, u_y, u_z) = ( u \sin \theta \cos \varphi, u \sin \theta \sin \varphi, u \cos \theta )$. Then Equation \ref{eq:JS} yields \begin{equation} \label{eq:exp_u2} f(u) du =\frac{N}{T K_2 (1/T)} \exp \Big(-\frac{\sqrt{1+u^2}}{T} \Big) u^2 du. \end{equation} In the special case of $N=1$, one can read this equation as a probability function with respect to $u$. 
We generate $u$ whose distribution follows Equation \ref{eq:exp_u2} by either the inverse transform method (Sec.~\ref{section:inverse}) or the Sobol method (Sec.~\ref{section:sobol}), described in the next subsections. After we obtain $u$, we generate $\vec{u}$ on the spherical surface $|\vec{u}|=u$ in momentum space. Using uniform random variables $X_1 (0<X_1\le 1)$ and $X_2 (0<X_2\le 1)$, we set $\vec{u}$ in the following way, \begin{eqnarray} \label{eq:spherical_scattering} \left\{ \begin{array}{lll} u_x &=& u ~ ( 2 X_1 - 1 ) \label{eq:ux} \\ u_y &=& 2 u \sqrt{ X_1 (1-X_1) } \cos(2\pi X_2) \label{eq:uy} \\ u_z &=& 2 u \sqrt{ X_1 (1-X_1) } \sin(2\pi X_2) \label{eq:uz} \end{array} \right. . \end{eqnarray} Then we obtain a relativistic Maxwellian which follows Equation \ref{eq:JS}. \subsection{Inverse transform method} \label{section:inverse} We consider the cumulative distribution function $F(u)$ with a practical upper bound $u_{\rm max}$, \begin{align} F(u) &= \Big( \int_0^{u} f(u) du \Big) \Big( \int_0^{\infty} f(u) du \Big)^{-1} \nonumber \\ &\simeq \Big( \int_0^{u} f(u) du \Big) \Big( \int_0^{u_{\rm max}} f(u) du \Big)^{-1} . \end{align} In the nonrelativistic limit of $T \ll 1$, $u_{\rm max} = 5 v_{\rm th}$ is sufficient, where $v_{\rm th}=\sqrt{2T}$ is the thermal speed. In the relativistically hot case of $T \gtrsim 1$, Equation \ref{eq:exp_u2} behaves like $\propto \exp(-{u}/T)u^2$ for $u \gg 1$. This decays more slowly than the nonrelativistic limit of $\propto \exp [-(v/v_{\rm th})^2]v^2$, and so we increase the upper bound to $u_{\rm max} = 20 T$. We usually prepare a numerical table of $F(u)$ with $2000$ or more grid points. Using a uniform random variable $X_3$, we compute \begin{equation} u=F^{-1}(X_3) \end{equation} by looking up and interpolating the table. \subsection{Sobol method} \label{section:sobol} Let us consider the gamma distribution. 
Its probability density function $P(x)$ is given by \begin{equation} \label{eq:gamma} P(x; a, b) = \frac{1}{b^a~{\rm Gamma}(a) } x^{a-1}e^{-x/b} ~~~~~(x\ge 0), \end{equation} where $a$ and $b$ are free parameters and ${\rm Gamma}(x)$ is the gamma function. A gamma distribution with an integer parameter $a$ can be generated from $a$ uniform random variables $X_i$ ($0<X_i\le 1$) in the following way,\citep{stat} \begin{equation} \frac{x}{b} = -\sum_{i=1}^{a} \ln X_i. \end{equation} \citet{sobol76} noticed that the right hand side of Equation \ref{eq:exp_u2} is similar to the third-order gamma distribution, \begin{equation} \label{eq:Pgamma} P(u; 3, T) = \Big( \frac{1}{2 T^{3}} \Big)\, \exp \Big(-\frac{u}{T} \Big) u^2. \end{equation} For a given $T$, we initialize $u$ using three random variables ($X_4 \dots X_6$), \begin{equation} \label{eq:gamma3} \frac{u}{T} = -\ln X_4 - \ln X_5 - \ln X_6 = - \ln X_4X_5X_6 . \end{equation} Comparing the exponential parts in Equations \ref{eq:exp_u2} and \ref{eq:Pgamma}, we obtain a relativistic Maxwellian by the rejection method. Using another random variable $X_7$, we accept the particle if \begin{equation} \label{eq:rej} \exp\Big( \frac{u-\sqrt{1+u^2}}{T} \Big) > X_7. \end{equation} Then we obtain $u$, which is distributed according to Equation \ref{eq:exp_u2}. Using Equation~\ref{eq:gamma3}, this criterion can be rewritten as \begin{equation} \sqrt{1+u^2} < \Big( u -T\ln X_7 \Big) = - T\ln X_4X_5X_6X_7 . \end{equation} This leads to a simple form of Sobol's criterion,\citep{pod77,pod83} \begin{equation} \label{eq:sobol} \eta^2-u^2 > 1 , \end{equation} where $\eta=-T\ln X_4X_5X_6X_7$. Note that $\eta$ and $u$ share the same variables $X_4,X_5$, and $X_6$. \textcolor{blue}{Make sure to avoid zero in $X_4\ldots X_7$, because $\ln 0$ is undefined.} Once Equation \ref{eq:sobol} (or Eq.~\ref{eq:rej}) is satisfied, we proceed to the next step, the spherical scattering (Eq.~\ref{eq:spherical_scattering}). 
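For concreteness, the procedure above can be sketched in a few lines of Python (our illustration, assuming NumPy; the function name is ours): draw a third-order gamma variate, apply Sobol's criterion $\eta^2-u^2>1$, and finish with the spherical scattering step.

```python
import numpy as np

def sobol_juttner(T, rng):
    """Draw one 4-velocity (ux, uy, uz) from the stationary Juttner
    distribution of temperature T (with m = c = 1) via Sobol's method."""
    while True:
        # four uniforms on (0, 1]; the tiny lower bound avoids log(0)
        x4, x5, x6, x7 = rng.uniform(np.finfo(float).tiny, 1.0, size=4)
        u = -T * np.log(x4 * x5 * x6)      # gamma-distributed trial |u|
        eta = u - T * np.log(x7)           # eta = -T ln(X4 X5 X6 X7)
        if eta * eta - u * u > 1.0:        # Sobol's acceptance criterion
            break
    # isotropic spherical scattering of |u|
    x1, x2 = rng.uniform(0.0, 1.0, size=2)
    ux = u * (2.0 * x1 - 1.0)
    uperp = 2.0 * u * np.sqrt(x1 * (1.0 - x1))
    return ux, uperp * np.cos(2.0 * np.pi * x2), uperp * np.sin(2.0 * np.pi * x2)
```

For $T=1$, the sample average of $\gamma=\sqrt{1+u^2}$ over many draws converges to the analytic mean $K_3(1/T)/K_2(1/T)-T\approx 3.37$.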
Comparing the normalization factors in Equation \ref{eq:exp_u2} with $N=1$ and Equation \ref{eq:Pgamma}, we obtain the overall efficiency of the rejection method as a function of $T$,\citep{pod77,pod83} \begin{equation} \label{eq:eff} \frac{1}{2 T^2} K_2(1/T) . \end{equation} Figure \ref{fig:eff} shows the acceptance efficiency of the Sobol method as a function of $T$. The efficiency quickly decreases for $T \rightarrow 0$, while it approaches $1$ for $T \rightarrow \infty$. \begin{figure}[] \begin{center} \includegraphics[width={\columnwidth},clip]{f1.pdf} \caption{ Acceptance efficiency of the Sobol algorithm as a function of the temperature $T$. The black squares show numerical results in Section \ref{section:test}. \label{fig:eff}} \end{center} \end{figure} \section{Relativistic Shifted-Maxwellian} \subsection{Lorentz transformation} Next we discuss general properties of the Lorentz transformation for particle distributions. We consider the transformation between two frames, $S$ and $S'$. We assume that particles are stationary in the reference frame $S$, and then we switch to a moving frame $S'$ at the 4-velocity $(\Gamma,-\Gamma\beta,0,0)$. Without loss of generality, we consider the transformation in the $+x$ direction. In $S'$, we observe the particle distribution, boosted by the 4-velocity $(\Gamma,+\Gamma\beta,0,0)$. We denote the observed properties in $S'$ by the prime sign ($'$). Since the total particle number is conserved, we have \begin{equation} \label{eq:f_const} f(\vec{x},\vec{u})\,d^3{x}\,d^3{u} = f'(\vec{x'},\vec{u'})\,d^3{x'}\,d^3{u'} . \end{equation} Here, $d^3{x}=dx\,dy\,dz$ is the spatial volume element. Using the time element $dt$ in the same frame, we consider the 4-dimensional volume element $dt\,d^3{x}$ that moves with the 4-velocity $\vec{u}$. The 4-dimensional position $(t,x,y,z)$ follows the Lorentz transformation, and so the 4-volume element vector $(dt,dx,dy,dz)$ also follows the Lorentz transformation. 
Since the Jacobian of the Lorentz transformation matrix $\Lambda$ is 1, the 4-volume $dt\,d^3{x}$ is conserved, i.e., $dt\,d^3{x} = dt'\,d^3{x'}$. Since we deal with the $\vec{u}$-moving volume, the time element $dt$ is related to the proper time element $d\tau$ by $dt=\gamma d\tau$. We similarly see $dt'=\gamma' d\tau$. Therefore we obtain \begin{equation} \label{eq:dx} \gamma d^3{x} = \gamma'd^3{x'}. \end{equation} This also expresses the Lorentz contraction of the volume. The transformation is slightly different for $d^3{u}$, because $\vec{u}$ is constrained by $u^{\mu}u_{\mu} = u^2-\gamma^2 \equiv -1$. Without loss of generality, one can consider the Lorentz transformation by $(\Gamma,-\Gamma\beta,0,0)$ in the $+x$ direction: \begin{align} \gamma' = \Gamma \gamma (1 + \beta v_x), \\ du'_x=\Gamma (du_x + \beta d\gamma), \\ du'_y=du_y, ~~~ du'_z=du_z . \end{align} Using the relation $\gamma d\gamma=u_xdu_x$ (for fixed $u_y$ and $u_z$), we obtain \begin{equation} \label{eq:du} \frac{d^3{u}}{\gamma} = \frac{d^3{u'}}{\gamma'} . \end{equation} From Equations~\ref{eq:dx} and \ref{eq:du}, we obtain $d^3{x}\,d^3{u}=d^3{x'}\,d^3{u'}$. This ensures \begin{equation} \label{eq:f} f(\vec{x},\vec{u})=f'(\vec{x'},\vec{u'}). \end{equation} We obtain a relativistic shifted-Maxwellian by simply translating Equation~\ref{eq:f}, \begin{align} \label{eq:shifted} f(\vec{u}) = f'(\vec{u'}) &= \frac{N}{4\pi T K_2 (1/T)} \exp \Big( -\frac{ \gamma }{T} \Big) \\ &= \frac{N}{4\pi T K_2 (1/T)} \exp \Big( -\frac{ \Gamma(\gamma'-\beta u'_x) }{T} \Big) . \label{eq:shifted2} \end{align} Since good algorithms are available for the stationary distribution (Sec.~II), we would like to initialize the particle momentum $\vec{u}$ in the $S$ frame and then translate it to the $S'$ frame by the Lorentz transformation, $\vec{u}\rightarrow\vec{u'}$. This procedure contains the momentum-space transformation (Eq.~\ref{eq:du}). However, it does not take care of the spatial part of the transformation, $d^3{x}\rightarrow d^3{x'}$ (Eq.~\ref{eq:dx}). 
Using the $S$-frame quantities, the distribution in $S'$ appears to the observer in the following way, \begin{equation} f'(\vec{u'}) d^3{u'} = f(\vec{u}) \Big(\frac{\gamma'}{\gamma}\Big) d^3{u} . \end{equation} We recognize a volume transform factor $(\gamma'/\gamma)$, \begin{equation} \label{eq:factor} \frac{\gamma'}{\gamma} = \Gamma ( 1 + \beta v_x ) , \end{equation} because the element volume in $S$ is not identical to the element volume in $S'$. This issue is also addressed by \citet{melzani13}. One can also interpret this as the number density being inversely proportional to the volume size $\propto (d^3{x})^{-1}$ (see also Eq.~\ref{eq:dx}). Since both the spatial and momentum transformations (Equations \ref{eq:dx} and \ref{eq:du}) depend on $\vec{u}$, the factor differs {\itshape from particle to particle}. This may sound tricky, but the above formula describes what the observer actually sees. We obtain very different results without the volume transformation, as will be shown in Section IV. Along this line, we briefly outline relativistic fluid properties, assuming an isotropic Maxwellian distribution. From Equation \ref{eq:du}, the number flux 4-vector $N^{\mu}$ reads \begin{equation} \label{eq:N} N^{\mu} = \int f(\vec{u}) u^{\mu} \frac{d^3{u}}{\gamma} . \end{equation} We see $N'^{\mu} = ( N', N'\vec{V'} ) = N ( \Gamma , \Gamma \vec{\beta} )$. Equation \ref{eq:N} ensures that $N^{\mu}$ follows the Lorentz transformation, i.e., $N'^{\mu} = \Lambda^{\mu}_{\alpha} N^{\alpha}$, where $\Lambda$ is the Lorentz tensor. Similarly, the stress-energy tensor $T^{\mu\nu}$ reads \begin{equation} T^{\mu\nu} = \int f(\vec{u}) u^{\mu}u^{\nu} \frac{d^3{u}}{\gamma} . \end{equation} Clearly it follows the Lorentz transformation, i.e., $T'^{\mu\nu} = \Lambda^{\mu}_{\alpha} \Lambda^{\nu}_{\beta} T^{\alpha\beta}$. 
In this case, \begin{align} T'^{00} &= \Gamma^2 ( \mathcal{E} + P ) - P, \\ T'^{0i} &= \Gamma^2 ( \mathcal{E} + P ) {\beta}^i, \end{align} where $\mathcal{E} \equiv \int f(\vec{u}) \gamma d^3{u} = N \{ [{K_3(1/T)}/{K_2(1/T)}]-T \}$ is the internal energy density and $P \equiv \int f(\vec{u})u_x ({u_x}/{\gamma}) d^3{u} = NT$ is the pressure in the rest frame. \subsection{Volume transform methods} Here, we describe simple methods to deal with the volume transform factor (Eq.~\ref{eq:factor}). It is impossible to deal with this factor by adjusting the cell size in a PIC simulation, because the transformation differs from particle to particle. One could instead change the weights of the particles. However, we prefer not to do so, because the ratio of the heaviest particle to the lightest particle could become very large. We propose to adjust the particle number by a rejection method. Using a random variable $X_8$ ($0<X_8\le 1$), we accept the particle if the following condition is met, \begin{equation} \label{eq:factor2} \frac{1}{2\Gamma} \Big( \frac{\gamma'}{\gamma} \Big) = \frac{1}{2}( 1 + \beta v_x ) > X_8. \end{equation} The left hand side ranges from 0 to 1. If the condition is not met, then we re-initialize the particle momentum. The factor $(1/2\Gamma)$ can be absorbed in the normalization constant, because we usually know the value of $2\Gamma N$ before loading particles. The expected value of the acceptance factor is $50\%$, \begin{equation} {\rm E}\Big[ \frac{1}{2\Gamma} \Big( \frac{\gamma'}{\gamma} \Big) \Big] = \frac{1}{2}( 1 + \beta {\rm E}[v_x] ) = 0.5. \end{equation} If $S$ is not the fluid rest frame, ${\rm E}[v_x] \ne 0$ and so the efficiency may vary. \begin{figure}[] \begin{center} \includegraphics[width={\columnwidth},clip]{f2.pdf} \caption{ \textcolor{blue}{Lorentz transformation of a relativistically hot plasma distribution. 
The bottom panel illustrates the flipping method, which is responsible for the spatial part of the Lorentz transformation.} \label{fig:shifted}} \end{center} \end{figure} We can further improve the efficiency in the special case of a symmetric distribution. When $f(u_x)=f(-u_x)$, we multiply the acceptance factor by 2, \begin{equation} \frac{1}{\Gamma} \Big( \frac{\gamma'}{\gamma} \Big) = ( 1 + \beta v_x ). \label{eq:factor_new} \end{equation} The $(1/\Gamma)$ factor is absorbed in the total particle number. We take advantage of the fact that the second term on the right hand side is an odd function of $u_x$ (or $v_x$). When $\beta v_x$ is negative, the acceptance factor ranges over $0 < (1-|\beta v_x|) \le 1$. We reject particles with probability $|\beta v_x|$. On the other hand, when $\beta v_x$ is positive, the factor ranges over $1 \le (1+\beta v_x) < 2$. We accept all of these particles and, in addition, interpret the factor as requiring another set of particles with probability $|\beta v_x|$. If $f(u_x)=f(-u_x)$, the number of rejected particles equals the number of particles to be added. We simply reverse the sign of $u_x$ of the rejected particles, and then add them to the positive-$\beta v_x$ side. This logic is schematically illustrated in the bottom panel of Figure \ref{fig:shifted}. We summarize the algorithm in the following way. If the following condition is met for a random variable $X_9$, \begin{equation} \label{eq:sz} -\beta v_x > X_9, \end{equation} then we flip $u_x \rightarrow -u_x$ before computing $u'_x$. Since $X_9>0$, the two conditions $-\beta v_x > 0$ and $-\beta v_x > X_9$ are combined into the single condition (Eq.~\ref{eq:sz}). The acceptance efficiency is 100\%. We call it the flipping method (Eq.~\ref{eq:sz}) to distinguish it from the rejection method (Eq.~\ref{eq:factor2}). \section{Test problems} \label{section:test} In order to validate the algorithms, we carry out several test problems. We initialize $10^6$ particles in all cases. 
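The loader used in these tests combines a rest-frame Sobol draw, the flipping step, and the momentum boost. A minimal self-contained Python sketch (ours, assuming NumPy; not the production code) reads:

```python
import numpy as np

def load_shifted(T, Gamma, n, rng):
    """Draw n 4-velocities from a Juttner distribution of temperature T,
    boosted to bulk Lorentz factor Gamma along +x, using the Sobol
    method in the rest frame plus the flipping method."""
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)
    out = np.empty((n, 3))
    for i in range(n):
        while True:                      # Sobol's method for the rest-frame |u|
            x = rng.uniform(np.finfo(float).tiny, 1.0, size=4)
            u = -T * np.log(x[0] * x[1] * x[2])
            eta = u - T * np.log(x[3])
            if eta * eta - u * u > 1.0:
                break
        x1, x2, x9 = rng.uniform(0.0, 1.0, size=3)
        ux = u * (2.0 * x1 - 1.0)        # spherical scattering
        uperp = 2.0 * u * np.sqrt(x1 * (1.0 - x1))
        gamma = np.sqrt(1.0 + u * u)
        if -beta * (ux / gamma) > x9:    # flipping method: reverse ux
            ux = -ux
        out[i] = (Gamma * (ux + beta * gamma),   # boosted u'_x
                  uperp * np.cos(2.0 * np.pi * x2),
                  uperp * np.sin(2.0 * np.pi * x2))
    return out
```

The sample mean of $v'_x=u'_x/\gamma'$ over the loaded particles then recovers the bulk speed $\beta$, i.e., the ratio $N'^x/N'^0$.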
The black squares in Figure \ref{fig:eff} show the acceptance efficiency of the Sobol method, as a function of $T$. They are in excellent agreement with the expected curve (Eq.~\ref{eq:eff}). We then compute relativistic shifted-Maxwellians by using the Sobol method and the flipping method (Eq.~\ref{eq:sz}). We set $T=1$ and boost the particles by the bulk Lorentz factor $\Gamma = (1, 1.1, 10)$ in the $+u_x$ direction. \textcolor{blue}{Figure \ref{fig:f} compares numerical results and analytic distributions in the moving frame $S'$, integrated over $u'_y$ and $u'_z$. All distributions are normalized by $\int f'd^3u'=\Gamma N$. The following analytic solution is obtained by using a cylindrical transformation $(u'_x, u'_y, u'_z) = ( u'_x, u'_{\perp}\cos\phi, u'_{\perp}\sin\phi )$ in Equation \ref{eq:shifted2}. \begin{align} f'(u'_x) &= \int^\infty_0 \int_0^{2\pi} f'(\vec{u}') u'_{\perp} d\phi d u'_{\perp} \nonumber \\ &= \frac{N(\Gamma\sqrt{1+u'^2_x}+T)}{2\Gamma^2K_2(1/T)} \exp \Big( -\frac{ \Gamma(\sqrt{1+u'^2_x}-\beta u'_x) }{T} \Big) . \label{eq:f_ux} \end{align} The numerical results are in excellent agreement with the analytic solutions. The stationary Maxwellian ($\Gamma=1$) is reproduced well. As $\Gamma$ increases, the distribution is stretched in the $+u'_x$ direction. From Equation \ref{eq:f_ux}, we see $f'(u'_x) \propto u'_x \exp[ -(\Gamma(\sqrt{1+u'^2_x}-\beta u'_x)/{T}) ] \approx u'_x\exp[ -({u'_x}/{2 \Gamma T}) ] $ for $u'_x \rightarrow \infty$. Therefore, the slope on the boosted side becomes extremely flat. For $\Gamma=1.1$, the numerical results on the right side ($u'_x \approx 20$) look slightly noisier than those on the other side ($u'_x \approx -8$). This is probably an unfair comparison, because the right slope has more grid points than the left slope in the low-density range. For $\Gamma=10$, the distribution is highly stretched in $+u'_x$. Outside the plotted range, the distribution still retains $f'(u'_x)/\Gamma N \approx 4 \times 10^{-3}$ at $u'_x=100$. 
It will be challenging to initialize such a distribution by a direct rejection method in the $S'$ frame, because we would have to extend the parameter domain $2\Gamma$ times longer in $+u'_x$. This gives us another motivation to initialize particles in $S$ and then boost them to the $S'$ frame.} Next, we compute several fluid quantities in the moving frame $S'$. After initializing the particles, we compute the flow vector $N'^{\mu}$ and the stress-energy tensor $T'^{\mu\nu}$. Then we evaluate the average velocity $N'^{x}/N'^0=\beta$ and the average energy flux $T'^{0x}/N'^0=\Gamma\beta (\mathcal{E}+P)/N$. The former is a direct indicator of the bulk motion, and the latter, the energy flux, plays a decisive role in the system evolution. The results are presented in Table \ref{table}. We change two key parameters, the bulk Lorentz factor $\Gamma = (1.1, 10, 10^2)$ and the relativistic temperature $T=(0.1,1,10)$. In the $T=0.1$ case, we use the inverse transform method (Sec. II A), because the efficiency of the Sobol method falls to $\approx 0.001$. We also test the $T=10$ case without the volume transformation. This incorrect case is denoted by the asterisk sign ($*$). In Table \ref{table}, the first rows show the computed results. The second rows show the relative errors with respect to the analytic solutions. As can be seen, the results are accurate, except for the rightmost columns. Without the volume transformation, the average bulk speed is inaccurate in Table \ref{table}. This is crucial for initializing a relativistic current sheet \citep{harris,hoh66}, in which relativistically hot populations carry the electric current. The energy flux is significantly distorted, too. The average energy flux without the volume transformation is \begin{equation} \frac{ \int f(\vec{u}) u'^0 ({u'^x}/{u'^0}) {d^3{u}} } {\int f(\vec{u}) {d^3{u}}} = \frac{1}{N} \int f(\vec{u}) \Gamma ( \beta u^0 + u^x ) {d^3{u}} = \Gamma \beta \frac{ \mathcal{E} }{ N } . 
\end{equation} Since $(\mathcal{E}+P)/\mathcal{E} \rightarrow 4/3$ for $T \gg 1$, we lose $25\%$ of the energy flux, regardless of the bulk speed $\beta$. We can similarly evaluate the average energy density without the volume transformation. It deviates from the right value by a factor of $[ 1 + \frac{\Gamma^2-1}{\Gamma^2} \big( \frac{P}{\mathcal{E}} \big) ]^{-1}$, and therefore the error approaches $25\%$ for $\Gamma \gg 1$ and $T \gg 1$. \begin{figure}[] \begin{center} \includegraphics[width={\columnwidth},clip]{f3.pdf} \caption{ \textcolor{blue}{Distribution functions $f'({u}'_x)$ of Lorentz-boosted Maxwellians as a function of $u'_x$. Numerical results are overplotted on the analytic curves (Eq.~\ref{eq:f_ux}). We set $T=1$ for all cases.} \label{fig:f}} \end{center} \end{figure} \begin{table} \begin{center} \caption{ Computed fluid quantities and relative errors. \label{table}} \begin{tabular}{lrrrr} \hline $\frac{N'^x}{\Gamma N}$ & T=0.1 & T=1.0 & T=10 & T=10* \\ \hline $\Gamma$=1.1 & 0.416532 & 0.416708 & 0.416566 & 0.288896 \\ ~ & $1.6 \times 10^{-4}$ & $2.6 \times 10^{-4}$ & $7.7 \times 10^{-5}$ & 0.307 \\ $\Gamma$=10 & 0.994989 & 0.994996 & 0.994958 & 0.975918 \\ ~ & $1.6 \times 10^{-6}$ & $8.9 \times 10^{-6}$ & $2.9 \times 10^{-5}$ & 0.0192 \\ $\Gamma$=100 & 0.999950 & 0.999950 & 0.999950 & 0.999658 \\ ~ & $9.3 \times 10^{-10}$ & ~$4.0 \times 10^{-8}$ & ~$1.3 \times 10^{-7}$ & ~$2.9 \times 10^{-4}$ \\ \hline $\frac{T'^{0x}}{\Gamma N}$ & T=0.1 & T=1.0 & T=10 & T=10* \\ \hline $\Gamma$=1.1 & 0.580438 & 2.00167 & 18.3798 & 13.7357 \\ ~ & $2.9 \times 10^{-4}$ & $5.5 \times 10^{-4}$ & $1.4 \times 10^{-3}$ & 0.252 \\ $\Gamma$=10 & 12.6063 & 43.4805 & 398.147 & 298.838 \\ ~ & $2.6 \times 10^{-6}$ & $1.1 \times 10^{-4}$ & $8.5 \times 10^{-4}$ & 0.250 \\ $\Gamma$=100 & 126.674 & 437.062 & 4001.75 & 3003.43 \\ ~ & $1.5 \times 10^{-4}$ & $9.2 \times 10^{-5}$ & $7.4 \times 10^{-4}$ & 0.250 \\ \hline \end{tabular} \end{center} \end{table} \section{Discussion and 
Summary} We first reviewed two algorithms to initialize the stationary relativistic Maxwellian. In addition to the simple inverse transform method, we formally reviewed the Sobol algorithm. In our experience, the inverse transform method is faster than the Sobol method, because it requires only 3 random variables. We see no problems as long as we prepare $10^3$--$10^4$ grid points in the table. The algorithm can deal with any spherically symmetric distribution. On the other hand, the Sobol method has a strong mathematical background. It is very simple, and so implementation bugs are easily avoided. The method is certainly slower than the inverse transform method, because it uses 6 random variables. However, this is not a major drawback, because these algorithms are used only for initialization. The only problem is that the Sobol method becomes extremely inefficient in the nonrelativistic limit of $T \ll 1$. In such a case, we simply switch from the Sobol method to the inverse transform method or the Box-Muller method. Another promising option is the log-concave rejection method, described in Section II and the Appendix of \citet{swisdak13}. The algorithm uses 4 random variables, its acceptance efficiency is ${\approx}90\%$, and it is nearly insensitive to $T$. After initializing the stationary Maxwellian, we apply the volume transformation before boosting the particle momentum. We have proposed two algorithms, the rejection method (Eq.~\ref{eq:factor2}) and the flipping method (Eq.~\ref{eq:sz}). They require one more random variable. The flipping method is our first choice. Since it accepts all particles, the overall efficiency is the same as that of the base algorithm for the stationary distribution. As a representative case, the Sobol method combined with the flipping method is summarized in the pseudocode in Table \ref{table:code}. We emphasize that our volume transform methods are generic. 
The flipping method can be combined with power-law, waterbag, or any other distributions, as long as the distribution is symmetric in $u_x$ in the $S$ frame. Even when the distribution is non-symmetric, we can fall back on the rejection method (Eq.~\ref{eq:factor2}). Its acceptance efficiency is typically $50\%$, but it works in all cases. \citet{swisdak13} used the log-concave rejection method twice for the shifted Maxwellian. According to his article, the overall efficiency is $\approx 80\%$, insensitive to $T$. This is a very good result. However, his algorithm is specialized for Maxwellians or possibly other exponential-type distributions. In contrast, our simple methods can deal with any kind of distribution. Using the test problems, we have demonstrated that the combinations of the base methods and the flipping method work excellently. Without the volume transformation, we recognize significant errors up to $25\%$ in the average energy flux. This is because the volume transform factor (Eq.~\ref{eq:factor}) is no longer constant for $\Gamma > 1$ and $T \gg 1$. In summary, we have described numerical algorithms to load relativistic Maxwellians in particle simulations. The inverse transform method and the Sobol method are useful to load the stationary Maxwellian. For the shifted Maxwellian, the rejection method (Eq.~\ref{eq:factor2}) and the flipping method (Eq.~\ref{eq:sz}) take care of the spatial part of the Lorentz transformation. These methods are simple and physically transparent. They can be combined with arbitrary base algorithms. We hope that these algorithms are useful in relativistic kinetic simulations in high-energy astrophysics. \begin{table} \begin{center} \caption{Sobol algorithm with the flipping method. 
\label{table:code}} \begin{tabular}{l} \\ \hline {\bf repeat}\\ $~~~~$generate $X_1, X_2, X_3, X_4$, uniform on (0, 1]\\ $~~~~u \leftarrow -T \ln X_1X_2X_3$\\ $~~~~\eta \leftarrow -T \ln X_1X_2X_3X_4$\\ {\bf until} $\eta^2 - u^2 > 1$.\\ generate $X_5, X_6, X_7$, uniform on [0, 1]\\ $u_x \leftarrow u ~ ( 2 X_5 - 1 )$ \\ $u_y \leftarrow 2 u \sqrt{ X_5 (1-X_5) } \cos(2\pi X_6)$ \\ $u_z \leftarrow 2 u \sqrt{ X_5 (1-X_5) } \sin(2\pi X_6)$ \\ {\bf if} ($-\beta v_x > X_7$), $u_x \leftarrow -u_x$\\ $u_x \leftarrow \Gamma (u_x + \beta \sqrt{1+u^2})$ \\ {\bf return} $u_x, u_y, u_z$\\ \hline \end{tabular} \end{center} \end{table} \begin{acknowledgements} The author acknowledges M. Oka and A. Taktakishvili for their assistance in locating Sobol's original article and T. N. Kato for his insightful comments on the manuscript. This work was supported by Grant-in-Aid for Young Scientists (B) (Grant No. 25871054). \end{acknowledgements}
https://arxiv.org/abs/2206.10681
Near-Linear $\varepsilon$-Emulators for Planar Graphs
We study vertex sparsification for distances, in the setting of planar graphs with distortion: Given a planar graph $G$ (with edge weights) and a subset of $k$ terminal vertices, the goal is to construct an $\varepsilon$-emulator, which is a small planar graph $G'$ that contains the terminals and preserves the distances between the terminals up to factor $1+\varepsilon$. We construct the first $\varepsilon$-emulators for planar graphs of near-linear size $\tilde O(k/\varepsilon^{O(1)})$. In terms of $k$, this is a dramatic improvement over the previous quadratic upper bound of Cheung, Goranci and Henzinger, and breaks below known quadratic lower bounds for exact emulators (the case when $\varepsilon=0$). Moreover, our emulators can be computed in (near-)linear time, which leads to fast $(1+\varepsilon)$-approximation algorithms for basic optimization problems on planar graphs, including multiple-source shortest paths, minimum $(s,t)$-cut, graph diameter, and dynamic distance oracles.
\section{Emulators for One-Hole Instances} \label{sec: planar emulator} In this section and the next one we design a near-linear time algorithm for constructing $\varepsilon$-emulators for one-hole instances, as stated in Theorem~\ref{Th:emulator-1hole}. We say that an $\varepsilon$-emulator $(G',T)$ for a one-hole instance $(G,T)$ is \EMPH{aligned} if $(G',T)$ is also a one-hole instance, and the circular orderings of the terminals on the outerfaces of $G$ and of $G'$ are identical. \begin{theorem} \label{Th:emulator-1hole} Given a parameter $\varepsilon \in (0,1)$ and a one-hole instance $(G,T)$ with $|T|=k$, one can compute an aligned $\varepsilon$-emulator for $(G,T)$ of size $|V(G')|=\tilde O(k/\varepsilon^{O(1)})$ in $\tilde O\big((n+k^2)/\varepsilon^{O(1)}\big)$~time. \end{theorem} We complement the upper bound in Theorem~\ref{Th:emulator-1hole} with an $\Omega(k/\varepsilon)$ lower bound on the size of aligned $\varepsilon$-emulators for one-hole instances. This lower bound generalizes the $\Omega(k/\varepsilon)$ lower bound of~\cite{KNZ14}, which holds for one-hole instances too, but only when the emulator is a minor of $G$ (and is thus clearly an aligned emulator). \begin{theorem} \label{thm: one-hole lower bound} For every $k\ge 2$ and $(4/k)<\varepsilon <1$, there is a one-hole instance $(G,T)$ with $|T|=k$, such that every aligned $\varepsilon$-emulator $(G',T)$ for $(G,T)$ must have size $\Omega(k/\varepsilon)$. \end{theorem} All emulators we consider are aligned and therefore we omit the word ``aligned'' from now on. We describe the algorithm and proof for Theorem~\ref{Th:emulator-1hole} in Section~\ref{sec: 1hole_algo}, with the help of the core decomposition lemma (cf.\ Lemma~\ref{lem: decomposing step}). The proof of Lemma~\ref{lem: decomposing step} itself is deferred to Section~\ref{sec: Proof of decomposing step}. 
The proof of Theorem~\ref{thm: one-hole lower bound} is provided in \Cref{apd: Proof of one-hole lower bound}, since it is not relevant to the proof of Theorem~\ref{thm:main}. \subsection{The Algorithm and its Analysis} \label{sec: 1hole_algo} Let $(G,T)$ be the input one-hole instance. The algorithm for Theorem~\ref{Th:emulator-1hole} consists of two stages. In the first stage, we iteratively decompose $(G,T)$ into smaller one-hole instances; in the second stage, we compute emulators for these small instances and then combine them into an emulator for $(G,T)$. Throughout the algorithm we maintain a collection \EMPH{${\mathcal{H}}$} of one-hole instances, initialized to ${\mathcal{H}}=\set{(G,T)}$. Set $\EMPH{$\lambda^*$} \coloneqq c^*\log^2 k/\varepsilon^{20}$, where $k \coloneqq |T|$ and $c^*>0$ is a large enough constant. In the first stage, we repeatedly replace a one-hole instance $(H,U)\in {\mathcal{H}}$ with $|U|>\lambda^*$ by the smaller one-hole instances obtained by applying the algorithm from Lemma~\ref{lem: decomposing step} to $(H,U)$, until every one-hole instance $(H,U)$ in ${\mathcal{H}}$ satisfies $|U|\le \lambda^*$. The core of our construction is the following lemma. \begin{lemma} \label{lem: decomposing step} Given a one-hole instance $(H,U)$ with $r \coloneqq |U|$, one can compute a collection of one-hole instances $\set{(H_1,U_1), \ldots, (H_s,U_s)}$, such that \begin{itemize} \item $U\subseteq \big(\bigcup_{1\le i\le s} U_i\big)$; \item $|U_i|\le 9r/10$ for each $1\le i\le s$; \item $\sum_{1\le i\le s} |U_i| \le O(r)$; and \item for any parameter $100 < \lambda \le \log^2 r$, $\sum_{i: |U_i|> \lambda} |U_i| \le r\cdot\big(1+O(1/\lambda)\big)$. \end{itemize} Moreover, given an $\varepsilon$-emulator $(Z_i,U_i)$ for each $(H_i,U_i)$, algorithm $\textsc{Combine}$ computes for $(H,U)$ an \smash{$\big(\varepsilon+O(\frac{\log^4 r}{r^{0.1}})\big)$}-emulator $(Z,U)$ of size \( |V(Z)|\le \sum_{1\le i\le s} |V(Z_i)|. 
\) The running time of both algorithms is at most $O\big( (|V(H)|+r^2)\cdot\log r \cdot \log |V(H)| \big)$. \end{lemma} We prove this lemma in Section~\ref{sec: Proof of decomposing step}, and in the remainder of this subsection we use it to complete the proof of Theorem~\ref{Th:emulator-1hole}. \bigskip We associate with the decomposition process a \EMPH{partitioning tree~$\tau$}. Its nodes are all the one-hole instances that ever appear in the collection ${\mathcal{H}}$. Its root node is the initial one-hole instance $(G,T)$, and every tree node $(H,U)$ has child nodes corresponding to the new instances $(H_1,U_1),\ldots,(H_s,U_s)$ generated by Lemma~\ref{lem: decomposing step}. The leaves of $\tau$ are the instances that remain in ${\mathcal{H}}$ at the end of the first stage. (To avoid ambiguity, we refer to elements of $V(\tau)$ as \emph{nodes} and elements of $V(H)$ as \emph{vertices}.) We now describe the second stage of the algorithm. For each one-hole instance $(H,U)$ in ${\mathcal{H}}$ at the end of the first stage, we compute a $0$-emulator $(Z,U)$ for $(H,U)$ using the algorithm from Theorem~\ref{thm: quartergrid enumator}.% \footnote{This step can use any $0$-emulator that has size $\textnormal{poly} k$ and can be constructed in time $\tilde{O}(n+\textnormal{poly} k)$, and we conveniently use Theorem~\ref{thm: quartergrid enumator}. } We then process the non-leaf nodes of $\tau$ in a bottom-up fashion: given a non-leaf node $(H,U)$ with children $(H_1,U_1),\ldots,(H_s,U_s)$, let $(Z_i,U_i)$ be the emulator computed for $(H_i,U_i)$ by induction. We apply algorithm $\textsc{Combine}$ from Lemma~\ref{lem: decomposing step} to the emulators $(Z_1,U_1),\ldots,(Z_s,U_s)$ to obtain an emulator $(Z,U)$ for the instance $(H,U)$. After all nodes of $\tau$ have been processed, we output the emulator $(G',T)$ constructed for the root node $(G,T)$. 
We proceed to show that the instance $(G',T)$ computed by the above algorithm satisfies all the properties required in Theorem~\ref{Th:emulator-1hole}. \paragraph{Size Bound.} We first show that $|V(G')|=\tilde O(k/\varepsilon^{O(1)})$. We denote by ${\mathcal{H}}$ the collection obtained at the end of the first stage. Note that $|V(G')| \le \sum_{(H,U)\in {\mathcal{H}}} O(|U|^2) \le O(\max_{(H,U)\in {\mathcal{H}}} |U|) \cdot \sum_{(H,U)\in {\mathcal{H}}} |U|$. As $\max_{(H,U)\in {\mathcal{H}}} {|U|}\le \lambda^*= O(\log^2 k/\varepsilon^{O(1)})$, it now suffices to bound the total number of terminals in all resulting one-hole instances in ${\mathcal{H}}$ by $\tilde O(k/\varepsilon^{O(1)})$, which we do next via a charging scheme. Let $(H,U)$ be a node in $\tau$ with children $(H_1,U_1),\ldots,(H_s,U_s)$. \begin{itemize}\itemsep=0pt \item For instances $(H_i,U_i)$ with $|U_i|\le \lambda^*$ (which will all be in ${\mathcal{H}}$ at the end of the first stage), charge every vertex in $U_i$ to vertices in $U$. Since $\sum_i |U_i|\le O(|U|)$, each vertex of $U$ gets a charge of $\smash{O(1)}$ this way. We call these charges \EMPH{inactive}. \item For instances $(H_i,U_i)$ with $|U_i|> \lambda^*$, let $U'$ be the set of all new vertices, i.e., vertices that appear in some set $U_i$ but not in $U$; we have $|U'|\le O(|U|/\log^2 |U|)$ by Lemma~\ref{lem: decomposing step}. Charge every vertex in $U'$ uniformly to vertices in $U$, so each vertex gets $O(1/\log^2 |U|)$ charge. We call these charges \EMPH{active}. \end{itemize} The total inactive charge on each vertex of $T$ is $O(\log k)$ because $\tau$ has height $O(\log k)$. As for the total active charge to each vertex in $T$, a quick calculation shows that it is at most $O(1/(\log_{(10/9)} \lambda -1))\le 1/2$. (For a complete proof see Appendix~\ref{apd: calculations}.) Note that this only accounts for the \EMPH{direct} active charge. 
For example, a terminal that does not belong to the initial one-hole instance $(G,T)$, and was first actively charged to the terminals in $T$, can in turn receive active charge from other terminals later. We call such charge \EMPH{indirect} active charge. The total direct and indirect active charge for each terminal in $T$ is at most $1/2+(1/2)^2+\cdots \le 1$. Altogether, each terminal in $T$ is charged $O(\log k)$. Therefore, the total number of terminals in all resulting instances in ${\mathcal{H}}$ is bounded by $O(k\log k)$, which, combined with the previous discussion, implies that $|V(G')|\le \tilde O(k/\varepsilon^{O(1)})$. \paragraph{Correctness.} It remains to show that $(G',T)$ is an $\varepsilon$-emulator for $(G,T)$. Recall that we have associated a partitioning tree $\tau$ with the first stage of the algorithm. We now define, for each tree node ${(H,U)}$, a value \EMPH{$\varepsilon_{(H,U)}$} as follows. If ${(H,U)}$ is a leaf node, we define $\varepsilon_{(H,U)} \coloneqq 0$. Otherwise, ${(H,U)}$ is a non-leaf node; let its child nodes in $\tau$ be ${(H_1,U_1)},\ldots,{(H_s,U_s)}$. Denote $\EMPH{$r$} \coloneqq |U|$, and let $c>0$ be a constant that is greater than the constants hidden in all big-O notations in Lemma~\ref{lem: decomposing step} and satisfies $c<(c^*)^{1/20}$. We define \[ \varepsilon_{(H,U)} \coloneqq \frac{c\log^4 r}{r^{0.1}}+\max\set{\varepsilon_{(H_1,U_1)},\ldots,\varepsilon_{(H_s,U_s)}}. \] From the properties of the algorithm $\textsc{Combine}$, it is easy to verify that for each node ${(H,U)}$ in $\tau$, the one-hole instance $(Z,U)$ we construct is an $\varepsilon_{(H,U)}$-emulator for $(H,U)$. We now show that $\varepsilon_{(G,T)}\le \varepsilon$. Observe that there are integers $r_1,\ldots,r_t$ with $r_1\le k$ and $r_t\ge \lambda^*$, such that $r_i\ge (10/9)\cdot r_{i+1}$ for each $1\le i\le t-1$, and $\varepsilon_{(G,T)}=\sum_{1\le i\le t}c\log^4 r_i/(r_i^{0.1})$.
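As a small sanity check that the recursive definition of $\varepsilon_{(H,U)}$ unwinds into exactly such a sum along a root-to-leaf path (all constants below are made up for illustration; they are not the paper's $c$, $k$, or $\lambda^*$):

```python
# Unwind the recursion eps_{(H,U)} = c*log^4(r)/r^0.1 + max(children)
# along a path on which the terminal count r shrinks by a (10/9)-factor
# per level, and check it matches the closed-form sum.
# All numeric values here are hypothetical placeholders.
import math

c = 1.0  # placeholder for the constant c

def eps_term(r):
    return c * math.log(r, 2) ** 4 / r ** 0.1

def eps_rec(r, lam_star):
    if r < lam_star:
        return 0.0  # leaves are exact 0-emulators
    return eps_term(r) + eps_rec(r / (10 / 9), lam_star)

# The same sequence of terminal counts, collected explicitly.
rs, r = [], 1000.0
while r >= 50:
    rs.append(r)
    r /= 10 / 9

assert abs(eps_rec(1000.0, 50) - sum(eps_term(x) for x in rs)) < 1e-6
```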
A quick calculation gives us $\varepsilon_{(G,T)}\le c\cdot (\log \lambda^*)^{4}/(\lambda^*)^{0.1}$. (For a complete proof see Appendix~\ref{apd: calculations}.) Since $c$ is a constant and $\lambda^*=c^*/\varepsilon^{20}$, where $c^*>c^{20}$ is large enough, we get $c\cdot (\log \lambda^*)^{4}/(\lambda^*)^{0.1}<\varepsilon$, and therefore $(G',T)$ is an $\varepsilon$-emulator for $(G,T)$. \paragraph{Running Time.} Every time we apply the algorithm from Lemma~\ref{lem: decomposing step} to split some instance $(H,U)\in {\mathcal{H}}$ with $n' \coloneqq |H|$ and $r \coloneqq |U|$, the running time is $O\big((n'+r^2)\log r\log n'\big)$. We charge its running time (and also the time for $\textsc{Combine}$) to vertices in $H$ as follows: \begin{itemize} \item charge the $O(n' \log r \log n')$ term uniformly to vertices in $H$ (each gets $O(\log k \log n)$ charge); \item charge the $O(r^2\log r \log n')$ term uniformly to terminals in $U$ (each gets $O(k\log k \log n)$ charge). \end{itemize} Since the depth of the partitioning tree $\tau$ is at most $O(\log k)$, each non-terminal vertex in $G$ gets in total $O(\log^2 k\log n)$ charge, and each terminal in the resulting collection ${\mathcal{H}}$ at the end of the first stage gets in total $O(k\log^2 k\log n)$ charge. Therefore, the total running time of the algorithm is \[ O(\log^2 k \log n)\cdot n + O(k\log^2 k\log n)\cdot \tilde{O}(k/\varepsilon^{O(1)}) = \tilde{O}\big( (n+k^2) / \varepsilon^{O(1)} \big). \] \section{Construct Emulator using $\textsc{Split}$ and $\textsc{Glue}$: Proof of Lemma~\ref{lem: decomposing step}} \label{sec: Proof of decomposing step} In this section we provide the proof of Lemma~\ref{lem: decomposing step}. We first introduce the basic graph operations $\textsc{Split}$ and $\textsc{Glue}$ in Section~\ref{SS:split-and-glue}. We then describe the algorithm and its analysis.
\subsection{Splitting and Gluing} \label{SS:split-and-glue} In this subsection we introduce the building blocks for the divide-and-conquer: procedures $\textsc{Split}$ and $\textsc{Glue}$. We will decompose a single one-hole instance $(H,U)$ into many small one-hole instances using procedure $\textsc{Split}$, compute emulators for each of them, and then glue the collection of small emulators together into an emulator for $(H,U)$ using procedure $\textsc{Glue}$. We now introduce the procedures in more detail. \paragraph{Splitting.} The input to procedure \EMPH{$\textsc{Split}$} consists of \begin{itemize} \item a one-hole instance $(H,U)$; \item a non-crossing set ${\mathcal{P}}$ of shortest paths in $H$ connecting pairs of terminals in $U$; and \item a subset $Y$ of vertices on the union of shortest paths in ${\mathcal{P}}$; set $Y$ must contain all endpoints of paths in ${\mathcal{P}}$ and all vertices with degree at least three in the graph $\bigcup_{P\in {\mathcal{P}}} P$ (we call them \EMPH{branch vertices}). \end{itemize} The output of procedure $\textsc{Split}$ is a collection of one-hole instances constructed as follows. Consider a plane embedding of $H$ where all the terminals in $U$ lie on the outerface of $H$. We \emph{slice}% \footnote{The slicing operation, which can be traced back to Reif~\cite{rei-mscpu-1981} (when describing the minimum-cut algorithm by Itai-Shiloach~\cite{is-mfpn-1979}), is sometimes referred to as \emph{cutting}~\cite{efn-gmcse-2012} or \emph{incision}~\cite{mnnw-mcdpg-2018} in the literature.} $H$ open along each path $P$ in ${\mathcal{P}}$ by duplicating every vertex and edge of $P$ to create another path $P'$ identical to $P$. The set of edges incident to each vertex on $P$ is split into two sides naturally, based on their cyclic order around the vertex. We index the collection of subgraphs of $H$ obtained by slicing $H$ along ${\mathcal{P}}$ by \EMPH{${\mathcal{R}}$}.
Let $R$ be an index in ${\mathcal{R}}$ that corresponds to the subgraph $H_R$. The plane embedding of $H$ naturally induces a plane embedding of $H_R$. Define \EMPH{$U_R$} to be the set of all vertices of $H_R$ that are either terminals in $H_R\setminus P$ or vertices in $Y$. All vertices of $U_R$ appear on the outerface of $H_R$, and so $(H_R,U_R)$ is a one-hole instance. The output of procedure $\textsc{Split}$ is simply the collection $\set{(H_R,U_R)\mid R\in {\mathcal{R}}}$ that contains, for each subgraph $H_R$ obtained by slicing $H$, a one-hole instance defined in the above way. See \Cref{fig: cut_noncrossing} for an illustration. Note that each vertex $y\in Y$ may now belong to multiple instances in ${\mathcal{H}}$. We call them \emph{copies} of~$y$. \begin{figure}[h] \centering \subfigure{\scalebox{0.55}{\includegraphics[scale=0.2]{Fig/cut_noncrossing_1.jpg}}} \hspace{0.7cm} \subfigure{\scalebox{0.4}{\includegraphics[scale=0.4]{Fig/cut_noncrossing_2.jpg}}} \caption{An illustration of splitting a one-hole instance along a path set ${\mathcal{P}}$. \textit{Left}: Graph $H$, together with terminals in set $U$ (in blue), paths in set ${\mathcal{P}}$ (in different colors), and vertices of $Y$ (red boxes). \textit{Right}: An output instance (corresponding to the bottom-left region of $H$) produced by procedure $\textsc{Split}$.} \label{fig: cut_noncrossing} \end{figure} \paragraph{Gluing.} We now describe procedure \EMPH{$\textsc{Glue}$}. Assume that we have applied procedure $\textsc{Split}$ to a one-hole instance $(H,U)$, a non-crossing set ${\mathcal{P}}$ of shortest paths, and a vertex subset $Y$ to obtain a collection ${\mathcal{H}} = \set{(H_R,U_R)\mid R\in {\mathcal{R}}}$ of one-hole instances. The input to procedure \emph{$\textsc{Glue}$} consists of \begin{itemize} \item one emulator $(Z_R,U_R)$ for each one-hole instance $(H_R,U_R)$ in ${\mathcal{H}}$; and \item the same vertex subset $Y$ given as the input to procedure $\textsc{Split}$.
\end{itemize} The output of procedure $\textsc{Glue}$ is an emulator $(Z,U)$ for $(H,U)$, which is constructed as follows. Graph~$Z$ is obtained by taking the union of all graphs in $\set{Z_R\mid R\in {\mathcal{R}}}$, and identifying, for each vertex $y\in Y$, all copies of $y$. Graph $Z$ is naturally a plane graph by inheriting the embeddings of all $Z_R$s. (See \Cref{fig: glue} for an illustration.) By the assumption that $Y$ contains all the endpoints of paths in ${\mathcal{P}}$, every vertex in $U$ appears exactly once on the outerface of $Z$. Therefore, $(Z,U)$ is a one-hole instance. Moreover, it is easy to observe that $|V(Z)|\le \sum_{R\in {\mathcal{R}}}|V(Z_R)|$. \begin{figure}[h] \centering \includegraphics[scale=0.12]{Fig/cut_noncrossing_3.jpg} \caption{An illustration of gluing one-hole instances at outer-boundaries. Identified vertices of $U$ are shown in blue, and identified vertices of $Y\setminus U$ are shown in red boxes. } \label{fig: glue} \end{figure} One can verify that both procedures $\textsc{Split}$ and $\textsc{Glue}$ can be implemented in $O(|V(H)|)$ time. We now summarize the behavior of these procedures in the following claims. The proofs of Claim~\ref{clm: branch pts} and Claim~\ref{clm: glueset_emulators} are deferred to Appendices~\ref{apd: Proof of branch pts} and~\ref{apd: Proof of glueset_emulators}, respectively.
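The following minimal sketch models $\textsc{Glue}$ at an assumption level (it is not the linear-time plane-graph implementation): each piece is a weighted edge list in which all copies of a vertex $y\in Y$ reuse the same name, so taking the union of the edge lists identifies the copies automatically, and distances in the glued graph can then be checked with a plain Dijkstra.

```python
# Assumption-level model of Glue: pieces are weighted edge lists; copies
# of a vertex y in Y share a name, so a union of edge lists glues them.
import heapq
from collections import defaultdict

def glue(pieces):
    graph = defaultdict(dict)
    for edges in pieces:
        for u, v, w in edges:
            # keep the lighter edge if both copies of a sliced edge survive
            old = graph[u].get(v, float("inf"))
            graph[u][v] = graph[v][u] = min(old, w)
    return graph

def dist(graph, s, t):
    # plain Dijkstra on the glued graph
    best = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > best.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < best.get(v, float("inf")):
                best[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

# Two pieces sharing the sliced path a-y-b (with y in Y):
piece1 = [("u1", "a", 1.0), ("a", "y", 1.0), ("y", "b", 1.0)]
piece2 = [("a", "y", 1.0), ("y", "b", 1.0), ("b", "u2", 1.0)]
glued = glue([piece1, piece2])
assert dist(glued, "u1", "u2") == 4.0  # path u1-a-y-b-u2 across both pieces
```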
\begin{claim} \label{clm: branch pts} Let ${\mathcal{H}}$ be the output of procedure $\textsc{Split}$ applied to a valid input $((H,U),{\mathcal{P}},Y)$. Then \begin{enumerate} \item the number of branch vertices is at most $O(|U|)$; and \label{p1} \item if we denote by $Y^*$ the subset of all branch vertices in $Y$, then for every parameter $\lambda \ge 100$,\\ $\sum_{(H_R,U_R)\in {\mathcal{H}}:\text{ } |U_R|\ge \lambda}|U_R|\le |U|\cdot\big(1+O(1/\lambda)\big)+O(|Y\setminus Y^*|).$ \label{p2} \end{enumerate} \end{claim} \begin{claim} \label{clm: glueset_emulators} Let ${\mathcal{H}}$ be the output collection of procedure $\textsc{Split}$ when applied to a valid input $((H,U),{\mathcal{P}},Y)$, and let $(\hat H, U)$ be the output of procedure $\textsc{Glue}$ when applied to the collection ${\mathcal{H}}$ and set $Y$. For each instance $(H_R,U_R)\in {\mathcal{H}}$, let $(Z_R,U_R)$ be an $\varepsilon$-emulator for $(H_R,U_R)$, and let $(Z,U)$ be the output of procedure $\textsc{Glue}$ when applied to the collection $\set{(Z_R,U_R)}_R$ and set $Y$. Then $(Z,U)$ is an $\varepsilon$-emulator for $(\hat H,U)$. \end{claim} \subsection{Remove All Cut Vertices in $U$} \label{subsec: Remove All Cut Vertices} Before we proceed with the main ingredient of the proof of Lemma~\ref{lem: decomposing step}, we first describe a reduction on the input instance $(H,U)$ that ensures no vertex in $U$ is a cut vertex of graph $H$. The impatient reader may skip ahead to Section~\ref{SSS:small-spread}. We first compute the set \EMPH{$U'$} of all cut vertices of $H$ in $U$, and along the way the maximal 2-vertex-connected subgraphs $\hat H_1,\ldots, \hat H_t$ of $H$ that each contain at least two terminals of $U$. For each $i \in \set{1, \dots, t}$, we denote $\hat U_i \coloneqq U\cap V(\hat H_i)$, so $(\hat H_i, \hat U_i)$ is a one-hole instance.
Moreover, from Claim~\ref{clm: glueset_emulators}, if we are given an $\varepsilon$-emulator for instance $(\hat H_i, \hat U_i)$ for each $i$, then by simply gluing them at the terminals in $U'$ we obtain an $\varepsilon$-emulator for instance $(H,U)$. We use the following claim in order to bound $\sum_{1\le i\le t}|\hat U_i|$ and $\sum_{|\hat U_i|\ge \lambda}|\hat U_i|$. \begin{claim} \label{clm: removing separators} $\sum_{1\le i\le t}|\hat U_i|\le O(|U|)$, and $\sum_{|\hat U_i|\ge \lambda}|\hat U_i|\le |U| \cdot (1+O(1/\lambda))$. \end{claim} \begin{proof} Recall that $r \coloneqq |U|$. Consider the following tree \EMPH{$\tau'$}: the node set of $\tau'$ is $U'\cup V'$, where $V' \coloneqq \set{v_i\mid 1\le i\le t}$, and the edge set of $\tau'$ contains, for each $1\le i\le t$ and each node $u'\in U'$, an edge $(u',v_i)$ if $u'\in \hat U_i$. Since the vertices of $U'$ are cut vertices of $H$, it is easy to verify that the graph $\tau'$ constructed above is a tree, and moreover, all leaves of $\tau'$ lie in $V'$. We partition set $V'$ into three subsets: \EMPH{$V'_1$} contains all leaf nodes of $\tau'$, \EMPH{$V'_2$} contains all nodes of degree $2$ in $\tau'$, and \EMPH{$\smash{V'_{\ge 3}}$} contains all nodes of degree at least $3$ in $\tau'$. Observe that, for each node $v_i\in V'_1$, since $|\hat U_i|\ge 2$, at least one terminal in $\hat U_i$ does not belong to any other set in $\set{\hat U_1,\ldots,\hat U_t}$. Therefore, $|V'_1|\le r$. Since $\tau'$ is a tree, $|V'_{\ge 3}|\le |V'_1| \le r$. Since both neighbors of every node in $V'_2$ lie in $U'$, we get that $|V'_2|\le |U'|\le r$. Altogether, $|V(\tau')|\le O(r)$. Note that for every terminal $u'\in U'$, the number of sets $\hat U_i$ that contain $u'$ is exactly $\deg_{\tau'}(u')$. Therefore, \[ \sum_{1\le i\le t}|\hat U_i|\le |U\setminus U'|+\sum_{u'\in U'}\deg_{\tau'}(u')\le |U|+O(|V(\tau')|)=O(r). \] We now upper bound $\sum_{|\hat U_i|\ge \lambda}|\hat U_i|$ via a charging scheme.
We root the tree $\tau'$ at an arbitrary node of $V'$, and process the nodes in $U'$ one-by-one as follows. Consider a node $u'\in U'$ such that all its child nodes are leaves in $\tau'$, and denote by $v_1,\ldots,v_s$ the child nodes of $u'$. For each $1\le i\le s$, if $|\hat U_i|\ge \lambda$, we charge $u'$ (as one unit) uniformly to the vertices of $\hat U_i\setminus \set{u'}$, so each terminal in $\hat U_i\setminus \set{u'}$ is charged at most $2/\lambda$ units. We delete nodes $u'$ and $v_1,\ldots,v_s$ from $\tau'$ and recurse on the remaining tree, until the tree contains no nodes of $U'$. It is easy to observe that the value of $\smash{\sum_{|\hat U_i|\ge \lambda} |\hat U_i|}$ is at most $r$ plus the total charge. We now show that the total charge on each terminal is $O({1}/\lambda)$. Indeed, every terminal in $U$ is directly charged at most $2/\lambda$. Note that it is possible that some terminal in $U'$ was first charged to some other terminals in $U'$, and was later (indirectly) charged by other terminals in $U'$. It is easy to observe that the total direct and indirect charge is bounded by $2/\lambda+(2/\lambda)^2+\cdots\le 4/\lambda$. Therefore, $\sum_{|\hat U_i|\ge \lambda}|\hat U_i|\le r\cdot (1+O(1/\lambda))$. \end{proof} Note that we can simply return the collection $\set{(\hat H_i,\hat U_i)\mid 1\le i\le t}$ of one-hole instances as the output, and it is easy to verify from the algorithm and Claim~\ref{clm: removing separators} that this output satisfies all properties required in Lemma~\ref{lem: decomposing step} (with the algorithm $\textsc{Combine}$ being simply the procedure $\textsc{Glue}$), unless some set $\hat U_i$ contains more than $(9/10)r$ terminals. However, by Claim~\ref{clm: removing separators}, there is at most one such large instance. Assume without loss of generality that $(\hat H_1,\hat U_1)$ is the unique large instance.
We claim that, if Lemma~\ref{lem: decomposing step} holds for instance $(\hat H_1,\hat U_1)$, then Lemma~\ref{lem: decomposing step} holds for the input instance $(H,U)$. Indeed, we apply the algorithm from Lemma~\ref{lem: decomposing step} to instance $(\hat H_1,\hat U_1)$ to obtain a collection $\tilde{\mathcal{H}}'$, and we can simply return the collection $\tilde {\mathcal{H}} \coloneqq \tilde{\mathcal{H}}'\cup \set{(\hat H_i,\hat U_i)\mid 2\le i\le t}$. It is easy to verify from the above discussion that all conditions of Lemma~\ref{lem: decomposing step} hold for the collection $\tilde {\mathcal{H}}$ as an output for the original instance $(H,U)$. From now on we focus on proving Lemma~\ref{lem: decomposing step} for the unique large instance $(\hat H_1,\hat U_1)$. For convenience, we rename this large instance as $(H,U)$, denote $r \coloneqq |U|$, and treat it as the original input instance. By our algorithm, no vertex in $U$ is a cut vertex of graph $H$, so if we traverse the outerface of $H$, every terminal of $U$ appears exactly once. \subsection{The Small Spread Case} \label{SSS:small-spread} Let $(H,U)$ be a one-hole instance. The \EMPH{spread}\footnote{sometimes also referred to as \emph{aspect ratio}} of the instance $(H,U)$ is defined to be \[ \EMPH{$\Phi(H,U)$} \coloneqq \frac{\max_{u,u'\in U}\textnormal{\textsf{dist}}_H(u,u')}{\min_{u,u'\in U}\textnormal{\textsf{dist}}_H(u,u')}. \] For convenience, we denote $\EMPH{$\Phi$} \coloneqq \Phi(H,U)$. We distinguish between the following two cases, depending on whether $\Phi$ is small or large. In this subsection we assume \EMPH{$\Phi\le 2^{r^{0.9}\log^{2} r}$}; the large spread case is discussed in Section~\ref{SSS:large-spread}. We will employ the procedure $\textsc{Split}$ in order to decompose the one-hole instance $(H,U)$ into smaller instances.
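Computing the spread is immediate once the pairwise terminal distances are available; the toy sketch below takes the distances as an explicit table (in the paper they are shortest-path distances $\textnormal{\textsf{dist}}_H$, not a given table).

```python
# Spread (aspect ratio) of a set of terminals, given a toy symmetric
# distance table keyed by unordered pairs; stands in for dist_H.
from itertools import combinations

def spread(terminals, dist):
    pair_dists = [dist[frozenset(p)] for p in combinations(terminals, 2)]
    return max(pair_dists) / min(pair_dists)

d = {frozenset({"a", "b"}): 1.0,
     frozenset({"b", "c"}): 8.0,
     frozenset({"a", "c"}): 7.5}
assert spread(["a", "b", "c"], d) == 8.0  # = max 8.0 / min 1.0
```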
Throughout this case, we use parameters \[ \EMPH{$L_r$} \coloneqq r/100\log^2 r \quad \text{and} \quad \EMPH{$\varepsilon_r$} \coloneqq \log \Phi/L_r, \] so $\varepsilon_r = O((\log r)^4/r^{0.1})$. \paragraph{Balanced terminal pairs.} Denote $U \coloneqq \set{u_1,\ldots,u_r}$, where the terminals are indexed according to the order in which they appear on the outerface. We say that a pair of terminals $(u_i,u_j)$ (with $i<j$) is a \EMPH{$c$-balanced pair} for some parameter $1/2<c<1$, if and only if $j-i\le c\cdot r$ and $i+r-j\le c\cdot r$. In other words, the terminals $u_i$ and $u_j$ separate the outer boundary into two segments, each contains at most $c$-fraction (and therefore at least $(1-c)$-fraction) of the terminals. We first compute the $(3/4)$-balanced pair $u,u'$ of terminals in $U$ that, among all $(3/4)$-balanced pairs of terminals in $U$, minimizes the distance between them in $H$. We compute the $u$-$u'$ shortest path $P$ in $H$. Let the set $Y$ contain the endpoints of $P$, together with the following vertices of $P$: for each $1\le i\le L_r$, \begin{enumerate} \item among all vertices $v$ of $P$ with $\textnormal{\textsf{dist}}_P(v,u)\le e^{i\varepsilon_r}$, the vertex that maximizes its distance to $u$; \item among all vertices $v$ of $P$ with $\textnormal{\textsf{dist}}_P(v,u)\ge e^{i\varepsilon_r}$, the vertex that minimizes its distance to $u$; \item among all vertices $v$ of $P$ with $\textnormal{\textsf{dist}}_P(v,u')\le e^{i\varepsilon_r}$, the vertex that maximizes its distance to $u'$; \item among all vertices $v$ of $P$ with $\textnormal{\textsf{dist}}_P(v,u')\ge e^{i\varepsilon_r}$, the vertex that minimizes its distance to $u'$. 
\end{enumerate} In other words, if we think of path $P$ as a line and mark, for each $1\le i\le L_r$, the point on the line at distance $e^{i\varepsilon_r}$ from $u$ and the point at distance $e^{i\varepsilon_r}$ from $u'$, then set $Y$ contains, for each marked point, the vertices of $P$ closest to it from both sides. By definition, $|Y|\le 4L_r$. We apply the procedure $\textsc{Split}$ to the one-hole instance $(H,U)$, the path set $\set{P}$, and the vertex set $Y$ defined above. Let $(H_1,U_1)$ and $(H_2,U_2)$ be the instances we obtain. We then simply return the collection $\set{(H_1,U_1),(H_2,U_2)}$ as the output of our algorithm. \paragraph{Analysis of the small spread case.} We now show that the output of the algorithm in this case satisfies the properties required in Lemma~\ref{lem: decomposing step}. First, from the definition of procedure $\textsc{Split}$, every terminal in $U$ continues to be a terminal in at least one instance in $\set{(H_1,U_1),(H_2,U_2)}$. Moreover, since the pair $(u,u')$ of terminals is $(3/4)$-balanced and $|Y| \le 4L_r = r/(25\log^2 r)$, we have $|U_1|\le (3/4)r+r/(25\log^2 r)\le (9/10)r$, and similarly $|U_2|\le (9/10)r$. Second, note that $|U_1|+|U_2|\le |U|+2|Y|\le r\cdot (1+O(L_r/r))=r\cdot (1+O(\frac{1}{\log^2 r}))=r\cdot (1+O(1/\lambda))$, as $\lambda\le \log^2 r$. We now construct an algorithm $\textsc{Combine}$ that satisfies the required properties. Let $(H'_1,U_1)$ be an $\varepsilon$-emulator for $(H_1,U_1)$ and let $(H'_2,U_2)$ be an $\varepsilon$-emulator for $(H_2,U_2)$. The algorithm $\textsc{Combine}$ simply applies the procedure $\textsc{Glue}$ to the collection $\set{(H'_1,U_1),(H'_2,U_2)}$ and set $Y$. Let $(H', U')$ be the one-hole instance that it outputs; it is easy to verify that $U'=U$. The algorithm $\textsc{Combine}$ then returns the instance $(H', U)$. It remains to show that the output of algorithm $\textsc{Combine}$ satisfies the required properties.
Note that the collection $\set{(H_1,U_1),(H_2,U_2)}$ and the set $Y$ also constitute a valid input for procedure $\textsc{Glue}$. Let $(\hat H, \hat U)$ be the instance output by $\textsc{Glue}$ when applied to $\set{(H_1,U_1),(H_2,U_2)}$ and $Y$. It is easy to verify that $\hat U=U$. We use the following claim. \begin{claim} \label{clm: ratio loss for contracting to portals} Instance $(\hat H, U)$ is a $(3\varepsilon_r)$-emulator for instance $(H, U)$. \end{claim} We provide the proof of Claim~\ref{clm: ratio loss for contracting to portals} right after we complete the analysis for the small spread case. From Claim \ref{clm: glueset_emulators}, $(H',U)$ is an $\varepsilon$-emulator for $(\hat H,U)$. From Claim~\ref{clm: ratio loss for contracting to portals}, instance $(\hat H, U)$ is a $(3\varepsilon_r)$-emulator for instance $(H, U)$. Altogether, $(H',U)$ is an $(\varepsilon+3\varepsilon_r)=(\varepsilon+O(\smash{\frac{\log^4 r}{r^{0.1}}}))$-emulator for $(H,U)$. This completes the proof of Lemma~\ref{lem: decomposing step} in the small spread case. \begin{proofof}{Claim~\ref{clm: ratio loss for contracting to portals}} We will show that, for each pair $u_1,u_2$ of terminals in $U$, $$\textnormal{\textsf{dist}}_{H}(u_1,u_2)\le \textnormal{\textsf{dist}}_{\hat H}(u_1,u_2)\le e^{3\varepsilon_r}\cdot\textnormal{\textsf{dist}}_{H}(u_1,u_2).$$ From the procedure $\textsc{Split}$, $H_1$ is the subgraph of $H$ whose image lies in the region surrounded by the image of $P$ and the segment of outer-boundary of $H$ from $u$ clockwise to $u'$ (including the boundary), and $H_2$ is the subgraph of $H$ whose image lies in the region surrounded by the image of $P$ and the segment of outer-boundary of $H$ from $u$ anti-clockwise to $u'$ (including the boundary), and path $P$ is entirely contained in both $H_1$ and $H_2$. We denote by $\hat H_1$ the copy of $H_1$ in graph $\hat H$, and we define graph $\hat H_2$ similarly, so $V(\hat H_1)\cap V(\hat H_2)=Y$. 
We denote by $P^1, P^2$ the copies of path $P$ in graphs $\hat H_1$ and $\hat H_2$, respectively. See \Cref{fig: proof of case 1} for an illustration. \begin{figure}[h] \centering \subfigure{\scalebox{0.45}{\includegraphics[scale=0.20]{Fig/split1hole_2.jpg}}} \hspace{0.3cm} \subfigure{\scalebox{0.45}{\includegraphics[scale=0.19]{Fig/glue_1.jpg}}} \caption{An illustration of graphs $\hat H$, $H_1$, and $H_2$. \textit{Left:} Graphs $H_1$ (top) and $H_2$ (bottom), viewed as individual graphs. \textit{Right:} Graph $\hat H$, obtained by gluing graphs $H_1$ and $H_2$. Vertices in $Y\setminus \set{u,u'}$ are shown in purple. } \label{fig: proof of case 1} \end{figure} We first show that for each pair $u_1,u_2\in U$, $\textnormal{\textsf{dist}}_{H}(u_1,u_2)\le \textnormal{\textsf{dist}}_{\hat H}(u_1,u_2)$. Consider a pair $u_1,u_2\in U$. Assume first that $u_1,u_2$ both belong to $H_1$ (the case where $u_1,u_2$ both belong to $H_2$ is symmetric). Clearly, in graph $\hat H$, there is a $u_1$-$u_2$ shortest path $Q$ that lies entirely in $\hat H_1$. From the construction of $\hat H$, the same path belongs to $H_1$, and therefore $\textnormal{\textsf{dist}}_H(u_1,u_2)\le \textnormal{\textsf{dist}}_{\hat H}(u_1,u_2)$. Assume now that $u_1\in V(H_1)\setminus \set{u,u'}$ and $u_2\in V(H_2)\setminus \set{u,u'}$ (the case where $u_2\in V(H_1)\setminus \set{u,u'}$ and $u_1\in V(H_2)\setminus \set{u,u'}$ is symmetric). It is easy to see that, in graph $\hat H$, there exists a $u_1$-$u_2$ shortest path that is the sequential concatenation of \begin{enumerate} \item a path $Q_1$ in $\hat H_1$ connecting $u_1$ to some vertex $x_1\in V(P^1)$, that is internally disjoint from $P^1$; \item a subpath $R^1$ of $P^1$ connecting $x_1$ to a vertex $y\in Y$; \item a subpath $R^2$ of $P^2$ connecting $y$ to a vertex $x_2$; and \item a path $Q_2$ in $\hat H_2$ connecting $x_2$ to $u_2$, that is internally disjoint from $P^2$.
\end{enumerate} Consider the path in $H$ formed by the sequential concatenation of (i) the copy of $Q_1$ in $H_1$; (ii) the subpath $R$ of $P$ connecting the copy of $x_1$ in $P$ to the copy of $x_2$ in $P$; and (iii) the copy of $Q_2$ in $H_2$. Clearly, this path connects $u_1$ to $u_2$ in $H$. Moreover, since the weight of $R$ is at most the total weight of paths $R^1$ and $R^2$, this path in $H$ has weight at most the weight of the $u_1$-$u_2$ shortest path in $\hat H$. Therefore, $\textnormal{\textsf{dist}}_H(u_1,u_2)\le \textnormal{\textsf{dist}}_{\hat H}(u_1,u_2)$. From now on we focus on showing that, for each pair $u_1,u_2\in U$, $\textnormal{\textsf{dist}}_{\hat H}(u_1,u_2)\le e^{3\varepsilon_r}\cdot\textnormal{\textsf{dist}}_{H}(u_1,u_2)$. Assume first that $u_1,u_2$ both belong to $H_1$ (the case where $u_1,u_2$ both belong to $H_2$ is symmetric). Similar to the previous discussion, the $u_1$-$u_2$ shortest path in $H$ is entirely contained in $H_1$, and so $\textnormal{\textsf{dist}}_{\hat H}(u_1,u_2)=\textnormal{\textsf{dist}}_{H}(u_1,u_2)$. Assume now that $u_1\in V(H_1)\setminus \set{u,u'}$ and $u_2\in V(H_2)\setminus \set{u,u'}$ (the case where $u_1\in V(H_2)\setminus \set{u,u'}$ and $u_2\in V(H_1)\setminus \set{u,u'}$ is symmetric). Let $Q$ be the $u_1$-$u_2$ shortest path in $H$. The intersection between $Q$ and $P$ is a subpath of $P$. Let $x_1,x_2$ be the endpoints of this subpath, so vertices $u_1,x_1,x_2,u_2$ appear on path $Q$ in this order. Let $Q_1$ denote the subpath of $Q$ between $u_1$ and $x_1$, $Q_2$ the subpath of $Q$ between $u_2$ and $x_2$, and $Q'$ the subpath of $Q$ between $x_1$ and $x_2$. We consider the following possibilities, depending on the locations of vertices $x_1,x_2$ and the vertices in $Y$. \bigskip \noindent\textit{Possibility 1. There is a vertex of $Y$ between $x_1$ and $x_2$.} Let $y$ be a vertex of $Y$ between vertices $x_1$ and $x_2$.
Consider the path $\hat Q$ of $\hat H$ formed by the sequential concatenation of (i) the copy of $Q_1$ in $\hat H_1$ connecting $u_1$ to the copy of $x_1$; (ii) the subpath $R^1$ of $P^1$ connecting the copy of $x_1$ to $y$; (iii) the subpath $R^2$ of $P^2$ connecting $y$ to the copy of $x_2$; and (iv) the copy of $Q_2$ in $\hat H_2$ connecting the copy of $x_2$ to $u_2$. Since vertex $y$ lies between $x_1$ and $x_2$ on path $P$, from the construction of $\hat H$, the path $\hat Q$ in $\hat H$ constructed above has weight at most the weight of $Q$ in $H$. Therefore, $\textnormal{\textsf{dist}}_{\hat H}(u_1,u_2)\le \textnormal{\textsf{dist}}_{H}(u_1,u_2)$. \bigskip \noindent\textit{Possibility 2. There is no vertex of $Y$ between $x_1$ and $x_2$.} Assume without loss of generality that $|V(H_1)\cap U|\ge |U|/2$, and that $x_1$ is closer to $u$ than to $u'$ in $P$. We use the following observation. \begin{observation} \label{obs: length to charge to} $\textnormal{\textsf{dist}}_H(x_1,u_1)\ge \textnormal{\textsf{dist}}_H(x_1,u)$. \end{observation} \begin{proof} Assume not; then \( \textnormal{\textsf{dist}}_H(u_1,u)\le \textnormal{\textsf{dist}}_H(x_1,u_1)+ \textnormal{\textsf{dist}}_H(x_1,u)< 2\cdot \textnormal{\textsf{dist}}_H(x_1,u)\le \textnormal{\textsf{dist}}_H(u,u')\text{, and} \) \( \textnormal{\textsf{dist}}_H(u_1,u')\le \textnormal{\textsf{dist}}_H(x_1,u_1)+ \textnormal{\textsf{dist}}_H(x_1,u')< \textnormal{\textsf{dist}}_H(x_1,u)+ \textnormal{\textsf{dist}}_H(x_1,u')\le \textnormal{\textsf{dist}}_H(u,u'). \) So both $\textnormal{\textsf{dist}}_H(u_1,u)$ and $\textnormal{\textsf{dist}}_H(u_1,u')$ are less than $\textnormal{\textsf{dist}}_H(u,u')$. However, since $|U|/2\le |V(H_1)\cap U|\le (3/4)\cdot|U|$, it is easy to verify that at least one of the pairs $(u_1,u)$, $(u_1,u')$ is $(3/4)$-balanced, a contradiction to the fact that $(u,u')$ is the closest $(3/4)$-balanced terminal pair in $H$. \end{proof} Think of path $P$ as a line connecting $u$ to $u'$.
We now mark, for each $1\le i\le L_r$, the point on the line at distance $e^{i\varepsilon_r}$ from $u$ and the point at distance $e^{i\varepsilon_r}$ from $u'$, and call these marked points \EMPH{landmarks}. It is easy to observe that there is no landmark between vertices $x_1$ and $x_2$: if there were a landmark between them, then, since set $Y$ contains, for every landmark, the vertices of $P$ closest to it from both sides, either $x_1$ or $x_2$ or some other vertex of $P$ lying between $x_1$ and $x_2$ would have been added to vertex set $Y$, a contradiction. Let $x$ be the landmark closest to $x_1$ that lies between $u$ and $x_1$, and assume $\textnormal{\textsf{dist}}_P(x,u)=e^{i\varepsilon_r}$. Let $y$ be the vertex of $Y$ closest to the landmark $x$ that lies between $x$ and $x_1$. From the construction of set $Y$, $e^{i\varepsilon_r}\le \textnormal{\textsf{dist}}_P(y,u)<\textnormal{\textsf{dist}}_P(x_1,u),\textnormal{\textsf{dist}}_P(x_2,u)<e^{(i+1)\varepsilon_r}$. Therefore, $\textnormal{\textsf{dist}}_P(x_1,y),\textnormal{\textsf{dist}}_P(x_2,y)\le (e^{\varepsilon_r}-1)\cdot e^{i\varepsilon_r}$. Consider now the $u_1$-$u_2$ path in $\hat H$ formed by the concatenation of (i) the copy of $Q_1$ in $\hat H_1$ connecting $u_1$ to the copy $x^1_1$ of $x_1$; (ii) the subpath of $P^1$ connecting $x^1_1$ to $y$; (iii) the subpath of $P^2$ connecting $y$ to the copy $x^2_2$ of $x_2$; and (iv) the copy of $Q_2$ in $\hat H_2$ connecting $x^2_2$ to $u_2$.
The total weight of this path is at most \[ \begin{split} & \textnormal{\textsf{dist}}_{\hat H_1}(u_1,x^1_1)+\textnormal{\textsf{dist}}_{\hat H_1}(x^1_1,y)+\textnormal{\textsf{dist}}_{\hat H_2}(x^2_2,y)+\textnormal{\textsf{dist}}_{\hat H_2}(u_2,x^2_2)\\ = \text{ } & \textnormal{\textsf{dist}}_{H}(u_1,x_1)+\textnormal{\textsf{dist}}_{P}(x_1,y)+\textnormal{\textsf{dist}}_{P}(x_2,y)+\textnormal{\textsf{dist}}_{H}(u_2,x_2)\\ = \text{ } & \textnormal{\textsf{dist}}_{H}(u_1,x_1)+\textnormal{\textsf{dist}}_{H}(u_2,x_2)+\textnormal{\textsf{dist}}_{P}(x_1,x_2)+\big(\textnormal{\textsf{dist}}_{P}(x_1,y)+\textnormal{\textsf{dist}}_{P}(x_2,y)-\textnormal{\textsf{dist}}_{P}(x_1,x_2)\big)\\ \le \text{ } & \textnormal{\textsf{dist}}_{H}(u_1,u_2)+ 2\cdot (e^{\varepsilon_r}-1)\cdot e^{i\varepsilon_r}\\ \le \text{ } & \textnormal{\textsf{dist}}_{H}(u_1,u_2)+ 2\cdot (e^{\varepsilon_r}-1)\cdot \textnormal{\textsf{dist}}_{H}(u,x_1)\\ \le \text{ } & \textnormal{\textsf{dist}}_{H}(u_1,u_2)+ 2\cdot (e^{\varepsilon_r}-1)\cdot \textnormal{\textsf{dist}}_{H}(u_1,x_1) \quad\quad(\textnormal{from Observation~\ref{obs: length to charge to}})\\ \le \text{ } & e^{3\varepsilon_r}\cdot\textnormal{\textsf{dist}}_{H}(u_1,u_2). \end{split} \] Therefore, $\textnormal{\textsf{dist}}_{\hat H}(u_1,u_2)\le e^{3\varepsilon_r}\cdot\textnormal{\textsf{dist}}_{H}(u_1,u_2)$. This completes the proof of Claim~\ref{clm: ratio loss for contracting to portals}. \needqedtrue \end{proofof} \subsection{The Large Spread Case} \label{SSS:large-spread} Now we assume \emph{$\Phi > 2^{r^{0.9}\log^2 r}$}. Without loss of generality, we assume that $\min_{u,u'\in U}\textnormal{\textsf{dist}}_H(u,u')=1$ and $\max_{u,u'\in U}\textnormal{\textsf{dist}}_H(u,u')=\Phi$. In the algorithm for this case, we use the following parameters: \[ \EMPH{$\mu$} = r^2, \quad \EMPH{$L$} = \ceil{\log_{\mu} \Phi}, \quad \EMPH{$\varepsilon_r$} = \frac{\log^4 r}{r^{0.1}}, \quad \EMPH{$\varepsilon'_r$} = \frac{1}{r^{0.7}}. 
We first compute a hierarchical partitioning $({\mathcal{S}}_0,{\mathcal{S}}_1,\ldots,{\mathcal{S}}_L)$ of the terminals in $U$ in a bottom-up fashion, proceeding in $L$ iterations; in the $i$th iteration, we compute a collection ${\mathcal{S}}_i$ of subsets of $U$ that partition $U$. \begin{itemize} \item We start by letting collection \EMPH{${\mathcal{S}}_0$} contain, for each terminal $u\in U$, a singleton set $\set{u}$. That is, ${\mathcal{S}}_0 \coloneqq \set{\set{u}\mid u\in U}$. \item Consider an index $1\le i\le L$, and assume that we have already computed the collection ${\mathcal{S}}_{i-1}$ of subsets; we now describe the computation of collection ${\mathcal{S}}_{i}$. First, let graph $W_{i-1}$ be obtained from $H$ by contracting each subset $S\in {\mathcal{S}}_{i-1}$ into a single \EMPH{supernode}, which we denote by \EMPH{$v_S$}, and define $\EMPH{$V_{i-1}$} \coloneqq \set{v_S\mid S\in {\mathcal{S}}_{i-1}}$. Recall that $H$ is an edge-weighted graph; we let every edge of $W_{i-1}$ have the same weight as the corresponding edge in $H$. Then we construct another auxiliary graph \EMPH{$R_{i-1}$} as follows. Its vertex set is $V_{i-1}$, and it contains an edge connecting $v_S$ to $v_{S'}$ if $\textnormal{\textsf{dist}}_{W_{i-1}}(v_S,v_{S'})\le \mu^i$, or equivalently $\textnormal{\textsf{dist}}_{H}(S,S')\le \mu^i$. Finally, we define \EMPH{${\mathcal{S}}_i$} to be the collection that contains, for each connected component $C$ of graph $R_{i-1}$, the set $\bigcup_{v_S\in V(C)}S$. It is easy to verify that the sets in ${\mathcal{S}}_i$ partition $U$. \end{itemize} This completes the description of the hierarchical partitioning $({\mathcal{S}}_0,{\mathcal{S}}_1,\ldots,{\mathcal{S}}_L)$. Clearly, collection ${\mathcal{S}}_L$ contains the single set $U$. We denote $\EMPH{${\mathcal{S}}$} \coloneqq \bigcup_{0\le i\le L}{\mathcal{S}}_i$, so collection ${\mathcal{S}}$ is a laminar family.
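The bottom-up construction can be sketched as follows (a toy model: terminal distances come from a user-supplied function rather than from the contracted graphs $W_{i-1}$, and merging uses the equivalent condition $\textnormal{\textsf{dist}}_{H}(S,S')\le \mu^i$; connected components of the auxiliary graph $R_{i-1}$ are computed with a small union-find).

```python
# Toy sketch of the hierarchical partitioning (S_0,...,S_L): at level i,
# sets of level i-1 are merged along the connected components of the
# auxiliary graph whose edges join sets at distance <= mu^i.
def hierarchy(terminals, dist, mu, L):
    levels = [[{t} for t in terminals]]          # S_0: singletons
    for i in range(1, L + 1):
        sets = [set(s) for s in levels[-1]]
        parent = list(range(len(sets)))          # union-find over sets

        def find(x):
            while parent[x] != x:
                x = parent[x]
            return x

        # edge (S, S') in the auxiliary graph iff dist_H(S, S') <= mu^i
        for a in range(len(sets)):
            for b in range(a + 1, len(sets)):
                if min(dist(u, v) for u in sets[a] for v in sets[b]) <= mu ** i:
                    parent[find(a)] = find(b)

        merged = {}
        for j, s in enumerate(sets):
            merged.setdefault(find(j), set()).update(s)
        levels.append(list(merged.values()))
    return levels

# terminals placed on a line; distance is the coordinate gap
levels = hierarchy([0, 1, 100, 101, 10**4], lambda u, v: abs(u - v), mu=10, L=2)
assert sorted(len(s) for s in levels[1]) == [1, 2, 2]   # threshold mu^1 = 10
assert sorted(len(s) for s in levels[2]) == [1, 4]      # threshold mu^2 = 100
```

Because each level only merges sets of the previous level, every set of the output is a union of lower-level sets, which is exactly the laminar-family property stated above.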
That is, for every pair $S,S'\in {\mathcal{S}}$, either $S\cap S'=\varnothing$, or $S\subseteq S'$, or $S'\subseteq S$.
\begin{observation}
\label{obs: diameter}
For each set $S$ in collection ${\mathcal{S}}_i$, $\textnormal{\textsf{diam}}_H(S)\le 2r\cdot\mu^{i}$.
\end{observation}
\begin{proof}
We prove the observation by induction on $i$. The base case is when $i=0$. From the construction, the collection ${\mathcal{S}}_0$ contains only single-vertex sets, so the diameter of each such set is at most $0\le 2r\cdot\mu^0$. Assume that the observation holds for $0,1,\ldots,i-1$. Consider now a set $\hat S\in {\mathcal{S}}_i$. From the construction, it is the union of a collection of sets in ${\mathcal{S}}_{i-1}$. Consider any pair $u,u'$ of vertices in $\hat S$. If they belong to the same set in ${\mathcal{S}}_{i-1}$, then from the induction hypothesis, $\textnormal{\textsf{dist}}_H(u,u')\le 2r\cdot \mu^{i-1}\le 2r\cdot \mu^{i}$. Assume now that $u\in S$ and $u'\in S'$ where $S, S'$ are distinct sets in ${\mathcal{S}}_{i-1}$. Since supernodes $v_{S}$ and $v_{S'}$ lie in the same connected component of graph $R_{i-1}$, there exists a path connecting $v_{S}$ to $v_{S'}$ in $R_{i-1}$, and we denote it by $(v_{S},v_{S_{1}},\ldots,v_{S_{b}},v_{S'})$, where $b\le r-2$ (since the number of supernodes is at most $r$). If we further denote $S_0=S$ and $S_{b+1}=S'$, then there exists, for each $0\le j\le b+1$, a pair $\hat u_j, \hat u'_j$ of vertices in $S_{j}$, such that
\begin{itemize}
\item $u=\hat u_0$, $u'=\hat u'_{b+1}$;
\item for each $0\le j\le b+1$, $\textnormal{\textsf{dist}}_H(\hat u_j, \hat u'_j)\le 2r\cdot\mu^{i-1}$; and
\item for each $0\le j\le b$, $\textnormal{\textsf{dist}}_H(\hat u'_j, \hat u_{j+1})\le \mu^{i}$.
\end{itemize}
Therefore, $\textnormal{\textsf{dist}}_H(u,u')\le r\cdot (2r\cdot\mu^{i-1})+r\cdot \mu^i\le 2r\cdot\mu^{i}$, since $\mu=r^2$.
\end{proof}
In order to describe and analyze the algorithm, it will be convenient to compute a partitioning tree $\tau$ associated with the hierarchical partitioning $({\mathcal{S}}_0,{\mathcal{S}}_1,\ldots,{\mathcal{S}}_L)$, in a natural way, as follows. The vertex set of $\tau$ is $V(\tau) \coloneqq V_0\cup \ldots \cup V_L$ (recall that for each $i$, $V_{i}=\set{v_S\mid S\in {\mathcal{S}}_{i}}$, that is, $V_{i}$ contains, for each set $S\in {\mathcal{S}}_{i}$, the supernode $v_S$ representing $S$). We call nodes in $V_i$ \EMPH{level-$i$ nodes} of tree $\tau$, and we call sets in ${\mathcal{S}}_i$ \EMPH{level-$i$ sets}. Since ${\mathcal{S}}_L=\set{U}$, there is only one level-$L$ node in $\tau$, which we view as the root of $\tau$. The edge set $E(\tau)$ contains, for each pair $S, \hat S$ of sets such that $S\in {\mathcal{S}}_i, \hat S\in {\mathcal{S}}_{i+1}$ for some $i$ and $S\subseteq \hat S$, an edge connecting $v_S$ to $v_{\hat S}$, so $v_S$ is a child node of $v_{\hat S}$, and in this case we also say that $S$ is a \EMPH{child set} of $\hat S$ and $\hat S$ is a \EMPH{parent set} of $S$. It is easy to verify from the construction that $\tau$ is indeed a tree.
\begin{observation}
\label{obs: sets non-crossing}
Let $S,S'$ be disjoint sets in ${\mathcal{S}}$. Let $u_1,u_2$ be any pair of vertices in $S$, and let $u'_1,u'_2$ be any pair of vertices in $S'$. Then the pairs $(u_1,u_2)$ and $(u'_1,u'_2)$ of terminals are non-crossing in $H$.
\end{observation}
\begin{proof}
Assume for contradiction that the pairs $(u_1,u_2)$ and $(u'_1,u'_2)$ are crossing in $H$. Assume that $S$ is a level-$i$ set and $S'$ is a level-$i'$ set, and assume without loss of generality that $i\ge i'$. We first find another two pairs $(u_3,u_4), (u'_3,u'_4)$ of terminals such that $\textnormal{\textsf{dist}}_H(u_3,u_4)\le \mu^i$, $\textnormal{\textsf{dist}}_H(u'_3,u'_4)\le \mu^{i'}$ and the pairs $(u_3,u_4)$ and $(u'_3,u'_4)$ are crossing. We start by finding the pair $(u_3,u_4)$.
In fact, if we denote by $\gamma_1$ the boundary segment clockwise from $u'_1$ to $u'_2$ around the outerface of $H$, and denote by $\gamma_2$ the boundary segment clockwise from $u'_2$ to $u'_1$ around the outerface of $H$, then since we have assumed that $(u_1,u_2)$ and $(u'_1,u'_2)$ are crossing, one of $u_1,u_2$ lies on $\gamma_1$ and the other lies on $\gamma_2$. Assume without loss of generality that $u_1$ lies on $\gamma_1$ and $u_2$ lies on $\gamma_2$. From the construction of graphs $R_1,\ldots,R_{i-1}$ and collections ${\mathcal{S}}_1,\ldots,{\mathcal{S}}_{i}$, it is easy to observe that, for every pair $u,u'$ of terminals that belong to the same level-$i$ set, there exists a sequence $u^1,\ldots, u^t$ of terminals in $U$ that all belong to the same level-$i$ set as $u$ and $u'$, such that, if we denote $u=u^0$ and $u'=u^{t+1}$, then for each $0\le j\le t$, $\textnormal{\textsf{dist}}_H(u^j,u^{j+1})\le \mu^i$; and for every pair $u,u'$ of terminals that do not belong to the same level-$i$ set, $\textnormal{\textsf{dist}}_H(u,u')> \mu^i$. Consider now the pair $u_1,u_2$ of terminals. Note that they belong to the same level-$i$ set. From the above discussion, there exists a sequence of terminals in $S$ starting with $u_1$ and ending with $u_2$, such that the distance between every pair of consecutive terminals in the sequence is at most $\mu^i$. Since $u_1$ lies on $\gamma_1$ and $u_2$ lies on $\gamma_2$, there must exist a pair $(u_3,u_4)$ of terminals appearing consecutively in the sequence, such that $u_3$ lies on $\gamma_1$ and $u_4$ lies on $\gamma_2$, so pairs $(u_3,u_4)$ and $(u'_1,u'_2)$ are crossing and $\textnormal{\textsf{dist}}_H(u_3,u_4)\le \mu^i$. We can then use similar arguments to find another pair $(u'_3,u'_4)$, such that the pairs $(u_3,u_4)$ and $(u'_3,u'_4)$ are crossing and $\textnormal{\textsf{dist}}_H(u'_3,u'_4)\le \mu^{i'}$.
Note that, since $u_3,u_4\in S$ and $u'_3,u'_4\notin S$, $\textnormal{\textsf{dist}}_H(u_3,u'_3)> \mu^i$ and $\textnormal{\textsf{dist}}_H(u_4,u'_4)> \mu^i$. Altogether, we get that
\[
\textnormal{\textsf{dist}}_H(u'_3,u'_4)+\textnormal{\textsf{dist}}_H(u_3,u_4)\le \mu^i+\mu^{i'}\le \mu^i+\mu^i<\textnormal{\textsf{dist}}_H(u_3,u'_3)+\textnormal{\textsf{dist}}_H(u_4,u'_4),
\]
a contradiction to the Monge property on the crossing pairs $(u_3,u_4)$ and $(u'_3,u'_4)$.
\end{proof}

\paragraph{Expanding sets.}
The central notion in the algorithm for the large spread case is that of \emph{expanding sets}. Recall that $\varepsilon'_r=r^{-0.7}$. We say that a set $S\in {\mathcal{S}}$ is \EMPH{expanding} if $|\hat S|\ge e^{\varepsilon'_r}\cdot |S|$, where $\hat S$ is the parent set of $S$ (or equivalently, $v_{\hat S}$ is the parent node of $v_S$ in $\tau$); otherwise it is \EMPH{non-expanding}. We now distinguish between two cases, depending on whether ${\mathcal{S}}$ contains a non-expanding set of moderate size.

\subsubsection{The Balanced Case: there is a non-expanding set $S$ with $r/5 \le |S| \le 4r/5$}
\label{SSS:balanced}
We let $\hat S$ be the parent set of $S$. We denote $\EMPH{$S^*$} \coloneqq \hat S\setminus S$, and $\EMPH{$S'$} \coloneqq U\setminus \hat S$, so the sets $S^*$, $S$, and $S'$ partition set $U$. Moreover, we have $r/6\le |S|, |S'|\le 5r/6$ and $|S^*|\le (e^{\varepsilon'_r}-1)r$. We will employ the procedure $\textsc{Split}$ in order to decompose the instance $(H,U)$ into smaller instances; to do so, we need to compute a non-crossing path set and a set of vertices on its paths, which serve as the input to the procedure, as follows. We say that an ordered pair $(u,u')$ of terminals in $S$ is a \EMPH{border pair} if the segment on the outer-boundary of $H$ from $u$ clockwise to $u'$ contains no other vertices of $S$ but at least one vertex of $S^*\cup S'$.
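To illustrate the definition, border pairs can be read off from the clockwise order of vertices along the outer boundary by a single scan over the occurrences of $S$. The following Python sketch is only an illustration under simplifying assumptions (the boundary is given explicitly as a list, and the vertex labels are hypothetical); it is not part of the algorithm's actual implementation:

```python
def border_pairs(boundary, U, S):
    """Return the ordered border pairs (u, u') of terminals in S.

    boundary: vertices on the outer boundary of H, in clockwise order.
    U: the terminal set; S: the subset of U under consideration.
    (u, u') is a border pair when the clockwise segment from u to u'
    contains no other vertex of S but at least one terminal outside S.
    """
    occ = [i for i, v in enumerate(boundary) if v in S]
    assert len(occ) >= 2, "the Balanced Case guarantees |S| >= r/5 >= 2"
    n, pairs = len(boundary), []
    for k, i in enumerate(occ):
        j = occ[(k + 1) % len(occ)]            # next S-vertex clockwise
        gap = (j - i - 1) % n                  # vertices strictly in between
        between = [boundary[t % n] for t in range(i + 1, i + 1 + gap)]
        if any(v in U and v not in S for v in between):
            pairs.append((boundary[i], boundary[j]))
    return pairs
```

For instance, on a boundary $(a,x,b,c,y,d)$ with $S=\set{a,b,c,d}$, a terminal $x\in S^*\cup S'$, and a non-terminal $y$, the scan reports exactly the pair $(a,b)$.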
We compute the set \EMPH{${\mathcal M}$} of all border pairs in $S$, and then apply the algorithm from Lemma~\ref{lem: well-structured path set} to graph $H$ and the set of border pairs ${\mathcal M}$, to obtain a set ${\mathcal{P}}$ of shortest paths connecting pairs in ${\mathcal M}$. We call ${\mathcal{P}}$ the \EMPH{border path set} of $S$. It is easy to verify that set ${\mathcal M}$ is non-crossing, and so path set ${\mathcal{P}}$ is also non-crossing. Consider now a border pair $(u,u')$ of terminals and let \EMPH{$P_{u,u'}$} be the $u$-$u'$ shortest path that we have computed. We apply the algorithm from Lemma~\ref{lem: eps_cover_subset} to graph $H$, path $P_{u,u'}$ and each vertex $u^*\in S^*$ that lies on the segment of the outer-boundary of $H$ from $u$ clockwise to $u'$, with parameter $\varepsilon_r$, and compute an $\varepsilon_r$-cover of $u^*$ on $P_{u,u'}$. We then let \EMPH{$Y_{u,u'}$} be the union of all vertices in these $\varepsilon_r$-covers and the endpoints of $P_{u,u'}$, so $Y_{u,u'}$ is a vertex set of $P_{u,u'}$. Let \EMPH{$Y^*$} be the set of all vertices that are either an endpoint of a path in ${\mathcal{P}}$ or have degree at least $3$ in the graph $\bigcup_{P\in {\mathcal{P}}}P$. We then define $\EMPH{$Y$} \coloneqq Y^*\cup (\bigcup_{(u,u')\in {\mathcal M}}Y_{u,u'})$. From \Cref{thm: eps_cover}, \[ |Y\setminus Y^*|\le O\bigg(\frac{|S^*|}{\varepsilon_r}\bigg)\le O\bigg(\frac{(e^{\varepsilon'_r}-1)\cdot r}{\varepsilon_r}\bigg)= O\bigg(\frac{(1/r^{0.7})\cdot r}{\log^4 r/r^{0.1}}\bigg)= O\bigg(\frac{r^{0.4}}{\log^4 r}\bigg). \] We then apply the procedure $\textsc{Split}$ to the one-hole instance $(H,U)$, the non-crossing path set ${\mathcal{P}}$, and the vertex set $Y$. We return the collection ${\mathcal{H}}$ of one-hole instances output by the procedure $\textsc{Split}$ as the output of our algorithm in this case. 
\paragraph{Analysis of the Balanced Case.}
We now show that the output collection of one-hole instances of the above algorithm satisfies the properties required in Lemma~\ref{lem: decomposing step}.
\medskip
First, we show in the following claim that each instance in ${\mathcal{H}}$ contains at most $(9/10)r$ terminals.
\begin{claim}
Each instance in ${\mathcal{H}}$ contains at most $(9/10)r$ terminals.
\end{claim}
\begin{proof}
From the construction of the border path set ${\mathcal{P}}$, the one-hole instances in ${\mathcal{H}}$ can be partitioned into two subsets: ${\mathcal{H}}_1$ contains all instances that correspond to a region in $H$ surrounded by a segment of the outer-boundary of $H$ and the image of some path $P\in {\mathcal{P}}$; and set ${\mathcal{H}}_2$ contains all other instances. Each instance in ${\mathcal{H}}_1$ contains at most two terminals in $S$, and so it contains at most $r-|S|+2+|Y\setminus Y^*|\le (9/10)r$ terminals (note that such an instance does not need to contain branch vertices that are not $\varepsilon_r$-cover vertices on its boundary). On the other hand, each instance in ${\mathcal{H}}_2$ does not contain terminals in $S'$, and so it contains at most $r-|S'|+|Y|\le (9/10)r$ terminals.
\end{proof}
Second, since $|Y\setminus Y^*|\le O(r^{0.4}/\log^4 r)$, from Claim~\ref{clm: branch pts} we get that $\sum_{(H_i,U_i)\in {\mathcal{H}}}|U_i|\le O(r)$ and $\sum_{(H_i,U_i)\in {\mathcal{H}} : |U_i|> \lambda}|U_i| \le r\cdot\big(1+O(1/\lambda)\big)$. We now construct an algorithm $\textsc{Combine}$ that satisfies the required properties in Lemma~\ref{lem: decomposing step}. Recall that we are given, for each instance $(H_i,U_i)\in {\mathcal{H}}$, an $\varepsilon$-emulator $(Z_i,U_i)$. The algorithm $\textsc{Combine}$ simply applies $\textsc{Glue}$ to instances $(Z_1,U_1),\ldots,(Z_s,U_s)$ and returns instance $(Z,U)$ output by $\textsc{Glue}$.
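The error accounting behind $\textsc{Combine}$ is multiplicative: an $\varepsilon$-emulator of an $O(\varepsilon_r)$-emulator preserves terminal distances up to a factor $e^{\varepsilon+O(\varepsilon_r)}$, since the two per-stage guarantees compose. The following toy numeric check in Python illustrates only this accounting, with made-up distances; it is not the actual $\textsc{Glue}$ procedure:

```python
import math

def max_stretch(d_true, d_approx):
    """Worst multiplicative factor by which d_approx distorts d_true."""
    return max(max(a / t, t / a) for t, a in zip(d_true, d_approx))

eps, eps_r = 0.05, 0.01
d_H = [1.0, 4.0, 9.0, 25.0]                 # distances in (H, U); made up
d_hat = [x * math.exp(eps_r) for x in d_H]  # an eps_r-emulator of (H, U)
d_Z = [x * math.exp(-eps) for x in d_hat]   # an eps-emulator of (hat H, U)

# Composing the two guarantees loses at most a factor e^(eps + eps_r).
assert max_stretch(d_H, d_Z) <= math.exp(eps + eps_r) + 1e-12
```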
It remains to show that the algorithm $\textsc{Combine}$ satisfies the required properties. Note that the one-hole instances $(H_1,U_1),\ldots,(H_s,U_s)$ also form a valid input for procedure $\textsc{Glue}$. Let \EMPH{$(\hat H, \hat U)$} be the one-hole instance that the procedure $\textsc{Glue}$ outputs when it is applied to instances $(H_1,U_1),\ldots,(H_s,U_s)$. It is easy to verify that $\hat U=U$. We use the following claim, whose proof is similar to the proof of Claim~\ref{clm: ratio loss for contracting to portals}, and is deferred to \Cref{apd: Proof of ratio loss for gluepathset}.
\begin{claim}
\label{clm: ratio loss for gluepathset}
Instance $(\hat H, U)$ is an $O(\varepsilon_r)$-emulator for instance $(H, U)$.
\end{claim}
Now we complete the proof of Lemma~\ref{lem: decomposing step} for the Balanced Case using Claim~\ref{clm: ratio loss for gluepathset}. In fact, since for each $1\le i\le s$, $(Z_i,U_i)$ is an $\varepsilon$-emulator for $(H_i,U_i)$, from Claim \ref{clm: glueset_emulators}, $(Z,U)$ is an $\varepsilon$-emulator for $(\hat H,U)$. Then from Claim~\ref{clm: ratio loss for gluepathset}, we get that $(Z,U)$ is an $(\varepsilon+O(\varepsilon_r))=(\varepsilon+O(\frac{\log^4 r}{r^{0.1}}))$-emulator for $(H,U)$. Moreover, from the algorithm $\textsc{Glue}$, it is easy to verify that the instance $(Z,U)$ output by the algorithm $\textsc{Combine}$ satisfies that $|V(Z)|\le\sum_{(H_i,U_i)\in {\mathcal{H}}}|V(Z_i)|$.

\subsubsection{The Unbalanced Case: every set $S$ is either expanding, or $|S|<r/5$, or $|S|> 4r/5$}
\label{SSS:unbalanced}
The algorithm in this case consists of two steps. Eventually, we will reduce to the Small Spread Case, and use the algorithm there to complete the decomposition of the instance $(H,U)$.

\paragraph{Step 1:}
We say that a set $S\in {\mathcal{S}}$ is \EMPH{heavy} if $|S|>4r/5$, and in this case we also say that the node $v_S$ is heavy.
Clearly, every level of $\tau$ contains at most one heavy node, and all heavy nodes form a path in $\tau$ which ends at the root node of $\tau$. Let \EMPH{$\hat S$} be the non-expanding heavy set that lies on the lowest level. We denote by \EMPH{$\hat L$} the level that $\hat S$ lies on and let \EMPH{$\check S$} be its parent set. Define $\EMPH{$\hat S^*$} \coloneqq \check S\setminus \hat S$ and $\EMPH{$\hat S'$} \coloneqq U\setminus \check S$. So sets $\hat S^*, \hat S, \hat S'$ partition set $U$, and $|\hat S^*|\le (e^{\varepsilon'_r}-1)r$. We perform the same operations as in the Balanced Case (Section~\ref{SSS:balanced}) to graph $H$ with respect to the partition $(\hat S, \hat S^*, \hat S')$. Let \EMPH{$\hat {\mathcal{H}}$} be the collection we obtain. From similar analysis as in Section~\ref{SSS:balanced}, we get that $\sum_{(H_i,U_i)\in \hat{\mathcal{H}}}|U_i|\le O(r)$, and $\sum_{(H_i,U_i)\in \hat{\mathcal{H}} : |U_i|> \lambda}|U_i| \le r\cdot \big(1+O(1/\lambda)\big)$. If additionally we have, for each $(H_i,U_i)\in \hat {\mathcal{H}}$, $|U_i|\le (9/10)r$, then we simply return the collection $\hat{\mathcal{H}}$ as the output. Assume now that there exists some instance $(H_{i^*},U_{i^*})\in \hat {\mathcal{H}}$ with $|U_{i^*}|> (9/10)r$. Note that there can be at most one such instance. It is easy to see from the algorithm $\textsc{Split}$ that no terminal of $U_{i^*}$ is a cut vertex in graph $H_{i^*}$. Note that it is now enough to prove Lemma~\ref{lem: decomposing step} for the instance $(H_{i^*}, U_{i^*})$, which we do in the next step. Indeed, if Lemma~\ref{lem: decomposing step} holds for instance $(H_{i^*},U_{i^*})$, then we simply apply the algorithm from Lemma~\ref{lem: decomposing step} to instance $(H_{i^*}, U_{i^*})$ and obtain a collection ${\mathcal{H}}^*$ of instances. We then return the collection $\tilde {\mathcal{H}} \coloneqq (\hat{\mathcal{H}}\setminus \set{(H_{i^*}, U_{i^*})})\cup {\mathcal{H}}^*$.
It is easy to verify that the output collection $\tilde {\mathcal{H}}$ satisfies all conditions in Lemma~\ref{lem: decomposing step} for the original input instance $(H,U)$ (where again we simply set $\textsc{Combine}$ to be $\textsc{Glue}$).

\paragraph{Step 2:}
The goal of this step is to further modify and decompose the instance $(H_{i^*}, U_{i^*})$ into instances with small spread, and eventually apply the algorithm from the Small Spread Case to them. Consider the instance $(H_{i^*}, U_{i^*})$. From the algorithm $\textsc{Split}$, the instance $(H_{i^*}, U_{i^*})$ corresponds to a region of $H$ that is surrounded by shortest paths connecting terminals in $U$. Therefore, for every pair $v,v'$ of vertices in $H_{i^*}$ (that are also vertices in $H$), $\textnormal{\textsf{dist}}_H(v,v')=\textnormal{\textsf{dist}}_{H_{i^*}}(v,v')$. Note that set $U_{i^*}$ can be partitioned into two subsets: set $\tilde S$ contains all terminals in $\hat S$ that lie in $U_{i^*}$, and set $Y_{i^*}$ contains all new terminals (which are vertices in $\varepsilon_r$-covers of vertices of $\hat S^*$ on paths of ${\mathcal{P}}$ and the branch vertices) added in Step~1 that lie on the boundary of graph $H_{i^*}$. Note that the distances between a pair of terminals in $Y_{i^*}$ and the distances between a terminal in $Y_{i^*}$ and a terminal in $\tilde S$ could at this point be very small (even much smaller than $\min_{u,u'}\textnormal{\textsf{dist}}_H(u,u')$), which makes it hard to bound the spread from above. Therefore, we start by modifying the instance $(H_{i^*},U_{i^*})$ as follows.
\bigskip
We let graph \EMPH{$\tilde H$} be obtained from $H_{i^*}$ by adding, for each terminal $u\in Y_{i^*}$, a new vertex $\tilde u$ and an edge $(\tilde u, u)$ with weight $\mu^{\hat L-1}$. We then define $\EMPH{$\tilde U$} \coloneqq \tilde S\cup \set{\tilde u\mid u\in Y_{i^*}}$. This completes the construction of the new instance $(\tilde H, \tilde U)$.
We call this operation \EMPH{terminal pulling}. See \Cref{fig: central} for an illustration. It is easy to verify that $(\tilde H, \tilde U)$ is a one-hole instance, and moreover, for each new terminal $\tilde u$ in $\tilde U\setminus \tilde S$, the distance in $\tilde H$ from $\tilde u$ to any other terminal in $\tilde U$ is at least $\mu^{\hat L-1}$. We will show later in the analysis that it is now sufficient to prove Lemma~\ref{lem: decomposing step} for the instance $(\tilde H, \tilde U)$.
\begin{figure}[h!]
\centering
\subfigure[Before: the instance $(H_{i^*},U_{i^*})$.]{\scalebox{0.45}{\includegraphics[scale=0.25]{Fig/central_mod_1.jpg}}}
\hspace{0.8cm}
\subfigure[After: the instance $(\tilde H,\tilde U)$.]%
{\scalebox{0.45}{\includegraphics[scale=0.25]{Fig/central_mod_2.jpg}}}
\caption{An illustration of modifying the instance $(H_{i^*}, U_{i^*})$.\label{fig: central}}
\end{figure}
We now construct the hierarchical clustering \EMPH{$\tilde {\mathcal{S}}$} for instance $(\tilde H, \tilde U)$, in the same way as the hierarchical clustering ${\mathcal{S}}$ for instance $(H,U)$, as described at the beginning of the large spread case. Let $\tilde \tau$ be the partitioning tree associated with $\tilde {\mathcal{S}}$. Recall that for every pair of vertices in $H_{i^*}$, the distance between them in $H_{i^*}$ is identical to the distance between them in $H$. From the construction of instance $(\tilde H,\tilde U)$, it is easy to verify that both $\tilde{\mathcal{S}}$ and $\tilde \tau$ have depth $\hat L$, and on levels $\hat L-1,\ldots, 1$, the new terminals in $\tilde U\setminus \tilde S$ only form singleton sets, as each of them is at distance at least $\mu^{\hat L-1}$ from any other terminal in $\tilde U$. Therefore, every non-singleton set in $\tilde {\mathcal{S}}$ is also a set in ${\mathcal{S}}$.
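The terminal pulling operation itself is elementary; the following minimal Python sketch performs it on an adjacency-map representation of an edge-weighted graph (the encoding and vertex names are illustrative assumptions, not the paper's data structures):

```python
def pull_terminals(graph, terminals, pull_set, weight):
    """Attach to each terminal in pull_set a pendant vertex via an edge
    of the given weight, and replace the terminal by its pendant.

    graph: dict mapping vertex -> {neighbor: edge weight} (undirected).
    Returns the modified graph and the new terminal set.
    """
    new_terminals = set(terminals) - set(pull_set)
    for u in pull_set:
        pendant = ('pulled', u)                 # fresh vertex name
        graph[pendant] = {u: weight}
        graph.setdefault(u, {})[pendant] = weight
        new_terminals.add(pendant)
    return graph, new_terminals
```

In the construction above, the weight would be $\mu^{\hat L-1}$, which guarantees that every pulled terminal is at distance at least $\mu^{\hat L-1}$ from all other terminals.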
\medskip
\noindent We say that a set $S$ is \EMPH{good} if
\begin{enumerate} [label=(\roman*), ref=(\roman*)]
\item $|S|>1$;
\item $S$ lies on level at most $\hat L-2\log r/\varepsilon'_r$;
\item $S$ is non-expanding; and
\item for any other set $S'\in \tilde {\mathcal{S}}$ that lies on level at most $\hat L-2\log r/\varepsilon'_r$ and $S\subseteq S'$, $S'$ is expanding.
\end{enumerate}
We denote by \EMPH{$\tilde {\mathcal{S}}_g$} the collection of all good sets in $\tilde {\mathcal{S}}$. Next we show that all (good) sets in $\tilde{\mathcal{S}}_g$ lie on level at least $\hat L-O(\log r/\varepsilon'_r)$. From the definition of a good set and our assumption for the Unbalanced Case that every set $S\in {\mathcal{S}}$ with $r/5\le |S|\le 4r/5$ is expanding, it is easy to see that all good sets $S$ have size at most $r/5$ (we have used the property that every non-singleton set in $\tilde {\mathcal{S}}$ is also a set in ${\mathcal{S}}$).
\begin{observation}
\label{obs: good set level}
Every good set in $\tilde{\mathcal{S}}$ lies on level at least $\hat L-10\log r/\varepsilon'_r$. Every terminal either forms a singleton set on level at least $\hat L-10\log r/\varepsilon'_r$, or belongs to some good set in $\tilde{\mathcal{S}}_g$.
\end{observation}
\begin{proof}
Denote $\hat L' \coloneqq \hat L-2\log r/\varepsilon'_r$. Let $S$ be a good set. Assume $S$ lies on level $i$. Let $S_{i+1},\ldots,S_{\hat L'}$ be the ancestor sets of $S$ on levels $i+1,\ldots, \hat L'$, respectively. From the definition of good sets, all sets $S_{i+1},\ldots,S_{\hat L'-1}$ are expanding, so we have
\[
1\le |S|\le |S_{i+1}|\le e^{-\varepsilon'_r}\cdot |S_{i+2}|\le \cdots\le e^{-\varepsilon'_r\cdot(\hat L'-i-1)}\cdot |S_{\hat L'}|\le e^{-\varepsilon'_r\cdot(\hat L'-i-1)}\cdot r.
\]
Therefore, $\varepsilon'_r\cdot(\hat L'-i-1)\le \ln r$ and so $i\ge \hat L'-8\log r/\varepsilon'_r=\hat L-10\log r/\varepsilon'_r$.
Similarly, if a terminal in $\tilde S$ does not form a singleton set on level at least $\hat L-10\log r/\varepsilon'_r$, and it does not belong to any good set in $\tilde{\mathcal{S}}_g$, then from the inequality above, its ancestor chain has length at most $8\log r/\varepsilon'_r$, a contradiction.
\end{proof}
Now for each good set $S$, we compute its border path set $\tilde {\mathcal{P}}_S$ in instance $(\tilde H, \tilde U)$ in the same way as in the Balanced Case (Section~\ref{SSS:balanced}). Now define $\EMPH{$\tilde{\mathcal{P}}$} \coloneqq \bigcup_{S\in \tilde {\mathcal{S}}_g}\tilde{\mathcal{P}}_S$. We show in the next observation that the collection $\tilde{\mathcal{P}}$ of paths is non-crossing.
\begin{observation}
The collection $\tilde{\mathcal{P}}$ of paths is non-crossing.
\end{observation}
\begin{proof}
Assume for contradiction that the collection $\tilde{\mathcal{P}}$ of paths is not non-crossing. Then there exist two distinct sets $S,S'\in \tilde{\mathcal{S}}_g$, a border path $P$ of $S$ connecting terminals $u_1,u_2$ in $S$ and a border path $P'$ of $S'$ connecting terminals $u'_1,u'_2$ in $S'$, such that the pairs $(u_1,u_2), (u'_1,u'_2)$ are crossing. However, from the definition of good sets, $S\cap S'=\varnothing$. Therefore, from Observation~\ref{obs: sets non-crossing}, pairs $(u_1,u_2), (u'_1,u'_2)$ are non-crossing, a contradiction.
\end{proof}
Consider now a good set $S\in \tilde{\mathcal{S}}_{g}$. We define $\EMPH{$S^*$} \coloneqq \check S\setminus S$, where \EMPH{$\check S$} is the parent set of $S$ in $\tilde {\mathcal{S}}$. Recall that a pair $(u,u')$ of terminals in $S$ is a border pair if the segment of the outer-boundary of $\tilde H$ from $u$ clockwise to $u'$ contains no other vertices of $S$ but at least one vertex that does not lie in $S$. Now for each border pair $(u,u')$ of terminals in $S$, let \EMPH{$P_{u,u'}$} be the $u$-$u'$ shortest path in $\tilde{\mathcal{P}}_S$ that we have computed.
We apply the algorithm from Lemma~\ref{lem: eps_cover_subset} to each vertex $u^*\in S^*$ that lies on the segment of the outer-boundary from $u$ clockwise to $u'$, with parameter $\varepsilon_r$, and compute an $\varepsilon_r$-cover of $u^*$ on $P_{u,u'}$. We then let \EMPH{$Y^S_{u,u'}$} be the union of all such $\varepsilon_r$-covers and the endpoints of $P_{u,u'}$. We then let set \EMPH{$Y^S$} be the union of the sets $Y^S_{u,u'}$ for all border pairs $(u,u')$. Finally, we define \EMPH{$Y$} as the union of $\bigcup_{S\in \tilde {\mathcal{S}}_g} Y^S$ and all branch vertices (which we denote by $Y^*$), so $Y$ is a subset of $V(\tilde{\mathcal{P}})$ that contains all branch vertices of $\tilde{\mathcal{P}}$. Moreover, from \Cref{thm: eps_cover},
\[
\begin{split}
|Y\setminus Y^*| & \le O\bigg(\sum_{S\in \tilde {\mathcal{S}}_g}\frac{|S^*|}{\varepsilon_r}\bigg) \le O\bigg(\frac{(e^{\varepsilon'_r}-1)\cdot \sum_{S\in \tilde {\mathcal{S}}_g}|S|}{\varepsilon_r}\bigg) \le O\bigg(\frac{(e^{\varepsilon'_r}-1)\cdot r}{\varepsilon_r}\bigg)\\
& = O\bigg(\frac{(1/r^{0.7})\cdot r}{\log^4 r/r^{0.1}}\bigg)= O\bigg(\frac{r^{0.4}}{\log^4 r}\bigg).
\end{split}
\]
We now apply the algorithm $\textsc{Split}$ to instance $(\tilde H,\tilde U)$, the path set $\tilde{\mathcal{P}}$ and the vertex set $Y$. Let $\tilde {\mathcal{H}}$ be the collection of one-hole instances we get. If all instances $(\hat H, \hat U)$ in $\tilde {\mathcal{H}}$ satisfy $|\hat U|\le (9/10)r$, then we terminate the algorithm and return $\tilde{\mathcal{H}}$. Assume now that there is some instance $(\hat H, \hat U)$ in $\tilde {\mathcal{H}}$ with $|\hat U|> (9/10)r$. From a similar analysis as in Step~1, there can be at most one such instance. We now modify the instance $(\hat H, \hat U)$ as follows. Denote $\EMPH{$L^*$} \coloneqq \hat L-10\log r/\varepsilon'_r$.
Let \EMPH{$H^*$} be the graph obtained from $\hat H$ by applying the terminal pulling operation to every terminal in $\hat U\setminus \tilde S$ via an edge of weight $\mu^{L^*-1}$. We then define set \EMPH{$U^*$} to be the union of $(\hat U\cap \tilde S)$ and the set of all new terminals created in the terminal pulling operation. We use the following observation.
\begin{observation}
$\Phi(H^*,U^*)\le 2^{O(\log^2 r/\varepsilon'_r)}$.
\end{observation}
\begin{proof}
From Observation~\ref{obs: good set level}, every pair of terminals in $U^*$ has distance at least $\mu^{L^{*}-1}$ in graph $H^*$. On the other hand, since graph $\hat H$ is a subgraph of $\tilde H$, every pair of terminals in $U^*$ has distance at most $\mu^{\hat L+1}$ in graph $H^*$. Therefore, $\Phi(H^*,U^*)\le \mu^{\hat L-L^{*}+2}=2^{O(\log^2 r/\varepsilon'_r)}$ as $\mu=r^2$.
\end{proof}
Since $2^{O(\log^2 r/\varepsilon'_r)}<2^{r^{0.9}\log^2 r}$ when $r$ is larger than some large enough constant, we apply the algorithm from the Small Spread Case to instance $(H^*, U^*)$ and obtain a collection ${\mathcal{H}}_{(\hat H,\hat U)}$ of instances. The output of the algorithm is the collection $\Big(\tilde {\mathcal{H}}\setminus\set{(\hat H,\hat U)}\Big)\cup {\mathcal{H}}_{(\hat H,\hat U)}$ of instances.

\paragraph{Analysis of the Unbalanced Case.}
Recall that in this step we assume that, after Step 1, there is an instance $(H_{i^*}, U_{i^*})$ with $|U_{i^*}|>(9/10)r$, and we transformed it into another instance $(\tilde H, \tilde U)$. We first show that it is sufficient to prove Lemma~\ref{lem: decomposing step} for instance $(\tilde H, \tilde U)$. All other conditions can be easily verified. We now show that when applying the algorithm $\textsc{Glue}$ to $\varepsilon$-emulators $\set{(\tilde H', \tilde U)}\cup\set{(H'_i, U_i)}_{i\ne i^*}$, we still obtain an $(\varepsilon+ O(\frac{\log^4 r}{r^{0.1}}))$-emulator for $(H,U)$.
In fact, we only need to consider the terminal pairs $u,u'$ with $u\in S$ and $u'\notin S$. Note that the terminals of such a pair belong to different level-$\hat L$ sets in ${\mathcal{S}}$. From the construction of $\tilde {\mathcal{S}}$, $\textnormal{\textsf{dist}}_H(u,u')\ge \mu^{\hat L}$. Therefore, the transformation from instance $(H_{i^*}, U_{i^*})$ to instance $(\tilde H, \tilde U)$ adds at most an additive term of $\mu^{\hat L-1}$ to their distance, which is an $O(\frac{1}{\mu})=O(\frac{1}{r^2})\le O(\frac{\log^4 r}{r^{0.1}})$ fraction of their distance in graph $H$. Therefore, by gluing the $\varepsilon$-emulators $\set{(\tilde H', \tilde U)}\cup\set{(H'_i, U_i)}_{i\ne i^*}$, we still obtain an $(\varepsilon+ O(\frac{\log^4 r}{r^{0.1}}))$-emulator for $(H,U)$. From now on, we focus on proving that the decomposition we computed for instance $(\tilde H, \tilde U)$ satisfies all properties in Lemma~\ref{lem: decomposing step}. Recall that we have first computed a collection $\tilde {\mathcal{S}}_g$ of good sets, computed a path set $\tilde{\mathcal{P}}$ and a subset $Y$ of vertices in $V(\tilde {\mathcal{P}})$ based on sets in $\tilde {\mathcal{S}}_g$, and then applied the procedure $\textsc{Split}$ to $((\tilde H, \tilde U), \tilde{\mathcal{P}}, Y)$ and obtained a collection $\tilde {\mathcal{H}}$ of one-hole instances. Assume first that all instances $(\hat H, \hat U)$ in collection $\tilde {\mathcal{H}}$ satisfy $|\hat U|\le (9/10)r$. Since $|Y\setminus Y^*|\le O\big(\frac{r^{0.4}}{\log^4 r}\big)$, from Claim~\ref{clm: branch pts}, we get that $\sum_{(\hat H,\hat U)\in \tilde{\mathcal{H}}}|\hat U|\le O(r)$ and $\sum_{(\hat H,\hat U)\in \tilde{\mathcal{H}} : |\hat U|> \lambda}|\hat U|\le r\cdot \big(1+O(1/\lambda)\big)$.
We now describe the algorithm $\textsc{Combine}$ that takes as input, for each instance $(\hat H, \hat U)\in \tilde{\mathcal{H}}$, an $\varepsilon$-emulator $(\hat H', \hat U)$, and computes an $\big(\varepsilon+O(\varepsilon_r)\big)=\big(\varepsilon+O(\frac{\log^4 r}{r^{0.1}})\big)$-emulator for $(\tilde H, \tilde U)$. We simply apply the algorithm $\textsc{Glue}$ to instances $\set{(\hat H',\hat U)\mid (\hat H, \hat U)\in \tilde {\mathcal{H}}}$ and return the output instance $(\tilde H', \tilde U)$ of $\textsc{Glue}$. The proof that instance $(\tilde H', \tilde U)$ is indeed an $\big(\varepsilon+O(\varepsilon_r)\big)$-emulator for $(\tilde H, \tilde U)$ and the proof that $|V(\tilde H')|\le \sum_{(\hat H, \hat U)\in \tilde {\mathcal{H}}}|V(\hat H')|$ use arguments identical to those in the Balanced Case, and are omitted here. Assume now that there exists an instance $(\hat H, \hat U)$ in collection $\tilde {\mathcal{H}}$ with $|\hat U|> (9/10)r$. Denote $\tilde {\mathcal{H}}'=\tilde {\mathcal{H}}\setminus \set{(\hat H, \hat U)}$ and denote by $\overline{\mathcal{H}}=\big(\tilde {\mathcal{H}}\setminus\set{(\hat H,\hat U)}\big)\cup {\mathcal{H}}_{(\hat H,\hat U)}$ the output collection of instances. First, note that all instances $(\overline H, \overline U)$ in collection $\tilde {\mathcal{H}}'$ satisfy $|\overline U|\le (9/10)r$. The remaining instances in $\overline{\mathcal{H}}$ are obtained by applying the algorithm from the Small Spread Case to the instance $(H^*, U^*)$, which is obtained by modifying the unique large instance $(\hat H, \hat U)$. From that algorithm, we know that each instance in the output collection contains at most $(9/10)r$ terminals. Second, from similar arguments, we get that $\sum_{(\overline H,\overline U)\in \overline{\mathcal{H}}}|\overline U|\le O(r)$ and $\sum_{(\overline H,\overline U)\in \overline{\mathcal{H}} : |\overline U|> \lambda}|\overline U|\le r\cdot \big(1+O(1/\lambda)\big)$.
We now describe the algorithm $\textsc{Combine}$ that takes as input, for each instance $(\overline H, \overline U)\in \overline{\mathcal{H}}$, an $\varepsilon$-emulator $(\overline H', \overline U)$, and computes an $\big(\varepsilon+O(\varepsilon_r)\big)=\big(\varepsilon+O(\frac{\log^4 r}{r^{0.1}})\big)$-emulator for $(\tilde H, \tilde U)$. First, consider the instances in ${\mathcal{H}}_{(\hat H,\hat U)}$ that are obtained by applying the algorithm from the Small Spread Case to $(H^*, U^*)$. We simply use the algorithm $\textsc{Combine}$ described in the Small Spread Case to compute an $\big(\varepsilon+O(\varepsilon_r)\big)$-emulator $(H^{**}, U^*)$ for instance $(H^*, U^*)$. Finally, we apply the algorithm $\textsc{Glue}$ to instances in $\set{ (\overline H',\overline U)\mid (\overline H,\overline U)\in \tilde {\mathcal{H}}'}\cup \set{(H^{**}, U^*)}$ and denote the obtained instance by $(\tilde H', \tilde U)$. Note that, for distinct sets $S,S'\in \tilde{\mathcal{S}}_g$ such that $S\cap \hat U\ne\varnothing, S'\cap \hat U\ne\varnothing$ and $S\cap S'=\varnothing$, if set $S$ lies on level $i$ and set $S'$ lies on level $i'$, then $\textnormal{\textsf{dist}}_{H}(S,S')\ge \mu^{(\max\set{i,i'}+1)}\ge \mu^{L^*}$. Therefore, from similar arguments at the beginning of the analysis, the terminal pulling operation only incurs a multiplicative error of factor $\big(1+O(1/r)\big)$ on the distances between terminals in disjoint sets in $\tilde {\mathcal{S}}_g$. The rest of the proof that instance $(\tilde H', \tilde U)$ is indeed an $\big(\varepsilon+O(\varepsilon_r)\big)$-emulator for $(\tilde H,\tilde U)$ uses arguments almost identical to those in the Balanced Case, and is omitted here.

\subsection{Near-linear Time Implementation of Lemma~\ref{lem: decomposing step}}
\label{sec: 1hole_near-linear time}
Denote $n \coloneqq |V(H)|$. In this subsection we show that the algorithm described in this section can be implemented in time $O\big( (n+r^2)\cdot\log r \cdot \log n \big)$.
The first step of the algorithm is to split the input instance $(H,U)$ into smaller instances at cut vertices. The cut vertices of the plane graph $H$ are simply the vertices encountered more than once when we traverse the boundary of the outer face of $H$, and so they can be computed in $O(n)$ time. Therefore, the algorithm in \Cref{subsec: Remove All Cut Vertices} can be implemented in $O(n)$ time. Consider now the step in \Cref{SSS:small-spread}. In this step we first compute the closest $(3/4)$-balanced pair of terminals in $U$. We show that this can be done in $O(n\log n+r^2\log n)$ time. Specifically, we use the algorithm in \cite{kle-msppg-2005} to compute an MSSP data structure for graph $H$, which takes time $O(n\log n)$. We then query the distances between every pair of terminals in $U$, which takes time $O(r^2\log n)$ as the query time of the MSSP data structure is $O(\log n)$. We can then use the acquired information to compute the closest $(3/4)$-balanced pair of terminals in $U$, by simply discarding all unbalanced pairs and sorting the remaining pairs by distance. Let this pair be $(u,u')$. Computing the $u$-$u'$ shortest path in $H$ takes $O(n)$ time. Computing portals (vertices of $P$) takes $O(n)$ time. From \Cref{SS:split-and-glue}, the procedures $\textsc{Split}$ and $\textsc{Glue}$ can be implemented in $O(n)$ time. Therefore, the total running time of the step in \Cref{SSS:small-spread} is $O(n\log n+r^2\log n)$. Consider next the step in \Cref{SSS:large-spread}. In this step we first compute a hierarchical clustering of the terminals in $U$, according to their distances in $H$. This can be done in $O(n\log n+r^2\log n)$ time. Specifically, we similarly use the MSSP data structure of \cite{kle-msppg-2005} to query the distances between every pair of terminals in $U$, and then consider the complete graph $K_U$ on $U$ whose edge weights are the distances between their endpoints, as returned by the MSSP data structure.
It is easy to see that, in order to construct the hierarchical clustering ${\mathcal{S}}$, every edge of $K_U$ needs to be visited at most $O(1)$ times. Therefore, the construction of the hierarchical clustering takes in total $O(n\log n+r^2\log n)$ time. Note that ${\mathcal{S}}$ is a hierarchical clustering on a collection of $r$ elements, so ${\mathcal{S}}$ contains at most $O(r)$ distinct sets. Since deciding whether a set in ${\mathcal{S}}$ is expanding takes $O(1)$ time, we can tell in $O(r)$ time whether we are in the Balanced Case or the Unbalanced Case.
\begin{itemize}
\item In the Balanced Case, the next steps are to compute border pairs, border path sets, $\varepsilon_r$-covers, and to use procedure $\textsc{Split}$ to obtain smaller instances. From \Cref{lem: well-structured path set} and Lemma~\ref{lem: eps_cover_subset}, all these tasks can be done in $O(n\log r)$ time.
\item In the Unbalanced Case, the next step is to repeatedly apply the steps of the Balanced Case to the non-expanding set that lies on the lowest level. From the above discussion, this takes in total $O(n\log r)$ time. If we end up with one instance $(H_{i^*}, U_{i^*})$ with $|U_{i^*}|>(9/10)r$, we need a final step for further splitting this instance. It is easy to verify that the terminal pulling operation can be done in $O(r)$ time. Constructing the new collection $\tilde {\mathcal{S}}$ takes $O(n\log n+r^2\log n)$ time. Identifying good sets in $\tilde {\mathcal{S}}$ takes $O(r)$ time. The remaining operations are computing border pairs, border path sets, $\varepsilon_r$-covers, and using procedure $\textsc{Split}$ to obtain smaller instances. From the above discussion, all these tasks can be done in $O(n\log r)$ time.
\end{itemize}
\noindent Altogether, the running time of the algorithm in this section is $O\big( (n+r^2)\cdot\log r \cdot \log n \big)$.
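To illustrate the clustering step, the following is a minimal Python sketch of a single-linkage-style hierarchical clustering built from the queried pairwise terminal distances: the edges of $K_U$ are pre-sorted once and then consumed in order as the distance threshold grows, so every edge is visited $O(1)$ times overall. The function names and the explicit list of thresholds are illustrative; the concrete distance scales used by the algorithm are set elsewhere in the paper.

```python
import itertools

class DSU:
    """Union-find; merges clusters as edges of K_U are scanned once."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            self.parent[rx] = ry

def hierarchical_clustering(dist, r, thresholds):
    """dist[i][j]: pairwise terminal distances (e.g. from MSSP queries).
    thresholds: increasing distance scales, one per level.
    Returns, for each level, the partition of {0, ..., r-1} into clusters
    in which terminals at distance <= threshold are merged transitively.
    Edges of K_U are sorted once and each is consumed exactly once."""
    edges = sorted((dist[i][j], i, j)
                   for i, j in itertools.combinations(range(r), 2))
    dsu = DSU(r)
    levels, k = [], 0
    for t in thresholds:
        while k < len(edges) and edges[k][0] <= t:
            dsu.union(edges[k][1], edges[k][2])
            k += 1
        clusters = {}
        for v in range(r):
            clusters.setdefault(dsu.find(v), []).append(v)
        levels.append(sorted(clusters.values()))
    return levels
```

On four terminals at mutual distances forming two well-separated pairs, the low threshold yields two clusters and the high threshold merges everything, matching the nested structure of ${\mathcal{S}}$.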
\section{Missing Proofs in \Cref{sec:prelim} and \Cref{sec: planar emulator}}

\subsection{Proof of Lemma~\ref{lem: well-structured path set}}
\label{apd: Proof of well-structured path set}
Let $w: E(G)\to \mathbb{R}^+$ be the edge weight function of graph $G$. We slightly perturb $w$ to obtain another function $w': E(G)\to \mathbb{R}^+$, such that for every pair $P,P'$ of distinct paths in $G$: $w'(P)\ne w'(P')$; and if $w'(P)>w'(P')$, then $w(P)\ge w(P')$. Therefore, for each pair $v,v'$ of vertices in $G$, there is a unique $v$-$v'$ shortest path in $G$ under the weight function $w'$, and this path is also a $v$-$v'$ shortest path in $G$ under the weight function $w$ \cite{mvv-memi-1987,cab-mdpg-2012}. The algorithm uses the technique of divide-and-conquer. We now describe the recursive step. We first construct an auxiliary planar graph $H$ as follows. Its vertex set is $V(H)=T$, and its edge set $E(H)$ contains, for each pair $(t_1,t_2)\in {\mathcal M}$, an edge connecting $t_1$ to $t_2$. Graph $H$ inherits a planar embedding from $G$ and is therefore an outerplanar graph. Denote by ${\mathcal{F}}$ the set of bounded faces in $H$ lying inside a disc $D$. We construct a graph $R$ as follows. Its vertex set is $V(R)=\set{u_F\mid F\in {\mathcal{F}}}$, and its edge set $E(R)$ contains, for every pair $F,F'\in {\mathcal{F}}$, an edge $(u_F,u_{F'})$ if and only if faces $F$ and $F'$ share a segment of non-zero length on their boundaries. It is easy to verify that $R$ is a tree, and $|V(R)|=|{\mathcal M}|+1$. (In other words, $R$ is the \emph{weak dual} of an outerplanar graph.) We can now efficiently compute a vertex of $R$ whose removal leaves connected components each containing at most $|V(R)|/2$ vertices (that is, a \emph{centroid} of the tree $R$). Denote this vertex by $u_{F^*}$. Consider now the face $F^*$ of $H$.
Since every connected component of graph $R\setminus \set{u_{F^*}}$ contains no more than $|V(R)|/2$ vertices, it is easy to see that we can find a pair $t_i,t_j$ of terminals on the intersection of the boundary of $D$ and the boundary of $F^*$, such that, if we draw a straight line segment connecting $t_i,t_j$, and denote by $D_1,D_2$ the discs obtained by cutting $D$ along this segment, then each edge of $H$ is drawn either inside $D_1$ or inside $D_2$, and each of $D_1,D_2$ contains the images of at most a $3/4$-fraction of the edges of $H$. Consider now the one-hole instance $(G,T)$. We compute a $t_i$-$t_j$ shortest path $P$ in $G$, and cut the graph $G$ into two subgraphs $G_1,G_2$ along path $P$ (so $G_1\cap G_2=P$). Define ${\mathcal M}_1$ to be the subset of ${\mathcal M}$ that contains all pairs whose corresponding edge of $H$ is drawn inside $D_1$, and define subset ${\mathcal M}_2$ similarly, so sets ${\mathcal M}_1,{\mathcal M}_2$ partition ${\mathcal M}$, and $|{\mathcal M}_1|,|{\mathcal M}_2|\le (3/4)\cdot |{\mathcal M}|$. We now recurse on graph $G_1$ for computing the shortest paths connecting pairs of ${\mathcal M}_1$, and on graph $G_2$ for computing the shortest paths connecting pairs of ${\mathcal M}_2$. This completes the description of the algorithm. It is easy to verify that the running time of the algorithm is $O(\log |{\mathcal M}|\cdot |E(G)|)$, since in every recursive layer, every edge of the original graph $G$ appears in at most two of the graphs that lie on this layer. To complete the proof of Lemma~\ref{lem: well-structured path set}, it suffices to show that, in a recursive step described above, for every pair $(t_1,t_2)\in {\mathcal M}_1$, the unique shortest path in $G$ under $w'$ lies entirely in graph $G_1$ (the case for ${\mathcal M}_2$ and $G_2$ is symmetric), and that the set of resulting shortest paths that we computed is well-structured.
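The only nontrivial primitive in the recursive step above is finding a vertex of the tree $R$ whose removal leaves components with at most $|V(R)|/2$ vertices each; such a vertex (a centroid) always exists in a tree and is found by one pass of subtree-size computation. A minimal sketch, with the tree given as an adjacency list (names are illustrative):

```python
def tree_centroid(adj):
    """adj: adjacency list of a tree on vertices 0..n-1.
    Returns a vertex whose removal leaves components of size <= n/2."""
    n = len(adj)
    # Iterative DFS from vertex 0 to record a traversal order and parents.
    order, parent, seen = [], [None] * n, [False] * n
    stack = [0]
    while stack:
        v = stack.pop()
        seen[v] = True
        order.append(v)
        for w in adj[v]:
            if not seen[w]:
                parent[w] = v
                stack.append(w)
    # Accumulate subtree sizes bottom-up.
    size = [1] * n
    for v in reversed(order):
        if parent[v] is not None:
            size[parent[v]] += size[v]
    # A centroid is a vertex whose largest hanging component has <= n/2 vertices.
    for v in range(n):
        largest = n - size[v]  # the component containing the root side
        for w in adj[v]:
            if parent[w] == v:
                largest = max(largest, size[w])
        if 2 * largest <= n:
            return v
```

For the weak dual of the outerplanar graph $H$ above, this runs in time linear in $|V(R)| = |{\mathcal M}|+1$, which is within the stated $O(\log |{\mathcal M}|\cdot |E(G)|)$ budget per recursive layer.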
Assume for contradiction that the $t_1$-$t_2$ shortest path $P'$ in $G$ does not lie entirely in $G_1$. We view $P'$ as being directed from $t_1$ to $t_2$. Let $v$ ($v'$, resp.) be the first (last, resp.) vertex of $P'$ that lies on $P$, and denote by $\hat P$ ($\hat P'$, resp.) the subpath of $P$ ($P'$, resp.) between $v$ and $v'$. Therefore, some inner vertex of $\hat P'$ does not belong to $G_1$ and therefore does not belong to $P$, and so $\hat P\ne \hat P'$. However, since both $P$ and $P'$ are shortest paths under $w'$, $w'(\hat P)=w'(\hat P')$, contradicting the fact that distinct paths have different weights under $w'$. Via similar arguments we can also show that the set of resulting shortest paths that we computed is well-structured.

\subsection{Proof of Theorem~\ref{thm: one-hole lower bound}}
\label{apd: Proof of one-hole lower bound}
In this subsection we provide the proof of Theorem~\ref{thm: one-hole lower bound}. Our example is inspired by the hard example constructed in \cite{KNZ14}. Assume that $1/\varepsilon$ is an integer and $k$ is a multiple of $1/\varepsilon$; this assumption only costs an additional constant factor in the size bound and does not influence the bound in Theorem~\ref{thm: one-hole lower bound}. We first construct a circular ordering $\sigma$ and a metric $d$ on the terminals. From \cite{co-pemm-2020}, if $d$ satisfies the Monge property (under the circular ordering $\sigma$), then there exists a one-hole instance $(G,T)$ realizing the metric $d$, with the terminals of $T$ appearing on the boundary in the order $\sigma$. The set $T$ is partitioned into $L=\varepsilon k/4$ groups $T=\bigcup_{1\le i\le L}T^i$, where each group contains $4/\varepsilon$ terminals. Each group $T^i$ is then partitioned into four subgroups $T^i=T^{i,1}\cup T^{i,2}\cup T^{i,3}\cup T^{i,4}$, each containing $1/\varepsilon$ terminals. We denote $T^{i,j}=\set{t^{i,j}_1, \ldots,t^{i,j}_{1/\varepsilon}}$, for each $1\le j\le 4$.
The circular ordering $\sigma$ on terminals of $T$ is defined as follows. The groups $T^1,\ldots, T^{L}$ appear clockwise in this order; within each group $T^i$, the subgroups $T^{i,1}, T^{i,2}, T^{i,3}, T^{i,4}$ appear clockwise in this order; and within each subgroup $T^{i,j}$, the vertices $t^{i,j}_1, \ldots,t^{i,j}_{1/\varepsilon}$ appear clockwise in this order. See Figure~\ref{fig: onehole_lower1} for an illustration. The metric $d$ on $T$ is defined as follows. For every pair $t,t'$ of terminals that belong to different groups, $d(t,t')=1/\varepsilon^2$. Consider now a group $T^i$. The metric between terminals in $T^i$ is defined as follows. Consider the $(\frac{1}{\varepsilon}+2)\times (\frac{1}{\varepsilon}+2)$ grid $H$ with unit edge weights. We place each terminal of $T^i$ at a boundary vertex of $H$, in the way shown in Figure~\ref{fig: onehole_lower2}. Now for each pair $t^{i,j}_r, t^{i,j'}_{r'}$ of terminals in $T^i$, we define $d(t^{i,j}_r,t^{i,j'}_{r'})=\textnormal{\textsf{dist}}_H(t^{i,j}_r,t^{i,j'}_{r'})$. It is easy to verify that $d$ is a metric and satisfies the Monge property.
\begin{figure}[h]
\centering
\subfigure[An illustration of ordering $\sigma$.]{\scalebox{0.5}{\includegraphics[scale=0.18]{Fig/onehole_lower1.jpg}}\label{fig: onehole_lower1}}
\hspace{0.3cm}
\subfigure[An illustration of metric $d$ within a group $T^i$ of terminals.]{\scalebox{0.5}{\includegraphics[scale=0.18]{Fig/onehole_lower2.jpg}}\label{fig: onehole_lower2}}
\caption{Illustrations of circular ordering $\sigma$ and metric $d$ within a group $T^i$ of terminals.}
\end{figure}
Consider now a one-hole instance $(G',T)$ such that the circular ordering in which terminals in $T$ appear on the outer boundary of $G'$ is $\sigma$, and for each pair $t,t'\in T$, $e^{-\varepsilon/3}\cdot \textnormal{\textsf{dist}}_{G'}(t,t')\le d(t,t') \le e^{\varepsilon/3}\cdot \textnormal{\textsf{dist}}_{G'}(t,t')$.
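For intuition, the Monge property asserted here says that for terminals $a,b,c,d$ appearing in this circular order on the boundary, the crossing distances dominate: $d(a,c)+d(b,d)\ge d(a,b)+d(c,d)$ and $d(a,c)+d(b,d)\ge d(a,d)+d(b,c)$. For boundary vertices of a grid this follows because the two crossing shortest paths must intersect, and it can be checked by brute force on a small unit-weight grid; the grid size below is illustrative, standing in for the $(\frac{1}{\varepsilon}+2)\times(\frac{1}{\varepsilon}+2)$ grid above.

```python
from collections import deque
from itertools import combinations

def grid_distances(m):
    """All-pairs BFS distances in an m-by-m unit-weight grid."""
    verts = [(i, j) for i in range(m) for j in range(m)]
    def bfs(s):
        d = {s: 0}
        q = deque([s])
        while q:
            (i, j) = q.popleft()
            for v in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= v[0] < m and 0 <= v[1] < m and v not in d:
                    d[v] = d[(i, j)] + 1
                    q.append(v)
        return d
    return {s: bfs(s) for s in verts}

def boundary_cycle(m):
    """Boundary vertices of the grid, listed in clockwise circular order."""
    top = [(0, j) for j in range(m)]
    right = [(i, m - 1) for i in range(1, m)]
    bottom = [(m - 1, j) for j in range(m - 2, -1, -1)]
    left = [(i, 0) for i in range(m - 2, 0, -1)]
    return top + right + bottom + left

def is_monge(m):
    """Check the circular Monge condition on all boundary quadruples."""
    dist = grid_distances(m)
    cyc = boundary_cycle(m)
    # combinations() emits quadruples in list order, hence in circular order.
    for a, b, c, d in combinations(cyc, 4):
        cross = dist[a][c] + dist[b][d]
        if cross < dist[a][b] + dist[c][d] or cross < dist[a][d] + dist[b][c]:
            return False
    return True
```

This is only a sanity check of the claimed property on a toy grid, not part of the construction itself.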
For each $1\le i\le L$, we define $G'_i$ to be the subgraph of $G'$ induced by the set of all vertices in $G'$ that have distance at most $10/\varepsilon$ from terminal $t^{i,1}_1$. Since in $d$, the distance between every pair of terminals in $\set{t^{1,1}_1,\ldots,t^{L,1}_1}$ is $1/\varepsilon^2$, it is easy to see that the graphs $\set{G'_1,\ldots,G'_L}$ are mutually vertex-disjoint. On the other hand, it is easy to verify that, for every $1\le i\le L$ and every pair $t,t'$ of terminals in $T^i$, the shortest path in $G'$ connecting $t$ to $t'$ is entirely contained in $G'_i$. Therefore, for each $1\le i\le L$, $(G'_i, T^i)$ is an aligned $\varepsilon/3$-emulator for $(G,T^i)$. From similar arguments in \cite{KNZ14}, we get that $|V(G'_i)|\ge \Omega(|T^i|^2)=\Omega(1/\varepsilon^2)$. Therefore, $|V(G')|\ge \sum_{1\le i\le L}|V(G'_i)|\ge L\cdot \Omega(1/\varepsilon^2)= \Omega(k/\varepsilon)$. This shows that any aligned $(\varepsilon/3)$-emulator for $(G,T)$ has size at least $\Omega(k/\varepsilon)$. Theorem~\ref{thm: one-hole lower bound} now follows by scaling. \subsection{Calculations for size and error bounds in \Cref{sec: planar emulator}} \label{apd: calculations} For convenience, we denote $\lambda=\lambda^*$. We prove the following observations. \begin{observation} \label{Obs:size} Let $r_1,\ldots,r_t$ be a sequence of integers, such that $r_1\le k$, $r_t\ge \lambda$, and for each $1\le i\le t-1$, $r_i\ge (10/9)\cdot r_{i+1}$. Then $\sum_{1\le i\le t}(\log_{(10/9)} r_i)^{-2}\le 1/(\log_{(10/9)} \lambda -1)$. \end{observation} \begin{proof} Since for each $1\le i\le t-1$, $r_{i+1}\le (9/10)\cdot r_i$, $\log_{(10/9)} r_{i+1}\le \log_{(10/9)} r_i-1$. Therefore, \[ \sum_{1\le i\le t}\frac{1}{(\log_{(10/9)} r_i)^{2}}\le \sum_{j\ge \log_{(10/9)}\lambda}\frac{1}{j^2}\le \sum_{j\ge \log_{(10/9)}\lambda}\bigg(\frac{1}{j-1}-\frac{1}{j}\bigg)\le \frac{1}{\log_{(10/9)} \lambda -1}. 
\]
\aftermath
\end{proof}

\begin{observation}
\label{Obs:error}
Let $r_1,\ldots,r_t$ be a sequence of integers, such that $r_1\le k$, $r_t\ge \lambda$, and for each $1\le i\le t-1$, $r_i\ge (10/9)\cdot r_{i+1}$. Then $\sum_{1\le i\le t}(\log r_i)^{4}/r_i^{0.1}\le 101(\log \lambda)^{4}/\lambda^{0.1}$.
\end{observation}
\begin{proof}
Consider any index $1\le i\le t-1$. Denote $x=\log r_i/\log r_{i+1}$, so $r_i=(r_{i+1})^x$. Assume first that $x<1+10^{-100}$. Then, since $r_i\ge (10/9)\cdot r_{i+1}$, we get that
\[\bigg(\frac{(\log r_{i})^4}{r_i^{0.1}}\bigg)/\bigg(\frac{(\log r_{i+1})^4}{r_{i+1}^{0.1}}\bigg) =\frac{r_{i+1}^{0.1}}{r_{i}^{0.1}}\cdot \bigg(\frac{\log r_{i}}{\log r_{i+1}}\bigg)^4 \le \frac{99}{100}\cdot x^4\le \frac{100}{101}.\]
Assume now that $x\ge 1+10^{-100}$. Then, since $r_{i+1}\ge \lambda$, from the definition of $\lambda$ we get that
\[\bigg(\frac{(\log r_{i})^4}{r_i^{0.1}}\bigg)/\bigg(\frac{(\log r_{i+1})^4}{r_{i+1}^{0.1}}\bigg) =\frac{r_{i+1}^{0.1}}{r_i^{0.1}}\cdot \bigg(\frac{\log r_{i}}{\log r_{i+1}}\bigg)^4 = \frac{x^4}{(r_{i+1})^{\frac{x-1}{10}}} \le \frac{x^4}{\lambda^{\frac{x-1}{10}}}\le \frac{100}{101}.\]
In both cases, the ratio between consecutive terms of the sum is at most $100/101$, and so
\[ \sum_{1\le i\le t}\frac{(\log r_i)^{4}}{r_{i}^{0.1}}< \frac{(\log \lambda)^{4}}{\lambda^{0.1}}\cdot\bigg(1+\frac{100}{101}+\big(\frac{100}{101}\big)^2+\cdots \bigg)\le \frac{101(\log \lambda)^{4}}{\lambda^{0.1}}. \]
\aftermath
\end{proof}

\input{branch_pts}

\subsection{Proof of Claim~\ref{clm: glueset_emulators}}
\label{apd: Proof of glueset_emulators}
Let $u,u'$ be terminals in $U$. We will show that $e^{-\varepsilon}\cdot \textnormal{\textsf{dist}}_Z(u,u')\le \textnormal{\textsf{dist}}_{\hat H}(u,u')\le e^{\varepsilon}\cdot\textnormal{\textsf{dist}}_Z(u,u')$. On the one hand, let $Q$ be the $u$-$u'$ shortest path in $\hat H$. We view path $Q$ as being directed from $u$ to $u'$.
Let $\set{u_1,\ldots,u_k}$ be the set of all inner vertices of $Q$ that belong to $V^*\cup Y$ (recall that $V^*$ is the set of branch vertices), where the vertices are indexed according to the order in which they appear on $Q$. Therefore, if we set $u_0=u$ and $u_{k+1}=u'$, then for each $0\le i\le k$, either one of $u_i,u_{i+1}$ is a branch vertex and so $\textnormal{\textsf{dist}}_Z(u_i,u_{i+1})=\textnormal{\textsf{dist}}_{\hat H}(u_i,u_{i+1})$, or $u_i,u_{i+1}$ are both vertices of $Y$ and belong to the same instance in ${\mathcal{H}}$ and so $\textnormal{\textsf{dist}}_{\hat H}(u_i,u_{i+1})\ge e^{-\varepsilon}\cdot\textnormal{\textsf{dist}}_{Z}(u_i,u_{i+1})$. Thus, if we set, for each $0\le i\le k$, $H_{R_i}$ to be the graph in ${\mathcal{H}}$ to which vertices $u_i,u_{i+1}$ belong, then
\[
\begin{split}
\textnormal{\textsf{dist}}_{\hat H}(u,u') & = \sum_{0\le i\le k}\textnormal{\textsf{dist}}_{\hat H}(u_i,u_{i+1}) \ge \sum_{0\le i\le k}\textnormal{\textsf{dist}}_{H_{R_i}}(u_i,u_{i+1})\\
& \ge \sum_{0\le i\le k}e^{-\varepsilon}\cdot \textnormal{\textsf{dist}}_{Z_{R_i}}(u_i,u_{i+1}) \ge \sum_{0\le i\le k}e^{-\varepsilon}\cdot \textnormal{\textsf{dist}}_{Z}(u_i,u_{i+1})\ge e^{-\varepsilon}\cdot \textnormal{\textsf{dist}}_{Z}(u,u').
\end{split}
\]
On the other hand, let $Q'$ be the $u$-$u'$ shortest path in $Z$. We view path $Q'$ as being directed from $u$ to $u'$. Let $\set{u'_1,\ldots,u'_k}$ be the set of all inner vertices of $Q'$ that belong to $V^*\cup Y$ (recall that $V^*$ is the set of branch vertices), where the vertices are indexed according to the order in which they appear on $Q'$.
Therefore, if we set $u'_0=u$ and $u'_{k+1}=u'$, then for each $0\le i\le k$, either one of $u'_i,u'_{i+1}$ is a branch vertex and so $\textnormal{\textsf{dist}}_Z(u'_i,u'_{i+1})=\textnormal{\textsf{dist}}_{\hat H}(u'_i,u'_{i+1})$, or $u'_i,u'_{i+1}$ are both vertices of $Y$ and belong to the same instance in ${\mathcal{H}}$ and so $\textnormal{\textsf{dist}}_Z(u'_i,u'_{i+1})\ge e^{-\varepsilon}\cdot\textnormal{\textsf{dist}}_{\hat H}(u'_i,u'_{i+1})$. Thus, if we set, for each $0\le i\le k$, $H_{R_i}$ to be the graph in ${\mathcal{H}}$ that vertices $u'_i,u'_{i+1}$ belong to, then \[ \begin{split} \textnormal{\textsf{dist}}_Z(u,u') & = \sum_{0\le i\le k}\textnormal{\textsf{dist}}_Z(u'_i,u'_{i+1}) \ge \sum_{0\le i\le k}\textnormal{\textsf{dist}}_{Z_{R_i}}(u'_i,u'_{i+1})\\ & \ge \sum_{0\le i\le k}e^{-\varepsilon}\cdot \textnormal{\textsf{dist}}_{H_{R_i}}(u'_i,u'_{i+1}) \ge \sum_{0\le i\le k}e^{-\varepsilon}\cdot \textnormal{\textsf{dist}}_{\hat H}(u'_i,u'_{i+1})\ge e^{-\varepsilon}\cdot \textnormal{\textsf{dist}}_{\hat H}(u,u'). \end{split} \] \subsection{Proof of Claim~\ref{clm: ratio loss for gluepathset}} \label{apd: Proof of ratio loss for gluepathset} We denote by $\ell$ the level that set $S$ belongs to. We use the following simple observations. \begin{observation} \label{obs: S' easy} For every pair $(u,u')$ with $u\in S$ and $u'\in S'$, $\textnormal{\textsf{dist}}_{H}(u,u')\ge \mu^{\ell+1}$. For every pair $(u,u')$ of terminals in $S'$ that do not belong to the same graph in ${\mathcal{H}}$, $\textnormal{\textsf{dist}}_{H}(u,u')\ge \mu^{\ell+1}$. \end{observation} \begin{proof} From the construction of the collection ${\mathcal{S}}$ and the definition of sets $S,S',S^*$, if $u\in S$ and $u'\in S'$, then $u,u'$ do not belong to the same $(\ell+1)$-level set, and so $\textnormal{\textsf{dist}}_H(u,u')>\mu^{\ell+1}$. Consider now a pair $u,u'$ of terminals in $S'$ that do not belong to the same graph in ${\mathcal{H}}$. 
From the construction of the graphs in ${\mathcal{H}}$, there must exist a pair $\hat u,\hat u'$ of terminals in $S$, such that the pairs $(\hat u,\hat u')$ and $(u, u')$ are crossing. Therefore, from the Monge property,
\[
\textnormal{\textsf{dist}}_H(u,u')\ge \textnormal{\textsf{dist}}(u,\hat u)+\textnormal{\textsf{dist}}(u',\hat u')- \textnormal{\textsf{dist}}(\hat u,\hat u') \ge \mu^{\ell+1}+\mu^{\ell+1}-2r\mu^{\ell}>\mu^{\ell+1},
\]
where we have used the fact (from Observation~\ref{obs: diameter}) that $\textnormal{\textsf{dist}}(\hat u,\hat u')\le 2r\mu^{\ell}$.
\end{proof}
Let $u,u'$ be terminals in $U$. We will show that $\textnormal{\textsf{dist}}_{\hat H}(u,u')\le \textnormal{\textsf{dist}}_H(u,u')\le e^{\varepsilon_r}\cdot\textnormal{\textsf{dist}}_{\hat H}(u,u')$. If vertices $u,u'$ belong to the same instance in ${\mathcal{H}}$, then since the instances in ${\mathcal{H}}$ are obtained by cutting along shortest paths in $H$, it is easy to see that $\textnormal{\textsf{dist}}_H(u,u')=\textnormal{\textsf{dist}}_{\hat H}(u,u')$. Therefore, we assume from now on that terminals $u,u'$ do not belong to the same instance in ${\mathcal{H}}$. We denote by $Y$ the set of all vertices that belong to more than one instance in ${\mathcal{H}}$. Recall that, in the procedure $\textsc{Split}$, we have sliced $H$ open along a set of shortest paths in $H$. Let ${\mathcal{R}}$ be the collection of regions (of $H$) that we get. Recall that each instance in ${\mathcal{H}}$ corresponds to a region in ${\mathcal{R}}$. We say that an instance $(H_R,U_R)\in {\mathcal{H}}$ is a \EMPH{regular} instance if the corresponding region $R$ is surrounded by (i) a contiguous segment of the outer boundary of $H$ and (ii) the image of a single path in ${\mathcal{P}}$.
Since the paths are well-structured, when we consider a $u$-$u'$ shortest path $Q$ in $H$, we can assume that, for each regular instance $(H_R,U_R)\in {\mathcal{H}}$ with $u,u'\notin V(H_R)$, the intersection between $Q$ and $H_R$ is a subpath of the path in ${\mathcal{P}}$ that surrounds the region $R$, and both endpoints of this subpath are branch vertices. Consider now the $u$-$u'$ shortest path $Q$ in $H$. Assume that $u\in H_{R}$ and $u'\in H_{R'}$. We view path $Q$ as being directed from $u$ to $u'$. Let $v$ be the last vertex of $Q$ that belongs to $H_R$, and let $v'$ be the first vertex of $Q$ after $v$ that belongs to $H_{R'}$. We distinguish between the following cases.

\medskip
\noindent\textbf{Case 1. $v\ne v'$.} From the construction of graph $\hat H$ and the above discussion, it is easy to verify that the entire path $Q$ is also contained in graph $\hat H$, so $\textnormal{\textsf{dist}}_{\hat H}(u,u')\le \textnormal{\textsf{dist}}_{H}(u,u')$. On the other hand, it is easy to verify that any shortest path in $\hat H$ connecting $u$ to $u'$ is also entirely contained in $H$, so $\textnormal{\textsf{dist}}_{\hat H}(u,u')\ge \textnormal{\textsf{dist}}_{H}(u,u')$. Therefore, $\textnormal{\textsf{dist}}_{\hat H}(u,u')= \textnormal{\textsf{dist}}_{H}(u,u')$.

\medskip
\noindent \textbf{Case 2. $v=v'$.} This means that path $Q$ only touches two regions, $R$ and $R'$. If one of $u,u'$ belongs to set $S'$, then from Observation \ref{obs: S' easy} and the fact (from Observation~\ref{obs: diameter}) that the boundary paths of $R$ and $R'$ have total length at most $2r\mu^{\ell}$, it is easy to verify that
\[\textnormal{\textsf{dist}}_{H}(u,u')\le \textnormal{\textsf{dist}}_{\hat H}(u,u')\le (1+O(1/r))\cdot\textnormal{\textsf{dist}}_{H}(u,u')\le e^{\varepsilon_r}\cdot\textnormal{\textsf{dist}}_{H}(u,u').\]
If both $u,u'$ belong to $S$, then from the construction of $\hat H$, $\textnormal{\textsf{dist}}_{H}(u,u')\le \textnormal{\textsf{dist}}_{\hat H}(u,u')$.
It remains to consider the case where at least one of $u,u'$ belongs to set $S^*$. Assume without loss of generality that $u\in S^*$. Since the set $Y\cap U_R$ contains an $\varepsilon_r$-cover of $u$ on the boundary path of $R$, there exists a vertex $\hat v\in Y\cap U_R$, such that $\textnormal{\textsf{dist}}_H(u,\hat v)+\textnormal{\textsf{dist}}_H(\hat v,v)\le e^{\varepsilon_r}\cdot \textnormal{\textsf{dist}}_H(u,v)$. In this case we denote by $v_1$ the copy of $v$ in $H_R$ and by $v_2$ the copy of $v$ in $H_{R'}$, then
\[
\begin{split}
\textnormal{\textsf{dist}}_{H}(u,u')\le \textnormal{\textsf{dist}}_{\hat H}(u,u')\le & \text{ } \textnormal{\textsf{dist}}_{\hat H}(u,\hat v)+\textnormal{\textsf{dist}}_{\hat H}(\hat v,v_2)+\textnormal{\textsf{dist}}_{\hat H}(v_2,u')\\
\le & \text{ }\textnormal{\textsf{dist}}_{H}(u,\hat v)+\textnormal{\textsf{dist}}_{H}(\hat v,v)+\textnormal{\textsf{dist}}_{H}(v,u')\\
\le & \text{ } e^{\varepsilon_r}\cdot \textnormal{\textsf{dist}}_{H}(u,v)+\textnormal{\textsf{dist}}_{H}(v,u') \le e^{\varepsilon_r}\cdot \textnormal{\textsf{dist}}_{H}(u,u').
\end{split}
\]

\section{Missing Proofs in \Cref{sec: general graph}}
\subsection{Complete Description of Procedures $\textsc{Split}_h$ and $\textsc{Glue}_h$}
\label{apd: cut and glue}

\paragraph{Splitting.} The input to procedure \emph{$\textsc{Split}_h$} consists of
\begin{itemize}
\item an $h$-hole instance $(H,U)$;
\item a path $P$ connecting a pair of its terminals that lie on different holes; and
\item a set $Y$ of vertices in $P$ that contains both endpoints of $P$.
\end{itemize}
The output of procedure $\textsc{Split}_h$ is an $(h-1)$-hole instance $(\tilde H, \tilde U)$ that is constructed as follows. Let $u,u'$ be the endpoints of $P$. We denote by $\gamma$ the curve representing the image of path $P$ in $H$, and view it as being directed from $u$ to $u'$. For each $v\in V(P)$, we define $\delta_1(v)$ ($\delta_2(v)$, resp.)
as the set of all edges incident to $v$ in graph $H$ whose images lie on the left (right, resp.) side of $\gamma$, as we traverse along $\gamma$ from $u$ to $u'$. We now modify the graph $H$ as follows. Replace each vertex $v\in V(P)$ by two new vertices $v_1$ and $v_2$, where $v_1$ is incident to all edges in $\delta_1(v)$ and $v_2$ is incident to all edges in $\delta_2(v)$. Then we add, for each edge $(v,v')$ of path $P$, an edge $(v_1,v'_1)$ and an edge $(v_2,v'_2)$. The resulting graph is denoted by $\tilde H$. We naturally construct a planar drawing of graph $\tilde H$, as follows. We start from the drawing $\phi$ associated with instance $(H,U)$. We first erase from it the images of all vertices and edges of $P$. Denote by $\alpha$ ($\alpha'$, resp.) the hole in $\phi$ whose boundary contains the image of $u$ ($u'$, resp.). Let $S$ be a thin strip around the curve $\gamma$. We draw the new vertices $u_1,u_2$ at the intersections of $S$ and the boundary of hole $\alpha$, where $u_1$ lies on the left of $\gamma$ and $u_2$ lies on the right of $\gamma$. Similarly, we draw the new vertices $u'_1,u'_2$ at the intersections of $S$ and the boundary of hole $\alpha'$, where $u'_1$ lies on the left of $\gamma$ and $u'_2$ lies on the right of $\gamma$. Now for every other vertex $v\in V(P)$, we draw the new vertex $v_1$ ($v_2$, resp.) on the boundary of $S$ just to the left (right, resp.) of the old image of $v$ in $\phi$. The images of all other vertices remain the same as in $\phi$. For each vertex $v\in V(P)$ and each edge $e\in \delta_1(v)$ ($\delta_2(v)$, resp.), we slightly modify the image of $e$ so that it ends at $v_1$ ($v_2$, resp.). Lastly, for each edge $(v,v')$ of $P$, we draw the image of the new edge $(v_1,v'_1)$ ($(v_2,v'_2)$, resp.) as the segment of the boundary of strip $S$ between the points representing the images of $v_1,v'_1$ ($v_2,v'_2$, resp.). This completes the construction of a planar drawing of $\tilde H$, which we denote by $\tilde \phi$.
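The combinatorial part of this surgery (duplicating each path vertex and reattaching its incident edges by side) is mechanical once the side classification $\delta_1,\delta_2$ is known; computing that classification requires the planar embedding. A hedged Python sketch with illustrative names, which takes the side classification as input and assumes for simplicity that every non-path edge touches $P$ in at most one endpoint:

```python
def split_along_path(edges, path, side):
    """edges: {frozenset({u, v}): weight}; path: vertices of P in order;
    side[(v, w)] in {1, 2}: which side of P the edge {v, w} leaves path
    vertex v on (this information comes from the planar embedding).
    Each path vertex v is replaced by ('copy', v, 1) and ('copy', v, 2).
    Simplifying assumption: non-path edges touch P in at most one endpoint."""
    on_path = set(path)
    path_edges = {frozenset(e) for e in zip(path, path[1:])}
    new_edges = {}
    for e, w in edges.items():
        u, v = tuple(e)
        if e in path_edges:
            # Every edge of P is duplicated, once per side.
            new_edges[frozenset({('copy', u, 1), ('copy', v, 1)})] = w
            new_edges[frozenset({('copy', u, 2), ('copy', v, 2)})] = w
        elif u in on_path:
            new_edges[frozenset({('copy', u, side[(u, v)]), v})] = w
        elif v in on_path:
            new_edges[frozenset({u, ('copy', v, side[(v, u)])})] = w
        else:
            new_edges[e] = w
    return new_edges
```

For a path $P=(a,b,c)$ with one edge attached to $b$ on each side, the two path edges are duplicated and each attached edge moves to the copy of $b$ on its own side, matching the description above.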
See \Cref{fig: splitting h-hole before} and \Cref{fig: splitting h-hole after} for an illustration. We now define $\tilde U$ to be the set obtained from $U$ by replacing each vertex $y\in Y$ with two new vertices $y_1$ and $y_2$ (since such a vertex $y$ belongs to path $P$), so $|\tilde U|=|U|+2|Y|$. The instance $(\tilde H,\tilde U)$ is the output of procedure $\textsc{Split}_h$. We now show that it is indeed an $(h-1)$-hole instance. We define area $\beta=\alpha\cup S\cup \alpha'$. It is easy to observe that no vertices or edges are drawn inside the interior of area $\beta$, and if we denote by $U(\alpha)$ the set of terminals in $H$ that lie on the boundary of $\alpha$, and define set $U(\alpha')$ similarly, then in $\tilde H$, the boundary of $\beta$ contains the images of terminals in $(U(\alpha)\setminus \set{u})\cup (U(\alpha')\setminus \set{u'})\cup \set{y_1,y_2\mid y\in Y}$. Therefore $(\tilde H,\tilde U)$ is a valid $(h-1)$-hole instance.

\paragraph{Gluing.} We next describe the procedure \emph{$\textsc{Glue}_{h}$}, which is intuitively the reverse of procedure $\textsc{Split}_{h}$. Assume that we have applied the procedure $\textsc{Split}_h$ to some $h$-hole instance $(H,U)$, some path $P$ connecting a pair $u,u'$ of terminals in $U$ that lie on holes $\alpha,\alpha'$ respectively, and a subset $Y$ of vertices in $P$. Let $(\tilde H,\tilde U)$ be the $(h-1)$-hole instance that the procedure $\textsc{Split}_h$ outputs, where holes $\alpha, \alpha'$ are merged into hole $\beta$. We then denote, for each $y\in Y$, by $y^1$ and $y^2$ the two terminals in $\tilde U$ obtained by splitting $y$. The procedure $\textsc{Glue}_{h}$ takes as input an emulator $(\tilde H',\tilde U)$ for instance $(\tilde H,\tilde U)$, and works as follows. We let graph $H'$ be obtained from graph $\tilde H'$ by identifying, for each $y\in Y$, vertex $y^1$ with vertex $y^2$ (and we name the obtained vertex $y$). Denote $\tilde Y=\set{y^1,y^2\mid y\in Y}$.
We then set $U'=(\tilde U\setminus\tilde Y)\cup \set{u,u'}$. Clearly, $U'=U$. The output of algorithm $\textsc{Glue}_h$ is the instance $(H',U)$. We associate with instance $(H',U)$ a planar drawing with the terminals of $U$ drawn on the boundary of $h$ holes, as follows. We denote by $\gamma$ the boundary segment of hole $\beta$ from $u^2$ to $u^1$ that does not contain any other vertex of $\tilde Y$, and denote by $\gamma'$ the boundary segment of hole $\beta$ from $(u')^1$ to $(u')^2$ that does not contain any other vertex of $\tilde Y$. We now compute, for each $y\in Y$, a curve $\gamma_y$ connecting $y^1$ to $y^2$, such that the curves $\set{\gamma_y\mid y\in Y}$ all lie in hole $\beta$ and are mutually disjoint. We now move, for each $y\in Y$, the images of $y^1$ and $y^2$ along the curve $\gamma_y$ towards each other until they are identified. Now $\gamma$ becomes a closed curve that surrounds a region which does not contain the image of any vertices or edges in its interior. We designate this region as hole $\alpha$. We define hole $\alpha'$ for the closed curve $\gamma'$ similarly. It is easy to verify that all terminals of $U'$ that previously lay on the boundary of hole $\beta$ now lie on the boundary of either hole $\alpha$ or hole $\alpha'$. See \Cref{fig: gluepath_2} for an illustration. Therefore, $(H',U)$ is a valid $h$-hole instance, and it is easy to verify that instance $(H',U)$ is aligned with instance $(H,U)$.

\subsection{Proof of Claim~\ref{clm: split glue h holes}}
\label{apd: Proof of split glue h holes}
For convenience, we rename the selected terminals $u,u'$ by $\hat u, \hat u'$, respectively. Throughout the proof, we will use $u,u'$ to denote some pair of terminals in $U$, and we will show that $e^{-\varepsilon'}\cdot\textnormal{\textsf{dist}}_{Z}(u,u')\le \textnormal{\textsf{dist}}_{\hat H}(u,u')\le e^{\varepsilon'}\cdot\textnormal{\textsf{dist}}_{Z}(u,u')$. On the one hand, let $Q$ be the shortest path in $\hat H$ connecting $u$ to $u'$.
We view the path $Q$ as being directed from $u$ to $u'$. Recall that in graph $\hat H$, for each vertex $y\in Y$, we have denoted by $\delta_1(y)$ the set of incident edges of $y$ that lie on one side of path $P$, and by $\delta_2(y)$ the set of incident edges of $y$ that lie on the other side of path $P$. We denote $E_1=\bigcup_{y\in Y}\delta_1(y)$ and $E_2=\bigcup_{y\in Y}\delta_2(y)$. If either $E(Q)\cap E_1=\varnothing$ or $E(Q)\cap E_2=\varnothing$ holds, then it is immediate to verify that path $Q$ is entirely contained in graph $\tilde H$. Since $(\tilde Z,\tilde U)$ is an $\varepsilon$-emulator for instance $(\tilde H,\tilde U)$, we get that
\[\textnormal{\textsf{dist}}_{\hat H}(u,u')=\textnormal{\textsf{dist}}_{\tilde H}(u,u')\ge e^{-\varepsilon}\cdot\textnormal{\textsf{dist}}_{\tilde Z}(u,u')\ge e^{-\varepsilon}\cdot\textnormal{\textsf{dist}}_{Z}(u,u').\]
Assume now that $E(Q)\cap E_1\ne \varnothing$ and $E(Q)\cap E_2\ne \varnothing$. Recall that graph $\hat H$ contains two copies $P_1,P_2$ of path $P$ that correspond to the sides of $E_1, E_2$, respectively. We can assume without loss of generality that path $Q$ is the concatenation of (i) a path $Q_1$ connecting $u$ to some vertex $x_1\in V(P_1)$, that is internally disjoint from $P_1$; (ii) a subpath $P'_1$ of $P_1$ connecting $x_1$ to some vertex $y\in Y$; (iii) a subpath $P'_2$ of $P_2$ connecting $y$ to some vertex $x'_2\in V(P_2)$; and (iv) a path $Q'_2$ connecting $x'_2$ to $u'$, that is internally disjoint from $P_2$. Recall that $(\tilde Z,\tilde U)$ is an $\varepsilon$-emulator for instance $(\tilde H,\tilde U)$, and instance $(Z,U)$ is obtained by applying the procedure $\textsc{Glue}_h$ to instance $(\tilde Z,\tilde U)$. We denote by $y_1,y_2$ the copies of $y$ in graph $\tilde H$, where $y_1\in V(P_1)$ and $y_2\in V(P_2)$.
Then \begin{align*} \textnormal{\textsf{dist}}_{\hat H}(u,u') &= \textnormal{\textsf{dist}}_{\hat H}(u,x_1)+ \textnormal{\textsf{dist}}_{P_1}(x_1,y)+\textnormal{\textsf{dist}}_{P_2}(y,x'_2)+\textnormal{\textsf{dist}}_{\hat H}(x'_2,u')\\ &\ge \textnormal{\textsf{dist}}_{\tilde H}(u,x_1)+ \textnormal{\textsf{dist}}_{\tilde H}(x_1,y_1)+\textnormal{\textsf{dist}}_{\tilde H}(y_2,x'_2)+\textnormal{\textsf{dist}}_{\tilde H}(x'_2,u')\\ &\ge e^{-\varepsilon}\cdot (\textnormal{\textsf{dist}}_{\tilde Z}(u,x_1)+ \textnormal{\textsf{dist}}_{\tilde Z}(x_1,y_1)+\textnormal{\textsf{dist}}_{\tilde Z}(y_2,x'_2)+\textnormal{\textsf{dist}}_{\tilde Z}(x'_2,u'))\\ &\ge e^{-\varepsilon}\cdot (\textnormal{\textsf{dist}}_{Z}(u,x_1)+ \textnormal{\textsf{dist}}_{Z}(x_1,y)+\textnormal{\textsf{dist}}_{Z}(y,x'_2)+\textnormal{\textsf{dist}}_{Z}(x'_2,u'))\\ &\ge e^{-\varepsilon}\cdot \textnormal{\textsf{dist}}_{Z}(u,u'). \end{align*} On the other hand, let $Q'$ be a shortest path in $Z$ connecting $u$ to $u'$. We view the path $Q'$ as being directed from $u$ to $u'$. By a similar analysis, if $Q'$ does not contain vertices of $Y$, then \[\textnormal{\textsf{dist}}_{Z}(u,u')=\textnormal{\textsf{dist}}_{\tilde Z}(u,u')\ge e^{-\varepsilon}\cdot\textnormal{\textsf{dist}}_{\tilde H}(u,u')\ge e^{-\varepsilon}\cdot\textnormal{\textsf{dist}}_{\hat H}(u,u').\] We assume from now on that $Q'$ contains some vertices of $Y$. In graph $\tilde Z$, we denote by $\tilde E_1$ the set of edges incident to some vertex of $Y_1=\set{y_1\mid y\in Y}$, and define the set $\tilde E_2$ for $Y_2=\set{y_2\mid y\in Y}$ similarly. Let $y^1,\ldots,y^r$ be the vertices of $Y\cap V(Q')$, where the vertices are indexed according to their appearance on $Q'$. For each $0\le j\le r$, we denote by $Q_j$ the subpath of $Q'$ between vertices $y^j$ and $y^{j+1}$ (where we set $y^0=u$ and $y^{r+1}=u'$). For each $0\le j\le r$, we set $a(j)$ to be $1$ ($2$, resp.)
if the first edge of $Q_j$ belongs to $\tilde E_1$ ($\tilde E_2$, resp.), and set $b(j)$ to be $1$ ($2$, resp.) if the last edge of $Q_j$ belongs to $\tilde E_1$ ($\tilde E_2$, resp.). Since $(\tilde Z,\tilde U)$ is an $\varepsilon$-emulator for instance $(\tilde H,\tilde U)$, and instance $(Z,U)$ is obtained by applying the procedure $\textsc{Glue}_h$ to instance $(\tilde Z,\tilde U)$, we get that \begin{align*} \textnormal{\textsf{dist}}_{Z}(u,u') & = \sum_{0\le j\le r}\textnormal{\textsf{dist}}_{Z}(y^j,y^{j+1}) = \sum_{0\le j\le r}\textnormal{\textsf{dist}}_{\tilde Z}(y^j_{a(j)},y^{j+1}_{b(j)}) \\ &\ge \sum_{0\le j\le r} e^{-\varepsilon}\cdot \textnormal{\textsf{dist}}_{\tilde H}(y^j_{a(j)},y^{j+1}_{b(j)}) \ge \sum_{0\le j\le r} e^{-\varepsilon}\cdot\textnormal{\textsf{dist}}_{\hat H}(y^j,y^{j+1}) \ge e^{-\varepsilon}\cdot\textnormal{\textsf{dist}}_{\hat H}(u,u'). \end{align*} Altogether, we get that $e^{-\varepsilon'}\cdot\textnormal{\textsf{dist}}_{Z}(u,u')\le \textnormal{\textsf{dist}}_{\hat H}(u,u')\le e^{\varepsilon'}\cdot\textnormal{\textsf{dist}}_{Z}(u,u')$. \subsection{Proof of Claim~\ref{clm: ratio loss for glue h holes}} \label{apd: Proof of ratio loss for glue h holes} For convenience, we rename the selected terminals $u,u'$ by $\hat u, \hat u'$, respectively. Throughout the proof, we will use $u,u'$ to denote some pair of terminals in $U$, and we will show that $e^{-\varepsilon'}\cdot\textnormal{\textsf{dist}}_{\hat H}(u,u')\le \textnormal{\textsf{dist}}_H(u,u')\le \textnormal{\textsf{dist}}_{\hat H}(u,u')$. On the one hand, let $Q$ be a shortest path in $H$ connecting $u$ to $u'$. We view $Q$ as being directed from $u$ to $u'$. If $V(Q)\cap V(P)= \varnothing$, then it is immediate to verify that path $Q$ is entirely contained in graph $\hat H$, so $\textnormal{\textsf{dist}}_{\hat H}(u,u')\le \textnormal{\textsf{dist}}_{H}(u,u')$. Assume now that $V(Q)\cap V(P)\ne \varnothing$.
Since $Q$ and $P$ are shortest paths in $H$, $Q\cap P$ is a subpath of both $Q$ and $P$. Let $v,v'$ be the endpoints of this subpath, where $v$ is closer to $u$ and $v'$ is closer to $u'$ on $Q$ (note that it is possible that $v=v'$). Since set $Y$ contains an $\varepsilon'$-cover of $u$ on $P$, there exists some vertex $y\in Y$, such that $\textnormal{\textsf{dist}}_H(u,y)+\textnormal{\textsf{dist}}_H(y,v)\le e^{\varepsilon'}\cdot \textnormal{\textsf{dist}}_H(u,v)$; and similarly, since set $Y$ contains an $\varepsilon'$-cover of $u'$ on $P$, there exists some vertex $y'\in Y$, such that $\textnormal{\textsf{dist}}_H(u',y')+\textnormal{\textsf{dist}}_H(y',v')\le e^{\varepsilon'}\cdot \textnormal{\textsf{dist}}_H(u',v')$. From the construction of graph $\hat H$, we get that \[ \begin{split} \textnormal{\textsf{dist}}_{H}(u,u') = & \text{ } \textnormal{\textsf{dist}}_{H}(u,v)+\textnormal{\textsf{dist}}_{H}(v,v')+\textnormal{\textsf{dist}}_{H}(u',v')\\ \ge & \text{ } e^{-\varepsilon'}\cdot(\textnormal{\textsf{dist}}_H(u,y)+\textnormal{\textsf{dist}}_H(y,v))+\textnormal{\textsf{dist}}_{H}(v,v')+e^{-\varepsilon'}\cdot(\textnormal{\textsf{dist}}_H(u',y')+\textnormal{\textsf{dist}}_H(y',v'))\\ \ge & \text{ } e^{-\varepsilon'}\cdot(\textnormal{\textsf{dist}}_{\hat H}(u,y)+\textnormal{\textsf{dist}}_{\hat H}(y,v)+\textnormal{\textsf{dist}}_{\hat H}(v,v')+\textnormal{\textsf{dist}}_{\hat H}(u',y')+\textnormal{\textsf{dist}}_{\hat H}(y',v'))\\ \ge & \text{ } e^{-\varepsilon'}\cdot \textnormal{\textsf{dist}}_{\hat H}(u,u'). \end{split} \] On the other hand, let $Q'$ be a shortest path in $\hat H$ connecting $u$ to $u'$. We view $Q'$ as being directed from $u$ to $u'$. Recall that in graph $H$, for each vertex $y\in Y$, we have denoted by $\delta_1(y)$ the edges incident to $y$ that lie on one side of path $P$, and by $\delta_2(y)$ the edges incident to $y$ that lie on the other side of path $P$.
We denote $E_1=\bigcup_{y\in Y}\delta_1(y)$ and $E_2=\bigcup_{y\in Y}\delta_2(y)$. If either $E(Q')\cap E_1=\varnothing$ or $E(Q')\cap E_2=\varnothing$ holds, then it is immediate to verify that path $Q'$ is entirely contained in graph $H$, so $\textnormal{\textsf{dist}}_{H}(u,u')\le \textnormal{\textsf{dist}}_{\hat H}(u,u')$. Assume now that $E(Q')\cap E_1\ne \varnothing$ and $E(Q')\cap E_2\ne \varnothing$. Recall that graph $\hat H$ contains two copies $P_1,P_2$ of path $P$ that correspond to the sides $E_1, E_2$, respectively. We can assume without loss of generality that path $Q'$ is the concatenation of (i) a path $Q'_1$ connecting $u$ to some vertex $x_1\in V(P_1)$, that is internally disjoint from $P_1$; (ii) a subpath $P'_1$ of $P_1$ connecting $x_1$ to some vertex $y\in Y$; (iii) a subpath $P'_2$ of $P_2$ connecting $y$ to some vertex $x'_2\in V(P_2)$; and (iv) a path $Q'_2$ connecting $x'_2$ to $u'$, that is internally disjoint from $P_2$. Let $x$ be the original copy of $x_1$ in graph $H$, and let $x'$ be the original copy of $x'_2$ in graph $H$. From the construction of graph $\hat H$, we get that \[ \begin{split} \textnormal{\textsf{dist}}_{\hat H}(u,u') = & \text{ } \textnormal{\textsf{dist}}_{\hat H}(u,x_1)+\textnormal{\textsf{dist}}_{P_1}(x_1,y)+\textnormal{\textsf{dist}}_{P_2}(y,x'_2)+\textnormal{\textsf{dist}}_{\hat H}(u',x'_2)\\ \ge & \text{ } \textnormal{\textsf{dist}}_{H}(u,x)+\textnormal{\textsf{dist}}_{H}(x,x')+\textnormal{\textsf{dist}}_{H}(u',x') \ge \textnormal{\textsf{dist}}_{H}(u,u'). \end{split} \] \subsection{Proof of \Cref{L:r-division}} \label{apd: Proof of L:r-division} Similar to Frederickson~\cite{fre-faspp-1987} and Klein-Mozes-Sommer~\cite{kms-srsdp-2013}, we recursively find balanced cycle separators to subdivide the input graph. To control the number of vertices, boundary vertices, holes, and terminals within each piece simultaneously, we ask the cycle separator to balance these quantities in rounds.
Specifically, at recursive level $\ell$: \begin{itemize} \itemsep=0pt \item If $\ell \bmod 4 = 0$, balance the vertices. \item If $\ell \bmod 4 = 1$, balance the boundary vertices. \item If $\ell \bmod 4 = 2$, balance the holes by inserting one \emph{supernode} per hole. \item If $\ell \bmod 4 = 3$, balance the terminals. \end{itemize} We terminate the recursion four rounds after a piece has size at most $r$. The depth of the recursion tree is $O(\log(n/r))$, and an analysis similar to that of Klein-Mozes-Sommer~\cite{kms-srsdp-2013} shows that the number of terminals within each piece is $O(kr/n)$. \section{Applications} \label{S:applications} In this section we present efficient $\varepsilon$-approximation algorithms for several optimization problems on planar graphs that beat their exact counterparts, including multiple-source shortest paths, minimum $(s,t)$-cut, graph diameter, and offline dynamic distance oracles. To put the emphasis on the new ideas presented, we assume the reader is familiar with the various tools for optimization on planar graphs and only provide citations to the earlier literature. \subsection{Approximate Multiple-Source Shortest Paths} \label{SS:mssp} The approximate multiple-source shortest paths data structure (\emph{$\varepsilon$-MSSP}) achieves the following task: preprocess a plane graph $P$ and a set of terminals $U$ on the outerface of $P$ (that is, a one-hole instance $(P,U)$), and answer distance queries between terminal pairs within a $(1+\varepsilon)$ factor. To prove Theorem~\ref{Th:mssp}, apply Theorem~\ref{Th:bootstrap} on $(P,U)$ to construct another one-hole instance $(P',U)$ that is an $\varepsilon$-emulator of $(P,U)$, which has size \[ O\Paren{ \frac{ ({n}/{\log^C n}) \cdot \textnormal{poly}\log n}{\varepsilon^{O(1)}} } = O\Paren{ \frac{n}{\varepsilon^{O(1)} \textnormal{poly}\log n} } \] and takes $O_\varepsilon(n)$ time to construct.
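For intuition, the emulator guarantee used throughout this section can be phrased as a simple predicate on terminal distances. The following sketch checks it directly; the function name and data layout are ours for illustration, not taken from any referenced implementation:

```python
import math

def is_eps_emulator(dist_orig, dist_emul, terminals, eps):
    """Check the multiplicative emulator guarantee
    e^{-eps} * d(u,u') <= d'(u,u') <= e^{eps} * d(u,u')
    for every pair of terminals. Distances are given as dicts
    keyed by frozenset({u, u'})."""
    for i, u in enumerate(terminals):
        for v in terminals[i + 1:]:
            d = dist_orig[frozenset((u, v))]
            d_e = dist_emul[frozenset((u, v))]
            if not (math.exp(-eps) * d <= d_e <= math.exp(eps) * d):
                return False
    return True
```

With $\varepsilon = 0$ the predicate degenerates to exact distance preservation; composing two emulators multiplies the factors, so the $\varepsilon$ parameters add, which is one reason the bounds are stated with $e^{\pm\varepsilon}$ rather than $1\pm\varepsilon$.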
Now construct the MSSP data structure on $P'$ using Klein's algorithm~\cite{kle-msppg-2005}, which takes $O\Paren{ \frac{n}{\varepsilon^{O(1)} \textnormal{poly}\log n} \cdot \log n} = O(n/ \varepsilon^{O(1)})$ time. MSSP answers each query in $O(\log n)$ time, and the returned value is a $(1+\varepsilon)$-approximation of the actual distance between the queried pair because $(P',U)$ is an $\varepsilon$-emulator. This proves Theorem~\ref{Th:mssp}. \subsection{Approximate Minimum Cut} Here we briefly summarize the minimum $(s,t)$-cut algorithm on planar graphs with non-negative weights by Italiano, Nussbaum, Sankowski, and Wulff-Nilsen~\cite{insw-iamcm-2011}. Many details and edge cases are omitted for clarity of presentation. Let $G$ be the input plane graph, with two vertices $s$ and~$t$. \begin{enumerate} \item Compute the dual graph $G^*$ of $G$; it suffices to compute a shortest cycle in $G^*$ that separates the \emph{faces} $s^*$ and $t^*$. Find a shortest $s^*$-$t^*$ path $\pi$ in $G^*$. This step takes $O(n)$ time~\cite{hkrs-fsapg-1997}. \item Construct an $r$-division in $G^*$ respecting $\pi$, where $r \coloneqq \log^6 n$. Cut $\pi$ open; now each vertex on $\pi$ has a copy. This step takes $O(n)$ time~\cite{kms-srsdp-2013}. \item Compute MSSP~\cite{kle-msppg-2005} for each piece in the $r$-division with respect to the boundary vertices. Prepare the Monge heap data structures~\cite{fr-pgnwe-2006}, and represent each piece as a \emph{dense distance graph}. This step takes $O(n \log r) = O(n \log\log n)$ time for the MSSP~\cite{kle-msppg-2005}, and $O(n \log\log n)$ time to set up the Monge heap data structures and dense distance graphs~\cite{fr-pgnwe-2006}. \item Denote the length of $\pi$ by $p$. Compute $p/\log p$ shortest paths between the two copies of evenly spaced points on $\pi$, using Reif's divide-and-conquer strategy~\cite{rei-mscpu-1981}; each shortest path is computed by FR-Dijkstra~\cite{fr-pgnwe-2006} on the dense distance graphs.
Now the graph is cut into $p/\log p$ \emph{slabs}. This step takes $\tilde{O}(n/\sqrt{r} \cdot \log (p/\log p)) \le O(n)$ time. \item Apply Reif's strategy directly on each slab, which now has only $O(\log p)$ vertices from $\pi$; this takes $O(n \log p) = O(n \log\log n)$ time. \end{enumerate} Overall the algorithm takes $O(n \log\log n)$ time, with Step~3 being the bottleneck. We can safely truncate the edge weights to have polynomial range in linear time when solving the minimum $(s,t)$-cut problem. Now, by choosing $r \coloneqq \log^C n$ with a bigger $C$ and replacing Step~3 with an $\varepsilon$-emulator per piece using Theorem~\ref{Th:bootstrap}, the new graph has size $O(\frac{n}{r} \cdot \sqrt{r}\, \textnormal{poly}\log r/\varepsilon^{O(1)}) = O(n / (\varepsilon^{O(1)} \textnormal{poly}\log n))$. We can now compute all $p$ shortest paths (instead of $p/\log p$) in Step~4, with no recursion in Step~5, by applying Reif's divide-and-conquer strategy directly on the emulators, without preparing the MSSP and Monge heap data structures in Step~3 or running FR-Dijkstra in Step~4. Therefore the total running time is now $O_\varepsilon(n)$, proving Theorem~\ref{Th:minimum-cut}. \subsection{Approximate Diameter} Here we summarize the $(1+\varepsilon)$-approximation algorithm for the diameter of planar graphs with non-negative edge weights by Weimann-Yuster~\cite{wy-adpgl-2016} and Chan-Skrepetos~\cite{cs-faddo-2019}. Again we omit some details about marking/unmarking vertices in the actual algorithm to emphasize the core concepts. Let $G$ be the input planar graph. Given three graphs $H$, $H'$ and $H''$, denote by \EMPH{$\textnormal{\textsf{diam}}_{H}(H',H'')$} the largest shortest-path distance with respect to $H$ between a vertex in $H'$ and a vertex in $H''$. \begin{enumerate} \item Compute a \emph{shortest-path} cycle separator $C$ in $G$ that splits $G$ into $A$ and $B$, where $A \cup B = G$ and $A \cap B = C$, using the algorithm by Thorup~\cite{tho-corad-2004}.
This step takes $O(n)$ time. \item Construct an auxiliary graph $G^+$ by selecting $O(1/\varepsilon)$ evenly-spaced \emph{portals} on $C$; run a single-source shortest-path algorithm from each portal $p$, and let $\ell$ be the maximum distance over all paths from all portals; add edges from every vertex in $A$ and $B$ to the portals, with the edge weights being their distances rounded to multiples of $\varepsilon\ell$. This step takes $O(n\cdot (1/\varepsilon))$ time using the linear-time single-source shortest-path algorithm by Henzinger-Klein-Rao-Subramanian~\cite{hkrs-fsapg-1997}. \item Approximate $\textnormal{\textsf{diam}}_{G^+}(A,B)$. This step takes $O(n/\varepsilon) + 2^{O(1/\varepsilon)}$ time using brute force~\cite{wy-adpgl-2016}, or $O(n\cdot(1/\varepsilon)^5)$ time using the farthest Voronoi diagram~\cite{cs-faddo-2019}. \item Build another auxiliary graph $A^+$ from $G$ by first adding \emph{denser portals} on $C$, computing shortest paths between denser portals on $C$ with respect to $B$, and then planarizing the union of all the shortest paths between denser portal pairs so that $A^+$ remains planar. Following Chan-Skrepetos~\cite{cs-faddo-2019}, the number of denser portals can be set to $|G|^{1/8}/\varepsilon$; computing all-pairs shortest paths between denser portals in $B$ takes $O(|B|\log n + \log n\cdot \sqrt{|B|}/\varepsilon^4)$ time using MSSP~\cite{kle-msppg-2005}; $A^+$ has size $|A| + O(|A|^{1/2}/\varepsilon^4)$. Build the graph $B^+$ similarly by switching the roles of $A$ and $B$. \item Approximate $\textnormal{\textsf{diam}}_{A^+}(A,A)$ and $\textnormal{\textsf{diam}}_{B^+}(B,B)$ recursively; the recursion depth is $O(\log n)$. \item Return the maximum of $\textnormal{\textsf{diam}}_{G^+}(A,B)$, $\textnormal{\textsf{diam}}_{A^+}(A,A)$, and $\textnormal{\textsf{diam}}_{B^+}(B,B)$. \end{enumerate} Overall the algorithm takes $O(n \log^2 n + n\log n \cdot (1/\varepsilon)^5)$ time.
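Step~2 is the only place in the algorithm above where approximation enters. A minimal sketch of the selection-and-rounding idea follows; the function and variable names are ours, and rounding down to the grid is one concrete choice of the rounding rule:

```python
def portal_edges(cycle_vertices, dist_from_portal, eps, ell):
    """Pick roughly 1/eps evenly spaced portals on the separator
    cycle C and round every vertex-to-portal distance down to a
    multiple of eps*ell.
    cycle_vertices: vertices of C in cyclic order;
    dist_from_portal: dict portal -> {vertex: exact distance}."""
    k = max(1, int(1 / eps))                 # number of portals
    step = max(1, len(cycle_vertices) // k)
    portals = cycle_vertices[::step][:k]
    unit = eps * ell                         # rounding grid
    edges = []
    for p in portals:
        for v, d in dist_from_portal[p].items():
            edges.append((v, p, int(d / unit) * unit))
    return portals, edges
```

Each rounded weight underestimates the true distance by less than $\varepsilon\ell$, which is where the additive $O(\varepsilon\ell)$ error, and hence the $(1+O(\varepsilon))$ factor on the diameter, comes from.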
Again we can safely truncate the edge weights to have polynomial range when solving the diameter problem. Now we can substitute the construction of $A^+$ and $B^+$ via planarized shortest paths in Step~4 with two $\varepsilon$-emulators using Theorem~\ref{Th:bootstrap}, which take only $O_\varepsilon(|A|+|B|)$ time to construct and have size $O_\varepsilon((|A|^{1/8}+|B|^{1/8})\textnormal{poly}\log n)$. Thus we improve the total running time to $O_\varepsilon(n \log n)$, proving Theorem~\ref{Th:diameter}. \subsection{Offline Dynamic Approximate Distance Oracle} Here we describe the crucial step in the algorithm by Chen \etal~\cite{cgh+-fdcde-2020a} to construct an offline dynamic $(1+\varepsilon)$-approximate distance oracle with $O(\textnormal{poly}\log n)$ query and update time, assuming that a $(1+\varepsilon)$-\emph{distance-approximating minor} of size $\tilde{O}(k)$ for a planar graph of size $n$ with $k$ terminals can be computed in $O(n \textnormal{poly}(\log n, \varepsilon^{-1}))$ time. Given a sequence of graphs $G_0 \subseteq G_1 \subseteq \dots \subseteq G_\ell$, denote $H_p \coloneqq G_p \setminus G_{p-1}$ for any $p \in \set{1,\dots,\ell}$. The proof of Theorem~4.15 in Chen \etal~\cite{cgh+-fdcde-2020a} iteratively constructs graphs $G'_1, \dots, G'_\ell$ in the following way: \[ G'_{p} \coloneqq \textsc{Emulator}(G'_{p-1} \cup H_{p}, T_{p}) \] for some terminal set $T_p$ (irrelevant to the discussion here), where $\textsc{Emulator}(G, T)$ returns an $\varepsilon$-emulator of $G$ with respect to terminal set $T$. When $\textsc{Emulator}(G, T)$ is guaranteed to return a minor of the input graph $G$, one can argue that $G'_{p}$ must be a minor of $G'_{p-1} \cup H_{p}$, which by induction is a minor of $\bigcup_{1 \le k \le p-1} H_k \cup H_{p} = G_p$, which must be planar {\cite[Lemma~4.16]{cgh+-fdcde-2020a}}.
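The iteration above is purely sequential; a tiny sketch with graphs represented as edge sets and an arbitrary emulator callback (all names are ours, purely for illustration) makes the data flow explicit:

```python
def iterate_emulators(H_seq, T_seq, emulator):
    """Run G'_p = Emulator(G'_{p-1} | H_p, T_p) for p = 1..ell,
    starting from G'_0 = empty, and return all intermediate G'_p.
    Graphs are modeled as frozensets of edges; `emulator` is any
    function (edge_set, terminal_set) -> edge_set."""
    G_prime = frozenset()
    result = []
    for H_p, T_p in zip(H_seq, T_seq):
        G_prime = emulator(G_prime | H_p, T_p)
        result.append(G_prime)
    return result
```

The planarity argument below only needs that each call's output is a minor of its input, or, in our setting, of a planar supergraph of the input.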
\smallskip To prove Theorem~\ref{Th:dynamic-oracle}, we follow the algorithm by Chen \etal~\cite{cgh+-fdcde-2020a} almost verbatim; the only missing piece is to prove that $G'_{p}$ remains planar in our setting. Observe that our emulator construction solely relies on the $\textsc{Split}$ and $\textsc{Glue}$ procedures introduced in Section~\ref{SS:split-and-glue}. (The base case from Theorem~\ref{thm: quartergrid enumator} can be replaced by the $O(k^4)$-size distance-approximating minor~\cite{KNZ14}.) While the emulator $G'$ produced by split-and-glue is technically not a minor of the input graph $G$, there is another planar supergraph $\hat G$ modified from $G$ such that $G'$ is a minor of $\hat G$. Now we can proceed to prove that $G'_{p}$ is planar using our construction for $\textsc{Emulator}(G, T)$. \begin{claim} For any $p \in \set{1,\dots,\ell}$, $G'_{p}$ is planar when $\textsc{Emulator}(G, T)$ is implemented using Theorem~\ref{thm:main}. \end{claim} \begin{proof} We will prove the following stronger statement by induction on $p$: there is a planar graph $\hat G_p$, constructed from $G_p$ by vertex splitting (the reverse operation of edge contraction), edge subdivision (breaking an edge into two using a degree-$2$ node), and edge duplication (creating multiedges from an existing edge), that contains $G'_p$ as a minor. We say a plane graph $H$ is a \EMPH{topological minor} of some graph $\hat H$ if $\hat H$ is constructed from $H$ by vertex splitting, edge subdivision, and edge duplication. (Notice that this differs from the standard terminology; in fact it is a topological minor \emph{in the dual}.) Notice the crucial property that if a plane graph $H$ is a topological minor of $\hat H$, then $\hat H$ must also be a plane graph. First we introduce an operation that we will later use in the construction of $\hat G_p$.
Recall that we can \emph{slice} a graph $H$ open along some path $P$ by duplicating every vertex and edge of $P$ to create another path $P'$ identical to $P$. The set of edges incident to each vertex on $P$ is split into two sides naturally based on their cyclic order around the vertex. Now we also add an edge between each vertex on $P$ and its copy in $P'$. We call this operation a \EMPH{pizza slice}. A pizza slice of a graph $H$ must contain $H$ as a topological minor. Every graph constructed by slicing and gluing $H$ along a set of paths is a minor of some pizza slice of $H$. By the induction hypothesis, there is a planar graph $\hat G_{p-1}$ containing $G'_{p-1}$ as a minor and $G_{p-1}$ as a topological minor. Now, because the endpoints of all edges in $H_p$ can still be found in $G'_{p-1}$ and $\hat G_{p-1}$, $G'_{p-1} \cup H_{p}$ is a minor of $\hat G_{p-1} \cup H_{p}$. We know by the induction hypothesis that $\hat G_{p-1}$ contains $G_{p-1}$ as a topological minor, so the edges in $H_p$ can be safely added to $\hat G_{p-1}$ without destroying planarity; therefore $\hat G_{p-1} \cup H_{p}$ is still planar, and so is $G'_{p-1} \cup H_{p}$. Therefore $G'_{p} \coloneqq \textsc{Emulator}(G'_{p-1} \cup H_{p}, T_{p})$ is also planar by the emulator construction. Now we describe the construction of $\hat G_p$ from $G_p$ and $G'_p$. As $G'_p$ is constructed using split-and-glue from $Z_p \coloneqq G'_{p-1} \cup H_{p}$ by Theorem~\ref{thm:main}, there is a pizza slice $\hat Z_p$ of $Z_p$ that contains $G'_p$ as a minor. Using the lifting property that a topological minor commutes with a minor, there is another plane graph $\hat G_p$ that contains $\hat G_{p-1} \cup H_{p}$ as a topological minor; one can indeed construct $\hat G_p$ from $\hat G_{p-1} \cup H_{p}$ using pizza slices on a set of paths mimicking the one used during the slice-and-glue operations that obtain $G'_p$ from $Z_p$.
Now $\hat G_p$ contains $G'_p$ as a minor, because $\hat G_p$ contains $\hat Z_p$ as a minor and $\hat Z_p$ contains $G'_p$ as a minor by construction. $\hat G_p$ also contains $G_p$ as a topological minor, because $\hat G_p$ contains $\hat G_{p-1} \cup H_p$ as a topological minor, which by induction contains $G_{p-1} \cup H_p = G_p$ as a topological minor. Therefore the existence of $\hat G_p$ is established. The base case is clear: define $\hat G_1$ to be the pizza slice of $G'_0 \cup H_1 = G_0 \cup H_1 = G_1$ that contains $G'_1$ as a minor from the emulator construction. Thus the claim is proved. \end{proof} \subsection{Proof of Claim~\ref{clm: branch pts}} \label{apd: Proof of branch pts} \paragraph{Item~\ref{p1} of Claim~\ref{clm: branch pts}.} We define the graph $\tilde H$ as the union of (i) all paths in ${\mathcal{P}}$; and (ii) the cycle that connects all vertices of $U$ in the order that they appear on the outer-boundary of the drawing associated with $H$. Thus $\tilde H$ is a planar graph, and the drawing of $H$ naturally induces a planar drawing of $\tilde H$. Let $\tilde H'$ be the graph obtained from $\tilde H$ by suppressing all degree-$2$ vertices, so the planar drawing of $\tilde H$ naturally induces a planar drawing of $\tilde H'$. Since $\tilde H'$ has no degree-$2$ vertices, the numbers of faces, edges and vertices are all within a constant factor of one another. Therefore, to show that the number of branch vertices is $O(|U|)$, it suffices to show that the number of vertices in $\tilde H'$ is $O(|U|)$, and for this it suffices to show that the number of faces in the planar drawing of $\tilde H'$ is $O(|U|)$. We first construct an outerplanar graph $X$ on $U$ as follows. The edge set of $X$ is the union of (i) all edges of the cycle that connects all vertices of $U$ in the order that they appear on the outerface; and (ii) for each path in ${\mathcal{P}}$, an edge connecting its endpoints in $U$. Clearly, $X$ has $|U|$ vertices and $O(|U|)$ edges.
The circular ordering on the vertices of $U$ naturally defines a drawing of $X$. Clearly, the number of faces in this drawing is $O(|U|)$, and moreover, the total size of all faces is $O(|U|)$ (where the size of a face is the number of vertices that lie on its boundary). Let $F$ be a face in the drawing of $X$ defined above. We denote by $|F|$ the number of vertices that lie on the boundary of $F$. We now show that this face gives rise to at most $O(|F|)$ faces in $\tilde H'$. Let $Y$ be a graph defined as follows. The vertex set $V(Y)$ contains, for each boundary edge $e$ of $F$, a node $y_e$ representing $e$. The edge set $E(Y)$ contains, for every pair $y_e,y_{e'}$ of vertices, an edge connecting them iff the corresponding paths (in ${\mathcal{P}}$) of edges $e$ and $e'$ either share an edge or share an internal vertex that does not belong to any other path in ${\mathcal{P}}$. Since ${\mathcal{P}}$ is well-structured and non-crossing, the graph $Y$ is outerplanar, and so $|E(Y)|=O(|V(Y)|)=O(|F|)$. Since the number of faces in $\tilde H'$ that $F$ gives rise to is at most the number of edges in $Y$ plus one, we get that the number of faces in $\tilde H'$ that $F$ gives rise to is at most $O(|F|)$. Therefore, the total number of faces in $\tilde H'$ is at most a constant times the total size of all faces in $X$, which is $O(|U|)$. This completes the proof of Item~\ref{p1}. \bigskip \paragraph{Item~\ref{p2} of Claim~\ref{clm: branch pts}.} For convenience, we rename $Y\setminus Y^*$ as $Y$. In other words, set $Y$ only contains vertices that belong to exactly two paths of ${\mathcal{P}}$, so each vertex of $Y$ is contained in at most two instances in ${\mathcal{H}}$, contributing at most $2$ to the sum $\sum_{(H_R,U_R)\in {\mathcal{H}}:\text{ } |U_R|\ge \lambda}|U_R|$. We denote by ${\mathcal{R}}$ the set of regions in $H$ obtained by the procedure $\textsc{Split}$.
Recall that, for each region $R\in {\mathcal{R}}$, set $U_R$ contains all branch vertices and vertices of $U\cup Y$ that lie on the boundary of $R$. Therefore, if we denote by $U'_R$ the set that contains all branch vertices and vertices of $U$ lying on the boundary of $R$, then it suffices to show that \begin{equation} \label{eq: main} \sum_{R\in {\mathcal{R}}:\text{ } |U'_R|\ge \lambda/2}|U'_R|\le |U|\cdot\bigg(1+O\bigg(\frac{1}{\lambda}\bigg)\bigg). \end{equation} This is because, for each $R\in {\mathcal{R}}$, if $|U'_R|< \lambda/2$ while $|U_R|\ge \lambda$, then $|Y\cap U_R|\ge \lambda/2 \ge |U'_R|$ and so $|U_R|\le 2\cdot |Y\cap U_R|$, and since every vertex of $Y$ appears on the boundaries of at most two regions in ${\mathcal{R}}$, we get that $$\sum_{R\in {\mathcal{R}}:\text{ } |U'_R|< \lambda/2,\text{ } |U_R|\ge \lambda}|U_R|\le \sum_{R\in {\mathcal{R}}:\text{ } |U'_R|< \lambda/2,\text{ } |U_R|\ge \lambda}2\cdot|Y\cap U_R|\le O(|Y|).$$ Combined with Inequality~\ref{eq: main} and the above discussion, this completes the proof of Claim~\ref{clm: branch pts}. The remainder of this section is dedicated to the proof of Inequality~\ref{eq: main}. Using arguments similar to those in the proof of Claim~\ref{clm: ratio loss for contracting to portals}, we can show that it suffices to prove Inequality~\ref{eq: main} when no vertex of $U$ is a cut vertex of $H$. In other words, when we traverse the outerface of graph $H$, every terminal in $U$ is visited exactly once, and so we get a circular ordering on the terminals in $U$. Denote $\lambda' \coloneqq \lambda/2$. We say that a region $R\in {\mathcal{R}}$ is \emph{big} if $|U'_R|\ge \lambda'$, and otherwise we say it is \emph{small}. We need the following observation: if all regions in ${\mathcal{R}}$ are big, then Claim~\ref{clm: branch pts} holds. \begin{observation} \label{obs: all big face} Let $\hat \lambda>10$ be any integer.
If for all $R\in {\mathcal{R}}$, $|U'_R|\ge \hat \lambda$, then $$\sum_{R\in {\mathcal{R}}}|U'_R|\le |U|\cdot\big(1+O(1/\hat \lambda)\big).$$ \end{observation} \begin{proof} Denote $U=\set{u_1,\ldots,u_r}$, where the terminals are indexed according to the circular ordering in which they appear on the outerface of $H$. We define a graph $W$ as follows. We start from the graph obtained by taking the union of all paths in ${\mathcal{P}}$. We then suppress all degree-$2$ non-terminals. Finally, we add the cycle $(u_1,\ldots,u_r,u_1)$. Clearly, $W$ is a planar graph, and the planar drawing of $H$ naturally defines a drawing of $W$: start with the planar drawing of all paths in ${\mathcal{P}}$ induced by the planar drawing of $H$, suppress the degree-$2$ non-terminals, and finally draw every edge $(u_i,u_{i+1})$ along the boundary of the disc in which the one-hole instance $(H,U)$ lies. Note that each region $R\in {\mathcal{R}}$ corresponds to a face in the planar drawing of $W$, which we denote by $F_R$. Moreover, the vertices lying on the boundary of $F_R$ are exactly the vertices of $U'_R$. Consider now the dual graph $W^*$ of $W$ with respect to the planar drawing defined above. Clearly, every node in $W^*$ corresponds to a region in ${\mathcal{R}}\cup \set{R_{\infty}}$, where $R_{\infty}$ is the region outside the disc in which the one-hole instance $(H,U)$ lies. We denote $V(W^*)=\set{v_R\mid R\in {\mathcal{R}}}\cup \set{v_{\infty}}$. On the one hand, for each $R\in {\mathcal{R}}$, $|U'_R|$ is equal to the number of edges on the boundary of face $F_R$, which in turn equals the degree of vertex $v_R$ in $W^*$. Therefore, $$\sum_{R\in {\mathcal{R}}}|U'_R|=\sum_{v\in V(W^*), v\ne v_{\infty}} \deg_{W^*}(v).$$ Recall that every region $R\in {\mathcal{R}}$ satisfies $|U'_R|\ge \hat \lambda$, so $\deg_{W^*}(v)\ge \hat \lambda$ for all $v\in V(W^*), v\ne v_{\infty}$.
On the other hand, since the paths in ${\mathcal{P}}$ are well-structured and non-crossing, and we have suppressed all degree-$2$ vertices, it is easy to observe that the subgraph of $W^*$ induced by the vertices of $\set{v_R\mid R\in {\mathcal{R}}}$ is a simple graph. In other words, all edges that have a parallel copy in $W^*$ must be incident to $v_{\infty}$. Since the number of edges in $W^*$ incident to $v_{\infty}$ is $|U|$, if we subdivide every edge incident to $v_{\infty}$ with a new vertex, then the resulting graph, which we denote by $\hat W^*$, is a simple planar graph, and so $|E(\hat W^*)|\le 3\cdot |V(\hat W^*)|$. Therefore, \[ |U|+2\cdot |U|+\sum_{v\in V(W^*), v\ne v_{\infty}}\deg_{W^*}(v)\le 2\cdot |E(\hat W^*)|\le 6\cdot |V(\hat W^*)|\le 6\cdot (|U|+|V(W^*)|), \] so $3|U|+(|V(W^*)|-1)\cdot \hat \lambda\le 6(|U|+|V(W^*)|)$, and so $|V(W^*)|\le (3|U|+\hat \lambda)/(\hat \lambda-6)\le O(|U|/\hat \lambda)$. Altogether, we get that \[ \sum_{R\in {\mathcal{R}}}|U'_R|=\sum_{v\in V(W^*), v\ne v_{\infty}} \deg_{W^*}(v)=|U|+\sum_{v\in V(W^*), v\ne v_{\infty}} \deg_{W^*\setminus v_{\infty}}(v) \le |U|+O(|U|/\hat \lambda). \] \aftermath \end{proof} \noindent We now proceed to prove Inequality~\ref{eq: main} using \Cref{obs: all big face}. Let $W$ be the plane graph defined in the proof of \Cref{obs: all big face}; we say that graph $W$ is \emph{generated} by the set ${\mathcal{P}}$ of paths. We prove the following observation. \begin{observation} \label{obs: path face} Let $P$ be a path in ${\mathcal{P}}$, let $F$ be a face, and let $C$ be the boundary cycle of $F$. Then either $P\cap C=\varnothing$, or the intersection between $P$ and $C$ is a subpath of both $P$ and $C$. \end{observation} \begin{proof} Assume that $P\cap C\ne\varnothing$, and furthermore that $P\cap C$ contains at least two vertices (since otherwise a single vertex is a subpath of both $P$ and $C$, and we are done). Assume for contradiction that $P\cap C$ is not a subpath of $P$.
It is easy to verify that there are two vertices $u,u'\in V(P)\cap V(C)$ such that no vertex of $P$ strictly between $u$ and $u'$ belongs to $C$. Denote by $P'$ the subpath of $P$ connecting $u$ to $u'$. Note that $u,u'$ separate $C$ into two paths, which we denote by $C_1, C_2$. Assume without loss of generality that the region surrounded by $P'\cup C_1$ does not contain the outerface. Let $e$ be an edge of $C_1$. Since graph $W$ is generated by the paths in ${\mathcal{P}}$, edge $e$ must belong to some path $P''\in {\mathcal{P}}$. However, since both endpoints of $P''$ lie outside the region surrounded by $P'\cup C_1$, and since $C_1$ is a segment of a face, path $P''$ must contain two vertices of $P'$, and the subpath of $P''$ between these two vertices contains the edge $e$, which does not belong to $P'$. Therefore, paths $P'$ and $P''$ are not well-structured, a contradiction. \end{proof} Let $P$ be a path and let $F$ be a face, such that $P$ and the boundary cycle $C_F$ of $F$ intersect, and the intersection $P\cap C_F$ is a subpath of both $P$ and $C_F$. Let $u,u'$ be the endpoints of this subpath. We define $P_{\oplus F}$ as the path obtained from $P$ by replacing the subpath between $u$ and $u'$ with the other subpath of $C_F$ connecting $u$ to $u'$ that does not belong to $P$. \newcommand{\textsf{bs}}{\textsf{bs}} Recall that we only need to prove Inequality~\ref{eq: main} for $W$, where the left-hand side $\sum_{R\in {\mathcal{R}}:\text{ } |U'_R|\ge \lambda'}|U'_R|$, which we denote by $\textsf{bs}(W)$, is the sum of the sizes of all big faces. We first modify $W$ iteratively, in a way that never decreases the value $\textsf{bs}(W)$, until no further modification is possible. Then we show that the value $\textsf{bs}(\tilde W)$ of the resulting graph $\tilde W$ is bounded by $|U|\cdot(1+O(1/\lambda'))$ using \Cref{obs: all big face}. We now describe the algorithm that iteratively modifies the graph $W$.
Throughout, we maintain a plane graph $\hat W$, that is initialized to be $W$, and a set $\hat{\mathcal{P}}$ of paths, that is initialized to be ${\mathcal{P}}$. We will always ensure that $\hat{\mathcal{P}}$ is a well-structured set of paths, and that graph $\hat W$ is generated by $\hat {\mathcal{P}}$. As the algorithm proceeds, the plane graph $\hat W$ evolves, and so does the set of faces in its planar drawing. We say that a face is big (small, resp.) iff its boundary contains at least (less than, resp.) $\lambda'$ vertices. We say that a tuple $(e,F_1,F_2)$ is \emph{critical}, iff (i) $e$ is an edge in $\hat W$, $F_1$ is a small face, and $F_2$ is a big face, such that $e$ is incident to $F_1$ and $F_2$; and (ii) no vertex of $F_1$ is incident to any big face other than $F_2$. We say that a pair $(P,P')$ of paths in $\hat {\mathcal{P}}$ is a \emph{blocking pair} for a critical tuple $(e,F_1,F_2)$, iff (i) $e\in E(P)$, $e\notin E(P')$; and (ii) the pair $P_{\oplus F_1}, P'$ of paths is not well-structured. We distinguish between the following cases. \medskip \noindent \textbf{Case 1:} There is a critical tuple $(e,F_1,F_2)$ with no blocking pairs, and the degree of at least one endpoint of $e$ is at least $4$. In this case, we simply replace each path $P\in \hat{\mathcal{P}}$ that contains the edge $e$ with the path $P_{\oplus F_1}$, and then update $\hat W$ to be the graph generated by the resulting set $\hat{\mathcal{P}}$ of paths. See \Cref{fig: case1}. \begin{figure}[h] \centering \subfigure[Before: Faces $F_1$ and $F_2$ share an edge $e$. Two paths containing $e$ in $\hat {\mathcal{P}}$ are shown in green and red.]{\scalebox{0.11}{\includegraphics{Fig/case1_before.jpg}}} \hspace{1.0cm} \subfigure[After: Faces $F_1$ and $F_2$ are merged into $F$.
The modified segments of the two paths are shown as dashed lines.]{\scalebox{0.11}{\includegraphics{Fig/case1_after.jpg}}} \caption{An illustration of the graph and path modification in Case 1.\label{fig: case1}} \end{figure} It is clear that the invariant that $\hat W$ is generated by $\hat{{\mathcal{P}}}$ still holds in this case. Also, since there is no blocking pair for the critical tuple $(e,F_1,F_2)$, the resulting path set $\hat {\mathcal{P}}$ is still well-structured. Moreover, since no path in the resulting set $\hat {\mathcal{P}}$ contains the edge $e$, the resulting graph $\hat W$ no longer contains the edge $e$, either. Since the resulting graph $\hat W$ cannot contain any new edges, the number of faces in $\hat W$ decreases by at least $1$ (as faces $F_1$ and $F_2$ are merged into a single face). We now show that the value $\textsf{bs}(\hat W)$ does not decrease. First, since the modification of paths in $\hat{\mathcal{P}}$ only involves edges and vertices in $C_{F_1}$, the boundary cycle of face $F_1$, the graph $\hat W\setminus C_{F_1}$ remains unchanged, so every big face other than $F_2$ remains unchanged as well, and so does its contribution to $\textsf{bs}(\hat W)$. Second, consider the resulting face $F$ into which $F_1$ and $F_2$ are merged. Note that $F$ contains all original vertices of $F_2$ as branch vertices. This is because all vertices of $F_2\setminus F_1$ remain unchanged, and since at least one of the endpoints of $e$ has degree at least $4$ in $\hat W$ before this iteration, this endpoint remains a branch vertex in the resulting graph $\hat W$, so the face $F$ contains at least one more branch vertex. Therefore, face $F$ contains at least the same number of branch vertices as the previous big face $F_2$. It follows that the value $\textsf{bs}(\hat W)$ does not decrease.
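The rerouting operation $P \mapsto P_{\oplus F_1}$ at the heart of Cases 1 and 2 is purely combinatorial. The following minimal Python sketch (our own list-based representation of paths and face boundaries, not from the paper) computes it, assuming, as guaranteed by \Cref{obs: path face}, that the intersection is a contiguous subpath of both the path and the boundary cycle.

```python
def reroute_through_face(path, face_cycle):
    """Compute P_{oplus F}: replace the subpath P cap C_F with the
    complementary arc of the boundary cycle C_F.

    `path` is a list of vertices; `face_cycle` lists the vertices of
    C_F in cyclic order (first vertex not repeated at the end)."""
    on_cycle = set(face_cycle)
    # Locate the contiguous intersection P cap C_F inside the path.
    hits = [i for i, v in enumerate(path) if v in on_cycle]
    i, j = hits[0], hits[-1]          # positions of the endpoints u, u'
    u, u_prime = path[i], path[j]
    shared = set(path[i:j + 1])
    # Walk around the cycle from u in each direction; the direction whose
    # arc avoids the interior of the shared subpath gives the complementary arc.
    n = len(face_cycle)
    start = face_cycle.index(u)
    for step in (1, -1):
        arc, k = [u], start
        while arc[-1] != u_prime:
            k = (k + step) % n
            arc.append(face_cycle[k])
        if not (set(arc[1:-1]) & shared):
            return path[:i] + arc + path[j + 1:]
    raise ValueError("intersection is not a subpath of the face boundary")
```

For example, rerouting the path $(0,1,2,3,9)$ through a hexagonal face with boundary $(1,2,3,4,5,6)$ replaces the segment $1,2,3$ by the arc $1,6,5,4,3$.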
\medskip \noindent \textbf{Case 2:} There is a critical tuple $(e,F_1,F_2)$ with no blocking pairs, where $F_1$ contains more than $3$ vertices, and the degrees of both endpoints of $e$ are $3$. In this case, we update the path set $\hat {\mathcal{P}}$ and the graph $\hat W$ in the same way as in the previous case. Via similar arguments, we can show that the number of faces decreases by at least $1$, and the value $\textsf{bs}(\hat W)$ does not decrease. \medskip \noindent \textbf{Case 3:} There is a critical tuple $(e,F_1,F_2)$ and a blocking pair $(P,P')$ for it. Since the paths $P$ and $P'$ are well-structured, but the paths $P_{\oplus F_1}$ and $P'$ are not, by \Cref{obs: path face} there must be two disjoint subpaths $P'_1, P'_2$ of $P'$, such that $P'_1=P\cap P'$ and $P'_2=C_{F_1}\cap P'$. We first give both paths $P$ and $P'$ a direction, such that $P'_1$ appears before $P'_2$ on $P'$, and $P'_1$ appears before the edge $e$ on $P$. Let $u$ be the last vertex of $P'_1$, let $v'$ be the first vertex of $P'_2$, and let $v$ be the first vertex of $C_{F_1}\cap P$ that appears on $P$. We first show that $v$ and $v'$ must be adjacent on $C_{F_1}$. Assume not, and let $X$ be the segment of $C_{F_1}$ between $v$ and $v'$ that does not contain $e$, and let $x$ be an inner vertex of $X$. Since $\deg(x)\ge 3$, we let $e_x$ be an edge incident to $x$, such that $e_x\notin E(C_{F_1})$. Consider the region $R$ surrounded by (i) the subpath of $P$ between $u$ and $v$; (ii) the subpath of $P'$ between $u$ and $v'$; and (iii) the path $X$. It is clear that $e_x$ must lie entirely in $R$. On the other hand, let $P_x$ be a path in $\hat{\mathcal{P}}$ that contains the edge $e_x$, so both endpoints of $P_x$ lie outside $R$.
Since the paths in $\hat {\mathcal{P}}$ are non-crossing and well-structured, path $P_x$ must exit the region $R$ at $v$ and $v'$, but since $e_x\notin E(C_{F_1})$, the intersection between $P_x$ and $C_{F_1}$ is neither a subpath of $C_{F_1}$ nor a subpath of $P_x$, a contradiction to \Cref{obs: path face}. Via similar arguments, we can show that no edge may lie inside the interior of the region $R$. In other words, the region $R$ is in fact a face, which we denote by $F'$ (see \Cref{fig: case3_before}). Moreover, since the vertices $v,v'$ are not incident to any big face other than $F_2$, $F'$ is a small face. We now ``suppress'' the face $F'$ as follows. We first contract the edge $(v,v')$ of $C_{F_1}$, thereby identifying the vertices $v$ and $v'$ into a single vertex $v''$. We then ``identify'' the subpath of $P$ between $u$ and $v$ (which we denote by $\tilde P$) with the subpath of $P'$ between $u$ and $v'$ (which we denote by $\tilde P'$). Specifically, if originally $\tilde P=(u,y_1,\ldots,y_s,v)$ and $\tilde P'=(u,y'_1,\ldots,y'_t,v')$, then we replace these two paths with a new path $\tilde P''=(u,y_1,\ldots,y_s,y'_1,\ldots,y'_t,v'')$, and we do not modify the incident edges of any $y_i$ or $y'_j$ (see \Cref{fig: case3_after}). We update $\hat W$ to be the resulting graph after this step. \begin{figure}[h] \centering \subfigure[Before: Vertex $u$ is shown in brown. Paths $P,P'$ are shown in red and green, respectively. Face $F'$ is shown in orange.]{\scalebox{0.11}{\includegraphics{Fig/case3_before.jpg}}\label{fig: case3_before}} \hspace{1.0cm} \subfigure[After: Face $F'$ is suppressed, vertices $v,v'$ are contracted into $v''$, and the two subpaths are identified.]{\scalebox{0.11}{\includegraphics{Fig/case3_after.jpg}}\label{fig: case3_after}} \caption{An illustration of the graph and path modification in Case 3.\label{fig: case3}} \end{figure} This face suppression naturally defines a way of modifying the paths in $\hat {\mathcal{P}}$, as follows.
Denote by $C_{F'}$ the boundary cycle of the face $F'$. For every path $P\in \hat {\mathcal{P}}$: \begin{itemize} \item if $P\cap C_{F'}=\emptyset$, then we do not modify it; \item if $P\cap V(C_{F'})\subseteq \set{v,v'}$, then we replace each occurrence of $v$ or $v'$ on it with the new vertex $v''$; \item if $P\cap C_{F'}$ is a subpath of $\tilde P$ or a subpath of $\tilde P'$, then we replace that subpath of $P$ with the corresponding subpath of $\tilde P''$. \end{itemize} It is easy to verify that the resulting set $\hat{\mathcal{P}}$ is non-crossing and well-structured, and it still generates the resulting graph $\hat W$. Also, the number of faces in $\hat W$ decreases by $1$ in this case. We now show that the value $\textsf{bs}(\hat W)$ does not decrease. Note that the degree of every vertex except for $v,v'$ does not change, and the new vertex $v''$ obtained from contracting $(v,v')$ has degree at least $3$ in the resulting graph, so all big faces remain unchanged, and so does their contribution to $\textsf{bs}(\hat W)$. \bigskip We denote by $\tilde W$ the graph $\hat W$ obtained when none of Cases 1--3 described above applies. We are then guaranteed that, for each small face $F$ in $\tilde W$, either \begin{itemize} \item it does not share a vertex with any big face; or \item it contains exactly $3$ vertices, it shares a vertex with exactly one big face, and both endpoints of the edge that it shares with that big face have degree exactly $3$; or \item it shares a vertex with at least two big faces (in this case we call it a \emph{bridge} face). \end{itemize} We call vertices that are shared by a bridge face and a big face \emph{bridge vertices}, and we call vertices that belong to at least two big faces \emph{interface vertices}. Clearly, bridge vertices and interface vertices must be branch vertices. Consider now any big face $F$, and let $V'_F$ be the set of its bridge vertices and interface vertices. We prove the following observation.
\begin{observation} \label{obs: O(1) between interface} Let $F$ be a big face and let $u,u'$ be a pair of vertices in $V'_F$ that appear consecutively on $C_F$. That is, there is a subpath $Q$ of $C_F$ connecting $u$ to $u'$ that does not contain any other vertex of $V'_F$. Then the number of branch vertices that are internal vertices of $Q$ is at most $2$. \end{observation} \begin{proof} Consider any edge $e$ in the path $Q$ that is not incident to $u$ or $u'$. Let $F'$ be the other face that $e$ is incident to, so $F'$ is a small face. Since both endpoints of $e$ are not in $V'_F$, face $F'$ does not share a vertex with any other big face. From the above discussion, face $F'$ has to contain exactly three vertices, and the degrees of both endpoints of $e$ are exactly $3$. Let $z_e$ be the other vertex of face $F'$. Note that, via similar arguments, we can show that all internal vertices of $Q$ have degree exactly $3$. Therefore, the vertex $z_{e'}$ defined for every other edge $e'$ of $Q$ that is not incident to $u$ or $u'$ has to coincide with $z_e$. But if the number of branch vertices that are internal vertices of $Q$ were greater than $2$, then there would exist a vertex $u''\in V(Q)$ that is not adjacent to either $u$ or $u'$. Now the existence of the edge $(z_e,u'')$ can be shown to cause a contradiction to the well-structuredness of $\hat {\mathcal{P}}$, using arguments similar to those in the proof of \Cref{obs: path face}. \end{proof} Similarly, we can prove the following observation. \begin{observation} Let $F,F'$ be a pair of big faces, and let $\hat F, \hat F'$ be a pair of bridge faces, such that both $\hat F, \hat F'$ share vertices with both $F, F'$. If we denote by $R$ the region outside $F,F',\hat F, \hat F'$ that is surrounded by their boundaries and does not contain the outerface, then the boundary of $R$ contains at most $8$ bridge vertices. \end{observation} Consider now the dual graph $\tilde W^*$ of the resulting graph $\tilde W$.
By arguments similar to those in the proof of \Cref{obs: all big face}, we know that in order to show $\sum_{R\in {\mathcal{R}}}|U'_R|\le |U|\cdot\big(1+O(1/ \lambda')\big)$, it suffices to show that $\sum_{v\in V(\tilde W^*), v\ne v_{\infty}} \deg_{\tilde W^*\setminus v_{\infty}}(v) \le O(|U|/ \lambda')$. We denote by $\check W$ the subgraph of $\tilde W^*$ induced by all nodes corresponding to big faces and bridge faces. From the above two observations, we know that it suffices to show that $\sum_{v\in V(\check W)} \deg_{\check W}(v) \le O(|U|/\lambda')$. Let $\hat F$ be a bridge face. We denote by $F_1,\ldots, F_t$ the big faces that share a vertex with $\hat F$, where the faces are indexed according to the circular ordering in which they intersect $\hat F$. Then, it is easy to see that, if we replace, for each bridge face $\hat F$, all edges incident to the node $v_{\hat F}$ (the node in $\check W$ that corresponds to the face $\hat F$) with the edges $(v_{F_1},v_{F_2}),\ldots, (v_{F_t},v_{F_1})$, then the resulting graph $\check{W}$ is still a planar graph, with each edge having at most one parallel copy. Using arguments similar to those in the proof of \Cref{obs: all big face}, we can show that $\sum_{v\in V(\check W)} \deg_{\check W}(v) \le O(|U|/\lambda')$. This completes the proof of Claim~\ref{clm: branch pts}. \section{Introduction} Graph compression describes a paradigm of transforming a large graph $G$ to a smaller graph $G'$ that preserves, perhaps approximately, certain graph features such as distances or cut values. The algorithmic utility of graph compression is apparent --- the compressed graph $G'$ may be computed as a preprocessing step, reducing computational resources for subsequent processing and queries. This general paradigm covers famous examples like spanners, Gomory-Hu trees, and cut/flow/spectral edge-sparsifiers, in which case $G'$ has the same vertex set as~$G$, but fewer edges.
Sometimes the compression is non-graphical and consists of a small data structure instead of a graph $G'$; famous examples are distance oracles and distance labeling. We study another well-known genre of compression, called \emph{vertex sparsification}, whose goal is for $G'$ to have a small vertex set. In this setting, the input graph $G$ has a collection of $k$ designated vertices $T$, called the \emph{terminals}. The compressed graph $G'$ should contain, besides the terminals in~$T$, a small number of vertices, and preserve a certain feature among the terminals. Specifically, we are interested in preserving the distances between terminals up to a multiplicative factor $\alpha\ge 1$ in an edge-weighted graph (where the weights are interpreted as lengths). Formally, given a graph $G$ with terminals $T\subseteq V(G)$, an \emph{emulator} for $G$ with \emph{distortion} $\alpha\ge 1$ is a graph $G'$ that contains the terminals, i.e., $T\subseteq V(G')$, satisfying \begin{equation} \label{eq:distortion} \forall x,y\in T, \quad \textnormal{\textsf{dist}}_{G}(x,y) \leq \textnormal{\textsf{dist}}_{G'}(x,y) \leq \alpha\cdot \textnormal{\textsf{dist}}_G(x,y) , \end{equation} where $\textnormal{\textsf{dist}}_G$ denotes the shortest-path distance in $G$ (and similarly for $G'$). In the important case when $\alpha = 1+\varepsilon = e^{\Theta(\varepsilon)}$ for $0\le\varepsilon\le1$, we simply say $G'$ is an \emph{$\varepsilon$-emulator}.% \footnote{Our definition in Section~\ref{sec:prelim} differs slightly (allowing two-sided errors), affecting our results only in some hidden~constants. } Notice that $G'$ need not be a subgraph or a minor of $G$ (in these two settings $G'$ is known as a \emph{spanner} and a \emph{distance-approximating~minor}, respectively). We focus on the case where $G$ is known to be planar, and thus require also $G'$ to be planar (which excludes the trivial solution of a complete graph on $T$).
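As a concrete sanity check of the guarantee in \eqref{eq:distortion}, the following Python sketch (our own toy instance and helper names, not from the paper) computes the distortion of a candidate emulator by comparing all terminal-pair distances.

```python
import itertools

def all_pairs_dist(n, weighted_edges):
    # Floyd-Warshall on an undirected weighted graph with vertices 0..n-1.
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in weighted_edges:
        d[u][v] = d[v][u] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def distortion(dist_G, dist_H, terminals):
    # Smallest alpha such that dist_G <= dist_H <= alpha * dist_G over all
    # terminal pairs; distances are required never to shrink.
    alpha = 1.0
    for x, y in itertools.combinations(terminals, 2):
        assert dist_H[x][y] >= dist_G[x][y] - 1e-9
        alpha = max(alpha, dist_H[x][y] / dist_G[x][y])
    return alpha

# Toy instance: G is a 4-cycle with unit weights and terminals {0, 2};
# the candidate emulator H keeps one terminal-to-terminal route, with
# each of its two edges stretched to length 1.1.
G = all_pairs_dist(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)])
H = all_pairs_dist(3, [(0, 1, 1.1), (1, 2, 1.1)])
alpha = distortion(G, H, [0, 2])   # dist(0,2) grows from 2.0 to 2.2, so alpha = 1.1
```

Here $H$ is a $0.1$-emulator for the two terminals; the challenge addressed in this paper is achieving such a guarantee simultaneously for all $\binom{k}{2}$ terminal pairs while keeping the emulator planar and of near-linear size.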
This requirement is natural and also important for applications, where fast algorithms for planar graphs can be run on $G'$ instead of on $G$. Such a requirement that $G'$ be structurally similar to $G$ is usually formalized by assuming that both $G$ and $G'$ belong to $\mathcal{F}$ for a fixed graph family $\mathcal{F}$ (e.g., all planar graphs). If $\mathcal{F}$ is a minor-closed family, one can further impose the stronger requirement that $G'$ is a minor of $G$, which clearly implies that $G'$ is in $\mathcal{F}$. Vertex sparsifiers commonly exhibit a tradeoff between accuracy and size, which in our case of an emulator $G'$ are the distortion $\alpha$ and the number of vertices of $G'$. Let us briefly review the known bounds for planar graphs. At one extreme of this tradeoff we have the ``exact'' case, where the distortion is fixed to $\alpha=1$ and we wish to bound the (worst-case) size of the emulator $G'$ \cite{CGH16,CGMW18,GHP20}. For planar graphs, the known size bounds are $O(k^4)$~\cite{KNZ14} and $\Omega(k^2)$~\cite{KZ12,co-pemm-2020}.% \footnote{For fixed distortion $\alpha=1$, every graph $G$ in fact admits a minor of size $O(k^4)$ \cite{KNZ14}, but for some planar graphs (specifically grids) every minor \cite{KNZ14} or just planar emulator \cite{KZ12,co-pemm-2020} must have $\Omega(k^2)$ vertices. } At the other extreme, we fix the emulator size to $|V(G')|=k$, i.e., zero non-terminals, and we wish to bound the (worst-case) distortion~$\alpha$ \cite{BG08,CXKR06,KKN15,Cheung18,FKT19}. For planar graphs, the known distortion bounds are an upper bound of $O(\log k)$~\cite{Filtser18} and a lower bound of $2$~\cite{Gupta01}.% Our primary interest is in minimizing the size bound when the distortion $\alpha$ is $1+\varepsilon$, i.e., $\varepsilon$-emulators, a fascinating sweet spot of the tradeoff. The minimal loss in accuracy is a boon for applications, but it is usually challenging, as one has to control the distortion over iterations or recursion.
For planar graphs, the known size bounds for a distance-approximating minor are $\tilde{O}((k/\varepsilon)^2)$ \cite{CGH16} and $\Omega(k/\varepsilon)$ \cite{KNZ14}. Improving the upper bound from quadratic to linear in $k$ is an outstanding question that offers a bypass to the aforementioned $\Omega(k^2)$ lower bound for exact emulators ($\alpha=1$). In fact, no subquadratic-size emulators for planar graphs are known to exist even when we allow the emulators to be arbitrary graphs, except when the input is unweighted~\cite{CGMW18} or in trivial cases like trees. \paragraph{Notation.} Throughout the paper, we consider undirected graphs with non-negative edge weights, and denote $n=|V(G)|$ and $k=|T|$. A \emph{plane graph} refers to a planar graph together with a specific embedding in the plane. We suppress poly-logarithmic terms by writing $\tilde{O}(t) = t\cdot\textnormal{poly}\log t$, and multiplicative factors that depend on $\varepsilon$ by writing $O_\varepsilon(t) = O(f(\varepsilon)\cdot t)$. We write $\log^* t$ for the iterated logarithm of $t$. \subsection{Main Result} \label{sec:results} We design the first $\varepsilon$-emulators for planar graphs that have near-linear size; furthermore, these emulators can be computed in near-linear time. These two efficiency parameters can be extremely useful, and we indeed present a few applications in \Cref{sec:applications}. \begin{theorem} \label{thm:main} For every $n$-vertex planar graph $G$ with $k$ terminals and a parameter $0<\varepsilon<1$, there is a planar $\varepsilon$-emulator graph $G'$ of size $|V(G')|=\tilde O(k/\varepsilon^{O(1)})$. Furthermore, such an emulator can be computed deterministically in time $\tilde O(n /\varepsilon^{O(1)})$. \end{theorem} The result dramatically improves over the previous $\tilde{O}((k/\varepsilon)^2)$ upper bound of Cheung, Goranci and Henzinger~\cite{CGH16}.
Moreover, it breaks below the aforementioned lower bound $\Omega(k^2)$ for exact emulators ($\alpha=1$)~\cite{KZ12,KNZ14,co-pemm-2020}. Unsurprisingly, our result is unlikely to extend to all graphs, because for some (bipartite) graphs, every minor with fixed distortion $\alpha<2$ must have $\Omega(k^2)$ vertices~\cite{CGH16}. See \Cref{tab:PlanarEmulators} for a comparison to prior work. \begin{table}[h!]\small \centering \smallskip \def\arraystretch{1.3} \begin{tabular}{c:cc:cc} \multicolumn{1}{c}{Distortion} & \multicolumn{2}{c}{Size (lower/upper)} & Requirement & Reference \\ \hline $1$ & $\Omega(k^2)$ & & planar & \cite{KZ12,co-pemm-2020} \\ $1$ & & $O(k^4)$ & minor & \cite{KNZ14} \\ \hdashline $1+\varepsilon$ & $\Omega(k/\varepsilon)$ & & minor & \cite{KNZ14} \\ $1+\varepsilon$ & & $\tilde{O}((k/\varepsilon)^2)$ & minor & \cite{CGH16} \\ \rowcolor{Highlight} $1+\varepsilon$ & & $\tilde{O}(k /\textnormal{poly}\varepsilon)$ & planar & Theorem~\ref{thm:main} \\ \hdashline $O(\log k)$ & & $k$ & minor & \cite{Filtser18} \end{tabular} \caption{Distance emulators for planar graphs.} \label{tab:PlanarEmulators} \end{table} \subsection{Algorithmic Applications} \label{sec:applications} We present a few applications of our emulators to the design of fast $(1+\varepsilon)$-approximation algorithms for standard optimization problems on planar graphs. \medskip Our first application is to construct an approximate version of the multiple-source shortest paths data structure, called \emph{$\varepsilon$-MSSP}: Preprocess a plane graph $G$ and a set of terminals $T$ on the outerface of $G$, so as to quickly answer distance queries between terminal pairs within a $(1+\varepsilon)$-approximation. The preprocessing time of our data structure is $O_\varepsilon(n)$, which for any fixed $\varepsilon>0$ is faster than Klein's $O(n \log n)$-time algorithm~\cite{kle-msppg-2005} for the exact setting ($\varepsilon=0$). Both algorithms have the same query time $O(\log n)$.
\begin{theorem} \label{Th:mssp} Given a parameter $0<\varepsilon<1$, an $n$-vertex plane graph $G$ with the range of edge weights bounded by $n^{O(1)}$,\footnote{Our algorithm can also handle general weights with a slightly slower $O_\varepsilon(n \textnormal{poly}(\log^* n))$ preprocessing time.} and a set of terminals $T$ all lying on the boundary of $G$ with $|T| \le O(n/\log^C n)$ for some large enough constant $C$, one can preprocess an $\varepsilon$-MSSP data structure on~$G$ with respect to $T$ in time $O_\varepsilon(n)$, which answers queries in time $O(\log n)$. \end{theorem} Our second application is an $O_\varepsilon(n)$-time algorithm to compute a $(1+\varepsilon)$-approximate minimum $(s,t)$-cut in planar graphs, which for fixed $\varepsilon>0$ is faster than the $O(n \log\log n)$-time exact algorithm by Italiano, Nussbaum, Sankowski, and Wulff-Nilsen~\cite{insw-iamcm-2011}. \begin{theorem} \label{Th:minimum-cut} Given an $n$-vertex planar graph $G$ with two distinguished vertices $s,t\in V(G)$ and a parameter $0<\varepsilon<1$, computing a $(1+\varepsilon)$-approximate minimum $(s,t)$-cut in $G$ takes $O_\varepsilon(n)$ time. \end{theorem} Our third application is an $O_\varepsilon(n \log n)$-time algorithm to compute a $(1+\varepsilon)$-approximate diameter in planar graphs, which for fixed $0<\varepsilon<1$ is faster than the $O(n \log^2 n + \varepsilon^{-5} n\log n)$-time algorithm of Chan and Skrepetos~\cite{cs-faddo-2019} (which itself improves over Weimann and Yuster~\cite{wy-adpgl-2016}). \begin{theorem} \label{Th:diameter} Given an $n$-vertex planar graph $G$ and a parameter $0<\varepsilon<1$, one can compute a $(1+\varepsilon)$-approximation to its diameter in time $O_\varepsilon(n \log n)$. \end{theorem} Finally, one important open problem in the field of dynamic algorithms is the existence of an efficient $(1+\varepsilon)$-approximate distance oracle for planar graphs.
Abboud and Dahlgaard~\cite{ad-pcbdp-2016a} provided an $\Omega(n^{1/2-o(1)})$ lower bound on the query and update time for such oracles in the exact setting. Recently, Chen \etal~\cite{cgh+-fdcde-2020a} showed that if one can efficiently construct a $(1+\varepsilon)$-\emph{distance-approximating minor} of size $\tilde{O}(k)$ for a planar graph with $n$ nodes and $k$ terminals in $O(n \textnormal{poly}(\log n, \varepsilon^{-1}))$ time, then there is an offline dynamic $(1+\varepsilon)$-approximate distance oracle with $O(\textnormal{poly}\log n)$ query and update time. Here we show that while our $\varepsilon$-emulator is not strictly a $(1+\varepsilon)$-distance-approximating minor, the same distance oracle can still be constructed. This demonstrates that an efficient $(1+\varepsilon)$-approximate distance oracle on planar graphs exists. \begin{theorem} \label{Th:dynamic-oracle} There is an offline dynamic $(1+\varepsilon)$-approximate distance oracle for any planar graph of size $n$ with $O(\textnormal{poly}\log n)$ query and update time. \end{theorem} \subsection{Technical Contributions} A central technical contribution of this paper is to carry out a \emph{spread reduction} for the all-terminal-pairs shortest path problem when the input graph can be embedded in the plane and the terminals all lie on the outerface; the \emph{spread} is defined to be the ratio between the largest and the smallest distances between terminals. Spread reduction is a crucial preprocessing step for many optimization problems, particularly in Euclidean spaces or on planar graphs~\cite{sa-atpgs-2012,bg-ltase-2013,KKN15,cfs-ntasc-2019,fl-ntasg-2020}, that replaces an instance with a large spread by one or more instances with bounded spread. In many cases, one can reduce the spread to be at most polynomial in the input size. However, we are not aware of previous work that achieves such a reduction in our context, where many pairs of distances have to be preserved all at once.
In fact, even after considerable work we only managed to reduce the spread to be sub-exponential. We now provide a bird's-eye view of our emulator construction. The emulator problem on plane graphs with an arbitrary set of terminals can be reduced to the same problem on plane graphs with the strong restriction that all the terminals lie on a constant number of faces, known as \emph{holes} (cf.\ Section~\ref{sec: general graph}), using a separator decomposition that splits the number of vertices and terminals evenly; such a decomposition (called the \emph{$r$-division}) can be computed efficiently~\cite{fre-faspp-1987,kms-srsdp-2013}. From there we can further slice the graph open into another plane graph with all the terminals on a single face, which without loss of generality we assume to be the outerface. We refer to it as a \emph{one-hole instance}. To construct an emulator for a one-hole instance $G$ we adopt a recursive \emph{split-and-combine} strategy (cf.\ Section~\ref{sec: planar emulator}). We attempt to split the input instance into multiple one-hole instances along some shortest paths that distribute the terminals evenly (cf.\ Lemma~\ref{lem: decomposing step}). Every time we slice the graph $G$ open along a shortest path~$P$, we compute a small collection of vertices on $P$, called the \emph{portals}, that approximately preserve the distances from terminals in $G$ to the vertices on~$P$. The portals are duplicated during the slicing along $P$ and added to the terminal set (i.e., they become terminals) of each piece incident to $P$, to ensure that further processing will (approximately) preserve their distances as well. We emphasize that the naive idea of placing portals at equally-spaced points along $P$ is not sufficient, as some terminals in $G$ might be arbitrarily close to $P$. Instead, we place portals at exponentially-increasing intervals from both ends of $P$.
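The exponential placement rule can be illustrated by a simplified Python sketch (our own model, not the paper's construction: the path is identified with the interval $[0,L]$, whereas the actual algorithm places portals by graph distance along $P$).

```python
def portal_positions(length, eps):
    """Choose portal locations on a path identified with [0, length]:
    the two endpoints, plus points at distance 1, (1+eps), (1+eps)^2, ...
    from *each* end.  A terminal whose nearest point on the path is at
    distance d from an end then has a portal within roughly eps*d of it,
    using only O(log(length)/eps) portals instead of the length/eps
    positions needed by equal spacing."""
    positions = {0.0, float(length)}
    d = 1.0
    while d < length:
        positions.add(d)            # measured from the left end
        positions.add(length - d)   # measured from the right end
        d *= 1.0 + eps
    return sorted(positions)

ports = portal_positions(1000, 0.5)   # 38 positions; equal spacing would need ~2000
```

Near each end the portals become geometrically denser, which is exactly what protects terminals lying very close to $P$.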
After splitting the original instance into small enough pieces by recursively slicing along shortest paths and computing the portals, we compute exact emulators for each piece using any of the polynomial-size constructions~\cite{KNZ14,co-pemm-2020}. Next we glue these small emulators back along the paths by identifying multiple copies of the same portal into one vertex. See Figure~\ref{fig: intuition}. \begin{figure}[h] \centering \subfigure[A one-hole instance, a set of paths (shown as red, green and purple curves), and portals (shown as red boxes). Slicing the instance open along these paths gives us smaller pieces.]{\scalebox{0.5}{\includegraphics[scale=0.19]{Fig/intuition_1.jpg}}} \hspace{0.4cm} \subfigure[The one-hole instance obtained from gluing together the emulators for the small pieces at the portals (shown as red boxes).]{\scalebox{0.5}{\includegraphics[scale=0.19]{Fig/intuition_2.jpg}}} \caption{Illustration of the split-and-combine process for a one-hole instance.} \label{fig: intuition} \end{figure} Let $U$ be the set of terminals in the current piece, and let $r \coloneqq |U|$. We need the portals to be dense enough so that only a small error term, of the form $r^{-\delta}$ (meaning that the distortion increases multiplicatively by $1+r^{-\delta}$), will be added to the distortion of the emulator after the gluing, as this will eventually guarantee (through more details like the stopping condition of the recursion) that the final distortion is $1+\varepsilon$ and the final emulator size has polynomial dependency on $\varepsilon^{-1}$. At the same time, the number of portals cannot be too large, as they are added to the terminal set, causing the number of terminals per piece to go down slowly and creating too many pieces; in the end the size of the combined emulator might be too big. It turns out that the sweet spot is to take roughly $L_r \coloneqq r/\log^2 r$ portals.
Calculations show that in this case the portals preserve distances up to an additive error term $\log\Phi/L_r$, where $\Phi$ is the \emph{spread} of the terminal distances (cf.\ Claim~\ref{clm: ratio loss for contracting to portals}). When $\Phi \leq \smash{2^{r^{0.9}}}$, we get the polynomially-small $\tilde{O}(r^{-0.1})$ error term needed for the gluing (cf.\ Section~\ref{SSS:small-spread}). However, even when the original input has a polynomial spread to start with, in general we cannot control the spread of all the pieces occurring during the split-and-combine process, because portals are added to the terminal sets. Therefore a new idea is needed. When $\Phi > 2^{r^{0.9}}$, we need to tackle the spread directly. We perform a \emph{hierarchical clustering} of the terminals (cf.\ Section~\ref{SSS:large-spread}). At each level $i$, we connect two clusters of terminals from the previous level $i-1$ by an edge if their distance is at most $r^{2i}$; then we group each connected component into a single cluster. The key to the spread reduction is the idea of \emph{expanding clusters}. A cluster $S$ is \emph{expanding} if its parent cluster $\hat S$ is bigger by a factor of at least \smash{$\sim\!e^{r^{-0.7}}$}. Intuitively, if all clusters are expanding, then the number of levels in the hierarchical clustering must be at most $r^{0.7}$, and therefore the spread must be at most sub-exponential. So in the high-spread case some non-expanding cluster must exist. \begin{itemize} \item If such a non-expanding cluster $S$ has moderate size (that is, between $r/5$ and $4r/5$) (cf.\ Section~\ref{SSS:balanced}), we construct a collection of \emph{non-crossing} shortest paths between terminals in $S$ (non-crossing means that no two paths with endpoint pairs $(s_1,s_2)$ and $(t_1,t_2)$ have their endpoints in an interleaving order $(s_1,t_1,s_2,t_2)$ on the outerface) in which no two paths intersect except at their endpoints.
We again compute portals on the paths from every terminal in $\hat S \setminus S$, but now using $\varepsilon_r$-covers~\cite{tho-corad-2004} for $\varepsilon_r\coloneqq r^{-0.1}$, and split along the paths to create sub-instances. Because the cluster is non-expanding and has moderate size, the number of terminals in $\hat S \setminus S$ is at most $(e^{r^{-0.7}} - 1)|S| \le r^{0.3}$, and thus the number of portals is $O(r^{0.3}/\varepsilon_r) \le O(r^{0.4})$, which is a gentle enough increase in the number of terminals. The hard part is to argue that the portals created are sufficient for the recombined instance to be an emulator. This can be done by observing that terminal pairs among $U\setminus \hat S$ are far apart, and similarly when one terminal is from $S$ and the other is from $U\setminus \hat S$; hence only terminal pairs involving $\hat S \setminus S$ have to be dealt with, using properties of $\varepsilon_r$-covers (cf.\ Claim~\ref{clm: ratio loss for gluepathset}). \item If there is no non-expanding cluster of moderate size (cf.\ Section~\ref{SSS:unbalanced}), we find a non-expanding cluster $\tilde S$ of lowest level that contains most of the terminals, and construct a collection of non-crossing shortest paths between terminals in $\tilde S$ as in the previous case. However this time, after computing the $r^{-0.1}$-covers and splitting along the paths, there might be one instance containing too many terminals. % In this case, we find \emph{every} non-expanding cluster $S$ of \emph{maximal level}; such clusters must all lie within $\tilde{O}(r^{0.7})$ levels from $\tilde S$, because we cannot have nested expanding clusters for $\tilde{O}(r^{0.7})$ consecutive levels. The Monge property guarantees that the shortest paths generated by the union of these maximal-level non-expanding clusters must be non-crossing, because all such clusters are disjoint (cf.\ Observation~\ref{obs: sets non-crossing}).
Now if we split the graph based on the path set generated, each resulting instance either has moderate size or must have small spread, and we safely fall back to the earlier cases. \end{itemize} \paragraph{Applications.} A widely adopted pipeline for designing efficient algorithms for distance-related optimization problems on planar graphs in recent years consists of the following steps: \begin{enumerate} \item Decompose the input planar graph into small pieces, each of size at most $r$, with a small number of boundary vertices and $O(1)$ holes, called an \emph{$r$-division} (see Frederickson~\cite{fre-faspp-1987} and Klein-Mozes-Sommer~\cite{kms-srsdp-2013}); \item Process each piece so that all-pairs shortest paths between boundary vertices within a piece can be extracted efficiently by the \emph{multiple-source shortest paths} algorithm for planar graphs (Klein~\cite{kle-msppg-2005}); \item Further process each piece into a \emph{compact data structure} that supports efficient min-weight-edge queries and updates (SMAWK~\cite{akm+-gama-1987}, Fakcharoenphol and Rao~\cite{fr-pgnwe-2006}); \item Compute shortest paths in the original graph in a problem-specific fashion, now with each piece replaced with the compact data structure, using a \emph{modified Dijkstra algorithm} (Fakcharoenphol and Rao~\cite{fr-pgnwe-2006}). \end{enumerate} Conceptually, our planar emulators serve as an alternative to Step~3. The reason for the development of the aforementioned machinery and complex algorithms is to get around the size lower bound in representing the all-pairs distances for the pieces. The benefit of replacing the data structure with a single planar emulator is that the whole graph stays planar. One can then simply replace Step~4 with the standard Dijkstra algorithm (or even better, with the $O(n)$-time algorithm for planar graphs by Henzinger~\etal~\cite{hkrs-fsapg-1997}).
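The level-by-level clustering used in the spread-reduction step above is easy to picture. Below is a toy Python sketch of the hierarchy; the quadratic merge, the list-based cluster representation, and all function names are our own illustration, not the paper's implementation:

```python
def cluster_level(clusters, dist, threshold):
    # Merge clusters whose closest terminal pair is within `threshold`,
    # by union-find over cluster indices (quadratic, for illustration only).
    parent = list(range(len(clusters)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            close = min(dist(a, b) for a in clusters[i] for b in clusters[j])
            if close <= threshold and find(i) != find(j):
                parent[find(i)] = find(j)

    merged = {}
    for i, cluster in enumerate(clusters):
        merged.setdefault(find(i), []).extend(cluster)
    return [sorted(group) for group in merged.values()]


def hierarchy(terminals, dist, r, max_level):
    # Level 0: singleton clusters; level i merges at threshold r**(2*i).
    levels = [[[t] for t in terminals]]
    for i in range(1, max_level + 1):
        levels.append(cluster_level(levels[-1], dist, r ** (2 * i)))
    return levels
```

For terminals $\{0,1,100\}$ on a line with $r=2$, the clusters $\{0,1\}$ and $\{100\}$ persist unchanged across several levels before merging, so at those levels they are non-expanding, which is exactly the situation the case analysis above exploits.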
More importantly, one can \emph{recurse} on the resulting graph when appropriate, and compress the graph further and further, with small additive errors slowly accumulating (cf.\ Section~\ref{SS:bootstrapping}). This allows us to construct near-linear-size $\varepsilon$-emulators in $O_\varepsilon(n\,\textnormal{poly}\log^* n)$ time, and even in $O_\varepsilon(n)$ time using a precomputed look-up table for pieces that are tiny compared to $n$, provided the spread of the input graph is bounded by a polynomial; this condition can easily be achieved by standard spread-reduction techniques for many optimization problems. \subsection{Related Work} In addition to emulators, there are other lines of research on graph compression preserving distance information. Among them, the most studied objects are \emph{spanners} and \emph{preservers} (when the sparsifier is required to be a subgraph of the input graph) and \emph{distance oracles} (data structures that report exact or approximate distances between pairs of vertices). We refer the reader to the excellent survey~\cite{ahmed2020graph}. There are also rich lines of work on constructing vertex sparsifiers that preserve cut/flow values (known as \emph{cut/flow sparsifiers}) exactly \cite{HKNR98,CSWZ00,KR13,KR14,KPZ17,GHP20,KR20} or approximately \cite{Moitra09,CLLM10,Chuzhoy12,AGK14,EGKRTT14,MM16,GR16,goranci2021expander}. \section*{Acknowledgements} We thank the anonymous reviewers for their helpful comments, as well as for pointing out the result by Chen \etal~\cite{cgh+-fdcde-2020a} on offline dynamic approximate distance oracles. \small \bibliographystyle{alphaurl} \section{Emulator for Edge-Weighted Planar Graphs} \label{sec: general graph} In this section we provide the proof of Theorem~\ref{thm:main}. In \Cref{sec: alg_O(1)hole}, we show an algorithm for computing $\varepsilon$-emulators for $O(1)$-hole instances.
Then in \Cref{sec: alg_general}, we complete the proof of Theorem~\ref{thm:main} using the results in \Cref{sec: alg_O(1)hole}. We will prove in Section~\ref{SS:bootstrapping} that an $\varepsilon$-emulator of size $O_\varepsilon(k \polylog k)$ can be computed in $O_\varepsilon(n)$ time. \subsection{Emulator for $O(1)$-Hole Instances} \label{sec: alg_O(1)hole} In this subsection we present a near-linear time algorithm for constructing $\varepsilon$-emulators for $O(1)$-hole instances. We first define \emph{aligned emulators} for $O(1)$-hole instances, analogously to aligned emulators for one-hole instances, as follows. Let $(G,T)$ and $(G',T)$ be two $h$-hole instances. We denote by ${\mathcal{F}}$ the set of holes in $G$ that contain the images of all terminals, and define ${\mathcal{F}}'$ for $G'$ similarly, so $|{\mathcal{F}}|=|{\mathcal{F}}'|=h$. We say that instances $(G,T)$ and $(G',T)$ are \emph{aligned} if and only if there is a one-to-one correspondence between faces in ${\mathcal{F}}$ and faces in ${\mathcal{F}}'$, such that for every face $F\in {\mathcal{F}}$, the set $T(F)$ of terminals that it contains is identical to the set $T(F')$ of terminals contained in its corresponding face $F'\in {\mathcal{F}}'$, and moreover, the circular orderings in which the terminals of $T(F)$ appear on faces $F$ and $F'$ are identical. If $(G,T)$ and $(G',T)$ are aligned and $(G,T)$ is an $\varepsilon$-emulator for $(G',T)$, then we say that $(G,T)$ is an \emph{aligned $\varepsilon$-emulator} for $(G',T)$. Throughout this section, all emulators we construct for various $O(1)$-hole instances are aligned emulators. Therefore, we will omit the word ``aligned'' and refer to them simply as $\varepsilon$-emulators or emulators. The main result of this section is the following lemma.
\begin{lemma} \label{L:emulator-constant_hole} For any $0<\varepsilon<1$ and any $h$-hole instance $(H,U)$ with $n \coloneqq |H|$ and $r \coloneqq |U|$, there exists an $h$-hole instance $(H',U)$ that is an $\varepsilon$-emulator for $(H,U)$ with size $|V(H')|\le r \cdot (c h\log r/\varepsilon)^{c h}$ for some universal constant $c$. Moreover, such an emulator can be computed in time $O\big((n+r^2)\cdot(h\log n/\varepsilon)^{O(h)}\big)$. \end{lemma} The remainder of this subsection is dedicated to the proof of Lemma~\ref{L:emulator-constant_hole}. We first introduce basic algorithms $\textsc{Split}_h$ and $\textsc{Glue}_h$ for splitting and gluing $h$-hole instances that are similar to the algorithms $\textsc{Split}$ and $\textsc{Glue}$ for splitting and gluing one-hole instances in~\Cref{SS:split-and-glue}. \paragraph{Splitting and Gluing.} The input to procedure \emph{$\textsc{Split}_h$} (for some integer $h>1$) consists of: \begin{itemize} \item an $h$-hole instance $(H,U)$; \item a path $P$ connecting a pair of terminals lying on two different holes; and \item a set $Y\subseteq V(P)$ of vertices that contains both endpoints of $P$. \end{itemize} The output of $\textsc{Split}_h$ is an $(h-1)$-hole instance. Intuitively, \emph{$\textsc{Split}_h$} slices the graph $H$ open along the path $P$ connecting two separate holes in the graph, as illustrated in Figure~\ref{fig: splitting h-hole after}. We denote by $(\tilde H,\tilde U)$ the $(h-1)$-hole instance obtained by applying procedure $\textsc{Split}_h$ to instance $(H,U)$, path $P$, and vertex set $Y$. Intuitively, procedure \emph{$\textsc{Glue}_{h}$} takes as input an emulator for $(\tilde H,\tilde U)$, and outputs an emulator for the original instance $(H,U)$ by identifying the two copies in $\tilde H$ of every vertex in $Y$, as illustrated in Figure~\ref{fig: gluepath_2}. A complete description of these procedures is provided in \Cref{apd: cut and glue}. \begin{figure}[h!] 
\centering \subfigure[Graph $H$: holes $\alpha, \alpha'$ (shaded gray), terminals on $\alpha$ and $\alpha'$ (blue), path $P$ (red), vertices of $Y$ that are not endpoints of $P$ (purple). ]{\scalebox{0.5}{\includegraphics[scale=0.14]{Fig/manyhole_1.jpg}\label{fig: splitting h-hole before}}} \hspace{0.2cm} \subfigure[Graph $\tilde H$: the new hole $\beta$ (shaded gray), terminals on $\beta$ (blue and purple), and the new $u_1$-$u'_1$ path and $u_2$-$u'_2$ path (red).]{\scalebox{0.5}{\includegraphics[scale=0.14]{Fig/manyhole_2.jpg}\label{fig: splitting h-hole after}}} \subfigure[An illustration of the output instance of $\textsc{Glue}_h$, when the input is the $(h-1)$-hole instances in \Cref{fig: splitting h-hole after}. Holes $\alpha$ and $\alpha'$ are restored.]{\includegraphics[scale=0.08]{Fig/glue_2.jpg}\label{fig: gluepath_2}} \caption{An illustration of splitting and gluing an $h$-hole instance along a path.\label{fig: splitting h-hole}} \end{figure} Note that instance $(\tilde H,\tilde U)$ is also a valid input for procedure $\textsc{Glue}_h$. Let $(\hat H, \hat U)$ be the $h$-hole instance obtained by applying procedure $\textsc{Glue}_h$ to instance $(\tilde H,\tilde U)$. Clearly, $\hat U = U$. We use the following claim, whose proof is similar to Claim~\ref{clm: glueset_emulators} and thus is deferred to Appendix~\ref{apd: Proof of split glue h holes}. \begin{claim} \label{clm: split glue h holes} Let $(Z,U)$ be the instance obtained by applying procedure $\textsc{Glue}_h$ to an $\varepsilon$-emulator $(\tilde Z,\tilde U)$ of $(\tilde H,\tilde U)$. Let $(\hat H,U)$ be the instance obtained by applying procedure $\textsc{Glue}_h$ to $(\tilde H,\tilde U)$. Then $(Z,U)$ is an $\varepsilon$-emulator for $(\hat H,U)$. \end{claim} We now complete the proof of Lemma~\ref{L:emulator-constant_hole} by induction on $h$. The base case (when $h=1$) follows from Theorem~\ref{Th:emulator-1hole}. 
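Before turning to the inductive step, the gluing operation itself can be pictured with a toy sketch: identifying the two copies of every vertex in $Y$ makes paths that cross the slicing path available again. The vertex names and the plain-dictionary graph representation below are hypothetical, and the real $\textsc{Glue}_h$ additionally restores the hole structure; this sketch only shows the vertex identification:

```python
import heapq

def dijkstra(adj, src):
    # Textbook Dijkstra over an adjacency dict {u: [(v, w), ...]}.
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def glue(edges, identify):
    # Identify vertex copies: `identify` maps each duplicated copy back
    # to its representative (the two copies of every vertex in Y).
    rep = lambda v: identify.get(v, v)
    adj = {}
    for u, v, w in edges:
        u, v = rep(u), rep(v)
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    return adj
```

For instance, if slicing duplicated a vertex `y` into `y` and `y2`, gluing with `identify={"y2": "y"}` reconnects the two sides, and shortest paths through `y` are realized again.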
Consider now the case where the input $(H,U)$ is an $h$-hole instance for $h>1$. We first compute a pair of terminals $(u,u')$ that lie on different holes, and a shortest path $P$ in $H$ connecting $u$ to $u'$, such that $P$ does not contain any terminal as an internal vertex. Then for each $\hat u\in U\setminus \set{u,u'}$, we use the algorithm from \Cref{thm: eps_cover} and parameter $\EMPH{$\varepsilon'$} \coloneqq \varepsilon/h$ to compute an $\varepsilon'$-cover of $\hat u$ on path $P$. Let $Y$ be the union of all such $\varepsilon'$-covers together with the endpoints of $P$, so $Y\subseteq V(P)$. Note that from \Cref{thm: eps_cover} we have $|Y|\le O(|U|/\varepsilon')\le O(rh/\varepsilon)$, and by using the algorithm from Lemma~\ref{lem: eps_cover_subset}, $Y$ can be computed in $O(h\cdot n \log r)$ time. Let $c$ be a large enough constant that is greater than all hidden constants in \Cref{Th:emulator-1hole}. We then apply the procedure $\textsc{Split}_h$ to the $h$-hole instance $(H,U)$, the path $P$ and the vertex set $Y$. Let $(\tilde H,\tilde U)$ be the $(h-1)$-hole instance that $\textsc{Split}_h$ returns. From procedure $\textsc{Split}_h$, $|\tilde U|\le |U|+2|Y|\le c\cdot rh/\varepsilon$, since $c$ is large enough. Recall that instance $(\hat H,U)$ is obtained by applying the procedure $\textsc{Glue}_h$ to instance $(\tilde H,\tilde U)$. We use the following claim, whose proof is similar to that of Claim~\ref{clm: ratio loss for contracting to portals}, and is deferred to Appendix~\ref{apd: Proof of ratio loss for glue h holes}. \begin{claim} \label{clm: ratio loss for glue h holes} Instance $(\hat H, U)$ is an $\varepsilon'$-emulator for instance $(H, U)$. \end{claim} Consider the $(h-1)$-hole instance $(\tilde H,\tilde U)$.
From the induction hypothesis, if we set $\EMPH{$\varepsilon''$} \coloneqq \varepsilon(1-\frac 1 h)$, then there is another $(h-1)$-hole instance $(\tilde H',\tilde U)$ that is an $\varepsilon''$-emulator for $(\tilde H,\tilde U)$, such that \[ \begin{split} |V(\tilde H')| & \le |\tilde U|\cdot \bigg(\frac{ch\cdot \log |\tilde U|}{\varepsilon''}\bigg)^{c(h-1)} \\ & \le \frac{crh}{\varepsilon}\cdot \bigg(\frac{ch\cdot \log (crh/\varepsilon)}{\varepsilon\cdot (1-1/h)}\bigg)^{c(h-1)} \\ & \le r\cdot \bigg(\frac{ch}{\varepsilon}\bigg)^{c(h-1)+1}\cdot \bigg(\frac{\log (crh/\varepsilon)}{(1-1/h)}\bigg)^{c(h-1)} \\ & \le r\cdot \bigg(\frac{ch}{\varepsilon}\bigg)^{ch}\cdot \bigg(\log r+ \log (crh/\varepsilon)\bigg)^{c(h-1)} \\ & \le r\cdot \bigg(\frac{ch \log r}{\varepsilon}\bigg)^{ch}, \end{split} \] where we have used the fact that $(1-\frac{1}{h})^{-c(h-1)}\le e^{c}< c^{c-1}$, as $c$ is large enough. We apply procedure $\textsc{Glue}_h$ to instance $(\tilde H',\tilde U)$, and let $(H',U)$ be the $h$-hole instance we get. From the procedure $\textsc{Glue}_h$, $|V(H')|\le |V(\tilde H')|\le r\cdot (ch\log r/\varepsilon)^{ch}$. On the other hand, since instance $(\tilde H',\tilde U)$ is an $\varepsilon''$-emulator for $(\tilde H,\tilde U)$, from Claim~\ref{clm: split glue h holes}, instance $(H',U)$ is an $\varepsilon''$-emulator for $(\hat H, U)$. Since $(\hat H, U)$ is an $\varepsilon'$-emulator for instance $(H, U)$ (from Claim~\ref{clm: ratio loss for glue h holes}), using the fact that $\varepsilon''+\varepsilon'=\varepsilon(1-1/h)+\varepsilon/h=\varepsilon$, we conclude that $(H',U)$ is an $\varepsilon$-emulator for instance $(H,U)$. \medskip Note that the above proof also gives an algorithm for constructing an $\varepsilon$-emulator of $(H,U)$ of size at most $r \cdot (ch\log r/\varepsilon)^{ch}$.
Specifically, if $(H,U)$ is the input $h$-hole instance, then we slice it open along some shortest path $P$ that connects a pair of terminals lying on different holes, add $\varepsilon'$-covers of terminals in $U$ on $P$ to get an $(h-1)$-hole instance $(\tilde H, \tilde U)$, and then recursively construct an $\varepsilon''$-emulator for $(\tilde H, \tilde U)$ and glue it along $P$ to get an $\varepsilon$-emulator for $(H,U)$. The following claim completes the proof of Lemma~\ref{L:emulator-constant_hole}. \begin{claim} The running time of the above algorithm is $O\big((n+r^2)\cdot(h\log n/\varepsilon)^{O(h)}\big)$. \end{claim} \begin{proof} We prove the claim by induction on $h$. The base case is when $h=1$: from \Cref{Th:emulator-1hole}, the running time of the above algorithm on an $n$-vertex graph is at most $(n+r^2)\cdot (c \log n/\varepsilon)^c$. Consider the inductive case. The $\textsc{Split}_h$ and $\textsc{Glue}_h$ algorithms run in time at most $cn$. Since the input to the algorithm $\textsc{Split}_h$ is an $n$-vertex graph, $\textsc{Split}_h$ produces a graph $(\tilde H, \tilde U)$ with at most $2n$ vertices. Therefore, from the induction hypothesis, the construction of an $\varepsilon''$-emulator for $(\tilde H, \tilde U)$ takes at most $(2n+r^2)\cdot (c(h-1)\log n/\varepsilon'')^{c(h-1)}$ time. Therefore, the total running time of the algorithm is at most \[ (2n+r^2)\cdot \bigg(\frac{c(h-1)\log n}{\varepsilon''}\bigg)^{c(h-1)}+2cn\le (n+r^2)\cdot \bigg(\frac{ch\log n}{\varepsilon}\bigg)^{ch}. \] \aftermath \end{proof} \subsection{Algorithm for General Planar Graphs: Proof of Theorem~\ref{thm:main}} \label{sec: alg_general} \paragraph{Separators and recursive decomposition.} Let $r$ be any positive integer.
An \EMPH{$r$-division with few holes}~\cite{fre-faspp-1987,kms-srsdp-2013} of an $n$-vertex connected plane graph $G$ is a collection ${\mathcal{G}}$ of connected subgraphs of $G$, called the \EMPH{pieces}, such that \begin{itemize} \item every edge in $G$ belongs to at least one piece in ${\mathcal{G}}$; \item $|{\mathcal{G}}|=O(n/r)$; \item the number of vertices in $H$ is at most $r$ for each piece $H\in {\mathcal{G}}$; \item the number of \EMPH{boundary vertices} in $H$ (that is, vertices in $V(H)$ that also belong to some other piece in ${\mathcal{G}}$) is $O(\sqrt{r})$; and \item for each piece $H\in {\mathcal{G}}$, there are $O(1)$ faces, called \EMPH{holes}, whose boundaries contain all boundary vertices of $H$ (when considered as a plane graph). \end{itemize} We often refer to an $r$-division with few holes as an \EMPH{$r$-division}. A standard $r$-division can be computed in linear time for any $r$~\cite{kms-srsdp-2013}. However, in our application we need to compute $r$-divisions of instances that evenly distribute the terminals among pieces. In particular, we need the following lemma, whose proof is deferred to \Cref{apd: Proof of L:r-division}. \begin{lemma} \label{L:r-division} Given an instance $(G,T)$ with $n \coloneqq |V(G)|$ and $k \coloneqq |T|$, an $r$-division for graph $G$ in which each piece contains $O(1 + kr/n)$ terminals can be computed in $O(n)$ time. \end{lemma} We use the following lemma, which is crucial for the proof of Theorem~\ref{thm:main}. \begin{lemma} \label{lem: size reduction step} Given a planar instance $(H,U)$ with $n \coloneqq |V(H)|$ and $k \coloneqq |U|$, and a parameter $0<\varepsilon<1$, computing an $\varepsilon$-emulator $(H',U)$ for $(H,U)$ with $|V(H')|\le O\Paren{ \sqrt{n k}\cdot(\log n/\varepsilon)^{c'} }$ takes $O\Paren{n\cdot (c'\log n/\varepsilon)^{c'}}$ time for some large enough universal constant $c'$. Furthermore, if $(H,U)$ is an $h$-hole instance, then $(H',U)$ is also an $h$-hole instance.
\end{lemma} \begin{proof} Let $c'$ be a constant that is greater than $c$ and all other hidden constants in Lemma~\ref{L:emulator-constant_hole}. We first compute an $r$-division for $H$ with parameter $r \coloneqq n/k$, using the algorithm from Lemma~\ref{L:r-division}. Let ${\mathcal{R}}$ be the collection of pieces in $H$ that we obtain. From Lemma~\ref{L:r-division}, \begin{itemize} \item $|{\mathcal{R}}|=O(k)$; \item the number of vertices in each piece in ${\mathcal{R}}$ is at most $O(n/k)$; \item the number of boundary vertices in each piece in ${\mathcal{R}}$ is at most $O(\sqrt{n/k})$; \item the number of terminals of $U$ in each piece in ${\mathcal{R}}$ is $O(1)$; and \item there are $O(1)$ holes in each piece in ${\mathcal{R}}$. \end{itemize} For each graph piece $R$ in ${\mathcal{R}}$, let \EMPH{$U_R$} be the set that contains all boundary vertices of $R$ and all terminals of $U$ that lie in $R$. Observe that $(R,U_R)$ is an $h$-hole instance for some constant $h$. We apply the algorithm from Lemma~\ref{L:emulator-constant_hole} to instance $(R,U_R)$, and let $(R',U_R)$ be the $\varepsilon$-emulator we get, so $|V(R')|\le |U_R|\cdot (ch\log (n/k)/\varepsilon)^{ch}$. Also, such an emulator can be computed in at most $(|V(R)|+|U_R|^2)\cdot (h\log n/\varepsilon)^{ch}$ time. Therefore, all emulators in $\set{(R', U_R)\mid R\in {\mathcal{R}}}$ can be computed in time \[ \sum_{R\in {\mathcal{R}}} O\bigg(\big(|V(R)|+|U_R|^2\big)\cdot \Big(\frac{h\log n}{\varepsilon}\Big)^{ch}\bigg) \le O\Paren{ n\cdot \Big(\frac{h\log n}{\varepsilon}\Big)^{ch} } \le O\Paren{ n\cdot \Big(\frac{c'\log n}{\varepsilon}\Big)^{c'} }, \] as $\sum_{R\in {\mathcal{R}}} |V(R)|\le O(k)\cdot (n/k)=O(n)$, $\sum_{R\in {\mathcal{R}}} |U_R|^2\le O(k)\cdot (\sqrt{n/k})^2\le O(n)$, and $c'$ is large enough.
We then glue the emulators together via a process similar to $\textsc{Glue}$ and $\textsc{Glue}_h$, and eventually obtain an $\varepsilon$-emulator $(H',U)$ for $(H,U)$, with size \[ |V(H')|\le \sum_{ R\in {\mathcal{R}}}|U_R|\cdot \bigg(\frac{ch \log (n/k)}{\varepsilon}\bigg)^{ch} \le O\bigg(k\cdot\sqrt{\frac{n}{k}}\bigg)\cdot \bigg(\frac{ch\log n}{\varepsilon}\bigg)^{ch} \le O\Paren{ \sqrt{nk} \cdot \bigg(\frac{\log n}{\varepsilon}\bigg)^{c'} }, \] as both $c$ and $h$ are constants. \end{proof} \paragraph{Algorithm for Theorem~\ref{thm:main}.} Let $G$ be the input $n$-vertex plane graph and let $T$ be the set of terminals of size $k$. We first preprocess the graph $G$ into a new graph $G_0$ as follows. If $n<k^2$, then we set $G_0=G$. If $n\ge k^2$, we use the algorithm in {\cite[Theorem~6.9]{CGH16}} with parameter $\varepsilon/2$ to compute an $(\varepsilon/2)$-emulator $G_0$ for $G$ with size $O(k^2 \log^2 k/\varepsilon^2)$. This can be done in time $\tilde O(n/\varepsilon^{O(1)})$ by a slight modification of the algorithm in \cite{CGH16} (in particular, we remove their preprocessing step that reduces the number of vertices to $k^4$). Either way, we obtain an $(\varepsilon/2)$-emulator $G_0$ for $G$, and $|V(G_0)|=O(k^2 \log^2 k/\varepsilon^2)$. We then set $L \coloneqq \log\log k$ and $\varepsilon' \coloneqq \varepsilon/2L$. Now, sequentially for each $0\le i\le L-1$, we apply the algorithm from Lemma~\ref{lem: size reduction step} to instance $(G_i,T)$ and parameter $\varepsilon'$ to obtain an $\varepsilon'$-emulator $(G_{i+1},T)$ for $(G_i,T)$. Finally, we return $(G',T)=(G_L,T)$ as the output. Note that $\varepsilon' L =(\varepsilon/2)$ and thus $(G_L,T)$ is an $\varepsilon/2$-emulator of $(G_0,T)$, and is therefore an $\varepsilon$-emulator for $(G,T)$. From Lemma~\ref{lem: size reduction step}, the running time of our algorithm is $\tilde O(n /\varepsilon^{O(1)})$.
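The way this iteration squeezes the size from $k^2$ down toward $k$ over $\log\log k$ rounds can be checked numerically. The following sketch drops all constants and treats the polylog factor as a single fixed multiplier; it illustrates only the recursion, not the algorithm itself:

```python
import math

def size_after_rounds(k, poly, rounds):
    # One round of the size-reduction step: s -> sqrt(s * k) * poly.
    # Start from the preprocessed size ~ k^2 * poly^2 (constants dropped).
    s = k * k * poly * poly
    for _ in range(rounds):
        s = math.sqrt(s * k) * poly
    return s
```

With the polylog factor set to $1$, the exponent of $k$ after $i$ rounds is exactly $1+2^{-i}$, matching the exponent in the inductive analysis.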
In order to complete the proof of \Cref{thm:main}, it suffices to show that $|V(G')|\le O(k\cdot (\log k/\varepsilon)^{O(1)})$, which follows immediately from the next claim (by setting $i=L$). \begin{claim} For each $0\le i\le L$, $|V(G_i)|\le k^{1+2^{-i}}\cdot (\log k/\varepsilon')^{2c'-c'/2^{i}}$. \end{claim} \begin{proof} We prove the claim by induction on $i$. The base case is when $i=0$. From the preprocessing step, $|V(G_0)|\le O(k^2 \log^2 k/\varepsilon^2)\le k^2(\log k/\varepsilon')^2$, so the claim holds, as $c'$ is large enough. Consider the inductive case. From Lemma~\ref{lem: size reduction step}, \[ \begin{split} |V(G_{i})| & \le \sqrt{|V(G_{i-1})|\cdot k}\cdot \bigg(\frac{\log k}{\varepsilon'}\bigg)^{c'}\\ & \le \sqrt{\big(k^{1+2^{-(i-1)}}\cdot (\log k/\varepsilon')^{2c'-c'/2^{(i-1)}}\big)\cdot k}\cdot \bigg(\frac{\log k}{\varepsilon'}\bigg)^{c'}\\ & \le k^{(1+2^{-(i-1)}+1)/2}\cdot \bigg(\frac{\log k}{\varepsilon'}\bigg)^{(2c'-c'/2^{(i-1)})/2+c'}\\ & = k^{1+2^{-i}}\cdot (\log k/\varepsilon')^{2c'-c'/2^{i}}. \end{split} \] Therefore the claim holds for all $i$. \end{proof} \subsection{Bootstrapping} \label{SS:bootstrapping} Perhaps surprisingly, we can further reduce the running time for constructing an $\varepsilon$-emulator to linear in the size of the graph whenever $k$ is ``sufficiently'' sublinear and the range of the edge weights (that is, the ratio between the largest and smallest weights) is polynomially bounded, using the idea of \emph{bootstrapping} combined with a precomputed look-up table. \begin{theorem} \label{Th:bootstrap} Given any parameter $0<\varepsilon<1$ and any instance $(H,U)$ with $n \coloneqq |H|$ and $k \coloneqq |U|$ satisfying $k \le n / \log^{D} n$ for some big enough constant $D$, such that the range of the edge weights is bounded by a polynomial in $n$, computing an $\varepsilon$-emulator $(Z,U)$ for $(H,U)$ of size $|V(Z)| \le O(k \polylog k/\varepsilon^{O(1)})$ takes $O_\varepsilon(n)$ time.
Furthermore, if $(H,U)$ is an $h$-hole instance, then $(Z,U)$ is an $h$-hole instance. \end{theorem} \begin{proof} We apply $r$-divisions iteratively with exponentially-growing values of $r$; intuitively, each time we shrink the graph by a very small amount, just enough to absorb the logarithmic terms required to compute the emulators. \begin{itemize} \item First compute an $r$-division of $H$ for $r \coloneqq (\log\log\log n)^{6C}$ that evenly distributes the terminals in $U$, using Lemma~\ref{L:r-division}, where $C$ is bigger than the number of logs we need in the running time of Theorem~\ref{thm:main}. Replace each piece in the $r$-division by an $\varepsilon$-emulator with respect to the boundary vertices and terminals using Theorem~\ref{thm:main}; every piece contains $O(r^{1/2} + k(\log\log\log n)^{6C}/n) \le O(r^{1/2})$ boundary vertices and terminals. The total time for the emulator construction is \[ O\Paren{ r \cdot \Paren{\frac{\log r}{\varepsilon}}^{O(1)} } \cdot O\Paren{\frac{n}{r}} \le O\Paren{ \frac{n \cdot (\log\log\log\log n)^{O(1)}}{\textnormal{poly}\varepsilon} }; \] and the new graph $H'$ has size \[ O\Paren{ r^{1/2} \Paren{\frac{\log r^{1/2}}{\varepsilon}}^{C} } \cdot O\Paren{\frac{n}{r}} \le O \Paren{ \frac{n}{\varepsilon^C (\log\log\log n)^{2C}} }. \] \item Now that the graph is about a $(\log\log\log n)^{2C}$ factor smaller than the original, we can compute another $r'$-division for $r' \coloneqq (\log\log n)^{6C}$, and replace each piece in the $r'$-division by an $\varepsilon$-emulator with respect to the boundary vertices and terminals; every piece contains $O(r'^{1/2} + k(\log\log n)^{6C}/n) \le O(r'^{1/2})$ boundary vertices and terminals. This way, instead of the $O_\varepsilon(n (\log\log\log n)^{O(1)})$ time that performing the $r'$-division directly on the original graph would require, it now takes time \[ O_\varepsilon \Paren{ \frac{n}{(\log\log\log n)^{2C}} \cdot (\log\log\log n)^{C} } \le O_\varepsilon(n).
\] The new graph $H''$ has size about $O_\varepsilon(n/(\log\log n)^{2C})$. \item Now that the graph is about a $(\log\log n)^{2C}$ factor smaller than the original, we can compute another $r''$-division for $r'' \coloneqq (\log n)^{6C}$, and replace each piece in the $r''$-division by an $\varepsilon$-emulator with respect to the boundary vertices and terminals; every piece contains $O(r''^{1/2} + k(\log n)^{6C}/n) \le O(r''^{1/2})$ boundary vertices and terminals, and this takes time \[ O_\varepsilon \Paren{ \frac{n}{(\log\log n)^{2C}} \cdot (\log\log n)^{C} } \le O_\varepsilon(n). \] The new graph $H'''$ has size about $O_\varepsilon(n/(\log n)^{2C})$. \item Finally, compute an $\varepsilon$-emulator for $H'''$ with respect to the terminals. This takes time \[ O_\varepsilon\Paren{ \frac{n}{(\log n)^{2C}} \cdot (\log n)^C } \le O_\varepsilon(n). \] The final emulator has size $O(k \polylog k / \varepsilon^{O(1)})$. \end{itemize} \noindent The accumulated distortion in distance is $4\varepsilon$. Overall, the bottleneck is computing the first set of emulators for the pieces in the $r$-division, which takes $O(n \cdot (\log\log\log\log n)^{O(1)})$ time. We can avoid spending super-linear time on the first set of emulators; instead, we precompute a look-up table for every graph of size up to $r = (\log\log\log n)^{6C}$, every possible subset of terminals, and every edge-weight function rounded to the closest power of $1+\varepsilon$. \paragraph{Look-up table.} Now we can describe the construction of the look-up table. \begin{itemize} \item There are $2^{O(r)}$ plane graphs $K$ up to size $r$. \item There are $2^{r}$ possible choices for the terminal subset $U_K$.
\item The spread of any instance $(K,U_K)$ is at most $n^{O(1)}$ because the range of the edge weights is polynomial in $n$; so if we round the weight of each edge to the closest power of $1+\varepsilon$, there are $\log_{1+\varepsilon} n^{O(1)} \le O(\log n / \varepsilon)$ possible weight values per edge, and thus ${O(\log n / \varepsilon)}^{2^{O(r)}}$ many different (rounded) edge-weight functions (because $\varepsilon$ is a constant). \item Computing an $\varepsilon$-emulator for each instance $(K,U_K)$ takes $r^{O(1)}$ time. \end{itemize} Overall, it takes \[ 2^{O(r)} \cdot 2^r \cdot O(\log n / \varepsilon)^{2^{O(r)}} \cdot r^{O(1)} \le 2^{2^{O_\varepsilon((\log\log\log n)^{6C})}} \le o_\varepsilon(n) \] time to precompute a look-up table, so that for any instance $(K,U_K)$ from the pieces of the first $r$-division, one can round the edge weights of $K$ and find the $\varepsilon$-emulator for $(K,U_K)$ directly from the look-up table. Rounding the edge-weights to the closest powers of $(1+\varepsilon)$ will introduce at most $O(\varepsilon)$ distortion. As a result, an $\varepsilon$-emulator of size $O(k \polylog k/\varepsilon^{O(1)})$ for $(H,U)$ can be computed in $O_\varepsilon(n)$ time. \end{proof} \section{Preliminaries} \label{sec:prelim} All logarithms are to the base of $2$. All graphs are simple and undirected. Let $G$ be a connected graph. A vertex $v\in V(G)$ is called a \EMPH{cut vertex} of $G$ if the graph $G\setminus \set{v}$ is disconnected. The cut vertices of a plane graph $G$ can be computed in time $O(|V(G)|+|E(G)|)$. Let $G$ be a graph with an edge-weight function $\EMPH{$w$}\colon E(G) \to \mathbb{R}_{+}$. The weight of a path $P$ is defined as $w(P) \coloneqq\sum_{e\in E(P)} w(e)$. The shortest-path distance between two vertices $u$ and $v$ is denoted by \EMPH{$\textnormal{\textsf{dist}}_G(u,v)$}. 
For a subset $S$ of vertices in $G$, we define $\EMPH{$\textnormal{\textsf{diam}}_G(S)$} \coloneqq \max_{u, u'\in S}\textnormal{\textsf{dist}}_G(u,u')$. For a pair of disjoint subsets of vertices $(S,S')$ in $G$, we define $\EMPH{$\textnormal{\textsf{dist}}_G(S,S')$} \coloneqq \min_{u\in S, u'\in S'}\textnormal{\textsf{dist}}_G(u,u')$. \paragraph{Emulators.} Throughout, we consider a graph $G$ equipped with a special set of vertices $T$, called \EMPH{terminals}. We refer to the pair $(G,T)$ as an \EMPH{instance}. Let $(G,T)$ and $(H,T)$ be a pair of instances with the same set of terminals, and let $\varepsilon\in[0,1]$. We say that $H$ is an \EMPH{$\varepsilon$-emulator} for $G$ with respect to $T$, or equivalently, instance $(H,T)$ is an $\varepsilon$-emulator for instance $(G,T)$ if \begin{equation} \label{eq:distortion2} \forall x,y\in T, \quad e^{-\varepsilon}\cdot\textnormal{\textsf{dist}}_G(x,y) \le \textnormal{\textsf{dist}}_H(x,y) \le e^{\varepsilon}\cdot\textnormal{\textsf{dist}}_G(x,y). \end{equation} Throughout, we use Equation~\eqref{eq:distortion2} as the definition of an $\varepsilon$-emulator instead of Equation~\eqref{eq:distortion}; but since we restrict our attention to $\varepsilon<1$, the two definitions are equivalent up to scaling $\varepsilon$ by a constant factor. By definition, if $(H,T)$ is an $\varepsilon$-emulator for $(G,T)$, then $(G,T)$ is also an $\varepsilon$-emulator for $(H,T)$. Moreover, if $(G,T)$ is an $\varepsilon$-emulator for $(G',T)$ and $(G',T)$ is an $\varepsilon'$-emulator for $(G'',T)$, then $(G,T)$ is an $(\varepsilon+\varepsilon')$-emulator for $(G'',T)$. Most instances $(G,T)$ considered in this paper are \EMPH{planar instances}, where graph $G$ is a connected plane graph. We say that a planar instance $(G,T)$ is an \EMPH{$h$-hole instance} for an integer $h>0$ if the terminals lie on at most $h$ faces in the embedding of $G$. The faces incident to some terminals are called \EMPH{holes}.
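As a quick sanity check on the definition, the $e^{\pm\varepsilon}$ form makes the composition property above immediate, since the exponents simply add. The following small Python sketch, with hypothetical distance functions of our own, verifies the guarantee mechanically:

```python
import math
from itertools import combinations

def is_emulator(dist_g, dist_h, terminals, eps):
    # Check, for every terminal pair, the multiplicative guarantee
    # e^{-eps} * dist_G(x,y) <= dist_H(x,y) <= e^{eps} * dist_G(x,y).
    return all(
        math.exp(-eps) * dist_g(x, y)
        <= dist_h(x, y)
        <= math.exp(eps) * dist_g(x, y)
        for x, y in combinations(terminals, 2)
    )
```

For example, stretching all distances by a factor $1.05$ fits inside $e^{0.1}$ but not $e^{0.01}$, and composing two such stretches still fits inside $e^{0.2}$.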
Notice that in a one-hole instance $(G,T)$, we can safely assume that all the terminals in $T$ lie on the outerface of $G$. By definition, a $0$-emulator preserves distances exactly, i.e., $\textnormal{\textsf{dist}}_G(x,y)=\textnormal{\textsf{dist}}_{G'}(x,y)$ for all $x,y\in T$. \begin{theorem}[Chang-Ophelders~{\cite[Theorem~1]{co-pemm-2020}}] \label{thm: quartergrid enumator} Given a one-hole instance $(G,T)$ with $n \coloneqq |V(G)|$ and $k \coloneqq |T|$, one can compute a $0$-emulator $(G',T)$ for $(G,T)$ of size $|V(G')| \le k^2$. The running time of the algorithm is $O((n+k^2)\log n)$. \end{theorem} \paragraph{Crossing pairs and the Monge property.} Let $(G,T)$ be a one-hole instance. Assume that no terminal in $T$ is a cut vertex of $G$, so that every terminal appears exactly once as we traverse the boundary of the outerface. Let $(t_1,t_2), (t'_1,t'_2)$ be two terminal pairs whose four terminals are all distinct. We say that the pairs $(t_1,t_2),(t'_1,t'_2)$ are \EMPH{crossing} if the clockwise order in which these terminals appear on the boundary is either $(t_1,t'_1,t_2,t'_2)$ or $(t_1,t'_2,t_2,t'_1)$; otherwise we say that they are \EMPH{non-crossing}. A collection ${\mathcal M}$ of pairs of terminals in $T$ is called \emph{non-crossing} if every two pairs in ${\mathcal M}$ are non-crossing. Sometimes we abuse the language and say that a set of shortest paths ${\mathcal{P}}$ in $G$ is \emph{non-crossing} when the collection of endpoint pairs for the paths is non-crossing. The \EMPH{Monge property}\footnote{Technically, this is known as the \EMPH{cyclic Monge property}~\cite{co-pemm-2020}.} states that, for every one-hole instance $(G,T)$ and every crossing pairs of terminals $(t_1,t_2)$ and $(t'_1,t'_2)$, \[ \textnormal{\textsf{dist}}_G(t_1,t_2)+\textnormal{\textsf{dist}}_G(t'_1,t'_2) \ge \textnormal{\textsf{dist}}_G(t'_1,t_2)+\textnormal{\textsf{dist}}_G(t_1,t'_2).
\] \paragraph{Well-structured sets of shortest paths.} Consider a graph $G$ and a collection ${\mathcal{P}}$ of shortest paths in $G$. We say that the set ${\mathcal{P}}$ is \EMPH{well-structured} if for every pair of paths $(P,P')$ in ${\mathcal{P}}$, the intersection $P\cap P'$ is a single subpath of both $P$ and $P'$. It is not hard to see that every collection of shortest paths in~$G$ is well-structured if the shortest path between any two vertices in $G$ is unique. Such a condition can be enforced with high probability if we perturb the edge weights in $G$ slightly and apply the \emph{isolation lemma}~\cite{mvv-memi-1987}. If randomization is to be avoided, one can use a \emph{lexicographic perturbation} by redefining the edge weights to be vectors~\cite{cha-odlp-1952,dow-gsmml-1955,hm-amcpm-1994}, or the \emph{leftmost rule} when choosing a shortest path~\cite{ek-lamfm-2013} when $G$ is a plane graph. A deterministic lexicographic perturbation scheme that guarantees the uniqueness of shortest paths in an $n$-vertex plane graph can be computed in $O(n)$ time~\cite{efl-hmpfs-2018}. Therefore, from here on we assume that all the planar graphs we consider have a unique shortest path between every pair of vertices, and that every collection of shortest paths is well-structured. The proof of the following lemma is provided in Appendix~\ref{apd: Proof of well-structured path set}. \begin{lemma} \label{lem: well-structured path set} Given a one-hole instance $(G,T)$ and a non-crossing collection ${\mathcal M}$ of pairs of terminals in $T$, one can compute a well-structured set ${\mathcal{P}}$ of shortest paths, one for each pair in ${\mathcal M}$, in $O(|E(G)|\cdot \log|{\mathcal M}|)$ time. \end{lemma} \paragraph{\boldmath{$\varepsilon$}-covers.} We use the notion of \emph{$\varepsilon$-covers}~\cite{ks-fdass-1998,tho-corad-2004}. Let $\varepsilon \in (0,1)$ be a parameter. Let $G$ be a graph and let $P$ be a shortest path in $G$ connecting some pair of vertices.
Consider now a vertex $v$ in $G$ that does not belong to the path $P$. An \EMPH{$\varepsilon$-cover} of $v$ on $P$ is a subset $S$ of vertices in $P$ such that, for each vertex $x\in V(P)$, taking the detour from $v$ to some $y\in S$ and then to~$x$ is a $(1+\varepsilon)$-approximation to the shortest path from $v$ to $x$, i.e., there exists $y\in S$ for which $\textnormal{\textsf{dist}}_G(v,y)+\textnormal{\textsf{dist}}_G(y,x)\le (1+\varepsilon)\cdot \textnormal{\textsf{dist}}_G(v,x)$. Small $\varepsilon$-covers of size $O(1/\varepsilon)$ are known to exist. \begin{theorem}[Thorup~{\cite[Lemma~3.4]{tho-corad-2004}}] \label{thm: eps_cover} Let $\varepsilon \in (0,1)$ be a constant. For every shortest path $P$ in some graph $G$ and every vertex $v\notin P$, there is an $\varepsilon$-cover of $v$ on $P$ with size $O(1/\varepsilon)$. Moreover, such an $\varepsilon$-cover can be computed in $O(|E(G)|)$ time. \end{theorem} We emphasize that choosing $O(1/\varepsilon)$ ``portals'' equally spaced along the path $P$ as in Klein-Subramanian~{\cite{ks-fdass-1998}} is not sufficient, because the distance from $v$ to $P$ might be much smaller than the length of $P$. The linear-time construction is not stated in Lemma 3.4 of~\cite{tho-corad-2004}, but it can be inferred from the proof. In fact, we will use the following construction that allows us to efficiently compute the union of $\varepsilon$-covers of a subset $Y$ of vertices along the boundary of a plane graph; the proof is a simple divide-and-conquer similar to Reif~\cite{rei-mscpu-1981}, which we omit here. \begin{lemma} \label{lem: eps_cover_subset} Let $\varepsilon \in (0,1)$ be a constant and let $G$ be a plane graph. Given a subset $Y$ of vertices that lie on the same face of $G$ and a shortest path $P$ connecting a pair of vertices in $G$, we can compute the union of $\varepsilon$-covers of each vertex in $Y$ on $P$ in $O(|E(G)| \cdot \log |Y|)$ time. \end{lemma}
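To make the definition concrete, here is a small Python sketch (ours, not from the paper) that computes a valid $\varepsilon$-cover by a naive greedy scan. It only guarantees correctness of the cover, not the $O(1/\varepsilon)$ size bound of Thorup's construction, and it runs in time quadratic in $|V(P)|$; the arrays \texttt{path\_dist} and \texttt{d\_v} are assumed to be precomputed shortest-path distances.

```python
def eps_cover(eps, path_dist, d_v):
    """Greedy sketch of an eps-cover of a vertex v on a shortest path P.

    path_dist[i] -- distance from P[0] to P[i] along P; since P is a
                    shortest path, dist_G(P[i], P[j]) = |path_dist[j] - path_dist[i]|.
    d_v[i]       -- dist_G(v, P[i]), precomputed.
    Returns a list S of indices into P such that every x = P[j] admits
    some i in S with d_v[i] + dist_P(i, j) <= (1 + eps) * d_v[j].
    """
    m = len(d_v)
    covered = [False] * m
    S = []
    for i in range(m):
        if covered[i]:
            continue
        # P[i] is not yet covered: add it to the cover (it covers itself,
        # since the detour through P[i] to reach P[i] has no overhead).
        S.append(i)
        for j in range(m):
            if d_v[i] + abs(path_dist[j] - path_dist[i]) <= (1 + eps) * d_v[j]:
                covered[j] = True
    return S
```

For example, with \texttt{eps = 0.5}, \texttt{path\_dist = [0, 1, 2, 3]} and \texttt{d\_v = [2, 1, 1, 2]} (a vertex $v$ at distance $1$ from the two middle path vertices), the scan returns the cover \texttt{[0, 1, 2]}.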
https://arxiv.org/abs/1703.07146
Computing Milnor fiber monodromy for some projective hypersurfaces
We describe an algorithm computing the monodromy and the pole order filtration on the top Milnor fiber cohomology of hypersurfaces in $\mathbb{P}^n$ whose pole order spectral sequence degenerates at the second page. In the case of hyperplane arrangements and free, locally quasi-homogeneous hypersurfaces, and assuming a key conjecture, this algorithm is much faster than for a hypersurface as above. Our conjecture is supported by the results due to L. Narv\'ez Macarro and M. Saito on the roots of Bernstein-Sato polynomials of such hypersurfaces, by all the examples computed so far, and by one partial result. For hyperplane arrangements coming from reflection groups, a surprising symmetry of their pole order spectra on top cohomology is displayed in our examples. We also improve our previous results in the case of plane curves.
\section{Introduction} \label{sec:intro} Let $V:f=0$ be a reduced hypersurface in the complex projective space $\mathbb{P}^{n}$, defined by a homogeneous polynomial $f \in S=\mathbb{C}[x_0,...,x_n]$, of degree $d$. Consider the corresponding complement $M=\mathbb{P}^{n}\setminus V$, and the global Milnor fiber $F$ defined by $f(x_0,...,x_n)=1$ in $\mathbb{C}^{n+1}$, with monodromy action $h:F \to F$, $h(x)=\exp(2\pi i/d)\cdot x$. A special case of great interest is when $f$ is a product of linear forms, and then $V$ is a hyperplane arrangement ${\mathcal A}$, and the corresponding complement is traditionally denoted by $M({\mathcal A})$. Much effort has been devoted, most often in the case of hyperplane arrangements, to determining the eigenvalues of the monodromy operators \begin{equation} \label{mono1} h^m: H^m(F,\mathbb{C}) \to H^m(F,\mathbb{C}) \end{equation} with $1 \leq m \leq n$, see for instance \cite{A2, PB, BSett, BY, BDS, Cal1, Cal2, CS, DHA, DL3, MP, MPP, PS, Se2, S2}. However, in most of these papers, either only the monodromy action on $H^1(F, \mathbb{C})$ is considered, or the results are just sufficient conditions for the vanishing of some eigenspaces $H^m(F,\mathbb{C})_{\lambda}$. These conditions are usually not necessary, see Example \ref{exNF} below. For complexified real arrangements, an approach to compute the monodromy operators using the associated Salvetti complex is explained in \cite{Cal1, Cal2, SalSet, Se2}. However, note that the Milnor fibers considered in \cite{Cal1, Cal2} are not the same as the Milnor fibers in our note, but they correspond to the discriminants of some reflection groups. In this paper we explain an approach working for {\it some} hypersurfaces, namely in technical terms for hypersurfaces $V:f=0$ whose pole order spectral sequence $E_*(f)$ described below degenerates at the $E_2$-term.
For hyperplane arrangements and free locally quasi-homogeneous hypersurfaces, modulo a basic conjecture that is {\it one of the main contributions of this paper}, see Conjecture \ref{conj2} below, this algorithm is quite efficient. This conjecture is suggested by the fact that, for these latter hypersurfaces, the roots of their Bernstein-Sato polynomials enjoy special properties, as proved by L. Narv\'ez Macarro \cite{NM} and M. Saito \cite{Sa0}. In fact our results are either {\it conjectural}, depending on whether Conjecture \ref{conj2} holds (as for instance in Examples \ref{exD4}, \ref{exG}, \ref{exA214} and \ref{exNF}, and for $k$ a resonant value), or {\it certain}, but based on additional information coming from other viewpoints (as for instance in the previous examples, but for $k$ non-resonant, see Remark \ref{rkprim1} on this point, or in Examples \ref{exFA}, \ref{exBraid}, \ref{exD3}, \ref{exGEN} and \ref{exGEN2}). Our computation gives not only the dimensions of the eigenspaces $H^n(F,\mathbb{C})_{\lambda}$ of the monodromy, but also the dimensions of the graded pieces $Gr^p_PH^n(F,\mathbb{C})_{\lambda}$, where $P$ denotes the pole order filtration on the cohomology group $H^m(F,\mathbb{C})$, see section 2 below for the definition. The dimensions of the eigenspaces $H^m(F,\mathbb{C})_{\lambda}$ for $m<n$ can then be computed by decreasing induction on $n$, using a generic linear section and the formula \eqref{Euler} below. In the case $n=2$, this approach was already described in \cite{DStFor, DStMFgen} in the case of a reduced plane curve $C:f=0$. However, even in this case, we bring here valuable new information, see Proposition \ref{propFP} and Corollary \ref{corcurve}. These two results have short and rather direct proofs, but their consequences for the practical computations are substantial, and hence we regard them as {\it main results} of our paper.
Assume now that $n>2$ and let $H \subset \mathbb{C}^{n+1}$ be a generic hyperplane with respect to the hypersurface $V$, passing through the origin. Let $V_H=V \cap H$ be the corresponding hyperplane section of $V$ in $\mathbb{P}(H)=\mathbb{P}^{n-1}$, and denote by $F_H$ the corresponding Milnor fiber in $H=\mathbb{C}^n$ and by $$h_H^m: H^m(F_H,\mathbb{C}) \to H^m(F_H,\mathbb{C})$$ the associated monodromy operators. Then it is known that the obvious inclusion $\iota_H: F_H \to F$ induces isomorphisms $H^m(F,\mathbb{C}) = H^m(F_H,\mathbb{C})$ for $m=1,2,..., n-2$, as well as a monomorphism \begin{equation} \label{mono} \iota_H^*:H^{n-1}(F,\mathbb{C}) \to H^{n-1}(F_H,\mathbb{C}), \end{equation} see for instance \cite{D1}, which are compatible with the monodromy operators. Consider the Alexander polynomials of $V$, which are just the characteristic polynomials of the monodromy, namely \begin{equation} \label{Delta} \Delta^j(V)(t)=\det (t\cdot Id -h^j|H^j(F,\mathbb{C})), \end{equation} for $j=0,1,...,n$, denoted by $\Delta^j({\mathcal A})(t)$ in the case $V={\mathcal A}$. It is clear that one has $\Delta^0(V)(t)=t-1$, and moreover \begin{equation} \label{Euler} \Delta^0(V)(t)\Delta^1(V)(t)^{-1}\cdots \Delta^n(V)(t)^{(-1)^n}=(t^d-1)^{\chi(M)}, \end{equation} where $\chi(M)$ denotes the Euler characteristic of the complement $M$, see for instance \cite[Proposition 4.1.21]{D1}. When $V$ is a hyperplane arrangement ${\mathcal A}$, the Euler characteristic $\chi(M({\mathcal A}))$ is easily computable from the intersection lattice $L({\mathcal A})$, see \cite{DHA, OT}. By induction, assume that we know how to compute the characteristic polynomials $\Delta^j(V_H)(t)$ for $j=0,1,...,n-1$. 
It follows that \begin{equation} \label{Delta2} \Delta^j(V)(t)=\Delta^j(V_H)(t), \end{equation} for $j=0,1,...,n-2$, and hence, in view of the formula \eqref{Euler}, it is enough to determine the top degree Alexander polynomial $\Delta^n(V)(t)$ and the Euler characteristic $\chi(M)=n+1 -\chi(V)$. {\it The computation of this Alexander polynomial $\Delta^n(V)(t)$ is the main aim of this paper}. For the computation of the Euler characteristic $\chi(V)$, see \cite[Corollary 2]{MSS}. Here is in short how we proceed. Let $\Omega^j$ denote the graded $S$-module of (polynomial) differential $j$-forms on $\mathbb{C}^{n+1}$, for $0 \leq j \leq n+1$. The complex $K^*_f=(\Omega^*, \dd f \wedge)$ is just the Koszul complex in $S$ of the partial derivatives $f_0, f_1, \dots, f_n$ of the polynomial $f$ with respect to $x_0, x_1,...,x_n$. The general theory says that there is a spectral sequence $E_*(f)$, whose first term $E_1(f)$ is computable from the cohomology of the Koszul complex $K^*_f$ and whose limit $E_{\infty}(f)$ gives us the action of the monodromy operator on the graded pieces of the reduced cohomology $\tilde H^*(F,\mathbb{C})$ of the Milnor fiber with respect to the pole order filtration $P$, see \cite{Dcomp, DS1, Sa3, Sa4} as well as \cite[Chapter 6]{D1}. In this note we present an algorithm to compute the second page of the spectral sequence $E_*(f)$. Several examples computed so far suggest the following. \begin{conj} \label{conj1} The spectral sequence $E_*(f)$ degenerates at the $E_2$-term for any hyperplane arrangement ${\mathcal A}:f=0$ and any free locally quasi-homogeneous hypersurface $V:f=0$ in $\mathbb{P}^n$. \end{conj} For the moment it is not clear how to prove this conjecture, nor even how to check that it holds in a specific example. For a related property, {\it extremely useful for performing our computations}, see Definition \ref{def1}, Remark \ref{rkprim1} and Conjecture \ref{conj2}.
This property holds in all the cases where we dispose of enough additional information to compute the monodromy operators, see Examples \ref{exFA}, \ref{exBraid}, \ref{exD3}. It also holds for some irreducible non-free surfaces, see Examples \ref{exGEN} and \ref{exGEN2}. Theorem \ref{thmlog1} gives some theoretical support for Conjecture \ref{conj2}, and is a final {\it main result} in our paper. Conjecture \ref{conj1} can also be regarded as an extension of the following recent deep result due to M. Saito \cite{Sa3}. \begin{thm} \label{thmconj0} If a hypersurface $V:f=0$ in $\mathbb{P}^n$ has only isolated singularities, then the spectral sequence $E_*(f)$ degenerates at the $E_2$-term if and only if these singularities are weighted homogeneous. In particular, Conjecture \ref{conj1} holds for $n=2$. \end{thm} From a different point of view, Conjecture \ref{conj1} can be regarded as a special case of a general conjecture for singular projective hypersurfaces going back to H. Terao \cite{Terao78}, and saying that $E_2(f)=E_{\infty}(f)$ always holds. This conjecture is known to fail in general, e.g. by Theorem \ref{thmconj0} above or by looking at the surface $V':f'=0$ introduced at the end of Example \ref{exGEN}, see also \cite{Dcomp}. The remarkable fact pointed out in our paper is that Terao's Conjecture seems to hold {\it in a stronger form} for any hyperplane arrangement. In the final section several examples of plane arrangements in $\mathbb{P}^3$, as well as examples of (free, locally quasi-homogeneous or general) surfaces in $\mathbb{P}^3$, are considered to illustrate the method. For reflection arrangements, the pole order spectrum $Sp^0_P(f)$ of the top cohomology group $H^3(F, \mathbb{C})$ has a surprising symmetry property, see Remark \ref{rkKEY4}, which is not present in the case of other arrangements considered in Examples \ref{exFA} and \ref{exNF}. 
There is no explanation for this symmetry for the moment, just a possible analogy to the formula \eqref{eqMN} verified by the Bernstein-Sato polynomial of a free, locally quasi-homogeneous hypersurface. The computations in this note were made using the computer algebra system Singular \cite{Sing}. The corresponding codes are available on request. \medskip We thank Morihiko Saito for very useful discussions related to the subject and the presentation of this note, see in particular Remark \ref{rkF=P} and Remark \ref{nonfreearr}. \section{Gauss-Manin complexes, Koszul complexes, and Milnor fiber cohomology} \label{sec2} Let $S$ be the polynomial ring $\mathbb{C}[x_0,...,x_n]$ with the usual grading and consider a reduced homogeneous polynomial $f \in S$ of degree $d$. The graded Gauss-Manin complex $C_f^*$ associated to $f$ is defined by taking $C_f^j=\Omega^j[\partial_t]$, i.e. formal polynomials in $\partial_t$ with coefficients in the space of differential forms $\Omega^j$, where $\deg \partial_t=-d$ and the differential $\dd: C_f^j \to C_f^{j+1}$ is $\mathbb{C}$-linear and given by \begin{equation} \label{difC} \dd (\omega \partial_t^q)=(\dd \omega)\partial_t^q-(\dd f \wedge \omega) \partial_t^{q+1}, \end{equation} see for more details \cite[Section 4]{DS1}. The complex $C_f^*$ has a natural increasing filtration $P'_*$ defined by \begin{equation} \label{filC} P'_qC^j_f=\oplus_{i \leq q+j}\Omega^j\partial_t^i. \end{equation} If we set $P'^q=P'_{-q}$ in order to get a decreasing filtration, then one has \begin{equation} \label{grC} Gr^q_{P'}C^*_f=\sigma_{\geq q}(K^*_f((n+1-q)d)), \end{equation} the truncation of a shifted version of the Koszul complex $K^*_f$. Moreover, this filtration $P'^q$ yields a decreasing filtration $P'$ on the cohomology groups $H^j(C^*_f)$ and a spectral sequence \begin{equation} \label{spsqC} E_1^{q,j-q}(f) \Rightarrow H^j(C^*_f). 
\end{equation} On the other hand, the reduced cohomology $\tilde H^j(F,\mathbb{C})$ of the Milnor fiber $F:f(x_0,...,x_n)=1$ associated to $f$ has a pole order decreasing filtration $P$, see \cite[Section 3]{DS1}, such that there is a natural identification for any integers $q$, $j$ and $k \in [1,d]$ \begin{equation} \label{filH} P'^{q+1}H^{j+1}(C^*_f)_k=P^q\tilde H^j(F,\mathbb{C})_{\lambda}, \end{equation} where $\lambda=\exp (-2 \pi ik/d).$ Moreover, the $E_1$-term of the spectral sequence \eqref{spsqC} is completely determined by the morphisms of graded $\mathbb{C}$-vector spaces \begin{equation} \label{diff1} \dd ' : H^j(K^*_f) \to H^{j+1}(K^*_f), \end{equation} induced by the exterior differentiation of forms, i.e. $\dd ' :[\omega] \mapsto [\dd (\omega)]$. More precisely, this spectral sequence $E_*(f)$ is the direct sum of $d$ spectral sequences $E_*(f)_k$, for $k \in [1,d]$, where \begin{equation} \label{newspsq} E_1^{s,t}(f)_k=H^{s+t+1}(K^*_f)_{td+k} \text{ and } \dd_1:E_1^{s,t}(f)_k \to E_1^{s+1,t}(f)_k, \ \ \dd _1:[\omega] \mapsto [\dd (\omega)]. \end{equation} With this notation, one has \begin{equation} \label{limit0} E_{\infty}^{s,t}(f)_k=Gr_P^s\tilde H^{s+t}(F,\mathbb{C})_{\lambda }. \end{equation} Since the Milnor fiber $F$ is a smooth affine variety, its cohomology groups $H^m(F, \mathbb{C})$ have a decreasing Hodge filtration $F$ coming from the mixed Hodge structure constructed by Deligne, see \cite{PeSt}. The two filtrations $P$ and $F$ are related by the inclusion \begin{equation} \label{PFincl} F^sH^{m}(F,\mathbb{C}) \subset P^s H^m(F,\mathbb{C}), \end{equation} for any integers $s,m$, see formula $(4.4.8)$ in \cite{DS1}. This inclusion and the equality $F^0H^{m}(F,\mathbb{C})=H^{m}(F,\mathbb{C})$, imply the vanishing \begin{equation} \label{limit} E_{\infty}^{s,t}(f)_k=0 \end{equation} for any $s<0$ or $t<0$, in other words the limit page of the spectral sequence is contained in the first quadrant. 
Moreover, for $k=d$, one has in addition $E_{\infty}^{0,n}(f)_d=0$, see \cite[Proposition 5.2]{BS0}. \begin{ex} \label{exsmooth} If $F$ is the Milnor fiber associated to a smooth hypersurface $V$ in $\mathbb{P}^n$, then the inclusion \eqref{PFincl} becomes an equality, see for instance \cite{Steen} and \cite[Example 6.2.13]{ D1}. Moreover one has \begin{equation} \label{limit2} E_{\infty}^{s,t}(f)_k=0 \text{ for } s+t \ne n \text{ and } E_{\infty}^{n-t,t}(f)_k=\mu(td+k-n-1), \end{equation} where $\mu(a)$ is the coefficient of $t^a$ in the polynomial $$\left( \frac{1-t^{d-1}}{1-t}\right)^{n+1}.$$ In particular, $\mu(a) \ne 0$ if and only if $1 \leq a \leq (n+1)(d-2)$. \end{ex} One has the following result, the second part of which answers positively a conjecture made in Remark 2.9 (i) in \cite{DStMFgen}. \begin{prop} \label{propFP} Let $V$ be a hypersurface in $\mathbb{P}^n$ and $H$ a generic hyperplane. Consider the linear inclusion $\iota_H:F_H \to F$ defined in the Introduction. Then the induced morphisms $\iota_H^*:H^{m}(F,\mathbb{C}) \to H^{m}(F_H,\mathbb{C})$ are strictly compatible with the Hodge filtration $F^p$ and compatible with the pole order filtration $P^p$. Moreover, one has the following. \begin{enumerate} \item If $V$ has only isolated singularities, then the Hodge filtration $F^p$ and the pole order filtration $P^p$ coincide on $H^{n-1}(F,\mathbb{C})$. \item For any hypersurface $V$, in particular for any hyperplane arrangement ${\mathcal A}$, the Hodge filtration $F^p$ and the pole order filtration $P^p$ coincide on $H^{1}(F,\mathbb{C})$. \end{enumerate} \end{prop} \proof Since $\iota_H:F_H \to F$ is a regular mapping, the strict compatibility of $\iota_H^*$ with the Hodge filtration $F^p$ is well known, see \cite{De}. 
In particular, for $m=n-1$ and $V$ with isolated singularities, since $\iota_H^*$ is injective, this means that a cohomology class ${\alpha} \in H^{n-1}(F,\mathbb{C})$ satisfies \begin{equation} \label{C1} {\alpha} \in F^pH^{n-1}(F,\mathbb{C}) \text{ if and only if } \iota_H^*({\alpha} )\in F^pH^{n-1}(F_H,\mathbb{C}). \end{equation} The compatibility of $\iota_H^*$ with the pole order filtration $P^p$ means that \begin{equation} \label{C2} {\alpha} \in P^pH^{m}(F,\mathbb{C}) \text{ implies } \iota_H^*({\alpha} )\in P^pH^{m}(F_H,\mathbb{C}). \end{equation} This property comes from the fact that $\iota_H^*$ induces a morphism $\iota_H^*: C^*_f \to C^*_{f_H}$ between the corresponding Gauss-Manin complexes, where $f_H$ denotes the restriction of the polynomial $f$ to the hyperplane $H$, thought of here as a hyperplane in $\mathbb{C}^{n+1}$. This morphism preserves the $P'_*$ filtrations introduced in \eqref{filC}, i.e. one clearly has $\iota_H^* (P'_qC^*_f) \subset P'_qC^*_{f_H},$ for any integer $q$. To prove the claim (1), it is enough in view of the inclusion in \eqref{PFincl} to prove the converse inclusion $P^pH^{n-1}(F,\mathbb{C}) \subset F^pH^{n-1}(F,\mathbb{C})$ for any $p$. So take ${\alpha} \in P^pH^{n-1}(F,\mathbb{C})$. Using \eqref{C2}, it follows that $$\iota_H^*({\alpha} )\in P^pH^{n-1}(F_H,\mathbb{C})=F^pH^{n-1}(F_H,\mathbb{C}).$$ The last equality is due to the fact that the hyperplane section $V_H$ being smooth, the Hodge filtration $F^p$ and the pole order filtration $P^p$ coincide on $H^{n-1}(F_H,\mathbb{C})$, as seen in Example \ref{exsmooth}. We conclude using \eqref{C1}. The claim (2) follows in a similar way, by taking a generic $(n-2)$-codimensional linear section instead of the hyperplane $H$. \endproof \begin{rk} \label{rkF=P} (i) We thank Morihiko Saito for teaching us to be strict about the difference between strict compatibility and compatibility of morphisms of filtered objects in the above proof. 
This corrects a serious gap in our original presentation of the proof above. See also \cite[Remark 4.4]{Sa3} for a similar property in a general context. \noindent (ii) We do not know whether the morphism $\iota_H^*:H^{m}(F,\mathbb{C}) \to H^{m}(F_H,\mathbb{C})$ is strictly compatible with the pole order filtration $P^p$. Indeed, a morphism of filtered complexes, strictly compatible with the filtrations, does not induce in general a strictly compatible morphism when we pass to cohomology groups, see \cite[Section 1.10]{DS2}. If this strict compatibility holds for $\iota_H^*:H^{m}(F,\mathbb{C}) \to H^{m}(F_H,\mathbb{C})$, and if one knows the $P$-filtration on $H^*(F_H,\mathbb{C})$, then in order to determine it on $H^*(F,\mathbb{C})$, it is enough to determine the $P$-filtration on the top cohomology $H^n(F,\mathbb{C})$ and to identify the image of $H^{n-1}(F,\mathbb{C})$ inside $H^{n-1}(F_H,\mathbb{C})$ under $\iota^*_H$. \noindent (iii) It is known that the equality $F^s=P^s$ does not hold on $H^2(F,\mathbb{C})$, even when $F$ is the Milnor fiber of a line arrangement in $\mathbb{P}^2$, see for instance \cite[Remark 2.9.(ii)]{DStMFgen}. \end{rk} The following result is a major improvement of Theorem 1.2 in \cite{DStFor}. \begin{cor} \label{corcurve} For any curve $V:f=0$ in $\mathbb{P}^2$, in order to compute the corresponding Alexander polynomial $\Delta^1(V)$, it is enough to compute the dimensions $\dim E_2^{1,0}(f)_k$ for any $k=1,2,...,d$. More precisely, let $\lambda= \exp(-2\pi ik/d)$ and let $m(\lambda)$ be the multiplicity of $\lambda$ as a root of the Alexander polynomial $\Delta^1(V)$. Then one has $$m(\lambda)=\dim E_{2}^{1,0}(f)_k+\dim E_{2}^{1,0}(f)_{d-k},$$ for $1 \leq k<d$ and $m(1)=\dim E_{2}^{1,0}(f)_d$. \end{cor} \proof It is well known, e.g. one can use the proof of Proposition \ref{propFP} above, that $H^1(F,\mathbb{C})_{\ne 1}$ is a pure Hodge structure of weight 1.
For any $\lambda= \exp(-2\pi ik/d) \ne 1$, and with obvious notation, it follows that $$m(\lambda)=h^{1,0}(H^1(F,\mathbb{C})_{\lambda})+h^{0,1}(H^1(F,\mathbb{C})_{\lambda})=h^{1,0}(H^1(F,\mathbb{C})_{\lambda})+h^{1,0}(H^1(F,\mathbb{C})_{\overline \lambda}),$$ where ${\overline \lambda}$ denotes the complex conjugate of $\lambda$. On the other hand, we have $$h^{1,0}(H^1(F,\mathbb{C})_{\lambda})=\dim Gr_P^1\tilde H^{1}(F,\mathbb{C})_{\lambda }=\dim E_{\infty}^{1,0}(f)_k=\dim E_{2}^{1,0}(f)_k,$$ where the last equality is obvious. These equalities yield our claim for $\lambda \ne 1$. The claim for $\lambda= 1$ follows from the fact that $H^{1}(F,\mathbb{C})_1=H^{1}(M,\mathbb{C})$ is a pure Hodge structure of weight $(1,1)$. \endproof \begin{rk} \label{rkSpF} Consider the $j$-th {\it Hodge spectrum} of the plane curve $V:f=0$, defined by \begin{equation} \label{sp1} Sp_F^j(f)=\sum_{{\alpha}>0}n^j_{F,f,{\alpha}}t^{{\alpha}} \end{equation} for $j=0,1$, where $$n^j_{F,f,{\alpha}}=\dim Gr_F^pH^{2-j}(F,\mathbb{C})_{\lambda}$$ with $p=[3-{\alpha}]$ and $\lambda=\exp(-2 \pi i \alpha)$. When $V$ is a line arrangement, then simple formulas for the difference $$Sp_F(f)=Sp_F^0(f)-Sp_F^1(f)$$ are given in \cite{BS}. It follows from Proposition \ref{propFP} and the proof of Corollary \ref{corcurve} that once we know the dimensions $\dim E_2^{1,0}(f)_k$ for any $k=1,2,...,d-1$, we can compute the spectrum $Sp_F^1(f)$, and hence via \cite{BS}, the spectrum $Sp_F^0(f)$ as well. This gives us precise information on the Hodge structure on $H^2(F,\mathbb{C})$ in this case. \end{rk} \section{Hyperplane arrangements, free locally quasi-homogeneous divisors, and Bernstein-Sato polynomials} In this section we explain why the limit pages of the spectral sequences discussed above enjoy a very useful property in the case of hyperplane arrangements.
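As an aside, the coefficients $\mu(a)$ appearing in the smooth case (Example \ref{exsmooth}) are straightforward to generate; the following small pure-Python sketch (our illustration, independent of the Singular code used for the actual computations in this paper) expands $((1-t^{d-1})/(1-t))^{n+1}=(1+t+\dots+t^{d-2})^{n+1}$ and lists its coefficients.

```python
def mu_coeffs(n, d):
    """Coefficients mu(a), a = 0, 1, ..., (n+1)(d-2), of the polynomial
    ((1 - t^(d-1)) / (1 - t))^(n+1) = (1 + t + ... + t^(d-2))^(n+1),
    attached to a smooth degree-d hypersurface in P^n (cf. Example exsmooth)."""
    coeffs = [1]                 # the constant polynomial 1
    base = [1] * (d - 1)         # 1 + t + ... + t^(d-2)
    for _ in range(n + 1):       # multiply by base, (n+1) times
        new = [0] * (len(coeffs) + len(base) - 1)
        for i, a in enumerate(coeffs):
            for j, b in enumerate(base):
                new[i + j] += a * b
        coeffs = new
    return coeffs

# Smooth quartic curve in P^2 (n = 2, d = 4): the coefficients are symmetric
# and sum to (d-1)^(n+1) = 27, the dimension of the reduced cohomology of
# the associated Milnor fiber.
print(mu_coeffs(2, 4))           # -> [1, 3, 6, 7, 6, 3, 1]
```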
Let $(D,0):g=0$ be a complex analytic hypersurface germ at the origin of $\mathbb{C}^{n+1}$ and denote by $b_{g,0}(s)$ the corresponding (local) Bernstein-Sato polynomial. If the analytic germ $g$ is given by a homogeneous polynomial, then one can define also the global Bernstein-Sato polynomial $b_g(s)$ of $g$, and one has an equality $b_g(s)=b_{g,0}(s)$, see for more details \cite{SaSurvey, Sa1, Sa2}. Let $R_{g,0}$ be the set of roots of the polynomial $b_{g,0}(-s)$. When $g$ is a homogeneous polynomial, we use the simpler notation $R_g=R_{g,0}$. In this section we consider the case when $g=f$ is the defining equation of a hypersurface $V$ in $\mathbb{P}^n$ and denote by $D=CV$ the affine cone over $V$, defined in $\mathbb{C}^{n+1}$ by the equation $f=0$. Recall M. Saito's fundamental results in \cite[Theorem 2]{Sa1} and \cite[Theorem 1]{Sa0}. \begin{thm} \label{thmBS} Let $V: f=0$ be a hypersurface in $\mathbb{P}^n$, let $\alpha >0$ be a rational number and set $\lambda=\exp(-2 \pi i \alpha)$. \begin{enumerate} \item If $Gr_P^p H^{n}(F,\mathbb{C})_{\lambda }\ne 0$, where $p=[n+1-{\alpha}]$, then $ {\alpha} \in R_f$. \item If the sets ${\alpha}+\mathbb{N}$ and $ \cup_{a \in D, a \ne 0}R_{f,a}$ are disjoint, then the converse of the assertion $(1)$ holds. \end{enumerate} \end{thm} \begin{thm} \label{thmBS2} Let ${\mathcal A}: f=0$ be an arrangement of $d$ hyperplanes in $\mathbb{P}^n$. Then $$\max R_f <2-\frac{1}{d}.$$ \end{thm} \begin{rk} \label{rkKEY2} As shown by Narv\'ez Macarro, the Bernstein-Sato polynomial $b_f$ of a free arrangement ${\mathcal A}:f=0$ satisfies the equality \begin{equation} \label{eqMN} b_f(s-2)=\pm b_f(-s), \end{equation} see \cite{NM}. This equality implies that the zero set $R_f \subset (0,2)$ is stable under the involution ${\alpha} \mapsto 2-{\alpha}$, including the multiplicities of the roots.
In fact, the equation \eqref{eqMN} holds for a larger class of free hypersurfaces, namely those of {\it linear Jacobian type}, see \cite{NM}. As noted in \cite[Corollary 4.3]{NM}, for any such free hypersurface one has $R_f \subset (0,2)$. A locally quasi-homogeneous divisor $V: f=0$ in $\mathbb{P}^n$ is of linear Jacobian type, see \cite[Theorem (1.6)]{NM}, and an example of such a surface is given below, see Example \ref{exD3}. Indeed, it is easy to see that the hypersurface $V$ and its affine cone $D=CV$ (regarded as a germ at the origin) are locally quasi-homogeneous divisors at the same time. Moreover, it is clear that $V$ (as a projective hypersurface, see \cite[Section 8.1]{DHA}) and its affine cone $D=CV$ (regarded as a germ at the origin) are free at the same time. \end{rk} {\it Assume from now on in this section, except in Definition \ref{def1} and in the final subsection \ref{planecurve}, that $V: f=0$ is either a hyperplane arrangement in $\mathbb{P}^n$, or a free locally quasi-homogeneous divisor in $\mathbb{P}^n$. Let $\delta_{k,d}$ be 1 if $k =d$ and 0 otherwise, and $\lambda= \exp(-2\pi i k/d)$.} \begin{cor} \label{corPfilt} With the above assumption, one has $$Gr_P^p H^{n}(F,\mathbb{C})= 0$$ for any $p \leq n-2$ and $Gr_P^{n-1} H^{n}(F,\mathbb{C})_1= 0$. In other words, for any $k=1,...,d$ one has $$P^{n-1+\delta_{k,d}}H^n(F,\mathbb{C})_{\lambda }=H^n(F,\mathbb{C})_{\lambda }.$$ In particular $E^{n-t,t}_{\infty}(f)_k=0$ for $t>1-\delta_{k,d}$. \end{cor} \proof Assume $Gr_P^p H^{n}(F,\mathbb{C})_{\eta } \ne 0$ for some $p$ and some $\eta=\exp(-2 \pi i \alpha)$ with $p \leq n+1 -{\alpha}<p+1$. Then Theorem \ref{thmBS} (1) implies that ${\alpha} \in R_f$ and hence using Theorem \ref{thmBS2} or Remark \ref{rkKEY2}, we get $2>{\alpha} >n-p.$ If $p\leq n-2$, or if $p=n-1$ and ${\alpha}$ is an integer, then we get a contradiction.
\endproof \begin{rk} \label{rkF=P2} If the morphism $\iota_H^*:H^{m}(F,\mathbb{C}) \to H^{m}(F_H,\mathbb{C})$ is strictly compatible with the pole order filtration $P^p$, then one has in addition the following property: \medskip $(\star)$ for a hyperplane arrangement, $E^{s,t}_{\infty}(f)_k=0$ for any $s\geq 0$ and $t\geq 2-\delta_{k,d}$. \medskip Indeed, the property $(\star)$ is equivalent to $P^{m-1+\delta_{k,d}}H^m(F,\mathbb{C})_{\lambda }=H^m(F,\mathbb{C})_{\lambda }$ for any $m$. It follows that in this situation the limit page $E^{s,t}_{\infty}(f)_k$ is not only contained in the first quadrant, but in fact the non-zero terms are situated on just two horizontal lines when $k \ne d$, namely the lines $t=0$ and $t=1$ (and, respectively, only on the line $t=0$ when $k=d$). \end{rk} If we assume Conjecture \ref{conj1} and proceed by induction, it remains to compute the dimensions of the terms $E^{n-q,q}_{2}(f)_k$ for $q\in \{0,1\}$ and $k=1,2,...,d$. It is this property that makes the computations possible in a reasonable amount of time. Since Conjecture \ref{conj1} is difficult to check in practice, we introduce the following notion, motivated by Corollary \ref{corPfilt}. \begin{definition} \label{def1} Let $k,m$ be positive integers satisfying $1 \leq k \leq d$ and $1 \leq m \leq n$. We say that the hypersurface $V:f=0$ is $(k,m)$-top-computable if $E^{n-t,t}_{\infty}(f)_k=0$ for $m-\delta_{k,d}<t\leq n-\delta_{k,d}$, and $$\sum_{t=0}^{m-\delta_{k,d}}\dim E^{n-t,t}_{2}(f)_k =\dim H^n(F,\mathbb{C})_{\lambda }.$$ \end{definition} The vanishing $E^{0,n}_{\infty}(f)_d=0$, which is essential for this definition, follows from \cite[Proposition 5.2]{BS0}. As an example, a smooth hypersurface $V:f=0$ in $\mathbb{P}^n$ is $(k,n)$-top-computable for any $k$, but not $(k,n-1)$-top-computable. For any hypersurface $V:f=0$ and any $t$, one has $\dim E^{n-t,t}_{2}(f)_k \geq \dim E^{n-t,t}_{\infty}(f)_k $.
Hence, if $V$ is $(k,m)$-top-computable then $ E^{n-t,t}_{2}(f)_k = E^{n-t,t}_{\infty}(f)_k $ for $0 \leq t \leq m-\delta_{k,d}$. In particular, the information on the $P$-filtration on $H^n(F,\mathbb{C})$ given by the second term $E_2$ is also complete in this case. For a hypersurface $V:f=0$ not covered by Corollary \ref{corPfilt}, the simplest way to check the vanishings $E^{n-t,t}_{\infty}(f)_k=0$ for $t>m-\delta_{k,d}$ is to use the vanishings in \eqref{limit} and to check by a direct computation whether $E^{n-t,t}_{2}(f)_k=0$ for $m-\delta_{k,d}<t \leq n-\delta_{k,d}$. Using this approach, irreducible, non-free surfaces in $\mathbb{P}^3$ that are still $(k,1)$-top-computable are displayed in Examples \ref{exGEN} and \ref{exGEN2}. \begin{rk} \label{rkprim1} The conditions in Definition \ref{def1} are easy to check as soon as we know $\dim H^n(F,\mathbb{C})_{\lambda }$ in the case of an arrangement ${\mathcal A}$ of $d$ hyperplanes. Note that the vanishings necessary for the $(k,1)$-top-computability hold by Corollary \ref{corPfilt}. Let $d'=d/e$ where $e=\gcd(d,k)$. Then $\lambda$ is a $d$-root of unity of order $d'$. If there is a hyperplane $H \in {\mathcal A}$, such that for any dense edge $X \subset H$, the number $n_X$ of hyperplanes in ${\mathcal A}$ containing $X$ is not a multiple of $d'$, then it is known that $H^m(F,\mathbb{C})_{\lambda }=0$ for $m<n$ and $\dim H^n(F,\mathbb{C})_{\lambda }=|\chi(M({\mathcal A}))|$, see \cite[Theorem 6.4.18]{D2}, \cite{Lib}. In this case we say that $k$ is {\it non-resonant with respect to the arrangement} ${\mathcal A}$. Hence, the answer given by the second page of the spectral sequence is correct as soon as we know that $k$ is non-resonant and we have the equality $$\sum_{q=0}^{1-\delta_{k,d}}\dim E^{n-q,q}_{2}(f)_k =|\chi(M)|.$$ The new information given in such a case by the second page of the spectral sequence concerns the pole order filtration on $H^{n}(F,\mathbb{C})_{\lambda}$.
On the other hand, the fact that this equality holds in all the computed cases gives strong support for Conjecture \ref{conj2} below. \end{rk} The following is the main conjecture put forth in our paper. \begin{conj} \label{conj2} For any arrangement ${\mathcal A}: f=0$ of $d$ hyperplanes in $\mathbb{P}^n$, and for any free locally quasi-homogeneous divisor $V:f=0$ of degree $d$ in $\mathbb{P}^n$, the defining polynomial $f$ is $(k,1)$-top-computable for any positive integer $k$ satisfying $1 \leq k \leq d$. \end{conj} One has the following partial result, saying that Conjecture \ref{conj2} holds for $k=d$. \begin{thm} \label{thmlog1} With the above assumption on $V: f=0$, one has $$\dim E^{n,0}_{2}(f)_d = \dim E^{n,0}_{\infty}(f)_d =\dim H^n(F,\mathbb{C})_{1} =\dim H^n(M,\mathbb{C}).$$ \end{thm} \proof Let us denote as above by $D$ the affine cone in $X=\mathbb{C}^{n+1}$ over $V$, and let $\Omega^*(*D)$ denote the de Rham complex of rational differential forms on $X$ with poles of arbitrary orders along the hypersurface $D$. Consider the subcomplex $\Omega^*(D)$ of differential forms in $\Omega^*(*D)$ with logarithmic poles along the hypersurface $D$. A form $\omega \in \Omega^p(D)$ satisfies, by definition, $$ f \omega \in \Omega^p \text{ and } f \dd \omega \in \Omega^{p+1},$$ or, equivalently, $\omega=\eta/f$ with $\eta \in \Omega^p$ and $\dd f \wedge \eta$ divisible by $f$. It is clear that one has a natural identification $$\Omega^{n+1}(D)_0=H^{n+1}(K_f^*)_d=E^{n,0}_{1}(f)_d,$$ given by $\eta/f \mapsto \eta$, where the grading on the $S$-module $\Omega^p(D)$ is the usual one, i.e. $|\eta'/f|=|\eta'|-d$, for any $\eta' \in \Omega^p$. Next, the homogeneous component $\Omega^{n}(D)_0$ can be identified via the same map with the direct sum $$ S_{d-n-1}\cdot \omega_n \oplus Syz^n_d$$ where $$\omega_n=\sum_{i=0}^n(-1)^ix_i \dd x_0 \wedge \dd x_1 \wedge ... \wedge \widehat {\dd x_i} \wedge ...
\wedge \dd x_n,$$ and $$Syz^n_d=\ker \left( \dd f \wedge : \Omega^{n}_d \to \Omega^{n+1}_{2d} \right)=H^{n}(K_f^*)_d=E^{n-1,0}_{1}(f)_d.$$ If $h \in S_{m}$, a direct computation shows that $$d\left( \frac{h\cdot \omega_n}{f}\right)=(m+n+1-d) \frac{h\cdot \omega'_n}{f} ,$$ where $\omega'_n=\dd x_0 \wedge \dd x_1 \wedge ... \wedge \dd x_n.$ In particular, the image of the differential $d: \Omega^{n}(D)_0 \to \Omega^{n+1}(D)_0$ coincides with the image of the differential $d_1: E^{n-1,0}_{1}(f)_d \to E^{n,0}_{1}(f)_d$, and all the other differentials $d: \Omega^{n}(D)_q \to \Omega^{n+1}(D)_q$, for $q \ne 0$, are surjective. It follows that \begin{equation} \label{eqK1} \dim E^{n,0}_{2}(f)_d=\dim H^{n+1}(\Omega^*(D)_0)=\dim H^{n+1}(\Omega^*(D)). \end{equation} On the other hand, we clearly have $\dim H^n(M,\mathbb{C})= \dim H^{n+1}(X \setminus D, \mathbb{C})$, see for instance \cite[Prop. 6.4.1]{D2}. It remains to show that $\dim H^{n+1}(\Omega^*(D))=\dim H^{n+1}(X \setminus D, \mathbb{C})$. When $(D,0)$ is a free locally quasi-homogeneous divisor, this follows from \cite{CMN}. When $D$ is a hyperplane arrangement, one has to use \cite[Proposition 6.1]{WY}. \endproof \subsection{The arbitrary hypersurface case} \label{planecurve} The algorithm presented below can be applied to any hypersurface $V:f=0$ in $\mathbb{P}^n$ to compute the terms $E_2^{n-q,q}(f)_k$ of the second page of the above spectral sequences. However, in the general case $Q=qd+k$ takes values up to $(n+1)d$ and not only $2d-1$, which dramatically increases the computation time. Once the second page is computed, it is a difficult question to decide whether \begin{equation} \label{eqHappy} E_2^{n-q,q}(f)_k=E_{\infty}^{n-q,q}(f)_k, \end{equation} and hence whether we have obtained the correct results. In some cases, one can proceed as follows.
Using \eqref{limit}, we see that \begin{equation} \label{eqHappy2} \dim H^{n}(F,\mathbb{C})_{\lambda} \leq \sum_{q=0,...,n}\dim E_2^{n-q,q}(f)_k \end{equation} where $\lambda=\exp(-2 \pi i k/d)$, and equality holds if and only if one has the equality \eqref{eqHappy} for any $q$. Two examples of such a computation are given below in Examples \ref{exGEN} and \ref{exGEN2}. \section{The algorithm} Consider the graded $S$-submodule $AR(f) \subset S^{n+1}$ of {\it all relations} involving the derivatives of $f$, namely $$r=(r_0,r_1,...,r_n) \in AR(f)_q$$ if and only if \begin{equation} \label{syz1} r_0f_0+r_1f_1+ ... +r_nf_n=0 \end{equation} and the polynomials $r_0,r_1,...,r_n$ are in $S_q$. Since $S$ is a noetherian ring, the graded $S$-module $AR(f)$ admits a (minimal) system of generators $r^{(j)}$ of Jacobian syzygies, where $j=1,...,g$. Assume that $$r^{(j)}=(r^{(j)}_0,r^{(j)}_1,...,r^{(j)}_n)$$ for $j=1,...,g$ and let $d_j=\deg r^{(j)}_m$, for any $m\in [0,n]$ with $r^{(j)}_m \ne 0$. Assume moreover that $$d_1 \leq d_2 \leq ... \leq d_g.$$ Such a system of generators can be determined using the software SINGULAR \cite{Sing} or CoCoA \cite{Co}, see Remark \ref{rkKEY1}. To each syzygy $r=(r_0,r_1,...,r_n) \in AR(f)_q$ we can associate an $n$-differential form in $\Omega^n_{q+n}$ by the formula \begin{equation} \label{form1} \omega(r)=\sum_{i=0}^n(-1)^ir_i \dd x_0 \wedge \dd x_1 \wedge ... \wedge \widehat {\dd x_i} \wedge ... \wedge \dd x_n. \end{equation} Then equation \eqref{syz1} is equivalent to $\dd f \wedge \omega(r)=0$ and \begin{equation} \label{form2} \dd \omega(r)=(\sum_{i=0}^n(r_i)_i )\dd x_0 \wedge \dd x_1 \wedge ... \wedge \dd x_n, \end{equation} where $(r_i)_i$ denotes the partial derivative of $r_i$ with respect to $x_i$ for $i=0,1,...,n$. It follows that the dimension of $E_2^{n-q,q}(f)_k$, which is the cokernel of the differential $$\dd_1:E_1^{n-1-q,q}(f)_k \to E_1^{n-q,q}(f)_k$$ can be computed as follows.
Consider the linear mapping \begin{equation} \label{fi4} \phi_Q:S_{Q-d_1-n} \times ...\times S_{Q-d_g-n}\times S^{n+1}_{Q-d-n} \to S_{Q-n-1}, \end{equation} given by $$((A_1,...,A_g),(B_0,..., B_n)) \mapsto \sum_{ 0 \leq i \leq n} ( (\sum_{1\leq j \leq g} A_jr^{(j)}_i)_i+B_if_i),$$ for $Q=qd+k$. This map can be described in a more compact way by using differential forms as follows. The formula \eqref{form1} implies that the map $\phi_Q$ is nothing but the map $$S_{Q-d_1-n} \times ...\times S_{Q-d_g-n}\times \Omega^{n}_{Q-d} \to \Omega^{n+1}_Q$$ given by $$((A_1,...,A_g), \eta) \mapsto \dd (\sum_{j=1}^{g}A_j\omega(r^{(j)}))+\dd f \wedge \eta.$$ Let $R_Q$ be the rank of this linear mapping, which can be computed using, for instance, the software SINGULAR. Then one clearly has \begin{equation} \label{E2dim} \dim E_2^{n-q,q}(f)_k=\dim S_{Q-n-1}-R_Q= {Q-1 \choose n}- R_Q, \end{equation} for any $q=0,...,n$. In the case of a hyperplane arrangement ${\mathcal A}:f=0$, or of a free divisor $V:f=0$ of linear Jacobian type, we can consider only the values $q \leq 1$ if we assume Conjecture \ref{conj2}, while $k \leq d$ by definition. It follows that it is enough to take $Q \leq 2d$ in this case. In fact, for a hyperplane arrangement, it is known that $F^nH^n(F,\mathbb{C})_1=H^n(F,\mathbb{C})_1=H^n(M({\mathcal A}), \mathbb{C})$, which implies that we need to consider only the values $Q \leq 2d-1$ in such a case. \begin{rk} \label{nonfreearr} If one prefers not to use the generating system of syzygies produced by computer software, one can proceed in the following more direct way, already considered by us in \cite{DStMFgen} and by Morihiko Saito in \cite{Sa4}. Let $V: f=0$ be a degree $d$ hypersurface in $\mathbb{P}^n$, and for each $Q=qd+k$ consider the linear map \begin{equation} \label{fi5} \phi^1_Q: \Omega^n_{Q-d} \to \Omega^{n+1}_{Q}, \ \ \ \eta \mapsto \dd f \wedge \eta.
\end{equation} Let $\kappa^1(Q)$ be the dimension of the kernel $K^1(Q)$ of this map. Clearly $\kappa^1(Q)=0$ for $Q<d+n$. Consider next the map \begin{equation} \label{fi6} \phi^2_Q: \Omega^n_{Q-d} \times \Omega^n_{Q} \to \Omega^{n+1}_{Q} \times \Omega^{n+1}_{Q+d}, \end{equation} given by $$(\eta_1,\eta_2) \mapsto (\dd f \wedge \eta_1 + \dd (\eta_2), \dd f \wedge \eta_2).$$ Let $\kappa^2(Q)$ be the dimension of the kernel $K^2(Q)$ of this map. Note that one has $$K^2(Q) \subset \Omega^n_{Q-d} \times K^1(Q+d),$$ and hence the dimension of the vector space $\phi^2_Q(\Omega^n_{Q-d} \times K^1(Q+d))$ is given by \begin{equation} \label{fi7} R_Q= \dim (\Omega^n_{Q-d} \times K^1(Q+d))-\kappa^2(Q)=\kappa^1(Q+d)-\kappa^2(Q)+ (n+1){Q-d \choose n}. \end{equation} The dimensions $\kappa^1(Q+d)$ and $\kappa^2(Q)$ can be computed using, for instance, the software SINGULAR. Then one clearly has the same formula as above, namely \begin{equation} \label{E2dim2} \dim E_2^{n-q,q}(f)_k=\dim S_{Q-n-1}-R_Q= {Q-1 \choose n}- R_Q. \end{equation} However, this approach seems to substantially increase both the necessary computing time and the necessary computer memory. We thank Morihiko Saito for telling us that the algorithm to compute the second page of the spectral sequence using the system of generators is better not only in the case of a free hypersurface, where we had already used it, but also in the general case. \end{rk} \begin{rk} \label{rkKEY1} (i) The command $syz(...)$ in the software SINGULAR does not always give a minimal set of generators for the graded $S$-module $AR(f)$. For the quartic surface discussed in Example \ref{exGEN}, it lists 6 generators for the order $rp$, 7 generators for the order $dp$ and 8 generators for the orders $lp$ and $Dp$. Here $rp =$ reverse lexicographical ordering, $dp =$ degree reverse lexicographical ordering, $lp =$ lexicographical ordering, and $Dp =$ degree lexicographical ordering.
Note also that in some of these listings, the generators are not given with the degrees in increasing order. To get the minimal set of generators one should use the command $minbase(syz( ... ))$. \medskip \noindent (ii) To get information on the complexity of computations in the algorithm, it would be useful to have an upper bound, in terms of the geometry of the hypersurface $V:f=0$, on $g$, the minimal number of generators for the $S$-module $AR(f)$, and a lower bound on the minimal degree $d_1$. Note that for a smooth hypersurface we have the $g=n(n+1)/2$ linearly independent Koszul generators of degree $d_1=d-1$. The equality $g=n(n+1)/2$ holds also for the plane arrangement in Example \ref{exNF} and the singular surface in Example \ref{exGEN} below. On the other hand, note that \cite[formula (1.3) and Example 4.3 (i)]{DStEdin} imply that for a hypersurface with just one $A_1$ singularity, one has $$ g = \frac{n(n+1)}{2}+1.$$ Moreover, \cite[Theorem 1.4]{DStEdin} implies that for a nodal curve $C$ in $\mathbb{P}^2$, one has $g \geq r-1$, where $r$ is the number of irreducible components of $C$. Lower bounds for $d_1$ are known for hypersurfaces having only isolated singularities, see \cite[Theorem 9]{DS2} for the case of weighted homogeneous singularities, and \cite[Theorem 2.4]{DAG} for arbitrary isolated singularities. \medskip \noindent (iii) Note that \cite[Theorem 1.5 and Example 4.3 (i)]{DStEdin} imply that for a hypersurface with just one $A_1$ singularity, one has $d_1=...=d_{g-1}=d-1$ and $d_g=n(d-2).$ \end{rk} \begin{rk} \label{freehyper} If $V:f=0$ is a free hypersurface, then $g=n$, as the $S$-module $AR(f)$ is free. One can verify that the basis given by SINGULAR is correct using Saito's criterion, i.e. the $(n+1)$-square matrix having as its first row $x_0,...,x_n$, and as its $(j+1)$-st row $(r^{(j)}_0,r^{(j)}_1,...,r^{(j)}_n)$ for $j=1,...,n$, should have as determinant a constant, non-zero multiple of $f$, see \cite{OT, Te, Yo}.
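As a toy illustration of Saito's criterion (a hypothetical Python/SymPy sketch, not the SINGULAR computation actually used here), take the normal crossing divisor $f=xyz$ in $\mathbb{P}^2$, which is free with two syzygy rows of degree $1$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x * y * z                              # normal crossing divisor, n = 2, d = 3
df = [sp.diff(f, v) for v in (x, y, z)]    # Jacobian ideal generators

# first row: the coordinates (Euler relation); next rows: a basis of AR(f)
M = sp.Matrix([
    [x,  y,  z],       # x*f_x + y*f_y + z*f_z = 3*f
    [x, -y,  0],       # x*f_x - y*f_y = 0
    [0,  y, -z],       # y*f_y - z*f_z = 0
])

# the non-Euler rows are genuine Jacobian syzygies, cf. (syz1)
for i in (1, 2):
    assert sp.expand(sum(M[i, j] * df[j] for j in range(3))) == 0

# Saito's criterion: det(M) must be a non-zero constant multiple of f
print(sp.cancel(M.det() / f))   # -> 3
```

Here $\det M=3f$, and the degrees of the two syzygy rows sum to $2=d-1$, in agreement with the relation between the exponents recalled below.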
In particular, in this case $d_1+d_2+...+d_n=d-1$. It is known that, in the case of a hyperplane arrangement ${\mathcal A}:f=0$, the exponents $d_j$ determine the Betti numbers of the complement $M({\mathcal A})$, see for instance \cite{OT}, \begin{equation} \label{poincare} \pi({\mathcal A},t):=\sum_{i=0}^n b_i(M({\mathcal A}))t^i=\prod_{j=1}^n(1+d_jt). \end{equation} \end{rk} \section{Examples} In this section we consider plane arrangements in $\mathbb{P}^3$, except in Examples \ref{exD3}, \ref{exGEN} and \ref{exGEN2} where irreducible quartic surfaces in $\mathbb{P}^3$ are considered, and we replace the coordinates $x_0,x_1,x_2,x_3$ by $x,y,z,w$. To state the results, we consider the pole order spectrum defined by \begin{equation} \label{sp1.5} Sp^0_P(f)=\sum_{{\alpha}>0}n_{P,f,{\alpha}}t^{{\alpha}} \end{equation} where $$n_{P,f,{\alpha}}=\dim Gr_P^pH^{3}(F,\mathbb{C})_{\lambda}$$ with $p=[4-{\alpha}]$ and $\lambda=\exp(-2 \pi i \alpha)$. In view of Corollary \ref{corPfilt} and assuming Conjecture \ref{conj2}, in the case of a plane arrangement the exponents ${\alpha}$ with possibly non-zero coefficients $n_{P,f,{\alpha}}$ are of the form \begin{equation} \label{sp2} {\alpha}= \frac{Q}{d} \text{ and } n_{P,f,{\alpha}}=\dim E_2^{n-q,q}(f)_k, \end{equation} where $Q=qd+k$ as above, with $q=0,1$ and $k=1,...,d$. Note that one has the equality $$b_n(F)=\sum_{{\alpha}>0}n_{P,f,{\alpha}}.$$ \begin{ex}[A family of free arrangements] \label{exFA} Consider the arrangement $${\mathcal A}(p,q): (x^p+y^p)(z^q+w^q)=0.$$ This arrangement is free with exponents $(1,p-1,q-1)$ and the monodromy operators $h^m: H^m(F,\mathbb{C}) \to H^m(F,\mathbb{C})$ can be easily computed using \cite[Theorem 1.4]{DNag} or \cite{Tapp}. For all the pairs $(p,q)$ we have tested, i.e. $2 \leq p,q \leq d=p+q \leq 12$, the algorithm described above gives the correct result. 
In other words, the corresponding arrangements ${\mathcal A}(p,q)$ are $(k,1)$-top-computable for all the integers $k$ with $1 \leq k \leq d$. For instance, for the arrangement ${\mathcal A}(4,8)$ we get the following spectrum \begin{equation} \label{spA48} Sp^0_P(f)=3t^{\frac{6}{12}} +10t^{\frac{9}{12}}+21t^{\frac{12}{12}}+12t^{\frac{15}{12}}+9t^{\frac{18}{12}}+2t^{\frac{21}{12}}. \end{equation} Note that this spectrum is not symmetric with respect to the monomial $21t=21t^{\frac{12}{12}}$, and an equality similar to that in Corollary \ref{corcurve} does not hold for the multiplicities of the roots of $$\Delta^3({\mathcal A}(4,8))=\Phi_1^{21}\cdot \Phi_2^{12}\cdot \Phi_4^{12}.$$ Here and in the sequel, $\Phi_j$ denotes the $j$-th cyclotomic polynomial. \end{ex} \begin{ex}[The braid arrangement $A_4$] \label{exBraid} The braid arrangement $A_4$ is defined in $\mathbb{C}^5$ by the equation $$\prod_{0\leq i<j \leq 4}(x_i-x_j)=0.$$ However, this arrangement is not essential. Using the coordinate change $x=x_1-x_0$, $y=x_2-x_0$, $z=x_3-x_0$, $w=x_4-x_0$, $u=x_0+x_1+x_2+x_3+x_4$, we see that the essential version of this arrangement is given in $\mathbb{C}^4$, corresponding to $u=0$, by the equation $${\mathcal A}: f=xyzw(x-y)(x-z)(x-w)(y-z)(y-w)(z-w)=0.$$ Regarded as an arrangement in $\mathbb{P}^3$, this arrangement is known to be free with exponents $(d_1,d_2,d_3)=(2,3,4)$. Running the algorithm described above and using the fact that \eqref{eqHappy2} is an equality in this case as implied by Settepanella's results in \cite[Table 2]{Se2}, we get that ${\mathcal A}$ is $(k,1)$-top-computable for any $k\in [1,10]$. 
In particular, we have \begin{equation} \label{spA5} Sp^0_P(f)=t^{\frac{4}{10}} +4t^{\frac{5}{10}}+5t^{\frac{6}{10}}+6t^{\frac{7}{10}}+6t^{\frac{8}{10}}+6t^{\frac{9}{10}}+24t^{\frac{10}{10}}+ \end{equation} $$+6t^{\frac{11}{10}}+6t^{\frac{12}{10}}+6t^{\frac{13}{10}}+5t^{\frac{14}{10}}+4t^{\frac{15}{10}}+t^{\frac{16}{10}}.$$ Then the formula for the spectrum clearly implies the following formula for the Alexander polynomial $\Delta^3({\mathcal A})$: \begin{equation} \label{AlexA5.3} \Delta^3({\mathcal A})=\Phi_1^{24}\cdot \Phi_2^{8}\cdot \Phi_5^{6}\cdot \Phi_{10}^{6}, \end{equation} which coincides of course with the formula given in \cite[Table 2]{Se2}. It is known that in this case $\Delta^1({\mathcal A})=\Phi_1^{9}$, see \cite{MP} or \cite[Table 2]{Se2}. Using the formula \eqref{Euler} and \eqref{poincare}, we get $\chi(M({\mathcal A}))= -6$ and it follows that $$\Delta^2({\mathcal A})=\Phi_1^{26}\cdot \Phi_2^{2}.$$ This coincides again with the formula given in \cite[Table 2]{Se2}. Any value $k\ne 5, 10$ is non-resonant with respect to the arrangement $A_4$, so for such a $k$, we can get the above results without using Settepanella's results in \cite[Table 2]{Se2}, as explained in Remark \ref{rkprim1}. \end{ex} \begin{ex}[The Coxeter arrangement $D_4$] \label{exD4} The arrangement $D_4$ is defined in $\mathbb{C}^4$ by the equation $${\mathcal A}: f=(x^2-y^2)(x^2-z^2)(x^2-w^2)(y^2-z^2)(y^2-w^2)(z^2-w^2)=0.$$ Regarded as an arrangement in $\mathbb{P}^3$, this arrangement is known to be free with exponents $(d_1,d_2,d_3)=(3,3,5)$, see \cite{OT}. 
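For a free arrangement, the exponents already determine the Betti numbers of the complement via \eqref{poincare}, and hence $\chi(M({\mathcal A}))=\pi({\mathcal A},-1)$. The following Python snippet (a convenience illustration added here, not part of the SINGULAR workflow used in the paper) reproduces the Euler characteristics quoted in these examples:

```python
from math import prod
import sympy as sp

t = sp.symbols('t')

def chi_complement(exponents):
    """Betti numbers of M(A) from (poincare), and chi(M(A)) = pi(A, -1)."""
    pi = sp.expand(prod(1 + d_j * t for d_j in exponents))
    betti = [pi.coeff(t, i) for i in range(len(exponents) + 1)]
    return betti, pi.subs(t, -1)

print(chi_complement((2, 3, 4)))   # braid A_4:   ([1, 9, 26, 24], -6)
print(chi_complement((3, 3, 5)))   # Coxeter D_4: ([1, 11, 39, 45], -16)
```

The values $-6$ and $-16$ agree with the Euler characteristics obtained from \eqref{Euler} and \eqref{poincare} in the text.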
Running the algorithm described above {\it and assuming Conjecture \ref{conj2} true}, we get \begin{equation} \label{spD4} Sp^0_P(f)=t^{\frac{4}{12}} +4t^{\frac{5}{12}}+10t^{\frac{6}{12}}+12t^{\frac{7}{12}}+23t^{\frac{8}{12}}+16t^{\frac{9}{12}}+20t^{\frac{10}{12}}+16t^{\frac{11}{12}}+45t^{\frac{12}{12}}+ \end{equation} $$+16t^{\frac{13}{12}}+20t^{\frac{14}{12}}+16t^{\frac{15}{12}}+23t^{\frac{16}{12}}+12t^{\frac{17}{12}}+10t^{\frac{18}{12}}+4t^{\frac{19}{12}}+t^{\frac{20}{12}}.$$ This formula for the spectrum clearly implies the following formula for the Alexander polynomial $\Delta^3({\mathcal A})$: \begin{equation} \label{AlexD4.3} \Delta^3({\mathcal A})=\Phi_1^{45}\cdot \Phi_2^{20}\cdot \Phi_3^{24}\cdot \Phi_4^{16} \cdot \Phi_{6}^{20}\cdot \Phi_{12}^{16}. \end{equation} It is known that in this case $\Delta^1({\mathcal A})=\Phi_1^{11}\Phi_3$, see \cite{MP}. Using the formula \eqref{Euler} and \eqref{poincare}, we get $\chi(M({\mathcal A}))= -16$ and it follows that $$\Delta^2({\mathcal A})=\Phi_1^{39}\cdot \Phi_2^{4} \cdot \Phi_3^{9} \cdot \Phi_6^{4}.$$ Any value $k \ne 2, 4, 6,12$ is non-resonant with respect to the arrangement $D_4$. \end{ex} \begin{ex}[The complex reflection arrangement ${\mathcal A}(3,3,4)$] \label{exG} The hyperplane arrangement ${\mathcal A}(3,3,4)$ is defined in $\mathbb{C}^4$ by the equation $${\mathcal A}: f=(x^3-y^3)(x^3-z^3)(x^3-w^3)(y^3-z^3)(y^3-w^3)(z^3-w^3)=0.$$ Regarded as an arrangement in $\mathbb{P}^3$, this arrangement is known to be free with exponents $(d_1,d_2,d_3)=(4,6,7)$, see \cite{OT}. 
Running the algorithm described above {\it and assuming Conjecture \ref{conj2} true}, we get \begin{equation} \label{spG} Sp^0_P(f)=t^{\frac{4}{18}} +4t^{\frac{5}{18}}+10t^{\frac{6}{18}}+19t^{\frac{7}{18}}+31t^{\frac{8}{18}}+46t^{\frac{9}{18}}+59t^{\frac{10}{18}}+71t^{\frac{11}{18}}+98t^{\frac{12}{18}}+ \end{equation} $$+86t^{\frac{13}{18}}+89t^{\frac{14}{18}}+92t^{\frac{15}{18}}+90t^{\frac{16}{18}}+90t^{\frac{17}{18}}+168t^{\frac{18}{18}}+90t^{\frac{19}{18}}+90t^{\frac{20}{18}}+92t^{\frac{21}{18}}+89t^{\frac{22}{18}}+$$ $$+86t^{\frac{23}{18}}+98t^{\frac{24}{18}}+71t^{\frac{25}{18}}+59t^{\frac{26}{18}}+46t^{\frac{27}{18}}+31t^{\frac{28}{18}}+19t^{\frac{29}{18}}+10t^{\frac{30}{18}}+4t^{\frac{31}{18}}+t^{\frac{32}{18}}.$$ This formula for the spectrum clearly implies the following formula for the Alexander polynomial $\Delta^3({\mathcal A})$: \begin{equation} \label{AlexG.3} \Delta^3({\mathcal A})=\Phi_1^{168}\cdot \Phi_2^{92}\cdot \Phi_3^{108}\cdot \Phi_6^{92} \cdot \Phi_{9}^{90}\cdot \Phi_{18}^{90}. \end{equation} It is known that in this case $\Delta^1({\mathcal A})=\Phi_1^{17}\Phi_3$, see \cite{MP}. In fact, a generic plane section ${\mathcal A}_H$ has 42 triple points and 27 nodes, and the result follows also from \cite{BDS}. Using the formula \eqref{Euler} and \eqref{poincare}, we get $\chi(M({\mathcal A}))= -90$ and it follows that $$\Delta^2({\mathcal A})=\Phi_1^{94}\cdot \Phi_2^{2} \cdot \Phi_3^{19} \cdot \Phi_6^{2}.$$ Any value $k \ne 3, 6, 9,18$ is non-resonant with respect to the arrangement ${\mathcal A}(3,3,4)$. \end{ex} \begin{ex}[The complex reflection arrangement ${\mathcal A}(2,1,4)$] \label{exA214} The hyperplane arrangement ${\mathcal A}(2,1,4)$ is defined in $\mathbb{C}^4$ by the equation $${\mathcal A}: f=xyzw(x^2-y^2)(x^2-z^2)(x^2-w^2)(y^2-z^2)(y^2-w^2)(z^2-w^2)=0.$$ Regarded as an arrangement in $\mathbb{P}^3$, this arrangement is known to be free with exponents $(d_1,d_2,d_3)=(3,5,7)$, see \cite{OT}. 
Running the algorithm described above {\it and assuming Conjecture \ref{conj2} true}, we get \begin{equation} \label{spA214} Sp^0_P(f)=t^{\frac{4}{16}} +4t^{\frac{5}{16}}+9t^{\frac{6}{16}}+16t^{\frac{7}{16}}+25t^{\frac{8}{16}}+32t^{\frac{9}{16}}+39t^{\frac{10}{16}}+44t^{\frac{11}{16}}+ \end{equation} $$+47t^{\frac{12}{16}}+48t^{\frac{13}{16}}+48t^{\frac{14}{16}}+48t^{\frac{15}{16}}+105t^{\frac{16}{16}}+48t^{\frac{17}{16}}+48t^{\frac{18}{16}}+48t^{\frac{19}{16}}+47t^{\frac{20}{16}}+$$ $$+44t^{\frac{21}{16}}+39t^{\frac{22}{16}}+32t^{\frac{23}{16}}+25t^{\frac{24}{16}}+16t^{\frac{25}{16}}+9t^{\frac{26}{16}}+4t^{\frac{27}{16}}+t^{\frac{28}{16}}.$$ This formula for the spectrum clearly implies the following formula for the Alexander polynomial $\Delta^3({\mathcal A})$: \begin{equation} \label{AlexA214.3} \Delta^3({\mathcal A})=\Phi_1^{105}\cdot \Phi_2^{50}\cdot \Phi_4^{48}\cdot \Phi_8^{48} \cdot \Phi_{16}^{48}. \end{equation} It is known that in this case $\Delta^1({\mathcal A})=\Phi_1^{15}$, see \cite{MP}. Using the formula \eqref{Euler} and \eqref{poincare}, we get $\chi(M({\mathcal A}))= -48$ and it follows that $$\Delta^2({\mathcal A})=\Phi_1^{73}\cdot \Phi_2^{2}.$$ Any value $k \ne 6$ is non-resonant with respect to the arrangement ${\mathcal A}(2,1,4)$. \end{ex} \begin{rk} \label{rkKEY4} Note that all the spectra coming from reflection groups in Examples \ref{exBraid}, \ref{exD4}, \ref{exG}, \ref{exA214} above enjoy a perfect symmetry with respect to the monomial containing $t$, i.e. the coefficients of $t^{{\alpha}}$ and $t^{2-{\alpha}}$ coincide for all $0<{\alpha} <1$. This symmetry might be related to the symmetry of the Bernstein-Sato polynomial $b_f$ of a free arrangement ${\mathcal A}:f=0$ recalled in \eqref{eqMN}. However, note that for some free arrangements, such as in Example \ref{exFA}, the pole order spectra are not symmetric, but such arrangements seem to be quite exceptional.
Indeed, most of the free arrangements we have tested so far enjoy the above spectrum symmetry property. \end{rk} \begin{ex}[A non-free arrangement] \label{exNF} Consider the arrangement ${\mathcal A}$ defined in $\mathbb{C}^4$ by the equation $${\mathcal A}: f=xyzw(x+y+z)(y-z+w)=0.$$ This arrangement is far from being free: the $S$-module $AR(f)$ has 6 generators, of degrees $2,2,3,3,3,3$ respectively. Running the algorithm described above {\it and assuming Conjecture \ref{conj2} true}, we get \begin{equation} \label{spGO} Sp^0_P(f)=t^{\frac{4}{6}} +2t^{\frac{5}{6}}+8t^{\frac{6}{6}}+2t^{\frac{7}{6}}+2t^{\frac{8}{6}}+2t^{\frac{9}{6}}+t^{\frac{10}{6}}. \end{equation} This formula for the spectrum clearly implies the following formula for the Alexander polynomial $\Delta^3({\mathcal A})$: \begin{equation} \label{AlexNF.3} \Delta^3({\mathcal A})=\Phi_1^{8}\cdot \Phi_2^{2}\cdot \Phi_3^{2}\cdot \Phi_6^{2}. \end{equation} A generic plane section of ${\mathcal A}$ is a nodal line arrangement, and hence in this case $\Delta^1({\mathcal A})=\Phi_1^{5}$. It is easy to compute $\chi(M({\mathcal A}))= -2$, and hence, using the formula \eqref{Euler}, we get $$\Delta^2({\mathcal A})=\Phi_1^{10}.$$ Note that the two points $A=(0:0:0:1)$ and $B=(1:0:0:0)$ both correspond to dense edges $X$ with $n_X=4$, and any hyperplane in ${\mathcal A}$ contains at least one of these two points. It follows that the value $k= 3$ is resonant with respect to the arrangement ${\mathcal A}$, i.e. the defining property of non-resonance in Remark \ref{rkprim1} is not satisfied, but still there is no contribution to $H^m(F,\mathbb{C})_{-1}$ for $m<3$. This fact suggests that Yoshinaga's results in \cite{Yo0} for real line arrangements might have a higher dimensional analogue.
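The passage from a pole order spectrum such as \eqref{spGO} to the Alexander polynomial \eqref{AlexNF.3} is purely mechanical. The following Python/SymPy sketch (an illustration added here, not the authors' code) sums the spectrum coefficients over $Q \equiv k \pmod d$ to obtain the multiplicity of the eigenvalue $\lambda=\exp(-2\pi i k/d)$, and then groups the eigenvalues by their order:

```python
from math import gcd, prod
from collections import defaultdict
import sympy as sp

d = 6
# coefficients of Sp^0_P(f) in (spGO), indexed by Q = d*alpha
spectrum = {4: 1, 5: 2, 6: 8, 7: 2, 8: 2, 9: 2, 10: 1}

# dim H^3(F, C)_lambda for lambda = exp(-2*pi*i*k/d): sum over Q = k (mod d)
mult = defaultdict(int)
for Q, n_Q in spectrum.items():
    mult[Q % d] += n_Q                 # k = d is stored as the residue 0

# lambda has order d' = d/gcd(d, k); eigenvalues of the same order must
# share a multiplicity for Delta^3 to be a product of cyclotomic factors
exponent = {}
for k, m in mult.items():
    order = d // gcd(d, k)             # gcd(d, 0) = d gives order 1 for k = d
    assert exponent.setdefault(order, m) == m

t = sp.symbols('t')
delta3 = sp.expand(prod(sp.cyclotomic_poly(o, t) ** e
                        for o, e in exponent.items()))
expected = sp.expand(sp.cyclotomic_poly(1, t)**8 * sp.cyclotomic_poly(2, t)**2
                     * sp.cyclotomic_poly(3, t)**2 * sp.cyclotomic_poly(6, t)**2)
print(delta3 == expected)   # -> True
```

Eigenvalues of the same order indeed occur with equal multiplicities here, so the characteristic polynomial of the monodromy collapses into the product of cyclotomic factors given in \eqref{AlexNF.3}, of total degree $18=b_3(F)$.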
\end{ex} \begin{ex}[A free discriminant surface] \label{exD3} Consider the surface in $\mathbb{P}^3$ given by $$V:f=y^2z^2-4xz^3-4y^3w+18xyzw-27x^2w^2=0.$$ Then $V$ is just the discriminant of cubic binary forms in $\mathbb{P}(\mathbb{C}[u,v]_3)=\mathbb{P}^3$, i.e. the set of cubic forms in $u,v$ with a multiple linear factor. It is known that $V$ is a free surface with exponents $d_1=d_2=d_3=1$ and that $V$ is homeomorphic to $\mathbb{P}^1 \times \mathbb{P}^1$, see \cite{DStFS}. Using the homogeneity under the obvious $G\ell_2(\mathbb{C})$-action on $\mathbb{P}(\mathbb{C}[u,v]_3)$, it is easy to see that $V$ is locally quasi-homogeneous. Running the algorithm described above, we get the following for the terms occurring in the inequality \eqref{eqHappy2}: \begin{equation} \label{spD3} E_2^{3-q,q}(f)_k=0 \text{ for } (q,k) \ne (0,4) \text{ and } \dim E_2^{3,0}(f)_4 =1. \end{equation} It follows that $b_3(F) \leq 1$. On the other hand, we have $$\chi(F)=4 \chi(M)=4(\chi(\mathbb{P}^3)-\chi(V))=4(4-4)=0.$$ Note that a generic plane section of $V$ is a quartic curve with 4 cusps, and hence $b_0(F)=1$ and $b_1(F)=0$, see for instance \cite[Proposition 4.4.8]{D1}. It follows that $$b_3(F)=b_2(F)+b_0(F) \geq 1.$$ Hence $b_3(F)=1$, and therefore $f$ is $(k,1)$-top-computable for any $k\in [1,4]$. The corresponding Alexander polynomials are $\Delta^3(V)=\Delta^0(V)=\Phi_1$ and $\Delta^2(V)=\Delta^1(V)=1$. \end{ex} \begin{ex}[A non-free irreducible surface] \label{exGEN} Consider the surface in $\mathbb{P}^3$ given by $$V:f=x^3z+x^2y^2+y^2w(y+w)=0,$$ and the corresponding Milnor fiber $F:f=1$ in $\mathbb{C}^4$. Let $H$ be the hyperplane in $\mathbb{C}^4$ given by $x=0$ and note that $F_0=F \cap H$ is given by $y^2w(y+w)=1$ in $\mathbb{C}^3$, with coordinates $y,z,w$. It follows that $F_0$ is a smooth surface, homotopically equivalent to the affine curve $F_0':y^2w(y+w)=1$ in $\mathbb{C}^2$, with coordinates $y,w$.
It is easy to see that the projective closure $C$ of $F_0'$ is a quartic irreducible curve, with a unique singular point of type $A_3$. It follows that $\chi(C)=2-(4-1)(4-2)+\mu(C)=-1$, see for instance \cite[Corollary 5.4.4]{D1}. Then $\chi(F_0')=\chi(C)-3=-4$, since $C$ has 3 points at infinity. It follows that $b_0(F_0)=1$, $b_1(F_0)=5$ and $b_j(F_0)=0$ for $j \geq 2$. On the other hand, the projection on the $x$-coordinate induces a locally trivial fibration $$ \mathbb{C}^2 \to F \setminus F_0 \to \mathbb{C}^*,$$ and hence $F \setminus F_0$ is homotopy equivalent to $\mathbb{C}^*$. The Gysin sequence in homology, $$\cdots \to H_k(F\setminus F_0) \to H_k(F) \to H_{k-2}(F_0) \to H_{k-1}(F\setminus F_0) \to \cdots,$$ see for instance \cite[Equation (2.2.13)]{D1}, yields the following Betti numbers for $F$: $b_0(F)=1$, $b_1(F)=b_2(F)=0$ and $b_3(F)=5$. Indeed, note that a generic plane section $V_P=V \cap P$ of $V$ is an irreducible curve in $P=\mathbb{P}^2$ having a point of multiplicity $3$. It follows by \cite[Corollary 4.3.8]{D1} that $$\pi_1(P \setminus V_P)=\pi_1(\mathbb{P}^3 \setminus V)=\mathbb{Z}/ d\mathbb{Z}.$$ Then \cite[Corollary 4.1.10]{D1} implies that $b_1(F)=0$. Running the algorithm described above, we get the following non-zero terms among those occurring in the inequality \eqref{eqHappy2}: \begin{equation} \label{ssGEN} \dim E_2^{2,1}(f)_k=1 \text{ for } k=1,2,3,4 \text{ and } \dim E_2^{3,0}(f)_4 =1. \end{equation} It follows that we get an equality in \eqref{eqHappy2}, which implies that $V:f=0$ is $(k,1)$-top-computable for all the integers $k$ with $1 \leq k \leq 4$. In particular, we have \begin{equation} \label{spGEN} Sp^0_P(f)=t^{\frac{4}{4}} +t^{\frac{5}{4}}+t^{\frac{6}{4}}+t^{\frac{7}{4}}+t^{\frac{8}{4}}. \end{equation} The corresponding Alexander polynomials are $\Delta^3(V)= \Phi_1^2\cdot \Phi_2\cdot \Phi_4$, $\Delta^0(V)=\Phi_1$ and $\Delta^2(V)=\Delta^1(V)=1$.
The value of the Alexander polynomial $\Delta^3(V)$ can also be obtained from the isomorphism $H_3(F) \to H_1(F_0)$ in the above Gysin sequence, using its functoriality, but this approach does not yield the spectrum $Sp^0_P(f)$. Note also that the surface in $\mathbb{P}^3$ given by $$V':f'=x^3z+x^2y^2+y^2w(y+w)+x^2w^2=f+x^2w^2=0,$$ has the same topological properties as $V:f=0$, but this time the inequality in \eqref{eqHappy2} is strict. \end{ex} \begin{ex}[Another non-free irreducible surface] \label{exGEN2} Consider the surface in $\mathbb{P}^3$ given by $$V:f=x^4+x^3z+yz^2w=0,$$ and the corresponding Milnor fiber $F:f=1$ in $\mathbb{C}^4$. Consider the hyperplanes $H_y:y=0$ and $H_z:z=0$ in $\mathbb{C}^4$ and set $F_y=F \cap H_y$, $F_z=F \cap H_z$, $F_0=F_y \cap F_z$ and $F'=F\setminus F_0$. It follows that $F_0$ is a smooth curve in $F$, homotopically equivalent to 4 points. The corresponding Gysin sequence in homology contains the sequence $$0= H_4(F) \to H_{0}(F_0) \to H_{3}(F') \to H_3(F) \to H_{-1}(F_0)=0,$$ which implies $b_3(F)=b_3(F')-4$. Next, $D'=(F_y \cup F_z) \setminus F_0$ is a smooth divisor in $F'$ and the projection $$p: F' \setminus D' \to (\mathbb{C}^*)^2, \ \ \ (x,y,z,w) \mapsto (y,z)$$ is a locally trivial fibration with contractible fibers, and hence it is a homotopy equivalence. Moreover, it is easy to see that $D'$ has five connected components, one homotopy equivalent to $\mathbb{C}$ minus 5 points, the other four homotopy equivalent to $\mathbb{C}^*$. It follows that $b_1(D')=9$. The corresponding Gysin sequence of the pair $(F',D')$ contains the sequence $$0= H_{3}(F' \setminus D') \to H_3(F') \to H_{1}(D') \to H_{2}(F' \setminus D')=\mathbb{Z},$$ which implies that $b_3(F') \in \{8,9\}$. Therefore we get $b_3(F) \in \{4,5\}$.
Running the algorithm described above, we get the following non-zero terms among those occurring in the inequality \eqref{eqHappy2}: \begin{equation} \label{ssGEN2} \dim E_2^{2,1}(f)_k=1 \text{ for } k=1,2,3 \text{ and } \dim E_2^{3,0}(f)_4 =1. \end{equation} The inequality \eqref{eqHappy2} implies $b_3(F)=4$, and hence $V:f=0$ is $(k,1)$-top-computable for all the integers $k$ with $1 \leq k \leq 4$. In particular, we have \begin{equation} \label{spGEN2} Sp^0_P(f)=t^{\frac{4}{4}} +t^{\frac{5}{4}}+t^{\frac{6}{4}}+t^{\frac{7}{4}}. \end{equation} Using \cite[Proposition 4.4.3]{D1}, we get, as above, $b_1(F)=0$. Consider the partition of $V$ given by $V=V_z \cup (V \setminus V_z)$, where $V_z=V \cap \{ z=0\}$. Then $V_z=\mathbb{P}^1$ and hence $\chi(V_z)=2$. Using the theory of tame polynomials, see for instance \cite{Br}, it follows that $\chi(V \setminus V_z)=2$, and hence $\chi(V)=\chi(V_z)+\chi(V \setminus V_z)=4$. It follows that $\chi(M)=\chi(\mathbb{P}^3)-\chi(V)=0$. This implies that the corresponding Alexander polynomials are $\Delta^3(V)= \Phi_1\cdot \Phi_2\cdot \Phi_4$, $\Delta^0(V)=\Phi_1$ and $\Delta^2(V)=\Phi_2\cdot \Phi_4$. In particular, $b_2(F)=3$. \end{ex}
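Finally, to make Remark \ref{rkKEY1} (ii) concrete in the simplest smooth case, the dimensions $\dim AR(f)_q$ can be computed degree by degree by solving the linear system \eqref{syz1} directly. The following Python/SymPy sketch (a toy illustration; the computations in this paper rely on SINGULAR for such syzygy calculations) treats the Fermat cubic $f=x^3+y^3+z^3$ in $\mathbb{P}^2$:

```python
from math import prod
from itertools import combinations_with_replacement
import sympy as sp

x, y, z = sp.symbols('x y z')
gens = (x, y, z)
f = x**3 + y**3 + z**3                    # smooth cubic, n = 2, d = 3
df = [sp.diff(f, v) for v in gens]

def monomial_basis(q):
    # all monomials of total degree q in x, y, z
    return [prod(c) for c in combinations_with_replacement(gens, q)]

def dim_AR(q):
    """dim AR(f)_q: triples (r0, r1, r2) of degree-q forms solving (syz1)."""
    mons = monomial_basis(q)
    a = sp.symbols(f'a0:{3 * len(mons)}')
    r = [sum(a[i * len(mons) + j] * m for j, m in enumerate(mons))
         for i in range(3)]
    eq = sp.expand(sum(ri * fi for ri, fi in zip(r, df)))
    # one linear condition on the a's per monomial of degree q + d - 1
    conds = list(sp.Poly(eq, *gens).as_dict().values())
    A, _ = sp.linear_eq_to_matrix(conds, list(a))
    return len(a) - A.rank()

dims = [dim_AR(q) for q in range(4)]
print(dims)                               # -> [0, 0, 3, 9]
```

The output $[0, 0, 3, 9]$ reflects the $g=n(n+1)/2=3$ Koszul generators appearing in degree $d-1=2$, together with their $S_1$-multiples in degree $3$.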
https://arxiv.org/abs/1809.10525
Packing of Circles on Square Flat Torus as Global Optimization of Mixed Integer Nonlinear problem
The article demonstrates a rather general approach to problems of discrete geometry: treat them as global optimization problems to be solved by a general-purpose solver implementing the branch-and-bound algorithm (B&B). This approach may be used for various types of problems, e.g. Tammes problems, Thomson problems, the search for the minimal potential energy of micro-clusters, etc. Here we consider the problem of the densest packing of equal circles in a special geometric object, the so-called square flat torus $\mathbb{R}^2/\mathbb{Z}^2$ with the induced metric. It is formulated as a Mixed-Integer Nonlinear Problem with linear and non-convex quadratic constraints. The open-source B&B-solver SCIP, this http URL, and its parallel implementation ParaSCIP, this http URL, have been used in computing experiments to find "very good" approximations of optimal arrangements. The main result is a confirmation of the conjecture on optimal packing for N=9 that was published in 2012 by O. Musin and A. Nikitenko. To do that, ParaSCIP took about 2000 CPU*hours (16 hours x 128 CPUs) of the cluster HPC4/HPC5, National Research Centre "Kurchatov Institute", this http URL
\section{Introduction} Densest packing problems appear in many areas of discrete geometry. Hereinafter one of these problems is considered: a kind of Tammes problem for the so-called square flat torus. A flat torus, in a nutshell, is the quotient space $\mathbb{R}^2/\mathbb{Z}^2$ with the metric induced by the \quot{ordinary} Euclidean metric (here $\mathbb{Z}^2$ is the integer lattice in $\mathbb{R}^2$). The problem of optimal packings of congruent circles into the flat torus has been studied in \cite{bib:2012arXiv1212.0649M,bib:musin2016optimal}. On the practical side, this problem is related to the problem of \quot{super-resolution of images} for aerial photography and space imagery. The problem has a very simple formulation: find an arrangement of $N$ points on the flat torus that maximizes the minimal distance between any pair of these points. The feature of the problem is the \quot{unusual} distance between points of the torus, which are treated as equivalence classes of the quotient space $\mathbb{R}^2/\mathbb{Z}^2$. Figure \ref{fig:ft_metric} illustrates the definition of the distance between points $\mathbf{x}$ and $\mathbf{y}$ on the square flat torus (the length of the red segment), i.e. we have the following formula: \begin{equation}\label{eq:ft_metric} \begin{array}{c} d(x,y)\doteq \sqrt{\LRB{\min\SET{\MOD{\mathbf{x}_{1}{-}\mathbf{y}_{1}},1{-}\MOD{\mathbf{x}_{1}{-}\mathbf{y}_{1}} } }^2 + \LRB{\min\SET{\MOD{\mathbf{x}_{2}{-}\mathbf{y}_{2}},1{-}\MOD{\mathbf{x}_{2}{-}\mathbf{y}_{2}} } }^2}. 
\end{array} \end{equation} \begin{figure}[h] \centerline{\includegraphics[scale=0.16]{ft_metric.png}} \caption{Distance between $\mathbf{x}$ and $\mathbf{y}$ on the square flat torus} \label{fig:ft_metric} \end{figure} Oleg Musin and Anton Nikitenko have studied this problem for $N{=}\SET{2,3,4,5,6,7,8,9}$ and present their results in the articles \cite{bib:2012arXiv1212.0649M,bib:musin2016optimal}. Their proof of optimality is based on a computer enumeration of irreducible contact graphs corresponding to potentially optimal arrangements of points on the flat torus. They have proved optimality of packings for $N$ up to 8 and stated a conjecture on the optimal arrangement for $N{=}9$. The approach presented hereinafter is based on global optimization by branch-and-bound solvers. The packing problem is formulated as a mixed-integer nonlinear programming problem (MINLP) with binary variables and non-convex quadratic constraints. This approach is not new and has been widely used in studies of packing problems, see e.g. \cite{bib:castillo2008solving}. We paid attention to the flat torus packing problem in our experiments on global and discrete optimization in distributed computing environments \cite{bib:voloshinov2017implementation,bib:smirnov2017concurrent,bib:smirnov2018dd}. The main advantage of the combinatorial-geometry approach is an explicit, analytic expression for the optimal \quot{max-min} distance. The main advantage of global optimization is the ability to use general-purpose branch-and-bound solvers, including parallel implementations for high-performance clusters. \paragraph{Paper Organisation.} This paper is organised as follows. The formulation of the flat torus packing problem as a mixed-integer nonlinear (bilinear non-convex) mathematical programming problem is given in Section \ref{sec:formulation}. 
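For a quick numerical sanity check, the toroidal distance (\ref{eq:ft_metric}) is easy to evaluate directly. A minimal Python sketch (the helper name \texttt{torus\_dist} is ours, not taken from the paper's code):

```python
import math

def torus_dist(x, y):
    """Distance on the square flat torus R^2/Z^2, formula (eq:ft_metric):
    in each coordinate take the shorter of the two arcs around the circle."""
    return math.sqrt(sum(min(abs(a - b), 1 - abs(a - b)) ** 2
                         for a, b in zip(x, y)))

# going straight gives |0.9 - 0.1| = 0.8, but wrapping around gives 0.2
assert abs(torus_dist((0.9, 0.0), (0.1, 0.0)) - 0.2) < 1e-12
```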
Section \ref{sec:experimentsScip} presents the results of computing experiments for $N{=}\SET{4,5,6,7,8}$, including a brief description of the SCIP solver \cite{bib:GleixnerBastubbeEifleretal.2018} that was used for global optimization. The next Section \ref{sec:experimentsParaScip} shows results for $N{=}\SET{8,9}$ obtained by the ParaSCIP solver \cite{bib:shinano2011parascip}, which is a parallel implementation of SCIP based on MPI. Some details of the ParaSCIP compilation on the {HPC4/HPC5}\xspace cluster of NRC ``Kurchatov Institute'' are provided as well. Finally, we present the solution found by ParaSCIP for $N{=}9$, which confirmed the conjecture made in \cite{bib:2012arXiv1212.0649M, bib:musin2016optimal}. The Conclusion Section \ref{sec:conclusion} is followed by Acknowledgements. \section{Formulation as global optimization MINLP} \label{sec:formulation} Let $\IJ$ be the set of unordered pairs of points' indices: $\IJ\doteq\SET{(i,j): 1{{\leqslant}}i{<}j{{\leqslant}}N}$. The problem may be formulated as follows (maximize the minimum of the squared pairwise distances (\ref{eq:ft_metric})): \begin{equation}\label{eq:tftgo} \begin{array}{c} D\to \max\limits_{x_{ik}}:\\ D \wLE \sum\limits_{k{=}1{:}2}\LRB{\min\SET{\MOD{x_{ik}{-}x_{jk}},1{-}\MOD{x_{ik}{-}x_{jk}} } }^2~~\LRB{(i,j){\in}\IJ},\\ 0{\leqslant}x_{ik}{\leqslant} 1 \LRB{k{=}1{:}2,~i{=}1{:}N}. \end{array} \end{equation} Formulation (\ref{eq:tftgo}) has non-differentiable functions in its constraints. 
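The objective of (\ref{eq:tftgo}), i.e. the minimum of the squared pairwise toroidal distances, can be evaluated directly for any candidate arrangement. A small Python sketch (our illustration, not part of the solver pipeline); for instance, the regular $2{\times}2$ grid for $N{=}4$ yields $D{=}1/4$:

```python
from itertools import combinations

def sq_dist(p, q):
    # squared flat-torus distance, as inside the constraints of (eq:tftgo)
    return sum(min(abs(a - b), 1 - abs(a - b)) ** 2 for a, b in zip(p, q))

def objective_D(points):
    # D = minimal squared pairwise distance over all pairs (i, j) in IJ
    return min(sq_dist(p, q) for p, q in combinations(points, 2))

grid4 = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]  # 2x2 grid, N = 4
assert objective_D(grid4) == 0.25
```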
Let us avoid the non-smoothness at the expense of introducing auxiliary continuous and binary variables:\footnote{There is a body of literature on various (similar to each other) \quot{tricks} to perform such conversions of non-smooth problems to MILP or MINLP; it seems that the article \cite{bib:dantzig1960significance} was one of the first.} \begin{equation}\label{eq:auxvar} \begin{array}{l} y_{ijk}\doteq\min\SET{\MOD{x_{ik}{-}x_{jk}},1{-}\MOD{x_{ik}{-}x_{jk}} },~\LRB{k{=}1{:}2,~(i,j){\in} \IJ},\\ z_{ijk} \doteq {-}\MOD{x_{ik}{-}x_{jk}} = \min\SET{x_{jk}{-}x_{ik}, x_{ik}{-}x_{jk}}~\LRB{k{=}1{:}2,~(i,j){\in} \IJ}, \\ \eta_{ijk}{\in}\SET{0,1}, ~ \zeta_{ijk}{\in}\SET{0,1} ~\LRB{k{=}1{:}2,~(i,j){\in} \IJ}. \end{array} \end{equation} Take the first equation of (\ref{eq:auxvar}). It is equivalent to the following system of inequalities (${k{=}1{:}2,~(i,j){\in} \IJ}$): \begin{equation}\label{eq:yijk} \begin{array}{l} y_{ijk} \wLE \MOD{x_{ik}{-}x_{jk}},\\ y_{ijk} \wLE 1{-}\MOD{x_{ik}{-}x_{jk}},\\ y_{ijk} \wGE \MOD{x_{ik}{-}x_{jk}} - \eta_{ijk},\\ y_{ijk} \wGE 1 - \MOD{x_{ik}{-}x_{jk}} - 1 + \eta_{ijk} = {-}\MOD{x_{ik}{-}x_{jk}} + \eta_{ijk}. \end{array} \end{equation} \noindent Equivalence means that $y_{ijk}, x_{ik}, x_{jk}$ satisfy the first equation of (\ref{eq:auxvar}) {\bf iff}~ there exists some binary $\eta_{ijk}$ that satisfies (\ref{eq:yijk}) with the same $y_{ijk}, x_{ik}, x_{jk}$. An easy proof follows from the observation that 1 is the maximal difference between the functions $\MOD{\Delta_{ijk}}$ and $1{-}\MOD{\Delta_{ijk}}$ on the interval $\MOD{\Delta_{ijk}}{\in}[0,1]$ (the inclusion $\MOD{\Delta_{ijk}}{\in}[0,1]$ follows from the inclusions $x_{ik}{\in}[0,1]$ and $x_{jk}{\in}[0,1]$), see Figure~\ref{fig:yijk}. \begin{figure}[h] \centerline{\includegraphics[scale=0.7]{flatTorus_yijk_1.png}} \caption{Illustration for the system of linear inequalities (\ref{eq:yijk})} \label{fig:yijk} \end{figure} We continue the transformation of (\ref{eq:yijk}). 
Note that the second and the third inequalities are equivalent to the following: \begin{equation}\label{eq:yijk23} \begin{array}{l} x_{ik}{-}x_{jk}\wLE 1{-}y_{ijk}, ~x_{jk}{-}x_{ik}\wLE 1{-}y_{ijk},\\ x_{ik}{-}x_{jk}\wLE y_{ijk}{+}\eta_{ijk}, ~x_{jk}{-}x_{ik}\wLE y_{ijk}{+}\eta_{ijk},\\ \mbox{or, as two-sided inequalities, }\\ {-}y_{ijk}{-}\eta_{ijk}\wLE x_{ik}{-}x_{jk}\wLE 1{-}y_{ijk},\\ {-}1{+}y_{ijk}\wLE x_{ik}{-}x_{jk}\wLE y_{ijk}{+}\eta_{ijk}. \end{array} \end{equation} Let us use the definition of $z_{ijk}$ in (\ref{eq:auxvar}) to transform the first and the last lines of (\ref{eq:yijk}): \begin{equation}\label{eq:yijk14} \begin{array}{l} z_{ijk}\wLE {-}y_{ijk},\\ y_{ijk}\wGE z_{ijk}{+}\eta_{ijk},\\ \mbox{or, as a two-sided inequality, }\\ z_{ijk}{+}\eta_{ijk} \wLE y_{ijk} \wLE {-}z_{ijk}. \end{array} \end{equation} Note that the definition of $z_{ijk}$ is equivalent to the following system of linear inequalities: \begin{equation}\label{eq:zijk} \begin{array}{l} z_{ijk}\wLE x_{ik}{-}x_{jk}, ~z_{ijk}\wLE x_{jk}{-}x_{ik},\\ z_{ijk}\wGE x_{ik}{-}x_{jk}{-}2\zeta_{ijk}, ~z_{ijk}\wGE x_{jk}{-}x_{ik}{-}2\LRB{1{-}\zeta_{ijk}},\\ \mbox{or, as two-sided inequalities, }\\ z_{ijk}\wLE x_{ik}{-}x_{jk} \wLE z_{ijk} {+} 2\zeta_{ijk},\\ {-}z_{ijk}{-}2\LRB{1{-}\zeta_{ijk}}\wLE x_{ik}{-}x_{jk} \wLE {-}z_{ijk}. \end{array} \end{equation} \noindent The proof of this equivalence is the same as the one mentioned after system (\ref{eq:yijk}), considering that 2 is the maximal difference between the functions $\Delta$ and ${-}\Delta$ on the interval $\Delta{\in} [-1,1]$, see Figure~\ref{fig:zijk}. 
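The equivalence of the first equation of (\ref{eq:auxvar}) with system (\ref{eq:yijk}) can also be verified by brute force over a grid of values of $\Delta_{ijk}=x_{ik}{-}x_{jk}$: for each $\Delta$, the union over $\eta\in\SET{0,1}$ of the feasible $y$-values collapses to the single point $\min\SET{\MOD{\Delta},1{-}\MOD{\Delta}}$. A short Python check (our sketch):

```python
def feasible_y(delta, eta):
    """Feasible interval for y in system (eq:yijk): the first two
    inequalities bound y from above, the last two from below."""
    d = abs(delta)
    lo = max(d - eta, eta - d)
    hi = min(d, 1.0 - d)
    return (lo, hi) if lo <= hi + 1e-12 else None

for step in range(-100, 101):          # Delta on a grid in [-1, 1]
    delta = step / 100
    ys = {round(v, 9)
          for eta in (0, 1)
          for iv in [feasible_y(delta, eta)] if iv is not None
          for v in iv}
    # the system pins y to exactly min(|Delta|, 1 - |Delta|)
    assert ys == {round(min(abs(delta), 1 - abs(delta)), 9)}
```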
\begin{figure}[h] \centerline{\includegraphics[scale=0.7]{flatTorus_zijk_1.png}} \caption{Illustration for the system of linear inequalities (\ref{eq:zijk})} \label{fig:zijk} \end{figure} Finally, from the definitions (\ref{eq:auxvar}) and relations (\ref{eq:yijk})--(\ref{eq:zijk}) it follows that problem (\ref{eq:tftgo}) is equivalent to the following mixed-integer non-linear (and non-convex) problem with quadratic constraints ($k{=}1{:}2$, $i{=}1{:}N$, $(i,j){\in} \IJ$): \begin{equation}\label{eq:minlp} \begin{array}{l} D\to \max ({\mbox{with variables}~x_{ik},y_{ijk},z_{ijk},\eta_{ijk},\zeta_{ijk}}), \mbox{s.t.}:\\ D \wLE \sum\limits_{k{=}1{:}2}y_{ijk}^2,\\ {-}y_{ijk}{-}\eta_{ijk}\wLE x_{ik}{-}x_{jk}\wLE 1{-}y_{ijk}, \\ {-}1{+}y_{ijk}\wLE x_{ik}{-}x_{jk}\wLE y_{ijk}{+}\eta_{ijk}, \\ z_{ijk}{+}\eta_{ijk} \wLE y_{ijk} \wLE {-}z_{ijk}, \\ z_{ijk}\wLE x_{ik}{-}x_{jk} \wLE z_{ijk} {+} 2\zeta_{ijk}, \\ {-}z_{ijk}{-}2\LRB{1{-}\zeta_{ijk}}\wLE x_{ik}{-}x_{jk} \wLE {-}z_{ijk}, \\ 0{\leqslant}x_{ik}{\leqslant} 1, ~y_{ijk}{\in}\mathbb{R}, ~z_{ijk}{\in}\mathbb{R}, ~\eta_{ijk}{\in}\SET{0,1}, ~\zeta_{ijk}{\in}\SET{0,1}. \end{array} \end{equation} \noindent The problem has $2N^2$ continuous variables $x_{ik}, y_{ijk}, z_{ijk}$ and $2N(N{-}1)$ binary variables $\eta_{ijk}, \zeta_{ijk}$. Apart from the last line, the problem has $5N(N{-}1)$ linear constraints with continuous variables only, just as many linear mixed-integer constraints with continuous and binary variables, and $\frac{N(N-1)}{2}$ quadratic non-convex constraints with continuous variables. 
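The variable and constraint counts stated above are easy to double-check by constructing the index sets programmatically. A Python sketch (ours); each of the five two-sided constraint lines of (\ref{eq:minlp}) splits into one purely continuous and one mixed-integer one-sided inequality:

```python
def model_size(N):
    """Count the variables and constraints of (eq:minlp) by construction."""
    pairs = [(i, j) for i in range(1, N + 1) for j in range(i + 1, N + 1)]
    cont = 2 * N                  # the x_{ik}
    binary = lin_cont = lin_mixed = quad = 0
    for _ in pairs:
        quad += 1                 # D <= y_{ij1}^2 + y_{ij2}^2
        for _k in (1, 2):
            cont += 2             # y_{ijk}, z_{ijk}
            binary += 2           # eta_{ijk}, zeta_{ijk}
            lin_cont += 5         # continuous sides of the five lines
            lin_mixed += 5        # sides involving eta or zeta
    return cont, binary, lin_cont, lin_mixed, quad

for N in (4, 7, 9):
    c, b, lc, lm, q = model_size(N)
    assert (c, b, lc, lm, q) == (2 * N * N, 2 * N * (N - 1),
                                 5 * N * (N - 1), 5 * N * (N - 1),
                                 N * (N - 1) // 2)
```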
In addition to the above constraints, some auxiliary constraints may be added to reduce the number of redundant solutions that might be obtained by translations along the \quot{OX} and \quot{OY} axes, renumbering of points, mirror images, etc.: \begin{equation} \begin{array}{l} x_{11}{=}0.5, ~x_{12}{=}0~(\mbox{the first point is fixed to $(0.5,0)$}),\\ x_{(i{+}1)2}\wLE x_{i2}~(1\wLE i\wLE N{-}1)~(\mbox{ordering of the 2nd coordinates}),\\ x_{21}\wLE x_{11}. \end{array} \label{eq:auxcons} \end{equation} The simplest additional constraint, in the last line of (\ref{eq:auxcons}), reduces computing time almost by half (due to the halved volume of the domain in the multi-dimensional space of continuous variables). \section{Computing experiments with SCIP, {N$\mathbf{{\leqslant}}$8}} \label{sec:experimentsScip} Computing experiments were performed with SCIP ({\bf S}olving {\bf C}onstraint {\bf I}nteger {\bf P}rograms), \cite{bib:GleixnerBastubbeEifleretal.2018}. This is a fairly popular open-source solver, which can be used freely for research and educational purposes. On the home page \href{http://scip.zib.de/}{scip.zib.de} one can read: \quot{SCIP is a framework for Constraint Integer Programming oriented towards the needs of mathematical programming experts \dots ~as a pure MIP and MINLP solver or as a framework for branch-cut-and-price. SCIP is implemented as C callable library and provides C++ wrapper classes for user plugins. It can also be used as a standalone program to solve mixed integer programs given in various formats such as MPS, LP, flatzinc, CNF, OPB, WBO, PIP, etc.} In our studies on optimization modelling we prefer another format, the so-called NL format (representing an instance of a mathematical programming problem) from AMPL (A Mathematical Programming Language) \cite{bib:ampl-book}, which has an almost 35-year-long history (since 1985). SCIP supports the NL format via the special \texttt{scipampl} build. 
Originally, AMPL required the use of a special commercially licensed translator: to create the NL-file that might be passed to any AMPL-compatible solver, including free ones, and to parse the solution SOL-files returned by solvers. But in 2005 the AMPL developers disclosed the internal format of NL-files \cite{bib:gay2005writing}. Thanks to that, Pyomo ({\bf PY}thon {\bf O}ptimization {\bf M}odeling {\bf O}bjects, \href{http://pyomo.org}{pyomo.org}, \cite{bib:hart2017pyomo}, an open-source and free optimization modeling tool) now supports the creation of NL-files. Thus the Python programming language, very popular in scientific research, may be used to generate optimization problems, which may then be solved by any suitable AMPL-compatible solver. An important feature of SCIP is its capability to solve optimization problems having polynomials in their constraints (other non-linearities are admitted as well). In our experiments, the Thomson problem, formulated as an NLP with polynomials of the 4th degree in equality constraints, was solved by SCIP~\cite{bib:smirnov2018dd}. Details of the implementation of the branch-and-bound algorithm in SCIP for the case of bilinear and non-convex polynomial constraints may be found in the article \cite{bib:vigerske2017scip}. For brevity we give the following citation: \quot{SCIP uses convex envelopes for well-known univariate functions, linearization cuts for convex constraints, and the classical McCormick relaxation of bilinear terms. All of these are dynamically separated, for pure NLPs also at a solution of the NLP relaxation...}. Returning to the flat torus packing problem: Pyomo is used to create an NL-file from the MINLP presented in (\ref{eq:minlp}) and (\ref{eq:auxcons}). Then the NL-file is processed by the \texttt{scipampl} application, which returns a SOL-file with the solution and a LOG-file with auxiliary information about the solving process. 
Finally, the SOL-file is processed by Python code via the Pyomo package features to analyse the solution obtained (including the creation of all illustrations presented below). Solution times for the cases $N{=}4,5,6,7,8$ are presented in Table \ref{tbl:soltimes4_8}. The problems with $N{=}4,5,6,7$ were solved on a desktop [CPU=Intel~Core~i7-6700~@~3.40GHz, Mem=32Gb]. For $N{=}8$ the problem was solved on a standalone server [CPU=2$\times$Xeon5620~@~2.4Ghz, Mem=32Gb]. SCIP was run with default settings, except that the relative gap was set to $1.e-6$ and the memory limit to 28Gb (actually the worst case, $N{=}8$, occupied about 8Gb). \begin{table}[ht] \begin{center} \begin{tabularx}{10cm}{|l|X|X|X|X|c|} \hline N & 4 & 5 & 6 & 7 & 8 \\ \hline Solving time, sec & 3 & 30 & 118 & 2552 & 27240 (454 min) \\ \hline \end{tabularx} \end{center} \caption{Solving times for $N{=}\SET{4,5,6,7,8}$, one SCIP process} \label{tbl:soltimes4_8} \end{table} \paragraph{Results for N=7.} The case $N{=}7$ deserves more attention as there are three different (up to isometric transformation) optimal arrangements, see \cite{bib:musin2016optimal}, Fig.3b--Fig.3d. Some effort was needed to find all these configurations among the results of SCIP solving. By default the SCIP solver stores optimal solutions in a list that is available via the proper commands of the SCIP console. So, after successful completion of solving, the user can compare a number of solutions. The results are presented in Figures \ref{fig:opt7b}--\ref{fig:opt7d}. They have been selected manually (\quot{draw-and-compare}) from 11 optimal solutions found by a standalone SCIP process. Because the branch-and-bound algorithm inherently differs from the \quot{exact} combinatorial method (enumeration of irreducible contact graphs) used in \cite{bib:2012arXiv1212.0649M,bib:musin2016optimal}, SCIP finds redundant solutions (the auxiliary constraints (\ref{eq:auxcons}) are not enough to avoid them). 
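The manual \quot{draw-and-compare} step can be partly automated: two configurations represent the same packing iff one is carried onto the other by an isometry of the square torus, i.e. a symmetry of the square lattice (the dihedral group $D_4$) followed by a translation. A Python sketch of such a check (all helper names are ours; the greedy point matching is adequate when the tolerance is far smaller than the packing distance):

```python
import math

def torus_dist(p, q):
    return math.sqrt(sum(min(abs(a - b), 1 - abs(a - b)) ** 2
                         for a, b in zip(p, q)))

# the eight lattice symmetries of the square (dihedral group D4)
D4 = [lambda x, y: (x, y),  lambda x, y: (-x, y),
      lambda x, y: (x, -y), lambda x, y: (-x, -y),
      lambda x, y: (y, x),  lambda x, y: (-y, x),
      lambda x, y: (y, -x), lambda x, y: (-y, -x)]

def matches(P, Q, tol=1e-6):
    # greedy matching of two point sets on the torus
    used = [False] * len(Q)
    for p in P:
        for idx, q in enumerate(Q):
            if not used[idx] and torus_dist(p, q) < tol:
                used[idx] = True
                break
        else:
            return False
    return True

def same_packing(P, Q, tol=1e-6):
    """True iff Q maps onto P under some D4 symmetry plus a translation."""
    for g in D4:
        gQ = [g(*q) for q in Q]
        for qx, qy in gQ:
            tx, ty = P[0][0] - qx, P[0][1] - qy   # send this point to P[0]
            moved = [((x + tx) % 1.0, (y + ty) % 1.0) for x, y in gQ]
            if matches(P, moved, tol):
                return True
    return False
```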
Every configuration coincides with one of those found in \cite{bib:musin2016optimal} after some isometric transformation (see the captions of Figures \ref{fig:opt7b}--\ref{fig:opt7d}). Pay attention to the free position of point number 2 in Fig.\ref{fig:opt7b}: its circle can be freely moved within the area surrounded by the other grey circles. \begin{figure}[!h] \centerline{\includegraphics[scale=0.45]{Tammes_flatTorus_p7_all_0_Musin_b_SCIP.png}} \caption{$N{=}7$, the 1st configuration (see \cite{bib:musin2016optimal}, Fig.3b, $d^{*}{=}\frac{1}{1{+}\sqrt{3}}$)} \label{fig:opt7b} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[scale=0.45]{Tammes_flatTorus_p7_all_1_Musin_c_SCIP.png}} \caption{$N{=}7$, the 2nd configuration (see \cite{bib:musin2016optimal}, Fig.3c, rotate $90^o \circlearrowright$, $d^{*}{=}\frac{1}{1{+}\sqrt{3}}$)} \label{fig:opt7c} \end{figure} \begin{figure}[!h] \centerline{\includegraphics[scale=0.45]{Tammes_flatTorus_p7_all_3_Musin_d_SCIP.png}} \caption{$N{=}7$, the 3rd configuration (see \cite{bib:musin2016optimal}, Fig.3d, flip $\updownarrow$ and rotate $90^o \circlearrowleft$, $d^{*}{=}\frac{1}{1{+}\sqrt{3}}$)} \label{fig:opt7d} \end{figure} \begin{figure}[h]\vspace*{4pt} \centerline{\includegraphics[scale=0.5]{Tammes_flatTorus_p8__v2.png}} \caption{Optimal configuration for $N{=}8$ found by SCIP (see \cite{bib:musin2016optimal}, Fig.3e, $d^{*}{=}\frac{1}{1{+}\sqrt{3}}$)} \label{fig:opt8} \end{figure} \section{Computing experiments with ParaSCIP, {N{=}8,9}} \label{sec:experimentsParaScip} Table \ref{tbl:soltimes4_8} demonstrates the dramatic growth of the solving time (and of the complexity of the flat torus packing problem) with increasing $N$. Maybe this is the reason why the article \cite{bib:musin2016optimal} presents only a conjecture about the optimal configuration for $N{=}9$. Our attempts to solve the problem for $N{=}9$ by a \quot{single-threaded} SCIP have failed. 
The computing efficiency of the branch-and-bound algorithm implemented in SCIP may be substantially increased by a parallel implementation, as the ParaSCIP solver does. \paragraph{ParaSCIP -- parallel implementation of B\&B.} ParaSCIP, \cite{bib:shinano2011parascip}, is a distributed-memory massively parallel MIP and MINLP solver based on the Ubiquity Generator (UG) framework, \href{http://ug.zib.de}{ug.zib.de}. This framework is aimed at making MIP and other discrete-problem solvers parallel. In the framework the solver and the communication mechanism are abstracted, making it easier to parallelize different solvers. Two communication mechanisms are available in the library: distributed-memory MPI and shared-memory POSIX threads. The SCIP, CPLEX and Xpress solvers are supported with MPI, and only SCIP with both mechanisms. Only the SCIP-based specializations of the framework (ParaSCIP for the MPI-based and FiberSCIP for the POSIX-threads-based implementation) are publicly available. ParaSCIP can utilize quite big computing resources, which allows it to solve problems not solvable even with commercial solvers on a single host: ParaSCIP ran on 80000 cores in parallel while solving open MIP instances from MIPLIB2010 \cite{bib:shinano2016solving}. \begin{sloppypar} Compiling SCIP and ParaSCIP on {HPC4/HPC5}\xspace (the cluster of the National Research Centre "Kurchatov Institute", \href{https://www.top500.org/site/50615}{www.top500.org/site/50615}) is not easy because {HPC4/HPC5}\xspace runs CentOS 6 with the GCC 4.4 tool-chain installed, which does not support the C++11 extensions required for the SCIP build. We used another machine with a similar CentOS version but with devtoolset-7 (GCC 7.3) installed to build the solvers and then copied them to the cluster. The machines had Intel CPUs of different families (Haswell on the cluster vs. Westmere on our machine), so the default optimizations during compilation could be suboptimal for the cluster. 
Finally Ipopt, SCIP and ParaSCIP were compiled on our machine and then copied to the cluster, where they ran well. \end{sloppypar} After consultation with the ParaSCIP developers we ran \verb|parascip| with non-default settings. We used the following ParaSCIP settings:\\ \begin{verbatim} Quiet = FALSE LogSolvingStatusFilePath = "./logs/" LogNodesTransferFilePath = "./logs/" CheckpointFilePath = "./logs/" SolutionFilePath = "./logs/" NotificationInterval = 10 LogSolvingStatus = TRUE Checkpoint = TRUE CheckpointInterval = 1800 \end{verbatim} Also the \quot{depth-first search} option was set for search-tree traversal to reduce memory consumption in the solvers: \begin{verbatim} nodeselection/dfs/stdpriority = 300000 \end{verbatim} The following command line was used to run ParaSCIP: \begin{verbatim} mpirun parascip parascip.set tammesTorus_d2_p9.cip -q -s dfs.set \end{verbatim} Here \verb|parascip.set| is the ParaSCIP settings file listed above; \verb|tammesTorus_d2_p9.cip| is the problem definition converted to CIP format from NL format; \verb|dfs.set| is the settings file with the depth-first search option specified above. \paragraph{ParaSCIP vs SCIP for N=8.} This case has been used to compare the performance of ParaSCIP and SCIP on {HPC4/HPC5}\xspace. Solving times are presented in Table \ref{tbl:soltimes8_KIAE}. Note that solving with a single SCIP process on the cluster took much more time than on the standalone server (see the last column of Table \ref{tbl:soltimes4_8}). The reason is that the problem instance passed to the cluster did not have the last of the auxiliary constraints (\ref{eq:auxcons}). 
\begin{table}[!ht] \begin{center} \begin{tabular}{|l|l|c|} \hline \parbox[t]{6cm}{{HPC4/HPC5}\xspace, \\ CPU Xeon E5-2680v3 12C @ 2.5GHz \vspace{0.2em}} & cores & \parbox[t]{2cm}{solving time, \\min } \\ \hline SCIP & 1 & 780 \\ \hline ParaSCIP & 8 (7 solvers) & 126 \\ \hline \end{tabular} \end{center} \caption{Solving times for $N{=}8$, for SCIP and ParaSCIP (on {HPC4/HPC5}\xspace)} \label{tbl:soltimes8_KIAE} \end{table} To evaluate the efficiency of parallelization one should remember one feature of ParaSCIP: one of the ParaSCIP processes (of the dedicated MPI application) plays the role of the Load Coordinator and does not actually work on the branch-and-bound algorithm's search tree. So we have:\\ efficiency (CPU): 780/126/8 = 0.77\\ efficiency (solvers): 780/126/7 = 0.88. \paragraph{Results of ParaSCIP for N=9.} ParaSCIP with 128 processes on 8 nodes (127~solvers) of the cluster {HPC4/HPC5}\xspace solved the problem in 956 minutes. It is the main result presented in this work. The optimal configuration found is shown in Fig. \ref{fig:opt9}; it coincides with the conjecture presented in \cite{bib:2012arXiv1212.0649M,bib:musin2016optimal}. Taking into account the load balancing between working processes, the following evaluation of complexity may be made: $127{\times}956/60{\approx}2024 ~\mbox{CPU}{\times}\mbox{hours}$. The total complexity, including the CPU dedicated to the Load Coordinator, is $128{\times}956/60{\approx}2039 ~\mbox{CPU}{\times}\mbox{hours}$. We believe that this complexity could be reduced almost by half if the last of the auxiliary constraints (\ref{eq:auxcons}) were added. 
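The parallel-efficiency figures quoted above for $N{=}8$ follow directly from the raw timings; a trivial Python check (the helper name is ours):

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    # classical definition: speedup divided by the number of processes
    return t_serial / t_parallel / n_procs

# N = 8 on HPC4/HPC5: 780 min with one SCIP process, 126 min with ParaSCIP
eff_cpu = parallel_efficiency(780, 126, 8)      # all 8 MPI ranks counted
eff_solvers = parallel_efficiency(780, 126, 7)  # Load Coordinator excluded
print(round(eff_cpu, 2), round(eff_solvers, 2))
```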
\begin{figure}[!h]\vspace*{4pt} \centerline{\includegraphics[scale=0.5]{Tammes_flatTorus_p9_opt_ParaSCIP.png}} \caption{Optimal configuration for $N{=}9$ found by ParaSCIP (see \cite{bib:musin2016optimal}, Fig.3f, $d^{*}{=}\frac{1}{\sqrt{5{+}2\sqrt{3}}}$)} \label{fig:opt9} \end{figure} \section{Conclusion} \label{sec:conclusion} The results obtained confirm the relevance of the global optimization approach to solving hard problems of combinatorial geometry. A few tricks in the formulation of the flat torus packing problem as a mixed-integer nonlinear problem, together with the \quot{general purpose} solver SCIP and its parallel version ParaSCIP, allowed us to give a \quot{computer-aided} proof of the optimal arrangement for 9 points. All results obtained before, for $N{=}4,5,6,7,8$, have been reproduced as well. The semi-empirical rule (see \cite{bib:smirnov2018dd}) that simple auxiliary constraints reducing the volume of the feasible domain (e.g. the last inequality of (\ref{eq:auxcons})) may substantially reduce the solving time of a branch-and-bound algorithm using McCormick envelopes for global optimization of nonlinear problems has been confirmed once more. The general-purpose solver SCIP and ParaSCIP, its parallel implementation, can be used successfully for global optimization of mixed-integer nonlinear problems with bilinear constraints. \subsubsection*{Acknowledgements} The authors are grateful to Alexey Tarasov for the recommendation to try the flat torus packing problem in our experiments with distributed implementations of B\&B\xspace. The authors thank Stefan Vigerske and Yuji Shinano for their consultations on the use of the SCIP and ParaSCIP solvers.\\ This work is supported by the Russian Science Foundation (project No. 16-11-10352).\\ This work has been carried out using computing resources of the federal collective usage centre Complex for Simulation and Data Processing for Mega-science Facilities at NRC "Kurchatov Institute", \href{http://ckp.nrcki.ru}{ckp.nrcki.ru}. \label{sect:bib} \bibliographystyle{unsrt}
https://arxiv.org/abs/1507.07279
Tate classes, equivariant geometry and purity
A simple explanation for the ubiquity of "vanishing of cohomology in odd degrees" in equivariant contexts is given.
\subsection{Introduction} This note gives a simple explanation for the ubiquity of `vanishing of cohomology in odd degrees' in equivariant contexts. The heart of the matter is that actions of linear algebraic groups force the mixed Hodge structures showing up in cohomology to be of type $(n,n)$. Combining this with purity, which is also often forced by the equivariant context, yields the aforementioned vanishing. Arguments of this nature have been standard in Kazhdan-Lusztig theory for decades now (for instance, see \cite{G}, \cite{KL}, \cite{MS}, \cite{So}, \cite{SW}, \cite{Sp}; also see \cite{BJ}). The only ``new contribution"\footnote{ ``It is all already in Dedekind".} that this note makes is an insistence on emphasizing the word ``Tate". Several natural questions, related to algebraic cycles, arise from these observations. These have satisfactory answers, but treating them requires some motivic machinery. This has been relegated to a separate paper. \textbf{Acknowledgments: }This note is mainly a digression arising from a project joint with W. Soergel and M. Wendt \cite{SVW}. In particular, the Basic Observation below was born out of explanations by W. Soergel of a more general statement for motivic sheaves. In its current presentation, Theorem \ref{contractingthm} owes its formulation to a conversation with M. A. de Cataldo. Finally, this note would never have seen the light of day were it not for J. Gandini and A. Maffei's insistence (and constant encouragement) that these results were not completely frivolous. \subsection{Conventions}\label{s:conventions} A `variety' will always mean a `separated scheme of finite type over $\mathrm{Spec}(\CC)$'. Hereon, I will write `$\mathrm{pt}$' instead of `$\mathrm{Spec}(\CC)$'. Constructible sheaves, cohomology, etc., will always be with $\QQ$-coefficients, and with respect to the complex analytic site associated to a variety. 
I will freely use the existence of functorial mixed Hodge structures on cohomology, compactly supported cohomology, equivariant cohomology, etc. (see \cite{D} or \cite{Sa}). A $\ZZ$-graded mixed Hodge structure $H^*$ (for instance, the cohomology $H^*(X)$ of a variety $X$) will be called \emph{pure} if each $H^i$ is a pure Hodge structure of weight $i$. A mixed Hodge structure will be called \emph{Tate} if it is an extension of Hodge structures of type $(n,n)$ (the $n$ is allowed to vary). \subsection{The Basic Observation}\label{s:basic} The following Lemma is well known. \begin{lemma}[{\cite[\S 9.1]{D}}]Let $G$ be a linear algebraic group. Then $H^*(G)$ is Tate. \end{lemma} \begin{proof} We may assume $G$ is connected reductive (Levi decomposition). Then the splitting principle applies. \end{proof} The following is essentially contained in \cite[\S 9]{D} (in \cite{D} the observation is only made explicit for $X=\mathrm{pt}$; regardless, it is in there). \begin{mainObs1}Let $G$ be a linear algebraic group acting on a variety $X$. If $H^*(X)$ is Tate, then the $G$-equivariant cohomology $H^*_G(X)$ is Tate. \end{mainObs1} \begin{proof}Consider the usual simplicial variety $[X/G]_{\bullet}$ (see \cite[\S6.1]{D}) computing $H^*_G(X)$. Filtering $[X/G]_{\bullet}$ by skeleta yields a spectral sequence converging to $H^*_G(X)$ \cite[Proposition 8.3.5]{D}. The $E_1$ entries of this spectral sequence are of the form $H^q(G^{\times p}\times X)$. \end{proof} The following consequence is essentially contained in \cite[Theorem 1(a)]{BP} (the argument in \cite{BP} is quite different though, and as formulated, \cite[Theorem 1(a)]{BP} is a statement about $E$-polynomials). \begin{cor} Let $G$ be a linear algebraic group and $K\subset G$ a closed subgroup. Then $H^*(G/K)$ is Tate. \end{cor} \begin{cor}\label{tatecor} Let $X$ be a variety on which a linear algebraic group $G$ acts with finitely many orbits. Then the compactly supported cohomology $H^*_c(X)$ is Tate. 
\end{cor} \subsection{Intersection cohomology} Write $IH^*(X)$ for the intersection cohomology of $X$, normalized so that if $X$ is smooth and equidimensional, then $IH^k(X) = H^k(X)$. \begin{thm}\label{oddvanish} Let $X$ be a complete variety endowed with the action of a linear algebraic group $G$. If $X$ admits a $G$-equivariant resolution of singularities $E \to X$ such that $E$ admits finitely many orbits, then $IH^*(X)$ vanishes in odd degrees. \end{thm} \begin{proof} By the Decomposition Theorem, $IH^*(X)$ is a direct summand of $H^*(E)$. The latter is pure and Tate (Corollary \ref{tatecor}). \end{proof} \subsection{Springer's Homotopy Lemma}\label{obs2} It will now be convenient to use the language of mixed Hodge modules \cite{Sa}. One can avoid this and only use classical (as in the style of \cite{D}) mixed Hodge theory, but this would make the language cumbersome. Functors on mixed Hodge modules will tacitly be derived. Write $\const{\mathrm{pt}}$ for the trivial (weight $0$) rank one pure Hodge structure on $\mathrm{pt}$. Let $X$ be a variety, and $a\colon X \to \mathrm{pt}$ the structure map. Set $\const{X} = a^*\const{\mathrm{pt}}$. Let $S$ be a variety endowed with a $\CC^{\times}$-action that contracts $S$ to some point $i\colon \{x\} \hookrightarrow S$. Let $a\colon S \to \{x\}$ be the evident map. Call a complex $\mathcal{A}$, of mixed Hodge modules on $S$, \emph{na\"ively equivariant} if there exists an isomorphism $\alpha^*\mathcal{A} \simeq p^*\mathcal{A}$, where $\alpha, p \colon \CC^{\times} \times S \to S$ are the action and projection maps respectively. \begin{lemma}[Springer's Homotopy Lemma {\cite[Proposition 1]{Sp}}] If $\mathcal{A}$ is na\"ively equivariant on $S$, then the canonical maps $a_*\mathcal{A} \mapright{\sim} i^*\mathcal{A}$ and $i^!\mathcal{A}\mapright{\sim}a_!\mathcal{A}$ are isomorphisms. \end{lemma} \subsection{Contracting slices}\label{s:contracting}Let $G$ be a linear algebraic group acting on a variety $X$. 
A \emph{contracting slice} at a point $x\in X$ is the data of a locally closed subvariety $S\subset X$ containing $x$, and satisfying: \begin{enumerate} \item the map $G\times S \to X$, $(g,s)\mapsto gs$ is smooth; \item there exists a one parameter subgroup $\CC^{\times}\to G$ that leaves $S$ stable and contracts $S$ to $x$. \end{enumerate} We will say that the $G$-action on $X$ \emph{admits contracting slices} if each $G$-orbit contains a point that admits a contracting slice. The following result is contained, implicitly or explicitly (sometimes in special cases, or phrased in slightly different language, e.g., stalkwise/pointwise purity of pure Hodge modules vs. purity of fibres), in \cite[\S5.2]{BeBe}, \cite{BJ}, {\cite[\S14]{BL}}, \cite{dCMM}, \cite{G}, \cite{KL}, \cite{MS}, \cite{So}, \cite{SW}, \cite{Sp}. Undoubtedly, this is an incomplete list: the use of contracting slices pervades representation theory. \begin{thm} \label{contractingthm}Let $G$ be a linear algebraic group acting on $E$ and $X$. Assume $E$ is rationally smooth, and admits finitely many $G$-orbits. Let $\pi\colon E\to X$ be a $G$-equivariant proper morphism. If $X$ admits contracting slices, then the cohomology of each fibre $H^*(\pi^{-1}(x))$, $x\in X$, is pure and Tate. In particular, $H^*(\pi^{-1}(x))$ vanishes in odd degrees. \end{thm} \begin{proof} The purity assertion is a special case of the well known fact that contracting slices guarantee pointwise purity (i.e., purity of stalks and costalks at all points) of every pure $G$-equivariant Hodge module on $X$. In slightly more detail, it suffices to prove the result for a single point in each $G$-orbit in $X$. The restriction of $\pi_*\const{E}$ to a contracting slice is pure \cite[\S2.3.2]{MS}. Thus, Springer's Homotopy Lemma yields purity at each contraction point. As there are only finitely many $G$-orbits in $E$, the isotropy group $G_x$ acts with finitely many orbits on the fibre $\pi^{-1}(x)$.
So Corollary \ref{tatecor} applies. \end{proof} \subsection{Examples}\label{s:examples} The above story now applies to flag varieties, toric varieties, symmetric varieties, wonderful compactifications, ..., where there are well known group actions and/or contracting slices. \begin{example}Let $X$ be a complete rationally smooth variety on which a linear algebraic group $G$ acts with finitely many orbits (for instance a complete simplicial toric variety). Then $H^*(X)$ is pure (rational smoothness plus completeness) and Tate (Corollary \ref{tatecor}). In particular, $H^*(X)$ vanishes in odd degrees. \end{example} \begin{example}Let $G$ be a connected reductive group. A $G$-variety is called \emph{spherical} if it contains a dense orbit for a Borel subgroup of $G$. Complete spherical varieties are known to satisfy the assumptions of Theorem \ref{oddvanish}. In particular, if $X$ is a complete spherical variety, then $IH^*(X)$ vanishes in odd degrees \cite{BJ}. Note that toric varieties are spherical. \end{example} \begin{example}Let $B\subset G$ be a Borel subgroup, and $\pi\colon E \to G/B$ a $B$-equivariant proper morphism to the flag variety $G/B$. Assume $E$ is rationally smooth, and admits finitely many $B$-orbits. Let $x\in G/B$. Then, for a suitable product $U$ of root subgroups of $G$, the map $u \mapsto ux$ defines an embedding $U\hookrightarrow G/B$ whose image is a cell transversal to the $B$-orbit of $x$. This cell is contracted by a one parameter subgroup (of $B$) to $x$. Consequently, Theorem \ref{contractingthm} applies, and $H^*(\pi^{-1}(x))$ vanishes in odd degrees for all $x\in G/B$. In fact, $H^*(\pi^{-1}(x))$ is generated by algebraic cycles (this follows from purity combined with \cite[Theorem 3]{To}). This generalizes the fact that fibres of Bott-Samelson resolutions can be paved by affine spaces. \end{example} \begin{example}The same result as in the previous example holds if we replace $G/B$ by $G/K$, where $K\subset G$ is a symmetric subgroup.
Contracting slices are known to exist for the $B$-action on $G/K$ \cite{MS}. \end{example} \begin{example}Analogously, Theorem \ref{contractingthm} applies to spherical varieties that admit contracting slices for the $G$-action. Not all spherical varieties admit contracting slices. Regardless, it can be shown (use the argument in the proof of \cite[Theorem 4]{BJ}) that on a \emph{normal} spherical variety, every pure $G$-equivariant mixed Hodge module is pointwise pure (i.e., its stalks and costalks are pure at every point). This immediately yields the conclusions of Theorem \ref{contractingthm}. \end{example} \begin{example}Let $G$ be connected semisimple, and let $\mathcal{N}$ be the cone of nilpotent elements in $Lie(G)$. Then the adjoint action of $G$ on $\mathcal{N}$ admits contracting slices (for instance, see \cite[\S3.7.14]{CG}). Hence, Theorem \ref{contractingthm} applies. Unhappily, this doesn't yield the vanishing of the cohomology of Springer fibres in odd degrees, since $G$ doesn't act on the Springer resolution with finitely many orbits. \end{example} \subsection{Complements} \begin{enumerate} \item The Basic Observation is a specific instance of the more general observation that if $X_{\bullet}$ is a simplicial variety with each $H^*(X_i)$ Tate, then $H^*(X_{\bullet})$ is Tate. This yields statements, analogous to the Basic Observation, for algebraic stacks with atlases. I don't know any examples (apart from $[X/G]$) where this yields anything interesting that is not already well known using simpler methods. However, see \cite{Sh}. \item The Basic Observation has a weak converse: if $H^*(X)$ is pure, and $H^*_G(X)$ is Tate, then $H^*(X)$ is Tate. This is immediate, since purity implies $H^*_G(X)\simeq H^*_G(\mathrm{pt}) \otimes H^*(X)$ as an $H^*_G(\mathrm{pt})$-module. I don't know of a counterexample to this statement with the purity assumption dropped. \item Springer's Homotopy Lemma uses $\CC^{\times}$-actions to infer purity. 
These can also be exploited to deduce the Tate property: Let $X$ be a variety endowed with a $\CC^{\times}$-action. Assume $H^*(X)$ is pure. If the cohomology of the fixed point subvariety $H^*(X^{\CC^{\times}})$ is Tate, then so is $H^*(X)$. This assertion should be viewed as a cohomological counterpart to the classical Bialynicki-Birula decomposition \cite{BB}. To prove it, note that the Localization Theorem (in equivariant cohomology) yields that restriction $H^*_{\CC^{\times}}(X) \to H^*_{\CC^{\times}}(X^{\CC^{\times}})$ is an isomorphism modulo $H^*_{\CC^{\times}}(\mathrm{pt})$-torsion. Purity of $H^*(X)$ implies: \[ H^*_{\CC^{\times}}(X) \simeq H^*_{\CC^{\times}}(\mathrm{pt}) \otimes H^*(X) \] as an $H^*_{\CC^{\times}}(\mathrm{pt})$-module. In particular, $H^*_{\CC^{\times}}(X)$ is free. Consequently, the restriction $H^*_{\CC^{\times}}(X) \hookrightarrow H^*_{\CC^{\times}}(X^{\CC^{\times}})$ is an injection. Both $H^*_{\CC^{\times}}(X^{\CC^{\times}})$ and $H^*_{\CC^{\times}}(\mathrm{pt})$ are Tate. Therefore, $H^*(X)$ must also be Tate. \item Although I have not checked the details, everything in this note should extend readily to the context of P. Deligne's Weil conjecture machinery and Frobenius actions on $\ell$-adic cohomology. \end{enumerate}
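\medskip The following toy computation is not needed anywhere above and is entirely standard; it is included only to illustrate the Basic Observation in its simplest instance. Take $X=\mathrm{pt}$ and $G=\CC^{\times}$. The simplicial variety $[\mathrm{pt}/\CC^{\times}]_{\bullet}$ computes $H^*_{\CC^{\times}}(\mathrm{pt})=H^*(B\CC^{\times})=H^*(\mathbb{P}^{\infty})$, and
\[
H^{2n}_{\CC^{\times}}(\mathrm{pt})\simeq \mathbb{Q}(-n), \qquad H^{\mathrm{odd}}_{\CC^{\times}}(\mathrm{pt})=0,
\]
where $\mathbb{Q}(-n)$ denotes the rank one pure Hodge structure of type $(n,n)$. So $H^*_{\CC^{\times}}(\mathrm{pt})$ is pure and Tate, as the Basic Observation (applied to $X=\mathrm{pt}$) predicts.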
https://arxiv.org/abs/2204.11244
Log Calabi-Yau structure of projective threefolds admitting polarized endomorphisms
Let $X$ be a normal projective variety admitting a polarized endomorphism $f$, i.e., $f^*H\sim qH$ for some ample divisor $H$ and integer $q>1$. It was conjectured by Broustet and Gongyo that $X$ is of Calabi-Yau type, i.e., $(X,\Delta)$ is lc for some effective $\mathbb{Q}$-divisor such that $K_X+\Delta\sim_{\mathbb{Q}} 0$. In this paper, we establish a general guideline based on the equivariant minimal model program and the canonical bundle formula. In this way, we prove the conjecture when $X$ is a smooth projective threefold.
\section{Introduction} We work over an algebraically closed field of characteristic 0. Let $f:X\to X$ be a surjective endomorphism of a normal projective variety $X$. In the curve case, it is well known by the Hurwitz formula that $X$ is either a rational curve or an elliptic curve when $f$ is non-isomorphic. This is equivalent to saying that the anticanonical divisor $-K_X$ is effective. In the higher dimensional case, to eliminate the distraction of automorphisms, one focuses on the {\it polarized} endomorphism $f$, i.e., $f^*H\sim qH$ for some ample divisor $H$ and integer $q>1$. Then by making use of the ramification divisor formula, Zhang and the author showed that $-K_X$ is effective when $X$ is $\mathbb{Q}$-Gorenstein. However, the effectivity of the anticanonical divisor, though important, says very little about the detailed characterization of higher dimensional varieties. A delicate operation is running the $f$-equivariant (after iteration) minimal model program. The smooth surface case was settled by Nakayama, and the general higher dimensional situation by Zhang and the author (cf.~\cite{Nak02},\cite{MZ18},\cite{MZ19},\cite{MZ20},\cite{CMZ20}). In this way, Broustet and Gongyo \cite{BG17} proposed the following conjecture and proved the surface case. Recall that a normal projective variety $X$ is of {\it Calabi-Yau type} if $(X,\Delta)$ is an lc pair for some effective $\mathbb{Q}$-Weil divisor $\Delta$ such that $K_X+\Delta\sim_{\mathbb{Q}} 0$. By the abundance (cf.~\cite[Theorem 1.2]{Gon13}), the latter condition is equivalent to $K_X+\Delta\equiv 0$. We also call the pair $(X, \Delta)$ {\it log Calabi-Yau}. \begin{conjecture}\label{main-conj-cy} Let $X$ be a normal projective variety admitting a polarized endomorphism. Then $X$ is of Calabi-Yau type.
\end{conjecture} In higher dimensional cases, Conjecture \ref{main-conj-cy} has recently been verified for rationally connected smooth projective varieties by Yoshikawa; see \cite{Yos20} or Theorem \ref{thm-yos}. The main purpose of this paper is to give a partial guideline on Conjecture \ref{main-conj-cy} and to provide a full solution for the case of smooth projective threefolds. The following is our main result. \begin{theorem}\label{main-thm-lcy} Let $X$ be a smooth projective threefold admitting a polarized endomorphism $f$. Then $X$ is of Calabi-Yau type, i.e., $(X,\Delta)$ is an lc pair for some effective $\mathbb{Q}$-divisor $\Delta$ with $K_X+\Delta\sim_{\mathbb{Q}} 0$. \end{theorem} We briefly explain the strategy and difficulty in our proof. We first run the $f$-equivariant minimal model program, which ends up with a $Q$-abelian variety $Y$, i.e., a quasi-\'etale quotient of an abelian variety. Since the smooth rationally connected case has been verified by Yoshikawa, we may assume $\dim(Y)>0$. Then we observe the fibration $\pi:X\to Y$ and its $f$-periodic general fibre. By applying the canonical bundle formula and the ramification divisor formula, the natural idea is to reduce the problem to the following Conjecture \ref{main-conj-lc} proposed by Yoshinori Gongyo. \begin{conjecture}[Gongyo]\label{main-conj-lc} Let $f:X\to X$ be a $q$-polarized endomorphism of a smooth projective variety $X$. Then $(X,\frac{R_f}{q-1})$ is an lc pair after iteration. \end{conjecture} However, in the surface case, we cannot fully prove Conjecture \ref{main-conj-lc} for $\mathbb{P}^2$, which is the only remaining case. So the main difficulty in proving Theorem \ref{main-thm-lcy} lies in the case when $\pi:X\to Y$ is a $\mathbb{P}^2$-bundle over an elliptic curve $Y$. By applying the Iitaka fibration of the anticanonical divisor, we only need to focus on the case when $-K_X$ is big; see Section \ref{sec-big}.
This new condition allows us to reduce the problem to a single very concrete case: $X\cong\mathbb{P}_Y(\mathcal{F}_2\oplus \mathcal{L})$, where $\mathcal{F}_2$ is the unique indecomposable rank 2 vector bundle of degree zero with non-trivial global sections and $\mathcal{L}$ is a line bundle of negative degree. \begin{ack} The author would like to thank Professor Paolo Cascini for the warm hospitality and inspiring discussions during his visit to Imperial College London in 2018. He would also like to thank Dr.~Guolei Zhong for valuable suggestions. The author is supported by a Research Fellowship of KIAS. \end{ack} \section{Preliminaries} We use the following notation throughout this paper. \begin{notation}\label{notation2.1} Let $X$ be a projective variety. \begin{itemize} \item The symbols $\sim$ (resp.~$\sim_{\mathbb Q}$, $\equiv$) denote the \textit{linear equivalence} (resp.~\textit{$\mathbb Q$-linear equivalence}, \textit{numerical equivalence}) on $\mathbb{Q}$- (or $\mathbb{R}$-) Cartier divisors. We also use $\equiv$ to denote the {\it numerical equivalence} of $1$-cycles on $X$. \item Denote by $\textup{NS}(X) = \operatorname{Pic}(X)/\operatorname{Pic}^0(X)$ the {\it N\'eron-Severi group} of $X$. Let $\operatorname{N}^1(X):=\operatorname{NS}(X)\otimes_\mathbb{Z}\mathbb{R}$ be the space of $\mathbb{R}$-Cartier divisors modulo numerical equivalence and $\rho(X) :=\dim_{\mathbb{R}}\operatorname{N}^1(X)$ the {\it Picard number} of $X$. Let $\operatorname{N}_1(X)$ be the dual space of $\operatorname{N}^1(X)$ consisting of 1-cycles. Denote by $\operatorname{Nef}(X)$ the cone of {\it nef divisors} in $\operatorname{N}^1(X)$ and $\overline{\operatorname{NE}}(X)$ the dual cone consisting of {\it pseudo-effective 1-cycles} in $\operatorname{N}_1(X)$. \item Let $f:X\to X$ be a surjective endomorphism. A subset $Y\subseteq X$ is {\it $f^{-1}$-invariant} (resp.~{\it $f^{-1}$-periodic}) if $f^{-1}(Y)=Y$ (resp.~$f^{-s}(Y)=Y$ for some $s>0$).
\item A surjective endomorphism $f:X\to X$ is \textit{$q$-polarized} if $f^*H\sim qH$ for some ample Cartier divisor $H$ and integer $q>1$; see \cite[Proposition 1.1]{MZ18} for the equivalent definitions. \item A smooth projective variety $X$ is {\it rationally connected} if any two general points of $X$ can be connected by a chain of rational curves. \item A normal projective variety $X$ is of \textit{Fano type} if there is an effective Weil $\mathbb{Q}$-divisor $\Delta$ on $X$ such that the pair $(X,\Delta)$ has at worst klt singularities and $-(K_X+\Delta)$ is ample and $\mathbb{Q}$-Cartier. If $\Delta=0$, we say that $X$ is a \textit{(klt) Fano variety}. \item Let $Y$ be a projective variety and $\mathcal{E}$ a vector bundle of rank $n$. Denote by $\pi:\mathbb{P}_Y(\mathcal{E})\to Y$ the projective bundle of hyperplanes in $\mathcal{E}$ (not lines in $\mathcal{E}$), so that $\pi_*\mathcal{O}_{\mathbb{P}_Y(\mathcal{E})}(1) = \mathcal{E}$. \end{itemize} \end{notation} The following lemma is well-known and useful. \begin{lemma}\label{lem-cy-bir} Let $\pi:X\to Y$ be a birational morphism of two normal projective varieties. Then $Y$ is of Calabi-Yau type if $X$ is of Calabi-Yau type. \end{lemma} \begin{proof} Suppose the pair $(X, \Delta_X)$ is log Calabi-Yau. Let $\Delta_Y:=\pi_*\Delta_X$. Then $K_Y+\Delta_Y=\pi_*(K_X+\Delta_X)\sim_{\mathbb{Q}} 0$. Note that $K_X+\Delta_X=\pi^*(K_Y+\Delta_Y)$. By \cite[Lemma 3.38]{KM98}, $(Y,\Delta_Y)$ has singularities not worse than $(X,\Delta_X)$. So $(Y,\Delta_Y)$ is lc. \end{proof} \begin{lemma}\label{lem-cy-qe} Let $\pi:X\to Y$ be a quasi-\'etale finite surjective morphism of normal projective varieties. Then $X$ is of Calabi-Yau type if and only if so is $Y$. \end{lemma} \begin{proof} Suppose $(Y, \Delta_Y)$ is log Calabi-Yau. Let $\Delta_X=\pi^*\Delta_Y$. Since $\pi$ is quasi-\'etale, $K_X=\pi^*K_Y$ and hence $(X, \Delta_X)$ is log Calabi-Yau by \cite[Proposition 5.20]{KM98}.
Conversely, assume that $(X, \Delta_X)$ is log Calabi-Yau. Note that the Galois closure of $\pi$ is still quasi-\'etale by \cite[Theorem 3.7]{GKP16}. So we may assume that $\pi$ is the quotient map of $X$ by a finite group $G$. Note that $(X,g^*\Delta_X)$ is log Calabi-Yau for any $g\in G$. Let $\Delta=\frac{1}{|G|}\sum\limits_{g\in G} g^*\Delta_X$ and $\Delta_Y=\frac{1}{\deg \pi}\pi_*\Delta$. Then $\Delta=\pi^*\Delta_Y$. Note that $(X,\Delta)$ is log Calabi-Yau and hence $(Y,\Delta_Y)$ is log Calabi-Yau. \end{proof} \begin{proposition}\label{prop-kappa0} Let $f:X\to X$ be a polarized endomorphism of a projective variety. Let $D$ be an effective $\mathbb{Q}$-Cartier divisor on $X$ with $\kappa(X, D)=0$. Then $\operatorname{Supp} D$ is $f^{-1}$-periodic. \end{proposition} \begin{proof} By \cite[Proposition 3.7]{MZ19}, we have $$f^*f_*D\sim_{\mathbb{Q}} f_*f^*D=(\deg f) D.$$ Since $\kappa(X, D)=0$, we have $f^*f_*D=(\deg f) D$. In particular, $f^{-i}f^i(\operatorname{Supp} D)=\operatorname{Supp} D$ for all $i\ge 0$. By \cite[Lemma 8.1]{Men20}, $\operatorname{Supp} D$ is $f^{-1}$-periodic. \end{proof} A $\mathbb{Q}$-divisor $D$ is said to be {\it $\mathbb{Q}$-movable} if for any prime divisor $\Gamma$, one has $\Gamma\not\subseteq \operatorname{Supp} D'$ for some effective $\mathbb{Q}$-divisor $D'\sim_{\mathbb{Q}} D$. Let $f:X\to X$ be a polarized endomorphism of a normal projective variety. Denote by $T_f$ the finite union of $f^{-1}$-periodic prime divisors (cf.~\cite[Corollary 3.8]{MZ20}). Set $P_f:=-(K_X+T_f)$. \begin{proposition}\label{prop--kkappa0} Let $X$ be a $\mathbb{Q}$-Gorenstein normal projective variety admitting a polarized endomorphism $f$. Assume further $\kappa(X, -K_X)=0$. Then $\operatorname{Supp} R_f=T_f$, $-K_X\sim_{\mathbb{Q}} T_f$ and $(X, T_f)$ is lc. \end{proposition} \begin{proof} This follows from \cite[Theorem 6.2]{MZ19} and \cite[Theorem 1.3]{Zha13}. \end{proof} We generalize \cite[Theorem 1.5]{MZ19}.
\begin{theorem}\label{thm-move} Let $f:X\to X$ be a $q$-polarized endomorphism of a $\mathbb{Q}$-factorial normal projective variety. Then $-(K_X+T_f)$ is $\mathbb{Q}$-movable. \end{theorem} \begin{proof} After iteration, we may assume each component of $T_f$ is $f^{-1}$-invariant. Consider the log ramification divisor formula $$f^*(-(K_X+T_f))=-(K_X+T_f)+\Delta_f$$ where $\Delta_f=R_f-(q-1)T_f$ is effective and shares no component with $T_f$. Replacing $-K_X$ by $-(K_X+T_f)$ in the proof of \cite[Theorem 6.2]{MZ19}, we have that $-(K_X+T_f)$ is effective. Consider the $\sigma$-decomposition $$-(K_X+T_f)\sim_{\mathbb{Q}} P+N$$ where $\kappa(X,N)=0$ and $P$ is $\mathbb{Q}$-movable. By Proposition \ref{prop-kappa0}, we may assume each component of $N$ is $f^{-1}$-invariant after iteration. So $\operatorname{Supp} N\subseteq T_f$. Let $\Gamma$ be a prime divisor contained in $\operatorname{Supp} N$. By the triangle inequality, $$\sigma_{\Gamma}(f^*(-(K_X+T_f)))\le \sigma_{\Gamma}(-(K_X+T_f))+\sigma_{\Gamma}(\Delta_f)=\sigma_{\Gamma}(-(K_X+T_f)).$$ On the other hand, $$\sigma_{\Gamma}(f^*(-(K_X+T_f)))=q\sigma_{\Gamma}(-(K_X+T_f)).$$ So $\sigma_{\Gamma}(-(K_X+T_f))=0$ and hence $N=0$. Since $P$ is $\mathbb{Q}$-movable, we may further assume that $P$ has coefficients $\le 1$. \end{proof} We recall the following result of Yoshikawa \cite[Proposition 6.2]{Yos20}, which says that being of Fano type is preserved by (polarized) equivariant birational maps. \begin{proposition}\label{prop-bir-fanotype} Let $f:X\to X$ be a polarized endomorphism of a normal projective variety $X$. Let $\pi:X\dashrightarrow Y$ be an $f$-equivariant birational map. Then $X$ is of Fano type if and only if so is $Y$. \end{proposition} \begin{proof} We may simply take $\Delta=\Gamma=0$ in \cite[Proposition 6.2]{Yos20}. \end{proof} We recall the nice result of Yoshikawa \cite[Corollary 1.4]{Yos20}.
\begin{theorem}\label{thm-yos} Let $X$ be a rationally connected smooth projective variety admitting a polarized endomorphism. Then $X$ is of Fano type. \end{theorem} \section{Singularities of ramification divisor} In this section, we give some general results concerning Conjecture \ref{main-conj-lc}. First, we consider the coefficients of the ramification divisor. \begin{theorem}\label{thm-coefficient} Let $f:X\to X$ be a $q$-polarized endomorphism of a projective variety. Then after iteration, the coefficient $r_P$ of $R_f$ on each prime divisor $P$ satisfies $r_P\le q-1$, with equality if and only if $P$ is $f^{-1}$-periodic. \end{theorem} \begin{proof} Let $b$ be the number of prime divisors contained in the branch locus of $f$. Let $c$ be the maximal coefficient appearing in $f^*P$, where $P$ ranges over the prime divisors. Choose some $a>0$ such that $c^b<q^{a/2}$. Choose some $s>0$ such that $(\frac{q^t}{q^t-1})^{s/t}>c^b$ for any $1\le t<a$. Let $N=\max\{a,s\}$ and take $n>N$. Let $P$ be a prime divisor which is not $f^{-1}$-periodic. Let $Q$ be a prime divisor in $f^{-n}(P)$. Let $r$ be the coefficient of $(f^n)^*P$ on $Q$. We shall show that $r<q^n$. If $P$ is not $f$-periodic, then all irreducible components of $\{f^{-i}(P)\}_{i> 0}$ are pairwise distinct, and hence $r\le c^b<q^{a/2}<q^n$. In the following, we consider the case when $P$ is $f$-periodic with period $t\ge 1$. Let $r_1$ be the coefficient of $(f^t)^*P$ on $P$. Since all irreducible components of $\{f^{-i}(P)\}_{0<i\le t}$ are pairwise distinct, we have $r_1<c^b<q^{a/2}$. Moreover, we claim that $r_1<q^t$. Suppose to the contrary that we can write $(f^t)^*P=q^tP+E$ for some effective divisor $E$. By the projection formula, $$(\deg f^t)P=(f^t)_*(f^t)^*P=(f^t)_*(q^tP+E)=q^t(\deg f^t|_P)P+(f^t)_*E.$$ Note that $\deg f^t=q^{t\cdot \dim(X)}$ and $\deg f^t|_P=q^{t\cdot (\dim(X)-1)}$. Therefore, we have $E=0$ and hence $P$ is $f^{-1}$-periodic, a contradiction. So the claim is proved.
Let $e$ be the minimal non-negative integer such that $f^e(Q)=P$. Let $r_2$ be the coefficient of $(f^e)^*P$ on $Q$. Then $r=r_2\cdot r_1^{(n-e)/t}$. If $t\ge a$, then $r<c^b\cdot (q^{a/2})^{n/a}<q^n$. If $t<a$, then $r<c^b\cdot (q^t-1)^{n/t}<q^n$. \end{proof} \begin{remark} The proof of Theorem \ref{thm-coefficient} applies readily to general surjective endomorphisms, though the statement could be a bit wordy. We leave to the reader the pleasure of the coefficient estimate. \end{remark} We observe the behaviour of ramification divisors under equivariant birational morphisms. \begin{proposition}\label{prop-induction-birational} Let $\pi:X\to Y$ be a birational morphism of $\mathbb{Q}$-factorial normal projective varieties. Let $f:X\to X$ and $g:Y\to Y$ be two $q$-polarized endomorphisms such that $g\circ \pi=\pi\circ f$. Then $$K_X+\frac{R_{f}}{q-1}=\pi^*(K_Y+\frac{R_g}{q-1}).$$ In particular, $(X, \frac{R_f}{q-1})$ is lc if $(Y, \frac{R_g}{q-1})$ is lc. \end{proposition} \begin{proof} Since $K_X$ and $K_Y$ are $\mathbb{Q}$-Cartier, we may write $$K_X=\pi^*K_Y+E$$ where $E$ is supported on the exceptional divisor of $\pi$. By the ramification divisor formula, we have $$K_X=f^*K_X+R_f\text{ and } K_Y=g^*K_Y+R_g.$$ By the above three equations, we have $$\pi^*K_Y+E=f^*(\pi^*K_Y+E)+R_f=\pi^*g^*K_Y+f^*E+R_f=\pi^*(K_Y-R_g)+f^*E+R_f.$$ Note that $f^*E=qE$. So we have $$\pi^*\frac{R_g}{q-1}-\frac{R_f}{q-1}=E.$$ Then we have $$K_X+\frac{R_{f}}{q-1}=\pi^*(K_Y+\frac{R_g}{q-1})$$ as desired. \end{proof} The canonical bundle formula plays a key role in our reduction. \begin{proposition}\label{prop-cbf-rf} Let $\pi:X\to Y$ be an algebraic fibration where $X$ is an $n$-dimensional smooth projective variety and $Y$ is normal projective with $K_Y\equiv 0$. Let $f:X\to X$ and $g:Y\to Y$ be $q$-polarized endomorphisms such that $g\circ \pi=\pi\circ f$ and $f^*K_X\equiv qK_X$.
Then $(X, \frac{R_f}{q-1})$ is log Calabi-Yau after iteration if Conjecture \ref{main-conj-lc} holds true for $f^s|_F:F\to F$, where $F$ is an $f$-periodic (of period $s$) general fibre of $\pi$. \end{proposition} \begin{proof} By the ramification divisor formula, we have $$K_X+\frac{R_{f}}{q-1}\equiv 0$$ which holds true after arbitrary iteration. Let $F$ be a general $f$-invariant smooth fibre of $\pi$ after iteration (cf.~\cite[Theorem 5.1]{Fak03}). Note that $f|_F$ is $q$-polarized and $R_{f|_F}=R_f|_F$. By the assumption, $(F, \frac{R_{f|_F}}{q-1})$ is lc after iteration. So $(X, \frac{R_f}{q-1})$ is lc over the generic point of $Y$. By the lc canonical bundle formula \cite[Theorem 4.1.1]{Fuj04}, we have $$0\equiv K_X+\frac{R_{f}}{q-1}=\pi^*(K_Y+B+M)$$ where $B$ is effective and $M$ is pseudo-effective. Note that $K_Y\equiv 0$. So $B=0$ and hence $(X, \frac{R_f}{q-1})$ is lc. \end{proof} \section{Surface case} We first prove Conjecture \ref{main-conj-lc} for surfaces except $\mathbb{P}^2$. \begin{theorem}\label{thm-surf} Let $f:X\to X$ be a $q$-polarized endomorphism of a smooth projective surface with $\rho(X)>1$. Then $(X, \frac{R_f}{q-1})$ is lc after iteration. \end{theorem} \begin{proof} We apply \cite[Theorem 1.8]{MZ18}. If $K_X$ is pseudo-effective, then $R_f=0$ and we are done. In the following, we may assume $K_X$ is not pseudo-effective. After iteration, we may run an $f$-equivariant minimal model program of $X$ which ends up with a Fano contraction $\pi:X'\to Y$ with $Y$ being a curve of genus $\le 1$. Then $f^*|_{\operatorname{N}^1(X)}=q\operatorname{id}$ and hence $K_X+\frac{R_f}{q-1}\equiv 0$ by the ramification divisor formula. By Proposition \ref{prop-induction-birational}, we may assume $X=X'$. Note that $X$ is a ruled surface over $Y$. We finish the proof by considering the following two cases. \textbf{Case 1.} Suppose $Y$ is elliptic. Note that Conjecture \ref{main-conj-lc} holds for curves and $f^*|_{\operatorname{N}^1(X)}=q\operatorname{id}$.
So we are done by Proposition \ref{prop-cbf-rf}. \textbf{Case 2.} Suppose $Y=\mathbb{P}^1$. Then $X\cong F_d$ with $d\ge 0$. If $d=0$, then $R_f$ is simply a sum of fibres and horizontal sections with coefficients $\le q-1$ and hence this case is simple. We may assume now that $d>0$. Then $\pi$ admits a unique negative section curve $C$. Note that $f^{-1}(C)=C$. Then we have the decomposition by effective $\mathbb{Q}$-divisors $$\frac{R_f}{q-1}=C+V+H$$ where $V$ has support in the fibres of $\pi$ and each component of $H$ dominates $Y$. Clearly, $C$ is not a component of $H$, and each component of $V$ has coefficient $\le 1$. Fix a point $y\in Y$. Let $F:=\pi^{-1}(y)$. For the restriction $$f|_F:F\cong \mathbb{P}^1\to f(F)\cong \mathbb{P}^1,$$ we have $\frac{R_{f|_F}}{q-1}=\frac{R_f}{q-1}|_F=C|_F+H|_F$. Note that $\frac{R_f}{q-1}\cdot F=2$, each component of $\frac{R_{f|_F}}{q-1}$ has coefficient $\le 1$, and $C|_F$ is a reduced point. Consider the local intersection number at $x\in F$. If $x= C\cap F$, then $(H\cdot F)_x=0$ and hence $$((C+H)\cdot F)_x=1.$$ If $x\neq C\cap F$, then $$((C+H)\cdot F)_x\le (C+H)\cdot F-(C\cdot F)_{C\cap F}=\frac{R_f}{q-1}\cdot F-1=1.$$ In particular, we have $(X,F+C+H)$ is lc near $F$ by \cite[Corollary 5.57]{KM98}. Note that $\frac{R_f}{q-1} \le F+C+H$ near $F$. So $(X, \frac{R_f}{q-1})$ is lc. \end{proof} We give some partial results on the $\mathbb{P}^2$ case which are enough for the proof of Theorem \ref{main-thm-lcy}. \begin{theorem}\label{thm-p2-ticurve} Let $f:X\cong \mathbb{P}^2\to X$ be a $q$-polarized endomorphism. Suppose $T_f\neq\emptyset$. Then $(X, \frac{R_f}{q-1})$ is log Calabi-Yau after iteration. \end{theorem} \begin{proof} Write $\frac{R_f}{q-1}=T_f+\Delta_f$. Note that $\Delta_f$ and $T_f$ have no common components. By Theorem \ref{thm-coefficient}, we may assume that each coefficient of $\Delta_f$ is $<1$ after sufficient iteration of $f$.
Then the non-klt locus $\text{Nklt}(X, \frac{R_f}{q-1})=T_f\cup S$ where $S$ is a finite set of points outside $T_f$. We first show that the non-lc locus $\text{Nlc}(X, \frac{R_f}{q-1})\cap T_f=\emptyset$. In particular, if $S=\emptyset$, then $(X, \frac{R_f}{q-1})$ is lc. Suppose otherwise, and let $x\in \text{Nlc}(X, \frac{R_f}{q-1})\cap T_f$. Let $C$ be an irreducible component of $T_f$ containing $x$. By \cite{Gur03}, $C$ is a line. After iteration, we may assume $f^{-1}(C)=C$. Consider the log ramification divisor formula, $$(K_X+C)=f^*(K_X+C)+R_f-(q-1)C.$$ Applying adjunction along $C$, we have $$K_C=(f|_C)^*K_C+(R_f-(q-1)C)|_C=(f|_C)^*K_C+R_{f|_C}.$$ So $(R_f-(q-1)C)|_C=R_{f|_C}$. Note that $\frac{R_{f|_C}}{q-1}$ has coefficients $\le 1$. Then $((\frac{R_f}{q-1}-C)\cdot C)_x\le 1$. By inversion of adjunction (cf.~\cite[Corollary 5.57]{KM98}), $(X,\frac{R_f}{q-1})$ is lc near $x$, a contradiction. Assume now that $S\neq \emptyset$. Then $\text{Nklt}(X, \frac{R_f}{q-1})$ is not connected. By \cite[Theorem 1.2]{HH19}, $(X, \frac{R_f}{q-1})$ is plt, a contradiction. \end{proof} \begin{corollary}\label{cor-p2-ticurve} Let $f:X\cong \mathbb{P}^2\to X$ be a $q$-polarized endomorphism. Suppose there is a birational morphism $\pi:W\to X$ from a normal projective surface $W$ such that $\pi\circ h=f\circ \pi$ for some surjective endomorphism $h:W\to W$. Suppose further that either the exceptional locus of $\pi$ is reducible or $W$ has more than one negative curve. Then $T_f\neq\emptyset$ and hence $(X, \frac{R_f}{q-1})$ is log Calabi-Yau after iteration. \end{corollary} \begin{proof} By Proposition \ref{prop-bir-fanotype}, $W$ is of Fano type and hence $\mathbb{Q}$-factorial (cf.~\cite[Proposition 4.11]{KM98}). By \cite[Lemma 4.3]{MZ19}, the negative curves on $W$ are $h^{-1}$-periodic. Note that the existence of a negative curve on $W$ not contracted by $\pi$ forces $T_f\neq \emptyset$. So we may always assume that the exceptional locus of $\pi$ is reducible.
Let $E=\sum\limits_{i=1}^n E_i$ be the reduced $\pi$-exceptional divisor with $n>1$. Write $K_W=\pi^*K_X+\sum\limits_{i=1}^n a_i E_i$ where $a_i>0$ since $X$ is smooth. By the negativity lemma, $K_W$ is not $\pi$-nef. So we can run an $h$-equivariant (after iteration) relative minimal model program of $W$ over $X$ which finally ends up with $X$. Denote by $\tau:W_1\to X$ the last step and $\sigma:W\to W_1$ the composition of the previous steps. Note that $W_1$ is of Fano type and $\rho(W_1)=2$. Denote by $p:W_1\to Y$ the contraction induced by the extremal ray different from that of $\tau$. If $p$ is birational, then the exceptional curve of $p$ is not contracted by $\tau$ and hence $T_f\neq\emptyset$. So we may assume that $p:W_1\to Y\cong \mathbb{P}^1$ is a $\mathbb{P}^1$-fibration. Suppose that $E_1$ is contracted by $\sigma$. Then $p(\sigma(E_1))$ is an $(h|_Y)^{-1}$-invariant point on $Y$ (cf.~\cite[Lemma 7.5]{CMZ20}). Therefore, $p^{-1}(p(\sigma(E_1)))$ is an $(h|_{W_1})^{-1}$-invariant curve which is not contracted by $\tau$ and hence $T_f\neq\emptyset$. Finally, we apply Theorem \ref{thm-p2-ticurve}. \end{proof} \section{Anti-canonical divisor}\label{sec-big} Throughout this section, we use the following setting: \begin{blank}\label{set-rel2} Let $f:X\to X$ be a $q$-polarized endomorphism of a smooth projective threefold $X$ admitting an ($f$-equivariant) Fano contraction $\pi:X\to Y$ with $Y$ being an elliptic curve and general fibres being $\mathbb{P}^2$. In this setting, $\rho(X)=2$ and we may further assume $f^*|_{\operatorname{N}^1(X)}=q \operatorname{id}$. Denote by $$\phi:X\dashrightarrow Z$$ the Chow reduction of the Iitaka fibration of $-K_X$, which is $f$-equivariant by \cite[Theorem 7.8]{MZ19}. \end{blank} \begin{proposition}\label{prop-semiample} Suppose there is another fibration $\tau:X\to V$ different from $\pi$, with $0<\dim(V)<3$. Then $-K_X$ is semi-ample. In particular, $X$ is of Calabi-Yau type.
\end{proposition} \begin{proof} Note that $\rho(V)=1$ because $\rho(X)=2$. Since $\pi$ and $\tau$ are two different fibrations onto lower dimensional varieties of positive dimension, the two extremal rays of $\operatorname{Nef}(X)$ are generated by the pullbacks of ample divisors on $Y$ and $V$, which are not big. Hence, $\operatorname{Nef}(X)=\operatorname{PE}^1(X)$. Note that $-K_X$ is not ample but effective and $\pi$-ample. So $-K_X\equiv \tau^*H$ for some ample $\mathbb{Q}$-divisor $H$ on $V$. By Bertini's theorem, we may assume $(X, \tau^*H)$ is lc (even terminal) after a suitable choice of $H$. By the abundance, $-K_X\sim_{\mathbb{Q}} \tau^*H$ and we are done. \end{proof} \begin{theorem}\label{thm-kappa1} Assume further $\kappa(X, -K_X)<3$, i.e., $-K_X$ is not big. Then $X$ is of Calabi-Yau type. \end{theorem} \begin{proof} By Proposition \ref{prop--kkappa0}, we may assume $\kappa(X, -K_X)>0$. By Proposition \ref{prop-semiample}, we may assume $\phi$ is not well-defined. Let $W$ be the normalization of the graph of $\phi$. Denote by $h:W\to W$ the lifting of $f$ and $\sigma:W\to X$ the induced birational morphism. Since $X$ is smooth, the exceptional locus of $\sigma$ is of pure codimension one in $W$ (cf.~\cite[Corollary 2.63]{KM98}). We denote it by $E:=\sum\limits_{i=1}^n E_i$ with $n>0$. After iteration, we may assume $E_i$ is $h^{-1}$-invariant. Note that the elliptic curve $Y$ admits no $(f|_Y)^{-1}$-periodic points, where $f|_Y$ denotes the endomorphism of $Y$ induced by $f$. So $\pi(\sigma(E_i))=Y$. Let $W_y:=(\pi\circ \sigma)^{-1}(y)$ be an $h$-invariant general fibre after iteration (cf.~\cite[Theorem 5.1]{Fak03}). Set $X_y:=\pi^{-1}(y)\cong \mathbb{P}^2$. If $E$ is reducible, i.e., $n>1$, then $E\cap W_y$ is also reducible. By Corollary \ref{cor-p2-ticurve}, $(X_y, \frac{R_{f|_{X_y}}}{q-1})$ is log Calabi-Yau after iteration of $f$. By Proposition \ref{prop-cbf-rf}, $(X, \frac{R_f}{q-1})$ is log Calabi-Yau after iteration of $f$. So we may assume $E$ is irreducible.
Note that $E$ is $\mathbb{Q}$-Cartier (cf.~\cite[Lemma 2.62]{KM98}). Then $W$ is $\mathbb{Q}$-factorial. By \cite[Theorem 1.3]{Zha13}, $(W,E)$ is lc. Since $W\backslash E$ is smooth, $W$ is klt. Write $K_W=\sigma^*K_X+aE$ with $a>0$ since $X$ is smooth. Then $$0\le \kappa(W, -K_W)\le \kappa(W, -\sigma^*K_X)=\kappa(X, -K_X)$$ where the first inequality follows from Theorem \ref{thm-move} and the last equality follows from \cite[Theorem 5.13]{Uen75}. Let $\phi_W:W\dashrightarrow V$ be the Chow reduction of the Iitaka fibration of $-K_W$. So $\dim(V)<3$. Suppose $\phi_W$ is not well-defined. Let $\widetilde{W}$ be the normalization of the graph of $\phi_W$ and $\widetilde{\sigma}:\widetilde{W}\to W$ the induced birational morphism. Since $W$ is $\mathbb{Q}$-factorial, the exceptional locus of $\widetilde{\sigma}$ is of pure codimension one in $\widetilde{W}$ (cf.~\cite[Corollary 2.63]{KM98}) and each irreducible component dominates $Y$ by a similar argument. In particular, the restricted birational morphism $\widetilde{W}_y\to X_y$ has reducible exceptional locus. Then we are done by Corollary \ref{cor-p2-ticurve} and Proposition \ref{prop-cbf-rf}. So we may assume that $\phi_W$ is well-defined, $\rho(W_y)=2$ and $W_y$ has only one negative curve $C=E\cap W_y$, i.e., the exceptional curve of $W_y\to X_y$. We give more information on $W_y$. By Proposition \ref{prop-bir-fanotype} and \cite[Proposition 4.11]{KM98}, $W_y$ is of Fano type and $\mathbb{Q}$-factorial. Since $W_y$ has only one negative curve (which is also $K_{W_y}$-negative since $X$ is smooth), the other contraction of $W_y$ is a $\mathbb{P}^1$-fibration over $\mathbb{P}^1$. In particular, $-K_{W_y}$ is ample. If $\dim(V)=0$, then $(W,T_h)$ is log Calabi-Yau by Proposition \ref{prop--kkappa0}. If $\dim(V)=1$, then $\phi_W$ is just the Iitaka fibration of $-K_W$ and hence $-K_W$ is semi-ample. In both cases, $W$ and hence $X$ are of Calabi-Yau type by Lemma \ref{lem-cy-bir}. We are left with $\dim(V)=2$.
Note that $-K_W|_{W_y}=-K_{W_y}$ is ample. So the restriction $\phi_W|_{W_y}:W_y\to V$ is surjective. We first assume that the induced map $X_y\dashrightarrow V$ is not well-defined. By the rigidity lemma (cf.~\cite[Lemma 1.15]{Deb01}), $\phi_W|_{W_y}$ contracts no curves. Then $\phi_W|_{W_y}:W_y\to V$ is finite surjective. By \cite[Lemma 5.16 and Proposition 5.20]{KM98}, $V$ is klt and $\mathbb{Q}$-factorial. Let $C_V:=\phi_W|_{W_y}(C)$. By the projection formula, $C_V$ is also a negative curve. Then, we have an $h|_V$-equivariant divisorial contraction $V\to V_1$ which contracts $C_V$. By the rigidity lemma again, the induced rational map $X_y\dashrightarrow V_1$ is then well-defined. If the map $X_y\dashrightarrow V$ is already well-defined, then we simply identify $V_1$ with $V$. Consider the induced $f$-equivariant dominant rational map $\phi':X\dashrightarrow V_1$. Note that the indeterminacy locus of $\phi'$ is $f^{-1}$-invariant and does not dominate $Y$ since $\phi'$ is well-defined near $X_y$. By \cite[Lemma 7.5]{CMZ20}, $\phi'$ is well-defined and we are done by Proposition \ref{prop-semiample}. \end{proof} \section{Proof of Theorem \ref{main-thm-lcy}} We refer to \cite{Ati57} for well-known facts on vector bundles over elliptic curves. By $\mathcal{F}_n$ we mean the unique indecomposable rank $n$ vector bundle with non-trivial global sections over an elliptic curve. The following lemmas will be used later. \begin{lemma}\label{lem-f2} Let $S\cong \mathbb{P}_Y(\mathcal{F}_2)$ where $Y$ is an elliptic curve and $\mathcal{F}_2$ is the unique indecomposable rank 2 vector bundle with non-trivial global sections. Then $S$ admits no polarized endomorphism. \end{lemma} \begin{proof} Suppose to the contrary that there is a $q$-polarized endomorphism $f:S\to S$. We may assume $f^*|_{\operatorname{N}^1(S)}=q\operatorname{id}$. Note that $\pi:S\to Y$ has a unique section $C$ with $C^2=0$ and $-K_S\sim 2C$.
Then $\mathcal{O}_S(1)\cong \mathcal{O}_S(C)$ and $\pi_*\mathcal{O}_S(n)\cong \operatorname{Sym}^n(\mathcal{F}_2)\cong \mathcal{F}_{n+1}$ for $n>0$ (cf.~\cite[Theorem 9]{Ati57}). In particular, $h^0(S,nC)=h^0(Y,\mathcal{F}_{n+1})=1$ and $\kappa(S,-K_S)=0$. By Proposition \ref{prop--kkappa0}, we have $\operatorname{Supp} R_f=T_f$ and $-K_S\sim_{\mathbb{Q}} T_f$. Note that $T_f$ is reduced. So $T_f\neq 2C$ while $T_f\sim_{\mathbb{Q}} 2C$, and hence $\kappa(S, C)>0$, a contradiction. \end{proof} \begin{proof}[Proof of Theorem \ref{main-thm-lcy}] {\bf Step 1.} In this step, we reduce our situation to the case when $T_f=\emptyset$, $f^*|_{\operatorname{N}^1(X)}=q\operatorname{id}$ and $X\cong \mathbb{P}_Y(\mathcal{E})$ for some rank 3 vector bundle $\mathcal{E}$ on an elliptic curve $Y$. By \cite[Theorem 1.8]{MZ18}, we run the $f$-equivariant minimal model program (after iteration) and let $Y$ be the end product: a $Q$-abelian variety. Denote by $\pi:X\to Y$ the induced composition. If $\dim(Y)=3$, then $X=Y$ is $Q$-abelian and hence $(X,\frac{R_f}{q-1})=(X,0)$ is log Calabi-Yau. If $\dim(Y)=0$, then $X$ is of Fano type by Theorem \ref{thm-yos}. Suppose $\dim(Y)=2$. By Theorem \ref{thm-move}, we have $-(K_X+T_f)\sim_{\mathbb{Q}} P$ for some effective $\mathbb{Q}$-divisor $P$ with coefficients $\le 1$ and containing no component of $T_f$. Let $F$ be a general fibre of $\pi$. Since $F$ is general, we may assume that $\operatorname{Supp} P \cup T_f$ intersects $F$ transversally. In particular, $(F, (T_f+P)|_F)$ is lc and $(X,T_f+P)$ is lc over the generic point of $Y$. By the canonical bundle formula \cite[Theorem 4.1.1]{Fuj04}, we have $$0\equiv K_X+T_f+P=\pi^*(K_Y+B+M).$$ Note that $B$ is effective and $M$ is pseudo-effective and $K_Y\equiv 0$. So $B=0$ and hence $(X, T_f+P)$ is lc and $X$ is of Calabi-Yau type. Now we may assume that $\dim(Y)=1$. Then $f^*|_{\operatorname{N}^1(X)}=q\operatorname{id}$ after iteration by \cite[Theorem 1.8]{MZ18}.
By Theorem \ref{thm-surf} and Proposition \ref{prop-cbf-rf}, the only remaining case is when the general fibre of $\pi$ is $\mathbb{P}^2$. In particular, $\rho(X)=2$ and $\pi$ is a Fano contraction. By \cite[Theorem 3.5]{Mor82} and since the Brauer group of the elliptic curve $Y$ is trivial, $X\cong \mathbb{P}_Y(\mathcal{E})$ for some rank 3 vector bundle $\mathcal{E}$ on $Y$. Suppose $f^{-1}(P)=P$ for some prime divisor $P$. Then $P$ dominates $Y$ since $f|_Y$ is \'etale and polarized. Let $F$ be an $f$-invariant (after iteration) fibre of $\pi$. Then $P\cap F\subseteq T_{f|_F}$. By Theorem \ref{thm-p2-ticurve} and Proposition \ref{prop-cbf-rf}, $(X, \frac{R_f}{q-1})$ is log Calabi-Yau. So we may assume $T_f=\emptyset$. {\bf Step 2.} In this step, we reduce our situation to the case when $\mathcal{E}\cong \mathcal{F}_2\oplus \mathcal{L}$ for some line bundle $\mathcal{L}$. Note that $f|_Y$ is polarized and hence has Zariski dense periodic points. So after iteration and choosing some fixed point as the identity element of $Y$, we may assume $f|_Y$ is an isogeny. In particular, $f|_Y$ commutes with any multiplication map. By Lemma \ref{lem-cy-qe}, we can always replace $X$ by the base change along the multiplication-by-3 map on $Y$. So we may assume $3\mid \deg \mathcal{E}$. By Theorem \ref{thm-kappa1}, we may further assume $-K_X$ is big. Consider the following two cases. Suppose $\mathcal{E}\cong \mathcal{L}_1\oplus \mathcal{L}_2\oplus \mathcal{L}_3$ where $\mathcal{L}_1\cong \mathcal{O}_Y$. Let $D_1$ be the divisor determined by the projection $\mathcal{E}\to \mathcal{L}_2\oplus \mathcal{L}_3$ and define $D_2$ and $D_3$ similarly.
Consider the exact sequence $$0\to \mathcal{O}_X(1)\otimes\mathcal{O}_X(-D_1)\to \mathcal{O}_X(1)\to \mathcal{O}_X(1)\otimes \mathcal{O}_{D_1}\to 0.$$ Taking $\pi_*$, we have $$0\to \pi_*(\mathcal{O}_X(1)\otimes\mathcal{O}_X(-D_1))\to \mathcal{E}\to \pi_*\mathcal{O}_{D_1}(1)=\mathcal{L}_2\oplus \mathcal{L}_3\to 0$$ with $0$ on the right because $R^1\pi_*(\mathcal{O}_X(1)\otimes\mathcal{O}_X(-D_1))=0$ since all the fibres are $\mathbb{P}^2$. By the projection formula, we have $\mathcal{O}_X(1)\cong \mathcal{O}_X(D_1)$ and similarly, $D_1\sim D_2+\pi^*c_1(\mathcal{L}_2)$ and $D_1\sim D_3+\pi^*c_1(\mathcal{L}_3)$. So by the relative Euler sequence, we have $$-K_X\sim 3D_1-\pi^*(c_1(\mathcal{E}))=D_1+D_2+D_3.$$ It is easy to see that $D_1+D_2+D_3$ has simple normal crossings. In particular, $(X,D_1+D_2+D_3)$ is a log Calabi-Yau pair. Suppose $\mathcal{E}$ is indecomposable. Since $3\mid \deg \mathcal{E}$, we have $\mathcal{E}\cong \mathcal{F}_3\otimes \mathcal{L}$ for some line bundle $\mathcal{L}$ (cf.~\cite[Theorem 5]{Ati57}). Since $\mathbb{P}_Y(\mathcal{F}_3\otimes \mathcal{L})\cong \mathbb{P}_Y(\mathcal{F}_3)$, we may assume $\mathcal{E}\cong \mathcal{F}_3$. However, $-K_X$ is then nef but not big, a contradiction with our assumption. Now we may assume $\mathcal{E}\cong \mathcal{F}\oplus \mathcal{L}$ where $\mathcal{F}$ is an indecomposable rank 2 vector bundle and $\mathcal{L}$ is a line bundle. Note that we may also assume that $\mathcal{F}$ does not split after pulling back along the multiplication-by-2 map of $Y$. So we may assume that $\mathcal{F}\cong \mathcal{F}_2$. {\bf Step 3.} In this step, we show that $\deg \mathcal{L}<0$. Note that there is a non-splitting exact sequence $$0\to \mathcal{O}_Y\to \mathcal{E}\to \mathcal{O}_Y\oplus \mathcal{L}\to 0$$ which is induced by the non-splitting exact sequence of $\mathcal{F}_2$. Let $D$ be the divisor determined by $\mathcal{E}\to \mathcal{O}_Y\oplus \mathcal{L}$. Then $\mathcal{O}_X(1)\cong \mathcal{O}_X(D)$.
Let $D'$ be the divisor determined by the natural projection $\mathcal{E}\to \mathcal{F}_2$. Then $D\sim D'+\pi^*c_1(\mathcal{L})$ and we have $$-K_X\sim 3D-\pi^*c_1(\mathcal{L})\sim 2D+D'.$$ Note that $D|_D\sim C$ where $C$ is the section curve determined by $\mathcal{O}_Y\oplus \mathcal{L}\to \mathcal{L}$. Then we see that $-K_X$ is big if and only if $\deg \mathcal{L}\neq 0$. By Lemma \ref{lem-f2}, $D'$ is not $f$-periodic. Note that $f^*D'\equiv qD'$ and $f^{-1}(D')\neq D'$. So $D'|_{D'}$ is pseudo-effective and hence nef on $D'$. Then $D'$ is nef. Note that $-K_X$ is not nef because otherwise $X$ would be of Fano type. So $D$ is not nef and hence $\deg \mathcal{L}<0$. {\bf Step 4.} In this step, we show that $\kappa(X,D)=0$. Note that $$\pi_*(\mathcal{O}_X(n))\cong \operatorname{Sym}^n(\mathcal{E})\cong \bigoplus_{i=0}^n \mathcal{F}_{i+1}\otimes \mathcal{L}^{\otimes (n-i)}$$ for $n\ge 0$. Then $$h^0(X, nD)=\sum_{i=0}^n h^0(Y,\operatorname{Sym}^i(\mathcal{F}_2)\otimes \mathcal{L}^{\otimes (n-i)})=h^0(Y,\mathcal{F}_{n+1})=1$$ by noticing that $h^0(Y,\mathcal{F}_{i+1}\otimes \mathcal{L}^{\otimes (n-i)})=0$ for any $i<n$. So $\kappa(X,D)=0$. {\bf End of the proof.} By Proposition \ref{prop-kappa0}, $D$ is $f^{-1}$-periodic. However, this contradicts the reduction we obtained in Step 1. \end{proof} \begin{remark} When $X\cong \mathbb{P}_Y(\mathcal{F}_2\oplus\mathcal{L})$ for some line bundle $\mathcal{L}$ of negative degree, we do not know whether $X$ is of Calabi-Yau type or not. What we proved above is that either $X$ is of Calabi-Yau type or $X$ admits no polarized endomorphism. Therefore, it would be very interesting to study the following question. \end{remark} \begin{question} Let $X\cong \mathbb{P}_{Y}(\mathcal{E})$ be a projective bundle over an elliptic curve $Y$. \begin{enumerate} \item When will $X$ be of Calabi-Yau type? \item When will $X$ admit a polarized endomorphism? \end{enumerate} \end{question}
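The vanishing $h^0(Y,\mathcal{F}_{i+1}\otimes \mathcal{L}^{\otimes (n-i)})=0$ invoked in Step 4 above can be spelled out. The following is a routine slope computation, included here for the reader's convenience; it uses only the fact (from Atiyah's classification) that each $\mathcal{F}_{i+1}$ is semistable of degree $0$, together with $\deg\mathcal{L}<0$ from Step 3:

```latex
% For i < n and deg L < 0, the twist is semistable of negative slope:
\mu\bigl(\mathcal{F}_{i+1}\otimes\mathcal{L}^{\otimes(n-i)}\bigr)
  =\mu(\mathcal{F}_{i+1})+(n-i)\deg\mathcal{L}
  =(n-i)\deg\mathcal{L}<0,
% and a semistable bundle of negative slope has no nonzero section, since a
% section would give an injection from O_Y, which has slope 0; hence
H^0\bigl(Y,\mathcal{F}_{i+1}\otimes\mathcal{L}^{\otimes(n-i)}\bigr)=0.
```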
https://arxiv.org/abs/2204.11244
Log Calabi-Yau structure of projective threefolds admitting polarized endomorphisms
Let $X$ be a normal projective variety admitting a polarized endomorphism $f$, i.e., $f^*H\sim qH$ for some ample divisor $H$ and integer $q>1$. It was conjectured by Broustet and Gongyo that $X$ is of Calabi-Yau type, i.e., $(X,\Delta)$ is lc for some effective $\mathbb{Q}$-divisor $\Delta$ such that $K_X+\Delta\sim_{\mathbb{Q}} 0$. In this paper, we establish a general guideline based on the equivariant minimal model program and the canonical bundle formula. In this way, we prove the conjecture when $X$ is a smooth projective threefold.
https://arxiv.org/abs/astro-ph/9307039
The Expected Dipole in the Distribution of Cosmological Gamma-Ray Bursts
If gamma-ray bursts originate at cosmological distances then their angular distribution should exhibit a dipole in the direction of the solar motion relative to the cosmic microwave background. This is due to the combined effects of aberration, an anisotropic shift of the burst event rate, and an angular variation in the distance out to which bursts can be detected. We derive the amplitude of the expected dipole for an open and flat cosmological model, and for various possible evolution rates of the burst population. Although our dimensionless velocity with respect to the CMB rest frame is of order $10^{-3}$, the dipole amplitude is of order $10^{-2}$, an order of magnitude larger. The results depend very weakly on the value of $\Omega_0$, but are sensitive to the spectral index of the bursts' photon spectra, and to the rate of evolution of the burst population. There is no dependence on the value of the Hubble constant. A clear detection of the dipole will require a larger sample of bursts than currently available (of order $10^4$). Future statistical analyses of the hypothesis that bursts originate at cosmological distances should take this effect into account, rather than assuming a perfectly isotropic distribution, for obtaining the correct statistical significance of their results.
\section{INTRODUCTION} If the observed $\gamma$-ray bursts originate at cosmological distances (e.g., Paczy{\'n}ski 1991) then the distance scale of their distribution must correspond to a redshift of order unity for the burst intensity distribution to be consistent with observations (e.g., Paczy{\'n}ski 1991; Piran 1992; Mao and Paczy{\'n}ski 1992; Fenimore {\it et al. } 1992; Wickramasinghe {\it et al. } 1993). Assuming that the universe is homogeneous and free of bulk flows on such scales, the angular distribution of bursts should appear to be perfectly isotropic to an observer at rest with respect to the cosmic microwave background (CMB) frame, up to statistical fluctuations. However, the solar system is moving relative to the CMB frame at a speed of $370\!\pm\!10$ km s$^{-1}$ in the direction $(l,b)\!=\!(264.7\deg,48.2\deg)$ (Peebles 1993). Consequently, the bursts' distribution should exhibit a dipole component in that direction due to several effects: aberration, anisotropic Doppler shift of the event rate, and the angular variation in the distance out to which bursts can be detected. We derive the amplitude of the expected dipole for a Friedmann cosmological model, examine its dependence on various evolution rates of the burst population, and discuss its dependence on the luminosity function. For sets of parameters which are consistent with the observed $\left<V/V_{max}\right>\,\,$ parameter we obtain a dipole amplitude of $\sim 10^{-2}$, an order of magnitude larger than that of the CMB temperature dipole. In \S{2} we derive the three effects which contribute to the anisotropy, and evaluate the amplitude of the dipole in \S{3}. In \S{4} we summarize the results and discuss their implications.
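Before deriving these effects in detail, their size can be previewed numerically. The sketch below is illustrative only (the value $c\simeq 2.998\times10^{5}$ km s$^{-1}$ is an assumption of the sketch); it evaluates the special-relativistic factor $f(\theta)=\sqrt{1-\beta^2}/(1-\beta\cos\theta)$ used in \S{2}, and the forward/backward count-rate contrast produced by aberration ($f^2$) combined with the Doppler shift of the event rate (one further power of $f$):

```python
import math

beta = 370.0 / 2.998e5  # dimensionless solar velocity; c ~ 2.998e5 km/s assumed

def f(theta, beta=beta):
    """Special-relativistic factor sqrt(1 - b^2) / (1 - b*cos(theta))."""
    return math.sqrt(1.0 - beta ** 2) / (1.0 - beta * math.cos(theta))

# To first order, f(theta) ~ 1 + beta*cos(theta):
assert abs(f(0.0) - (1.0 + beta)) < 2e-6
assert abs(f(math.pi) - (1.0 - beta)) < 2e-6

# Aberration contributes f^2 to the source counts and the Doppler-shifted
# event rate one further power of f, so the forward/backward contrast is
contrast = (f(0.0) / f(math.pi)) ** 3
print(contrast - 1.0)  # ~ 6*beta ~ 7.4e-3, i.e. a dipole term 3*beta*cos(theta)
```

This already accounts for the "3" in the dipole coefficient derived in \S{2}; the remaining contribution comes from the angular variation of the detection distance.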
\section{THE ANISOTROPY EFFECTS} Denoting the dimensionless velocity of the solar system relative to the CMB frame by $\beta\!\equiv\!{v/c}$ and using the Lorentz transformation it can be shown that photon directions are related by \begin{equation} \cos\theta = {\cos\tilde{\theta} + \beta \over 1 + \beta\cos\tilde{\theta} } \,\,\,\,\,\,\,\, , \end{equation} where $\tilde{\theta}$ and $\theta$ are the angles between our direction of motion and the direction to a source in the CMB rest frame and in our frame, respectively. Therefore, assuming an isotropic distribution of bursting objects in the CMB frame, the angular distribution of sources as observed by us should be proportional to \begin{equation} {dN_s\over d\Omega} \, \propto \, f^{2}(\theta) \:\: , \end{equation} where \begin{equation} f(\theta) \equiv { \sqrt{1-\beta^2} \over 1 - \beta\cos\theta } \, \simeq 1 + \delta(\theta) \,\,\,\,\, , \end{equation} $\delta(\theta)\equiv \beta\cos\theta$, and the rightmost approximation is due to $\beta\!\ll\!1$. This is the effect of aberration, which makes the sources ``bunch up'' in the forward direction. In addition, the burst frequency is Doppler shifted by the factor $f(\theta)$, implying the highest event rate in the direction of motion. These effects are independent of the cosmological model and of possible evolution of the burst population. The part of a burst's intrinsic luminosity which is shifted into the detector bandwidth depends on the effective Doppler shift. Thus, the number of detectable events also varies with direction. In order to calculate this effect let us assume the following: 1) a Friedmann cosmological model with vanishing cosmological constant. 2) all the bursts have identical power-law spectra for the photon number in the comoving frame, $n_{\gamma}(E)\!\propto\! E^{-S}$. 3) the burst detector is sensitive to photons in a fixed energy bandpass, $E_1\!\le\! E\!\le\! E_2$, and is triggered by a peak flux higher than a given detection threshold $F_{min}$ (the peak flux is proportional to the peak photon count rate). We shall also assume that the bursts are ``standard candles'', and discuss broader luminosity functions in \S{3}. Let us define the luminosity of a source by \begin{equation} L \equiv \int_{E_1}^{E_2}{E\, n_{\gamma}(E)\, dE} \,\,\,\,\, , \end{equation} where $n_\gamma(E)$ is normalized accordingly. For a detector at rest relative to the CMB frame the burst's luminosity which is shifted into the detector bandwidth is $(1+z)^{2-S}L$, where $z$ is the cosmological redshift of the source in the CMB frame. Dividing by $(1+z)^2$ for the time dilation in the reception of photons and for the loss of energy per photon, the peak flux at the observer is \begin{equation} F(z) = {L\over 4\pi} {(1+z)^{-S} \over r^{2}(z) } \,\,\,\,\,\,\, , \end{equation} where $r$ is the proper motion distance to the source and is given by \begin{equation} r(z) = {2c\over H_0} \, {\Omega_{0}z + (2-\Omega_0)(1-\sqrt{1+\Omega_{0}z}) \over \Omega_{0}^{2} (1+z)} \:\:\:\:\:. \end{equation} Thus, in the CMB rest frame bursts with luminosity $L$ can be detected out to a redshift ${\tilde{z}}_{max}$ which is defined by $F({\tilde{z}}_{max})\!=\!F_{min}$, where $F_{min}$ is the detection threshold. In our moving frame, the effective Doppler shift of photons from a source which is located at a cosmological redshift $z$ is $(1+z)/f(\theta)$. Therefore, a detector in our frame can detect bursts out to a cosmological redshift $z_{max}$ which is defined by \begin{equation} { (1+{\tilde{z}}_{max})^{-S} \over r^{2}({\tilde{z}}_{max}) } \, = \, { (1+z_{max})^{-S}\,\left[f(\theta)\right]^{S} \over r^{2}(z_{max}) } \,\,\,\,\, . \end{equation} Obviously, $z_{max}\!=\!z_{max}({\tilde{z}}_{max},\theta,S)$. Substituting $z_{max}\!\equiv\!
{\tilde{z}}_{max} + \Delta z$, replacing $f^{S}$ by $1\!+\!S\delta(\theta)$ (see Eq. [3]), and keeping terms up to first order ($\delta\!\ll\!1$ and consequently $\Delta{z}\!\ll\!{\tilde{z}}_{max}$), we obtain \begin{equation} \Delta z = \left( {{2\over r({\tilde{z}}_{max})}{\left. {dr\over dz} \right|_{{\tilde{z}}_{max}} \!\!\!\! + \, {S\over 1+{\tilde{z}}_{max}} } \, \right) S\delta(\theta) \,\,\,\,\,\,\,\,\, , \end{equation} where $r(z)$ is given by equation (6). Thus, the number of events that we should observe in a solid angle $d\Omega$ is \begin{equation} {dN(\theta) \over d\Omega} \propto \left[ 1+\delta(\theta)\right]^3 \!\!\!\!\! \int\limits_{0}^{{\tilde{z}}_{max}+\Delta{z(\theta)}} {\!\!\!\!\! {n_s(z) \over (1+z)} \, r^{2}(z) \, {dr\over dz} \, dz } \, \,\,\,\,\,\, , \end{equation} where $n_s(z)$ is the number of sources per comoving volume at cosmological redshift $z$, and the factor $[1\!+\!\delta]^3$ is due to the effects of aberration and the modified event rate that we discussed earlier. The burst population may evolve from $z\!\sim\!1$ to the present epoch. Let us assume that the comoving source number density is given by $n_s\!\propto\!(1+z)^{-\alpha}$, so $\alpha\!=\!0$ corresponds to a constant rate of bursts per unit comoving volume per unit comoving time, and positive values of $\alpha$ describe an increase in the source population, or equivalently in the intrinsic event rate, with time. Thus, the integral in equation (9) can be denoted by $T(z_{max},\Omega_{0},\alpha)$, where \begin{equation} T(z,\Omega_{0},\alpha)\! \equiv \! \int\limits_{0}^{z} {\! {r^2 \over (1+z)^{1+\alpha}}\,{dr\over dz}\,dz} \,\,\,\,\,\,\, . \end{equation} This integral can be evaluated analytically for certain values of $\alpha$, e.g., for $\alpha\!\!=\!\!0$ ($n_s\!=\!{\rm constant}$) we obtain \begin{eqnarray} T(z,\Omega_{0}\!<\!1,0) &=& {(2-\Omega_{0})\left[x-8x^{3}(1-\Omega_{0})\right] \sqrt{1+4x^{2}(1-\Omega_{0})}\over 64 (\Omega_{0}-1)^2} - {x^4\over 2} \nonumber \\ & & \nonumber \\ & & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, +\, {\Omega_{0} x^3\over 6(\Omega_{0}-1)} + \, {(\Omega_{0}-2){\rm arcsinh}(2x\sqrt{1-\Omega_{0}}) \over 128 (1-\Omega_{0})^{5/2}} \,\,\, ; \nonumber \\ & & \nonumber \\ T(z,\Omega_{0}\!=\!1,0) &=& {x^3\over 3} - {x^4\over 2} + {x^5\over 5} \,\,\,\,\, , \end{eqnarray} where $x\!\equiv\!r(z)H_{0}/2c$, and we have ignored the coefficient $(2c/H_0)^3$ since equation (9) is a proportionality relation. We may replace $T({\tilde{z}}_{max}\!+\!\Delta{z})$ by $T({\tilde{z}}_{max})+ (dT/dz)|_{{\tilde{z}}_{max}}\Delta{z}$ due to $\Delta{z}\!\ll\!{\tilde{z}}_{max}$. Thus, substituting equation (8) for $\Delta{z}$, and using the definition of $\delta (\theta)$, we obtain \begin{equation} {dN(\theta)\over d{\Omega}} \, \propto\, 1\, +\, \left(3 + K\right)\! \beta\cos\theta \, + \, O(\beta^2) \,\,\,\,\,\,\, , \end{equation} where \begin{equation} K({\tilde{z}}_{max},\Omega_{0},\alpha,S) \equiv \left( {{2S\over r({\tilde{z}}_{max})}{\left. {dr\over dz} \right|_{{\tilde{z}}_{max}} \!\!\!\! + \, {S^{2}\over 1+{\tilde{z}}_{max}} } \, \right) {1\over T({\tilde{z}}_{max})} \left. {dT\over dz}\right|_{{\tilde{z}}_{max}} \,\,\,\,\,\, , \end{equation} and $T$ is defined by equation (10). Notice that the above results are independent of the value of the Hubble constant. In order to gain some insight we calculated the function $K$ for the case of $\alpha\!=\!0$ and found it to be well fitted (to within a few percent) by \begin{equation} K \, \simeq \, 6.7\, \left({S\over 2}\right)^{\!1.4} \!
\Omega_{0}^{-1/3} \, {\tilde{z}}_{max}^{\,-2.7} \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, (\alpha\!=\!0) \end{equation} in the range of parameters $1.0\!\le\! S\!\le\!2.5$, $0.2\!\le\!\Omega_{0}\!\le\!1$, and $1\!\le\!{\tilde{z}}_{max}\!\le\!2$. \section{THE DIPOLE AMPLITUDE} The redshift out to which bursts are detected, ${\tilde{z}}_{max}\,$, is not a free parameter. It is determined by the requirement that the burst intensity distribution coincides with the observed one, namely, that the $\left<V/V_{max}\right>\,\,$ parameter equals the measured value. Since $\left<V/V_{max}\right>_{BATSE}\!=\!0.330\pm0.016$ (Meegan {\it et al. } 1993), the BATSE instrument can detect bursts out to a redshift ${\tilde{z}}_{max}$ which is determined by \begin{equation} \langle {V_{ }\over V_{max}}\rangle \equiv \, {1\over T({\tilde{z}}_{max},\Omega_{0},\alpha)} \int\limits_{0}^{{\tilde{z}}_{max}}{ {F^{3/2}_{min}\over F^{3/2}} \, {r^2 \over (1+z)^{1+\alpha}} \, {dr\over dz}\, dz} \, \,= \, 0.330 \,\,\,\,\, , \end{equation} where $T$ is defined in equation (10), $F$ is given by equation (5), and $F_{min}\!\equiv\!{F({\tilde{z}}_{max})}$. Thus, ${\tilde{z}}_{max}$ depends on $\alpha$, $S$, and $\Omega_{0}$. There is a considerable diversity in the observed spectra of $\gamma$-ray bursts, but the average photon spectral index corresponds to $S$ somewhere between $1.5$ and $2$ (Schaefer {\it et al. } 1992). Regarding the $\alpha$ parameter, it is clear that the population of cosmological $\gamma$-ray bursts may evolve with epoch but we have no observational constraint on that. Therefore, we shall examine the cases of moderate ($\alpha\!\!=\!\! 1/2$) and rapid ($\alpha\!\!=\!\!1$) rates of evolution, as well as the case of a constant comoving event rate ($\alpha\!=\!0$).
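Equation (15) is straightforward to evaluate numerically. The sketch below (illustrative, not from the paper) does so for the Einstein--de Sitter case $\Omega_0\!=\!1$ with $S\!=\!2$, $\alpha\!=\!0$ and ``standard candles'', in units where $2c/H_0=1$; it also checks the closed form (11) for $T$ along the way:

```python
import math

S, alpha = 2.0, 0.0  # photon spectral index and evolution exponent (this case only)

# Omega_0 = 1: equation (6) reduces to r(z) = 1 - (1+z)^(-1/2) in units 2c/H_0 = 1.
def r(z):
    return 1.0 - (1.0 + z) ** -0.5

def drdz(z):
    return 0.5 * (1.0 + z) ** -1.5

def simpson(g, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4.0 * sum(g(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2.0 * sum(g(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3.0

def T(z):  # equation (10)
    return simpson(lambda u: r(u) ** 2 * drdz(u) / (1.0 + u) ** (1.0 + alpha), 0.0, z)

def F(z):  # peak flux up to a constant, equation (5)
    return (1.0 + z) ** -S / r(z) ** 2

def v_over_vmax(zt):  # equation (15), with F_min = F(zt)
    g = lambda u: (F(zt) / F(u)) ** 1.5 * r(u) ** 2 * drdz(u) / (1.0 + u) ** (1.0 + alpha)
    return simpson(g, 1e-9, zt) / T(zt)

# Check the closed form (11) for Omega_0 = 1, alpha = 0: T = x^3/3 - x^4/2 + x^5/5.
x = r(1.0)
assert abs(T(1.0) - (x ** 3 / 3 - x ** 4 / 2 + x ** 5 / 5)) < 1e-8
print(v_over_vmax(1.02))  # ~ 0.33, consistent with Table 1 (S=2, Omega_0=1, n_s const.)
```

Bisecting the last function on $\left<V/V_{max}\right>\!=\!0.330$ recovers ${\tilde{z}}_{max}$ close to the tabulated value for this parameter set.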
Substituting $\beta\!=\!1.233\!\times\!10^{-3}$ in equation (12), the amplitude of the dipole is \begin{equation} \left[ 3.7 + 1.23K({\tilde{z}}_{max},\Omega_{0},\alpha,S)\right]\times 10^{-3} \,\,\,\,\, , \end{equation} where, for a given $\Omega_{0}$, $\alpha$, and $S$, the value of ${\tilde{z}}_{max}$ is determined from equation (15), and $K$ is evaluated using equation (13). The results for various combinations of parameters are shown in Table 1. For reasonable sets of parameters the dipole amplitude is of order $10^{-2}$, an order of magnitude larger than $\beta$, and it is almost independent of the value of $\Omega_{0}$. At first sight it seems surprising that the dipole amplitude increases when evolution of the burst population becomes significant (Table 1). After all, a higher value of $\alpha$ implies a smaller number of bursts originating within a given range of redshift, $[z,z\!+\!\Delta{z}]$. However, an increasing rate of evolution also compels a {\it lower\/} ${\tilde{z}}_{max}$, since evolution replaces some of the ``redshift effect'' which is required for consistency with the observed $\left<V/V_{max}\right>$. Therefore, the proper volume within $[{\tilde{z}}_{max},{\tilde{z}}_{max}\!+\!\Delta z]$, relative to the volume within $[0,{\tilde{z}}_{max}]$, is of order $3\Delta r/r$, where $r\!=\!r({\tilde{z}}_{max})$ and $\Delta r\!=\!(dr/dz)|_{{\tilde{z}}_{max}}\Delta z$. Thus, since $\Delta{r}/r\sim O(\beta)$, and the effect of evolution is of order $\alpha\beta$, the net effect of an increasing $\alpha$ is an increase in the fraction of detectable bursts, as long as $\alpha\!\lesssim\!3$. We should keep in mind that the possibility of a negative value of $\alpha$, namely, a decrease in the comoving event rate with time, cannot be excluded. In that case the dipole amplitude will be lower, and the redshift out to which bursts are detected will be higher.
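For a quick consistency check of Table 1, the fitting formula (14) can be combined with the amplitude (16); the sketch below reproduces the first entry ($n_s$ constant, $S\!=\!2$, $\Omega_0\!=\!1$, ${\tilde{z}}_{max}\!=\!1.02$) to within the stated few-percent accuracy of the fit:

```python
beta = 1.233e-3  # dimensionless solar velocity with respect to the CMB

def K_fit(zt, omega0, S):
    """Fitting formula (14) for K; stated valid for alpha = 0, 1 <= zt <= 2."""
    return 6.7 * (S / 2.0) ** 1.4 * omega0 ** (-1.0 / 3.0) * zt ** -2.7

def dipole_amplitude(K):
    """Equation (16): [3.7 + 1.23*K] x 10^-3, i.e. (3 + K)*beta."""
    return (3.7 + 1.23 * K) * 1e-3

A = dipole_amplitude(K_fit(1.02, 1.0, 2.0))
print(A)  # ~ 11.5e-3, vs. 11.7e-3 in Table 1 -- within the fit's few-percent accuracy
```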
We argue that $\alpha$ is unlikely to be negative and large for the following reasons: 1) bursts at too high a redshift would introduce a severe difficulty for most progenitor models, e.g., the merging neutron star model, since galaxies may not have formed yet. 2) it would imply a strong correlation between the brightness and the duration of bursts, which is not observed. The assumption that all the bursts are ``standard candles'' may be adequate, but a broader luminosity function cannot be excluded. In that case, ${\tilde{z}}_{max}$ would depend on $L$ through equation (5), and an integration over the range of possible luminosities, $\int{\!dL \,\Phi(L)\,}$, should precede the r.h.s.\ of equations (9), (10), and (15). We argue that if the luminosity function is falling with increasing luminosity, e.g., a power-law distribution ($\Phi(L)\!\propto\! L^{-\gamma}$), then the dipole amplitude will {\it increase}, the more so for a larger value of $\gamma$. The reason for that is the following: assuming ``standard candles'' implies that most of the observed bursts originate at distances close to the boundary of the sphere of detectable bursts. By contrast, in the case of a steeply falling luminosity function the average distance to a burst may be considerably smaller. Therefore, from an argument similar to the one presented in the previous paragraph, as well as from the apparent strong (inverse) dependence of the dipole amplitude on the cosmological redshift (e.g., equation [12]), we conclude that replacing the ``standard candle'' assumption by a falling luminosity function will tend to increase the dipole amplitude. A detailed calculation for specific luminosity functions is beyond the scope of the present study. \section{CONCLUSION} {\it Assuming\/} that $\gamma$-ray bursts originate at cosmological distances, we have shown that three effects combine together to produce a dipole anisotropy in the bursts' angular distribution.
The dipole should point in the direction of the solar motion relative to the cosmic microwave background rest frame. The amplitude of the predicted dipole depends weakly on $\Omega_{0}$, but it is sensitive to the spectral index of the photon spectra, and to the rate of evolution of the burst population. It is independent of the value of the Hubble constant. The maximum redshift at which bursts can be detected is not a free parameter but is constrained by the requirement that the $\left<V/V_{max}\right>\,\,$ parameter be consistent with observations. The dipole amplitude turns out to be of order $10^{-2}$ for various combinations of parameters. This is an order of magnitude larger than what one would expect since the solar velocity with respect to the CMB is $370\,{\rm km\,s^{-1}}$ ($\beta\!\simeq\!10^{-3}$). Obviously, the sun is in motion relative to the Galaxy too, so one would expect a similar effect even if bursts originate within an extended Galactic halo (e.g., Fishman {\it et al. } 1978; Atteia and Hurley 1986; Maoz 1993). However, in this case the amplitude of the predicted anisotropy ($<\!1\%$) is negligible relative to the uncertainties in our understanding of the exact shape of the halo. It is only within the cosmological origin hypothesis that the dipole due to the solar motion is of practical interest. The predicted dipole cannot provide a strong test of the hypothesis of a cosmological origin of $\gamma$-ray bursts until a sample of the order of $10^{4}$ bursts is established. The sky exposure map will also have to be complete to a sufficient accuracy. In the near future, being aware of the expected dipole, rather than testing the consistency of the data with a perfectly isotropic distribution, will enable statistical analyses to derive a more reliable statistical significance for their results. I wish to thank Avi Loeb, Ramesh Narayan, and Tsvi Piran for comments. This work was supported by the U.S.
National Science Foundation, grant PHY-91-06678. \newpage \doublespace \vspace{0.9in} {\bf TABLE 1.} The Dipole Amplitude \vspace{0.2in} \begin{tabular}{l c c| c c|} source &\multicolumn{2}{c|}{$\Omega_{0}=1$} & \multicolumn{2}{c|}{$\Omega_{0}=0.3$} \\ \cline{2-3} \cline{4-5} evolution& $S=2$ &\multicolumn{1}{c|}{$S=1.5$} &$S=2$ &$S=1.5$ \\ \hline $n_s=$ constant &$(1.02\,;\,11.7\!\times\!10^{-3})$ &$(1.30\,;\,6.4\!\times\!10^{-3})$ &$(1.22\,;\,11.2\!\times\!10^{-3})$ &$(1.65\,;\,6.1\!\times\!10^{-3})$ \\ $n_s\propto (1\!+\!z)^{-1/2}$ &$(0.86\,;\,14.3\!\times\!10^{-3})$ &$(1.06\,;\,7.8\!\times\!10^{-3})$ &$(0.99\,;\,13.9\!\times\!10^{-3})$ &$(1.27\,;\,7.6\!\times\!10^{-3})$ \\ $n_s\propto (1\!+\!z)^{-1}$ &$(0.75\,;\,17.4\!\times\!10^{-3})$ &$(0.90\,;\,9.4\!\times\!10^{-3})$ &$(0.83\,;\,18.0\!\times\!10^{-3})$ &$(1.02\,;\,9.6\!\times\!10^{-3})$ \\ \end{tabular} \vspace{0.3in} {\bf Table 1} -- The redshift out to which bursts can be detected, ${\tilde{z}}_{max}$, and the amplitude of the dipole component (equation [16]), evaluated for several combinations of the photon spectral index, $S$, the cosmological density parameter, $\Omega_{0}$, and the rate of evolution of the burst population. In general, ${\tilde{z}}_{max}\!\simeq\!1$ and the dipole amplitude is of order $10^{-2}$. The dependence on the various parameters is discussed in \S{3}. \newpage
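As a rough, shot-noise-only illustration of the sample size quoted in \S{4} (the simple estimator and the assumption of uniform sky exposure are ours, not the paper's): for $N$ isotropically distributed points the dipole estimator $\hat A=3\langle\cos\theta\rangle$ is unbiased with scatter $\sqrt{3/N}$, so the scatter falls to the level of the predicted amplitude $\sim10^{-2}$ only for $N$ of order $10^{4}$ or more:

```python
import math

def dipole_scatter(n):
    """Shot-noise standard deviation of A_hat = 3*<cos(theta)> for n
    isotropically distributed points (Var[cos(theta)] = 1/3 on the sphere)."""
    return math.sqrt(3.0 / n)

for n in (10 ** 3, 10 ** 4, 10 ** 5):
    print(n, dipole_scatter(n))
# At n = 10^4 the scatter is ~0.017, still comparable to the predicted
# amplitude ~0.01; a clear detection therefore needs samples of this order
# or larger, as stated in the text.
```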
https://arxiv.org/abs/1212.0733
From Boundary Crossing of Non-Random Functions to Boundary Crossing of Stochastic Processes
One problem of wide interest involves estimating expected crossing times. Several tools have been developed to solve this problem, beginning with the work of Wald and the theory of sequential analysis. An extension of his approach is provided by the optional sampling theorem in conjunction with martingale inequalities. Deriving an explicit closed-form solution for the expected crossing times may be difficult. In this paper, we provide a framework that can be used to estimate expected crossing times of arbitrary stochastic processes. Our key assumption is knowledge of the average behavior of the supremum of the process. Our results include a universal sharp lower bound on the expected crossing times.
\section{Introduction} \label{intro} One problem of wide interest in the study of stochastic processes involves estimating $E[T_r]$, the expected time at which a process crosses a boundary. Following the work of \cite{delapena97}, we let $X_t$, $t \geq 0$, be a measurable process with first passage time $T_r = \inf\{t > 0 : X_t \geq r\}$, $r>0$, at the level $r$. Deriving an explicit closed-form solution for $E[T_r]$ can be difficult, and so we are interested in bounding $E[T_r]$ from above and below by functions of $a(t) := E[\sup_{0 \leq s\leq t}X_s]$. This approach was introduced in de la Pe\~na \cite{delapena97} (published in Section 2.7 of \cite{delapenagine}), in which the author constructed bounds for $E[T_r]$ for a class of Banach-valued processes with independent increments via decoupling. The bounds derived are of interest in applications where the moments of the maximal process can be readily obtained.\\ The main idea consists of an extension of the concept of boundary crossing by non-random functions to the case of random processes. The function $a(t)$ can be intuitively interpreted as a natural clock for all processes sharing the same $a(t)$. Throughout, we assume that we have information on $a(t)$. 
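To make the objects $a(t)$ and $T_r$ concrete, the following short Monte Carlo sketch (our own illustration, not part of the original development) estimates $a(1)$ for standard Brownian motion, for which the reflection principle gives $a(t)=E[\sup_{0\le s\le t}W_s]=\sqrt{2t/\pi}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20000, 1000
dt = 1.0 / n_steps

# Discretized standard Brownian paths on [0, 1]
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

# a(1) = E[sup_{0<=s<=1} W_s]; the exact value is sqrt(2/pi) ~ 0.798
# (the discrete maximum slightly underestimates the continuous supremum)
a1_hat = paths.max(axis=1).mean()
print(a1_hat)
```

With $20{,}000$ paths the estimate lands within a few percent of $\sqrt{2/\pi}$, the residual gap being the discretization bias of the pathwise maximum.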
Furthermore, we assume that $X_t$ is a measurable separable process, so that $\sup_{0 \leq s \leq t}X_s$ is well-defined and $T_r$ is measurable.\\ We would like to draw the readers' attention to the fact that the results derived in this paper provide a decoupling reinterpretation (in the context of boundary crossing) of the results due to Wald concerning randomly stopped sums of independent random variables (see \cite{wald45}), to Burkholder and Gundy on randomly stopped processes with independent increments (see \cite{burkholdergundy70}), and to Klass on bounds for randomly stopped Banach space-valued random sums (see \cite{klass88} and \cite{klass90}).\\ Recall that for $\{X_i\}$ a sequence of iid random variables adapted to $\mathcal{F}_i = \sigma(X_1, \ldots, X_i)$ with $E[X_1] = \mu$, where $|\mu| < \infty$, we have \begin{equation*} E[S_T] = \mu E[T], \end{equation*} where $T$ is a stopping time adapted to $\sigma\{X_i\}$, $S_n = \sum^n_{i=1}X_i$ and $E[T] < \infty$. Furthermore, if $E[X_1] = 0$ and $E[X^2_1] < \infty$, then \begin{equation*} E[S^2_T] = E[X^2_1]E[T], \end{equation*} whenever $E[T] < \infty$. These celebrated results are known as Wald's first and second equations. As we can observe, $S_T = \sum^\infty_{i=1} X_i I(T \geq i)$ is composed of summands $X_i I(T \geq i)$ that are products of independent variables. This motivates the idea of replacing $\{X_iI(T\geq i)\}$ by $\{\widetilde{X}_iI(T \geq i)\}$, where $\{\widetilde{X}_i\}$ is an independent copy of $\{X_i\}$, independent of $T$ as well. As a result, the above Wald equations can be viewed from the decoupling perspective where, if we denote $\widetilde{S}_n = \sum^n_{i=1} \widetilde{X}_i$, then, whenever $E[T] < \infty$ and $E[X^2_1] < \infty$, \begin{equation*} E[S_T] = E[\widetilde{S}_T] = E[S_{\widetilde{T}}] \quad\quad\text{and}\quad\quad E[S^2_T] = E[\widetilde{S}^2_T] = E[S^2_{\widetilde{T}}]. 
\end{equation*} It is important to realize that the variables $S_T$ and $\widetilde{S}_T$ can have drastically different behavior. One may consider a case in which the $X_i$'s are iid random variables with $\Pr\{X_i=1\} = \Pr\{X_i = -1\} = \frac{1}{2}$. Define $T = \inf\{n > 0 : S_n = a \text{~or~} S_n = -b\}$ for some integers $a, b > 0$. It is easy to see that $S_T$ can only take the value $a$ or $-b$, whereas $\widetilde{S}_T$'s value is not restricted to these two choices.\\ An extension of Wald's second equation to the case of independent random variables having finite second moments, with $T$ a stopping time defined on the $X$'s, was studied by de la Pe\~na and Govindarajulu \cite{delapenagov}. The following bound is sharp: \begin{equation} E[S^2_T] \leq 2E[\widetilde{S}^2_T], \end{equation} where $S_n = \sum^n_{i=1}X_i$ and $\widetilde{S}$ is an independent copy of $S$. If we let $T_r = \inf\{n: S^2_n \geq r\}$ and $a(n) = E[\max_{1 \leq j \leq n}S^2_j]$, then $$ r \leq E[S^2_{T_r}] \leq 2 E[\widetilde{S}^2_{T_r}] \leq 2 E\left[\max_{1 \leq j \leq T_r}\widetilde{S}^2_j\right], $$ hence \begin{equation*} E[a(T_r)] \geq \frac{r}{2}, \end{equation*} which is closely related to our main result; see Proposition 1 below.\\ Along this vein, Klass \cite{klass88} obtained results on a best possible improvement of Wald's equation. In his work, Klass obtained bounds for stopped partial sums of independent random elements taking values in a Banach space $(\mathcal{B}, \|\cdot\|)$. To be specific, he derived that \begin{equation*} E\left[\max_{1 \leq n \leq T}\Phi(\|S_n\|)\right] \leq 20(18^\alpha)E\left[\max_{1 \leq n \leq T}\Phi(\|\widetilde{S}_n\|)\right], \end{equation*} where $\Phi: [0, \infty) \to [0,\infty)$ is a nondecreasing function such that $\Phi(0) = 0$ and, for some $\alpha >0$, $\Phi(cx) \leq c^\alpha\Phi(x)$ for all $c>2$, $x>0$. 
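The contrast between $S_T$ and $\widetilde{S}_T$ described above, and Wald's second equation itself, are easy to check numerically. The sketch below is our own (with $a=3$, $b=2$ chosen for illustration): for the $\pm 1$ walk stopped at $a$ or $-b$, optional stopping gives $E[T]=ab$ and Wald's second equation gives $E[S_T^2]=E[X_1^2]E[T]=ab$.

```python
import random

random.seed(1)

def stopped_walk(a=3, b=2):
    # +/-1 random walk stopped when it first hits a or -b
    s, t = 0, 0
    while -b < s < a:
        s += random.choice((1, -1))
        t += 1
    return s, t

n = 20000
samples = [stopped_walk() for _ in range(n)]
mean_T = sum(t for _, t in samples) / n        # optional stopping: E[T] = a*b = 6
mean_ST2 = sum(s * s for s, _ in samples) / n  # Wald II: E[S_T^2] = E[X_1^2] E[T] = 6
values_of_ST = {s for s, _ in samples}         # S_T is restricted to {3, -2}
print(mean_T, mean_ST2, values_of_ST)
```

Note that $S_T$ only ever equals $3$ or $-2$, while a decoupled copy $\widetilde{S}_T$ would range over all values reachable by the walk.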
Furthermore, in \cite{klass90}, the corresponding lower bound was also derived, and hence \begin{equation} c_\alpha E\left[\max_{1 \leq n \leq T}\Phi(\|\widetilde{S}_n\|)\right] \leq E\left[\max_{1\leq n\leq T}\Phi(\|S_n\|)\right] \leq 20(18^\alpha) E\left[\max_{1 \leq n \leq T}\Phi(\|\widetilde{S}_n\|)\right].\label{eq:Klass} \end{equation} The corresponding counterpart for processes defined in continuous time was developed in \cite{delapenaeisenbaum94} (see also \cite{delapena96}), in which it is shown that \begin{equation*} c_\alpha E\left[\sup_{1 \leq t \leq T}\Phi(\|\widetilde{N}_t\|)\right] \leq E\left[\sup_{1\leq t\leq T}\Phi(\|N_t\|)\right] \leq 20(18^\alpha) E\left[\sup_{1 \leq t \leq T}\Phi(\|\widetilde{N}_t\|)\right], \end{equation*} where $N_t$, $t\geq 0$, is a $\mathcal{B}$-valued process with independent increments, continuous on the right with limits from the left, and $(\mathcal{B}, \|\cdot\|)$ is a separable Banach space. As observed above, the inclusion of $a(t)$ facilitates the decoupling between the random stopping time $T$ and the underlying process $X_t$. All these results are closely connected with sequential analysis; for details, readers are referred to Lai \cite{lai01} for a recent survey.\\ In our results below, we use $E[a(T_r)]$ as a key quantity. Here, we review our interpretation of this quantity. If we define \begin{equation*} M(t) = \sup_{0 \leq s \leq t} X(s), \end{equation*} and $X_t$ is continuous, then $M(T_r) \equiv r$ with probability one. In contrast, \begin{equation*} E[a(T_r)] := \int E[M(t)]\,dF_{T_r}(t) = \int E[\widetilde{M}(t)]\,dF_{T_r}(t), \end{equation*} which can be interpreted as $E[a(T_r)] = E[a(\widetilde{T}_r)] = E[M(\widetilde{T}_r)]$, where $\widetilde{T}_r =_d T_r$ with $T_r$ and $\widetilde{T}_r$ independent. Thus it fits into the decoupling theme discussed above. 
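The gap between the coupled quantity $E[M(T_r)]$ and the decoupled quantity $E[a(T_r)]$ can be seen in a toy example of our own choosing (it reappears later as the sharpness example): take $X_t = rI(t\ge U)$ with $U\sim\mathcal{U}(0,1)$, so that $T_r=U$, $M(T_r)\equiv r$, while $a(t)=r\min(t,1)$ and $E[a(T_r)]=r/2$.

```python
import numpy as np

rng = np.random.default_rng(3)
r, n = 1.0, 200000

U = rng.uniform(size=n)                 # X_t = r * 1{t >= U}, so T_r = U
a = lambda t: r * np.minimum(t, 1.0)    # a(t) = E[sup_{s<=t} X_s] = r * min(t, 1)

coupled = r                 # M(T_r): sup over the SAME path is identically r
decoupled = a(U).mean()     # E[a(T_r)] = r * E[U] -> r/2, the sharp constant
print(coupled, decoupled)
```

The coupled expectation is exactly $r$, while the decoupled one is $r/2$, which is precisely the constant in the lower bound of Proposition 1.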
This is further discussed in the remarks following Proposition 1.\\ In this paper, we build upon \cite{brownetal11} and \cite{delapenaetal11} to provide a universal sharp bound $E[a(T_r)] \geq \frac{r}{2}$, which under a concavity assumption on $a(t)$ gives $E[T_r] \geq a^{-1}(\frac{r}{2})$; compare de la Pe\~na and Yang \cite{delapenayang}, in which the following bound is presented: \begin{equation} E[T_r] \geq (1-\epsilon)a^{-1}(\epsilon r), \quad \epsilon \in (0,1). \end{equation} In addition, for a wide class of processes, we show that $$a^{-1}\left(\frac{r}{2}\right) \leq E[T_r] \leq a^{-1}(2r)$$ as well as the stability property $\displaystyle\sup_{r > 0}\bigg|\frac{E[a(T_r)]-r}{r}\bigg| \leq 1$. The above result, coupled with Eq. (1.1) of \cite{delapenayang}, without the concavity assumption on $a$, gives: \begin{equation*} \frac{1}{2}a^{-1}\left(\frac{r}{2}\right) \leq E[T_r] \leq a^{-1}(2r). \end{equation*} It should be noted that the theory of first passage times for random processes has been extensively developed in recent times. In particular, the distribution of the first hitting times of Brownian motion has been studied through the inverse Gaussian distribution; see \cite{sehadri94}. A similar approach to first hitting times involving L\'evy processes is also available; readers may refer to \cite{sato99} for details. The typical methods, however, assume full knowledge of the distribution. In contrast, our approach provides bounds for all processes with a common $a(t) = E[\sup_{0 \leq s \leq t}X_s]$, based on approximate knowledge of moments of the maximal process, or of $E[X_t]$. Even in situations where the distribution of the process is known, the quantity $E[T_r]$ might not be easily obtained, as shown in Example 7, in which we study the relative growth of the boundary crossing of a three-dimensional Brownian motion and related processes. 
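As a numerical sanity check of the two-sided bound (our own sketch, not from the paper), consider reflected Brownian motion $X_t=|W_t|$, for which $E[T_r]=r^2$ and, by Brownian scaling, $a(t)=a(1)\sqrt{t}$:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, r = 1e-3, 1.0
sqdt = np.sqrt(dt)

# Estimate a(1) = E[sup_{s<=1} |W_s|]; by Brownian scaling, a(t) = a(1) sqrt(t)
paths = np.cumsum(rng.normal(0.0, sqdt, size=(5000, 1000)), axis=1)
c = np.abs(paths).max(axis=1).mean()
a_inv = lambda x: (x / c) ** 2          # generalized inverse of a(t) = c sqrt(t)

def exit_time():
    # T_r = inf{t : |W_t| >= r}; the exact mean is r^2
    w, t = 0.0, 0.0
    while abs(w) < r:
        w += rng.normal(0.0, sqdt)
        t += dt
    return t

ET = float(np.mean([exit_time() for _ in range(400)]))
print(a_inv(r / 2), ET, a_inv(2 * r))   # a^{-1}(r/2) <= E[T_r] <= a^{-1}(2r)
```

Here $a(1)\approx\sqrt{\pi/2}\approx 1.25$, so the sandwich reads roughly $0.16 \le 1 \le 2.5$: the bound is not tight, but it captures the correct order of magnitude, which is the point.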
The rest of the paper is organized as follows. In Section 2, we obtain upper and lower bounds on $E[T_r]$, as well as bounds on $E[a(T_r)]$. Section 3 elaborates some possible extensions of our methodology that can handle situations in which the expected first hitting time is hard to obtain. An application of the bounds derived is presented in Section 4, followed by Section 5, which summarizes the paper. \section{Main results} With the above definitions, let $a^{-1}(\xi) = \inf\{t > 0 : a(t) \geq \xi\}$, with $a^{-1}(\xi) = \infty$ if $a(t) < \xi$ for all $t$. We have the following proposition. \begin{prop} \label{prop:lowerboundaTr} For every non-negative, measurable process $X_t$ with \newline $E\left[\sup_{0 \leq s \leq t}X_s \right] = a(t)$, \begin{equation} \frac{r}{2} \leq E[a(T_r)] \label{eq:eatrlower} \end{equation} and the bound is sharp. Furthermore, if $a(t)$ is assumed to be concave, we obtain \begin{equation} a^{-1}\left(\frac{r}{2}\right) \leq E[T_r]. \label{eq:etrlower} \end{equation} \end{prop} A decoupling reinterpretation of \eqref{eq:eatrlower} is given as follows. Assuming that $X_t$ is continuous, we have $$r = E\left[\sup_{0 \leq s \leq T_r}X_s\right] = E[M(T_r)],$$ so we can rewrite \eqref{eq:eatrlower} as \begin{equation*} \frac{r}{2} = \frac{1}{2}E\left[\sup_{0 \leq s \leq T_r}X_s\right] \leq E\left[\sup_{0 \leq s \leq T_r}\widetilde{X}_s\right], \end{equation*} in which case the underlying process and its random stopping time are independent (decoupled). \begin{proof}\label{pf:one} Let $F$ be the cdf of $T_r$. If $F$ is continuous, then \begin{align*} a(t) & = E\left[\sup_{0 \leq s \leq t} X(s)\right]\\ & \geq r \Pr\left\{\sup_{0 \leq s \leq t} X(s) \geq r\right\}\\ & = r\Pr\{T_r \leq t\} = rF(t). \end{align*} Due to the continuity of $F$, we have $F(T_r) \sim \mathcal{U}(0,1)$ and $E[F_{T_r}(T_r)] = \frac{1}{2}$. Combining this with $E[a(T_r)] \geq rE[F_{T_r}(T_r)]$, we conclude that $E[a(T_r)] \geq \frac{r}{2}$. 
More generally, if the distribution of $T_r$ is not necessarily continuous, then since \begin{align*} E[F(T_r)] & = \Pr\left(\widetilde{T}_r \leq T_r\right)\\ & = \Pr\left(\widetilde{T}_r = T_r \right) + \frac{1}{2}\Pr\left(\widetilde{T}_r \neq T_r\right)\\ & = \frac{1}{2}\left[1 + \Pr\left(T_r = \widetilde{T}_r\right)\right] \geq \frac{1}{2}, \end{align*} it follows that $E[a(T_r)] \geq rE[F_{T_r}(T_r)] \geq r/2$. To prove the sharpness of the bound, consider \begin{equation*} X_t = rI(t \geq U), \end{equation*} where $U \sim \mathcal{U}(0,1)$; then $T_r =_d U$ and, since $a(t) = r\min(t,1)$, $E[a(T_r)] = E[rU] = \frac{r}{2}$. To prove \eqref{eq:etrlower}, it follows immediately by Jensen's inequality that \begin{align*} a(E[T_r]) & \geq E[a(T_r)] \geq \frac{r}{2}\\ \Longrightarrow~~~~~~~~~ E[T_r] & \geq a^{-1}\left(\frac{r}{2}\right). \end{align*} \end{proof} \noindent One may be interested in obtaining an upper bound for $E[a(T_r)]$ or $E[T_r]$ without further assumptions. This is, however, impossible. The reason is that, without assumptions that control the growth of the process $X_t$ for $t > T_r$, the value of $a(t)$ can blow up. Below we give a counter-example demonstrating that no upper bound can be obtained without further assumptions.\\ \begin{ex} \label{eg:exp} Let $X_t = tY$, with $Y$ a non-negative random variable. Then \begin{equation*}a(t) = tE[Y]\end{equation*} and \begin{equation*}E[T_r] = rE[Y^{-1}].\end{equation*} Suppose $Y$ is exponentially distributed with mean 1, so that $E[Y]=1$ while $E[Y^{-1}] = \infty$. The behavior of $a(t)$ is controlled; however, $E[a(T_r)] = E[T_r]E[Y] = rE[Y^{-1}]$ is infinite. Controlling the growth of the process after $T_r$ is reached enables us to derive an upper bound, as shown in Proposition \ref{prop:upperboundaTr}. 
\end{ex} \begin{defi} A real random variable $A$ is less than a random variable $B$ in the ``usual stochastic order'' if \begin{equation*} \Pr(A>x) \le \Pr(B>x)\text{ for all }x \in (-\infty,\infty), \end{equation*} which is denoted $A \le_{st} B$; see \cite{rogerwilliams} and \cite{ross96}. \end{defi} \begin{prop} \label{prop:upperboundaTr} Assume that $X_t$ is a non-negative, continuous, time homogeneous Markov process and that $T_{kr} - T_{(k-1)r} \geq_{st} T_r$ for $k \geq 2$. Then \begin{equation} \label{eq:upperboundEaTr} E[a(T_r)] \leq 2r \end{equation} and \begin{equation} E[T_r] \leq a^{-1}(2r). \label{eq:etrupper} \end{equation} \end{prop} \begin{proof} First notice that $X_t$ is a Markov process with continuous paths and irreducible state space $[0,A]$, $0 < A \leq \infty$; if $r<A$, then $\Pr\{T_r < \infty\} = 1$. Because of the continuous paths, we have \begin{equation*} T_r = T_s + (T_r - T_s), \quad s<r, \end{equation*} thus $T_r$ is stochastically greater than $T_s$ and than $T_r-T_s$. Now \begin{equation*} \Pr\{T_r > x+y\} = \Pr\{T_r > x\} \Pr\{T_r - x > y \mid T_r > x\} \end{equation*} and, by the Markov property, \begin{equation*} T_r - x \mid \{T_r > x,\ X(x) = y\} \sim T_r - T_y. \end{equation*} Since $T_r - T_z \leq_{st} T_r$ for $0 \leq z < r$, we get $\Pr\{T_r - x > y \mid T_r > x\} \leq \Pr\{T_r > y\}$, so that, if we define $\bar{F}(\cdot) = 1-F(\cdot)$, \begin{equation} \label{eq:NBU} \bar{F}_{T_r}(x+y) \leq \bar{F}_{T_r}(x)\bar{F}_{T_r}(y), \end{equation} which is the submultiplicative, or new better than used (NBU), property. For details about the NBU property, see \cite{barlowproschan75} and \cite{brown06}. Since $T_r$ is NBU, it has a finite mean. 
\\ Define $\bar{G}(t) := \frac{1}{\mu}\int^\infty_{t}\bar{F}_{T_r}(x)dx$, where $\mu = E[T_r]$, the tail of the stationary renewal distribution corresponding to $T_r$. Since \begin{align*} \bar{G}(t) & = \frac{\bar{F}_{T_r}(t)}{\mu}\int^\infty_t \frac{\bar{F}_{T_r}(x)}{\bar{F}_{T_r}(t)}dx\\ &\leq \frac{\bar{F}_{T_r}(t)}{\mu}\int^\infty_t \bar{F}_{T_r}(x-t)dx\\ & = \frac{\bar{F}_{T_r}(t)}{\mu}\mu = \bar{F}_{T_r}(t), \end{align*} it follows that $G \leq_{st} F_{T_r}$.\\ The stationary renewal process corresponding to $F_{T_r}$ has $X^*_1 \sim G$ and $\{X^*_i\}_{i\geq 2} \sim F_{T_r}$. It satisfies \begin{equation*} M^*(t) = E[N^*(t)] = E[\text{\# of renewals in }[0,t]] = \frac{t}{\mu}. \end{equation*}~\\ An ordinary renewal process has $X_1 \sim F_{T_r}$ with $F_{T_r} \geq_{st} G$, since $F_{T_r}$ is NBU. It follows that \begin{align*} M(t) & = E[N(t)]\\ & = E[\text{\# of renewals in } [0,t] \text{ for the ordinary renewal process}]\\ & \leq E[N^*(t)] = \frac{t}{\mu}, \end{align*} and hence \begin{equation*} E[M(T_r)] \leq E[\mu^{-1}T_r] = 1. \end{equation*} ~\\Under the assumption that $T_{kr} - T_{(k-1)r} \geq_{st} T_r$, we have $$ \widetilde{N}_t r \leq \max_{0 \leq s \leq t} X_s \leq (\widetilde{N}_t+1)r, $$ where $\widetilde{N}_t = \max\{k: T_{kr}\leq t\}$, i.e. the number of renewals prior to time $t$. It follows that \begin{align} a(t) & \leq rE[\widetilde{N}_t+1]\nonumber\\ & \leq (M(t) + 1)r\nonumber\\ & \leq \left(\frac{t}{\mu}+1\right)r \label{eq:atupper} \end{align} and thus \begin{equation*} E[a(T_r)] \leq \left(\frac{E[T_r]}{\mu}+1\right)r \leq 2r. \end{equation*} Plugging $t = E[T_r]$ into \eqref{eq:atupper} gives $a(E[T_r]) \leq 2r$, and hence \begin{equation} E[T_r] \leq a^{-1}(2r), \end{equation} which completes the proof. \end{proof} As mentioned, eq. \eqref{eq:etrupper}, coupled with Eq. (1.1) of \cite{delapenayang}, gives \begin{equation} \label{eq:rightbound} \frac{1}{2}a^{-1}\left(\frac{r}{2}\right) \leq E[T_r] \leq a^{-1}(2r). 
\end{equation} By assuming that $a(t)$ is concave, the lower bound can be improved to $a^{-1}\left(\frac{r}{2}\right)$, which is sharp. If it is further assumed that the conditions specified in Proposition 2.2 hold, we obtain the following bounds: \begin{equation} \label{eq:mainresult2} a^{-1}\left(\frac{r}{2}\right) \leq E[T_r] \leq a^{-1}(2r), \end{equation} which give the right order of magnitude of the expected value of the first hitting time. In fact, \cite{yang} used this approach to obtain bounds on $E[T_r]$ for additive processes, including a certain class of stochastic integrals extending the work of Burkholder and Gundy; see \cite{burkholdergundy70}.\\ Readers may compare the results obtained in Propositions 2.1 and 2.2 to Theorem 3 of \cite{delapenayang}, whose lower bound, $E[T_r] \geq (1-\epsilon) a^{-1}(\epsilon r)$, $0 < \epsilon < 1$, is sharpened by our new lower bound in the case of concave functions. The results shown in \eqref{eq:rightbound} and \eqref{eq:mainresult2} provide the values of the constants that appear in the bounds of \cite{yang}, which gives examples of the approach applied to stochastic integrals.\\ It follows easily from \eqref{eq:eatrlower} and \eqref{eq:upperboundEaTr} that \begin{equation*} -\frac{1}{2} \leq \frac{E[a(T_r)]-r}{r} \leq 1, \end{equation*} which shows the stability of $E[a(T_r)]$ as $r$ changes. In addition, this shows the linearizing property of $a(t)$, as can be seen from \begin{equation} \sup_{r > 0} \bigg|\frac{E[a(T_r)]-r}{r}\bigg| = \sup_{r > 0} \bigg|\frac{E[\sup_{0 \leq s \leq T_r}\widetilde{X}_s]-E[\sup_{0 \leq s \leq T_r} X_s]}{r}\bigg|\leq 1. 
\label{eq:rate} \end{equation}~\\ Equation \eqref{eq:rate} explains the claim that $a(t)$ can be interpreted as a natural clock for all processes with the same $a(t)$: through $a(t)$, the quantity $E[a(T_r)]$ is in a linear relationship with the predefined boundary $r$.\\ The above framework provides a broad foundation for more general applications. A rich array of examples is given in \cite{delapenayang}. In particular, the above results can be extended easily to $\mathbf{X} \in \mathbb{R}^d$. We provide the following examples for illustration. \begin{ex} Suppose that we are interested in the first time the process $X_t$ hits either a lower boundary $a$ or an upper boundary $b$, where $a < 0 < b$. We may define \begin{equation*} f_{a,b}(x) = \begin{cases} x/a, & x < 0\\x/b, & x\geq 0, \end{cases} \end{equation*} (see Figure 1), and hence \begin{equation*} T_{a,b} = \inf\{t > 0: X_t \notin [a,b]\} = \inf\{t > 0 : f_{a,b}(X_t)>1\} \end{equation*} and \begin{equation*} a(t) = E\left[\sup_{0 \leq s \leq t} f_{a,b}(X_s)\right] = E\left[\sup_{0 \leq s \leq t} I(X_s<0)a^{-1}X_s+I(X_s \geq 0)b^{-1}X_s \right]. \end{equation*} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.5]{bounds.jpg} \caption{Illustration of asymmetric bounds for general (not necessarily non-negative) processes.} \label{fig:illustration} \end{center} \end{figure} \end{ex} More generally, this can be extended to cases where the first hitting time is defined as $T_r=\inf\{t \geq 0: f_{c_1, c_2, \ldots, c_m}(\boldsymbol X_t) > r\}$. One can take, for example, (a) $f(x) = x^+$, $x \in \mathbb{R}$; (b) $f(x,y) = |x-y|$, $x, y \in \mathbb{R}^d$; (c) $f(x, y) = \rho(x-y)$ for a metric $\rho$; (d) $f(x,y) = e^{-|x-y|}$, $x, y \in \mathbb{R}^d$, and so on, for appropriate applications. Moreover, the boundary itself need not be fixed. \begin{ex} Suppose that $g$ is a deterministic or stochastic process. 
Define $Y_t = X_t/g_t$ and $T_r = \inf \{t: Y_t > r\}$, where $X$ is defined as in the previous example. This corresponds to the hitting time of the process $X_t$ reaching $g_t$, and $a(t) = E[\sup_{s\leq t}X_s/g_s]$. \end{ex} \section{Some extensions} Notice that, under the concavity assumption on $a(\cdot)$, we can derive a lower bound similar to the one derived in Section 2 that is more readily available. Observe that \begin{equation*} a(t) = E\left[\sup_{0 \leq s \leq t}X_s\right] \geq \sup_{0 \leq s \leq t}E[X_s] := \kappa(t). \end{equation*} It follows that $a^{-1}(r) \leq \kappa^{-1}(r)$. Thus, if the conditions for the upper bound hold, then \begin{equation} E[T_r] \leq a^{-1}(2r) \leq \kappa^{-1}(2r), \end{equation} where $\kappa(\cdot)$ may be obtained more easily than $a(\cdot)$. \begin{ex} Consider the absolute value of a standard Brownian motion, $|W_t|$. It can be shown that \begin{equation*} \kappa(t) = E|W_t| = \sqrt{2t/\pi}, \end{equation*} and it is known that $E[\sup_{0 \leq s \leq t}W_s] = E|W_t|$, but $a(t) = E[\sup_{0\leq s\leq t}|W_s|]$ appears more difficult to compute. In fact, it equals \begin{equation*} c\sqrt{2t/\pi}, \end{equation*} where $c$ is a constant with $1 < c < 2$ (by Brownian scaling, $c$ does not depend on $t$). It should be noted that the conditions for the upper bound are easily verifiable in this case. Thus \begin{equation*} E[T_r] \leq \kappa^{-1}(2r) = 2\pi r^2, \end{equation*} while it is known that $E[T_r] = r^2$. \end{ex} If more information about the process $X_t$ is known, we can obtain a relaxed lower bound that can be expressed in a more manageable form. Suppose $X_t$ is a submartingale with right continuous paths; it is well known that \begin{equation*} \Pr\left\{\sup_{0 \leq s \leq t}X_s \geq r\right\} \leq \frac{E[X^+_t]}{r} := \frac{\eta(t)}{r}. 
\end{equation*} Therefore, it follows that $\Pr\{T_r \leq t\} = \Pr\{\sup_{0 \leq s \leq t}X_s \geq r\} \leq \eta(t)/r$, which leads to \begin{align} \frac{E[\eta(T_r)]}{r} & \geq \int \Pr\{T_r \leq t\}dF_{T_r}(t) \geq \frac{1}{2} \nonumber\\ \Longrightarrow \quad\quad\quad E[\eta(T_r)] & \geq \frac{r}{2}, \label{eq:modre} \end{align} which is a stronger result than Proposition 2.1, since $\eta(t) \leq a(t)$ for non-negative processes. \begin{ex} Consider $W^2_t$, where $W_t$ is a standard Brownian motion. Denote by $T_r$ the first passage time of $|W_t|$ to $r$ (so for $W^2_t$, the first passage time to $r^2$). Here, $\eta(t) = E[W^2_t] = t$, so applying \eqref{eq:modre} we get \begin{equation*} E[T_r] \geq \frac{r^2}{2}. \end{equation*} Note that the actual value of $E[T_r]$ is $r^2$ in this case. \end{ex} The usefulness of \eqref{eq:modre} can be demonstrated in the following example, in which a closed-form expression for $E[T_r]$ is difficult to obtain. \begin{ex} Consider the submartingale \begin{equation*} X_t = |W_t^2-t|, \end{equation*} where $W_t$ is a standard Brownian motion. It can be shown that \begin{equation*} \eta(t) = E|W^2_t - t| = tE|W^2_1 - 1| = \sqrt{\frac{8}{\pi e}}t. \end{equation*} It then follows that \begin{equation*} \frac{r}{2} \leq E\left[\eta(T_r)\right] = \sqrt{\frac{8}{\pi e}}E[T_r], \end{equation*} so that \begin{equation*} E[T_r] \geq \frac{r}{2}\sqrt{\frac{\pi e}{8}} \approx 0.5116r. \end{equation*} In this example, the mean of $T_r$, the first passage time of $|W^2_t - t|$ to $r$, is difficult to compute. \end{ex} Before ending this section, we would like to point out that the finiteness of the mean of $T_r$ is important, because this issue can cause problems in other applications. A good illustration is given in the following example. \begin{ex} Consider \begin{equation*} W^+_t = \max\{0, B_t\}, \end{equation*} where, again, $B_t$ is a standard Brownian motion. 
The first passage time of $W^+_t$ to $r > 0$ coincides with the first passage time of $W_t$ to $r$, which has an infinite mean. Moreover, $W^+_t$ is not a Markov process, so the results of Proposition 2.2 do not apply to $W^+_t$. Here $T_r$ is finite with probability 1, but it is not NBU; see \eqref{eq:NBU}.\\ Since $W_{s+t} \sim W_s + (W_{s+t}-W_s) = W_s + \widetilde{W}_t$, where $\widetilde{W}_t$ is an independent Brownian motion, \begin{equation*} W^+_{T_r + t} \sim (W_{T_r}+\widetilde{W}_t)^+ = (r + \widetilde{W}_t)^+ \leq r + \widetilde{W}_t^+. \end{equation*} When $\widetilde{W}^+_t$ hits $r$, $\widetilde{W}_t^++r$ hits $2r$, but $W^+_{T_r + t}$ may not yet have hit $2r$. Thus $T_{2r} \geq_{st} T_r + \widetilde{T}_r$ and $T_{2r}-T_r \geq_{st} T_r$. More generally, $T_{(k+1)r}-T_{kr} \geq_{st} T_r$. Thus, following our previous argument, we have \begin{equation*} rN_t \leq \max_{0 \leq s \leq t}\widetilde{W}_s \leq r(N_t + 1), \end{equation*} but in this case $\{N_t\}$ is a renewal process with infinite mean interarrival time. There is no stationary distribution $G$ (of $T_r$), and \eqref{eq:atupper} does not hold. In this case, $E[a(T_r)] = E[\sqrt{2/\pi}\sqrt{T_r}] = \infty$. This shows that upper bounding $E[a(T_r)]$ is challenging in general. \end{ex} \section{The rate of growth of the maximum of Bessel processes} This section is dedicated to the rate of growth of the maximum of Bessel processes, which can be studied via the inequalities obtained in Sections 2 and 3. We consider the hitting time at which the radius (equivalently, the surface area or volume) of the largest of several three-dimensional Brownian spheres reaches a predefined boundary. \begin{ex} Let $\mathbf{p}_i = (p^{(1)}_i, p^{(2)}_i, p^{(3)}_i)$, $i = 1, \ldots, d$, be a set of $d$ points that perhaps represent some identified tumours in a human body, i.e. in three-dimensional space. 
At time $t = 0$, we start a three-dimensional Brownian motion centered at each one of these points. For each $i$, a sphere of radius $r_i(t)$ is given, where the radius equals the distance of the location of the three-dimensional Brownian motion at time $t$ from its starting point $\mathbf{p}_i$. We are interested in qualitative information on how long it will take before the radius (volume) of at least one of the spheres exceeds a fixed level $r > 0$, as $d$ varies. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.5]{spheres.jpg} \caption{Illustration of Examples 5 and 6.} \end{center} \end{figure} Let $\mathcal{X} = \{\mathbf{X}_i(t) = (X^{(1)}_i(t), X^{(2)}_i(t), X^{(3)}_i(t))\}_{i=1, \ldots, d}$, where, for each $i$, $\mathbf{X}_i(t) = \mathbf{p}_i + \mathbf{B}_i(t)$ corresponds to the location of the three-dimensional Brownian motion $\mathbf{B}_{i}(t)$ started at the point $\mathbf{p}_i$, and let $\mathcal{B}(t) = \{(B^{(1)}_i(t), B^{(2)}_i(t), B^{(3)}_i(t))\}_{i = 1, \ldots, d}$. The radius of the largest sphere is given by $\|\mathcal{B}(t)\| = \sup_{j\leq d}\sqrt{\sum^3_{i=1}(B^{(i)}_{j}(t))^2}$.\\ Observe that, for each $j$, $Y_j(t) = \sum^3_{i=1}(B^{(i)}_j(t))^2$ is a submartingale, since \begin{equation*} E\left[Y_j(t) \mid \mathcal{F}_s\right] = \sum^3_{i=1}E\left[(B_j^{(i)}(t))^2\,\big|\,\mathcal{F}_s\right] \geq \sum^3_{i=1}(B^{(i)}_j(s))^2 = Y_j(s). \end{equation*} It follows that $Y_t := \max_{1 \leq j \leq d}Y_j(t)$ is a submartingale, because \begin{align*} E[Y_t \mid \mathcal{F}_s] & = E[\max\{Y_1(t), \ldots, Y_d(t)\} \mid \mathcal{F}_s]\\ & \geq \max\{E[Y_j(t) \mid\mathcal{F}_s],\ j = 1, \ldots, d\}\\ & \geq \max\{Y_j(s),\ j = 1, \ldots, d\} = Y_s. \end{align*} Let $T_{r,d}$ be the first passage time of $\|\mathcal{B}(t)\| = \sup_{j \leq d}\sqrt{\sum^3_{i=1}(B^{(i)}_j(t))^2}$ to $r$, or equivalently the first passage time of $Y_t = \|\mathcal{B}(t)\|^2$ to $r^2$. 
Then \begin{align*} \Pr\{T_{r,d} \leq t\} & = \Pr\left\{\sup_{s \leq t}Y_s \geq r^2\right\}\\ & \leq \frac{E[Y_t]}{r^2} = \frac{t}{r^2}E\left[\max_{1 \leq i \leq d}\chi^2_{3,i}\right]. \end{align*} Therefore, applying \eqref{eq:modre}, we can write \begin{equation} \frac{1}{2} \leq \frac{E\left[\max_{1 \leq i \leq d}\chi^2_{3,i}\right]}{r^2} E[T_{r,d}] \quad \Longrightarrow \quad E[T_{r,d}] \geq \frac{r^2}{2E\left[\max_{1 \leq i \leq d}\chi^2_{3,i}\right]}. \label{eq:lowerbessel} \end{equation} It can be shown that $E\left[\max_{1 \leq i \leq d}\chi^2_{3,i}\right] \leq 3\sum^d_{i=1} i^{-1}$ (this follows from the Corollary on page 266 of \cite{barlowetall72}), and hence \eqref{eq:lowerbessel} can be rewritten as \begin{equation} E[T_{r,d}] \geq \frac{r^2}{6\sum^d_{i=1}i^{-1}}. \end{equation} We can also approximate the values of $E\left[\sqrt{\max_{1 \leq i \leq d}\chi^2_{3,i}}\right]$ via simulation. The corresponding values for various $d$ are tabulated in Table \ref{table:sim}: \begin{table}[h] \begin{center} \begin{tabular}{ |c|cccccc| } \hline $d$ & 1 & 2 & 3 & 4 & 5 & 10 \\\hline $E\left[\sqrt{\max_{1 \leq i \leq d}\chi^2_{3,i}}\right]$ & 1.599 & 1.979 & 2.173 & 2.324 & 2.413 & 2.720 \\\hline\hline $d$ & 15 & 20 & 30 & 40 & 50 & 100\\\hline $E\left[\sqrt{\max_{1 \leq i \leq d}\chi^2_{3,i}}\right]$ & 2.875 & 2.987 & 3.132 & 3.237 & 3.307 & 3.527\\\hline \end{tabular} \caption{Approximation of $E\left[\sqrt{\max_{1 \leq i \leq d}\chi^2_{3,i}}\right]$ via 10,000 simulations.} \label{table:sim} \end{center} \end{table} \end{ex} In general, let $\{X_i(t), t \geq 0\}_{i = 1, \ldots, d}$ be iid copies of a submartingale $\{X(t), t \geq 0\}$. Let $T^{(r)}_i$ be the first passage time to $r$ for $\{X_i(t), t\geq 0\}$; then $\min_{1 \leq i \leq d}T^{(r)}_i$ is the first passage time to $r$ for $\max_{1 \leq i \leq d}X_i(t)$. Let $\{X(t), t\geq 0\}$ be independent of $\{X_i(t), t \geq 0\}_{i = 1,\ldots, d}$, with $T_r$ its first passage time to $r$. 
Note that \begin{equation*} \int\Pr\{T_r \leq t\}dF_{\min\{T^{(r)}_1, \ldots, T^{(r)}_d\}}(t) = \Pr\left\{T_r \leq \min\{T^{(r)}_1, \ldots, T^{(r)}_d\}\right\} \geq (d+1)^{-1}. \end{equation*} Since \begin{equation*} \Pr\{T_r \leq t\} \leq \frac{E[X(t)]}{r} = \frac{\kappa(t)}{r}, \end{equation*} we obtain \begin{align*} (d+1)^{-1} &\leq \int\Pr\{T_r \leq t\}dF_{\min\{T^{(r)}_1, \ldots, T^{(r)}_d\}}(t) \\ &\leq \frac{E[\kappa(\min\{T^{(r)}_1, \ldots, T^{(r)}_d\})]}{r}. \end{align*} As a result, $E[\kappa(\min\{T^{(r)}_1, \ldots, T^{(r)}_d\})] \geq \frac{r}{d+1}$. To apply this to the Bessel setting, we use $\|B_t\|^2$ in place of $X_t$ and $r^2$ in place of $r$. Recall that $\kappa(t) = E\left[\sum^3_{i=1}(B^{(i)}_j(t))^2\right] = 3t.$ This gives \begin{equation*} E\left[\min_{1 \leq j \leq d}T^{(r)}_j\right] \geq \frac{r^2}{3(d+1)}. \end{equation*} This result is not as good as what we obtained in Example 5, in which $E[T_{r,d}] \geq \frac{r^2}{6\sum^d_{i=1}i^{-1}}$. But there we used the result that $\chi^2_3$ is IFR (increasing failure rate); in other examples, we might not know the type of distribution that $T^{(r)}_i$ follows.\\ The following example studies the upper bound for the Bessel process discussed in Example 5. \begin{ex} (Example 6: a non-Markov process for which the upper bound holds) Consider the Bessel process studied in Example 5 again. $\|B_t\|$ is known to be strongly Markov but $\max_{1\leq j \leq d}\|B^{(j)}_t\|$ may not be. Note that \begin{equation*} B_{t+s} = B_s + (B_{t+s} - B_s) =_d B_s + \widetilde{B}_t, \end{equation*} where $\{\widetilde{B}_t, t \geq 0\}$ is distributed as $\{B_t, t \geq 0\}$ and $B$ is independent of $\widetilde{B}$. It follows that \begin{equation*} \|B_{t+s}\| =_d \|B_s + \widetilde{B}_t\| \leq_{st} \|B_s\| + \|\widetilde{B}_t\|, \end{equation*} and hence \begin{equation*} \|B_{t+s}\| - \|B_s\| \leq_{st} \|\widetilde{B}_t\|, \quad t \geq 0, \end{equation*} independently of $\mathcal{F}_s$, the history accumulated up to time $s$.
Thus, independently of $\mathcal{F}_s$, the first passage time of $\|B_{t+s}\| - \|B_s\|$ to $r$ is stochastically larger than $T_r$.\\ Now consider $\|B^{(j)}_t\|$, $j = 1, \ldots, d$. Given $\mathcal{F}_s$, which denotes the history of all $d$ processes up to time $s$, the conditional distribution of each $\|B^{(j)}_{t+s}\| - \|B^{(j)}_s\|$ is stochastically smaller than that of $\|\widetilde{B}^{(j)}_t\|$ for all $t \geq 0$. It follows that the first passage time of $Z_t := \max_{1 \leq j \leq d}\|B^{(j)}_t\|$ from $kr$ to $(k+1)r$ is stochastically larger than the minimum of $d$ random variables, each distributed as $T_r$. Hence $T^*_{(k+1)r}- T^*_{kr} \geq_{st} T^*_r$ for all $k \geq 1$. It further follows that $T^*_{kr}$ is stochastically larger than the convolution of $k$ iid random variables, each distributed as $T^*_r$, where $T^*_r = \min_{1 \leq j \leq d}T_r^{(j)}$ $(= T_{r,d})$.\\ Letting $\widetilde{N}_t = \max\{k: T_{kr}^* \leq t\}$ and $N(t) = \max\{k: \sum^k_{i=1}T^*_{r,i} \leq t\}$, we can write \begin{eqnarray*} E[\widetilde{N}_t] \leq E[N(t)] \leq \frac{t}{E[T_{r,d}]}+1, \end{eqnarray*} since $T^*_r = T_{r,d}$ is NBU. Next, denoting $Y_d =_d \max_{1 \leq \nu \leq d}\{\chi^2_{3,\nu}\}$, we have \begin{equation*} \sqrt{t}E[\sqrt{Y_d}] = E\left[\max_{1\leq j\leq d}\|B^{(j)}_t\|\right] \leq r\left(\frac{t}{E[T_r^*]}+1\right). \end{equation*} It follows, by letting $t = E[T_{r,d}]$, that $\sqrt{E[T_{r,d}]}\,E[\sqrt{Y_d}] \leq 2r$. As a result, we have \begin{equation} E[T_{r,d}] \leq \frac{4r^2}{[E\sqrt{Y_d}]^2}. \end{equation} The two-sided bound in this case is thus \begin{equation*} \frac{r^2}{4[E\sqrt{Y_d}]^2} \leq E[T_{r,d}] \leq \frac{4r^2}{[E\sqrt{Y_d}]^2}, \end{equation*} or equivalently, \begin{equation} a^{-1}(r/2) \leq E[T_{r,d}] \leq a^{-1}(2r), \end{equation} which is the same as what we obtained in eq. \eqref{eq:mainresult2}.
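To make the constants in the bounds above concrete, here is a small Monte Carlo sketch (our own illustration with an arbitrary seed and sample size; not part of the original derivation). It re-estimates $E[\sqrt{\max_{1\le i\le d}\chi^2_{3,i}}]$ as tabulated in Table \ref{table:sim}, and checks the two-sided bound in the one case where the expected first passage time is known exactly: for $d=1$, optional stopping applied to the martingale $\|B_t\|^2 - 3t$ gives $E[T_r] = r^2/3$.

```python
import numpy as np

# Monte Carlo sketch: estimate E[sqrt(max_{i<=d} chi2_{3,i})], the constant in
# the two-sided bound, and sanity-check the bound for d = 1 and r = 1, where
# E[T_r] = r^2/3 by optional stopping applied to ||B_t||^2 - 3t.
rng = np.random.default_rng(0)
n = 200_000  # number of Monte Carlo replications

def mean_sqrt_max_chi2(d: int) -> float:
    """Estimate E[sqrt(max of d iid chi-square(3) variables)]."""
    samples = rng.chisquare(df=3, size=(n, d))
    return float(np.sqrt(samples.max(axis=1)).mean())

for d in (1, 2, 5, 10):
    print(d, round(mean_sqrt_max_chi2(d), 3))  # compare with Table values

# Sanity check for d = 1, r = 1: E[T_r] = 1/3 must lie between the bounds
# r^2/(4 E[sqrt(Y_1)]^2) and 4 r^2 / E[sqrt(Y_1)]^2.
c = mean_sqrt_max_chi2(1)          # approx 1.596 = 2*sqrt(2/pi)
lower, upper = 1 / (4 * c**2), 4 / c**2
assert lower <= 1 / 3 <= upper
```

The $d=1$ estimate should land near the closed-form value $E[\chi_3] = 2\sqrt{2/\pi} \approx 1.596$, consistent with the entry 1.599 in Table \ref{table:sim}.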
Note that, in this example, the underlying process $Z_t = \max_{1 \leq j \leq d}\|B^{(j)}_t\|$ is not Markov. However, each $\|B^{(j)}_t\|$ is Markov and so each $T^{(j)}_r$ is NBU. It follows that $T^*_r = \min_{1 \leq j \leq d}T_r^{(j)}$ is NBU and thus has a finite mean.\\ Finally, we would like to emphasize that since the constants involved in the bounds derived are independent of the size of $d$, the inequalities obtained can be used to derive quantitative comparisons of the expected first passage times for processes with different values of $d$. That is, if we include the dependence on $d$ for different values, say $d_1$ and $d_2$, and take ratios, we have \begin{equation*} E[T_{2, r, d_2}^p] \approx \frac{E[\sup_{i \leq d_1}|B_1^{(i)}|^p]}{E[\sup_{i \leq d_2}|B_2^{(i)}|^p]}E[T_{1, r, d_1}^p], \end{equation*} which gives us information on the relative growth rate between the maxima of Bessel processes. \end{ex} All the examples shown so far involve Brownian motion; below is an example that demonstrates how the bounds can be applied to other types of random variables whose distribution is not Gaussian. \begin{ex} Let $X_1, X_2, \ldots, X_n$ be non-negative, possibly dependent random variables with $\Pr\{X_i = X_j\} = 0$ for all $i \neq j$. Let $T_r$ denote the $r$th smallest amongst $X_1, \ldots, X_n$, let $F_i$ be the marginal CDF of $X_i$, and let $N(t) = \#\{i: X_i \leq t\}$. Then \begin{equation*} a(t) = E[N(t)] = \sum^n_{i=1}F_i(t) \end{equation*} and \begin{equation*} E[a(T_r)] = E\left[\sum^n_{i=1}F_i(T_r)\right] \geq \frac{r}{2}. \end{equation*} Note that $N(T_r) \equiv r$, whereas \begin{equation*} E[a(T_r)] = E[\widetilde{N}(T_r)] \geq \frac{r}{2}, \end{equation*} illustrating the decoupling aspect of this inequality.
Again, $\widetilde{N}(t) = \#\{i: \widetilde{X}_i \leq t \}$, where $\{\widetilde{X}_i\}_{i = 1, \ldots, n}$ are independent copies of $\{X_i\}_{i = 1, \ldots, n}$. It should be noted that this lower bound is obtained without knowledge of the dependence structure of the $X_i$'s.\\ The above setting can be used to model a pool of debtors whose survival times (the times until they default) follow some distribution with a non-increasing density function, say the exponential distribution or certain Weibull distributions. In this case, $T_r$ can be interpreted as the time when $r$ (out of $n$) debtors have gone bankrupt, which can be an important time stamp that triggers the termination of payment to a lower tranche of a collateralized debt obligation (CDO). By assuming stationarity, we can treat the data as two sets of independent copies and use the historical data to estimate the current set of individuals (random variables). Unlike the use of copulas to model the event time, the results presented previously can provide bounds for the event time without knowing the dependence structure of the debtors. \end{ex} \section{Conclusion} In this paper, we derive bounds for the expectation of the stopping time of arbitrary stochastic processes. The approach we use (see \cite{delapena97} and \cite{delapenayang}) involves relating the boundary crossing of a non-random function to that of a random function. In situations where the moment of the maximal process is available, the results shown can be helpful for the estimation of $E[T_r]$. In particular, for non-negative, continuous and time-homogeneous Markovian processes, with the assumption that $T_{kr} - T_{(k-1)r} \geq_{st} T_r$ for $k \geq 2$, we show that the order of magnitude is the same up to constants. The result of \eqref{eq:rightbound} suggests that it is appropriate to view the process through $a(t)$, which serves as a clock for all processes with the same $a(t)$, as reflected in \eqref{eq:rate}.
The lower bound derived can be applied to arbitrary measurable processes and is particularly useful in the study of renewal processes. \\%The upper bound is valid in many setups, including a class of time-homogeneous Markov processes.\\ \bibliographystyle{spmpsci}
https://arxiv.org/abs/1212.0733
From Boundary Crossing of Non-Random Functions to Boundary Crossing of Stochastic Processes
One problem of wide interest involves estimating expected crossing times. Several tools have been developed to solve this problem, beginning with the works of Wald and the theory of sequential analysis. An extension of this approach is provided by the optional sampling theorem in conjunction with martingale inequalities. Deriving an explicit closed-form solution for the expected crossing times may be difficult. In this paper, we provide a framework that can be used to estimate expected crossing times of arbitrary stochastic processes. Our key assumption is the knowledge of the average behavior of the supremum of the process. Our results include a universal sharp lower bound on the expected crossing times.
https://arxiv.org/abs/2106.03306
HoroPCA: Hyperbolic Dimensionality Reduction via Horospherical Projections
This paper studies Principal Component Analysis (PCA) for data lying in hyperbolic spaces. Given directions, PCA relies on: (1) a parameterization of subspaces spanned by these directions, (2) a method of projection onto subspaces that preserves information in these directions, and (3) an objective to optimize, namely the variance explained by projections. We generalize each of these concepts to the hyperbolic space and propose HoroPCA, a method for hyperbolic dimensionality reduction. By focusing on the core problem of extracting principal directions, HoroPCA theoretically better preserves information in the original data such as distances, compared to previous generalizations of PCA. Empirically, we validate that HoroPCA outperforms existing dimensionality reduction methods, significantly reducing error in distance preservation. As a data whitening method, it improves downstream classification by up to 3.9% compared to methods that don't use whitening. Finally, we show that HoroPCA can be used to visualize hyperbolic data in two dimensions.
\subsection{The Poincar\'e Model of Hyperbolic Space} Hyperbolic geometry is a Riemannian geometry with constant negative curvature $-1$, where curvature measures deviation from flat Euclidean geometry. For easier visualization, we work with the $d$-dimensional Poincar\'e model of hyperbolic space: $\H^d=\{{x}\in\mathbb{R}^d: \|x \|<1\}$, where $\|\cdot \|$ is the Euclidean norm. In this model, the Riemannian distance can be computed in Cartesian coordinates by: \begin{equation}\label{eq:hyp_distance} d_\H (x, y)=\arccosh \left(1+2\frac{\|x-y\|^2}{(1-\|x\|^2)(1-\|y\|^2)} \right). \end{equation} \paragraph{Geodesics} Shortest paths in hyperbolic space are called \emph{geodesics}. In the Poincar\'e model, they are represented by straight segments going through the origin and circular arcs perpendicular to the boundary of the unit ball~(\cref{fig:horo_geo}). \paragraph{Geodesic submanifolds} A submanifold $M \subset \H^d$ is called \emph{(totally) geodesic} if for every $x, y \in M$, the geodesic line connecting $x$ and $y$ belongs to $M$. This generalizes the notion of affine subspaces in Euclidean spaces. In the Poincar\'e model, geodesic submanifolds are represented by linear subspaces going through the origin and spherical caps perpendicular to the boundary of the unit ball. \input{tables/coord} \subsection{Directions in Hyperbolic space}\label{subsec:ideal} The notions of directions and of coordinates in a given direction can be generalized to hyperbolic spaces as follows. \paragraph{Ideal points} As with parallel rays in Euclidean spaces, geodesic rays in $\H^d$ that stay close to each other can be viewed as sharing a common \emph{endpoint at infinity}, also called an \emph{ideal point}. Intuitively, ideal points represent \emph{directions} along which points in $\H^d$ can move toward infinity.
The set of ideal points $\S_\infty^{d-1}$, called the \emph{boundary at infinity} of $\H^d$, is represented by the unit sphere $\S_\infty^{d-1}=\{ \|x\| = 1\}$ in the Poincar\'e model. We abuse notation and say that a geodesic submanifold $M \subset \H^d$ contains an ideal point $p$ if the boundary of $M$ in $\S_\infty^{d-1}$ contains $p$. \paragraph{Busemann coordinates} In Euclidean spaces, each \emph{direction} can be represented by a unit vector $w$. The \emph{coordinate} of a point $x$ in the direction of $w$ is simply the dot product $w \cdot x$. In hyperbolic geometry, directions can be represented by ideal points but dot products are not well-defined. Instead, we take a ray-based perspective: note that in Euclidean spaces, if we shoot a ray in the direction of $w$ from the origin, the coordinate $w \cdot x$ can be viewed as the \emph{normalized distance to infinity in the direction of that ray}. In other words, as a point $y=tw$ ($t>0$) moves toward infinity in the direction of $w$: \[ w \cdot x = \lim_{t \to \infty} \left( d(0, tw) - d(x, tw)\right). \] This approach generalizes to other geometries: given a unit-speed geodesic ray $\gamma(t)$, the \emph{Busemann function} $B_\gamma (x)$ of $\gamma$ is defined as:\footnote{Note that compared to the above formula, the sign convention is flipped due to historical reasons.} \[ B_\gamma (x) = \lim_{t \to \infty} \left( d(x, \gamma(t)) - t \right). \] Up to an additive constant, this function only depends on the endpoint at infinity of the geodesic ray, and not on the starting point $\gamma(0)$. Thus, given an ideal point $p$, we define the Busemann function $B_p(x)$ of $p$ to be the Busemann function of the geodesic ray that starts from the origin of the unit ball model and has endpoint $p$. Intuitively, it represents the \emph{coordinates} of $x$ in the direction of $p$. In the Poincar\'e model, there is a closed formula: \[ B_{p}(x) = \ln \frac{ \| p - x \|^2 }{1 - \| x\|^2}.
\] \paragraph{Horospheres} The level sets of Busemann functions $B_p (x)$ are called \emph{horospheres centered at} $p$. In this sense, they resemble spheres, which are level sets of distance functions. However, intrinsically as Riemannian manifolds, horospheres have curvature zero and thus also exhibit many properties of planes in Euclidean spaces. Every geodesic with endpoint $p$ is orthogonal to every horosphere centered at $p$. Given two horospheres with the same center, every orthogonal geodesic segment connecting them has the same length. In this sense, \textit{concentric horospheres resemble parallel planes in Euclidean spaces}. In the Poincar\'e model, horospheres are Euclidean spheres that touch the boundary sphere $\S_\infty^{d-1}$ at their ideal centers~(\cref{fig:horo_geo}). Given an ideal point $p$ and a point $x$ in $\H^d$, there is a unique horosphere $S(p, x)$ passing through $x$ and centered at $p$. \subsection{Geodesic Projections} PCA uses orthogonal projections to project data onto subspaces. Orthogonal projections are usually generalized to other geometries as \emph{closest-point projections}. Given a target submanifold $M$, each point $x$ in the ambient space is mapped to the closest point to it in $M$: \[ \pi^{\operatorname{G}}_M (x) = \argmin_{y \in M} d_\H(x, y). \] One could view $\pi^{\operatorname{G}}_M (\cdot)$ as the map that pushes each point $x$ along an orthogonal geodesic until it hits $M$. For this reason, it is also called \emph{geodesic projection}. In the Poincar\'e model, these can be computed in closed-form (see~\cref{sec:app_exp}). \subsection{Manifold Statistics} PCA relies on data statistics which do not generalize easily to hyperbolic geometry. One approach to generalize the arithmetic mean is to notice that it is the minimizer of the sum of squared distances to the inputs.
Motivated by this, the Fr\'echet mean~\cite{frechet1948elements} of a set of points $S$ in a Riemannian manifold $(M, d_M)$ is defined as: \[ \mu_M(S)\coloneqq \argmin_{y \in M} \sum_{x \in S}d_M(x, y)^2.\label{eq:frechet_mean} \] This definition only depends on the intrinsic distance of the manifold. For hyperbolic spaces, since squared distance functions are convex, $\mu_M(S)$ always exists and is unique.\footnote{For more general geometries, existence and uniqueness hold if the data is well-localized ~\cite{kendall1990probability}.} Analogously, the Fr\'echet variance is defined as: \begin{equation} \sigma^2_M(S)\coloneqq\frac{1}{|S|} \sum_{x\in S} d_M(x, \mu_M(S))^2. \label{eq:frechet_var} \end{equation} We refer to~\cite{huckemann2020statistical} for a discussion on different intrinsic statistics in non-Euclidean spaces, and a study of their asymptotic properties. \input{figures/hyperbolic_lines} \subsection{Hyperbolic Flags} \label{subsec:horo_components} In Euclidean spaces, one can take the linear spans of more and more components to define a nested sequence of linear subspaces, called a flag. To generalize this to hyperbolic spaces, we first need to adapt the notion of linear/affine spans. Recall that geodesic submanifolds are generalizations of affine subspaces in Euclidean spaces. \begin{definition} Given a set of points $S$ (that could be inside $\H^d$ or on the boundary sphere $\S_\infty^{d-1}$), the smallest geodesic submanifold of $\H^d$ that contains $S$ is called the \emph{geodesic hull} of $S$ and denoted by $\GH(S)$. \end{definition} Thus, given $K$ ideal points $p_1, p_2, \dots, p_K$ and a base point $b \in \H^d$, we can define a nested sequence of geodesic submanifolds $\GH(b,p_1) \subset \GH(b,p_1,p_2) \subset \dots \subset \GH(b,p_1, \dots, p_K)$. This will be our notion of flags. \begin{remark} The base point $b$ is only needed here for technical reasons, just like an origin $\mathbf{o}$ is needed to define linear spans in Euclidean spaces.
We will see next that it does not affect the projection results or objectives~(\cref{thm:horosphere-proj-base-point-independent}). \end{remark} \begin{remark} We assume that none of $b, p_1, \dots, p_K$ are in the geodesic hull of the other $K$ points. This is analogous to being linearly independent in Euclidean spaces. \end{remark} \subsection{Projections via Horospheres}\label{subsec:horo_proj} In Euclidean PCA, points are projected to the subspaces spanned by the given directions in a way that preserves coordinates in those directions. We seek a projection method in hyperbolic spaces with a similar property. Recall that coordinates are generalized by Busemann functions~(\cref{table:coord}), and that horospheres are level sets of Busemann functions. Thus, we propose a projection that preserves coordinates by moving points along horospheres. It turns out that this projection method also preserves distances better than the traditional geodesic projection. As a toy example, we first show how the projection is defined in the $K = 1$ case (i.e. projecting onto a geodesic) and why it tends to preserve distances well. We will then show how to use $K \ge 1$ ideal points simultaneously. \subsubsection{Projecting onto \texorpdfstring{$K=1$}{K = 1} Directions} For $K = 1$, we have one ideal point $p$ and base point $b$, and the geodesic hull $\GH(b,p)$ is just a geodesic $\gamma$. Our goal is to map every $x \in \H^d$ to a point $\pi^{\operatorname{H}}_{b,p} (x)$ on $\gamma$ that has the same Busemann coordinate in the direction of $p$: $$B_p(x) = B_p(\pi^{\operatorname{H}}_{b,p} (x)).$$ Since level sets of $B_p (x)$ are horospheres centered at $p$, the above equation simply says that $\pi^{\operatorname{H}}_{b,p}(x)$ belongs to the horosphere $S(p,x)$ centered at $p$ and passing through $x$. Thus, we define: \begin{equation} \pi^{\operatorname{H}}_{b,p} (x)\coloneqq \gamma \cap S(p, x). 
\end{equation} Another important property that $\pi^{\operatorname{H}}_{b,p}(\cdot)$ shares with orthogonal projections in Euclidean spaces is that it preserves distances along a direction -- lengths of geodesic segments that point to $p$ are preserved after projection~(\cref{fig:horo_geo_proj}): \begin{prop} \label{prop:horosphere-proj-preserv-best-case-1D} For any $x \in \H^d$, if $y \in \GH(x,p)$ then: \[ d_\H(\pi^{\operatorname{H}}_{b,p}(x), \pi^{\operatorname{H}}_{b,p}(y)) = d_\H(x,y). \] \end{prop} \begin{proof} This follows from the remark in \cref{subsec:ideal} about horospheres: every geodesic going through $p$ is orthogonal to every horosphere centered at $p$, and every orthogonal geodesic segment connecting concentric horospheres has the same length~(\cref{fig:horo_geo}). In this case, the segments from $x$ to $y$ and from $\pi^{\operatorname{H}}_{b,p}(x)$ to $\pi^{\operatorname{H}}_{b,p}(y)$ are two such segments, connecting $S(p,x)$ and $S(p,y)$. \end{proof} \input{figures/distance_preservation} \subsubsection{Projecting onto \texorpdfstring{$K>1$}{K > 1} Directions}\label{subsec:horo_proj_high} We now generalize the above construction to projections onto higher-dimensional submanifolds. We describe the main ideas here; \cref{appendix-sec:horo} contains more details, including an illustration in the case $K=2$ (\cref{fig:horo_proj_3d}). Fix a base point $b \in \H^d$ and $K > 1$ ideal points $\{p_1, \dots, p_K\}$. We want to define a map from $\H^d$ to $M \coloneqq \GH(b, p_1, \dots, p_K)$ that preserves the Busemann coordinates in the directions of $p_1, \dots, p_K$, i.e.: \[ B_{p_j} (x)=B_{p_j}\left( \pi^{\operatorname{H}}_{b,p_1,\dots,p_K}(x) \right)\text{ for every }j = 1, \dots, K. \] As before, the idea is to take the intersection with the horospheres centered at $p_j$'s and passing through $x$: \begin{align*} \pi^{\text{H}}_{b, p_1,\dots,p_K}: \H^d &\to M \\ x &\mapsto M \cap S(p_1, x) \cap \dots \cap S(p_K, x). 
\end{align*} It turns out that this intersection generally consists of two points instead of one. When that happens, one of them will be strictly closer to the base point $b$, and we define $\pi^{\operatorname{H}}_{b,p_1,\dots,p_K}(x)$ to be that point. As with~\cref{prop:horosphere-proj-preserv-best-case-1D}, $\pi^{\operatorname{H}}_{b,p_1,\dots,p_K}(\cdot)$ preserves distances along $K$-dimensional manifolds~(\cref{cor:horosphere-proj-preserve-some-distance}). In contrast, geodesic projections in hyperbolic spaces \emph{never} preserve distances (except between points already in the target): \begin{restatable}[]{prop}{geoshrink} \label{prop:geodesic-proj-shrink-path} Let $M \subset \H^d$ be a geodesic submanifold. Then every geodesic segment of distance at least $r$ from $M$ gets at least $\cosh(r)$ times shorter under the geodesic projection $\pi^{\operatorname{G}}_M (\cdot)$ to $M$: \[ \operatorname{length}(\pi^{\operatorname{G}}_M (I)) \leq \frac{1}{\cosh(r)} \operatorname{length}(I). \]\label{prop:distortion} In particular, the shrink factor grows exponentially as the segment $I$ moves away from $M$. $\qedsymbol$ \end{restatable} The proof is in~\cref{appendix-sec:orthogonal-projection-distortion}. \paragraph{Computation} Interestingly, horosphere projections can be computed without actually computing the horospheres. The key idea is that if we let $P = \GH(p_1, \dots, p_K)$ be the geodesic hull of the horospheres' centers, then the intersection $S(p_1, x) \cap \dots \cap S(p_K,x)$ is simply the orbit of $x$ under the rotations around $P$. (This is true for the same reason that spheres whose centers lie on the same axis must intersect along a circle around that axis.) Thus, $\pi^{\operatorname{H}}_{b,p_1,\dots,p_K}(\cdot)$ can be viewed as the map that rotates $x$ around until it hits $M$.
It follows that it can be computed by: \setlist{nolistsep} \begin{enumerate}[topsep=0ex,leftmargin=*] \item Find the geodesic projection $c = \pi^{\operatorname{G}}_P(x)$ of $x$ onto $P$. \item Find the geodesic $\alpha$ on $M$ that is orthogonal to $P$ at $c$. \item Among the two points on $\alpha$ whose distance to $c$ equals $d_\H(x, c)$, return the one closer to $b$. \end{enumerate} The detailed computations and proof that this recovers horospherical projections are provided in~\cref{appendix-sec:horo}. \subsection{Intrinsic Variance Objective}\label{subsec:obj} In Euclidean PCA, directions are chosen to maximally preserve information from the original data. In particular, PCA chooses directions that maximize the Euclidean variance of projected data. To generalize this to hyperbolic geometry, we define an analog of variance that is intrinsic, i.e. dependent only on the distances between data points. As we will see in \cref{sec:horopca}, having an intrinsic objective helps make our algorithm \emph{location (or base point) independent}. The usual notion of Euclidean variance is the sum of squared distances to the mean of the projected datapoints. Generalizing this is challenging because non-Euclidean spaces do not have a canonical choice of mean. Previous works have generalized variance either by using the \emph{unexplained variance} or \emph{Fr\'echet variance}. The former is the sum of squared residual distances to the projections, and thus avoids computing a mean. However, it is not intrinsic. The latter is intrinsic \cite{fletcher2004pga} but involves finding the Fr\'echet mean, which is not necessarily a canonical notion of mean and can only be computed by gradient descent. Our approach uses the observation that in Euclidean space: \[ \sigma^2(S)=\frac{1}{n}\sum_{x\in S} \left\| x - \mu(S) \right \|^2 = \frac{1}{2n^2} \sum_{x,y\in S} \| x-y \|^2.
\] Thus, we propose the following generalization of variance: \begin{equation} \sigma_\mathbb{H}^2(S) = \frac{1}{2n^2} \sum_{x,y\in S} d_\mathbb{H}(x, y)^2.\label{eq:variance-as-sum-of-squared-distances} \end{equation} This function agrees with the usual variance in Euclidean space, while being a function of distances only. Thus it is well defined in non-Euclidean space, is easily computed, and, as we will show next, has the desired invariance due to isometry properties of horospherical projections. \subsection{Experimental Setup}\label{subsec:exp_setup} \paragraph{Baselines} We compare \textsc{HoroPCA}{} to several dimensionality reduction methods, including: \begin{enumerate*}[label=(\arabic*)] \item Euclidean PCA, which should perform poorly on hyperbolic data, \item Exact PGA, \item Tangent PCA (tPCA), which approximates PGA by moving the data to the tangent space at the Fr\'echet mean and then solving Euclidean PCA, \item BSA, \item Hyperbolic Multi-dimensional Scaling (hMDS)~\cite{sala2018representation}, which takes a distance matrix as input and recovers a configuration of points that best approximates these distances, \item Hyperbolic autoencoder (hAE) trained with gradient descent~\cite{ganea2018hyperbolic,hinton2006reducing}. \end{enumerate*} To demonstrate their dependence on base points, we also include two baselines that perturb the base point in PGA and BSA. We open-source our implementation\footnote{\url{https://github.com/HazyResearch/HoroPCA}} and refer to~\cref{sec:app_exp} for implementation details for all baselines and \textsc{HoroPCA}{}. \paragraph{Datasets} For dimensionality reduction experiments, we consider standard hierarchical datasets previously used to evaluate the benefits of hyperbolic embeddings.
More specifically, we use the datasets in~\cite{sala2018representation}, including a fully balanced tree, a phylogenetic tree, a biological graph of disease relationships, and a graph of Computer Science (CS) Ph.D. advisor-advisee relationships. These datasets have 40, 344, 516 and 1025 nodes, respectively, and we use the code from~\cite{gu2018learning} to embed them in the Poincar\'e ball. For data whitening experiments, we reproduce the experimental setup from~\cite{cho2019large} and use the Polbooks, Football and Polblogs datasets, which have 105, 115 and 1224 nodes, respectively. These real-world networks are embedded in two dimensions using~\citet{chamberlain2017neural}'s embedding method. \paragraph{Evaluation metrics} To measure distance-preservation after projection, we use average distortion. If $\pi(\cdot)$ denotes a mapping from high- to low-dimensional representations, the average distortion of a dataset $S$ is computed as: \[ \frac{1}{\binom{|S|}{2}} \sum_{x\neq y\in S}\frac{|d_\H(\pi(x), \pi(y))-d_\H(x, y)|}{d_\H(x, y)}. \] We also measure the Fr\'echet variance in~\cref{eq:frechet_var}, which is the analogue of the objective that Euclidean PCA optimizes\footnote{All mentioned PCA methods, including \textsc{HoroPCA}{}, optimize for some form of variance but \emph{not} Fr\'echet variance or distortion.}. Note that the mean in~\cref{eq:frechet_var} cannot be computed in closed form and we therefore compute it with gradient descent. \subsection{Dimensionality Reduction}\label{subsec:exp_reduce} We report metrics for the reduction of 10-dimensional embeddings to two dimensions in~\cref{tab:pytorch_reduction}, and refer to~\cref{sec:app_exp} for additional results, such as more component and dimension configurations. All results suggest that \textsc{HoroPCA}{} better preserves information contained in the high-dimensional representations. On distance preservation, \textsc{HoroPCA}{} outperforms all methods, with significant improvements on larger datasets.
This supports our theoretical result that horospherical projections better preserve distances than geodesic projections. Furthermore, \textsc{HoroPCA}{} also outperforms existing methods on the explained Fr\'echet variance metric on all but one dataset. This suggests that our distance-based formulation of the variance~(\cref{eq:variance-as-sum-of-squared-distances}) effectively captures variations in the data. We also note that, as expected, both PGA and BSA are sensitive to base point choices: adding Gaussian noise to the base point leads to significant drops in performance. In contrast, \textsc{HoroPCA}{} is by construction base-point independent. \subsection{Hyperbolic Data Whitening}\label{subsec:exp_cls} An important use of PCA is for data whitening, as it allows practitioners to remove noise and decorrelate the data, which can improve downstream tasks such as regression or classification. Recall that standard PCA data whitening consists of (i) finding principal directions that explain the data, (ii) calculating the coordinates of each data point along these directions, and (iii) normalizing the coordinates for each direction (to have zero mean and unit variance). Because of the close analogy between \textsc{HoroPCA}{} and Euclidean PCA, these steps map easily to the hyperbolic case, where we (i) use \textsc{HoroPCA}{} to find principal directions (ideal points), (ii) calculate the Busemann coordinates along these directions, and (iii) normalize them as Euclidean coordinates. Note that this yields Euclidean representations, which allow leveraging powerful tools developed specifically for learning on Euclidean data. We evaluate the benefit of this whitening step on a simple classification task. We compare to directly classifying the data with a Euclidean Support Vector Machine (eSVM) or its hyperbolic counterpart (hSVM), and also to whitening with tPCA.
Note that most baselines in~\cref{subsec:exp_setup} are incompatible with data whitening: hMDS does not learn a transformation that can be applied to unseen test data, while methods like PGA and BSA do not naturally return Euclidean coordinates for us to normalize. To obtain another baseline, we use a logarithmic map to extract Euclidean coordinates from PGA. \input{tables/cls} We reproduce the experimental setup from~\cite{cho2019large}, who split the datasets into 50\% train and 50\% test sets, run classification on 2-dimensional embeddings, and average results over 5 different embedding configurations as was done in the original paper~(\cref{tab:cls}).\footnote{Note that the results slightly differ from~\cite{cho2019large}, which could be because of different implementations or data splits.} \textsc{HoroPCA}{} whitening improves downstream classification on all datasets compared to eSVM and hSVM without whitening, and to tPCA and PGA whitening. This suggests that \textsc{HoroPCA}{} can be leveraged for hyperbolic data whitening. Further, this confirms that Busemann coordinates do capture variations in the original data. \subsection{Visualizations}\label{subsec:exp_viz} When learning embeddings for ML applications (e.g. classification), increasing the dimensionality can significantly improve the embeddings' quality. To effectively work with these higher-dimensional embeddings, it is useful to visualize their structure and organization, which often requires reducing their representations to two or three dimensions. Here, we consider embeddings of the mammals subtree of the WordNet noun hierarchy learned with the algorithm from~\cite{nickel2017poincare}. We reduce embeddings to two dimensions using PGA and \textsc{HoroPCA}{} and show the results in~\cref{fig:poincare_wordnet}. We also include more visualizations for PCA and BSA in~\cref{fig:poincare_wordnet_full} in the Appendix. As we can see, the reduced representations obtained with \textsc{HoroPCA}{} yield better visualizations.
For instance, we can see some hierarchical patterns such as ``feline hypernym of cat'' or ``cat hypernym of burmese cat''. These patterns are harder to visualize for other methods, since they do not preserve distances as well as \textsc{HoroPCA}{}, e.g.\ PGA has 0.534 average distortion on this dataset compared to 0.078 for \textsc{HoroPCA}{}.

\section{Horospherical Projection: Proofs and Discussions}\label{appendix-sec:horo}
\input{appendix/horo}

\section{Geodesic Projection: Distortion Analysis}\label{appendix-sec:orthogonal-projection-distortion}
\input{appendix/geodesic_proj_distort}

\input{tables/datasets}

\section{Additional Experimental Details}\label{sec:app_exp}
\input{appendix/experiments}

\subsection{Well-definedness}
\label{appendix-subsec:horosphere-proj-well-defined}

Recall that given a base point $b \in \H^d$ and $K > 1$ ideal points $\{p_1, \dots, p_K\}$, we would like to define $\pi^{\operatorname{H}}_{b, p_1, \dots, p_K}$ by $$x \to M \cap S(p_1, x) \cap S(p_2, x) \cap \dots \cap S(p_K, x),$$ where $M = \GH(b, p_1, \dots, p_K)$ is the target submanifold and $S(p_j, x)$ is the horosphere centered at $p_j$ and passing through $x$. For this definition to make sense, the intersection on the right-hand side must contain exactly one point for each $x \in \H^d$. Unfortunately, this is not automatic: in general, the intersection consists of two points. Nevertheless, we will show that there is a consistent way to choose one of these two points, making the function $\pi^{\operatorname{H}}_{b,p_1,\dots,p_K}$ well-defined. This is the content of \cref{thm:horosphere-proj-well-defined}.

First, to understand the above intersection, we give a more concrete description of $\cap_j S(p_j, x)$.

\begin{lemma} \label{lemma:intersection-of-horospheres-is-orbit} Let $P = \GH(p_1, p_2, \dots, p_K)$.
Then for every $x \in \H^d$, the intersection of horospheres $$S(x) = S(p_1, x) \cap S(p_2, x) \cap \dots \cap S(p_K, x)$$ is precisely the orbit of $x$ under the group $G$ of rotations around $P$. \end{lemma}

\begin{proof} First, note that every rotation around $P$ preserves the horospheres $S(p_j, x)$, just as every rotation around an axis preserves every sphere whose center lies on that axis. It follows that $S(x)$ is preserved by $G$. In particular, the orbit of $x$ under $G$ is contained in $S(x)$.

It remains to show that $S(x)$ contains no other points. To this end, consider any $y \neq x$ in $S(x)$. The perpendicular bisector $B$ of $x$ and $y$ is a totally geodesic hyperplane of $\H^d$ that contains every $p_j$ (because each $p_j$ is intuitively the center of a sphere that goes through $x$ and $y$). Thus, by the definition of geodesic hull, $B \supset P$. In particular, the reflection through $B$ sends $x$ to $y$ and fixes every point in $P$. Now take any geodesic hyperplane $A$ that contains both $P$ and $y$, so that the reflection through $A$ fixes $y$ and every point in $P$. Then the composition of the reflections through $B$ and $A$ is a rotation that sends $x$ to $y$ and fixes every point in $P$. In other words, it is a rotation around $P$ that sends $x$ to $y$. Therefore, $y$ belongs to the orbit of $x$ under $G$. \end{proof}

\begin{cor} \label{cor:intersection-of-horospheres-is-sphere} If $x \in P$ then $S(x) = \{x\}$. Otherwise, let $\pi^{\operatorname{G}}_P (x)$ be the geodesic projection of $x$ onto $P$, and $Q(x)$ be the geodesic submanifold that orthogonally complements $P$ at $\pi^{\operatorname{G}}_P(x)$. Then $S(x) \subset Q(x)$ and is precisely the (hyper)sphere in $Q(x)$ that is centered at $\pi^{\operatorname{G}}_P(x)$ and passing through $x$. \end{cor}

\begin{proof} This follows from \cref{lemma:intersection-of-horospheres-is-orbit}. If $x \in P$ then every rotation around $P$ fixes $x$, so the orbit of $x$ is just itself.
Now consider the case $x \not\in P$. All rotations around $P$ must preserve $\pi^{\operatorname{G}}_P (x)$ and the orthogonal complement $Q(x)$ of $P$ at $\pi^{\operatorname{G}}_P (x)$. Furthermore, when restricted to the space $Q(x)$, these rotations are precisely the rotations in $Q(x)$ around the point $\pi^{\operatorname{G}}_P (x)$. Thus, for every $y \in Q(x)$, the orbit of $y$ under $G$ is a sphere in $Q(x)$ centered at $\pi^{\operatorname{G}}_P (x)$. In particular, $S(x)$, which is the orbit of $x$, is the sphere in $Q(x)$ that is centered at $\pi^{\operatorname{G}}_P(x)$ and passing through $x$. \end{proof}

\cref{cor:intersection-of-horospheres-is-sphere} gives the following characterization of the intersection $$M \cap S(p_1, x) \cap S(p_2, x) \cap \dots \cap S(p_K,x) = M \cap S(x):$$ Note that $P = \GH(p_1, \dots, p_K)$ is a geodesic submanifold of $M = \GH(b, p_1, \dots, p_K)$ and that $\dim P = \dim M - 1$. Thus, through every point $y \in P$, there is a unique geodesic $\alpha$ on $M$ that goes through $y$ and is perpendicular to $P$.

\begin{cor} \label{cor:horosphere-proj-two-candidates} If $x \in P$ then $M \cap S(x) = \{x\}$. Otherwise, let $\alpha$ be the geodesic on $M$ that goes through $\pi^{\operatorname{G}}_P (x)$ and is perpendicular to $P$. Then $M \cap S(x)$ consists of the two points on $\alpha$ whose distance to $\pi^{\operatorname{G}}_P(x)$ equals $d_\H (x, \pi^{\operatorname{G}}_P(x))$. \end{cor}

\begin{proof} The case $x \in P$ is clear since $x \in M$ and $S(x) = \{x\}$ by \cref{cor:intersection-of-horospheres-is-sphere}. For the other case, let $Q(x)$ be the orthogonal complement of $P$ at $\pi^{\operatorname{G}}_P (x)$. Then by \cref{cor:intersection-of-horospheres-is-sphere}, $S(x)$ is precisely the sphere in $Q(x)$ that is centered at $\pi^{\operatorname{G}}_P(x)$ and passing through $x$. Now note that $M \cap Q(x) = \alpha$. Since $S(x) \subset Q(x)$, this gives $M \cap S(x) = M \cap Q(x) \cap S(x) = \alpha \cap S(x)$.
Finally, every sphere intersects every geodesic through its center at exactly two points, whose distance to the center equals the radius. \end{proof}

Therefore, to define $\pi^{\operatorname{H}}_{b,p_1,\dots,p_K}(x)$, we just need to choose one of the two points in $M \cap S(x)$ in a consistent way (so that the map is differentiable). To this end, note that $P$ cuts $M$ into two half-spaces, and exactly one of them contains the base point $b$. (Recall that $b \in M$ and $b \not\in P$ by the ``independence'' condition.) We denote this half by $P_b$. Then, while $S(x)$ contains two points in $M$, it only contains one point in $P_b$:

\begin{theorem} \label{thm:horosphere-proj-well-defined} Let $\alpha$ be the geodesic on $M$ that goes through $\pi^{\operatorname{G}}_P(x)$ and is perpendicular to $P$. Let $\alpha^+$ be the half of $\alpha$ contained in $P_b$. Then $\alpha^+$ intersects the sphere $S(x)$ at a unique point $x'$, which is also the unique intersection point between $P_b$ and $S(x)$. Thus, we can define $\pi^{\operatorname{H}}_{b,p_1, \dots, p_K}$ by $$\pi^{\operatorname{H}}_{b,p_1,\dots,p_K} (x) = \alpha^+ \cap S(x) = P_b \cap S(x).$$ Equivalently, $\pi^{\operatorname{H}}_{b,p_1, \dots, p_K}(x)$ is the point in $M \cap S(x)$ that is strictly closer to $b$. \end{theorem}

\begin{proof} By \cref{cor:intersection-of-horospheres-is-sphere}, $S(x)$ is a sphere centered at $\pi^{\operatorname{G}}_P(x)$. Then, since $\alpha^+$ is a geodesic ray starting at $\pi^{\operatorname{G}}_P(x)$, it must intersect $S(x)$ at a unique point $x'$. Next, we have $$P_b \cap S(x) = P_b \cap M \cap S(x) = P_b \cap \alpha \cap S(x) = \alpha^+ \cap S(x),$$ where the first equality holds because $P_b \subset M$, the second because $M \cap S(x) = \alpha \cap S(x)$ by the proof of \cref{cor:horosphere-proj-two-candidates}, and the third because $P_b \cap \alpha = \alpha^+$. Thus, $P_b \cap S(x)$ is precisely $x'$.

Finally, let $x''$ be the other point of $M \cap S(x) = \alpha \cap S(x)$.
Then $P$ is the perpendicular bisector of $x'$ and $x''$ in $M$. Thus, every point on the same side of $P$ in $M$ as $x'$ (but not on the boundary $P$) is strictly closer to $x'$ than to $x''$. By definition, $b$ is one such point. Thus, $\pi^{\operatorname{H}}_{b,p_1, \dots,p_K}(x)$ is the point in $M \cap S(x)$ that is strictly closer to $b$. \end{proof}

\input{figures/horo_proj_3d}

\subsection{Geometric properties}
\label{appendix-subsec:horosphere-proj-properties}

From \cref{lemma:intersection-of-horospheres-is-orbit} and \cref{thm:horosphere-proj-well-defined}, we obtain another interpretation of $\pi^{\operatorname{H}}_{b, p_1, \dots, p_K}$: It maps $\H^d$ to $P_b \subset M$ by rotating every point $x \in \H^d$ around $P$ until it hits $P_b$. In other words, we have

\begin{theorem}[The ``open book'' interpretation] \label{thm:open-book-interpretation} For any $x \not\in P$, let $M_x = \GH(P \cup \{x\})$. Then $P$ cuts $M_x$ into two half-spaces; we denote the half that contains $x$ by $P_x$. Then, when restricted to $P_x$, the map $$\pi^{\operatorname{H}}_{b, p_1, \dots, p_K}: P_x \to P_b$$ is simply a rotation around $P$. $\qedsymbol$ \end{theorem}

Following this, we call $P$ the \emph{spine} of the horosphere projection. The identity $\H^d = \cup_{x \not\in P} P_x$ can be thought of as an \emph{open book decomposition} of $\H^d$ into \emph{pages} $P_x$ that are bounded by the spine $P$. The horosphere projection $\pi^{\operatorname{H}}_{b, p_1, \dots, p_K}$ then simply acts by collapsing every page onto a specified page $P_b$. Here are some consequences of this interpretation:

\begin{cor} \label{cor:horosphere-proj-only-depends-on-spine} $\pi^{\operatorname{H}}_{b, p_1, \dots, p_K}$ only depends on the spine $P$ and not specifically on $p_1, \dots, p_K$. Thus, when we are not interested in the specific ideal points, we simply write $\pi^{\operatorname{H}}_{b,P}$.
\end{cor}

\begin{proof} As noted above, $\pi^{\operatorname{H}}_{b, p_1, \dots, p_K}(x)$ can be obtained by rotating $x$ around $P$ until it hits $P_b$. This operation does not use the exact positions of the $p_j$ at all. \end{proof}

\begin{cor} \label{cor:horosphere-proj-base-point-independent} The choice of $b$ does not affect the geometry of the projection $\pi^{\operatorname{H}}_{b,P}$. More precisely, for any two base points $b, b' \not \in P$, the horosphere projections $$\pi^{\operatorname{H}}_{b,P}: \H^d \to P_b \ \ \text{and} \ \ \pi^{\operatorname{H}}_{b',P}: \H^d \to P_{b'}$$ only differ by a rotation $P_b \to P_{b'}$ around $P$. \end{cor}

\begin{proof} By \cref{thm:open-book-interpretation}, when restricted to any page $P_x$, the maps $\pi^{\operatorname{H}}_{b,P}: \H^d \to P_b$ and $\pi^{\operatorname{H}}_{b',P}: \H^d \to P_{b'}$ are just rotations around $P$. The difference between any two rotations around $P$ is another rotation around $P$. \end{proof}

In particular, \cref{cor:horosphere-proj-base-point-independent} implies \cref{thm:horosphere-proj-base-point-independent}:

\isometry*

As discussed in \cref{sec:horopca}, this theorem helps reduce the number of parameters and simplifies the computation of $\pi^{\operatorname{H}}_{b,P}$.

\begin{remark} \label{remark:flag-parameter-count} \cref{cor:horosphere-proj-only-depends-on-spine} and \cref{cor:horosphere-proj-base-point-independent} together imply that the \textsc{HoroPCA}{} algorithm \eqref{eq:horo_opt} only depends on the geodesic hulls $\GH(p_1), \GH(p_1,p_2), \dots, \GH(p_1, \dots, p_K)$ of the ideal points and not on the specific ideal points themselves. It follows that, theoretically, the search space of \eqref{eq:horo_opt} has dimension $dK - \frac12 K(K+1)$ -- the same as the dimension of the space of flags in Euclidean spaces. In our implementation, for simplicity we parametrize the $K$ ideal points independently, which results in a suboptimal search space dimension of $(d-1)K$.
Nevertheless, this is still slightly more efficient than the parametrizations used in PGA and BSA, which have $(d+1)K$ dimensions. \end{remark}

The following corollaries say that horospherical projections share a nice property with Euclidean orthogonal projections: When projecting to a $K$-dimensional submanifold, they preserve the distances along $K$ dimensions and collapse the distances along the other $d-K$ orthogonal dimensions:

\begin{cor} \label{cor:horosphere-proj-rotational-invariant} $\pi^{\operatorname{H}}_{b,P}$ is invariant under rotations around $P$. In other words, if a rotation around $P$ takes $x$ to $y$ then $\pi^{\operatorname{H}}_{b,P}(x) = \pi^{\operatorname{H}}_{b,P}(y)$. Consequently, every $x \not\in P$ belongs to a $(d-K)$-dimensional submanifold that is collapsed to a point by $\pi^{\operatorname{H}}_{b,P}$. \end{cor}

\begin{proof} The open book interpretation tells us that $\pi^{\operatorname{H}}_{b,P}(x)$ and $\pi^{\operatorname{H}}_{b,P}(y)$ are precisely the intersections of $P_b$ with $S(x)$ and $S(y)$, respectively. If $y$ belongs to the rotation orbit $S(x)$ of $x$ then the rotation orbit $S(y)$ of $y$ is the same as $S(x)$. Thus $\pi^{\operatorname{H}}_{b,P}(x) = \pi^{\operatorname{H}}_{b,P}(y)$. Hence, for every $x \in \H^d$, $S(x)$ is collapsed to a point by $\pi^{\operatorname{H}}_{b,P}$.

To see that $\dim S(x) = d-K$ when $x \not\in P$, recall that by \cref{cor:intersection-of-horospheres-is-sphere}, if $Q(x)$ is the orthogonal complement of $P$ at $\pi^{\operatorname{G}}_P (x)$ then $S(x)$ is a hypersphere inside $Q(x)$. Since the ideal points $p_j$ are assumed to be ``affinely independent,'' we have $\dim P = K-1$, so $\dim Q(x) = d - (K-1)$, and $\dim S(x) = \dim Q(x) - 1 = d - K$.
\end{proof} \begin{cor} \label{cor:horosphere-proj-preserve-some-distance} For every $x \in \H^d$, there exists a $K$-dimensional totally geodesic submanifold (with boundary) that contains $x$ and is mapped isometrically to $P_b$ by $\pi^{\operatorname{H}}_{b,P}$. If $x \not\in P$ then such a manifold is unique. \end{cor} \begin{proof} If $x \not\in P$ then the submanifold $P_x$ in \cref{thm:open-book-interpretation} is a geodesic submanifold that contains $x$ and is mapped isometrically to $P_b$ by $\pi^{\operatorname{H}}_{b,P}$. As in the proof of \cref{cor:horosphere-proj-rotational-invariant}, we have $\dim P = K-1$ and $\dim P_x = \dim P + 1 = K$. Since \cref{cor:horosphere-proj-rotational-invariant} implies that the other $d-K$ dimensions are collapsed by $\pi^{\operatorname{H}}_{b,P}$, no other distances from $x$ can be preserved. Thus, $P_x$ is the unique submanifold with the desired properties. If $x \in P$ then $x \in P_y$ for every $y \not\in P$. Note that this means \emph{every} distance from $x$ is preserved by $\pi^{\operatorname{H}}_{b,P}$. \end{proof} The following corollaries say that, like Euclidean orthogonal projections, horospherical projections never increase distances. Thus, minimizing distortion is roughly equivalent to maximizing projected distances. This is another motivation for \cref{eq:variance-as-sum-of-squared-distances}. \begin{cor} \label{cor:horosphere-proj-non-expanding-infinitesimal} For every $x \in \H^d$ and every tangent vector $\vec{v}$ at $x$, $$\| \pi^{\operatorname{H}}_{b,P} (\vec{v}) \|_\H \leq \| \vec{v} \|_\H.$$ \end{cor} \begin{proof} This follows from \cref{cor:horosphere-proj-preserve-some-distance} and \cref{cor:horosphere-proj-rotational-invariant}: If $x \in P$ then the proof of \cref{cor:horosphere-proj-preserve-some-distance} implies that $\pi^{\operatorname{H}}_{b,P}$ preserves \emph{every} distance from $x$. Thus, the desired inequality is actually an equality. 
If $x \not \in P$ then $\vec{v}$ has an orthogonal decomposition $\vec{v} = \vec{u} + \vec{u}^\perp$, where $\vec{u}$ and $\vec{u}^\perp$ are tangent and perpendicular to $P_x$, respectively. By \cref{cor:horosphere-proj-preserve-some-distance} and \cref{cor:horosphere-proj-rotational-invariant}, $\pi^{\operatorname{H}}_{b,P}$ preserves the length of $\vec{u}$ while collapsing $\vec{u}^\perp$ to $0$. It follows that $\| \pi^{\operatorname{H}}_{b,P} (\vec{v}) \|_\H = \| \vec{u} \|_\H \leq \| \vec{v} \|_\H$. \end{proof}

\begin{cor} \label{cor:horosphere-proj-non-expanding} $\pi^{\operatorname{H}}_{b,P}$ is non-expanding. In other words, for every $x, y \in \H^d$, $$d_\H (\pi^{\operatorname{H}}_{b,P}(x), \pi^{\operatorname{H}}_{b,P}(y)) \leq d_\H (x,y).$$ \end{cor}

\begin{proof} We first show that for any path $\gamma(t)$ in $\H^d$, $$\operatorname{length}(\pi^{\operatorname{H}}_{b,P}(\gamma)) \leq \operatorname{length}(\gamma).$$ Indeed, note that the velocity vector of $\pi^{\operatorname{H}}_{b,P}(\gamma)$ is precisely $\pi^{\operatorname{H}}_{b,P}(\dot{\gamma})$. Then, by \cref{cor:horosphere-proj-non-expanding-infinitesimal}, \begin{align*} \operatorname{length}(\pi^{\operatorname{H}}_{b,P}(\gamma)) &= \int \| \pi^{\operatorname{H}}_{b,P} (\dot{\gamma}(t)) \|_\H dt \\ &\leq \int \| \dot{\gamma}(t) \|_\H dt = \operatorname{length} (\gamma). \end{align*} Now for any $x, y \in \H^d$, let $\gamma(t)$ be the geodesic segment from $x$ to $y$. Then $\pi^{\operatorname{H}}_{b,P}(\gamma)$ is a path connecting $\pi^{\operatorname{H}}_{b,P}(x)$ and $\pi^{\operatorname{H}}_{b,P}(y)$.
Thus, the projected distance is at most the length of $\pi^{\operatorname{H}}_{b,P}(\gamma)$, which by the above argument is at most $\operatorname{length}(\gamma) = d_\H (x,y)$. \end{proof}

\subsection{Detour: A Review of the Hyperboloid Model}
\label{appendix-subsec:hyperboloid-model-background}

\cref{thm:horosphere-proj-well-defined} suggests that $\pi^{\operatorname{H}}_{b,P}(x)$ can be computed in three steps: \begin{enumerate} \item Find the geodesic projection $\pi^{\operatorname{G}}_P(x)$ of $x$ onto $P$. \item Find a geodesic ray $\alpha^+$ on $P_b$ that starts at $\pi^{\operatorname{G}}_P(x)$ and is orthogonal to $P$. \item Return the unique point on $\alpha^+$ that is of distance $d_\H(x, \pi^{\operatorname{G}}_P(x))$ from $\pi^{\operatorname{G}}_P(x)$. \end{enumerate} It turns out that these subroutines are easier to implement in the \emph{hyperboloid model} than in the Poincar\'e model of hyperbolic spaces. Thus, we first briefly review the basic definitions and properties of this model. Readers who are already familiar with the hyperboloid model can skip to \cref{appendix-subsec:horosphere-proj-algo}, where we describe the full algorithm. For a more detailed treatment, see \cite{Thurston-notes}.

\begin{remark} The above steps are slightly different from the ones mentioned in \cref{subsec:horo_proj_high}. However, by \cref{thm:horosphere-proj-well-defined}, the two descriptions are equivalent. We use the ``closer-to-$b$'' description in \cref{subsec:horo_proj_high} because it is slightly more self-contained, but the actual implementation will be based on the above three steps. \end{remark}

\paragraph{Minkowski spaces} We first describe \emph{Minkowski spaces}, which are the ambient spaces in which the hyperboloid model sits as a hypersurface. The $(d+1)$-dimensional Minkowski space $\mathbb{R}^{1,d}$ is like the flat Euclidean space $\mathbb{R}^{1+d}$ except with a dot product that has a negative sign in the first coordinate.
More precisely, $\mathbb{R}^{1,d}$ is the vector space $\mathbb{R}^{1+d}$ equipped with the indefinite, non-degenerate bilinear form $$B \left( (t, x_1, \dots, x_d),(u, y_1, \dots, y_d) \right) = -tu + \sum_{i=1}^d x_i y_i,$$ which serves as the ``dot product.'' As with Euclidean spaces, the quantity $B(\vec{v}, \vec{v})$ is called the \emph{(Minkowski) squared norm} of the vector $\vec{v}$. Two vectors $\vec{u}, \vec{v} \in \mathbb{R}^{1,d}$ are called \emph{orthogonal} if $B(\vec{u}, \vec{v}) = 0$. The \emph{orthogonal complement} of a linear subspace $V \subset \mathbb{R}^{1,d}$ is the set $V^\perp = \{\vec{w} \in \mathbb{R}^{1,d}: B(\vec{w}, \vec{v}) = 0 \text{ for every } \vec{v} \in V\}$, which still has dimension $\dim \mathbb{R}^{1,d} - \dim V$. However, unlike in Euclidean spaces, vectors in $\mathbb{R}^{1,d}$ can have negative or zero squared norms, and linear subspaces can intersect their orthogonal complements.

\paragraph{Types of vectors in $\mathbb{R}^{1,d}$} Vectors with negative, zero, and positive (Minkowski) squared norms are called \emph{time-like}, \emph{light-like}, and \emph{space-like}, respectively. Time-like and light-like vectors together form a solid double cone in $\mathbb{R}^{1,d}$. If $\vec{v}$ is a time-like or light-like vector, we call it \emph{future-pointing} if its first coordinate is positive; otherwise we call it \emph{past-pointing}. It follows from the Cauchy--Schwarz inequality that:

\begin{prop} \label{prop:future-pointing-dot-product-negative} If $\vec{u}, \vec{v}$ are future-pointing time-like or light-like vectors then $B(\vec{u}, \vec{v}) < 0$. \end{prop}

A linear subspace $V$ of $\mathbb{R}^{1,d}$ is called \emph{space-like} if every non-zero vector in $V$ is space-like. In that case, the bilinear form $B(\cdot, \cdot)$ restricts to a positive-definite bilinear form on $V$, thus making it isometric to a Euclidean vector space.
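As a quick concrete reference, the bilinear form and the resulting trichotomy of vectors take only a few lines of numpy (an illustrative sketch, not part of the paper's implementation):

```python
import numpy as np

def minkowski_form(u, v):
    """Minkowski bilinear form B on R^{1,d}: negative sign on coordinate 0."""
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def classify(v):
    """Time-like, light-like, or space-like, from the sign of B(v, v)."""
    q = minkowski_form(v, v)
    if q < 0:
        return "time-like"
    if q == 0:
        return "light-like"
    return "space-like"
```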
It follows from \cref{prop:future-pointing-dot-product-negative} that

\begin{prop} \label{prop:ortho-complement-time-like-is-space-like} The orthogonal complement of a time-like vector is a space-like linear subspace. \end{prop}

\paragraph{The hyperboloid model of hyperbolic spaces} We are now ready to introduce the hyperboloid model $\H^d$. It sits inside $\mathbb{R}^{1,d}$ in a similar way to how the unit sphere $\S^d$ sits inside the Euclidean space $\mathbb{R}^{1+d}$.

\begin{definition} \label{defn:hyperboloid-model} The \emph{hyperboloid model} of $d$-dimensional hyperbolic space is the set $\H^d$ of future-pointing vectors in $\mathbb{R}^{1,d}$ with Minkowski squared norm $-1$. \end{definition}

\begin{remark} For the rest of \cref{appendix-sec:horo}, we will use $\H^d$ to denote this hyperboloid model as a subset of $\mathbb{R}^{1,d}$, and not the abstract hyperbolic space or its other models (e.g.\ the Poincar\'e ball). This should not lead to any ambiguities because we will not work with any other model. \end{remark}

The following properties of $\H^d$ further illustrate the analogy with spheres in Euclidean spaces.

\begin{prop} \label{prop:hyperboloid-model-orthogonal-to-radial} For every $\vec{x} \in \H^d$, the tangent space $T_{\vec{x}} \H^d$ of $\H^d$ at $\vec{x}$ is (parallel to) the orthogonal complement of $\vec{x}$ in $\mathbb{R}^{1,d}$. \end{prop}

\begin{prop} \label{prop:hyperboloid-model-isometric-poincare} When restricted to each tangent space of $\H^d$, the bilinear form $B(\cdot, \cdot)$ is positive definite. This defines a Riemannian metric on $\H^d$ which has constant curvature $-1$. The \emph{stereographic projection} $$(t,x_1,\dots,x_d) \to \left(\frac{x_1}{1+t}, \dots, \frac{x_d}{1+t} \right)$$ is an isometry between $\H^d$ and the Poincar\'e model.
Its inverse map is given by $$(y_1, \dots, y_d) \to \frac{(1 + \sum_i y_i^2, 2y_1, \dots, 2y_d)}{1 - \sum_i y_i^2}.$$ \end{prop} \begin{prop} \label{prop:hyperboloid-model-geodesic-submanifold} Every $k$-dimensional geodesic submanifold of $\H^d$ is the intersection of $\H^d$ with a $(k+1)$-dimensional linear subspace of $\mathbb{R}^{1,d}$. \end{prop} In the hyperboloid model, the ideal points of hyperbolic spaces are represented by light-like \emph{directions} (instead of individual vectors): \begin{prop} \label{prop:hyperboloid-model-ideal-points} Each ideal point of $\H^d$ is represented by a $1$-dimensional linear subspace spanned by some $\vec{v} \in \mathbb{R}^{1,d}$ with $B(\vec{v},\vec{v}) = 0$. The map $$(t,x_1,\dots,x_d) \to \left(\frac{x_1}{t}, \dots, \frac{x_d}{t} \right)$$ gives a correspondence between a light-like vector $\vec{v} \in \mathbb{R}^{1,d}$ and an ideal point $p \in \S_\infty^{d-1}$ in the Poincar\'e model that is represented by $\operatorname{span}(\vec{v})$. This correspondence is compatible with the stereographic projection in \cref{prop:hyperboloid-model-isometric-poincare}. Its inverse map is given by $$(y_1, \dots, y_d) \to (1, y_1, \dots, y_d).$$ \end{prop} We conclude this section by noting that geodesic hulls in the hyperboloid model are closely related to linear spans in Minkowski space: \begin{prop} \label{prop:hyperboloid-model-geodesic-hull-is-linear-span} Let $S$ be a set of vectors that are either in $\H^d$ or represent ideal directions of $\H^d$. Then the geodesic hull of $S$ in $\H^d$ is the intersection of $\H^d$ with the linear span of $S$. \end{prop} In particular, since the spine $P$ in our setting (\cref{thm:horosphere-proj-well-defined}) is the geodesic hull of some ideal points, it is cut out by the linear span of the corresponding ideal directions. 
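The model conversions in \cref{prop:hyperboloid-model-isometric-poincare} and \cref{prop:hyperboloid-model-ideal-points} translate directly into code; a small numpy sketch (illustrative only, not the paper's implementation), where the first coordinate plays the role of $t$:

```python
import numpy as np

def hyperboloid_to_poincare(x):
    """Stereographic projection (t, x_1, ..., x_d) -> x / (1 + t)."""
    return x[1:] / (1.0 + x[0])

def poincare_to_hyperboloid(y):
    """Inverse map y -> (1 + |y|^2, 2 y_1, ..., 2 y_d) / (1 - |y|^2)."""
    s = np.dot(y, y)
    return np.concatenate(([1.0 + s], 2.0 * y)) / (1.0 - s)

def ideal_to_poincare(v):
    """Light-like direction (t, x_1, ..., x_d) -> boundary point x / t."""
    return v[1:] / v[0]
```

A quick sanity check: `poincare_to_hyperboloid` lands on the hyperboloid (Minkowski squared norm $-1$), and the two maps are mutually inverse.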
\subsection{Computation of Horospherical Projections} \label{appendix-subsec:horosphere-proj-algo} \input{tables/proj_alg} Now we describe how the three steps mentioned at the beginning of \cref{appendix-subsec:hyperboloid-model-background} can be implemented in the hyperboloid model. The results of this section are summarized in \cref{alg:example}. To transfer back and forth between the hyperboloid and Poincar\'e models, we use the formulas in \cref{prop:hyperboloid-model-isometric-poincare} and \cref{prop:hyperboloid-model-ideal-points}. \paragraph{Step 1: Computing geodesic projections} We first describe how to compute the geodesic (or closest-point) projection from $\H^d$ to a geodesic submanifold $P$ in $\H^d$. This process is very similar to how geodesic projections work in Euclidean spheres: We first perform an orthogonal projection onto the linear subspace $V$ that cuts out $P$, then rescale the result to get a vector on $\H^d$. Generally, orthogonal projections in Minkowski spaces are very similar to those in Euclidean spaces. However, since vectors in $\mathbb{R}^{1,d}$ can have norm zero, the orthogonal projection $\pi^{\operatorname{Mink}}_V$ is not well-defined for every subspace $V$. Thus, a little extra argument is needed. \begin{prop}[Orthogonal projections onto time-containing linear subspaces] \label{prop:minkowski-orthogonal-proj} Let $V$ be a linear subspace of $\mathbb{R}^{1,d}$ that contains some time-like vectors. Then \begin{enumerate} \item $V \cap V^\perp = \{0\}$. Consequently, we can define a linear orthogonal projection $\pi^{\operatorname{Mink}}_V: \mathbb{R}^{1,d} \to V$ as follows: Since $\mathbb{R}^{1,d} = V \oplus V^\perp$, every vector $\vec{x} \in \mathbb{R}^{1,d}$ can be uniquely written as $\vec{x} = \vec{z} + \vec{n}$ for some $\vec{z} \in V$ and $\vec{n} \in V^\perp$. Then, we let $\pi^{\operatorname{Mink}}_V (\vec{x}) \coloneqq \vec{z}.$ \item Let $A$ be a matrix whose column vectors form a linear basis of $V$. 
Let $B$ be the $(1+d)\times(1+d)$ symmetric matrix associated to the bilinear form $B$, i.e.\ the diagonal matrix with entries $(-1, 1, 1, 1, \dots, 1)$. Then $A^\top BA$ is non-singular, and the linear projection $\pi^{\operatorname{Mink}}_V$ is given by $\vec{x} \to A (A^\top BA)^{-1} A^\top B\vec{x}$. \item If $\vec{x}$ is a future-pointing time-like vector then so is $\pi^{\operatorname{Mink}}_V(\vec{x})$. \end{enumerate} \end{prop}

\begin{proof} $ $ \begin{enumerate} \item Let $\vec{v}$ be a time-like vector in $V$. Then $\vec{v}^\perp$ is space-like by \cref{prop:ortho-complement-time-like-is-space-like}. Since $V \cap V^\perp \subset V^\perp \subset \vec{v}^\perp$, the intersection $V \cap V^\perp$ must be space-like. On the other hand, for every $\vec{w} \in V \cap V^\perp$, we have $B(\vec{w}, \vec{w}) = 0$, so $\vec{w}$ must be light-like. It follows that $V \cap V^\perp = \{0\}$. The $\mathbb{R}^{1,d} = V \oplus V^\perp$ part of the claim is a standard linear algebra fact. \item This follows from the same argument that deduces the formula for Euclidean orthogonal projections. \item To avoid cumbersome notation, let $\vec{z} = \pi^{\operatorname{Mink}}_V(\vec{x})$. Then since $\vec{x} - \vec{z}$ is orthogonal to $V$ and in particular to $\vec{z}$, we have the Pythagorean formula $$B(\vec{z}, \vec{z}) + B(\vec{x} - \vec{z}, \vec{x} - \vec{z}) = B(\vec{x}, \vec{x}).$$ On the other hand, since $\vec{x} - \vec{z}$ is orthogonal to $V$ and in particular orthogonal to some time-like vectors in $V$, by \cref{prop:ortho-complement-time-like-is-space-like}, it must be space-like. Thus, if $B(\vec{x}, \vec{x}) < 0$ then the above equation implies $B(\vec{z}, \vec{z}) < 0$, which means $\vec{z}$ is time-like.

Now either $\vec{z}$ or $-\vec{z}$ is future-pointing. In the latter case, since $\vec{x}$ is future-pointing, \cref{prop:future-pointing-dot-product-negative} implies $B(-\vec{z}, \vec{x}) < 0$.
On the other hand, we have $B(\vec{z}, \vec{x}) = B(\vec{z}, \vec{z}) < 0$, contradicting the above inequality. Thus, $\vec{z}$ is future-pointing. \qedhere \end{enumerate} \end{proof} \begin{prop}[Geodesic projections in $\H^d$] \label{prop:hyperboloid-geodesic-proj} Let $P$ be a geodesic submanifold of $\H^d$. Recall that by \cref{prop:hyperboloid-model-geodesic-submanifold}, $P = \H^d \cap V$ for some linear subspace $V$ of $\mathbb{R}^{1,d}$. Then for every $\vec{x} \in \H^d$, the geodesic projection $\pi^{\operatorname{G}}_P (\vec{x})$ of $\vec{x}$ onto $P$ in $\H^d$ is given by $$\pi^{\operatorname{G}}_P (\vec{x}) = \frac{\vec{z}}{\sqrt{-B(\vec{z}, \vec{z})}},$$ where $\vec{z} = \pi^{\operatorname{Mink}}_V(\vec{x})$ is the linear orthogonal projection of $\vec{x}$ onto $V$. \end{prop} \begin{proof} Again, to avoid cumbersome notations, let $$\vec{w} = \frac{\vec{z}}{\sqrt{-B(\vec{z}, \vec{z})}}.$$ We will show that $\vec{w} = \pi^{\operatorname{G}}_P(\vec{x})$. First, note that since $\vec{z}$ is a future-pointing time-like vector by \cref{prop:minkowski-orthogonal-proj}, $\vec{w} \in \H^d$. Since $\vec{w} \in V$, it follows that $\vec{w} \in P$. Let $W$ be the linear span of $\vec{w}$ and $\vec{x}$, so that $W \cap \H^d$ is the geodesic $\gamma$ in $\H^d$ that connects $\vec{w}$ and $\vec{x}$. Note that $\vec{w}$ and $\vec{x} - \vec{z}$ form a basis of $W$. They are both orthogonal to the tangent space $T_{\vec{w}} P$ of $P$ at $\vec{w}$ because: \begin{itemize} \item By \cref{prop:hyperboloid-model-orthogonal-to-radial}, $\vec{w}$ is orthogonal to every tangent vector of $\H^d$ at $\vec{w}$. \item $T_{\vec{w}} P$ is contained in $V$, which is orthogonal to $\vec{x} - \vec{z}$. \end{itemize} Thus, $W$ is orthogonal to $T_{\vec{w}} P$. It follows that the geodesic $\gamma$ is orthogonal to $P$, which means $\vec{w} = \pi^{\operatorname{G}}_P (\vec{x})$. 
\end{proof} \paragraph{Step 2: Finding orthogonal geodesic ray} Recall that if $P$ is a geodesic submanifold of $\H^d$ and $\vec{b} \in \H^d$ is a point not in $P$, then the geodesic hull $M$ of $P \cup \{\vec{b}\}$ in $\H^d$ is a geodesic submanifold of $\H^d$ with dimension $\dim P + 1$. Thus, $P$ cuts $M$ into two halves, and we denote the half that contains $\vec{b}$ by $P_b$. Given a point $\vec{w} \in P$, there exists a unique geodesic ray $\alpha^+$ that starts at $\vec{w}$, stays on $P_b$, and is orthogonal to $P$. In this section, we describe how to compute $\alpha^+$. Recall that by \cref{prop:hyperboloid-model-geodesic-submanifold}, $P = V \cap \H^d$ for some linear subspace $V$. \begin{prop} The vector $$\vec{u} = (\vec{b} - \vec{w}) - \pi^{\operatorname{Mink}}_{V} (\vec{b} - \vec{w})$$ is tangent to $M$ and orthogonal to $P$ at $\vec{w}$. Furthermore, it points toward the side of $P_b$. \end{prop} \begin{proof} By construction, $\vec{u}$ is orthogonal to $V$. Note that $V$ contains both $\vec{w}$ and the tangent space $T_{\vec{w}} P$ of $P$ at $\vec{w}$. Thus, $\vec{u}$ is orthogonal to both of them. Together with \cref{prop:hyperboloid-model-orthogonal-to-radial}, this implies that $\vec{u}$ is a tangent vector of $\H^d$ at $\vec{w}$ that is orthogonal to $P$. Next, note that $M$ is the intersection of $\H^d$ with the linear span of $V \cup \{\vec{b}\}$, and since $\vec{b}$ and $\vec{w}$ both belong to this linear span, so does $\vec{u}$. Thus, since $\vec{u}$ is a tangent vector of $\H^d$ at $\vec{w}$, it must be tangent to $M$. By construction, $\vec{u}$ points from $\vec{w}$ toward the side of $\vec{b}$ instead of away from it. More rigorously, note that by essentially the same argument as above, $$\vec{a} \coloneqq (\vec{b} - \vec{w}) - \pi^{\operatorname{Mink}}_{\vec{w}} (\vec{b} - \vec{w})$$ is a tangent vector at $\vec{w}$ of the geodesic on $\H^d$ that goes from $\vec{w}$ to $\vec{b}$. 
Also, note that $$\vec{a} - \vec{u} = \pi^{\operatorname{Mink}}_V (\vec{b} - \vec{w}) - \pi^{\operatorname{Mink}}_{\vec{w}} (\vec{b} - \vec{w}).$$ Since $V = T_{\vec{w}} P \oplus \operatorname{span}(\vec{w})$ is an orthogonal decomposition, this means $$\vec{a} - \vec{u} = \pi^{\operatorname{Mink}}_{T_{\vec{w}} P} (\vec{b} - \vec{w}),$$ which in particular implies $\vec{a} - \vec{u} \in T_{\vec{w}} P$. It follows that $\vec{u}$ is the projection of $\vec{a}$ onto the orthogonal complement of $T_{\vec{w}} P$ in $T_{\vec{w}} \H^d$. In a Euclidean vector space, the dot product of any vector with its projection onto any direction is positive unless the projection is zero. Since $T_{\vec{w}} \H^d$ is a space-like subspace, this statement applies to $\vec{a}$ and $\vec{u}$. Note that $\vec{u}$ cannot be zero because otherwise $\vec{b}$ would belong to $V$ and hence to $P$. Thus, we conclude that $\vec{a} \cdot \vec{u} > 0$, which means that $\vec{u}$ points toward the side of $P_b$. \end{proof} Thus, $\vec{u}$ is the tangent vector at $\vec{w}$ of the desired ray $\alpha^+$. To compute points on this ray, we can use the exponential map at $\vec{w}$. \paragraph{Step 3: The exponential map} Finally, given a distance $d$ and a tangent vector $\vec{u}$ at $\vec{w}$ of a geodesic ray $\alpha^+$ in $\H^d$, we need to compute the point on $\alpha^+$ that is of hyperbolic distance $d$ from $\vec{w}$. This is based on the following lemma: \begin{lemma} Suppose that $\vec{u}$ is a unit tangent vector of $\H^d$ at $\vec{w}$. Then $$\alpha(t) = (\cosh t) \vec{w} + (\sinh t) \vec{u}$$ is a unit-speed geodesic with $\alpha(0) = \vec{w}$ and $\dot{\alpha}(0) = \vec{u}$. \end{lemma} \begin{proof} Note that $\vec{w}$ and $\vec{u}$ are orthogonal by \cref{prop:hyperboloid-model-orthogonal-to-radial}. The facts that $\alpha(0)=\vec{w}$, $\dot{\alpha}(0) = \vec{u}$, and that $\alpha(t)$ is a unit-speed curve on $\H^d$ follow from simple, direct computations.
To see that $\alpha(t)$ is a geodesic, note that it is the intersection of $\H^d$ with the linear subspace $\operatorname{span}(\vec{w}, \vec{u})$ and apply \cref{prop:hyperboloid-model-geodesic-submanifold}. \end{proof} It follows that \begin{prop} If $\vec{u}$ is the tangent vector at the starting point $\vec{w}$ of a geodesic ray $\alpha^+$ in $\H^d$ then $$(\cosh d) \vec{w} + (\sinh d) \frac{\vec{u}}{\sqrt{B(\vec{u}, \vec{u})}}$$ is the point on $\alpha^+$ that is of distance $d$ from $\vec{w}$. \hfill \qedsymbol \end{prop} From the results in this section, we conclude that \cref{alg:example} computes the horospherical projection $\pi^{\operatorname{H}}_{b,P}(x)$. \subsection{Datasets} \paragraph{Embedding} The datasets we use are structured as graphs with nodes and edges. We map these graphs to the hyperbolic space using different embedding methods. The datasets from~\cite{sala2018representation} can be found in the open-source implementation\footnote{\url{https://github.com/HazyResearch/hyperbolics}} and we compute hyperbolic embeddings for different dimensions using the PyTorch code from~\cite{gu2018learning}. We also consider embeddings computed with Sarkar's combinatorial construction~\cite{sarkar2011low} and report the average distortion measure with respect to the original graph distances in~\cref{tab:datasets}. For classification experiments, we follow the experimental protocol in~\cite{cho2019large} and use the embeddings provided in the open-source implementation.\footnote{\url{https://github.com/hhcho/hyplinear}} \paragraph{Centering} For simplicity, we center the input embeddings so that they have a Fr\'echet mean of zero. This makes computations of projections more efficient. 
To do so, we compute the Fr\'echet mean of input hyperbolic points using gradient descent, and then apply an isometric hyperbolic reflection that maps the mean to the origin.\footnote{This reflection can be computed in closed form using circle inversions, see for instance Section 2 in~\cite{sala2018representation}.} \subsection{Implementation Details in the Poincar\'e Model} \cref{appendix-subsec:horosphere-proj-algo} (in particular, \cref{prop:hyperboloid-geodesic-proj} and~\cref{alg:example}) describes how geodesic projections and horospherical projections can be computed efficiently in the hyperboloid model. However, because the Poincar\'e ball model is useful for visualizations and is popular in the ML literature, here we also give high-level descriptions of simple alternative methods to compute these projections directly in the Poincar\'e model. This subsection is organized as follows. We first describe an implementation of geodesic projections in the Poincar\'e model (\cref{appendix-subsec:geodesic-proj-implementation-in-poincare}). We then describe our implementation of all baseline methods, which rely on geodesic projections (\cref{appendix-subsec:baseline-implementation}). Finally, we describe how \textsc{HoroPCA}{} could also be implemented in the Poincar\'e model (\cref{appendix-subsec:horosphere-proj-implementation-in-poincare}). \subsubsection{Computing Geodesic Projections} \label{appendix-subsec:geodesic-proj-implementation-in-poincare} Recall that geodesic projections map points to a target submanifold such that the projection of a point is the point on the submanifold that is closest to it. In the Poincar\'e model, when the target submanifold goes through the origin, geodesic projections can be computed efficiently as follows. Consider any geodesic submanifold $P$ that contains the origin in the Poincar\'e ball: this must be a linear space since any geodesic that goes through the origin in this model is a straight line.
The Euclidean reflection with respect to $P$ is a hyperbolic isometry (i.e.\ preserves distances). Given any point $x$, we can compute its Euclidean (and hyperbolic) reflection $r_P(x)$ with respect to $P$. Then, the (hyperbolic geometry) midpoint between $x$ and $r_P(x)$ belongs to $P$ and is the geodesic projection of $x$ onto $P$. There is a closed-form expression for this midpoint, which can be derived using M\"{o}bius operations. \subsubsection{Implementation of Baselines} \label{appendix-subsec:baseline-implementation} We now detail our baseline implementation. \paragraph{PCA and tPCA} We used the PyTorch implementation of the Singular Value Decomposition (SVD) to implement both PCA and tPCA. Before using the SVD algorithm, tPCA maps points to the tangent space at the Fr\'echet mean using the logarithmic map. Having centered the data, this mapping can be done using the logarithmic map at the origin which is simply: \[ \log_\mathbf{o}(x)=\arctanh(||x||)\frac{x}{||x||}. \] \paragraph{PGA and BSA} Both PGA and BSA rely on geodesic projections. When the data is centered, the target submanifolds in BSA and PGA are simply linear spaces going through the origin. We can therefore compute the geodesic projections in these methods using the computations described above. To test their sensitivity to the base point, we also implemented two baselines that perturb the base point by adding Gaussian noise. \paragraph{hMDS} To implement hMDS, we simply implemented the formulas in the original paper (see Algorithm 2 in~\cite{sala2018representation}). \paragraph{hAE} For hyperbolic autoencoders, we used two fully-connected layers following the approach of~\cite{ganea2018hyperbolic}. One layer was used for encoding into the reduced number of dimensions, and one layer was used for decoding into the original number of dimensions. We trained these networks by minimizing the reconstruction error, and used the intermediate hidden representations as low-dimensional representations.
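As a concrete reference, the tPCA pipeline described above (logarithmic map at the origin followed by ordinary PCA via an SVD) can be sketched in a few lines of NumPy. This is a minimal sketch with our own function names, not the paper's PyTorch implementation:

```python
# Minimal NumPy sketch of the tPCA baseline on centered Poincare-ball data.
# Function names are ours; the paper's implementation uses PyTorch's SVD.
import numpy as np

def log_origin(x):
    # Logarithmic map at the origin of the Poincare ball, applied row-wise:
    # log_o(x) = arctanh(||x||) * x / ||x||
    norms = np.maximum(np.linalg.norm(x, axis=-1, keepdims=True), 1e-15)
    return np.arctanh(norms) * x / norms

def tpca(points, k):
    # Map points to the tangent space at the origin, then run Euclidean PCA.
    tangent = log_origin(points)
    _, _, vt = np.linalg.svd(tangent - tangent.mean(0), full_matrices=False)
    return tangent @ vt[:k].T  # coordinates along the top-k directions

pts = np.array([[0.1, 0.2, 0.0], [-0.3, 0.1, 0.05], [0.2, -0.1, -0.1]])
low = tpca(pts, 2)  # low-dimensional tangent representation, shape (3, 2)
```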
\subsubsection{\textsc{HoroPCA}{} Implementation} \label{appendix-subsec:horosphere-proj-implementation-in-poincare} The definition of horospherical projections onto $K > 1$ directions involves taking the intersection of $K$ horospheres $S(p_1, x) \cap \dots \cap S(p_K, x)$ (\cref{subsec:horo_proj_high}). Although \cref{appendix-subsec:horosphere-proj-algo} shows that horospherical projections can be computed efficiently in the hyperboloid model without actually computing these intersections, it is also possible to implement \textsc{HoroPCA}{} by directly computing these intersections. This can be done in a simple iterative fashion. For example, in the Poincar\'e ball, horospheres can be represented as Euclidean spheres. Note that the intersection of two $d$-dimensional Euclidean hyperspheres $S(p_0, r_0)$ and $S(p_1, r_1)$ is a $(d-1)$-dimensional hypersphere which can be represented by a center, a radius, and a $(d-1)$-dimensional subspace. The radius is uniquely determined by the radii of the original spheres $r_0, r_1$ and the distance between their centers $\|p_0-p_1\|$, and can be computed by analyzing the 2-dimensional case, for which there is a simple closed formula. The center can similarly be found as a weighted combination of the centers $p_0, p_1$ based on the formula for the 2D case. Lastly, this $(d-1)$-dimensional subspace can be represented by noting that it is the orthogonal space to the vector $p_1-p_0$. Thus the intersection of these $d$-dimensional hyperspheres can be easily computed, and this process can be iterated to find the intersection of $K$ spheres $S(p_i, r_i)$. \input{figures/distortion_hist} \subsection{Additional Experimental Results} \subsubsection{Distortion Analysis} We first analyze the average distortion incurred by horospherical and geodesic projections on a toy example with synthetically-generated data.
We generate 1000 points in the Poincar\'e disk by sampling tangent vectors from a multivariate Gaussian, and mapping them to the disk using the exponential map at the origin. We then sample 100 random ideal points (directions) in the disk, and consider the corresponding straight-line geodesics pointing towards these directions. We project the datapoints onto these geodesics, measure the average distortion between the pairwise distances before and after projection, and visualize the results in a histogram~(\cref{fig:distortion_hist}). As we can see, horospherical projections achieve much lower distortion than geodesic projections on average, suggesting that these projections may better preserve information such as distances in higher-dimensional datasets. \subsubsection{Dimensionality Reduction Results} \paragraph{Sarkar embeddings} \cref{tab:pytorch_reduction} showed dimensionality reduction results for embeddings learned with optimization. Here, we consider the same reduction experiment on combinatorial embeddings and report the results in~\cref{tab:sarkar_reduction}. The results confirm the trends observed in~\cref{tab:pytorch_reduction}: \textsc{HoroPCA}{} outperforms baseline methods, with significant improvements on distance preservation. \paragraph{More dimension/component configurations} We consider the reduction of 50-dimensional PyTorch embeddings of the Diseases and CS Ph.D. datasets. We plot average distortion and explained Fr\'echet variance for different numbers of components in~\cref{fig:dim_plots}. \textsc{HoroPCA}{} significantly outperforms all previous generalizations of PCA. \textsc{HoroPCA}{} also outperforms hMDS, which is a competitive baseline, but not a PCA method as discussed before.
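The synthetic data generation above can be reproduced with a short NumPy sketch. The exponential map at the origin of the Poincar\'e ball, $\exp_{\mathbf{o}}(v)=\tanh(\|v\|)\,v/\|v\|$, is standard; the Gaussian scale below is our own arbitrary choice, and distortion would then be measured with the Poincar\'e distance:

```python
# Sketch of the synthetic setup: Gaussian tangent vectors at the origin,
# pushed to the Poincare disk via the exponential map
# exp_o(v) = tanh(||v||) * v / ||v||.  The scale 0.5 is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(0)

def exp_origin(v):
    norms = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), 1e-15)
    return np.tanh(norms) * v / norms

def poincare_dist(x, y):
    # Hyperbolic distance between points of the Poincare ball.
    sq = np.sum((x - y) ** 2, axis=-1)
    denom = (1 - np.sum(x * x, axis=-1)) * (1 - np.sum(y * y, axis=-1))
    return np.arccosh(1 + 2 * sq / denom)

points = exp_origin(0.5 * rng.standard_normal((1000, 2)))  # inside the unit disk
```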
\input{figures/dimension_plots} \input{figures/wordnet_visualization_full} \section{Introduction} \input{01intro} \section{Background} \input{02background} \section{Generalizing PCA to the Hyperbolic Space}\label{sec:horo} \input{04model} \section{\textsc{HoroPCA}{}}\label{sec:horopca} \input{05model} \section{Experiments} \input{06experiments} \input{figures/wordnet_visualizations} \section{Related Work}\label{sec:related} \input{03related} \section{Conclusion} \input{07conclusion} \section*{Acknowledgements} \input{ack} \bibliographystyle{icml2021/icml2021}
https://arxiv.org/abs/1906.01990
Covariate Selection Based on an Assumption-free Approach to Linear Regression with Exact Probabilities
In this paper we give a completely new approach to the problem of covariate selection in linear regression. A covariate or a set of covariates is included only if it is better in the sense of least squares than the same number of Gaussian covariates consisting of i.i.d. $N(0,1)$ random variables. The Gaussian P-value is defined as the probability that the Gaussian covariates are better. It is given in terms of the Beta distribution, it is exact and it holds for all data. The covariate selection procedures based on this require only a cut-off value $\alpha$ for the Gaussian P-value: the default value in this paper is $\alpha=0.01$. The resulting procedures are very simple, very fast, do not overfit and require only least squares. In particular there is no regularization parameter, no data splitting, no use of simulations, no shrinkage and no post selection inference is required. The paper includes the results of simulations, applications to real data sets and theorems on the asymptotic behaviour under the standard linear model. Here the stepwise procedure performs overwhelmingly better than any other procedure we are aware of. An R-package {\it gausscov} is available.
\section{Introduction} \label{sec:intro} Given an observation vector $\bs{y} \in \mathbb{R}^n$ with covariates $\bs{x}_i, i=1,\ldots,q,$ the problem is to decide which covariates if any to include in the regression. To illustrate our method we start with the simplest case of just one covariate, $q=1$. The classical approach is to assume a linear model $\bs{y}=\beta_1\bs{x}_1+\sigma\boldsymbol{\varepsilon}$ with $\boldsymbol{\varepsilon}$ consisting of i.i.d. $N(0,1)$ random variables and then to test the null hypothesis $H_0: \beta_1=0$. The test is typically based on the F distribution and yields a P-value $P_F$ which is the basis for the decision as to whether to include $\bs{x}_1$ or not. Instead of formulating the null hypothesis $H_0$ which requires a model, we ask whether $\bs{x}_1$ is better than noise. More precisely we replace $\bs{x}_1$ by a random Gaussian covariate $\bs{Z}_1$ consisting of i.i.d. $N(0,1)$ random variables and ask whether $\bs{Z}_1$ is better than $\bs{x}_1$ as measured by the sums of the squared residuals. This is the case if $(\bs{y}^{\top}\bs{Z}_1)^2/\Vert \bs{Z}_1\Vert^2>(\bs{y}^{\top}\bs{x}_1)^2/\Vert \bs{x}_1\Vert^2$ where here and below $\Vert \cdot\Vert$ denotes the standard $L_2$-norm. This is equivalent to \[(\bs{y}^{\top}\bs{Z}_1)^2/(\Vert \bs{y}\Vert^2\Vert \bs{Z}_1\Vert^2)> (\bs{y}^{\top}\bs{x}_1)^2/(\Vert \bs{y}\Vert^2\Vert \bs{x}_1\Vert^2)\] The Gaussian covariate method is based on the first surprising fact that \[(\bs{y}^{\top}\bs{Z}_1)^2/(\Vert \bs{y}\Vert^2\Vert \bs{Z}_1\Vert^2)\sim B_{1/2,(n-1)/2}\] independently of $\bs{y}$. The probability that $\bs{x}_1$ is worse than random Gaussian noise or, equivalently, that Gaussian noise is better than $\bs{x}_1$ is then \[\mathop{\mathrm{I\!P}}\nolimits(\mathrm{RSS} < \mathrm{rss})=B_{(n-1)/2,1/2}(\mathrm{rss}/\mathrm{rss}_0)\] which we define as the Gaussian P-value $P_G$ of $\bs{x}_1$. 
Here $\mathrm{RSS}$ denotes the sum of squared residuals based on $\bs{Z}_1$, $\mathrm{rss}$ the sum of squared residuals based on $\bs{x}_1$ and $\mathrm{rss}_0=\Vert \bs{y}\Vert^2$. The second surprising fact is that the two P-values are equal, $P_F=P_G$. This follows from \begin{equation} \label{eq:p-value_F_G} P_G=B_{(n-1)/2,1/2}(\mathrm{rss}/\mathrm{rss}_0)=1 - F_{1,n-1}\Bigl( \frac{(n-1)(\mathrm{rss}_0 - \mathrm{rss})}{\mathrm{rss}} \Bigr)=P_F \end{equation} where $B_{a,b}(\cdot)$ denotes the c.d.f. of the beta distribution with parameters $a,b>0$ and $F_{k,\ell}(\cdot)$ denotes the c.d.f. of the Fisher F distribution with $k$ and $\ell$ degrees of freedom. A proof of this and a general result is given in the Appendix. Although the two P-values are equal the situations are very different. Firstly, the sources of randomness are different. For $P_F$ it derives from the randomness of the error term $\boldsymbol{\varepsilon}$ in the model. For $P_G$ it derives from the randomness of the covariate $\bs{Z}_1$; the data $(\bs{y},\bs{x}_1)$ could be deterministic. The P-value $P_F$ has a frequentist interpretation as the truth of $H_0$ cannot be correctly determined on the basis of the data alone. The P-value $P_G$ holds for the given data; it is not frequentist. To use the Gaussian covariate method for covariate selection all the statistician has to do is to specify a cut-off value $\alpha$ for the inclusion of a covariate. This is not a regularization parameter; it is, so to speak, an output or selection parameter: a covariate is selected if and only if the probability that it is better than random noise is at least $1-\alpha$ or, equivalently, if the probability that Gaussian noise is better is at most $\alpha$. The value of $\alpha$ can be interpreted as the probability of a false positive. This is discussed in Section~\ref{sec:false_pos} and supports the claim in the abstract that the Gaussian selection method does not overfit.
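The equality $P_G=P_F$ above is easy to check numerically. The following sketch uses SciPy's beta and F distributions (the R equivalents are {\it pbeta} and {\it pf}); the data vectors are arbitrary, since the identity is exact for all data:

```python
# Numerical check of the identity P_G = P_F for a single covariate.
# The data y, x1 are arbitrary: the identity holds for all data.
import numpy as np
from scipy.stats import beta, f

rng = np.random.default_rng(0)
n = 20
y = rng.standard_normal(n)
x1 = rng.standard_normal(n)

rss0 = y @ y                              # squared norm of y (empty model)
rss = rss0 - (y @ x1) ** 2 / (x1 @ x1)    # residuals after regressing y on x1

# Gaussian P-value: probability that a N(0, I) covariate beats x1
p_gauss = beta.cdf(rss / rss0, (n - 1) / 2, 0.5)
# Classical F-test P-value for H0: beta_1 = 0
p_f = f.sf((n - 1) * (rss0 - rss) / rss, 1, n - 1)
```

Both quantities agree up to floating-point error, matching the identity in the displayed equation.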
A first attempt at using random covariates in the context of covariate selection is to be found on page 279 of \cite{DAV14}. There the random covariate was chosen to mimic the actual covariate, for example, binomial random variables for 0-1-covariates. Eventually it was recognized that the `correct' method was to use Gaussian covariates, which led to this paper. In Section~\ref{sec:real_data} we consider eight different real data sets, the red wine and Boston housing data sets, the dental data set (a three-way ANOVA), two sets on gene expression, the leukemia and osteoarthritis data sets, the number of sunspots and the Melbourne daily temperatures data sets and a data set using USA economic data. Depending on the data we consider all subsets, the stepwise selection, repeated stepwise selection, the inclusion of interactions, determining periodicities, lagged data and the construction of dependency graphs. Apart from the construction of dependency graphs the largest data set, the Boston housing data set with interactions, $(n,q)=(506,203490)$, requires about four seconds of computing time. The construction of a dependency graph with 38415 edges for the 48802 covariates of the osteoarthritis data took 85 minutes. We give a running comparison with lasso, which is one of the most used covariate selection methods, if not the most used. The original paper by Tibshirani \cite{TIB96} is listed by the Royal Statistical Society as its third most cited Series B publication with 3,693 citations. The version we use is the default version of {\it cv.glmnet} in the R package {\it glmnet} (\cite{FRHASIQITI17}). Here the value of the regularization parameter {\it lambda} is chosen by 10-fold cross-validation. The package offers various plots, but it is not clear, at least to us, how these help to select covariates. Instead we suggest that the covariates be selected by their P-values. The remainder of this paper is organized as follows.
In the short Section~\ref{sec:F-test.etc} we state a theorem on the equality of the two P-values (\ref{eq:p-value_F_G}) for the general case of several covariates. Section~\ref{sec:cov_selec} starts with the derivation of the P-value (\ref{equ:step_P}) which is the basis of everything in the paper. The remaining two subsections use this P-value to derive covariate selection procedures. The first of these in Section~\ref{sec:all_sets} considers all possible subsets and is only possible if $q$ is not too large. For large $q$ and in particular for the case $q\gg n$ a step-wise procedure is defined in Section~\ref{sec:step_wise}. This is probably the more important one for practical use. In Section~\ref{sec:equiv} we discuss post-selection analysis and give a new interpretation of a standard confidence region for the model-free approach. Section~\ref{sec:false_pos} considers the problem of false positives and Section~\ref{sec:false_neg} the problem of false negatives. The construction of dependency graphs is discussed in Section~\ref{sec:graphs}. Extensions to $M$-regression, non-linear regression and the Kullback-Leibler discrepancy and the use of the $L_1$ norm are described in Section~\ref{sec:extensions}. Some asymptotic results on the behaviour of the step-wise procedure are given in Section~\ref{sec:boun_asymp}. These are stronger than asymptotic consistency as the value of $\alpha$ is fixed. Some simulation results and applications to real data sets are presented in Section~\ref{sec:examples}. In Section~\ref{sec:comments} we give some simulations which indicate that the cross-validation version of lasso is very sensitive to correlated errors and the exact signal in the non-parametric situation. We indicate how this can be overcome by using the P-values as in the Gaussian covariate procedure. Proofs of theoretical results and technical details are deferred to appendices.
\section{Exact probabilities for the model-free approach} \label{sec:F-test.etc} Suppose that $k\le \min(q,n-1)$ and that the standard model $\bs{y}=\sum_{i=1}^k\beta_i\bs{x}_i+\sigma\boldsymbol{\varepsilon}$ holds. Let $1\le k_0<k$ and put $k_1=n-k_0$. We use the standard F test to test the null hypothesis $H_0: \beta_{k_0+1}=\ldots=\beta_k=0$. The resulting P-value is \[P_F=1 - F_{k-k_0,n-k} \Bigl( \frac{(\mathrm{rss}_0 - \mathrm{rss})/(k- k_0)}{\mathrm{rss}/(n - k)} \Bigr) \] where $\mathrm{rss}$ denotes the sum of squared residuals for the regression based on all $k$ covariates and $\mathrm{rss}_0$ denotes the sum of squared residuals for the regression based on the first $k_0$ covariates. The model-free approach to the combined relevance of the covariates $\bs{x}_i,i=k_0+1,\ldots,k$ is as follows. Each such covariate $\bs{x}_i$ is replaced by a random covariate $\bs{Z}_i\sim N_n({\bf 0},{\bf I})$ and $\bs{y}$ is regressed on the covariates $\bs{x}_1,\ldots,\bs{x}_{k_0},\bs{Z}_{k_0+1},\ldots ,\bs{Z}_k$ which results in a sum $\mathrm{RSS}$ of squared residuals. The P-value is defined as before by \[P_G=\mathop{\mathrm{I\!P}}\nolimits(\mathrm{RSS}<\mathrm{rss}).\] \begin{Theorem} \label{the:PF_PG} Under the above assumptions \[P_G=B_{(n - k)/2,(k - k_0)/2} \Bigl( \frac{\mathrm{rss}}{\mathrm{rss}_0} \Bigr) =1 - F_{k-k_0,n-k} \Bigl( \frac{(\mathrm{rss}_0 - \mathrm{rss})/(k- k_0)}{\mathrm{rss}/(n - k)} \Bigr)=P_F.\] \end{Theorem} The proof is given in the Appendix. In a forthcoming paper \cite{DUMDAV20} it is shown that the validity of the P-value $P_F$ and its equality with $P_G$ remain valid for any error term $\boldsymbol{\varepsilon}$ which is orthogonally invariant.
Furthermore the independent random covariate vectors $\bs{Z}_i$, $k_0 < i \le k$, with distribution $N_n(\boldsymbol{0},\boldsymbol{I})$ may be replaced by $\bs{H}(\bs{x}_i)$, $k_0 < i \le k$, where $\bs{H}$ is uniformly distributed on the set of all orthogonal transformations such that $\bs{H}(\bs{x}_j) = \bs{x}_j$ for $j \le k_0$. This opens the possibility of replacing the particular F test statistic with more general criteria and of utilizing collinearities between the covariate vectors. \section{Selecting covariates} \label{sec:cov_selec} \subsection{The P-value} \label{sec:P-val} Suppose $k\ge 0$ covariates have already been selected. Denote this set by $\mathcal{M}$ and the sum of squared residuals when $\bs{y}$ is regressed on the covariates in $\mathcal{M}$ by $\mathrm{rss}_0$. There remain $q-k$ covariates not in ${\mathcal M}$. Consider one of these covariates $\bs{x}_i$ and denote the sum of squared residuals when $\bs{y}$ is regressed on ${\mathcal M}\cup \{\bs{x}_i\}$ by $\mathrm{rss}_i$. We now ask whether $\bs{x}_i$ is better than random noise. To do this we replace the $q-k$ covariates by random Gaussian covariates $\bs{Z}_j\sim N_n({\boldsymbol 0},{\boldsymbol I})$. For each $j=1,\ldots,q-k$ we regress $\bs{y}$ on ${\mathcal M}\cup \{\bs{Z}_j\}$, denote the sum of squared residuals by $\mathrm{RSS}_j$ and put $\mathrm{RSS}_b=\min_{j=1,\ldots,q-k}\{\mathrm{RSS}_j\}$. Note that the $\bs{Z}_j$ are, so to speak, chosen anew at each stage. The P-value of $\bs{x}_i$ is defined as $P_i=\mathop{\mathrm{I\!P}}\nolimits(\mathrm{RSS}_b\le \mathrm{rss}_i)$, that is the probability that at least one of the random Gaussian covariates is better than $\bs{x}_i$. It follows from Theorem~\ref{the:PF_PG} that $\mathrm{RSS}_j/\mathrm{rss}_0 \sim B_{(n-k)/2,1/2}$ and hence the random variables $B_{(n-k)/2,1/2}(\mathrm{RSS}_j/\mathrm{rss}_0)$ are independently and uniformly distributed over $(0,1)$.
It follows that the P-value of $\bs{x}_i$ is given by \[P_i=P_{G,i}=1-\left(1-B_{(n-k)/2,1/2}(\mathrm{rss}_i/\mathrm{rss}_0)\right)^{q-k}\] which may be written more compactly and for further use as \begin{equation} \label{equ:step_P} P_i=B_{1,q-k}(B_{(n-k)/2,1/2}(\mathrm{rss}_i/\mathrm{rss}_0)). \end{equation} We note that this P-value depends on the number $q$ of covariates at the statistician's disposal. The P-value (\ref{equ:step_P}) is the basis of this paper. Indeed in a certain sense it {\it is} this paper. Everything we do is based on it, from the consideration of all subsets, to the stepwise procedure and to the asymptotics. Suppose the covariate $\bs{x}_i$ has the P-value $P_i$ given by (\ref{equ:step_P}). Then \[\mathrm{rss}_i/\mathrm{rss}_0=\text{qbeta}(\text{qbeta}(P_i,1,q-k),(n-k)/2,1/2)\] and the corresponding standard F P-value of $\bs{x}_i$, that is the P-value of $\bs{x}_i$ in a standard linear regression based on ${\mathcal M}\cup \{\bs{x}_i\}$, is \begin{equation} \label{equ:step_F_P_val} P_{F,i}=1-\text{pf}((n-k)(1/\text{qbeta}(\text{qbeta}(P_i,1,q-k),(n-k)/2,1/2)-1),1,n-k). \end{equation} The two P-values can be very different. If we put $(n,q)=(129,48802)$ as for the osteoarthritis data (see \cite{COXBATT17}) and $P_i=0.01$ then $P_{F,i}=2.059495\cdot 10^{-7}$. \subsection{All subsets} \label{sec:all_sets} Suppose $q<n$. The procedure considers all $2^q$ subsets. For a subset ${\mathcal S}$ of size $k$ the P-value of each individual covariate in the subset is given by (\ref{equ:step_P}) with $k-1$ in place of $k$, with $\mathrm{rss}_{0,i}$, the sum of squared residuals based on ${\mathcal S}_i={\mathcal S}\setminus \{\bs{x}_i\}$, in place of $\mathrm{rss}_0$, and with $\mathrm{rss}_{\mathcal S}$, the sum of squared residuals based on ${\mathcal S}$, in place of $\mathrm{rss}_i$. All subsets are retained for which each covariate in the subset has a P-value at most $\alpha$ chosen by the statistician.
In a second step all subsets which are subsets of some other retained subset are discarded. The remaining subsets are maximal in the sense that it is not possible to include another covariate whilst still maintaining the upper bound $\alpha$ for all covariates in the subset. Finally if desired the retained subsets may be ordered by the sums of the squared residuals or the number of covariates in the subset. \subsection{The Gaussian step-wise procedure} \label{sec:step_wise} Suppose $k$ covariates have already been selected with sum of squared residuals $\mathrm{rss}_0$. There remain $q-k$ covariates. For any such covariate $\bs{x}_i$ its P-value is given by (\ref{equ:step_P}). The candidate for selection is the $\bs{x}_i$ with the smallest $\mathrm{rss}_i$. If its P-value is less than the cut-off value $\alpha$ it is selected and the procedure continues. Otherwise the procedure terminates. At each step, instead of considering just one covariate for selection, one can consider a specified number $kmax$ of covariates, for example $kmax=10$. These are the first $kmax$ covariates chosen as above but with $\alpha>1$. All subsets of these $kmax$ covariates are considered as in Section~\ref{sec:all_sets}. If there is no subset all of whose P-values are less than the cut-off value $\alpha$ the procedure terminates. Otherwise the subset with the largest reduction in the sum of squared residuals is selected and the procedure continues. No stepwise procedure is guaranteed to work but Theorems \ref{thm:consistency.0}, \ref{thm:consistency.general} and \ref{thm:consistency.orthogonal} in Section~\ref{sec:boun_asymp} give sufficient conditions when considering data generated under the standard linear model with a known correct set of covariates. These theorems differ from other theorems on consistency as they hold for a fixed cut-off value of $\alpha$. For large $n$ the probability of not selecting the correct subset tends to $\alpha$.
This supports the interpretation of $\alpha$ as the probability of selecting a false positive. \subsection{The repeated Gaussian step-wise procedure} \label{sec:rep_step_wise} In many cases interest may only centre on obtaining a good parsimonious approximation to the data. In other cases, gene expression data being an example, one would like to determine those covariates which are strongly related to the dependent variable. In such cases it may be useful to use the repeated Gaussian stepwise procedure. The covariates selected using all the data are eliminated and the Gaussian stepwise procedure applied to the remaining data. Again the selected covariates are eliminated and this is continued until there are no covariates left with a P-value less than $\alpha$. An example is given in Section~\ref{sec:examples}. \section{Equivalence regions} \label{sec:equiv} Suppose the standard linear model holds with $q<n$ covariates $\bs{x}=(\bs{x}_1,\ldots,\bs{x}_q)$ and Gaussian errors with true vector of coefficients $\boldsymbol{\beta}_{\text{true}}$. Then a $1-\alpha$ confidence region $C(1-\alpha)$ for $\boldsymbol{\beta}_{\text{true}}$ is given by \begin{equation} \label{1} C(1-\alpha)=\left\{\boldsymbol{\beta}:\Vert \boldsymbol{x}(\boldsymbol{\beta}-\boldsymbol{\beta}_{\text{ls}})\Vert^2\le \mathrm{rss}_0\,\frac{q\,\text{qf}(1-\alpha,q,n-q)}{n-q}\right\} \end{equation} where $\boldsymbol{\beta}_{\text{ls}}$ is the least squares value of the coefficients and $\mathrm{rss}_0$ the sum of the squared residuals. The interpretation is that $C(1-\alpha)$ contains $\boldsymbol{\beta}_{\text{true}}$ with probability $1-\alpha$. This of course depends on the concept of a true value which is not available for a model-free approach. Nevertheless it is possible to derive $C(1-\alpha)$ in the model-free approach as follows.
We rewrite (\ref{1}) as \begin{equation} \label{2} C(1-\alpha)=\left\{\boldsymbol{\beta}:\Vert \bs{y}-\bs{x}\boldsymbol{\beta}\Vert^2 \le \Vert \bs{y}-\bs{x}\boldsymbol{\beta}_{\text{ls}}\Vert^2+\mathrm{rss}_0\,\frac{q\,\text{qf}(1-\alpha,q,n-q)}{n-q}\right\}. \end{equation} In other words $C(1-\alpha)$ consists of those $\boldsymbol{\beta}$ for which the sum of the squared residuals exceeds that based on $\boldsymbol{\beta}_{\text{ls}}$ by no more than a specified quantity. We now dispense with the linear model and take $(\bs{y},\bs{x})$ as given. A value $\boldsymbol{\beta}$ will be regarded as equivalent to the least squares value $\boldsymbol{\beta}_{\text{ls}}$ if the sum of squared residuals $\Vert \bs{y}-\bs{x}\boldsymbol{\beta}\Vert^2$ is not much larger than the minimum value $\Vert \bs{y}-\bs{x}\boldsymbol{\beta}_{\text{ls}}\Vert^2$. It remains to specify `not much larger' in the model-free situation. We do this by regressing $\boldsymbol{y}-\boldsymbol{x}\boldsymbol{\beta}$ on $q$ i.i.d. $N_n({\boldsymbol 0},\boldsymbol{I})$ random covariates. Denote the sum of squared residuals by $\mathrm{RSS}$. As before we have \[\frac{\mathrm{RSS}}{\Vert \bs{y}-\bs{x}\boldsymbol{\beta}\Vert^2}=\frac{\mathrm{RSS}}{\mathrm{rss}_0+\Vert \bs{x}({\boldsymbol \beta}- {\boldsymbol \beta}_{\text{ls}})\Vert^2} \sim \text{Beta}((n-q)/2,q/2)\] which gives \[\mathop{\mathrm{I\!P}}\nolimits(\mathrm{RSS}\le \mathrm{rss}_0)=\text{pbeta}\big(\mathrm{rss}_0/(\mathrm{rss}_0+ \Vert \bs{x}({\boldsymbol \beta}-{\boldsymbol \beta}_{\text{ls}})\Vert^2),(n-q)/2,q/2\big).\] Specifying a lower bound $\alpha$ for this probability gives \begin{eqnarray*} \label{eq:equivreg1} \Vert {\bs{x}}({\boldsymbol \beta}- {\boldsymbol \beta}_{\text{ls}})\Vert^2&\le& \mathrm{rss}_0\,\frac{\text{qbeta}(1-\alpha,q/2,(n-q)/2)}{1-\text{qbeta}(1-\alpha,q/2,(n-q)/2)}\\ &=& \mathrm{rss}_0\,\frac{q\,\text{qf}(1-\alpha,q,n-q)}{n-q}.
\end{eqnarray*} Thus the equivalence region and the confidence region are the same, but the first is model-free whilst the second requires a linear model. The two interpretations are entirely different and this has consequences for post-selection inference.\\ Post-selection inference deals with the problem of statistical inference after a model has been chosen, for example as in Section~\ref{sec:cov_selec}. Even if one of the models is true, the remainder are not and so all standard P-values and confidence regions are invalidated. There is also a problem of super-efficiency. We refer to \cite{POETLEE03} and \cite{BERetal13}. Suppose now $\bs{y}$ and $k$ covariates $\bs{x}_i, i=1,\ldots,k$ are given, a linear regression involving all $k$ covariates is performed and all individual P-values are stated. Suppose the covariate $\bs{x}_1$ has a P-value of 0.9. Then `the probability that random Gaussian white noise is better than $\bs{x}_1$ is 0.9' where `better' means `gives a smaller sum of squared residuals'. This is a correct statement no matter what the data and it provides grounds for excluding the covariate $\bs{x}_1$. Similar arguments hold for equivalence regions. \section{False positives} \label{sec:false_pos} In statistics a false positive is the rejection of a null hypothesis concerning the value of a parameter although the hypothesis is true. The definition is free of semantics and the decision is based on the P-value of some statistic under the null hypothesis. In the model-free context of this paper such a definition is not possible. Instead we regard a random Gaussian covariate as a universal irrelevant covariate and its acceptance as a universal false positive. The word `universal' is meant to indicate that it applies to any data set. The Gaussian procedure is specifically constructed to avoid such false positives: the probability of accepting one is at most $\alpha$, the cut-off value for the P-value.
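As a numerical illustration of this claim (a minimal Python sketch, not part of the {\it gausscov} package; the data vector and seed are arbitrary): regressing any fixed $\bs{y}$ on a single standard Gaussian covariate gives $\mathrm{RSS}/\Vert\bs{y}\Vert^2\sim\text{Beta}((n-1)/2,1/2)$, so the resulting P-value is exactly uniform, and the Beta mean $(n-1)/n$ can be verified by Monte Carlo.

```python
import math
import random

random.seed(0)

# arbitrary fixed data vector y: the claim is model-free, so any y will do
n = 50
y = [math.sin(i) + 0.1 * i for i in range(n)]
y_norm2 = sum(v * v for v in y)

reps = 2000
ratios = []
for _ in range(reps):
    # one standard Gaussian covariate, regression through the origin
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    zy = sum(zi * yi for zi, yi in zip(z, y))
    zz = sum(zi * zi for zi in z)
    rss = y_norm2 - zy * zy / zz  # residual sum of squares after regressing y on z
    ratios.append(rss / y_norm2)

mean_ratio = sum(ratios) / reps
print(round(mean_ratio, 3))  # should be close to (n - 1) / n = 0.98
```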
In this sense the Gaussian procedure tries to avoid a single false positive. The simulations, the real data examples and the Theorems \ref{thm:consistency.0}, \ref{thm:consistency.general} and \ref{thm:consistency.orthogonal} support this. This is much stricter than trying to control or minimize the false discovery rate \cite{BENJHOCH95}. We consider the all subsets approach with the standard P-values in the situation where there are $q_1$ relevant covariates and $q_2$ Gaussian covariates with $q_1+q_2=q<n$. Consider now two Gaussian covariates $Z_1$ and $Z_2$ in a subset whose only Gaussian covariates are these two. We assume that the relevant covariates in the subset have been selected. The subset will be selected if the P-values $p(Z_1)$ and $p(Z_2)$ are both less than the cut-off P-value $\alpha$. This probability is \begin{eqnarray*} \mathop{\mathrm{I\!P}}\nolimits(p(Z_1)\le \alpha, p(Z_2)\le \alpha)&=&\mathop{\mathrm{I\!E}}\nolimits\big(\mathop{\mathrm{I\!P}}\nolimits(p(Z_1)\le \alpha|Z_2)\{p(Z_2)\le \alpha\}\big)\\ &=&\alpha \mathop{\mathrm{I\!E}}\nolimits(\{p(Z_2)\le \alpha\})=\alpha^2 \end{eqnarray*} where $\{\cdot\}$ denotes the indicator function and we have used the fact $\mathop{\mathrm{I\!P}}\nolimits(p(Z_1)\le \alpha|Z_2)=\alpha$ because $Z_1$ and $Z_2$ are independent and the P-values are model-free. This extends to $k_2\le q_2$ Gaussian covariates. Thus the expected number of accepted subsets containing a Gaussian covariate is \[ 2^{q_1}\sum_{k_2=1}^{q_2}{q_2 \choose k_2}\alpha^{k_2}=2^{q_1}\left((1+\alpha)^{q_2}-1\right)\approx \alpha q_22^{q_1}.\] Expressed as a proportion of all subsets this is $\alpha q_22^{-q_2}$. The P-values (\ref{equ:step_P}) can be used and as these are larger the probability of including a false positive is smaller. The expressions become however more complicated. In the same situation but with $q_1<n$ and $q$ arbitrarily large the probability that the stepwise method includes a false positive is $\alpha$.
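The binomial identity $\sum_{k_2=1}^{q_2}\binom{q_2}{k_2}\alpha^{k_2}=(1+\alpha)^{q_2}-1$ behind the expected count, together with the first-order approximation $\alpha q_2 2^{q_1}$, can be checked directly (an illustrative Python snippet; the values of $\alpha$, $q_1$ and $q_2$ are arbitrary):

```python
import math

# illustrative values: q1 relevant covariates, q2 Gaussian covariates
alpha, q1, q2 = 0.01, 3, 20

# expected number of accepted subsets containing at least one Gaussian covariate
exact = 2**q1 * sum(math.comb(q2, k2) * alpha**k2 for k2 in range(1, q2 + 1))

# closed form via the binomial theorem, and its first-order approximation
closed = 2**q1 * ((1 + alpha)**q2 - 1)
approx = alpha * q2 * 2**q1

print(exact, closed, approx)
```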
See also Theorems \ref{thm:consistency.0}, \ref{thm:consistency.general} and \ref{thm:consistency.orthogonal}. \section{False negatives} \label{sec:false_neg} It can happen that the first P-value exceeds the cut-off value $\alpha$ and the procedure stops without selecting any relevant covariates although there may well be some. The dental data set of \cite{SEHTUK01} is such an example: it is analysed in Section~\ref{sec:dent}. The problem can be mitigated as described in Section~\ref{sec:step_wise} by setting $\alpha>1$ and $kmax=k$ for some chosen $k$. Another possibility is that the individual effects of a set of covariates are small, that is the corresponding P-values are large, but the P-value of the $R^2$ statistic is very small indicating that the covariates taken as a whole do have a relevant effect. So far we have only come across this problem in the simulations in Sections~\ref{sec:sim1} and \ref{sec:rangrph}. As an example we take the simulations discussed in Section~\ref{sec:sim1}. The parameters are $(n,q)=(1000,1000)$ and 60 of the covariates have a non-zero coefficient value, namely $\beta=4.5/\sqrt{1000}$. We use the stepwise Gaussian method to choose the 60 covariates. Of these 50 have non-zero coefficients and of these only two were chosen by the default version of the Gaussian method with $\alpha=0.01$. \begin{figure} \centering \includegraphics[width=.8\textwidth,height=150pt]{p-values.pdf} \caption{The first 60 P-values with the stepwise P-values (o) and the adjusted P-values (*). } \label{fig:p-values} \end{figure} Figure~\ref{fig:p-values} shows the plot of the one-step P-values and the adjusted P-values for the 60 selected covariates. The sum of the squared residuals was 843.4. We now regress the dependent variable $\bs{Y}_{1000}$ on 1000 Gaussian covariates, choose 60 using the Gaussian stepwise procedure and note the sum of the squared residuals. The smallest value over the 500 simulations was 1099.1 giving, so to speak, a P-value of 0.
Repeating this with $\beta=1/\sqrt{1000}$ gave a P-value of 0.2 indicating that this value of $\beta$ is about the limit of detectability. We propose the following. The default stepwise method compares the best of the remaining covariates with the best of the same number of i.i.d. $N_n(\boldsymbol{0},\boldsymbol{I})$ covariates, which is the first order statistic. We weaken this by comparing the best of the remaining covariates with the $\nu$th best of the random Gaussian covariates. This is accomplished by defining the P-value for $\bs{x}_i$ as \begin{equation} \label{equ:step_P_nu} P_i=B_{\nu,q-k+1-\nu}(B_{(n-k)/2,1/2}(\mathrm{rss}_i/\mathrm{rss}_0)) \end{equation} where we use the same notation as for (\ref{equ:step_P}). Again, this probability is exact. One could instead just specify another cut-off probability in place of the default value $\alpha=0.01$ but this is not easily interpretable, which is why we prefer specifying $\nu$. The larger $\nu$ the more likely it is that false positives will be selected. To estimate the number of false positives we regress $\bs{y}\in \mathbb{R}^n$ (any $\bs{y}$, as the procedure is model-free) on $q$ i.i.d. $N_n(\boldsymbol{0},\boldsymbol{I})$ Gaussian covariates for a given $\nu$. Any selected covariate is a false positive. Simulations can be performed using {\it fsimords} which is part of the {\it gausscov} R-package. As an example we put $(n,q,\alpha)=(1000,1000,0.01)$ with $\nu=5$ and $\nu=10$; the value $\nu=5$ is used in Section~\ref{sec:sim1}. The result of 10000 simulations is given in Table~\ref{tab1}. The means and standard deviations for $\nu=5$ and $\nu=10$ are (1.32,1.13) and (4.50,2.29) respectively.
{\footnotesize \begin{table}[h] \begin{center} \begin{tabular}{ccccccccccccc} $\nu$&0&1&2&3&4&5&6&7&8&9&10&$\ge$ 11\\ 5&0.28&0.34&0.23&0.10&0.04&0.01&0.00&0.00&0.00&0.00&0.00&0.00\\ 10&0.02&0.06&0.12&0.16&0.17&0.16&0.12&0.08&0.05&0.03&0.01&0.01\\ \end{tabular} \caption{Histogram of false positives for $(n,q,\alpha)=(1000,1000,0.01)$ with $\nu=5$ and $10$ based on 10000 simulations using {\it fsimords}. \label{tab1}} \end{center} \end{table} } Thus increasing $\nu$ from 1 to 5 will on average lead to about 1.32 false positives. If the increase in the number of covariates selected is much greater than this it may be deemed reasonable to use $\nu=5$. Examples of this are given in the simulations in Sections~\ref{sec:sim1} and \ref{sec:rangrph}. \section{Dependency graphs} \label{sec:graphs} A dependency graph for a set of covariates $\bs{x}$ can be calculated using {\it fgraphst}. This regresses each covariate $\bs{x}_i$ on the remaining covariates using the stepwise Gaussian covariate method. The covariate $\bs{x}_i$ is then joined to the selected covariates $\bs{x}_{\ell},\ell \in S_i$ to give the edges $(i,\ell),\ell \in S_i$. The interpretation is that given the $\bs{x}_{\ell}, \ell \in S_i$, the covariate $\bs{x}_i$ is independent of each of the remaining covariates. Note that $(i,j)$ has a different interpretation than $(j,i)$ but the software gives the option of identifying the two. The repeated stepwise procedure of Section~\ref{sec:rep_step_wise} can also be used. Typically it gives much larger graphs. Again, given the $\bs{x}_{\ell}, \ell \in S_i$, resulting from the repeated procedure, the covariate $\bs{x}_i$ is independent of each of the remaining covariates. When constructing the graph the given value of $\alpha$ is by default divided by the number of covariates $q$. Examples are given in Section~\ref{sec:examples}.
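The graph construction performed by {\it fgraphst} can be sketched as follows. The selector below is a deliberately crude placeholder (an absolute-correlation threshold) standing in for the Gaussian stepwise procedure, so only the edge-building logic reflects the description above; the function names and the threshold are our own.

```python
import math
import random

def select_placeholder(y, X, threshold=0.5):
    """Placeholder for the Gaussian stepwise selector: indices of the
    columns of X whose absolute correlation with y exceeds threshold."""
    def corr(a, b):
        m = len(a)
        ma, mb = sum(a) / m, sum(b) / m
        sa = math.sqrt(sum((v - ma) ** 2 for v in a))
        sb = math.sqrt(sum((v - mb) ** 2 for v in b))
        return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (sa * sb)
    return [j for j in range(len(X)) if abs(corr(y, X[j])) > threshold]

def dependency_graph(X, directed=True):
    """Regress each covariate on the remaining ones and join it to the
    selected covariates, mirroring the construction described above."""
    edges = set()
    for i in range(len(X)):
        rest = [j for j in range(len(X)) if j != i]
        selected = select_placeholder(X[i], [X[j] for j in rest])
        for s in selected:
            j = rest[s]
            edges.add((i, j) if directed else tuple(sorted((i, j))))
    return edges

# small demonstration: x1 is essentially a copy of x0, x2 is independent noise
random.seed(1)
m = 200
x0 = [random.gauss(0.0, 1.0) for _ in range(m)]
x1 = [v + 0.1 * random.gauss(0.0, 1.0) for v in x0]
x2 = [random.gauss(0.0, 1.0) for _ in range(m)]
graph = dependency_graph([x0, x1, x2], directed=False)
print(sorted(graph))
```

With the identification of $(i,j)$ and $(j,i)$ the demonstration yields the single edge joining the two dependent covariates.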
\section{Beyond least squares} \label{sec:extensions} We briefly consider extensions to robust ($M$-)regression, non-linear regression and minimization of the Kullback-Leibler discrepancy and the $L_1$ norm instead of the sum of squared residuals. \subsection{$M$-regression} \label{sec:rob} Let $\rho$ be a symmetric, positive and twice differentiable convex function with $\rho(0)=0$. The default function will be Huber's $\rho$-function with a tuning constant $c$ (\cite{HUBRON09}, page 69) defined by \begin{equation} \rho_{c}(u)=\left\{\begin{array}{ll} \frac{u^2}{2}, &\vert u\vert \le c,\\ c\vert u\vert -\frac{c^2}{2},&\vert u \vert > c.\\ \end{array} \right. \end{equation} The default value of $c$ will be $c=1$. For a given subset ${\mathcal M}_0$ of size $m_0$ the sum of squared residuals is replaced by \begin{equation} \label{equ:min_rho} s_0(\rho,\sigma)=\min_{\boldsymbol{\beta}({\mathcal M}_0)}\,\frac{1}{n}\sum_{i=1}^n \rho\left(\frac{y_i-\sum_{j \in{\mathcal M}_0}x_{ij}\beta_j({\mathcal M}_0)}{\sigma}\right) \end{equation} which can be calculated using the algorithm described in Section~7.8.2 of \cite{HUBRON09}. The minimizing $\beta_j({\mathcal M}_0)$ will be denoted by $\beta_j({\mathcal M}_0,lr)$. For some $\nu\notin {\mathcal M}_0$ put \begin{equation} \label{equ:min_rho_j} s_{\nu}(\rho,\sigma)=\min_{\boldsymbol{\beta}({\mathcal M}_0\cup \{\nu\})}\,\frac{1}{n}\sum_{i=1}^n \rho\left(\frac{y_i-\sum_{j \in{\mathcal M}_0\cup\{\nu\}}x_{ij}\beta_j({\mathcal M}_0\cup\{\nu\})}{\sigma}\right). \end{equation} Replace all the covariates not in ${\mathcal M}_0$ by standard Gaussian white noise, include the $\ell$th such random covariate denoted by $Z_{\ell}$ and put \begin{equation} S_{\ell}(\rho,\sigma)=\min_{\boldsymbol{\beta}({\mathcal M}_0),b}\,\frac{1}{n}\sum_{i=1}^n \rho\left(\frac{y_i-\sum_{j \in{\mathcal M}_0}x_{ij}\beta_j({\mathcal M}_0)-bZ_{\ell i}}{\sigma}\right).
\end{equation} A Taylor expansion gives \begin{eqnarray} S_{\ell}(\rho,\sigma)&\approx&s_0(\rho,\sigma)-\frac{1}{2n}\frac{\left(\sum_{i=1}^n\rho^{(1)}\left(\frac{r_i}{\sigma}\right)Z_{\ell i}\right)^2}{\sum_{i=1}^n\rho^{(2)}\left(\frac{r_i}{\sigma}\right)Z_{\ell i}^2}\nonumber\\ &\approx&s_0(\rho,\sigma)-\frac{1}{2n}\frac{\sum_{i=1}^n\rho^{(1)}\left(\frac{r_i}{\sigma}\right)^2}{\sum_{i=1}^n\rho^{(2)}\left(\frac{r_i}{\sigma}\right)}\chi^2_1 \end{eqnarray} with $r_i=y_i-\sum_{j \in{\mathcal M}_0}x_{ij}\beta_j({\mathcal M}_0,lr)$. This leads to the asymptotic $P$-value for ${\bf x}_{\nu}$ \begin{equation} \label{equ:pval_m} 1-\text{pchisq}\left( \frac{2s_0(\rho^{(2)},\sigma)}{s_0(\rho^{(1)},\sigma)} (s_0(\rho,\sigma)-s_{\nu}(\rho,\sigma)),1\right)^{q-m_0} \end{equation} corresponding to the exact $P$-value for linear regression. Here \[ s_0(\rho^{(1)},\sigma)= \frac{1}{n}\sum_{i=1}^n \rho^{(1)}\left(\frac{r_i}{\sigma}\right)^2 , \quad s_0(\rho^{(2)},\sigma)= \sum_{i=1}^n \rho^{(2)}\left(\frac{r_i}{\sigma}\right).\] It remains to specify the choice of scale $\sigma$. The initial value $\sigma_0$ of $\sigma$ is the median absolute deviation of $\boldsymbol{y}$ multiplied by the Fisher consistency factor 1.4826. After the next covariate has been included the new scale $\sigma_1$ is taken to be \begin{equation} \label{equ:sig_rob_reg} \sigma_1^2=\frac{1}{(n-m_0-1)c_f}\sum_{i=1}^n \rho^{(1)}(r_1(i)/\sigma_0)^2 \end{equation} where the $r_1(i)$ are the residuals based on the $m_0+1$ covariates and $c_f$ is the Fisher consistency factor given by \[ c_f=\mathop{\mathrm{I\!E}}\nolimits(\rho^{(1)}(Z)^2) \] where $Z$ is ${\mathcal N}(0,1)$ (see \cite{HUBRON09}). Other choices are possible. \subsection{Non-linear approximation} \label{sec:non_lin} For a given subset ${\mathcal M}$ of covariates the dependent variable $\boldsymbol{y}$ is now approximated by $g(\bs{x}({\mathcal M})\boldsymbol{\beta}({\mathcal M}))$ where $g$ is a smooth function.
Consider a subset ${\mathcal M}_0$ and write \begin{equation} ss_0= \min_{\boldsymbol{\beta}({\mathcal M}_0)}\,\frac{1}{n}\sum_{i=1}^n (y_i-g(\bs{x}_i({\mathcal M}_0)^{\top}\boldsymbol{\beta}({\mathcal M}_0)))^2 \end{equation} and denote the minimizing $\boldsymbol{\beta}({\mathcal M}_0)$ by $\boldsymbol{\beta}({\mathcal M}_0,ls)$. Now include one additional covariate $\bs{x}_{\nu}$ with $\nu \notin {\mathcal M}_0$ to give ${\mathcal M}_1={\mathcal M}_0\cup\{\nu\}$ and denote the mean sum of squared residuals by $ss_{\nu}$. As before all covariates not in ${\mathcal M}_0$ are replaced by standard Gaussian white noise. Include the $\ell$th random covariate denoted by $Z_{\ell}$ and put \[SS_{\ell}=\min_{\beta({\mathcal M}_0),b}\frac{1}{n}\sum_{i=1}^n (y_i-g(\bs{x}_i({\mathcal M}_0)^{\top}\boldsymbol{\beta}({\mathcal M}_0)+bZ_{\ell i}))^2. \] Arguing as above for robust regression results in \begin{equation} \label{equ:lsq_non_lin_1} SS_{\ell}\approx ss_0- \frac{1}{n}\,\frac{\sum_{i=1}^n r_i({\mathcal M}_0)^2g^{(1)}(\bs{x}_i({\mathcal M}_0)^{\top}\boldsymbol{\beta}({\mathcal M}_0,ls))^2}{\sum_{i=1}^ng^{(1)}(\bs{x}_i({\mathcal M}_0)^{\top}\boldsymbol{\beta}({\mathcal M}_0,ls))^2}\chi^2_1 \end{equation} where \begin{equation}\label{equ:lsq_non_lin_2} r_i({\mathcal M}_0)=y_i-g(\bs{x}_i({\mathcal M}_0)^{\top}\boldsymbol{\beta}({\mathcal M}_0,ls)). \end{equation} The asymptotic $P$-value for the covariate ${\bf x}_{\nu}$ corresponding to the asymptotic $P$-value (\ref{equ:pval_m}) for $M$-regression is \begin{equation} \label{equ:pval_non_lin} 1-\text{pchisq}\left(\frac{n(ss_0-ss_{\nu})\sum_{i=1}^ng^{(1)} (\bs{x}_i({\mathcal M}_0)^{\top}\boldsymbol{\beta}({\mathcal M}_0,ls))^2} {\sum_{i=1}^n r_i({\mathcal M}_0)^2g^{(1)}(\bs{x}_i({\mathcal M}_0)^{\top}\boldsymbol{\beta}({\mathcal M}_0,ls))^2},1\right)^{q-m_0}.
\end{equation} In the case of logistic regression with $g(u)=\exp(u)/(1+\exp(u))$ we have \begin{equation}\label{equ:lsq_non_lin_2_logistic} \frac{\sum_{i=1}^n r_i({\mathcal M}_0)^2g^{(1)}(\bs{x}_i({\mathcal M}_0)^{\top}\boldsymbol{\beta}({\mathcal M}_0,ls))^2}{\sum_{i=1}^ng^{(1)}(\bs{x}_i({\mathcal M}_0)^{\top}\boldsymbol{\beta}({\mathcal M}_0,ls))^2} = \frac{\sum_{i=1}^n(y_i-p_i(0))^2p_i(0)^2(1-p_i(0))^2} {\sum_{i=1}^np_i(0)^2(1-p_i(0))^2} \end{equation} where \[p_i(0)=\frac{\exp(\bs{x}_i({\mathcal M}_0)^{\top}\boldsymbol{\beta}({\mathcal M}_0,ls))}{1+\exp(\bs{x}_i({\mathcal M}_0)^{\top}\boldsymbol{\beta}({\mathcal M}_0,ls))}.\] This corrects a mistake in Chapter 11.6.1.2 of \cite{DAV14} where \[\frac{\sum_{i=1}^np_i^3(1-p_i)^3} {\sum_{i=1}^np_i^2(1-p_i)^2}\] occurs repeatedly instead of \[\frac{\sum_{i=1}^n(y_i-p_i)^2p_i^2(1-p_i)^2} {\sum_{i=1}^np_i^2(1-p_i)^2}.\] \subsection{Kullback-Leibler and logistic regression} \label{sec:kul_leib} For integer data least squares can be replaced by minimizing the Kullback-Leibler discrepancy. We consider the case of 0-1 data and logistic regression: \begin{eqnarray} \label{equ:kul_leib_1} kl(\bs{y},\bs{x}({\mathcal M}),\boldsymbol{\beta}({\mathcal M}))&=&-\sum_{i=1}^n\Big(y_i\log p(\bs{x}({\mathcal M})_i,\boldsymbol{\beta}({\mathcal M}))\nonumber\\ &&+(1-y_i)\log(1- p(\bs{x}({\mathcal M})_i,\boldsymbol{\beta}({\mathcal M})))\Big) \end{eqnarray} where \[ p(\bs{x}_i,\boldsymbol{\beta}) \ = \ \frac{\exp(\bs{x}_i^{\top}\boldsymbol{\beta})} {1+\exp(\bs{x}_i^{\top}\boldsymbol{\beta})} . \] Denoting the minimum for the subset ${\mathcal M}_0$ by $kl_0$ and the minimum for ${\mathcal M}_0\cup \{\nu\}$ by $kl_{\nu}$ the arguments of the previous two sections lead to the asymptotic $P$-value \begin{equation} \label{equ:asym_p_kl} 1-\text{pchisq}\left(\frac{2\sum_{i=1}^n p_i(0)(1-p_i(0))}{\sum_{i=1}^n(y_i-p_i(0))^2}(kl_0-kl_{\nu}),1\right)^{q-m_0} \end{equation} for the covariate ${\bf x}_{\nu}$.
The $p_i(0)$ are the values of $p(\bs{x}_i,\boldsymbol{\beta})$ giving the minimum $kl_0$. \subsection{$L_1$ regression} \label{sec:L1} The idea extends to $L_1$ regression but the P-values must now be obtained by simulation. This is time consuming and yields only an upper bound for the correct P-value but it may be of interest in certain circumstances. \section{Bounds and asymptotics} \label{sec:boun_asymp} We provide some theoretical results about the stepwise choice of covariates in the model-based framework, in Tukey's sense a `challenge'. Throughout this section we assume that \[ \bs{y} \ = \ \bs{\mu} + \sigma \bs{Z} \] with unknown parameters $\bs{\mu} \in \mathbb{R}^n$, $\sigma > 0$ and random noise $\bs{Z} \sim N_n(\boldsymbol{0},\boldsymbol{I})$. Moreover, we assume without loss of generality that $\|\bs{x}_i\|=1, i\in\mathcal{N}$ with $\mathcal{N}=\{1,\ldots,q\}$. The set of chosen covariates is denoted by $\widehat{\mathcal{M}}$. We consider firstly the case of no signal, $\bs{\mu} = \boldsymbol{0}$. In this situation the correct decision is $\widehat{\mathcal{M}}=\emptyset$. \begin{Theorem} \label{thm:consistency.0} If $\bs{\mu} = \boldsymbol{0}$ then \[ \mathop{\mathrm{I\!P}}\nolimits(\widehat{\mathcal{M}} \ne \emptyset) \ \le \ - \log(1 - \alpha) . \] Furthermore if $q \to \infty$ and $n/\log(q)^2 \to \infty$ then for fixed $\alpha \in (0,1)$, \[ \mathop{\mathrm{I\!P}}\nolimits(\widehat{\mathcal{M}} \ne \emptyset) \ \le \ \alpha + o(1) \] uniformly in $\bs{x}_1,\ldots,\bs{x}_q$. In the special case of orthonormal regressors $\bs{x}_i$, \[ \mathop{\mathrm{I\!P}}\nolimits(\widehat{\mathcal{M}} \ne \emptyset) \ \to \ \alpha \] as $q \to \infty$. \end{Theorem} If $\bs{\mu} \ne \boldsymbol{0}$ we suppose that $\bs{\mu}=\sum_{i \in \mathcal{M}_{*}}\beta_i\bs{x}_i$ where $ \mathcal{M}_{*}$ is a subset of $\mathcal{N}$ of size $m_{*}<n$ and the $\bs{x}_i, i\in \mathcal{M}_{*}$ are linearly independent.
For any subset $\mathcal{M}$ of $\mathcal{N}$ we denote the linear subspace of $\mathbb{R}^n$ spanned by the $\bs{x}_i, i\in\mathcal{M}$ by $\mathbb{V}_{\mathcal{M}}$ and the orthogonal complement of this subspace by $\mathbb{V}_{\mathcal{M}}^{\perp}$. The orthogonal projection onto $\mathbb{V}_{\mathcal{M}}^\perp$ is denoted by $Q_\mathcal{M}$ and for any $i \in \mathcal{N}\setminus \mathcal{M}$ we write \[ \bs{x}_{\mathcal{M},i} \ := \ \|Q_\mathcal{M}\bs{x}_i\|^{-1} Q_\mathcal{M} \bs{x}_i \] (with $0^{-1} \boldsymbol{0} := \boldsymbol{0}$). With the above notation we have \begin{Theorem}[Consistency of stepwise choice, general design] \label{thm:consistency.general} Suppose that \[ \bs{\mu} \ \in \ \mathbb{V}_{\mathcal{M}_*} \] and that the two following assumptions hold:\\ (A.1) \ $\min(n,q)/m_* \to \infty$ and $\log(q)^2/n \to 0$, and \noindent (A.2) \ for some fixed $\tau > 2$, \[ \min_{j \in \mathcal{M}_*, \mathcal{M} \subset \mathcal{M}_* \setminus \{j\}, i \in \mathcal{N}\setminus \mathcal{M}_*} \, \frac{|\bs{x}_{\mathcal{M},j}^\top\bs{\mu}| - |\bs{x}_{\mathcal{M},i}^\top\bs{\mu}|} {\sqrt{n \sigma^2 + \|\bs{\mu}\|^2}} \ \ge \ \frac{\sqrt{\tau \log q} + 2\sqrt{m_*}}{\sqrt{n}} . \] Then the stepwise procedure yields a random set $\widehat{\mathcal{M}} \subset \mathcal{N}$ such that \[ \mathop{\mathrm{I\!P}}\nolimits(\mathcal{M}_* \subset \widehat{\mathcal{M}}) \ \to \ 1 \quad\text{and}\quad \mathop{\mathrm{I\!P}}\nolimits(\mathcal{M}_* \subsetneq \widehat{\mathcal{M}}) \ \le \ \alpha + o(1) . \] \end{Theorem} If the $\bs{x}_i, i\in \mathcal{M}_*$ are orthonormal the result can be simplified.
\begin{Theorem}[Consistency of stepwise choice, orthogonal design] \label{thm:consistency.orthogonal} Suppose \[ \bs{\mu} \ = \ \sum_{i\in\mathcal{M}_*} \beta_i \bs{x}_i \] where the $\bs{x}_i$ are orthonormal and that the two following conditions hold:\\ \noindent (A.1') \ $q/m_* \to \infty$, and \noindent (A.2') \ for some fixed $\tau > 2$, \[ \min_{i\in \mathcal{M}_*} \, \frac{|\beta_i|}{\sqrt{n\sigma^2 + \sum_{j \in\mathcal{M}_*}\beta_j^2}} \ \ge \ \frac{\sqrt{\tau \log q} + \sqrt{2 \log m_*}}{\sqrt{n}} . \] Then the stepwise procedure yields a random set $\widehat{\mathcal{M}} \subset \mathcal{N}$ such that \[ \mathop{\mathrm{I\!P}}\nolimits(\mathcal{M}_* \subset \widehat{\mathcal{M}}) \ \to \ 1 \quad\text{and}\quad \mathop{\mathrm{I\!P}}\nolimits(\mathcal{M}_* \subsetneq \widehat{\mathcal{M}}) \ \le \ \alpha + o(1) . \] \end{Theorem} It is of interest to compare Theorem~\ref{thm:consistency.orthogonal} with Theorem 1 of \cite{LOKTAYTIB214} for lasso regression. There they prove (in our notation) that the first $m_*$ covariates entering the lasso path are, with probability tending to 1, those in $\mathcal{M}_*$. Our condition (A.2') is replaced by the weaker \[\min_{i\in \mathcal{M}_*}\, \vert\beta_i\vert-\sigma \sqrt{2\log(q)} \rightarrow \infty.\] However their result is restricted to $q<n$, they use the given $\sigma$, not an estimate, and there is no termination rule. See their Remark 1 on page 420 and their Section 6. \section{Simulations and real data} \label{sec:examples} All the following were done using R version 3.3.1 (2016-06-21) and the package {\it gausscov} with its default values, in particular the default value 0.01 for {\it alpha}. There is one exception: in Section~\ref{sec:sim1} we use several values of {\it nu} chosen as described in Section~\ref{sec:false_neg}. The philosophy behind this is, to cite Tukey, `What is needed is a consumer product - something designed by experts for innocents'.
The user only has to enter the data and hit `return'. This is also true for the version of lasso we use, which is the default version of {\it cv.glmnet}. It uses 10-fold cross validation to choose the value of the regularization parameter {\it lambda}. \subsection{Simulations} \label{sec:sims} \subsubsection{Tutorial 1} \label{sec:sim1} The knockoff procedure is explained in \cite{CAFAJALV2018}. The tutorial in question is Tutorial 1 of {\footnotesize \begin{verbatim} https://web.stanford.edu/group/candes/knockoffs/software/knockoff/ \end{verbatim} } \noindent which gives a simulation using knockoff. The dimensions are $(n,q)=(1000,1000)$. The 1000 covariates are Gaussian and dependent with a Toeplitz covariance matrix $\Sigma$ given by $\Sigma_{i,j}=\rho^{\vert i-j\vert}$ with $\rho=0.25$. Of the covariates $p=60$ are chosen at random and denoted by $\bs{X}_i,i=1,\ldots,60$. The dependent variable $\bs{Y}$ is given by \[\bs{Y}=\sum_{i=1}^{60}\beta_i\bs{X}_i+N_{1000}(\boldsymbol{0},\boldsymbol{I})\] with all the $\beta_i=\text{amplitude}/\sqrt{n}$ with $\text{amplitude}=4.5$. These are the particular values chosen for the first simulation discussed below. There is a second tutorial with a binary dependent variable. The results are similar and not given here but are available in \cite{DAV18} with however $\alpha=0.05$. \begin{table}[h] \begin{center} {\footnotesize \begin{tabular}{cccc} \multicolumn{4}{c}{Tutorial 1}\\ method&fp&fn&time\\ \hline lasso&76.8&1.0&8.5\\ knockoff&4.24&10.3&83.3\\ $\nu=1$&0.00&53.2&0.05\\ $\nu=5$&1.76&14.0&0.23\\ $\nu=10$&5.60&6.32&0.26\\ \hline \end{tabular} } \caption{Comparison of lasso, knockoff and Gaussian covariates based on 25 simulations with $(n,q,p,\text{amplitude},\rho)=(1000,1000,60,4.5,0.25)$. \label{tab2}} \end{center} \end{table} The number of false positives is denoted by `fp' and false negatives by `fn'. The total number of covariates selected is given by 60-fn+fp. The time for each simulation is given in seconds.
The first line for lasso shows that on average it selects about 140 covariates, each selection requiring about 8 seconds. All the relevant covariates are chosen but also on average about 80 false ones. Knockoff selects on average about 60 covariates of which 5-6 are false positives. It requires about 80 seconds for each selection.\\ The Gaussian covariate method with default values selects on average just 7 covariates. None of these are false positives. Putting $\nu=5$ results in $60-16.8+2.76\approx 46$ covariates being selected. To judge how many of these are false positives we run {\it fsimords} as described in Section~\ref{sec:false_neg}. The result is given in Table~\ref{tab1} suggesting that only about two of these are false positives and consequently about 16 are false negatives. These numbers agree with the actual values given in Table~\ref{tab2}. From Table~\ref{tab2} it is seen that the Gaussian covariate method with $\nu=10$ selects about $60+6.84-6.12\approx 61$ covariates. Running {\it fsimords} with $\nu=10$ gives a mean of 4.62 false positives. This suggests that of the 61 chosen covariates about 5 are on average false positives and consequently about 4 are false negatives. This again agrees with the values in Table~\ref{tab2} and suggests that in terms of minimizing the number of false decisions $\nu=10$ is the best choice of $\nu$. We emphasize here that the choice $\nu=10$ results from running {\it fsimords} and not by choosing the best value on running Tutorial 1. It is seen that knockoff and the choice $\nu=10$ give about the same results in terms of the sum $fp+fn$. The big difference is the running times. Whereas knockoff requires well over a minute for each simulation the Gaussian covariate method requires less than 0.5 seconds.
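The design used in Tutorial 1 (Gaussian covariates with Toeplitz covariance $\Sigma_{i,j}=\rho^{\vert i-j\vert}$) can be generated without a Cholesky factorization by exploiting its AR(1) structure; a Python sketch with reduced dimensions (the seed and dimensions are illustrative only, not those of the tutorial):

```python
import random

random.seed(42)

def toeplitz_gaussian_row(q, rho):
    """One observation of q Gaussian covariates with cov(X_i, X_j) = rho**|i-j|,
    generated sequentially as an AR(1) process."""
    x = [random.gauss(0.0, 1.0)]
    scale = (1.0 - rho * rho) ** 0.5
    for _ in range(q - 1):
        x.append(rho * x[-1] + scale * random.gauss(0.0, 1.0))
    return x

# reduced dimensions for illustration; Tutorial 1 uses (n, q) = (1000, 1000)
n, q, rho = 2000, 5, 0.25
X = [toeplitz_gaussian_row(q, rho) for _ in range(n)]

# empirical checks: unit variance and lag-one covariance close to rho
var0 = sum(row[0] ** 2 for row in X) / n
lag1 = sum(row[0] * row[1] for row in X) / n
print(round(var0, 2), round(lag1, 2))
```

Each row is one observation of the $q$ covariates; the empirical lag-one covariance should be close to $\rho=0.25$.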
\begin{table} \begin{center} {\footnotesize \begin{tabular}{cccc} \multicolumn{4}{c}{Tutorial 1}\\ method&fp&fn&time\\ \hline lasso&0.55&0.00&2.98\\ knockoff&0.00&5.00&77.1\\ $\nu=1$&0.05&0.00&0.047\\ $\nu=5$&1.20&0.00&0.052\\ $\nu=10$&4.15&0.00&0.066\\ \hline \end{tabular} } \caption{Comparison of lasso, knockoff and Gaussian covariates based on 25 simulations with $(n,q,p,\text{amplitude},\rho)=(1000,1000,5,45,0.25)$. \label{tab:2.5}} \end{center} \end{table} Table~\ref{tab:2.5} is interesting. As before we put $(n,q)=(1000,1000)$ but only five of the covariates are chosen, with a very large $\beta_i=45/\sqrt{1000}$. Lasso and the three versions of the Gaussian covariate method have no false negatives. Knockoff has five false negatives and no false positives every time, meaning that it selected nothing. \subsubsection{Random graphs} \label{sec:rangrph} This is based on \cite{MEIBUE06} with $(n,q)=(n,p)=(1000,600)$ where $n=1000$ is the dimension of each covariate and $p=600$ the number of covariates. On the last line of page~13 the expression $\varphi(d/\sqrt{p})$ with $\varphi$ the density of the standard normal distribution and $d$ the Euclidean distance is clearly false. It has been replaced by $\varphi(23.5d)$ which gives about 1800 nodes compared with the 1747 of \cite{MEIBUE06}. The Meinshausen-B\"uhlmann method with {\it alpha}=0.05 and non-directed edges ((8) of their paper) resulted in 1109 edges of which two were false positives giving 640 false negatives. One simulation of the modified (as described above) Meinshausen-B\"uhlmann random graph method produced 1823 edges. The default Gaussian covariate method yielded 1590 non-directed edges of which one was a false positive and 234 were false negatives. Putting {\it alpha}=0.05 (comparable with Meinshausen-B\"uhlmann?) gave 1679 edges of which one was a false positive and 145 were false negatives. The time required in both cases was about 11 seconds.
Putting $\nu=2$ resulted in 1821 edges of which five were false positives and seven false negatives. To judge this, 100,000 simulations were performed as in Section~\ref{sec:false_neg} with $(n,q,\alpha)=(1000,600,0.01/600)$. Of these 99,449 resulted in no false positives, 548 in just one and 3 in two, giving an average of 0.00554 false positives. Repeating this 600 times gives an average number of 3.32 false positives. So of the 231 additional edges one can expect that only about three were false positives, which agrees well with the actual number. One application of lasso to the same graph resulted in 2851 edges of which 774 were false positives and 16 false negatives. The time required was 47 minutes. \subsection{Real data} \label{sec:real_data} In this section we make no effort to give a complete statistical analysis of the data sets, which would include for example possible outliers and an examination of residuals. The aim is simply to give the results of using the methods of Sections~\ref{sec:all_sets} and \ref{sec:rep_step_wise}. \subsubsection{Red wine data} \label{sec:red} The size of the red wine data is $(n,q+1)=(1599,12)$ with the 12th coordinate being the dependent variable giving subjective evaluations of the quality of the wine. The data are available from {\footnotesize \begin{verbatim} UCI Machine Learning Repository: Wine Quality Data Set \end{verbatim} } Considering all possible subsets as described in Section~\ref{sec:all_sets} results in 20 linear approximations. The best in terms of the smallest sum of squared residuals is based on the six covariates volatile acidity, chlorides, total sulfur dioxide, pH, sulphates and alcohol. The results are given in Table~\ref{tab40}. \begin{table} \begin{center} \begin{tabular}{lccc} covariate&Regression coef.&P-values&st.
P-values\\ \hline 0& 4.297& 4.49e-26& 4.49e-26\\ alcohol&0.291& 9.61e-61& 1.60e-61\\ volatile acidity&-1.038& 1.64e-23 &2.73e-24\\ sulphates&0.889& 7.86e-15&1.31e-15\\ total sulfur dioxide& -0.002& 1.83e-05& 3.05e-06\\ chlorides&-2.002& 3.28e-06& 5.46e-07\\ pH&-0.435&1.10e-03& 1.83e-04\\ \end{tabular} \caption{Red wine data: best subset.\label{tab40}} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{lccc} covariate&P-value&rss&ratio $rss/rss_0$\\ \hline 0& 0.00e+00& 1042.2& 0.02010\\ alcohol&0.00e+00& 805.9& 0.77327\\ volatile acidity&0.00e+00& 711.8&0.88326\\ sulphates&2.03e-10& 692.1&0.97234\\ total sulfur dioxide& 1.03e-04& 683.9& 0.98813\\ chlorides&1.00e-04& 675.9&0.98825\\ pH&1.10e-03& 669.9& 0.99124\\ free sulfur dioxide&8.23e-02& 667.5& 0.99643\\ citric acid&7.42e-01& 667.1& 0.99929\\ residual sugar&8.19e-01& 666.8& 0.99962\\ fixed acidity&8.50e-01& 666.7&0.99984\\ density&4.09e-01 & 666.4& 0.99957\\ \end{tabular} \caption{Red wine data: covariates in order of selection by the Gaussian stepwise method.\label{tab4}} \end{center} \end{table} The results of the stepwise method are given in Table~\ref{tab4}. The first six are selected, which agrees with the all subsets method. Table~\ref{tab4} can be compared with Table 5 of \cite{LOKTAYTIB214}. \subsubsection{Boston housing data} \label{sec:boston} The size of the Boston housing data is $(n,q+1)=(506,14)$ with the 14th coordinate being the dependent variable giving the median value of owner-occupied homes in multiples of \$1000. The data are available from the R package MASS\\ {\footnotesize \begin{verbatim} https://CRAN.R-project.org/package=MASS \end{verbatim} } Considering all possible subsets as described in Section~\ref{sec:all_sets} results in 34 linear approximations. The best in terms of the smallest sum of squared residuals is based on all covariates except 3 and 7. The results are given in Table~\ref{tab60}. \begin{table} \begin{center} \begin{tabular}{lccc} covariate&Regression coef.&P-values&st.
P-values\\ \hline 0& 36.34& 2.73e-12& 2.73e-12\\ 1& -0.108&3.03e-03& 1.01e-03\\ 2& 0.046& 2.26e-03& 7.54e-04\\ 4 & 2.719& 4.65e-03& 1.55e-03\\ 5& -17.38& 3.63e-06& 1.21e-06\\ 6& 3.802& 8.67e-19& 2.89e-19\\ 8& -1.493 &2.05e-14& 6.84e-15\\ 9 & 0.300& 8.99e-06& 3.00e-06\\ 10& -0.012& 1.56e-03& 5.21e-04\\ 11& -0.947& 2.77e-12& 9.24e-13\\ 12& 0.009& 1.67e-03&5.57e-04\\ 13& -0.523& 6.42e-25& 2.14e-25\\ \end{tabular} \caption{Boston housing data: best subset.\label{tab60}} \end{center} \end{table} The results of the Gaussian stepwise procedure are given in Table~\ref{tab6}. With a cut-off P-value of $\alpha=0.01$ the first seven are selected. The Boston housing data are considered again in Section~\ref{sec:inter}. \begin{table} \begin{center} \begin{tabular}{cccc} covariate&P-value&rss&ratio $rss/rss_0$\\ \hline 0& 0.00e+00& 42716&0.143\\ 13& 0.00e+00& 19472& 0.456\\ 6&0.00e+00& 15439& 0.793\\ 11& 1.81e-13& 13728& 0.889\\ 8&1.67e-04& 13229& 0.964\\ 5& 4.94e-07& 12469& 0.943\\ 4 &2.12e-03& 12141& 0.974\\ 12&5.39e-03& 11868& 0.978\\ \end{tabular} \caption{Boston housing data: covariates in order of selection by the Gaussian stepwise procedure.\label{tab6}} \end{center} \end{table} \subsubsection{The dental data} \label{sec:dent} We transcribed this data from Table 1 of \cite{SEHTUK01}. It is an $8\times 3\times 5$ ANOVA table with the dependent variable being the hardness of a dental gold filling. There were eight different gold alloys, three different methods of preparation and five different dentists who prepared the filling. A full description of the data is given in \cite{SEHTUK01}. We can write this as a linear regression problem with 13 covariates. The covariates 1:7 are the first seven gold alloys, the covariates 8:9 the first two methods of preparation and the covariates 10:13 the first four dentists. The all subsets method of Section~\ref{sec:all_sets} with adjusted P-values returns only the covariates 8 and 9 with P-values 2.60e-06 and 1.61e-07 respectively.
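The dummy coding just described can be sketched as follows. This is a minimal illustration, not code from the paper; the helper \verb|anova_dummies| is a hypothetical name, and the encoding drops the last level of each factor as reference, giving $7+2+4=13$ covariates.

```python
import numpy as np

# Sketch (hypothetical helper): encode the 8 x 3 x 5 ANOVA layout as a
# linear regression with 13 dummy covariates: 7 alloys + 2 methods of
# preparation + 4 dentists, dropping the last level of each factor.
def anova_dummies(alloy, method, dentist):
    """alloy in 0..7, method in 0..2, dentist in 0..4 (integer arrays)."""
    n = len(alloy)
    X = np.zeros((n, 13))
    for i in range(n):
        if alloy[i] < 7:           # covariates 1:7  -> columns 0..6
            X[i, alloy[i]] = 1.0
        if method[i] < 2:          # covariates 8:9  -> columns 7..8
            X[i, 7 + method[i]] = 1.0
        if dentist[i] < 4:         # covariates 10:13 -> columns 9..12
            X[i, 9 + dentist[i]] = 1.0
    return X

# One observation per cell of the 8 x 3 x 5 design: 120 rows in all.
alloy, method, dentist = np.indices((8, 3, 5)).reshape(3, -1)
X = anova_dummies(alloy, method, dentist)
print(X.shape)  # (120, 13)
```

Any standard least squares routine can then be applied to $X$ together with the hardness measurements.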
The Gaussian stepwise procedure returns no covariates for $\alpha=0.05$. If we now follow Section~\ref{sec:false_neg} and put $kmax=10$, the stepwise method results in the covariates 1 and 5:13. Considering all subsets of these 10 covariates results in the covariates 8 and 9. Following \cite{SEHTUK01} and \cite{DAV12} we allow interactions for the seven observations 14, 90, 93, 96, 103, 119 and 120. The all subsets method returns the covariates 8, 9, 15:17 and 20. The stepwise method returns the covariates 15:17 and 20. It thus fails to pick up the covariates 8 and 9. We now again set $kmax=10$ in the stepwise version to end up with the covariates 6, 8, 9 and 14:20. Again considering all subsets we obtain the covariates 8, 9, 15:17 as for the all subsets method. Lasso performed well on this data set. In ten runs without the interaction terms it returned the covariates 6-11 six times and the covariates 5-11 four times. With the interactions it resulted in the covariates 6-19 every time. \subsubsection{Leukemia data} \label{sec:leuk} The dimensions of the leukemia data (\cite{GOLETAL99}) are $(n,q+1)=(72,3572)$. The data are available from {\footnotesize \begin{verbatim} http://stat.ethz.ch/~dettling/bagboost.html \end{verbatim} } \noindent For more information about the data see \cite{DETBUH02}. The repeated Gaussian covariate procedure gives 115 linear approximations involving 281 covariates. The time required was 1.7 seconds. The first two linear approximations are given in Table~\ref{tab:leukemia}. The first column gives the number of the linear approximation, the second the covariates included in this approximation, the third the stepwise P-values of the covariates and the fourth the sum of squared residuals as each successive covariate is included. Columns five and six give respectively the P-values as defined by (\ref{equ:step_P}) and the standard least squares P-values.
\begin{table} \begin{center} \begin{tabular}{cccccc} \multicolumn{6}{c}{The leukemia data repeated stepwise}\\ approx.&covariate&stepwise P-value&rss&P-value&st. P-value\\ \hline 1& 0& 4.15e-08& 16.32& 7.65e-21& 7.65e-21\\ 1& 1182& 0.00e+00& 4.26& 1.49e-15& 4.17e-19\\ 1&1219& 8.58e-04& 2.88& 4.12e-04& 1.16e-07\\ 1& 2888& 3.58e-03& 2.02& 3.58e-03& 1.00e-06\\ 2 & 0& 4.15e-08& 16.32&1.73e-19& 1.73e-19\\ 2& 1652& 0.00e+00& 4.38& 7.24e-05& 2.03e-08\\ 2& 979& 9.36e-05& 2.79& 9.37e-05& 2.62e-08\\ \end{tabular} \caption{The first two linear approximations for the leukemia data.\label{tab:leukemia}} \end{center} \end{table} We now apply the strategy of Section~\ref{sec:step_wise}, setting the number of covariates to ten, $kmax=10$. This resulted in the seven additional covariates 1946, 2102, 183, 3038, 2558, 801 and 2491. The all subsets method with adjusted P-values reduces this to six covariates, the original three plus 183, 3038 and 2558. Putting $kmax=20$ gives the same result. This can also be done for the repeated stepwise procedure. It results in 75 linear approximations involving 250 covariates. The time required was 2.5 seconds. Five applications of lasso resulted in between 11 and 23 covariates. The time required for each was about 0.5 seconds. Knockoff took six hours to produce 14 covariates. These numbers are too large for a sample size of $n=72$, but the all subsets method of Section~\ref{sec:all_sets} can be used to make a further selection. For the lasso set of size 23 the first four subsets were \[\{672,1182,2481,2888\},\{672,979,2481\},\{657,672,979\},\{1182, 1219, 2888\}.\] The covariates are numbers 1, 2, 3, 7, 8 and 48 on the repeated Gaussian list. \subsubsection{Osteoarthritis data} \label{sec:osteo} The osteoarthritis data with dimensions $(n,q+1)=(129,48803)$ was analysed in \cite{COXBATT17}. The authors selected 17 covariates. The repeated Gaussian covariate method of Section~\ref{sec:rep_step_wise} gave 165 covariates forming 63 linear approximations.
These included six of the 17 covariates chosen in \cite{COXBATT17}. The time required was 10 seconds. The first two linear approximations are in Table~\ref{tab:osteo}. If $kmax$ is set to 10, the repeated Gaussian covariate procedure results in 56 linear approximations involving 207 covariates. The time required was 21 seconds. \begin{table} \begin{center} \begin{tabular}{cccc} \multicolumn{4}{c}{The osteoarthritis data repeated stepwise}\\ approx.&covariate&P-value&st. P-value\\ \hline 1 & 0 &1.66e-21& 1.66e-21\\ 1 &11499 &3.85e-18& 7.89e-23\\ 1 &31848 &1.42e-05& 2.91e-10\\ 1 &33321& 6.44e-03& 1.32e-07\\ 2 & 0 &6.03e-01& 6.03e-01\\ 2 &44902& 1.26e-09& 2.58e-14\\ 2 & 3630& 8.48e-14& 1.74e-18\\ 2 &43770& 4.06e-03& 8.33e-08\\ \end{tabular} \caption{The first two linear approximations for the osteoarthritis data.\label{tab:osteo}} \end{center} \end{table} Five applications of lasso resulted in between 16 and 61 covariates. The time for each application was about 10 seconds. This data set is much too large for knockoff. \subsubsection{Interactions} \label{sec:inter} We now consider the Boston housing data again but take the covariates to consist of all 77520 interactions of order at most 7. The results of the Gaussian stepwise procedure are given in Table~\ref{tab7}. The time required was about 1.5 seconds. Note that the first two interactions give a smaller sum of squared residuals, 9930, than all of the original 13 covariates, 11078. There are 203490 interactions of degree at most 8: the Gaussian method gives a similar result as before in about 4 seconds. \begin{table} \begin{center} \quad\\ \begin{tabular}{ccccc} interaction&P-value&rss&adj. P-value&st.
P-value\\ \hline 0 &0.00e+00 &42716& 5.29e-139& 5.29e-139\\ $6^4\cdot12$&0.00e+00& 16350& 3.21e-118& 4.14e-123\\ $6^4\cdot11\cdot12\cdot13$&0.00e+00& 9930& 2.33e-62& 3.01e-67\\ $5\cdot6^4\cdot 8$& 9.42e-08& 8979& 4.49e-06& 5.79e-11\\ $1\cdot5^5\cdot 13$&7.30e-07& 8184& 8.52e-09& 1.10e-13\\ $1^2\cdot4\cdot 9\cdot 11^3$&3.84e-06 & 7506& 1.28e-11& 1.65e-16\\ $1^2\cdot4\cdot5\cdot6\cdot9^2$&1.22e-09& 6668& 1.22e-09& 1.57e-14\\ \end{tabular} \caption{Boston housing data: interactions of order $\le 7$ in order of selection by the Gauss stepwise procedure.\label{tab7}} \end{center} \end{table} The top panel of Figure~\ref{fig:bostinter_res} shows the residuals based on a standard least squares fit using the 13 covariates. The bottom panel shows the residuals based on the six interactions of Table~\ref{tab7}. There are two outliers (in Tukey's terminology, exotic observations), the observations 369 and 372. The remaining 504 observations are reasonably well approximated. \begin{figure} \centering \includegraphics[width=.8\textwidth,height=120px]{boston_res.pdf} \includegraphics[width=.8\textwidth,height=120px]{bostoninter_res.pdf} \caption{Top: the residuals based on the 13 covariates. Bottom: the residuals based on the six interactions selected from the 77520 interaction terms of Table~\ref{tab7}.} \label{fig:bostinter_res} \end{figure} Five applications of lasso resulted in between 16 and 61 selected covariates. The applications took on average 80 seconds each. \subsubsection{Non-parametric regression} \label{sec:snspt_melbourne} As mentioned in the Introduction, the Gaussian stepwise method can be applied to non-parametric regression with covariates of the form $\bs{x}_{\ell} = (f_{\ell}(u_i))_{i=1}^n$ with given basis functions $f_{\ell},\ \ell =1,\ldots,q$. What the method cannot do in this context is to take shape or other restrictions into account as for example in \cite{DAVKOV01,KOV07,DUEMKOV09}.
It is however well adapted to finding periodicities in the dependent variable by taking the basis functions to be of the form $f_{\ell}(t)=\sin(\pi \ell t/n)$ and $f_{\ell}(t)=\cos(\pi \ell t/n)$ for $t=1,\ldots,n$ and $\ell=1,\ldots,n/2$, or some subset thereof. \\ \noindent We consider the monthly average number of sunspots from 1749 to 2020, giving 3253 observations in all. The data are available from \begin{verbatim} Source: WDC-SILSO, Royal Observatory of Belgium, Brussels \end{verbatim} The number of covariates is 3253 although a much smaller set would be sufficient. In all, 55 covariates were selected. The time required was about 2.4 seconds. The first ten together with their periods in years were\\ \begin{tabular}{ccccccccccc} covariate&98&101&108&10&92&128&16&97&19&133\\ period&11.06&10.74&10.04&108.4&11.79&8.47&67.77&11.18&57.07&8.15\\ \end{tabular} \newline\\ The top panel of Figure~\ref{fig:snspt_melt} shows the regression function based on these covariates.\\ Five applications of lasso resulted on average in 730 covariates. Each application took about 18 seconds. The top panel of Figure~\ref{fig:lasso_melb} shows a plot of the mean square error against $\log(\lambda)$ for the sunspot data. The numbers at the top give the number of covariates selected. A second data set we consider is the minimum daily temperature in Melbourne from 1981--1990. The source is {\footnotesize \begin{verbatim} https://www.kaggle.com/paulbrabban/daily-minimum-temperatures-in-melbourne \end{verbatim} } The size is $n=3650$ and again we take the covariates to be the trigonometric functions mentioned above. The stepwise method resulted in eight covariates with periodicities in days of 365, 365, 187, 912, 182, 85, 3650 and 36. The regression function based on these covariates is shown in the bottom panel of Figure~\ref{fig:snspt_melt}.\\ Five applications of lasso yielded each time about 1050 covariates. Each application took about 29 seconds.
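The trigonometric-covariate construction described above can be sketched as follows. This is an illustrative sketch only: the synthetic series (a sine of frequency 10 plus noise) stands in for the sunspot data, and a single-covariate least squares fit replaces the full stepwise procedure.

```python
import numpy as np

# Sketch: build sine/cosine basis covariates f_ell and pick the single
# covariate giving the smallest sum of squared residuals. The synthetic
# signal sin(2*pi*10*t/n) corresponds to the basis function with ell = 20,
# since sin(pi*20*t/n) = sin(2*pi*10*t/n).
n = 1200
t = np.arange(1, n + 1)
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * 10 * t / n) + 0.3 * rng.standard_normal(n)

best_ell, best_rss = None, np.inf
for ell in range(1, n // 4):
    # one sine and one cosine covariate per frequency ell
    for f in (np.sin(np.pi * ell * t / n), np.cos(np.pi * ell * t / n)):
        beta = f @ y / (f @ f)          # least squares on one covariate
        rss = np.sum((y - beta * f) ** 2)
        if rss < best_rss:
            best_ell, best_rss = ell, rss
print(best_ell)
```

The stepwise method iterates this search, at each step regressing the current residuals on the remaining covariates and stopping via the Gaussian P-value.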
The bottom panel of Figure~\ref{fig:lasso_melb} shows a plot of the mean square error against $\log(\lambda)$ for the Melbourne data. The numbers at the top give the number of covariates selected. \begin{figure}[t] \centering \includegraphics[width=.8\textwidth,height=120px]{snspt.pdf} \includegraphics[width=.8\textwidth,height=120px]{mel_temp.pdf} \caption{Top: the regression function for the sunspot data. Bottom: the regression function for the Melbourne daily minimum temperature.} \label{fig:snspt_melt} \end{figure} \begin{figure}[t] \centering \includegraphics[width=.8\textwidth,height=120px]{lasso_snspt.pdf} \includegraphics[width=.8\textwidth,height=120px]{lasso_mel.pdf} \caption{The lasso plots of the mean square error for the sunspot (top) and Melbourne temperature (bottom) data. The numbers at the top are the numbers of selected covariates.} \label{fig:lasso_melb} \end{figure} \subsubsection{Autoregressive and lagged regression} The data we considered are the USA quarterly data 1919--1941 and 1947--1983, available from \begin{verbatim} http://data.nber.org/data/abc/ \end{verbatim} We merged the two time intervals and used the values given in 1972\$. The dependent variable was taken to be the Gross National Product (GNP72). Taking the covariates to be the first 16 lags of GNP72, that is, an autoregressive approach, the all subsets method results in just the lags 1 and 2. This is also the result of the Gaussian stepwise procedure; see Table~\ref{tab:gnp}. \begin{table} \begin{center} \begin{tabular}{ccc} covariate&adj. P-value&st.
P-value\\ \hline 0& 3.48e-01& 3.48e-01\\ 1& 1.13e-54& 7.57e-56\\ 2& 1.46e-06& 9.74e-08\\ \end{tabular} \caption{The USA Gross National Product data with the first two lags.\label{tab:gnp}} \end{center} \end{table} The following 21 further indices (see the above data source for an explanation) were included, each with lags of 1:16, giving 352 covariates in all:\\ \noindent CPRATE, CORPYIELD, M1, M2, BASE, CSTOCK, WRICE67, PRODUR72, NONRES72, IRES72, DBUSI72, CDUR72, CNDUR72, XPT72, MPT72, GOVPUR72, NCSPDE72, NCSBS72, NCSCON72, CCSPDE72, CCSBS72\\ \noindent We are not economists so whether this makes sense or not we leave to the reader. The result of the Gaussian stepwise procedure is given in Table~\ref{tab:gnp_all}. \begin{table} \begin{center} \begin{tabular}{ccl} P-value&st. P-value&covariate\\ \hline 4.66e-01& 4.66e-01&\\ 1.21e-315& 3.45e-318&Gross National Product, lag 1 \\ 6.85e-15& 1.97e-17& Commercial Paper Rate, lag 2\\ 1.71e-04& 4.8e-07& Change Business Inventories, lag 4\\ \end{tabular} \caption{The USA Gross National Product data using the first 16 lags of the 22 indices listed above.\label{tab:gnp_all}} \end{center} \end{table} \noindent \begin{table} \begin{center} \begin{tabular}{ccl} P-value&st. P-value&covariate\\ \hline 4.59e-02& 4.59e-02&\\ 8.93e-140& 2.56e-142&Gross National Product, lag 1\\ 1.00e+00& 3.17e-01&Index of all Common Stocks, lag 1\\ 7.18e-01& 3.61e-03&Non-Residential Structures, lag 1\\ 1.00e+00& 2.42e-01& Imports, lag 15\\ \end{tabular} \caption{The USA Gross National Product data using the first 16 lags of the 22 indices listed above using lasso.\label{tab:gnp_all_lasso}} \end{center} \end{table} Five applications of lasso resulted each time in the covariates 1, 97, 161 and 271. The results are given in Table~\ref{tab:gnp_all_lasso}. We applied the R autoregressive function to the Melbourne data. It results in an AR process of order 21.
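The lagged-covariate design used above (the first 16 lags of the dependent variable) can be sketched as follows. The helper \verb|lag_matrix| is a hypothetical name and the AR(2) series below is a synthetic stand-in for GNP72, not the actual data.

```python
import numpy as np

# Sketch: for a series y, build a design matrix whose columns are the
# lags 1..p, losing the first p observations. Covariate k is lag k.
def lag_matrix(y, p):
    n = len(y)
    X = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    return X, y[p:]

# Synthetic stand-in: a stationary AR(2) series
# y_t = 1.3 y_{t-1} - 0.4 y_{t-2} + e_t  (roots 0.8 and 0.5).
rng = np.random.default_rng(1)
y = np.zeros(500)
for s in range(2, 500):
    y[s] = 1.3 * y[s - 1] - 0.4 * y[s - 2] + rng.standard_normal()

X, z = lag_matrix(y, 16)
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
# beta[0] and beta[1] should be roughly 1.3 and -0.4; lags 3..16 near 0.
```

A selection procedure applied to $X$ should then retain only the lags 1 and 2, in line with the result reported above.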
The Gaussian stepwise method allowing for lags of 400 results in the following ten lags in order of importance: 1, 2, 342, 172, 7, 366, 350, 4, 21 and 384. Five applications of lasso gave 35 lags twice and 36 lags three times. \subsubsection{Dependency graphs} The graph for the covariates of the leukemia data resulted in 1577 directed edges in six seconds. The number of undirected edges was 1294. The repeated Gaussian method gives 11655 edges in about 40 seconds. Ten of the covariates had between 50 and 75 subgroups. Lasso produced 32322 edges in about 30 minutes. Finally a dependency graph for the 48802 covariates of the osteoarthritis data was calculated. It consisted of 38415 directed edges. The computing time was about 85 minutes. It was estimated that lasso would require over 5 days. \section{Some comments on Gaussian covariates, lasso and cross-validation} \label{sec:comments} The above results demonstrate clearly that the Gaussian covariates method does not overfit, neither in the simulations nor for the real data sets. The largest subset for the osteoarthritis data consisted of eight covariates, but the largest standard P-value was 2.23e-06 and the largest P-value 8.78e-03, just less than the cut-off value 0.01. So even this subset would not qualify as overfitting in the statistical sense. If one uses logistic regression the subset can be reduced to five covariates, but not fewer, and gives a perfect fit. In contrast lasso with the cross-validation option overfits every time the number of covariates is large, which here means more than 600. It overfits most spectacularly for the sunspot and Melbourne temperature data sets, as Figure~\ref{fig:lasso_melb} shows. Why this should be so is not clear. We look at this in more detail. Firstly we note that the residuals from the Gaussian covariate fit to the Melbourne data shown in Figure~\ref{fig:snspt_melt} have a standard deviation of about 2.7 and a first order correlation of about 0.5. We use these values in the simulations below.
The correlations are obtained by setting the errors $\varepsilon_t=(Z_t+\gamma Z_{t-1})/\sqrt{1+\gamma^2}$ where the $Z_t$ are i.i.d. $N(0,1)$, and putting $\gamma=2-\sqrt{3}$ and $\gamma=1$ to give correlations of 0.25 and 0.5 respectively. Table~\ref{tab:comment} shows the results of 20 simulations and gives, in order, the mean, the smallest and the largest number of covariates chosen by the two procedures, lasso and Gaussian covariates. This is done for three different values of the first order correlation of the errors, in each case with standard deviation 2.7, and for three different signals. The signals are: no signal, the sine function $20\sin(2\pi\cdot 10\,t/3650)$, $t=1,\ldots,3650$, and the function shown in Figure~\ref{fig:mel_smooth}. The latter is the function obtained from the Gaussian covariate method with the number of chosen covariates set to five. It results in the basis functions 40, 39, 77, 16 and 79. \begin{table} \begin{center} \begin{tabular}{ccccccccccc} signal&method&\multicolumn{3}{c}{$\rho=0.0$}&\multicolumn{3}{c}{$\rho=0.25$}&\multicolumn{3}{c}{$\rho=0.5$}\\ \hline no signal&lasso&8.15&0&56&719&522&879&1449&1376&1548\\ &Gauss&0&0&0&0&0&0&0.3&0&1\\ sine&lasso&4.1&2&6&14.5&9&19&33&20&48\\ &Gauss&1&1&1&1&1&1&1.25&1&2\\ Figure~\ref{fig:mel_smooth}&lasso&646&449&814&1135&1047&1188&1410&1341&1444\\ &Gauss&3.3&3&4&3.6&3&5&4.1&2&6\\ \hline \end{tabular} \caption{The number of covariates selected by lasso and the Gaussian procedure for various first order correlations and signals.\label{tab:comment}} \end{center} \end{table} The results show that lasso consistently overfits and is very sensitive to changes in the correlation structure of the errors as well as to the underlying signal. We do not know why this is so. In contrast the Gaussian procedure is very stable and accurate. There is a simple procedure which can prevent lasso from overfitting.
Each time a new covariate is included, the P-values according to (\ref{equ:step_P}) are calculated and the procedure terminates with the last subset of selected covariates for which all P-values are less than the chosen threshold $\alpha$. For $\alpha=0.01$ this gives the same result as the Gaussian covariate method. \begin{figure}[t] \centering \includegraphics[width=.8\textwidth,height=120px]{mel_smooth.pdf} \caption{The smooth regression function for the Melbourne data used in Table~\ref{tab:comment}.} \label{fig:mel_smooth} \end{figure}
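The correlated-error construction of the simulations above can be checked numerically. This sketch only verifies that $\varepsilon_t=(Z_t+\gamma Z_{t-1})/\sqrt{1+\gamma^2}$ has variance one and first order correlation $\gamma/(1+\gamma^2)$, which equals $1/4$ exactly for $\gamma=2-\sqrt{3}$ and $1/2$ for $\gamma=1$; the helper name \verb|ma1_errors| is illustrative.

```python
import numpy as np

# Errors eps_t = (Z_t + gamma*Z_{t-1}) / sqrt(1 + gamma^2), Z_t i.i.d. N(0,1):
# variance 1 and lag-1 correlation gamma / (1 + gamma^2).
def ma1_errors(n, gamma, rng):
    z = rng.standard_normal(n + 1)
    return (z[1:] + gamma * z[:-1]) / np.sqrt(1 + gamma ** 2)

rng = np.random.default_rng(0)
for gamma in (2 - np.sqrt(3), 1.0):
    print(round(gamma / (1 + gamma ** 2), 2))    # theoretical: 0.25, then 0.5
    eps = ma1_errors(200_000, gamma, rng)
    rho = np.corrcoef(eps[1:], eps[:-1])[0, 1]   # empirical lag-1 correlation
```

With $n=200{,}000$ draws the empirical correlation agrees with the theoretical value to well within Monte Carlo error.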
Logic Blog 2013 (\url{https://arxiv.org/abs/1403.5719})

The 2013 logic blog has focussed on the following:
\begin{enumerate}
\item Higher randomness. Among others, the Borel complexity of $\Pi^1_1$ randomness and higher weak 2 randomness is determined.
\item Reverse mathematics and its relationship to randomness. For instance, what is the strength of Jordan's theorem in analysis? (His theorem states that each function of bounded variation is the difference of two nondecreasing functions.)
\item Randomness and computable analysis. This focusses on the connection of randomness of a real $z$ and Lebesgue density of effectively closed sets at $z$.
\item Exploring similarity relations for Polish metric spaces, such as isometry, or having Gromov-Hausdorff distance $0$. In particular their complexity was studied.
\item Various results connecting computability theory and randomness.
\end{enumerate}
\part{Metric spaces} \newpage \newpage \part{Higher randomness} \section{Greenberg, Monin: An upper bound on the Borel rank of the set of $\Pi^1_1$-random reals} Written by Benoit Monin in August, joint work with Noam Greenberg. \\ Recall that a set $Z \in \seqcantor$ is $\Pi^1_1$-random if it is in no $\Pi^1_1$ null class. Kechris~\cite{Kechris:75} showed that there is a largest $\Pi^1_1$ null class, which can be seen as a universal test for $\Pi^1_1$-randomness. A simple direct proof of this fact is in the last section of Hjorth and Nies~\cite{Hjorth.Nies:07}. For background on higher randomness see \cite[Ch.\ 9]{Nies:book}. We show that the set of $\Pi^1_1$-randoms is $\mathbf{\Pi^0_3}$. Together with Yu Liang's result in Section~\ref{s:GB-Monin sharpness}, this gives the exact Borel rank of the $\Pi^1_1$-random reals (and thus also the exact Borel rank of Kechris' largest $\Pi^1_1$ null set). We will show that being $\Pi^1_1$-random is equivalent to a certain notion of genericity. The class of generic elements for this notion will have Borel complexity $\mathbf{\Pi^0_3}$. This notion of genericity is a variation of forcing with $\Pi^0_1$ classes of positive measure, using the same idea that underlies the difference between $1$-genericity and weak $1$-genericity. Following the thesis of Kautz, where forcing with closed classes of positive measure is called Solovay forcing, we introduce the two notions of \textbf{weakly-Solovay-$\Sigma^1_1$-generic} and \textbf{Solovay-$\Sigma^1_1$-generic} reals: \begin{definition} We say that $X$ is weakly Solovay-$\Sigma^1_1$-generic if for any uniformly $\Sigma^1_1$ sequence $\{F_n\}_{n \in \omega}$ of closed sets of positive measure with $\lambda(\bigcup_n F_n)=1$ we have that $X$ is in one of the $F_n$. \end{definition} It is easy to see that weak Solovay-$\Sigma^1_1$-genericity is the same as higher weak 2-randomness (sometimes called strong $\Pi^1_1$-ML-randomness).
We now give the notion of genericity that will turn out to be equivalent to $\Pi^1_1$-randomness: \begin{definition} We say that $X$ is Solovay-$\Sigma^1_1$-generic if for any uniformly $\Sigma^1_1$ sequence $\{F_n\}_{n \in \omega}$ of closed sets of positive measure, either $X$ is in one of the $F_n$, or there is a $\Sigma^1_1$ closed set $G$ of positive measure such that $G \cap \bigcup_n F_n =\emptyset$ and $X \in G$. \end{definition} Clearly, the last two definitions are related to each other in the same way as $1$-genericity is related to weak $1$-genericity. This justifies the terms weakly-Solovay-$\Sigma^1_1$-generic and Solovay-$\Sigma^1_1$-generic. We now have to prove that this last notion of genericity coincides with $\Pi^1_1$-randomness. The only difficult part of the proof is to show that if $X$ is Solovay-$\Sigma^1_1$-generic then $\omega_1^X$ is equal to $\omega_1^{ck}$. In order to prove this, we use the idea, devised by Sacks and simplified by Greenberg, used to show that the set of $X$ with $\omega_1^X > \omega_1^{ck}$ has measure $0$.\\ The idea is the following: suppose that for some $X$ we have a function $\varphi$ such that: $$\forall n\ \ \exists \alpha < \omega_1^{ck}\ \ \varphi^X(n) \in \mathcal{O}_\alpha^X$$ where $\mathcal{O}^X$ is the set of Kleene notations for ordinals computable in $X$, and $\mathcal{O}^X_\alpha$ the set of Kleene notations for ordinals computable in $X$ with order type strictly smaller than $\alpha$. Suppose also that $X$ is Solovay-$\Sigma^1_1$-generic. Then we will show that the supremum over $n \in \omega$ of the ordinals denoted by the $\varphi^X(n)$ is smaller than $\omega_1^{ck}$. To show this we need two lemmas: \begin{lemma} \label{approx} Let $S$ be a $\Sigma^1_1$ predicate of positive measure of the form $$S(X) \leftrightarrow \exists n\ \ \forall \alpha < \omega_1^{ck}\ \ S_{\alpha, n}(X)$$ where $S_{\alpha, n}$ is a $\Delta^1_1$ predicate uniformly in $n$ and $\alpha$.
Then there is a union of uniformly $\Sigma^1_1$ closed sets $\bigcup_n F_n \subseteq S$ with $\lambda(S - \bigcup_n F_n) = 0$. \end{lemma} \begin{proof} Let $S_n=\{X\ |\ \forall \alpha < \omega_1^{ck}\ \ S_{\alpha, n}(X)\}$. So we have $S=\bigcup_n S_n$ and $S_n = \bigcap_{\alpha < \omega_1^{ck}} S_{\alpha, n}$. Let us fix $n$ and build a union of uniformly $\Sigma^1_1$ closed sets $\bigcup_m F_{m,n} \subseteq S_n$ with $\lambda(S_n - F_{m,n}) < 2^{-m}$.\\ For each $m$, in each $S_{\alpha, n}$, find a $\Sigma^1_1$ closed set $F_{\alpha, m, n}$ with $F_{\alpha, m, n} \subseteq S_{\alpha, n}$ and $\lambda(S_{\alpha, n} - F_{\alpha, m, n}) < 2^{-p(\alpha)}2^{-m}$, where $p$ is an injection of $\omega_1^{ck}$ into $\omega \setminus \{0\}$. Let us set $F_{m,n}=\bigcap_{\alpha < \omega_1^{ck}} F_{\alpha, m, n}$. As the intersection of closed sets $\bigcap_\alpha F_{\alpha, m, n}$ is a closed set and as the predicate $\forall \alpha\ \ X \in F_{\alpha, m, n}$ is clearly a $\Sigma^1_1$ predicate, we have that $F_{m,n}$ is a $\Sigma^1_1$ closed set. Also, as $S_n - F_{m,n}=\bigcup_\alpha (S_n - F_{\alpha, m,n})$, we have: $$ \begin{array}{rcl} \lambda(S_n - F_{m,n})&\leq&\lambda(\bigcup_\alpha (S_n - F_{\alpha, m,n})) \\ &\leq&\lambda(\bigcup_\alpha (S_{\alpha, n} - F_{\alpha, m,n})) \\ &\leq&\sum_\alpha \lambda(S_{\alpha, n} - F_{\alpha, m, n}) \leq 2^{-m} \end{array} $$ Then, uniformly in $n$ and $m$, we have sequences of $\Sigma^1_1$ closed sets $F_{n,m} \subseteq S_n$ such that $\lambda(S_n - F_{n,m}) < 2^{-m}$. Any $\omega$-ordering of $\omega \times \omega$ gives us the desired sequence of $\Sigma^1_1$ closed sets. \end{proof} \begin{lemma} Let $P(X)$ be a $\Pi^1_1$ predicate of the form $$P(X) \leftrightarrow \forall n\ \ \exists \alpha < \omega_1^{ck}\ \ P_{\alpha, n}(X)$$ where each $P_{\alpha, n}$ is $\Delta^1_1$ uniformly in $n$ and $\alpha$. Suppose that $X$ is Solovay $\Sigma^1_1$-generic and suppose $P(X)$.
Then there exists a $\Sigma^1_1$ closed set $F$ of positive measure with $X \in F$ and $\lambda(F - P)=0$. \end{lemma} \begin{proof} If the complement of $\{X\ |\ P(X)\}$ is of measure 0 then take $F=2^\omega$. Otherwise, by Lemma~\ref{approx} we have a union of $\Sigma^1_1$ closed sets of positive measure included in the complement and equal to it up to a set of measure 0. As $X$ is Solovay $\Sigma^1_1$-generic and in $P$, there is a $\Sigma^1_1$ closed set of positive measure containing $X$ which is disjoint from the complement of $P$ up to a set of measure 0. \end{proof} We can now prove the desired theorem: \begin{theorem}\label{hard to find name} If $Y$ is Solovay $\Sigma^1_1$-generic then $\omega_1^Y=\omega_1^{ck}$. \end{theorem} \begin{proof} Suppose that $Y$ is Solovay $\Sigma^1_1$-generic. For any Turing functional $\varphi$, consider the set: $$P=\{X\ |\ \forall n\ \ \exists \alpha < \omega_1^{ck}\ \ \varphi^X(n) \in \mathcal{O}^X_{\alpha}\}$$ Let $P_n=\{X\ |\ \exists \alpha < \omega_1^{ck}\ \ \varphi^X(n) \in \mathcal{O}^X_{\alpha}\}$ and $P_{\alpha, n}=\{X\ |\ \varphi^X(n) \in \mathcal{O}^X_{\alpha}\}$, so $P=\bigcap_n P_n$ and $P_n=\bigcup_{\alpha < \omega_1^{ck}}P_{\alpha, n}$. Note that $P_{\alpha, n}$ is $\Delta^1_1$ uniformly in $n$ and $\alpha$.\\ Suppose that $Y$ is in $P$. As $Y$ is Solovay $\Sigma^1_1$-generic, by the previous lemma it is contained in a closed set of positive measure $F$ with $\lambda(F-P)=0$. In particular for each $n$ we have $\lambda(F - P_n)=0$ and then $\lambda(F^c \cup P_n)=1$. Then for each pair $\langle n, m \rangle$ we can search for the smallest ordinal $\alpha_{n,m}$ such that: $$\lambda(F^c_{\alpha_{n,m}} \cup \bigcup_{\alpha < \alpha_{n,m}} P_{\alpha, n}) > 1 - 2^{-m}$$ where $F^c_\alpha$ is the open set $F^c$ enumerated up to stage $\alpha$. Let $\alpha^*=\sup_{n, m} \alpha_{n,m}$. By admissibility we have that $\alpha^*< \omega_1^{ck}$.
Then we have: $$ \begin{array}{rlcl} &\forall n\ \ \lambda(F^c_{\alpha^*} \cup \bigcup_{\alpha < \alpha^*} P_{\alpha, n})&=&1\\ \rightarrow&\forall n\ \ \lambda(F_{\alpha^*} \cap \bigcap_{\alpha < \alpha^*} P_{\alpha, n}^c)&=&0\\ \rightarrow&\forall n\ \ \lambda(F \cap \bigcap_{\alpha < \alpha^*} P_{\alpha, n}^c)&=&0\\ \rightarrow&\forall n\ \ \lambda(F - \bigcup_{\alpha < \alpha^*} P_{\alpha, n})&=&0\\ \rightarrow&\lambda(F - \bigcap_n \bigcup_{\alpha < \alpha^*} P_{\alpha, n})&=&0 \end{array} $$ As $Y$ is Solovay-$\Sigma^1_1$-generic it is in particular weakly-Solovay-$\Sigma^1_1$-generic and hence weakly $\Pi^1_1$-random. In particular it belongs to no $\Sigma^1_1$ set of measure 0. Then, as $F - \bigcap_n \bigcup_{\alpha < \alpha^*} P_{\alpha, n}$ is a $\Sigma^1_1$ set of measure 0, we have that $Y$ belongs to $\bigcap_n \bigcup_{\alpha < \alpha^*} P_{\alpha, n}$ and then $\sup_n \varphi^Y(n) \leq \alpha^* < \omega_1^{ck}$. \end{proof} Using the equivalence between $\Pi^1_1$-randomness and $\Delta^1_1$-randomness together with $\omega_1^X=\omega_1^{ck}$, we then have that the Solovay $\Sigma^1_1$-generics are included in the $\Pi^1_1$-randoms. All we have to do is prove the reverse inclusion. \begin{theorem}\label{theorem: monin and greenberg} The set of Solovay $\Sigma^1_1$-generics is exactly the set of $\Pi^1_1$-randoms. \end{theorem} \begin{proof} Suppose $X$ is not Solovay $\Sigma^1_1$-generic. Either $\omega^{X}_1 > \omega_1^{ck}$, and then $X$ is not $\Pi^1_1$-random. Or $\omega^{X}_1 = \omega_1^{ck}$. In this case there is a sequence of $\Sigma^1_1$ closed sets $F_n$ of positive measure such that $X$ is not in $\bigcup_n F_n$ and such that any $\Sigma^1_1$ closed set of positive measure which is disjoint from $\bigcup_n F_n$ does not contain $X$. The complement of $\bigcup_n F_n$ is a $\Pi^1_1$ set containing $X$ that we can write as an uncountable union of Borel sets.
As $\omega^{X}_1 = \omega_1^{ck}$ we have that $X$ is in the first $\omega_1^{ck}$ components of the uncountable union. Then $X$ is in a $\Delta^1_1$ set disjoint from $\bigcup_n F_n$. Since we can approximate this $\Delta^1_1$ set from below by a union of $\Sigma^1_1$ closed sets of the same measure, $X$ is in a $\Pi^1_1$ set of measure 0 and hence not $\Pi^1_1$-random. \end{proof} \begin{corollary} The set of $\Pi^1_1$-randoms is $\mathbf{\Pi^0_3}$. \end{corollary} The notion of test is interesting. For any union of $\Sigma^1_1$ closed sets $S=\bigcup_n S_n$, let us define $\tilde{S}$ as the smallest intersection of $\Pi^1_1$ open sets $O=\bigcap_n O_n$ containing it. Then a test is equal to $\tilde{S} - S$. The question of whether weakly-Solovay-$\Sigma^1_1$-generic implies Solovay-$\Sigma^1_1$-generic is now the same as the question of whether weak $\Pi^1_1$-randomness implies $\Pi^1_1$-randomness, which is still open. \section{Yu Liang: A lower bound on the Borel rank \\ of the set of $\Pi^1_1$-random reals} Input by Yu Liang in April. Let $\mathcal{S}$ be the set of $\Pi^1_1$-random reals. We will show that $\mathcal{S}$ is not $\mathbf{\Sigma}^0_2$. (This will be improved to not $\mathbf{\Sigma}^0_3$ below.) Since $\mathcal{S}$ is a dense meager set, it is not $\mathbf{\Pi}^0_2$. As noted by Chong, Nies and Yu \cite[proof of Thm 3.12]{Chong.Nies.Yu:08}, $\+ S$ is Borel: $\+ S$ is the intersection of the $\Delta^1_1$ randoms ($\PI 3$) with the sets that are low for $\omega_1^{CK}$, which is properly $\mathbf{\Pi}^0_{\omega_1^{CK}+2}$ by a result of M. Steel~\cite[end of Section 2]{Steel:78}. Since $\mathcal{S}$ is a $\Sigma^1_1$ set, there must be some recursive tree $T\subseteq 2^{<\omega}\times \omega^{<\omega}$ such that $$x\in \mathcal{S}\leftrightarrow \exists f \forall n\ (x\upharpoonright n,f\upharpoonright n)\in T.$$ Now assume for a contradiction that $\mathcal{S}$ is a $\mathbf{\Sigma}^0_2$ set.
Choose a sequence of closed sets $\{P_n\}_{n\in \omega}$ so that $\bigcup_{n\in \omega}P_n=\mathcal{S}$. Recall that the Gandy topology on Cantor space is given by the $\Sigma^1_1$ sets as a countable basis. (Note this is Polish on the set of reals that are low for $\omega_1^{CK}$.) \begin{lemma} For each $n$, the set $\mathcal{S}\setminus P_n$ is comeager in $\mathcal{S}$ in the sense of the Gandy topology. \end{lemma} \begin{proof} Fix an uncountable $\Sigma^1_1$ set $C\subseteq \mathcal{S}$. Let $\bar{C}$ be the closure (in the Cantor space sense) of $C$. Then $\bar{C}$ is a $\Sigma^1_1$ closed set. The leftmost real in $\bar{C}$ is either hyperarithmetic or hyperarithmetically equivalent to $\+ O$. So $\bar{C}$ is not a subset of $P_n$. So there must be some $\sigma\in 2^{<\omega}$ so that $[\sigma]\cap C\neq\emptyset$ but $[\sigma]\cap C\cap P_n=\emptyset$. Let $D=[\sigma]\cap C$. Then $D\subseteq C$ is an open set (in the Gandy topology sense) disjoint from $P_n$. Thus $\mathcal{S}\setminus P_n$ contains a dense open set and so must be comeager in $\mathcal{S}$. \end{proof} Since the Gandy topology has the Baire property, there must be some real $x\in \bigcap_{n\in \omega}(\mathcal{S}\setminus P_n)=\mathcal{S}\setminus \bigcup_{n\in \omega} P_n$, a contradiction. \bigskip \section{Yu: Strong $\Pi^1_1$-ML-randomness is properly $\mathbf{\Pi}^0_3$} \label{s:GB-Monin sharpness} Input by Yu in May. Strong $\Pi^1_1$-ML-randomness is the higher analog of weak 2-randomness. This was mentioned in a problem in \cite[Ch.\ 9]{Nies:book}. It is open whether this notion is the same as $\Pi^1_1$-randomness. Obviously the collection of strongly $\Pi^1_1$-ML-random reals is $\mathbf{\Pi}^0_3$. \begin{proposition} The collection of strongly $\Pi^1_1$-ML-random reals is not $\mathbf{\Sigma}^0_3$. \end{proposition} We use a forcing argument. Let $\mathbb{P}=(\mathbf{P},\leq)$ where $\mathbf{P}$ is the collection of $\Sigma^1_1$ closed sets of positive measure.
Obviously, if $g$ is sufficiently generic, then it must be strongly $\Pi^1_1$-ML-random. \begin{lemma}\label{lemma: tech for sml} For any $\Sigma^1_1$ tree $T$ with $\mu([T])>0$ having only $\Pi^1_1$-ML random reals, there is a uniformly $\Pi^1_1$ sequence of open sets $\{U_n\}_{n\in \omega}$ so that \begin{itemize} \item $\forall n\ \mu(U_n\cap [T])<2^{-n}$; and \item for any $\sigma$, if $[\sigma]\cap [T]\neq\emptyset$, then $[\sigma]\cap [T] \cap \bigcap_n U_n\neq\emptyset$. \end{itemize} \end{lemma} \begin{proof} This is like difference tests. Given $n$, for any $\sigma$, we enumerate into $U_n$, from left to right, the strings of length $2\cdot|\sigma|+n+1$ which are possibly the leftmost string of that length among those extending to a member of $[\sigma]\cap [T]$. The sequence $\{U_n\}_{n\in \omega}$ is precisely what we want. \end{proof} \begin{lemma}[Nies, see Thm. 11.7 \cite{LogicBlog:12}]\label{lemma: nies} A $\Pi^1_1$-ML random real $x$ is $\Pi^1_1$-difference random if and only if the $\Pi^1_1$ version of Chaitin's halting probability satisfies $\underline \Omega \not \leq_{\mathrm{h-T}}x$. \end{lemma} \begin{lemma}[Greenberg, Bienvenu, Monin]\label{lemma: GLM} No strongly $\Pi^1_1$-ML random real is $\mathrm{h-T}$-above $\underline \Omega$. So by Lemma~\ref{lemma: nies} every strongly $\Pi^1_1$-ML random real is $\Pi^1_1$-difference random. \end{lemma} Now, given \emph{any} sequence of open sets $\{V_n\}_{n\in \omega}$, let $\mathcal{D}_{V}=\{T\mid T\in \mathbf{P}\wedge [T]\cap \bigcap_{n\in \omega}V_n=\emptyset\}$. \begin{lemma}\label{lemma: dense dv} If $\bigcap_{n\in \omega}V_n$ only contains strongly $\Pi^1_1$-ML-random reals, then $\mathcal{D}_{V}$ is dense. \end{lemma} \begin{proof} Given any condition $T\in \mathbf{P}$, by Lemma \ref{lemma: tech for sml} there is a uniformly $\Pi^1_1$ sequence of open sets $\{U_n\}_{n\in \omega}$ as described.
Then there must be some $\sigma$ so that $[\sigma]\cap [T]\neq \emptyset$ but $[\sigma]\cap [T]\cap (\bigcap_{n\in \omega}V_n)=\emptyset$. (Otherwise, there would be a real $x\in [T]\cap (\bigcap_{n\in \omega}V_n) \cap (\bigcap_{n\in \omega}U_n)$; then $x$ would be strongly $\Pi^1_1$-ML random but not $\Pi^1_1$-difference random, contradicting Lemma \ref{lemma: GLM}.) Let $[S]=[\sigma]\cap [T]$. Then $S\leq T$ and $S\in \mathcal{D}_V$. \end{proof} Given any $\mathbf{\Sigma}^0_3$ set contained in the strongly $\Pi^1_1$-ML-random reals, if $g$ is sufficiently generic, then $g$ is a strongly $\Pi^1_1$-ML-random real not in this set. This concludes the proof of the proposition. \section[Greenberg and Monin]{Yu based on Greenberg and Monin: a lower bound on the Borel rank of the set of $\Pi^1_1$-random reals } Input by Yu in August, 2013. The result is essentially due to Greenberg and Monin; it is Greenberg who told me the result. Let $\mathbb{P}=(\mathbf{P},\leq)$ where $\mathbf{P}$ is the collection of $\Sigma^1_1$ closed sets of positive measure. \begin{lemma} Every $\mathbb{P}$-generic real $g$ is $\Pi^1_1$-random. \end{lemma} \begin{proof} By Theorem \ref{theorem: monin and greenberg}, it is sufficient to prove that $g$ is Solovay $\Sigma^1_1$-generic. Given uniformly $\Sigma^1_1$ closed sets $\{F_n\}_{n\in \omega}$ of positive measure, let $$\mathcal{D}=\{F_n\mid n\in \omega\}\cup \{F\mbox{ closed and }\Sigma^1_1 \mid F\cap (\bigcup_{n\in \omega}F_n)=\emptyset\}.$$ Obviously $\mathcal{D}$ is dense, so the lemma follows. \end{proof} Now suppose that $\{V_n\}_{n}$ is a sequence of open sets so that $\bigcap_{n\in \omega}V_n$ only contains $\Pi^1_1$ random reals. Let $$\mathcal{D}_{V}=\{T\mid T\in \mathbf{P}\wedge [T]\cap \bigcap_{n\in \omega}V_n=\emptyset\}.$$ By Lemma \ref{lemma: dense dv}, $\mathcal{D}_{V}$ is dense. In conclusion, the collection of $\Pi^1_1$-random reals is not $\mathbf{\Sigma}^0_3$.
{\bf Remark:} By the proof, for any set $\mathcal A$ of reals, if $\Pi^1_1$-random $\subseteq \mathcal A\subseteq \Pi^1_1$-difference random, then $\mathcal A$ is not $\mathbf{\Sigma}^0_3$. \section{Bienvenu, Greenberg, Monin: A $\Pi^1_1$-MLR real $X$ is not $\Pi^1_1$-random iff $X$ $hT$-computes a $\Pi^1_1$ sequence which is not $\Delta^1_1$.} The $hT$-reductions are the most general version of Turing-reductions, as defined by Bienvenu, Greenberg and Monin in LogicBlog2012. We have that if $X$ $hT$-computes a $\Pi^1_1$ sequence which is not $\Delta^1_1$, then $X$ $h$-computes $\mathcal{O}$ and is thus not $\Pi^1_1$-random, as $\omega_1^X > \omega_1^{\mathrm{CK}} \leftrightarrow X \geq_h \mathcal{O}$. So all we need to prove is the following theorem: \begin{theorem} If $X$ is $\Pi^1_1$-MLR but not $\Pi^1_1$-random, then $X$ $hT$-computes a $\Pi^1_1$ sequence which is not $\Delta^1_1$. \end{theorem} \begin{proof} Suppose that $X$ is $\Pi^1_1$-MLR but not $\Pi^1_1$-random. Then by theorem \ref{hard to find name} there is a uniform intersection of $\Pi^1_1$ open sets $\bigcap_n O_n$ so that $X \in \bigcap_n O_n$ and so that no $\Delta^1_1$ closed set $F \subseteq \bigcap_n O_n$ of positive measure contains $X$ (and thus no $\Delta^1_1$ closed set $F \subseteq \bigcap_n O_n$ contains $X$). Let $\{W_e\}_{e \in \omega}$ be an enumeration of the $\Pi^1_1$ subsets of $\omega$. We will construct a $\Pi^1_1$ sequence $A$ which is not $\Delta^1_1$ and such that $X$ can hT-compute $A$. We use the usual way to make $A$ not $\Delta^1_1$, by meeting each requirement $$R_e : W_e \mbox{ infinite } \rightarrow A \cap W_e \neq \emptyset$$ while making sure that $A$ is co-infinite.\\ In what follows, to speak of ordinal stages and finite substages in a clean way, we use the ordinal version of Euclidean division: for an ordinal $\alpha$, there is a unique pair $\langle \beta, n \rangle$ with $n < \omega$ so that $\alpha=\omega \times \beta + n$.
Furthermore one can uniformly find $\beta$ and $n$ from $\alpha$ (a simple search within the ordinals smaller than $\alpha$). Then, the stage $\omega \times \beta + n$ should be understood as substage $n$ of stage $\beta$.\\ \noindent \textbf{Construction of $A$}:\\ \\ First, for each $e$ let $b_e$ be a boolean initialized to `false'. At stage $\gamma=\omega \times \alpha + \langle e, m, k \rangle$ (that is, at stage $\alpha$, substage $\langle e, m, k \rangle$), if $b_e$ is marked `true', go to the next stage (next substage); otherwise, if $m \in W_e[\alpha]$ with $m>2e$, then consider the $\Delta^1_1$ set $\bigcap_n O_n[\alpha]$ and compute an increasing union of $\Delta^1_1$ closed sets $\bigcup_n F_n$ with $\bigcup_n F_n \subseteq \bigcap_n O_n[\alpha]$ and $\lambda(\bigcup_n F_n)=\lambda(\bigcap_n O_n[\alpha])$. \\ If $\lambda(F_k^c \cap O_m[\gamma]) \leq 2^{-e}$ then enumerate $m$ into $A$ at stage $\gamma$, mark $b_e$ as `true' and let $U_{\langle m, e \rangle}=F_k^c \cap O_m[\gamma]$ (the $U_{\langle m, e \rangle}$ are intended to form a higher Solovay test).\\ \noindent \textbf{Verification that $A$ is not $\Delta^1_1$}:\\ \\ $A$ is co-infinite because for each $e$ at most one $m$ is enumerated into $A$, and this $m$ is bigger than $2e$. Now suppose that $W_e$ is infinite. There exists then $\alpha<\omega_1^{\mathrm{CK}}$ so that $W_e[\alpha]$ is infinite (otherwise the function which to $n$ associates the first ordinal stage at which the $n$-th element enters $W_e$ would have its range cofinal in $\omega_1^{\mathrm{CK}}$, which is not possible). Then there exists $\beta>\alpha$ so that $\lambda(\bigcap_n O_n - \bigcap_n O_n[\beta]) < 2^{-e}$. Thus there is a $\Delta^1_1$ closed set $F_k \subseteq \bigcap_n O_n[\beta]$ so that $\lambda(\bigcap_n O_n - F_k) < 2^{-e}$. Then there exists $a$ such that for all $b \geq a$ we have $\lambda(O_b - F_k) < 2^{-e}$ and in particular $\lambda(O_b[\omega \times \beta + \omega] - F_k) < 2^{-e}$.
But as $W_e[\alpha]$ is already infinite, we have for some $m \in W_e[\beta]$ with $m>2e$ that $\lambda(O_m[\omega \times \beta + \langle e, m, k \rangle] - F_k) < 2^{-e}$, and then at stage $\omega \times \beta + \langle e, m, k\rangle$ the integer $m$ is enumerated into $A$, if $R_e$ is not met yet.\\ \noindent \textbf{Verification that $\{U_{\langle m, e \rangle}\}_{m,e \in \omega}$ is a higher Solovay test}:\\ \\ We put an open set in the Solovay test only when $R_e$ is `actively' met, and this open set has measure smaller than $2^{-e}$. As each $R_e$ is `actively' met at most once, we have a Solovay test.\\ \noindent \textbf{Computation of $A$ from $X$}:\\ \\ Note that here we just describe the algorithm to compute $A$ from $X$; the verification that the algorithm works as expected is given below. Let $p$ be the smallest integer so that for any $m>p$ and any $e$, $X$ is in no $U_{\langle m, e \rangle}$. To decide whether $m>p$ is in $A$, look for the smallest $\alpha$ such that $X \in O_m[\alpha]$. Then decide that $m$ is in $A$ iff $m$ is in $A[\alpha]$.\\ \noindent \textbf{Verification that $X$ computes $A$}:\\ \\ Let $p$ be the smallest integer so that for any $m>p$ and any $e$, $X$ is in no $U_{\langle m, e \rangle}$. Suppose for $m>p$ that $X \in O_m[\alpha]$ and $m \notin A[\alpha]$. Suppose also that at a later stage $\gamma=\omega \times \beta + \langle e, m, k \rangle >\alpha$, the integer $m$ is enumerated into $A$. By construction, this means we have $\lambda(O_m[\gamma] - F_k) < 2^{-e}$ for some $\Delta^1_1$ closed set $F_k \subseteq \bigcap_n O_n$ and that $U_{\langle m, e \rangle}=O_m[\gamma] - F_k$ (note that $U_{\langle m, e \rangle}$ cannot be replaced later because of a different $k$, as $R_e$ is now met). As $X$ does not belong to $U_{\langle m, e \rangle}$ and does not belong to $F_k$, it does not belong to $O_m[\gamma]$, which contradicts the fact that it belongs to $O_m[\alpha] \subseteq O_m[\gamma]$.
\end{proof} \section{Bienvenu, Greenberg, Monin: For any $n\geq 4$ we have $\Pi^1_1$-random $\leftrightarrow P^0_n(\Pi^1_1)$-random} We say that a set is $P^0_2(\Pi^1_1)$ if it is equal to $\bigcap_n O_n$ where each $O_n$ is a $\Pi^1_1$ open set uniformly in $n$. The $P^0_3(\Pi^1_1)$ sets are those of the form $\bigcap_m \bigcup_n F_{n,m}$ where each $F_{n,m}$ is a $\Sigma^1_1$ closed set uniformly in $n$ and $m$. The $P^0_n(\Pi^1_1)$ sets and $S^0_n(\Pi^1_1)$ sets are then defined for any $n \in \omega$, following the same logic.\\ Let $\mathcal{O}_1$ be a $\Pi^1_1$ set of unique computable ordinal notations. For $o \in \mathcal{O}_1$ we denote by $|o|$ the corresponding ordinal. We will consider in this section an extension of the notion of functionals, which seems better adapted to work in the higher world. Some recent work of Bienvenu, Greenberg and Monin, still unpublished, shows that we can have for some $X,Y$ that $X \geq_{hT} Y$, but with no computation of $Y$ from $X$ via a functional which is consistent everywhere. This is why we decide here to treat inconsistency as something `normal' by defining $\Pi^1_1$ ``relationals'', which are intended to be to $\Pi^1_1$ relations what $\Sigma^0_1$ functionals are to $\Sigma^0_1$ functions.\\ A $\Pi^1_1$ relational $\varphi_P$ is given by a $\Pi^1_1$ predicate $P \subseteq 2^{<\omega} \times \omega \times \mathcal{O}_1$. We write $\varphi_P^X(n) \downarrow$ for the predicate $\exists o\ \exists \sigma \prec X\ P(\sigma, n, o)$. If $p \in \omega$ we write $\varphi_P^X(n) \downarrow = p$ for $\exists \sigma \prec X\ P(\sigma, n, p)$. Note that we can have distinct values $p_1, p_2 \in \mathcal{O}_1$ so that $\varphi_P^X(n) \downarrow = p_1$ and $\varphi_P^X(n) \downarrow = p_2$. We write $\ensuremath{\mathrm{dom}} \varphi_P$ for the set of $X$ such that every $n$ is in relation with at least one $p$: $\{X\ |\ \forall n\ \exists p\ \varphi_P^X(n) \downarrow = p\}$.
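As a purely illustrative sketch (our own, not from the text), the definition of a relational can be modelled on finite data: $P$ becomes a finite set of triples $(\sigma, n, o)$, a real $X$ is truncated to a finite string, and the possibly many values of $\varphi_P^X(n)$ are collected as a set.

```python
# Toy finite model of a Pi^1_1 "relational": a set P of triples
# (sigma, n, o) with sigma a finite binary string. phi_P^X(n) may take
# several distinct values -- inconsistency is explicitly allowed.
# All concrete data below are our own hypothetical example.

P = {("0", 0, 5), ("01", 0, 7), ("1", 0, 3), ("01", 1, 2)}

def values(X, n, P):
    # all o such that some sigma, a prefix of X, satisfies (sigma, n, o) in P
    return {o for (sigma, m, o) in P if m == n and X.startswith(sigma)}

def in_dom(X, ns, P):
    # X is in dom(phi_P), restricted to the questions in ns,
    # iff every n gets at least one value
    return all(values(X, n, P) for n in ns)

assert values("010", 0, P) == {5, 7}   # two distinct values: inconsistent
assert values("110", 0, P) == {3}      # a single value here
assert in_dom("010", [0, 1], P)
assert not in_dom("110", [0, 1], P)    # no value at all for n = 1
```

The point of the model is only the multi-valuedness: unlike a functional, a relational never has to resolve two conflicting outputs for the same input.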
\begin{fact} \label{fact} Each $\Pi^1_1$ relational $\varphi_P$ corresponds to the higher $\Pi^0_2$ set $\ensuremath{\mathrm{dom}} \varphi_P$. Conversely each higher $\Pi^0_2$ set $\bigcap_n O_n$ corresponds to the $\Pi^1_1$ relational $\varphi_P^X(n)\downarrow=p \leftrightarrow p \in \mathcal{O}_1 \wedge \exists \sigma \prec X\ \sigma \in O_n[|p|]$. \end{fact} \begin{lemma} \label{relationals} If $\omega_1^Z > \omega_1^{\mathrm{CK}}$ and $Z$ is $\Delta^1_1$ random then there is a $\Pi^1_1$ relational $\varphi_P$ such that $Z \in \ensuremath{\mathrm{dom}} \varphi_P$ and such that $\sup_n \{\min \{|o|\ |\ \varphi^Z_P(n) \downarrow = o\}\} = \omega_1^{\mathrm{CK}}$. \end{lemma} \begin{proof} From theorem \ref{hard to find name} there is a higher $\Pi^0_2$ set $\bigcap_n O_n$ containing $Z$ so that $Z$ is in no $\Sigma^1_1$ closed set of positive measure included in $\bigcap_n O_n$ (and then in no $\Sigma^1_1$ closed set included in $\bigcap_n O_n$). Consider for each computable $\alpha$ the set $\bigcap_n O_n[\alpha]$. We can approximate $\bigcap_n O_n[\alpha]$ from below by a union of $\Delta^1_1$ closed sets of the same measure, and as $Z$ is in no $\Delta^1_1$ nullset and in no $\Delta^1_1$ closed set included in $\bigcap_n O_n[\alpha]$, $Z$ cannot be in $\bigcap_n O_n[\alpha]$. This implies that for $\varphi_P$ defined from $\bigcap_n O_n$ as in fact \ref{fact}, we have $\sup_n \{\min \{|o|\ |\ \varphi_P^Z(n) \downarrow = o\}\} = \omega_1^{\mathrm{CK}}$. \end{proof} \noindent We now have the following lemma: \begin{lemma}\label{plouf} From any $\Pi^1_1$ relational $\varphi_P$ one can obtain, effectively in $\varepsilon$, a $\Pi^1_1$ relational $\varphi_Q$ so that: \renewcommand{\arraystretch}{1.5} $$ \begin{array}{l} 1:\ \ \ensuremath{\mathrm{dom}}{\varphi_P}=\ensuremath{\mathrm{dom}}{\varphi_Q}\\ 2:\ \ \forall X\ \forall n\ (\exists!
o\ \varphi_P^X(n)=o) \rightarrow \varphi_Q^X(n)=o\\ 3:\ \ \forall X\ \forall n\ \min \{|o|\ |\ \varphi_P^X(n)=o\} \leq \min \{|o|\ |\ \varphi_Q^X(n)=o\}\\ 4:\ \ \lambda(\{X\ |\ \exists n\ \exists o_1 \neq o_2\ \varphi_Q^X(n)\downarrow=o_1 \wedge \varphi_Q^X(n)\downarrow=o_2\}) \leq \varepsilon \end{array} $$ \renewcommand{\arraystretch}{1} \end{lemma} \begin{proof} \textbf{The construction}:\\ \\ In what follows, to speak of ordinal stages and finite substages in a clean way, we use the ordinal version of Euclidean division: for an ordinal $\alpha$, there is a unique pair $\langle \beta, n \rangle$ with $n<\omega$ so that $\alpha=\omega \times \beta + n$. Furthermore one can uniformly find $\beta$ and $n$ from $\alpha$ (a simple search within the ordinals smaller than $\alpha$). Then, the stage $\omega \times \beta + n$ should be understood as substage $n$ of stage $\beta$.\\ Take a computable sequence of rationals $\varepsilon_n$ so that $\sum_n \varepsilon_n \leq \varepsilon$. For each $n$ and uniformly in $n$ we do the following:\\ At stage $\gamma=\omega \times \alpha + \langle \sigma, o \rangle$ let $A_{\gamma}=\bigcup \{[\tau]\ |\ \exists \beta < \gamma\ \varphi_Q^\tau(n)[\beta]\downarrow\}$. If $\varphi_P^\sigma(n)[\alpha]\downarrow=o$, we effectively find a clopen set $B_{\gamma}=\bigcup_{i<m} [\tau_i] \subseteq [\sigma]$ so that $B_{\gamma} \cup A_{\gamma}$ covers $[\sigma]$ and such that $\lambda (B_{\gamma} \cap A_{\gamma}) < \varepsilon_n 2^{-p(\gamma)}$, where $p$ is an injection from $\omega_1^{\mathrm{CK}}$ to $\omega$. We then set $\varphi_Q^{\tau_i}(n)[\gamma]=o$ for each of the $\tau_i$ with $B_{\gamma}=\bigcup_{i<m} [\tau_i]$.\\ \\ \textbf{Verification}: \\ \begin{enumerate} \item Let us prove $\ensuremath{\mathrm{dom}}{\varphi_Q} \subseteq \ensuremath{\mathrm{dom}}{\varphi_P}$. Suppose that $\varphi_Q^X(n)\downarrow$.
Then by construction we have a stage $\gamma$ with $X$ in a clopen set $B_{\gamma} \subseteq [\sigma]$ with $\varphi_P^\sigma(n)\downarrow$. Then we also have $\varphi_P^X(n)\downarrow$, which gives us $\ensuremath{\mathrm{dom}}{\varphi_Q} \subseteq \ensuremath{\mathrm{dom}}{\varphi_P}$. For the other inclusion, suppose that $\varphi_P^\sigma(n)\downarrow$ with $\sigma \prec X$. Then by construction we have a stage $\gamma$ and an open set $B_\gamma \cup A_\gamma$ covering $[\sigma]$ with $\varphi_Q^Y(n)[\gamma]\downarrow$ for any $Y$ in $B_\gamma \cup A_\gamma$. Then we have that $\varphi_Q^X(n)\downarrow$, and then $\ensuremath{\mathrm{dom}}{\varphi_P} \subseteq \ensuremath{\mathrm{dom}}{\varphi_Q}$. \item Suppose that $\exists! o\ \varphi_P^X(n)=o$. Consider the smallest $\gamma = \omega \times \alpha + \langle \sigma, o \rangle$ so that $\sigma \prec X$ and $\varphi_P^\sigma(n)[\alpha]\downarrow=o$. By hypothesis $\{\tau\ |\ \exists \beta < \gamma\ \varphi_Q^\tau(n)[\beta]\downarrow\}=\emptyset$, and then $\varphi_Q^\sigma(n)[\gamma]\downarrow=o$. \item This is true because the images of $n$ via $\varphi_Q^X$ are a subset of the images of $n$ via $\varphi_P^X$. \item For a given $n$ we have that $\{X\ |\ \exists o_1 \neq o_2\ \varphi_Q^X(n)\downarrow=o_1 \wedge \varphi_Q^X(n)\downarrow=o_2\}$ is included in $\bigcup_\gamma B_\gamma \cap A_\gamma$. Also we have $\lambda(\bigcup_\gamma B_\gamma \cap A_\gamma) \leq \sum_\gamma \varepsilon_n 2^{-p(\gamma)} \leq \varepsilon_n$. \end{enumerate} \end{proof} \begin{lemma}\label{good} If $\omega_1^Z > \omega_1^{\mathrm{CK}}$ and $Z$ is $\Pi^1_1$-ML-random then there is a $\Pi^1_1$ relational $\varphi_P$ such that $Z \in \ensuremath{\mathrm{dom}} \varphi_P$, such that $\forall n\ \exists!o\ \varphi_P^Z(n)=o$ and such that $\sup_n |\varphi_P^Z(n)| = \omega_1^{\mathrm{CK}}$.
\end{lemma} \begin{proof} Suppose that $\omega_1^Z > \omega_1^{\mathrm{CK}}$ and $Z$ is $\Pi^1_1$-ML-random, hence in particular $\Delta^1_1$ random. From lemma \ref{relationals} we have $\varphi_P$ such that $Z \in \ensuremath{\mathrm{dom}} \varphi_P$ and such that $\sup_n \{\min \{|o|\ |\ \varphi_P^Z(n) \downarrow = o\}\} = \omega_1^{\mathrm{CK}}$. But from lemma \ref{plouf} one can obtain uniformly in $\varepsilon$ a relational $\varphi_Q$ with $\ensuremath{\mathrm{dom}} \varphi_Q=\ensuremath{\mathrm{dom}} \varphi_P$ and so that the $\Pi^1_1$ open set: $$\{X\ |\ \exists n\ \exists o_1 \neq o_2\ \varphi_Q^X(n)\downarrow=o_1 \wedge \varphi_Q^X(n)\downarrow=o_2\}$$ has measure smaller than $\varepsilon$. Since $Z$ is $\Pi^1_1$-ML random, there exists a relational $\varphi_Q$ with $Z$ in its domain, which is functional on $Z$ and (using 3 of lemma \ref{plouf}) such that $\sup_n |\varphi_Q^Z(n)| = \omega_1^{\mathrm{CK}}$. \end{proof} We now assume that we have $Z$ $\Pi^1_1$-ML-random with $\omega_1^Z > \omega_1^{\mathrm{CK}}$ and that we have a relational $\varphi_P$ with the properties of lemma \ref{good}. In order to put $Z$ in a $P^0_2(\Pi^1_1)$ nullset, we would like $\ensuremath{\mathrm{dom}} \varphi_P$ to have measure 0. In order to do so we would like $\ensuremath{\mathrm{dom}} \varphi_P$ to contain no $X$ with $\omega_1^X = \omega_1^{\mathrm{CK}}$. This is what we are trying to achieve now. Note that we eventually won't be able to put $Z$ in a $P^0_2(\Pi^1_1)$ nullset, but only in a $P^0_4(\Pi^1_1)$ nullset.\\ For any $e \in \omega$ we define $R_e \subseteq \omega \times \omega$ by $R_e(n,m) \leftrightarrow \langle n, m \rangle \in W_e$. We then define $R_e \rest k$ by $R_e \rest k (n,m) \leftrightarrow \langle m, k \rangle \in W_e \wedge R_e(n, m)$; for a linear order this is the restriction of $R_e$ to the $R_e$-predecessors of $k$. Note that $R_e \rest k$ is well defined for any $e$.
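As a concrete illustration of this coding (our own sketch, not from the text: the pairing function and the finite set $W$ below are hypothetical choices), one can implement $R_e$ and $R_e \rest k$ over a set of coded pairs:

```python
# The relation R_e coded by a set W of Cantor pairs <n, m>, and its
# restriction R|k to the R-predecessors of k, as in the text.
# The example W (coding the linear order 0 < 1 < 2) is our own choice.

def pair(n, m):
    # standard Cantor pairing <n, m>
    return (n + m) * (n + m + 1) // 2 + m

W = {pair(n, m) for n in range(3) for m in range(3) if n < m}

def R(n, m):
    # R(n, m) <-> <n, m> in W
    return pair(n, m) in W

def R_restrict(k):
    # R|k (n, m) <-> <m, k> in W  and  R(n, m)
    return lambda n, m: pair(m, k) in W and R(n, m)

assert R(0, 1)                  # 0 < 1
assert R_restrict(2)(0, 1)      # 1 is a predecessor of 2, and 0 < 1
assert not R_restrict(2)(1, 2)  # 2 is not a predecessor of itself
```

The restriction keeps exactly the part of the order lying strictly below $k$, which is what the well-foundedness tests $C_1, C_2$ below probe.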
Also, in what follows, a morphism from a relation $R_a$ to another relation $R_b$ is a function $f$ total on $\ensuremath{\mathrm{dom}}{R_a}$, with $f(\ensuremath{\mathrm{dom}}{R_a}) \subseteq \ensuremath{\mathrm{dom}}{R_b}$ and $R_a(x, y) \rightarrow R_b(f(x), f(y))$. Let us consider the two following predicates on $2^\omega \times \omega$: \renewcommand{\arraystretch}{1.5} $$ \begin{array}{rcl} C_1(X,e)&\leftrightarrow&\exists n\ \exists o_n\ \varphi^X_P(n)\downarrow = o_n \wedge \forall f\ f \mbox{ is not a morphism from } R_{o_n} \mbox{ to } R_e\\ C_2(X,e)&\leftrightarrow&\exists m\ \forall n\ \exists o_n\ \varphi^X_P(n)\downarrow = o_n \wedge \forall f\ f \mbox{ is not a morphism from } R_e \rest m \mbox{ to } R_{o_n} \end{array} $$ \renewcommand{\arraystretch}{1} We will now join them into one predicate. Let us define $G$ to be the $\Pi^0_2$ set of $e$ so that $R_e$ is a linear order of $\omega$. We then define: $$C(X)\leftrightarrow X \in \ensuremath{\mathrm{dom}}\varphi_P \wedge (\forall e \in G\ \ C_1(X,e) \vee C_2(X,e))$$ Let us first make sure that $\{X\ |\ C(X)\}$ is $P^0_4(\Pi^1_1)$. We have that $\ensuremath{\mathrm{dom}} \varphi_P$ is $P^0_2(\Pi^1_1)$, $\{X\ |\ C_1(X,e)\}$ is $S^0_1(\Pi^1_1)$ uniformly in $e$, and the set $\{X\ |\ C_2(X,e)\}$ is $S^0_3(\Pi^1_1)$ uniformly in $e$. Then the set $\ensuremath{\mathrm{dom}} \varphi_P \cap (\{X\ |\ C_1(X,e)\} \cup \{X\ |\ C_2(X,e)\})$ is $S^0_3(\Pi^1_1)$ uniformly in $e$. As $G$ has a $\Pi^0_2$ description, we have that $\{X\ |\ C(X)\}$ is a $P^0_4(\Pi^1_1)$ set. The goal is now to prove that $Z \in C$ and that for all $X \in \ensuremath{\mathrm{dom}} \varphi_P$, if $\varphi_P$ is functional on $X$ and $\omega_1^X=\omega_1^{\mathrm{CK}}$ then $X \notin C$. Let us first prove that $Z \in C$. By hypothesis we have $Z \in \ensuremath{\mathrm{dom}}\varphi_P$. Take any $e \in G$. Suppose that $R_e$ is a well-founded relation. As $e$ is already in $G$, we have that $R_e$ is a well-order.
Then $|R_e| < \omega_1^{\mathrm{CK}}$. But then there is some $n$ so that $|\varphi^Z_P(n)| > |R_e|$, and we cannot have a morphism from $R_{\varphi^Z_P(n)}$ to $R_e$. Then $C_1(Z, e)$ is true. Suppose now that $R_e$ is an ill-founded relation. There is then some $m$ so that $R_e \rest m$ is already ill-founded. But as $R_{\varphi^Z_P(n)}$ is well-founded for every $n$, for every $n$ we cannot have a morphism from $R_e \rest m$ to $R_{\varphi^Z_P(n)}$. Then $C_2(Z, e)$ is true. Let us now prove that for all $X \in \ensuremath{\mathrm{dom}} \varphi_P$, if $\varphi_P$ is functional on $X$ and $\omega_1^X=\omega_1^{\mathrm{CK}}$ then $X \notin C$. Take any $Y \in \ensuremath{\mathrm{dom}} \varphi_P$ so that $\varphi_P$ is functional on $Y$ and $\omega_1^Y=\omega_1^{\mathrm{CK}}$. Then $\sup_n |\varphi^Y_P(n)| = \alpha < \omega_1^{\mathrm{CK}}$. Thus there exists some code $e \in G$ so that $R_e$ is a well-order of order-type $\alpha$. For this $e$ we certainly have for all $n$ a morphism from $R_{\varphi^Y_P(n)}$ into $R_e$. Then we do not have $C_1(Y, e)$. Let us now prove that we do not have $C_2(Y, e)$. For any $m$ we have $|R_e \rest m| < \alpha$ (even if $\alpha$ is a successor). But because $\alpha=\sup_n |\varphi^Y_P(n)|$ there is necessarily some $n$ so that $|\varphi^Y_P(n)|>|R_e \rest m|$. Thus there is a morphism from $R_e \rest m$ into $R_{\varphi^Y_P(n)}$. Then we do not have $C_2(Y, e)$, and then we do not have $C(Y)$. \noindent Thus the measure of $\{X\ |\ C(X)\}$ is bounded by the measure of $$H=\{X \in \ensuremath{\mathrm{dom}} \varphi_P\ |\ \varphi_P \mbox{ is not functional on } X\}$$ But we can obtain uniformly in $\varepsilon$ some predicate $C_\varepsilon(X)$ with $C_\varepsilon(Z)$ and with the measure of $H$ bounded by $\varepsilon$. As each $C_\varepsilon$ is $P^0_4(\Pi^1_1)$ uniformly in $\varepsilon$, we have that $\bigcap_\varepsilon C_\varepsilon$ is a $P^0_4(\Pi^1_1)$ set of measure 0 containing $Z$.
Thus $P^0_4(\Pi^1_1)$ nullsets are enough to capture anything that a $\Pi^1_1$ nullset can capture. In particular the $P^0_n(\Pi^1_1)$ sets do not capture anything more for $n > 4$. \newpage \part{Reverse Mathematics} \section{Nies: The strength of Jordan decomposition \\ for functions of bounded variation} \label{s:GMNS reverse} Written by Nies based on work with N.\ Greenberg, J.S.\ Miller, and T.\ Slaman. All real-valued functions will have domain $[0,1]$ unless otherwise mentioned. Variables $f,g,h$ denote functions. Recall that for a function $f$, one defines the variation function $V_f$ by $$V_f(x)= \sup \sum_{i=1}^{n-1} | f(t_{i+1}) - f(t_i)|,$$ where the sup is taken over all collections $ t_1 \le t_2 \le \ldots \le t_n$ in $[0,x]$. One says that $f$ is of bounded variation if $V_f(1)< \infty$. The Jordan decomposition theorem states that every function $f$ of bounded variation can be written in the form $f = g-h$ where $g,h$ are nondecreasing. Moreover, if $f$ is continuous, we can ensure that $g,h$ are continuous as well. This is easily proved: let $g = V_f$, and observe that $h=g-f$ is nondecreasing. So we have a Jordan decomposition $f = g- (g-f)$. Jordan proved this theorem in his lectures at the \'Ecole Polytechnique in 1882--87. Today it is often treated in a first course on real analysis. Its strength in the sense of reverse mathematics is not obvious. We will see that, depending on whether we want $g,h$ to be continuous or not, the theorem gives rise to a principle equivalent to $\mathtt{ACA_0}$ or to $\mathtt{WKL_0}$. Let us say that \begin{center} $f \le_{\mathtt{slope}} g$ $:\Leftrightarrow$ $\forall x< y \, [ f(y) -f(x) \le g(y)- g(x)]$. \end{center} That is, the slopes of $g$ are at least as large as the slopes of $f$. Clearly, this is equivalent to saying that $h:= g-f$ is nondecreasing. Thus, the problem of finding a Jordan decomposition of $f$ is equivalent to finding a nondecreasing function $g$ with $f \le_{\mathtt{slope}}g $.
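To make the easy direction concrete (a sketch of our own, not from the text: $f$ is sampled at finitely many grid points, so the sup defining $V_f$ is approximated by the sum over consecutive samples), one can compute the grid variation and check that $g = V_f$ and $h = V_f - f$ are nondecreasing:

```python
# Sketch: grid approximation of V_f and the decomposition f = V_f - (V_f - f).

def variation(fs):
    # fs: samples f(t_0), ..., f(t_n) at increasing grid points;
    # returns the approximation of V_f at each grid point.
    V = [0.0]
    for a, b in zip(fs, fs[1:]):
        V.append(V[-1] + abs(b - a))
    return V

def jordan(fs):
    # Jordan decomposition f = g - h with g = V_f and h = V_f - f.
    V = variation(fs)
    return V, [v - f for v, f in zip(V, fs)]

fs = [0.0, 1.0, 0.5, 1.5]            # a zig-zag sample of f
g, h = jordan(fs)
assert all(x <= y for x, y in zip(g, g[1:]))   # g nondecreasing
assert all(x <= y for x, y in zip(h, h[1:]))   # h nondecreasing
assert [x - y for x, y in zip(g, h)] == fs     # f = g - h
```

That $h$ is nondecreasing here is exactly the inequality $|f(y)-f(x)| \le V_f(y)-V_f(x)$, i.e. $f \le_{\mathtt{slope}} V_f$.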
This was already pointed out in \cite{Rettinger.Zheng:04}. (Sometimes one of the functions is partial; then we only look at slopes in the domain.) \subsection{Jordan decomposition via continuous functions} We first work in the usual setting of reverse mathematics, which only deals with continuous functions, suitably encoded, for instance, by the values at rationals together with a modulus of uniform continuity (see e.g.\ Simpson's book II.6). It is equivalent (over $\mathtt{RCA_0}$) to describe $f$ as the limit of a Cauchy name $\seq{p_s}$ with respect to the sup norm, where the $p_s$ are polygonal functions with rational breakpoints. Thus $||p_t- p_s||_{sup} \le \tp{-s}$ for $t> s$. The principle $\mathtt{Jordan_{cont}}$ says that for each (continuous) function $f$ of bounded variation, there are continuous nondecreasing functions $g,h$ such that $f= g-h$. In Proposition~\ref{prop:ACA} and Theorem~\ref{thm:WKL} below, the technique and some of the writing involving reverse mathematics were provided by Keita Yokoyama. \begin{proposition}\label{prop:ACA} Over $\mathtt{RCA_0}$, we have \begin{center} $\mathtt{Jordan_{cont}} \leftrightarrow \mathtt{ACA_0}$. \end{center} \end{proposition} \begin{proof} $\leftarrow$: given $f$, from the jump of a representation of $f$ as a continuous function we can compute a representation of $V_f$ as a continuous function. This works in $\mathtt{ACA_0}$. \noindent $\rightarrow$: Suppose we are given a model of $\mathtt{Jordan_{cont}}$. Let \begin{center} $q_n = 1-\tp{-n}$, and $q_{n,s} = q_n - \tp{-n-s-1}$. \end{center} Instead of proving $\mathtt{ACA_0}$, we will show the existence of the range of any one-to-one function on ${\mathbb{N}}$ within $\mathtt{RCA_0}$. (See \cite[Lemma~III.1.3]{Simpson:09}.)
Let $h:{\mathbb{N}}\to{\mathbb{N}}$ be a one-to-one function. Define continuous functions $f_{s}:[0,1]\to{\mathbb{R}}$ as follows: define $f_{s}$ on $[q_{n,k},q_{n,k+1}]$ to be a sawtooth function of height $2^{-k}$ with $2^{k-n}$ many teeth if $k\le s$ and $h(k)=n$, and $f_{s}=0$ otherwise. Then the limit $f=\lim_{s\to\infty}f_{s}$ exists. Let $f\le_{\mathtt{slope}}g$. Take a function $\eta:{\mathbb{N}}\to{\mathbb{N}}$ so that $g(q_{n})-g(q_{n,\eta(n)})<2^{-n}$. This can be done by the following argument: since $g$ is continuous and $\lim_{k\to\infty}q_{n,k}=q_{n}$, we have \begin{center} $\forall n \exists k\ \theta(n,k)\equiv\exists m\theta_{0}(n,m,k)$ \end{center} where $\theta(n,k)$ is a $\Sigma^{0}_{1}$-formula which expresses $g(q_{n})-g(q_{n,k})<2^{-n}$, and $\theta_{0}(n,m,k)$ is a $\Sigma^{0}_{0}$-formula. Define $\eta_{0}(n)$ to be the least $\langle m,k\rangle$ such that $\theta_{0}(n,m,k)$ holds, where $\langle\cdot,\cdot\rangle$ is a standard pairing function, and put $\eta(n)=(\eta_{0}(n))_{1}$. Now, if $h(k)=n$, then $g(q_{n})-g(q_{n,k})\ge g(q_{n,k+1})-g(q_{n,k})\ge 2^{-n}$, and hence $k<\eta(n)$. Thus $n\in \mathrm{rng}(h)\Leftrightarrow\exists k<\eta(n)\ h(k)=n$, which means that the range of $h$ exists by $\Delta^{0}_{1}$-comprehension. \end{proof} \begin{cor} Over $\mathtt{RCA_0}$, the statement that every (continuous) function $f$ of bounded variation has a variation function is equivalent to $\mathtt{ACA_0}$. \end{cor} \begin{proof} Given a one-to-one function $h$, let $f$ be the function constructed above. If $V_f$ exists then $V_f- (V_f-f)$ is a continuous Jordan decomposition, so the range of $h$ exists as before. \end{proof} \subsection{Nies, Yokoyama: Jordan decomposition without continuity} A weaker principle is obtained if we do not require continuity of $g,h$ in a Jordan decomposition $f = g-h$. We only require that $g,h$ are defined on $I_{\mathbb{Q}}:={\mathbb{Q}} \cap [0,1]$. An ${\mathbb{R}}$-valued function $g$ defined on $I_{\mathbb{Q}}$ is given by a path $Z_g$ through a binary tree.
Let $\langle p_n, q_n \rangle$ be a list of all pairs of rationals $\langle p, q \rangle$ with $0 \le p \le 1$. We let $Z_g(2n)= 1$ iff $g(p_n) < q_n$. We let $Z_g(2n+1)= 1$ iff $g(p_n) > q_n$. We often identify $g$ and $Z_g$. It is clear that the nondecreasing functions form a $\PI{1}$ class. The principle $\mathtt{Jordan_{{\mathbb{Q}}}}$ says that for each (necessarily continuous, by the encoding) function $f$ of bounded variation, there are nondecreasing functions $g,h$ defined on $I_{\mathbb{Q}}$ such that $f= g-h$ on $I_{\mathbb{Q}}$. Letting $\widehat g(x) = \sup\{ g(q) \mid \, q \le x \, \land \, q \in I_{\mathbb{Q}}\}$, we obtain an actual Jordan decomposition because $f \le_\mathtt{slope} \widehat g$. We first prove a purely computability-theoretic result. Yokoyama has given the extension to reverse mathematics; see below. \begin{theorem}[Greenberg, Miller, Nies, Slaman, 2013] \label{thm:PA} An oracle $B$ is PA-complete $\leftrightarrow$ for each computable function $f$ on $[0,1]$ of bounded variation, $B$ computes a function $g\colon \, I_{\mathbb{Q}} \to {\mathbb{R}}$ with $f \le_\mathtt{slope} g$. \end{theorem} \begin{proof} $\leftarrow$: Given $f$, via the encoding above, the functions $g$ defined on $I_{\mathbb{Q}}$ with $f \le_\mathtt{slope} g$ form a nonempty $\PI{1}(f)$ class. Then $B$ computes a member of this class. \noindent $\rightarrow$: We define a computable function $f$ of bounded variation on $[0,1]$ such that each function $g\colon \, I_{\mathbb{Q}} \to {\mathbb{R}}$ with $f \le_\mathtt{slope} g$ has PA degree. Let $\+ P \subseteq \seqcantor$ be a nonempty $\PI{1}$ class of sets of PA degree, such as the (binary encoded) completions of Peano arithmetic. As usual $\+ P_s$ is a clopen set computable from $s$ approximating $\+ P$ at stage $s$. So $\+ P = \bigcap_s \+ P_s$. By standard methods there is a computable prefix-free sequence $\seq {\sigma_s} \sN s$ of strings of length $s$ such that $[\sigma_s] \cap \+ P_s \neq \emptyset$ for each $s$.
Given $\sigma \in 2^{ < \omega}$, let $I_\sigma = [0.\sigma, 0.\sigma + \tp{-\ensuremath{|\sigma|}}]$ be the corresponding closed subinterval of $[0,1]$. By stage $s$ we determine $f$ up to a precision of $\tp{-s}$. Suppose $n$ enters $\emptyset'$ at stage $s$. Let $\sigma = \sigma_s$. We define $f$ on $I_\sigma$ to be a sawtooth function of height $\tp{-s}$ with $\tp{s-n}$ many teeth. It is clear that this adds at most $\tp {-n+1}$ to the variation of $f$. So $f$ is of bounded variation. (It is in fact absolutely continuous, since it can be written as an integral.) Now suppose $g\colon \, I_{\mathbb{Q}} \to {\mathbb{R}}$ is a function such that $f \le_\mathtt{slope} g$. As before, for $x \in [0,1]$ let $\widehat g(x) = \sup\{ g(q) \mid \, q \le x \, \land \, q \in I_{\mathbb{Q}}\}$. \noindent {\it Case 1.} $\widehat g$ is discontinuous at the real $y= 0.Y$ for some $Y \in \+ P$. Then $Y \le_{\mathrm{T}} g$, so $g$ is of PA degree. (To see this, fix a rational $r$ with $\widehat g( y) < r < g^+(y)$. Then $p< y \leftrightarrow g(p) < r$, and $p > y \leftrightarrow g(p) > r$.) \vspace{6pt} \noindent {\it Case 2.} Otherwise. Then $\emptyset' \le_{\mathrm{T}} g$: given $n$, using $g$ compute a stage $s$ such that for each $\sigma$ of length $s$ with $[\sigma] \cap \+ P_s \neq \emptyset$, we have $g(\max I_\sigma) - g(\min I_\sigma ) < \tp{-n}$. This $s$ exists by the case assumption, using compactness of Cantor space. (This part of the argument cannot be adapted to reverse mathematics.) Then as before we have $n \in \emptyset' \leftrightarrow n \in \emptyset'_s$. \end{proof} Yokoyama, starting from the proof of Theorem~\ref{thm:PA} above, has provided a proof that works in reverse mathematics. \begin{theorem} \label{thm:WKL} Over $\mathtt{RCA_0}$, we have \begin{center} $\mathtt{Jordan_{\mathbb{Q}}} \leftrightarrow \mathtt{WKL_0}$. \end{center} \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:WKL}.] $\leftarrow$: the original proof can be carried out within $\mathsf{WKL_0}$.
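As an aside, the bookkeeping for the sawtooth gadget in the proof of Theorem~\ref{thm:PA} is easy to check by machine. The following Python sketch (the helper name is ours, purely for illustration) evaluates the total variation of a sawtooth with $t$ teeth of height $h$ from its linear pieces, and confirms that the choice $t = \tp{s-n}$, $h = \tp{-s}$ contributes exactly $\tp{-n+1}$, matching the bound used above.

```python
from fractions import Fraction

def sawtooth_variation(teeth, height):
    """Total variation of a sawtooth with `teeth` teeth of the given
    height: the graph alternates 0, height, 0, ..., so we sum the
    absolute increments over the 2*teeth linear pieces."""
    values = [height * (k % 2) for k in range(2 * teeth + 1)]
    return sum(abs(values[i + 1] - values[i]) for i in range(2 * teeth))

# gadget for n entering the halting problem at stage s:
# height 2^-s with 2^(s-n) teeth
s, n = 7, 3
var = sawtooth_variation(2 ** (s - n), Fraction(1, 2 ** s))
assert var == Fraction(1, 2 ** (n - 1))  # = 2^{-n+1}, as claimed
```

Summing these per-gadget contributions over all $n$ that enter $\emptyset'$ gives a geometric series, which is why $f$ is of bounded variation overall.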
$\rightarrow$: we reason within $\mathsf{RCA_0}$. Let $T\subseteq2^{<{\mathbb{N}}}$ be an infinite binary tree. We will show that $T$ has a path. Let $\tilde{T}=\{\tau\notin T\mid \tau\upharpoonright(|\tau|-1)\in T\}$. Without loss of generality, we may assume that $\tilde{T}$ is infinite. Define $h:{\mathbb{N}}\to2^{<{\mathbb{N}}}$ by letting $h(n)$ be (one of) the shortest leaves (dead ends) of $T\setminus\{h(k)\mid k<n\}$. (Note that $h$ can be considered as $h:{\mathbb{N}}\to{\mathbb{N}}$ by the usual coding.) Then, we can easily see that $T\setminus \mathrm{rng}(h)=\mathrm{Ext}(T)=\{\sigma\in T\mid \sigma$ has infinitely many extensions in $T\}$. Let $\langle \tilde{\sigma}_{k}\mid k\in{\mathbb{N}}\rangle$ be an enumeration of $\tilde{T}$ such that $|\tilde{\sigma}_{i}|\le |\tilde{\sigma}_{i+1}|$. By an easy calculation, $|\tilde{\sigma}_{k}|\le l$ implies $k\le 2^{l}$. For given $\sigma\in 2^{<{\mathbb{N}}}$, define $I_{\sigma}=[l_{\sigma},r_{\sigma}]=[0.\sigma,0.\sigma+2^{-|\sigma|}]$. Now, define continuous functions $f_{s}:[0,1]\to{\mathbb{R}}$ as follows: define $f_{s}$ on $I_{\tilde{\sigma}_{k}}$ to be a sawtooth function of height $2^{-k}$ with $2^{k-n}$ many teeth if $k\le s$ and $h(k)=n$, and $f_{s}=0$ otherwise. Then, the limit $f=\lim_{s\to\infty}f_{s}$ exists. Now suppose $g:I_{{\mathbb{Q}}}\to{\mathbb{R}}$ is a function such that $f\le_\mathtt{slope} g$. Define $\Delta:{\mathbb{N}}\to{\mathbb{R}}$ as \[\Delta(k)=\max\{g(r_{\sigma})-g(l_{\sigma})\mid \sigma\in T\wedge |\sigma|=k\}.\] Note that $\Delta(k)<2^{-n}$ can be expressed by a $\Sigma^{0}_{1}$-formula. \noindent \textit{Case 1}: $\lim_{n\to\infty}\Delta(n)=0$. In this case, using the same argument as in the above proof, take $\eta:{\mathbb{N}}\to{\mathbb{N}}$ so that $\Delta(\eta(n))<2^{-n}$.
If $h(k)=n$, then $g(r_{\tilde{\sigma}_{k}})-g(l_{\tilde{\sigma}_{k}})\ge 2^{-n}$, hence $|\tilde{\sigma}_{k}|\le \eta(n)$. Thus, $n\in \mathrm{rng}(h)\Leftrightarrow\exists k\le 2^{\eta(n)}\ h(k)=n$, so $T\setminus\mathrm{rng}(h)=\mathrm{Ext}(T)$ exists. Hence, we can easily find a path of $T$. \noindent \textit{Case 2}: $\lim_{n\to\infty}\Delta(n)>0$. In this case, take $q\in{\mathbb{Q}}$ such that $\lim_{n\to\infty}\Delta(n)>q>0$, and define $\hat{T}$ as $\hat{T}=\{\sigma\in T\mid g(r_{\sigma})-g(l_{\sigma})>q\}$. Then, $\hat{T}$ is an infinite subtree of $T$, and there exists $K\in{\mathbb{N}}$ such that for any prefix-free $P\subseteq \hat{T}$, $|P|\le K$. Indeed, take $K$ so that $Kq>g(1)-g(0)$. Then, for any prefix-free $P\subseteq \hat{T}$, \[|P|q<\sum_{\sigma\in P}g(r_{\sigma})-g(l_{\sigma})\le g(1)-g(0)<Kq.\] Thus, we have $|P|\le K$. Now, we can find a path of $\hat{T}$ by the following claim. \noindent\textit{Claim} ($\mathsf{RCA_0}$). If $T$ is an infinite binary tree, and there exists $K\in{\mathbb{N}}$ such that for any prefix-free $P\subseteq T$, $|P|\le K$, then $T$ has a path. By $\Sigma^{0}_{1}$-induction, take \begin{center} $k=\max\{i\le K\mid \exists P\subseteq T$ such that $P$ is prefix-free and $|P|=i\}$, \end{center} and let $P_{k}\subseteq T$ be a witness. Then, any long enough $\sigma\in T$ is an extension of a member of $P_{k}$, and has at most one immediate successor in $T$. Thus, by $\Sigma^{0}_{1}$-induction, there exists $\tau\in P_{k}$ such that $\tau\in \mathrm{Ext}(T)$. Since each extension of $\tau$ has exactly one successor, we can easily find a path of $T$ extending $\tau$.
\end{proof} A stronger variant $\mathtt{strong \ Jordan_{{\mathbb{Q}}}}$ would require that $f$ is only defined on $I_{\mathbb{Q}}$, and of bounded variation for partitions consisting of rationals. By the proof above, this principle is also equivalent to $\mathtt{WKL_0}$ over $\mathtt{RCA_0}$. \section{Yokoyama : Notes on BVDiff and reverse mathematics} The following was contributed by {Keita Yokoyama% \footnote{ School of Information Science, Japan Advanced Institute of Science and Technology, 1-1 Asahidai, Nomi, Ishikawa, 923-1292, JAPAN, E-mail: y-keita@jaist.ac.jp} } following a talk of Nies at JAIST (Kanazawa, Japan) suggesting the principles studied below. The principle $\mathrm{BVDiff}$ was introduced by Greenberg, Nies and Slaman in Auckland, Nov 2012. A function $f: \subseteq [0,1]\to \mathbb{R}$ with domain containing ${\mathbb{Q}} \cap[0,1]$ is said to be pseudo-differentiable at $z\in (0,1)$ if $f(z):=\lim_{x\in \mathbb{Q}, x\to z} f(x)$ exists and for every $\varepsilon>0$, there exists $\delta>0$ such that for every $0<|h|,|h'|<\delta$, \[\left| \frac{f(z+h)-f(z)}{h}- \frac{f(z+h')-f(z)}{h'}\right|<\varepsilon.\] Note that this is equivalent (over $\mathsf{RCA_0}$) to the definition of pseudo-differentiability in \cite[Section 7]{Brattka.Miller.ea:nd}. In the reverse mathematics setting, note that we don't require that the derivative exists as a real of the model. \begin{theorem} The following are equivalent over $\mathsf{RCA_0}$. \begin{enumerate} \item $\mathsf{WWKL_0}$. \item {$\mathrm{BVDiff}$:} every continuous bounded variation function $f:[0,1]\to \mathbb{R}$ has a pseudo-differentiable point. 
\item {$\mathrm{aeBVDiff}$:} every continuous bounded variation function $f:[0,1]\to \mathbb{R}$ is pseudo-differentiable almost surely in the following sense: \begin{itemize} \item[] for any family of open intervals $\mathcal{U}=\{(u_{i},v_{i})\}_{i\in\mathbb{N}}$, if $\mathcal{U}$ covers every point at which $f$ is pseudo-differentiable, then $\sum_{i\in \mathbb{N}}(v_{i}-u_{i})\ge 1$. \end{itemize} \end{enumerate} \end{theorem} This theorem follows from the Greenberg/Miller/Nies/Slaman results in Section~\ref{s:GMNS reverse}, Brattka/Miller/Nies \cite{Brattka.Miller.ea:nd}, and a new fact: \begin{proposition}\label{propBVDiff} $\mathrm{BVDiff}$ (or $\mathrm{aeBVDiff}$) is provable within $\mathsf{WWKL_0}$. \end{proposition} \begin{lemma}[Brattka/Miller/Nies \cite{Brattka.Miller.ea:nd}, $\mathsf{RCA_0}$]\label{lem1} Let $f:\mathbb{Q}\cap[0,1]\to \mathbb{R}$ be a non-decreasing function, and let $z\in [0,1]$ be Martin-L\"of (computably) random relative to $f$. Then, $f$ is pseudo-differentiable at $z$. \end{lemma} Recall Theorem~\ref{thm:WKL} above: The following are equivalent over $\mathsf{RCA_0}$. \begin{enumerate} \item $\mathsf{WKL_0}$. \item For every bounded variation function $f:\mathbb{Q}\cap[0,1]\to \mathbb{R}$, there exist non-decreasing functions $g,h:\mathbb{Q}\cap[0,1]\to\mathbb{R}$ such that $f=g-h$. \end{enumerate} Another crucial ingredient is the following, due to Stephen G.~Simpson and Keita Yokoyama. \begin{lemma}[\cite{Simpson.Yokoyama:11}, Lemma 3.6] \label{lem3} For any countable model $(M,S)\models\mathsf{WWKL_0}$, there exists $\bar{S}\supseteq S$ such that $(M,\bar{S})\models\mathsf{WKL_0}$ and the following holds: \begin{itemize} \item[$(\dag)$] for any $A\in \bar{S}$ there exists $B\in S$ such that $B$ is Martin-L\"of random relative to $A$. \end{itemize} \end{lemma} \begin{proof}[Proof of Proposition~\ref{propBVDiff}.] We will show that BVDiff holds in any countable model of $\mathsf{WWKL_0}$.
Let $(M,S)$ be a countable model of $\mathsf{WWKL_0}$, and let $f:[0,1]\to \mathbb{R}$ be a continuous bounded variation function in $(M,S)$. By Lemma~\ref{lem3}, take an extension $\bar{S}\supseteq S$ such that $(M,\bar{S})\models\mathsf{WKL_0}$ and satisfies $(\dag)$. Then, by Theorem~\ref{thm:WKL}, there exist non-decreasing functions $g,h\in \bar{S}$, $g,h:\mathbb{Q}\cap[0,1]\to\mathbb{R}$ such that $f=g-h$. By $(\dag)$, take $z\in S$, $z\in [0,1]$ which is Martin-L\"of random relative to $g\oplus h$ (in $(M,\bar{S})$). Then, by Lemma~\ref{lem1}, both $g$ and $h$ are pseudo-differentiable at $z$ (in $(M,\bar{S})$), and thus $f$ is pseudo-differentiable at $z$ (in $(M,{S})$). To show aeBVDiff, let $\mathcal{U}=\{(u_{i},v_{i})\}_{i\in\mathbb{N}}$ be a family of open intervals such that $\sum_{i\in \mathbb{N}}(v_{i}-u_{i})< 1$. Then, $[0,1]\setminus \bigcup_{i\in\mathbb{N}}(u_{i},v_{i})$ is a closed set of positive measure. Thus, in the above proof, we can find a Martin-L\"of random point $z$ in $[0,1]\setminus \bigcup_{i\in\mathbb{N}}(u_{i},v_{i})$. \end{proof} \begin{remark} {\rm Within $\mathsf{WWKL_0}$ one cannot ensure the existence of the value of the derivative; in other words, the pseudo-differentiability in BVDiff cannot be replaced by (usual) differentiability. In fact, Jason Rute showed that the following are equivalent over $\mathsf{RCA_0}$.} \begin{enumerate} \item Every continuous bounded variation function $f:[0,1]\to \mathbb{R}$ has a derivative somewhere, \textit{i.e.}, there exists $z\in (0,1)$ such that $f'(z)=\lim_{x\to z}{(f(x)-f(z))}/{(x-z)}$ exists. \item $\mathsf{ACA_0}$. \end{enumerate} \end{remark} \section{Randomness notions as principles of reverse mathematics} Written by Nies based on work with Greenberg and Slaman in Cambridge, June 2012 and Auckland, Dec 2012. Let $\mathsf{C}$ denote a randomness notion.
For instance $\mathsf{MLR}$ is ML-randomness, $\mathsf{CR}$ is computable randomness, and $\mathsf{SR}$ is Schnorr randomness. We study the strength of the system \begin{center} $\mathsf{C_0} = \mathsf{RCA_0} + \forall X \exists Y \, [Y \in \mathsf{C}^X]$. \end{center} Note that $\mathsf{MLR_0}$ is equivalent to weak weak K\"onig's lemma, at least for $\omega$-models. \begin{proposition} \label{prop: ML CR} $\mathsf{CR_0}$ does not imply $\mathsf{MLR_0}$, as shown by a suitable $\omega$-model. \end{proposition} This suggests calling the principle $\mathsf{CR_0}$ weak weak weak K\"onig's lemma. Recall that every high set is Turing above a computably random set by \cite{Nies.Stephan.ea:05} (also see \cite[Ch.\ 7]{Nies:book}). \begin{proof} By the proof of \cite[Lemma 4.11]{Cholak.Greenberg.ea:06}, for each set $B$ of non-d.n.c.\ degree there is a set $X$ high (even LR-hard) relative to $B$ such that $B \oplus X$ is also not of d.n.c.\ degree. Iterating this in the standard way, we build an $\omega$-model of $\mathsf{CR_0}$ without a set of d.n.c.\ degree. In particular, there is no ML-random set. \end{proof} Recall from \cite{Nies.Stephan.ea:05} that every non-high Schnorr random set is already ML-random. This only requires $\Sigma_1$-induction. The following is somewhat surprising and perhaps explains why these two randomness notions are harder to separate than other pairs of notions. \begin{proposition} \label{prop: CR SR} $\mathsf{CR_0}$ is equivalent to $\mathsf{SR_0}$. \end{proposition} \begin{proof} Let $\+ M$ be a model of $\mathsf{SR_0}$. Let $X$ be a set of $\+ M$. Arguing within $\+ M$, if no set $Y$ is high in $X$, then $\mbox{\rm \textsf{SR}}^X = \mbox{\rm \textsf{MLR}}^X$, so by assumption on $\+ M$ there is a $Z$ in $\mbox{\rm \textsf{CR}}^X$. Otherwise, some set $Y$ is high in $X$, i.e., $X'' \le_{\mathrm{T}} (Y\oplus X)'$, and so $Y \oplus X$ computes a set in $\mbox{\rm \textsf{CR}}^X$.
\end{proof} We note that analytical equivalents of a randomness axiom, such as $\mathsf{BVDiff}$, are harder to come by in the absence of a universal test. However, $\mbox{\rm \textsf{CR}}_0$ is equivalent, over $\mathsf{RCA_0} + $ sufficient induction, to the statement that for each $X$ there is a real $z$ such that each nondecreasing function $f$ computable from $X$ is differentiable at $z$. \begin{question} {\rm Find other pairs of randomness notions close to each other where the corresponding principles are equivalent. For instance, consider pairs among $\mathsf{MLR_0}$, difference random reals, ML-random density-one points, and Oberwolfach random reals.}\end{question} $\mathsf{MLR_0}$ and $\mathsf{DiffR_0}$ should be equivalent over $\mathsf{RCA_0} + $ sufficient induction, by an argument similar to the proof of Proposition~\ref{prop: CR SR}. \part{Randomness and computable analysis} \section{Nies: Density and differentiability: dyadic versus full} \label{s:diff and porous} \newcommand{\cdf}{{\sf cdf}} For $U,V$ measurable subsets of a measure space $(X, \mu)$, if $\mu(V) >0$ we let \begin{center} $\mu_V(U) = \mu(U \cap V) / \mu (V)$. \end{center} This is the local, or conditional, measure of $U$ with respect to $V$. The definitions below follow \cite{Bienvenu.Hoelzl.ea:12a}. Let~$\lambda$ denote Lebesgue measure. \begin{definition} We define the (lower Lebesgue) density of a set $\+ C \subseteq {\mathbb{R}}$ at a point~$z$ to be the quantity $$\varrho(\+ C | z):=\liminf_{\gamma,\delta \rightarrow 0^+} \frac{\lambda([z-\gamma,z+\delta] \cap \+ C)}{\gamma + \delta}.$$ \end{definition} \noindent (If we let $I= [z-\gamma, z+\delta]$, the expression above turns into the local measure $\mathbf{\lambda}_I(\+ C)$.) Intuitively, this measures the fraction of space filled by~$\+ C$ around~$z$ if we ``zoom in'' arbitrarily close. Note that $0 \le \varrho(\+ C | z) \le 1$. We will first discuss the Lebesgue density theorem.
\begin{theorem}[Lebesgue density theorem] Let $\+ C \subseteq {\mathbb{R}}$ be a measurable set. Then $\varrho(\+ C | z)=1$ for almost every $z\in \+ C$. \end{theorem} It is interesting to compare this modern formulation with the original in \cite[page 407]{Lebesgue:1910}: \ \scalebox{2.4}{\includegraphics[width=5cm,bb=0 0 380 47]{LebesgueDensity1910p407.pdf}} In 1910, mathematical writing was rather different from what it is today. The statement above is at the \emph{end} of a long argument. None of the statements in the 90-page monograph is labelled, so there is no cross-referencing. A (closed) basic dyadic interval has the form $[r \tp{-n}, (r+1) \tp{-n}]$ where $r \in \mathbb Z, n \in \mathbb N$. The lower dyadic density of a set $\+ C \subseteq {\mathbb{R}}$ at a point~$z$ is the variant one obtains when only considering basic dyadic intervals containing~$z$: $$\varrho_2(\+ C | z):=\liminf_{z \in I \, \land \, |I| \rightarrow 0} \frac{\lambda(I \cap \+ C)}{|I|}, $$ where $I$ ranges over basic dyadic intervals containing $z$. Clearly $\varrho_2(\+ C | z) \ge \varrho(\+ C | z)$. Sometimes we use \emph{open} basic dyadic intervals; for the definition above the distinction does not matter. Suppose that a real $z$ is not a dyadic rational. Let $0.Z$ be its binary expansion. Note that $\varrho_2(\+ C|z)$ is the same as $$\liminf_{\sigma \prec Z \, \land \, \ensuremath{|\sigma|} \rightarrow \infty} \frac{\lambda( [\sigma] \cap \+ C )}{\tp{-\ensuremath{|\sigma|}}}, $$ when we view $\+ C$ as a subclass of $\seqcantor$. This is the density in Cantor space. \begin{definition}[\cite{Bienvenu.Hoelzl.ea:12a}] Consider $z\in[0,1]$. \begin{itemize} \item We say that $z$ is a \emph{density-one point} if $\varrho(\+ C | z)=1$ for every effectively closed class $\+ C$ containing $z$. \item We say that $z$ is a \emph{positive density point} if $\varrho(\+ C | z)>0$ for every effectively closed class $\+ C$ containing $z$.
\end{itemize} \end{definition} By the Lebesgue density theorem and the fact that there are only countably many effectively closed classes, almost every real $z$ is a density-one point. Note that we can form similar definitions with dyadic density. The distinction between positive and full density is typical for the setting of effective analysis. In classical analysis, everything is settled by Lebesgue's theorem. In effective analysis, more randomness is required to ensure a real is a full density-one point. Day and Miller~\cite{Day.Miller:nd} have built a ML-random real which is a positive density point but not a density-one point. They can in fact ensure this real is $\Delta^0_2$. A closely related notion, \emph{non-porosity}, originates in the work of Denjoy. See for instance \cite[5.8.124]{Bogachev.vol1:07} (but note the typo in the definition there). We say that a set $\+ C\subseteq\mathbb R$ is \emph{porous at} $z$ via the constant $\varepsilon >0$ if there exist arbitrarily small $\beta>0$ such that $(z-\beta, z+ \beta)$ contains an open interval of length $\varepsilon\beta $ that is disjoint from $\+ C$. We say that $\+ C$ is \emph{porous at} $z$ if it is porous at $z$ via some~$\varepsilon>0$. \begin{definition}[\cite{Bienvenu.Hoelzl.ea:12a}] We call $z$ a \emph{porosity point} if some effectively closed class to which it belongs is porous at $z$. Otherwise, $z$ is a \emph{non-porosity point}. \end{definition} Clearly, if $\+C$ is porous at $z$ then $\varrho(\+ C | z)<1$, so $z$ is not a density-one point. Therefore, almost every point of $\+ C$ is a non-porosity point. \subsection{Dyadic density 1 is equivalent to full density 1 for ML-random reals} \cite[Remark 3.4]{Bienvenu.Hoelzl.ea:12a}~show that a ML-random real which is a \emph{dyadic} positive density point already is a full positive density point; both notions coincide with difference randomness, or equivalently, being ML-random and Turing incomplete~ \cite{Franklin.Ng:10}.
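To see concretely that dyadic and full density at a single point can differ, consider the toy example $\+ C = [1/3, 1]$ at $z = 1/3$ (our choice, purely for illustration; it is unrelated to the effectively closed classes discussed above). Since $1/3 = 0.010101\dots$ in binary, the ratios over basic dyadic intervals containing $z$ alternate between $2/3$ and $1/3$, so $\varrho_2(\+ C \mid z) = 1/3$, while skewed windows $[z-\gamma, z+\delta]$ with $\delta \ll \gamma$ drive the full-density ratios to $0$, so $\varrho(\+ C \mid z) = 0$. A short Python sketch with exact rational arithmetic checks this:

```python
import math
from fractions import Fraction

z = Fraction(1, 3)  # z = 0.010101... in binary

def meas(a, b):
    """Lebesgue measure of [a, b] intersected with C = [1/3, 1]."""
    return max(min(b, Fraction(1)) - max(a, z), Fraction(0))

# ratios over the basic dyadic intervals of length 2^-n containing z
dyadic = []
for n in range(1, 21):
    a = Fraction(math.floor(z * 2 ** n), 2 ** n)
    dyadic.append(meas(a, a + Fraction(1, 2 ** n)) * 2 ** n)
assert set(dyadic) == {Fraction(1, 3), Fraction(2, 3)}  # liminf is 1/3

# skewed windows [z - 2^-n, z + 4^-n]: the ratio is 1/(2^n + 1) -> 0
full = [meas(z - Fraction(1, 2 ** n), z + Fraction(1, 4 ** n))
        / (Fraction(1, 2 ** n) + Fraction(1, 4 ** n))
        for n in range(2, 12)]
assert all(r == Fraction(1, 2 ** n + 1) for r, n in zip(full, range(2, 12)))
```

Of course $z = 1/3$ is far from random; for ML-random density-one points, the results discussed in this subsection rule out such a gap.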
Mushfeq Khan and Joseph Miller have recently proved the analog of this result for density-one points. \begin{theorem} \label{thm:ddensity vs full density} Let $z$ be a ML-random dyadic density-one point. Then $z$ is a full density-one point. \end{theorem} The actual statement Joe and Mushfeq proved is the following. \begin{thm} \label{prop:denseporous} Suppose $z$ is a non-porosity point. Let $\+ P$ be a $\PI{1}$ class, $z\in \+ P$, and $\varrho(\+ P\mid z) < 1$. Then already $\varrho_2 (\+ P \mid z) < 1$. \end{thm} Thus, we have the same class $\+ P$ both times. This implies Theorem \ref{thm:ddensity vs full density} since $z$ is Turing incomplete, and hence by the result of Bienvenu et al.\ \cite{Bienvenu.Hoelzl.ea:12a} a non-porosity point. In Remark~\ref{rem:right-c.e.} we will see that in fact $\varrho(\+ P\mid z) =\varrho_2 (\+ P \mid z) $ for each non-porosity point $z$. \begin{proof}[Proof of Thm.\ \ref{prop:denseporous}] Let $\epsilon >0$ be such that $\varrho(\+ P\mid z) < 1-\epsilon$. Assume that $\varrho_2 (\+ P \mid z) =1$. Let $n^*$ be sufficiently large so that $\mathbf{\lambda}_L(\+ P) \ge 1 - \epsilon/4 $ for each basic dyadic interval $L$ of length $\le \tp{-n^*}$ containing $z$. Consider now an arbitrary interval $I$ of length $\le \tp{-n^*}$ with $z \in I$ and $\mathbf{\lambda}_I(\+ P)< 1-\epsilon$. Let $n$ be such that $\tp{-n+1} > |I| \ge \tp{-n}$; thus, $n \ge n^*$. We may cover $I$ with three consecutive basic dyadic intervals $A,B,C$ of length $\tp{-n}$. Say $z \in B$. Since $\+ P$ is relatively thin in $I$, but thick in $B$, this means that $\+ P$ must be thin in $A$ or $C$. This leads to large `holes' arbitrarily close to $z$ in an appropriate $\PI{1}$ class $\+ Q$, which shows that $z$ is a porosity point.
This class $\+ Q$ consists of the basic dyadic intervals where $\+ P$ is thick: \begin{center} $\+ Q = [0,1] - \bigcup \{ L \colon \, \mathbf{\lambda}_L(\+ P) < 1- \delta\}$ \end{center} where $\delta = \epsilon/4$ and $L$ ranges over \emph{open} basic dyadic intervals. We obtain that $\+Q$ is porous at $z$ via porosity constant $1/3$. \noindent \emph{Technical detail:} We have $$\mathbf{\lambda}(\+ P \cap ( A \cup B \cup C))< 3 \cdot \tp{-n} - \epsilon |I| \le (3 - \epsilon) \tp{-n},$$ while $$\mathbf{\lambda} ( \+ P \cap B ) \ge (1-\delta ) \tp{-n}.$$ Therefore $$\mathbf{\lambda}(\+ P \cap ( A \cup C))<(2 - (\epsilon -\delta)) \tp{-n},$$ and so \begin{center} $\mathbf{\lambda}(\+ P \cap A)<(1 - (\epsilon -\delta)/2) \tp{-n}$ or $\mathbf{\lambda}(\+ P \cap C)<(1 - (\epsilon -\delta)/2) \tp{-n}.$ \end{center} Thus, since $\frac 3 8 \epsilon > \delta$ one of $A$, $C$ will be removed from $\+ Q$. The case that $z \in A$ or $z \in C$ is similar. \end{proof} \subsection{Background from analysis, and two lemmas on comparing derivatives} We need notation and a few definitions, mostly taken from \cite{Brattka.Miller.ea:nd} or \cite{Bienvenu.Hoelzl.ea:12a}. For a function~$f\colon\subseteq{\mathbb{R}}\to{\mathbb{R}}$, the \emph{slope} at a pair $a,b$ of distinct reals in its domain is \[ S_f(a,b) = \frac{f(a)-f(b)}{a-b}. \] For an interval $A$ with endpoints $a,b$, we also write $S_f(A)$ instead of $S_f(a,b)$. For a string $\sigma$ by $[\sigma]$ we denote the closed basic dyadic interval $[0.\sigma, 0.\sigma + \tp{-\ensuremath{|\sigma|}}]$. The open basic dyadic interval is denoted $(\sigma)$. We write $S_f([\sigma])$ with the expected meaning. 
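The covering step in the proof above is the elementary fact that any interval $I$ with $\tp{-n} \le |I| < \tp{-n+1}$ lies inside three consecutive basic dyadic intervals of length $\tp{-n}$. A minimal Python sketch (the helper name is ours) verifies this on exact rational data:

```python
import math
from fractions import Fraction

def dyadic_cover(lo, hi):
    """Return (n, k) such that 2^-n <= hi - lo < 2^-(n-1) and the three
    consecutive basic dyadic intervals of length 2^-n starting at k 2^-n
    cover [lo, hi]."""
    n = 0
    while Fraction(1, 2 ** n) > hi - lo:
        n += 1
    k = math.floor(lo * 2 ** n)
    assert Fraction(k, 2 ** n) <= lo and hi <= Fraction(k + 3, 2 ** n)
    return n, k

# spot checks, including an interval with non-dyadic endpoints
assert dyadic_cover(Fraction(1, 3), Fraction(1, 3) + Fraction(1, 10)) == (4, 5)
assert dyadic_cover(Fraction(7, 16), Fraction(9, 16)) == (3, 3)
```

The reason three intervals always suffice: $|I| < 2 \cdot \tp{-n}$, so $I$ meets at most three of the length-$\tp{-n}$ grid intervals; the proof then plays the thin interval $I$ off against the thick middle interval $B$.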
If $z$ is in an open neighborhood of the domain of~$f$, the \emph{upper} and \emph{lower derivatives} \label{def_upper_lower_deriv} of $f$ at $z$ are \[ \overline D f(z) = \limsup_{h\rightarrow 0} S_f(z, z+h) \quad \textnormal{and} \quad \underline D f(z) = \liminf_{h\rightarrow 0} S_f(z, z+h), \] where, as usual, $h$ ranges over positive and negative values. The derivative $f'(z)$ exists if and only if these values are equal and finite. We can also consider the upper and lower \emph{pseudo}-derivatives defined by: \begin{align*} \utilde Df(x) &= \liminf_{h \to 0^+} \, \{S_f(a,b) \mid \, a\le x \le b \, \land \, \, 0 < b-a\le h\}, \\ \widetilde Df(x) &= \limsup_{h \to 0^+} \, \{S_f(a,b) \mid \, a\le x \le b \, \land \, \, 0 < b-a\le h\}, \end{align*} where $a,b$ range over rationals in $[0,1]$. We use them because in our arguments it is often convenient to consider (rational) intervals containing $x$, rather than intervals with $x$ as an endpoint. Also, we want to be able to discuss pseudo-differentiability for partial functions that are defined on all rationals in $[0,1]$, such as in the last section of \cite{Brattka.Miller.ea:nd}. Brattka et al.\ \cite[after Fact 2.4 ]{Brattka.Miller.ea:nd} check that $\underline Df(z) \le \utilde Df(z) \le \widetilde Df(z) \le \overline Df(z)$ for any real~$z\in [0,1]$; in \cite[Fact 7.2]{Brattka.Miller.ea:nd} they verify that for continuous functions with domain $[0,1]$, the lower and upper pseudo-derivatives of $f$ coincide with the usual lower and upper derivatives. They also coincide if $f$ is nondecreasing: for instance, to show $ \utilde Df(z) \le \underline Df(z)$, fix an arbitrarily small $\epsilon >0$. Given $h > 0$, choose rationals $a \le z $, $z+h \le b$ such that $(b-a) \le (1+\epsilon ) h$. Then $S_f(z, z+h) \le (1+\epsilon) S_f(a,b)$. We will use the subscript $2$ to indicate that all the limit operations are restricted to the case of basic dyadic intervals containing $z$.
For instance, \[\widetilde D_2f(x) = \limsup_{|A| \to 0} \, \{S_f(A) \mid \, x \in A \, \land \, A \text{ is a basic dyadic interval}\}. \] \subsubsection{A pair of analytical lemmas} Similar to Theorem~\ref{thm:ddensity vs full density}, we show that a discrepancy between dyadic and full upper/lower derivatives at $z$ implies that some closed set is porous at $z$. \begin{lemma}\label{lem:classic diff porous} Suppose $f \colon \, [0,1] \to {\mathbb{R}}$ is a nondecreasing function. Suppose for a real $z \in [0,1]$, with binary representation $z = 0.Z$, there is a rational $p$ such that % \[\widetilde D_2 f(z) < p < \widetilde Df(z).\] % Let $\sigma^* \prec Z$ be any string such that $\forall \sigma \, [ Z \succ \sigma \succeq \sigma^* \Rightarrow S_f([\sigma]) \le p]$. Then the closed set \begin{equation} \+ C = [\sigma^*] - \bigcup \{ (\sigma) \colon \, \sigma \succeq \sigma^* \, \land \, S_f([\sigma]) >p\},\label{eqn: def C porous} \end{equation} which contains $z$, is porous at $z$. \end{lemma} \begin{proof} Suppose $k\in {\mathbb{N}} $ is such that $p(1+\tp{-k+1})< \widetilde Df(z)$. We show that there exist arbitrarily large $n$ such that some basic dyadic interval $[a, \dot a]$ of length $\tp{-n-k}$ is disjoint from $\+ C$, and contained in $[z- \tp{-n+2}, z + \tp{-n+2}]$. In particular, we can choose $\tp{-k-2}$ as a porosity constant. By choice of $k$ there is an interval $I \ni z$ of arbitrarily short positive length such that $ p(1+\tp{-k+1})< S_f(I) $. Let $n$ be such that $\tp{-n+1} > |I| \ge \tp{-n}$. Let $a_0$ be the greatest number of the form $v \tp{-n-k}$, $v \in {\mathbb{Z}}$, such that $a_0 < \min I$. Let $a_v = a_0 + v \tp{-n-k}$. Let $r$ be least such that $a_r \ge \max I$. Since $f$ is nondecreasing and $a_r - a_0 \le |I| + \tp{-n-k+1} \le (1+ \tp{-k+1}) |I|$, we have \[ S_f (I) \le S_f(a_0, a_r) (1+ \tp{-k+1} ),\] and therefore $S_f(a_0,a_r)>p$.
Then, by the averaging property of slopes at consecutive intervals of equal length, there is a $u<r$ such that $$S_f(a_u,a_{u+1})>p.$$ Since $(a_u, a_{u+1}) = (\sigma)$ for some string $\sigma$, this gives the required `hole' in $\+ C$ which is near $z \in I$ and large on the scale of $I$: in the definition of porosity let $\beta = \tp{-n+2}$ and note that we have $[a_u, a_{u+1}] \subseteq [z- \tp{-n+2}, z + \tp{-n+2}]$ because $z \in I$ and $|I| < \tp{-n+1}$. \end{proof} There also is a \textbf{dual lemma} for lower derivatives. Note that it can \emph{not} simply be obtained from the first by taking $-f$ because the function in the dual lemma is still non\emph{de}creasing. In fact, now the short dyadic intervals we choose in the proof are all \emph{contained in}~$I$. (So in fact we can get a porosity constant $\tp{-k-1}$.) \begin{lemma}\label{lem:classic diff porous 2} Suppose $f \colon \, [0,1] \to {\mathbb{R}}$ is a nondecreasing function. Suppose for a real $z \in [0,1]$, with binary representation $z = 0.Z$, there is a rational $q$ such that % \[\utilde D f(z) < q < \utilde D_2f(z).\] % Let $\sigma^* \prec Z$ be any string such that $\forall \sigma \, [ Z \succ \sigma \succeq \sigma^* \Rightarrow S_f([\sigma]) \ge q]$. Then the closed set \[ \+ C = [\sigma^*] - \bigcup \{ (\sigma) \colon \, \sigma \succeq \sigma^* \, \land \, S_f([\sigma]) < q\},\] which contains $z$, is porous at $z$. \end{lemma} \begin{proof} The argument is very similar to the previous one. We will show that we can choose as a porosity constant $\tp{-k-1}$ where $k\in {\mathbb{N}} $ is such that $\utilde D f(z) < q(1- \tp{-k+1})$. There is an interval $I \ni z$ of arbitrarily short positive length such that $S_f(I) < q(1- \tp{-k+1})$. As before, let $n$ be such that $\tp{-n+1} > |I| \ge \tp{-n}$. Let $a_0$ be the least number of the form $v \tp{-n-k}$, $v \in {\mathbb{Z}}$, such that $a_0 \ge \min (I)$. Let $a_v = a_0 + v \tp{-n-k}$. Let $r$ be greatest such that $a_r \le \max (I)$.
Since $f$ is nondecreasing and $a_r - a_0 \ge |I| - \tp{-n-k+1} \ge (1- \tp{-k+1}) |I|$, we have \[ S_f (I) \ge S_f(a_0, a_r) (1- \tp{-k+1} ),\] and therefore $S_f(a_0,a_r)< q$. Then there is a $u<r$ such that $$S_f(a_u,a_{u+1})< q.$$ As before, this gives the required hole in $\+ C$ which is near $z \in I$. \end{proof} \subsubsection{Basic dyadic intervals shifted by $1/3$} For $m \in {\mathbb{N}}$ let $\+D _m $ be the collection of intervals of the form $$[k \tp{-m}, (k+1)\tp{-m}]$$ where $k \in {\mathbb{Z}}$. Let $ \+ D'_m$ be the set of intervals $(1/3) +I $ where $I \in \+ D_m$. % We use a `geometric' fact from Morayne and Solecki~\cite{Morayne.Solecki:89}: \begin{fact} \label{fact:geom} Let $m \ge 1$. If $I \in \+ D_m$ and $J \in \+ D'_m$, then the distance between an endpoint of $I$ and an endpoint of $J$ is at least $1/(3 \cdot 2^m)$. \end{fact} To see this, assume that $|k \tp{-m} - ( p \tp{-m} +1/3)| < 1/(3 \cdot 2^m)$. This yields $|3k-3p-2^m| < 1$, so $3k-3p = 2^m$, and hence $3 \mid 2^m$, a contradiction. In the following we need values of functions at endpoints of any such intervals. So we think of nondecreasing functions $f \colon \, [0,1] \to {\mathbb{R}}$ extended to all of ${\mathbb{R}}$ via $f(x) = f(0)$ for $x< 0$ and $f(y) = f(1)$ for $y>1$. Effectiveness properties, such as computable or interval-c.e. (defined below), are preserved by this because it suffices to compute values of the function in question at rationals. \subsection{Differentiability of nondecreasing computable functions} \ We give a short proof of the following. \begin{theorem}[\cite{Brattka.Miller.ea:nd}, Thm.\ 4.1]\label{thm:CRd dyadic diff} Suppose $f \colon \, [0,1] \to {\mathbb{R}}$ is a nondecreasing computable function. Let $z \in [0,1]$ be computably random. Then $f'(z)$ exists. \end{theorem} \begin{proof} We may assume $z> 1/2$, else we work with $f(x-1/2)$ instead of $f$.
Recall that a Cauchy name is a sequence $(p_i) \sN i$, $p_i \in {\mathbb{Q}}$, such that $\forall k > i \, |p_i - p_k | \le \tp{-i}$. Consider the computable martingale \begin{center} $M(\sigma) = S_f(0. \sigma, 0. \sigma+ \tp{-\ensuremath{|\sigma|}})$. \end{center} Computability of~$M$ means that $M(\sigma) $ is given by a Cauchy name computable uniformly in~$\sigma$. We denote by $M(\sigma)_u$ the $u$-th term of this Cauchy name, so that $|M(\sigma) - M(\sigma)_u | \le \tp{-u}$. Note that $\lim_n M(Z\uhr n)$ exists and is finite for each computably random~$Z$. This is a version of Doob martingale convergence; see, for instance, \cite{Downey.Hirschfeldt:book}. Returning to the language of slopes, the convergence of $M$ on $Z$ means that $\utilde D_2 f(z)= \widetilde D_2 f(z) < \infty$. Assume for a contradiction that $f'(z)$ fails to exist. First suppose that $\widetilde D_2 f(z) < \widetilde Df(z)$. Choose rationals $r,p$ such that $\widetilde D_2 f(z) <r < p < \widetilde Df(z)$. Choose $u \in {\mathbb{N}}$ so large that $\widetilde D_2 f(z) <r - \tp{-u}$ and $r+\tp{-u} <p$. As usual let $Z\in \seqcantor $ be such that $z = 0.Z$. Let $n^*$ be sufficiently large so that $S_f(A) \le r -\tp{-u}$ for each basic dyadic interval $A$ containing $z$ and of length $\le \tp{-n^*}$. Choose $k$ with $p(1+\tp{-k+1})< \widetilde Df(z)$. Then Lemma~\ref{lem:classic diff porous} applies via the string $\sigma^*= Z \uhr {n^*}$. We define computable rational-valued martingales $L, L'$ such that $L$ succeeds on $Z$, or $L'$ succeeds on~$Y$ where $0.Y$ is the binary expansion of $z-1/3$. \vspace{6pt} \noindent \fbox{\emph{Defining $L$.}} It suffices to consider strings $\sigma \succeq \sigma^*$. Let $L(\sigma^*)=1$. Suppose $\eta \succeq \sigma^* $ and $L(\eta)$ has been defined. Check if there is a string $\alpha$ of length $k+4$ such that $M(\eta \alpha)_u> r$. (Note we have an algorithm for that because $f$ is computable.)
If so, bet $0$ on $\eta \alpha$ (we know that $\eta \alpha \not \prec Z$, so this won't make us lose along~$Z$). In return, increase the capital by a factor of $\tp{k+4}/(\tp{k+4}-1)$ along all strings $\eta \widehat \alpha$ such that $|\widehat \alpha| = k+4$ and $\widehat \alpha \neq \alpha$. Continue the strategy with all strings $\eta \widehat \alpha$. If no such $\alpha $ exists, don't bet, that is, let $L(\eta 0) = L(\eta 1) = L(\eta)$. Continue with the strings $\eta 0$ and $\eta 1$. \vspace{6pt} \noindent \fbox{\emph{Defining $L'$.}} Let $\rho^* = Y\uhr{n^* +1}$. It suffices to consider strings $\rho \succeq \rho^*$. Let $L'(\rho^*)=1$. Suppose $\rho \succeq \rho^*$ and $L'(\rho)$ has been defined. Check if there is a string $\beta$ of length $k+5$ such that $[\rho \beta]+1/3 \subseteq [\tau]$ for a string $\tau$ of length $|\rho \beta| -1$, and $M(\tau)_u> r$. If so, bet $0$ on $\rho \beta$ (we know that $\rho \beta \not \prec Y$). In return, increase the capital by a factor of $\tp{k+5}/(\tp{k+5}-1)$ along all strings $\rho \widehat \beta$ such that $|\widehat \beta| = k+5$ and $\widehat \beta \neq \beta$. Continue the strategy with all strings $\rho \widehat \beta$. If no such $\beta $ exists, don't bet, that is, let $L'(\rho 0) = L'(\rho 1) = L'(\rho)$. Continue with the strings $\rho 0$ and $\rho 1$. We show that $L$ succeeds on $Z$, or $L'$ succeeds on $Y$. Let $\+ C$ be the class from (\ref{eqn: def C porous}) in Lemma~\ref{lem:classic diff porous}. Consider $n \ge n^*+4$ and a hole $[a,\dot a]$, that is, $[a,\dot a] \cap \+ C = \emptyset$, where $[a,\dot a]$ is a basic dyadic interval of length $\tp{-n-k}$, and $[a,\dot a] \subseteq [z - \tp{-n+2}, z+ \tp{-n+2}]$. By Fact~\ref{fact:geom} we have: \begin{claim} One of the following is true. \begin{itemize} \item[(i)] $z, a, \dot a $ are all contained in a single interval $A$ taken from $\+ D_{n-4}$. \item[(ii)] $z,a, \dot a $ are all contained in a single interval $A'$ taken from $\+ D'_{n-4}$.
\end{itemize} \end{claim} In case (i) let $A = [\eta]$, so that $\eta \prec Z$ (recall $Z \not \in {\mathbb{Q}}$ so there is no problem with the end points). Let $[a,\dot a] = [\eta \alpha]$ where $|\alpha|= k+4$. We have $z \not \in [a,\dot a]$, and $L$ increases its capital by a factor of $\tp{k+4}/(\tp{k+4}-1)$ along all strings $\eta \hat \alpha$ as above. Now suppose case (ii) applies. Let $\rho$ be the string such that $A'= [\rho]+ 1/3$. There is $[b, \dot b]$ from $\+ D'_{n+k+1}$ with $[b, \dot b] \subseteq [a, \dot a]$. Since (ii) holds we have $[b, \dot b] = [\rho \beta]+1/3$ for some string $\beta$ of length $k+5$. We have $z \not \in [b,\dot b]$ and $L'$ increases its capital by a factor of $\tp{k+5}/(\tp{k+5}-1)$ along all strings $\rho \widehat \beta$ as above. Suppose now that $L$ fails on $Z$. Then for all sufficiently long $\gamma \prec Y$ we can find $\rho$ with $\gamma \preceq \rho \prec Y$ such that $L'$ increases its capital by a fixed factor $>1$ on the next $k+5$ bits of $Y$. Also the capital of $L'$ along $Y$ never decreases, because there is no basic dyadic interval $[\tau] \ni z$ with $|\tau| \ge n^*$ and $M(\tau)_u > r$. So $L'$ succeeds on $Y$. The case $\utilde D f(z) < \utilde D_2f(z)$ is analogous, using Lemma \ref{lem:classic diff porous 2} instead of Lemma~\ref{lem:classic diff porous}. \end{proof} The method of the proof has an interesting consequence. See e.g.~\cite[7.6.2]{Nies:book} or \cite{Downey.Hirschfeldt:book} for the definition of Church (or computable) stochasticity. By \cite{Ambos.Mayordomo.ea:96}, also see \cite[6.4.11]{Downey.Hirschfeldt:book}, $X \in \seqcantor$ is Church stochastic iff no computable martingale that uses only finitely many positive rational betting factors can win on $X$. The martingales $L$, $L'$ constructed above are of this kind (in fact we have to modify them slightly in order to avoid betting 0). \begin{corollary} Suppose that $z$ is Church stochastic.
Then for each nondecreasing computable function $f \colon \, [0,1] \to {\mathbb{R}}$, we have $\widetilde D_2 f(z) = \widetilde Df(z)$ and $ \utilde D_2f(z) = \utilde Df(z)$. \end{corollary} This means that on the rather generous class of Church stochastic reals $z$, the lower/upper derivative of a nondecreasing computable $f$ is completely given by the slopes at basic dyadic intervals containing $z$. In particular, the derivative at $z$ equals the dyadic derivative. \subsection{Polynomial time randomness and differentiability} Recall that we represent a real $x$ by a Cauchy name $(p_i) \sN i$. We have $p_i \in {\mathbb{Q}}$, and $\forall k > i |p_i - p_k | \le \tp{-i}$. For feasible analysis, we use a compact set of Cauchy names: the signed digit representation of a real. Such Cauchy names, called \emph{special}, have the form $p_i = \sum_{k=0}^i b_k \tp{-k}$, where $b_k \in \{-1,0,1\}$. (Also, $b_0=0, b_1 =1$.) So they are given by paths through $\{-1,0,1\}^\omega$, something a resource bounded TM can process. We call the $b_k$ the \emph{symbols} of the special Cauchy name. \begin{definition} A function $g \colon [0,1] \to {\mathbb{R}}$ is polynomial time computable if there is a polynomial time TM turning every special Cauchy name for $x \in [0,1]$ into a special Cauchy name for $g(x)$. \end{definition} This means that the first $n$ symbols of $g(x)$ can be computed in time poly(n), thereby using polynomially many symbols of the oracle tape holding~$x$. Functions such as $e^x, \sin x$ are polynomial time computable, essentially because analysis gives us rapidly converging approximation sequences, such as $\sum x^n/n!$. The argument given above can be adapted to polynomial time randomness. A martingale $M \colon 2^{ < \omega} \to {\mathbb{R}}$ is called polynomial time computable if from string $\sigma$ and $i \in {\mathbb{N}}$ we can in time polynomial in $\ensuremath{|\sigma|} + i$ compute the $i$-th component of a special Cauchy name for $M(\sigma)$. 
In this case we can compute a polynomial time rational valued martingale dominating $M$ (Schnorr / Figueira--Nies). We say $Z$ is \emph{polynomial time random} if no polynomial time martingale succeeds on $Z$. For definitions omitted here see~\cite{Figueira.Nies:13}. \begin{theorem}\label{thm: poly dyadic diff} Let $z \in [0,1]$. Then $z$ is polynomial time random $\Leftrightarrow$ $f'(z)$ exists for each nondecreasing polynomial time computable function $f\colon [0,1] \to {\mathbb{R}}$. \end{theorem} The implication $\Rightarrow$ and other results were independently proved by A.\ Kawamura, who directly adapted the proof of \cite[Thm.\ 4.1]{Brattka.Miller.ea:nd} to the polynomial time setting. \begin{proof} \n $\LA:$\ Suppose $z$ is not polynomial time random. Then some polynomial time martingale $L$ succeeds on the binary expansion $Z$ of~$z$. By \cite[Lemma 3]{Figueira.Nies:13}, there is a polynomial time martingale $M$ with the savings property that succeeds on $Z$. Let $\mu_M$ be the corresponding measure given by $\mu_M([\sigma])= \tp{-\ensuremath{|\sigma|}}M(\sigma)$. Let ${\sf cdf}_M$ be the cumulative distribution function of $\mu_M$ given by $ {\sf cdf}_M(x) = \mu_M[0,x)$. By \cite[Lemma 3]{Figueira.Nies:13}, for each dyadic rational $p$, ${\sf cdf}_M(p)$ is a dyadic rational that can be computed from $p$ in polynomial time. Since $ M$ has the savings property, by \cite[Prop.\ 5]{Figueira.Nies:13}, ${\sf cdf}_M$ satisfies the `almost Lipschitz condition': there is $ \epsilon>0$ such that for every $x,y\in[0,1]$, if $y-x\leq\epsilon$ then $$ {\sf cdf}_M(y)-{\sf cdf}_M(x) = O(-(y-x)\cdot\log(y-x)). $$ This implies that $f={\sf cdf}_M$ is polynomial time computable: Suppose we are given a special Cauchy name $(p_i)\sN i$ for a real $z$. We know that $|z- p_{n+ \log n}| = O(\tp{-n-\log n})$. So by the almost Lipschitz condition, we have $|f(z)- f(p_{n + \log n})| = O(\tp{-n})$.
So a TM can determine in polynomial time from the first $n + \log n$ symbols of the special Cauchy name for $z$ the first $n$ symbols of a special Cauchy name for $f(z)$. \n $\RA:$\ Since $f$ is polynomial time computable, all the martingales involved in the proof of Theorem~\ref{thm:CRd dyadic diff} are computable in polynomial time. The usual proof of Doob martingale convergence can be turned into a polynomial time construction, and hence shows that any polynomial time martingale converges on every polynomial time random real. Thus we have $\utilde D_2 f(z)= \widetilde D_2 f(z) < \infty$. Furthermore, by the base invariance of polynomial time randomness~\cite[Thm.\ 4]{Figueira.Nies:13}, if $z$ is polynomial time random then so is $z-1/3$. So $\widetilde D_2 f(z) = \widetilde Df(z)$ and $\utilde D f(z) = \utilde D_2f(z)$ by the argument given above. \end{proof} \subsection{Interval c.e.\ functions} \subsubsection{Background} We quote from \cite{Bienvenu.Greenberg.ea:OWpreprint}. Let $g\colon [0,1] \rightarrow {\mathbb{R}}$. For $0 \le x< y \le 1$ define the \emph{variation} of $g$ in $[x,y]$ by $$V(g,[x,y]) = \sup \left\{\sum_{i=1}^{n-1} \bigl| g(t_{i+1}) - g(t_i)\bigr| : x \le t_1 \le t_2 \le \ldots \le t_n \le y\right\}.$$ The function $g$ is of bounded variation if $V(g,[0,1])$ is finite. If $g$ is a continuous function of bounded variation then the function $f(x) = V(g, [0,x])$ is also continuous. If $g$ is computable then the function $f(x) = V(g, [0,x])$ is lower semicomputable (but may fail to be computable). A further property of this ``variation function'' comes from the observation that $V(g,[x,y]) + V(g, [y,z]) = V (g, [x,z])$ for $x< y< z$ (see \cite[Prop.\ 5.2.2]{Bogachev.vol1:07}).
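The additivity of the variation function noted above can be checked numerically. Below is a minimal Python sketch (our own illustration, not from the quoted source; the helper `variation` and the choice $g = \sin$ are ours) that approximates $V(g,[x,y])$ by a sum over a uniform partition and tests $V(g,[x,y]) + V(g,[y,z]) = V(g,[x,z])$.

```python
import math

def variation(g, x, y, n=1000):
    """Approximate V(g, [x, y]): sum of |g(t_{i+1}) - g(t_i)| over a
    uniform partition t_0 = x < t_1 < ... < t_n = y.  The supremum over
    all partitions is approached from below as the grid is refined."""
    ts = [x + (y - x) * i / n for i in range(n + 1)]
    return sum(abs(g(ts[i + 1]) - g(ts[i])) for i in range(n))

g = math.sin
v_left  = variation(g, 0.0, 1.0)   # sin is monotone here: grid sum is exact
v_right = variation(g, 1.0, 2.0)   # one turning point at pi/2
v_full  = variation(g, 0.0, 2.0)
# additivity V(g,[x,y]) + V(g,[y,z]) = V(g,[x,z]), up to grid error
assert abs(v_left + v_right - v_full) < 1e-4
# the true variation of sin on [0,2] is (1 - 0) + (1 - sin 2) = 2 - sin 2
assert abs(v_full - (2 - math.sin(2))) < 1e-3
```

On monotone stretches the grid sum telescopes and is exact for any partition; only an interval straddling a turning point contributes an $O(h^2)$ error, which is why the coarse grid already matches the closed form.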
Identifying the variations of computable functions, Freer, Kjos-Hanssen, Nies and Stephan \cite{Freer.Kjos.ea:nd} studied a class of monotone, continuous, lower semicomputable functions which they called \emph{interval-c.e.} \begin{definition} \label{def:intervalce} A non-decreasing, lower semicontinuous function $f\colon [0,1]\to {\mathbb{R}}$ is \emph{interval-c.e.}\ if $f(0)=0$, and $f(y)-f(x)$ is a left-c.e.\ real, uniformly in rationals $x<y$. \end{definition} Thus, the variation function of each computable function of bounded variation is interval-c.e. Freer et al.\ \cite{Freer.Kjos.ea:nd}, together with Rute, showed that conversely, every continuous interval-c.e.\ function is the variation of a computable function. (End quote.) Note that the better term would be \emph{interval-left-c.e.} There is also a dual concept, being \emph{interval right-c.e.}, where $f(y)-f(x)$ is uniformly a right-c.e.\ real. For instance, the function $f(x) = \mathbf{\lambda} ([0,x] \cap \+ P)$ for an effectively closed class $\+ P$ is interval right-c.e. There is a curious break of symmetry: the variations of computable functions are the continuous interval \emph{left}-c.e.\ functions vanishing at~$0$. This seems to say the left-c.e.\ version is the cooler one. (We note that either class is closed under the `double mirror' transformation: if $f$ is interval left-c.e.\ [right-c.e.] then so is $\hat f(x)= 1- f(1-x)$. The slopes satisfy $S_{\hat f}(x,y)= S_f(1-y,1-x)$.) \subsubsection{Interval (left)-c.e.\ functions: upper dyadic equals upper full derivative for non-porosity points} \begin{proposition}\label{pro:interval c.e.} Let $f \colon \, [0,1] \to {\mathbb{R}}$ be interval-c.e. Then $\widetilde D_2 f(z) = \widetilde Df(z)$ for each non-porosity point $z$. \end{proposition} \begin{proof} Assume $\widetilde D_2 f(z) < \widetilde Df(z)$. Since $f$ is interval c.e., we can view $S_f(\sigma)$ as a left-c.e.\ martingale.
In particular, the class $\+ C$ defined in (\ref{eqn: def C porous}) in Lemma~\ref{lem:classic diff porous} is effectively closed. By that lemma, $\+ C$ is porous at $z$, which contradicts the assumption that $z$ is a non-porosity point. \end{proof} \subsubsection{Dual fact for interval right-c.e.\ functions} \begin{remark} \label{rem:right-c.e.} If $f$ is interval \emph{right}-c.e.\ we can apply the dual Lemma~\ref{lem:classic diff porous 2} to conclude that $\utilde D f(z) = \utilde D_2f(z)$ for each non-porosity point $z$. For instance, let $f$ be the Lipschitz function given by $f(x) = \mathbf{\lambda} ([0,x] \cap \+ P)$ for an effectively closed class $\+ P$. Then we may conclude that the (lower) dyadic density of $\+ P$ at a non-porosity point $x$ coincides with the (lower) full density, thereby obtaining a strengthening of Proposition~\ref{prop:denseporous}. \end{remark} \subsubsection{Interval c.e.\ functions: dyadic equals full derivative for reals at which all left-c.e.\ martingales converge} Consider a real $z \in [0,1] - {\mathbb{Q}}$. If a martingale $M$ converges to a finite value at the binary expansion of $z$, we write $M(z)$ for this finite value. We say that $z$ is a \emph{convergence point for c.e.\ martingales} if $M(z)$ exists for each c.e.\ martingale $M$. Convergence points for c.e.\ martingales coincide with the ML-random (dyadic) density-one points. This was obtained in 2012 work of a group in Madison consisting of Uri Andrews, Mingzhong Cai, David Diamondstone, Steffen Lempp, and Joseph S.\ Miller. The implication \begin{center} martingale convergence $\Rightarrow$ density one \end{center} was already pointed out in \cite{Bienvenu.Greenberg.ea:OWpreprint}. The hard implication is \begin{center} dyadic density one $\Rightarrow$ martingale convergence. \end{center} See Theorem~\ref{thm:Madison} below. \vspace{6pt} \begin{theorem}\label{thm:interval left-c.e. MG and derivative} Let $f \colon \, [0,1] \to {\mathbb{R}}$ be interval-c.e. Let $z$ be a convergence point for c.e.\ martingales.
Then $f'(z)$ exists. \end{theorem} \begin{proof} We may assume $z> 1/2$, else we work with $f(x+1/2)$ instead of $f$. The real $z$ is a dyadic density-one point, hence a (full) density-one point by the Khan-Miller Theorem~\ref{thm:ddensity vs full density}. Then $z-1/3$ is also a ML-random density-one point, so using the work of the Madison group discussed in Section~\ref{s:Andrews}, $z-1/3$ is also a c.e.\ martingale convergence point. In particular, both $z$ and $z-1/3$ are non-porosity points. For a nondecreasing function $g\colon [0,1] \to {\mathbb{R}}$ recall that $M_g $ is the (dyadic) martingale associated with the slope $S_g$ evaluated at intervals of the form $[ i \tp{-n}, (i+1) \tp{-n}]$. Thus, \begin{center} $M_g(\sigma) = S_g(0. \sigma, 0. \sigma+ \tp{-\ensuremath{|\sigma|}})$. \end{center} Let $M= M_f$. Note that $M$ converges on $z$ by hypothesis. Thus $\utilde D_2f(z)= \widetilde D_2f(z) =M(z)$. By Proposition~\ref{pro:interval c.e.}, we have $\widetilde D_2f(z) = \widetilde Df(z)$. It remains to show that \begin{equation} \label{eqn: lower dyadic} \utilde Df(z)= \utilde D_2f(z). \end{equation} Since $f$ is nondecreasing, this will establish that $f'(z)$ exists. Let $\widehat f(x) = f(x+1/3)$, and let $M' = M_{\widehat f}$. We now show that $M'$ converges on $z-1/3$, and the limits coincide. \begin{claim} $M(z) = M'(z-1/3)$. \end{claim} As pointed out above, $z-1/3$ is also a convergence point for c.e.\ martingales. So $M'$ converges on $z-1/3$. If $M(z) < M'(z-1/3)$ then $\widetilde D_2 f(z) < \widetilde Df(z)$. However $z$ is a non-porosity point, so this contradicts Proposition~\ref{pro:interval c.e.}. If $M'(z-1/3) < M(z) $ we argue similarly using that $z-1/3$ is a non-porosity point. This establishes the claim. Hooray! To show (\ref{eqn: lower dyadic}), we extend the method in the proof of Lemma~\ref{lem:classic diff porous 2}, taking into account both dyadic intervals and dyadic intervals shifted by $1/3$.
Recall that $\utilde D_2f(z) = M(z)$. Assume for a contradiction that (\ref{eqn: lower dyadic}) fails. Then we can choose rationals $p,q$ such that \[\utilde D f(z) < p < q < M(z) = M'(z-1/3).\] Let $k\in {\mathbb{N}}$ be such that $p< q(1- \tp{-k+1})$. Let $u,v$ be rationals such that \begin{center} $ q< u < M(z) <v$ and $v-u\le \tp{-k-3}(u-q)$. \end{center} Let $n^* \in {\mathbb{N}}$ be such that for each $n \ge n^*$ and any interval $A\in \+ D_n \cup \+ D'_n$ with $z \in A$, we have $u \le S_f(A) \le v$. Let \begin{eqnarray*} \+ E &=& \{ X \in \seqcantor \colon \, \forall n \ge n^* \, M(X\uhr n)\le v \}\\ \+ E' &=& \{ W \in \seqcantor \colon \, \forall n \ge n^* \, M'(W\uhr n) \le v \} \end{eqnarray*} Since $f$ is interval c.e., these classes are $\PI{1}$. In Cantor space we can apply notions of porosity via the usual transfer to $[0,1]$ given by the binary expansion. Let $0.Z$ be as usual the binary expansion of $z$. By the choice of $n^*$ we have $Z \in \+ E $. Let $0.Y$ be the binary expansion of $z-1/3$. We have $Y \in \+ E'$. We will show that $\+ E $ is porous at~$Z$, or $\+ E'$ is porous at $Y$. Consider an interval $I \ni z$ of positive length $\le \tp{-n^*-3}$ such that $S_f(I) \le p$. Let $n$ be such that $\tp{-n+1} > |I| \ge \tp{-n}$. Let $a_0$ [$b_0$] be least of the form $w \tp{-n-k}$ [$w \tp{-n-k} +1/3$], where $w \in {\mathbb{Z}}$, such that $a_0 \ge \min (I)$ [$b_0 \ge \min (I)$]. Let $a_i = a_0 + i \tp{-n-k}$ and $b_j = b_0 + j \tp{-n-k}$. Let $r,s$ be greatest such that $a_r \le \max (I)$ and $b_s \le \max(I)$. As before, since $f$ is nondecreasing and $a_r - a_0 \ge |I| - \tp{-n-k+1} \ge (1- \tp{-k+1}) |I|$, we have $S_f (I) \ge S_f(a_0, a_r) (1- \tp{-k+1} )$, and therefore $S_f(a_0,a_r)< q$. Then there is an $i<r$ such that $S_f(a_i,a_{i+1})< q$. Similarly, there is $j< s$ such that $S_f(b_j,b_{j+1})< q$. \begin{claim} One of the following is true. \begin{itemize} \item[(i)] $z, a_i,a_{i+1} $ are all contained in a single interval taken from $\+ D_{n-3}$.
\item[(ii)] $z, b_j,b_{j+1} $ are all contained in a single interval taken from $\+ D'_{n-3}$. \end{itemize} \end{claim} For suppose that (i) fails. Then there is an endpoint of an $A\in \+ D_{n-3}$ (that is, a number of the form $w\tp{-n+3}$ with $w\in {\mathbb{Z}}$) between $\min (z, a_i) $ and $\max (z, a_{i+1})$. Note that $\min (z, a_i) $ and $\max (z, a_{i+1})$ are in $I$. By Fact~\ref{fact:geom} and since $|I| < \tp{-n+1}$, there can be no endpoint of an interval $A' \in \+ D'_{n-3}$ in $I$. Then, since $b_j, b_{j+1} \in I $, (ii) holds. This establishes the claim. Suppose $I$ is an interval as above and $\tp{-n+1} > |I| \ge \tp{-n}$, where $n \ge n^*+3$. Let $\eta = Z \uhr {n-3}$ and $\eta' = Y \uhr {n-3}$. If (i) holds for this $I$ then there is a string $\alpha$ of length $k+3$ (where $[\eta \alpha]=[a_i, a_{i+1}]$) such that $M( \eta \alpha) < q$. So by the choice of $q< u< v$ and since $M(\eta) \ge u$ there is $\beta$ of length $k+3$ such that $M(\eta \beta)> v$. This yields a hole in $\+ E$, large and near $Z$ on the scale of $I$, which is required for porosity of $\+ E$ at $Z$. Similarly, if (ii) holds for this $I$, then there is a string $\alpha$ of length $k+3$ (where $[\eta' \alpha]+1/3=[b_j, b_{j+1}]$) such that $M'(\eta' \alpha) < q$. So by the choice of $q< u< v$ and since $M'(\eta')\ge u$ there is a string $\beta$ of length $k+3$ such that $M'(\eta' \beta)> v$. This yields a hole large and near $Y$ on the scale of $I$ required for porosity of $\+ E'$ at $Y$. Thus, if case (i) applies for arbitrarily short intervals $I$, then $\+ E$ is porous at $Z$, whence $z$ is a porosity point. Otherwise (ii) applies for all intervals below a certain length. Then $\+ E'$ is porous at $Y$, whence $z-1/3$ is a porosity point. Either way we obtain a contradiction, since both $z$ and $z-1/3$ were shown to be non-porosity points. \end{proof} \subsubsection{Interval c.e.\ functions: characterizing ML-randomness} \ \noindent Nies and Stephan have shown that there is an interval c.e.\ function~$h$ whose points of differentiability coincide with the ML-random reals.
The same is true for the convergence points of the left-c.e.\ martingale $S_h(\sigma)$. All this is obtained from the following stronger statement, which also strengthens \cite[Cor.\ 6.6]{Bienvenu.Greenberg.ea:nd}: \begin{theorem}\label{thm:interval c.e. char ML-random} There is a continuous interval c.e.\ function $h$ such that $h'(x)$ exists for each ML-random real $x$, and $\utilde Dh(x) = \infty$ whenever $x$ is not ML-random. \end{theorem} \begin{proof} Brattka et al.\ \cite[Lemma 6.5]{Brattka.Miller.ea:nd} show that there is a computable function $f$ of bounded variation (in fact, absolutely continuous) such that $f'(z)$ exists only for Martin-L{\"o}f{} random reals $z$. Let \begin{center} $h(x) = V(f,[0,x])$. \end{center} To see that $h$ is as required, we have to look at the construction of $f$, which is actually given in \cite[proof of Thm.\ 6.1]{Brattka.Miller.ea:nd}, a result on weak 2-randomness. The function $f$ is a superposition of steeper and steeper sawtooth functions based on intervals $C_{m,i}$ of length rapidly decreasing in $m$, which are enumerated into a universal ML test $\langle \+ G_m \rangle$. If $x$ is ML-random then $x \not \in \+ G_m$ for almost every $m$; hence for all such $m$ and each $i$ we have $x \not \in C_{m,i}$. This means that $h$ is polygonal in a sufficiently small neighbourhood of $x$, and $x$ is not a break point. So $h'(x)$ exists. On the other hand, if $x$ is not ML-random then the change in variation due to the infinite superposition of sawteeth above $x$ adds up, and so $\utilde Dh(x) = \infty$. (Save the amazon.) For detail see the hopefully forthcoming paper~\cite{Greenberg.Hoelzl.ea:nd}. \end{proof} \section{Khan: A dyadic density-one point that is not full density-one} (Submitted by Mushfeq Khan, with acknowledgements to Joe Miller for many helpful discussions.) It seems intuitively likely that being full density-one is a stronger property than being dyadic density-one (see Section~\ref{s:diff and porous} for definitions).
After all, in the case of the latter, we are severely limiting the types of intervals with which we can witness drops in density. In this section, we construct a dyadic density-one point which is not a full density-one point. We use the symbol $\mu$ to refer exclusively to the standard Lebesgue measure on Cantor space. If $\sigma$ is a string, and $C$ a measurable set, the shorthand $\mu_\sigma(C)$ denotes the relative measure of $C$ in the cone above $\sigma$. The following lemma, which is a critical part of the argument, is a special case of the Kolmogorov inequality for martingales (see, for example, \cite[7.1.9]{Nies:book}, and consider the martingale $S(\sigma) = \mu_\sigma(W)$). \begin{lemma}\label{vicinity_cantor_space} Suppose $W \subseteq 2^\omega$ is open, and let $\varepsilon$ be such that $\mu(W) \le \varepsilon \le 1$. Let $U_\varepsilon$ denote the set $\dset{X \in 2^\omega}{ \mu_\rho(W) \ge \varepsilon \textrm{ for some $\rho \prec X$}}$; we call $U_\varepsilon$ the \emph{$\varepsilon$-vicinity} of $W$. Then $\mu(U_\varepsilon) \le \mu(W)/\varepsilon$. \end{lemma} \begin{theorem}\label{weak_vs_full} There is a dyadic density-one point that is not a density-one point. \end{theorem} \begin{proof} We build the desired real $Y$ by computable approximation. At each stage $s$ of the construction, we have a sequence of finite strings $\sigma_{0, s} \prec \sigma_{1, s} \prec \cdots $ approximating $Y$. At the same time, we build a $\Sigma^0_1$ class $B$ whose complement witnesses the fact that $Y$ is not a density-one point. Let $W_e$ denote the upward closure of the $e$-th c.e.\ set. Each c.e.\ set represents a requirement that needs to be met by $Y$. In other words, for each $e$, if $Y$ is not in $[W_e]$, we require that $\lim_{\rho \prec Y} \mu_\rho([W_e]) = 0$. Priorities are assigned to c.e.\ sets in the usual manner, with $W_j$ having higher priority than $W_i$ for any $i > j$.
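For clopen $W$ the lemma can be verified by exact computation. The following Python sketch (our own illustration; the names `mu_rel` and `check_kolmogorov` are not from the text) represents an open set by its generating strings of a fixed length $n$ and checks $\mu(U_\varepsilon) \le \mu(W)/\varepsilon$ by brute force.

```python
from fractions import Fraction
from itertools import product

def mu_rel(sigma, W, n):
    """Relative measure of the clopen set [W] (W: strings of length n)
    in the cone above sigma, computed exactly."""
    ext = [w for w in W if w.startswith(sigma)]
    return Fraction(len(ext), 2 ** (n - len(sigma)))

def check_kolmogorov(W, n, eps):
    """Return (mu(W), mu(U_eps)) and assert the bound of the lemma.
    Checking prefixes up to length n suffices for W generated at level n."""
    covered = 0
    for bits in product("01", repeat=n):
        tau = "".join(bits)
        # tau lies in U_eps iff some prefix rho of tau has mu_rho(W) >= eps
        if any(mu_rel(tau[:i], W, n) >= eps for i in range(n + 1)):
            covered += 1
    muW, muU = Fraction(len(W), 2 ** n), Fraction(covered, 2 ** n)
    assert muU <= muW / eps      # Kolmogorov inequality
    return muW, muU

# example: mu(W) = 3/16, eps = 1/2, and the bound 3/8 is attained
check_kolmogorov({"0000", "0001", "0110"}, 4, Fraction(1, 2))
```

In the example the cones above $00$ and $011$ already have relative measure $1/2$, so $U_{1/2}$ is exactly $[00] \cup [011]$ of measure $3/8 = \mu(W)/\varepsilon$, showing the inequality can be sharp.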
We make use of the following shorthand: Let $C$ be a measurable set and $\tau$ and $\tau'$ two strings such that $\tau \prec \tau'$. If for every $\rho$ such that $\tau \preceq \rho \prec \tau'$, $\mu_\rho(C) < \alpha$, then we say that \emph{between $\tau$ and $\tau'$, $\mu(C) < \alpha$}. At any stage $s$, for each $k$, we will be working above $\sigma_{k, s}$ to define $\sigma_{k+1, s}$. We have two goals in mind: Firstly, for any $e < k$ such that $\sigma_{k, s}$ is not already a member of $W_e$, we must keep the measure of $W_e$ between $\sigma_{k, s}$ and $\sigma_{k+1, s}$ below a certain threshold. If the threshold is exceeded, say at a string $\rho$ between $\sigma_{k, s}$ and $\sigma_{k + 1, s}$, we shall reroute $\sigma_{k+1}$ above $\rho$ to enter $W_e$. Secondly, we must ensure that there is an interval $I \subseteq [\sigma_{k, s}]$ such that $[\sigma_{k + 1, s}] \subseteq I$ and $\mu_I(B) > 1/4$. Both goals must be satisfied while keeping $Y$ from entering $[B]$. Globally, we must maintain that between $\sigma_{k, s}$ and $\sigma_{k + 1, s}$, the measure of $B$ remains \emph{strictly below} a threshold $\beta(k, s)$, which is updated each time we act above $\sigma_{k, s}$ by rerouting $\sigma_{k + 1}$. The construction begins by setting $\sigma_{0, 0}$ equal to the empty string. \noindent \fbox{\emph{Process above $\sigma_{k, s}$.}} When we first start working above $\sigma_{k, s}$, say at stage $s_0$, we set $\beta(k, s_0) = \beta^*(k)$ (see below for how $\beta^*(k)$ is defined). If $k > 0$, then we start by choosing a $\nu \succ \sigma_{k, s_0}$ long enough so that between $\sigma_{k - 1, s_0}$ and $\sigma_{k, s_0}$, $\mu(B \cup [\nu]) < \beta(k-1, s_0)$. We let $\sigma_{k+1, s_0} = \nu 1 0^j$ and enumerate the string $\nu 0 1^j$ into $B$, where $j$ is chosen large enough so that the measure of $[B]$ between $\sigma_{k, s_0}$ and $\sigma_{k+1, s_0}$ remains below $\beta^*(k)$. If $k = 0$, $\nu$ can be chosen to be the empty string.
In a subsequent stage $s$, suppose that $C_0, ..., C_l$ are those among the first $k$ c.e.\ sets that $\sigma_{k, s}$ is not already a member of, in order of descending priority. Now if for some $\rho$ between $\sigma_{k, s}$ and $\sigma_{k+1, s}$ and some $j \le l$, $\mu_\rho([C_j])$ exceeds $\sqrt{\beta(k, s)}$ and no action has yet been taken for a higher priority $C_{j'}$, then we act by rerouting $\sigma_{k+1, s}$ above $\rho$. Let $\nu \succeq \rho$ be a string in $C_j$ long enough so that: \begin{enumerate} \item Between $\rho$ and $\nu$, $\mu([B]) < \sqrt{\beta(k, s)}$. \item $[B] \cap [\nu] = \emptyset$. \item If $k > 0$, then between $\sigma_{k-1, s}$ and $\sigma_{k, s}$, $\mu(B \cup [\nu])$ must be strictly less than $\beta(k-1, s)$. \end{enumerate} Let $m$ be large enough so that between $\sigma_{k, s}$ and $\nu$, $\mu(B \cup [\nu 0 1^m])$ remains strictly below $\sqrt{\beta(k, s)}$. We set $\sigma_{k+1, s + 1} = \nu 1 0^m$ and enumerate $\nu 0 1^m$ into $B$. Finally, we set $\beta(k, s+1) = \sqrt{\beta(k, s)}$. \noindent \fbox{\emph{Choosing $\beta^*(k)$.}} We move $\sigma_{k+1, s+1}$ into $C_j$ when the following is seen to occur at some stage $s$: For some $\rho$ between $\sigma_{k, s}$ and $\sigma_{k+1, s}$, $\mu_\rho([C_j])$ exceeds the measure of the $\sqrt{\beta(k, s)}$-vicinity of $[B]$ above $\rho$, i.e., if $\mu_\rho([C_j]) > \sqrt{\beta(k, s)} = \beta(k, s)/\sqrt{\beta(k, s)} > \mu_\rho(B)/\sqrt{\beta(k, s)}$. If this does not occur, we wish to limit the measure of $C_j$ to $2^{-k}$ between $\sigma_{k, s}$ and $\sigma_{k+1, s}$. Each time we act above $\sigma_{k, s}$, the threshold $\beta(k, s+1)$ is the square root of $\beta(k, s)$, so we require that $\beta^*(k)$ satisfy \[(\beta^*(k))^{1/2^{k+1}} \le 2^{-k}.\] \noindent \textbf{Verification.} \begin{claim}\label{weak_vs_full_meas_preservation} Unless we act immediately above $\sigma_{k, s}$, the measure of $B$ between $\sigma_{k, s}$ and $\sigma_{k+1, s}$ remains strictly below $\beta(k, s)$.
\end{claim} \begin{proof} Condition (2) above ensures that if $\sigma_{k, s}$ is redefined at stage $s$ due to an action above $\sigma_{l, s}$ for some $l < k$, then $\mu(B \cap [\sigma_{k, s}]) = 0$. If we act above $\sigma_{k+1, s}$, then condition (3) ensures that $\mu(B)$ remains below $\beta(k, s)$ between $\sigma_{k, s}$ and $\sigma_{k+1, s}$. Note that there is a string $\nu$ such that $\sigma_{k + 1, s} \prec \nu \prec \sigma_{k+2, s}$ and $\mu(B \cup [\nu]) < \beta(k, s)$ between $\sigma_{k, s}$ and $\sigma_{k+1, s}$. So if we act above $\sigma_{l, s}$ for some $l > k + 1$, then we add some measure to $B$, but this measure is contained entirely in $[\nu]$. \end{proof} \begin{claim} We can act above $\sigma_{k, s}$ while satisfying requirements (1) through (3) above. \end{claim} \begin{proof} By Claim~\ref{weak_vs_full_meas_preservation}, $\mu(B) < \beta(k, s)$ between $\sigma_{k, s}$ and $\sigma_{k+1, s}$. So if at stage $s$, for some $\rho$ between $\sigma_{k, s}$ and $\sigma_{k+1, s}$, $\mu_\rho([C_j])$ exceeds $\sqrt{\beta(k, s)}$, then by Lemma~\ref{vicinity_cantor_space} there is an $X \in [C_j]$ extending $\rho$ such that for every $\alpha$ such that $\rho \preceq \alpha \prec X$, $\mu_\alpha(B) < \sqrt{\beta(k, s)}$. Thus there are arbitrarily long strings extending $\rho$ satisfying condition (1). Conditions (2) and (3) are met by simply choosing a long enough such string. \end{proof} \begin{claim} For each $k \in \omega$, $\sigma_k = \lim_s \sigma_{k, s}$ exists, and $Y = \bigcup_{k} \sigma_k$ is total. \end{claim} \begin{proof} Assume that $\sigma_{k, s}$ has stabilized by stage $s$. Then $\sigma_{k+1}$ is redefined above $\sigma_{k, s}$ at most $k$ times. \end{proof} \begin{claim} $Y$ is a dyadic density-one point. \end{claim} \begin{proof} Suppose that $Y \notin [W_e]$. Let $k$ be large enough so that $k > e$ and for all $e' < e$, if $Y \in [W_{e'}]$, then $\sigma_k \in W_{e'}$.
For any $k' > k$, let $s$ be large enough so that $\sigma_{k', s}$ has stabilized. By our choice of $k$, we never act above $\sigma_{k', s}$ for the sake of $W_{e'}$ for any $e' < e$, and by the assumption that $Y \notin [W_e]$, we never act for the sake of $W_e$. Let $t > s$ be such that $\sigma_{k'+1, t}$ has stabilized. For all $t' > t$, between $\sigma_{k', t'}$ and $\sigma_{k' + 1, t'}$, $\mu(W_e)$ does not exceed $\sqrt{\beta(k', t')}$, which is always bounded by $2^{-k'}$. \end{proof} \begin{claim} $Y$ is not a density-one point. \end{claim} \begin{proof} Let $\sigma_k$ and $\sigma_{k+1}$ be the final values of $\sigma_{k, s}$ and $\sigma_{k+1, s}$ respectively. Then by construction there is a string $\nu$ such that $\sigma_k \prec \nu \prec \sigma_{k+1} \prec Y$, and $\sigma_{k+1} = \nu 1 0^j$ for some $j$ and $\nu 0 1^j \in B$. Let $l = |\nu| + j + 1$ and let $I$ be the interval $(0.\nu 1 - 2^{-l}, 0.\nu 1 + 2^{-l})$. Since $Y$ is a dyadic density-one point, $Y$ is not a rational and so $Y \in [0.\nu 1, 0.\nu 1 + 2^{-l}) \subset I$, while the left half of $I$ belongs entirely to $B$. \end{proof} This completes the proof of Theorem~\ref{weak_vs_full}. We note that the construction actually ensures that $\seqcantor -{B}$ is porous at $Y$. \end{proof} \section{Nies: Upper density and partial computable randomness} The \emph{upper} (Cantor-space) density of a set $\+ C \subseteq \seqcantor$ at a point~$Z$ is: $$\overline \varrho_2(\+ C \mid Z):=\limsup_{\sigma \prec Z\, \land \, \ensuremath{|\sigma|} \rightarrow \infty} \mathbf{\lambda}_\sigma(\+ C), $$ where $\mathbf{\lambda}_\sigma(\+ C)$ denotes the relative measure of $\+ C$ in $[\sigma]$; in terms of the unit interval, this is the limsup of $\lambda(I \cap \+ C)/|I|$ where $I$ ranges over basic dyadic intervals containing the real $z = 0.Z$. Bienvenu et al.\ \cite[Prop.\ 5.4]{Bienvenu.Greenberg.ea:preprint} showed that for any effectively closed set $\+ P$ and ML-random $Z \in \+ P$, we have $\overline \varrho_2(\+ P\mid Z) =1$; this implies of course that the upper density in ${\mathbb{R}}$ also equals $1$. The following shows that ML-randomness was actually too strong an assumption.
The right level seems to be given by the ``ugly duckling'' notion of partial computable randomness. See \cite[Ch.\ 7]{Nies:book} for background. If the measure $\mathbf{\lambda} \+P$ is a computable real, then in fact computable randomness of $Z$ suffices. In that case the full dyadic density is $1$. \begin{proposition} \label{prop:PC random upper density} Let $\+ P \subseteq \seqcantor$ be effectively closed. Let $Z \in \+ P$ be partial computably random. Then $\overline \varrho_2(\+ P \mid Z) =1$. \end{proposition} \begin{proof} Suppose there is $q < 1$ and $n^*$ such that $\mathbf{\lambda}_\eta(\+ P) < q$ for each $\eta \prec Z$ with $|\eta| \ge n^*$. We will define a partial computable martingale $M$ that succeeds on $Z$. Let $M(\eta)=1 $ for all strings $\eta$ with $|\eta| \le n^*$. Now suppose that $M(\eta)$ has been defined for a string $\eta$ of length at least $n^*$, but $M$ is as yet undefined on extensions of $\eta$. Search for $t > |\eta|$ such that \[ \tp{-(t -|\eta|)} \# \{\tau \succeq \eta \mid \, |\tau| = t \, \land \, [\tau] \cap \+ P _t = \emptyset\} > 1- q. \] If $t$ is found, bet all the capital existing at $\eta$ on the strings $\sigma \succ \eta$ with $\ensuremath{|\sigma|} = t $ that are not $\tau$'s as above, thereby multiplying the capital by $1/q$. Now repeat with all such strings $\sigma \succ \eta$ of length $t$. The formal definition of $M$ is as follows (supplied by Jing Zhang). For all $|\tau|\leq n^*$, $M(\tau)=1$. Next we define $M$ inductively on $2^{<\omega}$. Suppose $M$ has been defined on $\alpha$ with $M(\alpha)=\beta$, and let $t>|\alpha|$ be as found in the search above. Let $S=\{\tau\succcurlyeq \alpha : |\tau| = t \,\land\, [\tau]\cap \+P_t = \emptyset\}$ and $r=|S|$, so that $r>2^{t-|\alpha|}(1-q)$. For each $\sigma\succcurlyeq \alpha$ with $|\sigma| = t$ and $\sigma \notin S$, define $M(\sigma)=\frac{1}{q}\beta$; let $\tau^*\in S$ be the leftmost element and define \[M(\tau^*)=2^{t-|\alpha|}\beta - \frac{1}{q}\beta \, (2^{t-|\alpha|}-r),\] and let $M(\tau)=0$ for the remaining $\tau \in S$. For any $\sigma\succcurlyeq \alpha$ with $|\sigma|<t$, define $M$ accordingly to make $M$ a martingale.
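The fairness of the capital redistribution in the formal definition of $M$ can be checked concretely. The Python sketch below (our own illustration; `bet_step` and its parameters are not from the text) performs one betting step with exact rationals and verifies that the average of $M$ over the level-$t$ extensions of $\alpha$ equals $M(\alpha)$.

```python
from fractions import Fraction

def bet_step(beta, ext_len, r, q):
    """One step of the strategy: 2**ext_len extensions of alpha, of which
    r are disjoint from P_t ('dead'), with r > (1-q) * 2**ext_len.
    Each live extension gets beta/q; the leftmost dead extension absorbs
    the remainder, and the other dead extensions get 0."""
    n = 2 ** ext_len
    assert r > (1 - q) * n
    live = n - r                        # extensions still meeting P_t
    hi = beta / q
    tau_star = n * beta - live * hi     # nonnegative since live < q * n
    assert tau_star >= 0
    values = [hi] * live + [tau_star] + [Fraction(0)] * (r - 1)
    assert sum(values) == n * beta      # martingale fairness
    return values

# example: q = 1/2, 8 extensions, 7 of them dead; the live one carries 2
vals = bet_step(Fraction(1), 3, 7, Fraction(1, 2))
```

Along $Z \in \+ P$ the current string is always live, so each completed step multiplies the capital by $1/q > 1$, matching the verification that follows.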
Next is the verification. First we check that $M(\tau)$ is defined for every $\tau\prec Z$. We verify this inductively. Suppose $M(\eta)$ is already defined for some $\eta \prec Z$ with $|\eta|\ge n^*$. By assumption, $\lambda_\eta(\bar{\+P})>1-q$, where $\bar{\+P} = \seqcantor - \+P$. Therefore there exists a stage $t\in \omega$ such that $\lambda_{\eta}(\bar{\+P}_t)>1-q$. Thus we have $\sum_{\tau\in 2^t,\, \eta \prec \tau}\lambda_{\tau}(\bar{\+P}_t)=2^{t-|\eta|} \lambda_\eta(\bar{\+P}_t)>2^{t-|\eta|}(1-q)$; here we use the fact that for any measurable class $Q\subseteq 2^\omega$, the function $\sigma \mapsto \lambda_\sigma(Q)$ is a martingale. Since $\+P_t$ is determined by level $t$, the value $\lambda_\tau(\bar{\+P}_t)$ is either $0$ or $1$ for $|\tau|=t$, so more than $2^{t-|\eta|}(1-q)$ of the length-$t$ extensions $\tau$ of $\eta$ satisfy $[\tau]\cap \+P_t=\emptyset$. Therefore we have found a $t$ as in the search, and $M$ becomes defined on a proper extension of $\eta$ along $Z$; note that $[Z\uhr t]\cap \+P_t\neq\emptyset$ since $Z\in \+P$. By induction, $M$ is defined on all prefixes of $Z$, and $M$ succeeds on $Z$ since at each step of the definition along $Z$ the capital is multiplied by $\frac{1}{q}>1$. Note that \emph{all} strings $\sigma \prec Z$ of length $\ge n^*$ qualify as possible $\eta$'s for which $t$ exists. On the other hand, if $\eta$ is off $Z$ then there may be no such $t$, so $M$ can be partial. \end{proof} \begin{question} Is there a computably random $Z$ in some $\PI{1}$ class $\+ P$ so that $\overline \varrho_2(\+ P \mid Z) < 1$? \end{question} \begin{proposition} Let $\+ P \subseteq \seqcantor$ be effectively closed with $\mathbf{\lambda} \+ P$ computable. Let $Z \in \+ P$ be computably random. Then $\varrho_2(\+ P \mid Z) =1$. \end{proposition} \begin{proof} First we show $\overline \varrho_2(\+ P \mid Z) =1$. The easy, but not quite accurate, argument would be that in the construction above, before searching for $t$, we ask whether $\mathbf{\lambda}_\eta (\+ P) < q$; only then do we attempt to find $t$. This isn't quite right because ``$\mathbf{\lambda}_\eta (\+ P) < q$'' is merely $\SI 1$, even though $\mathbf{\lambda}_\eta (\+ P) $ is a computable real uniformly in $\eta$.
To amend this, fix $q'< q$ such that in fact $\mathbf{\lambda}_\eta(\+ P) < q'$ for each $\eta \prec Z$ with $|\eta| \ge n^*$. We ask simultaneously \begin{itemize} \item[(1)] whether $\mathbf{\lambda}_\eta (\+ P) > q'$; if the positive answer to this $\SI 1$ question turns up first, we don't bet on extensions of $\eta$; \item[(2)] whether $\mathbf{\lambda}_\eta (\+ P) < q$; if the positive answer to this $\SI 1$ question turns up first, we bet as before. \end{itemize} One of the queries must yield an answer. The computable martingale $\eta \mapsto \mathbf{\lambda}_\eta(\+ P)$ cannot oscillate along the computably random $Z$, so $\lim_n \mathbf{\lambda}_{Z \uhr n}(\+ P)$ exists; by the first part of the proof this limit is $1$. Thus the dyadic density $\varrho_2(\+ P \mid Z)$ is $1$. \end{proof} In fact, Schnorr randomness of $Z$ is sufficient as a hypothesis in the preceding proposition by deeper work of \cite{Pathak.Rojas.ea:12} and \cite{Freer.Kjos.ea:nd}. The characteristic function $1_{\+P}$ is $L_1$-computable because there is a sequence $\seq{1_{P_{g(n)}}}\sN n$, where $g$ is a computable function such that $\mathbf{\lambda} (P_{g(n)}-P) \le \tp{-n}$. Now use e.g.\ \cite[Theorem 3.15]{Pathak.Rojas.ea:12}. \section{Density-one points and Madison tests (written by Nies)} \label{s:Andrews} \newcommand{\mbox{\rm \textsf{wt}}}{\mbox{\rm \textsf{wt}}} \newcommand{\mbox{\rm \textsf{rk}}}{\mbox{\rm \textsf{rk}}} The following is 2012 work of a group at Madison, consisting of U.\ Andrews, M.\ Cai, D.\ Diamondstone, S.\ Lempp, and, last but not least, J.\ S.\ Miller. The writeup below, due to Nies, is based on discussions with Miller, and Miller's slides for his talks at the Buenos Aires Semester 2013. Technical details in the verifications have been added. No proof by the Madison group has appeared so far (June 2014). A martingale $L \colon 2^{ < \omega} \to {\mathbb{R}}^+_0$ is called \emph{left-c.e.} if $L(\sigma)$ is a left-c.e.\ real uniformly in $\sigma$. We focus on convergence of such a martingale along a real $Z$, which means that $\lim_n L(Z \uhr n)$ exists in ${\mathbb{R}}$.
Unlike the case of computable martingales, a left-c.e.\ martingale can be bounded along a ML-random real and still diverge on it. For instance, let $\+ U = [0, \Omega)$, and let $L(\sigma) = \mathbf{\lambda} (\+ U \mid [\sigma])$ (as a shorthand we use $\mathbf{\lambda}_\sigma(\+ U)$ for this conditional measure); then the left-c.e.\ martingale $L$ is bounded by 1 but diverges on $\Omega$ because $\Omega$ is Borel normal. \begin{theorem}[Andrews, Cai, Diamondstone, Lempp and Miller, 2012] \label{thm:Madison} The following are equivalent for a ML-random real $z \in [0,1]$. \begin{itemize} \item[(i)] $z$ is a dyadic density-one point. \item[(ii)] Every left-c.e.\ martingale converges along $Z$, where $0.Z$ is the binary expansion of $z$. \end{itemize} \end{theorem} Note that by Theorem~\ref{thm:ddensity vs full density}, $z$ is a full density-one point iff $z$ is a dyadic density-one point. A ML-random satisfying any of these equivalent conditions will be called \emph{density random}. \begin{proof} \noindent (ii) $\to$ (i) is \cite[Corollary 5.5]{Bienvenu.Greenberg.ea:preprint}. \noindent (i) $\to$ (ii). We can work within Cantor space because the dyadic density of a class $\+ P \subseteq [0,1]$ at $z$ is the same as the density of $\+ P$ viewed as a subclass of Cantor space at $Z$. We use the technical concept of ``Madison tests''. They are also called density tests, even though they actually seem to be motivated by oscillation of martingales. As a first step, Lemma \ref{lem: density to Madison} shows that if $Z \in \seqcantor$ is a ML-random dyadic density-one point, then $Z$ passes all Madison tests. As a second step, Lemma~\ref{Madison to MG convergence} shows that if $Z$ passes all Madison tests, then every left-c.e.\ martingale converges along $Z$. We will now introduce and motivate this technical test concept.
We define the weight of a set $X \subseteq \seqcantor$ as $${\mbox{\rm \textsf{wt}}} ( X) = \sum_{\sigma \in X} \tp{-\ensuremath{|\sigma|}}.$$ Let $\sigma^\prec = \{ \tau \in 2^{ < \omega} \colon \, \sigma \prec \tau \}$. \begin{definition} A \emph{Madison test} is a computable sequence $\seq {U_s}\sN s$ of computable subsets of $2^{ < \omega}$ such that $U_0 = \emptyset$, there is a constant $c$ such that for each stage $s$ we have $\mbox{\rm \textsf{wt}} (U_s) \le c$, and for all strings $\sigma, \tau$, \begin{itemize} \item[(a)] $\tau \in U_s - U_{s+1} \to \exists \sigma \prec \tau \, [ \sigma \in U_{s+1} - U_s]$ \item [(b)] $\mbox{\rm \textsf{wt}} (\sigma^\prec \cap U_s) > \tp{-\ensuremath{|\sigma|}} \to \sigma \in U_s$. \end{itemize} Note that by (a), $U(\sigma) := \lim_s U_s(\sigma)$ exists for each $\sigma$; in fact, $U_s(\sigma)$ changes at most $\tp{\ensuremath{|\sigma|}}$ times. We say that $Z$ \emph{fails} $\seq {U_s}\sN s$ if $Z \uhr n \in U$ for infinitely many $n$; otherwise $Z$ \emph{passes} $\seq {U_s}\sN s$. \end{definition} We show that $\mbox{\rm \textsf{wt}}(U_s)\leq \mbox{\rm \textsf{wt}}(U_{s+1})$, so that $\mbox{\rm \textsf{wt}}(U) = \sup_s \mbox{\rm \textsf{wt}}(U_s) < \infty $ is a left-c.e.\ real. Suppose that $\sigma$ is minimal under the prefix relation such that $\sigma\in U_{s+1}-U_s$. By (b) and since $\sigma\not \in U_{s}$, we have $\mbox{\rm \textsf{wt}}(\sigma^\prec \cap U_s)\leq 2^{-|\sigma|}$. So enumerating $\sigma$ adds $\tp{-\ensuremath{|\sigma|}}$ to the weight, while the weight of strings above $\sigma$ removed from $U_s$ is at most $2^{-|\sigma|}$. \begin{remark} \label{rem:Doob Madison} {\rm The definition of a Madison test is closely related to Dubins' inequality, which limits the amount of oscillation a martingale can have; see, for instance, \cite[Exercise 2.14 on pg.\ 238]{Durrett:96}. Note that this inequality implies a version of the better-known Doob upcrossing inequality by taking the sum over all $k$. 
We only need to discuss these inequalities in the restricted setting of martingales on $2^{ < \omega}$. Consider a computable rational-valued martingale $B$; that is, $B(\sigma)$ is a rational uniformly computed (as a single output) from~$\sigma$. Suppose that $c,d$ are rationals, $0< c< d$, $B(\langle \rangle)< c$, and $B$ oscillates between values less than $c$ and greater than $d$ along a bit sequence $Z$. An \emph{upcrossing} (for these values) is a pair of strings $\sigma \prec \tau$ such that $B(\sigma)< c$, $B(\tau)>d$, and $B(\eta) \le d$ for each $\eta$ such that $\sigma \preceq \eta \prec \tau$. By Dubins' inequality, for each $k$ we have \begin{equation} \label{eqn: upcr} \mathbf{\lambda}\{X \colon \, \text{there are at least $k$ upcrossings along } X\} \le (c/d)^k.\end{equation} (See \cite[Cor.\ b.7]{Bienvenu.Greenberg.ea:preprint} for a proof of this fact using the notation of the present paper.) Suppose now that $2c< d$. We define a Madison test that $Z$ fails. Strings never leave the computable approximation of the test, so (a) holds. We put $\langle \rangle$ into $U_0$. If $\sigma \in U_{s-1}$, put into $U_s$ all strings $\eta$ such that there is $\tau$ with $\sigma \prec \tau \prec \eta$, where $\tau \succ \sigma$ is chosen prefix minimal with $B(\tau) >d$, and $\eta \succ \tau$ is chosen prefix minimal with $B(\eta) < c$. Let $U = \bigcup_s U_s$ (which is in fact computable). For each $\sigma$, by the upcrossing inequality (\ref{eqn: upcr}) localised to $[\sigma]$, we have $\mbox{\rm \textsf{wt}} (\sigma^\prec \cap U) \le \tp{-\ensuremath{|\sigma|}} \sum_{k\ge 1} (c/d)^k < \tp{-\ensuremath{|\sigma|}}$, so (b) is satisfied vacuously. } \end{remark} As already noted in \cite{Bienvenu.Greenberg.ea:preprint}, if $B =\sup_s B_s$ is a left-c.e.\ martingale where the $B_s$ are uniformly computable martingales, then an upcrossing apparent at stage $s$ can later disappear because $B(\sigma)$ increases.
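Dubins' bound (\ref{eqn: upcr}) can be checked exhaustively for a toy computable martingale. The following sketch (our own illustration, not from the source) builds a fair betting strategy with random rational bets and verifies the bound for $c=2$, $d=5$ over all bit sequences of length $12$:

```python
from fractions import Fraction
import itertools, random

rnd = random.Random(1)
bets = {}  # bet fraction chosen lazily, once per string, so B is well defined

def values_along(bits):
    """Capital of a toy fair martingale B with B(<>) = 1 along prefixes of bits."""
    vals, sigma = [Fraction(1)], ""
    for b in bits:
        f = bets.setdefault(sigma, Fraction(rnd.randrange(-9, 10), 10))
        vals.append(vals[-1] * ((1 + f) if b == "0" else (1 - f)))
        sigma += b
    return vals

def upcrossings(vals, c, d):
    """Number of disjoint upcrossings: a value < c followed later by one > d."""
    count, below = 0, False
    for v in vals:
        if not below and v < c:
            below = True
        elif below and v > d:
            count, below = count + 1, False
    return count

n, c, d = 12, Fraction(2), Fraction(5)   # B(<>) = 1 < c and 2c < d
for k in (1, 2):
    hits = sum(1 for bits in itertools.product("01", repeat=n)
               if upcrossings(values_along(bits), c, d) >= k)
    assert Fraction(hits, 2 ** n) <= (c / d) ** k   # Dubins' bound
```

Restricting to the first $n$ levels only undercounts upcrossings, so the exhaustive check is a valid finite instance of the inequality.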
In this case, as we will see in the proof of Lemma~\ref{Madison to MG convergence}, the full power of the conditions (a) and (b) is needed to obtain a Madison test from the oscillating behaviour of $B$. We now take the first step of the argument outlined above. \begin{lemma} \label{lem: density to Madison} Let $Z$ be a ML-random dyadic density-one point. Then $Z$ passes each Madison test. \end{lemma} \begin{proof} Suppose that a ML-random bit sequence $Z$ fails a Madison test $\seq {U_s}\sN s$. We will build a ML-test $\seq {\+ S^k} \sN k$ such that $\forall \sigma \in U \, [ \mathbf{\lambda}_\sigma(\+ S^k) \ge \tp{-k}]$, and therefore $$\underline \varrho(\seqcantor - \+ S^k \mid Z) \le 1- \tp{-k}.$$ Since $Z$ is ML-random we have $Z \not \in \+ S^k$ for some $k$. So $Z$ is not a dyadic density-one point, as witnessed by the $\PI{1}$ class $\seqcantor - \+ S^k$. To define the $\+ S^k$ we construct, for each $k, t \in \omega$ and each string $\sigma \in U_t$, clopen sets $\+ A^k_{\sigma,t} \subseteq [\sigma]$ given by strong indices for finite sets of strings computed from $k, \sigma, t$, such that $\mathbf{\lambda} (\+ A^k_{\sigma,t} )= \tp{-\ensuremath{|\sigma|} -k}$ for each $\sigma \in U_t$. We will let $\+ S^k$ be the union of these sets over all $\sigma$ and $t$. The clopen sets for $k$ and a final string $\sigma \in U$ will be disjoint from the $\PI{1}$ class $\seqcantor - \+ S^k$. Condition (b) on Madison tests ensures that during the construction, a string $\sigma$ can inherit the clopen sets belonging to its extensions $\tau$, without risking that the $\PI{1}$ class $\seqcantor - \+ S^k$ becomes empty above $\sigma$. \vspace{6pt} \noindent \emph{Construction of clopen sets $\+ A^k_{\sigma,t} \subseteq [\sigma]$ for $\sigma \in U_t$.} \noindent No sets need to be defined at stage $0$ because $U_0 = \emptyset$. Suppose at stage $t+1$, we have $\sigma \in U_{t+1} - U_t$.
For each $\tau \succ \sigma$ such that $\tau \in U_t - U_{t+1}$, put $\+ A^k_{\tau, t}$ into an auxiliary clopen set $ \widetilde {\+ A}^k_{\sigma, t+1}$. Since $\sigma \not \in U_t$, by condition (b) on Madison tests, we have $\mbox{\rm \textsf{wt}} (\sigma^\prec \cap U_t) \le \tp{-\ensuremath{|\sigma|}}$, and so inductively \begin{center} $\mathbf{\lambda} (\widetilde {\+ A}^k_{\sigma,t+1}) \le \tp{- \ensuremath{|\sigma|} -k}$. \end{center} Now, to obtain $ \+ A^k_{\sigma , t+1}$ we simply add mass from $[\sigma]$ to $\widetilde {\+ A}^k_{\sigma , t+1}$ in order to ensure equality as required. Let $$\+ S^k_t = \bigcup_{\sigma \in U_t} \+ A^k_{\sigma,t}.$$ Then $\+ S^k_t \subseteq \+ S^k_{t+1}$ by condition (a) on Madison tests. Clearly \begin{center} $\mathbf{\lambda} \+ S^k_t \le \tp {-k} \mbox{\rm \textsf{wt}} (U_t) \le \tp{-k}$. \end{center} So $\+ S^k = \bigcup_t \+ S ^k_t$ determines a ML-test. Since $Z$ is ML-random, we have $Z \not \in \+ S^k$ for some $k$. If $\sigma \in U$ then by construction $\mathbf{\lambda} \+ A^k_{\sigma,t} = \tp{-\ensuremath{|\sigma|} -k}$ for almost all $t$. Thus $ \mathbf{\lambda}_\sigma(\+ S^k) \ge \tp{-k}$ as required. \end{proof} We begin the second step of the argument with an intermediate fact. \begin{lemma} Suppose that $Z$ passes each Madison test. Then $Z$ is computably random. \end{lemma} \begin{proof} Rather than giving a direct proof, we will rely on Remark~\ref{rem:Doob Madison}. Suppose $Z$ is not computably random. The proof of \cite[Thm.\ 4.2]{Freer.Kjos.ea:nd} turns success of a rational-valued computable martingale $M$ with the savings property into oscillation of another such martingale $B$. Slightly adapting the (arbitrary) bounds for the oscillation given there, we may assume that $B$ is as in Remark~\ref{rem:Doob Madison} for $c=2$, $d=5$: if $M$ succeeds along $Z$, then there are infinitely many upcrossings $\tau \prec \eta \prec Z$ with $B(\tau) < 2$ and $B(\eta) >5$. Therefore $Z$ fails some Madison test.
\end{proof} \begin{lemma} \label{Madison to MG convergence} Suppose that $Z$ passes each Madison test. Then every left-c.e.\ martingale $L$ converges along $Z$. \end{lemma} \begin{proof} Let $L$ be a left-c.e.\ martingale. Then $L(\sigma) = \sup_s L_s(\sigma)$ where $\seq {L_s}$ is a uniformly computable sequence of martingales with $L_0 = 0$ and $L_s(\sigma) \le L_{s+1}(\sigma)$ for each $\sigma$ and~$s$. Since $Z$ is computably random by the previous lemma, $\lim_n L_s(Z \uhr n)$ exists for each $s$. If $L$ diverges along $Z$, there is $\varepsilon < L(\langle \rangle) $ such that \[\limsup_n L(Z \uhr n) - \liminf_n L(Z\uhr n) > \varepsilon.\] Based on this fact we define a Madison test that $Z$ fails. Along with the $U_s$ we define a uniformly computable labelling function $\gamma_s \colon \, U_s \to \{0, \ldots,s\}$. \vspace{6pt} \leftskip 0.5cm \noindent {\it Let $U_0 = \emptyset$. For $s>0$ we put the empty string $\langle \rangle $ into $U_s$ and let $\gamma_s(\langle \rangle) = 0$. If already $\sigma \in U_s$ with $\gamma_s(\sigma) = t$, then we also put into $U_s$ all strings $\tau \succ \sigma$ that are minimal under the prefix ordering with $L_s(\tau) - L_t(\tau) > \varepsilon$. Let $\gamma_s(\tau) $ be the least $r$ with $L_r(\tau) - L_t(\tau) > \varepsilon$. } \leftskip 0cm \vspace{6pt} \noindent Note that $\gamma_s(\tau)$ records the greatest stage $r \le s $ at which $\tau$ entered~$U_r$. Intuitively, this construction attempts to find upcrossings between the values $ \liminf_n L(Z\uhr n) < \limsup_n L(Z\uhr n)$. Clearly $\lim_n L_t(Z \uhr n) \le \liminf_n L(Z\uhr n)$. If a string $\tau$ as above is sufficiently long, then in fact $L_t(\tau) < \liminf_n L(Z\uhr n)$, so we have an upcrossing. We verify that $\seq {U_s}$ is a Madison test. For condition (a), suppose that $\tau \in U_s - U_{s+1}$. Let $\sigma_0 \prec \sigma_1 \prec \ldots \prec \sigma_n = \tau$ be the prefixes of $\tau $ in $U_s$.
We can choose a least $i< n$ such that $\sigma_{i+1}$ is no longer the minimal extension of $\sigma_i$ at stage $s+1$. Thus there is $\eta$ with $\sigma_i \prec \eta \prec \sigma_{i+1}$ and $L_{s+1}(\eta) - L_{\gamma_s(\sigma_i)}(\eta) > \varepsilon$. Then $\eta \in U_{s+1}$ and $\eta \prec \tau$, as required. To verify condition (b) requires more work. We fix $s$ and write $M_t(\eta)$ for $ L_s(\eta ) - L_t(\eta)$. \begin{claim} \label{cl:tech} For each $\eta \in U_s$, where $\gamma_s( \eta) = r$, we have \[\tp{- |\eta|}M_r(\eta) \ge \varepsilon \cdot \mbox{\rm \textsf{wt}} (U_s \cap \eta^\prec). \] \end{claim} \noindent In particular, letting $\eta = \langle \rangle$, we obtain that $\mbox{\rm \textsf{wt}} (U_s)$ is bounded by a constant $c=1 + L(\langle \rangle)/\varepsilon$ as required. \noindent For $\sigma \in U_s$ and $k \in {\mathbb{N}}$ let $U_s^{\sigma,k}$ be the strings strictly above $\sigma$ and at a distance to $\sigma$ of at most $k$, that is, the set of strings $\tau$ such that there is $\sigma = \sigma_0 \prec \ldots \prec \sigma_m =\tau$ on $U_s$ with $m \le k$ and $\sigma_{i+1}$ a child of $\sigma_i$ for each $i<m$. To establish the claim, we show by induction on $k$ that \[\tp{- |\eta|}M_r(\eta) \ge \varepsilon \cdot \mbox{\rm \textsf{wt}} (U_s^{\eta, k}). \] If $k=0$ then $U_s^{\eta,k}$ is empty so the right hand side is $0$. Now suppose that $ k>0$. Let $F$ be the set of immediate successors of $\eta$ on $U_s$. Let $r_\tau = \gamma_s(\tau)$. By the inductive hypothesis, we have for each $\tau \in F$% \begin{eqnarray} \label{eqn:taus} \tp{-|\tau|} M_r( \tau) & = & \tp{-|\tau|} [(L_{r_\tau}(\tau)- L_r(\tau)) + M_{r_\tau}(\tau)] \\ & \ge & \tp{-|\tau|} \cdot \varepsilon + \varepsilon \cdot \mbox{\rm \textsf{wt}} (U_s ^{ \tau, k-1}). \nonumber \end{eqnarray} Then, taking the sum over all $\tau \in F$, $$ \tp{-|\eta|} M_r(\eta) \ge \sum_{\tau \in F} \tp{-|\tau|}M_r(\tau) \ge \varepsilon \cdot \mbox{\rm \textsf{wt}} (U_s ^{\eta,k}). 
$$ The first inequality is Kolmogorov's inequality for martingales, using that the $\tau$ form an antichain. For the second inequality we have used (\ref{eqn:taus}) and that $U_s^{ \eta, k} =F \cup \bigcup_{\tau \in F} U_s ^{ \tau, k-1}$. This completes the induction and shows the claim. Now, to obtain (b), suppose that $\mbox{\rm \textsf{wt}} (U_s \cap \sigma^\prec) > \tp{- |\sigma|}$. We use Claim~\ref{cl:tech} to show that $\sigma \in U_s$. Assume otherwise. Let $\eta \prec \sigma$ be in $U_s$ with $|\eta|$ maximal, and let $r = \gamma_s(\eta)$. As before, let $F$ be the prefix minimal extensions of $\sigma$ in $U_s$, and $r_\tau = \gamma_s(\tau)$. Then $L_{r_\tau}(\tau)- L_r(\tau)> \varepsilon$ for $\tau \in F$. Since $\tau \in U_s$, we can apply the claim to $\tau$, so (\ref{eqn:taus}) is valid. Arguing as before, but with $\sigma$ instead of $\eta$, we have $$ \tp{-|\sigma|} M_r(\sigma) \ge \sum_{\tau \in F} \tp{-|\tau|}M_r(\tau) \ge \varepsilon \cdot \mbox{\rm \textsf{wt}} (U_s \cap \sigma^\prec) $$ (that part of the argument did not use that the base string lies in $U_s$). Since $\mbox{\rm \textsf{wt}} (U_s \cap \sigma^\prec) > \tp{- |\sigma|}$, this implies that $M_r(\sigma) > \varepsilon$. Hence some $\sigma'$ with $\eta \prec \sigma' \preceq \sigma$ is in $U_s$, contrary to the maximality of $\eta$. This concludes the verification that $\seq {U_s}$ is a Madison test. As explained already, for each $r$ there are infinitely many $n$ with $L(Z\uhr n) - L_r(Z \uhr n) > \varepsilon$. This shows that $Z$ fails this test: suppose inductively that we have $\sigma \prec Z$ such that there is a least $r$ with $\sigma \in U_t$ for all $t\ge r$ (so that $\gamma_t(\sigma) = r$ for all such $t$). Choose $n > \ensuremath{|\sigma|}$ for this $r$. Then from some stage on $\tau = Z \uhr n$ is a viable extension of $\sigma$, so $\tau$, or some prefix of it that is longer than $\sigma$, is in~$U$. \end{proof} \noindent This concludes our proof of Thm.\ \ref{thm:Madison}.
\end{proof} The Oberwolfach group (Bienvenu, Greenberg, Ku\v{c}era, Nies, and Turetsky) \cite[Cor.\ 5.5]{Bienvenu.Greenberg.ea:preprint} showed that every OW-random is density random. The Madison group provided a direct proof of this fact. A \emph{left-c.e.\ bounded test} is a nested sequence $\seq{\+ V_n}$ of uniformly $\Sigma^0_1$ classes such that for some computable sequence of rationals $\seq{\beta_n}$ with $\beta = \sup_n \beta_n$ we have $\mathbf{\lambda}(\+ V_n)\le \beta-\beta_n$ for all $n$. $Z$ fails this test if $Z \in \bigcap_n \+ V_n$. The OW group introduced this test notion and used it for one possible characterisation of OW-randomness. The Madison group used these tests (formerly called Auckland tests) directly. \begin{prop} \label{prop: OW MSN} Every OW-random $Z$ is density random. \end{prop} \begin{proof} Given a left-c.e.\ martingale $M$, we want to show that $M$ converges along $Z$. Let $M= \sup_m D_m$ where $D_m$ is a computable rational-valued martingale uniformly in $m$. Let $\beta = M(\langle \rangle) $ and $\beta_m = D_m( \langle \rangle)$, so that $\beta = \sup_m \beta_m$. Let $L_m = M- D_m$ be the ``rest'' martingale at stage $m$. Assume that $M$ does not converge along $Z$. Multiplying $M$ by a sufficiently large integer we may then assume that \begin{center} $1 < \limsup_k M(Z\uhr k) - \liminf_k M(Z\uhr k)$. \end{center} Define the left-c.e.\ bounded test by \[ \+ V_m= \{ Y \colon \, \exists k \, L_m(Y\uhr k) >1\}. \] Then by the usual Kolmogorov inequality, we have $\mathbf{\lambda} \+ V_m \le L_m( \langle \rangle) = \beta - \beta_m$. We show that $Z$ fails $\seq{\+ V_m}$, contradicting the OW-randomness of $Z$. Since $Z$ is computably random, $l_m = \lim_k D_m(Z \uhr k)$ exists for each $m$. Furthermore, $l_m \le \liminf_k M(Z \uhr k)$. Thus for each $m$ there is $k$ with $L_m(Z\uhr k) >1$; namely, the divergence of $M$ along $Z$ is due only to the rest martingale at stage $m$. Hence $Z \in \bigcap_m \+ V_m$. \end{proof} We note that this proof fails in the setting of higher randomness. See Section~\ref{s: higher density}.
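The bound $\mathbf{\lambda} \+ V_m \le L_m(\langle \rangle)$ via Kolmogorov's inequality can be checked exhaustively on a toy martingale (our own sketch, not from the source), with $L_m(\langle \rangle) = 1/4$ playing the role of $\beta - \beta_m$ and threshold $1$ as in the definition of $\+ V_m$:

```python
from fractions import Fraction
import itertools, random

rnd = random.Random(7)
bets = {}  # bet fraction chosen lazily, once per string, so the martingale is well defined

def values_along(bits, start):
    """Capital of a toy fair nonnegative martingale along the prefixes of bits."""
    vals, sigma = [start], ""
    for b in bits:
        f = bets.setdefault(sigma, Fraction(rnd.randrange(-9, 10), 10))
        vals.append(vals[-1] * ((1 + f) if b == "0" else (1 - f)))
        sigma += b
    return vals

n, start = 12, Fraction(1, 4)   # initial capital 1/4, i.e. beta - beta_m = 1/4
hit = sum(1 for bits in itertools.product("01", repeat=n)
          if any(v > 1 for v in values_along(bits, start)))
# Kolmogorov: the measure of {Y : some prefix has capital > 1} is at most 1/4
assert Fraction(hit, 2 ** n) <= start
```

Again, truncating at length $n$ only undercounts the class, so the finite check is a valid instance of the inequality.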
\section{Nies: Density and higher randomness} \label{s: higher density} \newcommand{\Pi^1_1}{\Pi^1_1} \newcommand{\Sigma^1_1}{\Sigma^1_1} \newcommand{\omega_1^{CK}}{\omega_1^{CK}} By Nies (August). The work of the Madison group described in Section~\ref{s:Andrews} can be lifted to the domain of higher randomness. Interestingly, density one can now be equivalently required for any $\Sigma^1_1$ class containing the real, not necessarily closed. We use the following fact due to Greenberg. It is a higher analogue of the original, weaker version of Prop.\ \ref{prop:PC random upper density}. \begin{proposition}[N.\ Greenberg, 2013] \label{prop:higher ML random upper density} Let $\+ C \subseteq \seqcantor$ be $\Sigma^1_1$. Let $Z \in \+ C$ be $\Pi^1_1$-ML-random. Then $\overline \varrho_2(\+ C \mid Z) =1$. \end{proposition} \begin{proof} If $\overline \varrho_2(\+ C \mid Z) < 1$ then there are a positive rational $q<1$ and $n^*$ such that for all $n\ge n^*$ we have $\mathbf{\lambda}_{Z\uhr n}(\+ C)< q$. Choose a rational $r$ with $q< r<1$. We define $\Pi^1_1$ antichains $U_n$ in $2^{ < \omega}$, uniformly in $n$. Let $U_0 = \{ Z \uhr {n^*} \}$. Suppose $U_n$ has been defined. For each $\sigma \in U_n$, at a stage $\alpha$ such that $\mathbf{\lambda}_\sigma(\+ C_\alpha) < q$, we obtain effectively a hyperarithmetical antichain $V$ of extensions of $\sigma$ such that $\+ C _\alpha \cap [\sigma] \subseteq \Opcl V$ and $\mathbf{\lambda}_\sigma(\Opcl V) < r$. Put $V$ into $U_{n+1}$. Clearly $\mathbf{\lambda} \Opcl {U_n} \le r^n$ for each $n$. Also, $Z \in \bigcap_n \Opcl {U_n}$, so $Z$ is not $\Pi^1_1$-ML-random. \end{proof} A martingale $M\colon 2^{ < \omega} \to {\mathbb{R}}$ is called left-$\Pi^1_1$ if $M(\sigma)$ is a left-$\Pi^1_1$ real uniformly in $\sigma$. \begin{thm} Let $Z$ be $\Pi^1_1$-ML-random. The following are equivalent. \begin{itemize} \item[(i)] $\rho(\+ C \mid Z)=1 $ for each $\Sigma^1_1$-class $\+ C$ containing $Z$.
\item[(ii)] $\rho(\+ C \mid Z)=1 $ for each \emph{closed} $\Sigma^1_1$-class $\+ C$ containing $Z$. \item[(iii)] each left-$\Pi^1_1$ martingale converges on $Z$ to a finite value. \end{itemize} \end{thm} \begin{proof} (iii) $\to$ (i): The measure of a $\Sigma^1_1$ set is left-$\Sigma^1_1$ in a uniform way (see e.g.\ \cite[Ch.\ 9]{Nies:book}). Therefore $M(\sigma)= 1- \mathbf{\lambda}_\sigma(\+ C)$ is a left-$\Pi^1_1$ martingale. Since $M$ converges along $Z$, and since by Prop.\ \ref{prop:higher ML random upper density} $\liminf_n M(Z\uhr n) = 0$, it converges along $Z$ to $0$. This shows that $\rho(\+ C \mid Z)=1 $. \noindent (ii) $\to$ (iii). We follow the proof of the Madison Theorem~\ref{thm:Madison} given above. All stages $s$ are now interpreted as computable ordinals. Computable functions and constructions are now functions $\omega_1^{CK} \to L_{\omega_1^{CK}}$ with a $\Sigma_1$ graph, i.e., assignments of recursive ordinals to instructions. \begin{definition} A \emph{$\Pi^1_1$-Madison test} is a $\Sigma_1$ over $L_{\omega_1^{CK}}$ function $\seq {U_s}_{ s< \omega_1^{CK}}$ mapping ordinals to (hyperarithmetical) subsets of $2^{ < \omega}$ such that $U_0 = \emptyset$, for each stage $s$ we have $\mbox{\rm \textsf{wt}} (U_s) \le c$ for some constant $c$, and for all strings $\sigma, \tau$, \begin{itemize} \item[(a)] $\tau \in U_s - U_{s+1} \to \exists \sigma \prec \tau \, [ \sigma \in U_{s+1} - U_s]$ \item [(b)] $\mbox{\rm \textsf{wt}} (\sigma^\prec \cap U_s) > \tp{-\ensuremath{|\sigma|}} \to \sigma \in U_s$. \end{itemize} Also $U_\gamma(\sigma) = \lim_{\alpha < \gamma} U_\alpha(\sigma)$ for each limit ordinal $\gamma$. \end{definition} The following well-known fact can be proved similarly to \cite[1.9.19]{Nies:book}. \begin{lemma} \label{lem:extend open} Let $\+ A \subseteq \seqcantor$ be a hyperarithmetical open set.
Given a rational $q$ with $q > \mathbf{\lambda} \+ A$, we can effectively determine from $\+ A, q$ a hyperarithmetical open $\+ S \supseteq \+ A$ with $\mathbf{\lambda} \+ S = q$. \end{lemma} \begin{lemma} \label{lem: density to higher Madison} Let $Z$ be a $\Pi^1_1$ ML-random such that $\rho(\+ C \mid Z)=1 $ for each \emph{closed} $\Sigma^1_1$-class $\+ C$ containing $Z$. Then $Z$ passes each $\Pi^1_1$-Madison test. \end{lemma} The proof follows the proof of the analogous Lemma~\ref{lem: density to Madison}. The sets $\+ A^k_{\sigma,s}$ are now hyperarithmetical open sets computed from $k,\sigma, s$. Suppose $\sigma \in U_{s+1} - U_s$. The set $\widetilde {\+ A}^k_{\sigma,s}$ is defined as before. To effectively obtain $ \+ A^k_{\sigma , s+1}$, we apply Lemma~\ref{lem:extend open} to add mass from $[\sigma]$ to $\widetilde {\+ A}^k_{\sigma , s+1}$ in order to ensure $\mathbf{\lambda} ( {\+ A}^k_{\sigma,s+1}) = \tp{- \ensuremath{|\sigma|} -k}$ as required. As before let $\+ S^k_t = \bigcup_{\sigma \in U_t} \+ A^k_{\sigma,t}$. Then $\+ S^k_t \subseteq \+ S^k_{t+1}$ by property (a) of $\Pi^1_1$ Madison tests. Clearly $\mathbf{\lambda} \+ S^k_t \le \tp {-k} \mbox{\rm \textsf{wt}} (U_t) \le \tp{-k}$. So $\+ S^k = \bigcup_{t< \omega_1^{CK}} \+ S ^k_t$ determines a $\Pi^1_1$ ML-test. By construction $\overline \rho(\seqcantor - \+ S^k \mid Z) \le 1- \tp{-k}$. Since $Z$ is $\Pi^1_1$-ML-random we have $Z \not \in \+ S^k$ for some $k$. So $\overline \rho(\+ C \mid Z) < 1 $ for the {closed} $\Sigma^1_1$-class $\+ C = \seqcantor - \+ S^k$ containing $Z$, contrary to the hypothesis on $Z$. The analog of Lemma~\ref{Madison to MG convergence} also holds. \begin{lemma} Suppose that $Z$ passes each $\Pi^1_1$-Madison test. Then every left-$\Pi^1_1$ martingale $L$ converges along $Z$. \end{lemma} The proof of Lemma~\ref{Madison to MG convergence} was already set up so that this works. The uniformly hyperarithmetical labelling functions $\gamma_s$ now map $U_s$ to $\omega_1^{CK}$. Note that the antichains $F$ can now be infinite.
\end{proof} A $\Pi^1_1$ ML-random satisfying any of the three conditions above will be called \emph{$\Pi^1_1$-density random}. We note the following implications, neither of which is known to be proper. \begin{center} higher weak 2 random $\Rightarrow$ $\Pi^1_1$ OW-random $\Rightarrow$ $\Pi^1_1$ density random. \end{center} The first implication is due to Bienvenu, Greenberg and Monin. The second is the higher analog of Proposition \ref{prop: OW MSN}: \begin{prop} \label{prop: higher OW MSN} Every $\Pi^1_1$ OW random $Z$ is $\Pi^1_1$ density random. \end{prop} However, we need to go back to the original proof \cite[Cor.\ 5.5]{Bienvenu.Greenberg.ea:preprint}. The reason is that the left-c.e.\ bounded tests don't make sense in the higher setting; Oberwolfach tests, in contrast, can be suitably adapted. The $n$-th test component is obtained by counting $n$ oscillations. \section{Miyabe: Being a Lebesgue point for each integral test} Input by Kenshi Miyabe. \begin{theorem}\label{th:density-one-converge} The following are equivalent for a ML-random real $z \in [0,1]$. \begin{enumerate} \item $z$ is a density-one point. \item Every left-c.e.\ martingale converges on $z$. \end{enumerate} \end{theorem} A ML-random satisfying one of these conditions will be called \emph{density random}. This theorem follows from the following theorems. \begin{theorem}[Mushfeq Khan and Joseph Miller]\label{th:ML-dyadic-full} Let $z$ be a ML-random dyadic density-one point. Then $z$ is a full density-one point. \end{theorem} \begin{theorem}[Bienvenu et al.\ \cite{Bienvenu.Greenberg.ea:OWpreprint}] If every left-c.e.\ martingale converges on $z$, then $z$ is a dyadic density-one point. \end{theorem} \begin{theorem}[Andrews, Cai, Diamondstone, Lempp and Miller, 2012] If $z$ is a ML-random dyadic density-one point, then every left-c.e.\ martingale converges on $z$. \end{theorem} Here, we give a characterization of density randomness via the Lebesgue differentiation theorem.
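The proofs in this section work with the dyadic average $D(f,\sigma)$, the mean of $f$ over the dyadic interval $[\sigma]$. Two facts are used repeatedly: $D(f,-)$ is a martingale in $\sigma$, and convergence $\lim_n D(f,z\uhr n)=f(z)$ at a point $z$ is the dyadic case of the Lebesgue differentiation theorem. Both can be checked numerically for a concrete smooth $f$; a Python sketch (the function names are ours, not from the text), using $f(x)=x^2$ and the non-dyadic point $z=1/3$:

```python
def dyadic_interval(sigma):
    """The dyadic subinterval [sigma] of [0,1] coded by a bit string."""
    a = sum(int(b) * 2 ** -(i + 1) for i, b in enumerate(sigma))
    return a, a + 2 ** -len(sigma)

def D(avg_on, sigma):
    """Dyadic average D(f,sigma), given a function computing the exact
    average of f on an interval [a,b]."""
    a, b = dyadic_interval(sigma)
    return avg_on(a, b)

# f(x) = x^2; its average over [a,b] is (b^3 - a^3)/(3(b-a)).
avg_sq = lambda a, b: (b**3 - a**3) / (3 * (b - a))

# Martingale (fairness) property: D(sigma) = (D(sigma0) + D(sigma1))/2.
for sigma in ["", "0", "01", "101"]:
    assert abs(D(avg_sq, sigma)
               - (D(avg_sq, sigma + "0") + D(avg_sq, sigma + "1")) / 2) < 1e-12

# Lebesgue differentiation at the non-dyadic point z = 1/3 = 0.0101... :
z_bits = "01" * 12
vals = [D(avg_sq, z_bits[:n]) for n in range(1, len(z_bits) + 1)]
assert abs(vals[-1] - (1 / 3) ** 2) < 1e-6  # D(f, z|n) -> f(z) = 1/9
```

The fairness identity holds exactly here because the two halves of a dyadic interval have equal length, so the parent average is the mean of the child averages.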
\begin{theorem}\label{th:density-lebesgue} The following are equivalent for $z\in[0,1]$: \begin{enumerate} \item $z$ is density random. \item $z$ is a dyadic Lebesgue point for each integral test. \item $z$ is a Lebesgue point for each integral test. \end{enumerate} \end{theorem} Recall that an \emph{integral test} on $[0,1]$ with the Lebesgue measure is an integrable lower semicomputable function $f:[0,1]\to\overline{\mathbb{R}}^+$. Note that one direction is easy. \begin{proof}[Proof of (ii) $\Rightarrow$ (i) of Theorem \ref{th:density-lebesgue}] Suppose that $z$ is a Lebesgue point for each integral test. Then $f(z)$ is finite for each integral test $f$, whence $z$ is ML-random. Let $C$ be a $\Pi^0_1$ class containing $z$. We define a function $f:[0,1]\to\overline{\mathbb{R}}^+$ by \[f(x)=\begin{cases}1&\mbox{ if }x\not\in C\\ 0&\mbox{ if }x\in C.\end{cases}\] Then, $f$ is an integral test. Since $z$ is a Lebesgue point for $f$, $C$ has density-one at $z$. \end{proof} For the converse, we first show the following lemma. \begin{lemma}\label{lem:weak-l} If an ML-random set $z$ is a dyadic weak Lebesgue point for an integral test $f$, then $z$ is a dyadic Lebesgue point for $f$. \end{lemma} \begin{proof} As a notation, for a function $f:\subseteq [0,1]\to\mathbb{R}$ and a string $\sigma$, let \[D(f,\sigma)=\frac{\int_{[\sigma]}f\ d\mu}{2^{-|\sigma|}}.\] Then, $z$ is a dyadic Lebesgue point iff $\lim_n D(f,z\upharpoonright n)=f(z)$. If $f$ is an integral test, then $D(f,-)$ is a left-c.e.\ martingale. Suppose that $z$ is not a dyadic Lebesgue point for an integral test $f$ and $z$ is a dyadic weak Lebesgue point for $f$. Then $\lim_n D(f,z\upharpoonright n)=:r$ exists and $f(z)\ne r$. Let \[f=\sup_s f_s\] where $\{f_s\}$ is a computable sequence of rational step functions. Then, there is a computable order $u$ such that $D(f_s,\sigma)=D(f_s,\sigma0)=D(f_s,\sigma1)$ for each $\sigma$ satisfying $|\sigma|\ge u(s)$.
Unless $z$ is a dyadic rational, we have \[\lim_n D(f_s,z\upharpoonright n)=f_s(z).\] Suppose that $r<f(z)$. Since $\lim_s f_s(z)=f(z)$, there is $t$ such that \[r<f_t(z)\le f(z).\] Then \[r<f_t(z)=\lim_n D(f_t,z\upharpoonright n)\le\lim_n D(f,z\upharpoonright n).\] This is a contradiction. Suppose that $r>f(z)$. Let $q$ be a rational such that $f(z)<q<r$. We build a new integral test $g$ such that $g(z)=\infty$. We prepare auxiliary uniformly c.e.\ sets $\{S_n\}$ where $S_n\subseteq2^{<\omega}\times\omega$ for each $n$. Let $S_0=\{(\lambda,0)\}$ where $\lambda$ is the empty string. For each $n\ge1$ and $(\sigma,s)\in S_{n-1}$, computably enumerate $(\tau,t)$ into $S_n^\sigma$ so that \begin{itemize} \item $\sigma\prec\tau$, \item $|\tau|\ge u(s)$, \item $D(f_t,\tau)>q$, \item $\{\tau\in2^{<\omega}\ :\ (\tau,t)\in S_n^\sigma\}$ is prefix-free. \end{itemize} We can further assume that \[\bigcup\{[\tau]\ :\ \sigma\prec\tau,\ |\tau|\ge u(s),\ D(f,\tau)>q\}=\bigcup\{[\tau]\ :\ (\tau,t)\in S_n^\sigma\}.\] Let $S_n=\bigcup_{(\sigma,s)\in S_{n-1}}S_n^\sigma$. For each $(\tau,t)\in S_n$, let \[g_\tau=(q-D(f_s,\sigma))\mathbf{1}_{[\tau]}\] where $(\sigma,s)\in S_{n-1}$ and $\sigma\prec\tau$. We define $g$ by \[g=\sum_n\sum_{(\tau,t)\in S_n}g_\tau.\] Note that \[\int g_\tau\ d\mu\le(D(f_t,\tau)-D(f_s,\sigma))2^{-|\tau|} =\int_{[\tau]}(f_t-f_s)d\mu,\] thus $\int g\ d\mu\le \int f\ d\mu<\infty$. Hence, $g$ is an integral test. Since $\lim_n D(f,z\upharpoonright n)=r>q$, there exists $(\tau_n,t_n)\in S_n$ such that $\tau_n\prec z$ for each $n$. Then, \[g(z)=\sum_n (q-D(f_s,\sigma)) \ge\sum_n(q-f(z))=\infty.\] \end{proof} \begin{proof}[Proof of (i) $\Rightarrow$ (ii) of Theorem \ref{th:density-lebesgue}] Suppose that $z$ is density random. Let $f$ be an integral test. Then $D(f,-)$ is a left-c.e.\ martingale. By Theorem \ref{th:density-one-converge}, $\lim_n D(f,z\upharpoonright n)$ exists, whence $z$ is a dyadic weak Lebesgue point for $f$.
By Lemma \ref{lem:weak-l}, $z$ is a dyadic Lebesgue point for $f$. \end{proof} To drop ``dyadic'', we recall the following results. \begin{proposition} Let $f:[0,1]\to\mathbb{R}$ be interval-c.e. Then $\widetilde{D}_2f(z)=\widetilde{D}f(z)$ and $\utilde{D}_2 f(z)=\utilde{D} f(z)$ for each non-porosity point $z$. \end{proposition} \begin{lemma}[ \cite{Brattka.Miller.ea:nd}; after Fact 2.4, Fact 7.2] For each real $z$, \[\underline{D}f(z)\le\utilde{D}f(z)\le \widetilde{D}f(z)\le\overline{D}f(z).\] If $f$ is continuous, then \[\utilde{D}f(z)=\underline{D}f(z)\mbox{ and } \widetilde{D}f(z)=\overline{D}f(z).\] \end{lemma} \begin{lemma}[Lemma 3.8 in \cite{Bienvenu.Hoelzl.ea:12a}] Let $C$ be a $\Pi^0_1$ class. If $z\in C$ is difference random, then $C$ is not porous at $z$. \end{lemma} \begin{proof}[Proof of (ii) $\iff$ (iii) of Theorem \ref{th:density-lebesgue}] Note that (iii) $\Rightarrow$ (ii) holds by definition. We prove (ii) $\Rightarrow$ (iii). Suppose that $z$ is a dyadic Lebesgue point for each integral test. Then, $z$ is density random, whence difference random, thus a non-porosity point. Let $f$ be an integral test. Then, $F(x)=\int_{[0,x]}f\ d\mu$ is interval-c.e.\ and continuous. Hence, \[\limsup_{Q\to z}\frac{\int_Q f\ d\mu}{\mu(Q)} =\overline{D}F(z) =\widetilde{D}F(z) =\widetilde{D}_2F(z) =\limsup_{n\to\infty}\frac{\int_{[z\upharpoonright n]}f\ d\mu}{\mu([z\upharpoonright n])} =f(z).\] Similarly, we have $\liminf_{Q\to z}\frac{\int_Q f\ d\mu}{\mu(Q)}=f(z)$, whence $z$ is a Lebesgue point for $f$. \end{proof} In fact, a single integral test characterizes density randomness. \begin{lemma} Let $f,g$ be integral tests. If an ML-random set $x$ is a dyadic weak Lebesgue point for $f+g$, then $x$ is a dyadic weak Lebesgue point for $f$. \end{lemma} \begin{proof} Suppose that $x$ is a dyadic weak Lebesgue point for $f+g$ but not a dyadic weak Lebesgue point for $f$.
Let \[r=\lim_n D(f+g,x\upharpoonright n).\] Then, there are rationals $p,q$ $(p<q)$ such that $D(f,x\upharpoonright n)>q$ for infinitely many $n$ and $D(f,x\upharpoonright n)<p$ for infinitely many $n$. Notice that $q\le r$. By replacing $q$ with $\frac{p+q}{2}$, we can assume that $q<r$. Let $\epsilon=\frac{q-p}{3}>0$. Then, there is a natural number $N$ such that, for each $n>N$, we have \[|D(f+g,x\upharpoonright n)-r|<\epsilon.\] Hence, \[r-\epsilon<D(f,x\upharpoonright n)+D(g,x\upharpoonright n)<r+\epsilon.\] If $D(f,x\upharpoonright n)>q$, then \[D(g,x\upharpoonright n)<r+\epsilon-q.\] If $D(f,x\upharpoonright n)<p$, then \[D(g,x\upharpoonright n)>r-\epsilon-p.\] If $D(g,x\upharpoonright n)>r-\epsilon-p$, then \[D(f,x\upharpoonright n)<r+\epsilon-r+\epsilon+p=2\epsilon+p<q.\] We consider the following betting strategy. First use the strategy $f$ until $D(f,x\upharpoonright n)>q$. When found, stop betting until $D(g,x\upharpoonright n)>r-\epsilon-p$. At that stage, use the strategy \[\frac{q}{2\epsilon+p}f.\] Then, $x$ is not ML-random. \end{proof} \begin{theorem} Let $f$ be a Solovay-complete integral test. Then $x$ is a Lebesgue point for $f$ if and only if $x$ is density random. \end{theorem} \begin{proof} The ``if'' direction follows from Theorem \ref{th:density-lebesgue}. Suppose $x$ is not density random. We can assume that $x$ is ML-random, because, otherwise, $f(x)=\infty$ and $x$ is not a dyadic Lebesgue point for $f$. Then there is an integral test $g$ such that $x$ is not a dyadic Lebesgue point for $g$. Since $f$ is Solovay-complete, there are a rational $q$ and an integral test $h$ such that \[f=\frac{g}{q}+h.\] Notice that $x$ is not a dyadic Lebesgue point for $\dfrac{g}{q}$. By Lemma \ref{lem:weak-l}, $x$ is not a dyadic weak Lebesgue point for $\dfrac{g}{q}$. By the lemmas above, $x$ is not a dyadic weak Lebesgue point for $f$. Thus, $x$ is not a Lebesgue point for~$f$.
\end{proof} \newpage \part{Similarity relations for Polish metric spaces} \noindent In October 2013, Andr\'e Nies gave a talk as part of the Universality and Homogeneity Trimester at the Hausdorff Institute for Mathematics (HIM) in Bonn. The summary follows. We are given a class of structures. We always mean concrete presentations of structures (rather than ``up to isomorphism''). We address the following \noindent {\bf leading questions} for this class: \begin{itemize} \item[(a)] Which similarity relations are there on the class? \item[(b)] How complex are these similarity relations? \item[(c)] If structures $X,Y$ in the class are similar, how complex, relative to $X,Y$, is the means for showing this? For instance, if $X \cong Y$, can one compute an isomorphism from the structures? \end{itemize} In the model theoretic setting, we could be given the countable models of a first-order theory. In this setting, some answers to these questions are: \begin{itemize} \item[(a)] isomorphism $\cong$, elementary equivalence $\equiv$, elementary equivalence $\equiv_\alpha$ for $L_{\omega_1, \omega}$ sentences of rank $< \alpha$. \item [(b)] Isomorphism of countable graphs, linear orders, countable Boolean algebras is $\le_B$ complete for orbit equivalence relations of continuous $S_\infty$ actions (where $\le_B$ is Borel reducibility, and $S_\infty$ is the Polish group of permutations of $\omega$). \item [(c)] Suppose the similarity is $\cong$. For certain natural classes, this question has been answered in computable model theory. That area introduced the notion of being \emph{relatively computably categorical}, where presentations of $X,Y$ together uniformly compute an isomorphism if there is one at all. For instance, a dense linear order is r.c.c. There are variants, such as being \emph{uniformly computably categorical}, where one computes an isomorphism from computable indices for the structures. \end{itemize} We will be mainly considering the {\bf metric} setting. 
We are given a class of Polish metric spaces. To answer (a): The following similarities, which will be defined formally below, have been studied. \begin{center} Isometry $\cong_i$, homeomorphism $\cong_h$, \end{center} \begin{center} Gromov-Hausdorff distance $0$, Lipschitz equivalence. \end{center} The former two are discussed in detail in \cite[Ch.\ 14]{Gao:09}. The latter two are due to Gromov; see his book \cite[Ch.3]{Gromov:07} (the first edition dates from 1998). After some preliminary facts, we will answer (b) and (c) for the metric setting. We also consider Polish metric spaces with some additional structure, such as Banach spaces, or spaces with a probability measure on the Borel sets. \section{Representing Polish metric spaces} We adopt the global view. Single structures are thought of as points in a ``hyperspace''. To endow this hyperspace with its own structure, it matters how we represent a single structure. For metric spaces, two ways are common. \begin{itemize} \item[(1)] Let $\mathbb{U}$ denote the Urysohn space. Let $F(\mathbb{U})$ denote its Effros algebra: the set of closed subsets of $\mathbb{U}$, equipped with a standard Borel $\sigma$-algebra. Each Polish metric space is isometric to an element of $F(\mathbb{U})$. See Gao \cite[Ch.\ 14]{Gao:09}. \item[(2)] A point $V = \seq {v_{i,k}}\sN{i,k} \in \mathbb R^{{\mathbb{N}} \times {\mathbb{N}}}$ is a {\it distance matrix} if $V$ is a pseudo-metric on ${\mathbb{N}}$. Let $M_V$ denote the completion of the corresponding pseudo-metric space. This means that in $M_V$ we have a distinguished dense sequence of points $\seq {p_i}$ and present the space by giving their distances. We merely ask that $V$ is a pseudo-metric in order to ensure that the set $\+ M$ of distance matrices is closed in $\mathbb R^{{\mathbb{N}} \times {\mathbb{N}}}$. \end{itemize} Both representations are in a sense equivalent, as pointed out for instance in~\cite[Ch.\ 14]{Gao:09}.
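For representation (2), being a distance matrix is a conjunction of closed conditions on finitely many entries each, which is why the set $\+ M$ is closed; on a finite truncation the conditions can be checked directly. An illustrative Python sketch (ours, not from the text):

```python
def is_pseudometric(V):
    """Check that a finite matrix V is a pseudo-metric on {0,...,n-1}:
    zero diagonal, symmetry, nonnegativity, triangle inequality.
    (Distinct points may be at distance 0 -- that is what 'pseudo' allows,
    and what keeps the set of distance matrices closed.)"""
    n = len(V)
    return (all(V[i][i] == 0 for i in range(n))
            and all(V[i][k] == V[k][i] >= 0 for i in range(n) for k in range(n))
            and all(V[i][k] <= V[i][j] + V[j][k]
                    for i in range(n) for j in range(n) for k in range(n)))

# A pseudo-metric identifying points 0 and 1 (distance 0):
assert is_pseudometric([[0, 0, 1], [0, 0, 1], [1, 1, 0]])
# Triangle inequality fails: d(0,2) = 5 > d(0,1) + d(1,2) = 2.
assert not is_pseudometric([[0, 1, 5], [1, 0, 1], [5, 1, 0]])
```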
However, the second one is better for studying the complexity of the space. For instance, a computable metric space $(M, d, \seq {p_i})$ is given by a distance matrix $w$ such that $w_{i,k} = d(p_i, p_k)$ is a computable real uniformly in $i,k$. A Polish group action is a continuous action $G \times X \to X$ where $G$ is a Polish group and $X$ a Polish space. We write $G \curvearrowright X$ to say that $G$ acts on $X$ continuously. The corresponding \emph{orbit equivalence relation} is $E^X_G = \{\langle x, y \rangle \colon \, \exists g \, [ g x = y]\}$. \section{Polish metric spaces and the classical Scott analysis} A metric space $(M,d)$ can be turned into a structure in the language with binary relations $S_q$ for $ q \in {\mathbb{Q}}^+$, where $S_q(a,b)$ holds if $d(a,b) < q$. \begin{definition} Let $M$ be an $\mathcal{L}$-structure. We define inductively what it means for finite tuples $\bar{a},\bar{b}$ from $M$ of the same length to be $\alpha$-equivalent, denoted by $\bar{a}\equiv_\alpha\bar{b}$. \begin{itemize} \item $\bar{a}\equiv_0\bar{b}$ if and only if the quantifier-free types of the tuples are the same. \item For a limit ordinal $\alpha$, $\bar{a}\equiv_\alpha\bar{b}$ if and only if $\bar{a}\equiv_\beta\bar{b}$ for all $\beta<\alpha$. \item $\bar{a}\equiv_{\alpha+1}\bar{b}$ if and only if both of the following hold: \begin{itemize} \item For all $x\in M$, there is some $y\in M$ such that $\bar{a}x\equiv_\alpha\bar{b}y$ \item For all $y\in M$, there is some $x\in M$ such that $\bar{a}x\equiv_\alpha\bar{b}y$ \end{itemize} \end{itemize} \end{definition} The \emph{Scott rank} $\mathrm{sr}(M)$ of a structure $M$ is defined as the smallest $\alpha$ such that $\equiv_\alpha$ implies $\equiv_{\alpha+1}$ for all tuples of that structure. We remark that always $\mathrm{sr}(M)<|M|^+$. \begin{fact} A Polish metric space has Scott rank $0$ iff it is ultrahomogeneous.
\end{fact} Friedman, K\"orwien and Nies (2012) have shown that for each $\alpha < \omega_1$, there is a countable Polish ultrametric space $M$ such that $\mathrm{sr}(M) = \alpha \times \omega$. \begin{question} \ \noindent (a) Does every Polish metric space have countable Scott rank? \noindent (b) Can it in fact be described within the class of Polish metric spaces by an $L_{\omega_1, \omega} $ sentence? \end{question} \noindent Note (Feb 2014). Question (a) has been answered in the affirmative by Michal Ducha, a postdoc from Warsaw (student of J. Zapletal) who participated in the HIM program. \section{Isometry $\cong_i$} In 1998 Anatoly Vershik~\cite{Vershik:98} asked about the complexity of isometry $\cong_i$ on Polish metric spaces, and in particular if one can assign invariants. The answer was a resounding no. By the following result, $\cong_i$ is Borel equivalent to $E^{F(\mathbb{U})}_ { \text{Iso}(\mathbb{U})}$, the universal orbit equivalence relation given by the action of the isometry group of $\mathbb{U}$ on the Effros algebra of $\mathbb{U}$. \begin{theorem}[Gao-Kechris 2000; Clemens; see \cite{Gao:09}, Ch.\ 14] \ \begin{itemize} \item[(1)] $\cong_i \ \le_B \ E^{F(\mathbb{U})}_ { \text{Iso}(\mathbb{U})}$. \item[(2)] For every Polish group action $G \curvearrowright X$ we have $E^X_G \ \le_B \ \cong_i$. \end{itemize} \end{theorem} Let $\+ K$ be the class of compact metric spaces. Note that this is $\Pi^0_3$ with respect to the distance matrix representation of Polish metric spaces, because compactness is equivalent to being totally bounded. Isometry of compact spaces is much simpler than in the general case: the points in some fixed Polish space can serve as invariants. \begin{theorem}[Essentially Gromov \cite{Gromov:07}, Thm 3.27.5] \[ \cong_i \cap (\+ K \times \+ K ) \le_B \text{id}_{\mathbb{R}}. 
\] \end{theorem} \begin{proof} Gromov shows that the sequence of sets of $n \times n$ distance matrices that occur in a compact space $X$ constitute a complete set of invariants. Each such matrix is a point in a compact set $K_n(X) \subseteq {\mathbb{R}}^{n^2}$. The sequence of such compact sets can be represented by a single point in a Polish space, say ${\mathbb{R}}$. \end{proof} \noindent \emph{Computable versions.} The distance matrices $ V= \seq {v_{i,k}}\sN{i,k} \in \mathbb R^{{\mathbb{N}} \times {\mathbb{N}}}$ form an effectively closed set. They can in fact be coded as the infinite branches of a $\PI{1}$ tree $\subseteq 2^{ < \omega}$. Such a branch provides yes/no answers to queries of the form $|v_{i,k} - q| < \epsilon$ for $i,k \in {\mathbb{N}}$, $q \in {\mathbb{Q}}^+_0$, and $\epsilon \in {\mathbb{Q}}^+$. Let $V_e$ denote the $e$-th partial computable distance matrix. The domain of this partial computable function grows as long as the data are consistent with being a distance matrix; if they are seen not to be (a $\SI 1$ event) it stops, so that the function is only defined on an initial segment of ${\mathbb{N}}$. Being total is $\PI 2$. Let $M_e$ denote the computable metric space given by the $e$-th (total) distance matrix $V_e$. The following can be proved by computably reducing the isomorphism problem for computable graphs, which is complete for $\Sigma^1_1$ equivalence relations by Fokina et al.\ \cite{Fokina.Friedman.etal:10}. \begin{prop} $\{ \langle e,k \rangle \colon \, M_e \cong_i M_k \}$ is complete for $\Sigma^1_1$ equivalence relations on $\omega$ with respect to computable reductions. \end{prop} \begin{prop}[Melnikov and Nies \cite{Melnikov.Nies:13}] The set $C$ of indices for compact computable metric spaces is $\PI 3$. Isometry is $\PI 2$ within that set, that is, of the form $E \cap (C \times C)$ where $E$ is a $\PI 2$ relation. \end{prop} \section{Having Gromov-Hausdorff distance $0$} The following is ongoing work of Itai Ben Yaacov, Nies, and Todor Tsankov.
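Although not stated in the text, for finite metric spaces the Gromov-Hausdorff distance studied in this section admits a combinatorial description: $d_{GH}(X,Y)=\frac{1}{2}\min_R \operatorname{dis}(R)$, where $R$ ranges over correspondences (relations projecting onto both $X$ and $Y$) and $\operatorname{dis}(R)$ is the maximal distortion $|d_X(x,x')-d_Y(y,y')|$ over pairs in $R$. A brute-force Python sketch (names ours), which reproduces the value $1/4$ for the two-point example $X=\{0,1\}$, $Y=\{1/4,3/4\}$ discussed in this section:

```python
from itertools import combinations, product

def gh_distance(X, Y, dX, dY):
    """Brute-force Gromov-Hausdorff distance of two finite metric spaces,
    via d_GH = (1/2) * min over correspondences R of the distortion of R."""
    pairs = list(product(range(len(X)), range(len(Y))))
    best = float("inf")
    for r in range(1, len(pairs) + 1):
        for R in combinations(pairs, r):
            if {i for i, _ in R} != set(range(len(X))):  # must be onto X
                continue
            if {j for _, j in R} != set(range(len(Y))):  # must be onto Y
                continue
            dis = max(abs(dX(X[i], X[k]) - dY(Y[j], Y[l]))
                      for (i, j) in R for (k, l) in R)
            best = min(best, dis / 2)
    return best

d = lambda a, b: abs(a - b)
# The two-point example: X = {0,1}, Y = {1/4,3/4} have d_GH = 1/4.
assert gh_distance([0, 1], [0.25, 0.75], d, d) == 0.25
```

The exponential enumeration of correspondences is only feasible for tiny spaces, but it makes the finite case of the definition concrete.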
One thinks of two metric spaces $X,Y$ as isometric within error $\epsilon$ if they can be isometrically embedded into a third metric space $Z$ in such a way that the usual Hausdorff distance of the two images is at most $\epsilon$. ``$X,Y$ isometric within error $0$'' clearly means that the completions of $X,Y$ are isometric. The Gromov-Hausdorff distance of $X,Y$ is defined by \begin{center} $d_{GH} (X,Y) = \inf \{ \epsilon \colon\, X,Y \, \text{are isometric within error} \, \epsilon\}$. \end{center} (Also see Subsection~\ref{ss:bi-Katetov} for an equivalent definition.) For instance, if we let $X = \{0, 1 \}$ and $Y = \{ 1/4, 3/4 \}$, then \begin{center} $d_{GH} (X,Y) = 1/4$. \end{center} So, are there examples of non-isometric spaces $X,Y$ with GH-distance $0$? If so, neither $X$ nor $Y$ can be compact (Gromov). Also there is no positive lower bound on the distance of distinct points, otherwise a near isometry with error less than that bound would be an isometry. During the HIM talk, Nies mentioned an example: let $\mathbb E$ be the unit sphere of the Gurarij space. Let $v \in \mathbb E$ be smooth, and $w$ be non-smooth. Let $X = Y = \mathbb E \cup \{a,b\}$, with $d_X(a,b)= d_Y(a,b) = 3$. We set $d_X(v,a) = d_X(v, b) = 3$, and $d_Y(w,a) = d_Y(w, b) = 3$. Any isometry would have to map $v$ to $w$, which is impossible. However, by general properties of the Gurarij space, $d_{GH}(X,Y) = 0$. \subsection{Fact and more examples for GH-distance} After Nies' HIM talk, Matatyahu Rubin and Philipp Schlicht constructed further, simpler examples. Let $B_X$ denote the unit ball of a Banach space~$X$. \begin{prop} There are nonisometric Banach spaces $X,Y$ with $d_{GH}(B_X, B_Y)=0$. \end{prop} To prove this let $D= \seq {p_i}\sN i$ be a dense sequence of distinct elements in $(1,2)$, say. Let $U_p$ be the 2-dimensional ${\mathbb{R}}$ vector space with $\ell_p$ norm. Let $E_D$ be the $c_0$-sum of the spaces $U_{p_i}$.
That is, $E_D$ consists of the null sequences, with norm the maximum of the individual $\ell_{p_i}$ norms. If we have two dense sequences with different sets of members, the unit balls of the spaces are at GH-distance 0, but not isometric. Let $\mathbb{G}$ denote the Gurarij space. By a (continuous) model-theoretic argument, related to $\aleph_0$-categoricity, one can show that if $X$ is a Banach space and $d_{GH}(B_X,B_{\mathbb{G}})=0$, then $X$ is isometric to $\mathbb{G}$. \begin{remark}[Melleray-Schlicht] \mbox{} \begin{enumerate} \item Any two separable Banach spaces $X,Y$ with $d_{GH}(X,Y)<\infty$ are isometric. \item Isometry on Polish spaces reduces to $E_{GH}$. \end{enumerate} \end{remark} \begin{proof} For Banach spaces $X,Y$, $(X,Y)\in E_{GH}$ and $(X,Y)\in E^{\infty}_{GH}$ are equivalent by rescaling (i.e.\ rescaling the metric space into which we embed the spaces). It follows from the Main Theorem in a paper by Omladic and Semrl~\cite{Omladic.Semrl:95} that it is sufficient to prove that there is an $\epsilon$-isometry $T\colon X\rightarrow Y$ for some $\epsilon$. For any two perfect Polish spaces $X,Y$ with $d_{GH}(X,Y)=0$, we can construct a bijective $\epsilon$-isometry by a straightforward back-and-forth argument. The second claim follows from a paper of Melleray~\cite{Melleray:07}, where he shows that isometry of Polish spaces reduces to isometry of Banach spaces. \end{proof} The following result of Schlicht and Rubin shows that there is a single $E_{GH}$ class such that the isometry equivalence relation inside it is Borel bi-reducible with identity on $\seqcantor$. In particular, there are continuum many non-isometric spaces with discrete topology that are mutually at GH distance 0. We equip $[0,\epsilon]\times (\omega+1)\times\mathbb{R}$ with the metric $d$ defined by $d((x,i,y),(x',i',y'))=1$ if $(x,i)\neq (x',i')$ and $d((x,i,y),(x',i',y'))=|y-y'|$ if $(x,i)=(x',i')$. \begin{definition} Suppose that $f\colon [0,\epsilon]\rightarrow \omega+1$ is a function.
\begin{enumerate} \item Let $X_f=\{(x,i,0)\mid i\leq f(x)\}\cup\{(x,i,x)\mid i\leq f(x)\}\subseteq [0,\epsilon]\times (\omega+1)\times\mathbb{R}$ with the metric from $[0,\epsilon]\times (\omega+1)\times\mathbb{R}$. \item Let $\mathrm{supp}(f)=\{x\in [0,\epsilon]\mid f(x)\neq 0\}$ denote the \emph{support} of $f$. \item Let $\mathrm{bound}(f)=\{(x,i)\mid x\in \mathrm{supp}(f),\ i\leq f(x)\}$. \end{enumerate} \end{definition} If $|\mathrm{supp}(f)|=\omega$, then $X_f$ is a discrete countable metric space with distance set $\mathrm{supp}(f)\cup\{0,1\}$. \begin{proposition} Suppose that $\epsilon\leq\frac{1}{2}$. Suppose that $f_0\colon [0,\epsilon]\rightarrow \omega+1$ is a function such that $\mathrm{supp}(f_0)$ is a countable dense subset of $[0,\epsilon]$. Then $\mathrm{id}_{{}^{\omega}2}$ is Borel reducible to $\mathrm{Iso}\upharpoonright [X_{f_0}]_{GH}$. \end{proposition} \begin{proof} Note that for arbitrary functions $f,g\colon [0,\epsilon]\rightarrow \omega+1$, $X_f$, $X_g$ are isometric if and only if $f=g$. \begin{claim} Suppose that $f,g\colon [0,\epsilon]\rightarrow \omega+1$ are functions such that $\mathrm{supp}(f)$, $\mathrm{supp}(g)$ are countable dense subsets of $[0,\epsilon]$. Then $d_{GH}(X_f,X_g)=0$. \end{claim} \begin{proof} Note that for every $\delta>0$, there is a bijection $h\colon \mathrm{bound}(f) \rightarrow \mathrm{bound}(g)$ such that $|x-h(x,i)_0|<\delta$ for all $(x,i)\in \mathrm{bound}(f)$. Let $h\times\mathrm{id}\colon X_f\rightarrow \mathrm{bound}(g)\times \mathbb{R}$, $(h\times \mathrm{id})(x,i,y)=(h(x,i),y)$. Then $h\times \mathrm{id}$ is distance preserving and $d_H((h\times\mathrm{id})[X_f],X_g)\leq\delta$. Hence $d_{GH}(X_f,X_g)\leq\delta$. \end{proof} Let $D_q=\{0,q\}$ for $q>0$. Suppose that $(q_n,i_n)_{n\in\omega}$ is an enumeration of $\mathrm{bound}(f_0)$ without repetitions. Suppose that $X$ is a complete metric space with $d_{GH}(X,X_{f_0})=0$. Suppose that $0<\delta<1$.
Since $d_{GH}(X,X_{f_0})<\frac{\delta}{3}$, $X$ is of the form $X=\bigsqcup_{n\in\omega} X_n^{\delta}$ with \begin{enumerate} \item \label{condition: small GH distance} $d_{GH}(X_n^{\delta},D_{q_n})< \delta$ and \item \label{condition: large distance between pieces} $|d(x,y)-1|<\delta$ if $x\in X_m^{\delta}$, $y\in X_n^{\delta}$, and $m\neq n$. \end{enumerate} Let $X_n=X_n^{\frac{1}{2}}$. Conditions \ref{condition: small GH distance} and \ref{condition: large distance between pieces} imply that for all $\delta<\frac{1}{2}$ and all $n$, there is some $m$ with $X_m^{\delta}=X_n$. Hence for each $n$ there is a sequence $(n_i)_{i\in\omega}$ in $\omega$ with $d_{GH}(X_n,D_{q_{n_i}})<\frac{1}{2^i}$. It follows that $1\leq |X_n|\leq 2$. Let $p_n=d(x,y)$ if $X_n=\{x,y\}$. Let $A=\{p_n\mid n\in\omega\}$. \begin{claim} $d(x,y)=1$ for all $x\in X_m$ and $y\in X_n$ with $m\neq n$. \end{claim} \begin{proof} This follows from Condition \ref{condition: large distance between pieces} together with the fact that for all $\delta<\frac{1}{2}$ and all $k$, there is some $l$ with $X_l^{\delta}=X_k$. \end{proof} \begin{claim} $A\subseteq [0,\epsilon]$. \end{claim} \begin{proof} Suppose that $X_n=\{x,y\}$ and $\eta=d(x,y)-\epsilon>0$. Suppose that $X_n=X_m^{\eta}$. This contradicts the fact that $d_{GH}(X_m^{\eta},D_{q_m})< \eta$ by Condition~(\ref{condition: small GH distance}). \end{proof} \begin{claim} $A$ is dense in $(0,\epsilon)$. \end{claim} \begin{proof} Suppose that $U\subseteq (0,\epsilon)$ is nonempty and open with $U\cap A=\emptyset$. Suppose that $(q_n-\delta,q_n+\delta)\subseteq U$. This contradicts the fact that $d_{GH}(X_n^{\frac{\delta}{2}},D_{q_n})< \frac{\delta}{2}$ by Condition (\ref{condition: small GH distance}). \end{proof} Let $f\colon [0,\epsilon]\rightarrow \omega+1$, $f(x)=0$ if $x\notin A$, $f(0)=i$ if $|\{n\in \omega\mid |X_n|=1\}|=i$, and $f(z)=i$ if $|\{n\in \omega\mid \exists x,y\in X_n\ d(x,y)=z\}|=i$ for $z\in (0,\epsilon]$. Then $X_f$, $X$ are isometric.
\end{proof} \subsection{Bi-Katetov functions} \label{ss:bi-Katetov} One can describe being isometric within error $\epsilon$ without referring to a third space. A~\emph{bi-Katetov function} $f \colon X \times Y \to {\mathbb{R}}$ is defined as \[ f(x, y) = d_Z(i(x), j(y)), \] where $i, j$ are isometric embeddings of $X,Y$ into some metric space $Z$ as above. Equivalently, $f$ is $1$-Lipschitz in both variables and \begin{align*} d_X(x, w) &\leq f(x, y) + f(w, y) \\ d_Y(y, z) &\leq f(x, y) + f(x, z) \end{align*} for all $x, w \in X$ and $y, z \in Y$. A bi-Katetov function $f$ can be seen as an approximate isometry. Its error $q_f$ is given by \[ q_f= \max(\sup_x \inf_y f(x, y), \sup_y \inf_x f(x, y)).\] By definition this equals the Hausdorff distance of the isometric images above. For instance, if there is an actual onto isometry $\theta: X \to Y$, we can let $f(x,y) = d_Y(\theta(x), y)$ and obtain the least possible error $0$. Conversely, as mentioned above, if the spaces are complete and the error is $0$ then there is an onto isometry. Clearly we have \[ d_{GH}(X, Y) = \inf_{f } q_f, \] where $f$ runs through all the bi-Katetov functions on $X \times Y$. \begin{remark} \label{rem: extension} If $A \subseteq X$ and $B \subseteq Y$, then any bi-Katetov function defined on $A \times B$ extends to one $f'$ defined on $X \times Y$. One uses amalgamation: \[ f'(x, y) = \inf_{a\in A, b \in B} d_X(x, a) + f(a, b) + d_Y(b, y). \] \end{remark} \subsection{Continuous Scott analysis} We define approximations to $d_{GH}$ from below by induction on countable ordinals. \noindent Suppose $\bar a = \seq {a_i}_{i < n}$ and $\bar b = \seq {b_i}_{i < n}$ are enumerated finite metric spaces. Following Uspenskii~\cite{Uspenskii:08} define \[ r_{0, n} (\bar a, \bar b) = \inf_{f \ \text{is bi-Katetov on} \, \bar a \times \bar b } \max_{i< n} f(a_i, b_i).
\] Uspenskii gives an explicit expression for this in \cite[Proposition 7.1]{Uspenskii:08}: \begin{equation} \label{eq:Uspenskii r0} r_{0, n} (\bar a, \bar b) = \varepsilon /2 \, \text{ where } \, \varepsilon = \max_{i,k < n} | d(a_i, a_k) - d(b_i, b_k)|. \end{equation} (In fact, Uspenskii builds a bi-Katetov function such that $f(a_i, b_i) = \varepsilon/2 $ for each~$i$.) \begin{definition} Suppose $A$ and $B$ are metric spaces and $\bar a \in A^n, \bar b \in B^n$. Define by induction on ordinals $\alpha$: \begin{align*} r_{0, n}^{A, B}(\bar a, \bar b) &= r_{0, n}(\bar a, \bar b) \\ r_{\alpha+1, n}^{A, B}(\bar a, \bar b) &= \max \big( \sup_{x \in A} \inf_{y \in B} r_{\alpha, n+1}^{A, B}(\bar a x, \bar b y), \sup_{y \in B} \inf_{x \in A} r_{\alpha, n+1}^{A, B}(\bar a x, \bar b y) \big) \\ r_{\alpha, n}^{A, B}(\bar a, \bar b) &= \sup_{\beta < \alpha} r_{\beta, n}^{A, B}(\bar a, \bar b), \quad \text{for } \alpha \ \text{a limit ordinal}. \end{align*} \end{definition} Given a metric space $(X, d)$ and $n \ge 1$, we equip $X^n$ with the ``maximum'' metric $d (\bar u, \bar v) = \max_{i< n} d(u_i, v_i)$. The following are not hard to check. \begin{lemma} \label{lem: basic props of r's} Fix separable metric spaces $A, B$ of finite diameter. \begin{enumerate} \item \label{i:1} For each $\alpha$ and each $n$, the functions $r_{\alpha, n}^{A, B}(\bar a, \bar b)$ are 1-Lipschitz in $\bar a$ and $\bar b$. \item \label{i:2} The functions $r_{\alpha, n}^{A, B}(\bar a, \bar b)$ are nondecreasing in $\alpha$. \item \label{i:3}There is $\alpha < \omega_1$ after which all the $r_{\alpha, n}^{A, B}$ stabilize. \end{enumerate} \end{lemma} \begin{theorem}[Ben Yaacov, Nies, Tsankov 2013] \label{prop:GH approx} Let $A,B$ be separable metric spaces of finite diameter. Let $\alpha^*$ be such that $r_{\alpha^*+1, n}^{A, B} = r_{\alpha^* , n}^{A, B}$ for each $n$. Then \[ r_{\alpha^*, 0}^{A, B} = d_{GH}(A,B). 
\] \end{theorem} \begin{proof} Since $A,B$ are fixed we suppress them in our notations. Variables $a, a_i$, etc.\ range over $A$, and $b, b_i$, etc.\ range over $B$. For tuples $\bar a, \bar b$ of length $k$, let \[ \delta_k ( \bar a , \bar b) = \inf_f \{ \max (q_f, \max_{i<k} f(a_i,b_i))\}, \] where $f$ ranges over bi-Katetov functions on $A \times B$. We show that for each $n$ and tuples $\bar a, \bar b$ of length $n$, \[ r_{\alpha^*, n} (\bar a, \bar b) = \delta_n ( \bar a , \bar b).\] For $n=0$ this establishes the theorem. Firstly, we show by induction on ordinals $\alpha$ that \[ r_{\alpha, n} (\bar a, \bar b) \le \delta_n ( \bar a , \bar b).\] The cases $\alpha = 0$ and $\alpha$ a limit ordinal are immediate. For the successor case, suppose that $ \delta_n ( \bar a , \bar b) < s$ via a bi-Katetov function $f$ on $A \times B$. For each $x \in A$ we can pick $y \in B$ such that $f(x,y) < s$. Then $\delta_{n+1} (\bar a x, \bar b y) < s$ via the same $f$. Inductively we have $r_{\alpha, n+1} (\bar ax , \bar by) < s$. Similarly, for each $y \in B$ we can pick $x \in A$ such that $r_{\alpha, n+1} (\bar ax , \bar by) < s$. This shows that $ r_{\alpha+1, n} (\bar a, \bar b) \le s$. Secondly, we verify that \[ \delta_n ( \bar a , \bar b) \le r_{\alpha^*, n} (\bar a, \bar b). \] Suppose $ r_{\alpha^*, n} (\bar a, \bar b) < t$. We combine a back-and-forth argument with the compactness of the space of bi-Katetov functions in order to build a bi-Katetov function $f$ with $q_f \le t$ and $\max_{i < n} f(a_i, b_i ) \le t$. To do so we extend $\bar a, \bar b$ to dense sequences in $A,B$ respectively. Let $D \subseteq A, E \subseteq B$ be countable dense sets. Let $\bar u ^k$ denote a tuple of length $k$; in particular, we can write $\bar a = \bar a^n$ and $\bar b = \bar b^n$. We ensure that \begin{center} $ r_{\alpha^*, k} (\bar a^k, \bar b^k) < t$ for each $k \ge n$. \end{center} Suppose $\bar a ^k , \bar b^k$ have been defined. If $k$ is even, let $a_k$ be the next element in $D$.
Using $r_{\alpha^*+1, k} (\bar a^k, \bar b^k) = r_{\alpha^*, k} (\bar a^k, \bar b^k)$ we can choose $b_k$ so that $ r_{\alpha^*, k+1} (\bar a^{k+1}, \bar b^{k+1}) < t$. Similarly, if $k$ is odd, let $b_k$ be the next element in $E$ and choose $a_k$ as required. By Lemma~\ref{lem: basic props of r's}(2) we have $ r_{0, k} (\bar a^k, \bar b^k) < t$ for each $k\ge n$ via some bi-Katetov function $\widetilde f_k$ defined on $\{ a _0, \ldots, a_{k-1}\} \times \{ b_0, \ldots, b_{k-1} \}$. By Remark~\ref{rem: extension} we can extend this to a bi-Katetov function $ f_k$ defined on $A \times B$. By the compactness of the space of bi-Katetov functions on $A \times B$, viewed as elements of $\mathbb R^{D \times E}$, there is a subsequence $k_0 < k_1 < \ldots$ such that $\seq {f_{k_u}}$ converges pointwise to a bi-Katetov function $f$. Since bi-Katetov functions are 1-Lipschitz in both arguments, this implies $\lim_u f_{k_u} (a_p, b_p) = f(a_p, b_p)$ for each $p$. Therefore $f(a_p,b_p) \le t$. This implies $q_f \le t$ as required. \end{proof} \begin{definition} The \emph{continuous Scott rank} of $A$ is the least $\alpha$ for which \[ r_{\alpha, n}^{A, A}(\bar a_1, \bar a_2) = r_{\alpha+1, n}^{A, A}(\bar a_1, \bar a_2), \quad \text{for all } n, \bar a_1, \bar a_2 \in A^n. \] \end{definition} One can define an equivalence relation $E_{GH}$ on the set of distance matrices $\mathcal M$ by \[ A {E_{GH}} B \iff d_{GH}(A, B) = 0. \] Using the continuous Scott analysis we can show: \begin{thm} Each equivalence class of $E_{GH}$ is Borel. \end{thm} \begin{proof} By induction each $r_{\alpha, n}$ is a Borel function $\+ M \times \+ M \times {\mathbb{N}}^n \times {\mathbb{N}}^n \to [0, 1]$. Next one needs to prove the following. Fix $A_0 \in \+ M$ and let $\alpha_0 = \mbox{\rm \textsf{rank}} A_0$. 
\begin{itemize} \item for each $\alpha$, the set $\{B \in \+ M : \mbox{\rm \textsf{rank}}(B) = \alpha\}$ is Borel; \item $B {E_{GH}} A_0 \iff \mbox{\rm \textsf{rank}}(B) = \alpha_0 \, \land \, r_{\alpha_0, 0}^{A_0, B} = 0$. \end{itemize} \end{proof} \begin{question} Is the function $d_{GH}(A_0, \cdot)$ Borel on $\mathcal M$? \end{question} \section{III. Homeomorphism $\cong_h$} We collect some results, most of which are proved in \cite[Ch.\ 14]{Gao:09}. For general Polish metric spaces, $\cong_h$ is merely known to be $\bf \Sigma^1_2$. Homeomorphism of compact metric spaces $X,Y$ is analytic, because homeomorphisms are uniformly continuous. In fact, by the Banach-Stone theorem, we have \begin{center} $X \cong_h Y \Leftrightarrow \+ C(X) \cong_i \+ C(Y)$; \end{center} so by the aforementioned results of Gao and Kechris on isometry~\cite{Gao.Kechris:03}, $\cong_h$ on compact metric spaces is Borel reducible to an orbit equivalence relation. (A similar argument works for locally compact metric spaces, using $ \+ C_0(X)$, the $C^*$ algebra of continuous functions vanishing at $\infty$; however, for a Polish metric space, to be locally compact is known to be properly $\mathbf \Pi^1_1$.) Camerlo and Gao \cite{Camerlo.Gao:01} proved that graph isomorphism is Borel reducible to homeomorphism of totally disconnected compact metric spaces (i.e., separable Stone spaces). One notes that countable compact metric spaces $X$ won't work here, because $X$ is scattered and hence determined up to homeomorphism by its Cantor-Bendixson rank $\alpha$, together with the size of the last set $X^{(\alpha)}$. The main question remains open. \begin{question} Determine the complexity with respect to $\le_B$ of $\cong_h$ for compact metric spaces. \end{question} In contrast, in the computable case the complexity is known to be as large as possible. \begin{thm} Homeomorphism of compact computable metric spaces is complete for $\Sigma^1_1$ equivalence relations on $\omega$ with respect to computable reductions.
\end{thm} \begin{proof} Friedman et al.\ \cite{Fokina.Friedman.etal:10} showed this for isomorphism of computable graphs. It can be verified that the construction Camerlo and Gao \cite{Camerlo.Gao:01} use for providing their Borel reduction is effective. Hence, if the given graph is computable, then uniformly in its index they build a compact computable metric space. \end{proof} \section{The complexity of particular isometries} Let us return to the leading questions posed initially. It appears that Questions (b) and (c) are closely connected: \vspace{6pt} \noindent {\it It is easy to detect that $X$ is similar to $Y$ $\Leftrightarrow$ \hfill we can determine from $X, Y$ a means via which the similarity holds.} \vspace{6pt} \noindent We will provide some evidence for this thesis, first for compact metric spaces, and then for metric measure spaces studied by Gromov~\cite{Gromov:07} and Vershik. For a function $g$, let $g'$ be the halting problem relative to the graph of $g$. \begin{theorem}[Melnikov, Nies \cite{Melnikov.Nies:13}] Let $X,Y$ be compact metric spaces. Let $A$ be an oracle Turing equivalent to the Turing jump of (the presentation of) $X$ together with $Y$. \begin{itemize} \item[(a)] If $X \cong_i Y$ then there is an isometry $g$ such that $g' \le_{\mathrm{T}} A''$. \item[(b)] There are isometric compact computable metric spaces $X,Y$ with no isometry $g \le_{\mathrm{T}} {\emptyset'}$. \end{itemize} \end{theorem} For (a), note that it suffices to build an isometric embedding $g$. By compactness we can view embeddings as branches on a subtree $T \subseteq \omega^{ < \omega}$ with an $A'$-computable bound on the level size. Now apply the low basis theorem relative to $A'$ in order to obtain $g$. A metric measure (m.m.) space has the form $T= (X,\mu, d)$ where $(X,d)$ is Polish, $\mu$ a Borel probability measure. We may assume that $\mu U > 0$ for any non-empty open $U$; otherwise, replace $X$ by the least conull closed set.
\begin{theorem}[Gromov (1997), see \cite{Gromov:07}] Measure-preserving isometry of m.m.\ spaces is smooth. \end{theorem} Gromov used as invariants the sequence of distributions $D_n$ of the distance matrix of $n$ randomly chosen points. He used moments to show that $\seq {D_n} \sN n$ is a complete invariant for the m.m.\ space $T$. Note that in the subsequence lemma, there is a typo. It should say $\liminf$ there. In 1996 Anatoly Vershik \cite{Vershik:98} gave a proof as well; also see the survey~\cite{Vershik:04}. He describes $T$ by the single invariant $D_T$, the distribution of the distance matrix of a randomly chosen infinite sequence $(x_i)$. More formally, $D_T$ is the push-forward of $\mu^{\omega}$ under $\overline x \mapsto \seq {d(x_i, x_j)}\sN {i,j}$, a measure on the space $\+ M\subseteq {\mathbb{R}}^{\omega\times \omega}$ of distance matrices. He used a form of the law of large numbers to reconstruct $T$ from $D_T$. The 2006 paper by Cameron and Vershik is also relevant here. We can give an effective analysis of Vershik's proof. Let $\+ O \subseteq \omega$ be some $\Pi^1_1$-complete set. \begin{theorem} Suppose $T_1, T_2$ are computable m.m.\ spaces (that is, the measure of Boolean combinations of open balls is uniformly computable). Then there is a measure-preserving isometry $\Theta$ such that $\Theta \le_{\mathrm{T}} \+ O$. \end{theorem} \begin{proof} Recall that $\+ M$ is the Polish space of distance matrices. Following Vershik, we have canonical maps $F_k \colon \, T_k^\omega \to \+ M$, $\overline x \mapsto \seq {d(x_i, x_j)}\sN {i,j}$. Let $D_k = F_k \mu_k ^ \omega$ be the push forward measure on $\+ M$. This is the distribution of the distance matrix for a randomly picked sequence of points in $T_k$. \end{proof} The main source of complexity is that one has to pick an element $r$ in a non-empty $\Sigma^1_1$ class of distance matrices. By the Gandy basis theorem, there is such an $r$ with $\mathcal O^r \le_h \mathcal O$. Ongoing work of Melnikov and Nies reduces this complexity to $\Delta^0_3$.
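For finite m.m.\ spaces, Gromov's invariants can be computed exactly. The following standalone sketch (our own illustration; the function name is ours and not from the sources cited above) computes the distribution $D_n$ of the distance matrix of $n$ independently $\mu$-random points, and shows that already $D_2$ separates two m.m.\ structures on a two-point space.

```python
import itertools
from collections import Counter
from fractions import Fraction

def distance_distribution(mu, d, n):
    """Gromov's invariant D_n of a finite metric measure space:
    the exact distribution of the distance matrix of n points
    sampled independently from mu.  Here mu is a list of point
    masses and d a symmetric matrix of pairwise distances."""
    D = Counter()
    for tup in itertools.product(range(len(mu)), repeat=n):
        prob = Fraction(1)
        for i in tup:
            prob *= mu[i]
        matrix = tuple(tuple(d[i][j] for j in tup) for i in tup)
        D[matrix] += prob
    return D

# Two-point space with distance 1; uniform measure vs. a skewed measure.
d2 = [[0, 1], [1, 0]]
uniform = distance_distribution([Fraction(1, 2), Fraction(1, 2)], d2, 2)
skewed = distance_distribution([Fraction(1, 4), Fraction(3, 4)], d2, 2)
```

Here the matrix with off-diagonal entry $1$ occurs with probability $\frac12$ under the uniform measure but $\frac38$ under the skewed one, so $D_2$ already distinguishes the two m.m.\ spaces, although the underlying metric spaces are isometric.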
\part{Other topics} \section{Yu: A note on the Greenberg-Montalban-Slaman Theorem} Greenberg, Montalban and Slaman prove the following theorem. \begin{theorem}[Greenberg, Montalban and Slaman \cite{GMS11}]\label{theorem: gms 11} Assume that $\omega_1$ is inaccessible in $L$. For any countable structure $\mathcal{M}$, if the set $A=\{x\mid \exists \mathcal{N}\in L[x](\mathcal{N}\cong \mathcal{M})\}$ contains all the nonconstructible reals, then $A=2^{\omega}$. \end{theorem} We prove that, under the weaker assumption that $\omega_1^L<\omega_1$, Theorem \ref{theorem: gms 11} remains true for any $\Sigma^1_2$-equivalence relation. \begin{proof} For any equivalence relation $E$, reduction $\leq_r$ over $2^{\omega}$ and real $x\in 2^{\omega}$, let $$\mathrm{Spec}_{E,r}(x)=\{y\mid \exists z\leq_r y(E(z,x))\}$$ be the $(E,r)$-spectrum of $x$. Let $E$ be a $\Sigma^1_2$-relation and $x$ be a real so that $\mathrm{Spec}_{E,L}(x)\supseteq\{z\mid z\not\in L\}$. Since $E$ is $\Sigma^1_2$, there must be some $\Pi^1_1$-relation $R_0\subseteq (2^{\omega})^3$ so that $$\forall y\forall z (E(y,z)\leftrightarrow \exists s R_0(y,z,s)).$$ By the Shoenfield absoluteness, $$\forall y\forall z (E(y,z)\leftrightarrow \exists s \in L_{\omega_1^{L[y\oplus z]}}[y\oplus z]R_0(y,z,s)).$$ In particular, $$\forall y (E(y,x)\leftrightarrow \exists s \in L_{\omega_1^{L[y\oplus x]}}[y\oplus x]R_0(y,x,s)).$$ Note that, by the assumption, the set $\mathrm{Spec}_{E,L}(x)$ is $\Sigma^1_2(x)$ and conull. Since $\omega_1^L<\omega_1$, there are conull many $L$-random reals. So the set $\{y\mid \omega_1^L=\omega_1^{L[y]}\}$ is conull. We may also assume that $\omega_1^{L[x]}=\omega_1^L$. Then the set $\{y\mid \omega_1^{L[x\oplus y]}=\omega_1^L\}$ is also conull. 
Then $z\in \mathrm{Spec}_{E,L}(x)\leftrightarrow$ $$ \exists t \exists y \exists s (t\mbox{ codes a well ordering } \wedge y\in L_{|t|}[z]\wedge s \in L_{|t|}[y\oplus x]\wedge R_0(y,x,s)).$$ For any real $t$ coding a well ordering, let $$z\in R_{1,t}\leftrightarrow \exists y\in L_{|t|}[z] \exists s\in L_{|t|}[y\oplus x]( R_0(y,x,s)).$$ Then $R_{1,t}\subseteq \mathrm{Spec}_{E,L}(x)$ is a $\Pi^1_1(t\oplus x)$-set and so measurable. Moreover, if $z$ is $L[x]$-random, then $z\in \mathrm{Spec}_{E,L}(x)$ if and only if $z\in R_{1,t}$ for some real $t\in L$ coding a well ordering. Since $\mu(\mathrm{Spec}_{E,L}(x))=1$ and $\omega_1^L<\omega_1$, there must be some $t\in L$ coding a well ordering so that $\mu(R_{1,t})>0$. Fix such a real $t_0\in L$. Then there must be some formula $\varphi$ in the language of set theory so that the set $$R_{1,t_0,\varphi}=\{z\mid \exists y \exists s\in L_{|t_0|}[y\oplus x](\forall n(n\in y\leftrightarrow L_{|t_0|}[z]\models \varphi(n))\wedge R_0(y,x,s))\}$$ has positive measure. Then there must be some $\sigma\in 2^{<\omega}$ so that $$\mu(R_{1,t_0,\varphi}\cap [\sigma])>\frac{7}{8}\cdot 2^{-|\sigma|}.$$ If $x$ were constructible, then $R_{1,t_0}\cap [\sigma]$ would contain a constructible real. Now we try to get rid of the parameter $x$. Let \begin{multline*}S=\{r\mid \mu( \{z\succ \sigma \mid \exists y \exists s\in L_{|t_0|}[y\oplus r]\\ (\forall n(n\in y\leftrightarrow L_{|t_0|}[z]\models \varphi(n))\wedge R_0(y,r,s))\})>\frac{3}{4}\cdot 2^{-|\sigma|}\}.\end{multline*} Then $S$ is a $\Pi^1_1(t_0)$-set and every real in $S$ is $E$-equivalent to $x$. Since $x\in S$, we have that $S$ is not empty. Thus there must be some $t_0$-constructible, and so constructible, real in $S$. This completes the proof.
\end{proof} By a similar method, one can also prove: \begin{proposition}\label{proposition: on pi11} For any $\Pi^1_1$-equivalence relation $E$ and real $x$, if $\mathrm{Spec}_{E,h}(x)\supseteq\{z\mid z\not\in \Delta^1_1\}$, then $\mathrm{Spec}_{E,h}(x)= 2^{\omega}$. \end{proposition} Let $MA$ denote Martin's Axiom. By a similar method, one can also prove: \begin{theorem}\label{theorem: boldface} Assume that $MA \, \land \, 2^{\aleph_0}>\aleph_1 \, \land \, \omega_1^L=\omega_1$. Then for any reals $x_0$ and $x$, and any $\Sigma^1_2(x_0)$-relation $E$, if $\mathrm{Spec}_{E,L}(x)\supseteq\{z\in 2^{\omega}\mid z\not\in L[x_0]\}$, then $\mathrm{Spec}_{E,L}(x)= 2^{\omega}$. \end{theorem} So the large cardinal assumption in \cite{GMS11} is unnecessary. \newpage Note: \emph{Normality of a real relative to non-integral bases, and uniform distribution} has moved to the 2014 Blog. \section{Turetsky: $K^X \ge_T X$} Proved by Miller and Turetsky, and then vastly simplified by Bienvenu. Let $K$ denote prefix-free descriptive string complexity. \begin{proposition} For any real $X$, $K^X \ge_T X$. \end{proposition} \begin{proof} $X$ is $X$-trivial. That is, $K^X(X\!\!\upharpoonright_n) \le K^X(n) + c$ for some constant $c$ and all $n$. Note that $K^X$ can compute the tree $\{\sigma \in 2^{<\omega} : K^X(\sigma) \le K^X(|\sigma|) + c\}$. This tree has finitely many infinite paths, and $X$ is one of them. Since $X$ is therefore an isolated path, $K^X$ can compute $X$. \end{proof} \section[Notes on a theorem of Hirschfeldt, Kuyper and Schupp]{Nies: Notes on a theorem of Hirschfeldt, Kuyper and Schupp regarding coarse computation and $K$-triviality} Recall that we write $X \le_{ibT} Y$ if $X \le_{\mathrm{T}} Y$ with use function bounded by the identity. When building prefix-free machines, we use the terminology of \cite[Section 2.3]{Nies:book}, such as the Machine Existence Theorem (also called the Kraft-Chaitin Theorem), bounded request set, etc.
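As an aside, the combinatorial core of the Machine Existence Theorem can be illustrated in a few lines. The standalone sketch below (our own; the function name is hypothetical) serves requests of lengths $n_i$ with $\sum_i 2^{-n_i} \le 1$ by prefix-free codewords, online: always splitting the longest free slot that fits keeps the free slots at pairwise distinct lengths, which is why the greedy choice never gets stuck.

```python
def kraft_chaitin(lengths):
    """Greedily assign prefix-free binary codewords to requested lengths.
    Succeeds whenever sum(2**-n for n in lengths) <= 1: choosing the
    longest free slot keeps all free slots at pairwise distinct lengths."""
    free = {""}          # prefix-free set of unassigned slots
    codewords = []
    for n in lengths:
        s = max((t for t in free if len(t) <= n), key=len)
        free.remove(s)
        codewords.append(s + "0" * (n - len(s)))
        # return the rest of the cylinder [s] to the free list
        for i in range(n - len(s)):
            free.add(s + "0" * i + "1")
    return codewords

# Example: requested lengths 3, 1, 3, 2 (Kraft sum 1/8 + 1/2 + 1/8 + 1/4 = 1).
words = kraft_chaitin([3, 1, 3, 2])
```

On this input the sketch returns codewords of the requested lengths, no one of which is a prefix of another.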
Hirschfeldt, Kuyper and Schupp (2013) proved the following in slightly different language. \begin{thm} \label{thm:Kuc-g} Let $Y$ be a $\Delta^0_2$ set of positive effective Hausdorff dimension. There is a cost function $\mathbf{c}$ such that $A \models \mathbf{c}$ implies $A \le_{ibT} D $ for any set $D$ with $\overline \rho (D \triangle Y) =0$. Moreover, if $Y$ is $\omega$-c.e., then $\mathbf{c}$ can be chosen to be benign.\end{thm} \begin{proof} The proof given here extends a similar result in \cite{Greenberg.Nies:11}, and also \cite[Thm 5.5]{Nies:nd}. By the hypothesis on $Y$ there is a positive rational $\delta$ such that \[ 3\delta < \liminf_n K(Y \uhr n)/n.\] Let $\seq {Y_s}$ be a computable approximation of $Y$. To help with building a reduction procedure for $A \le_{ibT} D$, via the Machine Existence Theorem we give prefix-free descriptions of initial segments $Y_s\uhr e$. On input $x$, if at a stage $s>x$, $e$ is least such that $Y(e)$ has changed between stages $x$ and $s$, then we still hope that $Y_s \uhr e$ is the final version of $Y \uhr e$. So whenever $A(x)$ changes at such a stage $s$, we give a description of $Y_s\uhr e$ of length $\lfloor \delta e \rfloor$. We will define an appropriate cost function $\mathbf{c}$ so that a set $A $ that obeys~$\mathbf{c}$ changes little enough that we can provide all the descriptions needed. To ensure that $A \le_{ibT} D$, we define a computation $ \Gamma(D \uhr x) $ with output $A(x)$ at the least stage $t\ge x $ such that $(Y_t \triangle D)\uhr e$ has sufficiently few $1$'s for each $e \le x$. Then $A(x) $ cannot change at any stage $s> t$ (for almost all $x$), for otherwise $Y_s \uhr e$ would receive a description of length $\lfloor \delta e \rfloor$, where $e$ is least such that $Y(e)$ has changed between $x$ and $s$. We give the details. Let $H$ denote the binary Bernoulli entropy. Choose a rational $\beta > 0$ such that $H(\beta )\le \delta$.
This implies that no more than $\tp{\delta n}$ strings $v$ of length $n$ have at most $ \beta n$ many $1$'s (see the Wikipedia page on binomial coefficients). Therefore, for each such string $v$, we have \begin{equation} \label{eqn:count 1s} K(v) \le \delta n + 2 \log n + O(1). \end{equation} Next we give a definition of a cost function $\mathbf{c}$. Let $ \mathbf{c}(x,s) = 0 $ for each $x\ge s$. If $x<s$, and~$e< x$ is least such that $Y_{s-1} ( e) \neq Y_s ( e)$, let \begin{equation} \label{eqn:defn of cY} \mathbf{c}(x,s) = \max( \mathbf{c}(x,s-1), \tp{-\lfloor \delta e \rfloor} ). \end{equation} We show that $A \models \mathbf{c}$ implies $A \le_{ibT} D$ for any set $D$ with $\overline \rho (D \triangle Y) =0$. We may suppose that $\mathbf{c}{\seq{A_s}} \le 1$. Enumerate a bounded request set~$L$ as follows. When $A_{s-1}(x)\neq A_s(x)$ and $e$ is least such that $e=x$ or $Y_{t-1} ( e) \neq Y_t ( e)$ for some $t \in [x,s)$, put the request $\langle \lfloor \delta e \rfloor +1 , Y_s \uhr e \rangle$ into $L$. Then $L$ is indeed a bounded request set. We show $A \le_{ibT} D$. Choose $e_0$ such that $4 \log e \le \delta e$ for each $e \ge e_0$, and such that for each $e \ge e_0$ the number of $1$'s in $(Y \triangle D) \uhr e$ is at most $\beta e/2$. Choose $s_0\ge e_0$ such that $Y_s \uhr{e_0}$ is stable for each $s \ge s_0$. Given an input $x\ge s_0$, using~$D$ as an oracle, compute $t >x$ such that \[ \forall e. e_0 \le e \le x [ (Y_t \triangle D) \uhr e \, \text{has at most} \, \beta e/2 \, \text{many}\, 1\text{'s}].\] We claim that $A(x) = A_t(x)$. Otherwise $A_{s}(x) \neq A_{s-1}(x)$ for some $s > t$. Let $e \le x$ be the largest number such that $Y_r\uhr e = Y _t \uhr e$ for all~$r$, $t < r \le s$. If $e < x$ then $Y(e)$ changes in the interval $(t,s]$ of stages. Hence, by the choice of $t$, we cause $K(Y_s \uhr e) \le \lfloor \delta e \rfloor + O(1)$. Since $e \ge e_0$, the string $(Y_s \triangle Y) \uhr e$ has at most $\lfloor \beta e \rfloor $ many $1$'s.
Thus, by (\ref{eqn:count 1s}), \[ K(Y \uhr e) \le K(Y_s \uhr e) + K((Y_s \triangle Y) \uhr e ) +O(1)\le 2 \delta e + 4 \log e + O(1). \] Since $4 \log e \le \delta e$ for $e \ge e_0$, the right-hand side is at most $3\delta e + O(1)$, which contradicts the choice of $\delta$ for $e$ large enough. \end{proof} \bibliographystyle{plain}
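The counting fact used in the proof above -- for $\beta \le 1/2$, at most $\tp{H(\beta) n}$ strings of length $n$ have at most $\beta n$ many $1$'s -- is easy to check numerically. Here is a small standalone sketch (our own, with hypothetical helper names):

```python
import math

def bin_entropy(b):
    """Binary (Bernoulli) entropy H(b) in bits."""
    if b in (0, 1):
        return 0.0
    return -b * math.log2(b) - (1 - b) * math.log2(1 - b)

def count_low_ones(n, beta):
    """Number of binary strings of length n with at most beta*n ones."""
    return sum(math.comb(n, k) for k in range(int(beta * n) + 1))

# Standard bound: sum_{k <= beta*n} C(n, k) <= 2^{H(beta) n} for beta <= 1/2.
for n in (10, 50, 200):
    for beta in (0.1, 0.25, 0.5):
        assert count_low_ones(n, beta) <= 2 ** (bin_entropy(beta) * n)
```

For instance, at $n = 10$ and $\beta = 1/2$ the left side is $638$ and the right side is $2^{10} = 1024$.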
https://arxiv.org/abs/2206.05795
Powers with minimal commutator length in free products of groups
Given a free product of groups $G = {\large *}_{j \in J} A_j$ and a natural number $n$, what is the minimal possible commutator length of an element $g^n \in G$ not conjugate to elements of the free factors? We give an exhaustive answer to this question.
\section{Introduction} It is well known that a proper power of a nonidentity element cannot be a commutator in a free group \cite{Sch59}. Clearly, the square of a nonidentity element can be a product of two commutators and the cube of a nonidentity element can be a product of three commutators. Culler \cite{Cull81} showed that in the free group $F(a, b)$ a cube can be a product of two commutators: $[a,b]^3 = [a^{-1}ba, a^{-2}bab^{-1}][bab^{-1}, b^2]$, where $[a, b] \coloneqq a^{-1} b^{-1} a b$. Moreover, Culler showed that the element $[a, b]^n$ can always be decomposed into a product of $\intpart*{n \mathbin{/} 2} + 1$ commutators (where $[x]$ is the integer part of $x$). The minimal integer $k$ such that an element $g$ of a group $G$ can be decomposed into a product of $k$ commutators is called the commutator length of $g$ and denoted by $\cl(g)$. Thus $\cl([a, b]^n) \leq \intpart*{n \mathbin{/} 2} + 1$. It turned out that this estimate is sharp for free groups: for any nonidentity element $g$ of a free group, $\cl(g^n) \geq \intpart*{n \mathbin{/} 2} + 1$. This was first proved by Comerford, Comerford and Edmunds \cite{CCE91} for products of two commutators, and then by Duncan and Howie \cite{DH91} in the general case. Moreover, they proved a similar assertion for free products of locally indicable groups: if $g$ is an element of a free product of locally indicable groups such that $g$ is not conjugate to elements of the free factors, then $\cl(g^n) \geq \intpart*{n \mathbin{/} 2} + 1$. This assertion turned out to be true in a free product of arbitrary torsion-free groups; this was discovered independently by Ivanov and Klyachko \cite{IK18} and by Chen \cite{Ch18}. \begin{definition} Let $G$ be a group with a fixed free-product decomposition: $G=\mathbin{\text{\Large$*$}}_{j\in J} A_j$.
We denote by $\hat{G}$ the set of all elements of $G$ not conjugate to elements of the free factors, and define $k(G, n)$ as the minimal number $k$ such that an element $g^n \in \hat{G}$ can be decomposed into a product of $k$ commutators. \end{definition} Thus it follows from \cite{Cull81} in conjunction with \cite{IK18} or \cite{Ch18} that $$k(G, n) = \intpart*{\frac{n}{2}} + 1$$ for free products of torsion-free groups. For free products of arbitrary groups Culler's estimate is not sharp any longer. For example, in the free product $\langle a \rangle_3 * \langle b \rangle$ a cube can be a commutator: $[a, b]^3 = [b^{-1}aba, ab^{-1}ab]$. We denote by $N(G)$ the minimal order of a nonidentity element of $G$. In \cite{IK18} it was proved that the same estimate as for free products of torsion-free groups holds true for free products of arbitrary groups, but only if $n$ is relatively small: $$k(G, n) = \intpart*{\frac{n}{2}} + 1 \text{,}\quad \text{ if } n < N(G)\text{.}$$ Whereas in \cite{Ch18} it was proved that $$k(G, n) \geq \intpart*{\frac{n - \intpart*{\frac{2n}{N(G)}}}{2}} + 1\text{.}$$ The author and Klyachko \cite{BK22} generalized these estimates. It was shown that $$k(G, n) \geq \intpart*{\frac{n}{2}} - \intpart*{\frac{n}{N(G)}} + 1\text{.}$$ In this work we prove that this estimate is sharp. \begin{theorem} \label{main_theorem} Let $G=\mathbin{\text{\Large$*$}}_{j\in J} A_j$ be a free product of nontrivial groups. Then $$k(G,n) = \intpart*{\frac{n}{2}} - \intpart*{\frac{n}{N(G)}} + 1\text{.}$$ \end{theorem} Actually, we prove a more general result. \begin{definition} Let $G$ be a group with a fixed free-product decomposition: $G=\mathbin{\text{\Large$*$}}_{j\in J} A_j$. For an element $g \in \hat{G}$ with a cyclically reduced form $a_{j_1, 1} \ldots a_{j_m, m}$ (where $a_{j_i, i} \in A_{j_i}$), we denote by $N(g)$ the minimal order of its letters $a_{j_1, 1}$, ..., $a_{j_m, m}$. 
For $N \in \{N(g) \mid g \in \hat{G}\}$ we define $k(G, n, N)$ as the minimal number $k$ such that an element $g^n \in \hat{G}$ with $N(g)=N$ can be decomposed into a product of $k$ commutators. \end{definition} \begin{theorem} \label{general_theorem} Let $G=\mathbin{\text{\Large$*$}}_{j\in J} A_j$ be a free product of nontrivial groups and $N$ be a natural number. If $N \in \{N(g) \mid g \in \hat{G}\}$, then $$k(G,n,N) = \intpart*{\frac{n}{2}} - \intpart*{\frac{n}{N}} + 1\text{.}$$ \end{theorem} Similarly to Culler's examples, the minimal commutator length is achieved on powers of commutators. \begin{theorem} \label{a_t_theorem} Let $G=\mathbin{\text{\Large$*$}}_{j\in J} A_j$ be a free product of nontrivial groups. If $a \in A_{j_1}$ and $t \in A_{j_2}$ are two nonidentity elements lying in different free factors such that $\ord(a) \leq \ord(t)$, then $$\cl([a, t]^n) = \intpart*{\frac{n}{2}} - \intpart*{\frac{n}{\ord(a)}}+ 1\text{.}$$ \end{theorem} \begin{remark} If $N(G)$, $N$ or $\ord(a)$ is infinite, we naturally consider $\intpart*{n \mathbin{/} N(G)}$, $\intpart*{n \mathbin{/} N}$ or $\intpart*{n \mathbin{/} \ord(a)}$ to be zero. In that case $k(G, n)$, $k(G, n, N)$ or $\cl([a, t]^n)$ is equal to $\intpart*{n \mathbin{/} 2} + 1$ which corresponds to the known results for free products of torsion-free groups. \end{remark} \begin{corollary} If $a$ and $b$ are two nonidentity elements of a group $G$, then $$\cl([a, b]^n) \leq \intpart*{\frac{n}{2}} - \intpart*{\frac{n}{\ord(a)}} + 1\text{.}$$ \end{corollary} For example, if $a^3=1$, then all powers of $[a, b]$ up to $7$ are equal to a product of $2$ commutators. Moreover, $[a, b]^9$ is also a product of $2$ commutators and $[a, b]^3$ is a commutator itself. If $a^4=1$, then $[a, b]^4$ is a product of $2$ commutators and $[a, b]^8$ is a product of $3$ commutators. 
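The numbers in these examples are instances of the bound $\intpart*{n \mathbin{/} 2} - \intpart*{n \mathbin{/} N} + 1$; here is a small standalone sketch (ours; the function name is hypothetical) that evaluates it:

```python
def k_bound(n, N=None):
    """Evaluate floor(n/2) - floor(n/N) + 1, the value of k(G, n) when
    N = N(G) is the minimal order of a nonidentity element of G.
    N=None stands for the torsion-free case, where floor(n/N) vanishes."""
    return n // 2 - (n // N if N else 0) + 1

# With a of order 3: [a,b]^n is a product of k_bound(n, 3) commutators.
assert [k_bound(n, 3) for n in range(1, 10)] == [1, 2, 1, 2, 2, 2, 2, 3, 2]
```

In particular $k\_bound(7, 3) = k\_bound(9, 3) = 2$ and $k\_bound(3, 3) = 1$, matching the examples with $a^3 = 1$, while $k\_bound(4, 4) = 2$ and $k\_bound(8, 4) = 3$ match those with $a^4 = 1$.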
A geometric language is used to prove these theorems: for each $a$, $t$ and $n$ we construct a Howie diagram $D$ on a closed oriented surface of genus $k$, where $k = \intpart*{n \mathbin{/} 2} - \intpart*{n \mathbin{/} \ord(a)} + 1$, such that $D$ has only one face, the label of this face is $[a, t]^n$ and all vertices of $D$ are interior. The existence of such a diagram implies that $[a, t]^n$ can be decomposed into a product of $k$ commutators. We start with the definition of Howie diagrams and their relation to products of commutators in Section \ref{howie_diagram_section}. Diagrams for $[a, t]^n$ with the minimal possible genus are constructed in Section \ref{minimal_diagrams_section}. These diagrams are used to prove the theorems in Section \ref{proof_section}. \section{Howie diagrams and products of commutators} \label{howie_diagram_section} Diagrams similar to those we will now define were previously considered in \cite{How83}, \cite{How90}, \cite{Kl93}, \cite{Le09} and many other works. But our definitions slightly differ and are taken from \cite{BK22}. Namely, suppose that $S$ is a closed oriented surface, and $\Gamma$ is a finite undirected graph embedded into $S$ which divides it into simply connected domains. Such a graph determines a cell decomposition of $S$, i.e., a mapping $\mathrm{M}$ called a map on $S$: $$ \mathrm{M} : \bigsqcup_{i=1}^m D_i \to S \text{,} $$ where $D_i$ are two-dimensional disks. The mapping $\mathrm{M}$ is continuous, surjective and injective on the interior (i.e., on $\bigsqcup\limits_{i=1}^m(D_i\setminus \partial D_i)$), the preimage of each point is finite, and the preimage of the graph $\Gamma$ is the union of the boundaries of the faces: ${\mathrm{M}}^{-1}(\Gamma)=\bigsqcup_{i=1}^m \partial D_i$. The preimages of the vertices of $\Gamma$ are called corners of the map. We say that a corner $c$ is at a vertex~$v$ if ${\mathrm{M}}(c)=v$. 
The vertices and edges of $\Gamma$ are referred to as vertices and edges of the map $\mathrm{M}$. The disks $D_i$ are called faces or cells of the map. Such a map is called a diagram over a free product $A*B$ if: \begin{enumerate} \item The graph $\Gamma$ is bipartite. There are two types of vertices: $A$-vertices and $B$-vertices, and each edge joins an~$A$-vertex with a $B$-vertex. \item The corners at $A$-vertices are labeled by elements of the group $A$ and the corners at $B$-vertices are labeled by elements of $B$. \item Some vertices are distinguished and called exterior. All the other vertices are called interior. \item The label of each interior $A$-vertex equals $1$ in the group $A$ and the label of each interior $B$-vertex equals $1$ in the group $B$, where the label of a vertex is the product of labels of corners at this vertex taken clockwise (thus the label of a vertex is defined up to conjugation in $A$ or $B$). \end{enumerate} \begin{figure}[!h] \centering \vspace{0.25cm} \resizebox{!}{5.5cm}{\input{path_label.tex}} \vspace{0.25cm} \caption{Labels of paths.} \label{fig:path_label} \end{figure} The label of a face of a diagram is the product of labels of all corners of this face taken counterclockwise. It is an element of the free product $A*B$ defined up to conjugation. Now let us define the label of a path. First, we construct an auxiliary graph $\Gamma'$ by insertion of additional $A$- and $B$-vertices of degree $2$ at all the edges of the graph $\Gamma$, so that the new graph $\Gamma'$ is again bipartite. All the corners of these new vertices are labeled by $1$. Let $p$ be a path in $\Gamma'$ whose endpoints are not vertices of $\Gamma$. We represent $p$ as a composition of paths $p_1 \ldots p_m$ such that each $p_i$ traverses one and only one vertex of $\Gamma$ and its endpoints are not vertices of $\Gamma$. 
The label $l(p)$ of the path $p$ is defined as the product $l(p_1) \ldots l(p_m)$, where the label $l(p_i)$ of the path $p_i$ traversing only one vertex $v$ of $\Gamma$ is the product of labels of all the corners of $v$ lying to the left of the path $p_i$. The corners are taken clockwise starting from the corner adjacent to the edge of $\Gamma$ containing the initial vertex of $p_i$. See Fig. \ref{fig:path_label} for an example. The path $p_1$ traverses only one vertex and the path $p_2$ traverses two vertices. Their labels are $l(p_1) = a_2 a_3 a_4$ and $l(p_2) = a_1 a_2 a_3 b_1$. The labels of their inverses are $l(p_1^{-1}) = a_5 a_1$ and $l(p_2^{-1}) = b_2 b_3 a_4 a_5$. Note that $l(p_1) l(p_1^{-1}) = a_2 a_3 a_4 a_5 a_1$ is equal to the label of the vertex traversed by $p_1$. In general, if a path $p$ traverses only interior vertices, then $l(p) l(p^{-1}) = 1$. Thus if all vertices of a diagram are interior and a path $p'$ is obtained from a path $p$ by removal or insertion at any place of a subpath of the form $q q^{-1}$, then $l(p)=l(p')$. In particular, if a path $p$ can be transformed into the trivial path by consecutive removals of subpaths of the form $q q^{-1}$, then $l(p)=1$. \begin{figure}[!h] \centering \vspace{0.25cm} \resizebox{!}{2.5cm}{\input{diagram_example.tex}} \vspace{0.25cm} \caption{An example of a diagram on a torus.} \label{fig:diagram_example} \end{figure} Now let us look at an example of a diagram over the free product $\langle a \rangle_3 * \langle b \rangle_3$ shown in Fig. \ref{fig:diagram_example}. It is placed on a torus represented as a rectangle with opposite sides identified. The diagram has two interior vertices, three edges and a face whose label is $(ab)^3$. This is a geometric interpretation of the fact that the element $(ab)^3$ is a commutator in the free product $\langle a \rangle_3 * \langle b \rangle_3$. It follows from the next lemma. 
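The commutator can also be exhibited explicitly: one pair of witnesses is $x = ba^2$ and $y = bab$, for which $[x, y] = x^{-1}y^{-1}xy$ reduces to $(ab)^3$ in $\langle a \rangle_3 * \langle b \rangle_3$. (This particular pair is our own choice for illustration and is not read off Fig. \ref{fig:diagram_example}.) A short sketch that reduces words in the free product to normal form confirms the identity:

```python
# Words in <a>_3 * <b>_3 are lists of (letter, exponent) syllables; exponents
# are taken mod 3, and adjacent syllables with the same letter are merged.
def reduce_word(word):
    out = []
    for letter, exp in word:
        exp %= 3
        if exp == 0:
            continue
        if out and out[-1][0] == letter:
            exp = (out.pop()[1] + exp) % 3
            if exp:
                out.append((letter, exp))
        else:
            out.append((letter, exp))
    return out

def mul(*words):                          # product of words, in normal form
    return reduce_word([syl for w in words for syl in w])

def inv(word):                            # inverse word, in normal form
    return reduce_word([(letter, -exp) for letter, exp in reversed(word)])

def comm(x, y):                           # [x, y] = x^{-1} y^{-1} x y
    return mul(inv(x), inv(y), x, y)

a, b = [('a', 1)], [('b', 1)]
ab_cubed = mul(a, b, a, b, a, b)          # (ab)^3, already in normal form
x, y = mul(b, a, a), mul(b, a, b)         # b a^2  and  b a b
print(comm(x, y) == ab_cubed)             # True: (ab)^3 is a single commutator
```

The same reduction routine, with the modulus changed, performs word arithmetic in any free product of two cyclic groups.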
\begin{lemma}\label{diagram_lemma} Let $u$ be a cyclically reduced element of a free product $A * B$ not conjugate to elements of the free factors. If there is a diagram $D$ over $A * B$ on a closed oriented surface of genus $k$ such that $D$ has only one face, the label of this face is $u$ and all the vertices of $D$ are interior, then $u$ is a product of $k$ commutators. \end{lemma} Let $D$ be a diagram on a surface $S$ defined by a graph $\Gamma$. In the following proof, by a path we mean either a path in the graph $\Gamma$, consisting of edges, or a path on the surface $S$ as a continuous map from the unit interval to $S$. If $p$ is a path in $\Gamma$, we also denote by $p$ the corresponding path on $S$ obtained by natural identification of each edge of the path with the unit interval. \begin{proof} Let us represent the surface $S$ of the diagram $D$ as a standard $4k$-gon $P_{4k}$ with identified boundary edges. Choose a vertex $Q$ of the auxiliary graph $\Gamma'$ such that it is not a vertex of $\Gamma$ and the label of the face read starting from this vertex is $u$. Denote the closed boundary path of the face starting at $Q$ as $p$. Note that $l(p)=u$ and $p$ is homotopic to the boundary path of $P_{4k}$ conjugated by a simple path $q$ connecting the vertex $Q$ to a vertex of $P_{4k}$. This conjugated boundary path is equal to $[a_1, b_1] \ldots [a_k, b_k]$, where $a_i$ and $b_i$ are the closed paths corresponding to the edges of $P_{4k}$ conjugated by $q$. Choose a point $C$ on the surface such that $C$ does not belong to $\Gamma$ or to the boundary of $P_{4k}$. An arbitrary closed path $r$ on the surface $S \setminus C$ is homotopic to some closed path in $\Gamma'$. Indeed, consider the preimage $\mathrm{M}^{-1}(r)$ of this path on the disk $D_1$ (where $\mathrm{M}$ and $D_1$ are taken from the definition of $D$). Centrally project $\mathrm{M}^{-1}(r)$ on the boundary $\partial D_1$ through the point $\mathrm{M}^{-1}(C)$ and take the image of this projection.
Denote this new closed path as $r'$. Clearly, it lies in $\Gamma'$ and it is homotopic to $r$ on $S \setminus C$ by its construction. Finally, we turn $r'$ into a path in $\Gamma'$ by additional homotopy. Hence the paths $a_i$ and $b_i$ are homotopic to some paths $a'_i$ and $b'_i$ in the graph $\Gamma'$ and the path $p$ is homotopic to $p' = [a'_1, b'_1] \ldots [a'_k, b'_k]$. We can assume that this homotopy lies in $\Gamma'$ because it can be projected through $\mathrm{M}^{-1}(C)$. Thus the path $p^{-1} p'$ is homotopic in the graph to the trivial path. This means that $p^{-1} p'$ can be transformed into the trivial path by consecutive removals of subpaths of the form $q q^{-1}$. Since all the vertices of the diagram are interior, $1 = l(p^{-1} p') = l(p^{-1}) l(p') = l(p)^{-1} l(p')$. Thus $$u = l(p) = l(p') = [l(a'_1), l(b'_1)] \ldots [l(a'_k), l(b'_k)]\text{.}$$ \end{proof} \section[]{Diagrams for $\boldsymbol{[a, t]^n}$} \label{minimal_diagrams_section} In this section we prove the following lemma. \begin{lemma}\label{D_n_N_lemma} Let $a \in A$ and $t \in T$ be two elements of two groups, and let $n$ and $N$ be two natural numbers such that $n \geq N \geq 3$. If $N$ is even or $N$ and $n$ are odd, then there is a diagram $D_{n, N}$ over the free product $A * T$ on a closed oriented surface of genus $\intpart*{n \mathbin{/} 2} - \intpart*{n \mathbin{/} N} + 1$ such that $D_{n, N}$ has only one face and the label of this face is $[a, t]^n$. All the vertices of $D_{n, N}$ are interior if $a^N=1$. \end{lemma} \begin{proof} We explicitly construct the desired diagram on a closed surface represented by a rectangle whose top and bottom sides are subdivided into $2k$ edges. Each edge from the top side has a corresponding edge on the bottom side. If we denote the top edges as $e_1, \ldots, e_{2k}$, then the bottom edges are $e_2, e_1, \ldots, e_{2k}, e_{2k-1}$. The surface is obtained by identification of the top edges with the corresponding bottom edges.
The left and right sides of the rectangle are contracted to a single point. See Fig. \ref{fig:contour} for an example. Such a rectangle forms a closed oriented surface of genus $k$ if there are $2k$ edges on the top. We also admit a degenerate rectangle without top and bottom edges which represents a sphere. \begin{figure}[!h] \centering \vspace{0.25cm} \resizebox{!}{4cm}{\input{contour.tex}} \vspace{0.25cm} \caption{A closed oriented surface of genus $k$.} \label{fig:contour} \end{figure} The diagram is defined by its graph $\Gamma$ drawn on this rectangle. All corners of $A$-vertices will have label $a$ or $a^{-1}$. All corners of $T$-vertices will have label $t$ or $t^{-1}$. Some $A$-vertices will be labeled by $A^+$ or $A^-$. All corners of such vertices are labeled by $a$ or $a^{-1}$ respectively. Let us represent $n$ as $n=rN+q$, where $0 \leq q < N$. The diagram is constructed depending on the parities of $N$, $r$ and $q$. We consider the following cases (where $s = \intpart*{q \mathbin{/} 2}$): \begin{enumerate} \item $n=rN+2s$, where $N$ and $r$ are odd. \item $n=rN+2s+1$, where $N$ is odd and $r$ is even. \item $n=rN+2s$, where $N$ is even. \item $n=rN+2s+1$, where $N$ is even. \end{enumerate} \vspace{0.2cm} \noindent \textbf{Case 1. $\boldsymbol{n=rN+2s}$, where $\boldsymbol{N}$ and $\boldsymbol{r}$ are odd} \vspace{0.05cm} \noindent First, we construct a diagram $D_{N, N}$. Consider a rectangle with $N-1$ top edges $e_i$. Place one $A^+$-vertex $v^+$ of degree $N$ at the top half of the rectangle and one $A^-$-vertex $v^-$ of degree $N$ at the bottom half of the rectangle. We define a $t$-arc as two edges incident to an interior $T$-vertex of degree $2$. For each $i$ connect $v^+$ and $v^-$ using a $t$-arc crossing the edge $e_i$ of the rectangle. Then connect $v^+$ and $v^-$ using a $t$-arc lying inside the rectangle. In total there are $N$ arcs. 
Corners of $T$-vertices of these $t$-arcs are labeled in such a way that, moving counterclockwise along these arcs from $v^+$ to $v^-$, we meet the corner with label $t$. See Fig. \ref{fig:D_1_1_and_D_3_3_and_D_5_5} for examples. Note that we also consider the degenerate diagram $D_{1, 1}$, which will be used later. \begin{figure}[!h] \centering \vspace{0.25cm} \resizebox{!}{4.5cm}{\input{D_1_1.tex}} \resizebox{!}{4.5cm}{\input{D_3_3.tex}} \hspace{0.4cm} \resizebox{!}{4.5cm}{\input{D_5_5.tex}} \vspace{0.25cm} \caption{$D_{1,1}$, $D_{3,3}$ and $D_{5,5}=D_{3, 3} \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} D_{3, 3}$.} \label{fig:D_1_1_and_D_3_3_and_D_5_5} \end{figure} The diagram $D_{N, N}$ has genus $(N - 1) \mathbin{/} 2$. If $D_{N,N}$ has only one face, then its label is $[a, t]^N$. Clearly, $D_{3, 3}$ has only one face. To prove this for other $N$, we define a composition of two diagrams. \begin{definition} Let $D$ be a diagram drawn on a rectangle with $2k$ edges $e_i$. If $k$ is positive, we call the left border of $D$ a path which goes along the graph $\Gamma$ from the leftmost intersection point of $\Gamma$ with the bottom side of the rectangle to the leftmost intersection point of $\Gamma$ with the top side of the rectangle. We call the right border of $D$ a path which goes along the graph $\Gamma$ from the rightmost intersection point of $\Gamma$ with the top side of the rectangle to the rightmost intersection point of $\Gamma$ with the bottom side of the rectangle. \end{definition} Note that the left and right borders are oriented. The left border is traversed from bottom to top and the right border is traversed from top to bottom. \begin{definition} Let $D$ be a diagram drawn on a rectangle. Let $(v^{+}, v^{-})$ be a pair of an $A^{+}$-vertex and an $A^{-}$-vertex, both lying on the left border of $D$ or both lying on the right border of $D$.
We call such a pair positively oriented if, moving along the respective border of $D$, we first meet $v^{+}$. Otherwise, we call such a pair negatively oriented. \end{definition} \begin{definition} Let $D_1$ and $D_2$ be two diagrams drawn on rectangles $R_1$ and $R_2$. We denote by $D_1 + D_2$ a new diagram obtained as follows: remove the right side of $R_1$ and the left side of $R_2$, and glue the two rectangles along the removed sides. \end{definition} \begin{remark} Note that $D_1 + D_2$ is not actually a correct diagram, since it has non-simply-connected domains. Nevertheless, we call it a diagram for simplicity of notation. \end{remark} \begin{definition} Let $D_1$ and $D_2$ be two diagrams drawn on rectangles. Suppose that there are an $A^{+}$-vertex $v^{+}_1$ and an $A^{-}$-vertex $v^{-}_1$ both lying on the right border of $D_1$ and there are an $A^{+}$-vertex $v^{+}_2$ and an $A^{-}$-vertex $v^{-}_2$ both lying on the left border of $D_2$, such that $v^{+}_2$ and $v^{-}_2$ are connected by a $t$-arc. Suppose also that the pair $(v^{+}_1, v^{-}_1)$ and the pair $(v^{+}_2, v^{-}_2)$ have different orientations. We denote by $D_1 \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} D_2$ a diagram obtained from $D_1 + D_2$ as follows: remove the $t$-arc connecting $v^{+}_2$ and $v^{-}_2$, glue $v^{+}_2$ to $v^{+}_1$ and glue $v^{-}_2$ to $v^{-}_1$. \end{definition} For example, $D_{5,5} = D_{3, 3} \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} D_{3, 3}$ (see Fig. \ref{fig:D_1_1_and_D_3_3_and_D_5_5}).
More generally, $$D_{N, N} = \underbrace{D_{3, 3} \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} \ldots \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} D_{3, 3}}_{(N-1)\mathbin{/}2 \text{ summands}}\text{.}$$ Clearly, if $D_1$ and $D_2$ are two diagrams with one face, then $D_1 \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} D_2$ is also a diagram with one face. If the labels of their faces are $[a, t]^{n_1}$ and $[a, t]^{n_2}$ respectively, then the label of the face of $D_1 \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} D_2$ is $[a, t]^{n_1+n_2-1}$. If $D_1$ has genus $k_1$ and $D_2$ has genus $k_2$, then $D_1 \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} D_2$ has genus $k_1 + k_2$. Since $D_{3, 3}$ has only one face, it follows from the above representation that $D_{N, N}$ has only one face as well. \begin{definition} Let $D$ be a diagram drawn on a rectangle. We denote by $D^{-}$ the same diagram where labels of all corners are inverted (in particular, $A^+$-vertices turn into $A^-$-vertices and vice versa). \end{definition} \begin{definition} Let $D_1$ and $D_2$ be two diagrams drawn on rectangles. Suppose that there is a $t$-arc $f_1$ in $D_1$ connecting an $A^{+}$-vertex $v^{+}_1$ with an $A^{-}$-vertex $v^{-}_1$ and there is a $t$-arc $f_2$ in $D_2$ connecting an $A^{+}$-vertex $v^{+}_2$ with an $A^{-}$-vertex $v^{-}_2$. Suppose also that the interior of $f_1$ has a nonempty intersection with the right border of $D_1$ and the interior of $f_2$ has a nonempty intersection with the left border of $D_2$. Orient $f_1$ and $f_2$ such that they start at $A^{+}$-vertices. Suppose that these orientations are both the same as (or both different from) the orientation of the respective border. Take the diagram $D_1 + D_2$. 
Add a new $t$-arc connecting $v_1^{+}$ with $v_2^{-}$ and add a new $t$-arc connecting $v_1^{-}$ with $v_2^{+}$. The diagram obtained after the removal of the $t$-arc connecting $v_1^{+}$ with $v_1^{-}$ is denoted by $D_1 \mathbin{\mathrm{\sqsupset}} D_2$. The diagram obtained after the removal of the $t$-arc connecting $v_2^{+}$ with $v_2^{-}$ is denoted by $D_1 \mathbin{\mathrm{\sqsubset}} D_2$. \end{definition} \begin{figure}[!h] \centering \vspace{0.25cm} \resizebox{!}{4cm}{\input{sumII_example.tex}} \vspace{0.25cm} \caption{$D_{3, 3}^-$, $D_{3, 3}$ and $D_{3, 3}^- \mathbin{\mathrm{\sqsupset}} D_{3,3}$.} \label{fig:sumII_example} \end{figure} See Fig. \ref{fig:sumII_example} for an example. Clearly, if $D_1$ and $D_2$ are two diagrams with one face, then $D_1 \mathbin{\mathrm{\sqsupset}} D_2$ ($D_1 \mathbin{\mathrm{\sqsubset}} D_2$) is also a diagram with one face. If the labels of their faces are $[a, t]^{n_1}$ and $[a, t]^{n_2}$, then the label of the face of $D_1 \mathbin{\mathrm{\sqsupset}} D_2$ ($D_1 \mathbin{\mathrm{\sqsubset}} D_2$) is $[a, t]^{n_1+n_2+1}$. If $D_1$ has genus $k_1$ and $D_2$ has genus $k_2$, then $D_1 \mathbin{\mathrm{\sqsupset}} D_2$ ($D_1 \mathbin{\mathrm{\sqsubset}} D_2$) has genus $k_1 + k_2$. We define $$D_{rN, N} = D_{(r-2)N, N} \mathbin{\mathrm{\sqsupset}} D_{N-2, N-2}^- \mathbin{\mathrm{\sqsubset}} D_{N, N}\text{.}$$ See Fig. \ref{fig:D_15_5_and_D_25_5} for examples. Clearly, all the $A^+$- and $A^-$-vertices of $D_{rN, N}$ have degree $N$. Due to the properties of $\mathbin{\mathrm{\sqsupset}}$ and $\mathbin{\mathrm{\sqsubset}}$ we get by induction that $D_{rN, N}$ has only one face and its label is $[a,t]^{((r-2)N + (N-2) + 1) + N + 1} = [a, t]^{rN}$.
The genus is equal to $\intpart*{rN \mathbin{/} 2} - \intpart*{rN \mathbin{/} N} + 1$ since $$ \intpart*{\frac{(r-2)N}{2}} - \intpart*{\frac{(r-2)N}{N}} + 1 + \frac{N - 3}{2} + \frac{N - 1}{2} = \intpart*{\frac{rN}{2}} - \intpart*{\frac{rN}{N}} + 1\text{.} $$ \begin{remark} For $N=3$ there arises the degenerate diagram $D_{1, 1}^-$. We define the left border of $D_{1, 1}^-$ as a path going from the $A^{+}$-vertex to the $A^{-}$-vertex and the right border of $D_{1, 1}^-$ as a path going from the $A^{-}$-vertex to the $A^{+}$-vertex. In that case $D_{(r-2)3, 3} \mathbin{\mathrm{\sqsupset}} D_{1, 1}^- \mathbin{\mathrm{\sqsubset}} D_{3, 3}$ is correctly defined. See Fig. \ref{fig:D_15_3} for an example. \end{remark} \begin{figure}[!h] \centering \vspace{0.3cm} \subfloat{\resizebox{!}{3.8cm}{\input{D_15_5.tex}}} \vspace{0.3cm} \subfloat{\resizebox{!}{3.8cm}{\input{D_25_5.tex}}} \vspace{0.3cm} \caption{$D_{15,5}$ and $D_{25,5}$.} \label{fig:D_15_5_and_D_25_5} \end{figure} \begin{figure}[!h] \centering \vspace{0.3cm} \resizebox{!}{4cm}{\input{D_1_1_minus.tex}} \hspace{0.3cm} \resizebox{!}{4cm}{\input{D_15_3.tex}} \vspace{0.3cm} \caption{$D_{1,1}^{-}$ and $D_{15,3}$.} \label{fig:D_15_3} \end{figure} Finally, let us construct a diagram for $n=rN+2s$. This is done by composing the diagram $D_{rN, N}$ with $s$ auxiliary diagrams $D_{+2}$ shown in Fig. \ref{fig:D_+2_and_D_+2_2}. \begin{definition} Let $D_1$ and $D_2$ be two diagrams drawn on rectangles. Suppose that there is an edge $f_1$ in $D_1$ whose adjacent corners have labels $t$, $t^{-1}$, $a^{\varepsilon}$ and $a^{\varepsilon}$, and there is an edge $f_2$ in $D_2$ whose adjacent corners have labels $t$, $t^{-1}$, $a^{\varepsilon}$ and $a^{\varepsilon}$ (where $\varepsilon \in \{+1, -1\}$). Suppose also that the interior of $f_1$ has a nonempty intersection with the right border of $D_1$ and the interior of $f_2$ has a nonempty intersection with the left border of $D_2$. 
Orient $f_1$ and $f_2$ such that they start at $T$-vertices. Suppose that these orientations are both the same as (or both different from) the orientation of the respective border. We denote by $D_1 \mathbin{\mathrm{\asymp}} D_2$ a diagram obtained from $D_1 + D_2$ as follows: add a new edge connecting the $A$-vertex adjacent to $f_1$ with the $T$-vertex adjacent to $f_2$, add a new edge connecting the $T$-vertex adjacent to $f_1$ with the $A$-vertex adjacent to $f_2$, and remove the edges $f_1$ and $f_2$. \end{definition} See Fig. \ref{fig:D_+2_and_D_+2_2} for examples. The following properties of this operation are obvious: If $D_1$ ($D_2$) has only one face and $D_2$ ($D_1$) has two faces such that the edge $f_2$ ($f_1$) is adjacent to both these faces, then $D_1 \mathbin{\mathrm{\asymp}} D_2$ has only one face. If the labels of the faces of $D_1$ and $D_2$ are $[a, t]^{n_1}$, $[a, t]^{n_2}$ and $[a, t]^{n_3}$, then the label of the face of $D_1 \mathbin{\mathrm{\asymp}} D_2$ is $[a, t]^{n_1 + n_2 + n_3}$. If $D_1$ has genus $k_1$ and $D_2$ has genus $k_2$, then $D_1 \mathbin{\mathrm{\asymp}} D_2$ has genus $k_1 + k_2$. If all the vertices of $D_1$ and $D_2$ are interior, then all the vertices of $D_1 \mathbin{\mathrm{\asymp}} D_2$ are also interior. We define $$D_{rN+2s, N} = D_{rN, N} \mathbin{\mathrm{\asymp}} \underbrace{D_{+2} \mathbin{\mathrm{\asymp}} \ldots \mathbin{\mathrm{\asymp}} D_{+2}}_{s \text{ summands}}\text{.}$$ See Fig. \ref{fig:D_+2_and_D_+2_2} for examples. Clearly, $D_{+2}$ has two faces and both their labels are equal to $[a, t]$. All the vertices of $D_{+2}$ are interior and the whole left border of $D_{+2}$ is adjacent to both faces of $D_{+2}$. The genus of $D_{+2}$ is $1$. Due to the properties of $\mathbin{\mathrm{\asymp}}$ we get that $D_{rN+2s, N}$ has only one face and the label of this face is $[a,t]^{rN + 2s}$. 
The genus of $D_{rN+2s, N}$ is $$ \intpart*{\frac{rN}{2}} - \intpart*{\frac{rN}{N}} + 1 + s = \intpart*{\frac{rN + 2s}{2}} - \intpart*{\frac{rN + 2s}{N}} + 1 $$ if $2s < N$. All the vertices of $D_{rN+2s, N}$ are interior if $a^N=1$. \begin{figure}[!h] \centering \vspace{0.25cm} \resizebox{!}{3.4cm}{\input{D_+2.tex}} \hspace{0.3cm} \resizebox{!}{3.4cm}{\input{D_+4.tex}} \hspace{0.3cm} \resizebox{!}{3.4cm}{\input{D_7_5.tex}} \vspace{0.25cm} \caption{$D_{+2}$, $D_{+2} \mathbin{\mathrm{\asymp}} D_{+2}$ and $D_{7, 5} = D_{5, 5} \mathbin{\mathrm{\asymp}} D_{+2}$.} \label{fig:D_+2_and_D_+2_2} \end{figure} \vspace{0.35cm} \noindent \textbf{Case 2. $\boldsymbol{n=rN+2s+1}$, where $\boldsymbol{N}$ is odd and $\boldsymbol{r}$ is even} \vspace{0.05cm} \noindent A diagram for this case is obtained by composing the auxiliary diagram $D_{+N+1, N}$ with the diagram $D_{(r-1)N+2s,N}$. The diagram $D_{+N+1, N}$ is obtained from $D_{N, N}^-$ in the following way: add a new edge which starts from the left corner of the central $T$-vertex, then consecutively traverses all the edges of the rectangle $e_2, e_1, \ldots, e_{N-1}, e_{N-2}$ and ends at the right corner of the central $T$-vertex. Label both left corners of this vertex, which now has degree $4$, with $t$ and both right corners with $t^{-1}$. Add a new $A$-vertex of degree $2$ on this new edge. Label its corners with $a$ and $a^{-1}$ so that the labels of the faces are equal to powers of $[a, t]$. See Fig. \ref{fig:D_+N+1} for examples. Since $D_{N, N}$ has one face, the diagram $D_{+N+1, N}$ has two faces. Due to symmetry, their labels are equal to $[a, t]^{\frac{N+1}{2}}$ and each $t$-arc is adjacent to these two faces. We define $$D_{rN + 2s + 1, N} = D_{+N+1, N} \mathbin{\mathrm{\asymp}} D_{(r-1)N + 2s, N}\text{.}$$ See Fig. \ref{fig:D_+N+1} for examples.
Since $D_{(r-1)N + 2s, N}$ has one face, the diagram $D_{+N+1, N}$ has two faces and all $t$-arcs of $D_{+N+1, N}$ are adjacent to these two faces, we obtain that $D_{rN + 2s + 1, N}$ has one face. Its label is $$[a, t]^{2 \frac{N+1}{2} + (r-1)N + 2s} = [a, t]^{rN + 2s + 1}\text{.}$$ The genus of $D_{rN + 2s + 1, N}$ is \begin{eq} &\frac{N-1}{2} + \intpart*{\frac{(r-1)N + 2s}{2}} - \intpart*{\frac{(r-1)N + 2s}{N}} + 1 = \\ &\intpart*{\frac{(r-1)N + 2s}{2} + \frac{N + 1}{2}} - \intpart*{\frac{(r-1)N + 2s}{N} + 1} + 1 = \\ &\intpart*{\frac{rN + 2s + 1}{2}} - \intpart*{\frac{rN + 2s + 1}{N}} + 1 \end{eq} if $2s + 1 < N$. All the vertices of $D_{rN+2s+1, N}$ are interior if $a^N=1$. \begin{figure}[!h] \centering \vspace{0.3cm} \subfloat{ \resizebox{!}{3.8cm}{\input{D_+N+1_3.tex}} \hspace{0.2cm} \resizebox{!}{3.8cm}{\input{D_+N+1_5.tex}} \hspace{0.2cm} \resizebox{!}{3.8cm}{\input{D_11_5.tex}} } \vspace{0.4cm} \subfloat{ \resizebox{!}{3.815cm}{\input{D_23_5.tex}} } \vspace{0.3cm} \caption{$D_{+4,3}$, $D_{+6,5}$, $D_{11,5} = D_{+6,5} \mathbin{\mathrm{\asymp}} D_{5,5}$ and $D_{23, 5}$.} \label{fig:D_+N+1} \end{figure} \vspace{0.35cm} \noindent \textbf{Case 3. $\boldsymbol{n=rN+2s}$, where $\boldsymbol{N}$ is even} \vspace{0.05cm} \noindent A diagram for this case is obtained from the diagram $D_{2,2}$ shown in Fig. \ref{fig:D_2_2}. Its genus is $1$ and it has one face with label $[a, t]^2$. 
We define \begin{gather*} D_{N, N} = D_{2, 2} \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} \underbrace{D_{3, 3} \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} \ldots \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} D_{3, 3}}_{(N - 2)\mathbin{/} 2 \text{ summands}}\text{,} \\ D_{rN, N} = D_{N, N} \mathbin{\mathrm{\sqsupset}} \underbrace{D_{N-1, N-1} \mathbin{\mathrm{\sqsupset}} \ldots \mathbin{\mathrm{\sqsupset}} D_{N-1, N-1}}_{r-1 \text{ summands}}\text{,} \\ D_{rN+2s, N} = D_{rN, N} \mathbin{\mathrm{\asymp}} \underbrace{D_{+2} \mathbin{\mathrm{\asymp}} \ldots \mathbin{\mathrm{\asymp}} D_{+2}}_{s \text{ summands}}\text{.} \end{gather*} See Fig. \ref{fig:D_2_2} for examples. Since $D_{2, 2}$ has one face, the diagram $D_{rN+2s, N}$ also has one face. The label of this face is $[a, t]^{rN+2s}$. The genus of $D_{rN+2s, N}$ is $$ \intpart*{\frac{rN + 2s}{2}} - \intpart*{\frac{rN + 2s}{N}} + 1 $$ if $2s < N$. It is computed analogously to the previous cases. All the vertices of $D_{rN+2s, N}$ are interior if $a^N=1$. \begin{figure}[!h] \centering \vspace{0.25cm} \subfloat{ \resizebox{!}{3.8cm}{\input{D_2_2.tex}} \hspace{0.3cm} \resizebox{!}{3.8cm}{\input{D_4_4.tex}} \hspace{0.3cm} \resizebox{!}{3.8cm}{\input{D_8_4.tex}} } \subfloat{ \resizebox{!}{3.555cm}{\input{D_6_4.tex}} \hspace{0.2cm} \resizebox{!}{3.555cm}{\input{D_14_4.tex}} } \vspace{0.25cm} \caption{$D_{2,2}$, $D_{4,4}$, $D_{8,4}$, $D_{6,4}$ and $D_{14,4}$.} \label{fig:D_2_2} \end{figure} \vspace{0.35cm} \noindent \textbf{Case 4. $\boldsymbol{n=rN+2s+1}$, where $\boldsymbol{N}$ is even} \vspace{0.05cm} \noindent A diagram for this case is obtained from the diagram $D_{3,2}$ shown in Fig. \ref{fig:D_3_2}. Its genus is $1$ and it has one face with the label $[a, t]^3$. 
We define \begin{gather*} D_{N+1, N} = D_{3, 2} \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} \underbrace{D_{3, 3} \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} \ldots \mathbin{\vcenter{\hbox{$\mathsmaller{\mathsmaller{\stackrel{\prec}{\prec}}}$}}} D_{3, 3}}_{(N - 2)\mathbin{/} 2 \text{ summands}}\text{,} \\ D_{rN+1, N} = D_{N+1, N} \mathbin{\mathrm{\sqsupset}} \underbrace{D_{N-1, N-1} \mathbin{\mathrm{\sqsupset}} \ldots \mathbin{\mathrm{\sqsupset}} D_{N-1, N-1}}_{r-1 \text{ summands}}\text{,} \\ D_{rN+2s+1, N} = D_{rN+1, N} \mathbin{\mathrm{\asymp}} \underbrace{D_{+2} \mathbin{\mathrm{\asymp}} \ldots \mathbin{\mathrm{\asymp}} D_{+2}}_{s \text{ summands}}\text{.} \end{gather*} See Fig. \ref{fig:D_3_2} for examples. The diagram $D_{rN+2s+1, N}$ is constructed exactly as the diagram $D_{rN+2s, N}$; one should only replace $D_{2, 2}$ with $D_{3, 2}$. So it is easy to see that the label of its face is $[a, t]^{rN+2s+1}$ and it has the same genus as $D_{rN+2s, N}$: $$ \intpart*{\frac{rN + 2s}{2}} - \intpart*{\frac{rN + 2s}{N}} + 1 = \intpart*{\frac{rN + 2s + 1}{2}} - \intpart*{\frac{rN + 2s + 1}{N}} + 1 $$ if $2s+1 < N$. All the vertices of $D_{rN+2s+1, N}$ are interior if $a^N=1$. \begin{figure}[!h] \centering \vspace{0.3cm} \resizebox{!}{3.8cm}{\input{D_3_2.tex}} \hspace{0.3cm} \resizebox{!}{3.8cm}{\input{D_5_4.tex}} \hspace{0.3cm} \resizebox{!}{3.8cm}{\input{D_9_4.tex}} \vspace{0.3cm} \caption{$D_{3, 2}$, $D_{5, 4}$ and $D_{9, 4}$.} \label{fig:D_3_2} \end{figure} We have considered all cases, and thus Lemma \ref{D_n_N_lemma} is proved.
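The genus bookkeeping in the four cases can be double-checked mechanically. The following brute-force sketch (our own consistency check, not part of the argument) verifies over a range of parameters that the genus produced by each construction equals $\intpart*{n \mathbin{/} 2} - \intpart*{n \mathbin{/} N} + 1$:

```python
def k_hat(n, N):
    """Claimed genus: floor(n/2) - floor(n/N) + 1."""
    return n // 2 - n // N + 1

for N in range(3, 50):
    for r in range(1, 50):
        for s in range((N - 1) // 2 + 1):
            if N % 2 == 1 and r % 2 == 1 and 2 * s < N:
                # Case 1: D_{N,N} has genus (N-1)/2, each step of the recursion
                # D_{(r-2)N,N} -> D_{rN,N} adds (N-3)/2 + (N-1)/2, each D_{+2} adds 1.
                g = (N - 1) // 2 + (r - 1) // 2 * ((N - 3) // 2 + (N - 1) // 2) + s
                assert g == k_hat(r * N + 2 * s, N)
            if N % 2 == 1 and r % 2 == 0 and r >= 2 and 2 * s + 1 < N:
                # Case 2: D_{+N+1,N} contributes genus (N-1)/2 on top of D_{(r-1)N+2s,N}.
                g = (N - 1) // 2 + k_hat((r - 1) * N + 2 * s, N)
                assert g == k_hat(r * N + 2 * s + 1, N)
            if N % 2 == 0 and 2 * s < N:
                # Cases 3 and 4: D_{N,N} has genus N/2, each D_{N-1,N-1} adds (N-2)/2.
                g = N // 2 + (r - 1) * ((N - 2) // 2) + s
                assert g == k_hat(r * N + 2 * s, N)
                if 2 * s + 1 < N:
                    assert g == k_hat(r * N + 2 * s + 1, N)
print("genus arithmetic consistent for all tested n, N")
```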
\end{proof} \section{Proofs of the theorems} \label{proof_section} \begin{proof}[Proof of Theorem \ref{a_t_theorem}] Let $$ \hat{k}(n, N) = \intpart*{\frac{n}{2}} - \intpart*{\frac{n}{N}} + 1 \text{.} $$ We want to prove that $\cl([a, t]^n) = \hat{k}(n, \ord(a))$ if $a \in A_{j_1}$ and $t \in A_{j_2}$ are two nonidentity elements such that $\ord(t) \geq \ord(a)$ and $j_1 \neq j_2$. Let us denote $\ord(a)$ by $N$. The main theorem from \cite{BK22} states that $\cl([a, t]^n) \geq \hat{k}(n, N)$. Hence it is sufficient to show that $\cl([a, t]^n) \leq \hat{k}(n, N)$. If $n < N$ (in particular, this covers the case of infinite $N$), then the required inequality follows from Culler's examples. If $N=2$, then it follows because $$[a, t]^n = [a, t^{(-1)^{n+1}} a t^{(-1)^{n}} \ldots a t]\text{.}$$ Thus we assume that $N \geq 3$ and $n \geq N$. If $N$ is even or $N$ and $n$ are odd, then the desired inequality follows from Lemma \ref{D_n_N_lemma} and Lemma \ref{diagram_lemma}. If $N$ is odd and $n$ is even, let us consider two cases: \begin{enumerate} \item $n=rN+2s$, where $r$ is even and $s$ is such that $0 \leq 2s < N$. Lemma \ref{D_n_N_lemma} and Lemma \ref{diagram_lemma} imply that $\cl([a, t]^{(r-1)N+2s}) \leq \hat{k}((r-1)N+2s, N)$ and $\cl([a, t]^{N}) \leq \hat{k}(N, N)$. Hence \begin{eq} \cl([a, t]^{rN+2s}) &\leq \cl([a, t]^{(r-1)N+2s}) + \cl([a, t]^{N}) \\ &\leq \hat{k}((r-1)N+2s, N) + \hat{k}(N, N) \\ &= \frac{(r-1)N + 2s - 1}{2} - (r-1) + 1 + \frac{N-1}{2} - 1 + 1 \\ &= \frac{rN + 2s}{2} - r + 1 = \hat{k}(rN+2s, N)\text{.} \end{eq} \item $n=rN+2s+1$, where $r$ is odd and $s$ is such that $0 \leq 2s+1 < N$. Lemma \ref{D_n_N_lemma} and Lemma \ref{diagram_lemma} imply that $\cl([a, t]^{rN}) \leq \hat{k}(rN, N)$. Moreover, $\cl([a, t]^{2s+1}) = s+1$ since $2s+1 < N$.
Hence \begin{eq} \cl([a, t]^{rN+2s+1}) &\leq \cl([a, t]^{rN}) + \cl([a, t]^{2s+1}) \\ &\leq \hat{k}(rN, N) + s + 1 = \frac{rN - 1}{2} - r + 1 + s + 1 \\ &= \frac{rN + 2s + 1}{2} - r + 1 = \hat{k}(rN+2s+1, N)\text{.} \end{eq} \end{enumerate} \end{proof} \begin{proof}[Proof of Theorem \ref{general_theorem}] If $N \in \{ N(g) \mid g \in \hat{G} \}$, then there are elements $a \in A_{j_1}$ and $t \in A_{j_2}$ such that $\ord(a)=N$, $\ord(t)\geq N$ and $j_1 \neq j_2$. The main theorem from \cite{BK22} implies that $k(G, n, N) \geq \hat{k}(n, N)$ and Theorem \ref{a_t_theorem} implies that $k(G, n, N) \leq \hat{k}(n, N)$ since $N([a, t])=N$. Thus $k(G, n, N) = \hat{k}(n, N)$. \end{proof} \begin{proof}[Proof of Theorem \ref{main_theorem}] There are elements $a \in A_{j_1}$ and $t \in A_{j_2}$ such that $\ord(a)=N(G)$, $\ord(t)\geq N(G)$ and $j_1 \neq j_2$. Theorem 1 from \cite{BK22} implies that $k(G, n) \geq \hat{k}(n, N(G))$ and Theorem \ref{a_t_theorem} implies that $k(G, n) \leq \hat{k}(n, N(G))$. Thus $k(G, n) = \hat{k}(n, N(G))$. \end{proof} \vspace{0.5cm} \begin{spacing}{1.05} {\small \noindent \textbf{\upshape Acknowledgments.} The author thanks his advisor, Anton Klyachko, for helpful discussions and remarks, Olga Kulikova for a valuable observation which helped the author to come to the results presented in this paper, and Pavel Izmailov for proofreading the text.} \end{spacing} \begin{spacing}{1.1} \printbibliography \end{spacing} \vspace{0.5cm} { \small {\scshape \noindent Faculty of Mechanics and Mathematics of Moscow State University, \par \noindent Moscow 119991, Leninskie gory, MSU. \par \noindent Moscow Center for Fundamental and Applied Mathematics. \par} {\noindent {\it Email}: {\rmfamily kuynzereb@gmail.com}}} \end{document}
https://arxiv.org/abs/quant-ph/0702004
Non-negative Wigner functions in prime dimensions
According to a classical result due to Hudson, the Wigner function of a pure, continuous variable quantum state is non-negative if and only if the state is Gaussian. We have proven an analogous statement for finite-dimensional quantum systems. In this context, the role of Gaussian states is taken on by stabilizer states. The general results have been published in [D. Gross, J. Math. Phys. 47, 122107 (2006)]. For the case of systems of odd prime dimension, a greatly simplified proof can be employed which still exhibits the main ideas. The present paper gives a self-contained account of these methods.
\section{Introduction} The Wigner distribution establishes a correspondence between quantum mechanical states and real pseudo-probability distributions on phase space. `Pseudo' refers to the fact that, while the Wigner function shares many of the properties of probability distributions, it can take on negative values. It is therefore of interest to characterize those quantum states that are classical in the sense of giving rise to non-negative phase space distributions. For the case of pure states described by vectors in $\mathcal{H}=L^2(\mathbbm{R})$, the resolution of this problem was given by Hudson in Ref.\ \cite{hudson}, and later extended to multiple particles by Soto and Claverie (Ref.\ \cite{soto}). \begin{theorem} \emph{(Hudson, Soto, Claverie)} Let $\psi\in L^2(\mathbbm{R}^n)$ be a state vector. The Wigner function of $\psi$ is non-negative if and only if $\psi$ is a \emph{Gaussian state}. By definition, a vector is Gaussian if and only if it is of the form \begin{equation*} \psi(q)=e^{2\pi i (q \theta q+x q)}, \end{equation*} where $x\in\mathbbm{R}^n$ and $\theta$ is a symmetric matrix with entries in $\mathbbm{C}$ \footnote{ Note that the boundedness of $\psi\in L^2(\mathbbm{R}^n)$ implies that $\theta$ has positive semi-definite imaginary part. }. \end{theorem} It is our objective to prove that the situation for discrete quantum systems is very similar, at least when the dimension of the Hilbert space is odd. The following Theorem states the main result. \begin{theorem} \emph{(Discrete Hudson's Theorem)}\label{thMain} Let $d$ be odd and $\psi\in L^2(\mathbbm{Z}_d^n)$ be a state vector. The Wigner function of $\psi$ is non-negative if and only if $\psi$ is a \emph{stabilizer state}.
In the case that $\psi(q)\neq0$ for all $q$, the vector $\psi$ is a stabilizer state if and only if it is of the form \begin{equation*} \psi(q)=e^{\frac{2\pi}{d} i (q \theta q+x q)}, \end{equation*} where $x\in\mathbbm{Z}_d^n$ and $\theta$ is a symmetric matrix with entries in $\mathbbm{Z}_d$. \end{theorem} In the previous theorem, $\mathbbm{Z}_d:=\{0,\dots,d-1\}$ denotes the set of integers modulo $d$. It turns out that, although the formulation of Hudson's result carries over naturally to finite-dimensional systems, the respective proofs are radically different. The original argument relies crucially on complex function theory, which is of course not available in the setting that this paper addresses. Recently, Galvao \emph{et al.}\ took a first step toward classifying the quantum states with positive Wigner function (see Ref.\ \cite{galvao}). To explain the relationship of their results to the present paper, we have to comment briefly on two different approaches to defining discrete Wigner functions. On the one hand, it has long been realized that Wigner's definition carries over naturally to discrete odd-dimensional systems (Refs.\ \cite{longPaper,oldWootters,vourdas,leonhardt,miquel,villegar,klimov,ruzzi, chaturvedi,diploma,wootters} and Section \ref{scWigner}). This approach is the one used in the present paper. On the other hand, Gibbons, Hoffmann, and Wootters listed a set of axioms that candidate definitions have to fulfill in order to resemble the properties of the well-known continuous case (Ref.\ \cite{wootters}). Let us call functions that fall into this class \emph{generalized Wigner functions}. The characterization does not specify a unique solution: for a $d$-dimensional Hilbert space, there exist $d^{d-1}$ distinct generalized Wigner functions. The construction of Gibbons \emph{et al.}\ has been described only for the case where $d$ is a power of a prime. If the dimension of the Hilbert space is of the form $d^n$, a second ambiguity arises. 
We are free to conceive of such a space either as being associated with a system of $n$ constituents, each of dimension $d$, or with a single one of dimension $d^n$. While the Wigner function is the same in both cases, the set of stabilizer states is not (see Refs.\ \cite{longPaper, diploma}). Indeed, the 'single-particle' stabilizer states turn out to be a proper subset of the 'multiple-particle' ones. As a striking example, the generalized Bell and GHZ states are not stabilizer states on single $d^2$- or $d^3$-dimensional systems. In Ref.\ \cite{galvao} it was proved that a state of a \emph{single-particle} system of prime-power dimension is a stabilizer state if and only if \emph{all} its generalized Wigner functions are non-negative. The authors aim to establish necessary requirements for quantum computational speedup. Indeed, if the Wigner function of a quantum computer is positive at all times, then it operates only with stabilizer states and hence, by the Gottesman-Knill Theorem (Ref.\ \cite{nielsen}), offers no advantage over classical computers. For the case of pure states, our results imply those of Ref.\ \cite{galvao} and exceed them in two ways. Firstly, we show that it suffices to check positivity for a single definition of the Wigner function, as opposed to $d^{d-1}$ ones. Secondly, our statements hold for multiple-particle systems, which constitute the proper setting for both quantum computation and the Gottesman-Knill Theorem. On the other hand, Ref.\ \cite{galvao} makes assertions about mixed states and qubit systems, which are not covered by our findings. Our general results have been published in Ref.\ \cite{longPaper}. However, the proof is rather involved. Many technicalities arise from the fact that for non-prime $d$, arithmetic modulo $d$ lacks the desirable properties of finite fields. Our aim in writing Ref.\ \cite{longPaper} was to achieve the broadest possible generality in spite of these difficulties. 
The downside of this approach is that the core ideas of the argument are obscured by technical issues. The present paper employs a different method of proof, which is available only for systems of odd prime dimension. For this special case, the main result of Ref.\ \cite{longPaper} can be obtained using only a fraction of the space. It is our hope that this paper makes the ideas accessible to a wider audience. The next section summarizes further findings contained in Ref.\ \cite{longPaper}. We go on to recall the definition and properties of discrete Wigner functions in Section \ref{scWigner}. Section \ref{scMainTheorem} is devoted to a complete proof of the easiest special case of Theorem \ref{thMain}, namely that of a single particle with a Hilbert space of prime dimension. \section{Further results and implications} It is natural to ask how Hudson's result generalizes to mixed states. Certainly, mixtures of Gaussian states are positive on phase space, and Narcowich conjectured in Ref.\ \cite{narcowich} that all such quantum states are convex combinations of Gaussian ones. Br\"ocker and Werner refuted the conjecture by giving a counter-example (Ref.\ \cite{werner}). We show in Ref.\ \cite{longPaper} that the situation is similar in the finite setting. Further, we show how to lift the ambiguity in the axiomatic characterization of Wigner functions by requiring Clifford covariance, note that a unitary operator preserves positivity if and only if it is a Clifford operation, discuss the relation of various ways to introduce Wigner functions and stabilizer states in dimensions of the form $d=p^n$, and give an explicit account of the connection between stabilizer states and Gaussian states. \section{Wigner Functions}\label{scWigner} This section provides a brief introduction to discrete Wigner functions. We refer the reader to Ref.\ \cite{longPaper} for further details. In what follows $d$ denotes an odd prime. 
All integer arithmetic in this paper is implicitly assumed to be $\bmod\, d$. The symbol $2^{-1}=(d+1)/2$ denotes the multiplicative inverse of $2$ modulo $d$. All state vectors are elements of the Hilbert space $\mathcal{H}$ spanned by $\{\ket0.,\dots,\ket d-1.\}$. Lastly, $\omega=e^{\frac{2\pi}d i}$ is a $d$th root of unity. The relations \begin{eqnarray*}\label{shiftClock} x(q)\ket k. = \ket k+q., \quad\quad z(p)\ket k. = \omega^{p k} \ket k. \end{eqnarray*} define the \emph{shift} and \emph{boost} operators, respectively. The central objects of the theory are the \emph{Weyl operators} (in quantum information also known as the \emph{generalized Pauli operators}) given by \begin{eqnarray*} w(p,q)=\omega^{-2^{-1}p q} z(p)x(q). \end{eqnarray*} The \emph{characteristic function} of an operator $\rho$ is given by the expansion coefficients of $\rho$ in terms of the Weyl operators: \begin{eqnarray*} \Xi_\rho(\xi,x) = \frac1d \tr(w(\xi,x)^\dagger \rho). \end{eqnarray*} We define the \emph{Wigner function} to be the symplectic Fourier transform of the characteristic function: \begin{eqnarray*} W_\rho(p,q) &=& \frac1d \sum_{\xi,x \in \mathbbm{Z}_d} \omega^{p\xi-q x}\, \Xi_\rho(\xi,x). \end{eqnarray*} It is a straightforward computation to show that the Wigner function of a pure state is given by \begin{eqnarray*} W_{\psi}(p,q) &:=& W_{\ket\psi.\bra\psi.}(p,q) \\ &=& \frac1{d} \sum_{\xi \in \mathbbm{Z}_d} \omega^{-\xi p}\, \psi(q+2^{-1} \xi)\bar\psi(q-2^{-1} \xi). \nonumber \end{eqnarray*} If $S$ is a $2\times 2$-matrix with elements in $\mathbbm{Z}_d$ and determinant $1$, then there exists a unitary operation $\mu(S)$ (the \emph{Weil} \cite{weil} \footnote{ There is a confusing similarity of names: the Weil representation (after Andr\'e Weil) acts on the Weyl operators (after Hermann Weyl). } or \emph{metaplectic} representation of $S$) such that \begin{equation*} \mu(S) w(p,q) \mu(S)^\dagger = w(S(p,q)). 
\end{equation*} The Wigner function is covariant in the sense that, if $\rho'=\mu(S)\,\rho\,\mu(S)^\dagger$, then \begin{equation}\label{weilCovariance} W_{\rho'}(p,q) = W_\rho(S(p,q)). \end{equation} Similarly, the Weyl operators induce translations of the Wigner function. Letting $\rho'=w(p',q')\,\rho\,w(p',q')^\dagger$, it holds that \begin{equation}\label{weylCovariance} W_{\rho'}(p,q) = W_\rho(p+p',q+q'). \end{equation} The \emph{Clifford group} is the set of unitary matrices that send Weyl operators to Weyl operators under conjugation \footnote{ Note that the ``Clifford group'' which appears in the context of quantum information theory \cite{gottesman} has no connection to the group by the same name used e.g.\ in the representation theory of $SO(n)$. }. Every Clifford operation is, up to a phase, of the form $w(p,q)\mu(S)$ and hence preserves positivity of the Wigner function. Finally, \emph{stabilizer states} are the images of the computational basis states under the action of the Clifford group. \section{Main Theorem -- Single particles in prime dimensions} \label{scMainTheorem} Define the \emph{self-correlation function} \begin{equation*} K_\psi(q,x)=\psi(q+2^{-1}x) \bar\psi(q-2^{-1}x) \end{equation*} and note that the Wigner function obeys \begin{equation*} W(p,q) =\frac1{d} \sum_{x} \omega^{-p x} K_\psi(q,x) . \end{equation*} Recall that the Fourier transform $\hat f$ of a function $f: \mathbbm{Z}_d \to \mathbbm{C}$ is defined by $\hat f(x)=\frac1d \sum_q \omega^{-qx} f(q)$. Therefore, for fixed $q_0$, $W(p,q_0)$ is the Fourier transform of $K_\psi(q_0,x)$. Hence $W$ is non-negative if and only if the $d$ functions $K_\psi(q_0,\cdot)$ have non-negative Fourier transforms. In harmonic analysis, the set of functions with non-negative Fourier transforms is characterized by a well-known theorem due to Bochner. We state an elementary version of Bochner's Theorem, along with a variation for subsequent use. 
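To make the preceding definitions concrete, here is a small numerical sketch (not part of the original text; the function name \texttt{discrete\_wigner} and the parameter choices $d=5$, $\theta=2$, $x_0=3$ are ours). It evaluates $W_\psi(p,q)$ directly from the formula above, checks the normalization $\sum_{p,q}W_\psi(p,q)=1$, and illustrates the dichotomy studied below: a stabilizer state $\psi(q)=\omega^{\theta q^2+x_0 q}/\sqrt d$ has a non-negative Wigner function, while a state supported on exactly two points does not.

```python
import numpy as np

def discrete_wigner(psi, d):
    """W[p, q] = (1/d) * sum_xi omega^(-xi p) psi(q + 2^{-1} xi) conj(psi(q - 2^{-1} xi)),
    with position arguments taken mod d (d an odd prime)."""
    inv2 = (d + 1) // 2                      # multiplicative inverse of 2 mod d
    omega = np.exp(2j * np.pi / d)
    W = np.empty((d, d))
    for p in range(d):
        for q in range(d):
            s = sum(omega ** (-xi * p)
                    * psi[(q + inv2 * xi) % d]
                    * np.conj(psi[(q - inv2 * xi) % d])
                    for xi in range(d))
            W[p, q] = s.real / d             # the sum is real up to rounding

    return W

d = 5
omega = np.exp(2j * np.pi / d)
theta, x0 = 2, 3                             # arbitrary example parameters
# stabilizer state psi(q) = omega^(theta q^2 + x0 q) / sqrt(d)
stab = np.array([omega ** (theta * q * q + x0 * q) for q in range(d)]) / np.sqrt(d)
W = discrete_wigner(stab, d)
print(round(W.sum(), 6), W.min() >= -1e-12)  # normalization 1.0, and non-negative

# a state supported on exactly two points must have negative Wigner values
two_pt = np.zeros(d, dtype=complex)
two_pt[0] = two_pt[1] = 1 / np.sqrt(2)
print(discrete_wigner(two_pt, d).min() < -1e-6)
```

For this stabilizer state one can check by hand that $W(p,q)=\frac1d\,\delta_{p,\,2\theta q+x_0}$: the Wigner function is the indicator of a line in phase space, divided by $d$.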
\begin{theorem}\label{thBochner} \emph{(Variations of Bochner's Theorem)} Consider a function $f: \mathbbm{Z}_d\to \mathbbm{C}$. It holds that \begin{enumerate} \item The Fourier transform of $f$ is non-negative if and only if the matrix \begin{equation*} {A^x}_q=f(x-q) \end{equation*} is positive semi-definite. \item The Fourier transform of $f$ has constant modulus (i.e.\ $|\hat f(x)|=\const$) if and only if $f$ is orthogonal to its translations: \begin{eqnarray*} \tuple f, \hat x(q) f. = \sum_{x} \bar f(x) f(x-q) = 0, \end{eqnarray*} for all non-zero $q\in \mathbbm{Z}_d$. \end{enumerate} \end{theorem} \begin{proof} The matrix $A$ is circulant. It is well-known that circulant matrices are normal (hence diagonalizable) with eigenvalues given by the Fourier transform of the first row (up to a positive normalization constant). The first claim is now immediate. By the same argument, $A$ is proportional to a unitary matrix if and only if $|\hat f(q)|$ is constant. But a matrix is unitary if and only if its rows form an orthonormal set of vectors. \end{proof} The next three lemmas harvest some consequences of Bochner's Theorem to gain information on the pointwise modulus $|\psi(q)|$ of the state vector. \begin{lemma} \label{lmIneq} \emph{(Modulus Inequality)} Let $\psi$ be a state vector with positive Wigner function. It holds that \begin{equation*} |\psi(q)|^2 \geq |\psi(q-x)|\,|\psi(q+x)| \end{equation*} for all $q,x \in \mathbbm{Z}_d$. \end{lemma} \begin{proof} Fix a $q \in \mathbbm{Z}_d$. As $W_\psi$ is non-negative, so is the Fourier transform of $K_\psi(q,x)$ with respect to $x$. Bochner's Theorem implies that ${A^x}_y = K_\psi(q,x-y)$ is positive semi-definite (\emph{psd}), which in turn implies that all principal sub-matrices are psd. 
In particular, the determinant of the $2\times 2$ principal sub-matrix \begin{eqnarray*} && \left( \begin{array}{cc} K_\psi(q,0) & K_\psi(q,2x) \\ K_\psi(q, -2x) & K_\psi(q,0) \end{array} \right) \\ &=& \left( \begin{array}{cc} |\psi(q)|^2 & \psi(q+x)\bar\psi(q-x) \\ \bar\psi(q+x)\psi(q-x) & |\psi(q)|^2 \end{array} \right) \end{eqnarray*} must be non-negative. But this means \begin{eqnarray*} |\psi(q)|^4 - |\bar\psi(q+x) \psi(q-x)|^2 \geq 0, \end{eqnarray*} which proves the lemma. \end{proof} We will call the set of points where a state vector is non-zero its \emph{support}. \begin{lemma}\label{lmSupport} \emph{(Support Lemma)} Let $\psi$ be a state vector with positive Wigner function. If the support of $\psi$ contains at least two points, then $\psi$ has maximal support. \end{lemma} \begin{proof} Denote by $S=\supp\psi$ the support of $\psi$. The set $S$ contains the midpoint of any two of its elements. Indeed, if $a, b \in S$, then setting $q=2^{-1} (a+b)$ and $x=2^{-1}(a-b)$ in the Modulus Inequality shows that \begin{equation*} |\psi(2^{-1}(a+b))|^2 \geq |\psi(a)|\,|\psi(b)| > 0, \end{equation*} hence $2^{-1}(a+b) \in S$. Assume there exist two distinct points $a,b\in S$. Requiring $a=0$ is no loss of generality, for else we substitute $\psi$ by $\psi'=w(0,-a)\psi$. By Eq.\ (\ref{weylCovariance}), $\psi'$ has positive Wigner function if and only if $\psi$ does. We claim that \begin{equation}\label{balancedProp} 2^{-l}\beta \,b \in S \end{equation} for every $l\geq0$ and every integer $0\leq\beta\leq2^l$. The proof is by induction on $l$; the case $l=0$ holds because $0, b \in S$. Suppose Eq.\ (\ref{balancedProp}) holds for some $l$. If $\beta \leq 2^{l+1}$ is even, then $2^{-l-1}\beta\,b= 2^{-l}(\beta/2)b\in S$. Else, \begin{eqnarray*} 2^{-l-1} \beta\, b = 2^{-1}\big( 2^{-l} \frac{\beta-1}2\, b + 2^{-l} \frac{\beta+1}2\,b \big) \in S \end{eqnarray*} by the midpoint property, which proves the claim. Now, by Fermat's Little Theorem, $2^{d-1}=1 \bmod d$ and hence, setting $l=d-1$ in Eq.\ (\ref{balancedProp}), we conclude that $\beta\, b \in S$ for all $\beta \leq d-1$. 
But every point in $\mathbbm{Z}_d$ is of that form. \end{proof} \begin{lemma}\label{constlemma} \emph{(Constant Modulus)} Let $\psi$ be a state vector with positive Wigner function and maximal support. Then $|\psi(q)|=\const$. \end{lemma} \begin{proof} Pick two points $x, q \in \mathbbm{Z}_d$ and suppose $|\psi(q)|>|\psi(x)|$. Letting $z=x-q$, the assumption reads $|\psi(q)|>|\psi(q+z)|$. Lemma \ref{lmIneq} centered at $q+z$ gives \begin{eqnarray*} |\psi(q+z)|^2 &\geq& |\psi(q)|\,|\psi(q+2z)| \\ &>& |\psi(q+z)| \, |\psi(q+2z)|, \end{eqnarray*} therefore $|\psi(q+z)| > |\psi(q+2z)|$. Iterating this argument, we arrive at \begin{equation*} |\psi(q)| > |\psi(q+z)| > |\psi(q+2 z)| > \cdots \end{equation*} and hence $|\psi(q)|>|\psi(q+dz)|=|\psi(q)|$, which is a contradiction. Thus $|\psi(q)|\leq|\psi(x)|$. Swapping the roles of $x$ and $q$ proves that equality must hold. \end{proof} \begin{theorem}\label{mainPrime} \emph{(Main Theorem -- Special Case)} Let $d$ be prime and let $\psi \in L^2(\mathbbm{Z}_d)$ be a state vector with positive Wigner function. Then $\psi$ is a stabilizer state. \end{theorem} \begin{proof} By the Support Lemma, $\psi$ is either a position eigenstate or else it has maximal support. In the former case, $\psi$ is manifestly a stabilizer state, so we need only treat the latter. Let $U$ be a Clifford operation. Since $U$ preserves positivity, the Support Lemma applies to $U\psi$. Suppose $U$ is such that $\supp U\psi$ contains just a single point. Then $U\psi$ belongs to the computational basis and hence, by definition, $\psi$ is a stabilizer state. Therefore, we are left to treat those state vectors whose image under any Clifford operation has maximal support. The proof is concluded by showing that such states do not exist. For assume there is such a vector $\psi$. By the Constant Modulus Lemma, $\psi$ has constant pointwise modulus, and hence so does $K_\psi$. Employing the second part of Theorem \ref{thBochner} (with the roles of $f$ and $\hat f$ exchanged), we find that, for every fixed $q_0$, $W(p,q_0)$ is orthogonal to its own translations. 
But since $W$ is non-negative, it follows that $W(p,q_0)$ can be non-zero on at most one point. A Wigner function that is concentrated at a single point cannot represent a physical state \footnote{ Such a Wigner function corresponds to a Hermitian operator with both positive and negative eigenvalues (see Ref.\ \cite{longPaper}). One can think of this fact as an incarnation of the uncertainty principle. }. There must hence exist at least two points $a, b$ in the support of $W$ (note that we are now considering the support of Wigner functions and no longer the support of state vectors). Making once more use of the fact that translations are implemented by Clifford operations, assume $a=0$. There exists a unit-determinant matrix $S$ that sends $b$ to a vector of the form $Sb=(p_0,0)^T$ with $p_0\neq0$. But then there are two points in the support of $W_{\mu(S)\psi}(p,0)$, contradicting our earlier derivation. \end{proof} \section{Summary} We have proved a 'classicality result' for discrete Wigner functions: those state vectors which give rise to a classical probability distribution in phase space belong to the set of stabilizer states. These, in turn, allow for an efficient classical description. Comparing the proof of the special case treated here to the involved argument employed in Ref.\ \cite{longPaper}, it becomes apparent how much the geometrical properties of the integers modulo a prime simplify the structure. \section{Acknowledgments} The author is grateful for support and advice provided by Jens Eisert during all stages of this project. Comments by and discussions with K.\ Audenaert, S.\ Chaturvedi, H.\ Kampermann, M.\ Kleinmann, A.\ Klimov, M.\ Ruzzi, and C.K.\ Zachos are kindly acknowledged. This work has benefited from funding provided by the European Research Councils (EURYI grant of J.\ Eisert), the European Commission (Integrated Project QAP), the EPSRC (Interdisciplinary Research Collaboration IRC-QIP), and the DFG.
https://arxiv.org/abs/1006.3105
Holomorphic functions on subsets of C
Let $\Gamma $ be a $C^\infty $ curve in $\Bbb{C}$ containing 0; it becomes $\Gamma_\theta $ after rotation by angle $\theta $ about 0. Suppose a $C^\infty $ function $f$ can be extended holomorphically to a neighborhood of each element of the family $\{\Gamma_\theta \}$. We prove that under some conditions on $\Gamma $ the function $f$ is necessarily holomorphic in a neighborhood of the origin. In case $\Gamma $ is a straight segment, the well-known Bochnak-Siciak Theorem gives such a proof for \textit{real analyticity}. We also provide several other results related to testing the holomorphy property on a family of certain subsets of a domain in $\Bbb{C}$.
\section{Introduction} The Bochnak-Siciak Theorem [Bo,Si] states the following. Let $f\in C^\infty (D)$, where $D$ is a domain with $0\in D\subset {\Bbb R}^n$. Suppose $f$ is (real) analytic on every line segment through $0$. Then $f$ is analytic in a neighborhood of $0$ (as a function of $n$ variables). For $n=2$ this statement can be interpreted as follows. Consider the segment $ I=\{(x,y)\,|\,x\in [-1,1],y=0\}$ and let $I_\theta $ be its rotation by angle $\theta $ about the origin. If $f$ is real analytic on each $I_\theta $ then $f$ is real analytic in a neighborhood of the origin as a function of two variables. Here we are interested in examining a similar statement regarding the holomorphy of $f$. That is, if $\Gamma $ is a $C^\infty $ curve in $\Bbb{C}$ containing $0,$ $\Gamma _\theta $ its rotation by angle $\theta $ about the origin, and $f$ can be extended holomorphically to a neighborhood of each $\Gamma _\theta $, then under what condition on $\Gamma $ can one claim that $f$ is holomorphic in a neighborhood of $0$? For $ \Gamma $ real analytic (including $\Gamma =I$) the answer is negative, but for some $C^\infty $ curves the answer is positive. The questions we examine here, as well as the Bochnak-Siciak Theorem, can be considered as Osgood-Hartogs-type problems; here is a quote from [ST]: ``Osgood-Hartogs-type problems ask for properties of `objects' whose restrictions to certain `test-sets' are well known''. [ST] has a number of examples of such problems. Other meaningful and interesting problems and examples of this type can be found in ([AM], [BM], [Bo], [LM], [Ne, Ne2, Ne3], [Re], [Sa], [Si], [Zo]) and other papers. Most of the research has been devoted to consideration of formal power series and specific classes of functions of several variables as `objects' which converge (or, in the case of functions, have the property of being smooth) on each curve (or subvariety of lower dimension) of a given family. 
The property of a series to be convergent (or, for functions, to be smooth) is then proved. Our work in this paper is also related to another set of specific Osgood-Hartogs-type problems. The famous Hartogs theorem states that a function $f$ in ${\Bbb{C}}^n$, $n>1$, is holomorphic if it is holomorphic in each variable separately; that is, $f$ is holomorphic in ${\Bbb{C}}^n$ if it is holomorphic on every complex line parallel to an axis. So, one can test the holomorphy of a function in ${\Bbb{C}}^n$ by examining whether it is holomorphic on each of the above mentioned complex lines. There is a wide area of interesting results on testing holomorphy on subsets of ${\Bbb{C}}$, specifically on curves: see [A1-A3, AG, E, G1-G3, T1, T2] and references in those articles. Some of these results assume a holomorphic extension into the inside of each closed curve in a given family, others a ``Morera-type'' property. In this paper we also consider testing holomorphy on subsets of $\Bbb{C}$. In addition to rotations about a point (when the subset is a curve) as mentioned in the beginning, we will allow some linear transformations to be applied to these subsets. We consider a subset $S\subset \Bbb{C}$ and form a family of ``test-sets'' by considering all images of $S$ under a (small enough) subset of $\mathcal{L}$, the set of all linear holomorphic automorphisms of ${\Bbb{C}}$. We then discuss the conditions on $S$ under which a $C^\infty $ function given in a domain will be holomorphic in that domain if it is holomorphic on this specific family of sets. Below is a more precise explanation. Let $S\subset \Bbb{C}$. We say that $f:S\rightarrow \Bbb{C}$ is holomorphic if $f$ is the restriction to $S$ of a function holomorphic in some open neighborhood of $S$. Let $\Bbb{L}$ be a subset of $\mathcal{L}$. 
\textbf{Definition.} The set $S$ \textit{has Hartogs property with respect to $\Bbb{L}$} (denoted $S\in~ \hat{H}(\Bbb L)$) \textit{if the following holds:} \textit{Let $\Omega \subset {\Bbb{C}}$ be a domain, $f:\Omega \rightarrow \Bbb{C}$ a $C^\infty $ function. Suppose for any $L\in \Bbb{L},$ $f$ restricted to $L(S)\cap \Omega $ is holomorphic. Then $f$ is holomorphic in $\Omega $}. The main question we are addressing here is: which sets $S$ have Hartogs property with respect to a given set of transformations? We will examine this question depending on $\operatorname{dim} (S)$, the real Hausdorff dimension of $S$. We consider three cases and provide the following answers: 1. $\operatorname{dim} (S)>1$. We prove that in this case $S\in \hat{H}(\Bbb T)$, where $\Bbb T$ is the group of linear translations (Theorem~\ref{S1}). 2. $\operatorname{dim} (S)=1$. Such a set may or may not have Hartogs property with respect to $\Bbb T$. In addition to examples we examine explicitly the case when $S= \Gamma$ is a $C^\infty $ curve, as referred to in the beginning of this introduction. We consider the set of transformations ${\Bbb T}_1=\{\sigma\circ\tau: \sigma\in{\Bbb T}, \tau\in {\Bbb U}\}$, where $\Bbb{U}$ is an open subset of the group $\Bbb{C}^{*}$. Though we do not provide a complete classification of these curves, we nevertheless point out the major obstacle for a curve to have Hartogs property: real analyticity. So, in this case we essentially show that if $S$ is a $C^\infty $ curve then $S\in \hat{H}({\Bbb T}_1)$ if and only if $S$ is not analytic (for exact statements see Proposition~\ref{A1} and Theorem~\ref{S2}). 3. $\operatorname{dim} (S)<1$. As in case 2, such a set may or may not have Hartogs property with respect to $\Bbb T$. 
We specifically examine the situation when $S$ is a sequence with one limit point (so $\operatorname{dim} (S)=0$), and with a reasonable restriction (a slight change of the definition of a holomorphic function on a sequence), our investigation essentially shows that $S$ has a certain Hartogs property if and only if such a sequence does not eventually end up on an analytic curve (for the precise statement see Theorem~\ref{S3} and the discussion preceding this theorem). \section{Main Results} {\[Case\ 1:\operatorname{dim} (S)>1\]$ $} Let $S$ $\subset \Bbb{C}$. In this section we prove the following \begin{Theorem} \label{S1} If $\operatorname{dim} (S)>1$, then $S\in \hat{H}(\Bbb T)$. \end{Theorem} The proof of this theorem follows from several statements below. For all of them $S$ is an arbitrary subset of $\Bbb C$. First we consider the following. Let $p\in S$. A point $t$ in $T:=\{z\in{\Bbb C}: |z|=1\}$ is said to be a limit direction of $S$ at $p$ if there exists a sequence $(q_j)$ in $S$ such that $\lim_j q_j=p$ and $\lim_j\tau(p,q_j)=t$, where $\tau(p,q_j):=(q_j-p)/|q_j-p|$. \begin{Lemma} \label{TS} Let $\Omega \subset \Bbb{C}$ be an open set, let $p\in \Omega \cap S$, and suppose there are at least two limit directions $t_1,t_2$ of $S$ at $p$. Suppose a function $f\in C^1 (\Omega )$ is holomorphic on $S \cap \Omega$. If $t_1\neq \pm t_2$ then $\frac{\partial f}{\partial \overline{z}}=0$ at $p$. \end{Lemma} \begin{pf} The derivatives of $f$ along the linearly independent directions $t_1$ and $t_2$ coincide with the derivatives of a holomorphic function in a neighborhood of $p$. The statement now follows from the Cauchy-Riemann equations. \end{pf} \begin{Corollary} \label{CS} If a set $S\subset \Bbb{C}$ has a point $p$ with at least two limit directions $t_1\neq \pm t_2$ of $S$ at $p$, then $S$ has Hartogs property with respect to $\Bbb T$. \end{Corollary} \begin{pf} Let $\Omega \subset \Bbb{C}$ be a domain and $f \in C^\infty (\Omega )$. 
Suppose that for any translation $L$, $f$ is holomorphic on $L(S)\cap \Omega .$ Let $z_0\in \Omega $. Pick an $L$ such that $L(p)=z_0$. Since $f$ is holomorphic on $L(S)\cap \Omega $, and (by the choice of $p$) there are at least two limit directions $t_1\neq \pm t_2$ of $L(S)\cap \Omega$ at $z_0$, then by Lemma~\ref {TS},~ $\frac{\partial f}{\partial \overline{z}}=0$ at $z_0$. So, $\frac{\partial f}{\partial \overline{z}}=0$ everywhere on $\Omega $, and therefore $f$ is holomorphic on $\Omega $. \end{pf} For a positive integer $N$ let $S_N$ be the set of points $p$ in $S$ such that there are no more than $N$ distinct limit directions of $S$ at $p$. Let $M_d$ denote the Hausdorff measure of dimension $d$. Let $D(p, r)$ denote the closed disc centered at $p$ of radius $r$. \begin{Lemma} \label{MS} For $d>1$, $M_d(S_N)=0$. Hence the Hausdorff dimension of $S_N$ is $\le1$.\end{Lemma} \begin{pf} Choose a positive integer $K$ and a positive number $\epsilon$ such that $$ B:={2^dN\over K^{d-1}}<1,\;\; D(0,1)\cap \{q: |\tau(0,q)-1|\le\epsilon\} \subset \cup_{j=1}^KD(j/K,1/K).$$ For a positive integer $n$ let $S_N^n$ be the set of points $p$ of $S$ such that there exist $N$ directions $t_k$, $k=1,\dots,N$, depending on $p$, satisfying $$D(p, 1/n)\cap S\subset \cup_{k=1}^N\{q\in{\Bbb C}: |\tau(p,q)-t_k|<\epsilon\}.$$ Fix $n$ and consider a disc $D(p', r)$, where $p'\in \Bbb C$ and $r\le 1/(2n)$. If $S_N^n\cap D(p', r)$ is not empty, let $p$ be a point of this intersection. Then there exist $N$ directions $t_k$, $k=1,\dots,N$, satisfying $$D(p, 2r)\cap S_N^n\subset D(p, 2r)\cap\cup_{k=1}^N\{q\in{\Bbb C}: |\tau(p,q)-t_k|<\epsilon\}.$$ The set on the right side of the above inclusion can be covered by $KN$ discs of radius $(2r/K)$ with centers $$p+{2rjt_k\over K},\;j=1,\dots,K,\; k=1,\dots, N.$$ Hence $D(p',r)\cap S_N^n$ can be covered by $KN$ closed discs of radius $(2r/K)$ provided $r\le 1/(2n)$. 
Now there is a positive integer $L$ such that $S_N^n$ is covered by $L$ discs of radius $1/(2n)$: $S_N^n\subset \cup_{j=1}^L D(p_j, 1/(2n))$. Each set $S_N^n\cap D(p_j, 1/(2n))$ is covered by $KN$ discs of radius $1/(nK)$. Hence $S_N^n$ is covered by $LKN$ discs of radius $1/(nK)$. For each of these smaller discs we can proceed with the similar construction. Continuing this way, we see that for any $\nu=1,2, \dots$, the set $S_N^n$ is covered by $L(KN)^\nu$ discs of radius $(1/2n)(2/K)^\nu$. It follows that $M_d(S_N^n)\le L(KN)^\nu\cdot [(1/2n)(2/K)^\nu]^d=C B^\nu$, where $C=L/(2n)^d$. Hence $M_d(S_N^n)=0$. Since $S_N\subset \cup_{n=1}^\infty S_N^n$, we obtain $M_d(S_N)=0$.\end{pf} {\it Proof of Theorem~\ref{S1}.} Since $\operatorname{dim} (S)>1$, by Lemma~\ref{MS} we have $S\setminus S_2\neq \emptyset $. Therefore there is a point $p\in (S\setminus S_2)$ with at least two limit directions $t_1\neq \pm t_2$ of $S$ at $p$. Now by Corollary~\ref {CS}, $S$ has Hartogs property with respect to $\Bbb T$. \qed \newline {\[Case\ 2:\operatorname{dim} (S)=1\]$ $} The most interesting situation in this case is when $S=\Gamma$ is a curve. By using Corollary~\ref{CS} one can easily construct curves that have Hartogs property with respect to $\Bbb T$ (any broken line, other than a straight segment, consisting of two segments meeting at an angle is such an example). On the other hand, if $\Gamma$ is a real analytic curve the following statement holds. \begin{Proposition}\label{A1} Let $\Gamma \subset \Bbb{C}$ be a real analytic curve. Then $\Gamma $ does not have Hartogs property with respect to $\mathcal{L}$.\end{Proposition} \begin{pf} Consider a domain $\Omega \subset \Bbb{C}$, say the unit disk, and the nowhere holomorphic function $f=\overline{z}=x-iy$. We prove that $f$ can be extended holomorphically to a neighborhood of $L(\Gamma )\cap \Omega $ for any $L\in \mathcal{L}$. Without any loss of generality we may assume $L=id$, so we now consider $\Gamma \cap \Omega $. 
Due to the uniqueness theorem for holomorphic functions we only need to prove the extendability of $f$ locally at any point $z_0\in \Gamma \cap \Omega $. Again with no loss of generality we may assume that $z_0=0$ and that near the origin $\Gamma $ is described by the equation $y=\varphi (x)$, where $\varphi (x)$ is a real analytic function. Passing now to the complex coordinate $z=x+iy$ we get the implicit equation $\frac 1{2i}(z-\overline{z})=\varphi (\frac 12(z+\overline{ z}))$, and from here one can locally recover $\overline{z}=\psi (z)$ on $\Gamma $, where $ \psi (z)$ is holomorphic near the origin. \end{pf} We will now concentrate on smooth curves that are not analytic. We start with the following definition. Let $f(z)$ be a function defined on an open set $\Omega$ in the complex plane ${\Bbb C}$ containing the origin. The function $f$ is said to have a Taylor series at $0$ if there is a formal power series $g(z,w)=\sum_{jk} a_{jk} z^jw^k \in {\Bbb C}[[z,w]]$ such that for each nonnegative integer $n$, $$f(z)-\sum_{j+k\le n}a_{jk}z^j\overline z^k =o(|z|^n).$$ The Taylor series of $f$ at 0 is $g(z,\overline z)=\sum_{jk}a_{jk}z^j\overline z^k$. We note that every $C^\infty $ function defined in a neighborhood of $0$ has a Taylor series at $0$. Consider a curve of the form $\Gamma:=\{t+i\phi(t): 0\le t\le b\}$, where $\phi$ is a real-valued continuous function defined on the interval $[0,b]$. The function $\phi$ is said to have a Taylor series at $0$ if there exists an $h(z):=\sum_j b_jz^j\in {\Bbb C}[[z]]$ such that for each nonnegative integer $n$, $$\phi(t)-\sum_{j\le n} b_jt^j=o(|t|^n).$$ Pick an open set $\Bbb{U}\subset \Bbb{C}^{*}$, and denote ${\Bbb T}_1=\{\sigma\circ\tau: \sigma\in{\Bbb T}, \tau\in {\Bbb U}\}$. \begin{Theorem} \label{S2} Let $S:=\{t+i\phi(t): 0\le t\le b\}$ be a continuous curve with $\phi(0)=0$. Suppose $\phi$ has a Taylor series at 0, and for no $\lambda>0$ is $\phi$ analytic on $[0,\lambda)$. Then $S\in \hat{H}({\Bbb T}_1)$. 
\end{Theorem} This theorem is a corollary of Theorem~\ref{AF} below. First some remarks on formal power series. ${\Bbb C}[[x_1, x_2, \dots, x_n]]$ denotes the set of (formal) power series $$g(x_1,\dots, x_n)=\sum_{k_1,\dots, k_n\ge0} a_{k_1\dots k_n} x_1^{k_1}\cdots x_n^{k_n} $$ of $n$ variables with complex coefficients. Let $g(0)=g(0,\dots,0)$ denote the coefficient $a_{0,\dots,0}$. A power series equals 0 if all of its coefficients $a_{k_1\dots k_n}$ are equal to 0. A power series $g\in {\Bbb C}[[x_1, x_2, \dots, x_n]]$ is said to be convergent if there is a constant $C=C_g$ such that $|a_{k_1\dots k_n} |\le C^{k_1+\cdots +k_n}$ for all $(k_1,\dots,k_n)\not=(0,\dots,0)$. \begin{Lemma} \label{MT1} Let $g\in{\Bbb C} [[x,y]]$ with $g'_y\not=0$, let $h\in {\Bbb C}[[x]]$ be a non-zero power series with $h(0)=0$, let $E$ be a nonempty open set in the complex plane. Suppose that $g(sx, \overline sh(x))$ is convergent for each $s\in E$. Then $g$ is convergent and $h$ is convergent. \end{Lemma} \begin{pf} Pick $s=c\exp (i\alpha )\in E$, $s\neq 0$, where $c=|s|$. We fix $\alpha $; since $E$ is open, there is a non-empty interval $[a,b]$ such that $c\exp (i\alpha )\in E$ for every $c\in [a,b]$. Replacing $x$ with $x_1\exp (-i\alpha )$ we get $g(sx,\overline{s}h(x))=g(cx_1,ch_1(x_1))$, where $h_1(x_1):=\exp (-i\alpha )h(x_1\exp (-i\alpha ))$. So $g(cx_1,ch_1(x_1))$ converges for all $c\in [a,b]$. Using now Theorem 1.2 from [FM] we see that $h_1(x)$ converges, implying the convergence of $h(x)$ as well. Now if $h(x)$ is not a monomial of the form $a_1x$, we apply Theorem 1.1 from [FM] to conclude that $g(x,y)$ is convergent as well. For the exceptional case $h(x)=a_1x$ we need a different selection for the range of $s\in E$. Fix a number $l>0$ and a non-degenerate interval $[\beta _1,\beta _2]$ such that $s=l\exp (i\beta )\in E$ for all $\beta \in [\beta _1,\beta _2]$. Then $g(sx, \overline{s}h(x))=g(s_1x_1,s_1^{-1}h_1(x_1))$, where $s_1=\exp (i\beta )$, $x_1=lx$, and $h_1(x_1)={a_1}x_1$.
Applying again Theorem 1.1 from [FM] we prove the convergence of $g(x,y)$ in this case as well. \end{pf} \begin{Theorem}\label{AF} Let $f(z)$ be a continuous function defined on an open connected set $\Omega$ in the complex plane ${\Bbb C}$ containing the origin, let $\Gamma:=\{t+i\phi(t): 0\le t<b\}$ be a continuous curve with $\phi(0)=0$, and let $E$ be a connected open set in the complex plane. Suppose $f$ and $\phi$ have a Taylor series at 0, that $\phi$ is analytic on $[0,\lambda)$ for no $\lambda>0$, and that for each $s\in E$ with $s\ne0$ there exists a holomorphic function $F_s$ defined in an open set $U_s$ containing $s^{-1}\Omega\cap\Gamma$ such that $f(sz)=F_s(z)$ for $z\in s^{-1}\Omega\cap\Gamma$. Then $f$ is holomorphic in the open set $\Lambda:=\cup_{s\in E} \Gamma_s$, where $\Gamma_s$ is the connected component of $s\Gamma$ containing the origin. \end{Theorem} \begin{pf} Let $g(z,\overline z)$ and $h(t)$ be the Taylor series at $0$ of $f$ and $\phi$ respectively. Let $\gamma(t)=t+i\phi(t)$ and $\omega(t)=t+ih(t)$. Consider an $s\in E$ with $s\ne0$. Since \begin{equation}\label{hyp} f(s\gamma(t))=F_s(\gamma(t))\end{equation} for $t\in[0,b)$, we see that $$g(s\omega(t),\overline s(2t-\omega(t)))=F_s(\omega(t))$$ as elements in ${\Bbb C}[[t]]$. Let $\psi(t)\in {\Bbb C}[[t]]$ be the inverse of $\omega(t)$ so that $\omega(\psi(t))=t$. Then \begin{equation}\label{der}g(st,\overline s(2\psi(t)-t))=F_s(t).\end{equation} We claim that $g_w(z,w)\equiv 0$. Suppose that is not the case. By Lemma~\ref{MT1}, $g(z,w)$ and $2\psi(t)-t$ are convergent. So $\psi(t)$ is convergent and $\omega(t)$ is convergent. There is a positive number $r$ such that the disk $D(0,r)\subset \Omega$, $g(z,w)$ represents a holomorphic function in $D(0,r)\times D(0,r)$, and $\psi(z)$, $\omega(z)$ represent holomorphic functions in $D(0,r)$.
By (\ref{hyp}) and (\ref{der}), \begin{equation}\label{main} f(s\gamma(t))=g(s\gamma(t), \overline s(2\psi(\gamma(t))-\gamma(t))),\end{equation} provided \begin{equation}\label{region} t\in [0,b), s\in E, | s\gamma(t)|<r, | s(2\psi(\gamma(t))-\gamma(t))|<r. \end{equation} We choose an open disc $U:=D(a,v)\subset E$ with $0<v<|a|/2$, and a positive number $c<r$, such that (\ref{region}) and (\ref{main}) are satisfied for $t\in [0,c]$ and $s\in U$. Fix a $t_0\in (0,c)$. There is an $s_0\in U$ such that $g_w(s_0\gamma(t_0),w)\not\equiv 0$. Let $z_0=s_0\gamma(t_0)$. By (\ref{main}) we have, for all $t$ sufficiently close to $t_0$, that \begin{equation}\label{cons}f(z_0)=f({z_0\over \gamma(t)}\cdot\gamma(t))=g(z_0, \overline z_0\cdot{2\psi(\gamma(t))-\gamma(t)\over\overline{\gamma(t)}}).\end{equation} Since $g_w(z_0,w)\not\equiv 0$, the set $\{w: g(z_0,w)=f(z_0)\}$ is discrete. Hence the function $$p(t):={2\psi(\gamma(t))-\gamma(t)\over\overline{\gamma(t)}}$$ is constant for all $t$ sufficiently close to $t_0$. It follows that $p(t)$ is constant on $(0, c)$. So there is a complex constant $C$ such that \begin{equation}\label{final} 2\psi(\gamma(t))-\gamma(t)=C \overline{\gamma(t)},\;\; 0\le t<c. \end{equation} Taking derivatives at 0, we obtain $2\psi'(0)\gamma'(0)-\gamma'(0)=C\overline{\gamma'(0)}$, which forces $C=1$: indeed $\psi'(0)\gamma'(0)=1$ (because $\gamma'(0)=\omega'(0)$ and $\psi'(0)\omega'(0)=1$) and $2-\gamma'(0)=\overline{\gamma'(0)}\neq 0$. From (\ref{final}) and $\gamma(t)+\overline{\gamma(t)}=2t$ it follows that \begin{equation}\label{fin} \psi(\gamma(t))=t,\;\; 0\le t<c. \end{equation} The above equation implies that $\gamma(t)=\omega(t)$ for $0\le t<c$; since $\omega$ is analytic near $0$, this contradicts the hypothesis that $\phi$ is analytic on $[0,\lambda)$ for no $\lambda>0$. Therefore $g_w(z,w)\equiv 0$. Now $g(z,w)$ does not depend on $w$, so $g\in {\Bbb C}[[z]]$, and (\ref{der}) becomes $$g(st)=F_s(t),$$ which implies that $g$ is convergent. Hence $g$ represents a holomorphic function in $D(0,r)$ for some $r>0$.
It follows from (\ref{hyp}) that \begin{equation}\label{mainp} f(s\gamma(t))=g(s\gamma(t)),\end{equation} provided $|s\gamma(t)|<r$. Therefore $f$ is holomorphic in the open set $Q:=D(0,r)\cap E\Gamma$. We now prove that $f$ is holomorphic in $\Lambda$. If $0\in \Lambda$, then $0\in Q$, and we already know that $f$ is holomorphic in a neighborhood of $0$. Fix a point $p\in \Lambda$, $p\not=0$. Then $p\in \Gamma_s$ for some $s\in E$, $s\not=0$, and $q:=p/s$ is a point of $\Gamma$, so $q=t_0+i\phi(t_0)$ for some $t_0\in (0,b)$. Since $\Gamma_s$ is the connected component of $s\Gamma$ containing the origin, we see that there is a $\delta\in (0, b-t_0)$ such that $\Gamma':=\{t+i\phi(t): 0\le t\le t_0+\delta\}$ satisfies $s\Gamma'\subset \Omega$. There is a holomorphic function $F_s$ defined in an open set $U_s\subset s^{-1}\Omega$ containing $\Gamma'$ such that $f(sz)=F_s(z)$ for $z\in \Gamma'$. Let $V_s=sU_s$ and $G_s(z)=F_s(z/s)$. Then $s\Gamma'\subset V_s\subset \Omega$, $G_s$ is defined on $V_s$, and $G_s(z)=f(z)$ for $z\in s\Gamma'$. Choose an $\epsilon>0$ such that the disc $D:=D(s,\epsilon)$ is contained in $E$, $D$ does not contain the origin, and $D\Gamma'\subset V_s$. We now prove that $f=G_s$ in $D\Gamma'$, hence $f$ is holomorphic in $D\Gamma'$. Consider a $u\in D$. There is a holomorphic function $G_u$ defined in an open set $V_u\subset \Omega$ containing $u\Gamma'$ such that $f(z)=G_u(z)$ for $z\in u\Gamma'$. Since $V_s\cap V_u$ contains a neighborhood of the origin, $D(0,r)\cap V_s\cap V_u$ is non-empty. By the uniqueness theorem, in the open set $D(0,r)\cap V_s\cap V_u$, the three holomorphic functions $g$, $G_s$ and $G_u$ are equal. Thus $G_s$ and $G_u$ are equal in the connected component of $V_s\cap V_u$ containing $u\Gamma'$. It follows that $f=G_s$ on $u\Gamma'$ for each $u\in D$. Thus $f=G_s$ in $D\Gamma'$, and $f$ is holomorphic in $D\Gamma'$, which is a neighborhood of $p$. Therefore $f$ is holomorphic in $\Lambda$.
\end{pf} {\it Proof of Theorem~\ref{S2}.} Denote $\Gamma =S=\{t+i\phi(t): 0\le t\le b\}$. Let $\Omega \subset \Bbb{C}$ be a domain, $f\in C^\infty (\Omega )$, $z_0\in \Omega $. Without any loss of generality we may assume that $0\in \Omega $ and (since one can use translations to move $\Gamma $) $z_0=0$. We take $E=\Bbb U$ and consider $L_s(\Gamma )=s\Gamma $ for $s\in E$. There is a holomorphic function $G_s(z)$ in a neighborhood of $L_s(\Gamma )\cap \Omega $ that coincides with $f$ on that intersection. Consider $F_s(z)=G_s(sz)$. Then $f(sz)=F_s(z)$ on $s^{-1}\Omega \cap \Gamma $. By Theorem~\ref{AF}, $\frac{\partial f}{\partial \overline{z}}=0$ at $z_0$. So $\frac{\partial f}{\partial \overline{z}}=0$ everywhere on $\Omega $, and therefore $f$ is holomorphic on $\Omega $. \qed {\[Case\ 3:\operatorname{dim} (S)<1\]$ $} In this case an interesting situation to examine is when $S$ is a bounded sequence $(z_n)$ (and therefore $\operatorname{dim} (S)=0$). By using Corollary~\ref{CS} one can easily construct sequences with one limit point that have Hartogs property with respect to $\Bbb T$. On the other hand, if one takes a sequence that is located on an analytic curve and has a limit point on that curve, such a sequence will not have Hartogs property even with respect to the entire group $\mathcal{L}$. So a natural hypothesis here is that in order for $(z_n)$ to have Hartogs property with respect to $\mathcal{L}$ there must be no analytic curve $\Gamma$ such that $z_n\in \Gamma$ for all large $n$. However, this is not true: one can construct a counterexample. A similar statement, proved below, does hold, but it requires a change in the definition of a holomorphic function on a sequence. We will say that a function $f$ on $(z_n)$ is holomorphic if it can be extended as a holomorphic function to a {\it connected} open neighborhood of~$(z_n)$.
If a set $S=(z_n)$ has Hartogs property with respect to $\Bbb L$ and with the above definition of a holomorphic extension, we will denote that by $S\in \hat{H_0}({\Bbb L})$. We need another definition for the theorem below. Consider a sequence $(z_n)$ of complex numbers. Write $z_n=t_n+iu_n$. We assume that $t_n>0$ and $\lim z_n=0$. The sequence $(z_n)$ is said to have a Taylor series at 0 if there is an $h(z)=\sum_j b_j z^j\in {\Bbb C}[[z]]$ such that $$u_n-\sum_{j\le k}b_j t_n^j=o(t_n^k),\;\;\; n\rightarrow \infty,$$ for each nonnegative integer $k$. Note that $h$ has real coefficients and $b_0=0$. We say that $(z_n)$ eventually lies on an analytic curve if there exist a curve $\Gamma =\{(x,y):y=\varphi (x)\}$, where $\varphi $ is a real analytic function, and an integer $N$ such that $z_n\in \Gamma $ for all $n\geq N$. \begin{Theorem} \label{S3} Let $S=(z_n)$, $z_1=0$, and suppose $(z_n)$ has a Taylor series at 0 of the form $z_n\sim t_n+i h(t_n)$, where $t_n$ are positive real numbers, and $h\in {\Bbb C} [[t]]$ has real coefficients. Suppose that $(z_n)$ does not eventually lie on any analytic curve. Then $S\in \hat{H_0}({\Bbb T}_2)$, where ${\Bbb T}_2=\{\sigma\circ\tau: \sigma\in{\Bbb T}, \tau\in {\Bbb C}^*\}$.\end{Theorem} This theorem is a corollary of the following \begin{Theorem}\label{seq} Let $f(z)$ be a continuous function defined on the unit disc $D(0,1)$ in ${\Bbb C}$ that has a Taylor series at 0 and let $(z_n)$ be a sequence with $z_1=0$ that has a Taylor series at 0 of the form $z_n\sim t_n+i h(t_n)$, where $t_n$ are positive real numbers, and $h\in {\Bbb C} [[t]]$ has real coefficients. Suppose that $(z_n)$ does not eventually lie on an analytic curve, and that for each $s\in{\Bbb C}$ with $s\ne0$ there is a holomorphic function $F_s(z)$ defined on a connected neighborhood $U_s$ of the set $Q_s:=s^{-1}D(0,1)\cap\{z_n\}$ such that $f(sz)=F_s(z)$ for $z\in Q_s$.
Then $f$ is holomorphic in a neighborhood of the origin.\end{Theorem} \begin{pf} Let $g(z,\overline z)$ be the Taylor series of $f$ at 0. Let $\omega(t)=t+ih(t)$. Then \begin{equation}\label{hyp1}g(s\omega(t),\overline s(2t-\omega(t)))=F_s(\omega(t))\end{equation} as elements in ${\Bbb C}[[t]]$. Let $\psi(t)\in {\Bbb C}[[t]]$ be the inverse of $\omega(t)$. We claim that $g_w(z,w)\equiv 0$. Suppose that is not the case. Similar to the proof of Theorem~\ref{AF}, we see that $h(t)$, $\omega(t)$, $\psi(t)$ are convergent, and \begin{equation}\label{der1}g(st,\overline s(2\psi(t)-t))=F_s(t).\end{equation} There is a positive number $r$ such that $D(0,r)\subset D(0,1)$, $g(z,w)$ represents a holomorphic function in $D(0,r)\times D(0,r)$, and $\psi(z)$, $\omega(z)$ represent holomorphic functions in $D(0,r)$. It follows that \begin{equation}\label{main1} f(sz_n)=g(sz_n, \overline s(2\psi(z_n)-z_n)),\end{equation} provided \begin{equation}\label{region1} |z_n|<r, \; | sz_n |<r, \; | s(2\psi(z_n)-z_n)|<r. \end{equation} Fix $z_0\in D(0,r)$ with $z_0\ne0$ such that $g_w(z_0,w)\not\equiv0$. Then the set $\{w: g(z_0,w)=f(z_0)\}$ is discrete. Equation (\ref{main1}) implies that \begin{equation}\label{cons1}f(z_0)=f({z_0\over z_n}\cdot z_n)=g(z_0, \overline z_0\cdot {2\psi(z_n)-z_n\over\overline z_n})=g(z_0, w_n),\end{equation} where $w_n:=\overline z_0(2\psi(z_n)-z_n)/\overline z_n$. Since the set $\{w: g(z_0,w)=f(z_0)\}$ is discrete, and since $\lim w_n=\overline z_0$, we see that there is a positive integer $K$ such that $w_n=\overline z_0$ for $n\ge K$. Recall that $z_n\sim t_n+i h(t_n)$. The equation $w_n=\overline z_0$ is equivalent to $\psi(z_n)=t_n$, or $z_n=\omega(t_n)=t_n+ih(t_n)$, for $n\ge K$; since $h$ is convergent, this means that $(z_n)$ eventually lies on the analytic curve $y=h(x)$, contradicting the hypothesis. Therefore $g_w(z,w)\equiv 0$. Now $g(z,w)$ does not depend on $w$, so $g\in {\Bbb C}[[z]]$, and (\ref{der1}) becomes $g(st)=F_s(t)$, which clearly implies that $g$ is convergent.
Hence $g$ represents a holomorphic function in $D(0,r)$ for some $r>0$. Thus $f(sz_n)=g(sz_n)$, provided $|sz_n|<r$. Since every nonzero $z\in D(0,r)$ can be written as $z=sz_n$ for a suitable $s$, it follows that $f=g$ on $D(0,r)$, and therefore $f$ is holomorphic in $D(0,r)$. \end{pf}
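To illustrate the local recovery $\overline z=\psi(z)$ used in the proof of Proposition~\ref{A1}, consider the concrete parabola $\varphi(x)=x^2$ (an illustrative choice of ours, not part of the proposition). The implicit equation $\frac{1}{2i}(z-\overline z)=\varphi(\frac12(z+\overline z))$ is then a quadratic in $w=\overline z$, and the root with $\psi(0)=0$ is $\psi(z)=i-z-i\sqrt{1+4iz}$. The following sketch checks numerically that this $\psi$, holomorphic near the origin, agrees with $\overline z$ on $\Gamma$:

```python
import cmath

def psi(z):
    # Curve Gamma: y = x^2, i.e. z = x + i x^2.  The implicit equation
    # (z - w)/(2i) = ((z + w)/2)^2 in w = conj(z) is quadratic in w;
    # the root with psi(0) = 0 (principal square root branch) is:
    return 1j - z - 1j * cmath.sqrt(1 + 4j * z)

# psi is holomorphic near 0 and recovers conj(z) on Gamma:
for x in [0.0, 0.05, 0.1, 0.2]:
    z = x + 1j * x**2
    assert abs(psi(z) - z.conjugate()) < 1e-12
```

Here $\psi$ is an explicit holomorphic function on a neighborhood of $0$, which is exactly the extension of $f=\overline z$ along $\Gamma$ whose existence the proposition asserts.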
https://arxiv.org/abs/1803.09532
Gaussian kernel quadrature at scaled Gauss-Hermite nodes
This article derives an accurate, explicit, and numerically stable approximation to the kernel quadrature weights in one dimension and on tensor product grids when the kernel and integration measure are Gaussian. The approximation is based on use of scaled Gauss-Hermite nodes and truncation of the Mercer eigendecomposition of the Gaussian kernel. Numerical evidence indicates that both the kernel quadrature and the approximate weights at these nodes are positive. An exponential rate of convergence for functions in the reproducing kernel Hilbert space induced by the Gaussian kernel is proved under an assumption on growth of the sum of absolute values of the approximate weights.
\section{Introduction} Let $\mu$ be the standard Gaussian measure on $\R$ and $f \colon \R \to \R$ a measurable function. We consider the problem of numerical computation of the integral of $f$ with respect to $\mu$ using a \emph{kernel quadrature rule} (we reserve the term \emph{cubature} for rules in higher dimensions) based on the Gaussian kernel \begin{equation}\label{eq:Gaussian} k(x,y) = \exp\bigg( \! -\frac{(x-y)^2}{2\ell^2} \bigg) \end{equation} with the length-scale $\ell > 0$. Given any distinct \emph{nodes} $x_1,\ldots,x_N$, the kernel quadrature rule is an approximation of the form \begin{equation*} Q_k(f) \coloneqq \sum_{n=1}^N w_{k,n} f(x_n) \approx \mu(f) \coloneqq \frac{1}{\sqrt{2\pi}} \int_\R f(x) \neper^{-x^2/2} \dif x, \end{equation*} with its \emph{weights} $w_k = (w_{k,1},\ldots,w_{k,N}) \in \R^N$ solved from the linear system of equations \begin{equation}\label{eq:linSys} K w_k = k_\mu, \end{equation} where $[K]_{ij} \coloneqq k(x_i,x_j)$ and $[k_\mu]_i \coloneqq \int_\R k(x_i,x) \dif \mu(x)$. This is equivalent to uniquely selecting the weights such that the $N$ kernel translates $k(x_1,\cdot),\ldots,k(x_N,\cdot)$ are integrated exactly by the quadrature rule. Kernel quadrature rules can be interpreted as best quadrature rules in the reproducing kernel Hilbert space (RKHS) induced by a positive-definite kernel~\citep{Larkin1970}, as integrated kernel (radial basis function) interpolants~\sloppy{\citep{Bezhaev1991,SommarivaVianello2006}}, and as posteriors to $\mu(f)$ under a Gaussian process prior on the integrand~\sloppy{\citep{Larkin1972,OHagan1991,BriolProbInt2018}}. \rev{Recently, \citet{FasshauerMcCourt2012} have developed a method to circumvent the well-known problem that interpolation with the Gaussian kernel often becomes numerically unstable---in particular when $\ell$ is large---because the condition number of $K$ tends to grow with an exponential rate~\citep{Schaback1995}.
They do this by truncating the Mercer eigendecomposition of the Gaussian kernel after $M$ terms and replacing the interpolation basis $\{k(x_n,\cdot)\}_{n=1}^N$ with the first $M$ eigenfunctions. In this article we show that application of this method with $M=N$ to kernel quadrature yields, when the nodes are selected by a suitable and fairly natural scaling of the nodes of the classical Gauss--Hermite quadrature rule, an accurate, explicit, and numerically stable approximation to the Gaussian kernel quadrature weights. Moreover, the proposed nodes appear to be a good and natural choice for the Gaussian kernel quadrature.} To be precise, \Cref{thm:main} states that the quadrature rule $\widetilde{Q}_k$ that exactly integrates the first $N$ Mercer eigenfunctions of the Gaussian kernel and uses the nodes \begin{equation*} \xkgh_n \coloneqq \frac{1}{\sqrt{2} \alpha \beta} x_n^\textsm{GH} \end{equation*} has the weights \begin{equation*} \widetilde{w}_{k,n} \coloneqq \bigg(\frac{1}{1+2\delta^2}\bigg)^{1/2} w_n^\textsm{GH} \neper^{\delta^2 \xkgh_n^2} \sum_{m=0}^{\floor{(N-1)/2}} \frac{1}{2^m m! } \bigg( \! \frac{2\alpha^2 \beta^2}{1+2\delta^2} - 1 \bigg)^m \mathrm{H}_{2m}(x_n^\textsm{GH}), \end{equation*} $\widetilde{w}_k = (\widetilde{w}_{k,1}, \ldots, \widetilde{w}_{k,N}) \in \R^N$, where $\alpha$ (for which the value $1/\sqrt{2}$ seems the most natural), $\beta$, and $\delta$ are constants defined in \Cref{eq:constants}, $\mathrm{H}_n$ are the probabilists' Hermite polynomials~\eqref{eq:hermite}, and $x_n^\textsm{GH}$ and $w_n^\textsm{GH}$ are the nodes and weights of the $N$-point Gauss--Hermite quadrature rule. We argue that these weights are a good approximation to $w_{k}$ and accordingly call them \emph{approximate Gaussian kernel quadrature weights}. 
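For concreteness, the weight formula above can be transcribed directly. The sketch below uses NumPy's \texttt{hermite\_e} module for the probabilists' Hermite polynomials and the Gauss--Hermite rule; the function name and the default $\alpha=1/\sqrt{2}$ are our own illustrative choices:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

def approximate_kernel_quadrature(N, ell, alpha=1 / math.sqrt(2)):
    """Nodes and approximate Gaussian kernel quadrature weights (illustrative sketch)."""
    eps = 1.0 / (math.sqrt(2.0) * ell)
    beta = (1.0 + (2.0 * eps / alpha) ** 2) ** 0.25
    delta2 = 0.5 * alpha**2 * (beta**2 - 1.0)
    # Gauss--Hermite rule, normalised so that the weights sum to one (measure mu).
    x_gh, w_gh = He.hermegauss(N)
    w_gh = w_gh / math.sqrt(2.0 * math.pi)
    nodes = x_gh / (math.sqrt(2.0) * alpha * beta)
    c = 2.0 * alpha**2 * beta**2 / (1.0 + 2.0 * delta2) - 1.0
    series = np.zeros(N)
    for m in range((N - 1) // 2 + 1):
        coef = np.zeros(2 * m + 1)
        coef[-1] = 1.0  # coefficient vector of H_{2m}
        series += c**m / (2**m * math.factorial(m)) * He.hermeval(x_gh, coef)
    weights = (1.0 + 2.0 * delta2) ** -0.5 * w_gh * np.exp(delta2 * nodes**2) * series
    return nodes, weights
```

By construction, the resulting rule should reproduce $\mu(\phi_n^\alpha)$ for the first $N$ eigenfunctions, which provides a convenient numerical sanity check on any transcription of the formula.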
Although we derive no bounds for the error of this weight approximation, numerical experiments in \Cref{sec:numsim} indicate that the approximation is accurate and that it appears that $\widetilde{w}_{k} \to w_{k}$ as $N \to \infty$. In \Cref{sec:tensor} we extend the weight approximation to $d$-dimensional Gaussian tensor product kernel cubature rules of the form \begin{equation*} Q_k^d = Q_{k,1} \otimes \cdots \otimes Q_{k,d}, \end{equation*} where $Q_{k,i}$ are one-dimensional Gaussian kernel quadrature rules. Since each weight of $Q_k^d$ is a product of weights of the univariate rules, an approximation for the tensor product weights is readily available. \rev{ It turns out that the approximate weights and the associated nodes $\xkgh_n$ have a number of desirable properties: \begin{itemize} \item We are not aware of any work on efficient selection of ``good'' nodes in the setting of this article. The Gauss--Hermite nodes~\citep[Section 3]{OHagan1991} and random points~\citep{RasmussenGhahramani2002} are often used, but one should clearly be able to do better, while computation of the optimal nodes~\citep[Section 5.2]{Oettershagen2017} is computationally demanding. As such, given the desirable properties, listed below, of the resulting kernel quadrature rules, the nodes $\xkgh_n$ appear to be an excellent heuristic choice. These nodes also behave naturally when $\ell \to \infty$; see \Cref{sec:lengthscale}. \item Numerical experiments in \Cref{sec:numsim-positivity} suggest that both $w_{k,n}$ (for the nodes $\xkgh_n$) and $\widetilde{w}_{k,n}$ are positive for any $N \in \N$ and every $n = 1,\ldots, N$. Besides the optimal nodes, for which the weights are guaranteed to be positive when the Gaussian kernel is used~\citep{RichterDyn1971a,Oettershagen2017}, we are aware of no node configurations that give rise to positive weights.
\item Numerical experiments in \Cref{sec:numsim-stability,sec:numsim-positivity} demonstrate that computation of the approximate weights is numerically stable. Furthermore, construction of these weights only incurs a quadratic computational cost in the number of points, as opposed to the cubic cost of solving $w_k$ from \Cref{eq:linSys}. See \Cref{sec:complexity} for more details. Note that to obtain a numerically stable method it is not necessary to use the nodes $\xkgh_n$, as the method of \citet{FasshauerMcCourt2012} can be applied in a straightforward manner to any nodes. However, one then forgoes a closed form expression and has to use the QR decomposition. \item In \Cref{sec:convergence,sec:tensor} we show that slow enough growth with $N$ of $\sum_{n=1}^N \abs[0]{\wkgh_{k,n}}$ (numerical evidence indicates this sum converges to one) guarantees that the approximate Gaussian kernel quadrature rule---as well as the corresponding tensor product version---converges with an exponential rate for functions in the RKHS of the Gaussian kernel. Convergence analysis is based on the magnitude of the remainder of the Mercer expansion and on rather explicit bounds on Hermite polynomials and their roots. The magnitude of the nodes $\xkgh_n$ is crucial for the analysis; if they were spread out further, the proofs would not work as such. \item We find the connection to the Gauss--Hermite weights and nodes that the closed form expression for $\widetilde{w}_k$ provides intriguing and hope that it can at some point be used to furnish, for example, a rigorous proof of positivity of the approximate weights. \end{itemize} } \section{Approximate weights}\label{sec:main} This section contains the main results of this article. The main contribution is derivation, in \Cref{thm:main}, of the weights $\widetilde{w}_k$, which can be used to approximate the kernel quadrature weights.
We also discuss positivity of these weights, the effect the kernel length-scale $\ell$ is expected to have on the quality of the approximation, and computational complexity. \subsection{Eigendecomposition of the Gaussian kernel}\label{sec:eigendecomposition} Let $\nu$ be a probability measure on the real line. If the support of $\nu$ is compact, Mercer's theorem guarantees that any continuous positive-definite kernel $k$ admits an absolutely and uniformly convergent eigendecomposition \begin{equation}\label{eq:eigenDecomp} k(x,y) = \sum_{n=0}^\infty \lambda_n \phi_n (x) \phi_n(y) \end{equation} for positive and non-increasing eigenvalues $\lambda_n$ and eigenfunctions $\phi_n$ that are included in the RKHS $\mathcal{H}$ induced by $k$ and orthonormal in $L^2(\nu)$. Moreover, $\sqrt{\lambda_n} \phi_n$ are $\mathcal{H}$-orthonormal. If the support of $\nu$ is not compact, the expansion~\eqref{eq:eigenDecomp} converges absolutely and uniformly on all compact subsets of $\R \times \R$ under some mild assumptions~\citep{Sun2005,SteinwartScovel2012}. For the Gaussian kernel~\eqref{eq:Gaussian} and a Gaussian measure, the eigenvalues and eigenfunctions are available analytically. For a collection of explicit eigendecompositions of some other kernels, see for instance~\citep[Appendix A]{FasshauerMcCourt2015}. Let $\mu_\alpha$ stand for the Gaussian probability measure, \begin{equation*} \dif \mu_\alpha(x) \coloneqq \frac{\alpha}{\sqrt{\pi}} \neper^{-\alpha^2 x^2} \dif x, \end{equation*} with variance $1/(2\alpha^2)$ (i.e., $\mu = \mu_{1/\sqrt{2}}\,$) and \begin{equation}\label{eq:hermite} \mathrm{H}_n(x) \coloneqq (-1)^n \neper^{x^2/2} \od[n]{}{x} \neper^{-x^2/2} \end{equation} for the (unnormalised) probabilists' Hermite polynomial satisfying the orthogonality property $\inprod{\mathrm{H}_n}{\mathrm{H}_m}_{L^2(\mu)} = n! \, \delta_{nm}$.
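The orthogonality relation $\inprod{\mathrm{H}_n}{\mathrm{H}_m}_{L^2(\mu)} = n!\,\delta_{nm}$ is easy to verify numerically with a Gauss--Hermite rule that is exact for the polynomial degrees involved; the point count in the sketch below is an arbitrary choice of ours:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

x, w = He.hermegauss(12)              # exact for polynomials of degree <= 23
w = w / math.sqrt(2.0 * math.pi)      # normalise to the probability measure mu

def herm(n, x):
    # Evaluate the probabilists' Hermite polynomial H_n at x.
    c = np.zeros(n + 1)
    c[-1] = 1.0
    return He.hermeval(x, c)

for n in range(6):
    for m in range(6):
        inner = float(np.dot(w, herm(n, x) * herm(m, x)))
        expected = math.factorial(n) if n == m else 0.0
        assert abs(inner - expected) < 1e-9
```

Since the products involved have degree at most $10$, the $12$-point rule computes these inner products exactly up to rounding.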
Denote \begin{equation}\label{eq:constants} \epsilon = \frac{1}{\sqrt{2}\ell}, \hspace{0.5cm} \beta = \bigg( 1 + \bigg(\frac{2\epsilon}{\alpha}\bigg)^2 \bigg)^{1/4}, \hspace{0.5cm} \text{and} \hspace{0.5cm} \delta^2 = \frac{\alpha^2}{2} (\beta^2 - 1) \end{equation} and note that $\beta > 1$ and $\delta^2 > 0$. Then the eigenvalues and $L^2(\mu_\alpha)$-orthonormal eigenfunctions of the Gaussian kernel are~\citep{FasshauerMcCourt2012} \begin{equation}\label{eq:eigenvalues} \lambda_n^\alpha \coloneqq \sqrt{\frac{\alpha^2}{\alpha^2 + \delta^2 + \epsilon^2}} \bigg( \frac{\epsilon^2}{\alpha^2 + \delta^2 + \epsilon^2} \bigg)^{n} \end{equation} and \begin{equation}\label{eq:eigenfunctions} \phi_n^\alpha(x) \coloneqq \sqrt{\frac{\beta}{n!}} \neper^{-\delta^2 x^2} \mathrm{H}_{n} \big( \sqrt{2} \alpha \beta x \big). \end{equation} See~\citep[Section 12.2.1]{FasshauerMcCourt2015} for verification that these indeed are Mercer eigenfunctions and eigenvalues for the Gaussian kernel. The role of the parameter $\alpha$ is discussed in \Cref{sec:appweights}. The following result, also derivable from Equation 22.13.17 in~\citep{AbramowitzStegun1964}, will be useful. \begin{lemma}\label{lemma:integral} The eigenfunctions~\eqref{eq:eigenfunctions} of the Gaussian kernel~\eqref{eq:Gaussian} satisfy \begin{equation*} \mu(\phi_{2m+1}^\alpha) = 0 \hspace{0.5cm} \text{and} \hspace{0.5cm} \mu(\phi_{2m}^\alpha) = \bigg(\frac{\beta}{1+2\delta^2}\bigg)^{1/2} \frac{\sqrt{(2m)!}}{2^{m} m! } \bigg(\frac{2\alpha^2 \beta^2}{1+2\delta^2} - 1 \bigg)^m \end{equation*} for $m \geq 0$. \end{lemma} \begin{proof} Since an Hermite polynomial of odd order is an odd function, $\mu(\phi_{2m+1}^\alpha) = 0$. 
For even indices, use the explicit expression \begin{equation*} \mathrm{H}_{2m}(x) = \frac{(2m)!}{2^{m}} \sum_{p=0}^m \frac{(-1)^{m-p}}{(2p)!(m-p)!} \big( \sqrt{2} x \big)^{2p}, \end{equation*} the Gaussian moment formula \begin{equation*} \int_\R x^{2p} \neper^{-\delta^2 x^2} \dif \mu(x) = \frac{1}{\sqrt{2\pi}} \int_\R x^{2p} \neper^{-(\delta^2 + 1/2) x^2} \dif x = \frac{(2p)!}{2^p p!(1+2\delta^2)^{p+1/2}}, \end{equation*} and the binomial theorem to conclude that \begin{equation*} \begin{split} \mu(\phi_{2m}^\alpha) &= \frac{\sqrt{(2m)!\beta}}{2^{m}} \sum_{p=0}^m \frac{(-1)^{m-p}}{(2p)!(m-p)!} (2 \alpha \beta)^{2p} \int_\R x^{2p} \neper^{-\delta^2 x^2} \dif \mu(x) \\ &= \frac{(-1)^m \sqrt{(2m)! \beta}}{2^{m}\sqrt{1+2\delta^2}} \sum_{p=0}^m \frac{1}{p!(m-p)!} \bigg( -\frac{2\alpha^2 \beta^2}{1+2\delta^2} \bigg)^p \\ &= \frac{(-1)^m \sqrt{(2m)! \beta}}{2^{m} m! \sqrt{1+2\delta^2}} \sum_{p=0}^m {m \choose p} \bigg( -\frac{2\alpha^2 \beta^2}{1+2\delta^2} \bigg)^p \\ &= \bigg(\frac{\beta}{1+2\delta^2}\bigg)^{1/2} \frac{\sqrt{(2m)!}}{2^{m} m! } \bigg(\frac{2\alpha^2 \beta^2}{1+2\delta^2} - 1 \bigg)^m. \end{split} \end{equation*} \qed \end{proof} \subsection{Approximation via QR decomposition}\label{sec:qr} We begin by outlining a straightforward extension to kernel quadrature of the work of \citet[Chapter 13]{FasshauerMcCourt2012,FasshauerMcCourt2015} on numerically stable kernel interpolation. Recall that the kernel quadrature weights $w_k \in \R^N$ at distinct nodes $x_1,\ldots,x_N$ are solved from the linear system $K w_k = k_\mu$ with \sloppy{${[K]_{ij} = k(x_i,x_j)}$} and \sloppy{${[k_{\mu}]_i = \int_\R k(x_i,x) \dif \mu(x)}$}. 
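Before turning to the eigenfunction-based approximation, note that for the Gaussian kernel the entries of $k_\mu$ are available in closed form via the standard Gaussian convolution identity $\int_\R \exp(-(x_i-x)^2/(2\ell^2)) \dif\mu(x) = \ell(1+\ell^2)^{-1/2}\exp(-x_i^2/(2(1+\ell^2)))$ (a textbook fact, not derived in this article), so the exact weights can be obtained by a direct linear solve. A minimal sketch, with a node choice of our own:

```python
import math
import numpy as np

def kernel_quadrature_weights(nodes, ell):
    """Solve K w = k_mu for the Gaussian kernel and the standard Gaussian measure."""
    x = np.asarray(nodes, dtype=float)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * ell**2))
    # Kernel mean in closed form: convolution of two Gaussians.
    k_mu = ell / math.sqrt(1.0 + ell**2) * np.exp(-(x**2) / (2.0 * (1.0 + ell**2)))
    return np.linalg.solve(K, k_mu)

nodes = np.linspace(-3.0, 3.0, 7)
weights = kernel_quadrature_weights(nodes, ell=1.0)
```

By construction the resulting rule integrates each kernel translate $k(x_i,\cdot)$ exactly; for larger $N$ or larger $\ell$ this direct solve is precisely where the ill-conditioning discussed in the text becomes an issue.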
Truncation of the eigendecomposition~\eqref{eq:eigenDecomp} after $M \geq N$ terms\footnote{\rev{Low-rank approximations (i.e.\ $M < N$) are also possible~\citep[Section 6.1]{FasshauerMcCourt2012}.}} yields the approximations $K \approx \Phi \Lambda \Phi^\transpose$ and $k_\mu \approx \Phi \Lambda \phi_\mu$, where \sloppy{${[\Phi]_{ij} \coloneqq \phi_{j-1}^\alpha(x_i)}$} is an $N \times M$ matrix, the diagonal $M \times M$ matrix $[\Lambda]_{ii} \coloneqq \lambda_{i-1}$ contains the eigenvalues in appropriate order, and \sloppy{${[\phi_\mu]_i \coloneqq \mu(\phi_{i-1})}$} is an $M$-vector. The kernel quadrature weights $w_k$ can be therefore approximated by \begin{equation}\label{eq:Mweights} \widetilde{w}_k^M \coloneqq \big( \Phi \Lambda \Phi^\transpose \big)^{-1} \Phi \Lambda \phi_\mu. \end{equation} \Cref{eq:Mweights} can be written in a more convenient form by exploiting the QR decomposition. The QR decomposition of $\Phi$ is \begin{equation*} \Phi = QR \coloneqq Q \begin{bmatrix} R_1 & R_2 \end{bmatrix} \end{equation*} for a unitary $Q \in \R^{N \times N}$, an upper triangular $R_1 \in \R^{N \times N}$, and $R_2 \in \R^{N \times (M-N)}$. Consequently, \begin{equation*} \widetilde{w}_k^M = \big( QR \Lambda R^\transpose Q^\transpose \big)^{-1} QR \Lambda \phi_\mu = Q \big( R \Lambda R^\transpose \big)^{-1} R \Lambda \phi_\mu. \end{equation*} The decomposition \begin{equation*} \Lambda = \begin{bmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{bmatrix} \end{equation*} of $\Lambda \in \R^{M \times M}$ into diagonal $\Lambda_1 \in \R^{N \times N}$ and $\Lambda_2 \in \R^{(M-N) \times (M-N)}$ allows for writing \begin{equation*} R \Lambda R^\transpose = R_1 \Lambda_1 \big( R_1^\transpose + \Lambda_1^{-1} R_1^{-1} R_2 \Lambda_2 R_2^\transpose \big). 
\end{equation*} Therefore, \begin{equation}\label{eq:MweightsFinal} \widetilde{w}_k^M = Q \big( R_1^\transpose + \Lambda_1^{-1} R_1^{-1} R_2 \Lambda_2 R_2^\transpose \big)^{-1} \begin{bmatrix} I_N & \Lambda_1^{-1} R_1^{-1} R_2 \Lambda_2 \end{bmatrix} \phi_\mu, \end{equation} where $I_N$ is the $N \times N$ identity matrix. \rev{If $\epsilon^2/(\alpha^2+\delta^2+\epsilon^2)$ is small (i.e., $\ell$ is large), numerical ill-conditioning in \Cref{eq:MweightsFinal} for the Gaussian kernel is associated with the diagonal matrices $\Lambda_1^{-1}$ and $\Lambda_2$.} Consequently, numerical stability can be significantly improved by performing the multiplications by these matrices in the terms $\Lambda_1^{-1} R_1^{-1} R_2 \Lambda_2 R_2^\transpose$ and $\Lambda_1^{-1} R_1^{-1} R_2 \Lambda_2$ analytically; \rev{see \citep[Sections 4.1 and 4.2]{FasshauerMcCourt2012} for more details}. Unfortunately, using the QR decomposition does not provide an attractive closed form solution for the approximate weights $\widetilde{w}_k^M$ \rev{for general $M$}. Setting $M = N$ turns $\Phi$ into a square matrix, enabling its direct inversion and formation of an \rev{explicit} connection to the classical Gauss--Hermite quadrature. \rev{The rest of the article is concerned with this special case.} \subsection{Gauss--Hermite quadrature} Given a measure $\nu$ on $\R$, the $N$-point \emph{Gaussian quadrature rule} is the unique $N$-point quadrature rule that is exact for all polynomials of degree at most $2N-1$. We are interested in \emph{Gauss--Hermite quadrature rules} that are Gaussian rules for the Gaussian measure~$\mu$: \begin{equation*} \sum_{n=1}^N w_n^\textsm{GH} p(x_n^\textsm{GH}) = \mu(p) \end{equation*} for every polynomial $p \colon \R \to \R$ with $\deg p \leq 2N-1$. The nodes $x_1^\textsm{GH}, \ldots x_N^\textsm{GH}$ are the roots of the $N$th Hermite polynomial $\mathrm{H}_N$ and the weights $w_1^\textsm{GH}, \ldots, w_N^\textsm{GH}$ are positive and sum to one. 
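These classical properties are straightforward to check numerically: after normalising the probabilists' Gauss--Hermite rule to the probability measure $\mu$, the $N$-point rule has positive weights summing to one and reproduces the Gaussian moments $\mu(x^{2p})=(2p-1)!!$ for all degrees up to $2N-1$. The sketch below, with sizes of our own choosing, verifies this:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

N = 6
x, w = He.hermegauss(N)               # probabilists' Gauss--Hermite rule
w = w / math.sqrt(2.0 * math.pi)      # weights now positive and summing to one

assert np.all(w > 0) and abs(w.sum() - 1.0) < 1e-12
for k in range(2 * N):                # exact for degrees 0, ..., 2N - 1
    moment = float(np.dot(w, x**k))
    exact = 0.0 if k % 2 else math.prod(range(1, k, 2))  # (k - 1)!! for even k
    assert abs(moment - exact) < 1e-9
```

Degree $2N$ is the first degree at which exactness fails, which is the sense in which the $N$-point Gaussian rule is optimal.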
The nodes and the weights are related to the eigenvalues and eigenvectors of the tridiagonal Jacobi matrix formed from the three-term recurrence coefficients of the normalised Hermite polynomials~\citep[Theorem 3.1]{Gautschi2004}. We make use of the following theorem, a one-dimensional special case of a more general result due to \citet{Mysovskikh1968}. See also~\citep[Section 7]{Cools1997}. This result also follows from the Christoffel--Darboux formula~\eqref{eq:christoffelDarboux}. \begin{theorem}\label{thm:Mysovskikh} Let $\nu$ be a measure on $\R$. Suppose that $x_1,\ldots,x_N$ and $w_1,\ldots,w_N$ are the nodes and weights of the unique $N$-point Gaussian quadrature rule for $\nu$. Let $p_0,\ldots,p_{N-1}$ be the $L^2(\nu)$-orthonormal polynomials. Then the matrix $[P]_{ij} \coloneqq \sum_{n=0}^{N-1} p_n(x_i) p_n(x_j)$ is diagonal and has the diagonal elements $[P]_{ii} = 1/w_i$. \end{theorem} \subsection{Approximate weights at scaled Gauss--Hermite nodes}\label{sec:appweights} Let us now consider the approximate weights~\eqref{eq:Mweights} with $M = N$. Assuming that $\Phi$ is invertible, we then have \begin{equation*} w_k \approx \widetilde{w}_k \coloneqq \widetilde{w}_k^N = \big( \Phi \Lambda \Phi^\transpose \big)^{-1} \Phi \Lambda \phi_\mu = \Phi^{-\transpose} \phi_\mu. \end{equation*} \rev{Note that the exponentially decaying Mercer eigenvalues, a major source of numerical instability, do not appear in the equation for $\widetilde{w}_k$.} The weights $\widetilde{w}_k$ are those of the unique quadrature rule that is exact for the first $N$ eigenfunctions $\phi_0^\alpha,\ldots,\phi_{N-1}^\alpha$. For the Gaussian kernel, we are in a position to do much more.
Recalling the form of the eigenfunctions in \Cref{eq:eigenfunctions}, we can write $\Phi = \sqrt{\beta} E^{-1} V$ for the diagonal matrix $[E]_{ii} \coloneqq \neper^{\delta^2 x_i^2}$ and the Vandermonde matrix \begin{equation}\label{eq:Vmatrix} [V]_{ij} \coloneqq \frac{1}{\sqrt{(j-1)!}} \mathrm{H}_{j-1} \big( \sqrt{2}\alpha\beta x_i \big) \end{equation} of scaled and normalised Hermite polynomials. From this it is evident that $\Phi$ is invertible---which is just a manifestation of the fact that the eigenfunctions of a totally positive kernel constitute a Chebyshev system~\citep{Kellog1918,Pinkus1996}. Consequently, \begin{equation*} \widetilde{w}_k = \frac{1}{\sqrt{\beta}} E V^{-\transpose} \phi_\mu. \end{equation*} Select the nodes \begin{equation*} \xkgh_n \coloneqq \frac{1}{\sqrt{2} \alpha \beta} x_n^\textsm{GH}. \end{equation*} Then the matrix $V$ defined in \Cref{eq:Vmatrix} is precisely the Vandermonde matrix of the normalised Hermite polynomials and $V V^\transpose$ is the matrix $P$ of \Cref{thm:Mysovskikh}. Let $W_\textsm{GH}$ be the diagonal matrix containing the Gauss--Hermite weights. It follows that $V^{-\transpose} = W_\textsm{GH} V$ and \begin{equation}\label{eq:wa1st} \widetilde{w}_k = \frac{1}{\sqrt{\beta}} E V^{-\transpose} \phi_\mu = \frac{1}{\sqrt{\beta}} E W_\textsm{GH} V \phi_\mu. \end{equation} Combining this equation with \Cref{lemma:integral}, we obtain the main result of this article. \begin{theorem}\label{thm:main} Let $x_1^\textsm{GH}, \ldots, x_N^\textsm{GH}$ and $w_1^\textsm{GH}, \ldots, w_N^\textsm{GH}$ stand for the nodes and weights of the $N$-point Gauss--Hermite quadrature rule. Define the nodes \begin{equation}\label{eq:nodes} \xkgh_n = \frac{1}{\sqrt{2} \alpha \beta} x_n^\textsm{GH}. 
\end{equation} Then the weights $\wkgh_{k} \in \R^N$ of the $N$-point quadrature rule \begin{equation*} \widetilde{Q}_k(f) \coloneqq \sum_{n=1}^N \wkgh_{k,n} f(\xkgh_n), \end{equation*} defined by the exactness conditions $\widetilde{Q}_k(\phi_n^\alpha) = \mu(\phi_n^\alpha)$ for $n = 0,\ldots,N-1$, are \begin{equation}\label{eq:approximation} \widetilde{w}_{k,n} = \bigg(\frac{1}{1+2\delta^2}\bigg)^{1/2} w_n^\textsm{GH} \neper^{\delta^2 \xkgh_n^2} \sum_{m=0}^{\floor{(N-1)/2}} \frac{1}{2^m m! } \bigg( \! \frac{2\alpha^2 \beta^2}{1+2\delta^2} - 1 \bigg)^m \mathrm{H}_{2m}(x_n^\textsm{GH}), \end{equation} where $\alpha$, $\beta$, and $\delta$ are defined in \Cref{eq:constants} and $\mathrm{H}_{2m}$ are the probabilists' Hermite polynomials~\eqref{eq:hermite}. \end{theorem} Since the weights $\wkgh_{k}$ are obtained by truncating the Mercer expansion of $k$, it is to be expected that $\wkgh_k \approx w_k$. This motivates calling these weights the \emph{approximate Gaussian kernel quadrature weights}. We do not provide theoretical results on the quality of this approximation, but the numerical experiments in \Cref{sec:numsim-approximation} indicate that the approximation is accurate and that its accuracy increases with $N$. See~\citep{FasshauerMcCourt2012} for related experiments. An alternative non-analytical formula for the approximate weights can be derived using the Christoffel--Darboux formula~\citep[Section 1.3.3]{Gautschi2004} \begin{equation}\label{eq:christoffelDarboux} \sum_{m=0}^M \frac{\mathrm{H}_m(x)\mathrm{H}_m(y)}{m!} = \frac{\mathrm{H}_M(y) \mathrm{H}_{M+1}(x) - \mathrm{H}_M(x) \mathrm{H}_{M+1}(y)}{M! (x-y)}.
\end{equation} From~\Cref{eq:wa1st} we then obtain (keep in mind that $x_1^\textsm{GH},\ldots,x_N^\textsm{GH}$ are the roots of~$\mathrm{H}_N$) \begin{equation*} \begin{split} \widetilde{w}_{k,n} &= \frac{1}{\sqrt{\beta}} w_n^\textsm{GH} \neper^{\delta^2 \xkgh_n^2} \sum_{m=0}^{N-1} \frac{1}{\sqrt{m!}} \mathrm{H}_m(x_n^\textsm{GH}) \mu(\phi_m^\alpha) \\ &= w_n^\textsm{GH} \neper^{\delta^2 \xkgh_n^2} \int_\R \neper^{-\delta^2 x^2} \sum_{m=0}^{N-1} \frac{\mathrm{H}_m(x_n^\textsm{GH}) \mathrm{H}_m(\sqrt{2}\alpha \beta x)}{m!} \dif \mu(x) \\ &= \frac{w_n^\textsm{GH} \neper^{\delta^2 \xkgh_n^2} \mathrm{H}_{N-1}(x_n^\textsm{GH})}{\sqrt{2\pi} (N-1)!} \int_\R \frac{\mathrm{H}_N(\sqrt{2}\alpha \beta x) }{\sqrt{2}\alpha \beta x-x_n^\textsm{GH}} \neper^{-(\delta^2 + 1/2) x^2} \dif x \\ &= \frac{w_n^\textsm{GH} \neper^{\delta^2 \xkgh_n^2} \mathrm{H}_{N-1}(x_n^\textsm{GH})}{2\sqrt{\pi}\alpha\beta(N-1)!} \int_\R \frac{\mathrm{H}_N(x) }{x-x_n^\textsm{GH}} \exp\bigg( -\frac{\delta^2 + 1/2}{2\alpha^2\beta^2} x^2 \bigg) \dif x. \end{split} \end{equation*} This formula is analogous to the formula \begin{equation*} w_n^\textsm{GH} = \frac{1}{\sqrt{2\pi} N \mathrm{H}_{N-1}(x_n^\textsm{GH})} \int_\R \frac{\mathrm{H}_N(x)}{x-x_n^\textsm{GH}} \neper^{-x^2/2} \dif x \end{equation*} for the Gauss--Hermite weights. Plugging this in, we get \begin{equation*} \widetilde{w}_{k,n} = \frac{\neper^{\delta^2 \xkgh_n^2}}{2\sqrt{2} \pi \alpha \beta N!} \int_\R \frac{\mathrm{H}_N(x)}{x-x_n^\textsm{GH}} \neper^{-x^2/2} \dif x \int_\R \frac{\mathrm{H}_N(x) }{x-x_n^\textsm{GH}} \exp\bigg( -\frac{\delta^2 + 1/2}{2\alpha^2\beta^2} x^2 \bigg) \dif x. \end{equation*} It appears that both $w_{k,n}$ and $\widetilde{w}_{k,n}$ of \Cref{thm:main} are positive for many choices of~$\alpha$; see \Cref{sec:numsim-positivity} for experiments involving $\alpha = 1/\sqrt{2}$. Unfortunately, we have not been able to prove this. In fact, numerical evidence indicates something slightly stronger. 
Namely, that the even polynomial \begin{equation*} R_{\gamma,N}(x) \coloneqq \sum_{m=0}^{\floor{(N-1)/2}} \frac{\gamma^m}{2^m m! } \mathrm{H}_{2m}(x) \end{equation*} of degree $2\floor{(N-1)/2}$ is positive for every $N \geq 1$ and (at least) every $0 < \gamma \leq 1$. This would imply positivity of $\widetilde{w}_{k,n}$ since the Gauss--Hermite weights $w_n^\textsm{GH}$ are positive. For example, with $\alpha = 1/\sqrt{2}$, \begin{equation*} \frac{2\alpha^2 \beta^2}{1+2\delta^2} - 1 = \frac{2\sqrt{1+8\epsilon^2}}{1+\sqrt{1+8\epsilon^2}} - 1 = \frac{\sqrt{1+8\epsilon^2}-1}{1+\sqrt{1+8\epsilon^2}} \in (0, 1). \end{equation*} As discussed in~\citep{FasshauerMcCourt2012} in the context of kernel interpolation, the parameter $\alpha$ acts as a global scale parameter. While in interpolation it is not entirely clear how this parameter should be selected, in quadrature it seems natural to set $\alpha = 1/\sqrt{2}$ so that the eigenfunctions are orthonormal in $L^2(\mu)$. This is the value that we use, though other values are also potentially of interest since $\alpha$ can be used to control the spread of the nodes independently of the length-scale $\ell$. In \Cref{sec:convergence}, we also see that this value leads to a more natural convergence analysis. \subsection{Effect of the length-scale}\label{sec:lengthscale} Roughly speaking, the magnitude of the eigenvalues \begin{equation*} \lambda_n^\alpha = \sqrt{\frac{\alpha^2}{\alpha^2 + \delta^2 + \epsilon^2}} \bigg( \frac{\epsilon^2}{\alpha^2 + \delta^2 + \epsilon^2} \bigg)^{n} \end{equation*} determines how many eigenfunctions are necessary for an accurate weight approximation. We therefore expect that the approximation~\eqref{eq:approximation} is less accurate when the length-scale $\ell$ is small (i.e., $\epsilon = 1/(\sqrt{2}\ell)$ is large). This is confirmed by the numerical experiments in \Cref{sec:numsim}. Consider then the case $\ell \to \infty$.
This scenario is called the \emph{flat limit} in the scattered data approximation literature, where it has been proved\footnote{It is interesting to note that the first published observation of an analogous phenomenon is, as far as we are aware, due to~\citet[Section 3.3]{OHagan1991} in the kernel quadrature literature, predating the work of \citet{DriscollFornberg2002}. See also~\citep{Minka2000} for early quadrature-related work on the topic.} that the kernel interpolant associated with an isotropic kernel with increasing length-scale converges (i) to the unique polynomial interpolant of degree $N-1$ to the data if the kernel is infinitely smooth~\citep{LarssonFornberg2005,Schaback2005,LeeYoonYoon2007} or (ii) to a polyharmonic spline interpolant if the kernel is of finite smoothness~\citep{LeeMicchelliYoon2014}. In our case, $\ell \to \infty$ results in \begin{equation*} \epsilon \to 0, \hspace{0.5cm} \beta \to 1, \hspace{0.5cm} \delta^2 \to 0, \hspace{0.5cm} \lambda_n^\alpha \to 0, \hspace{0.5cm} \text{and} \hspace{0.5cm} \phi_n^\alpha(x) \to \mathrm{H}_n \big(\sqrt{2} \alpha x \big). \end{equation*} If the nodes are selected as in \Cref{eq:nodes}, $\xkgh_n \to x_n^\textsm{GH}/(\sqrt{2}\alpha)$. That is, if $\alpha = 1/\sqrt{2}$, \begin{equation*} \phi_n^\alpha(x) \to \mathrm{H}_n(x), \hspace{0.5cm} \xkgh_n \to x_n^\textsm{GH}, \hspace{0.5cm} \text{and} \hspace{0.5cm} \widetilde{w}_{k,n} \to w_n^\textsm{GH}. \end{equation*} That the approximate weights converge to the Gauss--Hermite ones can be seen, for example, from \Cref{eq:approximation} by noting that only the first term in the sum is retained at the limit. Based on the aforementioned results on the convergence of kernel interpolants to polynomial ones at the flat limit, it is to be expected that also $w_{k,n} \to w_n^\textsm{GH}$ as $\ell \to \infty$ (we do not attempt to prove this).
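The closed form \eqref{eq:approximation} and this flat-limit behaviour are easy to probe numerically. The sketch below assumes the constants of \Cref{eq:constants} in their standard Fasshauer--McCourt form, $\epsilon = 1/(\sqrt{2}\ell)$, $\beta = (1 + (2\epsilon/\alpha)^2)^{1/4}$, and $\delta^2 = (\alpha^2/2)(\beta^2 - 1)$, which are not restated in this section:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def approximate_weights(N, ell, alpha=1.0 / np.sqrt(2.0)):
    """Scaled nodes and approximate weights of eq:approximation.

    Assumes the constants eps = 1/(sqrt(2)*ell),
    beta = (1 + (2*eps/alpha)**2)**(1/4), delta^2 = (alpha^2/2)*(beta^2 - 1).
    """
    eps = 1.0 / (np.sqrt(2.0) * ell)
    beta = (1.0 + (2.0 * eps / alpha) ** 2) ** 0.25
    delta2 = 0.5 * alpha**2 * (beta**2 - 1.0)
    x_gh, w_gh = hermegauss(N)             # probabilists' Gauss-Hermite rule
    w_gh = w_gh / np.sqrt(2.0 * np.pi)     # renormalise: weights sum to one
    x_tilde = x_gh / (np.sqrt(2.0) * alpha * beta)
    gamma = 2.0 * alpha**2 * beta**2 / (1.0 + 2.0 * delta2) - 1.0
    s = np.zeros(N)
    for m in range((N - 1) // 2 + 1):
        coeffs = np.zeros(2 * m + 1)
        coeffs[2 * m] = 1.0                # coefficient vector selecting He_{2m}
        s += gamma**m / (2.0**m * factorial(m)) * hermeval(x_gh, coeffs)
    w_tilde = (1.0 + 2.0 * delta2) ** -0.5 * w_gh * np.exp(delta2 * x_tilde**2) * s
    return x_tilde, w_tilde
```

For instance, at $\ell = 10^3$ the computed weights agree with the normalised Gauss--Hermite weights to several digits, consistent with the flat-limit discussion above.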
Because the Gauss--Hermite quadrature rule is the ``best'' rule for polynomials and kernel interpolants converge to polynomials at the flat limit, the above observation provides another justification for the choice $\alpha = 1/\sqrt{2}$ that we proposed in the preceding section. When it comes to node placement, the length-scale has an intuitive effect if the nodes are selected according to \Cref{eq:nodes}. For small $\ell$, the nodes are placed closer to the origin, where most of the measure is concentrated, as integrands are expected to converge quickly to zero as $\abs[0]{x} \to \infty$, whereas for larger $\ell$ the nodes are more---but not unlimitedly---spread out in order to capture the behaviour of functions that potentially contribute to the integral further away from the origin. \subsection{On computational complexity} \label{sec:complexity} Because the Gauss--Hermite nodes and weights are related to the eigenvalues and eigenvectors of the tridiagonal Jacobi matrix~\citep[Theorem 3.1]{Gautschi2004}, they\rev{---and the points $\tilde{x}_n$---}can be computed in quadratic time \rev{(in practice, these nodes and weights can often be tabulated beforehand)}. From \Cref{eq:approximation} it is seen that the computation of each approximate weight is linear in $N$: there are approximately $(N-1)/2$ terms in the sum and the Hermite polynomials can be evaluated on the fly using the three-term recurrence formula \rev{$\mathrm{H}_{n+1}(x) = x \mathrm{H}_n(x) - n \mathrm{H}_{n-1}(x)$}. That is, the computational cost of obtaining $\tilde{x}_n$ and $\widetilde{w}_{k,n}$ for $n=1,\ldots,N$ is \rev{quadratic} in $N$. Since the kernel matrix $K$ of the Gaussian kernel is dense, solving the exact kernel quadrature weights from the linear system~\eqref{eq:linSys} \rev{for the points $\tilde{x}_n$} incurs a \rev{more demanding} cubic computational cost.
Because the computational cost of a tensor product rule does not depend on the nodes and weights after these have been computed, the above discussion also applies to the rules presented in \Cref{sec:tensor}. \section{Convergence analysis}\label{sec:convergence} In this section we analyse the convergence, in the reproducing kernel Hilbert space $\mathcal{H} \subset C^\infty(\R)$ induced by the Gaussian kernel, of quadrature rules that are exact for the Mercer eigenfunctions. First, we prove a generic result (\Cref{thm:convergence}) to this effect and then apply it to the quadrature rule with the nodes $\xkgh_n$ and weights $\widetilde{w}_{k,n}$. If $\sum_{n=1}^N \abs[0]{\widetilde{w}_{k,n}}$ does not grow too fast with $N$, we obtain exponential convergence rates. Recall some basic facts about reproducing kernel Hilbert spaces~\citep{BerlinetThomasAgnan2004}: (i) $\inprod{f}{k(x,\cdot)}_\mathcal{H} = f(x)$ for any $f \in \mathcal{H}$ and $x \in \R$ and (ii) $f = \sum_{n=0}^\infty \lambda_n^\alpha \inprod{f}{\phi_n^\alpha}_\mathcal{H} \phi_n^\alpha$ for any $f \in \mathcal{H}$. The \emph{worst-case error} $e(Q)$ of a quadrature rule $Q(f) = \sum_{n=1}^N w_n f(x_n)$ is \begin{equation*} e(Q) \coloneqq \sup_{\norm[0]{f}_\mathcal{H} \leq 1} \abs[0]{\mu(f) - Q(f)}. \end{equation*} Crucially, the worst-case error satisfies \begin{equation*} \abs[0]{\mu(f) - Q(f)} \leq \norm[0]{f}_\mathcal{H} e(Q) \end{equation*} for any $f \in \mathcal{H}$. This justifies calling a sequence $\{Q_N\}_{N=1}^\infty$ of $N$-point quadrature rules \emph{convergent} if $e(Q_N) \to 0$ as $N \to \infty$. For given nodes $x_1,\ldots,x_N$, the weight vector $w_k = (w_{k,1},\ldots,w_{k,N})$ of the kernel quadrature rule $Q_k$ is the unique minimiser of the worst-case error: \begin{equation*} w_k = \argmin_{w \in \R^N} \sup_{\norm[0]{f}_\mathcal{H} \leq 1} \, \abs[3]{ \int_\R f \dif \mu - \sum_{n=1}^N w_n f(x_n) }. \end{equation*} It follows that a rate of convergence to zero for $e(Q)$ also applies to $e(Q_k)$.
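For the Gaussian kernel and the standard Gaussian measure the worst-case error has the standard closed form $e(Q)^2 = \iint k \dif\mu \dif\mu - 2\sum_n w_n \mu(k(x_n,\cdot)) + \sum_{n,m} w_n w_m k(x_n,x_m)$, which for the optimal kernel quadrature weights reduces to $e(Q_k)^2 = \iint k \dif\mu \dif\mu - z^\transpose w_k$ with $z_n = \mu(k(x_n,\cdot))$. A sketch using the Gaussian identities $\mu(k(x,\cdot)) = \ell(\ell^2+1)^{-1/2}\exp(-x^2/(2(\ell^2+1)))$ and $\iint k \dif\mu \dif\mu = \ell(\ell^2+2)^{-1/2}$ (standard facts, not derived in the text):

```python
import numpy as np

def kernel_quadrature_wce(nodes, ell):
    """Gaussian-kernel quadrature weights on the given nodes and the
    worst-case error e(Q_k), via closed-form Gaussian kernel embeddings."""
    x = np.asarray(nodes, dtype=float)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * ell**2))
    z = ell / np.sqrt(ell**2 + 1.0) * np.exp(-x**2 / (2.0 * (ell**2 + 1.0)))
    w = np.linalg.solve(K, z)                  # exactness for kernel translates
    e2 = ell / np.sqrt(ell**2 + 2.0) - z @ w   # initial error minus z^T K^{-1} z
    return w, np.sqrt(max(e2, 0.0))

# worst-case error decreases as the (nested) node set grows
_, e3 = kernel_quadrature_wce(np.linspace(-3, 3, 3), ell=1.0)
_, e9 = kernel_quadrature_wce(np.linspace(-3, 3, 9), ell=1.0)
```

Since the 3-point node set is contained in the 9-point one, optimality of the kernel quadrature weights guarantees that the worst-case error does not increase.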
A number of convergence results for kernel quadrature rules on compact spaces appear in~\citep{Bezhaev1991,KanagawaSriperumbudurFukumizu2017,BriolProbInt2018}. When it comes to the RKHS of the Gaussian kernel, characterised in~\citep{Steinwart2006,Minh2010}, \citet{KuoWozniakowski2012} have analysed the convergence of the Gauss--Hermite quadrature rule. Unfortunately, it turns out that the Gauss--Hermite rule converges in this space if and only if $\epsilon^2 < 1/2$. Consequently, we believe that the analysis below is the first to establish convergence, under the assumption (supported by our numerical experiments) that the sum of $\abs[0]{\widetilde{w}_{k,n}}$ does not grow too fast, of an explicitly constructed sequence of quadrature rules in the RKHS of the Gaussian kernel with any value of the length-scale parameter. We begin with two simple lemmas. \begin{lemma}\label{lemma:eigFuncBound} The eigenfunctions $\phi_n^\alpha$ admit the bound \begin{equation*} \sup_{n \geq 0} \, \abs[0]{ \phi_n^\alpha(x) } \leq K \sqrt{\beta} \neper^{\alpha^2 x^2/2} \end{equation*} for a constant $K \leq 1.087$ and every $x \in \R$. \end{lemma} \begin{proof} For each $n \geq 0$, the Hermite polynomials obey the bound \begin{equation}\label{eq:HermitePolyBound} \frac{1}{n!} \, \mathrm{H}_n(x)^2 \leq K^2 \neper^{x^2/2} \end{equation} for a constant $K \leq 1.087$~\citep[p.\ 208]{Erdelyi1953}. See~\citep{BonanClark1990} for other such bounds\footnote{In particular, the factor $n^{-1/6}$ could be added on the right-hand side. This would make little difference in the convergence analysis of \Cref{thm:convergence}.}. Thus \begin{equation*} \phi_n^\alpha(x)^2 = \frac{\beta}{n!} \neper^{-2\delta^2 x^2} \mathrm{H}_n \big( \sqrt{2}\alpha\beta x \big)^2 \leq K^2 \beta \exp\big( (\alpha^2\beta^2 - 2\delta^2) x^2 \big) = K^2 \beta \neper^{\alpha^2 x^2}. \end{equation*} \qed \end{proof} \begin{lemma}\label{lemma:ConvConstant} Let $\alpha = 1/\sqrt{2}$.
Then \begin{equation*} \sqrt{ \frac{\epsilon^2}{1/2 + \delta^2 + \epsilon^2} } \neper^{\rho/(2\beta^2)} \in (0, 1) \end{equation*} for every $\ell > 0$ if and only if $\rho \leq 2$. \end{lemma} \begin{proof} The function \begin{equation*} \gamma(\epsilon^2) \coloneqq \frac{\epsilon^2}{1/2 + \delta^2 + \epsilon^2} \neper^{\rho/\beta^2} \end{equation*} satisfies $\gamma(0) = 0$ and $\gamma(\epsilon^2) \to 1$ as $\epsilon^2 \to \infty$. The derivative \begin{equation*} \frac{\dif \gamma(\epsilon^2)}{\dif \epsilon^2} = \frac{4\neper^{\rho/\beta^2} (1 + 4(2-\rho)\epsilon^2) }{(4\epsilon^2 + \beta^2 +1)\beta^3} \end{equation*} is positive when $\rho \leq 2$. For $\rho > 2$, the derivative has a single root at $\epsilon_0^2 = 1/(4(\rho-2))$ so that $\gamma(\epsilon_0^2) > 1$. That is, $\gamma(\epsilon^2) \in (0,1)$, and consequently $\gamma(\epsilon^2)^{1/2} \in (0,1)$, if and only if $\rho \leq 2$. \qed \end{proof} \begin{theorem}\label{thm:convergence} Let $\alpha = 1/\sqrt{2}$. Suppose that the nodes $x_1,\ldots,x_N$ and weights $w_1,\ldots,w_N$ of an $N$-point quadrature rule $Q_N$ satisfy \begin{enumerate} \item $\sum_{n=1}^N \abs[0]{w_{n}} \leq W_N$ for some $W_N \geq 0$; \item $Q_N(\phi_n^\alpha) = \mu(\phi_n^\alpha)$ for each $n = 0,\ldots,M_N-1$ for some $M_N \geq 1$; \item $\sup_{1\leq n \leq N}\abs[0]{x_{n}} \leq 2 \sqrt{M_N} / \beta$. \end{enumerate} Then there exist constants $C_1,C_2 > 0$, independent of $N$ and $Q_N$, and $0 < \eta < 1$ such that \begin{equation*} e(Q_N) \leq (1 + C_1 W_N) C_2 \eta^{M_N}. \end{equation*} Explicit forms of these constants appear in \Cref{eq:ConvConstants}. \end{theorem} \begin{proof} For notational convenience, denote \begin{equation*} \lambda_n^\alpha = \lambda_n = \sqrt{\frac{1/2}{1/2 + \delta^2 + \epsilon^2}} \bigg( \frac{\epsilon^2}{1/2 + \delta^2 + \epsilon^2} \bigg)^{n} = \tau \lambda^n \end{equation*} and $\phi_n = \phi_n^\alpha$. 
Because every $f \in \mathcal{H}$ admits the expansion $f = \sum_{n=0}^\infty \lambda_n \inprod{f}{\phi_n}_\mathcal{H} \phi_n$ and $Q_N(\phi_n) = \mu(\phi_n)$ for $n < M_N$, it follows from the Cauchy--Schwarz inequality and $\norm[0]{\phi_n}_\mathcal{H} = 1/\sqrt{\lambda_n}$ that \begin{equation}\label{eq:truncation-bound} \begin{split} \abs[0]{\mu(f) - Q_N(f)} &= \abs[3]{\sum_{n=M_N}^\infty \lambda_n \inprod{f}{\phi_n}_\mathcal{H} \, [ \mu(\phi_n) - Q_N(\phi_n) ]} \\ &\leq \norm[0]{f}_\mathcal{H} \sum_{n=M_N}^\infty \lambda_n^{1/2} \abs[0]{ \mu(\phi_n) - Q_N(\phi_n) }. \end{split} \end{equation} From \Cref{lemma:eigFuncBound} we have $\abs[0]{\phi_n(x)} \leq K \sqrt{\beta} \neper^{x^2/4}$ for a constant $K \leq 1.087$. Consequently, the assumption $\sup_{1\leq m \leq N} \abs[0]{x_m} \leq 2 \sqrt{M_N}/\beta$ yields \begin{equation*} \sup_{1\leq m \leq N} \, \sup_{n \geq 0} \, \abs[0]{\phi_n(x_m)} \leq K \sqrt{\beta} \neper^{M_N/\beta^2}. \end{equation*} Combining this with Hölder's inequality and $L^2(\mu)$-orthonormality of $\phi_n$, which imply $\mu(\phi_n) \leq \mu(\phi_n^2)^{1/2} = 1$, we obtain the bound \begin{equation}\label{eq:PhiError} \abs[0]{ \mu(\phi_n) - Q_N(\phi_n) } \leq 1 + \sum_{m=1}^N \abs[0]{w_m} \abs[0]{\phi_n(x_m)} \leq 1 + K \sqrt{\beta} W_N \neper^{M_N/\beta^2}.
\end{equation} Inserting this into \Cref{eq:truncation-bound} produces \begin{equation}\label{eq:ConvConstants} \begin{split} \abs[0]{\mu(f) - Q_N(f)} &\leq \norm[0]{f}_\mathcal{H} \big( 1 + W_N K \sqrt{\beta} \neper^{M_N/\beta^2} \big) \sum_{n=M_N}^\infty \lambda_n^{1/2} \\ &= \norm[0]{f}_\mathcal{H} \big( 1 + K \sqrt{\beta} W_N \neper^{M_N/\beta^2} \big) \sqrt{\tau} \sum_{n=M_N}^\infty \lambda^{n/2} \\ &= \norm[0]{f}_\mathcal{H} \big( 1 + K \sqrt{\beta} W_N \neper^{M_N/\beta^2} \big) \frac{\sqrt{\tau}}{1-\sqrt{\lambda}} \lambda^{M_N/2} \\ &\leq \norm[0]{f}_\mathcal{H} \big( 1 + K \sqrt{\beta} W_N \big) \frac{\sqrt{\tau}}{1-\sqrt{\lambda}} \big( \sqrt{\lambda} \neper^{1/\beta^2} \big)^{M_N}. \end{split} \end{equation} Noticing that $\sqrt{\lambda} \neper^{1/\beta^2} < 1$ by \Cref{lemma:ConvConstant} concludes the proof. \qed \end{proof} \begin{remark} From \Cref{lemma:ConvConstant} we observe that the proof does not yield $\eta < 1$ (for every~$\ell$) if the assumption $\sup_{1\leq n \leq N}\abs[0]{x_{n}} \leq 2 \sqrt{M_N} / \beta$ on placement of the nodes is relaxed by replacing the constant $2$ on the right-hand side with $C > 2$. \end{remark} Consider now the $N$-point approximate Gaussian kernel quadrature rule \sloppy{${\widetilde{Q}_{k,N}(f) = \sum_{n=1}^N \widetilde{w}_{k,n} f(\xkgh_n)}$} whose nodes and weights are defined in \Cref{thm:main} and set $\alpha = 1/\sqrt{2}$. The nodes $x_n^\textsm{GH}$ of the $N$-point Gauss--Hermite rule admit the bound~\citep{AreaDimitrovGodoyRonveaux2004} \begin{equation*} \sup_{1 \leq n \leq N} \abs[0]{x_n^\textsm{GH}} \leq 2\sqrt{N-1} \end{equation*} for every $N \geq 1$. That is, \begin{equation*} \abs[0]{\xkgh_n} = \frac{1}{\beta} \abs[0]{x_n^\textsm{GH}} \leq \frac{2\sqrt{N}}{\beta}. \end{equation*} Since the rule $\widetilde{Q}_{k,N}$ is exact for the first $N$ eigenfunctions, $M_N = N$. Hence the assumption on placement of the nodes in \Cref{thm:convergence} holds.
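The cited node bound, and hence the node-placement assumption of \Cref{thm:convergence}, is straightforward to confirm numerically for moderate $N$ (a sanity check, not a proof):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# sup_n |x_n^GH| <= 2*sqrt(N - 1) for the probabilists' Gauss-Hermite nodes
for N in range(2, 41):
    nodes, _ = hermegauss(N)
    assert np.max(np.abs(nodes)) <= 2.0 * np.sqrt(N - 1)
```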
As our numerical experiments indicate that the weights $\widetilde{w}_{k,n}$ are positive and $\sum_{n=1}^N \abs[0]{\widetilde{w}_{k,n}} \to 1$ as $N \to \infty$, it seems that the exponential convergence rate of \Cref{thm:convergence} is valid for $\widetilde{Q}_{k,N}$ (as well as for the corresponding kernel quadrature rule $Q_{k,N}$) with $M_N = N$. Naturally, the result remains valid whenever the growth of the absolute weight sum is, for example, at most polynomial in~$N$. \begin{theorem}\label{thm:ConvergenceSpecific} Let $\alpha = 1/\sqrt{2}$ and suppose that $\sup_{N \geq 1} \sum_{n=1}^N \abs[0]{\widetilde{w}_{k,n}} < \infty$. Then the quadrature rules $\widetilde{Q}_{k,N}(f) = \sum_{n=1}^N \widetilde{w}_{k,n} f(\xkgh_n)$ and $Q_{k,N}(f) = \sum_{n=1}^N w_{k,n} f(\xkgh_n)$ satisfy \begin{equation*} e(Q_{k,N}) \leq e(\widetilde{Q}_{k,N}) = \bigO(\eta^N) \end{equation*} for some $0 < \eta < 1$. \end{theorem} Another interesting case is that of the \emph{generalised Gaussian quadrature rules}\footnote{Note that the cited results are for kernels and functions on compact intervals. However, generalisations for the whole real line are possible~\citep[Chapter VI]{KarlinStudden1966}.} for the eigenfunctions. As the eigenfunctions constitute a complete Chebyshev system~\mbox{\citep{Kellog1918,Pinkus1996}}, there exists a quadrature rule $Q^*_N$ with positive weights $w_1^*,\ldots,w_N^*$ such that $Q_N^*(\phi_n) = \mu(\phi_n)$ for every $n=0,\ldots,2N-1$~\citep{Barrow1978}. Appropriate control of the nodes of these quadrature rules would establish an exponential convergence result with the ``double rate'' $M_N = 2N$. \section{Tensor product rules}\label{sec:tensor} Let $Q_1,\ldots,Q_d$ be quadrature rules on $\R$ with nodes $X_i = \{ x_{i,1},\ldots,x_{i,N_i} \}$ and weights $w_{1}^i,\ldots,w_{N_i}^i$ for each $i=1,\ldots,d$.
The \emph{tensor product rule} on the Cartesian grid \sloppy{${X \coloneqq X_1 \times \cdots \times X_d \subset \R^d}$} is the cubature rule \begin{equation}\label{eq:tensorRule} Q^d(f) \coloneqq (Q_1 \otimes \cdots \otimes Q_d)(f) = \sum_{\mathcal{I} \leq \mathcal{N}} w_\mathcal{I} f(x_\mathcal{I}), \end{equation} where $\mathcal{I} \in \N^d$ is a multi-index, $\mathcal{N} \coloneqq (N_1,\ldots,N_d) \in \N^d$, and the nodes and weights are \begin{equation*} x_\mathcal{I} \coloneqq (x_{1,\mathcal{I}(1)}, \ldots, x_{d,\mathcal{I}(d)}) \in X \hspace{0.5cm} \text{and} \hspace{0.5cm} w_{\mathcal{I}} \coloneqq \prod_{i=1}^d w_{\mathcal{I}(i)}^i. \end{equation*} We equip $\R^d$ with the $d$-variate standard Gaussian measure \begin{equation}\label{eq:measureMultiD} \dif \mu^d(x) \coloneqq (2\pi)^{-d/2} \neper^{-\norm[0]{x}^2/2} \dif x = \prod_{i=1}^d \dif \mu(x_i). \end{equation} The following proposition is a special case of a standard result on the exactness of tensor product rules~\citep[Section 2.4]{Oettershagen2017}. \begin{proposition}\label{prop:tensor} Consider the tensor product rule~\eqref{eq:tensorRule} and suppose that, for each $i=1,\ldots,d$, $Q_i(\phi^i_n) = \mu(\phi^i_n)$ for some functions $\phi^i_1,\ldots,\phi_{N_i}^i \colon \R \to \R$. Then \begin{equation*} Q^d(f) = \mu^d(f) \hspace{0.5cm} \text{for every} \hspace{0.5cm} f \in \lspan \Set[\big]{ \textstyle \prod_{i=1}^d \phi_{\mathcal{I}(i)}^i }{ \mathcal{I} \leq \mathcal{N} }. \end{equation*} \end{proposition} When a multivariate kernel is \emph{separable}, this result can be used to construct kernel cubature rules from kernel quadrature rules. We consider $d$-dimensional separable Gaussian kernels \begin{equation}\label{eq:multiDKernel} k^d(x,y) \coloneqq \exp\bigg( \! - \frac{1}{2} \sum_{i=1}^d \frac{(x_i - y_i)^2}{\ell_i^2} \bigg) = \prod_{i=1}^d \exp\bigg( \! -\frac{(x_i-y_i)^2}{2\ell_i^2} \bigg) \eqqcolon \prod_{i=1}^d k_i(x_i,y_i), \end{equation} where $\ell_i$ are dimension-wise length-scales.
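For a separable kernel, the kernel matrix and the kernel embedding on a Cartesian grid are Kronecker products of their univariate counterparts, so the grid kernel cubature weights factor into products of univariate kernel quadrature weights. A small check of this (a sketch with the same assumed closed-form Gaussian embedding as in the univariate case; the nodes and length-scales are arbitrary illustrative choices):

```python
import numpy as np

def kq_weights(x, ell):
    """Univariate Gaussian-kernel quadrature weights for the standard
    Gaussian measure, via the closed-form kernel embedding."""
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * ell**2))
    z = ell / np.sqrt(ell**2 + 1.0) * np.exp(-x**2 / (2.0 * (ell**2 + 1.0)))
    return np.linalg.solve(K, z)

# two univariate rules with different nodes and length-scales
x1, ell1 = np.linspace(-2.0, 2.0, 4), 0.8
x2, ell2 = np.linspace(-3.0, 3.0, 5), 1.3
w1, w2 = kq_weights(x1, ell1), kq_weights(x2, ell2)

# separable kernel on the Cartesian grid: Kronecker structure
K1 = np.exp(-(x1[:, None] - x1[None, :]) ** 2 / (2.0 * ell1**2))
K2 = np.exp(-(x2[:, None] - x2[None, :]) ** 2 / (2.0 * ell2**2))
z1 = ell1 / np.sqrt(ell1**2 + 1.0) * np.exp(-x1**2 / (2.0 * (ell1**2 + 1.0)))
z2 = ell2 / np.sqrt(ell2**2 + 1.0) * np.exp(-x2**2 / (2.0 * (ell2**2 + 1.0)))
w_grid = np.linalg.solve(np.kron(K1, K2), np.kron(z1, z2))

# the bivariate kernel cubature weights are products of univariate weights
assert np.allclose(w_grid, np.kron(w1, w2))
```

The assertion holds because $(A \otimes B)^{-1}(u \otimes v) = (A^{-1}u) \otimes (B^{-1}v)$, which is exactly the factorisation established for tensor product kernel rules below.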
For each $i=1,\ldots,d$, the kernel quadrature rule $Q_{k,i}$ with nodes $X_i = \{ x_{i,1},\ldots,x_{i,N_i} \} $ and weights $w_{k,1}^i,\ldots,w_{k,N_i}^i$ is, by definition, exact for the $N_i$ kernel translates at the nodes: \begin{equation*} Q_{k,i}\big(k_i(x_{i,n}, \cdot)\big) = \mu\big(k_i(x_{i,n}, \cdot)\big) \end{equation*} for each $n=1,\ldots,N_i$. \Cref{prop:tensor} implies that the $d$-dimensional kernel cubature rule $Q_k^d$ at the nodes $X = X_1 \times \cdots \times X_d$ is a tensor product of the univariate rules: \begin{equation}\label{eq:tensorKQ} Q_k^d(f) = (Q_{k,1} \otimes \cdots \otimes Q_{k,d})(f) \eqqcolon \sum_{\mathcal{I} \leq \mathcal{N}} w_{k,\mathcal{I}} f(x_\mathcal{I}), \end{equation} with the weights being products of univariate Gaussian kernel quadrature weights, \sloppy{${w_{k,\mathcal{I}} = \prod_{i=1}^d w_{k,\mathcal{I}(i)}^i}$}. This is the case because each kernel translate $k^d(x,\cdot)$, $x \in X$, can be written as \begin{equation*} k^d(x, \cdot) = \prod_{i=1}^d k_i(x_i, \cdot) \end{equation*} by separability of $k^d$. We can extend \Cref{thm:main} to higher dimensions if the node set is a Cartesian product of a number of scaled Gauss--Hermite node sets. For this purpose, for each $i=1,\ldots,d$ we use the $L^2(\mu_{\alpha_i})$-orthonormal eigendecomposition of the Gaussian kernel $k_i$. The eigenfunctions, eigenvalues, and other related constants from \Cref{sec:eigendecomposition} for the eigendecomposition of the $i$th kernel are assigned an analogous subscript. Furthermore, we use the notation \begin{equation*} \lambda_\mathcal{I} \coloneqq \prod_{i=1}^d \lambda_{\mathcal{I}(i)}^{\alpha_i} \hspace{0.5cm} \text{and} \hspace{0.5cm} \phi_\mathcal{I}(x) = \prod_{i=1}^d \phi_{\mathcal{I}(i)}^{\alpha_i}(x_i).
\end{equation*} \begin{theorem}\label{thm:mainMultiD} For $i=1,\ldots,d$, let $x_{i,1}^\textsm{GH},\ldots,x_{i,N_i}^\textsm{GH}$ and $w_{i,1}^\textsm{GH}, \ldots, w_{i,N_i}^\textsm{GH}$ stand for the nodes and weights of the $N_i$-point Gauss--Hermite quadrature rule and define the nodes \begin{equation}\label{eq:multiDNdoes} \xkgh_{i,n} \coloneqq \frac{1}{\sqrt{2} \alpha_i \beta_i} x_{i,n}^\textsm{GH}. \end{equation} Then the weights of the tensor product quadrature rule \begin{equation*} \widetilde{Q}_k^d(f) \coloneqq \sum_{\mathcal{I} \leq \mathcal{N}} \wkgh_{k,\mathcal{I}} f(\xkgh_{\mathcal{I}}), \end{equation*} defined by the exactness conditions $\widetilde{Q}_k^d(\phi_\mathcal{I}) = \mu^d(\phi_\mathcal{I})$ for every $\mathcal{I} \leq \mathcal{N}$, are \sloppy{${\widetilde{w}_{k,\mathcal{I}} = \prod_{i=1}^d \widetilde{w}_{k,\mathcal{I}(i)}^i}$} for \begin{equation*} \widetilde{w}_{k,n}^i = \bigg(\frac{1}{1+2\delta_i^2}\bigg)^{1/2} w_{i,n}^\textsm{GH} \neper^{\delta_i^2 \xkgh_{i,n}^2} \sum_{m=0}^{\floor{(N_i-1)/2}} \frac{1}{2^m m! } \bigg( \! \frac{2\alpha_i^2 \beta_i^2}{1+2\delta_i^2} - 1 \bigg)^m \mathrm{H}_{2m}(x_{i,n}^\textsm{GH}), \end{equation*} where $\alpha_i$, $\beta_i$, and $\delta_i$ are defined in \Cref{eq:constants} and $\mathrm{H}_{2m}$ are the probabilists' Hermite polynomials~\eqref{eq:hermite}. \end{theorem} As in one dimension, the weights $\wkgh_{k,\mathcal{I}}$ are supposed to approximate $w_{k,\mathcal{I}}$. Moreover, convergence rates can be obtained: tensor product analogues of~\Cref{thm:convergence,thm:ConvergenceSpecific} follow from noting that every function $f \colon \R^d \to \R$ in the RKHS $\mathcal{H}^d$ of $k^d$ admits the multivariate Mercer expansion \begin{equation*} f(x) = \sum_{ \mathcal{I} \geq 0 } \lambda_\mathcal{I} \inprod{f}{\phi_\mathcal{I}}_{\mathcal{H}^d} \phi_\mathcal{I}(x).
\end{equation*} See~\citep{KuoSloanWozniakowski2017} for similar convergence analysis of tensor product Gauss--Hermite rules in~$\mathcal{H}^d$. \begin{theorem} Let $\alpha_1 = \cdots = \alpha_d = 1/\sqrt{2}$. Suppose that the nodes $x_{i,1},\ldots,x_{i,N_i}$ and weights $w_{1}^i,\ldots,w_{N_i}^i$ of the $N_i$-point quadrature rules $Q_{1,N_1},\ldots,Q_{d,N_d}$ satisfy \begin{enumerate} \item $\sup_{1\leq i \leq d} \sum_{n=1}^{N_i} \abs[0]{w_{n}^i} \leq W_\mathcal{N}$ for some $W_\mathcal{N} \geq 1$; \item $Q_{i,N_i}(\phi_n^\alpha) = \mu(\phi_n^\alpha)$ for each $n = 0,\ldots,M_{N_i}-1$ and $i=1,\ldots,d$ for some $M_{N_i} \geq 1$; \item $\sup_{1\leq n \leq {N_i}}\abs[0]{x_{i,n}} \leq 2 \sqrt{M_{N_i}} / \beta$ for each $i=1,\ldots,d$. \end{enumerate} Define the tensor product rule \begin{equation*} Q_\mathcal{N}^d = Q_{1,N_1} \otimes \cdots \otimes Q_{d,N_d}. \end{equation*} Then there exist constants $C > 0$, independent of $\mathcal{N}$ and $Q_\mathcal{N}^d$, and $0 < \eta < 1$ such that \begin{equation*} e(Q_\mathcal{N}^d) \leq C W_\mathcal{N}^d \eta^{M}, \end{equation*} where $M = \min(M_{N_1},\ldots,M_{N_d})$. Explicit forms of $C$ and $\eta$ appear in \Cref{eq:ConvConstantsMulti}. \end{theorem} \begin{proof} The proof is largely analogous to that of \Cref{thm:convergence}. Since $f \in \mathcal{H}^d$ can be written as \begin{equation*} f = \sum_{\mathcal{I} \geq 0} \lambda_\mathcal{I} \inprod{f}{\phi_\mathcal{I}}_{\mathcal{H}^d} \phi_{\mathcal{I}}, \end{equation*} by defining the index set \begin{equation*} \mathcal{A}_\mathcal{M} \coloneqq \Set[\big]{\mathcal{I} \in \N^d}{\mathcal{I}(i) \geq M_{N_i} \text{ for at least one $i \in \{1,\ldots,d\}$} } \subset \N^d \end{equation*} we obtain \begin{equation*} \abs[0]{\mu^d(f) - Q_\mathcal{N}^d(f)} = \abs[3]{\sum_{\mathcal{I} \in \mathcal{A}_{\mathcal{M}}} \lambda_\mathcal{I} \inprod{f}{\phi_\mathcal{I}}_{\mathcal{H}^d} \big[ \mu^d(\phi_{\mathcal{I}}) - Q_\mathcal{N}^d(\phi_{\mathcal{I}}) \big] }. 
\end{equation*} Consequently, the Cauchy--Schwarz inequality yields \begin{equation}\label{eq:MultiFBound} \begin{split} \abs[0]{\mu^d(f) - Q_\mathcal{N}^d(f)} &\leq \norm[0]{f}_{\mathcal{H}^d} \sum_{\mathcal{I} \in \mathcal{A}_{\mathcal{M}}} \lambda_\mathcal{I}^{1/2} \abs[1]{ \mu^d(\phi_{\mathcal{I}}) - Q_\mathcal{N}^d(\phi_{\mathcal{I}}) } \\ &= \norm[0]{f}_{\mathcal{H}^d} \tau^{d/2} \sum_{\mathcal{I} \in \mathcal{A}_{\mathcal{M}}} \lambda^{\abs[0]{\mathcal{I}}/2} \abs[1]{ \mu^d(\phi_{\mathcal{I}}) - Q_\mathcal{N}^d(\phi_{\mathcal{I}}) }, \end{split} \end{equation} where we again use the notation \begin{equation*} \tau = \sqrt{ \frac{1/2}{1/2+\delta^2 + \epsilon^2} } \hspace{1cm} \text{ and } \hspace{1cm} \lambda = \frac{\epsilon^2}{1/2 + \delta^2 + \epsilon^2}. \end{equation*} Since $\mu(\phi_n) \leq 1$ for any $n \geq 0$, integration error for the eigenfunction $\phi_\mathcal{I}$ satisfies \begin{equation}\label{eq:MultiD-rec1} \begin{split} \abs[1]{ \mu^d(\phi_{\mathcal{I}}) - Q_\mathcal{N}^d(\phi_{\mathcal{I}}) } \hspace{-2cm}& \\ ={}& \abs[3]{ \prod_{i=1}^d \mu( \phi_{\mathcal{I}(i)} ) - \prod_{i=1}^d Q_{i,N_i}( \phi_{\mathcal{I}(i)} ) } \\ ={}& \Bigg\lvert \big[ \mu(\phi_{\mathcal{I}(d)}) - Q_{d,N_d}(\phi_{\mathcal{I}(d)}) \big] \prod_{i=1}^{d-1} \mu( \phi_{\mathcal{I}(i)} ) \bigg. \\ & \bigg. + Q_{d,N_d}(\phi_{\mathcal{I}(d)}) \bigg( \prod_{i=1}^{d-1} \mu( \phi_{\mathcal{I}(i)} ) - \prod_{i=1}^{d-1} Q_{i,N_i}( \phi_{\mathcal{I}(i)} ) \bigg) \Bigg\rvert \\ \leq{}& \abs[1]{\mu(\phi_{\mathcal{I}(d)}) - Q_{d,N_d}(\phi_{\mathcal{I}(d)})} \\ &+ \abs[1]{Q_{d,N_d}(\phi_{\mathcal{I}(d)})} \abs[3]{ \prod_{i=1}^{d-1} \mu( \phi_{\mathcal{I}(i)} ) - \prod_{i=1}^{d-1} Q_{i,N_i}( \phi_{\mathcal{I}(i)} ) }. 
\end{split} \end{equation} \rev{ Define the index sets $\mathcal{B}_\mathcal{M}^j(\mathcal{I}) = \Set{ j \leq i \leq d }{ \mathcal{I}(i) \geq M_{N_i}}$ and their cardinalities $b_\mathcal{M}^j(\mathcal{I}) = \#\mathcal{B}_\mathcal{M}^j(\mathcal{I}) \leq d-j+1$ for $j \geq 1$. Because $\abs[0]{\mu(\phi_{\mathcal{I}(i)}) - Q_{i,N_i}(\phi_{\mathcal{I}(i)})} = 0$ and $\abs[0]{Q_{i,N_{i}}(\phi_{\mathcal{I}(i)})} = \abs[0]{\mu(\phi_{\mathcal{I}(i)})} \leq 1$ if $\mathcal{I}(i) < M_{N_i}$, expansion of the recursive inequality~\eqref{eq:MultiD-rec1} gives \begin{equation}\label{eq:Multi-rec-expanded} \begin{split} \abs[1]{ \mu^d(\phi_{\mathcal{I}}) - Q_\mathcal{N}^d(\phi_{\mathcal{I}}) } \hspace{-2cm}& \\ &\leq \sum_{i=1}^d \abs[1]{\mu(\phi_{\mathcal{I}(i)}) - Q_{i,N_i}(\phi_{\mathcal{I}(i)})} \prod_{j=i+1}^{d} \abs[1]{Q_{j,N_{j}}(\phi_{\mathcal{I}(j)})} \\ &= \sum_{i \in \mathcal{B}_\mathcal{M}^1(\mathcal{I})} \abs[1]{\mu(\phi_{\mathcal{I}(i)}) - Q_{i,N_i}(\phi_{\mathcal{I}(i)})} \prod_{j=i+1}^{d} \abs[1]{Q_{j,N_{j}}(\phi_{\mathcal{I}(j)})} \\ &\leq \sum_{i \in \mathcal{B}_\mathcal{M}^1(\mathcal{I})} \abs[1]{\mu(\phi_{\mathcal{I}(i)}) - Q_{i,N_i}(\phi_{\mathcal{I}(i)})} \prod_{j \in \mathcal{B}_\mathcal{M}^{i+1}(\mathcal{I})} \abs[1]{Q_{j,N_{j}}(\phi_{\mathcal{I}(j)})}. 
\end{split} \end{equation} \Cref{eq:PhiError} provides the bounds \sloppy{${\abs[0]{\mu(\phi_{\mathcal{I}(i)}) - Q_{i,N_i}(\phi_{\mathcal{I}(i)})} \leq 1 + K \sqrt{\beta} W_{\mathcal{N}} \neper^{M_{N_i}/\beta^2}}$} and $\abs[0]{Q_{i,N_i}(\phi_{\mathcal{I}(i)})} \leq K \sqrt{\beta} W_\mathcal{N} \neper^{M_{N_i}/\beta^2}$ for the constant $K = 1.087$ that, when plugged in \Cref{eq:Multi-rec-expanded}, yield \begin{equation}\label{eq:MultiPhiIBound} \begin{split} \abs[1]{ \mu^d(\phi_{\mathcal{I}}) - Q_\mathcal{N}^d(\phi_{\mathcal{I}}) } \hspace{-2.8cm}& \\ &\leq \sum_{i \in \mathcal{B}_\mathcal{M}^1(\mathcal{I})} \big( 1 + K \sqrt{\beta} W_{\mathcal{N}} \neper^{M_{N_i}/\beta^2} \big) \prod_{j \in \mathcal{B}_\mathcal{M}^{i+1}(\mathcal{I})} K \sqrt{\beta} W_\mathcal{N} \neper^{M_{N_j}/\beta^2} \\ &= \sum_{i \in \mathcal{B}_\mathcal{M}^1(\mathcal{I})} \big( 1 + K \sqrt{\beta} W_{\mathcal{N}} \neper^{M_{N_i}/\beta^2} \big) \big(K \sqrt{\beta} W_\mathcal{N} \big)^{b_\mathcal{M}^{i+1}(\mathcal{I})} \exp\Bigg( \frac{1}{\beta^2} \sum_{ j \in \mathcal{B}_\mathcal{M}^{i+1}(\mathcal{I}) } M_{N_j} \Bigg) \\ &\leq 2 \sum_{i \in \mathcal{B}_\mathcal{M}^1(\mathcal{I})} \big(K \sqrt{\beta} W_\mathcal{N} \big)^{b_\mathcal{M}^{i}(\mathcal{I})} \exp\Bigg( \frac{1}{\beta^2} \sum_{ j \in \mathcal{B}_\mathcal{M}^{i}(\mathcal{I}) } M_{N_j} \Bigg), \end{split} \end{equation} where the last inequality is based on the facts that $i \in \mathcal{B}_\mathcal{M}^i(\mathcal{I})$ if \sloppy{${i \in \mathcal{B}_\mathcal{M}^1(\mathcal{I})}$} and \sloppy{${1 + K \sqrt{\beta} W_{\mathcal{N}} \neper^{M_{N_i}/\beta^2} \leq 2 K \sqrt{\beta} W_{\mathcal{N}} \neper^{M_{N_i}/\beta^2}}$}, a consequence of \sloppy{${K, \beta, W_\mathcal{N} \geq 1}$}. 
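As a quick numerical sanity check (not part of the original argument), the constants $\tau$ and $\lambda$ defined above, together with the geometric rate $\eta = \sqrt{\lambda}\,\neper^{1/\beta^2}$ that closes the proof, can be evaluated directly from their definitions. A minimal Python sketch; the parameter values below are hypothetical examples, not values used in the paper:

```python
import math

def mercer_constants(delta, eps, beta):
    """Evaluate tau = sqrt((1/2)/(1/2 + delta^2 + eps^2)),
    lam = eps^2/(1/2 + delta^2 + eps^2) and the geometric rate
    eta = sqrt(lam) * exp(1/beta^2) from the proof."""
    denom = 0.5 + delta ** 2 + eps ** 2
    tau = math.sqrt(0.5 / denom)
    lam = eps ** 2 / denom
    eta = math.sqrt(lam) * math.exp(1.0 / beta ** 2)
    return tau, lam, eta

# Hypothetical example parameters: lam < 1 always holds, but eta < 1
# only when eps is small enough relative to beta.
tau, lam, eta = mercer_constants(0.0, 0.3, 1.5)
```

One sees that $\lambda < 1$ and $\tau \leq 1$ for any admissible parameters, while the requirement $\eta < 1$ constrains the relation between $\epsilon$ and $\beta$.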
\Cref{eq:MultiPhiIBound,eq:MultiFBound}, together with \Cref{lemma:ConvConstant}, now yield \begin{align*} \lvert &\mu^d(f) - Q_\mathcal{N}^d(f) \rvert \\ &\leq 2 \norm[0]{f}_{\mathcal{H}^d} \tau^{d/2} \sum_{\mathcal{I} \in \mathcal{A}_{\mathcal{M}}} \lambda^{\abs[0]{\mathcal{I}}/2} \sum_{ i \in \mathcal{B}_\mathcal{M}^1(\mathcal{I})} \big(K \sqrt{\beta} W_\mathcal{N} \big)^{b_\mathcal{M}^i(\mathcal{I})} \exp\Bigg( \frac{1}{\beta^2} \sum_{ j \in \mathcal{B}_\mathcal{M}^i(\mathcal{I}) } M_{N_j} \Bigg) \\ &\leq 2 \norm[0]{f}_{\mathcal{H}^d} \tau^{d/2} \big(K \sqrt{\beta} W_\mathcal{N} \big)^d \sum_{\mathcal{I} \in \mathcal{A}_{\mathcal{M}}} \lambda^{\abs[0]{\mathcal{I}}/2} \sum_{ i \in \mathcal{B}_\mathcal{M}^1(\mathcal{I})} \exp\Bigg( \frac{1}{\beta^2} \sum_{ j \in \mathcal{B}_\mathcal{M}^i(\mathcal{I}) } M_{N_j} \Bigg) \\ &\leq 2d \norm[0]{f}_{\mathcal{H}^d} \tau^{d/2} \big(K \sqrt{\beta} W_\mathcal{N} \big)^d \sum_{\mathcal{I} \in \mathcal{A}_{\mathcal{M}}} \lambda^{\abs[0]{\mathcal{I}}/2} \neper^{\abs[0]{\mathcal{I}} / \beta^2} \\ &= 2d \norm[0]{f}_{\mathcal{H}^d} \big(K \sqrt{\tau \beta} W_\mathcal{N} \big)^d \sum_{\mathcal{I} \in \mathcal{A}_{\mathcal{M}}} \big(\sqrt{\lambda} \neper^{1/\beta^2}\big)^{\abs[0]{\mathcal{I}}} \\ &\leq 2d \norm[0]{f}_{\mathcal{H}^d} \big(K \sqrt{\tau \beta} W_\mathcal{N} \big)^d \big(\sqrt{\lambda} \neper^{1/\beta^2}\big)^M \sum_{\mathcal{I} \geq 0} \big(\sqrt{\lambda} \neper^{1/\beta^2}\big)^{\abs[0]{\mathcal{I}}} \\ &= 2d \norm[0]{f}_{\mathcal{H}^d} \big(K \sqrt{\tau \beta} W_\mathcal{N} \big)^d \big(\sqrt{\lambda} \neper^{1/\beta^2}\big)^M \bigg( \frac{1}{1-\sqrt{\lambda} \neper^{1/\beta^2}} \bigg)^d. \end{align*} The claim therefore holds with \begin{equation} \label{eq:ConvConstantsMulti} C = 2d \bigg( \frac{K \sqrt{\tau \beta} }{1-\sqrt{\lambda} \neper^{1/\beta^2}} \bigg)^d \quad \text{ and } \quad \eta = \sqrt{\lambda} \neper^{1/\beta^2} < 1. 
\end{equation} } \qed \end{proof} A multivariate version of \Cref{thm:ConvergenceSpecific} follows immediately. \section{Numerical experiments}\label{sec:numsim} This section contains numerical experiments on the properties and accuracy of the approximate Gaussian kernel quadrature weights defined in \Cref{thm:main,thm:mainMultiD}. The experiments have been implemented in MATLAB, and they are available at \texttt{https://github.com/tskarvone/gauss-mercer}. The value $\alpha = 1/\sqrt{2}$ is used in all experiments. The experiments indicate that \begin{enumerate} \item Computation of the approximate weights in \Cref{eq:approximation} is numerically stable. \item The weight approximation is quite accurate, its accuracy increasing with the number of nodes and the length-scale, as predicted in \Cref{sec:lengthscale}. \item The weights $w_{k,n}$ and $\widetilde{w}_{k,n}$ are positive for every $N$ and $n=1,\ldots,N$, and their sums converge to one exponentially in $N$. \item The quadrature rule $\widetilde{Q}_{k}$ converges exponentially, as implied by \Cref{thm:ConvergenceSpecific} and empirical observations on the behaviour of its weights. \item In numerical integration of specific functions, the approximate kernel quadrature rule $\widetilde{Q}_{k}$ can achieve integration accuracy almost indistinguishable from that of the corresponding Gaussian kernel quadrature rule $Q_{k}$ and superior to some more traditional alternatives. \end{enumerate} This suggests that \Cref{eq:approximation} can be used as an accurate and numerically stable surrogate for computing the Gaussian kernel quadrature weights when the naive approach based on solving the linear system~\eqref{eq:linSys} is precluded by ill-conditioning of the kernel matrix. Furthermore, the choice~\eqref{eq:nodes} of the nodes by scaling the Gauss--Hermite nodes appears to yield an exponentially convergent kernel quadrature rule that has positive weights. \begin{figure}[t!]
\centering \includegraphics[width=\textwidth]{fig-stability.pdf} \caption{\rev{Absolute kernel quadrature weights, as computed directly from the linear system~\eqref{eq:linSys}, and the approximate weights~\eqref{eq:approximation} for $N = 99$, nodes $\tilde{x}_{k,n}$, and three different length-scales. Red is used to indicate those of $w_{k,n}$ that are negative. The nodes are in ascending order, so by symmetry it is sufficient to display weights only for $n=1,\ldots,50$ (in fact, $w_{k,n}$ are not necessarily numerically symmetric; see \Cref{sec:numsim-approximation}). The Gauss--Hermite nodes and weights were computed using the Golub--Welsch algorithm~\citep[Section~3.1.1.1]{Gautschi2004} and MATLAB's variable precision arithmetic. \Cref{eq:approximation} did not present any numerical issues as the sum, which can contain both positive and negative terms, was always dominated by the positive terms and all its terms were of reasonable magnitude.}}\label{fig:stability} \end{figure} \subsection{Numerical stability and distribution of weights}\label{sec:numsim-stability} \rev{ We have not encountered any numerical issues when computing the approximate weights~\eqref{eq:approximation}. In this example we set $N=99$ and examine the distribution of approximate weights $\widetilde{w}_{k,n}$ for $\ell = 0.05$, $\ell = 0.4$ and $\ell = 4$. \Cref{fig:stability} depicts (i) approximate weights $\widetilde{w}_{k,n}$, (ii) absolute kernel quadrature weights $\abs[0]{w_{k,n}}$ obtained by solving the linear system~\eqref{eq:linSys} for the points $\tilde{x}_{n}$ and, for $\ell = 4$, (iii) Gauss--Hermite weights $w_n^\textsm{GH}$. The approximate weights $\widetilde{w}_{k,n}$ display no signs of numerical instabilities; their magnitudes vary smoothly and all of them are positive. 
That $\widetilde{w}_{k,1} > \widetilde{w}_{k,2}$ for $\ell = 0.05$ appears to be caused by the sum in \Cref{eq:approximation} having not converged yet: the constant $2\alpha^2\beta^2/(1+2\delta^2) - 1$, which controls the rate of convergence of this sum, converges to $1$ as $\ell \to 0$ (in this case its value is $0.9512$) and $H_{2m}(x_1^\textsm{GH}) > 0$ for every $m = 1,\ldots,49$ while $H_{2m}(x_n^\textsm{GH}) < 0$ for $m = 46,47,48,49$. This and further experiments in \Cref{sec:numsim-approximation} merely illustrate that the quality of the weight approximation deteriorates when $\ell$ is small---as predicted in \Cref{sec:lengthscale}. The behaviour of $\widetilde{w}_{k,n}$ is in stark contrast to the naively computed weights $w_{k,n}$, which display clear signs of numerical instabilities for $\ell = 0.4$ and $\ell = 4$ (condition numbers of the kernel matrices were roughly $2.66 \times 10^{16}$ and $3.59 \times 10^{18}$). Finally, the case $\ell = 4$ provides further evidence for the numerical stability of \Cref{eq:approximation} since, based on \Cref{sec:lengthscale}, $\widetilde{w}_{k,n} \to w_n^\textsm{GH}$ as $\ell \to \infty$ and, furthermore, there is reason to believe that $w_{k,n}$ would share this property if they were computed in arbitrary-precision arithmetic. \Cref{sec:numsim-positivity} and the experiments reported by \citet{FasshauerMcCourt2012} provide additional evidence for the numerical stability of \Cref{eq:approximation}. } \subsection{Accuracy of the weight approximation}\label{sec:numsim-approximation} \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{fig-approx.pdf} \caption{Relative weight approximation error~\eqref{eq:RelErr} for different length-scales.}\label{fig:accuracy} \end{figure} Next we assess the quality of the weight approximation $\widetilde{w}_{k} \approx w_{k}$.
\Cref{fig:accuracy} depicts the results for a number of different length-scales in terms of the norm of the relative weight error, \begin{equation}\label{eq:RelErr} \sqrt{ \sum_{n=1}^N \bigg( \frac{w_{k,n} - \widetilde{w}_{k,n}}{w_{k,n}} \bigg)^2 }. \end{equation} As the kernel matrix quickly becomes ill-conditioned, computation of the kernel quadrature weights $w_{k}$ is challenging, particularly when the length-scale is large. To partially mitigate the problem we replaced the kernel quadrature weights with their QR decomposition approximations $\widetilde{w}_k^M$ derived in \Cref{sec:qr}. The truncation length~$M$ was selected based on machine precision; see \citep[Section 4.2.2]{FasshauerMcCourt2012} for details. Yet even this does not work for large enough $N$. Because kernel quadrature rules on symmetric point sets have symmetric weights~\citep[Section 5.2.4]{KarvonenSarkka2018,Oettershagen2017}, breakdown in symmetry of the computed kernel quadrature weights was used as a heuristic proxy for the emergence of numerical instability: for each length-scale, relative errors are presented in \Cref{fig:accuracy} until the first $N$ such that $\abs[0]{1-w_{k,N}/w_{k,1}} > 10^{-6}$, the nodes being ordered from smallest to largest so that $w_{k,N} = w_{k,1}$ in the absence of numerical errors. \subsection{Properties of the weights}\label{sec:numsim-positivity} \Cref{fig:weights} shows the minimal weights $\min_{n=1,\ldots,N} \widetilde{w}_{k,n}$ and convergence to one of $\sum_{n=1}^N \abs[0]{\widetilde{w}_{k,n}}$ for a number of different length-scales. These results provide strong numerical evidence for the conjecture that $\widetilde{w}_{k,n}$ remain positive and that the assumptions of \Cref{thm:ConvergenceSpecific} hold. Exact weights, as long as they can be reliably computed (see \Cref{sec:numsim-approximation}), exhibit behaviour practically indistinguishable from the approximate ones and are therefore not depicted separately in \Cref{fig:weights}.
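For reference, the relative-error measure \eqref{eq:RelErr} and the symmetry-breakdown heuristic described above amount to only a few lines of code. The following Python sketch uses made-up weight vectors purely for illustration:

```python
import numpy as np

def relative_weight_error(w, w_tilde):
    # Norm of the relative weight error, cf. Eq. (RelErr):
    # sqrt( sum_n ((w_n - w~_n) / w_n)^2 )
    w = np.asarray(w, dtype=float)
    w_tilde = np.asarray(w_tilde, dtype=float)
    return float(np.sqrt(np.sum(((w - w_tilde) / w) ** 2)))

def symmetry_broken(w, tol=1e-6):
    # Heuristic proxy for numerical instability: with the nodes in
    # ascending order, symmetry dictates w_N = w_1, so flag the rule
    # once |1 - w_N / w_1| exceeds the tolerance.
    return abs(1.0 - w[-1] / w[0]) > tol
```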
\begin{figure}[t!] \centering \includegraphics[width=\textwidth]{fig-weights.pdf} \caption{Minimal weights and convergence to one of the sum of absolute values of the weights for six different length-scales.}\label{fig:weights} \end{figure} \subsection{Worst-case error}\label{sec:experiment-wce} The worst-case error $e(Q)$ of a quadrature rule $Q(f) = \sum_{n=1}^N w_n f(x_n)$ in a reproducing kernel Hilbert space induced by the kernel $k$ is explicitly computable: \begin{equation}\label{eq:WCE-explicit} e(Q)^2 = \mu(k_\mu) + \sum_{n,m = 1}^N w_n w_m k(x_n,x_m) - 2 \sum_{n=1}^N w_n k_\mu(x_n). \end{equation} \Cref{fig:wce} compares the worst-case errors in the RKHS of the Gaussian kernel for six different length-scales of (i) the classical Gauss--Hermite quadrature rule, (ii) the quadrature $\widetilde{Q}_{k}(f) = \sum_{n=1}^N \widetilde{w}_{k,n} f(\tilde{x}_n)$ of \Cref{thm:main}, and (iii) the kernel quadrature rule with its nodes placed uniformly between the largest and smallest of $\tilde{x}_n$. We observe that $\widetilde{Q}_{k}$ is, for all length-scales, the fastest of these rules to converge (the kernel quadrature rule at $\tilde{x}_n$ yields WCEs practically indistinguishable from those of $\widetilde{Q}_{k}$ and is therefore not included). It also becomes apparent that the convergence rates derived in \Cref{thm:convergence,thm:ConvergenceSpecific} for $\widetilde{Q}_{k}$ are rather conservative. For example, for $\ell = 0.2$ and $\ell = 1$ the empirical rates are $e(\widetilde{Q}_{k}) = \mathcal{O}(\neper^{-c N})$ with $c \approx 0.21$ and $c \approx 0.98$, respectively, whereas \Cref{eq:ConvConstants} yields the theoretical values $c \approx 0.00033$ and $c \approx 0.054$, respectively. \begin{figure}[t!]
\centering \includegraphics[width=\textwidth]{fig-wce.pdf} \caption{Worst-case errors~\eqref{eq:WCE-explicit} in the Gaussian RKHS as functions of the number of nodes of the quadrature rule of \Cref{thm:main} (SGHKQ), the kernel quadrature rule with nodes placed uniformly between the largest and smallest of $\tilde{x}_n$ (UKQ), and the Gauss--Hermite rule (GH). WCEs are displayed until the square root of floating-point relative accuracy ($\approx 1.4901 \times 10^{-8}$) is reached.}\label{fig:wce} \end{figure} \subsection{Numerical integration} Set $\ell = 1.2$ and consider the integrand \begin{equation}\label{eq:test-function} f(x) = \prod_{i=1}^d \exp\bigg( \! -\frac{c_i x_i^2}{2\ell^2} \bigg) x_i^{m_i}. \end{equation} When $0 < c_i < 4$ and $m_i \in \N$ for each $i=1,\ldots,d$, the function is in $\mathcal{H}^d$~\citep[Theorems 1 and 3]{Minh2010}. Furthermore, the Gaussian integral of this function is available in closed form: \begin{equation*} (2\pi)^{-d/2} \int_{\R^d} f(x) \neper^{-\norm[0]{x}^2/2} \dif x = \prod_{i=1}^d \frac{m_i!}{2^{m_i/2} (m_i/2)!} \bigg( \frac{\ell}{\sqrt{c_i}} \bigg)^{m_i+1} \bigg( \frac{1}{1+\ell^2/c_i}\bigg)^{(m_i+1)/2} \end{equation*} when all $m_i$ are even (when some $m_i$ is odd, the integral is zero by symmetry). \Cref{fig:int} shows the integration error of the three methods (or, in higher dimensions, their tensor product versions) used in \Cref{sec:experiment-wce} and the kernel quadrature rule based on the nodes $\tilde{x}_n$ for (i) $d=1$, $m_1 = 6$, $c_1 = 3/2$ and (ii) $d=3$, $m_1=6$, $m_2 = 4$, $m_3 = 2$, $c_1 = 3/2$, $c_2 = 3$, $c_3 = 1/2$. As expected, there is little difference between $\widetilde{Q}_{k}$ and $Q_{k}$. \begin{figure}[t!]
\centering \includegraphics[width=\textwidth]{fig-integration.pdf} \caption{Error in computing the Gaussian integral of the function~\eqref{eq:test-function} in dimensions one and three using the quadrature rule of \Cref{thm:main} (SGHKQ), the corresponding kernel quadrature rule (KQ), the kernel quadrature rule with nodes placed uniformly between the largest and smallest of $\tilde{x}_n$ (UKQ), and the Gauss--Hermite rule (GH). Tensor product versions of these rules are used in dimension three.}\label{fig:int} \end{figure} \bibliographystyle{apa} \providecommand{\BIBYu}{Yu}
https://arxiv.org/abs/1905.01605
Nitsche's method for a Robin boundary value problem in a smooth domain
We prove several optimal-order error estimates for a finite-element method applied to an inhomogeneous Robin boundary value problem (BVP) for the Poisson equation defined in a smooth bounded domain in $\mathbb{R}^n$, $n=2,3$. The boundary condition is weakly imposed using Nitsche's method. The Robin BVP is interpreted as the classical penalty method with the penalty parameter $\varepsilon$. The optimal choice of the mesh size $h$ relative to $\varepsilon$ is a non-trivial issue. This paper carefully examines the dependence of $\varepsilon$ on error estimates. Our error estimates require no unessential regularity assumptions on the solution. Numerical examples are also reported to confirm our results.
\section{Introduction} This paper presents several optimal-order error estimates for a finite-element method (FEM) applied to an inhomogeneous Robin boundary value problem for the Poisson equation defined in a smooth bounded domain in $\mathbb{R}^n$, $n=2,3$. The boundary condition is weakly imposed using Nitsche's method \cite{MR0341903}. The case of a polyhedral domain has already been addressed in \cite{MR2501054}; this paper is a generalization of \cite{MR2501054} to a smooth domain. Moreover, we also evaluate the symmetric interior penalty (SIP) discontinuous Galerkin (DG) method. The motivation for this study is discussed in detail below. The boundary condition is an indispensable component of a well-posed problem for partial differential equations (PDEs). In the field of scientific computation, significant attention should be paid to the imposition of boundary conditions, although this task is sometimes regarded as simple and unambiguous. The Neumann boundary condition is naturally formulated in the variational equation and is handled directly in the FEM. By contrast, the specification of the Dirichlet boundary condition (DBC) needs discussion. In a traditional FEM, including the continuous $\mathcal{P}^k$ FEM, the DBC is simply imposed by specifying the nodal values at boundary nodal points. Meanwhile, the penalty method and Nitsche's method for the DBC provide reformulations of the DBC as a Neumann or Robin boundary condition (RBC). Hence, their implementations are rather easy. As indicated by Bazilevs et al. \cite{MR2273854,MR2355732}, the ``weak imposition'' of the DBC using Nitsche's method is useful for resolving the issue of spurious oscillations for non-stationary Navier--Stokes and convection--diffusion equations. From the viewpoint of physics, we also need to consider complex boundary conditions.
Boundary conditions involving the Laplace--Beltrami operator $\Delta_\Gamma$, such as a dynamic boundary condition \begin{equation*} \pderiv{u}{t} +\pderiv{u}{\nu} + au -b\Delta_\Gamma u =g \end{equation*} and a generalized RBC \begin{equation} \label{eq:GRBC} \pderiv{u}{\nu} + au -b\Delta_\Gamma u = g, \end{equation} play important roles in applications to the reduced fluid-structure interaction model and the Cahn--Hilliard equation (see, e.g., \cite{MR2243323}, \cite{MR2385883} and \cite{MR2629535}). Nitsche's method may be an effective approach to address these boundary conditions, and is therefore worthy of a thorough investigation. When numerically solving PDEs in a smooth domain, we often utilize polyhedral approximations of the domain. Generally, a facile approximation of the problem may result in a wrong numerical solution; the so-called Babu\v{s}ka paradox in \cite[\S 5]{MR0170133} is a remarkable example. Therefore, investigating not only the error caused by discretizations but also that caused by domain approximations is important. For the standard FEM, approximating domains is a common problem, and analysis in the energy norm is well-developed thus far (see, e.g., \cite[\S 4.4]{MR0443377} and \cite[\S 4.4]{MR0520174}). Recently, the optimal-order $W^{1,\infty}$ and $L^\infty$ stability and error estimates were established (refer to \cite{2018arXiv180400390K} for details). Consequently, we evaluate Nitsche's method for PDEs in a smooth bounded domain. In the first step, we consider a simple Robin boundary value problem for the Poisson equation as a model problem, based on \cite{MR2501054}. Moreover, Nitsche's method naturally appears in the imposition of the DBC and RBC in the DG method. Hence, we also study the DG method because the FEM and DG methods can be analyzed simultaneously. Our results are summarized as follows. We state the model Robin boundary value problem to be considered in Section \ref{sec:model}.
Then, we mention the standard FEM \eqref{eq:nitsche}, the SIPDG method \eqref{eq:dg}, and several parameter-/mesh-dependent norms $\norm{\cdot}_{N,h},\norm{\cdot}_{{DG},h}$. These norms are defined in \eqref{eq:norm_n} and \eqref{eq:norm_dg} and include the $H^1$ semi-norm. Let $u$ be the solution of the Robin boundary value problem, and let $u_{N}$ and $u_{DG}$ be the solutions of Nitsche's and the DG methods, respectively. Then, we prove the DG energy error estimates (refer to Theorem \ref{thm1}) \begin{align} \norm{\tilde u - u_N}_{N,h},\, \norm{\tilde u - u_{DG}}_{{DG},h} &\le Ch(\norm{u}_{H^s(\Omega)}+\norm{\tilde u_0}_{H^1(\widetilde{\Omega})}+\norm{\tilde g}_{H^1(\widetilde{\Omega})}) \end{align} if $u\in H^s(\Omega)$ for $s=2,3$. Moreover, we obtain the $L^2$ error estimates (refer to Theorem \ref{thm2}) \begin{align} \norm{\tilde u - u_N}_{L^2(\Omega_h)},\norm{\tilde u - u_{DG}}_{L^2(\Omega_h)} &\le Ch^2(\norm{u}_{H^4(\Omega)}+\norm{\tilde u_0}_{H^3(\widetilde{\Omega})}+\norm{\tilde g}_{H^3(\widetilde{\Omega})}) \end{align} if $u\in H^4(\Omega)$. First, we present several preliminary results in Section \ref{sec:preliminaries}. Then, we state the proofs of Theorems \ref{thm1} and \ref{thm2} in Sections \ref{sec:pr_thmh1} and \ref{sec:pr_thml2}, respectively. Finally, we show the results of numerical experiments to confirm the validity of our theoretical results in Section \ref{sec:numerical_example}. We also discuss previous related studies. Barrett and Elliott \cite{MR968097} studied the iso-parametric FEM for a similar problem and obtained results similar to ours. Specifically, we applied several techniques from \cite{MR968097}. However, their regularity assumptions differ slightly from ours, and the DG method was not addressed. Cockburn et al. \cite{MR2576369} considered the DG method and the approximation of domains only in a one-dimensional problem. Zhang \cite{MR3448242} and Bassi and Rebay \cite{MR1607481} also reported numerical results.
Chen and Chen \cite{MR2113680} studied the DG method in an ``exactly fitted'' triangulation. Kashiwabara \textit{et al.} \cite{MR3296617} investigated the standard FEM for \eqref{eq:GRBC} and proved the optimal-order convergence, where \eqref{eq:GRBC} is posed only on a ``flat'' part of the boundary. Kov\'{a}cs and Lubich \cite{MR3614879} also considered the standard FEM for \eqref{eq:GRBC} in a smooth domain, but the DG method was not addressed. For the DG method, some error analyses for the dynamic boundary condition were given by Antonietti \textit{et al.} \cite{MR3456973}. Nevertheless, applying the results of \cite{MR3456973} to actual problems is difficult because they are established only in a rectangular domain. \paragraph{Notation.} At the end of the Introduction, we list the notations used in this paper. We follow the standard notation of, for example, \cite{MR2424078} for function spaces and their norms. Particularly, for $1\le p \le \infty$ and a positive integer $j$, we use the standard Lebesgue space $L^{p}(\mathcal{O})$ and Sobolev space $W^{j,p}(\mathcal{O})$. Hereinafter, $\mathcal{O}$ denotes a bounded domain in $\mathbb{R}^n$. The semi-norm and norm of $W^{j,p}(\mathcal{O})$ are denoted, respectively, by \[ \abs{v}_{W^{i,p}(\mathcal{O})} = \left(\sum_{\abs{\alpha} = i}\norm{\pderiv[\alpha]{v}{x}}_{L^p(\mathcal{O})}^p\right)^{{1}/{p}},\quad \norm{v}_{W^{j,p}(\mathcal{O})} = \left(\sum_{i = 0}^{j} \abs{v}_{W^{i,p}(\mathcal{O})}^p\right)^{{1}/{p}}. \] The inner product of $L^2(\mathcal{O})$ is denoted by $(\cdot,\cdot)_{\mathcal{O}}$. We also use the fractional-order Sobolev space $W^{s,p}(\mathcal{O})$ for $s>0$. Generally, we write $H^s(\mathcal{O}) = W^{s,2}(\mathcal{O})$. For $\Gamma \subset \partial \mathcal{O}$, we define $W^{j,p}(\Gamma)$ and $H^s(\Gamma)$ by using a surface measure $d\gamma=d\gamma_\Gamma$ in the usual way. The inner product of $L^2(\Gamma)$ is denoted by $\langle\cdot,\cdot\rangle_{\Gamma}$.
Moreover, $\mathcal{P}^r(\mathcal{O})$ denotes the set of all polynomials of degree $\le r$. \section{Model problem and main results} \label{sec:model} \subsection{Model problem} \label{model} Supposing that $\Omega \subset \mathbb{R}^n\,(n=2,\,3)$ is a bounded domain with a sufficiently smooth boundary $\Gamma = \partial \Omega$, we consider the Robin boundary value problem for the Poisson equation as follows: \begin{equation} \left\{\begin{array}{rcllll} -\Delta u &=& f & \rm in & \Omega &\\ \displaystyle\pderiv{u}{\nu} + \frac{1}{\varepsilon}u&=&\displaystyle \frac{1}{\varepsilon}u_0 + g &\rm on &\Gamma , & \end{array}\right. \label{eq:poi} \end{equation} where $\partial/\partial\nu$ denotes the differentiation along the outward unit normal vector $\nu$ to $\Gamma$ and $\varepsilon$ is a positive constant. Moreover, $f$, $u_0$, and $g$ are the given functions. Throughout this paper, we assume that \begin{equation} \tag{H1} f\in L^2(\Omega),\quad u_0\in H^{3/2}(\Gamma),\quad g\in H^{1/2}(\Gamma),\quad \mbox{and}\quad \Gamma \mbox{ is a $C^{2}$ boundary}. \end{equation} Under these assumptions, $\mathcal{E}u_0\in H^2(\Omega)$ and $\mathcal{E}g\in H^1(\Omega)$ exist, such that $\mathcal{E}u_0=u_0$ on $\Gamma$, $\mathcal{E}g=g$ on $\Gamma$, $\|\mathcal{E}u_0\|_{H^{2}(\Omega)}\le C\|u_0\|_{H^{3/2}(\Gamma)}$, and $\|\mathcal{E}g\|_{H^{1}(\Omega)}\le C\|g\|_{H^{1/2}(\Gamma)}$. From the general theory of elliptic PDEs, we recognize that for a non-negative integer $m$, a unique solution $u\in H^{m+2}(\Omega)$ of \eqref{eq:poi} exists if $f\in H^{m}(\Omega)$, $u_0\in H^{m+3/2}(\Gamma)$, $g\in H^{m+1/2}(\Gamma)$, and $\Gamma$ is a $C^{m+2}$ boundary. \subsection{Numerical schemes} Let $\{\mathcal{T}_h\}_{h}$ be a family of regular triangulations in the sense of \cite[(4.4.15)]{MR2373954}, where the granularity parameter $h$ is defined as $h = \displaystyle \max_{K \in \mathcal{T}_h}h_K$. 
Setting $\Omega_h = \operatorname{int}(\bigcup_{K \in \mathcal{T}_h} \overline K)$, we write $\Gamma_h = \partial \Omega_h$ for the boundary of $\Omega_h$. We introduce the set of all edges as \[ \overline{\mathcal{I}_h} \coloneqq \{E \colon E \text{ is an $(n-1)$-face of some } K \in \mathcal{T}_h\}. \] Then, the boundary mesh inherited from $\mathcal{T}_h$ is defined by \[ \mathcal{E}_h=\{E\in \overline{\mathcal{I}_h}\colon E\subset\Gamma_h\}, \] and $\Gamma_h$ is expressed as $\Gamma_h = \bigcup_{E \in \mathcal{E}_h} E$. We assume that $\Gamma_h$ is an approximate surface/polygon of $\Gamma$ in the sense that \begin{equation} \tag{H2} \mbox{every vertex of $E \in \mathcal{E}_h$ lies on $\Gamma$.} \end{equation} We define the following two finite-element spaces: \begin{align} V_N &\coloneqq \{ \chi \in C(\overline \Omega) \colon \chi|_K \in \mathcal{P}^1(K)\,{}^\forall K\in \mathcal{T}_h\} ;\label{eq:fespace_n}\\ V_{DG} &\coloneqq \{ \chi \in L^2(\Omega) \colon \chi|_K \in \mathcal{P}^1(K)\,{}^\forall K\in \mathcal{T}_h\}. \label{eq:fespace_dg} \end{align} Furthermore, we set \[ \mathcal{I}_h \coloneqq \{E\in \overline{\mathcal{I}_h}\colon E\not\subset\Gamma_h\}=\overline{\mathcal{I}_h}\backslash\mathcal{E}_h. \] The symbols $\mean{\cdot}$ and $\jump{\cdot}$ denote the average and jump of a function at an edge $E$, respectively; the precise definitions are given below. For each $E\in\mathcal{I}_h$, there exist two distinct $K_1,K_2 \in \mathcal{T}_h$ satisfying $E = \overline{K}_1 \cap \overline{K}_2$. The unit normal vectors to $E$ pointing outward from $K_1$ and $K_2$ are denoted by $n_1$ and $n_2$, respectively. Supposing that $v$ is a suitably smooth function defined in $K_1\cup K_2\cup \Gamma$, we define the restrictions of $v$ as $v_1=v|_{K_1}$ and $v_2=v|_{K_2}$.
Then, we set \begin{align*} \mean{v} &\coloneqq \frac{1}{2}(v_1 +v_2),& \jump{v} &\coloneqq v_1n_1 + v_2n_2,\\ \mean{\nabla v} &\coloneqq \frac{1}{2}(\nabla v_1 + \nabla v_2),& \jump{\nabla v} &\coloneqq \nabla v_1 \cdot n_1 + \nabla v_2 \cdot n_2. \end{align*} Note that $\jump{v}$ and $\mean{\nabla v}$ are vector-valued functions, while $\mean{v}$ and $\jump{\nabla v}$ are scalar-valued functions. Finally, for $E \in \mathcal{E}_h$, we set $\displaystyle \mean{\nabla v} \coloneqq \pderiv{v}{\nu_h}$. We set \[ (w,v)_\omega=\int_\omega wv~dx, \quad (\nabla w,\nabla v)_{\omega}=\int_\omega \nabla w\cdot \nabla v~dx,\quad \langle w,v \rangle_{E}=\int_E wv~dS \] for $\omega\subset\Omega$ and $E\in\overline{\mathcal{I}_h}$. Moreover, we define the bilinear forms as follows: \begin{align} a_h^N(w,v) &= (\nabla w,\nabla v)_{\Omega_h} + b_h(w,v);\\ b_h(w,v)&= \sum_{E \in \mathcal{E}_h}\biggl\{-\frac{\gamma h_E}{\varepsilon+\gamma h_E}\Bigl(\langle\pderiv{w}{\nu_h},v\rangle_{E}+\langle w,\pderiv{v}{\nu_h}\rangle_{E}\Bigr)\nonumber\\ &\hspace{7em}+ \frac{1}{\varepsilon+\gamma h_E}\langle w,v\rangle_E -\frac{\varepsilon\gamma h_E}{\varepsilon+\gamma h_E}\langle\pderiv{w}{\nu_h},\pderiv{v}{\nu_h}\rangle_E \biggr\};\\ a_h^{DG}(w,v) &= \sum_{K \in \mathcal{T}_h}(\nabla w,\nabla v)_{K} + b_h(w,v) + J_h(w,v);\\ J_h(w,v) &= \sum_{E \in \mathcal{I}_h}\left\{-\langle \mean{\nabla w},\jump{v}\rangle_E - \langle \jump{w},\mean{\nabla v}\rangle_E + \frac{1}{\gamma h_E}\langle \jump{w},\jump{v} \rangle_E \right\}. \end{align} Here, $\gamma$ is a penalty parameter and $h_E = \operatorname{diam} E$. The bilinear form $a_h^N$ is taken from \cite{MR2501054} and $a_h^{DG}$ appears in the SIPDG method (see \cite{MR1885715}).
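The $\varepsilon$-dependent coefficients appearing in $b_h$ (and in $l_h$ below) interpolate between the classical Nitsche method for the Dirichlet condition (as $\varepsilon \to 0$) and a Neumann-type treatment of the boundary data (as $\varepsilon \to \infty$). The following Python sketch, with hypothetical values of $\gamma$ and $h_E$, checks these limits numerically; signs of the terms are omitted:

```python
def nitsche_weights(eps, gamma, hE):
    # Magnitudes of the coefficients of the boundary terms in b_h and l_h.
    d = eps + gamma * hE
    return {
        "consistency": gamma * hE / d,   # on <dw/dn, v> + <w, dv/dn>
        "penalty": 1.0 / d,              # on <w, v>
        "normal": eps * gamma * hE / d,  # on <dw/dn, dv/dn>
        "neumann_data": eps / d,         # on <g, v> in l_h
    }

gamma, hE = 0.1, 0.01
w_dir = nitsche_weights(1e-12, gamma, hE)  # eps -> 0: Dirichlet-type Nitsche
w_neu = nitsche_weights(1e12, gamma, hE)   # eps -> infinity: Neumann-type
```

As $\varepsilon \to 0$ the consistency weight tends to $1$ and the penalty weight to $1/(\gamma h_E)$, which are the weights of the symmetric Nitsche method for the Dirichlet condition, while as $\varepsilon \to \infty$ only the Neumann-data weight survives.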
We also define the following linear form $l_h(v)$: \begin{align} l_h(v) &= (\tilde f, v)_{\Omega_h} + \sum_{E \in \mathcal{E}_h}\biggl\{\frac{1}{\varepsilon+\gamma h_E}\langle \tilde u_{0},v\rangle_{E} -\frac{\gamma h_E}{\varepsilon+\gamma h_E}\langle \tilde u_0, \pderiv{v}{\nu_h}\rangle_E\nonumber\\ &\hspace{10em}+ \frac{\varepsilon}{\varepsilon+\gamma h_E}\langle \tilde g,v\rangle_{E} -\frac{\varepsilon\gamma h_E}{\varepsilon+\gamma h_E}\langle \tilde g, \pderiv{v}{\nu_h}\rangle_E \biggr\}. \end{align} We are now ready to state our schemes. First, the standard FEM combined with Nitsche's method for the inhomogeneous RBC is expressed as \begin{equation} \textup{(N)}\quad \text{Find}\quad u_N \in V_N \quad \text{ s.t. }\quad a_h^N(u_N,\chi) = l_h(\chi) \quad {}^\forall \chi \in V_N. \label{eq:nitsche} \end{equation} The second is the SIPDG method, which is expressed as \begin{equation} \textup{(DG)}\quad \text{Find}\quad u_{DG} \in V_{DG} \quad \text{ s.t. }\quad a_h^{DG}(u_{DG},\chi) = l_h(\chi) \quad {}^\forall \chi \in V_{DG}. \label{eq:dg} \end{equation} For brevity, we refer to \eqref{eq:nitsche} as \emph{Nitsche's method} (N) and to \eqref{eq:dg} as the \emph{DG method} (DG). We use the following norms that depend on $\varepsilon$ and $h_E$: \begin{align} \norm{v}_N^2 &\coloneqq \norm{\nabla v}_{L^2(\Omega_h)}^2 + \sum_{E \in \mathcal{E}_h} \frac{1}{\varepsilon+h_E}\norm{v}_{L^2(E)}^2,\\ \norm{v}_{N,h}^2 &\coloneqq \norm{v}_N^2 + \sum_{E \in \mathcal{E}_h} h_E\norm{\pderiv{v}{\nu_h}}_{L^2(E)}^2, \label{eq:norm_n}\\ \norm{v}_{DG}^2 &\coloneqq \norm{v}_{N}^2 + \sum_{E \in \mathcal{I}_h} \frac{1}{h_E}\norm{\jump{v}}^2_{L^2(E)},\\ \norm{v}_{DG,h}^2 &\coloneqq \norm{v}_{DG}^2 + \sum_{E \in \mathcal{I}_h\cup\mathcal{E}_h}h_E\norm{\mean{\nabla v}}_{L^2(E)}^2.\label{eq:norm_dg} \end{align} Note that $\norm{\cdot}_{N}$ and $\norm{\cdot}_{N,h}$ are equivalent on $V_N$ uniformly in $h$. Similarly, $\norm{\cdot}_{DG}$ and $\norm{\cdot}_{DG,h}$ are equivalent on $V_{DG}$.
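It may help to see the role of $\varepsilon$ in the simplest possible setting. The Python sketch below solves a one-dimensional analogue of \eqref{eq:poi} on $(0,1)$ with linear elements and the Robin condition imposed directly in the variational form (the standard treatment, not Nitsche's method); the mesh, data, and parameter values are hypothetical:

```python
import numpy as np

def solve_robin_1d(f, u0, g, eps, N):
    """P1 FEM for -u'' = f on (0,1) with the Robin condition
    du/dnu + u/eps = u0/eps + g at both endpoints;
    u0 = (left, right) and g = (left, right) are the boundary data."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    # Stiffness matrix of P1 elements plus the Robin boundary terms
    # (1/eps) u(0) v(0) and (1/eps) u(1) v(1).
    K = np.zeros((N + 1, N + 1))
    for i in range(N):
        K[i:i + 2, i:i + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    K[0, 0] += 1.0 / eps
    K[N, N] += 1.0 / eps
    # Lumped load vector plus the Robin boundary data.
    F = h * f(x)
    F[0] *= 0.5
    F[N] *= 0.5
    F[0] += u0[0] / eps + g[0]
    F[N] += u0[1] / eps + g[1]
    return x, np.linalg.solve(K, F)

# Exact solution u(x) = cos(pi x): its normal derivative vanishes at both
# endpoints, so g = 0 and u0 is the exact trace; the Robin condition then
# holds for any eps and the nodal error is O(h^2).
x, uh = solve_robin_1d(lambda x: np.pi ** 2 * np.cos(np.pi * x),
                       u0=(1.0, -1.0), g=(0.0, 0.0), eps=1.0, N=100)
```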
\subsection{Main results} We fix a sufficiently smooth domain $\widetilde \Omega$ that includes both $\overline{\Omega}$ and $\overline{\Omega_h}$. In particular, we assume that there exists $h_0>0$ such that \begin{equation} \tag{H3} \operatorname{dist}(\partial\widetilde{\Omega},\Omega)\ge h_0\quad \mbox{and}\quad \operatorname{dist}(\partial\widetilde{\Omega},\Omega_h)\ge h_0, \end{equation} for $h\le h_0$. For any $m\ge 0$, there exists a linear operator $\map{P}{H^m(\Omega)}{H^m(\widetilde \Omega)}$ such that \[ (Pv)|_\Omega=v\mbox{ in }\Omega,\quad \|Pv\|_{H^m(\widetilde{\Omega})}\le C_m\|v\|_{H^m(\Omega)}, \] where $C_m$ denotes a positive constant depending only on $m$ and $\Omega$. We now state our main results. They are valid for sufficiently small $h$; specifically, we always assume that $h\le h_0$, with $h_0$ as above, although we do not mention it explicitly. Furthermore, (H1), (H2), and (H3) are assumed throughout. We set $\tilde{u}_0=P(\mathcal{E}u_0)$ and $\tilde{g}=P(\mathcal{E}g)$. \begin{thm}\label{thm1} Let $u \in H^s(\Omega)$ be the solution of \eqref{eq:poi}, where $s=2$ if $\Omega$ is convex and $s=3$ otherwise, and set $\tilde{u}=Pu$. Let $u_N \in V_N$ and $u_{DG} \in V_{DG}$ be the solutions of \eqref{eq:nitsche} and \eqref{eq:dg}, respectively. Then, for a sufficiently small $\gamma$, the following error estimates hold: \begin{equation} \norm{\tilde u - u_N}_{N,h},\, \norm{\tilde u - u_{DG}}_{{DG},h} \le Ch(\norm{u}_{H^s(\Omega)}+\norm{\tilde u_0}_{H^1(\widetilde{\Omega})}+\norm{\tilde g}_{H^1(\widetilde{\Omega})}), \label{eq:errorh1} \end{equation} where $C$ denotes a positive constant that is independent of $\varepsilon$ and $h$. \end{thm} \begin{thm}\label{thm2} Let $u \in H^4(\Omega)$ be the solution of \eqref{eq:poi} and set $\tilde u = Pu$. Moreover, we assume that $\tilde{u}_0\in H^3(\widetilde{\Omega})$ and $\tilde{g}\in H^3(\widetilde{\Omega})$.
Let $u_N \in V_N$ and $u_{DG} \in V_{DG}$ be the solutions of \eqref{eq:nitsche} and \eqref{eq:dg}, respectively. Then, for a sufficiently small $\gamma$, the following error estimates hold: \begin{equation} \norm{\tilde u - u_N}_{L^2(\Omega_h)},\norm{\tilde u - u_{DG}}_{L^2(\Omega_h)} \le Ch^2(\norm{u}_{H^4(\Omega)}+\norm{\tilde u_0}_{H^3(\widetilde{\Omega})}+\norm{\tilde g}_{H^3(\widetilde{\Omega})}), \label{eq:errorl2} \end{equation} where $C$ denotes a positive constant that is independent of $\varepsilon$ and $h$. \end{thm} \section{Preliminaries}\label{sec:preliminaries} In this section, we collect some auxiliary results. Below, the symbol \# stands for either $N$ or $DG$, since the properties of Nitsche's method and the DG method are quite similar. Since $\Omega$ is a $C^2$ domain, there exists a local coordinate system $\{U_r,y_r,\phi_r\}_{r=1}^M$ with the following properties: \begin{enumerate}[1)] \item $\{U_r\}_{r=1}^M$ is an open covering of $\Gamma = \partial \Omega$.\label{enum_dom1} \item There exists a congruent transformation $A_r$ such that $y_r= (y_{r1},y'_{r}) = A_r(x)$, where $x$ denotes the original coordinate.\label{enum_dom2} \item $\phi_r$ is a $C^2$ function in $\Delta_r\coloneqq \{y'_{r} \colon \abs{y'_{r}}\le \alpha\} $ and $\Gamma \cap U_r$ is the graph of $\phi_r$ with respect to the coordinate $y_r$.\label{enum_dom3} \end{enumerate} Taking $h$ sufficiently small if necessary, we may further assume the following: \begin{enumerate}[1)] \setcounter{enumi}{3} \item There exists a function $\phi_{rh}$ such that $\Gamma_h \cap U_r$ is the graph of $\phi_{rh}$ with respect to the coordinate $y_r$. \end{enumerate} In addition, we assume that $h_0$ is small enough that for any $x \in \Gamma$, the open ball $B(x,h_0)$ with center $x$ and radius $h_0$ is contained in some neighborhood $U_r$, $r\in\{1,\ldots,M\}$.
Let $d(x)$ be the signed distance function defined by \[ d(x) \coloneqq \begin{cases} -\operatorname{dist}(x,\Gamma) & x \in \Omega \\ \operatorname{dist}(x,\Gamma) & x \in \mathbb{R}^n\backslash\Omega. \end{cases} \] We define $\Gamma(\delta) \coloneqq \{x \in \mathbb{R}^n \colon \abs{d(x)} < \delta\}$. Then, for a sufficiently small $\delta$, the orthogonal projection $\pi$ onto $\Gamma$ exists such that \begin{equation} x = \pi(x) + d(x) \nu(\pi(x)) \quad (x \in \Gamma(\delta),\,\pi(x) \in \Gamma), \end{equation} where $\nu$ is the outward unit normal vector on $\Gamma$. Since $h$ is sufficiently small, $\pi$ is defined on $\Gamma_h\subset \Gamma(\delta)$, and for each $E\in \mathcal{E}_h$, the image $\pi(E)$ is contained in some local neighborhood $U_r$. In this case, $\pi|_{\Gamma_h}$ has the inverse mapping $\pi^*(x)=x+t^*(x)\nu(x)$, and there exists a positive constant $C_0$ such that \[\norm{t^*}_{L^{\infty}(\Gamma)}\le C_{0}h^2.\] Moreover, $\pi(\mathcal{E}_h)\coloneqq\{\pi(E) \colon E \in \mathcal{E}_h\}$ is a partition of $\Gamma$. We assume that all these properties hold for any $h\le h_0$, taking $h_0$ smaller if necessary. In this situation, the following boundary-skin estimates are available; for details, refer to \cite[Theorems 8.1, 8.2, and 8.3]{MR3563279} and \cite[Lemma A.1]{2018arXiv180400390K}.
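For a concrete smooth domain, the decomposition $x = \pi(x) + d(x)\nu(\pi(x))$ can be checked directly. The following sketch (an illustration only; it assumes $\Omega$ is the unit disk, so that $d(x)=\abs{x}-1$, $\pi(x)=x/\abs{x}$, and $\nu(p)=p$ on $\Gamma$) verifies the identity numerically at a point inside and a point outside the disk.

```python
import numpy as np

def signed_distance(x):
    # signed distance to the unit circle: negative inside the disk, positive outside
    return np.linalg.norm(x) - 1.0

def projection(x):
    # orthogonal projection onto the unit circle (defined for x != 0)
    return x / np.linalg.norm(x)

def normal(p):
    # outward unit normal of the unit disk at a boundary point p (|p| = 1)
    return p

for x in [np.array([0.5, 0.3]), np.array([1.2, -0.7])]:
    p = projection(x)
    # the decomposition x = pi(x) + d(x) * nu(pi(x))
    print(np.allclose(p + signed_distance(x) * normal(p), x))  # True
```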
\begin{lem}[Boundary-skin estimates]\label{lem:bdskin} For $C_{0}h^2 \le \delta \le 2C_{0}h^2$ with the positive constant $C_0$ above, the following estimates hold: \begin{equation} \abs{\int_{\pi(E)}f\,d\gamma - \int_E f\circ \pi\,d\gamma_h} \le Ch^2 \int_{\pi(E)} \abs{f}\,d\gamma \quad f \in L^1(\pi(E)),\,E \in \mathcal{E}_h.\label{eq:bs_1} \end{equation} \begin{equation} \norm{f-f\circ\pi}_{L^p(\Gamma_h)} \le C\delta^{1-1/p}\norm{f}_{W^{1,p}(\Gamma(\delta))} \quad f \in W^{1,p}(\Gamma(\delta)).\label{eq:bs_2} \end{equation} \begin{equation} \norm{f}_{L^p(\Gamma(\delta))} \le C(\delta \norm{\nabla f}_{L^p(\Gamma(\delta))}+\delta^{1/p}\norm{f}_{L^p(\Gamma)}) \quad f \in W^{1,p}(\Gamma(\delta)).\label{eq:bs_3} \end{equation} \begin{equation} \norm{f}_{L^p(\Omega_h\setminus\Omega)} \le C(\delta \norm{\nabla f}_{L^p(\Omega_h\setminus\Omega)}+\delta^{1/p}\norm{f}_{L^p(\Gamma_h)}) \quad f \in W^{1,p}(\Omega_h).\label{eq:bs_4} \end{equation} \begin{equation} \norm{\nu_h-\nu\circ\pi}_{L^\infty(\Gamma_h)} \le Ch.\label{eq:bs_5} \end{equation} Here, $\nu_h$ denotes the outward unit normal vector of $\Gamma_h$. \end{lem} The bilinear form $a_h^\#$ has the following properties. \begin{lem} There exists a positive constant $C$, independent of $\varepsilon$ and $h$, such that \begin{equation} a_h^\#(w,v) \le C \norm{w}_{\#,h}\norm{v}_{\#,h}\quad {}^\forall w,\,v \in H^s(\Omega_h) + V_\#. \label{eq:continuity} \end{equation} Moreover, for a sufficiently small $\gamma$, we have \begin{equation} a_h^\#(\chi,\chi) \ge C \norm{\chi}_\#^2 \quad {}^\forall \chi \in V_\# \label{eq:coercivity}, \end{equation} where $C$ denotes a positive constant that is independent of $\varepsilon$ and $h$. Consequently, the schemes \eqref{eq:nitsche} and \eqref{eq:dg} have unique solutions. \end{lem} \begin{proof} Estimate \eqref{eq:continuity} is a consequence of H\"older's inequality. Estimate \eqref{eq:coercivity} for Nitsche's method is known (refer to \cite[Theorem 3.2]{MR2501054}).
It remains to verify \eqref{eq:coercivity} for the DG method. Using H\"older's inequality and the trace inequality for polynomials, we have \begin{align*} a_h^{DG}(\chi,\chi) &\ge \norm{\nabla \chi}_{L^2(\Omega_h)}^2 \\ &\hspace{2em}+ \sum_{E \in \mathcal{E}_h}\frac{1}{\varepsilon + \gamma h_E}\Bigl\{-2\gamma h_E\norm{\pderiv{\chi}{\nu_h}}_{L^2(E)}\norm{\chi}_{L^2(E)}+\norm{\chi}_{L^2(E)}^2 -\varepsilon\gamma h_E \norm{\pderiv{\chi}{\nu_h}}_{L^2(E)}^2\Bigr\}\\ & \hspace{2em}+ \sum_{E \in \mathcal{I}_h}\Bigl\{ -2\norm{\mean{\nabla \chi}}_{L^2(E)}\norm{\jump{\chi}}_{L^2(E)} + \frac{1}{\gamma h_E}\norm{\jump{\chi}}_{L^2(E)}^2\Bigr\}\\ &\ge \left(1-\frac{1}{\delta_1}\frac{C\gamma^2h_E}{\varepsilon + \gamma h_E}-\frac{C\varepsilon \gamma}{\varepsilon+\gamma h_E}-\frac{C\gamma}{\delta_2}\right)\norm{\nabla \chi}_{L^2(\Omega_h)}^2\\ & \hspace{2em}+ \sum_{E \in \mathcal{E}_h}\frac{1-\delta_1}{\varepsilon+\gamma h_E}\norm{\chi}_{L^2(E)}^2 + \sum_{E \in \mathcal{I}_h}\frac{1-\delta_2}{\gamma h_E}\norm{\jump{\chi}}_{L^2(E)}^2, \end{align*} where Young's inequality with parameters $\delta_1,\delta_2$ and the trace inequality are applied in the second step. If $2C\gamma < 1$, then we can choose $\delta_i$ satisfying $2C\gamma < \delta_i < 1$ for $i=1,2$. Therefore, we have proven \eqref{eq:coercivity}. \end{proof} There exists a projection operator $\Pi_\# \colon H^2(\Omega_h) \to V_\#$ satisfying \begin{equation} \abs{w-\Pi_\# w}_{H^m(K)} \le C h^{2-m}\norm{w}_{H^2(K)} \quad {}^\forall w \in H^2(\Omega_h), K \in \mathcal{T}_h, m = 0,\,1,\,2. \label{eq:interpolation} \end{equation} \begin{lem}\label{lem:approx} Let $u \in H^2(\Omega)$ be the solution of \eqref{eq:poi} and set $\tilde u =Pu$. Let $u_N \in V_N$ and $u_{DG} \in V_{DG}$ be the solutions of \eqref{eq:nitsche} and \eqref{eq:dg}, respectively.
Then, for a sufficiently small $\gamma$, we have \begin{align} \norm{\tilde u - u_\#}_{\#,h} &\le C \left[\inf_{\xi\in V_\#}\norm{\tilde u - \xi}_{\#,h}+\sup_{\chi \in V_\#} \frac{\nabs{a_h^\#(\tilde u,\chi)-l_h(\chi)}}{\norm{\chi}_{\#}}\right]\label{eq:approx_h1}\\ \norm{\tilde u - u_\#}_{L^2(\Omega_h)} &\le C\Biggl[ \norm{\tilde u - u_\#}_{L^2(\Omega_h\setminus\Omega)} + h\norm{\tilde u - u_\#}_{\#,h}\nonumber\\ &+ \sup_{z \in H^2(\Omega)}\frac{\norm{\tilde z-\Pi_\#\tilde z}_{\#,h}\norm{\tilde u-u_\#}_{\#,h}+\nabs{a_h^\#(\tilde u,\Pi_\#\tilde z)-l_h(\Pi_\#\tilde z)}}{\norm{z}_{H^2(\Omega)}}\Biggr]\label{eq:approx_l2}, \end{align} where $\tilde z = Pz$ for $z\in H^2(\Omega)$. Therein, $C$ denotes positive constants that are independent of $\varepsilon$ and $h$. \end{lem} \begin{proof} Let $\xi \in V_\#$ and set $\chi = u_\#-\xi$. Then, we have \begin{align*} \norm{\chi}_\#^2 &\le C a_h^\#(\chi,\chi) \\ &= C(a_h^\#(\tilde u-\xi,\chi)-a_h^\#(\tilde u,\chi)+ l_h(\chi))\\ & \le C \bigl(\norm{\tilde u -\xi}_{\#,h}\norm{\chi}_{\#} + \nabs{a_h^\#(\tilde u,\chi)- l_h(\chi)}\bigr), \end{align*} where \eqref{eq:coercivity}, \eqref{eq:continuity}, and the equivalence of the norms are applied. This, together with the triangle inequality, implies \eqref{eq:approx_h1}. For $\eta \in L^2(\Omega_h)$, we define $\tilde \eta$ as \[\tilde \eta =\begin{cases} \eta & (x \in \Omega)\\ 0 & (\text{otherwise}). \end{cases} \] Let $z \in H^2(\Omega)$ be the solution of \[ \left\{\begin{array}{rcllll} -\Delta z &= \tilde\eta & \rm in & \Omega &\\ \displaystyle\pderiv{z}{\nu} + \frac{1}{\varepsilon}z&=0&\rm on &\Gamma. & \end{array}\right. \] Then, \begin{equation} \norm{z}_{H^2(\Omega)} \le C \norm{\eta}_{L^2(\Omega)}.
\label{eq:adjoint} \end{equation} For $w \in H^s(\Omega_h)+V_{\#}$ and $v \in H^2(\Omega_h)$, integration by parts yields \begin{align} (w,-\Delta v)_{\Omega_h} & = \sum_{K \in \mathcal{T}_h}(\nabla w,\nabla v)_{K}-\sum_{E \in \mathcal{E}_h}\langle w,\pderiv{v}{\nu_h}\rangle_E - \sum_{E \in \mathcal{I}_h}\bigl(\langle \jump{w},\mean{\nabla v}\rangle_E + \langle \mean{w},\jump{\nabla v}\rangle_E\bigr) \nonumber\\ &=a_h^\#(w,v)-\sum_{E \in \mathcal{E}_h}\frac{\varepsilon}{\varepsilon+\gamma h_E}\langle w-\gamma h_E\pderiv{w}{\nu_h},\pderiv{v}{\nu_h}+\frac{1}{\varepsilon}v \rangle_E. \label{eq:consistency} \end{align} Applying \eqref{eq:consistency} with $w = \tilde u -u_\#$ and $v = \tilde z$, we obtain \begin{align} (\tilde u-u_\#,\eta)_{\Omega_h} &= (\tilde u - u_\#,-\Delta \tilde z)_{\Omega_h} + (\tilde u - u_\#,\eta + \Delta \tilde z)_{\Omega_h\setminus\Omega} \nonumber \\ &= a_h^\#(\tilde u -u_\#,\tilde z) + (\tilde u - u_\#,\eta + \Delta \tilde z)_{\Omega_h\setminus\Omega} \nonumber\\ &\hspace{5em}- \sum_{E \in \mathcal{E}_h}\frac{\varepsilon}{\varepsilon+\gamma h_E}\langle \tilde u -u_\#-\gamma h_E\pderiv{\tilde u -u_\#}{\nu_h},\pderiv{\tilde z}{\nu_h}+\frac{1}{\varepsilon}\tilde z \rangle_E\nonumber\\ &= a_h^\#(\tilde u -u_\#,\tilde z-\Pi_{\#}\tilde z)+a_h^\#(\tilde u,\Pi_\#\tilde z)-l_h(\Pi_\#\tilde z) + (\tilde u - u_\#,\eta + \Delta \tilde z)_{\Omega_h\setminus\Omega} \nonumber\\ &\hspace{5em}- \sum_{E \in \mathcal{E}_h}\frac{\varepsilon}{\varepsilon+\gamma h_E}\langle \tilde u -u_\#-\gamma h_E\pderiv{\tilde u -u_\#}{\nu_h},\pderiv{\tilde z}{\nu_h}+\frac{1}{\varepsilon}\tilde z \rangle_E.\label{eq:pf_approxl2_1} \end{align} Since $\displaystyle \nabla\tilde z \cdot \nu+\frac{1}{\varepsilon}\tilde z =0$ on $\pi(E)$, the boundary-skin estimates give \begin{align} &\hspace{1em}\sum_{E \in \mathcal{E}_h}\frac{\varepsilon}{\varepsilon+\gamma h_E}\langle \tilde u -u_\#-\gamma h_E\pderiv{\tilde u -u_\#}{\nu_h},\pderiv{\tilde z}{\nu_h}+\frac{1}{\varepsilon}\tilde z \rangle_E\nonumber \\ &\le C \norm{\pderiv{\tilde z}{\nu_h}+\frac{1}{\varepsilon}\tilde z}_{L^2(\Gamma_h)}\norm{\tilde u -u_\#}_{\#,h}\nonumber \\ &\le C \left(\norm{\nabla\tilde z \cdot (\nu\circ \pi)+\frac{1}{\varepsilon}\tilde z}_{L^2(\Gamma_h)}+\norm{\nabla \tilde z\cdot(\nu_h-\nu\circ \pi)}_{L^2(\Gamma_h)}\right)\norm{\tilde u -u_\#}_{\#,h}\nonumber \\ &\le C h\norm{z}_{H^2(\Omega)}\norm{\tilde u -u_\#}_{\#,h}. \label{eq:pf_approxl2_2} \end{align} Hence, we deduce \begin{align} \norm{\tilde u - u_\#}_{L^2(\Omega_h)} & = \sup_{\eta \in L^2(\Omega_h)} \frac{(\tilde u-u_\#,\eta)_{\Omega_h}}{\norm{\eta}_{L^2(\Omega_h)}} \nonumber \\ & \le C\norm{\tilde u - u_\#}_{L^2(\Omega_h\setminus\Omega)}+Ch\norm{\tilde u -u_\#}_{\#,h} \nonumber \\ & \hspace{3em} + C \sup_{\eta \in L^2(\Omega_h)} \frac{a_h^\#(\tilde u -u_\#,\tilde z-\Pi_{\#}\tilde z)+a_h^\#(\tilde u,\Pi_\#\tilde z)-l_h(\Pi_\#\tilde z)}{\norm{\eta}_{L^2(\Omega_h)}}. \end{align} Using \eqref{eq:adjoint} and \eqref{eq:continuity}, we have \eqref{eq:approx_l2}. \end{proof} \section{Energy error estimates (Proof of Theorem \ref{thm1})} \label{sec:pr_thmh1} \begin{proof}[Proof of Theorem \ref{thm1}.] Applying \eqref{eq:consistency} with $v = \tilde u$ and using the symmetry of $a_h^\#$, we have \begin{align} a_h^\#(\tilde u,w) - l_h(w) &= (-\Delta \tilde u-\tilde f,w)_{\Omega_h} + \sum_{E \in \mathcal{E}_h}\frac{\varepsilon}{\varepsilon + \gamma h_E}\langle \pderiv{\tilde u}{\nu_h} + \frac{\tilde u-\tilde u_0}{\varepsilon}-\tilde g,w - \gamma h_E \pderiv{w}{\nu_h} \rangle_E. \label{eq:pfthm1h1_1} \end{align} Since $-\Delta \tilde u = \tilde f$ on $\Omega$, the boundary-skin estimates and the trace inequality yield \begin{align} \abs{(-\Delta \tilde u-\tilde f,w)_{\Omega_h}} &\le \norm{\Delta \tilde u+ \tilde f}_{L^2(\Omega_h\setminus\Omega)}\norm{w}_{L^2(\Omega_h\setminus\Omega)} \nonumber \\ &\le C h^2\norm{u}_{H^3(\Omega)}\norm{w}_{\#,h}.
\label{eq:pfthm1h1_2} \end{align} By the boundary-skin estimates, we have \begin{align} &\hspace{1em}\sum_{E \in \mathcal{E}_h}\frac{\varepsilon}{\varepsilon + \gamma h_E}\langle \pderiv{\tilde u}{\nu_h} + \frac{\tilde u-\tilde u_0}{\varepsilon}-\tilde g,w - \gamma h_E \pderiv{w}{\nu_h} \rangle_E\nonumber \\ & \le C\norm{\pderiv{\tilde u}{\nu_h} + \frac{\tilde u-\tilde u_0}{\varepsilon}-\tilde g}_{L^2(\Gamma_h)}\norm{w}_{\#,h} \nonumber \\ & \le Ch(\norm{u}_{H^2(\Omega)}+\norm{\tilde u_0}_{H^1(\widetilde \Omega)}+\norm{\tilde g}_{H^1(\widetilde \Omega)})\norm{w}_{\#,h}. \label{eq:pfthm1h1_3} \end{align} Therefore, we deduce \begin{align} \nabs{a_h^\#(\tilde u,w) - l_h(w)} &\le C h(\norm{u}_{H^2(\Omega)}+h\norm{u}_{H^3(\Omega)}+\norm{\tilde u_0}_{H^1(\widetilde \Omega)}+\norm{\tilde g}_{H^1(\widetilde \Omega)})\norm{w}_{\#,h}. \label{eq:approx_consistency} \end{align} Applying \eqref{eq:approx_consistency} with $w = \chi$, together with Lemma \ref{lem:approx} and \eqref{eq:interpolation}, we obtain estimate \eqref{eq:errorh1}. If $\Omega$ is convex, then $\Omega_h \subset \Omega$. Hence, we obtain \[ \abs{(-\Delta \tilde u-\tilde f,w)_{\Omega_h}} = 0, \] and \[ \nabs{a_h^\#(\tilde u,w) - l_h(w)} \le C h(\norm{u}_{H^2(\Omega)}+\norm{\tilde u_0}_{H^1(\widetilde \Omega)}+\norm{\tilde g}_{H^1(\widetilde \Omega)})\norm{w}_{\#,h}. \] \end{proof} Since our finite element and DG spaces are defined using only the P1 element, Theorem \ref{thm1} is optimal in the energy norm. If we use higher-order elements, the resulting error estimate becomes non-optimal because of the difference between $\Omega$ and $\Omega_h$. However, we can obtain the optimal result with the P2 element under a symmetry assumption. Namely, we prove the following corollary.
We define the two finite-element spaces $V_{N,2}$ and $V_{DG,2}$ as \[ V_{N,2} = \{\chi \in C(\overline \Omega) \colon \chi|_{K} \in \mathcal{P}^2(K), \ K\in\mathcal{T}_h \}, \] \[ V_{DG,2} = \{\chi \in L^2(\Omega) \colon \chi|_{K} \in \mathcal{P}^2(K), \ K\in\mathcal{T}_h \}. \] \begin{cor}\label{cor:p2} Assume that $\Omega = \{x \in \mathbb{R}^2 \colon \abs{x} < 1 \}$ and the solution $u \in H^3(\Omega)$ of \eqref{eq:poi} is radially symmetric. Let $u_{\#} \in V_{\#,2}$ be the solution of \begin{equation} a_h^\#(u_{\#},\chi) = l_h(\chi) \quad {}^\forall \chi \in V_{\#,2}. \end{equation} Then, we have \begin{equation} \norm{\tilde u - u_\#}_{\#,h} \le C h^2 (\norm{u}_{H^3(\Omega)} + \norm{\tilde u_0}_{H^2(\Omega)} + \norm{\tilde g}_{H^2(\Omega)}) . \label{eq:p2approx} \end{equation} \end{cor} \begin{proof} In the same manner as in the proofs of Lemma \ref{lem:approx} and Theorem \ref{thm1}, we have \[ \norm{\tilde u - u_\#}_{\#,h} \le C \left[\inf_{\xi\in V_{\#,2}}\norm{\tilde u - \xi}_{\#,h}+\sup_{\chi \in V_{\#,2}} \frac{\nabs{a_h^\#(\tilde u,\chi)-l_h(\chi)}}{\norm{\chi}_{\#}}\right], \]\[ \inf_{\xi\in V_{\#,2}}\norm{\tilde u - \xi}_{\#,h} \le Ch^2\norm{u}_{H^3(\Omega)}, \] and \begin{align*} \nabs{a_h^\#(\tilde u,\chi)-l_h(\chi)} &\le C\norm{\pderiv{\tilde u}{\nu_h} + \frac{\tilde u-\tilde u_0}{\varepsilon}-\tilde g}_{L^2(\Gamma_h)}\norm{\chi}_{\#,h}\\ &\le C \Bigl(\norm{\nabla u \cdot \nu_h-(\nabla u \circ \pi) \cdot (\nu \circ \pi)}_{L^2(\Gamma_h)} \\ &\hspace{3em}+ h^2(\norm{u}_{H^2(\Omega)}+ \norm{\tilde u_0}_{H^2(\Omega)} + \norm{\tilde g}_{H^2(\Omega)})\Bigr)\norm{\chi}_{\#,h}. \end{align*} Since $u$ is radially symmetric, there exists a function $U$ such that $u(x)=U(\abs{x})$ for $x \in \Omega$. For $x \in \Gamma_h$, we define the angle $\alpha(x)\in[0,\pi]$ by $\cos \alpha(x) = \nu_h(x) \cdot (\nu\circ\pi(x))$.
Then, for $x \in \Gamma_h$, we have \[ \pi(x) = \frac{x}{\abs{x}},\quad \nabla u(x) = U'(\abs{x})\frac{x}{\abs{x}},\quad \nu(\pi(x)) = \frac{x}{\abs{x}},\quad 1-\cos\alpha(x) = 2\sin^2\frac{\alpha(x)}{2}\le Ch^2, \] \[ \nabla u \cdot \nu_h = U'(\abs{x})\cos\alpha(x),\quad(\nabla u \circ \pi) \cdot (\nu \circ \pi) =U'(1). \] Hence, we obtain \begin{align} &\hspace{1em}\norm{\nabla u \cdot \nu_h-(\nabla u \circ \pi) \cdot (\nu \circ \pi)}_{L^2(\Gamma_h)}^2\nonumber \\ &\le 2\int_{\Gamma_h} \abs{U'(\abs{x})-U'(1)}^2\,d\gamma_h + 2\abs{U'(1)}^2\int_{\Gamma_h}\abs{1-\cos\alpha(x)}^2\,d\gamma_h \nonumber \\ &\le 2\int_{\Gamma_h}\left(\int_{\abs{x}}^1\abs{U''(s)}ds\right)^2d\gamma_h + C\abs{U'(1)}^2h^4\nonumber \\ &\le C_0h^2\int_{\Gamma_h}\int_{1-C_0h^2}^1\abs{U''(s)}^2\,dsd\gamma_h + C\abs{U'(1)}^2h^4\nonumber \\ &\le Ch^4\norm{u}_{H^3(\Omega)}^2. \end{align} Therefore, we obtain the estimate \eqref{eq:p2approx}. \end{proof} \section{$L^2$ error estimate (Proof of Theorem \ref{thm2})} \label{sec:pr_thml2} \begin{proof}[Proof of Theorem \ref{thm2}.] We define the following bilinear and linear forms.
\begin{align} a(w,v) &= (\nabla w,\nabla v)_{\Omega} + b(w,v),\\ b(w,v)&= \sum_{E \in \mathcal{E}_h}\biggl\{-\frac{\gamma h_E}{\varepsilon+\gamma h_E}\Bigl(\langle\pderiv{w}{\nu},v\rangle_{\pi(E)}+\langle w,\pderiv{v}{\nu}\rangle_{\pi(E)}\Bigr)\nonumber\\ &\hspace{7em}+ \frac{1}{\varepsilon+\gamma h_E}\langle w,v\rangle_{\pi(E)} -\frac{\varepsilon\gamma h_E}{\varepsilon+\gamma h_E}\langle\pderiv{w}{\nu},\pderiv{v}{\nu}\rangle_{\pi(E)} \biggr\},\\ l(v) &= (\tilde f, v)_{\Omega} + \sum_{E \in \mathcal{E}_h}\biggl\{\frac{1}{\varepsilon+\gamma h_E}\langle \tilde u_{0},v\rangle_{\pi(E)} -\frac{\gamma h_E}{\varepsilon+\gamma h_E}\langle \tilde u_0, \pderiv{v}{\nu}\rangle_{\pi(E)}\nonumber\\ &\hspace{10em}+ \frac{\varepsilon}{\varepsilon+\gamma h_E}\langle \tilde g,v\rangle_{\pi(E)} -\frac{\varepsilon\gamma h_E}{\varepsilon+\gamma h_E}\langle \tilde g, \pderiv{v}{\nu}\rangle_{\pi(E)} \biggr\}. \end{align} Then, we obtain \[ a(u,v) = l(v) \quad {}^\forall v \in H^s(\Omega). \] Since $\tilde u$ and $\tilde z$ are continuous in $\Omega_h$, we have $J_h(\tilde u,\tilde z) = 0$. Moreover, \begin{align} a_h^\#(\tilde u,\tilde z)-l_h(\tilde z) &=a_h^\#(\tilde u,\tilde z) - a(\tilde u,\tilde z) + l(\tilde z) -l_h(\tilde z) \nonumber \\ &= \left(\int_{\Omega_h\setminus\Omega}(\nabla \tilde u\cdot \nabla \tilde z - \tilde f\tilde z)\,dx-\int_{\Omega\setminus\Omega_h}(\nabla \tilde u \cdot \nabla \tilde z - \tilde f \tilde z)\,dx\right) \nonumber \\ &+\sum_{E\in\mathcal{E}_h}\frac{1}{\varepsilon+\gamma h_E} \biggl[\langle -\gamma h_E \pderiv{\tilde u}{\nu_h}+\tilde u - \tilde u_0 - \varepsilon \tilde g,\tilde z \rangle_E \nonumber\\ &\hspace{15em}-\langle -\gamma h_E \pderiv{u}{\nu}+u - u_0 - \varepsilon g, z \rangle_{\pi(E)}\biggr] \nonumber\\ &-\sum_{E\in\mathcal{E}_h}\frac{\varepsilon\gamma h_E}{\varepsilon+\gamma h_E}\langle \pderiv{\tilde u}{\nu_h}+\frac{\tilde u - \tilde u_0}{\varepsilon} - \tilde g,\pderiv{\tilde z}{\nu_h}\rangle_E\nonumber \\ &= I_1+I_2-I_3.
\label{eq:prthm1l2_1} \end{align} Using \eqref{eq:bs_3}, we obtain \begin{align} \abs{I_1}&\le Ch^2\left(\norm{\nabla\tilde u \cdot \nabla\tilde z}_{W^{1,1}(\widetilde \Omega)} + \norm{\tilde f \tilde z}_{W^{1,1}(\widetilde \Omega)} \right) \nonumber \\ &\le Ch^2\norm{u}_{H^3(\Omega)}\norm{z}_{H^2(\Omega)}. \label{eq:prthm1l2_2} \end{align} Since \[ \langle w,v \rangle_E -\langle w,v \rangle_{\pi(E)} = \int_E(wv-(w\circ\pi)(v\circ\pi))\,d\gamma_h + \int_E (w\circ\pi)(v\circ\pi)\,d\gamma_h - \int_{\pi(E)} wv\,d\gamma, \] the boundary-skin estimates yield \begin{align} \abs{I_2} &\le Ch^2 \norm{(-\gamma h\nabla\tilde u\cdot \nu+\tilde u -\tilde u_0 - \varepsilon \tilde g)\tilde z}_{L^1(\Gamma)}\nonumber\\ &\hspace{1em}+C\sum_{E\in\mathcal{E}_h}\Bigl\{\norm{-\gamma h_E \nabla \tilde u \cdot (\nu \circ \pi) + \tilde u - \tilde u_0 -\varepsilon \tilde g}_{L^2(E)}\norm{\tilde z-\tilde z\circ\pi}_{L^2(E)}\nonumber\\ &\hspace{-1em}+\norm{(-\gamma h_E \nabla \tilde u \cdot (\nu \circ \pi) + \tilde u - \tilde u_0 -\varepsilon \tilde g)-(-\gamma h_E (\nabla \tilde u\circ \pi) \cdot (\nu \circ \pi)+\tilde u\circ \pi -\tilde u_0 \circ \pi -\varepsilon \tilde g\circ \pi)}_{L^\infty(E)}\norm{\tilde z}_{L^1(E)}\Bigr\}\nonumber\\ &\hspace{1em}+Ch\norm{\nabla \tilde u \cdot (\nu_h - \nu\circ\pi)\tilde z}_{L^1(\Gamma_h)}\nonumber\\ &\le Ch^2(\norm{u}_{H^2(\Omega)}+\norm{\tilde u_0}_{H^1(\widetilde \Omega)}+\norm{\tilde g}_{H^1(\widetilde \Omega)})\norm{z}_{H^1(\Omega)} \nonumber\\ &\hspace{1em}+ Ch^2(\norm{u}_{W^{2,\infty}(\Omega)}+\norm{\tilde u_0}_{W^{1,\infty}(\widetilde \Omega)}+\norm{\tilde g}_{W^{1,\infty}(\widetilde \Omega)})\norm{z}_{H^2(\Omega)}+Ch^2\norm{\nabla \tilde u\tilde z}_{L^1(\Gamma_h)}\nonumber\\ &\le C h^2(\norm{u}_{H^4(\Omega)}+\norm{\tilde u_0}_{H^3(\widetilde \Omega)}+\norm{\tilde g}_{H^3(\widetilde \Omega)})\norm{z}_{H^2(\Omega)}.\label{eq:prthm1l2_3} \end{align} Similarly, we obtain \begin{align} \abs{I_3}\le Ch^2(\norm{u}_{H^2(\Omega)}+\norm{\tilde u_0}_{H^1(\widetilde
\Omega)}+\norm{\tilde g}_{H^1(\widetilde \Omega)})\norm{z}_{H^2(\Omega)}.\label{eq:prthm1l2_4} \end{align} Consequently, we deduce \begin{align} \nabs{a_h^\#(\tilde u,\tilde z)-l_h(\tilde z)} \le C h^2(\norm{u}_{H^4(\Omega)}+\norm{\tilde u_0}_{H^3(\widetilde \Omega)}+\norm{\tilde g}_{H^3(\widetilde \Omega)})\norm{z}_{H^2(\Omega)}.\label{eq:prthm1l2_5} \end{align} Applying \eqref{eq:approx_consistency} with $w = \tilde z-\Pi_\#\tilde z$, we have \begin{align} \nabs{a_h^\#(\tilde u,\Pi_\#\tilde z)-l_h(\Pi_\#\tilde z)} &\le \nabs{a_h^\#(\tilde u,\tilde z-\Pi_\#\tilde z)-l_h(\tilde z-\Pi_\#\tilde z)} + \nabs{a_h^\#(\tilde u,\tilde z)-l_h(\tilde z)} \nonumber \\ &\le C h^2(\norm{u}_{H^4(\Omega)}+\norm{\tilde u_0}_{H^3(\widetilde \Omega)}+\norm{\tilde g}_{H^3(\widetilde \Omega)})\norm{z}_{H^2(\Omega)}. \end{align} Finally, we have \[ \norm{\tilde u -u_\#}_{L^2(\Omega_h\setminus\Omega)} \le Ch\norm{\tilde u -u_\#}_{\#,h}, \] and we obtain the estimate \eqref{eq:errorl2}. \end{proof} \section{Numerical examples} \label{sec:numerical_example} In this section, we present numerical results that verify the validity of our error estimates. We consider the domain $\Omega = \{x \in \mathbb{R}^2 \colon \abs{x} < 1\}$. First, we confirm the estimates of Theorem \ref{thm1}. We consider the exact solution $u(x_1,x_2) = \sin(x_1)\sin(x_2)$ and the corresponding $f$, $g$, and $u_0$, and let $\varepsilon=1$. We calculate the energy error $\norm{\tilde u - u_\#}_{\#,h}$ and the $L^2$ error $\norm{\tilde u - u_\#}_{L^2(\Omega_h)}$. Figure \ref{fig:nsym_p1_err} shows the results, where the left graph \subref{fig:n_nsym_p1} corresponds to Nitsche's method and the right one \subref{fig:dg_nsym_p1} to the DG method. As observed in the figure, the convergence orders are almost $O(h)$ for both norms. Thus, the optimal convergence rates are actually observed and the estimates of Theorem \ref{thm1} are confirmed.
\begin{figure}[bt] \centering \subfloat[][Nitsche's method]{\includegraphics[scale = 0.7]{n_nsym_p1.pdf}\label{fig:n_nsym_p1}} \subfloat[][DG method]{\includegraphics[scale = 0.7]{dg_nsym_p1.pdf}\label{fig:dg_nsym_p1}} \caption{Energy errors $\norm{\tilde u - u_\#}_{\#,h}$ and $L^2$ errors $\norm{\tilde u - u_\#}_{L^2(\Omega_h)}$} \label{fig:nsym_p1_err} \end{figure} Next, we consider the exact solution $u(x_1,x_2) = \sqrt{(x_1+1)^2+x_2^2}$ and the corresponding $f$, $g$, and $u_0$, again with $\varepsilon=1$. In this case, $u \in H^2(\Omega)$ but $u \not\in H^4(\Omega)$; that is, the assumption of Theorem \ref{thm2} does not hold. Figure \ref{fig:n_sqrt_err} illustrates the results of Nitsche's method. As shown in the figure, the convergence orders are almost $O(h)$ for both the energy and $L^2$ errors. This result is consistent with Theorem \ref{thm2}. \begin{figure}[bt] \centering \includegraphics[scale = 0.7]{n_sqrt_p1.pdf} \caption{Energy errors and $L^2$ errors of $\sqrt{(x_1+1)^2+x_2^2}$}\label{fig:n_sqrt_err} \end{figure} Finally, we verify the estimates of Corollary \ref{cor:p2}. We consider the exact solution $u(x_1,x_2) = \exp(-x_1^2-x_2^2)$, which is radially symmetric. Figure \ref{fig:n_sym_err} illustrates the results of Nitsche's method, where the left graph \subref{fig:n_sym_p1} uses the P1 element and the right one \subref{fig:n_sym_p2} uses the P2 element. We observe that the convergence orders are almost $O(h^2)$ for both the energy and $L^2$ errors with the P2 element. Therefore, the estimates of Corollary \ref{cor:p2} are confirmed. The results for the non-symmetric solution $u(x_1,x_2) = \sin(x_1)\sin(x_2)$ are shown in Figure \ref{fig:n_nsym_err}. As observed in the figure, the order is almost $O(h^{1.5})$ for the energy error with the P2 element.
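The convergence orders quoted above are read off as slopes of the error curves in log--log scale between consecutive mesh sizes. A minimal sketch of this computation (the mesh sizes and error values below are hypothetical, not the data of the experiments):

```python
import math

def observed_order(h_coarse, err_coarse, h_fine, err_fine):
    # slope of the error curve between two mesh sizes in log-log scale
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# hypothetical errors decaying like C*h (energy norm) and C*h**2 (L2 norm)
print(observed_order(0.1, 0.050, 0.05, 0.0250))  # 1.0
print(observed_order(0.1, 0.010, 0.05, 0.0025))  # 2.0
```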
\begin{figure}[bt] \centering \subfloat[][P1 elements]{\includegraphics[scale = 0.7]{n_sym_p1.pdf}\label{fig:n_sym_p1}} \subfloat[][P2 elements]{\includegraphics[scale = 0.7]{n_sym_p2.pdf}\label{fig:n_sym_p2}} \caption{Energy errors and $L^2$ errors of $u(x_1,x_2) = \exp(-x_1^2-x_2^2)$} \label{fig:n_sym_err} \end{figure} \begin{figure}[bt] \centering \subfloat[][P1 elements]{\includegraphics[scale = 0.7]{n_nsym_p1.pdf}\label{fig:n_nsym_p1_2}} \subfloat[][P2 elements]{\includegraphics[scale = 0.7]{n_nsym_p2.pdf}\label{fig:n_nsym_p2}} \caption{Energy errors and $L^2$ errors of $u(x_1,x_2) = \sin(x_1)\sin(x_2)$} \label{fig:n_nsym_err} \end{figure} \section{Conclusion} \label{sec:conclusion} We have presented energy and $L^2$ error estimates for Nitsche's method and the DG method applied to the Poisson equation with the RBC in a smooth domain. The estimates are optimal for the P1 element, and the energy estimate is also optimal for the P2 element when the solution is radially symmetric. In future work, we will extend these results to generalized RBCs and dynamic boundary conditions. \section*{Acknowledgment} The first author was supported by Program for Leading Graduate Schools, MEXT, Japan. The second author was supported by JST CREST Grant Number JPMJCR15D1, Japan, and JSPS KAKENHI Grant Number 15H03635, Japan. \bibliographystyle{plain}
https://arxiv.org/abs/1508.04439
Zeros of harmonic polynomials, critical lemniscates and caustics
In this paper we sharpen significantly several known estimates on the maximal number of zeros of complex harmonic polynomials. We also study the relation between the curvature of critical lemniscates and its impact on geometry of caustics and the number of zeros of harmonic polynomials.
\section{Introduction} We concern ourselves in this paper with complex \emph{harmonic polynomials}, i.e., polynomials which admit a decomposition $$h(z) = p(z) + \overline{q(z)},$$ where $p=p_n$ and $q=q_m$ are analytic polynomials of degrees $n$ and $m$ respectively. An interesting open question is, for given $n$ and $m$, to find the maximal number of solutions to the equation $h(z)=0$, i.e., extending the Fundamental Theorem of Algebra to harmonic polynomials, see \cite{harmonious} and references therein. Throughout the paper we assume $n>m$, since the case $n=m$ could give an infinite solution set. Wilmshurst \cite{Wil,Wil98}, in his doctoral thesis, proved the following: \begin{thm-others}[Wilmshurst] The equation $h(z) = 0$ has at most $n^2$ solutions. \end{thm-others} \noindent The proof of this result readily follows from Bezout's theorem \cite{Coo}. Seeking to improve on this bound, Wilmshurst conjectured that the equation $h(z) = 0$ has at most $3n-2 + m(m-1)$ solutions. Khavinson and \'Swi\c{a}tek \cite{K-S} confirmed Wilmshurst's conjecture when $m=1$ using complex dynamics, and the bound was shown to be sharp by Geyer \cite{Gey}. However, as was shown by Lee, Lerario, and Lundberg \cite{LLL}, the conjecture is not true in general, for example, when $m=n-3$. Also see \cite{HLLM} for many more counterexamples. Our first theorem bounds the number of roots off the coordinate axes for harmonic polynomials with real coefficients. \begin{thm}\label{thm:1} For a harmonic polynomial $h(z) = p_n(z) + \overline{q_m(z)}$ with real coefficients, the equation $h(z) = 0$ has at most $n^2 - n$ solutions that satisfy $({\rm Re}\,z) ({\rm Im}\, z)\neq 0$. \end{thm} Our next two theorems provide lower bounds on the maximal number of roots. \begin{thm}\label{thm:2} For all $n>m$, there exists a harmonic polynomial $h(z) = p_n(z) + \overline{q_m(z)}$ with at least $3n-2$ roots.
\end{thm} \begin{thm}\label{thm:3} For all $n>m$, there exists a harmonic polynomial $h(z) = p_n(z) + \overline{q_m(z)}$ with at least $m^2+m+n$ roots. \end{thm} The above three theorems will be proved in Section \ref{sec:3proofs}. \vspace{0.3cm} \noindent{\it Remark.} \begin{itemize} \item[(i)] The reason why in Theorem \ref{thm:1} we only consider roots off the coordinate axes is the following. Consider $p(z)=z^n + (z-1)^n$ and $q(z)=z^n - (z-1)^n$. Then $h(z)=p(z)+\overline{q(z)}$ has $n^2$ roots, including the root at $0$ with multiplicity $n$. In fact, this is the polynomial that Wilmshurst used (with a slight perturbation to split the multiple root at the origin) to show that the maximal bound $n^2$ is sharp. This example shows that Theorem \ref{thm:1} is sharp. Theorem \ref{thm:1} also yields that a harmonic polynomial with real coefficients and with the maximal ($n^2$) number of roots must have at least $n$ roots on the coordinate axes, as in the above example. \item[(ii)] Theorem \ref{thm:2} and Theorem \ref{thm:3} complement each other. Theorem \ref{thm:2} is stronger than Theorem \ref{thm:3} when $m^2+m+2<2n$ and Theorem \ref{thm:3} is stronger than Theorem \ref{thm:2} when $m^2+m+2>2n$. Also, Theorem \ref{thm:2} is not at all trivial, since the argument principle for harmonic functions \cite{DHL,SS02,ST20} only yields that $h$ has at least $n$ zeros (see Fact \ref{fact:n} in Section \ref{sec:proof1}). \item[(iii)] Note that, compared to Wilmshurst's conjecture, Theorem \ref{thm:3} undercounts the number of roots by $2(n-m-1)$. See Section \ref{sec:further} for the in-depth discussion. \end{itemize} Theorem \ref{thm:3} yields the following important corollary. \begin{cor}\label{cor-Znm} Let $Z_{n,m}$ denote the maximal possible number of zeros of $h=p_n+\overline{q_m}$. Then, for any fixed integer $a\geq 1$, we have $$\limsup_{n\to\infty}\frac{Z_{n,n-a}}{n^2}=1.
$$ More generally, if $m= \alpha n + o(n)$ with $0\leq\alpha\leq 1$, we have $$\limsup_{n\to\infty}\frac{Z_{n,m}}{n^2}\geq\alpha^2. $$ \end{cor} \begin{proof} From Theorem \ref{thm:3} we have $Z_{n,m}\geq n^2+2(1-a) n + a(a-1)$ for the first case, and $Z_{n,m}\geq \alpha^2 n^2+o(n^2)$ for the second case. \end{proof} This corollary yields that the maximal number of roots is asymptotically given by Wilmshurst's theorem. This answers a question posed by the first author more than a decade ago. This corollary also complements the estimates on the expected number of zeros of Gaussian random harmonic polynomials obtained by Li and Wei in \cite{Li-Wei} and, more recently, by Lerario and Lundberg in \cite{LL}. For example, for $m=\alpha n$, $\alpha <1$, the expected number of zeros is $\sim n$ (\cite{Li-Wei}) for the Gaussian harmonic polynomials and $\sim c_\alpha n^{3/2}$ \cite{LL} for the truncated Gaussian harmonic polynomials. Yet, Corollary \ref{cor-Znm} yields that among all harmonic polynomials the maximal number $\sim \alpha^2 n^2$ of zeros occurs with positive probability, thus expanding further the results in \cite{Ble} for $m=1$. To state our last theorem, we have to introduce the set \begin{equation}\label{eq:Omega} \Omega = \{z : \lvert f(z)\rvert <1 \}, \end{equation} where $$f(z)=\frac{p'_n(z)}{q_m'(z)}.$$ Recall that the mapping $z\mapsto h(z)$ is {\em sense-reversing} precisely on $\Omega$, i.e., the Jacobian of the map $h$ is negative on $\Omega$. The boundary $\partial\Omega$ is the {\em lemniscate} $\{z:|f(z)|=1\}$. Each connected component of $\Omega$ must contain at least one critical point of $p_n$. Indeed, if there were a connected component {\em without} a critical point, then, applying the maximum modulus principle to $f(z)$ and $1/f(z)$ with $|f(z)|=1$ on the boundary of that component, we would conclude that $f$ is a unimodular constant, a contradiction. This implies that there are at most $\deg p_n'=n-1$ connected components of $\Omega$.
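As a concrete illustration of these notions (our own numerical sketch, not part of the paper): for $h(z)=z^2+\overline z$, i.e., $p_2(z)=z^2$ and $q_1(z)=z$, one checks by hand that the roots are $0$ and the three cube roots of $-1$, so the sharp bound $3n-2=4$ for $m=1$ is attained already at $n=2$; here $f(z)=2z$, $\Omega$ is the disk $|z|<1/2$, and it contains the single critical point of $p_2$ together with the single sense-reversing root. A minimal Newton iteration for the underlying real $2\times 2$ system recovers this count:

```python
# Sketch (ours): roots of h(z) = z^2 + conj(z), i.e. p(z) = z^2, q(z) = z
# (n = 2, m = 1).  Algebraically, h = 0 forces |z| in {0, 1} and z^3 = -1 on
# the unit circle, so the roots are 0, -1, exp(+-i*pi/3).
p, dp = (lambda z: z * z), (lambda z: 2 * z)
q, dq = (lambda z: z), (lambda z: 1 + 0j)
h = lambda z: p(z) + q(z).conjugate()

def newton(z, steps=60):
    """Newton's method for the R-linear system dh = p'(z) dz + conj(q'(z)) conj(dz)."""
    for _ in range(steps):
        a, b = dp(z), dq(z).conjugate()     # dh = a*dz + b*conj(dz)
        det = abs(a) ** 2 - abs(b) ** 2     # Jacobian |p'|^2 - |q'|^2
        if abs(det) < 1e-12:
            return None                     # started too close to the lemniscate |f| = 1
        w = h(z)
        z = z - (a.conjugate() * w - b * w.conjugate()) / det
    return z if abs(h(z)) < 1e-10 else None

roots = set()
for i in range(25):                         # starting points on a grid in [-2, 2]^2
    for j in range(25):
        z = newton(complex(-2 + i / 6, -2 + j / 6))
        if z is not None:
            roots.add((round(z.real, 6), round(z.imag, 6)))

roots = [complex(x, y) for x, y in roots]
sense_reversing = [z for z in roots if abs(dp(z)) < abs(dq(z))]  # zeros lying in Omega
print(len(roots), len(sense_reversing))     # 4 roots, exactly one of them in Omega
```

The resulting counts, $N_+=3$ sense-preserving and $N_-=1$ sense-reversing zeros, agree with the identity $n=N_+-N_-$ of Fact \ref{fact:n} below.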
For $m=1$ (when Wilmshurst's conjecture was proven to hold \cite{K-S}), Wilmshurst guessed (\cite{Wil}, p.73) that the following might be true: ``{\em In each component of $\Omega$ where $h(z)=p_n(z)+\overline z$ is sense reversing, the behavior of $h$ will be essentially determined by the $\overline z$ term so there will only be one zero of $h$}''. From this, the maximal number of roots (i.e., $3n-2$ for $m=1$) would be obtained by the argument principle when each connected component of $\Omega$ contains a zero. The statement above is true when the component of $\Omega$ is {\em convex}, according to the following result (see \cite{Wil}, p.75). \begin{thm-others}[Sheil-Small] If $g(z)$ is an analytic function in a {\em convex} domain $D$ and $|g'(z)|<1$ in $D$, then $\overline z+g(z)$ is injective on $D$. \end{thm-others} Note that the theorem relates the geometry of critical lemniscates to the number of zeros, because $D$ can contain at most one zero of the function $\overline z+g(z)$ if the latter is injective. In a {\em non-convex} component of $\Omega$, it is possible to have more than one zero of $h(z)$. In \cite{Wil} an example is given where a non-convex component of $\Omega$ contains {\em two} critical points of $p_n$ and {\em two} zeros of $h(z)=p_n(z)+\overline z$. This is not surprising because each component of $\Omega$ can have a zero of $h$ and, therefore, one can have two zeros in a component by merging two components into one. The resulting component then has two critical points of $p_n$. It was, however, not clear whether a connected component containing a single critical point of $p_n$ could possibly have more than one zero of $h$. Here we show that it {\em is indeed} possible and also present a necessary and sufficient condition for having more than one zero of $h$ in a connected component of $\Omega$ that contains a {\em single} zero of $f$ (i.e., a single critical point of $p_n$). The theorem holds for general $m$ and $n$.
\begin{thm}\label{thm-3} Let $n>m$. Let $D$ be a connected component of $\Omega$ (defined by \eqref{eq:Omega}) containing exactly one zero of $f$. On a smooth part of the curve $q_m(\partial D)$ (the image of $\partial D$ under $q_m$) let $\kappa$ be the curvature of $q_m(\partial D)$ with respect to the counterclockwise arclength parametrization of $\partial D$. Then, the following are equivalent: \begin{itemize} \item[i)] Let $f(z)=p_n'(z)/q_m'(z)$. There exists $z\in\partial D$ such that $$\frac{\kappa(z)}{|f'(z)|}<-\frac{1}{2}.$$ \item[ii)] There exist $\theta\in {\mathbb R}$ and $A\in{\mathbb C}$ such that $$\widetilde p_n(z)-\overline{q_m(z)}$$ has at least two zeros in $D$, where $\widetilde p_n(z)= e^{i\theta} p_n(z)+A$. \end{itemize} \end{thm} \begin{remark}{\hspace{0.1cm}} \begin{itemize} \item[(a)] Note that $\kappa$ is the curvature of $\partial D$ when $q_m(z)=z$. In this case, the theorem tells us exactly how ``non-convex'' the domain $D$ needs to be in order to have multiple zeros of $h$, improving upon Sheil-Small's theorem. \item[(b)] In statement (ii) of Theorem \ref{thm-3}, we note that $|\widetilde p'_n(z)|=| p'_n(z)|$. The corresponding lemniscate, $\{z:|\widetilde p_n'(z)/q'_m(z)|=1\}$, is, therefore, the same for all $\theta$ and $A$. \end{itemize} \end{remark} For $(n,m)=(4,1)$, we provide an example where a component of $\Omega$ with one critical point of $p_n$ contains two zeros of $h$, see Figure \ref{fig:41concave}. Note that the roots appear where $\Omega$ is (slightly) concave. The example is produced based on the discussion following our final Theorem \ref{lem-no-inflec} in Section \ref{sec:nonconvex} regarding the shapes of critical lemniscates.
\begin{figure}[h] \includegraphics[width=0.6\textwidth]{41-lemniscate.png}\qquad \includegraphics[width=0.255\textwidth]{41-lemniscate-zoom.png} \caption{\label{fig:41concave} Roots (dots) for $h(z)=-(0.934124 +0.356949 i) \overline{z}+(0.0581623 +0.156514 i) z^4+(0.354765-0.131835 i) z^3-(0.116325 +0.313028 i) z^2-(1.06429 -0.395504 i) z+(0.247627\, +0.020994 i)$. The stars are for the zeros of $f$ and the shaded region is $\Omega$. The right picture is a magnified view of the top component of $\Omega$. } \end{figure} \noindent{\bf Acknowledgement.} This work resulted from REU group discussions that also included Prof. Catherine B\'en\'eteau and Brian Jackson. The second author was supported by Simons Collaboration Grants for Mathematicians. The first and third authors were supported by the USF Proposal Enhancement Grant No. 18326 (2015), PIs: D. Khavinson and R. Teodorescu. We are greatly indebted to the referee for pointing out an error in the initial version of Theorem \ref{thm:1} that was due to a misinterpretation of D. Bernstein's theorem. \section{Proofs of Theorems \ref{thm:1}, \ref{thm:2} and \ref{thm:3}}\label{sec:3proofs} \subsection{Proof of Theorem \ref{thm:1}} Given a bivariate real polynomial $P$ defined by \begin{equation*} P(x,y)=\sum\limits_{i=0}^n \sum\limits_{j=0}^m a_{ij}x^iy^j,\quad a_{ij}\in\mathbb{R}, \end{equation*} the {\em Newton polygon ${\mathcal N}_P$ of $P$} is the convex hull of $N_P\subset {\mathbb R}^2$, where $N_P= \left\{(i,j) : a_{ij}\neq0\right\}$. Given two polynomials $P$ and $Q$ with Newton polygons $\mathcal{N}_P$ and $\mathcal{N}_Q$, let $\mathcal{M}_{P,Q}$ be the {\em Minkowski sum} of $\mathcal{N}_P$ and $\mathcal{N}_Q$, defined by $$\mathcal{M}_{P,Q} = \{(i_1+i_2,j_1+j_2) \vert (i_1,j_1) \in \mathcal{N}_P, (i_2,j_2) \in \mathcal{N}_Q\}.$$ Let $[X]$ denote the area of a set $X \subset \mathbb{R}^2$. We then define the \emph{mixed area} of $P$ and $Q$ as $[\mathcal{M}_{P,Q}] - [\mathcal{N}_P] - [\mathcal{N}_Q]$.
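The mixed area is straightforward to compute by machine. The sketch below (our own illustration; the helpers \texttt{hull}, \texttt{area}, and \texttt{mixed\_area} are ours, not the paper's) verifies the classical sanity check that generic polynomials of total degrees $2$ and $3$ have mixed area $2\cdot 3=6$, matching B\'ezout's bound, and also evaluates the triangle and trapezoid that appear in the proof below for $n=6$:

```python
from itertools import product

def hull(points):
    """Andrew's monotone chain convex hull, returned counterclockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def area(poly):
    """Shoelace area of a polygon whose vertices are listed in order."""
    return abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                   - poly[(i + 1) % len(poly)][0] * poly[i][1]
                   for i in range(len(poly)))) / 2

def mixed_area(P, Q):
    """[Minkowski sum] - [N_P] - [N_Q] for point sets P, Q."""
    A, B = hull(P), hull(Q)
    M = hull([(p[0] + q[0], p[1] + q[1]) for p, q in product(A, B)])
    return area(M) - area(A) - area(B)

# Generic total degrees 2 and 3: mixed area = 2 * 3, i.e. Bezout's bound.
deg2 = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
deg3 = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
print(mixed_area(deg2, deg3))        # 6.0

# The triangle and trapezoid arising below for even n, here n = 6:
n = 6
A = [(0, 0), (0, n), (n, 0)]
B = [(0, 1), (0, n - 1), (1, n - 1), (n - 1, 1)]
print(mixed_area(A, B))              # n^2 - n = 30
```

Both printed values agree with the hand computation $[\mathcal M]-[\mathcal A]-[\mathcal B]=n^2-n$ carried out in the proof of Theorem \ref{thm:1} below.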
\begin{thm-others}[D. Bernstein; cf. \cite{Ber3}, or the original articles \cite{Ber,Ber2}] The number of solutions to the system of polynomial equations $P(x,y)=Q(x,y)=0$ satisfying $x y \neq 0$ does not exceed the mixed area of $P$ and $Q$. \end{thm-others} For any analytic polynomials $p_n(z)$ and $q_m(z)$ of degrees $n$ and $m$, we have $$h(x+iy)=p_n(x+iy)+\overline{q_m(x+iy)} = A(x,y)+iB(x,y),$$ where $A$ and $B$ are polynomials with real coefficients. Let $\mathcal{A}$ be the Newton polygon of $A(x,y)$ and $\mathcal{B}$ be the Newton polygon of $B(x,y)$. As we will see below, the polynomials $p_n$ and $q_m$ having real coefficients leads to a certain structure of ${\mathcal A}$ and ${\mathcal B}$. \begin{lemma}\label{lem:newton-poly} Given a generic $h(z)$ with only real coefficients, let ${\mathcal A}$ and ${\mathcal B}$ be defined as above. If $n$ is even, $\mathcal{A}$ is the isosceles triangle with vertex set $\{(0,0),(0,n),(n,0)\}$ and $\mathcal{B}$ is the trapezoid with vertex set $\{(0,1),(0,n-1),(1,n-1),(n-1,1)\}$. If $n$ is odd, $\mathcal{A}$ is the trapezoid with vertex set $\{(0,0),(0,n-1),(1,n-1),(n,0)\}$ and $\mathcal{B}$ is the isosceles triangle with vertex set $\{(0,1),(0,n),(n-1,1)\}$ (see Figure 2). \end{lemma} \begin{figure} \includegraphics[scale=0.2]{a6.jpg} \hspace{30pt} \includegraphics[scale=0.2]{b6.jpg}\\ \vspace{30pt} \includegraphics[scale=0.2]{a5.jpg}\hspace{30pt} \includegraphics[scale=0.2]{b5.jpg} \caption{The Newton polygons $\mathcal{A}$ (left) and $\mathcal{B}$ (right) for $n=6$ (top) and $n=5$ (bottom), respectively.} \end{figure} \begin{proof} The lemma follows directly from the fact that $$A(x,y) = \sum_{ \substack{j+k \leq n,\\ k\text{ is even}} } a_{jk}x^jy^k, \qquad B(x,y) = \sum_{\substack{j+k \leq n,\\ k\text{ is odd}}} b_{jk}x^jy^k , \qquad a_{jk}, b_{jk} \in \mathbb{R}.$$ The condition, $j+k\leq n$, on the summation indices follows from $\deg h=\deg p_n =n$.
The condition on $k$ to be even or odd comes from $p_n$ and $q_m$ having real coefficients. (Note that in the expansion of $p_n(x+iy)$ or, of $q_m(x+iy)$, the ``$i$'' comes only together with $y$ -- for a term of the form $x^j y^k$, the power of ``$i$'' is given by $k$, the power of $y$.) We need to show that all the coefficients that correspond to the extreme points of the convex hulls are nonvanishing. First of all, those that satisfy $j+k=n$: \begin{equation*} \begin{split} &\text{$a_{0,n}$, $a_{n,0}$, $b_{1,n-1}$ and $b_{n-1,1}$ for even $n$,}\\ &\text{$a_{1,n-1}$, $a_{n,0}$, $b_{0,n}$ and $b_{n-1,1}$ for odd $n$,} \end{split} \end{equation*} are all nonvanishing because $\deg p_n=n$. Using $h(z)=p_n(z)+\overline{q_m(z)}$, we note that $$h(x), \quad \frac{\partial}{\partial y}h(x+i y)\bigg|_{y=0}, \quad \frac{\partial^{n-1}}{\partial y^{n-1}}h(x+i y)\bigg|_{y=0} $$ are all nontrivial polynomials in $x$, due to the presence of the terms, $x^n$, $x^{n-1}y$ and $x y^{n-1}$ in $h(x+iy)$ respectively. So there exists $x_0\in\mathbb{R}$ such that, when $x=x_0$, none of the above three polynomials vanishes. Defining $$\widetilde h(z)= h(z+ x_0),\quad x_0\in\mathbb{R},$$ $\widetilde h$ is a real polynomial (of the same holomorphic and antiholomorphic degrees) and has the same number of zeros as $h$. Moreover, the coefficients at the extreme points are all nonvanishing because \begin{align*} a_{0,0}&= \widetilde h(0) = h(x_0), \\ b_{0,1}&=\frac{\partial}{\partial y}\widetilde h(i y)\bigg|_{y=0}=\frac{\partial}{\partial y}h(x_0+i y)\bigg|_{y=0}, \\ \heartsuit_{0,n-1}&=\frac{1}{(n-1)!}\frac{\partial^{n-1}}{\partial y^{n-1}}\widetilde h(i y)\bigg|_{y=0}=\frac{1}{(n-1)!}\frac{\partial^{n-1}}{\partial y^{n-1}}h(x_0+i y)\bigg|_{y=0}, \end{align*} where, in the last line, the symbol $\heartsuit$ stands for $a$, when $n$ is odd, and for $b$, when $n$ is even. \end{proof} Now we prove Theorem \ref{thm:1}. We divide the proof into two cases. 
\noindent \textbf{Case 1.} Let $n$ be even. From Lemma \ref{lem:newton-poly}, $\mathcal{A}$ is an isosceles triangle with vertex set $\{(0,0), (0,n),(n,0)\}$ and $\mathcal{B}$ is the trapezoid with vertex set $\{(0,1),(0,n-1),(1,n-1),(n-1,1)\}$. Now, the Minkowski sum $\mathcal{M}$ of $\mathcal{A}$ and $\mathcal{B}$ is a trapezoid with vertex set $\{(0,1), (0,2n-1), (1,2n-1),(2n-1,1)\}$. The mixed area is, then, $$[\mathcal{M}] - [\mathcal{A}] - [\mathcal{B}] = (2n^2 - 2n)-\frac{n^2}{2} - \frac{n^2-2n}{2}= n^2 - n.$$ \noindent \textbf{Case 2.} Let $n$ be odd. From the lemma, $\mathcal{A}$ is the trapezoid with vertex set $\{(0,0),(0,n-1),(1,n-1),(n,0)\}$ and $\mathcal{B}$ is the isosceles triangle with vertex set $\{(0,1),(0,n),(n-1,1)\}$. The Minkowski sum $\mathcal{M}$ of $\mathcal{A}$ and $\mathcal{B}$ is then the trapezoid with vertex set $\{(0,1),(0,2n-1),(1,2n-1),(2n-1,1)\}$. The mixed area is $$[\mathcal{M}] - [\mathcal{A}] - [\mathcal{B}] = (2n^2 - 2n) - \frac{n^2-1}{2} - \frac{(n-1)^2}{2}=n^2-n. $$ In both cases the mixed area equals $n^2-n$, so Theorem \ref{thm:1} follows by applying Bernstein's theorem to the system $A(x,y)=B(x,y)=0$. \qed \subsection{Proof of Theorem \ref{thm:2}}\label{sec:proof1} For a harmonic function $h(z) = p(z) + \overline{q(z)}$, we say that $h$ is \emph{sense-preserving} at a point $z$ if the Jacobian of $h$, $$\det\left[\begin{array}{cc} \partial_{x} {\rm Re} \,h(x+iy ) & \partial_{y} {\rm Re} \,h(x+iy ) \\ \partial_{x} {\rm Im} \,h(x+iy ) & \partial_{y} {\rm Im} \,h(x+iy ) \end{array} \right]_{x+iy=z} = \lvert p'(z) \rvert ^2 - \lvert q'(z) \rvert ^2,$$ is positive, and \emph{sense-reversing} at $z$ if $\overline{h(z)}$ is sense-preserving at $z$. Otherwise, we say $h$ is \emph{singular} at $z$. We also say that $h$ is \emph{regular} if all the zeros of $h$ are either sense-preserving or sense-reversing. For an oriented, closed curve $\Gamma$ such that a continuous function $F$ does not vanish on $\Gamma$, we denote by $\Delta_{\Gamma}\arg F(z)$ the increment in the argument of $F$ along $\Gamma$. The following is well known.
\begin{thm-others}[The argument principle for harmonic functions \cite{DHL,SS02}] Let $H$ be a harmonic function in a Jordan domain $D$ with boundary $\Gamma$. Suppose $H$ is continuous in $\overline D$ and $H\neq 0$ on $\Gamma$. Suppose $H$ has no singular zeros in $D$, and let $N=N_+ - N_-$, where $N_+$ and $N_-$ are the number of sense-preserving zeros and sense-reversing zeros of $H$ in $D$, respectively. Then, $\Delta_{\Gamma}\arg H(z)=2\pi N$. \end{thm-others} The next fact \cite{K-S} then follows by applying the argument principle to a circle of sufficiently large radius, on which $|p_n|\gg|q_m|$. \begin{fact}\label{fact:n} Let $h=p_n+\overline{q_m}$ be regular. Let $N_+$ be the number of sense-preserving zeros of $h$ and $N_-$ be the number of sense-reversing zeros. Then, \begin{equation*} n=N_+ - N_-. \end{equation*} \end{fact} An elegant proof (due to Donald Sarason) of the following lemma can be found in \cite{K-S}. \begin{lemma}\label{lem:dense} If $p(z)$ is a polynomial of degree greater than 1, then the set of complex numbers $c$ for which $p(z) +\overline{q(z)} - c$ is regular is open and dense in $\mathbb{C}$. \end{lemma} For $m=1$, Bleher et al. \cite{Ble} proved that in the space $\mathbb{C}^{n+1}$ of harmonic polynomials $p_n(z)+\overline{z}$ of degree $n\geq 2$ the set of ``simple polynomials'', i.e., regular polynomials with $k$ zeros, is a non-empty open subset of $\mathbb{C}^{n+1}$ if and only if $k = n,n+2,\dots, 3n-4,3n-2$, and that the non-simple polynomials are contained in a real algebraic subset of $\mathbb{C}^{n+1}$. The following fact is noted in \cite{LSL}, Theorem 3.2. \begin{lemma}\label{lem:regular} If the function $h(z) = p_n(z) + \overline{z}$ has $3n-2$ zeros, then $h(z)$ is regular. \end{lemma} \begin{proof} Assume $h$ is not regular and has exactly $3n-2$ roots.
The number of roots that are not sense-preserving is at most $n-1$ because each of those roots attracts a critical point of $p_n$ under the iteration of $z\mapsto p_n(z)$, cf. Proposition 1 in \cite{K-S}. Therefore, the number of sense-preserving roots must be at least $2n-1$. For the non-singular roots $z_j$, there exists $\epsilon>0$ such that the disks, $B_{\epsilon}(z_j) = \{ z : \lvert z - z_j \rvert < \epsilon \}$, do not intersect each other, do not intersect the singular set $\{z : |p'(z)|=1\}$, and $\Delta_{\partial B_\epsilon(z_j)}\arg h =\pm 2\pi$. Defining $$\delta= \min_{j}\, \min_{z\in \partial B_\epsilon(z_j)} |h(z)|, $$ for any $c\in{\mathbb C}$ with $\lvert c \rvert < \delta$, we have $\Delta_{\partial B_\epsilon(z_j)}\arg h =\Delta_{\partial B_\epsilon(z_j)}\arg (h - c) $ and, using the argument principle for harmonic functions, the function $h(z) - c$ has exactly one zero in each $B_{\epsilon}(z_j)$. Suppose $z_0$ is a singular zero of $h$. Since $h(z_0) = 0$ and $h$ is continuous near $z_0$, there is an $\eta>0$ such that $B_{\eta}(z_0)$ does not intersect any $B_\epsilon(z_j)$ and $$\text{$\lvert h(z) \rvert < \delta$ for all $z \in B_{\eta}(z_0)$.}$$ Further, the set $B_{\eta}(z_0)$ intersects the sense-preserving region, since otherwise $\log|p'(z)|$ would be constant over $B_\eta(z_0)$ by the maximum modulus principle. Let $\zeta$ be a sense-preserving point in $B_{\eta}(z_0)$. We can set $c=h(\zeta)$ since $\lvert h(\zeta) \rvert < \delta$, and $h(z)-c=h(z)-h(\zeta)$ must have zeros in each $B_{\epsilon}(z_j)$ and at $\zeta\in B_\eta(z_0)$. Consequently, $h(z)-h(\zeta)$ has at least $2n$ sense-preserving zeros. By Lemma \ref{lem:dense}, we can choose $\zeta$ such that $h(z)-h(\zeta)$ is a regular polynomial, which contradicts the result of Khavinson and \'Swi\k{a}tek \cite{K-S} that a regular polynomial can have at most $2n-1$ sense-preserving roots. \end{proof} We now complete the proof of Theorem \ref{thm:2}.
\begin{proof} Let $p_n(z)$ be an analytic polynomial such that the equation $p_n(z) + \overline{z}=0$ has $3n-2$ solutions. By Lemma \ref{lem:regular} the polynomial $h(z)=p_n(z)+\overline{z}$ is regular. Let $z_0$ be a zero of $h$. One can find a circle, $\Gamma$, centered at $z_0$ with radius $\epsilon$ such that $h$ does not vanish on $\Gamma$ and $\Delta_{\Gamma}\arg h=\pm 2\pi$. As in a standard proof of Rouch\'e's theorem, taking $\delta$ such that $$0<\delta< \frac{\min_{z\in\Gamma}|h(z)|}{\max_{z\in\Gamma}(|z|^m+1)},$$ the perturbed mapping $z \mapsto h(z) + \delta \overline{z}^m$ preserves the winding number of $h(\Gamma)$; in particular, the perturbed mapping also vanishes in the interior of $\Gamma$. Applying the same argument to all $3n-2$ zeros of $h$, we can choose $\delta$ such that $h(z)+\delta\overline z^m$ has (at least) $3n-2$ zeros. Setting now $q_m(z)=\delta z^m+z$ completes the proof. \end{proof} \begin{remark} This proof suggests that if the equation $p_n(z) + \overline{q_{m-1}(z)}=0$ has $k$ solutions, then there exists a harmonic polynomial $p_n(z) + \overline{q_m(z)}$ with (at least) $k$ zeros. However, a proof of this, following the above argument, would require that $p_n(z) + \overline{q_{m-1}(z)}$ be regular. \end{remark} \subsection{Proof of Theorem \ref{thm:3}} Let us sketch the procedure (similar to \cite{LLL,HLLM}) that produces examples of harmonic polynomials with a large number of roots (cf. Figure \ref{fig:4252}). Fix $n$ and $m<n$. Let \begin{equation*} \begin{split} S(z)&=(z-a)^{n-1}(z+(n-1)a), \\ T(z)&=(z-b)^{m+1}(z^{n-m-1}+t_{n-m-2}z^{n-m-2}+\cdots+t_0). \end{split} \end{equation*} The $n-m-1$ complex parameters $t_j$ in $T(z)$ are uniquely determined by the condition that $S(z) -T(z)$ is a polynomial of degree $m$.
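Determining the $t_j$ amounts to solving a triangular linear system: matching the coefficients of $z^{n-1},\dots,z^{m+1}$ in $S-T$ gives $n-m-1$ linear equations. A short sketch (our own code, not the authors'; the parameters are the $n=4$, $m=2$, $a=0$, $b=1.1-0.1i$ choice of Figure \ref{fig:4252}):

```python
import numpy as np

# Sketch (ours): solve for the t_j's so that deg(S - T) <= m.
n, m = 4, 2
a, b = 0.0, 1.1 - 0.1j

def power_of_linear(c, k):
    """Coefficients, highest degree first, of (z - c)^k."""
    out = np.array([1.0 + 0j])
    for _ in range(k):
        out = np.convolve(out, [1.0, -c])
    return out

S = np.convolve(power_of_linear(a, n - 1), [1.0, (n - 1) * a])  # (z-a)^{n-1}(z+(n-1)a)
base = power_of_linear(b, m + 1)                                # (z-b)^{m+1}

k = n - m - 1                                  # number of unknowns t_0, ..., t_{k-1}
C = np.zeros((n + 1, k + 1), dtype=complex)    # multiplication-by-base matrix
for j in range(k + 1):
    C[j:j + len(base), j] = base               # column j: coefficients of base * z^{k-j}

# T = C @ u with u = (1, t_{k-1}, ..., t_0); kill the coefficients of
# z^{n-1}, ..., z^{m+1} in S - T (the z^n coefficients already match):
M, rhs = C[1:1 + k, 1:], S[1:1 + k] - C[1:1 + k, 0]
t = np.linalg.solve(M, rhs)
T = C @ np.concatenate(([1.0], t))

print(np.max(np.abs((S - T)[: n - m])))        # ~0: deg(S - T) <= m
```

Since the cofactor of $T$ is monic, the matrix $M$ here is lower triangular with ones on the diagonal, so the $t_j$ are always uniquely determined, in line with the statement above.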
Then we choose $a\in{\mathbb C}$ and $b\in{\mathbb C}$ by hand to maximize the number of intersections between the two sets: \begin{equation*} \Gamma_T=\{z\,|\, {\rm Im}\,T(z)=0\},\quad \Gamma_S=\{z\,|\, {\rm Re}\,S(z)=0\}. \end{equation*} These intersections are the roots of the equation $p_n(z)+\overline{q_m(z)}=0$ where \begin{equation*} p_n(z)=S(z) + T(z),\quad q_m(z)=S(z)-T(z), \end{equation*} since \begin{equation*} p_n(z)+\overline{q_m(z)}=2\,{\rm Re}\,S(z)+2\,i\, {\rm Im}\,T(z) . \end{equation*} \begin{figure}[h] \mbox{\includegraphics[width=0.45\textwidth]{42.png}} \hspace{5px} \mbox{\includegraphics[width=0.45\textwidth]{52.png}} \caption{\label{fig:4252} Curves $\Gamma_T$ (black) and $\Gamma_S$ (red) for $n=4, 5$ and $m=2$. The shaded region is $\Omega$. For $n=4$ (left) we choose $a=0$ and $b=1.1-0.1i$ to produce 12 roots. For $n=5$ (right) we choose $a=1.5-0.5i$ and $b=-0.05+0.92i$ to produce 15 roots. } \end{figure} Theorem \ref{thm:3} can be stated equivalently as follows: \begin{thm-others}[Theorem \ref{thm:3}] For $a=0$ and for a generic choice of $b\in{\mathbb C}$, the equation $p_n+\overline{q_m}=0$ defined in terms of $S$ and $T$ (as above) has at least $m^2+m+n$ roots. \end{thm-others} \begin{proof} Choose $a=0$ and let $\Gamma_S$ be the set of rays emanating from the origin and extending to infinity, i.e., $\{\infty\times e^{(\frac{1}{2}+k)\pi i/n}:k=0,1,\cdots,2n-1\}$. Note that $\Gamma_T$ has $2m+2$ curved rays emanating from $b$ where every ray eventually approaches infinity in the directions of $\{\infty\times e^{l\pi i/n}\}_{l}$ such that: i) different rays correspond to different values of $l$, and ii) $l$ is chosen in a subset, which we denote by $N_{2m+2}$, containing $2m+2$ numbers from $\{0,1,\cdots,2n-1\}$. Assuming that $b\notin\Gamma_S$ and that none of the rays hits any critical point of $T(z)$ except at $z=b$, those curved rays do not intersect each other.
Let $W_l$ ($l=0,1,\cdots,2n-1$) denote the connected components (that we will call ``sectors'') in $\mathbb{C}\setminus\Gamma_S$ that contain the asymptotic direction $\infty\times e^{l\pi i/n}$. A simple geometric consideration tells us that the number of intersections between $\Gamma_S$ and a curved ray starting from $b\in W_{l_1}$ and continuing into $W_{l_2}$ without passing through the origin is at least $$\min(|l_2-l_1|,2n-|l_2-l_1|).$$ This means that the minimal possible number of intersections between $\Gamma_S$ and ``the $2m+2$ curved rays in $\Gamma_T$'' is $$\sum_{l_2\in N_{2m+2} }\min(|l_2-l_1|,2n-|l_2-l_1|)\geq 0+2(1+2+\cdots+ m ) + (m+1) = (m+1)^2.$$ The remaining part of $\Gamma_T$ (i.e., the part that is not connected to $b$) approaches infinity in $2n-2m-2$ different sectors among the $W_l$ that are not already taken by the rays from $b$. Assuming that there are no critical points of $T$ in $\Gamma_T$ except the one at $b$, each curve in the ``remaining part of $\Gamma_T$'' must connect two sectors from the $2n-2m-2$ sectors such that, around $\infty$, each sector is ``hit'' by one curve only. Since there is at least one intersection between each curve and $\Gamma_S$, the minimal number of intersections between the ``remaining part of $\Gamma_T$'' and $\Gamma_S$ is $n-m-1$ and the minimal number of intersections between $\Gamma_T$ and $\Gamma_S$ is given by $(m+1)^2+n-m-1=m^2+m+n$. \end{proof} \subsection{A remark on Wilmshurst's conjecture}\label{sec:further} \begin{figure}[h] \begin{center}\includegraphics[width=0.7\textwidth]{lemniscate.png} \caption{\label{fig:97} Roots and $\Omega$ (shaded region) for $(n,m)=(9,7)$, $a=0$ and $b=1.1-0.1 i$.} \end{center} \end{figure} Comparing with Wilmshurst's conjecture, Theorem \ref{thm:3} undercounts the number of roots by \begin{equation}\label{eq:diff} 3n-2+m(m-1) - (m^2+m+n) = 2(n-m-1). \end{equation} One can show that the corresponding lemniscate has $2m+2$ curves that connect $b$ and $a$ (cf. Figure \ref{fig:97}).
Numerical experiments suggest that the $2m+1$ regions in between these curves have, respectively, $$ m-1,m-2,\cdots,2,1,1,2,\cdots,m-1,m,$$ (counting from top to bottom in Figure \ref{fig:97}) roots. Among these exactly $1+2+\cdots+m=m(m+1)/2$ of them are found in the sense-reversing region (with $m$ components; the shaded part in Figure \ref{fig:97}) and, therefore, the total number of roots must be at least \begin{equation}\label{eq:mmn} 2\times \frac{m^2+m}{2}+ n = m^2+ m + n \end{equation} by Fact \ref{fact:n}, giving the same number as in Theorem \ref{thm:3}. Since there can be at most $n-1$ components in $\Omega$, there can be $n-m-1$ extra components in the sense-reversing region, and these components are not connected to the point $b$. Wilmshurst's count is obtained {\em when each of these components has exactly one root}. This increases the total number of {\em sense-reversing roots} by $n-m-1$ and, hence, increases the total number of roots by $2(n-m-1)$, cf. \eqref{eq:mmn}, which is precisely Wilmshurst's count, cf. \eqref{eq:diff}. The various counterexamples studied in \cite{LLL,HLLM} indicate that the $n-m-1$ ``extra components'' of $\Omega$ (that are not connected to $b$) can have more than one root in each component. For example, Figure \ref{fig:97} shows that there are two roots inside the component of $\Omega$ that is not connected to $b$. For $m=n-2$, our numerical experiment supports the structure shown in Figure \ref{fig:97}: there are $m^2+m+n$ roots that are counted in terms of the $2m+2$ curves connecting $a$ and $b$, and the excessive zeros are twice the number of zeros found in the component of $\Omega$ that is not connected to $b$. Choosing $a=0$ and $b=e^{i\pi/(2n)}+\epsilon$ ($\epsilon\neq 0$ is needed so that the origin is not a root), we found that the number of ``excessive zeros'', i.e.
(the total number of zeros)$-(m^2+m+n)$, increases by 4 whenever $n$ hits the numbers: $$ 7, 15, 22, 30, 37, 45, 52, 60, 68, 75, 83, 90, 98, 105, 113, 120, 128, 136,\cdots. $$ For example, for $n=100$, there are 13 numbers from the list below 100, and the total number of zeros is $4\times 13 +(m^2+m+n) = 52+9802$, exceeding Wilmshurst's count by $4\times 13 -2(n-m-1) = 52 - 2 = 50 $. These experiments prompt us to suggest the following conjecture. \bigskip \noindent{\bf Conjecture. } {\em When $m=n-2$, the maximal number of roots of $h=p_n+\overline{q_m}$ is given by \begin{equation*} n^2 - \frac{3}{2} n + o(n) \end{equation*} as $n$ grows to $\infty$ (which is larger than Wilmshurst's count of $n^2-2n+4$). } \section{Geometry of caustics: Proof of Theorem \ref{thm-3}} As before, let $\Omega$ be defined by \eqref{eq:Omega}. \begin{lemma}\label{lem:D-univalent} Setting $$f(z)=p'_n(z)/q'_m(z),$$ let $D$ be a connected component of $\Omega$ with exactly $k$ (counting the multiplicities) zeros of $f$ in $D$. Then, $f:D\to {\mathbb D}$, where ${\mathbb D}$ is the unit disk, is a branched covering of degree $k$. \end{lemma} \begin{proof} By definition, $D$ is a connected component of $f^{-1}({\mathbb D})$. By the argument principle, for any $w\in {\mathbb D}$, the number of preimages of $w$ under $f$ inside $D$ is given by the winding number, $\frac{1}{2\pi}\Delta_{\partial D} \arg (f(z)-w)$. Since $f(\partial D)\subset\partial\mathbb D$ and cannot ``backtrack'' on $\partial D$, the winding number does not depend on $w\in{\mathbb D}$ and it is $k$ at $w=0$ because $D$ contains exactly $k$ zeros of $f$. \end{proof} When $D$ contains exactly one critical point of $p_n$, $f:D\to{\mathbb D}$ is a univalent map.
In this case, let $\eta:[0,2\pi)\to \partial D$ be the parametrization of $\partial D$ given by \begin{equation*} \eta(\theta) = f^{-1}(e^{i\theta}), \end{equation*} where $f^{-1}$ on $\partial{\mathbb D}$ is obtained by the continuous extension of $f^{-1}:{\mathbb D}\to D$. This parametrization of $\partial D$ is given by the harmonic measure of $D$ (normalized by the factor $2\pi$) with respect to the pole at the zero of $f$, i.e., \begin{equation}\label{harm-para} d\theta = d\arg f(z),\quad z\in\partial D. \end{equation} This viewpoint can be generalized when there are $k$ zeros of $f$ in $D$. In this case, the same equation \eqref{harm-para} defines the parametrization, $\eta:[0,2\pi k)\to \partial D$, by the harmonic measure of the $k$-sheeted disk. On a smooth part of the curve $\partial D$ (i.e., where $f'\neq 0$), let $v(z)$ be the tangent vector of the curve $\partial D$ at $z\in \partial D$ given by \begin{equation}\label{eq:v} v(z)= \frac{d \eta(\theta)}{d\theta}=i\frac{f(z)}{f'(z)}. \end{equation} Let us consider the image of $\partial D$ under $h$. For each $z\in\partial D$ with $f'(z)\neq 0$ we obtain a tangent vector of the curve $h(\partial D)$ at $h(z)$ as follows (here $v$ is the tangent vector to $\partial D$, cf. \eqref{eq:v}, and $|f|=1$ on $\partial D$): \begin{equation}\label{eq:V} V(z) = \Big (v(z)\partial +\overline{v(z)}\, \overline \partial\Big) h(z) = v(z) p'_n(z) -\overline{v(z)} \,\overline{q'_m(z)}, \end{equation} assuming this expression does not vanish. The next two lemmas deal with the geometry of the caustic, which we now define. \vspace{0.3cm} \noindent{\bf Definition.} {\it The image $h(\partial \Omega)$ of the critical lemniscate is called the caustic. } \begin{lemma}\label{lem-caustic} Let $D$ be a component of $\Omega$ with exactly $k$ zeros (counting multiplicity) of $f$, and let $\partial D$ be parametrized by $\eta:[0,2\pi k)\to\partial D$ such that $ d\arg f(\eta(\theta)) =d\theta$.
At $z=\eta(\theta)\in\partial D$ where $f'(z)\neq 0$ and ${\rm Im}\big(v(z) q'_m(z) \sqrt{f(z)}\big)\neq 0$, we have \begin{equation*} \frac{d \arg V(\eta(\theta))}{d\theta}=\frac{1}{2}. \end{equation*} \end{lemma} In other words, the caustic, away from the possible singularities, has constant curvature with respect to the special parametrization defined above. \begin{proof} Note that, for $z\in \partial D$, the two terms in the right hand side of \eqref{eq:V} have the same modulus (i.e., $|p'_n|=|q'_m|$ on $\partial D$). We may rewrite $V$ as \begin{equation}\label{eq:V-cusp} \begin{split} V(z)&=\sqrt{\frac{p'_n(z)}{q'_m(z)}} |q'_m(z)| \left(v(z) \sqrt{\frac{p'_n(z)}{\overline{q'_m(z)}}} -\overline{v(z)} \,\sqrt{\frac{\overline{q'_m(z)}}{p'_n(z)}}\right) \\&= \sqrt{\frac{p'_n(z)}{q'_m(z)}} |q'_m(z)| 2i\, {\rm Im}\left(v(z) \sqrt{\frac{p'_n(z)}{\overline{q'_m(z)}}}\right) \\&= 2i\,\sqrt{f(z)} \, {\rm Im}\left(v(z) q'_m(z) \sqrt{f(z)}\right). \end{split} \end{equation} (Note that the result is independent of the branch of the square root function as $\sqrt{f(z)}$ appears twice.) If ${\rm Im}\Big(v(z) q'_m(z) \sqrt{f(z)}\Big)\neq 0$, then we have, modulo $\pi$, \begin{equation*} \arg V(z) =\frac{1}{2}\arg f(z) + \frac{\pi}{2}=\frac{-i}{2}\log f(z) +\frac{\pi}{2}, \end{equation*} where, in the last equality, we used again that $|f(z)|=1$ at $z\in\partial D$. We have: \begin{equation}\label{eq:V-sqrt-f} \frac{d\arg V(\eta(\theta))}{d\theta} = \frac{-i}{2}\frac{f'}{f}\frac{d\eta(\theta)}{d\theta}=\frac{1}{2}, \end{equation} where we used \eqref{eq:v} in the last equality. \end{proof} The next lemma characterizes the ``possible singularities''. \begin{lemma}\label{lem:caustic-cusp} The only singularities of the curve $h(\partial D)$ are cusps. 
When $z_0\in\partial D$ is not in the branch cut of $\sqrt{f}$ there is a cusp at $h(z_0)$ if and only if the mapping from $\partial D$ to ${\mathbb R}$ given by $$z\mapsto {\rm Im}\Big(v(z) q'_m(z) \sqrt{f(z)}\Big) \text{~~ on }\partial D,$$ changes sign across $z_0$. When $z_0$ is in the branch cut of $\sqrt{f}$, the cusp occurs at $h(z_0)$ if and only if the mapping, $$z\mapsto {\rm Re}\left(\frac{\sqrt{f(z)}}{\sqrt{f(z_0)}}\right) \,{\rm Im}\Big(v(z) q'_m(z) \sqrt{f(z)}\Big) \text{~~ on }\partial D,$$ changes sign across $z_0$. \end{lemma} \begin{proof} First note that, away from the branch cuts of $\sqrt f$, the set $$\partial D\cap\{z:{\rm Im}\big(v(z) q'_m(z) \sqrt{f(z)}\big)=0\}$$ is finite. Indeed, otherwise, as a level set of a harmonic function, it would contain an arc. By \eqref{eq:V}, $V=0$ over that arc and, therefore, $h$ maps the arc into a point. Yet $h$, a harmonic polynomial of degree $n$, has finite valence $\leq n^2$, a contradiction. If $f'\neq 0$ on $\partial D$ then all the functions (i.e., $f(z)$, $v(z)$ and $q'_m(z)$) that appear in the expression \eqref{eq:V-cusp} for the tangent vector, $V$, of the caustic are smooth along $\partial D$. Therefore, from \eqref{eq:V-cusp}, a singularity of $h(\partial D)$ occurs only when $V$ changes sign; hence $h(\partial D)$ must have a cusp there. The lemma follows immediately. If $f'(z_0)=0$ for $z_0\in\partial D$ (i.e., it is a critical point of the lemniscate), then $|v(z_0)|=\infty$ by \eqref{eq:v}. However, the argument of $V$ (i.e., $\arg V$) is still either continuous or jumps by $\pi$, and a cusp occurs when $V$ changes direction, i.e., when $\arg V$ jumps by $\pi$. \end{proof} If $f'(z_0)=0$ for $z_0\in\partial D$, multiple components of $\Omega$ merge together at $z_0$. It turns out that the image of $\partial \Omega$ under $h$ has an interesting structure, as we will see below.
\vspace{0.3cm} \noindent{\it Remark (a critical point on the critical lemniscate).} Let $z_0\in\partial\Omega$ satisfy $f'(z_0)=\cdots=f^{(k-1)}(z_0)=0$ and $f^{(k)}(z_0)\neq 0$, i.e., \begin{equation*} f(z)=f(z_0) + \frac{f^{(k)}(z_0)}{k!}(z-z_0)^{k}+{\mathcal O}\big((z-z_0)^{k+1}\big). \end{equation*} Taking the absolute value and rotating if necessary, we obtain \begin{equation}\label{eq:fabs} |f(z)|=|f(z_0)| + \frac{|f(z_0)|}{k!}{\rm Re}\left[\frac{f^{(k)}(z_0)}{f(z_0)}(z-z_0)^{k}\right]+{\mathcal O}\big((z-z_0)^{k+1}\big). \end{equation} Locally, the lemniscate consists of $2k$ curved rays meeting at $z_0$ and it divides the plane into $2k$ wedge-shaped sections with the angle $\pi/k$ at $z_0$. Among them, $k$ sections in total, equally spaced, are included in $\Omega$. More precisely, for a sufficiently small disk $B$ centered at $z_0$, $B\cap\Omega$ has exactly $k$ components, and $B\cap\partial\Omega$ consists of $2k$ curves emanating from $z_0$ with the angular directions given by \begin{equation*} \theta_j= \frac{1}{k}\arg\frac{f(z_0)}{f^{(k)}(z_0)}+\frac{\pi}{k}\bigg(j+\frac{1}{2}\bigg),\quad j=0,1,\cdots,2k-1. \end{equation*} Note that the sections between $\theta_{2\ell}$ and $\theta_{2\ell+1}$ for every $\ell=0,1,\cdots, k-1$ are in $\Omega$. Therefore, taking the single component (let us denote it by $D_\ell$) of $B\cap\Omega$ between $\theta_{2\ell}$ and $\theta_{2\ell+1}$, the tangent vector of its boundary changes angular direction from $\theta_{2\ell+1}-\pi$ to $\theta_{2\ell}$ at $z_0$.
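For a concrete instance of this local picture, take $f(z)=z^3-1$ (this is $p_n'/q_m'$ for $h(z)=z^4/4-z-\overline z$, the $\theta=0$ example of Figure \ref{fig:petals} below): $z_0=0$ lies on the lemniscate, $f'(0)=f''(0)=0$ and $f'''(0)\neq 0$, so $k=3$. The sketch below is our own check (the sampling radius is arbitrary); it confirms that a small circle around $z_0$ meets $\Omega$ in $k=3$ equally spaced sectors separated by $2k=6$ boundary rays.

```python
import numpy as np

f = lambda z: z**3 - 1     # k = 3: f'(0) = f''(0) = 0, f'''(0) = 6

# z0 = 0 lies on the lemniscate |f| = 1
assert abs(abs(f(0)) - 1) < 1e-12

# Sample |f| on a small circle around z0; Omega = {|f| < 1}
eps = 1e-3
m = 6000
phis = np.linspace(0, 2*np.pi, m, endpoint=False)
inside = np.abs(f(eps*np.exp(1j*phis))) < 1

rays = int(np.sum(inside != np.roll(inside, 1)))   # boundary rays from z0
print(rays)                                        # 2k = 6
# The three sectors of Omega are centered at angles 0, 2*pi/3, 4*pi/3
print(bool(inside[0]), bool(inside[m//3]), bool(inside[m//6]))  # True True False
```

Here $|f(\varepsilon e^{i\varphi})|<1$ reduces, to leading order, to $\cos 3\varphi>0$, which is exactly the three-sector configuration described above.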
Using \eqref{eq:V-cusp}, assuming $z_0$ is not in the branch cut of $\sqrt{f}$, the corresponding tangent vector of $h(\partial D_\ell)$ changes the direction by $\pi$ at $h(z_0)$ (i.e., has a cusp singularity) if and only if \begin{equation*} {\rm Im}\left(e^{i(\theta_{2\ell+1}-\pi)}q'_m(z_0)\sqrt{f(z_0)}\right) {\rm Im}\left(e^{i\theta_{2\ell}}q'_m(z_0)\sqrt{f(z_0)}\right)<0 \end{equation*} or, equivalently, \begin{equation*} \theta_{2\ell}+\arg\left(q'_m(z_0)\sqrt{f(z_0)}\right)\in \left(0,\frac{k-1}{k}\pi\right)\cup\left(\pi, \frac{2k-1}{k}\pi\right). \end{equation*} When $k$ is even (respectively, odd) there can be at most two (respectively, one) values of $\ell$ that do {\em not} satisfy the above condition. For those values of $\ell$, $h(\partial D_\ell)$ is smooth at $h(z_0)$, while for the other values of $\ell$, $h(\partial D_\ell)$ has a cusp at $h(z_0)$. In Figure \ref{fig:petals}, the middle picture shows a caustic where the critical point $z_0=0$ corresponds to the three cusps at the origin, and the last picture shows a caustic where one component (red) of the lemniscate maps $z_0$ to a regular point of the caustic. \begin{figure} \includegraphics[width=0.2\textwidth]{lemn-3.pdf} \includegraphics[width=0.35\textwidth]{caustic-3.pdf} \includegraphics[width=0.35\textwidth]{caustic-31.pdf} \caption{\label{fig:petals} Lemniscate (left) and caustics, $h(z)=\frac{z^4}{4}-z-e^{i\theta}\overline z$ for $\theta=0$ and $\theta=\frac{\pi}{6}$.} \end{figure} \begin{lemma}\label{lem:cuspnumber} Let $D$ be a {\em simply connected} component of $\Omega$ with exactly $k$ zeros of $f$. The number of cusps in $h(\partial D)$ is odd (resp. even) when $k$ is odd (resp. even) and, moreover, is $\geq 2+k$. \end{lemma} \begin{proof} According to Lemma \ref{lem:caustic-cusp} a cusp in $h(\partial D)$ occurs whenever ${\rm Im}\big( v \,q'_m \sqrt f \big)$ changes sign.
To locate such events, it is convenient to define the function $\Psi:[0,2\pi k)\to\mathbb{R}$ by \begin{equation}\label{eq:Psi} \Psi:\theta\to \arg\Big( v(z) \,q'_m(z) \sqrt{f(z)} \Big)_{z=\eta(\theta)} = \arg v(\eta(\theta)) + \arg q'_m(\eta(\theta)) + \arg \sqrt{f(\eta(\theta))}. \end{equation} For the term $\arg q'_m(\eta(\theta))$, we choose the branch of the function ``$\arg$'' such that the term is continuous with respect to $\theta$. For the term $\sqrt {f(\eta(\theta))}$ we choose the branch of $\sqrt f$ and the ``$\arg$'' function (separately from the previous one), so that this term is continuous. Lastly, for the first term, $\arg v$, we choose the branch of ``$\arg$'' such that the term is a piecewise continuous function where the only discontinuities are at the critical points of the lemniscate (i.e., where $f'(\eta(\theta))=0$). At the discontinuity $\arg v$ jumps by a positive angle, $\frac{k-1}{k}\pi\in [\pi/2,\pi)$, from $\theta_{2\ell+1}-\pi$ to $\theta_{2\ell}$, using the notations in the above Remark ``a critical point on the critical lemniscate''; the angle can vary according to the order $k\geq 2$ of the critical point. As a consequence, $\Psi$ is a piecewise continuous function whose only discontinuities are jumps by angles in $[\pi/2,\pi)$ at the critical points of the lemniscate. For a cusp to occur at $\theta$, the condition in Lemma \ref{lem:caustic-cusp} gives that \begin{equation}\label{eq:graph} \bigcap_{\epsilon>0}(\Psi(\theta-\epsilon),\Psi(\theta+\epsilon)) \text{~ contains $ l\pi$ for some integer $l$.} \end{equation} Let us look at the three terms in $\Psi$ individually. The last term is linear in $\theta$ with the slope $1/2$ by \eqref{eq:V-sqrt-f} and, therefore, $\Delta \arg \sqrt {f} = k\pi$. (Here and below $\Delta$ stands for the increment over $\partial D$.) The second term, $\arg q'_m$, is a continuous function and $\Delta \arg q'_m=0$ because $q'_m$ does not vanish in the closure of $D$.
Lastly, the first term, $\arg v$, is a piecewise continuous function with $\Delta \arg v = 2\pi$ where the only discontinuities are at the critical points of the lemniscate (i.e., where $f'(\eta(\theta))=0$). Summing up, the total increment of $\Psi$ over $[0,2k\pi)$ is $(2+k)\pi$ and, therefore, there are at least $k+2$ points (and exactly $k+2$ if $\Psi$ is monotone) where the condition \eqref{eq:graph} holds. Suppose now that $\Psi$ is not monotone. For any point $\theta$ that satisfies the condition \eqref{eq:graph} there are two possibilities: as $\epsilon\to +0$, $$\text{(A):}~~ \Psi(\theta-\epsilon)<l\pi \text{ and } \Psi(\theta+\epsilon)>l\pi\quad\text{ or }\quad \text{(B):}~~\Psi(\theta-\epsilon)>l\pi \text{ and } \Psi(\theta+\epsilon)<l\pi. $$ Since the total increment of $\Psi$ is $(k+2)\pi$, the number of points satisfying the condition (A) must be larger than those satisfying (B) by exactly $k+2$. Therefore the number of points satisfying either (A) or (B) can exceed $k+2$ only by an even number. As a consequence, the number of cusps is always odd (respectively even) when $k$ is odd (respectively even). \end{proof} \begin{lemma}\label{lem:cusps-univalency} Let $D$ be a component of $\Omega$ with a single zero of $f$. If $h(\partial D)$ has only three cusps, then it is a Jordan curve. There are more than three cusps in $h(\partial D)$ (hence five or more according to Lemma \ref{lem:cuspnumber}) if and only if there exists a point with the winding number of $h(\partial D)$ bigger than one, i.e., there exists $p\in\mathbb{C}$ such that $\Delta_{\partial D}\arg (h(\cdot)-p)> 2\pi$. \end{lemma} \begin{proof} To prove the first statement of the lemma, it suffices to show that the three smooth (open) arcs between the three cusps do not intersect each other. Choosing any two arcs, there exists a cusp where the two arcs meet. Since the tangent vector rotates at most by $\pi$ over a smooth part of the curve (cf.
Lemma \ref{lem-caustic}), the two arcs can only get farther from each other as one moves along the arcs starting from the common cusp. (Note however that, if the constant curvature is bigger than $1/2$, $h(\partial D)$ can self-intersect.) To prove the second statement, assume $h(\partial D)$ has five or more cusps. Let $\Gamma$ be a smooth curve that is obtained by slightly ``smoothing'' all the cusps of $h(\partial D)$, see the left picture in Figure \ref{fig:smoothcusp}. In the limit of infinitesimally small smoothing, this deformation shows that the tangent vector of $\Gamma$ rotates by $$\frac{1}{2}\cdot 2\pi-\#\{\text{cusps}\}\pi\leq \pi -5\pi = - 4\pi$$ over the whole curve. \begin{figure} \includegraphics[width=0.6\textwidth]{smoothing-cusp.pdf} \caption{\label{fig:smoothcusp} Cusp in caustic (left). Dotted line shows a ``cusp after smoothing''. Relative winding numbers in a region around an intersection point (right).} \end{figure} Therefore $\Gamma$ cannot be a (smooth) Jordan curve and $h(\partial D)$ must have a self-intersection (that cannot be removed by a small perturbation, such as the one in the right picture of Figure \ref{fig:smoothcusp}). Let us define the orientation on $h(\partial D)$ by the orientation inherited from $\partial D$. For each point $p\notin h(\partial D)$, one can consider the winding number of $h(\partial D)$ around $p$. The winding number of $h(\partial D)$ on the left side (with respect to the orientation) is bigger than the one on the right side by $+1$. Then, in a neighborhood of a self-intersection, there must be a pair of regions where the winding numbers differ by $2$, see the right picture in Figure \ref{fig:smoothcusp}. \end{proof} Now we prove Theorem \ref{thm-3}. To have more than three cusps, the function $\Psi$ that is defined in the proof of Lemma \ref{lem:cuspnumber} must be non-monotonic, i.e., there must be a point where the slope of the graph of $\Psi$ is negative:
\begin{equation}\label{eq:negative-slope} \frac{d\Psi(\theta)}{d\theta}= \frac{d\arg v(\eta(\theta))}{d\theta} + \frac{d\arg q'_m(\eta(\theta))}{d\theta} + \frac{1}{2}<0. \end{equation} Note that the first two terms in the left hand side give the curvature of $q_m(\partial D)$ with respect to the parametrization by $\theta$ or, equivalently, $\kappa/|f'|$ where $\kappa$ is the curvature with respect to the arclength on $\partial D$ (cf. \eqref{eq:v}): $$ \left|\frac{d\eta(\theta)}{d\theta}\right| \kappa= \frac{d}{d\theta}\arg\frac{d q_m(\eta(\theta))}{d\theta} =\frac{d\arg v(\eta(\theta))}{d\theta} + \frac{d\arg q'_m(\eta(\theta))}{d\theta}. $$ So the inequality \eqref{eq:negative-slope} is exactly the one that appears in i) of Theorem \ref{thm-3}. When the graph of $\Psi$ is non-monotonic, one can create more solutions of $\Psi \equiv 0 \pmod{\pi}$ by vertically shifting this graph. This is done by the following transformation, which preserves the lemniscate $\{z:|p'_n(z)/q'_m(z)|=1\}$: $$p_n \to e^{i\varphi} p_n, \quad \varphi\in{\mathbb R}.$$ Under this transformation, $$\arg(v\,q'_m\sqrt f) \rightarrow \arg(v\,q'_m\sqrt f)+\varphi/2$$ because of the term $\sqrt f$. This means that, for some $\varphi$, $h(\partial D)$ can have five or more cusps. By Lemma \ref{lem:cusps-univalency}, there exists a point, say $p\in {\mathbb C}$, where $\Delta_{\partial D}\arg(h(\cdot)-p)>2\pi$. This means that $$\widetilde p_n(z) = e^{i\varphi} p_n(z) - p $$ satisfies $\Delta_{\partial D}\arg\widetilde h>2\pi$ where $\widetilde h = \widetilde p_n +\overline{q_m}$ and, therefore, $\widetilde h$ has at least two roots inside $D$. This ends the proof of Theorem \ref{thm-3}. \section{Construction of non-convex lemniscate}\label{sec:nonconvex} Here we explain how we found the example in Figure \ref{fig:41concave}. \begin{thm}\label{lem-no-inflec} Let $f$ be a rational function and $\Omega=\{z:|f(z)|<1\}$. Let $z_0 \in \partial \Omega$, $f'(z_0) = 0$ and $(\log f)''(z_0) \neq 0$.
In a neighborhood of $z_0$, $\partial\Omega$ is a union of two smooth arcs that intersect perpendicularly at $z_0$. Moreover, $z_0$ is not an inflection point of either arc (i.e., the curvature is strictly positive or strictly negative at $z_0$) if and only if \begin{equation*} {\rm Re}\bigg( e^{\pm i\pi/4} \frac{(\log f)'''(z_0)}{(\log f)''(z_0)^{3/2}}\bigg)\neq 0. \end{equation*} \end{thm} \vspace{0.3cm} \begin{cor}\label{cor:concave} If $z_0\in\partial\Omega$ satisfies the assumptions in Theorem \ref{lem-no-inflec} and, furthermore, is not an inflection point of $\partial\Omega$, then there is a connected component of $\Omega$ whose boundary curvature (with respect to the counterclockwise orientation) converges to a negative value at $z_0$. \end{cor} \begin{proof} According to Theorem \ref{lem-no-inflec} there exists an open neighborhood $U$ of $z_0$ such that $\Omega\cap U$ is the disjoint union of two domains, see Figure \ref{fig:two-cases} for an illustration ($\Omega\cap U$ is the shaded region). When the ``no inflection'' condition in Theorem \ref{lem-no-inflec} is satisfied, the local configuration of $\Omega$ has two possibilities: $\Omega\cap U$ is the disjoint union of a convex and a non-convex domain (left in Figure \ref{fig:two-cases}) or, $\Omega\cap U$ is the disjoint union of two non-convex domains (right in Figure \ref{fig:two-cases}). In either case, there exists a connected component of $\Omega\cap U$ whose boundary has a negative curvature in a neighborhood of $z_0$. \end{proof} \begin{figure} \begin{centering} \includegraphics[width=0.55\textwidth]{two-cases.pdf} \caption{\label{fig:two-cases} Lemniscate near $z_0$, see text below Theorem \ref{lem-no-inflec}.} \end{centering} \end{figure} \noindent{\em Remark.} { To relate Theorem \ref{lem-no-inflec} to our harmonic polynomials, we take $f(z) = p_n'(z)/q_m'(z)$.
For $m=1$, as $z$ approaches $z_0$ along the ``concave'' arc, $\kappa(z)<0$ by Corollary \ref{cor:concave} and $|f'(z)|\to 0$. Therefore, one obtains that $\kappa(z)/|f'(z)|\to -\infty$ as $z$ approaches $z_0$ along the ``concave'' arc. Hence we obtain the ``non-convex'' domain $D$ as is needed to satisfy the condition $\kappa/|f'|<-1/2$ in Theorem \ref{thm-3}. } \vspace{0.3cm} In the rest of this section, we prove Theorem \ref{lem-no-inflec}. Let $f$ be a meromorphic function. Let $\gamma:\mathbb{R}\to\mathbb{C}$ be the parametrization of a curve in $\mathbb{C}$ such that $\gamma(0) = z_0$ and $f'(z_0) = 0$. Taking derivatives of $\log \lvert f(\gamma(t)) \rvert$ at $t=0$, and using $f'(z_0)=0$, we obtain: $$\frac{\textrm{d}}{\textrm{d}t} \log \lvert f(\gamma(t)) \rvert \bigg|_{t=0}=0,$$ $$\frac{\textrm{d}^2}{\textrm{d}t^2} \log \lvert f(\gamma(t)) \rvert\bigg|_{t=0} = \textrm{Re}\Big(\dot\gamma(0)^2 (\log f)''(z_0)\Big),$$ $$\frac{\textrm{d}^3}{\textrm{d}t^3}\log \lvert f(\gamma(t)) \rvert \bigg|_{t=0}= \textrm{Re}\Big(3\dot\gamma(0)\ddot\gamma(0) (\log f)''(z_0)+\dot\gamma(0)^3(\log f)'''(z_0)\Big).$$ Above, $\dot\gamma$ and $\ddot\gamma$ denote, respectively, the derivative and the second derivative of $\gamma$ with respect to $t$. Consider the Taylor expansion of $\log|f(\gamma(t))|$ around $t=0$. We have \begin{equation*}\begin{split} \log|f(\gamma(t))| &= \textrm{Re}\Big(\dot\gamma(0)^2 (\log f)''(z_0)\Big)\frac{t^2}{2} \\&+\textrm{Re}\Big(3\dot\gamma(0)\ddot\gamma(0) (\log f)''(z_0)+\dot\gamma(0)^3(\log f)'''(z_0)\Big)\frac{t^3}{6} + {\mathcal O}(t^4). \end{split} \end{equation*} If $\gamma$ parametrizes the lemniscate $\partial \Omega$ passing through $z_0$, where $\Omega$ is as in \eqref{eq:Omega}, then we have $\log|f(\gamma(t))|=0$ identically for all $t$, and all the coefficients in the above Taylor series must vanish. The first coefficient vanishes when \begin{equation}\label{eq:use1} \dot\gamma(0)^2 = i c_1 \overline{(\log f)''(z_0)},\quad c_1\in\mathbb{R}.
\end{equation} We may assume that $\gamma$ is the arclength parametrization, i.e., $|\dot\gamma|\equiv 1$. Then $\ddot\gamma\overline{\dot\gamma}$ is purely imaginary and \begin{equation}\label{eq:use2}\ddot\gamma(0)=i c_2\, \dot\gamma(0) \end{equation} for some real constant $c_2$. The second coefficient in the Taylor series simplifies to \begin{equation*} \begin{split} & \textrm{Re}\Big(3\dot\gamma(0)\ddot\gamma(0) (\log f)''(z_0)+\dot\gamma(0)^3(\log f)'''(z_0)\Big) \\ & =- 3c_1c_2 |(\log f)''(z_0)|^2 -c_1 {\rm Im}\Big( \dot\gamma(0)\overline{(\log f)''(z_0)} (\log f)'''(z_0)\Big), \end{split} \end{equation*} using \eqref{eq:use1} and \eqref{eq:use2}. Requiring this expression to vanish gives \begin{equation*} c_2 = - \frac{1}{3}{\rm Im}\bigg( \dot\gamma(0)\frac{(\log f)'''(z_0)}{(\log f)''(z_0)}\bigg). \end{equation*} Summarizing, we obtain \begin{equation*} \gamma(t) = z_0 + \dot\gamma(0) t - \frac{i\,\dot\gamma(0)}{6}{\rm Im}\bigg( \dot\gamma(0)\frac{(\log f)'''(z_0)}{(\log f)''(z_0)}\bigg) t^2 + {\mathcal O}(t^3), \end{equation*} where, using \eqref{eq:use1}, \begin{equation*} \dot\gamma(0)=\pm e^{\pm i\pi/4}i \frac{\overline{\sqrt{(\log f)''(z_0)}}}{|\sqrt{(\log f)''(z_0)}|}=\pm e^{\pm i\pi/4}i \frac{|\sqrt{(\log f)''(z_0)}|}{\sqrt{(\log f)''(z_0)}}. \end{equation*} The two signs can be chosen arbitrarily and independently. This proves that there are two different arcs orthogonal at $z_0$. The curve $\partial\Omega$ does not have an inflection point at $z_0$ when the quadratic term in the Taylor expansion of $\gamma(t)$ is non-zero, i.e., \begin{equation*} {\rm Re}\bigg( e^{\pm i\pi/4} \frac{(\log f)'''(z_0)}{(\log f)''(z_0)^{3/2}}\bigg)\neq 0. \end{equation*} The proof is now complete.
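Both the derivative formulas and the conclusion about perpendicular arcs can be spot-checked numerically. The sketch below is our own test, with $f(z)=z^2-1$ (so $f'(0)=0$, $|f(0)|=1$, $(\log f)''(0)=-2$, $(\log f)'''(0)=0$, all computed by hand) and arbitrarily chosen $\alpha$ and $b$; it verifies the second- and third-derivative formulas by finite differences and then checks that the lemniscate $|f|=1$ meets a small circle around $z_0=0$ in four points along the tangent directions predicted by the formula for $\dot\gamma(0)$.

```python
import math, cmath
import numpy as np

# Test data (our choice): f(z) = z^2 - 1, z0 = 0
f = lambda z: z*z - 1
logf2, logf3 = -2.0 + 0j, 0.0 + 0j       # (log f)''(0), (log f)'''(0), by hand

# --- finite-difference check of the derivative formulas along a test curve ---
alpha, b = 0.3, 0.2 + 0.1j               # arbitrary
gamma = lambda t: cmath.exp(1j*alpha)*t + b*t*t
g = lambda t: math.log(abs(f(gamma(t))))  # log|f(gamma(t))|, g(0) = 0
d1, d2 = cmath.exp(1j*alpha), 2*b         # gamma'(0), gamma''(0)

pred2 = (d1**2*logf2).real                            # Re(gamma'(0)^2 (log f)'')
pred3 = (3*d1*d2*logf2 + d1**3*logf3).real
h = 1e-2
num2 = (g(h) - 2*g(0) + g(-h))/h**2
num3 = (g(2*h) - 2*g(h) + 2*g(-h) - g(-2*h))/(2*h**3)

# --- the two arcs of the lemniscate meet z0 along the predicted directions ---
s = cmath.sqrt(logf2)
dirs = sorted(cmath.phase(sgn*cmath.exp(eps*1j*math.pi/4)*1j*s.conjugate()/abs(s))
              % (2*math.pi) for sgn in (1, -1) for eps in (1, -1))
r = 1e-3
phis = np.linspace(0, 2*math.pi, 40000, endpoint=False)
vals = np.abs(f(r*np.exp(1j*phis))) - 1
cross = phis[np.sign(vals) != np.sign(np.roll(vals, -1))]
print(abs(num2 - pred2), abs(num3 - pred3))   # both small
print(len(cross), dirs)                       # 4 crossings, near pi/4 + j*pi/2
```

For this $f$ the predicted directions are $\pm\pi/4$ and $\pm 3\pi/4$: two arcs crossing perpendicularly, as the theorem asserts (here $(\log f)'''(0)=0$, so $z_0$ is a degenerate, inflection, case).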
https://arxiv.org/abs/1508.04439
Zeros of harmonic polynomials, critical lemniscates and caustics
In this paper we sharpen significantly several known estimates on the maximal number of zeros of complex harmonic polynomials. We also study the relation between the curvature of critical lemniscates and its impact on geometry of caustics and the number of zeros of harmonic polynomials.
https://arxiv.org/abs/1611.02238
Equivalence of Szegedy's and Coined Quantum Walks
Szegedy's quantum walk is a quantization of a classical random walk or Markov chain, where the walk occurs on the edges of the bipartite double cover of the original graph. To search, one can simply quantize a Markov chain with absorbing vertices. Recently, Santos proposed two alternative search algorithms that instead utilize the sign-flip oracle in Grover's algorithm rather than absorbing vertices. In this paper, we show that these two algorithms are exactly equivalent to two algorithms involving coined quantum walks, which are walks on the vertices of the original graph with an internal degree of freedom. The first scheme is equivalent to a coined quantum walk with one walk-step per query of Grover's oracle, and the second is equivalent to a coined quantum walk with two walk-steps per query of Grover's oracle. These equivalences lie outside the previously known equivalence of Szegedy's quantum walk with absorbing vertices and the coined quantum walk with the negative identity operator as the coin for marked vertices, whose precise relationships we also investigate.
\section{Introduction} Quantum walks are one of the primary tools for developing quantum algorithms. For example, they have resulted in quantum algorithms for searching \cite{SKW2003,CG2004}, element distinctness \cite{Ambainis2004}, triangle finding \cite{Magniez2007}, and boolean formula evaluation \cite{FGG2008,Ambainis2010}. They also speed up backtracking algorithms \cite{Montanaro2015} and are universal for quantum computation \cite{Childs2009,Lovett2010,Childs2013}. Several reviews of quantum walks are available, including \cite{Kempe2003,Ambainis2003,Venegas2012}. There are several definitions of quantum walks, depending on the scheme for quantizing a classical random walk. In discrete-time, the most popular models are Szegedy's quantum walk and coined quantum walks. Szegedy's quantum walk \cite{Szegedy2004} arises from quantizing a classical random walk or Markov chain. In his scheme, the walk occurs on the edges of the bipartite double cover of the original graph, and the evolution is defined by two reflection operators. Coined quantum walks, on the other hand, walk on the vertices of the original graph. They predate Szegedy's quantum walk and even provided inspiration for Szegedy's work \cite{Szegedy2004}. Coined quantum walks began with Meyer \cite{Meyer1996a,Meyer1996b}, who investigated them in the context of quantum cellular automata. He showed that an internal spin degree of freedom could be included to make the evolution nontrivial, and the resulting dynamics are a discretization of the Dirac equation. Aharonov \textit{et al.}~\cite{Aharonov2001} framed this evolution as a quantum walk, renaming the internal state a coin, in reference to a coin flip in classical random walks. The quantum walk is then defined by a coin flip followed by a shift or hop to adjacent vertices. Under certain conditions, it is known from Fact 3.2 of \cite{Magniez2012} that one step of Szegedy's quantum walk is equivalent to two steps of the coined quantum walk. 
This fact is stated without proof, however, and the precise relationships between the individual operators are not given. Some details are described in lecture notes \cite{Whitfield2012}, but again the exact relationships between individual operators are not explored. In this paper, we amend this oversight, explicitly showing that Szegedy's first reflection operator is equal to the ``Grover diffusion'' coin flip of the coined quantum walk, while Szegedy's second reflection operator is equal to the combination of a ``flip-flop'' shift, Grover diffusion coin flip, and another flip-flop shift. These relationships between the operators also hold for the seminal search schemes, where Szegedy's quantum walk quantizes a Markov chain with absorbing marked vertices \cite{Szegedy2004}, and the coined quantum walk uses the Grover diffusion coin for unmarked vertices and the negative identity for marked vertices \cite{SKW2003}. A modified version of Szegedy's quantum walk that includes complex phases was recently shown to be equivalent to a modified coined quantum walk on a multigraph, under certain conditions \cite{Portugal2016b,Portugal2017}. In this scenario, Szegedy's first reflection operator is equal to the coin flip, and Szegedy's second reflection operator is equal to the shift. Hence, one step of Szegedy's quantum walk is equal to \emph{one} step of the coined. In this paper, however, we focus on the standard definitions of the quantum walks, where one step of Szegedy's quantum walk is equal to \emph{two} steps of the coined. Recently, Santos \cite{Santos2016} proposed two alternative methods for searching with Szegedy's quantum walk that, rather than using absorbing vertices, utilize oracles similar to the one from Grover's algorithm \cite{Grover1996}. She proved that her first algorithm is equivalent to Szegedy's algorithm with absorbing vertices for a certain class of graphs.
That is, for these graphs, Szegedy's search algorithm with absorbing vertices can be expressed in terms of a Grover-type oracle. For her second algorithm, Santos showed that it achieves a better success probability when searching the complete graph, indicating that search with Grover-type queries can be superior. Furthermore, the improved success probability of this algorithm can be used to achieve perfect state transfer \cite{Stefanak2016}. This raises the question of whether Santos's schemes, based on Szegedy's quantum walk, are equivalent to coined quantum walks. Santos's algorithms are not encompassed within the framework of Fact 3.2 of \cite{Magniez2012}, so in this paper, we give the coined quantum walk search algorithms that are equivalent to Santos's two search algorithms. We prove that her first algorithm is equivalent to a coined quantum walk with one walk-step per query to a Grover-type oracle \cite{AKR2005}, and her second algorithm is equivalent to a coined quantum walk with two walk-steps per query to a Grover-type oracle \cite{Wong11}. Both of these are established concepts in coined quantum walks. Then we are able to explain Santos's results using the large body of research on coined quantum walks and expand upon her conclusions. In the next section, we define Szegedy's quantum walk, review how it searches with absorbing vertices, and then define Santos's search schemes. Then in Section III, we follow the same outline as Section II, except with equivalent coined quantum walks. So we define the coined quantum walk and prove its equivalence to Szegedy's by giving the explicit relationships between the operators. Then we explain its seminal search algorithm and prove its equivalence to Szegedy's search with absorbing vertices. Finally, we give the two coined quantum walks that are equivalent to Santos's Szegedy-based search algorithms with Grover-type oracles. 
This allows us to understand Santos's results using the vast literature on coined quantum walks. \section{Szegedy's Quantum Walk} \subsection{Definition} \begin{figure} \begin{center} \subfloat[]{ \includegraphics{graph} \label{fig:graph} } \quad \subfloat[]{ \includegraphics{graph_szegedy} \label{fig:graph_szegedy} } \caption{(a) An irregular graph of $N = 4$ vertices, and (b) its bipartite double cover.} \end{center} \end{figure} We assume throughout the paper that the classical random walk we are quantizing is unbiased, meaning it occurs on an undirected and unweighted graph. For example, for the graph in Fig.~\ref{fig:graph}, a classical random walker at vertex 2 has probability $1/3$ each of jumping to vertices 1, 3, and 4. Similarly, a walker at vertex 3 has probability $1/2$ each of jumping to vertices 2 and 4. Szegedy's quantum walk occurs on the edges of the bipartite double cover of the original graph. Mathematically, if the original graph is $G$, then its bipartite double cover is the graph tensor product $G \times K_2$, where $K_2$ is the complete graph of two vertices. This duplicates the vertices into two partite sets $X$ and $Y$. A vertex in $X$ is connected to a vertex in $Y$ if and only if they are connected in the original graph. For example, the bipartite double cover of Fig.~\ref{fig:graph} is shown in Fig.~\ref{fig:graph_szegedy}. Note if the number of edges in the original graph is $|E|$, the number of edges in its bipartite double cover is $2|E|$. Since the quantum walk occurs on the edges of the bipartite double cover, the Hilbert space of the walk is $\mathbb{C}^{2|E|}$. We denote a walker on the edge connecting $x \in X$ with $y \in Y$ as $\ket{x,y}$. Then the computational basis is \[ \{ \ket{x,y} : x \in X, y \in Y, x \sim y \}, \] where $x \sim y$ denotes that vertices $x$ and $y$ are adjacent. 
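The double-cover construction can be written out directly. The sketch below is our own illustration: the edge set $\{1,2\},\{2,3\},\{2,4\},\{3,4\}$ is read off from the description of Fig.~\ref{fig:graph} above, and the code builds $G\times K_2$ and confirms it has $2|E|=8$ edges, each joining the partite set $X$ to $Y$.

```python
# Edges of the graph in Fig. 1(a), as described in the text (1-indexed vertices)
edges = [(1, 2), (2, 3), (2, 4), (3, 4)]

def bipartite_double_cover(edges):
    """Return the edges of G x K_2: (x, 0) ~ (y, 1) iff x ~ y in G."""
    cover = []
    for (u, v) in edges:
        cover.append(((u, 0), (v, 1)))   # copy of {u,v} from X to Y
        cover.append(((v, 0), (u, 1)))   # and the mirrored copy
    return cover

cover = bipartite_double_cover(edges)
print(len(cover))   # 2|E| = 8
# Every cover edge joins the X side (second coordinate 0) to the Y side (1)
print(all(a[1] == 0 and b[1] == 1 for (a, b) in cover))   # True
```

This matches Fig.~\ref{fig:graph_szegedy}: each original edge $\{u,v\}$ produces the two cover edges $(u\in X, v\in Y)$ and $(v\in X, u\in Y)$.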
Szegedy's walk is defined by repeated applications of \[ W = R_2 R_1, \] where \begin{gather*} R_1 = 2 \sum_{x \in X} \ketbra{\phi_x}{\phi_x} - I, \\ R_2 = 2 \sum_{y \in Y} \ketbra{\psi_y}{\psi_y} - I \end{gather*} are reflection operators defined by \begin{gather*} \ket{\phi_x} = \frac{1}{\sqrt{\deg(x)}} \sum_{y \sim x} \ket{x,y}, \\ \ket{\psi_y} = \frac{1}{\sqrt{\deg(y)}} \sum_{x \sim y} \ket{x,y}, \end{gather*} where $\text{deg}(x)$ is the degree of vertex $x$ (\textit{i.e.}, the number of neighbors of $x$), and $y \sim x$ sums over the neighbors of $x$. Note that $\ket{\phi_x}$ is the equal superposition of edges incident to $x \in X$, and $\ket{\psi_y}$ is the equal superposition of edges incident to $y \in Y$. As proved in \cite{Wong25} and in congruence with the ``inversion about the mean'' of Grover's algorithm \cite{Grover1996}, the reflection $R_1$ goes through each vertex in $X$ and reflects the amplitude of its incident edges about their average amplitude, and $R_2$ similarly does this for the vertices in $Y$. For example, for the bipartite double cover in Fig.~\ref{fig:graph_szegedy}, say the amplitude of edge $\ket{2,1}$ is $c_{2,1}$, the amplitude of edge $\ket{2,3}$ is $c_{2,3}$, and the amplitude of edge $\ket{2,4}$ is $c_{2,4}$. They are all incident to vertex $2 \in X$, and their average is \[ \bar{c}_2 = \frac{c_{2,1} + c_{2,3} + c_{2,4}}{3}. \] When $R_1$ is applied, each of the three amplitudes is inverted about this mean, so \begin{gather*} c_{2,1} \rightarrow 2 \bar{c}_2 - c_{2,1}, \\ c_{2,3} \rightarrow 2 \bar{c}_2 - c_{2,3}, \\ c_{2,4} \rightarrow 2 \bar{c}_2 - c_{2,4}. \end{gather*} $R_1$ does a similar inversion at each vertex in $X$, and $R_2$ does this for vertices in $Y$.
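These operators are easy to realize concretely. The sketch below is our own code; it builds $R_1$ and $R_2$ on the $2|E|=8$ edge states of the double cover of the graph of Fig.~\ref{fig:graph} (adjacency read from the text), checks that each is a reflection and that $W=R_2R_1$ is unitary, and reproduces the inversion about the mean at vertex $2\in X$.

```python
import numpy as np

# Adjacency of the graph in Fig. 1(a): vertex 2 ~ {1,3,4}, vertex 3 ~ {2,4}
adj = {1: [2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3]}

# Edge basis of the bipartite double cover: |x,y> with x in X, y in Y, x ~ y
basis = [(x, y) for x in adj for y in adj[x]]
idx = {e: i for i, e in enumerate(basis)}
dim = len(basis)                      # 2|E| = 8

def reflection(side):
    """R_1 (side=0) reflects about the |phi_x>; R_2 (side=1) about the |psi_y>."""
    R = -np.eye(dim)
    for v in adj:
        phi = np.zeros(dim)
        for w in adj[v]:
            phi[idx[(v, w) if side == 0 else (w, v)]] = 1/np.sqrt(len(adj[v]))
        R += 2*np.outer(phi, phi)
    return R

R1, R2 = reflection(0), reflection(1)
W = R2 @ R1
print(np.allclose(R1 @ R1, np.eye(dim)),          # R1 is a reflection
      np.allclose(W @ W.T, np.eye(dim)))          # W is unitary (real orthogonal)

# Inversion about the mean at vertex 2 in X: c -> 2*cbar - c
rng = np.random.default_rng(0)
c = rng.random(dim)
cbar = (c[idx[(2, 1)]] + c[idx[(2, 3)]] + c[idx[(2, 4)]])/3
print(np.isclose((R1 @ c)[idx[(2, 1)]], 2*cbar - c[idx[(2, 1)]]))   # True
```

Since the states $\ket{\phi_x}$ have disjoint supports, $R_1$ acts block-diagonally, one Grover diffusion per vertex, which is exactly the inversion about the mean described above.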
\subsection{Search with Absorbing Vertices} \begin{figure} \begin{center} \subfloat[]{ \includegraphics{graph_marked} \label{fig:graph_marked} } \quad \subfloat[]{ \includegraphics{graph_szegedy_marked} \label{fig:graph_szegedy_marked} } \quad \subfloat[]{ \includegraphics{graph_szegedy_marked_dropped} \label{fig:graph_szegedy_marked_dropped} } \caption{(a) Search for vertex $2$ on an irregular graph of $N = 4$ vertices, (b) its bipartite double cover, and (c) the nonzero edges in Szegedy's quantum walk search algorithm.} \end{center} \end{figure} To search for a marked vertex in a graph with a classical random walk, one randomly walks until a marked vertex is found, and then the walker stays put at the marked vertex. Given this procedure, marked vertices act as absorbing vertices, as illustrated in Fig.~\ref{fig:graph_marked}. Szegedy's quantum walk searches by quantizing this random walk with absorbing vertices, and the resulting bipartite double cover is shown in Fig.~\ref{fig:graph_szegedy_marked}. As shown in \cite{Wong25}, the dashed line connecting vertex $2$ in $X$ and $Y$ can be ignored since it has zero amplitude throughout the evolution, so the effective graph is Fig.~\ref{fig:graph_szegedy_marked_dropped}. Then the search is performed by repeatedly applying \[ W' = R_2' R_1', \] where the prime distinguishes that we are searching for absorbing vertices. At unmarked vertices, $R_1'$ and $R_2'$ act identically to $R_1$ and $R_2$, respectively, by inverting the amplitudes of the edges around their average at each vertex. At marked vertices, however, $R_1'$ and $R_2'$ act differently \cite{Wong25}, flipping the signs of the amplitudes of all incident edges. This seminal search scheme has been investigated for a variety of graphs, including the one-dimensional periodic lattice or cycle \cite{Wong25,Santos2010b} and the complete graph \cite{Santos2010a}. 
Szegedy also investigated his search scheme for general symmetric and vertex-transitive graphs with a unique marked vertex \cite{Szegedy2004}. \subsection{Search with Grover's Oracle} In Grover's algorithm \cite{Grover1996}, the system evolves by repeatedly applying two reflections: The first reflects the state through the marked vertex, and the second reflects across the initial uniform state. This first reflection acts as an oracle query $Q$, and it negates the amplitude at the marked vertex. That is, \[ Q \ket{x} = \left\{ \begin{array}{rl} -\ket{x}, & x\ {\rm marked} \\ \ket{x}, & x\ {\rm unmarked} \\ \end{array} \right.. \] Motivated by this, Santos \cite{Santos2016} defined Grover-type oracles in Szegedy's scheme. In the bipartite double cover (\textit{c.f.}, Fig.~\ref{fig:graph_szegedy_marked_dropped}), there are marked vertices in each partite set $X$ and $Y$. So we get two Grover-type oracles, one for each set: \begin{gather*} Q_1 = \left( I_N - 2 \!\!\!\!\!\! \sum_{x \in {\rm marked}} \!\!\!\!\!\! \ketbra{x}{x} \right) \otimes I_N, \\ Q_2 = I_N \otimes \left( I_N - 2 \!\!\!\!\!\! \sum_{y \in {\rm marked}} \!\!\!\!\!\! \ketbra{y}{y} \right). \end{gather*} $Q_1$ flips the sign of an edge if it is incident to a marked vertex in $X$, and $Q_2$ acts similarly, except it flips the sign of an edge if it is incident to a marked vertex in $Y$. Using these Grover-type queries, Santos introduced several Szegedy-based schemes for searching. The first algorithm incorporates queries in both the $X$ and $Y$ partite sets, and it repeatedly applies \[ W_{q1} = R_2 Q_2 R_1 Q_1. \] The second algorithm only applies queries to vertices in $X$, repeatedly applying \[ W_{q2} = R_2 R_1 Q_1. \] Santos also proposed using $R_2 Q_1 R_1 Q_1$, but since $Q_1 R_1 Q_1 = R_1$, this is equivalent to $W = R_2 R_1$, so it is a pure walk that does not search. Similarly, Santos numerically showed that $Q_1 R_2 Q_1 R_1$ causes an initially uniform state to evolve negligibly.
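The identity $Q_1R_1Q_1=R_1$ invoked above, and the unitarity of Santos's operators, are easy to verify numerically. The sketch below is our own code, on the example graph of Fig.~\ref{fig:graph} with vertex $2$ marked; the oracles are restricted to the edge basis, where $Q_1$ simply negates $\ket{x,y}$ when $x$ is marked.

```python
import numpy as np

# Example graph with vertex 2 marked
adj = {1: [2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3]}
marked = {2}
basis = [(x, y) for x in adj for y in adj[x]]
idx = {e: i for i, e in enumerate(basis)}
dim = len(basis)

def reflection(side):
    R = -np.eye(dim)
    for v in adj:
        phi = np.zeros(dim)
        for w in adj[v]:
            phi[idx[(v, w) if side == 0 else (w, v)]] = 1/np.sqrt(len(adj[v]))
        R += 2*np.outer(phi, phi)
    return R

R1, R2 = reflection(0), reflection(1)

# Grover-type oracles restricted to the edge basis:
# Q1 negates |x,y> when x is marked; Q2 negates |x,y> when y is marked
Q1 = np.diag([-1.0 if x in marked else 1.0 for (x, y) in basis])
Q2 = np.diag([-1.0 if y in marked else 1.0 for (x, y) in basis])

Wq1 = R2 @ Q2 @ R1 @ Q1
Wq2 = R2 @ R1 @ Q1
print(np.allclose(Q1 @ R1 @ Q1, R1))              # hence R2 Q1 R1 Q1 = R2 R1
print(np.allclose(Wq1 @ Wq1.T, np.eye(dim)))      # W_q1 is unitary
```

The identity holds because $R_1$ is block-diagonal over the vertices of $X$ and $Q_1$ acts as $\pm I$ on each block, so conjugation by $Q_1$ leaves every block unchanged.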
Thus, we only focus on $W_{q1}$ and $W_{q2}$ here. Santos \cite{Santos2016} showed that the first operator $W_{q1}$ is sometimes equal to Szegedy's original search operator $W'$, meaning Szegedy's search can be interpreted as a Grover-type query algorithm in those cases. In particular, she showed this to be true for strongly regular graphs with one marked vertex \cite{Santos2016}, which are simple enough that the evolution occurs in a 4D subspace. Then it can be shown that $t$ applications of $W_{q1}$ to the initial state of the search algorithm yield exactly the same state as $t$ applications of $W'$. For her second algorithm, Santos showed that $W_{q2}$ outperforms Szegedy's $W'$ when searching the complete graph. In particular, if the complete graph has $N$ vertices, and $k$ of them are marked, then each operator reaches the following success probability $p_*$ after $t_*$ applications of the respective operator: \[ \begin{array}{lll} W': & p_* = \frac{1}{2}, & t_* = \frac{\pi}{4} \sqrt{\frac{N}{2k}} \\ W_{q2}: & p_* = 1, & t_* = \frac{\pi}{4} \sqrt{\frac{N}{k}}. \end{array} \] That is, with Szegedy's original absorbing operator $W'$, the success probability reaches $1/2$, so we expect to repeat the algorithm twice before finding the marked vertex, on average, leading to an overall runtime that is a factor of $\sqrt{2}$ slower than Santos's alternative operator $W_{q2}$. In addition, Szegedy's absorbing operator $W'$ can be interpreted as making two queries per iteration, one in $X$ and another in $Y$, whereas Santos's operator $W_{q2}$ only makes one query in $X$ per iteration. This observation further makes Szegedy's search algorithm slower than Santos's in the number of queries. Next, we give the coined quantum walks that are equivalent to Szegedy's quantum walk, Szegedy's search algorithm with absorbing vertices, and Santos's algorithms with Grover-type oracles.
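The $\sqrt{2}$ runtime comparison above can be checked arithmetically; the values of $N$ and $k$ below are illustrative:

```python
import math

# Illustrative parameters (assumptions): N vertices, k of them marked.
N, k = 1024, 1

t_absorbing = (math.pi / 4) * math.sqrt(N / (2 * k))  # W', reaches p* = 1/2
t_grover    = (math.pi / 4) * math.sqrt(N / k)        # W_q2, reaches p* = 1

# W' succeeds with probability 1/2, so on average it must be run twice;
# the expected total number of iterations is 2 * t_absorbing.
expected_absorbing = 2 * t_absorbing

# The ratio of expected runtimes is exactly sqrt(2), independent of N and k.
ratio = expected_absorbing / t_grover
```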
We will see that Santos's schemes are equivalent to established coined quantum walks, and so those results carry into Santos's framework and provide extensions to her work. \section{Equivalent Coined Quantum Walks} \subsection{Definition} \begin{figure} \begin{center} \subfloat[]{ \includegraphics{graph_coined} \label{fig:graph_coined} } \quad \subfloat[]{ \includegraphics{graph_coined_marked} \label{fig:graph_coined_marked} } \caption{(a) A coined quantum walk on an irregular graph of $N = 4$ vertices, and (b) search for vertex $2$.} \end{center} \end{figure}
A coined quantum walk jumps on the vertices of the original graph, not the bipartite double cover. It has an additional coin degree of freedom, however, indicating which direction the walker points. This is illustrated in Fig.~\ref{fig:graph_coined}, where each vertex has directions associated with it. For example, a particle at vertex $2$ can point towards vertices $1$, $3$, and $4$ in superposition. Note that the number of directions in which a walker at vertex $v$ can point is $\text{deg}(v)$, so the Hilbert space is $\mathbb{C}^{\sum_v \text{deg}(v)}$. If the particle is at vertex $a$ and points towards vertex $b$, we write the state as $\ket{a,b}$. Then the computational basis is \[ \left\{ \ket{a,b} : a,b \in \{1, \dots, N\}, a \sim b \right\}. \] The quantum walk is defined by repeated applications of \[ U = S C, \] where $C$ is the coin flip that shuffles the internal state of the particle at each vertex, and $S$ is the shift that causes the particle to hop. In this paper, we consider the most common choices for both. For the coin, we use Grover's diffusion coin, which is the permutation-symmetric operator that is furthest from the identity operator \cite{Moore2002}, and it is defined to be \[ C = 2 \sum_{a = 1}^N \ketbra{s_a}{s_a} - I, \] where \[ \ket{s_a} = \frac{1}{\sqrt{\deg(a)}} \sum_{b \sim a} \ket{a,b} \] is the state of a particle at vertex $a$ uniformly pointing towards each of its neighbors.
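The Grover diffusion coin just defined can be constructed explicitly on the arc space. Only vertex $2$'s neighbors $\{1,3,4\}$ are given in the text; the remaining edge in the list below is an illustrative assumption:

```python
import numpy as np

# Assumed edge set for a 4-vertex irregular graph (only vertex 2's
# neighbors are stated in the text; edge (3,4) is an assumption).
edges = [(1, 2), (2, 3), (2, 4), (3, 4)]
arcs = sorted({(a, b) for a, b in edges} | {(b, a) for a, b in edges})
dim = len(arcs)  # = sum_v deg(v) = 2|E| = 8

# C = 2 sum_a |s_a><s_a| - I, built block by block over each vertex's
# outgoing arcs (the coin space at that vertex)
C = -np.eye(dim)
for v in sorted({u for e in edges for u in e}):
    out = [i for i, (x, _) in enumerate(arcs) if x == v]
    C[np.ix_(out, out)] += 2.0 / len(out)

# |s_2>: uniform superposition over the arcs leaving vertex 2
out2 = [i for i, (x, _) in enumerate(arcs) if x == 2]
s2 = np.zeros(dim)
s2[out2] = 1 / np.sqrt(len(out2))
```

As a reflection, $C$ squares to the identity and fixes each $\ket{s_a}$.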
Then for each vertex $a$, $C$ reflects the internal coin state across the equal superposition $\ket{s_a}$. As shown in \cite{Wong23}, and consistent with the ``inversion about the mean'' of Grover's algorithm \cite{Grover1996}, for each vertex $a$, $C$ inverts the amplitude of each coin state at $a$ about the average amplitude of coin states at $a$. For example, for the coined quantum walk in Fig.~\ref{fig:graph_coined}, consider a particle at vertex $2$. Say it points to vertices $1$, $3$, and $4$ with respective amplitudes $c_{2,1}$, $c_{2,3}$, and $c_{2,4}$. The average of these amplitudes is \[ \bar{c}_2 = \frac{c_{2,1} + c_{2,3} + c_{2,4}}{3}. \] When $C$ is applied, each of the three amplitudes is inverted about this mean, so \begin{gather*} c_{2,1} \rightarrow 2 \bar{c}_2 - c_{2,1}, \\ c_{2,3} \rightarrow 2 \bar{c}_2 - c_{2,3}, \\ c_{2,4} \rightarrow 2 \bar{c}_2 - c_{2,4}. \end{gather*} In short, the coin $C$ inverts about the mean at each vertex. For the shift $S$, we use the flip-flop shift, which causes the particle to hop and then turn around. For example, a particle at vertex $a$ pointing to vertex $b$ jumps to vertex $b$ and points at vertex $a$, so $S\ket{a,b} = \ket{b,a}$. This shift is commonly used because it is naturally defined on nonlattice and irregular graphs \cite{Wong13}, and it is necessary for fast search algorithms \cite{AKR2005}. Coined quantum walks with Grover's diffusion coin and the flip-flop shift have been explored on a variety of graphs, such as the hypercube \cite{Moore2002,Marquezino2008}, 2D grid \cite{Marquezino2010}, and regular graphs \cite{Wong23}. Later, we will give many more examples in the context of searching. Now we prove that Szegedy's quantum walk is equivalent to this coined quantum walk, with one application of Szegedy's equal to two applications of the coined, \textit{i.e.}, $W = U^2$.
Although this was already known in Fact 3.2 of \cite{Magniez2012}, our proof includes the precise relationships between the individual operators composing the walks, and these details are necessary to later prove that Santos's walks are also equivalent to coined quantum walks. We begin by noting that the Hilbert spaces of the two quantum walks are identical. Recall that Szegedy's quantum walk evolves in $\mathbb{C}^{2|E|}$, and the coined quantum walk evolves in $\mathbb{C}^{\sum_v \text{deg}(v)}$. In any graph, however, the sum of the degrees of the vertices is equal to twice the number of edges, so $\sum_v \text{deg}(v) = 2|E|$. This can be seen in Fig.~\ref{fig:graph_coined}, since each edge supports two directions. For example, the edge connecting vertices $1$ and $2$ supports two basis states, one for a particle at $1$ pointing to $2$ and another at $2$ pointing to $1$. Thus, $\mathbb{C}^{2|E|} = \mathbb{C}^{\sum_v \text{deg}(v)}$. Now we give a bijection between basis states in Szegedy's quantum walk and the coined quantum walk: Szegedy's walker on the edge connecting vertices $i \in X$ with $j \in Y$ is equivalent to a coined particle at vertex $i$ pointing towards vertex $j$. Both of these states are denoted by the same basis vector $\ket{i,j}$. Thus, the undirected edges in Fig.~\ref{fig:graph_szegedy} and the directed edges in Fig.~\ref{fig:graph_coined} depict the same quantum state. This bijection allows us to reinterpret Szegedy's quantum walk $W = R_2 R_1$ from its original description on the edges of the bipartite double cover (\textit{c.f.}, Fig.~\ref{fig:graph_szegedy}) to the coined description on the directed graph (\textit{c.f.}, Fig.~\ref{fig:graph_coined}). Let us begin with $R_1$. Recall that in the bipartite double cover, $R_1$ goes through each vertex in $X$ and inverts its edges about their average at the vertex. 
Then in the language of the coined quantum walk, $R_1$ goes through each vertex in the directed graph, inverting its \emph{outgoing} amplitudes about their average at the vertex. These outgoing amplitudes are the coin states at each vertex, so this is precisely the Grover diffusion coin $C$: \[ R_1 = C. \] Now consider $R_2$. In the bipartite double cover, $R_2$ goes through each vertex in $Y$ and inverts its edges about their average at the vertex. Reinterpreting this as a coined quantum walk, $R_2$ goes through each vertex of the directed graph and inverts its \emph{incoming} amplitudes about their average at the vertex. This is equivalent to the flip-flop shift followed by the Grover coin and another flip-flop shift, \textit{i.e.}, \[ R_2 = SCS. \] The first flip-flop shift exchanges incoming and outgoing amplitudes. Then the Grover coin inverts outgoing amplitudes about their mean. Finally, the flip-flop shift again swaps incoming and outgoing amplitudes, so the net effect is that incoming amplitudes, not outgoing amplitudes, were inverted about their means. \begin{table} \caption{\label{table:summary} Summary of the equivalences of Szegedy's and coined quantum walks.} \begin{center} \begin{tabular}{ccc} \hline\noalign{\smallskip} \textbf{Szegedy's} & \textbf{Coined} & \textbf{Equivalence} \\ \noalign{\smallskip}\hline\noalign{\smallskip} $W = R_2 R_1$ & $U = SC$ & $W = U^2$ \\ $W' = R_2' R_1'$ & $U_{\rm SKW} = S(C\ {\rm unmarked}, -I\ {\rm marked})$ & $W' = U_{\rm SKW}^2$ \\ $W_{q1} = R_2 Q_2 R_1 Q_1$ & $SCQ$ & $W_{q1} = (SCQ)^2$ \\ $W_{q2} = R_2 R_1 Q_1$ & $U^2Q = SCSCQ$ & $W_{q2} = U^2Q$ \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} Combining these two results, $R_1 = C$ and $R_2 = SCS$, we have \[ W = R_2 R_1 = SCSC = U^2, \] thus proving that two applications of the coined quantum walk are exactly equivalent to one application of Szegedy's quantum walk.
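The identity $W = U^2$ can be verified numerically on a small example; the edge set below is an illustrative assumption:

```python
import numpy as np

# Small numerical check of W = U^2 on an assumed 4-vertex graph.
edges = [(1, 2), (2, 3), (2, 4), (3, 4)]
arcs = sorted({(a, b) for a, b in edges} | {(b, a) for a, b in edges})
idx = {arc: i for i, arc in enumerate(arcs)}
dim = len(arcs)

# Flip-flop shift: S|a,b> = |b,a>
S = np.zeros((dim, dim))
for (a, b), i in idx.items():
    S[idx[(b, a)], i] = 1

# Grover diffusion coin, block by block over each vertex's outgoing arcs
C = -np.eye(dim)
for v in sorted({u for e in edges for u in e}):
    out = [i for i, (x, _) in enumerate(arcs) if x == v]
    C[np.ix_(out, out)] += 2.0 / len(out)

R1, R2 = C, S @ C @ S  # Szegedy's two reflections in coined language
W = R2 @ R1            # one Szegedy step
U = S @ C              # one coined step
```

Since $S^2 = I$, the product $R_2 R_1 = SCSC$ collapses to $(SC)^2$ exactly.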
This equivalence is summarized in the first body row (not the header row) of Table~\ref{table:summary}. \subsection{Search with Absorbing Vertices} Now we give the seminal algorithm by Shenvi, Kempe, and Whaley (SKW) \cite{SKW2003} for searching using a coined quantum walk. In this scheme, the Grover diffusion coin $C$ is still used for unmarked vertices, but the negative identity operator $-I$ is used as the coin at marked vertices. That is, we repeatedly apply the following operator: \[ U_{\rm SKW} = S \cdot \left\{ \begin{array}{rl} C & {\rm on\ unmarked\ vertices} \\ -I & {\rm on\ marked\ vertices} \\ \end{array} \right\} . \] This selective coin operator acts as a query to an oracle, and $S$ is the usual flip-flop shift. Search using SKW's scheme has been explored on a large number of graphs, including the hypercube \cite{SKW2003}, two- and higher-dimensional grids \cite{AKR2005,AR2008,NR2016a}, Sierpinski gaskets \cite{Lara2013}, complete graphs with and without self-loops \cite{Wong10}, and the simplex of complete graphs with a fully marked clique \cite{Wong11}. Other explorations include the impact of internal-state measurements \cite{Wong18} and stationary states \cite{Wong24}. Now we prove that Szegedy's quantum walk with absorbing vertices is equivalent to SKW's coined quantum walk search scheme. Although this is encompassed in Fact 3.2 of \cite{Magniez2012}, our simple argument gives the reason why. Recall that in Szegedy's quantum walk with absorbing marked vertices, $R_1'$ and $R_2'$ perform an ``inversion about the mean'' at edges incident to unmarked vertices. So for unmarked vertices, we still have $R_1' = C$ and $R_2' = SCS$. For marked vertices, however, $R_1'$ and $R_2'$ flip the signs of incident edges, so this is equivalent to multiplying them by $-I$. So at marked vertices, we have $R_1' = -I$ and $R_2' = S(-I)S$.
Thus, \begin{gather*} R_1' = \left\{ \begin{array}{rl} C & {\rm on\ unmarked\ vertices} \\ -I & {\rm on\ marked\ vertices} \\ \end{array} \right\}, \\ R_2' = S \left\{ \begin{array}{rl} C & {\rm on\ unmarked\ vertices} \\ -I & {\rm on\ marked\ vertices} \\ \end{array} \right\} S. \end{gather*} Combining these, we see that one application of Szegedy's quantum walk with absorbing marked vertices $W' = R_2' R_1'$ is equivalent to two applications of the coined quantum walk in SKW's search model, \textit{i.e.}, \[ W' = U_{\rm SKW}^2. \] This is summarized in the second body row of Table~\ref{table:summary}. Note that this equivalence holds no matter how many marked vertices are present. Fact 3.2 of \cite{Magniez2012}, however, only addresses the case of a single marked vertex $w$, for which $U_\text{SKW}$ can be written as \cite{AKR2005} \[ U_{\rm SKW} = SC (I - 2 \ketbra{w,\phi_w}{w,\phi_w}). \] The term in parentheses is a reflection in the Hilbert space, and this is a requirement for Fact 3.2 of \cite{Magniez2012} to apply. As we will see next, the Grover-type queries used in Santos's algorithms do not have this form, and so our upcoming equivalence results lie beyond the scope of Fact 3.2 of \cite{Magniez2012}. \subsection{Search with Grover's Oracle} Thus far, we have proved the equivalence of Szegedy's quantum walk and the coined quantum walk, including their seminal search algorithms. Although the general equivalence was already known from Fact 3.2 of \cite{Magniez2012}, our details included the specific relationships between the operators and hold for multiple marked vertices. Now we use these relationships to obtain the coined quantum walks that are equivalent to Santos's search algorithms $W_{q1}$ and $W_{q2}$, which are based on Szegedy's quantum walk with a Grover-type oracle. For the coined quantum walk, the Grover-type oracle $Q$ flips the amplitudes at marked vertices.
So if the particle is at vertex $a$ pointing towards vertex $b$, the oracle acts as: \[ Q \ket{a,b} = \left\{ \begin{array}{rl} -\ket{a,b}, & a\ {\rm marked} \\ \ket{a,b}, & a\ {\rm unmarked} \\ \end{array} \right.. \] This oracle query is \emph{not} a reflection in the full Hilbert space. In the case of a regular graph, the full Hilbert space is the tensor product of vertex and coin spaces, and $Q$ is only a reflection in the vertex space, not the full space. Thus, Grover-type oracles are generally not within the scope of Fact 3.2 of \cite{Magniez2012}. Now let us consider Santos's first algorithm, $W_{q1} = R_2 Q_2 R_1 Q_1$. We already have $R_1 = C$ and $R_2 = SCS$, so we need to determine $Q_1$ and $Q_2$. Beginning with $Q_1$, recall that in the bipartite double cover (\textit{c.f.}, Fig.~\ref{fig:graph_szegedy_marked_dropped}), $Q_1$ flips the signs of the edges incident to marked vertices in $X$. By our bijection, this is equivalent to a coined quantum walk (\textit{c.f.}, Fig.~\ref{fig:graph_coined_marked}) where we flip the \emph{outgoing} edges from marked vertices. This is exactly $Q$, so we have \[ Q_1 = Q. \] Now for $Q_2$, in the bipartite double cover, $Q_2$ flips the signs of the edges incident to marked vertices in $Y$. Then in the coined quantum walk, this is equivalent to flipping the signs of \emph{incoming} edges to marked vertices. To implement this, we can apply the flip-flop shift $S$ to swap incoming and outgoing edges, apply the query $Q$ to flip the signs of outgoing edges from marked vertices, and apply the flip-flop shift $S$ again so that the net effect is that the incoming edges to marked vertices had their signs flipped. That is, \[ Q_2 = SQS. \] Utilizing these relations, Santos's first search algorithm is $W_{q1} = SCSSQSCQ$. Since $S^2 = I$, this simplifies to \[ W_{q1} = SCQSCQ = (SCQ)^2.
\] Note that $SCQ$ is a coined quantum walk that takes one walk-step $U = SC$ per oracle query $Q$, and Santos's first algorithm is equivalent to two iterations of it. This is summarized in the third body row of Table~\ref{table:summary}. This coined quantum walk $SCQ$ is another well-established scheme for searching using a coined quantum walk. It first appeared in \cite{AKR2005}, where this search scheme on the complete graph with a self-loop at each vertex is exactly equivalent to Grover's algorithm (apart from a factor of $2$). Other investigations of this quantum walk search operator $SCQ$ include the complete graph with an arbitrary number of self-loops per vertex \cite{Wong10}, search with potential barriers \cite{Wong13}, improving the success probability by measuring the coin state \cite{Wong18}, and the effect of stationary states \cite{Wong24}. All of these results directly map to Santos's walk $W_{q1}$. Santos showed that her walk $W_{q1}$ is equivalent to Szegedy's quantum walk with absorbing marked vertices $W'$ when searching strongly regular graphs with a unique marked vertex. As coined quantum walks, this is an equivalence between $SCQ$ and SKW's search algorithm with a selective coin $U_{\rm SKW}$, and it is well understood: If the neighbors of each marked vertex evolve identically due to the symmetry of the graph, then the two operators $SCQ$ and $U_{\rm SKW}$ are identical. This is true for any distance-transitive graph with a unique marked vertex, which includes the complete graph and Johnson graphs in general \cite{Wong20}, the strongly regular graphs in Santos's analysis \cite{Santos2016,Wong5}, the hypercube \cite{SKW2003}, and arbitrary-dimensional lattices \cite{AKR2005}. They can also be equivalent with multiple marked vertices, such as the hypercube with two marked vertices at opposite ``ends'' of the hypercube \cite{Wong18} or the 2D grid with a marked diagonal \cite{AR2008}. 
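The relation $W_{q1} = (SCQ)^2$ can be checked numerically as well; the graph and marked vertex below are illustrative assumptions:

```python
import numpy as np

# Numerical check of W_q1 = (SCQ)^2 on a small assumed graph.
edges = [(1, 2), (2, 3), (2, 4), (3, 4)]
marked = {2}
arcs = sorted({(a, b) for a, b in edges} | {(b, a) for a, b in edges})
idx = {arc: i for i, arc in enumerate(arcs)}
dim = len(arcs)

# Flip-flop shift S|a,b> = |b,a>
S = np.zeros((dim, dim))
for (a, b), i in idx.items():
    S[idx[(b, a)], i] = 1

# Grover diffusion coin
C = -np.eye(dim)
for v in sorted({u for e in edges for u in e}):
    out = [i for i, (x, _) in enumerate(arcs) if x == v]
    C[np.ix_(out, out)] += 2.0 / len(out)

# Grover-type oracle: flip arcs leaving a marked vertex
Q = np.eye(dim)
for (a, b), i in idx.items():
    if a in marked:
        Q[i, i] = -1

R1, R2 = C, S @ C @ S
Q1, Q2 = Q, S @ Q @ S
Wq1 = R2 @ Q2 @ R1 @ Q1   # Santos's first operator
SCQ = S @ C @ Q           # one walk-step per oracle query
```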
This characterization resolves an open question as to when Szegedy's search algorithm can be interpreted as a Grover-type query algorithm. Now consider Santos's second algorithm, $W_{q2} = R_2 R_1 Q_1$. Using our previous relations, we have $W_{q2} = SCSCQ$. Since a step of the coined quantum walk is $U = SC$, this yields \[ W_{q2} = U^2 Q. \] So Santos's second algorithm is equivalent to a coined quantum walk that takes two walk-steps $U$ for each oracle query $Q$. This is summarized in the last row of Table~\ref{table:summary}. Taking multiple walk-steps per oracle query is also an existing concept in coined quantum walks. It was introduced by Wong and Ambainis \cite{Wong11} for searching the ``simplex of complete graphs'' for a fully marked cluster. They showed that using multiple walk-steps per oracle query results in an algorithm that is quadratically faster in the number of queries than one that takes one walk-step per oracle query. Santos's result, then, reveals that taking multiple walk-steps per oracle query also improves search on the complete graph, suggesting it could be a general method for speeding up quantum search algorithms. \section{Discussion and Conclusion} Szegedy's quantum walk and coined quantum walks are two popular quantum analogues of classical random walks. Szegedy's quantum walk is given by two reflections, while the coined quantum walk is given by a coin flip followed by a shift. Although it is known that one step of Szegedy's quantum walk is equivalent to two steps of the coined walk, we determined the precise relationship between the operators. In particular, we proved that the first reflection in Szegedy's quantum walk is equivalent to the Grover diffusion coin, while the second reflection is equivalent to the flip-flop shift, Grover diffusion coin, and flip-flop shift again. These identifications allowed us to reinterpret Santos's two Szegedy-based search schemes with Grover-type oracles as coined quantum walks.
As a result of our new equivalence results, a vast literature on coined quantum walks also applies to Santos's schemes. This allowed us to characterize when Szegedy's search algorithm can be expressed in terms of a Grover-type oracle. Before, it was only known for strongly regular graphs with a unique marked vertex, but it is now known for a host of other graphs as well, including graphs with multiple marked vertices. We also showed that Santos's second algorithm is equivalent to searching with multiple steps of the quantum walk per oracle query. This supports the hope that other problems can be similarly sped up using this technique. \begin{acknowledgements} Thanks to Peter H\o{}yer for pointing out Fact 3.2 of \cite{Magniez2012}. This work was supported by the U.S.~Department of Defense Vannevar Bush Faculty Fellowship of Scott Aaronson. \end{acknowledgements} \bibliographystyle{qinp}
https://arxiv.org/abs/1612.09045
A relative anti-concentration inequality
Given two vectors in Euclidean space, how unlikely is it that a random vector has a larger inner product with the shorter vector than with the longer one? When the random vector has independent, identically distributed components, we conjecture that this probability is no more than a constant multiple of the ratio of the Euclidean norms of the two given vectors, up to an additive term to allow for the possibility that the longer vector has more arithmetic structure. We give some partial results to support the basic conjecture.
\section{The question} We conjecture the following {\em relative anti-concentration} inequality: If $\alpha,\beta\in \mathbb{R}^{n}$, and $X_{i}$ are i.i.d. real-valued random variables with a non-degenerate distribution, then \begin{align}\label{eq:relanticoncentrationgoal} \mathbf{P}\left\{\Big| \sum_{i=1}^{n}\alpha_{i}X_{i} \Big|\le \Big|\sum_{i=1}^{n}\beta_{i}X_{i} \Big|\right\}\le C\frac{\|\beta\|}{\|\alpha\|}+\frac{C}{\mbox{LCD}(\alpha)}. \end{align} Here $C$ is a constant, $\|\beta\|^{2}=\beta_{1}^{2}+\ldots +\beta_{n}^{2}$, and $\mbox{LCD}(\alpha)$ is the ``essential least common denominator'' introduced by Rudelson and Vershynin in their inverse Littlewood-Offord theorems. Its precise definition is recalled later. In this paper, we prove special cases of this inequality, under conditions on the distribution of $X_{1}$ or on the coefficients, and in some cases not requiring the second term at all. To put the inequality in context, recall the L\'{e}vy concentration function of a real-valued random variable $X$, defined as $$Q_{X}(t)=\sup_{a\in \mathbb{R}}\mathbf{P}\{a\le X\le a+t\}.$$ {\em Anti-concentration inequalities} are upper bounds on the concentration function, perhaps for a range of $t$ (for instance, on $Q_{X}(0)$, which is the maximal size of an atom). The famous Littlewood-Offord problem is an anti-concentration inequality for $S=\sum_{i=1}^{n}v_{i}X_{i}$ where $X_{i}$ are independent Bernoulli random variables. It states that $Q_{S}(t)\le C/\sqrt{n}$, provided $t\le |v_{i}|$ for all $i$. This has been generalized in different directions. The Kolmogorov-Rogozin inequality generalizes to sums of independent random variables. Hal\'{a}sz's inequalities and the inverse Littlewood-Offord theorems (Arak, Tao and Vu, Rudelson and Vershynin, etc.) are stronger bounds on $Q_{S}$ (also allowing general distributions of $X_{i}$s) under constraints on the arithmetic structure of $v_{i}$s.
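For the classical Littlewood-Offord bound just quoted, the extreme case $v_i = 1$ can be computed exactly, since $S$ is then a shifted binomial; the constant $0.8$ below is illustrative:

```python
from math import comb, sqrt

# Exact largest atom of S = X_1 + ... + X_n with X_i = +-1 Bernoulli and
# all coefficients v_i = 1: the maximum point mass is C(n, floor(n/2))/2^n,
# which is O(1/sqrt(n)), in line with the Littlewood-Offord bound.
def max_atom(n):
    counts = [comb(n, k) for k in range(n + 1)]  # S = 2k - n with prob C(n,k)/2^n
    return max(counts) / 2 ** n

# e.g. n = 10: the largest atom is C(10,5)/2^10 = 252/1024
atoms = {n: max_atom(n) for n in (10, 100, 1000)}
```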
See \cite{nguyenvu}, \cite{gotzezaitsev} or \cite{rudelsonvershyninicm} for more on this fascinating subject. In short, these are upper bounds on the small-ball probabilities of linear forms under product measure. There are anti-concentration theorems of quadratic forms of independent random variables and more generally for polynomials (eg., \cite{taovubook}, \cite{mekanguyenvu}). Now it is clear why we call \eqref{eq:relanticoncentrationgoal} a ``relative'' anti-concentration inequality (think of $\|\beta\|$ as small and $\|\alpha\|$ as large, else the inequality is trivial), since it asks for the probability that a linear form with small coefficients dominates another one with large coefficients. Why do we expect the bound on the right? If $X_{i}$ are i.i.d. standard Gaussians, then it is an easy calculation (shown later) that the probability is bounded by $C\|\beta\|/\|\alpha\|$. We expect essentially the same bound in general, except that for discrete random variables such as Bernoullis, the second term is needed. This is because the quantity $\|\beta\|/\|\alpha\|$ can be made as small as desired by scaling $\beta$ down, while the left hand side cannot be smaller than the atom size of $\sum \alpha_{i}X_{i}$ at $0$ (which can be non-zero if $\alpha$ has an arithmetic structure). The term $1/\mbox{LCD}(\alpha)$ is precisely what Rudelson and Vershynin use to bound the largest atom of $\sum \alpha_{i}X_{i}$. The special case when $\beta_{i}=1$ and $\alpha_{i}=i$, has an application to the study of zeros of random polynomials. In this case, the inequality \eqref{eq:relanticoncentrationgoal} (the bound on the right is simply $1/n$) was proved by S\"{o}ze~\cite{kensoze} (see Lemma~3 in his paper) who used it to prove a bound for the expected number of real zeros of random polynomials with i.i.d. coefficients. Other than that, we do not know of any applications of the inequality~\eqref{eq:relanticoncentrationgoal}. 
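The Gaussian case mentioned above is easy to probe by simulation; the vectors $\alpha,\beta$ below are illustrative choices (orthogonal, so the second term plays no role):

```python
import numpy as np

# Monte Carlo sanity check of the Gaussian bound
#   P{ |<alpha,X>| <= |<beta,X>| } <= 2 ||beta|| / ||alpha||.
rng = np.random.default_rng(0)
n = 50
alpha = np.ones(n)
beta = 0.05 * np.array([(-1) ** i for i in range(n)], dtype=float)

theta = np.linalg.norm(beta) / np.linalg.norm(alpha)  # = 0.05 here

X = rng.standard_normal((200_000, n))
emp = np.mean(np.abs(X @ alpha) <= np.abs(X @ beta))
```

For these orthogonal vectors the exact probability is $(2/\pi)\arctan\theta$, comfortably below $2\theta$.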
However, it appears to have a natural appeal, and in this paper we prove several partial results to support our conjecture. \para{Acknowledgement} After the first version of our article was posted on the arXiv, Sasha Sodin communicated to us a Fourier analytic proof of \eqref{eq:relanticoncentrationgoal}, under the assumption that $X_i$ have a sub-exponential distribution. We are grateful to him for allowing us to include his elegant proof in this version of the paper. \section{Our results} Let us write $X=(X_{1},\ldots ,X_{n})$ so that $\<\alpha,X\>=\sum_{i=1}^{n}\alpha_{i}X_{i}$ and $\<\beta,X\>=\sum_{i=1}^{n}\beta_{i}X_{i}$. First we show in Section~\ref{sec:gaussian} that if $X_{i}$ are i.i.d. standard Gaussian random variables, then \begin{align}\label{eq:gaussianineq} \mathbf{P}\{|\<\alpha,X\>|\le |\<\beta,X\>|\}\le 2\frac{\|\beta\|}{\|\alpha\|}. \end{align} This may be taken as a motivation for \eqref{eq:relanticoncentrationgoal}, but without the second term. As will be clear later, for discrete random variables, the second term becomes necessary. Our main results are as follows: \begin{itemize}\setlength\itemsep{3pt} \item When $X_i$s have a log-concave distribution, \eqref{eq:gaussianineq} holds with a larger constant (Theorem~\ref{thm:logconcave}). \item If $X_i$s have sub-Gaussian (Theorem~\ref{thm:subgaussian}) or mean zero sub-exponential distribution (Theorem~\ref{thm:subexponential}), we prove \eqref{eq:relanticoncentrationgoal}, but losing a factor of $\log(\|\alpha\|/\|\beta\|)$ in the first term on the right. \item There are a few other minor results (Corollary~\ref{cor:mixlogconcave} and Corollary~\ref{cor:mixuniform}) obtained by taking mixtures of log-concave distributions etc. \item After the first version of our paper appeared, Sasha Sodin sent us a sketch of a proof of \eqref{eq:relanticoncentrationgoal} for sub-exponential random variables.
His result (Theorem~\ref{thm:sodinsargument}) improves on Theorem~\ref{thm:subexponential} by getting rid of the spurious $\log(\|\alpha\|/\|\beta\|)$ factor. In some sense, this is the strongest result in this paper (except for the symmetry assumption which we were not able to get rid of). \end{itemize} The Fourier analytic method of proof of Sodin is also entirely different from our other proofs. Hence we retain Theorem~\ref{thm:subexponential} (and its short proof) and also give full details of Sodin's proof. For a reader with limited time, we recommend reading just the proofs of Theorem~\ref{thm:logconcave} and Theorem~\ref{thm:sodinsargument}. Before stating the results, we recall the definition of $\mbox{LCD}$ as introduced by Rudelson and Vershynin. Among the minor variants of this quantity in their papers, we take the one in \cite{rudelsonvershyninicm}. For a vector $\alpha\in \mathbb{R}^{n}$ and a positive number $\gamma$, define its {\em essential least common denominator} as \[\begin{aligned} \mbox{LCD}_{\gamma}(\alpha)=\inf\left\{\theta>0{\; : \;} \mbox{dist}(\theta\alpha,\mathbb{Z}^{n})\le \min\left\{\gamma,\frac{1}{10}\|\theta\alpha\|\right\}\right\}. \end{aligned}\] With this definition, Rudelson and Vershynin proved (see Theorem~4.2 in \cite{rudelsonvershyninicm}) that if $X_{i}$ are i.i.d. random variables with $Q_{X_{1}}(1)=p<1$ and $\|\alpha\|=1$, then for $S=\<\alpha,X\>$, we have \begin{align}\label{eq:rudelsonvershyninbound} Q_{S}(\epsilon)\le C_{p}\left\{ \epsilon+\frac{1}{\mbox{LCD}_{\gamma}(\alpha)}+e^{-c_{p}\gamma^{2}}\right\}. \end{align} Here and elsewhere, one may make the choice $\gamma\asymp \sqrt{n}$ so that the term $e^{-c\gamma^{2}}$ becomes irrelevant (with $n$ discrete random variables, any non-trivial event will occur with at least $e^{-cn}$ probability). \begin{theorem}\label{thm:subgaussian} Let $X_{i}$ be i.i.d. with a sub-Gaussian distribution, i.e., $\mathbf{P}\{|X_{1}|\ge t\}\le Ce^{-ct^{2}}$.
Assume $\mathbf{E}[X_{i}]=0$. Then, for any $\alpha,\beta\in \mathbb{R}^{n}$, and any $\gamma>0$, we have \begin{align*} \mathbf{P}\{|\<\alpha,X\>|\le |\<\beta,X\>|\}\le C'\left\{\frac{\|\beta\|}{\|\alpha\|}\sqrt{\log\frac{\|\alpha\|}{\|\beta\|}}+\frac{1}{\mbox{LCD}_{\gamma}(\alpha/\|\alpha\|)} + e^{-c'\gamma^{2}}\right\}. \end{align*} where $C',c'$ depend on $C,c$. \end{theorem} A similar inequality holds under slightly milder conditions. A zero mean random variable $X$ is said to have {\em sub-exponential distribution} with parameters $(\nu,b)$ with $\nu>0,b>0$, if \[\begin{aligned} \mathbf{E}[e^{\lambda X}]\le e^{\lambda^{2}\nu^{2}/2} \;\;\; \mbox{ for }|\lambda|\le \frac{1}{b}. \end{aligned}\] This is equivalent to the finiteness of the moment generating function $M(t)=\mathbf{E}[e^{tX}]$ for $|t|\le c$ for some $c>0$ which in turn is equivalent to exponential decay of tail probabilities $\mathbf{P}\{|X_1|>t\}$ (see ~\cite{bartlett} for details). \begin{theorem}\label{thm:subexponential} Let $X_i$ be i.i.d. zero mean random variables with a sub-exponential distribution with parameters $(\nu,b)$. Then, for any $\alpha,\beta \in \mathbb{R}^{n}$, and any $\gamma>0$, we have \[\begin{aligned} \mathbf{P}\{|\<\alpha,X\>|\le |\<\beta,X\>|\}\le C'\left\{\frac{\|\beta\|}{\|\alpha\|}\log\frac{\|\alpha\|}{\|\beta\|}+\frac{1}{\mbox{LCD}_{\gamma}(\alpha/\|\alpha\|)} + e^{-c'\gamma^{2}} \right\} \end{aligned}\] where $C',c'$ depend on $\nu,b$. \end{theorem} The inequalities in these two theorems are sub-optimal, due to the presence of the logarithmic terms on the right. This comes from the fact that our proof works by separately bounding the probability that $|\<\alpha,X\>|$ is small and the probability that $|\<\beta,X\>|$ is large. 
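As a concrete instance of the sub-exponential definition above, the standard Laplace distribution satisfies the moment generating function bound; the parameters $\nu^{2}=4$, $b=2$ below are illustrative, not optimal:

```python
import math

# The standard Laplace distribution (density e^{-|x|}/2, mean zero) has
# E[e^{lambda X}] = 1/(1 - lambda^2) for |lambda| < 1.  Quick check that
# it is sub-exponential with (assumed) parameters nu^2 = 4, b = 2, i.e.
# E[e^{lambda X}] <= e^{lambda^2 nu^2 / 2} whenever |lambda| <= 1/b.
nu2, b = 4.0, 2.0

def laplace_mgf(lam):
    return 1.0 / (1.0 - lam ** 2)

ok = all(
    laplace_mgf(lam) <= math.exp(lam ** 2 * nu2 / 2)
    for lam in [k / 100 for k in range(-50, 51)]  # grid over |lambda| <= 1/b
)
```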
In case of Gaussian, or more generally log-concave densities, we are able to handle the joint distribution of $\<\alpha,X\>$ and $\<\beta,X\>$ and hence the inequalities in \eqref{eq:gaussianineq} and in Theorem~\ref{thm:logconcave} below are optimal. \begin{theorem}\label{thm:logconcave} If $X_i$ are i.i.d. with a non-degenerate log-concave density that is symmetric about $0$, then \[\begin{aligned} \mathbf{P}\{|\<\alpha,X\>|\le |\<\beta,X\>|\}\le C\frac{\|\beta\|}{\|\alpha\|} \end{aligned}\] where $C$ is a constant. \end{theorem} These three theorems and Theorem~\ref{thm:sodinsargument} below are the main results of this paper. Since log-concave densities decay exponentially, in all these theorems we have exponential decay of the tails of $X_{1}$. By taking mixtures of log-concave random variables, one can allow somewhat heavier tails, as in the following two corollaries to Theorem~\ref{thm:logconcave}. \begin{corollary}\label{cor:mixlogconcave} Let $X_{i}=\xi_{i} Y_{i}$ where $Y_{i}$ are i.i.d. with a symmetric, log-concave density, $\xi_{i}$ are i.i.d. positive random variables with $\mathbf{E}[\xi_{1}^{2}]\le B$ and $\mathbf{E}[1/\xi_{1}^{2}]\le B$ for some $B$ and $\xi_{i}$ are independent of $Y_{i}$s. Then, \[\begin{aligned} \mathbf{P}\{|\<\alpha,X\>|\le |\<\beta,X\>|\}\le CB\,\frac{\|\beta\|}{\|\alpha\|} \end{aligned}\] where $C$ is the constant in Theorem~\ref{thm:logconcave}. \end{corollary} In particular, writing a unimodal density as a mixture of uniform densities on intervals, we get the following conclusion. \begin{corollary}\label{cor:mixuniform} Let $X_{i}$ be i.i.d. with a symmetric unimodal density $f$ such that $\int t^{2}f(t)dt\le B$ and $\int_{0}^{1}t^{-3}[\mathbf{P}\{|X|\le t\}-2tf(t)]dt\le B$ for some $B$. Then, \[\begin{aligned} \mathbf{P}\{|\<\alpha,X\>|\le |\<\beta,X\>|\}\le 12C B\,\frac{\|\beta\|}{\|\alpha\|}. 
\end{aligned}\] \end{corollary} Note that the second condition on the density is satisfied by $f(t)=e^{-|t|^{1+\delta}}$ for any $\delta>0$ but not by $e^{-|t|}$. The condition restricts how sharply the density can peak at the origin. \begin{remark} One can get a variant of Corollary~\ref{cor:mixlogconcave} with the bound $\frac{1}{p_{\ell}}\mathbf{E}[(\xi_{1}^{2}+\ldots +\xi_{\ell}^{2})^{-1}]$, where $p_{\ell}$ is the $\ell$-th largest of the numbers $\alpha_{i}^{2}/\sum \alpha_{i}^{2}$. This is sometimes applicable when we have some information on $\alpha$ (e.g., that it is not dominated by a single $\alpha_{i}$). We skip the details. \end{remark} Now we state the result of Sodin referred to earlier. This is an improvement over Theorem~\ref{thm:subexponential}, except for the assumption of symmetry. \begin{theorem}\label{thm:sodinsargument} Let $X_i$ be i.i.d. zero mean random variables with a sub-exponential distribution with parameters $(\nu,b)$. Assume that the distribution of the $X_i$s is symmetric about zero. Then, for any $\alpha,\beta \in \mathbb{R}^{n}$, and any $\gamma>0$, we have \begin{align*} \mathbf{P}\{|\<\alpha,X\>|\le |\<\beta,X\>|\}\le C'\left\{\frac{\|\beta\|}{\|\alpha\|}+\frac{1}{\mbox{LCD}_{\gamma}(\alpha/\|\alpha\|)} + e^{-c'\gamma^{2}} \right\} \end{align*} where $C',c'$ depend on the distribution of $X_1$. \end{theorem} \section{Proof of the inequality for Gaussians}\label{sec:gaussian} We prove \eqref{eq:gaussianineq} in this section. Let $U'=\<\alpha,X\>$ and $V'=\<\beta,X\>$. Let $\rho=\frac{\<\alpha,\beta\>}{\|\alpha\|\|\beta\|}$ and let $\xi,\eta$ be i.i.d. standard Gaussians. For simplicity of notation, let $\theta=\|\beta\|/\|\alpha\|$. Then $(U',V')$ has the same joint distribution as $(U,V)$ where $V=\|\beta\|\xi$ and $U=\|\alpha\|(\rho \xi+\sqrt{1-\rho^{2}}\eta)$.
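This representation is a routine check, spelled out here for completeness: both pairs are centered bivariate Gaussians (assuming, as in \eqref{eq:gaussianineq}, that the $X_i$ are standard Gaussians), so it suffices to match second moments.

```latex
\mathbf{E}[V^{2}] = \|\beta\|^{2} = \mathbf{E}[(V')^{2}], \qquad
\mathbf{E}[U^{2}] = \|\alpha\|^{2}\left(\rho^{2}+(1-\rho^{2})\right) = \|\alpha\|^{2} = \mathbf{E}[(U')^{2}],
\]
\[
\mathbf{E}[UV] = \|\alpha\|\|\beta\|\,\mathbf{E}\big[(\rho\xi+\sqrt{1-\rho^{2}}\,\eta)\,\xi\big]
             = \|\alpha\|\|\beta\|\,\rho = \<\alpha,\beta\> = \mathbf{E}[U'V'].
```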
Hence, \begin{align*} \mathbf{P}\{|U'|\le |V'|\} = \mathbf{P}\left\{\Big|\frac{\eta}{\xi}+\frac{\rho}{\sqrt{1-\rho^{2}}}\Big|\le \frac{\theta}{\sqrt{1-\rho^{2}}}\right\}=\mathbf{P}\left\{\frac{\eta}{\xi}\in [a-\ell,a+\ell]\right\} \end{align*} where $a=\rho/\sqrt{1-\rho^{2}}$ and $\ell=\theta/\sqrt{1-\rho^{2}}$. Now, $\eta/\xi$ has the Cauchy distribution, whose density $1/\pi(1+t^{2})$ is unimodal and has maximum value $1/\pi$. Hence, \begin{align}\label{eq:bdforcauchyinterval} \mathbf{P}\left\{\frac{\eta}{\xi}\in [a-\ell,a+\ell]\right\}\le \begin{cases} \frac{2\ell}{\pi} & \mbox{ for any }a,\ell, \\ \frac{2\ell}{\pi (a-\ell)^{2}} &\mbox{ if }a-\ell>0, \\ \frac{2\ell}{\pi (a+\ell)^{2}} &\mbox{ if }a+\ell<0. \end{cases} \end{align} If $\rho^{2}\le 1-\frac{1}{\pi^{2}}$, we use the first bound in \eqref{eq:bdforcauchyinterval} to get \[\begin{aligned} \mathbf{P}\{|U'|\le |V'|\} \le \frac{2\theta}{\pi\sqrt{1-\rho^{2}}}\le 2\theta. \end{aligned}\] If $\rho^{2}>1-\frac{1}{\pi^{2}}$, then use the second or third bound in \eqref{eq:bdforcauchyinterval} (depending on whether $\rho>0$ or $\rho<0$) to get \[\begin{aligned} \mathbf{P}\{|U'|\le |V'|\}\le \frac{2\sqrt{1-\rho^{2}}}{\pi(|\rho|-\theta)^{2}}\theta. \end{aligned}\] We may assume $\theta\le \frac12$ (otherwise $2\theta$ is a trivial bound for any probability). Then, checking numerically that $(|\rho|-\theta)^{2} \ge 0.15$ and $\sqrt{1-\rho^{2}}\le \frac{1}{\pi}$, we see that the right hand side of the previous inequality is smaller than $2\theta$. \section{Proofs of Theorems~\ref{thm:subgaussian} and \ref{thm:subexponential}} \begin{proof}[Proof of Theorem~\ref{thm:subgaussian}] As $X_i$ are i.i.d. sub-Gaussian, by a version of Bernstein's inequality (see Theorem~3.3 in \cite{rudelsonnotes}), for any $t>0$, we have \begin{equation} \mathbf{P}\{|\<\beta,X\>|\ge t\} \le Ce^{-c\frac{t^{2}}{\|\beta\|^{2}}}.
\end{equation} Next, using the Rudelson-Vershynin inverse Littlewood-Offord result~\eqref{eq:rudelsonvershyninbound}, we have $$ \mathbf{P}\left\{|\<\alpha,X\>|\leq t\right\}\leq c_1\left\{\frac{1}{\mbox{LCD}_{\gamma}(\alpha/\|\alpha\|)}+\frac{t}{\|\alpha\|}\right\}+c_2e^{-c_3\gamma^2}. $$ Hence, $$ \mathbf{P}\{|\<\alpha,X\>|\le |\<\beta,X\>|\}\leq Ce^{-c\frac{t^2}{\|\beta\|^2}}+c_1\left\{\frac{1}{\mbox{LCD}_{\gamma}(\alpha/\|\alpha\|)}+\frac{t}{\|\alpha\|}\right\}+c_2e^{-c_3\gamma^2}. $$ Choose $t=\frac{1}{\sqrt{c}}\|\beta\|\sqrt{\log\frac{\|\alpha\|}{\|\beta\|}}$, and we get $$ \mathbf{P}\{|\<\alpha,X\>|\le |\<\beta,X\>|\}\lesssim \frac{\|\beta\|}{\|\alpha\|}+\frac{\|\beta\|}{\|\alpha\|}\sqrt{\log\frac{\|\alpha\|}{\|\beta\|}}+\frac{1}{\mbox{LCD}_{\gamma}(\alpha/\|\alpha\|)}+ e^{-c\gamma^{2}}. $$ We shall always take $\|\beta\|<\|\alpha\|$, so that the bound in the statement of the theorem follows. \end{proof} In proving Theorem~\ref{thm:subexponential}, we shall use the key concentration property \[\begin{aligned} \mathbf{P}\{|X|\ge t\}\le\begin{cases} e^{-t^{2}/2\nu^{2}} &\mbox{ if }0\le t\le \nu^{2}/b,\\ e^{-t/2b} &\mbox{ if }t>\nu^{2}/b. \end{cases} \end{aligned}\] This well-known inequality (essentially due to Bernstein) may be worked out from the exercises on page 205 of Uspensky's book~\cite{uspensky}. For a more easily accessible reference, see \cite{bartlett}. For the following proof, we introduce the notation $\beta_{\max}=\max_{i\le n}|\beta_{i}|$. \begin{proof}[Proof of Theorem~\ref{thm:subexponential}] As the $X_i$ are i.i.d. sub-exponential with parameters $(\nu,b)$, the $\beta_iX_i$ are independent sub-exponential with parameters $(|\beta_i|\nu,|\beta_i|b)$, and $\< \beta,X\>$ is sub-exponential with parameters $(\nu^*,b^*)$ where $b^*=b\beta_{\max}$ and $\nu^*=\nu \|\beta\|$.
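For completeness, the composition of sub-exponential parameters used here follows from independence:

```latex
\mathbf{E}\big[e^{\lambda\<\beta,X\>}\big]
  = \prod_{i=1}^{n}\mathbf{E}\big[e^{\lambda\beta_{i}X_{i}}\big]
  \le \prod_{i=1}^{n} e^{\lambda^{2}\beta_{i}^{2}\nu^{2}/2}
  = e^{\lambda^{2}(\nu\|\beta\|)^{2}/2},
```

valid whenever $|\lambda\beta_{i}|\le 1/b$ for every $i$, that is, for $|\lambda|\le 1/(b\beta_{\max})$.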
Hence, \begin{align}\label{eq:boundsforsumbetax} \mathbf{P}\{|\<\beta,X\>|\geq t_0\}\leq \left\{ \begin{array}{ll} e^{-\frac{t_0^2}{2\nu^2\|\beta\|^2}} & \mbox{ if } 0\leq t_0\leq \frac{\nu^2\|\beta\|^2}{b\beta_{\max}} ,\\ e^{-\frac{t_0}{2b\beta_{\max}}} & \mbox{ if } t_0 >\frac{\nu^2\|\beta\|^2}{b\beta_{\max}}. \end{array} \right. \end{align} Again, using Rudelson-Vershynin's inverse Littlewood-Offord result, we have \begin{align}\label{eq:boundforalpXterm} \mathbf{P}\left\{\frac{|\<\alpha,X\>|}{\|\alpha\|}\leq \frac{t_0}{\|\alpha\|}\right\}\leq C_1\left\{\frac{1}{\mbox{LCD}_{\gamma}(\alpha/\|\alpha\|)}+\frac{t_0}{\|\alpha\|}\right\}+C_2e^{-c_3\gamma^2}. \end{align} When $\frac{\beta_{\max}}{\|\beta\|}\sqrt{\log\frac{\|\alpha\|}{\|\beta\|}}\leq \frac{\nu}{\sqrt{2}b}$, put $t_0=\sqrt{2}\nu \|\beta\|\sqrt{\log\frac{\|\alpha\|}{\|\beta\|}}$ and use the first inequality in \eqref{eq:boundsforsumbetax}. That term becomes $\|\beta\|/\|\alpha\|$. Adding it to \eqref{eq:boundforalpXterm} gives us the bound \[\begin{aligned} \mathbf{P}\{|\<\alpha,X\>|\le |\<\beta,X\>|\}&\le \frac{\|\beta\|}{\|\alpha\|} + C_1\left\{\frac{1}{\mbox{LCD}_{\gamma}(\alpha/\|\alpha\|)}+\sqrt{2}\nu\frac{\|\beta\|}{\|\alpha\|}\sqrt{\log\frac{\|\alpha\|}{\|\beta\|}}\right\}+C_2e^{-c_3\gamma^2} \\ &\le C'\left\{\frac{\|\beta\|}{\|\alpha\|}\sqrt{\log\frac{\|\alpha\|}{\|\beta\|}}+\frac{1}{\mbox{LCD}_{\gamma}(\alpha/\|\alpha\|)} + e^{-c'\gamma^{2}} \right\}, \end{aligned}\] which is better than we claimed, because of the square root on the logarithmic factor. When $\frac{\beta_{\max}}{\|\beta\|}\sqrt{\log\frac{\|\alpha\|}{\|\beta\|}}> \frac{\nu}{\sqrt{2}b}$, put $t_0=2b\beta_{\max} \log \frac{\|\alpha\|}{\|\beta\|}$ and use the second inequality in \eqref{eq:boundsforsumbetax}. That term is again $\|\beta\|/\|\alpha\|$.
Adding it to \eqref{eq:boundforalpXterm} gives us the bound \[\begin{aligned} \mathbf{P}\{|\<\alpha,X\>| \le |\<\beta,X\>|\} &\le \frac{\|\beta\|}{\|\alpha\|}+C_1\left\{\frac{1}{\mbox{LCD}_{\gamma}(\alpha/\|\alpha\|)}+2b\frac{\beta_{\max}}{\|\alpha\|}\log \frac{\|\alpha\|}{\|\beta\|}\right\}+C_2e^{-c_3\gamma^2}\\ &\le C'\left\{\frac{\|\beta\|}{\|\alpha\|}\log\frac{\|\alpha\|}{\|\beta\|}+\frac{1}{\mbox{LCD}_{\gamma}(\alpha/\|\alpha\|)} + e^{-c'\gamma^{2}} \right\}, \end{aligned}\] where the last inequality uses $\beta_{\max}\le \|\beta\|$. This completes the proof. \end{proof} \section{Proof of Theorem~\ref{thm:logconcave}} A probability distribution $\mu$ on $\mathbb{R}^{2}$ is said to be isotropic if it has zero mean and identity covariance, i.e., \[\begin{aligned} \int_{\mathbb{R}^{2}}\x d\mu(\x)=0, \mbox{ and }\int_{\mathbb{R}^{2}} \x \x^{t}d\mu(\x) = \mat{1}{0}{0}{1}. \end{aligned}\] We shall use the following lemma about isotropic log-concave measures in the plane. Let $D(x,r)$ denote the open disk of radius $r$ centered at $x$. \begin{lemma}\label{lem:bivariatelogconcave} Let $p(x,y)=e^{-f(x,y)}$ be an isotropic, log-concave density on $\mathbb{R}^{2}$. Let $\mathcal{L}=\{(x,y) : p(x,y)\geq p(0,0)/2\}$. There exist numerical constants $0<a<A$ and $0<b<B$ such that $D(0,a)\subseteq \mathcal{L} \subseteq D(0,A)$ and $b\le \max\limits_{(u,v)}p(u,v)\le B$. \end{lemma} \begin{proof} This can be read off from Lemma~5.14 of Lov\'asz and Vempala~\cite{lovaszvempala} (their lemma is valid in any dimension) as follows: Part (a) of that lemma immediately gives $a=1/9$. Next, by part (d) of their lemma, $p(0,0)\ge 2^{-14}$. Integrating the density over $\mathcal L$, we see that $\mbox{area}(\mathcal L)\le 2^{15}$. If $\mathcal L$ intersects $\partial D(0,A)$ at a point $\x$, then by convexity (draw the tangents from $\x$ to the circle $\partial D(0,a)$ and join $\x$ and these points of tangency to the origin to get two right-angled triangles) its area is at least $a\sqrt{A^{2}-a^{2}}$.
Hence, we must have $A\le a^{-1}2^{16}$. Lastly, by the already quoted bound, we may take $b=2^{-14}$ and $B=2/\pi a^{2}$ (the latter because the density is at least $p(0,0)/2$ on $D(0,a)$). \end{proof} \para{Sketch of an alternate argument} If one does not care about explicit constants, it is also possible to prove Lemma~\ref{lem:bivariatelogconcave} by a compactness argument. We explain it for the existence of the number $a>0$. It is clear that for any single isotropic, log-concave density there is an $a>0$ that works; what is non-trivial is the uniform choice of the constant. Now suppose there is no such uniform constant $a>0$. Then we may take a sequence of isotropic, log-concave densities $p_{n}$ such that $p_{n}(x_{n})\le p_{n}(0,0)/2$ with $|x_{n}|\le 1/n$. Since a rotation of an isotropic log-concave density is again isotropic and log-concave, we may assume that $x_{n}=(1/n,0)$. The space of log-concave measures is closed under weak convergence (Proposition~3.6 of \cite{saumardwellner}), hence, passing to a subsequence, we may assume that the measures $p_{n}(x)dx$ converge weakly to a log-concave measure $\mu$. For log-concave measures, weak convergence implies convergence of all moments (Corollary~6 in the arXiv version of \cite{meckeses}), hence $\mu$ is isotropic. But now, the density of $\mu$ must vanish on the positive $x$-axis, which contradicts the existence of such a constant $a>0$ for $\mu$ itself. This shows the existence of a uniform constant $a>0$ as claimed. Similarly one can argue for the existence of $A$, $b$ and $B$. Now we turn to the proof of Theorem~\ref{thm:logconcave}. \para{Claim} If Theorem~\ref{thm:logconcave} holds when $\<\alpha,\beta\>=0$, then it holds for any $\alpha,\beta$. \begin{proof} Given any $\alpha,\beta$ (not necessarily orthogonal), write $\beta=a\alpha + \gamma$ where $\<\alpha,\gamma\>=0$.
Since $|\langle \beta,x \rangle| \leq |a||\langle \alpha,x\rangle| + |\langle \gamma,x \rangle |$, we get $$\mathbf{P}(|\langle\alpha,x\rangle| \leq |\langle\beta,x\rangle|) \leq \mathbf{P} \left(|\langle\alpha,x\rangle|\leq \frac{|\langle\gamma,x\rangle|}{1-|a|}\right) \leq C\frac{ \|\gamma\|}{\|\alpha\|(1-|a|)},$$ where the last inequality holds because $\langle\alpha,\frac{\gamma}{1-|a|}\rangle=0$ and, by assumption, the theorem holds when the inner product is $0$. Now, since $\|\beta\|^2=|a|^2\|\alpha\|^2 +\|\gamma\|^2$, we have $\|\gamma\| \leq \|\beta\|$ and $|a| \leq \|\beta\|/\|\alpha\| <1/10$ (we may assume the latter without loss of generality: otherwise, take the constant $C'$ below to be greater than $10$, so that the right hand side exceeds $1$ and the inequality holds trivially). Hence \[\begin{aligned} C \frac{\|\gamma\|}{\|\alpha\|(1-|a|)} \leq C'\frac{ \|\beta\|}{\|\alpha\|}. \end{aligned}\] Thus, it suffices to prove Theorem~\ref{thm:logconcave} when $\langle\alpha,\beta\rangle=0$. \end{proof} Now we prove the theorem for orthogonal $\alpha,\beta$. \begin{proof}[Proof of Theorem~\ref{thm:logconcave} when $\<\alpha,\beta\>=0$] If $\alpha,\beta$ are orthogonal and non-zero vectors, then define $U=\<\alpha,X\>/\|\alpha\|$ and $V=\<\beta,X\>/\|\beta\|$. After rescaling the $X_i$ to have unit variance (which does not change the event in question), $(U,V)$ has an isotropic, log-concave distribution. Hence, \[\begin{aligned} \mathbf{P}\{|\<\alpha,X\>|\le |\<\beta,X\>|\}=\mathbf{P}\{(U,V)\in S\} \end{aligned}\] where $S=\{(u,v){\; : \;} \frac{|u|}{|v|}\le \|\beta\|/\|\alpha\|\}$. Note that $S$ is a union of two sectors in the plane, each with an angle of $2\theta$, where $\tan \theta=\|\beta\|/\|\alpha\|$. By Lemma~\ref{lem:bivariatelogconcave} and the log-concavity of $p$, we have the bound $p(u,v)\le p(0,0)2^{-k}\le B2^{-k}$ for $(u,v)\not\in D(0,kA)$ and $k\ge 1$. On $D(0,A)$ we use the bound $p(u,v)\le B$.
Hence, decomposing the plane into the disk $D(0,A)$ and the annuli $D(0,(k+1)A)\setminus D(0,kA)$ for $k\ge 1$, and using that $\mbox{area}(S\cap D(0,r))=2\theta r^{2}$ (two sectors of total angle $4\theta$), \[\begin{aligned} \mathbf{P}\{(U,V)\in S\}&\le \sum_{k=0}^{\infty} B2^{-k}\,\mbox{area}(S\cap D(0,(k+1)A))\\ &= 2\theta BA^{2}\sum_{k=0}^{\infty}(k+1)^{2}2^{-k} \\ &\le C\theta \end{aligned}\] for some $C$. As $\theta \le \tan\theta$ and $\tan\theta=\frac{\|\beta\|}{\|\alpha\|}$, we get $\mathbf{P}\{(U,V)\in S\}\le C\frac{\|\beta\|}{\|\alpha\|}$. \end{proof} \section{Proofs of Corollary~\ref{cor:mixlogconcave} and Corollary~\ref{cor:mixuniform}} \begin{proof}[Proof of Corollary~\ref{cor:mixlogconcave}] Write $X_{i}=\xi_{i}Y_{i}$ where $Y_{i}$ are i.i.d. with a log-concave distribution. Condition on the $\xi_{i}$s and apply Theorem~\ref{thm:logconcave} to get \begin{align}\label{eq:conditionandtakeexpt} \mathbf{P}\left\{|\<\alpha,X\>|\le |\<\beta,X\>|\right\} &\le C\,\mathbf{E}\left[\sqrt{\frac{\sum_{i=1}^{n}\beta_{i}^{2}\xi_{i}^{2}}{\sum_{i=1}^{n} \alpha_{i}^{2}\xi_{i}^{2}}} \right]\le C\sqrt{\mathbf{E}\left[\sum_{i=1}^{n}\beta_{i}^{2}\xi_{i}^{2}\right]}\sqrt{\mathbf{E}\left[\frac{1}{\sum_{i=1}^{n}\alpha_{i}^{2}\xi_{i}^{2}}\right]} \end{align} by the Cauchy-Schwarz inequality. Now, by the bound $\mathbf{E}[\xi_{i}^{2}]\le B$, we get \[\begin{aligned} \mathbf{E}\left[\sum_{i=1}^{n}\beta_{i}^{2}\xi_{i}^{2}\right] \;\le\; B\sum_{i=1}^{n}\beta_{i}^{2} \; = \; B\|\beta\|^{2}. \end{aligned}\] By Jensen's inequality applied to the convex function $x\mapsto 1/x$, we get \[\begin{aligned} \mathbf{E}\left[\frac{1}{\sum_{i=1}^{n}\alpha_{i}^{2}\xi_{i}^{2}}\right] \; \le \; \frac{1}{\|\alpha\|^{2}}\sum_{i=1}^{n}\frac{\alpha_{i}^{2}}{\|\alpha\|^{2}} \mathbf{E}[1/\xi_{i}^{2}] \; \le \; \frac{B}{\|\alpha\|^{2}}. \end{aligned}\] Using these bounds, we see that the right hand side of \eqref{eq:conditionandtakeexpt} is at most $CB\|\beta\|/\|\alpha\|$. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:mixuniform}] For $0<y<f(0)$, let $g(y)$ be the length of the interval $\{s{\; : \;} f(s)\ge y\}$.
Then, $g$ is a density (evaluate the area under $f$ by integrating over the $x$-coordinate first and then over the $y$-coordinate). Further, let $H$ be a random variable with density $g$, let $\xi=g(H)$ be the length of the level set of $f$ at height $H$, and let $Y$ have the $\mbox{Uniform}[-1/2,1/2]$ density, independent of $H$. Then $\xi Y$ has the density $f$: this is Khinchine's representation of symmetric unimodal densities, since given $H=h$, a point sampled from $f$ and conditioned to lie in the level set $\{f\ge h\}$ is uniform on an interval of length $g(h)$ centered at the origin. Since $Y$ is log-concave and independent of $\xi$, we can apply Corollary~\ref{cor:mixlogconcave} to get the conclusion we want, {\em if} $\mathbf{E}[\xi^{2}]$ and $\mathbf{E}[\xi^{-2}]$ are finite. Since $\mathbf{E}[Y^{2}]=1/12$, we see that $\mathbf{E}[\xi^{2}]=12\int t^{2}f(t)dt$. Further, using $\mathbf{P}\{\xi<2t\}=\mathbf{P}\{|X|\le t\}-2tf(t)$ (which follows from $f(t)=\mathbf{E}[\xi^{-1}{\mathbf 1}_{\xi\ge 2t}]$ for $t>0$) and the substitution $t\mapsto 2t$, \[\begin{aligned} \mathbf{E}[\xi^{-2}]=\int_{0}^{\infty}\mathbf{P}\{\xi<t\}\frac{2}{t^{3}}dt = \frac{1}{2}\int_{0}^{\infty}\frac{1}{t^{3}}[\mathbf{P}\{|X|\le t\}-2tf(t)] dt. \end{aligned}\] Hence the conditions in the statement of the corollary ensure that $\mathbf{E}[\xi^{2}]$ and $\mathbf{E}[\xi^{-2}]$ are finite (the last integrand is at most $t^{-3}$, so the part of the integral over $t\ge 1$ is finite), and the conclusion follows. \end{proof} \section{Sodin's proof of Theorem~\ref{thm:sodinsargument}} By scaling $\alpha$ and $\beta$ to have unit norm, we recast the theorem in the following equivalent form: Let $U=\<\alpha,X\>$ and $V=\<\beta,X\>$ where $\|\alpha\|=\|\beta\|=1$. Then there are constants $C,c$ depending on the distribution of $X_1$ such that for any $\gamma>0$, we have \begin{align}\label{eq:equivalentformoftheorem7} \mathbf{P}\left\{|U|\le \epsilon |V|\right\}\le C\left\{\epsilon +\frac{1}{\mbox{LCD}_{\gamma}(\alpha)}+e^{-c\gamma^2}\right\}\;\;\; \mbox{ for any }\epsilon>0. \end{align} We may also replace $X_i$ by $X_i/b$ and assume that they are sub-exponential with parameters $(\nu,1)$. Thus if $\varphi(\lambda)=\mathbf{E}[e^{i\lambda X_1}]$ denotes the characteristic function of the $X_i$s and $M(\lambda)=\varphi(-i\lambda)$ denotes the moment generating function, then $M(\lambda)\le e^{\lambda^2\nu^2/2}$ for $|\lambda|\le 1$.
As stated in \eqref{eq:boundsforsumbetax}, since $\beta_{\max}\le \|\beta\|=1$, this implies that \begin{align}\label{eq:tailboundforsumofsubexprvs} \mathbf{P}\{|V|>u\}\le \begin{cases} e^{-\frac{u^2}{2\nu^2}} & \mbox{ for } 0\le u\le \nu^2, \\ e^{-\frac{u}{2}} & \mbox{ for }u>\nu^2.\end{cases} \end{align} Fix $\epsilon>0$ and break the event in \eqref{eq:equivalentformoftheorem7} as follows. \begin{align*} \mathbf{P}\{|U|<\epsilon |V|\}\le \mathbf{P}\{|U|<\epsilon\}+\sum_{k=0}^{\infty} \mathbf{P}\left\{|U|<2^{k+1}\epsilon, \ 2^k\le |V|\le 2^{k+1}\right\}. \end{align*} By the Rudelson-Vershynin inequality \eqref{eq:rudelsonvershyninbound}, the first event can be controlled as \begin{align}\label{eq:boundforsmallU} \mathbf{P}\{|U|<\epsilon\}\le C\left\{ \epsilon+\frac{1}{L_{\gamma}(\alpha)}+e^{-c\gamma^{2}}\right\}, \end{align} where we have written $L_{\gamma}$ for $\mbox{LCD}_{\gamma}(\alpha)$, for simplicity of notation. We claim that for any $R\ge 1$ \begin{align}\label{eq:boundforintermediateUV} \mathbf{P}\left\{|U|\le \epsilon R, \ V>R\right\}\le C e^{-R}\left\{\epsilon R + \frac{1}{L_{\gamma}} + e^{-c\gamma^2}\right\}. \end{align} An identical bound holds for $\mathbf{P}\{|U|\le \epsilon R, \ V<-R\}$ by symmetry. Summing these estimates over $R=2^k$ (and changing $\epsilon$ to $2\epsilon$) we get \begin{align*} \sum_{k=0}^{\infty} \mathbf{P}\left\{|U|<2^{k+1}\epsilon, \ 2^k\le |V|\le 2^{k+1}\right\} \le C\left\{\epsilon+\frac{1}{L_{\gamma}}+e^{-c\gamma^2}\right\}. \end{align*} Adding this to \eqref{eq:boundforsmallU}, we get \eqref{eq:equivalentformoftheorem7}. Thus, only the proof of \eqref{eq:boundforintermediateUV} remains. \noindent{\em Proof of \eqref{eq:boundforintermediateUV}:} If $|U|<\epsilon R$ and $V>R$, then $V-\frac{1}{(\epsilon R)^2}U^2\ge R-1$.
Therefore, \begin{align*} \mathbf{P}\left\{|U|<\epsilon R , V>R\right\}&\le e^{1-R}\mathbf{E}\left[ e^{V-\frac{1}{(\epsilon R)^2}U^2}\right] \\ &=\frac{1}{ \sqrt{\pi}}e^{1-R} \int_{\mathbb{R}}\mathbf{E}\left[ e^{V+2i\frac{x}{\epsilon R}U}\right] e^{-x^2}dx \end{align*} using the identity $\int_{\mathbb{R}}e^{2itx- x^2}dx=\sqrt{\pi}e^{-t^2}$ and interchanging the integral and expectation. Write $V+2i\frac{x}{\epsilon R}U = \sum_{k=1}^n \left(\beta_k+2i\frac{x\alpha_k}{\epsilon R}\right)X_k$ to see that \begin{align*} \mathbf{E}\left[ e^{V+2i\frac{x}{\epsilon R}U}\right] &=\prod\limits_{k=1}^n\varphi\left(-i\beta_k+2\frac{\alpha_k x}{\epsilon R}\right) \\ &= \prod\limits_{k=1}^n M(\beta_k) \; \prod\limits_{k=1}^n\varphi_k\left(\frac{2\alpha_k x}{\epsilon R}\right) \end{align*} where $\varphi_k(\lambda):=\frac{1}{M(\beta_k)}\varphi\left(\lambda-i\beta_k\right)$ is the characteristic function of the exponentially tilted measure $dF_k(x):=\frac{1}{M(\beta_k)}e^{\beta_k x }dF(x)$, with $F$ being the distribution of $X_1$. As $|\beta_k| \le 1$, we have $M(\beta_k)\le e^{\nu^2\beta_k^2/2}$ for each $k$. Using $\|\beta\|=1$, the product of the $M(\beta_k)$ over $k$ is at most $e^{\nu^2/2}$. Consequently, writing $t_k=2\alpha_k/\epsilon R$, \begin{align}\label{eq:boundintermsofintegralofproductofcfs} \mathbf{P}\left\{|U|<\epsilon R , V>R\right\}&\le C e^{-R}\int\limits_{\mathbb{R}} \prod\limits_{k=1}^n \left|\varphi_k\left(t_k x\right)\right| \; e^{-x^2}dx. \end{align} We introduce some notation. Let $Y_k,Y_k'$ denote independent random variables with distribution $F_k$ and let $W_k=Y_k-Y_k'$. Then $|\varphi_k|^2$ is the characteristic function of $W_k$. Fix $\delta>0$ and $p<1$ such that $Q_{X_1}(\delta)\le p$. Let $q_k=\mathbf{P}\{|W_k|\ge \delta\}$. Then \begin{align*} \log |\varphi_k(t)|^2 &\le - (1-|\varphi_k(t)|^2) \\ &=-\mathbf{E}[1-\cos(tW_k)] \\ &\le -q_k\mathbf{E}\left[1-\cos(tW_k)\ \pmb{\big|} \ |W_k|\ge \delta\right].
\end{align*} By Lemma~\ref{lem:tiltedanduntilted} and its Corollary~\ref{cor:tiltedanduntilted}, which are proved later, using the bound $M(t)\le M(1)$ for $|t|\le 1$, we deduce that there are positive constants $q$ and $\tau$ depending only on $F$ such that for all $k$ and for all $s$ we have \begin{align*} q_k\ge q \;\;\mbox{ and }\;\; \mathbf{E}[1-\cos(sW_k)\ \pmb{\big|} \ |W_k|\ge \delta]\ge \tau \mathbf{E}[1-\cos(sW)\ \pmb{\big|} \ |W|\ge \delta], \end{align*} where $W=X_1-X_1'$ (the analogue of $W_k$ but for the untilted random variable). Using these uniform estimates in \eqref{eq:boundintermsofintegralofproductofcfs}, we arrive at \begin{align*} \mathbf{P}\left\{|U|<\epsilon R , V>R\right\}&\le C e^{-R}\int_{\mathbb{R}}\exp\left\{-\frac{q\tau}{2} \mathbf{E}\left[ \sum\limits_{k=1}^n(1-\cos(t_kxW)) \left.\vphantom{\hbox{\Large (}}\right| |W|\ge \delta\right] \right\} e^{-x^2}\ dx \\ &\le C e^{-R}\int_{\mathbb{R}} \mathbf{E}\left[\exp\left\{-\frac{q\tau}{2} \sum\limits_{k=1}^n(1-\cos(t_kxW))\right\} \left.\vphantom{\hbox{\Large (}}\right| |W|\ge \delta \right]e^{-x^2}dx \end{align*} by Jensen's inequality. Now interchange the conditional expectation with the integral and then replace the conditional expectation over $|W|\ge \delta$ by the maximum over $|W|\ge \delta$. That gives us \begin{align}\label{eq:boundintermsofcosines} \mathbf{P}\left\{|U|<\epsilon R , V>R\right\} &\le C e^{-R}\sup_{|w|\ge \delta} \int\limits_{\mathbb{R}} \exp\left\{-\frac{q\tau}{2} \sum\limits_{k=1}^n(1-\cos(t_kxw)) -x^2\right\} dx. \end{align} From this point, the arguments are virtually identical to those of Friedland and Sodin~\cite{friedlandsodin} (one small difference is that their version of the LCD is not the same as ours). Since $1-\cos(\theta)\ge 8\ \mbox{dist}^2(\frac{\theta}{2\pi},\mathbb{Z})$, we have $ \sum_{k=1}^n(1-\cos(t_kxw)) \ge 8 \ \mbox{dist}^2\left(\frac{xw}{\pi \epsilon R}\alpha,\mathbb{Z}^n\right)$.
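The elementary inequality $1-\cos\theta\ge 8\,\mbox{dist}^2(\frac{\theta}{2\pi},\mathbb{Z})$ used in the last step follows from $|\sin x|\ge \frac{2}{\pi}\,\mbox{dist}(x,\pi\mathbb{Z})$:

```latex
1-\cos\theta \;=\; 2\sin^{2}\!\Big(\frac{\theta}{2}\Big)
\;\ge\; 2\cdot\frac{4}{\pi^{2}}\,\mbox{dist}^{2}\Big(\frac{\theta}{2},\pi\mathbb{Z}\Big)
\;=\; \frac{8}{\pi^{2}}\cdot\pi^{2}\,\mbox{dist}^{2}\Big(\frac{\theta}{2\pi},\mathbb{Z}\Big)
\;=\; 8\,\mbox{dist}^{2}\Big(\frac{\theta}{2\pi},\mathbb{Z}\Big).
```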
Fix $w\ge \delta$ (an identical argument applies to $w\le -\delta$) and use this bound in the integral above to write \begin{align} \label{eq:boundonintegralforfixedw} \int\limits_{\mathbb{R}} \exp\left\{-\frac{q\tau}{2} \sum\limits_{k=1}^n(1-\cos(t_kxw)) -x^2\right\} dx &\le \int\limits_{\mathbb{R}} \exp\left\{-4 q\tau \mbox{dist}^2\left(\frac{xw}{\pi \epsilon R}\alpha,\mathbb{Z}^n\right)\right\} e^{ -x^2} dx \nonumber \\ =8q\tau & \int_{0}^{\infty}\mu\left\{x{\; : \;} \mbox{dist}\left(\frac{xw}{\pi \epsilon R}\alpha,\mathbb{Z}^n\right)\le z\right\} \ z e^{-4q\tau z^2} \; dz \end{align} where $\mu$ is the measure $e^{-x^2}dx$ on the line (the last equality is by the well-known principle $\int f d\mu=\int_0^{\infty}\mu\{f>t\}dt$ for non-negative $f$). Let $I(z):=\{x\in \mathbb{R}{\; : \;} \mbox{dist}\left(\frac{xw}{\pi \epsilon R}\alpha,\mathbb{Z}^n\right)\le z\}$. For $z\le \frac12 \gamma$, we now show that $I(z)$ is a union of well-separated short intervals. Indeed, if $x,y\in I(z)$, then $\frac{(x-y)w}{\pi \epsilon R}\alpha$ is within distance $2z$ of $\mathbb{Z}^n$. Hence, by the definition of $\mbox{LCD}$, we must have \begin{align*} \mbox{ either } \;\;\; \frac{|x-y|w}{\pi \epsilon R}\ge L_{2z}\ge L_{\gamma} \;\;\; \mbox{ or } \;\;\; \frac{1}{10}\frac{|x-y|w}{\pi \epsilon R}\le 2z. \end{align*} Therefore, $I(z)$ is contained in a union of intervals $I_j$, $j\in \mathbb{Z}$, such that \begin{inparaenum}[(a)]\item $I_j$ lies to the left of $I_{j+1}$, \item each $I_j$ has length at most $\frac{20 \pi \epsilon R z}{w}$ and \item $I_j$ and $I_{j+1}$ are at distance at least $\frac{\pi \epsilon R}{w}L_{\gamma}$ from each other. \end{inparaenum} Indexing them so that $I_0$ is the closest among the $I_j$'s to the origin, we see that $I_j$ is at a distance of at least $(|j|-1)\frac{\pi \epsilon R}{w}L_{\gamma}$ from the origin.
Thus, \begin{align*} \mu\{I(z)\} &\le \sum_{j\in\mathbb{Z}} \int\limits_{I_j}e^{-x^2}dx \\ &\le \frac{20\pi \epsilon R z}{w}\left(1+2\sum_{j=1}^{\infty} \exp\left\{-\frac{1}{w^2}(|j|-1)^2 \pi^2 \epsilon^2 R^2 L_{\gamma}^2 \right\}\right). \end{align*} By a standard comparison of the sum to the integral, we get \begin{align*} \mu\{I(z)\} &\le \frac{20\pi \epsilon R z}{w} \left(1+2\int_0^{\infty} \exp\left\{-\frac{1}{w^2}u^2 \pi^2 \epsilon^2 R^2 L_{\gamma}^2\right\} du \right) \\ &= \frac{20\pi \epsilon R z}{w} \left(1+\frac{\sqrt{\pi} w}{ \pi \epsilon R L_{\gamma}} \right) \\ &\le 70 z\left(\frac{\epsilon R}{w} +\frac{1}{L_{\gamma}} \right). \end{align*} Plugging this bound (and the trivial bound $\mu\{I(z)\}\le \mu(\mathbb{R}) = \sqrt{\pi}$ for $z\ge \frac12 \gamma$) into \eqref{eq:boundonintegralforfixedw}, we bound that integral as \begin{align*} &\le 70 \left(\frac{\epsilon R}{w} +\frac{1}{L_{\gamma}} \right)\int\limits_{0}^{\gamma/2}8q\tau z^2 e^{-4q\tau z^2} dz +\int_{\gamma/2}^{\infty} 8q\tau z e^{-4q\tau z^2} dz & \\ &\le C \left(\frac{\epsilon R}{w}+\frac{1}{L_{\gamma}}+e^{-c\gamma^2}\right) \end{align*} where $C,c$ depend on $q$ and $\tau$. Since $w\ge \delta$, absorbing $1/\delta$ into $C$, from \eqref{eq:boundintermsofcosines} we have \begin{align*} \mathbf{P}\{|U|<\epsilon R, V>R\}\le C e^{-R}\left(\epsilon R +\frac{1}{L_{\gamma}}+e^{-c\gamma^2}\right). \end{align*} This completes the proof of \eqref{eq:boundforintermediateUV}. \hfill \qed The following lemma and its corollary were used in the proof. Its content is that the exponential tilts of a given probability distribution are uniformly comparable to the original distribution, as long as the tilting parameter is bounded. We assume symmetry here; a variant of this lemma without symmetry might enable one to remove the symmetry assumption in Theorem~\ref{thm:sodinsargument}, but we do not know how to prove such a variant.
\begin{lemma}\label{lem:tiltedanduntilted} Let $F$ be a probability distribution on the line symmetric about $0$. Let $dF_t(x)=\frac{1}{M(t)}e^{tx}dF(x)$ where $M(t)=\int e^{tx}dF(x)$. Let $\varphi:\mathbb{R}\to [0,1]$ be an even measurable function. Then, for any $t\in \mathbb{R}$ \begin{align*} \int\!\!\int \varphi(x-x')\ dF(x)dF(x') \le M(t)^{2}\int\!\!\!\int \varphi(x-x') \ dF_t(x)dF_t(x'). \end{align*} \end{lemma} \begin{proof} Write \begin{align*} &\int\!\!\!\int \varphi(x-x')\ dF(x)dF(x') = \int\!\!\int \varphi(x-x')e^{\frac12 t(x+x')} e^{-\frac12 t(x+x')}dF(x)dF(x') \\ & \;\;\; \le \left(\int\!\!\!\int \varphi(x-x')e^{ t(x+x')}dF(x)dF(x') \right)^{\frac12} \left(\int\!\!\!\int \varphi(x-x')e^{-t(x+x')}dF(x)dF(x') \right)^{\frac12}, \end{align*} by the Cauchy-Schwarz inequality. If we make the change of variables $(x,x')\mapsto (-x,-x')$ in the second integral, then the evenness of $\varphi$ and the symmetry of $F$ show that it is identical to the first integral. Thus the right hand side is equal to $\int\!\!\!\int \varphi(x-x')e^{t(x+x')}dF(x)dF(x') = M(t)^{2}\int\!\!\!\int \varphi(x-x') \ dF_t(x)dF_t(x')$. \end{proof} \begin{corollary}\label{cor:tiltedanduntilted} In the setting of Lemma~\ref{lem:tiltedanduntilted}, let $X_t,X_t'$ be i.i.d. random variables with distribution $F_t$, and let $X,X'$ be i.i.d. with distribution $F$. Let $W_t=X_t-X_t'$ and $W=X-X'$. Then $\mathbf{P}\{|W_t|\ge \delta\}\ge \frac{1}{M(t)^{2}}\mathbf{P}\{|W|\ge \delta\}$ for all $\delta>0$. \end{corollary} \begin{proof} Take $\varphi(w)={\mathbf 1}_{|w|\ge\delta}$ in the Lemma. \end{proof}
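As a closing numerical illustration (not part of the proofs), the linear scaling of Theorem~\ref{thm:logconcave} can be seen empirically with Laplace-distributed coordinates, which are symmetric and log-concave; all setup choices below are arbitrary.

```python
import numpy as np

# Illustration only: with i.i.d. Laplace (symmetric, log-concave) coordinates,
# P{|<alpha,X>| <= |<beta,X>|} should scale linearly in ||beta||/||alpha||.
rng = np.random.default_rng(1)
n, trials = 40, 200_000
alpha = rng.standard_normal(n)
beta_dir = rng.standard_normal(n)        # arbitrary direction for beta
X = rng.laplace(size=(trials, n))

estimates = []
for ratio in (0.2, 0.1, 0.05):
    beta = ratio * np.linalg.norm(alpha) / np.linalg.norm(beta_dir) * beta_dir
    p = np.mean(np.abs(X @ alpha) <= np.abs(X @ beta))
    estimates.append(p / ratio)          # should stay bounded as ratio shrinks

assert max(estimates) < 5.0              # crude check of the linear scaling
```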
https://arxiv.org/abs/1205.3460
A note on Codazzi tensors
We discuss a gap in Besse's book, recently pointed out by Merton, which concerns the classification of Riemannian manifolds admitting a Codazzi tensor with exactly two distinct eigenvalues. For such manifolds, we prove a structure theorem, without adding extra hypotheses, and then we conclude with some applications of this theory to the classification of three-dimensional gradient Ricci solitons.
\section{Introduction} For $n\geq 3$, let $(M^{n},g)$ be a smooth Riemannian manifold and consider a Codazzi tensor ${\mathrm{T}}$ on $M^{n}$, i.~e., a symmetric bilinear form satisfying the Codazzi equation $$ (\nabla_{X}{\mathrm{T}})(Y,Z)=(\nabla_{Y}{\mathrm{T}})(X,Z)\, , $$ for all tangent vectors $X,Y,Z$. In the book {\em Einstein Manifolds}~\cite{besse}, by Besse, it is proved that if a Riemannian manifold $(M^n,g)$ admits a Codazzi tensor ${\mathrm{T}}$ such that at every point of $M^n$, ${\mathrm{T}}$ has exactly two distinct eigenvalues, then \begin{itemize} \item if the constant multiplicities of the two eigenspaces are larger than one, $(M^n,g)$ is locally a Riemannian product, \item if the above multiplicities are respectively 1 and $n-1$ {\em and the trace of ${\mathrm{T}}$ is constant}, then $(M^n,g)$ is locally a warped product of an $(n-1)$--dimensional Riemannian manifold on an interval of ${\mathbb R}$. \end{itemize} For more details, we refer the reader to discussion~16.12 in~\cite{besse}. Before showing this result, Besse states "{\em ... a similar argument works without this hypothesis [that the trace of ${\mathrm{T}}$ is constant]}". Recently, in~\cite{merton}, G.~Merton provided a counterexample to the local warping structure, showing that this last statement of Besse's is false. In~\cite{merton}, he also discusses some possible extra hypotheses, weaker than the {\em trace of ${\mathrm{T}}$ being constant}, under which the local warped structure can be obtained. Our goal here is to describe, without adding extra hypotheses to Besse's statement, the local geometric structure of a Riemannian manifold admitting a Codazzi tensor with exactly two distinct eigenvalues. Essentially, the manifold may present zones where it is a warped product on an interval and zones where it is not. In this latter case, it turns out that the manifold admits a local totally geodesic foliation. This is the content of our Theorem~\ref{mainteo}.
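Two standard examples may help fix ideas (they are recalled here only for orientation and are not needed in the sequel): the metric $g$ itself is trivially a Codazzi tensor and, on a flat manifold, the Hessian of any smooth function $f$ is a Codazzi tensor, since, up to sign conventions, the Ricci identity gives

```latex
(\nabla_{X}\nabla^{2}f)(Y,Z)-(\nabla_{Y}\nabla^{2}f)(X,Z)
 \;=\; \big\langle\, {\mathrm{Rm}}(X,Y)\nabla f,\, Z \,\big\rangle \;=\; 0\,.
```

More substantial examples, such as the second fundamental forms of hypersurfaces of space forms, arise from the Codazzi--Mainardi equations recalled in the proof of Theorem~\ref{mainteo} below.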
In Section~\ref{example}, we will give an example of a Riemannian manifold where both of the situations described in our structure theorem (local warped product structure and local totally geodesic foliation) are present at the same time. Finally, in the last section, we will show how this theory of Codazzi tensors can be applied to the classification of gradient Ricci solitons. \medskip \begin{ackn} The authors are partially supported by the Italian project FIRB--IDEAS ``Analysis and Beyond''. \end{ackn} \bigskip \section{Codazzi Tensors with Two Distinct Eigenvalues} In this section we present the statement and the proof of our main theorem. \begin{teo} \label{mainteo} Let ${\mathrm{T}}$ be a Codazzi tensor on $(M^{n},g)$, with $n\geq 3$. Suppose that at every point of $M^n$, the tensor ${\mathrm{T}}$ has exactly two distinct eigenvalues $\rho$ and $\sigma$, of multiplicity 1 and $n-1$, respectively. Finally, we let $W=\{ p\in M^n \, \big| \, d\sigma(p) \neq 0 \}$. Then, we have that \begin{enumerate} \item The closed set $\overline{W}=W\cup \partial W$ with the metric $g|_{\overline{W}}$ is locally isometric to the warped product of some $(n-1)$--dimensional Riemannian manifold on an interval of ${\mathbb R}$, and $\sigma$ is constant along the "leaves" of the warped product. \item The boundary of $W$, if present, is given by the disjoint union of connected totally geodesic hypersurfaces where $\sigma$ is constant. \item Each connected component of the complement of $\overline{W}$ in $M$, if present, has $\sigma$ constant and is foliated by totally geodesic hypersurfaces. \end{enumerate} The $(n-1)$--dimensional tangent subspaces to the warping hypersurfaces in item (1) and to the totally geodesic hypersurfaces in items (2) and (3) are the eigenspaces of ${\mathrm{T}}$ with respect to $\sigma$.
\end{teo} \begin{proof} Since the Codazzi tensor ${\mathrm{T}}$ has exactly two distinct eigenvalues $\rho$ and $\sigma$ of multiplicity 1 and $n-1$, respectively, we have by Proposition~16.11 in~\cite{besse} that the tangent bundle $TM$ of $M$ splits as the orthogonal direct sum of two {\em integrable} eigendistributions: a line field $V_{\rho}$ and a codimension one distribution $V_{\sigma}$ with totally {\em umbilical} leaves, which means that the second fundamental form ${h}$ of each leaf is proportional to the metric $g^{\sigma}$ induced by $g$ on $V_{\sigma}$. To fix notation, we will denote by ${\nabla}$ the Levi--Civita connection of the metric $g$ on $M^n$ and we recall that the (scalar) second fundamental form of a leaf $L$ of the codimension one distribution $V_{\sigma}$ can be defined as $$ h(X,Y)=-g(\nabla_X Y,\nu) \, , $$ where $X$ and $Y$ are vector fields along $L$ and $\nu$ is a choice of a unit normal vector field to $L$. The fact that $L$ is umbilical means that, for every pair of vector fields $X,Y$ tangent to $L$, we have $$ h(X,Y) \, = \, \frac{{\mathrm H}}{n-1} \, g^{\sigma}(X,Y) \, , $$ where ${\mathrm H}$, the mean curvature of $L$, is defined as the trace of $h$ with respect to $g^{\sigma}$. Since $n\geq 3$, the codimension one distribution $V_\sigma$ has dimension strictly greater than one. Thus, we infer from Proposition~16.11 in~\cite{besse} that the eigenfunction $\sigma$ must be constant along the leaves of $V_\sigma$. In particular, whenever $d\sigma \neq 0$, the leaves of $V_\sigma$ are locally regular level sets of $\sigma$. To proceed, we fix a point $p \in M$ and consider a local coordinate system $(x^0,\ldots, x^{n-1})$ adapted to the leaves of $V_\sigma$ on a neighborhood $U$ of $p$. This means that $\partial/\partial x^0 \in V_\rho$ and $\partial/\partial x^j \in V_\sigma$, for $j= 1,\ldots, n-1$. 
In this chart, the unit vector field $\nu = (\partial / \partial x^0) / \sqrt{g_{00}}$ is normal to any leaf of the distribution $V_\sigma$ and, since the two eigendistributions are mutually orthogonal, we immediately get $g_{0j} = 0$ and ${\mathrm{T}}_{0j}=0$, for $j=1, \ldots,n-1$. If $L$ is the leaf of $V_\sigma$ through the point $p$, the second fundamental form of $L$ at $p$ and the umbilicity condition can be written as \begin{equation} \label{umbilic} {h}_{ij} \, = \,-\big\langle {\nabla}_{\frac{\partial\,}{\partial x^{i}}} \tfrac{\partial\,}{\partial x^{j}}, \nu\big\rangle \, = \, -\big\langle {\nabla}_{\frac{\partial\,}{\partial x^{i}}} \tfrac{\partial\,}{\partial x^{j}}, \tfrac{\partial\,}{\partial x^{0}} \big\rangle/\sqrt{g_{00}} \, = \, -{\Gamma}^{0}_{ij}\sqrt{g_{00}}= \frac{{\mathrm H}}{n-1} \, g_{ij}^\sigma\,, \end{equation} for $i,j = 1,\ldots,n-1$.\\ Denoting by ${\nabla}^{\sigma}$ the Levi--Civita connection of the induced metric $g^{\sigma}$, the Codazzi--Mainardi equations (see Theorem~1.72 in~\cite{besse}) read \begin{equation}\label{codman} \big({\nabla}^{\sigma}_{\frac{\partial\,}{\partial x^{i}}}{h}\big) \big(\tfrac{\partial\,}{\partial x^{j}},\tfrac{\partial\,}{\partial x^{k}}\big) - \big({\nabla}^{\sigma}_{\frac{\partial\,}{\partial x^{j}}}{h}\big) \big(\tfrac{\partial\,}{\partial x^{i}},\tfrac{\partial\,}{\partial x^{k}}\big) \, = \, \big\langle \,{{\mathrm {Rm}}} \big(\tfrac{\partial\,}{\partial x^{i}},\tfrac{\partial\,}{\partial x^{j}}\big) \tfrac{\partial\,}{\partial x^{k}}, \nu \, \big\rangle \,. 
\end{equation} Using the umbilicity property~\eqref{umbilic} of $L$ and tracing the left hand side of equation~\eqref{codman} with the inverse of the induced metric $(g^{\sigma})^{ik} = g^{ik}$, we get $$ g^{ik}\bigg[ \, \big({\nabla}^{\sigma}_{\frac{\partial\,}{\partial x^{i}}}{h}\big) \big(\tfrac{\partial\,}{\partial x^{j}},\tfrac{\partial\,}{\partial x^{k}}\big) - \big({\nabla}^{\sigma}_{\frac{\partial\,}{\partial x^{j}}}{h}\big) \big(\tfrac{\partial\,}{\partial x^{i}},\tfrac{\partial\,}{\partial x^{k}}\big) \,\bigg] = \, \frac{1}{n-1}\,\partial_{j}{{\mathrm H}} - \partial_{j}{{\mathrm H}} \,=\, -\frac{n-2}{n-1} \, \partial_{j}{{\mathrm H}} \, . $$ Tracing also the right hand side, we get \begin{equation}\label{eq1000} -\frac{n-2}{n-1} \, \partial_{j}{{\mathrm H}} \,=\, g^{ik} \, \big\langle \,{{\mathrm {Rm}}} \big(\tfrac{\partial\,}{\partial x^{i}},\tfrac{\partial\,}{\partial x^{j}}\big) \tfrac{\partial\,}{\partial x^{k}}, \nu \, \big\rangle \, = \, {\mathrm {Ric}} \big(\tfrac{\partial}{\partial x^{j}},\nu\big)\, = \, {\mathrm {Ric}}_{0j} /\sqrt{g_{00}}\,, \end{equation} as $g^{i0}=0$ when $i\geq 1$ and $\big\langle \,{{\mathrm {Rm}}} \big(\tfrac{\partial\,}{\partial x^{i}},\tfrac{\partial\,}{\partial x^{j}}\big) \tfrac{\partial\,}{\partial x^{k}}, \nu \, \big\rangle$ is equal to zero if $i=k=0$.\\ Now, it is a general fact (see Corollary~16.17 in~\cite{besse}) that every Codazzi tensor ${\mathrm{T}}$ commutes with the Ricci tensor, that is, $g^{kl}{\mathrm{T}}_{ik}{\mathrm {Ric}}_{lj}=g^{kl}{\mathrm {Ric}}_{ik}{\mathrm{T}}_{lj}$. In particular, $$ \rho \, {\mathrm {Ric}}_{0j} \, = \, g^{kl}{\mathrm{T}}_{0k}{\mathrm {Ric}}_{lj} \, = \, g^{kl}{\mathrm {Ric}}_{0k}{\mathrm{T}}_{lj} \, = \, \sigma \, g^{kl}{\mathrm {Ric}}_{0k}g_{lj}=\sigma \, {\mathrm {Ric}}_{0j}\,, $$ hence, ${\mathrm {Ric}}_{0j}=0$ for every $j = 1, \dots, n-1$, as $\rho \not=\sigma$ in $U$. 
We conclude by equation~\eqref{eq1000} that the mean curvature ${{\mathrm H}}$ is constant along every connected component of $L$, hence the same conclusion holds for any leaf of $V_{\sigma}$. Next, we recall from Proposition 16.11 (ii) in~\cite{besse} that the eigenvalue $\sigma$ is constant along the leaves of $V_\sigma$, thus, in our local chart, it only depends on the $x^0$ variable. Moreover, by the same proposition, one has that \begin{equation} \label{meancur} {{\mathrm H}}\, = \,\frac{1 }{\rho -\sigma}\, \frac{\partial \sigma}{\partial x^0} \,. \end{equation} From this we deduce that the connected components of the $V_\sigma$--leaves through critical points of $\sigma$ are minimal and, by umbilicity, they are also totally geodesic. This gives the description in point (3) of the (possibly empty) interior of the set where $d\sigma=0$. We now consider the open set $W\subset M$ given by the complement of the critical points of $\sigma$ in $M$. We are going to prove that $\rho$ is locally constant on the connected components of the $V_\sigma$--leaves contained in $W$. To see this, it is sufficient to take the (coordinate) derivative of both sides of relation~\eqref{meancur} with respect to $x^j$, for $j=1,\ldots,n-1$. This gives $$ 0=\partial_j{{\mathrm H}} \, = \,- \frac{1}{(\rho -\sigma)^2}\, \frac{\partial \rho}{\partial x^j} \frac{\partial \sigma}{\partial x^0} + \frac{1}{\rho -\sigma}\, \frac{\partial^2 \sigma}{\partial x^j \partial x^0} \, = \,- \frac{1}{(\rho -\sigma)^2}\, \frac{\partial \rho}{\partial x^j} \frac{\partial \sigma}{\partial x^0} \,, $$ where we used the symmetry of second derivatives together with the constancy of $\sigma$ along the $V_\sigma$--leaves. Since in our coordinates $d\sigma = \partial_0 \sigma\, dx^0$ and $d\sigma \neq 0$ in $W$, the claim follows. To conclude, we observe that the boundary of $W$ (if any) can be described as a suitable union of connected components of level sets of $\sigma$. 
By continuity, the eigenvalue $\rho$ must be locally constant also on $\partial W$. To show that $g$ has a warped product structure on $\overline{W} = W \cup \partial W$, we first observe that the condition $\partial_j \rho= 0$, for $j=1, \ldots, n-1$, combined with~\cite[Proposition 16.11--(ii)]{besse}, implies that $V_{\rho}$ is a {\em geodesic} line distribution in $\overline{W}$. This means that $\nabla_\nu \nu = 0$, which easily implies $\Gamma^{j}_{00}=0$, hence $\partial_j g_{00}=0$, for every $j= 1,\ldots, n-1$. Equation~\eqref{umbilic} then yields $$ \frac{\partial g_{ij}}{\partial x^0} \,=\, -2 \,g_{00}\,{\Gamma}^{0}_{ij} = \frac{2 \,({{\mathrm H}}\sqrt{g_{00}})}{n-1}\, g_{ij}\,. $$ Since ${{\mathrm H}}$ and $g_{00}$ are constant along $V_{\sigma}$, one has that $$ \frac{\partial g_{ij}}{\partial x^0}(x^{0}, \dots, x^{n-1}) \,=\, \varphi(x^{0}) \,g_{ij}(x^{0}, \dots, x^{n-1}) \,, $$ for some function $\varphi$ depending only on the $x^{0}$ variable. Letting $\psi=\psi(x^0)$ be a primitive of $\varphi$, that is, $d \psi / d x^0=\varphi$, one has that $e^{-\psi} g_{ij}$ does not depend on the $x^0$ variable. Thus, for every $i,j = 1, \ldots, n-1$, we can write $$ g_{ij}(x^{0}, \dots, x^{n-1})\,=\,e^{\psi(x^{0})} \, G_{ij}(x^{1}, \dots, x^{n-1}) \,, $$ for some suitable functions $G_{ij}$. This proves that $g$ has a local warped product structure in $\overline{W}$ and the proof is complete. \end{proof} \begin{rem}\label{remanal} If the metric is analytic and the Riemannian manifold $(M^n,g)$ is connected, the presence of an open set where $\sigma$ is constant implies that $\sigma$ is constant everywhere and $d\sigma=0$, hence $W=\emptyset$. In the opposite case $\overline{W}=M^n$ and the totally geodesic hypersurfaces (where $\sigma$ is constant) whose union gives $\partial W$ are locally finite.\\ Hence, in the analytic case we have a dichotomy: either the whole manifold is locally a warped product or it is globally foliated by totally geodesic hypersurfaces. 
\end{rem} \section{An Example} \label{example} We now show that the two situations described in Theorem~\ref{mainteo} can actually both be present in a Riemannian manifold, if the metric is only smooth but not analytic. We follow the construction of Merton~\cite{merton}. Let $M={\mathbb R}\times{{\mathbb S}}^1\times{{\mathbb S}}^1$ be endowed with the Riemannian metric $$ g(t,x,y)=\bigl(\sigma(t)-\rho(t,x,y)\bigr)^{-2} dt^2+\sigma dx^2+\sigma dy^2\,, $$ where $\sigma:{\mathbb R}\to{\mathbb R}^+$ and $\rho:M\to{\mathbb R}$ are smooth functions such that: \begin{itemize} \item The function $\sigma$ is monotone increasing from 1 to 2, with $\sigma^\prime>0$, in the interval $(-\infty,-1)$, constant equal to 2 in the interval $[-1,1]$, and again monotone increasing from 2 to 3, with $\sigma^\prime>0$, in the interval $(1,+\infty)$. \item The function $\rho$ is equal to $3\sigma$ when $t\in(-\infty,-1]$ or $t\in[1,+\infty)$, for every $(x,y)\in{{\mathbb S}}^1\times{{\mathbb S}}^1$. \item For every $t\in(-1,1)$, the function $\rho$ is nonconstant on the leaf $\{t\}\times{{\mathbb S}}^1\times{{\mathbb S}}^1$; in particular, it cannot be three times the function $\sigma$. \end{itemize} We then define the (1,1)--tensor ${\mathrm{T}}$ as follows \begin{align*} {\mathrm{T}}(\partial_t)=&\,\rho(t,x,y)\partial_t\\ {\mathrm{T}}(\partial_x)=&\,\sigma(t)\partial_x\\ {\mathrm{T}}(\partial_y)=&\,\sigma(t)\partial_y \end{align*} and we will show that ${\mathrm{T}}$ is a Codazzi tensor.\\ The (0,2)--version of ${\mathrm{T}}$ reads \begin{align*} {\mathrm{T}}_{tt}=&\,{\mathrm{T}}_t^t \, g_{tt}=\frac{\rho}{(\sigma-\rho)^2}\\ {\mathrm{T}}_{xx}=&\,{\mathrm{T}}_x^x \, g_{xx}=\sigma^2\\ {\mathrm{T}}_{yy}=&\,{\mathrm{T}}_y^y \, g_{yy}=\sigma^2 \end{align*} and all the other components are null. 
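Before carrying out the check by hand, we note that the whole verification can be done symbolically. The following sketch (in Python with SymPy; the library and all names are our addition, not part of the paper) computes the Christoffel symbols of $g$ and all components of the Codazzi defect $\nabla_{k}{\mathrm{T}}_{ij}-\nabla_{j}{\mathrm{T}}_{ik}$ for generic smooth $\sigma(t)$ and $\rho(t,x,y)$, confirming that the only nonvanishing components equal $\pm(3\sigma-\rho)\sigma^\prime/2$, and hence vanish under the choices above (where either $\rho=3\sigma$ or $\sigma^\prime=0$):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
coords = [t, x, y]
sig = sp.Function('sigma')(t)        # eigenvalue of multiplicity 2
rho = sp.Function('rho')(t, x, y)    # eigenvalue of multiplicity 1

# the metric of the example: g = (sigma-rho)^{-2} dt^2 + sigma dx^2 + sigma dy^2
g = sp.diag((sig - rho)**(-2), sig, sig)
ginv = g.inv()

# Christoffel symbols Gamma[k][i][j] = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})
Gamma = [[[sum(ginv[k, l] * (sp.diff(g[l, j], coords[i])
                             + sp.diff(g[l, i], coords[j])
                             - sp.diff(g[i, j], coords[l]))
               for l in range(3)) / 2
           for j in range(3)] for i in range(3)] for k in range(3)]

# (0,2)-version of T: T_tt = rho/(sigma-rho)^2, T_xx = T_yy = sigma^2
T = sp.diag(rho / (sig - rho)**2, sig**2, sig**2)

def cov(k, i, j):
    """(nabla_k T)_{ij} for the Levi-Civita connection of g."""
    return (sp.diff(T[i, j], coords[k])
            - sum(Gamma[p][k][i] * T[p, j] for p in range(3))
            - sum(Gamma[p][k][j] * T[i, p] for p in range(3)))

# Codazzi defect nabla_k T_ij - nabla_j T_ik; collect the nonvanishing entries
target = (3*sig - rho) * sp.diff(sig, t) / 2
nonzero = {}
for k in range(3):
    for i in range(3):
        for j in range(3):
            d = sp.simplify(cov(k, i, j) - cov(j, i, k))
            if d != 0:
                nonzero[(k, i, j)] = d

# only the (t,x,x)- and (t,y,y)-type components survive, equal to +-target
assert len(nonzero) == 4
assert all(sp.simplify(d - target) == 0 or sp.simplify(d + target) == 0
           for d in nonzero.values())
```

The same computation reproduces the Christoffel symbols listed in the hand check that follows.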
The Christoffel symbols of the metric $g$ are given by \begin{align*} \Gamma^t_{tt}=&\,-\bigl(\sigma-\rho\bigr)^{-1}(\sigma^\prime-\partial_t\rho)\\ \Gamma^i_{tt}=&\,-\sigma^{-1}\bigl(\sigma-\rho\bigr)^{-3}\partial_i\rho\\ \Gamma^t_{it}=&\,\bigl(\sigma-\rho\bigr)^{-1}\partial_i\rho\\ \Gamma^i_{jt}=&\,\sigma^{-1}\sigma^\prime\delta_j^i/2\\ \Gamma^t_{ij}=&\,-\bigl(\sigma-\rho\bigr)^{2}\sigma^\prime\delta_{ij}/2\\ \Gamma^k_{ij}=&\,0\,, \end{align*} where the indices $i,j,k$ can only be $x$ and $y$.\\ Thus we compute (we skip the trivial checks) \begin{align*} \nabla_y{\mathrm{T}}_{xx}-\nabla_x{\mathrm{T}}_{yx} =&\,\partial_y\sigma^2-2{\mathrm{T}}_{xp}\Gamma^p_{xy}+{\mathrm{T}}_{yp}\Gamma^p_{xx} +{\mathrm{T}}_{xp}\Gamma^p_{xy}\\ =&\,-\sigma^2\Gamma^x_{xy}+\sigma^2\Gamma^y_{xx}\\ =&\,0 \end{align*} \begin{align*} \nabla_t{\mathrm{T}}_{xx}-\nabla_x{\mathrm{T}}_{tx} =&\,\partial_t\sigma^2-2{\mathrm{T}}_{xp}\Gamma^p_{xt}+{\mathrm{T}}_{tp}\Gamma^p_{xx} +{\mathrm{T}}_{xp}\Gamma^p_{xt}\\ =&\,2\sigma\sigma^\prime-\sigma^2\Gamma^x_{xt}+\frac{\rho}{(\sigma-\rho)^2}\Gamma^t_{xx}\\ =&\,2\sigma\sigma^\prime-\sigma\sigma^\prime/2-\sigma^\prime\rho/2\\ =&\,\bigl(3\sigma-\rho\bigr)\sigma^\prime/2 \end{align*} \begin{align*} \nabla_x{\mathrm{T}}_{tt}-\nabla_t{\mathrm{T}}_{xt} =&\,\partial_x\biggl(\frac{\rho}{(\sigma-\rho)^2}\biggr) -2{\mathrm{T}}_{tp}\Gamma^p_{tx}+{\mathrm{T}}_{xp}\Gamma^p_{tt} +{\mathrm{T}}_{tp}\Gamma^p_{xt}\\ =&\,\frac{\partial_x \rho}{(\sigma-\rho)^2} +\frac{2\rho\,\partial_x\rho}{(\sigma-\rho)^3} -\frac{\rho}{(\sigma-\rho)^2}\Gamma^t_{tx}+\sigma^2\Gamma^x_{tt}\\ =&\,\frac{\partial_x \rho}{(\sigma-\rho)^2} +\frac{2\rho\,\partial_x\rho}{(\sigma-\rho)^3} -\frac{\rho\,\partial_x\rho}{(\sigma-\rho)^3} -\frac{\sigma\,\partial_x\rho}{(\sigma-\rho)^{3}}\\ =&\,0 \end{align*} \begin{align*} \nabla_t{\mathrm{T}}_{xy}-\nabla_x{\mathrm{T}}_{ty} =&\,-{\mathrm{T}}_{xp}\Gamma^p_{ty}-{\mathrm{T}}_{yp}\Gamma^p_{tx} +{\mathrm{T}}_{yp}\Gamma^p_{tx}+{\mathrm{T}}_{tp}\Gamma^p_{xy}\\ 
=&\,-\sigma^2\Gamma^x_{ty}+\frac{\rho}{(\sigma-\rho)^2}\Gamma^t_{xy}\\ =&\,0 \end{align*} \begin{align*} \nabla_x{\mathrm{T}}_{yt}-\nabla_y{\mathrm{T}}_{xt} =&\,-{\mathrm{T}}_{yp}\Gamma^p_{tx}-{\mathrm{T}}_{tp}\Gamma^p_{xy} +{\mathrm{T}}_{xp}\Gamma^p_{ty}+{\mathrm{T}}_{tp}\Gamma^p_{xy}\\ =&\,-\sigma^2\Gamma^y_{tx}+\sigma^2\Gamma^x_{ty}\\ =&\,0\,. \end{align*} Hence, by our choices for the functions $\sigma$ and $\rho$, the tensor ${\mathrm{T}}$ is a Codazzi tensor. It is easy to see that in the zone where $\sigma$ is nonconstant, the manifold is a warped product on an interval of ${\mathbb R}$. Instead, in the zone $(-1,1)\times{{\mathbb S}}^1\times{{\mathbb S}}^1$, where the function $\rho$ is chosen nonconstant on the leaves $\{t\}\times{{\mathbb S}}^1\times{{\mathbb S}}^1$, it can be checked that $(M,g)$ is not a warped product on an interval (in this example, it is incidentally a warped product on ${{\mathbb S}}^1\times{{\mathbb S}}^1$); see the careful analysis in~\cite{merton}.\\ Hence, the two situations described in Theorem~\ref{mainteo} are both present in this example. \section{Three--Dimensional Gradient Ricci Solitons} Let $(M^{3},g)$ be a three--dimensional gradient Ricci soliton, that is, a Riemannian manifold satisfying the equation \begin{equation}\label{sol} {\mathrm {Ric}} + \nabla^{2} f \,=\, \lambda \,g \end{equation} for some smooth function $f:M^3\to{\mathbb R}$ and some constant $\lambda\in{\mathbb R}$. \begin{lemma} On every three--dimensional gradient Ricci soliton the tensor $$ {\mathrm{T}} \,=\, \big( {\mathrm {Ric}} - \tfrac{1}{2}{\mathrm R} \, g \big) e^{-f} $$ is a Codazzi tensor. \end{lemma} \begin{proof} Let $(M^{3},g)$ be a three--dimensional gradient Ricci soliton satisfying equation~\eqref{sol} and let $$ {\mathrm{T}}_{ij} \,=\, \big( {\mathrm R}_{ij}- \tfrac{1}{2}{\mathrm R}\,g_{ij}\big)\,e^{-f} \,. $$ We want to prove that ${\mathrm{T}}$ is a Codazzi tensor, i.e. 
we have to show that $$ \nabla_{k}{\mathrm{T}}_{ij} \,=\, \nabla_{j} {\mathrm{T}}_{ik} \,, $$ for every $i,j,k=1,2,3$. One has \begin{eqnarray}\label{ids1} \nabla_{k}{\mathrm{T}}_{ij} - \nabla_{j} {\mathrm{T}}_{ik} &=& \big[\nabla_{k}{\mathrm R}_{ij}-\nabla_{j}{\mathrm R}_{ik} -\tfrac{1}{2}(\nabla_{k}{\mathrm R}\,g_{ij}-\nabla_{j}{\mathrm R}\,g_{ik})\big]\,e^{-f}\nonumber\\ && + \,\big[\tfrac{1}{2}{\mathrm R}(\nabla_{k}f\,g_{ij}-\nabla_{j}f\,g_{ik})-\nabla_{k}f\,{\mathrm R}_{ij}+\nabla_{j}f\,{\mathrm R}_{ik} \big]\,e^{-f}\,. \end{eqnarray} On the other hand, the following two identities hold on any gradient Ricci soliton (for a proof, see~\cite{mantemin2}, for instance) \begin{eqnarray}\label{eqs1} & \nabla_{k} {\mathrm R} \,=\, 2 \nabla_{p}f\, {\mathrm R}_{pk}& \\ \label{eqs2} & \nabla_{k}{\mathrm R}_{ij} - \nabla_{j}{\mathrm R}_{ik} \,=\, -{\mathrm R}_{kjip}\nabla_{p}f \,. & \end{eqnarray} Moreover, since we are in dimension three, one has the decomposition of the Riemann tensor $$ {\mathrm R}_{kjip} \,=\, {\mathrm R}_{ik} g_{jp}-{\mathrm R}_{kp}g_{ij}+{\mathrm R}_{jp}g_{ik}-{\mathrm R}_{ij}g_{kp}- \tfrac{1}{2}{\mathrm R}(g_{ik}g_{jp}-g_{ij}g_{kp}) \,. $$ Combining with equation~\eqref{eqs2}, we obtain $$ \nabla_{k}{\mathrm R}_{ij} - \nabla_{j}{\mathrm R}_{ik} \,=\, -\nabla_{j}f\,{\mathrm R}_{ik} +\nabla_{p}f\,{\mathrm R}_{kp}g_{ij}-\nabla_{p}f\,{\mathrm R}_{jp}g_{ik}+\nabla_{k}f\, {\mathrm R}_{ij}-\tfrac{1}{2}{\mathrm R}(\nabla_{k}f\,g_{ij}-\nabla_{j}f\,g_{ik}) \,. $$ Hence, substituting this in equation~\eqref{ids1} and using relation~\eqref{eqs1}, we immediately get \begin{eqnarray*} \nabla_{k}{\mathrm{T}}_{ij} - \nabla_{j} {\mathrm{T}}_{ik} \,=\,0\,. \end{eqnarray*} \end{proof} As an application of this lemma and the results of the previous sections, we have the following theorem. 
\begin{teo}\label{teosol} Let $(M^3,g)$ be a complete, three--dimensional, simply connected Riemannian manifold which is a shrinking or steady gradient Ricci soliton, and assume that there exists an open subset $U\subset M$ where the Ricci tensor of $g$ has at most two distinct eigenvalues. Then, either the manifold splits a line or it is locally conformally flat. \end{teo} \begin{proof} By~\cite{zhang2}, as $(M^3,g)$ is complete, this gradient Ricci soliton generates an ancient Ricci flow. Then, by~\cite[Corollary~2.4]{chen2}, the evolving manifold, hence the Ricci soliton, must have nonnegative sectional curvatures. Moreover, it is well known, by the properties of parabolic equations, that the metric $g$ must be analytic (see~\cite[Chapter~3, Section~2]{chowbookII}). If at least one sectional curvature is zero at some point, then the manifold $(M^3,g)$ ``splits a line'' (see~\cite{chowbookII}), that is, it is isometric to the Riemannian product of ${\mathbb R}$ with a surface. Hence, we will assume in the rest of the proof that all the sectional curvatures are strictly positive everywhere. The analyticity of the metric implies that either at every point of the open subset $U$ all three eigenvalues of the Ricci tensor coincide, or there is another, possibly smaller, open subset $W$ of $M^3$ such that the Ricci tensor has everywhere in $W$ exactly two distinct eigenvalues. In the first case, $(U,g)$ is locally isometric to an Einstein manifold with positive curvature, hence, by analyticity, $(M^{3},g)$ must be isometric to the sphere ${{\mathbb S}}^3$. Thus, we assume from now on that there exists an open subset $W$ where the Ricci tensor has exactly two distinct eigenvalues. This implies that on $W$ the Codazzi tensor ${\mathrm{T}}=({\mathrm {Ric}}-{\mathrm R} g/2)e^{-f}$ has two distinct eigenvalues $\sigma$, with multiplicity $2$, and $\rho$, with multiplicity $1$. 
As Remark~\ref{remanal} applies to this case, by Theorem~\ref{mainteo} we have two possible subcases: either around every point of $W$ the manifold is locally isometric to a warped product of a surface on an interval, or the eigenvalue $\sigma$ of the Codazzi tensor ${\mathrm{T}}$ is constant on $W$, hence on the whole $M^3$ by analyticity. On the other hand, since the curvature of $(M^{3},g)$ is strictly positive, $(M^{3},g)$ cannot admit equidistant totally geodesic submanifolds of dimension greater than one, by the second Rauch comparison theorem (see~\cite{groklinmey}), and this latter subcase is excluded. In the former subcase, the leaves of the distribution $V_\sigma$ are umbilical and the two eigenvalues of ${\mathrm{T}}$ are constant on every leaf. Let $L$ be a connected component of a leaf of $V_{\sigma}$. By the Gauss formula, the scalar curvature of the induced metric $g^{\sigma}$ is given by \begin{equation} {\mathrm R}^{\sigma} \,=\, {\mathrm R} - 2{\mathrm R}_0^0+{\mathrm H}^{2}/2\,,\label{rsigmacon} \end{equation} where ${\mathrm H}$ denotes the mean curvature of $L$. Since $g$ has positive sectional curvature, one has ${\mathrm R} g-2\,{\mathrm {Ric}}>0$ and we obtain that ${\mathrm R}^{\sigma}$ is positive. We want to prove that $g$ is locally a warped product, on an interval of ${\mathbb R}$, of two--dimensional fibers with constant positive curvature. Hence, we have to show that ${\mathrm R}^{\sigma}$ is constant on $L$. First of all, we observe that, by the same reasoning as in the proof of Theorem~\ref{mainteo}, the mean curvature ${\mathrm H}$ is constant on $L$. Thus, it remains to show that also the quantity ${\mathrm R}-2{\mathrm R}_0^0$ is constant on $L$. The fact that all the eigenvalues of the tensor ${\mathrm{T}}$ are constant on $L$ implies that the trace of ${\mathrm{T}}$ $$ {\mathrm{tr}}({\mathrm{T}}) \,=\, -{\mathrm R} e^{-f}/2 $$ is constant on $L$. We claim that also $f$ has to be constant on $L$. 
Using the adapted coordinate system as in the proof of Theorem~\ref{mainteo}, we assume by contradiction that $\partial_{j} f\neq 0$, for some $j\in\{1,2\}$, at some point of $L$. As $\partial_{j}{\mathrm{tr}}({\mathrm{T}})=0$, it follows that $\nabla f$ and $\nabla{\mathrm R}$ are parallel and $$ {\mathrm R}\nabla f\,=\, \nabla{\mathrm R} \,=\, 2{\mathrm {Ric}}(\nabla f , \cdot)\,, $$ hence, $\nabla f$ is an eigenvector of the Ricci tensor. Since $\partial_j f\not=0$, it must be that $$ {\mathrm R}\nabla f\,=\, 2{\mathrm {Ric}}_j^j\nabla f\,, $$ which is a contradiction, as ${\mathrm R}=2{\mathrm {Ric}}_j^j+{\mathrm {Ric}}_0^0$ and ${\mathrm {Ric}}_0^0$ is positive by assumption (notice that we have used the fact that the Ricci tensor has exactly two distinct eigenvalues with the same eigendistributions as the Codazzi tensor ${\mathrm{T}}$). Thus, we have proved that $f$ is constant on $L$, which implies that ${\mathrm R}$ is constant on $L$ too. It then follows, by the definition of ${\mathrm{T}}$ and the fact that its eigenvalues are constant on $L$, that also the eigenvalues of the Ricci tensor are constant on $L$. In particular, ${\mathrm R}_0^0$ is constant on $L$. By relation~\eqref{rsigmacon}, we conclude that $L$ has constant positive scalar curvature ${\mathrm R}^{\sigma}$. Hence, the leaf $L$ is locally isometric to ${{\mathbb S}}^{2}$ and the metric $g$ in $W$ is locally a warped product of an interval with two--dimensional spherical fibers. In particular, it is locally conformally flat. Using once again the analyticity, since $(W, g|_W)$ is a locally conformally flat open subset of $(M^3,g)$, we conclude that the whole $(M^3,g)$ must be locally conformally flat. This completes the proof. \end{proof} \begin{rem} By the same argument, the conclusion of this theorem also holds for complete, three--dimensional, simply connected, {\em expanding} gradient Ricci solitons with nonnegative sectional curvatures. 
\end{rem} \begin{rem} Three--dimensional, locally conformally flat, gradient, shrinking and steady Ricci solitons were classified respectively by Ni--Wallach~\cite{nw2} and Cao--Chen~\cite{caochen}. In particular, under the assumptions of Theorem~\ref{teosol}, we have that $(M^{3},g)$ is isometric to ${{\mathbb S}}^{3}$, ${\mathbb R}\times {{\mathbb S}}^{2}$ or ${\mathbb R}^{3}$ in the shrinking case, whereas in the steady case it is isometric to ${\mathbb R}^{3}$ or the {\em Bryant} soliton. If a three--dimensional, gradient, shrinking or steady Ricci soliton splits a line, then it must be the Riemannian product of ${\mathbb R}$ with a two--dimensional gradient Ricci soliton of the same kind, that is, ${{\mathbb S}}^{2}$ or ${\mathbb R}^2$ in the shrinking case and ${\mathbb R}^2$ or {\em Hamilton's cigar} in the steady case. \end{rem} \bibliographystyle{amsplain}
https://arxiv.org/abs/alg-geom/9202026
Picard-Fuchs equations and mirror maps for hypersurfaces
We describe a strategy for computing Yukawa couplings and the mirror map, based on the Picard-Fuchs equation. (Our strategy is a variant of the method used by Candelas, de la Ossa, Green, and Parkes in the case of quintic hypersurfaces.) We then explain a technique of Griffiths which can be used to compute the Picard-Fuchs equations of hypersurfaces. Finally, we carry out the computation for four specific examples (including quintic hypersurfaces, previously done by Candelas et al.). This yields predictions for the number of rational curves of various degrees on certain hypersurfaces in weighted projective spaces. Some of these predictions have been confirmed by classical techniques in algebraic geometry.
\section*{Introduction} The phenomenon of mirror symmetry dramatically caught the attention of mathematicians with the recent work of P.~Candelas, X.~C.~de la Ossa, P.~S.~Green, and L.~Parkes \cite{pair}. Starting with a particular pair of ``mirror manifolds'', calculating certain period integrals, interpreting the results as Yukawa couplings, and then re-interpreting those results in light of the ``mirror manifold'' phenomenon, Candelas et al.\ were able to give predictions for the numbers of rational curves of various degrees on the general quintic threefold. In fact, algebraic geometers have had a difficult time verifying these predictions, but all successful attempts to calculate the numbers of curves have eventually confirmed the predictions. What is so striking about this work is that the calculation which predicts the numbers of rational curves on quintic threefolds is in reality a calculation about the variation of Hodge structure on a completely {\em different\/} family of Calabi-Yau threefolds. An asymptotic expansion is made of a function which comes from that variation, and the coefficients in the expansion are then used to predict numbers of rational curves. In \cite{guide}, we interpreted the calculation of Candelas et al.~\cite{pair} in terms of variation of Hodge structure. Here we take a more down to earth approach, and work directly with period integrals and their properties. (This is perhaps closer in spirit to the original paper.) We have found a way to modify the computational strategy employed in \cite{pair}. Our modified method computes a bit less (there are two unknown ``constants of integration''), but it is easier to actually carry out the computation. We in fact carry it out in three new examples. This leads to new predictions about numbers of rational curves on certain Calabi-Yau threefolds. Our strategy for computing Yukawa couplings is based on the Picard-Fuchs equation for the periods of a one-parameter family of algebraic varieties. 
We explain in sections 1 and 2 how this equation can be used to compute Yukawa couplings and the mirror map for a family of Calabi-Yau threefolds with $h^{2,1}=1$. We then go on in section 3 to review a method of Griffiths \cite{griffiths} for calculating Picard-Fuchs equations of hypersurfaces. Related ideas have also been introduced into the physics literature in \cite{blok-var,cad-ferr,ferrara,lsw}. In sections 4 and 5, we carry out the computation in four examples, including the quintic hypersurface. The resulting predictions about numbers of rational curves are discussed in section 6. \section{The Picard-Fuchs equation and monodromy} Let $\bar\pi: \overline{\cal X}\to \overline C$ be a family of $n$-dimensional projective algebraic varieties, parameterized by a compact Riemann surface $\overline C$. Let $C \subset \overline C$ be an open subset such that the induced family $\pi: {\cal X}\to C$ has smooth fibers. If we choose topological $n$-cycles $\gamma_0,\dots,\gamma_{r-1}$ which give a basis for the $n^{\text{th}}$ homology of one particular fiber $X_0$, and choose a holomorphic $n$-form $\omega$ on $X_0$, then the {\em periods} of $\omega$ are the integrals \[ \int_{\gamma_0}\omega,\dots,\int_{\gamma_{r-1}}\omega. \] Since the fibration $\pi: {\cal X}\to C$ is differentiably locally trivial, a local trivialization can be used to extend the cycles $\gamma_i$ from $X_0$ to cycles $\gamma_i(z)$ on $X_z$ which depend on $z$, where $z$ is a local coordinate on $C$. The holomorphic $n$-form $\omega$ can also be extended to a family of $n$-forms $\omega(z)$ which depend on the parameter $z$. If this is done in an algebraic way, then $\omega(z)$ extends to a meromorphic family of $n$-forms (i.e. poles are allowed) over the entire space $\overline{\cal X}$. The cycles $\gamma_i(z)$ determine homology classes which are locally constant in $z$. 
However, an attempt to extend these cycles globally will typically lead to monodromy: for each closed path in $C$, there will be some linear map $T$ represented by a matrix $T_{ij}$ such that transporting $\gamma_i$ along the path produces at the end a cycle homologous to $\sum T_{ij}\gamma_j$. The same phenomenon will hold for the periods: for a globally defined meromorphic family of $n$-forms $\omega(z)$, the local periods $\int_{\gamma_i(z)}\omega(z)$ extend by analytic continuation to multiple-valued functions of $z$, transforming according to the same monodromy transformations $T$ as do the homology classes of the cycles. The periods $\int_{\gamma(z)}\omega(z)$ satisfy an ordinary differential equation called the {\em Picard-Fuchs equation\/} of $\omega$. The existence of this equation can be explained as follows. Choose a local coordinate $z$ on some open set $U\subset C$, and consider the vector \[ v_j(z) := [\frac{d^j}{dz^j}\int_{\gamma_0(z)}\omega(z),\dots, \frac{d^j}{dz^j}\int_{\gamma_{r-1}(z)}\omega(z)] \in\Bbb C^r. \] For generic values of the parameter $z$, the dimensions \[ d_j(z):=\dim(\operatorname{span}\{v_0(z),\dots,v_j(z)\}) \] must be constant. Since $d_j(z)\le r$, these spaces cannot continue to grow indefinitely. There will thus be a smallest $s$ such that \[v_s(z)\in\operatorname{span}\{v_0(z),\dots,v_{s-1}(z)\}\] (for generic $z$). We can write \[ v_s(z) =-\sum_{j=0}^{s-1}C_j(z)v_j(z) \] with the coefficients $C_j(z)$ depending on $z$. The {\em Picard-Fuchs equation}, satisfied by all the periods of $\omega(z)$, is then \begin{equation} \label{picfuc} \frac{d^sf}{dz^s}+\sum_{j=0}^{s-1}C_j(z)\frac{d^jf}{dz^j}=0. \end{equation} The precise form of the equation depends on both the local coordinate $z$ on $C$, and the choice of holomorphic form $\omega(z)$. Note that the coefficients $C_j(z)$ may acquire singularities at special values of $z$. 
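As a concrete illustration of eq.~\eqref{picfuc} (a classical example, not one of the hypersurface families computed later): for the Legendre family of elliptic curves $y^2=x(x-1)(x-\lambda)$ with $\omega=dx/y$, the periods satisfy $\lambda(1-\lambda)f''+(1-2\lambda)f'-\tfrac14 f=0$, whose holomorphic solution at $\lambda=0$ is the hypergeometric series ${}_2F_1(\tfrac12,\tfrac12;1;\lambda)$. The following sketch (in Python with SymPy, our addition) checks the equation against a truncation of this series:

```python
import sympy as sp

lam = sp.symbols('lam')
N = 12  # truncation order
# 2F1(1/2, 1/2; 1; lam) has coefficients a_n = ((1/2)_n / n!)^2
a = [(sp.rf(sp.Rational(1, 2), n) / sp.factorial(n))**2 for n in range(N)]
f = sum(c * lam**n for n, c in enumerate(a))

# Picard-Fuchs operator of the Legendre family applied to the truncated series
ode = sp.expand(lam*(1 - lam)*sp.diff(f, lam, 2)
                + (1 - 2*lam)*sp.diff(f, lam) - f/4)

# the residual vanishes through order N-2; the truncation shows up only at order N-1
assert all(ode.coeff(lam, n) == 0 for n in range(N - 1))
assert ode.coeff(lam, N - 1) != 0
```

Raising $N$ verifies the underlying recurrence $(n+1)^2 a_{n+1}=(n+\tfrac12)^2 a_n$ to any desired order.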
When we approach a point $P$ in $\overline C-C$, the Picard-Fuchs equation has (at worst) a {\em regular singular point\/} at $P$ \cite{gr-bull,nkatz,deligne}. If we choose a parameter $z$ which is centered at $P$ (that is, $z=0$ at $P$), then the coefficients $C_j(z)$ in the Picard-Fuchs equation typically will have poles at $z=0$. However, if we multiply the Picard-Fuchs operator \begin{equation} \label{PFop} \frac{d^s}{dz^s}+\sum_{j=0}^{s-1}C_j(z)\frac{d^j}{dz^j} \end{equation} by $z^s$ and rewrite the result in the form \begin{equation} \label{logform} (z\frac{d}{dz})^s+\sum_{j=0}^{s-1}B_j(z)(z\frac{d}{dz})^j \end{equation} then the new coefficients $B_j(z)$ are holomorphic functions of $z$. (This is one of several equivalent definitions of ``regular singular point''.) We call eq.~\eqref{logform} the {\em logarithmic form\/} of the Picard-Fuchs operator. The structure of ordinary differential equations with regular singular points is a classical topic in differential equations: a convenient reference is \cite{codlev}. We can rewrite eq.~\eqref{picfuc} as a system of first-order equations, using the logarithmic form eq.~\eqref{logform}, as follows: let \begin{equation} \label{Az} A(z) = \begin{bmatrix} 0 & 1 & & & \\ & 0 & 1 & & \\ & & \ddots & \ddots & \\ & & & 0 & 1 \\ -B_0(z) & -B_1(z) & \dots & \dots & -B_{s-1}(z) \end{bmatrix}. \end{equation} Then solutions $f(z)$ to the equation eq.~\eqref{picfuc} are equivalent to solution vectors \[w(z) = \begin{bmatrix} f(z) \\ z\frac d{dz}f(z) \\ \vdots \\ (z\frac d{dz})^{s-1}f(z) \end{bmatrix} \] of the matrix equation \begin{equation} \label{matrixeqn} z\frac d{dz}w(z)=A(z)w(z). \end{equation} For a matrix equation such as eq.~\eqref{matrixeqn}, the facts are these (see \cite{codlev}). There is a constant $s\times s$ matrix $R$ and a $s\times s$ matrix $S(z)$ of (single-valued) functions of $z$, regular near $z=0$, such that \[\Phi(z)=S(z) \cdot z^R \] is a {\em fundamental matrix} for the system. 
This means that the columns of $\Phi(z)$ are a basis for the space of solutions at each nonsingular point $z\ne0$. The multiple-valuedness of the solutions has all been put into $R$, since \[z^R:=e^{(\log z)R} = I+(\log z)R+\frac{(\log z)^2}{2!}R^2+\cdots\] is a multiple-valued matrix function of $z$. The local monodromy on the solutions given by analytic continuation along a path winding once around $z=0$ in a counterclockwise direction is given by $e^{2\pi iR}$ (with respect to the basis given by the columns of $\Phi$). The matrix $R$ is by no means unique. \begin{theorem} Suppose that $z\frac d{dz}w(z)=A(z)w(z)$ is a system of ordinary differential equations with a regular singular point at $z=0$. Suppose that distinct eigenvalues of $A(0)$ do not differ by integers. Then there is a fundamental matrix of the form \[\Phi(z)=S(z) \cdot z^{A(0)} \] and $S(z)$ can be obtained as a power series \[S(z)=S_0+S_1z+S_2z^2+\cdots\] by recursively solving the equation \[ z\frac d{dz}S(z)+S(z)\cdot A(0)=A(z)\cdot S(z) \] for the coefficient matrices $S_j$. Moreover, any such series solution converges in a neighborhood of $z=0$. \end{theorem} A proof can be found in \cite{codlev}, together with methods for treating the case in which eigenvalues of $A(0)$ {\em do} differ by integers. We will be particularly interested in systems with {\em unipotent monodromy}: by definition, this means that $e^{2\pi iR}$ is a unipotent matrix, so that $(e^{2\pi iR}-I)^{m-1}\ne0$ and $(e^{2\pi iR}-I)^{m}=0$ for some $m$ called the {\em index}. \begin{corollary} Suppose that $ (z\frac{d}{dz})^sf(z)+\sum_{j=0}^{s-1}B_j(z)(z\frac{d}{dz})^jf(z)=0 $ is an ordinary differential equation with a regular singular point at $z=0$. If $B_j(0)=0$ for all $j$, then the solutions of this equation have unipotent monodromy of index $s$. 
\end{corollary} The corollary follows by calculating with eq.~\eqref{Az}, setting $z=0$ and $B_j(0)=0$ to produce \[ e^{2\pi iA(0)}= \begin{bmatrix} 1 & 2\pi i & \frac{(2\pi i)^2}{2!} & \dots & \frac{(2\pi i)^{s-1}}{(s-1)!} \\ & 1 & 2\pi i & \dots & \frac{(2\pi i)^{s-2}}{(s-2)!} \\ & & \ddots & & \vdots \\ & & & 1 & 2\pi i \\ & & & & 1 \end{bmatrix}.\] \section{Computing the mirror map} Recall that a {\em Calabi-Yau manifold} is a compact K\"ahler manifold $X$ of complex dimension $n$ which has trivial canonical bundle, such that the Hodge numbers $h^{k,0}$ vanish for $0<k<n$. Thanks to a celebrated theorem of Yau \cite{yau}, every such manifold admits Ricci-flat K\"ahler metrics. Suppose now that $\pi: {\cal X}\to C$ is a family of Calabi-Yau threefolds with $h^{2,1}(X)=1$, which is not a locally constant family. The third cohomology group $H^3(X)$ has dimension $r=4$. It follows that the Picard-Fuchs equation has order at most 4. (In fact, it is not difficult to show that it has order exactly 4.) Let $z$ be a coordinate on $\overline C$ centered at a point $P\in\overline C-C$. We say that {\em $P$ is a point at which the monodromy is maximally unipotent} if the monodromy is unipotent of index 4. As we have seen in the corollary, if $B_j(0)=0$ in the logarithmic form of the Picard-Fuchs equation, $z=0$ will be such a point. We will assume for simplicity that our points of maximally unipotent monodromy have this form, leaving appropriate modifications for the general case to the reader. We review the calculation of the Yukawa coupling, following \cite{pair}. Let $\omega(z)$ be a family of $n$-forms, and let \[W_k := \int_{X_z}\omega(z)\wedge\frac{d^k}{dz^k}\omega(z).\] A fundamental principle from the theory of variation of Hodge structure (cf.~\cite{transcendental}) implies that $W_0$, $W_1$, and $W_2$ all vanish. The {\em Yukawa coupling} is the first non-vanishing term $W_3$. 
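Returning for a moment to the corollary above: the matrix $e^{2\pi iA(0)}$ displayed in its proof can be checked mechanically, and the same computation confirms that it is unipotent of index $s$. A sketch (names ours), with $s=4$ as in the application to threefolds:

```python
# Direct check of the corollary: when B_j(0) = 0, A(0) is the nilpotent
# shift matrix, and the local monodromy exp(2*pi*i*A(0)) is unipotent
# of index s.  Take s = 4, the rank relevant for our threefolds.
from sympy import zeros, eye, pi, I, factorial, simplify

s = 4
A0 = zeros(s, s)
for j in range(s - 1):
    A0[j, j + 1] = 1

# A0^s = 0, so the exponential series is a finite sum
M = sum(((2*pi*I*A0)**k/factorial(k) for k in range(s)), zeros(s, s))

# entries match the displayed matrix: (2*pi*i)^(j-i)/(j-i)! for j >= i
for i in range(s):
    for j in range(i, s):
        assert simplify(M[i, j] - (2*pi*I)**(j - i)/factorial(j - i)) == 0

# unipotent of index s: (M - I)^(s-1) != 0 but (M - I)^s = 0
U = M - eye(s)
assert (U**(s - 1)).expand() != zeros(s, s)
assert (U**s).expand() == zeros(s, s)
```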
Candelas et al.\ show that the Yukawa coupling $W_3$ satisfies the differential equation \[ \frac{dW_3(z)}{dz}=-\frac12C_3(z)W_3(z), \] where $C_3(z)$ is a coefficient in the Picard-Fuchs equation \eqref{picfuc}. The Yukawa coupling as defined clearly depends on the ``gauge'', that is, on the choice of holomorphic $3$-form $\omega(z)$. In fact, if we alter the gauge by $\omega(z)\mapsto f(z)\omega(z)$, then $W_k$ transforms as \[ W_k\mapsto f(z)\sum_{j=0}^k\binom{k}{j}\frac{d^jf(z)}{dz^j}W_{k-j}. \] Since $W_0=W_1=W_2=0$, the change in the Yukawa coupling $W_3$ is simply $W_3\mapsto f(z)^2W_3$. The Yukawa coupling also depends on the choice of coordinate $z$, and in fact is often denoted by $\kappa_{zzz}$. If we change coordinates from $z$ to $w$, we must change the differentiation operator from $d/dz$ to $d/dw$. The chain rule then implies that \[ \kappa_{www} = \left(\frac{dz}{dw}\right)^3\kappa_{zzz}.\] Candelas et al.~\cite{pair} use physical arguments to set the gauge in this calculation, and to find an appropriate (multiple-valued) parameter $t$ with which to compute. (The associated differentiation operator $d/dt$ is single-valued.) What will be important for us are the following observations about their results. The gauge used by Candelas et al.\ determines a family of meromorphic $n$-forms $\widetilde\omega(z)$ with the property that the period function \[\int_{\gamma}\widetilde\omega(z)\equiv1\] for some cycle $\gamma$. Moreover, the parameter $t$ determined by Candelas et al.\ is a parameter defined in an angular sector near $z=0$ which has two crucial properties: \begin{enumerate} \item If we analytically continue along a simple loop around $z=0$ in the counterclockwise direction, $t$ becomes $t+1$. (It will be convenient to also introduce $q=e^{2\pi it}$, which remains single-valued near $z=0$.) 
\item There are cycles $\gamma_0$ and $\gamma_1$ such that $\int_{\gamma_0}\omega(z)$ is single valued near $z=0$, and \[t=\frac{\int_{\gamma_1}\omega(z)}{\int_{\gamma_0}\omega(z)}\] in an angular sector near $z=0$. \end{enumerate} Each period function $\int_{\gamma}\omega(z)$ is a solution to the Picard-Fuchs equation of the family. Translating the results of the previous section into the present context, we obtain the following: \begin{lemma} Suppose that $z=0$ is a point of maximally unipotent monodromy such that $B_j(0)=0$, where $B_j(z)$ are the coefficients in the logarithmic form of the Picard-Fuchs equation. Then \begin{enumerate} \item There is a period function for $\omega(z)$, \[f_0(z):=\int_{\gamma_0}\omega(z)\] which is single-valued near $z=0$. This period function is unique up to multiplication by a constant. (This implies that the cycle $\gamma_0$ is also unique up to a constant multiple.) In particular, the family of meromorphic $n$-forms \[\widetilde\omega(z):=\frac{\omega(z)}{\int_{\gamma_0}\omega(z)}\] will have the property that \[\int_{\gamma}\widetilde\omega(z)\equiv1\] for some $\gamma$, and it is the unique such family up to constant multiple. \item Fixing a choice of period function $f_0(z)$ as in part (1), there is a period function \[f_1(z):=\int_{\gamma_1}\omega(z)\] such that $\varphi(z):=f_1(z)/f_0(z)$ transforms as \[\varphi(z)\mapsto\varphi(z)+1\] upon transport around $z=0$ in the counterclockwise direction. The ratio $\varphi(z)$ is unique up to the addition of a constant. \end{enumerate} \end{lemma} This, then, is our alternate strategy for computing the Yukawa coupling: we find solutions of the Picard-Fuchs equation which have the properties specified in the lemma, and we use those to fix the gauge and specify the natural parameter, up to two unknown constants of integration. 
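This strategy can be rehearsed on a toy equation of our own (not the Picard-Fuchs equation of any family considered here): $(z\frac d{dz})^2f=zf$ has a regular singular point at $z=0$ with unipotent monodromy of index $2$. The sketch below computes the single-valued solution $f_0$ and the holomorphic part $h$ of the logarithmic solution $F=f_0\log z+h$, and checks that transport around $z=0$ sends $F/(2\pi if_0)$ to $F/(2\pi if_0)+1$, as in part (2) of the lemma:

```python
# Toy model of the lemma: theta^2 f = z f, theta = z d/dz.
# Regular solution: f0 = sum z^n/(n!)^2, from n^2 a_n = a_{n-1}.
# Logarithmic solution: F = f0*log(z) + h, where substituting F into
# the equation gives theta^2 h - z h = -2 theta f0, i.e.
#     n^2 b_n - b_{n-1} = -2 n a_n,  normalized by b_0 = 0.
from sympy import symbols, log, pi, I, Rational, factorial, expand

z = symbols('z', positive=True)
theta = lambda u: z*u.diff(z)
N = 8

a = [Rational(1, factorial(n)**2) for n in range(N + 1)]
f0 = sum(a[n]*z**n for n in range(N + 1))

b = [Rational(0)]
for n in range(1, N + 1):
    b.append((b[n - 1] - 2*n*a[n])/n**2)
h = sum(b[n]*z**n for n in range(N + 1))

F = f0*log(z) + h
# the equation holds exactly up to the truncation order
residual = expand(theta(theta(F)) - z*F)
expected = expand(-z**(N + 1)*(a[N]*log(z) + b[N]))
assert expand(residual - expected) == 0

# monodromy: log(z) -> log(z) + 2*pi*i sends F to F + 2*pi*i*f0,
# so F/(2*pi*i*f0) goes to itself plus 1
assert expand(F.subs(log(z), log(z) + 2*pi*I) - F - 2*pi*I*f0) == 0
```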
\section{Picard-Fuchs equations for hypersurfaces} We now review a method of Griffiths \cite{griffiths} for describing the cohomology of a hypersurface, which can be used to determine the Picard-Fuchs equation of a one-parameter family of hypersurfaces. Calculations of this sort were earlier made by Dwork \cite[Sec. 8]{dwork}. Griffiths' method was extended to the weighted projective case by Steenbrink \cite{steen} and Dolgachev \cite{dolg}, whom we follow. We denote a weighted projective $n$-space by $\Bbb P^{(k_0,\dots,k_n)}$, where $k_0,\dots,k_n$ are the weights of the variables $x_0,\dots,x_n$. Weighted homogeneous polynomials can be identified with the aid of the Euler vector field \[ \theta=\sum k_jx_j\frac{\partial}{\partial x_j} \] which has the property that $\theta P=(\deg P)\cdot P$ for any weighted homogeneous polynomial $P$. Contracting the volume form on $\Bbb C^{n+1}$ with $\theta$ produces the fundamental weighted homogeneous differential form (of ``weight'' $k:=\sum k_j$) \[ \Omega:=\sum_{j=0}^n(-1)^jk_jx_j\, dx_0\wedge\dots\wedge\widehat{dx_j}\wedge\dots\wedge dx_n. \] Rational differentials of degree $n$ on $\Bbb P^{(k_0,\dots,k_n)}$ can be described as expressions $P\Omega/Q$, where $P$ and $Q$ are weighted homogeneous polynomials with $\deg P+k=\deg Q$. Suppose that $Q$ is a weighted homogeneous polynomial defining a quasismooth hypersurface ${\cal Q}\subset \Bbb P^{(k_0,\dots,k_n)}$. (That is, $Q=0$ defines a hypersurface in $\Bbb C^{n+1}$ which is smooth away from the origin.) The middle cohomology of ${\cal Q}$ is then described by means of differential forms with poles (of all orders) along ${\cal Q}$. Each such form $P\Omega/Q^\ell$ is made into a cohomology class by a ``residue'' construction: for an $(n-1)$-cycle $\gamma$ on ${\cal Q}$, the tube over $\gamma$ (an $S^1$-bundle inside the (complex) normal bundle of ${\cal Q}$) is an $n$-cycle $\Gamma$ on $\Bbb P^{(k_0,\dots,k_n)}$ disjoint from ${\cal Q}$. 
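The Euler identity $\theta P=(\deg P)\cdot P$ underlying this formalism is easy to check symbolically; here is a sketch (names ours), using the weights $(5,2,1,1,1)$ that occur in one of the examples below:

```python
# Check the Euler vector field identity theta(P) = deg(P) * P for a
# weighted homogeneous polynomial, with weights (5,2,1,1,1).
from sympy import symbols, expand

k = (5, 2, 1, 1, 1)
x = symbols('x0:5')
theta = lambda P: sum(kj*xj*P.diff(xj) for kj, xj in zip(k, x))

# a weighted homogeneous polynomial of degree 10 in these weights
P = x[0]**2 + x[1]**5 + x[0]*x[1]*x[2]**3 + x[2]**10
assert expand(theta(P) - 10*P) == 0
```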
We can then define the residue of $P\Omega/Q^\ell$ by \[ \int_{\gamma}\operatorname{Res}_{\cal Q}\left(\frac{P\Omega}{Q^\ell}\right)= \frac1{2\pi i}\int_{\Gamma}\frac{P\Omega}{Q^\ell}. \] Since altering $P\Omega/Q^\ell$ by an exact differential does not change the value of these integrals, we see that the cohomology of ${\cal Q}$ is represented by equivalence classes of rational differential forms $P\Omega/Q^\ell$ modulo exact forms. Here is Griffiths' ``reduction of pole order'' calculation which shows how to reduce modulo exact forms in practice. Let $Q$ and $A_j$ be weighted homogeneous polynomials, with $\deg Q=d$, $\deg A_j =\ell d +k_j-k$. Define \[ \varphi=\frac1{Q^\ell}\sum_{i<j}(k_ix_iA_j-k_jx_jA_i) dx_0\wedge\dots\wedge\widehat{dx_i}\wedge \dots\wedge\widehat{dx_j}\wedge\dots\wedge dx_n \] and then calculate \begin{equation} \label{griffiths} d\varphi=\frac {\left(\ell\sum A_j\frac{\partial Q}{\partial x_j} -Q\sum\frac{\partial A_j} {\partial x_j}\right)\Omega}{Q^{\ell+1}} =\frac{\ell\sum A_j\frac{\partial Q}{\partial x_j}\Omega}{Q^{\ell+1}} -\frac{\sum\frac{\partial A_j} {\partial x_j}\Omega}{Q^\ell}. \end{equation} Thus, any form whose numerator lies in the Jacobian ideal $ J=(\partial Q/\partial x_0, \dots,\partial Q/\partial x_n) $ is equivalent (modulo exact forms) to a form with smaller pole order. This idea can be used to calculate Picard-Fuchs equations as follows. The cycles $\Gamma$ do not change (in homology) when $z$ varies locally. So we can differentiate under the integral sign \[ \frac{d^k}{dz^k}\int_{\gamma}\operatorname{Res}_{\cal Q}\left(\frac{P\Omega}{Q^\ell}\right)= \frac1{2\pi i}\int_{\Gamma}\frac{d^k}{dz^k}\left(\frac{P\Omega}{Q^\ell}\right) \] when $Q$ depends on a parameter $z$. (Note that $\Omega$ is independent of $z$.) The Picard-Fuchs operator \eqref{PFop} will have the property that \[ \left( \frac{d^s}{dz^s}+\sum_{j=0}^{s-1}C_j(z)\frac{d^j}{dz^j} \right) \left(\frac {P\Omega}Q\right)=d\varphi \] is an exact form. 
To find it, take successive $z$-derivatives of the integrand $P\Omega/Q$ and use the reduction of pole order formula \eqref{griffiths} to determine a linear relation among those derivatives, modulo exact forms. \section{Examples: Picard-Fuchs equations} We will calculate the Picard-Fuchs equations for certain one-parameter families of Calabi-Yau threefolds. Our choice of families is motivated by the mirror construction of Greene and Plesser \cite{greene-plesser}. We choose weights $k_0,\dots,k_4$ with $k_0\ge k_1\ge\dots\ge k_4$ for a weighted projective 4-space such that $d_j:=k/k_j$ is an integer, where $k:=\sum k_j$. We also assume that $\gcd\{k_j\ | \ j\ne j_0\}=1$ for every $j_0$. These assumptions then imply that $k=\operatorname{lcm}\{d_j\}$. Consider the pencil of hypersurfaces ${\cal Q}_\psi \subset \Bbb P^{(k_0,\dots,k_4)}$ defined by $Q(x,\psi)=0$, where \[ Q(x,\psi) := \sum_{j=0}^4 x_j^{d_j} - k\psi\prod_{j=0}^4 x_j . \] This pencil has a natural group of diagonal automorphisms preserving the holomorphic 3-form. To define it, let $\mugroup{m}$ denote the multiplicative group of $m^{\text{th}}$ roots of unity (considered as a subgroup of $\Bbb C^\times$), and let \[ G=(\mugroup{d_0}\times\dots\times\mugroup{d_4})/\mugroup{k}, \] where we embed $\mugroup{k}$ in $\mugroup{d_0}\times\dots\times\mugroup{d_4}$ by \[ \alpha\mapsto(\alpha^{k_0},\dots,\alpha^{k_4}). \] Note that since $\sum k_j=k$, the formula \[ f(\alpha_0,\dots,\alpha_4)=(\prod \alpha_j)^{-1} \] determines a well-defined homomorphism $f:G\to\Bbb C^\times$. Let $G_0=\ker(f)$. We can regard $Q(x,\psi)=0$ as defining a hypersurface ${\cal Q}\subset\Bbb P^{(k_0,\dots,k_4)}\times\Bbb C$. The group $G$ acts on $\Bbb P^{(k_0,\dots,k_4)}\times\Bbb C$ by \[ (x_0,\dots,x_4;\psi)\mapsto(\alpha_0x_0,\dots,\alpha_4x_4;f(\alpha)\psi) \] for $\alpha=(\alpha_0,\dots,\alpha_4)\in G$. The polynomial $Q(x,\psi)$ is invariant under this action. 
Thus, the action preserves ${\cal Q}$, and maps ${\cal Q}_\psi$ isomorphically to ${\cal Q}_{f(\alpha)\psi}$. It follows that the group $G_0$ acts on ${\cal Q}_\psi$ by automorphisms, and that the induced action of $G/G_0\cong\mugroup{k}$ establishes isomorphisms between ${\cal Q}_\psi/G_0$ and ${\cal Q}_{\lambda\psi}/G_0$ for $\lambda\in\mugroup{k}$. The quotient space ${\cal Q}_\psi/G_0$ has only canonical singularities. By a theorem of Markushevich \cite[Prop.~4]{markushevich} and Roan \cite[Prop.~2]{roan1}, these singularities can be resolved to give a Calabi-Yau manifold ${\cal W}_\psi$. There are choices to be made in this resolution process; we do not specify a choice. By another theorem of Roan \cite[Lemma 4]{roan2}, any two resolutions differ by a sequence of flops. Note that the differential form $\Omega$ from the previous section transforms as $\Omega\mapsto(\prod \alpha_j)\Omega$ under the action of $\alpha\in G$. Thus, the rational differential \[ \omega_1=\frac{\psi\Omega}{Q(x,\psi)} \] is invariant under the action of $G$; we define $\omega(\psi)=\operatorname{Res}_{{\cal Q}_\psi}(\omega_1)$. Since the holomorphic 3-forms $\omega(\psi)$ on ${\cal Q}_\psi$ are invariant under $G_0$, they induce holomorphic 3-forms on ${\cal W}_\psi$. Moreover, the homology group $H_3({\cal W}_\psi)$ contains the $G_0$-invariant part $H_3({\cal Q}_\psi)^{G_0}$ of the homology of ${\cal Q}_\psi$. If we know that the dimensions of these spaces agree, then they will coincide (at least for homology with coefficients in a field). In this case, the periods of ${\cal W}_\psi$ can actually be computed as periods of the holomorphic form $\omega(\psi)$ on ${\cal Q}_\psi$, over $G_0$-invariant cycles. Thanks to the isomorphisms between ${\cal Q}_\psi$ and ${\cal Q}_{\lambda\psi}$ for $\lambda\in\mugroup{k}$ and the invariance of the rational differential $\omega_1$ under $G$, these periods will be invariant under $\psi\mapsto\lambda\psi$. 
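That $f$ is well defined on $G$ (equivalently, that the embedded copy of $\mugroup{k}$ lies in the kernel of $\alpha\mapsto\prod\alpha_j$) can be checked mechanically by working with exponents, writing $\alpha_j=e^{2\pi ia_j/d_j}$. The sketch below (bookkeeping and names ours) does this for the four weight systems used in our examples; it also records the orders $|G|=\prod d_j/k$ and $|G_0|=|G|/k$ (using that $f$ maps $G$ onto $\mugroup{k}$), which for the quintic gives the familiar order $125$ of the $(\Bbb Z/5)^3$ quotient group:

```python
# Exponent bookkeeping for G = (mu_{d_0} x ... x mu_{d_4}) / mu_k.
# An element of the product is recorded by exponents a_j, with
# alpha_j = exp(2*pi*i*a_j/d_j); the embedded copy of mu_k consists of
# the tuples a_j = b mod d_j, and f is trivial on it.
from fractions import Fraction
from math import prod

def check_descent(kvec):
    k = sum(kvec)
    d = [k // kj for kj in kvec]
    for b in range(k):
        # f is trivial on the image of b iff sum (b mod d_j)/d_j is an integer
        assert sum(Fraction(b % dj, dj) for dj in d).denominator == 1
    return d

for kvec in [(1, 1, 1, 1, 1), (2, 1, 1, 1, 1), (4, 1, 1, 1, 1), (5, 2, 1, 1, 1)]:
    check_descent(kvec)

# for the quintic, |G_0| = prod(d_j)/k^2 = 5^5/25 = 125
d = check_descent((1, 1, 1, 1, 1))
assert prod(d) // 25 == 125
```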
In particular, they will be functions of $z=\psi^{-k}$ alone. It is likely that the resolutions ${\cal W}_\psi$ of ${\cal Q}_\psi/G_0$ could be chosen so that the action of $G/G_0$ would lift to isomorphisms between ${\cal W}_\psi$ and ${\cal W}_{\lambda\psi}$. (We verified this in the case of quintic hypersurfaces in \cite{guide}.) In this case, there would be an actual family of Calabi-Yau threefolds for which $z$ served as a parameter. It may be that such resolutions could be constructed by finding an appropriate partial resolution of ${\cal Q}/G$. However, we do not need the existence of this family to describe the computation of the Yukawa coupling. \begin{table}[t] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{|c|c|c|} \hline $k$ & $(k_0,\dots,k_4)$ & $Q(x,\psi)$ \\ \hline 5 & $(1,1,1,1,1)$ & $x_0^5+x_1^5+x_2^5+x_3^5+x_4^5-5\psi x_0x_1x_2x_3x_4$ \\ 6 & $(2,1,1,1,1)$ & $x_0^3+x_1^6+x_2^6+x_3^6+x_4^6-6\psi x_0x_1x_2x_3x_4$ \\ 8 & $(4,1,1,1,1)$ & $x_0^2+x_1^8+x_2^8+x_3^8+x_4^8-8\psi x_0x_1x_2x_3x_4$ \\ 10 & $(5,2,1,1,1)$ & $x_0^2+x_1^5+x_2^{10}+x_3^{10}+x_4^{10} -10\psi x_0x_1x_2x_3x_4$ \\ \hline \end{tabular} \end{center} \caption{The hypersurfaces.} \label{tab1} \end{table} We will carry out the computation in four specific examples. These come from the lists of Candelas, Lynker and Schimmrigk \cite{cls}; they found that there are exactly four types of hypersurface in weighted projective four-space which are Calabi-Yau threefolds with Picard number one. The weights of the space are given in the second column of table~\ref{tab1}. For each of those cases, Greene and Plesser's mirror construction \cite{greene-plesser} yields the family ${\cal W}_\psi$ which we have described above. And Roan's formula \cite{roanpf} for the Betti numbers verifies that $b_3$ is indeed 4 (with $h^{2,1}=1$). The remaining columns in table~\ref{tab1} show the value of $k$, and give the equation $Q(x,\psi)$ explicitly. 
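The hypotheses imposed on the weights at the beginning of this section can be verified directly for each line of table~\ref{tab1}; a sketch (names ours):

```python
# Verify, for each line of table 1: d_j = k/k_j integral, gcd of any
# four of the weights equal to 1, and k = lcm{d_j} = sum k_j.
from math import gcd, lcm

weights = {5: (1, 1, 1, 1, 1),
           6: (2, 1, 1, 1, 1),
           8: (4, 1, 1, 1, 1),
           10: (5, 2, 1, 1, 1)}

for k, kvec in weights.items():
    assert sum(kvec) == k
    assert all(k % kj == 0 for kj in kvec)       # each d_j is an integer
    d = [k // kj for kj in kvec]
    for j0 in range(5):
        others = [kj for j, kj in enumerate(kvec) if j != j0]
        assert gcd(*others) == 1                 # gcd condition
    assert lcm(*d) == k                          # k = lcm{d_j}
```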
We describe the $G_0$-invariant cohomology by means of the rational differential forms \[ \omega_\ell:= \frac{ (-1)^{\ell-1}(\ell-1)!\,\psi^\ell(\prod x_i^{\ell-1})\Omega}{Q(x,\psi)^\ell}. \] These are chosen because of the evident $G$-invariance in the numerator; the coefficients were adjusted so that the formula \begin{equation} \label{derivs} -\frac1k\psi\frac d{d\psi}\omega_\ell=-\frac{\ell}k\omega_\ell +\omega_{\ell+1} \end{equation} would not be overly burdened with constants. We compute with the differential operator $-\frac1k\psi\frac d{d\psi}$ because it coincides with $z\frac d{dz}$. A basis for the $G_0$-invariant cohomology is then given by the residues of $\omega_1$, $\omega_2$, $\omega_3$, $\omega_4$. To compute the Picard-Fuchs equation, we must find an expression for $\omega_5$ as a linear combination of $\omega_1,\dots,\omega_4$ modulo exact forms. That expression, combined with \eqref{derivs}, will then yield the desired differential equation. We carried out this calculation using the Gr\"obner basis algorithm \cite{buchberger}, modifying an implementation written in {\sc maple} by Yunliang Yu (cf.~\cite{yu}). We first calculated a Gr\"obner basis for the Jacobian ideal $ J=(\partial Q/\partial x_0,\dots,\partial Q/\partial x_4) $, working in the ring $\Bbb C(\psi)[x_0,\dots,x_4]$ of polynomials whose coefficients are rational functions of $\psi$. The reduction of pole order was then achieved step by step as follows: given a form $\eta_\ell$, the residue of a global form with a pole of order $\ell$, we used the Gr\"obner basis to reduce the numerators of both $\eta_\ell$ and $\omega_\ell$ to standard form. We could thus determine a coefficient $\varepsilon_\ell\in\Bbb C(\psi)$ such that the numerator of $\eta_\ell-\varepsilon_\ell\omega_\ell$ lies in $J$. Another application of Gr\"obner basis reduction produced explicit coefficients \[ \eta_\ell-\varepsilon_\ell\omega_\ell=\sum A_{\ell j}\frac {\partial Q}{\partial x_j}. 
\] Then the Griffiths formula \eqref{griffiths} determines forms $\varphi_\ell$ and $\eta_{\ell-1}$ such that \[ \eta_\ell-\varepsilon_\ell\omega_\ell=d\varphi_\ell+\eta_{\ell-1}, \] and $\eta_{\ell-1}$ has a pole of order $\ell-1$. Beginning with $\eta_5=\omega_5$ and applying this procedure several times, one finds \[ \omega_5=\varepsilon_1\omega_1+\dots+\varepsilon_4\omega_4 +d\varphi. \] The results of this computation for our four examples are summarized in table 2. The coefficients $\varepsilon_\ell$ are in fact functions of $z=\psi^{-k}$ (as expected from our earlier discussion), and have been displayed as such. \begin{table}[t] \renewcommand{\arraystretch}{1} \begin{center} \begin{tabular}{|c|cccc|} \hline \rule{0pt}{14pt} $k$ & $\varepsilon_1$ & $\varepsilon_2$ & $\varepsilon_3$ & $\varepsilon_4$ \\[4pt] \hline & & & & \\ 5 & $\displaystyle\frac{1}{625(z-1)}$ & $\displaystyle\frac{-3}{25(z-1)}$ & $\displaystyle\frac{1}{(z-1)}$ & $\displaystyle\frac{-2}{(z-1)}$ \\ & & & & \\ 6 & $\displaystyle\frac{1}{324(z-4)}$ & $\displaystyle\frac{-5}{18(z-4)}$ & $\displaystyle\frac{-(z-50)}{18(z-4)}$ & $\displaystyle\frac{-(z+20)}{3(z-4)}$ \\ & & & & \\ 8 & $\displaystyle\frac{1}{16(z-256)}$ & $\displaystyle\frac{-15(z+256)}{512(z-256)}$ & $\displaystyle\frac{-5(3z-1280)}{64(z-256)}$ & $\displaystyle\frac{-(3z+1280)}{4(z-256)}$ \\ & & & & \\ 10 & $\displaystyle\frac{5}{4(z-12500)}$ & $\displaystyle\frac{-(7z+37500)}{200(z-12500)}$ & $\displaystyle\frac{-(7z-62500)}{20(z-12500)}$ & $\displaystyle\frac{-(z+12500)}{(z-12500)}$ \\ & & & & \\ \hline \end{tabular} \end{center} \caption{The results of the Gr\"obner basis calculation.} \label{tab2} \end{table} The differential equation for $[\omega_1,\dots,\omega_4]$ determined by this procedure has the form \[ z\frac d{dz} \begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \\ \omega_4 \end{bmatrix} = \begin{bmatrix} -\frac1k & 1 & 0 & 0 \\ 0 & -\frac2k & 1 & 0 \\ 0 & 0 & -\frac3k & 1 \\ \varepsilon_1 & \varepsilon_2 & 
\varepsilon_3 & \varepsilon_4-\frac4k \end{bmatrix} \begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \\ \omega_4 \end{bmatrix}. \] To calculate the Picard-Fuchs equation, we must change basis via \[ {\renewcommand{\arraystretch}{1.5} \begin{bmatrix} \omega_1 \\ z\frac d{dz}\omega_1 \\ (z\frac d{dz})^2\omega_1 \\ (z\frac d{dz})^3\omega_1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -\frac1k & 1 & 0 & 0 \\ \frac1{k^2} & -\frac3k & 1 & 0 \\ -\frac1{k^3} & \frac7{k^2} & -\frac6k & 1 \end{bmatrix} \begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \\ \omega_4 \end{bmatrix}. } \] This determines an equation in the form \eqref{Az}, with \begin{equation} \label{Beqn} {\renewcommand{\arraystretch}{1.5} \begin{array}{rrr} B_0(z) & = & -\varepsilon_1(z) - \frac1k\varepsilon_2(z) - \frac2{k^2}\varepsilon_3(z) -\frac6{k^3}\varepsilon_4(z)+\frac{24}{k^4} \\ B_1(z) & = & -\varepsilon_2(z)-\frac3k\varepsilon_3(z)- \frac{11}{k^2}\varepsilon_4(z) +\frac{50}{k^3} \\ B_2(z) & = & -\varepsilon_3(z)-\frac6k\varepsilon_4(z)+\frac{35}{k^2} \\ B_3(z) & = & -\varepsilon_4(z)+\frac{10}k. \end{array} } \end{equation} As can be directly verified in each of our cases, $B_j(0)=0$. It follows that the monodromy at $z=0$ is maximally unipotent. (In the case of quintics ($k=5$), this had been shown in \cite{pair}; cf.~\cite{guide}.) \section{Examples: Mirror maps} We next compute the mirror maps for our four examples, based on their Picard-Fuchs equations. Expanding eqs.~\eqref{PFop} and \eqref{logform}, one finds that the coefficient $C_3(z)$ coincides with $(6+B_3(z))/z$. Moreover, in our four examples, a straightforward computation based on eq.~\eqref{Beqn} and table~\ref{tab2} shows that $B_3(z)=2z/(z-\lambda)$, where $\lambda=1$, $4$, $256$, $12500$ when $k=5$, $6$, $8$, $10$, respectively. Thus, \[ C_3(z)=\frac{6+B_3(z)}{z}=\frac6z+\frac2{z-\lambda}. 
\] The Yukawa coupling $\kappa_{zzz}$ in the gauge $\omega(z)$ is therefore given by a function $W_3(z)$ which satisfies the differential equation \[ \frac{dW_3(z)}{dz}=\left(\frac{-3}{z}+\frac{-1}{z-\lambda}\right)W_3(z). \] Thus, in the gauge $\omega(z)$ we have \[ \kappa_{zzz}=\frac{c_1}{(2\pi i)^3z^3(z-\lambda)}. \] Here $c_1/(2\pi i)^3$ is the first ``constant of integration'': we have introduced a factor of $(2\pi i)^3$ in order to simplify a later formula. In order to determine the natural gauge, we must find a solution $f_0(z)$ of the Picard-Fuchs equation which is regular near $z=0$. Using the corresponding vector $w_0(z)$ of which $f_0(z)$ is the first component, we want a solution to the vector equation \begin{equation} \label{w0eqn} z\frac d{dz}w_0(z)=A(z)w_0(z) \end{equation} which is regular near $z=0$. ($A(z)$ is given by eqs.~\eqref{Az}, \eqref{Beqn}, and table~\ref{tab2}.) This can be found using power-series techniques, and there is a solution with $f_0(0)\ne0$ in each of our four cases. We normalize so that $f_0(0)=1$; alternatively, we could have absorbed the leading term of $f_0(z)$ into the constant of integration $c_1$. As a result, the gauge-fixed value of $\kappa_{zzz}$ takes the form \[ \kappa_{zzz}=\frac{c_1}{(2\pi i)^3z^3(z-\lambda)(f_0(z))^2}, \] where the constant $c_1$ has yet to be determined. We now search for the good parameter $t$. We should locate a second solution $f_1(z)$, or its corresponding vector $w_1(z)$, which is multiple-valued and has the correct monodromy properties. The monodromy will be such that if we introduce \[v(z):=2\pi iw_1(z)-(\log z)w_0(z)\] and its first component \[g(z):=2\pi if_1(z)-(\log z)f_0(z),\] then $v(z)$ will be single-valued and regular near $z=0$. It is easy to calculate that the matrix equation satisfied by $v(z)$ is \begin{equation} \label{veqn} z\frac d{dz}v(z)=A(z)v(z)-w_0(z). \end{equation} Solutions to this equation can be found by power-series techniques. 
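To illustrate the power-series computation of $f_0$ in a concrete case: for the quintic ($k=5$), one checks from table~\ref{tab2} and eq.~\eqref{Beqn} that $B_3=2z/(z-1)$, $B_2=\frac75z/(z-1)$, $B_1=\frac25z/(z-1)$, $B_0=\frac{24}{625}z/(z-1)$, so the Picard-Fuchs equation can be rewritten as $\theta^4f=z(\theta+\frac15)(\theta+\frac25)(\theta+\frac35)(\theta+\frac45)f$ with $\theta=z\frac d{dz}$. The recursion for the coefficients of $f_0$ is then immediate; as a consistency check (ours, not taken from the text) they agree with the closed form $(5n)!/(n!)^5\,5^{-5n}$ in the variable $z=\psi^{-5}$:

```python
# Power-series computation of the regular solution f0 for the quintic:
#     theta^4 f = z (theta + 1/5)(theta + 2/5)(theta + 3/5)(theta + 4/5) f
# gives the recursion  n^4 a_n = prod_{j=1..4} (n - 1 + j/5) a_{n-1}.
from sympy import Rational, factorial

N = 6
a = [Rational(1)]
for n in range(1, N + 1):
    num = Rational(1)
    for j in range(1, 5):
        num *= n - 1 + Rational(j, 5)
    a.append(a[n - 1]*num/n**4)

# coefficients agree with the closed form (5n)!/(n!)^5/5^(5n)
closed = [Rational(factorial(5*n), factorial(n)**5 * 5**(5*n)) for n in range(N + 1)]
assert a == closed
print(a[1])  # -> 24/625
```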
We normalize the solution so that $g(0)=0$. The parameter $t$ is then given by \[ t=\frac1{2\pi i}\log c_2+\frac1{2\pi i}\log z+\frac{g(z)}{f_0(z)} \] ($\frac1{2\pi i}\log c_2$ is the second ``constant of integration'') and the associated parameter $q$ is \[ q=e^{2\pi it}=c_2ze^{g/f_0}. \] Let us define \[ \delta(z)=1+z\frac d{dz}\left(\frac{g(z)}{f_0(z)}\right), \] so that \[ \frac{dq}{dz}=c_2\delta(z)e^{g/f_0}. \] Then by the chain rule, \[ \frac{dz}{dt}=\frac{dq/dt}{dq/dz}=\frac{2\pi iz}{\delta(z)}. \] It follows that the gauge-fixed value of $\kappa_{ttt}$ is \[ \kappa_{ttt}=\left(\frac{dz}{dt}\right)^3\kappa_{zzz} =\frac{c_1}{(\delta(z))^3(z-\lambda)(f_0(z))^2}. \] Finally we express this normalized $\kappa_{ttt}$ as a power series in $q$. The constants $c_1$ and $c_2$ have yet to be determined; however, we can define \begin{equation} \label{h0z} h_0(z)=\frac{1}{(\delta(z))^3(z-\lambda)(f_0(z))^2} \end{equation} \begin{equation} \label{hjz} h_j(z)=\frac1{\delta(z)e^{g/f_0}}\cdot\frac{dh_{j-1}(z)}{dz} \end{equation} and find that \[ h_j(z)=\frac{(c_2)^j}{c_1}\left(\frac d{dq}\right)^j\kappa_{ttt}, \] so that \[ \kappa_{ttt}=\sum_{j=0}^{\infty} \frac{c_1}{(c_2)^j}\, \frac{h_j(0)}{j!}\, q^j. \] \begin{proposition} The numbers $h_j(0)$ are rational numbers. \end{proposition} \begin{pf} The coefficient matrix $A(z)$ in the vector equation \eqref{w0eqn} has entries in $\Bbb Q(z)$; if written out in power series, all the power series coefficients will be rational numbers. Finding a power series solution to \eqref{w0eqn} then involves solving linear equations with rational coefficients at each step: the solutions will be rational. Thus, $w_0(z)$ and $f_0(z)$ are power series in $z$ with rational coefficients. Similarly, $v(z)$ and $g(z)$ are power series with rational coefficients, since they come from equation \eqref{veqn}. 
Furthermore, since exponentiating a power series with rational coefficients (whose constant term is zero) again gives a power series with rational coefficients, $e^{g/f_0}$ and $\delta(z)$ are power series in $z$ with rational coefficients. But then by \eqref{h0z}, $h_0(z)$ is clearly a power series in $z$ with rational coefficients; similarly for $h_j(z)$ by \eqref{hjz}. It follows that each $h_j(0)$ is a rational number. \end{pf} \section{Choosing the constants and predicting the numbers of rational curves} Calabi-Yau threefolds with $h^{2,1}=1$ are conjectured to be the ``mirrors'' of other Calabi-Yau threefolds with $h^{1,1}=1$. In the four examples we have considered, this mirror property can be realized by a construction of Greene and Plesser \cite{greene-plesser}. The threefolds ${\cal W}_\psi$ are mirrors of threefolds ${\cal M}\subset\Bbb P^{(k_0,\dots,k_4)}$, which are hypersurfaces of weighted degree $k=\sum k_j$. The Picard group of ${\cal M}$ is cyclic, generated by some ample divisor $H$. Mirror symmetry predicts that the $q$-expansion of the gauge-fixed Yukawa coupling \[ \kappa_{ttt}=a_0+a_1q+a_2q^2+\cdots \] will have integers as coefficients. Moreover, by a formula conjectured in \cite{pair} and established in \cite{psa-drm}, if this $q$-expansion is written in the form \begin{equation} \label{formla2} \kappa_{ttt} = n_0 + \sum_{j=1}^\infty \frac{n_jj^3q^j}{1-q^j} = n_0 + n_1q + (2^3n_2 + n_1)q^2 + \cdots . \end{equation} then the coefficients $n_j$ are also integers. The first term $n_0$ is predicted to coincide with $H^3$ (the absolute degree of ${\cal M}$), and $n_j$ is predicted to be the number of rational curves $C$ on ${\cal M}$ with $C\cdot H=j$, assuming that all rational curves on ${\cal M}$ are disjoint and have normal bundle ${\cal O}(-1)\oplus{\cal O}(-1)$. These two predictions can be used to choose the constants of integration in our examples. 
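Extracting the integers $n_j$ from the coefficients $a_j$ of the $q$-expansion in eq.~\eqref{formla2} is a simple recursive inversion over divisors. The sketch below (names ours) implements it, and checks it against the first few quintic numbers appearing in table~\ref{tab3}:

```python
# Invert eq. (formla2): given the q-expansion coefficients a_j of the
# Yukawa coupling, recover the predicted curve counts n_j from
#     kappa_ttt = n_0 + sum_j n_j j^3 q^j / (1 - q^j),
# i.e. a_j = sum_{d | j} n_d d^3 for j >= 1.
from fractions import Fraction

def curve_counts(a):
    n = [a[0]]
    for j in range(1, len(a)):
        s = sum(n[d]*d**3 for d in range(1, j) if j % d == 0)
        nj = Fraction(a[j] - s, j**3)
        assert nj.denominator == 1  # integrality is part of the prediction
        n.append(int(nj))
    return n

# quintic check: with n = (5, 2875, 609250, 317206375) one has
# a_1 = 2875, a_2 = 2^3*609250 + 2875, a_3 = 3^3*317206375 + 2875
n_true = [5, 2875, 609250, 317206375]
a = [n_true[0]]
for j in range(1, 4):
    a.append(sum(n_true[d]*d**3 for d in range(1, j + 1) if j % d == 0))
assert curve_counts(a) == n_true
```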
First, the absolute degree $d$ coincides with the lowest exponent which appears in the polynomial $Q(x,\psi)$; to ensure that $n_0=d$ we must take $c_1=-\lambda d$. Second, the formula~\eqref{formla2} puts very strong divisibility constraints on the coefficients $a_j$, and it seems likely that there will be a unique choice of $c_2$ which satisfies all of these constraints. \begin{table}[t] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{|c|rrrrr|} \hline $k$ & $n_0$ & $n_1$ & $n_2$ & $n_3$ & $n_4$ \\ \hline 5 & 5 & 2875 & 609250 & 317206375 & 242467530000 \\ 6 & 3 & 7884 & 6028452 & 11900417220 & 34600752005688 \\ 8 & 2 & 29504 & 128834912 & 1423720546880 & 23193056024793312 \\ 10 & 2 & 462400 & 24431571200 & 3401788732948800 & 700309317702649312000 \\ \hline \end{tabular} \end{center} \caption{The predicted numbers of curves.} \label{tab3} \end{table} We have calculated the first 20 coefficients (using {\sc mathematica}) in each of our four examples. There does indeed appear to be a unique choice for $c_2$ which produces integers for $n_1,\dots,n_{20}$: that choice turns out to be $c_2=k^{-k}$ in each of our examples. Making this choice leads to the values for $n_j$ displayed in table~\ref{tab3}. Table~\ref{tab3} therefore contains predictions about numbers of rational curves on the weighted projective hypersurfaces. For a general hypersurface in ${\cal M}\subset\Bbb P^{(k_0,\dots,k_4)}$ of degree $k=\sum k_j$, the prediction is that there should be $n_j$ rational curves $C$ with $C\cdot H=j$, where $H$ generates $\operatorname{Pic}({\cal M})$. The first line of the table reproduces the predictions made by Candelas et al.\ about quintic threefolds. Several of these have been verified: the number of lines was known classically, the number of conics was computed by Katz \cite{katz}, and the number of twisted cubics $n_3$ has recently been computed by Ellingsrud and Str\o mme \cite{ell-str}---all of these results agree with the predictions. 
Of the remaining predictions in the table, we have only checked one. Each hypersurface from the third family (the case $k=8$) can be regarded as a double cover of $\Bbb P^3$ branched on a surface of degree 8. The entry 29504 in the third line of the table can be interpreted as follows: for a general surface of degree 8 in $\Bbb P^3$, there should be 14752 lines which are 4-times tangent to the surface. (These lines will then split into pairs of rational curves on the double cover.) After we had obtained this number, Steve Kleiman was kind enough to locate a $19^{\text{th}}$-century formula of Schubert \cite[Formula 21, p. 236]{schubert}, which states that the number of lines in $\Bbb P^3$ 4-times tangent to a general surface of degree $n$ is \[ \frac1{12}n(n-4)(n-5)(n-6)(n-7)(n^3+6n^2+7n-30) . \] Substituting $n=8$, we find the predicted number 14752. \section*{Acknowledgements} Several of the ideas explained in this paper arose in conversations with Sheldon Katz---it is a pleasure to acknowledge his contribution. I also benefited greatly from the chance to interact with physicists which was provided by the Mirror Symmetry Workshop. I wish to thank the M.S.R.I. as well as the organizers and participants in the Workshop. This work was partially supported by NSF grant DMS-9103827. \makeatletter \renewcommand{\@biblabel}[1]{\hfill#1.}\makeatother \newcommand{\leavevmode\hbox to3em{\hrulefill}\,}{\leavevmode\hbox to3em{\hrulefill}\,}
https://arxiv.org/abs/1109.0838
A central limit theorem for stationary random fields
This paper establishes a central limit theorem and an invariance principle for a wide class of stationary random fields under natural and easily verifiable conditions. More precisely, we deal with random fields of the form $X_k = g(\varepsilon_{k-s}, s \in \mathbb{Z}^d)$, $k\in\mathbb{Z}^d$, where $(\varepsilon_i)_{i\in\mathbb{Z}^d}$ are i.i.d. random variables and $g$ is a measurable function. Such spatial processes provide a general framework for stationary ergodic random fields. Under a short-range dependence condition, we show that the central limit theorem holds without any assumption on the underlying domain on which the process is observed. A limit theorem for the sample auto-covariance function is also established.
\section{Introduction} Central limit theory plays a fundamental role in statistical inference of random fields. There is a substantial literature on central limit theorems for random fields under various dependence conditions. See \cite{Bolthausen1982a}, \cite{Bradley1992}, \cite{Bulinski2010}, \cite{Bulinskii1988}, \cite{Chen}, \cite{Dedecker1998}, \cite{Guyon--Richardson1984}, \cite{Jenish--Prucha2009}, \cite{Maltz1999}, \cite{Nahapetian1987}, \cite{Neaderhouser1978a}, \cite{Neaderhouser1978b}, \cite{Paulauskas2010}, \cite{Perera1997}, among others. However, many of these results require that the underlying random fields have very special structures such as Gaussian, linear, Markovian or strong mixing of various types. In applications those structural assumptions can be violated, or are not easily verifiable. In this paper we consider stationary random fields which are viewed as nonlinear transforms of independent and identically distributed (i.i.d.) random variables. Based on that representation we introduce dependence measures and establish a central limit theorem and an invariance principle. We assume that the random field $(X_i)_{ i \in \mathbb{Z}^d}$ has the form \begin{equation}\label{definition_champ} X_i = g\left(\varepsilon_{i-s};\,s\in\mathbb{Z}^d\right), \quad i\in\mathbb{Z}^d, \end{equation} where $(\varepsilon_j)_{j\in\mathbb{Z}^d}$ are i.i.d. random variables and $g$ is a measurable function. In the one-dimensional case ($d=1$) the model (\ref{definition_champ}) is well known and includes linear as well as many widely used nonlinear time series models as special cases. In Section \ref{eq:ex}, based on (\ref{definition_champ}), we introduce dependence measures. It turns out that, with our dependence measure, central limit theorems and moment inequalities can be established in a very elegant and natural way. The rest of the paper is organized as follows.
In Section \ref{sec:main} we present a central limit theorem and an invariance principle for \begin{eqnarray*} S_\Gamma = \sum_{i\in\Gamma}X_i, \end{eqnarray*} where $\Gamma$ is a finite subset of $\mathbb{Z}^d$ which grows to infinity. The proof of our Theorem \ref{tlc} is based on a central limit theorem for $m_n$-dependent random fields established by Heinrich \cite{Heinrich88}. Unlike most existing results on central limit theorems for random fields, which require certain regularity conditions on the boundary of $\Gamma$, Heinrich's central limit theorem (and consequently our Theorem \ref{tlc}) has the very interesting property that no condition on the boundary of $\Gamma$ is needed, and the central limit theorem holds under the minimal condition that $|\Gamma| \to \infty$, where $|\Gamma|$ denotes the cardinality of $\Gamma$. This is a very attractive property in spatial applications in which the underlying observation domains can be quite irregular. As an application, we establish a central limit theorem for sample auto-covariances. Section \ref{sec:main} also presents an invariance principle. Proofs are provided in Section \ref{sec:proof}. \section{Examples and Dependence Measures} \label{eq:ex} In (\ref{definition_champ}), we can interpret $(\varepsilon_s)_{s \in \mathbb{Z}^d}$ as the input random field, $g$ as a transform or map, and $(X_i)_{i \in \mathbb{Z}^d}$ as the output random field. Based on this interpretation, we define the dependence measure as follows: let $(\varepsilon_j^{'})_{j\in\mathbb{Z}^d}$ be an i.i.d. copy of $(\varepsilon_j)_{j\in\mathbb{Z}^d}$ and consider the coupled version $X_i^{\ast}$ of $X_i$ defined by $$ X_i^{\ast}=g\left(\varepsilon^{\ast}_{i-s}\,;\,s\in\mathbb{Z}^d\right), $$ where for any $j$ in $\mathbb{Z}^d$, $$ \varepsilon_j^{\ast}= \left\{\begin{array}{ll} \varepsilon_j & \textrm{if $j\neq 0$} \\ \varepsilon_0^{'} & \textrm{if $j=0$.} \end{array}\right.
$$ Recall that a Young function $\psi$ is a real convex nondecreasing function defined on $\mathbb{R}^{+}$ which satisfies $\lim_{t\to\infty}\psi(t)=\infty$ and $\psi(0)=0$. We define the Orlicz space $\mathbb{L}_{\psi}$ as the space of real random variables $Z$ defined on the probability space $(\Omega, \mathcal{F}, \P)$ such that $\mathbb{E}[\psi(\vert Z\vert/c)]<+\infty$ for some $c>0$. The Orlicz space $\mathbb{L}_{\psi}$ equipped with the so-called Luxemburg norm $\| . \|_{\psi}$ defined for any real random variable $Z$ by \begin{eqnarray*} \| Z\|_{\psi}=\inf\{\,c>0\,;\,\mathbb{E}[\psi(\vert Z\vert/c)]\leq 1\,\} \end{eqnarray*} is a Banach space. For more about Young functions and Orlicz spaces one can refer to Krasnosel'skii and Rutickii \cite{K-R}. Following Wu \cite{Wu2005}, we introduce the following dependence measures which are directly related to the underlying processes. \begin{Def}[Physical dependence measure] Let $\psi$ be a Young function and $i$ in $\mathbb{Z}^d$ be fixed. If $X_i$ belongs to $\mathbb{L}_{\psi}$, we define the physical dependence measure $\delta_{i,\psi}$ by $$ \delta_{i,\psi}=\|X_i-X_i^{\ast}\|_{\psi}. $$ If $p\in ]0,+\infty]$ and $X_i$ belongs to $\mathbb{L}^p$, we denote $\delta_{i,p}=\|X_i-X_i^{\ast}\|_p$. \end{Def} \begin{Def}[Stability] We say that the random field $X$ defined by $(\ref{definition_champ})$ is $p$-stable if $$ \Delta_p:=\sum_{i\in\mathbb{Z}^d}\delta_{i,p}<\infty. $$ \end{Def} As an illustration, we give some examples of $p$-stable spatial processes. \begin{example} {\em (Linear random fields) Let $(\varepsilon_i)_{i\in\mathbb{Z}^d}$ be i.i.d random variables with $\varepsilon_i$ in $\mathbb{L}^p$, $p \geq 2$. The linear random field $X$ defined for any $k$ in $\mathbb{Z}^d$ by $$ X_k=\sum_{s\in\mathbb{Z}^d}a_s\varepsilon_{k-s} $$ is of the form $(\ref{definition_champ})$ with a linear functional $g$. For any $i$ in $\mathbb{Z}^d$, $\delta_{i,p} = \vert a_i\vert \| \varepsilon_0 - \varepsilon^{'}_0\|_p$. 
So, $X$ is $p$-stable if $$ \sum_{i\in\mathbb{Z}^d}\vert a_i\vert<\infty. $$ Clearly, if $K$ is a Lipschitz continuous function, under the above condition, the subordinated process $Y_i = K(X_i)$ is also $p$-stable since $\delta_{i,p} = O(|a_i|)$. } \end{example} \begin{example} {\em (Volterra field) Another class of nonlinear random fields is given by Volterra processes, which play an important role in nonlinear system theory (Casti \cite{Casti1985}, Rugh \cite{Rugh1981}): consider the second order Volterra process \begin{eqnarray*} X_k = \sum_{s_1, s_2\in\mathbb{Z}^d} a_{s_1, s_2} \varepsilon_{k-s_1} \varepsilon_{k-s_2}, \end{eqnarray*} where $a_{s_1, s_2}$ are real coefficients with $a_{s_1, s_2} = 0$ if $s_1 = s_2$ and $\varepsilon_i$ in $\mathbb{L}^p$, $p \geq 2$. Let \begin{eqnarray*} A_k = \sum_{s\in\mathbb{Z}^d} (a_{s, k}^2 + a_{k, s}^2) \mbox{ and } B_k = \sum_{s\in\mathbb{Z}^d} (|a_{s, k}|^p + |a_{k, s}|^p). \end{eqnarray*} By the Rosenthal inequality, there exists a constant $C_p > 0$ such that \begin{eqnarray*} \delta_{k,p} = \| X_k - X_k^*\|_p \le C_p A_k^{1/2} \| \varepsilon_0\|_2 \| \varepsilon_0\|_p + C_p B_k^{1/p} \| \varepsilon_0\|_p^2. \end{eqnarray*} } \end{example} \section{Main Results} \label{sec:main} To establish a central limit theorem for $S_\Gamma$ we need the following moment inequality. With the physical dependence measure, it turns out that the moment bound can have an elegant and concise form. \begin{Prop}\label{inequality} Let $\Gamma$ be a finite subset of $\mathbb{Z}^d$ and $(a_i)_{i\in\Gamma}$ be a family of real numbers. For any $p\geq 2$, we have $$ \left\|\sum_{i\in\Gamma}a_iX_i\right\|_p\leq \left(2p\sum_{i\in\Gamma}a_i^2\right)^{\frac{1}{2}}\Delta_p $$ where $\Delta_p=\sum_{i\in\mathbb{Z}^d}\delta_{i,p}$. \end{Prop} In the sequel, for any $i$ in $\mathbb{Z}^d$, we write $\delta_i$ in place of $\delta_{i,2}$.
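The coupling construction and the linear-field identity $\delta_{i,p}=\vert a_i\vert\,\|\varepsilon_0-\varepsilon_0^{'}\|_p$ can be illustrated numerically. The following sketch (ours, not part of the paper; the finitely supported kernel $a_s=2^{-\vert s\vert}$, Gaussian innovations and sample sizes are arbitrary choices, with $d=1$ and $p=2$) estimates $\delta_{i,2}$ by Monte Carlo and compares it with the exact value $\vert a_i\vert\sqrt{2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
supp = np.arange(-5, 6)      # finite kernel support (our choice)
a = 0.5 ** np.abs(supp)      # summable coefficients a_s = 2^{-|s|}

def coupled_delta2(i, n_rep=100_000):
    """Monte Carlo estimate of delta_{i,2} = ||X_i - X_i^*||_2 for the
    linear field X_k = sum_s a_s eps_{k-s} with N(0,1) innovations."""
    eps = rng.standard_normal((n_rep, supp.size))      # eps_{i-s}, s in supp
    X_i = eps @ a
    eps_star = eps.copy()
    if i in supp:                                      # eps_{i-s} = eps_0 iff s = i
        col = int(np.where(supp == i)[0][0])
        eps_star[:, col] = rng.standard_normal(n_rep)  # fresh copy eps_0'
    X_i_star = eps_star @ a
    return float(np.sqrt(np.mean((X_i - X_i_star) ** 2)))

for i in (0, 2, 7):
    exact = (0.5 ** abs(i) if abs(i) <= 5 else 0.0) * np.sqrt(2.0)
    print(i, round(coupled_delta2(i), 3), round(exact, 3))
```

For $i$ outside the kernel support the coupled copy coincides with $X_i$, so the estimate is exactly zero, reflecting the fact that only the innovation at the origin is resampled.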
\begin{Prop}\label{variance_asymptotique} If $\Delta_2:=\sum_{i\in\mathbb{Z}^d}\delta_{i}<\infty$ then $\sum_{k\in\mathbb{Z}^d}\vert\mathbb{E}(X_0X_k)\vert<\infty$. Moreover, if $(\Gamma_n)_{n\geq 1}$ is a sequence of finite subsets of $\mathbb{Z}^d$ such that $\vert\Gamma_n\vert$ goes to infinity and $|\partial\Gamma_n| / |\Gamma_n|$ goes to zero then \begin{equation}\label{limite_variance_S_n_prime} \lim_{n\to+\infty}\vert\Gamma_n\vert^{-1} \mathbb{E}(S^2_{\Gamma_n})=\sum_{k\in\mathbb{Z}^d}\mathbb{E}(X_0X_k). \end{equation} \end{Prop} \subsection{Central Limit Theorem} Our first main result is the following central limit theorem. \begin{Th}\label{tlc} Let $(X_i)_{i\in\mathbb{Z}^d}$ be the stationary centered random field defined by $(\ref{definition_champ})$ satisfying $\Delta_2 := \sum_{i\in\mathbb{Z}^d}\delta_{i}<\infty$. Let $(\Gamma_n)_{n\geq 1}$ be a sequence of finite subsets of $\mathbb{Z}^d$ such that $\vert\Gamma_n\vert\to\infty$ and assume that $\sigma_n^2 := \mathbb{E}\left(S_{\Gamma_n}^2\right)\to\infty$. Then the Lévy distance \begin{eqnarray} \label{eq:Levy} L[ S_{\Gamma_n}/\sqrt{|\Gamma_n|}, \, N(0, \sigma_n^2 / |\Gamma_n|) ] \to 0 \mbox{ as } n \to \infty. \end{eqnarray} \end{Th} We emphasize that in Theorem \ref{tlc} no condition on the domains $\Gamma_n$ is imposed other than the natural one $|\Gamma_n| \to \infty$. Applying Proposition \ref{variance_asymptotique}, if $|\partial\Gamma_n| / |\Gamma_n|$ goes to zero and $\sigma^2 := \sum_{k\in\mathbb{Z}^d}\mathbb{E}(X_0 X_k) > 0$ then $$ \frac{S_{\Gamma_n}}{\sqrt{\vert\Gamma_n\vert}}\converge{n} {+\infty}{\textrm{$\mathcal{L}$}}\mathcal{N}(0,\sigma^2). $$ Theorem \ref{tlc} can be applied to the mean estimation problem: suppose that a stationary random field $X_i$ with unknown mean $\mu = \mathbb{E} X_i$ is observed on the domain $\Gamma$.
Then $\mu$ can be estimated by the sample mean $\hat \mu = S_\Gamma / |\Gamma|$ and a confidence interval for $\hat \mu$ can be constructed if there is a consistent estimate for ${\rm var}(S_\Gamma) / |\Gamma|$. Interestingly, Theorem \ref{tlc} can also be applied to the estimation of auto-covariance functions. For $k \in \mathbb{Z}^d$ let \begin{eqnarray} \gamma_k = {\rm cov}(X_0, X_k) = \mathbb{E}(X_0 X_k) - \mu^2. \end{eqnarray} Assume $X_i$ is observed over $i \in \Gamma$ and let $\Xi = \{ i \in \Gamma: \, i+k \in \Gamma \}$. Then $\gamma_k$ can be estimated by \begin{eqnarray} \hat \gamma_k = {1\over {|\Xi|}} \sum_{i \in \Xi}X_i X_{i+k} - \hat \mu^2. \end{eqnarray} To apply Theorem \ref{tlc}, we need to compute the physical dependence measure for the process $Y_i := X_i X_{i+k}, i \in \mathbb{Z}^d$. It turns out that the dependence for $Y_i$ can be easily obtained from that of $X_i$. Note that \begin{eqnarray*} \delta_{i, p/2}(Y) &=& \| X_i X_{i+k} - X^*_i X^*_{i+k} \|_{p/2} \cr &\le& \| X_i X_{i+k} - X_i X^*_{i+k} \|_{p/2} + \| X_i X^*_{i+k} - X^*_i X^*_{i+k} \|_{p/2} \cr &\le& \|X_i\|_p \delta_{i+k, p} + \delta_{i, p} \| X^*_{i+k} \|_p = \|X_0\|_p(\delta_{i+k, p} + \delta_{i, p}). \end{eqnarray*} Hence, if $\Delta_4 = \sum_{i \in \mathbb{Z}^d} \delta_{i, 4} < \infty$, we have $\sum_{i \in \mathbb{Z}^d} \delta_{i, 2}(Y) < \infty$ and the central limit theorem for $\sum_{i \in \Xi}X_i X_{i+k} / |\Xi|$ holds if $|\Xi| \to \infty$. \subsection{Invariance Principles} We now show that an invariance principle holds as well.
If $\mathcal{A}$ is a collection of Borel subsets of $[0,1]^{d}$, define the smoothed partial sum process $\{S_{n}(A)\,;\,A\in\mathcal{A}\}$ by \begin{equation}\label{process} S_{n}(A)=\displaystyle{\sum_{i\in\{1,...,n\}^{d}}}\,\lambda(nA\cap R_{i})X_{i} \end{equation} where $R_{i}=]i_{1}-1,i_{1}]\times...\times]i_{d}-1,i_{d}]$ is the unit cube with upper corner at $i$, $\lambda$ is the Lebesgue measure on $\mathbb{R}^{d}$ and $X_i$ is defined by $(\ref{definition_champ})$. We equip the collection $\mathcal{A}$ with the pseudo-metric $\rho$ defined for any $A,B$ in $\mathcal{A}$ by $\rho(A,B)=\sqrt{\lambda(A\Delta B)}$. To measure the size of $\mathcal{A}$ one considers the metric entropy: denote by $H(\mathcal{A},\rho,\varepsilon)$ the logarithm of the smallest number $N(\mathcal{A},\rho,\varepsilon)$ of open balls of radius $\varepsilon$ with respect to $\rho$ which form a covering of $\mathcal{A}$. The function $H(\mathcal{A}, \rho, .)$ is the entropy of the class $\mathcal{A}$. Let $\mathcal{C}(\mathcal{A})$ be the space of continuous real functions on $\mathcal{A}$, equipped with the norm $\|.\|_{\mathcal{A}}$ defined by $\| f\|_{\mathcal{A}}=\sup_{A\in\mathcal{A}}\vert f(A)\vert$.\\ A standard Brownian motion indexed by $\mathcal{A}$ is a mean zero Gaussian process $W$ with sample paths in $\mathcal{C}(\mathcal{A})$ and Cov$(W(A),W(B))=\lambda(A\cap B)$. From Dudley \cite{Dudley} we know that such a process exists if \begin{equation}\label{entrop-metriq1} \int_{0}^{1}\sqrt{H(\mathcal{A},\rho,\varepsilon)}\,d\varepsilon<+\infty. \end{equation} We say that the invariance principle or functional central limit theorem (FCLT) holds if the sequence $\{n^{-d/2}S_{n}(A)\,;\,A\in\mathcal{A}\}$ converges in distribution to an $\mathcal{A}$-indexed Brownian motion in the space $\mathcal{C}(\mathcal{A})$. The first weak convergence results for $\mathcal{Q}_{d}$-indexed partial sum processes were established for i.i.d.
random fields and for the collection $\mathcal{Q}_{d}$ of lower-left quadrants in $[0,1]^{d}$, that is to say the collection $\{[0,t_{1}]\times\ldots\times[0,t_{d}]\,;\,(t_{1},\ldots,t_{d})\in[0,1]^{d}\}$. They were proved by Wichura \cite{Wichu} under a finite variance condition and earlier by Kuelbs \cite{Kuelbs} under additional moment restrictions. When the dimension $d$ is reduced to one, these results coincide with the original invariance principle of Donsker \cite{Donsker}. Dedecker \cite{Dedecker2001} gave an $\mathbb{L}^{\infty}$-projective criterion for the process $\{n^{-d/2}S_{n}(A)\,;\,A\in\mathcal{A}\}$ to converge in the space $\mathcal{C}(\mathcal{A})$ to a mixture of $\mathcal{A}$-indexed Brownian motions when the collection $\mathcal{A}$ satisfies only the entropy condition ($\ref{entrop-metriq1}$). This projective criterion is valid for martingale-difference bounded random fields and provides a sufficient condition for $\phi$-mixing bounded random fields. For unbounded random fields, the result still holds provided that the metric entropy condition on the class $\mathcal{A}$ is reinforced (see \cite{Elmachkouri2002}). It is shown in \cite{Elmachkouri--Volny2003} that the FCLT may not be valid for $p$-integrable martingale-difference random fields ($0\leq p<+\infty$) but it still holds if the conditional variances of the martingale-difference random field are assumed to be bounded a.s. (see \cite{Elmachkouri--Ouchti2006}). In this paper, we are going to establish the FCLT for random fields of the form $(\ref{definition_champ})$ (see Theorem \ref{invariance_principle}).\\ Following \cite{van-der-Vaart-Wellner}, we recall the definition of Vapnik-Chervonenkis classes ($VC$-classes) of sets: let $\mathcal{C}$ be a collection of subsets of a set $\mathcal{X}$. An arbitrary set of $n$ points $F_n:=\{x_1,...,x_n\}$ possesses $2^n$ subsets.
Say that $\mathcal{C}$ {\em picks out} a certain subset from $F_n$ if it can be formed as $C\cap F_n$ for some $C$ in $\mathcal{C}$. The collection $\mathcal{C}$ is said to {\em shatter} $F_n$ if each of its $2^n$ subsets can be picked out in this manner. The {\em VC-index} $V(\mathcal{C})$ of the class $\mathcal{C}$ is the smallest $n$ for which no set of size $n$ is shattered by $\mathcal{C}$. Clearly, the more refined $\mathcal{C}$ is, the larger is its index. Formally, we have $$ V(\mathcal{C})=\inf\left\{n\,;\,\max_{x_1,...,x_n}\Delta_n(\mathcal{C},x_1,...,x_n)<2^n\right\} $$ where $\Delta_n(\mathcal{C},x_1,...,x_n)=\#\left\{C\cap\{x_1,...,x_n\}\,;\,C\in\mathcal{C}\right\}$. Two classical examples of $VC$-classes are the collection $\mathcal{Q}_d=\left\{[0,t]\,;\,t\in[0,1]^{d}\right\}$ and $\mathcal{Q}^{'}_d=\left\{[s,t]\,;\,s,t\in[0,1]^{d},\,s\leq t\right\}$ with indices $d+1$ and $2d+1$ respectively (where $s\leq t$ means $s_i\leq t_i$ for any $1\leq i\leq d$). For more about Vapnik-Chervonenkis classes of sets, one can refer to \cite{van-der-Vaart-Wellner}.\\ \\ Let $\beta>0$ and $h_{\beta}=\left((1-\beta)/\beta\right)^{\frac{1}{\beta}}\ind{\{0<\beta<1\}}$. We denote by $\psi_{\beta}$ the Young function defined by $\psi_{\beta}(x)=e^{(x+h_{\beta})^{\beta}}-e^{h_{\beta}^{\beta}}$ for any $x$ in $\mathbb{R}^{+}$. \begin{Th}\label{invariance_principle} Let $(X_i)_{i\in\mathbb{Z}^d}$ be the stationary centered random field defined by $(\ref{definition_champ})$ and let $\mathcal{A}$ be a collection of regular Borel subsets of $[0,1]^d$. Assume that one of the following conditions holds: \begin{itemize} \item[$(i)$] The collection $\mathcal{A}$ is a Vapnik-Chervonenkis class with index $V$ and there exists $p>2(V-1)$ such that $X_0$ belongs to $\mathbb{L}^p$ and $\Delta_p:=\sum_{i\in\mathbb{Z}^d}\delta_{i,p}<\infty$.
\item[$(ii)$] There exist $\theta>0$ and $0<q<2$ such that $\mathbb{E}[\exp(\theta\vert X_0\vert^{\beta(q)})]<\infty$ where $\beta(q)=2q/(2-q)$, such that $\Delta_{\psi_{\beta(q)}}:=\sum_{i\in\mathbb{Z}^d}\delta_{i,\psi_{\beta(q)}}<\infty$, and such that the class $\mathcal{A}$ satisfies the condition \begin{equation}\label{entrop-metriq2} \int_{0}^{1}\left(H(\mathcal{A},\rho,\varepsilon)\right)^{1/q}\,d\varepsilon<+\infty. \end{equation} \item[$(iii)$] $X_0$ belongs to $\mathbb{L}^{\infty}$, the class $\mathcal{A}$ satisfies the condition $(\ref{entrop-metriq1})$ and $\Delta_{\infty}:=\sum_{i\in\mathbb{Z}^d}\delta_{i,\infty}<\infty$. \end{itemize} Then the sequence of processes $\{n^{-d/2}S_n(A)\,;\,A\in\mathcal{A}\}$ converges in distribution in $\mathcal{C}(\mathcal{A})$ to $\sigma W$ where $W$ is a standard Brownian motion indexed by $\mathcal{A}$ and $\sigma^2=\sum_{k\in\mathbb{Z}^d}\mathbb{E}(X_0X_k)$. \end{Th} \section{Proofs} \label{sec:proof} {\em Proof of Proposition $\ref{inequality}$}. Let $\tau:\mathbb{Z}\to\mathbb{Z}^d$ be a bijection. For any $i\in\mathbb{Z}$ and any $j\in\mathbb{Z}^d$, define \begin{equation}\label{definition_P_i} P_iX_j:=\mathbb{E}(X_j\vert\mathcal{F}_i)-\mathbb{E}(X_j\vert\mathcal{F}_{i-1}) \end{equation} where $\mathcal{F}_i=\sigma\left(\varepsilon_{\tau(l)};l\leq i\right)$. \begin{lemma}\label{majoration_P_X} For any $i$ in $\mathbb{Z}$ and any $j$ in $\mathbb{Z}^d$, we have $\|P_iX_j\|_p\leq\delta_{j-\tau(i),p}$. \end{lemma} {\em Proof of Lemma $\ref{majoration_P_X}$}.
\begin{align*} \left\|P_iX_j\right\|_p&=\left\|\mathbb{E}(X_j\vert\mathcal{F}_i)-\mathbb{E}(X_j\vert\mathcal{F}_{i-1})\right\|_p\\ &=\left\|\mathbb{E}(X_0\vert T^j\mathcal{F}_i)-\mathbb{E}(X_0\vert T^j\mathcal{F}_{i-1})\right\|_p\quad\textrm{where $T^j\mathcal{F}_i=\sigma\left(\varepsilon_{\tau(l)-j};l\leq i\right)$}\\ &=\left\|\mathbb{E}\left(g\left((\varepsilon_{-s})_{s\in\mathbb{Z}^d}\right)\vert T^j\mathcal{F}_i\right)- \mathbb{E}\left(g\left((\varepsilon_{-s})_{s\in\mathbb{Z}^d\backslash\{j-\tau(i)\}};\varepsilon^{'}_{\tau(i)-j}\right)\vert T^j\mathcal{F}_{i}\right)\right\|_p\\ &\leq \left\|g\left((\varepsilon_{-s})_{s\in\mathbb{Z}^d}\right)-g\left((\varepsilon_{-s})_{s\in\mathbb{Z}^d\backslash\{j-\tau(i)\}};\varepsilon^{'}_{\tau(i)-j}\right)\right\|_p\\ &=\left\|g\left((\varepsilon_{j-\tau(i)-s})_{s\in\mathbb{Z}^d}\right)-g\left((\varepsilon_{j-\tau(i)-s})_{s\in\mathbb{Z}^d\backslash\{j-\tau(i)\}};\varepsilon^{'}_{0}\right)\right\|_p\\ &=\left\|X_{j-\tau(i)}-X_{j-\tau(i)}^{\ast}\right\|_p\\ &=\delta_{j-\tau(i),p}. \end{align*} The proof of Lemma $\ref{majoration_P_X}$ is complete.\\ \\ For all $j$ in $\mathbb{Z}^d$, $$ X_j=\sum_{i\in\mathbb{Z}}P_iX_j. $$ Consequently, $$ \left\|\sum_{j\in\Gamma}a_jX_j\right\|_p=\left\|\sum_{j\in \Gamma}a_j\sum_{i\in\mathbb{Z}}P_iX_j\right\|_p =\left\|\sum_{i\in\mathbb{Z}}\sum_{j\in \Gamma}a_jP_iX_j\right\|_p. 
$$ Since $\left(\sum_{j\in \Gamma}a_jP_iX_j\right)_{i\in\mathbb{Z}}$ is a martingale-difference sequence, by Burkholder inequality, we have \begin{equation} \left\|\sum_{j\in\Gamma}a_jX_j\right\|_p \leq\left(2p\sum_{i\in\mathbb{Z}}\left\|\sum_{j\in \Gamma}a_jP_iX_j\right\|_p^2\right)^{\frac{1}{2}} \leq\left(2p\sum_{i\in\mathbb{Z}}\left(\sum_{j\in \Gamma}\vert a_j\vert\left\|P_iX_j\right\|_p\right)^2\right)^{\frac{1}{2}} \end{equation} By the Cauchy-Schwarz inequality, we have $$ \left(\sum_{j\in \Gamma}\vert a_j\vert\left\|P_iX_j\right\|_p\right)^2 \leq\left(\sum_{j\in \Gamma}a_j^2\left\|P_iX_j\right\|_p\right)\times\left(\sum_{j\in\Gamma}\|P_iX_j\|_p\right) $$ and by Lemma $\ref{majoration_P_X}$, $$ \sum_{j\in\mathbb{Z}^d}\|P_iX_j\|_p\leq\sum_{j\in\mathbb{Z}^d}\delta_{j-\tau(i),p}=\Delta_p. $$ So, we obtain $$ \left\|\sum_{j\in\Gamma}a_jX_j\right\|_p\leq\left(2p\Delta_p\sum_{j\in \Gamma}a_j^2\sum_{i\in\mathbb{Z}}\left\|P_iX_j\right\|_p\right)^{\frac{1}{2}}. $$ Applying again Lemma $\ref{majoration_P_X}$, for any $j$ in $\mathbb{Z}^d$, we have $$ \sum_{i\in\mathbb{Z}}\|P_iX_j\|_p\leq\sum_{i\in\mathbb{Z}}\delta_{j-\tau(i),p}=\Delta_p, $$ Finally, we derive $$ \left\|\sum_{j\in\Gamma}a_jX_j\right\|_p\leq\left(2p\sum_{j\in \Gamma}a_j^2\right)^{\frac{1}{2}}\Delta_p. $$ The proof of Proposition $\ref{inequality}$ is complete.\\ \\ {\em Proof of Proposition $\ref{variance_asymptotique}$}. Let $k$ in $\mathbb{Z}^d$ be fixed. Since $X_k=\sum_{i\in\mathbb{Z}}P_iX_k$ where $P_i$ is defined by ($\ref{definition_P_i}$) and $\mathbb{E}((P_iX_0)(P_jX_k))=0$ if $i\neq j$, we have $$ \mathbb{E}(X_0X_k)=\sum_{i\in\mathbb{Z}}\mathbb{E}((P_iX_0)(P_iX_k)). $$ Thus, we obtain $$ \sum_{k\in\mathbb{Z}^d}\vert\mathbb{E}(X_0X_k)\vert\leq\sum_{i\in\mathbb{Z}}\|P_iX_0\|_2\sum_{k\in\mathbb{Z}^d}\|P_iX_k\|_2. 
$$ Applying again Lemma $\ref{majoration_P_X}$, we derive $\sum_{k\in\mathbb{Z}^d}\vert\mathbb{E}(X_0X_k)\vert\leq \Delta_2^2<\infty$.\\ \\ On the other hand, since $(X_k)_{k\in\mathbb{Z}^d}$ is stationary, we have $$ \vert\Gamma_n\vert^{-1}\mathbb{E}(S^2_{\Gamma_n})=\sum_{k\in\mathbb{Z}^d}\vert\Gamma_n\vert^{-1}\vert \Gamma_n\cap(\Gamma_n-k)\vert\mathbb{E}(X_0X_k) $$ where $\Gamma_n-k=\{i-k\,;\,i\in \Gamma_n\}$. Moreover $$ \vert \Gamma_n\vert^{-1}\vert \Gamma_n\cap(\Gamma_n-k)\vert\vert\mathbb{E}(X_0X_k)\vert\leq\vert\mathbb{E}(X_0X_k)\vert \quad\textrm{and}\quad \sum_{k\in\mathbb{Z}^d}\vert\mathbb{E}(X_0X_k)\vert<\infty. $$ Since $\lim_{n\to+\infty}\vert \Gamma_n\vert^{-1}\vert \Gamma_n\cap(\Gamma_n-k)\vert=1$, applying the Lebesgue dominated convergence theorem, we derive $$ \lim_{n\to+\infty}\vert \Gamma_n\vert^{-1}\mathbb{E}(S^2_{\Gamma_n})=\sum_{k\in\mathbb{Z}^d}\mathbb{E}(X_0X_k). $$ The proof of Proposition $\ref{variance_asymptotique}$ is complete.\\ \\ {\em Proof of Theorem $\ref{tlc}$}. We first assume that $\liminf_{n}{\sigma_n^2}/{\vert\Gamma_n\vert}>0$. Let $(m_n)_{n\geq 1}$ be a sequence of positive integers going to infinity. In the sequel, we denote $\overline{X}_j=\mathbb{E}\left(X_j\vert\mathcal{F}_{m_n}(j)\right)$ where $\mathcal{F}_{m_n}(j)=\sigma(\varepsilon_{j-s}\,;\,\vert s\vert\leq m_n)$. By factorization, there exists a measurable function $h$ such that $\overline{X}_j=h(\varepsilon_{j-s}\,;\,\vert s\vert\leq m_n)$. So, we have \begin{equation} \overline{X}^{\ast}_j =h(\varepsilon^{\ast}_{j-s}\,;\,\vert s\vert\leq m_n)=\mathbb{E}\left(X_j^{\ast}\vert\mathcal{F}^{\ast}_{m_n}(j)\right) \end{equation} where $\mathcal{F}^{\ast}_{m_n}(j)=\sigma(\varepsilon^{\ast}_{j-s}\,;\,\vert s\vert\leq m_n)$. We also denote, for any $j$ in $\mathbb{Z}^d$, $$ \delta^{(m_n)}_{j,p}=\left\|(X_j-\overline{X}_j)-(X_j-\overline{X}_j)^{\ast}\right\|_p. $$ The following result is a direct consequence of Proposition $\ref{inequality}$.
\begin{Prop}\label{inequality_bis} Let $\Gamma$ be a finite subset of $\mathbb{Z}^d$ and $(a_i)_{i\in\Gamma}$ be a family of real numbers. For any $n$ in $\mathbb{N}^{\ast}$ and any $p\in[2,+\infty]$, we have $$ \left\|\sum_{j\in \Gamma}a_j(X_j-\overline{X}_j)\right\|_p\leq\left(2p\sum_{i\in\Gamma}a_i^2\right)^{\frac{1}{2}}\Delta_p^{(m_n)} $$ where $\Delta_p^{(m_n)}=\sum_{j\in\mathbb{Z}^d}\delta^{(m_n)}_{j,p}$. \end{Prop} We also need the following lemma. \begin{lemma}\label{limite_delta_j_m} Let $p\in]0,+\infty]$ be fixed. If $\Delta_p<\infty$ then $\Delta_p^{(m_n)}\to 0$ as $n\to\infty$. \end{lemma} {\em Proof of Lemma $\ref{limite_delta_j_m}$}. Let $j$ in $\mathbb{Z}^d$ be fixed. Since $(X_j-\overline{X}_j)^{\ast}=X_j^{\ast}-\overline{X}_j^{\ast}$, we have \begin{align*} \delta_{j,p}^{(m_n)}&=\left\|(X_j-\overline{X}_j)-(X_j-\overline{X}_j)^{\ast}\right\|_p\leq\|X_j-X_j^{\ast}\|_p+\|\overline{X}_j-\overline{X}_j^{\ast}\|_p\\ &=\delta_{j,p}+\|\mathbb{E}(X_j\vert\mathcal{F}_{m_n}(j)\vee\mathcal{F}^{\ast}_{m_n}(j))-\mathbb{E}(X_j^{\ast}\vert\mathcal{F}^{\ast}_{m_n}(j)\vee\mathcal{F}_{m_n}(j))\|_p\\ &\leq 2\delta_{j,p}. \end{align*} Moreover, $\delta_{j,p}^{(m_n)}\leq2\|X_j-\overline{X}_j\|_p$, which converges to zero as $n\to+\infty$, so $\lim_{n\to +\infty}\delta_{j,p}^{(m_n)}=0$. Finally, applying the Lebesgue convergence theorem, we obtain $\lim_{n\to+\infty}\Delta_p^{(m_n)}=0$. The proof of Lemma $\ref{limite_delta_j_m}$ is complete.\\ \\ Let $(\Gamma_n)_{n\geq 1}$ be a sequence of finite subsets of $\mathbb{Z}^d$ such that $\lim_{n\to+\infty}\vert\Gamma_n\vert=\infty$ and $\liminf_n\frac{\sigma_n^2}{\vert\Gamma_n\vert}>0$ and recall that $\Delta_2$ is assumed to be finite. Combining Proposition $\ref{inequality_bis}$ and Lemma $\ref{limite_delta_j_m}$, we have \begin{equation}\label{ecart_X_et_X_barre} \limsup_{n\to+\infty}\frac{\left\|S_n-\overline{S}_n\right\|_2}{\sigma_n}=0. \end{equation} We are going to apply the following central limit theorem due to Heinrich (\cite{Heinrich88}, Theorem 2).
\begin{Th}[Heinrich (1988)]\label{Theoreme_Heinrich} Let $(\Gamma_n)_{n\geq 1}$ be a sequence of finite subsets of $\mathbb{Z}^d$ with $\vert\Gamma_n\vert\to\infty$ as $n\to\infty$ and let $(m_n)_{n\geq 1}$ be a sequence of positive integers. For each $n\geq 1$, let $\{U_n(j),j\in\mathbb{Z}^d\}$ be an $m_n$-dependent random field with $\mathbb{E} U_n(j)=0$ for all $j$ in $\mathbb{Z}^d$. Assume that $\mathbb{E}\left(\sum_{j\in\Gamma_n}U_n(j)\right)^2\to\sigma^2$ as $n\to\infty$ with $\sigma^2<\infty$. Then $\sum_{j\in\Gamma_n}U_n(j)$ converges in distribution to a Gaussian random variable with mean zero and variance $\sigma^2$ if there exists a finite constant $c>0$ such that for any $n\geq 1$, $$ \sum_{j\in\Gamma_n}\mathbb{E} U_n^2(j)\leq c $$ and for any $\varepsilon>0$ it holds that $$ \lim_{n\to+\infty}L_n(\varepsilon):=m_n^{2d}\sum_{j\in\Gamma_n}\mathbb{E}\left(U_n^2(j)\ind{\vert U_n(j)\vert\geq\varepsilon m_n^{-2d}}\right)=0. $$ \end{Th} Since $\liminf_{n}\frac{\sigma_n^2}{\vert\Gamma_n\vert}>0$, there exists $c_0>0$ and $n_0\in\mathbb{N}$ such that $\frac{\vert\Gamma_n\vert}{\sigma_n^2}\leq c_0$ for any $n\geq n_0$. Consider $S_n=\sum_{i\in\Gamma_n}X_i$, $\overline{S}_n=\sum_{i\in\Gamma_n}\overline{X}_i$ and $U_n(j):=\frac{\overline{X}_j}{\sigma_n}$. We have $$ \mathbb{E}\left(\sum_{j\in\Gamma_n}U_n(j)\right)^2=\frac{\mathbb{E}(\overline{S}^2_n)-\sigma_n^2}{\sigma_n^2}+1. 
$$ So, for any $n\geq n_0$ we derive \begin{align*} \frac{\left\vert\sigma_n^2-\mathbb{E}(\overline{S}^2_n)\right\vert}{\sigma_n^2} &=\frac{1}{\sigma_n^2}\left\vert\mathbb{E}\left(\left(\sum_{j\in\Gamma_n}(\overline{X}_j-X_j)\right)\left(\sum_{j\in\Gamma_n}(\overline{X}_j+X_j)\right)\right)\right\vert\\ &\leq\frac{1}{\sigma_n^2}\left\|\sum_{j\in\Gamma_n}(\overline{X}_j-X_j)\right\|_2 \left\|\sum_{j\in\Gamma_n}(\overline{X}_j+X_j)\right\|_2\\ &\leq\frac{2\vert\Gamma_n\vert\Delta_2^{(m_n)}}{\sigma_n^2}\left(4\Delta_2+2\Delta_2^{(m_n)} \right)\\ &\leq4c_0\Delta_2^{(m_n)}\left(2\Delta_2+\Delta_2^{(m_n)} \right)\converge{n}{+\infty}{ }0. \end{align*} Consequently, $$ \lim_{n\to+\infty}\mathbb{E}\left(\sum_{j\in\Gamma_n}U_n(j)\right)^2=1. $$ Moreover, for any $n\geq n_0$, $$ \sum_{j\in\Gamma_n}\mathbb{E} U_n^2(j)=\frac{\vert\Gamma_n\vert\mathbb{E}(\overline{X}_0^2)}{\sigma_n^2}\leq c_0\mathbb{E}(X_0^2)<\infty. $$ Let $\varepsilon>0$ be fixed. We have \begin{align*} L_n(\varepsilon)&\leq c_0m_n^{2d}\mathbb{E}\left(\overline{X}_0^2\ind{\left\{\vert\overline{X}_0\vert\geq \frac{\varepsilon\sigma_n}{m_n^{2d}}\right\}}\right) \leq c_0m_n^{2d}\mathbb{E}\left(X_0^2\ind{\left\{\vert\overline{X}_0\vert\geq \frac{\varepsilon\sigma_n}{m_n^{2d}}\right\}}\right)\\ &\leq c_0m_n^{2d}\sigma_n\P\left(\vert\overline{X}_0\vert\geq\frac{\varepsilon\sigma_n}{m_n^{2d}}\right) +c_0m_n^{2d}\mathbb{E}\left(X_0^2\ind{\left\{\vert X_0\vert\geq\sqrt{\sigma_n}\right\}}\right)\\ &\leq\frac{c_0\mathbb{E}(X_0^2)m_n^{6d}}{\varepsilon^2\sigma_n}+c_0m_n^{2d}\psi(\sqrt{\sigma_n}) \end{align*} where $\psi(x)=\mathbb{E}\left(X_0^2\ind{\left\{\vert X_0\vert\geq x\right\}}\right)$. 
\begin{lemma}\label{psi-psin} If the sequence $(m_n)_{n\geq 1}$ is defined for any integer $n\geq 1$ by $ m_n=\min\left\{\left[\psi\left(\sqrt{\sigma_n}\right)^{\frac{-1}{4d}}\right],\left[\sigma_n^{\frac{1}{12d}}\right]\right\} $ if $\psi(\sqrt{\sigma_n})\neq0$ and by $m_n=\left[\sigma_n^{\frac{1}{12d}}\right]$ if $\psi(\sqrt{\sigma_n})=0$ where $[\,.\,]$ is the integer part function then $$ m_n\to\infty,\quad\frac{m_n^{6d}}{\sigma_n}\to0\quad\textrm{and}\quad m_n^{2d}\psi\left(\sqrt{\sigma_n}\right)\to0. $$ \end{lemma} {\em Proof of Lemma \ref{psi-psin}}. Since $\sigma_n\to\infty$ and $\psi(\sqrt{\sigma_n})\to0$, we derive $m_n\to\infty$. Moreover, $$ \frac{m_n^{6d}}{\sigma_n}\leq\frac{1}{\sqrt{\sigma_n}}\to0\quad\textrm{and}\quad m_n^{2d}\psi\left(\sqrt{\sigma_n}\right)\leq\sqrt{\psi\left(\sqrt{\sigma_n}\right)}\to0. $$ The proof of Lemma \ref{psi-psin} is complete.\\ \\ Consequently, we obtain $\lim_{n\to\infty}L_n(\varepsilon)=0$. So, applying Theorem \ref{Theoreme_Heinrich}, we derive that \begin{equation}\label{convergence_en_loi_Sn_barre} \frac{\overline{S}_n}{\sigma_n}\converge{n}{+\infty}{\textrm{Law}}\mathcal{N}(0,1). \end{equation} Combining (\ref{ecart_X_et_X_barre}) and (\ref{convergence_en_loi_Sn_barre}), we deduce $$ \frac{S_n}{\sigma_n}\converge{n}{+\infty}{\textrm{Law}}\mathcal{N}(0,1). $$ Hence (\ref{eq:Levy}) holds if $\liminf_n \sigma_n^2 / |\Gamma_n| >0$. In the general case, we argue as follows: If (\ref{eq:Levy}) does not hold then there exists a subsequence $n'\to \infty$ such that \begin{equation}\label{sous_suite_n_prime} L\left[\frac{S_{n^{'}}}{\sqrt{|\Gamma_{n^{'}}|}},\,N\left(0, \frac{\sigma^2_{n^{'}}}{|\Gamma_{n^{'}}|}\right)\right]\quad\textrm{converges to some $l$ in $]0,+\infty]$}. \end{equation} Assume that $\frac{\sigma_{n^{'}}^2}{\vert\Gamma_{n^{'}}\vert}$ does not converge to zero. Then there exists a subsequence $n^{''}$ such that $\liminf_n\frac{\sigma_{n^{''}}^2}{\vert\Gamma_{n^{''}}\vert}>0$. 
By the first part of the proof of Theorem $\ref{tlc}$, we obtain \begin{equation}\label{sous_suite_n_seconde} L\left[\frac{S_{n^{''}}}{\sqrt{|\Gamma_{n^{''}}|}},\,N\left(0, \frac{\sigma^2_{n^{''}}}{|\Gamma_{n^{''}}|}\right)\right]\quad\textrm{converges to $0$}. \end{equation} Since (\ref{sous_suite_n_seconde}) contradicts (\ref{sous_suite_n_prime}), we deduce that $\frac{\sigma_{n^{'}}^2}{\vert\Gamma_{n^{'}}\vert}$ converges to zero. Consequently $S_{n'}/\sqrt{|\Gamma_{n^{'}}|}$ converges to zero in probability and $L\left[\frac{S_{n^{'}}}{\sqrt{|\Gamma_{n^{'}}|}},\,N\left(0, \frac{\sigma^2_{n^{'}}}{|\Gamma_{n^{'}}|}\right)\right]$ converges to $0$, which again contradicts $(\ref{sous_suite_n_prime})$. Hence (\ref{eq:Levy}) holds. The proof of Theorem \ref{tlc} is then complete.\\ {\em Proof of Theorem $\ref{invariance_principle}$}. As usual, we have to prove the convergence of the finite-dimensional laws and the tightness of the partial sum process $\{n^{-d/2}S_n(A)\,;\,A\in\mathcal{A}\}$ in $\mathcal{C}(\mathcal{A})$. For any Borel subset $A$ of $[0,1]^d$, we denote by $\Gamma_{n}(A)$ the finite subset of $\mathbb{Z}^{d}$ defined by $\Gamma_{n}(A)=nA\cap\mathbb{Z}^{d}$. We say that $A$ is a regular Borel set if $\lambda(\partial A)=0$. \begin{Prop}\label{Proposition_type_Dedecker2001} Let $A$ be a regular Borel subset of $[0,1]^d$ with $\lambda(A)>0$. We have $$ \lim_{n\to+\infty}\frac{\vert\Gamma_n(A)\vert}{n^d}=\lambda(A) \quad\textrm{and}\quad \lim_{n\to+\infty}\frac{\vert\partial\Gamma_n(A)\vert}{\vert\Gamma_n(A)\vert}=0. $$ Moreover, if $\Delta_2$ is finite then \begin{equation}\label{limite_approximation} \lim_{n\to+\infty}n^{-d/2}\|S_n(A)-S_{\Gamma_n(A)}\|_2=0 \end{equation} where $S_{\Gamma_n(A)}=\sum_{i\in\Gamma_n(A)}X_i$. \end{Prop} {\em Proof of Proposition $\ref{Proposition_type_Dedecker2001}$}. The first part of Proposition $\ref{Proposition_type_Dedecker2001}$ is the first part of Lemma 2 in Dedecker \cite{Dedecker2001}.
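For intuition, the first two limits in Proposition \ref{Proposition_type_Dedecker2001} can be checked numerically on a concrete regular set. The sketch below (ours, not from the paper) takes $A$ to be the quarter disc $\{x^2+y^2\le 1\}$ in $[0,1]^2$, so $d=2$ and $\lambda(A)=\pi/4$, and also counts the unit cells straddling the boundary, whose number is $o(n^d)$:

```python
import math

# Illustrative regular set (our choice): A is the quarter disc
# {x^2 + y^2 <= 1} inside [0,1]^2, so d = 2 and lambda(A) = pi/4.
n = 1000

# |Gamma_n(A)| = |nA ∩ Z^2|: lattice points (i, j) with 0 <= i, j <= n
# and i^2 + j^2 <= n^2.
gamma = sum(1 for i in range(n + 1) for j in range(n + 1)
            if i * i + j * j <= n * n)
ratio = gamma / n ** 2          # should be close to lambda(A) = pi/4

# Unit cells R_ij straddling the boundary circle (inner corner strictly
# inside, outer corner strictly outside): their number grows like n,
# hence is o(n^2).
w = sum(1 for i in range(1, n + 1) for j in range(1, n + 1)
        if (i - 1) ** 2 + (j - 1) ** 2 < n * n < i * i + j * j)
```

Already for $n=1000$ the density of lattice points in $nA$ is within about $10^{-3}$ of $\pi/4$, while the boundary cells occupy a vanishing fraction of the grid.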
So, we are going to prove only the second part. Let $n$ be a positive integer. Arguing as in Dedecker \cite{Dedecker2001}, we have \begin{equation}\label{decomposition} S_n(A)-S_{\Gamma_n(A)}=\sum_{i\in W_n}a_iX_i \end{equation} where $a_i=\lambda(nA\cap R_i)-\ind{i\in\Gamma_n(A)}$ and $W_n$ is the set of all $i$ in $\{1,..,n\}^d$ such that $R_i\cap(nA)\neq\emptyset$ and $R_i\cap(nA)^c\neq\emptyset$. Noting that $\vert a_i\vert\leq 1$ and applying Proposition $\ref{inequality}$, we obtain \begin{equation}\label{decomposition_inequality_1} \|S_n(A)-S_{\Gamma_n(A)}\|_2 \leq2\Delta_2\,\sqrt{\sum_{i\in W_n}a_i^2}\leq2\Delta_2\sqrt{\vert W_n\vert}. \end{equation} Following the proof of Lemma 2 in \cite{Dedecker2001}, we have $\vert W_n\vert=o(n^d)$ and we derive $(\ref{limite_approximation})$. The proof of Proposition $\ref{Proposition_type_Dedecker2001}$ is complete.\\ The convergence of the finite-dimensional laws follows from Proposition $\ref{Proposition_type_Dedecker2001}$ and Theorem $\ref{tlc}$.\\ \\ So, it suffices to establish the tightness property. \begin{Prop}\label{tightness} Assume that Assumption $(i)$, $(ii)$ or $(iii)$ in Theorem $\ref{invariance_principle}$ holds. Then for any $x>0$, we have \begin{equation}\label{tightness_equality} \lim_{\delta\to0}\limsup_{n\to+\infty} \P\left(\sup_{\substack{A,B\in\mathcal{A} \\ \rho(A,B)<\delta}}\big\vert n^{-d/2}S_{n}(A)-n^{-d/2}S_{n}(B)\big\vert>x\right)=0. \end{equation} \end{Prop} {\em Proof of Proposition $\ref{tightness}$}. Let $A$ and $B$ be fixed in $\mathcal{A}$ and recall that $\rho(A,B)=\sqrt{\lambda(A\Delta B)}$. We have $$ S_n(A)-S_n(B)=\sum_{i\in\Lambda_n}a_iX_i $$ where $\Lambda_n=\{1,...,n\}^d$ and $a_i=\lambda(nA\cap R_i)-\lambda(nB\cap R_i)$. 
Applying Proposition \ref{inequality}, we have \begin{equation}\label{Lipschitz_inequality_p} n^{-d/2}\left\|S_n(A)-S_n(B)\right\|_p\leq\Delta_p\left(\frac{2p}{n^{d}}\sum_{i\in\Lambda_n}\lambda(n(A\Delta B)\cap R_i)\right)^{\frac12}\leq\sqrt{2p}\Delta_p\rho(A,B). \end{equation} Assume that Assumption $(i)$ in Theorem $\ref{invariance_principle}$ holds. Then there exists a positive constant $K$ such that for any $0<\varepsilon<1$, we have (see Van der Vaart and Wellner \cite{van-der-Vaart-Wellner}, Theorem 2.6.4) $$ N(\mathcal{A},\rho,\varepsilon)\leq KV(4e)^V\left(\frac{1}{\varepsilon}\right)^{2(V-1)} $$ where $N(\mathcal{A},\rho,\varepsilon)$ is the smallest number of open balls of radius $\epsilon$ with respect to $\rho$ which form a covering of $\mathcal{A}$. So, since $p>2(V-1)$, we have \begin{equation}\label{metric_entropy_Lp} \int_0^1\left(N(\mathcal{A},\rho,\varepsilon)\right)^{\frac{1}{p}}d\varepsilon <+\infty. \end{equation} Combining ($\ref{Lipschitz_inequality_p}$) and ($\ref{metric_entropy_Lp}$) and applying Theorem 11.6 in Ledoux and Talagrand \cite{Led-Tal}, we infer that the sequence $\{n^{-d/2}S_{n}(A)\,;\,A\in\mathcal{A}\}$ satisfies the following property: for each positive $\epsilon$ there exists a positive real $\delta$, depending on $\epsilon$ and on the value of the entropy integral ($\ref{metric_entropy_Lp}$) but not on $n$, such that \begin{equation}\label{esperance_plus_petite_epsilon} \mathbb{E}\left(\sup_{\stackrel{A,B\in\mathcal{A}}{\rho(A,B)<\delta}}\vert n^{-d/2}S_{n}(A)-n^{-d/2}S_{n}(B)\vert\right)<\epsilon. \end{equation} The condition ($\ref{tightness_equality}$) is then satisfied under Assumption $(i)$ in Theorem $\ref{invariance_principle}$ and the sequence of processes $\{n^{-d/2}S_{n}(A)\,;\,A\in\mathcal{A}\}$ is tight in $\mathcal{C}(\mathcal{A})$.\\ \\ Now, we assume that Assumption $(ii)$ in Theorem $\ref{invariance_principle}$ holds. 
The following technical lemma can be obtained using the expansion of the exponential function. \begin{lemma}\label{equivalence_normes} Let $\beta$ be a positive real number and $Z$ be a real random variable. There exist positive universal constants $A_{\beta}$ and $B_{\beta}$ depending only on $\beta$ such that $$ A_{\beta}\,\sup_{p>2}\frac{\| Z\|_{p}}{p^{1/\beta}}\leq\| Z\|_{\psi_{\beta}}\leq B_{\beta}\,\sup_{p>2}\frac{\|Z\|_{p}}{p^{1/\beta}}. $$ \end{lemma} Combining Lemma $\ref{equivalence_normes}$ with $(\ref{Lipschitz_inequality_p})$, for any $0<q<2$, there exists $C_q>0$ such that \begin{equation}\label{Lipschitz_inequality_psiq} n^{-d/2}\left\|S_n(A)-S_n(B)\right\|_{\psi_q}\leq C_q\Delta_{\psi_{\beta(q)}}\rho(A,B) \end{equation} where $\beta(q)=2q/(2-q)$. Applying Theorem 11.6 in Ledoux and Talagrand \cite{Led-Tal}, for each positive $\epsilon$ there exists a positive real $\delta$, depending on $\epsilon$ and on the value of the entropy integral ($\ref{entrop-metriq2}$) but not on $n$, such that $(\ref{esperance_plus_petite_epsilon})$ holds. The condition ($\ref{tightness_equality}$) is then satisfied and the process $\{n^{-d/2}S_{n}(A)\,;\,A\in\mathcal{A}\}$ is tight in $\mathcal{C}(\mathcal{A})$.\\ \\ Finally, if Assumption $(iii)$ in Theorem \ref{invariance_principle} holds then combining Lemma $\ref{equivalence_normes}$ with $(\ref{Lipschitz_inequality_p})$, there exists $C>0$ such that \begin{equation}\label{Lipschitz_inequality_psi2} \left\|n^{-d/2}S_n(A)-n^{-d/2}S_n(B)\right\|_{\psi_2}\leq C\Delta_{\infty}\rho(A,B). \end{equation} Applying again Theorem 11.6 in Ledoux and Talagrand \cite{Led-Tal}, we obtain the tightness of the process $\{n^{-d/2}S_{n}(A)\,;\,A\in\mathcal{A}\}$ in $\mathcal{C}(\mathcal{A})$. The proofs of Proposition $\ref{tightness}$ and Theorem $\ref{invariance_principle}$ are complete.$\hfill\Box$\\ \\ \textbf{Acknowledgments}. 
The authors thank an anonymous referee for his/her constructive comments, and Olivier Durieu and Hermine Bierm\'e for pointing out a mistake in the first version of the proof of Theorem $\ref{tlc}$. \bibliographystyle{plain}
https://arxiv.org/abs/1109.0838
A central limit theorem for stationary random fields
This paper establishes a central limit theorem and an invariance principle for a wide class of stationary random fields under natural and easily verifiable conditions. More precisely, we deal with random fields of the form $X_k = g(\varepsilon_{k-s}, s \in \mathbb{Z}^d)$, $k\in\mathbb{Z}^d$, where $(\varepsilon_i)_{i\in\mathbb{Z}^d}$ are i.i.d.\ random variables and $g$ is a measurable function. Such spatial processes provide a general framework for stationary ergodic random fields. Under a short-range dependence condition, we show that the central limit theorem holds without any assumption on the underlying domain on which the process is observed. A limit theorem for the sample auto-covariance function is also established.
https://arxiv.org/abs/1508.03543
Recovering the equivalence of ensembles II: An Ising chain with competing short and long-range interactions
In a pioneer work, John Nagle has shown that an Ising chain with competing short and long-range interactions displays second and first-order phase transitions separated by a tricritical point. More recently, it has been claimed that Nagle's model provides an example of the inequivalence between canonical and microcanonical calculations. We then revisit Nagle's original solution, as well as the usual formulation of the problem in a canonical ensemble, which lead to the same results. Also, in contrast to recent claims, we show that an alternative formulation in the microcanonical ensemble, with the adequate choice of the fixed thermodynamic extensive variables, leads to equivalent thermodynamic results.
\section{Introduction}
In the early seventies, John Nagle \cite{nagle1970}\cite{bonner1971} analyzed an Ising chain with antiferromagnetic interactions between nearest-neighbor sites, and the addition of equivalent-neighbor (mean-field) ferromagnetic interactions between all pairs of sites. Depending on the strength of the competition, this system was shown to display second and first-order phase transitions separated by a \textquotedblleft special critical point\textquotedblright, which was later named a tricritical point \cite{griffiths1972}. A few years ago, this problem was revisited by some authors as one of the \textquotedblleft paradigmatic examples\textquotedblright\ of the inequivalence of ensembles, in which the very localization of the tricritical point was supposed to depend on the ensemble (canonical or microcanonical) that was used to carry out the statistical calculations \cite{mukamel2005}\cite{campa2009}. We have recently disproved similar claims of inequivalence of ensembles for a long-range version of a spin-$1$ Ising model \cite{henriques2015}. In the present article we give arguments to show the equivalence of solutions in Nagle's model. The Hamiltonian of Nagle's model in zero external field may be written as
\begin{equation}
\mathcal{H}=-J_{SR}\sum_{i=1}^{N}\sigma_{i}\sigma_{i+1}-\frac{1}{2N}J_{LR}\left( \sum_{i=1}^{N}\sigma_{i}\right) ^{2},\label{HN}
\end{equation}
where $\sigma_{i}=\pm1$ for $i=1$, $2$, $...$, $N$, the long-range interactions are ferromagnetic, $J_{LR}>0$, and the presence of a tricritical point requires antiferromagnetic short-range interactions ($J_{SR}<0$). In his original work, Nagle obtained exact thermodynamic solutions by two elegant and complementary techniques, which already refer to different thermodynamic representations. Later, this model was solved by easier manipulations, in the usual canonical ensemble \cite{kardar1983}\cite{kislinsky1988}\cite{vieira1995}.
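For concreteness, the energy (\ref{HN}) is easy to evaluate on explicit spin configurations. The sketch below is ours, not from the paper; periodic boundary conditions are assumed for the nearest-neighbor sum, and `nagle_energy` is a name we introduce:

```python
def nagle_energy(spins, J_SR, J_LR):
    # Energy of Hamiltonian (HN); periodic boundary conditions are assumed
    # here for the nearest-neighbor sum (a choice of this sketch).
    N = len(spins)
    e_sr = -J_SR * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
    e_lr = -J_LR * sum(spins) ** 2 / (2 * N)
    return e_sr + e_lr

# Antiferromagnetic J_SR < 0 favours the alternating (Neel) configuration,
# while the ferromagnetic mean-field term favours the uniform one.
neel = [(-1) ** i for i in range(8)]
ferro = [1] * 8
```

For weak $J_{LR}$ the alternating chain has the lower energy; for strong $J_{LR}$ the uniform chain wins. This competition between the two couplings is what drives the tricritical behavior discussed below.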
Due to its instructive features, and to a number of misconceptions in the literature, we begin by reviewing Nagle's solution. We then resort to a Gaussian identity to establish the (same) solutions in the usual canonical ensemble. Finally, we use the corresponding Ising chain, with the exclusion of the long-range terms, to write an entropy function in the microcanonical ensemble. With the appropriate choice of the extensive variables, and properly accounting for the long-range interactions, we show that there are no discrepancies between canonical and microcanonical results.
\section{Original solutions of Nagle}
In the more detailed solution of the problem, Nagle considers the Hamiltonian of an Ising chain with the exclusion of the long-range interactions ($J_{LR}=0$) and in the presence of a field $H$,
\begin{equation}
\mathcal{H}_{I}=-J_{SR}\sum_{i=1}^{N}\sigma_{i}\sigma_{i+1}-H\sum_{i=1}^{N}\sigma_{i}.
\end{equation}
Given the temperature $k_{B}T=1/\beta$ and the field $H$, we write the usual form of the canonical partition function,
\begin{equation}
Z_{I}=Z_{I}\left( T,H,N\right) =\sum_{\left\{ \sigma_{i}\right\} }\exp\left[ \beta J_{SR}\sum_{i=1}^{N}\sigma_{i}\sigma_{i+1}+\beta H\sum_{i=1}^{N}\sigma_{i}\right] ,
\end{equation}
which can be analytically obtained by the transfer matrix technique. In the thermodynamic limit, the associated free energy per site, as a function of $T$ and $H$, is given by
\begin{equation}
g_{I}=g_{I}\left( T,H\right) \sim-\frac{1}{\beta N}\ln Z_{I},
\end{equation}
from which we obtain the magnetization per site,
\begin{equation}
m=\frac{1}{N}\left\langle \sum_{i=1}^{N}\sigma_{i}\right\rangle =-\left( \frac{\partial g_{I}}{\partial H}\right) _{T}.\label{mag}
\end{equation}
We then use a Legendre transformation to write another free energy, $f_{I}=f_{I}\left( T,m\right) $, which is expressed as a function of temperature $T$ and magnetization $m$,
\begin{equation}
f_{I}\left( T,m\right) =g_{I}\left( T,H\right) +mH.
\end{equation}
In this one-dimensional system, there are no problems of convexity, and the field $H$ can be eliminated by using equation (\ref{mag}). Nagle then remarks that the energy per site of an additional term, of mean-field nature, may be written in terms of the magnetization, so that we have
\begin{equation}
\frac{1}{N}\left\langle -\frac{1}{2N}J_{LR}\left( \sum_{i=1}^{N}\sigma_{i}\right) ^{2}\right\rangle =-\frac{1}{2}J_{LR}m^{2}.
\end{equation}
Taking into account that this term depends only on $m$, the free energy $f=f\left( T,m\right) $, associated with Nagle's model, defined by the Hamiltonian of equation (\ref{HN}), is given by
\begin{equation}
f=f\left( T,m\right) =f_{I}\left( T,m\right) -\frac{1}{2}J_{LR}m^{2}.
\end{equation}
This is the central equation of Nagle's treatment. In analogy with a Landau expansion, the free energy $f\left( T,m\right) $ may be written as a power series in terms of the magnetization,
\begin{equation}
f\left( T,m\right) =a_{0}\left( T\right) +a_{2}\left( T\right) m^{2}+a_{4}\left( T\right) m^{4}+a_{6}\left( T\right) m^{6}+...,\label{ftm}
\end{equation}
from which we obtain the critical line ($a_{2}=0$; $a_{4}>0$) and the location of the tricritical point ($a_{2}=a_{4}=0$; $a_{6}>0$). It should be noted that this Landau expansion is written in terms of a density (the order parameter $m$) and that the coefficients of this expansion depend on the thermodynamic fields (in this case, the parameters $\beta J_{SR}$ and $\beta J_{LR}$).
In the Appendix of his article, Nagle mentions an alternative calculation, in which the canonical partition function, in zero field, is written as
\begin{equation}
Z\left( T,H=0,N\right) =\sum\limits_{M=-N}^{N}\exp\left[ \beta J_{LR}\frac{M^{2}}{2N}\right] \sum\limits_{S=0}^{R}f_{N}\left( R,S\right) \exp\left[ -\beta J_{SR}\left( N-4S\right) \right] ,
\end{equation}
where $R=\min\left\{ \left( N\pm M\right) /2\right\} $, $S$ is the number of $\left( +,-\right) $ pairs, and
\begin{equation}
f_{N}\left( R,S\right) =\frac{N}{S}\left(
\begin{array}
[c]{c}%
R-1\\
S-1
\end{array}
\right) \left(
\begin{array}
[c]{c}%
N-R-1\\
S-1
\end{array}
\right) .
\end{equation}
Although referring to a future publication, Nagle and Yeo never published the combinatorial derivation of $f_{N}\left( R,S\right) $, which corresponds to the number of microstates of the system with fixed values of $N$, $M$ and $S$ (in other words, with fixed magnetization $m$ and internal energy $u$ associated with the short-range terms). This expression of $f_{N}\left( R,S\right) $ is directly related to the entropy in the microcanonical ensemble in terms of the appropriate densities. In the thermodynamic limit, the partition function $Z\left( T,H=0,N\right) $ is given by the maximum term of the sum over $M$ and $S$. According to Nagle, all the results in zero field have been checked in this alternative formulation, in particular the location of the tricritical point. The choice of independent variables and the alternative solutions of Nagle are already a firm indication of the equivalence of ensembles. The asymptotic form of the expression of $f_{N}\left( R,S\right) $, which is directly related to the entropy in the microcanonical ensemble, has been independently written by several authors \cite{rehn2012}, even in recent work with claims of inequivalence of ensembles \cite{mukamel2005}.
The most remarkable deduction has been published by Ernst Ising \cite{ising1925} in his famous article of 1925.
\section{Solution in the canonical ensemble}
The usual canonical partition function associated with Hamiltonian (\ref{HN}) is given by
\begin{equation}
Z=Z\left( \beta J_{SR},\beta J_{LR}\right) =\sum\limits_{\left\{ \sigma_{i}\right\} }\exp\left[ \beta J_{SR}\sum_{i=1}^{N}\sigma_{i}\sigma_{i+1}+\frac{\beta J_{LR}}{2N}\left( \sum_{i=1}^{N}\sigma_{i}\right) ^{2}\right] .
\end{equation}
Using the Gaussian identity
\begin{equation}
\int_{-\infty}^{+\infty}dx\,\exp\left[ -x^{2}+2ax\right] =\sqrt{\pi}\,\exp\left( a^{2}\right) ,
\end{equation}
we have
\begin{equation}
Z=\left( \frac{\beta J_{LR}N}{2\pi}\right) ^{1/2}\int_{-\infty}^{+\infty}dy\,\exp\left[ -\beta Nf\left( y\right) \right] ,\label{Zcan}
\end{equation}
where
\begin{equation}
f\left( y\right) =\frac{1}{2}J_{LR}\,y^{2}-\frac{1}{\beta N}\ln Z_{I},
\end{equation}
and $Z_{I}$ is the canonical partition function of an Ising chain,
\begin{equation}
Z_{I}=Z_{I}\left( \beta J_{SR},\beta J_{LR}y\right) =\sum\limits_{\left\{ \sigma_{i}\right\} }\exp\left[ \beta J_{SR}\sum_{i=1}^{N}\sigma_{i}\sigma_{i+1}+\beta J_{LR}y\sum_{i=1}^{N}\sigma_{i}\right] .
\end{equation}
Also, we remark that these results can be obtained from an application of a well-known Bogoliubov identity \cite{kislinsky1988}. In the thermodynamic limit we write
\begin{equation}
f\left( y\right) \sim\frac{1}{2}J_{LR}\,y^{2}-\frac{1}{\beta}\ln\lambda\left( y\right) ,\label{fy}
\end{equation}
where $\lambda\left( y\right) $ is the largest eigenvalue of a transfer matrix,
\begin{equation}
\lambda=\exp\left( \beta J_{SR}\right) \cosh\left( \beta J_{LR}y\right) +\left[ \exp\left( 2\beta J_{SR}\right) \cosh^{2}\left( \beta J_{LR}y\right) -2\sinh\left( 2\beta J_{SR}\right) \right] ^{1/2}.
\end{equation}
We can analyze the critical behavior from an expansion of the asymptotic form of $f\left( y\right) $ as a power series in $y$,
\begin{equation}
f\left( y\right) =A_{0}+A_{2}\,y^{2}+A_{4}\,y^{4}+A_{6}\,y^{6}+...,
\end{equation}
which is equivalent to Nagle's expansion of the free energy $f\left( T,m\right) $, given by equation (\ref{ftm}). The critical line comes from $A_{2}=0$, with $A_{4}>0$, and the tricritical point is located at $A_{2}=A_{4}=0$, with $A_{6}>0$. If we use Laplace's method to calculate the asymptotic form of the integral (\ref{Zcan}), the saddle-point equation is given by
\begin{equation}
\widetilde{y}=\frac{\sinh\left( \beta J_{LR}\widetilde{y}\right) \left[ 1+D^{-1/2}\cosh\left( \beta J_{LR}\widetilde{y}\right) \right] }{\cosh\left( \beta J_{LR}\widetilde{y}\right) +D^{1/2}},\label{eqstate}
\end{equation}
where
\begin{equation}
D=\sinh^{2}\left( \beta J_{LR}\widetilde{y}\right) +\exp\left( -4\beta J_{SR}\right) ,
\end{equation}
so we have the corresponding free energy per spin,
\begin{equation}
g=g\left( T\right) =\frac{1}{2}J_{LR}\widetilde{y}^{2}-\frac{1}{\beta}\ln\lambda\left( \widetilde{y}\right) .
\end{equation}
In the next Section we derive again the equation of state (\ref{eqstate}) in the context of the microcanonical formulation. As in a typical mean-field calculation, there is always a paramagnetic solution, $\widetilde{y}=0$, but this solution becomes physically unacceptable in the ordered region of the phase diagram. If there are several solutions, we have to choose the absolute minimum, which corresponds to using a Maxwell construction (and to recovering the convexity of the free energy).
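These closed-form expressions can be spot-checked numerically. The sketch below is ours, not from the paper; it assumes the standard $2\times2$ Ising transfer matrix with entries $T_{\sigma\sigma'}=\exp\left[K\sigma\sigma'+h(\sigma+\sigma')/2\right]$, $K=\beta J_{SR}$, $h=\beta J_{LR}y$ (a conventional choice), and checks that the displayed $\lambda$ is its dominant eigenvalue, that the right-hand side of (\ref{eqstate}) equals $\mathrm{d}\ln\lambda/\mathrm{d}h$, and that the quadratic coefficient of $f$ vanishes on the line $\beta J_{LR}=\exp(-2\beta J_{SR})$:

```python
import math

def lam(K, h):
    # Largest transfer-matrix eigenvalue in closed form, as displayed above;
    # K = beta*J_SR and h = beta*J_LR*y.
    return math.exp(K) * math.cosh(h) + math.sqrt(
        math.exp(2 * K) * math.cosh(h) ** 2 - 2 * math.sinh(2 * K))

def lam_power(K, h, iters=200):
    # Independent check: dominant eigenvalue of the 2x2 transfer matrix
    # T = [[e^{K+h}, e^{-K}], [e^{-K}, e^{K-h}]] by power iteration
    # (all entries positive, so the Perron eigenvalue dominates).
    T = [[math.exp(K + h), math.exp(-K)],
         [math.exp(-K), math.exp(K - h)]]
    v, est = [1.0, 1.0], 0.0
    for _ in range(iters):
        w = [T[0][0] * v[0] + T[0][1] * v[1],
             T[1][0] * v[0] + T[1][1] * v[1]]
        est = max(abs(w[0]), abs(w[1]))
        v = [w[0] / est, w[1] / est]
    return est

def m_rhs(K, h):
    # Right-hand side of the saddle-point equation (eqstate).
    D = math.sinh(h) ** 2 + math.exp(-4 * K)
    return (math.sinh(h) * (1 + math.cosh(h) / math.sqrt(D))
            / (math.cosh(h) + math.sqrt(D)))

def f(y, beta, J_SR, J_LR):
    # f(y) = (1/2) J_LR y^2 - (1/beta) ln lambda(y), as in (fy).
    return 0.5 * J_LR * y * y - math.log(lam(beta * J_SR, beta * J_LR * y)) / beta

K, h = -0.5, 0.3                 # antiferromagnetic K, sample effective field
# (eqstate) RHS should equal d(ln lambda)/dh (central finite difference):
delta = 1e-5
m_num = (math.log(lam(K, h + delta)) - math.log(lam(K, h - delta))) / (2 * delta)

# On the critical line beta*J_LR = exp(-2*beta*J_SR) the coefficient A_2
# vanishes, so f''(0) should be ~ 0:
beta, J_SR = 1.0, -0.5
J_LR = math.exp(-2 * beta * J_SR) / beta
eps = 1e-3
fpp0 = (f(eps, beta, J_SR, J_LR) - 2 * f(0.0, beta, J_SR, J_LR)
        + f(-eps, beta, J_SR, J_LR)) / eps ** 2
```

All three checks agree to high precision, which also confirms that the saddle-point equation is the stationarity condition $f'(\widetilde{y})=0$.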
From the equation of state (\ref{eqstate}), it is possible to check the location of the tricritical point, given by
\begin{equation}
\beta J_{LR}=\exp\left( -2\beta J_{SR}\right) ,
\end{equation}
which corresponds to $A_{2}=0$, and
\begin{equation}
\beta J_{LR}\left[ \frac{1}{3}+\frac{4}{3}\exp\left( 2\beta J_{SR}\right) -\exp\left( 6\beta J_{SR}\right) \right] =1+\exp\left( 2\beta J_{SR}\right) ,
\end{equation}
which corresponds to $A_{4}=0$, in agreement with Nagle's findings.
\section{Solution in the microcanonical ensemble}
According to the work of Nagle, it is convenient to begin by considering the Hamiltonian of an Ising chain, without the addition of mean-field terms and in the presence of an external field $H$, which can be written as
\begin{equation}
\mathcal{H}_{I}=-J_{SR}\sum_{i=1}^{N}\sigma_{i}\sigma_{i+1}-H\sum_{i=1}^{N}\sigma_{i}=U-HM,
\end{equation}
where the energy $U$ refers to the short-range interactions,
\begin{equation}
U=-J_{SR}\sum_{i=1}^{N}\sigma_{i}\sigma_{i+1};\qquad M=\sum_{i=1}^{N}\sigma_{i}.
\end{equation}
Given $U$, $M$, and $N$, the number of microscopic states associated with this system may be formally written as a sum over spin configurations of a product of two delta functions,
\begin{equation}
\Omega_{I}=\Omega_{I}\left( U,M,N\right) =\sum_{\left\{ \sigma_{i}\right\} }\delta\left( U+J_{SR}\sum_{i=1}^{N}\sigma_{i}\sigma_{i+1}\right) \,\delta\left( M-\sum_{i=1}^{N}\sigma_{i}\right) .
\end{equation}
We now introduce integral representations of these delta functions, and use the transfer matrix technique to carry out the sums.
In the thermodynamic limit, it is straightforward to write
\begin{equation}
\Omega_{I}\left( U,M,N\right) \sim\int\int dk_{1}dk_{2}\,\exp\left[ N\,f\left( k_{1},k_{2}\right) \right] ,
\end{equation}
where
\begin{equation}
f\left( k_{1},k_{2}\right) =k_{1}u-k_{2}m+\ln\lambda\left( k_{1},k_{2}\right) ,
\end{equation}
and
\begin{equation}
\lambda\left( k_{1},k_{2}\right) =\exp\left( k_{1}J_{SR}\right) \left\{ \cosh k_{2}+\left[ \sinh^{2}k_{2}+\exp\left( -4k_{1}J_{SR}\right) \right] ^{1/2}\right\} ,
\end{equation}
with $u=U/N$ and $m=M/N$. The entropy per particle as a function of $u$ and $m$ is given by
\begin{equation}
s_{I}=s_{I}\left( u,m\right) \sim\frac{k_{B}}{N}\ln\Omega_{I}\sim k_{B}\,f\left( \widetilde{k}_{1},\widetilde{k}_{2}\right) ,\label{sI}
\end{equation}
where $\widetilde{k}_{1}$ and $\widetilde{k}_{2}$ come from the saddle-point equations, $\left( \partial f/\partial k_{1}\right) _{k_{2}}=0$ and $\left( \partial f/\partial k_{2}\right) _{k_{1}}=0$,
\begin{equation}
u+J_{SR}=2J_{SR}\frac{\exp\left( -4\widetilde{k}_{1}J_{SR}\right) D^{-1/2}}{\cosh\widetilde{k}_{2}+D^{1/2}}\label{umicro}
\end{equation}
and
\begin{equation}
m=\frac{\sinh\widetilde{k}_{2}\left[ 1+D^{-1/2}\cosh\widetilde{k}_{2}\right] }{\cosh\widetilde{k}_{2}+D^{1/2}},\label{mmicro}
\end{equation}
with
\begin{equation}
D=\sinh^{2}\widetilde{k}_{2}+\exp\left( -4\widetilde{k}_{1}J_{SR}\right) .\label{Dmicro}
\end{equation}
In the entropy representation, we write the differential form
\begin{equation}
ds_{I}=\frac{1}{T}du-\frac{H}{T}dm,
\end{equation}
from which we have the equations of state
\begin{equation}
\frac{1}{T}=\left( \frac{\partial s_{I}}{\partial u}\right) _{m};\qquad-\frac{H}{T}=\left( \frac{\partial s_{I}}{\partial m}\right) _{u}.
\end{equation}
It is straightforward to use these equations, together with the saddle-point equations (\ref{umicro}) and (\ref{mmicro}), in order to show that
\begin{equation}
\widetilde{k}_{2}=\beta H;\qquad\widetilde{k}_{1}=\beta,\label{k1k2}
\end{equation}
which is an evidence of the equivalence of ensembles (in the absence of the mean-field interactions). We now turn to Nagle's model, with the addition of the long-range terms. In the presence of the equivalent-neighbor interactions, the internal energy is given by the sum of two terms,
\begin{equation}
u=u_{SR}+u_{LR}=u_{SR}-\frac{1}{2}J_{LR}\,m^{2},\label{u}
\end{equation}
so that both the energy associated with the short-range interactions, $u_{SR}$, and the magnetization $m$ should be fixed in the microcanonical formulation. Therefore, the entropy of Nagle's model is still given by the expression $s_{I}$, as in equation (\ref{sI}), but with the energy given by equation (\ref{u}), which leads to a new differential form,
\begin{equation}
ds=\frac{1}{T}du_{SR}-\frac{J_{LR}\,m}{T}dm-\frac{H}{T}dm.
\end{equation}
From this expression we have
\begin{equation}
\frac{1}{T}=\left( \frac{\partial s}{\partial u_{SR}}\right) _{m};\qquad-\frac{H_{ef}}{T}=\left( \frac{\partial s}{\partial m}\right) _{u_{SR}},
\end{equation}
where $H_{ef}=H+J_{LR}\,m$ is an effective field, including the external field $H$ and the effects of the long-range terms. Thus, in zero external field, we use equations (\ref{k1k2}) to write $\widetilde{k}_{2}=\beta J_{LR}\,m$ and $\widetilde{k}_{1}=\beta$.
Inserting these expressions into equations (\ref{mmicro}) and (\ref{Dmicro}), we obtain
\begin{equation}
m=\frac{\sinh\left( \beta J_{LR}m\right) \left[ 1+D^{-1/2}\cosh\left( \beta J_{LR}m\right) \right] }{\cosh\left( \beta J_{LR}m\right) +D^{1/2}},
\end{equation}
where
\begin{equation}
D=\sinh^{2}\left( \beta J_{LR}m\right) +\exp\left( -4\beta J_{SR}\right) ,
\end{equation}
which is identical to the equation of state (\ref{eqstate}) in the canonical ensemble, with the identification of $m$ with $\widetilde{y}$ (and which already leads to the location of the tricritical point). In contrast to previous calculations, we do not find any disagreements in the thermodynamic behavior obtained from calculations in different ensembles.
\section{Conclusions}
We revisited the statistical analysis of a spin-$1/2$ Ising chain with antiferromagnetic interactions between nearest-neighbor sites, and the addition of equivalent-neighbor ferromagnetic interactions between all pairs of sites. This system, which is known to display second and first-order phase transitions separated by a tricritical point, has been used as one of the paradigmatic examples of inequivalence of canonical and microcanonical formulations. In contrast to these claims, we give arguments to show the equivalence of thermodynamic solutions in different ensembles.
https://arxiv.org/abs/1309.7379
Incomparable copies of a poset in the Boolean lattice
Let $B_n$ be the poset generated by the subsets of $[n]$ with the inclusion as relation and let $P$ be a finite poset. We want to embed $P$ into $B_n$ as many times as possible such that the subsets in different copies are incomparable. The maximum number of such embeddings is asymptotically determined for all finite posets $P$ as $\frac{n \choose \lfloor n/2\rfloor}{M(P)}$, where $M(P)$ denotes the minimal size of the convex hull of a copy of $P$. We discuss both weak and strong (induced) embeddings.
\section{Introduction}~ \begin{definition} Let $B_n$ be the Boolean lattice, the poset generated by the subsets of $[n]$ with the inclusion as relation and $P$ be a finite poset with the relation $<_p$. (If $S$ is a set of size $n$ we may also write $B_S$.) $f:~P\rightarrow B_n$ is an {\it embedding} of $P$ into $B_n$ if it is an injective function that satisfies $f(a)\subset f(b)$ for all $a<_p b$. $f$ is called an {\it induced embedding} if it is an injective function such that $f(a)\subset f(b)$ if and only if $a<_p b$. \end{definition} \begin{definition} Let $X$ and $Y$ be two sets of subsets of $[n]$. $X$ and $Y$ are {\it incomparable} if there are no sets $x\in X$ and $y\in Y$ such that $x\subseteq y$ or $y\subseteq x$. A family of sets of subsets of $[n]$ is {\it incomparable} if its elements are pairwise incomparable. \end{definition} We investigate the following problem. How many times can we embed a poset into $B_n$ such that the resulting copies form an incomparable family? An asymptotic answer is given in both the induced and the non-induced case. Before we can state our main result, some notations are needed. \begin{notation} Let $F\subseteq B_n$. The {\it convex hull} of $F$ is the set \begin{equation} conv(F)=\{b\in B_n ~ \big| ~ \exists a,c\in F ~~ a\subseteq b\subseteq c\}. \end{equation} We use the following notations for the minimal size of the convex hull. For a finite poset $P$ \begin{eqnarray} t_1(P) &=& \min_{f,n}\{|conv(Im(f))| ~ \big| ~ f:~P\rightarrow B_n ~ {\rm is~an~embedding} \} \\ t_2(P) &=& \min_{f,n}\{|conv(Im(f))| ~ \big| ~ f:~P\rightarrow B_n ~ {\rm is~an~induced~embedding} \} \end{eqnarray} \end{notation} \begin{theorem}\label{mainthm} Let $P$ be a finite poset. Let $M_1(P,n)$ (and $M_2(P,n)$) denote the largest $M$ such that there are embeddings (induced embeddings) $f_1, f_2, \dots f_M: P\rightarrow B_n$ such that $\{Im(f_i),~ i=1,2,\dots M \}$ is an incomparable family. 
Then \begin{eqnarray} \lim_{n\rightarrow \infty} \frac{M_1(P,n)}{{n \choose \lfloor n/2\rfloor}} &=& \frac{1}{t_1(P)} \\ \lim_{n\rightarrow \infty} \frac{M_2(P,n)}{{n \choose \lfloor n/2\rfloor}} &=& \frac{1}{t_2(P)}. \end{eqnarray} \end{theorem} We prove upper and lower bounds for $M_j(P,n)$ in the next two sections (Theorem \ref{upperthm} and Theorem \ref{lowerthm}). The two bounds will imply the theorem immediately. Since the proofs are almost identical for $j=1,2$, they will be done simultaneously. \begin{remark} Theorem \ref{mainthm} was independently proved by A. P. Dove and J. R. Griggs \cite{dg}. \end{remark} The problem discussed in this paper is related to the problem of determining the largest families in $B_n$ avoiding certain configurations of inclusion. \begin{definition} Let $P_1, P_2, \dots P_k$ be finite posets. La($n, \{P_1, \dots P_k\}$) denotes the size of the largest subset $\mathcal{F}\subset B_n$ such that none of the posets $P_i$ can be embedded into $\mathcal{F}$. \end{definition} Let $V_k$ denote the $(k+1)$-element poset that has a minimal element contained in the other $k$ unrelated elements. $\Lambda_k$ is obtained from $V_k$ by reversing the relations. Katona and Tarján proved that a subset of $B_n$ containing none of the posets $\{V_2,~ \Lambda_2\}$ has at most $2{n-1\choose \left\lfloor {n-1\over 2} \right\rfloor }$ elements, and this bound is sharp \cite{tarjan}. Such a family consists of pairwise incomparable copies of the one-element poset and the two-element chain. Another example of the relation of the two problems is determining La($n, V_2$). (See \cite{rfork} for asymptotic bounds on La($n, V_r$).) A $V_2$-free family consists of pairwise incomparable copies of the posets $\{\Lambda_0, \Lambda_1, \Lambda_2, \dots \}$. The value of La($n,P$) is not known for a general poset $P$, but many special cases have been solved. See \cite{bukh} for posets whose Hasse diagram is a tree. See \cite{diamond} for diamond and harp posets.
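Checking whether a family of subsets contains $V_2$ or $\Lambda_2$ (in the weak, non-induced sense of the embedding definition above) only requires counting proper supersets and subsets within the family. A small illustrative sketch (ours, not from the paper; the four-set family is a union of two incomparable two-element chains in $B_3$):

```python
def contains_V2(family):
    # V_2 embeds (weakly) iff some member has at least two proper
    # supersets in the family.
    return any(sum(1 for b in family if a < b) >= 2 for a in family)

def contains_L2(family):
    # Lambda_2 embeds iff some member has at least two proper subsets.
    return any(sum(1 for b in family if b < a) >= 2 for a in family)

def S(*xs):
    # Shorthand for a subset of [n] as a frozenset (supports <, > as
    # proper inclusion).
    return frozenset(xs)

# Two incomparable copies of the two-element chain in B_3:
matching = {S(1), S(1, 2), S(3), S(2, 3)}
# A fork: {1} lies below both {1,2} and {1,3}, so V_2 embeds.
fork = {S(1), S(1, 2), S(1, 3)}
```

The `matching` family avoids both configurations, in line with the description of the extremal $\{V_2,\Lambda_2\}$-free families as unions of incomparable singletons and two-element chains.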
\cite{chen} provides upper bounds on La($n,P$) for all posets $P$. \section{The upper bound}~ To prove the upper bound for $M_j(P,n)$ we need a lemma about chains. Let $S$ be a set of size $n$. A chain in $S$ is a set of subsets $\emptyset=C_0\subset C_1\subset C_2\subset \dots \subset C_n=S$, where $|C_m|=m$ for all $m$. \begin{lemma}\label{chcount} Let $\mathcal{F}$ be a family of subsets of $S$, where $|S|=n$ and $|\mathcal{F}|=t$. Then the number of chains intersecting at least one member of $\mathcal{F}$ is at least \begin{equation} \left(t-\frac{t^2}{n}\right)\lfloor n/2\rfloor!\lceil n/2 \rceil!. \end{equation} \end{lemma} \begin{proof} We prove the lemma by induction on $t$. The statement is true for $t=1$, as the number of chains passing through a subset $F$ is $|F|!(n-|F|)!\geq\lfloor n/2\rfloor!\lceil n/2 \rceil!$. Now let $t\geq 2$, and $\mathcal{F}=\{F_1, F_2, \dots F_t\}$. Since taking complements does not change the number of intersecting chains, we may assume that some set of $\mathcal{F}$ has size at most $\lfloor n/2\rfloor$. We can also assume that $F_t$ is one of the smallest subsets. By induction, the number of chains intersecting $\mathcal{F}\backslash \{F_t\}$ is at least \begin{equation} \left(t-1-\frac{(t-1)^2}{n}\right)\lfloor n/2\rfloor!\lceil n/2 \rceil!. \end{equation} The number of chains through $F_t$ is $|F_t|!(n-|F_t|)!$. Assume that $F_t\subset F_i$ for some $i\in [1,~t-1]$. The number of chains intersecting both $F_t$ and $F_i$ is $|F_t|!(|F_i|-|F_t|)!(n-|F_i|)!\leq |F_t|!(n-|F_t|-1)!$. So there are at least \begin{equation} |F_t|!(n-|F_t|)!\left(1-\frac{t-1}{n-|F_t|}\right)\geq \lfloor n/2\rfloor!\lceil n/2 \rceil!\left(1-\frac{2(t-1)}{n}\right)\end{equation} chains that intersect $\mathcal{F}$ only in $F_t$. The statement of the lemma follows after summation: \begin{equation} \left(t-1-\frac{(t-1)^2}{n}\right)+\left(1-\frac{2(t-1)}{n}\right)=t-\frac{t^2-1}{n}\geq t-\frac{t^2}{n}.
\end{equation} \end{proof} \begin{theorem}\label{upperthm} For any finite poset $P$ \begin{equation} M_j(P,n)\leq \frac{1}{t_j(P)}{n \choose \lfloor n/2\rfloor}(1+O(n^{-1})) \end{equation} holds for $j=1,2$. \end{theorem} \begin{proof} Assume that $f_1, f_2, \dots f_k: P\rightarrow B_n$ are embeddings (induced if $j=2$) such that the family $\{Im(f_i),~ i=1,2,\dots k \}$ is incomparable. Then $\{conv(Im(f_i)),~ i=1,2,\dots k \}$ is also an incomparable family. To see that, assume there are sets $a,b$ such that $a\subseteq b$, $a\in conv(Im(f_{i_1}))$, $b\in conv(Im(f_{i_2}))$ and $i_1\not=i_2$. Then by the definition of the convex hull there are sets $a'\in Im(f_{i_1})$ and $b'\in Im(f_{i_2})$ such that $a'\subseteq a\subseteq b\subseteq b'$. But $a'\subseteq b'$ contradicts the assumption that $\{Im(f_i),~ i=1,2,\dots k \}$ is an incomparable family. Since the family $\{conv(Im(f_i)),~ i=1,2,\dots k \}$ is incomparable, every chain intersects at most one of its members. By Lemma \ref{chcount} (applied to any $t_j(P)$ sets of $conv(Im(f_i))$, which has at least $t_j(P)$ elements), each $conv(Im(f_i))$ intersects at least $t_j(P)\lfloor n/2\rfloor!\lceil n/2 \rceil!(1-O(n^{-1}))$ chains. Since the total number of chains is $n!$, \begin{equation} k \leq \frac{n!}{t_j(P)\lfloor n/2\rfloor!\lceil n/2 \rceil!(1-O(n^{-1}))}=\frac{1}{t_j(P)}{n \choose \lfloor n/2\rfloor}(1+O(n^{-1})). \end{equation} \end{proof} \section{The lower bound}~ In this section our aim is to prove a lower bound on $M_j(P,n)$ by embedding many copies of $P$ into $B_n$. We need the following lemmas for the construction. \begin{lemma}\label{labeling} Let $P$ be a finite poset, and let $f: P\rightarrow B_m$ be an embedding. Then we can label the elements of $B_m$ with the numbers $1, 2, \dots 2^m$ such that all the sets get a higher number than any of their subsets, and the numbers assigned to the elements of $conv(Im(f))$ form an interval in $[1, 2^m]$.
\end{lemma} \begin{proof} We divide the elements of $B_m$ into three groups: Let $\mathcal{F}_1=\{b\in B_m \big|~ \exists c\in Im(f)~~b\subset c,~~\nexists a\in Im(f)~~a\subseteq b \}$, $\mathcal{F}_2=conv(Im(f))=\{b\in B_m ~ \big| ~ \exists a,c\in Im(f) ~~ a\subseteq b\subseteq c\}$ and $\mathcal{F}_3=B_m\backslash (\mathcal{F}_1 \cup \mathcal{F}_2)=\{b\in B_m ~ \big| ~ \nexists c\in Im(f) ~~ b\subseteq c\}$. We use the numbers of $[1,~ |\mathcal{F}_1|]$ for the sets of $\mathcal{F}_1$, the numbers of $[|\mathcal{F}_1|+1,~ |\mathcal{F}_1|+|\mathcal{F}_2|]$ for the sets of $\mathcal{F}_2$ and the numbers of $[|\mathcal{F}_1|+|\mathcal{F}_2|+1,~ 2^m]$ for the sets of $\mathcal{F}_3$. Within each group we assign the numbers such that the elements representing larger subsets get larger numbers. We have to check that if $x,y\in B_m$ and $y$ got a larger number than $x$, then $y\not\subset x$. If $x$ and $y$ are in the same group, then $|x|\leq |y|$, so $y\not\subset x$. If $x\in\mathcal{F}_1$ and $y\in\mathcal{F}_2$, then $y\not\subset x$, because $y$ contains an element of $Im(f)$ while $x$ does not. If $x\in\mathcal{F}_1\cup \mathcal{F}_2$ and $y\in\mathcal{F}_3$, then $y\not\subset x$, because $x$ is the subset of an element of $Im(f)$ while $y$ is not. \end{proof} \begin{lemma}\label{ordcopies} Let $P$ be a finite poset and let $\varepsilon' >0$ be fixed. Let $j\in\{1,2\}$. Then there are integers $N, K$ and functions $f_1, f_2, \dots f_K: P\rightarrow B_N$ such that \begin{enumerate}[(i)] \item For all $i\in [1, K]$, $f_i$ is an embedding if $j=1$, and an induced embedding if $j=2$. \item $K\geq \frac{2^N(1-\varepsilon')}{t_j(P)}$. \item If $i_1<i_2$, $a\in Im(f_{i_1})$ and $b\in Im(f_{i_2})$, then $b\not\subseteq a$. \end{enumerate} \end{lemma} \begin{proof} Let $P$ be a fixed finite poset. There is an embedding (induced if $j=2$) $f: P\rightarrow B_m$ for some $m$ such that $|conv(Im(f))|=t_j(P)$. Fix $m$ and $f$.
Choose $k\in\mathbb{N}$ such that $\left(1- \frac{t_j(P)}{2^m}\right)^k\leq \varepsilon'$, and let $N=km$. Let $S_1, S_2, \dots S_k$ be disjoint sets of size $m$ and let $S=\bigcup_{i=1}^k S_i$. Consider the elements of $B_N$ as the subsets of $S$. Let $g_i: P\rightarrow B_{S_i}$ ($i=1, 2, \dots k$) be the embeddings that map the elements of $P$ into $B_{S_i}$ the same way as $f$ does. Assign the numbers $1, 2, \dots 2^m$ to the subsets of $S_i$ as in Lemma \ref{labeling}. The elements of $conv(Im(g_i))$ will get the numbers of the interval $I=[p,~ p+t_j(P)-1]$ for all $i$. We call an embedding $g: P\rightarrow B_S$ {\it good} if there is an index $i\in [1,~k]$ and there are $k-1$ sets $A_1\subseteq S_1,~ A_2\subseteq S_2, \dots A_{i-1}\subseteq S_{i-1},~ A_{i+1}\subseteq S_{i+1}, \dots A_k\subseteq S_k $ such that none of the numbers assigned to $A_1, A_2, \dots A_{i-1}$ is in $I$, and for any $x\in P$, $g(x)\cap S_i=g_i(x)$ and $g(x)\cap (S\backslash S_i)=\displaystyle\bigcup_{r\in [k]\backslash \{i\}} A_r$. The number of good functions is \begin{equation} \sum_{i=1}^{k} (2^m-t_j(P))^{i-1}\cdot(2^m)^{k-i}=2^{N-m} \sum_{i=1}^{k} \left(1-\frac{t_j(P)}{2^m}\right)^{i-1}= \end{equation} \begin{equation}\nonumber 2^{N-m}\frac{1-\left(1-\frac{t_j(P)}{2^m}\right)^k}{\frac{t_j(P)}{2^m}}= \frac{2^N}{t_j(P)}\left(1-\left(1-\frac{t_j(P)}{2^m}\right)^k\right)\geq \frac{2^N(1-\varepsilon')}{t_j(P)}. \end{equation} $f_1, f_2, \dots f_K$ will be the good functions. They are embeddings (induced if $j=2$), and their number is sufficiently large. So $(i)$ and $(ii)$ are satisfied. Now we find an ordering of the good functions that satisfies $(iii)$. Let $g$ be the good function defined by the index $i$ and the subsets $A_1, A_2, \dots A_{i-1}, A_{i+1}, \dots A_k$. Define the code of $g$ as a vector of length $k$ with coordinates as follows. The first $i-1$ coordinates are the numbers assigned to the sets $A_1, A_2, \dots A_{i-1}$, respectively.
The $i$th coordinate is $p$, the smallest number in $I$. The last $k-i$ coordinates are the numbers assigned to the sets $A_{i+1}, A_{i+2}, \dots A_k$, respectively. Now take the lexicographic ordering of these codes, and assign the names $f_1, f_2, \dots$ to the good functions according to the ordering. ($f_1$ will be the good function whose code comes first in the lexicographic ordering, $f_2$ will be the second, and so on.) Now we can verify $(iii)$. Assume that $A\in Im(f_a)$, $B\in Im(f_b)$, $A\subseteq B$ and $b<a$. Let the $l$th coordinate be the first in which the codes of $f_a$ and $f_b$ differ. Since $b<a$, the $l$th coordinate of the code of $f_a$ is strictly larger than that of the code of $f_b$, and the first $l-1$ coordinates are not from $I$. That implies that the number assigned to $A\cap S_l$ is strictly larger than the number assigned to $B\cap S_l$. (We use the fact that the numbers assigned to the elements of $conv(Im(g_l))$ form an interval at this step.) Then $A\cap S_l\not\subseteq B\cap S_l$ (contradicting $A\subseteq B$), as the labeling of the subsets of $S_l$ is done according to Lemma \ref{labeling}. \end{proof} \begin{theorem}\label{lowerthm} Let $P$ be a finite poset, $\varepsilon >0$ and $j\in \{1,2\}$. Then for all large enough $n$ \begin{equation} M_j(P,n)\geq \frac{1}{t_j(P)}{n \choose \lfloor n/2\rfloor}(1-\varepsilon). \end{equation} \end{theorem} \begin{proof} Choose $N,~K$, and $f_1, f_2, \dots f_K: P\rightarrow B_N$ as in Lemma \ref{ordcopies} (with $\varepsilon'=\frac{\varepsilon}{2}$). Consider the elements of $B_N$ as the subsets of a set $S$ of size $N$. Let $R$ be a set such that $S\subset R$ and $|R|=n$. Let $Q=R\backslash S$. Let \begin{equation} \mathcal{Q}=\left\{T\subset Q~ \Big|~ \left\lfloor\frac{n-N}{2}\right\rfloor-K \leq |T| \leq \left\lfloor\frac{n-N}{2}\right\rfloor-1 \right\}.
\end{equation} If $n$ is large enough, then the following inequality is true: \begin{equation} \sum_{i=1}^K {n-N \choose \left\lfloor\frac{n-N}{2}\right\rfloor-i} \geq K\cdot {n-N \choose \left\lfloor\frac{n-N}{2}\right\rfloor}\left(1-\frac{\varepsilon}{2}\right). \end{equation} Then \begin{equation} |\mathcal{Q}|\geq K\cdot {n-N \choose \left\lfloor\frac{n-N}{2}\right\rfloor}\left(1-\frac{\varepsilon}{2}\right) \geq \frac{2^N(1-\frac{\varepsilon}{2})}{t_j(P)} \cdot 2^{-N}{n \choose \left\lfloor\frac{n}{2}\right\rfloor}\left(1-\frac{\varepsilon}{2}\right) \geq \frac{1}{t_j(P)}{n \choose \lfloor n/2\rfloor}(1-\varepsilon). \end{equation} We used that $2^N {n-N \choose \left\lfloor\frac{n-N}{2}\right\rfloor}\geq {n \choose \left\lfloor\frac{n}{2}\right\rfloor}$, which can be verified easily by induction on $N$. We define an embedding $f_T: P\rightarrow B_R$ (induced if $j=2$) for every $T\in\mathcal{Q}$ such that $\{Im(f_T)~\big|~T\in\mathcal{Q}\}$ is an incomparable family. For any $x\in P$ let $f_T(x)\cap Q=T$ and $f_T(x)\cap S=f_{\lfloor \frac{n-N}{2} \rfloor-|T|}(x)$. Then $f_T$ is obviously an embedding (induced if $j=2$). Now we check that the family $\{Im(f_T)~\big|~T\in\mathcal{Q}\}$ is incomparable. Let $T_1, T_2\in \mathcal{Q}$ be different sets. Assume that $A_1\in Im(f_{T_1})$, $A_2\in Im(f_{T_2})$ and $A_1\subseteq A_2$. Then $T_1=A_1\cap Q\subseteq A_2\cap Q=T_2$. Since $T_1\not= T_2$, $|T_1|<|T_2|$ holds. Since $A_1\cap S\in Im(f_{\lfloor \frac{n-N}{2} \rfloor-|T_1|})$ and $A_2\cap S\in Im(f_{\lfloor \frac{n-N}{2} \rfloor-|T_2|})$, Lemma \ref{ordcopies} $(iii)$ implies $A_1\cap S\not\subseteq A_2\cap S$. This contradicts $A_1\subseteq A_2$, so the family is indeed incomparable. We found at least $\frac{1}{t_j(P)}{n \choose \lfloor n/2\rfloor}(1-\varepsilon)$ different embeddings (induced if $j=2$) of $P$ into $B_R$, where $|R|=n$, such that the resulting copies form an incomparable family. This proves the theorem.
\end{proof} \section{Remarks}~ In this section we exactly determine the maximum number of incomparable copies for certain posets. The problem has already been solved for the path posets. \begin{theorem}\label{pathlemma} {\rm {\bf (Griggs, Stahl, Trotter) \cite{gst}}} Let $P^{h+1}$ be the path poset with $h+1$ elements. Then for all $n\geq h$ \begin{equation}M_1(P^{h+1},n)={n-h\choose \left\lfloor {n-h\over 2} \right\rfloor }.\end{equation} \end{theorem} We include an alternative proof for the sake of completeness. The following theorem will be used. \begin{theorem} {\rm {\bf (Bollob\'as) \cite{B}}} Let $(A_i, B_i)~ (1\leq i\leq m)$ be a family of pairs of disjoint sets ($A_i\cap B_i=\emptyset$), where $A_i\cap B_j\not= \emptyset$ holds for $i\not= j~ (1\leq i,j\leq m)$. Then \begin{equation}\sum_{i=1}^m{1\over {|A_i|+|B_i|\choose |A_i|}}\leq 1.\end{equation} \end{theorem} \begin{proof} (Theorem \ref{pathlemma}.) Consider a family of $m$ pairwise incomparable embeddings of $P^{h+1}$ into $B_n$, and for the $i$th copy let the maximal and minimal elements be embedded into $C_i$ and $D_i$, respectively. $C_i\supset D_i$ implies $\overline{C}_i\cap D_i=\emptyset.$ On the other hand, the incomparability conditions imply $\overline{C}_i\cap D_j\not= \emptyset$ for $i\not= j$. The theorem of Bollob\'as can be applied for the pairs $(\overline{C}_i, D_i)$: \begin{equation}\label{bollineq} \sum_{i=1}^m {1\over {|\overline{C}_i|+|D_i|\choose |\overline{C}_i|}}\leq 1. \end{equation} $|C_i\backslash D_i|\geq h$ results in $|\overline{C_i}|+|D_i|\leq n-h.$ Therefore each term on the left hand side of (\ref{bollineq}) can be bounded from below in the following way. \begin{equation} {m \over {n-h\choose \left\lfloor {n-h\over 2} \right\rfloor }}=\sum_{i=1}^m {1 \over {n-h\choose \left\lfloor {n-h\over 2} \right\rfloor }}\leq \sum_{i=1}^m {1\over {|\overline{C}_i|+|D_i|\choose |\overline{C}_i|}}\leq 1 \end{equation} holds, proving the upper bound in the theorem. The lower bound can be seen by an easy construction.
Let $G\subset\{h+1, h+2,\dots n\}$ be a subset of size $\left\lfloor {n-h\over 2} \right\rfloor$. Then $P^{h+1}$ can be embedded to the sets $G,~\{1\}\cup G,~\{1, 2\}\cup G,\dots ~\{1,2\dots h\}\cup G$. We have ${n-h\choose \left\lfloor {n-h\over 2} \right\rfloor }$ such embeddings and the resulting copies form an incomparable family. This proves the lower bound. \end{proof} \begin{definition} Let $h(P)$ be the {\it height} of the poset $P$, that is, the number of elements in a longest chain in $P$ minus 1. We say that $P$ is {\it thin} if it can be embedded into $B_{h(P)}$. $P$ is called {\it slim} if it has an induced embedding into $B_{h(P)}$. \end{definition} \begin{theorem}\label{thinthm} If $P$ is a thin poset of height $h$, then \begin{equation} M_1(P,n)={n-h\choose \left\lfloor {n-h\over 2} \right\rfloor }. \end{equation} If $P$ is slim, then \begin{equation} M_1(P,n)=M_2(P,n)={n-h\choose \left\lfloor {n-h\over 2} \right\rfloor }. \end{equation} \end{theorem} \begin{proof} Since $P^{h+1}$ is a subposet of $P$, \begin{equation} M_2(P,n)\leq M_1(P,n) \leq M_1(P^{h+1},n). \end{equation} Now consider $M_1(P^{h+1},n)$ incomparable copies of $P^{h+1}$ in $B_n$ as constructed in the proof of Theorem \ref{pathlemma}. Their convex hulls are isomorphic to $B_h$, so we can embed $P$ into them (in an induced way if $P$ is slim). It proves $M_1(P,n) \geq M_1(P^{h+1},n)$ for thin posets, and $M_2(P,n) \geq M_1(P^{h+1},n)$ for slim posets. We already determined the value of $M_1(P^{h+1},n)$ in Theorem \ref{pathlemma}, so the proof is completed. \end{proof} Of course Theorem \ref{thinthm} does not contradict Theorem \ref{mainthm}, since $t_1(P)=2^h$ and \begin{equation} {1\over 2^h}{n\choose \left\lfloor {n\over 2} \right\rfloor }\sim {n-h\choose \left\lfloor {n-h\over 2} \right\rfloor }. \end{equation} The smallest non-thin poset is $V$ with three elements $a,b,c$ and the relations $a<b,~ a<c$. Now we give a large set of incomparable copies for all $n$. Fix the parameter $i$ $(1\leq i\leq {n+2\over 4})$.
Choose an element \begin{equation} F\in {[n-2i]\choose \left\lceil {n\over 2} \right\rceil -2i+1}. \end{equation} Then the sets \begin{equation}\nonumber F\cup \{ n-(2i-3), \ldots , n\} , F\cup \{ n-(2i-3), \ldots , n\} \cup \{ n-(2i-1)\} , F\cup \{ n-(2i-3), \ldots , n\} \cup \{ n-(2i-2)\} \end{equation} form a copy of the poset $V$. Let ${\cal P}_i$ denote the set of all such copies. It is trivial that the copies in ${\cal P}_i$ are pairwise incomparable. It is not much more difficult to check that two copies chosen from ${\cal P}_i$ and ${\cal P}_j\ (1\leq i<j \leq {n+2\over 4}) $, respectively, are also incomparable. Therefore \begin{equation} \bigcup_{i=1}^{ \left\lfloor {n+2\over 4}\right\rfloor }{\cal P}_i \end{equation} is a collection of incomparable embeddings of $V$. We conjecture that this is the largest such collection. \begin{conjecture} \begin{equation} M_1(V,n)=\sum_{i=1}^{ \left\lfloor {n+2\over 4}\right\rfloor }{n-2i\choose \left\lceil {n\over 2} \right\rceil -2i+1}. \end{equation} \end{conjecture}
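The construction above is small enough to check mechanically. The following sketch (our illustration, not part of the paper) generates all copies in $\bigcup_i {\cal P}_i$ for $n=8$, verifies that the copies are pairwise incomparable, and compares their number with the conjectured value of $M_1(V,n)$.

```python
from itertools import combinations
from math import comb, ceil

def copies(n):
    # All copies of V in the union of the families P_i: for each i and each
    # F in binom([n-2i], ceil(n/2)-2i+1), the copy is (A, B, C) with
    # A = F u {n-2i+3,...,n}, B = A u {n-2i+1}, C = A u {n-2i+2}.
    result = []
    i = 1
    while i <= (n + 2) / 4:
        tail = frozenset(range(n - 2 * i + 3, n + 1))
        size = ceil(n / 2) - 2 * i + 1
        for F in combinations(range(1, n - 2 * i + 1), size):
            A = frozenset(F) | tail
            result.append((A, A | {n - 2 * i + 1}, A | {n - 2 * i + 2}))
        i += 1
    return result

def incomparable(X, Y):
    # No set of one copy is contained in (or equal to) a set of the other copy.
    return not any(x <= y or y <= x for x in X for y in Y)

n = 8
fam = copies(n)
assert all(incomparable(fam[p], fam[q])
           for p in range(len(fam)) for q in range(p + 1, len(fam)))
conjectured = sum(comb(n - 2 * i, ceil(n / 2) - 2 * i + 1)
                  for i in range(1, (n + 2) // 4 + 1))
print(len(fam), conjectured)  # → 24 24
```

For $n=8$ the union consists of ${6\choose 3}+{4\choose 1}=24$ copies; the same check goes through for other small values of $n$.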
https://arxiv.org/abs/1309.7379
Incomparable copies of a poset in the Boolean lattice
Let $B_n$ be the poset generated by the subsets of $[n]$ with the inclusion as relation and let $P$ be a finite poset. We want to embed $P$ into $B_n$ as many times as possible such that the subsets in different copies are incomparable. The maximum number of such embeddings is asymptotically determined for all finite posets $P$ as $\frac{{n \choose \lfloor n/2\rfloor}}{M(P)}$, where $M(P)$ denotes the minimal size of the convex hull of a copy of $P$. We discuss both weak and strong (induced) embeddings.
https://arxiv.org/abs/1607.04911
Near-Optimal Induced Universal Graphs for Bounded Degree Graphs
A graph $U$ is an induced universal graph for a family $F$ of graphs if every graph in $F$ is a vertex-induced subgraph of $U$. For the family of all undirected graphs on $n$ vertices Alstrup, Kaplan, Thorup, and Zwick [STOC 2015] give an induced universal graph with $O\!\left(2^{n/2}\right)$ vertices, matching a lower bound by Moon [Proc. Glasgow Math. Assoc. 1965]. Let $k= \lceil D/2 \rceil$. Improving asymptotically on previous results by Butler [Graphs and Combinatorics 2009] and Esperet, Labourel and Ochem [IPL 2008], we give an induced universal graph with $O\!\left(\frac{k2^k}{k!}n^k \right)$ vertices for the family of graphs with $n$ vertices of maximum degree $D$. For constant $D$, Butler gives a lower bound of $\Omega\!\left(n^{D/2}\right)$. For an odd constant $D\geq 3$, Esperet et al. and Alon and Capalbo [SODA 2008] give a graph with $O\!\left(n^{k-\frac{1}{D}}\right)$ vertices. Using their techniques for any (including constant) even values of $D$ gives asymptotically worse bounds than we present. For large $D$, i.e. when $D = \Omega\left(\log^3 n\right)$, the previous best upper bound was ${n\choose\lceil D/2\rceil} n^{O(1)}$ due to Adjiashvili and Rotbart [ICALP 2014]. We give upper and lower bounds showing that the size is ${\lfloor n/2\rfloor\choose\lfloor D/2 \rfloor}2^{\pm\tilde{O}\left(\sqrt{D}\right)}$. Hence the optimal size is $2^{\tilde{O}(D)}$ and our construction is within a factor of $2^{\tilde{O}\left(\sqrt{D}\right)}$ from this. The previous results were larger by at least a factor of $2^{\Omega(D)}$. As a part of the above, proving a conjecture by Esperet et al., we construct an induced universal graph with $2n-1$ vertices for the family of graphs with max degree $2$. In addition, we give results for acyclic graphs with max degree $2$ and cycle graphs. Our results imply the first labeling schemes that for any $D$ are at most $o(n)$ bits from optimal.
\section{Cycle graphs}\label{sec:cycles} We consider the family of graphs consisting of one cycle of length $\leq n$ (and no other edges or vertices). We discuss both of the cases where the decoder is aware and oblivious of the value of $n$, as discussed in \Cref{pro:aware}. In particular we show that oblivious decoding requires a larger induced universal graph for this problem. Our new bounds leave small gaps which are interesting open problems to tighten. \input{lowerbounds} \input{upperbounds} \section{General $D$} In this section we present two upper bounds on $g_v(\mathcal{G}_D)$, the number of nodes in the smallest induced universal graph for graphs on $n$ nodes with bounded degree $D$. In \Cref{thmDetUpper} we give a deterministic construction of an induced universal graph for $\mathcal{G}_D$ that relies on the induced universal graph constructed in \Cref{sec:max2upper}. In \Cref{thmRandUpper} we give a randomized construction of an induced universal graph for $\mathcal{G}_D$ that with probability $\frac{1}{2}$ has a small number of nodes. Combining the two results shows the existence of an adjacency labeling scheme for $\mathcal{G}_D$ of size $\log \binom{\floor{n/2}}{\floor{D/2}} + O\!\left(\min\set{D+\log n,\sqrt{D\log n}\log(n/D)}\right)$. In \Cref{corLowerBoundLabelSizeBoundedDeg} and \Cref{corRandLowerBound} we give lower bounds on $g_v(\mathcal{G}_D)$. These lower bounds imply that any adjacency labeling scheme for $\mathcal{G}_D$ must have labels of size at least $\log \binom{\floor{n/2}}{\floor{D/2}} - O\!\left(\min\set{D,\sqrt{D\log n}\log(n/D)}\right)$, which means that the upper bounds are tight up to an additive term of size $O\!\left(\min\set{D+\log n,\sqrt{D\log n}\log(n/D)}\right)$, which is at most $O(\sqrt{n \log n})$.
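As a quick numeric illustration of these orders of magnitude (our own sanity check, not from the paper; the constant hidden in the $O$-notation is set to $1$ here, which is an assumption), one can compare the dominating term $\log_2 \binom{\floor{n/2}}{\floor{D/2}}$ with the additive slack $\sqrt{D\log n}\cdot\log(n/D)$ for concrete $n$ and $D$: the slack is a vanishing fraction of the main term as the parameters grow.

```python
from math import comb, log2, sqrt

def main_term(n, D):
    # log2 of the binomial coefficient dominating the label length
    return log2(comb(n // 2, D // 2))

def slack_term(n, D):
    # the O(sqrt(D log n) * log(n/D)) additive term, with the constant
    # set to 1 (an assumption; the paper only gives the O-form)
    return sqrt(D * log2(n)) * log2(n / D)

for n, D in [(10**4, 10**2), (10**6, 10**3), (10**8, 10**4)]:
    m, s = main_term(n, D), slack_term(n, D)
    print(f"n={n}, D={D}: main ~{m:,.0f} bits, slack ~{s:,.0f} bits ({s/m:.0%})")
```

Python's arbitrary-precision integers make the exact binomial coefficient cheap to evaluate even for these sizes, so no Stirling approximation is needed for the comparison.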
Previous labeling schemes use labels that are larger by an additive term of size $\Omega(D)$, which is $\Omega(n)$ when $D = \Omega(n)$, so this is the first adjacency labeling scheme for $\mathcal{G}_D$ where the dominating term is optimal. \subsection{Upper bounds on $g_v(\mathcal{G}_D)$} We show the following deterministic bound. \begin{theorem} \label{thmDetUpper} For the family $\mathcal{G}_D$ of graphs with bounded degree $D$ on $n$ nodes \[ g_v(\mathcal{G}_D) \le 2^{k+1} \cdot \frac{n^k}{(k-1)!} , \ \ \text{where} \ k = \ceil{D/2} \] \end{theorem} \begin{proof} For a set $S$ we let $S^{\le k}$ denote the set of all subsets of $S$ of size $\le k$. We note that $\abs{S^{\le k}} \le 2\frac{\abs{S}^k}{k!}$ whenever $\abs{S} \ge 2k$. We will show that $g_v(\mathcal{G}_D) \le 2\frac{(2n-1)^k}{(k-1)!}$. Fix $n,D$, let $k = \ceil{D/2}$ and let $U_n$ be the induced universal graph for $\mathcal{G}_2$ defined in \Cref{sec:max2upper}. We note that $V[U_n] = [2n-1]$. We define the graph $G$ to have vertex set $[2n-1] \times [2n-1]^{\le k-1}$ and such that there is an edge between $(x,A)$ and $(y,B)$ iff $x \in B$, $y \in A$ or $x$ and $y$ are adjacent in $U_n$. Since $G$ has the desired number of nodes we proceed to show that $G$ is an induced universal graph for $\mathcal{G}_D$. Let $H$ be a graph in $\mathcal{G}_D$. By \Cref{Butlersplit} we know that we can decompose the edges of $H$ into $H_0$ and $H_1$ such that $\Delta(H_0) \le 2, \Delta(H_1) \le 2(k-1)$. We can find an embedding function $f : V[H] \to V[U_n]$ of $H_0$ in $U_n$ by the universality of $U_n$. Since $\Delta(H_1) \le 2(k-1)$, we can orient the edges of $H_1$ (e.g.\ along Eulerian circuits, after pairing up odd-degree vertices with auxiliary edges) such that any node has at most $k-1$ outgoing edges in $H_1$. For $u \in V[H]$ let $S_u$ be the set of nodes $v$ such that there exists an edge between $u$ and $v$ in $H_1$ oriented from $u$ to $v$. We see that $u$ and $v$ are adjacent iff $f(u)$ and $f(v)$ are adjacent in $U_n$ or it holds that $u \in S_v$ or $v \in S_u$.
Therefore $\lambda : V[H] \to V[G]$ defined by $u \mapsto (f(u), f(S_u))$ is an embedding function of $H$ in $G$. Hence $G$ must be an induced universal graph for $\mathcal{G}_D$. \end{proof} The intuition behind the randomized bound below is the following. Consider placing all $n$ vertices on a circle in a randomly chosen order and rename the vertices with indices $[n]$ following the order on the circle. Now, a vertex $v \in [n]$ remembers its neighbours in the next half of the circle, i.e., $v$ stores all the adjacent vertices among $\{v+1, \ldots, v+\lceil n/2 \rceil\}$ (where indices are taken modulo $n$). If two vertices $u,v$ are adjacent, then clearly either $u$ stores the index of $v$ or conversely, hence an adjacency query can be answered. A Chernoff bound implies that vertex $v$ with high probability stores at most $D/2 + O(\sqrt{D \log n})$ indices. It follows that there exists an order of the points on the circle where every vertex stores at most that many neighbours, and the theorem follows. \begin{theorem}\footnote{In the full version we improve this to $\binom{\floor{n/2}}{\floor{D/2}} \cdot 2^{O \!\left (\sqrt{D\log D} \cdot \log(n/D) \right )}$} \label{thmRandUpper} For the family $\mathcal{G}_D$ of graphs with bounded degree $D$ on $n \ge 2D$ nodes \[ g_v(\mathcal{G}_D) \le \binom{\floor{n/2}}{\floor{D/2}} \cdot 2^{O \!\left (\sqrt{D\log n} \cdot \log(n/D) \right )} \] \end{theorem} \begin{proof} Fix $n,D$ and wlog assume that $n$ is odd. For $D \le \log n$ the result follows from \Cref{thmDetUpper} so assume that $D \ge \log n$. Let $G$ be a graph in $\mathcal{G}_D$, and wlog assume that $V[G] = [n]$. Let $\pi : [n] \to [n]$ be a permutation of $[n]$ chosen uniformly at random. For each $u \in V[G]$ let $S_u$ be the set of differences $\pi(v) - \pi(u) \bmod n$ where $v$ is a neighbour of $u$ and $\pi(v) - \pi(u) \bmod n$ is at most $\floor{\frac{n}{2}}$.
That is: \[ S_u = \set{(\pi(v) - \pi(u)) \bmod n \mid (u,v) \in E[G], \ (\pi(v) - \pi(u)) \bmod n \in \set{1,2,\ldots,\floor{\frac{n}{2}}}} \] Given two nodes $u,v$ we can determine whether $u$ and $v$ are adjacent from $\pi(u),\pi(v)$ and $S_u,S_v$ in the following way. If $(\pi(u)-\pi(v)) \bmod n \le \floor{\frac{n}{2}}$ they are adjacent iff $(\pi(u)-\pi(v)) \bmod n \in S_v$. Otherwise they are adjacent iff $(\pi(v)-\pi(u)) \bmod n \in S_u$. We note that $\mathbb{E}(\abs{S_u}) = \frac{\deg_G(u)}{2} \le \frac{D}{2}$. By a standard Chernoff bound for sampling without replacement we see that \begin{align} \label{eqNeighbourBound} \abs{S_u} \le D', \ \ \text{where} \ D' = \floor{\frac{D}{2} + O \!\left ( \sqrt{D\log n} \right )} \end{align} with probability $\ge 1 - \frac{1}{2n}$ for a given vertex $u \in V[G]$. So with probability at least $\frac{1}{2}$ we have that \eqref{eqNeighbourBound} holds for every $u \in V[G]$. In particular there exists $\pi$ such that \eqref{eqNeighbourBound} holds for every $u \in V[G]$. Fix such a $\pi$. Let $D'' = \min\set{\floor{\frac{n}{2}},D'}$. Then for any node $u$ we can encode $\pi(u)$ and $S_u$ using at most $O(\log n) + \log \binom{\floor{n/2}}{D''}$ bits. Hence we conclude that: \[ g_v(\mathcal{G}_D) \le \binom{\floor{n/2}}{D''} n^{O(1)} \] The conclusion now follows from the following estimate \[ \binom{\floor{n/2}}{D''} \le \binom{\floor{n/2}}{\floor{D/2}} \cdot \left ( \frac{\floor{n/2}}{\floor{D/2}} \right )^{D''-\floor{D/2}} \le \binom{\floor{n/2}}{\floor{D/2}} \cdot \left ( \frac{n}{D} \right )^{O\!\left(\sqrt{D\log n}\right )} \] \end{proof} \subsection{Lower bounds on $g_v(\mathcal{G}_D)$} We now show lower bounds on $g_v(\mathcal{G}_D)$. Our first lower bound follows from counting perfect matchings. \begin{lemma} \label[lemma]{lemBipartiteBoundedDegLowerBound} Let $n, D$ be positive integers where $n$ is even. Let $V = [n]$.
The number of graphs $G$ with $\Delta(G) \le D$ and vertex set $V$ is at least $\frac{\left((n/2)!\right)^{D}}{D^{Dn/2}}$. \end{lemma} \begin{proof} Let $V_0 = [n/2], V_1 = [n] \setminus [n/2]$. Let $M_0,M_1,\ldots,M_{r-1}$ be all perfect matchings between $V_0$ and $V_1$, where $r = (n/2)!$. Now consider the following family of graphs, each the union of $D$ such perfect matchings: \[ \mathcal{F} = \set{G \mid V[G] = V, E[G] = M_{i_0} \cup \ldots \cup M_{i_{D-1}}, i_0,\ldots,i_{D-1} \in [r]} \] Every graph in $\mathcal{F}$ is the union of $D$ perfect matchings and therefore has max degree $\le D$. Now fix $G \in \mathcal{F}$ and let $M$ be a perfect matching of $G$. We can write $M = \set{(u,f(u)) \mid u \in V_0}$ for some bijective function $f : V_0 \to V_1$. There are at most $D$ ways to choose $f(u)$ for every $u \in V_0$ since $(u,f(u))$ must be an edge of $G$. Hence there are at most $D^{n/2}$ ways to choose a perfect matching of $G$, and $G$ can be written as a union of $D$ perfect matchings in at most $D^{Dn/2}$ ways. Since $G$ was arbitrarily chosen this must hold for any $G \in \mathcal{F}$. Since there are $r^D$ ways to choose $D$ perfect matchings we conclude that $\mathcal{F}$ consists of at least $\frac{r^D}{D^{nD/2}}$ graphs as desired. \end{proof} As an immediate corollary of \Cref{lemBipartiteBoundedDegLowerBound} we get a lower bound on the number of nodes in an induced universal graph, shown below in \cref{corLowerBoundLabelSizeBoundedDeg}. \begin{corollary} \label[corollary]{corLowerBoundLabelSizeBoundedDeg} Any induced universal graph for the family $\mathcal{G}_D$ of graphs with bounded degree $D$ and $n$ nodes has at least $\Omega \!\left ( \left ( \frac{n}{2eD} \right )^{D/2} \right )$ nodes. \end{corollary} \begin{proof} Let $G$ be an induced universal graph for the family $\mathcal{G}_D$. Let $V = [n]$. Any graph $H$ from $\mathcal{G}_D$ on the vertex set $V$ is uniquely defined by the embedding function $f$ of $H$ in $G$.
Since there are no more than $\abs{V[G]}^n$ ways to choose $f$, \Cref{lemBipartiteBoundedDegLowerBound} gives that $\abs{V[G]}^n \ge \frac{\left(\floor{n/2}!\right)^{D}}{D^{D\floor{n/2}}}$. The result now follows from Stirling's formula. \[ \abs{V[G]} \ge \left ( \frac{\left(\floor{n/2}!\right)^{2/n}}{D^{\floor{n/2}/(n/2)}} \right )^{D/2} \] We note that $\floor{n/2}/(n/2) = 1$ when $n$ is even. When $n$ is odd we have $\floor{n/2} = \frac{n-1}{2}$. Hence $\floor{n/2}/(n/2) = 1 - \frac{1}{n}$. Since $D \le n$ we have $D^{1-\frac{1}{n}} = \Theta(D)$. \end{proof} Our second lower bound comes from bounding the probability that a random graph on $n$ vertices, where each edge exists with probability around $D/n$, has max degree at most $D$. \begin{lemma} \label{lemRandLowerBound} Let $n, D$ be positive integers where $n \ge 2D$. Let $V = [n]$. The number of graphs $G$ with $\Delta(G) \le D$ and vertex set $V$ is at least $\binom{\binom{n}{2}}{\floor{nD/2}} \cdot 2^{-O \!\left (n\sqrt{D\log n} \cdot \log(n/D) \right )}$. \end{lemma} \begin{proof} Fix $n, D$. For $D \le \log^2 n$ the result follows from \Cref{lemBipartiteBoundedDegLowerBound}, so assume that $D \ge \log^2 n$. Let $D' = D - O\!\left(\sqrt{D\log n}\right)$ be an integer. Let $G$ be a random $G(n,p)$ graph where $p = \frac{D'}{n-1}$ and $V[G] = [n]$. That is, $G$ is a random graph on $n$ nodes and for every pair $u,v \in V[G]$ there is an edge between $u$ and $v$ with probability $p$, independently. We say that $G$ is \emph{good} if it satisfies the following two properties: \begin{enumerate} \item[1] $\Delta(G) \le D$. \item[2] $\abs{E[G]} \ge \frac{nD''}{2}$ where $D''$ is an even integer satisfying $\frac{nD''}{2} = \frac{nD'}{2} - O(\sqrt{nD})$. \end{enumerate} We note that $D'' = D - O\!\left(\sqrt{D\log n}\right)$. We will argue that $G$ is good with probability at least $\frac{1}{3}$.
By a Chernoff bound the probability that $u \in V[G]$ has more than $D$ neighbours is at most $\frac{1}{3n}$ if the constant hidden in the definition of $D'$ is chosen sufficiently large. So with probability at least $\frac{2}{3}$ we have $\Delta(G) \le D$. Similarly, with probability at least $\frac{2}{3}$ we have $\abs{E[G]} \ge \frac{nD'}{2} - O(\sqrt{nD})$ if we choose the constant in the $O$-notation large enough. So with probability at least $\frac{1}{3}$, $G$ is good. Let $r$ be the number of good graphs and enumerate them $G_1,G_2,\ldots,G_r$. The probability that $G = G_i$ is $p^{\abs{E[G_i]}}(1-p)^{\binom{n}{2}-\abs{E[G_i]}}$. Since $G_i$ is good we know that $\abs{E[G_i]} \ge \frac{nD''}{2}$. Hence the probability is at most: \[ p^{nD''/2}(1-p)^{\binom{n}{2}-nD''/2} \le \binom{\binom{n}{2}}{nD''/2}^{-1} \] where the inequality follows from the binomial expansion of $(p+(1-p))^{\binom{n}{2}}$. Hence we see that: \[ \frac{1}{3} \le \sum_{i=1}^r \Pr(G=G_i) \le r\binom{\binom{n}{2}}{\frac{nD''}{2}}^{-1} \] Hence there are at least $\frac{1}{3}\binom{\binom{n}{2}}{nD''/2}$ graphs with vertex set $[n]$ and maximum degree $\le D$. Now the result follows from the following estimate: \[ \binom{\binom{n}{2}}{\frac{nD''}{2}} \ge \binom{\binom{n}{2}}{\floor{\frac{nD}{2}}} \left ( \frac{\binom{n}{2}}{nD''/2} \right )^{\frac{nD''}{2}-\floor{\frac{nD}{2}}} \ge \binom{\binom{n}{2}}{\floor{\frac{nD}{2}}} \left ( \frac{n}{D} \right )^{-O\!\left(n\sqrt{D \log n}\right)} \] \end{proof} As previously we get a bound on $g_v(\mathcal{G}_D)$.
\begin{corollary} \label[corollary]{corRandLowerBound} For the family $\mathcal{G}_D$ of graphs with maximum degree $D$ on $n \ge 2D$ nodes \[ g_v(\mathcal{G}_D) \ge \binom{\floor{n/2}}{\floor{D/2}} \cdot 2^{-O \!\left (\sqrt{D\log n} \cdot \log(n/D) \right )} \] \end{corollary} \begin{proof} By the same argument as for \Cref{corLowerBoundLabelSizeBoundedDeg} we get that \Cref{lemRandLowerBound} implies: \[ g_v(\mathcal{G}_D) \ge \binom{\binom{n}{2}}{\floor{nD/2}}^{1/n} \cdot 2^{-O \!\left (\sqrt{D\log n} \cdot \log(n/D) \right )} = \binom{\floor{n/2}}{\floor{D/2}} \cdot 2^{-O \!\left (\sqrt{D\log n} \cdot \log(n/D) \right )} \] \end{proof} \section{Introduction} A graph $G=(V, E)$ is said to be an \emph{induced universal graph} for a family $\cal F$ of graphs if it contains each graph in $\cal F$ as a vertex-induced subgraph. A graph $H=(V',E')$ is contained in $G$ as a \emph{vertex-induced subgraph} if $V' \subseteq V$ and $E'=\{vw\mid v,w \in V' \wedge vw \in E\}$. Induced universal graphs have been studied since the 1960s~\cite{moon1965minimal,Rado64}, and bounds on the sizes of induced universal graphs have been given for many families of graphs, including general, bipartite~\cite{AlstrupKTZ14}, and bounded arboricity graphs~\cite{adjacencytrees2015}. We later define the classic distributed data structure \emph{adjacency labeling scheme} and describe how it is directly related to induced universal graphs. In Table~\ref{tab:adjacency2} in \Cref{sec:overview} below we give an overview of previous results and results in this paper. \subsection{Overview of new and existing results}\label{sec:overview} We give an overview in Table~\ref{tab:adjacency2} of the dominating existing and new results. All bounds are on sizes of induced universal graphs. In the table, ``P'' refers to a result in this paper, $k=\ceil{D/2}$, and $L = U = \sqrt{D\log n} \cdot \log(n/D)$. The ``A'' and ``B'' cases below represent two different constructions.
The upper bound in ``B'' is a randomized construction, whereas both lower bounds hold for both upper bounds. \begin{table*}[ht] \renewcommand{\arraystretch}{1.3} \small \centering \makebox[0pt][c]{ \begin{tabular}{|c|c|c|c|} \hline \hline \bf Graph family & \bf Lower bound & \bf Upper bound & \bf Lower/Upper\\ \hline \noalign{\vskip 2mm} \hline General& $2^{\frac{n-1}{2}}$ & $O( 2^{\frac{n}{2}})$ & \cite{moon1965minimal}/ \cite{AlstrupKTZ14} \\ \hline Tournaments & $2^{\frac{n-1}{2}}$ & $O( 2^{\frac{n}{2}})$ & \cite{moon1968topics}/\cite{AlstrupKTZ14} \\ \hline Bipartite& $\Omega(2^{\frac{n}{4}})$ & $O( 2^{\frac{n}{4}})$ & \cite{Lozin2007}/\cite{AlstrupKTZ14} \\ \hline \noalign{\vskip 2mm} \hline A: Max degree $D$ & $\Omega((\frac{n}{2eD})^{D/2})$ & $O\!\left(\frac{\min (n,k2^k)}{k!}n^k \right)$ & P/P and~\cite{icalpnoy14} \\ \hline B: Max degree $D$ & $\binom{\floor{n/2}}{\floor{D/2}} \cdot 2^{-O(L)}$ & $ \binom{\floor{n/2}}{\floor{D/2}} \cdot 2^{O(U)}$ & P/P \\ \hline Max degree $D = o\!\left(\sqrt{n}\right)$ or $D \ge \frac{(2/3+\Omega(1))n}{\ln n}$ & $\binom{\floor{n/2}}{\floor{D/2}}\cdot n^{-O(1)}$ & & \cite{mckay1990asymptotic,mckay1991asymptotic} \\ \hline Constant odd degree $D$ & $\Omega(n^{\frac{D}{2}})$ & $O(n^{k-\frac{1}{D}})$ & \cite{Butler_induced-universalgraphs}/\cite{privatealon,AlonCapalbo2008,Esperet2008} \\ \hline Max degree 2& $11 \floor{n/6}$ & $2n-1$& \cite{Esperet2008}/P \\ \hline Acyclic, max degree 2& $\floor{3n/2}$ & $\floor{3n/2}$& P/P \\ \hline A cycle aware of $n$& $n+ \Omega(\log \log n)$ & $n+\log n + O(1)$ & P/P \\ \hline A cycle not aware of $n$& $n + \Omega\!\left(\sqrt[3]{n}\right)$ & $n+ O(\sqrt{n})$ & P/P \\ \hline \noalign{\vskip 2mm} \hline Excluding a fixed minor & $\Omega(n)$ & $n^2 (\log {n})^{O(1)} $ & \cite{gavoille2007shorter} \\ \hline Planar& $\Omega(n)$ &$n^2(\log n)^{O(1)}$ & \cite{gavoille2007shorter} \\ \hline Planar, constant degree & $\Omega(n)$ & $O(n^2)$& \cite{Chung90} \\ \hline Outerplanar & $\Omega(n)$ &
$n (\log n)^{O(1)}$ & \cite{gavoille2007shorter} \\ \hline Outerplanar, constant degree & $\Omega(n)$& $O(n)$& \cite{Chung90}\\ \hline \noalign{\vskip 2mm} \hline Treewidth $l$ & $n2^{\Omega(l)}$& $n (\log \frac{n}{l})^{O(l)} $& \cite{gavoille2007shorter}\\ \hline Constant arboricity $l$ & $\Omega(n^l)$ & $O(n^l)$ & \cite{alstruprauhe}/\cite{adjacencytrees2015} \\ \hline \end{tabular} } \caption{Induced-universal graphs for various families of graphs. ``P'' denotes results in this paper. For the max degree results $k=\ceil{D/2}$. In the result for families of graphs with an excluded minor, the~$O(1)$ term in the exponent depends on the fixed minor excluded.} \label[table]{tab:adjacency2} \end{table*} \subsection{Maximum degree $2$ and maximum degree $D$} Let $g_v(\mathcal{F})$ be the smallest number of vertices in any induced universal graph for a family of graphs $\mathcal{F}$. Let $\mathcal{G}_D$ be the family of graphs with $n$ vertices and maximum degree $D$. In the families of graphs we study in this paper, a graph always has $n$ vertices, unless explicitly stated otherwise. {\bf {\large Maximum degree $2$.}} Butler~\cite{Butler_induced-universalgraphs} shows that $g_v(\mathcal{G}_2)\leq 6.5n$. That was subsequently improved to $g_v(\mathcal{G}_2)\leq 2.5n+O(1)$ by Esperet {\em et al.}~\cite{Esperet2008}, who also show the lower bound $g_v(\mathcal{G}_2)\geq 11 \floor{n/6}$. Esperet {\em et al.}~\cite{Esperet2008} conjecture that $g_v(\mathcal{G}_2)\leq 2n+o(n)$ and raise proving or disproving this as an open problem. We confirm the conjecture by proving $g_v(\mathcal{G}_2)\leq 2n-1$. The $11 \floor{n/6}$ lower bound is based on a family of graphs whose largest component has $3$ vertices.
We show matching $\frac{11}{6}n+\Oo(1)$ upper bounds for the family of graphs in $\mathcal{G}_2$ whose largest component is sufficiently small ($\leq6$ vertices), and for the family of graphs in $\mathcal{G}_2$ whose smallest component is sufficiently large ($\geq10$ vertices). {\bf {\large Maximum degree $D$.}} Let $k=\ceil{D/2}$. To give an upper bound for any value of $D$, Butler~\cite{Butler_induced-universalgraphs} first establishes: \begin{corollary}[\cite{Butler_induced-universalgraphs}]\label[corollary]{Butlersplit} Let $G \in \mathcal{G}_D$ be a graph on $n$ vertices with maximum degree $D$. Then $G$ can be decomposed into $k$ edge-disjoint subgraphs where the maximum degree of each subgraph is at most $2$. \end{corollary} To achieve an upper bound this can be combined with: \begin{theorem}[\cite{Chung90}] \label[theorem]{ChungSplit} Let $\mathcal{F}$ and $\mathcal{Q}$ be two families of graphs and let $G$ be an induced universal graph for $\mathcal{F}$. Suppose that every graph in the family $\mathcal{Q}$ can be edge-partitioned into $\ell$ parts, each of which forms a graph in $\mathcal{F}$. Then $g_v(\mathcal{Q}) \leq |V[G]|^\ell$. \end{theorem} Butler~\cite{Butler_induced-universalgraphs} concludes $g_v(\mathcal{G}_D)\leq (6.5n)^k $. Similarly, Esperet {\em et al.}~\cite{Esperet2008} achieve $g_v(\mathcal{G}_D)\leq (2.5n+O(1))^k$, and we achieve $g_v(\mathcal{G}_D)\leq (2n-1)^k=O(2^k n^k)$. For constant maximum degree $D$, Butler~\cite{Butler_induced-universalgraphs} also shows $g_v(\mathcal{G}_D)=\Omega(n^{D/2})$. When $D$ is even and constant, the bounds hence match asymptotically: $g_v(\mathcal{G}_D)=\Theta(n^{D/2})$. However, for non-constant $D$ we can, using another approach but still building on top of our maximum degree $2$ solution, beat Butler's lower bound for constant degree: for any value of $D$, we prove the upper bound $g_v(\mathcal{G}_D)=O\!\left(\frac{k2^k}{k!}n^k \right)$.
We also give a lower bound for any value of $D$: $\Omega\!\left((\frac{n}{2eD})^{D/2}\right)$. {\bf {\large Constant odd degree.}} A \emph{universal} graph for a family of graphs ${\cal F}$ is a graph that contains each graph from ${\cal F}$ as a subgraph (not necessarily vertex-induced). The challenge is to construct universal graphs with as few edges as possible. A graph has \emph{arboricity} $k$ if the edges of the graph can be partitioned into at most $k$ forests. Graphs with maximum degree $D$ have arboricity bounded by $\floor{\frac{D}{2}}+1$~\cite{chartrand68,Lovasz66}. When $D$ is odd and constant, some improvements have been achieved~\cite{AlonCapalbo2008,Esperet2008} on the above bounds on $g_v(\mathcal{G}_D)$ by arguments involving universal graphs and graphs with bounded arboricity. Let $\mathcal{A}_k$ denote the family of graphs with arboricity at most $k$. \begin{theorem}[\cite{Chung90}] \label[theorem]{Arboricity} Let $G$ be a universal graph for $\mathcal{A}_k$ and $d_i$ the degree of vertex $i$ in $G$. Then $g_v(\mathcal{A}_k) \leq \sum_{i}(d_i+1)^k$. \end{theorem} Alon and Capalbo~\cite{alon2007sparse} describe a universal graph with $n$ vertices of maximum degree $c(D)n^{1-2/D}\log^{4/D}n$ for the family $\mathcal{G}_D$, where $D\geq 3$ and $c(D)$ is a constant. Using this bound in Theorem~\ref{Arboricity}, Esperet {\em et al.}~\cite{Esperet2008} note that for odd $D$ (and hence arboricity $k=\ceil{\frac{D}{2}}$), we get $g_v(\mathcal{G}_D)\leq c_1(D)n^{k-\frac{1}{D}}\log^{2+\frac{2}{D}}n$, for a constant $c_1(D)$.\footnote{In~\cite{Esperet2008} a typo states that the maximum degree for the universal graph in~\cite{alon2007sparse} is $c(D)n^{2-2/D}\log^{4/D}n$.
The theorem in~\cite{alon2007sparse} only states the total number of edges being $c(D)n^{2-2/D}\log^{4/D}n$; however, the maximum degree is $c(D)n^{1-2/D}\log^{4/D}n$~\cite{privatealon}.} Using the slightly better universal graphs from~\cite{AlonCapalbo2008} the maximum degree is reduced to $c(D)n^{1-2/D}$~\cite{privatealon}, giving $g_v(\mathcal{G}_D)\leq c_2(D)n^{k-\frac{1}{D}}$, for a constant $c_2(D)$. Note that using this technique for even values of $D$ would give $g_v(\mathcal{G}_D)\leq c_3(D)n^{\frac{D}{2}+1-\frac{2}{D}}$, for a constant $c_3(D)$, which is asymptotically worse even for constant values of $D$, compared to any of the new upper bounds presented in this paper. In~\cite{alon2010universality} it is stated that the methods in~\cite{AlonCapalbo2008} can be used to achieve $g_v(\mathcal{G}_D)=O(n^{D/2})$ for constant odd values of $D>1$; however, according to~\cite{privatealon} this still has to be checked more carefully, and the hidden constant in the $O$-notation is not small. \subsection{Adjacency labeling schemes and induced universal graphs} An \emph{adjacency labeling scheme} for a given family $\mathcal{F}$ of graphs assigns \emph{labels} to the vertices of each graph in $\mathcal{F}$ such that a \emph{decoder}, given the labels of two vertices from a graph and no other information, can determine whether or not the vertices are adjacent in the graph. The labels are assumed to be bit strings, and the goal is to minimize the maximum label size. A $b$-bit labeling scheme uses at most $b$ bits per label. Information-theoretic studies of adjacency labeling schemes go back to the 1960s~\cite{Breuer66,BF67}, and efficient labeling schemes were introduced in~\cite{KNR92,muller}. For graphs with bounded degree $D$, it was shown in~\cite{BF67} that labels of size $2nD$ can be constructed such that two vertices are adjacent whenever the Hamming distance~\cite{hamming} of their labels is at most $4D-4$.
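The decoding rule of such a Hamming-distance scheme is easy to state in code. The following sketch is our own illustration: it shows only the threshold test quoted above, not the label construction of~\cite{BF67}, and represents bit-string labels as Python integers.

```python
def hamming(a: int, b: int) -> int:
    # Number of bit positions in which the two labels differ.
    return bin(a ^ b).count("1")

def adjacent(label_u: int, label_v: int, D: int) -> bool:
    # Decoding rule: declare the two vertices adjacent when the
    # Hamming distance of their labels is at most 4D - 4.
    return hamming(label_u, label_v) <= 4 * D - 4

# With D = 2 the threshold is 4; these labels differ in 3 positions.
print(adjacent(0b101100, 0b100010, 2))  # True
```

The decoder here uses nothing beyond the two labels themselves, which is exactly the constraint an adjacency labeling scheme imposes.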
A labeling scheme for $\mathcal{F}$ is said to have \emph{unique labels} if no two vertices in the same graph from $\mathcal{F}$ are given the same label. \begin{theorem}[\cite{KNR92}] \label[theorem]{KNRreduction} A family $\mathcal{F}$ of graphs has a $b$-bit adjacency labeling scheme with unique labels iff $g_v(\mathcal{F}) \leq 2^b$. \end{theorem} From a labeling perspective, the above new upper and lower bounds are at most an additive $O(D+\log n)$ term from optimality. \subsection{Better bounds for larger $D$, $D=\Omega(\log^3 n)$} We have another approach which, for large $D$, $D=\Omega(\log^3 n)$, gives better bounds than the ones presented above for constant $D$. The previous best upper bound for such large $D$ was $\binom{n}{\ceil{D/2}} n^{O(1)}$ due to Adjiashvili and Rotbart~\cite{icalpnoy14}. For any $D$ we prove the lower and upper bounds \[\binom{\floor{n/2}}{\floor{D/2}} \cdot 2^{-O \!\left (\sqrt{D\log n} \cdot \log(n/D) \right )} \text{ and } \binom{\floor{n/2}}{\floor{D/2}} \cdot 2^{O \!\left (\sqrt{D\log n} \cdot \log(n/D) \right )},\] where the upper bound is a randomized construction. From a labeling perspective, our bounds are the first to give labels for any value of $D$ that are at most $o(n)$ bits longer than the shortest possible labels. An asymptotic enumeration of the number of $D$-regular graphs due to McKay and Wormald~\cite{mckay1990asymptotic,mckay1991asymptotic} combined with Stirling's formula gives a stronger lower bound of $\binom{\floor{n/2}}{\floor{D/2}} \cdot n^{-O(1)}$ whenever $D = o\left(\sqrt{n}\right)$ or $D > \frac{cn}{\ln n}$ for a constant $c > \frac{2}{3}$. \subsection{Acyclic graphs and cycle graphs} On our way to understanding the family $\mathcal{G}_2$ better, we first examine two other basic families of graphs.
For the family $\mathcal{AC}$ of acyclic graphs on $n$ vertices with maximum degree $2$ we show an upper bound matching exactly the lower bound in~\cite{Esperet2008}, allowing us to conclude $g_v(\mathcal{AC})=\floor{3n/2}$. This lower bound is not explicitly stated in~\cite{Esperet2008}, but follows directly from the construction of the lower bound for $g_v(\mathcal{G}_2)$. We also study the family $\mathcal{C}_n$ of graphs consisting of one cycle of length $\leq n$ (and no other edges or vertices). For this family we show $n+ \Omega(\log \log n)\leq g_v(\mathcal{C}_n) \leq n+\log n + O(1)$. \subsection{Oblivious decoding} From a labeling perspective one can assume all labels have the same length~\cite{AlstrupKTZ14}. Hence, if the decoder does not know what $n$ is, it will always be able to compute $n$ approximately. However, we show that the label size in an optimum labeling scheme can be smaller if the decoder knows $n$ precisely. To be more specific, let $\mathcal{F}_n$ be a family of graphs for each $n=1,2,\ldots$. We show that there is a labeling scheme for $\mathcal F_n$ using $f(n)$ labels that enables a decoder not aware of $n$ to answer adjacency queries iff there is a family of graphs $G_1,G_2,\ldots$ such that $G_n$ is an induced universal graph for $\mathcal F_n$, $|G_n|=f(n)$, and $G_n$ is an induced subgraph of $G_{n+1}$ for every $n$. Next we show that with this extra requirement on the induced universal graph we have $n + \Omega\!\left(\sqrt[3]{n}\right) \leq g_v(\mathcal{C}_n) \leq n+ O(\sqrt{n})$. The lower bound is true for infinitely many $n$, but for specific $n$ it might not hold. For the other problems studied in this paper, the decoder does not need to know $n$, but the lower bounds hold even if it does. To the best of our knowledge this is the first time this relationship between labeling schemes and induced universal graphs has been described and examples have been given where the complexities differ.
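The correspondence between adjacency labeling schemes and induced universal graphs can be made concrete with a toy example (our own illustration, not a construction from this paper): labels are simply vertex indices of an induced universal graph, and the decoder answers queries by consulting the universal graph's edge set alone.

```python
# A 3-vertex induced universal graph for all graphs on 2 vertices:
# the path 0 - 1 - 2 contains K_2 (induced on {0, 1}) as well as the
# edgeless 2-vertex graph (induced on {0, 2}) as induced subgraphs.
universal_edges = {frozenset({0, 1}), frozenset({1, 2})}

def decode(label_u: int, label_v: int) -> bool:
    # The decoder sees only the two labels, never the queried graph.
    return frozenset({label_u, label_v}) in universal_edges

# Embed the one-edge graph on 2 vertices as {0, 1}, and the edgeless
# graph on 2 vertices as {0, 2}; adjacency is recovered from labels.
print(decode(0, 1), decode(0, 2))  # True False
```

With $3$ universal vertices, $2$-bit labels suffice here, matching the direction of \Cref{KNRreduction} that an induced universal graph of size $g_v(\mathcal{F}) \leq 2^b$ yields a $b$-bit scheme.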
\subsection{Related results} For the family of general, undirected graphs on $n$ vertices, Alstrup {\em et al.}~\cite{AlstrupKTZ14} give an induced universal graph with $O(2^{n/2})$ vertices, which matches a lower bound by Moon~\cite{moon1965minimal}. More recently, Alon~\cite{Alonconstant2016} shows the existence of a construction with a better constant factor than the one in~\cite{AlstrupKTZ14}. It follows from~\cite{adjacencytrees2015,alstruprauhe} that $g_v(\mathcal{A}_k)=\Theta(n^k)$ for the family $\mathcal{A}_k$ of graphs with constant arboricity $k$ and $n$ vertices. Using universal graphs constructed by Babai {\em et al.}~\cite{BCEGS82}, Bhatt {\em et al.}~\cite{BCLR89}, and Chung {\em et al.}~\cite{CG78,CG79,CG83,CGP76}, Chung~\cite{Chung90} obtains the best currently known bounds for e.g.~induced universal graphs for planar and outerplanar bounded degree graphs. Labeling schemes are widely used and well studied in the theory community: Chung~\cite{Chung90} gives labels of size $\log n+O(\log \log n)$ for adjacency labeling in trees, which was improved to $\log n + O(\log^* n)$~\cite{alstruprauhe} and in~\cite{bonichon2006short,Chung90,Fraigniaud2009randomized,fraigniaudkorman2,KMS02} to $\log n + \Theta(1)$ for various special cases of trees. Finally it was improved to $\log n + \Theta(1)$ for general trees~\cite{adjacencytrees2015}. Using labeling schemes, it is possible to avoid costly access to large global tables and instead only perform local and distributed computations. Such properties are used in applications such as XML search engines~\cite{AKM01}, network routing and distributed algorithms~\cite{Cowen01,EilamGP03,Gavoille01,ThZw05}, dynamic and parallel settings~\cite{CohenKaplan2010,dynamicKormanP07}, and various other applications~\cite{Korman2010,peleg2,SK85}. A survey on induced universal graphs and adjacency labeling can be found in~\cite{AlstrupKTZ14}. See~\cite{gavoillepeleg} for a survey on labeling schemes for various queries.
\section{Maximum degree $2$ upper bounds}\label{sec:max2upper} In this section we prove upper bounds on $g_v(\mathcal{G}_2)$, and on $g_v(\mathcal{F})$ for several special families $\mathcal{F}\subseteq\mathcal{G}_2$. \subsection{$2n-1$ upper bound for $g_v(\mathcal{G}_2)$} \pgfdeclarelayer{background} \pgfdeclarelayer{foreground} \pgfsetlayers{background,main,foreground} \newenvironment{tikzpicture-Un}[1][1]{% \begin{tikzpicture}[scale=0.5] \pgfmathtruncatemacro{\n}{#1}; \pgfmathtruncatemacro{\vmax}{2*\n-2}; \begin{scope}[ vertex style/.style={ draw, circle, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected vertex style/.style={ draw, circle, fill=blue!20, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected edge style/.style={ rounded corners,line width=1.5mm,blue!20,cap=round% } ] \foreach \i in {0,...,\vmax}{% \pgfmathtruncatemacro{\x}{div(\i+1,2)}; \pgfmathtruncatemacro{\y}{mod(\i+1,2)*(2*mod(\x,2)-1)}; \node[vertex style] (v\i) at (\x,\y) {\tiny $\i$}; }; }{% \foreach \i in {0,...,\vmax}{% \pgfmathtruncatemacro{\nexti}{\i+1}; \ifthenelse{\i=2 \OR \nexti>\vmax \OR \nexti=2}{% }{% \draw[thick,color=red] (v\i) -- (v\nexti); } \pgfmathtruncatemacro{\nexti}{4-mod(\i,2)+\i} \ifthenelse{\i=2 \OR \nexti>\vmax \OR \nexti=2}{% }{% \pgfmathtruncatemacro{\imod}{mod(\i,2)} \ifthenelse{\imod=1}{% \draw[thick,color=ForestGreen] (v\i) -- (v\nexti); }{% \draw[thick,color=blue] (v\i) -- (v\nexti); } } }; \end{scope} \end{tikzpicture} }% \NewEnviron{tikzpicture-Un-induced}[2]{ \begin{tikzpicture-Un}[#1] \BODY \begin{pgfonlayer}{background} \foreach \i in {#2}{% \node[selected vertex style] at (v\i) {\tiny $\i$}; } \foreach \i in {#2}{% \ifthenelse{\i=2}{% }{% \foreach \j in {#2}{% \ifthenelse{\i<\j \AND \NOT \j=2}{% \pgfmathtruncatemacro{\nexti}{\i+1}; \ifthenelse{\nexti=\j}{% \draw[selected edge style] (v\i.center) -- (v\j.center); }{} \pgfmathtruncatemacro{\nexti}{4-mod(\i,2)+\i} \ifthenelse{\nexti=\j}{% \draw[selected edge style] (v\i.center) -- 
(v\j.center); }{} }{ } } } } \end{pgfonlayer} \end{tikzpicture-Un} }% \newcommand{\tikzUnInduced}[2][]{ \begin{tikzpicture-Un-induced}{#2}{#1} \end{tikzpicture-Un-induced} } Here we prove that there exists an induced universal graph with $2n-1$ vertices and $4n-9$ edges for the family $\mathcal{G}_2$ of all graphs with $n$ vertices and maximum degree $2$. \begin{definition}\label[definition]{def:Un} Let \begin{align*} s(x)&:= \begin{cases} x+4&\text{ if }x\equiv0\pmod{2} \\ x+3&\text{ otherwise} \end{cases} \end{align*} and for any $n\in\mathbb{N}_0$ let $U_n$ be the graph with vertex set $[2n-1]$ and an edge $(u,v)$ iff $u,v\neq2$ and either $\abs{u-v}=1$ or $u=s(v)$ or $v=s(u)$. (See~\cref{fig:Un-example}). \end{definition} \begin{figure}[h!] \begin{center} \tikzUnInduced{1}~ \tikzUnInduced{2}~ \tikzUnInduced{3}~ \tikzUnInduced{4}~ \tikzUnInduced{5}~ \tikzUnInduced{6} \\ ~\\ \tikzUnInduced{26} \caption{$U_1,\ldots,U_6$, and $U_{26}$ with each $(u,v)$ colored \textcolor{red}{red}/\textcolor{ForestGreen}{green}/\textcolor{blue}{blue} if $\abs{u-v}=1/3/4$.} \label{fig:Un-example} \end{center} \end{figure} \begin{theorem} The graph family $U_0, U_1, \ldots $ has the property that for $n\in\mathbb{N}_0$ \begin{enumerate}[label=(\alph*)] \item $U_n$ has $\max\set{0,2n-1}$ vertices, and $\max\set{0,n-1,3n-5,4n-9}$ edges. \item $U_n$ is an induced subgraph of $U_{n+1}$. \item $U_n$ is an induced universal graph for the family of graphs with $n$ vertices and maximum degree $2$ \end{enumerate} \end{theorem} \begin{proof} If $n=0$, $U_n$ has $0$ vertices. Otherwise the number of vertices in $U_n$ is trivially $2n-1$ from the definition. It is also clear that $U_0$ and $U_1$ each have $0$ edges, $U_2$ has $1$ edge, $U_3$ has $4$ edges, and $U_4$ has $7$ edges. Finally for $n>4$, $U_{n}$ has exactly $4$ edges more than $U_{n-1}$, and therefore has $4(n-4)+7 = 4n-9$ edges, as desired. 
Since the existence of an edge $(u,v)$ does not depend on $n$, the subgraph of $U_{n+1}$ induced by all vertices with label $\leq2n-2$ is exactly $U_n$. For the final part, consider a graph $G\in\mathcal{G}_2$. We need to show that $G$ is an induced subgraph of $U_n$. The proof is by induction on the number of $P_{\set{\geq3}}$ and $C_{\set{\geq4}}$ components in $G$. If there are no such components, all components are either $P_1$, $P_2$, or $C_3$. Suppose therefore that $G \simeq k_1\times{}P_1 + k_2\times{}P_2 + k_3\times{}C_3$ for some $k_1,k_2,k_3\in\mathbb{N}_0$, and let $n_1=k_1$, $n_2=2k_2$, and $n_3=3k_3$ (so $n=n_1+n_2+n_3$). Further, assume that $n>1$ since otherwise it is trivial. Informally, we will show that assigning labels greedily, smallest label first, in the order $C_3$, $P_2$, $P_1$ is sufficient. Formally, let $\set{I_3,I_2,I_1}$ be the partition of $\set{-1,\ldots,2n-2}$ into parts of size $2n_3$, $2n_2$, and $2n_1$ such that $i_3<i_2<i_1$ for all $(i_3,i_2,i_1)\in I_3\times{}I_2\times{}I_1$, and let $A_3:=I_3\setminus\set{-1,2}$, $A_2:=I_2\setminus\set{-1,2}$, and $A_1:=(I_1\cup\set{2})\setminus\set{-1}$. Then $\set{A_3,A_2,A_1}$ is a partition of $V[U_n]$. Now let \begin{itemize} \item $V_3:=\set{i\in A_3\mathrel{}\middle\vert\mathrel{} i\in\set{0,1,4}\pmod{6}}$ \item $V_2:=\set{i\in A_2\mathrel{}\middle\vert\mathrel{} i-(6n_3-1)\in\set{1,2,4,7}\pmod{8}}$ \item $V_1:= \begin{cases} \emptyset&\text{if }n_1=0\\ \set{2}&\text{if }n_1=1\\ \set{2,2(n-n_1)}\cup\set{i\in A_1\mathrel{}\middle\vert\mathrel{} i\equiv1\pmod{2}\wedge i\geq2(n-n_1)+3}&\text{otherwise} \end{cases}$ \end{itemize} Let $V=V_1\cup{}V_2\cup{}V_3$ and let $G'$ be the subgraph of $U_n$ induced by $V$. We claim that $G\simeq{}G'$ (see~\cref{fig:V1V2V3}). \begin{figure}[h!] 
\begin{center} \begin{tikzpicture-Un-induced}{24}{ 0,1,4, 6,7,10, 12,13,16, 18,19,22, 24,25, 27,30, 32,33, 35,38, 2,40,43,45} \draw[very thick,dotted,red] ($(v0.north west)+(-0.15,1.15)$) -- ($(v3.north west)+(-0.15,0.15)$) -- ($(v3.north west)+(-0.15,1.15)$) -- ($(v22.north east)+(+0.15,0.15)$) -- ($(v21.south east)+(+0.15,-1.15)$) -- ($(v0.south west)+(-0.15,-0.15)$) -- cycle; \draw[very thick,dotted,red] ($(v24.south west)+(-0.15,-0.15)$) rectangle ($(v38.north east)+(0.15,0.15)$); \draw[very thick,dotted,red] ($(v2.north west)+(-0.15,0.75)$) -- ($(v46.north east)+(0.15,0.75)$) -- ($(v45.south east)+(0.15,-1.15)$) -- ($(v40.south west)+(-0.15,-0.15)$) -- ($(v39.north west)+(-0.15,1.5)$) -- ($(v2.north east)+(0.15,0.5)$) -- ($(v2.south east)+(0.15,-0.15)$) -- ($(v2.south west)+(-0.15,-0.15)$) -- cycle; \end{tikzpicture-Un-induced} \caption{$U_{24}$ with $4\times{}P_1+4\times{}P_2+4\times{}C_3$ embedded. The dotted red boxes represent $A_1$, $A_2$, and $A_3$ respectively. $V_1$, $V_2$, and $V_3$ consist of the marked vertices in each of the dotted red boxes.} \label{fig:V1V2V3} \end{center} \end{figure} \noindent Now it follows from~\cref{def:Un} that for $v\in V$ the neighbors of $v$ in $G'$ are: \begin{align*} N(v) &:= (V\setminus\set{2})\cap \begin{cases} \emptyset &\text{ if }v=2\\ \set{v-4,v-3,v-1,v+1,v+4} &\text{ if }v\neq2 \wedge v\equiv0\pmod{2}\\ \set{v-1,v+1,v+3} &\text{ otherwise} \end{cases} \end{align*} To see that $G\simeq G'$, first note that $\abs{V_1}=n_1$ and $N(v_1)=\emptyset$ for all $v_1\in{}V_1$. Thus the component of $v_1$ in $G'$ is a $P_1$. Second, note that $\abs{V_2}=n_2$ and that each vertex $v_2\in{}V_2$ has the form $v_2=b_2+k$ with $b_2=(6n_3-1)+8j\equiv1\pmod{2}$ for some $j\in[\floor{\frac{n_2}{4}}], k\in\set{1,2,4,7}$. 
Now $N(b_2+1)=\set{b_2+2}\subseteq V_2$, $N(b_2+2)=\set{b_2+1}\subseteq V_2$, $N(b_2+4)=\set{b_2+7}\subseteq V_2$, and $N(b_2+7)=\set{b_2+4}\subseteq V_2$, so $v_2$ has exactly one neighbor in $V$, and this neighbor is also in $V_2$. Thus the component of $v_2$ in $G'$ is a $P_2$. Third, note that $\abs{V_3}=n_3$, and that each vertex $v_3\in{}V_3$ has the form $v_3=b_3+k$ with $b_3=6j\equiv0\pmod{2}$ for some $j\in[\frac{n_3}{3}], k\in\set{0,1,4}$. Now $N(b_3+0)=\set{b_3+1,b_3+4}\subseteq V_3$, $N(b_3+1)=\set{b_3+0,b_3+4}\subseteq V_3$, and $N(b_3+4)=\set{b_3+0,b_3+1}\subseteq V_3$, so the component of $v_3$ in $G'$ consists of the $3$ vertices $\set{b_3,b_3+1,b_3+4}$, which form a $C_3$. Thus $G'\simeq n_1\times{}P_1 + \frac{n_2}{2}\times{}P_2 + \frac{n_3}{3}\times{}C_3 \simeq k_1\times{}P_1 + k_2\times{}P_2 + k_3\times{}C_3 \simeq G$. For the induction case, suppose $G$ has a component $X$ which is either a $P_k$ for some $k\geq3$, or a $C_k$ for some $k\geq4$. In either case, let $I^-:=[2(n-k)-1]$ and $I^+:=[2n-1]\setminus{}I^-$. Then $I^-$ induces a $U_{n-k}$ subgraph in $U_n$, which by induction has $G-X$ as an induced subgraph. Thus all we need to show is that we can extend this to an embedding of $G$ by using only vertices from $I^+$ to embed $X$ (see~\cref{fig:VP-VC}).
\begin{figure}[h!]% \begin{center}% \begin{tikzpicture-Un-induced}{6}{1,4,6,8,9,10} \draw[very thick,dotted,red] ($(v0.south west)+(-1.15,-0.15)$) rectangle ($(v0.north east)+(-0.85,2.15)$); \draw[very thick,dotted,green] ($(v0.south west)+(-0.15,-0.15)$) rectangle ($(v10.north east)+(0.15,0.15)$); \end{tikzpicture-Un-induced} ~~~ \begin{tikzpicture-Un-induced}{7}{3,6,8,10,11,12} \draw[very thick,dotted,red] ($(v0.south west)+(-0.15,-0.15)$) rectangle ($(v0.north east)+(0.15,2.15)$); \draw[very thick,dotted,green] ($(v2.north west)+(-0.15,0.15)$) rectangle ($(v12.south east)+(0.15,-0.15)$); \end{tikzpicture-Un-induced} ~~~ \begin{tikzpicture-Un-induced}{10}{9,12,14,16,17,18} \draw[very thick,dotted,red] ($(v0.south west)+(-0.15,-0.15)$) rectangle ($(v6.north east)+(0.15,0.15)$); \draw[very thick,dotted,green] ($(v8.south west)+(-0.15,-0.15)$) rectangle ($(v18.north east)+(0.15,0.15)$); \end{tikzpicture-Un-induced}% \\ ~ \\ \begin{tikzpicture-Un-induced}{6}{3,4,6,8,9,10} \draw[very thick,dotted,red] ($(v0.south west)+(-1.15,-0.15)$) rectangle ($(v0.north east)+(-0.85,2.15)$); \draw[very thick,dotted,green] ($(v0.south west)+(-0.15,-0.15)$) rectangle ($(v10.north east)+(0.15,0.15)$); \end{tikzpicture-Un-induced} ~~~ \begin{tikzpicture-Un-induced}{7}{5,6,8,10,11,12} \draw[very thick,dotted,red] ($(v0.south west)+(-0.15,-0.15)$) rectangle ($(v0.north east)+(0.15,2.15)$); \draw[very thick,dotted,green] ($(v2.north west)+(-0.15,0.15)$) rectangle ($(v12.south east)+(0.15,-0.15)$); \end{tikzpicture-Un-induced} ~~~ \begin{tikzpicture-Un-induced}{10}{11,12,14,16,17,18} \draw[very thick,dotted,red] ($(v0.south west)+(-0.15,-0.15)$) rectangle ($(v6.north east)+(0.15,0.15)$); \draw[very thick,dotted,green] ($(v8.south west)+(-0.15,-0.15)$) rectangle ($(v18.north east)+(0.15,0.15)$); \end{tikzpicture-Un-induced}% \\ \caption{$P_6$ (top) and $C_6$ (bottom) embedded in $U_{6}$ (left), $U_{7}$ (middle), and $U_{10}$ (right). 
In each case the dotted red and green boxes represent $I^-$ and $I^+$ respectively, and $V^P$ or $V^C$ is the set of marked vertices in the green box. Notice that the contents of the red boxes are $U_0$ (left), $U_1$ (middle), and $U_4$ (right).} \label{fig:VP-VC} \end{center} \end{figure} \noindent Now let \begin{itemize} \item $V^P:=\set{2(n-k)+1, 2n-3}\cup\set{i\in I^+\mathrel{}\middle\vert\mathrel{} i\geq 2(n-k)+4 \wedge i\equiv0\pmod{2}}$ \item $V^C:=\set{2(n-k)+3, 2n-3}\cup\set{i\in I^+\mathrel{}\middle\vert\mathrel{} i\geq 2(n-k)+4 \wedge i\equiv0\pmod{2}}$ \end{itemize} Now for $k\geq3$ the subgraph induced by $V^P$ in $U_n$ is $P_k$, and for all $v\in{}V^P$ all neighbors of $v$ are in $I^+$. Similarly, for $k\geq4$ the subgraph induced by $V^C$ in $U_n$ is $C_k$, and for all $v\in{}V^C$ all neighbors of $v$ are in $I^+$. Thus, either $V^P$ or $V^C$ can be used to extend the embedding of $G-X$ in $U_{n-k}$ to an embedding of $G$ in $U_n$. \end{proof} \subsection{$\frac{11}{6}n+\Oo(1)$ upper bound when all components are small} Esperet {\em et al.}~\cite{Esperet2008} showed that $g_v(\mathcal{G}_2)\geq 11 \floor{n/6}$ by considering a specific family of graphs whose largest component had $3$ vertices. We used the same idea in our proof of \Cref{thm:pathlower}. A natural attempt to improve the lower bound would be to include larger components. However, as the following shows, considering components with $4$, $5$, or $6$ vertices is not sufficient. \begin{figure}[h!]
\begin{tikzpicture*}{\textwidth} \begin{scope}[every node/.append style={ draw, circle, minimum size=2mm, inner sep=0pt, outer sep=0pt% }] \node (v0) at (1,1) {}; \node (v1) at (2,1) {}; \node (v2) at (3,1) {}; \node (v3) at (1,2) {}; \node (v4) at (2,2) {}; \node (v5) at (3,2) {}; \node (v6) at (1,3) {}; \node (v7) at (2,3) {}; \node (v8) at (3,3) {}; \node (v9) at (1,4) {}; \node (v10) at (3,4) {}; \node (v11) at (1,5) {}; \node (v12) at (2,5) {}; \node (v13) at (3,5) {}; \node (v14) at (1,6) {}; \node (v15) at (2,6) {}; \node (v16) at (3,6) {}; \node (v17) at (1,7) {}; \node (v18) at (2,7) {}; \node (v19) at (3,7) {}; \draw (v0) -- (v1); \draw (v1) -- (v2); \draw (v0) -- (v3); \draw (v1) -- (v3); \draw (v2) -- (v4); \draw (v2) -- (v5); \draw (v3) -- (v4); \draw (v3) -- (v6); \draw (v4) -- (v6); \draw (v5) -- (v7); \draw (v5) -- (v8); \draw (v6) -- (v7); \draw (v7) -- (v8); \draw (v7) -- (v9); \draw (v7) -- (v10); \draw (v9) -- (v12); \draw (v10) -- (v12); \draw (v11) -- (v12); \draw (v11) -- (v14); \draw (v12) -- (v13); \draw (v12) -- (v14); \draw (v13) -- (v15); \draw (v13) -- (v16); \draw (v14) -- (v17); \draw (v15) -- (v16); \draw (v15) -- (v17); \draw (v16) -- (v18); \draw (v16) -- (v19); \draw (v17) -- (v18); \draw (v18) -- (v19); \node (v20) at (1+4,1) {}; \node (v21) at (2+4,1) {}; \node (v22) at (3+4,1) {}; \node (v23) at (1+4,2) {}; \node (v24) at (2+4,2) {}; \node (v25) at (3+4,2) {}; \node (v26) at (1+4,3) {}; \node (v27) at (2+4,3) {}; \node (v28) at (3+4,3) {}; \node (v29) at (1+4,4) {}; \node (v30) at (3+4,4) {}; \node (v31) at (1+4,5) {}; \node (v32) at (2+4,5) {}; \node (v33) at (3+4,5) {}; \node (v34) at (1+4,6) {}; \node (v35) at (2+4,6) {}; \node (v36) at (3+4,6) {}; \node (v37) at (1+4,7) {}; \node (v38) at (2+4,7) {}; \node (v39) at (3+4,7) {}; \draw (v20) -- (v21); \draw (v21) -- (v22); \draw (v20) -- (v23); \draw (v21) -- (v23); \draw (v22) -- (v24); \draw (v22) -- (v25); \draw (v23) -- (v24); \draw (v23) -- (v26); \draw (v24) -- (v26); 
\draw (v25) -- (v27); \draw (v25) -- (v28); \draw (v26) -- (v27); \draw (v27) -- (v28); \draw (v27) -- (v29); \draw (v27) -- (v30); \draw (v29) -- (v32); \draw (v30) -- (v32); \draw (v31) -- (v32); \draw (v31) -- (v34); \draw (v32) -- (v33); \draw (v32) -- (v34); \draw (v33) -- (v35); \draw (v33) -- (v36); \draw (v34) -- (v37); \draw (v35) -- (v36); \draw (v35) -- (v37); \draw (v36) -- (v38); \draw (v36) -- (v39); \draw (v37) -- (v38); \draw (v38) -- (v39); \node (v40) at (1+14,1) {}; \node (v41) at (2+14,1) {}; \node (v42) at (3+14,1) {}; \node (v43) at (1+14,2) {}; \node (v44) at (2+14,2) {}; \node (v45) at (3+14,2) {}; \node (v46) at (1+14,3) {}; \node (v47) at (2+14,3) {}; \node (v48) at (3+14,3) {}; \node (v49) at (1+14,4) {}; \node (v50) at (3+14,4) {}; \node (v51) at (1+14,5) {}; \node (v52) at (2+14,5) {}; \node (v53) at (3+14,5) {}; \node (v54) at (1+14,6) {}; \node (v55) at (2+14,6) {}; \node (v56) at (3+14,6) {}; \node (v57) at (1+14,7) {}; \node (v58) at (2+14,7) {}; \node (v59) at (3+14,7) {}; \draw (v40) -- (v41); \draw (v41) -- (v42); \draw (v40) -- (v43); \draw (v41) -- (v43); \draw (v42) -- (v44); \draw (v42) -- (v45); \draw (v43) -- (v44); \draw (v43) -- (v46); \draw (v44) -- (v46); \draw (v45) -- (v47); \draw (v45) -- (v48); \draw (v46) -- (v47); \draw (v47) -- (v48); \draw (v47) -- (v49); \draw (v47) -- (v50); \draw (v49) -- (v52); \draw (v50) -- (v52); \draw (v51) -- (v52); \draw (v51) -- (v54); \draw (v52) -- (v53); \draw (v52) -- (v54); \draw (v53) -- (v55); \draw (v53) -- (v56); \draw (v54) -- (v57); \draw (v55) -- (v56); \draw (v55) -- (v57); \draw (v56) -- (v58); \draw (v56) -- (v59); \draw (v57) -- (v58); \draw (v58) -- (v59); \node (v60) at (11,1) {}; \node (v61) at (11,2) {}; \node (v62) at (10,3) {}; \node (v63) at (11,3) {}; \node (v64) at (12,3) {}; \node (v65) at (10,4) {}; \node (v66) at (12,4) {}; \node (v67) at ( 9,5) {}; \node (v68) at (13,5) {}; \node (v69) at (10,6) {}; \node (v70) at (12,6) {}; \node (v71) at (10,7) {}; \node 
(v72) at (11,7) {}; \node (v73) at (12,7) {}; \node (v74) at (11,8) {}; \node (v75) at (11,9) {}; \draw (v60) -- (v61); \draw (v61) -- (v62); \draw (v61) -- (v63); \draw (v61) -- (v64); \draw (v62) -- (v65); \draw (v63) -- (v65); \draw (v63) -- (v66); \draw (v64) -- (v66); \draw (v65) -- (v66); \draw (v65) -- (v67); \draw (v66) -- (v68); \draw (v67) -- (v69); \draw (v68) -- (v70); \draw (v69) -- (v70); \draw (v69) -- (v71); \draw (v69) -- (v72); \draw (v70) -- (v72); \draw (v70) -- (v73); \draw (v71) -- (v74); \draw (v72) -- (v74); \draw (v73) -- (v74); \draw (v74) -- (v75); \node (v76) at (11,13) {}; \node (v77) at ( 8,10) {}; \node (v78) at ( 9,10) {}; \node (v79) at ( 7,11) {}; \node (v80) at ( 8,11) {}; \node (v81) at (10,11) {}; \node (v82) at ( 8,12) {}; \node (v83) at ( 9,12) {}; \draw (v77) -- (v78); \draw (v77) -- (v79); \draw (v78) -- (v80); \draw (v78) -- (v81); \draw (v78) -- (v83); \draw (v79) -- (v80); \draw (v79) -- (v82); \draw (v81) -- (v83); \draw (v82) -- (v83); \node (v84) at (22- 8,10) {}; \node (v85) at (22- 9,10) {}; \node (v86) at (22- 7,11) {}; \node (v87) at (22- 8,11) {}; \node (v88) at (22-10,11) {}; \node (v89) at (22- 8,12) {}; \node (v90) at (22- 9,12) {}; \draw (v84) -- (v85); \draw (v84) -- (v86); \draw (v85) -- (v87); \draw (v85) -- (v88); \draw (v85) -- (v90); \draw (v86) -- (v87); \draw (v86) -- (v89); \draw (v88) -- (v90); \draw (v89) -- (v90); \draw (v75) -- (v78); \draw (v75) -- (v85); \draw (v76) -- (v83); \draw (v76) -- (v90); \node (v91) at ( 2, 9) {}; \node (v92) at ( 6, 9) {}; \node (v93) at ( 4,10) {}; \node (v94) at ( 3,11) {}; \node (v95) at ( 5,11) {}; \node (v96) at ( 4,12) {}; \node (v97) at ( 2,13) {}; \node (v98) at ( 6,13) {}; \node (v99) at ( 1,11) {}; \draw (v91) -- (v92); \draw (v91) -- (v93); \draw (v91) -- (v97); \draw (v92) -- (v93); \draw (v93) -- (v94); \draw (v93) -- (v95); \draw (v94) -- (v96); \draw (v95) -- (v96); \draw (v96) -- (v97); \draw (v96) -- (v98); \draw (v97) -- (v98); \draw (v97) -- (v99); 
\node (v100) at (22- 2, 9) {}; \node (v101) at (22- 6, 9) {}; \node (v102) at (22- 4,10) {}; \node (v103) at (22- 3,11) {}; \node (v104) at (22- 5,11) {}; \node (v105) at (22- 4,12) {}; \node (v106) at (22- 2,13) {}; \node (v107) at (22- 6,13) {}; \node (v108) at (22- 1,11) {}; \draw (v100) -- (v101); \draw (v100) -- (v102); \draw (v100) -- (v106); \draw (v101) -- (v102); \draw (v102) -- (v103); \draw (v102) -- (v104); \draw (v103) -- (v105); \draw (v104) -- (v105); \draw (v105) -- (v106); \draw (v105) -- (v107); \draw (v106) -- (v107); \draw (v106) -- (v108); \draw (v79) -- (v92); \draw (v79) -- (v98); \draw (v86) -- (v101); \draw (v86) -- (v107); \node (v109) at (19,4) {}; \end{scope} \end{tikzpicture*} \caption{This graph on $110$ nodes embeds all graphs on $60$ nodes whose maximum degree is $2$ and whose largest component has size at most $6$.} \label{fig:116-small} \end{figure} \begin{theorem} For any $n\in\mathbb{N}$ there exists a graph with $\frac{11}{6}n+\mathcal{O}(1)$ vertices, that contains as induced subgraphs all graphs on $n$ vertices whose maximum degree is $2$ and whose largest component has at most $6$ vertices. \end{theorem} \begin{proof} Let $G$ be a graph with $n$ vertices and maximum degree $2$ whose largest component has at most $6$ vertices. Let $H$ be the graph on $110$ vertices depicted in Figure~\ref{fig:116-small}. It is easy to see by inspection that this graph has each of $60\times P_1$, $30\times P_2$, $20\times P_3$, $20\times C_3$, $15\times P_4$, $15\times C_4$, $12\times P_5$, $12\times C_5$, $10\times P_6$, and $10\times C_6$ as induced subgraphs. Now, any graph with the desired properties whose components do not include all the components of one of these graphs has at most $(60-1)+(60-2)+(60-3)+(60-3)+(60-4)+(60-4)+(60-5)+(60-5)+(60-6)+(60-6)=561$ vertices, and can be embedded in at most $10$ copies of $H$. The whole of $G$ can therefore be embedded in at most $c=\left\lfloor\frac{n}{60}\right\rfloor+10$ copies of $H$. 
The total number of vertices in $c\times H$ is $110c\leq\frac{11}{6}n+1100$, which is $\frac{11}{6}n+\mathcal{O}(1)$ as desired. \end{proof} A more careful analysis shows that $\left\lceil\frac{n}{60}\right\rceil$ copies of $H$ are always sufficient to embed any $G\in\mathcal{G}_2$ with $n$ vertices, which in particular means that the bound of $\frac{11}{6}n$ vertices is achieved for any such graph with $n$ divisible by $60$. We do not know whether the above construction can be extended to handle components of size $7$ or more; doing so would involve constructing a graph with $770$ vertices. An interesting open question is the following: Is there a function $f$ such that for all $n,s\in\mathbb{N}$ the family of graphs with $n$ vertices, maximum degree $2$, and maximum component size $s$ has an induced universal graph with at most $\frac{11}{6}n+f(s)$ vertices? \subsection{$\frac{11}{6}n+\Oo(1)$ upper bound when all components are large} Since it appears difficult to improve the lower bound by considering only small components, the next natural approach would be to consider only large components. As the following upper bound shows, this is also unlikely to succeed.
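The $\frac{11}{6}$ factor in the upper bound of this subsection arises from composing three reductions that appear in the proof below: closing paths into even cycles (a factor of at most $\frac{11}{10}$ in the number of vertices), embedding graphs of even cycles into family $A$ (a factor of $\frac{3}{2}$), and passing from family $A$ to family $B$ with its extra red vertices (a factor of $\frac{10}{9}$). The following sketch is only illustrative bookkeeping (the variable names are ours, and the $\mathcal{O}(1)$ terms are ignored); it verifies that the three factors multiply to exactly $\frac{11}{6}$.

```python
from fractions import Fraction

# Multiplicative overheads used in the upper-bound proof of this
# subsection (illustrative bookkeeping only; O(1) terms ignored):
paths_to_even_cycles = Fraction(11, 10)  # close paths into even cycles
even_cycles_into_A = Fraction(3, 2)      # embed even-cycle graphs in family A
A_to_B = Fraction(10, 9)                 # add the red vertices of family B

total_overhead = paths_to_even_cycles * even_cycles_into_A * A_to_B
assert total_overhead == Fraction(11, 6)
```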
\begin{figure}[h] \begin{tikzpicture*}{\textwidth} \pgfmathtruncatemacro{\cols}{13}; \pgfmathtruncatemacro{\p}{1}; \begin{scope}[every node/.append style={ draw, circle, minimum size=2mm, inner sep=0pt, outer sep=0pt% }, vertex style/.style={ draw, circle, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected vertex style/.style={ draw, circle, fill=blue!20, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected edge style/.style={ rounded corners,line width=1.5mm,blue!20,cap=round% } ] \node[vertex style] (v01) at (0,2) {}; \foreach \i in {1,...,\cols}{% \node[vertex style] (v\i0) at (-1+2*\i,0) {}; \node[vertex style] (v\i1) at ( 0+2*\i,2) {}; \node[vertex style] (v\i2) at (-1+2*\i,4) {}; }; \foreach \i in {1,...,\cols}{% \pgfmathtruncatemacro{\previ}{\i-1}; \draw[thick,color=blue] (v\i0) -- (v\previ1) -- (v\i2); \ifthenelse{\i>1}{% \draw[thick,color=blue] (v\previ0) -- (v\i0); \draw[thick,color=blue] (v\previ2) -- (v\i2); }{% } \draw[thick,color=blue] (v\i0) -- (v\i1) -- (v\i2); \pgfmathtruncatemacro{\imod}{mod(\i,\p)}; \ifthenelse{\imod=0}{% \draw[dashed] ($(v\i0.south east)+(0.25,-1.25)$) -- ($(v\i2.north east)+(0.25,1.25)$); }{% } }; \end{scope} \end{tikzpicture*} \caption{The graph with this repeated pattern embeds in the first $\frac{3}{2}n-2$ vertices all graphs on $n$ vertices whose components are all even cycles.} \label{fig:32-even} \end{figure} \begin{figure}[h] \begin{tikzpicture*}{\textwidth} \pgfmathtruncatemacro{\cols}{13}; \pgfmathtruncatemacro{\p}{3}; \begin{scope}[every node/.append style={ draw, circle, minimum size=2mm, inner sep=0pt, outer sep=0pt% }, vertex style/.style={ draw, circle, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected vertex style/.style={ draw, circle, fill=blue!20, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected edge style/.style={ rounded corners,line width=1.5mm,blue!20,cap=round% } ] \node[vertex style] (v01) at (0,2) {}; \foreach \i in {1,...,\cols}{% \node[vertex style] (v\i0) at 
(-1+2*\i,0) {}; \node[vertex style] (v\i1) at ( 0+2*\i,2) {}; \node[vertex style] (v\i2) at (-1+2*\i,4) {}; \pgfmathtruncatemacro{\imod}{mod(\i,\p)}; \pgfmathtruncatemacro{\itop}{\i+1}; \ifthenelse{\imod=1 \AND \itop<\cols}{% \node[vertex style,color=red] (v\i3) at (2+2*\i,3.25) {}; }{% } }; \foreach \i in {1,...,\cols}{% \pgfmathtruncatemacro{\previ}{\i-1}; \draw[thick,color=blue] (v\i0) -- (v\previ1) -- (v\i2); \ifthenelse{\i>1}{% \draw[thick,color=blue] (v\previ0) -- (v\i0); \draw[thick,color=blue] (v\previ2) -- (v\i2); }{% } \draw[thick,color=blue] (v\i0) -- (v\i1) -- (v\i2); \pgfmathtruncatemacro{\imod}{mod(\i,\p)}; \pgfmathtruncatemacro{\itop}{\i+2}; \ifthenelse{\imod=1 \AND \itop<\cols}{% \draw[thick,color=red] (v\i2) -- (v\i3); \draw[thick,color=red] (v\i3) -- (v\itop1); \pgfmathtruncatemacro{\inext}{\i+3}; \draw[thick,color=red] (v\i3) -- (v\inext2); \draw[thick,color=red] (v\i3) -- (v\i1); }{% } \ifthenelse{\imod=0}{% \draw[dashed] ($(v\i0.south east)+(0.25,-1.25)$) -- ($(v\i2.north east)+(0.25,1.25)$); }{% } }; \end{scope} \end{tikzpicture*} \caption{The graph with this repeated pattern embeds in the first $\frac{11}{6}n+\mathcal{O}(1)$ vertices all graphs on $n$ vertices whose maximum degree is $2$ and whose smallest component has size at least $10$.} \label{fig:116-large} \end{figure} \begin{theorem} For any $n\in\mathbb{N}$ there exists a graph with at most $\frac{11}{6}n+\mathcal{O}(1)$ vertices that contains as induced subgraphs all graphs on $n$ vertices whose maximum degree is $2$ and whose smallest component has at least $10$ vertices. \end{theorem} \begin{proof} Consider the graph family $A$ implied by the pattern in Figure~\ref{fig:32-even}. Observe that the leftmost $\frac{3}{2}n-2$ vertices of this pattern embed all graphs on $n$ vertices whose components are all even cycles.
In particular, each (even) cycle of length $s\geq10$ will be embedded in such a way that it uses exactly $\frac{s-4}{2}\geq3$ consecutive edges from the top or bottom rows. Now consider the modified graph family $B$ in Figure~\ref{fig:116-large}. Every third edge along the top row is associated with one of the added red vertices. Now consider an odd cycle $C$ of length $s>10$. By splitting one vertex in the cycle, we can extend it to an even cycle $C'$ of length $\geq12$. When we embed this $C'$, it will contain an edge associated with one of the red vertices. We can therefore arrange that the split vertices are the endpoints of such an edge, and we can obtain an embedding of $C$ by replacing them with the associated red vertex. Similarly, since every component is assumed to have size at least $10$, we can join any number of paths into a single cycle of even length by adding at most $\frac{1}{10}n+\mathcal{O}(1)$ vertices in total. Thus any graph $G$ with $n$ vertices, maximum degree $2$, and minimum component size at least $10$ can be converted into a graph $G'$ with at most $n'=\frac{11}{10}n+\mathcal{O}(1)$ vertices whose components are all even cycles. The graph $G'$ can then be embedded in a graph from family $A$ with $\frac{3}{2}n'+\mathcal{O}(1)$ vertices. This graph can then be converted to a graph in family $B$ with at most $\frac{10}{9}(\frac{3}{2}n'+\mathcal{O}(1))=\frac{11}{6}n+\mathcal{O}(1)$ vertices, and the embedding of $G'$ in $A$ can be converted to an embedding of $G$ in $B$ as previously described. \end{proof} \section{Paths}\label{sec:paths} \pgfdeclarelayer{background} \pgfdeclarelayer{foreground} \pgfsetlayers{background,main,foreground} We show that there exists an induced universal graph with $\lfloor 3n/2 \rfloor$ vertices and at most $\lfloor 3n/2 \rfloor - 1$ edges for the family $\mathcal{AC}$ of graphs consisting of a set of paths with a total of $n$ vertices. Our new induced universal graph matches the following lower bound.
\begin{theorem}[Claim 1, \cite{Esperet2008}]\label{thm:pathlower} Every induced universal graph that embeds every acyclic graph $G$ on $n$ vertices with $\Delta(G)\leq 2$ has at least $\lfloor 3n/2 \rfloor$ vertices. \end{theorem} \begin{proof} The theorem can be extracted from the proof of Claim $1$ in \cite{Esperet2008}. Consider the two graphs $G'$ and $G''$, with $G'$ consisting of $n$ copies of $P_1$ and $G''$ consisting of $\floor{n/2}$ copies of $P_2$ plus at most one $P_1$. An induced universal graph must contain $n$ disjoint $P_1$ to embed $G'$, and $\floor{n/2}$ disjoint $P_2$ to embed $G''$. Each $P_2$ in the embedding of $G''$ can overlap with at most one $P_1$ from the embedding of $G'$ in the induced universal graph, since the two vertices of a $P_2$ are adjacent while the images of the $P_1$'s are pairwise non-adjacent; hence we need at least $n + \lfloor n/2 \rfloor = \lfloor 3n/2 \rfloor$ vertices in the induced universal graph. \end{proof} Our induced universal graph $U^p_n$ that matches \Cref{thm:pathlower} is defined as follows. \begin{definition}\label{def:upn} Let \begin{align*} s(x)&:= \begin{cases} x+2&\text{ if }x\equiv2\pmod{3} \\ x+1&\text{ otherwise} \end{cases} \end{align*} For any $n \in {\mathbb{N}}$ let $U^p_n$ be the graph over vertex set $\left[ \lfloor 3n/2 \rfloor \right]$, and for all $u < v \in [\lfloor 3n/2 \rfloor]$ let there be an edge $(u,v)$ iff $v = s(u)$ (see \Cref{fig:Up_n}).
\end{definition} \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.5] \pgfmathtruncatemacro{\n}{6}; \pgfmathtruncatemacro{\vmax}{2*\n}; \pgfdeclarelayer{background} \pgfdeclarelayer{foreground} \pgfsetlayers{background,main,foreground} \begin{scope}[ vertex style/.style={ draw, circle, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected vertex style/.style={ draw, circle, fill=blue!20, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected edge style/.style={ rounded corners,line width=1.5mm,blue!20,cap=round% } ] \node[vertex style] (v0) at (0,-1) {\tiny $0$}; \node[vertex style] (v1) at (0,0) {\tiny $1$}; \node[vertex style] (v2) at (1,0) {\tiny $2$}; \node[vertex style] (v3) at (2,-1) {\tiny $3$}; \node[vertex style] (v4) at (2,0) {\tiny $4$}; \node[vertex style] (v5) at (3,0) {\tiny $5$}; \node[vertex style] (v6) at (4,-1) {\tiny $6$}; \node[vertex style] (v7) at (4,0) {\tiny $7$}; \node[vertex style] (v8) at (5,0) {\tiny $8$}; \node[vertex style] (v9) at (6,-1) {\tiny $9$}; \node[vertex style] (v10) at (6,0) {\tiny $10$}; \node[vertex style] (v11) at (7,0) {\tiny $11$}; \node[vertex style] (v12) at (8,-1) {\tiny $12$}; \node[vertex style] (v13) at (8,0) {\tiny $13$}; \node[vertex style] (v14) at (9,0) {\tiny $14$}; \node[vertex style] (v15) at (10,-1) {\tiny $15$}; \draw[thick,color=blue] (v0) -- (v1); \draw[thick,color=blue] (v3) -- (v4); \draw[thick,color=blue] (v6) -- (v7); \draw[thick,color=blue] (v9) -- (v10); \draw[thick,color=blue] (v12) -- (v13); \draw[thick,color=blue] (v1) -- (v2); \draw[thick,color=red] (v2) -- (v4); \draw[thick,color=blue] (v4) -- (v5); \draw[thick,color=red] (v5) -- (v7); \draw[thick,color=blue] (v7) -- (v8); \draw[thick,color=red] (v8) -- (v10); \draw[thick,color=blue] (v10) -- (v11); \draw[thick,color=red] (v11) -- (v13); \draw[thick,color=blue] (v13) -- (v14); \end{scope} \end{tikzpicture} \caption{$U^p_{11}$, with each $(u,v)$ colored \textcolor{blue}{blue} if $v=u+1$ and \textcolor{red}{red} if $v=u+2$, 
i.e., the edges correspond to the two cases in the definition of $s$ in \Cref{def:upn}.} \label[figure]{fig:Up_n} \end{figure} When $n$ is even, $U^p_n$ is a simple caterpillar graph, in which every second vertex of the main path has an extra vertex connected to it. When $n$ is odd, $U^p_n$ consists of a caterpillar and a single isolated vertex (see \Cref{fig:Up_n}). In the following, we show that $U^p_n$ embeds every acyclic graph on $n$ vertices with maximum degree $2$, and that the corresponding decoder is size oblivious. We proceed by showing that a few particular paths can be embedded in the graph, and finally that any graph in $\mathcal{AC}$ can be embedded using the same embedding techniques. \begin{definition}\label{def:bb} Let $G$ and $U$ be graphs and let $\lambda$ be an embedding function of $G$ into $U$. We say that a vertex $v \in V[U]$ is \emph{used} if there exists $v' \in V[G]$ such that $\lambda(v') = v$. A vertex $w\in V[U]$ is \emph{allocated} if $w$ is used or adjacent to a used vertex. A subset $X \subseteq V[U]$ is \emph{allocated} when every vertex $x \in X$ is allocated. \end{definition} This definition allows us to argue that, given a specific graph $G\in\mathcal{AC}$ with $n$ vertices, we can allocate a part of our induced universal graph $U^p_{n}$ for embedding a part of $G$, after which the remaining unallocated part of $U^p_{n}$ is available for embedding the remaining part of $G$. We divide $G$ into several parts and show that each part can be embedded in $U^p_{n}$. For each part we embed, the number of allocated vertices divided by the number of used vertices is at most $3/2$. \Cref{thm:paths} implies that these embedding strategies can be combined, i.e., applied consecutively one after the other, so that all of $G$ can be embedded in $U^p_{n}$. Our argument relies heavily on allocation of blocks as defined below.
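The successor function $s$ and the graph $U^p_n$ are simple enough to transcribe directly. The following sketch is our own sanity check (not part of any proof): it builds the edge set of $U^p_n$ and confirms the vertex and edge counts claimed in the main theorem of this section, as well as the isolated vertex for odd $n$.

```python
# Direct transcription of s and U^p_n from the definition above;
# a sanity check of the counts claimed later, not part of any proof.

def s(x: int) -> int:
    # s(x) = x + 2 if x = 2 (mod 3), and x + 1 otherwise.
    return x + 2 if x % 3 == 2 else x + 1

def upn_edges(n: int) -> list:
    # U^p_n has vertex set [floor(3n/2)] and an edge (u, s(u))
    # whenever s(u) is still a vertex.
    m = 3 * n // 2
    return [(u, s(u)) for u in range(m) if s(u) < m]

if __name__ == "__main__":
    for n in range(1, 200):
        edges = upn_edges(n)
        # Claimed: floor(3n/2) vertices and max(0, 3*floor(n/2) - 1) edges.
        assert len(edges) == max(0, 3 * (n // 2) - 1)
        if n % 2 == 1 and n > 1:
            # For odd n the highest-labelled vertex is isolated.
            last = 3 * n // 2 - 1
            assert all(last not in e for e in edges)
```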
\begin{definition}\label{def:block} Let \emph{block} $B_j$ of $U^p_n$, where $j\in[\floor{\frac{n}{2}}]$, be the vertices $\{3j, 3j+1, 3j+2 \}$. \end{definition} In \Cref{lem:combined}, we show how to embed single paths of any length and simple families of paths in $U^p_{n}$ efficiently. \Cref{fig:p2k1,fig:p3p1} shows how the cases are handled. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.5] \pgfmathtruncatemacro{\n}{6}; \pgfmathtruncatemacro{\vmax}{2*\n}; \pgfdeclarelayer{background} \pgfdeclarelayer{foreground} \pgfsetlayers{background,main,foreground} \begin{scope}[ vertex style/.style={ draw, circle, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected vertex style/.style={ draw, circle, fill=blue!20, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected edge style/.style={ rounded corners,line width=1.5mm,blue!20,cap=round% } ] \node[selected vertex style] (v0) at (0,-1) {\tiny $0$}; \node[selected vertex style] (v1) at (0,0) {\tiny $1$}; \node[selected vertex style] (v2) at (1,0) {\tiny $2$}; \node[vertex style] (v3) at (2,-1) {\tiny $3$}; \node[selected vertex style] (v4) at (2,0) {\tiny $4$}; \node[selected vertex style] (v5) at (3,0) {\tiny $5$}; \node[vertex style] (v6) at (4,-1) {\tiny $6$}; \node[selected vertex style] (v7) at (4,0) {\tiny $7$}; \node[selected vertex style] (v8) at (5,0) {\tiny $8$}; \node[vertex style] (v9) at (6,-1) {\tiny $9$}; \node[selected vertex style] (v10) at (6,0) {\tiny $10$}; \node[vertex style] (v11) at (7,0) {\tiny $11$}; \node[vertex style] (v12) at (8,-1) {\tiny $12$}; \node[vertex style] (v13) at (8,0) {\tiny $13$}; \node[vertex style] (v14) at (9,0) {\tiny $14$}; \draw[thick,color=blue] (v0) -- (v1); \draw[thick,color=blue] (v3) -- (v4); \draw[thick,color=blue] (v6) -- (v7); \draw[thick,color=blue] (v9) -- (v10); \draw[thick,color=blue] (v12) -- (v13); \draw[thick,color=blue] (v1) -- (v2); \draw[thick,color=red] (v2) -- (v4); \draw[thick,color=blue] (v4) -- (v5); \draw[thick,color=red] (v5) -- 
(v7); \draw[thick,color=blue] (v7) -- (v8); \draw[thick,color=red] (v8) -- (v10); \draw[thick,color=blue] (v10) -- (v11); \draw[thick,color=red] (v11) -- (v13); \draw[thick,color=blue] (v13) -- (v14); \begin{pgfonlayer}{background} \draw[selected edge style] (v0.center) -- (v1.center) -- (v2.center) -- (v4.center) -- (v5.center) -- (v7.center) -- (v8.center) -- (v10.center); \draw[very thick,dotted,red] ($(v0.south west)+(-0.15,-0.15)$) rectangle ($(v2.north east)+(0.15,0.15)$); \draw[very thick,dotted,red] ($(v3.south west)+(-0.15,-0.15)$) rectangle ($(v5.north east)+(0.15,0.15)$); \draw[very thick,dotted,red] ($(v6.south west)+(-0.15,-0.15)$) rectangle ($(v8.north east)+(0.15,0.15)$); \draw[very thick,dotted,red] ($(v9.south west)+(-0.15,-0.15)$) rectangle ($(v11.north east)+(0.15,0.15)$); \end{pgfonlayer} \end{scope} \end{tikzpicture} ~~~~ \begin{tikzpicture}[scale=0.5] \pgfmathtruncatemacro{\n}{6}; \pgfmathtruncatemacro{\vmax}{2*\n}; \pgfdeclarelayer{background} \pgfdeclarelayer{foreground} \pgfsetlayers{background,main,foreground} \begin{scope}[ vertex style/.style={ draw, circle, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected vertex style/.style={ draw, circle, fill=blue!20, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected edge style/.style={ rounded corners,line width=1.5mm,blue!20,cap=round% } ] \node[selected vertex style] (v0) at (0,-1) {\tiny $0$}; \node[selected vertex style] (v1) at (0,0) {\tiny $1$}; \node[selected vertex style] (v2) at (1,0) {\tiny $2$}; \node[vertex style] (v3) at (2,-1) {\tiny $3$}; \node[selected vertex style] (v4) at (2,0) {\tiny $4$}; \node[selected vertex style] (v5) at (3,0) {\tiny $5$}; \node[vertex style] (v6) at (4,-1) {\tiny $6$}; \node[selected vertex style] (v7) at (4,0) {\tiny $7$}; \node[selected vertex style] (v8) at (5,0) {\tiny $8$}; \node[selected vertex style] (v9) at (6,-1) {\tiny $9$}; \node[selected vertex style] (v10) at (6,0) {\tiny $10$}; \node[vertex style] (v11) at (7,0) {\tiny 
$11$}; \node[vertex style] (v12) at (8,-1) {\tiny $12$}; \node[vertex style] (v13) at (8,0) {\tiny $13$}; \node[vertex style] (v14) at (9,0) {\tiny $14$}; \draw[thick,color=blue] (v0) -- (v1); \draw[thick,color=blue] (v3) -- (v4); \draw[thick,color=blue] (v6) -- (v7); \draw[thick,color=blue] (v9) -- (v10); \draw[thick,color=blue] (v12) -- (v13); \draw[thick,color=blue] (v1) -- (v2); \draw[thick,color=red] (v2) -- (v4); \draw[thick,color=blue] (v4) -- (v5); \draw[thick,color=red] (v5) -- (v7); \draw[thick,color=blue] (v7) -- (v8); \draw[thick,color=red] (v8) -- (v10); \draw[thick,color=blue] (v10) -- (v11); \draw[thick,color=red] (v11) -- (v13); \draw[thick,color=blue] (v13) -- (v14); \begin{pgfonlayer}{background} \draw[selected edge style] (v0.center) -- (v1.center) -- (v2.center) -- (v4.center) -- (v5.center) -- (v7.center) -- (v8.center) -- (v10.center) -- (v9.center); \draw[very thick,dotted,red] ($(v0.south west)+(-0.15,-0.15)$) rectangle ($(v2.north east)+(0.15,0.15)$); \draw[very thick,dotted,red] ($(v3.south west)+(-0.15,-0.15)$) rectangle ($(v5.north east)+(0.15,0.15)$); \draw[very thick,dotted,red] ($(v6.south west)+(-0.15,-0.15)$) rectangle ($(v8.north east)+(0.15,0.15)$); \draw[very thick,dotted,red] ($(v9.south west)+(-0.15,-0.15)$) rectangle ($(v11.north east)+(0.15,0.15)$); \end{pgfonlayer} \end{scope} \end{tikzpicture} \caption{(left) $U^p_{10}$, with a $P_{8}$ embedded, and (right) $U^p_{10}$, with a $P_{9}$ embedded. The embedded paths are shown embedded in \textcolor{blue!40}{blue}, and the allocated blocks in \textcolor{red}{red}. 
Both cases use at most $\lfloor 3n/2 \rfloor $ vertices.}\label[figure]{fig:p2k1} \end{figure} \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.5] \pgfmathtruncatemacro{\n}{6}; \pgfmathtruncatemacro{\vmax}{2*\n}; \pgfdeclarelayer{background} \pgfdeclarelayer{foreground} \pgfsetlayers{background,main,foreground} \begin{scope}[ vertex style/.style={ draw, circle, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected vertex style/.style={ draw, circle, fill=blue!20, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected edge style/.style={ rounded corners,line width=1.5mm,blue!20,cap=round% } ] \node[selected vertex style] (v0) at (0,-1) {\tiny $0$}; \node[selected vertex style] (v1) at (0,0) {\tiny $1$}; \node[selected vertex style] (v2) at (1,0) {\tiny $2$}; \node[vertex style] (v3) at (2,-1) {\tiny $3$}; \node[vertex style] (v4) at (2,0) {\tiny $4$}; \node[selected vertex style] (v5) at (3,0) {\tiny $5$}; \node[selected vertex style] (v6) at (4,-1) {\tiny $6$}; \node[selected vertex style] (v7) at (4,0) {\tiny $7$}; \node[vertex style] (v8) at (5,0) {\tiny $8$}; \node[vertex style] (v9) at (6,-1) {\tiny $9$}; \node[vertex style] (v10) at (6,0) {\tiny $10$}; \node[vertex style] (v11) at (7,0) {\tiny $11$}; \node[vertex style] (v12) at (8,-1) {\tiny $12$}; \node[vertex style] (v13) at (8,0) {\tiny $13$}; \node[vertex style] (v14) at (9,0) {\tiny $14$}; \draw[thick,color=blue] (v0) -- (v1); \draw[thick,color=blue] (v3) -- (v4); \draw[thick,color=blue] (v6) -- (v7); \draw[thick,color=blue] (v9) -- (v10); \draw[thick,color=blue] (v12) -- (v13); \draw[thick,color=blue] (v1) -- (v2); \draw[thick,color=red] (v2) -- (v4); \draw[thick,color=blue] (v4) -- (v5); \draw[thick,color=red] (v5) -- (v7); \draw[thick,color=blue] (v7) -- (v8); \draw[thick,color=red] (v8) -- (v10); \draw[thick,color=blue] (v10) -- (v11); \draw[thick,color=red] (v11) -- (v13); \draw[thick,color=blue] (v13) -- (v14); \begin{pgfonlayer}{background} \draw[selected edge style] (v0.center) 
-- (v1.center) -- (v2.center); \draw[selected edge style] (v5.center) -- (v7.center) -- (v6.center); \draw[very thick,dotted,red] ($(v0.south west)+(-0.15,-0.15)$) rectangle ($(v2.north east)+(0.15,0.15)$); \draw[very thick,dotted,red] ($(v3.south west)+(-0.15,-0.15)$) rectangle ($(v5.north east)+(0.15,0.15)$); \draw[very thick,dotted,red] ($(v6.south west)+(-0.15,-0.15)$) rectangle ($(v8.north east)+(0.15,0.15)$); \end{pgfonlayer} \end{scope} \end{tikzpicture} ~~~~ \begin{tikzpicture}[scale=0.5] \pgfmathtruncatemacro{\n}{6}; \pgfmathtruncatemacro{\vmax}{2*\n}; \pgfdeclarelayer{background} \pgfdeclarelayer{foreground} \pgfsetlayers{background,main,foreground} \begin{scope}[ vertex style/.style={ draw, circle, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected vertex style/.style={ draw, circle, fill=blue!20, minimum size=3mm, inner sep=0pt, outer sep=0pt% }, selected edge style/.style={ rounded corners,line width=1.5mm,blue!20,cap=round% } ] \node[selected vertex style] (v0) at (0,-1) {\tiny $0$}; \node[selected vertex style] (v1) at (0,0) {\tiny $1$}; \node[selected vertex style] (v2) at (1,0) {\tiny $2$}; \node[selected vertex style] (v3) at (2,-1) {\tiny $3$}; \node[vertex style] (v4) at (2,0) {\tiny $4$}; \node[selected vertex style] (v5) at (3,0) {\tiny $5$}; \node[selected vertex style] (v6) at (4,-1) {\tiny $6$}; \node[vertex style] (v7) at (4,0) {\tiny $7$}; \node[selected vertex style] (v8) at (5,0) {\tiny $8$}; \node[selected vertex style] (v9) at (6,-1) {\tiny $9$}; \node[vertex style] (v10) at (6,0) {\tiny $10$}; \node[selected vertex style] (v11) at (7,0) {\tiny $11$}; \node[selected vertex style] (v12) at (8,-1) {\tiny $12$}; \node[vertex style] (v13) at (8,0) {\tiny $13$}; \node[vertex style] (v14) at (9,0) {\tiny $14$}; \draw[thick,color=blue] (v0) -- (v1); \draw[thick,color=blue] (v3) -- (v4); \draw[thick,color=blue] (v6) -- (v7); \draw[thick,color=blue] (v9) -- (v10); \draw[thick,color=blue] (v12) -- (v13); \draw[thick,color=blue] (v1) -- 
(v2); \draw[thick,color=red] (v2) -- (v4); \draw[thick,color=blue] (v4) -- (v5); \draw[thick,color=red] (v5) -- (v7); \draw[thick,color=blue] (v7) -- (v8); \draw[thick,color=red] (v8) -- (v10); \draw[thick,color=blue] (v10) -- (v11); \draw[thick,color=red] (v11) -- (v13); \draw[thick,color=blue] (v13) -- (v14); \begin{pgfonlayer}{background} \draw[selected edge style] (v0.center) -- (v1.center) -- (v2.center); \draw[selected edge style] (v5.center) -- (v5.center) ; \draw[selected edge style] (v6.center) -- (v6.center) ; \draw[selected edge style] (v8.center) -- (v8.center) ; \draw[selected edge style] (v9.center) -- (v9.center) ; \draw[selected edge style] (v11.center) -- (v11.center) ; \draw[selected edge style] (v12.center) -- (v12.center) ; \draw[very thick,dotted,red] ($(v0.south west)+(-0.15,-0.15)$) -- ($(v1.north west)+(-0.15,0.15)$) -- ($(v2.north east)+(0.15,0.15)$) -- ($(v2.south east)+(0.15,-0.15)$) -- ($(v1.south east)+(0.15,-0.15)$) -- ($(v0.south east)+(0.15,-0.15)$) -- ($(v0.south west)+(-0.15,-0.15)$) ; \draw[very thick,dotted,green] ($(v3.south west)+(-0.15,-0.15)$) rectangle ($(v3.north east)+(0.15,0.15)$); \draw[very thick,dotted,black] ($(v5.south west)+(-0.15,-1.15)$) rectangle ($(v13.north east)+(0.15,0.15)$); \end{pgfonlayer} \end{scope} \end{tikzpicture} \caption{(left) $U^p_{10}$ with two $P_{3}$ embedded, and (right) $U^p_{10}$ with one $P_3$ and a number of $P_1$ embedded. Embedded paths are shown in \textcolor{blue!40}{blue}, and allocated blocks in \textcolor{red}{red}. The extra allocation to handle one $P_3$ and one $P_1$ is shown in dotted \textcolor{green}{green}. The trivial extension of $6$ additional $P_1$ is shown in dotted black. All cases use at most $\lfloor 3n/2 \rfloor $ vertices.} \label[figure]{fig:p3p1} \end{figure} \begin{lemma}\label{lem:combined} Let $k > 0$ unless otherwise specified. For the induced universal graph $U^p_{n}$ it holds that \begin{enumerate}[label=(\alph*)] \item $U^p_{2k}$ embeds $P_{2k}$. 
\label{case:p2k} \item $U^p_{2k}$ embeds $P_{2k+1}$ when $k > 1$. \label{case:p2k1} \item $U^p_{k}$ embeds $k$ copies of $P_1$. \label{case:p1} \item $U^p_{6}$ embeds two copies of $P_3$. \label{case:2p3} \item $U^p_{3+l}$ embeds one $P_3$ and $l \geq 0$ copies of $P_1$. \label{case:p3p1} \end{enumerate} \end{lemma} \begin{proof} We show how to construct the embeddings claimed in each case of \Cref{lem:combined}. The general strategy is to allocate blocks as in \Cref{def:block} and embed the input in these blocks such that at least two out of the three vertices in a block are used, yielding the desired ratio between allocated and used vertices. Consider first case \ref{case:p2k}, where we are to embed an even path. See \Cref{fig:p2k1}. We allocate $k$ blocks $B_0, \ldots, B_{k-1}$ and embed the path by using three vertices in $B_0$, one vertex in $B_{k-1}$, and two vertices in each of $B_1, \ldots, B_{k-2}$. With $n=2k$, the number of allocated vertices is thus $3k = \lfloor 3n/2 \rfloor$. In the second case \ref{case:p2k1} we are to embed an odd path. See \Cref{fig:p2k1}. We use the same strategy as above, the difference being that two vertices are used in $B_{k-1}$, namely the two with the smallest labels. As we embed one more vertex in the same number of blocks, we are still within $\lfloor 3n/2 \rfloor$. For \ref{case:p1} we embed $k$ copies of $P_1$ by allocating $\lfloor k/2 \rfloor$ blocks and embedding two $P_1$ per block. If $k$ is odd we embed the last $P_1$ by allocating the isolated vertex labelled $\floor{3k/2}-1$. When the input is \ref{case:2p3} we embed the two $P_3$ by allocating three blocks as seen in \Cref{fig:p3p1}. As we allocate $9$ vertices and use $6$, this strategy is within the $\lfloor 3n/2 \rfloor$ bound. In the final case \ref{case:p3p1} we allocate the first block $B_0$ and embed the $P_3$ in it. Next, we apply case \ref{case:p1} to the remaining $P_1$. See \Cref{fig:p3p1}. In total we then allocate at most $3 + \lfloor 3l/2 \rfloor$ vertices.
\end{proof} We are now ready to state the main theorem of this section. \Cref{case:main} follows from consecutive applications of \Cref{lem:combined} and a careful order in which we embed the vertices of some given graph $G\in\mathcal{AC}$ with $n$ vertices in $U^p_n$. In particular, it is crucial that the $P_3$'s and $P_1$'s in $G$ are embedded second to last and last, respectively, since after embedding one of these two parts of $G$, the next unallocated vertex is not necessarily the first vertex of a new block. \begin{theorem}\label{thm:generalupperbound}\label{thm:paths} The graph family $U^p_0, U^p_1, \ldots$ has the property that for $n \in {\mathbb{N}}_1$ \begin{enumerate}[label=(\alph*)] \item $U^p_n$ has $\lfloor 3n/2 \rfloor$ vertices and $\max\set{0, 3\floor{\frac{n}{2}}-1}$ edges.\label{case:size} \item $U^p_n$ is an induced subgraph of $U^p_{n+1}$.\label{case:induced} \item $U^p_n$ is an induced universal graph for the family of acyclic graphs $G$ with $\Delta(G)\leq2$ and $n$ vertices.\label{case:main} \end{enumerate} \end{theorem} \begin{proof} Cases \ref{case:size} and \ref{case:induced} follow immediately from \Cref{def:upn}. Say the input graph $G$ consists of a set $S_{\mathrm{even}}$ of paths $P_{2k}$ with $k \geq 1$, a set $S_{\mathrm{odd}}$ of paths $P_{2k+1}$ with $k \geq 2$, a set $S_{3}$ of $P_3$'s, and a set $S_{1}$ of $P_1$'s. Let the cardinality of a set denote the number of paths in it, and let the total numbers of vertices in the four sets be denoted $l_{\mathrm{even}}$, $l_{\mathrm{odd}}$, $l_{3}$, and $l_1$, respectively. Observe that every acyclic graph with $\Delta(G)\leq2$ has such a partition. The plan is to embed the different parts of the input in an order that allows the embedding strategies from \Cref{lem:combined} to be applied. We start by embedding $S_{\mathrm{even}}$, allocating the first at most $3l_{\mathrm{even}}/2$ vertices of $U^p_{n}$ via $|S_{\mathrm{even}}|$ invocations of \Cref{lem:combined} case \ref{case:p2k}.
It follows that the next unallocated vertex is the first vertex of a new block, i.e., after embedding the even-length paths we have used at most $l_{\mathrm{even}}/2$ blocks of size $3$. We allocate the next $\lfloor 3l_{\mathrm{odd}}/2 \rfloor$ vertices of $U^p_{n}$ by performing $|S_{\mathrm{odd}}|$ invocations of \Cref{lem:combined} case \ref{case:p2k1}. Again, the next unallocated vertex is the first vertex of a new block. The next step is embedding $S_3$ and $S_1$. If $|S_3|$ is even then proceed by invoking \Cref{lem:combined} case \ref{case:2p3} $|S_3|/2$ times to embed all pairs of $P_3$. This allocation uses $3 l_3 /2$ vertices from $U^p_{n}$. We can then embed $S_1$ trivially as in \Cref{lem:combined} case \ref{case:p1}, allocating an additional $\lfloor 3 l_1 / 2 \rfloor$ vertices of $U^p_{n}$. If $|S_3|$ is odd then allocate all the pairs using $(|S_3|-1)/2$ invocations of \Cref{lem:combined} case \ref{case:2p3}, which uses $3(l_3 - 3)/2$ vertices of $U^p_{n}$. We proceed by invoking \Cref{lem:combined} case \ref{case:p3p1} to embed the remaining $P_3$ along with $S_1$. In total we need to allocate at most $\lfloor 3(l_{\mathrm{even}} + l_{\mathrm{odd}} + l_3 + l_1 )/2 \rfloor = \lfloor 3n/2 \rfloor$ vertices, and hence $U^p_{n}$ embeds any acyclic graph $G$ on $n$ vertices with $\Delta(G)\leq2$. \end{proof} \subsection{Preliminaries}\label{sec:prelim} Let $[n]=\{0, \ldots, n-1\}$, ${\mathbb{N}}_0 = \set{0,1,2,\ldots}$, ${\mathbb{N}}={\mathbb{N}}_1=\set{1,2,\ldots}$, and let $\log n$ refer to $\log_2 n$. For a graph $G$, let $V[G]$ be the set of vertices and $E[G]$ be the set of edges of $G$, and let $\abs{G}=\abs{V[G]}$ be the number of vertices. We denote the maximum degree of a graph $G$ by $\Delta(G)$. For $i \in {\mathbb{N}}$, let $P_{i}$ denote a path with $i$ vertices, and for $i>2$, let $C_i$ denote a simple cycle with $i$ vertices. Let $G$ and $U$ be two graphs and let $\lambda\colon V[G] \to V[U]$ be an injective function.
If $\lambda$ has the property that $uv\in E[G]$ if and only if $\lambda(u)\lambda(v)\in E[U]$, we say that $\lambda$ is an \emph{embedding function} of $G$ into $U$. $G$ is an \emph{induced subgraph} of $U$ if there exists an embedding function of $G$ into $U$, and in that case, we say that $G$ is \emph{embedded} in $U$ and that $U$ \emph{embeds} $G$. Let $\mathcal F$ be a family of graphs. $U$ is an \emph{induced universal graph} of $\mathcal F$ if $G$ is an induced subgraph of $U$ for each $G\in\mathcal F$. Let $\mathcal{F}$ be a family of graphs and for each positive integer $n$, let $\mathcal{F}_n$ be the graphs in $\mathcal{F}$ on $n$ vertices. For $f : {\mathbb{N}}_0 \to {\mathbb{N}}_0$ we say that $\mathcal{F}$ can be labeled using $f(n)$ labels with a \emph{size aware decoder} if for each $n\in{\mathbb{N}}$ there exists a decoder $d_n : {\mathbb{N}}_0 \times {\mathbb{N}}_0 \to \{0,1\}$ that satisfies the following: For every graph $G \in \mathcal{F}_n$ there is an encoding function $e : V[G] \to [f(n)]$ such that $u,v \in V[G]$ are adjacent iff $d_n(e(u),e(v)) = 1$. We say that $\mathcal{F}$ can be labeled using $f(n)$ labels with a \emph{size oblivious decoder} if there exists $d : {\mathbb{N}}_0 \times {\mathbb{N}}_0 \to \{0,1\}$ with the following property: For every $n\in{\mathbb{N}}$ and every $G\in\mathcal{F}_n$, there is an encoding function $e : V[G] \to [f(n)]$ such that $u,v \in V[G]$ are adjacent iff $d(e(u),e(v)) = 1$. The following proposition explains the relation between size aware and size oblivious decoders and induced universal graphs. \begin{proposition}\label[proposition]{pro:aware} Given a family of graphs $\mathcal{F}$ and a function $f : {\mathbb{N}} \to {\mathbb{N}}$ the following holds.
1) $\mathcal{F}$ can be labeled using $f(n)$ labels with a size aware decoder iff there exists a family of graphs $G_1, G_2, \ldots$ such that $G_n$ is an induced universal graph for $\mathcal{F}_n$ and $\abs{G_n} = f(n)$ for every $n$. 2) $\mathcal{F}$ can be labeled using $f(n)$ labels with a size oblivious decoder iff there exists a family of graphs $G_1, G_2, \ldots$ such that $G_n$ is an induced universal graph for $\mathcal{F}_n$, $\abs{G_n} = f(n)$, and $G_n$ is an induced subgraph of $G_{n+1}$ for every $n$. \end{proposition} \begin{proof} 1) First assume that $\mathcal{F}$ can be labeled using $f(n)$ labels with a size aware decoder. Let $d_n : {\mathbb{N}}_0 \times {\mathbb{N}}_0 \to \{0,1\}$ be the corresponding family of decoders. Let $G_n$ be defined on the vertex set $[f(n)]$ such that $i \neq j$ are adjacent iff $d_n(i,j) = 1$. Obviously, $G_n$ contains exactly $f(n)$ nodes. Furthermore, $G_n$ is an induced universal graph of $\mathcal{F}_n$. This shows the first direction. For the other direction let such a family $G_1, G_2, \ldots$ be given. Without loss of generality, assume that the vertex set of $G_n$ is $[f(n)]$. We let $d_n : {\mathbb{N}}_0 \times {\mathbb{N}}_0 \to \{0,1\}$ be defined by $d_n(u,v) = 1$ iff $u,v < f(n)$ and $u,v$ are adjacent in $G_n$. For $G \in \mathcal{F}_n$ let $\lambda : V[G] \to V[G_n]$ be an embedding function of $G$ in $G_n$. To a node $u \in V[G]$ we assign the label $\lambda(u)$. Then for nodes $u,v \in V[G]$ we have $d_n(\lambda(u),\lambda(v)) = 1$ iff $\lambda(u)$ and $\lambda(v)$ are adjacent in $G_n$, which happens iff $u$ and $v$ are adjacent in $G$, as desired. 2) The first direction is as in the first part of the proposition. We just need to note that the map $[f(n)] \to [f(n+1)]$ given by $k \mapsto k$ is an embedding function of $G_n$ in $G_{n+1}$. Now consider the other direction and let such a family $G_1, G_2, \ldots$ be given and without loss of generality assume that the vertex set of $G_n$ is $[f(n)]$.
Furthermore, without loss of generality assume that $\lambda_n : V[G_n] \to V[G_{n+1}]$ given by $k \mapsto k$ is an embedding function of $G_n$ in $G_{n+1}$. Let $d : {\mathbb{N}}_0 \times {\mathbb{N}}_0 \to \{0,1\}$ be defined in the following way. For $u,v \in {\mathbb{N}}_0$ let $d(u,v) = 1$ iff there exists $n$ such that $f(n) > u,v$ and $u$ and $v$ are adjacent in $G_n$. We note that if one such $n$ exists, then $u$ and $v$ are adjacent in $G_n$ for every $n$ with $f(n) > u,v$, since $G_n$ is an induced subgraph of $G_m$ whenever $n \leq m$. Hence we can assign labels in the same way as in the first part of the proposition. \end{proof}
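To make direction 1) concrete, here is a small Python sketch (ours, not from the paper) for a toy family: graphs on $n$ vertices whose components are single edges and isolated vertices. Labeling the endpoints of each edge with the two labels of a block $\{2k,2k+1\}$, and each isolated vertex with the even label of an unused block, gives a size-aware scheme with $f(n)=2n$ labels; the decoder reports adjacency exactly when two labels share a block, and the graph $G_n$ built from this decoder as in the proposition is a perfect matching on $[2n]$.

```python
from itertools import combinations

def decoder(u, v):
    # Toy decoder: two labels are adjacent iff they share a block {2k, 2k+1}.
    return 1 if u != v and u // 2 == v // 2 else 0

def universal_graph_from_decoder(num_labels, d):
    # Direction 1) of the proposition: G_n has vertex set [num_labels]
    # and i ~ j iff d(i, j) == 1.
    return {(i, j) for i, j in combinations(range(num_labels), 2) if d(i, j) == 1}

# For n = 4 the graph built from the decoder is a perfect matching on [8].
assert universal_graph_from_decoder(8, decoder) == {(0, 1), (2, 3), (4, 5), (6, 7)}

# Embedding of the graph with one edge ab and isolated vertices c, d:
labels = {'a': 0, 'b': 1, 'c': 2, 'd': 4}  # c and d occupy distinct blocks
for u, v in combinations(labels, 2):
    expected = 1 if {u, v} == {'a', 'b'} else 0
    assert decoder(labels[u], labels[v]) == expected
```

The same decoder works for every $n$, so this toy scheme is in fact size oblivious, matching direction 2): the graph built on $[2n]$ is an induced subgraph of the graph built on $[2(n+1)]$.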
https://arxiv.org/abs/2208.02454
An algebraic approach to count the number of representations of an integer by the quadratic form $x^2+ay^2$ for certain values of $a$
By considering the norm of elements in the ring of integers in $\mathbb{Q}(\sqrt{-a})$, we give an algebraic approach to count the number of integral solutions of diophantine equations of the form $x^2+ay^2=n$ where $a$ is a Heegner number or $a=27$.
\section{Introduction} The theory of representation of integers by binary quadratic forms has been studied for a long time. One interesting problem is to count the number of representations of a fixed integer by a given binary quadratic form. Dirichlet's work \cite{Dirichlet1871} dealt with a variation of this problem, namely representations by the collection of reduced binary quadratic forms of a given discriminant. Based on this work, Hall \cite{newman} derived a formula for the case in which each genus of binary quadratic forms of the given discriminant consists of exactly one reduced form. Further investigations on the number of representations by certain single binary quadratic forms are based on an analytic approach using Epstein zeta functions, theta series, or Dirichlet series, among others. More recent investigations include those of Kaplan and Williams \cite{Kaplan2004OnTN}; Sun and Williams \cite{Sun2006OnTN}; Berkovich and Yesilyurt \cite{berkovich-ramanujan}; and Bagis and Glaser \cite{bagis-glasser}. This paper focuses on an algebraic approach to this problem in several cases. The key tool for our approach is the ring of integers $\qZ{a}$ in ${\mathbb Q}(\sqrt{-a})$. Such an approach has been employed, for instance, in \cite{busenhart-halbeisen-hungerbuehler} to find an explicit formula for primitive solutions of the equation $x^2+y^2=n$ using Gaussian integers. More precisely, the expression $x^2+ay^2$ should be interpreted as the norm of $x+y\sqrt{-a}$, so that one may instead first count the number of elements of $\qZ{a}$ of the given norm. As we shall see in Proposition \ref{prop:count-given-norm}, this works well if $a$ is a \textbf{Heegner number}, or equivalently, if $\qZ{a}$ is a unique factorization domain. The Heegner numbers are precisely the numbers \[ 1, 2, 3, 7, 11, 19, 43, 67, 163, \] see \cite{stark-heegner}.
The slight difficulty for $a\geq 3$ is that $\qZ{a}$ also contains linear combinations of $1$ and $\sqrt{-a}$ with half-integer coefficients. Consequently, a more careful investigation is necessary. It turns out that the case $a=27$ can also be done by this approach. To summarize, we obtain the following results: \begin{thmABC} Let $n$ be a natural number. For each natural number $a$, let $X(n,a)$ denote the set of integral solutions to the equation $x^2+ay^2=n$. \begin{enumerate}[label=\textup{(\Roman*)}] \item \textup{(Theorem \ref{thm:Xan-1-2})} For $a=1,2$, we have \[ |X(n,1)| = 4 \sum_{c\mid n} \left(\frac{-4}{c}\right) \quad \text{and} \quad |X(n,2)| = 2 \sum_{c\mid n} \left(\frac{-2}{c}\right). \] \item \textup{(Theorem \ref{thm:Xan-3})} For $a=3$, we have the following results: \begin{enumerate}[label=\textup{(\alph*)}] \item If $n$ is even, then $\displaystyle |X(n,3)| = 6\displaystyle\sum_{c|n} \left ( \frac{c}{3} \right )$. \item If $n$ is odd, then $\displaystyle |X(n,3)| = 2\displaystyle\sum_{c|n} \left ( \frac{c}{3} \right )$. \end{enumerate} \item \textup{(Theorem \ref{thm:Xan-7})} For $a=7$, we have the following results: \begin{enumerate}[label=\textup{(\alph*)}] \item If $4\mid n$, then $\displaystyle |X(n,7)| = 2\displaystyle\sum_{c|\frac{n}{4}} \left ( \frac{c}{7} \right )$. \item If $n$ is even but $4\nmid n$, then $X(n,7)=\emptyset$. \item If $n$ is odd, then $\displaystyle |X(n,7)| = 2\displaystyle\sum_{c|n} \left ( \frac{c}{7} \right )$. \end{enumerate} \item \textup{(Theorem \ref{thm:Xan-11})} For every Heegner number $a\geq 11$, we have the following results: \begin{enumerate}[label=\textup{(\alph*)}] \item If $n$ is even, then $\displaystyle |X(n,a)| = |Y(n,a)|$.
\item If $n$ is odd, then \[ |X(n,a)| = \frac13\left[1+2\cdot\frac{(\tau(n_q)|3)}{\tau(n_q)}\right]\cdot|Y(n,a)|, \] where $n_q$ denotes the product of all prime factors (including multiplicity) of $n$ which are quadratic residues modulo $a$ but not expressible as $x^2+ay^2$ for any integers $x,y$. \end{enumerate} \item \textup{(Theorem \ref{thm:Xan-27})} For $a=27$, we have the following results: \begin{enumerate}[label=\textup{(\alph*)}] \item If $3 \mid n$, then $X(n,27)\neq \emptyset$ only if $9 \mid n$. In this case, we have \[ |X(n,27)| = |X(\tfrac{n}{9},3)|. \] \item If $3\nmid n$ but $2\mid n$, then \[ |X(n,27)| = \frac13|X(n,3)|. \] \item If $\gcd(6,n)=1$, then \[ |X(n,27)| = \frac13\left[1+2\cdot\frac{(\tau(n_q)|3)}{\tau(n_q)}\right]\cdot|X(n,3)|, \] where $n_q$ denotes the product of all prime factors $q$ (including multiplicity) of $n$ such that $q\equiv1\pmod{3}$ and $2$ is not a cubic residue modulo $q$. \end{enumerate} \end{enumerate} \end{thmABC} This paper is organized as follows: Section \ref{sec:count-norm} deals with a related problem, namely counting the elements of given norm. The problem of our interest will be discussed in Section \ref{sec:count-repr}, except for the cases $a=3,27$. These two cases will be discussed in Section \ref{sec:eisenstein}, where the ring of Eisenstein integers with primitive third roots of unity is involved. \smallskip \paragraph{Notation} The ring of integers in ${\mathbb Q}(\sqrt{-a})$, where $a$ is a square-free natural number, will be denoted by \[ \qZ{a} = \begin{cases} {\mathbb Z}[\sqrt{-a}] & \text{if $a\equiv 1,2\pmod{4}$,}\\ {\mathbb Z}[\frac{-1+\sqrt{-a}}2] & \text{if $a\equiv 3\pmod{4}$.} \end{cases} \] We will also write $\lambda_a:=\frac{-1+\sqrt{-a}}{2}\in\qZ{a}$ if $a\equiv3\bmod{4}$. The conjugate of $z=x+y\sqrt{-a}\in{\mathbb Q}(\sqrt{-a})$, where $x,y\in{\mathbb Q}$, will be denoted by $\overline{z}:=x-y\sqrt{-a}$. The norm of $z$ will be denoted by $N(z):=z\overline{z}=x^2+ay^2$.
In order to count the number of representations, we introduce the following sets for natural numbers $a$ and $n$: \begin{align*} X(n,a) &:= \{(x,y)\in{\mathbb Z}\times{\mathbb Z} \ | \ x^2+ay^2=n\} \quad \text{and} \\ Y(n,a) &:= \left\{ z\in \qZ{a} \ \big| \ N(z)=n \right\}. \end{align*} Both sets are connected by the map \[ \varphi_{n,a}:X(n,a)\to Y(n,a), \ (x,y)\mapsto x+y\sqrt{-a} \] as will be discussed in Sections \ref{sec:count-repr} and \ref{sec:eisenstein}. Finally, $\tau(n)$ denotes as usual the number of positive divisors of $n$. \section{Counting the elements of given norm} \label{sec:count-norm} In order to count the number of elements of $Y(n,a)$, recall the Kronecker symbol $\left(\frac{a}{n}\right)$ or $(a|n)$, which is a generalization of the Legendre symbol. It is multiplicative in the upper and lower arguments. Note that if $d$ is a square-free integer and $D$ is the discriminant of ${\mathbb Q}(\sqrt{d})/{\mathbb Q}$, the following holds for every prime number $p$: \[ \left( \frac{D}{p} \right) = \begin{cases} 1 & \text{if $p$ splits in ${\mathbb Q}(\sqrt{d})/{\mathbb Q}$,} \\ 0 & \text{if $p$ is ramified in ${\mathbb Q}(\sqrt{d})/{\mathbb Q}$,} \\ -1 & \text{if $p$ is inert in ${\mathbb Q}(\sqrt{d})/{\mathbb Q}$.}\end{cases} \] \begin{prop} \label{prop:count-given-norm} Let $a$ be a Heegner number. Furthermore, let $D$ be the discriminant of ${\mathbb Q}(\sqrt{-a})/{\mathbb Q}$. The following formula holds for all natural numbers $n$: \begin{equation} \label{eqn:Yan-general} |Y(n,a)| = |\qZ{a}^\times|\cdot \sum_{c\mid n} \left(\frac{D}{c}\right). \end{equation} In particular, if $a\equiv3\pmod4$, then \begin{equation} \label{eqn:Yan-3mod4} |Y(n,a)| = |\qZ{a}^\times|\cdot \sum_{c\mid n} \left(\frac{c}{a}\right).
\end{equation} \end{prop} Note that $|\qZ{a}^\times|=2$ for all square-free natural numbers $a$ except $a=1$, where $\qZ{1}^\times$ is the group of the fourth roots of unity, and $a=3$, where $\qZ{3}^\times$ is the group of the sixth roots of unity. \begin{proof} For simplicity, write $u:=|\qZ{a}^\times|$. Consider the function \[ f : \mathbb{N} \rightarrow \mathbb{Q}, \ n\mapsto f(n) := \frac{1}{u} |Y(n,a)|. \] We want to show that $f$ is multiplicative. It is evident that $f(1)=1$. Now let $m,n \in \mathbb{N}$ be such that $\gcd(m,n) = 1$. Consider the mapping \[ h:Y(m,a)\times Y(n,a) \to Y(mn,a), \ (z_1,z_2)\mapsto z_1z_2. \] This is well-defined since the norm on $\qZ{a}$ is multiplicative. Furthermore, the uniqueness of factorization in $\qZ{a}$ implies that each $z\in Y(mn,a)$ can be factored as a product of two elements of norms $m$ and $n$, uniquely up to association. Hence $h$ is a $u$-to-one mapping, implying that \[ |Y(m,a)|\cdot|Y(n,a)| = u\cdot|Y(mn,a)|. \] This proves the multiplicativity of $f$. Now we compute $|Y(p^k,a)|$ for each prime number $p$ and $k \in \mathbb{N}$ as follows: \begin{enumerate}[leftmargin=4ex,labelwidth=8ex,label=\textsc{Case} \arabic*:,itemindent=6ex,itemsep=1ex] \item $(D|p)=0$.\\ In this case, there is exactly one prime element $\pi\in\qZ{a}$ of norm $p$ up to association. This implies that every element in $\qZ{a}$ of norm $p^k$ is an associate of $\pi^k$. This implies that $|Y(p^k,a)|=u$. \item $(D|p)=1$.\\ In this case, there are exactly two prime elements $\pi_1,\pi_2\in\qZ{a}$ of norm $p$ up to association. This implies that every element in $\qZ{a}$ of norm $p^k$ is an associate of $\pi_1^j\pi_2^{k-j}$ for some $j\in\{0,1,\ldots,k\}$. This implies that $|Y(p^k,a)|=u(k+1)$. \item $(D|p)=-1$.\\ In this case, $p$ remains prime in $\qZ{a}$ and has norm $p^2$. Hence there exists an element of norm $p^k$ if and only if $k$ is even. In this case, such an element is an associate of $p^{k/2}$.
This implies that $|Y(p^k,a)|=u$ if $k$ is even and $|Y(p^k,a)|=0$ if $k$ is odd. \end{enumerate} From all three cases, we have \[ f(p^k) = \sum_{j=0}^k \left(\frac{D}{p}\right)^j = \left[\mathbf{1}*\left(\frac{D}{\cdot}\right)\right](p^k), \] where $*$ denotes the Dirichlet convolution of two arithmetic functions. Hence \eqref{eqn:Yan-general} follows from the multiplicativity of both $f$ and the convolution $\mathbf{1}*\left(\frac{D}{\cdot}\right)$. The formula \eqref{eqn:Yan-3mod4} then follows from the reciprocity law for the Kronecker symbol. \end{proof} \section{Counting the number of representations} \label{sec:count-repr} We now count the number of elements of \[ X(n,a) = \{(x,y)\in{\mathbb Z}\times{\mathbb Z} \ | \ x^2+ay^2=n\}. \] An important ingredient is to consider the mapping \begin{equation} \label{eqn:ph-a-n} \varphi_{n,a}:X(n,a)\to Y(n,a), \ (x,y)\mapsto x+y\sqrt{-a}. \end{equation} It is easily seen that $\varphi_{n,a}$ is well-defined and injective. The surjectivity holds if every element of $Y(n,a)$ is of the form $b+c\sqrt{-a}$ for some $b,c\in{\mathbb Z}$ (i.e.~$b$ and $c$ are not half of odd integers). This is particularly the case if $\qZ{a}={\mathbb Z}[\sqrt{-a}]$, i.e.~$a\equiv1,2\pmod{4}$. The only such Heegner numbers are $1$ and $2$. Hence we get the following result: \begin{thm} \label{thm:Xan-1-2} For all natural numbers $n$, we have \[ |X(n,1)| = 4 \sum_{c\mid n} \left(\frac{-4}{c}\right) \quad \text{and} \quad |X(n,2)| = 2 \sum_{c\mid n} \left(\frac{-2}{c}\right). \] \end{thm} \begin{proof} This follows from the observation above together with Proposition \ref{prop:count-given-norm} and the fact that $(-8|\delta)=(-2|\delta)^3=(-2|\delta)$ for every $\delta\in{\mathbb Z}$. \end{proof} If $a$ is a Heegner number such that $a\geq 3$, then necessarily $a\equiv3\pmod{4}$, which implies that $\qZ{a}={\mathbb Z}[\lambda_a]$, where $\lambda_a := \frac{-1+\sqrt{-a}}{2}$.
Consequently, the map $\varphi_{n,a}$ defined in \eqref{eqn:ph-a-n} may not be surjective. The case $a=3$ leads to the ring of Eisenstein integers, which contains primitive third roots of unity. Hence this case needs to be discussed separately and will be postponed to Section \ref{sec:eisenstein}. Instead, we will first discuss the case $a=7$, where $2$ splits completely in $\qZ{a}$, and then $a\geq 11$, where $2$ remains prime in $\qZ{a}$. \subsection{The case \texorpdfstring{$a=7$}{a=7}} The number of representations by the quadratic form $x^2+7y^2$ can be counted as follows: \begin{thm} \label{thm:Xan-7} Let $n$ be a natural number. \begin{enumerate}[label=\textup{(\alph*)}] \item \label{item:a7-n4} If $4\mid n$, then $\displaystyle |X(n,7)| = 2\displaystyle\sum_{c|\frac{n}{4}} \left ( \frac{c}{7} \right )$. \item \label{item:a7-n2} If $n$ is even but $4\nmid n$, then $X(n,7)=\emptyset$. \item \label{item:a7-n1} If $n$ is odd, then $\displaystyle |X(n,7)| = 2\displaystyle\sum_{c|n} \left ( \frac{c}{7} \right )$. \end{enumerate} \end{thm} \begin{proof} We first treat the case $n$ is even. Observe that for any $(x,y)\in X(n,7)$, we have $x\equiv y\pmod{2}$. Consequently, if $X(n,7)$ contains an element $(x,y)$, then $n=x^2+7y^2\equiv0\pmod{4}$. This proves \ref{item:a7-n2}. In order to prove \ref{item:a7-n4}, observe that if $4\mid n$, there is a bijection \[ X(n,7)\to Y(\tfrac{n}{4},7), \ (x,y) \mapsto \tfrac{x+y}{2}+y\lambda_7. \] This together with Proposition \ref{prop:count-given-norm} proves \ref{item:a7-n4}. Now we come to the case $n$ is odd and claim that the map $\varphi_{n,7}$ from \eqref{eqn:ph-a-n} is bijective. To see the surjectivity, observe that if $z=a+b\lambda_7\in Y(n,7)$, then \[ n = N(z) = a^2-ab+2b^2 \equiv a(a-b)\pmod{2}. \] This implies that $a$ and $a-b$ are odd. Hence $b$ is even, i.e.~$b=2y$ for some $y\in {\mathbb Z}$. Consequently, $z=(a-y)+y\sqrt{-7}=\varphi_{n,7}(a-y,y)$.
This together with Proposition \ref{prop:count-given-norm} proves \ref{item:a7-n1}. \end{proof} \subsection{The case \texorpdfstring{$a\geq11$}{a>=11}} For the remaining case, we have $\qZ{a}^\times=\{\pm1\}$ and $2$ remains prime in $\qZ{a}$. Hence $\qZ{a}/(2)$ is a field with four residue classes represented by $0,1,\lambda_a$ and $\lambda_a^2\equiv 1+\lambda_a\bmod2$. Consequently, the elements of $X(n,a)$ will be enumerated differently from the case $a=7$. We begin with the following observation: \begin{lem} Let $a$ be a Heegner number such that $a\geq 11$ and $p$ be a prime number which splits completely in $\qZ{a}$. Then $p$ can be written as $x^2+ay^2$ for some $x,y\in{\mathbb Z}$ if and only if its prime factors in $\qZ{a}$ are congruent to $1$ modulo $2$. \end{lem} \begin{proof} Let $\pi=b+c\lambda_a$ be a prime factor of $p$. Then $p=\pi\overline{\pi}$ with $\overline{\pi}=b+c\overline{\lambda_a} = (b-c)-c\lambda_a$. If both $\pi$ and $\overline{\pi}$ are congruent to $1$ modulo $2$, then $2\mid c$, say $c=2v$ for some $v\in{\mathbb Z}$, implying that $p=N(b+2v\lambda_a)=(b-v)^2+av^2$. Conversely, if $p=x^2+ay^2=(x+y\sqrt{-a})(x-y\sqrt{-a})$ for some $x,y\in{\mathbb Z}$, then $b+c\lambda_a=\pm x\pm y\sqrt{-a} = \pm x\pm(y+2y\lambda_a)$, implying that $c=\pm2y$, i.e. $\pi,\overline{\pi}\equiv 1\pmod{2}$ as desired. \end{proof} \begin{ex} Consider $a=11$ and $p=5$. Since $5$ is a quadratic residue modulo $11$, it follows that $5$ splits completely in $\qZ{11}$. In fact, $5=(2+\lambda_{11})(2+ \overline{\lambda_{11}})$. Its prime factors in $\qZ{11}$, namely $2+\lambda_{11}$ and $2+\overline{\lambda_{11}} = 1-\lambda_{11}$ are both not congruent to $1$ modulo $2$. This corresponds to the fact that $5$ is not expressible as $x^2+11y^2$ for any $x,y\in{\mathbb Z}$ as can be easily seen. \end{ex} \begin{lem} \label{lem:x2ay2even-a3mod8} Let $a$ be a square-free natural number congruent to $3$ modulo $8$ and $n$ be an even natural number. 
Then $|X(n,a)|=|Y(n,a)|$. \end{lem} \begin{proof} Consider the map $\varphi_{n,a}$ defined in \eqref{eqn:ph-a-n}. It is easy to see that $\varphi_{n,a}$ is well-defined and injective. To see the surjectivity, let $z\in Y(n,a)$. Then $2\mid n=N(z)=z\overline{z}$. Since $2$ remains prime in $\qZ{a}$ (note that this does not require the uniqueness of factorization in $\qZ{a}$), it follows that $2\mid z$ or $2\mid\overline{z}$, but the latter case also implies that $2\mid z$. Hence there are $b,c\in{\mathbb Z}$ such that $z=2(b+c\lambda_a)=\varphi_{n,a}(2b-c,c)$ as desired. \end{proof} \begin{thm} \label{thm:Xan-11} Let $n$ be a natural number and $a$ be a Heegner number such that $a\geq 11$. \begin{enumerate}[label=\textup{(\alph*)}] \item \label{item:a11-n2} If $n$ is even, then $\displaystyle |X(n,a)| = |Y(n,a)|$. \item \label{item:a11-n1} If $n$ is odd, then \begin{equation} \label{eqn:Xan-11-nodd} |X(n,a)| = \frac13 \left[1+2\cdot\frac{(\tau(n_q)|3)}{\tau(n_q)}\right]\cdot|Y(n,a)|, \end{equation} where $n_q$ denotes the product of all prime factors (including multiplicity) of $n$ which are quadratic residues modulo $a$ but not expressible as $x^2+ay^2$ for any integers $x,y$. \end{enumerate} \end{thm} Note that the results from \cite[Cor.\,1 and 3]{Kaplan2004OnTN} are contained in \ref{item:a11-n2}. \begin{ex} Let $a=11$ and $n=437805=3^4\cdot5\cdot23\cdot47$. We see that $47=6^2+11\cdot1^2$ and $3,5,23$ are the prime factors of $n$ which are quadratic residues modulo $11$ but not expressible as $x^2+11y^2$. This implies that $|Y(n,a)|=2\tau(n)=80$ and $n_q=3^4\cdot5\cdot23$, i.e.~$\tau(n_q)=20$. Therefore, \[ |X(n,a)| = \frac{1}{3}\left[1+2\cdot\frac{(20|3)}{20}\right]\cdot80 = 24. \] In fact, a computation shows that \[ X(n,a) = \left\{\begin{array}{c} (\pm609,\pm78),(\pm543,\pm114),(\pm513,\pm126),\\(\pm367,\pm166),(\pm271,\pm182),(\pm81,\pm198)\end{array} \right\}.
\] \end{ex} \begin{proof}[Proof of Theorem \ref{thm:Xan-11}] The case $n$ is even follows from Lemma \ref{lem:x2ay2even-a3mod8}. Now assume that $n$ is odd. Following the proof of loc.~cit., determining $|X(n,a)|$ amounts to counting the number of elements of $Y(n,a)$ of the form $x+y\sqrt{-a}=(x-y)+2y\lambda_a$ for some $x,y\in{\mathbb Z}$, or equivalently, those congruent to $1$ modulo $2\qZ{a}$. To this end, fix a set ${\mathcal P}$ of representatives of the association classes of prime elements of $\qZ{a}$ which is stable under conjugation. For each $z\in Y(n,a)$, consider its prime factorization (note that $a$ is necessarily a prime integer and is ramified in ${\mathbb Q}(\sqrt{-a})/{\mathbb Q}$): \begin{equation} \label{eqn:factor-in-qZa} z = \varepsilon(\sqrt{-a})^{\delta}\prod_{i=1}^l \Bigl(\pi_{i}^{\alpha_i}\overline{\pi}_{i}^{\alpha'_i}\Bigr) \prod_{j=1}^m \Bigl(\rho_{j}^{\beta_j}\overline{\rho}_{j}^{\beta'_j}\Bigr) \prod_{k=1}^sr_k^{\gamma_k}, \end{equation} where $\varepsilon\in\qZ{a}^\times$; $(\pi_i,\bar{\pi}_i)$ are conjugate pairs of primes in ${\mathcal P}$ dividing prime numbers $p_i$ which can be written as $x^2+ay^2$; $(\rho_j,\overline{\rho}_j)$ are conjugate pairs of primes in ${\mathcal P}$ dividing prime numbers $q_j$ which are quadratic residues modulo $a$ but not expressible as $x^2+ay^2$ such that $\rho_j\equiv\lambda_a^2$ and hence $\overline{\rho}_j\equiv\lambda_a\bmod{2}$; and $r_k$ are prime numbers that are not quadratic residues modulo $a$. Taking norm and comparing this with the prime factorization of $n$ yields \[ \delta = v_a(n), \quad \alpha_i+\alpha_i'=v_{p_i}(n), \quad \beta_j+\beta_j'=v_{q_j}(n) \quad \text{and} \quad 2\gamma_k=v_{r_k}(n), \] where $v_p(n)$ denotes the $p$-adic valuation of $n$, i.e.~the exponent of the highest power of $p$ that divides $n$. 
This means that the values of $\delta$ and $\gamma_k$'s are fixed and $0\leq\alpha_i,\alpha_i'\leq v_{p_i}(n)$ for all $i$ and $0\leq\beta_j,\beta_j'\leq v_{q_j}(n)$ for all $j$. Furthermore, reducing \eqref{eqn:factor-in-qZa} modulo $2$ yields \[ 1\equiv z \equiv \prod_j \lambda_a^{2\beta_j+\beta_j'} = \prod_j \lambda_a^{2\beta_j+v_{q_j}(n)-\beta_j} = \lambda_a^{\sum_{j}(v_{q_j}(n)+\beta_j)} \pmod{2}. \] This means that, provided that $2\mid v_r(n)$ for all primes $r$ which remain prime in $\qZ{a}$, we have \begin{equation} \label{eqn:Xan-11-nodd-pre} |X(n,a)| = 2\prod_{i=1}^l\bigl(v_{p_i}(n)+1\bigr) \cdot \bigl|T(v_{q_1}(n),\ldots,v_{q_m}(n))\bigr| \end{equation} where \begin{equation} \label{eqn:T-v-v} T(v_1,\ldots,v_m) := \Biggl\{ (b_1,\ldots,b_m)\in{\mathbb N}_0^m \ \bigg| \ b_j\leq v_j \ \text{and} \ \sum_{j=1}^m (v_j+b_j) \equiv 0 \bmod{3} \Biggr\}. \end{equation} To determine the cardinality of $T(v_1,\ldots,v_m)$, observe that it is equal to the constant term of the remainder in the polynomial division of \[ F_{(v_1,\ldots,v_m)}(x) := \prod_{j=1}^m G_{v_j}(x) \] by $x^3-1$, where $G_v(x):=x^v+x^{v+1}+\cdots+x^{2v}$. To compute this, write \begin{equation} \label{eqn:mod-x3-1} F_{(v_1,\ldots,v_m)}(x) = Q(x)(x^3-1)+(A+Bx+Cx^2), \end{equation} where $Q(x)\in{\mathbb Z}[x]$ and $A,B,C\in{\mathbb Z}$. Denote by $\omega\in{\mathbb C}$ a primitive third root of unity. Evaluating \eqref{eqn:mod-x3-1} at $x=1,\omega,\omega^2$ and summing all three obtained equations yields \[ 3A = F_{(v_1,\ldots,v_m)}(1) + F_{(v_1,\ldots,v_m)}(\omega) + F_{(v_1,\ldots,v_m)}(\omega^2). \] On the other hand, we have $G_v(1)=v+1$ and $G_v(\omega)=G_v(\omega^2)=r(v)\in\{-1,0,1\}$ such that $v+1\equiv r(v)\pmod{3}$. Note that $r(v)$ is exactly the Legendre symbol of $v+1$ over $3$. This implies that \begin{equation} \label{eqn:T-cardinality} |T(v_1,\ldots,v_m)| = A = \frac{1}{3}\left[V+2\left(\frac{V}{3}\right)\right], \quad \text{where} \ V:=\prod_{j=1}^m(v_j+1).
\end{equation} Hence \eqref{eqn:Xan-11-nodd} follows from \eqref{eqn:Xan-11-nodd-pre} and \eqref{eqn:T-cardinality} in combination with the observation that if $2\mid v_r(n)$ for all primes $r$ which remain prime in $\qZ{a}$, then \[ |Y(n,a)| = 2\prod_{i=1}^l\bigl(v_{p_i}(n)+1\bigr)\prod_{j=1}^m\bigl(v_{q_j}(n)+1\bigr), \] which can be deduced from Proposition \ref{prop:count-given-norm}. \end{proof} \section{Applications of Eisenstein integers and the Cubic Reciprocity Law} \label{sec:eisenstein} The special feature of the case $a=3$ is that $\qZ{3}^\times$ is exactly the group of the sixth roots of unity, whereas $\qZ{a}^\times$ for $a>3$ consists only of $1$ and $-1$. Hence the discussion needs to be done separately. In what follows, we will write $\omega:=\lambda_3\in\qZ{3}$. This is a primitive third root of unity and satisfies $1+\omega+\omega^2=0$. \begin{thm} \label{thm:Xan-3} Let $n$ be a natural number. \begin{enumerate}[label=\textup{(\alph*)}] \item If $n$ is even, then $\displaystyle |X(n,3)| = 6\displaystyle\sum_{c|n} \left ( \frac{c}{3} \right )$. \item If $n$ is odd, then $\displaystyle |X(n,3)| = 2\displaystyle\sum_{c|n} \left ( \frac{c}{3} \right )$. \end{enumerate} \end{thm} \begin{proof} We first treat the case $n$ is even and claim that the map $\varphi_{n,3}$ from \eqref{eqn:ph-a-n} is bijective. To see the surjectivity, observe that for all $z\in Y(n,3)$, we have $2\mid N(z)=z\overline{z}$. Since $2$ remains prime in $\qZ3$ and $2\mid z$ if and only if $2\mid \overline{z}$, it follows that $2\mid z$, i.e.~$z=2a+2b\omega=\varphi_{n,3}(2a-b,b)$ for some $a,b\in{\mathbb Z}$. Combining this result with Proposition \ref{prop:count-given-norm} yields \[ |X(n,3)| = |Y(n,3)| = 6\displaystyle\sum_{c|n} \left ( \frac{c}{3} \right ). \] We now come to the case $n$ is odd. The set $Y(n,3)$ may be partitioned into subsets of the form $\{z,z\omega,z\omega^2\}$, i.e.~orbits under the action of $\{1,\omega,\omega^2\}$ given by multiplication.
Now $\{1,\omega,\omega^2\}$ is a reduced residue system in $\qZ{3}$ modulo $2$. Hence for each $z\in Y(n,3)$, the elements $z,z\omega,z\omega^2$ are pairwise distinct modulo $2$ since $\gcd(2,z)=1$. Hence exactly one of them is congruent to $1$, i.e.~of the form $a+2b\omega=\varphi_{n,3}(a-b,b)$ for some $a,b\in{\mathbb Z}$. Combining this result with Proposition \ref{prop:count-given-norm} yields \[ |X(n,3)| = \frac13|Y(n,3)| = 2\displaystyle\sum_{c|n} \left ( \frac{c}{3} \right ) \qedhere \] \end{proof} We conclude with the case $a=27$, in which also the Cubic Reciprocity Law is involved. To this end, observe first that \cite[Prop.\,9.3.5]{ireland-rosen} each association class of prime elements of $\qZ{3}$ not dividing $3$ contains exactly one \textbf{primary prime}, by which we mean a prime element $\pi\in\qZ{3}$ congruent to $2$ modulo $3$, i.e.~$\pi=a+b\omega$ for some integers $a,b$ such that $a\equiv2$ and $b\equiv0\bmod3$. \begin{thm} Let $p$ be a prime number congruent to $1$ modulo $3$. The following are equivalent: \begin{enumerate} \item There are integers $x,y$ such that $p=x^2+27y^2$. \item $2$ is a cubic residue modulo $p$. \item If $\pi\in\qZ{3}$ is a primary prime dividing $p$, then $\pi\equiv1\pmod{2}$. \end{enumerate} \end{thm} \begin{proof} \cite[Prop.\,9.6.1-2]{ireland-rosen} \end{proof} \begin{thm} \label{thm:Xan-27} Let $n$ be a natural number. \begin{enumerate}[label=\textup{(\alph*)}] \item \label{item:a27-n3} If $3 \mid n$, then $X(n,27)\neq \emptyset$ only if $9 \mid n$. In this case, we have \[ |X(n,27)| = |X(\tfrac{n}{9},3)|. \] \item \label{item:a27-n2} If $3\nmid n$ but $2\mid n$, then \[ |X(n,27)| = \frac13|X(n,3)|. \] \item \label{item:a27-n61} If $\gcd(6,n)=1$, then \[ |X(n,27)| = \frac13\left[1+2\cdot\frac{(\tau(n_q)|3)}{\tau(n_q)}\right]\cdot|X(n,3)|, \] where $n_q$ denotes the product of all prime factors $q$ (including multiplicity) of $n$ such that $q\equiv1\pmod{3}$ and $2$ is not a cubic residue modulo $q$.
\end{enumerate} \end{thm} Note that this result agrees with \cite[Thm.\,6.1]{berkovich-ramanujan}. \begin{proof} We begin with the case $3\mid n$. Observe that if $(x,y)\in X(n,27)$, then $3$ divides $n-27y^2=x^2$. Hence $x=3v$ for some $v\in{\mathbb Z}$. In particular, if $X(n,27)$ contains an element $(x,y)=(3v,y)$, then $n=9v^2+27y^2$, i.e.~$9\mid n$ and $v^2+3y^2=\frac{n}{9}$. This yields a bijection between $X(n,27)$ and $X(\frac{n}{9},3)$, which proves \ref{item:a27-n3}. In order to treat the case $3\nmid n$, observe first that the map \begin{equation} \label{eqn:psi-27-3} \psi:X(n,27)\to Y(n,3), \ (x,y)\mapsto x+3y\sqrt{-3} \end{equation} is well-defined and injective. Furthermore, $z=a+b\omega\in Y(n,3)$ lies in the image of $\psi$ if and only if $6\mid b$, or equivalently, $z\equiv\pm1,\pm2\pmod{6}$. For the case $2\mid n$, i.e.~$\gcd(6,n)=2$, we claim that for each $z\in Y(n,3)$, exactly one of $z,z\omega,z\omega^2$ is in the image of $\psi$. In fact, the condition $3\nmid n$ and $2\mid n$ implies that $(1-\omega)\nmid z$ and $2\mid z$. Consequently, $\gcd(z,6)=\gcd(z,2(1-\omega)^2)=2$. Hence $z$ is congruent to exactly one of the following elements modulo $6$: \[ \pm2,\pm2\omega,\pm2\omega^2=\mp2\mp2\omega. \] This implies that exactly one of $z,z\omega,z\omega^2$ is of the form $a+b\omega$ for some $a,b\in{\mathbb Z}$ such that $a\equiv\pm2\pmod{6}$ and $6\mid b$. Therefore $|Y(n,3)|=3|X(n,27)|$, which proves \ref{item:a27-n2} in combination with Theorem \ref{thm:Xan-3}. We now come to the case $\gcd(6,n)=1$.
For each $z\in Y(n,3)$, consider its prime factorization \begin{equation} \label{eqn:factor-eisenstein} z = \varepsilon\prod_{i=1}^l \Bigl(\pi_{i}^{\alpha_i}\overline{\pi}_{i}^{\alpha'_i}\Bigr) \prod_{j=1}^m \Bigl(\rho_{j}^{\beta_j}\overline{\rho}_{j}^{\beta'_j}\Bigr) \prod_{k=1}^sr_k^{\gamma_k}, \end{equation} where $\varepsilon\in\qZ{3}^\times=\{\pm1,\pm\omega,\pm\omega^2\}$; $(\pi_i,\bar{\pi}_i)$ are conjugate pairs of primary primes dividing prime numbers $p_i$ such that $\pi_i\equiv1\bmod2$; $(\rho_j,\overline{\rho}_j)$ are conjugate pairs of primary primes dividing prime numbers $q_j$ such that $\rho_j\equiv\omega^2$ and hence $\overline{\rho}_j\equiv\omega\bmod{2}$; and $r_k$ are prime numbers congruent to $2$ modulo $3$, which remain prime in $\qZ{3}$. Taking norms and comparing with the prime factorization of $n$ yields \[ \alpha_i+\alpha_i'=v_{p_i}(n), \quad \beta_j+\beta_j'=v_{q_j}(n) \quad \text{and} \quad 2\gamma_k=v_{r_k}(n), \] i.e.~the values of the $\gamma_k$ are determined, while $0\leq\alpha_i,\alpha_i'\leq v_{p_i}(n)$ for all $i$ and $0\leq\beta_j,\beta_j'\leq v_{q_j}(n)$ for all $j$. Furthermore, reducing \eqref{eqn:factor-eisenstein} modulo $3$ yields \[ \pm1\equiv z \equiv \varepsilon(-1)^{\sum_i(\alpha_i+\alpha'_i)+\sum_j(\beta_j+\beta_j')+\sum_k\gamma_k} \pmod{3}, \] which implies that $\varepsilon=\pm1$. Now reducing \eqref{eqn:factor-eisenstein} modulo $2$ yields \[ 1\equiv z \equiv \prod_j \lambda_a^{2\beta_j+\beta_j'} = \prod_j \lambda_a^{2\beta_j+v_{q_j}(n)-\beta_j} = \lambda_a^{\sum_{j}(v_{q_j}(n)+\beta_j)} \pmod{2}. \] This means that, provided that $2\mid v_r(n)$ for all primes $r$ which remain prime in $\qZ{3}$, we have \begin{equation} \label{eqn:Xan-27-n6-pre} |X(n,27)| = 2\prod_{i=1}^l\bigl(v_{p_i}(n)+1\bigr) \cdot \bigl|T(v_{q_1}(n),\ldots,v_{q_m}(n))\bigr| \end{equation} where $T(v_{q_1}(n),\ldots,v_{q_m}(n))$ is as defined in \eqref{eqn:T-v-v} in the proof of Theorem \ref{thm:Xan-11}.
Hence an argument similar to that in the proof of loc.\,cit.\ applies here, which proves \ref{item:a27-n61}. \end{proof} \begin{rmk} In contrast to the cases treated here, an explicit criterion, depending on $a$, for a prime $p$ to be of the form $x^2+ay^2$ is not known to the authors; \cite[Thm.\,9.2]{cox-primes} only guarantees that such a polynomial criterion exists. The case $a=11$ has been discussed in \cite{primex211y2} with an explicit polynomial, but it seems unlikely that this result extends to the general case, even for Heegner numbers. \end{rmk}
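The counting statements above can be spot-checked by brute force. The following Python sketch (ours, not part of the paper) assumes that $X(n,a)$ denotes the set of ordered pairs $(x,y)\in{\mathbb Z}^2$ with $x^2+ay^2=n$; it tests parts \ref{item:a27-n3} and \ref{item:a27-n2} of Theorem \ref{thm:Xan-27} for small $n$, together with the divisor-sum formula for $|X(n,3)|$ in the odd case (the case $\gcd(2,z)=1$ treated in the proof above).

```python
import math

def count_reps(n, a):
    """|X(n,a)|: number of ordered pairs (x, y) in Z^2 with x^2 + a*y^2 = n."""
    cnt = 0
    for y in range(-math.isqrt(n // a), math.isqrt(n // a) + 1):
        r = n - a * y * y
        x = math.isqrt(r)
        if x * x == r:
            cnt += 2 if x > 0 else 1  # count both x and -x, or just x = 0
    return cnt

def chi3(c):
    """The quadratic character (c/3): 0, 1, -1 according to c mod 3."""
    return (0, 1, -1)[c % 3]

# |X(n,3)| = 2 * sum_{c | n} (c/3) for odd n (the case gcd(2, z) = 1 above).
for n in range(1, 200, 2):
    assert count_reps(n, 3) == 2 * sum(chi3(c) for c in range(1, n + 1) if n % c == 0)

# Theorem, part (a): for 3 | n there are no solutions unless 9 | n, and then
# |X(n,27)| = |X(n/9,3)|.  Part (b): for n even and not divisible by 3,
# |X(n,27)| = |X(n,3)| / 3.
for n in range(1, 400):
    if n % 3 == 0:
        if n % 9 == 0:
            assert count_reps(n, 27) == count_reps(n // 9, 3)
        else:
            assert count_reps(n, 27) == 0
    elif n % 2 == 0:
        assert 3 * count_reps(n, 27) == count_reps(n, 3)
```

Part \ref{item:a27-n61} is omitted from the check since it requires locating the primes $q$ modulo which $2$ is not a cubic residue.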
https://arxiv.org/abs/1706.01477
Heat content and horizontal mean curvature on the Heisenberg group
We identify the short time asymptotics of the sub-Riemannian heat content for a smoothly bounded domain in the first Heisenberg group. Our asymptotic formula generalizes prior work by van den Berg-Le Gall and van den Berg-Gilkey to the sub-Riemannian context, and identifies the first few coefficients in the sub-Riemannian heat content in terms of the horizontal perimeter and the total horizontal mean curvature of the boundary. The proof is probabilistic, and relies on a characterization of the heat content in terms of Brownian motion.
\section{Introduction} Let us begin by recalling the classical heat content problem in Euclidean space. Let $\Omega \subset \mathbb R^n$ be a bounded domain with finite volume $\Vol(\Omega)$ and finite perimeter $P(\Omega)$. Denote by $v(x,t)$ the solution to the heat equation in $\Omega$ with Dirichlet boundary condition: \begin{equation*} \begin{matrix} v_t = \tfrac12 \triangle v & \mbox{in $\Omega \times (0,\infty)$,} \\ v(x,t) = 0 & \mbox{for $(x,t) \in {\partial\Omega} \times(0,\infty)$}, \\ v(x,0) = 1 & \mbox{for $x \in \Omega$}. \end{matrix} \end{equation*} The {\bf heat content} of $\Omega$ at time $t>0$ is defined to be $$ {\mathbf Q}_\Omega(t) = \int_\Omega v(x,t) \, dx. $$ The short time asymptotics of ${\mathbf Q}_\Omega$ are controlled by geometric data involving the domain $\Omega$ and its boundary. Intuitively, one expects that the rate of escape of heat from $\Omega$ will depend, to first order, on the perimeter of $\Omega$. Moreover, it is reasonable to further conjecture that subsequent corrections should involve some type of curvature invariant of ${\partial\Omega}$. The following result of van den Berg and Le Gall \cite{vdblg:heat} formalizes this intuition. If $\Omega$ has $C^3$ smooth boundary, then \begin{equation}\label{Q-Euclidean} {\mathbf Q}_\Omega(t) = \Vol(\Omega) - \sqrt{\frac{2t}{\pi}} \sigma({\partial\Omega}) + \frac14 t \int_{{\partial\Omega}} H_{\partial\Omega} \, d\sigma + o(t), \end{equation} where $\sigma$ denotes the surface area measure on ${\partial\Omega}$ and $H_\Sigma(x)$ denotes the mean curvature of a surface $\Sigma$ at $x$. (For smoothly bounded domains, the surface area $\sigma({\partial\Omega})$ coincides with the perimeter $P(\Omega)$.) The asymptotic expansion \eqref{Q-Euclidean} is closely related to Ledoux's characterization of perimeter in terms of the heat equation, inspired by de Giorgi's original definition of perimeter \cite{degiorgi:teoria}.
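The expansion \eqref{Q-Euclidean} can be illustrated in the simplest possible case, the interval $\Omega=(0,1)\subset\mathbb R$ (our toy example, not from \cite{vdblg:heat}): here $\sigma({\partial\Omega})=2$, the curvature term vanishes, and the Dirichlet solution expands in sine eigenfunctions, giving ${\mathbf Q}_\Omega(t)=\sum_{n\ \mathrm{odd}}\tfrac{8}{n^2\pi^2}e^{-n^2\pi^2 t/2}$. The following Python sketch checks ${\mathbf Q}_\Omega(t)=1-2\sqrt{2t/\pi}$ up to lower-order (here exponentially small) errors.

```python
import math

def Q_interval(t, nmax=4001):
    """Dirichlet heat content Q(t) of Omega = (0,1) for v_t = (1/2) v_xx,
    from the eigenfunction expansion (only odd sine modes contribute)."""
    return sum(8.0 / (n * n * math.pi ** 2) * math.exp(-n * n * math.pi ** 2 * t / 2)
               for n in range(1, nmax, 2))

for t in (1e-3, 1e-4):
    asymptotic = 1.0 - 2.0 * math.sqrt(2 * t / math.pi)  # Vol - sqrt(2t/pi) * sigma
    assert abs(Q_interval(t) - asymptotic) < 1e-6
```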
Let $$ p_t(x,y) = (2\pi t)^{-n/2} \exp(-|x-y|^2/2t) $$ and let $u(x,t) = \int_\Omega p_t(x,y) \, dy$ solve the heat equation $u_t = \tfrac12 \triangle u$ in $\mathbb R^n \times (0,\infty)$ with $u(x,0) = \mathbbm{1}_\Omega(x)$. The {\bf heat content of $\Omega$ in $\mathbb R^n$} at time $t>0$ is \begin{equation}\label{H-Euclidean} {\mathbf H}_\Omega(t) = \int_\Omega u(x,t) \, dx = \iint_{\Omega \times \Omega} p_t(x,y) \, dy \, dx\,. \end{equation} Ledoux \cite{ledoux:semigroup} identified the perimeter of $\Omega$ as follows: \begin{equation}\label{Ledoux} P(\Omega) = \lim_{t\to 0} \sqrt{\frac{2\pi}{t}} \iint_{\Omega \times \Omega^c} p_t(x,y) \, dy \, dx\,. \end{equation} From \eqref{H-Euclidean} and \eqref{Ledoux} it is easy to see that $$ {\mathbf H}_\Omega(t) = \Vol(\Omega) - \sqrt{\frac{t}{2\pi}} P(\Omega) + o(\sqrt{t}). $$ An instructive comparison of these two problems can be found in van den Berg \cite{vdb:heat-and-perimeter}, where the situation for domains with nonsmooth boundary is also considered. For smooth boundaries, higher order terms in the short time expansion of ${\mathbf H}_\Omega(t)$ were obtained by Angiuli--Massari--Miranda \cite{amm:heat-content}. Van den Berg and Gilkey \cite{vdbg:manifold} extended the theory to Riemannian manifolds and obtained further terms in the short time expansion of ${\mathbf Q}_\Omega(t)$. We refer the interested reader to a pair of excellent survey articles by Gilkey \cite{gil:survey1}, \cite{gil:survey2}. The sub-Riemannian Heisenberg group and more general nilpotent stratified Lie groups (i.e., Carnot groups) provide a natural testing ground for analysis and geometry beyond the Riemannian setting. The connection between horizontal perimeter and the sub-Riemannian heat equation has already been studied by Bramanti--Miranda--Pallara \cite{bmp:bv}, who obtained a precise analog of Ledoux's characterization in step two Carnot groups.
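For a concrete two-dimensional illustration of the last expansion (our example, not from the paper), take $\Omega=(0,1)^2\subset\mathbb R^2$, with $\Vol(\Omega)=1$ and $P(\Omega)=4$. By Fubini, ${\mathbf H}_\Omega(t)$ is the square of a one-dimensional integral which has a closed form in terms of the error function; the corner contributions enter only at order $t$.

```python
import math

def H_square(t):
    """Heat content H_Omega(t) of the unit square (0,1)^2 in R^2.
    By Fubini, H = I(t)^2, where I(t) is the integral of the one-dimensional
    kernel (2*pi*t)^(-1/2) * exp(-(x-y)^2/(2t)) over (0,1)^2."""
    I = (math.erf(1.0 / math.sqrt(2.0 * t))
         - 2.0 * math.sqrt(t / (2.0 * math.pi)) * (1.0 - math.exp(-1.0 / (2.0 * t))))
    return I * I

for t in (1e-3, 1e-4, 1e-5):
    asymptotic = 1.0 - 4.0 * math.sqrt(t / (2.0 * math.pi))  # Vol - sqrt(t/(2 pi)) * P
    assert abs(H_square(t) - asymptotic) < t  # corners contribute only at order t
```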
Recently, Marola--Miranda--Shanmugalingam \cite{mms:mms} generalized such results even further into the category of metric measure spaces supporting a Poincar\'e inequality. However, it appears that, up to now, more precise asymptotics for heat content have not been studied, even in the setting of the Heisenberg group. In this paper we identify short time asymptotics for the sub-Riemannian heat content ${\mathbf Q}_\Omega$ (in the sense of van den Berg and Le Gall) for a smoothly bounded domain in the first Heisenberg group. Let $\mathbb{H}$ denote the first Heisenberg group, let $X_1$ and $X_2$ denote the standard frame for the horizontal distribution, and let $\triangle_0 = X_1^2+X_2^2$ denote the subelliptic Laplacian. See section \ref{sec:prelim1} for definitions. We fix a bounded domain $\Omega \subset \mathbb{H}$ with boundary ${\partial\Omega}$, and let $v(x,t)$ denote the solution to the heat equation with Dirichlet boundary conditions: \begin{equation}\label{eq:Dirichlet} \begin{matrix} v_t = \tfrac12\triangle_0 v & \mbox{in $\Omega \times (0,\infty)$,} \\ v(x,t) = 0 & \mbox{for $(x,t) \in {\partial\Omega} \times(0,\infty)$}, \\ v(x,0) = 1 & \mbox{for $x \in \Omega$}. \end{matrix} \end{equation} As before, the heat content of $\Omega$ at time $t$ is defined to be $$ {\mathbf Q}_{\Omega}(t) = \int_\Omega v(x,t) \, dx, $$ where the integral is taken with respect to the Haar measure on $\mathbb{H}$ (which agrees with Lebesgue measure in $\mathbb R^3$), and we are interested in the short time asymptotics of ${\mathbf Q}_\Omega$. We denote by $\sigma_0$ the horizontal perimeter measure on ${\partial\Omega}$, which is defined provided ${\partial\Omega}$ is at least $C^1$, and by $H_{{\partial\Omega},0}(x)$ the horizontal mean curvature at $x \in {\partial\Omega}$, which is defined provided ${\partial\Omega}$ is at least $C^2$. For definitions of and further discussion about these geometric quantities, see subsection \ref{subsec:perim-and-mean-curvature}. 
We remark that the horizontal mean curvature of a surface $\Sigma$ is only defined pointwise at noncharacteristic points. In this paper we will assume that the boundary of $\Omega$ has no characteristic points. Our main theorem provides an exact analog of \eqref{Q-Euclidean} in the Heisenberg setting. \begin{theorem}\label{thm-heat-content-prob} Let $\Omega$ be a bounded domain in $\mathbb{H}$ with boundary $\partial \Omega$ which is of class $C^3$ and which is completely noncharacteristic. Then the asymptotic expansion \begin{equation}\label{Q-Heisenberg} {\mathbf Q}_{\Omega}(t) = \Vol(\Omega) - \sqrt{\frac{2t}{\pi}} \sigma_0({\partial\Omega}) + \frac{t}{4} \int_{{\partial\Omega}} H_{{\partial\Omega},0}(s) \, d\sigma_0(s) + o(t) \end{equation} holds in the limit as $t \to 0$. \end{theorem} This paper is structured as follows. Section \ref{sec:prelim1} reviews background material on the geometry of the Heisenberg group $\mathbb{H}$, especially the structure of tubular neighborhoods of smooth surfaces. Many results which we state are taken from a recent paper by Ritor\'e \cite{rit:tubular}. Section \ref{sec:prelim2} contains the necessary probabilistic preliminaries. We reformulate the problem in terms of the exit time of a Brownian motion process on $\mathbb{H}$, and perform a series of reductions which eventually allow us to deduce Theorem \ref{thm-heat-content-prob} from a corresponding theorem (Theorem \ref{thm-heat-content-prime}) for a stochastic process involving L\'evy's area form. We reduce the proof of the latter statement to three lemmas. In section \ref{sec:proof} we give the (rather technical) proofs of these lemmas. Some auxiliary calculations are deferred to an appendix for ease of exposition. We conclude this introduction with some additional comments on the heat content problem in the Heisenberg group, and directions for future work. 
First, we point out that there is an alternative approach to our main theorem which relies on the appearance of the sub-Riemannian metric on the Heisenberg group as a Gromov--Hausdorff limit of a sequence of Riemannian metrics. The use of this technique to establish results in sub-Riemannian geometry is by now a standard approach which has been used successfully by many authors. As a tool for understanding the sub-Riemannian geometry of submanifolds of Heisenberg groups, this approach featured prominently in the book \cite{cdpt:survey}. We anticipate that a careful analysis of the behavior of asymptotic formulas such as \eqref{Q-Euclidean} (or, more precisely, their Riemannian analogs as found in \cite{vdbg:manifold}) under degenerating limits of Riemannian metrics should reproduce our main asymptotic estimate \eqref{Q-Heisenberg} and possibly yield further terms in such expansions, similar to those found in Steiner's formula for the Carnot--Carath\'eodory metric \cite{BFFVW} and \cite{BTV}. We plan to return to this idea in a future paper. The heat content ${\mathbf H}_\Omega(t)$ of a domain $\Omega$ relative to the full Heisenberg group also deserves further study. As previously mentioned, the first order term (involving perimeter) in the short time expansion of ${\mathbf H}_\Omega(t)$ has been identified by Bramanti, Miranda and Pallara, but analogs of the higher order formulas of Angiuli--Massari--Miranda \cite{amm:heat-content} remain unexplored in the Heisenberg setting, as do extensions to other Carnot groups. The case of higher dimensional Heisenberg groups, or perhaps general step two Carnot groups, should be a natural first step. Adapting the methods of this paper to those settings would require a precise understanding of the structure of tubular neighborhoods of hypersurfaces, which is currently unavailable. The Riemannian approximation methodology described in the preceding paragraph, however, would in principle be effective in all such settings.
Finally, we would like to point out another possible extension of this heat content problem to other curved sub-Riemannian model spaces, such as the Cauchy-Riemann sphere $\mathbb S^{2n+1}$ and anti-de Sitter space $AdS^{2n+1}$. Subelliptic heat kernels on these spaces are well understood, and explicit expressions can be obtained (see \cite{BB}, \cite{B}, \cite{CRS}, \cite{CRH}). In \cite{BW}, the authors studied Brownian motion processes on these model spaces as horizontal lifts of Brownian motions on complex projective space $\mathbb{CP}^n$ and complex hyperbolic space $\mathbb{CH}^n$ respectively, where the fiber motions are exactly given by the stochastic area processes on $\mathbb{CP}^n$ and $\mathbb{CH}^n$. Following a similar intuition as in the present paper (as well as the analytic approach previously mentioned), one may proceed to obtain small time expansions of the heat content on these curved spaces, and observe the appearance of the curvatures of the ambient spaces. \section{Geometric preliminaries}\label{sec:prelim1} We model the Heisenberg group $\mathbb{H}$ as the space $\mathbb R^3$ with the following group law: \[ (x_1,x_2,x_3)*(y_1,y_2,y_3) = (x_1+y_1,x_2+y_2,x_3+y_3+x_1y_2-x_2y_1) \,. \] The left invariant vector fields \[ X_1=\frac{\partial}{\partial x_1}-x_2\frac{\partial}{\partial x_3},\quad X_2=\frac{\partial}{\partial x_2}+x_1\frac{\partial}{\partial x_3},\quad X_3 =\frac{\partial}{\partial x_3} \] provide a global frame for the tangent bundle. The vector fields $X_1$ and $X_2$ span, at each point $x \in \mathbb{H}$, the {\bf horizontal tangent space} $\mathcal H_x\mathbb{H}$, and an absolutely continuous curve $\gamma$ valued in $\mathbb{H}$ is said to be {\bf horizontal} if its tangent vector $\gamma'(t)$ lies in $\mathcal H_{\gamma(t)}\mathbb{H}$ whenever it is defined. Introduce a metric $g_0$ on $\mathcal H\mathbb{H}$ by declaring $X_1$ and $X_2$ to be an orthonormal frame.
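The frame just introduced satisfies the bracket relation $[X_1,X_2]=2X_3$, with $X_3$ central, so the horizontal distribution is bracket generating. Since the coefficients of $X_1,X_2$ are affine in the coordinates, a central-difference computation of the Lie bracket is exact up to rounding; the following Python sketch (ours, not from the paper) verifies the relations numerically.

```python
def X1(q): return (1.0, 0.0, -q[1])   # X1 = d/dx1 - x2 d/dx3
def X2(q): return (0.0, 1.0, q[0])    # X2 = d/dx2 + x1 d/dx3
def X3(q): return (0.0, 0.0, 1.0)     # X3 = d/dx3

def dirderiv(F, q, v, h=1e-6):
    """Central-difference derivative of the vector field F at q in direction v."""
    qp = tuple(a + h * b for a, b in zip(q, v))
    qm = tuple(a - h * b for a, b in zip(q, v))
    return tuple((p - m) / (2 * h) for p, m in zip(F(qp), F(qm)))

def bracket(V, W, q):
    """Lie bracket [V, W](q) = (DW) V - (DV) W."""
    dWV = dirderiv(W, q, V(q))
    dVW = dirderiv(V, q, W(q))
    return tuple(a - b for a, b in zip(dWV, dVW))

q = (0.3, -1.2, 0.7)
assert all(abs(b - 2 * c) < 1e-8 for b, c in zip(bracket(X1, X2, q), X3(q)))  # [X1,X2]=2X3
assert max(abs(c) for c in bracket(X1, X3, q)) < 1e-8                         # [X1,X3]=0
assert max(abs(c) for c in bracket(X2, X3, q)) < 1e-8                         # [X2,X3]=0
```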
The {\bf Carnot--Carath\'eodory (CC) metric} $d_{cc}$ is defined by \begin{equation}\label{eq-cc-dist} d_{cc}(x,y) = \inf \length_{cc}(\gamma) \end{equation} where the infimum is taken over all horizontal curves $\gamma:[a,b] \to \mathbb{H}$ joining $x$ to $y$ and \[ \length_{cc}(\gamma) = \int_a^b g_0(\gamma'(s),\gamma'(s))^{1/2}_{\gamma(s)} \, ds. \] The metric $d_{cc}$ is left invariant and geodesic. Explicit formulas for the CC geodesics will appear in subsection \ref{subsec:tubular-neighborhoods}. For later purposes we also introduce the Riemannian metric $g_1$ for which $X_1$, $X_2$ and $X_3$ are an orthonormal frame. Note that the length of any horizontal curve is the same in the $g_0$ and $g_1$ metrics. The ball with center $x$ and radius $r>0$ in the CC metric will be denoted $B_{cc}(x,r)$. \subsection{Perimeter and mean curvature in the Heisenberg group}\label{subsec:perim-and-mean-curvature} Let $\Omega$ be a bounded domain in $\mathbb{H}$ with $C^1$ boundary. For any $s\in{\partial\Omega}$, we consider the tangent space $T_s({\partial\Omega})$, spanned by the vectors tangent to ${\partial\Omega}$ at $s$. We say that $s$ is a {\bf characteristic point} if $T_s({\partial\Omega})$ agrees with the horizontal space $\mathcal H_s\mathbb{H}$; otherwise, $s$ is said to be a {\bf non-characteristic point}. Throughout this paper, we assume that ${\partial\Omega}$ contains no characteristic points. Such an assumption, while clearly restrictive, nevertheless allows for a number of examples. For instance, there are smoothly bounded noncharacteristic tori in $\mathbb{H}$, see for example \cite[Remark 6.4]{tys:gcdima-heisenberg}. Let $\sigma$ be the surface area measure on ${\partial\Omega}$, and let $\vec{n}(s)$ be the outward pointing unit $g_1$-normal at $s \in {\partial\Omega}$. Let $\vec{n}_h$ be the orthogonal projection of $\vec{n}$ into $\mathcal H_s\mathbb{H}$; note that $\vec{n}_h \ne 0$ if and only if $s$ is noncharacteristic.
The {\bf horizontal perimeter measure} $\sigma_0$ on ${\partial\Omega}$ is $d\sigma_0 = |\vec{n}_h| \, d\sigma$. We denote by $N(s)$ the normalized projection of the inward unit $g_1$-normal at $s$, i.e.\ $N(s) = -(\vec{n}_h/|\vec{n}_h|)(s)$. If $\vec{n} = n_1 X_1 + n_2 X_2 + n_3 X_3$ then \begin{equation}\label{eq:N} N(s) = \frac{-n_1X_1-n_2X_2}{|(n_1,n_2)|}. \end{equation} Since $s$ is noncharacteristic, the space $T_s({\partial\Omega})\cap\mathcal H_s\mathbb{H}$ is one-dimensional. We call it the {\bf horizontal tangent space} $\mathcal H T_s({\partial\Omega})$ of ${\partial\Omega}$ at $s$, and we denote by $T(s)$ a unit vector which spans $\mathcal H T_s({\partial\Omega})$. Specifically, if $N(s)$ is as in the previous paragraph then we choose \begin{equation}\label{eq:T} T(s) = \frac{-n_2X_1+n_1X_2}{|(n_1,n_2)|}. \end{equation} The pair \begin{equation}\label{eq:TN} \{N(s),T(s)\} \end{equation} forms an orthonormal basis of $\mathcal H_s\mathbb{H}$ with respect to the sub-Riemannian metric $g_0$. The horizontal tangent vector field $T$ generates a foliation of ${\partial\Omega}$, the {\bf Legendrian foliation}. If $\alpha$ is a curve in the Legendrian foliation with $\alpha(0) = s \in {\partial\Omega}$, then $\alpha'(0) = T(s)$. Assuming that ${\partial\Omega}$ is $C^2$, the {\bf horizontal mean curvature} of ${\partial\Omega}$ at a point $s$ is defined as the horizontal divergence of the horizontal unit normal: $$ H_{{\partial\Omega},0}(s) = \diver_H\left(\frac{\vec{n}_h(s)}{|\vec{n}_h(s)|}\right) $$ where $\diver_H(aX_1+bX_2) = X_1(a)+X_2(b)$. It is known (see e.g.\ \cite[Proposition 4.24]{cdpt:survey}) that $H_{{\partial\Omega},0}(s)$ coincides with the planar curvature of the projection of the Legendrian curve $\alpha$ in ${\partial\Omega}$ through $s$ into the $x_1x_2$-plane. 
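As a concrete example (ours, not from the paper), the vertical cylinder $\{x_1^2+x_2^2=R^2\}$ is noncharacteristic, and its horizontal mean curvature equals $1/R$ at every point, consistent with the description via the planar curvature of the projected Legendrian foliation (the Legendrian curves project to circles of radius $R$). The following Python sketch computes $\diver_H(\vec n_h/|\vec n_h|)$ by finite differences.

```python
import math

R = 2.0  # cylinder radius (arbitrary test value)

def u(q):
    """Defining function of the cylinder {x1^2 + x2^2 = R^2}."""
    return q[0] ** 2 + q[1] ** 2 - R ** 2

def Xderiv(f, q, i, h=1e-5):
    """Horizontal derivative X_i f via central differences, where
    X1 = d/dx1 - x2 d/dx3 and X2 = d/dx2 + x1 d/dx3."""
    v = [(1.0, 0.0, -q[1]), (0.0, 1.0, q[0])][i]
    qp = tuple(a + h * b for a, b in zip(q, v))
    qm = tuple(a - h * b for a, b in zip(q, v))
    return (f(qp) - f(qm)) / (2 * h)

def nu(q):
    """Normalized horizontal gradient of u, i.e. n_h/|n_h| in the X1, X2 frame."""
    g = (Xderiv(u, q, 0), Xderiv(u, q, 1))
    norm = math.hypot(g[0], g[1])
    return (g[0] / norm, g[1] / norm)

def H0(q):
    """Horizontal mean curvature div_H(nu) = X1(nu_1) + X2(nu_2)."""
    return Xderiv(lambda p: nu(p)[0], q, 0) + Xderiv(lambda p: nu(p)[1], q, 1)

s = (R * math.cos(0.4), R * math.sin(0.4), 1.3)  # a point on the cylinder
assert abs(H0(s) - 1.0 / R) < 1e-6
```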
\subsection{Tubular neighborhoods of the boundary of a smooth domain}\label{subsec:tubular-neighborhoods} Since we are only concerned with the heat loss over small times---which is felt close to the boundary ${\partial\Omega}$---it is natural to consider a small inner tubular neighborhood of ${\partial\Omega}$. For $\epsilon>0$ define \[ \Omega_\epsilon=\lbrace x\in \Omega \,|\, \min_{y\in \mathbb{H}\setminus \Omega} d_{cc}(x,y)<\epsilon \rbrace \,. \] We describe the structure of such tubular neighborhoods in a sequence of geometric lemmas. A detailed discussion is in the recent preprint by Ritor\'e \cite{rit:tubular}, where proofs of several of these lemmas can be found. \begin{lemma}\label{lemma-unique-p} Assume that ${\partial\Omega}$ is compact and smooth, without characteristic points. Then there exists $\epsilon>0$ sufficiently small so that for any $x\in\Omega_\epsilon$, there exists a unique point $s\in{\partial\Omega}$ which is nearest to $x$ in the CC-metric. Furthermore, $x$ is joined to $s$ by a unique CC geodesic. \end{lemma} \begin{proof} Let $\mathrm{Unp}(E)$ be the set of points $x\in\mathbb{H}$ for which there is a unique point of $E$ nearest to $x$. For $s\in E$, define $\mathrm{reach}(E,s)$ as the supremum of those values $r>0$ for which $B(s,r)\subset\mathrm{Unp}(E)$. Let \[ \mathrm{reach}(E):=\inf\{\mathrm{reach}(E,s)\,|\,s\in E\}. \] Then we just need to show that $\mathrm{reach}({\partial\Omega})>0$. This is proved in \cite[Theorem 4.2 and Theorem 4.5]{rit:tubular}. The uniqueness of the CC geodesic between $x$ and $s$ follows from \cite[Remark 3.5 and Section 4]{rit:tubular}. \end{proof} Let $x \in \mathbb{H}$. Each CC geodesic emanating from $x$ is contained in a maximal CC geodesic $\gamma_{x,v}^\lambda$ for some $v \in \mathcal H_x \mathbb{H}$, $|v|=1$, and some $\lambda \in \mathbb R$.
Here $x = (\gamma_{x,v}^\lambda)(0)$ is the initial position, $v = (\gamma_{x,v}^\lambda)'(0)$ is the initial velocity vector and the parameter $\lambda$ is known as the {\bf curvature}. The explicit form of these geodesics is well known, cf.\ Section 2.2 in \cite{rit:tubular}. If $v = \cos\theta X_1(x) + \sin\theta X_2(x)$ then \begin{equation}\label{CCgeodesics} \gamma_{x,v}^\lambda(t) = x*\left(\cos\theta\frac{\sin(\lambda t)}{\lambda} + \sin\theta \frac{1-\cos(\lambda t)}{\lambda}, -\cos\theta \frac{1-\cos(\lambda t)}{\lambda} + \sin\theta \frac{\sin(\lambda t)}{\lambda}, - \frac{\lambda t - \sin(\lambda t)}{\lambda^2}\right). \end{equation} The maximal CC geodesic $\gamma_{x,v}^\lambda$ is defined on the interval $(-2\pi/|\lambda|,2\pi/|\lambda|)$ (or on all of $\mathbb R$ if $\lambda = 0$). Its projection to the $x_1x_2$-plane is a circle of radius $1/|\lambda|$ if $\lambda \ne 0$, or is a line if $\lambda = 0$. The velocity vector at time $t$ is $$ \dot\gamma_{x,v}^\lambda(t) = \cos(\theta - \lambda t)X_1(\gamma_{x,v}^\lambda(t)) + \sin(\theta - \lambda t)X_2(\gamma_{x,v}^\lambda(t)). $$ The following lemma is Theorem 3.11 in \cite{rit:tubular}. \begin{lemma}\label{lemma-h-normal-geodesic} Assume ${\partial\Omega}$ is compact and $C^1$ smooth, without characteristic points. Fix $s\in \partial \Omega$ and $x\in\Omega$ such that $d_{cc}(x, \partial \Omega)=d_{cc}(x,s):=r$ and let $\gamma: [0,r]\to \mathbb{H}$ be the minimizing CC-geodesic connecting $x$ and $s$ such that $\gamma(0)=x$ and $\gamma(r)=s$. Then $-\dot{\gamma}(0)=N(s)$ where $N(s)$ is the inward horizontal unit normal vector to ${\partial\Omega}$ at $s$, see \eqref{eq:N}, and the curvature of $\gamma$ is $\lambda = 2g_1(\vec{n},X_3)/ |\vec{n}_h|$. Moreover, for any $0\le t\le r$, $-\dot{\gamma}(t)=N(\gamma(t))$ is the inward horizontal unit normal vector to $\partial \Omega_t$, where $$ \Omega_t = \{x\in \Omega \,|\, d_{cc}(x, {\partial\Omega})<t\}\,. 
$$ \end{lemma} In view of the previous lemmas, we observe a foliated structure of $\Omega_\epsilon$ induced by the Carnot--Carath\'eodory distance to ${\partial\Omega}$. As in \eqref{eq:TN} we obtain a $g_1$-orthonormal frame $\{N,T,X_3\}$ defined along $\gamma$. We extend this to a smooth frame $\{N,T,Z\}$ defined in a neighborhood of $\gamma$. \begin{lemma} Let $\epsilon$ be as in Lemma \ref{lemma-unique-p}. For $x\in\Omega_\epsilon$, assume $d_{cc}(x, \partial \Omega)=r<\epsilon$, and let $\gamma$ be the unique geodesic connecting $x$ to $s \in {\partial\Omega}$ with $\gamma(0) = x$ and $\gamma(r) = s$. Then the frame $N,T,Z$ along the geodesic $\gamma$ admits a smooth extension as follows: \begin{equation}\label{eq-N-T-ext}\begin{split} &N(q)=-\bigg(\cos\theta+\lambda(q_2-x_2)\bigg)X_1(q)-\bigg(\sin\theta-\lambda(q_1-x_1)\bigg)X_2(q),\\ &T(q)=-\bigg(\sin\theta-\lambda(q_1-x_1)\bigg)X_1(q)+\bigg(\cos\theta+\lambda(q_2-x_2)\bigg)X_2(q),\\ &Z(q)=f(q)X_3(q), \end{split}\end{equation} where $\lambda$ is the curvature of $\gamma$ and \begin{equation}\label{eq-N-T-f} f(q)=(\cos\theta+\lambda(q_2-x_2))^2+(\sin\theta-\lambda(q_1-x_1))^2. \end{equation} Moreover, \begin{align}\label{eq-N-T-brackets-1} [N, T]=-2Z,\ [N,Z]=0,\ [T,Z]=-2\lambda Z \end{align} and, for $k$-fold iterated brackets, \begin{equation}\label{eq-N-T-brackets-2} [T, [T,\cdots[T,N]]]=(-1)^{k-1} 2^k \lambda^{k-1}Z, \quad [T, [T,\cdots[T,Z]]]=(-2\lambda)^{k}Z. \end{equation} \end{lemma} \begin{proof} The $g_1$-orthonormal frame $\{N,T,X_3\}$ along $\gamma$ is given by $$ N(\gamma(t)) = -\cos(\theta - \lambda t) X_1(\gamma(t)) - \sin(\theta - \lambda t) X_2(\gamma(t)) $$ and $$ T(\gamma(t)) = -\sin(\theta - \lambda t) X_1(\gamma(t)) + \cos(\theta - \lambda t) X_2(\gamma(t)). $$ The fact that the expressions in \eqref{eq-N-T-ext} define an extension of this frame follows from the formula \eqref{CCgeodesics} for the geodesic $\gamma = \gamma_{s,v}^\lambda$.
Verification of the bracket identities \eqref{eq-N-T-brackets-1} and \eqref{eq-N-T-brackets-2} is a simple exercise, left to the reader. \end{proof} We next define a parametrization $\varphi_x$ of a neighborhood $\mathcal{O}_x$ of $\gamma$ by a neighborhood $D$ of the origin in $\mathbb{R}^3$. For $(\xi, y,z)\in \mathbb{R}^3$, we let \begin{equation}\label{eq-cartesian} \varphi_x(\xi,y,z) = \exp_x(-\xi N + y T + z Z), \end{equation} that is, \begin{equation*} \varphi_{x}(\xi,y,z):=c(1), \end{equation*} where $c(t) = (c_1(t),c_2(t),c_3(t))$ solves the differential equation \begin{equation}\label{first-order-linear-system} \dot{c}(t) = -\xi \, N(c(t)) + y \, T(c(t)) + z \, Z(c(t)), \qquad c(0) = x. \end{equation} We have introduced an additional minus sign in front of the coefficient of $N$ in \eqref{eq-cartesian} so that increasing values of the parameter variable correspond to motion from $x$ towards the boundary of $\Omega$; recall that $N$ is the inward pointing normal. The first-order linear system \eqref{first-order-linear-system} can be solved explicitly. In Euclidean coordinates, it reads \begin{align*} & \dot{c}_1(t) = (\xi\cos\theta - y \sin\theta) + \lambda(\xi(c_2-x_2) + y(c_1-x_1)) \\ & \dot{c}_2(t) = (\xi\sin\theta + y \cos\theta) + \lambda(-\xi(c_1-x_1) + y(c_2-x_2)) \\ & \dot{c}_3(t) = z \, f(c(t)) + ((\xi\sin\theta + y \cos\theta) + \lambda(-\xi(c_1-x_1) + y(c_2-x_2)))c_1 \\ & \qquad \qquad - ((\xi\cos\theta - y \sin\theta) + \lambda(\xi(c_2-x_2) + y(c_1-x_1)))c_2 \,. \end{align*} The equations for $c_1(t)$ and $c_2(t)$ have solution \begin{align*} & c_1(t) = x_1 - \frac1\lambda(e^{\lambda t y}\cos(\lambda t \xi)-1)\sin\theta + \frac1\lambda e^{\lambda t y} \sin(\lambda t \xi) \, \cos\theta \\ & c_2(t) = x_2 + \frac1\lambda e^{\lambda t y} \sin(\lambda t \xi)\sin\theta + \frac1\lambda (e^{\lambda t y}\cos(\lambda t \xi)-1) \, \cos\theta \,. 
\end{align*} Then $f(c(t)) = e^{2\lambda t y}$ and the equation for $c_3(t)$ has solution \begin{align*} & c_3(t) = x_3 + \frac1{2\lambda^2 y} (e^{2\lambda t y}-1) (-\xi + \lambda z) + \frac1{\lambda^2} e^{\lambda t y}\sin(\lambda t \xi) \\ & \qquad \quad + \frac1{\lambda} \left(x_1 \, (e^{\lambda t y} \cos(\theta - \lambda t \xi) - \cos\theta) + x_2 \, (e^{\lambda t y} \sin(\theta - \lambda t \xi) - \sin\theta ) \right). \end{align*} Hence \begin{equation*}\begin{split} \varphi_x(\xi,y,z) &= \left( x_1 - \frac1\lambda ( e^{\lambda y} \sin(\theta - \lambda \xi) - \sin\theta), x_2 + \frac1\lambda (e^{\lambda y} \cos(\theta - \lambda \xi) - \cos\theta), \right. \\ & \qquad \quad x_3 + \frac1{2\lambda^2 y} (e^{2\lambda y}-1)(-\xi+\lambda z) + \frac1{\lambda^2} e^{\lambda y}\sin(\lambda \xi) \\ & \qquad \qquad + \frac1\lambda \left( x_1 (e^{\lambda y} \cos(\theta - \lambda \xi) - \cos\theta) + x_2 ( e^{\lambda y} \sin(\theta - \lambda \xi) - \sin\theta) \right) \,. \end{split}\end{equation*} The Jacobian of $\varphi_x$ is $$ \det d\varphi_x(\xi,y,z) = \frac1{2\lambda y} e^{2\lambda y} ( e^{2\lambda y} - 1) $$ which is always positive, hence $\varphi_x$ is locally invertible. Moreover, expressing the first two components of $\varphi_x$ in complex notation yields the map $$ \xi + {\mathbf i} y \mapsto (x_1+{\mathbf i} x_2) + \frac1\lambda {\mathbf i} e^{{\mathbf i} \theta} \left( e^{-{\mathbf i} \lambda (\xi + {\mathbf i} y)} - 1 \right) $$ which is invertible on a domain in $\mathbb{C}$ containing the interval $[0,r]$ if $r<\frac{2\pi}{|\lambda|}$. Hence $\varphi_x$ is invertible on a domain $D$ containing the interval $\{(\xi,0,0):0\le \xi\le r\}$, $\varphi_x(D) = \mathcal{O}_x$ is a domain in $\mathbb{H}$ containing the geodesic $\gamma$, $\varphi_x(0,0,0) = x$ and $\varphi_x(r,0,0) = s$. 
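The closed-form expressions for $c_1,c_2,c_3$ can be checked against a direct numerical integration of the system \eqref{first-order-linear-system}. The following Python sketch (ours; the parameter values are arbitrary test data) integrates the system with a classical fourth-order Runge--Kutta scheme and compares the result at time $1$ with the explicit solution, i.e.\ with $\varphi_x(\xi,y,z)$.

```python
import math

# Arbitrary test data: base point x, angle theta, curvature lambda, coordinates.
x1, x2, x3 = 0.4, -0.2, 0.1
th, lam = 0.9, 0.7
xi, y, z = 0.3, 0.25, -0.15

def rhs(c):
    """Right-hand side of the system for c = (c1, c2, c3) in Euclidean coordinates."""
    c1, c2, c3 = c
    A = (xi * math.cos(th) - y * math.sin(th)) + lam * (xi * (c2 - x2) + y * (c1 - x1))
    B = (xi * math.sin(th) + y * math.cos(th)) + lam * (-xi * (c1 - x1) + y * (c2 - x2))
    f = (math.cos(th) + lam * (c2 - x2)) ** 2 + (math.sin(th) - lam * (c1 - x1)) ** 2
    return (A, B, z * f + B * c1 - A * c2)

def rk4(c, t_end, steps=4000):
    """Classical 4th-order Runge--Kutta integration of c' = rhs(c), c(0) = c."""
    h = t_end / steps
    for _ in range(steps):
        k1 = rhs(c)
        k2 = rhs(tuple(ci + h / 2 * ki for ci, ki in zip(c, k1)))
        k3 = rhs(tuple(ci + h / 2 * ki for ci, ki in zip(c, k2)))
        k4 = rhs(tuple(ci + h * ki for ci, ki in zip(c, k3)))
        c = tuple(ci + h / 6 * (a + 2 * b + 2 * d + e)
                  for ci, a, b, d, e in zip(c, k1, k2, k3, k4))
    return c

def phi(t):
    """The closed-form solution c(t); phi(1.0) equals phi_x(xi, y, z)."""
    E, s, co = math.exp(lam * t * y), math.sin(lam * t * xi), math.cos(lam * t * xi)
    c1 = x1 - (E * co - 1) * math.sin(th) / lam + E * s * math.cos(th) / lam
    c2 = x2 + E * s * math.sin(th) / lam + (E * co - 1) * math.cos(th) / lam
    c3 = (x3 + (math.exp(2 * lam * t * y) - 1) / (2 * lam ** 2 * y) * (-xi + lam * z)
          + E * s / lam ** 2
          + (x1 * (E * math.cos(th - lam * t * xi) - math.cos(th))
             + x2 * (E * math.sin(th - lam * t * xi) - math.sin(th))) / lam)
    return (c1, c2, c3)

assert all(abs(a - b) < 1e-9 for a, b in zip(rk4((x1, x2, x3), 1.0), phi(1.0)))
```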
Using the group law we can verify that $x^{-1} * \varphi_x(\xi,y,z)$ is equal to $$ \left( - \frac{e^{\lambda y}\sin(\theta-\lambda\xi) - \sin\theta}{\lambda} , \frac{e^{\lambda y}\cos(\theta-\lambda\xi)-\cos\theta}{\lambda}, \frac{e^{2\lambda y} - 1}{2\lambda^2 y}(-\xi + \lambda z) + \frac1{\lambda^2}e^{\lambda y}\sin(\lambda \xi) \right)\,. $$ The inverse $\varphi^{-1}_x(\cdot)$ defines a Cartesian coordinate system in $\mathcal{O}_x$. Given $q\in \mathcal{O}_x$, we introduce the function $$ \|\varphi^{-1}_x(q)\|:=\sqrt{|\xi(q)|^2+|y(q)|^2+|z(q)|}. $$ \begin{lemma}\label{lemma-equi-dist} The function $q \mapsto \|\varphi_x^{-1}(q)\|$ is comparable to the CC distance $d_{cc}(q,x)$ in the following sense: there exists a constant $K$ so that for all $q \in \mathcal{O}_x$, $K^{-1} ||\varphi^{-1}_x(q)|| \le d_{cc}(q,x) \le K ||\varphi^{-1}_x(q)||$. \end{lemma} \begin{proof} The Kor\'anyi norm $|(y_1,y_2,y_3)|_H := ((|y_1|^2+|y_2|^2)^2+4y_3^2)^{1/4}$ defines a left invariant metric on $\mathbb{H}$ which is comparable to the CC metric. We will show that $|x^{-1}*q|_H$ is comparable to $||\varphi^{-1}_x(q)||$. It suffices to prove that $$ (\xi^2+y^2)^2+4z^2 $$ is comparable to \begin{equation}\label{comparison-quantity}\begin{split} &\left( \left| \frac{e^{\lambda y}\sin(\theta-\lambda\xi) - \sin\theta}{\lambda} \right|^2 + \left| \frac{e^{\lambda y}\cos(\theta-\lambda\xi)-\cos\theta}{\lambda} \right|^2 \right)^2 \\ & \quad + 4 \left( \frac{e^{2\lambda y} - 1}{2\lambda^2 y}(-\xi + \lambda z) + \frac1{\lambda^2}e^{\lambda y}\sin(\lambda \xi) \right)^2 \end{split}\end{equation} when $(\xi,y,z)$ lies in a bounded region of $\mathbb R^3$. After some algebraic manipulation we rewrite \eqref{comparison-quantity} in the form $$ \frac{4e^{2\lambda y}}{\lambda^4} \left( (\cosh(\lambda y)-\cos(\lambda\xi))^2 + \left( \frac{\sinh(\lambda y)}{y} (-\xi+\lambda z) + \sin(\lambda \xi) \right)^2 \right)\,. $$ Let us denote the expression in the previous line by $G(\xi,y,z)$. 
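The algebraic rewriting above (in which $\theta$ drops out) can be spot-checked numerically; the following Python sketch (ours, not from the paper) compares \eqref{comparison-quantity} with the expression denoted $G(\xi,y,z)$, and also checks the small-argument behavior $G\approx(\xi^2+y^2)^2+4z^2$.

```python
import math

def comparison_quantity(th, lam, xi, y, z):
    """The quantity (comparison-quantity): Koranyi-type norm of x^{-1} * phi_x."""
    E = math.exp(lam * y)
    horiz = (((E * math.sin(th - lam * xi) - math.sin(th)) / lam) ** 2
             + ((E * math.cos(th - lam * xi) - math.cos(th)) / lam) ** 2)
    vert = ((math.exp(2 * lam * y) - 1) / (2 * lam ** 2 * y) * (-xi + lam * z)
            + E * math.sin(lam * xi) / lam ** 2)
    return horiz ** 2 + 4 * vert ** 2

def G(lam, xi, y, z):
    """The rewritten form; note that theta has dropped out."""
    return (4 * math.exp(2 * lam * y) / lam ** 4
            * ((math.cosh(lam * y) - math.cos(lam * xi)) ** 2
               + ((math.sinh(lam * y) / y) * (-xi + lam * z) + math.sin(lam * xi)) ** 2))

for th in (0.0, 0.8, 2.5):
    for (lam, xi, y, z) in [(0.7, 0.3, 0.2, -0.1), (1.3, -0.4, 0.15, 0.25)]:
        a, b = comparison_quantity(th, lam, xi, y, z), G(lam, xi, y, z)
        assert abs(a - b) <= 1e-9 * max(1.0, abs(b))

# Near the origin, G behaves like (xi^2 + y^2)^2 + 4 z^2 (z scales quadratically):
s = 1e-3
ratio = G(0.7, 0.3 * s, 0.2 * s, 0.1 * s * s) / (((0.3 * s) ** 2 + (0.2 * s) ** 2) ** 2
                                                 + 4 * (0.1 * s * s) ** 2)
assert abs(ratio - 1.0) < 0.01
```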
The function $G$ is real analytic in all of $\mathbb R^3$. It is elementary but tedious to verify that $$ \partial^{\alpha_1}_\xi \partial^{\alpha_2}_y \partial^{\alpha_3}_z G(0,0,0) = 0 $$ for all multi-indices $(\alpha_1,\alpha_2,\alpha_3)$ with $\alpha_1+\alpha_2+2\alpha_3 \le 3$, and $\partial_\xi^4 G(0,0,0) = \partial_y^4 G(0,0,0) = 24$, $\partial_\xi^2 \partial_y^2 G(0,0,0) = 8$, and $\partial_z^2 G(0,0,0)= 8$. By Taylor's theorem with remainder, $$ G(\xi,y,z) = (\xi^4+2\xi^2 y^2 + y^4 + 4z^2)(1+o(1)) = ((\xi^2+y^2)^2+4z^2)(1+o(1)) $$ and so the desired comparison holds on bounded regions of $\mathbb R^3$. \end{proof} Throughout this paper, we often use the function $||\varphi_x^{-1}(\cdot)||$ in explicit computations. The localized boundary $\varphi_{x}^{-1}(\partial \Omega\cap \mathcal{O}_x)$ has the representation \begin{equation}\label{eq-NTZ-cor} -\xi=h(y, z;s)-r, \end{equation} where $h(\cdot,\cdot;s):\mathbb R^2\to\mathbb R$ is smooth. Moreover, $h(y,z;s)$, for $s\in {\partial\Omega}$, satisfies the following expansion. \begin{lemma}\label{lemma-h-H} Let $\Omega$ and $\Omega_\epsilon$ be as in Lemma \ref{lemma-unique-p}. For $s\in \partial \Omega$ and $x\in\Omega$ such that $d_{cc}(x,{\partial\Omega})=d_{cc}(x,s)=\epsilon$, let $\varphi_x$ be as in \eqref{eq-cartesian}. Then there exists $0<\delta<\epsilon$ such that for all $|(y,z)|<\delta$, it holds that \begin{equation}\label{eq-h-H} \bigg|h(y,z;s)-\tfrac12 H_{{\partial\Omega},0}(s)\,y^2-k_1(s)\,z\bigg|\le \delta^{-2}(|y|^3+|yz|), \end{equation} for some continuous function $k_1(\cdot)$ on ${\partial\Omega}\cap B_{cc}(s, \delta)$. \end{lemma} \begin{proof} The parametrization $\varphi_x$ induces a diffeomorphism $d\varphi_x: \mathbb R^3\to T_s\mathbb{H}$. In particular we have $d\varphi_x(\partial_y)=T(s)$. Since $T(s)\in T_s({\partial\Omega})$, we have $h_y(0,0;s)=0$ and $h_{yy}(0,0;s)=H_{{\partial\Omega},0}(s)$.
Hence \eqref{eq-h-H} follows immediately from the Taylor expansion of $h(\cdot,\cdot;s)$ at $(0,0)$. \end{proof} The next lemma provides a way to change coordinates for integration. For a proof, see section 5 in \cite{rit:tubular}, specifically (5.7) and (5.8). \begin{lemma}\label{lemma-J} Let $\Omega$ and $\epsilon>0$ be as above. Consider the parametrization $\Psi$ of $\Omega_\epsilon$ by ${\partial\Omega} \times (0,\epsilon)$ given by $x = \Psi(s,r)$, where $r = d_{cc}(x,{\partial\Omega}) = d_{cc}(x,s)$. Equip ${\partial\Omega} \times (0,\epsilon)$ with the product of the horizontal perimeter measure $\sigma_0$ and Lebesgue measure, and equip $\Omega_\epsilon$ with the volume measure. Then the Jacobian $J_\Psi$ of $\Psi$ satisfies the estimate \begin{align}\label{eq-jacobi} |J_\Psi(s,r)-1+H_{{\partial\Omega},0}(s)r|\le K_1 r^2 \end{align} for all $s \in {\partial\Omega}$ and all $r \in (0,\epsilon)$, for some fixed constant $K_1>0$. \end{lemma} \begin{remark} An explicit formula for the Jacobian $J_\Psi$ can be found in section 5 of \cite{rit:tubular}. For the purposes of our main result we only need the above first-order expansion in $r$. \end{remark} In the proof of Theorem \ref{thm-heat-content-prob} in subsection \ref{subsec:reduction2} we require information about the behavior of volume, horizontal perimeter, and total horizontal mean curvature for tubular neighborhoods and their boundaries in the $g_1$-metric. The following lemma provides the necessary estimates. These estimates follow directly from the classical Steiner formula for volumes of tubular neighborhoods of submanifolds of Riemannian manifolds. \begin{lemma}\label{lemma-compare-vol-peri-mean-curv} Let $\Omega$ be a smoothly bounded domain in $\mathbb{H}$, and let $\Omega^r = \{x \in \mathbb{H} \, | \, d_{g_1}(x,\Omega) < r \}$ denote the $r$-neighborhood of $\Omega$ in the $g_1$-metric. Then \begin{itemize} \item[(1)] $\Vol(\Omega^r) = \Vol(\Omega) + O(r)$. 
\item[(2)] $\int_{{\partial\Omega}^r} |\vec{n}_h| \, d\sigma = \int_{{\partial\Omega}} |\vec{n}_h| \, d\sigma + O(r)$. \item[(3)] $\int_{{\partial\Omega}^r} H_{{\partial\Omega}^r,0} |\vec{n}_h| \, d\sigma = \int_{{\partial\Omega}} H_{{\partial\Omega},0} |\vec{n}_h| \, d\sigma + O(r)$. \end{itemize} \end{lemma} \begin{proof} For sufficiently small $r>0$, the domain $\Omega^r \setminus \Omega$ is foliated by the surfaces ${\partial\Omega}^t$, $0<t<r$. Define functions $A$ and $B$ in $\Omega^r \setminus \Omega$ by $A(x) = |\vec{n}_h(x)|$ and $B(x) = H_{{\partial\Omega}^t,0}(x)$ for $x \in {\partial\Omega}^t$. Then $A$ and $B$ are smooth in $\Omega^r \setminus \Omega$. Our starting point is the Steiner formula $$ \Vol(\Omega^r) = \Vol(\Omega) + \int_0^r \sigma({\partial\Omega}^t) \, dt, $$ where $\sigma$ denotes the surface measure in the $g_1$ metric. For sufficiently small $r>0$, the domain $\Omega^r \setminus \Omega$ may be parameterized by ${\partial\Omega} \times (0,r)$ (analogously to the discussion in this section in the setting of the Carnot--Carath\'eodory metric) via a diffeomorphism $\Psi$, and for a smooth function $f:\Omega^r \setminus \Omega \to \mathbb R$, $$ \int_{{\partial\Omega}^t} f(x) \, d\sigma(x) = \int_{{\partial\Omega}} f(\exp_s(t\vec{n}(s))) \, J_\Psi(s,t)\, d\sigma(s)\,, \qquad x = \Psi(s,t), $$ cf.\ \cite[Lemma 3.12]{gray:tubes}. Expanding in a series in $t$ and using the analog of Lemma \ref{lemma-J} for the $g_1$ metric gives $$ \int_{{\partial\Omega}^t} f \, d\sigma = \int_{{\partial\Omega}} (f + g_1(\nabla f,\vec{n})t + o(t)) (1+H_{{\partial\Omega},1}\,t+o(t)) \, d\sigma\, $$ where $H_{{\partial\Omega},1}$ denotes the mean curvature in the $g_1$ metric. 
Thus $$ \int_{{\partial\Omega}^t} f \, d\sigma = \int_{{\partial\Omega}} f \, d\sigma + \int_{{\partial\Omega}} \bigl( g_1(\nabla f,\vec{n}) + f\,H_{{\partial\Omega},1} \bigr) d\sigma \cdot t + o(t). $$ Part (2) follows by choosing $f = A$ and part (3) by choosing $f = AB$, where $A$ and $B$ are as defined at the start of this proof. Finally, (1) follows from Steiner's formula above. \end{proof} \section{Probabilistic preliminaries}\label{sec:prelim2} \subsection{First reduction: time change}\label{subsec:time-change} The interpretation of the solution of a Dirichlet problem in terms of the exit time of the corresponding Markov process is well known and widely used. Let ${x}_t$ be the strong Markov process generated by the horizontal sub-Laplacian $\frac12\triangle_0$ starting from $x\in\mathbb{H}$. Then the solution $v(x,t)$ of the Dirichlet heat equation \eqref{eq:Dirichlet} is the probability that the process survives in $\Omega$ up to time $t$: \[ v(x,t)=\partial_x({T}_\Omega>t), \] where \[ {T}_\Omega=\inf\{t>0, {x}_t\in \mathbb{H} \setminus \Omega \}. \] Intuitively, the most likely event is that the Markov process escapes $\Omega$ in the direction of the outward horizontal normal $-N$ at the boundary ${\partial\Omega}$. It is therefore convenient to work locally in a frame that encodes this information. For each $x\in\mathbb{H}$, consider the new frame $\{N,T,Z\}$ as in \eqref{eq-N-T-ext}. In a small neighborhood $\mathcal{O}_x$, the horizontal sub-Laplacian can be written as \begin{equation}\label{eq-delta-L} \triangle_0 = \frac{1}{f}(N^2+T^2), \end{equation} where $f$ is as in \eqref{eq-N-T-f}. We write $$ L=N^2+T^2. $$ Let $\tilde{x}_t$ be the Markov process generated by $L$ starting from $x$.
Then $\tilde{x}_t$ solves the Stratonovich differential equation \begin{equation}\label{eq-SDE} \begin{cases} d\tilde{x}_t=-N(\tilde{x}_t)dB_t^N+T(\tilde{x}_t)dB_t^T \\ \tilde{x}_0=x \end{cases} \end{equation} where $B_t^N$, $B_t^T$ are independent standard Brownian motions. By using the language of stochastic flows we can lift the process to the tangent space $T_x\mathbb{H}$. Combining Strichartz's result (\cite{str:cbhd}, Theorem 3.2) with \eqref{eq-N-T-brackets-1} and \eqref{eq-N-T-brackets-2}, we deduce that \begin{equation}\label{eq-sub-BM} \tilde{x}_t=\exp_x\left( -B^N_tN+B^T_tT+ tR_tZ\right) \end{equation} where $R_t$ is a remainder process satisfying the following estimate: there exist $\alpha_0, c_0>0$ such that for any $R>c_0$, \begin{equation}\label{eq-R-1-est} \partial\bigg(\sup_{0\le s\le t}|R_s|\ge R\bigg)\le \exp\bigg(-\frac{R^{\alpha_0}}{c_0t} \bigg). \end{equation} Estimate \eqref{eq-R-1-est} is an easy consequence of a result of Azencott \cite[p.\ 252]{aze:formule}; see also Castell \cite[p.\ 235]{cas:asymptotic}. Moreover, if we write \[ \tilde{X}_t=\varphi_x^{-1}(\tilde{x}_t)=(-B^N_t, B^T_t, tR_t), \] then we have the following tail estimates. We remind the reader that $q \mapsto \|\varphi_x^{-1}(q)\|$ is the homogeneous norm considered in Lemma \ref{lemma-equi-dist}; this notation will be used repeatedly in what follows. \begin{lemma}\label{lemma-X-s-est} Let $\tilde{X}_t$ be given as above. Then the following estimates hold when $t$ is small enough. \begin{itemize} \item[(1)] For any $0<\alpha<1$, there exist $c, C, \alpha'>0$ such that \begin{equation}\label{eq-X-s-est} \partial\bigg(\sup_{0\le s\le t}||\tilde{X}_s||^2> t^{1-\alpha}\bigg)\le C\exp\bigg(-\frac{c}{t^{\alpha'}} \bigg). \end{equation} \item[(2)] For any $\delta>0$, there exists $C>0$ such that \begin{equation}\label{eq-X-TD-tilde-1} \partial_x\left(\sup_{0\le s\le t}||\tilde{X}_s||\ge \delta\right)\le Ce^{-\frac{\delta^2}{16t}}.
\end{equation} \item[(3)] There exist $c, c'>0$ such that \begin{equation}\label{eq-X-TD-tilde-2} \partial_x(\tilde{T}_\Omega<t)\le c'e^{-\frac{d_{cc}^2(x,{\partial\Omega})}{ct}}, \end{equation} where $d_{cc}(x,{\partial\Omega})$ is the Carnot--Carath\'eodory distance from $x$ to $\partial \Omega$. \item[(4)] (Principle of not feeling the boundary) Let $\Omega$ and $\Omega_\epsilon$ be as given previously. Then \begin{equation}\label{eq-not-feel-bdry-tilde} \int_{\Omega\setminus \Omega_\epsilon}\partial_x(\tilde{T}_\Omega>t)dx=\Vol(\Omega)- \Vol(\Omega_\epsilon)+ O(e^{-\epsilon^2/ct}) \end{equation} for some constant $c>0$. \end{itemize} \end{lemma} \begin{proof} Note that $||\tilde{X}_s||^2=|B^N_s|^2+|B^T_s|^2+|sR_s|$, and for $B^i$, $i=N, T$, we know that for any $\alpha>0$ there exists $c>0$ such that \[ \partial\bigg(\sup_{0\le s\le t}|B^i_s|^2> t^{1-\alpha}\bigg)= \partial\bigg(\sup_{0\le s\le 1}|B^i_s|^2> t^{-\alpha}\bigg) \le \exp\bigg(-\frac{c}{t^{\alpha}} \bigg). \] Moreover, for $t\in(0,1)$ small enough, applying \eqref{eq-R-1-est} with $R=t^{-\alpha}$ we have \[ \partial\bigg(\sup_{0\le s\le t}|sR_s|> t^{1-\alpha}\bigg)\le \partial\bigg(\sup_{0\le s\le t}|R_s|> t^{-\alpha}\bigg) \le \exp\bigg(-\frac{1}{c_0 t^{1+\alpha_0\alpha}} \bigg)\le \exp\bigg(-\frac{1}{c_0 t^{\alpha_0\alpha}} \bigg). \] We then complete the proof of (1) by letting $\alpha'=\min\{ \alpha, \alpha_0\alpha\}$. The proof of (2) follows the same argument as that of (1). To see (3), just note that \[ \partial_x(\tilde{T}_\Omega<t)\le \partial_x\left( \sup_{0\le s\le t}d_{cc}(\tilde{x}_s, x)>d_{cc}(x,{\partial\Omega})\right). \] Due to the equivalence between $d_{cc}(x,y)$ and $\|\varphi^{-1}_x(y)\|$ for any $y\in\mathcal{O}_x$, there exists $C>0$ such that \[ \partial_x\left( \sup_{0\le s\le t}d_{cc}(\tilde{x}_s, x)>d_{cc}(x,{\partial\Omega})\right) \le \partial_x\left( \sup_{0\le s\le t}||\tilde{X}_s||>Cd_{cc}(x,{\partial\Omega})\right). \] Applying \eqref{eq-X-TD-tilde-1} with $\delta=Cd_{cc}(x,{\partial\Omega})$ we obtain \eqref{eq-X-TD-tilde-2}.
At last, from (3) we have \[ \int_{\Omega\setminus \Omega_\epsilon}\partial_x(\tilde{T}_\Omega>t)dx=\Vol(\Omega\setminus \Omega_\epsilon)(1-O(e^{-\epsilon^2/ct})), \] which immediately implies \eqref{eq-not-feel-bdry-tilde}. \end{proof} Next, from \eqref{eq-delta-L} we know that ${x}_t$ is a time-changed version of $\tilde{x}_t$. Precisely, let $\mathfrak{t}(t)=\int_0^t{f(\tilde{x}_s)}ds$ and $\mathfrak{t}^{-1}(t)=\sup\{s:\mathfrak{t}(s)\le t \}$, then we have \[ {x}_t=\tilde{x}_{\mathfrak{t}^{-1}(t)}. \] The exit time of $\tilde{x}_t$ is $\tilde{T}_\Omega=\mathfrak{t}^{-1}(T_\Omega)$, hence \begin{equation}\label{eq-T-Op-Tilde-OP} \partial_x(\tilde{T}_\Omega>t)=\partial_x\bigg({T}_\Omega>\int_0^{t}{f(\tilde{x}_u)}du \bigg). \end{equation} Denote by $\tilde{{\mathbf Q}}_\Omega(t)$ the heat content associated with $\tilde{x}_t$. Then we can easily show that $\tilde{{\mathbf Q}}_\Omega(t)$ differs from ${{\mathbf Q}}_\Omega(t)$ by $o(t)$. \begin{proposition}\label{prop-beta} Let ${{\mathbf Q}}_\Omega(t)$ and $\tilde{{\mathbf Q}}_\Omega(t)$ be given as above. Then \[ {{\mathbf Q}}_\Omega(t)=\tilde{{\mathbf Q}}_\Omega(t)+o(t). \] \end{proposition} \begin{proof} From \eqref{eq-N-T-f} we know that $1 - c \lambda \, d_{cc}(x,y) \le f(y) \le 1 + c \lambda \, d_{cc}(x,y)$, provided $y$ is sufficiently close to $x$. 
Moreover, from Lemma \ref{lemma-equi-dist} and \eqref{eq-X-s-est} we know that for any $0<\alpha<1$ there exist $c_1, \alpha', c', C>0$ such that \[ \partial_x\bigg( \sup_{0\le s\le t} d_{cc}(\tilde{x}_s,x)>t^{1-\alpha}\bigg) \le \partial \bigg( \sup_{0\le s\le t} ||\varphi_x^{-1}(\tilde{x}_s)||>c_1t^{1-\alpha}\bigg) \le Ce^{-c'/t^{\alpha'}}, \] hence \begin{align*} \partial_x\bigg({T}_\Omega>\int_0^{t}{f(\tilde{x}_u)}du\bigg)&\ge\partial_x\bigg({T}_\Omega>\int_0^{t}{(1+c\lambda t^{1-\alpha})}du\bigg)+O(e^{-c'/t^{\alpha'}})\\ &=\partial_x\bigg({T}_\Omega>t+{c\lambda }t^{2-\alpha}\bigg)+O(e^{-c'/t^{\alpha'}}) \end{align*} and \begin{align*} \partial_x\bigg({T}_\Omega>\int_0^{t}{f(\tilde{x}_u)}du\bigg)&\le\partial_x\bigg({T}_\Omega>\int_0^{t}{(1-c\lambda t^{1-\alpha})}du\bigg)+O(e^{-c'/t^{\alpha'}})\\ &=\partial_x\bigg({T}_\Omega>t-{c\lambda }t^{2-\alpha}\bigg)+O(e^{-c'/t^{\alpha'}}). \end{align*} Therefore by \eqref{eq-T-Op-Tilde-OP} and the principle of not feeling the boundary for ${x}_t$ we have \begin{align*} \int_{\Omega_\epsilon}\partial_x(\tilde{T}_\Omega>t)dx &\ge\int_{\Omega_\epsilon}\partial_x\bigg({T}_\Omega>t+{c\lambda }t^{2-\alpha}\bigg)dx+O(e^{-c'/t^{\alpha'}})\\ &={{\mathbf Q}}_\Omega\bigg(t+{c\lambda }t^{2-\alpha}\bigg)-\Vol(\Omega)+\Vol(\Omega_\epsilon)+O(e^{-c'/t^{\alpha'}})+O(e^{-\epsilon^2/ct}) \end{align*} and \begin{align*} \int_{\Omega_\epsilon}\partial_x(\tilde{T}_\Omega>t)dx &\le\int_{\Omega_\epsilon}\partial_x\bigg({T}_\Omega>t-{c\lambda }t^{2-\alpha}\bigg)dx+O(e^{-c'/t^{\alpha'}})\\ &={{\mathbf Q}}_\Omega\bigg(t-{c\lambda }t^{2-\alpha}\bigg)-\Vol(\Omega)+\Vol(\Omega_\epsilon)+O(e^{-c'/t^{\alpha'}})+O(e^{-\epsilon^2/ct}). \end{align*} Observe that $\sqrt{t\pm{c\lambda }t^{2-\alpha}}=\sqrt{t}+O(t^{3/2-\alpha})$, so that for $\alpha<1/2$ we have ${{\mathbf Q}}_\Omega(t\pm{c\lambda }t^{2-\alpha})={{\mathbf Q}}_\Omega(t)+o(t)$. On the other hand, by \eqref{eq-not-feel-bdry-tilde} we know that $\tilde{{\mathbf Q}}_\Omega(t)=\int_{\Omega_\epsilon}\partial_x(\tilde{T}_\Omega>t)dx+\Vol(\Omega)-\Vol(\Omega_\epsilon)+O(e^{-\epsilon^2/ct})$. Combining the above estimates yields $\tilde{{\mathbf Q}}_\Omega(t)={{\mathbf Q}}_\Omega(t)+o(t)$. The proof is complete.
\end{proof} \subsection{Second reduction: eliminating higher order remainder terms}\label{subsec:reduction2} Let us denote L\'evy's area process by $A_t := \int_0^t (B_s^NdB_s^T-B_s^TdB_s^N)$, and consider the `truncated process' \begin{equation}\label{eq-x-prime} x_t':=\exp_x\left( -B^N_tN+B^T_tT+ A_tZ\right). \end{equation} By \cite[Theorem 2.1]{cas:asymptotic} we know that \[ x_t'=\tilde{x}_t+t^{3/2}P_t \] where $P_t$ satisfies that $\exists\, \alpha_1, c_1>0$ such that for any $R>c_1$, \[ \partial\bigg(\sup_{0\le s\le t}|P_s|\ge R\bigg)\le \exp\bigg(-\frac{R^{\alpha_1}}{c_1t} \bigg), \] where $|\cdot|$ is the Euclidean norm in $\mathbb R^3$. Since the Riemannian metric $g_1$ and the Euclidean metric are locally bi-Lipschitz equivalent, we may equivalently write \begin{equation}\label{eq-P-est} \partial\bigg(\sup_{0\le s\le t} s^{-3/2} d_{g_1}(x_s',x_s)\ge R\bigg)\le \exp\bigg(-\frac{R^{\alpha_1}}{c_1t} \bigg), \end{equation} for a possibly different choice of $c_1$. Consider the associated process on $T_x\mathbb{H}$, \begin{equation}\label{eq-X-NTZ} X_t:=\varphi^{-1}_x(x_t')=\bigg(-B^N_t,\,B^T_t,\, A_t \bigg). \end{equation} Following the same arguments, we easily obtain that Lemma \ref{lemma-X-s-est} holds for $X_t$ as well. \begin{lemma}\label{lemma-X-TD-prime} Let ${X}_t$ be given as above. Then the following estimates hold when $t$ is small enough. \begin{itemize} \item[(1)] For any $0<\alpha<1$, there exist $C, c, \alpha'>0$ such that \begin{equation}\label{eq-X-s-est-prime} \partial\bigg(\sup_{0\le s\le t}||{X}_s||^2> t^{1-\alpha}\bigg)\le C\exp\bigg(-\frac{c}{t^{\alpha'}} \bigg). \end{equation} \item[(2)] For any $\delta>0$, there exists $C>0$ such that \begin{equation}\label{eq-X-TD-delta-1} \partial_x\left(\sup_{0\le s\le t}||{X}_s||\ge \delta\right)\le Ce^{-\frac{\delta^2}{16t}}. 
\end{equation} \item[(3)] There exist $c, c'>0$ such that \begin{equation}\label{eq-X-TD-delta-2} \partial_x({T}'_\Omega<t)\le c'e^{-\frac{d_{cc}^2(x,{\partial\Omega})}{ct}}, \end{equation} where $d_{cc}(x,{\partial\Omega})$ is the Carnot--Carath\'eodory distance from $x$ to $\partial \Omega$. \item[(4)] (Principle of not feeling the boundary) Let $\Omega$ and $\Omega_\epsilon$ be as given previously. Then \begin{equation}\label{eq-not-feel-bdry-prime} \int_{\Omega\setminus \Omega_\epsilon}\partial_x({T}'_\Omega>t)dx=\Vol(\Omega)- \Vol(\Omega_\epsilon)+ O(e^{-\epsilon^2/ct}) \end{equation} for some constant $c>0$, where $T'_{\Omega}=\inf\{t>0, {x}'_t\in \mathbb{H} \setminus \Omega \}$. \end{itemize} \end{lemma} Let ${\mathbf Q}'_\Omega(t)=\int_{\Omega}\partial_x({T}'_\Omega>t)dx$. We then have the following heat content expansion for ${\mathbf Q}'_\Omega(t)$ as $t\to0$. \begin{theorem}\label{thm-heat-content-prime} Let $\Omega \subset \mathbb{H}$ be a bounded domain whose boundary is smooth and has no characteristic points. Let $x'_t$ be the process given in \eqref{eq-x-prime}. Then the associated heat content has the following expansion \begin{equation}\label{heat-content-expansion} {{\mathbf Q}}'_{\Omega}(t) = \Vol(\Omega) - \sqrt{\frac{2t}{\pi}} \sigma_0({\partial \Omega}) + \frac{t}{4} \int_{\partial \Omega} H_{{\partial\Omega},0}(s) \, d\sigma_0(s) + o(t) \end{equation} as $t \to 0$. \end{theorem} We postpone the proof of the above theorem to Subsection \ref{subsec:reduction3} and Section \ref{sec:proof}. In the rest of this section, we sketch the proof of the main theorem. \begin{proof}[Proof of Theorem \ref{thm-heat-content-prob}] From \eqref{eq-not-feel-bdry-prime} we know that ${\mathbf Q}'_\Omega(t)=\int_{\Omega_\epsilon}\partial_x({T}'_\Omega>t)dx+\Vol(\Omega)-\Vol(\Omega_\epsilon)+O(e^{-\epsilon^2/ct})$.
Moreover, note \begin{equation*}\begin{split} \partial_x(\tilde{T}_\Omega>t) &= \partial_x\bigg(\forall\, 0\le s\le t, \tilde{x}_s\in\Omega \bigg) \\ &\le \partial_x\bigg(\forall\, 0\le s\le t, x'_s\in\Omega^+, \sup_{0\le s\le t} s^{-3/2}d_{g_1}(x'_s,\tilde{x}_s)<R \bigg)+O(e^{-C_R/t}) \end{split}\end{equation*} for $\Omega^+=\{x\in\mathbb{H}, d_{g_1}(x, \Omega)\le t^{3/2}R\} $. Here $C_R>0$ is a constant depending on $R$, and the last inequality comes from \eqref{eq-P-est}. Also since \begin{align*} \partial_x(\tilde{T}_\Omega>t) &=\partial_x\bigg(\forall\, 0\le s\le t, \tilde{x}_s\in\Omega \bigg)\\ &\ge\partial_x\bigg(\forall\, 0\le s\le t, {x}'_s\in\Omega^-, \sup_{0\le s\le t} s^{-3/2}d_{g_1}(x'_s,\tilde{x}_s)<R \bigg)\\ &\ge \partial_x\bigg(\forall\, 0\le s\le t, {x}'_s\in\Omega^- \bigg)+O(e^{-C_R/t}), \end{align*} where $\Omega^-=\{x\in\mathbb{H}, d_{g_1}(x, \Omega^c)\ge t^{3/2}R\}$, we have \[ \partial_x({T}'_{\Omega^-}>t)+o(t)\le \partial_x(\tilde{T}_\Omega>t)\le \partial_x({T}'_{\Omega^+}>t)+o(t). \] By the principle of not feeling the boundary we have ${{\mathbf Q}}'_{\Omega^-}(t)+o(t)\le\tilde{{\mathbf Q}}_\Omega(t)\le{{\mathbf Q}}'_{\Omega^+}(t)+o(t)$. Moreover, from Lemma \ref{lemma-compare-vol-peri-mean-curv} and Theorem \ref{thm-heat-content-prime} we obtain \begin{equation}\label{eq-claim-beta} {{\mathbf Q}}'_\Omega(t)={{\mathbf Q}}'_{\Omega^\pm}(t)+o(t). \end{equation} Therefore $\tilde{{\mathbf Q}}_\Omega(t)={{\mathbf Q}}'_{\Omega}(t)+o(t)$. Together with Proposition \ref{prop-beta} we have \[ {{\mathbf Q}}_\Omega(t)={{\mathbf Q}}'_{\Omega}(t)+o(t). \] Hence we complete the proof. \end{proof} \subsection{Third reduction: decomposing the main event into subevents}\label{subsec:reduction3} In this section we reduce Theorem \ref{thm-heat-content-prime} to a sequence of lemmas. 
Following the intuition that the Markov process $x_t$ is most likely to exit $\Omega$ along the outward horizontal normal direction of the boundary, we track the farthest distance that $B^N$ travels before time $t$ by considering the (almost surely unique) time $\tau_t$ at which $B^N$ attains its running maximum: for each $t>0$, \begin{equation}\label{eq-B^N-tau} B^N_{\tau_t}=\sup_{0\le \tau\le t}B^N_{\tau}. \end{equation} The joint density of $B^N_{\tau_t}$ and $\tau_t$ is known. \begin{lemma}\label{lemma-joint} The joint density of $( B^N_{\tau_t}, \tau_t)$ is given by \begin{equation}\label{eq-phi} \Phi(\xi, \tau;t)=\frac{\xi e^{-\frac{\xi^2}{2\tau}}}{\pi\tau^{3/2}(t-\tau)^{1/2}}{\mathbbm{1}}_{[0,t)}(\tau){\mathbbm{1}}_{[0,\infty)}(\xi). \end{equation} \end{lemma} For a proof, see \cite[p.\ 339]{louchard1968mouvement}. Moreover, the event $\{x'_{\tau_t}\in \Omega\}$ captures the major part of the event that the process stays inside $\Omega$, namely $\{T'_\Omega>t\}$. We will estimate $\partial_x(x'_{\tau_t}\in \Omega)$ as well as its difference from $\partial_x(T'_\Omega>t)$. Since \[ \partial_x(x'_{\tau_t}\in \Omega)-\partial_x(T'_\Omega>t)=\partial_x(\tau_t<T'_\Omega\le t)+\partial_x(T'_\Omega\le\tau_t\le t, x'_{\tau_t}\in \Omega), \] we just need to estimate each of the terms $$ \int_{\Omega_\epsilon} \partial_x(x'_{\tau_t}\in \Omega)dx, $$ $$ \int_{\Omega_\epsilon}\partial_x(\tau_t<T'_\Omega\le t)dx, $$ and $$ \int_{\Omega_\epsilon} \partial_x(T'_\Omega\le\tau_t\le t, x'_{\tau_t}\in \Omega)dx $$ separately. These estimates are obtained in the following three lemmas, which together yield Theorem \ref{thm-heat-content-prime}. The proofs of these three lemmas are given in the following section. \begin{lemma}\label{lemma-main-I} Let $\Omega$, $\Omega_\epsilon$ and $x'_t$ be given as before.
There exists a constant $C_1>0$ such that for $t>0$ small enough, \[ \bigg|\int_{\Omega_\epsilon} \partial_x(x'_{\tau_t}\in \Omega)dx-\Vol(\Omega_\epsilon)+\sqrt{\frac{2t}{\pi}}\sigma_0(\partial \Omega)-\frac{t}{4}\int_{\partial \Omega}H_{{\partial\Omega},0}(s)d\sigma_0(s)\bigg|\le C_1t^{3/2}. \] \end{lemma} \begin{lemma}\label{lemma-main-II} Let $\Omega$, $\Omega_\epsilon$ and $x'_t$ be as previously defined. Then \begin{align*} \int_{\Omega_\epsilon} \partial_x(T'_\Omega\le\tau_t\le t, x'_{\tau_t}\in \Omega)dx=o(t). \end{align*} \end{lemma} \begin{lemma}\label{lemma-main-III} Let $\Omega$, $\Omega_\epsilon$ and $x'_t$ be as previously defined. Then \begin{align*} \int_{\Omega_\epsilon} \partial_x(\tau_t<T'_\Omega\le t)dx=o(t). \end{align*} \end{lemma} \section{Proofs of the lemmas}\label{sec:proof} In this final section, we prove Lemmas \ref{lemma-main-I}, \ref{lemma-main-II}, and \ref{lemma-main-III}. First let us recall some notation. Let $x'_t=\exp_x\left( -B^N_tN+B^T_tT+ A_tZ\right)$ be the truncated diffusion process, $X_t=(-B^N_t, B^T_t, A_t)$ the lift of $x'_t$ to $T_x\mathbb{H}$, and $T'_\Omega=\inf\{t>0\,|\, {x}'_t\in \mathbb{H} \setminus \Omega \}$ the exit time of $x'_t$ from $\Omega$. To streamline the exposition, we defer the proofs of several technical estimates from Subsection \ref{subsec:proof-of-first-lemma} to an appendix. \subsection{Proof of Lemma \ref{lemma-main-I}}\label{subsec:proof-of-first-lemma} From now on we write $|(B^T_{\tau_t},A_{\tau_t})|:=|B^T_{\tau_t}|^2+|A_{\tau_t}|$. Since $|(B^T_{\tau_t},A_{\tau_t})|\le ||X_{\tau_t}||^2$, from Lemma \ref{lemma-X-TD-prime} we know that for any $\delta>0$, \[ \partial_x(|(B^T_{\tau_t},A_{\tau_t})|>\delta)= O(e^{-\delta^2/4t}).
\] Moreover, since \begin{equation*}\begin{split} \partial_x(x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta) &\le \partial_x(x'_{\tau_t}\in \Omega) \\ &\le\partial_x(x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)+\partial_x(|(B^T_{\tau_t},A_{\tau_t})|>\delta), \end{split}\end{equation*} we obtain that \[ \partial_x(x'_{\tau_t}\in \Omega)= \partial_x(x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)+O(e^{-\delta^2/4t}). \] Letting $E(t)=\int_{\Omega_\epsilon} \partial_x(x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)dx$, it remains to prove \begin{equation}\label{eq-E(t)} E(t)= \Vol(\Omega_\epsilon)-\sqrt{\frac{2t}{\pi}}\sigma_0(\partial \Omega)+\frac{t}{4}\int_{\partial \Omega}H_{{\partial\Omega},0}(s)d\sigma_0(s)+ O(t^{3/2}). \end{equation} For fixed $x\in\Omega_\epsilon$, assume $d_{cc}(x,\partial\Omega)=r>0$. When $\epsilon>0$ is small enough, we may assume that $x'_t$ started from $x\in\Omega_\epsilon$ stays inside the diffeomorphism neighborhood $\mathcal{O}_x$ of $\varphi_x$ for the small times $t$ under consideration. Hence we can consider the lifted process $X_t$. By comparing $\{x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta\}$ and $\{B^N_{\tau_t}<r, |(B^T_{\tau_t},A_{\tau_t})|<\delta\}$ we have \begin{equation*}\begin{split} \partial_x(x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta) &= \partial_x(B^N_{\tau_t}<r, |(B^T_{\tau_t},A_{\tau_t})|<\delta) \\ & \quad - \partial_x(B^N_{\tau_t}<r,x'_{\tau_t}\not\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta) \\ & \qquad + \partial_x(B^N_{\tau_t}>r,x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta).
\end{split}\end{equation*} We denote \begin{align*} &I_1(t)=\int_{\Omega_\epsilon} \partial_x(B^N_{\tau_t}<r,|(B^T_{\tau_t},A_{\tau_t})|<\delta)dx, \\ &I_2(t)=\int_{\Omega_\epsilon} \partial_x(B^N_{\tau_t}<r,x'_{\tau_t}\not\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)dx,\\ &I_3(t)=\int_{\Omega_\epsilon} \partial_x(B^N_{\tau_t}>r,x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)dx. \end{align*} Then $E(t)=I_1(t)-I_2(t)+I_3(t)$. We estimate these terms in the following three steps. \\ \noindent {\bf Step 1:} First, let us estimate $I_1(t)$. Using the parametrization $\Psi$ from Lemma \ref{lemma-J}, we have \[ I_1(t)=\int_0^\epsilon \int_{\partial \Omega} \partial_x(B^N_{\tau_t}<r,|(B^T_{\tau_t},A_{\tau_t})|<\delta)J_\Psi(s,r)d\sigma_0(s)dr, \] where $x = \Psi(s,r)$. Furthermore, since \[ \partial_x(B^N_{\tau_t}<r,|(B^T_{\tau_t},A_{\tau_t})|<\delta)=1- \partial_x(|(B^T_{\tau_t},A_{\tau_t})|>\delta)- \partial_x(B^N_{\tau_t}>r,|(B^T_{\tau_t},A_{\tau_t})|<\delta) \] and $\partial_x(|(B^T_{\tau_t},A_{\tau_t})|>\delta)= O(e^{-\frac{\delta^2}{4t}})$, we have \begin{equation}\label{eq-I-1-J} I_1(t)=\Vol(\Omega_\epsilon)-\int_0^\epsilon \int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r,|(B^T_{\tau_t},A_{\tau_t})|<\delta)J_\Psi(s,r)d\sigma_0(s)dr+o(t). \end{equation} Let $J(t)=\int_0^\epsilon\int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r,|(B^T_{\tau_t},A_{\tau_t})|<\delta)(1-r H_{{\partial\Omega},0}(s))d\sigma_0(s)dr$. There exists $c>0$ depending on $\delta>0$ such that \begin{equation*}\begin{split} J(t)&=\int_0^\epsilon\int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r)(1-r H_{{\partial\Omega},0}(s))d\sigma_0(s)dr+O(e^{-\frac{c}{t}})\\ &= \int_0^\infty\int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r)(1-r H_{{\partial\Omega},0}(s))d\sigma_0(s)dr-R_1(t)+ O(e^{-\frac{c}{t}}), \end{split}\end{equation*} where $R_1(t)=\int_\epsilon^\infty\int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r)(1-r H_{{\partial\Omega},0}(s))d\sigma_0(s)dr$. 
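Before carrying out the computation of these integrals, note that integrating the joint density \eqref{eq-phi} in $\tau$ recovers the reflection-principle formula for the tail of the running maximum, which is the form in which Lemma \ref{lemma-joint} is used below:

```latex
\[
\partial_x\big(B^N_{\tau_t}>r\big)
=\int_r^\infty\int_0^t \Phi(\xi,\tau;t)\,d\tau\, d\xi
=\frac{\sqrt{2}}{\sqrt{\pi t}}\int_r^{\infty} e^{-\frac{\xi^2}{2t}}\,d\xi,
\qquad r\ge 0.
\]
```

Indeed, by \eqref{eq-B^N-tau} the random variable $B^N_{\tau_t}$ is simply the running maximum of $B^N$ on $[0,t]$.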
By Lemma \ref{lemma-joint} we can compute \[ \int_0^\infty\int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r)d\sigma_0(s)dr=\sigma_0(\partial \Omega) \int_0^\infty\frac{\sqrt{2}}{\sqrt{\pi t}}\int_r^{\infty} e^{-\frac{\xi^2}{2t}}d\xi dr=\frac{\sqrt{2t}}{\sqrt{\pi}}\sigma_0(\partial \Omega) \] and similarly \[ \int_0^\infty\int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r)r H_{{\partial\Omega},0}(s)d\sigma_0(s)dr=\frac{t}{2}\int_{\partial \Omega} H_{{\partial\Omega},0}(s)d\sigma_0(s). \] Therefore we have \[ J(t)=\frac{\sqrt{2t}}{\sqrt{\pi}}\sigma_0(\partial \Omega)-\frac{t}{2}\int_{\partial \Omega} H_{{\partial\Omega},0}(s)d\sigma_0(s)-R_1(t)+O(e^{-\frac{c}{t}}). \] Plugging this into \eqref{eq-I-1-J} yields \[ I_1(t)=\Vol(\Omega_\epsilon)-\frac{\sqrt{2t}}{\sqrt{\pi}}\sigma_0(\partial \Omega)+\frac{t}{2}\int_{\partial \Omega} H_{{\partial\Omega},0}(s)d\sigma_0(s)+R_1(t)+R_2(t)+O(e^{-\frac{c}{t}}) \] where \[ R_2(t)=\int_0^\epsilon \int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r,|(B^T_{\tau_t},A_{\tau_t})|<\delta)(1-J_\Psi(s,r)-r H_{{\partial\Omega},0}(s))d\sigma_0(s)dr. \] Now we are left to estimate $R_1(t)$ and $R_2(t)$. Note when $r\ge \epsilon$ we have \[ \bigg| 1-rH_{{\partial\Omega},0}(s)\bigg| \le r^2|\epsilon^{-2}+K\epsilon^{-1}|, \] where $K=\max_{s\in\partial\Omega}|H_{{\partial\Omega},0}(s)|$. Hence \begin{equation*}\begin{split} R_1(t) &= \frac{\sqrt{2}}{\sqrt{\pi t}}\int_\epsilon^\infty \int_r^{\infty} e^{-\frac{\xi^2}{2t}}d\xi\int_{\partial \Omega} (1-r H_{{\partial\Omega},0}(s))d\sigma_0(s) dr \\ &\le \frac{C_\epsilon}{\sqrt{t}} \sigma_0({\partial\Omega}) \int_0^\infty \int_r^{\infty}r^2 e^{-\frac{\xi^2}{2t}}d\xi\, dr =O(t^{3/2}). \end{split}\end{equation*} For $R_2(t)$, by \eqref{eq-jacobi} we know that $ \bigg|1-J_\Psi(s,r)-rH_{{\partial\Omega},0}(s) \bigg|\le K_1 r^2 $ for $r\in (0,\epsilon)$, hence \[ |R_2(t)| \le \frac{\sqrt{2}K_1}{\sqrt{\pi t}} \sigma_0({\partial\Omega}) \int_0^\infty\int_r^\infty r^2e^{-\frac{\xi^2}{2t}}d\xi dr=O(t^{3/2}). 
\] At the end we obtain \begin{equation}\label{eq-I-1} I_1(t)=\Vol(\Omega_\epsilon)-\frac{\sqrt{2t}}{\sqrt{\pi}}\sigma_0(\partial \Omega)+\frac{t}{2}\int_{\partial \Omega} H_{{\partial\Omega},0}(s)d\sigma_0(s)+O(t^{3/2}). \end{equation} \noindent {\bf Step 2:} We are left to show that $$ -I_2(t)+I_3(t)=-\frac{t}{4}\int_{\partial \Omega} H_{{\partial\Omega},0}(s)d\sigma_0(s)+O(t^{3/2}). $$ By changing coordinates we have \begin{align*} I_2(t)&=\int_0^\epsilon\int_{\partial \Omega} \partial_x(B^N_{\tau_t}<r,x'_{\tau_t}\not\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)J_\Psi(s,r)d\sigma_0(s)dr. \end{align*} We claim that \begin{equation}\label{eq-claim-A5} \int_0^\epsilon\int_{\partial \Omega} \partial_x(B^N_{\tau_t}<r,x'_{\tau_t}\not\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)(1-J_\Psi(s,r))d\sigma_0(s)dr=O(t^{3/2}) \end{equation} and \begin{equation}\label{eq-claim-A3} \int_\epsilon^\infty\int_{\partial \Omega} \partial_x(B^N_{\tau_t}<r,x'_{\tau_t}\not\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)d\sigma_0(s)dr=O(t^{3/2}). \end{equation} Estimates \eqref{eq-claim-A5} and \eqref{eq-claim-A3} are proved in sections \ref{App-claim-A5} and \ref{App-claim-A3} respectively. Then we have \begin{align*} I_2(t)&=\int_0^\infty \int_{\partial \Omega} \partial_x(B^N_{\tau_t}<r,x'_{\tau_t}\not\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)d\sigma_0(s)dr+O(t^{3/2}). 
\end{align*} Using the coordinate system in \eqref{eq-cartesian} and \eqref{eq-NTZ-cor} we have \[ \{x'_{\tau_t}\not\in \Omega\}=\{h(B^T_{\tau_t}, A_{\tau_t};s)>r-B^N_{\tau_t}\}, \] thus by Fubini we obtain \begin{align}\label{eq-I2} I_2(t)&=\int_0^\infty \int_{\partial \Omega} \partial_x(B^N_{\tau_t}<r,h(B^T_{\tau_t}, A_{\tau_t};s)>r-B^N_{\tau_t}, |(B^T_{\tau_t},A_{\tau_t})|<\delta)d\sigma_0(s)dr+O(t^{3/2})\nonumber\\ &=\int_{\partial \Omega} \,\mathbb E_x\left(h^+(B^T_{\tau_t},A_{\tau_t};s)\mathbbm{1}_{\{|(B^T_{\tau_t},A_{\tau_t})|<\delta\} }\right)\,d\sigma_0(s) +O(t^{3/2}), \end{align} where $h^+(y,z;s)=\int_0^\infty\mathbbm{1}_{\{h(y,z;s)>r\}}dr$ is the positive part of $h(y,z;s)$.\\ \noindent {\bf Step 3:} Now consider $I_3(t)$. Note \begin{align*} I_3(t)&=\int_0^\epsilon\int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r,x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)d\sigma_0(s) dr\\ &-\int_0^\epsilon\int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r,x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)(1-J_\Psi(s,r))d\sigma_0(s)dr. \end{align*} We claim that \begin{equation}\label{eq-minor-I3-1} \int_0^\epsilon\int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r,x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)(1-J_\Psi(s,r))d\sigma_0(s)dr=O(t^{3/2}), \end{equation} and \begin{equation}\label{eq-minor-I3-2} \int_\epsilon^\infty\int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r,x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)d\sigma_0(s)dr=O(t^{3/2}). \end{equation} Estimates \eqref{eq-minor-I3-1} and \eqref{eq-minor-I3-2} are proved in sections \ref{App-minor-I3-1} and \ref{App-minor-I3-2} respectively. 
Therefore by \eqref{eq-cartesian} and \eqref{eq-NTZ-cor} we have \begin{align*} I_3(t)&=\int_0^\infty\int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r,x'_{\tau_t}\in \Omega, |(B^T_{\tau_t},A_{\tau_t})|<\delta)d\sigma_0(s)dr+O(t^{3/2})\\ &=\int_0^\infty \int_{\partial \Omega} \partial_x(B^N_{\tau_t}>r,h(B^T_{\tau_t}, A_{\tau_t};s)<r-B^N_{\tau_t}, |(B^T_{\tau_t},A_{\tau_t})|<\delta)d\sigma_0(s)dr+O(t^{3/2}) \end{align*} Let $h^-(y,z;s)=|h(y,z;s)| - h^+(y,z;s)$ be the negative part of $h(y,z;s)$, then by Fubini, \[ I_3(t)=\int_{\partial \Omega} \,\mathbb E_x\left(\min\left(h^-(B^T_{\tau_t},A_{\tau_t};s),B^N_{\tau_t}\right)\mathbbm{1}_{\{|(B^T_{\tau_t},A_{\tau_t})|<\delta\} }\right)\,d\sigma_0(s) +O(t^{3/2}) \] Note $\min\left(h^-(B^T_{\tau_t},A_{\tau_t};s),B^N_{\tau_t}\right)=h^-(B^T_{\tau_t},A_{\tau_t};s)+\min\left(B^N_{\tau_t}-h^-(B^T_{\tau_t},A_{\tau_t};s),0\right)$, hence \[ I_3(t)=\int_{\partial \Omega} \,\mathbb E_x\left(h^-(B^T_{\tau_t},A_{\tau_t};s)\mathbbm{1}_{\{|(B^T_{\tau_t},A_{\tau_t})|<\delta\} }\right)\,d\sigma_0(s)+R_3(t) +O(t^{3/2}), \] where \begin{align*} R_3(t)&=\int_{\partial \Omega} \,\mathbb E_x\left(\min\left(B^N_{\tau_t}-h^-(B^T_{\tau_t},A_{\tau_t};s),0\right)\mathbbm{1}_{\{|(B^T_{\tau_t},A_{\tau_t})|<\delta\} }\right)\,d\sigma_0(s). \end{align*} We claim that \begin{align}\label{eq-C-4} |R_3(t)|=O(t^{3/2}); \end{align} see section \ref{App-C-4} for a proof. At the end we have \begin{align}\label{eq-I3} I_3(t)&=\int_{\partial \Omega} \,\mathbb E_x\left(h^-(B^T_{\tau_t},A_{\tau_t};s)\mathbbm{1}_{\{|(B^T_{\tau_t},A_{\tau_t})|<\delta\} }\right)\,d\sigma_0(s) +O(t^{3/2}). 
\end{align} Now by combining \eqref{eq-I2} and \eqref{eq-I3} we obtain \begin{align*} -I_2(t)+I_3(t)&=-\int_{\partial \Omega} \,\mathbb E_x\left(h(B^T_{\tau_t},A_{\tau_t};s)\mathbbm{1}_{\{|(B^T_{\tau_t},A_{\tau_t})|<\delta\} }\right)\,d\sigma_0(s) +O(t^{3/2})\\ &=-\int_{\partial \Omega} \,\mathbb E_x\left(\frac12 {H_{{\partial\Omega},0}(s)}(B^T_{\tau_t})^2+k_1(s)A_{\tau_t}\right)\,d\sigma_0(s)+C_1(t)+C_2(t)+O(t^{3/2}) \end{align*} where \[ C_1(t)=\int_{\partial \Omega} \,\mathbb E_x\left(\left(\frac12{H_{{\partial\Omega},0}(s)}(B^T_{\tau_t})^2+k_1(s)A_{\tau_t}-h(B^T_{\tau_t},A_{\tau_t};s)\right)\mathbbm{1}_{\{|(B^T_{\tau_t},A_{\tau_t})|<\delta\} }\right)\,d\sigma_0(s) \] and \[ C_2(t)=\int_{\partial \Omega} \,\mathbb E_x\left(\left(\frac12{H_{{\partial\Omega},0}(s)}(B^T_{\tau_t})^2+k_1(s)A_{\tau_t}\right)\mathbbm{1}_{\{|(B^T_{\tau_t},A_{\tau_t})|\ge\delta\} }\right)\,d\sigma_0(s). \] We claim that for sufficiently small $\eta>0$, \begin{align}\label{eq-C} C_1(t)=O(t^{3/2-\eta}), \quad C_2(t)=O(t^{3/2-\eta}). \end{align} Estimate \eqref{eq-C} will be proved in section \ref{App-C}. Then we have \begin{align*} -I_2(t)+I_3(t)&=-\bigg(\int_{\partial \Omega} \frac12{H_{{\partial\Omega},0}(s)} d\sigma_0(s)\bigg)\cdot\mathbb E\left((B^T_{\tau_t})^2\right)-\bigg(\int_{\partial \Omega}k_1(s) d\sigma_0(s)\bigg) \cdot\mathbb E\left(A_{\tau_t}\right)+o(t). \end{align*} Moreover, since \[ \mathbb E\left((B^T_{\tau_t})^2\right)=\int_0^t\mathbb E\left((B^T_{\tau_t})^2|\tau_t=\tau\right)\partial(\tau_t=d\tau)=\int_0^t \tau\, \partial(\tau_t=d\tau)=\mathbb E(\tau_t)=\frac{t}{2}, \] where the last equality follows from the arcsine law for $\tau_t$, and \[ \mathbb E\left(A_{\tau_t}\right)=\mathbb E\bigg(-B^N_{\tau_t}B^T_{\tau_t}+2\int_0^{\tau_t}B_s^NdB_s^T\bigg)=0, \] we obtain \begin{equation}\label{I2I3-final} -I_2(t)+I_3(t)=-\frac{t}{4}\int_{\partial \Omega}H_{{\partial\Omega},0}(s) d\sigma_0(s)+O(t^{3/2}).
\end{equation} Combining \eqref{I2I3-final} with \eqref{eq-I-1} we have \[ E(t)=I_1(t)-I_2(t)+I_3(t)=\Vol(\Omega_\epsilon)-\sqrt{\frac{2t}{\pi}}\sigma_0(\partial \Omega)+\frac{t}{4}\int_{\partial \Omega}H_{{\partial\Omega},0}(s)d\sigma_0(s)+ O(t^{3/2}), \] which completes the proof of \eqref{eq-E(t)}. \subsection{Proof of Lemma \ref{lemma-main-II}} \begin{lemma}\label{lemma-sigma-w} Let $\Omega$ and $\Omega_\epsilon$ be as previously defined, and recall that ${\partial\Omega}$ is locally parameterized by a function $h$ as in \eqref{eq-NTZ-cor}. There is a constant $K>0$ so that for any $\delta<\epsilon$ and any $x\in \Omega_\delta$, $\sigma\in\partial \Omega$, and $w\in\mathbb{H}$ such that \begin{itemize} \item $d_{cc}(\sigma, x)<\delta$ and $d_{cc}(w,x)<\delta$, \item the estimate \begin{equation}\label{eq-w-lambda} \lambda_1-w_1\le h_y(\lambda_2,\lambda_3;s)(w_2-\lambda_2) -K\left(|w_2-\lambda_2|^2+|w_3-\lambda_3|\right), \end{equation} holds, where $\varphi_x^{-1} (\sigma)=(-\lambda_1,\lambda_2,\lambda_3)$ and $\varphi_x^{-1} (w)=(-w_1, w_2, w_3)$, \end{itemize} then the following conclusions hold: \begin{itemize} \item[(1)] $w\not\in \Omega$, \item[(2)] $|h_y(\lambda_2, \lambda_3;s)|\le K (|\lambda_2|+|\lambda_3|)$. \end{itemize} \end{lemma} \begin{proof} Since $\sigma \in {\partial\Omega}$ is sufficiently close to $x$, we know that \begin{equation}\label{lambdas} -\lambda_1=h(\lambda_2,\lambda_3;s)-r. \end{equation} Using the Taylor expansion of $h$ we have \begin{align*} h(w_2, w_3;s)&=h(\lambda_2,\lambda_3;s)+h_y(\lambda_2,\lambda_3;s)(w_2-\lambda_2) +R(s,w', \lambda'), \end{align*} where $\lambda'=(\lambda_2,\lambda_3)$ and $w'=(w_2,w_3)$. Since ${\partial\Omega}$ is $C^3$ the remainder term can be bounded uniformly: \[ |R(s,w', \lambda')|\le \frac{1}{2}K(|w_2-\lambda_2|^2+|w_3-\lambda_3|) \] for a suitable choice of $K$. We first verify (1). 
It follows from the preceding estimates that \[ h(w_2, w_3;s)\ge h(\lambda_2,\lambda_3;s)+h_y(\lambda_2,\lambda_3;s)(w_2-\lambda_2) -\frac12K(|w_2-\lambda_2|^2+|w_3-\lambda_3|). \] Together with \eqref{eq-w-lambda} and \eqref{lambdas} this then implies that $h(w_2,w_3;s)\ge -w_1+r$, namely $w\not\in\Omega$. (2) is an easy consequence of the fact that $h$ is $C^2$ and $h_y(0,0;s)=0$. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma-main-II}] We denote by $\sigma\in\partial \Omega$ the exit point $x'_{T'_{\Omega}}$, and let $\partial_\sigma$ be the probability measure of the Markov process $x'_t$ started from $\sigma$. Recalling the notation $X_t=\varphi_x^{-1}(x'_t)$, we estimate \begin{align*} \partial_x(T'_\Omega\le\tau_t\le t, x'_{\tau_t}\in \Omega) &\le \partial_x(|X_{T'_\Omega}|>\delta)+\partial_x(T'_\Omega\le\tau_t\le t, |X_{T'_\Omega}|<\delta, x'_{\tau_t}\in \Omega)\\ &\le \partial_x(T'_\Omega\le\tau_t\le t, |X_{T'_\Omega}|<\delta, x'_{\tau_t}\in \Omega)+O(t^{3/2}). \end{align*} For fixed $\sigma\in\partial \Omega$, $u\ge0$, let $\phi_{x}( \sigma,u)$ be the probability that the process $x'_t$ started from $\sigma$ is inside $\Omega$ at the moment when $B^N$ attains its maximum over $[0,u]$, that is, \[ \phi_{x}( \sigma,u)=\partial_\sigma(x'_{\tau_u}\in \Omega), \quad \tau_u=\inf\lbrace\tau: B^N_\tau=\sup_{0\le v\le u}B^N_v\rbrace. \] Under $\partial_x$, we have $\tau_u=\inf\lbrace\tau: B^N_\tau=\sup_{T'_\Omega\le v\le u}B^N_v\rbrace.$ Note that $T'_\Omega\le\tau_t\le t$ means that $x'_t$ hits $\partial \Omega$ before $B^N$ achieves its maximum, hence \begin{align}\label{eq-T-X-D} \partial_x(T'_\Omega\le\tau_t\le t, |X_{T'_\Omega}|<\delta, x'_{\tau_t}\in \Omega) \le \mathbb E_x\left({\mathbbm{1}}_{\{T'_\Omega\le t, |X_{T'_\Omega}|<\delta\}}\phi_{x}( x'_{T'_\Omega}, t-T'_\Omega) \right).
\end{align} Using the same notation as in Lemma \ref{lemma-sigma-w}, for $\sigma\in\partial \Omega$, $d_{cc}(\sigma,x)<\delta$, we have $-\lambda_1=h(\lambda_2,\lambda_3;s)-r$, that is \[ \varphi_x^{-1}(\sigma)=(h(\lambda_2,\lambda_3;s)-r, \lambda_2,\lambda_3). \] Also, under $\partial_\sigma$ we can write \begin{align*} X_{\tau_u}=\varphi_x^{-1}(x'_{\tau_u}) =(-\lambda_1-\beta^N_{\tau_u}, \lambda_2+\beta^T_{\tau_u}, \lambda_3+\mathcal{A}_{\tau_u}), \end{align*} where $\beta^N$ and $\beta^T$ are independent standard Brownian motions under $\partial_\sigma$, and $\mathcal{A}_{\tau_u}=-\beta^N_{\tau_u}\lambda_2+\beta^T_{\tau_u}\lambda_1-\beta^T_{\tau_u}\beta^N_{\tau_u}+2\int_0^{\tau_u}\beta^N_sd\beta^T_s$. By Lemma \ref{lemma-sigma-w} we know that if $x'_{\tau_u}\in \Omega$ then \begin{align*} -\beta^N_{\tau_u}&\ge h_y(\lambda_2,\lambda_3;s)\beta^T_{\tau_u}-K\left(|\beta^T_{\tau_u}|^2+|\mathcal{A}_{\tau_u}|\right) \end{align*} Hence for $\sigma=x'_{T'_\Omega}$, $u=t-T'_\Omega$, $$ \phi_x(\sigma, u)=\partial_\sigma(x'_{\tau_u}\in \Omega) \le\partial_\sigma\bigg(\beta^N_{\tau_u}\le -h_y(\lambda_2,\lambda_3;s) \beta^T_{\tau_u}+K\left(|\beta^T_{\tau_u}|^2+|\mathcal{A}_{\tau_u}|\right) \bigg). $$ Since $\int_0^{\tau_u}\beta^N_sd\beta^T_s$ is a martingale, it can be written as a time changed Brownian motion $\beta_{\int_0^{\tau_u}(\beta_s^N)^2ds}$ where $\beta$ is an independent standard Brownian motion. Hence there exists $C>0$ such that \begin{equation*}\begin{split} \partial_\sigma\bigg(\bigg|\int_0^{\tau_u}\beta^N_sd\beta^T_s \bigg|>\beta^N_{\tau_u}{\tau_u}^{1/2-\eta}\bigg) &=\partial_\sigma\bigg(|\beta_1|^2\int_0^{\tau_u}(\beta^N_s)^2ds >(\beta^N_{\tau_u})^2{\tau_u}^{1-2\eta}\bigg) \\ &\le \partial_\sigma\bigg(|\beta_1|^2\cdot{\tau_u} >{\tau_u}^{1-2\eta}\bigg) \\ &= \partial_\sigma\bigg(|\beta_1|>\frac{1}{t^{\eta}}\bigg) =O(e^{-C/t^{2\eta}}) \end{split}\end{equation*} for some $\eta\in(0,1/2)$. 
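The time-change step above can be cross-checked against the It\^o isometry: since $\int_0^{t}\beta^N_s\,d\beta^T_s=\beta_{\int_0^{t}(\beta^N_s)^2ds}$, one has $\mathbb E\big[\big(\int_0^t\beta^N_s\,d\beta^T_s\big)^2\big]=\mathbb E\big[\int_0^t(\beta^N_s)^2\,ds\big]=t^2/2$. The following Monte Carlo sketch checks this identity numerically (the step and sample counts are arbitrary choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
t, n_steps, n_paths = 1.0, 400, 10000
dt = t / n_steps

# increments of two independent standard Brownian motions
dN = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
dT = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))

# left-point (Ito) evaluation of beta^N inside the stochastic integral
N_left = np.cumsum(dN, axis=1) - dN           # beta^N just before each increment
ito_int = np.sum(N_left * dT, axis=1)         # ~ int_0^t beta^N_s dbeta^T_s
clock = np.sum(N_left**2 * dt, axis=1)        # ~ int_0^t (beta^N_s)^2 ds

# both quantities should be close to t^2/2 = 0.5
print(ito_int.var(), clock.mean())
```

Both printed values approach $t^2/2$ as the discretization is refined, consistent with representing the martingale as a Brownian motion run at the clock $\int_0^t(\beta^N_s)^2\,ds$.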
Therefore we know that \begin{align*} &\phi_x(\sigma, u)\le \partial_\sigma\bigg(\beta^N_{\tau_u}\big(1-K|\lambda_2|-K|\beta^T_{\tau_u}|-2K\tau_u^{1/2-\eta}\big)\le |h_y(\lambda_2,\lambda_3;s)||\beta^T_{\tau_u}| +K|\lambda_1||\beta^T_{\tau_u}|\\ & \qquad \quad +K |\beta^T_{\tau_u}|^2 \bigg)+O(t^{3/2}), \end{align*} Hence we have \begin{align*} &\phi_x(\sigma, u)\le \partial_\sigma\bigg(\frac{1}{2}\beta^N_{\tau_u}\le |h_y(\lambda_2,\lambda_3;s)||\beta^T_{\tau_u}| +K|\lambda_1||\beta^T_{\tau_u}|+K |\beta^T_{\tau_u}|^2 \bigg)\\ &\qquad \quad +\partial_\sigma\left(1-K|\lambda_2|-K|\beta^T_{\tau_u}|-2K\tau_u^{1/2-\eta}<\frac{1}{2}\right)+O(t^{3/2}). \end{align*} Recall $d_{cc}(\sigma,x)<\delta$ implies $|\lambda_2|<\delta$. When $\delta>0$ is small enough such that $\delta\le\frac{1}{8K}$, and when $t$ is small enough such that $2Kt^{1/2-\eta}<1/4$, in the set $\{||X_{T'_\Omega}||<\delta\}\cap\{T'_\Omega<t \}\cap\{\tau_u<t\}$ we have \begin{equation*}\begin{split} \partial_\sigma\left(1-K|\lambda_2|-K|\beta^T_{\tau_u}|-2K\tau_u^{1/2-\eta}<\frac{1}{2}\right) &\le\partial_\sigma\left(K|\lambda_2|+K|\beta^T_{\tau_u}|>\frac{1}{4}\right) \\ &\le \partial_\sigma\left(|\beta^T_{\tau_u}|_*>\frac{1/4-K\delta}{K}\right) \\ &\le \partial_\sigma\left(|\beta^T_t|_*>\frac{1}{8K}\right)=O(e^{-C_K/t}) \end{split}\end{equation*} for some $C_K>0$, where $|\beta^T|_*$ is the running maximum of $|\beta^T|$. 
Hence we have in the set $\{||X_{T'_\Omega}||<\delta\}\cap\{T'_\Omega<t \}\cap\{\tau_u<t\}$, \begin{equation*}\begin{split} \phi_x(\sigma, u)=\partial_\sigma(x'_{\tau_u}\in \Omega) &\le \partial_\sigma\bigg(\beta^N_{\tau_u}/2\le |h_y(\lambda_2,\lambda_3;s)||\beta^T_{\tau_u}| +K|\lambda_1||\beta^T_{\tau_u}|+K |\beta^T_{\tau_u}|^2 \bigg)+O(t^{3/2})\\ &= \int_0^ud\tau\int_{-\infty}^\infty \frac{1}{\sqrt{2\pi\tau}}e^{-\frac{y^2}{2\tau}} dy \int_{\{0<\xi< 2F(y,\tau) \}} \Phi(\xi, \tau;u)d\xi+O(t^{3/2})\\ &= \int_0^ud\tau\int_{-\infty}^\infty \frac{1}{\pi\sqrt{\tau}\sqrt{u-\tau}}\left(1- e^{-\frac{1}{\tau}F(y,\tau)}\right) \frac{1}{\sqrt{2\pi\tau}}e^{-\frac{y^2}{2\tau}} dy+O(t^{3/2}) \end{split}\end{equation*} where \[ F(y,\tau)=\bigg(|h_y(\lambda_2,\lambda_3;s)||y|+K|\lambda_1||y|+K |y|^2\bigg)^2. \] Since \[ 1-e^{-\frac{1}{\tau}F(y,\tau)} \le \frac{C_{K'}}{\tau}\left(y^4+|h_y|^2|y|^2+|y|^2\lambda_1^2\right) \] for some $C_{K'}>0$, we have for $\sigma=x'_{T'_\Omega}$, $u=t-T'_\Omega$, that \begin{align*} \phi_x(\sigma, u) &\le\int_0^ud\tau\int_{-\infty}^\infty \frac{C_{K'}}{\pi{\tau}^{3/2}\sqrt{u-\tau}}\left(y^4+|h_y|^2|y|^2+|y|^2\lambda_1^2\right) \frac{1}{\sqrt{2\pi\tau}}e^{-\frac{y^2}{2\tau}} dy+O(t^{3/2})\\ &=C_{K'}\int_0^u \frac{3\sqrt{\tau}+| h_y|^2\tau^{-1/2}+\tau^{-1/2}\lambda_1^2}{\sqrt{u-\tau}} d\tau+O(t^{3/2})\\ &\le C'\bigg(u +|h_y|^2+\lambda_1^2\bigg)+O(t^{3/2}) \\ &\le C'' \left(t+K^2(|\lambda_2|^2+|\lambda_3|)+\lambda_1^2\right)+O(t^{3/2}). \end{align*} The last inequality is due to Lemma \ref{lemma-sigma-w}.
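The factor $\frac{1}{\pi\sqrt{\tau}\sqrt{u-\tau}}$ in the last display is the arcsine density of the argmax time $\tau_u$ of Brownian motion on $[0,u]$, whose distribution function is $\frac{2}{\pi}\arcsin\sqrt{\tau/u}$. A quick simulation confirms this (grid and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
u, n_steps, n_paths = 1.0, 500, 10000
dt = u / n_steps

# sample Brownian paths on a grid and record the time of their maximum
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
tau = (B.argmax(axis=1) + 1) * dt

# arcsine law: P(tau_u <= s) = (2/pi) * arcsin(sqrt(s/u))
s = 0.25
empirical = (tau <= s).mean()
theoretical = 2.0 / np.pi * np.arcsin(np.sqrt(s / u))   # = 1/3 for s = u/4
print(empirical, theoretical)
```

The empirical frequency matches the arcsine prediction up to Monte Carlo error; in particular the mean of $\tau_u$ is $u/2$, which is exactly the value of $\mathbb E\big((B^T_{\tau_t})^2\big)$ used earlier.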
Plugging back into \eqref{eq-T-X-D} we obtain \begin{align*} &\partial_x(T'_\Omega\le\tau_t\le t, x'_{\tau_t}\in \Omega) \\ &\quad \le \mathbb E_x\left({\mathbbm{1}}_{\{T'_\Omega\le t, ||X_{T'_\Omega}||<\delta\}}C'' \left(t+K^2(|B^T_{T'_\Omega}|^2+|A_{T'_\Omega}|)+|B^N_{T'_\Omega}|^2\right) \right)+O(t^{3/2})\\ &\quad \le \mathbb E_x\left({\mathbbm{1}}_{\{T'_\Omega\le t\}} \left(C'' t+C''K^2\left(\sup_{0\le s\le t}|B^T_{s}|^2+\sup_{0\le s\le t}|A_{s}|\right)+C''\sup_{0\le s\le t}|B^N_{s}|^2\right) \right)+O(t^{3/2})\\ &\quad \le C'' \partial_x(T'_\Omega\le t)t+O(t^{3/2}) \\ & \qquad \quad + C'''\mathbb E_x\left({\mathbbm{1}}_{\{T'_\Omega\le t\}}\right)^{1/2}\left[\left(\mathbb E_x\left(\sup_{0\le s\le t}|B^T_{s}|^4\right)\right)^{1/2}+\left(\mathbb E_x\left(\sup_{0\le s\le t}|A_{s}|^2\right) \right)^{1/2}+\left(\mathbb E_x\left(\sup_{0\le s\le t}|B^N_{s}|^4\right)\right)^{1/2}\right] \end{align*} By Doob's maximal inequality, for $i=N, T$, \[ \mathbb E_x\left(\sup_{0\le s\le t}|B^i_{s}|^4\right)\le (4/3)^4\mathbb E_x(|B_t^i|^4)=4^43^{-3}t^2 \] and \begin{align*} \mathbb E_x\left(\sup_{0\le s\le t}|A_{s}|^2\right)\le 4\mathbb E_x(|A_t|^2)=4t^2. \end{align*} Therefore we obtain \begin{align*} &\partial_x(T'_\Omega\le\tau_t\le t, x'_{\tau_t}\in \Omega) \le C''\partial_x(T'_\Omega\le t) t+C'''\partial_x\left(T'_\Omega\le t\right)^{1/2}t+O(t^{3/2}) \end{align*} From Lemma \ref{lemma-X-TD-prime} we know there exists $C^*,C_2, c>0$ such that \begin{align*} & \int_{\Omega_\epsilon} \partial_x(T'_\Omega\le\tau_t\le t, x'_{\tau_t}\in \Omega)dx \\ & \quad \le C^*\int_{\Omega_\epsilon}\left(te^{-\frac{d_{cc}(x,{\partial\Omega})^2}{ct}} +te^{-\frac{d_{cc}(x,{\partial\Omega})^2}{2ct}}\right)dx+O(t^{3/2})\\ & \quad =C^* \int_{0}^\epsilon \int_{\partial \Omega}\left(te^{-\frac{r^2}{ct}} +te^{-\frac{r^2}{2ct}}\right)(1-H_{{\partial\Omega},0}(s)r)d\sigma_0(s)dr+O(t^{3/2}) \\ & \quad \le C_2t^{3/2}. \end{align*} Hence we obtain Lemma \ref{lemma-main-II}. 
\end{proof} \subsection{Proof of Lemma \ref{lemma-main-III}} Our main task is to estimate $\partial_x(\tau_t<T'_\Omega\le t)$, namely the probability that $x'_t$ \begin{itemize} \item remains inside $\Omega$ up to its furthest excursion along the outward horizontal normal direction $-N$ within time $t$, and \item exits $\Omega$ after this ``maximum excursion'' along $-N$ before time $t$. \end{itemize} Again, we deal with the lifted process on the tangent space. Let $X_t=(-B^N_t, B^T_t, A_t)$ be the Markov process as given in \eqref{eq-X-NTZ}. For any $w\in\mathcal{O}_x$, we denote $\varphi_x^{-1}(w)=(-w_1, w_2, w_3)$. From \eqref{eq-NTZ-cor} we know that $w\in \Omega$ if $w_1<r-h(w_2,w_3;s)$ and $w\not\in \Omega$ if $w_1>r-h(w_2,w_3;s)$. By Lemma \ref{lemma-h-H} we can then conclude that there exists a $\delta\in(0, \epsilon)$ such that $$ w_1<r-\frac{1}{2}H_{{\partial\Omega},0}(s)w_2^2-k_1(s)w_3+\frac{1}{\delta^2}|(w_2,w_3)|^3 $$ if $w\in \Omega$, while $$ w_1\ge r-\frac{1}{2}H_{{\partial\Omega},0}(s)w_2^2-k_1(s)w_3-\frac{1}{\delta^2}|(w_2,w_3)|^3 $$ if $w\not\in \Omega$. Hence in probabilistic language we have \[ \partial_x(\tau_t<T'_\Omega\le t, \sup_{0\le s\le t}||X_s||\le \delta)\le D(s,r,t), \] where \begin{align*} D(s,r,t)&=\partial_x\bigg( B^N_{\tau_t}<r-\frac{1}{2}H_{{\partial\Omega},0}(s) |B^T_{\tau_t}|^2-k_1(s) A_{\tau_t}+\frac{1}{\delta^2}|( B^T_{\tau_t}, A_{\tau_t})|^3,\\ & \exists v\in [\tau_t,t]: B^N_{v}\ge r-\frac{1}{2}H_{{\partial\Omega},0}(s) |B^T_{v}|^2-k_1(s) A_{v}-\frac{1}{\delta^2}|( B^T_{v}, A_{v})|^3\bigg) \, . \end{align*} \begin{proof}[Proof of Lemma \ref{lemma-main-III}] By changing coordinates, we have \begin{equation}\label{eq-int-p} \int_{\Omega_\epsilon} \partial_x(\tau_t<T'_\Omega\le t)dx\le(1+K'\epsilon)\int_0^\epsilon\int_{\partial \Omega}\partial_{(s,r)}(\tau_t<T'_\Omega\le t)d\sigma_0(s)dr, \end{equation} where $K'=\max_{s\in \partial \Omega}|H_{{\partial\Omega},0}(s)|+K_1$ and $K_1$ is as in Lemma \ref{lemma-J}.
For fixed $s\in \partial \Omega$ we want to bound $\int_0^\epsilon\partial_{(s,r)}(\tau_t<T'_\Omega\le t)dr$. From Lemma \ref{lemma-X-TD-prime} we know that there exists $C>0$ such that \begin{align}\label{eq-P-D} \partial_x(\tau_t<T'_\Omega\le t)\le \partial_x\left(\sup_{0\le s\le t}||X_s||\ge \delta\right)+D(s,r,t)\le D(s,r,t)+Ct^{3/2}. \end{align} The rest of the proof is devoted to estimating $D(s,r,t)$. For each $v\in [\tau_t,t]$, introduce a parameter $\tau = \tau(v) \in[0,1]$ such that $v=\tau_t+\tau(t-\tau_t)$, and let $M^N_\tau$ and $M^T_\tau$ be given as \begin{align*} & M^N_\tau=\frac{B^N_{\tau_t}-B^N_{\tau_t+\tau(t-\tau_t)}}{\sqrt{t-\tau_t}},\quad M^T_\tau=\frac{B^T_{\tau_t+\tau(t-\tau_t)}-B^T_{\tau_t}}{\sqrt{t-\tau_t}}. \end{align*} Clearly $M^N_\tau$ is a Brownian meander process. Since $B^T$ is independent of $(B^N,\tau_t)$, the process $M^T_\tau$ is a standard Brownian motion independent of $M^N$. We have \[ X_v=X_{\tau_t}+\bigg(\sqrt{t-\tau_t}M^N_\tau, \sqrt{t-\tau_t}M^T_\tau , A_{\tau_t+\tau(t-\tau_t)}-A_{\tau_t} \bigg), \] where $$ A_{\tau_t+\tau(t-\tau_t)}-A_{\tau_t}= -\sqrt{t-\tau_t}\left(M_\tau^NB_{\tau_t}^T-M_\tau^TB_{\tau_t}^N \right)-(t-\tau_t)M^N_\tau M^T_\tau+2(t-\tau_t)\int_0^\tau M_s^NdM_s^T.
$$ Let $\chi_t(\xi, y, z, u)$ be the density function of $(B^N_{\tau_t}, B^T_{\tau_t}, A_{\tau_t},t-\tau_t)$, then \begin{align*} &D(s,r,t)=\int_0^\infty d\xi\int_{-\infty}^{\infty}dy \int_{-\infty}^\infty dz \int_0^t du\, \chi_t(\xi, y, z, u){\mathbbm{1}}_{\xi-r+\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2+k_1(s)z<\frac{1}{\delta^2}|(y,z)|^3} \\ &\quad \cdot\partial_x\bigg( \exists \tau\in [0,1]: \xi-\sqrt{u}M^N_{\tau}\ge r-\frac{1}{2}H_{{\partial\Omega},0}(s)|y+\sqrt{u}M^T_{\tau}|^2 -k_1(s) \bigg(z-\sqrt{u}(M^N_\tau y-M^T_\tau \xi)-uM^N_\tau M^T_\tau\\ &\qquad +2u\int_0^\tau M_s^NdM_s^T\bigg) -\frac{1}{\delta^2}\bigg|\bigg(y+\sqrt{u}M^T_{\tau},z-\sqrt{u}(M^N_\tau y-M^T_\tau \xi)-{u}M^N_\tau M^T_\tau+2u\int_0^\tau M_s^NdM_s^T\bigg)\bigg|^3\bigg). \end{align*} Here recall the notation $|(y,z)|=\sqrt{y^2+|z|}$. Furthermore since $|(y_1+y_2,z_1+z_2)|^3\le 8|(y_1,z_1)|^3+8|(y_2,z_2)|^3$ for any $y_1, y_2, z_1, z_2 \in\mathbb R$, we have \begin{align}\label{eq-D-W} D(s,r,t)&\le\int_0^\infty d\xi\int_{-\infty}^{\infty}dy \int_{-\infty}^\infty dz \int_0^t du\, \chi_t(\xi, y, z, u){\mathbbm{1}}_{\xi-r+\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2+k_1(s) z<\frac{1}{\delta^2}|(y,z)|^3}\nonumber \\ & \quad \cdot\partial_x\bigg( \exists \tau\in [0,1]: \xi-\sqrt{u}M^N_{\tau}\ge r-\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2-k_1(s)z-\frac{8}{\delta^2} |(y,z)|^3-\sqrt{u}W(y,u,z,\tau)\bigg) \end{align} where \begin{align*} &W(y,u,z,\tau)=H_{{\partial\Omega},0}(s)|M^T_{\tau}||y|+\frac{1}{2}H_{{\partial\Omega},0}(s)\sqrt{u}|M^T_{\tau}|^2 \\ & \quad +k_1(s)\left(-(M^N_\tau y-M^T_\tau \xi)-\sqrt{u}M^N_\tau M^T_\tau+2\sqrt{u}\int_0^\tau M_s^NdM_s^T\right)\\ & \qquad +\frac{8}{\delta^2}{u}\bigg|\bigg(M^T_{\tau},-(M^N_\tau y-M^T_\tau \xi)-\sqrt{u}M^N_\tau M^T_\tau+2\sqrt{u}\int_0^\tau M_s^NdM_s^T\bigg)\bigg|^3. \end{align*} To estimate $W(y,u,z, \tau)$, we first prove the following lemma. \begin{lemma}\label{lemma-MN-MT} Let $M^N_\tau$ and $M^T_\tau$, $0\le \tau\le1$ be as before.
Then for any $0<\eta<1/2$, there exist $c, \eta'>0$ such that for any $u\in[0,t)$, \begin{equation}\label{eq-int-MT} \partial\bigg(\exists \tau\in[0,1],\,{u}^{2\eta}\bigg|\int_0^\tau M_s^NdM_s^T\bigg| >|M^T_\tau|_*\bigg)=O( e^{-c/t^{\eta'}}) \end{equation} where $|M^T_\tau|_*=\sup_{0\le s\le \tau}|M^T_s|$ is the running maximum. Moreover we have \[ \partial\bigg(\exists \tau\in[0,1],\,{u}^{2\eta}\bigg|M^N_\tau M^T_\tau-2\int_0^\tau M_s^NdM_s^T\bigg| >|M^T_\tau|_*\bigg)=O( e^{-c/t^{\eta'}}). \] \end{lemma} \begin{proof} First note \begin{align*} &\partial\bigg(\exists \tau\in[0,1],\,{u}^{2\eta}\bigg|M^N_\tau M^T_\tau-2\int_0^\tau M_s^NdM_s^T\bigg| >|M^T_\tau|_*\bigg)\\ & \quad \le \partial\bigg(\exists \tau\in[0,1],\,{u}^{2\eta}|M^N_\tau M^T_\tau|_* >\frac12|M^T_\tau|_*\bigg) \\ & \qquad +\partial\bigg(\exists \tau\in[0,1],\,2{u}^{2\eta}\bigg|\int_0^\tau M_s^NdM_s^T\bigg| >\frac12|M^T_\tau|_*\bigg) \end{align*} and \[ \partial\bigg(\exists \tau\in[0,1],\,{u}^{2\eta}|M^N_\tau |_* >\frac12\bigg)\le\partial\bigg({u}^{2\eta}|M^N_1 |_* >\frac12\bigg)=O(e^{-c/u^{4\eta}}). \] We just need to prove \eqref{eq-int-MT}. Since $\int_0^\tau M_s^NdM_s^T$ is a martingale, it can be written as a time changed Brownian motion, namely \[ \int_0^\tau M_s^NdM_s^T=\beta_{\int_0^\tau (M^N_s)^2ds}, \] where $\beta$ is an independent standard Brownian motion. Therefore \begin{align*} &\partial\bigg(\exists \tau\in[0,1],\,{u}^{2\eta}\bigg|\int_0^\tau M_s^NdM_s^T\bigg| >|M^T_\tau|_*\bigg) \\ & \quad =\partial\bigg(\exists \tau\in[0,1],\,{u}^{2\eta}\left(\int_0^\tau (M^N_s)^2ds\right)^{1/2} |\beta_1| >\sqrt{\tau}|M^T_1|_*\bigg)\\ & \qquad \le\partial\bigg(\exists \tau\in[0,1],\,{u}^{\eta}\left(\int_0^\tau (M^N_s)^2ds\right)^{1/2} >\sqrt{\tau}\bigg)+\partial\bigg(u^\eta|\beta_1|>|M^T_1|_* \bigg).
\end{align*} We easily observe that there exists $c, c', \eta'>0$ such that \[ \partial\bigg(u^\eta|\beta_1|>|M^T_1|_* \bigg)\le \partial(|\beta_1|>u^{-\eta/2})+\partial(|M_1^T|_*<u^{\eta/2})\le c\left(e^{-\frac{c'}{t^{\eta'}}}\right). \] On the other hand, if we denote the Brownian meander of length $\frac{1}{\tau}$ by $m^{1/\tau}_s$, namely, \[ m^{1/\tau}_s=\frac{1}{\sqrt{\tau}}M_{s\tau}^N, \quad s\in [0,1/\tau], \] then we know that $m^{1/\tau}_{1/\tau}$ is Rayleigh distributed with scale parameter $\frac{1}{\sqrt{\tau}}$, and hence \[ \partial\bigg(\left|m^{1/\tau}_{1/\tau}\right|_*>R\bigg)\le 2e^{-\frac{R^2\tau}{2}} \] for any $R>0$. This implies that for any $\tau\in[0,1]$ \[ \partial\bigg(\left|m^{1/\tau}_{1/\tau}\right|_*>\frac{1}{u^\eta\tau}\bigg)\le 2e^{-\frac{1}{2\tau u^{2\eta}}}. \] Hence we have \begin{align*} &\partial\bigg(\exists \tau\in[0,1],\,{u}^{\eta}\left(\int_0^\tau (M^N_s)^2ds\right)^{1/2} >\sqrt{\tau}\bigg) \\ & \quad =\partial\bigg(\exists \tau\in[0,1],\,{u}^{\eta}\left(\tau^2\int_0^1 (m^{1/\tau}_s)^2ds\right)^{1/2} >\sqrt{\tau}\bigg)\\ & \qquad =\partial\bigg(\exists \tau\in[0,1],\,{u}^{2\eta}\int_0^1 (m^{1/\tau}_s)^2ds>\frac{1}{\tau}\bigg) \\ & \qquad \quad \le \partial\bigg(\exists \tau\in[0,1],\,{u}^{2\eta}\left|m^{1/\tau}_{1/\tau}\right|_*>\frac{1}{\tau}\bigg) \end{align*} and moreover \begin{align*} &\partial\bigg(\exists \tau\in[0,1],\,{u}^{2\eta}\left|m^{1/\tau}_{1/\tau}\right|_*>\frac{1}{\tau}\bigg) \\ & \quad \le\sum_{k=0}^\infty\partial\bigg(\exists \tau\in(2^{-(k+1)},2^{-k}],\,\left|m^{1/\tau}_{1/\tau}\right|_*>\frac{1}{\tau {u}^{2\eta}}\bigg)\\ & \qquad \le\sum_{k=0}^\infty \partial\bigg(\exists \tau\in(2^{-(k+1)},2^{-k}],\,\left|m^{1/\tau}_{1/\tau}\right|_*>\frac{2^k}{ {u}^{2\eta}}\bigg) \le\sum_{k=0}^\infty 2e^{-\frac{4^k}{ 2^{k+1}u^{4\eta}}}= o(e^{-\frac{c}{u^{4\eta}}}) \end{align*} for some $c>0$. Hence we complete the proof. 
\end{proof} Let $H=\max\{\max_{s\in{\partial\Omega}}(H_{{\partial\Omega},0}(s)),0\}$ and $K'_1=\max\{\max_{s\in{\partial\Omega}}k_1(s),0\}$. From Lemma \ref{lemma-MN-MT}, for any small $\eta>0$, on the set $\{|M^T_\tau|_*<\delta u^{-1/4}\}\cap\{u^{2\eta}|M^N_\tau|_*\le1 \}$ and outside an event of probability $O(e^{-c/t^{\eta'}})$ for some $c, \eta'>0$, we have \begin{align*} &\frac{8}{\delta^2}{u}\bigg|\bigg(M^T_{\tau},-(M^N_\tau y-M^T_\tau \xi)-\sqrt{u}M^N_\tau M^T_\tau+2\sqrt{u}\int_0^\tau M_s^NdM_s^T\bigg)\bigg|^3\\ &\quad\le \frac{C}{\delta^2}u\bigg(|M^T_\tau|^3+|M^T_\tau \xi|^{3/2}+(M^N_\tau)^{3/2} |y|^{3/2}+u^{3/4-3\eta}|M_\tau^T|_*^{3/2}\bigg)\\ &\qquad \le \frac{C}{\delta^2}u |y|^{3/2}(M^N_\tau)^{3/2}+{C'}u^{3/4}|M^T_\tau|_*^2+C''\left(u^{7/8}|\xi|^{3/2}+u^{5/8-3\eta}\right)|M^T_\tau|_*\\ &\qquad \quad \le C_\delta\bigg(u^{1-\eta} |y|^{3/2}M^N_\tau+u^{3/4}|M^T_\tau|_*^2+\left(u^{7/8}|\xi|^{3/2}+u^{5/8-3\eta}\right)|M^T_\tau|_*\bigg) \end{align*} where $C', C'', C_\delta>0$ are constants depending only on $\delta$. Then we have \begin{align*} W(y,u,z,\tau) &\le \bigg(H|y|+K'_1|\xi|+3K'_1u^{1/2-2\eta}+C_\delta \left(u^{7/8}|\xi|^{3/2}+u^{5/8-3\eta}\right)\bigg)|M^T_{\tau}|_*\\ & \quad +\left(\frac{1}{2}H\sqrt{u}+C_\delta u^{3/4}\right)|M^T_{\tau}|_*^2+(-k_1(s)y+C_\delta u^{1-\eta} |y|^{3/2} ) M^N_\tau. \end{align*} We denote $$ a=H|y|+K'_1|\xi|+3K'_1u^{1/2-2\eta}+C_\delta \left(u^{7/8}|\xi|^{3/2}+u^{5/8-3\eta}\right) $$ and $$ b=\frac{1}{2}H\sqrt{u}+C_\delta u^{3/4}. $$ Hence in the set $\{|M^T_\tau|_*<\delta u^{-1/4}\}\cap\{u^{2\eta}|M^N_\tau|_*\le1 \}\cap \{|-k_1(s)y+C_\delta u^{1-\eta} |y|^{3/2}|<1/2 \}$ it holds that \[ W(y,u,z,\tau)\le a|M^T_{\tau}|_*+ b|M^T_{\tau}|_*^2+\frac{1}{2}M^N_\tau.
\] Let $\gamma=u^{-1/2}\left(\xi-r+\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2+k_1(s)z+\frac{8}{\delta^2} |(y,z)|^3\right)$; then from \eqref{eq-D-W} we have \begin{align}\label{eq-int-psr} \int_0^\epsilon D(s,r,t)\,dr &\le \int_0^\epsilon dr\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u){\mathbbm{1}}_{\xi-r+\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2+k_1(s)z<-\frac{8}{\delta^2} |(y,z)|^3} \nonumber \\ &\quad \cdot\bigg[\partial_x\bigg( \exists \tau\in [0,1]: M^N_{\tau}\le 2\gamma +2a|M^T_\tau|_*+2b|M^T_\tau|_*^2 \bigg)+\partial_x\bigg( \exists \tau\in [0,1]: |M^T_\tau|_*\ge\delta u^{-1/4} \bigg) \nonumber\\ &\qquad+\partial_x\bigg( \exists \tau\in [0,1]: u^{2\eta}|M^N_\tau|_*>1\bigg) +\partial_x\bigg( \exists \tau\in [0,1]: K_1'|M^T_t|_* +C_\delta t^{1-\eta}|M^T_t|_*^{3/2}\ge1/2\bigg)\bigg] \nonumber\\ &\qquad\quad + \int_0^\epsilon dr\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u){\mathbbm{1}}_{-\frac{8}{\delta^2} |(y,z)|^3\le\xi-r+\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2+k_1(s)z<\frac{1}{\delta^2}|(y,z)|^3}. \end{align} Finally, since \begin{align*} &\int_0^\epsilon dr\int_0^\infty d\xi\int_{-\infty}^{\infty}dy \int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u){\mathbbm{1}}_{-\frac{8}{\delta^2} |(y,z)|^3\le\xi-r+\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2+k_1(s)z<\frac{1}{\delta^2}|(y,z)|^3} \\ &\quad \le \int_0^\infty d\xi\int_{-\infty}^{\infty}dy \int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u)\frac{9}{\delta^2} |(y,z)|^3\\ &\qquad \le\frac{72}{\delta^2} \mathbb E\left(|B^T_{\tau_t}|^3+|A_{\tau_t}|^{3/2} \right)=\frac{72}{\delta^2}\mathbb E(|B_{\tau_1}|^3+|A_{\tau_1}|^{3/2})t^{3/2}=O(t^{3/2}).
\end{align*} We obtain that \[ \int_0^\epsilon D(s,r,t)dr\le A_1(s,t)+A_2(s,t)+A_3(s,t)+A_4(s,t)+O(t^{3/2}), \] where \begin{align*} A_1(s,t) &= \int_0^\epsilon dr\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u){\mathbbm{1}}_{\xi-r+\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2+k_1(s)z<-\frac{8}{\delta^2} |(y,z)|^3} \\ & \cdot\partial_{(s,r)}\bigg( \exists \tau\in [0,1]: M^N_{\tau}\le 2\gamma +2a|M^T_\tau|_*+2b|M^T_\tau|_*^2 \bigg)\,, \end{align*} \begin{align*} A_2(s,t) &= \int_0^\epsilon dr\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u){\mathbbm{1}}_{\xi-r+\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2+k_1(s)z<-\frac{8}{\delta^2} |(y,z)|^3} \\ & \cdot\partial_{(s,r)}\bigg( \exists \tau\in [0,1]: |M^T_\tau|_*\ge\delta u^{-1/4} \bigg)\\ &\le \partial\bigg(|M^T_1|_*\ge\delta t^{-1/4}\bigg)\le C'e^{-\frac{C}{t^{1/2}}}\,, \end{align*} \begin{align*} A_3(s,t) &= \int_0^\epsilon dr\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u){\mathbbm{1}}_{\xi-r+\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2+k_1(s)z<-\frac{8}{\delta^2} |(y,z)|^3} \\ & \cdot\partial_{(s,r)}\bigg( \exists \tau\in [0,1]: u^{2\eta}|M^N_\tau|_*>1 \bigg)\\ &\le \partial\bigg( \exists \tau\in [0,1]: t^{2\eta}|M^N_\tau|_*>1 \bigg)=\partial\bigg(|M^N_1|_*> t^{-2\eta} \bigg)\le 2e^{-\frac{1}{2t^{4\eta}}}\,, \end{align*} and \begin{align*} A_4(s,t) &= \int_0^\epsilon dr\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u){\mathbbm{1}}_{\xi-r+\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2+k_1(s)z<-\frac{8}{\delta^2} |(y,z)|^3} \\ & \cdot{\mathbbm{1}}_{ |-k_1(s)y+C_\delta u^{1-\eta} |y|^{3/2}|\ge1/2}\\ &\le \partial\bigg( K_1'|M^T_t|_* +C_\delta t^{1-\eta}|M^T_t|_*^{3/2}\ge1/2 \bigg)\le \partial\bigg( |M^T_t|_*\ge C\bigg)=O(e^{-c/t}). \end{align*} We are left to show that $A_1(s,t)=O(t^{3/2-\eta})$ for some small $\eta>0$.
\begin{lemma}\label{lemma-meander} There exist constants $C>0$, $C'>0$ that are independent of $a, b, \gamma$ such that \[ \partial\bigg( \exists \tau\in [0,1]: M^N_{\tau}\le 2\gamma +2a|M^T_\tau|_*+2b|M^T_\tau|_*^2 \bigg)\le C(a+b)\left(1+\log^+(1/x_0)\right)e^{-C'x_0^2}, \] where $\log^+$ denotes the positive part of the logarithm and $x_0=\frac{-a+\sqrt{a^2-4\gamma b}}{2b}$. \end{lemma} \begin{proof} The proof can be found in \cite{vdbg:manifold}, Lemma 4.3 by replacing a $d$-dimensional Bessel process with $S_\tau=\sup_{0\le u\le \tau}|M^T_u|$ where $M^T_u$ is a standard Brownian motion and noting that \[ \partial\big(S_\tau\ge \xi \big)=\partial\bigg(\sup_{0\le u\le \tau}|M^T_u|\ge \xi \bigg)\le Ce^{-\xi^2/(8\tau)}. \] \end{proof} From the above lemma we have \begin{align*} A_1(s,t) &\le \int_0^\epsilon dr\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u){\mathbbm{1}}_{\xi-r+\frac{1}{2}H_{{\partial\Omega},0}(s)|y|^2+k_1(s)z<-\frac{8}{\delta^2}|(y,z)|^3} \\ & \cdot C(a+b)(1+\log^+(1/x_0))e^{-C'x_0^2}. \end{align*} By the change of variables $r\to\gamma$ we obtain \begin{align*} A_1(s,t) &\le \sqrt{t}\int_{-\infty}^0 d\gamma\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u) C(a+b)(1+\log^+(1/x_0))e^{-C'x_0^2}. \end{align*} Since $\gamma=\frac{a^2-(a+2bx_0)^2}{4b}$, by the change of variables $\gamma\to x_0$ we have \begin{align*} A_1(s,t) &\le C\sqrt{t}\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u)\\ &\cdot \int_0^\infty dx_0 (a+b)(a+2b x_0)(1+\log^+(1/x_0))e^{-C'x_0^2}\\ &=C\sqrt{t}\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u) (C_1a(a+b)+2C_2b(a+b)) \end{align*} where $C_1= \int_0^\infty (1+\log^+(1/x))e^{-C'x^2}dx$ and $C_2= \int_0^\infty x(1+\log^+(1/x))e^{-C'x^2}dx$ are positive constants.
Now since for some $C, C'>0$ and $\eta>0$ small enough we have \begin{align*} &\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u) a^2\\ &\le 2^5\left( H^2\mathbb E(|B^T_{\tau_t}|^2)+{K'_1}^2\mathbb E(|B^N_{\tau_t}|^2)+9{K'_1}^2t^{1-4\eta}+C_\delta^2t^{7/4}\mathbb E(|B^N_{\tau_t}|^{3/2})+C_\delta^2t^{5/4-6\eta}\right)=Ct^{1-4\eta}+O(t), \end{align*} \begin{align*} &\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u) ab\\ & \le \frac{1}{2}H^2\mathbb E(|B^T_{\tau_t}|)(\sqrt{t}+o(t))+\frac{1}{2}HK'_1\mathbb E(|B^N_{\tau_t}|)(\sqrt{t}+o(t))+\frac{3}{2}{K'_1}Ht^{1-\eta}+o(t)=C'\,t^{1-2\eta}+O(t) \end{align*} and \begin{align*} &\int_0^\infty d\xi\int_{-\infty}^{\infty}dy\int_{-\infty}^{\infty}dz \int_0^t du\, \chi_t(\xi, y, z, u) b^2\le \frac{1}{4}H^2t+ o(t), \end{align*} we finally obtain that $A_1(s,t)=O(t^{3/2-\eta'})$ for $\eta'>0$ as small as we want. Finally, from \eqref{eq-int-p} and \eqref{eq-P-D} we have \begin{align*} \int_{\Omega_\epsilon} \partial_x(\tau_t<T'_\Omega\le t)dx&\le(1+K'\epsilon)\int_{{\partial\Omega}}\left(\int_0^\epsilon D(s,r,t)dr\right)d\sigma_0(s)+O(t^{3/2})\\ &\le(1+K'\epsilon)\,\sigma_0({\partial\Omega})\cdot O(t^{3/2-\eta'}). \end{align*} Thus we complete the proof of Lemma \ref{lemma-main-III}. \end{proof}
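The two moment identities behind the $t$-order term of the expansion, $\mathbb E\big((B^T_{\tau_t})^2\big)=\frac{t}{2}$ and $\mathbb E(A_{\tau_t})=0$, can also be checked by direct simulation. The sketch below (sample and step counts are arbitrary choices) evaluates $B^T$ and the area-type functional $A_s=-B^N_sB^T_s+2\int_0^sB^N\,dB^T$ at the discrete argmax time of $B^N$:

```python
import numpy as np

rng = np.random.default_rng(2)
t, n_steps, n_paths = 1.0, 400, 10000
dt = t / n_steps

dN = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
dT = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
BN, BT = np.cumsum(dN, axis=1), np.cumsum(dT, axis=1)

idx = BN.argmax(axis=1)                        # discrete argmax time tau_t of B^N
rows = np.arange(n_paths)
BN_tau, BT_tau = BN[rows, idx], BT[rows, idx]

# A_s = -B^N_s B^T_s + 2 int_0^s B^N dB^T (left-point Ito sums)
ito = np.cumsum((BN - dN) * dT, axis=1)
A_tau = -BN_tau * BT_tau + 2.0 * ito[rows, idx]

print((BT_tau**2).mean(), A_tau.mean())        # ~ t/2 and ~ 0
```

Since $B^T$ is independent of $(B^N,\tau_t)$, the first average is $\mathbb E(\tau_t)=t/2$ and the second vanishes, matching the values used in \eqref{I2I3-final}.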
https://arxiv.org/abs/2210.12394
Coverings of planar and three-dimensional sets with subsets of smaller diameter
Quantitative estimates related to the classical Borsuk problem of splitting set in Euclidean space into subsets of smaller diameter are considered. For a given $k$ there is a minimal diameter of subsets at which there exists a covering with $k$ subsets of any planar set of unit diameter. In order to find an upper estimate of the minimal diameter we propose an algorithm for finding sub-optimal partitions. In the cases $10 \leqslant k \leqslant 17$ some upper and lower estimates of the minimal diameter are improved. Another result is that any set $M \subset \mathbb{R}^3$ of a unit diameter can be partitioned into four subsets of a diameter not greater than $0.966$.
\section{Introduction} This paper presents some results related to the classical Borsuk problem on partitioning of sets in $\mathbb{R}^n$ into parts of smaller diameter \cite{borsuk,borsuk1,borsuk2,borsuk3}, and also to the Nelson--Erdős--Hadwiger problem on the chromatic number of Euclidean space \cite{chrom,chrom1,chrom2,chrom3,chrom4,chrom5,chrom6}. Most of the article is devoted to the planar case. Let $F$ be a bounded set in the plane, and $k \in \mathbb{N}$. We denote by $d_k(F)$ the greatest lower bound of the set of positive real numbers $x$ with the property that $F$ can be covered by $k$ sets $F_1, F_2, \ldots , F_k$ whose diameters are at most $x$, that is, $$d_k(F) = \inf\{x \in \mathbb{R}^{+} : \exists F_{1}, \ldots, F_{k}: F \subseteq F_{1} \cup \ldots \cup F_{k}, \; \forall i \: \operatorname{diam}(F_{i}) \leqslant x \}.$$ In other words, we want to choose, among all possible coverings of the set $F$, optimal coverings consisting of sets of the smallest possible diameter. In addition, the value $d_k(F)$ does not change if we additionally require the sets to be convex and closed. Indeed, it is easy to see that the diameter of the closure of the convex hull of any set $F_i$ from the covering coincides with the diameter of $F_i$. For every positive integer $k$ we consider the values $d_k = \sup \; d_k(F)$, where the supremum is taken over all sets $F$ of unit diameter in the plane. The sequence $d_k$ is nonincreasing, since every covering by $k$ sets can be regarded as a covering by $k+1$ sets. Motivated by the classical Borsuk problem \cite{borsuk,borsuk1,borsuk2,borsuk3}, many specialists have evaluated elements of this sequence. Over the years, H. Lenz (see \cite{Lenz}), M. Dembinski and M. Lassak (see \cite{Lassak}), V. Filimonov (see \cite{Filimonov}), D. Belov and N. Aleksandrov (see \cite{Belov}), and V. Koval (see \cite{Koval}) estimated the values $d_k$ for various values of $k$.
A theoretically feasible, but extremely time-consuming approach to this problem and its generalizations was proposed in \cite{zong2021borsuk}. Moreover, Yanlu Lian and Senlin Wu explored such values for some Banach spaces (see \cite{lian2021divide}). In our previous work (see \cite{Doklady}) we significantly improved some of the previous upper bounds on the quantities $d_k$. In this paper we prove new lower and upper bounds for the elements of the sequence $d_k$. Moreover, we propose a new approach to improving upper bounds on the values $d_k$. We present our new results in the following sections, but first we introduce some definitions needed for these theorems and their proofs. In this paper, using techniques to construct universal covering sets and systems, we prove some upper bounds for the elements of the sequence $d_k$. Note that further improvements are possible, both by infinitesimal local adjustments of these partitions and by extending the covering system. Of course, this approach does not allow us to obtain exact values of $d_{k}$. \vspace{\baselineskip} \begin{definition} A set $\Omega \subset \mathbb R^2$ is called a \textit{universal covering set} if every planar set $F$ of unit diameter can be completely covered by $\Omega$ (that is, there exists a planar set $\Omega'$ congruent to $\Omega$ such that $F \subset \Omega'$ ). \end{definition} In 1920, J. Pal proved in \cite{Pal} that a regular hexagon with edge length $\frac{1}{\sqrt{3}}$ is a universal covering set. We denote this regular hexagon by $\Omega$. Next, we define a universal covering system. \begin{definition} A system of sets $S = \{ \Omega_{\alpha} \}_{\alpha \in I}$ is called a \textit{universal covering system} if every planar set $F$ of unit diameter can be completely covered by one of the sets $\Omega_{\alpha}$. Here $I$ is a (possibly infinite) set of indices.
\end{definition} \section{Main results} In this part of the paper we present a table of improved results for the first 17 elements of the sequence $d_k$. In Table 1, the column titled ``Comment'' indicates by how many percent the gap between the upper and lower bounds has decreased as a result of the improvements proposed in this paper. The column titled ``UCS'' shows the universal covering system used to prove the indicated upper bound on $d_k$ for the corresponding value of $k$. We denote by $\underline{d}_k^{old}$ and $\underline{d}_k^{new}$ the previously known and the new lower bounds, and by $\overline{d}_k^{old}$ and $\overline{d}_k^{new}$ the previously known and the new upper bounds. All constants in the table are given to four decimal places. \setlength{\tabcolsep}{8pt} \renewcommand{\arraystretch}{0.95} \begin{center} \begin{table}[H] \caption{Old and new estimates} \label{results1} \begin{tabular}{ |c|c|c||c|c||c|c| } \hline $k$ & $\underline{d}_k^{new}$ & $\underline{d}_k^{old}$ & $\overline{d}_k^{old}$ & $\overline{d}_k^{new}$ & Comment & UCS \\ \hline\hline 1 & - & 1.0000 & 1.0000 & - & tight & -\\ \hline 2 & - & 1.0000 & 1.0000 & - & tight & -\\ \hline 3 & - & 0.8660 & 0.8660 & - & tight & -\\ \hline 4 & - & 0.7071 & 0.7071 & - & tight & -\\ \hline 5 & - & 0.5877 & 0.5953 & - & - & - \\ \hline 6 & - & 0.5051 & 0.5343 & - & - & -\\ \hline 7 & - & 0.5000 & 0.5000 & - & tight & -\\ \hline 8 & - & 0.4338 & 0.4456 & - & - & -\\ \hline 9 & - & 0.3826 & 0.4047 & - & - & -\\ \hline 10 & 0.3665 & 0.3420 & 0.4012 & 0.3896 & 61\% & $S_{10}$ \\ \hline 11 & 0.3535 & 0.3333 & 0.3942 & 0.3732 & 68\% & $S_{10}$ \\ \hline 12 & 0.3420 & 0.3333 & 0.3660 & 0.3532 & 66\% & $S_{10}$ \\ \hline 13 & - & 0.3333 & 0.3550 & 0.3419 & 60\% & $S_{10}$ \\ \hline 14 & - & 0.3090 & 0.3324 & 0.3263 & 26\% & $S_{10}$ \\ \hline 15 & - & 0.2928 & 0.3226 & 0.3130 & 32\% & $S_{10}$ \\ \hline 16 & - & 0.2817 & 0.3191 & 0.3035 & 42\% & $S_{10}$ \\ \hline 17 &
0.2701 & 0.3010 & 0.2967 & 14\% & $S_{10}$ \\ \hline \end{tabular} \end{table} \end{center} \section{Upper bounds} \subsection{Improvements to the upper estimates and the construction of the UCS} To prove that $d_k \leqslant \rho$, where $\rho$ is some fixed number, we consider a universal covering system $S$ and divide each of its covering sets into $k$ parts, each of diameter at most $\rho$. As noted in the introduction, any set of diameter 1 can be covered by $\Omega$, i.e., a regular hexagon with unit distance between opposite sides. \begin{figure}[htb!] \center{\includegraphics[scale=0.4]{img2.pdf}} \caption{The set $M$ and its division into the sets $m_1$, $m_2$, $m_3$} \label{cut1} \end{figure} \begin{lemma} Let $\sigma$ be a covering system. Suppose that there exist a set $M \in \sigma$ and points $A, B \in M$ such that the length of the segment $AB$ is greater than 1. Set $\delta = \frac{|AB| - 1}{2}$. The perpendiculars to the segment $AB$ drawn at distance $\frac{\delta}{2}$ from its ends divide the set $M$ into three sets $m_1$, $m_2$, $m_3$ (in Figure 1 the perpendiculars are drawn through the points $C$ and $D$). We assign the perpendiculars themselves to the set $m_2$ only, so the sets $m_1$ and $m_3$ do not contain them. Set $M_1 = m_1 \cup m_2$ and $M_2 = m_2 \cup m_3$. Then $(\sigma \setminus \{M\}) \cup \{M_1, M_2\}$ is also a covering system. \end{lemma} \textbf{Proof:} We want to show that if a set $N$ of unit diameter can be covered by the set $M$, then $N$ can be covered by the set $M_1$ or by the set $M_2$. Suppose that, for some such covering, at least one point of $N$ lies in $m_1$. Then $m_3$ cannot contain points of $N$: the distance between any point of $m_1$ and any point of $m_3$ is strictly greater than 1, since the strip $m_2$ separating them has width $|AB| - \delta = \frac{|AB|+1}{2} > 1$, and this would contradict the fact that the diameter of $N$ is at most 1. Hence the set $M_1$ covers the set $N$.
If $m_1$ contains no points of $N$, then $M_2$ covers $N$. This proves Lemma 1. \qed Now, using the lemma, we pass from the single set $\Omega$ to a system of two covering sets. Each of the main diagonals of $\Omega$ has length $\frac{2}{\sqrt{3}}$, which is greater than 1, so by the lemma one can cut the corresponding parts (in this case, triangles) off the hexagon along each of its three main diagonals. \begin{figure}[htb!] \center{\includegraphics[scale=0.4]{omega12.pdf}} \caption{$\Omega_1, \Omega_2$ (the dotted line marks the parts cut off from $\Omega$)} \label{cut_hex} \end{figure} After discarding congruent copies, we obtain a universal covering system containing two sets: in the first set the vertices from which the triangles are cut off alternate (we call it $\Omega_1$); in the second they are consecutive (we call it $\Omega_2$). \begin{figure}[!htb] \center{\includegraphics[scale=0.25]{S10_UCS.pdf}} \caption{Shapes used in UCS $S_{10}$} \label{UCS10} \end{figure} Note that after cutting off the triangles, the remaining segment of the diagonal (of length $\frac{1}{2} + \frac{1}{\sqrt{3}}$) is still greater than $1$. Therefore, the lemma allows us to cut further parts off $\Omega_1$ and $\Omega_2$, reducing the lengths of the diagonals by half the remaining length (by $h = \frac{1}{4} + \frac{1}{2\sqrt{3}}$). There are two clipping options for each diagonal: in the first case we cut off a triangle of height $h$, and in the second a trapezoid of height $h$. Discarding congruent copies, we obtain a universal covering system consisting of 10 sets (4 sets are obtained by cutting pieces off $\Omega_1$, the other 6 by cutting pieces off $\Omega_2$).
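The lengths appearing in this construction can be checked numerically. The following sketch (illustration only) verifies the quantities stated above:

```python
import math

a = 1 / math.sqrt(3)                  # edge length of Pal's hexagon Omega
width = math.sqrt(3) * a              # distance between opposite sides of Omega
diag = 2 * a                          # length of a main diagonal of Omega
remaining = 0.5 + 1 / math.sqrt(3)    # diagonal length stated after the first cuts
h = 0.25 + 1 / (2 * math.sqrt(3))     # height of the second round of cuts

assert abs(width - 1.0) < 1e-12       # unit width: why Omega covers unit-diameter sets
assert diag > 1                       # Lemma 1 applies along each main diagonal
assert remaining > 1                  # so the lemma can be applied a second time
assert abs(h - remaining / 2) < 1e-12 # h is half of the remaining diagonal length
```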
We denote by $S_{10} = \{\Omega_{11}, \Omega_{12}, \Omega_{13}, \Omega_{14}, \Omega_{21}, \Omega_{22}, \Omega_{23}, \Omega_{24}, \Omega_{25}, \Omega_{26}\}$ the constructed universal covering system of 10 sets. Using this UCS, we obtain new upper bounds for $d_{10}, \dots, d_{17}$. All the necessary partitions of the sets from the UCS are given in \cite{github}; they were found by an optimization algorithm. The proposed method can be applied without any significant changes to the search for partitions in higher dimensions. Let us denote by $d_{3,4}$ the minimal diameter of a part for which every three-dimensional set of unit diameter can be partitioned into four parts of that diameter. The previously known estimate $d_{3,4} \leqslant 0.98$ was obtained by L. Evdokimov by partitioning a truncated rhombic dodecahedron $\Omega_{TRD}$ (Fig. \ref{fig:TRD1}); V.~V.~Makeev proved that this polyhedron is a universal covering set in the three-dimensional case \cite{makeev2000}. Using the optimization algorithm described in the next section, we obtained a slightly better estimate $$d_{3,4} \leqslant 0.966.$$ Note that here we consider a \emph{covering} of $\Omega_{TRD}$ by several convex polyhedra, which cannot be directly reduced to a \emph{partition} into convex polyhedra. If we consider only partitions of $\Omega_{TRD}$ into four convex polyhedra of smaller diameter by six planes passing through a common point, the estimate is slightly worse, $d_{3,4} \leqslant 0.9755$, close to the result presented in \cite{makeev2000}. These partitions are shown in Fig. \ref{fig:TRD1} and Fig. \ref{fig:TRD2}; the coordinates of the vertices are available in the repository \cite{github}. \subsection{Description of the algorithm} The idea of the proposed algorithm is to repeatedly generate an initial partition into polygons and then minimize the maximum diameter of the obtained parts.
We assume that, while solving the optimization problem, the combinatorial structure of the partition remains unchanged. Thus we are looking for a local minimum of a piecewise smooth nonconvex function under linear constraints. To find a local minimum we use the Adam algorithm, which was proposed in \cite{adam} and is now widely used in machine learning. Adam is one of the extensions of stochastic gradient descent; its high convergence rate and stability are achieved by an adaptive learning rate for each parameter, based on estimates of the mean and the variance of the gradient. Most theoretical results for algorithms of this type are obtained under assumptions of differentiability and convexity of the minimized function, but in machine learning problems the objective function is usually non-smooth, for example when training a neural network with the ReLU activation function \cite{adam}. Note that in our case the minimized function is not stochastic; we use only those properties of the Adam algorithm that allow us to efficiently solve non-smooth high-dimensional optimization problems. In this paper we do not prove any convergence statements and present only numerically found local minima. In the numerical calculations we used a penalty method and the implementation of the Adam algorithm from the PyTorch package \cite{adam}. Let some initial partition of an $m$-gon $\Omega$ into polygons $\Omega = F_1 \cup F_2 \cup \dots \cup F_k$ be chosen, and let $X = \{x_1, \dots , x_r \}$ be the vertex set of the partition. The vertices of the polygon $F_i$ are the points $x_j$, $j \in \mathcal{J}_i$. The condition $(x,c)+b=0$, $\|c\|=1$, specifies that the point $x$ belongs to the line given by the unit normal vector $c$ and the coefficient $b$. For an interior point $y \in \operatorname{Int} \Omega$ the value of the linear function $f(y) = (y,c)+b >0$ is the distance to the line.
We assume that the incidences between the vertices and the lines bounding $\Omega$ are prescribed by some set of index pairs $E \subseteq \{1, \dots, r \} \times \{1, \dots , m\}$. Then the search for a local minimum in the problem \begin{align} \label{opt1} \varphi(X) = \max_{i} \max_{p,q \in \mathcal{J}_i} \| x_p - x_q \| \rightarrow \min, \\ (x_s, c_t) +b_t = 0, \quad (s,t) \in E \nonumber \end{align} \noindent can be performed by standard methods for non-smooth optimization problems. Generally speaking, we should also require the conditions \[ (x_s, c_t) +b_t \geq 0, \quad 1 \leq s \leq r, \quad 1 \leq t \leq m, \] meaning that each vertex $x_s$ belongs to $\Omega$. But if we assume that the vertices $x_s$ change rather little during the optimization of the partition, we may omit these conditions and check them after a local minimum has been found. As the initial guess we used the Voronoi diagram constructed for a random (rough) approximate solution of the problem of packing $k$ equal circles of maximum diameter in $\Omega_i$. The corresponding optimization problem is written as follows: \begin{equation} \label{opt2} \psi(V) = \min \left\{ \min_{p, t} \{ (v_p, c_t) + b_t \}, \; \frac{1}{2}\min_{p,q}\{ \| v_p - v_q \| \} \right\} \rightarrow \max. \end{equation} \begin{figure} \centering \begin{tabular}{p{3cm} p{3cm} p{3cm}} \input{alg1.tikz} & \input{alg2.tikz} & \input{alg3.tikz}\\ \end{tabular} \caption{A rough approximation for dense circle packing (a), the Voronoi diagram (b), and the final partition (c).} \label{alg_example} \end{figure} Here the centers of the circles are given by the set $V= \{v_1, \dots , v_k \}$. The generation of the Voronoi diagram is performed many times, and in each case a local minimum in the problem (\ref{opt1}) is computed. The sequence of computations is shown in Fig. \ref{alg_example}. In cases where the global minimum in the problem (\ref{opt1}) is known, the presented Algorithm \ref{alg_part} finds it rather quickly.
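The two objectives can be sketched for a toy configuration. In the sketch below, parts are encoded as index lists $\mathcal{J}_i$ and boundary lines as pairs $(c, b)$ with inward unit normals, following the notation above; the concrete unit-square example is ours, for illustration only:

```python
import math

def phi(X, parts):
    """Objective (1): maximum diameter over the parts; parts[i] lists the
    indices in X of the vertices of the polygon F_i."""
    return max(
        max(math.dist(X[p], X[q]) for p in J for q in J) for J in parts
    )

def psi(V, lines):
    """Objective (2): minimum of the distances from the centers to the
    boundary lines ((v, c) + b with inward unit normal c) and of half the
    pairwise distances between the centers."""
    to_lines = min(vx * cx + vy * cy + b for (vx, vy) in V for (cx, cy), b in lines)
    between = min(
        math.dist(V[i], V[j]) / 2 for i in range(len(V)) for j in range(i + 1, len(V))
    )
    return min(to_lines, between)

# Unit square split into two congruent rectangles by the line x = 1/2:
X = [(0, 0), (0.5, 0), (1, 0), (1, 1), (0.5, 1), (0, 1)]
parts = [[0, 1, 4, 5], [1, 2, 3, 4]]
assert abs(phi(X, parts) - math.sqrt(1.25)) < 1e-12  # each half has diameter sqrt(5)/2

# Two circle centers in the unit square (inward unit normals):
square = [((1, 0), 0.0), ((-1, 0), 1.0), ((0, 1), 0.0), ((0, -1), 1.0)]
assert abs(psi([(0.25, 0.5), (0.75, 0.5)], square) - 0.25) < 1e-12
```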
On the other hand, in the general case we cannot claim that the optimal partition is found with positive probability. \begin{algorithm}[h!] \caption{Stochastic search for a sub-optimal partition of a polygon (or a polyhedron) $\Omega$} \label{alg_part} \begin{algorithmic} \STATE Input: $\Omega$, $k$ \FOR{$1 \leqslant i \leqslant N$} \STATE $V_0 = \{v_{1}, \dots , v_k \}$ are random points distributed uniformly in $[-1,1]^n$; \COMMENT{Run $s$ steps of Adam optimizer for the problem (\ref{opt2})} \FOR{$1 \leqslant j < s$} \STATE $V_{j+1} \gets \operatorname{Adam}\left(\psi(V)\to \max ; V_j\right)$; \ENDFOR \COMMENT{Initialize a set of partition vertices with a Voronoi diagram} \STATE $X_0 \gets \operatorname{Vor}(\Omega; V_s)$; \STATE $t \gets 0$; \COMMENT{Find the local minimum for the problem (\ref{opt1})} \WHILE{not $\operatorname{OptCondition}(X_t)$} \STATE $X_{t+1} \gets \operatorname{Adam}\left(\varphi(X) \to \min; X_t\right)$; \STATE $t \gets t+1$; \ENDWHILE \IF{ $\varphi(X_t) < \varphi(X^*)$} \STATE $X^* \gets X_t$; \ENDIF \ENDFOR \STATE Output: $X^*$. \end{algorithmic} \end{algorithm} The appendix at the end of the article shows how the partitions into $k$ parts of all the sets from the UCS $S_{10}$ look for $10 \leqslant k \leqslant 17$. It should be noted that in the three-dimensional case the chance of obtaining a suitable initial approximation is quite small: about $3\cdot 10^4$ runs were required to find the examples shown in Fig. \ref{fig:TRD1}, \ref{fig:TRD2}. \section{Lower bounds} \subsection{General scheme} We prove the lower bound for the disk $D$ of unit diameter. When a disk is covered by $k$ sets, the sets are of two types: those that have at least two common points with the boundary circle (\textit{extreme sets}), and the rest (\textit{central sets}). Let there be $e$ extreme and $c$ central sets; obviously, $c + e = k$.
The general scheme of the proof consists in analyzing the cases according to the numbers of central and extreme sets; as a rule, only the cases $c = 1$ and $c = 2$ remain meaningful. In each of these cases the length of a certain segment $Q_2Q_j$ lying in a central set is estimated. The estimate is proved by introducing angle parameters $\alpha, \beta, \gamma$; that is, the length of the segment $Q_2Q_j$ is considered as a function $f(\alpha, \beta, \gamma)$. First it is proved that, for fixed $\gamma$ and fixed $\delta = \gamma + 2\alpha + 2\beta$, the minimum of the function $f$ is attained at $\alpha = \beta$. Next it is proved that for $\alpha = \beta$ (and fixed $\delta$) the larger $\gamma$ is, the smaller the value of $f$; hence for the lower bound we must take the maximum possible value $\gamma = \gamma_{\max}$. Finally, it is proved that for $\gamma = \gamma_{\max}$, $\alpha = \beta$, the function $f$ is minimal at the minimum possible value of $\delta$, that is, at the minimum possible values $\alpha = \beta = \alpha_{\min}$. As a result, the length of the segment $Q_2Q_j$ is bounded from below by $f(\alpha_{\min}, \alpha_{\min}, \gamma_{\max})$. The parameters for the various cases of $k$ and $c$ are collected in Table \ref{results2}.
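In the complete proof below, $Q_2$ and $Q_j$ arise as intersection points of circles of radius $\sigma_k$ centered at points on the boundary of $D$. Under the assumption that each $Q_i$ is the intersection point closer to the center $O$ (so that it lies on the bisector of the corresponding central angle), the function $f$ admits a closed form via the law of cosines. The following sketch is our own numerical check, not part of the proof; it reproduces two of the extreme values computed below and illustrates the claim that, for fixed $\gamma$ and $\delta$, the minimum is attained at $\alpha = \beta$:

```python
import math

def rho(alpha_deg, sigma, R=0.5):
    # Distance from the center O to Q, where Q lies on the bisector of the
    # central angle 2*alpha spanned by P and P', with |QP| = sigma and
    # |OP| = R (law of cosines, taking the intersection point closer to O).
    a = math.radians(alpha_deg)
    return R * math.cos(a) - math.sqrt(sigma ** 2 - (R * math.sin(a)) ** 2)

def f(alpha_deg, beta_deg, gamma_deg, sigma):
    # dist(Q_2, Q_j): law of cosines in the triangle Q_2 O Q_j with
    # central angle alpha + gamma + beta.
    r2, rj = rho(alpha_deg, sigma), rho(beta_deg, sigma)
    ang = math.radians(alpha_deg + gamma_deg + beta_deg)
    return math.sqrt(r2 ** 2 + rj ** 2 - 2 * r2 * rj * math.cos(ang))

sigma12 = math.sin(math.pi / 9)
# k = 12, c = 1: alpha = beta = 20, gamma = 120 gives exactly sin(pi/9)
assert abs(f(20, 20, 120, sigma12) - sigma12) < 1e-9
# k = 11, c = 2: alpha = beta = 38.23, gamma = 82.82 gives about 0.4362
assert abs(f(38.23, 38.23, 82.82, 0.353553) - 0.4362) < 1e-3
# With gamma and delta fixed, alpha = beta minimizes f (step I of the proof)
assert f(15, 25, 120, sigma12) > f(20, 20, 120, sigma12)
```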
\setlength{\tabcolsep}{9pt} \renewcommand{\arraystretch}{0.95} \begin{center} \begin{table}[H] \caption{Table of extreme parameter values} \label{results2} \begin{tabular}{ |c|c|c|c|c|c|c|c| } \hline $k$ & $c$ & $e$ & $j(k, c)$ & $\alpha_{\min}$ & $\gamma_{\max}$ & $\delta_0$ & $Q_2 Q_j$ \\ \hline\hline 10 & 1 & 9 & 6 & $36^{\circ}$ & $86.92^{\circ}$ & $230.92^{\circ}$ & $0.3665$ \\ \hline 11 & 1 & 10 & 7 & $27.88^{\circ}$ & $124.23^{\circ}$ & $235.79^{\circ}$ & $0.3535$ \\ \hline 11 & 2 & 9 & 6 & $38.23^{\circ}$ & $82.82^{\circ}$ & $235.79^{\circ}$ & $0.3665$\\ \hline 12 & 1 & 11 & 7 & $20^{\circ}$ & $120^{\circ}$ & $200^{\circ}$ & $\sin(\frac{\pi}{9})$ \\ \hline 12 & 2 & 10 & 6 & $30^{\circ}$ & $80^{\circ}$ & $200^{\circ}$ & $0.3751$ \\ \hline \end{tabular} \end{table} \end{center} \subsection{Complete proof} Define $\sigma_{10} = 0.366538$, $\sigma_{11} = 0.353553$, $\sigma_{12} = \sin(\frac{\pi}{9}) = 0.342020...$ We prove that $d_{k}(D) \geqslant \sigma_k$ for $k \in \{10, 11, 12\}$. For $c = 0$, the set covering the center of the disk has diameter at least $0.5$, so we do not consider this case. For $c \geqslant 3$ we have $e \leqslant k-3$, which means that covering the boundary circle requires a diameter of at least $\sin(\frac{\pi}{k - 3}) = \begin{cases} 0.4338..., & k = 10, \\ 0.3826..., & k = 11, \\ 0.3420..., & k = 12, \end{cases} \geqslant \sigma_k.$ The remaining cases are $c = 1$ and $c = 2$. Fix the counterclockwise orientation of the boundary circle. Since each ``extreme'' set is closed, it has a first point on the circle with respect to this orientation, and all these first points are distinct. Denote these points (in the order of their appearance when traversing the circle) by $P_1, P_2, \ldots, P_{e}$, and denote the corresponding sets by $S_1, S_2, \ldots, S_{e}$. For convenience of notation, put $P_0 = P_{e}$ and $P_{e + 1} = P_1$. Denote the ``central'' sets by $T_{1}, \ldots, T_{c}$.
Let $T = T_{1} \cup \ldots \cup T_{c}$ be the union of the ``central'' sets. \begin{figure}[htb] \center{\includegraphics[scale=0.20]{lowerA.pdf}} \caption{To the proof of the lower bounds for $d_{10}$, $d_{11}$, $d_{12}$ (example for $e = 9$)} \label{lowerA} \end{figure} Denote by $Q_i$ the common point of $D$ and of the circles of radius $\sigma_k$ centered at $P_{i-1}$ and $P_{i + 1}$ (see Figure \ref{lowerA}). Since the sets are closed, we may assume that $Q_i \in T$. \vspace{\baselineskip} Our goal is to prove the following fact: \begin{equation} \operatorname{dist}(Q_2, Q_{j}) \geqslant \sigma_k, \label{fact1} \end{equation} where $j = j(k, c) = \begin{cases} 6, \ \ k=10, c=1\\ 7, \ \ k=11, c=1\\ 6, \ \ k=11, c=2 \\ 7, \ \ k=12, c=1 \\ 6, \ \ k=12, c=2 \end{cases}$ \vspace{\baselineskip} \begin{lemma} Up to renumbering, $Q_2$ and $Q_j$ lie in the same central set. \label{centralset} \end{lemma} \textbf{Proof:} In the case $c = 1$, all the points $Q_1,\ldots, Q_e$ belong to the single central set, so the lemma holds. In the case $c = 2$, some of the points belong to $T_1$ and some to $T_2$. We show that the points can be renumbered so that $Q_2$ and $Q_j$ belong to the same central set. For $k = 11$, $c = 2$, we have $j = 6$. Note that the graph $G = (V, E)$ with $V = \{1, \ldots, 9\}$ and $E = \{\{x, y\} : x - y \equiv 4 \pmod 9\}$ is not bipartite; hence, for any division of its vertices into two sets, some set contains two numbers $x, y$ with $x - y \equiv 4 \pmod 9$. Renumber the vertices so that $y = 2$ and $x = 6 = j$. For $k = 12$, $c = 2$, we have $j = 6$. The graph $G = (V, E)$ with $V = \{1, \ldots, 10\}$ and $E = \{\{x, y\} : x - y \equiv 4 \pmod {10}\}$ is likewise not bipartite, so for any division of its vertices into two sets, some set contains two numbers $x, y$ with $x - y \equiv 4 \pmod {10}$. Renumber the vertices so that $y = 2$ and $x = 6 = j$.
\qed \vspace{\baselineskip} It follows from Lemma \ref{centralset} and (\ref{fact1}) that \[ d_{k} (D) \geqslant \max(\operatorname{diam}(T_1), \ldots, \operatorname{diam}(T_c)) \geqslant \operatorname{dist} (Q_2, Q_j) \geqslant \sigma_k,\] which is exactly what we need to prove. \vspace{\baselineskip} The position of $Q_2$ is uniquely determined by the points $P_1$ and $P_3$, and the position of $Q_j$ is uniquely determined by the points $P_{j-1}$ and $P_{j + 1}$. Let $\alpha := \angle{Q_2 O P_3}$, $\beta := \angle{Q_{j} O P_{j + 1}}$, $\gamma := \angle{P_3 O P_{j-1}}$. Clearly, $\angle{P_1 O Q_2} = \alpha$ and $\angle{P_{j-1} O Q_{j}} = \beta$. Let $\delta = \angle{P_1 O P_{j + 1}} = 2\alpha + \gamma + 2\beta$. \begin{figure}[htb] \center{\includegraphics[scale=0.20]{lowerB.pdf}} \caption{To the proof of the lower bounds for $d_{10}$, $d_{11}$, $d_{12}$} \label{lowerB} \end{figure} Note that $\operatorname{dist}(Q_2, Q_{j})$ is a function $f (\alpha, \beta, \gamma)$ that depends only on the angles $\alpha$, $\beta$, $\gamma$. We show that this function attains its minimum at $\alpha_0 = \beta_0 = \frac{\delta_0 - \gamma_{\max}}{4}$ and $\gamma_{\max} = (2j - 8) \cdot \operatorname{arcsin}(\sigma_k)$. Note that the arcs $P_{i}P_{i + 1}$ cannot exceed $ 2 \cdot \operatorname{arcsin}(\sigma_k)$, so we have the following constraints on the angles: $$ \gamma \leqslant \gamma_{\max} = \gamma_{\max}(k, c) = (2j - 8) \cdot \operatorname{arcsin}(\sigma_k) = \begin{cases} 179.09^{\circ}, k=10, c=1\\ 124.23^{\circ}, k=11, c=1\\ 82.82^{\circ}, k=11, c=2\\ 120^{\circ}, k=12, c=1 \\ 80^{\circ}, k=12, c=2 \end{cases}$$ $$\delta \geqslant \delta_0 = \delta_0(k, c) = 2\pi - 2(k - c - j) \cdot \operatorname{arcsin}(\sigma_k) = \begin{cases} 230.92^{\circ}, k = 10,c=1\\ 235.79^{\circ}, k = 11, c=1\\ 235.79^{\circ}, k = 11, c=2\\ 200^{\circ}, k=12, c=1\\ 200^{\circ}, k=12, c=2 \end{cases}$$ I. Let $\gamma$ and $\delta$ be fixed.
We intend to show that the value of $\operatorname{dist} (Q_2, Q_j)$ is smallest when $\alpha = \beta$. Since $\delta$ is fixed, we may assume that the positions of the points $P_1$ and $P_{j + 1}$ are fixed. Without loss of generality, let $\alpha \leqslant \beta$. We denote by $P'_3$, $P'_{j - 1}$, $Q'_2$, $Q'_{j}$ the positions of the points $P_3$, $P_{j-1}$, $Q_2$, $Q_{j}$ for $\alpha = \beta$, respectively. \begin{figure}[htb] \center{\includegraphics[scale=0.20]{lowerC.pdf}} \caption{To the proof of the lower bounds for $d_{10}$, $d_{11}$, $d_{12}$} \label{fig:image_lbound} \end{figure} Denote by $a$ the line passing through $Q'_2$ and $Q'_{j}$, and let $m$ be the line perpendicular to $a$ with $O \in m$. Let $Q_S$ be the point symmetric to $Q_2$ with respect to $m$. Denote the projections of the points $Q_2$, $Q_{j}$, $Q_S$ onto the line $a$ by $K_2$, $K_{j}$, $K_S$, respectively. Note that since $\gamma$ and $\delta$ are fixed, $\angle{Q_2 O Q_{j}} = \alpha + \gamma + \beta = \frac{\gamma + \delta}{2} = \angle{Q'_2 O Q'_{j}}$. Therefore, $\angle{Q_2 O Q'_2} = \angle{Q_{j} O Q'_{j}} = \angle{Q'_{j} O Q_S}$ (the last equality holds by the symmetry of the angles with respect to $m$). Denote by $A$ the intersection point of the line $O Q_{j}$ and the line $Q_S Q'_{j}$. Consider the triangle $O Q_S A$. In it, $OQ'_{j}$ is a bisector, and since $\delta$ is bounded from below by $200^{\circ}$, it can be shown that $\angle{O Q_S A} > 90^{\circ}$, which means that $Q_{j} Q'_{j} > A Q'_{j} > Q'_{j}Q_S$. We also have $\angle{K_{j} Q'_{j} Q_{j}} < \angle{Q_S Q'_{j} K_S}$. Therefore, $Q'_{j} K_S = Q'_{j}Q_S \cos(\angle{Q_S Q'_{j} K_S}) < Q'_{j} Q_{j} \cos(\angle{K_{j} Q'_{j} Q_{j}}) = Q'_{j} K_{j}$. Hence, $K_2 Q'_2 = K_S Q'_{j} < Q'_{j} K_{j} $. As a result, we obtain the required inequality $Q_2 Q_{j} > K_2 K_{j} = K_2 Q'_{j} + Q'_{j}K_{j} > K_2Q'_{j} + K_2Q'_2 = Q'_{j} Q'_2$. This means that the value of $\operatorname{dist} (Q_2, Q_{j})$ is minimal for $\alpha = \beta$. \vspace{\baselineskip} II.
Now let $\alpha = \beta$ and let $\delta$ be fixed. Note that as $\gamma$ decreases, the value of $\angle{Q_2 O Q_{j}} = \alpha + \gamma + \beta = \frac{\gamma + \delta}{2}$ also decreases, and hence the value of $\operatorname{dist}(Q_2, Q_{j})$ increases: the points $Q_2$ and $Q_{j}$ ``shift'' along the corresponding circles in opposite directions, approaching the boundary of the disk $D$ (we assume that $P_1$ and $P_{j+1}$ are fixed at this moment). On the other hand, $\gamma \leqslant \gamma_{\max} = \gamma_{\max}(k, c)$. \vspace{\baselineskip} III. Now let $\alpha = \beta$ and $\gamma = \gamma_{\max}$ be fixed. Consider the isosceles trapezoid $Q_2P_3P_{j-1}Q_{j}$. Note that as the angle $\delta$ decreases, the angles $\angle{Q_{j}P_{j-1}P_3} = \angle{P_{j-1}P_3Q_2}$ also decrease, and therefore the length of $Q_2Q_{j}$ decreases as well. On the other hand, $\delta \geqslant \delta_0(k, c)$. \vspace{\baselineskip} Summing up, we conclude that the value of $\operatorname{dist}(Q_2, Q_{j})$ attains its minimum at the above values of the angles. Calculations show that for the specified values of $\alpha, \beta, \gamma, \delta$ we have $$f_{\min} = f(\alpha_{\min}, \alpha_{\min}, \gamma_{\max}) = \begin{cases} 0.3665..., & k=10, c=1\\ 0.3536..., & k=11, c=1\\ 0.4362..., & k=11, c=2\\ \sin(\frac{\pi}{9}), & k=12, c=1\\ 0.3751..., & k=12, c=2\\ \end{cases}\geqslant \sigma_k.$$ Finally, one of the ``central'' sets has diameter at least $\operatorname{dist}(Q_2, Q_{j}) \geqslant \sigma_k$, which is what we needed to prove. \section*{Acknowledgements} The authors would like to thank the anonymous reviewers for careful reading and for comments that helped improve the text of the article and correct a number of inaccuracies.
{ "timestamp": "2022-10-25T02:08:08", "yymm": "2210", "arxiv_id": "2210.12394", "language": "en", "url": "https://arxiv.org/abs/2210.12394", "abstract": "Quantitative estimates related to the classical Borsuk problem of splitting set in Euclidean space into subsets of smaller diameter are considered. For a given $k$ there is a minimal diameter of subsets at which there exists a covering with $k$ subsets of any planar set of unit diameter. In order to find an upper estimate of the minimal diameter we propose an algorithm for finding sub-optimal partitions. In the cases $10 \\leqslant k \\leqslant 17$ some upper and lower estimates of the minimal diameter are improved. Another result is that any set $M \\subset \\mathbb{R}^3$ of a unit diameter can be partitioned into four subsets of a diameter not greater than $0.966$.", "subjects": "Metric Geometry (math.MG)", "title": "Coverings of planar and three-dimensional sets with subsets of smaller diameter" }
https://arxiv.org/abs/0909.0361
Poisson structures compatible with the cluster algebra structure in Grassmannians
We describe all Poisson brackets compatible with the natural cluster algebra structure in the open Schubert cell of the Grassmannian $G_k(n)$ and show that any such bracket endows $G_k(n)$ with a structure of a Poisson homogeneous space with respect to the natural action of $SL_n$ equipped with an R-matrix Poisson-Lie structure. The corresponding R-matrices belong to the simplest class in the Belavin-Drinfeld classification. Moreover, every compatible Poisson structure can be obtained this way.
\section{Introduction} The notion of a Poisson bracket {\em compatible with a cluster algebra structure\/} was introduced in \cite{GSV1}. It was used to interpret cluster transformations and matrix mutations in cluster algebras from a viewpoint of Poisson geometry. In addition, it was shown in \cite{GSV1} that if a Poisson variety $\left ( \mathcal{M}, {\{\cdot,\cdot\}}\right )$ possesses a coordinate chart that consists of regular functions whose logarithms have pairwise constant Poisson brackets, then one can use this chart to define a cluster algebra ${\mathcal A}_{\mathcal{M}}$, which is closely related (and, under rather mild conditions, isomorphic) to the ring of regular functions on $\mathcal{M}$, and such that ${\{\cdot,\cdot\}}$ is compatible with ${\mathcal A}_{\mathcal{M}}$. This construction was applied to an open cell $G^0_k(n)$ in the Grassmannian $G_k(n)$ viewed as a Poisson homogeneous space under the action of $SL_n$ equipped with the {\em standard\/} Poisson-Lie structure. The resulting cluster algebra ${\mathcal A}_{G^0_k(n)}$ can be viewed as a restriction of the cluster algebra structure in the coordinate ring of $G_k(n)$. This ``larger'' cluster algebra ${\mathcal A}_{G_k(n)}$ was described in \cite{Scott} using combinatorial properties of Postnikov's map from the space of edge weights of a planar directed network into the Grassmannian \cite{Postnikov}. Poisson geometric properties of Postnikov's map are studied in \cite{GSV3}. One of the by-products of this study is the existence of a {\em pencil\/} of Poisson structures on $G_k(n)$ compatible with ${\mathcal A}_{G^0_k(n)}$. 
It turns out that every bracket in the pencil defines a Poisson homogeneous structure with respect to a {\em Sklyanin\/} Poisson-Lie bracket associated with a solution of the {\em modified classical Yang-Baxter equation (MCYBE)\/} of the form $R_t = R_0 + t A \pi_0$, where $t$ is a scalar parameter, $A$ is a certain fixed skew-symmetric $n\times n$ matrix, $R_0 = \pi_+ - \pi_-$, and $\pi_{\pm, 0}$ are the projections onto the strictly upper triangular / strictly lower triangular / diagonal part of $sl_n$ (the standard Poisson-Lie structure corresponds to $t=0$). According to the {\em Belavin-Drinfeld classification\/} \cite{BD}, skew-symmetric solutions of MCYBE are defined by two types of data: discrete data associated with the Dynkin diagram and called the {\em Belavin-Drinfeld triple\/} and continuous data associated with the Cartan subalgebra. We will say that two R-matrices belong to the same class if the corresponding Belavin-Drinfeld triples are the same. R-matrices $R_t$ mentioned above belong to the class associated with the trivial Belavin-Drinfeld triple. The entire class consists of R-matrices of the form $R_S = R_0 + S \pi_0$, with $S$ arbitrary skew-symmetric. On the other hand, the Poisson pencil described above does not exhaust all Poisson structures compatible with ${\mathcal A}_{G^0_k(n)}$. The main goal of this paper is to prove \begin{theorem} \label{mainintro} The Poisson homogeneous structure with respect to the Poisson-Lie bracket associated with {\em any\/} $R_S$ is compatible with ${\mathcal A}_{G^0_k(n)}$. Moreover, up to a scalar multiple, all Poisson brackets compatible with ${\mathcal A}_{G^0_k(n)}$ are obtained this way. \end{theorem} It should be noted that Poisson brackets compatible with the ``larger'' cluster algebra ${\mathcal A}_{G_k(n)}$ are naturally associated with Poisson structures on the {\em Grassmann cone\/} $\mathcal{C} (G_k(n))$ that can be realized as one-dimensional extensions of corresponding structures on $G_k(n)$.
Both the formulation and the proof of Theorem \ref{mainintro} can be modified in a rather straightforward way to the case of the cluster algebra ${\mathcal A}_{G_k(n)}$. A detailed description of the relationship between ${\mathcal A}_{G_k(n)}$ and ${\mathcal A}_{G^0_k(n)}$ that this modification relies upon is presented in Chapter 4 of the forthcoming book \cite{book}. The paper is organized as follows. In Section \ref{CA&PB} we recall the necessary information on cluster algebras and compatible Poisson structures and show how the latter can be completely described via the use of a toric action. Section \ref{sklya} provides the background on Poisson-Lie groups and Sklyanin brackets. Finally, in Section \ref{grass} we review Poisson homogeneous structures on $G_k(n)$ and the construction of ${\mathcal A}_{G^0_k(n)}$, and then proceed to prove Theorem \ref{mainintro}, which is re-stated there as Theorem \ref{main}. \section{Cluster algebras and compatible Poisson brackets} \label{CA&PB} We start with the basics on cluster algebras of geometric type. The definition that we present below is not the most general one; see, e.g., \cite{FZ2, CAIII} for a detailed exposition. In what follows, we will use the notation $[i,j]$ for an interval $\{i, i+1, \ldots , j\}$ in $\mathbb{N}$, and write $[n]$ instead of $[1, n]$. The {\em coefficient group\/} ${\mathfrak P}$ is a free multiplicative abelian group of a finite rank $m$ with generators $g_1,\dots, g_m$. An {\em ambient field\/} is the field ${\mathfrak F}$ of rational functions in $n$ independent variables with coefficients in the field of fractions of the integer group ring ${\mathbb Z}{\mathfrak P}={\mathbb Z}[g_1^{\pm1},\dots,g_m^{\pm1}]$ (here we write $x^{\pm1}$ instead of $x,x^{-1}$).
A {\em seed\/} (of {\em geometric type\/}) in ${\mathfrak F}$ is a pair $\Sigma=({\bf x},\widetilde{B})$, where ${\bf x}=(x_1,\dots,x_n)$ is a transcendence basis of ${\mathfrak F}$ over the field of fractions of ${\mathbb Z}{\mathfrak P}$ and $\widetilde{B}$ is an $n\times(n+m)$ integer matrix whose principal part $B$ (that is, the $n\times n$ submatrix formed by the columns $1,\dots,n$) is skew-symmetrizable. In this paper, we will only deal with the case when $B$ is skew-symmetric. The $n$-tuple ${\bf x}$ is called a {\em cluster\/}, and its elements $x_1,\dots,x_n$ are called {\em cluster variables\/}. Denote $x_{n+i}=g_i$ for $i\in [m]$. We say that $\widetilde{{\bf x}}=(x_1,\dots,x_{n+m})$ is an {\em extended cluster\/}, and $x_{n+1},\dots,x_{n+m}$ are {\em stable variables\/}. It is convenient to think of ${\mathfrak F}$ as the field of rational functions in $n+m$ independent variables with rational coefficients. Given a seed as above, the {\em adjacent cluster\/} in direction $k\in [n]$ is defined by $$ {\bf x}_k=({\bf x}\setminus\{x_k\})\cup\{x'_k\}, $$ where the new cluster variable $x'_k$ is given by the {\em exchange relation} \[ x_kx'_k=\prod_{\substack{1\le i\le n+m\\ b_{ki}>0}}x_i^{b_{ki}}+ \prod_{\substack{1\le i\le n+m\\ b_{ki}<0}}x_i^{-b_{ki}}; \] here, as usual, the product over the empty set is assumed to be equal to~$1$. We say that ${\widetilde{B}}'$ is obtained from ${\widetilde{B}}$ by a {\em matrix mutation\/} in direction $k$ and write ${\widetilde{B}}'=\mu_k({\widetilde{B}})$ if \[ b'_{ij}=\begin{cases} -b_{ij}, & \text{if $i=k$ or $j=k$;}\\ b_{ij}+\displaystyle\frac{|b_{ik}|b_{kj}+b_{ik}|b_{kj}|}2, &\text{otherwise.} \end{cases} \] Given a seed $\Sigma=({\bf x},\widetilde{B})$, we say that a seed $\Sigma'=({\bf x}',\widetilde{B}')$ is {\em adjacent\/} to $\Sigma$ (in direction $k$) if ${\bf x}'$ is adjacent to ${\bf x}$ in direction $k$ and $\widetilde{B}'= \mu_k(\widetilde{B})$.
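The matrix mutation rule is easy to implement directly; a standard sanity check is that $\mu_k$ is an involution and preserves skew-symmetry of the principal part. A minimal sketch (0-based indices; the sample matrix is an arbitrary illustration):

```python
def mutate(B, k):
    """Matrix mutation in direction k (0-based) for an integer matrix B,
    following the mutation formula above."""
    n, m = len(B), len(B[0])
    Bp = [row[:] for row in B]
    for i in range(n):
        for j in range(m):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                # |b_ik| b_kj + b_ik |b_kj| is always even, so // is exact
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
    return Bp

B = [[0, 1, -1], [-1, 0, 1], [1, -1, 0]]    # a skew-symmetric principal part
assert mutate(mutate(B, 0), 0) == B          # mutation is an involution
M = mutate(B, 0)
assert all(M[i][j] == -M[j][i] for i in range(3) for j in range(3))  # stays skew-symmetric
```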
Two seeds are {\em mutation equivalent\/} if they can be connected by a sequence of pairwise adjacent seeds. The {\em cluster algebra\/} (of {\em geometric type\/}) ${\mathcal A}={\mathcal A}(\widetilde{B})$ associated with $\Sigma$ is the ${\mathbb Z}{\mathfrak P}$-subalgebra of ${\mathfrak F}$ generated by all cluster variables in all seeds mutation equivalent to $\Sigma$. Let ${\{\cdot,\cdot\}}$ be a Poisson bracket on the ambient field ${\mathfrak F}$ considered as the field of rational functions in $n+m$ independent variables with rational coefficients. We say that it is {\em compatible} with the cluster algebra ${\mathcal A}$ if, for any extended cluster $\widetilde{{\bf x}}=(x_1,\dots,x_{n+m})$, one has $$ \{x_i,x_j\}=\omega_{ij} x_ix_j, $$ where $\omega_{ij}\in{\mathbb Z}$ are constants for all $i,j\in[n+m]$. The matrix $\Omega^{\widetilde {\bf x}}=(\omega_{ij})$ is called the {\it coefficient matrix\/} of ${\{\cdot,\cdot\}}$ (in the basis $\widetilde {\bf x}$); clearly, $\Omega^{\widetilde {\bf x}}$ is skew-symmetric. In what follows, we denote by $A(I,J)$ the submatrix of a matrix $A$ with row set $I$ and column set $J$. Consider, along with the cluster and stable variables ${\widetilde{\bf x}}$, another $(n+m)$-tuple of rational functions denoted $\tau=(\tau_1,\dots,\tau_{n+m})$ and defined by \begin{equation}\label{eq:1.3} \tau_j=x_j^{\varkappa_j}\prod_{k=1}^{n+m} x_k^{b_{jk}}, \end{equation} where $\widehat{B}=(b_{jk})_{j,k=1}^{n+m}$ is a fixed skew-symmetric matrix such that $\widehat{B}([n], [n+m]) = \widetilde{B}$, and $\varkappa_j$ is an integer with $\varkappa_j=0$ for $1\le j\le n$. We say that the entries $\tau_i$, $i\in[n+m]$, form a \emph{$\tau$-cluster}. It is proved in \cite{GSV1}, Lemma 1.1, that $\varkappa_j$, $ n+1 \leq j \leq n+m$, can be selected in such a way that the transformation ${\widetilde{\bf x}}\mapsto\tau$ is non-degenerate, provided $\operatorname{rank} {\widetilde{B}}=n$.
We denote $\varkappa=\operatorname{diag}(\varkappa_i)_{i=1}^{n+m}$ and $B_\varkappa=\widehat{B}+ \varkappa$. Nondegeneracy of the transformation ${\widetilde{\bf x}} \mapsto \tau$ is equivalent to nondegeneracy of $B_\varkappa$. Recall that a square matrix $A$ is {\it reducible\/} if there exists a permutation matrix $P$ such that $PAP^T$ is a block-diagonal matrix, and {\it irreducible\/} otherwise. The following result is a particular case of Theorem 1.4 in \cite{GSV1}. \begin{theorem}\label{thm:1.4} Assume that $\operatorname{rank} {\widetilde{B}}=n$ and the principal part of ${\widetilde{B}}$ is irreducible. Then a Poisson bracket is compatible with ${\mathcal A}({\widetilde{B}})$ if and only if its coefficient matrix $\Omega^\tau$ in the basis $\tau$ has the following property: the submatrix $\Omega^\tau([n],[n+m])$ is proportional to ${\widetilde{B}}$. \end{theorem} Starting with an arbitrary ${\{\cdot,\cdot\}}_0^{\mathcal A}$ compatible with ${\mathcal A}$, one can give an alternative description of all other compatible Poisson brackets via the following construction. Let $C=(c_{ij})$ be an integral $(n+m)\times m$ matrix. Define an action of $({\mathbb C}^*)^m=\{ \mathbf d=(d_1,\ldots, d_m)\ : \ d_1 \cdots d_m \ne 0 \}$ on ${\widetilde{\bf x}}$ by \begin{equation} \mathbf d.{\widetilde{\bf x}}=\left ( x_i \prod_{\alpha=1}^m d_\alpha^{c_{i\alpha}}\right )_{i=1}^{n+m}. \label{toricact} \end{equation} We say that \eqref{toricact} {extends to an action of $({\mathbb C}^*)^m$ on ${\mathcal A}$} if the action induced by it in {\em any} cluster in ${\mathcal A}$ has the form \eqref{toricact} (with possibly different coefficients $c_{i\alpha}$). Lemma 2.3 in \cite{GSV1} claims that \eqref{toricact} extends to an action of $({\mathbb C}^*)^m$ on ${\mathcal A}$ if and only if ${\widetilde{B}} C = 0$. The same condition guarantees that $\tau_i(\mathbf d.{\widetilde{\bf x}})=\tau_i({\widetilde{\bf x}})$ for $i\in [n]$.
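The condition ${\widetilde{B}} C = 0$ and the resulting invariance $\tau_i(\mathbf d.{\widetilde{\bf x}})=\tau_i({\widetilde{\bf x}})$, $i\in[n]$, can be illustrated numerically. In the Python sketch below the $2\times 4$ exchange matrix and the kernel matrix $C$ are hypothetical toy data, not taken from the text:

```python
import numpy as np

# hypothetical toy data: n = 2 cluster and m = 2 stable variables
Bt = np.array([[0, 1, 1, 0],
               [-1, 0, 0, 1]])      # skew-symmetric principal part
C = np.array([[1, 0],
              [0, 1],
              [0, -1],
              [1, 0]])              # chosen so that Bt @ C = 0

def tau(x):
    # tau_j = prod_k x_k^{b_jk} for j in [n] (kappa_j = 0 there)
    return np.prod(x[None, :] ** Bt, axis=1)

rng = np.random.default_rng(7)
x = rng.uniform(0.5, 2.0, size=4)                # a generic positive point
d = rng.uniform(0.5, 2.0, size=2)                # an element of (C^*)^m
x_acted = x * np.prod(d[None, :] ** C, axis=1)   # the toric action d.x

print(bool((Bt @ C == 0).all()),
      np.allclose(tau(x_acted), tau(x)))  # True True
```

Under the action, $\tau_j$ picks up the factor $\prod_\alpha d_\alpha^{({\widetilde{B}}C)_{j\alpha}}$, which is $1$ exactly when ${\widetilde{B}}C=0$.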
Since $B_\varkappa$ is invertible, any such $C$ of full rank has the form $C=B_\varkappa^{-1}([n+m],[n+1,n+m]) U$, where $U$ is any invertible $m\times m$ matrix. Next, assume that $({\mathbb C}^*)^m$ is equipped with a Poisson structure given by $$ \{d_i, d_j\}_V = v_{ij} d_i d_j, $$ where $V=(v_{ij})$ is a fixed skew-symmetric matrix. \begin{proposition} Let ${\widetilde{B}}$ satisfy the assumptions of Theorem~\ref{thm:1.4}. Then for any skew-symmetric $m\times m$ matrix $V$, there exists a Poisson structure ${\{\cdot,\cdot\}}_V^{\mathcal A}$ compatible with ${\mathcal A}$ such that the map $\left(({\mathbb C}^*)^m \times {\mathcal A}, {\{\cdot,\cdot\}}_V \times {\{\cdot,\cdot\}}_0^{\mathcal A} \right)\to \left( {\mathcal A}, {\{\cdot,\cdot\}}_V^{\mathcal A}\right)$ extended from the action $(\mathbf d,{\widetilde{\bf x}}) \mapsto \mathbf d.{\widetilde{\bf x}}$ is Poisson. Moreover, every compatible Poisson bracket on ${\mathcal A}$ is a scalar multiple of ${\{\cdot,\cdot\}}_V^{\mathcal A}$ for some $V$. \label{allcompat} \end{proposition} \begin{proof} Let $\Omega^{{\widetilde{\bf x}}}$ be the coefficient matrix of ${\{\cdot,\cdot\}}_0^{\mathcal A}$ in the basis ${\widetilde{\bf x}}$. It is easy to see that in the product structure ${\{\cdot,\cdot\}}_V \times {\{\cdot,\cdot\}}_0^{\mathcal A}$ on $({\mathbb C}^*)^m \times {\mathcal A}$, $$ \{(\mathbf d.{\widetilde{\bf x}})_i, (\mathbf d.{\widetilde{\bf x}})_j\} =\left (\Omega^{{\widetilde{\bf x}}} + C V C^T\right)_{ij}(\mathbf d.{\widetilde{\bf x}})_i (\mathbf d.{\widetilde{\bf x}})_j. $$ Thus, for the action $(\mathbf d,{\widetilde{\bf x}}) \mapsto \mathbf d.{\widetilde{\bf x}}$ to be Poisson, one must have $\{x_i,x_j\}^{\mathcal A}_V= \left (\Omega^{\tilde{\bf x}} + C V C^T\right )_{ij} x_i x_j$ for $i,j \in [n+m]$.
Since $\tau_i(\mathbf d.{\widetilde{\bf x}})=\tau_i({\widetilde{\bf x}})$ if $i\in [n]$, and $\tau_i(\mathbf d.{\widetilde{\bf x}})=\tau_i({\widetilde{\bf x}}) m_i(\mathbf d)$ for some monomials $ m_i(\mathbf d)$ in $\mathbf d$ for $i\in [n+1, n+m]$, we see that $\{\tau_i, \tau_j\}^{\mathcal A}_V = \{\tau_i, \tau_j\}^{\mathcal A}_0$ for $i\in [n]$, $j\in [n+m]$. Since ${\{\cdot,\cdot\}}_0^{\mathcal A}$ is a compatible Poisson bracket, Theorem \ref{thm:1.4} yields that ${\{\cdot,\cdot\}}_V^{\mathcal A}$ is compatible as well. Now, let $\Omega^\tau$ be the coefficient matrix of ${\{\cdot,\cdot\}}_0^{\mathcal A}$ in the basis $\tau$. Denote $Z_0=\Omega^\tau([n+1,n+m],[n+1,n+m])$. Consider $\{\tau_i, \tau_j\}^{\mathcal A}_V$ for $i,j\in [n+1, n+m]$: $\{\tau_i, \tau_j\}^{\mathcal A}_V= z_{ij}\tau_i \tau_j$. To compute $Z=(z_{ij})_{i,j=n+1}^{n+m}$, note that the matrix that describes ${\{\cdot,\cdot\}}^{\mathcal A}_V$ in coordinates $\tau$ is $\Omega_V^\tau = B_\varkappa \left (\Omega^{{\widetilde{\bf x}}} + C V C^T\right ) B_\varkappa^T$, and thus $$ Z=\Omega_V^\tau([n+1,n+m], [n+1,n+m]) = Z_0 + U V U^{T}. $$ It is clear that by varying $V$, one can make $Z$ equal to an arbitrary skew-symmetric $m\times m$ matrix. Theorem~\ref{thm:1.4} implies that up to a scalar multiple, the matrix block $Z$ determines a compatible Poisson structure uniquely, and the result follows. \end{proof} \section{Poisson-Lie groups and Sklyanin brackets} \label{sklya} We need to recall some facts about Poisson-Lie groups (see, e.g., \cite{r-sts}). Let ${\mathcal G}$ be a Lie group equipped with a Poisson bracket ${\{\cdot,\cdot\}}$. ${\mathcal G}$ is called a {\em Poisson-Lie group\/} if the multiplication map $$ {\mathfrak m} : {\mathcal G}\times {\mathcal G} \ni (x,y) \mapsto x y \in {\mathcal G} $$ is Poisson. Perhaps the most important class of Poisson-Lie groups is the one associated with classical R-matrices. Let ${\bf g}$ be the Lie algebra of ${\mathcal G}$.
Assume that ${\bf g}$ is equipped with a nondegenerate invariant bilinear form $(\ ,\ )$. An element $R\in \operatorname{End}({\bf g})$ is a {\em classical R-matrix\/} if it is a skew-symmetric operator that satisfies the {\em modified classical Yang-Baxter equation\/} (MCYBE) \begin{equation} [R(\xi), R(\eta)] - R \left ( [R(\xi), \eta]\ + \ [\xi, R(\eta)] \right ) = - [\xi,\eta]. \label{MCYBE} \end{equation} Given a classical R-matrix $R$, ${\mathcal G}$ can be endowed with a Poisson-Lie structure as follows. Let $\nabla f$, $\nabla' f$ be the right and the left gradients for a function $f\in C^\infty({\mathcal G})$: \begin{equation} ( \nabla f ( x) , \xi ) = \frac{d} {dt} f( \exp{(t \xi)} x)\vert_{t=0},\qquad (\nabla' f ( x) , \xi ) = \frac{d} {dt} f( x\exp{(t \xi)})\vert_{t=0}. \label{rightleftgrad} \end{equation} Then the bracket given by \begin{equation}\label{Rbra} \{ f_1, f_2\} = \frac{1}{2} ( R(\nabla' f_1), \nabla' f_2 ) - \frac{1}{2} ( R(\nabla f_1), \nabla f_2 )\ \end{equation} is a Poisson-Lie bracket on ${\mathcal G}$ called the {\em Sklyanin bracket}. We are interested in the case ${\mathcal G}=SL_n$ and ${\bf g}=sl_n$ equipped with the trace-form $$ (\xi, \eta) = \operatorname{Tr} ( \xi \eta). $$ In this case, the right and left gradients~(\ref{rightleftgrad}) are $$ \nabla f(x) = x\ \mbox{grad} f (x), \qquad \nabla' f(x) = \mbox{grad} f (x)\ x, $$ where $$ \mbox{grad} f (x) = \left ( \frac{\partial f} {\partial x_{ji}} \right )_{i,j=1}^n, $$ and the Sklyanin bracket becomes \begin{multline} \{ f_1, f_2\}_{R}(x) = \\ \frac{1}{2} ( R(\mbox{grad} f_1(x)\ x), \mbox{grad} f_2(x)\ x ) - \frac{1}{2} ( R(x\ \mbox{grad} f_1(x)), x\ \mbox{grad} f_2(x) ). 
\label{sklyaSLn} \end{multline} Every $\xi \in \mathfrak g$ can be uniquely decomposed as \begin{equation}\nonumber \xi = \pi_-(\xi) + \pi_0(\xi) + \pi_+(\xi), \label{decomposition_algebra} \end{equation} where $\pi_+(\xi)$ and $\pi_-(\xi)$ are strictly upper and lower triangular and $\pi_0(\xi) $ is diagonal. The simplest classical R-matrix on $sl_n$ is given by \begin{equation} R_0 (\xi) = \pi_+(\xi) - \pi_-(\xi)= \left ( \operatorname{sign}(j-i) \xi_{ij}\right )_{i,j=1}^n. \label{standardR} \end{equation} Substituting $R=R_0$ into~(\ref{sklyaSLn}), we find the bracket for a pair of matrix entries: \begin{equation}\label{braijkl} \{ x_{ij}, x_{i'j'}\}_{R_0}=\frac{1}{2} \left ( \operatorname{sign}(i'-i) + \operatorname{sign}(j'-j)\right ) x_{ij'} x_{i'j}. \end{equation} It is known (see \cite{r-sts}) that if $R_0$ is the standard R-matrix, $S$ is any linear operator on the space of traceless diagonal matrices that is skew-symmetric with respect to the trace-form, and $\pi_0$ is the natural projection onto the subspace of diagonal matrices, then \begin{equation} \label{Rs} R_S = R_0 + S \pi_0 \end{equation} satisfies MCYBE~(\ref{MCYBE}), and thus gives rise to a Sklyanin Poisson-Lie bracket. The operator $S$ can be identified with an $n\times n$ skew-symmetric matrix whose kernel contains the vector $(1,\ldots,1)$ and thus is uniquely determined by its $(n-1)\times (n-1)$ submatrix $(s_{ij})_{i,j=1}^{n-1}$, which we will also denote by $S$. Slightly abusing notation, we denote the remaining elements of the above $n\times n$ skew-symmetric matrix by \[ s_{in}=-\sum_{j=1}^{n-1}s_{ij},\qquad s_{nj}=-\sum_{i=1}^{n-1}s_{ij}. \] The Sklyanin bracket \eqref{sklyaSLn} that corresponds to \eqref{Rs} can be written in terms of matrix entries as \begin{equation}\label{braijklRS} \{ x_{ij}, x_{i'j'}\}_{R_S}= \{ x_{ij}, x_{i'j'}\}_{R_0}+ \frac{1}{2} \left (s_{i i'} - s_{j j'} \right ) x_{ij} x_{i'j'}. 
\end{equation} Let $\mathcal H$ denote the subgroup of diagonal matrices in $SL_n$: $$ \mathcal H=\{\operatorname{diag}(d_1,\dots,d_n)\ : d_1\cdots d_n =1 \}. $$ For any skew-symmetric matrix $V=(v_{ij})_{i,j=1}^{n-1}$, define a Poisson bracket ${\{\cdot,\cdot\}}^\mathcal H_V$ on $\mathcal H$ by \begin{equation} \label{diag} \{d_i, d_j\}^\mathcal H_V = v_{ij} d_i d_j; \end{equation} here $v_{in}$ and $v_{nj}$ have the same meaning as $s_{in}$ and $s_{nj}$ above. In what follows, we denote the Poisson manifolds $\left (\mathcal H, {\{\cdot,\cdot\}}^\mathcal H_V \right )$ and $\left (SL_n, {\{\cdot,\cdot\}}_{R_S} \right )$ by $\mathcal H^{\{V\}}$ and $SL_n^{\{S\}}$, respectively. Next, for $S$ defined as in \eqref{Rs}, consider the direct product of Poisson manifolds $\mathcal H^{\{\frac 12 S\}} \times SL_n^{\{0\}} \times \mathcal H^{\{-\frac 12 S\}}$; the product structure we denote below simply by ${\{\cdot,\cdot\}}$. \begin{lemma} The map $\mathcal H^{\{\frac 12 S\}} \times SL_n^{\{0\}} \times \mathcal H^{\{-\frac 12 S\}} \to SL_n^{\{S\}}$ given by $( D_1, X, D_2) \mapsto D_1 X D_2$ is Poisson. \label{lemma} \end{lemma} \begin{proof} Denote $D_1 X D_2$ by $\widehat{X} = (\hat x_{ij})_{i,j=1}^n$. Let $D_k=\operatorname{diag}(d_{kl})_{l=1}^n$ for $k=1,2$. Then $\{ \hat x_{ij}, \hat x_{i'j'} \} = \{ \hat x_{ij}, \hat x_{i'j'} \}_{R_0} + x_{ij} x_{i'j'} \{d_{1i} d_{2j}, d_{1i'} d_{2j'}\} $. The second term is equal to \[ \frac{1}{2} ( s_{ii'} - s_{jj'}) x_{ij} x_{i'j'} d_{1i} d_{2j} d_{1i'} d_{2j'} = \frac{1}{2} ( s_{ii'} - s_{jj'}) \hat x_{ij} \hat x_{i'j'}, \] and the claim follows by \eqref{braijklRS}. \end{proof} \section{ Grassmannians} \label{grass} \subsection{} Let ${\mathcal P}$ be a Lie subgroup of a Poisson-Lie group ${\mathcal G}$. 
A Poisson structure on the homogeneous space ${\mathcal P} \backslash {\mathcal G}$ is called {\it Poisson homogeneous\/} (with respect to the Poisson-Lie structure on ${\mathcal G}$) \cite{D} if the action map ${\mathcal P} \backslash {\mathcal G} \times {\mathcal G} \to {\mathcal P} \backslash {\mathcal G}$ is Poisson. In particular, if ${\mathcal P}$ is a parabolic subgroup of a simple Lie group ${\mathcal G}$ equipped with the standard Poisson-Lie structure, then ${\mathcal P} \backslash {\mathcal G}$ is a Poisson homogeneous space. We will be interested in the case when ${\mathcal G}=SL_n$ equipped with the bracket (\ref{sklyaSLn}) and $$ {\mathcal P}={\mathcal P}_k = \left \{ \begin{pmatrix} A & 0\\ B & C \end{pmatrix}\ : A\in GL_k, C\in GL_{n-k} \right \}. $$ The resulting homogeneous space is the Grassmannian $G_k(n)$ equipped with what we will call {\em the standard Poisson homogeneous structure\/} ${\{\cdot,\cdot\}}_0^{Gr}$. We will recall an explicit expression of this Poisson structure on the open Schubert cell $G^0_k(n)=\{ X\in G_k(n) : x_{[k]} \ne 0 \}$. Here we use the same notation for an element of the Grassmannian and its matrix representative $X$, and $x_I$ denotes the Pl\"ucker coordinate that corresponds to a $k$-element subset $I \subset [n]$. Elements of $G^0_k(n)$ can be represented by matrices of the form $\left [ \mathbf 1_k \ Y\right ]$, and the entries of the $k\times(n-k)$ matrix $Y$ serve as coordinates on $G^0_k(n)$. In terms of matrix elements $y_{ij}$ of $Y$, the Poisson homogeneous bracket looks as follows \cite{GSV1}: \begin{equation}\label{Poihomstand} \{y_{ij}, y_{\alpha\beta}\}_0^{Gr}=\frac{\operatorname{sign} (\alpha - i)- \operatorname{sign} (\beta -j)}{2} y_{i\beta} y_{\alpha j}. \end{equation} We denote $G_k(n)$ equipped with the Poisson bracket \eqref{Poihomstand} by $G_k(n)^{\{ 0\}}$. 
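As a machine check of \eqref{Poihomstand}, the coefficients assemble into a bivector whose Jacobiator must vanish identically. The Python sketch below (an illustration, not part of the text) verifies this at a random point for the hypothetical small choice $k=m=2$; central differences suffice here because every bivector entry is quadratic in the coordinates.

```python
import numpy as np
from itertools import combinations

k, m = 2, 2                       # smallest interesting open cell, coordinates y_ij
coords = [(i, a) for i in range(1, k + 1) for a in range(1, m + 1)]
idx = {c: t for t, c in enumerate(coords)}
N = len(coords)

def bivector(y):
    # P[u, v] = {y_u, y_v}_0^{Gr} evaluated at the point y
    P = np.zeros((N, N))
    for u, (i, j) in enumerate(coords):
        for v, (al, be) in enumerate(coords):
            P[u, v] = (0.5 * (np.sign(al - i) - np.sign(be - j))
                       * y[idx[(i, be)]] * y[idx[(al, j)]])
    return P

def jacobiator(y, u, v, w, h=1e-4):
    # sum_l pi_{ul} d_l pi_{vw} + cyclic permutations; central differences are
    # exact up to rounding since each bivector entry is quadratic in y
    total = 0.0
    for a, b, c in [(u, v, w), (v, w, u), (w, u, v)]:
        for l in range(N):
            yp, ym = y.copy(), y.copy()
            yp[l] += h
            ym[l] -= h
            total += bivector(y)[a, l] * (bivector(yp)[b, c] - bivector(ym)[b, c]) / (2 * h)
    return total

rng = np.random.default_rng(5)
y = rng.uniform(0.5, 2.0, size=N)
print(all(abs(jacobiator(y, u, v, w)) < 1e-8
          for u, v, w in combinations(range(N), 3)))  # True
```

The vanishing of all Jacobiators confirms that the displayed coefficients indeed define a Poisson bracket on the cell.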
\begin{proposition} \label{GrS} {\rm (i)} For an arbitrary skew-symmetric operator $S$, there exists a Poisson bracket ${\{\cdot,\cdot\}}^{Gr}_S$ on $G_k(n)$, unique up to a scalar multiple, such that the natural action $(X, D) \mapsto X D$ is a Poisson map from $G_k(n)^{\{ 0\}} \times \mathcal H^{\{-\frac{1}{2} S\}} $ to $G_k(n)^{\{ S\}}:=\left (G_k(n), {\{\cdot,\cdot\}}^{Gr}_S \right )$. {\rm (ii)} The bracket ${\{\cdot,\cdot\}}^{Gr}_S$ is a Poisson homogeneous structure on $G_k(n)$ with respect to the bracket ${\{\cdot,\cdot\}}_{R_S}$ on $SL_n$ defined by \eqref{braijklRS}. \end{proposition} \begin{proof} (i) Let $X=\left [ \mathbf 1_k \ Y\right ]\in G^0_k(n)$, $D=\operatorname{diag} (d_1,\dots,d_n) \in \mathcal H$ and let $\left [ \mathbf 1_k \ \widetilde{Y}\right ]$ be the matrix that represents the element $X D \in G^0_k(n)$. Then $\tilde y_{ij} = y_{ij} {d_{j+k}}/{d_i}$, and the Poisson bracket of any two coordinates $\tilde y_{ij}$ and $\tilde y_{\alpha\beta}$ in the product structure $ {\{\cdot,\cdot\}}^{Gr}_0 \times {\{\cdot,\cdot\}}^\mathcal H_{-\frac{1}{2} S}$ is equal to $$ \frac{\operatorname{sign} (\alpha - i)- \operatorname{sign} (\beta -j)}{2} \tilde y_{i\beta} \tilde y_{\alpha j} + \frac{s_{i, \beta+k} + s_{j+k, \alpha} - s_{i, \alpha} - s_{j+k, \beta+k}}{2} \tilde y_{ij} \tilde y_{\alpha\beta}. $$ Thus, the bracket defined on $G^0_k(n)$ by the formula $$ \{y_{ij}, y_{\alpha\beta}\}_S^{Gr}= \{y_{ij}, y_{\alpha\beta}\}_0^{Gr} + \frac{s_{i, \beta+k} + s_{j+k, \alpha} - s_{i, \alpha} - s_{j+k, \beta+k}}{2}y_{ij} y_{\alpha\beta} $$ is the unique, up to a scalar multiple, Poisson bracket that makes the map $ (X, D) \mapsto XD $ Poisson. Since $G^0_k(n)$ is an open dense subset of $G_k(n)$, the claim follows.
(ii) To see that ${\{\cdot,\cdot\}}^{Gr}_S$ is Poisson homogeneous with respect to ${\{\cdot,\cdot\}}_{R_S}$, we need to check that the natural action of $SL_n$ on $G_k(n)$ defines a Poisson map from $G_k(n)^{\{ S\}}\times SL_n^{\{ S\}}$ to $G_k(n)^{\{ S\}}$. Instead of a straightforward calculation, we can use the fact that this is true for $S=0$ and Lemma \ref{lemma}. Indeed, it is easy to check that both Poisson maps $G_k(n)^{\{0\}}\times\mathcal H^{\{-\frac12 S\}}\to G_k(n)^{\{S\}}$ given by $(X,D_1)\mapsto XD_1$ and $\mathcal H^{\{\frac 12 S\}} \times SL_n^{\{0\}} \times \mathcal H^{\{-\frac 12 S\}} \to SL_n^{\{S\}}$ given by $( D_1, X, D_2) \mapsto D_1 X D_2$ are surjective. Therefore, we can replace the map $G_k(n)^{\{ S\}}\times SL_n^{\{ S\}}\to G_k(n)^{\{ S\}}$ by the map $$ G_k(n)^{\{0\}}\times\mathcal H^{\{-\frac12 S\}}\times\mathcal H^{\{\frac 12 S\}} \times SL_n^{\{0\}} \times \mathcal H^{\{-\frac 12 S\}} \to G_k(n)^{\{ S\}} $$ given by $(X,D_1,D_2,A,D_3)\mapsto XD_1D_2AD_3$. It is easy to check that $(D_1,D_2)\mapsto D_1 D_2$ is a Poisson map from $\mathcal H^{\{-\frac{1}{2} S\}} \times \mathcal H^{\{\frac{1}{2} S\}}$ onto $\mathcal H^{\{0\}}$, which, in turn, is a Poisson-Lie subgroup of $SL_n^{\{0\}}$. We thus arrive at the Poisson map $G_k(n)^{\{0\}}\times SL_n^{\{0\}} \times \mathcal H^{\{-\frac 12 S\}} \to G_k(n)^{\{ S\}}$ given by $(X,\widetilde{A},D_3)\mapsto X\widetilde{A}D_3$ with $\widetilde{A}=D_1D_2A$. The standard Poisson homogeneous structure ensures that the map $G_k(n)^{\{0\}}\times SL_n^{\{0\}}\to G_k(n)^{\{ 0\}}$ is Poisson, and it remains to use part (i) of Proposition~\ref{GrS} to complete the proof. \end{proof} \subsection{} Now, we recall the construction of the cluster algebra ${\mathcal A}_{G^0_k(n)}$ associated with the open cell $G^0_k(n)$ as described in \cite{GSV1,GSV3}. Denote $m=n-k$.
For every $i\in [k]$, $j\in [m]$ put \begin{equation}\label{Iij} I_{ij} =\begin{cases} [i+1,k] \cup [ j+k,i + j+ k - 1], &\text{if $i \le m-j +1$}\\ \left ([k]\setminus [i + j - m ,\ i]\right ) \cup [ j+k, n], &\text{if $i > m-j +1$}. \end{cases} \end{equation} Denote the Pl\"ucker coordinate $x_{I_{ij}}$ by $x(i,j)$. The initial cluster of ${\mathcal A}_{G^0_k(n)}$ is given by \begin{equation}\label{initPluck} \mathbf x =\mathbf x (k,n)=\left \{ \frac{x(i,j)}{x_{[k]}}\ :\ i\in [k],\ j\in [m]\right \}. \end{equation} Stable variables are ${\displaystyle \frac{x(1,1)}{x_{[k]}}, \ldots, \frac{x(k,1)}{x_{[k]}}, \frac{x(k,2)}{x_{[k]}},\ldots, \frac{x(k,m)}{x_{[k]}}}$. The entries of ${\widetilde{B}}$ that correspond to ${\bf x}$ are all $0$ or $\pm 1$. Thus it is convenient to describe ${\widetilde{B}}$ by a directed graph $\Gamma({\widetilde{B}})$. \begin{figure}[ht] \begin{center} \includegraphics[height=3.5cm]{grass1.eps} \caption{Graph $\Gamma({\widetilde{B}})$}\label{fig:graph_1} \end{center} \end{figure} The vertices of $\Gamma({\widetilde{B}})$ correspond to all columns of ${\widetilde{B}}$, and, since ${\widetilde{B}}$ is rectangular, the corresponding edges are either between two cluster variables or between a cluster variable and a stable variable. In our case, $\Gamma({\widetilde{B}})$ is a directed graph with $km$ vertices labeled by pairs of integers $(i,j)$, $i\in[k]$, $j\in [m]$. $\Gamma({\widetilde{B}})$ has edges $(i,j) \to (i,j+1)$, $(i+1,j) \to (i,j)$ and $(i,j) \to (i+1,j-1)$ whenever both vertices defining an edge are in the vertex set of $\Gamma({\widetilde{B}})$ (cf.~Fig.~\ref{fig:graph_1}). Each cluster variable $x(i,j)$ is associated with (placed at) the vertex with coordinates $(i,j)$ of the grid in Fig.~\ref{fig:graph_1} for $i\in [k],\ j\in [ m]$.
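The three edge families translate directly into code. The following Python sketch (illustrative only; the choice $k=3$, $m=4$ is hypothetical) assembles the signed adjacency matrix of $\Gamma({\widetilde{B}})$, splits off the stable vertices listed above, and confirms that the resulting ${\widetilde{B}}$ has entries in $\{0,\pm1\}$ with a skew-symmetric principal part:

```python
import numpy as np

k, m = 3, 4                                  # hypothetical choice, e.g. G_3(7)
verts = [(i, j) for i in range(1, k + 1) for j in range(1, m + 1)]
idx = {v: t for t, v in enumerate(verts)}

A = np.zeros((k * m, k * m), dtype=int)      # signed adjacency of Gamma(Btilde)
for (i, j) in verts:
    for tgt in [(i, j + 1), (i + 1, j - 1)]:     # edges (i,j)->(i,j+1), (i,j)->(i+1,j-1)
        if tgt in idx:
            A[idx[(i, j)], idx[tgt]] += 1
            A[idx[tgt], idx[(i, j)]] -= 1
    if (i + 1, j) in idx:                        # edge (i+1,j)->(i,j)
        A[idx[(i + 1, j)], idx[(i, j)]] += 1
        A[idx[(i, j)], idx[(i + 1, j)]] -= 1

# stable vertices: first column x(i,1) and last row x(k,j)
stable = [(i, 1) for i in range(1, k + 1)] + [(k, j) for j in range(2, m + 1)]
cluster = [v for v in verts if v not in set(stable)]
order = [idx[v] for v in cluster + stable]
Btilde = A[np.ix_([idx[v] for v in cluster], order)]
B = Btilde[:, :len(cluster)]                 # principal part

print(len(cluster) == (k - 1) * (m - 1),
      np.array_equal(B, -B.T),
      int(np.abs(Btilde).max()))             # True True 1
```

The $n-1 = k+m-1$ stable vertices match the count used later for the toric action of $({\mathbb C}^*)^{n-1}$.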
Equation \eqref{eq:1.3} then results in the following formulas for the $\tau$-cluster: \begin{equation} \label{tau} \tau_{ij}= \frac{x(i+1,j-1) x(i,j+1) x(i-1,j) } {x(i-1,j+1) x(i,j-1) x(i+1,j) }, \quad i\in[ k-1],\ j \in [2,m], \end{equation} where $x(0,j)=x(i,m+1) =1$. \begin{lemma} \label{tauinv} Functions \eqref{tau} are invariant under the natural action of $\mathcal H$ on $G_k(n)$. \end{lemma} \begin{proof} Let $X\in G_k(n)$, $D=\operatorname{diag}(d_1,\dots,d_n) \in \mathcal H$ and $\widetilde{X} = X D$. For any subset $I=\{i_1, \ldots, i_l\} \subset [n]$ denote $d^I = \prod_{j=1}^l d_{i_j}$. Then, using \eqref{Iij}, we obtain $$ \tilde x(i,j)=x(i,j) d^{I_{ij}} = \begin{cases} {\displaystyle x(i,j) \frac{d^{[k]} d^{[i+j+k-1]} }{d^{[i]} d^{[j+k-1]}}} , &\text{if $i \le m-j +1$},\\[.7em] {\displaystyle x(i,j)\frac{d^{[k]} d^{[n]} d^{[i+j-m-1]} }{d^{[i]} d^{[j+k-1]}}}, &\text{if $i > m-j +1$}, \end{cases} $$ and the equality $\tau_{ij}(\widetilde{X})=\tau_{ij}(X)$ follows from \eqref{tau} by trivial cancellation. \end{proof} Now we are ready to prove \begin{theorem} \label{main} A Poisson structure ${\{\cdot,\cdot\}}$ on $G_k(n)$ is compatible with ${\mathcal A}_{G^0_k(n)}$ if and only if a scalar multiple of ${\{\cdot,\cdot\}}$ defines a Poisson homogeneous structure with respect to ${\{\cdot,\cdot\}}_{R_S}$ for some skew-symmetric operator $S$. \end{theorem} \begin{proof} It follows from Theorem 5.4 in \cite{GSV3} that ${\{\cdot,\cdot\}}_0^{Gr}$ is compatible with ${\mathcal A}_{G^0_k(n)}$. The number of stable variables for ${\mathcal A}_{G^0_k(n)}$ is $n-1$. Since $\mathcal H$ is isomorphic to $({\mathbb C}^*)^{n-1}$, Lemma \ref{tauinv} guarantees that the map $ (X, D) \mapsto XD $ translates into an action of $({\mathbb C}^*)^{n-1}$ on ${\mathcal A}_{G^0_k(n)}$ as described in Section \ref{CA&PB}. Assumptions of Theorem~\ref{thm:1.4} are verified in \cite{GSV1}, Section 3.3. 
Then Proposition \ref{allcompat} and Proposition \ref{GrS} imply that every compatible Poisson bracket on ${\mathcal A}_{G^0_k(n)}$ is a scalar multiple of ${\{\cdot,\cdot\}}_S^{Gr}$ for some skew-symmetric operator $S$ on the space of traceless diagonal $n\times n$ matrices. Since ${\{\cdot,\cdot\}}_S^{Gr}$ is the unique Poisson homogeneous structure with respect to ${\{\cdot,\cdot\}}_{R_S}$ (see, e.g., \cite{D}), the claim follows. \end{proof} \section*{Acknowledgments} M.~G.~was supported in part by NSF Grant DMS \#0801204. M.~S.~was supported in part by NSF Grants DMS \#0800671 and PHY \#0555346. A.~S.~was supported in part by KVVA. A.~V.~was supported in part by ISF Grant \#1032/08. The authors are grateful to A.~Zelevinsky for useful comments.
https://arxiv.org/abs/0909.0361
Poisson structures compatible with the cluster algebra structure in Grassmannians
We describe all Poisson brackets compatible with the natural cluster algebra structure in the open Schubert cell of the Grassmannian $G_k(n)$ and show that any such bracket endows $G_k(n)$ with a structure of a Poisson homogeneous space with respect to the natural action of $SL_n$ equipped with an R-matrix Poisson-Lie structure. The corresponding R-matrices belong to the simplest class in the Belavin-Drinfeld classification. Moreover, every compatible Poisson structure can be obtained this way.
https://arxiv.org/abs/2008.12076
Robust Combinatorial Optimization with Locally Budgeted Uncertainty
Budgeted uncertainty sets have been established as a major influence on uncertainty modeling for robust optimization problems. A drawback of such sets is that the budget constraint only restricts the global amount of cost increase that can be distributed by an adversary. Local restrictions, while being important for many applications, cannot be modeled this way. We introduce a new variant of budgeted uncertainty sets, called locally budgeted uncertainty. In this setting, the uncertain parameters are partitioned, such that a classic budgeted uncertainty set applies to each part, called a region. In a theoretical analysis, we show that the robust counterpart of such problems for a constant number of regions remains solvable in polynomial time, if the underlying nominal problem can be solved in polynomial time as well. If the number of regions is unbounded, we show that the robust selection problem remains solvable in polynomial time, while also providing hardness results for other combinatorial problems. In computational experiments using both random and real-world data, we show that using locally budgeted uncertainty sets can have considerable advantages over classic budgeted uncertainty sets.
\section{Introduction}\label{sec:intro} We consider nominal combinatorial optimization problems of the form \begin{align*} \min\ & \pmb{c}^t \pmb{x} \\ \text{s.t. } & \pmb{x}\in\mathcal{X} \end{align*} where $\mathcal{X}\subseteq\{0,1\}^n$ is the set of feasible solutions. For uncertain cost coefficients $\pmb{c}\in\mathcal{U}$, robust optimization approaches have been analyzed. To this end, one assumes that a set $\mathcal{U}$ of possible cost realizations is given by a decision maker or derived from historical data. The set $\mathcal{U}$ is referred to as the uncertainty set. The (min-max) robust counterpart is then to solve \[ \min_{\pmb{x}\in\mathcal{X}}\ \max_{\pmb{c}\in\mathcal{U}}\ \pmb{c}^t \pmb{x} \] Different possibilities to model the set $\mathcal{U}$ have been proposed. One straightforward possibility is to use a discrete set of scenarios $\mathcal{U} = \{ \pmb{c}^1,\ldots,\pmb{c}^N\}$, i.e., to list all possible outcomes explicitly. While this approach is flexible, it usually results in NP-hard robust optimization problems, even if the nominal problem can be solved in polynomial time (see \cite{kouvelis2013robust,aissi2009min,kasperski2016robust} for overviews). Also, implicit descriptions of the uncertainty set can lead to exponential-sized equivalent discrete uncertainty sets. A popular alternative is given by budgeted uncertainty sets of the form \begin{equation} \mathcal{U} = \left\{ \pmb{c} = \underline{\pmb{c}} + \pmb{\delta} : \delta_i\in[0,d_i]\ \forall i\in[n],\ \sum_{i\in[n]} \delta_i \le \Gamma \right\} \label{classic} \end{equation} as first introduced in \cite{bertsimas2003robust,bertsimas2004price}. Here we use the notation $[n]=\{1,\ldots,n\}$. For every item $i\in[n]$, we are given a lower bound $\underline{c}_i$ on the costs, as well as a possible maximum cost deviation $d_i$. Additionally, there is a budget $\Gamma$ on the total increase of costs over the lower bound.
Advantages of this set include its intuitive description for a decision maker, and that robust counterparts remain efficiently solvable for nominal problems that can be solved efficiently, even though the budgeted uncertainty set has an exponential number of extreme points. These benefits have led to a substantial amount of research into robust optimization problems with budgeted uncertainty sets, see, e.g., \cite{alves2015robust,hansknecht2018fast,chassein2018recoverable,bougeret2019robust,chassein2019faster} and many more. But there are also limitations to this approach, which has led to the development of alternative uncertainty sets. These include multi-band uncertainty \cite{busing2012new}, variable budgeted uncertainty \cite{poss2013robust}, and knapsack uncertainty \cite{poss2018robust}. To the best of our knowledge, no previous work has addressed the limitation that the constraint $\sum_{i\in[n]} \delta_i \le \Gamma$ imposes a single global budget over all uncertain parameters. For various applications, multiple local budgets are more desirable. As examples, consider multi-period problems, where every period has its own budget limitation, routing problems, where separate budgets apply to geographic regions or types of roads, and portfolio problems, where uncertainty budgets are restricted to asset classes or sectors. In this paper we introduce a new type of budgeted uncertainty set, where budgets apply locally to their respective regions. These sets are of the form \[ \mathcal{U} = \left\{ \pmb{c} = \underline{\pmb{c}} + \pmb{\delta} : \delta_i \in [0,d_i]\ \forall i\in[n],\ \sum_{i\in P_j} \delta_i \le \Gamma_j\ \forall j\in[K] \right\} \] where $P_1\cup P_2 \cup \ldots \cup P_K = [n]$ denotes a partition of the items. Each set $P_j$ is called a region. In this approach, every region has a separate budget constraint, which models the local uncertainty.
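For a fixed $\pmb{x}$, the adversary's problem over such a set decouples across regions: region $P_j$ contributes $\min\{\Gamma_j, \sum_{i\in P_j : x_i=1} d_i\}$, since every chosen item enters the objective with coefficient one. The following Python sketch (toy data, purely illustrative; the formal treatment begins in Section~\ref{sec:compact}) evaluates this closed form and cross-checks it against a guess-and-solve scheme over the $2^K$ patterns of exhausted budgets, which is made rigorous in Section~\ref{sec:constant}:

```python
from itertools import product

def worst_case(x, c_lo, d, regions, gammas):
    # closed-form adversarial value of a fixed 0/1 vector x
    return (sum(c * xi for c, xi in zip(c_lo, x))
            + sum(min(g, sum(d[i] for i in P if x[i])) for P, g in zip(regions, gammas)))

def robust_value(feasible, c_lo, d, regions, gammas):
    # guess which regions pay their full budget, then solve a nominal problem;
    # brute force over the feasible set stands in for a polynomial nominal solver
    K, best = len(regions), float("inf")
    for mask in range(1 << K):
        cost, offset = list(c_lo), 0
        for j, (P, g) in enumerate(zip(regions, gammas)):
            if mask >> j & 1:
                offset += g                  # region pays Gamma_j up front
            else:
                for i in P:
                    cost[i] += d[i]          # otherwise item costs rise to c_lo + d
        best = min(best, offset + min(sum(c * xi for c, xi in zip(cost, x))
                                      for x in feasible))
    return best

# toy selection instance: pick exactly 2 of 4 items, two regions
feasible = [x for x in product((0, 1), repeat=4) if sum(x) == 2]
c_lo, d = (1, 2, 3, 4), (5, 1, 4, 1)
regions, gammas = [(0, 1), (2, 3)], (2, 3)

direct = min(worst_case(x, c_lo, d, regions, gammas) for x in feasible)
print(robust_value(feasible, c_lo, d, regions, gammas), direct)  # 5 5
```

The agreement of the two values on this toy instance reflects the fact that, per region, the adversary either exhausts the budget $\Gamma_j$ or raises every chosen item to its cap.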
Note that this definition of uncertainty is a generalization of the classic definition~\eqref{classic}, which can be recovered by using $K=1$. Our contributions are as follows. For min-max problems with locally budgeted uncertainty, we first derive a compact formulation in Section~\ref{sec:compact}. Based on this formulation, we then consider the case of a constant number of regions in Section~\ref{sec:constant}. We show that the robust problem remains solvable in polynomial time if it is possible to solve the nominal problem in polynomial time. For an unbounded number of regions, the selection problem remains solvable in polynomial time, while this is not the case for the representative selection problem (see Section~\ref{sec:unbounded}). We conclude that the spanning tree problem, the $s$-$t$-min-cut problem, and the shortest path problem also become NP-hard. Additionally, we can exclude the possibility of parameterized algorithms with running time in $O^*(2^{o(K)})$. In Section~\ref{sec:experiments}, we present three computational experiments using locally budgeted uncertainty sets. In all experiments, we compare locally budgeted uncertainty to the classic budgeted uncertainty approach. While the first two experiments use randomly generated data, the third experiment is based on real-world data for robust shortest path problems. Section~\ref{sec:conclusions} concludes the paper and points out further questions. \section{Theoretical Results} \subsection{A Compact Formulation} \label{sec:compact} Let some solution $\pmb{x}\in\mathcal{X}$ be fixed. Its objective value is then determined by solving the adversarial problem \[ \max_{\pmb{c}\in\mathcal{U}}\ \pmb{c}^t\pmb{x} \] that is, by choosing a scenario $\pmb{c}$ that maximizes the costs of $\pmb{x}$. Using the definition of locally budgeted uncertainty, this is equivalent to solving the following linear program: \begin{align} \max\ & \sum_{i\in[n]} (\underline{c}_i + \delta_i) x_i \\ \text{s.t.
} & \sum_{i\in P_j} \delta_i \le \Gamma_j & \forall j\in[K] \label{adv:con1}\\ & \delta_i \le d_i & \forall i\in[n] \label{adv:con2} \\ & \delta_i \ge 0 & \forall i\in[n] \end{align} By strong duality, we can dualize this linear program to find another linear program with the same optimal objective value. Furthermore, any feasible solution to the dual problem gives an upper bound to the objective value of the primal problem. Using the dual, we hence find the following compact problem formulation for the min-max problem with locally budgeted uncertainty. \begin{align} \min\ & \sum_{j\in[K]} \left( \Gamma_j \pi_j + \sum_{i\in P_j} d_i \rho_i + \sum_{i\in P_j} \underline{c}_i x_i \right) \label{compactstart}\\ \text{s.t. } & \pi_j + \rho_i \ge x_i & \forall j\in[K], i\in P_j \\ & \pi_j \ge 0 & \forall j\in [K] \\ & \rho_i \ge 0 & \forall i\in[n] \\ & \pmb{x} \in \mathcal{X} \label{compactend} \end{align} Recall that $\mathcal{X}$ represents the set of feasible solutions for the underlying combinatorial problem. Variables $\pi_j$ are the duals of Constraints~\eqref{adv:con1}, and variables $\rho_i$ are the duals of Constraints~\eqref{adv:con2}. \subsection{Constant Number of Regions} \label{sec:constant} We first consider the case that the number of regions $K$ is a constant value. Note that, in an optimal solution, we can assume that $\rho_i = [x_i - \pi_j]_+$, where $[y]_+=\max\{0,y\}$ denotes the positive part of $y$. \begin{lemma}\label{pilemma} There is an optimal solution to Problem~(\ref{compactstart}-\ref{compactend}), where $\pi_j \in \{0,1\}$ for all $j\in[K]$. \end{lemma} \begin{proof} Let us assume that $\pmb{x}\in\mathcal{X}$ is fixed. Let $X = \{ i\in[n] : x_i = 1\}$ denote the set of items taken by solution $\pmb{x}$. 
The problem then decomposes to: \begin{align} & \min_{\pmb{\pi} \ge \pmb{0}} \sum_{j\in[K]} \left( \Gamma_j \pi_j + \sum_{i\in P_j} d_i \rho_i + \sum_{i\in P_j} \underline{c}_i x_i \right) \\ = & \sum_{i\in[n]} \underline{c}_i x_i + \sum_{j\in[K]} \min_{\pi_j \ge 0} \left( \Gamma_j\pi_j + \sum_{i\in P_j} d_i[x_i - \pi_j]_+ \right) \\ = & \sum_{i\in X} \underline{c}_i + \sum_{j\in[K]} \min_{\pi_j \in[0,1]} \left( \Gamma_j\pi_j + \sum_{i\in P_j\cap X} d_i(1 - \pi_j) \right) \label{lemma1con} \end{align} Note that the equivalence~\eqref{lemma1con} follows as increasing any variable $\pi_j$ beyond 1 can never be optimal for $\Gamma_j > 0$. If $\Gamma_j=0$, then setting $\pi_j=1$ gives the same value as setting $\pi_j > 1$. We can conclude that there is an optimal solution with $\pi_j \in \{0,1\}$ for all $j\in[K]$. \end{proof} \begin{theorem}\label{decomp-theorem} The robust problem with locally budgeted uncertainty (\ref{compactstart}-\ref{compactend}) can be decomposed into $2^K$ subproblems of nominal type. In particular, if $K$ is a constant and the nominal problem can be solved in polynomial time, Problem~(\ref{compactstart}-\ref{compactend}) can be solved in polynomial time as well. \end{theorem} \begin{proof} By Lemma~\ref{pilemma}, we can assume every variable $\pi_j$ to be either 0 or 1. We guess these values. There are $K$ variables $\pi_j$, and thus $2^K$ combinations are possible. For fixed $\pmb{\pi}=(\pi_1,\ldots,\pi_K)$, denote by $\Pi\subseteq[K]$ the set of indices $j$ where $\pi_j=1$. The problem then becomes \begin{align*} & \min_{\pmb{x}\in\mathcal{X}} \sum_{j\in[K]} \left( \Gamma_j \pi_j + \sum_{i\in P_j} d_i [x_i - \pi_j]_+ + \sum_{i\in P_j} \underline{c}_i x_i \right) \\ = & \sum_{j\in \Pi} \Gamma_j + \min_{\pmb{x}\in\mathcal{X}} \left( \sum_{j\in \Pi} \sum_{i\in P_j} \underline{c}_i x_i + \sum_{j\in [K]\setminus \Pi} \sum_{i\in P_j} (\underline{c}_i + d_i) x_i \right) \end{align*} This is a problem of nominal type, and the claim follows. 
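For concreteness, the enumeration over all $2^K$ guesses for $\pmb{\pi}$ can be sketched as follows. This is an illustrative sketch only; the function names and the generic \texttt{nominal\_solver} interface are ours, not part of the formal argument. Regions with $\pi_j = 0$ pay modified costs $\underline{c}_i + d_i$, while regions with $\pi_j = 1$ pay $\underline{c}_i$ plus a constant offset $\Gamma_j$:

```python
from itertools import product

def solve_locally_budgeted(regions, Gamma, c_lo, d, nominal_solver):
    """Solve the min-max problem by guessing pi in {0,1}^K.

    regions: list of K lists of item indices (the partition P_1,...,P_K)
    Gamma:   list of K region budgets
    c_lo:    lower costs (the c-underline vector)
    d:       maximum deviations
    nominal_solver: maps a cost vector to (value, solution x)
    """
    best = (float("inf"), None)
    for pi in product([0, 1], repeat=len(regions)):
        cost = list(c_lo)
        offset = 0.0
        for j, P in enumerate(regions):
            if pi[j] == 1:
                # pay the budget Gamma_j as a constant
                offset += Gamma[j]
            else:
                # pay full deviations on all items of this region
                for i in P:
                    cost[i] += d[i]
        val, x = nominal_solver(cost)
        if offset + val < best[0]:
            best = (offset + val, x)
    return best
```

For the selection problem, for instance, \texttt{nominal\_solver} simply picks the $p$ cheapest items under the modified costs.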
\end{proof} \subsection{Unbounded Number of Regions} \label{sec:unbounded} We now consider the case that the number of regions $K$ is not a constant, but part of the problem input. \subsubsection{Hardness Results} We first consider the representative selection problem, where \[ \mathcal{X} = \left\{ \pmb{x}\in\{0,1\}^n : \sum_{i\in T_\ell} x_i = p_\ell\ \forall \ell\in [L] \right\} \] for a partition $T_1\cup T_2 \cup \ldots \cup T_L = [n]$ and integers $p_\ell$ for all $\ell\in[L]$ (see, e.g., \cite{deineko2013complexity}). \begin{theorem}\label{thm:nphard} The robust representative selection problem with locally budgeted uncertainty and arbitrary $K$ is APX-hard, even if $|T_\ell| = 2$ and $p_\ell=1$ for all $\ell\in[L]$, and $|P_j| \leq 3$ for all $j \in [K]$. \end{theorem} \begin{proof} We reduce from the vertex cover problem, which is APX-hard even on 3-regular graphs \cite{garey1979computers,alimonti2000some}. \noindent\textbf{Given:} A 3-regular graph $G=(V,E)$ and $k \in \mathbb{N}$. \noindent\textbf{Question:} Does there exist a vertex cover of size at most $k$, i.e., a set $S \subseteq V$ such that for all $e=\{u,v\} \in E$ it holds that $u \in S$ or $v \in S$, and $|S| \leq k$?
\begin{figure}[ht] \centering \begin{tikzpicture} \graph[clockwise=6, radius=2.3cm] { {1 [ultra thick, circle, draw], 2, 3[ultra thick, circle, draw], 4, 5[ultra thick, circle, draw], 6[ultra thick, circle, draw]}; {1} --[edge label=a] {2}; {2} --[edge label=b] {3}; {3} --[edge label=c] {4}; {4} --[edge label=d] {5}; {5} --[edge label=e] {6}; {6} --[edge label=f] {1}; {1} --["g", pos=0.2] {4}; {2} --["h", pos=0.8] {5}; {3} --["i", pos=0.2] {6}; }; \end{tikzpicture} \bigskip \begin{tikzpicture}[box/.style={rectangle,draw=black,thick, minimum size=0.8cm},] \node[box,fill=red!80] at (0,-0.8){$2_a$}; \node[box,fill=green,line width=3pt] at (0,0){$\mathbf{1_a}$}; \node[box,fill=red!80] at (1,0){$2_b$}; \node[box,fill=blue!70,line width=3pt] at (1,-0.8){$\mathbf{3_b}$}; \node[box,fill=orange] at (2,-0.8){$4_c$}; \node[box,fill=blue!70,line width=3pt] at (2,0){$\mathbf{3_c}$}; \node[box,fill=orange] at (3,0){$4_d$}; \node[box,fill=yellow,line width=3pt] at (3,-0.8){$\mathbf{5_d}$}; \node[box,fill=pink] at (4,-0.8){$6_e$}; \node[box,fill=yellow,line width=3pt] at (4,0){$\mathbf{5_e}$}; \node[box,fill=green] at (5,-0.8){$1_f$}; \node[box,fill=pink,line width=3pt] at (5,0){$\mathbf{6_f}$}; \node[box,fill=orange] at (6,-0.8){$4_g$}; \node[box,fill=green,line width=3pt] at (6,0){$\mathbf{1_g}$}; \node[box,fill=red!80] at (7,0){$2_h$}; \node[box,fill=yellow,line width=3pt] at (7,-0.8){$\mathbf{5_h}$}; \node[box,fill=blue!70] at (8,0){$3_i$}; \node[box,fill=pink,line width=3pt] at (8,-0.8){$\mathbf{6_i}$}; \end{tikzpicture} \caption{Illustration of the reduction from vertex cover. The big circled vertices correspond to a minimum size vertex cover of the graph. Below we show the instance of the robust representative selection problem corresponding to this graph. Each column corresponds to a partition from which one of the two elements must be selected. The colors correspond to the regions of the instance. 
The bold elements are an optimal solution corresponding to the shown vertex cover of the graph. Note that only elements in 4 regions (green, blue, yellow, pink), corresponding to the vertices 1,3,5,6, are selected.} \label{fig:vcred} \end{figure} Given such an instance, we construct an instance of the robust representative selection problem with locally budgeted uncertainty fulfilling the restrictions stated in the theorem. In Figure~\ref{fig:vcred} we illustrate the reduction via an example for a concrete vertex cover instance. Let $L = |E|$ and $n=2|E|$. For each $e = \{u, v\} \in E$ let $u_e, v_e$ be the two elements of $[n]$ in $T_e$, which we associate with the two vertices $u$ and $v$. Note that for every vertex $v\in V$ there are exactly $\deg(v)$ elements associated with this vertex, one for every edge incident to $v$. For our partition into regions of the locally budgeted uncertainty set, we use exactly those sets of elements that correspond to the same vertex in $V$, i.e., we define a region $P_v = \{ v_e \colon e \in E, v \text{ incident to }e \}$ for every $v \in V$. Note that these sets also form a partition of $[n]$, and we have $K=|V|$. We further set $\pmb{\Gamma} = \pmb{1}$, $\underline{\pmb{c}} = \pmb{0}$ and $\pmb{d} = \pmb{1}$. We show that there is a vertex cover of size at most $k$ if and only if the constructed instance of the robust representative selection problem with locally budgeted uncertainty has a solution with objective value at most $k$. To see this, we first prove the following claim. \begin{claim} Given a feasible solution $\pmb{x}$ of our instance of the robust representative selection problem, the robust objective value is equal to the number of regions $P_v$ in which at least one element is selected, i.e. \[ | \{ v \in V \colon \exists i \in P_v \text{ with } x_i = 1 \}|.
\] \end{claim} \begin{claimproof}[Proof of Claim] First observe that this value can be realized by the adversary: for each region $P_v$ that contains an element $i \in P_v$ with $x_i=1$, choose one such element arbitrarily and set $\delta_i = 1$. It is easy to see that this is a feasible solution for the adversary and that it attains the claimed objective value. For the upper bound, observe that it is only sensible to set $\delta_i > 0$ for $i\in [n]$ with $x_i = 1$. Hence, by the definition of $\mathcal{U}$, the value $ | \{ v \in V \colon \exists i \in P_v \text{ with } x_i = 1 \}| $ is an upper bound on the objective value of the adversary. \end{claimproof} Now given a vertex cover $S$ of size $k$, we construct a solution $\pmb{x}$ by selecting in each $T_e$ the element corresponding to a vertex in the vertex cover. If both vertices incident to $e$ are in $S$, we choose one of the two elements arbitrarily. Since $|S|=k$, we select elements from at most $k$ different regions $P_v$. Hence, by our claim, the objective value of the robust representative selection problem is at most $k$. Conversely, given a solution $\pmb{x}$ to the robust representative selection problem with objective value $k$, we know by our claim that the elements selected by $\pmb{x}$ are contained in exactly $k$ different regions $P_{v_1}, \dots, P_{v_k}$. We define $S$ to be the set of exactly those $k$ vertices, i.e., $S = \{v_{1}, \dots, v_{k}\}$. Since one element is selected by $\pmb{x}$ from each $T_e$, $e \in E$, every edge $e$ has at least one incident vertex in $S$; hence $S$ is a vertex cover of size $k$. \end{proof} \begin{corollary} The robust problem with locally budgeted uncertainty and arbitrary $K$ is APX-hard for the shortest path problem on series-parallel graphs, for the minimum spanning tree problem, and for the $s$-$t$-min-cut problem, even if $|P_j| \leq 3$ for all $j \in [K]$.
\end{corollary} \begin{proof} The result for the shortest path and minimum spanning tree problem follows directly from Theorem~\ref{thm:nphard}. To see this, given an instance of the representative selection problem with $|T_\ell| = 2$ and $p_\ell=1$ for all $\ell \in [L]$, we construct the graph $G$ with vertex set $V = \{0,1,\dots,L\}$ and edge set $E$ consisting of parallel edges $e_\ell^1, e_\ell^2$ connecting vertex $\ell-1$ with vertex $\ell$ for all $\ell \in [L]$. Here $e_\ell^1, e_\ell^2$ are in one-to-one correspondence with the two elements in $T_\ell$. It is now easy to see that both the spanning trees and the paths from node $0$ to node $L$ in $G$ are in one-to-one correspondence with feasible solutions to the original representative selection problem. Using the same locally budgeted uncertainty set as for the representative selection problem, we find that objective values of corresponding solutions remain equal. To obtain the result for the $s$-$t$-min-cut problem, observe that the special instance of the representative selection problem is equivalent to the $s$-$t$-cut problem in a graph $G$ that contains, in addition to $s$ and $t$, a vertex $v_{\ell}$ for each part $\ell \in [L]$ of size $2$, together with the path from $s$ via $v_{\ell}$ to $t$. Then $s$-$t$-cuts correspond to selecting one of the two edges from each of these paths. \end{proof} Note that the above reduction does not exclude the possibility of a parameterized algorithm with running time $O^{*}(2^{o(K)})$, even if we assume the exponential time hypothesis (ETH), since the number of regions $K$ in the reduction cannot be bounded by the solution size $k$ of the vertex cover. In the following, we give a direct linear parameterized reduction from 3-SAT to the robust representative selection problem with locally budgeted uncertainty, which shows that the running time of our FPT meta-algorithm is essentially tight under ETH.
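In all of these reductions, the robust objective value of a fixed solution $\pmb{x}$ can be evaluated directly: since each $\delta_i$ enters the adversary's objective with coefficient $x_i \in \{0,1\}$, the worst case in region $P_j$ is $\min\{\Gamma_j, \sum_{i \in P_j} d_i x_i\}$. A minimal sketch of this evaluation (an illustration in our own notation, not code from the paper):

```python
def robust_value(x, regions, Gamma, c_lo, d):
    """Worst-case cost of a fixed 0/1 solution x under locally
    budgeted uncertainty: per region the adversary can add
    min(Gamma_j, sum of d_i over selected items in P_j)."""
    value = sum(c * xi for c, xi in zip(c_lo, x))
    for P, G in zip(regions, Gamma):
        value += min(G, sum(d[i] for i in P if x[i] == 1))
    return value
```

With $\pmb{\Gamma} = \pmb{1}$, $\underline{\pmb{c}} = \pmb{0}$ and $\pmb{d} = \pmb{1}$, as used in the reductions above, this evaluates to exactly the number of regions containing a selected element.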
\begin{theorem}\label{thm:ethhard} Assuming ETH, there is no $O^{*}(2^{o(K)})$ time algorithm for the robust representative selection problem with locally budgeted uncertainty, even if $|T_\ell| \leq 3$ and $p_\ell=1$ for all $\ell\in[L]$. \end{theorem} \begin{proof} We reduce from an instance of the well known 3-SAT problem. \noindent\textbf{Given:} A formula $\varphi$ in 3-CNF with variables $x_1, \dots, x_{\tilde{n}}$ and $\tilde{m}$ clauses, i.e. \[ \varphi = (l_{1,1} \vee l_{1,2} \vee l_{1,3}) \wedge \dots \wedge (l_{\tilde{m},1} \vee l_{\tilde{m},2} \vee l_{\tilde{m},3}), \] where the $l$ are literals of the variables $\pmb{x}$. \noindent\textbf{Question:} Is there an assignment for $\pmb{x}$ such that $\varphi$ is true? \begin{figure}[ht] \centering \begin{tikzpicture} \node at (0,0) {$\varphi = (\bar{x}_{1} \vee x_2 \vee \bar{x}_4) \wedge (x_2 \vee \bar{x}_3 \vee x_4) \wedge (x_1 \vee \bar{x}_2 \vee x_3)$}; \node at (0, -1) {$x_1 = 0, x_2=0, x_3=1, x_4=1$}; \end{tikzpicture} \bigskip \begin{tikzpicture}[box/.style={rectangle,draw=black,thick, minimum size=0.8cm},] \node[box,fill=green] at (0,-0.8){$e_1^T$}; \node[box,fill=red!80,line width=3pt] at (0,0){$\mathbf{e_1^F}$}; \node[box,fill=pink] at (1,-0.8){$e_2^T$}; \node[box,fill=yellow,line width=3pt] at (1,0){$\mathbf{e_2^F}$}; \node[box,fill=orange] at (2,0){$e_3^F$}; \node[box,fill=blue!70,line width=3pt] at (2,-0.8){$\mathbf{e_3^T}$}; \node[box,fill=violet!70] at (3,0){$e_4^F$}; \node[box,fill=brown,line width=3pt] at (3,-0.8){$\mathbf{e_4^T}$}; \node[box,fill=pink] at (4,-0.4){$e_1^2$}; \node[box,fill=violet!70] at (4,-1.2){$e_1^3$}; \node[box,fill=red!80,line width=3pt] at (4,0.4){$\mathbf{e_1^1}$}; \node[box,fill=pink] at (5,0.4){$e_2^1$}; \node[box,fill=orange] at (5,-0.4){$e_2^2$}; \node[box,fill=brown,line width=3pt] at (5,-1.2){$\mathbf{e_2^3}$}; \node[box,fill=green] at (6,0.4){$e_3^1$}; \node[box,fill=yellow] at (6,-0.4){$e_3^2$}; \node[box,fill=blue!70,line width=3pt] at 
(6,-1.2){$\mathbf{e_3^3}$}; \end{tikzpicture} \caption{Illustration of the reduction from 3-SAT. Above, an example of a 3-SAT formula with a satisfying assignment. Below we show the instance of the robust representative selection problem corresponding to this instance. Each column corresponds to a part from which one element must be selected. The colors correspond to the regions of the instance. The bold elements are an optimal solution corresponding to the shown variable assignment. Note that elements in exactly 4 regions (red, yellow, blue, brown), corresponding to the shown satisfying variable assignment, are selected.} \label{fig:satred} \end{figure} The exponential time hypothesis (ETH), which is widely believed~\cite{woeginger2003exact}, implies that there is no algorithm deciding 3-SAT in time $2^{o(\tilde{n})}$. We define an instance of our robust problem on the element set $[2 \tilde{n} + 3 \tilde{m}]$ in the following way. In Figure~\ref{fig:satred} we illustrate the reduction via an example for a 3-SAT instance. The partition of $[2\tilde{n}+3\tilde{m}]$ for the representative selection problem consists of $\tilde{n} + \tilde{m}$ parts, one for each variable and clause. For each $i \in [\tilde{n}]$ the set $T_{i}$ consists of two elements, an element $e^{T}_{i}$ and an element $e^{F}_{i}$. These parts are the variable gadgets, and selecting $e^{T}_{i}$ or $e^{F}_{i}$ corresponds to setting $x_{i}$ to true or false, respectively. For each clause we create a part consisting of exactly three elements, i.e., for each $j \in [\tilde{m}]$ the set $T_{\tilde{n}+j}$ consists of the elements $e^{1}_{j}$, $e^{2}_{j}$ and $e^{3}_{j}$. Using the locally budgeted uncertainty set and cost structure, we will enforce that $e^{i}_{j}$ can be selected without inducing additional cost only if the selection in the variable gadget corresponding to the variable of literal $l_{j,i}$ makes $l_{j,i}$ true.
To this aim, we define the partition for the locally budgeted uncertainty set to consist of $K = 2\tilde{n}$ regions $P_{i}^{T}$ and $P_{i}^{F}$ for each $i \in [\tilde{n}]$. The element $e_{i}^{T}$ belongs to $P_{i}^{T}$ and $e_{i}^{F}$ belongs to $P_{i}^{F}$; a clause element $e^{i}_{j}$ belongs to $P_{i'}^{T}$ if $l_{j,i} = x_{i'}$, and to $P_{i'}^{F}$ if $l_{j,i} = \bar{x}_{i'}$. Selecting an element inside region $P_{i}^{T}$ or $P_{i}^{F}$ thus corresponds to setting the variable $x_i$ to true or false, respectively. We set $\Gamma_{i}^{T} = \Gamma_{i}^{F} = 1$ for all $i \in [\tilde{n}]$ and the costs to $\underline{\pmb{c}} = \pmb{0}$ and $\pmb{d} = \pmb{1}$. In a similar way as in the proof of Theorem~\ref{thm:nphard}, one can prove the following claim. \begin{claim} Given a feasible solution $\pmb{x}$ of our instance of the robust representative selection problem, the robust objective value is equal to the number of regions in which at least one element is selected by $\pmb{x}$. \end{claim} Based on this, we show that $\varphi$ is satisfiable if and only if the objective value of our instance is $\tilde{n}$. Given a satisfying assignment for $\varphi$, we select in each variable gadget the corresponding element. Then, since $\varphi$ is true, for each clause $j \in [\tilde{m}]$ there is at least one literal $l_{j,i}$ which is true. We select the corresponding element $e^{i}_{j}$ in $T_{\tilde{n}+j}$. Observe that this selection uses exactly the $\tilde{n}$ regions $P_{i}^{v}$, where $v$ is $T$ if $x_i$ is true and $v$ is $F$ if $x_i$ is false. Hence, by our claim, the objective value of this selection is $\tilde{n}$. For the other direction, first observe that the objective value of our instance cannot be smaller than $\tilde{n}$, since in every variable gadget one element must be chosen and each element $e_{i}^{v}$ lies in its own region $P_{i}^{v}$. Now assume that there is a selection with robust objective value $\tilde{n}$. Then, by our claim, for every $i \in [\tilde{n}]$ an element is selected in exactly one of the two regions $P_{i}^{T}$ and $P_{i}^{F}$.
Hence, the truth assignment to $x_{i}$ induced by the selection in the variable gadget satisfies all clauses, since otherwise in one of the clause gadgets we would have to select an element inside an additional region. \end{proof} This result is slightly weaker than Theorem~\ref{thm:nphard} in the sense that we cannot assume $|T_\ell| = 2$ but only $|T_\ell| \leq 3$. The existence of an $O^{*}(2^{o(K)})$ time algorithm for this case is an open problem. \begin{theorem}\label{theolog} Let $k=\max_{\ell\in[L]} |T_\ell|$ and $\Delta=\max_{j\in[K]} |P_j|$. Then the robust representative selection problem with locally budgeted uncertainty and arbitrary $K$ inherits inapproximability results from the set cover problem with maximum cardinality of subsets $\Delta$, and maximum number $k$ of subsets containing any element of the ground set. \end{theorem} \begin{proof} We use an objective-preserving reduction from the set cover problem. \noindent\textbf{Given:} A ground set $V$ with $|V|=\tilde{n}$, and a set of subsets $V_s \subseteq V$, $s\in S$, with $|S|=\tilde{m}$. \noindent\textbf{Question:} Does there exist a set cover of size at most $k$, i.e., a set $C\subseteq S$ with $|C|\le k$ such that $\bigcup_{s\in C} V_s = V$? We set $L=\tilde{n}$ and $K=\tilde{m}$. For each $v\in V$ and each $s\in S$ with $v \in V_s$, we add an element $(v,s)$ to the set $T_v$. Each such element $(v,s)$ belongs to the region $P_s$. We set $\Gamma_s = 1$ for all $s \in S$ and the costs to $\underline{\pmb{c}} = \pmb{0}$ and $\pmb{d} = \pmb{1}$. Finally, we set $p_v = 1$ for all sets $T_v$. As an example, let us assume we have $\tilde{n}=4$, $\tilde{m}=3$, and $V_1 = \{1,2\}$, $V_2=\{2,3\}$, $V_3 = \{3,4\}$. Then Table~\ref{tablelog} illustrates the construction.
\begin{table}[h] \begin{center} \begin{tabular}{r|rrr|r} & $V_1$ & $V_2$ & $V_3$ & \\ \hline 1 & x & & & $T_1$ \\ 2 & x & x & & $T_2$ \\ 3 & & x & x & $T_3$ \\ 4 & & & x & $T_4$ \\ \hline &$P_1$ & $P_2$ & $P_3$ \end{tabular} \caption{Example construction for the proof of Theorem~\ref{theolog}.}\label{tablelog} \end{center} \end{table} By choosing one item from each set $T_v$, we determine by which set $V_s$ we intend to cover it. It can easily be seen that a set cover of size $k$ exists if and only if the robust representative selection problem with locally budgeted uncertainty has an objective value of at most $k$. Hence, the reduction is cost-preserving, and the claim follows. \end{proof} Note that the inapproximability of set cover under parameters $\Delta$ and $k$ is well studied (see, e.g., \cite{saket2012new}). If $k=\Theta(\log\Delta)$, both problems become hard to approximate within a factor of $\Omega(\log\Delta/(\log\log\Delta)^2)$. \subsubsection{A Polynomial Time Algorithm for the Selection Problem} \label{sec:selection} While the results from the previous section indicate that the robust counterpart of even simple combinatorial problems becomes hard, we now show that this is not the case for the selection problem, where $\mathcal{X} = \{ \pmb{x}\in\{0,1\}^n : \sum_{i\in[n]} x_i = p\}$ for some integer $p$. To this end, we use a dynamic program over the number of items taken from every region. Let $n_j=|P_j|$ be the number of items in region $j\in[K]$.
If the number of items $p_j$ taken from region $P_j$ is fixed, the robust problem can be decomposed: \[ \min_{\pmb{x}\in \mathcal{X}} \max_{\pmb{c}\in\mathcal{U}} \pmb{c}^t\pmb{x} = \sum_{j\in [K]} \min_{\pmb{x}\in\mathcal{X}_j} \max_{\pmb{c}\in\mathcal{U}_j} \pmb{c}^t\pmb{x} =: \sum_{j\in[K]} f_j(p_j) \] where $\mathcal{X}_j = \{ \pmb{x}\in\{0,1\}^{n_j} : \sum_{i\in P_j} x_i = p_j\}$ and $\mathcal{U}_j = \{ \pmb{c} \in\mathbb{R}^{n_j} : c_i = \underline{c}_i + \delta_i,\ \delta_i \in [0,d_i]\ \forall i\in P_j,\ \sum_{i\in P_j} \delta_i \le \Gamma_j\}$. For every $j\in[K]$ and every $p_j\in\{0,1,\ldots,n_j\}$, the value $f_j(p_j)$ is the optimal value of a robust selection problem with continuous budgeted uncertainty set. In Theorem~\ref{thm:polysel} we explain how the whole table of these values can be precomputed efficiently. The robust selection problem with locally budgeted uncertainty thus becomes \begin{align} \min\ &\sum_{j\in[K]} f_j(p_j) \label{dpstart}\\ \text{s.t. } & \sum_{j\in[K]} p_j = p \\ & p_j \in\mathbb{N}_0 & \forall j\in[K] \label{dpend} \end{align} where we use $f_j(p_j) = \infty$ if $p_j > n_j$. \begin{theorem}\label{thm:polysel} The robust selection problem with locally budgeted uncertainty and an arbitrary number of regions $K$ can be solved in $O(n \log n + pn)$ time, hence in polynomial time. \end{theorem} \begin{proof} First, we explain how to compute the complete table of values $f_j(p_j)$ for all $j=1,\dots,K$ and $p_j=0,\dots,p$. Note that for fixed $j$ and $p_j$, computing $f_j(p_j)$ is equivalent to solving the robust selection problem with continuous budgeted uncertainty. Observe that this problem is equivalent to solving \[ \min_{\pmb{x} \in \mathcal{X}_j} \left(\underline{\pmb{c}}^t\pmb{x} + \min\{ \pmb{d}^t\pmb{x}, \Gamma_j \}\right), \] corresponding to the two cases of $\pi_j=0$ and $\pi_j=1$ in formulation~(\ref{compactstart}-\ref{compactend}).
The optimal solution to this problem can be determined by solving the two instances of the selection problem for parameter $p_j$ with costs $\underline{\pmb{c}}$ and $(\underline{\pmb{c}} + \pmb{d})$, and taking the solution with the smaller objective value. Hence, for fixed $j$, all values of $f_j$ can be calculated by sorting the items in $P_j$ once with respect to costs $\underline{\pmb{c}}$, and once with respect to costs $\underline{\pmb{c}}+\pmb{d}$. In total, this requires time $O(\sum_{j\in[K]} n_j \log n_j) = O(\sum_{j\in[K]} n_j \log n) = O(n \log n)$. We now give a dynamic program solving problems of type~(\ref{dpstart}-\ref{dpend}) in general form, based on a given table of the values $f_j(p_j)$. This then directly implies our result for the robust selection problem. Let $T(K',p')$ be defined as \begin{align*} T(K',p') := \min\ &\sum_{j\in[K']} f_j(p_j) \\ \text{s.t. } & \sum_{j\in[K']} p_j = p' \\ & p_j \in\mathbb{N}_0 & \forall j\in[K']. \end{align*} Then problem~(\ref{dpstart}-\ref{dpend}) is equivalent to computing $T(K, p)$. It holds that $T(1,p') = f_1(p')$ for all $p'=0,1,\dots,p$. It also holds that \[ T(K', p') = \min \left\{ T(K'-1, p'-p_{K'}) + f_{K'}(p_{K'}) \colon p_{K'}=0,1, \dots, \min\{p',n_{K'}\} \right \}. \] Hence, calculating entry $T(K',p')$ can be done in $O(n_{K'})$ time, if all preceding entries have already been calculated. In total, this means that $T(K,p)$ can be calculated in $O(\sum_{j\in[K]} \sum_{p'\in[p]} n_j) = O(\sum_{j\in[K]} p n_j) = O(pn)$ time. Hence, in total, the running time of our algorithm is $O(n \log n + p n)$. \end{proof} \section{Experiments} \label{sec:experiments} \subsection{Overview} We present three experiments to quantify differences between ``classic'' budgeted uncertainty sets and the locally budgeted uncertainty sets proposed in this paper. Experiments 1 and 2 use randomly generated data for the uncertain selection problem, while Experiment 3 is based on real-world data.
In the first experiment, we assume that the uncertainty set is locally budgeted, and consider the benefit of using this information instead of using a classic budgeted set. In the second experiment, the actual regions are not known to the decision maker. Instead, only sampled scenarios are provided. We analyze the differences between solutions based on classic and locally budgeted uncertainty sets fitted to the data. Finally, in the third experiment, we consider the differences between solutions based on classic and locally budgeted uncertainty sets fitted to real-world data for shortest path problems, where nothing is known about the underlying distribution. \subsection{Experiment 1} \subsubsection{Setup} In this experiment, we focus on randomly generated selection problems. We fix $n=30$. Given the number of regions $K$, we distribute items into the $K$ regions as uniformly as possible. For every item, we generate $\underline{c}_i$ and $d_i$ independently and uniformly from $\{10,\ldots,49\}$. We set $\Gamma_j = 10|P_j|$ and use $K=2,3,4,5$. We generate 10,000 instances using the same random seed for each $K$ (i.e., the cost coefficients of these instances are the same for each $K$). We consider all values $p=1,\ldots,29$. Each instance is solved exactly, using the compact formulation for locally budgeted uncertainty. Additionally, we solve each instance using the classic budgeted uncertainty approach, by ignoring the partition into regions and using $\Gamma=\sum_{j\in[K]} \Gamma_j = 10n$. We measure the robust objective value of both solutions with respect to the locally budgeted uncertainty set. By construction, we already know that the approach using the locally budgeted uncertainty set must perform better. The question we answer here is how much we lose by ignoring such local information. As discussed in Section~\ref{sec:intro}, local uncertainty naturally arises in some practical applications.
Our experiment simulates the effect of using classic budgeted uncertainty in this case. \subsubsection{Results} In Figure~\ref{fig1} we show the ratio of average objective values between the solution found by the model using classic budgeted uncertainty and by the model using locally budgeted uncertainty, for different values of $p$. The higher the ratio, the higher the additional costs that arise by ignoring the locally budgeted uncertainty structure. \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\textwidth]{plot-exact.pdf} \caption{Experiment 1: Ratio between average objective values of solutions based on classic and locally budgeted uncertainty.}\label{fig1} \end{center} \end{figure} Note that for small ($p=1$) and large ($p=29$) values of $p$, the ratio is close to one. Hence, the local information does not matter in this setting. The best choice is to choose the one item $i$ where $\underline{c}_i + d_i$ is smallest (or to avoid the one item where this value is largest, respectively). For values of $p$ between these two extremes, solutions differ. The range of values of $p$ for which there is a difference between solutions based on classic and locally budgeted uncertainty increases with $K$. For $K=2$ and $p=11$, the average cost difference is $15.6\%$, while this increases to $17.9\%$ for $K=5$ and $p=10$. \subsection{Experiment 2} \subsubsection{Setup} \label{exp2:setup} In the previous experiment, we considered the effect if the decision maker knows the parameters of a locally budgeted uncertainty set, but chooses to ignore these and use a classic budgeted uncertainty set instead. In practice, an uncertainty set is usually not given, but needs to be derived from data. Hence, in this second experiment, we build locally budgeted uncertainty sets in the same way as before, but then sample $N$ scenarios from the set.
To create a sample scenario $c^k$ with $k\in[N]$, we choose a random value $\gamma_j$ from $\{0,\ldots,\Gamma_j\}$ uniformly and distribute $\min\{\gamma_j, \sum_{i\in P_j} d_i\}$ unit cost increases over the items $i\in P_j$. We do so iteratively, i.e., we first begin with $c^k=\underline{c}$, and then repeatedly choose an item from $P_j$ where $c^k_i \le \underline{c}_i + d_i - 1$ at random, and increase this item's costs by one. Having constructed $N$ scenarios, we then fit suitable classic and locally budgeted uncertainty sets. The focus of this experiment is to derive the regions from the data. We therefore assume that $\underline{c}_i$ and $d_i$ are given for each item. To estimate the underlying partition into regions, we can assume that in a sufficiently large sample, two items from different regions are not correlated. As each budget constraint applies locally, correlation can only be found within regions. Based on this idea, we calculate the correlation matrix using the available sample data. We then consider two items to be connected if the absolute value of the correlation is above a certain threshold (in this experiment, we used 0.3). Each connected component then forms its own region. Note that this way, the number of regions $\tilde{K}$ we use is not prescribed, but estimated from the data. The classic budgeted uncertainty set uses $\tilde{\Gamma} = \max_{k\in[N]} \left(\sum_{i\in[n]} c^k_i - \underline{c}_i\right)$ as an estimate for the uncertainty budget. For each region, we estimate $\tilde{\Gamma}_j$ in the same way. We use $n=30$, $p=10$, $K=2,\ldots,5$ and vary the sample size $N$ from 10 to 10,000. As before, for each parameter combination, we construct and solve 10,000 instances. \subsubsection{Results} Our results are summarized in Figure~\ref{fig2}. On the horizontal axis, we denote the sample size $N$ (note the logarithmic scale).
On the vertical axis, we show average objective values with respect to the original, unknown locally budgeted uncertainty set. \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\textwidth]{plot-approximate.pdf} \caption{Experiment 2: Average objective values of solutions based on locally and classic budgeted uncertainty.}\label{fig2} \end{center} \end{figure} The solid lines indicate the objective value of the solutions based on fitted classic budgeted uncertainty sets, while the dashed lines represent locally budgeted uncertainty sets. The dotted lines indicate the optimal objective value, if the actual uncertainty set were known. First note that the larger the number of regions $K$, the smaller the objective values become overall. For the classic budgeted uncertainty set, the decrease in objective value with increasing sample size $N$ is small; the line is mostly horizontal. This is different for the locally budgeted uncertainty set, where a significant decrease can be observed after the sample size reaches a certain threshold. This decrease begins at around $N=30$, and is completed at approximately $N=110$. We find that even if the locally budgeted uncertainty set is not given explicitly, it is possible to take significant advantage of this model by identifying the corresponding structure in the data. \subsection{Experiment 3} \subsubsection{Setup} While the previous experiments used artificial data based on an underlying locally budgeted uncertainty set, we now consider real-world data, where no such underlying structure is known. The data we use was first introduced in \cite{chassein2019algorithms}. It consists of a graph modeling the city of Chicago with 538 nodes and 1308 edges, and 4363 snapshots of traffic speed for each edge over 46 days. Figure~\ref{fig3} shows the structure of the graph.
\begin{figure}[htb] \begin{center} \includegraphics[width=0.6\textwidth]{graph.pdf} \caption{Experiment 3: Chicago graph with three regions highlighted.}\label{fig3} \end{center} \end{figure} The data is prepared in the same way as in \cite{chassein2019algorithms}. We use each traffic speed snapshot as a scenario. Of the 4363 scenarios, we use $75\%$ for training our models, and $25\%$ for evaluation. We sample 200 random $s$-$t$ pairs and calculate a shortest path for each pair using each of our models. The classic budgeted model is trained on the data in the same way as in Experiment~2 (see Section~\ref{exp2:setup}), where $\underline{c}_i$ and $d_i$ are estimated from the data. To model locally budgeted uncertainty sets, we create regions by using edge sequences between any two crossings in one direction. In Figure~\ref{fig3}, we show three such regions in red as an illustration. In total, this results in 546 regions. We control the degree of conservatism of our two approaches by multiplying the estimated $\tilde{\Gamma}$ value (or $\tilde{\Gamma}_j$ values, respectively) by a budget factor $f$. We use all values of $f$ from 0 to $0.5$ in steps of $0.002$. For each value of $f$ and each model, we solve the 200 shortest path problems and evaluate the path choices in-sample and out-of-sample. We then calculate the average of the average path length and the worst-case path length over the two scenario sets. \subsubsection{Results} We show our in-sample results in Figure~\ref{fig4a} and the out-of-sample results in Figure~\ref{fig4b}. The horizontal axis shows the average travel time (in minutes), and the vertical axis the average worst-case travel time. The results of both models are shown as a line, starting with $f=0$ in the top left, and moving to the right with increasing value of $f$.
\begin{figure}[htb] \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=\textwidth]{plot-in.pdf} \subcaption{In-sample results.\label{fig4a}} \end{subfigure} \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=\textwidth]{plot-out.pdf} \subcaption{Out-of-sample results.\label{fig4b}} \end{subfigure} \caption{Experiment 3: Average versus average worst-case objective value for different budget factors.}\label{fig4} \end{figure} Note that for $f=0$, the classic and the locally budgeted approach result in the same solution, that is, they only optimize for best-case travel times. In an ideal trade-off between average and worst-case travel time, we would expect the lines to reach from the top left corner (low average time, high worst-case time) to the bottom right corner (high average time, low worst-case time). In Figure~\ref{fig4a} we can see that for the classic approach, no such trade-off can be reached. With increasing budget factor, we increase the average travel time, but do not decrease the worst-case travel times. From the perspective of Pareto optimality, most of the budget factors result in dominated solutions. The locally budgeted uncertainty set, on the other hand, gives a trade-off with increasing budget factor and considerably outperforms the solutions found with the classic budgeted uncertainty set. In the out-of-sample results (Figure~\ref{fig4b}), the classic approach performs even worse, with the curve leading upwards to the top right. The locally budgeted solutions retain a trade-off between average and worst-case time. Overall, we see that it is possible to model the discrete real-world scenarios more accurately using the locally budgeted uncertainty approach, while with the classic budgeted approach, it is not possible to capture the underlying data.
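The Pareto-dominance check used in the discussion above can be sketched as follows; the sample points are made up for illustration and do not reproduce the experimental data.

```python
# Sketch: filtering Pareto-dominated (average, worst-case) pairs across
# budget factors. A point is dominated if some other point is at least
# as good in both coordinates and strictly better in at least one
# (assuming distinct points).

def pareto_filter(points):
    kept = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            kept.append(p)
    return kept

# (average travel time, worst-case travel time) for some budget factors f
points = [(10.0, 30.0), (11.0, 25.0), (12.0, 26.0), (13.0, 22.0)]
print(pareto_filter(points))  # (12.0, 26.0) is dominated by (11.0, 25.0)
```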
\section{Conclusions and Further Research} \label{sec:conclusions} In this paper we introduced a new generalization of budgeted uncertainty sets, where there is a separate uncertainty budget for different regions of items. We showed that for a constant number of regions $K$, the robust counterpart remains polynomially solvable if the nominal problem is solvable in polynomial time. For unbounded values of $K$, we showed that the robust selection problem can still be solved in polynomial time, while this is not the case for the representative selection problem, even if only one item is chosen from each partition. This extends to other combinatorial problems that include the representative selection problem as a special case. Table~\ref{tab:results} gives an overview of these results. In addition, we showed that no parameterized algorithms with running time in $O^*(2^{o(K)})$ exist unless the exponential time hypothesis fails. To the best of our knowledge, robust optimization problems have not been considered from the perspective of fixed-parameter tractability so far. \begin{table}[htb] \begin{center} \begin{tabular}{r|cc} Problem & $K = O(1)$ & $K=O(n)$ \\ \hline Unconstrained & P & P \\ Selection & P & P \\ Repr. Selection & P & strongly NPH \\ Spanning Tree & P & strongly NPH \\ $s$-$t$-min-cut & P & strongly NPH \\ Shortest Path & P & strongly NPH \end{tabular} \caption{Overview of complexity results from this paper.}\label{tab:results} \end{center} \end{table} In computational experiments we showed that more general, locally budgeted uncertainty sets can result in better solutions than their classic counterparts using real-world data sets. Different types of classic budgeted uncertainty sets have been considered in the literature.
Instead of the definition used in this paper, where \[ \mathcal{U} = \left\{ \pmb{c} = \underline{\pmb{c}} + \pmb{\delta} : \delta_i\in[0,d_i]\ \forall i\in[n],\ \sum_{i\in[n]} \delta_i \le \Gamma \right\}, \] it is possible to consider the variant \[ \mathcal{U} = \left\{ \pmb{c} : c_i = \underline{c}_i + d_i z_i, z_i\in[0,1]\ \forall i\in[n],\ \sum_{i\in[n]} z_i \le \Gamma \right\}. \] All of our theoretical results can be extended to this case. In Theorem~\ref{decomp-theorem}, this means we need to consider $n^K$ instead of $2^K$ subproblems. The hardness results hold as well, as we used $d_i=1$ in the uncertainty sets we constructed. Furthermore, the locally budgeted uncertainty set proposed in this paper can be seen in the light of data-driven robust optimization (see, e.g., \cite{bertsimas2018data,chassein2019algorithms}), where the aim is to find the most suitable uncertainty set to describe given data. Using local budgets extends the capabilities of classic budgeted uncertainty models, and thus gives more degrees of freedom to describe the data. From a theoretical perspective, further investigations into the parameterized running time of our meta-algorithm in Section~\ref{sec:constant} are of interest. Note that a minor modification of the proof of Theorem~\ref{thm:ethhard} implies that no $O^{*}((\sqrt{2}-\epsilon)^{K})$ algorithm for the robust representative selection problem with locally budgeted uncertainty exists, unless the strong exponential time hypothesis (SETH) fails. We conjecture that there are combinatorial optimization problems for which the constant $2$ can be improved. Whether this can be done in a meta-algorithm or only for specific combinatorial optimization problems is another interesting open problem. For the second variant of locally budgeted uncertainty, mentioned in Section~\ref{sec:conclusions}, our gap between positive and negative results is even larger.
It is of major interest whether a fixed-parameter tractable algorithm also exists for this case, or if this slight change in the definition of the uncertainty set leads to W[1]-hardness.
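For a fixed solution, the worst case over each of the two classic budgeted variants above admits a closed form or a simple greedy evaluation. The following is a minimal sketch with hypothetical instance data, not the paper's code.

```python
# Sketch: worst-case cost of a fixed solution under the two classic
# budgeted variants. In the first set the adversary distributes a raw
# cost increase of at most Gamma; in the second, at most Gamma items
# may (fractionally) deviate, each weighted by its d_i.

def worst_case_delta(chosen, c_lo, d, gamma):
    # max sum(delta_i * x_i) with delta_i <= d_i and sum(delta_i) <= gamma
    return sum(c_lo[i] for i in chosen) + min(gamma, sum(d[i] for i in chosen))

def worst_case_z(chosen, c_lo, d, gamma):
    # max sum(d_i * z_i * x_i) with z_i in [0,1] and sum(z_i) <= gamma:
    # assign full deviation greedily to the largest d_i first.
    total, budget = sum(c_lo[i] for i in chosen), gamma
    for dev in sorted((d[i] for i in chosen), reverse=True):
        if budget <= 0:
            break
        z = min(1.0, budget)
        total += z * dev
        budget -= z
    return total

c_lo, d = [1.0, 1.0, 1.0], [0.2, 3.0, 1.0]
chosen = {0, 1, 2}
print(worst_case_delta(chosen, c_lo, d, 1.5))  # 3.0 + min(1.5, 4.2) = 4.5
print(worst_case_z(chosen, c_lo, d, 1.5))      # 3.0 + 3.0 + 0.5 * 1.0 = 6.5
```

With $d_i = 1$ for all $i$, as in the hardness constructions, the two evaluations coincide, which is why the hardness results carry over between the variants.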
https://arxiv.org/abs/1403.4654
Curvature bounds via an isoperimetric comparison for Ricci flow on surfaces
A comparison theorem for the isoperimetric profile on the universal cover of surfaces evolving by normalised Ricci flow is proven. For any initial metric, a model comparison is constructed that initially lies below the profile of the initial metric and which converges to the profile of the constant curvature metric. The comparison theorem implies that the evolving metric is bounded below by the model comparison for all time and hence converges to the constant curvature profile. This yields a curvature bound and a bound on the isoperimetric constant, leading to a direct proof that the metric converges to the constant curvature metric.
\section{Introduction} \label{sec-1} The Ricci flow is the nonlinear geometric parabolic evolution equation \begin{equation} \label{eq:RF} \begin{cases} \pd{t} \metric &= -2\ricci(t) \\ \metric(0) &= \metric_0 \end{cases} \end{equation} for a smooth family of Riemannian metrics $\metric(t)$ on a smooth manifold $M$ with Ricci curvature $\ricci(t)$ and an arbitrary smooth initial metric $\metric_0$. Here we are interested in the case of closed surfaces, that is, $2$-dimensional compact manifolds $M$ without boundary. The results here pertain to the \emph{normalised} flow, preserving the $2$-dimensional volume of $M$. After rescaling the initial metric to have volume $4\pi$ and applying the Gauss-Bonnet formula, the normalised flow on surfaces takes the form \begin{equation} \label{eq:NRF} \pd{t}\metric = -2(\gausscurv - \avg{\gausscurv})\metric \end{equation} where $\gausscurv$ is the Gaussian curvature and $\avg{\gausscurv} = \frac{1}{4\pi} \int_M \gausscurv \metricmeasure{\metric}$ is the average Gauss curvature on $M$ \cite{MR2729306}. An important consequence of writing the equation in this way is that it may be lifted to the universal cover $\unicover{M} \to M$. That is, the pullback metric $\unicover{\metric}(t) = \pullback{\uniproj} \metric (t)$ also evolves according to equation \eqref{eq:NRF}. The main theorem of this paper is a comparison theorem for the isoperimetric profile of a surface with metric evolving by the normalised Ricci flow, generalising the comparison theory in \cite{MR2729306} for $M=\mathbb{S}^2$ to arbitrary closed surfaces. Recall that the isoperimetric profile is the least boundary area enclosing a given volume (see section \ref{sec:isoprofile} for a precise definition). \begin{theorem} [Main Theorem \ref{thm:comparison}] Let $(M, \metric(t))$ be a Ricci flow of a closed surface and $(\unicover{M}, \unicover{\metric}(t))$ be the lift to the universal cover.
Let $\phi: (0, \abs{\unicover{M}}) \times [0, T) \to \fld[R]$ be a smooth, strictly positive, strictly concave function satisfying \begin{equation} \label{eq:ricci_iso_diff_inequal} \pd[\phi]{t} \leq \phi'' \phi^2 - (\phi')^2 \phi + \phi'\left(4\pi - 2(1-\genus)a\right) + (1-\genus)\phi \end{equation} along with the asymptotic behaviour \[ \limsup_{a\to 0} \frac{\phi(a, t)}{\sqrt{4\pi a}} \leq 1 \] and \[ \limsup_{a\to \infty} \left(\isoprofile(a, t) - \phi(a,t)\right) \geq 0. \] If the initial inequality $\phi(a, 0) < \isoprofile_{\unicover{\metric}(0)} (a)$ holds for all $a \in (0, \abs{\unicover{M}})$, then $\phi(a, t) \leq \isoprofile_{\unicover{\metric}(t)} (a)$ for all $a, t$, with strict inequality if the inequality in \eqref{eq:ricci_iso_diff_inequal} is strict. \end{theorem} As an application, by a standard bootstrapping argument, a proof of the Hamilton-Chow theorem (Theorem \ref{thm:convergence}) is obtained directly as a corollary of Theorem \ref{thm:comparison}. This is achieved by suitable choices of comparison functions, leading to explicit curvature bounds and bounds on the isoperimetric constant as described in sections \ref{sec:models} and \ref{sec:convergence}. \begin{theorem} [\cite{MR954419, MR1094458}] \label{thm:convergence} Given any initial metric $\metric_0$, there exists a unique solution to the normalised Ricci flow, defined for all time $t \in [0,\infty)$, such that $\metric(t) \to_{C^{\infty}} \metric_{\genus}$, the metric of constant curvature $\gausscurv = 1-\genus$, as $t\to\infty$. \end{theorem} The use of the isoperimetric profile here is an extension from the 2-sphere to arbitrary surfaces of the results in \cite{MR2729306}, which in turn are based on the isoperimetric estimates in \cite{MR1369139}. We begin in section \ref{sec:isoprofile} with a treatment of the isoperimetric profile of surfaces, deriving a viscosity equation via variational techniques, which forms the heart of the comparison theorem.
The comparison theorem is proven in section \ref{sec:comparison} by coupling the time-variation of the isoperimetric profile under the normalised Ricci flow with the spatial viscosity equation, yielding the parabolic version of the viscosity equation. Section \ref{sec:models} is devoted to constructing suitable model comparisons. The construction on the $2$-sphere was given in \cite{MR2729306} and is briefly described. Curiously, the most difficult case to deal with is for surfaces of genus $\genus > 1$, which historically was perhaps the easiest case, handled by applying the maximum principle and introducing a potential function \cite{MR954419}. For initially negatively curved surfaces, however, the model given here is quite appealing. Finally, in section \ref{sec:convergence}, the bootstrapping convergence argument is briefly described. \section*{Acknowledgements} I would like to thank Professor Ben Andrews for his generous guidance and support supervising my Ph.D. thesis on which this paper is based. I would also like to thank Professor Gang Tian and BICMR for sponsoring a very enjoyable stay in Beijing where my thoughts on variational techniques were greatly clarified on the banks of WeiMing Lake. Finally, this paper was written during my time as SEW Assistant Professor at UCSD. \section{The Isoperimetric Profile} \label{sec-2} \label{0b158ef8-2a10-4c0e-9d45-0de44bbddf06} \label{sec:isoprofile} \subsection{Definition and Basic Properties} \label{sec-2-1} \begin{defn} The \emph{isoperimetric profile}, $\isoprofile_M : (0, \abs{M}) \to \fld[R]_+$ of $M$ is defined by \[ \isoprofile_M(a) = \inf \left\{\abs{\bdry{\Omega}} : \abs{\Omega} = a \right\} \] where the infimum is taken over all relatively compact open sets $\Omega$ with smooth boundary. Such $\Omega$ are said to be \emph{admissible regions}. If $\Omega$ is an admissible region such that $\isoprofile_M(\abs{\Omega}) = \abs{\bdry{\Omega}}$, we will call $\Omega$ an \emph{isoperimetric region}.
\end{defn} A basic theorem we will assume here is that for every $a\in(0,\abs{M})$, there exists a corresponding isoperimetric region (with smooth boundary apart from a set of Hausdorff dimension at most $n-7$ on an $n$ dimensional manifold) provided $M$ is either compact or co-compact. In particular, smooth isoperimetric regions exist on a closed surface and its universal cover equipped with the pull-back metric. The proof of this fact is a standard result of geometric measure theory \cite[pp. 128-129]{MR2455580}. A simplified proof in the case of surfaces is given in \cite{MR1661278} using regularity techniques developed in \cite{MR1417428}. It will be important for us to understand the behaviour of the isoperimetric profile near the end points $\{0, \abs{M}\}$. In the situation where $M$ is compact, then the complement of an isoperimetric region is again an isoperimetric region, so the isoperimetric profile is symmetric about $\abs{M}/2$ and it suffices to consider only the behaviour near $0$. In the non-compact case, the behaviour near $0$ is the same as for the compact case, so let us begin with the behaviour near $0$. \begin{theorem} \label{thm:bdrybehaviour} Let $M$ be a smooth Riemannian surface without boundary and such that $\sup_M \gausscurv < \infty$. Then the isoperimetric profile satisfies \[ \isoprofile(a)= \sqrt{4\pi a} - \frac{\sup_M \gausscurv}{4\sqrt{\pi}} a^{3/2} + O(a^{5/2}) \quad \text{as} \quad a\to 0. \] \end{theorem} \begin{proof} Small geodesic balls about any point $p$ are admissible regions. The result of \cite[Theorem 3.1]{MR0339002} gives $\abs{B_r(p)} = \pi r^2 \left(1 - \frac{\gausscurv(p)}{12} r^2 + O(r^4)\right)$ and $\abs{\bdry{B_r(p)}} = 2\pi r \left(1 - \frac{\gausscurv(p)}{6} r^2 +O(r^4)\right)$. The upper bound follows since $\abs{\bdry{B_r(p)}} \geq \isoprofile(\abs{B_r(p)})$. To prove the lower bound, first choose $a_0$ sufficiently small to ensure that $\isoprofile(a_0)$ is much smaller than the injectivity radius of $M$. 
Then an isoperimetric region $\Omega_0$ corresponding to $a_0$ lies inside a geodesic ball about some point $p$ (width is bounded above by perimeter for surfaces). Since geodesic balls are simply connected and $\gausscurv \leq \gausscurv_0 = \sup_M\gausscurv$, the Bol-Fiala inequality (see \cite{MR0500557}) then gives \begin{align*} \isoprofile(a_0) \geq \sqrt{4\pi a_0 - \gausscurv_0 a_0^2} = \sqrt{4\pi a_0} - \frac{\gausscurv_0}{4\sqrt{\pi}} a_0^{3/2} + O(a_0^{5/2}). \end{align*} \end{proof} Next, we have the asymptotics of the isoperimetric profile near $\infty$ for non-compact $\unicover{M}$. \begin{theorem} \label{thm:asymptotics_large} Let $M$ be a closed, genus $\genus\geq 1$ surface with metric $\metric$, normalised to have $\abs{M} = 4\pi$ and let $\uniproj: \unicover{M} \to M$ be the universal cover of $M$ equipped with the pull-back metric $\unicover{\metric} = \uniproj^{\star}\metric$. Then \[ \isoprofile_{\unicover{\metric}}(a) \to C \sqrt{4\pi a - (1-\genus) a^2} \] as $a\to\infty$ for some $C>0$. \end{theorem} \begin{proof} By the uniformisation theorem, $\unicover{\metric}$ is conformal to a metric of constant curvature so that \[ \unicover{\metric} = \phi\metric_{1-\genus} \] with $\metric_{1-\genus}$ the metric of constant curvature $1-\genus$ and $\phi$ a positive function $\phi: \unicover{M} \to \fld[R]$ invariant under the deck transformation group of $\unicover{M}$. Thus $\phi$ is uniformly bounded above and below. The isoperimetric inequality for simply connected Riemannian surfaces of constant curvature $1-\genus$ implies that the isoperimetric profile $I_{1-\genus}$ of the constant curvature metric $\metric_{1-\genus}$ is given by \[ \isoprofile_{1-\genus}(a) = \sqrt{4\pi a - (1-\genus)a^2}.
\] Since $\unicover{\metric}$ is conformal to $\metric_{1-\genus}$ with conformal factor $\phi$ uniformly bounded, we have \begin{align*} \frac{1}{C_1} \abs{\bdry{\Omega}}_{\metric_{1-\genus}} &\leq \abs{\bdry{\Omega}}_{\unicover{\metric}} \leq C_1 \abs{\bdry{\Omega}}_{\metric_{1-\genus}} \\ \frac{1}{C_2} \abs{\Omega}_{\metric_{1-\genus}} &\leq \abs{\Omega}_{\unicover{\metric}} \leq C_2 \abs{\Omega}_{\metric_{1-\genus}} \end{align*} for constants $C_1,C_2>0$ which gives the result. \end{proof} \begin{remark} It would be preferable if we didn't have to refer to the uniformisation theorem, as then the results of this paper would provide a proof of the uniformisation theorem. In the case $\genus=0$, we have such a proof since $M$ is compact. In the case $\genus=1$, the result of \cite{MR1354290} implies that \[ \isoprofile(a) \to C\sqrt{a} \] as $a\to \infty$ for $0 < C \leq 4\pi$ with $C=4\pi$ if and only if $M$ is flat. This is precisely the required asymptotics in the theorem for $\genus=1$ surfaces obtained without requiring the use of the uniformisation theorem. The only problem here then is for $\genus>1$ surfaces. The volume growth of $\unicover{M}$ is controlled by the number of generators for the fundamental group, but controlling the perimeter is rather more difficult. I am not aware of an applicable result for $\genus>1$ surfaces, though such a result would be interesting. \end{remark} \subsection{Variational Formulae and Consequences} \label{sec-2-2} Our techniques are based on applying the standard variational formula for isoperimetric regions, with a slight change in the second variation, obtained by applying the Gauss-Bonnet theorem. Let us briefly recall the applicable variational formulae and describe the approach used here in obtaining the second variation. Let $\Omega_0$ be an isoperimetric region and $\Omega_{\epsilon}$ a smooth normal variation with variational vector field $\eta \nor$ for a smooth function $\eta : \bdry{\Omega_0} \to \fld[R]$.
The first variation formulae are \begin{align} \label{eq:firstvar_bdry} \pd{\epsilon} \abs{\bdry{\Omega_{\epsilon}}} &= \int_{\bdry{\Omega_{\epsilon}}} \eta \curvecurv \\ \intertext{and} \label{eq:firstvar_vol} \pd{\epsilon} \abs{\Omega_{\epsilon}} &= \int_{\bdry{\Omega_{\epsilon}}} \eta . \end{align} where $\curvecurv$ is the geodesic curvature of $\bdry{\Omega_{\epsilon}}$. In particular, the vanishing of the first variation for all functions $\eta$ such that $\int_{\bdry{\Omega_{\epsilon}}} \eta = 0$ (area preserving variations) implies that $\curvecurv$ is constant. For the second variation, we need only consider unit-speed variations ($\eta \equiv 1$) and so immediately conclude the second variation for area, \begin{equation} \label{eq:secondvar_vol} \pdd{\epsilon} \abs{\Omega_{\epsilon}} = \int_{\bdry{\Omega_{\epsilon}}} \curvecurv. \end{equation} For the second variation of boundary length, it suits our purposes to first apply the Gauss-Bonnet formula and then differentiate equation \eqref{eq:firstvar_bdry}. Thus \[ \begin{split} \pdd{\epsilon} \abs{\bdry{\Omega_{\epsilon}}} &= \pd{\epsilon} \int_{\bdry{\Omega_{\epsilon}}} \curvecurv \\ &= \pd{\epsilon} \left(2\pi\eulerchar{\Omega_{\epsilon}} - \int_{\Omega_{\epsilon}}\gausscurv_M\right) \end{split} \] where $\eulerchar{\Omega_{\epsilon}}$ is the Euler characteristic of $\Omega_{\epsilon}$, which is independent of $\epsilon$ since the variation maps are diffeomorphisms, and $\gausscurv_M$ is the Gauss curvature of $M$. The latter has no explicit dependence on $\epsilon$ and so the Reynolds transport theorem (or differentiating under the integral sign) yields \begin{equation} \label{eq:secondvar_bdry} \pdd{\epsilon} \abs{\bdry{\Omega_{\epsilon}}} = - \int_{\bdry{\Omega_{\epsilon}}} \gausscurv_M. \end{equation} Our approach is based on weak differential inequalities for the isoperimetric profile arising from the variational formulae.
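To illustrate the variational formulae, the following standard computation (for the round sphere of constant curvature $1$, so $\abs{M} = 4\pi$, where the isoperimetric regions are geodesic balls) shows that the derivative of the profile is the geodesic curvature of the isoperimetric boundary. For a geodesic ball of radius $r$,
\[
\abs{B_r} = 2\pi(1-\cos r), \qquad \abs{\bdry{B_r}} = 2\pi\sin r, \qquad \curvecurv = \cot r,
\]
so that $\isoprofile(a)^2 = 4\pi^2\sin^2 r = a(4\pi - a)$, and hence
\[
\isoprofile'(a) = \frac{4\pi - 2a}{2\isoprofile(a)} = \frac{2\pi\cos r}{2\pi\sin r} = \cot r = \curvecurv.
\]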
\begin{defn} A function $f: (a,b) \to \fld[R]$ has weak derivatives satisfying \[ \pd[f^-]{x} \leq C_1 \leq \pd[f^+]{x} \quad \text{and} \quad \pdd[f]{x} \leq C_2 \] in the \emph{support} (or sometimes \emph{Calabi}) sense at $x_0$ if $f$ supports a smooth function $\phi$ at $x_0$ ($f(x_0)=\phi(x_0)$ and $f(x) \leq \phi(x)$ for $x$ near $x_0$) such that \[ \pd[\phi]{x} (x_0) = C_1 \quad \text{and} \quad \pdd[\phi]{x} (x_0) = C_2. \] \end{defn} \begin{prop} [\cite{MR875084} (see also \cite{MR2229062} pp. 249-251)] \label{prop:support_iso_nobdry} For each $a_0 \in (0,\abs{M})$, let $\Omega_0$ be a corresponding isoperimetric region with constant geodesic curvature $\curvecurv(a_0)$ along the boundary. Then the isoperimetric profile satisfies \[ \pd[\isoprofile^-]{a} \leq \curvecurv(a_0) \leq \pd[\isoprofile^+]{a} \quad \text{and} \quad \pdd[\isoprofile]{a} \leq \frac{-1}{\isoprofile^2}\left(\curvecurv(a_0)^2\isoprofile + \int_{\bdry{\Omega_0}} \gausscurv_M\right). \] in the support sense. Moreover, if $\gausscurv_M \geq \gausscurv_0$, the function \[ a \mapsto \isoprofile(a)^2 + \gausscurv_0 a^2 \] is concave, hence $\isoprofile^2$ is locally Lipschitz and in particular $\isoprofile$ is continuous. \end{prop} \begin{remark} Since we are assuming $M$ is compact or co-compact, $\gausscurv_M$ is bounded hence $\isoprofile$ is continuous. Note also that since $\sqrt{\cdot}$ is smooth away from $0$, by Rademacher's theorem, $\isoprofile$ is differentiable almost everywhere. \end{remark} \begin{cor} \label{cor:iso_concave_nobdry} With the notation of the proposition, if $\gausscurv_0\geq0$ then $\isoprofile$ is concave and so too is $\isoprofile^2$. If the inequality is strict, then $\isoprofile$ and $\isoprofile^2$ are strictly concave. \end{cor} The last results of this section concern the topology of isoperimetric regions.
We generally don't have a priori control over the topology of isoperimetric regions and so we don't know the precise form of the differential inequality for $\isoprofile$ because of the integral over the unknown regions $\Omega_0$. However, there is a useful sufficient condition for obtaining control of the topology of isoperimetric regions. The idea comes from \cite{MR1674097}. \begin{lemma} \label{thm:curvetopology} Let $a_0\in (0,\abs{M})$ and $\Omega_0$ a corresponding isoperimetric region. If there exists a strictly positive, strictly concave function $\phi: (0,\abs{M}) \to \fld[R]$ supporting $\isoprofile$ at $a_0$ ($\phi(a_0) = \isoprofile(a_0)$ and $\phi(a) \leq \isoprofile(a)$ for all $a\in(0,\abs{M})$), then $\Omega_0$ is connected. If $M$ is compact then $\Omega_0$ has connected complement. \end{lemma} \begin{remark} It is worth pointing out that while the conclusion of the lemma is local, pertaining to a particular value of $a_0$ and corresponding isoperimetric region, the hypotheses are global in nature in that we need a \emph{globally defined} supporting function $\phi$ (not just in a neighbourhood of $a_0$). \end{remark} \begin{proof} First note that since $\phi > 0$ on $(0,\abs{M})$, $\phi \leq \isoprofile$, and $\isoprofile(0) = 0$, we have $\phi(0) = 0$. Thus since $\phi$ is strictly concave, $\phi$ is strictly subadditive: for $x, y > 0$ with $x + y < \abs{M}$, writing $\lambda = x/(x+y)$, strict concavity and $\phi(0) = 0$ give $\phi(x) = \phi(\lambda(x+y)) > \lambda\phi(x+y)$ and similarly $\phi(y) > (1-\lambda)\phi(x+y)$, so that $\phi(x) + \phi(y) > \phi(x+y)$. Now suppose $\Omega_0$ is not connected. Then we can write $\Omega_0 = \Omega_1\cup\Omega_2$ with $\Omega_1, \Omega_2$ non-empty and $\Omega_1 \cap \Omega_2 = \emptyset$. Since $\bdry{\Omega_0}$ is smooth we must have $\bdry{\Omega_0} = \bdry{\Omega_1} \cup \bdry{\Omega_2}$ and $\bdry{\Omega_1} \cap \bdry{\Omega_2} = \emptyset$. Thus we have $\abs{\Omega_0} = \abs{\Omega_1} + \abs{\Omega_2}$ and $\abs{\bdry{\Omega_0}} = \abs{\bdry{\Omega_1}} + \abs{\bdry{\Omega_2}}$, and all of these are non-zero.
But then we get \begin{align*} \phi\left(\abs{\Omega_1}\right) + \phi\left(\abs{\Omega_2}\right) &\leq \abs{\bdry{\Omega_1}} + \abs{\bdry{\Omega_2}} \\ &= \abs{\bdry{\Omega_0}} \\ &= \phi\left(\abs{\Omega_0}\right) \\ &= \phi\left(\abs{\Omega_1} + \abs{\Omega_2}\right) \\ &< \phi\left(\abs{\Omega_1}\right) + \phi\left(\abs{\Omega_2}\right). \end{align*} This is a contradiction, so $\Omega_0$ is connected. If $M$ is compact, then $M\setminus\Omega_0$ is also an isoperimetric region with $\abs{M\setminus\Omega_0} = \abs{M}-\abs{\Omega_0} = \abs{M} - a_0$ and $\isoprofile(\abs{M} - a_0) = \abs{\bdry{M\setminus\Omega_0}} = \abs{\bdry{\Omega_0}} = \isoprofile(a_0)$. Reflecting $\phi$ about $a=\abs{M}/2$ gives a function satisfying the hypothesis of the lemma at $\abs{M}-a_0$ hence $M\setminus\Omega_0$ is also connected. \end{proof} \begin{cor} \label{cor:sphere_iso_conn} With the hypothesis of lemma \ref{thm:curvetopology}, if $M$ is diffeomorphic to $\mathbb{S}^2$ then $\Omega_0$ is simply connected. \end{cor} \begin{proof} Follows from the Jordan curve theorem for $\mathbb{S}^2$. \end{proof} \begin{cor} \label{cor:plane_iso_conn} With the hypothesis of lemma \ref{thm:curvetopology}, if $M$ is diffeomorphic to $\fld[R]^2$ then $\Omega_0$ is simply connected. \end{cor} \begin{proof} Since $\fld[R]^2$ is not compact, we cannot immediately conclude that $\Omega_0$ has connected complement as before. To achieve this result, first note that if $M$ is $\fld[R]^2$, then $\phi$ is a (strictly) positive concave function on $(0,\infty)$ and hence is strictly increasing. Since $\Omega_0$ is connected, topologically it is a disc with finitely many discs removed. Let $\Omega_1$ denote the interior of the external boundary of $\Omega_0$, i.e. $\Omega_1$ is equal to $\Omega_0$ with the ``holes'' filled in. Then $\Omega_1$ has strictly larger area than $\Omega_0$ and strictly smaller boundary length.
But then \[ \phi(\abs{\Omega_0}) = \isoprofile(\abs{\Omega_0}) = \abs{\bdry{\Omega_0}} > \abs{\bdry{\Omega_1}} \geq \isoprofile(\abs{\Omega_1}) \geq \phi(\abs{\Omega_1}) \] contradicting that $\phi$ is strictly increasing. Therefore $\fld[R]^2 \setdiff \Omega_0$ is connected and now the Jordan curve theorem implies $\Omega_0$ is simply connected. \end{proof} Let us finish by noting that in positive curvature, we have complete knowledge of the topology of isoperimetric regions. \begin{cor} \label{cor:isoregions_positive_curvature} If $M$ is diffeomorphic to either $\mathbb{S}^2$ or $\fld[R]^2$ (for instance if $M$ is the universal cover of a closed surface) equipped with any metric (not necessarily the pull-back from a compact surface) and $\gausscurv_0 > 0$, then all isoperimetric regions are simply connected. \end{cor} \begin{proof} By Corollary \ref{cor:iso_concave_nobdry}, $\isoprofile$ is strictly concave so the hypotheses of Corollaries \ref{cor:sphere_iso_conn} and \ref{cor:plane_iso_conn} are satisfied at any $a_0\in (0,\abs{M})$ by choosing $\phi=\isoprofile$ itself. \end{proof} \begin{remark} I don't know if this result can be extended to $\gausscurv_0 = 0$ since in this case $\isoprofile$ is not necessarily strictly concave. \end{remark} \subsection{A viscosity equation for the isoperimetric profile} \label{sec-2-3} The results in this section formalise some of the ideas used in \cite{MR2729306}. We obtain a differential inequality in the viscosity sense for the isoperimetric profile of a surface. This is somewhat dual to the results in the previous section and those in \cite{MR2041647} in that we assert conditions which \emph{lower} supporting functions must satisfy as opposed to the aforementioned results which assert the existence of an \emph{upper} supporting function with bounds on the derivatives. The methods, however, are essentially the same and the support inequality implies the viscosity inequality.
As the isoperimetric profile is defined as an extremum, viscosity equations turn out to be well suited to this situation. Indeed, viscosity equations were introduced in \cite{MR690039} to study Hamilton-Jacobi equations, also arising from optimisation problems. A central feature of viscosity equations, which forms the basis of the comparison theorem \ref{thm:comparison}, is that they enjoy a maximum principle. See \cite{MR1351007} for more on viscosity equations. \begin{defn} A lower semi-continuous function $f: (a,b) \to \fld[R]$ is a viscosity super-solution of the 2nd order differential equation \[ A(x, f, f', f'') = 0 \] if for every $x_0\in (a,b)$ and every $C^2$ function $\phi$ such that $\phi(x_0) = f(x_0)$ and $\phi(x) \leq f(x)$ in a neighbourhood of $x_0$, we have $A(x_0, \phi(x_0), \phi'(x_0), \phi''(x_0)) \geq 0$. An upper semi-continuous function is a viscosity sub-solution if the same statements hold with all the inequalities reversed. \end{defn} For $f$ a viscosity super(sub)-solution of $A(x, f, f', f'') = 0$, we will abuse notation slightly and write $A(x, f, f', f'') \geq 0$ (resp.\ $\leq 0$) (in the viscosity sense). \begin{remark} In the definition, the existence of a lower (upper) supporting function at a point is not required; rather, we assert that if such a supporting function exists, it must satisfy the appropriate differential inequality. For instance, the absolute value function $x\mapsto \abs{x}$ is a viscosity sub-solution of $f'=1$ even though no $C^1$ upper support function exists at $x=0$.
\end{remark} \begin{theorem} \label{thm:viscosity_nobdry} The isoperimetric profile is a viscosity super-solution of \begin{equation} \label{eq:spatial_viscosity} -\left(\isoprofile'' \isoprofile^2 + (\isoprofile')^2 \isoprofile + \int_{\bdry{\Omega_0}} \gausscurv_M\right) = 0 \end{equation} where $\Omega_0$ is any isoperimetric region corresponding to $a_0$ ($\abs{\Omega_0} = a_0$ and $\isoprofile(a_0) = \abs{\bdry{\Omega_0}}$) and $\gausscurv_M$ is the Gauss curvature of $M$. In particular, if $\gausscurv_M \geq \gausscurv_0$ on $M$, then \[ -\left(\isoprofile'' \isoprofile^2 + (\isoprofile')^2 \isoprofile + \gausscurv_0 \isoprofile\right) \geq 0 \] in the viscosity sense. \end{theorem} \begin{remark} The integral term in the first equation is difficult to deal with; even though the Gauss curvature $\gausscurv$ is a given function on the ambient space $M$, we don't have any a priori knowledge of $\Omega_0$. Nevertheless, the first form will be the most useful to us when considering the Ricci flow, since the integral term will also appear in the time variation of isoperimetric regions under the Ricci flow allowing us to connect the spatial variational formulae with the time variational formulae. \end{remark} \begin{proof} The isoperimetric profile is continuous by proposition \ref{prop:support_iso_nobdry} and the remark following it. Let $\phi$ be a smooth function defined on a neighbourhood of $a_0 \in (0, \abs{M})$ such that $\phi\leq\isoprofile$ and $\phi(a_0) = \isoprofile(a_0)$. Let $\Omega_0$ be an isoperimetric region corresponding to $a_0$. Choose a unit speed normal variation of $\bdry{\Omega_0}$ and define \[ f(\epsilon) = \abs{\bdry{\Omega_{\epsilon}}} - \phi(\abs{\Omega_{\epsilon}}). \] Then we have \[ f(\epsilon) \geq \isoprofile(\abs{\Omega_{\epsilon}}) - \phi(\abs{\Omega_{\epsilon}}) \geq 0 \] and \[ f(0) = \abs{\bdry{\Omega_0}} - \phi(\abs{\Omega_0}) = \isoprofile(\abs{\Omega_0}) - \phi(\abs{\Omega_0}) = 0.
\] Thus $0$ is a minimum of $f$, so that $\inpd[f]{\epsilon}(0) = 0$ and $\inpdd[f]{\epsilon}(0) \geq 0$. Now we use the first variation formula to compute \[ \pd[f]{\epsilon} = \int_{\bdry{\Omega_{\epsilon}}} \curvecurv - \phi' \abs{\bdry{\Omega_{\epsilon}}} \] which at $\epsilon = 0$ gives \[ 0 = \int_{\bdry{\Omega_0}} \curvecurv - \phi'(a_0) \abs{\bdry{\Omega_0}} = (\curvecurv - \phi'(a_0)) \abs{\bdry{\Omega_0}} \] since $\curvecurv$ is constant along $\bdry{\Omega_0}$. Thus $\curvecurv = \phi'(a_0)$ along $\bdry{\Omega_0}$. The second variation gives \[ \begin{split} 0 \leq \pdd[f]{\epsilon} &= \pdd{\epsilon} \abs{\bdry{\Omega_{\epsilon}}} - \phi'' (\pd{\epsilon} \abs{\Omega_{\epsilon}})^2 - \phi' \pdd{\epsilon} \abs{\Omega_{\epsilon}} \\ &= - \int_{\bdry{\Omega_{\epsilon}}} \gausscurv - \phi'' (\abs{\bdry{\Omega_{\epsilon}}})^2 - \phi' \int_{\bdry{\Omega_{\epsilon}}} \curvecurv \\ &= - \int_{\bdry{\Omega_0}} \gausscurv - \phi''(a_0) \phi^2(a_0) - (\phi')^2(a_0) \phi(a_0). \end{split} \] Here we have used that $\phi(a_0) = \abs{\bdry{\Omega_0}}$ and that $\curvecurv = \phi'(a_0)$ along $\bdry{\Omega_0}$, as obtained from the first variation. \end{proof} \section{A comparison theorem} \label{sec-3} \label{9b6175ce-34f4-40c4-9b36-e7588e0a53d7} \label{sec:comparison} \subsection{Comparison equation under the Ricci flow} \label{sec-3-1} Let us now couple the spatial viscosity equation with the Ricci flow. For this we need to know the time-variation of isoperimetric regions under the Ricci flow. It is quite remarkable that this is possible at all; it relies heavily on the fact that $M$ is $2$ dimensional. It would be interesting to see if similar results hold in higher dimensions, though this seems unlikely unless some topological and/or curvature restrictions are imposed. We first need the parabolic version of viscosity equations.
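Before passing to the parabolic setting, it may be helpful to note (a standard computation, not needed in the sequel) that equality holds in \eqref{eq:spatial_viscosity} on the round sphere of curvature $1$ and area $4\pi$, where geodesic discs are isoperimetric and $\isoprofile(a)^2 = 4\pi a - a^2$:

```latex
% Equality in the spatial equation on the round sphere (K = 1, |M| = 4\pi):
\[
\isoprofile'' \isoprofile + (\isoprofile')^2
= \tfrac{1}{2}\left(\isoprofile^2\right)'' = \tfrac{1}{2}\left(4\pi a - a^2\right)'' = -1,
\]
% so, multiplying by \isoprofile and using
% \int_{\bdry{\Omega_0}} \gausscurv_M = \abs{\bdry{\Omega_0}} = \isoprofile(a),
\[
\isoprofile'' \isoprofile^2 + (\isoprofile')^2 \isoprofile
+ \int_{\bdry{\Omega_0}} \gausscurv_M = -\isoprofile + \isoprofile = 0.
\]
```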
\begin{defn} A lower semi-continuous function $f: (a,b) \times [0, T) \to \fld[R]$ is a viscosity super-solution of the 2nd order parabolic equation \[ \pd[f]{t} + A(x, t, f, f', f'') = 0 \] if for every $(x_0, t_0) \in (a,b) \times [0, T)$ and every $C^2$ function $\phi$ such that $\phi(x_0, t_0) = f(x_0, t_0)$ and $\phi(x, t) \leq f(x, t)$ for $x$ in a neighbourhood of $x_0$ and $t\leq t_0$ near $t_0$, we have $\pd[\phi]{t} (x_0, t_0) + A(x_0, t_0, \phi, \phi' , \phi'') \geq 0$. An upper semi-continuous function is a viscosity sub-solution if the same statements hold with the inequalities reversed. \end{defn} \begin{theorem} \label{thm:ricci_iso_viscosity} Let $M$ be a closed surface of genus $\genus$, $\metric(t)$ a solution of the normalised Ricci flow on $M$ and $\unicover{\metric}(t) = \uniproj^{\ast}\metric (t)$ the corresponding solution on the universal cover $\uniproj: \unicover{M} \to M$. For any $a_0$, let $\chi_0$ be the Euler characteristic of an isoperimetric region $\Omega_0$ corresponding to $a_0$. Then the isoperimetric profile $\isoprofile_{\unicover{\metric}(t)}$ satisfies \begin{equation} \label{eq:temporal_viscosity} \pd{t}\isoprofile - \left[\isoprofile'' \isoprofile^2 - (\isoprofile')^2 \isoprofile + (4\pi\chi_0 - 2(1-\genus)a)\isoprofile' + (1-\genus) \isoprofile\right] \geq 0 \end{equation} in the viscosity sense. \end{theorem} \begin{proof} For convenience, let us write $\abs{\cdot}_t = \abs{\cdot}_{\unicover{\metric}(t)}$ and $\isoprofile_t = \isoprofile_{\unicover{\metric}(t)}$. Let $\phi$ be a $C^2$ function such that $\phi(a_0, t_0) = \isoprofile_{t_0}(a_0)$ and $\phi \leq \isoprofile$ for $a$ near $a_0$ and $t\leq t_0$ near $t_0$. We need to show that $\phi$ satisfies the differential inequality \eqref{eq:temporal_viscosity}. We compute the time variation of isoperimetric regions. Given $a_0$, let $\Omega_0\subset \unicover{M}$ be an isoperimetric region in $\unicover{M}$ with respect to the metric $\unicover{\metric}(t_0)$.
That is, $\abs{\Omega_0}_{t_0} = a_0$ and $\abs{\bdry{\Omega_0}}_{t_0} = \phi(a_0, t_0)$. Since $\isoprofile_t(a) \geq \phi(a,t)$ for $t\leq t_0$ and $a$ near $a_0$, we have \[ \abs{\bdry{\Omega_0}}_t \geq \phi\left(\abs{\Omega_0}_t, t\right) \] for $t\leq t_0$, and equality holds when $t=t_0$. Since both sides of this inequality are differentiable in $t$, it follows that under the normalised Ricci flow, \begin{equation}\label{eq:timeineq1} \frac{\partial}{\partial t}\Big|_{t=t_0} \abs{\bdry{\Omega_0}}_t \leq \pd[\phi]{t} (a_0,t_0) + \phi'(a_0,t_0) \pd{t}\Big|_{t=t_0} \abs{\Omega_0}_{t}. \end{equation} The time derivative on the left can be computed as follows: Parametrise $\bdry{\Omega_0}$ by $\gamma: \mathbb{S}^1 \to \unicover{M}$ and write $\gamma_u = \gamma_{\ast} \pd{u}$. Then recalling that the metric evolves by the normalised Ricci flow, $\pd{t}\unicover{\metric} = -2(\gausscurv - (1-\genus))\unicover{\metric}$, we obtain \begin{align*} \pd{t}\Big|_{t=t_0} \abs{\bdry{\Omega_0}} &= \pd{t} \int_{\bdry{\Omega_0}} \sqrt{\unicover{\metric}_t(\gamma_u,\gamma_u)}\,du = -\int_{\bdry{\Omega_0}}(\gausscurv_M - (1 - \genus))ds \\ & = -\int_{\bdry{\Omega_0}} \gausscurv_M\,ds + (1-\genus)\phi (a_0, t_0), \end{align*} where $ds$ is the arc-length element along $\bdry{\Omega_0}$. For the right hand side, by differentiating the determinant and using the normalised Ricci flow equation again, we have $\pd{t} \measure_{\unicover{\metric}} = -2(\gausscurv_M - (1-\genus))\measure_{\unicover{\metric}}$ where $\measure_{\unicover{\metric}}$ is the measure on $\unicover{M}$ induced by the metric $\unicover{\metric}$. Thus, \[ \pd{t} \Big|_{t=t_0} \abs{\Omega_0}_t = -2\int_{\Omega_0}(\gausscurv_M - (1-\genus))\, d\measure_{\unicover{\metric}(t_0)}.
\] Writing $\chi_0 = \chi(\Omega_0)$ for the Euler characteristic of $\Omega_0$ and applying the Gauss-Bonnet theorem yields \[ \pd{t}\Big|_{t=t_0} \abs{\Omega_0}_t = 2(1-\genus)\abs{\Omega_0} - 2\left(2\pi\chi_0 - \int_{\bdry{\Omega_0}} \curvecurv\,ds\right) = 2(1-\genus)a_0 -4\pi\chi_0 +2 \int_{\bdry{\Omega_0}}\curvecurv\,ds, \] where $\curvecurv$ is the geodesic curvature of the curve $\bdry{\Omega_0}$. Thus the inequality \eqref{eq:timeineq1} becomes \begin{equation}\label{eq:timeineq} -\int_{\bdry{\Omega_0}}\gausscurv_M\,ds + (1-\genus)\phi \leq \pd{t}\phi + \phi'\left(2(1-\genus)a_0 -4\pi\chi_0 + 2 \int_{\bdry{\Omega_0}}\curvecurv\,ds\right). \end{equation} Now recall that theorem \ref{thm:viscosity_nobdry} states that for each time $t$, the isoperimetric profile $\isoprofile_t$ satisfies \[ -\left(\isoprofile'' \isoprofile^2 + (\isoprofile')^2 \isoprofile + \int_{\bdry{\Omega_0}} \gausscurv_M\right) \geq 0 \] in the viscosity sense. Since at $a_0$, $\phi(\cdot, t_0)$ is a supporting function for $\isoprofile_{t_0}$, we also have \begin{equation} \label{eq:spaceineq} \phi'' \phi^2 + (\phi')^2 \phi \leq -\int_{\bdry{\Omega_0}} \gausscurv_M. \end{equation} Also, the vanishing of the first spatial variation gives that $\curvecurv = \phi'(a_0)$ is constant along $\bdry{\Omega_0}$ and so \begin{equation} \label{eq:curvfirstvar} \int_{\bdry{\Omega_0}} \curvecurv \, ds = \phi(a_0) \phi' (a_0). \end{equation} Putting together the inequalities \eqref{eq:timeineq} and \eqref{eq:spaceineq} and using \eqref{eq:curvfirstvar} we obtain \begin{equation} \begin{split} \pd[\phi]{t} &\geq -\int_{\bdry{\Omega_0}}\gausscurv_M\,ds + (1-\genus)\phi - \phi'\left(2(1-\genus)a_0 - 4\pi\chi_0 + 2\phi\phi'\right) \\ &\geq \phi'' \phi^2 + (\phi')^2 \phi + (1-\genus)\phi + \phi'\left(4\pi\chi_0 - 2(1-\genus)a_0\right) - 2\phi(\phi')^2 \\ &= \phi'' \phi^2 - (\phi')^2 \phi + (1-\genus)\phi + \phi'\left(4\pi\chi_0 - 2(1-\genus)a_0\right) \end{split} \end{equation} which is the required inequality.
\end{proof} \begin{remark} The viscosity equation includes the $\chi_0$ term which, without any topological knowledge of isoperimetric regions, is essentially unknown and could, a priori, take any value. By Corollary \ref{cor:isoregions_positive_curvature}, in the particular case that $\gausscurv_M > 0$, we may conclude that $\chi_0 = 1$ for all $a_0$. In general however, we need not expect any particular bound on the Euler characteristic from a curvature bound alone. \end{remark} Even though the topological uncertainty is a real problem, for our purposes we may avoid it entirely by appealing to the underlying concavity of the isoperimetric profile. This is exploited in the next theorem, the comparison theorem, which is the central result of this paper. \begin{theorem} \label{thm:comparison} Let $(M, \metric(t))$, $(\unicover{M}, \unicover{\metric}(t))$ be as in the previous theorem. Let $\phi: (0, \abs{\unicover{M}}) \times [0, T) \to \fld[R]$ be a smooth, strictly positive, strictly concave function satisfying \begin{equation} \label{eq:ricci_iso_diff_inequal} \pd[\phi]{t} \leq \phi'' \phi^2 - (\phi')^2 \phi + \phi'\left(4\pi - 2(1-\genus)a\right) + (1-\genus)\phi. \end{equation} along with the asymptotic behaviour \[ \limsup_{a\to 0} \frac{\phi(a, t)}{\sqrt{4\pi a}} \leq 1 \] and \[ \limsup_{a\to \infty} \left(\isoprofile(a, t) - \phi(a,t)\right) \geq 0. \] Then, if the initial inequality $\phi(a, 0) < \isoprofile_{\unicover{\metric}(0)} (a)$ holds for all $a \in (0, \abs{\unicover{M}})$, the inequality $\phi(a, t) \leq \isoprofile_{\unicover{\metric}(t)} (a)$ holds for all $a, t$, with strict inequality if the inequality in \eqref{eq:ricci_iso_diff_inequal} is strict. \end{theorem} \begin{remark} The large scale asymptotic requirements are rather imprecise because we don't have a priori control over the constant $C$ in Theorem \ref{thm:asymptotics_large}. However, this will not prove problematic for us by Proposition \ref{prop:comparison} below.
\end{remark} \begin{proof} First suppose that we have strict inequality in the differential inequality and in the asymptotic inequalities. We argue by contradiction. The conditions $\phi(a, 0) < \isoprofile_{\unicover{\metric}(0)} (a)$ and $\phi(a, t) < \isoprofile_{\unicover{\metric}(t)} (a)$ for $a$ sufficiently close to $\{0, \abs{\unicover{M}}\}$ imply that if the theorem is false, there is a first time $t_0>0$ and an $a_0 \in (0, \abs{\unicover{M}})$ such that $\phi(a_0, t_0) = \isoprofile_{t_0} (a_0)$. Thus $\phi(a, t) \leq \isoprofile_t(a)$ for $t\leq t_0$ with equality at $(a_0, t_0)$. Since $\phi$ is strictly concave, the hypotheses of Lemma \ref{thm:curvetopology} are satisfied, so any isoperimetric region $\Omega_0$ corresponding to $a_0$ at time $t_0$ is simply connected and $\chi_0 = 1$. But now observe that $\phi$ is a lower supporting function for $\isoprofile$ at $(a_0, t_0)$, and so by theorem \ref{thm:ricci_iso_viscosity}, \[ \pd[\phi]{t} \geq \phi'' \phi^2 - (\phi')^2 \phi + \phi'\left(4\pi - 2(1-\genus)a_0\right) + (1-\genus)\phi, \] a contradiction; hence the theorem is true when the inequalities are strict. If any of the inequalities are not strict, define \[ \phi_{\epsilon} = (1-\epsilon) \phi \] for any $\epsilon$ with $0<\epsilon<1$. Then we have $\phi_{\epsilon} < \phi$, giving strict inequality in the asymptotics. We also have \[ \begin{split} &\pd[\phi_{\epsilon}]{t} - (\phi_{\epsilon}'' \phi_{\epsilon}^2 - (\phi_{\epsilon}')^2 \phi_{\epsilon}) - \phi_{\epsilon}'\left(4\pi - 2(1-\genus)a_0\right) - (1-\genus)\phi_{\epsilon} \\ &= (1-\epsilon)\left(\pd[\phi]{t} - (1-\epsilon)^2(\phi'' \phi^2 - (\phi')^2 \phi) - \phi'\left(4\pi - 2(1-\genus)a_0\right) - (1-\genus)\phi\right) \\ &\leq \epsilon(1-\epsilon)(2-\epsilon)(\phi^2\phi'' - (\phi')^2\phi) \\ &< 0 \end{split} \] since $\phi''< 0$. Thus $\phi_{\epsilon} (a,t) < \isoprofile (a,t)$ by the result for strict inequalities, and the result follows by letting $\epsilon \to 0$.
\end{proof} \begin{remark} It is not entirely clear whether strict concavity may be relaxed to mere concavity. A strictly concave approximation to $\phi$ may increase $\phi$, violating the inequality $\phi \leq \isoprofile$. \end{remark} Using the theorem and the asymptotics of $\isoprofile$ from Theorem \ref{thm:bdrybehaviour}, \[ \isoprofile(a) = \sqrt{4 \pi a}(1 - \frac{\sup_M\gausscurv}{8\pi} a + O(a^2)) \] as $a\to 0$, we may now obtain a curvature bound for $\unicover{\metric}(t)$ and hence for $\metric(t)$. \begin{cor} \label{cor:ricci_comp_curv_bnd} With the notation of the previous theorem, for $\phi$ satisfying the hypotheses of the theorem and such that \[ \phi (a, t) = \sqrt{4\pi a}(1 - \frac{\gausscurv_0(t)}{8\pi} a + O(a^2)), \] we have \[ \sup_M \gausscurv_M(t) \leq \gausscurv_0(t). \] \qed \end{cor} The isoperimetric constant of a non-compact surface is defined to be \[ \isoconst = \inf\left\{ \frac{\abs{\bdry{\Omega}}^2}{\abs{\Omega}} : \Omega \quad \text{admissible} \right\} = \inf \left\{\frac{\isoprofile(a)^2}{a} : 0 < a < \infty \right\}. \] For a compact surface, the (modified) isoperimetric constant is defined by \[ \isoconst = \inf\left\{ \frac{\abs{\bdry{\Omega}}^2}{\min(\abs{\Omega}, \abs{M \setdiff \Omega})} : \Omega \quad \text{admissible} \right\} = \inf \left\{ \frac{\isoprofile(a)^2}{a} : 0 < a < \frac{\abs{M}}{2} \right\}. \] \begin{cor} \label{cor:isoconst_bdd} With the notation of the previous theorem and $\phi$ satisfying the hypotheses of the theorem, we have \[ \isoconst_{\unicover{M}}(t) \geq \inf\left\{\frac{\phi(a, t)^2}{a} : 0 < a < \frac{\abs{\unicover{M}}}{2}\right\}. \] \end{cor} \begin{remark} \label{rem:isoconst_bdd} Note that area (2 dimensional volume) on $\unicover{M}$ equipped with the pull-back metric $\pullback{\uniproj} \metric$ grows like the growth of the fundamental group, but boundary length can't be controlled so easily.
For instance, the torus with arbitrarily small ratio of principal radii may be equipped with the flat metric, giving control of the isoperimetric constant on $\unicover{M}$, but with arbitrarily small isoperimetric constant on $M$. Note, however, that by normalising the area of $M$ to $4\pi$, we avoid this issue, and I conjecture that the isoperimetric constant of $M$ may be bounded below by that of $\unicover{M}$ for any $\genus \geq 1$. For the matter at hand, when $\genus>0$ (so that $\unicover{M}$ is not compact), we can't immediately transfer control of the isoperimetric constant on $\unicover{M}$ to control of the isoperimetric constant on $M$. \end{remark} Let us finish this section by recording a useful result for surfaces of genus $\genus \geq 1$ that shows the large scale asymptotics of $\phi$ are superfluous. \begin{prop} \label{prop:comparison} Let $M$ be a closed surface of genus $\geq 1$ (so that $\unicover{M}$ is not compact). Let $\phi$ be a strictly positive, strictly concave function satisfying the differential inequality \eqref{eq:ricci_iso_diff_inequal} and the small scale asymptotics from the comparison theorem \ref{thm:comparison}. If $\phi(a, 0) < \isoprofile_{\unicover{\metric}(0)}(a)$ for all $a\in (0,\infty)$, then $\phi(a, t) \leq \isoprofile_{\unicover{\metric}(t)} (a)$ for all $a,t$. \end{prop} \begin{proof} The only thing missing from Theorem \ref{thm:comparison} is the large scale asymptotics. It is convenient to work with the function $v=\phi^2$. This satisfies \begin{equation} \label{eq:ricci_isosq_diff_inequal} \pd[v]{t} \leq v^2 \Delta \ln v + (4\pi - 2(1-\genus)a) v' + 2(1-\genus) v. \end{equation} For any $C>0$, define \[ u_C(a, t) = C e^{2(1-\genus)t}. \] Then $u_C$ satisfies equality in equation \eqref{eq:ricci_isosq_diff_inequal}.
Since $u_C$ is constant for each fixed $t$ and $\isoprofile$ grows at least linearly as $a\to\infty$ by Theorem \ref{thm:asymptotics_large}, we also have $u_C(a, t) < \isoprofile_{\unicover{\metric}(t)}(a)$ for all $a$ large enough. Now take the harmonic mean, \[ \harmean (a, t) = \left(\frac{1}{v(a,t)} + \frac{1}{u_C(a,t)}\right)^{-1}. \] This has the property that for any $(a,t)$ we have \[ v(a, t) = \lim_{C\to\infty} \frac{v(a, t) u_C(a, t)}{v(a, t) + u_C(a, t)} = \lim_{C\to\infty} \harmean (a, t). \] Therefore, to prove the result, we need to show that $\harmean$ satisfies the hypotheses of theorem \ref{thm:comparison} (applied with $\sqrt{\harmean}$ in place of $\phi$), since this will give the inequality for $\harmean$ for every $C>0$, and so too for $v$, being the limit $C\to\infty$ of $\harmean$. First, since $0 < \harmean \leq v, u_C$, the initial inequality $\harmean < \isoprofile_{\unicover{\metric}(0)}^2$ is satisfied along with the necessary small and large scale asymptotics. For strict concavity of $\harmean$, we use \[ \harmean = \frac{u_C v}{u_C + v} \] and $(u_C)' = 0$ to compute \begin{align*} \harmean' &= \frac{u_Cv'}{u_C + v} - \frac{u_C v v'}{(u_C + v)^2} \\ &= \frac{u_C^2 v'}{(u_C + v)^2} \end{align*} and so \[ \harmean'' = \frac{u_C^2 v''}{(u_C + v)^2} - \frac{2u_C^2(v')^2}{(u_C + v)^3} < 0 \] by strict concavity of $v$ and positivity of $v, u_C$. Thus $\harmean$ is strictly concave. Now let us consider the differential inequality. Define \[ L_{\pm} = \left(4\pi - 2(1-\genus)a\right)\pd{a} \pm 2(1-\genus). \] The differential inequality \eqref{eq:ricci_isosq_diff_inequal} then reads \[ \left(\pd{t} - L_+\right) v \leq v^2 \Delta \ln v, \] and since $(1-\genus) \leq 0$, we then also have $\left(\pd{t} - L_-\right) v = \left(\pd{t} - L_+\right) v + 4(1-\genus)v \leq v^2 \Delta \ln v$. For any function $f$ we have \begin{equation} \label{eq:linopf} \left(\pd{t} - L_{\pm}\right) \frac{1}{f} = -\frac{1}{f^2} \left(\pd{t} - L_{\mp}\right) f.
\end{equation} Applying equation \eqref{eq:linopf} to $\harmean = 1/f$ with $f = v^{-1} + u_C^{-1}$ gives \begin{align*} \left(\pd{t} - L_-\right) \harmean &= -\harmean^2 \left((\pd{t} - L_+) \frac{1}{v} + (\pd{t} - L_+) \frac{1}{u_C}\right) \\ &= -\harmean^2 \left((\pd{t} - L_+) \frac{1}{v} - 4(1-\genus)\frac{1}{u_C}\right) \end{align*} since $u_C' = 0$ and $u_C$ satisfies equality in \eqref{eq:ricci_isosq_diff_inequal}, so that $(\pd{t} - L_-) u_C = 4(1-\genus)u_C$ and hence, by \eqref{eq:linopf}, $(\pd{t} - L_+)\frac{1}{u_C} = -4(1-\genus)\frac{1}{u_C}$. Next applying equation \eqref{eq:linopf} to $v^{-1}$ we get \begin{align*} \left(\pd{t} - L_-\right) \harmean &= \frac{\harmean^2}{v^2} (\pd{t} - L_-) v + 4\harmean^2 (1-\genus)\frac{1}{u_C} \\ &\leq \frac{\harmean^2}{v^2} v^2 \Delta \ln v \\ &= \harmean^2 \Delta \ln v \end{align*} since $(\pd{t} - L_-) v \leq v^2 \Delta \ln v$ and $(1-\genus) \leq 0$. Here the inequality is strict if $v$ (or equivalently $\phi$) satisfies strict inequality in the differential inequality. We want to show the right hand side is less than or equal to $\harmean^2 \Delta \ln \harmean$. We compute \begin{align*} \harmean^2 \Delta \ln \harmean &= \harmean \harmean'' - (\harmean')^2 \\ &= \frac{v u_C}{v+u_C} \left[\left(\frac{u_C}{v+u_C}\right)^2 v'' - \frac{2u_C^2}{(v+u_C)^3} (v')^2\right] - \left(\frac{u_C}{v+u_C}\right)^4 (v')^2 \\ &= \left(\frac{v u_C}{v + u_C}\right)^2 \left[\left(\frac{u_C}{v+u_C}\right) \frac{v''}{v} - \frac{2u_C v}{(v+u_C)^2} \frac{(v')^2}{v^2} - \frac{u_C^2}{(v+u_C)^2} \frac{(v')^2}{v^2}\right] \\ &= \harmean^2 \left[\left(\frac{u_C}{v+u_C}\right) \frac{v''}{v} - \left(\frac{(v+u_C)^2 - v^2}{(v+u_C)^2}\right) \frac{(v')^2}{v^2}\right] \\ &\geq \harmean^2 \left[ \frac{v''}{v} - \frac{(v')^2}{v^2}\right] = \harmean^2 \Delta \ln v \end{align*} where the inequality follows from the concavity of $v$ and the positivity of $v$ and $u_C$.
\end{proof} \subsection{A connection with logarithmic porous media} \label{sec-3-2} For positive functions $\phi$, the differential inequality \[ \pd[\phi]{t} < \phi^2\phi'' - \phi(\phi')^2 + (4\pi - 2(1-\genus)a)\phi' + (1-\genus)\phi \] is equivalent, up to a change of the independent variables described below, to the logarithmic porous media inequality \[ \pd[u]{t} > \Delta \ln u. \] To see this, observe that \[ \phi^3 \Delta \ln \phi = \phi^2\phi'' - \phi(\phi')^2. \] Letting $u = \phi^{-2}$ we have $\Delta \ln u = -2 \Delta \ln \phi$ and so \[ \begin{split} \pd[u]{t} &= \frac{-2}{\phi^3} \pd[\phi]{t} > \frac{-2}{\phi^3} \left[\phi^3 \Delta \ln \phi + (4\pi - 2(1-\genus)a)\phi' + (1-\genus)\phi\right] \\ &= \Delta \ln u + (4\pi - 2(1-\genus)a)u' -2(1-\genus)u. \end{split} \] A change of the independent variables $(a,t)$ can now be made to get rid of the lower order terms. This point of view may prove useful since the logarithmic porous media equation has been extensively studied, but we do not use it here. \section{Model solutions} \label{sec-4} \label{396a40e0-d4d8-422d-a69b-4e25da6de9c3} \label{sec:models} This section is devoted to exhibiting suitable comparison functions $\phi$, and hence curvature and isoperimetric bounds for metrics evolving by the normalised Ricci flow, via Corollaries \ref{cor:ricci_comp_curv_bnd} and \ref{cor:isoconst_bdd}. We will need to treat the cases $\genus=0, \genus=1, \genus>1$ separately. The next and final section briefly outlines how such bounds lead, via standard arguments, to the convergence results described in Theorem \ref{thm:convergence}. \subsection{genus 0} \label{sec-4-1} In \cite{MR2729306}, we showed that the isoperimetric profile of the Rosenau solution provided a suitable comparison solution. Let us briefly recall the result. The Rosenau solution is an explicit axially symmetric solution of the normalised Ricci flow on the two-sphere.
The metric is given by $\bar g(t) = u(x,t)(dx^2+dy^2)$, where $(x,y)\in\fld[R]\times[0,4\pi]$, and \[ u(x,t) = \frac{\sinh(e^{-2t})}{2e^{-2t}\left(\cosh(x)+\cosh(e^{-2t})\right)}. \] This extends to a smooth metric on the two-sphere at each time with area $4\pi$, which evolves according to the normalised Ricci flow equation \eqref{eq:NRF}. A direct computation gives the isoperimetric profile, \begin{equation} \label{eq:Rosenauprofile} \varphi(a,t) = \sqrt{4\pi}\sqrt{\frac{\sinh(a e^{-2t})\sinh((1-a)e^{-2t})}{\sinh(e^{-2t})e^{-2t}}}. \end{equation} By translating $t \mapsto t - t_0$ with $t_0$ chosen so that the initial inequality of the isoperimetric profile holds, the comparison theorem leads to the following bounds for solutions of the normalised Ricci flow on the $2$-sphere: \begin{theorem} \label{thm:sphere_bounds} Let $\metric$ be a solution of the normalised Ricci flow on $\mathbb{S}^2$. Then there exist constants $A,C>0$ depending only on the metric at the initial time such that \[ \sup_{\mathbb{S}^2} \gausscurv (t) \leq C e^{-At}. \] There also exists a constant $\isoconst_0 > 0$, depending only on the initial metric $\metric_0$, such that \[ \isoconst (t) > \isoconst_0 \] where $\isoconst(t)$ is the isoperimetric constant of $(\mathbb{S}^2, \metric(t))$. \end{theorem} \subsection{genus 1} \label{sec-4-2} Next, let us describe a comparison solution for the universal cover of surfaces of genus $\genus = 1$, i.e. for $\fld[R]^2$. Recall that we need to find a function satisfying the differential inequality \[ \phi_t\leq \phi^2\phi''-\phi(\phi')^2+4\pi \phi'. \] We look for solutions with equality. First, to simplify matters, let $v=\phi^2$, which satisfies the equation \[ v_t = vv''-(v')^2+4\pi v' = v^2\left(\frac{v'}{v} - \frac{4\pi}{v}\right)'.
\] Taking the Ansatz $v(a,t) = tV(a/t)$, we obtain an integrable equation which, adding in the limiting behaviour $V(0) = 0$, has the family of solutions \[ V_C(z) = \frac{1}{C}\left(4\pi - \frac{1}{C}\right) \left(1 - e^{-Cz}\right) + \frac{z}{C}. \] That is, we have \begin{equation} \label{eq:ricci_plane_comparison} v_C(a,t) = \frac{a}{C} + \frac{t}{C}\left(4\pi - \frac{1}{C}\right) \left(1 - e^{-\tfrac{Ca}{t}}\right). \end{equation} We can now use $v_C$ as a comparison for $\genus = 1$ surfaces, as in the following: \begin{theorem} \label{thm:plane_comparison} Let $\metric(t)$ be any solution of the normalised Ricci flow on $M$ a closed, genus $1$ surface and let $\unicover{\metric} = \uniproj^{\ast}\metric$ be the pull-back metric to the universal cover $\unicover{M} = \fld[R]^2$. Then there exists a $C>0$ such that the function $\phi = \sqrt{v_C}$, where $v_C$ is defined by \eqref{eq:ricci_plane_comparison}, satisfies $\phi(a, t) < \isoprofile_{\unicover{\metric}} (a, t)$ for all $a \in (0,\infty)$ and $t \in [0, T)$. Therefore the Gauss curvature $\gausscurv$ of $M$ satisfies the bound \[ \sup_M \gausscurv \leq \frac{A}{t} \] for a constant $A>0$ depending only on the initial metric $\metric_0$. \end{theorem} \begin{proof} We know that $v_C$ satisfies the differential inequality, and it is easy to see that $v_C$ is strictly concave for $C > 1/4\pi$, so we need to show that $v_C$ meets the other requirements for the comparison theorem in the form of Proposition \ref{prop:comparison}. At $t=0$, we have $v_C(a,0) = \tfrac{a}{C}$, so by choosing $C$ large enough we have the initial comparison, since $\isoprofile \simeq \sqrt{C_1 a + C_2 a^2}$ as $a\to \infty$. On the small scale we have \[ \phi(a, t) = \sqrt{4\pi a} \left(1 - \left(\frac{C}{4} - \frac{1}{16\pi}\right)\frac{a}{t} + O(a^2)\right) \] as required for the small scale asymptotics, also providing the stated curvature bound with $A = 2\pi C - 1/2$ (which is positive for $C > 1/4\pi$) by Corollary \ref{cor:ricci_comp_curv_bnd}.
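For completeness, the claim that $v_C$ satisfies the differential inequality (indeed, with equality) can be verified directly; the abbreviation $K = 4\pi - \tfrac{1}{C}$ below is shorthand for this check only.

```latex
% With z = a/t and v_C(a,t) = t V_C(z), we have v_t = V_C - z V_C', v' = V_C'(z)
% and v'' = V_C''(z)/t, so v_t = vv'' - (v')^2 + 4\pi v' reduces to the identity
% V_C - zV_C' = V_C V_C'' - (V_C')^2 + 4\pi V_C'. Writing K = 4\pi - 1/C,
\[
V_C = \frac{K}{C}\left(1 - e^{-Cz}\right) + \frac{z}{C}, \qquad
V_C' = K e^{-Cz} + \frac{1}{C}, \qquad
V_C'' = -CK e^{-Cz},
\]
% and using 4\pi - K = 1/C, both sides reduce to the same expression:
\[
V_C - zV_C' = \frac{K}{C}\left(1 - e^{-Cz}\right) - zK e^{-Cz}
= V_C V_C'' - (V_C')^2 + 4\pi V_C'.
\]
```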
\end{proof} \subsection{genus $>1$} \label{sec-4-3} In this section, we construct the model comparison solution for the final case, $\genus > 1$. When $\sup_{M_0} \gausscurv > 0$, the construction is a little involved. \subsubsection{$K < 0$ case} \label{sec-4-3-1} First let us consider the case where \(\sup_{M_0} \gausscurv \leq 0\), since it admits a simple, appealing comparison solution. For any $A,C>0$, let \begin{equation} \label{eq:ricci_hyperbolic_comparison} v(a, t) = 4\pi a + B(t) a^2 \end{equation} with \[ B(t) = (\genus - 1) - \frac{C}{1 + A e^{(\genus - 1)t}}. \] Direct computation shows that $v_C$ is a solution of the differential equation \[ v_t = vv''-(v')^2 + (4\pi - (1-\genus)a) v' + 2(1-\genus)v, \] which is the required equation for $v = \phi^2$ as in the genus $1$ case above. \begin{theorem} \label{thm:ricci_hyperbolic_curvature_bound_negative} Let $\metric(t)$ be any solution of the normalised Ricci flow on $M$ a closed, genus $>1$ surface with $\sup_{M_0} \gausscurv \leq 0$ and let $\unicover{\metric} = \uniproj^{\ast}\metric$ the pull back to $\fld[H]^2$ with $\uniproj: \fld[H]^2 \to M$ the universal cover. Then for $\phi = \sqrt{v}$ where $v$ is defined by \eqref{eq:ricci_hyperbolic_comparison}, there exists $A,C>0$ such that $\phi(a, t) < \isoprofile_{\unicover{\metric}} (a, t)$ for all $a \in (0,\infty)$. Therefore, the Gauss curvature $\gausscurv_M$ satisfies the bound \[ \sup_{M_t} \gausscurv \leq C_1 e^{-C_2t} \] for positive constants $C_1,C_2$. \end{theorem} \begin{proof} Since the comparison function is a quadratic with zero constant term and linear coefficient equal to $4\pi$, the small scale asymptotics are satisfied providing that $B(t) \leq - \text{const} \sup_M\gausscurv$, by the asymptotics of the isoperimetric profile given in theorem \ref{thm:bdrybehaviour}. 
Since we require $B(t)\geq 0$ (so that $v > 0$ on all of $(0,\infty)$), this can only be achieved in the case $\sup_{M_t} \gausscurv \leq 0$, which holds by the maximum principle under the assumption $\sup_{M_0} \gausscurv \leq 0$. In this case, we choose $A,C$ large enough so that the initial comparison holds. Concavity is easily checked. Proposition \ref{prop:comparison} completes the proof that $\phi(a, t) < \isoprofile_{\unicover{\metric}} (a, t)$ for all $a \in (0,\infty)$. The curvature bound now follows directly from Corollary \ref{cor:ricci_comp_curv_bnd}. \end{proof} \subsubsection{General case} \label{sec-4-3-2} \paragraph{Stationary solution} \label{sec-4-3-2-1} Recall we have the inequality \[ \pd{t} v - \left\{vv'' - (v')^2 + [4\pi - 2(1-\genus)a]v' + 2(1-\genus)v\right\} \leq 0. \] We can write this as \begin{equation} \label{eq:hyperbolic_squared_diff_ineq} \pd{t} v \leq v^2 \left(\frac{v'}{v} - \frac{4\pi - 2(1-\genus)a}{v}\right)'. \end{equation} Stationary solutions (with equality) of this equation satisfying the conditions $v(0)=0$ and $\limsup_{x\to\infty} \tfrac{v(x)}{x^2} < \infty$ are given by \begin{equation} \label{eq:hyperbolic_stationary} \begin{split} v_C(x) &= \frac{1}{C}[4\pi + \frac{2(1 - \genus)}{C}][1 - e^{-Cx}] - \frac{2(1-\genus)}{C^2} (Cx) \\ &= 4\pi x + \frac{1}{C}[4\pi + \frac{2(1 - \genus)}{C}][1 - Cx - e^{-Cx}] \\ &= 4\pi x - \frac{1}{C}[4\pi + \frac{2(1 - \genus)}{C}]\frac{(Cx)^2}{2} \\ & \quad + \frac{1}{C}[4\pi + \frac{2(1 - \genus)}{C}][1 - Cx + \frac{(Cx)^2}{2} - e^{-Cx}] \end{split} \end{equation} for any $C > 0$. The last line is obtained from the Taylor expansion for $e^{-Cx}$. Each of the three expressions illustrates different properties of $v_C$. For instance, the first line shows that $v_C$ grows at most linearly. The second and third lines give the first and second order Taylor expansions with explicit remainders.
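The stationarity claim can be verified directly; the abbreviations $K = 4\pi + \tfrac{2(1-\genus)}{C}$ and $m = -\tfrac{2(1-\genus)}{C} = 4\pi - K \geq 0$ below are shorthand for this check only.

```latex
% With K = 4\pi + 2(1-\genus)/C and m = 4\pi - K, the first expression for v_C reads
\[
v_C = \frac{K}{C}\left(1 - e^{-Cx}\right) + mx, \qquad
v_C' = K e^{-Cx} + m, \qquad
v_C'' = -CK e^{-Cx},
\]
% and, since 4\pi - 2(1-\genus)x = K + m + mCx and 2(1-\genus) = -mC, one finds
\[
v_C v_C'' - (v_C')^2 + \left(K + m + mCx\right) v_C' - mC\, v_C = 0,
\]
% the e^{-2Cx}, xe^{-Cx}, e^{-Cx}, x and constant terms cancelling separately.
```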
For later use, the first and second derivatives of $v_C$ are \begin{align} \label{eq:hyperbolic_stationary_1st} v_C' &= 4\pi + [4\pi + \frac{2(1 - \genus)}{C}][-1 + e^{-Cx}] \\ \label{eq:hyperbolic_stationary_2nd} v_C'' &= -C[4\pi + \frac{2(1 - \genus)}{C}]e^{-Cx}. \end{align} In particular, provided that \[ C \geq C_{\text{crit}} = - \frac{1-\genus}{2\pi} \] we have $[4\pi + \frac{2(1 - \genus)}{C}] \geq 0$ and so $v_C$ is concave, strictly so when $C>C_{\text{crit}}$. Such functions prove useful, but are not quite sufficient for our purposes. The comparison is constructed from the function \begin{equation} \label{eq:f_hyperbolic} f(x, t) = \sqrt{v_{C} (x) + b x^2} \end{equation} with $b\geq 0$. \begin{lemma} \label{lem:hyperbolic_stationary_concave} Let $f$ be defined as in equation \eqref{eq:f_hyperbolic} with $C\geq C_{\text{crit}}$. Then $f$ is concave if and only if \[ b \leq b_{\text{crit}} = \frac{(1-\genus)^2}{C[4\pi + \frac{2(1-\genus)}{C}]}, \] with strict concavity corresponding to strict inequality. \end{lemma} \begin{proof} We have \[ f'' = \frac{1}{2f^3}[v_Cv_C'' - \frac{1}{2}(v_C')^2 + b(2v_C - 2xv_C' + x^2 v_C'')] \] so that $f$ is concave if and only if \[ b \leq \frac{\frac{1}{2}(v_C')^2 - v_Cv_C''}{(2v_C - 2xv_C' + x^2 v_C'')} \] since $b$ is non-negative. First, consider the numerator. Since $v_C\geq 0$ and $v_C''\leq 0$ we have \[ \frac{1}{2}(v_C')^2 - v_C v_C'' \geq \frac{1}{2}(v_C')^2. \] Again using $v_C''\leq 0$ we have \[ v_C'(x) \geq \lim_{x\to\infty} v_C' = \frac{-2(1-\genus)}{C}. \] Also $\lim_{x\to\infty} v_Cv_C'' = 0$ so that in fact, \[ \inf_{x}\left(\frac{1}{2}(v_C')^2 - v_Cv_C''\right) = \frac{2(1-\genus)^2}{C^2}.
\] For the denominator, we have \begin{align*} 2v_C - 2xv_C' + x^2v_C'' &= \frac{2}{C}[4\pi + \frac{2(1 - \genus)}{C}][1 - e^{-Cx}(1 + Cx + (Cx)^2/2)] \\ &\leq \frac{2}{C}[4\pi + \frac{2(1 - \genus)}{C}] \end{align*} with equality in the limit $x\to\infty$, so that \[ \sup_{x} \left(2v_C - 2xv_C' + x^2v_C''\right) = \frac{2}{C}[4\pi + \frac{2(1 - \genus)}{C}]. \] Therefore $f$ is concave if and only if \[ b \leq \frac{\frac{2(1-\genus)^2}{C^2}}{\frac{2}{C}[4\pi + \frac{2(1 - \genus)}{C}]} = \frac{(1-\genus)^2}{C[4\pi + \frac{2(1-\genus)}{C}]}. \] \end{proof} \begin{remark} \label{rem:hyperbolic_stationary_concave} Observe that the denominator in $b_{\text{crit}}$ is zero for $C=C_{\text{crit}}$ and is positive for $C>C_{\text{crit}}$, while $b_{\text{crit}}$ itself approaches $0$ as $C\to\infty$. Thus for any $C_0 > C_{\text{crit}}$, $b_{\text{crit}}$ is bounded below away from $0$ on $(C_{\text{crit}}, C_0]$. This will prove useful later. \end{remark} Let us also record the small and large scale asymptotics of $f$ in a lemma for later reference. \begin{lemma} \label{lem:hyperbolic_asymptotics} The function $v_C$ satisfies the asymptotic behaviour \[ v_C (x) = 4\pi x - \frac{1}{C}\left[4\pi + \frac{2(1-\genus)}{C}\right] \frac{(Cx)^2}{2} + \bigo((Cx)^3) \] as $Cx\to 0$. Moreover, \[ v_C(x) = -\frac{2(1-\genus)}{C^2} (Cx) + O(1) \] as $x\to\infty$. Therefore, $f$ satisfies the asymptotic behaviour \[ f^2(x) = 4\pi x +\left(\frac{2b}{C^2} - \frac{1}{C}\left[4\pi + \frac{2(1-\genus)}{C}\right]\right) \frac{(Cx)^2}{2} + \bigo((Cx)^3) \] as $Cx\to 0$. Moreover, \[ f^2(x) = b x^2 + O(x) \] as $x\to\infty$. \qed \end{lemma} \begin{remark} \label{rem:hyperbolic_asymptotics} In particular notice that the coefficient of $x^2/2$ (rather than $(Cx)^2/2$) from the small-scale asymptotics of $f$ is \[ 2b - 4\pi C - 2(1-\genus) \] and this can be made arbitrarily large negative by choosing $0 \leq b \ll \genus - 1$ and $C \gg C_{\text{crit}}$.
\end{remark} \paragraph{Construction of the comparison function} \label{sec-4-3-2-2} The comparison is built from the function $f$ defined in equation \eqref{eq:f_hyperbolic} by letting $C=C(t), b=b(t)$. If $C(t) \searrow C_{\text{crit}}$ and $b(t) \nearrow \genus - 1$ as $t\to \infty$, with $b(t) \leq b_{\text{crit}}(C(t))$ (which choice is possible by remark \ref{rem:hyperbolic_stationary_concave}), then \begin{equation} \label{eq:hyperbolic_comparison} f(x,t) = \sqrt{v_{C(t)} + b(t)x^2} \end{equation} is a concave function with \[ \lim_{t\to\infty} f(x,t) = \sqrt{4\pi x + (\genus - 1)x^2}, \] the isoperimetric profile of the metric of constant curvature $1-\genus$ (which is the curvature of the metric lifted from the constant curvature surface with area $4\pi$ and genus $\genus$ by the Gauss-Bonnet theorem). By choosing $C(0)>C_{\text{crit}}$ sufficiently large and $0\leq b(0) < \min\{b_{\text{crit}}, \genus - 1\}$ sufficiently small, lemma \ref{lem:hyperbolic_asymptotics} and remark \ref{rem:hyperbolic_asymptotics} imply that for any initial metric, the initial inequality is satisfied along with the asymptotic behaviour required by the comparison theorem in the form of Proposition \ref{prop:comparison}. Thus for $f$ to be a suitable comparison function, we need to choose $C(t)$ and $b(t)$ so that the differential inequality is satisfied. As before, it is more convenient to work with $v=f^2 = v_C + bx^2$. \begin{lemma} \label{lem:hyperbolic_diff_ineq} Let \begin{align*} b(t) &= \left[\left(\frac{1}{b_0} + \frac{1}{1-\genus}\right) e^{4(1-\genus)t} - \frac{1}{1-\genus}\right]^{-1} \\ C(t) &= (C_0 - C_{\text{crit}})\sqrt{b_0} e^{2(1-\genus)t} \left[\left(\frac{1}{b_0} + \frac{1}{1-\genus}\right) e^{4(1-\genus)t} - \frac{1}{1-\genus}\right]^{-1/2} + C_{\text{crit}}. \end{align*} Then $v = f^2$ satisfies the differential inequality \eqref{eq:hyperbolic_squared_diff_ineq} with $f$ defined by \eqref{eq:hyperbolic_comparison}.
\end{lemma} \begin{proof} First, for the time derivative we have \begin{equation} \label{eq:dt_hyperbolic_stationary} \begin{split} \pd[v]{t} &= \dd[C]{t} \pd[v_C]{C} + \dd[b]{t} x^2 \\ &= -\dd[C]{t} \frac{1}{C^2}[4\pi + \frac{2(1-\genus)}{C}] \frac{(Cx)^2}{2} \\ &\quad - \dd[C]{t} \left(\frac{1}{C^2}[4\pi + \frac{4(1-\genus)}{C}] \right) \left(1 - Cx + \frac{(Cx)^2}{2} - e^{-Cx}\right) \\ &\quad - \dd[C]{t} \left(\frac{1}{C^2}[4\pi + \frac{2(1-\genus)}{C}]\right)\left(Cx\right)\left(1 - Cx - e^{-Cx}\right) \\ &\quad + \frac{2}{C^2} \dd[b]{t} \frac{(Cx)^2}{2} \\ &< \frac{1}{C^2} \left(2\dd[b]{t} - \dd[C]{t} [4\pi + \frac{2(1-\genus)}{C}]\right) \frac{(Cx)^2}{2} \\ &\quad - \dd[C]{t} \left(\frac{1}{C^2}[4\pi + \frac{2(1-\genus)}{C}] \right) \left(1 - Cx + \frac{(Cx)^2}{2} - e^{-Cx}\right) \\ &\quad - \dd[C]{t} \left(\frac{1}{C^2}[4\pi + \frac{2(1-\genus)}{C}]\right)\left(Cx\right)\left(1 - Cx - e^{-Cx}\right) \\ \end{split} \end{equation} The inequality is obtained by replacing the $4$ with a $2$ in the second summand, using the facts that $C$ is decreasing and $(1-\genus)<0$. For the spatial part, we first use the fact that for two functions $g,h$ we have \[ (g+h)^2 \Delta \ln (g+h) = (g+h)(g+h)'' - ((g+h)')^2 = g^2\Delta \ln g + h^2 \Delta \ln h + gh'' + hg'' - 2g'h' \] Thus with $g=v_C$ and $h=bx^2$ we get \[ v^2 \Delta \ln v = v_C^2 \Delta \ln v_C - 2b^2x^2 + b (2v_C - 4 x v_C' + x^2 v_C'') \] so that \[ \begin{split} v^2 \Delta \ln v + L[v] &= v_C^2 \Delta\ln v_C + L[v_C] + 8\pi b x - 2(b^2 + (1-\genus)b)x^2 \\ &\quad + b (2v_C - 4 x v_C' + x^2 v_C'') \\ &= 8\pi b x - 2(b^2 + (1-\genus)b)x^2 + b (2v_C - 4 x v_C' + x^2 v_C'') \end{split} \] since $v_C$ satisfies $v_C^2 \Delta\ln v_C + L[v_C] = 0$.
Expand the last term in parentheses in a Taylor series using equations \eqref{eq:hyperbolic_stationary}, \eqref{eq:hyperbolic_stationary_1st} and \eqref{eq:hyperbolic_stationary_2nd} to get \begin{equation} \label{eq:spatial_hyperbolic_stationary} \begin{split} v^2 \Delta \ln v + L[v] &= \left(\frac{4b}{C}[4\pi + \frac{2(1-\genus)}{C}] - \frac{4(b^2 + (1-\genus)b)}{C^2} \right) \frac{(Cx)^2}{2} \\ &\quad + \frac{2b}{C}[4\pi + \frac{2(1-\genus)}{C}] \left(1 - Cx + \frac{(Cx)^2}{2} - e^{-Cx}\right) \\ &\quad + \frac{4b}{C}[4\pi + \frac{2(1-\genus)}{C}] \left(Cx\right) \left(1 - Cx - e^{-Cx}\right) \\ &\quad + \frac{2b}{C}[4\pi + \frac{2(1-\genus)}{C}] \frac{(Cx)^2}{2} \left(1 - e^{-Cx}\right). \end{split} \end{equation} Now we compare the terms from equation \eqref{eq:dt_hyperbolic_stationary} with those of equation \eqref{eq:spatial_hyperbolic_stationary} to obtain the following inequalities, which together are sufficient: \begin{itemize} \item $(Cx)^2/2$: \begin{multline*} \frac{1}{C^2} \left(2\dd[b]{t} - \dd[C]{t} [4\pi + \frac{2(1-\genus)}{C}]\right) \\ < \left(\frac{4b}{C}[4\pi + \frac{2(1-\genus)}{C}] - \frac{4(b^2 + (1-\genus)b)}{C^2} \right) \end{multline*} \item $1 - Cx + \frac{(Cx)^2}{2} - e^{-Cx}$: \[ -\dd[C]{t} \left(\frac{1}{C^2}[4\pi + \frac{2(1-\genus)}{C}] \right) < \frac{2b}{C}[4\pi + \frac{2(1-\genus)}{C}] \] \item $Cx \left(1 - Cx - e^{-Cx}\right)$: \[ -\dd[C]{t} \left(\frac{1}{C^2}[4\pi + \frac{2(1-\genus)}{C}]\right) < \frac{4b}{C}[4\pi + \frac{2(1-\genus)}{C}] \] \item $\frac{(Cx)^2}{2} \left(1 - e^{-Cx}\right)$: \[ 0 < \frac{2b}{C}[4\pi + \frac{2(1-\genus)}{C}] \] \end{itemize} All the above inequalities are satisfied if \begin{align} \label{eq:b_bernoulli_hyperbolic_stationary} \dd[b]{t} &< -4\left(b^2 + (1-\genus)b\right) \\ \label{eq:c_hyperbolic_stationary} \dd{t} \ln C &> - 2b \end{align} and $C \geq C_{\text{crit}}$, which ensures that $4\pi + \tfrac{2(1-\genus)}{C} \geq 0$.
It is now a simple matter, left to the reader, to check that $b(t)$ as given in the statement of the lemma satisfies equality in the Bernoulli equation \eqref{eq:b_bernoulli_hyperbolic_stationary} and that equality in \eqref{eq:c_hyperbolic_stationary} is satisfied by \[ \tilde{C} = (C_0 - C_{\text{crit}})\sqrt{b_0} e^{2(1-\genus)t} \left[\left(\frac{1}{b_0} + \frac{1}{1-\genus}\right) e^{4(1-\genus)t} - \frac{1}{1-\genus}\right]^{-1/2}. \] Since $C(t) = \tilde{C} + C_{\text{crit}} > \tilde{C}$ and $\dd{t} \tilde{C} < 0$, we then have \[ \dd{t} \ln C = \frac{\dd{t}\tilde{C}}{\tilde{C} + C_{\text{crit}}} > \frac{\dd{t}\tilde{C}}{\tilde{C}} = -2b \] completing the proof. \end{proof} \begin{remark} Observe that with $b(t), C(t)$ as given in lemma \ref{lem:hyperbolic_diff_ineq}, $b(t)$ monotonically increases from $b_0$ to $-(1-\genus)$ and $C(t)$ monotonically decreases from $C_0$ to $C_{\text{crit}}$ so that $f = \sqrt{v}$ converges to the constant curvature $1-\genus$ isoperimetric profile. \end{remark} Finally, applying corollary \ref{cor:ricci_comp_curv_bnd} we obtain \begin{theorem} \label{thm:ricci_hyperbolic_curvature_bound_general} Let $\metric(t)$ be any solution of the normalised Ricci flow on a closed surface $M$ of genus $>1$ and let $\unicover{\metric} = \uniproj^{\ast}\metric$ be the pull-back to $\fld[H]^2$ with $\uniproj: \fld[H]^2 \to M$ the universal cover. Then for $\phi = \sqrt{v_{C(t)} + b(t)a^2}$, where $C(t), b(t)$ are defined as in Lemma \ref{lem:hyperbolic_diff_ineq}, there exist $C_0, b_0>0$ such that $\phi(a, t) < \isoprofile_{\unicover{\metric}} (a, t)$ for all $a \in (0,\infty)$. Therefore, the Gauss curvature $\gausscurv_M$ satisfies the bound \[ \sup_{M} \gausscurv_{M} (t) \leq \left(\frac{C(t)}{2}\left[4\pi + \frac{2(1-\genus)}{C(t)}\right] - b(t)\right) \] which decays exponentially fast to $(1-\genus)$ as $t \to \infty$.
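The computation left to the reader can be sanity-checked numerically: with illustrative values (genus $2$ and $b_0 = 1/2$, both assumptions for the check only), the stated $b(t)$ satisfies equality in the Bernoulli equation \eqref{eq:b_bernoulli_hyperbolic_stationary} and increases monotonically towards $-(1-\genus)$:

```python
import math

g = 2      # genus (illustrative choice); 1 - g = -1
b0 = 0.5   # illustrative initial value b_0

def b(t):
    # b(t) from the statement of the lemma
    return 1.0 / ((1.0 / b0 + 1.0 / (1 - g)) * math.exp(4 * (1 - g) * t)
                  - 1.0 / (1 - g))

# check db/dt = -4 (b^2 + (1-g) b) by central differences
h = 1e-6
for t in (0.0, 0.5, 2.0):
    lhs = (b(t + h) - b(t - h)) / (2 * h)
    rhs = -4 * (b(t)**2 + (1 - g) * b(t))
    assert abs(lhs - rhs) < 1e-6

# b(t) increases from b0 = 0.5 towards -(1-g) = g - 1 = 1
assert b(0.0) == 0.5 and b(1.0) > b(0.0)
assert abs(b(10.0) - (g - 1)) < 1e-3
```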
\end{theorem} \begin{remark} The exponential decay in the theorem follows from the fact that $b(t) \to -(1-\genus)$ exponentially fast, $C(t) \to C_{\text{crit}}$ exponentially fast and hence $\left[4\pi + \frac{2(1-\genus)}{C(t)}\right] \to 0$ exponentially fast. \end{remark} \section{Convergence} \label{sec-5} \label{50880488-de51-46b5-9883-fc520d40172c} \label{sec:convergence} In this last section, let us briefly discuss the proof of Theorem \ref{thm:convergence}. The argument is very standard, following from bootstrapping the curvature bounds to higher derivative bounds. Here we will only outline the steps, indicating how the results here may be applied. \begin{proof} [Proof of Theorem \ref{thm:convergence}] First, observe that by Theorems \ref{thm:sphere_bounds}, \ref{thm:plane_comparison}, \ref{thm:ricci_hyperbolic_curvature_bound_negative} and \ref{thm:ricci_hyperbolic_curvature_bound_general}, and the fact that $\uniproj: \unicover{M} \to M$ is a local isometry, we have uniform upper bounds $\gausscurv_M \leq \gausscurv_0(t) + (1-\genus)$ with $\gausscurv_0(t)$ uniformly bounded and such that $\lim_{t\to\infty} \gausscurv_0(t) = 0$. Since the Gauss curvature evolves according to $\pd{t} \gausscurv = \Delta \gausscurv + \gausscurv(\gausscurv - (1-\genus))$, by an ODE comparison we also have uniform lower bounds converging to $0$ as $t\to\infty$. Thus $\abs{\gausscurv(t)}$ is uniformly bounded for all $t>0$ and hence the solution exists for all time. $L^1$ convergence of the curvature to $1-\genus$ now follows easily. By Gauss-Bonnet, the bound $\gausscurv \leq \gausscurv_0(t) + (1-\genus)$, and the fact that $\abs{M} = 4\pi$, we have \[ 0 = \int_M (\gausscurv - (1-\genus)) \,d\mu \leq - \int_{\gausscurv \leq (1-\genus)} \abs{\gausscurv - (1-\genus)} \,d\mu + 4\pi \gausscurv_0(t), \] which rearranges to give $\int_{\gausscurv \leq (1-\genus)} \abs{\gausscurv - (1-\genus)} d\mu \leq 4\pi \gausscurv_0(t)$.
Therefore we get \[ \int_M \abs{\gausscurv - (1-\genus)} d\mu \leq 8\pi \gausscurv_0(t) \] which converges to $0$ as $t\to\infty$. Next we bound the higher derivatives of $\gausscurv$. By the bootstrapping argument described in \cite[Section 7]{MR1375255}, and from the uniform curvature bounds, we obtain \[ \abs{\nabla^{(j)} \gausscurv}^2 \leq C_j ((1-\genus) + t^{-j}) \] for constants $C_j>0$. In the genus $\genus=0$ case, the lower bound on the isoperimetric constant affords very strong analytic control, allowing us to apply Gagliardo-Nirenberg inequalities to deduce that $\gausscurv \to_{C^{\infty}} (1-\genus)$ uniformly as $t \to \infty$. See \cite{MR2729306}. For higher genus surfaces, we don't have such control, as noted in remark \ref{rem:isoconst_bdd}. Thus instead, for $\genus>0$ surfaces, with a little more work, another bootstrapping argument gives $\gausscurv \to_{C^{\infty}} (1-\genus)$ uniformly as $t \to \infty$ \cite[Section 7]{MR1375255}. Finally, in the case $\genus > 1$, we have $\gausscurv_0(t) = Ce^{-at}$, which gives, for any non-zero $v\in TM$, \[ \abs{\pd{t} \ln \metric(t) (v,v)} = 2\abs{\gausscurv - (1-\genus)} \leq Ce^{-at} \] which is integrable in $t$ on $[0,\infty)$. Smooth convergence of the metric now follows by the argument in \cite[Section 17]{MR664497}. For the case $\genus = 0$ we only have $\gausscurv_0(t) = C/t$ which is not integrable. However, we may use the fact that $\abs{\nabla \gausscurv} \leq C/t^{3/2}$ which is integrable to again deduce smooth convergence \cite{MR2061425}. \end{proof} \begin{remark} Note that we have control of the isoperimetric constant on $\unicover{M}$ and a curvature bound. We cannot however use these to obtain the simpler convergence proof using Gagliardo-Nirenberg inequalities since these rely on $L^1$ convergence of the curvature. But this is invalid for $\genus > 1$ since $\unicover{M}$ is not compact.
Perhaps one might deduce $L^1_{\text{loc}}$ convergence, but note that the above $L^1$ convergence argument uses Gauss-Bonnet. In the $L^1_{\text{loc}}$ case, we would need to deal with boundary terms arising from Gauss-Bonnet and I don't know how to control these. This is perhaps related to transferring isoperimetric control from $\unicover{M}$ to $M$. \end{remark} \printbibliography \end{document}
https://arxiv.org/abs/1706.08960
Combinatorial approach to detection of fixed points, periodic orbits, and symbolic dynamics
We present a combinatorial approach to rigorously show the existence of fixed points, periodic orbits, and symbolic dynamics in discrete-time dynamical systems, as well as to find numerical approximations of such objects. Our approach relies on the method of `correctly aligned windows'. We subdivide the `windows' into cubical complexes, and we assign to the vertices of the cubes labels determined by the dynamics. In this way we encode the dynamics information into a combinatorial structure. We use a version of the Sperner Lemma saying that if the labeling satisfies certain conditions, then there exist fixed points/periodic orbits/orbits with prescribed itineraries. Our arguments are elementary.
\section{Introduction} Numerical investigations of discrete-time dynamical systems often require the approximation of the phase space and of the underlying map via a fine grid. In this way, the dynamics information is encoded into a combinatorial structure. From the computational point of view it is important that such a combinatorial structure be as simple as possible. While simplicial structures appear to be more elegant, cubical structures present many practical advantages, including the possibility of using cartesian coordinates, simple numerical representation of maps as multivalued maps, and lower computational costs of higher-dimensional homology. See, e.g., \cite{Kaczynski2003}. In this paper we develop a combinatorial topology-based approach to detect fixed points, periodic orbits, and symbolic dynamics in discrete-time dynamical systems. The approach relies on the method of `correctly aligned windows', also known as `covering relations'. This method goes back to the geometric ideas of Conley, Easton and McGehee \cite{Conley68,EastonMcGehee79,Easton81}, while more recent, topological versions of the method have been developed in \cite{GideaZ2004}. The method can be described concisely as follows. A `window' (also known as an `$h$-set') is a multi-dimensional rectangle, whose boundary consists of an `exit set' and an `entry set'. One window is correctly aligned with (or `covers') another window under the map if the image of the first window goes across the second window, with the exit set of the first window coming out through the exit set of the second window, and without the image of the first window intersecting the entry set of the second window. There is an additional condition that the crossing of the windows should be topologically nontrivial, which can be expressed in terms of the Brouwer degree.
The main results about correctly aligned windows can be summarized as follows: \begin{itemize} \item[(i)] If a window is correctly aligned with itself, then there is a fixed point inside that window; \item[(ii)] For a finite sequence of windows (with a circular ordering), if each window is correctly aligned with the next window in the sequence, then there exists a periodic orbit inside those windows; \item[(iii)] For an infinite sequence of windows, if each window is correctly aligned with the next window, then there exists an orbit inside those windows; \item[(iv)] { For a finite sequence of pairwise disjoint windows, with the correct alignment of pairs of windows described by a $0-1$ transition matrix, there exists an invariant set inside those windows on which the dynamics is semi-conjugate to a topological Markov chain associated to that matrix.} \end{itemize} In principle, this method only yields existential-type results on fixed points/periodic orbits/orbits with prescribed itineraries. In this paper, we provide an algorithmic approach to verify that the crossing of the windows is topologically non-trivial, and to detect numerically, up to a desired level of precision, fixed points/periodic orbits/orbits with prescribed itineraries. Our method is constructive, can be implemented numerically quite easily, and does not require the computation of algebraic topology-type invariants (e.g., Conley index, homology, Brouwer degree). The approach proposed in this paper is based on combinatorial topology, particularly on the classical Sperner Lemma \cite{Sperner1928}. We regard each window as a multi-dimensional cube, and we construct a cubical decomposition of it. Then we assign labels to all vertices of the cubical decomposition. The label of a vertex $x$ is given by the hyperoctant where the vector $f_\chi(x)-x$ lands, where $f_\chi$ is the map that defines the dynamical system expressed in certain coordinates.
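For illustration, the labeling rule just described amounts to recording the sign vector of $f_\chi(x)-x$ at each vertex. A minimal sketch (the tie-breaking convention at coordinates that vanish exactly is an assumption, and the H\'enon map is used only as an illustrative choice of map):

```python
def vertex_label(f, x):
    """Sign-vector label of a vertex x: the hyperoctant containing f(x) - x.
    Coordinates that are exactly zero are assigned +1 by convention
    (an assumption; any fixed tie-breaking rule would do)."""
    return tuple(1 if fx - xi >= 0 else -1 for fx, xi in zip(f(x), x))

# illustrative map: the Henon map with parameters (a, b) = (1.4, 0.3)
def henon(p):
    x, y = p
    return (1.0 - 1.4 * x * x + y, 0.3 * x)

# henon(0,0) - (0,0) = (1, 0): first quadrant (with the zero convention)
assert vertex_label(henon, (0.0, 0.0)) == (1, 1)
# henon(2,0) - (2,0) = (-6.6, 0.6): second quadrant
assert vertex_label(henon, (2.0, 0.0)) == (-1, 1)
```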
A cube is called completely labeled if the intersection of the hyperoctants corresponding to its labels is the zero vector. We can also assign an index to the labeling of a cube. This index turns out to be related to the Brouwer degree (see Proposition \ref{prop:Bekker}). Moreover, the index can be computed via a simple recursive formula (see \eqref{eqn:defn_index}). A non-zero index is a sufficient condition, but not necessary, for the labeling to be complete. To check that one window is correctly aligned with another, it is sufficient to check that the labels of the vertices of the cubical decomposition that lie on the boundary of one of the windows satisfy certain explicit conditions, and that the above mentioned index is non-zero. If a window is correctly aligned with itself, a version of the Sperner Lemma shows the existence of at least one small cube in the cubical decomposition that has non-zero index, and hence is completely labeled. There may also exist completely labeled small cubes that have zero index. In this setting, a small cube with non-zero index yields a true fixed point, while a small cube that is completely labeled yields a numerical approximation of a fixed point. Similarly, in the case of sequences of windows, small cubes inside those windows that have non-zero index yield true periodic orbits/orbits with prescribed itineraries, while small cubes that are completely labeled yield numerical approximations of periodic orbits/orbits with prescribed itineraries. Checking that a small cube has non-zero index can be done via a finite computation. Completely labeled small cubes can be searched for using a Nerve Graph algorithm similar to \cite{Su2002}. In Section \ref{sec:preliminaries}, we recall the classical Sperner Lemma and some generalizations. In Section \ref{sec:sperner_cubical} we provide a new version of the Sperner Lemma, for cubical complexes, which will be used in the subsequent sections.
In Section \ref{sec:windows} we provide sufficient conditions for correct alignment of windows, in terms of the labeling of vertices of a cubical decomposition. We also prove several results on detection -- in terms of the non-zero index/completely labeled cubes -- of true/approximate fixed points, periodic orbits, and orbits with prescribed itineraries. In Section \ref{sec:application} we illustrate the above procedure by an example, in which we find periodic orbits for the H\'enon map. \section{Preliminaries}\label{sec:preliminaries} \subsection{Sperner's Lemma for simplices}\label{sec:sperner} Consider an $n$-simplex $T$, and $T=\bigcup_{i\in I} T_i$, with $I$ finite, a simplicial decomposition of $T$. A labeling of $T$ is a map $\phi:T\to\{1,2,\ldots,n+1\}$. In particular, each vertex $v$ of $T$ and of any of the $T_i$'s is assigned a unique label $\phi(v)\in \{1,2,\ldots,n+1\}$. A simplex $T$ is said to be \emph{completely labeled} if its vertices are assigned all labels from $\{1,2,\ldots,n+1\}$. The labeling $\phi$ is called a \emph{Sperner labeling} if every point $p$ that lies on some face $S$ of $T$ is assigned one of the labels of the vertices of $S$. In the case of a completely labeled simplex, the Sperner condition is equivalent to a \emph{non-degenerate labeling} condition, that no $(n-1)$-dimensional face of $T$ contains points with $(n+1)$ or more different labels. \begin{figure} \includegraphics[width=0.5\textwidth]{sperner} \caption{Sperner labeling of a simplicial decomposition.} \label{fig:sperner} \end{figure} In its most basic form, the Sperner Lemma states the following: \begin{thm}[Sperner Lemma \cite{Sperner1928}] Given an $n$-simplex $T$, a simplicial decomposition $T=\bigcup_{i} T_i$, and a Sperner labeling, the number of completely labeled simplices $T_i$ is odd. In particular, there exists at least one simplex $T_i$ that is completely labeled. \end{thm} See Figure~\ref{fig:sperner}.
The Sperner Lemma can be used to derive an elementary proof of the Brouwer Fixed Point Theorem: any continuous map $f:B\to B$ from a (homeomorphic copy of an) $n$-dimensional ball to itself has a fixed point. See, e.g., \cite{BurnsG2005}. The Sperner Lemma can be used to numerically find fixed points, and it has been used extensively in numerical works, see, e.g., \cite{Allgower2003}. \subsection{Sperner's Lemma for polytopes}\label{sec:bekker} We now describe a more general Sperner Lemma-type of result for polytopes, following \cite{Bekker1995}. A related result can be found in \cite{Su2002}. Let $P$ be a convex $n$-dimensional polytope. We consider a labeling $\phi:P\to \{1,2,\ldots, n+1\}$ of $P$. As in the simplex case, a polytope is said to be \emph{completely labeled} if its vertices are assigned all labels from $\{1, 2, \ldots, n+1\}$. Also, a labeling $\phi:P\to \{1,\ldots,n+1\}$ is said to be \emph{non-degenerate} if no $(n-1)$-dimensional face of $P$ contains points which take $(n+1)$ or more different values. In the general case of a polytope, the non-degeneracy condition on the labeling is not equivalent to the Sperner condition. In the sequel, we will assume that all labelings of the polytopes under consideration are non-degenerate. In applications, such as those below, we often consider a polytope (cube) $P$ divided into a finite number of smaller polytopes (cubes) $P_i$, $i\in I$. In such contexts, we only need to verify that the labels of the vertices of $P$ and the $P_i$'s lying on the faces of $P$ satisfy the \emph{non-degeneracy condition}. That is, we only need to verify such a condition on a finite number of points. We introduce some tools. Consider the standard $n$-dimensional simplex $T=\textrm{conv}(0, e_1,e_2,\ldots ,e_n)\subset \mathbb{R}^n$, where we denote by $(e_1,e_2,\ldots ,e_{n})$ the standard basis of $\mathbb{R}^n$, and by $\textrm{conv}(\cdot)$ the convex hull of a set.
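In dimension $n=1$ the parity statement of the Sperner Lemma can be verified exhaustively: label the left endpoint of a subdivided segment $1$, the right endpoint $2$, interior vertices arbitrarily, and count the edges carrying both labels. A minimal sketch:

```python
import itertools

def complete_edges(labels):
    # number of edges of the subdivision carrying both labels 1 and 2
    return sum(1 for a, b in zip(labels, labels[1:]) if {a, b} == {1, 2})

# Sperner labeling in dimension 1: endpoints labeled 1 and 2,
# interior vertices arbitrary; the count is always odd.
for interior in itertools.product([1, 2], repeat=6):
    labels = (1,) + interior + (2,)
    assert complete_edges(labels) % 2 == 1
```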
Let us declare the standard labeling of the vertices of $T$, given by $\phi(0)=1$ and $\phi(e_i)=i+1$, $i=1,\ldots,n$, as \emph{positively oriented}. We also declare any labeling obtained by an even number of permutations of the labels of the vertices in the standard labeling also \emph{positively oriented}, and any labeling obtained by an odd number of permutations as \emph{negatively oriented}. We define the \emph{oriented index} of a labeling $\phi$ of any oriented, convex polytope $P$, recursively: \begin{equation}\label{eqn:defn_index} \textrm{ind}_P(\phi)=\left\{ \begin{array}{ll} 1, & \hbox{if $\dim(P)=0$ and $\phi(V(P))=\{1\}$;} \\[0.5em] \pm 1, & \hbox{if $\dim(P)=1$ and $\phi(V(P))=\{1,2\}$,}\\ & {\textrm{according to its orientation};}\\[0.5em] 0, & \hbox{if $\dim(P)=k$ and $\phi(V(P))\neq\{1,2,\ldots,k+1\}$;} \\[0.5em] \sum_{S\in\mathscr{F}_{k-1}(P)} \textrm{ind}_S(\phi), & \hbox{if $\dim(P)=k$ and $\phi(V(P))=\{1,2,\ldots,k+1\}$,}\\ & {\textrm{where } \textrm{ind}_S(\phi) \textrm{ is counted with orientation}.}\\ \end{array} \right. \end{equation} Above, $V(P)$ denotes the vertices of $P$, and $\mathscr{F}_{k-1}( P)$ denotes the set of all $(k-1)$-faces of $P$. The orientation is taken into account in the following way. The orientation of a $k$-dimensional polytope $P$ induces an orientation of every $(k-1)$-dimensional face $S$ of $P$, and on all the lower dimensional faces of $S$. In the definition of the index, only those $(k-1)$-dimensional faces that carry all labels $\{1,2,\ldots,k\}$ count in the summation. The definition of the index yields a choice of sign $\pm 1$ to each lower dimensional face carrying all labels which appears in the recursive definition. \begin{rem} By taking into account the orientation of the polytope, we get here a definition of the index slightly different from the one in \cite{Bekker1995}, which uses mod~2 summation in the above recursive definition. 
We note that the definition of the index in \cite{Bekker1995} is inconsistent with some of the results later in that paper, e.g., with Proposition \ref{prop:Bekker} quoted below. \end{rem} Notice that the index of a labeling $\phi$ of a polytope defined above only depends on the values of $\phi$ at the vertices $V(P)$ of $P$. The following result is immediate: \begin{lem}[\cite{Bekker1995}]\label{lem:simplex} If $T$ is a simplex and $\phi:T\to \{1,\ldots,n+1\}$ is a non-degenerate labeling, then \[\textrm{ind}_T(\phi)=\left\{ \begin{array}{ll} \pm 1 , & \hbox{if $T$ is completely labeled;} \\ 0, & \hbox{otherwise.} \end{array} \right.\] \end{lem} Given an $n$-dimensional, oriented, convex polytope $P$, a labeling $\phi:P\to\{1,\ldots,n+1\}$, and the standard $n$-simplex $T$ of vertices $a_1,\ldots,a_{n+1}$, a \emph{realization} of $\phi$ is a continuous map $\Phi: P \to T$, satisfying the following condition: \begin{itemize} \item[(i)] If $v$ is a vertex of $P$ then $\Phi(v)=a_{\phi(v)}$, i.e., $\Phi(v)$ is the vertex $a_i$ of $T$ with the index $i$ equal to the label of $v$; \item[(ii)] If $S$ is a face of $P$ with vertices $v_1,\ldots,v_k$, then $\Phi(S)\subset \textrm{conv}(a_{\phi(v_1)},\ldots, a_{\phi(v_k)})$. \end{itemize} Informally, a realization of $\phi$ is a continuous mapping of $P$ onto $T$ that `wraps' $\partial P$ around $\partial T$, such that the labels of the vertices of $P$ match with the indices $i$ of the vertices of $T$. Such $\Phi$ is in general non-injective. In the sequel we denote by $\deg(\cdot)$ the oriented Brouwer degree of a continuous function (see, e.g., \cite{BurnsG2005}). Recall that for a smooth, boundary preserving map $\Phi:(M,\partial M)\to (N,\partial N)$ between two oriented $n$-dimensional manifolds with boundary, $\deg(\Phi)=({\int_{ M} \Phi^*\eta})/({\int _{ N} \eta})$ where $\eta$ is a volume form on $N$; the definition is independent of the volume form. Let $\omega$ be such that $d\omega=\eta$.
By Stokes' theorem and the fact that $d\Phi^*\omega=\Phi^*d\omega=\Phi^*\eta$, we have $\deg(\Phi)=({\int_{\partial M} \Phi^*\omega})/({\int _{\partial N} \omega})$. This implies the following property of the degree: $\deg(\Phi)=\deg(\partial \Phi)$, where $\partial \Phi:\partial M\to\partial N$ is the map induced by $\Phi$ on the boundaries. Equivalently, the degree of the map $\Phi$ can be defined as the signed number of preimages $\Phi^{-1}(p)=\{q_1,\ldots,q_k\}$ of a regular value $p$ of the map $\Phi$, where each point $q_i$ is counted with a sign $\pm 1$ depending on whether $d\Phi_{q_i}:T_{q_i}M\to T_{p}N$ is orientation preserving or orientation reversing. That is, $\deg(\Phi)=\sum_{q\in \Phi^{-1}(p)}\textrm{sign} (\det (d\Phi_{q}))$, where $p\in N\setminus \partial N$ is a regular value of $\Phi$. The definition of the Brouwer degree extends via homotopy to continuous maps. \newpage \begin{prop}[\cite{Bekker1995}]\label{prop:Bekker} Let $P$ be a convex $n$-dimensional polytope. \begin{itemize} \item[(i)] Any non-degenerate labeling $\phi$ of $P$ admits a realization $\Phi$; \item[(ii)] Any two realizations of the same labeling are homotopic as maps of pairs $(P,\partial P)\mapsto (T,\partial T)$; \item [(iii)] The index $\textrm{ind}_P(\phi)$ of the labeling $\phi$ is equal to the degree $\deg(\Phi)$ of any realization $\Phi$ of $\phi$, up to a sign \[\textrm{ind}_P(\phi)=\pm \deg(\Phi).\] \item[(iv)] If $\textrm{ind}_P(\phi)\neq 0$ then $P$ is completely labeled. \end{itemize} \end{prop} Let us consider a convex, $n$-dimensional polytope $P$ that is subdivided into finitely many $n$-dimensional polytopes $\{P_i\}_{i\in I}$, with $I$ finite, such that $P=\bigcup_{i\in I}P_i$, and for $i\neq j$, $\textrm{int} (P_i)\cap \textrm{int} (P_j)=\emptyset$ and $P_i\cap P_j$ is either empty or a face of both $P_i$ and $P_j$. In particular, each $k$-dimensional face of $P$ is the union of finitely many $k$-dimensional faces of $P_i$'s. 
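In dimension two, the recursion \eqref{eqn:defn_index} reduces to a signed count of the boundary edges carrying the labels $\{1,2\}$, in line with the boundary characterization of the degree above. A sketch, with one consistent choice of sign convention (an edge traversed as $(1,2)$ along the boundary orientation counts $+1$; this convention is an assumption for the sketch):

```python
def ind_polygon(labels):
    """Oriented index of a labeled polygon (labels in {1,2,3}), computed from
    its boundary edges: an edge traversed as (1,2) counts +1, as (2,1)
    counts -1, all other edges count 0. The vertex labels are listed in
    the order of the boundary orientation."""
    total = 0
    n = len(labels)
    for i in range(n):
        a, b = labels[i], labels[(i + 1) % n]
        if (a, b) == (1, 2):
            total += 1
        elif (a, b) == (2, 1):
            total -= 1
    return total

assert ind_polygon([1, 2, 3]) == 1           # completely labeled triangle
assert ind_polygon([1, 2, 3, 1, 2, 3]) == 2  # boundary wraps around twice
assert ind_polygon([1, 2, 1, 2]) == 0        # label 3 missing: index 0
```

A non-zero result certifies a complete labeling; the last example shows that the converse check (labels only) is weaker.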
For the following result from \cite{Bekker1995} we provide an alternative proof. \begin{lem}[\cite{Bekker1995}]\label{lem:sumindices} If the labeling $\phi$ is non-degenerate, then \[\textrm{ind}_P(\phi)=\sum_i \textrm{ind}_{P_i} (\phi).\] \end{lem} \begin{proof} The proof follows by induction on the dimension $n$ of the polytope. When $n=1$ the identity is immediate. For the induction step, we use \eqref{eqn:defn_index}. Note that each $(n-1)$-dimensional face $S$ of $P$ is the union of $(n-1)$-dimensional faces of $P_i$'s. Each $(n-1)$-dimensional face of a $P_i$ that is not lying on a $(n-1)$-dimensional face of $P$ is shared by two polytopes $P_i$ and $P_j$, and so it is counted twice with opposite orientations. Thus, the sum of the indices of the $(n-1)$-dimensional faces of the $P_i$'s reduces to the sum of the indices of the $(n-1)$-dimensional faces of the $P_i$'s that lie on $(n-1)$-dimensional faces of $P$. The fact that the index of each $(n-1)$-dimensional face $S^j$ of $P$ is the sum of the indices of the $(n-1)$-dimensional faces $S^j_i$ of the $P_i$'s that lie on $S^j$ follows from the induction hypothesis. In summary, we have: \begin{equation*}\begin{split}\sum_i\textrm{ind}_{P_i}(\phi)&=\sum_i\sum_{S'\in\mathscr{F}_{n-1}(P_i)} \textrm{ind}_{S'}(\phi)\\&=\sum_i\sum_{S^j_i\in\mathscr{F}_{n-1}(P_i)\cap \mathscr{F}_{n-1}(P)} \textrm{ind}_{S^j_i}(\phi)\\&=\sum_{S^j\in\mathscr{F}_{n-1}(P)} \textrm{ind}_{S^j}(\phi)\\&=\textrm{ind}_{P}(\phi). \end{split}\end{equation*} \end{proof} The following is a generalization of the Sperner Lemma from \cite{Bekker1995}. \begin{thm}[\cite{Bekker1995}]\label{thm:bekker2} Assume that $P$ is an $n$-dimensional polytope, $P=\bigcup_{i\in I}P_i$ is a decomposition of $P$ into polytopes as above, and $\phi:P\to\{1,\ldots,n+1\}$ is a {\emph{non-degenerate labeling}}. If $\textrm{ind}_P(\phi)\neq 0$, then there exists a polytope $P_i$ such that $\textrm{ind}_{P_i}(\phi)\neq 0$; in particular, $P_i$ is completely labeled.
\end{thm} \begin{proof} Follows immediately from Lemma \ref{lem:sumindices} and Proposition \ref{prop:Bekker}. \end{proof} \begin{cor}\label{cor:Bekker} Assume that $P$ is an $n$-dimensional polytope, and $P=\bigcup_{i\in I}T_i$ is a simplicial decomposition of $P$. If a { non-degenerate} labeling $\phi$ satisfies $\textrm{ind}_P(\phi)\neq 0$, then there exists a simplex $T_i$ that is completely labeled. \end{cor} \begin{proof} Follows from Lemma \ref{lem:simplex}. \end{proof} Note that in Corollary \ref{cor:Bekker} the assumption that $P$ is completely labeled alone is not sufficient to ensure that there exists a completely labeled simplex in the decomposition; the condition that $\textrm{ind}_P(\phi)\neq 0$ cannot be dropped. See Example \ref{ex}, (ii), (iii). \begin{ex}\label{ex} (i) Consider the polygon $P$, the simplicial decomposition, and the labeling shown in Fig. \ref{hexagon}-(a). We have $\textrm{ind}_P(\phi)=2$; there exists a completely labeled triangle. (ii) Consider the polygon $P'$, the simplicial decomposition, and the labeling shown in Fig. \ref{hexagon}-(b). We have $\textrm{ind}_{P'}(\phi)=0$; there is no completely labeled simplex. (iii) Consider the polyhedron $P''$, the simplicial decomposition, and the labeling shown in Fig. \ref{hexagon}-(c). We have $\textrm{ind}_{P''}(\phi)=0$; there is no completely labeled simplex. \begin{figure} $\begin{array}{ccc} \includegraphics[width=0.25\textwidth]{hexagon_b.pdf}& \includegraphics[width=0.25\textwidth]{pentagon.pdf}& \includegraphics[width=0.25\textwidth]{doubletetrahedron.pdf} \end{array}$ \caption{Examples of simplicial decomposition of polytopes and of labelings.} \label{hexagon} \end{figure} \end{ex} \section{Sperner's Lemma for cubical complexes}\label{sec:sperner_cubical} In this section we present a new version of the Sperner Lemma for cubical decompositions. The main difference from the previous sections will be the labeling.
For an $n$-dimensional cube and a corresponding cubical decomposition, it will be convenient in Section \ref{sec:windows} to use $2^n$ labels that are $n$-dimensional vectors with coefficients $\pm 1$; whereas, in Section \ref{sec:bekker}, for an $n$-dimensional polytope we have used only $(n+1)$ labels. Hence, we will have to re-define what a complete labeling means in terms of the new labeling convention, and relate it to the old labeling convention. Consider vector-labels $\ell\in \{(\pm 1,\ldots, \pm 1)\}=\mathscr{Z}^n$, where we denote $\mathscr{Z}=\{\pm 1\}$. Each label $\ell=(\pm 1,\ldots, \pm 1)$ corresponds to a hyperoctant \[\mathscr{O}_\ell=\{(x_1,x_2,\ldots, x_n)\in {\mathbb R}^n\,|\, \forall i,\,\ell_i x_i\geq 0 \}.\] Note that $\bigcup_{\ell\in\mathscr{Z}^n} \mathscr{O}_\ell={\mathbb R}^n$. We call a collection of labels $\{\ell_1,\ell_2,\ldots,\ell_{n+1}\}$ \emph{complete} if $\mathscr{O}_{\ell_1}\cap \mathscr{O}_{\ell_2}\cap\ldots\cap \mathscr{O}_{\ell_{n+1}}=\{{\bf 0}\}$. Equivalently, $\{\ell_1,\ell_2,\ldots,\ell_{n+1}\}$ is complete if for each coordinate index $i\in\{1,\ldots,n\}$, there exists a pair of labels $\ell_{j}, \ell_{k}$ such that the $i$-th coordinates of $\ell_{j}$ and $\ell_{k}$ have opposite signs, that is, $\pi_i(\ell_j)=-\pi_i(\ell_k)$. A labeling $\phi$ is said to be \emph{non-degenerate} if no face of $P$ carries a complete set of labels. A convex polyhedral cone is a convex cone in ${\mathbb R}^n$ bounded by a finite collection of hyperplanes of the form $x_k=0$; alternatively, it can be characterized as an intersection of finitely many half-spaces (e.g., spaces of the form $\{x\in{\mathbb R}^n\,|\,x_k\geq 0\}$ for some $k$); see \cite{Stanley2011}.
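The completeness criterion just stated, coordinate-wise presence of both signs among the labels, is straightforward to implement. A minimal sketch:

```python
def is_complete(labels):
    """A collection of sign-vector labels is complete iff the corresponding
    hyperoctants intersect only in the origin, i.e. iff every coordinate
    index sees both a +1 and a -1 among the labels."""
    n = len(labels[0])
    return all(
        any(l[i] == 1 for l in labels) and any(l[i] == -1 for l in labels)
        for i in range(n)
    )

# n = 2: three labels covering both signs in each coordinate -> complete
assert is_complete([(1, 1), (-1, 1), (1, -1)])
# not complete: the second coordinate is always +1, so the intersection
# of the hyperoctants contains the whole ray {0} x [0, infinity)
assert not is_complete([(1, 1), (-1, 1), (-1, 1)])
```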
A \emph{special convex polyhedral cone partition} of ${\mathbb R}^n$ is a collection of $(n+1)$ convex polyhedral cones $\mathscr{N}_j$, $j=1,\ldots,n+1$, satisfying the following properties: \begin{itemize} \item[(a)] $\bigcup _j \mathscr{N}_j={\mathbb R}^n$; \item[(b)] $\textrm{int}({\mathscr{N}_j})\cap \textrm{int}(\mathscr{N}_l)=\emptyset$ for $j\neq l$; \item[(c)] $\bigcap_j \mathscr{N}_j=\{\bf{0}\}$. \end{itemize} \begin{lem}\label{lem:partition} (i) Let $\mathscr{N}_1, \ldots, \mathscr{N}_{n+1}$ be a special convex polyhedral cone partition of ${\mathbb R}^n$. Then any set of $(n+1)$ hyperoctants $\mathscr{O}_{\ell_1},\ldots,\mathscr{O}_{\ell_{n+1}}$, with the property that each $\mathscr{O}_{\ell_i}$ is contained in exactly one $\mathscr{N}_j$, and no two $\mathscr{O}_{\ell_i}$'s are contained in the same $\mathscr{N}_j$, satisfies $\mathscr{O}_{\ell_1}\cap \mathscr{O}_{\ell_2}\cap\ldots\cap \mathscr{O}_{\ell_{n+1}}=\{\bf 0\}$. (ii) Let $\mathscr{O}_{\ell_1},\ldots,\mathscr{O}_{\ell_{n+1}}$ be a set of $(n+1)$ hyperoctants with $\mathscr{O}_{\ell_1}\cap \mathscr{O}_{\ell_2}\cap\ldots\cap \mathscr{O}_{\ell_{n+1}}=\{\bf 0\}$. Then there exists a special convex polyhedral cone partition $\mathscr{N}_1, \ldots, \mathscr{N}_{n+1}$ of ${\mathbb R}^n$ such that each $\mathscr{O}_{\ell_i}$ is contained in exactly one $\mathscr{N}_j$. \end{lem} \begin{proof} (i) We have $\mathscr{O}_{\ell_1}\cap \mathscr{O}_{\ell_2}\cap\ldots\cap \mathscr{O}_{\ell_{n+1}}\subseteq \mathscr{N}_1\cap \ldots \cap \mathscr{N}_{n+1}=\{\bf 0\}$. (ii) Consider a set of hyperoctants $\mathscr{O}_{\ell_1},\ldots,\mathscr{O}_{\ell_{n+1}}$ with $\mathscr{O}_{\ell_1}\cap \mathscr{O}_{\ell_2}\cap\ldots\cap \mathscr{O}_{\ell_{n+1}}=\{\bf 0\}$. Each coordinate hyperplane separates the set of hyperoctants into two non-empty collections, one on each side of the hyperplane (since having all hyperoctants on the same side of a hyperplane would imply that their intersection contains more than the zero vector).
Thus, cutting ${\mathbb R}^n$ by the $n$ hyperplanes ensures that each pair of hyperoctants is on opposite sides of some hyperplane. First, select $\mathscr{N}_1$ as the largest intersection of half-spaces in ${\mathbb R}^n$ (i.e., an intersection of the minimum number of half-spaces) that contains only one hyperoctant $\mathscr{O}_{\ell_{i_1}}$. Then select $\mathscr{N}_2$ as the largest intersection of half-spaces in ${\mathbb R}^n\setminus \mathscr{N}_1$ that contains only one hyperoctant $\mathscr{O}_{\ell_{i_2}}$, with $i_2\neq i_1$. Continue this procedure up to the last hyperoctant $\mathscr{O}_{\ell_{i_{n+1}}}$ in the given collection, which will provide $\mathscr{N}_{n+1}$. \end{proof} The following example shows a simple change of labels: \begin{ex}\label{ex:relabeling} Consider the following special partition into convex polyhedral cones: \begin{equation*}\begin{split} \mathscr{N}_1&=\{x\in{\mathbb R}^n\,|\,x_1\geq 0\},\\ \mathscr{N}_2&=\{x\in{\mathbb R}^n\,|\,x_1\leq 0, x_2\geq 0\},\\ &\cdots \\ \mathscr{N}_{n}&=\{x\in{\mathbb R}^n\,|\,x_1\leq 0, \ldots,x_{n-1}\leq 0,x_{n}\geq 0\},\\ \mathscr{N}_{n+1}&=\{x\in{\mathbb R}^n\,|\,x_1\leq 0, \ldots,x_{n-1}\leq 0,x_{n}\leq 0\}. \end{split}\end{equation*} The corresponding change of labels $\psi$ from vector-labels $\ell\in \mathscr{Z}^n$ to labels $j \in \{1,2,\ldots, n + 1\}$ is given by \begin{equation*}\begin{split}\psi(1,\pm 1, \ldots, \pm 1) &= 1,\\ \psi(-1,1, \pm 1, \ldots, \pm 1) &= 2,\\ &\cdots \\ \psi(-1, \ldots, -1, 1,\pm 1, \ldots,\pm 1) &= \min\{j |\, \ell_j = 1\},\\&\cdots \\ \psi(-1,-1, \ldots, -1) &= n+1.\end{split}\end{equation*} \end{ex} Consider a labeling $\phi:P\to \mathscr{Z}^n$ of an $n$-dimensional polytope $P$. Let $\psi:\mathscr{Z}^n\to\{1,\ldots,n+1\}$ be a change of labels. We can define a re-labeling $\psi\circ\phi:P\to \{1,\ldots,n+1\}$. This re-labeling is as in Section \ref{sec:bekker}, hence all the results from that section can be applied in this context.
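The change of labels in Example \ref{ex:relabeling} admits the closed form $\psi(\ell)=\min\{j \mid \ell_j=1\}$, with $\psi(-1,\ldots,-1)=n+1$; a direct transcription in Python:

```python
def psi(label):
    """Change of labels from the example: the index of the first +1
    coordinate, or n+1 if all coordinates are -1."""
    for j, sign in enumerate(label, start=1):
        if sign == 1:
            return j
    return len(label) + 1

assert psi((1, -1, 1)) == 1    # first +1 in position 1
assert psi((-1, 1, -1)) == 2   # first +1 in position 2
assert psi((-1, -1, -1)) == 4  # all -1: label n+1
```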
In particular, any re-labeling $\psi\circ\phi: P\to \{1,\ldots,n+1\}$ admits a realization. \begin{lem}\label{lem:index_cubical} Let $P$ be a polytope, $\phi:P\to\mathscr{Z}^n$ a labeling, $\psi_1,\psi_2:\mathscr{Z}^n\to\{1,\ldots,n+1\}$ two changes of labels, and $\psi_1\circ\phi,\psi_2\circ\phi:P\to\{1,\ldots,n+1\}$ the corresponding re-labelings. Then \[\textrm{ind}_{P} (\psi_1\circ\phi)=\pm\textrm{ind}_{P} (\psi_2\circ\phi).\] \end{lem} \begin{proof} Let $\Phi_i:P\to T$ be a realization of $\psi_i\circ\phi$, $i=1,2$. By Proposition~\ref{prop:Bekker}, $\textrm{ind}_{P}(\psi_i\circ\phi)=\pm\deg(\Phi_i)$, for $i=1,2$. Each change of labels $\psi_i$ corresponds to a special partition into convex polyhedral cones $\{\mathscr{N}^i_j\}_j$. Either partition can be obtained from the other via a composition of a rotation, a reflection, and a homotopy. These transformations preserve the Brouwer degree up to a sign. Hence $\deg(\Phi_1)=\pm\deg(\Phi_2)$, thus $\textrm{ind}_{P} (\psi_1\circ\phi)=\pm\textrm{ind}_{P} (\psi_2\circ\phi)$. \end{proof} By Lemma \ref{lem:index_cubical}, we can define the index of a labeling $\phi:P\to\mathscr{Z}^n$, up to a sign, by $\textrm{ind}_{P}(\phi)=\pm\textrm{ind}_{P} (\psi \circ\phi)$, where $\psi$ is a change of labels. Now, let us consider an $n$-dimensional cube $C$ and a labeling $\phi:C\to\mathscr{Z}^n$ of $C$. Let $\{C_i\}_i$ be a cubical decomposition of $C$. Then $C$ can be regarded as a polytope by appending to the vertices of $C$ all the vertices of the $C_i$'s lying on the faces of $C$, as well as appending to the faces of $C$ all the faces of the $C_i$'s lying on the faces of $C$. We denote this polytope by $\tilde C$. For $\tilde C$ we have a polytope (cubical) decomposition $\tilde C=\bigcup_i C_i$. The labeling $\phi:C\to\mathscr{Z}^n$ of $C$ can be viewed as a labeling $\phi:\tilde C\to\mathscr{Z}^n$ of $\tilde C$. The following is a version of Sperner's Lemma for cubical decompositions.
\begin{thm}[Sperner's Lemma]\label{thm:sperner_cubical} Let $C=\bigcup_i C_i$ be a cube together with a cubical decomposition, $\tilde C$ the corresponding polytope, and $\phi:\tilde C\to\mathscr{Z}^n$ a { non-degenerate} labeling of $\tilde C$. If $\textrm{ind}_{\tilde C}(\phi)\neq 0$ then there exists at least one cube $C_i$ such that $\textrm{ind}_{C_i}(\phi)\neq 0$; hence $C_i$ is completely labeled relative to $\phi$. \end{thm} \begin{proof} Consider a re-labeling $\psi\circ\phi$ of $\tilde C$. By definition, we have $\textrm{ind}_{\tilde C}(\phi)=\pm\textrm{ind}_{\tilde C}(\psi\circ\phi) \neq 0$. By Theorem \ref{thm:bekker2}, there exists a cube $C_i$ such that $\textrm{ind}_{C_i}(\psi\circ\phi)\neq 0$, and so $C_i$ is completely labeled relative to $\psi\circ\phi$. Hence $\textrm{ind}_{C_i}(\phi)\neq 0$. By Lemma \ref{lem:partition}, a cube is completely labeled relative to $\psi\circ\phi$ if and only if it is completely labeled relative to~$\phi$. \end{proof} \begin{cor}\label{cor:sperner_cubical_1} Let $C=\bigcup_i C_i$ be a cube together with a cubical decomposition, and $\phi:C\to\mathscr{Z}^n$ a { non-degenerate} labeling of $C$. If $\textrm{ind}_{C}(\phi)\neq 0$ then there exists at least one cube $C_i$ such that $\textrm{ind}_{C_i}(\phi)\neq 0$; hence $C_i$ is completely labeled relative to $\phi$. \end{cor} \begin{proof} If the labeling $\phi$ of $C$ is { non-degenerate}, and $\tilde C$ is the polytope obtained from the cubical decomposition $C=\bigcup_iC_i$, it follows that $\textrm{ind}_{\tilde C}(\phi)=\textrm{ind}_{C}(\phi)\neq 0$. Thus Theorem \ref{thm:sperner_cubical} applies. \end{proof} These last two results are the most important for Section \ref{sec:windows}. In a nutshell, they say that given a cubical complex with a non-degenerate, non-zero index labeling, any finer decomposition of the complex into smaller cubes will always have a small cube with non-zero index.
\section{Correctly aligned windows and detection of fixed points/periodic orbits/orbits with prescribed itineraries}\label{sec:windows} In this section we present the definitions of windows and of correct alignment following \cite{GideaZ2004}, with a few minor modifications. Then we associate some labeling to the windows, and characterize correct alignment in terms of that labeling. We also use the labeling to find numerical approximations of fixed points, periodic orbits, and orbits with prescribed itineraries. \subsection{Approximate fixed points and periodic orbits} Given $\delta>0$ and a map $g:{\mathbb R}^n\to {\mathbb R}^n$, we call a point $z=(x_1,\ldots,x_n)\in {\mathbb R}^n$ a \emph{$\delta$-approximate fixed point} of $g$ if $\|g(z)-z\|_\infty<\delta$, where $\|z\|_\infty=\max_{i=1,\ldots,n}|x_i|$. We remark here that there may be no `true' fixed point near a $\delta$-approximate fixed point, that is, $\delta$-approximate fixed points can be `fake' fixed points. Obviously, any `true' fixed point is a \emph{$\delta$-approximate fixed point} for any $\delta>0$. Similarly, a finite collection of points $z_1,\ldots, z_k$ is a \emph{$\delta$-approximate periodic orbit} of $g$ if $\|g(z_j)-z_{j+1}\|_\infty<\delta$, for $j=1,\ldots,k-1$, and $\|g(z_k)-z_{1}\|_\infty<\delta$. As in the case of fixed points, there may be no `true' periodic orbits near a $\delta$-approximate periodic orbit. \subsection{Windows} We consider a discrete dynamical system given by a homeomorphism $f:{\mathbb R}^n\to{\mathbb R}^n$. We define an equivalence relation on the set of homeomorphisms $\chi:{\mathbb R}^n\to{\mathbb R}^n$ by setting $\chi_1\sim\chi_2$ if there exists an open neighborhood $U$ of $[0,1]^n$ in ${\mathbb R}^n$ such that $(\chi_1)_{\mid U}=(\chi_2)_{\mid U}$. We will use the same notation for an equivalence class as for a representative of that class.
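The $\delta$-approximate notions defined above can be checked directly from orbit data; a minimal Python sketch (the maps used in the assertions are hypothetical test examples):

```python
def is_approx_fixed_point(g, z, delta):
    """z is a delta-approximate fixed point of g if ||g(z) - z||_inf < delta."""
    return max(abs(a - b) for a, b in zip(g(z), z)) < delta

def is_approx_periodic_orbit(g, orbit, delta):
    """Points z_1,...,z_k with ||g(z_j) - z_{j+1}||_inf < delta, indices mod k."""
    k = len(orbit)
    return all(
        max(abs(a - b) for a, b in zip(g(orbit[j]), orbit[(j + 1) % k])) < delta
        for j in range(k)
    )

g = lambda z: tuple(0.5 * x for x in z)   # contraction fixing the origin
h = lambda z: (-z[0], -z[1])              # rotation by 180 degrees
assert is_approx_fixed_point(g, (0.01, -0.01), 0.1)
assert not is_approx_fixed_point(g, (1.0, 1.0), 0.1)
assert is_approx_periodic_orbit(h, [(1.0, 0.0), (-1.0, 0.0)], 1e-9)
```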
\begin{defn} A window in ${\mathbb R}^n$ consists of:\begin{itemize} \item[(i)] a homeomorphic copy $D$ of a multidimensional rectangle $[0,1]^n$, \item[(ii)] an equivalence class of homeomorphisms $\chi_{D}:{\mathbb R}^n\to {\mathbb R}^n$ with $\chi_D([0,1]^n)=D$, \item[(iii)] a choice of stable- and unstable-like dimensions, $n_s,n_u\geq 0$, respectively, with $n_s+n_u=n$, \item[(iv)] a choice of an `exit set' $D^-\subset \textrm{bd} (D)$ and of an `entry set' $D^+\subset \textrm{bd}(D)$, given by \begin{equation*} \begin{split} D^-&=\chi_D\left(\partial[0,1]^{n_u} \times[0,1]^{n_s}\right),\\ D^+&=\chi_D\left([0,1]^{n_u}\times\partial[0,1]^{n_s} \right). \end{split} \end{equation*} \end{itemize} \end{defn} We will write $[0,1]^n=[0,1]^{n_u}\times [0,1]^{n_s}$. Given two windows $D_1,D_2$ in ${\mathbb R}^n$, and $\chi_{D_1},\chi_{D_2}$ two representatives of the corresponding equivalence classes of homeomorphisms, we denote by $f_{\chi_{D_1},\chi_{D_2}}:{\mathbb R}^n\to{\mathbb R}^n$ the homeomorphism $f_{\chi_{D_1},\chi_{D_2}}:=\chi_{D_2}^{-1}\circ f \circ \chi_{D_1}$. When there is no risk of ambiguity, we use the simplified notation $f_\chi:= f_{\chi_{D_1},\chi_{D_2}}$. Let $\Upsilon :=\{(x,y)\in {\mathbb R}^{n_u}\times {\mathbb R}^{n_s}\,|\,x\not\in [0,1]^{n_u}\}$. Given two windows $D_1$ and $D_2$ such that $f_\chi ([0,1]^n)\subseteq \Upsilon \cup \left( [0,1]^{n_u}\times (0,1)^{n_s}\right)$, there exists another homeomorphism $\chi'_{D_2}$ from the equivalence class of homeomorphisms associated to $D_2$, such that $f_{\chi'} ([0,1]^n )\subset {\mathbb R}^{n_u}\times (0,1)^{n_s}$, where $f_{\chi'}:={\chi'_{D_2}}^{-1}\circ f \circ \chi_{D_1}$. Note that replacing $\chi_{D_2}$ by $\chi'_{D_2}$ has no effect on the windows $D_1$ and $D_2$. \subsection{Correctly aligned windows in two-dimensions} \label{sec:2D_windows} We first give a definition of correct alignment of windows in the $2$-dimensional case, that is, $n=2$.
\begin{defn}\label{defn:win2d} We say that the window $D_1$ is correctly aligned with $D_2$ under $f$ if there exist corresponding homeomorphisms $\chi_{D_1},\chi_{D_2}$ with the following properties: \begin{itemize} \item[(i)] Case $n_u=1$, $n_s=1$: \begin{itemize}\item[(i.a)] $f_{\chi} ([0,1]^2 )\subset {\mathbb R}\times(0,1)$; \item [(i.b)] $f_{\chi} (\{0\}\times [0,1] ) \subset (-\infty,0)\times {\mathbb R}$ and $f_{\chi} (\{1\}\times [0,1] )\subset (1, +\infty) \times {\mathbb R}$, or, $f_{\chi} (\{0\}\times [0,1] )\subset (1, +\infty) \times {\mathbb R}$ and $f_{\chi} (\{1\}\times [0,1] )\subset (-\infty, 0) \times {\mathbb R}$; \end{itemize} \item[(ii)] Case $n_u=0$, $n_s=2$: $f_{\chi}( [0,1]^2 )\subset (0,1)^2$; \item[(iii)] Case $n_u=2$, $n_s=0$: $f^{-1}_{\chi} ([0,1]^2) \subset (0,1)^2$. \end{itemize} \end{defn} \begin{rem} As noted above, instead of condition (i.a) we can require the more general condition that $f_{\chi} ([0,1]^2 )\subset \Upsilon \cup \left( [0,1]^{n_u}\times (0,1)^{n_s}\right)$. \end{rem} Let $g:{\mathbb R}^2\to{\mathbb R}^2$ be a homeomorphism. Let $(x_1,x_2)$ denote the coordinates of a point $z\in [0,1]^2$ and $(x'_1,x'_2)$ the coordinates of $g(z)$. Let $\Delta z:=(\Delta x_1,\Delta x_2)=(x_1'-x_1,x'_2-x_2)$. Denote the quadrants \begin{equation*}\begin{split}\mathscr{O}_{(1,1)}&=\{(x_1,x_2)\,|\, x_1\geq 0\textrm{ and } x_2\geq 0\},\\ \mathscr{O}_{(-1,1)}&=\{(x_1,x_2)\,|\, x_1\leq 0\textrm{ and } x_2\geq 0\},\\ \mathscr{O}_{(-1,-1)}&=\{(x_1,x_2)\,|\, x_1\leq 0\textrm{ and } x_2\leq 0\}, \\ \mathscr{O}_{(1,-1)}&=\{(x_1,x_2)\,|\, x_1\geq 0\textrm{ and } x_2\leq 0\}.\end{split}\end{equation*} These are closed sets which cover ${\mathbb R}^2$, and $\mathscr{O}_{\ell_1}\cap \mathscr{O}_{\ell_2}\cap \mathscr{O}_{\ell_3}=\{(0,0)\}$ whenever $\ell_1,\ell_2,\ell_3$ are all different. 
With respect to the mapping $g$, to each point $z\in {\mathbb R}^2$ we assign a vector label $(\pm 1,\pm 1)\in\mathscr{Z}^2$ according to the following: \begin{description} \item[Condition O]{$ $} \begin{itemize}\item if $\Delta z\in\textrm{int} (\mathscr{O}_\ell)$ then we assign to $z$ the label $\ell$; \item if $\Delta z\in \textrm{int}(\mathscr{O}_{\ell_1}\cap \mathscr{O}_{\ell_2})$ then we assign to $z$ either label $\ell_1$ or $\ell_2$; \item if $\Delta z=0$, we assign to $z$ any label $\ell$. \end{itemize} \end{description} A square $C\subset {\mathbb R}^2$ labeled according to \emph{Condition O} is said to be \emph{completely labeled} if its vertices contain three different labels $\ell_1,\ell_2,\ell_3$, in which case $\mathscr{O}_{\ell_1}\cap \mathscr{O}_{\ell_2}\cap \mathscr{O}_{\ell_3}=\{(0,0)\}$. Note that if $\Delta z\in \mathscr{O}_{\ell_1}\cap \mathscr{O}_{\ell_2}\cap \mathscr{O}_{\ell_3}$ with $\ell_1,\ell_2,\ell_3$ mutually distinct, then $\Delta z=0$ and so $(x'_1,x'_2)=(x_1,x_2)$. Suppose that $D_1$ is correctly aligned with $D_2$ under $f$. For the mapping $f_\chi:[0,1]^2\to {\mathbb R}^2$ we assign a labeling $\phi:[0,1]^2\to \mathscr{Z}^2$ as per \emph{Condition O}. Denote the vertices of $[0,1]^2$ as follows: $A=(1,0)$, $B=(0,0)$, $C=(0,1)$, $D=(1,1)$. From the definition of correct alignment, we infer the following labeling of the vertices and the edges of $[0,1]^2$: \begin{itemize} \item[(i)] Case $n_u=1$, $n_s=1$. Using the labeling associated to $f_\chi$ yields: \begin{itemize} \item [(i.a)] $A\to (1,1)$, $B\to (-1,1)$, $C\to (-1,-1)$, $D\to (1,-1)$, or $B\to (1,1)$, $A\to (-1,1)$, $D\to (-1,-1)$, $C\to (1,-1)$, \item [(i.b)] $AB\to (1,1)$ or $(-1,1)$, $CD\to (-1,-1)$ or $(1,-1)$; \item [(i.c)] $BC\to (-1,1)$ or $(-1,-1)$, $AD\to (1,1)$ or $(1,-1)$, or $BC\to (1,1)$ or $(1,-1)$, $AD\to (-1,1)$ or $(-1,-1)$; \end{itemize} \item[(ii)] Case $n_u=0$, $n_s=2$.
Using the labeling associated to $f_\chi$ yields: \begin{itemize}\item[(ii.a)] $A\to (-1,1)$, $B\to (1,1)$, $C\to (1,-1)$, $D\to (-1,-1)$, \item[(ii.b)] $AB\to (1,1)$ or $(-1,1)$, $AD\to (-1,1)$ or $(-1,-1)$, $CD\to (-1,-1)$ or $(1,-1)$, $BC\to (1,1)$ or $(1,-1)$; \end{itemize} \item[(iii)] Case $n_u=2$, $n_s=0$. Using the labeling associated to $f^{-1}_\chi$ yields the same labeling rules as in case (ii). \end{itemize} See Fig.~\ref{fig:2D_windows}. A possible re-labeling of the points is $(1,1)\to 1$, $(-1,1)\to 2$, $(1,-1), (-1,-1)\to 3$. \begin{figure} \includegraphics[width=0.75\textwidth]{2D_windows_c.pdf} \caption{2D correctly aligned windows and labeling of $[0,1]^2$} \label{fig:2D_windows} \end{figure} \begin{prop}\label{prop:necessary_sufficient_2D} If $D_1$ is correctly aligned with $D_2$ under $f$, then the labeling of $\chi^{-1}_{D_1}(D_1)=[0,1]^2$ described above { is \emph{non-degenerate}} and has { \emph{non-zero index}}. Conversely, in the case when $n_u=1$, $n_s=1$, if $D_1$, $D_2$ satisfy: \begin{itemize}\item[(a)] $f_{\chi} ([0,1]^2 )\subset {\mathbb R}\times(0,1)$; \item [(b)] $f_{\chi}(\partial [0,1]\times [0,1])\cap [0,1]^2=\emptyset$; \end{itemize} and if the labeling as per \emph{Condition O} { is \emph{non-degenerate}} and of { \emph{non-zero index}}, then $D_1$ is correctly aligned with $D_2$ under $f$. \end{prop} \begin{proof} For the direct statement, the points on the boundary of $[0,1]^2$ inherit the labeling explicitly described above, which is { non-degenerate} and has { non-zero index}. For the converse statement, we proceed by contradiction. If condition (i.b) from Definition \ref{defn:win2d} is not satisfied, it means that the two components of $\partial[0,1]\times [0,1]$ are mapped by $f_\chi$ on the same side of $[0,1]^2$ within the strip ${\mathbb R}\times(0,1)$, which leads to a labeling that fails to be { non-degenerate}. 
\end{proof} \begin{rem}We note that in Proposition \ref{prop:necessary_sufficient_2D}, the converse statement does not include the cases $n_u=0$, $n_s=2$, or $n_u=2$, $n_s=0$, since assuming condition (ii), or (iii) from Definition \ref{defn:win2d}, respectively, automatically yields both correct alignment and { non-degenerate}, { non-zero index} labeling.\end{rem} Assume that $f_\chi$ is a bi-Lipschitz function with Lipschitz constant $L$, relative to the norm $\|\cdot\|_\infty$ on ${\mathbb R}^2$. Consider a subdivision of $[0,1]^2$ into $N^2$ squares $\{C_i\}_{i=1,\ldots,N^2}$ of side $1/N$. We assign a labeling to all points of $[0,1]^2$ according to \emph{Condition O}. Let $\delta>0$ be small, and $N>0$ be large so that $(L+1)/N<\delta$. If $C_i$ is a completely labeled square, then any point $z\in C_i$ is a \emph{$\delta$-approximate fixed point} of $f_\chi$. Indeed, the complete labeling implies, via the Intermediate Value Theorem, that there exist points $\hat z=(\hat x_1, \hat x_2)\in C_i$ and $\check z=(\check x_1, \check x_2)\in C_i$ such that $\hat x'_1 =\hat x_1$ and $\check x'_2=\check x_2$, respectively. Then, for each $z=(x_1,x_2)\in C_i$ we have \begin{equation}\label{eqn:deltafixed}\begin{split}\|f_\chi(z)-z\|_\infty&=\max\{|x'_1-x_1|,|x'_2-x_2|\}\\&\leq \max\{|x'_1-\hat x'_1|+|\hat x'_1 -\hat x_1|+|\hat x_1-x_1|, \\ &\quad\quad\quad\,\,\,|x'_2-\check x'_2|+|\check x'_2 -\check x_2|+|\check x_2-x_2|\} \\&\leq \frac{L+1}{N}\\&<\delta. \end{split}\end{equation} The next statement is a fixed point theorem in the case of a window correctly aligned to itself. We will distinguish between the cases (i), (ii) of Definition \ref{defn:win2d}, and the case (iii), for which the corresponding statement is indicated in parentheses. In the former the labeling as per {Condition O} is done with respect to $f_\chi$, while in the latter the labeling is done with respect to $f^{-1}_\chi$.
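The estimate \eqref{eqn:deltafixed} suggests a simple search procedure: label the corners of each square of the subdivision as per \emph{Condition O} and keep the squares whose corners carry three distinct quadrant labels. A minimal 2D Python sketch (the test map is a hypothetical example; breaking ties toward $+1$ at quadrant boundaries is one of the choices allowed by \emph{Condition O}):

```python
def condition_o_label(dz):
    """Label by the quadrant containing dz, resolving boundary ties toward +1."""
    return tuple(1 if d >= 0 else -1 for d in dz)

def completely_labeled_squares(g, N):
    """Subdivide [0,1]^2 into N^2 squares of side 1/N; return the squares
    whose four corner labels contain at least three distinct quadrant labels."""
    def label(x, y):
        gx, gy = g((x, y))
        return condition_o_label((gx - x, gy - y))
    hits = []
    for i in range(N):
        for j in range(N):
            corners = [((i + a) / N, (j + b) / N) for a in (0, 1) for b in (0, 1)]
            if len({label(x, y) for x, y in corners}) >= 3:
                hits.append((i, j))
    return hits

# Hypothetical test map: g(z) = z/2 has its unique fixed point at the origin,
# so the only completely labeled square is the one containing the origin.
g = lambda z: (0.5 * z[0], 0.5 * z[1])
assert completely_labeled_squares(g, 10) == [(0, 0)]
```

Every point of a completely labeled square found this way is then a $(L+1)/N$-approximate fixed point of the map, by the estimate above.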
\begin{prop}\label{prop:2dfixed} Let $D$ be a window and $\phi:[0,1]^2=\chi_D^{-1}(D)\to\mathscr{Z}^2$ be a labeling associated to $f_\chi$ as per \emph{Condition O} (resp., associated to $f^{-1}_\chi$). (i) If a window $D$ is correctly aligned with itself under $f$, then $f$ has a fixed point in~$D$. (ii) If $\{C_i\}$ is a subdivision of $[0,1]^2=\chi^{-1}_{D}(D)$, then there exists a square $C_*$ in the decomposition { with $\textrm{ind}_{C_*}(\phi)\neq 0$; if $C_*$ further satisfies the \emph{non-degeneracy condition} on its faces, then $f$ has a fixed point in $\chi_D(C_*)$ (resp. $f$ has a fixed point in $\chi_D(f_{\chi}^{-1}(C_*))$).} (iii) Assume that $\chi_D$ is Lipschitz with Lipschitz constant $K>1$, and that $f_\chi$ is bi-Lipschitz with Lipschitz constant $L>1$. Then, given $\delta>0$ and a sufficiently fine subdivision of $[0,1]^2$ into squares $\{C_i\}_{i=1,\ldots,N^2}$ of side $1/N$, so that $K(L+1)/N<\delta$, for every completely labeled square $C_*$, each point $\tilde z\in \chi_D(C_*)$ is a $\delta$-approximate fixed point of $f$ (resp. each point $\tilde z\in \chi_D(f_{\chi}^{-1}(C_*))$ is a $\delta$-approximate fixed point of $f$). \end{prop} \begin{proof} (i) Let $\{C^N_i\}$ be a subdivision of $[0,1]^2$, with $\textrm{diam}(C^N_i)\to 0$ as $N\to\infty$. For each $N$, by applying Corollary \ref{cor:sperner_cubical_1}, there exists { a square $C^N_{i^*_N}\subset [0,1]^2$ with $\textrm{ind}_{C^N_{i^*_N}}(\phi)\neq 0$, hence completely labeled}; we choose and fix such a $C^N_{i^*_N}$. As before, there exist some points $\hat z_{i^*_N}=((\hat x_1)_{i^*_N}, (\hat x_2)_{i^*_N})\in C^N_{i^*_N}$ and $\check z_{i^*_N}=((\check x_1)_{i^*_N}, (\check x_2)_{i^*_N})\in C^N_{i^*_N}$, such that $(\hat x_1)_{i^*_N}=(\hat x'_1)_{i^*_N}$ and $(\check x_2)_{i^*_N}=(\check x'_2)_{i^*_N}$. By compactness, the sequences $\hat z_{i^*_N}$ and $\check z_{i^*_N}$ contain convergent subsequences $\hat z_{i^*_{k_N}}$ and $\check z_{i^*_{k_N}}$, respectively.
Since the diameters of the corresponding completely labeled squares tend to zero, these subsequences approach the same limit $z=(x_1,x_2)$, for which we have $x_1=x'_1$ and $x_2=x'_2$. Hence $z$ is a fixed point for $f_\chi$, so $\tilde{z}=\chi_D(z)$ is a fixed point for $f$. For an alternative proof (non-constructive) see \cite{GideaZ2004}. (ii) Let $\{C_i\}$ be a subdivision as in the statement. { By Corollary \ref{cor:sperner_cubical_1}, there exists a square $C_*$ with $\textrm{ind}_{C_*}(\phi)\neq 0$. If $C_*$ satisfies the non-degeneracy condition}, we further subdivide $C_*$ into smaller squares $\{C^N_i\}$ as above. The proof for (i) implies that there exists a fixed point $z$ for $f_\chi$ in $C_*$, hence $p=\chi_D(z)$ is a fixed point for $f$ in $\chi_D(C_*)$. (iii) Suppose that $C_{i^*}^N$ is a completely labeled square of the subdivision. By \eqref{eqn:deltafixed}, for each $z\in C_{i^*}^N $ we have $\|f_\chi(z)-z\|_\infty<(L+1)/N$. Hence, for each $\tilde z=\chi_D(z)\in \chi_D(C_{i^*}^N)$, we have \[\|f(\tilde z)-\tilde z\|_\infty= \|\chi_D\circ f_\chi\circ \chi^{-1}_D(\chi_D(z))-\chi_D(z)\|_\infty\leq K\|f_\chi(z)-z\|_\infty<K(L+1)/N<\delta. \] In the case of labeling associated to $f^{-1}_\chi$, if $C_{i^*}^N$ is completely labeled, applying \eqref{eqn:deltafixed} to $f_\chi^{-1}$, and using that $f_\chi$ is bi-Lipschitz of Lipschitz constant $L$ yields $\|f^{-1}_\chi(z)-z\|_\infty<(L+1)/N$ for every $z\in C_{i^*}^N$. Thus $\|f_\chi(f^{-1}_\chi(z))- f^{-1}_\chi(z)\|_\infty< L(L+1)/N$, and so, for $\tilde z=\chi_D(f^{-1}_\chi(z))\in \chi_D(f^{-1}_\chi(C_{i^*}^N))$ we have $\|f(\tilde z)-\tilde z\|_\infty<KL(L+1)/N<\delta$. \end{proof} \subsection{Correctly aligned windows in higher-dimensions} As before, let \[\mathscr{O}_{\ell}=\{(x_1,\ldots,x_n)\in\mathbb{R}^n\,|\,\forall i, x_i l_i\geq 0\},\] where $\ell= (l_1,\ldots,l_n)\in\mathscr{Z}^n$, with $l_i=\pm 1$ for $i=1,\ldots,n$. Let $g:{\mathbb R}^n\to {\mathbb R}^n$ be a homeomorphism. 
For a point $z=(x_1,\ldots, x_n)$ in ${\mathbb R}^n$, letting $g(z)=z'=(x'_1,\ldots, x'_n)$, and $\Delta z=(x'_1-x_1,\ldots, x'_n-x_n)$, we assign a label $\ell\in\mathscr{Z}^n$ as follows: \begin{description} \item[Condition O]{$ $} \begin{itemize}\item if $\Delta z\in\textrm{int} (\mathscr{O}_\ell)$ then we assign to $z$ the label $\ell$; \item if $\Delta z\in \textrm{int}(\mathscr{O}_{\ell_1}\cap \ldots \cap \mathscr{O}_{\ell_k})$ for some labels $\ell_1,\ldots,\ell_k$ that are mutually distinct, then we assign to $z$ any one of the labels $\ell_1,\ldots,\ell_k$; \item if $\Delta z=0$ then we assign to $z$ any label $\ell$. \end{itemize} \end{description} A possible re-labeling of the points from labels in $\mathscr{Z}^n$ to labels in $\{1,\ldots,n+1\}$ can be done as in Example \ref{ex:relabeling}. Let $D_1$, $D_2$ be $n$-dimensional windows. We will assume that the homeomorphism $f:\mathbb{R}^n\to\mathbb{R}^n$ satisfies a bi-Lipschitz condition with Lipschitz constant $L$. \begin{defn}\label{defn:win-nd} We say that the window $D_1$ is correctly aligned with $D_2$ under $f$ if there exist corresponding homeomorphisms $\chi_{D_1},\chi_{D_2}$ with the following properties: \begin{itemize} \item[(i)] Case $n_u\neq 0$, $n_s\neq 0$: \begin{itemize}\item[(i.a)] $f_{\chi} ([0,1]^n )\subset {\mathbb R}^{n_u}\times(0,1)^{n_s}$; \item [(i.b)] $f_{\chi} (\partial [0,1]^{n_u} \times [0,1]^{n_s} ) \subset \left({\mathbb R}^{n_u}\times(0,1)^{n_s}\right)\setminus [0,1]^{n}$; \item [(i.c)] There exists $x^*_s\in(0,1)^{n_s}$ such that the map $L:[0,1]^{n_u}\to {\mathbb R}^{n_u}$ defined by $L(x_u)=\pi_u\circ f_\chi(x_u,x^*_s)$ satisfies $\deg(L_{\mid (0,1)^{n_u}})\neq 0$; \end{itemize} \item[(ii)] Case $n_u=0$, $n_s=n$: $f_{\chi}( [0,1]^n )\subset (0,1)^n$; \item[(iii)] Case $n_u=n$, $n_s=0$: $f^{-1}_{\chi} ([0,1]^n) \subset (0,1)^n$. \end{itemize} \end{defn} Suppose that $D_1$ is correctly aligned with $D_2$ under $f$, and let $n_u$ be the unstable-like dimension.
We denote by $\{\mathscr{O}_{\ell_u}\}$ the octants of $\mathbb{R}^{n_u}$, where $\ell_u\in\mathscr{Z}^{n_u}$. In the cases (i) and (ii) of Definition \ref{defn:win-nd}, we will label all points of $C=\chi^{-1}_{D_1}(D_1)$ according to \emph{Condition O} applied to $f_\chi$, and in case (iii) of Definition \ref{defn:win-nd} we will label all points of $C=\chi^{-1}_{D_2}(D_2)$ with labels according to \emph{Condition O} applied to $f^{-1}_\chi$. We describe the labeling below, for each of the cases (i), (ii), (iii) of Definition \ref{defn:win-nd}. \emph{Case (i).} First consider the case when $n_u>0$ and $n_s>0$. Labeling $C=[0,1]^n$ as per \emph{Condition O}, as we did in the $2$-dimensional case in Section \ref{sec:2D_windows}, does not necessarily yield a \emph{non-degenerate} labeling. We will transform $C=[0,1]^n$ into an $n$-dimensional polytope $\widetilde C$ and construct a subdivision $\widetilde C=\bigcup_i C_i$ into smaller cubes, in order to obtain a \emph{non-degenerate} labeling. Moreover, we will construct $\widetilde C$ so that the resulting labeling of its vertices is \emph{complete}. (As we pointed out earlier, completeness is a necessary, but not sufficient condition for the index to be non-zero.) The faces of the resulting polytope $\tilde C$ will consist of the faces of the $C_i$'s in the subdivision that are contained in $\partial([0,1]^n)$. We perform this construction below. \emph{Estimates on $\Delta z$.} Let \begin{equation*}\begin{split}\rho_s&=\min \{d(p,p')\,|\,p\in[0,1]^{n_u}\times\partial[0,1]^{n_s}, p'\in f_\chi([0,1]^{n_u}\times\partial[0,1]^{n_s})\}>0,\\ \rho_u&=\min \{d(p,p')\,|\,p\in\partial[0,1]^{n_u}\times[0,1]^{n_s}, p'\in f_\chi(\partial[0,1]^{n_u}\times[0,1]^{n_s})\}>0,\\ \rho&=\min\{\rho_s,\rho_u\}>0, \end{split}\end{equation*} where $d$ is the distance corresponding to $\|\cdot\|_\infty$. The fact that $\rho_s$ and $\rho_u$, and hence $\rho$, are positive follows from Definition \ref{defn:win-nd} (i.a) and (i.b), respectively.
This fact implies that for each $z\in\partial[0,1]^n$ we have \begin{equation}\label{eqn:delta_z} \|\Delta z\|=\|z'-z\|_\infty=\|f_\chi(z)-z\|_\infty>\rho. \end{equation} In particular, for any point $z\in\partial[0,1]^n$ we have that $\Delta z$ is not contained within a ball of radius $\rho$ around the origin in ${\mathbb R}^{n}$. \smallskip \emph{Coarse cubical decomposition of $C$.} We divide $[0,1]^n$ into $M^n$ identical cubes $\{C_i\}_{i=1,\ldots,M^n}$, of side $1/M$. The quantity $M$ above is required to satisfy the following condition: \begin{description} \item[Condition P]{$ $} \begin{itemize} \item[(P1)] For each $\ell_u\in\mathscr{Z}^{n_u}$, there exists a vertex $v$ of a cube $C_i$ lying on $\partial[0,1]^{n_u}\times[0,1]^{n_s}$, such that the label of $v$ is $\ell=(\ell_u,\ell_s)\in \mathscr{Z}^{n}$, for some $\ell_s\in \mathscr{Z}^{n_s}$. \item[(P2)] $(L+1)/M<\rho/2$. \end{itemize} \end{description} Condition (P1) implies that the cubical decomposition $\{C_i\}_{i=1,\ldots,M^n}$ is fine enough so that for the vertices $z$ of the cubes with faces lying on $\partial[0,1]^{n_u}\times[0,1]^{n_s}$, the vectors $\Delta z=(\Delta z_u,\Delta z_s)$ have the $\Delta z_u$ component taking values in each of the hyperoctants $\mathscr{O}_{\ell_u}$ of ${\mathbb R}^{n_u}$. The argument for this claim is given below. First we note that Definition \ref{defn:win-nd}-(i.a) implies that the corresponding $\pi_s(\Delta z)$ take values in each of the sectors $\mathscr{O}_{\ell_s}$ of ${\mathbb R}^{n_s}$.
Definition \ref{defn:win-nd}-(i.b) and -(i.c) imply that, for some $x^*_s\in(0,1)^{n_s}$ the projection $\pi_{u}$ onto $[0,1]^{n_u}$ of the image of $[0,1]^{n_u}\times\{x ^*_s\}$ under $f_\chi$ contains the rectangle $[0,1]^{n_u}$ inside its interior, and that the boundary of $\pi_u(f_\chi([0,1]^{n_u}\times\{x^*_s\}))$ wraps around the boundary of $[0,1]^{n_u}$, in the sense that for $z\in\partial\left[ \pi_u(f_\chi([0,1]^{n_u}\times\{x^*_s\}))\right]$, the corresponding $\pi_u(\Delta z)$ visits all sectors $\mathscr{O}_{\ell_u}$ of ${\mathbb R}^{n_u}$. It follows that $\Delta z=(\Delta z_u,\Delta z_s)$ take values in a complete set of hyperoctants $\mathscr{O}_{\ell}$ of ${\mathbb R}^{n}$. (This does not mean that $\Delta z$ takes values in all hyperoctants $\mathscr{O}_{\ell}$ of ${\mathbb R}^{n}$.) Thus, the corresponding labeling of the vertices of the $C_i$'s is \emph{complete}. We append the vertices and faces $S_i$ of the $C_i$'s that lie on $\partial[0,1]^n$ to $C$, thus transforming $C$ into a polytope $\widetilde C$ (like a Rubik's cube, see Fig.~\ref{fig:rubik}). \begin{figure} \includegraphics[width=0.25\textwidth]{rubik.png} \caption{Coarse decomposition of $C$ and transformation into a polytope $\widetilde C$.} \label{fig:rubik} \end{figure} Now we discuss Condition (P2). Note first that for $z_1,z_2\in \partial[0,1]^{n}$, we have \begin{equation}\begin{split}\|\Delta z_1- \Delta z_2\|_\infty&=\|(z'_1-z_1)-(z'_2-z_2)\|_\infty\leq \|z'_1-z'_2\|_\infty+\|z_1-z_2\|_\infty\\&\leq (L+1)\|z_1-z_2\|_\infty. \end{split}\end{equation} This implies that the image of any cube $C_i$ under the map $z\mapsto \Delta z$ has diameter at most $(L+1)/M$, which is less than $\rho/2$ by Condition (P2). Hence, the image under $z\mapsto \Delta z$ of every face $S_i$ of a cube $C_i$ that lies on $\partial[0,1]^{n}$, is disjoint from a $\rho$-ball around the origin. Hence no such face $S_i$ can carry a complete set of labels. That is, the labeling is \emph{non-degenerate}.
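Condition (P2) only constrains how fine the coarse decomposition must be: any integer $M>2(L+1)/\rho$ works. A small sketch computing the smallest such $M$ (the function name is ours):

```python
import math

def min_subdivision_P2(L, rho):
    """Smallest integer M with (L+1)/M < rho/2, i.e. M > 2(L+1)/rho."""
    return math.floor(2 * (L + 1) / rho) + 1

M = min_subdivision_P2(L=3.0, rho=0.5)
assert M == 17 and (3.0 + 1) / M < 0.5 / 2  # (L+1)/M = 4/17 < 1/4
```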
When the windows are correctly aligned, as assumed above, it also follows that the index of the labeling is non-zero. Condition (i.a) of correct alignment implies that the index relative to the labels in $\mathscr{Z}^{n_s}$ is non-zero. Also, conditions (i.b) and (i.c), together with (P1), imply that the index relative to the labels in $\mathscr{Z}^{n_u}$ is non-zero. Proposition \ref{prop:Bekker}-(iii), saying that the index of a labeling equals the Brouwer degree of a realization, and the product property of the Brouwer degree, imply that the overall labeling has non-zero index. \emph{Case (ii).} Consider the case when $n_u=0$. We label all points of $C=\chi^{-1}_{D_1}(D_1)$ according to the quadrant $\mathscr{O}_\ell$ where $f_{\chi}(z)-z$ lands, as per \emph{Condition O}. The resulting labeling is \emph{non-degenerate} and of \emph{non-zero index}. \emph{Case (iii).} Consider the case when $n_s=0$. We label all points of $C=\chi^{-1}_{D_2}(D_2)$ according to the quadrant $\mathscr{O}_\ell$ where $f^{-1}_{\chi}(z)-z$ lands, as per \emph{Condition O}. The resulting labeling is \emph{non-degenerate} and of \emph{non-zero index}. \begin{ex} In Fig.~\ref{fig:3D_windows} we illustrate a 3D window that is correctly aligned to itself under some map; the points $A,B,\ldots$, are mapped to the points $A',B',\ldots$, respectively. The unstable-like dimension is $n_u=2$ and the stable-like dimension is $n_s=1$. If we label the vertices $A,B,\ldots$ according to \emph{Condition O}, we obtain $A\to \ell_1:=(-1,1,1)$, $B\to \ell_2:=(-1,1,1)$, $C\to \ell_3:=(-1,-1,1)$, $D\to \ell_4:=(-1,-1,1)$, $E\to \ell_5:=(-1,1,-1)$, $F\to \ell_6:=(-1,1,-1)$, $G\to \ell_7:=(-1,-1,-1)$, $H\to \ell_8:=(-1,-1,-1)$. We notice that the corresponding set of labels is not complete; indeed, for the corresponding octants we have that $\mathscr{O}_{\ell_1}\cap\ldots\cap \mathscr{O}_{\ell_8}=\{x\in {\mathbb R}^n\,|\, x_1\leq 0, x_2=0,x_3=0\}$.
{ In particular, the index is zero.} The labels $\ell_u$ corresponding to the unstable directions take only the values $(-1,1)$ and $(-1,-1)$, so \emph{Condition P} is not satisfied. However, by taking a coarse cubical decomposition of $[0,1]^n$ satisfying \emph{Condition P}, as illustrated in Fig.~\ref{fig:3D_windows}, we can ensure that the labeling is \emph{non-degenerate} and { has non-zero index}. \begin{figure} \includegraphics[width=0.75\textwidth]{3D_windows_c.pdf} \caption{3D correctly aligned windows, and cubical decomposition satisfying \emph{Condition P}.} \label{fig:3D_windows} \end{figure} \end{ex} To summarize, if $D_1$ is correctly aligned with $D_2$ under $f$, in cases (i) and (ii) we start with $C=\chi_{D_1}^{-1}(D_1)$ and assign a labeling associated to $\Delta z=f_\chi(z)-z$, and in case (iii) we start with $C=\chi_{D_2}^{-1}(D_2)$ and assign a labeling associated to $\Delta z=f^{-1}_\chi(z)-z$. In each case, we perform a cubical decomposition of $C$ into smaller cubes $\{C_i\}_i$, and transform $C$ into a polytope $\widetilde C$ with a cubical decomposition $\{C_i\}_i$ such that the labeling is \emph{non-degenerate} and has \emph{non-zero index}. \begin{prop}\label{prop:necessary_sufficient_alignment_nD} Assume $D_1$ is correctly aligned with $D_2$ under $f$, and $\widetilde C=\{C_i\}$ is a subdivision of $[0,1]^n=\chi^{-1}_{D}(D)$ satisfying \emph{Condition P}.
Then the labeling described above, in each of the cases (i), (ii), (iii) of Definition \ref{defn:win-nd}, satisfies \[\textrm{ind}_{\widetilde C}(\phi)\neq 0.\] Conversely, in the case when $n_u>0$, $n_s>0$, if $D_1$, $D_2$ satisfy: \begin{itemize}\item[(a)] $f_{\chi} ([0,1]^n )\subset {\mathbb R}^{n_u}\times(0,1)^{n_s}$; \item [(b)] $f_{\chi} (\partial [0,1]^{n_u} \times [0,1]^{n_s} ) \subset \left({\mathbb R}^{n_u}\times(0,1)^{n_s}\right)\setminus [0,1]^{n}$; \end{itemize} and the corresponding decomposition $\widetilde C=\bigcup _i C_i$ satisfies $\textrm{ind}_{\widetilde C}(\phi)\neq 0$, then $D_1$ is correctly aligned with $D_2$ under $f$. \end{prop} \begin{proof} The direct statement was shown above. For the converse statement, pick any $x^*_s\in(0,1)^{n_s}$ and consider the labeling of the $n_u$-dimensional polytope $[0,1]^{n_u}\times\{x^*_s\}$ as per \emph{Condition O}. As before, it follows that the index of the labeling, relative to the labels in $\mathscr{Z}^{n_u}$, is non-zero. By Proposition \ref{prop:Bekker}, this index equals the Brouwer degree of a realization $\Phi:[0,1]^{n_u}\times\{x^*_s\}\to T^{n_u}$, where $T^{n_u}$ is the $n_u$-dimensional simplex. This degree is non-zero and equals, up to a sign, the degree of $L(\cdot)=\pi_u\circ f_\chi(\cdot,x^*_s)$, which concludes the proof. \end{proof} In the statement below, we distinguish between the cases (i), (ii) of Definition \ref{defn:win-nd} and the case (iii), for which the corresponding statement is indicated in parentheses. \begin{prop}\label{prop:ndfixed} Let $D$ be a window and $\phi:\chi_D^{-1}(D)\to\mathscr{Z}^n$ be a labeling associated to $f_\chi$ as per \emph{Condition O} (resp., associated to $f^{-1}_\chi$). Assume that $\widetilde C=\{C_i\}$ is a subdivision of $[0,1]^n=\chi^{-1}_{D}(D)$ satisfying \emph{Condition P} and \emph{Condition O}. (i) If $D$ is correctly aligned with itself under $f$, then $f$ has a fixed point in $D$.
(ii) Let $\{C'_j\}_{j=1,\ldots,N^n}$ be a fine subdivision of $[0,1]^n$ into cubes of side $1/N$, where $N$ is a multiple of $M$ (so that the family of cubes $C'_j$ subdivides each of the cubes $C_i$). Then there exists a cube $C'_*$ in the decomposition with $\textrm{ind}_{C'_*}(\phi)\neq 0$; if $C'_*$ further satisfies the \emph{non-degeneracy condition}, then $f$ has a fixed point $p$ in $\chi_D(C'_*)$ (resp., $f$ has a fixed point in $\chi_{D}(f_\chi^{-1}(C'_*))$). (iii) Assume that $\chi_D$ is Lipschitz with Lipschitz constant $K>1$, and that $f_\chi$ is bi-Lipschitz with Lipschitz constant $L>1$. Then, given $\delta>0$ and a subdivision $\{C'_j\}_{j=1,\ldots,N^n}$ of $[0,1]^n$ into cubes of side $1/N$ as above, with $K(L+1)/N<\delta$, every point $\tilde z\in \chi_D(C'_j)$ of a completely labeled cube $C'_j$ is a $\delta$-approximate fixed point of $f$ (resp., every point $\tilde z\in \chi_{D}(f_\chi^{-1}(C'_j))$ is). \end{prop} \begin{proof} The proof is similar to the proof of Proposition \ref{prop:2dfixed}, and the details are left to the reader. \end{proof} \subsection{Detection of periodic points and symbolic dynamics} Assume that $p_1$ is a periodic point of period $k$ for $f$; the orbit of $p_1$ is $\{p_1,\ldots,p_k\}$, with $f(p_k)=p_1$. Let $F:({\mathbb R}^n)^k\to ({\mathbb R}^n)^k$ be given by \begin{equation} \label{eqn:F} F(z_1,\ldots, z_k)=(f(z_k), f(z_1),\ldots,f(z_{k-1})). \end{equation} Note that $\{p_1,\ldots,p_k\}$ is a period-$k$ orbit if and only if $F(p_1,\ldots,p_k)=(p_1,\ldots, p_k)$, that is, $(p_1,\ldots,p_k)\in ({\mathbb R}^n)^k$ is a fixed point of $F$. Now consider a finite sequence of windows $D_1,\ldots, D_k$ in ${\mathbb R}^n$. We are interested in periodic orbits $\{p_1,\ldots,p_k\}$ with $p_j\in D_{j}$, $j=1,\ldots, k$. Assume that for $j=1,\ldots, k-1$, $D_j$ is correctly aligned with $D_{j+1}$ under $f$, and also $D_k$ is correctly aligned with $D_{1}$ under $f$.
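The reduction of period-$k$ orbits of $f$ to fixed points of $F$ can be checked on a toy example (our own illustration, with $f(z)=-z$ on ${\mathbb R}$ and $k=2$):

```python
def make_F(f, k):
    # F(z_1,...,z_k) = (f(z_k), f(z_1), ..., f(z_{k-1})): a k-tuple is
    # a period-k orbit of f exactly when it is a fixed point of F.
    def F(zs):
        assert len(zs) == k
        return (f(zs[-1]),) + tuple(f(z) for z in zs[:-1])
    return F

f = lambda z: -z          # {c, -c} is a period-2 orbit for any c != 0
F = make_F(f, 2)
assert F((1.0, -1.0)) == (1.0, -1.0)   # fixed point of F <-> period-2 orbit
assert F((1.0, 1.0)) != (1.0, 1.0)     # not an orbit, hence not fixed
```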
{ Here we only consider correct alignment as in Definition \ref{defn:win-nd}-(i).} See Fig.~\ref{fig:windows}. Denote by $\chi_{D_j}$ the equivalence class of homeomorphisms corresponding to $D_j$, for $j=1,\ldots,k$. Let $\chi_D:({\mathbb R}^n)^k\to ({\mathbb R}^n)^k$ be given by $\chi_D(z_1,\ldots, z_k)=(\chi_{D_1}(z_1),\ldots,\chi_{D_k}(z_{k}))$. \begin{lem}\label{lem:product_aligned} (i) Let $D=\chi_D\left(\Pi_{j=1}^{k} [0,1]^n\right)\subseteq ({\mathbb R}^n)^k$, and \begin{equation}\begin{split} D^{-}&= \chi_D\left(\Pi_{j=1}^{k}\partial[0,1]^{n_u}\times[0,1]^{n_s}\right),\\ D^{+}&= \chi_D\left(\Pi_{j=1}^{k} [0,1]^{n_u}\times\partial[0,1]^{n_s}\right). \end{split}\end{equation} Then $D$ is an $(nk)$-dimensional window, with $(n_uk)$ unstable-like and $(n_sk)$ stable-like dimensions. (ii) If for $j=1,\ldots, k-1$, $D_j$ is correctly aligned with $D_{j+1}$ under $f$, and $D_k$ is correctly aligned with $D_{1}$ under $f$, then $D$ is correctly aligned with $D$ under $F$. \end{lem} \begin{proof} (i) Follows from elementary set theory. (ii) Follows from elementary set theory and from the product property of the Brouwer degree. See \cite{GideaZ2004}. \end{proof} \begin{figure} \includegraphics[width=1.0\textwidth]{windows.pdf} \caption{Periodic sequence of correctly aligned windows.} \label{fig:windows} \end{figure} We associate to each rectangle $D_j$, $j=1,\ldots,k$, an $n$-dimensional polytope $P_j$, obtained by dividing each underlying cube $\chi^{-1}_{D_j}(D_j)=[0,1]^n$ into $(M_j)^n$ cubes of side $1/M_j$, where $M_j$ is chosen large enough so that \emph{Condition P} is satisfied.
The cubical decomposition of each window $D_j$ determines a { \emph{coarse rectangular decomposition}} of $\chi^{-1}_D(D)=([0,1]^{n})^{k}$ into multi-dimensional rectangles of the form \begin{equation}\label{eqn:cubic_coarse}\begin{split}C_{\alpha}=&(C_1)_{\alpha_1}\times (C_2)_{\alpha_2}\times \cdots \times (C_k)_{\alpha_k},\\&\textrm { where } {\alpha}=(\alpha_1,\ldots, \alpha_k)\in \mathscr{A}:=\{1,\ldots,(M_1)^n\}\times\cdots \times\{1,\ldots,(M_k)^n\}.\end{split} \end{equation} Further, we divide each $D_j$ into small cubes $\{(C_j)_\beta\}_{\beta=1,\ldots,(N_j)^n}$, of side $1/N_j$, where $N_j$ is a multiple of $M_j$, obtaining a { \emph{fine rectangular decomposition}}. For each vertex $z_j$ of a cube $(C_j)_\beta$ in the cubical decomposition of $D_j$, we assign a label $\ell_j=(\pm 1, \ldots, \pm 1)\in\mathscr{Z}^n$, based on the sector $\mathscr{O}_\ell\subseteq{\mathbb R}^n$ where the displacement vector $\Delta z_j=f_{\chi_{D_j,D_{j+1}}}(z_j)-z_j$ lands. Relative to the product window $D$, the product of the polytopes $P_j$ can also be regarded as an $(nk)$-dimensional polytope. The cubical decomposition of each window $D_j$ determines a { rectangular} decomposition of $D$ of the form \begin{equation}\label{eqn:cubic_fine}\begin{split}C_{\beta}=&(C_1)_{\beta_1}\times (C_2)_{\beta_2}\times \cdots \times (C_k)_{\beta_k},\\&\textrm { where } {\beta}=(\beta_1,\ldots, \beta_k)\in\mathscr{B}:=\{1,\ldots,(N_1)^n\}\times\cdots \times\{1,\ldots,(N_k)^n\}.\end{split} \end{equation} Note that for each $\alpha \in \mathscr{A}$, $(C_\alpha \cap C_\beta)_{\beta\in\mathscr{B}}$ forms a { rectangular} decomposition of $C_\alpha$. Each vertex $z=(z_1,\ldots,z_k)$ of a cube $C_{\beta}$ is assigned a label $\ell=(\ell_1,\ldots,\ell_k)\in(\mathscr{Z}^{n})^{k}$ whose component $\ell_j\in \mathscr{Z}^{n}$ is the label corresponding to the vertex $z_j$, $j=1,\ldots, k$, according to \emph{Condition O}.
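The product labeling just described amounts to concatenating the component labels; a minimal sketch (the function names are ours):

```python
def component_label(f, z):
    # Condition O label of a vertex z_j under the map playing the role
    # of f_{chi_{D_j, D_{j+1}}}: the sign pattern of f(z) - z.
    dz = [fi - zi for fi, zi in zip(f(z), z)]
    return tuple(1 if d >= 0 else -1 for d in dz)

def product_label(maps, zs):
    # A vertex z = (z_1,...,z_k) of a product cube C_beta receives the
    # concatenation l = (l_1,...,l_k) of its component labels.
    label = ()
    for f, z in zip(maps, zs):
        label += component_label(f, z)
    return label

# Two one-dimensional windows: displacements +0.5 and -0.5.
maps = [lambda z: [z[0] + 0.5], lambda z: [z[0] - 0.5]]
assert product_label(maps, ([0.1], [0.9])) == (1, -1)
```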
\begin{prop}\label{prop:periodic-orbit-higher-dimensions} (i) Let $D_1,\ldots, D_k$ be a sequence of windows as above, with $D_j$ correctly aligned with $D_{j+1}$ under $f$ for $j=1,\ldots, k-1$, and $D_k$ correctly aligned with $D_{1}$ under $f$. Then there exists a periodic orbit $p_1,\ldots,p_k$ of $f$ of period $k$ with $p_j\in \textrm{int}(D_j)$, for $j=1,\ldots,k$. (ii) If $\{(C_j)_{\alpha}\}$ is a coarse subdivision of $[0,1]^n=\chi^{-1}_{D_j}(D_j)$, then for each $j$ there exists $(C_j)_{\alpha^*_j}$ with $\textrm{ind}_{(C_j)_{\alpha^*_j}}(\phi)\neq 0$; if each $(C_j)_{\alpha^*_j}$ further satisfies the \emph{non-degeneracy condition} on its faces, then $f$ has a periodic orbit $p_1,\ldots,p_k$ with $p_j\in \chi_{D_j}((C_j)_{\alpha^*_j})$, for $j=1,\ldots,k$. (iii) Assume that $\chi_{D_j}$ is Lipschitz with Lipschitz constant $K_j>1$, and that $f_{\chi_{D_j,D_{j+1}}}$ is bi-Lipschitz with Lipschitz constant $L_j>1$. Let $\delta>0$ and consider a sufficiently fine subdivision of each $\chi_{D_j}^{-1}(D_j)=[0,1]^n$ into cubes $\{(C_j)_{\beta_j}\}_{\beta_j=1,\ldots,N_j^n}$ as above, so that $\max_j\left\{\frac{K_{j+1}(L_j+1)}{N_j}\right\}<\delta$. Then for every sequence of cubes $(C_j)_{\beta^*_j}\subseteq \chi^{-1}_{D_j}(D_j)$ that are completely labeled, every sequence of points $\tilde z_j=\chi_{D_j}(z_j)\in \chi_{D_j}( (C_j)_{\beta^*_j})$, $j=1,\ldots,k$, is a $\delta$-approximate periodic orbit of period $k$. \end{prop} \begin{proof} (i) By Lemma \ref{lem:product_aligned}, the correct alignment of $D_j$ with $D_{j+1}$ under $f$ implies that $D$ is correctly aligned with itself under $F$. The cubical decompositions $\{(C_j)_\alpha\}$ of $[0,1]^n=\chi^{-1}_{D_j}(D_j)$, $j=1,\ldots,k$, determine a coarse decomposition $C_\alpha$ of $([0,1]^{n})^{k}$ as in \eqref{eqn:cubic_coarse}.
By Theorem \ref{thm:sperner_cubical}, there exists a cube $C_{\alpha_*}=(C_1)_{\alpha^*_1}\times(C_2)_{\alpha^*_2}\times\ldots\times (C_k)_{\alpha^*_k}$ in this decomposition with $\textrm{ind}_{C_{\alpha_*}}(\phi)\neq 0$. The labeling of the vertices of $C_{\alpha^*}$ with respect to the map $F_{\chi_D}$ induces a labeling of each cube $(C_j)_{\alpha^*_j}$ with respect to the corresponding map $f_{\chi_{D_j},\chi_{D_{j+1}}}$ such that $\textrm{ind}_{(C_j)_{\alpha^*_j}}(\phi)\neq 0$. Take now a sequence of fine cubical decompositions $(C_j)_{\beta_j^N}$ of $\chi^{-1}_{D_j}(D_j)$, as in \eqref{eqn:cubic_fine}, with $\textrm{diam}((C_j)_{\beta_j^N})\to 0$ as $N\to\infty$. By the above argument, within each subdivision one can find a cube $(C_j)_{\beta_{j}^{*N}}$ with $\textrm{ind}_{(C_j)_{\beta_{j}^{*N}}}(\phi)\neq 0$. Since such a labeling is also complete, for each $i\in\{1,\ldots,n\}$ there exist points $ z^{i,N}_j\in (C_j)_{\beta_{j}^{*N}}$ such that $\pi_i\left(f_{\chi}( z^{i,N}_j)- z^{i,N}_{j+1}\right)=0$. By successively extracting convergent subsequences in each $i$-coordinate, for each $j$ we obtain $n$ subsequences of the $ z^{i,N}_j$'s in $(C_j)_{\alpha^*_j}$ that converge to the same limit $ z_j\in (C_j)_{\alpha^*_j}$, and such that $\pi_i\left(f_{\chi}( z_j)- z_{j+1}\right)=0$ for all $i=1,\ldots,n$, that is, $f_\chi( z_j)= z_{j+1}$. Hence $p_j=\chi_{D_j}( z_j)$, $j=1,\ldots,k$, is a periodic sequence of period $k$ for $f$. (ii) Using the non-degeneracy condition on the labeling and the fact that $\textrm{ind}_{(C_j)_{\alpha^*_j}}(\phi)\neq 0$ for $j=1,\ldots,k$, we apply the previous argument to the collection of cubes $(C_j)_{\alpha^*_j}$, obtaining a periodic orbit $p_j\in \chi_{D_j}((C_j)_{\alpha^*_j})$ for $f$, where $j=1,\ldots,k$. (iii) Consider the points $z^{i,N}_j\in (C_j)_{\beta^*_j}$ as before, with $N$ large enough as in the statement.
Let $z_j$ be an arbitrary point in $(C_j)_{\beta^*_j}$, and $\hat z_j=\chi_{D_j}(z_j)$, for $j=1,\ldots,k$. We have \begin{equation*}\begin{split}\|f(\hat z_j)-\hat z_{j+1}\|_\infty &=\|\chi_{D_{j+1}}\circ f_{\chi_{D_{j}}, \chi_{D_{j+1}}}\circ \chi^{-1}_{D_{j}}(\chi_{D_{j}}(z_{j}))-\chi_{D_{j+1}} (z_{j+1})\|_\infty\\&\leq K_{j+1}\|f_{\chi_{D_{j}}, \chi_{D_{j+1}}}(z_j)-z_{j+1}\|_\infty\\ &= K_{j+1}\max_{i=1,\ldots, n}|\pi_i(f_{\chi_{D_{j}}, \chi_{D_{j+1}}}(z_j))-\pi_i(z_{j+1})|\\ &\leq K_{j+1}\max_{i=1,\ldots,n}\left ( \left|\pi_i(f_{\chi_{D_{j}}, \chi_{D_{j+1}}}(z_j))-\pi_i(f_{\chi_{D_{j}}, \chi_{D_{j+1}}}(z^{i,N}_j))\right|\right . \\&{}\qquad\qquad + \left . \left|\pi_i(f_{\chi_{D_{j}}, \chi_{D_{j+1}}}(z^{i,N}_j) -\pi_i(z^{i,N}_{j+1})\right|\right .\\&\qquad\qquad + \left . \left|\pi_i( z^{i,N}_{j+1})-\pi_i(z_{j+1})\right|\right )\\ &\leq\frac{K_{j+1}(L_j+1)}{N}\\&<\delta. \end{split}\end{equation*} Thus, $\hat z_1,\ldots, \hat z_k$ is a $\delta$-approximate periodic orbit of period $k$ for $f$. \end{proof} \begin{prop}\label{prop:symbolic-dynamics-higher-dimensions} (i) Assume that $D_1,\ldots, D_k$ is a sequence of windows as above. Let $\Gamma=(\gamma_{ij})_{i,j=1,\ldots,k}$ be a transition matrix, where $\gamma_{ij}=0$ or $1$; assume that for any $i,j$ with $\gamma_{ij}=1$, $D_i$ is correctly aligned with $D_j$ under $f$. Consider the topological Markov chain associated to the transition matrix $\Gamma$ defined by \[\Omega_\Gamma=\{\omega:=(\omega_t)_{t\in\mathbb{Z}}\,|\,\omega_t\in\{1,\ldots,k\} \textrm { and } \gamma_{\omega_t\omega_{t+1}}=1\textrm{ for all } t\},\] and the shift map $\sigma:\Omega_\Gamma\to\Omega_\Gamma$ given by $(\sigma(\omega))_t=\omega_{t+1}$, $t\in\mathbb{Z}$. Then, for every sequence $\omega \in\Omega_\Gamma$, there exists an orbit $(p_t)_{t\in\mathbb{Z}}$ of $f$, with $p_t:=f^t(p_0)\in \textrm{int}(D_{\omega_t})$, for~all~$t$. 
(ii) Assume that $\chi_{D_j}$ is Lipschitz with Lipschitz constant $K>1$, and that $f_{\chi_{D_{j},D_{l}}}$ is bi-Lipschitz with Lipschitz constant $L>1$, for all $j,l\in\{1,\ldots,k\}$. Let $\delta>0$, $T\in\mathbb{Z}^+$, and $\omega\in \Omega_\Gamma$. Consider a sufficiently fine subdivision of each $\chi_{D_j}^{-1}(D_j)=[0,1]^n$, $j=1,\ldots,k$, into cubes $\{(C_j)_{\beta_j}\}_{\beta_j=1,\ldots,N^n}$ as above, so that $K(L+1)/N<\delta$. Then for every sequence of cubes $(C_{\omega_t})_{\beta^*_t}\subseteq \chi^{-1}_{D_{\omega_t}}(D_{\omega_t})$ that are completely labeled, every sequence of points $\tilde z_t=\chi_{D_{\omega_t}}(z_{\omega_t})\in \chi_{D_{\omega_t}}( (C_{\omega_t})_{\beta^*_t})$, $t=1,\ldots,T$, is a $\delta$-approximate orbit of length $T$, in the following sense: \[d(f(\tilde z_t), \tilde z_{t+1})<\delta, \textrm { for } t=1,\ldots,T-1.\] \end{prop} \begin{proof} (i) It is enough to prove that for each $\omega\in\Omega_\Gamma$ and the infinite sequence of windows $D_{\omega_t}$, $t\in\mathbb{Z}$, there exists a point $p_0$ in $D_{\omega_0}$ such that $f^t(p_0)\in D_{\omega_t}$ for all $t$. This follows from the following claims: \emph{Claim 1. If $\{D_t\}_{t=1,\ldots, k}$ is a sequence of windows such that for every $t=1,\ldots, k-1$, $D_t$ is correctly aligned with $D_{t+1}$ under $f$, then there exists an orbit segment $(p_t)_{t=1,\ldots,k}$ such that $p_{t+1}=f(p_t)$, and $p_t\in D_t$ for all $t$.} \emph{Proof of Claim 1.} We can always define a continuous map $\widehat f$ such that $D_k$ is correctly aligned with $D_1$ under $\widehat f$. Then, similarly to \eqref{eqn:F}, we define the map \begin{equation} \label{eqn:Fhat} \widehat F(z_1,\ldots, z_k)=(\widehat f(z_k), f(z_1),\ldots,f(z_{k-1})).
\end{equation} Denoting $\chi_D(z_1,\ldots, z_k)=(\chi_{D_1}(z_1),\ldots,\chi_{D_k}(z_{k}))$, as in Lemma \ref{lem:product_aligned} and Proposition \ref{prop:periodic-orbit-higher-dimensions}, we obtain that $\chi_D(\prod_{t=1}^{k} [0,1]^n)$ is correctly aligned with itself under $\widehat F$, hence there is a fixed point for $\widehat F$. This yields an orbit of $f$ as in the claim; the fact that $p_1=\widehat{f}(p_k)$ is irrelevant for the dynamics. \emph{Claim 2. If $D_t$, $t\in\mathbb{Z}$, is a sequence of windows such that for every $t$, $D_t$ is correctly aligned with $D_{t+1}$ under $f$, then there exists an orbit $(p_t)_{t\in\mathbb{Z}}$ such that $p_{t+1}=f(p_t)$, and $p_t\in D_t$ for all $t$.} \emph{Proof of Claim 2.} By \emph{Claim 1}, for each finite sequence of windows \[D_{-N}, \ldots,D_0,\ldots, D_{N},\] there is a point $p_0^N\in D_0$ such that $f^t(p^N_0)\in D_t$ for all $t\in\{-N,\ldots, N\}$. Taking a convergent subsequence $p^{k_N}_0$ of $p^N_0$ with $p^{k_N}_0\to p_0$ as $N\to\infty$, we obtain that the orbit of $p_0$ is as claimed. (ii) The proof follows in the same way as for Proposition \ref{prop:periodic-orbit-higher-dimensions}-(iii). \end{proof} For a statement related to Proposition \ref{prop:symbolic-dynamics-higher-dimensions} see \cite{gidea1999conley}. \section{Application}\label{sec:application} In this section we illustrate the methodology developed in this paper on a simple example. Namely, we consider the H\'enon map, defined as $f(x,y)=(a-x^2+by,x)$ for $a=1.25$ and $b=0.3$. We will use the Sperner lemma-based approach to show the existence of a period-$7$ orbit.
\begin{figure} \includegraphics[width=0.75\textwidth]{Screen_Shot_004.png} \caption{The window $D$ (green) and its image $f^7(D)$ (blue) under the seventh iterate of the map.} \label{fig:window_seven_iterate} \end{figure} \begin{figure} \includegraphics[width=0.75\textwidth]{Screen_Shot_006.png} \caption{Sperner labeling of $D$; the label $1$ is shown in blue, $2$ in green, and $3$ in red.} \label{fig:Sperner_labeling} \end{figure} \begin{figure}[h] \includegraphics[width=0.75\textwidth]{grid.pdf} \caption{Completely labeled square determined by the grid} \label{fig:completely_labeled} \end{figure} \begin{figure} \includegraphics[width=0.75\textwidth]{Screen_Shot_002.png} \caption{Approximate period-$7$ orbit.} \label{fig:period_seven} \end{figure} We build a window $D$ around the point $(-0.12,-1.36)$, which is a `first guess' of a period-seven point, and compute its seventh iterate $f^7(D)$. See Fig.~\ref{fig:window_seven_iterate}. We define a fine grid on $D$ and we label the points of the grid according to \emph{Condition O}. We further reduce the labeling to only three labels $(1,1)\rightarrow 1$, $(-1,1)\rightarrow 2$, and $(1,-1), (-1,-1)\rightarrow 3$, as in Section \ref{sec:2D_windows}. This labeling is shown color coded in Fig.~\ref{fig:Sperner_labeling}. It is easy to see that the boundary of $D$ has a { non-degenerate labeling} and $\textrm{ind}_D(\phi)=1$, thus $D$ is correctly aligned with itself under $f^7$. A completely labeled square in the grid decomposition occurs where the three different labels `meet'; this square has vertices at $x=-0.124198$, $y=-1.36279$ (red), $x=-0.124197$, $y=-1.36279$ (red), $x=-0.124198$, $y=-1.36279$ (blue), $x=-0.124197$, $y= -1.36279$ (green); see Fig.~\ref{fig:completely_labeled}. The corresponding approximate period-$7$ orbit is shown in Fig.~\ref{fig:period_seven}. { It is easy to see that the above square has non-zero index.
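The grid computation described above can be reproduced in a few lines (our own minimal sketch; the window, grid step, and reduced three-label scheme follow the text, but the exact numbers are illustrative):

```python
A, B = 1.25, 0.3

def henon(p):
    x, y = p
    return (A - x * x + B * y, x)

def f7(p):
    # Seventh iterate of the Henon map.
    for _ in range(7):
        p = henon(p)
    return p

def reduced_label(p):
    # Condition O label of the displacement f^7(p) - p, reduced to the
    # three labels (1,1)->1, (-1,1)->2, (1,-1),(-1,-1)->3.
    dx, dy = f7(p)[0] - p[0], f7(p)[1] - p[1]
    if dy < 0:
        return 3
    return 1 if dx >= 0 else 2

# Label a fine grid on a small window around the first guess (-0.12, -1.36).
x0, y0, h, n = -0.13, -1.37, 0.0005, 41
labels = {(i, j): reduced_label((x0 + i * h, y0 + j * h))
          for i in range(n) for j in range(n)}

# A completely labeled square is a grid cell whose corners carry all three
# labels; any point of such a cell approximates the period-7 point.
hits = [(i, j) for i in range(n - 1) for j in range(n - 1)
        if {labels[i, j], labels[i + 1, j],
            labels[i, j + 1], labels[i + 1, j + 1]} == {1, 2, 3}]
```

Refining $h$ shrinks any located square and hence sharpens the approximation.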
Thus, there exists a true period-$7$ orbit with an initial point near the square.} \section*{Acknowledgement} Both authors are grateful to Meir Retter, who helped with the computer code for the example in Section \ref{sec:application}. The first author is grateful to Zhonggang (Zeke) Zeng, who made the authors aware of the numerical analysis literature related to the Sperner Lemma, and to Kathleen Dexter-Mitchell, who read an early version of this work. \bibliographystyle{alpha}
https://arxiv.org/abs/0907.4812
Superstability and Finite Time Extinction For C_0-Semigroups
A new approach to superstability and finite time extinction of strongly continuous semigroups is presented, unifying known results and providing new criteria for these conditions to hold, analogous to the well-known Pazy condition for stability. That finite time extinction implies superstability, which is in turn equivalent to several (both known and new) conditions, follows from this new approach in a consistent fashion. Examples showing that the converse statements fail are constructed, providing in particular an answer to a question of Balakrishnan on superstable systems not exhibiting finite time extinction.
\section{Introduction} The study of $C_{0}$-semigroups as a means to understand systems, particularly systems modeled by (partial) differential equations, has a long and rich history. A central concept in this study is the notion of exponential stability (often referred to simply as stability). A related stronger condition known as superstability has been the focus of much research (e.g. \cite{Bal04}, \cite{Bal81}, \cite{NR92}, \cite{RW95}, \cite{Lu01}), in particular its connection with finite time extinction. Balakrishnan \cite{Bal04} poses the following: Are there physical (i.e. differential operator) examples of superstability which are not of type extinction-in-finite-time? We provide a positive answer to this question with a constructive example (in fact many similar examples can easily be obtained). We also collect, clarify and extend the existing results in the field with a new unified approach. The notion of superstability first appeared (in a very rough form) in the seminal work of Hille and Phillips \cite{HP57}, who were concerned primarily with its mathematical aspects and related properties, particularly the relationship between the spectrum of the infinitesimal generator and the stability of the semigroup. Later work refined and extended this notion, applying more complicated machinery (\cite{NR92}, \cite{RW95}). More recently Balakrishnan \cite{Bal04} (and others) have become interested in the superstability phenomena arising in the control theory of (physical) systems. The key ingredient in our work is a new approach to the concepts of stability and superstability focusing on the ``entry times'' of the system into balls about the origin (in the Banach space). This approach, which has a certain probabilistic flavor though it is not in itself probabilistic, allows us to unify the existing results in the field with (largely) new proofs.
More importantly, we obtain analogues of certain well-known results about stability for superstable and finite-time-extinction systems. In particular, an analogue of Pazy's condition \cite{Paz83} for stability is given for both superstability and finite time extinction. \section{Main Results} Our main result characterizing the types of stability is: \begin{theorem*} Let $\{T(t)\}$ be a $C_{0}$-semigroup of bounded linear operators on a Banach space $X$. Define the relative entry time for each $r \in \mathbb{N}$ as \[ u_{r} = \sup \{ t_{r+1}(x) - t_{r}(x) : \| x \| \leq 1 \} \quad\text{where}\quad t_{r}(x) = \inf \{ t \geq 0 : \| T(t^{\prime}) x \| \leq e^{-r} \text{ for all $t^{\prime} \geq t$} \} \] Then \begin{align*} \text{(i)}\quad & \text{$\{ T(t) \}$ is stable} & \text{if and only if} &&& \limsup_{r} u_{r} < \infty \\ \text{(ii)}\quad & \text{$\{ T(t) \}$ is superstable} & \text{if and only if} &&& \lim u_{r} = 0 \\ \text{(iii)}\quad & \text{$\{ T(t) \}$ has finite time extinction} & \text{if and only if} &&& \sum_{r} u_{r} < \infty \end{align*} \end{theorem*} and our Pazy-type condition is: \begin{theorem*} Let $\{ T(t) \}$ be a $C_{0}$-semigroup.
Then, if for some $a > 0$, \begin{align*} \text{(i)} \quad &\int_{a}^{\infty} \| T(t) \|^{p} dt < \infty\quad \text{for some $0 < p < \infty$} \quad & \text{then} \quad & \text{$\{T(t)\}$ is stable} \\ \text{(ii)} \quad &\int_{a}^{\infty} \big{|} \log \| T(t) \| \big{|}^{-p} dt < \infty \quad \text{for some $1 < p < \infty$} \quad & \text{then} \quad & \text{$\{T(t)\}$ is stable} \\ \text{(iii)} \quad &\int_{a}^{\infty} \big{|}\log \| T(t) \| \big{|}^{-1} dt < \infty \quad & \text{then} \quad & \text{$\{T(t)\}$ is superstable} \\ \text{(iv)} \quad &\lim_{p\downarrow 0} \int_{a}^{\infty} \big{|} \log \| T(t) \| \big{|}^{-p} dt < \infty \quad & \text{then} \quad & \text{$\{T(t)\}$ has finite time extinction} \end{align*} \end{theorem*} \section{Preliminaries} A family $\{T(t)\}_{t\geq 0}$ of bounded linear operators on a Banach space $X$ is called a {\bf strongly continuous semigroup} (or {\bf $C_{0}$-semigroup}) when $T(0)=I, \; T(t+s)=T(t)T(s)$ for all $t,s \geq 0$, and $\lim _{t \downarrow 0} T(t) = I$ in the strong operator topology ($T(t)x \to x$ as $t \downarrow 0$ for all $x \in X$). As is well-known, this implies that the map $t \mapsto T(t)$ is (strongly) continuous. For a strongly continuous semigroup $\{T(t)\}_{t\ge 0}$, define $\mathcal{D}$ to be the set of all $x\in X$ such that $\lim_{t\downarrow 0}t^{-1}(T(t)x-x)$ exists. The {\bf infinitesimal generator} of the semigroup $\{T(t)\}_{t\ge 0}$ is the operator $A$ on $X$, with the domain $D(A)= \mathcal{D}$, given by \[ Ax=\lim\limits_{t\downarrow 0}\frac{T(t)x-x}{t},\;x\in D(A). \] The name ``infinitesimal generator'' is justified by the fact that \[ Ax=\frac{d(T(t)x)}{dt}\Big{|}_{t=0},\;x\in D(A). \] The pair $(A,\mathcal{D})$ and the semigroup $\{T(t)\}_{t\geq 0}$ uniquely determine one another.
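The defining limit can be checked numerically on the simplest example, the scalar semigroup $T(t)x = e^{-t}x$ on $X = \mathbb{R}$, whose generator is $A = -1$ with $D(A) = \mathbb{R}$ (our own toy illustration):

```python
import math

def T(t, x):
    # Scalar C_0-semigroup T(t)x = e^{-t} x; its generator is Ax = -x.
    return math.exp(-t) * x

x = 2.0
for h in (1e-2, 1e-4, 1e-6):
    quotient = (T(h, x) - x) / h
    # The difference quotient converges to Ax = -x with O(h) error.
    assert abs(quotient - (-x)) < 2.0 * x * h
```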
A strongly continuous semigroup $\{T(t)\}_{t\geq 0}$ is called {\bf exponentially stable} (or just {\bf stable}) when there exist constants $M > 0$ and $\rho > 0$ such that \[ \| T(t) \| \leq M e^{-\rho t} \quad\text{for all $t \geq 0$}. \] Equivalently, define the {\bf stability index} to be \[ \sup \{ \nu > 0 : (\exists M > 0) \| T(t) \| \leq M e^{-\nu t} \quad\text{for all $t \geq 0$} \} \] and stability is then the requirement that the index be positive. The {\bf growth characteristic} is \[ \omega_{0} = \lim_{t\to\infty} \frac{\log \| T(t) \|}{t} \] which, as is well-known, is equal to $-\nu$ where $\nu$ is the stability index (when the semigroup is stable). It is then natural to define {\bf superstability} to be the condition that the growth characteristic is $\omega_{0} = -\infty$. Alternatively, superstability can be defined as the equivalent condition that the operators $T(t)$, $t > 0$, be quasinilpotent (recall that an operator $T$ is quasinilpotent when $spec(T) = \{ 0 \}$). A system is said to have {\bf finite time extinction} when there is some $t_{0} \geq 0$ such that $T(t)x = 0$ for all $t \geq t_{0}$ and all $x \in X$ with $\| x \| \leq 1$. A semigroup $\{ T(t) \}$ is {\bf nilpotent} when there exists $t_{0}$ such that $T(t_{0}) = 0$. The smallest possible choice of $t_{0}$ such that $T(t^{\prime}) = 0$ for all $t^{\prime} > t_{0}$ is called the {\bf index of nilpotency} for the semigroup. The reader should note that in what follows we often defer the proofs until after all results are stated, the purpose being to stress the similarities among the theorems characterizing these three concepts (perhaps the most useful aspect of our approach). \section{Final Entry Times} \begin{definition} Let $\{T(t)\}$ be a $C_{0}$-semigroup on a Banach space $X$.
For each $x \in X$ and $r \in \mathbb{N}$ the \textbf{\emph{final entry time}} of $x$ into the $e^{-r}$-ball is \[ t_{r}(x) := \inf \{ t \geq 0 : \| T(t^{\prime}) x \| \leq e^{-r} \text{ for all $t^{\prime} \geq t$} \} \] and the \textbf{final entry time} of the $1$-ball into the $e^{-r}$-ball (referred to from here on as just the final entry time of the $e^{-r}$-ball) is \[ t_{r} := \sup \{ t_{r}(x) : \| x \| \leq 1 \} \] where we adopt the (usual) convention that the infimum of the empty set is $\infty$ (that is, $t_{r}(x) = \infty$ when no such time exists).\\ \\ The \textbf{\emph{relative entry time}} of $x$ into the $e^{-r}$-ball is \[ u_{r}(x) := \begin{cases} t_{r+1}(x) - t_{r}(x) &\quad\text{when $t_{r+1}(x) < \infty$} \\ \infty &\quad\text{when $t_{r+1}(x) = \infty$} \end{cases} \] and the \textbf{\emph{relative entry time}} of the $1$-ball into the $e^{-r}$-ball (referred to from here on as just the relative entry time of the $e^{-r}$-ball) is \[ u_{r} := \sup \{ u_{r}(x) : \| x \| \leq 1 \} \] \end{definition} \begin{lemma}\label{L:2} For any $r$ we have $u_{r} = t_{r+1} - t_{r}$ (meaning that when either or both of $t_{r}, t_{r+1}$ equal $\infty$ then $u_{r} = \infty$). \end{lemma} \begin{proof} First note that $u_{r} = \infty$ if and only if $t_{r+1} = \infty$, and that $t_{r} = \infty$ implies $t_{r+1} = \infty$, so we need only handle the case that all three are finite. By definition, \[ u_{r} + t_{r} = \sup \{ u_{r}(x) : \|x\|\leq 1 \} + \sup \{ t_{r}(x) : \|x\|\leq 1\} \geq t_{r+1}. \] On the other hand, there exists a sequence $x_{n}$ such that $t_{r}(x_{n}) \to t_{r}$. So for any $\epsilon > 0$ and sufficiently large $n$ we have $t_{r}(x_{n}) > t_{r} - \epsilon$ and so $t_{r} < t_{r}(x_{n}) + \epsilon$. By definition $t_{r+1} \geq t_{r+1}(x)$ for all $x$, in particular for the $x_{n}$. Then \[ t_{r+1} - t_{r} \geq t_{r+1}(x_{n}) - t_{r}(x_{n}) - \epsilon \geq u_{r} - \epsilon. \] As $\epsilon$ is arbitrary, the claim follows.
\end{proof} \begin{lemma}\label{L:e-r} For any $x$ and $r$ with $\| x \| > e^{-r}$ we have $\| T(t_{r}(x)) x \| = e^{-r}$. Moreover, $\| T(t_{r}) \| = e^{-r}$. \end{lemma} \begin{proof} This follows directly from the strong continuity of $T(t)$ (right continuity is automatic, while left continuity follows from the uniform boundedness principle). \end{proof} \begin{lemma}\label{L:1} The sequence $u_{r}$ is nonincreasing in $r$: $u_{r+1} \leq u_{r}$. Hence the limit $\lim_{r} u_{r}$ always exists. \end{lemma} \begin{proof} First note that if $u_{r}$ ever attains $\infty$ then in fact $t_{r+1} = \infty$ at that point and for all subsequent $r$, meaning that $u_{r}$ remains $\infty$ from then on. So we need only handle the case when $u_{r} < \infty$ for all $r$ (and therefore assume that $t_{r} < \infty$ for all $r$). Now suppose that $u_{r} > u_{r-1}$ for some $r$. Then there is some $x$ such that $u_{r}(x) > u_{r-1}$ (since $u_{r}$ is the supremum over all $x$). Set \[ y = e T(t_{r}(x)) x \quad\text{ so that }\quad T(t)y = e T(t + t_{r}(x))x \quad\text{ for all $t \geq 0$} \] By definition of $t_{r+1}(x)$ we have that \[ \| T(t + t_{r}(x))x \| \leq e^{-r-1} \quad\text{ if and only if }\quad t + t_{r}(x) \geq t_{r+1}(x) \] Then \[ \| T(t) y \| = e \| T(t + t_{r}(x)) x \| \leq e^{-r} \quad\text{ if and only if }\quad t \geq t_{r+1}(x) - t_{r}(x) = u_{r}(x) \] which means that $t_{r}(y) = u_{r}(x)$. Now $t_{r-1}(y) = 0$ since $\| T(t^{\prime} + t_{r}(x)) x \| \leq e^{-r}$ for all $t^{\prime} \geq 0$ by definition of $t_{r}(x)$, and so \[ \| T(t^{\prime}) y \| = e \| T(t^{\prime} + t_{r}(x)) x \| \leq e e^{-r} = e^{-(r-1)} \quad\quad\text{ for all $t^{\prime} \geq 0$} \] Hence $u_{r-1}(y) = u_{r}(x)$. But this means that \[ u_{r}(x) > u_{r-1} \geq u_{r-1}(y) = u_{r}(x) \] is a contradiction. Therefore $u_{r} \leq u_{r-1}$ for all $r$ as claimed.
\end{proof} \begin{lemma}\label{L:stoptime} If the $T(t)$ are (not necessarily proper) contractions (i.e., $\| T(t) \| \leq 1$) then \[ t_{r}(x) = \sup \{ t \geq 0 : \| T(t) x \| \geq e^{-r} \} \] which is to say the $t_{r}$ are ``stopping times''. \end{lemma} \begin{proof} The $T(t)$ being contractions forces, for any $q > 0$, that \[ \| T(t+q)x \| = \| T(q) T(t) x \| \leq \| T(t)x \| \| T(q) \| \leq \| T(t) x \| \] by the semigroup property, so $t \mapsto \| T(t) x \|$ is nonincreasing. \end{proof} \section{The Entry Time Growth Characteristic $\omega_{0}^{(ET)}$} \begin{definition} The \textbf{\emph{entry time growth characteristic}} of a $C_{0}$-semigroup $\{ T(t) \}$ is defined by \[ \omega_{0}^{(ET)} := -\big{(} \lim_{r} u_{r} \big{)}^{-1} \] which always exists by Lemma \ref{L:1}. \end{definition} We now show that the entry time growth characteristic is equal to the usual growth characteristic $\omega_{0}$ defined previously for stable semigroups. Note that if the semigroup is not stable then $\omega_{0}^{(ET)} = 0$. \begin{theorem}\label{T:omega0} For a stable $C_{0}$-semigroup $\{ T(t) \}$, \[ \omega_{0}^{(ET)} = \omega_{0} = \inf_{t\geq 0}\frac{\log \| T(t) \|}{t} = \lim_{t\to\infty} \frac{\log \| T(t) \|}{t} \] \end{theorem} \begin{proof} That $\omega_{0}$ (the usual growth characteristic) is equal to the infimum above is well-known. Define $\omega(t) = \frac{1}{t}\log \| T(t) \|$. Then \[ (t + s) \omega(t + s) = \log \| T(t + s) \| \leq \log \| T(t) \| \| T(s) \| = \log \| T(t) \| + \log \| T(s) \| = t \omega(t) + s \omega(s) \] so $t \mapsto t\omega(t)$ is subadditive and therefore, by Fekete's subadditive lemma, \[ \lim_{t \to \infty} \frac{t \omega(t)}{t} = \lim_{t \to \infty} \omega(t) \] exists (possibly equal to $-\infty$). Call this limit $\omega$. Set $w_{r} = \omega(t_{r})$. By Lemma \ref{L:e-r} we know that $w_{r} = - \frac{r}{t_{r}}$.
Now $\lim_{r} w_{r} = \omega$ and \[ - \frac{r+1}{t_{r+1}} = - \frac{r}{t_{r}}\frac{t_{r}}{t_{r+1}} - \frac{1}{t_{r+1}} \] and so \[ w_{r+1} = w_{r} \frac{t_{r}}{t_{r+1}} - \frac{1}{t_{r+1}} \] which means that \[ w_{r+1} t_{r+1} - w_{r} t_{r} = -1 \] Consider the case when $\omega > -\infty$. Taking limits in the above, $\lim_{r} \omega u_{r} = -1$ and so $\omega_{0}^{(ET)} = \omega$. Now consider the case when $\omega = -\infty$. Suppose that $\omega_{0}^{(ET)} > -\infty$. Then $\lim_{r} u_{r} = c > 0$ and so \[ -1 = t_{r+1}w_{r+1} - t_{r} w_{r} = t_{r} (w_{r+1} - w_{r}) + u_{r} w_{r+1} \leq u_{r}w_{r+1} \] since $w_{r+1} - w_{r} \leq 0$. But then \[ -1 \leq \lim_{r} u_{r} w_{r+1} = c (-\infty) = -\infty \] contradicting that $\omega_{0}^{(ET)} > -\infty$. Therefore in either case $\omega_{0}^{(ET)} = \omega$. Now we show that $\omega = \omega_{0}$. Since we know $\omega_{0} = \inf_{t \geq 0} \omega(t)$ and $\omega = \lim_{t \to \infty} \omega(t)$ we already have that $\omega_{0} \leq \omega$. For any positive integer $n$ and any $s > 0$ we have \[ \omega(ns) = \frac{1}{ns} \log \| T(s)^{n} \| \leq \frac{1}{ns} \log \| T(s) \|^{n} = \omega(s) \] and therefore \[ \lim_{t \to \infty} \omega(t) = \lim_{n \to \infty} \omega(ns) \leq \omega(s) \] which means that \[ \omega = \lim_{t \to \infty} \omega(t) \leq \inf_{t \geq 0} \omega(t) = \omega_{0} \] Therefore $\omega = \omega_{0}$ and the proof is complete. \end{proof} \section{Equivalence of Stability Notions} We now present the main theorems characterizing the various notions of stability. One of our aims in this paper is to collect and clarify the various characterizations of these notions. To this end we include several known characterizations and provide new proofs using our techniques. Specifically, the equivalences in this section, excepting the conditions which involve the relative entry times $u_{r}$, are known (see, e.g., \cite{Bal04}).
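Before turning to the equivalence theorems, the identity $\omega_{0}^{(ET)} = \omega_{0}$ of Theorem \ref{T:omega0} is easy to check numerically. The following sketch (in Python) assumes a hypothetical scalar decay profile $\| T(t) \| = M e^{\omega_{0} t}$, with $M$ and $\omega_{0}$ chosen purely for illustration:

```python
import math

# Hypothetical decay profile ||T(t)|| = M * exp(omega0 * t); M and omega0 are
# assumptions for illustration, not derived from any particular semigroup.
M, omega0 = 2.0, -3.0

def t_r(r):
    # final entry time of the e^{-r}-ball: smallest t with M e^{omega0 t} <= e^{-r},
    # i.e. t_r = (r + log M) / (-omega0)
    return (r + math.log(M)) / (-omega0)

u = [t_r(r + 1) - t_r(r) for r in range(50)]  # relative entry times u_r = t_{r+1} - t_r
omega_ET = -1.0 / u[-1]                       # omega_0^{(ET)} = -(lim_r u_r)^{-1}

print(u[0], omega_ET)  # u_r is constantly 1/3 here, and omega_0^{(ET)} = -3 = omega_0
```

Here $u_{r}$ is constant, so the limit is trivial; for non-exponential profiles the existence of $\lim_{r} u_{r}$ is exactly the content of Lemma \ref{L:1}.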
\begin{theorem}\label{T:stable} For a $C_{0}$-semigroup $\{ T(t) \}$ and $\nu > 0$, the following are equivalent: \begin{align*} \text{(i)} \quad&\text{$\{T(t)\}$ is \textbf{\emph{stable}} with \textbf{\emph{stability index}} $\nu$}: \quad(\forall \rho < \nu) (\exists M > 0) (\forall t \geq 0) \| T(t) \| \leq Me^{-\rho t} \\ \text{(ii)} \quad&\lim_{r} u_{r} = \nu^{-1} < \infty \\ \text{(iii)} \quad&spec (T(t)) \subseteq \{ \lambda \in \mathbb{C} : |\lambda| \leq e^{-t\nu} \} \\ \text{(iv)} \quad&\omega_{0} = -\nu \end{align*} \end{theorem} \begin{theorem}\label{T:stableA} If $\{ T(t) \}$ is a stable $C_{0}$-semigroup with stability index $\nu$ and $A$ is the generator of $T(t)$ with domain $D$ then $spec (A) \subseteq \{ \lambda \in \mathbb{C} : Re\ \lambda < -\nu \}$. The converse does not hold. \end{theorem} \begin{remark} The spectrum of the generator, $spec (A)$, depends very delicately on the domain of definition (see the counterexamples below). In particular, the operator $A$ treated as having full domain may well have spectrum much larger than $spec (A)$. \end{remark} \begin{theorem}\label{T:superstable} For a $C_{0}$-semigroup $\{ T(t) \}$, the following are equivalent: \begin{align*} \text{(i)} \quad&\text{$\{T(t)\}$ is \textbf{\emph{superstable}}}: \quad\text{$\{T(t)\}$ is stable with stability index $\infty$} \\ \text{(ii)} \quad&\lim_{r} u_{r} = 0 \\ \text{(iii)} \quad&(\forall \nu > 0) (\exists M_{\nu} > 0) (\forall t \geq 0) \| T(t) \| \leq M_{\nu}e^{-\nu t} \\ \text{(iv)} \quad&\text{$T(t)$ are all quasinilpotent: $spec (T(t)) = \{ 0 \}$} \\ \text{(v)} \quad&\omega_{0} = -\infty \\ \end{align*} \end{theorem} \begin{remark} The constants $M_{\nu}$ in condition (iii) must tend to infinity as $\nu \to \infty$ since otherwise the semigroup will be identically $0$. \end{remark} \begin{theorem}\label{T:superstableA} If $\{ T(t) \}$ is a superstable $C_{0}$-semigroup with generator $A$ and domain $D$ then $spec (A) = \emptyset$. The converse does not hold. 
\end{theorem} \begin{theorem}\label{T:finitetime} For a $C_{0}$-semigroup $\{ T(t) \}$ on a Banach space $X$ with generator $A$ having domain $D$, and $0\leq k < \infty$ the following are equivalent: \begin{align*} \text{(i)} \quad&\text{ $\{T(t)\}$ has \textbf{\emph{finite time extinction at time $k$}}}: \\ &\quad\quad(\forall x \in X) (\exists t_{\infty}(x) \geq 0) (\forall t \geq t_{\infty}(x)) T(t)x = 0 \text{ and } \sup \{t_{\infty}(x) : \|x\| \leq 1\} = k \\ \text{(ii)} \quad&\sum_{r} u_{r} = k < \infty \\ \text{(iii)} \quad&(\forall \nu > 0) (\exists M_{\nu} > 0) (\forall t \geq 0) \| T(t) \| \leq M_{\nu} e^{-\nu t} \text{ and } \sup_{\nu > 0} \frac{\log M_{\nu}}{\nu} = k \\ \text{(iv)} \quad&(\exists M > 0) (\forall \nu \geq 0) (\forall t \geq 0) \| T(t) \| \leq M e^{- \nu (t - k)} \\ \text{(v)} \quad&\text{$T(t)$ is nilpotent with nilpotency index $k$: $T(q) = 0$ for $q > k$ and $T(q) \ne 0$ for $q < k$} \\ \text{(vi)} \quad&\text{the resolvent function $R(\lambda,A)$ is entire and } \big{\|}R(\lambda,A)\big{\|} \leq C(1 + |\lambda|)^{-N}e^{k |Re \lambda|} \text{ for some constants $C,N$} \\ \end{align*} \end{theorem} \begin{remark} In condition (i), the definition of finite time extinction, the $t_{\infty}$ can be chosen uniformly on bounded sets, in particular on balls of finite radius around the origin; however, $t_{\infty}$ cannot in general be chosen uniformly over $x$ unless the underlying measure space is compact (e.g., $L^{2}[0,1]$ rather than $L^{2}[0,\infty)$). \end{remark} \begin{theorem} Extinction in finite time implies superstability and superstability implies stability. The converses of both statements are false. \end{theorem} \begin{theorem}\label{T:C} Let $\{ T(t) \}$ be a $C_{0}$-semigroup.
Then, if for some $a > 0$, \begin{align*} \text{(i)} \quad &\int_{a}^{\infty} \| T(t) \|^{p} dt < \infty\quad \text{for some $0 < p < \infty$} \quad & \text{then} \quad & \text{$\{T(t)\}$ is stable} \\ \text{(ii)} \quad &\int_{a}^{\infty} \big{|} \log \| T(t) \| \big{|}^{-p} dt < \infty \quad \text{for some $1 < p < \infty$} \quad & \text{then} \quad & \text{$\{T(t)\}$ is stable} \\ \text{(iii)} \quad &\int_{a}^{\infty} \big{|}\log \| T(t) \| \big{|}^{-1} dt < \infty \quad & \text{then} \quad & \text{$\{T(t)\}$ is superstable} \\ \text{(iv)} \quad &\lim_{p\downarrow 0} \int_{a}^{\infty} \big{|} \log \| T(t) \| \big{|}^{-p} dt < \infty \quad & \text{then} \quad & \text{$\{T(t)\}$ has finite time extinction} \end{align*} \end{theorem} \begin{remark} Stability can only occur when $\| T(t) \|$ is eventually bounded by $1$ (that is, $t_{0} < \infty$). Taking $a = t_{0}$ will cause the integrals to converge whenever there is some value of $a$ that causes convergence. \end{remark} \section{Proofs of Equivalences} As usual, let $\{T(t)\}$ be a $C_{0}$-semigroup with generator $A$ having dense domain $D$ on the Banach space $X$. \begin{proof} (of Theorem \ref{T:stable}). Assume condition (ii) holds: \[ \lim_{r} u_{r} = \nu^{-1} < \infty. \] Fix $\epsilon > 0$. Then there exists $r^{*}$ such that \[ u_{r} < \nu^{-1} + \epsilon \quad\text{ for all }\quad r \geq r^{*}. \] Hence for $r \geq r^{*}$, \[ t_{r} - t_{r^{*}} \leq (r - r^{*})(\nu^{-1} + \epsilon). \] For any $t \geq r^{*} (\nu^{-1} + \epsilon) + t_{r^{*}}$ pick $r \geq r^{*}$ such that \[ r(\nu^{-1} + \epsilon) + t_{r^{*}} \leq t < (r + 1)(\nu^{-1} + \epsilon) + t_{r^{*}}. \] Since $t \geq r(\nu^{-1} + \epsilon) + t_{r^{*}} \geq t_{r^{*}} + (r - r^{*})(\nu^{-1} + \epsilon) \geq t_{r}$ we have that $\| T(t) \| \leq e^{-r}$ and \[ t < (r + 1)(\nu^{-1} + \epsilon) + t_{r^{*}} \quad\text{implies}\quad r > \frac{t - t_{r^{*}}}{\nu^{-1} + \epsilon} - 1.
\] Then \[ \| T(t) \| \leq e^{-r} < e^{1 - \frac{t - t_{r^{*}}}{\nu^{-1} + \epsilon}} = e^{1 + \frac{t_{r^{*}}}{\nu^{-1} + \epsilon}} e^{-t \frac{1}{\nu^{-1}+\epsilon}} \] So $\{ T(t) \}$ has stability index at least $\frac{1}{\nu^{-1} + \epsilon}$. Since $\epsilon$ was arbitrary, condition (i) holds. Conversely, assume (i) holds. Then for any $\rho < \nu$ there is $M$ such that $\|T(t)\| \leq Me^{-\rho t}$. For $t \geq \frac{r + \log M}{\rho}$ we then have that $\| T(t) \| \leq e^{-r}$. Hence \[ t_{r} \leq \frac{r + \log M}{\rho} \] Suppose $\lim u_{r} > \frac{1}{\rho} + 2\delta$ for some $\delta > 0$. Then for sufficiently large $r^{\prime}$ we have $u_{r} > \frac{1}{\rho} + \delta$ for $r \geq r^{\prime}$ so \[ t_{r + 1} = t_{r^{\prime}} + u_{r^{\prime}} + \cdots + u_{r} > t_{r^{\prime}} + (r - r^{\prime})\frac{1}{\rho} + (r - r^{\prime})\delta \] and therefore \[ \frac{r + 1 + \log M}{\rho} - (r - r^{\prime})\frac{1}{\rho} > (r - r^{\prime})\delta \] so \[ \frac{r^{\prime} + 1 + \log M}{\rho} + r^{\prime}\delta > r \delta \] but the left-hand side is constant while the right-hand side tends to $\infty$ as $r \to \infty$. This means that $\lim u_{r} \leq \frac{1}{\rho}$. Since $\rho < \nu$ is arbitrary we have (ii). Now assume (iv) holds. Then by Gelfand's spectral radius formula, \[ \sup \big{|} spec (T(t)) \big{|} = \lim_{n} \| T(t)^{n} \|^{\frac{1}{n}} = e^{\omega_{0} t} = e^{-t \nu}. \] Hence (iii) holds. Likewise, if (iii) holds then by Gelfand's formula, $\omega_{0} = -\nu$ so (iv) holds. The equivalence of (i) and (ii) with (iv) is a direct consequence of Theorem \ref{T:omega0}. This completes the proof that (i) through (iv) are equivalent. \end{proof} \begin{proof} (of Theorem \ref{T:stableA}). This is well-known. \end{proof} \begin{proof} (of Theorem \ref{T:superstable}). The equivalences follow from identical arguments to those for the case of stable semigroups (simply replace $\nu^{-1}$ by $0$). \end{proof} \begin{proof} (of Theorem \ref{T:superstableA}).
This follows from Theorem \ref{T:stableA}: if $z \in spec(A)$ then the stability index $\nu$ is at least $- Re(z)$ but a superstable semigroup has stability index $\infty$. \end{proof} \begin{proof} (of Theorem \ref{T:finitetime}). Assume that (ii) holds. Then for any $x$ with $\| x \| \leq 1$ we have that \[ t_{r+1}(x) - t_{0}(x) = \sum_{j=0}^{r} u_{j}(x) \leq \sum_{j=0}^{r} u_{j} \to k \] hence (i) holds. Conversely, assume (i) holds and suppose (ii) fails. If $\sum_{r} u_{r} = \ell < k$ then \[ t_{\infty}(x) \leq \ell < k \quad\text{for all $x$} \] contradicting (i). So it must be that $\sum_{r} u_{r} = \ell > k$. By Lemma \ref{L:2} we have \[ \sum_{r}u_{r} = \sum_{r} (t_{r+1} - t_{r}) = t_{\infty} - t_{0} \] and there is then a sequence $x_{n}$ such that $t_{\infty}(x_{n}) \to \ell > k$ contradicting (i). Assume (ii) holds. For $\nu > 0$ pick $r^{*}$ such that \[ \sup_{r\geq r^{*}} u_{r} < \nu^{-1} \quad\text{and set}\quad M_{\nu} = e^{1 + \nu t_{r^{*}}}. \] Then, as in the proof of stability, we have \[ r\nu^{-1} \leq t - t_{r^{*}} < (r+1)\nu^{-1}\quad\text{implies}\quad \|T(t)\| \leq M_{\nu} e^{-t \nu}. \] Now \[ \frac{\log M_{\nu}}{\nu} = \frac{1}{\nu} + t_{r^{*}(\nu)} \quad \text{and} \quad r^{*}(\nu) \to \infty \quad\text{as $\nu \to \infty$} \] hence $\frac{\log M_{\nu}}{\nu} \to \sum_{r} u_{r} = k$. So (iii) holds. Assume (iii) holds. Then \[ M_{\nu} \leq e^{k\nu}\quad\text{for all $\nu$} \] hence (iv) holds with $M = 1$. Assume (iv) holds. Then for $t > k$ we have \[ \| T(t) \| \leq Me^{-\nu(t-k)} \quad\text{for all $\nu$ and $t - k > 0$} \] so $\lim_{\nu} e^{-\nu(t-k)} = 0$ hence $\| T(t) \| = 0$. Thus (i) holds. The equivalence of (i) and (v) is trivial. To see that (i) and (vi) are equivalent, note that (i) and (vi) both imply $spec(A) = \emptyset$. Then \[ R(\lambda, A) = \int_{0}^{\infty} e^{-\lambda t}T(t) dt \quad\text{for all $\lambda$} \] (in general for $Re\ \lambda > \omega_{0}$; see, e.g., \cite{Bal81}).
Hence $R(i\lambda,A)$ is the Fourier Transform of $T(t)$. By the Paley-Wiener Theorem, $R(i\lambda,A)$ is the Fourier Transform of a compactly supported function (i.e. $T(t) = 0$ for all $t \geq k$) if and only if (vi) holds. This argument first appeared in \cite{GK70}. \end{proof} \section{Proof of the Pazy-type Criteria} \begin{lemma}\label{L:Ftrick} Suppose that $\| T(t) \| \leq 1$ for all $t \geq 0$ (that is, $t_{0} = 0$). Let $F: \mathbb{R} \cup \{ \pm \infty \} \to [0,\infty]$ be a decreasing function such that $F(\infty) = 0$. Then \[ \sum_{r=0}^{\infty} u_{r} F(r+1) \leq \int_{0}^{\infty} F(-\log \|T(t)\|) dt \leq \sum_{r=0}^{\infty} u_{r} F(r) \] \end{lemma} \begin{proof} Observe that \[ \int_{0}^{\infty} F(-\log \|T(t)\|) dt = \int_{0}^{t_{0}} F(-\log \|T(t)\|) dt + \sum_{r=0}^{\infty} \int_{t_{r}}^{t_{r+1}} F(-\log \|T(t)\|) dt + \int_{t_{\infty}}^{\infty} F(-\log \|T(t)\|) dt \] Now $t_{0} = 0$ so the first term on the right is $0$. For $t \geq t_{\infty}$ we have that $\|T(t)\| = 0$ so $F(-\log \|T(t)\|) = F(\infty) = 0$ meaning that the third term on the right is zero. For the middle terms, note that for $t < t^{\prime}$, \[ \| T(t^{\prime}) \| = \| T(t) T(t^{\prime} - t) \| \leq \| T(t) \| \| T(t^{\prime} - t) \| \leq \| T(t) \| \] since $\| T(t^{\prime} - t) \| \leq 1$ by assumption. So \[ \| T(t_{r+1}) \| \leq \| T(t) \| \leq \| T(t_{r}) \| \quad\text{for}\quad t_{r} \leq t \leq t_{r+1} \] and therefore, by Lemma \ref{L:e-r}, \[ -r-1 = \log \| T(t_{r+1}) \| \leq \log \| T(t) \| \leq \log \| T(t_{r}) \| = -r \quad\text{for}\quad t_{r} \leq t \leq t_{r+1} \] and so since $F$ is decreasing \[ F(r+1) \leq F(-\log \|T(t)\|) \leq F(r) \quad\text{for}\quad t_{r} \leq t \leq t_{r+1} \] So for each term in the sum, \[ u_{r}F(r+1) = \int_{t_{r}}^{t_{r+1}} F(r+1) dt \leq \int_{t_{r}}^{t_{r+1}} F(-\log \|T(t)\|) dt \leq u_{r} F(r) \] \end{proof} \begin{proof} (of Theorem \ref{T:C}). 
First note that if $\limsup \| T(t) \| > 1$ then $u_{r} = \infty$ for all $r$ so there can be no stability. In this case, none of the three conditions involving integrals can hold. So it is enough to consider the case when $\| T(t) \|$ is eventually bounded by $1$. Since \[ \int_{0}^{t_{0}} H(\| T(t) \|) dt < \infty \] for any bounded function $H$ (recall that $t_{0} < \infty$ since we have eliminated the other case), we may assume that $\| T(t) \| \leq 1$ for all $t$: the integral conditions are unaffected by finite translations in time as is the stability of the semigroup. Recall that condition (i) is the Datko-Pazy Theorem (\cite{Dat70}, \cite{Paz72}, \cite{Paz83}). Consider the function $F(x) = e^{-px}$ for some fixed $0 < p < \infty$. Then $F$ is decreasing and $F(\infty) = 0$. By Lemma \ref{L:Ftrick}, \[ \sum_{r=0}^{\infty} u_{r} e^{-p(r+1)} \leq \int_{0}^{\infty} F(-\log \|T(t)\|) dt = \int_{0}^{\infty} \|T(t)\|^{p} dt < \infty \] For any given $r^{*}$ observe that since the $u_{r}$ are nonincreasing, \[ \sum_{r=0}^{r^{*}-1} u_{r} e^{-p(r+1)} \geq u_{r^{*}} \sum_{r=1}^{r^{*}} e^{-pr} \] and since $\sum_{r} e^{-p(r+1)} = C < \infty$, \[ \lim_{r^{*}} \sum_{r=0}^{r^{*}-1} u_{r} e^{-p(r+1)} \geq \lim_{r^{*}} u_{r^{*}} C \] and therefore \[ \lim_{r} u_{r} \leq C^{-1} \sum_{r=0}^{\infty} u_{r} e^{-p(r+1)} < \infty \] hence the semigroup is stable. Condition (ii) is a weakening of the Pazy condition: set $F(x) = x^{-p}$ for the appropriate $1 < p < \infty$. By Lemma \ref{L:Ftrick}, \[ \sum_{r=0}^{\infty} u_{r} (r+1)^{-p} \leq \int_{0}^{\infty} (- \log \| T(t) \|)^{-p} dt < \infty \] Then, as above, since $\sum_{r} (r+1)^{-p} = C < \infty$, \[ \lim_{r} u_{r} \leq C^{-1} \sum_{r=0}^{\infty} u_{r} (r+1)^{-p} < \infty \] so the semigroup is stable. Now condition (iii): set $F(x) = \frac{1}{x}$. 
By Lemma \ref{L:Ftrick}, \[ \sum_{r=0}^{\infty} u_{r} \frac{1}{r+1} \leq \int_{0}^{\infty} \frac{dt}{-\log \|T(t)\|} < \infty \] Proceeding as above, \[ \sum_{r=0}^{r^{*}-1} u_{r} \frac{1}{r+1} \geq u_{r^{*}} \sum_{r=1}^{r^{*}} \frac{1}{r} \] and since $\sum_{r=1}^{\infty} \frac{1}{r} = \infty$ this means that $\lim_{r} u_{r} = 0$ (and in fact converges to $0$ faster than the inverse of the harmonic sum $(1 + \frac{1}{2} + \cdots + \frac{1}{r})^{-1}$). The semigroup is therefore superstable. Finally condition (iv): set $F_{p}(x) = x^{-p}$ for $0 < p$. Then, by Lemma \ref{L:Ftrick}, \[ \sup_{p>0} \sum_{r=0}^{\infty} u_{r} (r+1)^{-p} \leq \sup_{p>0} \int_{0}^{\infty} (- \log \| T(t) \|)^{-p} dt < \infty \] here we use that $F_{p} \leq F_{p^{\prime}}$ for $p \geq p^{\prime}$ so the hypothesis for $p\downarrow 0$ in fact implies boundedness for all $p > 0$. Suppose that $\sum_{r} u_{r} = \infty$. Then for any $K$ there exists $r_{K}$ such that $\sum_{r=0}^{r_{K}-1} u_{r} \geq K$. Then \[ \sum_{r=0}^{\infty} u_{r} (r+1)^{-p} \geq \sum_{r=0}^{r_{K}-1} u_{r} r_{K}^{-p} \geq K r_{K}^{-p} \] for any $p > 0$ and so \[ \sup_{p>0} \sum_{r=0}^{\infty} u_{r} (r+1)^{-p} \geq K \] But $K$ is arbitrary, so \[ \sup_{p>0} \sum_{r=0}^{\infty} u_{r} (r+1)^{-p} = \infty \] contradicting the bound above. The semigroup therefore has finite time extinction. In fact the semigroup goes extinct at time \[ k = \sum_{r=0}^{\infty} u_{r} = \sup_{p>0} \int_{0}^{\infty} (- \log \|T(t)\|)^{-p} dt \] (details here are left to the reader). \end{proof} \section{Counterexamples} We construct examples of semigroups demonstrating that finite time extinction is strictly stronger than superstability and that superstability is strictly stronger than stability. In particular, we answer a question of Balakrishnan \cite{Bal04} on the existence of superstable semigroups not vanishing in finite time with generator being a differential operator (what he terms a ``physical system'').
We also remark on a (previously known) example showing that the spectrum of the generator does not fully determine superstability. \subsection{Superstable Without Finite Time Extinction} Consider the Gaussian (probability) measure $\mu $ on $\mathbb{R}^{+} = [0,\infty)$ given by $d\mu(x) = \sqrt{\frac{2}{\pi}} \exp(-\frac{x^{2}}{2}) dx$. Let $X = L^{2}(\mathbb{R}^{+}, \mu)$. Define the semigroup \[ T(t)f(s) = f(s-t) \text{ for $s \geq t$ } \quad \text{ and } \quad T(t)f(s) = 0 \text{ otherwise } \] on $X$. The reader may verify that this is in fact a semigroup with generator $A = - \frac{d}{ds}$ and domain the appropriate Sobolev space. Now for $f \in L^{2}(\mathbb{R}^{+}, \mu)$ with $\|f\|=1$ we have \begin{align*} \| T(t) f \|^{2} &= \int_{0}^{\infty} |f(s-t)|^{2} d \mu(s) = \int_{0}^{\infty} |f(v)|^{2} \sqrt{\frac{2}{\pi}} e^{-\frac{(t+v)^{2}}{2}} dv \leq e^{-\frac{t^{2}}{2}} \int_{0}^{\infty} |f(v)|^{2} d \mu(v) = e^{-\frac{t^{2}}{2}} \end{align*} since $e^{-\frac{(t+v)^{2}}{2}} \leq e^{-\frac{t^{2}}{2}} e^{-\frac{v^{2}}{2}}$ for $t,v \geq 0$. So $\|T(t)\| \leq \exp(-\frac{t^{2}}{4}) \to 0$ meaning that the semigroup is superstable. However, $T(t) \ne 0$ for any $t$. Taking $f$ to be a norm one (with respect to $\mu$) function concentrated near $0$ we see that $\| T(t) \| = \exp(-\frac{t^{2}}{4})$ and so $t_{r} = 2 \sqrt{r}$ and $u_{r} = 2 ( \sqrt{r+1} - \sqrt{r} ) \to 0$ but $\sum u_{r} = \infty$. Hence superstability can occur without finite time extinction (even when the generator is merely a derivative). The space $(\mathbb{R}^{+},\mu)$ is a variant of the classical Gaussian measure space which arises naturally in the context of stochastic systems and quantum systems, among other areas. Our example can easily be extended to any system with a Gaussian measure (details are left to the interested reader).
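The entry-time claims in this example can be checked numerically from the operator norm $\| T(t) \| = \exp(-t^{2}/4)$; the short Python sketch below takes that closed form as its only input and confirms that $u_{r} \to 0$ while the partial sums of $u_{r}$ diverge:

```python
import math

# Norm profile of the Gaussian translation semigroup: ||T(t)|| = exp(-t^2/4),
# so t_r solves exp(-t_r^2/4) = exp(-r), i.e. t_r = 2*sqrt(r).
def t_r(r):
    return 2.0 * math.sqrt(r)

R = 10**4
u = [t_r(r + 1) - t_r(r) for r in range(R)]  # relative entry times

# u_r -> 0 (superstability), yet the partial sums telescope to t_R = 2*sqrt(R),
# which diverges: there is no finite time extinction.
print(u[-1], sum(u))
```

The telescoping sum is exactly the identity $\sum_{r} u_{r} = t_{\infty} - t_{0}$ used in the proof of Theorem \ref{T:finitetime}.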
\subsection{Finite Time Extinction} Define the semigroup \[ T(t)f(s) = f(s+t) \text{ for $s + t \leq 1$ } \quad \text{ and } \quad T(t)f(s) = 0 \text{ otherwise } \] on $X = \{ f \in L^{2}[0,1] : f(0) = f(1) = f^{\prime}(0) = f^{\prime}(1) = 0\}$. The reader may verify that this is a semigroup with generator $A = \frac{d}{ds}$ and domain the appropriate Sobolev space. It is clear that $T(t) = 0$ for all $t \geq 1$ so this semigroup has finite time extinction. In fact, $t_{r} = 1$ for all $r > 0$ so $u_{r} = 0$ for $r > 0$ and $\sum u_{r} = 1 < \infty$. \subsection{Stable but Not Superstable} For completeness, we mention that the fairly trivial example $T(t)f(s) = e^{-\nu t}f(s)$ (so the generator is $A = -\nu I$ and the domain is $\mathcal{D} = L^{2}$) is clearly stable with index $\nu$. Here $t_{r}$ is defined by $e^{-\nu t_{r}} = e^{-r}$ so $t_{r} = r \nu^{-1}$ meaning $u_{r} = \nu^{-1}$. \subsection{Empty Spectrum (for the Generator) but not Superstable} For completeness, we mention an example due to Hille and Phillips \cite{HP57} (chapter 23, section 16). We first present a superstable semigroup which will be used to develop the actual example of interest. Define $T(t)f(s) := \frac{1}{\Gamma(t)}\int_{0}^{s}(s-u)^{t-1}f(u)du$. That this is a semigroup follows from Euler integral identities. The generator $A$ is the derivative of the convolution with $\log$ minus a constant: $Af(s) = \frac{d}{ds}\int_{0}^{s}\log(s-u) f(u)du - \gamma f(s)$ ($\gamma$ is Euler's constant). Then $spec(A) = \emptyset$ and $\| T(t) \| \approx \frac{1}{t\Gamma(t)}$. So $\omega_{0} = -\infty$ and $T(t) \to 0$ but $T(t) \ne 0$ for any $t$. This semigroup is in fact superstable but does not have finite time extinction. With this construction in hand, we construct the desired example: for $\xi \in \mathbb{C}$ with $Re\ \xi > 0$, define $J^{\xi}f(s) = \frac{1}{\Gamma(\xi)}\int_{0}^{s}(s-u)^{\xi - 1}f(u) du$. When $\xi$ is taken to be a positive real this yields the semigroup above.
There is an analytic extension of $J^{\xi}$ to $\xi$ purely imaginary. Let $T(t) = J^{it}$. The generator of this semigroup is $iA$ where $A$ is the generator from above. Then $spec(iA) = \emptyset$ but $0 \notin spec(T(t))$, i.e. the operators are not quasinilpotent hence not superstable. The reader is referred to \cite{HP57} for details on these semigroups.
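To summarize the counterexamples: the trichotomy stable / superstable / finite time extinction can be read off numerically from the norm profiles alone. The following Python sketch uses a naive grid scan as a stand-in for computing the final entry times $t_{r}$ exactly; the three profiles are modelled on the examples of this section (the constants are illustrative assumptions):

```python
import math

def final_entry_time(norm, r, t_max, dt=1e-3):
    # crude approximation of t_r: one step past the last grid time with norm(t) > e^{-r}
    target = math.exp(-r)
    t_last = 0.0
    n = 0
    while n * dt <= t_max:
        if norm(n * dt) > target:
            t_last = (n + 1) * dt
        n += 1
    return t_last

def relative_entry_times(norm, r_max, t_max):
    t = [final_entry_time(norm, r, t_max) for r in range(r_max + 1)]
    return [t[r + 1] - t[r] for r in range(r_max)]

# Three norm profiles modelled on the examples of this section:
stable      = lambda t: 2.0 * math.exp(-t)       # stable with index 1:  u_r -> 1
superstable = lambda t: math.exp(-t * t / 4.0)   # superstable:          u_r -> 0
extinct     = lambda t: 1.0 if t < 1.0 else 0.0  # nilpotent at k = 1:   sum u_r = 1

u1 = relative_entry_times(stable, 15, 20.0)
u2 = relative_entry_times(superstable, 15, 10.0)
u3 = relative_entry_times(extinct, 15, 2.0)
print(u1[-1], u2[-1], sum(u3))
```

The diagnostics match Theorems \ref{T:stable}, \ref{T:superstable} and \ref{T:finitetime}: $u_{r} \to \nu^{-1} = 1$ in the stable case, $u_{r} \to 0$ with divergent sum in the superstable case, and $\sum_{r} u_{r} = 1 = k$ in the nilpotent case.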
https://arxiv.org/abs/2003.05834
Computing the Galois group of a polynomial over a $p$-adic field
We present a family of algorithms for computing the Galois group of a polynomial defined over a $p$-adic field. Apart from the "naive" algorithm, these are the first general algorithms for this task. As an application, we compute the Galois groups of all totally ramified extensions of $\mathbb{Q}_2$ of degrees 18, 20 and 22, tables of which are available online.
\section{Introduction} In this article we consider the following problem, the \(p\)-adic instance of the forward Galois problem: given a \(p\)-adic field \(K\) and a polynomial \(F(x) \in K[x]\) over that field, what is its Galois group \(G := \operatorname{Gal}(F/K)\)? Over any field for which polynomial factorization algorithms are known, the forward Galois problem can always be solved with the \define{naive algorithm}: explicitly compute the splitting field of \(F\) by repeatedly adjoining a root of it to the base field, and then explicitly compute the automorphisms of the splitting field. To date, there is no general solution to the \(p\)-adic forward Galois problem other than the naive algorithm. This article presents a general algorithm. In practice, it can for example quickly determine the Galois group of most irreducible polynomials of degree 16 over \(\mathbb{Q}_2\) and has been used to compute some non-trivial Galois groups at degree 32. It has been tested on polynomials defining all extensions of \(\mathbb{Q}_2\), \(\mathbb{Q}_3\) and \(\mathbb{Q}_5\) of degree up to 12, all extensions of \(\mathbb{Q}_2\) of degree 14, and all totally ramified extensions of \(\mathbb{Q}_2\) of degrees 18, 20 and 22, the latter three being new. See \cref{gg-sec-implementation}. Our implementation is publicly available \cite{galoiscode} and pre-computed tables of Galois groups are available from here also. \subsection{Overview of algorithm} Our algorithm uses the ``resolvent method''. We now describe a concrete instance. Suppose \(F(x) \in \mathbb{Q}_p[x]\) is irreducible of degree \(d\), and therefore defines an extension \(L/\mathbb{Q}_p\) of degree \(d\). The ramification filtration of this extension is a tower \(L_t=L/\ldots/L_0=\mathbb{Q}_p\). Let \(F_1(x)\in \mathbb{Q}_p[x]\) be a defining polynomial for \(L_1/\mathbb{Q}_p\). 
By Krasner's lemma, any polynomial in \(\mathbb{Q}[x]\) sufficiently close to \(F_1\) is also a defining polynomial, so we may take \(F_1 \in \mathbb{Q}[x]\). It is irreducible and so defines the number field \(\mathcal{L}_1/\mathcal{L}_0=\mathbb{Q}\) which has a unique completion embedding into \(L_1\). Repeating this procedure up the tower, we obtain the tower of number fields \(\mathcal{L}=\mathcal{L}_t/\ldots/\mathcal{L}_0=\mathbb{Q}\) such that \(\mathcal{L}\) embeds uniquely into \(L\). We call \(\mathcal{L}/\mathbb{Q}\) a \define{global model} of \(L/\mathbb{Q}_p\). Let \(d_i:=(\mathcal{L}_i:\mathcal{L}_{i-1})=(L_i:L_{i-1})\), then \(\operatorname{Gal}(\mathcal{L}_i/\mathcal{L}_{i-1}) \leq S_{d_i}\) and therefore \(\operatorname{Gal}(\mathcal{L}/\mathbb{Q}) \leq W := S_{d_t} \wr \cdots \wr S_{d_1}\). Observe also that naturally \(\operatorname{Gal}(L/\mathbb{Q}_p) \leq \operatorname{Gal}(\mathcal{L}/\mathbb{Q})\) since the left hand side is a decomposition group of the right hand side. Suppose \(\alpha_1 \in \mathcal{L}\) generates \(\mathcal{L}/\mathbb{Q}\), and let \(\alpha_2,\ldots,\alpha_d\in\bar\mathbb{Q}\) be its \(\mathbb{Q}\)-conjugates. Suppose we choose some subgroup \(U \leq W\), find an \define{invariant} \(I \in \mathbb{Z}[x_1,\ldots,x_d]\) such that \(\operatorname{Stab}_W(I) = U\) and compute the \define{resolvent} \[R(t) = \prod_{wU \in W/U}(t - wU(I)(\alpha_1,\ldots,\alpha_d)) \in \mathbb{Z}[t]\] by finding sufficiently precise complex approximations to \(\alpha_1,\ldots,\alpha_d\), giving a complex approximation to \(R\), whose coefficients we can then round to \(\mathbb{Z}\). One can show that \(\operatorname{Gal}(R/\mathbb{Q}) = q(\operatorname{Gal}(\mathcal{L}/\mathbb{Q}))\) and hence \(\operatorname{Gal}(R/\mathbb{Q}_p) = q(\operatorname{Gal}(L/\mathbb{Q}_p)) = q(\operatorname{Gal}(F/\mathbb{Q}_p))\) where \(q:W\to S_{W/U}\) is the action of \(W\) on the cosets of \(U\).
In particular, if we define \(s(G)\) to be the multiset of the sizes of orbits of the permutation group \(G\), and we let \(S\) be the multiset of the degrees of the factors of \(R\) over \(K\), then \(s(q(\operatorname{Gal}(F/\mathbb{Q}_p))) = S\). We compute the set \(\mathcal{G}\) of all transitive subgroups of \(W\), so that \(\operatorname{Gal}(F/\mathbb{Q}_p) \in \mathcal{G}\). If \(\abs{\mathcal{G}}>1\), we search through the subgroups \(U \leq W\) in index order until we find one such that \(\{s(q(G)) \,:\, G \in \mathcal{G}\}\) contains at least two elements. We then compute the corresponding resolvent \(R(t) \in \mathbb{Z}[t]\), factorize it over \(\mathbb{Q}_p\) and let \(S\) be the multiset of degrees of factors, and replace \(\mathcal{G}\) by \(\{G\in\mathcal{G} \,:\, s(q(G)) = S\}\). Observe that \(\mathcal{G}\) is now strictly smaller than it was before, and we still have \(\operatorname{Gal}(F/\mathbb{Q}_p) \in \mathcal{G}\). We repeat this process until \(\abs{\mathcal{G}}=1\), at which point this single group is the Galois group and we are done. In \cref{gg-sec-arm} we describe our precise formulation of this algorithm. We have described one method of producing a global model, which results in the group \(W\) (relative to which we compute resolvents) being a wreath product of symmetric groups. It is better for \(W\) to be as small as possible, since this will reduce the index \((W:U)\) required, and hence also reduce \(\deg R\). In \cref{gg-sec-glomod} we discuss some other constructions. The best constructions take advantage of the simple structure of the Galois group of a ``singly ramified'' extension, something like \(C_d\) for unramified extensions, \(C_d \rtimes (\mathbb{Z}/d\mathbb{Z})^\times\) for tame extensions and \(C_p^k \rtimes H\) for wild extensions. We can also produce global models for reducible \(F\) using global models for its factors. 
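The statistic \(s\) (the multiset of orbit sizes) is cheap to compute directly from generating permutations. As a toy illustration in Python — the two groups below are hypothetical stand-ins for candidates in \(\mathcal{G}\), not groups arising from an actual resolvent:

```python
def compose(p, q):
    # composition of permutations in one-line notation: (p*q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def closure(gens):
    # naive closure of the group generated by gens (fine for tiny examples)
    group = {tuple(range(len(gens[0])))} | set(gens)
    while True:
        new = {compose(g, h) for g in group for h in group} - group
        if not new:
            return group
        group |= new

def orbit_sizes(group):
    # the statistic s: sorted multiset of orbit sizes of the natural action
    n = len(next(iter(group)))
    seen, sizes = set(), []
    for i in range(n):
        if i not in seen:
            orbit = {g[i] for g in group}
            seen |= orbit
            sizes.append(len(orbit))
    return sorted(sizes)

C4 = closure([(1, 2, 3, 0)])  # generated by a 4-cycle: transitive, s = [4]
H  = closure([(1, 0, 3, 2)])  # generated by (0 1)(2 3): two orbits, s = [2, 2]

print(orbit_sizes(C4), orbit_sizes(H))  # [4] and [2, 2]: s separates the two
```

In the algorithm the statistic is applied to \(q(G)\) for each candidate \(G \in \mathcal{G}\) and compared with the multiset of degrees of the factors of the resolvent \(R\) over \(K\).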
In this example, we deduced the Galois group by enumerating the set \(\mathcal{G}\) of all possibilities and then eliminating candidates. This is the ``group theory'' part of the algorithm. We have other methods which avoid enumerating all subgroups of \(W\), and instead work down the graph of subgroups of \(W\). These are discussed in \cref{gg-sec-groups}. The function \(s\) taking a group and returning the multiset of sizes of its orbits is a ``statistic'', and there are other choices. These are discussed in \cref{gg-sec-statistic}. Some statistics provide more information than others, and therefore can result in smaller indices \((W:U)\) being required, but this comes at the expense of taking longer to compute. We search for \(U\) by enumerating all the subgroups of \(W\) of each index in turn until we find one which is useful. There are other methods which try to avoid computing all of these subgroups, of which there may be many. One method restricts to a special class of subgroups. These are given in \cref{gg-sec-choice}. \subsection{Previous work} Over \(p\)-adic fields, there are some special cases where Galois groups can be computed. \begin{itemize} \item It is well known that the unramified extensions of \(K\) of degree \(d\) are all isomorphic, Galois and have cyclic Galois group \(C_d\). Hence if the irreducible factors of \(F(x)\) all define unramified extensions, then the splitting field of \(F(x)\) is unramified, Galois and cyclic with degree \(\operatorname{lcm} \{\deg g \,:\, g \in \operatorname{Factors}(F)\}\). \item Suppose \(L/K\) is tamely ramified. Then it has a maximal unramified subfield \(U\), and \(L/U\) is totally (tamely) ramified. It is well known that \(L = U(\sqrt[e]{\zeta^r \pi})\) where \(e = (L:U)\) for some uniformizer \(\pi \in K\), \(\zeta\) a root of unity generating \(U\) and \(r \in \mathbb{Z}\). In this special form, it is straightforward to write down the splitting field and Galois group of \(L/K\). 
Furthermore, it is easy to compute the compositum of tame extensions, and hence if each irreducible factor of \(F(x)\) defines a tamely ramified extension, we can compute its Galois group. See \cite[Ch. II, \S2.2]{DPhD} for an exposition. \item Greve and Pauli have studied \define{singly ramified} extensions, that is, extensions whose ramification polygon has a single face, giving an explicit description of their splitting field and Galois group \cite[Alg. 6.1]{GP}. So in particular if \(F(x)\) is an Eisenstein polynomial whose ramification polygon has a single face, then we can compute its Galois group. An explicit description of this algorithm appears in Milstead's thesis \cite[Alg. 3.23]{Mil}. \item In his thesis, Greve extends this to an algorithm for \define{doubly ramified} extensions \cite[\S6.3]{GreveTh}, that is, extensions whose ramification polygon has two faces. Essentially this uses the singly ramified algorithm for the bottom part, and class field theory and group cohomology to deal with the elementary abelian top part. \item Jones and Roberts \cite{LFDB} have computed all extensions of \(\mathbb{Q}_p\) of degree up to 12, including their Galois group and some other invariants. These are available online in the Local Fields Database (LFDB). Some of the methods they use to compute Galois groups will feature in our general algorithm. \item Awtrey et al. have also considered degree 12 extensions of \(\mathbb{Q}_2\) and \(\mathbb{Q}_3\) \cite{AwtreyTh}; degree 14 extensions of \(\mathbb{Q}_2\) \cite{AwtreyD14}; degree 15 extensions of \(\mathbb{Q}_5\) \cite{AwtreyD15}; and degree 16 \emph{Galois} extensions of \(\mathbb{Q}_2\) \cite{AwtreyD16}. The main new idea in these articles is the \define{subfield Galois group content} of an extension \(L/K\): the set of Galois groups of all proper subfields of \(L/K\).
This invariant of \(\operatorname{Gal}(L/K)\) is useful in distinguishing between possible Galois groups, and can be computed given a database of all smaller extensions. \end{itemize} The difficult case appears to be when the factors of \(F\) define wildly ramified extensions whose ramification polygons have many faces. Recently Rudzinski has developed techniques for evaluating linear resolvents \cite{Rudz} and Milstead has used a combination of these techniques with the ones mentioned above to compute some Galois groups in this difficult class \cite{Mil}. \subsection{Mathematical notation} Roman capital letters \(K,L,\ldots\) denote \(p\)-adic fields. The ring of integers of \(K\) is denoted \(\mathcal{O}_K\), a uniformizer is denoted \(\pi_K\) and the residue class field is denoted \(\mathbb{F}_K = \mathcal{O}_K/(\pi_K)\). If \(u \in \mathcal{O}_K\) then \(\bar u = u+(\pi_K) \in \mathbb{F}_K\) is its residue class. We denote by \(v_K\) the valuation of \(\bar{\mathbb{Q}}_p\) such that \(v_K(\pi_K)=1\). Calligraphic capital letters \(\mathcal{K},\mathcal{L},\ldots\) denote number fields. The ring of integers of \(\mathcal{K}\) is \(\mathcal{O}_\mathcal{K}\). If \(U \leq W\) is a subgroup then \(q_U:W \to S_{W/U}\) denotes the action of \(W\) on the left cosets of \(U\). As introduced in \cref{gg-sec-statistic}, \(s\) denotes a function whose input is a permutation group or a polynomial and whose output is anything. There is an equivalence relation \(\sim\) on outputs such that if \(F(x) \in K[x]\) then \(s(\operatorname{Gal}(F))\sim s(F)\). There may also be a partial ordering \(\preceq\) on outputs such that if \(H \leq G\) are groups then \(s(H) \preceq s(G)\). We may omit subscripts from the notation if they are clear from context. \subsection{A note on conjugacy} Recall that the Galois group \(G = \operatorname{Gal}(F)\) of a polynomial \(F\) is defined to be the group of automorphisms of the splitting field of \(F\).
Usually, we represent this as a permutation group \(G \leq S_d\) where \(d = \deg(F)\), such that, writing the roots of \(F\) as \(\alpha_1,\ldots,\alpha_d\) in some order, \(G\) acts as \(g(\alpha_i) = \alpha_{g(i)}\). Since the order of the roots was arbitrary, \(G\) is only really defined up to conjugacy in \(S_d\). Sometimes, we may know more about the roots of \(F\). For instance, if \(F\) is reducible, then \(G\) has multiple orbits. If we explicitly factorize \(F = \prod_i F_i\), and let \(d_i = \deg(F_i)\), then we can specify that the first \(d_1\) roots \(\alpha_1,\ldots,\alpha_{d_1}\) are the roots of \(F_1\), the next \(d_2\) are the roots of \(F_2\) and so on. Letting \(W = S_{d_1} \times S_{d_2} \times \ldots\) then \(G \leq W \leq S_d\) is defined up to conjugacy in \(W\). We shall see more examples in \cref{gg-sec-glomod}. Almost everywhere in our exposition, when we talk of a group, we actually mean the conjugacy class of the group inside some understood larger group. When we talk of the collection of all groups with some property, we mean all the conjugacy classes whose groups have that property. This is to simplify the exposition. In the implementation, a conjugacy class is usually represented by a representative group. An algorithm which returns all conjugacy classes with some property may actually return several representatives for the same class. Determining which groups lie in the same class, in order to remove duplicates, can be computationally difficult, and so whether or not to do this, and how, is usually parameterised. The default is not to remove duplicates. See \cite[Ch. II, \S 11]{DPhD} for details. Henceforth, we shall typically only mention conjugacy when we have specific strategies to deal with conjugate groups. \subsection{Compendium} \label{gg-sec-tldr} Most of the rest of this article describes in full detail the possible parameters to our algorithm, of which there are many.
We now list the sections with the most important or novel contributions. \begin{itemize} \item \Cref{gg-sec-arm}: Describes the resolvent method, the main focus of this article. \item \Cref{gg-sec-reseval,gg-sec-glomod}: Methods for producing ``global models'' for \(p\)-adic fields, which are used to evaluate resolvents. Our constructions are more general than previous similar efforts and so can produce more efficient models. \item \Cref{gg-sec-groups-all,gg-sec-groups-maximal2}: The main two ways we perform the group theory part of deducing the Galois group. The former is to write down all possibilities and then eliminate until one remains; the latter works down the graph of possible groups using the notion of ``maximal preimages of statistics'' to efficiently move down the graph without blowing up the number of possibilities. \item \Cref{gg-sec-stat-facdegs}: The main ``statistic'' of a resolvent we compute is the multiset of degrees of its factors. This is compared to the multiset of sizes of orbits of potential Galois groups to deduce which are possible. \item \Cref{gg-sec-tranche-oidx}: Methods to produce groups from which to compute resolvents which empirically are both fast to compute and give low-degree resolvents. \item \Cref{gg-sec-implementation}: The implementation, timings, performance notes, etc. \end{itemize} \section{Galois group algorithms} \label{gg-sec-algorithms} This article is mainly concerned with the resolvent method, introduced in \cref{gg-sec-arm}. However, the algorithm is recursive, in that it may compute other Galois groups along the way, and it may suffice to use other algorithms for this purpose. Therefore, we briefly describe the other algorithms available in our implementation. \subsection{\texttt{Naive}} \label{gg-sec-naive} This explicitly computes a splitting field for \(F(x)\) and explicitly computes its automorphisms. This is the algorithm currently implemented in Magma for \(p\)-adic polynomials, called \texttt{GaloisGroup}. 
Since the splitting field is computed explicitly, this is only suitable when the Galois group is known in advance to be small, such as because the degree is small. \input{gg-sec-tame-v2.tex} \subsection{\texttt{SinglyRamified}} \label{gg-sec-singlyramified} This computes the Galois group of \(F(x)\) provided it is irreducible and defines an extension whose ramification filtration contains a single segment. Such an extension is called \define{singly ramified}. When the extension is tamely ramified, we can use the \texttt{Tame} algorithm. Otherwise the extension is totally wildly ramified and we use an algorithm due to Greve and Pauli \cite[Alg. 6.1]{GP}. An explicit description is given by Milstead \cite[Alg. 3.23]{Mil}. \subsection{\texttt{ResolventMethod}} \label{gg-sec-arm} The resolvent method is the focus of the remainder of this article and is based on the following simple lemma. \begin{lemma} Suppose \(G := \operatorname{Gal}(F) \leq W \leq S_d\) where \(d = \deg F\), and take any \(U \leq W\). Now \(S_d\) acts on \(\mathbb{Z}[x_1,\ldots,x_d]\) by permuting the variables, so suppose \(I \in \mathbb{Z}[x_1,\ldots,x_d]\) is such that \(\operatorname{Stab}_W(I) = U\) (we say \(I\) is a \define{primitive \(W\)-relative \(U\)-invariant}). Letting \(\alpha_1,\ldots,\alpha_d\) be the roots of \(F\), define \(\beta_{wU} = wU(I)(\alpha_1,\ldots,\alpha_d)\) (this is well-defined since \(I\) is fixed by \(U\)) and define the \define{resolvent} \(R(t) := \prod_{wU \in W/U} (t - \beta_{wU})\). Then \(R(t) \in K[t]\). If \(R\) is squarefree, then its Galois group corresponds to the coset action of \(G\) on \(W/U\). That is, letting \(q : W \to S_{W/U}\) be the coset action, then identifying \(wU \leftrightarrow \beta_{wU}\) we have \(\operatorname{Gal}(R) = q(G)\).
\end{lemma} \begin{proof} Writing \(R(t) := \tilde R(\alpha_1,\ldots,\alpha_d; t)\) where \[\tilde R(x_1,\ldots,x_d; t) := \prod_{wU \in W/U} (t - wU(I)(x_1,\ldots,x_d))\] then the \(t\)-coefficients of \(\tilde R\) are fixed by \(W\) (the action of \(W\) re-orders the product) and hence by \(G\). We conclude that the \(t\)-coefficients of \(R\) are fixed by \(G\) too, and hence by Galois theory \(R(t) \in K[t]\). If \(R\) is squarefree, then there is a 1-1 correspondence between the cosets \(\braces{wU}\) of \(W/U\) and the roots \(\braces{\beta_{wU}}\) of \(R\). Take \(g \in G\), then \begin{align*} g(\beta_{wU}) &= g(wU(I)(\alpha_1,\ldots,\alpha_d)) \\ &= wU(I)(g(\alpha_1),\ldots,g(\alpha_d)) \\ &= wU(I)(\alpha_{g(1)},\ldots,\alpha_{g(d)}) \\ &= gwU(I)(\alpha_1,\ldots,\alpha_d) \\ &= \beta_{gwU} \end{align*} so the action of \(G\) on the roots of \(R\) corresponds to the coset action, as claimed. \end{proof} Therefore, if we have some \(W\) containing \(G\) and a means to compute resolvents \(R\) for \(U \leq W\), then since \(\operatorname{Gal}(R) = q(G)\) is a function of \(G\), we can deduce information about \(G\) by finding some information about \(\operatorname{Gal}(R)\). Specifically how we compute resolvents and deduce information about \(G\) is controlled by two parameters. Firstly, a resolvent evaluation algorithm (\cref{gg-sec-reseval}) selects a fixed group \(W \leq S_d\) such that \(G \leq W\), and thereafter is responsible for evaluating the resolvents \(R(t)\) from selected \(U \leq W\) and invariants \(I \in \mathbb{Z}[x_1,\ldots,x_d]\). Secondly, a group theory algorithm (\cref{gg-sec-groups}) is responsible for deducing the Galois group \(G\) by choosing a suitable \(U\), and then using the resolvent \(R\) returned by the resolvent evaluation algorithm to gather information about \(G\).
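As a concrete numerical illustration of the lemma (a toy computation, not part of the algorithm as implemented): take \(d=4\), \(W=S_4\) and \(I = x_1x_2+x_3x_4\), so \(U=\operatorname{Stab}_W(I)\) is dihedral of order 8 and \(R\) is the classical cubic resolvent of a quartic. For \(F = x^4-10x^2+1\) we can evaluate \(R\) from floating-point approximations to the roots and round the coefficients to integers:

```python
import numpy as np

# F(x) = x^4 - 10x^2 + 1; I = x1*x2 + x3*x4, so Stab_{S4}(I) is a D4 of order 8
alpha = np.roots([1, 0, -10, 0, 1])

# the three cosets of U in W correspond to the three pairings of the roots,
# so the (arbitrary) root ordering does not affect R(t)
betas = [alpha[0]*alpha[1] + alpha[2]*alpha[3],
         alpha[0]*alpha[2] + alpha[1]*alpha[3],
         alpha[0]*alpha[3] + alpha[1]*alpha[2]]

# R(t) has integer coefficients, so round the numerical coefficients
R = np.rint(np.real(np.poly(betas))).astype(int)
print(R)   # coefficients of R(t) = t^3 + 10t^2 - 4t - 40
```

Here \(R\) is squarefree with roots \(-2, 2, -10\), all rational, matching the fact that \(\operatorname{Gal}(F/\mathbb{Q}) \cong V_4\) fixes every coset under the coset action.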
\begin{algorithm}[Galois group: resolvent method] Given a polynomial \(F(x) \in K[x]\), returns its Galois group.\hfill \label{gg-alg-arm} \begin{algorithmic}[1] \State Initialize the resolvent evaluation algorithm.\label{gg-alg-arm-rinit} \State Initialize the group theory algorithm.\label{gg-alg-arm-ginit} \State If we have determined the Galois group, then return it.\label{gg-alg-arm-done} \State Let \(U\) be a subgroup of \(W\).\label{gg-alg-arm-U} \State Let \(I\) be a primitive \(W\)-relative \(U\)-invariant.\label{gg-alg-arm-I} \State Let \(R\) be the resolvent corresponding to \(I\).\label{gg-alg-arm-R} \State Use \(R\) to deduce information about the Galois group.\label{gg-alg-arm-deduce} \State Go to step \ref{gg-alg-arm-done}. \end{algorithmic} \end{algorithm} The resolvent algorithm controls steps \ref{gg-alg-arm-rinit} and \ref{gg-alg-arm-R}. The group theory algorithm controls steps \ref{gg-alg-arm-ginit}, \ref{gg-alg-arm-done}, \ref{gg-alg-arm-U} and \ref{gg-alg-arm-deduce}. Step \ref{gg-alg-arm-I} could also be parameterised, but we find it is sufficient to use the algorithm due to Fieker and Kl\"uners \cite[\S5]{FK}, implemented as the intrinsic \texttt{RelativeInvariant} in Magma. \begin{remark} Using resolvents to compute Galois groups is not new. Stauduhar's method \cite{Stauduhar73} for polynomials over \(\mathbb{Q}\) computes resolvents relative to \(S_d\) by computing complex approximations to the roots. This was improved by Fieker and Kl\"uners \cite{FK} to a ``relative resolvent method'' which allows the overgroup \(W\) to be made smaller at each iteration until it equals \(G\). Over \(\mathbb{Q}_p\), a resolvent method has been used by Jones and Roberts \cite{LFDB} to compute the Galois group of fields of degree up to 12, computing resolvents in \(W = S_{d_2} \wr S_{d_1}\) corresponding to a subfield of degree \(d_1\). 
\end{remark} \subsection{\texttt{Sequence}} This algorithm takes as parameters a sequence of other algorithms to compute Galois groups. It tries each algorithm in turn until one succeeds. This is mainly useful to deal with special cases first (e.g. \texttt{Tame} or \texttt{SinglyRamified}) before applying a general method (e.g. \texttt{ResolventMethod}). \section{Resolvent evaluation algorithms} \label{gg-sec-reseval} These are used as part of the \texttt{ResolventMethod} algorithm for computing Galois groups. They are responsible for selecting an overgroup \(W\) such that \(G \leq W\) and thereafter evaluating resolvents relative to \(W\). Currently there is one option, \code{Global}, described here. \begin{definition} A \define{global model} for a \(p\)-adic field \(K\) is an embedding \(i : \mathcal{K} \to K\) where \(\mathcal{K}\) is a global number field such that \(K\) is a completion of \(\mathcal{K}\) and \(i\) is the corresponding embedding. If \(L/K\) is an extension of \(p\)-adic fields, and \(i : \mathcal{K} \to K\) is a global model for \(K\), then a \define{global model for \(L/K\) extending \(i\)} is a global model \(j : \mathcal{L} \to L\) of \(L\) such that \(j|_{\mathcal{K}} = i\). Similarly a \define{global model for \(F(x) \in K[x]\) extending \(i\)} is \(\prod_k \mathcal{F}_k\) where \(F = \prod_k F_k\) is the factorization over \(K\) of \(F\) into irreducible factors, \(L_k/K\) are the corresponding extensions, \(i_k : \mathcal{L}_k \to L_k\) are global models for \(L_k/K\) extending \(i\), and \(\mathcal{L}_k \cong \mathcal{K}(x)/(\mathcal{F}_k(x))\). We shall often refer to \(\mathcal{K}\) itself as the global model, instead of the embedding \(i\). \end{definition} The \texttt{Global} algorithm computes a global model \(\mathcal{K}\) for \(K\) and a global model \(\mathcal{F}(x) \in \mathcal{K}[x]\) for the input \(F(x) \in K[x]\) extending \(\mathcal{K}\). 
At the same time, it computes the required overgroup \(W\) such that \(G \leq \operatorname{Gal}(\mathcal{F} / \mathcal{K}) \leq W\). A parameter (a global model algorithm, \cref{gg-sec-glomod}) specifies how to produce a global model for \(F(x)\). \begin{remark} \label{gg-rmk-glomodidx} Note that this implies that \(\deg\mathcal{F}=\deg F=d\). In fact, our algorithm more generally computes an \define{overgroup embedding} \(e:W\to\mathcal{W}\) such that \(G\leq W\), \(\operatorname{Gal}(\mathcal{F}/\mathcal{K})\leq\mathcal{W}\) and \(e(G)\) is the corresponding decomposition group. Hence \(\deg\mathcal{F}>d\) is allowed. This usually arises as a global model \(\mathcal{L}/\mathcal{K}'/\mathcal{K}\) for \(L/K\) where \(\mathcal{K}'\) is also a global model for \(K\) and \((\mathcal{L}:\mathcal{K}')=d\), in which case we refer to \((\mathcal{K}':\mathcal{K})\) as the \define{index} of the global model. In our exposition we shall assume \(W=\mathcal{W}\) for simplicity and leave the details to \cite[Ch. II]{DPhD}. \end{remark} The algorithm then can evaluate resolvents as follows. For each complex embedding \(c : \mathcal{K} \to \mathbb{C}\), we compute the roots of \(c(\mathcal{F})\) to high precision. Letting \(\tilde\alpha_1,\ldots,\tilde\alpha_{d'}\) be these roots, we compute \[\tilde R_c(t) := \prod_{wU \in W/U}(t - wU(I)(\tilde\alpha_1,\ldots,\tilde\alpha_{d'}))\] which is an approximation to \(c(R(t)) \in \mathbb{C}[t]\). We can always arrange for \(\mathcal{F}(x)\) to be monic and integral, so that its roots are integral, and therefore \(R(t) \in \mathcal{O}_{\mathcal{K}}[t]\). Firstly, suppose that \(\mathcal{K} = \mathbb{Q}\) (so \(K = \mathbb{Q}_p\)), then we know \(R(t) \in \mathbb{Z}[t]\) and therefore assuming we have computed \(\tilde R(t)\) sufficiently precisely, then we can compute \(R(t)\) by rounding its coefficients to the nearest integer. 
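For \(\mathcal{K}\neq\mathbb{Q}\) the analogous rounding happens against a lattice of embedding vectors. A toy sketch (assuming \(\mathcal{O}_\mathcal{K}=\mathbb{Z}[\sqrt2]\) with its two real embeddings, and using Babai-style rounding in the lattice basis, which suffices here because this basis is well-conditioned):

```python
import numpy as np

# O_K = Z[sqrt(2)]; the two real embeddings send sqrt(2) to +/- sqrt(2).
# The columns of B are the embedding vectors of the basis 1, sqrt(2).
s2 = 2 ** 0.5
B = np.array([[1.0, s2],
              [1.0, -s2]])

def round_to_lattice(target):
    # Babai rounding: express target in the lattice basis, round the coordinates
    coords = np.linalg.solve(B, target)
    return np.rint(coords).astype(int)     # (a, b) encodes a + b*sqrt(2)

# embeddings of the coefficient 3 - 2*sqrt(2), with simulated precision error
approx = np.array([3 - 2 * s2 + 1e-6, 3 + 2 * s2 - 1e-6])
print(round_to_lattice(approx))            # recovers a = 3, b = -2
```

For fields of larger degree, or a badly skewed basis, the same idea is applied after first reducing the basis with LLL.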
More generally, for each coefficient \(R_i\) of \(R(t)\) we take the vector \((\tilde R_{c,i})_c\) which should be a close approximation to \((c(R_i))_c\). Since \(R_i\) are integral, \((c(R_i))_c\) is an element of the \define{Minkowski lattice} \(\prod_c c(\mathcal{O}_\mathcal{K})\), which is discrete, and therefore we can deduce \(R_i\) by rounding \((\tilde R_{c,i})_c\) to the nearest point in the lattice. This can be done using lattice basis reduction techniques such as LLL. \begin{algorithm}[Resolvent: \texttt{Global}] Given a global model \(\mathcal{F}(x) \in \mathcal{K}[x]\) and subgroup \(U \leq W\), returns the corresponding resolvent \(R(t)\). \label{gg-alg-resolvent} \begin{algorithmic}[1] \State Choose a Tschirnhaus transformation \(T \in \mathbb{Z}[x]\) (see Rmk. \ref{gg-rmk-resolvent-tschirnhaus}). \State Choose a complex floating point precision, \(k\) decimal digits (see Rmk. \ref{gg-rmk-resolvent-complex-precision}). \State Compute complex approximations to the roots of \(c(\mathcal{F})\) for each complex embedding \(c : \mathcal{K} \to \mathbb{C}\). \State Compute \(\tilde R_c(t) = \prod_{wU \in W/U} (t - wU(I)(T(\tilde \alpha_1),\ldots,T(\tilde \alpha_{d'})))\). \State For each \(i\), round \((\tilde R_{c,i})_c\) to the nearest point of the Minkowski lattice of \(\mathcal{O}_{\mathcal{K}}\), and let \(R_i\) be the corresponding element of \(\mathcal{O}_{\mathcal{K}}\). \State If \(R(t) \in \mathcal{K}[t]\) is not squarefree, go to Step 1. \State Return \(R(t)\). \end{algorithmic} \end{algorithm} \begin{remark} \label{gg-rmk-resolvent-tschirnhaus} In Step 1, a Tschirnhaus transformation is any randomly selected polynomial in \(\mathbb{Z}[x]\). Its purpose is to ensure that \(R(t)\) is squarefree. Indeed, if \(R(t)\) is not squarefree, then there is some coincidence between its roots, and therefore some unintended structure between the roots of \(F\). By transforming the roots, we should destroy this structure.
Such a transformation always exists \cite{Girstmair83}. In practice, it suffices to use \(T(x) = x\) initially, and thereafter to choose a random polynomial of small degree and coefficients, increasing the degree and coefficient bound at each iteration. \end{remark} \begin{remark} \label{gg-rmk-resolvent-complex-precision} It is important in Step 2 that we choose a complex floating point precision \(k\) such that the rounding step produces the correct answer. We do this as follows. First, we find an upper bound on the absolute values of the roots of \(c(\mathcal{F})\) for each complex embedding \(c\). In principle this could be done by analyzing the polynomials which define the global model and bounding their roots in terms of the coefficients, but in our current implementation we instead compute the complex roots to some default precision (30 decimal digits) and take the size of the largest root as our bound. It is possible, although unlikely, that the latter approach introduces enough precision error that this bound is incorrect, and hence this part of the implementation does not yield proven results. Using this upper bound, we can follow through the computation of \(\tilde R_c\) to get upper bounds on its coefficients. By increasing the bounds by a small fraction at each computation, we can absorb the effect of any complex precision error. We then select a precision so that the absolute errors on the coefficients \(\tilde R_{c,i}\) are less than half the shortest distance between two elements of the Minkowski lattice. We then add a generous margin to the precision (say 20 decimal digits) so that we can check in the code that we are in fact very close (say within 10 decimal digits) to an integer point. \end{remark} \begin{remark} The choice to approximate the roots of \(\mathcal{F}\) in the complex field \(\mathbb{C}\) is somewhat arbitrary.
We could instead pick a prime \(\ell\) such that \(\mathcal{F}\) has a small splitting field over \(\mathbb{Q}_\ell\) and approximate the roots \(\ell\)-adically. Making such a change usually improves reliability and reduces the precision requirements. The theory of the Minkowski lattice carries over into this setting. \end{remark} \section{Global model algorithms} \label{gg-sec-glomod} Given a polynomial \(F(x) \in K[x]\) and a global model \(i:\mathcal{K} \to K\), a global model algorithm computes a global model \(\mathcal{F}(x)\) for \(F(x)\) extending \(\mathcal{K}\). It also computes an overgroup \(W\) such that \(G \leq \operatorname{Gal}(\mathcal{F} / \mathcal{K}) \leq W\). \begin{remark} As presented, these constructions assume the global model index (\cref{gg-rmk-glomodidx}) is 1, but do generalize. See \cite[Ch. II, \S4]{DPhD} for details. \end{remark} \subsection{\texttt{Symmetric}} Given irreducible \(F(x) \in K[x]\), this finds a polynomial \(\mathcal{F}(x) \in \mathcal{K}[x]\) sufficiently close to \(F(x)\) that they have the same splitting field over \(K\). Generically we expect that \(\operatorname{Gal}(\mathcal{F} / \mathcal{K}) = S_d\), since we are not imposing any further restriction on \(\mathcal{F}\), and therefore the corresponding overgroup is taken to be \(W=S_d\). To find such a polynomial, we pick some precision parameter \(k \in \mathbb{N}\). We take some polynomial \(\mathcal{F}(x) \in \mathcal{K}[x]\) such that \(i(\mathcal{F}(x)) - F(x)\) has coefficients of valuation at least \(k\), and then we check that \(\mathcal{F}\) is a global model. If not, we increase \(k\). By keeping \(k\) small, we limit the size of the coefficients of \(\mathcal{F}\), which in turn limits the precision required in the complex arithmetic later.
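A sketch of this approximation step, assuming \(\mathcal{K}=\mathbb{Q}\) and that the coefficients of \(F\) are represented as integers modulo \(p^k\) (the helper and its use of balanced residues are illustrative; balanced lifts keep the coefficients of \(\mathcal{F}\) at most \(p^k/2\) in absolute value):

```python
def small_lift(a, p, k):
    # lift a mod p^k to the balanced representative in (-p^k/2, p^k/2]
    m = p ** k
    r = a % m
    return r - m if r > m // 2 else r

# e.g. with p = 5, k = 3: a coefficient stored as 118 mod 125 lifts to -7,
# so i(F~) - F has coefficients of valuation >= 3 while F~ stays small
print([small_lift(c, 5, 3) for c in [118, 3, 124]])   # [-7, 3, -1]
```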
\subsection{\texttt{Factors}} This factorizes \(F(x) = \prod_k F_k(x)\) into irreducible factors over \(K\), produces a global model \(\mathcal{F}_k(x)\) for each factor, and then the global model is \(\mathcal{F}(x) = \prod_k \mathcal{F}_k(x)\). The overgroup is the direct product \(W=\prod_k W_k\) of overgroups for each factor. A parameter determines how to compute a global model for each factor. \subsection{\texttt{RamTower}} \label{gg-sec-ramtower} Assuming \(F(x)\) is irreducible and defines an extension \(L/K\), this finds the ramification filtration \(L=L_t/\ldots/L_0=K\) of \(L/K\). For each segment \(L_k/L_{k-1}\), it produces a global model extending the global model of the segment below it. Then the global model is the final model in this iteration. The overgroup is the wreath product \(W=W_t\wr\cdots\wr W_1\) of overgroups of each segment. A parameter determines how to compute a global model for each segment. \subsection{\texttt{RootOfUnity}} \label{gg-sec-rootofunity} Assuming the splitting field \(L\) of \(F\) over \(K\) is unramified, and therefore generated by a primitive \(n\)th root of unity \(\zeta\), we define the global model to be \(\mathcal{L} = \mathcal{K}(\zeta)\). We naturally identify \(\mathcal{W} = \operatorname{Gal}(\mathcal{L} / \mathcal{K})\) with a subgroup of \((\mathbb{Z} / n \mathbb{Z})^\times\), identifying \(i \bmod n\) with \(\zeta \mapsto \zeta^i\). The subgroup \(W = \angles{q} \leq \mathcal{W}\) is the decomposition group, i.e. \(\operatorname{Gal}(L/K)\). If \(W=\mathcal{W}\) then this is our overgroup (otherwise \(\mathcal{W}\) is an overgroup for a model of higher index \cite[Ch. II, \S4.5]{DPhD}). By default, we use \(n=q^d-1\). A parameter can change this to use the smallest divisor of \(q^d-1\) not dividing \(q^c-1\) for any \(c<d\). Another parameter controls whether to search for a \define{complement} to \(W\) --- i.e. 
a subgroup \(H \leq \mathcal{W}\) such that \(H \cap W = 1\) --- of smallest index possible, and then replace \(\mathcal{L}\) by the fixed field of \(H\). By design, this still has a completion to \(L\), but is of smaller degree. If \(\angles{H,W}=\mathcal{W}\) then \(H\) is a \define{perfect complement} and \(W=\mathcal{W}/H\) is our overgroup (otherwise \(\mathcal{W}/H\) is an overgroup for a model of higher index). \begin{remark} The complement option usually finds a perfect complement. For example, suppose \(\mathcal{K} = \mathbb{Q}\) and \(K = \mathbb{Q}_p\), \(p \leq 7\) and \(d \leq 50\), then there is a perfect complement unless: \(p=2\) and \(8 \mid d\); or \(p=3\) and \(d=9\); or \(p=7\) and \(d\in\braces{5,8}\). \end{remark} \begin{remark} \label{gg-rmk-grunwaldwang} The Grunwald--Wang theorem of class field theory \cite[Ch. X, \S2]{ATCFT} implies that if \(K\) is a completion \(\mathcal{K}_\mathfrak{p}\), and \(L/K\) is cyclic, degree \(d\), then there is \(\mathcal{L} / \mathcal{K}\) cyclic of degree \(d\) which completes to \(L\). There is an exception at primes \(\mathfrak{p} \mid 2\) and degrees \(8 \mid d\), for which \((\mathcal{L} : \mathcal{K}) = 2d\) is sometimes necessary. \end{remark} \subsection{\texttt{RootOfUniformizer}} \label{gg-sec-rootofuniformizer} Assuming \(F\) is irreducible of degree \(d\) over \(K\) and defines a totally tamely ramified extension \(L/K\), then \(L=K(\sqrt[d]{\pi})\) for some uniformizer \(\pi \in K\). Taking a sufficiently precise approximation to \(\pi\), we may assume that \(\pi \in \mathcal{K}\), and we define the global model to be \(\mathcal{L} = \mathcal{K}(\sqrt[d]{\pi})\). The embedding \(\mathcal{K} \to K\) extends uniquely to \(\mathcal{L} \to L\). 
Letting \(\zeta\) be a primitive \(d\)th root of unity, then clearly \(\mathcal{K}(\sqrt[d]{\pi},\zeta)\) is the normal closure and its Galois group \(W\) (which is determined by \(\operatorname{Gal}(\mathcal{K}(\zeta)/\mathcal{K})\) and may be computed explicitly) acts faithfully on the \(d\) elements \(\sqrt[d]{\pi}\), \(\zeta \sqrt[d]{\pi}\), \(\ldots\), \(\zeta^{d-1} \sqrt[d]{\pi}\). \subsection{\texttt{SinglyWild}} \label{gg-sec-singlywild} Suppose \(F(x) \in K[x]\) defines a singly wildly ramified extension \(L/K\) of degree \(d=p^k\). That is, a totally wildly ramified extension whose ramification polygon has a single face. Suppose also that \(p=2\) and \(L/K\) is Galois; then \(\operatorname{Gal}(L/K)\cong C_2^k\) and so \(L = K(\sqrt{a_1},\ldots,\sqrt{a_k})\) for some \(a_i\in K\). By taking sufficiently precise approximations, we may further assume \(a_i\in\mathcal{K}\). Then \(\mathcal{L}=\mathcal{K}(\sqrt{a_1},\ldots,\sqrt{a_k})\) is our global model with overgroup \(W=C_2^k\). \begin{remark} Using Kummer theory, an averaging argument, and a result of Greve \cite[Thm. 7.3]{GP}, this method generalizes to \(p\ne2\) and non-Galois \(L/K\) \cite[Ch. II, \S4.7]{DPhD}. This has not yet been implemented. \end{remark} \subsection{\texttt{Select}} This selects between several different global model algorithms, depending on \(F\). For example, we can select between \texttt{RootOfUnity}, \texttt{RootOfUniformizer} or \texttt{SinglyWild} depending on whether \(F\) defines an unramified, tame, or wild extension. \section{Group theory algorithms} \label{gg-sec-groups} The job of a group theory algorithm is to decide, given the overgroup \(W\), which subgroups \(U \leq W\) to form resolvents from, and to use those resolvents to deduce the Galois group \(G \leq W\). We recommend now reading the definition of statistic at the start of \cref{gg-sec-statistic}. A statistic is our means of comparing groups with resolvents.
\subsection{\texttt{All}} \label{gg-sec-groups-all} This algorithm proceeds by writing down all possible Galois groups \(G\) (up to \(W\)-conjugacy), and then eliminating possibilities until only one remains. There are two parameters, a statistic algorithm \(s\) (\cref{gg-sec-statistic}) which determines which properties of the Galois groups \(G\) and resolvents \(R\) to compare, and a subgroup choice algorithm (\cref{gg-sec-choice}) which determines how we choose a subgroup \(U\). The subgroup choice algorithm is used to choose a subgroup \(U\). Then, given a resolvent \(R\), we compute the statistic \(s(R)\) and see for which \(G\) in the list of possible Galois groups this equals \(s(q(G))\) where \(q\) is the coset action of \(W\) on \(W/U\). We eliminate the \(G\) for which the statistics differ. We are done when only one \(G\) remains. \begin{remark} The parameters must be chosen correctly to ensure that the algorithm terminates, otherwise it is possible that the subgroup choice algorithm cannot find a useful subgroup for the given statistic. \Cref{gg-lem-rootsmax} below implies the algorithm terminates for the \texttt{HasRoot} statistic (or any more precise statistic such as \texttt{FactorDegrees}) and any subgroup choice algorithm which considers all groups. \end{remark} \begin{lemma} \label{gg-lem-rootsmax} \(G\) is conjugate to a subgroup of \(U\) if and only if the corresponding resolvent \(R\) has a root. \end{lemma} \begin{proof} \(G\) is conjugate to a subgroup of \(U\) if and only if \(q(G)\) has a fixed point, where \(q:W \to S_{W/U}\) is the coset action. Since \(\operatorname{Gal}(R)=q(G)\), this occurs if and only if \(R\) has a root. \end{proof} \subsection{\texttt{Maximal}} \label{gg-sec-groups-maximal} This algorithm avoids the need to enumerate all possible Galois groups.
We start at the top of the directed acyclic graph of subgroups of \(W\) and work our way down, at each stage either proving that a current group under consideration is not the Galois group, and so moving on to its maximal subgroups, or proving that the Galois group is not a subgroup of some of the maximal subgroups of a group under consideration. Specifically, at all times we have a set \(\mathcal{P}\) of subgroups of \(W\) such that we know that the Galois group is contained in at least one of them. We call this the \define{pool}. Initially we have \(\mathcal{P} = \{W\}\). If for some resolvent \(R\) and \(P \in \mathcal{P}\) we find that their statistics do not agree, i.e. \(s(R) \not\sim s(q(P))\), then we record that \(G \ne P\). We also test if the statistic is consistent with the Galois group being a subgroup of \(P\). If this latter test fails, i.e. \(s(R) \not\preceq s(q(P))\), then we remove \(P\) from the pool. We also perform the same tests on all maximal subgroups \(Q < P \in \mathcal{P}\). Having processed a resolvent in this way, we may decide to modify \(\mathcal{P}\) further. For example, as soon as there is some \(P \in \mathcal{P}\) such that the Galois group is not \(P\), replace \(P\) by its maximal subgroups. Or instead, when all \(P \in \mathcal{P}\) are known not to be the Galois group, replace the whole pool by the set of maximal subgroups of its elements. This behaviour is parameterised. We have determined the Galois group when \(\mathcal{P}\) contains one group, and we have deduced that the Galois group is not contained in any of its maximal subgroups. The question remains of which subgroups \(U\leq W\) are \define{useful} in the sense that a resolvent formed from \(U\) will provide information. Unlike the \code{All} algorithm, it is not possible to determine for certain if a given group \(U\) will allow us to make progress or not. 
There is a necessary condition, but this does not guarantee progress, and there is a sufficient condition, but it is not guaranteed that there exists a group satisfying it. We parameterise this choice, but in the next section give an improved method without this issue. \subsection{\texttt{Maximal2}} \label{gg-sec-groups-maximal2} Note that a shortcoming of the \texttt{Maximal} algorithm is that it is not always possible to tell if a subgroup \(U \leq W\) will provide any information, and so its behaviour is more heuristic than principled. Another problem is that it only ever rules groups out of consideration which cannot contain the Galois group, and therefore all groups \(P\) with \(G \leq P \leq W\) will be considered in the pool \(\mathcal{P}\) at some point; if there are many such groups, this can get inefficient. The \texttt{Maximal2} algorithm avoids both of these problems by positively identifying groups which do contain the Galois group. As before, we have a pool \(\mathcal{P}\) of subgroups, at least one of which contains the Galois group. Suppose there is a group \(U \leq W\) such that \(s(q(P)) \not\sim s(q(Q))\) for some \(P \in \mathcal{P}\) and maximal \(Q < P\) (such a group is \define{useful}) and we form the corresponding resolvent \(R\). There are two possibilities. If \(s(R) \sim s(q(P))\) then \(s(q(Q)) \prec s(R)\), so \(s(R) \not\preceq s(q(Q))\), so \(G \not\leq Q\), and so we can rule \(Q\) out of consideration. Otherwise \(s(R) \not\sim s(q(P))\) and so \(G \ne P\). In the \texttt{Maximal} algorithm at this point we would do something like replace \(P\) in the pool by its maximal subgroups. Instead, we find the set \(X''\) of subgroups \(Q'' < q(P)\) which are maximal among those such that \(s(Q'') \sim s(R)\); we refer to these as the \define{maximal preimages in \(q(P)\) of \(s(R)\)}. Then we let \(X = \{P \cap q^{-1}(Q'') \,:\, Q'' \in X''\}\).
By construction, if \(G \leq P\) then \(G \leq Q'\) for some \(Q' \in X\) and so we can replace \(P\) in the pool by \(X\). Typically \(X\) is much smaller than the number of maximal subgroups of \(P\). Suppose now that we have eliminated all maximal subgroups of all \(P \in \mathcal{P}\) from consideration. Then we know that \(G=P\) for some \(P \in \mathcal{P}\). We are now in the scenario of the \texttt{All} algorithm, and so can now eliminate groups from the pool by finding \(U \leq W\) such that \(s(q(P_1)) \not\sim s(q(P_2))\) for some \(P_1,P_2 \in \mathcal{P}\). Such a \(U\) is also said to be \define{useful}. We have deduced the Galois group when the pool contains a single group, and we have ruled all of its maximal subgroups out of consideration. We can use any statistic which has an equivalence relation (as required for \texttt{All}) and a partial ordering (as required for \texttt{Maximal}) and an algorithm for computing maximal preimages. For the latter, in general we have a ``naive'' algorithm, which simply works down the subgroups of \(P\) until ones with the correct statistic are found. \begin{algorithm}[Maximal preimages: Naive] \label{gg-alg-maxpre-naive} Given a group \(P\), a statistic \(s\) and a value \(v\) of \(s\), returns the maximal preimages of \(v\) in \(P\). \begin{algorithmic}[1] \If{\(v \sim s(P)\)} \State\Return \(\{P\}\) \ElsIf{\(v \prec s(P)\)} \State\Return \(\bigcup_{\text{maximal \(Q < P\)}} \text{maximal preimages of \(v\) in \(Q\)}\) \Else \State\Return \(\emptyset\) \EndIf \end{algorithmic} \end{algorithm} However, only using the naive algorithm would not provide an improvement over \code{Maximal}. The real efficiency gain comes from the existence of more efficient algorithms for particular statistics, in particular \texttt{HasRoot} (\cref{gg-sec-stat-hasroot}) and \texttt{FactorDegrees} (\cref{gg-sec-stat-facdegs}). \subsection{\texttt{Sequence}} This takes as parameters a sequence of group theory algorithms. 
Each one is used in turn until either the Galois group is deduced or the subgroup choice algorithm runs out of subgroups to try. If the same algorithm appears consecutively with different parameters, then the state of the algorithm (such as the pool of possible Galois groups) is maintained so that information is not lost. This allows us, for example, to first use a cheap statistic on a limited number of subgroups --- aiming to deduce easy Galois groups quickly --- before trying a more expensive statistic. \section{Statistic algorithms} \label{gg-sec-statistic} A statistic algorithm is a means of comparing the Galois group of a polynomial with a permutation group. Specifically, it is a function which takes as input a permutation group or a polynomial and outputs some value. There must be an equivalence relation on these values, which we denote \(\sim\). A statistic function \(s\) must satisfy the following property: \(s(R) \sim s(\operatorname{Gal}(R))\) for all polynomials \(R\). For most statistics, \(\sim\) is equality. Using this, if we are given a polynomial \(R(x)\) (such as a resolvent) and a permutation group \(G\) and we find that \(s(R) \not\sim s(G)\), then we know that \(\operatorname{Gal}(R) \neq G\). This is the basis of the \texttt{All} (\cref{gg-sec-groups-all}) group theory algorithm. Optionally, a statistic may also support a partial ordering, denoted \(\preceq\), which must respect the partial ordering due to subgroups. Specifically, the following must hold: for all groups \(G,H\), if \(H \leq G\) then \(s(H) \preceq s(G)\). Statistics supporting this operation may be used in the \texttt{Maximal} (\cref{gg-sec-groups-maximal}) and \texttt{Maximal2} (\cref{gg-sec-groups-maximal2}) group theory algorithms. Ordered statistics may additionally provide a specialised algorithm to compute maximal preimages, as defined in \cref{gg-sec-groups-maximal2}.
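As a toy illustration (not the actual implementation), the following Python sketch shows how a statistic with \(\sim\) taken to be equality prunes a pool of candidate groups, in the style of the \texttt{All} algorithm; the group names and statistic values here are invented.

```python
# Toy model of statistic-based pruning in the style of the All algorithm.
# Candidate group names and their precomputed statistic values (sorted
# tuples standing in for multisets of factor degrees) are invented.

def prune_pool(pool, group_stat, observed):
    """Keep candidates P whose statistic matches the observed resolvent
    statistic, i.e. s(R) ~ s(q(P)) with ~ being equality."""
    return [P for P in pool if group_stat[P] == observed]

group_stat = {"P1": (1, 2, 3), "P2": (2, 2, 2), "P3": (1, 2, 3)}
pool = ["P1", "P2", "P3"]

# Suppose the resolvent factors into irreducibles of degrees 1, 2 and 3:
# only P1 and P3 remain possible.
print(prune_pool(pool, group_stat, (1, 2, 3)))  # ['P1', 'P3']
```

If the pruned pool is a singleton, the corresponding group is the only remaining candidate; if it is empty, the statistic values were inconsistent with every candidate.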
\subsection{\texttt{HasRoot}} \label{gg-sec-stat-hasroot} \(s(G)\) is true if \(G\) has a fixed point, and otherwise is false. Correspondingly, \(s(R)\) is true if \(R\) has a root (in its base field \(K\)). If \(H \leq G\) and \(G\) has a fixed point, then so does \(H\), so we define \(v_1 \preceq v_2\) to be \(v_2 \Rightarrow v_1\). The maximal subgroups of \(P\) with a fixed point are the point stabilizers. Two point stabilizers are conjugate if they stabilize points in the same orbit, and so we deduce the following algorithm to compute maximal preimages. \begin{algorithm}[Maximal preimages: \texttt{HasRoot}] \label{gg-alg-maxpre-hasroot} Given a group \(P\) and a value \(v \in \{\text{true},\text{false}\}\), returns the maximal preimages of \(v\) in \(P\). \begin{algorithmic}[1] \If{\(v = \text{true}\)} \State\Return \(\{\operatorname{Stab}_P(x) \text{ for some \(x \in o\)} \,:\, o \in \operatorname{Orbits}(P)\}\) \Else \State\Return \(\{P\}\) \EndIf \end{algorithmic} \end{algorithm} \subsection{\texttt{NumRoots}} \label{gg-sec-stat-numroots} \(s(G)\) is the number of fixed points of \(G\). Correspondingly, \(s(R)\) is the number of roots of \(R\). If \(H \leq G\) then \(H\) has at least as many fixed points as \(G\), so \(\preceq\) in this case is the reverse of the usual ordering on integers: \(v_1 \preceq v_2\) iff \(v_1 \geq v_2\). \subsection{\texttt{Factors}} \label{gg-sec-stat-factors} This takes a parameter, which is another statistic \(s'\). Then \(s(G)\) is the multiset \(\{s'(G')\}\) where \(G'\) runs over the images of \(G\) acting on each of its orbits (so the degree of \(G'\) is the size of the corresponding orbit). Correspondingly, \(s(R)\) is the multiset \(\{s'(R')\}\) where \(R'\) runs over the irreducible factors of \(R\). \subsection{\texttt{Degree}} \label{gg-sec-stat-degree} \(s(G)\) is the degree of the permutation group \(G\) and \(s(R)\) is the degree of \(R\). If \(H \leq G\), then they are permutation groups of equal degree, so \(v_1 \preceq v_2\) is \(v_1 = v_2\).
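On the group side, both statistics reduce to computing the fixed points of a generated permutation group, which is cheap: a point is fixed by the whole group iff every generator fixes it. The following hedged Python sketch (illustrative only; the permutation encoding is invented) shows this.

```python
# Hedged sketch of the HasRoot and NumRoots statistics on the group side.
# Permutations of {0,...,d-1} are given as tuples, perm[i] being the image
# of i; a point is fixed by the generated group iff every generator fixes it.

def fixed_points(gens, d):
    """Points fixed by the group generated by `gens`."""
    return [i for i in range(d) if all(g[i] == i for g in gens)]

def has_root_stat(gens, d):
    """HasRoot: true iff the group has a fixed point."""
    return bool(fixed_points(gens, d))

def num_roots_stat(gens, d):
    """NumRoots: the number of fixed points of the group."""
    return len(fixed_points(gens, d))

# Generators (0 1) and (0 1)(3 4) acting on 5 points: only 2 is fixed.
gens = [(1, 0, 2, 3, 4), (1, 0, 2, 4, 3)]
print(num_roots_stat(gens, 5))  # 1
```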
\subsection{\texttt{FactorDegrees}} \label{gg-sec-stat-facdegs} \(s(G)\) is the multiset of sizes of orbits of \(G\). Correspondingly, \(s(R)\) is the multiset of degrees of irreducible factors of \(R\). This is equivalent to \code{Factors} with the \code{Degree} parameter, but is more efficient because it does not require the explicit computation of the images of \(G\) acting on its orbits. Additionally, it supports ordering as follows: we know that if \(H \leq G\) then the orbits of \(H\) form a refinement of the orbits of \(G\); that is, the orbits of \(G\) are unions of orbits of \(H\). Hence, given two multisets \(v_1\) and \(v_2\) of orbit sizes, we check combinatorially if one is a refinement of the other. We provide an algorithm to compute maximal preimages of this statistic. First, in case the group \(G\) is intransitive, we embed \(G\) into a direct product \(D\) and find maximal preimages there. For each such preimage \(H\) and each \(d \in D\), we check whether \(H^d \cap G\) is a preimage. Observing that if \(n \in N_D(H)\) and \(g \in G\) then \(H^{ndg} \cap G = (H^d \cap G)^g\), it suffices to consider double coset representatives of \(N_D(H) \backslash D / G\). \begin{algorithm}[Maximal preimages: \texttt{FactorDegrees}] \label{gg-alg-maxpre-facdegs} Given a group \(G\) of degree \(d\) and a multiset \(v\) of integers such that \(\sum v = d\), returns all maximal preimages of \(v\) in \(G\) up to conjugacy.
\begin{algorithmic}[1] \State \(S \leftarrow \emptyset\) \State Embed \(G \subset D = G_1 \times \ldots \times G_r\) \For{maximal preimages \(H\) of \(v\) in \(D\) (\cref{gg-alg-maxpre-facdegs-dp})} \For{double coset representatives \(d\) of \(N_D(H) \backslash D / G\)} \State \(H' \leftarrow H^d \cap G\) \If{\(H'\) has orbits of sizes \(v\)} \State \(S \leftarrow S \cup \{H'\}\) \EndIf \EndFor \EndFor \State \Return \(S\) \end{algorithmic} \end{algorithm} To find maximal preimages in direct products, we first find all the ways in which \(v\) may be written as a union, with each component corresponding to a direct factor. Then by \cref{gg-lem-subpar-dp}, the maximal preimages in \(D\) are direct products of the maximal preimages in each (transitive) factor. \begin{algorithm}[Maximal preimages: \texttt{FactorDegrees}: Direct products] \label{gg-alg-maxpre-facdegs-dp} Given a direct product \(G = G_1 \times \ldots \times G_r\) and \(v\) as above, returns all maximal preimages of \(v\) in \(G\) up to conjugacy. \begin{algorithmic}[1] \State \(S \leftarrow \emptyset\) \For{multisets \((v_1,\ldots,v_r)\) of integers such that \(\sum v_i = \deg G_i\) and \(\bigcup_i v_i = v\)} \For{i = 1, \ldots, r} \State \(S_i \leftarrow\) maximal preimages of \(v_i\) in \(G_i\) (\cref{gg-alg-maxpre-facdegs-trans}) \EndFor \For{\((H_1,\ldots,H_r) \in \prod_i S_i\)} \State \(S \leftarrow S \cup \{H_1 \times \ldots \times H_r\}\) \EndFor \EndFor \State \Return \(S\) \end{algorithmic} \end{algorithm} To find maximal preimages in transitive groups, we embed \(G\) into a wreath product \(W\), and solve the problem there. As with \cref{gg-alg-maxpre-facdegs}, a loop over coset representatives lifts these to all preimages in \(G\). \begin{algorithm}[Maximal preimages: \texttt{FactorDegrees}: Transitive] \label{gg-alg-maxpre-facdegs-trans} Given a transitive group \(G\) and \(v\) as above, returns all maximal preimages of \(v\) in \(G\) up to conjugacy. 
\begin{algorithmic}[1] \State \(S \leftarrow \emptyset\) \State Embed \(G \subset W = G_r \wr \ldots \wr G_1\) \For{maximal preimages \(H\) of \(v\) in \(W\) (\cref{gg-alg-maxpre-facdegs-wr})} \For{double coset representatives \(w\) of \(N_W(H) \backslash W / G\)} \State \(H' \leftarrow H^w \cap G\) \If{\(H'\) has orbits of sizes \(v\)} \State \(S \leftarrow S \cup \{H'\}\) \EndIf \EndFor \EndFor \State \Return \(S\) \end{algorithmic} \end{algorithm} \begin{remark} Sometimes, if the wreath product \(W\) is very large compared to \(G\), the number of double cosets to check makes \cref{gg-alg-maxpre-facdegs-trans} infeasible. In this case, we use the naive algorithm instead. \end{remark} For wreath products, we work recursively so that we only need to consider a single wreath product \(A \wr B\). By \cref{gg-lem-subpar-wr}, the maximal preimages correspond to choosing a partition \(\mathcal{X}\) for \(B\), and for each \(X \in \mathcal{X}\) a partition \(\mathcal{Y}_X\) for \(A\), with \(v = \{\abs X \abs Y : Y \in \mathcal{Y}_X, X \in \mathcal{X}\}\). We can think of \(v\) as the areas of a \(d \times e\) rectangle which has a series of vertical cuts (corresponding to the sizes of \(\mathcal{X}\)), and each piece (\(X\)) having a further series of horizontal cuts (corresponding to the sizes of \(\mathcal{Y}_X\)). We call this a ``rectangle division'' (see \cref{gg-fig-recdiv}). For each such division, we find all possible corresponding partitions of \(A\) and \(B\), and take all combinations to construct the partitions for \(A \wr B\). 
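To make the ``rectangle division'' notion concrete, here is a small Python sketch (illustrative only, not part of the implementation) which checks the defining conditions of a division, using the division pictured in \cref{gg-fig-recdiv}.

```python
# Hedged sketch: check that a proposed rectangle division of a w x h
# rectangle is valid, i.e. the column widths sum to w, each column's
# heights sum to h, and the piece areas give the required multiset.

def is_rectangle_division(division, w, h, areas):
    """`division` is a list of (width, [heights]) pairs; `areas` is the
    required multiset of piece areas, given as a list."""
    if sum(wi for wi, _ in division) != w:
        return False
    if any(sum(hs) != h for _, hs in division):
        return False
    pieces = sorted(wi * hj for wi, hs in division for hj in hs)
    return pieces == sorted(areas)

# The division of the 5 x 4 rectangle from the figure: areas {8, 6, 3, 3}.
division = [(3, [2, 1, 1]), (2, [4])]
print(is_rectangle_division(division, 5, 4, [8, 6, 3, 3]))  # True
```

The enumeration of all such divisions for given \(w\), \(h\) and areas is what the rectangle-division algorithm later in this article produces.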
\begin{figure} \centering \begin{tikzpicture} \draw (0,0) -- (5,0) -- (5,4) -- (0,4) -- (0,0); \draw (3,0) -- (3,4); \draw (0,2) -- (3,2); \draw (0,3) -- (3,3); \end{tikzpicture} \caption[A rectangular division]{A rectangular division of a \(5 \times 4\) rectangle, represented as \(\{(3, \{2,1,1\}), (2, \{4\})\}\), with areas \(\{8,6,3,3\}\).} \label{gg-fig-recdiv} \end{figure} \begin{algorithm} \label{gg-alg-maxpre-facdegs-wr} Given a wreath product \(G = W_r \wr \ldots \wr W_1\) and \(v\) as above, returns all maximal preimages of \(v\) in \(G\) up to conjugacy. \begin{algorithmic}[1] \If{r = 0} \State \Return \(\{G\}\) \EndIf \State \(A \leftarrow W_r \wr \ldots \wr W_2\) \State \(B \leftarrow W_1\) \State \(S \leftarrow \emptyset\) \For{rectangle divisions \(\{(w_i,\{h_{i,j} \,:\, j\}) \,:\, i\}\) of \(\deg A \times \deg B\) into areas \(v\) } \State \(S_B \leftarrow\) maximal preimages of \(\{w_i \,:\, i\}\) in \(B\) (naive \cref{gg-alg-maxpre-naive}) \For{i} \State \(S_{A,i} \leftarrow\) maximal preimages of \(\{h_{i,j} \,:\, j\}\) in \(A\) (recursively) \EndFor \For{\(H_B \in S_B\)} \State \(\mathcal{X} \leftarrow \operatorname{Orbits}(H_B)\) \For{bijections \(m : \mathcal{X} \to \{i\}\) so that \(\abs X = w_{m(X)}\)} \For{\((H_{A,1},\ldots) \in \prod_i S_{A,i}\)} \State \(H \leftarrow \parens{\prod_x H_{A,m(\mathcal{X}(x))}} \rtimes H_B\) \State \(S \leftarrow S \cup \{H\}\) \EndFor \EndFor \EndFor \EndFor \State \Return \(S\) \end{algorithmic} \end{algorithm} We use the naive algorithm to find the maximal preimages of transitive and primitive groups. Since we are mainly dealing with groups close to \(p\)-groups, we expect that they have plenty of block structure and therefore the factors in any such wreath product are small enough to use the naive algorithm. \subsection{\texttt{NumAuts}} \label{gg-sec-stat-numauts} \(s(G)\) is the index \((N_G(S):S)\) where \(S := \operatorname{Stab}_G(1)\), assuming \(G\) is transitive. 
\(s(R)\) is the number of automorphisms \(\abs{\operatorname{Aut}(L/K)}\) where \(R\) is irreducible and defines the extension \(L/K\). Observe that if \(G=\operatorname{Gal}(R/K)\), then \(S=\operatorname{Gal}(R/L)\), \(N_G(S)\) is (by definition) the largest subgroup of \(G\) in which \(S\) is normal, and hence its fixed field is the smallest subfield \(M\) of \(L/K\) such that \(L/M\) is normal. Hence \(\operatorname{Gal}(L/M)\) is \(\operatorname{Aut}(L/K)\), and so \(\operatorname{Aut}(L/K) \cong N_G(S)/S\). As we shall see in \cref{gg-lem-autgrp-order}, if \(H \leq G\) then \(s(G) \mid s(H)\). Hence \(v_1 \preceq v_2\) is \(v_2 \mid v_1\). \subsection{\texttt{AutGroup}} \label{gg-sec-stat-autgroup} \(s(G)\) is the group \(N_G(S)/S\) where \(S := \operatorname{Stab}_G(1)\) as a regular permutation group of degree \((N_G(S):S)\); it requires \(G\) to be transitive. Correspondingly, \(s(R)\) requires \(R\) to be irreducible, and is \(\operatorname{Aut}(L/K)\) where \(L\) is the field defined by \(R\). \(v_1 \sim v_2\) iff \(v_1\) and \(v_2\) are groups of the same degree and are conjugate in the symmetric group of this degree. The test for ordering uses the following lemma, which says that as the Galois group gets smaller, the automorphism group gets larger. Hence \(v_1 \preceq v_2\) is defined as follows: \(v_1\) must have degree at least the degree of \(v_2\), and \(v_2\) must be conjugate to a subgroup of \(v_1\). \begin{lemma} \label{gg-lem-autgrp-order} Suppose \(G' \leq G\) acts transitively on a set \(X\). Fix \(x \in X\) and define \(S := \operatorname{Stab}_G(x)\), \(N := N_G(S)\), \(A := N/S\) and define \(S'\), \(N'\), \(A'\) similarly with respect to \(G'\). Then \(A\) is naturally isomorphic to a subgroup of \(A'\).
\end{lemma} \begin{proof} By definition \begin{align*} N &= \{ n \in G : s \in S \Rightarrow s^n \in S \} \\ &= \{ n \in G : s \in S \Rightarrow (s^n)(x) = x \} \\ &= \{ n \in G : s \in S \Rightarrow s(n(x)) = n(x) \} \\ &= \{ n \in G : s \in S \Rightarrow s \in \operatorname{Stab}_G(n(x)) \} \\ &= \{ n \in G : S \subseteq \operatorname{Stab}_G(n(x)) \} \\ &= \{ n \in G : S = \operatorname{Stab}_G(n(x)) \} \text{ by orbit-stabilizer theorem} \\ &= \{ n \in G : n(x) \in \operatorname{Fix}(S) \} \\ &= \{ n \in G : n(y) \in \operatorname{Fix}(S) \} \text{ for any \(y \in \operatorname{Fix}(S)\) by symmetry} \\ &= \{ n \in G : y \in \operatorname{Fix}(S) \Rightarrow n(y) \in \operatorname{Fix}(S) \} \\ &= \operatorname{Stab}_G \operatorname{Fix}(S) \end{align*} is the group of elements of \(G\) which permute the fixed points of \(S := \operatorname{Stab}_G(x)\). Since \(G\) is transitive, for each \(y \in \operatorname{Fix}(S)\) there exists \(n \in G\) such that \(n(x) = y\), and hence \(n \in N\). 
We deduce that \(N\) acts transitively on \(\operatorname{Fix}(S)\), and in particular the orbit-stabilizer theorem implies that \[\abs A = (N:S) = \abs{\operatorname{Fix}(S)}.\] Similarly, since \(G'\) is also transitive, \(N \cap G' = \operatorname{Stab}_{G'} \operatorname{Fix}(S)\) acts transitively on \(\operatorname{Fix}(S)\), and so the orbit-stabilizer theorem implies \[\abs{N \cap G'} = \abs{\operatorname{Stab}_{N \cap G'}(x)} \abs{\operatorname{Fix}(S)},\] but noting that this stabilizer is actually \(S'\), we deduce \[(N \cap G' : S') = (N : S).\] The isomorphism theorems imply \[(N \cap G')/(S \cap G') \cong (N \cap G')S/S \leq N/S,\] but noting that \(S' = S \cap G'\), the previous paragraph implies that we have equality, and hence naturally \[(N \cap G')/(S \cap G') \cong N/S =: A.\] Finally, note that \[N \cap G' = \operatorname{Stab}_{G'} \operatorname{Fix}(S) \leq \operatorname{Stab}_{G'} \operatorname{Fix}(S') =: N'\] so that \[(N \cap G')/(S \cap G') \leq N'/S' =: A'.\] \end{proof} \subsection{\texttt{Tup}} This statistic takes as a parameter a tuple \((s_1,\ldots,s_k)\) of statistic algorithms. Then \(s(G) = (s_1(G),\ldots,s_k(G))\) and similarly for \(s(R)\). Also \(v_1 \sim v_2\) iff \(v_{1,i} \sim v_{2,i}\) for all \(i\), and similarly for \(\preceq\). \section{Subgroup choice algorithms} \label{gg-sec-choice} A subgroup choice algorithm decides, given the current state of a group theory algorithm (\cref{gg-sec-groups}) for the resolvent method, which subgroup \(U \leq W\) to form a resolvent from next. Currently we use one method, \code{Tranche}, which generates a sequence \(\mathscr{U}_1,\mathscr{U}_2,\ldots\) of sets of subgroups of \(W\) one at a time, which we call \define{tranches}. Given the current tranche, \(\mathscr{U}\), we inspect each element \(U\) in turn to test if it is useful by some measure (see \cref{gg-rmk-useful}). If so, we use one such \(U\).
If there is no such \(U\), we declare the tranche useless and move on to the next one. The idea is that we avoid enumerating all possible subgroups \(U\leq W\), and only generate them until we find a useful one. \begin{remark}[On usefulness] \label{gg-rmk-useful} In the \texttt{All} group theory algorithm, we have a pool \(\mathcal{P}\) of all possible Galois groups, and therefore we know all of the possible outcomes of using the group \(U\) to form a resolvent: i.e. the resolvent has one of the Galois groups \(\braces{q(P) \,:\, P \in \mathcal{P}}\) and so we measure the statistic values \(\mathcal{S} = \braces{s(q(P)) \,:\, P \in \mathcal{P}}\). If \(\mathcal{S}\) contains multiple elements, then \(U\) is useful because we will certainly cut down the list \(\mathcal{P}\). Usefulness for \texttt{Maximal} and \texttt{Maximal2} is defined in \cref{gg-sec-groups-maximal,gg-sec-groups-maximal2}. \end{remark} The rest of this section describes some possible methods for producing tranches. \subsection{\texttt{All}} \label{gg-sec-tranche-all} Produces a single tranche containing all subgroups of \(W\). \subsection{\texttt{Index}} \label{gg-sec-tranche-idx} For each divisor \(n \mid \abs{W}\), produces a tranche containing all the subgroups of \(W\) of index \(n\). There are algorithms to produce the subgroups of a group with a given index. For example, the \texttt{Subgroups} intrinsic in Magma has an \texttt{IndexEqual} parameter for this purpose. \subsection{\texttt{OrbitIndex}} \label{gg-sec-tranche-oidx} \begin{definition} \label{gg-def-oidx} For \(U \leq W \leq S_d\), the \define{orbit index of \(U\) in \(W\)} is the index \((W : U')\) where \[U' = \operatorname{Stab}_W \operatorname{Orbits}(U) = \braces{w \in W \,:\, X \in \operatorname{Orbits}(U), x \in X \Rightarrow w(x) \in X}\] and is denoted \((W:U)^{\operatorname{orb}}\). The \define{remaining orbit index of \(U\) in \(W\)} is \((W:U)/(W:U)^{\operatorname{orb}} = (U':U)\).
If \(\mathcal{X}\) is a partition of \(\braces{1,\ldots,d}\), then it is a \define{subgroup partition for \(W\)} if there exists \(U \leq W\) such that \(\mathcal{X} = \operatorname{Orbits}(U)\). The \define{index} \((W:\mathcal{X})\) of a subgroup partition \(\mathcal{X}\) is \((W : \operatorname{Stab}_W(\mathcal{X}))\). \end{definition} For each divisor \(n \mid \abs{W}\) and \(r \mid n\), produces a tranche containing all the subgroups of \(W\) of index \(n\) and of remaining orbit index \(r\). We find empirically that restricting to small \(r\), such as \(\operatorname{val}_p(r)\le1\), typically results in an algorithm which still terminates, and does so more quickly because it generates many fewer groups. To produce the tranche corresponding to a given \((n,r)\), we compute the subgroup partitions \(\mathcal{X}\) of \(\braces{1,\ldots,d}\) such that \((W:\operatorname{Stab}_W(\mathcal{X})) = m := \tfrac{n}{r}\), and then compute the subgroups of \(\operatorname{Stab}_W(\mathcal{X})\) of index \(r\). To efficiently compute the subgroup partitions of \(W\) of a given index, we use the special form of \(W\). If \(W\) is a wreath product, direct product, or symmetric group, then we can use the algorithms in the rest of this section to reduce the problem to computing subgroup partitions of smaller groups. For these smaller groups, we compute the subgroup partitions by explicitly enumerating all the subgroups. \begin{lemma}[Partitions of direct products] \label{gg-lem-subpar-dp} Suppose \(W_i \leq S_{d_i}\) for \(i=1,\ldots,k\) (each symmetric group acting on a disjoint set) and \(W = W_1 \times \cdots \times W_k\). If \(\mathcal{X}_i\) is a partition for \(W_i\) of orbit index \(m_i\) then \(\bigcup_i \mathcal{X}_i\) is a partition for \(W\) of orbit index \(\prod_i m_i\). Every partition for \(W\) is of this form. \end{lemma} \begin{proof} By definition \(m_i = (W_i : \operatorname{Stab}_{W_i}(\mathcal{X}_i))\).
Now \[\operatorname{Stab}_W(\bigcup_i \mathcal{X}_i) = \prod_i \operatorname{Stab}_{W_i}(\mathcal{X}_i)\] and the result follows. Take any \(U \leq W\), consider its projections \(U_i\) to \(W_i\), and let \(\mathcal{X}_i = \operatorname{Orbits}(U_i)\); then clearly \(\operatorname{Orbits}(U) = \bigcup_i \mathcal{X}_i\). \end{proof} \begin{algorithm}[Partitions of direct products] \label{gg-alg-subpar-dp} Given \(W_i \leq S_{d_i}\) for \(i=1,\ldots,k\) and an integer \(m \mid \prod_i \abs{W_i}\), this returns all the partitions for \(W = W_1 \times \cdots \times W_k\) of index \(m\). \begin{algorithmic}[1] \If{\(k = 0\)} \State\Return \(\braces{\emptyset}\) \EndIf \State \(S \leftarrow \emptyset\) \ForAll{\(m_1 \mid \gcd(m, \abs{W_1})\)} \State \(S_1 \leftarrow\) partitions of \(W_1\) of index \(m_1\) \State \(S_2 \leftarrow\) partitions of \(W_2 \times \cdots \times W_k\) of index \(m_2 = \tfrac{m}{m_1}\) \State \(S \leftarrow S \cup \braces{\mathcal{X}_1 \cup \mathcal{X}_2 \,:\, \mathcal{X}_1 \in S_1, \mathcal{X}_2 \in S_2}\) \EndFor \State\Return \(S\) \end{algorithmic} \end{algorithm} \begin{lemma}[Partitions of wreath products] \label{gg-lem-subpar-wr} Suppose \(A,B\) are permutation groups, let \(\mathcal{X}\) be a subgroup partition for \(B\), and for each \(X \in \mathcal{X}\) let \(\mathcal{Y}_X\) be a subgroup partition for \(A\). Then \(\mathcal{Z} = \braces{X \times Y \,:\, X \in \mathcal{X}, Y \in \mathcal{Y}_X}\) is a subgroup partition for \(W = A \wr B\), its index is \((B:\mathcal{X}) \prod_{X \in \mathcal{X}} (A:\mathcal{Y}_X)^{\abs{X}}\), and all subgroup partitions are of this form up to conjugacy.
\end{lemma} \begin{proof} If \(A\) acts on \(\{1,\ldots,d\}\) and \(B\) acts on \(\{1,\ldots,e\}\), then elements of \(A \wr B\) can be defined as elements of the cartesian product \(A^e \times B\) acting on \(\{1,\ldots,e\} \times \{1,\ldots,d\}\) as \[(a_1,\ldots,a_e,b)(x,y) = (b x, a_x y).\] This implies the group operation is \[(a'_1,\ldots,a'_e,b')(a_1,\ldots,a_e,b) = (a'_{b1}a_1,\ldots,a'_{be}a_e,b'b).\] Suppose \(\mathcal{Z}\) is defined as above, and take any \((x,y),(x',y') \in X \times Y \in \mathcal{Z}\). Choose \(b \in \operatorname{Stab}_B(\mathcal{X})\) such that \(b(x)=x'\), which is possible since \(\operatorname{Stab}_B(\mathcal{X})\) acts transitively on \(X\) by definition of a subgroup partition. Choose \(a_x \in \operatorname{Stab}_A(\mathcal{Y}_X)\) such that \(a_x(y)=y'\), and choose all other \(a_{x''} \in \operatorname{Stab}_A(\mathcal{Y}_{X''})\) for \(x'' \in X'' \in \mathcal{X}\) arbitrarily (e.g. the identity). Defining \(g=(a_1,\ldots,a_e,b)\) then \(g(x,y) = (bx,a_xy) = (x',y')\) and by construction \(g \in \operatorname{Stab}_W(\mathcal{Z})\). We conclude that \(\operatorname{Stab}_W(\mathcal{Z})\) acts transitively on each element of \(\mathcal{Z}\), and so \(\mathcal{Z}\) is a subgroup partition of \(W\) as claimed. Expressing \(A \wr B\) as a semidirect product \(A^e \rtimes B\), then \(\operatorname{Stab}_W(\mathcal{Z})\) is the subgroup \[\parens{\prod_{x \in \{1,\ldots,e\}} \operatorname{Stab}_A(\mathcal{Y}_{\mathcal{X}(x)})} \rtimes \operatorname{Stab}_B(\mathcal{X})\] where \(\mathcal{X}(x)\) is the \(X \in \mathcal{X}\) such that \(x \in X\). The index \((W:\mathcal{Z})\) follows. Suppose \(G \leq W\). We want to show that a conjugate of \(G\) has orbits of the form \(\mathcal{Z}\). Letting \(\pi : A \wr B \to B\) be the natural projection \((a_1,\ldots,a_e,b) \mapsto b\), let \(\mathcal{X} = \operatorname{Orbits}(\pi(G))\), which is a subgroup partition of \(B\).
For each \(X \in \mathcal{X}\), fix a representative \(x_X \in X\), and for each \(x \in X\), fix some \(g_x = (a_{x,1},\ldots,a_{x,e},b_x) \in G\) such that \(\pi(g_x)(x_X) = x\). Define \(\hat a_x = a_{x,x_X}\) and \(\hat g = (\hat a_1,\ldots,\hat a_e,id) \in W\) then by construction \[g_x^{-1} \hat g (x,y) = (x_X, y).\] Define \(\mathcal{Y}_X\) such that \(\{x_X\} \times Y\) is an orbit of \(S_X := \operatorname{Stab}_G(\{x_X\} \times \{1,\dots,d\})\) for each \(Y \in \mathcal{Y}_X\). We claim that \[\operatorname{Orbits}(G^{\hat g}) = \mathcal{Z} = \{X \times Y \,:\, Y \in \mathcal{Y}_X, X \in \mathcal{X}\}.\] Note that if \(g^{\hat g}(x,y)=(x',y')\) then \(\pi(g^{\hat g})(x)=\pi(g)(x)=x'\) and so \(\mathcal{X}(x)=\mathcal{X}(x')=X\) say. For any \((x,y),(x',y')\) with \(x,x'\in X \in \mathcal{X}\), then there exists \(g \in G\) such that \(g^{\hat g}(x,y) = (x',y')\) iff there is \(g\) such that \((g_{x'}^{-1} g g_x) g_x^{-1} \hat g (x, y) = g_{x'}^{-1} \hat g (x', y')\), i.e. such that \((g_{x'}^{-1} g g_x)(x_X, y) = (x_X, y').\) This occurs iff there is \(g \in S_X\) such that \(g(x_X,y)=(x_X,y')\), which occurs iff \(\mathcal{Y}(y)=\mathcal{Y}(y')=Y\) say, in which case \((x,y),(x',y')\in X \times Y\). This proves the claim. \end{proof} \begin{algorithm}[Partitions of wreath products] \label{gg-alg-subpar-wr} Given \(A \leq S_d, B \leq S_e\) and an integer \(m \mid \abs{A}^e \abs{B}\), this returns all the partitions for \(A \wr B\) of index \(m\) up to conjugacy. 
\begin{algorithmic}[1] \State \(S \leftarrow \emptyset\) \ForAll{\(m' \mid m\)} \State \(S' \leftarrow\) partitions for \(B\) of index \(m'\) \ForAll{\(\mathcal{X} \in S'\)} \ForAll{factorizations of \(\tfrac{m}{m'}\) of the form \(\prod_{X \in \mathcal{X}} m_X^{\abs{X}}\)} \ForAll{\(X \in \mathcal{X}\)} \State \(S_X \leftarrow\) partitions for \(A\) of index \(m_X\) \EndFor \ForAll{\((\mathcal{Y}_X)_X \in \prod_X S_X\)} \State include \(\braces{X \times Y \,:\, X \in \mathcal{X}, Y \in \mathcal{Y}_X}\) in \(S\) \EndFor \EndFor \EndFor \EndFor \State \Return \(S\) \end{algorithmic} \end{algorithm} \begin{remark} The preceding algorithm may produce multiple representatives per conjugacy class. With a little more care, we can return just one as follows. Having chosen \(\mathcal{X}\), we partition it into \(B\)-conjugacy classes \(\mathcal{X}_i=\{X_{i,j}\}\). Then we consider all factorizations of \(m/m'\) of the form \(\prod_{\mathcal{X}_i} m_i^{\abs{X_{i,1}}}\), and then all factorizations of \(m_i\) of the form \(\prod_{X_{i,j}\in\mathcal{X}_i} m_{X_{i,j}}\) with \(m_{i,1} \le m_{i,2} \le \ldots\). Hence we have a factorization of \(m/m'\) of the form \(\prod_{X \in \mathcal{X}} m_{X}^{\abs{X}}\) as above. Note that this includes all factorizations of this form exactly once up to reordering conjugate blocks \(X \in \mathcal{X}\). For such a factorization, we partition \(\mathcal{X}_i\) further into classes \(\mathcal{X}_{i,j}=\{X_{i,j,k}\}\) such that \(m_{i,j}:=m_{X_{i,j,k}}\) is constant within a class. Similar to before, we let \(S_{i,j} = \{\mathcal{Y}_{i,j,\ell}\}\) be all partitions for \(A\) of index \(m_{i,j}\), and consider all \((\mathcal{Y}_{i,j,\ell_k})_{i,j,k} \in \prod_{i,j,k} S_{i,j}\) with \(\ell_1 \le \ell_2 \le \ldots\). Note that this includes all \((\mathcal{Y}_X)_X \in \prod_X S_X\) as above precisely once up to reordering conjugate blocks \(X \in \mathcal{X}\). 
Letting \(\mathcal{Z} = \{X_{i,j} \times Y \,:\, Y \in \mathcal{Y}_{i,j,\ell_k}\}\) be the corresponding partition, then no two such \(\mathcal{Z}\) are conjugate in \(A \wr B\), and they cover all conjugacy classes up to reordering conjugate blocks of \(\mathcal{X}\). Define \(S \leq S_d \wr S_e\) to be the group isomorphic to \(1_d \wr \prod_i 1_{\abs{\mathcal{X}_i}} \wr S_{\abs{X_{i,1}}}\) which reorders conjugate blocks of \(\mathcal{X}\), where \(1_d\) denotes the trivial subgroup of \(S_d\). Then we find all \(\mathcal{Z}\) up to \(A\wr B\) conjugacy by finding all \(S\)-conjugates of \(\mathcal{Z}\) up to \(A\wr B\) conjugacy as follows. Let \(H_0 = \operatorname{Stab}_{A \wr B}(\mathcal{Z})\), then we want all \(S\)-conjugates of \(H_0\) up to \(A\wr B\) conjugacy. Note that if \(n \in N_S(H_0)\), \(s \in S\) and \(g \in (A \wr B) \cap S\) then \(H_0^{nsg} \sim_{A\wr B} H_0^s\), so it suffices to consider double coset representatives \(s\) of \(N_S(H_0) \backslash S / ((A \wr B) \cap S)\). Compute \(H_0^s\) for all such \(s\) and dedupe by \(A \wr B\)-conjugacy. \end{remark} \begin{lemma}[Partitions of symmetric groups] \label{gg-lem-subpar-sym} Any partition \(\mathcal{X}\) of \(\braces{1,\ldots,d}\) is a subgroup partition for \(S_d\) and it has orbit index \(d! / \prod_{X \in \mathcal{X}} \abs{X}!\). \end{lemma} \begin{proof} Indeed \(\operatorname{Stab}_{S_d}(\mathcal{X}) = \prod_{X \in \mathcal{X}} S_X\). \end{proof} \begin{algorithm}[Partitions of symmetric groups] \label{gg-alg-subpar-sym} Given integers \(d \geq 0, m \mid d!\), returns all partitions for \(S_d\) of index \(m\) up to conjugacy. \begin{algorithmic}[1] \If{\(d=0\)} \State\Return \(\braces{\emptyset}\) \EndIf \State \(S \leftarrow \emptyset\) \ForAll{\(d_1=0,\ldots,d\)} \If{\(d!/d_1!(d-d_1)! \mid m\)} \State \(S_2 \leftarrow\) partitions of \(S_{d-d_1}\) of index \(m d_1! (d-d_1)! / d!\) up to conjugacy \State \(S \leftarrow S \cup \braces{\braces{\braces{1,\ldots,d_1}} \cup \mathcal{X}_2 \,:\, \mathcal{X}_2 \in S_2}\) \EndIf \EndFor \State\Return \(S\) \end{algorithmic} \end{algorithm} \input{gg-sec-implementation-v2.tex} \section{Auxiliary algorithms} \label{gg-sec-auxillary} A collection of algorithms used elsewhere in this article. \subsection{Group embeddings} \begin{algorithm}[Embed into direct product] \label{gg-alg-embed-dp} Given \(G \leq S_d\), returns \(s \in S_d\) and transitive \(G_1,\ldots,G_r\) such that \(\sum_i \deg G_i = d\) and \(G^s \leq G_1 \times \ldots \times G_r\). The algorithm finds the image of the group acting on each of its orbits, then takes the direct product. \begin{algorithmic}[1] \State \(X_1=\{x_{1,1},\ldots\},\ldots,X_r \leftarrow \operatorname{Orbits}(G)\) \State \(D \leftarrow G|_{X_1} \times \ldots \times G|_{X_r}\) \State \(s \leftarrow\) permutation sending \(x_{i,j}\) to \(\sum_{i'<i} \abs{X_{i'}} + j\). \State \Return \(D, s^{-1}\) \end{algorithmic} \end{algorithm} \begin{algorithm}[Embed into wreath product] \label{gg-alg-embed-wr} Given transitive \(G \leq S_d\), returns \(s\in S_d\) and primitive \(G_1,\ldots,G_r\) such that \(\prod_i \deg G_i = d\) and \(G^s \leq G_r \wr \ldots \wr G_1\). We choose a minimal non-trivial block-partition of \(G\) and use this to embed \(G\) into \(A \wr B\) with the same block structure. We then recurse on \(B\). Note that, unlike with the direct product case, there is not a canonical ``best'' (smallest) choice for the factors. Indeed, suppose we are given a group of the form \((A_1 \times \ldots \times A_e) \rtimes B \leq S_d \wr S_e\), then we could embed it into \(A \wr B\) where \(A = \angles{A_1^{s_1},\ldots,A_e^{s_e}}\) for any \(s_i \in S_d\). Minimizing \(A\) is difficult.
However, something cheap to compute is: for each \(i>1\), choose \(g_i \in G\) such that \(g_i(1) \in \{d(i-1)+1,\ldots,di\}\), and then reorder \((d(i-1)+1,\ldots,di)\) to \((g_i(1),\ldots,g_i(d))\) (i.e. let \(s_i\) permute \(j \mapsto g_i(j)-d(i-1)\)). If the best \(A\) is cyclic \(C_d\), this is guaranteed to find it. \begin{algorithmic}[1] \If{\(G\) is primitive} \State \Return \(G, id\) \EndIf \State \(P \leftarrow\) minimal partition of \(G\) \State Fix an ordering \((B_1,\ldots,B_e)\) on \(P\) \State Fix an ordering \((x_{i,1},\ldots,x_{i,d})\) on each \(B_i\) \For{i = 1,\ldots,e} \State \(g_i \leftarrow\) an element of \(G\) such that \(g_i(x_{1,1}) = x_{i,1}\) \EndFor \State \(s \leftarrow\) the permutation \(g_i(x_{1,j}) \mapsto d(i-1)+j\) \State \(G' \leq S_d \wr S_e \leftarrow G^s\) \State \(q : G' \to S_e\) the quotient \State \(b : S_e \to S_d \wr S_e\) the canonical lift such that \(b(\sigma)(d(i-1)+j)=d(\sigma(i)-1)+j\) \State \(B \leftarrow q(G')\) \State \(\mathcal{A} \leftarrow \emptyset\) \For{each generator \(g\) of \(G'\)} \State \(g' \leftarrow g b(q(g))^{-1}\) \State \(\mathcal{A} \leftarrow \mathcal{A} \cup \{j \mapsto g'((i-1)d+j)-(i-1)d \,:\, i=1,\ldots,e\} \subset S_d\) \EndFor \State \(A \leq S_d \leftarrow \angles{\mathcal{A}}\) \State \((W_r,\ldots,W_1), s' \leftarrow\) embedding of \(B\) into a wreath product \State \Return \((A,W_r,\ldots,W_1), sb(s')\) \end{algorithmic} \end{algorithm} \subsection{Combinatorial} \begin{algorithm}[Linear divisions] \label{gg-alg-lindiv} Given \(n \in \mathbb{N}\) and a multiset \(N \subset \mathbb{N}\), returns all sub-multisets \(M \subset N\) such that \(\sum M = n\). We represent multisets of integers as a sorted sequence, with the largest element first. We loop over possible choices of the first division, and then recurse to assign the rest. An additional optional parameter \(L\) is such a sequence, and restricts any returned \(M\) to be at most \(L\) in the lexicographic ordering (i.e.
if \(M \ne L\), then at the first place they disagree, \(M\) must be smaller). The default \(L\) is \(\{n\}\), which is no restriction. \begin{algorithmic}[1] \State \(S \leftarrow \emptyset\) \For{distinct \(m_1 \in N\)} \If{\(m_1 \le \min(n, L_1)\)} \For{linear divisions \((m_2,\ldots)\) of \(n-m_1\) from \(N-\{m_1\}\) with limit \((L_2,\ldots)\) if \(m_1 = L_1\) or else limit \((m_1,m_1,\ldots)\)} \State Append \((m_1,m_2,\ldots)\) to \(S\) \EndFor \EndIf \EndFor \State \Return \(S\) \end{algorithmic} \end{algorithm} \begin{algorithm}[Rectangle divisions] \label{gg-alg-recdiv} Given \(w,h \in \mathbb{Z}\) and multiset \(A \subset \mathbb{Z}\), returns all multisets \(\{(w_i,\{h_{i,j}\})\}\) such that \(\sum_i w_i = w\), \(\sum_j h_{i,j} = h\) for each \(i\) and \(\{w_i h_{i,j} \,:\, i,j\} = A\). See \cref{gg-fig-recdiv}. As with the previous algorithm, multisets are represented as sorted sequences. We loop over possible choices of the first division, and then recurse to assign the rest. Optional parameter \(L=(w_L,\{h_{L,j}\})\) limits the allowed divisions, with default \((w,\{h\})\). \begin{algorithmic}[1] \State \(S \leftarrow \emptyset\) \For{distinct divisors \(w_1\) of some \(a \in A\)} \If{\(w_1 \le \min(w, w_L)\)} \For{linear divisions \((h_{1,1},h_{1,2},\ldots)\) of \(h\) from \(\{a/w_1 \,:\, a \in A, w_1 \mid a\}\) with limit \(h_{L,j}\) if \(w_1=w_L\) or else no limit} \For{rectangle divisions \(\{(w_2,\{h_{2,j}\}),\ldots\}\) of width \(w-w_1\), height \(h\), areas \(A-\{w_1h_{1,j}\}\), limit \((w_1,\{h_{1,j}\})\)} \State Append \((w_1,\{h_{1,j}\},\ldots)\) to \(S\) \EndFor \EndFor \EndIf \EndFor \State \Return \(S\) \end{algorithmic} \end{algorithm} \begin{algorithm}[Binning] \label{gg-alg-binning} Suppose we are given integers \((m_1,\ldots,m_r)\) and \((n_1,\ldots,n_s)\) such that we have \(m_i\) indistinguishable copies of some item \(i\), and \(n_j\) indistinguishable copies of some bin \(j\). 
A \define{binning} is some \((m'_1,\ldots,m'_r)\) with \(0 \leq m'_i \leq m_i\) for all \(i\). Suppose we are given a function \(V\) such that when \(V((m'_1,\ldots,m'_r),j)\) is true, we define the binning to be \define{valid for bin \(j\)}. Suppose we are given a function \(S\) such that whenever \((m'_1,\ldots,m'_r)\) is valid for bin \(j\) and \(0 \leq m''_i \leq m'_i\), then \(S((m''_1,\ldots,m''_r),j)\) is true. Such a binning is \define{semi-valid}. A \define{total valid binning} is a sequence of length \(s\), whose \(j\)th entry is a multiset of \(n_j\) valid binnings for bin \(j\), and such that all of these binnings sum to \((m_1,\ldots,m_r)\). This algorithm returns all total valid binnings. Optionally, a \define{partial semi-valid binning} \(B\) can be given (like a total valid binning, except the binnings are only semi-valid and only sum to at most \(m_1,\allowbreak\ldots,\allowbreak m_r\)) and this algorithm only returns total valid binnings which extend it. Optionally, a limit \(N\) can be given (defaulting to \(\infty\)) and this algorithm returns at most this many total binnings. This algorithm works by choosing an item and considering all bins it could be added to. For each choice, we add this to the partial semi-valid binning, and recursively find all the total binnings extending it. In order to avoid duplicated effort, we only add item \(i\) to a bin if either it makes the binning equal to the largest or exceed the largest, when comparing the \(i\)th entry in each binning. Since items are assigned in order, they should be given to the algorithm in whatever order is likely to lead to a contradiction quickest (in terms of not being semi-valid). This usually means the ``largest'' items should come first, because these will ``fill'' the bins quicker. 
\begin{algorithmic}[1] \LineComment{Check semi-valid} \If{\(B\) is not semi-valid} \State\Return \(\emptyset\) \EndIf \LineComment{Base case: nothing more to bin} \If{\(m_i=0\) for all \(i\)} \If{\(B\) is a total valid binning} \State\Return \(\{B\}\) \Else \State\Return \(\emptyset\) \EndIf \EndIf \LineComment{General case} \State \(\mathcal{R} \leftarrow \emptyset\) \State \(i \leftarrow \min \{i \,:\, m_i \ne 0\}\) \State \(m_i \leftarrow m_i-1\) \LineComment{Put an item \(i\) into a bin of some type \(j\)} \For{\(j=1,\ldots,s\)} \State \(B_j \leftarrow\) the \(j\)th entry of \(B\) (a multiset of \(n_j\) binnings) \State \(B' \leftarrow\) the set of binnings in \(B_j\) with entry \(i\) set to 0 \For{\(b' \in B'\)} \LineComment{Increase the highest value} \State \(m''_i \leftarrow\) the largest value of \(b_i\) among all \(b \in B_j\) agreeing with \(b'\) away from the \(i\)th entry \State \(b_0 \leftarrow\) \(b'\) with the \(i\)th entry set to \(m''_i\) \State \(b \leftarrow\) \(b'\) with the \(i\)th entry set to \(m''_i+1\) \State \(\mathcal{R} \leftarrow \mathcal{R} \cup\) all total valid binnings extending \(B\) with \(b_0\) replaced by \(b\) in \(B_j\) \LineComment{Increase the next one down} \State \(b_{-1} \leftarrow\) \(b'\) with the \(i\)th entry set to \(m''_i-1\) \If{\(b_{-1} \in B_j\)} \State \(\mathcal{R} \leftarrow \mathcal{R} \cup\) all total valid binnings extending \(B\) with \(b_{-1}\) replaced by \(b_0\) in \(B_j\) \EndIf \EndFor \EndFor \State \Return \(\mathcal{R}\) \end{algorithmic} \end{algorithm} \section{Implementation and results} \label{gg-sec-implementation} These algorithms have been implemented \cite{galoiscode} for the Magma computer algebra system \cite{magma}. Our main \code{GaloisGroup} routine takes two arguments: a polynomial over a \(p\)-adic field, and a string describing the parameterization of the algorithm to use. Our algorithm is by design highly modular, with each piece of the parameterization as independent as possible from the rest.
This means that if one has a new algorithm for evaluating resolvents for instance, one simply needs to implement this algorithm satisfying a particular interface, and then add a line of code to the parameterization parser. The main omission from our implementation is that the \code{SinglyWild} global model algorithm is not available in full generality, which means that for wild extensions our global model will usually use symmetric groups. Over \(\mathbb{Q}_2\) with a \(2 \times \ldots \times 2\) ramification filtration this is not a problem, but for coarser filtrations, \(S_8\) is much larger than \(C_2^3\) for example, and \(S_7\) is much larger than \(C_7\), and so our global models are far from optimal. A special case of \code{SinglyWild} has been implemented and is discussed specifically in \cref{gg-sec-impl-sw}. All experiments reported on in this section were performed on a 2.7GHz Intel Xeon. Any timings are given in core-seconds. Tables of Galois groups have been produced from all runs in this section and are available from the implementation website \cite{galoiscode}. Unless otherwise stated, all experiments use the ``exact'' \(p\)-adic polynomial type made available by the \texttt{ExactpAdics} package \cite{exactpadics}. This uses infinite-precision arithmetic and its routines are designed to give provably correct results (modulo coding errors) and hence our algorithm also yields provably correct results except for \cref{gg-rmk-resolvent-complex-precision}. See \cite[Ch. II, \S13]{DPhD} for a more detailed account. \subsection{Some particular parameterizations} Six parameterizations we will consider are named A0, B0, A1, B1, A2 and B2. These parameterizations all try three algorithms in turn: \code{Tame} (\cref{gg-sec-tame}), \code{Singly\-Ra\-mi\-fied} (\cref{gg-sec-singlyramified}) and \code{ResolventMethod} (\cref{gg-sec-arm}). 
The resolvent method evaluates resolvents using a global model which first factorizes the polynomial, then finds the ramification tower of the field defined by each factor, then finds a global model for each segment of the tower. For the A parameterizations, this global model is \code{Symmetric}. For the B parameterizations, we use the \code{RootOfUnity}, \code{RootOfUniformizer} or \code{Symmetric} global model, depending on whether the segment is unramified, tame or wild. The number part of the parameterization name controls the group theory part of the algorithm. For A0 and B0, we enumerate \code{All} possible Galois groups, then eliminate candidates based on the \code{FactorDegrees} statistic for resolvents of all subgroups. For A1 and B1, we do the same except using the \code{OrbitIndex} method to only generate resolvents for subgroups whose remaining orbit index \(r\) satisfies \(v_p(r) \le 1\). For A2 and B2, instead of enumerating all possible Galois groups, we work down the graph of possibilities using \code{Maximal2}. We shall also consider the parameterization 00, which is the same as A0, but which uses a \code{Symmetric} global model for each factor and the \code{RootsMaximal} group theory algorithm \cite[Ch. II, \S5.4]{DPhD} which mimics Stauduhar's original absolute resolvent method \cite{Stauduhar73}. \subsection{Up to degree 12 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}, \texorpdfstring{\(\mathbb{Q}_3\)}{Q3} and \texorpdfstring{\(\mathbb{Q}_5\)}{Q5}} \label{gg-sec-d12} The local fields database (LFDB) \cite{LFDB} tabulates data about all extensions of degree up to 12 over \(\mathbb{Q}_p\) for all \(p\) including a defining polynomial, residue and ramification degrees, Galois and inertia groups, and the Galois slope content which summarizes the ramification polygon of the Galois closure. 
We have run our algorithm with the eight parameterizations \texttt{Naive}, 00 and A0 to B2 on all defining polynomials from the LFDB of degrees 2 to 12 over \(\mathbb{Q}_2\), \(\mathbb{Q}_3\) and \(\mathbb{Q}_5\). We also ran with the parameterization A0 but using Magma's default inexact polynomial representation, which does not guarantee correctness; we denote this A0*. In all cases, the Galois group agrees with that reported in the LFDB. The mean run times of these are given in Tables~\ref{gg-tbl-d12-q2}, \ref{gg-tbl-d12-q3} and \ref{gg-tbl-d12-q5}. In each case, the times within 10\% of the smallest are shown in bold. Counts marked with an asterisk (*) represent a random sample of all possibilities. Times marked with a numeric superscript mean that the algorithm failed to find the Galois group for this many polynomials; these are not included in the mean. A dash (---) means the corresponding algorithm was not tried. A cross (\texttimes) means the corresponding runs were prohibitively slow. Times preceded by \(\approx\) are the mean of a small number of runs, the rest being prohibitively slow. This notation is reused in subsequent tables.
\begin{table} \centering \input{gg-tbl-d12-q2.tex} \caption[Timings on polynomials up to degree 22 over \(\mathbb{Q}_2\)]{Mean run times for some parameterizations on polynomials defining fields of given degrees over \(\mathbb{Q}_2\).} \label{gg-tbl-d12-q2} \end{table} \begin{table} \centering \begin{tabular}{rrrrrrrrrrr} \hline Deg & \# & \multicolumn{9}{l}{Run time (seconds)} \\ & & Naive & 00 & A0* & A0 & B0 & A1 & B1 & A2 & B2 \\ \hline 2 & 3 & \bfseries 0.04 & 0.11 & 0.07 & 0.10 & 0.11 & 0.12 & 0.12 & 0.11 & 0.11 \\ 3 & 10 & 0.05 & 0.07 & \bfseries 0.04 & 0.06 & 0.06 & 0.06 & 0.06 & 0.05 & 0.06 \\ 4 & 5 & 0.10 & 0.10 & \bfseries 0.05 & 0.08 & 0.08 & 0.08 & 0.12 & 0.09 & 0.09 \\ 5 & 2 & \bfseries 0.08 & 0.16 & 0.10 & 0.15 & 0.16 & 0.15 & 0.16 & 0.14 & 0.16 \\ 6 & 75 & 0.66 & 0.29 & \bfseries 0.13 & 0.31 & 0.33 & 0.34 & 0.32 & 0.30 & 0.32 \\ 7 & 2 & 0.12 & 0.17 & \bfseries 0.10 & 0.15 & 0.18 & 0.19 & 0.15 & 0.16 & 0.17 \\ 8 & 8 & 0.10 & 0.09 & \bfseries 0.06 & 0.09 & 0.08 & 0.09 & 0.08 & 0.08 & 0.08 \\ 9 & 795 & \(\approx 400\) & \(\approx 100\) & --- & \bfseries 0.63 & \bfseries 0.64 & \bfseries 0.67 & \bfseries 0.66 & \bfseries 0.66 & 0.73 \\ 10 & 6 & 0.14 & 0.09 & \bfseries 0.08 & 0.09 & 0.09 & 0.09 & 0.10 & 0.09 & 0.10 \\ 11 & 2 & 0.15 & 0.16 & \bfseries 0.11 & 0.17 & 0.17 & 0.18 & 0.19 & 0.21 & 0.20 \\ 12 & 785 & \texttimes & \texttimes & --- & \bfseries 1.52 & \bfseries 1.57 & 1.90 & 2.24 & 2.21 & 2.54 \\ \hline \end{tabular} \caption[Timings on polynomials up to degree 12 over \(\mathbb{Q}_3\)]{Mean run times for some parameterizations on polynomials defining fields of given degrees over \(\mathbb{Q}_3\). 
There were 11 polynomials of degree 12 for which A0, A1 and A2 did not succeed due to a bug in Magma; these are not included in timings.} \label{gg-tbl-d12-q3} \end{table} \begin{table} \centering \begin{tabular}{rrrrrrrrrrr} \hline Deg & \# & \multicolumn{9}{l}{Run time (seconds)} \\ & & Naive & 00 & A0* & A0 & B0 & A1 & B1 & A2 & B2 \\ \hline 2 & 3 & \bfseries 0.04 & 0.11 & 0.07 & 0.12 & 0.28 & 0.11 & 0.11 & 0.12 & 0.11 \\ 3 & 2 & \bfseries 0.09 & 0.14 & \bfseries 0.10 & 0.15 & 0.15 & 0.15 & 0.15 & 0.20 & 0.16 \\ 4 & 7 & \bfseries 0.03 & 0.07 & 0.04 & 0.07 & 0.07 & 0.08 & 0.07 & 0.09 & 0.08 \\ 5 & 26 & 0.12 & 0.05 & \bfseries 0.02 & 0.05 & 0.06 & 0.05 & 0.06 & 0.05 & 0.06 \\ 6 & 7 & 0.07 & 0.09 & \bfseries 0.05 & 0.08 & 0.08 & 0.08 & 0.09 & 0.08 & 0.08 \\ 7 & 2 & 0.12 & 0.17 & \bfseries 0.10 & 0.15 & 0.16 & 0.16 & 0.16 & 0.15 & 0.21 \\ 8 & 11 & 0.07 & 0.09 & \bfseries 0.05 & 0.08 & 0.07 & 0.08 & 0.08 & 0.07 & 0.09 \\ 9 & 3 & 0.12 & 0.11 & \bfseries 0.09 & 0.15 & 0.13 & 0.13 & 0.13 & 0.13 & 0.13 \\ 10 & 258 & \(\approx 100\) & \texttimes & --- & \bfseries 2.09 & \bfseries 1.93 & 3.00 & 2.76 & 16.02 & 11.87 \\ 11 & 2 & 0.15 & 0.17 & \bfseries 0.11 & 0.18 & 0.17 & 0.17 & 0.19 & 0.18 & 0.44 \\ 12 & 17 & 0.16 & \bfseries 0.08 & \bfseries 0.09 & \bfseries 0.08 & \bfseries 0.08 & \bfseries 0.08 & \bfseries 0.08 & \bfseries 0.08 & \bfseries 0.08 \\ \hline \end{tabular} \caption[Timings on polynomials up to degree 12 over \(\mathbb{Q}_5\)]{Mean run times for some parameterizations on polynomials defining fields of given degrees over \(\mathbb{Q}_5\).} \label{gg-tbl-d12-q5} \end{table} Over \(\mathbb{Q}_2\), we have also run the algorithm on a selection of reducible polynomials whose irreducible factors have a given set of degrees. For example, we consider all pairs \(F_1,F_2 \in K[x]\) of quadratic polynomials defining quadratic fields over \(\mathbb{Q}_2\) and run the algorithm on \(F(x) = F_1(x) F_2(x+1)\). 
Note that the offset \(x+1\) ensures that \(F(x)\) is squarefree in case \(F_1=F_2\). Mean run times are given in \cref{gg-tbl-d12-q2}, where for example degree ``\(2+2=4\)'' means products of quadratics. Observe that A0* is generally faster than A0, suggesting there is some overhead due to using exact arithmetic. However, this overhead is around a factor of two in the worst case and usually less, so not too significant. There is little variation in timings between the six parameterizations A0 to B2. This suggests that for small degrees, there is little overhead in writing down all possible Galois groups \(G \leq W\), or in enumerating all subgroups of \(W\) of a given index. Unsurprisingly, the run time increases in both the degree \(d\) and in \(v_p(d)\), the latter being the number of wild ramification breaks possible. Not displayed in the table is that the variance in these run times is low. In particular, the maximum run time is always within a factor of 3 of the mean, and is usually less. For small degrees, the simple parameterization 00 is comparable to the other parameterizations. However it quickly becomes infeasible as the degree increases, taking for example about 50 seconds at degree 8 over \(\mathbb{Q}_2\). The same is true for the \texttt{Naive} algorithm. Indeed, for small degrees this is often the fastest but becomes infeasibly slow above degree about 10. \subsection{Degree 14 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}} \label{gg-sec-d14} There are two types of wildly ramified extensions \(L/K=\mathbb{Q}_2\) of degree 14: those with \(e(L/K)=2\) and those with \(e(L/K)=14\). In the former case, \(L\) is a ramified quadratic extension of the unique unramified extension \(U/K\) of degree 7. In the latter case, \(L\) is a ramified quadratic extension of the unique (tamely) ramified extension \(T=K(\sqrt[7]2)/K\) of degree 7. We refer to these as Type 14u and Type 14t respectively. 
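The split into Types 14u and 14t can also be seen by elementary bookkeeping: a degree-14 extension of \(\mathbb{Q}_2\) has ramification index \(e\) and residue degree \(f\) with \(ef=14\), and it is wildly ramified exactly when \(p \mid e\). A minimal illustrative sketch of this case analysis (the helper `ramification_types` is ours, not part of the implementation; it does no field arithmetic):

```python
# Classify the possible (e, f) shapes of a degree-d extension of Q_p:
# e*f = d, and the extension is wildly ramified iff p divides e.
def ramification_types(d, p):
    types = []
    for e in range(1, d + 1):
        if d % e == 0:
            f = d // e  # residue degree
            kind = "wild" if e % p == 0 else ("unramified" if e == 1 else "tame")
            types.append((e, f, kind))
    return types

# Over Q_2 in degree 14 the wild cases are exactly e = 2 (Type 14u,
# quadratic over the unramified degree-7 field) and e = 14 (Type 14t,
# quadratic over the tame degree-7 field).
wild = [(e, f) for e, f, kind in ramification_types(14, 2) if kind == "wild"]
print(wild)  # [(2, 7), (14, 1)]
```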
Using the \code{AllExtensions} intrinsic in Magma we have generated all such extensions up to \(K\)-conjugacy, and have run our algorithm on all of these. The timings are given in \cref{gg-tbl-d12-q2} separately for the two types. As a point of comparison, \cite{AwtreyD14} uses a degree 364 resolvent relative to \(W=S_{14}\) and a few other invariants to compute the same Galois groups, taking around 20 hours per polynomial whereas our algorithm takes around 2 seconds. Our results are consistent with \cite[Table 3]{AwtreyD14}. We see that for Type 14t, using a more sophisticated global model \code{Root\-Of\-Uni\-formizer} for \(T/K\) in the B parameterizations instead of \texttt{Symmetric} in the A parameterizations makes a marked improvement to the run-time. Even when we do use \texttt{Symmetric}, we get an improvement for using more sophisticated group theory, comparing A0, A1 and A2. In contrast, for Type 14u using a more sophisticated global model \texttt{RootOfUnity} actually made the run time worse. In this case, with parameterization B0, most of the run time is spent computing complex approximations to resolvents, despite generally using fewer resolvents and using a lower complex precision. This suggests that the implementation of \texttt{RootOfUnity} needs to be optimized. \subsection{Degree 16 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}} \label{gg-sec-d16} Recall (e.g. \cite{PS} or \cite{extensions}) that to an extension of \(p\)-adic fields, we can attach a ramification polygon, which is an invariant of the extension. By attaching further residual information such as the residual polynomials of each face of the ramification polygon, we can form a finer invariant. 
Using the \texttt{pAdicExtensions} package \cite{extensionscode}, which implements these invariants, we generated all possible equivalence classes of the finest such invariant, called the \define{fine ramification polygon with residues and uniformizer residue} in \cite{extensions}, for totally ramified extensions of degree 16 of \(\mathbb{Q}_2\). For each class, we selected at random one Eisenstein polynomial generating a field with this invariant, giving us a sample of 447 polynomials. We divide these polynomials into three types. Writing \(L=L_t/\ldots/L_0=K=\mathbb{Q}_2\) for the ramification filtration of the field they generate, then Type 16a polynomials have \((L_i:L_{i-1})=2\) for all \(i\) (and hence \(t=4\)), Type 16b polynomials are those remaining with \((L_i:L_{i-1})\mid4\) for all \(i\), and Type 16c are the rest (so \((L_i:L_{i-1})=8\) or \(16\) for some \(i\)). There are 64, 253 and 130 polynomials of each type respectively. In total, there are 4,008,960 degree 16 extensions of \(\mathbb{Q}_2\) inside \(\bar \mathbb{Q}_2\) of Type 16a, 1,857,120 of Type 16b and 155,024 of Type 16c \cite{Sinclair}. Per an earlier remark, we do not have \texttt{SinglyWild} global models fully implemented and so use the less efficient \texttt{Symmetric} instead. We expect run times for Types 16b and 16c to be worse than Type 16a, since the former will work relative to groups like \(W=S_4 \wr S_4\) or \(S_2 \wr S_8\) which are larger than \(W=S_2 \wr S_2 \wr S_2 \wr S_2\) of the latter. We expect that with \texttt{SinglyWild} fully implemented, the overgroup for Types 16b or 16c will be smaller not larger than for Type 16a, and that Types 16b and 16c will therefore actually become the easier classes. See \cref{gg-sec-impl-sw} for some evidence supporting this claim. Our algorithm has been run on these polynomials with the 6 parameterizations A0 to B2. \Cref{gg-tbl-d16-q2} summarizes the results, with the polynomials grouped by type. 
Mean timings are also given in \cref{gg-tbl-d12-q2} for comparison. Some of these runs failed to find the Galois group, because the parameterization ran out of resolvents to try; the number of failures is given in the table. The timings only include successful runs. To give an idea of the variance in run time, we report the median and maximum time as well as the mean. \begin{table} \centering \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lrrrrrr} \hline & A0 & B0 & A1 & B1 & A2 & B2 \\ \hline \multicolumn{7}{l}{Type 16a (64 polynomials)} \\ Number failed & 0 & 0 & 4 & 4 & 4 & 4 \\ Mean run time & 53.65 & 54.54 & 17.47 & 18.21 & 7.25 & 7.59 \\ Median run time & 27.87 & 28.64 & 16.69 & 17.00 & 6.06 & 6.34 \\ Maximum run time & 311.86 & 252.39 & 31.57 & 56.59 & 22.99 & 21.76 \\ \hline \multicolumn{7}{l}{Type 16b (253 polynomials)} \\ Number failed & 0 & 0 & 7 & 7 & 7 & 7 \\ Mean run time & 304.97 & 288.25 & 42.37 & 34.90 & 25.47 & 29.40 \\ Median run time & 18.20 & 14.77 & 12.25 & 10.38 & 8.02 & 7.65 \\ Maximum run time & 4016.19 & 3721.84 & 432.85 & 1182.44 & 1063.16 & 1616.56 \\ \hline \multicolumn{7}{l}{Type 16c (130 polynomials)} \\ Number failed & --- & --- & 23 & 23 & 4 & 23 \\ Mean run time & --- & --- & 133.29 & 195.59 & 115.38 & 150.83 \\ Median run time & --- & --- & 10.50 & 1.58 & 1.43 & 1.36 \\ Maximum run time & --- & --- & 2502.06 & 7949.19 & 12432.12 & 4368.25 \\ \hline \end{tabular} \end{adjustbox} \caption[Timings on polynomials of degree 16 over \(\mathbb{Q}_2\)]{Run times in seconds for a selection of parameterizations on a sample of polynomials defining fields of degree 16 over \(\mathbb{Q}_2\) divided into three types.} \label{gg-tbl-d16-q2} \end{table} The run times are significantly higher at degree 16 than at lower degrees, and there are now pronounced differences between the parameterizations, with A0 and B0 being the slowest and A2 and B2 being the fastest. As predicted, Type 16a polynomials are the fastest.
For this type, the median is usually close to the mean and the maximum is not much larger, indicating this is a low-variance regime. Elsewhere, the median is smaller and the maximum is a lot higher, so the variance is greater. \subsection{Degree 18 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}} \label{gg-sec-d18} Using the \texttt{pAdicExtensions} package \cite{extensionscode}, we have generated all ramification polygons of totally ramified extensions \(L/\mathbb{Q}_2\) of degree 18. These have vertices of the form \[(1,J), (2,0), (18,0)\] where the discriminant valuation is \(18+J-1\). Note that these extensions are of the form \(L/T/\mathbb{Q}_2\) where \(T/\mathbb{Q}_2\) is the unique tame extension of degree 9 and \(L/T\) is quadratic. For each polygon, we have generated a set of polynomials generating all extensions with this ramification polygon, and run our algorithm on them all with parameterizations A0 to B2. There are 2046 polynomials in total. Mean timings are given in \cref{gg-tbl-d12-q2}. Note that the B parameterizations are far quicker than A as a result of using the \code{RootOfUniformizer} global model instead of \code{Symmetric} for \(T/\mathbb{Q}_2\). In \cref{gg-tbl-d18-q2} we give the number of polynomials for each ramification polygon (parameterized by \(J\)) and the count of the T-numbers of their Galois groups. 
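As a quick arithmetic cross-check on \cref{gg-tbl-d18-q2}, the per-polygon counts and the per-group totals both account for the same set of polynomials (all numbers below are copied from that table):

```python
# Number of generating polynomials for each ramification polygon
# (parameterized by J), and the total multiplicity of each Galois
# group (identified by its T-number), from the degree-18 table.
counts_by_J = {1: 2, 3: 4, 5: 8, 7: 16, 9: 32,
               11: 64, 13: 128, 15: 256, 17: 512, 18: 1024}
group_totals = {45: 6, 98: 3, 101: 3, 147: 18, 433: 63, 434: 63,
                512: 378, 588: 189, 592: 189, 656: 1134}

# Both accountings describe the same 2046 polynomials.
assert sum(counts_by_J.values()) == sum(group_totals.values()) == 2046

# Each field has 9 conjugates in the algebraic closure, giving the
# number of such degree-18 extensions inside it.
print(9 * sum(counts_by_J.values()))  # 18414
```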
\begin{table} \centering \begin{tabular}{rrp{17em}} \hline \(J\) & \# & Groups \\ \hline 1 & 2 & \(433\), \(434\) \\ 3 & 4 & \(98\), \(101\), \(588\), \(592\) \\ 5 & 8 & \(433^2\), \(434^2\), \(588^2\), \(592^2\) \\ 7 & 16 & \(433^4\), \(434^4\), \(588^4\), \(592^4\) \\ 9 & 32 & \(45^2\), \(147^2\), \(512^{14}\), \(656^{14}\) \\ 11 & 64 & \(433^8\), \(434^8\), \(512^{16}\), \(588^8\), \(592^8\), \(656^{16}\) \\ 13 & 128 & \(433^{16}\), \(434^{16}\), \(512^{32}\), \(588^{16}\), \(592^{16}\), \(656^{32}\) \\ 15 & 256 & \(98^2\), \(101^2\), \(147^4\), \(588^{62}\), \(592^{62}\), \(656^{124}\) \\ 17 & 512 & \(433^{32}\), \(434^{32}\), \(512^{64}\), \(588^{96}\), \(592^{96}\), \(656^{192}\) \\ 18 & 1024 & \(45^{4}\), \(147^{12}\), \(512^{252}\), \(656^{756}\) \\ \hline Total & 2046 & \(45^{6}\), \(98^{3}\), \(101^{3}\), \(147^{18}\), \(433^{63}\), \(434^{63}\), \(512^{378}\), \(588^{189}\), \(592^{189}\), \(656^{1134}\) \\ \hline \end{tabular} \caption[Totally ramified Galois groups of degree 18 over \(\mathbb{Q}_2\)]{Totally ramified Galois groups of degree 18 over \(\mathbb{Q}_2\).} \label{gg-tbl-d18-q2} \end{table} Since \(L/T\) is Galois and \(T/\mathbb{Q}_2\) has only the trivial automorphism, \(\operatorname{Aut}(L/\mathbb{Q}_2) \cong C_2\) and so each \(L/\mathbb{Q}_2\) has 9 conjugates inside \(\bar\mathbb{Q}_2\). The number of polynomials generated times 9 is equal to the number of extensions of degree 18 in \(\bar\mathbb{Q}_2\), from which we deduce we have exactly one polynomial per isomorphism class. \subsection{Degree 20 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}} \label{gg-sec-d20} As in \cref{gg-sec-d18}, we have generated all ramification polygons of totally ramified extensions \(L/\mathbb{Q}_2\) of degree 20. For each we have produced a set of generating polynomials, 511,318 in total. We have computed the Galois groups of all of these polynomials \(F(x)\), which required several parameterizations of our algorithm to cover all cases.
This also occasionally required computing \(\operatorname{Gal}(F/K)\) where \(K = \mathbb{Q}_2(\sqrt[3]2,\zeta_3)\), for which we can compute a more efficient global model than \(F/\mathbb{Q}_2\), at the expense of some more group theory computation. By \cite[Theorem 1]{Monge11} there are 259,968 isomorphism classes of such extensions \(L/\mathbb{Q}_2\) so we have over-counted by a factor of about 2. Average timings are given in \cref{gg-tbl-d12-q2} and counts of Galois groups are given in \cref{gg-tbl-d20-q2}. \begin{table} \centering \begin{tabular}{rp{24em}} \hline \# & Groups \\ \hline 511,318 & \(16^{4}\), \(18^{8}\), \(19^{6}\), \(20^{8}\), \(42^{48}\), \(61^{3}\), \(68\), \(77^{15}\), \(80^{15}\), \(129^{78}\), \(131^{90}\), \(132^{78}\), \(137^{90}\), \(173^{30}\), \(186^{120}\), \(189^{120}\), \(194^{180}\), \(195^{114}\), \(196^{180}\), \(261^{720}\), \(282^{90}\), \(305^{120}\), \(306^{105}\), \(309^{240}\), \(312^{240}\), \(317^{140}\), \(330^{35}\), \(332^{240}\), \(338^{140}\), \(351^{120}\), \(406^{630}\), \(411^{1440}\), \(416^{1440}\), \(417^{1440}\), \(419^{1440}\), \(420^{1440}\), \(422^{630}\), \(434^{720}\), \(435^{1440}\), \(437^{1440}\), \(441^{1440}\), \(443^{720}\), \(444^{562}\), \(447^{720}\), \(448^{720}\), \(449^{562}\), \(471^{85}\), \(472^{225}\), \(510^{1920}\), \(511^{2880}\), \(512^{1920}\), \(514^{1920}\), \(515^{2880}\), \(516^{2880}\), \(517^{1920}\), \(518^{2880}\), \(519^{1920}\), \(520^{1920}\), \(523^{1920}\), \(524^{1920}\), \(526^{1396}\), \(528^{840}\), \(529^{1920}\), \(530^{1920}\), \(632^{11520}\), \(633^{11520}\), \(634^{11520}\), \(678^{255}\), \(683^{675}\), \(847^{5760}\), \(850^{5760}\), \(851^{5760}\), \(854^{5760}\), \(906^{42240}\), \(907^{42240}\), \(908^{34560}\), \(909^{42240}\), \(910^{42240}\), \(911^{34560}\), \(946^{161280}\) \\ \hline \end{tabular} \caption[Totally ramified Galois groups of degree 20 over \(\mathbb{Q}_2\)]{Totally ramified Galois groups of degree 20 over \(\mathbb{Q}_2\).} 
\label{gg-tbl-d20-q2} \end{table} \subsection{Degree 22 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}} \label{gg-sec-d22} As in \cref{gg-sec-d18}, we have generated all ramification polygons of totally ramified extensions \(L/\mathbb{Q}_2\) of degree 22; these have vertices of the form \[(1,J),(2,0),(22,0),\] and for each we have produced a set of generating polynomials. Again, we have precisely one polynomial per isomorphism class, 8190 in total. Timings with parameterizations B0 to B2 are given in \cref{gg-tbl-d12-q2} and counts of Galois groups are given in \cref{gg-tbl-d22-q2}. \begin{table} \centering \begin{tabular}{rrl} \hline \(J\) & \# & Groups \\ \hline 1 & 2 & \(34\), \(35\) \\ 3 & 4 & \(34^{2}\), \(35^{2}\) \\ 5 & 8 & \(34^{4}\), \(35^{4}\) \\ 7 & 16 & \(34^{8}\), \(35^{8}\) \\ 9 & 32 & \(34^{16}\), \(35^{16}\) \\ 11 & 64 & \(6^{2}\), \(37^{62}\) \\ 13 & 128 & \(34^{32}\), \(35^{32}\), \(37^{64}\) \\ 15 & 256 & \(34^{64}\), \(35^{64}\), \(37^{128}\) \\ 17 & 512 & \(34^{128}\), \(35^{128}\), \(37^{256}\) \\ 19 & 1024 & \(34^{256}\), \(35^{256}\), \(37^{512}\) \\ 21 & 2048 & \(34^{512}\), \(35^{512}\), \(37^{1024}\) \\ 22 & 4096 & \(6^{4}\), \(37^{4092}\) \\ \hline Total & 8190 & \(6^{6}\), \(34^{1023}\), \(35^{1023}\), \(37^{6138}\) \\ \hline \end{tabular} \caption[Totally ramified Galois groups of degree 22 over \(\mathbb{Q}_2\)]{Totally ramified Galois groups of degree 22 over \(\mathbb{Q}_2\).} \label{gg-tbl-d22-q2} \end{table} \subsection{Degree 32 over \texorpdfstring{\(\mathbb{Q}_2\)}{Q2}} \label{gg-sec-d32} Our algorithm can compute some non-trivial Galois groups in degree 32. For example, consider \(F(x) = x^{16} + 32 x + 2\), which is Eisenstein with Galois group 16T1638 of index \(8=2^3\) in \(C_2^{\wr 4}\). Using A2, we find the Galois group of \(F(x^2)\) is 32T2583443 of index \(2^{10}\) in \(C_2^{\wr 5}\). This took about 125 seconds, which breaks down as follows.
\begin{center} \begin{tabular}{lrr} \hline & Run time (seconds) & Share of run time \\ \hline Start resolvent algorithm & 23.28 & 18.6\% \\ Choose subgroup & 91.44 & 73.0\% \\ Compute resolvent & 1.39 & 1.1\% \\ Process resolvent & 6.84 & 5.5\% \\ Other & 2.37 & 1.9\% \\ \hline Total & 125.32 & \\ \hline \end{tabular} \end{center} Here, ``start resolvent algorithm'' includes initially factorizing the polynomial, finding the extensions defined by the factors, finding their ramification filtrations, and computing a corresponding global model. ``Choose subgroup'' means time spent by the subgroup choice algorithm choosing a subgroup \(U \leq W\) from which to form a resolvent. ``Compute resolvent'' is the time spent computing a resolvent \(R(x)\) given an invariant for the subgroup \(U\). ``Process resolvent'' is the time spent by the group theory algorithm deducing information about the Galois group from a resolvent, and so in particular includes finding the degrees of the factors of the resolvent and computing maximal preimages. ``Other'' is everything else, including initializing the group theory algorithm and computing invariants. This used 104 resolvents in total: 82 of degree 2, 9 of degree 4, 7 of degree 8, 2 of degree 16 and 4 of degree 32. The maximum complex precision used was 4056 decimal digits. The run time is dominated by time spent choosing subgroups \(U \leq W\), suggesting that this should be the focus for future improvement. The next most dominant part is time spent starting the resolvent algorithm, but this part is essentially independent of the Galois group. Very little time is spent actually computing resolvents, which is perhaps surprising given that this is the part spent using complex embeddings of global models. \subsection{A special case of \texttt{SinglyWild}} \label{gg-sec-impl-sw} We have implemented \texttt{SinglyWild} in the special case \(p=2\) for totally wildly ramified extensions \(L/K\) which are Galois.
Hence \(\operatorname{Gal}(L/K) \cong C_2^k\) where \((L:K)=p^k\). We now define three more parameterizations C0, C1 and C2 which are the same as B0, B1 and B2 except that the \texttt{Symmetric} global model on the wild part is replaced by \texttt{SinglyWild}. It is well-known (e.g. \cite[Ch. IV, \S2, Prop. 7]{SerLF}) that for such an extension \(L/K\) there is an injective group homomorphism \(\operatorname{Gal}(L/K) \to \mathbb{F}_K^+\), and hence \(\operatorname{Gal}(L/K)\) is isomorphic to an \(\mathbb{F}_p\)-subspace of \(\mathbb{F}_K\). In particular, \((\mathbb{F}_K : \mathbb{F}_p) \ge k\) and so \(K/\mathbb{Q}_p\) has residue degree at least \(k\). Using the \texttt{pAdicExtensions} package \cite{extensionscode}, we have generated defining polynomials which between them generate all extensions of the form \(L/U/\mathbb{Q}_2\) where \(U/\mathbb{Q}_2\) is unramified of some degree and \(L/U\) is singly wildly ramified and Galois of some degree. For example, when \(k=2\) and \((U:\mathbb{Q}_2)=4\), the global model in C0 gives the overgroup \(W = C_2^2 \wr C_4\) of order \(2^{10}\), which is somewhat smaller than the overgroup \(W = S_4 \wr C_4\) of order \(2^{14} \cdot 3^4\) from B0. We have run our algorithm with the 9 parameterizations A0 to C2 on these polynomials. Mean timings are given in \cref{gg-tbl-sw-q2}.
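The size comparison above is easy to verify: a wreath product \(A \wr B\), with \(B\) acting on \(e\) points, has order \(|A|^e \cdot |B|\). A quick check of the two overgroups (the helper function is ours, for illustration):

```python
import math

# |A wr B| = |A|^e * |B|, where B permutes e copies of A.
def wreath_order(order_A, e, order_B):
    return order_A ** e * order_B

# SinglyWild overgroup in C0: C_2^2 wr C_4 has order 2^10 = 1024.
assert wreath_order(4, 4, 4) == 2 ** 10

# Symmetric overgroup in B0: S_4 wr C_4 has order 2^14 * 3^4 = 1327104.
assert wreath_order(math.factorial(4), 4, 4) == 2 ** 14 * 3 ** 4
```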
\begin{table} \newcommand{\z}[1]{\makebox[0pt][l]{\textsuperscript{#1}}} \centering \begin{adjustbox}{max width=\textwidth} \begin{tabular}{rrrrrrrrrrrr} \hline Deg & \(k\) & \# & \multicolumn{9}{l}{Run time (seconds)} \\ & & & A0 & B0 & C0 & A1 & B1 & C1 & A2 & B2 & C2 \\ \hline 8 & 2 & 4 & \bfseries 0.53 & \bfseries 0.56 & 0.61 & \bfseries 0.57 & \bfseries 0.57 & 0.66 & 0.62 & \bfseries 0.58 & 0.68 \\ 12 & 2 & 28 & 2.36 & 2.34 & 0.71 & 3.41 & 3.73 & \bfseries 0.64 & 4.57 & 4.79 & \bfseries 0.66 \\ 16 & 2 & 140 & \(\times\) & \(\times\) & 1.23 & 80.02\z1 & 23.27\z1 & \bfseries 0.95 & \(\times\) & \(\times\) & \bfseries 0.98 \\ 24 & 3 & 8 & \(\times\) & \(\times\) & \bfseries 12.55 & \(\times\) & \(\times\) & \bfseries 12.75 & \(\times\) & \(\times\) & \bfseries 12.42 \\ 32 & 3 & 120 & --- & --- & 40.49 & --- & --- & 31.34 & --- & --- & \bfseries 23.68 \\ \hline \end{tabular} \end{adjustbox} \caption[Timings on polynomials using \texttt{SinglyWild} over \(\mathbb{Q}_2\)]{Mean run times for a selection of parameterizations on polynomials defining fields of the form \(L/U/\mathbb{Q}_2\) where \(U/\mathbb{Q}_2\) is unramified and \(L/U\) is singly wildly ramified with Galois group \(C_2^k\). At degree 32, there were four polynomials which did not succeed due to a bug in Magma; these are not included in the mean.} \label{gg-tbl-sw-q2} \end{table} Except at degree 8, the C parameterizations are by far the quickest. \subsection{\texttt{Tame}} \label{gg-sec-tame} As explained in the introduction, if the irreducible factors of \(F(x)\) all generate tamely ramified extensions of \(K\), then its Galois group can be computed directly. \section*{Acknowledgements} This work was partially supported by a grant from GCHQ. \bibliography{refs-bibtex} \end{document}
https://arxiv.org/abs/2210.00630
No Selection Lemma for Empty Triangles
Let $S$ be a set of $n$ points in general position in the plane. The Second Selection Lemma states that for any family of $\Theta(n^3)$ triangles spanned by $S$, there exists a point of the plane that lies in a constant fraction of them. For families of $\Theta(n^{3-\alpha})$ triangles, with $0\le \alpha \le 1$, there might not be a point in more than $\Theta(n^{3-2\alpha})$ of those triangles. An empty triangle of $S$ is a triangle spanned by $S$ not containing any point of $S$ in its interior. Bárány conjectured that there exists an edge spanned by $S$ that is incident to a super constant number of empty triangles of $S$. The number of empty triangles of $S$ might be $O(n^2)$; in such a case, on average, every edge spanned by $S$ is incident to a constant number of empty triangles. The conjecture of Bárány suggests that for the class of empty triangles the above upper bound might not hold. In this paper we show that, somewhat surprisingly, the above upper bound does in fact hold for empty triangles. Specifically, we show that for any integer $n$ and real number $0\leq \alpha \leq 1$ there exists a point set of size $n$ with $\Theta(n^{3-\alpha})$ empty triangles such that any point of the plane is only in $O(n^{3-2\alpha})$ empty triangles.
\section{Introduction} Let $S$ be a set of $n$ points in general position\footnote{A point set $S \subset \mathbb{R}^d$ is in general position if for every integer $1 < k \le d+1 $, no subset of $k$ points of $S$ is contained in a $(k\!-\!2)$-dimensional flat.} in the plane. A \emph{triangle of $S$} is a triangle whose vertices are points of $S$. We say that a point $p$ of the plane \emph{stabs} a triangle $\Delta$ if it lies in the interior of $\Delta$. Boros and F\"uredi~\cite{boros1984number} showed that for any point set $S$ in general position in the plane, there exists a point in the plane which stabs a constant fraction ($\frac{n^3}{27}+O(n^2)$) of the triangles of~$S$. B\'ar\'any~\cite{BARANY-82} extended the result to $\mathbb{R}^d$; he showed that there exists a constant $c_d>0$, depending only on $d$, such that for any point set $S_d \subset \mathbb{R}^d$ in general position, there exists a point in $\mathbb{R}^d$ which is in the interior of $c_d n^{d+1}$ $d$-dimensional simplices spanned by $S_d$. This result is known as \emph{First Selection Lemma}~\cite{MatousekBook-2002}. Later, researchers considered the problem of the existence of a point in many triangles of a given family, $\mathcal{F}$, of triangles of $S$. B\'ar\'any, F\"uredi and Lov\'asz~\cite{barany1990number} showed that for any point set $S$ in the plane in general position and any family $\mathcal{F}$ of $\Theta(n^3)$ triangles of $S$, there exists a point of the plane which stabs $\Theta(n^3)$ triangles from~$\mathcal{F}$. This result, generalized to $\mathbb{R}^d$ by Alon et al.~\cite{alon1992point}, is now also known as \emph{Second Selection Lemma}~\cite{MatousekBook-2002}. Both results for the plane require families of $\Theta(n^3)$ triangles of $S$. It is natural to ask about families of triangles of smaller cardinality. 
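As a small concrete illustration of the stabbing notion, triangles stabbed by a candidate point can be counted by brute force. The following Python sketch is ours, not a construction from the literature: the choice of $13$ points on a circle (odd, so no two points are antipodal) and of the centre as the candidate point are assumptions made purely for illustration; for this set the centre stabs far more than a $1/27$-fraction of all triangles.

```python
from itertools import combinations
from math import cos, sin, pi

def orient(a, b, c):
    """Positive iff a, b, c make a left turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def stabbed(p, a, b, c):
    """True iff p lies strictly in the interior of triangle abc."""
    s = (orient(a, b, p), orient(b, c, p), orient(c, a, p))
    return all(t > 0 for t in s) or all(t < 0 for t in s)

def stab_count(p, pts):
    """Number of triangles of pts stabbed by p (brute force)."""
    return sum(stabbed(p, *t) for t in combinations(pts, 3))

# Illustration only (our choice): 13 points on a circle, centre as candidate.
n = 13
S = [(cos(2 * pi * i / n), sin(2 * pi * i / n)) for i in range(n)]
```

For a regular $13$-gon, a triangle misses the centre exactly when its three vertices lie in an open half-circle, which gives $13\binom{6}{2}=195$ missed triangles out of $\binom{13}{3}=286$.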
For this question, Aronov et al.~\cite{SecondSelectionAronov} showed that for every $0 \leq \alpha \leq 1$ and every family $\mathcal{F}$ of $\Theta(n^{3-\alpha})$ triangles of $S$, there exists a point of the plane which stabs $\Omega(n^{3-3\alpha}/\log^5 n)$ triangles of $\mathcal{F}$. This lower bound was improved by Eppstein~\cite{EppsteinImprovedBound} to the maximum of $n^{3-\alpha}/(2n-5)$ and $\Omega(n^{3-3\alpha}/\log^2 n)$. A mistake in one of the proofs was later found and fixed by Nivasch and Sharir~\cite{EppsteinRevisited}. Furthermore, Eppstein~\cite{EppsteinImprovedBound} constructed $n$-point sets and families of $n^{3-\alpha}$ triangles in them such that every point of the plane is in at most $n^{3-\alpha}/(2n-5)$ triangles for $\alpha \geq 1$ and in at most $n^{3-2\alpha}$ triangles for $0\leq \alpha \leq 1$. Hence, for the number of triangles of a family $\mathcal{F}$ that can be guaranteed to simultaneously contain some point of the plane, there is a continuous transition from a linear fraction for $|\mathcal{F}|=O(n^2)$ to a constant fraction for $|\mathcal{F}|=\Theta(n^3)$. A triangle of $S$ is said to be \emph{empty} if it does not contain any points of $S$ in its interior. Let $\tau(S)$ be the number of empty triangles of $S$. It is easily shown that $\tau(S)$ is $\Omega(n^2)$; Katchalski and Meir~\cite{kat} showed that there exist $n$-point sets $S$ with $\tau(S)=\Theta(n^2)$. Note that for such point sets, an edge of $S$ is on average part of a constant number of empty triangles of $S$. However, B{\'a}r{\'a}ny conjectured that there is always an edge of $S$ which is part of a super constant number of empty triangles of $S$; see~\cite{high_degree_erdos,high_degree_barany}. B{\'a}r{\'a}ny et al.~\cite{BaranyMR13} proved this conjecture for random $n$-point sets, showing that for such sets, $\Theta(n/\log n)$ empty triangles are expected to share an edge.
Note that the expected total number of empty triangles in such point sets is $\Theta(n^2)$; see~\cite{Va95}. B\'ar\'any's conjecture suggests that perhaps there is always a point of the plane stabbing many empty triangles of $S$, for any set $S$ of $n$ points in general position. Naturally, the mentioned lower bounds for the number of triangles stabbed by a point of the plane also apply for the family of all empty triangles of $S$. In contrast, the upper bound constructions of Eppstein do not apply, since they contain non-empty triangles or do not contain all empty triangles of their underlying point sets. In this paper, we show that the existence of a point in more triangles than these upper bounds for general families of triangles is not guaranteed; hence the title of our paper. Specifically, we prove the following. \begin{restatable}{theorem}{emptySelectionLemma} \label{thm:LensSquaredHortonSets} For every integer $n$ and every $0\leq \alpha \leq 1$, there exist sets $S$ of $n$~points with $\tau(S) = \Theta{(n^{3-\alpha})}$ empty triangles where every point of the plane stabs $O{(n^{3-2\alpha})}$ empty triangles of $S$. \end{restatable} To prove Theorem~\ref{thm:LensSquaredHortonSets} for $\alpha=1$, we utilize the so-called \emph{Horton sets} and \emph{squared Horton sets}. Horton~\cite{horton_1983} constructed a family of arbitrarily large sets without large empty convex polygons. Valtr~\cite{Va92} generalized Horton's construction and named the resulting sets ``Horton sets''. Squared Horton sets were defined by Valtr~\cite{Va92} (as set $A_k$ in Section~4). B\'ar\'any and Valtr~\cite{BV2004} showed that squared Horton sets of size $n$ span only $\Theta(n^2)$ empty triangles. \paragraph{Outline.} The remainder of this paper is organized as follows: In Section~\ref{sec:HortonSets}, we give the definition of Horton sets and show several properties of them that will be of use for later sections.
Section~\ref{sec:squaredHortonSets} considers squared Horton sets and contains a proof of Theorem~\ref{thm:LensSquaredHortonSets} for the case $\alpha=1$ (Theorem~\ref{thm:horton_stabs}). And in Section~\ref{sec:diamondssquaredhortonset} we present a generalized construction based on squared Horton sets, which we analyze to prove Theorem~\ref{thm:LensSquaredHortonSets}. \section{Horton sets} \label{sec:HortonSets} Let $X$ be a set of $n$ points in the plane such that no two points have the same $x$-coordinate. In the following, we consider the points of $X$ in increasing order of their $x$-coordinates. We denote with $X_0$ the subset of $X$ that contains every second point of $X$ (w.r.t.\ the $x$-order of the points), starting with the leftmost point of $X$. Similarly, $X_1=X\backslash X_0$ is the subset of $X$ that contains every second point of $X$ (and does not contain the leftmost point of $X$). In other words, if the points of $X$ are labeled $\{p_0, p_1,\dots, p_{n-1}\}$ in increasing $x$-order, then $X_0=\{p_0, p_2,\dots,\}$ and $X_1=\{p_1, p_3,\dots \}$. In general, for a binary string $b$, we denote as $X_b$ the subset of vertices of $X$ that is obtained by recursively applying the above splitting. For example, $X_{10}$ consists of every second point of $X_1$ and does not contain the leftmost point of $X_1$. Now consider two point sets $X$ and $Y$ in the plane such that no two points of $X \cup Y$ have the same $x$-coordinate. We say that $Y$ is \emph{high above} $X$ if every line passing through two points of $Y$ is above every point of $X$, and $X$ is \emph{deep below} $Y$ if every line passing through two points of $X$ is below every point of $Y$. Using the above notation, we can now define Horton sets. \begin{defini} Let $H$ be a set of $n$ points in general position in the plane, such that no two points of $H$ have the same $x$-coordinate. 
Then $H$ is a \textbf{Horton set} if \begin{enumerate} \item $|H|\leq 2$; or \item $|H|>2$, $H_{0}$ and $H_{1}$ are Horton sets, and $H_1$ is high above $H_0$ and $H_0$ is deep below $H_1$. \end{enumerate} \end{defini} One classic way to obtain a Horton set $H=S_k$ with $2^k$ points is by starting with a set $S_1$ of two points on a horizontal line, and then iteratively duplicating it by adding a translated copy $S'_i$ of $S_i$, where $S'_i$ is translated to the right by exactly half the $x$-distance between the first two points of $S_i$ and the translation in $y$-direction is such that $S'_i$ lies high above $S_i$. In the resulting Horton set, all points are evenly spaced in $x$-direction. The following observation states that Horton sets have nice subset properties, which are directly implied by their definition. \begin{obs}\label{obs:hortonsubsets} Let $H=\{p_0, \ldots, p_{n-1}\}$ be a Horton set with points labeled in increasing $x$-order. Then for any $0 \leq i \leq j \leq n-1$, the subset $\{p_i, p_{i+1}, \ldots, p_j\}$ of consecutive points in $x$-direction again forms a Horton set. Similarly, for any positive integer $k$ and $0 \leq i \leq i+jk \leq n-1$, the set $\{p_i, p_{i+k}, p_{i+2k}, \ldots, p_{i+jk}\}$ is again a Horton set. \end{obs} We remark that a linear transformation of a Horton set, like for example a rotation, might no longer be a Horton set by the above definition. However, the combinatorial properties of these sets do not change. Hence, for convenience, we still call them Horton sets. To analyze properties of the empty triangles of Horton sets, we define visible edges in Horton sets. Let $H=H_0 \cup H_1$ be a Horton set of at least 4 points. We say that an edge $e=(p_i,p_j)$, with $p_i,p_j \in H_0$, is \emph{visible from above} if $p_k$ is below the line spanned by $p_i$ and $p_j$ for every $p_k \in H_0$ with $i < k < j$.
Likewise, an edge $e=(p_i,p_j)$, with $p_i,p_j \in H_1$, is \emph{visible from below} if $p_k$ is above the line spanned by $p_i$ and $p_j$ for every $p_k \in H_1$ with $i < k < j$. An edge of $H$ is \emph{visible} if it is either visible from above or visible from below. \begin{lemma} \label{lem:visible} An edge $e$ spanned by two vertices of a Horton set $H$ is visible from below (above) if and only if it is spanned by two consecutive vertices of $H_b$, where~$b$ is a binary string consisting of a single $1$ followed by an arbitrary number of $0$s (a single $0$ followed by an arbitrary number of $1$s). \end{lemma} \begin{proof} For the first direction of the proof, let $(p_i,p_j)$ be an edge of $H$ that is visible from below. Then, by the definition of visibility, $p_i,p_j \in H_1$. Let $b'$ be the unique binary string such that $p_i,p_j \in H_{1b'}$ and such that either $p_i \in H_{1b'0}$ and $p_j \in H_{1b'1}$, or $p_i \in H_{1b'1}$ and $p_j \in H_{1b'0}$. Without loss of generality assume that $p_i \in H_{1b'0}$ and $p_j \in H_{1b'1}$. Suppose first that $b'$ is of the form $b'=b_1 1 b_2$ for some binary strings $b_1$, $b_2$. Since $p_i,p_j \in H_{1b_11}$, there is a point $p_k \in H_{1b_10}$ that lies between $p_i$ and $p_j$. By the definition of Horton sets $p_k$ is below the line spanned by $p_i$ and $p_j$; this contradicts the assumption that $(p_i,p_j)$ is visible from below. Thus, $b'$ is a binary string that only consists of $0$s. Next, suppose that $p_i$ and $p_j$ are not consecutive vertices of $H_{1b'}$. Then there exists a point $p_k \in H_{1b'0} \subset H_1$ that lies between $p_i$ and $p_j$. Again, by the definition of Horton sets, $p_k$ is below the line spanned by $p_i$ and $p_j$, which contradicts the assumption that $(p_i,p_j)$ is visible from below. Hence, for $b=1b'$, $p_i$ and $p_j$ are consecutive vertices in $H_{b}$. The reasoning for an edge $(p_i,p_j)$ that is visible from above is analogous, which completes the first direction of the proof.
For the other direction, let $p_i,p_j$ be two consecutive points in $H_b$ for some binary string $b$ consisting of a single $1$ followed by an arbitrary number of $0$s. We proceed by induction on the length of $b$. If $b=1$ then there is no point between $p_i$ and $p_j$ in $H_1$, so $(p_i,p_j)$ is visible from below. Suppose now that $b$ has length at least two and write $b=b'0$. There is exactly one point $p_k$ between $p_i$ and $p_j$ in $H_{b'}$. Thus, $p_i$ and $p_k$ are consecutive points in $H_{b'}$. Likewise, $p_k$ and $p_j$ are consecutive points in $H_{b'}$. Let $\ell_1$ and $\ell_2$ be the two lines spanned by $(p_i,p_k)$, and $(p_k,p_j)$, respectively. By induction there are no points in $H_{b'}$ between $p_i$ and $p_k$, and below $\ell_1$. Likewise, there are no points in $H_{b'}$ between $p_k$ and $p_j$, and below $\ell_2$. Since $p_k$ is in $H_{b'1}$, $p_k$ is above the line $\ell$ spanned by $p_i$ and $p_j$. Thus, there are no points in $H_b$ between $p_i$ and $p_j$, and below $\ell$; this implies that $(p_i,p_j)$ is an edge of $H$ visible from below. An analogous argument shows that if $p_i$ and $p_j$ are two consecutive points in $H_b$ for some binary string $b$ consisting of a single $0$ followed by an arbitrary number of $1$s, then $(p_i,p_j)$ is an edge of $H$ visible from above, which completes the proof. \end{proof} Note that visible edges are of central relevance for empty triangles in Horton sets. Consider an empty triangle $\Delta$ in $H$ with vertices in both $H_0$ and $H_1$ and let $(p_i,p_j)$ be the edge of $\Delta$ such that both $p_i$ and $p_j$ are in $H_0$ or in $H_1$. Then $(p_i,p_j)$ is a visible edge of $H$: Assume without loss of generality that $p_i, p_j \in H_0$ and suppose for a contradiction that $(p_i,p_j)$ is not a visible edge of $H$. Then there exists a point $p_k \in H_0$ such that $p_k$ is above the line spanned by $p_i$ and $p_j$, and $i < k < j$. Since $H_1$ is high above $H_0$ this implies that $p_k$ is in the interior of $\Delta$; thus $\Delta$ is not empty.
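The doubling construction and the role of visible edges can be checked by brute force on small instances. The Python sketch below is illustrative only: the concrete coordinates and the lift factor are our own choices (any sufficiently large lift yields a Horton set), and the final check confirms that every empty triangle with vertices in both halves uses a visible same-half edge, as argued above.

```python
from itertools import combinations

def orient(a, b, c):
    """Sign of cross((b - a), (c - a)); positive means c lies left of a -> b."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def horton(k):
    """A Horton set of 2^k points with integer coordinates (k >= 1).

    H_0 occupies the even x-coordinates; H_1 the odd ones, lifted so that
    H_1 is high above H_0 and H_0 is deep below H_1."""
    if k == 1:
        return [(0, 0), (1, 0)]
    sub = horton(k - 1)
    lift = 4 * (max(y for _, y in sub) + 1) * 2 ** k   # generous, not minimal
    return sorted([(2 * x, y) for x, y in sub] +
                  [(2 * x + 1, y + lift) for x, y in sub])

def stabbed(p, a, b, c):
    """True iff p lies strictly inside triangle abc."""
    s = (orient(a, b, p), orient(b, c, p), orient(c, a, p))
    return all(t > 0 for t in s) or all(t < 0 for t in s)

def empty_triangles(pts):
    return [t for t in combinations(pts, 3)
            if not any(stabbed(p, *t) for p in pts if p not in t)]

def spanning_edge_visible(tri, pts):
    """For a triangle with vertices in both halves of the top-level split
    (H_0 = even x, H_1 = odd x), check that its same-half edge is visible."""
    same = [p for p in tri if sum(q[0] % 2 == p[0] % 2 for q in tri) == 2]
    if len(same) != 2:                     # all three vertices in one half
        return True
    p, q = sorted(same)
    side = -1 if p[0] % 2 == 0 else 1      # H_0: points below; H_1: above
    return all(side * orient(p, q, r) > 0
               for r in pts if r[0] % 2 == p[0] % 2 and p[0] < r[0] < q[0])
```

Triangles lying entirely in one half are skipped by the check, matching the statement, which only concerns triangles with vertices in both $H_0$ and $H_1$.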
The following two statements on empty triangles in Horton sets are useful for proving our main theorem. \begin{restatable}{lemma}{hortoninterior}\label{lemma:hortoninterior} Let $H$ be a Horton set of $n$ points. Then every point $q$ of the plane stabs $O(n \log n)$ empty triangles of $H$. \end{restatable} \begin{proof} Assume that $q \in \operatorname{Conv}(H)$, as otherwise $q$ stabs no empty triangle of~$H$. Let $b$ be the binary string such that $q \in \operatorname{Conv}(H_b)$, but $q \notin \operatorname{Conv}(H_{b0})$ and $q \notin \operatorname{Conv}(H_{b1})$. Let $\Delta$ be an empty triangle of $H$ stabbed by $q$. Note that either two vertices of $\Delta$ lie in $H_{b0}$ and one vertex in $H_{b1}$, or two vertices of $\Delta$ lie in $H_{b1}$ and one vertex in~$H_{b0}$. Let $(p_i,p_j)$ be the edge of $\Delta$ such that both $p_i,p_j$ are in $H_{b0}$, or both $p_i,p_j$ are in $H_{b1}$. Let $p_k$ be the other vertex of~$\Delta$. Recall that $(p_i,p_j)$ is a visible edge. Let $b'$ be a binary string as in Lemma~\ref{lem:visible} such that $p_i$ and $p_j$ are two consecutive vertices in~$H_{b'}$. Note that the only two consecutive vertices of $H_{b'}$, that together with $p_k$ form an empty triangle containing $q$, are $p_i$ and $p_j$. The number of possible values for $b'$ is at most $2 \log_2 n$, and the number of possible choices for $p_k$ is at most $n$. Therefore, the number of empty triangles stabbed by~$q$ is $O(n \log n)$. \end{proof} \begin{restatable}{lemma}{hortonincident}\label{lemma:hortonincident} Let $H$ be a Horton set of $n$ points. Then every point of $H$ is incident to $O(n \log n)$ empty triangles of $H$. \end{restatable} \begin{proof} Let $q \in H$. Let $\Delta$ be an empty triangle of $H$ containing $q$ as a vertex. Let $(p_i,p_j)$ be the edge of $\Delta$ that is a visible edge of $H$. Let $p_k$ be the vertex of $\Delta$ distinct from $p_i$ and $p_j$.
Let $b$ be the binary string for $(p_i,p_j)$ as in Lemma~\ref{lem:visible}, such that $p_i$ and $p_j$ are two consecutive points of $H_b$. Suppose that $q$ is equal to one of $p_i$ and $p_j$. For a fixed $b$ there are at most two possible choices for the other vertex of $(p_i,p_j)$, and at most $n/2$ possible choices for $p_k$. Suppose that $q$ is equal to $p_k$. Then for a fixed $b$ there is exactly one choice for $p_i$ and $p_j$. Since the number of possible values for $b$ is $O(\log n)$, there are in total $O(n \log n)$ empty triangles of $H$ containing $q$ as a vertex. \end{proof} \section{Squared Horton sets}\label{sec:squaredHortonSets} For $n$ being a squared integer, we denote with $G$ an integer grid of size $\sqrt{n} \times \sqrt{n}$. (Otherwise, $G$ is a subset of an integer grid of size $\lceil\sqrt{n}\rceil \times \lceil\sqrt{n}\rceil$, from which some consecutive points of the topmost row and possibly the leftmost column are removed to have $n$ points remaining.) An \emph{$\varepsilon$-perturbation} of $G$ is a perturbation of $G$ where every point $p$ of $G$ is mapped to a point at distance at most $\varepsilon$ to $p$. \begin{defini} A \emph{squared Horton set} $H$ of size $n$ is a specific $\varepsilon$-perturbation of $G$ such that the following three properties hold. \begin{enumerate} \item Any triple of non-collinear points in $G$ keeps its orientation in $H$. \item The points on any non-vertical line spanned by points of $G$ are perturbed to points forming a Horton set in $H$. \item The points on any vertical line spanned by points of $G$ are perturbed to points forming a rotated copy of a Horton set in $H$. \end{enumerate} \end{defini} As already mentioned in the introduction, squared Horton sets have been defined by Valtr~\cite{Va92}. A way to construct them is also presented in~\cite{BV2004}.
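As a computational sanity check of this definition, the following Python sketch builds an exact-integer model of a perturbed $4 \times 4$ grid in the spirit of the Minkowski-sum construction presented in~\cite{BV2004}. The jitter patterns and scale factors are ad-hoc assumptions of ours; the sketch only verifies Property 1 together with general position. Rows and columns receive affine copies of a Horton set's $y$-pattern by construction, but the full definition also constrains all other lines spanned by $G$, which this small sketch does not check.

```python
from itertools import combinations

def orient(a, b, c):
    """Sign of cross((b - a), (c - a)); positive means a left turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def horton_jitter(k):
    """y-values of a 2^k-point Horton set, listed in x-order (k >= 1)."""
    if k == 1:
        return [0, 0]
    sub = horton_jitter(k - 1)
    lift = 4 * (max(sub) + 1) * 2 ** k
    return [sub[i // 2] + (lift if i % 2 else 0) for i in range(2 ** k)]

def perturbed_grid(k):
    """Exact-integer model of an eps-perturbation of the 2^k x 2^k grid.

    Grid point (i, j) maps to (i*S + xi[j], j*S + eta[i]): rows are
    jittered vertically by the Horton pattern eta (role of eps_x), and
    columns horizontally by the much smaller pattern xi (role of eps_y).
    The scale factors are generous assumptions, not optimized."""
    m = 2 ** k
    raw = horton_jitter(k)
    big = max(raw) + 1
    eta = [16 * big * y for y in raw]       # row jitter
    xi = raw                                # column jitter, << eta's steps
    S = 64 * m * big * 16 * big             # grid spacing >> total jitter
    return {(i, j): (i * S + xi[j], j * S + eta[i])
            for i in range(m) for j in range(m)}
```

The verification below checks, for every triple of grid points, that non-collinear triples keep their orientation and that collinear triples are perturbed into general position.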
For self-containment, we describe a construction similar to the one in~\cite{BV2004} here, for $n$ being a squared integer: Let $H_x$ be a Horton set of $\sqrt{n}$ points such that the $x$-coordinates are the integers $1,\dots, \sqrt{n}$, and its $y$-coordinates are in $[-\varepsilon_x,+\varepsilon_x]$ for some arbitrarily small $0 < \varepsilon_x <1/4$. This can be accomplished by a suitable linear transformation of a Horton set with points evenly spaced in the $x$-coordinate. Let $H_y$ be a Horton set defined as before, for some $0 <\varepsilon_y < \varepsilon_x$ and rotated $90$ degrees, so that the $y$-coordinates of $H_y$ are the integers $1,\dots,\sqrt{n}$ and its $x$-coordinates are in $[-\varepsilon_y,+\varepsilon_y]$. Further, let $H:=\{(x_1+x_2,y_1+y_2): (x_1,y_1) \in H_x \textrm{ and } (x_2,y_2) \in H_y \}$ be the Minkowski sum of $H_x$ and $H_y$ and let $G:=\{(i,j): 1 \leq i,j \leq \sqrt{n}\}.$ Note that for every point $p$ of $H$ there is a unique point $(i,j) \in G$ at distance at most $\varepsilon:=\varepsilon_x+\varepsilon_y$ of $p$. Thus, $H$ is an $\varepsilon$-perturbation of $G$. Let $\pi_{\epsilon}: G \to H$ be the map that sends each such $(i,j) \in G$ to its unique closest $p\in H$. Observation~\ref{obs:hortonsubsets} implies that if $\varepsilon_x$ is chosen small enough and $\varepsilon_y$ is sufficiently smaller than $\varepsilon_x$, then $H$ is a squared Horton set since the following conditions hold. \begin{enumerate} \item For every triple $p_i,p_j,p_k$ of non-collinear points of $G$, the orientation of $(p_i,p_j,p_k)$ and $({\pi_{\epsilon}}(p_i),{\pi_{\epsilon}}(p_j),{\pi_{\epsilon}}(p_k))$ is the same. \item For every non-vertical straight line $\ell$ that is spanned by points of $G$, ${\pi_{\epsilon}}(\ell \cap G)$ is a Horton set. \item For every vertical straight line $\ell$ that is spanned by points of $G$, ${\pi_{\epsilon}}(\ell \cap G)$ is a rotated copy of a Horton set. 
\end{enumerate} When reasoning about a squared Horton set $H$ we repeatedly reason about structures in $H$ and the corresponding structures in the underlying unperturbed grid $G$ in parallel. To relate structures in $G$ with their perturbed structures in $H$, we will denote by $\pi_{\epsilon}$ the map that is induced by the $\varepsilon$-perturbation that transforms $G$ to $H$. The following lemma is a direct consequence of the definition of squared Horton sets. \begin{restatable}{lemma}{sqHortonUnion}\label{lemma:sqHortonUnion} Let $H=\pi_{\epsilon}(G)$ be a squared Horton set and let $\ell$ and $\ell'$ be two parallel lines spanned by $G$. Then $\pi_{\epsilon}((G\cap \ell) \cup (G \cap \ell'))$ is a (rotated copy of a) Horton set. \end{restatable} \begin{proof} Assume without loss of generality that $\ell$ and $\ell'$ are not vertical and that $\ell$ is below $\ell'$ (otherwise, rotate $H$ accordingly). Let $H_0=\pi_{\epsilon}(G\cap \ell)$ and $H_1=\pi_{\epsilon}(G\cap \ell')$. Since $H$ is a squared Horton set, $H_0$ and $H_1$ are both Horton sets. Furthermore, $H_0$ is deep below $H_1$ since every line passing through two points of $H_0$ is below any point of $H_1$. Similarly $H_1$ is high above $H_0$. Therefore, $H_0 \cup H_1$ is a rotated copy of a Horton set. \end{proof} A triangle in a squared Horton set $H=\pi_{\epsilon}(G)$ either corresponds to a triangle in $G$ or to a set of three collinear points in $G$. In the following, we denote the latter as a \emph{degenerate} triangle. Further, for any empty triangle $\pi_{\epsilon}(\Delta)$ in $H$, $\Delta$ is either degenerate or interior-empty in $G$, due to the fact that $\pi_{\epsilon}$ is the map of an $\varepsilon$-perturbation. Let $\Delta$ be a (possibly degenerate) triangle with vertices in $G$. Let $e$ be an edge of $\Delta$ and let $p$ be the vertex of $\Delta$ opposite to $e$.
We say that the \emph{height of $\Delta$ w.r.t.~$e$} is zero if $p$ is on the straight line spanned by $e$; otherwise, it is one plus the number of lines between $e$ and $p$, that are parallel to $e$, and that contain points of the integer grid $\mathbb{Z} \times \mathbb{Z}$. We call the area bounded by two such neighboring lines a \emph{strip}. The \emph{height} of $\Delta$ is the minimum of the heights w.r.t.\ its edges and the edge defining the height of $\Delta$ is the \emph{base edge}. We review a few basic results regarding lines and line segments with points in the integer grid. \begin{lemma}\label{lem:integer_lines} Let $\ell$ be a line containing at least two points $a,b \in \mathbb{Z} \times \mathbb{Z}$. Then there exists $d>0$, such that any two consecutive points along $\ell$ in $\mathbb{Z} \times \mathbb{Z}$ are at distance $d$ from each other. \end{lemma} \begin{proof} Suppose that $\ell$ is not vertical, as otherwise the result holds with $d=1$. Since $a,b$ are points of $\ell$, the slope $m$ of $\ell$ is a rational number. Let $\ell'$ be the translation of $\ell$ by the vector $-a$, so that the point $a$ in $\ell$ is translated to the origin. Let $r/s:=m$, with $r,s$ relatively prime integers. Note that $\ell'$ has equation $y=\frac{r}{s} x$. Thus, for $(x,y) \in \ell'$, we have that $(x,y) \in \mathbb{Z} \times \mathbb{Z}$ if and only if $x$ is an integer multiple of $s$. In this case $y$ is an integer multiple of $r$. Thus, the distance between any two consecutive points along $\ell'$ with integer coordinates is equal to \[d=\sqrt{s^2+r^2}.\] Since $\ell'$ is a translation of $\ell$ by a vector in $\mathbb{Z} \times \mathbb{Z}$, every pair of consecutive points, along $\ell$, of $\ell \cap \mathbb{Z} \times \mathbb{Z}$ are at distance $d$ from each other. \end{proof} \begin{corollary}\label{cor:integer_segment} Let $a,b \in \mathbb{Z} \times \mathbb{Z}$. Let $\ell$ be a line parallel to $ab$ and containing a point of $\mathbb{Z} \times \mathbb{Z}$.
Then every line segment, $e$, contained in $\ell$, of length at least $|ab|$ contains at least one point of $\mathbb{Z} \times \mathbb{Z}$. Moreover, if $e$ has an endpoint in $\mathbb{Z} \times \mathbb{Z}$, then $e$ contains at least two points of $\mathbb{Z} \times \mathbb{Z}$. \end{corollary} \begin{proof} Let $a'$ be a point of $\ell \cap \mathbb{Z} \times \mathbb{Z}$. Let $b':=a'+(b-a)$, so that $a'b'$ is a line segment parallel to $ab$ of the same length. Note that $b' \in \mathbb{Z} \times \mathbb{Z}$. Let $d$ be as in Lemma~\ref{lem:integer_lines} for $\ell$. Note that $|e| \ge |ab| =|a'b'| \ge d$, where $|e|$ denotes the length of $e$. Since every pair of consecutive points, along $\ell$, of $\ell \cap \mathbb{Z} \times \mathbb{Z}$ are at distance $d$ from each other, the result follows. \end{proof} \begin{lemma}\label{lem:cutting_lines} Let $m$ be a rational number. Let $L$ be the set of lines with slope $m$ that pass through some point of $\mathbb{Z} \times \mathbb{Z}$. Let $B$ be the intersection points of the lines in $L$ and the $x$-axis. Then there exists $d >0$, such that every two points in $B$, that are consecutive along the $x$-axis, are at distance $d$ from each other. \end{lemma} \begin{proof} The lines in $L$ have equations of the form $y=mx+b$. Therefore, \[B = \{b: b=y-mx \textrm{ for some } (x,y) \in \mathbb{Z} \times \mathbb{Z}\}.\] Thus, $B$ is the image of the group homomorphism from $\mathbb{Z} \times \mathbb{Z}$ to $\mathbb{Q}$ that maps $(x,y)$ to $y-mx$. Since $\mathbb{Z} \times \mathbb{Z}$ is finitely generated, $B$ is also finitely generated. As every finitely generated subgroup of $\mathbb{Q}$ is cyclic, there exists a rational number $d$ such that $B=\{nd: n \in \mathbb{Z}\}.$ The result follows. \end{proof} \begin{restatable}{lemma}{triangleHeight}\label{lemma:triangle_height} Any interior-empty triangle of $G$ has height at most $2$. \end{restatable} \begin{proof} Let $\Delta$ be a triangle with vertices in $G$.
We first show that \LabelQuote{if two edges of $\Delta$ have each at least two interior points in $G$, then there is a point of $G$ in the interior of $\Delta$.}{$\ast$} \begin{figure} \centering \includegraphics[page=4]{figures/no_height_2_triangle.pdf} \caption{A triangle $\Delta$ with two edges containing two interior points of $G$ each and the induced point inside $\Delta$.} \label{fig:height2:case3} \end{figure} Let $e_1$ and $e_2$ be two edges of $\Delta$, each with at least two interior points in~$G$. Let $a$ be the vertex of $\Delta$ common to $e_1$ and $e_2$. Let $x_1$ and $x_2$ be the points closest and second closest to $a$ in $e_1 \cap \mathbb{Z} \times \mathbb{Z}$, respectively. Let $y_1$ and $y_2$ be the points closest and second closest to $a$ in $e_2 \cap \mathbb{Z} \times \mathbb{Z}$, respectively. See Figure~\ref{fig:height2:case3}. By Lemma~\ref{lem:integer_lines}, there exist $d_1,d_2 >0$ such that $|ax_1|=|x_1x_2|=d_1$ and $|ay_1|=|y_1y_2|=d_2$. This implies that $x_2y_2$ is parallel to and twice the length of $x_1y_1$. Therefore, the midpoint of $x_2y_2$ is in $G$, which proves ($\ast$). Now suppose, for the sake of contradiction, that $\Delta$ is interior-empty and has height at least 3. Let $a$, $b$ and $c$ be the vertices of $\Delta$, and let $e:=ab$. Since $\Delta$ has height at least 3, there exist at least two lines parallel to $e$ each containing a point of $\mathbb{Z} \times \mathbb{Z}$ and crossing through the interior of $\Delta$. Of these lines let $\ell_1$ and $\ell_2$ be the lines closest and second closest to $e$, respectively. Let $\ell'$ be the line parallel to $bc$ and containing $a$. Let $x$ be the point of intersection of $\ell'$ and $\ell_1$; let $y$ be the point of intersection of $ac$ and $\ell_1$; and let $y'$ be the point of intersection between $bc$ and $\ell_1$. See Figure~\ref{fig:height2:case2}.
Since $ab$ is parallel to $xy'$ and they have the same length, by Corollary~\ref{cor:integer_segment} there exists a point $p \in \mathbb{Z} \times \mathbb{Z}$ on $xy'$. We may assume that $yy'$ does not contain points of $G$ in its interior as otherwise $\Delta$ is not interior-empty. Therefore, $p$ is either on the line segment $xy$ or $p=y'$. \begin{figure} \centering \includegraphics[page=3]{figures/no_height_2_triangle.pdf} \caption{If a point $p \in G$ is in the interior of $xy$ then the triangle $abc$ is not empty. (Points of $G$ are marked with disks, points which are not necessarily in $G$ are marked with crosses.)} \label{fig:height2:case2} \end{figure} Suppose next that $p$ is in the interior of $xy$. Let $\ell''$ be the line parallel to $pa$ and containing $c$. Let $q$ be the point of intersection between $\ell''$ and $ab$, as depicted in Figure~\ref{fig:height2:case2}. Note that $|pa|<|cq|$. By Corollary~\ref{cor:integer_segment}, $cq$ contains a point of $G$ in its interior, and $\Delta$ is not interior-empty. If $p=x$, then $y'$ has integer coordinates and hence is a point of $G$. So it remains to consider $p \in \{y, y'\}$. Let $e'$ be the side of $\Delta$ that contains $p$, and let $q$ be the endpoint of $e'$ distinct from $c$. Let $p'$ be the intersection point of $e'$ and $\ell_2$. By Lemma~\ref{lem:cutting_lines}, we have that $|qp|=|pp'|$. Since $p$ has integer coordinates, then so does $p'$, and $e'$ contains two interior points in $G$. We repeat the previous arguments now with $e=e'$ to conclude that either $\Delta$ is not interior-empty or $\Delta$ has an edge distinct from $e'$ with two interior points in $G$. In the latter case, we are done by ($\ast$). \end{proof} For the proof of our next statement, we use Euler's totient function $\phi$. For a given integer $d$, $\phi(d)$ is the number of integers at most $d$ that are relatively prime to $d$. Clearly, $\phi(d) \leq d$.
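Both $\phi$ and the primitive-segment counts used below are easy to check numerically. In the following Python sketch, the constant $1.443$ (just above $\log_2 e \approx 1.4427$) reflects the bound $n \log_2 e$ obtained in the proof of the next lemma; everything else follows the definitions in the text.

```python
from math import gcd, log2

def phi(d):
    """Euler's totient: number of integers in 1..d relatively prime to d."""
    return sum(1 for a in range(1, d + 1) if gcd(a, d) == 1)

def primitive_count(d):
    """Number of integer points (d, a) with |a| < d whose segment from the
    origin is primitive, i.e. contains no lattice point in its interior.
    The segment to (d, a) is primitive exactly when gcd(d, |a|) = 1."""
    return sum(1 for a in range(-(d - 1), d) if gcd(d, abs(a)) == 1)

def stab_sum(n):
    """The sum bounded in the lemma below, for n a perfect square:
    sum over d of phi(d) * (sqrt(n)/d) * log2(sqrt(n)/d)."""
    m = round(n ** 0.5)
    return sum(phi(d) * (m / d) * log2(m / d) for d in range(1, m + 1))
```

By symmetry the same count applies to the points $(a,d)$, so `primitive_count` covers both bullet items below.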
A segment with endpoints in the integer grid is \emph{primitive} if it does not contain any integer grid point in its interior. It is well known that for $d>1$: \begin{itemize} \item $2 \cdot \phi(d)$ is the number of points $(d,a)$ with $|a|<|d|$ on the integer grid such that the segment from the origin to the point $(d,a)$ is primitive; and \item $2 \cdot \phi(d)$ is the number of points $(a,d)$ with $|a|<|d|$ on the integer grid such that the segment from the origin to the point $(a,d)$ is primitive. \end{itemize} We use the following lemma to get asymptotic bounds later on. \begin{lemma}\label{lemma:calc_horton_stabs} Let $n\geq 1$ be the square of an integer. Then \begin{eqnarray*} \sum_{d=1}^{ \sqrt{n} } \phi(d) ( \sqrt{n} /d) \cdot \log_2( \sqrt{n} /d) & = & O(n). \end{eqnarray*} \end{lemma} \begin{proof} We use the bounds $\phi(d) \le d$ and $n!\geq (n/e)^n$; the latter follows from Stirling's formula. \begin{eqnarray*} \sum_{d=1}^{ \sqrt{n} } \phi(d) \cdot ( \sqrt{n} /d ) \cdot \log_2( \sqrt{n}/d ) & \le & \sum_{d=1}^{ \sqrt{n} } \sqrt{n} \cdot \log_2( \sqrt{n}/d ) \\ & = & \sqrt{n} \cdot \log_2 \left( \prod_{d=1}^{\sqrt{n}} \sqrt{n}/d \right) \\ & = & \sqrt{n} \cdot \log_2 \left( (\sqrt{n})^{\sqrt{n}} / (\sqrt{n})! \right) \\ & \le & \sqrt{n} \cdot \log_2 \left( (\sqrt{n})^{\sqrt{n}} / (\sqrt{n}/e)^{\sqrt{n}} \right) \\ & = & \sqrt{n} \cdot\log_2 \left( e^{ \sqrt{n} } \right) \\ & = & (\sqrt{n})^2 \cdot \log_2(e) \\ & = & O(n). \end{eqnarray*} \end{proof} \begin{restatable}{theorem}{sqHortonStabs}\label{thm:horton_stabs} Let $H=\pi_{\epsilon}(G)$ be a squared Horton set of $n$ points. Then every point of the plane stabs $O(n)$ empty triangles of $H$. \end{restatable} \begin{proof} Obviously, no point of $H$ can stab any empty triangle of $H$. Consider an arbitrary point $q \in \mathbb{R}^2\setminus H$. Every empty triangle $\pi_{\epsilon}(\Delta)$ in $H$ corresponds to an interior-empty triangle $\Delta$ in $G$.
By Lemma~\ref{lemma:triangle_height}, $\Delta$ has height at most 2. We separately count the empty triangles of different heights in $H$ that possibly contain $q$. We classify each such triangle $\pi_{\epsilon}(\Delta)$ by the slope of the base edge of $\Delta$ in $G$. We start with the triangles of height zero; these triangles correspond to degenerate triangles in $G$. Let $\Delta$ be a degenerate triangle in $G$, with slope $m$, such that $\pi_{\epsilon}(\Delta)$ is stabbed by $q$. Let $\ell$ be the line that contains $\Delta$. Let $\ell'$ be a line distinct from $\ell$ and parallel to $\ell$. Let $\Delta'$ be a degenerate triangle in $G$ contained in $\ell'$. By Property $1$ of the definition of squared Horton sets, the convex hulls of $\pi_{\epsilon}(\Delta)$ and $\pi_{\epsilon}(\Delta')$ do not intersect. In particular, $\pi_{\epsilon}(\Delta')$ is not stabbed by $q$. This implies that for every possible slope $m$ spanned by points in $G$, there exists at most one line containing all degenerate triangles $\Delta$ of $G$ with slope $m$ such that $\pi_{\epsilon}(\Delta)$ is stabbed by $q$. Suppose that $|m| < 1$. Let $d >0$ be the distance, in $x$-direction, between two consecutive points of $\ell \cap G$. Note that $d$ is an integer satisfying $1 \le d \le \sqrt{n}$; thus, $|\ell \cap G| \le \sqrt{n}/d+1$. Let $(d,a)$ be the vector defined by two consecutive integer grid points in $\ell$. Note that since $|m| < 1$, we have that $|a| < |d|$. Therefore, $m$ has at most $2 \cdot \phi(d)$ different possible values. By a similar argument, if $|m| > 1$ we have that $|\ell \cap G| \le \sqrt{n}/d+1$ for some integer $1 \le d \le \sqrt{n}$, and that $m$ has at most $2 \cdot \phi(d)$ different possible values. Therefore, in total, $m$ has at most $4\cdot\phi(d)$ different possible values. By Properties $2$ and $3$ of the definition of squared Horton sets, $\pi_{\epsilon}(\ell \cap G)$ forms a Horton set.
Hence, by Lemma~\ref{lemma:hortoninterior}, the number of empty triangles in $\pi_{\epsilon}(\ell \cap G)$ that contain $q$ is bounded by $O(\sqrt{n}/d \cdot \log(\sqrt{n}/d))$. Summing this bound over all possible slopes and applying Lemma~\ref{lemma:calc_horton_stabs}, we obtain an upper bound of \[ \sum_{d=1}^{\sqrt{n}} 4 \cdot \phi(d) \cdot O\big(\sqrt{n}/d \cdot \log(\sqrt{n}/d)\big) = O\Big(\sum_{d=1}^{\sqrt{n}} \phi(d) \cdot \sqrt{n}/d \cdot \log(\sqrt{n}/d)\Big) = O(n)\] for the number of empty triangles of height zero that can be stabbed by $q$.
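As a numerical sanity check of Lemma~\ref{lemma:calc_horton_stabs} (illustrative only, using a naive totient implementation), one can verify that the ratio of the sum to $n$ stays bounded, consistent with the $n\log_2 e \approx 1.443\,n$ bound derived in its proof:

```python
from math import gcd, log2, isqrt

def phi(d):
    # Naive Euler totient; sufficient for small d.
    return sum(1 for a in range(1, d + 1) if gcd(a, d) == 1)

def lemma_sum(n):
    """S(n) = sum_{d=1}^{sqrt(n)} phi(d) * (sqrt(n)/d) * log2(sqrt(n)/d),
    for n a perfect square."""
    r = isqrt(n)
    return sum(phi(d) * (r / d) * log2(r / d) for d in range(1, r + 1))

# The proof shows S(n) <= n * log2(e) ~ 1.443 * n; the ratio stays bounded.
for r in (10, 20, 40, 80):
    n = r * r
    assert lemma_sum(n) / n < 1.5
```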
We next bound the number of triangles of height one that contain $q$. Consider two empty triangles $\Delta$ and $\Delta'$ of height one in $G$ whose base edges are parallel and for which $q$ lies in both $\pi_{\epsilon}(\Delta)$ and $\pi_{\epsilon}(\Delta')$. Let $S$ and $S'$ be the points of $G$ that are on the boundary of the parallel strips containing $\Delta$ and $\Delta'$, respectively. Since the strips are parallel and $q \in \operatorname{Conv}(\pi_{\epsilon}(S)) \cap \operatorname{Conv}(\pi_{\epsilon}(S'))$, it follows that $S \cap S' \neq \emptyset$. In other words, $\Delta$ and $\Delta'$ lie either in the same strip or in two neighboring strips of $G$. So for each possible slope of the base edge of $\Delta$, there are at most two strips in $G$ which can contain $\Delta$ such that $q$ stabs $\pi_{\epsilon}(\Delta)$. As before, for each $1 \le d \le \sqrt{n}$ there are at most $4 \cdot \phi(d)$ possible slopes for the base edge of $\Delta$, and the corresponding strip $S$ containing $\Delta$ has at most $2(\sqrt{n}/d+1)$ points of $G$. By Lemma~\ref{lemma:sqHortonUnion}, $\pi_{\epsilon}(S)$ forms a Horton set in $H$. Hence, by Lemma~\ref{lemma:hortoninterior}, the number of empty triangles in $\pi_{\epsilon}(S)$ that contain $q$ is upper-bounded by $O(\sqrt{n}/d \cdot \log(\sqrt{n}/d))$. Summing up over all possible slopes and the corresponding strips, we obtain an upper bound of \[ \sum_{d=1}^{\sqrt{n}} 4 \cdot \phi(d) \cdot 2 \cdot O\big(\sqrt{n}/d \cdot \log(\sqrt{n}/d)\big) = O\Big(\sum_{d=1}^{\sqrt{n}} \phi(d) \cdot \sqrt{n}/d \cdot \log(\sqrt{n}/d)\Big) = O(n)\] for the number of empty triangles of height one that can be stabbed by $q$. Finally, we consider the triangles of height $2$. Let $\ell$ be the supporting line of the base edge of a triangle $\Delta$ in $G$ of height $2$ such that $\pi_{\epsilon}(\Delta)$ is empty and contains $q$.
Let $S$ be the set of points of $G$ in the double strip bounded by $\ell$ that contains $\Delta$, and let $p$ be the corner of $\Delta$ that does not lie on its base edge. Note that all interior-empty triangles with height 2, base edge on $\ell$, and third corner $p$ are pairwise interior-disjoint. Hence $\Delta$ is the only such triangle for which $\pi_{\epsilon}(\Delta)$ is empty and contains $q$. Now consider a triangle $\Delta'$ of height $2$ for which $\pi_{\epsilon}(\Delta')$ is empty and contains $q$ and whose base edge is parallel to the one of $\Delta$. Let $S'$ be the set of points of $G$ in the double strip parallel to $\ell$ that contains $\Delta'$. Since the double strips of $\Delta$ and $\Delta'$ are parallel and $q \in \operatorname{Conv}(\pi_{\epsilon}(S)) \cap \operatorname{Conv}(\pi_{\epsilon}(S'))$, it follows that $S \cap S' \neq \emptyset$. In other words, the two double strips must be identical, overlapping, or neighboring. So for each possible slope of the base edge of $\Delta$, there are at most three double strips in $G$ which can contain $\Delta$ such that $q$ stabs $\pi_{\epsilon}(\Delta)$. Hence at most five lines of that slope could contain the third vertex $p$ of $\Delta$. For each $1 \le d \le \sqrt{n}$ there are at most $4 \cdot \phi(d)$ possible slopes for the base edge of $\Delta$. Each of the at most five lines of such a slope has at most $\sqrt{n}/d+1$ points of $G$, each of which could be $p$. Summing up over all possible slopes and the corresponding lines, and using $\phi(d)\leq d$, we obtain an upper bound of \[ \sum_{d=1}^{\sqrt{n}} 4 \cdot \phi(d) \cdot 5(\sqrt{n}/d + 1) = O\Big(\sqrt{n} \sum_{d=1}^{\sqrt{n}} \phi(d)/d \Big) = O(n)\] for the number of empty triangles of height $2$ that can be stabbed by $q$. Adding up the bounds for the three different triangle heights yields an upper bound of $3\cdot O(n) = O(n)$ on the total number of empty triangles in $H$ that contain $q$, which completes the proof.
\end{proof} The following result on the number of empty triangles incident to a fixed point of a squared Horton set is proved similarly to Theorem~\ref{thm:horton_stabs}. \begin{restatable}{lemma}{sqHortonIncident}\label{lemma:sqHorton_incident} Every point of a squared Horton set of $n$ points is incident to $O(n)$ empty triangles. \end{restatable} \begin{proof} Let $H$ be a squared Horton set of $n$ points. We will show that every point $p$ of $H$ is incident to $O(n)$ empty triangles of $H$. We start with the number of such triangles of height zero. For each such triangle $\Delta$, $\pi_{\epsilon}^{-1}(p)$ has to be on the same line as $\pi_{\epsilon}^{-1}(\Delta)$. For each $1 \le d \le \sqrt{n}$, there are at most $4 \cdot \phi(d)$ lines spanned by $G$ through $\pi_{\epsilon}^{-1}(p)$, each with at most $\sqrt{n}/d+1$ points of $G$. For each such line $\ell$, the point set $\pi_{\epsilon}(\ell \cap G)$ is a Horton set, which by Lemma~\ref{lemma:hortonincident} has $O((\sqrt{n}/d) \cdot \log(\sqrt{n}/d))$ empty triangles incident to $p$. Hence, summing over all possible slopes, the number of empty triangles in $H$ incident to $p$ and having height zero is at most \begin{eqnarray*} \sum_{d=1}^{\sqrt{n}} \phi(d) \cdot O\big((\sqrt{n}/d) \cdot \log_2(\sqrt{n}/d)\big) & = & O(n). \end{eqnarray*} We next consider the triangles of height one. For $p$ to be incident to such a triangle $\Delta$, $\pi_{\epsilon}^{-1}(p)$ has to lie on the boundary of the strip defining the height of $\pi_{\epsilon}^{-1}(\Delta)$. For each slope there are at most two relevant such strips for $p$, each spanning a Horton set in $H$. By a similar counting as above, we again get a bound of $O(n)$ on the number of empty triangles in $H$ of height one incident to $p$. Finally, we consider the triangles of height $2$.
For each slope there are at most $3$ grid lines in $G$ that could contain the base edge of an interior-empty triangle incident to $\pi_{\epsilon}^{-1}(p)$, namely, the line $\ell$ containing $\pi_{\epsilon}^{-1}(p)$ and the two lines $\ell_1,\ell_2$ at distance $2$ from $\ell$. Further, for each such double strip, the number of empty triangles in $H$ that are incident to $p$ is bounded from above by the number of points on the boundary of the double strip. Hence summing up over all slopes and relevant double strips gives $O(n)$ empty triangles in $H$ that have height $2$ and are incident to $p$, which completes the proof. \end{proof} \section{{\wasylozenge}-squared Horton sets}\label{sec:diamondssquaredhortonset} \begin{figure} \centering \includegraphics[scale=1]{figures/wasylozenge_set} \caption{ A {\wasylozenge} point set. } \label{fig:wasylozenge} \end{figure} We denote by {\wasylozenge} a point set obtained by placing four points on the corners of a square and adding further points along four slightly concave arcs between adjacent corners, such that each arc contains almost the same number of points. An example is depicted in Figure~\ref{fig:wasylozenge}. \begin{defini} Let $H$ be a squared Horton set with $m$ points. Let $H_{\wasylozenge}$ be the set we obtain by replacing every point of $H$ by a small $\wasylozenge$ with $k$ points. We denote the points of $H$ by $p_i$ and the corresponding $\wasylozenge$ by $\wasylozenge_i$. Then $H_{\wasylozenge}$ is a \text{{\wasylozenge}-\emph{squared}} \emph{Horton set} if the following properties hold. \begin{enumerate} \item\label{enum:orient} For any pairwise different $i, j, l \in \{1,\dots, m\}$, any point triple $q_i \in \wasylozenge_i$, $q_j\in \wasylozenge_j$, $q_l \in \wasylozenge_l$ has the same orientation as $p_i,p_j,p_l$.
\item\label{enum:arc} The arcs of each~$\wasylozenge_i$ are such that for any $\wasylozenge_j$ with $i\neq j$ there is an arc of $\wasylozenge_i$ and an arc of $\wasylozenge_j$ which together form a convex set. \item\label{point_on_line} For any five pairwise different $a, b, c, d, e \in \{1, \dots, m\}$ the following holds: $\operatorname{Conv}(\wasylozenge_a\cup\wasylozenge_b)\cap\operatorname{Conv}(\wasylozenge_a\cup\wasylozenge_c)\cap\operatorname{Conv}(\wasylozenge_d\cup\wasylozenge_e)=\emptyset$. \item\label{three_cross} For any six pairwise different $a, b, c, d, e, f \in \{1, \dots, m\}$ the following holds: $\operatorname{Conv}(\wasylozenge_a\cup\wasylozenge_b)\cap\operatorname{Conv}(\wasylozenge_c\cup\wasylozenge_d)\cap\operatorname{Conv}(\wasylozenge_e\cup\wasylozenge_f)=\emptyset$. \end{enumerate} \end{defini} Observe that $H_{\wasylozenge}$ has $n=km$ points if $H$ has $m$ points and $\wasylozenge$ consists of $k$ points. Since the points of $H$ are in general position, it is possible to choose the $\wasylozenge$ small enough that Properties~\ref{enum:orient},~\ref{point_on_line} and \ref{three_cross} hold. Property~\ref{enum:arc} holds if all $\wasylozenge$ are aligned in the same way. \begin{restatable}{lemma}{wasHortonEmpty}\label{lemma:nbr_empty} Let $H_{\wasylozenge}$ be a ${\wasylozenge}$-squared Horton set, where the underlying squared Horton set has $m$ points and each $\wasylozenge$ consists of $k$ points. Then the number of empty triangles in $H_{\wasylozenge}$ is $\Theta(m^2k^3)$. \end{restatable} \begin{proof} We split the empty triangles of $H_{\wasylozenge}$ into three groups, depending on the number of different $\wasylozenge$ subsets of $H_{\wasylozenge}$ that contain the vertices of a triangle. \emph{Case 1.} Triangles spanned by three points of $\wasylozenge_i$, for $i \in \{1,\dots,m\}$.
Each $\wasylozenge_i$ spans $O(k^3)$ such empty triangles. Summing up over the $m$ different subsets $\wasylozenge_1,\dots,\wasylozenge_m$ yields $O(m k^3)$ empty triangles of $H_{\wasylozenge}$ for this case. \emph{Case 2.} Triangles spanned by two points in $\wasylozenge_i$ and one point in $\wasylozenge_j$, for $i\neq j \in \{1,\dots,m\}$. Note that no such triangle has any point of $\wasylozenge_l$ with $l \neq i,j$ in its interior, due to Property~\ref{enum:orient} of $\wasylozenge$-squared Horton sets. There are $\Theta(m^2)$ pairs $(\wasylozenge_i, \wasylozenge_j)$. For each of $\wasylozenge_i$ and $\wasylozenge_j$, there are at most $k$ choices for a vertex of an empty triangle. This means that we have $O(m^2k^3)$ empty triangles in this case. On the other hand, due to Property~\ref{enum:arc} of $\wasylozenge$-squared Horton sets, an arc of $\wasylozenge_i$ and an arc of $\wasylozenge_j$ form a convex point set. This convex point set is empty by construction. Further, each of the arcs has at least $k/4$ points. This gives us $\Omega(m^2k^3)$ empty triangles in this case. So $H_{\wasylozenge}$ contains $\Theta(m^2k^3)$ empty triangles which are spanned by two $\wasylozenge$s. \emph{Case 3.} Triangles spanned by one point in each of $\wasylozenge_i, \wasylozenge_j, \wasylozenge_l$, for pairwise different $i, j, l \in \{1,\dots,m\}$. Then $p_ip_jp_l$ is an empty triangle of $H$. For each of $p_i$, $p_j$, and $p_l$, we have at most $k$ choices for a point of the corresponding $\wasylozenge$ such that the resulting triangle of $H_{\wasylozenge}$ is empty. As $H$ has $\Theta(m^2)$ empty triangles, we have $O(m^2k^3)$ empty triangles of $H_{\wasylozenge}$ for this case. Summing up the three cases yields $\Theta(m^2k^3)$ empty triangles in $H_{\wasylozenge}$ in total. \end{proof} \begin{restatable}{lemma}{lemWasHortonStabbed}\label{lemma:nbr_stabbed} Let $H_{\wasylozenge}$ be a ${\wasylozenge}$-squared Horton set, where the underlying squared Horton set has $m$ points and each $\wasylozenge$ consists of $k$ points.
Then every point of the plane stabs $O(mk^3)$ empty triangles of $H_{\wasylozenge}$. \end{restatable} \begin{proof} We fix a stabbing point $s$. We split the empty triangles into three groups: all three points in one $\wasylozenge$; two points in one $\wasylozenge$ and the third one in a different $\wasylozenge$; and all three points in different $\wasylozenge$s. \emph{Case 1.} All points in one $\wasylozenge$. If all points are in one $\wasylozenge$, then $s$ stabs at most $k^3$ such empty triangles, since the convex hulls of the $\wasylozenge$s do not intersect. \emph{Case 2.} Two points in one $\wasylozenge$ and the third point $q$ in a different $\wasylozenge$. We draw a half-ray $\ell$ starting from $q$ and passing through $s$. If $a$ and $b$ are points of a $\wasylozenge$ that is not intersected by $\ell$, then $s$ does not stab the triangle $abq$. Further, $\ell$ intersects at most one other $\wasylozenge$, since the points of the underlying squared Horton set are in general position. So we have $mk$ choices for $q$ and $O(k^2)$ choices for the other two points of the triangle such that the resulting triangle is stabbed by $s$, which gives $O(mk^3)$ empty triangles in this case. \emph{Case 3.} All points in different $\wasylozenge$s. We merge the points of each $\wasylozenge_i$ into one point $p_i$. The result of this merging is a squared Horton set $H$. Further, we define a point $s'$ with the following properties: \begin{itemize} \item $s':=s$ if $s \notin \operatorname{Conv}(\wasylozenge_i \cup \wasylozenge_j)$ for any $i,j\in \{1,\dots,m\}$, \item $s':=p_i$, with $i \in \{1,\dots,m\}$, if there exist two different $j, l \in \{1,\dots,m\}\backslash\{i\}$ with ${s \in \operatorname{Conv}(\wasylozenge_i \cup \wasylozenge_j)\cap \operatorname{Conv}(\wasylozenge_i \cup \wasylozenge_l)}$, \item $s':=p_ip_j\cap p_ap_b$, with pairwise distinct $a,b,i,j \in \{1,\dots,m\}$, if it holds that $s \in \operatorname{Conv}(\wasylozenge_i \cup \wasylozenge_j)\cap \operatorname{Conv}(\wasylozenge_a \cup \wasylozenge_b)$,
\item $s'\in p_ip_j$ if $s \in \operatorname{Conv}(\wasylozenge_i \cup \wasylozenge_j)$, chosen such that $s'$ is on the same side of the line $p_ap_b$ as $s$ for any $a,b \in \{1,\dots,m\}\backslash\{i,j\}$. \end{itemize} If $s'=p_i$, then by Property~\ref{point_on_line} there do not exist $a,b \in \{1, \dots, m\}\backslash\{i\}$ such that $s\in\operatorname{Conv}(\wasylozenge_a \cup \wasylozenge_b)$. Further, if $s'=p_ip_j\cap p_ap_b$, then by Property~\ref{three_cross} there do not exist $c, d\in \{1, \dots, m\}\backslash\{a,b,i,j\}$ such that $s\in\operatorname{Conv}(\wasylozenge_c \cup \wasylozenge_d)$. So $s'$ is well defined. Observe that $s'$ is on the same side of the line $p_ap_b$ as $s$, or on the line $p_ap_b$, for any $a,b \in \{1,\dots,m\}$. This means that if $s$ stabs $q_iq_jq_l$ with $q_i \in \wasylozenge_i$, $q_j \in \wasylozenge_j$ and $q_l \in \wasylozenge_l$, then $s'$ is in the interior or on the boundary of the corresponding triangle $p_ip_jp_l$. We have three cases: \emph{Case 3a.} $s'$ is neither a point of $H$ nor on a line segment spanned by two points of $H$. By Theorem~\ref{thm:horton_stabs}, $s'$ stabs $O(m)$ empty triangles of $H$. Let $p_i,p_j,p_l$ be the points of a triangle of $H$ stabbed by $s'$. There are at most $k$ choices for a point in each of $\wasylozenge_i$, $\wasylozenge_j$, and $\wasylozenge_l$. So $s$ stabs $O(mk^3)$ such empty triangles. \emph{Case 3b.} $s'$ is a point of $H$. By Lemma~\ref{lemma:sqHorton_incident}, $s'$ is incident to $O(m)$ empty triangles of $H$. Let $p_i,p_j,p_l$ be the points of a triangle of $H$ incident to~$s'$. There are at most $k$ choices for a point in each of $\wasylozenge_i$, $\wasylozenge_j$, and $\wasylozenge_l$. So $s$ stabs $O(mk^3)$ such empty triangles. \emph{Case 3c.} $s'$ is on a line $p_ip_j$. We define two points $s'_1$ and $s'_2$ such that both are close to $s'$ and lie on different sides of any line $p_ap_b$ passing through $s'$, for $a,b \in \{1,\dots,m\}$.
Observe that any triangle containing $s'$ (in the interior or on the boundary) also contains $s'_1$ or $s'_2$. Also observe that $s'_1$ and $s'_2$ are neither points of $H$ nor on a line spanned by points of $H$. So $s'_1$ and $s'_2$ each stab $O(m)$ empty triangles. For each such triangle $p_ip_jp_l$, there are at most $k$ choices for a point in each of $\wasylozenge_i$, $\wasylozenge_j$, and $\wasylozenge_l$. Again we get that $s$ stabs $O(mk^3)$ such empty triangles. Adding up all cases, $s$ stabs $O(mk^3)$ empty triangles. \end{proof} With these lemmata we can finally show our main result. \begin{namedproof}{Proof of Theorem~\ref{thm:LensSquaredHortonSets}} Consider a $\wasylozenge$-squared Horton set where the underlying squared Horton set consists of $m=n^{\alpha}$ points and each of the $\wasylozenge$s consists of $k=n^{1-\alpha}$ points. By Lemma~\ref{lemma:nbr_empty} there are $\Theta(m^2k^3)=\Theta(n^{3-\alpha})$ empty triangles in this set. By Lemma~\ref{lemma:nbr_stabbed} every point stabs $O(mk^3)$ empty triangles. Hence every point stabs $O(n^{3-2\alpha})$ empty triangles. \end{namedproof} \paragraph{Acknowledgements.} Research on this work was initiated at a workshop of the H2020-MSCA-RISE project 73499 - CONNECT, held in Barcelona in June 2017. We thank all participants for the good atmosphere as well as for discussions on the topic. \bibliographystyle{abbrv}
https://arxiv.org/abs/1908.00305
Online Primal-Dual Mirror Descent under Stochastic Constraints
We consider online convex optimization with stochastic constraints, where the objective functions are arbitrarily time-varying and the constraint functions are independent and identically distributed (i.i.d.) over time. Both the objective and constraint functions are revealed after the decision is made at each time slot. The best known expected regret for solving such a problem is $\mathcal{O}(\sqrt{T})$, with a coefficient that is polynomial in the dimension of the decision variable and relies on the Slater condition (i.e., the existence-of-an-interior-point assumption), which is restrictive and in particular precludes treating equality constraints. In this paper, we show that such a Slater condition is in fact not needed. We propose a new primal-dual mirror descent algorithm and show that one can attain $\mathcal{O}(\sqrt{T})$ regret and constraint violation under a much weaker Lagrange multiplier assumption, allowing general equality constraints and significantly relaxing the previous Slater conditions. Along the way, for the case where decisions are contained in a probability simplex, we reduce the coefficient to have only a logarithmic dependence on the decision variable dimension. Such a dependence has long been known in the literature on mirror descent but seems new in this constrained online learning scenario.
\section{Introduction} We consider an online convex optimization (OCO) problem with a sequence of arbitrarily varying convex objective functions $f^t(\mu),~t=0,1,2,\cdots$, with $\mu\in\Delta \subseteq \mathbb{R}^d$, which are revealed per slot after the decision is made, where $\Delta$ is a closed bounded convex set. For a fixed time horizon $T$, define the regret of a sequence of decisions $\left\{\mu^0,\mu^1,~\cdots,~\mu^{T-1}\right\}\subseteq\Delta$ as \[ \sum_{t=0}^{T-1}f^t(\mu^t) - \min_{\mu\in\Delta }\sum_{t=0}^{T-1}f^t(\mu). \] The goal of OCO is to choose the decision sequence so that the regret grows sublinearly with respect to $T$. OCO is a classical problem and has been considered in a number of previous works such as \citep{Cesa-Bianchi96TNN,Gordon99COLT,Zinkevich03ICML,Hazan16FoundationTrends}. In particular, it is known that for differentiable functions $f^t(\cdot)$, the projected gradient descent algorithm achieves an $\mathcal{O}(\sqrt{T})$ regret, which is also worst-case optimal. When the set $\Delta$ is a probability simplex, the mirror descent algorithm further achieves an ``almost dimension free'' logarithmic dependency on the dimension $d$. The framework considered in this paper builds upon the previous OCO model by incorporating a sequence of time-varying constraint functions $g^t_i(\mu),~i=1,2,\cdots,L$, which are also revealed at each time slot $t$ after the decision is made. The goal of this constrained OCO is to choose the decision sequence $\left\{\mu^0,\mu^1,~\cdots,~\mu^{T-1}\right\}\subseteq\Delta$ so that both the regret and the constraint violations grow sublinearly in $T$ (i.e. $\sum_{t=0}^{T-1}g_i^t(\mu^t)\leq o(T)$) with respect to the best fixed decision in hindsight solving the following convex program: \begin{equation}\label{eq:0} \min_{\mu\in\Delta }\sum_{t=0}^{T-1}f^t(\mu),~~s.t.~~\sum_{t=0}^{T-1}g_i^t(\mu)\leq 0,~i=1,2,\cdots,L.
\end{equation} The constrained OCO problem was first considered in the work \citep{Mannor09JMLR}, where the authors (somewhat surprisingly) show via a counterexample that even with only one constraint, it is not always possible to achieve the aforementioned goal if we allow both objective and constraint functions to vary arbitrarily. Such an impossibility result implies that if one wants to obtain meaningful results on constrained OCO, then more assumptions have to be imposed. The works \citep{Mahdavi12JMLR, Jenatton16ICML, titov2018mirror} consider the scenario where the constraint functions are fixed (i.e. do not depend on the time index $t$) and propose primal-dual type methods whose analyses give $\mathcal{O}(T^{\max\{\beta,1-\beta\}})$ regret and $\mathcal{O}(T^{1-\beta/2})$ constraint violation, where $\beta\in[0,1]$ is an algorithm parameter. This bound is improved in the work \citep{yu2016low}, where the authors show an $\mathcal{O}(\sqrt{T})$ regret bound and finite constraint violation (i.e. $\mathcal{O}(1)$ constraint violation) under the Slater condition (i.e. there exists a $\mu\in\Delta$ such that $g_i(\mu)<0,~\forall i$). A more recent work \citep{yuan2018online} shows that one can get logarithmic regret and $\mathcal{O}(\sqrt{T})$ constraint violation if one assumes instead that all objective functions are strongly convex. Constrained OCO with stochastic constraints, where $g_i^t(\mu) = g_i(\mu,\gamma^t)$ and $\{\gamma^t\}_{t=0}^{T-1}$ are i.i.d., is considered in works such as \citep{yu2017online,chen2019bandit,pmlr-v97-liakopoulos19a}, where a primal-dual proximal gradient algorithm is proposed and $\mathcal{O}(\sqrt{T})$ expected regret and constraint violations are shown under the Slater condition (i.e. there exists a $\mu\in\Delta$ such that $\expect{g_i(\mu,\gamma^t)}<0,~\forall i$).
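To make the unconstrained simplex case concrete, here is a minimal, illustrative sketch (not taken from any of the cited papers) of the entropic online mirror descent update over the probability simplex with linear losses, $\mu^{t+1}_i \propto \mu^t_i \exp(-\eta\, c^t_i)$, which is the multiplicative-weights form behind the ``almost dimension free'' $\mathcal{O}(\sqrt{T\log d})$ regret; the cost sequence and step size below are assumed for the toy run.

```python
from math import exp, log, sqrt

def mirror_step(mu, grad, eta):
    """Entropic mirror descent update on the probability simplex:
    mu'_i proportional to mu_i * exp(-eta * grad_i)."""
    w = [m * exp(-eta * g) for m, g in zip(mu, grad)]
    s = sum(w)
    return [wi / s for wi in w]

# Toy run with linear losses f^t(mu) = <c^t, mu> on the 3-simplex.
T = 400
costs = [[1.0, 0.5, 0.9] for _ in range(T)]  # assumed toy cost sequence in [0, 1]
mu = [1.0 / 3] * 3
eta = sqrt(log(3) / T)                       # common step-size choice
total = 0.0
for c in costs:
    total += sum(ci * mi for ci, mi in zip(c, mu))  # incur loss, then update
    mu = mirror_step(mu, c, eta)

best_fixed = min(sum(c[i] for c in costs) for i in range(3))
regret = total - best_fixed
assert regret >= 0
assert regret <= 2 * sqrt(T * log(3))  # O(sqrt(T log d)) in this toy run
```

The constrained algorithms discussed above additionally maintain dual variables for the constraints; this sketch only illustrates the primal mirror step.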
Without the Slater condition, the best known result is again $\mathcal{O}(T^{\max\{\beta,1-\beta\}})$ regret and $\mathcal{O}(T^{1-\beta/2})$ constraint violation, as is shown in \citep{yi2019distributed}. Also, to the best of our knowledge, previous bounds in constrained online learning fail to recover the ``almost dimension free'' phenomenon, ubiquitous in unconstrained scenarios, for the probability simplex decision set. In this paper, we make steps towards \textit{removing the Slater condition while maintaining the worst-case optimal $\mathcal{O}(\sqrt{T})$ regret and constraint violations, and sharpening the dimension dependency on decision variables.} The Slater condition is assumed in the classical analysis of optimization algorithms for constrained convex programs, such as the dual subgradient algorithm \citep{nedic2009approximate} and the interior point method \citep{boyd2004convex}. A key implication of the Slater condition, which is adopted in the $\mathcal O(1/\sqrt{T})$ convergence rate analysis in \citep{nedic2009approximate}, is that it implies the existence and boundedness of Lagrange multipliers. However, the reverse implication is in general untrue, as one can show that for many equality constrained convex programs, Lagrange multipliers do exist and are bounded \citep{bertsekas1999nonlinear}. This makes ``Slater condition free'' analysis an important topic in optimization theory and motivates a series of improved primal-dual type algorithms and analyses for constrained convex programs with competitive convergence rates under the existence-of-Lagrange-multipliers assumption \citep{neely2014simple,yurtsever2015universal,deng2017parallel,yu2017simple}. Replacing the Slater condition with Lagrangian-type assumptions in online problems is highly non-trivial and does not follow from the analysis of constrained convex programs. A key issue is that the objective function varies arbitrarily per slot, and so the definition of a Lagrange multiplier is not clear.
A simple attempt is to look at in-hindsight problems such as \eqref{eq:0} and see if the Lagrange multiplier of this problem helps with the regret analysis. However, since problem \eqref{eq:0} sums the objectives across the horizon, it hardly gives any insight on the per-slot dynamics of any practical algorithm considered. If we instead look at the per-slot constrained problem, then one might be able to conduct an analysis and obtain per-slot multipliers, but it is not clear how to piece together the analyses for different slots. \subsection{Contributions} In this paper, we consider the stochastic constrained online learning problem and propose a new primal-dual online mirror descent framework, which simultaneously weakens the assumptions and improves the dimension factors of the previously known online proximal gradient type algorithms. We introduce a new \textit{sequential existence of Lagrange multipliers} condition, which is shown to be \textit{strictly weaker} than the Slater condition, allows for equality constraints, and bridges the aforementioned dilemma between the in-hindsight problem and the per-slot problem. We then show via a new analysis that under such an assumption, the proposed algorithm enjoys a matching $\mathcal{O}(\sqrt{T})$ expected regret and constraint violations. For the case when decisions are contained in a probability simplex, we reduce the dimension dependency to have only a logarithmic factor. Conceptually, our analysis seems to be distinctive from the previously known methods in the sense that we look at the cumulative objectives over a specifically chosen time period (of length $\sqrt T$), and consider the following static constrained program starting from any time slot $t$: $ \min_{\mu\in\Delta }\sum_{\tau=t}^{t+\sqrt T}\expect{f^{\tau}(\mu)},~~~s.t.~~ \expect{g_i(\mu,\gamma^t)}\leq 0, ~~i=1,2,\cdots,L.
$ We demonstrate that the existence and boundedness of Lagrange multipliers for this problem provide a weak error bound condition on the dual function that suffices to bound the size of the dual variable process, leading to the desired results. \subsection{Notation} For any vector $\mathbf{v}\in\mathbb{R}^d$, $\mathbf{v}\geq0,~\mathbf{v}=0,~\mathbf{v}\leq0$ mean that $\mathbf{v}$ is entrywise nonnegative, zero and nonpositive, respectively. The notation $[\mf v]_+$ denotes the entrywise application of the function $\max(x,0)$. The notation $\mathbb{R}^d_+$ stands for the nonnegative orthant of $\mathbb{R}^d$. For any set $\mathcal S\subseteq\mathbb{R}^d$, let $\text{int}(\mathcal{S})$ be its interior. The norms are defined as $\|\mathbf{v}\|_1 := \sum_{i=1}^d|v_i|$, $\|\mathbf{v}\|_2 := (\sum_{i=1}^d|v_i|^2)^{1/2}$ and $\|\mathbf{v}\|_\infty := \max_{i}|v_i|$. For any convex function $f:\mathbb{R}^d\rightarrow\mathbb{R}$, we use $\nabla f(\mathbf{v})$ to denote any one of the subgradients at $\mathbf{v}$ and use $\partial f(\mathbf{v})$ to denote the set of all subgradients at $\mathbf{v}$. For any function $g(\mathbf{v},\xi)$ which is convex in the first argument $\mathbf{v}$, $\nabla g(\mathbf{v},\xi)$ denotes a subgradient of $g$ with respect to $\mathbf{v}$ while fixing $\xi$. For any closed set $K\subseteq\mathbb{R}^d$ and any point $\mf x\in\mathbb{R}^d$, the distance of $\mf x$ to $K$ is defined as $\text{dist}(\mf x, K):=\min_{\mf y\in K}\|\mf{x-y}\|_2$. \section{Problem Formulation and Algorithms} \subsection{Basic definitions}\label{sec:def} Let $\|\cdot\|$ be a general norm on $\mathbb{R}^d$. Define the dual norm of any $x\in\mathbb{R}^d$ as $\|x\|_*:= \sup_{\|y\|\leq1}\dotp{x}{y}$. Consider a convex set $\mathcal C\subseteq\mathbb R^d$ (potentially $\mathbb R^d$ itself) with a non-empty interior, i.e. $\text{int}(\mathcal C)\neq\emptyset$. Let $\omega:\mathcal{C}\rightarrow\mathbb{R}$ be a function that is continuously differentiable in the interior of $\mathcal C$. 
Let $\Delta\subseteq \mathcal C$ be a \textit{compact convex} subset containing the origin, and let $\Delta^o:=\Delta\cap\text{int}(\mathcal C)$, which we assume is non-empty. Define the \textit{Bregman divergence} function $D:\Delta\times \Delta^o\rightarrow\mathbb{R}$ generated from $\omega(\cdot)$ as follows: \[ D(x,y):= \omega(x) - \omega(y) - \dotp{\nabla\omega(y)}{x-y}. \] The following is a key property of the Bregman divergence: \begin{lemma}[Pushback]\label{lem:strong-convex} Let $f:\mathcal{C}\rightarrow\mathbb{R}$ be a convex function. Fix $\alpha>0$ and $y\in\Delta^o$. Suppose $x^*\in \text{argmin}_{x\in\Delta} f(x) + \alpha D(x, y)$ and $x^*\in\Delta^o$. Then, for any $z\in\Delta$, \[ f(x^*) + \alpha D(x^*,y)\leq f(z) + \alpha D(z,y) - \alpha D(z,x^*). \] \end{lemma} \begin{remark} For the case where $f$ is a linear function and $\omega$ is convex, such a pushback result can be found, for example, in \citep{nemirovski2009robust}. For results where $f$ is defined on the domain $\mathbb{R}^d$, the proof can be found in \citep{tseng2005}. Our result generalizes these to an arbitrary set $\Delta$. It is proved in the Supplement (Section \ref{sec:property-divergence}). \end{remark} We say $\omega(\cdot)$ is a \textit{distance generating function} if it is continuously differentiable on $\text{int}(\mathcal C)$ and strongly convex with modulus $\beta$ with respect to the primal norm $\|\cdot\|$, i.e. $ \dotp{x - y}{\nabla\omega(x) - \nabla\omega(y)}\geq \beta \|x-y\|^2,~\forall x,y\in \text{int}(\mathcal C). $ It is easy to see that if $\omega$ is a distance generating function, then the corresponding $D(\cdot,\cdot)$ satisfies \begin{equation}\label{eq:strong-convexity} D(x,y)\geq \beta\|x-y\|^2/2,~~\forall x,y\in\text{int}(\mathcal{C}). 
\end{equation} Note that $D(x,y)$ behaves asymmetrically in $x$ and $y$, over potentially different domains, which results from the (possible) non-differentiability of the distance generating function $\omega(\cdot)$ on the boundary of $\Delta$. One such example is the KL divergence. Two standard setups are as follows. \begin{enumerate}[leftmargin=*] \item The set $\Delta = \{\mu\in\mathbb{R}^d:~\|\mu\|_1 = 1,~\mu\geq0\}$ is the probability simplex, $\mathcal C = \mathbb{R}^d_+$, the function $\omega(\mu)=\sum_{i=1}^d\mu_i\log\mu_i$ is the negative entropy, and for any two distributions $\mu^a\in\Delta,~\mu^b\in\Delta^o$, $D(\mu^a,\mu^b):=\sum_{i=1}^d\mu^a_i\log(\mu^a_i/\mu^b_i)$ is the well-known Kullback-Leibler (KL) divergence. Furthermore, by Pinsker's inequality, \eqref{eq:strong-convexity} holds with respect to $\|\cdot\|_1$ with strong convexity modulus $\beta=1$. The dual norm in this space is $\|\cdot\|_\infty$. \item The set $\Delta$ lies in the Euclidean space $\mathbb R^d$, $\mathcal{C} =\mathbb R^d$ and $\omega(x) = \frac12\|x\|_2^2$, which is strongly convex with respect to $\|\cdot\|_2$; here $D(x,y) = \frac12\|x-y\|_2^2$, and the dual norm is also $\|\cdot\|_2$. \end{enumerate} \vspace{-1em} \subsection{Problem formulation}\label{sec:formulation} \vspace{-0.5em} In this section, we set up the basic formulation of stochastic constrained online optimization. Let $\{\xi^t\}_{t=0}^\infty$ and $\{\gamma^t\}_{t=0}^{\infty}$ be two processes, where $\{\xi^t\}_{t=0}^\infty$ can be arbitrarily time varying (and might be chosen based on the system history) and $\{\gamma^t\}_{t=0}^{\infty}$ consists of i.i.d. realizations of a random variable $\gamma$ with a possibly unknown distribution. Let $f(\mu,\xi^t)$ and $g_i(\mu, \gamma^t), i\in \{1,2,\ldots, L\}$, be deterministic functions which are convex in the first argument for each value of the second. Furthermore, let $\{h_j^t\}_{t=0}^{\infty},~j\in\{1,2,\cdots,M\}$, be sequences of i.i.d. random vectors in $\mathbb{R}^d$. 
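As a quick numerical sanity check of the two Bregman divergence setups listed in Section \ref{sec:def} (a sketch using NumPy, not part of the paper's development): the divergence generated by the negative entropy recovers the KL divergence on the simplex and satisfies the Pinsker-type bound with modulus $\beta=1$ in the $\ell_1$ norm, while the half squared Euclidean norm generates $\frac12\|x-y\|_2^2$.

```python
import numpy as np

def bregman(omega, grad_omega, x, y):
    # D(x, y) = omega(x) - omega(y) - <grad omega(y), x - y>
    return omega(x) - omega(y) - grad_omega(y) @ (x - y)

# Negative entropy generates the KL divergence on the simplex.
neg_entropy = lambda m: np.sum(m * np.log(m))
grad_neg_entropy = lambda m: np.log(m) + 1.0

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.25, 0.25, 0.5])

kl = np.sum(x * np.log(x / y))
assert np.isclose(bregman(neg_entropy, grad_neg_entropy, x, y), kl)

# Pinsker-type bound: D(x, y) >= (1/2) ||x - y||_1^2   (modulus beta = 1)
assert bregman(neg_entropy, grad_neg_entropy, x, y) >= 0.5 * np.sum(np.abs(x - y)) ** 2

# Half squared Euclidean norm generates D(x, y) = (1/2) ||x - y||_2^2.
sq = lambda v: 0.5 * v @ v
grad_sq = lambda v: v
assert np.isclose(bregman(sq, grad_sq, x, y), 0.5 * np.sum((x - y) ** 2))
```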
Throughout the paper, we assume $\xi^t,\gamma^t,h_j^t$ are jointly independent for all $t$, and we denote the system history up to time $t$ by $\mathcal{F}_t := \{\xi^\tau,\gamma^\tau,h_j^\tau\}_{\tau=0}^{t-1}$. For any fixed $\mu\in\Delta$, we write $f^t(\mu) := f(\mu,\xi^t)$, $g_i^t(\mu) := g_i(\mu, \gamma^t)$, and $\overline f^t(\mu) = \mathbb{E}[f^t(\mu)|\mathcal{F}_t]$, $\overline g_i(\mu) = \mathbb{E}[g_i^t(\mu)]$. We further define the vectorized notations $\mathbf{g}^t(\mu)= [g^{t}_{1}(\mu), \ldots, g^{t}_{L}(\mu)]^{\mkern-1.5mu\mathsf{T}}$, $\overline{\mathbf{g}}(\mu) = [\mathbb{E}[g_{1}(\mu, \gamma)], \ldots, \mathbb{E}[g_{L}(\mu, \gamma)]]^{\mkern-1.5mu\mathsf{T}}$, $\mathbf{h}^t(\mu)= [\dotp{h^{t}_{1}}{\mu}, \ldots, \dotp{h^{t}_{M}}{\mu}]^{\mkern-1.5mu\mathsf{T}}$ and $\overline{\mathbf{h}}(\mu)= [\dotp{\expect{h^{t}_{1}}}{\mu}, \ldots, \dotp{\expect{h^{t}_{M}}}{\mu}]^{\mkern-1.5mu\mathsf{T}}$. It is worth noting that our algorithms and analysis also apply to the special case where $\{\xi^t\}_{t=0}^\infty$ is i.i.d. as well, in which case $\overline f^t(\mu) = \mathbb{E}[f^t(\mu)]$. Define the benchmark in-hindsight decision $\mu^*$ as a solution to the following static convex program: \vspace{-0.5em} \begin{equation}\label{eq:on_hindsight} \min_{\mu\in\Delta}~~ \sum_{t=0}^{T-1}\overline f^t(\mu)~~s.t.~~\overline{\mathbf{g}}(\mu)\leq0,~~\overline{\mathbf{h}}(\mu) =\mf b, \end{equation} where $\mf b = [b_1,~b_2,~\cdots,~b_M]^{\mkern-1.5mu\mathsf{T}}$ is a vector of constants. At the beginning of each time slot $t$, none of the objective function $f^{t}(\mu)$, the constraint functions $g_{i}^t(\mu)$ or the random vectors $h_j^t$ is known. The decision maker must choose a vector $\mu^t\in\Delta$ before observing these quantities. 
The goal is to make sequential (possibly randomized) decisions so that both the expected regret, defined as $\sum_{t=0}^{T-1}\expect{f^t(\mu^t) - f^t(\mu^*)}$, and the expected constraint violations, defined as $\sum_{t=0}^{T-1}\expect{g_i^t(\mu^t)}$ and $\mathbb E|\sum_{t=0}^{T-1}h_j^t(\mu^t)|$, grow sublinearly with respect to the time horizon $T$. Throughout this paper, we make the following boundedness assumption: \begin{Assumption}[Boundedness of objectives and constraint functions] \label{as:basic}~ \begin{enumerate}[leftmargin=*] \item The objective functions $f^t(\mu)$ and constraint functions $g_i^t(\mu)$ have bounded subgradients on $\Delta$, i.e. there exist absolute constants $D_{1}>0 $ and $D_{2}>0$ such that $\| \nabla f^{t} (\mu)\|_* \leq D_{1}$ and $\sum_{i=1}^L\| \nabla g_{i}^t(\mu) \|_*^2 \leq D_{2}^2$ for all $\mu\in \Delta$ and all $t\in \{0,1,\ldots\}$. \item There exist absolute constants $F,G,H>0$ such that $|f^t(\mu)|\leq F$, $\sum_{i=1}^L|g_i^t(\mu)|^2 \leq G^2$ and $\sum_{j=1}^M\|h_j^t\|_*^2\leq H^2$ for all $\mu\in \Delta$ and all $t\in\{0,1,2,\cdots\}$. \item The Bregman divergence $D(\cdot,\cdot)$ is generated from a distance generating function $\omega(\cdot)$ and is bounded on the set $\Delta$, i.e. there exists a constant $R$ such that $\sup_{x\in\Delta,y\in\Delta^o}D(x,y)\leq R$. \end{enumerate} \end{Assumption} By the strong convexity of the Bregman divergence \eqref{eq:strong-convexity}, we have $\sup_{x\in\Delta,y\in\Delta^o}\|x- y\|^2\leq 2R/\beta$. Note further that the KL divergence does not satisfy Assumption \ref{as:basic}(3), and for this case we develop a separate algorithm in Section \ref{sec:prob-simplex}. \subsection{Primal-dual online mirror descent} We are now in a position to introduce our new online mirror descent method (Algorithm \ref{alg:new-alg}) for stochastic constrained online learning. 
The algorithm computes the next decision $\mu^{t}$ via a proximal mirror map using $\mu^{t-1}$, $f^{t-1}$ and $g^{t-1}_i$, and controls the constraint violations via the dual multipliers $\mf Q(t)$ and $\mf H(t)$. \begin{algorithm} \caption{} \label{alg:new-alg} Let $V, \alpha> 0$ be trade-off parameters. Let $Q_i(t),~H_j(t)$ be sequences of dual multipliers with $Q_i(0) = 0,~H_j(0) = 0,~\forall i,j$. Let $\mu^{0}=\mu^{-1} \in\Delta$. \textbf{For} t = 0 to $T-1$: \begin{enumerate}[leftmargin=15pt] \item Choose $\mu^{t}$ as a solution to the following problem: \vspace{-0.5em} \begin{align} \min_{\mu\in \Delta} \Big\langle V\nabla f^{t-1}(\mu^{t-1}) + \sum_{i=1}^{L} Q_i(t)\nabla g_i^{t-1}(\mu^{t-1}) + \sum_{j=1}^{M} H_j(t)h_j^{t-1}, \mu\Big\rangle + \alpha D(\mu,\mu^{t-1}) \label{eq:mu-update} \end{align} \item Update each dual multiplier $Q_i(t), H_j(t)$ via \begin{align} &Q_{i}(t+1) = \max\left\{Q_{i}(t) + g^{t-1}_{i}(\mu^{t-1}) + \dotp{\nabla g_{i}^{t-1} (\mu^{t-1})}{\mu^t-\mu^{t-1}}, 0\right\}, ~i\in\{1,2,\cdots,L\}\label{eq:Q-update}\\ &H_{j}(t+1) = H_{j}(t) + \dotp{h_{j}^{t-1}}{\mu^t} - b_j, ~j\in\{1,2,\cdots,M\}\label{eq:H-update} \end{align} \item Observe the objective function $f^{t}$ and constraint functions $\{g_i^t\}_{i=1}^L,~\{h_j^t\}_{j=1}^M$. \end{enumerate} \textbf{End for}. \end{algorithm} \vspace{-0.5em} \subsection{Sequential Existence of Lagrange Multipliers (SELM)} In this section, we introduce our Lagrange multiplier condition. A detailed comparison between this condition and other constraint qualification conditions is deferred to the Supplement (Section \ref{sec:constraint-qualification}). We start by defining the partial average function starting from any time slot $t$ as $\overline{f}^{t,k}: = \frac1k\sum_{i=0}^{k-1}\overline{f}^{t+i}$. 
Consider the following optimization problem: \begin{equation}\label{eq:partial-static-prob} \min_{\mu\in\Delta}\overline{f}^{t,k}(\mu)~~s.t.~~\overline{\mathbf{g}}(\mu)\leq0,~~\overline{\mathbf{h}}(\mu) = \mf b, \end{equation} where $\overline{\mathbf{g}}(\mu), ~\overline{\mathbf{h}}(\mu)$ are defined in Section \ref{sec:formulation}. Denote the optimal value of this program by $\overline{f}^{t,k}_*$. Define the Lagrangian dual function of \eqref{eq:partial-static-prob} as \begin{equation}\label{eq:target-dual} q^{(t,k)}(\lambda,\eta):= \min_{\mu\in\Delta} \overline{f}^{t,k}(\mu) + \sum_{i=1}^L\lambda_i\overline{g}_i(\mu) + \sum_{j=1}^M\eta_j(\overline{h}_j(\mu)-b_j), \end{equation} where $\lambda\in\mathbb{R}^L_+$ and $\eta\in\mathbb{R}^M$ are dual variables. For notational simplicity, we always treat them as row vectors. Now, we are ready to state our condition: \begin{Assumption}[Sequential existence of Lagrange multipliers (SELM)]\label{as:selm} For any time slot $t$ and any time period $k$, the set of primal optimal solutions to \eqref{eq:partial-static-prob} is non-empty. Furthermore, the set of dual optimal solutions $\mathcal{V}_{t,k}^* := \text{argmax}_{\lambda\in\mathbb{R}^L_+,~\eta\in\mathbb{R}^M}q^{(t,k)}(\lambda,\eta)$ is non-empty and bounded. Any vector in $\mathcal{V}_{t,k}^*$ is called a Lagrange multiplier associated with \eqref{eq:partial-static-prob}. Furthermore, there exists an absolute constant $B>0$ such that for any $t\in\{0,1,\cdots,T-1\}$ and $k=\sqrt{T}$, the dual optimal set $\mathcal{V}_{t,k}^*$ defined above satisfies $\max_{[\lambda,\eta]\in\mathcal{V}_{t,k}^*}\|[\lambda,\eta]\|_2\leq B$. \end{Assumption} \begin{remark} Note first that SELM reduces to the existence and boundedness of Lagrange multipliers assumption adopted in optimization theory when the objectives are i.i.d. as well. 
In Section \ref{sec:constraint-qualification} of the Supplement, we show that SELM is equivalent to certain constraint qualification conditions and strictly weaker than the Slater condition. In particular, we obtain the following simplifications in special cases: (1) Lemma \ref{lem:slater-selm} shows that the Slater condition implies SELM. (2) Corollary \ref{cor:mfcq-2} implies that when the interior of $\Delta$ is non-empty and there are only equality constraints, the linear independence of $\{\expect{h^t_1},~\expect{h^t_2},~\cdots,~\expect{h^t_M}\}$ is equivalent to SELM. (3) Lemma \ref{cor:mfcq} implies that when $\Delta$ is a probability simplex and there are only equality constraints, the linear independence of $\{\mathbf{1},~\expect{h^t_1},~\expect{h^t_2},~\cdots,~\expect{h^t_M}\}$ is equivalent to SELM. \end{remark} The motivation for SELM is as follows: whenever Lagrange multipliers exist and are bounded, the dual function automatically decays at least linearly in the distance to the set of Lagrange multipliers, namely, it satisfies the following weak error bound condition (EBC). \begin{definition}[Weak error bound condition (EBC)]\label{def:ebc} Let $F(\mathbf{x})$ be a concave function over $\mathbf{x}\in\mathcal{X}$, where $\mathcal{X}$ is closed and convex. Suppose $\Lambda^*:= \argmax_{\mathbf{x}\in\mathcal{X}}F(\mathbf{x})$ is non-empty. The function $F(\mathbf{x})$ satisfies the weak EBC if there exist constants $\ell_0,~c_0>0$ such that for any $\mf x^*\in\Lambda^*$ and any $\mf x\in\mathcal{X}$ satisfying $\text{dist}(\mf x,\Lambda^*)\geq\ell_0$, \begin{equation*} F(\mf x^*)-F(\mf x) \geq c_{0}\cdot \text{dist}(\mf x,\Lambda^*). \end{equation*} \end{definition} \vspace{-5pt} Note that in Definition \ref{def:ebc}, $\Lambda^*$ is a closed convex set. This follows from the fact that $F(\mf x)$ is concave, and thus all of its superlevel sets are closed and convex. The following lemma shows that SELM implies the weak EBC: \begin{lemma}\label{lem:leb} Fix $T\geq 1$. 
Suppose Assumption \ref{as:selm} holds. Then, for any $t\in\{0,1,\cdots,T-1\}$ and $k=\sqrt{T}$, there exist constants $c_0,\ell_0>0$ such that the dual function $-q^{(t,k)}(\lambda,\eta)$ defined in \eqref{eq:target-dual} satisfies the weak EBC with parameters $c_0,\ell_0$. \end{lemma} In the Supplement (Section \ref{sec:selm-ebc}), we compare this weak EBC with the classical EBC in optimization theory and show that the classical EBC implies the weak EBC with explicit constants. \vspace{-10pt} \section{Main results} \vspace{-5pt} In this section, we present our main result for online primal-dual mirror descent. \begin{theorem}\label{main:theorem} Let $\mu^*$ be a solution to the in-hindsight optimization problem \eqref{eq:on_hindsight}. Suppose Assumptions \ref{as:basic} and \ref{as:selm} hold. Let $\overline{c},~\overline{\ell}>0$ be absolute constants such that $c_0\geq\overline{c}$ and $\ell_0\leq\overline{\ell}$ for all $c_0,~\ell_0$ obtained in Lemma \ref{lem:leb} over $t=0,1,2,\cdots, T-1$ and $k=\sqrt{T}$. If we choose $\alpha = T,V=\sqrt{T}$ in Algorithm \ref{alg:new-alg}, then the expected regret and constraint violations satisfy: \begin{align*} &\frac1T\sum_{t=0}^{T-1}\expect{f^t(\mu^t) - f^t(\mu^*)}\leq \frac{C_0^{\prime}}{\sqrt{T}},\\ &\mathbb{E}\Big\Vert\Big[\frac1T\sum_{t=0}^{T-1}\overline{\mathbf{g}}(\mu^t)\Big]_+\Big\Vert_2 \leq \frac{C_1^{\prime}}{\sqrt{T}},~~~ \mathbb{E}\Big\Vert\frac1T\sum_{t=0}^{T-1}\overline{\mf h}(\mu^t) - \mf b\Big\Vert_2\leq \frac{C_2^{\prime}}{\sqrt{T}}, \end{align*} \vspace{-0.1em} where $C_0^{\prime}, C_1^{\prime}, C_{2}^{\prime}$ are constants depending linearly on $D_1^2+D_1+ D_2^2+G^2+H^2+G+H+F$ and independent of $T$. \end{theorem} \vspace{-1em} \subsection{Proof of regret bound}\label{sec:reg-analysis} \vspace{-0.5em} In this section, we present the proof of the regret bound in Theorem \ref{main:theorem}. The proofs of the technical lemmas are deferred to the Supplement (Section \ref{sec:pf-regret}). 
We start with the following key bound on a drift-plus-penalty (DPP) expression: \begin{lemma}\label{lem:strong-convex-queue} Define the drift $\Delta(t):= (\|\mf Q(t+1)\|_2^2-\|\mf Q(t)\|_2^2)/2+(\|\mf H(t+1)\|_2^2-\|\mf H(t)\|_2^2)/2$, and consider the following ``drift-plus-penalty'' (DPP) expression at time $t$: $V\dotp{\nabla f^{t-1}(\mu^{t-1})}{\mu^t-\mu^{t-1}}+\Delta(t) + \alpha D(\mu^t, \mu^{t-1})$. Let $M_0= \frac{4RH^2}{\beta} + G^2+\frac{2RD_2^2}{\beta}$, where $\beta$ is as in \eqref{eq:strong-convexity} (we write $M_0$ to avoid confusion with the number of equality constraints $M$). Then, for any $\mu\in\Delta$, \begin{multline} V\dotp{\nabla f^{t-1}(\mu^{t-1})}{\mu^t-\mu^{t-1}}+\Delta(t) + \alpha D(\mu^t, \mu^{t-1}) \leq V(f^{t-1}(\mu) - f^{t-1}(\mu^{t-1})) \\ + \sum_{i=1}^LQ_i(t)g^{t-1}_{i}(\mu) + \sum_{j=1}^{M} H_j(t)(\dotp{h_j^{t-1}}{\mu}-b_j) + \alpha D(\mu,\mu^{t-1}) - \alpha D(\mu, \mu^t) + M_0 . \label{eq:interim-1} \end{multline} \end{lemma} This lemma is proved via the pushback property of the Bregman divergence (Lemma \ref{lem:strong-convex}). For the DPP expression on the left-hand side, we also have the following lower bound: \begin{lemma}\label{lem:lhs-lower-bound} Algorithm \ref{alg:new-alg} ensures \begin{equation}\label{eq:res-bound} V\dotp{\nabla f^{t-1}(\mu^{t-1})}{\mu^t-\mu^{t-1}} + \alpha D(\mu^t, \mu^{t-1}) \geq - V^{2}D_1^2/2\alpha\beta. \end{equation} \end{lemma} Substituting this bound into \eqref{eq:interim-1}, taking $\mu=\mu^*$ (the solution to the in-hindsight problem \eqref{eq:on_hindsight}), and taking conditional expectations on both sides, we readily get: {\small \begin{multline}\label{eq:roadmap-2} -\frac{V^{2}}{2\alpha\beta}D_1^2 + \expect{\Delta(t)|\mathcal{F}_{t-1}}\leq V\expect{f^{t-1}(\mu^*) - f^{t-1}(\mu^{t-1}) |\mathcal{F}_{t-1}} + \mathbb{E}\Big[\sum_{i=1}^LQ_i(t)g^{t-1}_{i}(\mu^*)\Big|\mathcal{F}_{t-1}\Big] \\ + \mathbb{E}\Big[\sum_{j=1}^{M} H_j(t)(\dotp{h_j^{t-1}}{\mu^*}-b_j)~\Big|\mathcal{F}_{t-1}\Big] + \alpha \expect{D(\mu^*,\mu^{t-1}) - D(\mu^*, \mu^t)~|\mathcal{F}_{t-1}} + M_0 . 
\end{multline} }% \vspace{-0.1em} Note that {\small $$ \mathbb{E}\Big[\sum_{j=1}^{M} H_j(t)(\dotp{h_j^{t-1}}{\mu^*}-b_j)\Big|\mathcal{F}_{t-1}\Big] = \sum_{j=1}^{M} H_j(t)\expect{\dotp{h_j^{t-1}}{\mu^*} - b_j} = 0,$$ $$\mathbb{E}\Big[\sum_{i=1}^LQ_i(t)g^{t-1}_{i}(\mu^*)~\Big|\mathcal{F}_{t-1}\Big] = \sum_{i=1}^LQ_i(t)\expect{g^{t-1}_{i}(\mu^*)}\leq0,$$ }% where, in both displays, the first step follows from the fact that $h^t_j$ and $g_i^t$ are i.i.d. while $H_j(t)$ and $Q_i(t)$ are determined by $\mathcal{F}_{t-1}$, and the second step follows from the fact that $\mu^*$, being a solution to the in-hindsight optimization problem \eqref{eq:on_hindsight}, must be feasible, i.e. $\expect{g^{t-1}_{i}(\mu^*)} \leq 0$ and $\expect{\dotp{h_j^{t-1}}{\mu^*}}=b_j$. Thus, taking the full expectation on both sides of \eqref{eq:roadmap-2} gives \vspace{-0.5em} \begin{multline*} \expect{\Delta(t)} + V\expect{f^{t-1}(\mu^{t-1}) - f^{t-1}(\mu^*)} \leq M_0 + \frac{V^{2}D_1^2}{2\alpha\beta} + \alpha \expect{D(\mu^*,\mu^{t-1}) - D(\mu^*, \mu^t)} . \end{multline*} Taking a telescoping sum on both sides from $t=0$ to $T-1$ and dividing both sides by $TV$, \begin{align*} \frac1T\sum_{t=0}^{T-1}\expect{f^{t-1}(\mu^{t-1}) - f^{t-1}(\mu^*)} \leq \frac{M_0}{V} + \frac{VD_1^2}{2\alpha\beta} + \frac{\alpha}{VT} D(\mu^*,\mu^0), \end{align*} where we use the fact that, since $Q_i(0)=0$ and $H_j(0) = 0$, $\sum_{t=0}^{T-1}\Delta(t) = (\|\mf Q(T)\|_2^2 + \|\mf H(T)\|_2^2)/2\geq0$. Substituting $\alpha = T$, $V = \sqrt{T}$, and $D(\mu^*,\mu^0)\leq R$ yields the desired result with $C_0' = \frac{4RH^2}{\beta } + G^{2} +\frac{2RD_2^2}{\beta} + \frac{D_1^2}{2\beta} + R$. \subsection{Proof of constraint violations}\label{sec:constraint-violation} In this section, we present the proof of the constraint violation bounds in Theorem \ref{main:theorem}. The proofs of the technical lemmas are deferred to the Supplement (Sections \ref{sec:pf-constraint-1}-\ref{sec:pf-constraint-2}). 
First, it is enough to bound the dual multipliers, via the following lemma: \begin{lemma}\label{lem:constraint} The updating rules \eqref{eq:Q-update} and \eqref{eq:H-update} deliver the following constraint violation bounds: { \begin{align*} &\mathbb{E}\Big\Vert\Big[\frac1T\sum_{t=0}^{T-1}\overline{\mathbf{g}}(\mu^t)\Big]_+\Big\Vert_2 \leq \frac{\expect{\|\mf Q(T)\|_2}}{T} + \frac{VD_{1}D_2}{\alpha\beta} + \frac{1}{T}\sum_{t=1}^{T} \frac{D_{2}}{\alpha \beta}\left(D_2\expect{\|\mf Q(t)\|_2} + H\expect{\|\mf H(t)\|_2}\right) \\ &\mathbb{E}\Big\Vert\frac1T\sum_{t=0}^{T-1}\overline{\mf h}(\mu^t)-\mf b\Big\Vert_2 \leq \frac{\expect{\|\mf H(T)\|_2}}{T} + \frac{VD_{1}H}{\alpha\beta}+\frac{1}{T}\sum_{t=1}^{T}\frac{H}{\alpha\beta}\left(D_2\expect{\|\mf Q(t)\|_2} + H\expect{\|\mf H(t)\|_2}\right) \end{align*} }% \end{lemma} To bound $\expect{\|\mf Q(t)\|_2}$ and $\expect{\|\mf H(t)\|_2}$, we have the following lemma: \begin{lemma}\label{lem:dual-bound} Define the constant {$C_{V,\alpha,t_0} := 2\big(\frac{4RH^2}{\beta} + G^{2}+\frac{2RD_2^2}{\beta}+ \frac{V^{2}}{2\alpha\beta}D_1^2 + VF \big)t_0 + 2\big(\frac32G^2 + \frac{2RD_2^2}{\beta} + \frac{8RH^2}{\beta}\big)t_0^{2}+ 2\alpha R.$} Then, for any integer $t_{0}\geq 1$, the $t_0$-step drift satisfies \begin{align}\label{eq:drift-bound-1} &\expect{\|\mf Q(t+t_0)\|_2^2 + \|\mf H(t+t_0)\|_2^2~|\mathcal F_{t-1}} - \|\mf Q(t)\|_2^2 - \|\mf H(t)\|_2^2 \nonumber \\ \leq& 2Vt_0\expect{\left.q^{(t-1,t_0)}\big(\frac{\mf Q(t)}{V},~\frac{\mf H(t)}{V}\big) ~\right|\mathcal{F}_{t-1}} + C_{V,\alpha,t_0}, \end{align} where the dual function $q^{(t-1,t_0)}$ is defined in \eqref{eq:target-dual}. \end{lemma} This bound establishes the relation between the dual multipliers and the dual function. Next, in view of \eqref{eq:drift-bound-1}, we would like to show that $\expect{\left.q^{(t-1,t_0)}\big(\frac{\mf Q(t)}{V},~\frac{\mf H(t)}{V}\big) ~\right|\mathcal{F}_{t-1}}$ is small. 
This is a consequence of Lemma \ref{lem:leb}: whenever $\big(\frac{\mf Q(t)}{V},~\frac{\mf H(t)}{V}\big)$ is far away from the optimal set $\mathcal{V}_{t-1,t_0}^*:=\text{argmax}_{\lambda,\eta}q^{(t-1,t_0)}\big(\lambda,~\eta\big)$, which is non-empty and bounded by Assumption \ref{as:selm}, the quantity $\expect{\left.q^{(t-1,t_0)}\big(\frac{\mf Q(t)}{V},~\frac{\mf H(t)}{V}\big) ~\right|\mathcal{F}_{t-1}}$ becomes negative. In fact, one can prove the following lemma: \begin{lemma}\label{lem:bound-dual-function} { The dual function satisfies the following bound: \[ \expect{q^{(t-1,t_0)}\big(\frac{\mf Q(t)}{V},~\frac{\mf H(t)}{V}\big) ~|\mathcal{F}_{t-1}} \leq F + \overline{\ell}(G+\sqrt{2RH^2/\beta} +\overline{c}) + \overline{c}B -\overline{c}\Big\|\big(\frac{\mf Q(t)}{V},~\frac{\mf H(t)}{V}\big)\Big\|_2, \] where} $B$ is defined in Assumption \ref{as:selm}. \end{lemma} Substituting the above lemma into \eqref{eq:drift-bound-1} and applying a known stochastic drift lemma, one can prove the following bound by setting $t_0=\sqrt{T}$, $V=\sqrt{T}$ and $\alpha = T$: \begin{lemma}\label{lem:q-bound} The quantity $\| \big(\mf Q(t),~\mf H(t)\big) \|_2$ satisfies \begin{equation}\label{eq:Q-bounds} \expect{\Big\|\big(\mf Q(t),~\mf H(t)\big)\Big\|_2} \leq C^{\prime} + C^{\prime\prime} \sqrt{T}, \end{equation} where $C^{\prime} := \frac{2}{\overline{c}}\big(\frac{4RH^2}{\beta} + G^{2}+\frac{2RD_2^2}{\beta}+ \frac{D_1^2}{2\beta}\big)$ and $C^{\prime\prime}:= \frac{2}{\overline{c}}\big(2F + \frac{3}{2}G^2 + \frac{2RD_2^2}{\beta} + \frac{8RH^2}{\beta} + R + \overline{\ell}(G+\sqrt{8RH^2/\beta} +\overline{c}) + \overline{c}B + 2\big(2 (G+\sqrt{2RD_2^2/\beta}) + \sqrt{8RH^2/\beta}\big)^2 \log\big(\frac{8 (2(G+\sqrt{\frac{2RD_2^2}{\beta}}) + \sqrt{\frac{8RH^2}{\beta}})^2}{\overline c^2}\big)\big)$ are absolute constants. \end{lemma} Substituting the bound \eqref{eq:Q-bounds} into Lemma \ref{lem:constraint} with $\alpha= T$ and $V=\sqrt{T}$ gives the final constraint violation bounds. 
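For concreteness, one round of the primal-dual updates of Algorithm \ref{alg:new-alg} can be sketched in NumPy under the Euclidean setup, taking $D(x,y)=\frac12\|x-y\|_2^2$ and, for illustration only, $\Delta$ a box so that the mirror step reduces to a clipped gradient step (this instantiation and all names below are ours, not the paper's):

```python
import numpy as np

def omd_round(mu_prev, grad_f, g_vals, g_grads, h_vecs, b, Q, H, V, alpha,
              lo=0.0, hi=1.0):
    """One round of primal-dual online mirror descent (Euclidean sketch).

    Assumes Delta = [lo, hi]^d and D(x, y) = (1/2)||x - y||_2^2, so the
    mirror step reduces to a projected (clipped) gradient step.
    mu_prev   : previous decision mu^{t-1}                     (shape (d,))
    grad_f    : gradient of f^{t-1} at mu_prev                 (shape (d,))
    g_vals    : values g_i^{t-1}(mu_prev)                      (shape (L,))
    g_grads   : gradients of g_i^{t-1} at mu_prev              (shape (L, d))
    h_vecs, b : vectors h_j^{t-1} (rows) and targets b_j       ((M, d), (M,))
    Q, H      : dual multipliers Q(t) >= 0 and H(t)
    V, alpha  : trade-off parameters (the paper takes V = sqrt(T), alpha = T)
    """
    # Linearized direction: V grad f + sum_i Q_i grad g_i + sum_j H_j h_j
    p = V * grad_f + g_grads.T @ Q + h_vecs.T @ H
    # argmin_{mu in Delta} <p, mu> + alpha D(mu, mu_prev)  =>  clipped step
    mu = np.clip(mu_prev - p / alpha, lo, hi)
    # Dual updates: linearized inequality residuals, plain equality residuals
    Q_new = np.maximum(Q + g_vals + g_grads @ (mu - mu_prev), 0.0)
    H_new = H + h_vecs @ mu - b
    return mu, Q_new, H_new
```

By construction the multipliers $Q_i(t)$ stay entrywise nonnegative while $H_j(t)$ may take either sign, mirroring the inequality/equality split in \eqref{eq:Q-update}--\eqref{eq:H-update}.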
\vspace{-0.7em} \section{The probability simplex case}\label{sec:prob-simplex} \vspace{-5pt} { In this section, we deal with the case where the decision set $\Delta$ is a $d$-dimensional probability simplex with large $d$. While Algorithm \ref{alg:new-alg} can be applied to such problems by choosing $D(\mu, \mu^{t-1})$ to be $\frac12\|\mu-\mu^{t-1}\|_2^{2}$, the constant factors in Theorem \ref{main:theorem} then depend linearly on $d$ through $D_1, D_2,G,H,F$. For mirror descent over a probability simplex, one typically improves the dimension dependence by choosing the Bregman divergence $D(\cdot, \cdot)$ to be the KL divergence. However, the KL divergence violates the third condition in Assumption \ref{as:basic}. We now present an alternative method in Algorithm \ref{alg:prob-simplex} and show that it achieves sublinear regret and constraint violations that depend only logarithmically on $d$.} \vspace{-5pt} \begin{algorithm} \caption{} \label{alg:prob-simplex} Let $V, \alpha> 0$,~$\theta\in[0,1)$ be trade-off parameters. Let $D(\mu_{1}, \mu_{2}) = \sum_{i=1}^{d} \mu_{1}(i) \log \frac{\mu_{1}(i)}{\mu_{2}(i)}$. Let $Q_i(t),~H_j(t)$ be sequences of dual multipliers with $Q_i(0) = 0,~H_j(0) = 0,~\forall i,j$. Let $\mu^0=\mu^{-1} = \frac1d\mathbf{1}$. \textbf{For} t = 0 to $T-1$: \begin{enumerate}[leftmargin=15pt] \item Let $\tilde{\mu}^{t-1} = (1-\theta)\mu^{t-1} + \frac{\theta}{d}\mathbf{1}$. \item Choose $\mu^{t}$ as a solution to the following problem: \vspace{-0.3em} \begin{align} \min_{\mu\in \Delta} \Big\langle V\nabla f^{t-1}(\mu^{t-1}) + \sum_{i=1}^{L} Q_i(t)\nabla g_i^{t-1}(\mu^{t-1}) + \sum_{j=1}^{M} H_j(t)h_j^{t-1}, \mu \Big\rangle + \alpha D(\mu,\tilde{\mu}^{t-1}) \label{eq:mu-update} \end{align} \item Update each dual multiplier $Q_i(t), H_j(t)$ via \eqref{eq:Q-update} and \eqref{eq:H-update}. 
\item Observe the objective function $f^{t}$ and constraint functions $\{g_i^t\}_{i=1}^L,~\{h_j^t\}_{j=1}^M$. \end{enumerate} \textbf{End for}. \end{algorithm} Compared to Algorithm \ref{alg:new-alg}, Algorithm \ref{alg:prob-simplex} uses the KL divergence as the Bregman divergence and introduces a probability mixing step $\tilde{\mu}^{t-1} = (1-\theta)\mu^{t-1} + \frac{\theta}{d}\mathbf{1}$ at each round, which pushes the update away from the boundary. Furthermore, it is known that the problem \eqref{eq:mu-update} admits a closed form solution known as the exponentiated gradient update \citep{Hazan16FoundationTrends}. More specifically, define \vspace{-0.6em} $$\mathbf{p}^{t-1} := \alpha^{-1}\big(V\nabla f^{t-1}(\mu^{t-1}) + \sum_{i=1}^{L} Q_i(t)\nabla g_i^{t-1}(\mu^{t-1}) + \sum_{j=1}^{M} H_j(t)h_j^{t-1}\big).$$ Then, the update $\mu^t$ can simply be written as $\mu^t_i = \frac{\tilde{\mu}^{t-1}_i\exp(-p_i^{t-1})}{\sum_{k=1}^d\tilde{\mu}^{t-1}_k\exp(-p_k^{t-1})},~~i\in\{1,2,\cdots,d\}.$ We have the following performance bound on this algorithm, whose proof is similar to that of Theorem \ref{main:theorem} and is deferred to the Supplement (Section \ref{sec:pf-prob-simplex}): \begin{theorem}\label{main:theorem-simplex} Suppose the first two conditions of Assumption \ref{as:basic} (using $\Vert\cdot\Vert=\Vert\cdot\Vert_{1}$ and $\Vert\cdot\Vert_{\ast}=\Vert\cdot\Vert_{\infty}$) and Assumption \ref{as:selm} hold. Let $\overline{c},~\overline{\ell}>0$ be absolute constants such that $c_0\geq\overline{c}$ and $\ell_0\leq\overline{\ell}$ for all $c_0,~\ell_0$ obtained in Lemma \ref{lem:leb} over $t=0,1,2,\cdots, T-1$ and $k=\sqrt{T}$. Choose $\alpha = T,~V=\sqrt{T}$ and $\theta = 1/T$ in Algorithm \ref{alg:prob-simplex}. 
The expected regret and constraint violations satisfy: \vspace{-0.5em} {\small \begin{align*} &\frac1T\sum_{t=0}^{T-1}\expect{f^t(\mu^t) - f^t(\mu^*)}\leq \frac{\hat{C}_{0}^{\prime}}{\sqrt{T}} +\frac{\hat{C}_{0}^{\prime\prime}\log(d)}{\sqrt{T}}, \\ &\mathbb{E}\left\|\left[\frac1T\sum_{t=0}^{T-1}\overline{\mathbf{g}}(\mu^t)\right]_+\right\|_2 \leq \frac{\hat{C}^{\prime}_{1}}{\sqrt{T}} + \frac{\hat{C}^{\prime\prime}_{1}\log(Td)}{\sqrt{T}},\\ &\mathbb{E}\left\|\frac1T\sum_{t=0}^{T-1}\overline{\mf h}(\mu^t) - \mf b\right\|_2\leq \frac{\hat{C}^{\prime}_{2}}{\sqrt{T}} + \frac{\hat{C}^{\prime\prime}_{2}\log(Td)}{\sqrt{T}}, \end{align*} }% where $\hat{C}_0^{\prime}, \hat{C}_0^{\prime\prime}, \hat{C}_1^{\prime}, \hat{C}_{1}^{\prime\prime}, \hat{C}_{2}^{\prime}, \hat{C}_{2}^{\prime\prime}$ are absolute constants depending linearly on $D_1^2+D_1+ D_2^2+G^2+H^2+G+H+F$ and independent of $d$ and $T$. (Note that $D_{1}, D_{2}, G, H, F$ in Assumption \ref{as:basic} are independent of $d$ when $\Vert\cdot\Vert_{\ast}=\Vert\cdot\Vert_{\infty}$.) \end{theorem} \section{Simulation experiments} We consider the problem of cost minimization under budget pacing constraints in data center service scheduling. More specifically, consider a geographically distributed data center consisting of 5 server clusters serving one stream of incoming jobs arriving at a central controller. Each cluster contains 10 servers. The controller directs the jobs to different clusters, which have different per-unit electricity costs, for processing. In the simulation, we use electricity market price (EMP) data traces from 5 zones of New York ISO open access pricing data (\url{http://www.nyiso.com/}). For example, Fig \ref{fig}(a) depicts the per-5-min EMP data of zone DUNWOD between 05/01/2017 and 05/10/2017. The number of incoming jobs per 5 min is $\lambda(t)$, which is assumed to be Poisson distributed with mean 1000. Each server $k$ can choose a power allocation option $\mu_k^t\in[0,30]$. 
This option determines the following over the 5-min slot: (1) the electricity expenditure of server $k$: $f_k^t(\mu_k^t) = c_k^t\cdot \mu_k^t$, where $c_k^t$ is the per-unit EMP of the zone server $k$ belongs to; (2) the number of jobs served, $g_k^t(\mu_k^t)$, which follows a Pareto distribution (a.k.a. power law; see \citep{gandhi2012sleep}) with mean $8\log(1+4\mu_k^t)$; (3) the internal budget consumption $h_k^t\cdot \mu_k^t$, where $h_k^t$ follows a Pareto distribution with mean 5 units. In a typical online service system such as ads serving, the budget is a measure of internal resources \citep{agarwal2014budget}. The goal is to minimize the total average electricity cost over $T=10000$ slots, i.e. $\sum_{t=1}^T\sum_{k=1}^{50}\expect{c_k^t\cdot \mu_k^t}/T$, subject to the following two requirements: (1) the service rate supports the arrival rate: $\sum_{t=1}^T\sum_{k=1}^{50}\expect{g_k^t(\mu_k^t)}\geq \sum_{t=1}^T\expect{\lambda(t)}$, which is a convex inequality constraint; (2) the internal budget consumption is well-paced, i.e. each cluster consumes a fixed ratio of the total consumed budget in expectation. More specifically, in the simulation, let $\mathcal I_1,\cdots, \mathcal I_5$ be the index sets of the 5 clusters; then, it is required that $\sum_{t=1}^T\sum_{k\in \mathcal I_j}\expect{h_k^t\cdot \mu_k^t} = \beta_j\cdot \sum_{t=1}^T\sum_{k=1}^{50}\expect{h_k^t\cdot \mu_k^t},~~j=1,2,3$, and $\sum_{t=1}^T\sum_{k\in \mathcal I_4\cup\mathcal I_5}\expect{h_k^t\cdot \mu_k^t} = \beta_4\cdot \sum_{t=1}^T\sum_{k=1}^{50}\expect{h_k^t\cdot \mu_k^t}$, where $[\beta_1,~\beta_2,~\beta_3,~\beta_4] = [0.05,~0.10,~0.25,~0.60]$. In Fig \ref{fig}, we compare our proposed algorithm with the best fixed solution in hindsight (i.e. the best fixed power allocation chosen with knowledge of all the data) and the benchmark Reac algorithm \citep{gandhi2012sleep}. 
The Reac algorithm is adapted to our pacing scenario by estimating the number of jobs in the next slot via the average of the past 10 slots and assigning the load according to the pacing ratios. For cluster 4 and cluster 5 (which together take up a ratio of 0.60), the Reac algorithm evenly distributes the workload between the two. Our algorithm achieves an electricity expenditure similar to that of the best fixed solution, which is better than Reac, while keeping the average number of unserved jobs low and achieving fast budget pacing. \begin{figure*}[ht!] \centering \begin{subfigure}[t]{0.46\textwidth} \centering \includegraphics[height=5cm] {market_price.pdf} \caption{} \end{subfigure}% ~ \begin{subfigure}[t]{0.46\textwidth} \centering \includegraphics[height=5cm] {money_spend.pdf} \caption{} \end{subfigure} ~ \begin{subfigure}[t]{0.46\textwidth} \centering \includegraphics[height=5cm] {job_delay.pdf} \caption{} \end{subfigure} ~ \begin{subfigure}[t]{0.46\textwidth} \centering \includegraphics[height=5cm] {budget_pacing.pdf} \caption{} \end{subfigure} \caption{\small{(a) Electricity market prices at zone DUNWOD, New York; (b) average money spent buying electricity; (c) average unserved jobs; (d) average violation of pacing constraints.}} \label{fig} \end{figure*} \section{Conclusions} This paper proposes a new primal-dual online mirror descent framework for the stochastic constrained online learning problem. We introduce a new condition on the sequential existence of Lagrange multipliers, which is shown to be strictly weaker than the Slater condition, and prove that the proposed algorithm enjoys $\mathcal{O}(\sqrt{T})$ expected regret and constraint violations. We also obtain an almost dimension-free result in the special case when the decision set is a probability simplex. \acks{This work is supported in part by grant NSF CCF-1718477.} \bibliographystyle{chicago}
https://arxiv.org/abs/1908.00305
Online Primal-Dual Mirror Descent under Stochastic Constraints
We consider online convex optimization with stochastic constraints where the objective functions are arbitrarily time-varying and the constraint functions are independent and identically distributed (i.i.d.) over time. Both the objective and constraint functions are revealed after the decision is made at each time slot. The best known expected regret for solving such a problem is $\mathcal{O}(\sqrt{T})$, with a coefficient that is polynomial in the dimension of the decision variable and relies on the Slater condition (i.e. the existence of interior point assumption), which is restrictive and in particular precludes treating equality constraints. In this paper, we show that such Slater condition is in fact not needed. We propose a new primal-dual mirror descent algorithm and show that one can attain $\mathcal{O}(\sqrt{T})$ regret and constraint violation under a much weaker Lagrange multiplier assumption, allowing general equality constraints and significantly relaxing the previous Slater conditions. Along the way, for the case where decisions are contained in a probability simplex, we reduce the coefficient to have only a logarithmic dependence on the decision variable dimension. Such a dependence has long been known in the literature on mirror descent but seems new in this new constrained online learning scenario.
https://arxiv.org/abs/1611.03791
Convolution, Fourier analysis, and distributions generated by Riesz bases
In this note we discuss notions of convolutions generated by biorthogonal systems of elements of a Hilbert space. We develop the associated biorthogonal Fourier analysis and the theory of distributions, discuss properties of convolutions and give a number of examples.
\section{Introduction} In this work we introduce a notion of a convolution generated by systems of elements of a Hilbert space ${\mathcal H}$ forming a Riesz basis. Such collections often arise as systems of eigenfunctions of densely defined non-self-adjoint operators acting on ${\mathcal H}$, and a suitable notion of convolution also leads to the development of the associated Fourier analysis. In the case of the eigenfunctions having no zeros the corresponding global theory of pseudo-differential operators has been recently developed in \cite{RT16}. The assumption on eigenfunctions having no zeros has been subsequently removed in \cite{RT16a}, and some applications of such analysis to the wave equation for the Landau Hamiltonian were carried out in \cite{RT16b}, as well as for general operators with discrete spectrum in \cite{RT17c}. The analysis in these papers relied on the spectral properties of a fixed operator acting in ${\mathcal H}=L^{2}(M)$ for a smooth manifold $M$ with or without boundary. In this note we aim at discussing an abstract point of view on convolutions when one is given only a Riesz basis in a Hilbert space, without making additional assumptions on an operator for which it may be a basis of eigenfunctions. Such an abstract point of view has a number of advantages; for example, the question of whether the basis elements (for example, in ${\mathcal H}=L^{2}(M)$) have zeros at some points becomes irrelevant. More specifically, let ${\mathcal H}$ be a separable Hilbert space, and denote by $$ \mathcal U:=\{u_{\xi}|\,\, u_{\xi}\in{\mathcal H}\}_{\xi\in\mathbb N} $$ and $$ \mathcal V:=\{v_{\xi}|\,\, v_{\xi}\in{\mathcal H}\}_{\xi\in\mathbb N} $$ collections of elements of ${\mathcal H}$ parametrised by a discrete set $\mathbb N$. We assume that the system $\mathcal U$ is a Riesz basis of the space ${\mathcal H}$ and that the system $\mathcal V$ is biorthogonal to $\mathcal U$ in ${\mathcal H}$, i.e.
we have the property that $$ (u_{\xi},v_{\eta})_{{\mathcal H}}=\delta_{\xi\eta}, $$ where $\delta_{\xi\eta}$ is the Kronecker delta, equal to $1$ for $\xi=\eta$, and to $0$ otherwise. Then from the classical work of Bari \cite{bari} (see also Gelfand \cite{Gel51}) it follows that the system $\mathcal V$ is also a basis in ${\mathcal H}$. A Riesz basis is characterised by the property that it is the image of an orthonormal basis of ${\mathcal H}$ under an invertible linear transformation. However, since we aim to eventually extend the present constructions beyond the Riesz basis setting, we will try not to make explicit use of this property. The results of this paper have been announced in \cite{KRT17}. The setting of Riesz bases has numerous applications to different problems, see e.g. \cite{Chr01, Chr16}, and in different settings and modifications, see e.g. \cite{BT16, GM06, GP11}, to mention only very few. Decomposition systems of various types and the associated function spaces are also an active area of research, see e.g. \cite{K03,BH06,B07,GJN17}. \smallskip In this paper we define $\mathcal U$-- and $\mathcal V$--convolutions in the following form: \begin{equation}\label{EQ:c1} f\star_{\mathcal U}g:= \sum_{\xi\in\mathbb N}(f, v_\xi) (g, v_\xi) u_{\xi} \end{equation} and \begin{equation}\label{EQ:c2} h\star_{\mathcal V}j:= \sum_{\xi\in\mathbb N}(h, u_\xi) (j, u_\xi) v_{\xi} \end{equation} for appropriate elements $f,g,h,j\in {\mathcal H}$. These convolutions are clearly commutative and associative, and have a number of properties expected from convolutions; most importantly, they are mapped to the product by the naturally defined Fourier transforms associated to $\mathcal U$ and $\mathcal V$.
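Before turning to the summary of results, it may help to see the whole setup in a finite-dimensional toy model (a sketch only, not part of the theory above): take ${\mathcal H}=\mathbb C^n$, let the columns of an invertible matrix $U$ play the role of the Riesz basis $\mathcal U$, and take $V=(U^{-1})^{*}$, so that the biorthogonality $(u_\xi, v_\eta)=\delta_{\xi\eta}$ holds automatically.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Columns of U form a basis of C^n (a finite-dimensional "Riesz basis");
# V = (U^{-1})^H is the biorthogonal system: v_eta^H u_xi = delta_{xi eta}.
U = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
V = np.linalg.inv(U).conj().T

def fourier_U(f):
    # f_hat(xi) = (f, v_xi) = v_xi^H f, i.e. the analysis map V^H f
    return V.conj().T @ f

def conv_U(f, g):
    # f *_U g = sum_xi f_hat(xi) g_hat(xi) u_xi
    return U @ (fourier_U(f) * fourier_U(g))
```

By construction this model convolution is commutative and associative, and `fourier_U(conv_U(f, g))` equals the pointwise product of the transforms, matching \eqref{EQ:c1}.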
Without going too much into detail, let us briefly summarise the results of this paper: \begin{itemize} \item The naturally defined Fourier transforms in ${\mathcal H}$ map convolutions \eqref{EQ:c1} and \eqref{EQ:c2} to the product of Fourier transforms. For example, defining $\widehat{f}(\xi):=(f, v_{\xi})$, we have $\widehat{f\star_{\mathcal U} g}=\widehat{f}\,\widehat{g}.$ Conversely, if a bilinear mapping $K:{\mathcal H}\times{\mathcal H}\to{\mathcal H}$ satisfies $\widehat{K(f,g)}=\widehat{f}\,\widehat{g}$, it must be given by \eqref{EQ:c1}. \item Although the bases $\mathcal U$ and $\mathcal V$ do not have to be orthogonal, there is a Hilbert space $l^{2}_{\mathcal U}$ such that we have the Plancherel identity $(f,g)_{{\mathcal H}}=(\widehat{f},\widehat{g})_{l^{2}_{\mathcal U}}$. \item We discuss more general families of spaces $l^{p}_{\mathcal U}$, $1\leq p\leq\infty$, on the Fourier transform side, giving rise to further Fourier analysis in ${\mathcal H}$. Namely, these spaces satisfy analogues of the usual duality and interpolation relations, as well as Hausdorff-Young inequalities with respect to a corresponding family of subspaces of ${\mathcal H}$. \item The developed biorthogonal Fourier analysis can be embedded in an appropriate theory of distributions realised in suitable rigged Hilbert spaces $(\Phi_{\mathcal U}, {\mathcal H}, \Phi_{\mathcal U}')$ and $(\Phi_{\mathcal V}, {\mathcal H}, \Phi_{\mathcal V}')$, with $\Phi_{\mathcal U}:=\mathcal C^{\infty}_{\mathcal U, \Lambda}$, $\Phi_{\mathcal U}':=\mathcal D'_{\mathcal V, \Lambda}$ and $\Phi_{\mathcal V}:=\mathcal C^{\infty}_{\mathcal V, \Lambda}$, $\Phi_{\mathcal V}':=\mathcal D'_{\mathcal U, \Lambda}$, associated to a fixed spectral set $\Lambda$ satisfying certain natural properties. These triples allow us to extend the notions of $\mathcal U$-- and $\mathcal V$--convolutions and $\mathcal U$-- and $\mathcal V$--Fourier transforms beyond the Hilbert space ${\mathcal H}$.
\item We show how these constructions are related to the spectral decompositions of linear operators in ${\mathcal H}$. In particular, we relate the convolutions to the formulae for their resolvents. \item We discuss several examples and further possible notions of convolutions. \end{itemize} Let us conclude the introduction by giving a concrete example of such a convolution, relating it also to spectral analysis. Let us consider the operator $\mathcal L:{\mathcal H}\to{\mathcal H}$ on the interval $(0, 1)$ given by $$ \mathcal L:= -i\frac{d}{d x}, $$ and let us equip this operator with the boundary condition $h y(0)=y(1)$ for some $h>0$. The operator $\mathcal L$ is not self-adjoint on ${\mathcal H}=L^{2}(0,1)$ for $h\not=1$. The spectral properties of $\mathcal L$ have been thoroughly investigated by, e.g., Titchmarsh \cite{titc} and Cartwright \cite{cart}. In particular, it is known that the collections \begin{equation}\label{EQ:u1} \mathcal U=\{u_{j}(x)=h^{x}e^{ 2\pi i x j },\,\, j\in \mathbb{Z}\} \end{equation} and \begin{equation}\label{EQ:v1} \mathcal V=\{v_{j}(x)=h^{-x} e^{2\pi i x j },\,\, j\in \mathbb{Z}\} \end{equation} are the systems of eigenfunctions of $\mathcal L$ and $\mathcal L^{*}$, respectively, and form Riesz bases in ${\mathcal H}=L^{2}(0, 1).$ In this case the abstract definition of convolution above can be shown (see Proposition \ref{prop:Kc}) to yield the concrete expression $$ (f\star_{\mathcal U} g)(x)=\int^{x}_{0}f(x-t)g(t)dt+\frac{1}{h}\int^{1}_{x}f(1+x-t)g(t)dt, $$ which coincides with the usual convolution for $h=1$, in which case also $\mathcal U=\mathcal V$ is an orthonormal basis in ${\mathcal H}$. Of course, in this example, the main interest for us is the case $h\not=1$ corresponding to biorthogonal bases $\mathcal U$ and $\mathcal V$ in \eqref{EQ:u1} and \eqref{EQ:v1}, respectively.
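The example above is easy to test numerically. The following sketch (assuming NumPy; the value $h=2$ and the grid sizes are arbitrary choices) checks the biorthogonality $(u_j, v_k)_{L^2(0,1)}=\delta_{jk}$ and compares the integral formula for $f\star_{\mathcal U} g$ with the spectral definition $\sum_j a_j b_j u_j$ for finite combinations $f=\sum_j a_j u_j$, $g=\sum_j b_j u_j$.

```python
import numpy as np

h = 2.0
u = lambda j, xx: h**xx * np.exp(2j * np.pi * j * xx)
v = lambda j, xx: h**(-xx) * np.exp(2j * np.pi * j * xx)

def trapz(y, t):
    # composite trapezoidal rule (version-independent helper)
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0

x = np.linspace(0.0, 1.0, 2001)

def inner_uv(j, k):
    # (u_j, v_k) = int_0^1 e^{2 pi i (j-k) x} dx, expected delta_{jk}
    return trapz(u(j, x) * np.conj(v(k, x)), x)

# f, g as finite combinations of the u_j
a = {-1: 1.0, 0: 0.5, 2: -0.25}
b = {0: 2.0, 1: 1.0j}
f = lambda xx: sum(aj * u(j, xx) for j, aj in a.items())
g = lambda xx: sum(bj * u(j, xx) for j, bj in b.items())
spectral = lambda xx: sum(a[j] * b[j] * u(j, xx) for j in a if j in b)

def conv_formula(x0, m=4000):
    # (f *_U g)(x0) via the explicit integral formula above
    t1 = np.linspace(0.0, x0, m)
    t2 = np.linspace(x0, 1.0, m)
    return (trapz(f(x0 - t1) * g(t1), t1)
            + trapz(f(1.0 + x0 - t2) * g(t2), t2) / h)
```

For $f=u_j$, $g=u_k$ the two integrals combine into $u_j(x)\int_0^1 e^{2\pi i t(k-j)}\,dt=\delta_{jk}u_j(x)$, which is exactly the spectral definition; the code checks this identity for generic finite combinations.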
In this paper, to avoid any confusion, we will be using the notation $\mathbb N_{0}=\mathbb N\cup\{0\}.$ \section{Biorthogonal convolutions} In this section we describe the functional analytic setting for investigating convolutions \eqref{EQ:c1} and \eqref{EQ:c2}. Let us take biorthogonal systems $$ \mathcal U:=\{u_{\xi}|\,\, u_{\xi}\in{\mathcal H}\}_{\xi\in\mathbb N} $$ and $$ \mathcal V:=\{v_{\xi}|\,\, v_{\xi}\in{\mathcal H}\}_{\xi\in\mathbb N} $$ in a separable Hilbert space ${\mathcal H}$, where $\mathbb N$ is a discrete set. We assume that $\mathcal U$ (and hence also $\mathcal V$) is a Riesz basis in ${\mathcal H}$, i.e. any element of ${\mathcal H}$ has a unique decomposition with respect to the elements of $\mathcal U$. We note that the basis collections are uniformly bounded in ${\mathcal H}$. Before we proceed with describing a version of the biorthogonal Fourier analysis, let us show that the expressions in \eqref{EQ:c1} and \eqref{EQ:c2} are well defined. \begin{prop}\label{PROP:wd} Let $f\star_{\mathcal U}g$ and $h\star_{\mathcal V}j$ be defined by \eqref{EQ:c1} and \eqref{EQ:c2}, respectively, that is, \begin{equation}\label{EQ:c11} f\star_{\mathcal U}g:= \sum_{\xi\in\mathbb N}(f, v_\xi) (g, v_\xi) u_{\xi} \end{equation} and \begin{equation}\label{EQ:c21} h\star_{\mathcal V}j:= \sum_{\xi\in\mathbb N}(h, u_\xi) (j, u_\xi) v_{\xi}.
\end{equation} Then there exists a constant $M>0$ such that we have \begin{equation}\label{EQ:est1} \|f\star_{\mathcal U}g\|_{{\mathcal H}}\leq M\sup_{\xi\in\mathbb N}\|u_{\xi}\|_{{\mathcal H}}\|f\|_{{\mathcal H}}\|g\|_{{\mathcal H}},\quad \|h\star_{\mathcal V}j\|_{{\mathcal H}}\leq M\sup_{\xi\in\mathbb N}\|v_{\xi}\|_{{\mathcal H}}\|h\|_{{\mathcal H}}\|j\|_{{\mathcal H}}, \end{equation} for all $f,g,h,j\in{\mathcal H}.$ \end{prop} The statement follows from the Cauchy-Schwarz inequality and the following fact: since the systems $\{u_\xi\}$ and $\{v_\xi\}$ are Riesz bases in ${\mathcal H}$, from \cite[Theorem 9]{bari} we have that there are constants $a,A,b,B>0$ such that for arbitrary $g\in{\mathcal H}$ we obtain \begin{equation}\label{LEM: FTl2} a^2\|g\|_{{\mathcal H}}^2 \leq \sum_{\xi\in\mathbb N} |(g,v_{\xi})|^2\leq A^2\|g\|_{{\mathcal H}}^2 \,\,\,\,\,\, \hbox{and} \,\,\,\,\,\, b^2\|g\|_{{\mathcal H}}^2 \leq \sum_{\xi\in\mathbb N} |(g,u_{\xi})|^2\leq B^2\|g\|_{{\mathcal H}}^2. \end{equation} This simply says that the Riesz basis collections form frames in ${\mathcal H}$. From the Riesz basis property it also follows that the families $\mathcal U$ and $\mathcal V$ are uniformly bounded in ${\mathcal H}$, that is, $$ \sup_{\xi\in\mathbb N}\|u_{\xi}\|_{{\mathcal H}}+\sup_{\xi\in\mathbb N}\|v_{\xi}\|_{{\mathcal H}}<\infty. $$ Let us introduce the $\mathcal U$-- and $\mathcal V$--Fourier transforms by the formulas \begin{equation}\label{EQ: FT_u} \mathcal F_{\mathcal U}(f)(\xi):=(f, v_{\xi})=:\widehat{f}(\xi) \end{equation} and \begin{equation}\label{EQ: FT_v} \mathcal F_{\mathcal V}(g)(\xi):=(g, u_{\xi})=:\widehat{g}_{\ast}(\xi), \end{equation} respectively, for all $f, g\in{\mathcal H}$ and for each $\xi\in\mathbb N$. Here $\widehat{g}_{\ast}$ stands for the $\mathcal V$--Fourier transform of the function $g$. Indeed, in general $\widehat{g}_{\ast}\neq\widehat{g}$.
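The difference between $\widehat{g}$ and $\widehat{g}_{\ast}$ is already visible in the interval example from the introduction. Taking $f=u_0$ (so $f(x)=h^x$), one has $\widehat{f}(\xi)=\delta_{0\xi}$, whereas $\widehat{f}_{\ast}(\xi)=(u_0,u_\xi)=\int_0^1 h^{2x}e^{-2\pi i\xi x}\,dx=\frac{h^2-1}{2\ln h-2\pi i\xi}$, which is nonzero for every $\xi$ when $h\neq1$. A quick numerical check (a sketch assuming NumPy; $h=2$ and the grid size are arbitrary choices):

```python
import numpy as np

h = 2.0
x = np.linspace(0.0, 1.0, 2001)

def trapz(y, t):
    # composite trapezoidal rule
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0

u = lambda j, xx: h**xx * np.exp(2j * np.pi * j * xx)
v = lambda j, xx: h**(-xx) * np.exp(2j * np.pi * j * xx)

f = u(0, x)  # f = u_0, i.e. f(x) = h^x

def fhat(xi):        # U-Fourier transform: (f, v_xi)
    return trapz(f * np.conj(v(xi, x)), x)

def fhat_star(xi):   # V-Fourier transform: (f, u_xi)
    return trapz(f * np.conj(u(xi, x)), x)

def fhat_star_exact(xi):
    # closed form of (u_0, u_xi) for integer xi
    return (h**2 - 1.0) / (2.0 * np.log(h) - 2j * np.pi * xi)
```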
Their inverses are given by \begin{equation}\label{EQ: FT_ui} (\mathcal F_{\mathcal U}^{-1}a)(x):=\sum_{\xi\in\mathbb N} a(\xi)u_{\xi} \end{equation} and \begin{equation}\label{EQ: FT_vi} (\mathcal F_{\mathcal V}^{-1}a)(x):=\sum_{\xi\in\mathbb N} a(\xi)v_{\xi}. \end{equation} The Fourier transforms defined in \eqref{EQ: FT_u} and \eqref{EQ: FT_v} are the analysis operators, and the inverse transforms \eqref{EQ: FT_ui} and \eqref{EQ: FT_vi} are the corresponding synthesis operators, see e.g. \cite{K03}. For more information, see e.g. \cite{BH06, B07, GJN17} and references therein. There is a straightforward relation between $\mathcal U$- and $\mathcal V$-convolutions, and the Fourier transforms: \begin{thm}\label{PR: ConvProp} For arbitrary $f, g, h, j\in{\mathcal H}$ we have $$ \widehat{f\star_{\mathcal U} g}=\widehat{f}\,\widehat{g}, \,\,\,\, \widehat{h\star_{\mathcal V} j}_{\ast}=\widehat{h}_{\ast}\,\widehat{j}_{\ast}. $$ Therefore, the convolutions are commutative and associative. Let $K:{\mathcal H}\times{\mathcal H}\to{\mathcal H}$ be a bilinear mapping. If for all $f,g\in{\mathcal H}$, the form $K(f,g)$ satisfies the property \begin{equation}\label{EQ: Bilinear op-2} \widehat{K(f, g)}=\widehat{f} \,\widehat{g} \end{equation} then $K$ is the $\mathcal U$--convolution, i.e. $K(f,g)=f*_{\mathcal U}g$. Similarly, if $K(f,g)$ satisfies the property \begin{equation}\label{EQ: Bilinear op-22} \widehat{K(f, g)}_{*}=\widehat{f}_{*} \,\widehat{g}_{*} \end{equation} then $K$ is the $\mathcal V$--convolution, i.e. $K(f,g)=f*_{\mathcal V}g$. \end{thm} \begin{proof} Direct calculations yield \begin{align*} \mathcal F_{\mathcal U}(f\star_{\mathcal U} g)(\xi)&=(\sum_{\eta\in\mathbb N} \widehat{f}(\eta)\widehat{g}(\eta)u_{\eta}, v_{\xi}) \\ &=\sum_{\eta\in\mathbb N} \widehat{f}(\eta)\widehat{g}(\eta)(u_{\eta}, v_{\xi}) \\ &=\widehat{f}(\xi)\widehat{g}(\xi).
\end{align*} Commutativity follows from the bijectivity of the $\mathcal U$--Fourier transform, which also implies the associativity. This can also be seen from the definition: \begin{align*} ((f\star_{\mathcal U} g) \star_{\mathcal U} h) & = \sum_{\xi\in\mathbb N}(\sum_{\eta\in\mathbb N} \widehat{f}(\eta)\widehat{g}(\eta)u_{\eta}, v_{\xi})\widehat{h}(\xi)u_{\xi} \\ &=\sum_{\xi\in\mathbb N}\widehat{f}(\xi)\widehat{g}(\xi)\widehat{h}(\xi)u_{\xi} \\ &=\sum_{\xi\in\mathbb N}\widehat{f}(\xi)\left[\sum_{\eta\in\mathbb N}\widehat{g}(\eta)\widehat{h}(\eta)(u_{\eta}, v_{\xi})\right]u_{\xi} \\ &=\sum_{\xi\in\mathbb N}\widehat{f}(\xi)(\sum_{\eta\in\mathbb N} \widehat{g}(\eta)\widehat{h}(\eta)u_{\eta}, v_{\xi})u_{\xi} \\ &=(f\star_{\mathcal U} (g \star_{\mathcal U} h)). \end{align*} Next, it is enough to prove that $K$ is the $\mathcal U$-convolution under the assumption \eqref{EQ: Bilinear op-2}. The analogous property for $\mathcal V$-convolutions under the assumption \eqref{EQ: Bilinear op-22} follows by simply replacing $\mathcal U$ by $\mathcal V$ in the part concerning $\mathcal U$-convolutions. Since the property \eqref{EQ: Bilinear op-2} holds for arbitrary $f,g\in{\mathcal H}$ with $K(f,g)\in{\mathcal H}$, we can recover $K(f,g)$ from the inverse $\mathcal U$--Fourier formula $$ K(f,g)=\sum_{\xi\in\mathbb N}\widehat{K(f, g)}(\xi)u_{\xi}=\sum_{\xi\in\mathbb N}\widehat{f}(\xi) \,\widehat{g}(\xi) u_{\xi}. $$ The last expression defines the $\mathcal U$--convolution. \end{proof} \section{Biorthogonal Fourier analysis} \label{SEC:FA} From \eqref{LEM: FTl2} we can conclude that the $\mathcal U$-- and $\mathcal V$--Fourier coefficients of the elements of ${\mathcal H}$ belong to the space of square-summable sequences $l^{2}(\mathbb N)$. However, we note that the Plancherel identity is also valid for suitably defined $l^2$-spaces of Fourier coefficients, see \cite[Proposition 6.1]{RT16}. We explain it now in the present setting.
Indeed, the frame property in \eqref{LEM: FTl2} can be improved to the exact Plancherel formula with a suitable choice of norms. \subsection{Plancherel formula} Let us denote by $$l^{2}_{\mathcal U}=l^2(\mathcal U)$$ the linear space of complex-valued functions $a$ on $\mathbb N$ such that $\mathcal F^{-1}_{\mathcal U}a\in {\mathcal H}$, i.e. if there exists $f\in {\mathcal H}$ such that $\mathcal F_{\mathcal U}f=a$. Then the space of sequences $l^{2}_{\mathcal U}$ is a Hilbert space with the inner product \begin{equation}\label{EQ: InnerProd SpSeq-s} (a,\ b)_{l^{2}_{\mathcal U}}:=\sum_{\xi\in\mathbb N}a(\xi)\ \overline{(\mathcal F_{\mathcal V}\circ\mathcal F^{-1}_{\mathcal U}b)(\xi)}, \end{equation} for arbitrary $a,\,b\in l^{2}_{\mathcal U}$. The reason for this choice of the definition is the following formal calculation: \begin{align}\label{EQ:PL-prelim} \nonumber (a,\ b)_{l^{2}_{\mathcal U}}& =\sum_{\xi\in\mathbb N}a(\xi)\ \overline{(\mathcal F_{\mathcal V}\circ\mathcal F^{-1}_{\mathcal U}b)(\xi)}\\ \nonumber &=\sum\limits_{\xi\in\mathbb N }a(\xi)\overline{\left(\mathcal F^{-1}_{\mathcal U}b, u_{\xi}\right)}\\ \nonumber &=\left(\left[\sum\limits_{\xi\in\mathbb N}a(\xi)u_{\xi}\right], \mathcal F^{-1}_{\mathcal U}b\right)\\ \nonumber &=(\mathcal F^{-1}_{\mathcal U}a,\,\mathcal F^{-1}_{\mathcal U}b), \end{align} which implies the Hilbert space properties of the space of sequences $l^{2}_{\mathcal U}$. The norm of $l^{2}_{\mathcal U}$ is then given by the formula \begin{equation}\label{EQ:l2norm} \|a\|_{l^{2}_{\mathcal U}}=\left(\sum_{\xi\in\mathbb N}a(\xi)\ \overline{(\mathcal F_{\mathcal V}\circ\mathcal F^{-1}_{\mathcal U}a)(\xi)}\right)^{1/2}, \quad \textrm{ for all } \; a\in l^{2}_{\mathcal U}. \end{equation} We note that individual terms in this sum may be complex-valued but the whole sum is real and non-negative. 
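The last remark, that the individual terms of \eqref{EQ:l2norm} may be complex while the sum is real and non-negative, can be seen concretely in a finite-dimensional model (a sketch only: ${\mathcal H}=\mathbb C^n$, the columns of an invertible matrix $U$ as the basis, $V=(U^{-1})^{*}$). There $\mathcal F^{-1}_{\mathcal U}a=Ua$ and $\mathcal F_{\mathcal V}g=U^{*}g$, so the sum in \eqref{EQ:l2norm} becomes $\sum_\xi a(\xi)\overline{(U^{*}Ua)(\xi)}=\|Ua\|^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

U = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Individual terms of the l^2_U norm of a: a(xi) * conj((U^H U a)(xi))
terms = a * np.conj(U.conj().T @ U @ a)
norm_sq = terms.sum()  # equals ||U a||^2, hence real and non-negative
```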
Analogously, we introduce the Hilbert space $$l^{2}_{\mathcal V}=l^{2}(\mathcal V)$$ as the space of functions $a$ on $\mathbb N$ such that $\mathcal F^{-1}_{\mathcal V}a\in {\mathcal H}$, with the inner product \begin{equation} \label{EQ: InnerProd SpSeq-s_2} (a,\ b)_{l^{2}_{\mathcal V}}:=\sum_{\xi\in\mathbb N}a(\xi)\ \overline{(\mathcal F_{\mathcal U}\circ\mathcal F^{-1}_{\mathcal V}b)(\xi)}, \end{equation} for arbitrary $a,\,b\in l^{2}_{\mathcal V}$. The norm of $l^{2}_{\mathcal V}$ is given by the formula $$ \|a\|_{l^{2}_{\mathcal V}}=\left(\sum_{\xi\in\mathbb N}a(\xi)\ \overline{(\mathcal F_{\mathcal U}\circ\mathcal F^{-1}_{\mathcal V}a)(\xi)}\right)^{1/2} $$ for all $a\in l^{2}_{\mathcal V}$. The spaces of sequences $l^{2}_{\mathcal U}$ and $l^{2}_{\mathcal V}$ are thus generated by the biorthogonal systems $\{u_{\xi}\}_{\xi\in\mathbb N}$ and $\{v_{\xi}\}_{\xi\in\mathbb N}$. Since a Riesz basis is the image of an orthonormal basis under an invertible linear transformation, the spaces coincide as sets: $l^{2}_{\mathcal U}=l^{2}_{\mathcal V}=l^{2}(\mathbb N)$; of course, the special choice of their norms is the important ingredient in their definition. Indeed, the reason for their definition in the above forms becomes clear in view of the following Plancherel identity: \begin{thm} {\rm(Plancherel's identity)}\label{PlanchId} If $f,\,g\in {\mathcal H}$ then $\widehat{f},\,\widehat{g}\in l^{2}_{\mathcal U}, \,\,\, \widehat{f}_{\ast},\, \widehat{g}_{\ast}\in l^{2}_{\mathcal V}$, and the inner products {\rm(\ref{EQ: InnerProd SpSeq-s}), (\ref{EQ: InnerProd SpSeq-s_2})} take the form $$ (\widehat{f},\ \widehat{g})_{l^{2}_{\mathcal U}}=\sum_{\xi\in\mathbb N}\widehat{f}(\xi)\ \overline{\widehat{g}_{\ast}(\xi)} $$ and $$ (\widehat{f}_{\ast},\ \widehat{g}_{\ast})_{l^{2}_{\mathcal V}}=\sum_{\xi\in\mathbb N}\widehat{f}_{\ast}(\xi)\ \overline{\widehat{g}(\xi)}, $$ respectively.
In particular, we have $$ \overline{(\widehat{f},\ \widehat{g})_{l^{2}_{\mathcal U}}}= (\widehat{g}_{\ast},\ \widehat{f}_{\ast})_{l^{2}_{\mathcal V}}. $$ The Parseval identity takes the form \begin{equation}\label{Parseval} (f,g)_{{\mathcal H}}=(\widehat{f},\widehat{g})_{l^{2}_{\mathcal U}}=\sum_{\xi\in\mathbb N}\widehat{f}(\xi)\ \overline{\widehat{g}_{\ast}(\xi)}. \end{equation} Furthermore, for any $f\in {\mathcal H}$, we have $\widehat{f}\in l^{2}_{\mathcal U}$, $\widehat{f}_{\ast}\in l^{2}_{\mathcal V}$, and \begin{equation} \label{Planch} \|f\|_{{\mathcal H}}=\|\widehat{f}\|_{l^{2}_{\mathcal U}}=\|\widehat{f}_{\ast}\|_{l^{2}_{\mathcal V}}. \end{equation} \end{thm} \begin{proof} By the definition we get \begin{align*} (\mathcal F_{\mathcal V}\circ\mathcal F^{-1}_{\mathcal U}\widehat{g})(\xi)=\left(\mathcal F_{\mathcal V}g\right)(\xi)=\widehat{g}_{\ast}(\xi) \end{align*} and \begin{align*} (\mathcal F_{\mathcal U}\circ\mathcal F^{-1}_{\mathcal V}\widehat{g}_{\ast})(\xi)=\left(\mathcal F_{\mathcal U}g\right)(\xi)=\widehat{g}(\xi). \end{align*} Hence it follows that $$ (\widehat{f},\ \widehat{g})_{l^{2}_{\mathcal U}}=\sum_{\xi\in\mathbb N}\widehat{f}(\xi)\ \overline{(\mathcal F_{\mathcal V}\circ\mathcal F^{-1}_{\mathcal U}\widehat{g})(\xi)}=\sum_{\xi\in\mathbb N}\widehat{f}(\xi)\ \overline{\widehat{g}_{\ast}(\xi)} $$ and $$ (\widehat{f}_{\ast},\ \widehat{g}_{\ast})_{l^{2}_{\mathcal V}}=\sum_{\xi\in\mathbb N}\widehat{f}_{\ast}(\xi)\ \overline{(\mathcal F_{\mathcal U}\circ\mathcal F^{-1}_{\mathcal V}\widehat{g}_{\ast})(\xi)}=\sum_{\xi\in\mathbb N}\widehat{f}_{\ast}(\xi)\ \overline{\widehat{g}(\xi)}. 
$$ To show Parseval's identity \eqref{Parseval}, using these properties and the biorthogonality of $u_\xi$'s to $v_\eta$'s, we can write \begin{multline*} (f,g)=\left(\sum_{\xi\in\mathbb N}\widehat{f}(\xi)u_{\xi} \ , \ \sum_{\eta\in\mathbb N}\widehat{g}_{\ast}(\eta)v_{\eta}\right)\\ =\sum_{\xi\in\mathbb N}\sum_{\eta\in\mathbb N}\widehat{f}(\xi)\overline{\widehat{g}_{\ast}(\eta)}\left(u_{\xi}, \ v_{\eta}\right) =\sum_{\xi\in\mathbb N}\widehat{f}(\xi)\overline{\widehat{g}_{\ast}(\xi)}=(\widehat{f},\widehat{g})_{l^{2}_{\mathcal U}}, \end{multline*} proving \eqref{Parseval}. Taking $f=g$, we get \begin{equation*} \|f\|_{{\mathcal H}}^{2}=(f,f)= \sum_{\xi\in\mathbb N}\widehat{f}(\xi)\overline{\widehat{f}_{\ast}(\xi)}=(\widehat{f},\widehat{f})_{l^{2}_{\mathcal U}}=\|\widehat{f}\|_{l^{2}_{\mathcal U}}^{2}, \end{equation*} proving the first equality in \eqref{Planch}. Then, by checking that \begin{align*} (f,f)=\overline{(f,f)}&=\sum_{\xi\in\mathbb N} \overline{\widehat{f}(\xi)}\widehat{f}_{\ast}(\xi)=\sum_{\xi\in\mathbb N} \widehat{f}_{\ast}(\xi)\overline{\widehat{f}(\xi)}=(\widehat{f}_{\ast},\widehat{f}_{\ast})_{l^{2}_{\mathcal V}} =\|\widehat{f}_{\ast}\|_{l^{2}_{\mathcal V}}^{2}, \end{align*} the proofs of \eqref{Planch} and of Theorem \ref{PlanchId} are complete. \end{proof} \subsection{Hausdorff-Young inequality} Now, we introduce a set of Banach spaces $\{{\mathcal H}^{p}\}_{1\leq p\leq\infty}$ with the norms $\|\cdot\|_{p}$ such that $$ {\mathcal H}^{p}\subseteq{\mathcal H} $$ and with the property \begin{equation}\label{EQ:Holder} |(x, y)_{{\mathcal H}}|\leq\|x\|_{{\mathcal H}^{p}} \|y\|_{{\mathcal H}^{q}} \end{equation} for all $1\leq p\leq\infty$, where $\frac1p+\frac1q=1$. 
We assume that ${\mathcal H}^{2}={\mathcal H}$, and that the spaces ${\mathcal H}^{p}$ have the following real interpolation properties: $$ ({\mathcal H}^{1}, {\mathcal H}^{2})_{\theta,p}={\mathcal H}^{p}, \; 0<\theta<1,\; \frac1p=1-\frac{\theta}{2}, $$ and $$ ({\mathcal H}^{2}, {\mathcal H}^{\infty})_{\theta,p}={\mathcal H}^{p}, \; 0<\theta<1,\; \frac1p=\frac{1-\theta}{2}. $$ We also assume that $\mathcal U\subset{\mathcal H}^{p}$ and $\mathcal V\subset{\mathcal H}^{p}$ for all $p\in[1, \infty]$. If ${\mathcal H}=L^{2}(\Omega)$ for some $\Omega$, we could take ${\mathcal H}^{p}=L^{2}(\Omega)\cap L^{p}(\Omega)$. If ${\mathcal H}=S_{2}(\mathcal K)$ is the Hilbert space of Hilbert-Schmidt operators on a Hilbert space $\mathcal K$, then we can take ${\mathcal H}^{p}=S_{2}(\mathcal K)\cap S_{p}(\mathcal K),$ where $S_{p}(\mathcal K)$ stands for the space of $p$-Schatten operators on $\mathcal K.$ Below we introduce the $p$-Lebesgue versions of the spaces of Fourier coefficients. Here the classical $l^p$ spaces on $\mathbb N$ are extended in such a way that they are associated to the given biorthogonal systems. \begin{defi} Let us define the spaces $l^{p}_{\mathcal U}=l^{p}(\mathcal U)$ as the spaces of all $a:\mathbb N\to\mathbb C$ such that \begin{equation}\label{EQ:norm1} \|a\|_{l^{p}(\mathcal U)}:=\left(\sum_{\xi\in\mathbb N}| a(\xi)|^{p} \|u_{\xi}\|^{2-p}_{{\mathcal H}^{\infty}} \right)^{1/p}<\infty,\quad \textrm{ for }\; 1\leq p\leq2, \end{equation} and \begin{equation}\label{EQ:norm2} \|a\|_{l^{p}(\mathcal U)}:=\left(\sum_{\xi\in\mathbb N}| a(\xi)|^{p} \|v_{\xi}\|^{2-p}_{{\mathcal H}^{\infty}} \right)^{1/p}<\infty,\quad \textrm{ for }\; 2\leq p<\infty, \end{equation} and, for $p=\infty$, $$ \|a\|_{l^{\infty}(\mathcal U)}:=\sup_{\xi\in\mathbb N}\left( |a(\xi)|\cdot \|v_{\xi}\|^{-1}_{{\mathcal H}^{\infty}}\right)<\infty.
$$ \end{defi} Here, without loss of generality, we can assume that $u_{\xi}\not=0$ and $v_{\xi}\not=0$ for all $\xi\in\mathbb N$, so that the above spaces are well-defined. Analogously, we introduce spaces $l^{p}_{\mathcal V}=l^{p}(\mathcal V)$ as the spaces of all $b:\mathbb N\to\mathbb C$ such that $$ \|b\|_{l^{p}(\mathcal V)}=\left(\sum_{\xi\in\mathbb N}| b(\xi)|^{p} \|v_{\xi}\|^{2-p}_{{\mathcal H}^{\infty}} \right)^{1/p}<\infty,\quad \textrm{ for }\; 1\leq p\leq2, $$ $$ \|b\|_{l^{p}(\mathcal V)}=\left(\sum_{\xi\in\mathbb N}| b(\xi)|^{p} \|u_{\xi}\|^{2-p}_{{\mathcal H}^{\infty}} \right)^{1/p}<\infty,\quad \textrm{ for }\; 2\leq p<\infty, $$ $$ \|b\|_{l^{\infty}(\mathcal V)}=\sup_{\xi\in\mathbb N}\left(|b(\xi)|\cdot \|u_{\xi}\|^{-1}_{{\mathcal H}^{\infty}}\right). $$ Now, we recall a theorem on the interpolation of weighted spaces from Bergh and L\"ofstr\"om \cite[Theorem 5.5.1]{Bergh-Lofstrom:BOOK-Interpolation-spaces}. Then we formulate some basic properties of $l^{p}(\mathcal U)$. \begin{thm}[Interpolation of weighted spaces] \label{TH: IWS} Let us write $d\mu_{0}(x)=\omega_{0}(x)d\mu(x),$ $d\mu_{1}(x)=\omega_{1}(x)d\mu(x),$ and write $L^{p}(\omega)=L^{p}(\omega d\mu)$ for the weight $\omega$. Suppose that $0<p_{0}, p_{1}<\infty$. Then $$ (L^{p_{0}}(\omega_{0}), L^{p_{1}}(\omega_{1}))_{\theta, p}=L^{p}(\omega), $$ where $0<\theta<1$, $\frac{1}{p}=\frac{1-\theta}{p_{0}}+\frac{\theta}{p_{1}}$, and $\omega=\omega_{0}^{\frac{p(1-\theta)}{p_{0}}}\omega_{1}^{\frac{p\theta}{p_{1}}}$. \end{thm} From this we obtain the following property: \begin{cor}[Interpolation of $l^{p}(\mathcal U)$ and $l^{p}(\mathcal V)$] \label{COR:Interp} For $1\leq p\leq2$, we obtain $$ (l^{1}(\mathcal U), l^{2}(\mathcal U))_{\theta,p}=l^{p}(\mathcal U), $$ $$ (l^{1}(\mathcal V), l^{2}(\mathcal V))_{\theta,p}=l^{p}(\mathcal V), $$ where $0<\theta<1$ and $p=\frac{2}{2-\theta}$. \end{cor} Using Theorem \ref{TH: IWS} and Corollary \ref{COR:Interp} we get the following Hausdorff-Young inequality. 
\begin{thm}[Hausdorff-Young inequality] \label{TH: HY} Assume that $1\leq p\leq2$ and $\frac{1}{p}+\frac{1}{p'}=1$. Then there exists a constant $C_{p}\geq 1$ such that \begin{equation}\label{EQ:HY} \|\widehat{f}\|_{l^{p'}(\mathcal U)}\leq C_{p}\|f\|_{{\mathcal H}^{p}}\quad \textrm{ and }\quad \|\mathcal F_{\mathcal U}^{-1}a\|_{{\mathcal H}^{p'}}\leq C_{p}\|a\|_{l^{p}(\mathcal U)} \end{equation} for all $f\in {\mathcal H}^{p}$ and $a\in l^{p}(\mathcal U)$. Similarly, for all $b\in l^{p}(\mathcal V)$ we obtain \begin{equation}\label{EQ:HYast} \|\widehat{f}_*\|_{l^{p'}(\mathcal V)}\leq C_{p}\|f\|_{{\mathcal H}^{p}}\quad \textrm{ and } \quad \|\mathcal F_{\mathcal V}^{-1}b\|_{{\mathcal H}^{p'}}\leq C_{p}\|b\|_{l^{p}(\mathcal V)}. \end{equation} \end{thm} \begin{proof} It is sufficient to prove only \eqref{EQ:HY} since \eqref{EQ:HYast} is similar. Note that \eqref{EQ:HY} would follow by interpolation (see e.g. Bergh and L\"ofstr\"om \cite[Corollary 5.5.4]{Bergh-Lofstrom:BOOK-Interpolation-spaces}) from the ${\mathcal H}^{1}\rightarrow l^{\infty}(\mathcal U)$ and $l^{1}(\mathcal U)\rightarrow {\mathcal H}^{\infty}$ boundedness, in view of the Plancherel identity in Theorem \ref{PlanchId}. Therefore, it suffices to consider $p=1$. Then from \eqref{EQ:Holder} we have $$ |\widehat{f}(\xi)|\leq\|{v_{\xi}}\|_{{\mathcal H}^{\infty}}\|f\|_{{\mathcal H}^{1}}, $$ and hence $$ \|\widehat{f}\|_{l^{\infty}(\mathcal U)}=\sup_{\xi\in\mathbb N}|\widehat{f}(\xi)| \|v_{\xi}\|^{-1}_{{\mathcal H}^{\infty}}\leq\|f\|_{{\mathcal H}^{1}}. $$ The last estimate gives the first inequality in \eqref{EQ:HY} for $p=1$. For the second inequality, using $$(\mathcal F_{\mathcal U}^{-1}a)=\sum\limits_{\xi\in\mathbb N}a(\xi)u_{\xi}$$ we obtain $$ \|\mathcal F_{\mathcal U}^{-1}a\|_{{\mathcal H}^{\infty}}\leq\sum\limits_{\xi\in\mathbb N}|a(\xi)|\|u_{\xi}\|_{{\mathcal H}^{\infty}} =\|a\|_{l^{1}(\mathcal U)}, $$ in view of the definition of $l^{1}(\mathcal U)$, which gives \eqref{EQ:HY} in the case $p=1$.
The proof is complete. \end{proof} Let us establish the duality between the spaces $l^{p}(\mathcal U)$ and $l^{q}(\mathcal V)$: \begin{thm}[Duality of $l^{p}(\mathcal U)$ and $l^{q}(\mathcal V)$] \label{TH:Duality lp} Let $1\leq p<\infty$ and $\frac{1}{p}+\frac{1}{q}=1$. Then $$\left(l^{p}(\mathcal U)\right)'=l^{q}(\mathcal V) \quad \textrm{ and }\quad \left(l^{p}(\mathcal V)\right)'=l^{q}(\mathcal U).$$ \end{thm} \begin{proof} The proof is standard. Nevertheless, we provide some details for clarity. The duality is given by $$ (\sigma_{1}, \sigma_{2})=\sum\limits_{\xi\in\mathbb N }\sigma_{1}(\xi){\sigma_{2}(\xi)} $$ for $\sigma_{1}\in l^{p}(\mathcal U)$ and $\sigma_{2}\in l^{q}(\mathcal V)$. Let $1<p\leq2$. Then, if $\sigma_{1}\in l^{p}(\mathcal U)$ and $\sigma_{2}\in l^{q}(\mathcal V)$, we obtain \begin{align*} |(\sigma_{1}, \sigma_{2})|&= \left|\sum_{\xi\in\mathbb N} \sigma_{1}(\xi)\sigma_{2}(\xi)\right|\\ &=\left|\sum_{\xi\in\mathbb N} \sigma_{1}(\xi)\|u_{\xi}\|_{{\mathcal H}^{\infty}}^{\frac{2}{p}-1}\|u_{\xi}\|_{{\mathcal H}^{\infty}}^{-(\frac{2}{p}-1)}\sigma_{2}(\xi)\right|\\ &\leq\left(\sum_{\xi\in\mathbb N} |\sigma_{1}(\xi)|^{p}\|u_{\xi}\|_{{\mathcal H}^{\infty}}^{p(\frac{2}{p}-1)}\right)^{\frac{1}{p}}\left(\sum_{\xi\in\mathbb N} |\sigma_{2}(\xi)|^{q}\|u_{\xi}\|_{{\mathcal H}^{\infty}}^{-q(\frac{2}{p}-1)}\right)^{\frac{1}{q}}\\ &=\|\sigma_{1}\|_{l^{p}(\mathcal U)}\|\sigma_{2}\|_{l^{q}(\mathcal V)}, \end{align*} where we used in the last line that $2\leq q<\infty$ and that $\frac{2}{p}-1=1-\frac{2}{q}$. Now, let $2<p<\infty$.
If $\sigma_{1}\in l^{p}(\mathcal U)$ and $\sigma_{2}\in l^{q}(\mathcal V)$, we get \begin{align*} |(\sigma_{1}, \sigma_{2})|&= \left|\sum_{\xi\in\mathbb N} \sigma_{1}(\xi)\sigma_{2}(\xi)\right|\\ &=\left|\sum_{\xi\in\mathbb N} \sigma_{1}(\xi)\|v_{\xi}\|_{{\mathcal H}^{\infty}}^{\frac{2}{p}-1}\|v_{\xi}\|_{{\mathcal H}^{\infty}}^{-(\frac{2}{p}-1)}\sigma_{2}(\xi)\right|\\ &\leq\left(\sum_{\xi\in\mathbb N} |\sigma_{1}(\xi)|^{p}\|v_{\xi}\|_{{\mathcal H}^{\infty}}^{p(\frac{2}{p}-1)}\right)^{\frac{1}{p}}\left(\sum_{\xi\in\mathbb N} |\sigma_{2}(\xi)|^{q}\|v_{\xi}\|_{{\mathcal H}^{\infty}}^{-q(\frac{2}{p}-1)}\right)^{\frac{1}{q}}\\ &=\|\sigma_{1}\|_{l^{p}(\mathcal U)}\|\sigma_{2}\|_{l^{q}(\mathcal V)}. \end{align*} Finally, let $p=1$. Then we have \begin{align*} |(\sigma_{1}, \sigma_{2})|&= \left|\sum_{\xi\in\mathbb N} \sigma_{1}(\xi)\sigma_{2}(\xi)\right|\\ &=\left|\sum_{\xi\in\mathbb N} \sigma_{1}(\xi)\|u_{\xi}\|_{{\mathcal H}^{\infty}}\|u_{\xi}\|_{{\mathcal H}^{\infty}}^{-1}\sigma_{2}(\xi)\right|\\ &\leq\left(\sum_{\xi\in\mathbb N} |\sigma_{1}(\xi)|\,\|u_{\xi}\|_{{\mathcal H}^{\infty}}\right)\sup_{\xi\in\mathbb N}|\sigma_{2}(\xi)|\,\|u_{\xi}\|^{-1}_{{\mathcal H}^{\infty}}\\ &=\|\sigma_{1}\|_{l^{1}(\mathcal U)}\|\sigma_{2}\|_{l^{\infty}(\mathcal V)}. \end{align*} The duality $\left(l^{p}(\mathcal V)\right)'=l^{q}(\mathcal U)$ is proved in a similar way. \end{proof} \section{Rigged Hilbert spaces} In this section we will investigate a rigged structure of the Hilbert space ${\mathcal H}$. In particular, we will construct a (Gelfand) triple $(\Phi, {\mathcal H}, \Phi')$ with the inclusion property $$ \Phi\subset{\mathcal H}\subset\Phi', $$ where the role of $\Phi$ will be played by the so-called `spaces of test functions' $\mathcal C^{\infty}_{\mathcal U, \Lambda}$ and $\mathcal C^{\infty}_{\mathcal V, \Lambda}$ generated by the systems $\mathcal U$ and $\mathcal V$, respectively, and by some sequence $\Lambda$ of complex numbers.
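To fix ideas, we point out the simplest model case to which the construction below applies: for ${\mathcal H}=L^{2}(0,1)$ with $\mathcal U=\mathcal V=\{e^{2\pi i jx}\}_{j\in\mathbb Z}$ and $\Lambda=\{(2\pi j)^{2}\}_{j\in\mathbb Z}$ (so that the associated operator is $-\frac{d^{2}}{dx^{2}}$ with periodic boundary conditions), one recovers the classical triple $$ C^{\infty}(\mathbb T)\subset L^{2}(\mathbb T)\subset \mathcal D'(\mathbb T) $$ of smooth functions, square integrable functions and distributions on the torus $\mathbb T\simeq[0,1)$.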
For this aim, let us fix a sequence $\Lambda:=\{\lambda_{\xi}\}_{\xi\in\mathbb N}$ of complex numbers such that \begin{equation}\label{EQ:asl} \sum_{\xi\in\mathbb N}(1+|\lambda_{\xi}|)^{-s_{0}}<\infty \end{equation} for some $s_{0}>0$. In fact, we build two triples, namely, $(\Phi_{\mathcal U}, {\mathcal H}, \Phi_{\mathcal U}')$ and $(\Phi_{\mathcal V}, {\mathcal H}, \Phi_{\mathcal V}')$, with $\Phi_{\mathcal U}:=\mathcal C^{\infty}_{\mathcal U, \Lambda}$, $\Phi_{\mathcal U}':=\mathcal D'_{\mathcal V, \Lambda}$ and $\Phi_{\mathcal V}:=\mathcal C^{\infty}_{\mathcal V, \Lambda}$, $\Phi_{\mathcal V}':=\mathcal D'_{\mathcal U, \Lambda}$. These triples allow us to extend the notions of $\mathcal U$-- and $\mathcal V$--convolutions and $\mathcal U$-- and $\mathcal V$--Fourier transforms outside of the Hilbert space ${\mathcal H}$. \begin{defi}\label{DEF:L} We associate to the pair $(\mathcal U, \Lambda)$ a linear operator $\mathcal L:{\mathcal H}\to{\mathcal H}$ by the formula \begin{equation}\label{EQ-1.1} \mathcal L f:=\sum_{\xi\in\mathbb N}\lambda_{\xi}(f, v_{\xi}) u_{\xi}, \end{equation} for those $f\in{\mathcal H}$ for which the series converges in ${\mathcal H}$. Then $\mathcal L$ is densely defined since $\mathcal L u_{\xi}=\lambda_{\xi}u_{\xi}$ for all $\xi\in\mathbb N$, and $\mathcal U$ is a basis in ${\mathcal H}$. We denote by $\mathrm{Dom}(\mathcal L)$ the domain of the operator $\mathcal L$, so that we have ${\rm Span}\,(\mathcal U)\subset \mathrm{Dom}(\mathcal L)\subset{\mathcal H}$. We call $\mathcal L$ the operator associated to the pair $(\mathcal U, \Lambda)$. Operators defined as in \eqref{EQ-1.1} have also been studied in \cite{BIT14}.
\end{defi} We note that this construction goes in the direction opposite to the investigations devoted to the development of the global theory of pseudo-differential operators associated to a fixed operator, as in the papers \cite{DR14, DRT16, RT16, RT16a, RT16b}, where one is given an operator $\mathcal L$ acting in ${\mathcal H}$ with the system of eigenfunctions $\mathcal U$ and eigenvalues $\Lambda$. In that case one can `control' only one parameter, namely the operator $\mathcal L$. In the present (more abstract) point of view we have two parameters to control: the system $\mathcal U$ and the sequence of numbers $\Lambda$. \smallskip In a similar way to Definition \ref{DEF:L}, we define the operator ${\mathcal L}^{\ast}:{\mathcal H}\to{\mathcal H}$ by $$ {\mathcal L}^{\ast} g:=\sum_{\xi\in\mathbb N}\overline{\lambda_{\xi}}(g, u_{\xi}) v_{\xi}, $$ for those $g\in{\mathcal H}$ for which the series converges in ${\mathcal H}$. Then ${\mathcal L}^{\ast}$ is densely defined since ${\mathcal L}^{\ast} v_{\xi}=\overline{\lambda_{\xi}}v_{\xi}$ and $\mathcal V$ is a basis in ${\mathcal H}$, and ${\rm Span}\,(\mathcal V)\subset \mathrm{Dom}({\mathcal L}^{\ast})\subset{\mathcal H}$. One readily checks that we have $$(\mathcal L f,g)_{{\mathcal H}}=(f,\mathcal L^{*}g)_{{\mathcal H}}=\sum_{\xi\in\mathbb N}\lambda_{\xi} (f,v_{\xi})\overline{(g,u_{\xi})} $$ on their domains.
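For instance, taking $f=u_{\eta}$ and $g=v_{\zeta}$ and using the biorthogonality $(u_{\xi}, v_{\eta})_{{\mathcal H}}=\delta_{\xi\eta}$, all the expressions above reduce to the same quantity, $$ (\mathcal L u_{\eta}, v_{\zeta})_{{\mathcal H}} =\lambda_{\eta}\delta_{\eta\zeta} =(u_{\eta}, {\mathcal L}^{\ast} v_{\zeta})_{{\mathcal H}}, $$ which provides a quick consistency check of the definitions of $\mathcal L$ and ${\mathcal L}^{\ast}$.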
\smallskip We can now define the following notions: \begin{itemize} \item[(i)] the spaces of $(\mathcal U, \Lambda)$-- and $(\mathcal V, \Lambda)$--test functions are defined by $$ \mathcal C^{\infty}_{\mathcal U, \Lambda}:=\bigcap_{k\in\mathbb N_{0}}\mathcal C^{k}_{\mathcal U, \Lambda}, $$ where $$ \mathcal C^{k}_{\mathcal U, \Lambda}:=\{\phi\in{\mathcal H}: \,\, |(\phi, v_{\xi})|\leq C (1+|\lambda_{\xi}|)^{-k} \,\,\, \hbox{for some constant} \,\, C \,\, \hbox{for all} \,\, \xi\in\mathbb N\}, $$ and $$ \mathcal C^{\infty}_{\mathcal V, \Lambda}:=\bigcap_{k\in\mathbb N_{0}}\mathcal C^{k}_{\mathcal V, \Lambda}, $$ where $$ \mathcal C^{k}_{\mathcal V, \Lambda}:=\{\psi\in{\mathcal H}: \,\, |(\psi, u_{\xi})|\leq C (1+|\lambda_{\xi}|)^{-k} \,\,\, \hbox{for some constant} \,\, C \,\, \hbox{for all} \,\, \xi\in\mathbb N\}. $$ The topology of these spaces is defined by the seminorms $$ q_{k}(\phi):=\sup_{\xi\in\mathbb N}(1+|\lambda_{\xi}|)^{k}|(\phi, v_{\xi})|, \quad k\in\mathbb N_{0}, $$ and analogously for $\mathcal C^{\infty}_{\mathcal V, \Lambda}$, with $v_{\xi}$ replaced by $u_{\xi}$. We can define spaces of $(\mathcal U, \Lambda)$-- and $(\mathcal V, \Lambda)$--distributions by $ \mathcal D'_{\mathcal U,\Lambda}:=(\mathcal C^{\infty}_{\mathcal V, \Lambda})'$ and $ \mathcal D'_{\mathcal V,\Lambda}:=(\mathcal C^{\infty}_{\mathcal U, \Lambda})'$, as spaces of linear continuous functionals on $\mathcal C^{\infty}_{\mathcal V, \Lambda}$ and $\mathcal C^{\infty}_{\mathcal U, \Lambda}$, respectively. We follow the conventions of rigged Hilbert spaces to denote this duality by \begin{equation}\label{EQ:dual} \langle u,\phi\rangle_{\mathcal D'_{\mathcal U,\Lambda}, \mathcal C^{\infty}_{\mathcal V, \Lambda}}=(u,\phi)_{{\mathcal H}}, \end{equation} extending the inner product on ${\mathcal H}$ for $u,\phi\in{\mathcal H},$ and similarly for the pair $ \mathcal D'_{\mathcal V,\Lambda}:=(\mathcal C^{\infty}_{\mathcal U, \Lambda})'$.
\item[(ii)] the $\mathcal U$-- and $\mathcal V$--Fourier transforms \begin{equation*}\label{EQ: F_u} \mathcal F_{\mathcal U}(\phi)(\xi):=(\phi, v_{\xi})=:\widehat{\phi}(\xi) \end{equation*} and \begin{equation*}\label{EQ: F_v} \mathcal F_{\mathcal V}(\psi)(\xi):=(\psi, u_{\xi})=:\widehat{\psi}_{\ast}(\xi), \end{equation*} respectively, for arbitrary $\phi\in\mathcal C^{\infty}_{\mathcal U, \Lambda}$, $\psi\in\mathcal C^{\infty}_{\mathcal V, \Lambda}$ and for all $\xi\in\mathbb N$, and hence by duality, these extend to $\mathcal D'_{\mathcal U,\Lambda}$ and $ \mathcal D'_{\mathcal V,\Lambda}$, respectively. Here we have \begin{equation}\label{EQ: F_vd} \langle\mathcal F_{\mathcal U}(w), a\rangle=\langle w, \mathcal F_{\mathcal V}^{-1}(a)\rangle,\quad w\in \mathcal D'_{\mathcal U,\Lambda},\; a\in \mathcal S(\mathbb N), \end{equation} where the space $\mathcal S(\mathbb N)$ is defined in \eqref{EQ:ssp}. Indeed, for $w\in{\mathcal H}$ we can calculate \begin{multline*}\label{EQ:} \langle \mathcal F_{\mathcal U}(w), a\rangle=(\widehat{w},a)_{\ell^{2}(\mathbb N)} =\sum_{\xi\in\mathbb N}(w,v_{\xi}) \overline{a(\xi)}\\ = \left(w, \sum_{\xi\in\mathbb N} a(\xi) v_{\xi}\right)= \left(w,\mathcal F_{\mathcal V}^{-1} a\right)= \langle w,\mathcal F_{\mathcal V}^{-1}a\rangle, \end{multline*} justifying definition \eqref{EQ: F_vd}. Similarly, we define \begin{equation}\label{EQ: F_vdv} \langle\mathcal F_{\mathcal V}(w), a\rangle=\langle w, \mathcal F_{\mathcal U}^{-1}(a)\rangle,\quad w\in \mathcal D'_{\mathcal V,\Lambda},\; a\in \mathcal S(\mathbb N). 
\end{equation} The Fourier transforms of elements of $\mathcal D'_{\mathcal U,\Lambda}, \mathcal D'_{\mathcal V,\Lambda}$ can be characterised by the property that, for example, for $w\in \mathcal D'_{\mathcal U,\Lambda}$, there are $N>0$ and $C>0$ such that $$|\mathcal F_{\mathcal U}w(\xi) | \leq C (1+|\lambda_{\xi}|)^{N},\quad \textrm{ for all }\;\xi\in\mathbb N.$$ \item[(iii)] $\mathcal U$-- and $\mathcal V$--convolutions can be extended by the same formula: $$ f\star_{\mathcal U}g:= \sum_{\xi\in\mathbb N}\widehat{f}(\xi) \widehat{g}(\xi) u_{\xi}=\sum_{\xi\in\mathbb N}(f, v_\xi) (g, v_\xi) u_{\xi} $$ for example, for all $f\in \mathcal D'_{\mathcal U,\Lambda}$ and $g\in\mathcal C^{\infty}_{\mathcal U, \Lambda}$. It is well-defined since the series converges in view of properties from (i) above and assumption \eqref{EQ:asl}. By commutativity, the spaces for $f$ and $g$ can be swapped. Similarly, $$ h\star_{\mathcal V}j:= \sum_{\xi\in\mathbb N}\widehat{h}_{\ast}(\xi) \widehat{j}_{\ast}(\xi) v_{\xi}=\sum_{\xi\in\mathbb N}(h, u_\xi) (j, u_\xi) v_{\xi} $$ for each $h\in \mathcal D'_{\mathcal V,\Lambda}$, $j\in\mathcal C^{\infty}_{\mathcal V, \Lambda}$. \end{itemize} The space $\mathcal C^{\infty}_{\mathcal U, \Lambda}$ can also be described in terms of the operator $\mathcal L$ in \eqref{EQ-1.1}. Namely, we have \begin{equation}\label{EQ:Cl} \mathcal C^{\infty}_{\mathcal U,\Lambda}=\bigcap_{k\in\mathbb N_{0}}\mathrm{Dom}(\mathcal L^{k}), \end{equation} where $$\mathrm{Dom}(\mathcal L^{k}):=\{f\in{\mathcal H}: \,\, \mathcal L^{i}f\in{\mathcal H}, \, i=1, ... , k \},$$ and similarly $$\mathcal C^{\infty}_{\mathcal V,\Lambda}=\bigcap_{k\in\mathbb N_{0}}\mathrm{Dom}(({\mathcal L}^{\ast})^{k}),$$ where $$\mathrm{Dom}(({\mathcal L}^{\ast})^{k}):=\{g\in{\mathcal H}: \,\, ({\mathcal L}^{\ast})^{i}g\in{\mathcal H}, \, i=1, ... , k \}.$$ Let $\mathcal S(\mathbb N)$ denote the space of rapidly decaying functions $\varphi:\mathbb N\rightarrow\mathbb C$.
That is, $\varphi\in\mathcal S(\mathbb N)$ if for any $M<\infty$ there exists a constant $C_{\varphi, M}$ such that \begin{equation}\label{EQ:ssp} |\varphi(\xi)|\leq C_{\varphi, M}(1+|\lambda_{\xi}|)^{-M} \end{equation} holds for all $\xi\in\mathbb N$. The topology on $\mathcal S(\mathbb N)$ is given by the seminorms $p_{k}$, where $k\in\mathbb N_{0}$ and $$p_{k}(\varphi):=\sup_{\xi\in\mathbb N}(1+|\lambda_{\xi}|)^{k}|\varphi(\xi)|.$$ Continuous anti-linear functionals on $\mathcal S(\mathbb N)$ are of the form $$ \varphi\mapsto\langle u, \varphi\rangle:=\sum_{\xi\in\mathbb N}u(\xi)\overline{\varphi(\xi)}, $$ where the functions $u:\mathbb N \rightarrow \mathbb C$ grow at most polynomially at infinity, i.e. there exist constants $M<\infty$ and $C_{u, M}$ such that $$ |u(\xi)|\leq C_{u, M}(1+|\lambda_{\xi}|)^{M} $$ holds for all $\xi\in\mathbb N$. Such functions $u:\mathbb N \rightarrow \mathbb C$ form the space of distributions, which we denote by $\mathcal S'(\mathbb N)$, with the distributional duality (as a Gelfand triple) extending the inner product on $\ell^{2}(\mathbb N)$. Summarising the above definitions and discussion, we record the basic properties of the Fourier transforms as follows: \begin{prop}\label{LEM: FTinS} The $\mathcal U$-Fourier transform $\mathcal F_{\mathcal U}$ is a bijective homeomorphism from $\mathcal C^{\infty}_{\mathcal U, \Lambda}$ to $\mathcal S(\mathbb N)$. Its inverse $$\mathcal F_{\mathcal U}^{-1}: \mathcal S(\mathbb N) \rightarrow \mathcal C^{\infty}_{\mathcal U, \Lambda}$$ is given by \begin{equation} \label{InvFourierTr} \mathcal F^{-1}_{\mathcal U}h=\sum_{\xi\in\mathbb N}h(\xi)u_{\xi},\quad h\in\mathcal S(\mathbb N), \end{equation} so that the Fourier inversion formula becomes \begin{equation} \label{InvFourierTr0} f=\sum_{\xi\in\mathbb N}\widehat{f}(\xi)u_{\xi} \quad \textrm{ for all } f\in \mathcal C^{\infty}_{\mathcal U, \Lambda}.
\end{equation} Similarly, $\mathcal F_{\mathcal V}:\mathcal C^{\infty}_{\mathcal V, \Lambda}\to \mathcal S(\mathbb N)$ is a bijective homeomorphism and its inverse $$ \mathcal F_{\mathcal V}^{-1}: \mathcal S(\mathbb N)\rightarrow \mathcal C^{\infty}_{\mathcal V, \Lambda} $$ is given by \begin{equation} \label{ConjInvFourierTr} \mathcal F^{-1}_{\mathcal V}h:=\sum_{\xi\in\mathbb N}h(\xi)v_{\xi}, \quad h\in\mathcal S(\mathbb N), \end{equation} so that the conjugate Fourier inversion formula becomes \begin{equation} \label{ConjInvFourierTr0} f=\sum_{\xi\in\mathbb N}\widehat{f}_{\ast}(\xi)v_{\xi}\quad \textrm{ for all } f\in \mathcal C^{\infty}_{\mathcal V, \Lambda}. \end{equation} By \eqref{EQ: F_vd} the Fourier transforms extend to linear continuous mappings $ \mathcal F_{\mathcal U}:\mathcal D'_{\mathcal U, \Lambda}\to \mathcal S'(\mathbb N)$ and $ \mathcal F_{\mathcal V}:\mathcal D'_{\mathcal V, \Lambda}\to \mathcal S'(\mathbb N)$. \end{prop} The proof is straightforward. \medskip Let us formulate the properties of the $\mathcal U$- and $\mathcal V$-convolutions: \begin{prop}\label{ConvProp} For any $f\in \mathcal D'_{\mathcal U,\Lambda}, g\in\mathcal C^{\infty}_{\mathcal U, \Lambda}$, $h\in \mathcal D'_{\mathcal V,\Lambda}, j\in\mathcal C^{\infty}_{\mathcal V, \Lambda}$ we have $$ \widehat{f\star_{\mathcal U} g}=\widehat{f}\,\widehat{g}, \,\,\,\, \widehat{h\star_{\mathcal V} j}_{\ast}=\widehat{h}_{\ast}\,\widehat{j}_{\ast}. $$ The convolutions are commutative and associative. If $g\in\mathcal C^{\infty}_{\mathcal U, \Lambda}$ then for all $f\in \mathcal D'_{\mathcal U,\Lambda}$ we have \begin{equation}\label{EQ:conv1} f\star_{\mathcal U} g\in\mathcal C^{\infty}_{\mathcal U, \Lambda}. 
\end{equation} \end{prop} \begin{proof} Since the first part of the statement is proved in the same way as the analogous one in Proposition \ref{PR: ConvProp}, we only show property \eqref{EQ:conv1}, which follows if we observe that for all $k\in\mathbb N_{0}$ the series $$ \sum_{\xi\in\mathbb N}\widehat{f}(\xi) \widehat{g}(\xi) \lambda_{\xi}^{k}u_{\xi} $$ converges since $\widehat{g}\in\mathcal S(\mathbb N)$. \end{proof} \begin{prop} \label{PR: appl-1} If $\mathcal L:{\mathcal H}\to{\mathcal H}$ is associated to a pair $(\mathcal U, \Lambda)$ then we have $$ \mathcal L(f\star_{\mathcal U}g)=(\mathcal L f)\star_{\mathcal U} g=f\star_{\mathcal U} (\mathcal L g) $$ for any $f, g\in\mathcal C^{\infty}_{\mathcal U, \Lambda}$. \end{prop} \begin{proof} The statement follows since the equalities $$ \mathcal F_{\mathcal U}(\mathcal L(f\star_{\mathcal U}g))(\xi)=\lambda_{\xi}\widehat{f}(\xi)\widehat{g}(\xi) $$ and $$ \mathcal F_{\mathcal U}((\mathcal L f)\star_{\mathcal U} g)(\xi)=\mathcal F_{\mathcal U}(\mathcal L f)(\xi)\widehat{g}(\xi)=\lambda_{\xi}\widehat{f}(\xi)\widehat{g}(\xi) $$ hold for all $\xi\in\mathbb N$. \end{proof} As a small application, let us write the resolvent of the operator $\mathcal L$ in terms of the convolution. \begin{thm}\label{TH: apll-2} Let $\mathcal L:{\mathcal H}\to{\mathcal H}$ be an operator associated to a pair $(\mathcal U, \Lambda)$. Then the resolvent of the operator $\mathcal L$ is given by the formula $$ \mathcal R(\lambda)f:=(\mathcal L-\lambda I)^{-1}f=g_{\lambda}\star_{\mathcal U}f,\quad \lambda\not\in\Lambda, $$ where $I$ is the identity operator in ${\mathcal H}$ and $$ g_{\lambda}=\sum_{\xi\in\mathbb N}\frac{1}{\lambda_{\xi}-\lambda}u_{\xi}.
$$ \end{thm} \begin{proof} We begin by computing \begin{align*} g_{\lambda}\star_{\mathcal U}f&=\sum_{\xi\in\mathbb N}\frac{1}{\lambda_{\xi}-\lambda} \widehat{f}(\xi) u_{\xi}\\ &=\sum_{\xi\in\mathbb N}\widehat{f}(\xi)(\mathcal L-\lambda I)^{-1}u_{\xi}\\ &=(\mathcal L-\lambda I)^{-1}\left(\sum_{\xi\in\mathbb N}\widehat{f}(\xi)u_{\xi}\right)\\ &=(\mathcal L-\lambda I)^{-1}f\\ &=\mathcal R(\lambda)f, \end{align*} where we used the continuity of the resolvent. This proves the theorem. \end{proof} \section{Examples} We give an example, considered in \cite{RT16}, which can also be viewed as an extension, in an appropriate sense, of the toroidal calculus studied in \cite{Ruzhansky-Turunen-JFAA-torus}. Let the operator ${\rm O}_{h}^{(1)}:L^{2}(0,1)\to L^{2}(0,1)$ be given by $$ {\rm O}_{h}^{(1)}:= -i\frac{d}{d x}, $$ where $h>0$, on the interval $(0, 1)$ with the boundary condition $h y(0)=y(1).$ In the case $h=1$ we have ${\rm O}_{1}^{(1)}$ with periodic boundary conditions, and the systems $\mathcal U$ and $\mathcal V$ of eigenfunctions of ${\rm O}_{1}^{(1)}$ and its adjoint ${{\rm O}_{1}^{(1)}}^{*}$ coincide, and are given by $$ \mathcal U=\mathcal V=\{u_{j}(x)=e^{ 2\pi i x j },\,\, j\in \mathbb{Z}\}. $$ This leads to the setting of the classical Fourier analysis on the circle, which can be viewed as the interval $(0,1)$ with periodic boundary conditions. The corresponding pseudo-differential calculus was consistently developed in \cite{Ruzhansky-Turunen-JFAA-torus}, building on previous observations in the works by Agranovich \cite{agran, agran2} and others. For $h\not=1$, the operator ${\rm O}_{h}^{(1)}$ is not self-adjoint.
The spectral properties of ${\rm O}_{h}^{(1)}$ are well known (see Titchmarsh \cite{titc} and Cartwright \cite{cart}): the spectrum of ${\rm O}_{h}^{(1)}$ is discrete and is given by $\lambda_{j}=-i\ln h+2j\pi, \ j\in \mathbb{Z}.$ The corresponding bi-orthogonal families of eigenfunctions of ${\rm O}_{h}^{(1)}$ and its adjoint are given by $$ \mathcal U=\{u_{j}(x)=h^{x}e^{ 2\pi i x j },\,\, j\in \mathbb{Z}\} $$ and $$ \mathcal V=\{v_{j}(x)=h^{-x} e^{2\pi i x j },\,\, j\in \mathbb{Z}\}, $$ respectively. They form Riesz bases, and ${\rm O}_{h}^{(1)}$ is the operator associated to the pair $\mathcal U$ and $\Lambda=\{\lambda_{j}=-i\ln h+2j\pi\}_{j\in \mathbb{Z}}$. Since $\mathbb N$ denoted an arbitrary discrete set before, all the previous constructions work with $\mathbb Z$ instead of $\mathbb N$. Formally, we can write \begin{equation} \label{CONV} (f\star_{\mathcal U} g)(x)=\int\int F(x,y,z)f(y)g(z)dydz, \end{equation} where $$ F(x,y,z)=\sum_{\xi\in\mathbb N}u_{\xi}(x) \ \overline{v_{\xi}(y)} \ \overline{v_{\xi}(z)}. $$ Here the integral \eqref{CONV} and the last series should be understood in the sense of distributions. In the case $h=1$, it can be shown that $F(x,y,z)=\delta(x-y-z)$, see \cite{Ruzhansky-Turunen-JFAA-torus}. For any $h>0$, it can be shown that the $\mathcal U$--convolution coincides with Kanguzhin's convolution that was studied in \cite{Kanguzhin_Tokmagambetov} and \cite{Kanguzhin_Tokmagambetov_Tulenov}: \begin{prop}\label{prop:Kc} Let ${\mathcal H}=L^{2}(0,1)$, $\mathcal U=\{u_{j}(x)=h^{x}e^{ 2\pi i x j },\,\, j\in \mathbb{Z}\}$, and $\Lambda=\{\lambda_{j}=-i\ln h+2j\pi\}_{j\in \mathbb{Z}}$. Then the operator $\mathcal L:L^{2}(0,1) \to L^{2}(0,1)$ associated to the pair $(\mathcal U, \Lambda)$ coincides with ${\rm O}_{h}^{(1)}$. The corresponding $\mathcal U$-convolution can be written in the integral form: $$ (f\star_{\mathcal U} g)(x)=\int^{x}_{0}f(x-t)g(t)dt+\frac{1}{h}\int^{1}_{x}f(1+x-t)g(t)dt.
$$ \end{prop} In particular, when $h=1$, we obtain $$ (f\star_{\mathcal U} g)(x)=\int_{0}^{1}f(x-t)g(t)dt, $$ which is the usual convolution on the circle. \begin{proof}[Proof of Proposition \ref{prop:Kc}] Let us denote $$K(f,g)(x):=\int^{x}_{0}f(x-t)g(t)dt+\frac{1}{h}\int^{1}_{x}f(1+x-t)g(t)dt. $$ Then we can calculate \begin{align*} &\mathcal F_{\mathcal U}(K(f,g))(\xi) \\ =&\int_{0}^{1}\int^{x}_{0}f(x-t)g(t)h^{-x} e^{-2\pi i x \xi }dtdx\\ &+\frac{1}{h}\int_{0}^{1}\int^{1}_{x}f(1+x-t)g(t)h^{-x} e^{-2\pi i x \xi }dtdx \\ =&\int_{0}^{1}\left[\int^{1}_{t}f(x-t)h^{-x} e^{-2\pi i x \xi }dx\right]g(t)dt\\ &+\int_{0}^{1}\left[\int^{t}_{0}f(1+x-t)h^{-(1+x)} e^{-2\pi i (1+x) \xi }dx\right]g(t)dt \\ =&\int_{0}^{1}\left[\int^{1}_{t}f(x-t)h^{-(x-t)} e^{-2\pi i (x-t) \xi }dx\right]g(t)h^{-t} e^{-2\pi i t \xi }dt\\ &+\int_{0}^{1}\left[\int^{t}_{0}f(1+x-t)h^{-(1+x-t)} e^{-2\pi i (1+x-t) \xi }dx\right]g(t)h^{-t} e^{-2\pi i t \xi }dt\\ =&\int_{0}^{1}\left[\int^{1-t}_{0}f(z)h^{-z} e^{-2\pi i z \xi }dz\right]g(t)h^{-t} e^{-2\pi i t \xi }dt\\ &+\int_{0}^{1}\left[\int^{1}_{1-t}f(z)h^{-z} e^{-2\pi i z \xi }dz\right]g(t)h^{-t} e^{-2\pi i t \xi }dt\\ =&\int_{0}^{1}\left[\int^{1}_{0}f(z)h^{-z} e^{-2\pi i z \xi }dz\right]g(t)h^{-t} e^{-2\pi i t \xi }dt\\ =&\widehat{f}(\xi)\,\widehat{g}(\xi). \end{align*} Consequently, by Proposition \ref{PR: ConvProp}, we obtain that $K(f,g)=f\star_{\mathcal U}g$. \end{proof} \section{Further discussion} We note that in the case when we are given an operator $\mathcal L:{\mathcal H}\to{\mathcal H}$ for which the eigenfunctions do not form a basis of ${\mathcal H}$, other notions of convolutions are possible, still satisfying an analogue of Proposition \ref{PR: appl-1}.
Here we can set up a convolution using the characterisation of the space of test functions given by \eqref{EQ:Cl}. \begin{defi}\label{DEF: L-oper} Let $\mathcal L:{\mathcal H}\to{\mathcal H}$ be a linear densely defined operator in ${\mathcal H}$. Denote $\mathrm{Dom}(\mathcal L^{\infty}):=\bigcap_{k\in\mathbb N_{0}}\mathrm{Dom}(\mathcal L^{k})$ with $\mathrm{Dom}(\mathcal L^{k}):=\{f: \,\, \mathcal L^{i}f\in{\mathcal H}, \, i=1, ... , k \}.$ We say that a bilinear associative and commutative operation $\star_{\mathcal L}$ is an $\mathcal L$-convolution if for any $f,g\in\mathrm{Dom}(\mathcal L^{\infty})$ we have $$ \mathcal L(f\star_{\mathcal L}g)=(\mathcal L f)\star_{\mathcal L}g=f\star_{\mathcal L}(\mathcal L g). $$ \end{defi} Proposition \ref{PR: appl-1} implies that the $\mathcal U$-convolution is a special case of $\mathcal L$-convolutions: \begin{cor} Assume that $\mathcal L:{\mathcal H}\to{\mathcal H}$ is an operator associated with the pair $(\mathcal U, \Lambda)$, where $\mathcal U$ is a Riesz basis in ${\mathcal H}$. Then the $\mathcal U$--convolution $\star_{\mathcal U}$ is an $\mathcal L$-convolution. \end{cor} We finally show that an $\mathcal L$-convolution need not be a $\mathcal U$-convolution for any choice of the sequence $\Lambda$. For this, let us consider an $\mathcal L$-convolution associated to the so-called Ionkin operator considered in \cite{I77}. The Ionkin operator $\mathcal Y:{\mathcal H}\to{\mathcal H}$ is the operator in ${\mathcal H}:=L^{2}(0, 1)$ generated by the differential expression $$ -\frac{d^{2}}{d x^{2}}, \,\,\, x\in(0, 1), $$ with the boundary conditions $$ u(0)=0, \,\,\, u'(0)=u'(1). $$ It has eigenvalues $$ \lambda_{\xi}=(2\pi \xi)^{2},\; \xi\in\mathbb Z_{+}, $$ and an extended set of eigenfunctions $$ u_{0}(x)=x, \,\,\, u_{2\xi-1}(x)=\sin(2\pi \xi x), \,\,\, u_{2\xi}(x)=x\cos(2\pi \xi x), \,\, \xi\in\mathbb N, $$ which gives a basis in $L^{2}(0, 1)$, which we denote by $\mathcal U$.
The corresponding biorthogonal basis is given by $$ v_{0}(x)=2, \,\,\, v_{2\xi-1}(x)=4(1-x)\sin(2\pi \xi x), \,\,\, v_{2\xi}(x)=4\cos(2\pi \xi x), \,\, \xi\in\mathbb N; $$ for more details, see \cite{I77}. We consider the $\mathcal Y$--convolution (Ionkin--Kanguzhin's convolution) given by the formula \begin{multline} f\star_{\mathcal Y}g(x):=\frac{1}{2}\int_{x}^{1}f(1+x-t)g(t)dt\\ +\frac{1}{2}\int_{1-x}^{1}f(x-1+t)g(t)dt+\int_{0}^{x}f(x-t)g(t)dt\\ -\frac{1}{2}\int_{0}^{1-x}f(1-x-t)g(t)dt+\frac{1}{2}\int_{0}^{x}f(1+t-x)g(t)dt. \end{multline} This is a $\mathcal Y$-convolution in the sense of Definition \ref{DEF: L-oper}, namely, it satisfies $$ \mathcal Y(f\star_{\mathcal Y}g)=(\mathcal Y f)\star_{\mathcal Y}g=f\star_{\mathcal Y}(\mathcal Y g), $$ see \cite{KT15}. For the collection $$ \mathcal U:=\{u_{\xi}: \,\,\, u_{0}(x)=x, \, u_{2\xi-1}(x)=\sin(2\pi \xi x), \,\, u_{2\xi}(x)=x\cos(2\pi \xi x), \,\, \xi\in\mathbb N\}, $$ it can be readily checked that the corresponding $\mathcal U$-Fourier transform satisfies $$ \widehat{f\star_{\mathcal Y}g}(0)=\widehat{f}(0)\widehat{g}(0), $$ $$ \widehat{f\star_{\mathcal Y}g}(2\xi)=\widehat{f}(2\xi)\widehat{g}(2\xi), $$ $$ \widehat{f\star_{\mathcal Y}g}(2\xi-1)=\widehat{f}(2\xi-1)\widehat{g}(2\xi)+\widehat{f}(2\xi)\widehat{g}(2\xi) +\widehat{f}(2\xi)\widehat{g}(2\xi-1), \,\, \xi\in\mathbb N. $$ Therefore, by Proposition \ref{PR: ConvProp}, the $\mathcal Y$--convolution (Ionkin--Kanguzhin convolution) does not coincide with the $\mathcal U$--convolution for any choice of numbers $\Lambda$. \section{Acknowledgment} The authors thank Professor Baltabek Kanguzhin for interesting discussions, and the referee for useful remarks.
https://arxiv.org/abs/math/9306210
Some stability properties of $c_0$-saturated spaces
A Banach space is $c_0$-saturated if all of its closed infinite dimensional subspaces contain an isomorph of $c_0$. In this article, we study the stability of this property under the formation of direct sums and tensor products. Some of the results are: (1) a slightly more general version of the fact that $c_0$-sums of $c_0$-saturated spaces are $c_0$-saturated; (2) $C(K,E)$ is $c_0$-saturated if both $C(K)$ and $E$ are; (3) the tensor product $JH\tilde{\otimes}_\epsilon JH$ is $c_0$-saturated, where $JH$ is the James Hagler space.
\section{Direct sums of $c_0$-saturated spaces} In \cite{R}, it is stated without proof that $c_0$-sums of $c_0$-saturated spaces are $c_0$-saturated. In this section, we prove a result which includes this as a special case. Let $(E_n)$ be a sequence of Banach spaces, and let $F$ be a Banach space with a basis $(e_n)$. The $F$-sum of the spaces $E_n$ is the Banach space $(\oplus E_n)_F$ of all sequences $(x(n))$ such that $x(n) \in E_n$ for all $n$, and $\sum\|x(n)\|e_n$ converges in $F$, endowed with the norm \[ \|(x(n))\| = \|\sum\|x(n)\|e_n\|. \] For convenience, we will say that a Banach space is $p$-saturated if it is $\ell^p$-saturated ($1 \leq p < \infty$) or $c_0$-saturated ($p = \infty$). \begin{lem}\label{EF} Let $E, F$ be $p$-saturated Banach spaces for some $1 \leq p \leq \infty$. Then $E\oplus F$ is $p$-saturated. \end{lem} \begin{pf} It suffices to show that every normalized basic sequence in $E\oplus F$ has a block basis equivalent to the $\ell^p$-basis ($c_0$-basis if $p = \infty$). Let $(x_n\oplus y_n)$ be a normalized basic sequence in $E\oplus F$. If $p = 1$, and $(x_n\oplus y_n)$ has an $\ell^1$-subsequence, then we are done. Otherwise, by Rosenthal's Theorem \cite{Ro}, we may assume that $(x_n\oplus y_n)$ is weakly Cauchy. If $p \neq 1$, using again Rosenthal's Theorem, we may assume that both $(x_n)$ and $(y_n)$ are weakly Cauchy. In both cases, by replacing the sequence $(x_n\oplus y_n)$ with $(x_{2n-1}-x_{2n}\oplus y_{2n-1}-y_{2n})$ if necessary, we may even assume that both $(x_n)$ and $(y_n)$ are weakly null. If $\|y_n\| \to 0$, then a subsequence of $(x_n\oplus y_n)$ is equivalent to a subsequence of $(x_n\oplus 0)$. But then the latter is a basic sequence in $E$, and hence has a block basis equivalent to the $\ell^p$-basis. Therefore, $(x_n\oplus y_n)$ has an $\ell^p$-block basis as well. A similar argument holds if $\|x_n\| \to 0$. Otherwise, we may take both $(x_n)$ and $(y_n)$ to be semi-normalized weakly null sequences.
By using a subsequence, it may be assumed that both are basic sequences. Then $(x_n)$ has an $\ell^p$-block basis $(u_k) = (\sum^{n_{k+1}}_{j=n_k+1}a_jx_j)$. Let $(v_k) = (\sum^{n_{k+1}}_{j=n_k+1}a_jy_j)$ be the corresponding block basis of $(y_n)$. If $\|v_k\| \to 0$, then we apply the same argument as above. If $\|v_k\| \to \infty$, then \[ \left(\frac{u_k}{\|v_k\|}\oplus\frac{v_k}{\|v_k\|}\right) \] is a semi-normalized block basis of $(x_n\oplus y_n)$. Since $\|u_k\|/\|v_k\| \to 0$, we may apply the argument above yet again to conclude the proof. Finally, then, we may assume that $(v_k)$ is semi-normalized. Then $(v_k)$ has an $\ell^p$-block basis $(t_k)$. Let $(s_k)$ be the corresponding block basis of $(u_k)$ formed by using the same coefficients. Arguing as before, we may assume that $(s_k)$ is semi-normalized. But since $(u_k)$ is an $\ell^p$-sequence, so is $(s_k)$. Therefore, $(s_k\oplus t_k)$ is an $\ell^p$-block basis of $(u_k\oplus v_k)$, and hence of $(x_n\oplus y_n)$. \end{pf} \begin{thm} Let $(E_n)$ be a sequence of $p$-saturated Banach spaces, and let $F$ be a $p$-saturated Banach space with a basis. Then $E = (\oplus E_n)_F$ is $p$-saturated. \end{thm} \begin{pf} For each $x \in E$, write $x = (x(n))$, where $x(n) \in E_n$ for all $n$. Let $(x_k)$ be a normalized basic sequence in $E$. For each $m \in \N$, let $P_m$ be the projection on $E$ defined by $P_mx = y$, where $y(n) = x(n)$ if $n \leq m$, and $y(n) = 0$ otherwise. If for some subsequence $(z_j)$ of $(x_k)$, and some $m \in \N$, $(P_mz_j)_j$ dominates $(z_j)$, then $(z_j)$ is equivalent to $(P_mz_j)_j$. But then the latter is a basic sequence in $E_1\oplus\ldots\oplus E_m$, which is $p$-saturated by Lemma \ref{EF} and induction. Hence $(z_j)$, and thus $(x_k)$, has an $\ell^p$-block basis, and we are done. Otherwise, for all $m \in \N$, and every subsequence $(z_j)$ of $(x_k)$, \begin{equation}\label{nodom} \inf\{\|\sum_j a_jP_mz_j\| : (a_j) \in c_{00},\ \|\sum a_jz_j\| = 1\} = 0 .
\end{equation} Let $m_0 = 0$, and $y_1 = x_1$. Choose $m_1$ such that $\|(1-P_{m_1})y_1\| \leq 1$. By (\ref{nodom}), there exists $k_2 \geq 2$, and $y_2 \in \mbox{span}\{x_k: 2 \leq k \leq k_2\}$, $\|y_2\| = 1$, such that $\|P_{m_1}y_2\| \leq 1/4$. Then choose $m_2 > m_1$ so that $\|(1-P_{m_2})y_2\| \leq 1/4$. Continuing inductively, we obtain a normalized block basis $(y_j)$ of $(x_k)$, and $(m_j)^\infty_{j=0}$ such that \[ \|y_j - (P_{m_j}-P_{m_{j-1}})y_j\| \leq 1/j \] for all $j \geq 1$ ($P_0 = 0$). Let $v_j = (P_{m_j}-P_{m_{j-1}})y_j$. Then $(y_j)$ has a subsequence equivalent to a subsequence of $(v_j)$. But, writing the basis of $F$ as $(e_n)$, it is clear that $(v_j)$ is equivalent to the sequence $(\sum^{m_j}_{n=m_{j-1}+1}\|v_j(n)\|e_n)$ in $F$. Since $F$ is $p$-saturated, we conclude that any subsequence of $(v_j)$ has an $\ell^p$-block basis. Thus, the same can be said of $(x_k)$, and the proof is complete. \end{pf} \section{Tensor products of $c_0$-saturated spaces} For Banach spaces $E$ and $F$, let $K_{w*}(E',F)$ denote the space of all compact weak*-weakly continuous operators from $E'$ into $F$, endowed with the operator norm. The $\epsilon$-tensor product $E \tilde{\otimes}_\epsilon F$ is the closure in $K_{w*}(E',F)$ of the finite rank operators that belong to $K_{w*}(E',F)$. These spaces are equal if either $E$ or $F$ has the approximation property \cite{Ru}. In this section and the next, we investigate special cases of the following: \\ \noindent{\em Problem}: Is $K_{w*}(E',F)$ (or $E\tilde{\otimes}_\epsilon F$) $c_0$-saturated if both $E$ and $F$ are? \\ A Banach space is {\em polyhedral}\/ if the unit ball of every finite dimensional subspace is a polyhedron. It is {\em isomorphically polyhedral}\/ if it is isomorphic to a polyhedral Banach space. Our interest in isomorphically polyhedral spaces arises from the following result of Fonf \cite{F1}. \begin{thm} {\em (Fonf)} An isomorphically polyhedral Banach space is $c_0$-saturated.
\end{thm} A subset $W$ of the dual of a Banach space $E$ is said to be {\em isomorphically precisely norming}\/ (i.p.n.)\ if $W$ is bounded and\\ (a) there exists $K < \infty$ such that $\|x\| \leq K\sup_{w\in W}|w(x)|$ for all $x \in E$, \\ (b) the supremum $\sup_{w\in W}|w(x)|$ is attained at some $w_0 \in W$ for all $x \in E$. \\ This terminology was introduced by Rosenthal \cite{R1,R} to provide a succinct formulation of the following result of Fonf \cite{F2}. \begin{thm} {\em (Fonf)} A separable Banach space $E$ is isomorphically polyhedral if and only if $E'$ contains a countable i.p.n.\ subset. \end{thm} In this section, we consider the space $K_{w*}(E',F)$ when one of the spaces $E$ or $F$ is isomorphically polyhedral, and the other is $c_0$-saturated. Note the symmetry in the situation as $K_{w*}(E',F)$ is isometric to $K_{w*}(F',E)$ via the mapping $T \mapsto T'$. For Lemma \ref{pn}, note that if $x' \in E'$ and $y' \in F'$, the pair $(x',y')$ defines a functional on $K_{w*}(E',F)$ by $T \mapsto \langle Tx',y'\rangle$. \begin{lem}\label{pn} Let $E, F$ be Banach spaces, and let $W$ and $V$ be i.p.n.\ subsets of $E'$ and $F'$ respectively. Then $W\times V$ is an i.p.n.\ subset of $(K_{w*}(E',F))'$. \end{lem} \begin{pf} It is clear that if both $W$ and $V$ satisfy (a) of the definition of an i.p.n.\ set with constant $K$, then $W\times V$ also satisfies it with constant $K^2$. Now assume that both $W$ and $V$ satisfy part (b) of the definition. It is easy to see that $(x',y') \mapsto \langle Tx',y'\rangle$ is a continuous function on $\overline{W}^{w^*}\times\overline{V}^{w^*}$, where both $\overline{W}^{w^*}$ and $\overline{V}^{w^*}$ are given their respective weak* topologies.
Since $\overline{W}^{w^*}\times\overline{V}^{w^*}$ is compact, there exists $(w_0,v_0) \in \overline{W}^{w^*}\times\overline{V}^{w^*}$ such that \[ \sup_{(x',y')\in W\times V}|\langle Tx',y'\rangle| = \sup_{(x',y')\in\overline{W}^{w^*}\times\overline{V}^{w^*}}|\langle Tx',y'\rangle| = |\langle Tw_0,v_0\rangle|. \] Now there exists $v\in V$ such that \begin{eqnarray*} |\langle Tw_0,v\rangle| & = & \sup_{y'\in V}|\langle Tw_0,y'\rangle| \\ & = & \sup_{y'\in\overline{V}^{w^*}}|\langle Tw_0,y'\rangle| \\ & \geq & |\langle Tw_0,v_0\rangle|. \end{eqnarray*} Similarly, there exists $w \in W$ such that \begin{eqnarray*} |\langle Tw,v\rangle| & = & |\langle w,T'v\rangle| \\ & = & \sup_{x'\in W}|\langle x',T'v\rangle| \\ & = & \sup_{x'\in\overline{W}^{w^*}}|\langle x',T'v\rangle| \\ & \geq & |\langle Tw_0,v\rangle|. \end{eqnarray*} Combining the above, we see that $|\langle Tw,v\rangle| \geq \sup_{(x',y')\in W\times V}|\langle Tx',y'\rangle|$. Since the reverse inequality is obvious, the proof is complete. \end{pf} \begin{lem}\label{block} Let $(x_n)$ be a non-convergent sequence in a $c_0$-saturated Banach space $F$. There exists a normalized block $(u_k) = (\sum^{n_k}_{n=n_{k-1}+1}b_nx_n)$ of $(x_n)$ which is equivalent to the $c_0$-basis. \end{lem} \begin{pf} Going to a subsequence, we may assume that $\inf_{m\neq n}\|x_m-x_n\| > 0$. By Rosenthal's Theorem \cite{Ro}, we may also assume that $(x_n)$ is weakly Cauchy. Let $y_n = (x_{2n-1}-x_{2n})/\|x_{2n-1}-x_{2n}\|$ for all $n$. Then $(y_n)$ is a weakly null normalized block of $(x_n)$. Without loss of generality, we may assume that $(y_n)$ is a basic sequence. Since $F$ is $c_0$-saturated, $(y_n)$ has a normalized block basis $(u_k)$ equivalent to the $c_0$-basis. As $(u_k)$ is also a normalized block of $(x_n)$, the proof is complete. \end{pf} \begin{lem}\label{opbl} Let $E, F$ be Banach spaces so that $F$ is $c_0$-saturated, and let $(T_n)$ be a normalized basic sequence in $K_{w*}(E',F)$.
For every $x' \in E'$, there is a normalized block basis $(S_n)$ of $(T_n)$, and a constant $C$, such that \[ \|\sum a_nS_nx'\| \leq C\sup_n|a_n| \] for all $(a_n) \in c_{00}$. \end{lem} \begin{pf} There is no loss of generality in assuming that $\|x'\| = 1$.\\ \noindent\underline{Case 1} $(T_nx')$ converges.\\ We may assume without loss of generality that $\|(T_{2n-1}-T_{2n})x'\| \leq 2^{-n}$ for all $n$. Let $\epsilon_n = \|T_{2n-1}-T_{2n}\|$. Since $(T_n)$ is a normalized basic sequence, $\epsilon \equiv \inf_n\epsilon_n > 0$. Now let $S_n = \epsilon^{-1}_n(T_{2n-1}-T_{2n})$ for all $n$. Then $(S_n)$ is a normalized block basis of $(T_n)$. Furthermore, for any $(a_n) \in c_{00}$, \begin{eqnarray*} \|\sum a_nS_nx'\| & \leq & \epsilon^{-1}\sup_n|a_n|\sum\|(T_{2n-1}-T_{2n})x'\| \\ & \leq & \epsilon^{-1}\sup_n|a_n| . \end{eqnarray*} \underline{Case 2} $(T_nx')$ does not converge. \\ By Lemma \ref{block}, $(T_nx')$ has a normalized block $(u_k) = (\sum^{n_k}_{n=n_{k-1}+1}b_nT_nx')$ which is equivalent to the $c_0$-basis. Let $R_k = (\sum^{n_k}_{n=n_{k-1}+1}b_nT_n)$. Then $\|R_k\| \geq \|R_kx'\| = \|u_k\| = 1$. Let $S_k = R_k/\|R_k\|$ for all $k$. Then $(S_k)$ is a normalized block basis of $(T_n)$, and \[ \|\sum a_kS_kx'\| = \|\sum \frac{a_k}{\|R_k\|}u_k\| \leq C\sup_k|a_k| \] for all $(a_k) \in c_{00}$, since $(u_k)$ is equivalent to the $c_0$-basis and $\|R_k\| \geq 1$. \end{pf} \begin{thm}\label{main} Let $E, F$ be Banach spaces so that $E$ is isomorphically polyhedral and $F$ is $c_0$-saturated. Then $K_{w*}(E',F)$ is $c_0$-saturated. \end{thm} \begin{pf} Let $(T_n)$ be a normalized basic sequence in $K_{w*}(E',F)$. It is easily seen that $G = [\cup T'_nF']$ is a separable subspace of $E$, and that the sequences $(T_n)$ and $(T_{n|G})$ are equivalent. Thus we may assume that $E$ is separable. Then $E'$ contains a countable i.p.n.\ subset $W$. Write $W = (w_m)$. 
By Lemma \ref{opbl}, $(T_n)^\infty_{n=1}$ has a normalized block basis $(T^{(1)}_n)$ such that \[ \|\sum a_nT^{(1)}_nw_1\| \leq C_1\sup_n|a_n| \] for some constant $C_1$ for all $(a_n) \in c_{00}$. Inductively, if $(T^{(m)}_n)$ has been chosen, let $(T^{(m+1)}_n)$ be a normalized block basis of $(T^{(m)}_n)^\infty_{n=2}$ such that there is a constant $C_{m+1}$ satisfying \[ \|\sum a_nT^{(m+1)}_nw_{m+1}\| \leq C_{m+1}\sup_n|a_n| \] for all $(a_n) \in c_{00}$. Now let $S_m = T^{(m)}_1$ for all $m \in \N$. Then $(S_m)$ is a normalized block basis of $(T_n)$. Also, for all $k$, $(S_m)^\infty_{m=k}$ is a block basis of $(T^{(k)}_n)$. Fix $k$, and write $S_m = \sum^{j_m}_{n=j_{m-1}+1}b_nT^{(k)}_n$ for all $m \geq k$. Since $(S_m)$ is normalized, and $(T^{(k)}_n)$ is normalized basic, $(b_n)$ is bounded. Therefore, by the choice of $(T^{(k)}_n)$, \begin{eqnarray*} \left\|\sum^\infty_{m=k}a_mS_mw_k\right\| & = & \left\|\sum^\infty_{m=k}a_m\left(\sum^{j_m}_{n=j_{m-1}+1} b_nT^{(k)}_nw_k\right)\right\| \\ & \leq & C_k\sup_{m\geq k}\sup_{j_{m-1}<n\leq j_m}|a_mb_n| \\ & \leq & C_k\sup_n|b_n|\sup_{m\geq k}|a_m| \end{eqnarray*} for all $(a_m) \in c_{00}$. Consequently, $\sum^\infty_{m=1}a_mS_mw_k$ converges in $F$ for all $k$ and all $(a_m) \in c_0$. Hence $\sum|\langle S_mw,y'\rangle| < \infty$ for all $(w,y') \in W\times U_{F'}$. But by Lemma \ref{pn}, $W\times U_{F'}$ is an i.p.n.\ subset of $(K_{w*}(E',F))'$. Applying Elton's extremal criterion (\cite{E}, see also \cite[Theorem 18]{R}), we see that $[S_m]$ contains a copy of $c_0$. \end{pf} Recall that a subset $A$ of a topological space $X$ is {\em dense-in-itself}\/ if every point of $A$ is an accumulation point of $A$. $A$ is {\em scattered}\/ if it contains no non-empty dense-in-itself subset. \\ \begin{cor} Let $K$ be a compact Hausdorff space, and let $E$ be a Banach space. Then $C(K,E)$ is $c_0$-saturated if and only if both $C(K)$ and $E$ are $c_0$-saturated.
\end{cor} \begin{pf} The ``only if'' part is clear, since both $C(K)$ and $E$ embed in $C(K,E)$. Now assume that both $C(K)$ and $E$ are $c_0$-saturated. To begin with, assume additionally that $C(K)$ is separable. Then $K$ is metrizable \cite[Proposition II.7.5]{S}. If $K$ is not scattered, by \cite[Theorem 8.5.4]{Sem}, there is a continuous surjection $\phi$ of $K$ onto $[0,1]$. Then $f \mapsto f\circ\phi$ is an isometric embedding of $C[0,1]$ into $C(K)$. This contradicts the fact that $C(K)$ is $c_0$-saturated. Thus $K$ is scattered. By \cite[Theorem 8.6.10]{Sem}, $K$ is homeomorphic to a countable compact ordinal. In particular, $K$ is countable. Hence $C(K)'$ contains a countable i.p.n.\ subset, namely, $\{\delta_k: k \in K\}$, where $\delta_k$ denotes the Dirac measure concentrated at $k$. Therefore, $C(K)$ is isomorphically polyhedral, and $C(K,E) = K_{w*}(C(K)',E)$ is $c_0$-saturated by Theorem \ref{main}. If $C(K)$ is non-separable, as in the proof of the theorem, it suffices to show that $K_{w*}(G',E)$ contains a copy of $c_0$ for an arbitrary separable closed subspace $G$ of $C(K)$. However, $K_{w*}(G',E)$ is isometric to $K_{w*}(E',G)$, which clearly embeds in $K_{w*}(E',F)$ for any closed subspace $F$ of $C(K)$ containing $G$. Take $F$ to be the closed sublattice generated by $G$ and the constant $1$ function. By Kakutani's Representation Theorem \cite[Theorem II.7.4]{S}, $F$ is lattice isometric to some $C(H)$. Note that $C(H)$ is separable since $F$ is. Therefore, $K_{w*}(E',F)$, which is isometric to $K_{w*}(F',E) = C(H,E)$, is $c_0$-saturated by the above. Since $K_{w*}(G',E)$ is isomorphic to a subspace of $K_{w*}(E',F)$, it contains a copy of $c_0$. \end{pf} \section{The space $JH\tilde{\otimes}_\epsilon JH$} In view of Theorem \ref{main} in \S 2, it is interesting to consider spaces $K_{w*}(E',F)$ where both $E$ and $F$ are $c_0$-saturated, but neither is isomorphically polyhedral. In this section, we investigate one such case. 
Namely, we take $E = F = JH$, the James Hagler space. In \cite{L}, it was shown that $JH\tilde{\otimes}_\epsilon JH = K_{w*}(JH',JH)$ does not contain an isomorph of $\ell^1$. We show here that, in fact, $JH\tilde{\otimes}_\epsilon JH$ is $c_0$-saturated. The proof uses Elton's extremal criterion for weak unconditional convergence, and the ``diagonalization technique'' employed by Hagler to show that every normalized weakly null sequence in $JH$ has a $c_0$-subsequence. Let us recall the definition of the space $JH$, as well as fix some terms and notation. Let $T = \cup^\infty_{n=0}\{0,1\}^n$ be the dyadic tree. The elements of $T$ are called {\em nodes}. If $\phi$ is a node of the form $(\epsilon_i)^n_{i=1}$, we say that $\phi$ has {\em length}\/ $n$ and write $|\phi| = n$. The length of the empty node is defined to be $0$. For $\phi, \psi \in T$ with $\phi = (\epsilon_i)^n_{i=1}$ and $\psi = (\delta_i)^m_{i=1}$, we say that $\phi \leq \psi$ if $n \leq m$ and $\epsilon_i = \delta_i$ for $1 \leq i \leq n$. The empty node is $\leq \phi$ for all $\phi \in T$. We write $\phi < \psi$ if $\phi \leq \psi$ and $\phi \neq \psi$. Two nodes $\phi$ and $\psi$ are {\em incomparable}\/ if neither $\phi \leq \psi$ nor $\psi \leq \phi$ hold. If $\phi \leq \psi$, we say that $\phi$ is an {\em ancestor}\/ of $\psi$, while $\psi$ is a {\em descendant}\/ of $\phi$. For any $\phi \in T$, let $\Delta_\phi$ be the set of all descendants of $\phi$. If $\phi \leq \psi$, let \[ S(\phi,\psi) = \{\xi: \phi \leq \xi \leq \psi\}. \] A set of the form $S(\phi,\psi)$ is called a {\em segment}, or more specifically, an $m$-$n$ {\em segment}\/ provided $|\phi| = m$ and $|\psi| = n$. A {\em branch}\/ is a maximal totally ordered subset of $T$. The set of all branches is denoted by $\Gamma$. A branch $\gamma$ (respectively, a segment $S$) is said to {\em pass through}\/ a node $\phi$ if $\phi \in \gamma$ (respectively, $\phi \in S$).
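As a small illustration of this notation (our own example, obtained directly from the definitions): take $\phi = (0)$ and $\psi = (0,1,1)$, so that $|\phi| = 1$, $|\psi| = 3$, and $\phi \leq \psi$. Then
\[
S(\phi,\psi) = \{(0),\ (0,1),\ (0,1,1)\}
\]
is a $1$-$3$ segment, $\Delta_{(0)}$ consists of all nodes whose first coordinate is $0$, and every branch passing through $\psi$ also passes through $\phi$. On the other hand, the nodes $(0,1)$ and $(1,0)$ are incomparable, since neither extends the other.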
If $x: T \to \R$\ is a finitely supported function and $S$ is a segment, we define (with slight abuse of notation) $Sx = \sum_{\phi\in S}x(\phi)$. Similarly, if $\gamma \in \Gamma$, we define $\gamma(x) = \sum_{\phi\in \gamma}x(\phi)$. A set of segments $\{S_1,\ldots,S_r\}$ is {\em admissible}\/ if they are pairwise disjoint, and there are $m, n \in \N\cup\{0\}$ such that each $S_i$ is an $m$-$n$ segment. The James Hagler space $JH$ is defined as the completion of the set of all finitely supported functions $x: T \to \R$\ under the norm: \[ \|x\| = \sup\left\{\sum^r_{i=1}|S_ix| : S_1,\ldots,S_r \mbox{ is an admissible set of segments}\right\}. \] Clearly, all $S$ and $\gamma$ extend to norm $1$ functionals on $JH$. Finally, if $x: T \to \R$\ is finitely supported, and $n \geq 0$, let $P_nx: T \to \R$\ be defined by \[ (P_nx)(\phi) = \left\{ \begin{array}{ll} x(\phi) & \mbox{if $|\phi| \geq n$} \\ 0 & \mbox{otherwise.} \end{array} \right. \] Obviously, $P_n$ extends uniquely to a norm $1$ projection on $JH$, which we denote again by $P_n$. We begin with some lemmas on ``node management''. Let $\phi$ and $\psi$ be nodes. Denote by $A(\phi,\psi)$ the unique node of maximal length such that $A(\phi,\psi) \leq \phi$ and $A(\phi,\psi) \leq \psi$. A sequence of nodes $(\phi_n)$ is a {\em strongly incomparable sequence}\/ if\\ (a) $\phi_n$ and $\phi_m$ are incomparable if $n \neq m$,\\ (b) no family of admissible segments passes through more than two of the $\phi_n$'s.\\ The first lemma is due to Hagler \cite[Lemma 2]{H}. \begin{lem} {\em (Hagler)} Let $(\phi_n)$ be a sequence of nodes with strictly increasing lengths. Then there exists $N \in {\cal P}_\infty{\em(\N)}$ such that either $(\phi_n)_{n\in N}$ determines a unique branch of $T$, or $(\phi_n)_{n\in N}$ is a strongly incomparable sequence. \end{lem} \begin{lem}\label{ancestor} Let $(\phi(n))$ be a strongly incomparable sequence of nodes such that $|\phi(n)| < |\phi(n+1)|$ for all $n$.
Then for all $m \geq n \geq 1$, $\phi(m) \in \Delta_{A(\phi(n),\phi(n+1))}$. \end{lem} \begin{pf} Otherwise, there exist $m \geq n \geq 1$ such that $\phi(m) \notin \Delta_{A(\phi(n),\phi(n+1))}$. In particular, note that $m \geq n + 2$. Let $\phi_1 = \phi(n)$, and let $\phi_2$ and $\phi_3$ be the ancestors of $\phi(n+1)$ and $\phi(m)$ respectively of length $|\phi(n)|$. Then $\phi_1 \neq \phi_2$ since $\phi(n)$ and $\phi(n+1)$ are incomparable. Also $\phi_1 \neq \phi_3$ and $\phi_2 \neq \phi_3$ since $\phi_1, \phi_2 \in \Delta_{A(\phi(n),\phi(n+1))}$, while $\phi_3 \notin \Delta_{A(\phi(n),\phi(n+1))}$. Let $\psi_1, \psi_2$ be nodes of length $|\phi(m)|$ which are $\geq \phi(n)$ and $\phi(n+1)$ respectively, and let $\psi_3 = \phi(m)$. Then $\{S(\phi_i,\psi_i) : i = 1, 2, 3\}$ is an admissible set of segments. However, $\phi(n) \in S(\phi_1,\psi_1), \phi(n+1) \in S(\phi_2,\psi_2)$, and $\phi(m) \in S(\phi_3,\psi_3)$, violating the strong incomparability of $(\phi(k))$. \end{pf} \begin{lem}\label{inc} Let $m \in$ {\em \N}, and let $(\phi(i,j))^\infty_{i=1}$ be a strongly incomparable sequence of nodes for $1 \leq j \leq m$. Assume that $|\phi(i,j)| = |\phi(i,j')| < |\phi(i+1,j)|$ whenever $i, j, j' \in {\em \N}$, $1 \leq j, j' \leq m$, and that for each $i$, $\{\phi(i,j) : 1 \leq j \leq m\}$ are pairwise distinct. Then there exists $k_0$ such that $\{\phi(i,j) : i \geq k_0, 1 \leq j \leq m\}$ are pairwise incomparable. \end{lem} \begin{pf} Induct on $m$. If $m = 1$, there is nothing to prove. Now assume that the statement is true for $m - 1$ $(m \geq 2)$. Let $\{\phi(i,j) : i \geq 1, 1 \leq j \leq m\}$ be as given. Without loss of generality, we may assume that $\{\phi(i,j) : i \geq 1, 1 \leq j \leq m-1 \}$ are pairwise incomparable. First observe that if $\phi(i_1,j_1) < \phi(i_2,j_2)$ for some $j_1, j_2 \leq m$, then $A(\phi(i_2,j_2),\phi(i_2+1,j_2)) \geq \phi(i_1,j_1)$.
Indeed, since $A(\phi(i_2,j_2),\phi(i_2+1,j_2))$ and $\phi(i_1,j_1)$ share the same descendant $\phi(i_2,j_2)$, they are comparable. Hence if the claim fails, $A(\phi(i_2,j_2),\phi(i_2+1,j_2)) < \phi(i_1,j_1)$. But then the ancestor of $ \phi(i_2+1,j_2)$ of length $|\phi(i_1,j_1)|$, $\phi(i_1,j_1)$, and $\phi(i_1,j_2)$, are distinct. From this it is easy to construct an admissible set of segments which pass through all three nodes $\{\phi(i_1,j_2), \phi(i_2,j_2), \phi(i_2+1,j_2)\}$, in violation of their strong incomparability. This proves the claim. In particular, under the circumstances, Lemma \ref{ancestor} implies $\phi(i,j_2) \in \Delta_{\phi(i_1,j_1)}$ for all $i \geq i_2$. The remainder of the proof is divided into two cases.\\ \noindent\underline{Case 1}\hspace{1em} There exist $j_1 < m$ and $i_1, i_2 \in \N\ $ such that $\phi(i_1,j_1) \leq \phi(i_2,m)$. \\ \indent Note that since $\{\phi(i_1,j) : j \leq m\}$ are pairwise distinct, we must have $\phi(i_1,j_1) < \phi(i_2,m)$. By the observation above, we obtain that $\phi(i,m) \in \Delta_{\phi(i_1,j_1)}$ for all $i \geq i_2$. However, for any $i' \geq i_2, j < m$, $\phi(i',j)$ is incomparable with $\phi(i_1,j_1)$ by induction. Hence $\phi(i',j) \notin \Delta_{\phi(i_1,j_1)}$. Thus $\phi(i,m)$ and $\phi(i',j)$ are incomparable whenever $i, i' \geq i_2$ and $j < m$. This is enough to show that $\{\phi(i,j) : i \geq i_2, 1 \leq j \leq m\}$ are pairwise incomparable.\\ \noindent\underline{Case 2}\hspace{1em} For all $j_1 < m$ and $i_1, i_2 \in \N$, $\phi(i_1,j_1) \not\leq \phi(i_2,m)$. \\ \indent Let $I = \{i :$ there exist $i' \in \N, j < m$ such that $\phi(i,m) < \phi(i',j)\}$. Let $i_1$ and $i_2$ be distinct elements of $I$. Choose $i'_1, i'_2 \in \N$, $j_1, j_2 < m$ such that $\phi(i_k,m) < \phi(i'_k,j_k)$, $k = 1, 2$. By the observation above, $\phi(i,j_k) \in \Delta_{\phi(i_k,m)}$ for all $i \geq i'_k$, $k = 1, 2$. Now $\phi(i_1,m)$ and $\phi(i_2,m)$ are incomparable by assumption. 
Therefore, $\Delta_{\phi(i_1,m)} \cap \Delta_{\phi(i_2,m)} = \emptyset$. Combined with the above, we see that $j_1 \neq j_2$. It follows that $|I| \leq m-1$. Now choose $k_0$ such that $i < k_0$ for all $i \in I$. By the case assumption, $\phi(i,m)$ is incomparable with $\phi(i',j)$ whenever $i \geq k_0$ and $j < m$. This is enough to show that $\{\phi(i,j) : i \geq k_0, 1 \leq j \leq m\}$ are pairwise incomparable. \end{pf} \begin{lem}\label{seven} Let $(x_n)$ be a bounded weakly null sequence in $JH$ so that there is a sequence $0 = j_1 < j_2 < j_3 < \cdots$ with $x_n \in (P_{j_n}-P_{j_{n+1}})JH$ for all $n$. Assume that $\sup_{\xi\in\Gamma}|\langle x_n,\xi\rangle| \leq \epsilon$ for all $n$. Then there is a subsequence $(x_{n_k})$ such that \[ \sup_{\xi\in\Gamma}\sum^\infty_{k=1}|\langle x_{n_k},\xi\rangle| \leq 7\epsilon. \] \end{lem} \begin{pf} For $m, n \in$ \N, let \begin{eqnarray*} F(n,m) & = & \{\phi \in \{0,1\}^{j_n} : \mbox{there exists at least one branch } \xi \mbox{ through } \phi \mbox{ with } \\ & & |\langle x_n,\xi\rangle| >\epsilon/2^m, \mbox{ and for all branches $\xi$ through } \phi,\ |\langle x_n,\xi\rangle| \leq \epsilon/2^{m-1} \}. \end{eqnarray*} Since $(x_n)$ is bounded, $\sup_n|F(n,m)| < \infty$ for all $m$. Let $N_1 \in {\cal P}_\infty(\N)$ be such that $(|F(n,1)|)_{n\in N_1}$ is constant, say, $b_1$. Write $F(n,1) = \{\phi(n,1,i) : 1 \leq i \leq b_1\}$ for all $n \in N_1$. Choose $N'_1 \in {\cal P}_\infty(N_1)$ so that for $1 \leq i \leq b_1$, $(\phi(n,1,i))_{n\in N'_1}$ is either a strongly incomparable sequence or determines a branch. Let \[ I_1 = \{i \leq b_1 : (\phi(n,1,i))_{n\in N'_1} \mbox{ determines a branch}\}, \] and let $\Gamma_1$ be the set of branches determined by some $(\phi(n,1,i))_{n\in N'_1}$ for some $i \in I_1$. Let $L_1 = \{1,\ldots,b_1\}\backslash I_1.$ By Lemma \ref{inc}, there exists $N''_1 \in {\cal P}_\infty(N'_1)$ such that $\{\phi(n,1,i) : n \in N''_1, i \in L_1\}$ are pairwise incomparable. 
Finally, since $\Gamma_1$ is finite, there exists $n_1 \in N''_1$ such that $|\langle x_{n_1},\gamma\rangle| \leq \epsilon/2$ for all $\gamma \in \Gamma_1$. Continue inductively to obtain $n_1 < n_2 < \cdots$, numbers $b_1, b_2, \ldots$, and sets $I_1, I_2, \ldots$, $L_1, L_2, \ldots$, and $\Gamma_1, \Gamma_2, \ldots$ so that \begin{enumerate} \item for all $k \geq m$, $|F(n_k,m)| = b_m$, \item for all $k \geq m$, $F(n_k,m) = \{\phi(n_k,m,i) : i \leq b_m\}$, where $(\phi(n_k,m,i))^\infty_{k=m}$ determines a branch if $i \in I_m$, and is a strongly incomparable sequence if $i \in L_m = \{1,\ldots,b_m\}\backslash I_m$, \item for all $m$, $\Gamma_m$ is the set of branches determined by $(\phi(n_k,m,i))^\infty_{k=m}$ for some $i \in I_m$, \item for all $m$, $\{\phi(n_k,m,i) : m \leq k, i \in L_m\}$ are pairwise incomparable, \item for all $k$, $|\langle x_{n_k},\gamma\rangle| \leq \epsilon/2^k$ for every $\gamma \in \Gamma_1\cup\ldots\cup\Gamma_k$. \end{enumerate} For all $k$, let $G(k) = \{0,1\}^{j_{n_k}}\backslash\cup^k_{m=1}F(n_k,m)$. Then $\{F(n_k,1),\ldots,F(n_k,k),G(k)\}$ is a partition of $\{0,1\}^{j_{n_k}}$ for each $k$. Fix $\xi \in \Gamma$. Say $\xi = (\phi_n)^\infty_{n=0}$, where $|\phi_n| = n$ for all $n \geq 0$. Now let $J_0 = \{k : \phi_{n_k} \in G(k)\}$, and let $J_m = \{k \geq m : \phi_{n_k} \in F(n_k,m)\}$ for all $m \geq 1$. Then $(J_m)^\infty_{m=0}$ is a partition of \N. For all $k$ such that $k \in J_m$ for some $m \geq 1$, choose $i_k$ such that $\phi_{n_k} = \phi(n_k,m,i_k)$. For all $m \geq 1$, let $J_{m,1} = \{k \in J_m : i_k \in I_m\}$, and let $J_{m,2} = \{k \in J_m : i_k \in L_m\}$. Fix $m \geq 1$ such that $J_{m,1} \neq \emptyset$.\\ \noindent\underline{Case 1}\hspace{1em} $J_{m,1}$ is infinite.\\ \indent In this case, there exists $\gamma \in \Gamma_m$ such that $\gamma = \xi$. 
Therefore, \begin{eqnarray*} \sum_{k\in J_{m,1}}|\langle x_{n_k},\xi\rangle| & \leq & \sum^\infty_{k=m}|\langle x_{n_k},\xi\rangle| = \sum^\infty_{k=m}|\langle x_{n_k},\gamma\rangle| \\ & \leq & \sum^\infty_{k=m}\frac{\epsilon}{2^k} = \frac{\epsilon}{2^{m-1}}. \end{eqnarray*} \noindent\underline{Case 2}\hspace{1em} $J_{m,1}$ is finite. \\ \indent Let $k_0 = \max J_{m,1}$. There exists $\gamma \in \Gamma_m$ such that $\phi_{n_{k_0}} \in \gamma$. Then, since $\phi_{n_{k_0}} \in F(n_{k_0},m)$, \begin{eqnarray*} \sum_{k\in J_{m,1}}|\langle x_{n_k},\xi\rangle| & \leq & \sum^{k_0}_{k=m}|\langle x_{n_k},\xi\rangle| = \sum^{k_0-1}_{k=m}|\langle x_{n_k},\gamma\rangle| + |\langle x_{n_{k_0}},\xi\rangle| \\ & \leq & \sum^\infty_{k=m}\frac{\epsilon}{2^k} + \frac{\epsilon}{2^{m-1}} = \frac{\epsilon}{2^{m-2}}. \end{eqnarray*} Therefore, in either case, we have \begin{equation}\label{jmone} \sum_{k\in J_{m,1}}|\langle x_{n_k},\xi\rangle| \leq \frac{\epsilon}{2^{m-2}}. \end{equation} Now suppose for some $m \geq 1$, there are distinct $k_1, k_2 \in J_{m,2}$. Then $k_1, k_2 \geq m$, and $\phi_{n_{k_l}} = \phi(n_{k_l},m,i_{k_l})$, for some $i_{k_l} \in L_m$, $l = 1, 2$. But then by choice, $\phi_{n_{k_1}}$ and $\phi_{n_{k_2}}$ must be incomparable. This is a contradiction since they both belong to the branch $\xi$. Thus, for all $m \geq 1$, $|J_{m,2}| \leq 1$. Now $k \in J_{m,2}$ implies $\phi_{n_k} \in F(n_k,m)$, and hence $|\langle x_{n_k},\xi\rangle| \leq \epsilon/2^{m-1}$. Consequently, for all $m \geq 1$ \begin{equation} \sum_{k\in J_{m,2}}|\langle x_{n_k},\xi\rangle| \leq \frac{\epsilon}{2^{m-1}}. \end{equation} Finally, \begin{equation}\label{jz} \sum_{k\in J_0}|\langle x_{n_k},\xi\rangle| \leq \sum^\infty_{k=1}\frac{\epsilon}{2^k} = \epsilon. 
\end{equation} Combining inequalities (\ref{jmone})--(\ref{jz}), we obtain \begin{eqnarray*} \sum_k|\langle x_{n_k},\xi\rangle| & \leq & \sum^\infty_{m=1}\sum_{k\in J_m}|\langle x_{n_k},\xi\rangle| + \sum_{k\in J_0}|\langle x_{n_k},\xi\rangle| \\ & \leq & \sum^\infty_{m=1}\frac{3\epsilon}{2^{m-1}} + \epsilon = 7\epsilon. \end{eqnarray*} \end{pf} For $n \geq 0$, call a subset $D$ of $\{0,1\}^n \times \{0,1\}^n$ {\em diagonal}\/ if whenever $(\phi_1, \psi_1), (\phi_2, \psi_2)$ are distinct elements of $D$, then $\phi_1 \neq \phi_2$, and $\psi_1 \neq \psi_2$. \begin{lem}\label{diag} Let $n \geq 0$, $\epsilon > 0$, and let $T : JH' \to JH$ be normalized. Let \begin{eqnarray*} A & = & \{(\phi,\psi) \in \{0,1\}^n\times\{0,1\}^n : \mbox{there exist\ } \gamma,\xi \in \Gamma, \\ & & \phi\in\gamma, \psi\in\xi, \mbox{such that\ } |\langle TP'_n\gamma,P'_n\xi\rangle| > \epsilon\}. \end{eqnarray*} Then $|D| \leq 1/\epsilon$ for all diagonal subsets $D$ of $A$. \end{lem} \begin{pf} Let $D = \{(\phi_i,\psi_i): 1 \leq i \leq k\}$ be a diagonal subset of $A$. For each $i$, choose $\gamma_i, \xi_i \in \Gamma$ such that $\phi_i \in \gamma_i, \psi_i \in \xi_i$, and $|\langle TP'_n\gamma_i,P'_n\xi_i\rangle| > \epsilon$. Using the diagonality of $D$, we see that $(P'_n\gamma_i)^k_{i=1}$ and $(P'_n\xi_i)^k_{i=1}$ are both isometrically equivalent to the $\ell^\infty(k)$-basis. For $1 \leq i,j \leq k$, let $a_{ij} = \langle TP'_n\gamma_i,P'_n\xi_j\rangle$. Define $S, R: \ell^\infty(k) \to \ell^1(k)$ by $S(b_1,\ldots,b_k) = (\sum^k_{j=1}a_{ij}b_j)^k_{i=1}$ and $R(b_1,\ldots,b_k) = (a_{ii}b_i)^k_{i=1}$ respectively. Then $\|S\| \leq \|T\| = 1$, and $\|R\| \geq k\epsilon$. However, by \cite[Proposition 1.c.8]{LT}, $\|R\| \leq \|S\|$. Therefore, $|D| = k \leq 1/\epsilon$.
\end{pf} \begin{lem}\label{longlem} Let $(T_n)$ be a normalized weakly null sequence in $JH\tilde{\otimes}_\epsilon JH$ such that there is a sequence $0 = j_1 < j_2 < \cdots$ with $(P_{j_n} - P_{j_{n+1}})T_n(P_{j_n} - P_{j_{n+1}})' = T_n$ for all $n$. Then there is a subsequence $(T_{n_k})$ such that \[ \sum|\langle T_{n_k}\gamma,\xi\rangle| < \infty \] for all $\gamma, \xi \in \Gamma$. \end{lem} \begin{pf} Note that $\|T_n\| = 1$ implies $|\langle T_n\gamma,\xi\rangle| \leq 1$ for all $\gamma, \xi \in \Gamma$. For all $m, n \in \N$, let \begin{eqnarray*} A(n,m) & = & \{(\phi,\psi)\in \{0,1\}^{j_n}\times\{0,1\}^{j_n} : \mbox{there exist branches } \gamma \mbox{ and } \xi \mbox{ through }\\ & & \phi \mbox{ and } \psi \mbox{ respectively so that } |\langle T_n\gamma,\xi\rangle| > 1/2^m, \mbox{ and for all branches} \\ & & \gamma \mbox{ and $\xi$ through $\phi$ and $\psi$ respectively, } |\langle T_n\gamma,\xi\rangle| \leq 1/2^{m-1} \}. \end{eqnarray*} Fix $n$. Let $B(n,1)$ be a maximal diagonal subset of $A(n,1)$. Then let \[ C(n,1) = \bigcup_{(\phi,\psi)\in B(n,1)}\{(\phi_1,\psi_1): \phi_1 = \phi \mbox{ or } \psi_1 = \psi\}. \] Inductively, if $B(n,k)$ and $C(n,k)$ have been chosen for $k < m$, let $B(n,m)$ be a maximal diagonal subset of $A(n,m)\backslash \cup^{m-1}_{k=1}C(n,k)$. Then let \[ C(n,m) = \bigcup_{(\phi,\psi)\in B(n,m)}\{(\phi_1,\psi_1): \phi_1 = \phi \mbox{ or } \psi_1 = \psi\}. \] It is easily seen that \\ (a) $\cup^\infty_{m=1}B(n,m)$ is a diagonal subset of $\{0,1\}^{j_n}\times\{0,1\}^{j_n}$, \\ (b) for all $m$, $B(n,m) \subseteq A(n,m)\cap C(n,m)$, \\ (c) for all $k$, $\cup^k_{m=1}A(n,m) \subseteq \cup^k_{m=1}C(n,m)$.\\ In particular, by (b) and Lemma \ref{diag}, $|B(n,m)| \leq 2^m$ for all $m, n$. Also, if $(\phi,\psi) \in \{0,1\}^{j_n}\times\{0,1\}^{j_n}\backslash\cup^k_{m=1}C(n,m)$, then (c) implies $(\phi,\psi) \notin \cup^k_{m=1}A(n,m)$. 
Hence if $\gamma, \xi \in \Gamma$ pass through $\phi$ and $\psi$ respectively, then \begin{equation}\label{size} |\langle T_n\gamma,\xi\rangle| \leq 1/2^k. \end{equation} Now choose $N_1 \in {\cal P}_\infty(\N)$ such that $|B(n,1)|$ is a constant, say $b_1$, for all $n \in N_1$. Write \[ B(n,1) = \{(\phi(n,1,i),\psi(n,1,i)) : 1 \leq i \leq b_1\} \] for all $n \in N_1$. There exists $N'_1 \in {\cal P}_\infty(N_1)$ such that for each $i \leq b_1$, $(\phi(n,1,i))_{n\in N'_1}$ as well as $(\psi(n,1,i))_{n\in N'_1}$ are either strongly incomparable or determine a branch. Let \[ I_1(\phi) = \{1 \leq i \leq b_1: (\phi(n,1,i))_{n\in N'_1} \mbox{ determines a branch}\}. \] Let the set of branches so determined be denoted by $\Gamma_1(\phi)$. Define $I_1(\psi)$ and $\Gamma_1(\psi)$ similarly with regard to the sequence $(\psi(n,1,i))_{n\in N'_1}$. Since $(T_n)$ is weakly null, so are $(T_n\gamma)$ and $(T'_n\xi)$ for all $\gamma, \xi \in \Gamma$. Thus, because both $\Gamma_1(\phi)$ and $\Gamma_1(\psi)$ are finite sets, Lemma \ref{seven} yields a set $N''_1 \in {\cal P}_\infty(N'_1)$ such that \[ \sup_{\xi\in\Gamma}\sum_{n\in N''_1}|\langle T_n\gamma,\xi\rangle| \leq 7 \hspace{1em} \mbox{for all} \hspace{1em} \gamma \in \Gamma_1(\phi) \] and \[ \sup_{\gamma\in\Gamma}\sum_{n\in N''_1}|\langle\gamma,T'_n\xi\rangle| \leq 7 \hspace{1em} \mbox{for all} \hspace{1em} \xi \in \Gamma_1(\psi). \] Inductively, if $N''_m \in {\cal P}_\infty(\N)$ has been chosen, let $N_{m+1} \in {\cal P}_\infty(N''_m)$ be such that $|B(n,m+1)|$ is a constant, say $b_{m+1}$, for all $n \in N_{m+1}$. For $n \in N_{m+1}$, list \[ B(n,m+1) = \{(\phi(n,m+1,i),\psi(n,m+1,i)) : 1 \leq i \leq b_{m+1}\}, \] choose $N'_{m+1} \in {\cal P}_\infty(N_{m+1})$ such that for each $i \leq b_{m+1}$, $(\phi(n,m+1,i))_{n\in N'_{m+1}}$ as well as $(\psi(n,m+1,i))_{n\in N'_{m+1}}$ are either strongly incomparable or determine a branch.
Let \[ I_{m+1}(\phi) = \{1 \leq i \leq b_{m+1}: (\phi(n,m+1,i))_{n\in N'_{m+1}} \mbox{ determines a branch}\}. \] Let the set of branches so determined be denoted by $\Gamma_{m+1}(\phi)$. Define $I_{m+1}(\psi)$ and $\Gamma_{m+1}(\psi)$ similarly with regard to the sequence $(\psi(n,m+1,i))_{n\in N'_{m+1}}$. If $\gamma \in \Gamma_{m+1}(\phi)$, $\xi \in \Gamma$, and $n \in N'_{m+1}$, let $\phi$ and $\psi$ be the nodes of length $j_n$ in $\gamma$ and $\xi$ respectively. Then $\phi = \phi(n,m+1,i)$ for some $i$. But $(\phi(n,m+1,i),\psi(n,m+1,i)) \in B(n,m+1)$. Therefore, $(\phi,\psi) \in C(n,m+1)$. In particular, $(\phi,\psi) \notin \cup^m_{k=1}C(n,k)$. By equation (\ref{size}), $|\langle T_n\gamma,\xi\rangle| \leq 1/2^m$. Similarly, $|\langle \gamma,T'_n\xi\rangle| \leq 1/2^m$ for all $\gamma \in \Gamma$ and $\xi \in \Gamma_{m+1}(\psi)$. By Lemma \ref{seven}, there is a set $N''_{m+1} \in {\cal P}_\infty(N'_{m+1})$ such that \begin{equation}\label{supt} \sup_{\xi\in\Gamma}\sum_{n\in N''_{m+1}}|\langle T_n\gamma,\xi\rangle| \leq 7/2^m \hspace{1em} \mbox{for all \hspace{1em} $\gamma \in \Gamma_{m+1}(\phi)$}, \end{equation} and \begin{equation}\label{suptp} \sup_{\gamma\in\Gamma}\sum_{n\in N''_{m+1}}|\langle\gamma,T'_n\xi\rangle| \leq 7/2^m \hspace{1em} \mbox{for all \hspace{1em} $\xi \in \Gamma_{m+1}(\psi)$}. \end{equation} Pick $n_1 < n_2 < \cdots$ such that $n_m \in N''_m$ for all $m$, and let \[ D(m) = \{0,1\}^{j_{n_m}}\times\{0,1\}^{j_{n_m}}\backslash\bigcup^m_{k=1}C(n_m,k). \] For all $m$, $\{C(n_m,1),\ldots,C(n_m,m),D(m)\}$ is a partition of $\{0,1\}^{j_{n_m}}\times\{0,1\}^{j_{n_m}}$. Fix $\gamma, \xi \in \Gamma$. We proceed to estimate $\sum_m|\langle T_{n_m}\gamma,\xi\rangle|$. For all $m \in \N$, let $\phi_m$ and $\psi_m$ be the nodes of length $j_{n_m}$ in $\gamma$ and $\xi$ respectively. Define $J_0 = \{m: (\phi_m,\psi_m) \in D(m)\}$, and $J_k = \{m\geq k: (\phi_m,\psi_m) \in C(n_m,k)\}$ for all $k \geq 1$.
Note that $\{J_0, J_1, J_2, \ldots\}$ is a partition of $\N$. If $m \in J_0$, then equation (\ref{size}) yields $|\la T_{n_m}\gamma,\xi\ra| \leq 1/2^m$. Consequently, \begin{equation}\label{jnot} \sum_{m\in J_0}|\la T_{n_m}\gamma,\xi\ra| \leq \sum_{m\in J_0}\frac{1}{2^m} \leq 1. \end{equation} Now fix $k \geq 1$ and $m \in J_k$. Then $m \geq k$ and hence $n_m \in N_k$. Thus $B(n_m,k)$ is listed as $\{(\phi(n_m,k,i),\psi(n_m,k,i)): 1 \leq i \leq b_k\}$. But $(\phi_m,\psi_m) \in C(n_m,k)$. Hence there exists $1 \leq i_m \leq b_k$ such that either $\phi_m = \phi(n_m,k,i_m)$ or $\psi_m = \psi(n_m,k,i_m)$. Let $J_k(\phi) = \{m\in J_k: \phi_m = \phi(n_m,k,i_m)\}$, and let $J_k(\psi) = J_k\backslash J_k(\phi)$. Since $n_m \in N'_k$ as well, we may further subdivide these sets into: \begin{eqnarray*} J_{k,1}(\phi) & = & \{m\in J_k(\phi): i_m \in I_k(\phi)\}, \\ J_{k,2}(\phi) & = & J_{k}(\phi)\backslash J_{k,1}(\phi), \\ J_{k,1}(\psi) & = & \{m\in J_k(\psi): i_m \in I_k(\psi)\}, \\ J_{k,2}(\psi) & = & J_{k}(\psi)\backslash J_{k,1}(\psi). \end{eqnarray*} Now $\{\phi(n_m,k,i_m): m \in J_{k,2}(\phi)\}$ is a subset of the branch $\gamma$. But it is also contained in the union of the strongly incomparable sequences $(\phi(n,k,i))_{n\in N'_{k}}$, $i \notin I_k(\phi)$. Hence $|J_{k,2}(\phi)| \leq 2b_k$. Recalling equation (\ref{size}), we obtain \begin{equation}\label{jphitwo} \sum_{m\in J_{k,2}(\phi)}|\la T_{n_m}\gamma,\xi\ra| \leq \sum_{m\in J_{k,2}(\phi)}\frac{1}{2^{k-1}} \leq \frac{4b_k}{2^k}. \end{equation} Similarly, \begin{equation}\label{jpsitwo} \sum_{m\in J_{k,2}(\psi)}|\la T_{n_m}\gamma,\xi\ra| \leq \frac{4b_k}{2^k}. \end{equation} For any $m \in J_{k,1}(\phi)$, $\phi_m$ belongs to a branch in $\Gamma_k(\phi)$. Let $\tilde{\gamma}$ be a branch in $\Gamma_k(\phi)$ such that $M(\tilde{\gamma}) = \{m\in J_{k,1}(\phi):\phi_m\in\tilde{\gamma}\}$ is non-empty. If $M(\tilde{\gamma})$ is finite, let $m_0$ be its maximal element. 
Then $\phi_{m_0}$ belongs to both branches $\tilde{\gamma}$ and $\gamma$. Therefore, \begin{eqnarray*} \sum_{m\in M(\tilde{\gamma})}|\la T_{n_m}\gamma,\xi\ra| & = & \sum_{m\in M(\tilde{\gamma})\backslash\{m_0\}}|\la T_{n_m}\tilde{\gamma},\xi\ra| + |\la T_{n_{m_0}}\gamma,\xi\ra| \\ & \leq & \sup_{\xi\in\Gamma}\sum_{n\in N''_k}|\la T_n\tilde{\gamma},\xi\ra| + |\la T_{n_{m_0}}\gamma,\xi\ra| \\ & \leq & \frac{7}{2^{k-1}} + \frac{1}{2^{k-1}} = \frac{16}{2^k} \end{eqnarray*} by equations (\ref{supt}) and (\ref{size}) respectively. On the other hand, if $M(\tilde{\gamma})$ is infinite, then $\tilde{\gamma}$ and $\gamma$ coincide. Thus the term containing $T_{n_{m_0}}$ may simply be omitted from the above inequality. Now since $|\Gamma_k(\phi)| \leq b_k$, we obtain \begin{equation}\label{jphione} \sum_{m\in J_{k,1}(\phi)}|\la T_{n_m}\gamma,\xi\ra| \leq \frac{16b_k}{2^k}. \end{equation} Similarly, \begin{equation}\label{jpsione} \sum_{m\in J_{k,1}(\psi)}|\la T_{n_m}\gamma,\xi\ra| \leq \frac{16b_k}{2^k}. \end{equation} Combining equations (\ref{jnot})--(\ref{jpsione}), we see that \[ \sum|\la T_{n_m}\gamma,\xi\ra| \leq \sum^\infty_{k=0}\sum_{m\in J_k}|\la T_{n_m}\gamma,\xi\ra| \leq 1 + \sum^\infty_{k=1}\frac{40b_k}{2^k}. \] To complete the proof, it remains to show that $\sum b_k/2^k < \infty$. Fix $m \in \N$. By property (a) of the sets $B(n,k)$, $\cup^m_{k=1}B(n_m,k)$ is a diagonal subset of $\{0,1\}^{j_{n_m}}\times\{0,1\}^{j_{n_m}}$. Hence $\{\phi(n_m,k,i): i\leq b_k,\ k\leq m\}$ are all distinct, as are $\{\psi(n_m,k,i): i\leq b_k,\ k\leq m\}$. By the definition of $B(n_m,k)$, one can choose, for any $i\leq b_k,\ k\leq m$, branches $\gamma_{k,i}$ and $\xi_{k,i}$, passing through $\phi(n_m,k,i)$ and $\psi(n_m,k,i)$ respectively, so that $|\la T_{n_m}\gamma_{k,i},\xi_{k,i}\ra| > 1/2^k$. 
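To preview how this choice will be used: since each of these $b_k$ diagonal pairings at level $k$ exceeds $1/2^k$ in modulus, summing over $i$ gives \[ \sum_{i=1}^{b_k}|\la T_{n_m}\gamma_{k,i},\xi_{k,i}\ra| > \frac{b_k}{2^k} \hspace{1em} \mbox{for each $k \leq m$}, \] which is the source of the lower bounds $b_k/2^k$ in the diagonal estimate.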
Keeping in mind the assumption on $(T_n)$, we see that this inequality remains valid if $\gamma_{k,i}$ and $\xi_{k,i}$ are replaced by $\delta_{k,i} = P'_{j_{n_m}}\gamma_{k,i}$ and $\zeta_{k,i} = P'_{j_{n_m}}\xi_{k,i}$ respectively. Since $(\delta_{k,i})$ and $(\zeta_{k,i})$ are isometrically equivalent to the $\ell^\infty(\sum^m_{k=1}b_k)$-basis, the map $S : \ell^\infty(\sum^m_{k=1}b_k) \to \ell^1(\sum^m_{k=1}b_k)$ \[ S\bigl((b_{k,i})_{1\leq i\leq b_k,\,1\leq k\leq m}\bigr) = \Bigl(\sum_{k,i}\la T_{n_m}\delta_{k,i},\zeta_{k',i'}\ra b_{k,i}\Bigr)_{1\leq i'\leq b_{k'},\,1\leq k'\leq m} \] has norm $\leq \|T_{n_m}\| = 1$. Then \cite[Proposition 1.c.8]{LT} implies that the ``diagonal'' of $S$ also has norm $\leq 1$. But this means \[ \sum^m_{k=1}\frac{b_k}{2^k} \leq \sum^m_{k=1}\sum^{b_k}_{i=1}|\la T_{n_m}\delta_{k,i},\zeta_{k,i}\ra| \leq 1. \] Since $m$ is arbitrary, $\sum b_k/2^k$ converges, as required. \end{pf} In order to apply Elton's extremal criterion, we need the following. For convenience, we call an element of $JH'$ of the form $P'_m\gamma$, where $m \geq 0$ and $\gamma \in \Gamma$, an $m$-$\infty$ {\em segment}. \begin{lem}\label{ipnset} Let $W$ be the collection of all elements in $JH'$ of the form $\sum^r_{i=1}a_iS_i$, where $r \in {\em \N}$, $\max|a_i| \leq 1$, and there exist $m,\ n$, with $n$ possibly equal to $\infty$, so that $\{S_1,\ldots,S_r\}$ is a set of pairwise disjoint $m$-$n$ segments. Then $W$ is an i.p.n.\ subset of $JH'$. Consequently, $W\times W$ is an i.p.n.\ subset of $JH\tilde{\otimes}_\epsilon JH$. \end{lem} \begin{pf} The second assertion follows from the first by Lemma \ref{pn}. If $\sum^r_{i=1}a_iS_i \in W$, and $x \in JH$, then \[ |\sum^r_{i=1}a_iS_ix| \leq \sum^r_{i=1}|S_ix| \leq \|x\|. \] To complete the proof, it suffices to show that for all $x \in JH$, there is a collection $\{S_1,\ldots,S_r\}$ of disjoint $m$-$n$ segments ($n$ possibly $= \infty$) such that $\|x\| = \sum^r_{i=1}|S_ix|$. Let $x \in JH$ be fixed.
For each $j$, choose an admissible collection of $m_j$-$n_j$ segments $A_j$ such that \begin{equation}\label{norming} \|x\| = \lim_j\sum_{S\in A_j}|Sx|. \end{equation} If $(m_j)$ is unbounded, $\sum_{S\in A_j}|Sx| \leq \|P_{m_j}x\| \to 0$ as $j \to \infty$. Hence $x = 0$, and the result is obvious. If $(n_j)$ is bounded, then so is $(m_j)$. Without loss of generality, we may assume that both $(m_j)$ and $(n_j)$ are constant sequences with finite values, say, $m$ and $n$. But as there are only finitely many sets of admissible $m$-$n$ segments, the limiting value in (\ref{norming}) is attained, and the claim holds. Finally, we consider the case when $(m_j)$ is bounded and $(n_j)$ is unbounded. Going to a subsequence, we may assume that $(m_j)$ has a constant value, say $m$, and $n_j \to \infty$. Then, for each $j$, $|A_j| \leq 2^m$. Using a subsequence again, we may assume that $|A_j| = r$ for some fixed $r$ for all $j$. For each $j$, write $A_j = \{S_1(j),\ldots,S_r(j)\}$. Choose a subsequence $(j_k)$ such that $(S_i(j_k))_k$ converges weak* to some $S_i$ for every $1 \leq i \leq r$. It is easy to see that $\{S_1,\ldots,S_r\}$ is a collection of pairwise disjoint $m$-$\infty$ segments. From equation (\ref{norming}), we deduce that $\|x\| = \sum^r_{i=1}|S_ix|$, as desired. \end{pf} \begin{lem}\label{mainlem} Let $(T_n)$ be as in Lemma \ref{longlem}. Then $[T_n]$ contains a copy of $c_0$. \end{lem} \begin{pf} Let $W$ be as in Lemma \ref{ipnset}. Choose a subsequence $(T_{n_k})$ as given by Lemma \ref{longlem}. Then we have $\sum|\langle T_{n_k}w,v\rangle| < \infty$ for all $(w,v) \in W\times W$. But by Lemma \ref{ipnset}, $W\times W$ is an i.p.n.\ subset of $JH\tilde{\otimes}_\epsilon JH$. Hence Elton's extremal criterion \cite{E} assures us that $[T_{n_k}]$ contains a copy of $c_0$. \end{pf} \begin{thm} The space $JH\tilde{\otimes}_\epsilon JH$ is $c_0$-saturated.
\end{thm} \begin{pf} For all $m \in \N$, let \[ E_m = K_{w*}(JH',(1-P_m)JH) \hspace{1em} \mbox{and} \hspace{1em} F_m = K_{w*}((1-P_m)'JH',JH). \] Both $E_m$ and $F_m$ are isomorphic to a direct sum of a finite number of copies of $JH$, hence they are $c_0$-saturated by Lemma \ref{EF}. Let $(S_n)$ be a normalized basic sequence in $JH\tilde{\otimes}_\epsilon JH$. If there exists a subsequence $(R_n)$ of $(S_n)$ and $m \in \N$\ such that $(R_n)$ is dominated by $((1-P_m)R_n\oplus R_n(1-P_m)') \in E_m\oplus F_m$, then $(R_n)$ is equivalent to a sequence in $E_m\oplus F_m$, which is a $c_0$-saturated space by Lemma \ref{EF}. Thus $[R_n]$, and consequently $[S_n]$, contains a copy of $c_0$. Otherwise, for all $m \in \N$\ and every subsequence $(R_n)$ of $(S_n)$, \[ \inf\left\{\|\sum_na_n(1-P_m)R_n\| + \|\sum_na_nR_n(1-P_m)'\| : (a_n) \in c_{00}, \|\sum a_nR_n\| = 1\right\} = 0. \] Also, it is clear that for any $T \in JH\tilde{\otimes}_\epsilon JH$, $\lim_n(1-P_n)T(1-P_n)' = T$ in norm. Using these observations and a standard perturbation argument, we obtain a normalized block basis of $(S_n)$ which is equivalent to some sequence $(T_n)$ satisfying the hypotheses of Lemma \ref{longlem}. By Lemma \ref{mainlem}, $[T_n]$ contains a copy of $c_0$. Thus, so does $[S_n]$. \end{pf} Hagler \cite{H} proved that, in fact, $JH$ has {\em property}\/ (S): every normalized weakly null sequence has a $c_0$-subsequence. Coupled with the absence of $\ell^1$, property (S) implies $c_0$-saturation. In \cite{KO}, it was also shown that property (S) implies property (u). Thus, we may ask\\ \noindent{\em Question}: Does $JH\tilde{\otimes}_\epsilon JH$ have property (S) or property (u)?
https://arxiv.org/abs/math/9306210
Some stability properties of $c_0$-saturated spaces
A Banach space is $c_0$-saturated if all of its closed infinite dimensional subspaces contain an isomorph of $c_0$. In this article, we study the stability of this property under the formation of direct sums and tensor products. Some of the results are: (1) a slightly more general version of the fact that $c_0$-sums of $c_0$-saturated spaces are $c_0$-saturated; (2) $C(K,E)$ is $c_0$-saturated if both $C(K)$ and $E$ are; (3) the tensor product $JH\tilde{\otimes}_\epsilon JH$ is $c_0$-saturated, where $JH$ is the James Hagler space.
https://arxiv.org/abs/2208.01506
Notes on derived algebraic geometry
These are notes on derived algebraic geometry in the context of animated rings. More precisely, we recall the proof of Toën-Vaquié that the derived stack of perfect complexes is locally geometric in the language of $\infty$-categories. Along the way, we recall the necessary notions in derived commutative algebra and derived algebraic geometry. We also analyze the deformation theory and quasi-coherent modules over derived stacks.
\section{Derived algebraic geometry} \label{sec:der.alg.geo} For this section, we will closely follow \cite[\S2]{TV2}, \cite{AG} and the lecture notes of Adeel Khan \cite{Khan}.\par In \cite{TV2} To\"en and Vezzosi deal with derived algebraic geometry in the model categorical setting. This allows us to translate their results easily to the $\infty$-categorical setting.\par In \cite{AG} Antieau and Gepner deal with spectral algebraic geometry (in the $\infty$-categorical setting). The ideas are more or less the same as the ones we use, and in some parts we can transport proofs one-to-one. But it is important to note that there is no general principle by which derived algebraic geometry (the theory of certain sheaves on animated algebras) follows as a corollary of spectral algebraic geometry (the theory of certain sheaves on $E_{\infty}$-algebras). This is because there is no fully faithful embedding from animated algebras to $E_\infty$-algebras in general. In characteristic $0$ both $\infty$-categories are equivalent and thus the results from \cite{TV2} and \cite{AG} agree. In characteristic $p>0$, however, we have no such relation. Therefore the translation of \cite{AG} to our setting has to be treated with caution. \subsection{Affine derived schemes} \label{sec:affine derived schemes} In the following $R$ will be a ring and $A$ an animated $R$-algebra.\par Let us define the \'etale and fpqc topology. \begin{propdef} Let $B$ be an animated $A$-algebra. \begin{enumerate} \item[(a)] There exists a Grothendieck topology on $\AniAlg{A}^{\op}$, called the \textup{fpqc-topology}, which can be described as follows: A sieve (see \cite[Def. 6.2.2.1]{HTT}) $\Ccal\subseteq (\AniAlg{A}^{op})_{/B}\simeq\AniAlg{B}^{op}$ is a covering sieve if and only if it contains a finite family $(B\rightarrow B_{i})_{i\in I}$ for which the induced map $B\rightarrow \prod_{i\in I}B_{i}$ is faithfully flat.
\item[(b)] There exists a Grothendieck topology on the full subcategory $(\AniAlg{A}^{\et})^{op}$ of \'etale $A$-algebras, called the \textup{\'etale-topology}, which can be described as follows: A sieve $\Ccal\subseteq (\AniAlg{A}^{\et})^{\op}_{/B}\simeq(\AniAlg{B}^{\et})^{\op}$ is a covering sieve if and only if it contains a finite family $(B\rightarrow B_{i})_{i\in I}$ for which the induced map $B\rightarrow \prod_{i\in I}B_{i}$ is faithfully flat (and \'etale, which is automatic). \end{enumerate} \end{propdef} \begin{proof} Let $S$ be the collection of all faithfully flat (resp. faithfully flat and \'etale) morphisms in $\AniAlg{A}$. It is enough to check that $S$ satisfies the properties of \cite[Prop. A.3.2.1]{SAG}. For this, we note that a morphism of animated rings is faithfully flat (resp. faithfully flat and \'etale) if and only if it is after passage to connective $E_{\infty}$-rings. Since the functor $\theta\colon \AniAlg{A}\rightarrow \Einftycn_{\theta(A)}$ is conservative and commutes with limits and colimits (see Proposition \ref{fun E to SCR}), we see that $S$ satisfies the properties of \cite[Prop. A.3.2.1]{SAG} if and only if the collection of all faithfully flat (resp. faithfully flat and \'etale) morphisms in $\Einftycn_{\theta(A)}$ does so. But this follows from \cite[Prop. B.6.1.3]{SAG} (see also \cite[Var. B.6.1.7]{SAG}) (resp. \cite[Prop. B.6.2.1]{SAG}). \end{proof} \begin{defrem} An \textit{affine derived scheme over $A$} is a functor from $\AniAlg{A}$ to spaces (i.e. a presheaf on $\AniAlg{A}^{\op}$) which is equivalent to $\Spec(B)\coloneqq \Hom_{\AniAlg{A}}(B,-)$ for some $B\in\AniAlg{A}$.\par Note that $\Spec(B)$ is an fpqc sheaf by \cite[D. 6.3.5]{SAG} and Proposition \ref{fun E to SCR}. \end{defrem} \begin{defi} Let $\Pbf$ be one of the following properties of a morphism of animated rings: \textit{flat, smooth, \'etale, locally of finite presentation}.
We say that a morphism of affine derived schemes $\Spec(B)\rightarrow \Spec(C)$ has property $\Pbf$ if the underlying homomorphism $C\rightarrow B$ has $\Pbf$. \end{defi} \begin{rem} Let us remark that the above properties of morphisms of affine derived schemes are stable under equivalences, composition, pullbacks and are \'etale local on the source and target. This follows from classical theory (as found for example in \cite{stacks-project}) and Propositions \ref{smooth cotangent} and \ref{lfp perfect cotangent}, noting Proposition \ref{projective lift} and that (perfect) modules satisfy descent (see Remark \ref{perf fpqc local}). \end{rem} \begin{defi} For a discrete ring $A$, we set $$\Spec(A)_{\textup{cl}}\coloneqq \Hom_{\Ring}(A,-)\colon \Ring\rightarrow \Sets$$ to be its underlying classical scheme. We will abuse notation and denote the underlying locally ringed space of $\Spec(A)_{\textup{cl}}$ by the same symbol. \end{defi} \begin{remark} The notation $(-)_{\textup{cl}}$ is introduced since even for a discrete ring $A$ the corresponding derived stack $\Spec(A)$ is a sheaf with values in spaces. Thus, for a (possibly non-discrete) animated ring $B$ the space $\Hom_{\Ani}(A,B)$ need not be discrete, e.g. $\Hom_{\Ani}(\ZZ[X],B)\simeq \Omega^\infty B$. But if, for example, we restrict ourselves to discrete rings $C$, we have $\Hom_{\Ani}(A,C)\simeq \Hom_{\Ring}(\pi_0A,C)$ by adjunction, even when $A$ is not discrete. \end{remark} \subsection{Geometric stacks}\label{sec:geometric-stacks} In this section we closely follow \cite{AG} and \cite{TV2}. In \cite{TV2} the notion of geometric stacks can be found in the context of model categories. Our notion agrees with the notions presented in \cite{TV2} (note that they speak of $n$-representable morphisms rather than $n$-geometric), and we want to remark that the definition of geometric stacks in \cite{AG} is different from ours. The main difference concerns the $0$-geometric stacks.
In our case, any scheme will be $1$-geometric, whereas in \cite{AG} a qcqs scheme with non-affine diagonal will not be $1$-geometric. Nevertheless, as the principle of the definition is analogous, the ideas presented in \cite{AG} oftentimes agree with the ideas presented in \cite{TV2}.\par \begin{defi} Let $A$ be an animated ring. A \textit{derived stack over $A$} is a sheaf of spaces on $(\AniAlg{A}^{\et})^{\op}$. We denote the $\infty$-category of derived stacks over $A$ by $\textup{dSt}_A$. If $A=\ZZ$, we simply say derived stack and denote the $\infty$-category of derived stacks by $\textup{dSt}$. \end{defi} \begin{rem} \label{sheaf localization} Let us remark that this definition makes sense even if we do not assume that $\AniAlg{A}$ is small for any animated ring $A$. The reason for this is that a priori the full subcategory of derived stacks in $\Pcal((\AniAlg{A}^{\et})^{\op})$ is defined as the full subcategory of $S$-local objects, where $S$ consists of monomorphisms $U\hookrightarrow \Spec(A)$ that come from a covering sieve in $(\AniAlg{A}^{\et})^{\op}$. But if we assume smallness of $\AniAlg{A}$, one sees that $\textup{dSt}$ is a topological localization of $\Pcal((\AniAlg{A}^{\et})^{\op})$ (see \cite[Prop. 6.2.2.7]{HTT}), which can come in quite handy, for example in Remark \ref{rem.sheaf.kan}, where we use \cite[Prop. 5.5.4.2]{HTT}. \end{rem} \begin{defi}[\protect{\cite[\S6.2.3]{HTT}}] Let $A$ be an animated ring. A morphism $f\colon X\rightarrow Y$ in $\textup{dSt}_{A}$ is an \textit{effective epimorphism} if the natural map $\colim_{\Delta}\Cv(X/Y)_{\bullet}\rightarrow Y$ in $\Pcal((\AniAlg{A}^{\et})^{\op})$ is an equivalence in $\textup{dSt}_{A}$ after sheafification\footnote{\label{foot.sheafi}We can describe $\textup{dSt}_{A}$ as a localization of $\Pcal(\AniAlg{A}^{\op})$ (see \cite[\S 6.2.2]{HTT}), so we get a functor $L\colon \Pcal(\AniAlg{A}^{\op})\rightarrow \textup{dSt}_{A}$ left adjoint to the inclusion, which we denote as sheafification.}.
\end{defi} \begin{remark}[Effective epimorphisms] \label{effective epi} Let $f\colon \Spec(B)\rightarrow \Spec(A)$ be a morphism of affine derived schemes. By \cite[Prop. 7.2.1.14]{HTT} the map $f$ is an effective epimorphism in $\textup{dSt}$ if and only if its $0$-truncation, in the sense of \cite[\S 5.5.6]{HTT}, is an effective epimorphism. Up to homotopy $\tau_{\leq 0}\Spec(A)$ is given by the sheafification of $\pi_0\Hom_{\Ani}(A,-)$ (see \cite[Prop. 5.5.6.28]{HTT} together with Remark \ref{sheaf localization}). Thus $f$ is an effective epimorphism if and only if the sheafification of the induced map $\pi_0f\colon\pi_0\Hom(B,-)\rightarrow \pi_0\Hom(A,-)$ is an epimorphism (note that a priori \cite[Prop. 7.2.1.14]{HTT} tells us that $\pi_0f^{\sim}$ needs to be an effective epimorphism, but in a $1$-topos every epimorphism is effective (see \cite[IV 7. Thm. 8]{MLM})).\par We will see in Remark \ref{truncation} that the restriction of an affine derived scheme $\Spec(A)$ to $\Ring$ preserves limits and colimits. In particular, any effective epimorphism of affine derived schemes $\Spec(A)\rightarrow \Spec(B)$ induces by adjunction an epimorphism of \'etale sheaves of sets $\Spec(\pi_{0}A)_{\textup{cl}}\rightarrow \Spec(\pi_{0}B)_{\textup{cl}}$, which in turn implies that the morphism of the underlying topological spaces of affine schemes, denoted by $|\Spec(\pi_{0}A)_{\textup{cl}}|\rightarrow|\Spec(\pi_{0}B)_{\textup{cl}}|$, is surjective. So, for example, if $B\rightarrow A$ is flat, then this surjectivity implies that it is in fact faithfully flat. \end{remark} \begin{defi}[\protect{\cite[Def. 1.3.3.1]{TV2}}] We will define geometric morphisms inductively.
\begin{enumerate} \item[(1)] A derived stack is \textit{$(-1)$-geometric} or \textit{affine} if it is equivalent to an affine derived scheme.\par A morphism of derived stacks $X\rightarrow Y$ is \textit{$(-1)$-geometric} or \textit{affine} if for all affine derived schemes $\Spec(A)$ and all morphisms $\Spec(A)\rightarrow Y$ the base change $X\times_{Y}\Spec(A)$ is affine.\par A $(-1)$-geometric morphism of derived stacks $X\rightarrow Y$ is \textit{smooth} (resp. \textit{\'etale}) if for all affine derived schemes $\Spec(A)$ and all morphisms $\Spec(A)\rightarrow Y$ the base change morphism $\Spec(B)\simeq X\times_Y\Spec(A)\rightarrow \Spec(A)$ corresponds to a smooth (resp. \'etale) morphism of animated rings.\\ \par Now let $n\geq 0$. \item[(2)] An \textit{$n$-atlas} of a derived stack $X$ is a family $(\Spec(A_i)\rightarrow X)_{i\in I}$ of morphisms of derived stacks, such that \begin{enumerate} \item[(a)] each $\Spec(A_i)\rightarrow X$ is $(n-1)$-geometric and smooth, and \item[(b)] the induced morphism $\coprod \Spec(A_i)\rightarrow X$ is an effective epimorphism. \end{enumerate} If each of the morphisms $\Spec(A_{i})\rightarrow X$ is \'etale, then we call the $n$-atlas \textit{\'etale}. \par A derived stack is called \textit{$n$-geometric} (resp. \textit{$n$-DM}), if \begin{enumerate} \item[(a)] it has an $n$-atlas (resp. \'etale $n$-atlas), and \item[(b)] the diagonal $X\xrightarrow{\Delta}X\times X$ is $(n-1)$-geometric. \end{enumerate} \item[(3)] A morphism $X\rightarrow Y$ of derived stacks is called \textit{$n$-geometric} (resp. \textit{$n$-DM}) if for all affine derived schemes $\Spec(A)$ and all morphisms $\Spec(A)\rightarrow Y$ the base change $X\times_Y \Spec(A)$ is $n$-geometric (resp.
\textit{$n$-DM}).\par An $n$-geometric morphism $X\rightarrow Y$ of derived stacks is called \textit{smooth} if for all affine derived schemes $\Spec(A)$ and all morphisms $\Spec(A)\rightarrow Y$ the base change $X\times_Y \Spec(A)$ has an $n$-atlas given by a family of affine derived schemes $(\Spec(A_i))_{i\in I}$, such that the induced morphism $A\rightarrow A_i$ is smooth.\par An $n$-DM morphism $X\rightarrow Y$ of derived stacks is called \textit{\'etale} if for all affine derived schemes $\Spec(A)$ and all morphisms $\Spec(A)\rightarrow Y$ the base change $X\times_Y \Spec(A)$ has an \'etale $n$-atlas given by a family of affine derived schemes $(\Spec(A_i))_{i\in I}$, such that the induced morphism $A\rightarrow A_i$ is \'etale.\par \end{enumerate} We call a morphism of derived stacks \textit{geometric} (resp. \textit{DM}) if it is $n$-geometric (resp. $n$-DM) for some $n\geq -1$. \end{defi} \begin{remark} From this definition one can see that an $n$-geometric morphism of derived stacks is automatically $(n+1)$-geometric. \end{remark} \begin{defi} \label{def P-geometric} Let $\Pbf$ be a property of affine derived schemes that is stable under equivalences, pullbacks, compositions and is smooth-local on the source and target. Then we say that a morphism of derived stacks $X\rightarrow Y$ has $\Pbf$ if it is $n$-geometric for some $n$ and, for every morphism $\Spec(B)\rightarrow Y$ from an affine derived scheme, there is an affine $(n-1)$-atlas $(U_i)_{i\in I}$ of the pullback $X\times_Y\Spec(B)$ such that each corresponding morphism $U_i\rightarrow \Spec(B)$ of affine derived schemes has $\Pbf$. \end{defi} \begin{lem} The properties ``locally of finite presentation'', ``flat'' and ``smooth'' of morphisms of affine derived schemes satisfy the conditions of Definition \ref{def P-geometric}.
\end{lem} \begin{proof} For flat and smooth this follows from the definition (note that on $\pi_{0}$ this follows from classical theory, and since smooth covers are in particular flat and on $\pi_{0}$ faithfully flat, we see that the compatibility of the higher homotopy groups in the definition of smooth and flat morphisms is also clear).\par For locally of finite presentation, we use Proposition \ref{lfp perfect cotangent}, the fact that on $\pi_{0}$ this follows from classical theory, and the fact that perfectness of modules can be checked fpqc-locally (we will see this in Remark \ref{perf fpqc local}). \end{proof} Our definition above differs from the theory developed in \cite{TV2}, where the property is assumed to be \'etale-local on the source and base. This would include the property \textit{\'etale}, but it makes no sense for geometric stacks: it would imply that a morphism of affine derived schemes is \'etale if and only if it is smooth-locally \'etale, and the next remark shows that this cannot hold. We suspect that there was a mixup in \cite{TV2}, since in later proofs where the condition \'etale is used, one needs a stronger condition than geometric. This is not surprising, since this problem occurs even in the classical theory of Artin stacks. One can solve this, for example, by assuming that \'etale morphisms are always DM in the sense that after base change to an affine the resulting stack is actually a DM-stack, i.e. has an \'etale cover by schemes. We did this analogously, but want to mention that this is only done for completeness and is not used later on in any of the proofs. \begin{rem} We want to remark that the property \textit{\'etale} is not smooth-local on the base: if it were smooth-local in our context, then it would be smooth-local in the classical theory of schemes, which it is not. \end{rem} \begin{defi} A morphism of derived stacks is called \textit{\'etale} if it is DM and \'etale.
\end{defi} \begin{defi} A morphism of derived stacks $X\rightarrow Y$ is \begin{enumerate} \item an \textit{open immersion} if it is flat, locally of finite presentation and a monomorphism, where flat is in the sense of Definition \ref{def P-geometric} and monomorphism means $(-1)$-truncated in the sense of \cite{HTT}, i.e. the homotopy fibers of $X\rightarrow Y$ are either empty or contractible. \item a \textit{closed immersion} if it is affine and for any $\Spec(B)\rightarrow Y$ the corresponding morphism $X\times_Y\Spec(B)\simeq \Spec(C)\rightarrow \Spec(B)$ induces a surjection $\pi_0B\rightarrow\pi_0C$ of rings. \end{enumerate} \end{defi} \begin{defi} \label{defi immersion} A morphism $f\colon X\rightarrow Y$ of derived stacks is a \textit{locally closed immersion} if for all affine derived schemes $\Spec(A)$ and all morphisms $\Spec(A)\rightarrow Y$ the base change morphism $X\times_{Y}\Spec(A)\rightarrow \Spec(A)$ factors as a closed immersion followed by an open immersion. \end{defi} \begin{rem} The definition of a closed immersion does not impose any monomorphism condition. This makes sense, since we will see that a monomorphism automatically has vanishing cotangent complex (see Lemma \ref{cotangent monomorphism}); in particular, any closed immersion which is of finite presentation on $t_{0}$ and a monomorphism will be \'etale (this can be proven analogously to \cite[Cor. 2.2.5.6]{TV2}). But as many naturally arising closed immersions are not \'etale, e.g. any regular immersion with non-vanishing cotangent complex, the above definition seems to be the one suited to the world of derived algebraic geometry. \end{rem} Let us give an important example of an open immersion of derived stacks. \begin{lem} Let $A$ be an animated $R$-algebra and let $f\in \pi_0A$ be an element. The inclusion $j\colon \Spec(A[f^{-1}])\hookrightarrow \Spec(A)$ is an open immersion. \end{lem} \begin{proof} The proof is given in \cite[Lec. 3 Lem. 4.2]{Khan}.
But for the convenience of the reader, we recall the proof.\par We have to check that $j$ is a monomorphism which is flat and locally of finite presentation. Locally of finite presentation follows from Lemma \ref{localization lfp}. Flatness, i.e. $\pi_0A[f^{-1}]\otimes_{\pi_0A}\pi_iA\simeq \pi_iA[f^{-1}]$, follows from $\pi_i(A[f^{-1}])=\pi_i(A)[f^{-1}]$. To see that it is a monomorphism, we have to show that the homotopy fibers of $\Hom(A[f^{-1}],B)\rightarrow \Hom(A,B)$ for any $B\in \AniAlg{R}$ are either empty or contractible. But this follows from the general property of localization (see Lemma \ref{localization}). \end{proof} \begin{defi} A derived stack is called \textit{separated} if the diagonal is a closed immersion. \end{defi} \begin{defi} A derived stack $X$ is \textit{quasi-compact} if there exists an $n$-atlas consisting of a single affine. A morphism $f\colon X\rightarrow Y$ of derived stacks is \textit{quasi-compact} if for all affine derived schemes $\Spec(A)$ and all morphisms $\Spec(A)\rightarrow Y$ the base change $X\times_{Y}\Spec(A)$ is quasi-compact. \end{defi} \begin{remark} Since affine schemes are separated, we see that affine derived schemes are also separated (note that the diagonal of an affine scheme is representable and that we only have to check that the corresponding ring morphism on $\pi_0$ is surjective, which follows from classical theory). \end{remark} \begin{defi} A derived stack $X$ is \textit{locally geometric} if we can write $X$ as a filtered colimit of geometric derived stacks $X_i$ with open immersions $X_i\hookrightarrow X$.\par We say that a locally geometric stack $X\simeq \colim_{i\in I} X_{i}$ is \textit{locally of finite presentation} if each $X_{i}$ is locally of finite presentation. \end{defi} \begin{defprop} For a morphism of derived stacks $f\colon X\rightarrow Y$, we define $\Im(f)$ as an epi-mono factorisation $X\twoheadrightarrow \Im(f)\hookrightarrow Y$ of $f$ (here ``epi'' means ``effective epimorphism'').
This factorisation is unique up to homotopy. \end{defprop} \begin{proof} The existence of such a factorisation follows from \cite[Ex. 5.2.8.16]{HTT}. The uniqueness up to homotopy follows from \cite[5.2.8.17]{HTT}. \end{proof} \begin{remark} In the reference used in the above proof one shows that the image of a morphism $f\colon X\rightarrow Y$ is equivalent to the $(-1)$-truncation of $f$, which in turn is equivalent to the colimit of the \v{C}ech nerve of $f$ (see \cite[Cor. 6.2.3.5]{HTT} and note that per definition $0$-connective morphisms are effective epimorphisms, see \cite[Def. 6.5.1.10]{HTT}). \end{remark} The following lemmas are clear from the definitions and may seem unnecessarily complicated, but they will enable us to give another definition of open immersions, which shows that we will not have to deal with geometricity of open immersions. \begin{lem} \label{diag of mono} Let $\iota\colon U\hookrightarrow \Spec(A)$ be a monomorphism of derived stacks. Then $U$ has an affine diagonal. \end{lem} \begin{proof} Since $\iota$ is a monomorphism, we see that the diagonal of $\iota$ is an equivalence and hence we conclude. \end{proof} \begin{lem} \label{smooth affine geometric} Let $(A_i)_{i\in I}$ be a family of animated $B$-algebras having the property \textbf{P}, where \textbf{P} is as in Definition \ref{def P-geometric}. Let $U$ denote the image of the natural map $\coprod_{i\in I}\Spec(A_i)\rightarrow \Spec(B)$ and assume that the base change of $\Spec(A_{i})\rightarrow U$ along any morphism $\Spec(C)\rightarrow U$ from an affine derived scheme is smooth (note that this makes sense by Lemma \ref{diag of mono}) and has property \textbf{P}. Then $\coprod_{i\in I} \Spec(A_i)$ is a $0$-atlas for $U$ via the natural map, and in particular $U\hookrightarrow \Spec(B)$ is $0$-geometric and has property \textbf{P}. \end{lem} \begin{proof} This follows from the definitions.
\end{proof} \begin{lem} \label{mono P-geometric} Let $U\hookrightarrow X$ be a monomorphism of derived stacks and let \textbf{P} be a property as in Definition \ref{def P-geometric}. Assume that the base change along any $\Spec(A)\rightarrow X$ has a cover by a disjoint union of affine derived schemes over $A$ such that the conditions of Lemma \ref{smooth affine geometric} are satisfied. Then $U\hookrightarrow X$ is $0$-geometric and has property \textbf{P}. \end{lem} \begin{proof} This follows from Lemma \ref{smooth affine geometric}. \end{proof} \begin{lem} \label{mono geometric} Let $\iota\colon U\hookrightarrow X$ be a geometric monomorphism of derived stacks. Then $\iota$ is $0$-geometric. \end{lem} \begin{proof} This follows from Lemma \ref{mono P-geometric}. \end{proof} \begin{rem} Lemma \ref{mono geometric} implies, for example, that open immersions and locally closed immersions are automatically $0$-geometric. \end{rem} \begin{lem} \label{open immersion geometric} A morphism $U\rightarrow X$ of derived stacks is an open immersion if and only if for any $\Spec(A)\rightarrow X$ the base change is a monomorphism and there is an effective epimorphism $\coprod_{i\in I}\Spec(A_i)\rightarrow \Spec(A)\times_X U$ such that each $\Spec(A_i)\rightarrow\Spec(A)$ is an open immersion. \end{lem} \begin{proof} This follows from Lemma \ref{mono P-geometric}. \end{proof} We can also define derived versions of schemes with the notion of open immersions. \begin{defi} Let $X$ be a derived stack. Then $X$ is a \textit{derived scheme} if it admits a cover $(\Spec(A_i)\hookrightarrow X)_{i\in I}$ such that each $\Spec(A_i)\hookrightarrow X$ is an open immersion (in particular $X$ is $1$-geometric). \end{defi} \begin{remark} If we have a morphism $\coprod_{i\in I} U_i\rightarrow X$ of derived stacks where each $U_i\rightarrow X$ is an open immersion, we sometimes write $\bigcup_{i\in I} U_i$ for its image.
\par If $X\rightarrow Y$ is a morphism of derived stacks, where the diagonal of $Y$ is representable and $X$ is a derived scheme, then the image of $X\rightarrow Y$ is a derived scheme.\par If $\coprod_{i\in I} \Spec(A_i)\rightarrow X$ is a morphism of derived stacks, where $X$ has representable diagonal and $\Spec(A_i)$ are affine open in $X$, then $\bigcup_{i\in I}\Spec(A_i)$ is an open substack of $X$. \end{remark} \begin{remdef}[Truncation] \label{truncation} We give a quick summary of \cite[\S 2.2.4]{TV2}. \par Let $\Abf$ be the model category of simplicial commutative $R$-algebras, as explained in section \ref{sec:simplicial commutative algebras}. The inclusion $\Alg{R}\hookrightarrow \Abf$ has a left adjoint $\pi_0\colon \Abf\rightarrow \Alg{R}$. This is a Quillen adjunction for the trivial model structure on $\Alg{R}$. This induces an adjunction $\begin{tikzcd} \pi_0\colon \AniAlg{R}\arrow[r,"",shift left = 0.8]&\arrow[l,"",shift left = 0.8]\Alg{R}\colon i \end{tikzcd}$. We therefore get adjunctions $$ \begin{tikzcd} i_{!}\colon\Pcal(\Alg{R}^{\op}) \arrow[r,"",shift left = 0.8]&\arrow[l,"",shift left = 0.8]\Pcal(\AniAlg{R}^{\op})\colon i^\ast\colon \Pcal(\AniAlg{R}^{\op})\arrow[r,"",shift left = 0.8]&\arrow[l,"",shift left = 0.8]\Pcal(\Alg{R}^{\op}) \colon \pi_0^\ast \end{tikzcd} $$ (here $i^\ast$ (resp. $\pi_0^\ast$) is defined as the restriction of a presheaf and $\Pcal(\Ccal)$ denotes the $\infty$-category of presheaves of spaces on $\Ccal$ and $i_{!}$ is given by left Kan extension). 
The inclusion $\Alg{R}\hookrightarrow \Abf$ induces an equivalence onto the discrete animated $R$-algebras, so the restriction functor $i^{*}$ preserves sheaves; thus, composing $i_{!}$ with sheafification, we get the adjunction $$ \begin{tikzcd} i_{!}\colon \textup{Shv}_{\et}(\Alg{R})\arrow[r,"",shift left = 0.8]&\arrow[l,"",shift left = 0.8]\textup{Shv}_{\et}(\AniAlg{R})\colon i^\ast\colon\textup{Shv}_{\et}(\AniAlg{R}) \arrow[r,"",shift left = 0.8]&\arrow[l,"",shift left = 0.8] \textup{Shv}_{\et}(\Alg{R})\colon \pi_0^\ast. \end{tikzcd} $$ For convenience later on, we define $t_0\coloneqq i^\ast$ and $\iota\coloneqq i_{!}$. Note that by the general theory of Kan extensions, the functor $\iota$ is indeed fully faithful (see \cite[\S 4.3.2]{HTT}).\par For a derived scheme $X\in\textup{Shv}(\AniAlg{R})^{\et}$, we denote its image under $t_0$ by $X_{\textup{cl}}$ and call it the \textit{underlying classical scheme}. Note that $t_0(\Spec(A)) \simeq \Spec(\pi_0A)_{\textup{cl}}$ and $\iota(\Spec(\pi_0A)_{\textup{cl}})\simeq \Spec(\pi_0A)$ (by adjunction and the fact that any morphism from an animated ring to a discrete ring is characterized by the corresponding morphisms on discrete rings). Also, by adjunction, effective epimorphisms are preserved under $t_{0}$. So if $\coprod_{i\in I}\Spec(A_i)$ is a Zariski atlas of $X$, we see that $\coprod_{i\in I}\Spec(\pi_0A_i)_\textup{cl}$ is a cover of $X_{\textup{cl}}$ and thus $X_{\textup{cl}}$ has values in discrete spaces, i.e. sets (note that \'etale locally any morphism $\Spec(B)\rightarrow X_{\textup{cl}}$ factors through $\coprod_{i\in I}\Spec(\pi_0A_i)_\textup{cl}$; in particular, the points of $X_{\textup{cl}}$ can be computed by the points of its atlas, which are discrete). Hence, $X_{\textup{cl}}$ recovers the classical notion of a scheme.\par Let us state a few interesting properties of $t_0$ and $\iota$.
\begin{enumerate} \item The functor $t_0$ has a right and left adjoint (see above), \item the functor $t_0$ preserves geometricity (here geometricity of sheaves in $\Alg{R}$ is defined similarly to derived stacks, see \cite[\S 2.1.1]{TV2} for further information) and the properties flat, smooth and \'etale along geometric morphisms, \item the functor $\iota$ preserves geometricity, homotopy pullbacks of $n$-geometric stacks along flat morphisms and sends flat (resp. smooth, \'etale) morphisms of $n$-geometric stacks to flat (resp. smooth, \'etale) morphisms of $n$-geometric stacks, \item if $X\in\textup{Shv}(\Alg{R})^{\et}$ is $n$-geometric and $X'\rightarrow \iota(X)$ is a flat morphism, then $X'$ is the image of an $n$-geometric stack under $\iota$. \end{enumerate} A proof for these statements is given in \cite[Prop. 2.2.4.4]{TV2}. \end{remdef} We list some properties of geometric morphisms of derived stacks. \begin{lem} \label{geometric base change} Let $X\rightarrow Z$ and $Y\rightarrow Z$ be morphisms of derived stacks. If $X\rightarrow Z$ is $n$-geometric, then so is $X\times_Z Y\rightarrow Y$. \end{lem} \begin{proof} This follows immediately from the definition. \end{proof} \begin{lem} \label{geometric local} A morphism of derived stacks $X\rightarrow Y$ is $n$-geometric if and only if the base change under $\Spec(A)\rightarrow Y$ for any $A\in \AniAlg{R}$ is $n$-geometric. \end{lem} \begin{proof} This follows immediately from the definitions. \end{proof} \begin{lem} \label{geometric comp} Let $f\colon X\rightarrow Y$ and $g\colon Y\rightarrow Z$ be morphisms of derived stacks. If $f$ and $g$ are $n$-geometric, then so is $g\circ f$. \end{lem} \begin{proof} The proof is straightforward using induction on $n$ (see \cite[Prop. 1.3.3.3 (3)]{TV2}). \end{proof} \begin{prop} \label{diag geometric} Let $f\colon X\rightarrow Y$ be a morphism of derived stacks. Assume $X$ is $n$-geometric and the diagonal $Y\rightarrow Y\times Y$ is $n$-geometric. 
Then $f$ is $n$-geometric. \end{prop} \begin{proof} This is analogous to \cite[Lem. 4.30]{AG}.\par Let $\coprod_{i\in I} \Spec(A_i)\twoheadrightarrow X$ be an $n$-atlas. Consider a morphism $\Spec(A)\rightarrow Y$, where $A$ is an animated ring. Then we have a morphism $\coprod_{i\in I} \Spec(A_i)\times_Y \Spec(A)\rightarrow \coprod_{i\in I} \Spec(A_i\otimes A)$, which is $n$-geometric, since it is the base change of the diagonal under $\coprod_{i\in I} \Spec(A_i\otimes A)$. Therefore $\coprod_{i\in I}\Spec(A_i)\times_Y \Spec(A)$ has an $n$-atlas, say given by $(\Spec(B_j)\rightarrow\coprod_{i\in I}\Spec(A_i)\times_Y \Spec(A))_{j\in J}$, and by Lemma \ref{geometric comp}, we see that $(\Spec(B_j)\rightarrow X\times_Y\Spec(A))_{j\in J}$ is an $n$-atlas. This finishes the proof; to summarize, we obtain the following diagram with pullback squares $$ \begin{tikzcd} \coprod_{j\in J} \Spec(B_j)\arrow[d,"n\textup{-atlas}",two heads,swap]&\coprod_{i\in I} \Spec(A_i\otimes A) &\\ \coprod_{i\in I} \Spec(A_i)\times_Y \Spec(A)\arrow[ru]\arrow[r,"n\textup{-geom.}",two heads]\arrow[d,""]&X\times_Y\Spec(A)\arrow[r,""]\arrow[d,""]&\Spec(A)\arrow[d,""]\\ \coprod_{i\in I} \Spec(A_i)\arrow[r,"n\textup{-atlas}",two heads]&X\arrow[r,""]&Y. \end{tikzcd} $$ \end{proof} \begin{cor} Let $X$ and $Y$ be $n$-geometric stacks. Then any morphism $X\rightarrow Y$ is $n$-geometric. \end{cor} \begin{proof} This follows immediately from the definitions and Proposition \ref{diag geometric}. \end{proof} \begin{prop} \label{diag + proj geometric} Let $X\rightarrow Y$ be an effective epimorphism of derived stacks and suppose that $X$ and $X\times_YX$ are $n$-geometric. Further, assume that the projections $X\times_Y X\rightarrow X$ are $n$-geometric and smooth. Then $Y$ is an $(n+1)$-geometric stack. If in addition $X$ is quasi-compact and $X\rightarrow Y$ is a quasi-compact morphism, then $Y$ is quasi-compact. Finally, if $X$ is locally of finite presentation, then so is $Y$.
\end{prop} \begin{proof} This is analogous to \cite[Lem. 4.29]{AG}.\par Let $\coprod_{i\in I} \Spec(A_i)\twoheadrightarrow X$ be an $n$-atlas. Consider the following diagram with pullback squares $$ \begin{tikzcd} \Spec(A_i)\times_Y \Spec(A_j)\arrow[d,""]\arrow[r,""]&X\times_Y \Spec(A_j)\arrow[d,""]\arrow[r,""]& \Spec(A_j)\arrow[d,""]\\ \Spec(A_i)\times_Y X\arrow[d,""]\arrow[r,""]&X\times_YX\arrow[r,""]\arrow[d,""]&X\arrow[d,""]\\ \Spec(A_i)\arrow[r,""]&X\arrow[r,"f"]&Y. \end{tikzcd} $$ It suffices to show that $\coprod_{i\in I}\Spec(A_i)\rightarrow X\rightarrow Y$ is an $(n+1)$-atlas. Since the projections $X\times_Y X\rightarrow X$ and $\Spec(A_i)\rightarrow X$ are $n$-geometric smooth, we will see that $\Spec(A_i)\rightarrow X\rightarrow Y$ is $n$-geometric smooth, proving our claim.\par Indeed, let $\Spec(C)\rightarrow Y$ be a morphism from an affine derived scheme. Consider the base change $\coprod_{j\in I} \Spec(A_j)\times_Y \Spec(C)\twoheadrightarrow \Spec(C)$, which is an effective epimorphism. In particular, we can find an \'etale covering $\Spec(\widetilde{C})\rightarrow \Spec(C)$, which factors through $\coprod_{j\in I} \Spec(A_j)\times_Y \Spec(C)$. To show that $\Spec(A_i)\times_Y \Spec(C)$ has an $n$-atlas, it suffices to check that the base change with $\widetilde{C}$ has an $n$-atlas (see \cite[Prop. 1.3.3.4]{TV2}). Now let us look at the following diagram with pullback squares $$ \begin{tikzcd} Z\arrow[r,""]\arrow[d," f'"]& W\arrow[r,""]\arrow[d,""]& \Spec(\widetilde{C})\arrow[d,"f"]\\ \coprod_{j\in I} \Spec(A_i)\times_Y \Spec(A_j)\times_Y C\arrow[d,"g'", two heads]\arrow[r,"h"]&\coprod_{j\in I} X\times_Y \Spec(A_j)\times_Y C\arrow[d,"", two heads]\arrow[r,"l"]& \coprod_{j\in I}\Spec(A_j)\times_Y C\arrow[d,"g", two heads]\\ C\times_Y \Spec(A_i)\arrow[d,""]\arrow[r,""]&C\times_YX\arrow[r,""]\arrow[d,""]&C\arrow[d,""]\\ \Spec(A_i)\arrow[r,""]&X\arrow[r,""]&Y. 
\end{tikzcd} $$ Since $g\circ f$ is an affine \'etale effective epimorphism, we know that $g'\circ f'$ is an affine \'etale effective epimorphism. Since the projections are $n$-geometric smooth and by the above the projection $\Spec(A_i)\times_Y \Spec(A_j)\rightarrow \Spec(A_i)$ is $n$-geometric smooth, we see that $l\circ h$ is $n$-geometric smooth. Therefore, $Z$ has an $n$-atlas and since $g'\circ f'$ is affine \'etale, we see that the $n$-atlas of $Z$ gives an $n$-atlas of $\Spec(A_i)\times_Y \Spec(C)$.\par The rest of the statement follows immediately from the definitions. \end{proof} We conclude this section with an important remark. This remark shows, for an animated ring $A$, how open subschemes of $\Spec(\pi_{0}A)_{\textup{cl}}$ can be lifted to derived open subschemes of $\Spec(A)$. In particular, when we want to show that an inclusion of derived stacks is an open immersion, it suffices to show that it is an open immersion after applying $t_{0}$. \begin{remark}[Lifting opens along affines] \label{lift along affine} Let $A$ be an animated ring. Assume we have an open subscheme $U\hookrightarrow \Spec(\pi_0A)_{\textup{cl}}$ of an affine scheme. Let $( \Spec(\pi_0A_{f_i})_{\textup{cl}}\rightarrow U)_{i\in I} $ be an affine open cover by basis elements. Certainly, we can lift this open cover to an open subscheme $V\coloneqq \Im(\coprod_{i\in I}\Spec(A[f_i^{-1}]))$, where the image is taken in $\Spec(A)$. Let $B$ be an animated $A$-algebra with structure morphism $w\colon \Spec(B)\rightarrow \Spec(A)$. Then $w$ factors through a morphism $u\colon \Spec(B)\rightarrow V$, i.e. $u\in V(B)$, if and only if there is an \'etale cover $(B\rightarrow B_j)_{j\in J}$ such that for every $j$ there is an $i$ with $\pi_0w_j(f_i)$ invertible, where $w_j$ is the composition of $w$ with the natural map $B\rightarrow B_j$. \par To see this, assume we have a map $u\colon \Spec(B)\rightarrow V$ of derived $A$-schemes.
Then base change with the affine open cover of $V$ gives an affine open cover $\coprod_{i\in I} \Spec(B_i)$ of $\Spec(B)$ that maps to $\coprod_{i\in I} \Spec(A[f_i^{-1}])$ via projection (note that $V$ is an open subscheme of an affine scheme and thus separated, so in particular the diagonal of $V$ is affine). The projection $\coprod_{i\in I}\Spec(B_i)\rightarrow\coprod_{i\in I} \Spec(A[f_i^{-1}])$ is induced by the termwise projections (note that coproducts in $\infty$-topoi are universal). Thus, by the universal property of localization, we see that $\pi_0w_i(f_i)$ is invertible in $\pi_0B_i$. \par Now assume there is an \'etale cover $( \Spec(B_j)\rightarrow \Spec(B))_{j\in J}$ such that for every $j$ there is an $i$ such that $\Spec(B_j)\rightarrow\Spec(B)\rightarrow\Spec(A)$ factors through $\Spec(A[f_i^{-1}])$. In particular, we get a map $\Spec(B_j)\rightarrow \coprod_{i\in I}\Spec(A[f_i^{-1}])$ and by taking coproducts and fiber products, we get a map $$\Spec(B)\simeq\colim_{\Delta}(\Cv(\coprod_{j\in J}\Spec(B_j)/B)_{\bullet})\rightarrow \coprod_{i\in I}\Spec(A[f_i^{-1}])\rightarrow V.$$ \end{remark} \subsection{Quasi-coherent modules over derived stacks} In this section we take a brief look at quasi-coherent modules over derived stacks. We show that they behave as ``expected''. Namely, quasi-coherent modules over derived stacks still satisfy descent\footnote{We will make this explicit later on, as we did not define Grothendieck topologies on derived stacks.}. We also have pullback and pushforward functors that are adjoint to one another. Further, we show that the $\infty$-category of quasi-coherent modules (in the derived sense) over a (classical) scheme $X$ is equivalent to $\Dcal_{\textup{qc}}(X)$.\par We will closely follow \cite[\S I.3]{GR}, \cite{DAG} and \cite{Khan} and generalize some results following their ideas.
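Before giving the definition, let us record the basic example it is modeled on (a standard sanity check, not taken from a specific reference): for an affine derived scheme $\Spec(A)$, the indexing category of affines over $\Spec(A)$ has the final object $\id_{\Spec(A)}$, so the limit appearing in the definition below collapses to the value there, i.e.
$$
\QQCoh(\Spec(A))\simeq \lim_{\Spec(B)\rightarrow \Spec(A)}\MMod_B\simeq \MMod_A,
$$
and similarly $\QQCoh_{\perf}(\Spec(A))\simeq \MMod_A^{\perf}$.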
\begin{defi} Let $X$ be a presheaf on $\AniAlg{R}^{\op}$. We define the \textit{$\infty$-category of quasi-coherent modules over $X$} to be $$ \QQCoh(X)\coloneqq \lim_{\Spec(A)\rightarrow X}\MMod_A. $$ An element $\Fcal\in\QQCoh(X)$ is called a \textit{quasi-coherent module over $X$} or an \textit{$\Ocal_{X}$-module}. For any affine derived scheme $\Spec(A)$ and any morphism $f\colon\Spec(A)\rightarrow X$, we denote the image of a quasi-coherent module $\Fcal$ under the projection $\QQCoh(X)\rightarrow\MMod_{A}$ by $f^{*}\Fcal$.\par We define the $\infty$-category of \textit{perfect quasi-coherent modules over $X$} to be $$ \QQCoh_{\perf}(X)\coloneqq \lim_{\Spec(A)\rightarrow X}\MMod^{\perf}_A. $$ We say that a perfect quasi-coherent module $\Fcal$ over $X$ has \textit{Tor-amplitude in $[a,b]$} if for every derived affine scheme $\Spec(A)$ and any morphism $f\colon \Spec(A)\rightarrow X$ the $A$-module $f^{*}\Fcal$ has Tor-amplitude in $[a,b]$. \end{defi} \begin{remark} \label{qqcoh right kan} We see that by definition $\QQCoh(-)$ (resp. $\QQCoh_{\perf}(-)$) is a right Kan extension of $\MMod_{-}\colon \AniAlg{R}\rightarrow \ICat$ (resp. $\MMod_{-}^{\perf}\colon \AniAlg{R}\rightarrow\ICat$) onto $\Pcal(\AniAlg{R}^{\op})^{\op}$ along the Yoneda embedding.\par A priori $\MMod_{-}$ is a functor from animated rings to the $\infty$-category of not necessarily small $\infty$-categories. For the purposes of this article, whenever we talk about the right Kan extension along the Yoneda embedding to presheaves on $\AniAlg{\ZZ}$, we assume smallness of the module categories. \end{remark} \begin{rem} \label{right kan of mono} Note that limits preserve monomorphisms in $\ICat$ (as this $\infty$-category has limits).
Therefore, if $F,G\colon \AniAlg{R}\rightarrow\ICat$ are functors and $\alpha\colon F\rightarrow G$ is a natural transformation such that $\alpha(A)$ is a monomorphism for all $A\in\AniAlg{R}$, we see that for the induced morphism $R\alpha$ on the right Kan extensions $RF$ resp. $RG$ of $F$ resp. $G$ under the Yoneda embedding $\AniAlg{R}\hookrightarrow\Pcal(\AniAlg{R}^{\op})^{\op}$ the evaluation on some $X\in\Pcal(\AniAlg{R}^{\op})^{\op}$ yields a monomorphism $R\alpha(X)\colon RF(X)\hookrightarrow RG(X)$.\par This can be applied to see, for example, that for all $X\in\Pcal(\AniAlg{R}^{\op})^{\op}$ the natural morphism $\QQCoh_{\perf}(X)\rightarrow \QQCoh(X)$ is fully faithful, and we can see $\QQCoh_{\perf}(X)$ as a full subcategory of $\QQCoh(X)$. \end{rem} \begin{lem} \label{limit of stable} Let $\Ccal$ be the limit of stable $\infty$-categories $\Ccal_{k}$ indexed by some simplicial set $K$ with finite limit preserving transition maps. Then $\Ccal$ is stable. \end{lem} \begin{proof} Since the $\Ccal_{k}$ have finite limits, we know that $\Ccal$ has finite limits. Then $\Sp(\Ccal)$ is stable by \cite[Cor. 1.4.2.17]{HA}, but by \cite[Rem. 1.4.2.25]{HA}, we know that $\Sp(\Ccal)$ itself is a limit of the tower $\dots\rightarrow \Ccal_{*}\xrightarrow{\Omega}\Ccal_{*}$. In particular, we have $$\Sp(\Ccal)\simeq\Sp(\lim_{K}\Ccal_{\bullet})\simeq\lim_{K}\Sp(\Ccal_{\bullet})\simeq\lim_{K}\Ccal_{\bullet}\simeq \Ccal,$$ where we use \cite[Prop. 1.4.2.21]{HA} for the second to last equivalence. \end{proof} \begin{rem} \label{qcoh stable} By Lemma \ref{limit of stable}, we know that for any $X\in\Pcal(\AniAlg{R}^{\op})$ the $\infty$-category $\QQCoh(X)$ is stable, since it is the limit of stable $\infty$-categories and the transition maps are given by base change (the base change functor preserves fiber sequences, as they are equivalently cofiber sequences, and finite products, which are equivalent to finite coproducts).
\end{rem} The following proposition is a generalization of \cite[\S I.3 Cor. 1.3.11]{GR}, but we can follow the idea of the proof given there. \begin{prop} \label{right kan of sheaf} Let $\Ccal$ be a presentable $\infty$-category and let $F\colon \AniAlg{R}\rightarrow \Ccal$ be a (hypercomplete) sheaf with respect to the Grothendieck topology $\tau\in\lbrace \textup{fpqc, \'etale}\rbrace$ on $\AniAlg{R}$. Let $RF$ denote the right Kan extension of $F$ along the Yoneda embedding $\AniAlg{R}\hookrightarrow\Pcal(\AniAlg{R}^{\op})^{\op}$. Further, let us denote the corresponding $\infty$-topos of (hypercomplete) $\tau$-sheaves on $\AniAlg{R}$ by $\textup{Shv}_{\tau}$. Then for any diagram $p\colon K\rightarrow \Pcal(\AniAlg{R}^{\op})$, where $K$ is a simplicial set, and any morphism $\colim_{K} X_{k}\rightarrow Y$ that becomes an equivalence in $\textup{Shv}_{\tau}$ after sheafification\footnote{Recall${}^{\ref{foot.sheafi}}$ that we can describe $\textup{Shv}_{\tau}$ as a localization of $\Pcal(\AniAlg{R}^{\op})$ (as seen in the proof), so we get a functor $L\colon \Pcal(\AniAlg{R}^{\op})\rightarrow \textup{Shv}_{\tau}$ left adjoint to the inclusion, which we call \textit{sheafification}.}, we have that the natural map $RF(Y)\rightarrow \lim_{K} RF(X_{k})$ is an equivalence. \end{prop} \begin{proof} First, let us note that since $\Ccal$ is presentable, we can find a small subcategory $\Ccal'\subseteq \Ccal$ such that $\Ccal$ is a localization of $\Pcal(\Ccal')$ (see \cite[Thm. 5.5.1.1]{HTT}). In particular, the elements $RF(Y)$ and $\lim_{K} RF(X_{k})$ may be regarded as functors from $\Ccal'^{\op}$ to $\SS$, and the natural morphism is an equivalence if and only if it is an equivalence after evaluating at every $c\in\Ccal'$ (see \cite[01DK]{kerodon}).
We note that the inclusion of $\Ccal$ into $\Pcal(\Ccal')$ preserves limits, and since the evaluation of a functor $G\in \Pcal(\Ccal')$ at $c$ is equivalent to $\Hom_{\Pcal(\Ccal')}(j(c),G)$, where $j\colon \Ccal'\hookrightarrow \Pcal(\Ccal')$ denotes the Yoneda embedding (see \cite[Lem. 5.5.2.1]{HTT}), we see that the evaluation also preserves limits. So it is enough to check that for every $c\in\Ccal'$, the morphism $RF(Y)(c)\rightarrow \lim_{K}(RF(X_{k})(c))$ is an equivalence. In particular, we may replace $F$ by $\Hom_{\Pcal(\Ccal')}(j(c), -) \circ F$ for any $c\in \Ccal'$ and so without loss of generality, we may assume that $\Ccal\simeq \SS$.\par We will first discuss the case of $\tau$-sheaves. By definition of the $\infty$-category $\textup{Shv}_{\tau}$, we know that all $\tau$-sheaves are $S$-local, where $S$ is the collection of those monomorphisms $U\hookrightarrow \Spec(A)$, with $A\in\AniAlg{R}$, that define a $\tau$-covering sieve (see \cite[\S 6.2.2]{HTT} for details). This in particular defines a localization functor $L\colon \Pcal(\AniAlg{R})\rightarrow \Pcal(\AniAlg{R})$ with essential image given by $\textup{Shv}_{\tau}$. Using \cite[Prop. 5.5.4.2]{HTT}, we see that any equivalence in $\textup{Shv}_{\tau}$ is local, i.e. for any morphism $f\colon U\rightarrow V$ in $\Pcal(\AniAlg{R})$ such that $Lf$ is an equivalence and any $Q\in\textup{Shv}_{\tau}$, the natural morphism $$\Hom_{\Pcal(\AniAlg{R})}(V,Q)\rightarrow\Hom_{\Pcal(\AniAlg{R})}(U,Q)$$ is an equivalence.
In particular, in our situation, we have that $$ \Hom_{\Pcal(\AniAlg{R})}(Y,Q)\rightarrow\Hom_{\Pcal(\AniAlg{R})}(\colim_{K}X_{k},Q)\simeq\lim_{K} \Hom_{\Pcal(\AniAlg{R})}(X_{k},Q) $$ is an equivalence (note that the colimit in the second $\Hom$ is taken in the $\infty$-category $\Pcal(\AniAlg{R})$, whereas for $Y\simeq \colim_{K} X_{k}$ the colimit is taken in $\textup{Shv}_{\tau}$; these do not agree in general, since the inclusion $\textup{Shv}_{\tau}\hookrightarrow \Pcal(\AniAlg{R})$ does not preserve colimits).\par Since $F$ is a sheaf with respect to the topology $\tau$, we therefore have $$ \Hom_{\Pcal(\AniAlg{R})}(Y,F)\simeq \lim_{K}\Hom_{\Pcal(\AniAlg{R})}(X_{k},F). $$ Since we can write any presheaf on $\AniAlg{R}$ as a colimit of representable ones (see \cite[Lem. 5.1.5.3]{HTT}), using the Yoneda lemma \cite[Lem. 5.5.2.1]{HTT} we finally obtain the equivalence $$ RF(Y)\xrightarrow{\sim} \lim_{K} RF(X_{k}). $$ \par The case of hypercomplete $\tau$-sheaves is completely analogous, noting that the $\infty$-topos of hypercomplete $\tau$-sheaves can be realized as a localization of $\Pcal(\AniAlg{R})$ with respect to hypercovers (see \cite[Cor. 6.5.3.13]{HTT}). \end{proof} \begin{defi} Let $\Ccal$ be a presentable $\infty$-category and let $\tau$ be the fpqc or \'etale topology on $\AniAlg{R}$. A functor $F\colon \Pcal(\AniAlg{R}^{\op})^{\op}\rightarrow \Ccal$ is a \textit{(hypercomplete) sheaf or satisfies $\tau$-descent} if for any effective epimorphism $X\rightarrow Y$ (resp. a hypercover $X^{\bullet}\rightarrow Y$), we have $$F(Y)\simeq \lim_{\Delta}F(\Cv(X/Y)_{\bullet})\textup{ (resp. }F(Y)\simeq \lim_{\Delta_{s}}F(X^{\bullet})).$$ \end{defi} \begin{rem} \label{rem.sheaf.kan} In the setting of Proposition \ref{right kan of sheaf}, we see that if $F$ is a (hypercomplete) sheaf, then so is its right Kan extension $RF$.
\end{rem} \begin{rem} \label{infty cat presentable} An important example of a presentable $\infty$-category is the $\infty$-category $\ICat$ of small $\infty$-categories. Presentability of this $\infty$-category follows from the fact that it is the $\infty$-category of a combinatorial simplicial model category (marked simplicial sets with the model structure of \cite[Prop. 3.1.5.2]{HTT}), and these are precisely the presentable $\infty$-categories (\cite[Prop. A.3.7.6]{HTT}). \end{rem} \begin{prop} \label{right kan ex derived scheme} Let $\Ccal$ be a presentable $\infty$-category and let $F\colon \AniAlg{R}\rightarrow \Ccal$ be an \'etale sheaf. Let $RF$ denote the right Kan extension of $F$ along the Yoneda embedding $\AniAlg{R}\hookrightarrow \Pcal(\AniAlg{R})^{\op}$. Then for any derived scheme $X$ over $R$, the natural morphism $$ RF(X)\rightarrow \lim_{\substack{U\hookrightarrow X \\ \textup{affine open}}} F(U) $$ is an equivalence. \end{prop} \begin{proof} This lemma is a generalization of \cite[Lec. 1 Prop. 3.5]{Khan} but can be proven in the same way. For the convenience of the reader, we give a proof.\par Let $X$ be a derived scheme and $Y\coloneqq \coprod_{i\in I}\Spec(A_i)\rightarrow X$ be a Zariski atlas. By Remark \ref{rem.sheaf.kan} we have $$ RF(X) \simeq \lim_\Delta RF(\Cv(Y/X)_\bullet). $$ For any affine open $U\hookrightarrow X$ let $Y_U\coloneqq \coprod_{i\in I} U\times_X \Spec(A_i)\rightarrow U$ be the induced cover on $U$. Thus the question reduces to showing the equivalence $$ RF(\Cv(Y/X)_n)\rightarrow \lim_{\substack{U\hookrightarrow X\\ \textup{ affine open}}} RF(\Cv(Y_U/U)_n) , $$ for all $[n]\in\Delta$. By cofinality, we may replace $X$ in the limit argument by $\Cv(Y/X)_n$ for any $n$.\par To see this, note that for every affine open $U\hookrightarrow \Cv(Y/X)_n$, we get a morphism $U\rightarrow U\times_X U\simeq \Cv(Y/X)_n\times_X U \times_{\Cv(Y/X)_n} U \simeq \Cv(Y_U/U)_n\times_{\Cv(Y/X)_n} U\rightarrow \Cv(Y_U/U)_n$.
Thus $$\lim_{\substack{U\hookrightarrow \Cv(Y/X)_n\\ \textup{affine open}}} RF(U) \simeq \lim_{\substack{U\hookrightarrow X\\ \textup{affine open}}} RF(\Cv(Y_U/U)_n).$$ \par Assume that the pairwise intersections of the $\Spec(A_i)$ are affine; then $X$ is affine and the question is trivial. Now assume the pairwise intersections are not affine; then they are open in an affine scheme and thus separated. Thus they admit Zariski covers where each pairwise intersection is affine. Repeating the whole process concludes the proof. \end{proof} \begin{remark} \label{perf fpqc local} Let us remark that the functors $A\mapsto \MMod_{A}$ and $A\mapsto \MMod_A^{\textup{perf}}$ are hypercomplete sheaves for the fpqc topology. The first assertion follows by \cite[Cor. D.6.3.3]{SAG}. The second assertion follows since modules satisfy flat hyperdescent and perfect modules are precisely the dualizable ones (see \cite[Prop. 7.2.2.4]{HA}), so we can construct a dual fpqc-locally (see \cite[Prop. 4.6.1.11]{HA}); see the proof of \cite[Lem. 5.4]{AG} for a more detailed explanation. \end{remark} \begin{rem} \label{descent qcoh} Using the definition of the functors $\QQCoh$ and $\QQCoh_{\perf}$, we see with Remark \ref{perf fpqc local} and Remark \ref{rem.sheaf.kan} that these functors satisfy descent in the sense that for any effective epimorphism of derived stacks $X\twoheadrightarrow Y$, we have $$\QQCoh(Y)\simeq \lim_{\Delta}\QQCoh(\Cv(X/Y)_{\bullet})\textup{ resp. }\QQCoh_{\perf}(Y)\simeq \lim_{\Delta}\QQCoh_{\perf}(\Cv(X/Y)_{\bullet}).$$ \end{rem} \begin{remark} Using Remark \ref{qqcoh right kan} and Proposition \ref{right kan ex derived scheme}, we see that a quasi-coherent module over a derived scheme $X$ is given by a compatible family of modules $(\Fcal_A)_A$, one for every affine open $\Spec(A)\hookrightarrow X$. \end{remark} \begin{remark} Let us recall the derived direct and inverse image.
For a morphism of affine derived schemes $\Spec(B)\rightarrow \Spec(A)$, we get a forgetful functor $\MMod_B\rightarrow \MMod_A$ (this follows from \cite[4.6.2.17]{HA}). This functor is right adjoint to the tensor product $B\otimes_A -$. We can globalize this to the case where we replace the domain by an arbitrary derived stack $X$. Namely, any quasi-coherent module $\Fcal$ over $X$ is determined by its underlying $C$-module $\iota^{*}\Fcal$, for $\iota\colon\Spec(C)\rightarrow X$. Since $C$ is naturally an $A$-algebra, we can forget the $C$-structure and view $\Fcal$ as a limit in $\MMod_A$. The tensor product with each such $C$ also induces a functor from $A$-modules to quasi-coherent $X$-modules. We can also globalize this construction on the base for a geometric morphism of derived stacks, i.e. if $f\colon X\rightarrow S$ is a geometric morphism of derived stacks, we get an adjunction $$ \begin{tikzcd} f^{\ast}\colon \QQCoh(S) \arrow[r,"",shift left = 0.8]&\arrow[l,"",shift left = 0.8] \QQCoh(X)\colon f_\ast, \end{tikzcd} $$ note here that the right adjoint exists formally because the pullback by construction commutes with colimits. If one adds assumptions on $f$, then one can say more about the pushforward, but we will not do this here and refer to \cite[\S 5.5]{DAG}, since it is not of interest for us.\par If we work with classical schemes, we will write $Lf^*$ and $Rf_*$ to differentiate between the classical notions. \end{remark} \begin{prop} \label{derived cat is kan ext} Let $X$ be a scheme. Then we have an equivalence of $\infty$-categories $\Dcal_{\textup{qc}}(X)\simeq \QQCoh(X)$, where $\Dcal_{\textup{qc}}(X)$ denotes the derived $\infty$-category of $\Ocal_{X}$-modules with quasi-coherent cohomology. \end{prop} \begin{proof} This is shown in the spectral setting in \cite{SAG} and can be deduced in our setting from Lurie's PhD thesis \cite{DAG}.
For the convenience of the reader, we will show how to deduce this proposition from both references.\par A scheme $X$ is by definition a locally ringed space $(X,\Ocal_{X})$. We let $X'\coloneqq\textup{Shv}_{\Sets}(X)$ denote the Grothendieck topos associated to the small Zariski-site of $X$. The sheaf of rings $\Ocal_{X}$ can be viewed as a ring object of $X'$. So in particular, the tuple $(X',\Ocal_{X})$ defines a locally ringed topos. We write $\Xcal$ for the $1$-localic $\infty$-topos associated to $X'$ (which exists by \cite[Prop. 6.4.5.7]{HTT}). As explained in \cite[Rem. 1.4.1.5]{SAG}, we can view $\Ocal_{X}$ as a sheaf of connective $0$-truncated $E_{\infty}$-rings (which are just commutative rings) on $\Xcal$, which we denote by $\Ocal_{\Xcal}$, and hence get a spectrally ringed $\infty$-topos $(\Xcal,\Ocal_{\Xcal})$, which is local (we refer to \cite[\S I.1.1]{SAG} for the definitions).\par Since $\Ocal_{\Xcal}$ takes values in commutative rings, we can also view it as a sheaf on $\Xcal$ with values in animated rings. Therefore $(\Xcal,\Ocal_{\Xcal})$ defines a spectral scheme resp. a derived scheme in the sense of \cite[\S I.1]{SAG} resp. \cite{DAG}. Note that the definition of an $\Ocal_{\Xcal}$-module agrees in both references: in both, an $\Ocal_{\Xcal}$-module is an $\Ocal_{\Xcal}$-module object in $\textup{Shv}_{\Sp}(\Xcal)$, where $\Ocal_{\Xcal}$ is naturally seen as a sheaf with values in spectra.\par By \cite[Thm. 4.6.5]{DAG}, we have an equivalence between $\QQCoh(X)$ and the $\infty$-category of sheaves of $\Ocal_{\Xcal}$-modules $M$ on $\Xcal$ such that \begin{enumerate} \item $\pi_{i}M$ (which is defined as the sheafification of the presheaf $V\mapsto \pi_{i}M(V)$) is a quasi-coherent sheaf on the underlying Deligne-Mumford stack of $(\Xcal,\Ocal_{\Xcal})$, and \item the underlying sheaf of spaces of $M$ is a hypersheaf (as explained above, $M$ can be seen as a sheaf on $\Xcal$ with values in $\Sp$, i.e.
a limit preserving functor $M\colon \Xcal^{\op}\rightarrow \Sp$, and composing with $\Omega^{\infty}\colon \Sp\rightarrow \SS$ defines the underlying sheaf of spaces of $M$). \end{enumerate} Using \cite[Prop. 2.2.6.1]{SAG}, we see that $\QQCoh(X)$ is equivalent to the $\infty$-category of quasi-coherent sheaves on the spectral Deligne-Mumford stack $(\Xcal,\Ocal_{\Xcal})$. Now \cite[Cor. 2.2.6.2]{SAG} shows that indeed the derived $\infty$-category of $\Ocal_{X}$-modules with quasi-coherent cohomology is equivalent to $\QQCoh(X)$. \end{proof} \subsection{Deformation theory of derived stacks} This section is derived from \cite[\S 1.4]{TV2}, \cite[\S 4.2]{AG} and \cite[Lecture 5]{Khan}.\par In this section, we will globalize the results of Sections \ref{sec:cotangent affine} and \ref{sec:smooth and et} to geometric morphisms of derived stacks. For this, we will define a global version of the cotangent complex and list its properties. Most importantly, we will show that any $n$-geometric morphism has a cotangent complex and that smooth morphisms are characterized by the cotangent complex. Further, we will use the results to show that geometric stacks are automatically hypercomplete sheaves for the \'etale topology.\par We let $R$ be a ring and assume every derived stack is a derived stack over $R$. \par Let $f\colon X\rightarrow Y$ be a morphism of derived stacks. Let $x\colon \Spec(A)\rightarrow X$ be an $A$-point, where $A$ is an animated $R$-algebra. Let $M\in\MMod^\textup{cn}_A$ and let us look at the commutative square $$ \begin{tikzcd} X(A\oplus M)\arrow[r,""]\arrow[d,""]&X(A)\arrow[d,"f"]\\ Y(A\oplus M)\arrow[r,""]&Y(A), \end{tikzcd} $$ where the horizontal arrows are induced by the canonical projection $A\oplus M\rightarrow A$.
We define the \textit{derivations at the point $x$} as $$ \Der_x(X/Y,M)\coloneqq \fib_x(X(A\oplus M)\rightarrow X(A)\times_{Y(A)}Y(A\oplus M)), $$ where we see $x$ as a point in the target via the natural map induced by $ \Spec(A\oplus M)\rightarrow\Spec(A) \xrightarrow{x}X\xrightarrow{f}Y. $ \begin{defi}[\protect{\cite[Def. 1.4.1.5]{TV2}}] \label{defi.cotangent.global} Let $f\colon X\rightarrow Y$ be a morphism of derived stacks. We say that $L_{f,x}\in\MMod_A$ is a \textit{cotangent complex for $f$ at the point $x\colon \Spec(A)\rightarrow X$} if it is $(-n)$-connective for some $n\geq 0$ and for all $M\in\MMod_A^\textup{cn}$ there is a functorial equivalence $$\Hom_{\MMod_A}(L_{f,x},M)\simeq\Der_x(X/Y,M).$$\par When such an $L_{f,x}$ exists, we say that \textit{$f$ admits a cotangent complex at the point $x$}. If there is no possibility of confusion, we also write $L_{X/Y,x}$ for $L_{f,x}$. We also write $L_X$ if $Y\simeq \Spec(R)$. \end{defi} \begin{defi} Let $f\colon X\rightarrow Y$ be a morphism of derived stacks. We say that $L_f\in \QQCoh(X)$ \textit{is a cotangent complex for $f$} if for all points $x\colon \Spec(A)\rightarrow X$ the $A$-module $x^*L_f$ is a cotangent complex for $f$ at the point $x$.\par If $L_f$ exists, we say that \textit{$f$ admits a cotangent complex}. We will write $L_{f,x}$ instead of $x^*L_f$ if $f$ admits a cotangent complex.\par If $Y\simeq\Spec(R)$ and $L_{f}$ exists, we say that $X$ admits an \textit{absolute cotangent complex}. \end{defi} \begin{remark} By Lemma \ref{connective yoneda ff}, the cotangent complex, for any morphism of derived stacks, is unique up to homotopy. \end{remark} \begin{rem} \label{affines have cotangent} Note that any morphism of affine derived schemes $f\colon \Spec(B)\rightarrow \Spec(A)$ admits a cotangent complex. For any point $x\colon\Spec(C)\rightarrow \Spec(B)$, we have $L_{\Spec(B)/\Spec(A),x}\coloneqq L_{B/A}\otimes_B C$.
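For instance (standard facts about the affine cotangent complex, recalled only for orientation): for a polynomial algebra $B=A[x_{1},\dots,x_{n}]$ one has $L_{B/A}\simeq B^{n}$, free on the differentials $dx_{1},\dots,dx_{n}$, and hence $L_{\Spec(B)/\Spec(A),x}\simeq C^{n}$; and if $A\rightarrow B$ is \'etale, then $L_{B/A}\simeq 0$, so all the $L_{\Spec(B)/\Spec(A),x}$ vanish.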
\end{rem} \begin{lem} \label{properties of cotangent} Let $f\colon X\rightarrow Y$ be a morphism of derived stacks. \begin{enumerate} \item If $X$ and $Y$ admit absolute cotangent complexes, then $f$ admits a cotangent complex and we have the following cofiber sequence for any point $x\colon \Spec(A)\rightarrow X$ $$ L_{Y,f\circ x}\rightarrow L_{X,x}\rightarrow L_{f,x}. $$ \item If $f$ admits a cotangent complex, then for any morphism of derived stacks $Z\rightarrow Y$ and any point $x\colon \Spec(A)\rightarrow X\times_YZ$, we have $$ L_{f,x}\simeq L_{X\times_YZ/Z,x}. $$ \item If for any point $x\colon \Spec(A)\rightarrow X$ the projection $\pr\colon X\times_{Y,f\circ x} \Spec(A)\rightarrow \Spec(A)$ admits a cotangent complex, then $f$ admits a cotangent complex and further we have $$ L_{f,x}\simeq L_{\pr,x}. $$ \item If for any point $x\colon \Spec(A)\rightarrow X$ the stack $X\times_{Y,f\circ x} \Spec(A)$ admits a cotangent complex, then $f$ has a cotangent complex and we have a cofiber sequence $$ L_{\Spec(A),\id_{\Spec(A)}}\rightarrow L_{X\times_Y\Spec(A),x}\rightarrow L_{f,x}. $$ \end{enumerate} \end{lem} \begin{proof} The proof in the model categorical case is given in \cite[Lem. 1.4.1.16]{TV2}. But these properties are straightforward to check.\par Parts \textit{1} and \textit{2} follow from the definitions. Part \textit{3} follows from \textit{2}, and part \textit{4} follows from \textit{1}, \textit{3} and Remark \ref{affines have cotangent}. \end{proof} \begin{lem} \label{relative global cotangent sequence} Let $X\xrightarrow{f} Y\xrightarrow{g} Z$ be morphisms of derived stacks. Assume that $Y/Z$ admits a cotangent complex; then $X/Y$ admits a cotangent complex if and only if $X/Z$ admits a cotangent complex. Further, we obtain a cofiber sequence $$ f^*L_{Y/Z}\rightarrow L_{X/Z}\rightarrow L_{X/Y} $$ of quasi-coherent modules over $X$ if the cotangent complexes exist. \end{lem} \begin{proof} This is stated in \cite[Lec. 5 Prop. 5.7]{Khan}, but we give a proof anyway.
\par Let us take a point $x\colon \Spec(A)\rightarrow X$. We have to show that there is a cofiber sequence $$ L_{Y/Z,f\circ x}\rightarrow L_{X/Z,x}\rightarrow L_{X/Y,x}. $$ By Lemma \ref{connective yoneda ff} it is enough to show that for any connective $A$-module $M$ the fiber of $\Der_{x}(X/Z,M)\rightarrow \Der_{f\circ x}(Y/Z,M)$ at the trivial derivation (given by $A\oplus M\rightarrow A\rightarrow X\rightarrow Y$) is given by $\Der_{x}(X/Y,M)$. But this is clear. \end{proof} \begin{lem} \label{cotangent monomorphism} Let $j\colon X\hookrightarrow Y$ be a monomorphism of derived stacks over $A$. Then $j$ admits a cotangent complex and $L_j\simeq 0$. \end{lem} \begin{proof} The proof is the same as in \cite[Lec. 5 Prop. 5.9]{Khan}, but for the convenience of the reader, we recall it.\par It suffices to show that at any point $x\colon\Spec(B)\rightarrow X$ the cotangent complex is zero, i.e. that the space of derivations at $x$ is contractible. Since $j$ is a monomorphism, i.e. its fibers are $(-1)$-truncated and hence either contractible or empty (see \cite[Def. 5.5.6.8]{HTT}), we know that the canonical map $$ X(B\oplus M)\rightarrow X(B)\times_{Y(B)} Y(B\oplus M) $$ is also a monomorphism (note that any point in the fiber of the above morphism defines a point in the fiber of $X(B\oplus M)\rightarrow Y(B\oplus M)$, which by definition has either contractible or empty fibers). But the composite $u\colon \Spec(B\oplus M)\rightarrow \Spec(B)\rightarrow X$ defines a derivation, so the space of derivations is nonempty and thus contractible. \end{proof} Now we can easily see that the homotopy groups of the localization with respect to one element are given by the localization of the homotopy groups. \begin{lem} \label{homotopy of localization} Let $A$ be an animated ring and $f\in\pi_{0}A=\pi_{0}\Hom_{\MMod_{A}}(A,A)$. Then we have $\pi_{i}(A[f^{-1}])\cong (\pi_{i}A)_{f}$ as $\pi_{0}A$-modules.
\end{lem} \begin{proof} From Proposition \ref{localization} and Lemma \ref{localization lfp} it follows that the map $\Spec(A[f^{-1}])\hookrightarrow \Spec(A)$ is locally of finite presentation and a monomorphism. Since monomorphisms have a vanishing relative cotangent complex (see Lemma \ref{cotangent monomorphism}), we conclude with Proposition \ref{smooth cotangent} that $A\rightarrow A[f^{-1}]$ is \'etale. Hence, we conclude using the definition of \'etale morphisms. \end{proof} \begin{lem} \label{etale cover of triv sq zero ext} Let $(A\rightarrow A_i)_{i\in I}$ be an \'etale covering in $\AniAlg{R}$ and let $M$ be an $A$-module. Then the family induced by base change $(A\oplus M\rightarrow A_i\oplus (M\otimes_A A_i))_{i\in I}$ is an \'etale cover. \end{lem} \begin{proof} Since \'etale covers are stable under base change, we only need to show that $A_i\otimes_A (A\oplus M)\simeq A_i\oplus (M\otimes_A A_i)$. But by construction the functors $A_i\otimes_A (-\oplus M)$ and $(-\otimes_A A_i) \oplus (M\otimes_A A_i)$ from $\SCRMod^{\cn}_R$ to $\AniAlg{A_i}$ commute with sifted colimits, and thus we are reduced to classical commutative algebra, where the claim is clear. \end{proof} \begin{lem} \label{cotangent exists for schemes} Let $f\colon X\rightarrow Y$ be a morphism of derived schemes. Then $f$ admits a cotangent complex. \end{lem} \begin{proof} A proof sketch is given in \cite[Lec. 5 Thm. 5.12]{Khan}; we spell out the details here.\par We may assume that $Y=\Spec(R)$ using Lemma \ref{relative global cotangent sequence}.\par By Proposition \ref{right kan ex derived scheme}, we know that $$\QQCoh(X)\simeq \lim_{\substack{\Spec(B)\hookrightarrow X\\\textup{ open immersion }}} \MMod_B.$$ For each of the open affines $\Spec(B)$ in $X$, we take $L_{\Spec(B)}\coloneqq L_{B/\ZZ}$, viewed as a quasi-coherent sheaf on $\Spec(B)$. Since the cotangent complex is compatible with taking pullbacks, i.e.
$L_{B}\otimes_B A\simeq L_A$ for a commutative triangle $$ \begin{tikzcd} \Spec(A)\arrow[rd,""]\arrow[d,""] \\ \Spec(B)\arrow[r,""]&X, \end{tikzcd} $$ where $\Spec(A),\Spec(B)$ are open in $X$, we see that this indeed defines an object in the limit, which we denote by $L_X$ (for the compatibility, note that the relative cotangent complexes of monomorphisms vanish by Lemma \ref{cotangent monomorphism}). This defines a candidate for the cotangent complex of $X$.\par We have to show that $L_X$ represents the space of derivations. Since we have glued the cotangent complex from affine opens, we will use a descent argument for arbitrary points. For this, we use that modules satisfy fpqc descent and that for any point $x\colon \Spec(B)\rightarrow X$, the base change with an affine open cover of $X$ gives an affine open cover $(B_i)$ of $B$. Hence, for a connective $B$-module $M$, it suffices to show that $$ \lim\Hom_{B_i}(L_{X,B_i},M_i)\simeq \lim \Der_{B_i}(X/\ZZ,M_i), $$ where $M_i\coloneqq M\otimes_B B_i$, which is clear termwise, since each $\Spec(B_i)$ factors through some affine open $\Spec(A_i)$ of $X$ by construction (to write the derivations as a limit use the sheaf property of $X$, $\Spec(\ZZ)$ and Lemma \ref{etale cover of triv sq zero ext}). Note that for each affine open the cotangent complex exists and we claim that $L_{X,B_i}\simeq L_{X,A_i}\otimes_{A_i} B_i\simeq L_{\Spec(A_i),A_i}\otimes_{A_i}B_i\simeq L_{\Spec(A_i),B_i}$, which concludes the lemma.\par To see this, note that $\Der_{B_i}(A_i/X,M_i)\simeq 0$, since $\Spec(A_i)\hookrightarrow X$ is a monomorphism (see Lemma \ref{cotangent monomorphism}). Therefore $\Der_{B_i}(A_i/\ZZ,M_i)\simeq \Der_{B_i}(X/\ZZ,M_i)$ (since the fiber of this map is $\Der_{B_i}(A_i/X,M_i)$); the same holds if we replace the point $B_i$ by $A_i$.
Since the cotangent complexes at $A_i$ exist, we get $L_{\Spec(A_i),A_i}\simeq L_{X,A_i}$ and after tensoring with $B_i$, we see that $L_{X,B_i}$ is a cotangent complex for $X/\ZZ$ at $B_i$ if and only if $\Der_{B_i}(A_i/\ZZ,M_i)\simeq \Der_{B_i}(X/\ZZ,M_i)$; but this we have already seen above, i.e. we have equivalences \begin{align*} \Hom_{B_i}(L_{X,B_i},M_i)\simeq \Hom_{B_i}(L_{\Spec(A_i),B_i},M_i)\simeq \Der_{B_i}(A_i/\ZZ,M_i)\simeq\Der_{B_i}(X/\ZZ,M_i). \end{align*}\par We remark that by this construction and the commutativity of $\tau_{\geq 0}$ with limits, $L_X$ is connective. In particular, we have shown that $L_X$ is a cotangent complex for $X/\ZZ$. \end{proof} \begin{remark} Let us give another construction of the cotangent complex. Consider the functor $L_{-}\colon \AniAlg{\ZZ}\rightarrow \Dcal(R)$ given by the usual cotangent complex, seen as a complex of abelian groups. We denote its right Kan extension along the inclusion $\AniAlg{R}\hookrightarrow \dSch^{\op}_{/\Spec(R)}$ by $\RR L_{-}$. By the above proof, we see that for a derived scheme $X$, we have $L_{X}\simeq \RR L_{X}$ in $\Dcal(R)$. In particular, by stability of the derived $\infty$-category, the same holds for the relative cotangent complex. \end{remark} \begin{defi} We recall the notion of an obstruction theory for derived stacks and for morphisms of derived stacks (see \cite[1.4.2.1, 1.4.2.2]{TV2}). \begin{enumerate} \item[(i)] A derived stack $X$ is called \textit{infinitesimally cartesian} or \textit{inf-cartesian} if and only if for every animated $R$-algebra $A$, connective $A$-module $M$ with $\pi_0M=0$ and derivation $d\in \Der(A,M)$ the pullback square $$ \begin{tikzcd} A\oplus_d M\arrow[r,""]\arrow[d,""]&A\arrow[d,"d"]\\ A\arrow[r,"s"]&A\oplus M, \end{tikzcd} $$ where $s$ denotes the trivial derivation, induces a pullback square $$ \begin{tikzcd} X(A\oplus_d M)\arrow[r,""]\arrow[d,""]&X(A)\arrow[d,"d"]\\ X(A)\arrow[r,"s"]&X(A\oplus M).
\end{tikzcd} $$ \par A morphism $f\colon X\rightarrow Y$ of derived stacks is called \textit{infinitesimally cartesian} or \textit{inf-cartesian} if and only if for every animated $R$-algebra $A$, connective $A$-module $M$ with $\pi_0M=0$ and derivation $d\in \Der(A,M)$ we have a pullback square $$ \begin{tikzcd} X(A\oplus_d M)\arrow[r,""]\arrow[d,""]&Y(A\oplus_d M)\arrow[d,""]\\ X(A)\times_{X(A\oplus M)}X(A)\arrow[r,""]&Y(A)\times_{Y(A\oplus M)}Y(A). \end{tikzcd} $$ \item[(ii)] A derived stack $X$ has an \textit{obstruction theory} if and only if it has a cotangent complex and is infinitesimally cartesian.\par A morphism of derived stacks $f\colon X\rightarrow Y$ has an \textit{obstruction theory} if and only if it has a cotangent complex and is infinitesimally cartesian. \end{enumerate} \end{defi} \begin{defi}[\protect{\cite[Def. 1.2.8.1]{TV2}}] Let $f\colon X\rightarrow Y$ be a morphism of derived stacks. We say $f$ is \textit{formally smooth} if for any $A\in \AniAlg{R}$, any connective $A$-module $M$ with $\pi_0M=0$, and any derivation $d\in \Der_{R}(A,M)$ the natural map $$ \pi_{0}X(A\oplus_d M)\rightarrow \pi_{0}(X(A)\times_{Y(A)}Y(A\oplus_d M)) $$ is surjective. \end{defi} \begin{lem} \label{properties of obstruction} Let $f\colon X\rightarrow Y$ be a morphism of derived stacks. \begin{enumerate} \item If $X$ and $Y$ have an obstruction theory, then $f$ has an obstruction theory. \item If $f$ has an obstruction theory, then for any morphism of derived stacks $Z\rightarrow Y$ the base change $Z\times_Y X\rightarrow Z$ has an obstruction theory. \item If for any $A\in \AniAlg{R}$ and any morphism $\Spec(A)\rightarrow Y$ the base change $X\times_Y \Spec(A)\rightarrow \Spec(A)$ has an obstruction theory, then $f$ has an obstruction theory. \end{enumerate} \end{lem} \begin{proof} This is \cite[Lem.
1.4.2.3]{TV2}, but nevertheless we recall the proof.\par The existence of the cotangent complex follows from Lemma \ref{properties of cotangent}.\par Parts \textit{1} and \textit{2} are clear by definition. For part \textit{3} let $A$ be an animated ring, $M$ a connective $A$-module with $\pi_0M=0$ and $d\in \Der_{R}(A,M)$. We need to show that the diagram $$ \begin{tikzcd} X(A\oplus_d M)\arrow[r,""]\arrow[d,""]&Y(A\oplus_d M)\arrow[d,""]\\ X(A)\times_{X(A\oplus M)}X(A)\arrow[r,""]&Y(A)\times_{Y(A\oplus M)}Y(A) \end{tikzcd} $$ is a pullback diagram. Let $x\in Y(A\oplus_{d} M)$; we claim that it suffices to show that the induced morphism of the fibers of the two horizontal arrows at $x$ is an equivalence.\par Indeed, assume that we have a commutative diagram \begin{equation} \label{eq.fiber.seq.lem} \begin{tikzcd} F\arrow[r,""]\arrow[d,""]& \ast\arrow[d,"x"]\\ X(A\oplus_d M)\arrow[r,""]\arrow[d,""]&Y(A\oplus_d M)\arrow[d,""]\\ X(A)\times_{X(A\oplus M)}X(A)\arrow[r,""]&Y(A)\times_{Y(A\oplus M)}Y(A) \end{tikzcd} \end{equation} where the upper square and the outer square are pullbacks. Let us also consider the pullback diagram $$ \begin{tikzcd} Z\arrow[r,"\alpha"]\arrow[d,""]&Y(A\oplus_d M)\arrow[d,""]\\ X(A)\times_{X(A\oplus M)}X(A)\arrow[r,""]&Y(A)\times_{Y(A\oplus M)}Y(A) \end{tikzcd} $$ (note that naturally $\fib_{x}(\alpha)\simeq F$, as the outer square of (\ref{eq.fiber.seq.lem}) is a pullback diagram). We have a naturally induced morphism of fiber sequences (i.e. a commutative diagram of the form) $$ \begin{tikzcd} F\arrow[r,""]\arrow[d,"\simeq"]& X(A\oplus_d M)\arrow[d,""]\arrow[r,""]&Y(A\oplus_d M)\arrow[d,"\simeq"]\\ \fib_{x}(\alpha)\arrow[r,""]&Z\arrow[r,""]&Y(A\oplus_d M).
\end{tikzcd} $$ The long exact homotopy sequence for fiber sequences in $\SS$ now implies the claim.\par That the upper square and the outer square of (\ref{eq.fiber.seq.lem}) are pullback diagrams follows from the fact that the pullback of $f$ under the morphism corresponding to $x$ has an obstruction theory. \end{proof} The following technical lemma shows how lifts along square-zero extensions are linked to loops in the space of derivations. This is crucial when dealing with formal smoothness of morphisms: lifts of morphisms along square-zero extensions are controlled by the cotangent complex, which is, in some cases, easier to handle. \begin{lem} \label{obstruction morph imp formal smooth} Let $f\colon X\rightarrow Y$ be a morphism of derived stacks, and assume $f$ has an obstruction theory. Let $A\in\AniAlg{R}$, $M$ be a connective $A$-module with $\pi_0M=0$ and $d\in \Der(A,M)$ a derivation. Let $x\in X(A)\times_{Y(A)}Y(A\oplus_d M)$ be a point and $L(x)$ the fiber of $X(A\oplus_d M)\rightarrow X(A)\times_{Y(A)}Y(A\oplus_d M)$ at $x$. Then there exists an element $\alpha(x)\in \pi_0\Hom_{\MMod_A}(L_{f,x},M)$ such that $L(x)\simeq\Omega_{\alpha(x),0}\Hom_{\MMod_A}(L_{f,x},M)$, where we consider the pullback diagram $$ \begin{tikzcd} \Omega_{\alpha(x),0}\Hom_{\MMod_A}(L_{f,x},M)\arrow[r,""]\arrow[d,""]&\ast\arrow[d,"\alpha(x)"]\\ \ast\arrow[r,"0"]&\Hom_{\MMod_A}(L_{f,x},M). \end{tikzcd} $$ \end{lem} \begin{proof} This is \cite[Prop. 1.4.2.6]{TV2}, but for the convenience of the reader we give a proof.\par First, note that $x$ corresponds to a diagram of the form $$ \begin{tikzcd} \Spec(A)\arrow[r,""]\arrow[d,""]&X\arrow[d,""]\\ \Spec(A\oplus_d M)\arrow[r,""]&Y.
\end{tikzcd} $$ After composition with the natural maps, we get $$ \begin{tikzcd} \Spec(A\oplus M)\arrow[r,"d"]\arrow[d,"s"]&\Spec(A)\arrow[r,""]\arrow[d,""]&X\arrow[d,""]\\ \Spec(A)\arrow[r,""]&\Spec(A\oplus_d M)\arrow[r,""]&Y, \end{tikzcd} $$ which gives a point $\alpha(x)\in \Hom_{\textup{dSt}_{A//Y}}(\Spec(A\oplus M),X)\simeq\Der_A(X/Y,M)$. Using that $f$ is inf-cartesian, we get a pullback diagram $$ \begin{tikzcd} L(x)\arrow[r,""]\arrow[d,""]&\ast\arrow[d,"\alpha(x)"]\\ \ast\arrow[r,"0"]&\Der_{A}(X/Y,M). \end{tikzcd} $$ \par To see this, note the following commutative diagram with pullback squares. $$ \begin{tikzcd}[column sep=1ex] L(x)\arrow[r,""]\arrow[d,""]&\ast\arrow[d,"x"]\\ X(A\oplus_d M)\arrow[r,""]\arrow[d,""]&X(A)\times_{Y(A)}Y(A\oplus_d M)\arrow[d,"\alpha"]\\ \ast\arrow[r,"0"]\arrow[d,""]&\Der_{A}(X/Y,M)\arrow[r,""]\arrow[d]&\ast\arrow[d,""]\\ X(A)\times_{Y(A)}Y(A\oplus M)\arrow[r,""]\arrow[d,"\simeq"]&X(A\oplus M)\arrow[r,""]&X(A)\times_{Y(A)}Y(A\oplus M)\\ X(A)\times_{Y(A)\times_{Y(A\oplus M)}Y(A)} Y(A). \end{tikzcd} $$ \end{proof} \begin{lem} \label{affine obstruction} Any affine derived scheme $X\simeq \Spec(B)$ has an obstruction theory. \end{lem} \begin{proof} Certainly, $X$ has a cotangent complex, given by $L_{X,x}\simeq L_B\otimes_B A$ for $x\colon\Spec(A)\rightarrow X$. So we are left to show that $X$ is infinitesimally cartesian. But this follows from the compatibility of the $\Hom$ functor with limits. \end{proof} \begin{lem} \label{affine formal smooth} Let $f\colon X\rightarrow Y$ be a morphism of affine derived schemes. If $f$ is smooth, then it is formally smooth. \end{lem} \begin{proof} This follows from \cite[Prop. 2.2.5.1]{TV2}, but for convenience of the reader, we give a proof in our setting.\par Let $A$ be an animated $R$-algebra, $M$ a connective $A$-module with $\pi_0M=0$ and $d\in \Der(A,M)$ a derivation. Let $x\in \pi_0 (X(A)\times_{Y(A)}Y(A\oplus_d M))$ be a point.
We have to show that the fiber of $$ X(A\oplus_d M)\rightarrow X(A)\times_{Y(A)}Y(A\oplus_d M) $$ along $x$ is nonempty. By Lemma \ref{affine obstruction} and Lemma \ref{obstruction morph imp formal smooth}, it suffices to show that $\pi_0\Hom(L_{B/C}\otimes_B A, M)$ vanishes, where $X\simeq \Spec(B)\rightarrow \Spec(C)\simeq Y$. By Proposition \ref{smooth cotangent} the $A$-module $L_{B/C}\otimes_B A$ is finite projective, in particular a retract of a free module (see \cite[Cor. 7.2.2.9]{HA}), and therefore $\pi_0\Hom(L_{B/C}\otimes_B A, M)$ is a retract of a product of $\pi_0M$, which is zero by hypothesis on $M$. \end{proof} \begin{rem} We want to remark that Lemma \ref{affine formal smooth} holds more generally. An $n$-geometric morphism of derived stacks is smooth if and only if its restriction to $\Ring$ via $t_{0}$ is locally of finite presentation and it is formally smooth. This is a bit technical, but a proof is given for example in \cite[Prop. 2.2.5.1]{TV2}. \end{rem} The next proposition and corollary show how smoothness of a geometric morphism is linked to its cotangent complex. This can be seen as a globalization of Proposition \ref{smooth cotangent}. \begin{prop} \label{cotangent of smooth} Let $f\colon X\rightarrow Y$ be an $n$-geometric morphism of derived stacks. Then $f$ has an obstruction theory. Further, if $f$ is smooth, then $f$ is formally smooth and $L_f$ is perfect with Tor-amplitude in $[-n-1,0]$. \end{prop} \begin{proof} The proof of this proposition in the spectral setting is given in \cite[Prop. 4.45]{AG}; the proof in the derived setting is analogous, but for the convenience of the reader, we give it here.\par We argue by induction on $n$. For the formal smoothness part we will first reduce to the case where $Y$ is affine.\par Indeed, let $B$ be an animated $R$-algebra.
Note that we have to show that for any point $x\in \pi_0(X(B)\times_{Y(B)}Y(B\oplus_d M))$ its fiber under $X(B\oplus_d M)\rightarrow X(B)\times_{Y(B)}Y(B\oplus_d M)$ is nonempty. The point $x$ corresponds to a commutative diagram of the form $$ \begin{tikzcd} \Spec(B)\arrow[r,""]\arrow[d,""]&X\arrow[d,""]\\ \Spec(B\oplus_d M)\arrow[r,""]&Y. \end{tikzcd} $$ After base change, we get a diagram of the form $$ \begin{tikzcd} \Spec(B)\arrow[r,""]\arrow[d,""]&X\times_Y \Spec(B\oplus_d M)\arrow[d,""]\\ \Spec(B\oplus_d M)\arrow[r,"\id"]&\Spec(B\oplus_d M) \end{tikzcd} $$ showing that we can replace $f$ by the projection $X\times_Y \Spec(B\oplus_d M)\rightarrow \Spec(B\oplus_d M)$; in particular, we can assume $Y$ to be affine (this reduction is part of \cite[Prop. 2.2.5.1]{TV2}).\par Further, for the existence of an obstruction theory, we may assume without loss of generality that $Y\simeq \Spec(R)$ (see Lemma \ref{properties of obstruction} and use that affine schemes have an obstruction theory by Lemma \ref{affine obstruction}).\par Let $n=-1$. Then $X\simeq\Spec(A)$ is affine and, if $f$ is smooth, $A$ is a smooth $R$-algebra. In particular, we see with Lemma \ref{affine obstruction} and Proposition \ref{smooth cotangent} that $L_{A/R}$ exists and is finite projective. The formal smoothness follows from Lemma \ref{affine formal smooth}.\par Now assume $n\geq 0$ and let $p\colon U\simeq \coprod_{i\in I}\Spec(A_i)\rightarrow X$ be an $n$-atlas, where the $A_i$ are smooth $R$-algebras. Let $B$ be an animated $R$-algebra, $M$ be a connective $B$-module with $\pi_0M=0$ and $d\in \Der(B,M)$ a derivation.\par \textit{Inf-cartesian.} For this we will follow \cite[Lem. 1.4.3.10]{TV2}.\par By Lemma \ref{etale cover of triv sq zero ext} any \'etale cover $B\rightarrow B'$ gives a cartesian square of the form $$ \begin{tikzcd} B'\oplus_d M\arrow[r,""]\arrow[d,""]&B'\arrow[d,"d"]\\ B'\arrow[r,"s"]&B'\oplus M, \end{tikzcd} $$ which covers the square induced by the derivation.
So to check that $X(B\oplus_d M)\simeq X(B)\times_{X(B\oplus M)}X(B)$, we can pass to an \'etale cover of $B$. Therefore, we may assume that the image $x_1\in X(B)$ of $x\in X(B)\times_{X(B\oplus M)}X(B)$ under the projection lifts to a point $u\in \Spec(A_i)(B)$, for some $i$. Next, we claim that the point $x$ lifts to a point $y\in \Spec(A_i)(B)\times_{ \Spec(A_i)(B\oplus M)} \Spec(A_i)(B)$.\par To see this, consider the following commutative diagram $$ \begin{tikzcd} \Spec(A_i)(B)\times_{ \Spec(A_i)(B\oplus M)} \Spec(A_i)(B)\arrow[r,"f"]\arrow[d,"p"]&X(B)\times_{X(B\oplus M)}X(B)\arrow[d,"q"]\\ \Spec(A_i)(B)\arrow[r,""]&X(B). \end{tikzcd} $$ Let $F(p)$ (resp. $F(q)$) denote the fiber of $u$ (resp. $x_1$) under $p$ (resp. $q$). We get a natural morphism $g\colon F(p)\rightarrow F(q)$. Moreover, the fiber of $f$ along $x$ receives a natural morphism from $\fib_x(g)$. Therefore, to see that $\fib_x(f)$ is nonempty it is enough to show that $\fib_x(g)$ is nonempty. But now $g$ is naturally identified, by definition, with the morphism $\Omega_{d',0}\Der_B(A_i,M)\rightarrow \Omega_{d',0}\Der_B(X,M)$, where $d'$ is the derivation that is given by the image of $u$ (note that $X(B)\rightarrow X(B\oplus M)\rightarrow X(B)$ is equivalent to the identity). Thus the fiber of $g$ is given by $\Omega_{d',0}\Der_B(A_i/X,M)$, which is equivalent to $\Omega_{d',0}\Hom(L_{A_i/X,B},M)$ by induction hypothesis. But now, again by induction hypothesis, we can find an \'etale cover of $B$ such that $\pi_0\Hom(L_{A_i/X,B},M)=0$, since $\pi_0M=0$ by assumption, and therefore $\Omega_{d',0}\Hom(L_{A_i/X,B},M)$ is nonempty.\par Now consider the commutative diagram $$ \begin{tikzcd} \Spec(A_i)(B\oplus_d M)\arrow[r,"a"]\arrow[d,""]& \Spec(A_i)(B)\times_{ \Spec(A_i)(B\oplus M)} \Spec(A_i)(B)\arrow[d,""]\\ X(B\oplus_d M)\arrow[r,""]&X(B)\times_{X(B\oplus M)}X(B). \end{tikzcd} $$ By induction hypothesis this square is a pullback square; further, $a$ is an equivalence by affineness.
Since $x$ lifts to a point $y\in \Spec(A_i)(B)\times_{ \Spec(A_i)(B\oplus M)}\Spec(A_i)(B)$, we see that the fiber at $x$ is given by the fiber of $a$ at $y$, which is nonempty and contractible (since affine schemes are inf-cartesian by Lemma \ref{affine obstruction}). \par \textit{Existence.} Let us look at the fiber $L_f$ of $L_{f\circ p}\rightarrow L_p$. By induction hypothesis $L_p$ and $L_{f\circ p}$ exist, and if $f$ is smooth both are perfect with Tor-amplitude in $[-n,0]$ and $[0,0]$ respectively. In particular, if $f$ is smooth then $L_f$ is perfect and has Tor-amplitude in $[-n-1,0]$. We have to show that $L_f$ satisfies the universal property of the cotangent complex for $f$ at any point. Let $x\colon \Spec(B)\rightarrow X$ be a morphism. We may assume that $x$ factors through $p$: since $p$ is an effective epimorphism, we can pass to an \'etale cover of $B$ over which $x$ factors through $U$. Let $y\colon \Spec(B)\rightarrow U$ be such a factorisation. We get a map $$ F\colon \Der_{y}(U,N)\rightarrow \Der_{x}(X,N) $$ for any connective $B$-module $N$, which is surjective on $\pi_0$.\par To see the surjectivity of $F$, note that by induction hypothesis $p$ is formally smooth. Further, any element $d\in\Der_{x}(X,N)$ corresponds to a diagram of the form $$ \begin{tikzcd} \Spec(B\oplus N) \arrow[r,"d"] & X\\ \Spec(B)\arrow[ur,"x", swap]\arrow[u]. \end{tikzcd} $$ Using the factorization by $U$, we get an element in $U(B)\times_{X(B)} X(B\oplus N)$. By formal smoothness of $p$, we can lift this element to an element in $U(B\oplus N)$ (note that $B\times_{0,B\oplus N[1],0}B\simeq B\oplus N$). But by construction this element has to be a derivation of $U$ at $y$. \par The fiber of $F$ along the derivation induced by $\Spec(B\oplus N)\rightarrow \Spec(B)\xrightarrow{x}X$ is given by $\Der_{y}(U/X,N)$. Thus, we get a fiber sequence $$ \Hom_B(L_p,N)\rightarrow \Hom_B(L_{f\circ p},N)\rightarrow \Der_{x}(X,N).
$$ After delooping\footnote{For a pointed $\infty$-category $\Ccal$ a deloop of an object $c\in\Ccal$ is an object $c'\in\Ccal$ such that $c\simeq \Omega c'$. For $\Ccal = \SS$ the $\infty$-category of spaces, a deloop exists for every grouplike object, in particular for the mapping spaces of modules considered here. This follows from the effectivity of groupoid objects in $\SS$ (see \cite[Cor. 6.1.3.20]{HTT}) (the map $x\rightarrow \ast$, where $x\in \SS$, defines a simplicial object, which extends via the colimit to a \v{C}ech nerve).} and surjectivity of $F$, we see that $\Der_{x}(X,N)$ is the fiber of $$B\Hom_B(L_p,N)\rightarrow B\Hom_B(L_{f\circ p},N),$$ where the prefix ``$B$'' denotes the deloop, and therefore $\Der_x(X,N)\simeq\Hom_B(L_f,N)$.\par To see this, note that the map from $\Der_x(X,N)$ to the fiber of $B\Hom_B(L_p,N)\rightarrow B\Hom_B(L_{f\circ p},N)$ is an equivalence on homotopy groups by the five lemma.\par \textit{Formal smoothness.} This is part of \cite[Prop. 2.2.5.1]{TV2}.\par Assume $f$ is smooth and $x\colon \Spec(B)\rightarrow X$ is a point. By the above, $f$ has an obstruction theory. Therefore by Lemma \ref{obstruction morph imp formal smooth} it suffices to show that $\pi_0\Hom(L_{f,x},M)$ vanishes. But this follows from $$ \pi_0\Hom(L_{f,x},M)\simeq \pi_0 (L_{f,x}^\vee \otimes_B M)\simeq \pi_0L_{f,x}^\vee \otimes_{\pi_0B} \pi_0M\simeq 0, $$ since $\pi_0M=0$ (note that the above construction of the relative cotangent complex implies that $L_{f}$ is perfect in the smooth case and therefore dualizable, with connective dual). \end{proof} \begin{cor} \label{cotangent implies smooth} Let $f\colon X\rightarrow Y$ be an $n$-geometric morphism of derived stacks. Then $f$ is smooth if and only if $t_{0}f$ is locally of finite presentation and $L_f$ is perfect and has Tor-amplitude in $[-n-1,0]$. \end{cor} \begin{proof} The proof is the same as in the spectral setting presented in \cite[Prop. 4.46]{AG} using Proposition \ref{cotangent of smooth}.
But for the convenience of the reader, we recall the proof.\par We may assume that $Y\simeq\Spec(A)$ is affine (use Lemma \ref{properties of cotangent}). Let us fix an $n$-atlas $p\colon U\coloneqq \coprod_{i\in I} \Spec(T_i)\rightarrow X$. The ``only if'' part is Proposition \ref{cotangent of smooth} and the fact that the $T_i$ are smooth $A$-algebras by construction and thus are locally of finite presentation (see Proposition \ref{smooth cotangent}).\par For the ``if'' part assume the $\pi_{0}T_i$ are locally of finite presentation over $\pi_{0}A$, and that $L_f$ exists, is perfect and has Tor-amplitude in $[-n-1,0]$. We have a cofiber sequence $$ p^*L_f\rightarrow L_{U/A}\rightarrow L_p, $$ where by construction $L_f$ and $L_p$ are perfect with Tor-amplitude in $[-n-1,0]$ and $[-n,0]$ respectively (for the existence and Tor-amplitude of $L_p$, we use Proposition \ref{cotangent of smooth}). Therefore, $L_{U/A}$ is also perfect with Tor-amplitude in $[-n-1,0]$. But since $U$ is a disjoint union of affines, $L_{U/A}$ is connective, so its Tor-amplitude is concentrated in $[0,0]$ and thus $L_{U/A}$ is finite projective, which implies that $\Spec(T_i)\rightarrow \Spec(A)$ is smooth (see Proposition \ref{smooth cotangent}). \end{proof} \begin{cor} \label{global lfp cotangent} Let $f\colon X\rightarrow Y$ be an $n$-geometric morphism of derived stacks locally of finite presentation. Then $L_{f}$ is perfect. \end{cor} \begin{proof} Per definition of perfect quasi-coherent modules over a derived stack, we have to check that for any point $x\colon \Spec(A)\rightarrow X$ the cotangent complex $L_{f,x}$ is a perfect $A$-module. By Lemma \ref{properties of cotangent}, we know that the cotangent complex of the projection $\pr\colon X\times_{Y,f\circ x}\Spec(A)\rightarrow\Spec(A)$ at the point induced by $x$ is equivalent to $L_{f,x}$.
So without loss of generality, we may assume that $Y\simeq \Spec(B)$ is affine.\par Since $f$ is $n$-geometric and locally of finite presentation, we know that there exists an $n$-atlas $(p_{i}\colon\Spec(A_{i})\rightarrow X)_{i\in I}$ such that the $A_{i}$ are locally of finite presentation over $B$. Since perfect quasi-coherent modules satisfy fpqc descent (see Remark \ref{descent qcoh}), we have that $L_{f}\in\QQCoh_{\perf}(X)$ if and only if each $p_{i}^{*}L_{f}$ is perfect. But by Lemma \ref{relative global cotangent sequence}, we have the following cofiber sequence $$ p_{i}^{*}L_{f}\rightarrow L_{\Spec(A_{i})/\Spec(B)}\rightarrow L_{p_{i}}. $$ Since $A_{i}$ is locally of finite presentation over $B$, we know by Proposition \ref{lfp perfect cotangent} that $L_{\Spec(A_{i})/\Spec(B)}$ is perfect, and since by definition $p_{i}$ is smooth, we have with Proposition \ref{cotangent of smooth} that indeed $p_{i}^{*}L_{f}$ is perfect. \end{proof} The last part of this section is dedicated to showing that a geometric derived stack $X$ is automatically hypercomplete for the \'etale topology. The idea is to show that for any animated $R$-algebra $A$, we have $X(A)\simeq \lim_{n}X(A_{\leq n})$. Then we reduce to the case where we look at $n$-truncated sheaves, which are always hypercomplete. \begin{lem} \label{nilcomplete} Let $X$ be an $n$-geometric derived stack for some $n\geq -1$. Then the natural morphism $X\rightarrow \lim_{m}X\circ\tau_{\leq m}$ is an equivalence; in particular, $X(A)\simeq\lim_{m}X(A_{\leq m})$ for every animated ring $A$. \end{lem} \begin{proof} This is analogous to \cite[Prop. 5.3.7]{DAG}.\par We argue by induction on $n$. The claim is certainly true if $X$ is affine. So assume that $n\geq 0$ and let $p\colon U\coloneqq \coprod_{i\in I}\Spec(A_{i})\rightarrow X$ be an $n$-atlas.\par By definition $U\times_{X} U$ is $(n-1)$-geometric, and the same holds for every successive fiber product, i.e. every term of the \v{C}ech nerve $\Cv(U/X)_{\bullet}$ is $(n-1)$-geometric.
Since $p$ is an effective epimorphism, we have that the natural map $\colim_{\Delta}\Cv(U/X)_{\bullet}\rightarrow X$ is an equivalence. By induction hypothesis, we have for every $[m]\in \Delta$ that the natural map $\Cv(U/X)_{[m]}\rightarrow \lim_{n}\Cv(U/X)_{[m]}\circ\tau_{\leq n}$ is an equivalence. Thus, also its colimit over $\Delta$ is an equivalence, so we get an induced commutative diagram $$ \begin{tikzcd} \colim_{\Delta}\Cv(U/X)_{\bullet}\arrow[r,""]\arrow[d,""]&\colim_{\Delta}\lim_{n}\Cv(U/X)_{\bullet}\circ\tau_{\leq n}\arrow[d,""]\\ X\arrow[r,""]& \lim_{n}X\circ\tau_{\leq n}, \end{tikzcd} $$ where the top arrow and the left vertical arrow are equivalences and the right vertical arrow is a monomorphism. Thus the bottom horizontal arrow is an equivalence if $\lim_{n} U\circ\tau_{\leq n}\rightarrow \lim_{n}X\circ \tau_{\leq n}$ is an effective epimorphism. Let $x\in \lim_{n}X(A_{\leq n})$ and consider the projection onto $X(A_{\leq 0})$, denoted by $x_{0}$. Then we can find an \'etale cover $\widetilde{\pi_{0}A}$ of $A_{\leq 0}\simeq \pi_{0}A$ such that $x_{0}$ has a lift in $U(\widetilde{\pi_{0}A})$. By Proposition \ref{lift etale} there is an \'etale cover $\widetilde{A}$ of $A$ such that $\pi_{0}\widetilde{A}\simeq \widetilde{\pi_{0}A}$. In particular, we see that we can lift the image of $x_{0}$ in $X(\widetilde{A}_{\leq 0})$ under $U(\widetilde{A}_{\leq 0})\rightarrow X(\widetilde{A}_{\leq 0})$. Now let $x_{n}$ be the image of $x$ in $X(A_{\leq n})$. We will show the result by induction on $n$. Assume the argument holds for $n-1$; in particular, let $u_{n-1}$ be the lift of $x_{n-1}$ under $U(\widetilde{A}_{\leq n-1})\rightarrow X(\widetilde{A}_{\leq n-1})$. It is enough to prove that we can find a lift of $(u_{n-1},x_{n})$ under $U(\widetilde{A}_{\leq n})\rightarrow U(\widetilde{A}_{\leq n-1})\times_{X(\widetilde{A}_{\leq n-1})}X(\widetilde{A}_{\leq n})$, since then for all $n\in \NN_{0}$ there is a lift $u_{n}$ of $x_{n}$ compatible with the maps in the limit, i.e.
we get an element $u\in\lim_{n}U(\widetilde{A}_{\leq n})$ that maps to the image of $x$ in $\lim_{n}X(\widetilde{A}_{\leq n})$. But this follows from the formal smoothness of $U\rightarrow X$ (see Proposition \ref{cotangent of smooth}) and the fact that the map $A_{\leq n}\rightarrow A_{\leq n-1}$ is a square zero extension (see Lemma \ref{postnikov square zero}). \end{proof} \begin{lem} \label{geometric truncated} Let $X\rightarrow Y$ be an $n$-geometric morphism and $A$ be a $k$-truncated animated $R$-algebra. Then $X(A)\rightarrow Y(A)$ is $(n+k+1)$-truncated. \end{lem} \begin{proof} This is a consequence of Lemma \ref{nilcomplete} and analogous to \cite[Cor. 5.3.8]{DAG}.\par We have to show that the fiber of $X(A)\rightarrow Y(A)$ is $(n+k+1)$-truncated. By Lemma \ref{nilcomplete} it suffices to show that for all $j\in \NN_{0}$ with $j\leq k$ the map $X(A_{\leq j})\rightarrow X(A_{\leq j-1})\times_{Y(A_{\leq j-1})}Y(A_{\leq j})$ is $(n+j+1)$-truncated. But from Lemma \ref{postnikov square zero} and Lemma \ref{obstruction morph imp formal smooth} the fiber of the previous map is given by the loop of $\Hom_{\MMod_{A}}(L_{X/Y,A}, \pi_{j}A[j+1])$, which is equivalent to $\Hom_{\MMod_{A}}(L_{X/Y,A}, \pi_{j}A[j])$ and hence $(n+j+1)$-truncated, since $L_{X/Y,A}$ is $(-n-1)$-connective\footnote{Note that $\pi_{n+j+1+k'}\Hom_{\MMod_{A}}(L_{X/Y,A}, \pi_{j}A[j])\cong\pi_{0}\Hom_{\MMod_{A}}(L_{X/Y,A}[n+j+1+k'], \pi_{j}A[j])$ and since $L_{X/Y,A}[n+j+1+k']\in(\MMod_{A})_{\geq j+k'}$ and $ \pi_{j}A[j]\in(\MMod_{A})_{\leq j}$, we see by definition of the $t$-structures that $\pi_{n+j+1+k'}\Hom_{\MMod_{A}}(L_{X/Y,A}, \pi_{j}A[j])\cong0$ for $k'\geq 1$.}. \end{proof} \begin{lem} \label{geometric hypercomplete} Let $X$ be an $n$-geometric stack, then $X$ is hypercomplete. \end{lem} \begin{proof} This is a direct consequence of Lemmas \ref{nilcomplete} and \ref{geometric truncated} and the fact that truncated objects of an $\infty$-topos are automatically hypercomplete. This is analogous to \cite[Cor.
5.3.9]{DAG}, but we spell out the argument.\par By Lemma \ref{nilcomplete}, we have $X\simeq \lim_{n}X\circ \tau_{\leq n}$, and for any $k$-truncated animated ring $A$, we have that $X(A)$ is $(n+k+1)$-truncated (see Lemma \ref{geometric truncated}); in particular each $X\circ\tau_{\leq n}$ is hypercomplete (see \cite[Lem. 6.5.2.9]{HTT}), and since hypercomplete sheaves are closed under limits, we conclude that $X$ is hypercomplete. \end{proof} \section{Derived commutative algebra} \label{sec:derived commutative algebra} In the following, $R$ will be a ring.\par In this section, we give a quick summary of animated rings, present the cotangent complex and analyse smooth morphisms between animated rings. Mainly, we show that these notions extend their classical counterparts and behave as one would expect. \subsection{Animated rings} \label{sec:simplicial commutative algebras} In this section, we summarise important aspects of animated rings, following \cite[\S25]{SAG}. \par By $\Poly_R$ we denote the category of polynomial $R$-algebras in finitely many variables. Then the category of $R$-algebras is naturally equivalent to the category of functors from $\Poly_R^{\op}$ to $\sets$ which preserve finite products\footnote{For a functor $F\colon \Poly_{R}^{\op}\rightarrow \Sets$ that preserves finite products we can set $F(R[X])$ as the underlying ring of $F$, where the multiplication is induced by $R[T]\rightarrow R[T_{1}]\otimes_{R}R[T_{2}],\ T\mapsto T_{1}T_{2}$ and the addition by $T\mapsto T_{1}+T_{2}$. Conversely, for any $R$-algebra $A$, we can construct a contravariant functor from $\Poly_{R}$ to $\Sets$ via $A\mapsto\Hom_{\Alg{R}}(-,A)$. These constructions are inverse to each other.}. Applying this construction to the $\infty$-categorical case, we obtain the $\infty$-category $\AniAlg{R}$ of animated $R$-algebras, i.e.
$$ \AniAlg{R} \coloneqq \Fun_{\pi}(\Poly_R^{\op},\SS), $$ where the subscript $\pi$ indicates the full subcategory of $\Fun(\Poly_R^{\op},\SS)$ spanned by the functors that preserve finite products. Alternatively, this $\infty$-category is obtained by freely adjoining sifted colimits to $\Poly_R$ (this is the meaning of \cite[Prop. 5.5.8.15]{HTT}, using that any element of $\AniAlg{R}$ can be obtained as a sifted colimit of objects of $\Poly_{R}$ by \cite[Lem. 5.5.8.14, Cor. 5.5.8.17]{HTT}).\par For a cocomplete category $C$ that is generated under colimits by its full subcategory of compact projective objects $C^{\textup{sfp}}$, Česnavičius and Scholze define the \textit{animation of $C$}, denoted by $\Ani(C)$, in \cite[\S 5.1]{CS}. The $\infty$-category $\Ani(C)$ is the $\infty$-category freely generated under sifted colimits by $C^{\textup{sfp}}$. In particular, with this definition, we see that $\Ani(\Alg{R})\simeq \AniAlg{R}$. This process can also be applied to $R$-modules, which we will look at later in Section \ref{sec:mod}, and to abelian groups, where $\Ani(\Ab)$ recovers the $\infty$-category of simplicial abelian groups (see \cite[\S 5.1]{CS} for more details). The animation of $\Sets$ recovers the $\infty$-category of $\infty$-groupoids, i.e. $\Ani(\Sets)\simeq \SS.$\par We have another description of animated $R$-algebras. Let $\Abf$ be the category of product preserving functors from $\Poly_{R}^{\op}$ to simplicial sets\footnote{As for product preserving functors from $\Poly^{\op}_{R}$ to $\Sets$, it is not hard to see that a product preserving functor from $\Poly^{\op}_{R}$ to simplicial sets defines a simplicial commutative $R$-algebra via $F\mapsto F(R[X])$ (the face and degeneracy maps have to respect the ring structure by functoriality). In particular, in this way we can identify $\Abf$ with the category of simplicial commutative $R$-algebras.}.
We obtain a model structure on $\Abf$ from the Quillen model structure on simplicial sets (see \cite[5.5.9.1]{HTT}); this is often called the model category of \textit{simplicial commutative $R$-algebras}. This model category is known to be a combinatorial, proper, simplicial model category (for more details on these properties, we refer to \cite[Ch. II \S4, \S6]{Quillen}). The $\infty$-category associated to this model category (i.e. $\N^{\textup{hc}}(\Abf^{\circ})$) is equivalent to $\AniAlg{R}$, where $\Abf^\circ$ denotes the full subcategory consisting of fibrant/cofibrant objects (see \cite[Cor. 5.5.9.3]{HTT}). \begin{defi} For a ring $R$, we define the $\infty$-category of \textit{animated $R$-algebras}, denoted by $\AniAlg{R}$, as the $\infty$-category $\Fun_{\pi}(\Poly_{R}^{\op},\SS)$. For an animated ring $A$, we define $\AniAlg{A}\coloneqq (\AniAlg{\ZZ})_{A/}$ as the $\infty$-category of \textit{animated $A$-algebras}.\par Further, if $R=\ZZ$, we call an animated $R$-algebra an \textit{animated ring}. \end{defi} \begin{rem} Note that for a ring $R$, we have $\AniAlg{R}\simeq(\AniAlg{\ZZ})_{R/}$ by \cite[Prop. 25.1.4.2]{SAG}. \end{rem} \begin{rem} \label{Ani presentable} Since $\AniAlg{A}$ is an undercategory of $\AniAlg{\ZZ}$, which is the underlying $\infty$-category of a combinatorial model category (as explained at the beginning of this section), we see with \cite[Prop. 5.5.3.11, Prop. A.3.7.6]{HTT} that $\AniAlg{A}$ is a presentable $\infty$-category. \end{rem} The following proposition allows us to connect $\AniAlg{R}$ with $\Einftycn_R$. The idea is simple: since $\AniAlg{R}$ is generated by $\Poly_{R}$ under sifted colimits, any sifted colimit preserving functor is determined up to homotopy by its restriction to $\Poly_{R}$. Since $\Poly_{R}$ embeds fully faithfully into $\Einftycn_{R}$ (which has sifted colimits), we get a functor $\theta\colon\AniAlg{R}\rightarrow \Einftycn_{R}$ corresponding to the inclusion $\Poly_{R}\hookrightarrow \Einftycn_{R}$.
We can also use this philosophy to analyse functors by restricting them to $\Poly_{R}$, provided they preserve sifted colimits. \begin{prop} Let $j\colon \Poly_R\rightarrow \AniAlg{R}$ denote the Yoneda-embedding. Then we have an equivalence of $\infty$-categories $$ \Fun_{\textup{sift}}(\AniAlg{R},\Einftycn_R)\rightarrow \Fun(\Poly_R,\Einftycn_R), $$ where the subscript \textup{sift} denotes the full subcategory of sifted colimit preserving functors. \end{prop} \begin{proof} This follows from \cite[Prop. 5.5.8.15, Cor. 5.5.8.17]{HTT}, noting that $\AniAlg{R}$ has small colimits since it is presentable by Remark \ref{Ani presentable}. \end{proof} \begin{prop} \label{fun E to SCR} The functor $\theta\colon \AniAlg{R}\rightarrow \Einftycn_R$ described above is conservative, has a left adjoint $\theta^L$ and a right adjoint $\theta^R$. \end{prop} \begin{proof} This is \cite[25.1.2.2]{SAG}. \end{proof} \begin{rem} Let us take a closer look at the left adjoint of $\theta$. By definition, $\ZZ[X]$ is a compact projective object of $\AniAlg{\ZZ}$, so in particular the functor $\Hom_{\AniAlg{\ZZ}}(\ZZ[X],-)$ commutes with sifted colimits (see \cite[Prop. 5.5.8.25]{HTT}), and since we can write any animated ring as a sifted colimit of polynomial rings, we see that for any animated ring $A$, we have $\Hom_{\AniAlg{\ZZ}}(\ZZ[X],A)\simeq \Omega^{\infty}\theta(A)$. We observe that for the free $E_{\infty}$-$\ZZ$-algebra in one variable $\ZZ\lbrace X\rbrace$ the same equivalence holds, i.e. $\Hom_{\Einftycn_{\ZZ}}(\ZZ\lbrace X\rbrace,\theta(A))\simeq \Omega^{\infty}\theta(A)$. Using the adjunction, we therefore see that $\theta^{L}(\ZZ\lbrace X\rbrace)\simeq \ZZ[X]$. \end{rem} The functor $\theta$ allows us to view any $A\in\AniAlg{R}$ as a ring object in $\Sp$. Thus we can associate homotopy groups to this object, as well as module objects in $\Sp$. \begin{defi} Let $A\in \AniAlg{R}$.
For any $i\in\ZZ$, we set $\pi_i(A)\coloneqq \pi_i(\theta(A))$ and we set $\MMod_A\coloneqq \MMod_{\theta(A)}$. We refer to elements of $\MMod_{A}$ as \textit{$A$-modules}. \end{defi} Recall from Section \ref{HA} that animated rings by definition have no negative homotopy groups and that $\pi_{*}A$ is a graded ring. \begin{notation} \label{notation truncation} We remark that we have truncation functors for animated rings (see \cite[\S 25.1.3]{SAG}), denoted by $\tau_{\leq n}$ for $n\in\NN_{0}$; they are induced by the truncations of the underlying $E_{\infty}$-rings. For an animated ring $A$ we denote $\tau_{\leq n}A$ by $A_{\leq n}$.\par We denote by $(\AniAlg{R})_{\leq n}$ the full subcategory of $n$-truncated animated $R$-algebras. The elements of $(\AniAlg{R})_{\leq 0}\simeq \Alg{R}$ are called \textit{discrete}. \end{notation} \begin{rem} The inclusion of $n$-truncated animated $R$-algebras $(\AniAlg{R})_{\leq n}$ into $\AniAlg{R}$, for some $n\in\NN_{0}$, has a left adjoint denoted by $\tau_{\leq n}$ (see \cite[Rem. 25.1.3.4]{SAG}). Since by definition $\tau_{\leq 0} = \pi_{0}$, we see that passage to the underlying ring of an animated ring via $\pi_{0}$ preserves colimits. \end{rem} We can view any connective $E_\infty$-algebra over $R$ as a connective $R$-module (more precisely $\Einftycn_R\simeq\CAlg(\MMod^{\textup{cn}}_R)$). This induces a forgetful functor $\Einftycn_R\rightarrow \MMod^{\textup{cn}}_R$, which has a left adjoint (see \cite[Ex. 3.1.3.14]{HA}). Using the above left adjoint $\theta^L$, we can associate an animated ring to any connective $R$-module $M$. \begin{defi} Let $A$ be an animated ring. Let $M\in\MMod_{A}^{\textup{cn}}$ be a connective $A$-module. We denote the image of $M$ under the left adjoint to the forgetful functor $\Einftycn_{\theta(A)}\rightarrow \MMod^{\textup{cn}}_A$ composed with $\theta^{L}$ by $\Sym_{A}(M)$ and call it the \textit{symmetric animated $A$-algebra of $M$}.\footnote{By \cite[Prop.
5.2.5.1]{HTT} the adjunction of $\theta$ and $\theta^{L}$ can be transferred to an adjunction of slice categories, i.e. $\theta$ induces an adjunction, which by abuse of notation we denote the same, between $\AniAlg{A}$ and $\Einftycn_{\theta(A)}$.} \end{defi} In the following remark we explain that there are two natural ways to define the homotopy groups of an animated ring, and that both notions agree (in the sense that they produce isomorphic homotopy groups). \begin{remark} The homotopy groups of an animated ring can alternatively be defined as follows. We have a natural functor from rings to abelian groups and then to sets by forgetting the ring structure. This induces a functor $$F\colon \AniAlg{\ZZ}\rightarrow\Ani(\Ab)\rightarrow \Ani(\Sets)$$ (see \cite[\S 5.1.4]{CS}). The animation of abelian groups is the $\infty$-category of simplicial abelian groups and the animation of sets is $\SS$. Using this functor, we can define the $n$-th homotopy group of an animated ring $A$ as $\pi_{n}F(A)$. This construction agrees with the construction of the homotopy groups via passage to spectra.\par The reason for this is the following commutative diagram $$ \begin{tikzcd} \AniAlg{\ZZ}\arrow[dd,"F",bend right, shift right =0.9em, swap]\arrow[r,"\theta"]\arrow[d,""]& \Einfty^{\textup{cn}}\arrow[d,""]\\ \Ani(\Ab)\arrow[r,""]\arrow[d,""]&\Dcal(\ZZ)^{\textup{cn}}\arrow[dl,"\Omega^{\infty}"]\\ \SS \end{tikzcd} $$ (as all of these functors commute with sifted colimits\footnote{For $F$ and $\theta$ this follows from the construction. That the forgetful functor from connective $E_{\infty}$-rings to modules commutes with sifted colimits follows from the fact that the tensor product on spectra commutes with sifted colimits (see \cite[4.8.2.19, Cor. 3.2.3.2]{HA}), for $\Omega^{\infty}$ see \cite[Prop.
1.4.3.9]{HA}.}, we only need to check commutativity on polynomial $\ZZ$-algebras, which follows by construction). \end{remark} We want to conclude this section by explaining localizations of animated rings. \begin{defrem} Let $\Ccal$ be a presentable $\infty$-category and let $S$ be a set of morphisms in $\Ccal$. Then we say that an object $Z\in \Ccal$ is \textit{$S$-local} if for any morphism $f\colon X\rightarrow Y\in S$, the morphism $\Hom_{\Ccal}(Y,Z)\rightarrow \Hom_{\Ccal}(X, Z)$ induced by $f$ is an equivalence. We say that a morphism $f\colon X\rightarrow Y$ in $\Ccal$ is an \textit{$S$-equivalence} if for any $S$-local object $Z$ the morphism $\Hom_{\Ccal}(Y,Z)\rightarrow \Hom_{\Ccal}(X, Z)$ induced by $f$ is an equivalence (see \cite[Def. 5.5.4.1]{HTT}; note that this definition does not need presentability of $\Ccal$, but as explained below presentability allows us to work with the full subcategory of $S$-local objects in a nice way).\par The inclusion of the full subcategory $\Ccal[S^{-1}]$ of $S$-local objects in $\Ccal$ admits a left adjoint, which we call the localization $\Ccal\rightarrow \Ccal[S^{-1}]$ (see \cite[Prop. 5.5.4.15]{HTT}). The idea is to ``complete'' $S$ by taking $\Sbar$ as the set of $S$-equivalences in $\Ccal$. Then $\Ccal[S^{-1}]$ is the localization of $\Ccal$ by $\Sbar$, which is strongly saturated (see \cite[\S 5.5.4]{HTT} for more details).\par This is analogous to classical localization, where even if we want to localize at one element of a ring, we automatically localize at the multiplicative subset generated by that element. \end{defrem} \begin{propdef}[Localization] \label{localization} Let $A$ be an animated $R$-algebra and let $F\subseteq \pi_0(A)=\pi_0(\Hom_{\MMod_A}(A,A))$ be a subset.
Then there exists $A[F^{-1}]\in \AniAlg{A}$ such that for all $B\in\AniAlg{A}$ the space $\Hom_{\AniAlg{A}}(A[F^{-1}],B)$ is nonempty if and only if the image of every $f\in F$ under $\pi_0(A)\rightarrow\pi_0(B)$ is invertible. Further, if it is nonempty, then it is contractible. \end{propdef} \begin{proof} The proof is analogous to the proof of \cite[Prop. 1.2.9.1]{TV2}, which treats the special case where $F$ has only one element.\par Let $\Sym\colon \MMod^{\textup{cn}}_A\rightarrow \AniAlg{A}$ be the left adjoint to the map $\AniAlg{A}\rightarrow\Einftycn_A\rightarrow\MMod^{\textup{cn}}_A$. Consider the set $S\coloneqq\lbrace \Sym(f)\colon \Sym(A)\rightarrow \Sym(A)\mid f\in F\rbrace$. We set $A[F^{-1}]$ as the image of $A$ under the localization map $\AniAlg{A}\rightarrow \AniAlg{A}[S^{-1}]$, where $\AniAlg{A}[S^{-1}]$ denotes the full subcategory of $\AniAlg{A}$ of $S$-local objects (note that $\AniAlg{A}$ is presentable by Remark \ref{Ani presentable}).\par An object $B\in \AniAlg{A}$ is $S$-local if and only if the induced map $$ f^*\colon \Hom_{\MMod_A}(A,B)\rightarrow \Hom_{\MMod_A}(A,B) $$ is an equivalence for all $f\in F$. Equivalently, $f^{*}$ is an equivalence if and only if multiplication by the image of $f$ on $\pi_iB$ is an isomorphism for all $i\geq 0$. Therefore, an $A$-algebra $B$ is $S$-local if and only if the image of $f$ under $\pi_0A\rightarrow \pi_0B$ is invertible for all $f\in F$.\par Now assume that $\Hom_{\AniAlg{A}}(A[F^{-1}],B)$ is nonempty. Then the morphism $\pi_{0}A\rightarrow \pi_{0}B$ factors through $\pi_{0}A[F^{-1}]$, so every $f\in F$ has invertible image in $\pi_{0}B$, since by definition $A[F^{-1}]$ is $S$-local. To see that if $\Hom_{\AniAlg{A}}(A[F^{-1}],B)$ is nonempty, then it is contractible, note that in this case $B$ is $S$-local and we have $$ \Hom_{\AniAlg{A}}(A[F^{-1}],B)\simeq \Hom_{\AniAlg{A}}(A,B)\simeq \ast $$ by adjunction.
\end{proof} \begin{rem} Note that in the proof of Proposition \ref{localization}, if we have a subset $F\subseteq \pi_{0}A$ and denote its generated multiplicative subset by $S$, then by \cite[Prop. 5.5.4.15]{HTT} an animated $A$-algebra $B$ is $\lbrace \Sym(f)\colon \Sym(A)\rightarrow \Sym(A)\mid f\in F\rbrace$-local if and only if it is $\lbrace \Sym(s)\colon \Sym(A)\rightarrow \Sym(A)\mid s\in S\rbrace$-local (note that $\AniAlg{A}$ is presentable by Remark \ref{Ani presentable}). \end{rem} \begin{notation} Let $A$ be an animated ring and $f\in \pi_{0}A$. Then we define the localization at an element as $A[f^{-1}]\coloneqq A[\lbrace f\rbrace^{-1}]$. \end{notation} \begin{rem} Let $F$ be a subset of $\pi_{0}A$ and let $S$ be the multiplicative subset generated by $F$. By the universal property of the localization of rings, we know that $\pi_{0}A[F^{-1}]\cong S^{-1}\pi_{0}A$.\par Now assume that $F$ is given by a single element $f\in \pi_{0}A$. After the characterization of \'etale morphisms via the cotangent complex, we will see that $\pi_{i}A[f^{-1}]\cong (\pi_{i}A)_{f}$ for all $i\geq 0$ (see Lemma \ref{homotopy of localization}). \end{rem} \begin{defi} Let $A\rightarrow B$ be a morphism of animated rings. Then $B$ is \textit{locally of finite presentation} over $A$ if it is compact as an animated $A$-algebra, i.e. the functor $\Hom_{\AniAlg{A}}(B,-)\colon \AniAlg{A}\rightarrow \SS$ commutes with filtered colimits. \end{defi} \begin{rem} Our notion of ``locally of finite presentation'' is \textit{stronger} than the notion of ``finitely presented'' in the classical sense. What we mean is that if a morphism of animated rings $A\rightarrow B$ is locally of finite presentation, then the induced morphism of rings $\pi_{0}A\rightarrow \pi_{0}B$ is finitely presented.
The converse is not true, however: we will see that $A\rightarrow B$ is locally of finite presentation if and only if $\pi_{0}A\rightarrow \pi_{0}B$ is locally of finite presentation and its cotangent complex is perfect (see Proposition \ref{lfp perfect cotangent}). An example of a finitely presented morphism with non-perfect cotangent complex is the non-lci morphism $\FF_{p}\rightarrow \FF_{p}[X,Y]/(X^{2},XY,Y^{2})$ (the non-perfectness follows from \cite[(1.3)]{Avramov}). \end{rem} As open immersions are finitely presented in the classical world of algebraic geometry, we expect a similar result in the derived world. This will be made precise later, but first we would like to show that the fundamental example of an open immersion, the localization of a ring at an element, is locally of finite presentation. \begin{lem} \label{localization lfp} Let $A$ be an animated ring and $f\in\pi_0A$. Then $A[f^{-1}]$ is locally of finite presentation over $A$. \end{lem} \begin{proof} The space of $A$-algebra morphisms from the localization to any $A$-algebra $B$ is either empty or contractible, depending on whether $f$ is invertible in $\pi_0B$. For any filtered system $(B_i)_{i \in I}$ of $A$-algebras, we have $\pi_0\colim_{i \in I} B_i \cong \colim_{i \in I} \pi_0 B_i$. Therefore, we see that $\Hom_{\AniAlg{R}}(A[f^{-1}],\colim_{i \in I} B_{i})$ is empty if $f$ is not invertible in $\colim_{i \in I} \pi_{0}B_{i}$, and if $f$ is invertible in $\colim_{i \in I} \pi_{0}B_{i}$, then the space $\Hom_{\AniAlg{R}}(A[f^{-1}],\colim_{i \in I} B_{i})$ is contractible. Since $\pi_{0}A[f^{-1}]\cong \pi_{0}(A)_{f}$ is locally of finite presentation as a $\pi_{0}A$-algebra, we know that $f$ is invertible in $\colim_{i\in I} \pi_{0}B_{i}$ if and only if there is an $i'\in I$ such that $f$ is invertible in $\pi_{0}B_{i'}$.
So in particular, if there is such an $i'$, we have $$\Hom_{\AniAlg{R}}(A[f^{-1}],\colim_{i \in I} B_{i})\simeq\ast\simeq \colim_{i\in I}\Hom_{\AniAlg{R}}(A[f^{-1}],B_{i})$$ and if there is no such $i'$, we have $$\Hom_{\AniAlg{R}}(A[f^{-1}],\colim_{i \in I} B_{i})=\emptyset= \colim_{i\in I}\Hom_{\AniAlg{R}}(A[f^{-1}],B_{i}).$$ \end{proof} \subsection{Modules over animated rings} \label{sec:mod} Let us recall some useful notions about modules over animated rings. In the following, $A$ will be an animated ring. \begin{rem} \label{pi_n = H_n} Before we start, let us remark that under the symmetric monoidal equivalence of stable $\infty$-categories $\MMod_{R}\simeq \Dcal(R)$ explained in Section \ref{HA}, for a ring $R$, the homotopy groups are isomorphic to the corresponding homology groups, and this isomorphism respects the module structure (see \cite[B.1]{SS}).\par \end{rem} \begin{rem} We want to make clear that throughout we will work in \textit{homological} notation. This is natural from the homotopy theory standpoint but differs from the algebraic geometry standpoint, which uses \textit{cohomological} notation. In particular, we will define notions such as ``Tor-amplitude'' homologically. \end{rem} \begin{defrem} Let $P$ be a connective $A$-module. Then $P$ is called \textit{projective} if for all connective $A$-modules $Q$, we have $\Ext^{1}(P,Q)\cong 0$, where $\Ext^{1}(P,Q)$ is defined as $$\pi_0\Hom_{\MMod_A}(P,Q[1])\cong \Hom_{h\MMod_{A}}(P,Q[1])\footnote{Recall that the homotopy category of a stable $\infty$-category is an additive category, so the expression $\Ext^{1}(P,Q)\cong 0$ makes sense.}.$$ \par Equivalently, $P$ is projective if for all fiber sequences $$M'\rightarrow M\rightarrow M'',$$ where $M,M',M''$ are connective $A$-modules, the induced map $\Ext^0(P,M)\rightarrow \Ext^0(P,M'')$ is surjective (see \cite[Prop. 7.2.2.6]{HA}; this also shows the equivalence with the definition given in \cite[Def.
7.2.2.4]{HA}).\par We denote by $\Proj(A)$ the full subcategory of projective $A$-modules in $\MMod_{A}$. \end{defrem} \begin{rem} From the definition of projective modules it follows that if an $A$-module $P$ is projective, then for any connective module $Q$ we have $\Ext^{i}(P,Q)\cong 0$ for all $i\geq 1$. In fact, this condition is equivalent to the same condition with $Q$ assumed to be discrete. This is also an equivalent definition of projective modules (see \cite[Prop. 7.2.2.6]{HA}). \end{rem} \begin{defi} A connective $A$-module $M$ is called \textit{flat} if $\pi_{0}M$ is a flat $\pi_{0}A$-module and the natural morphism $\pi_{i}A\otimes_{\pi_{0}A}\pi_{0}M\rightarrow \pi_{i}M$ is an isomorphism for every $i\geq 0$. \end{defi} The compatibility with the higher homotopy groups is important if we want to define, for example, \textit{flat morphisms} of animated rings. The Tor-spectral sequence below then shows us that the homotopy groups are compatible with base change.\par The following is a direct consequence of the definition of flatness. \begin{lem} \label{flat proj} Let $P$ be an $A$-module. \begin{enumerate} \item If $P$ is projective, then it is flat. \item If $P$ is flat, then it is projective if and only if $\pi_0P$ is projective over $\pi_0A$. \end{enumerate} \end{lem} \begin{proof} See \cite[Lem. 7.2.2.14]{HA} and \cite[Prop. 7.2.2.18]{HA}. \end{proof} We also have an equivalence of homotopy categories relating projective modules over $A$ and over $\pi_0A$. \begin{prop} \label{projective lift} Base change along the natural map $A\rightarrow \pi_0A$ induces an equivalence between $h\Proj(A)$ and $h\Proj(\pi_{0}A)$\footnote{Note that since projective modules are flat, $h\Proj(\pi_{0}A)$ is just the usual category of (classical) projective $\pi_{0}A$-modules.}. \end{prop} \begin{proof} This follows from \cite[Cor. 7.2.2.19]{HA}. \end{proof} \begin{defi} We call an $A$-module $P$ \textit{finite projective} if it is projective and $\pi_0P$ is finitely presented over $\pi_0A$.
\end{defi} We can generalize this notion via the notion of perfectness. \begin{defrem} An $A$-module $P$ is called \textit{perfect} if it is a compact object of $\MMod_A$. Equivalently, $P$ is perfect if and only if there exists an $A$-module $P^{\vee}$ such that we have $\Hom_{\MMod_A}(P,-)\simeq \Omega^{\infty}(P^{\vee}\otimes_A -)$ (see \cite[Def. 7.2.4.1]{HA} and \cite[Prop. 7.2.4.2]{HA}). \end{defrem} \begin{remark} If $A$ is discrete, then we have $\MMod_A\simeq \Dcal(\pi_{0}A)$ as symmetric monoidal $\infty$-categories, and a complex of $A$-modules is perfect in our sense if and only if it is perfect in the classical sense (see \cite[07LT]{stacks-project}). \end{remark} \begin{rem}[\protect{\cite[Prop. 7.2.1.19]{HA}}] \label{Tor-ss} Let us insert a quick remark about the Tor-spectral sequence for modules over $E_{\infty}$-rings. Let $A$ be an $E_{\infty}$-ring and $M,N\in\MMod_{A}$. Then $\pi_{\ast}M$ and $\pi_{\ast}N$ are graded $\pi_{\ast}A$-modules and we have a spectral sequence in graded abelian groups of the following form, called the \textit{Tor-spectral sequence}: $$ E^{p,q}_{2}= \Tor_{p}^{\pi_{\ast}A}(\pi_{*}M,\pi_{*}N)_{q}\Rightarrow \pi_{p+q}(M\otimes_{A}N). $$ Here convergence is in the sense that $\pi_{p+q}(M\otimes_{A}N)$ has a filtration $F^{\bullet}$ such that $\gr^{p}_{F}\cong E_{\infty}^{p,q}$, there is a $k\leq 0$ such that $F^{n}\simeq 0$ for $n\leq k$ and $\colim_{n} F^{n}\simeq \pi_{p+q}(M\otimes_{A}N)$ (see \cite[Var. 7.2.1.15]{HA} for the construction of the Tor-group). \end{rem} We also have the notion of Tor-amplitude for $A$-modules. We will use homological notation, since it is in line with the definitions given in homotopy theory. \begin{defi}[\protect{\cite[Def. 2.11]{AG}}] Let $M$ be an $A$-module.
Then we say that $M$ has \textit{Tor-amplitude (concentrated) in $[a,b]$} for $a\leq b\in \ZZ$ if for all discrete $A$-modules $N$, we have $$\pi_i(M\otimes_A N)= 0$$ for all $i\not\in [a,b]$.\end{defi} \begin{lem} Let $M$ be an $A$-module. Then $M$ has Tor-amplitude in $[a,b]$ if and only if the ordinary complex $M\otimes_A \pi_0A$ in $\Dcal(\pi_0A)$ has Tor-amplitude in $[a,b]$. \end{lem} \begin{proof} Let $F\colon \MMod_{\pi_{0}A}\rightarrow \MMod_{A}$ be the forgetful functor. This functor comes from the Cartesian fibration of \cite[Cor. 3.4.3.4]{HTT}. In particular, we see that for a discrete $\pi_{0}A$-module $N$ the underlying spectra of $N$ and $F(N)$ are equivalent. Hence their homotopy groups are isomorphic, so $F(N)$ is discrete and determined up to equivalence by $\pi_{0}F(N)$ (see \cite[Prop. 7.1.1.13]{HA}). Since $\pi_{0}N$ determines $N$ up to equivalence and $\pi_{0}N\cong \pi_{0} F(N)$, we see that the diagram $$ \begin{tikzcd} & N\Mod{\pi_{0}A}\arrow[ld,"i_{1}",hook, swap]\arrow[dr,"i_{2}",hook]&\\ \MMod_{\pi_{0}A}\arrow[rr,"F"]&&\MMod_{A} \end{tikzcd} $$ commutes on objects up to equivalence. Therefore, we see that for any $N\in N\Mod{\pi_{0}A}$, we have $$M\otimes_{A}i_{2}(N)\simeq M\otimes_{A}F(i_{1}(N))\simeq M\otimes_{A}\pi_{0}A\otimes_{\pi_{0}A}i_{1}(N),$$ concluding the proof. \end{proof} \begin{lem} \label{general props of Tor} Let $A$ be an animated $R$-algebra. Let $P$ and $Q$ be $A$-modules. \begin{enumerate} \item If $P$ is perfect, then $P$ has finite Tor-amplitude. \item If $B$ is an $A$-algebra and $P$ has Tor-amplitude in $[a,b]$, then the $B$-module $P\otimes_A B$ has Tor-amplitude in $[a,b]$. \item If $P$ has Tor-amplitude in $[a,b]$ and $Q$ has Tor-amplitude in $[c,d]$, then $P\otimes_A Q$ has Tor-amplitude in $[a+c,b+d]$. \item If $P,Q$ have Tor-amplitude in $[a,b]$, then for any morphism $f\colon P\rightarrow Q$ the fiber of $f$ has Tor-amplitude in $[a-1,b]$ and the cofiber of $f$ has Tor-amplitude in $[a,b+1]$.
\item If $P$ is a perfect $A$-module with Tor-amplitude in $[0,b]$, with $0\leq b$, then $P$ is connective and $\pi_0P\simeq \pi_0(P\otimes_A \pi_0A)$. \item $P$ is perfect and has Tor-amplitude in $[a,a]$ if and only if $P$ is equivalent to $M[a]$ for some finite projective $A$-module $M$. \item If $P$ is perfect and has Tor-amplitude in $[a,b]$, then there exists a morphism $$ M[a]\rightarrow P $$ such that $M$ is a finite projective $A$-module and the cofiber is perfect with Tor-amplitude in $[a+1,b]$. \end{enumerate} \end{lem} \begin{proof} Since modules over animated rings are defined as modules over their underlying $E_\infty$-ring spectrum, this is \cite[Prop. 2.13]{AG}. \end{proof} Let us conclude this section by specifically looking at connective modules over animated rings. These will be given by the animation of classical modules. This illustrates why we work with modules over spectra, as the animation of modules seems natural but produces only \textit{connective} objects.\par First let us consider the $\infty$-category $\MMod(\Sp)$ of tuples $(M,A)$, where $A$ is an $E_{\infty}$-$R$-algebra and $M$ is an $A$-module. This $\infty$-category comes naturally with a Cartesian fibration $\MMod(\Sp)\rightarrow \Einfty_{R}$ (see \cite[\S 4.5]{HA} for more details).\par Now we can define the $\infty$-category $\textup{AR-Mod}_R\coloneqq \MMod(\Sp)\times_{\CAlg_{R}(\Sp)}\AniAlg{R}$. Let us denote by $\SCRMod^{\cn}_R$ the full subcategory consisting of objects $(M,A)\in\textup{AR-Mod}_R$ where $M$ is connective. The next proposition shows that $\SCRMod^{\cn}_R$ is the animation of the category of tuples $(M,A)$, where $A$ is an $R$-algebra and $M$ is an $A$-module. \begin{prop} \label{Animod} Let $\Ccal\subseteq \SCRMod^{\cn}_R$ be the full subcategory consisting of objects $(M,A)$, where $A$ is a polynomial $R$-algebra and $M$ is a free $A$-module of finite rank. Let $\Ecal$ be an $\infty$-category which admits sifted colimits.
Let us denote by $\Fun_{\textup{sift}}(\SCRMod^{\cn}_R,\Ecal)$ the full subcategory of $\Fun(\SCRMod^{\cn}_R,\Ecal)$ spanned by those functors which preserve sifted colimits. Then the restriction functor $$ \Fun_{\textup{sift}}(\SCRMod^{\cn}_R,\Ecal)\rightarrow \Fun(\Ccal,\Ecal) $$ is an equivalence of $\infty$-categories. \end{prop} \begin{proof} This is \cite[Cor. 25.2.1.3]{SAG}, but since some references there are broken, we recall the proof.\par It suffices to show that $P_{\Sigma}(\Ccal)\simeq \SCRMod^{\cn}_R$, since then the proposition follows from \cite[Prop. 5.5.8.15]{HTT}, where $P_{\Sigma}(\Ccal)$ denotes those presheaves that preserve finite products.\par The following is \cite[Prop. 25.2.1.2]{SAG} (where again the references are broken). Note that $\Ccal$ consists of those objects in $\SCRMod^{\cn}_R$ which are coproducts of finitely many copies of $C\coloneqq (R[X],0)$ and $D=(R,R)$. In particular, we see that $C$ and $D$ corepresent the functors $\SCRMod^{\cn}_R\rightarrow \SS$ given by $$ (A,M)\mapsto \Omega^{\infty}A^{\textup{sp}},\quad (A,M)\mapsto \Omega^{\infty}M $$ respectively. Since both functors preserve sifted colimits, the objects $C$ and $D$ are compact and projective, and $\Ccal$ consists of compact projective objects of $\SCRMod^{\cn}_R$. It follows from \cite[Prop. 5.5.8.22]{HTT} that the inclusion $\Ccal\hookrightarrow \SCRMod^{\cn}_R$ extends (see \cite[Prop. 5.5.8.15]{HTT}) to a fully faithful functor $F\colon P_{\Sigma}(\Ccal)\rightarrow \SCRMod^{\cn}_R$, which commutes with sifted colimits. Since the inclusion preserves finite coproducts, we see that $F$ preserves small colimits (see \cite[Prop. 5.5.8.15]{HTT}) and therefore admits a right adjoint $G$ by the adjoint functor theorem (see \cite[Cor. 5.5.2.9]{HTT}).
To prove that $F$ is an equivalence, it suffices to show that $G$ is conservative.\par To see this, note that since $F$ is a fully faithful left adjoint, the unit map $\id\rightarrow GF$ is an equivalence.\par That $G$ is conservative is clear, since the conservative functor $$ \SCRMod^{\cn}_R\rightarrow \SS\times\SS,\quad (A,M)\mapsto (\Omega^{\infty}A^{\textup{sp}},\Omega^{\infty}M) $$ factors through $G$. \end{proof} \begin{notation} Again, for an animated ring $A$, we denote $\textup{AR-Mod}_A\coloneqq \textup{AR-Mod}\times_{\Ani} \AniAlg{A}$ and $\SCRMod^{\cn}_A\coloneqq \SCRMod^{\cn}\times_{\Ani}\AniAlg{A}$. \end{notation} \begin{remark} The above proposition also shows that the $\infty$-category of simplicial commutative $R$-modules, which is equivalent to $\AniMod{R}$ (the animation of $R$-modules), is equivalent to the $\infty$-category $\MMod_R^{\textup{cn}}$ of connective $R$-modules. \end{remark} Lastly, we record a lemma which shows the uniqueness of the cotangent complex for derived stacks (see Definition \ref{defi.cotangent.global}). \begin{lem} \label{connective yoneda ff} Let $j\colon \MMod_A\rightarrow \Pcal(\MMod_A^{\textup{cn},\op})$ be the Yoneda embedding followed by restriction. Then for all $n\geq 0$ the restriction of $j$ to the $(-n)$-connective objects is fully faithful. \end{lem} \begin{proof} We will not prove this here and refer to \cite[Prop. 1.2.11.3]{TV2}. \end{proof} \subsection{The cotangent complex} \label{sec:cotangent affine} We will define square zero extensions and the cotangent complex following \cite[\S25]{SAG}.\par Proposition \ref{Animod} allows us to define square zero extensions. Namely, if we look at the functor $\Ccal\rightarrow \Alg{R}\simeq (\AniAlg{R})_{\leq 0}\hookrightarrow \AniAlg{R}$, where $\Ccal$ is as in Proposition \ref{Animod}, given by $(M,A)\mapsto A\oplus M$, we see that it induces a functor $\SCRMod^{\cn}_R\rightarrow \AniAlg{R}$ commuting with sifted colimits.
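On discrete objects this functor recovers the familiar construction from commutative algebra; the following standard special case, which we record only for orientation, may help to fix ideas. For a discrete ring $A$ and a discrete $A$-module $M$, the resulting animated ring is the classical trivial square zero extension $$ A\oplus M,\qquad (a,m)\cdot (a',m')=(aa',\,am'+a'm), $$ in which $M$ is an ideal satisfying $M^{2}=0$.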
\begin{defi} For an $A\in\AniAlg{R}$ and a connective $A$-module $M$, we define the \textit{square zero extension of $A$ by $M$} as the image of $(A,M)$ under the functor $\SCRMod^{\cn}_R\rightarrow \AniAlg{R}$ described above and denote the resulting animated $R$-algebra by $A\oplus M$. \end{defi} \begin{rem} Since the forgetful functor $\AniAlg{R}\rightarrow \MMod_{R}$ preserves sifted colimits, we see that the underlying module of $A\oplus M$, for some animated $R$-algebra $A$ and a connective $A$-module $M$, is equivalent to the direct sum in $\MMod_{R}$ of $A$ and $M$. \end{rem} \begin{remark} Let $A\in \AniAlg{R}$ and $M$ be a connective $A$-module. In $\MMod_A$ we have (up to homotopy) unique maps $0\rightarrow M\rightarrow 0$; these determine maps of animated rings $A\rightarrow A\oplus M\rightarrow A$. Thus, we can view $A\oplus M$ as an element of $(\AniAlg{A})_{/A}$. \end{remark} Since we have defined square zero extensions of an animated algebra by a connective module, we can now define the notion of a derivation. \begin{defi} Let $A\in\AniAlg{R}$ and $M\in \MMod^\textup{cn}_A$. The \textit{space of $R$-linear derivations $\Der_R(A,M)$ of $A$ into $M$} is defined as the mapping space $\Hom_{(\AniAlg{R})_{/A}}(A,A\oplus M)$. \end{defi} \begin{defi} \label{defi triv square zero ext} Let $A\in\AniAlg{R}$, $M\in \MMod^\textup{cn}_A$ and $d\in \Der_R(A,M)$. Then we define $A\oplus_{d} M$ as the pullback of $d\colon A\rightarrow A\oplus M$ and the trivial derivation $s$, i.e. we have a pullback diagram of the form $$ \begin{tikzcd} A\oplus_{d}M\arrow[r,""]\arrow[d,""]& A\arrow[d,"d"]\\ A\arrow[r,"s"]&A\oplus M. \end{tikzcd} $$ \end{defi} Next, we want to define the absolute cotangent complex associated to an animated ring $A$. This should be thought of as an $\infty$-analogue of the module of differentials. So, we will characterize it by a universal property. \begin{propdef} Let $A\in\AniAlg{R}$.
There is a connective $A$-module $L_{A/R}$ and a derivation $\eta\in \Der_R(A,L_{A/R})$ uniquely (up to equivalence) characterized by the property that for every connective $A$-module $M$ the map $$ \Hom_{\MMod_A}(L_{A/R},M)\rightarrow \Der_R(A,M) $$ induced by $\eta$ is an equivalence.\par We call the $A$-module $L_{A/R}$ the \textit{cotangent complex of $A$ over $R$}.\par If $R=\ZZ$, then we write $L_{A}$ and call it the \textit{absolute cotangent complex of $A$}. \end{propdef} \begin{proof} This is \cite[Prop. 25.3.1.5]{SAG}. \end{proof} \begin{rem} \label{morph on derivations} Let $A\rightarrow B$ be a morphism in $\AniAlg{R}$. Then for any connective $B$-module $M$, we obtain a map $\Der_{R}(B,M)\rightarrow \Der_{R}(A,M)$, where we view $M$ as an $A$-module via the forgetful functor. This can be seen as follows.\par By functoriality, we have a commutative diagram of the form $$ \begin{tikzcd} A\oplus M\arrow[r,""]\arrow[d,""]& A\arrow[d,""]\\ B\oplus M\arrow[r,""]&B. \end{tikzcd} $$ This induces a map $A\oplus M\rightarrow A\times_{B} (B\oplus M)$, which is an equivalence when passing to the underlying $R$-modules, since the underlying $R$-module of $B\oplus M$ is the direct sum of $B$ and $M$. Therefore for an $R$-derivation $d\colon B\rightarrow B\oplus M$, we get the following diagram in $\AniAlg{R}$ with pullback squares $$ \begin{tikzcd} A\arrow[r,""]\arrow[d,""]& A\oplus M\arrow[d,""]\arrow[r,""]&A\arrow[d,""]\\ B\arrow[r,""]&B\oplus M\arrow[r,""]&B, \end{tikzcd} $$ where the composition of the horizontal arrows is the identity on $A$, respectively $B$. Therefore, an $R$-derivation of $B$ induces an $R$-derivation of $A$.\par Thus, by definition, we get a map $$\Hom_{B}(L_{B/R},M)\rightarrow\Hom_{A}(L_{A/R},M)\simeq \Hom_{B}(L_{A/R}\otimes_{A}B,M),$$ where the second equivalence is induced by the adjunction between the forgetful functor and the tensor product. Now Lemma \ref{connective yoneda ff} induces a map $L_{A/R}\otimes_{A}B\rightarrow L_{B/R}$.
\end{rem} \begin{defi} Let $A\rightarrow B$ be a morphism in $\AniAlg{R}$. Then we define \textit{the (relative) cotangent complex of $B$ over $A$}, denoted by $L_{B/A}$, as the cofiber of the induced map $B\otimes_A L_{A/R}\rightarrow L_{B/R}$. \end{defi} \begin{remark} \label{rel cotangent rep} Let $A\rightarrow B$ be a morphism in $\AniAlg{R}$. For every connective $B$-module $M$ the definition of $L_{B/A}$ as the cofiber of $L_{A/R}\otimes_{A} B\rightarrow L_{B/R}$ induces an equivalence $$\Hom_{\MMod_B}(L_{B/A},M)\simeq\fib_{d_{0}}(\Der_{R}(B,M)\rightarrow \Der_{R}(A,M)),$$ where $d_{0}$ is the trivial $R$-derivation of $A$ into $M$. Therefore, seeing $B\oplus M$ as an animated $A$-algebra, via the trivial derivation and the natural morphism $A\oplus M\rightarrow B\oplus M$, we have an equivalence $$ \Hom_{\MMod_B}(L_{B/A},M)\xrightarrow{\sim} \fib_{\id_B}(\Hom_{\AniAlg{A}}(B,B\oplus M)\rightarrow \Hom_{\AniAlg{A}}(B,B)). $$\par So the relative cotangent complex represents morphisms $B\rightarrow B\oplus M$, with an augmentation $B\oplus M\rightarrow B$, which are not only $R$-linear but in fact also $A$-linear, i.e. $\Hom_{\MMod_{B}}(L_{B/A},M)\simeq \Hom_{(\AniAlg{A})_{/B}}(B,B\oplus M)$. \end{remark} \begin{rem} \label{cotangent bc} We claim that the definition of the cotangent complex as the module representing derivations shows that for any pushout diagram of the form $$ \begin{tikzcd} A'\arrow[r,""]\arrow[d,""]& B'\arrow[d,""]\\ A\arrow[r,""]&B \end{tikzcd} $$ in $\AniAlg{R}$, we have $L_{B/A}\simeq L_{B'/A'}\otimes_{B'} B$.\par Indeed, let $M$ be a $B$-module. Using the arguments of Remark \ref{morph on derivations}, we get a morphism $$\Hom_{(\AniAlg{A})_{/B}}(B,B\oplus M)\rightarrow \Hom_{(\AniAlg{A'})_{/B'}}(B',B'\oplus M).$$ Let $d\in \Hom_{(\AniAlg{A'})_{/B'}}(B',B'\oplus M)$, which is given by a diagram $$ B'\rightarrow B'\oplus M\rightarrow B' $$ such that the composition is homotopic to the identity on $B'$.
By the universal property of the pushout, this induces an $A$-derivation of $B$ into $B\oplus M$. Both constructions are inverse to each other by the universal properties, so the morphism $$\Hom_{(\AniAlg{A})_{/B}}(B,B\oplus M)\rightarrow \Hom_{(\AniAlg{A'})_{/B'}}(B',B'\oplus M)$$ is an equivalence and using Remark \ref{rel cotangent rep}, we are done. \end{rem} \begin{remark} We defined the cotangent complex of an animated $R$-algebra as the $\infty$-analogue of the K\"ahler-differentials. The same construction can be done for $E_\infty$-algebras (see \cite[7.3]{HA}). Thus, we may ask whether for a map of animated $R$-algebras $A\rightarrow B$ the relative cotangent complex $L_{B/A}$ is equivalent to the relative cotangent complex of the underlying map of $E_\infty$-algebras $L^\infty_{B/A}\coloneqq L_{\theta(B)/\theta(A)}$. In general, the answer is no; take for example $\ZZ\rightarrow \ZZ[X]$ (since this morphism is not formally smooth if we allow non-connective $E_{\infty}$-rings, this follows from \cite[Prop. 2.4.1.5]{TV2}). \par But we have an induced morphism $L^\infty_{B/A}\rightarrow L_{B/A}$ and passing to the homotopy groups, we see that the map $\pi_i L^\infty_{B/A}\rightarrow \pi_i L_{B/A}$ is an isomorphism if $i\leq 1$ and surjective for $i=2$ (see \cite[25.3.5.1]{SAG}, note that Lurie calls our cotangent complex \textit{the algebraic cotangent complex}).\par In characteristic $0$, however, the map on homotopy groups is an isomorphism for all $i\in\ZZ$ (see \cite[25.3.5.3]{SAG}). This is not surprising, since in characteristic $0$ we have an equivalence of animated rings and connective $E_\infty$-rings. \end{remark} \begin{remark} \label{rem.sym.cotangent} Let $R$ be a ring and $A$ be an animated $R$-algebra. Let $B=\Sym_A(M)$ for some connective $A$-module $M$.
We claim that $L_{B/A}$ is given by $M\otimes_A B$.\par Indeed, for any connective $B$-module $M'$, we have \begin{align*} \Hom_{(\AniAlg{A})_{/B}}&(\Sym_{A}(M),B\oplus M') \\ &\simeq \fib_{\id_{B}}(\Hom_{\AniAlg{A}}(\Sym_{A}(M),B\oplus M')\rightarrow \Hom_{\AniAlg{A}}(\Sym_{A}(M),B)). \end{align*} Using the adjunction between the forgetful functor and $\Sym_{A}$, we get a map $\iota\colon M\rightarrow B$ corresponding to the identity on $B$. Further, we have \begin{align*} \fib_{\id_{B}}&(\Hom_{\AniAlg{A}}(\Sym_{A}(M),B\oplus M')\rightarrow \Hom_{\AniAlg{A}}(\Sym_{A}(M),B))\\ &\simeq \fib_{\iota}(\Hom_{\MMod_{A}}(M,B\oplus M')\rightarrow \Hom_{\MMod_{A}}(M,B)). \end{align*} Now the underlying module of $B\oplus M'$ is the direct sum of $B$ with $M'$ in $\MMod_{A}$. Therefore, this fiber is equivalent to $\Hom_{\MMod_{A}}(M,M')\simeq \Hom_{\MMod_{B}}(M\otimes_{A} B,M')$. Hence, by the universal property the cotangent complex $L_{B/A}$ is given by $M\otimes_A B$. \end{remark} \begin{prop} \label{affine cotangent complex} Let $A$ be an animated $R$-algebra and write $A$ as the sifted colimit of polynomial $R$-algebras $P^\bullet$. Then we have $L_{A/R}\simeq \colim L_{P^\bullet/R}\otimes_{P^\bullet} A$. \end{prop} \begin{proof} This is \cite[Lec. 5 Thm. 2.3]{Khan}. For the convenience of the reader, we recall the proof.\par Note that each $P^i$ is equivalent to $\Sym_R(R^{n_{i}})$ for some $n_{i}\in \NN$. Also by Remark \ref{rem.sym.cotangent}, we have $\colim L_{P^\bullet/R}\otimes_{P^\bullet} A \simeq \colim R^{n_\bullet}\otimes_{R} P^\bullet \otimes_{P^\bullet} A \simeq \colim A^\bullet$, where $A^\bullet\coloneqq R^{n_\bullet}\otimes_R A\simeq A^{n_\bullet}.$ Thus, we have to show that $$ \lim\Hom_{\MMod_A}(A^\bullet,M)\simeq \lim\fib_{\iota_\bullet}(\Hom_{\AniAlg{R}}(P^\bullet,A\oplus M)\rightarrow \Hom_{\AniAlg{R}}(P^\bullet,A)) $$ for all connective $A$-modules $M$, where $\iota_\bullet\colon P^\bullet\rightarrow A$ are the natural maps.
But both sides can be identified with $\lim (\Omega^{\infty}M)^{n_\bullet}$, which gives the desired equivalence. \par Indeed, since each $A^{n_{i}}$ is free, this is clear for the left-hand side; for the right-hand side note that each $P^{i}$ is a symmetric algebra on a finite free module and conclude using the above. \end{proof} \begin{remark} Note that the proof of Proposition \ref{affine cotangent complex} shows that for a discrete ring $R$, the cotangent complex $L_{B/R}$ for some $B\in\AniAlg{R}$ agrees with the left Kan extension of $\Omega^{1}_{-/R}\colon\Poly_R\rightarrow \Dcal(R)$ along the inclusion $\Poly_{R}\hookrightarrow\AniAlg{R}$. So in particular, our definition agrees with the classical definition in the discrete case\footnote{Let $L\Omega^{1}_{-/R}$ denote the left Kan extension of $\Omega^{1}_{-/R}\colon\Poly_R\rightarrow \Dcal(R)$ along the inclusion $\Poly_{R}\hookrightarrow\AniAlg{R}$. Then $L\Omega^{1}_{-/R}$ factors through $\SCRMod^{\cn}_R$, since $\Omega^{1}_{A/R}$ has an $A$-module structure for every polynomial $R$-algebra $A$. Thus, we see that $L\Omega^{1}_{B/R}$ has a natural $B$-module structure and on polynomial algebras agrees with $L_{B/R}$ (note that a priori this proves the equivalence in $\Dcal(R)$ and thus in $\Sp$, but the forgetful functor $\SCRMod^{\cn}_R\rightarrow \Sp$ is conservative by \cite[Cor. 4.2.3.2]{HA}). }.\par Setting $R=\ZZ$, we get an analogous description of $L_B.$ \end{remark} We add a little lemma showing that the steps in Postnikov towers are square zero extensions. For $E_\infty$-algebras this can be found in \cite[Cor. 7.4.1.28]{HA}. We will not prove this lemma, since it would be too involved. But a detailed (model categorical) proof can be found in the notes of Porta and Vezzosi \cite{PV}. \begin{lem} \label{postnikov square zero} Let $A$ be an animated $R$-algebra.
Then for every $n\geq 1$ there exists a unique derivation $$d\in\pi_0\Der(A_{\leq n-1},\pi_n(A)[n+1])$$ such that the projection $A_{\leq n-1}\oplus_{d}\pi_n(A)[n+1]\rightarrow A_{\leq n-1}$ is equivalent to the natural morphism $A_{\leq n}\rightarrow A_{\leq n-1}$ (recall the notation from Definition \ref{defi triv square zero ext}). \end{lem} \begin{proof} \cite[Lem. 2.2.1.1]{TV2}. \end{proof} \subsection{Smooth and \'etale morphisms} \label{sec:smooth and et} In this section, we will follow \cite{TV2}. In this reference, To\"en and Vezzosi deal with animated rings and derived algebraic geometry (in our sense) in the model categorical setting. Most definitions, however, are made in such a way that we can easily translate them to the $\infty$-categorical setting (for more explanation on how to pass from model categories to $\infty$-categories, we recommend \cite{HTT} and \cite{HA}). \begin{defi} A morphism $f\colon A\rightarrow B$ of animated $R$-algebras is called \textit{flat (resp. faithfully flat, smooth, \'etale)} if the following two conditions are satisfied: \begin{enumerate} \item[(i)] the induced ring homomorphism $\pi_0f\colon\pi_0A\rightarrow\pi_0B$ is flat (resp. faithfully flat, smooth, \'etale), and \item[(ii)] we have an isomorphism $\pi_*A\otimes_{\pi_0A}\pi_0B\rightarrow \pi_* B$ of graded rings. \end{enumerate}\par \end{defi} Note that in the above definition $\pi_{0}f$ is a morphism of commutative rings, so asking whether it is flat, \'etale or smooth is natural. The condition (ii) on the homotopy groups is a natural compatibility condition that assures that homotopy groups and base change commute in the sense that $\pi_{n}M\otimes_{\pi_{0}A}\pi_{0}B\cong \pi_{n}(M\otimes_{A}B)$ for an $A$-module $M$ and a flat animated $A$-algebra $B$ (see \cite[Prop. 7.2.2.13]{HA}). \newline\par Let $f\colon A\rightarrow B$ be a homomorphism of rings. If $f$ is smooth, we know that the module of differentials $\Omega_{B/A}$ is finite projective.
The converse does not hold in general, i.e. there are ring homomorphisms locally of finite presentation with finite projective module of differentials that are not smooth. One example is the projection $k[X]\rightarrow k[X]/(X)\cong k$ for a field $k$. Its module of differentials $\Omega_{k/k[X]}$ vanishes and is thus in particular a finite dimensional $k$-vector space. But the projection is not smooth. One way to see this is that the cotangent complex $L_{k/k[X]}$ is quasi-isomorphic to $(X)/(X^{2})[1]$\footnote{As the projection $k[X]\rightarrow k$ is a regular immersion, we can use \cite[08SJ]{stacks-project}.} (see \cite[07BU]{stacks-project}). This condition on the cotangent complex comes rather naturally in derived algebraic geometry, as we can see the cotangent complex as the derived version of the module of differentials. In fact, we will make this explicit in Proposition \ref{smooth cotangent} to see that a morphism of animated rings is smooth if and only if it is locally of finite presentation and its cotangent complex is finite projective.\par But for technical reasons, we need to understand the cotangent complex of the natural maps $A_{\leq k}\rightarrow A_{\leq k-1}$. As these are given by square zero extensions by $\pi_{k}A$ (see Lemma \ref{postnikov square zero}) and are isomorphisms on $\pi_{i}$ for $i\leq k-1$, we will see that the cotangent complex is easier to understand. \begin{lem} \label{cotangent of truncation} Let $A$ be an animated $R$-algebra and $k\geq 1$. Then there exist natural isomorphisms $$ \pi_{k+1}L_{A_{\leq k-1}/A_{\leq k}} \cong \pi_k A, $$ and $\pi_i L_{A_{\leq k-1}/A_{\leq k}} \cong 0$ for $i\leq k$ (recall Notation \ref{notation truncation} for the $A_{\leq *}$). \end{lem} \begin{proof} This is \cite[Lem. 2.2.2.8]{TV2} translated to $\infty$-categories. \par We can also deduce this lemma using \cite{HA} and \cite{SAG}.
Note that the fiber of $A_{\leq k}\rightarrow A_{\leq k-1}$ as an $A$-module is given by $\pi_kA[k]$ and thus the cofiber is given by $\pi_kA[k+1]$. Now the first assertion follows from \cite[Rem. 25.3.6.5]{SAG}. For the vanishing of the lower homotopy groups, note that for $i\leq k+2$, we have $\pi_i L^{\infty}_{A_{\leq k-1}/A_{\leq k}}\cong \pi_i L_{A_{\leq k-1}/A_{\leq k}}$ by \cite[Prop. 25.3.5.1]{SAG}, where $L^{\infty}_{A_{\leq k-1}/A_{\leq k}}$ is the cotangent complex associated to the underlying map of $E_{\infty}$-algebras of $A_{\leq k}\rightarrow A_{\leq k-1}$ (see \cite[\S 7.3]{HA} for more details). Thus, the vanishing follows with \cite[Lem. 7.4.3.17]{HA}. \end{proof} In the following proof, we want to compute $$\Tor_*^{\pi_\ast A}(\pi_0A,\pi_\ast M)$$ for some $A\in\AniAlg{R}$ and an $A$-module $M$. The Tor spectral sequence will be an important tool when calculating tensor products. But first let us talk about graded free resolutions, which compute the Tor groups. \begin{remark}[Graded free resolutions] \label{graded free res} Let $A\in\AniAlg{R}$ and fix an $A$-module $M$ such that there is an $n\geq 0$ with $\pi_kM=0$ for all $k< n$. To compute the graded Tor group, we need a graded free resolution of $\pi_*M$, where a graded free resolution is an exact sequence $\dots\rightarrow P_1\rightarrow P_0\rightarrow \pi_*M\rightarrow 0$ of graded $\pi_*A$-modules with each $P_i$ a direct sum of shifts of $\pi_*A$ (see \cite[09KK]{stacks-project} for details on graded free resolutions). We can deduce the existence of such a sequence inductively from \cite[09KN]{stacks-project}. Namely, set $M_0\coloneqq \pi_*M$; then for any $i$, we can find a short exact sequence of the form $0\rightarrow M_{i+1}\rightarrow P_i\rightarrow M_i\rightarrow 0$, such that $P_i$ is a direct sum of shifts of $\pi_*A$ and is concentrated in degrees $\geq n$.
\par The $P_i$ are constructed as follows: for any $m\in (M_i)_k$, we can find a $\pi_*A$-linear map $ \pi_*(A)[k]\rightarrow M_i$ sending $1\in \pi_*(A)[k]_k$ to $m$ and $d(1)$ to $d(m)$. The module $P_i$ is now defined as the direct sum of these shifted copies of $\pi_*A$, one for each degree $k$ and each non-zero element $m\in (M_i)_k$. Therefore by construction $P_i$ is a direct sum of shifts of $\pi_*A$ and further by induction, we see that if $M_0$ is concentrated in degrees $\geq n$, then for all $i\geq 0$ the module $P_i$ is concentrated in degrees $\geq n$. \end{remark} \begin{prop} \label{smooth cotangent} Let $f\colon A\rightarrow B$ be a morphism in $\AniAlg{R}$. \begin{enumerate} \item The morphism $f$ is smooth if and only if the $B$-module $L_{B/A}$ is finite projective and $\pi_0B$ is of finite presentation over $\pi_0A$. \item The morphism $f$ is \'etale if and only if $L_{B/A}\simeq 0$ and $\pi_0B$ is of finite presentation over $\pi_0A$. \end{enumerate} \end{prop} \begin{proof} This is \cite[Thm. 2.2.2.6]{TV2} in the $\infty$-categorical setting. For the convenience of the reader, we recall the proof of the first assertion. The proof of the second assertion is left out and can be reconstructed following the proof of \cite[Thm. 2.2.2.6]{TV2}.\par Let $f$ be a smooth morphism; then it is flat by assumption, thus $\pi_0B= \pi_0A \otimes_A B$ (by \cite[Prop. 7.2.2.15]{HA} $B\otimes_A\pi_0A$ has to be discrete and since on $\pi_0$ it is an equivalence, we have the equivalence on the level of animated rings). In particular, we have $$\pi_0 L_{B/A} = \pi_0(L_{B/A} \otimes_B \pi_0 B) = \pi_0 L_{\pi_0B/\pi_0A} = \Omega_{\pi_0B/\pi_0A}[0],$$ by compatibility of the cotangent complex with base change (see Remark \ref{cotangent bc}). Since $f$ is smooth, we see that $\pi_0(L_{B/A})=\Omega_{\pi_0B/\pi_0A}[0]$ is finite projective over $\pi_0B$. By Proposition \ref{projective lift}, there is a projective $B$-module $P$ with $P\otimes_B\pi_0B\simeq \pi_0L_{B/A}$ (thus $P$ is in fact finite projective).
Using the projectivity of $P$, we lift the natural projection $P\rightarrow \pi_0L_{B/A}$ to a morphism $\phi\colon P\rightarrow L_{B/A}$ (see \cite[Prop. 7.2.2.6]{HA} and note that surjectivity of $\phi$ on $\pi_0$ implies that the fiber is connective). We want to show that $\phi$ is in fact an equivalence. For this it is enough to show $\cofib(\phi)\simeq 0$. By construction, it is clear that $\pi_0 \cofib(\phi)= 0$ and we will show by induction on $n$ that $\pi_n \cofib(\phi) =0$.\par To see this, let $n> 0$, assume $\pi_k\cofib(\phi)= 0$ for $k<n$ and consider the following Tor spectral sequence (see Remark \ref{Tor-ss}) $$ E_2^{p,q}=\Tor_p^{\pi_\ast B}(\pi_0B,\pi_\ast \cofib(\phi))_q\Rightarrow \pi_{p+q}(\pi_0B\otimes_B \cofib(\phi))= 0. $$ To see that $\pi_0B\otimes_B \cofib(\phi)\simeq 0$, note that $\phi\otimes\id_{\pi_0B}$ is equivalent to the identity on $L_{\pi_0B/\pi_0A}$ and thus the cofiber of $\phi\otimes\id_{\pi_0B}$ which is $\pi_0B\otimes_B \cofib(\phi)$ vanishes. Let $P_\bullet$ be the graded free resolution of $\pi_*\cofib(\phi)$ constructed in Remark \ref{graded free res}. Then $E^{p,q}_2 = H^{-p}(\pi_0B\otimes_{\pi_*B}P_\bullet)_q = 0$ for $q< n$, since it is a subgroup of a quotient of $(\pi_0B\otimes_{\pi_*B}P_p)_q$, which itself is a quotient of $(P_p)_q$. Therefore, $\pi_n \cofib(\phi)\simeq H^0(\pi_0B\otimes_{\pi_*B}P_\bullet)_n = E^{0,n}_2\simeq \pi_n(\pi_0B\otimes_B \cofib(\phi))\simeq 0.$\par To see the first equivalence note that $H^0(\pi_0B\otimes_{\pi_*B}P_\bullet) \simeq \pi_0B\otimes_{\pi_*B} H^0(P_\bullet)\simeq \pi_0B\otimes_{\pi_*B}\pi_*\cofib(\phi) \simeq \pi_*\cofib(\phi)$ (see \cite[09LL]{stacks-project} for the definition of the tensor product of dg-modules and note that taking the $0$th-cohomology is just taking a cokernel which commutes with tensor products). \par Now assume that $L_{B/A}$ is finite projective and $\pi_0A\rightarrow \pi_0B$ is of finite presentation. 
Consider the pushout square $$ \begin{tikzcd} A\arrow[r,""]\arrow[d,""]&B\arrow[d,""]\\ \pi_0A\arrow[r,""]&C. \end{tikzcd} $$ We want to show that the natural morphism $C\rightarrow \pi_0C\simeq \pi_0B$ is an equivalence. Assume for contradiction that there is a smallest integer $i>0$ such that $\pi_iC \not =0$. We get a fiber sequence $$L_{C_{\leq i}/\pi_0 A}\otimes_{C_{\leq i}}\pi_0C\rightarrow L_{\pi_0C/\pi_0A}\rightarrow L_{\pi_0C/C_{\leq i}}.$$ Using Lemma \ref{cotangent of truncation} (or rather its proof), we see that $$\pi_i(L_{C/\pi_0 A}\otimes_{C}\pi_0C) \simeq \pi_i(L_{C_{\leq i}/\pi_0 A}\otimes_{C_{\leq i}}\pi_0C)\simeq \pi_{i+1}(L_{\pi_0C/C_{\leq i}})\simeq \pi_i(C).$$ But $L_{C/\pi_0 A}\otimes_{C}\pi_0C$ is projective over $\pi_0 C$ and thus discrete, which is a contradiction. Therefore, we see that $\pi_0B\simeq \pi_0A\otimes_A B$ and thus $L_{\pi_0B/\pi_0A}\simeq L_{B/A}\otimes_{B}\pi_0B$ is discrete. Hence, with \cite[07BU]{stacks-project}, we see that $\pi_0A\rightarrow \pi_0B$ is smooth. \par It remains to show that the natural map $\phi\colon\pi_nA\otimes_{\pi_0A}\pi_0B\rightarrow \pi_nB$ is an isomorphism for all $n$. But this follows from $B\otimes_A \pi_0A \simeq \pi_0B$ and \cite[Thm. 7.2.2.15]{HA} (we have to test that $B\otimes_A M$ is discrete for all discrete $A$-modules $M$, but since $\pi_0B$ is a flat $\pi_0A$-module, we see that $B\otimes_A M\simeq B\otimes_A \pi_0A\otimes_{\pi_0A}M\simeq \pi_0B\otimes_{\pi_0A}M$ is discrete). \end{proof} The following proposition is similar in spirit to Proposition \ref{smooth cotangent}. Namely, we can characterize finitely presented morphisms by their cotangent complex and their behavior on the underlying discrete rings (i.e. on $\pi_{0}$). \begin{prop} \label{lfp perfect cotangent} Let $f\colon A\rightarrow B$ be a morphism in $\AniAlg{R}$.
Then $f$ is locally of finite presentation if and only if the $B$-module $L_{B/A}$ is perfect and $\pi_0B$ is of finite presentation over $\pi_0A$. \end{prop} \begin{proof} We will not prove this and refer to \cite[Prop. 2.2.2.4]{TV2} or \cite[Prop. 3.2.18]{DAG}. \end{proof} \begin{cor} \label{smooth imp lfp} Let $f\colon A\rightarrow B$ be a smooth morphism of animated $R$-algebras. Then $f$ is locally of finite presentation. \end{cor} \begin{proof} Combine Propositions \ref{lfp perfect cotangent} and \ref{smooth cotangent}. \end{proof} One other important fact is that for an animated ring $A$, we can lift \'etale maps $\pi_0A\rightarrow B$ to \'etale maps $A\rightarrow \widetilde{B}$. The idea is to use Postnikov towers and the compatibility of cotangent complexes and truncations. \begin{prop} \label{lift etale} Let $A$ be an animated $R$-algebra. Then the base change along the natural morphism $A\rightarrow \pi_0A$ induces an equivalence between the $\infty$-categories of \'etale $A$-algebras and of \'etale $\pi_0A$-algebras. \end{prop} \begin{proof} See \cite[Prop. 5.2.3]{CS}. \end{proof} Lastly, we can use Proposition \ref{lift etale} to show that any finite projective module $P$ over an animated ring $A$ is finite locally free. \begin{cor} \label{finite proj = locally free} Let $A$ be an animated ring and let $P$ be a finite projective module over $A$. Then there is a finite \'etale cover $(A\rightarrow A_{i})_{i\in I}$, i.e. the $A_i$ are \'etale $A$-algebras, where $I$ is finite and $A\rightarrow \prod_{i\in I}A_{i}$ is faithfully flat, such that $P\otimes_A A_i$ is free of finite rank, i.e. $P\otimes_A A_i\simeq A_i^r$ for some $r\in \NN$. \end{cor} \begin{proof} Let $\Proj(A)$ denote the full subcategory of $\MMod_A$ of projective modules. We have an equivalence of categories $\textup{h}\Proj(A)\rightarrow \textup{h}\Proj(\pi_0A)\simeq\Proj_{\pi_0A}$ given by the tensor product (see \cite[Cor. 7.2.2.19]{HA}).
By definition, this restricts to an equivalence of finite projective modules. Since classical finite projective modules on $\pi_0A$ are finite locally free, we know that there exists an open cover $(\widetilde{A}_i)$ of $\pi_0A$ such that $\widetilde{A}_i\otimes_{\pi_0A}(\pi_0A\otimes_AP)$ is equivalent to some finite free $\widetilde{A}_i$-module. By Proposition \ref{lift etale}, we can lift this open cover to an \'etale cover $A_i$ of $A$ (certainly $A\rightarrow \prod_{i\in I}A_{i}$ is \'etale, and faithful flatness can be checked on $\pi_{0}$). Since $A_i\otimes_A \pi_0A =\widetilde{A}_i$, the equivalence of the categories involved shows the claim. \end{proof} \section{The stack of perfect modules} \label{sec:perf} In this section, we want to prove that the derived stack of perfect modules is locally geometric. This was already proven in \cite{TVaq} in the model categorical setting and in \cite{AG} in the spectral setting, but we recall the proof in its entirety in our setting.\par We recall some lemmas needed for the proof, as they will become important later on when analyzing the substacks of derived $F$-zips. \begin{lem} \label{upper semi-cont qc} Let $A$ be a commutative ring and $P$ be a perfect complex of $A$-modules and let $n\in \NN_{0}$. Further, for $k\in \ZZ$ let $\beta_{k}\colon \Spec(A)_{\textup{cl}}\rightarrow \NN_{0}$ be the function given by $s\mapsto \dim_{\kappa(s)}\pi_{k}(P\otimes_{A}\kappa(s))$. Then $\beta_{k}^{-1}([0,n])$ is quasi-compact open. \end{lem} \begin{proof} By \cite[0BDI]{stacks-project}, the function $\beta_{k}$ is upper semi-continuous and locally constructible. As $\Spec(A)_{\textup{cl}}$ is affine, it is quasi-compact and quasi-separated, and so we see that $\beta_{k}^{-1}([0,n])$ is quasi-compact open. \end{proof} \begin{rem} \label{tor and type} Let $A$ be a commutative ring and $P$ be a perfect complex of $A$-modules. Let $I\subseteq \ZZ$ be a finite subset and for $k\in \ZZ$ let $\beta_{k}$ be as in Lemma \ref{upper semi-cont qc}.
Assume that $\beta_i \not =0$ for $i\in I$ and $\beta_i = 0$ for $i\notin I$. Then using \cite[0BCD,066N]{stacks-project}, we see that $P$ has Tor-amplitude in $[\min(I),\max(I)]$. \end{rem} \begin{lem} \label{lem perfect zero} Let $A$ be a commutative ring and $P$ be a perfect complex of $A$-modules. Then there exists a quasi-compact open subscheme $U\subseteq\Spec(A)_{\textup{cl}}$ with the following property: \begin{enumerate} \item[$\bullet$] a morphism of affine schemes $\Spec(B)_{\textup{cl}}\rightarrow \Spec(A)_{\textup{cl}}$ factors through $U$ if and only if $P\otimes_{A} B\simeq 0$. \end{enumerate} \end{lem} \begin{proof} Let $\beta_{k}$ be as in Lemma \ref{upper semi-cont qc}. Then we set $$ U\coloneqq\bigcap_{k\in\ZZ} \beta_{k}^{-1}(\lbrace 0 \rbrace ). $$ As $P$ is perfect, and so in particular has finite Tor-amplitude, this intersection has only finitely many terms not equal to $\Spec(A)_{\textup{cl}}$. Therefore, $U$ is a finite intersection of quasi-compact opens in an affine scheme (see Lemma \ref{upper semi-cont qc}) and thus quasi-compact open.\par Now assume we have a morphism $\Spec(B)_{\textup{cl}}\rightarrow \Spec(A)_{\textup{cl}}$ such that $P\otimes_{A}B \simeq 0$. Then certainly for any $b\in \Spec(B)$ and all $i\in\ZZ$ we have $\dim_{\kappa(b)}\pi_{i}(P\otimes_{A}B\otimes_{B}\kappa(b))=0$. Let $a\in\Spec(A)$ be the image of $b$. Then for all $i\in\ZZ$ we have the following equalities \begin{align*} \dim_{\kappa(b)}\pi_{i}(P\otimes_{A}B\otimes_{B}\kappa(b)) &=\dim_{\kappa(b)}\pi_{i}(P\otimes_{A}\kappa(a)\otimes_{\kappa(a)}\kappa(b))\\ &=\dim_{\kappa(b)}\pi_{i}(P\otimes_{A}\kappa(a))\otimes_{\kappa(a)}\kappa(b)\\ &=\dim_{\kappa(a)}\pi_{i}(P\otimes_{A}\kappa(a)), \end{align*} where we use flatness of field extensions in the second equality. Therefore, we see that $\Spec(B)\rightarrow \Spec(A)$ factors through $U$.\par For the other direction assume that $\Spec(B)_{\textup{cl}}\rightarrow \Spec(A)_{\textup{cl}}$ factors through $U$.
Then for any $b\in \Spec(B)$ and any $i\in \ZZ$, we have that $\pi_{i}(P\otimes_{A}B\otimes_{B}\kappa(b)) = 0$. By Remark \ref{tor and type}, we see that $P\otimes_{A}B$ is given by a finite projective module $M$ concentrated in one degree. The fiberwise dimension of $M$ is equal to $0$ by assumption and thus by Nakayama $M=0$. \end{proof} The next lemma shows that the vanishing locus of perfect complexes is quasi-compact open. This will be applied to the cofiber of morphisms of perfect complexes. In particular, the locus classifying equivalences between fixed perfect modules is therefore quasi-compact open. \begin{lem} \label{equiv zariski open} Let $A\in\AniAlg{R}$ and $P$ be a perfect $A$-module. Define the derived stack $V_P$ by letting $V_P(B)$ be the subspace of $\Hom_{\AniAlg{R}}(A,B)$ spanned by those morphisms $u\colon A\rightarrow B$ such that $P\otimes_{A,u}B\simeq 0$. Then $V_P$ is a quasi-compact open substack of $\Spec(A)$. \end{lem} \begin{proof} This is \cite[Prop. 2.23]{TVaq} translated to our setting. But for the convenience of the reader, we give a proof. \par Consider $Q\coloneqq P\otimes_A \pi_0A$. Then $Q$ is a perfect complex of $\pi_0A$-modules. Lemma \ref{lem perfect zero} shows that there is a quasi-compact open subscheme $U\subseteq\Spec(\pi_0 A)_{\textup{cl}}$, such that for any point $u\colon \Spec(R')_{\textup{cl}}\rightarrow \Spec(\pi_0A)_{\textup{cl}}$, where $R'$ is a commutative ring, the module $Q\otimes_{\pi_0 A} R'$ vanishes if and only if $u$ factors through $U$.\par Let $f_1,\dots,f_n\in\pi_0A$ be such that the $\pi_0A_{f_i}$ cover $U$. Then $V\coloneqq \Im(\coprod_{i=1}^{n} \Spec(A[f_i^{-1}]))$, the image of $\coprod_{i=1}^{n}\Spec(A[f_{i}^{-1}])\rightarrow \Spec(A)$, is equivalent to $V_P$.\par Indeed, take a morphism $u\colon A\rightarrow B$ in $\AniAlg{R}$. Then $u\in V(B)$ if and only if there exists an $i$ such that $\pi_0u(f_i)$ is \'etale locally invertible in $\pi_0B$ (see Remark \ref{lift along affine}).
This is equivalent to $P\otimes_A \pi_0 B\simeq0$ by the choice of the $f_i$. This again is equivalent to $P\otimes_A B\simeq 0$.\par To see this, assume that $P\otimes_A B\not\simeq 0$ and take the minimal $i\in\ZZ$ with $\pi_i (P\otimes_A B)\not\simeq 0$. Consider the Tor spectral sequence $$ E_2^{p,q}=\Tor_p^{\pi_\ast B}(\pi_\ast(P\otimes_A B),\pi_0B)_q\Rightarrow \pi_{p+q}(P\otimes_A \pi_0B). $$ By the definition of the graded tensor product, we see that $E^{0,q}_2 = \pi_q(P\otimes_A B)$ for $q= i$ and $0$ for $q<i$. As explained in Remark \ref{graded free res}, we can choose a graded free resolution of $\pi_*(P\otimes_{A}B)$ such that each term of the resolution is concentrated in degrees $\geq i$. Therefore, the spectral sequence is concentrated in the quadrant with lower left corner at $p=0$ and $q=i$. Thus, $\pi_{i}(P\otimes_{A}B)\cong E_{2}^{0,i}\cong \pi_{i}(P\otimes_{A}\pi_{0}B)\cong 0$, contradicting the assumption.\par The geometricity follows from Lemma \ref{smooth affine geometric}. \end{proof} Next, we show that the stack classifying morphisms between perfect modules\footnote{Note that for two perfect $A$-modules $P,Q$ over some animated ring $A$, we have $\Hom_{\MMod_{B}}(P\otimes_{A}B,Q\otimes_{A}B)\simeq \Hom_{\MMod_{A}}(P\otimes_{A} Q^{\vee},B)$ as shown in the proof of Lemma \ref{diagonal geometric}.} is actually geometric and in good cases smooth. Since derived $F$-zips will come with two bounded perfect filtrations (i.e. finite chains of morphisms of perfect modules), this lemma is crucial for the geometricity of derived $F$-zips. \begin{lem} \label{spec sym geometric} Let $A$ be an animated $R$-algebra. Let $P$ be a perfect $A$-module with Tor-amplitude concentrated in $[a,b]$ with $a\leq 0$.
Then the derived stack \begin{align*} F^{A}_P\colon \AniAlg{A}&\rightarrow \SS\\ B&\mapsto \Hom_{\MMod_A}(P,B) \end{align*} is $(-a-1)$-geometric and locally of finite presentation over $\Spec(A)$\footnote{Certainly, we can view $F_{P}^{A}$ as a derived stack over $R$ with a morphism to $\Spec(A)$. So for any animated $R$-algebra $C$ that does not come with a morphism $A\rightarrow C$ the value of $F_{P}^{A}$ is empty.}. Further, the cotangent complex of $F_P$ at a point $x\colon \Spec(B)\rightarrow F^{A}_P$ is given by $$ L_{F_P,x}\simeq P\otimes_A B. $$ \par In particular, if $b\leq0$, then $F_P$ is smooth. \end{lem} \begin{proof} Before showing the geometricity, let us calculate the space of derivations of $F_{P}$ and hence the cotangent complex.\par Let $x\colon \Spec(B)\rightarrow F^{A}_P$ be a morphism of derived stacks corresponding to a morphism $f\colon P\rightarrow B$ in $\MMod_{A}$ and let $M$ be a connective $B$-module. We have that $\Der_{x}(F^{A}_{P}/A,M)$ is given by the fiber of $\Hom_{A}(P,B\oplus M)\rightarrow \Hom_{A}(P,B)$ at $f$. The underlying $R$-module of $B\oplus M$ is by construction the direct sum of the underlying $R$-modules of $B$ and of $M$. Therefore, any morphism $P\rightarrow B\oplus M$ is characterized, uniquely up to homotopy, by a morphism $P\rightarrow B$ and a morphism $P\rightarrow M$, and thus we see that $\Der_{x}(F^{A}_{P}/A,M)\simeq \Hom_{A}(P,M)\simeq \Hom_{A}(P\otimes_{A}B,M)$. Hence, we have $L_{F_{P},x}\simeq P\otimes_{A}B$.\par We prove the remaining assertions by induction on $a$.\par If $a= 0$, then $P$ is connective and $F^{A}_P\simeq \Hom_{\AniAlg{A}}(-,\Sym_A P)$, which has the desired properties.\par Now assume $a<0$. By Lemma \ref{general props of Tor} we have $P\simeq \fib(Q\rightarrow M[a+1])$, where $Q$ has Tor-amplitude in $[a+1,b]$ and $M$ is a finite projective $A$-module.
Thus we get a fiber sequence $$F^{A}_{M[a+1]}\rightarrow F^{A}_Q\rightarrow F^{A}_P\rightarrow F_{M[a]}.$$ By the induction hypothesis, $F^{A}_Q$ is $(-a-2)$-geometric and locally of finite presentation. We will see that the map $p\colon F^{A}_Q\rightarrow F^{A}_P$ is an effective epimorphism.\par Indeed, note that the above fiber sequence and the projectivity of $M$ imply that $\pi_0p$ is surjective and thus $p$ is an effective epimorphism (see Remark \ref{effective epi}). \par The diagonal $F^{A}_Q\times_{F^{A}_P}F^{A}_Q$ is given by $F^{A}_{Q\oplus_P Q}$\footnote{Here $Q\oplus_{P}Q$ is defined as the pushout of the morphism $P\rightarrow Q$ with itself.}, which will be $(-a-2)$-geometric with smooth projections to $F^{A}_Q$.\par To see this, note that we have a fiber sequence $Q\rightarrow Q\oplus_PQ\rightarrow M[a+1]$ which has a retract. Thus the natural map $Q\oplus_PQ\rightarrow Q\oplus M[a+1]$ is an equivalence on the level of homotopy groups by the splitting lemma (the induced exact sequences are short exact, using the retract) and therefore $Q\oplus_PQ\simeq Q\oplus M[a+1]$. Hence, $F^{A}_{Q\oplus_P Q}\simeq F^{A}_Q\times F^{A}_{M[a+1]}$, which is a pullback of $(-a-2)$-geometric stacks and thus itself geometric. \par Also the projection to $F^{A}_Q$ is smooth, because $F^{A}_{M[a+1]}$ is smooth (the smoothness of $F^{A}_{M[a+1]}$ follows since $L_{F^{A}_{M[a+1]},x}\simeq M[a+1]\otimes_A B$ at a point $x\colon \Spec(B)\rightarrow F^{A}_{M[a+1]}$ and thus has Tor-amplitude in $[a+1,0]$, so Corollary \ref{cotangent implies smooth} applies). \par By Proposition \ref{diag + proj geometric}, we see that $F^{A}_P$ is a quasi-compact $(-a-1)$-geometric stack, locally of finite presentation over $\Spec(A)$.\par If $b\leq 0$, then $L_{F^{A}_P}$ is perfect with Tor-amplitude concentrated in $[a,0]$ and therefore $F^{A}_P$ is smooth by Corollary \ref{cotangent implies smooth}.
\end{proof} \begin{rem} A variant of Lemma \ref{spec sym geometric} in the spectral setting can be found in \cite[Thm. 5.2]{AG}. Alternatively, one can look at the proofs given in \cite[Lem. 3.9]{TVaq} and \cite[3.12]{TVaq} to construct a proof in the model categorical setting. \end{rem} \begin{lem} \label{diagonal geometric} The diagonal map $\textup{Perf}^{[a,b]}\rightarrow \textup{Perf}^{[a,b]}\times_{R}\textup{Perf}^{[a,b]}$ is $(b-a)$-geometric and locally of finite presentation. \end{lem} \begin{proof} This is part of the proof of \cite[Thm. 5.6]{AG} translated to our setting. For the convenience of the reader, we give a proof.\par A morphism $\Spec(A)\rightarrow \textup{Perf}^{[a,b]}\times_{R}\textup{Perf}^{[a,b]}$ corresponds to two perfect modules $P,Q$ with Tor-amplitude concentrated in $[a,b]$. The pullback under the diagonal classifies equivalences between $P$ and $Q$. This is an open, $0$-geometric substack of \begin{align*} \Hom_{\MMod_A}(P\otimes_A Q^\vee,-) \simeq \Hom_{\MMod_A}(P,Q\otimes_A-) \simeq \Hom_{\MMod_-}(P\otimes_A -,Q\otimes_A-), \end{align*} (note that perfect modules are dualizable).\par To see this, note that for any morphism $\Spec(B)\rightarrow \Hom_{\MMod_A}(P\otimes_A Q^\vee,-)$, given by a morphism $\varphi\colon P\otimes_A B\rightarrow Q\otimes_A B$, the stack $\Equiv(P,Q)\times_{\Hom_{\MMod_A}(P\otimes_A Q^\vee,-)}\Spec(B)$ classifies morphisms $u\colon B\rightarrow C$ such that $\cofib\varphi\otimes_{B,u} C\simeq 0$, which is an open, $0$-geometric substack of $\Spec(B)$ by Lemma \ref{equiv zariski open}. \par Now $P\otimes_A Q^\vee$ is a perfect module with Tor-amplitude in $[a-b,b-a]$ (see Lemma \ref{general props of Tor}) and thus Lemma \ref{spec sym geometric} concludes the proof. \end{proof} \begin{defi} Let $n\in \NN$ and $A\in\AniAlg{R}$. We denote by $\BGL_{n}(A)$ the $\infty$-category of finite projective $A$-modules of rank $n$. \end{defi} \begin{lem} Let $n\in \NN$.
The functor $A\mapsto \BGL_{n}(A)$ from $\AniAlg{R}$ to $\ICat$ satisfies fpqc descent. \end{lem} \begin{proof} We already know that modules satisfy descent, so it is enough to check that an $A$-module $M$ is finite projective of rank $n$ if it is so after base change to an fpqc-cover $(A\rightarrow A_{i})_{i\in I}$. Note that $\pi_{0}A\rightarrow \pi_{0}A_{i}$ is faithfully flat for every $i\in I$. Now assume that $M\otimes_{A}A_{i}$ is finite projective of rank $n$. Then it is in particular flat, and we will show first that $M$ is flat over $A$. By flatness, the natural map $$\pi_{j}A_{i}\otimes_{\pi_{0}A}\pi_{0}M \cong \pi_{j}A_{i}\otimes_{\pi_{0}A_{i}}\pi_{0}M\otimes_{\pi_{0}A}\pi_{0}A_{i}\cong\pi_{j}A_{i}\otimes_{\pi_{0}A_{i}}\pi_{0}(M\otimes_{A}A_{i})\rightarrow \pi_{j}(M\otimes_{A}A_{i})$$ is an equivalence. By flatness of $A\rightarrow A_{i}$, we have $\pi_{j}A_{i}\cong \pi_{j}A\otimes_{\pi_{0}A}\pi_{0}A_{i}$. Hence, we have $\pi_{j}A_{i}\otimes_{\pi_{0}A}\pi_{0}M\cong \pi_{j}A\otimes_{\pi_{0}A}\pi_{0}A_{i}\otimes_{\pi_{0}A}\pi_{0}M$. By faithful flatness, the map $\pi_{j}A\otimes_{\pi_{0}A}\pi_{0}M\rightarrow \pi_{j}M$ is an equivalence if and only if it is so after base change to $\pi_{0}A_{i}$ for all $i\in I$. But the above shows that this base change gives the map $$\pi_{j}A\otimes_{\pi_{0}A}\pi_{0}A_{i}\otimes_{\pi_{0}A}\pi_{0}M\rightarrow \pi_{j}(M\otimes_{A}A_{i})$$ and flatness of $A_{i}$ over $A$ shows that $\pi_{j}(M\otimes_{A}A_{i})\cong \pi_{j}M\otimes_{\pi_{0}A}\pi_{0}A_{i}$ (see \cite[Prop. 7.2.2.13]{HA}), so indeed $\pi_{j}A\otimes_{\pi_{0}A}\pi_{0}M\rightarrow \pi_{j}M$ is an equivalence. Therefore, $M$ is flat over $A$. \par Now a flat module is finite projective of rank $n$ if it is so on $\pi_{0}$ (see Lemma \ref{flat proj} and use the definition of finite projectivity), and that $\pi_{0}M$ is finite projective of rank $n$ follows from classical faithfully flat descent.
\end{proof} \begin{remark} The inclusion $\SS\hookrightarrow \ICat$ is left adjoint to the functor $(-)^{\simeq}$ that passes to the largest Kan complex contained in an $\infty$-category (see \cite[Prop. 1.2.5.3]{HTT}). Therefore, if $F\colon \Ccal\rightarrow \ICat$ is a (hypercomplete) sheaf for some Grothendieck topology on $\Ccal$, then $F^{\simeq}\coloneqq(-)^{\simeq}\circ F$ is one as well. \end{remark} \begin{defi} We define the derived stack classifying vector bundles as the stack \begin{align*} \BGL_{n,R}\colon \AniAlg{R}&\rightarrow \SS\\ A&\mapsto \BGL_{n}(A)^{\simeq}. \end{align*} Further, we denote by $\GL_{n,R}$ the loop of $\BGL_{n,R}$ at the point $\Spec(R)\rightarrow \BGL_{n,R}$ given for an animated $R$-algebra $A$ by $\ast\mapsto A^{n}$, i.e. we have the following pullback diagram in $\textup{dSt}_{R}$ $$ \begin{tikzcd} \GL_{n,R}\arrow[r,""]\arrow[d,""]& \Spec(R)\arrow[d,""]\\ \Spec(R)\arrow[r,""]&\BGL_{n,R} \end{tikzcd} $$ (note that for a commutative ring $A$, we have that $\BGL_{n,R}(A)$ is the groupoid of rank $n$ vector bundles on $A$ and thus $\GL_{n,R}(A)$ is indeed given by the points of the general linear group scheme of rank $n$). \end{defi} \begin{lem} \label{proj geometric} Let $\Proj_R$ denote the derived stack classifying finite projective modules. Then $\Proj_R\simeq \coprod_{n\in\NN} \BGL_{n,R}$; in particular, $\Proj_R$ is $1$-geometric and smooth.\par Further, $\GL_{n,R}$ is an affine derived scheme and $\Proj_R$ has an affine diagonal. \end{lem} \begin{proof} The proof is the same as \cite[Cor. 1.3.7.12]{TV2}, but for the convenience of the reader we give a sketch.\par That $\Proj_R\simeq\coprod_{n\in \NN} \BGL_{n,R}$, where $\BGL_{n,R}$ denotes the stack of finite projective modules of rank $n$, is clear. So it suffices to show that $\BGL_{n,R}$ is a $1$-geometric smooth stack. It is enough to show that $\GL_{n,R}\rightarrow \Spec(R)$ is $0$-geometric and smooth.
Then $\GL_{n,R}\rightarrow \Spec(R)$ defines a $0$-Segal groupoid (see \cite[Def. 1.3.4.1]{TV2}) and $\BGL_{n,R}$ is $1$-geometric (see \cite[Prop. 1.3.4.2]{TV2}). That $\BGL_{n,R}$ is smooth follows from the fact that the natural morphism $\Spec(R)\rightarrow \BGL_{n,R}$ gives a $1$-atlas.\par The claim about $\GL_{n,R}$ can be seen as follows. The stack $\GL_{n,R}$ is equivalent to the stack classifying automorphisms of $R^n$, i.e. $\GL_{n,R}(A)\simeq \Equiv_{A}(A^n)$, for $A\in \AniAlg{R}$. But by Lemma \ref{equiv zariski open} this is a $0$-geometric open substack of $F_{R^{n^2}}\simeq \Spec(\Sym_R(R^{n^2}))$, which is a $(-1)$-geometric smooth stack.\par Alternatively, one could follow \cite[Prop. 1.3.7.10]{TV2} and show directly that the inclusion $\iota\colon \GL_{n,R}\hookrightarrow F_{R^{n^2}}$ is representable and \'etale by showing that for any point $x\in F_{R^{n^2}}(A)$, we have $\iota^{-1}(x)\simeq \Spec(A[\det(x)^{-1}])$. In particular, we see in this way that $\GL_{n,R}$ is representable by an affine derived scheme. This also shows that $\BGL_{n,R}$ has an affine diagonal: since $\Spec(R)\times_{\Spec(R)}\Spec(R)\rightarrow \BGL_{n,R}\times_{\Spec(R)}\BGL_{n,R}$ is a $1$-atlas by the above, we have a pullback diagram of the form $$ \begin{tikzcd} \GL_{n,R}\arrow[r,""]\arrow[d,""]& \Spec(R)\simeq\Spec(R)\times_{\Spec(R)}\Spec(R)\arrow[d,""]\\ \BGL_{n,R}\arrow[r,"\Delta"]&\BGL_{n,R}\times_{\Spec(R)}\BGL_{n,R}, \end{tikzcd} $$ which shows that the diagonal is affine, since this can be tested after passing to a cover by affines (see \cite[Lem. 1.3.2.8]{TV2}). \end{proof} \begin{rem} \label{kan fib pull} For the proof of the next theorem, we want to recall some generalities about pullbacks of Kan complexes.\par Let $X,Y$ be Kan complexes, i.e. objects of $\SS$, and assume we have a morphism $X\rightarrow\Fun(\del\Delta^{1},Y)$ in $\SS$.
We want to compute the following pullback in $\SS$ $$ \begin{tikzcd} W\arrow[r,""]\arrow[d,""]& \Fun(\Delta^{1},Y)\arrow[d,"i"]\\ X\arrow[r,""]&\Fun(\del\Delta^{1},Y), \end{tikzcd} $$ where $i$ is given by the restriction. In general this is a pullback in the $\infty$-categorical sense, which can also be computed on the level of model categories via the homotopy pullback (recall that $\SS$ is the $\infty$-category associated to the model category of simplicial sets with the usual model structure (weak equivalences are given by weak equivalences of the underlying Kan complexes and fibrations are Kan fibrations)). We claim that the homotopy pullback of the underlying Kan complexes is equivalent in $\SS$ to the ordinary pullback of simplicial sets if $i$ is a Kan fibration (i.e. $i$ is a fibration in the model category of simplicial sets and since each simplicial set involved is already a Kan complex they are per definition fibrant).\par Indeed, this is a classical result and remarked in \cite[Rem. A.2.4.5]{HTT} but we will shortly sketch the idea behind it. Let $\Abf$ be a combinatorial model category (e.g. the category of simplicial sets with the model structure explained above) and let $I$ be the diagram category given by three objects $\lbrace 0,1,2\rbrace$ together with morphisms $0\rightarrow 2$, $1\rightarrow 2$ and identities. One can attach the injective model structure onto the functor category $\Fun(I,\Abf)$ by defining weak equivalences to be pointwise weak equivalences and defining cofibrations also pointwise (the fibration are then given by certain lifting properties, which we will not discuss on detail). The homotopy limit is defined as the right Quillen adjoint of the constant functor $\Abf\rightarrow \Fun(I,\Abf)$, which is weakly equivalent to the the limit of a \textit{fibrant} diagram in $\Fun(I,\Abf)$, i.e. 
for a diagram $a\rightarrow c\leftarrow b$, the homotopy limit is defined as an object that is weakly equivalent to the ordinary limit of $a'\rightarrow c'\leftarrow b'$, where $a'\rightarrow c'$ and $b'\rightarrow c'$ are fibrations, $c'$ is fibrant and we have a commutative diagram of the form $$ \begin{tikzcd} a\arrow[r,""]\arrow[d,""]& c\arrow[d,""]&b\arrow[d,""]\arrow[l,""]\\ a'\arrow[r,""]&c'&b'\arrow[l,""], \end{tikzcd} $$ where the vertical arrows are weak equivalences (this is analogous to the theory of homotopy pushouts, which can be found in \cite[\S A.2.4]{HTT}). Now let $I_{R}$ be the full subcategory of $I$ generated by the elements $0$ and $2$ and define $I_{L}$ as the full subcategory of $I$ generated by the elements $1$ and $2$. The tuple $(I_{L},I_{R})$ makes $I$ into a Reedy category (see \cite[\S A.2.9]{HTT} for more on Reedy categories). As explained in \cite[Proposition A.2.9.19]{HTT}, there is a model structure on $\Fun(I,\Abf)$ corresponding to the Reedy structure on $I$, called the Reedy model structure. Important for us is that a diagram $a\rightarrow c\leftarrow b$ is fibrant for the Reedy model structure if $a,b$ are fibrant and either $a\rightarrow c$ or $b\rightarrow c$ is a fibration. Further, as remarked in \cite[Rem. A.2.9.23]{HTT}, the Reedy model structure and the injective model structure are Quillen equivalent via the identity functor. Therefore, if we have a fibrant diagram with respect to the Reedy model structure, then the homotopy limit is by definition weakly equivalent to the ordinary limit in $\Abf$. Since the homotopy limit in this case is the homotopy pullback, we are done.\par That $i$ is a Kan fibration follows from \cite[Cor. 3.1.3.3]{kerodon}, and hence we can compute the above pullback in the $\infty$-category $\SS$ via the limit of the underlying simplicial sets. \par Now assume that $Y$ is an arbitrary $\infty$-category.
We want to compute the following pullback in $\SS$ $$ \begin{tikzcd} W\arrow[r,""]\arrow[d,""]& \Fun(\Delta^{1},Y)^{\simeq}\arrow[d,"i"]\\ X\arrow[r,""]&\Fun(\del\Delta^{1},Y)^{\simeq}, \end{tikzcd} $$ where $i$ is naturally given by applying the functor $(-)^{\simeq}$ to the restriction. If $i$ is a Kan fibration, then we can apply the argument above. In general it may not be clear if $i$ is a Kan fibration. But in this case, we have that the natural morphism $F\colon \Fun(\Delta^{1},Y)\rightarrow\Fun(\del\Delta^{1},Y)$ is an isofibration of $\infty$-categories (see \cite[01F3]{kerodon}), meaning that it is an inner fibration on the level of homotopy categories, we have the following property: if $x\in h\Fun(\Delta^{1},Y)$ and we have an isomorphism $u'\colon y\xrightarrow{\sim} F(x)$ then, there exists a $x'\in h\Fun(\Delta^{1},Y)$ with an isomorphism $u\colon x'\rightarrow x$ such that $F(u) = u'$. In particular, \cite[Prop. 4.4.3.7]{kerodon} implies that $i$ is a Kan fibration. \end{rem} \begin{theorem} \label{main thm} The derived stack \begin{align*} \textup{Perf}_R\colon \AniAlg{R} &\rightarrow \SS\\ A&\mapsto (\MMod_A^{\textup{perf}})^{\simeq} \end{align*} is locally geometric and locally of finite presentation.\par To be more specific, we can write $\textup{Perf}_R = \colim_{a\leq b} \textup{Perf}_R^{[a,b]}$, where $\textup{Perf}_R^{[a,b]}$ is the moduli space consisting of perfect modules which have Tor-amplitude concentrated in degree $[a,b]$ and each $\textup{Perf}_R^{[a,b]}$ is $(b-a+1)$-geometric and locally of finite presentation and the inclusion $\textup{Perf}_{R}^{[a,b]}\hookrightarrow \textup{Perf}_{R}$ is a quasi-compact open immersion. If $b-a\leq 1$ then $\textup{Perf}_R^{[a,b]}$ is in fact smooth. \end{theorem} \begin{proof} The proof in the model categorical setting can be found in \cite[Prop. 3.7]{TVaq} and in the spectral setting in \cite[Thm. 5.6]{AG}. The latter follows the former with few changes for readability. 
We will follow the proof presented in the latter using our setting.\par We show that $\textup{Perf}^{[a,b]}$ is $(n+1)$-geometric, where $n=b-a$, by induction on $n$.\par For $n=0$, we are done, since then we have $\Proj\simeq \textup{Perf}^{[a,a]}$ (see Lemma \ref{general props of Tor}), which is $1$-geometric and locally of finite presentation by Lemma \ref{proj geometric}.\par Now let $n>0$ and assume $\textup{Perf}^{[a+1,b]}$ is $n$-geometric and locally of finite presentation. Let $U$ be defined via the pullback diagram of derived stacks $$ \begin{tikzcd} U\arrow[r,""]\arrow[d,"p", swap]&\Fun(\Delta^1,\MMod^{\perf})^{\simeq}\arrow[d,""]\\ \textup{Perf}^{[a+1,b]}\times_R \textup{Perf}^{[a+1,a+1]}\arrow[r,""]&\Fun(\del\Delta^1,\MMod^{\perf})^{\simeq}. \end{tikzcd} $$ Let $\Spec(A)\rightarrow \textup{Perf}^{[a+1,b]}\times_R \textup{Perf}^{[a+1,a+1]}$ be given by $(P,Q)$, where $P$ is a perfect $A$-module of Tor-amplitude $[a+1,b]$ and $Q$ is the $(a+1)$-shift of a finite projective $A$-module; then $p^*(P,Q)$ classifies morphisms between those, i.e. $p^*(P,Q)\simeq \Spec(\Sym(P\otimes_AQ^\vee))$ (note that $P\otimes_AQ^\vee$ is perfect and has Tor-amplitude in $[0,b-(a+1)]$ and thus is connective; see Lemma \ref{general props of Tor}). Therefore $p$ is $(-1)$-geometric and locally of finite presentation, and with Lemmas \ref{diagonal geometric} and \ref{geometric comp} we see that $U$ is $n$-geometric and locally of finite presentation. Note that if $b-a\leq 1$, then $p$ is even smooth, and using that $\textup{Perf}^{[a+1,a+1]}$ is smooth, we see that $U$ is smooth.\par By sending a morphism to its fiber, we get a morphism of derived stacks $q\colon U\rightarrow \textup{Perf}^{[a,b]}$. Using Proposition \ref{diag geometric} with Lemma \ref{diagonal geometric}, we see that $q$ is $n$-geometric, so it suffices to show that $q$ is also smooth and an effective epimorphism.\par That it is an effective epimorphism follows from Lemma \ref{general props of Tor}.
To check smoothness, let $\Spec(A)\rightarrow \textup{Perf}^{[a,b]}$ be a morphism classified by a perfect $A$-module $P$ with Tor-amplitude in $[a,b]$. Then $q^{-1}(P)(B)$, for some animated $A$-algebra $B$, consists of morphisms of perfect $B$-modules $f\colon Q\rightarrow M[a+1]$ such that $\fib(f)\simeq P\otimes_A B$, where $Q$ has Tor-amplitude in $[a+1,b]$ and $M$ is finite projective. Since locally every finite projective module is free of finite rank, we can decompose $$ q^{-1}(P)\simeq \coprod_m q^{-1}(P)_m, $$ where $q^{-1}(P)_m$ is the substack of $q^{-1}(P)$ on which the classified morphisms have codomain given by the $(a+1)$-shift of a free module of rank $m$. The stack $q^{-1}(P)_m$ is equivalent to the stack classifying morphisms $A^m[a]\rightarrow P$ whose cofiber has Tor-amplitude in $[a+1,b]$, which in turn is equivalent to $A^m[a]\rightarrow P$ being surjective on $\pi_a$. \par To see this equivalence of stacks, let us look at $q^{-1}(P)_m(B)$. These are all morphisms $f\colon Q\rightarrow B^m[a+1]$ such that $\fib(f)\simeq P\otimes_A B$, where again $Q$ is a perfect $B$-module with Tor-amplitude in $[a+1,b]$. Since $\MMod_B$ is stable, we see that $P\otimes_AB\rightarrow Q\rightarrow B^{m}[a+1]$ is a fiber diagram if and only if it is a cofiber diagram, and thus after shifting we see that $q^{-1}(P)_m(B)$ consists of morphisms $g\colon B^{m}[a]\rightarrow P\otimes_AB$ whose cofiber $\cofib(g)$ has Tor-amplitude in $[a+1,b]$. By Lemma \ref{general props of Tor}, $\cofib(g)$ has Tor-amplitude in $[a,b]$. Since tensoring with a discrete $\pi_0B$-module $M$ still yields a cofiber sequence $M^m[a]\rightarrow P\otimes_A M\rightarrow \cofib(g)\otimes_B M$, it is enough to check that $\pi_a(g\otimes\id_M)$ is a surjection.
But since $\pi_a(P\otimes_AM) = \pi_a(P) \otimes_{\pi_0A} M$ (use the degeneracy of the Tor spectral sequence at $(0,a)$) and the ordinary tensor product of $\pi_0A$-modules preserves surjections, it is enough to check that $\pi_ag$ is surjective. Therefore, $q^{-1}(P)_m(B)$ consists of morphisms $B^m[a]\rightarrow P\otimes_A B$ which are surjective on $\pi_a$ (obviously, any morphism $B^m[a]\rightarrow P\otimes_A B$ with cofiber having Tor-amplitude in $[a+1,b]$ is surjective on $\pi_a$).\par By this characterization, the stack $q^{-1}(P)_m$ is an open substack of $F^{A}_{(P^{\vee})^{m}[a]}$ (see Lemma \ref{spec sym geometric} for notation).\par To see this, let $\Spec(B)\rightarrow F_{(P^{\vee})^{m}[a]}$ be given by a morphism $\xi\colon B^m[a]\rightarrow P\otimes_A B$. Let $Z$ be the pullback of $\Spec(B)$ along the inclusion $q^{-1}(P)_m\hookrightarrow F_{(P^{\vee})^{m}[a]}$. In particular, for any animated $A$-algebra $C$, we have that $Z(C)$ consists of those morphisms $f\colon B\rightarrow C$ such that $\pi_af^\ast\xi$ is surjective. Since $P\otimes_A B$ is perfect and has Tor-amplitude in $[a,b]$, its homotopy group $\pi_a(P\otimes_A B)$ is finitely presented (see \cite[Cor. 7.2.4.5]{HA}). Therefore, being surjective is an open condition on $\Spec(\pi_0B)$ (see \cite[Prop. 8.4]{WED}). Further refining by principal affine opens $D(f_i)\subseteq\Spec(\pi_0B)$, we get an open substack $\bigcup \Spec(B[f_i^{-1}])$ of $\Spec(B)$. Now a morphism $u\colon B\rightarrow C$ is in $Z(C)$ if and only if \'etale locally there is an $i$ such that $\pi_0u(f_i)$ is invertible. Therefore, $Z\simeq \bigcup\Spec(B[f_i^{-1}])$.\par Since $q^{-1}(P)_m$ is open in $F^{A}_{(P^{\vee})^{m}[a]}$, which itself is smooth by Lemma \ref{spec sym geometric}, we see that $q^{-1}(P)$ is smooth over $A$, which proves the smoothness of $q$. \par To see that $q$ is an effective epimorphism, let $P$ be a perfect $A$-module with Tor-amplitude in $[a,b]$.
Then, by Lemma \ref{general props of Tor}, we can find a cofiber sequence $M[a]\rightarrow P\rightarrow Q$, where $Q$ is perfect of Tor-amplitude $[a+1,b]$ and $M$ is finite projective. Analogous to the above, $P$ is then the fiber of the induced morphism $Q\rightarrow M[a+1]$ and thus lies in the image of $q$.\par For the open immersion part, it suffices by induction to show that for all $a< b\in \ZZ$ the inclusion $\textup{Perf}_{R}^{[a+1,b]}\hookrightarrow \textup{Perf}_{R}^{[a,b]}$ is an open immersion. Let $A\in\AniAlg{R}$ and $\Spec(A)\rightarrow\textup{Perf}_{R}^{[a,b]}$ be a morphism classified by a perfect $A$-module $P$ of Tor-amplitude $[a,b]$. By Lemma \ref{general props of Tor}, we have a fiber sequence of $A$-modules $P\rightarrow M[a+1]\rightarrow Q$, where $Q$ is perfect of Tor-amplitude in $[a+1,b]$ and $M$ is finite projective. Now $P$ has Tor-amplitude in $[a+1,b]$ if and only if $M\simeq 0$. But by Lemma \ref{equiv zariski open}, we see that the vanishing locus of $M$ is a quasi-compact open in $\Spec(A)$, which concludes the proof. \end{proof} \begin{cor} \label{cotangent of perf} Let $A$ be an animated $R$-algebra and let $\Spec(A)\rightarrow \textup{Perf}_R$ be a morphism given by a perfect $A$-module $P$. The cotangent complex $L_{\textup{Perf}_R,A}$ is perfect, and if $A/R$ is \'etale then the cotangent complex at that point is given by $$ L_{\textup{Perf}_R,A}\simeq (P\otimes_A P^{\vee})^{\vee}[-1]. $$ \end{cor} \begin{proof} This is analogous to \cite[Cor. 5.9]{AG}, but for the convenience of the reader we recall the proof.\par The first assertion follows from Theorem \ref{main thm} with Corollary \ref{global lfp cotangent}.\par For the second assertion, let $\Omega_P\textup{Perf}_R$ denote the loop of $\textup{Perf}_R$ along $\Spec(A)\rightarrow \textup{Perf}_R$ classified by a perfect $A$-module $P$, i.e. $\Omega_P\textup{Perf}_R\coloneqq \Spec(A)\times_{\textup{Perf}_{R}}\Spec(A)$.
By definition of the cotangent complex, we have a pullback diagram $$ \begin{tikzcd} L_{\textup{Perf},A} \arrow[r]\arrow[d] & L_{A/R}\arrow[d]\\ L_{A/R} \arrow[r]&L_{\Omega_P\textup{Perf}_R,*}, \end{tikzcd} $$ where $*$ is the point corresponding to the canonical map $\Spec(A)\rightarrow \Omega_P\textup{Perf}_R$.\par Let $T\in \AniAlg{A}$; then $\Omega_P\textup{Perf}_R(T)\simeq \Equiv_T(P\otimes_A T)$, where $\Equiv_T(P\otimes_A T)$ denotes the $T$-automorphisms of $P\otimes_A T$. In particular, $\Omega_P\textup{Perf}_R$ is an open substack of $$\Hom_A((P\otimes_A P^{\vee})^\vee,-)$$ (after using adjunctions). By Lemma \ref{spec sym geometric}, we now have $$ L_{\Omega_P\textup{Perf}_R,\ast}\simeq (P\otimes_A P^{\vee})^{\vee}.$$\par Therefore, if $A/R$ is \'etale, we have $\Sigma L_{\textup{Perf}_R,A}\simeq L_{\Omega_P\textup{Perf}_R,*}$, which finishes the proof. \end{proof} \section*{Introduction} Derived algebraic geometry is the study of algebraic geometry using homotopy theoretical methods. It was used by Arinkin-Gaitsgory \cite{ArinkinGaits} to formulate a geometric Langlands conjecture. T\"oen-Vaqui\'e used derived algebraic geometry to study moduli of dg-categories in \cite{TVaq}. In \cite{AG}, Antieau-Gepner studied derived versions of Brauer groups and Azumaya algebras. More recently, Annala constructed in \cite{Annala} a bivariant derived algebraic cobordism for quasi-smooth projective derived schemes. In \cite{yaylali}, the author defined derived versions of $F$-zips and used these to analyze families of proper smooth morphisms, which was previously not possible in this generality with classical $F$-zips. \par A general theory using certain model categories (for example simplicial commutative rings, DG-rings and connective $E_{\infty}$-rings) was developed by To\"en-Vezzosi in \cite{TV2}. The idea is to study sheaves of $\infty$-groupoids on these model categories. They also look more closely into the three settings listed above.
\par Before working with model categories, one could ask how to extend algebraic geometry to sheaves of rings that take values in $n$-groupoids, for $n\geq 2$. The idea of Simpson in \cite{Simpson} was to extend the definition of an atlas. Namely, he first starts with $0$-geometric stacks, which are algebraic spaces. The $0$-geometric morphisms are morphisms that are representable by algebraic spaces. Then an $n$-geometric stack is an \'etale sheaf of $(n+1)$-groupoids on rings that admits a smooth $(n-1)$-geometric morphism from an $(n-1)$-geometric stack. It turns out that if we instead start with sheaves with values in $\infty$-groupoids, then an $n$-geometric sheaf is automatically $(n+1)$-truncated. The definition of To\"en-Vezzosi of $n$-geometric stacks replaces the source category by a ``nice'' model category with a suitable topology (where ``nice'' is meant in their sense, for example simplicial commutative rings with the \'etale topology).\par The case of simplicial commutative rings is also treated by Lurie in his PhD thesis \cite{DAG} in the language of $\infty$-categories. The benefit of working with $\infty$-categories is the fact that we do not need to specify a model structure on animated rings (the $\infty$-category of simplicial commutative rings\footnote{The term ``simplicial commutative ring'' in the $\infty$-categorical language was replaced by \v{C}esnavi\v{c}ius-Scholze in \cite{CS} by the term ``animated ring''. In particular, this makes clear that one works with objects up to homotopy.}). Besides this setting, there is a lot of work by Lurie on spectral algebraic geometry, i.e. algebraic geometry using $E_{\infty}$-rings (see \cite{SAG}). Even though in positive characteristic there is no equivalence between animated rings and $E_{\infty}$-rings, morally, geometry in both settings should behave the same. \par For example, in both settings one can look at the sheaf of perfect complexes, i.e.
we can look at the functor $A\mapsto (\MMod_{A}^{\perf})^{\simeq}$, that maps an animated ring (resp. an $E_{\infty}$-ring) $A$ to the largest $\infty$-groupoid in perfect $A$-modules. In \cite{TVaq} To\"en-Vaqui\'e show that this stack is locally geometric in the simplicial commutative ring setting (in the language of model categories) and in \cite{AG} Antieau-Gepner show local geometricity in the spectral setting. Here one should remark, that the definition of $n$-geometric stacks in \cite{TV2}, \cite{AG} and \cite{DAG} differ. Most notably, in \cite{TV2} separated Artin stacks are $0$-geometric, in \cite{AG} a $0$-geometric stacks are disjoint unions of affines and in \cite{DAG} the $0$-geometric stacks must have an \'etale atlas.\par In this article, we focus on derived algebraic geometry in the context of animated rings. The most notable difference is, that for animated rings in positive characteristic, we have a Frobenius endomorphism. This does not hold for $E_{\infty}$-rings, as we would need to find a homotopy coherent Frobenius, whereas for animated rings, we can use the classical Frobenius to induce one on animated rings. Throughout, we will omit that we work in the context of animated rings.\par Another benefit of animated rings and so, derived algebraic geometry is the naturally occurring deformation theory. There is a natural notion of square-zero extensions and thus derivations. Classically, derivations are represented by the module of K\"ahler-differentials. In the context of derived algebraic geometry, derivations are represented by the cotangent complex. \par The goal of these notes is to recall important aspects of \cite{TV2} and \cite{SAG} in terms of animated rings and prove that the stack of perfect complexes is locally geometric following the proofs in \cite{TVaq} and \cite{AG}. 
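\par Let us briefly make the Frobenius endomorphism mentioned above precise; the following is a sketch of the standard construction via animation and is not needed for the geometricity results below. Since the $\infty$-category of animated $\mathbb{F}_{p}$-algebras is freely generated under sifted colimits by the polynomial algebras, the compatible family of classical Frobenius endomorphisms $$ \varphi\colon \mathbb{F}_{p}[x_{1},\dots,x_{n}]\rightarrow \mathbb{F}_{p}[x_{1},\dots,x_{n}],\quad a\mapsto a^{p}, $$ extends uniquely, by left Kan extension along sifted colimits, to an endomorphism $\varphi_{A}\colon A\rightarrow A$ of every animated $\mathbb{F}_{p}$-algebra $A$, and $\pi_{0}\varphi_{A}$ is the classical Frobenius of $\pi_{0}A$.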
\subsubsection*{Derived commutative algebra} The $\infty$-category of animated rings $\AniAlg{\ZZ}$ is obtained from polynomial algebras by freely adjoining sifted colimits. Passing to under categories, for any animated ring $A$ we can define the $\infty$-category of animated $A$-algebras $\AniAlg{A}\coloneqq (\AniAlg{\ZZ})_{A/}$. The benefit of this definition is that a lot of questions about functors from $\AniAlg{\ZZ}$ to $\infty$-categories with sifted colimits can be reduced to polynomial algebras. Animated rings should be thought of as spectral commutative rings (i.e. $E_\infty$-rings) with extra structure. In particular, after forgetting this extra structure, we can also define modules over animated rings (as modules over the underlying $E_{\infty}$-ring). One important example of such a module is the cotangent complex. This module arises naturally if we want to define an analogue of the module of differentials as the module that represents the space of derivations.\par By left Kan-extension of the classical square-zero extension, we can associate to any connective $A$-module $M$ the square-zero extension $A\oplus M$ of $A$ by $M$. In this way, we can define $A$-linear derivations with values in $M$ as factorizations $A\rightarrow A\oplus M\rightarrow A$ of the identity of $A$. As in the classical case, there is a connective $A$-module $L_{A}$ representing the space of derivations, called the \textit{cotangent complex}. If $A$ is discrete, then this is the classical cotangent complex of Illusie. We can also extend this definition to a relative version $L_{B/A}$ for animated ring homomorphisms $A\rightarrow B$.\par The underlying $E_{\infty}$-ring of an animated ring is a commutative algebra object in spectra. Thus, we can define homotopy groups of animated rings and automatically see (with the theory developed in \cite{HA}) that we can associate to every animated ring $A$ an $\NN_{0}$-graded ring $\pi_{*}A$.
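\par To illustrate the difference between the cotangent complex discussed above and the module of K\"ahler-differentials, let us record a well-known example (a sketch, not needed in the sequel). Let $A$ be a discrete ring and $f\in A$ a nonzerodivisor, and let $B$ be the animated quotient $A\otimes_{\ZZ[x]}\ZZ$, where $\ZZ[x]\rightarrow A$ sends $x$ to $f$ and $\ZZ[x]\rightarrow\ZZ$ sends $x$ to $0$. Since $f$ is a nonzerodivisor, $B$ is discrete with $B\cong A/f$, and base change for cotangent complexes along the pushout yields $$ L_{B/A}\simeq (f)/(f^{2})[1]\cong B[1], $$ a shifted free module of rank one, whereas $\Omega^{1}_{B/A}\cong \pi_{0}L_{B/A}=0$. So the cotangent complex records the conormal direction of the closed immersion $\Spec(B)\hookrightarrow\Spec(A)$, which the module of K\"ahler-differentials cannot see.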
Using this, we reduce definitions like smoothness of a morphism $A\rightarrow B\in\AniAlg{\ZZ}$ to smoothness of the morphism of ordinary rings $\pi_0A\rightarrow \pi_0B$ together with compatibility of the graded ring structure, i.e. $\pi_*A\otimes_{\pi_0A} \pi_0B\cong \pi_*B$. \par A benefit of the naturally occurring deformation theory is that we can relate properties of the cotangent complex to geometric properties of the underlying animated rings. \begin{prop}[\protect{\ref{smooth cotangent}}] Let $f\colon A\rightarrow B$ be a morphism of animated rings that is finitely presented in $\pi_{0}$. Then $L_{B/A}$ is finite projective if and only if $f$ is smooth. \end{prop} This cannot be achieved using the module of K\"ahler differentials, as $\Omega_{B/A}^{1}=0$ whenever $f$ is a closed immersion.\par One can also show that \'etale maps of animated rings $A\rightarrow B$ are equivalently given by \'etale maps $\pi_{0}A\rightarrow \pi_{0}B$ of (classical) rings. So, one can think of animated rings as nilpotent thickenings of (classical) rings in the homotopy direction. \subsubsection*{Derived algebraic geometry} Defining \textit{derived stacks} is now rather straightforward: we set them to be presheaves (of spaces) on $\AniAlg{\ZZ}$ which satisfy \'etale descent. Important examples are \textit{affine derived schemes}, which we define as representable presheaves on $\AniAlg{\ZZ}$. We can also define relative versions, where we replace $\ZZ$ with an animated ring. We will see that affine derived schemes naturally satisfy fpqc-descent. For affine derived schemes it is easy to define properties by using their underlying animated rings. To do the same for derived stacks, we will need the notion of \textit{$n$-geometric} morphisms. This notion is defined inductively, where we say a morphism $f\colon F\rightarrow G$ of derived stacks is \begin{enumerate} \item[•] $(-1)$-geometric, if the base change with an affine derived scheme is representable by an affine derived scheme.
A $(-1)$-geometric morphism is smooth if it is so after base change to any affine. \item[•] The morphism $f$ is $n$-geometric if for any affine derived scheme $\Spec(A)$ with a morphism $\Spec(A)\rightarrow G$, the base change $F\times_G \Spec(A)$ admits a smooth $(n-1)$-geometric effective epimorphism from a disjoint union of affines $\coprod_i \Spec(T_i)$; here an $n$-geometric morphism is smooth if after affine base change the induced map from the atlas to the base is $(-1)$-geometric and smooth. \end{enumerate} We can also define \textit{locally geometric} derived stacks, as those derived stacks $X$ that can be written as a filtered colimit of geometric open substacks.\par For a good class\footnote{With ``good class'' we mean stable under base change, composition and equivalences, and smooth local on source and target.} of properties \textbf{P} of affine derived schemes, e.g. smooth, flat,\dots\footnote{Note that the property \'etale is not smooth local on the source! We have to be careful if we want to define \'etale morphisms of $n$-geometric stacks.}, we can now say that a morphism of derived stacks has property $\pbf\in\Pbf$ if it is $n$-geometric for some $n$ and after base change with an affine derived scheme the atlas over the affine base has property $\pbf$. Since geometricity is defined using smooth atlases, we are mostly interested in this property. \par We would like to relate smoothness to deformation theory, as in the affine case. But before that, we need to define quasi-coherent modules on derived stacks. As modules over animated rings satisfy fpqc descent, we can define the $\infty$-category of quasi-coherent modules $\QQCoh(X)$ on an $n$-geometric derived stack $X$ via gluing along smooth atlases, i.e. via right Kan extension of $A\mapsto \MMod_{A}$. Though not obvious, we will see, using work of Lurie, that for classical schemes the quasi-coherent modules are equivalently given by complexes in the derived category with quasi-coherent cohomology.
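Concretely, the right Kan extension description of quasi-coherent modules unwinds to a limit over the affines mapping to $X$; this is a standard presentation, and the indexing notation is ours:

```latex
\[
  \QQCoh(X)\;\simeq\;\lim_{\Spec(A)\rightarrow X}\MMod_{A},
\]
```

where the limit is taken in the $\infty$-category of $\infty$-categories and the transition functors are the base change functors $-\otimes_{A}B$ along morphisms of affines over $X$.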
\begin{prop}[\protect{\ref{derived cat is kan ext}}] Let $X$ be a scheme. Then we have an equivalence of $\infty$-categories $\Dcal_{\textup{qc}}(X)\simeq \QQCoh(X)$, where $\Dcal_{\textup{qc}}(X)$ denotes the derived $\infty$-category of $\Ocal_{X}$-modules with quasi-coherent cohomology. \end{prop} \par Now we can define the cotangent complex as in the affine case, namely as the quasi-coherent module that represents derivations. We can also use the atlas of an $n$-geometric derived stack to see that any $n$-geometric morphism of derived stacks has a cotangent complex. In particular, as in the affine case, we can relate geometric properties of derived stacks to properties of the cotangent complex. \begin{thm}[\ref{cotangent implies smooth}] Let $f\colon X\rightarrow Y$ be an $n$-geometric morphism of derived stacks. Then $f$ is smooth if and only if $f_{|\Ring}$ is locally of finite presentation and the cotangent complex of $f$ is perfect and has Tor-amplitude in $[-n-1,0]$. \end{thm} The proof of this theorem combines methods from the homotopy theoretical world with methods from the algebraic geometric world. We can extend these results even further, to see that any $n$-geometric stack $X$ is nilcomplete, meaning that the values of $X$ on all animated rings are determined by its values on truncated animated rings, i.e. $X\simeq \lim_{n} X\circ\tau_{\leq n}.$ This also shows that $n$-geometric derived stacks are automatically hypercomplete. \subsubsection*{Geometricity of perfect complexes} Let us state the main theorem of this article. \begin{theorem}[\ref{main thm}] The derived stack \begin{align*} \textup{Perf}\colon \AniAlg{R} &\rightarrow \SS\\ A&\mapsto (\MMod_A^{\textup{perf}})^{\simeq} \end{align*} is locally geometric and locally of finite presentation. \end{theorem} The proof proceeds in several steps.
First, we write this stack as a filtered colimit of substacks $\textup{Perf}^{[a,b]}$, where we fix the Tor-amplitude of the perfect complexes to lie in $[a,b]$ for $a\leq b\in \ZZ$. By definition, it is enough to show that these are $(b-a+1)$-geometric stacks, locally of finite presentation and open in $\textup{Perf}$. \par The proof is by induction on $n\coloneqq b-a$. For $n=0$, this is an interesting and well-known stack, $\textup{Perf}^{[0,0]}\simeq \Proj\simeq\coprod_m \BGL_m$.\par The idea for higher $n$ is straightforward, knowing that any perfect module $M$ of Tor-amplitude in $[a,b]$ can be written as the fiber of a map $Q\rightarrow P$, where $Q$ is a perfect module of Tor-amplitude in $[a+1,b]$ and $P$ is perfect of Tor-amplitude in $[a+1,a+1]$. Inductively, this reduces to the question of whether the stack classifying morphisms between perfect modules is geometric and smooth, which follows from explicit calculations of the cotangent complex.\par For openness, we have to consider the vanishing locus of perfect complexes; since this locus is open in classical algebraic geometry, we can extend the statement to the derived world. \subsection*{Structure of this article} We start by summarizing \cite{HA} (see Section \ref{HA}). We aim to show that spectral rings and modules behave, in many respects, as expected.\par The next step is the introduction of derived commutative algebra, i.e. the theory of algebras over animated rings (see Section \ref{sec:derived commutative algebra}). We first define animated rings $\AniAlg{\ZZ}$, show that there is a relation to $E_\infty$-rings and use this relation to define modules over animated rings. As a consequence, we can define the cotangent complex and define properties of morphisms in $\AniAlg{\ZZ}$. We end this section by showing that the cotangent complex is closely related to smoothness of morphisms.\par After discussing derived commutative algebra, we introduce the theory of derived algebraic geometry (see Section \ref{sec:der.alg.geo}).
Mainly, we introduce the notion of derived stacks, geometricity of morphisms and derived schemes. We also discuss truncations of these and how they relate to classical algebraic geometry. Further, we again cover smoothness of such morphisms and how it relates to the cotangent complex.\par We finish the summary on derived algebraic geometry by showing that the stack of perfect modules is locally geometric and locally of finite presentation (see Section \ref{sec:perf}).\par \subsection*{Assumptions} All rings are commutative with one. \par We work with the Zermelo--Fraenkel axioms of set theory with the axiom of choice and assume the existence of inaccessible regular cardinals.\par Throughout this paper we fix some uncountable inaccessible regular cardinal $\kappa$ and the collection $\Ucal(\kappa)$ of all sets having cardinality $<\kappa$, which is a Grothendieck universe (and as a Grothendieck universe is uniquely determined by $\kappa$) and hence satisfies the usual axioms of set theory (see \cite{universe}). When we say small, we mean $\Ucal(\kappa)$-small. In the following, we will use some theorems which assume smallness of the respective ($\infty$-)categories. When needed, without further mentioning it, we assume that the corresponding ($\infty$-)categories are contained in $\Ucal(\kappa)$.\par If we work with families of objects indexed by some object, we will assume, unless mentioned otherwise, that the indexing object is a $\Ucal(\kappa)$-small set. \subsection*{Notation} We work in the setting of $(\infty,1)$-categories (see \cite{HTT}). By abuse of notation, for any $1$-category $C$ we will always denote its nerve again with $C$, unless otherwise specified.\par A \textit{subcategory} $\Ccal'$ of an $\infty$-category $\Ccal$ is a simplicial subset $\Ccal'\subseteq\Ccal$ such that the inclusion is an inner fibration. In particular, any subcategory of an $\infty$-category is itself an $\infty$-category and we will not mention this fact again.
\begin{enumerate} \item[$\bullet$] $\Delta$ denotes the simplex category (see \cite[000A]{kerodon}), i.e. the category of finite non-empty linearly ordered sets, and $\Delta_{+}$ the category of (possibly empty) finite linearly ordered sets. We denote with $\Delta_{s}$ the category of finite non-empty linearly ordered sets whose morphisms are strictly increasing functions and with $\Delta_{s,+}$ the category of (possibly empty) finite linearly ordered sets whose morphisms are strictly increasing functions. \item[$\bullet$] With an $\infty$-category, we always mean an $(\infty,1)$-category. \item[$\bullet$] $\SS$ denotes the $\infty$-category of small spaces (also called $\infty$-groupoids or anima). \item[$\bullet$] $\ICat$ denotes the $\infty$-category of small $\infty$-categories. \item[$\bullet$] $\Sp$ denotes the $\infty$-category of spectra. \item[$\bullet$] For an $E_{\infty}$-ring $A$, we denote the $\infty$-category of $A$-modules in spectra, i.e. $\MMod_{A}(\Sp)$ in the notation of \cite{HA}, with $\MMod_{A}$. \item[$\bullet$] For any ordered set $(S,\leq)$, we denote its corresponding $\infty$-category again with $S$, where the corresponding $\infty$-category of an ordered set is given by the nerve of $(S,\leq)$ seen as a $1$-category (the objects are given by the elements of $S$, and $\Hom_S(a,b)=\ast$ if $a\leq b$ and is empty otherwise). \item[$\bullet$] For any set $S$ the $\infty$-category $S^{\disc}$ will denote the nerve of the set $S$ seen as a discrete $1$-category (the objects are given by the elements of $S$, and $\Hom_S(a,a)=\ast$ for any $a\in S$, with all other mapping sets empty). \item[$\bullet$] For any morphism $f\colon X\rightarrow Y$ in an $\infty$-category $\Ccal$ with finite limits, we denote the functor from $\Delta_{+}$ to $\Ccal$ given by the \v{C}ech nerve of $f$ (see \cite[\S 6.1.2]{HTT}), if it exists, by $\Cv(Y/X)_{\bullet}$. \item[$\bullet$] Let $\Ccal$ be an $\infty$-category with final object $\ast$.
For morphisms $f\colon \ast\rightarrow X$ and $g\colon \ast\rightarrow X$, we denote the homotopy pullback $\ast\times_{f,X,g}\ast$, if it exists, with $\Omega_{f,g}X$. If $\Ccal$ has an initial object $0$, then we denote the pullback $0\times_{X}0$ with $\Omega X$. \item[$\bullet$] Let $f\colon X\rightarrow Y$ be a morphism in $\SS$ and let $y\in Y$. We write $\fib_{y}(X\rightarrow Y)$ or $\fib_{y}(f)$ for the pullback $X\times_{Y}\ast$, where $\ast$ is the final object in $\SS$ (up to homotopy) and the morphism $\ast\rightarrow Y$ is induced by the element $y$, which by abuse of notation we also denote with $y$. \item[$\bullet$] For a morphism $f\colon M\rightarrow N$ in $\MMod_{A}$, where $A$ is some $E_{\infty}$-ring, we set $\fib(f)=\fib(M\rightarrow N)$ (resp. $\cofib(f)=\cofib(M\rightarrow N)$) as the pullback (resp. pushout) of $f$ with the essentially unique zero morphism $0\rightarrow N$ (resp. $M\rightarrow 0$). \item[$\bullet$] When we say that a square diagram in an $\infty$-category $\Ccal$ of the form $$ \begin{tikzcd} W\arrow[r,""]\arrow[d,""]& X\arrow[d,""]\\ Y\arrow[r,""] &Z \end{tikzcd} $$ is commutative, we always mean that we can find a morphism $\Delta^{1}\times\Delta^{1}\rightarrow \Ccal$ of $\infty$-categories that extends the above diagram. \end{enumerate} \subsection*{Acknowledgement} These notes are part of my PhD thesis \cite{thesis} (another extract, dealing with derived $F$-zips, is \cite{yaylali}). I want to thank my advisor Torsten Wedhorn, with whom I had many discussions concerning problems in derived algebraic geometry. He suggested that I write these notes separately, so that they could be beneficial to anyone interested in this topic. I would also like to thank Benjamin Antieau, Adeel Khan and Jonathan Weinberger for their help concerning derived algebraic geometry and higher topos theory, respectively, and Timo Richarz for helpful discussions. Lastly, I want to thank Simone Steilberg and Thibaud van den Hove.
\section{Overview: Higher algebra} \label{HA} In this section, we want to summarize some important aspects of spectra, $E_\infty$-rings and modules over these, as presented in \cite{HA}.\par Our main interests are animated rings and modules over those. Animated rings are presented in Section \ref{sec:derived commutative algebra}, so here we focus on modules. These will be defined over a monoidal $\infty$-category, in this case the $\infty$-category of spectra $\Sp\coloneqq\Sp(\SS)$. So, one should think of $\Sp$ as an $\infty$-category equipped with a tensor product, and of spectral modules as modules for this tensor product over a commutative ring object, called an $E_\infty$-ring.\par Let us start by recalling stable $\infty$-categories. An $\infty$-category is stable if it has a zero object, it admits fibers and cofibers, and every cofiber sequence is a fiber sequence (see \cite[Def. 1.1.1.9]{HA}). This can be seen as an $\infty$-analogue of an abelian category. One very important feature of a stable $\infty$-category is that its homotopy category is automatically triangulated (see \cite[Thm. 1.1.2.14]{HA}). Stable $\infty$-categories have other nice stability properties, e.g. a square is a pullback if and only if it is a pushout, and there exist finite limits and colimits (see \cite[Prop. 1.1.3.4]{HA}), but listing everything concerning stable $\infty$-categories would be too involved, so we refer to \cite[\S1]{HA}.\par Now let us come to the definition of the spectrum $\Sp(\Ccal)$ of a pointed $\infty$-category $\Ccal$ with finite limits. One definition is obtained by setting $\Sp(\Ccal)$ as the $\infty$-category of excisive, reduced functors from\footnote{Here $\SS^{\textup{fin}}_{\ast}$ denotes the smallest subcategory of $\SS$ that contains the final object, is stable under finite colimits and consists of pointed objects, where pointed means objects $x\in \SS$ with a morphism $\ast\rightarrow x$ (see \cite[Not.
1.4.2.5]{HA}).} $\SS^{\textup{fin}}_{\ast}$ to $\Ccal$ (see \cite[Def. 1.2.4.8]{HA}). Alternatively, one obtains $\Sp(\Ccal)$ as the homotopy limit of the tower $\dots\xrightarrow{\Omega}\Ccal\xrightarrow{\Omega}\Ccal$ (see \cite[Rem. 1.4.2.25]{HA}). Both viewpoints are useful. An important property of $\Sp(\Ccal)$ is that it is a stable $\infty$-category and if $\Ccal$ is presentable then so is $\Sp(\Ccal)$ (see \cite[Cor. 1.4.2.17, Prop. 1.4.4.4]{HA}). From now on, we assume $\Ccal$ to be presentable. The first definition as functors allows us to define a functor $$ \Omega^{\infty}\colon \Sp(\Ccal)\rightarrow \Ccal $$ by evaluation on the zero sphere. In fact, $\Ccal$ is stable if and only if $\Omega^{\infty}$ is an equivalence. Another property of $\Omega^{\infty}$ is that it admits a left adjoint $\Sigma^{\infty}$ (see \cite[Prop 1.4.4.4]{HA}).\par Let us set $\Ccal=\SS_*$, the $\infty$-category of pointed spaces and let us set $\Sp\coloneqq\Sp(\SS_*)$. The second definition we gave allows us to identify the homotopy category $\textup{hSp}$ with the classical stable homotopy category\footnote{Symmetric spectra are certain sequences of Kan complexes $X_0,X_1,\dots$ with maps $\Sigma X_{n-1}\rightarrow X_{n}$. This category is equipped with a model structure (called the stable model structure) and is closed monoidal. One can also equip certain sequences of pointed topological spaces $X_0,X_1,\dots$ with maps $\Sigma X_{n-1}\rightarrow X_{n}$ with a model structure and endow it with a closed monoidal structure using the smash product. Both constructions are in fact Quillen equivalent (see \cite{SymSpec} for further information about symmetric spectra).} (see \cite[Rem. 1.4.3.2]{HA}). In fact, one can show that $\Sp$ is the $\infty$-category associated to the model category of symmetric spectra (see \cite[Ex. 4.1.8.6]{HA}). 
This allows us to define a monoidal structure on $\Sp$ using the monoidal structure on the underlying model category, given by the smash product. One important aspect of this monoidal structure is that its unit element is the sphere spectrum\footnote{We use the convention of \cite{HA}, where the sphere spectrum is the image of the final object $\ast\in\SS$ under $\Sigma^{\infty}$ (see \cite[\S 1.4.4]{HA} for more details).} and the tensor product preserves colimits in each variable (see \cite[Cor. 4.8.2.19]{HA}). This construction shows that for a spectrum object $X\in \Sp$ we have $$\Hom_{\Sp}(S,X)\simeq \Hom_{\SS}(\Delta^0,X)\simeq \Omega^{\infty}(X),$$ where $S$ denotes the sphere spectrum. This is familiar from abelian groups: $\ZZ$-module homomorphisms from $\ZZ$ to any abelian group are uniquely determined by the elements of the group, and since here $\Hom_{\Sp}(S,X)$ is a Kan complex, we see that it is equivalent to the underlying space of the spectrum $X$. An important side remark is that the heart of spectra is naturally identified with (the nerve of) the category of abelian groups.\par Using $\Omega^{\infty}$, stability of $\Sp$ and homotopy groups of Kan complexes, we can define an accessible $t$-structure on $\Sp$ (see \cite[Prop 1.4.3.6]{HA}). In particular, we can define the homotopy groups of spectrum objects $X\in \Sp$ via $\pi_nX\simeq\pi_0\Omega^{\infty}(X[-n])$ (see the proof of \cite[Prop 1.4.3.6]{HA}), and if $n\geq 2$, then these are given by $\pi_n\Omega^{\infty}(X)$ (see \cite[Rem. 1.4.3.8]{HA}).\\ \par Before we can define modules, we start with $E_\infty$-rings. We will not go into detail, since we will work with an analogue, namely animated rings. One should think of animated rings as $E_\infty$-rings with extra structure.
This extra structure vanishes if we are in characteristic zero, but in positive or mixed characteristic it gives us no relation except a conservative functor from animated rings to $E_\infty$-rings (see Proposition \ref{fun E to SCR}).\par As stated above, one should think of $\Sp$ as an $\infty$-categorical analogue of abelian groups. To define commutative rings in this $\infty$-category one could use the theory of $\infty$-operads and describe $E_\infty$-rings in terms of sections of the $\infty$-operad induced by the monoidal structure of $\Sp$ (see \cite[\S2]{HA} for more information about $\infty$-operads and \cite[\S3, 4]{HA} for the construction of rings using this approach). We will not describe how this is achieved but instead use a rectification argument, i.e. we define $E_\infty$-rings as the $\infty$-category associated to the commutative algebra objects in the underlying model category of $\Sp$. Both approaches are equivalent (see \cite[Thm. 4.5.4.7]{HA}), so we can think of $E_\infty$-rings as certain commutative rings in the model category associated to $\Sp$. Using the Eilenberg--Mac Lane spectrum one sees, for example, that ordinary commutative rings are discrete $E_\infty$-rings. This description is also somewhat vague, since it requires the $\infty$-categorical localization of cofibrant commutative algebra objects in the underlying model category, but in contrast to $\infty$-operads, we think it gives a more classical feel for commutative rings.\\ \par Now let us conclude this section with modules over $E_{\infty}$-rings. Again, \cite{HA} deals with modules using $\infty$-operads (see \cite[\S 3, 4]{HA}). Analogously to the $E_\infty$-ring case, we can apply a rectification statement to define modules using the monoidal structure on the underlying model category; again, both constructions are equivalent (see \cite[Thm. 4.3.3.17]{HA}). For an $E_\infty$-ring $A$, we will denote the $\infty$-category of spectral $A$-modules with $\MMod_A$.
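For orientation, $\MMod_A$ sits in a free--forgetful adjunction over spectra; this is a standard fact, and the notation $U$ for the forgetful functor is ours:

```latex
\[
  A\otimes(-)\colon \Sp \rightleftarrows \MMod_A \colon U,
\]
```

where the left adjoint sends a spectrum $X$ to the free $A$-module $A\otimes X$, and the forgetful functor $U$ is conservative and preserves limits and colimits.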
By forgetting the module structure, we get a functor $\MMod_A\rightarrow \Sp$ (induced by the construction using $\infty$-operads, see \cite[Def. 3.3.3.8]{HA}). As in the classical case of abelian groups and modules, the forgetful functor is conservative (this follows from \cite[Cor. 4.2.3.2]{HA}). Further, if we restrict ourselves to connective\footnote{Here an object $c$ in an $\infty$-category $\Ccal$ with a conservative functor $f\colon\Ccal\rightarrow \Dcal$ into a stable $\infty$-category with $t$-structure is connective, if $f(c)$ is connective, i.e. $\pi_{i}f(c)\cong 0$ for $i<0$.} modules, then even the composition $\MMod_A^{\textup{cn}}\rightarrow\Sp\xrightarrow{\Omega^{\infty}}\SS$ is conservative (see \cite[Rem. 7.1.1.8]{HA}). This is analogous to the fact that a morphism of classical modules is an isomorphism if and only if it is a bijection on the underlying sets. Another fact well known in the classical case is the following equivalence in $\SS$: $$ \Hom_{\MMod_A}(A,M)\simeq \Hom_{\Sp}(S,M)\simeq \Omega^\infty(M) $$ (see \cite[Cor. 4.2.4.7]{HA}).\par As in the classical case, we can also define $E_\infty$-algebras and modules over algebras by setting $\Einfty_A\coloneqq \CAlg(\MMod_A)$, i.e. we endow $A$-modules with a monoidal structure and define $A$-algebras as commutative algebra objects in $\MMod_A$ (see \cite[7.1.3.8]{HA}). Alternatively, we could look at the under category $\Einfty_{A/}$; both constructions are in fact equivalent (see \cite[Cor. 3.4.1.7]{HA}). We also have a forgetful functor from $\Einfty_A$ to $\Sp$, which under the identification $\Einfty_A\simeq \CAlg(\MMod_A)$ factors through the forgetful functor $\Einfty_A\rightarrow \MMod_A$, which is conservative (again this follows from \cite[Cor. 4.2.3.2]{HA}).
Further, the identification of $\pi_nR$ with $\pi_0\Hom_{\Sp}(S[n],R)$ for an $E_\infty$-ring $R$, where $\pi_nR$ is defined on the underlying spectrum of $R$, allows us to endow $\pi_*R\coloneqq \bigoplus_{n\in \ZZ}\pi_n R$ with a graded commutative ring structure (see \cite[\S 7.1.1, Rem. 7.1.1.6]{HA}). We can also endow $\pi_*M\coloneqq\bigoplus_{n\in\ZZ}\pi_nM$, for any $R$-module $M$, with a graded $\pi_*R$-module structure (see \cite[\S 7.1.1]{HA}).\par The $\infty$-category of $A$-modules is also stable (see \cite[7.1.1.5]{HA}) and has an accessible $t$-structure induced by the accessible $t$-structure on $\Sp$. This $t$-structure allows us to identify the heart of $\MMod_A$ with the (nerve of the) ordinary category of $\pi_0A$-modules (see \cite[Prop. 7.1.1.13]{HA}) (note that by the above $\pi_0A$ is an ordinary commutative ring). This is analogous to the $E_\infty$-algebra case, where for a connective $E_\infty$-ring $A$ the discrete $A$-algebras are precisely the ordinary commutative $\pi_0A$-algebras (see \cite[Prop. 7.1.3.18]{HA}; note that there is a typo in the statement). \par A key difference from the case of connective $E_\infty$-rings is that over an ordinary (discrete) commutative ring $R$, the $R$-module spectra are not the discrete $R$-modules; instead we have $\MMod_R\simeq \Dcal(R)$ as symmetric monoidal $\infty$-categories, where $\Dcal(R)$ denotes the derived $\infty$-category of $R$-modules\footnote{The derived $\infty$-category of a Grothendieck abelian category $\Acal$ is the $\infty$-category associated to the model category of chain complexes $\Ch(\Acal)$ (see \cite[Prop. 1.3.5.15]{HA} and \cite[Prop. 1.3.5.3]{HA} for the model structure on chain complexes). The homotopy category $\textup{h}\Dcal(\Acal)$ is equivalent to the ordinary derived category $D(\Acal)$ of $\Acal$.} (see \cite[Thm. 7.1.2.13]{HA}).
Let us also remark that under this equivalence the homotopy groups of module spectra are isomorphic to the homology groups of the associated complex, and since this is an equivalence of symmetric monoidal $\infty$-categories, this isomorphism also respects the module structure (see Remark \ref{pi_n = H_n}).
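In symbols, for a discrete commutative ring $R$ and a module spectrum $M\in\MMod_R$ corresponding to a chain complex $M_\bullet\in\Dcal(R)$, the remark above reads as follows (the notation $M_\bullet$ for a representing complex is ours):

```latex
\[
  \MMod_R\;\simeq\;\Dcal(R), \qquad \pi_n M\;\cong\; H_n(M_\bullet)\quad\text{for all } n\in\ZZ.
\]
```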
https://arxiv.org/abs/1302.5651
Universal nowhere dense subsets of locally compact manifolds
In each manifold $M$ modeled on a finite or infinite dimensional cube $[0,1]^n$ we construct a closed nowhere dense subset $S\subset M$ (called a spongy set) which is a universal nowhere dense set in $M$ in the sense that for each nowhere dense subset $A\subset M$ there is a homeomorphism $h:M\to M$ such that $h(A)\subset S$. The key tool in the construction of spongy sets is a theorem on topological equivalence of certain decompositions of manifolds. A special case of this theorem says that two vanishing cellular strongly shrinkable decompositions $\mathcal A,\mathcal B$ of a Hilbert cube manifold $M$ are topologically equivalent if any two non-singleton elements $A\in\mathcal A$ and $B\in\mathcal B$ of these decompositions are ambiently homeomorphic.
\section{Introduction} In this paper we shall construct and characterize universal nowhere dense subsets of manifolds modeled on finite or infinite dimensional cubes $\mathbb I^n$, $n\le \omega$. A paracompact space $M$ is called a {\em manifold modeled on a model space $E$} (briefly, an {\em $E$-manifold\/}) if each point $x\in M$ has an open neighborhood $O_x\subset M$ homeomorphic to an open subset of the model space $E$. A nowhere dense subset $N$ of a topological space $M$ is called {\em a universal nowhere dense set} in $M$ if for each nowhere dense subset $A\subset M$ there is a homeomorphism $h:M\to M$ such that $h(A)\subset N$. It is well-known that the standard Cantor set $M^1_0$ is a universal nowhere dense subset of the unit interval $\mathbb I=[0,1]$ and the Sierpi\'nski carpet $M^2_1$ is a universal nowhere dense subset of the square $\mathbb I^2$. The Cantor set and the Sierpi\'nski carpet are the first representatives in the hierarchy of the Menger cubes $M^n_{n-1}$, which are universal nowhere dense subsets of the $n$-dimensional cubes $\mathbb I^n$, see \cite{Menger}. The topology of the pair $(\mathbb I^2,M^2_1)$ was characterized by Whyburn \cite{Whyburn}. His result was generalized by Cannon \cite{Cannon73}, who gave a topological characterization of the pair $(\mathbb I^n,M^n_{n-1})$ for all positive integers $n\ne 4$. In this paper we shall generalize these results of Whyburn and Cannon by constructing a specific universal nowhere dense subset $S$ (called a {\em spongy set}) in each $\mathbb I^n$-manifold $M$ and giving a topological characterization of the resulting pair $(M,S)$. The definition of a spongy set is based on the notion of a tame ball.
\begin{definition} A subset $B$ of an $\mathbb I^n$-manifold $M$, $n\le \omega$, is called a {\em tame ball\/} in $M$ if $B$ has an open neighborhood $O(B)\subset M$ such that the pair $(O(B),B)$ is homeomorphic to the pair $$\begin{cases} (\mathbb R^n,\mathbb I^n)&\mbox{if $n<\omega$,}\\ \big(\mathbb I^\omega\times[0,2),\mathbb I^\omega\times[0,1]\big)&\mbox{if $n=\omega$}. \end{cases} $$ \end{definition} A family $\mathcal F$ of subsets of a topological space $X$ is called {\em vanishing} if for any open cover $\mathcal U$ of $X$ the family $\mathcal F'=\{F\in\mathcal F:\forall U\in\mathcal U\;\;F\not\subset U\}$ is {\em locally finite} in $X$. \begin{definition} A subset $S$ of an $\mathbb I^n$-manifold $M$, $n\le \omega$, is called a {\em spongy set} in $M$ if \begin{enumerate} \item $S$ is closed and nowhere dense in $M$, \item the family $\mathcal C$ of connected components of the complement $M\setminus S$ is vanishing in $M$, \item any two connected components $C,C'\in\mathcal C$ have disjoint closures in $M$, and \item the closure $\bar C$ of each connected component $C\in\mathcal C$ is a tame ball in $M$. \end{enumerate} \end{definition} A typical example of a spongy set in a finite dimensional cube $\mathbb I^n$ is the Menger cube $M^n_{n-1}$. The following theorem generalizes the results of Whyburn \cite{Whyburn} (for $n=2$) and Cannon \cite{Cannon73} (for $n\in\mathbb N\setminus\{4\}$) and gives many examples of universal nowhere dense subsets in finite and infinite dimensional manifolds. This theorem is essentially used in the paper \cite{BR} devoted to constructing universal meager subsets in locally compact manifolds. \begin{theorem}\label{main1} Let $M$ be a manifold modeled on a cube $\mathbb I^n$, $n\le\omega$. \begin{enumerate} \item Each nowhere dense subset of $M$ lies in a spongy subset of $M$. \item Any two spongy subsets of $M$ are ambiently homeomorphic. \item Any spongy subset of $M$ is a universal nowhere dense subset in $M$. 
\end{enumerate} \end{theorem} Two subsets $A,B$ of a topological space $X$ are called {\em ambiently homeomorphic} if the pairs $(X,A)$ and $(X,B)$ are homeomorphic. The latter means that $h(A)=B$ for some homeomorphism $h:X\to X$. The spongy subsets $M^n_{n-1}$ of finite dimensional cubes $\mathbb I^n$ are typical examples of deterministic fractals (see \cite{Barnsley} and \cite{Falconer} for the theory of fractals). In contrast, spongy sets in Hilbert cube manifolds do not have such a fractal structure and they are Hilbert cube manifolds as well. \begin{theorem}\label{t:spongeQ} Any spongy subset $S$ of a Hilbert cube manifold $M$ is a retract of $M$ and is homeomorphic to $M$. \end{theorem} This theorem will be proved in Section~\ref{s:spongeQ}. Theorem~\ref{main1} will be proved in Section~\ref{s:main1} after long preparatory work in Sections~\ref{s:dec}--\ref{s:main2}. The principal tool in the proof of Theorem~\ref{main1} is Theorem~\ref{main2} on the topological equivalence of $\mathcal K$-tame decompositions of strongly locally homogeneous completely metrizable spaces, discussed in Section~\ref{s:dec} and proved in Section~\ref{s:main2}. In Section~\ref{s:cellular} we shall apply Theorem~\ref{main2} to prove Corollaries~\ref{c4.8} and \ref{c4.9} establishing the topological equivalence of some vanishing cellular decompositions of Hilbert cube manifolds. \section{Topological equivalence of certain decompositions of topological spaces}\label{s:dec} In this section we discuss the problem of the topological equivalence of decompositions of completely metrizable spaces. For the theory of decompositions of finite dimensional manifolds we refer the reader to Daverman's monograph \cite{Dav}. Now let us fix some notation. For a subset $A$ of a topological space $X$ we shall denote by $\bar A$, $\mathrm{Int}(A)$, and $\partial A=\bar A\setminus\mathrm{Int}(A)$ the closure, the interior, and the boundary of $A$ in $X$, respectively. 
For a metric space $(X,d)$, a point $x\in X$ and a subset $A\subset X$ we put $d(x,A)=\inf_{a\in A}d(x,a)$ and $\mathrm{diam}(A)=\sup\{d(a,b):a,b\in A\}$. For a real number $\varepsilon$ we shall denote by $O_d(x,\varepsilon)=\{y\in X:d(x,y)<\varepsilon\}$ and $O_d(A,\varepsilon)=\{x\in X:d(x,A)<\varepsilon\}=\bigcup_{a\in A}O_d(a,\varepsilon)$ the open $\varepsilon$-neighborhoods of $x$ and $A$ in the metric space $X$. Let $\mathcal A,\mathcal B$ be two families of subsets of a space $X$. We shall write $\mathcal A\prec\mathcal B$ and say that the family $\mathcal A$ refines the family $\mathcal B$ if each set $A\in\mathcal A$ is contained in some set $B\in\mathcal B$. A subset $A\subset X$ is called {\em $\mathcal B$-saturated} if $A$ coincides with its {\em $\mathcal B$-star} $\mathcal{S}t(A,\mathcal B)=\bigcup\{B\in\mathcal B:A\cap B\ne \emptyset\}$. The family $\mathcal A$ is called {\em $\mathcal B$-saturated} if each set $A\in\mathcal A$ is $\mathcal B$-saturated. The family $\mathcal{S}t(\mathcal A,\mathcal B)=\{\mathcal{S}t(A,\mathcal B):A\in\mathcal A\}$ will be called the {\em $\mathcal B$-star} of the family $\mathcal A$, and $\mathcal{S}t(\mathcal A)=\mathcal{S}t(\mathcal A,\mathcal A)$ is the {\em star} of $\mathcal A$. Given functions $f,g:Z\to X$ we write $(f,g)\prec\mathcal A$ if for each point $z\in Z$ with $f(z)\ne g(z)$ the doubleton $\{g(z),f(z)\}$ lies in some set $A\in\mathcal A$. This definition implies that $f(z)=g(z)$ for each point $z\in Z\setminus \big(f^{-1}(\bigcup\mathcal A)\cap g^{-1}(\bigcup\mathcal A)\big)$. If $d$ is a metric on the space $X$, then we denote by $d(f,g)=\sup_{z\in Z}d(f(z),g(z))$ the $d$-distance between the functions $f,g$. Sometimes by $d(f,g)$ we shall also understand the function $d(f,g):Z\to \mathbb R$, $d(f,g):z\mapsto d(f(z),g(z))$. A topological space $X$ is called {\em completely metrizable} if its topology is generated by a complete metric.
By \cite[4.3.26]{En}, a topological space is completely metrizable if and only if it is metrizable and \v Cech complete. It is well-known \cite[5.1.8]{En} that each metrizable space $X$ is {\em collectionwise normal} in the sense that for each discrete family $\mathcal F$ of closed subsets of $X$ there is a discrete family $\{U_F\}_{F\in\mathcal F}$ of open subsets of $X$ such that $F\subset U_F$ for all $F\in\mathcal F$. By a {\em decomposition} of a topological space $X$ we understand a cover $\mathcal D$ of $X$ by pairwise disjoint non-empty compact subsets. For each decomposition $\mathcal D$ we can consider the quotient map $q_\mathcal D:X\to \mathcal D$ assigning to each point $x\in X$ the unique compact set $q_\mathcal D(x)\in\mathcal D$ that contains $x$. The quotient map $q_\mathcal D$ induces the quotient topology on $\mathcal D$ turning $\mathcal D$ into a topological space called the {\em decomposition space} of the decomposition $\mathcal D$. Sometimes, to distinguish a decomposition $\mathcal D$ from its decomposition space, we shall denote the latter space by $X/\mathcal D$. A decomposition $\mathcal D$ of a topological space $X$ is {\em upper semicontinuous} if for each closed subset $F\subset X$ its {\em $\mathcal D$-saturation} $\mathcal{S}t(F,\mathcal D)=\bigcup\{D\in\mathcal D:D\cap F\ne\emptyset\}$ is closed in $X$. It is easy to see that a decomposition $\mathcal D$ of $X$ is upper semicontinuous if and only if the quotient map $q_\mathcal D:X\to X/\mathcal D$ is closed if and only if the quotient map $q_\mathcal D$ is perfect (the latter means that $q_\mathcal D$ is closed and for each point $y\in X/\mathcal D$ the preimage $q_\mathcal D^{-1}(y)$ is compact). Since the (complete) metrizability is preserved by perfect maps (see \cite[3.9.10 and 4.4.15]{En}), we get the following lemma (cf. Proposition 2 of \cite{Dav}).
\begin{lemma}\label{l1} For any upper semicontinuous decomposition $\mathcal D$ of a (completely) metrizable space $X$ the decomposition space $X/\mathcal D$ is (completely) metrizable. \end{lemma} Let us recall that a decomposition $\mathcal D$ of a topological space $X$ is called {\em vanishing} if for each open cover $\mathcal U$ of $X$ the subfamily $\mathcal D'=\{D\in\mathcal D:\forall U\in\mathcal U\;\;D\not\subset U\}$ is discrete in $X$ in the sense that each point $x\in X$ has a neighborhood $O_x\subset X$ that meets at most one set $D\in\mathcal D'$. Each vanishing disjoint family $\mathcal C$ of non-empty compact subsets of a topological space $X$ generates the vanishing decomposition $$\dot\mathcal C=\mathcal C\cup\big\{\{x\}:x\in X\setminus\textstyle{\bigcup}\mathcal C\big\}$$of the space $X$. In particular, each non-empty compact set $K\subset X$ induces the vanishing decomposition $\{K\}\cup\big\{\{x\}:x\in X\setminus K\big\}$ whose decomposition space will be denoted by $X/K$. By $q_K:X\to X/K$ we shall denote the corresponding quotient map. The following (probably known) lemma generalizes Proposition 3 of \cite{Dav}. \begin{lemma}\label{l2} Each vanishing decomposition $\mathcal D$ of a regular space $X$ is upper semicontinuous. \end{lemma} \begin{proof} Given a closed subset $F\subset X$ we need to check that its $\mathcal D$-saturation $\mathcal{S}t(F,\mathcal D)=q_\mathcal D^{-1}(q_\mathcal D(F))$ is closed in $X$. Fix any point $x\in X\setminus \mathcal{S}t(F,\mathcal D)$ and let $D_x=q_\mathcal D(x)$ be the unique element of the decomposition $\mathcal D$, which contains the point $x$. By the regularity of the space $X$, the compact subset $D_x\subset X\setminus F$ has an open neighborhood $V\subset X$ such that $\overline{V}\cap F=\emptyset$.
Since the decomposition $\mathcal D$ is vanishing, for the open cover $\mathcal U=\{X\setminus F,X\setminus \overline{V}\}$ of $X$ the family $$\mathcal D'=\{D\in\mathcal D:D\not\subset X\setminus F,\;D\not\subset X\setminus \overline{V}\}=\{D\in\mathcal D:D\cap F\ne\emptyset\ne D\cap\overline{V}\}$$ is discrete in $X$ and hence its union $D'=\bigcup\mathcal D'$ is closed in $X$. Since $D_x\notin \mathcal D'$, we conclude that $D_x\cap D'=\emptyset$ and hence $U_x=V\setminus D'$ is an open neighborhood of $x$ missing the set $\mathcal{S}t(F,\mathcal D)$ and therefore the latter set is closed in $X$. \end{proof} A decomposition $\mathcal D$ of a space $X$ will be called {\em dense} (resp. {\em discrete}) if its non-degeneracy part $$\mathcal D^\circ=\{D\in\mathcal D:|D|>1\}$$ is dense (resp. closed and discrete) in the decomposition space $\mathcal D=X/\mathcal D$. A decomposition $\mathcal D$ of a topological space $X$ is called \begin{itemize} \item {\em shrinkable} if for each $\mathcal D$-saturated open cover $\mathcal U$ of $X$ and each open cover $\mathcal V$ of $X$ there is a homeomorphism $h:X\to X$ such that $(h,\mathrm{id}_X)\prec\mathcal U$ and $\{h(D):D\in\mathcal D\}\prec\mathcal V$; \item {\em strongly shrinkable} if for each $\mathcal D$-saturated open set $U\subset X$ the decomposition $\mathcal D|U=\{D\in\mathcal D:D\subset U\}$ of $U$ is shrinkable. \end{itemize} A compact subset $K$ of a topological space $X$ is called {\em locally shrinkable} if for each neighborhood $O(K)\subset X$ and any open cover $\mathcal V$ of $O(K)$ there is a homeomorphism $h:X\to X$ such that $h|X\setminus O(K)=\mathrm{id}$ and $h(K)$ is contained in some set $V\in\mathcal V$. It is easy to see that a compact subset $K\subset X$ is locally shrinkable if and only if the decomposition $\{K\}\cup\big\{\{x\}:x\in X\setminus K\big\}$ of $X$ is strongly shrinkable (cf. \cite[p.42]{Dav}). 
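To illustrate the notion of a locally shrinkable set by an elementary example (not used in the sequel), one can readily check that the closed interval $K=[0,1]$ is locally shrinkable in the real line $X=\mathbb R$. Indeed, given a neighborhood $O(K)\subset\mathbb R$ and an open cover $\mathcal V$ of $O(K)$, choose $\delta>0$ with $(-\delta,1+\delta)\subset O(K)$, a point $c\in K$, a set $V\in\mathcal V$ containing $c$, and $\eta>0$ with $[c-\eta,c+\eta]\subset V\cap(-\delta,1+\delta)$. The piecewise linear homeomorphism $h:\mathbb R\to\mathbb R$ that is the identity outside $(-\delta,1+\delta)$, maps $0$ to $c-\eta$ and $1$ to $c+\eta$, and is linear on the remaining pieces, satisfies $h(K)=[c-\eta,c+\eta]\subset V$. Consequently, the decomposition $\{K\}\cup\big\{\{x\}:x\in\mathbb R\setminus K\big\}$ of $\mathbb R$ is strongly shrinkable.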
(Strongly) shrinkable decompositions are tightly connected with (strong) near homeomorphisms. A map $f:X\to Y$ between topological spaces will be called \begin{itemize} \item a {\em near homeomorphism} if for each open cover $\mathcal U$ of $Y$ there is a homeomorphism $h:X\to Y$ such that $(h,f)\prec\mathcal U$; \item a {\em strong near homeomorphism} if for each open set $U\subset Y$ the map $f|f^{-1}(U):f^{-1}(U)\to U$ is a near homeomorphism. \end{itemize} The following Shrinkability Criterion was proved in \cite[Theorem 2.6]{Dav}. \begin{theorem}[Shrinkability Criterion]\label{t:shrink} An upper semicontinuous decomposition $\mathcal D$ of a completely metrizable space $X$ is (strongly) shrinkable if and only if the quotient map $q_\mathcal D:X\to X/\mathcal D$ is a (strong) near homeomorphism. \end{theorem} For two decompositions $\mathcal A\prec\mathcal B$ of a space $X$ we shall denote by $q^\mathcal A_\mathcal B:X/\mathcal A\to X/\mathcal B$ the unique map making the following diagram commutative: $$\xymatrix{ &X\ar[rd]^{q_\mathcal B}\ar[ld]_{q_\mathcal A}&\\ X/\mathcal A\ar[rr]_{q^\mathcal A_\mathcal B}&&X/\mathcal B }$$ We shall say that a decomposition $\mathcal A$ of a topological space $X$ is {\em topologically equivalent} to a decomposition $\mathcal B$ of a topological space $Y$ if there is a homeomorphism $\Phi:X\to Y$ such that the decomposition $\Phi(\mathcal A)=\{\Phi(A):A\in\mathcal A\}$ of $Y$ is equal to the decomposition $\mathcal B$. This happens if and only if there is a unique homeomorphism $\varphi:X/\mathcal A\to Y/\mathcal B$ making the diagram $$\xymatrix{ X\ar[r]^{\Phi}\ar[d]_{q_\mathcal A}&Y\ar[d]^{q_\mathcal B}\\ X/\mathcal A\ar[r]_{\varphi}&Y/\mathcal B }$$commutative. In this case we shall say that the homeomorphism $\Phi$ is $(\mathcal A,\mathcal B)$-factorizable and the homeomorphism $\varphi:X/\mathcal A\to Y/\mathcal B$ is $(\mathcal A,\mathcal B)$-liftable.
More precisely, we define a homeomorphism $\varphi:X/\mathcal A\to Y/\mathcal B$ (resp. $\Phi:X\to Y$) to be {\em $(\mathcal A,\mathcal B)$-liftable} (resp. {\em $(\mathcal A,\mathcal B)$-factorizable}) if there is a homeomorphism $\Phi:X\to Y$ (resp. $\varphi:X/\mathcal A\to Y/\mathcal B$) such that $q_\mathcal B\circ \Phi=\varphi\circ q_\mathcal A$. It is clear that each $(\mathcal A,\mathcal B)$-liftable homeomorphism $\varphi:X/\mathcal A\to Y/\mathcal B$ maps the non-degeneracy part $\mathcal A^\circ$ of the decomposition $\mathcal A$ onto the non-degeneracy part $\mathcal B^\circ$ of the decomposition $\mathcal B$. So, $\varphi:(\mathcal A,\mathcal A^\circ)\to (\mathcal B,\mathcal B^\circ)$ is a homeomorphism of pairs. Observe that two decompositions $\mathcal A,\mathcal B$ of a topological space $X$ are topologically equivalent if and only if there is an $(\mathcal A,\mathcal B)$-factorizable homeomorphism $\Phi:X\to X$ if and only if there exists an $(\mathcal A,\mathcal B)$-liftable homeomorphism $\varphi:X/\mathcal A\to X/\mathcal B$ between the decomposition spaces. We shall be interested in finding conditions on vanishing decompositions $\mathcal A$, $\mathcal B$ of a space $X$, which guarantee that the set of $(\mathcal A,\mathcal B)$-liftable homeomorphisms is dense in the space $\mathcal H(\mathcal A,\mathcal B)$ of all homeomorphisms between the decomposition spaces $\mathcal A=X/\mathcal A$ and $\mathcal B=X/\mathcal B$. The homeomorphism space $\mathcal H(\mathcal A,\mathcal B)$ will be endowed with the {\em limitation topology} \cite{Chig}: for spaces $X,Y$, the neighborhood base of this topology on the homeomorphism space $\mathcal H(X,Y)$ at a homeomorphism $f:X\to Y$ consists of the sets $$N(f,\mathcal U)=\{g\in \mathcal H(X,Y):(f,g)\prec\mathcal U\}$$ where $\mathcal U$ runs over all open covers of $Y$. The following definition of a tame family will be used in Definition~\ref{d:d-Ktame} of a $\mathcal K$-tame decomposition.
\begin{definition}\label{d:K-tame} Let $\mathcal K$ be a family of compact subsets of a topological space $X$. We shall say that the family $\mathcal K$ \begin{itemize} \item is {\em ambiently invariant} if for each homeomorphism $h:X\to X$ and each set $K\in\mathcal K$ we get $h(K)\in\mathcal K$; \item has the {\em local shift property} if for any point $x\in X$ and a neighborhood $O_x\subset X$ there is a neighborhood $U_x\subset O_x$ of $x$ such that for any sets $A,B\in\mathcal K$ with $A,B\subset U_x$ there is a homeomorphism $h:X\to X$ such that $h(A)=B$ and $h|X\setminus O_x=\mathrm{id}|X\setminus O_x$; \item is {\em tame} if $\mathcal K$ is ambiently invariant, consists of locally shrinkable sets, has the local shift property, and each non-empty open subset $U\subset X$ contains a set $K\in\mathcal K$. \end{itemize} \end{definition} Now we can define $\mathcal K$-tame decompositions. \begin{definition}\label{d:d-Ktame} Let $\mathcal K$ be a tame family of compact subsets of a completely metrizable space $X$. A decomposition $\mathcal D$ of $X$ is called {\em $\mathcal K$-tame} if $\mathcal D$ is vanishing, strongly shrinkable, and $\mathcal D^\circ\subset\mathcal K$. \end{definition} The following theorem that will be proved in Section~\ref{s:exist} yields many examples of $\mathcal K$-tame decompositions. \begin{theorem}\label{t:exist} Let $\mathcal K$ be a tame family of compact subsets of a completely metrizable space $X$ such that each set $K\in\mathcal K$ contains more than one point. For any open set $U\subset X$ there is a $\mathcal K$-tame decomposition $\mathcal D$ of $X$ such that $\bigcup\mathcal D^\circ$ is a dense subset of $U$. \end{theorem} We shall say that a topological space $X$ is {\em strongly locally homogeneous} if the family of singletons $\big\{\{x\}\big\}_{x\in X}$ is tame. This happens if and only if this family has the local shift property.
So, our definition of the strong local homogeneity agrees with the classical one introduced in \cite{Bennett}. It is easy to see that each connected strongly locally homogeneous space is {\em topologically homogeneous} in the sense that for any two points $x,y\in X$ there is a homeomorphism $h:X\to X$ with $h(x)=y$. The main technical result of this paper is the following theorem on the density of liftable homeomorphisms between decomposition spaces. \begin{theorem}\label{main2} For any tame family $\mathcal K$ of compact subsets of a strongly locally homogeneous completely metrizable space $X$ and any dense $\mathcal K$-tame decompositions $\mathcal A,\mathcal B$ of $X$, the set of $(\mathcal A,\mathcal B)$-liftable homeomorphisms is dense in the homeomorphism space $\mathcal H(\mathcal A,\mathcal B)$. \end{theorem} The proof of this theorem will be presented in Section~\ref{s:main2} after long preparatory work in Sections~\ref{s:dense}--\ref{s:dense-Ktame}. Now we apply this theorem to prove the following corollary. \begin{corollary}\label{c2.6} For any tame family $\mathcal K$ of compact subsets of a strongly locally homogeneous completely metrizable space $X$, any two dense $\mathcal K$-tame decompositions $\mathcal A,\mathcal B$ of $X$ are topologically equivalent. Moreover, for any open cover $\mathcal U$ of $X$ there is a homeomorphism $\Phi:X\to X$ such that $\Phi(\mathcal A)=\mathcal B$ and $(\Phi,\mathrm{id}_X)\prec\mathcal W$, where $$\mathcal W=\{\mathcal{S}t(A,\mathcal U)\cup\mathcal{S}t(B,\mathcal U):A\in\mathcal A,\;B\in\mathcal B,\;\;\mathcal{S}t(A,\mathcal U)\cap\mathcal{S}t(B,\mathcal U)\ne\emptyset\}.$$ \end{corollary} \begin{proof} Fix an open cover $\mathcal U$ of $X$. For every set $A\in\mathcal A$ consider its open neighborhood $\mathcal{S}t(A,\mathcal U)=\bigcup\{U\in\mathcal U:A\cap U\ne\emptyset\}$.
Since the quotient map $q_\mathcal A:X\to\mathcal A=X/\mathcal A$ is closed, the set $O(A)=\mathcal A\setminus q_\mathcal A\big(X\setminus\mathcal{S}t(A,\mathcal U)\big)$ is an open neighborhood of the point $A=q_\mathcal A(A)\in\mathcal A$ in the decomposition space $\mathcal A=X/\mathcal A$. By Lemma~\ref{l1}, the decomposition space $\mathcal A=X/\mathcal A$ is metrizable and hence paracompact. Consequently, we can find an open cover $\mathcal U_\mathcal A$ of $\mathcal A$ such that $\mathcal{S}t(\mathcal U_\mathcal A)\prec\{O(A):A\in\mathcal A\}$. By analogy, choose an open cover $\mathcal U_\mathcal B$ of the decomposition space $\mathcal B$ such that $\mathcal{S}t(\mathcal U_\mathcal B)\prec\{O(B):B\in\mathcal B\}$ where $O(B)=\mathcal B\setminus q_\mathcal B\big(X\setminus\mathcal{S}t(B,\mathcal U)\big)$ for each $B\in\mathcal B$. By Definition~\ref{d:d-Ktame} and Theorem~\ref{t:shrink}, the quotient maps $q_\mathcal A:X\to\mathcal A$ and $q_\mathcal B:X\to\mathcal B$ are near homeomorphisms. Consequently, we can find homeomorphisms $h_\mathcal A:X\to\mathcal A$ and $h_\mathcal B:X\to\mathcal B$ such that $(h_\mathcal A,q_\mathcal A)\prec\mathcal U_\mathcal A$ and $(h_\mathcal B,q_\mathcal B)\prec\mathcal U_\mathcal B$. Applying Theorem~\ref{main2}, find an $(\mathcal A,\mathcal B)$-liftable homeomorphism $\varphi:\mathcal A\to\mathcal B$ such that $(\varphi,h_\mathcal B\circ h_\mathcal A^{-1})\prec\mathcal U_\mathcal B$. The $(\mathcal A,\mathcal B)$-liftability of $\varphi$ yields a homeomorphism $\Phi:X\to X$ such that $q_\mathcal B\circ\Phi=\varphi\circ q_\mathcal A$.
The latter equality implies that $$\big\{\Phi(A):A\in\mathcal A\big\}=\big\{q_\mathcal B^{-1}\circ\varphi\circ q_\mathcal A(A):A\in\mathcal A\big\}=\big\{q_\mathcal B^{-1}\circ \varphi(\{A\}):A\in\mathcal A\big\}=\big\{q_\mathcal B^{-1}(\{B\}):B\in\mathcal B\big\}=\mathcal B.$$ To show that $(\Phi,\mathrm{id}_X)\prec\mathcal W$, take any point $x\in X$ and consider the point $y=h_\mathcal A^{-1}\circ q_\mathcal A(x)\in X$. Since $(h_\mathcal A,q_\mathcal A)\prec\mathcal U_\mathcal A$, there are a set $U\in\mathcal U_\mathcal A$ and a set $A\in\mathcal A$ such that $\{q_\mathcal A(x),q_\mathcal A(y)\}=\{h_\mathcal A(y),q_\mathcal A(y)\}\subset U\subset O(A)$. Then $\{x,y\}\subset q^{-1}_\mathcal A(O(A)) \subset\mathcal{S}t(A,\mathcal U)$. The choice of the homeomorphism $h_\mathcal B:X\to\mathcal B$ guarantees that $\{h_\mathcal B(y),q_\mathcal B(y)\}\subset V$ for some set $V\in\mathcal U_\mathcal B$. Since $(\varphi,h_\mathcal B\circ h^{-1}_\mathcal A)\prec\mathcal U_\mathcal B$, we conclude that $\{\varphi\circ q_\mathcal A(x),h_\mathcal B\circ h^{-1}_\mathcal A\circ q_\mathcal A(x)\}\subset U'$ for some set $U'\in\mathcal U_\mathcal B$. Then $h_\mathcal B(y)=h_\mathcal B\circ h_\mathcal A^{-1}\circ q_\mathcal A(x)\in V\cap U'$ and hence $\{\varphi\circ q_\mathcal A(x),q_\mathcal B(y)\}\subset V\cup U'\subset O(B)$ for some set $B\in\mathcal B$. The definition of the set $O(B)$ implies that $$\{\Phi(x),y\}\subset q_\mathcal B^{-1}\circ\varphi\circ q_\mathcal A(x)\cup q_\mathcal B^{-1}\circ q_\mathcal B(y)\subset q^{-1}_\mathcal B(O(B))\subset \mathcal{S}t(B,\mathcal U).$$ Consequently, $y\in\mathcal{S}t(A,\mathcal U)\cap\mathcal{S}t(B,\mathcal U)$ and $\{x,\Phi(x)\}\subset\mathcal{S}t(A,\mathcal U)\cup\mathcal{S}t(B,\mathcal U)\in \mathcal W$. \end{proof} \section{Approximating strong near homeomorphisms by homeomorphisms} In this section we prove an auxiliary result on the approximation of strong near homeomorphisms by homeomorphisms.
This result will be used in the proof of Theorem~\ref{t5}. \begin{lemma}\label{l:approx} Let $\mathcal D$ be a vanishing decomposition of a metrizable space $X$, $U\subset \mathcal D$ be an open neighborhood of the non-degeneracy part $\mathcal D^\circ$ in the decomposition space $\mathcal D=X/\mathcal D$, and $V=q_\mathcal D^{-1}(U)\subset X$. Then there is an open cover $\mathcal U$ of $U$ such that for any homeomorphism $h:V\to U$ with $(h,q_\mathcal D|V)\prec\mathcal U$ the map $\bar h:X\to\mathcal D$ defined by $$\bar h(x)=\begin{cases} h(x)&\mbox{if $x\in V$,}\\ \{x\}&\mbox{otherwise} \end{cases} $$ is a homeomorphism of $X$ onto the decomposition space $\mathcal D=X/\mathcal D$. \end{lemma} \begin{proof} Fix a metric $d$ generating the topology of the space $X$ and let $\mathcal V$ be an open cover of the set $V=q_\mathcal D^{-1}(U)$ such that $\mathcal{S}t(\mathcal V)\prec\{O_d(v,d(v,X\setminus V)/2):v\in V\}$. \begin{claim}\label{cl:ed} For each point $x_0\in X\setminus V$ and each $\epsilon>0$ there is a positive $\delta\le\epsilon$ such that for each $D\in\mathcal D$, if $x_0\notin D$ and $\mathcal{S}t(D,\mathcal V)\cap O_d(x_0,\delta)\ne\emptyset$, then $\mathcal{S}t(D,\mathcal V)\subset O_d(x_0,\epsilon)$. \end{claim} \begin{proof} Consider the open cover $\{O_d(x_0,\epsilon/2),X\setminus \bar O_d(x_0,\epsilon/4)\}$ of the space $X$. Since the decomposition $\mathcal D$ is vanishing, the family $\mathcal D'=\{D\in\mathcal D:x_0\notin D,\;D\not\subset O_d(x_0,\epsilon/2),\;D\not\subset X\setminus \bar O_d(x_0,\epsilon/4)\}$ is discrete in $X$ and hence has closed union $\bigcup\mathcal D'$, which does not contain the point $x_0$. Then we can find a positive $\delta<\epsilon/6$ such that $O_d(x_0,3\delta/2)\cap\bigcup\mathcal D'=\emptyset$. Assume now that $\mathcal{S}t(D,\mathcal V)\cap O_d(x_0,\delta)\ne\emptyset$ for some set $D\in\mathcal D$ with $x_0\notin D$.
Pick any point $x\in \mathcal{S}t(D,\mathcal V)\cap O_d(x_0,\delta)$ and find a point $z\in D\cap\mathcal{S}t(x,\mathcal V)\subset O_d(x,d(x,X\setminus V)/2)$. Since $$d(z,x_0)\le d(z,x)+d(x,x_0)<\frac12d(x,X\setminus V)+d(x,x_0)\le \frac32d(x,x_0)<\frac32\delta<\frac14\epsilon,$$ the set $D$ meets the ball $O_d(x_0,3\delta/2)$ and hence does not belong to the family $\mathcal D'$. Taking into account that the set $D\notin\mathcal D'$ meets the ball $O_d(x_0,\epsilon/4)$, we conclude that $D\subset O_d(x_0,\epsilon/2)$. Given any point $y\in\mathcal{S}t(D,\mathcal V)$, observe that $\emptyset\ne D\cap\mathcal{S}t(y,\mathcal V)\subset O_d(x_0,\epsilon/2)\cap O_d(y,d(y,X\setminus V)/2)$ and hence $$d(x_0,y)< \frac12\epsilon+\frac12d(y,X\setminus V)\le\frac12\epsilon+\frac12d(y,x_0),$$which implies that $d(y,x_0)<\epsilon$ and $\mathcal{S}t(D,\mathcal V)\subset O_d(x_0,\epsilon)$. \end{proof} The decomposition $\mathcal D$ induces the decomposition $\mathcal D_V=\{D\in\mathcal D: D\subset V\}$ of the space $V$. By Lemma~\ref{l2}, the vanishing decomposition $\mathcal D$ is upper semicontinuous and hence the quotient map $q_\mathcal D:X\to \mathcal D$ is closed. Consequently, for every set $D\in\mathcal D_V\subset\mathcal D$ the set $F=q_\mathcal D(X\setminus\mathcal{S}t(D,\mathcal V))$ is closed in $\mathcal D$ and the set $O(D)=U\setminus F$ is an open neighborhood of the point $D\in\mathcal D$ in the decomposition space $\mathcal D$. Since $\bigcup\mathcal D_V=V$, the family $\mathcal U=\{O(D):D\in\mathcal D_V\}$ is an open cover of the open subspace $U=q_\mathcal D(V)$ of the decomposition space $\mathcal D=X/\mathcal D$. We claim that the open cover $\mathcal U$ has the property required in Lemma~\ref{l:approx}. Let $h:V\to U$ be a homeomorphism with $(h,q_\mathcal D|V)\prec\mathcal U$ and $\bar h:X\to\mathcal D$ be the extension of $h$ such that $\bar h(x)=\{x\}$ for all $x\in X\setminus V$. It is clear that the map $\bar h$ is bijective.
Since $\bar h|V=h$, the map $\bar h$ is open and continuous at each point $x_0\in V$. So, it remains to prove the continuity and the openness of $\bar h$ at each point $x_0\in X\setminus V$. To prove the continuity of $\bar h$ at $x_0$, take any neighborhood $O(\{x_0\})\subset\mathcal D$ of the image $\bar h(x_0)=\{x_0\}$ of $x_0$ in the decomposition space $\mathcal D$. By the continuity of the quotient map $q_\mathcal D$ the preimage $O(x_0)=q_\mathcal D^{-1}\big(O(\{x_0\})\big)$ of this neighborhood is a $\mathcal D$-saturated open neighborhood of the point $x_0$ in $X$. Find a positive $\epsilon$ such that $O_d(x_0,\epsilon)\subset O(x_0)$. By Claim~\ref{cl:ed}, there is a positive number $\delta\le\epsilon$ such that for each set $D\in\mathcal D_V$ with $O_d(x_0,\delta)\cap\mathcal{S}t(D,\mathcal V)\ne\emptyset$, we get $\mathcal{S}t(D,\mathcal V)\subset O_d(x_0,\epsilon)$. We claim that $\bar h(O_d(x_0,\delta))\subset O(\{x_0\})$. Pick any point $x\in O_d(x_0,\delta)$. If $x\notin V$, then $x\in O_d(x_0,\delta)\subset O_d(x_0,\epsilon)\subset O(x_0)=q_\mathcal D^{-1}\big(O(\{x_0\})\big)$ and hence $\bar h(x)=\{x\}=q_\mathcal D(x)\in O(\{x_0\})$. So, we assume that $x\in V$. In this case $\bar h(x)=h(x)$ and $\{h(x),q_\mathcal D(x)\}\subset O(D)\in\mathcal U$ for some set $D\in \mathcal D_V$. Then $\{x\}\cup q_\mathcal D^{-1}(h(x))\subset q_\mathcal D^{-1}(O(D))\subset\mathcal{S}t(D,\mathcal V)$. Since $x\in \mathcal{S}t(D,\mathcal V)\cap O_d(x_0,\delta)$, the choice of the number $\delta$ guarantees that $q_\mathcal D^{-1}(h(x))\subset\mathcal{S}t(D,\mathcal V)\subset O_d(x_0,\epsilon)\subset O(x_0)$ and hence $h(x)\in q_\mathcal D(O(x_0))=O(\{x_0\})$. So, the map $\bar h:X\to\mathcal D$ is continuous at $x_0$. Next, we show that the map $\bar h$ is open at $x_0$. Given any $\epsilon>0$, we should find an open neighborhood $U(\{x_0\})\subset\mathcal D$ of the point $\{x_0\}=\bar h(x_0)=q_\mathcal D(x_0)$ such that $U(\{x_0\})\subset \bar h(O_d(x_0,\epsilon))$.
By Claim~\ref{cl:ed}, there exists a positive number $\delta\le \epsilon$ such that for each set $D\in\mathcal D_V$ with $\mathcal{S}t(D,\mathcal V)\cap O_d(x_0,\delta)\ne\emptyset$, we get $\mathcal{S}t(D,\mathcal V)\subset O_d(x_0,\epsilon)$. Since the decomposition $\mathcal D$ is upper semicontinuous, for the closed subset $C=X\setminus O_d(x_0,\delta)$ of $X$ its $\mathcal D$-saturation $\mathcal{S}t(C,\mathcal D)$ is closed in $X$. Then $U(x_0)=X\setminus \mathcal{S}t(C,\mathcal D)\subset O_d(x_0,\delta)$ is a $\mathcal D$-saturated open neighborhood of $x_0$ in $X$ and its image $U(\{x_0\})=q_\mathcal D(U(x_0))$ is an open neighborhood of the point $\{x_0\}$ in the decomposition space $\mathcal D$. We claim that $U(\{x_0\})\subset \bar h(O_d(x_0,\epsilon))$. Take any point $y\in U(\{x_0\})$ and consider its preimage $x=\bar h^{-1}(y)\in X$. If $x\notin V$, then $y=\bar h(x)=q_\mathcal D(x)=\{x\}$ and hence $x\in q_\mathcal D^{-1}(y)\subset q_\mathcal D^{-1}\big(U(\{x_0\})\big)=U(x_0)\subset O_d(x_0,\delta)\subset O_d(x_0,\epsilon)$. So, we assume that $x\in V$. In this case $y=\bar h(x)=h(x)\in U$. Since $(h,q_\mathcal D|V)\prec\mathcal U$, there is a set $D\in\mathcal D_V$ such that $\{q_\mathcal D(x),y\}=\{q_\mathcal D(x),h(x)\}\subset O(D)\in\mathcal U$ and thus $q_\mathcal D^{-1}(y)\subset q_\mathcal D^{-1}(\{q_\mathcal D(x),y\})\subset q_\mathcal D^{-1}\big(O(D)\big)\subset \mathcal{S}t(D,\mathcal V)$ by the choice of the neighborhood $O(D)$. Taking into account that $q_\mathcal D^{-1}(y)\subset U(x_0)\subset O_d(x_0,\delta)$, we see that the $\mathcal V$-star $\mathcal{S}t(D,\mathcal V)$ of $D$ meets the $\delta$-ball $O_d(x_0,\delta)$ and hence is contained in the $\epsilon$-ball $O_d(x_0,\epsilon)$ by the choice of $\delta$. Then $x\in q_\mathcal D^{-1}\big(q_\mathcal D(x)\big)\subset \mathcal{S}t(D,\mathcal V)\subset O_d(x_0,\epsilon)$ and $y=\bar h(x)\in \bar h\big(O_d(x_0,\epsilon)\big)$.
\end{proof} \section{Topological equivalence of dense $\sigma$-discrete subsets of strongly locally homogeneous spaces}\label{s:dense} In this section we establish one important property of strongly locally homogeneous completely metrizable spaces, which will be used several times in the proofs of Theorems~\ref{main2} and \ref{t5}. Let us recall that a topological space $X$ is called {\em strongly locally homogeneous} if for each point $x\in X$ and an open neighborhood $O_x\subset X$ of $x$ there is an open neighborhood $U_x\subset O_x$ of $x$ such that for any point $y\in U_x$ there is a homeomorphism $h:X\to X$ such that $h(x)=y$ and $h|X\setminus O_x=\mathrm{id}$. A subset $D$ of a topological space $X$ is called {\em $\sigma$-discrete} if $D$ can be written as a countable union $D=\bigcup_{n\in\omega}D_n$ of closed discrete subsets of $X$. The following theorem generalizes a result of Bennett \cite{Bennett} on the topological equivalence of any two countable dense subsets in a strongly locally homogeneous Polish space. \begin{theorem}\label{t:dense} If $X$ is a strongly locally homogeneous completely metrizable space, then for any open cover $\mathcal U$ of an open subspace $U\subset X$, and any dense $\sigma$-discrete subspaces $A,B\subset U$ there is a homeomorphism $h:X\to X$ such that $h(A)=B$ and $(h,\mathrm{id})\prec\mathcal U$. \end{theorem} \begin{proof} Since the strong local homogeneity is inherited by open subspaces, we lose no generality assuming that $U=X$. Using a standard technique of Tukey (cf. \cite[5.4.H]{En}), we can choose a complete metric $d$ generating the topology of $X$ and such that the cover $\{\bar O_d(x,1):x\in X\}$ of $X$ by closed 1-balls refines the cover $\mathcal U$. Given dense $\sigma$-discrete subsets $A,B$ in $U=X$, choose a (not necessarily continuous) function $\delta:A\cup B\to (0,1]$ such that for each $\epsilon>0$ the set $\{x\in A\cup B:\delta(x)>\epsilon\}$ is closed and discrete in $X$ (writing $A\cup B=\bigcup_{n\in\omega}D_n$ with each $D_n$ closed and discrete in $X$, it suffices to put $\delta(x)=2^{-\min\{n\in\omega:x\in D_n\}}$).
We shall construct inductively a sequence of homeomorphisms $(h_n:X\to X)_{n\in\omega}$ and two sequences $(A_n)_{n\in\omega}$ and $(B_n)_{n\in\omega}$ of closed discrete subsets of $X$ such that for every $n\in\omega$ the following conditions will be satisfied: \begin{enumerate} \item $A_{n-1}\cup\{a\in A:\delta(a)\ge 2^{-n}\}\subset A_n\subset A$; \item $B_{n-1}\cup\{b\in B:\delta(b)\ge 2^{-n}\}\subset B_n\subset B$; \item $h_n(A_n\setminus A_{n-1})=B_n\setminus B_{n-1}$; \item $h_n|A_{n-1}=h_{n-1}|A_{n-1}$, and \item $d(h_n,h_{n-1})\le 2^{-n-1}$ and $d(h_n^{-1},h_{n-1}^{-1})\le 2^{-n-1}$. \end{enumerate} We start the inductive construction by letting $A_0=B_0=\emptyset$ and $h_0=\mathrm{id}_X$. Assume that for some $n\in\mathbb N$ subsets $A_i,B_i$ and homeomorphisms $h_i$ have been constructed for all $i<n$. The inductive assumptions (3) and (4) imply that $h_{n-1}(A_{n-1})=B_{n-1}$. Consider the subsets $\tilde A_n=\{a\in A\setminus A_{n-1}:\delta(a)\ge 2^{-n}\}$ and $\tilde B_n=\{b\in B\setminus B_{n-1}:\delta(b)\ge 2^{-n}\}$. By the choice of the function $\delta$, these sets are closed and discrete in $X$. Then the sets $B_n'=h_{n-1}(\tilde A_n)\setminus \tilde B_n$ and $A_n'=h_{n-1}^{-1}(\tilde B_n)\setminus\tilde A_n$ also are closed and discrete in $X$. It follows that $h_{n-1}(A'_n)\cap B_n'=\emptyset$. By normality of the space $X$, the closed sets $A_n',B_n'$ have open neighborhoods $O(A'_n),O(B'_n)\subset X$ such that $h_{n-1}(\bar O(A'_n))\cap \bar O(B'_n)=\emptyset$, where $\bar O(A_n')$ and $\bar O(B_n')$ are the closures of these neighborhoods in $X$. Moreover, we can assume that $\bar O(A_n')\cap(A_{n-1}\cup \tilde A_n)=\emptyset$ and $\bar O(B_n')\cap(B_{n-1}\cup \tilde B_n)=\emptyset$. For each point $b\in B_n'$ choose a neighborhood $V_b\subset O(B_n')$ such that $\mathrm{diam}(V_b)<2^{-n-1}$ and $\mathrm{diam}(h^{-1}_{n-1}(V_b))<2^{-n-1}$. 
Since the set $B_n'$ is closed and discrete in the collectionwise normal space $X$, we can assume that the family $(V_b)_{b\in B_n'}$ is discrete in $X$. Since the space $X$ is strongly locally homogeneous, each point $b\in B_n'$ has a neighborhood $W_b\subset V_b$ such that for each point $b'\in W_b$ there is a homeomorphism $\beta_b:X\to X$ such that $\beta_b(b)=b'$ and $\beta_b|X\setminus V_b=\mathrm{id}$. Since the subset $B\subset X$ is dense, we can choose a point $b'\in B\cap W_b$ and a homeomorphism $\beta_b:X\to X$ such that $\beta_b(b)=b'$ and $\beta_b|X\setminus V_b=\mathrm{id}$. The homeomorphisms $\beta_b$, $b\in B_n'$, produce a single homeomorphism $\beta:X\to X$ defined by the formula $$ \beta(x)=\begin{cases}\beta_b(x)&\mbox{if $x\in V_b$ for some $b\in B_n'$,}\\ x&\mbox{otherwise.} \end{cases} $$ It is easy to see that the homeomorphism $\beta:X\to X$ has the following properties: \begin{itemize} \item $\beta(B_n')\subset B$, \item $\beta|X\setminus O(B'_n)=\mathrm{id}$, \item $d(\beta\circ h_{n-1},h_{n-1})\le 2^{-n-1}$, and \item $d(h_{n-1}^{-1}\circ \beta^{-1},h_{n-1}^{-1})\le 2^{-n-1}$. \end{itemize} Let us prove the latter inequality. Given any point $x\in X$, we need to check that $d(h_{n-1}^{-1}\circ\beta^{-1}(x),h_{n-1}^{-1}(x))\le2^{-n-1}$. If $x\notin\bigcup_{b\in B_n'}V_b$, then $\beta(x)=x=\beta^{-1}(x)$ and hence $d(h_{n-1}^{-1}\circ\beta^{-1}(x),h_{n-1}^{-1}(x))=0\le 2^{-n-1}$. So, we assume that $x\in V_b$ for some $b\in B'_n$. Then the point $y=\beta^{-1}(x)$ also belongs to $V_b$ and hence $$d\big(h_{n-1}^{-1}\circ \beta^{-1}(x),h_{n-1}^{-1}(x)\big)\le\mathrm{diam} \big(h_{n-1}^{-1}(V_b)\big)\le 2^{-n-1}$$by the choice of the neighborhood $V_b$. By analogy we can construct a homeomorphism $\alpha:X\to X$ such that \begin{itemize} \item $\alpha(A_n')\subset A$, \item $\alpha|X\setminus O(A'_n)=\mathrm{id}$, \item $d(\alpha\circ h^{-1}_{n-1},h^{-1}_{n-1})\le 2^{-n-1}$, and \item $d(h_{n-1}\circ \alpha^{-1},h_{n-1})\le 2^{-n-1}$.
\end{itemize}
Let $A_n=A_{n-1}\cup\tilde A_n\cup \alpha(A_n')$ and $B_n=B_{n-1}\cup\tilde B_n\cup \beta(B_n')$. Now consider the homeomorphism $h_n:X\to X$ defined by the formula
$$h_n(x)=\begin{cases}
\beta\circ h_{n-1}(x)&\mbox{if $x\in h_{n-1}^{-1}(O(B'_n))$},\\
h_{n-1}\circ\alpha^{-1}(x)&\mbox{if $x\in O(A_n')$},\\
h_{n-1}(x)&\mbox{otherwise}.
\end{cases}
$$
The choice of the neighborhoods $O(A_n')$ and $O(B'_n)$ guarantees that $h_n$ is a well-defined homeomorphism that satisfies the conditions (1)--(5) of the inductive construction. This completes the inductive step.

The condition (5) of the inductive construction implies that the limit map $h=\lim_{n\to\infty}h_n$ is a well-defined homeomorphism of $X$ such that
$$d(h,\mathrm{id})\le \sum_{n=1}^\infty d(h_n,h_{n-1})\le\sum_{n=1}^\infty 2^{-n-1}=\frac12<1$$and hence $(h,\mathrm{id})\prec\mathcal U$ by the choice of the metric $d$. The conditions (3) and (4) of the inductive construction imply that $h|A_n=h_n|A_n$ and $h_n(A_n)=B_n$ for all $n\in\omega$. Taking into account that $A=\bigcup_{n\in\omega}A_n$ and $B=\bigcup_{n\in\omega}B_n$, we conclude that $h(A)=B$.
\end{proof}

\section{Topological equivalence of discrete $\mathcal K$-tame decompositions}

In this section we shall prove a discrete version of Theorem~\ref{main2}. We recall that a decomposition $\mathcal D$ of a topological space $X$ is called {\em discrete} if its non-degeneracy part $\mathcal D^\circ=\{D\in\mathcal D:|D|>1\}$ is closed and discrete in the decomposition space $\mathcal D=X/\mathcal D$. The following fact easily follows from the definitions.

\begin{lemma}\label{l:dshrink} A discrete decomposition $\mathcal D$ of a regular topological space $X$ is strongly shrinkable if and only if each set $D\in\mathcal D$ is locally shrinkable in $X$.
\end{lemma}

For two decompositions $\mathcal A,\mathcal B$ of a topological space $X$ we shall denote by $\mathcal H^\circ(\mathcal A^\circ,\mathcal B^\circ)$ the space of all homeomorphisms $h:(\mathcal A,\mathcal A^\circ)\to(\mathcal B,\mathcal B^\circ)$ of the pairs $(\mathcal A,\mathcal A^\circ)$ and $(\mathcal B,\mathcal B^\circ)$, endowed with the strong limitation topology, whose neighborhood base at a homeomorphism $h\in\mathcal H^\circ(\mathcal A^\circ,\mathcal B^\circ)$ consists of the sets $$N(h,\mathcal U)=\{g\in\mathcal H^\circ(\mathcal A^\circ,\mathcal B^\circ):(h,g)\prec\mathcal U\}$$where $\mathcal U$ runs over all covers of the non-degeneracy part $\mathcal B^\circ$ by open subsets of the decomposition space $\mathcal B$.

\begin{theorem}\label{t4} Let $\mathcal K$ be a tame family of compact subsets of a strongly locally homogeneous completely metrizable space $X$. Then for any discrete decompositions $\mathcal A,\mathcal B\subset\mathcal K\cup\big\{\{x\}:x\in X\big\}$ of $X$, the set of $(\mathcal A,\mathcal B)$-liftable homeomorphisms is dense in the homeomorphism space $\mathcal H^\circ(\mathcal A^\circ,\mathcal B^\circ)$.
\end{theorem}

\begin{proof} Given a homeomorphism of pairs $f:(\mathcal A,\mathcal A^\circ)\to(\mathcal B,\mathcal B^\circ)$ and a cover $\mathcal W$ of the non-degeneracy part $\mathcal B^\circ$ by open subsets of $\mathcal B$, we need to construct an $(\mathcal A,\mathcal B)$-liftable homeomorphism $\varphi:\mathcal A\to\mathcal B$ such that $(\varphi,f)\prec\mathcal W$.

Since the decomposition $\mathcal B$ is discrete, its non-degeneracy part $\mathcal B^\circ$ is closed and discrete in the decomposition space $\mathcal B=X/\mathcal B$. Then we can choose for every point $b\in \mathcal B^\circ$ an open neighborhood $W_b\subset\mathcal B$ of $b$, which lies in some set of the cover $\mathcal W$.
Moreover, since the set $\mathcal B^\circ$ is closed and discrete in the metrizable (and collectionwise normal) space $\mathcal B$, we can additionally assume that the indexed family $\{W_b:b\in\mathcal B^\circ\}$ is discrete in $\mathcal B$.

By Definition~\ref{d:K-tame} and Lemma~\ref{l:dshrink}, the discrete decomposition $\mathcal B$ is strongly shrinkable and, by Theorem~\ref{t:shrink}, the quotient map $q_\mathcal B:X\to \mathcal B$ is a strong near homeomorphism, which implies that the decomposition space $\mathcal B$ is homeomorphic to $X$. Then $\mathcal K(\mathcal B)=\{h(K):K\in\mathcal K,\;\;h\in\mathcal H(X,\mathcal B)\}$ is a tame family of compact subsets in the space $\mathcal B$. This family has the local shift property, which implies that each point $b\in\mathcal B^\circ$ has a neighborhood $U_b\subset W_b$ such that for any compact sets $K,K'\in \mathcal K(\mathcal B)$ contained in $U_b$ there is a homeomorphism $h_b:\mathcal B\to\mathcal B$ such that $h_b(K)=K'$ and $h_b|\mathcal B\setminus W_b=\mathrm{id}$. Let $U=\bigcup_{b\in\mathcal B^\circ}U_b$. Since the quotient map $q_\mathcal B:X\to\mathcal B$ is a strong near homeomorphism, there is a homeomorphism $\beta:X\to \mathcal B$ such that $\beta(q_\mathcal B^{-1}(U_b))=U_b$ for every $b\in\mathcal B^\circ$ and $\beta(x)=\{x\}$ for each $x\in X\setminus q_\mathcal B^{-1}(U)$.

By analogy we shall define a homeomorphism $\alpha:X\to\mathcal A$. Namely, for every point $a\in\mathcal A^\circ$ consider the open neighborhood $V_a=f^{-1}(U_{f(a)})$ of $a$ in the decomposition space $\mathcal A$, and put $V=\bigcup_{a\in\mathcal A^\circ}V_a=f^{-1}(U)$. Since the decomposition $\mathcal A$ is strongly shrinkable, the quotient map $q_\mathcal A:X\to\mathcal A$ is a strong near homeomorphism, which allows us to find a homeomorphism $\alpha:X\to \mathcal A$ such that $\alpha(q_\mathcal A^{-1}(V_a))=V_a$ for every $a\in\mathcal A^\circ$, and $\alpha(x)=\{x\}$ for each $x\in X\setminus q_\mathcal A^{-1}(V)$.
For every $b\in\mathcal B^\circ$, consider the point $a=f^{-1}(b)\in\mathcal A^\circ$ and the compact subsets $K=\beta(b)$ and $K'=f\circ\alpha(a)$ of $U_b$, which belong to the family $\mathcal K(\mathcal B)$. By the choice of the neighborhood $U_b$, there exists a homeomorphism $h_b:\mathcal B\to\mathcal B$ such that $h_b(K')=K$ and $h_b|\mathcal B\setminus W_b=\mathrm{id}$. The homeomorphisms $h_b$, $b\in\mathcal B^\circ$, yield a single homeomorphism $h:\mathcal B\to\mathcal B$ defined by
$$h(y)=\begin{cases}h_b(y)&\mbox{if $y\in W_b$ for some $b\in\mathcal B^\circ$};\\
y&\mbox{otherwise}.
\end{cases}
$$
Consider the homeomorphism $\Phi=\beta^{-1}\circ h\circ f\circ\alpha:X\to X$. The definition of the homeomorphism $h$ implies that for every set $a\in\mathcal A^\circ$ (a compact subset of $X$) and its image $b=f(a)\in\mathcal B^\circ$ we get
$$\Phi(a)=\beta^{-1}\circ h\circ f\circ\alpha(a)= \beta^{-1}\circ h_b(f\circ\alpha(a))=\beta^{-1}\circ \beta(b)=b.$$
This means that the homeomorphism $\Phi$ is $(\mathcal A,\mathcal B)$-factorizable and hence there is a homeomorphism $\varphi:\mathcal A\to\mathcal B$ such that $q_\mathcal B\circ\Phi=\varphi\circ q_\mathcal A$. The choice of the neighborhoods $W_b$, $b\in\mathcal B^\circ$, guarantees that the $(\mathcal A,\mathcal B)$-liftable homeomorphism $\varphi:\mathcal A\to\mathcal B$ is $\mathcal W$-near to the homeomorphism $f$.
\end{proof}

\section{Topological equivalence of dense $\mathcal K$-tame decompositions}\label{s:dense-Ktame}

In the proof of Theorem~\ref{t5} below we shall make extensive use of multivalued maps; see \cite{RS}. By a multivalued map $\Phi:X\multimap Y$ between sets $X$ and $Y$ we understand any subset $\Phi\subset X\times Y$ of their Cartesian product. This subset $\Phi$ can be thought of as a multivalued function $\Phi:X\multimap Y$ which assigns to each point $x\in X$ the subset $\Phi(x)=\{y\in Y:(x,y)\in\Phi\}$ of $Y$ and to each subset $A\subset X$ the subset $\Phi(A)=\bigcup_{a\in A}\Phi(a)$ of $Y$.
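To illustrate these conventions on a simple example: the parabola $\Phi=\{(x,y)\in\mathbb R\times\mathbb R:y^2=x\}$, viewed as a multivalued function $\Phi:\mathbb R\multimap\mathbb R$, assigns
$$\Phi(4)=\{-2,2\},\qquad \Phi(-1)=\emptyset,\qquad \Phi\big([0,1]\big)=[-1,1].$$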
Usual functions $f:X\to Y$, identified with their graphs $\{(x,f(x)):x\in X\}$, become multivalued (more precisely, single-valued) functions. For two multivalued functions $\Phi:X\multimap Y$ and $\Psi:Y\multimap Z$ their composition $\Psi\circ\Phi:X\multimap Z$ is defined as the multivalued function assigning to each point $x\in X$ the subset $\Psi(\Phi(x))$ of $Z$. The inverse $\Phi^{-1}$ of a multivalued function $\Phi:X\multimap Y$ is the multivalued function $\Phi^{-1}=\{(y,x):(x,y)\in\Phi\}\subset Y\times X$, assigning to each point $y\in Y$ the subset $\Phi^{-1}(y)=\{x\in X:y\in\Phi(x)\}$.

\begin{theorem}\label{t5} For any tame family $\mathcal K$ of compact subsets of a strongly locally homogeneous completely metrizable space $X$, and any dense $\mathcal K$-tame decompositions $\mathcal A,\mathcal B$ of $X$, the set of $(\mathcal A,\mathcal B)$-liftable homeomorphisms is dense in the homeomorphism space $\mathcal H^\circ(\mathcal A^\circ,\mathcal B^\circ)$.
\end{theorem}

\begin{proof} Given a homeomorphism of pairs $\varphi_0:(\mathcal A,\mathcal A^\circ)\to(\mathcal B,\mathcal B^\circ)$ and a cover $\mathcal W$ of the non-degeneracy part $\mathcal B^\circ$ by open subsets of $\mathcal B$, we need to construct an $(\mathcal A,\mathcal B)$-liftable homeomorphism $\varphi:\mathcal A\to\mathcal B$ such that $(\varphi,\varphi_0)\prec\mathcal W$.

Fix a complete metric $d$ that generates the topology of the completely metrizable space $X$. Replacing $d$ by $\min\{d,1\}$, if necessary, we can assume that $\mathrm{diam}(X)\le 1$. Also fix a metric $\rho\le1$ generating the topology of the decomposition space $\mathcal B=X/\mathcal B$ (which is metrizable by Lemma~\ref{l1}).
Choose a continuous function $\varepsilon:\mathcal B\to[0,1]$ such that $\varepsilon^{-1}(0)=\mathcal B\setminus\bigcup\mathcal W$ and for each point $b\in\bigcup\mathcal W$ the closed $\varepsilon(b)$-ball $\bar O_\rho(b,\varepsilon(b))=\{y\in \mathcal B:\rho(y,b)\le\varepsilon(b)\}$ is contained in some element of the cover $\mathcal W$. Then each map $\varphi:\mathcal A\to\mathcal B$ with $\rho(\varphi,\varphi_0)\le\varepsilon\circ \varphi_0$ is $\mathcal W$-near to the map $\varphi_0$. So, it suffices to construct an $(\mathcal A,\mathcal B)$-liftable homeomorphism $\varphi:\mathcal A\to\mathcal B$ such that $\rho(\varphi(a),\varphi_0(a))\le\varepsilon\circ\varphi_0(a)$ for every $a\in\mathcal A$.

To find such a homeomorphism $\varphi$, we shall construct inductively two sequences $(\mathcal A_n)_{n\in\omega}$ and $(\mathcal B_n)_{n\in\omega}$ of decompositions of the space $X$, and two sequences of homeomorphisms $(h_n:\mathcal A_n\to\mathcal B_n)_{n\in\omega}$, $(\varphi_n:\mathcal A\to\mathcal B)_{n\in\omega}$ between the corresponding decomposition spaces such that for the multivalued functions $\Phi_n=q_{\mathcal B_n}^{-1}\circ h_n\circ q_{\mathcal A_n}:X\multimap X$, $n\in\omega$, the following conditions are satisfied for every $n\ge 1$:
\begin{enumerate}
\item[$(1_n)$] $\mathcal A_{n}^\circ\subset \mathcal A^\circ_{n-1}\subset\mathcal A^\circ$ and $\mathcal B^\circ_{n}\subset\mathcal B^\circ_{n-1}\subset\mathcal B^\circ$;
\item[$(2_n)$] the families $\mathcal A^\circ_{n-1}\setminus \mathcal A^\circ_{n}$ and $\mathcal B^\circ_{n-1}\setminus \mathcal B^\circ_{n}$ are discrete in $X$ and\newline contain the families $\{A\in\mathcal A_{n-1}:\mathrm{diam}(A)\ge 2^{-n+1}\}$ and $\{B\in\mathcal B_{n-1}:\mathrm{diam}(B)\ge 2^{-n+1}\}$, respectively;
\item[$(3_n)$] $q_\mathcal B^{\mathcal B_{n}}\circ h_{n}=\varphi_{n}\circ q_\mathcal A^{\mathcal A_{n}}$;
\item[$(4_n)$] $\rho(\varphi_{n},\varphi_{n-1})\le 2^{-n}\cdot \varepsilon\circ\varphi_0$;
\item[$(5_n)$]
$\varphi_{n}|\mathcal A_0^\circ\setminus\mathcal A_{n}^\circ= \varphi_{n-1}|\mathcal A_0^\circ\setminus\mathcal A_{n}^\circ$;
\item[$(6_n)$] $\varphi_{n}(\mathcal A_{n}^\circ)=\mathcal B_{n}^\circ$ and $\varphi_{n}(\mathcal A_{n-1}^\circ\setminus\mathcal A_{n}^\circ)=\mathcal B_{n-1}^\circ\setminus\mathcal B_{n}^\circ$;
\item[$(7_n)$] $\Phi_{n}|\bigcup(\mathcal A_0^\circ\setminus\mathcal A_{n-1}^\circ)= \Phi_{n-1}|\bigcup(\mathcal A_0^\circ\setminus\mathcal A^\circ_{n-1})$;
\item[$(8_n)$] $\mathrm{diam}\big(\Phi_{n}(x)\cup\Phi_{n-1}(x)\big)<2^{-n+2}$ and $\mathrm{diam}\big(\Phi_{n}^{-1}(x)\cup\Phi_{n-1}^{-1}(x)\big)<2^{-n+2}$ for all $x\in X$.
\end{enumerate}
So, for every $n\in\omega$ we shall inductively construct decompositions $\mathcal A_n$, $\mathcal B_n$, homeomorphisms $h_n:\mathcal A_n\to\mathcal B_n$, $\varphi_n:\mathcal A\to\mathcal B$, and a multivalued function $\Phi_n:X\multimap X$ making the following diagram commutative:
$$\xymatrix{
X\ar[r]^{\Phi_n}\ar[d]_{q_{\mathcal A_n}}&X\ar[d]^{q_{\mathcal B_n}}\\
\mathcal A_n\ar[r]^{h_n}\ar[d]_{q_\mathcal A^{\mathcal A_n}}&\mathcal B_n\ar[d]^{q_\mathcal B^{\mathcal B_n}}\\
\mathcal A\ar[r]_{\varphi_n}&\mathcal B
}
$$
We start the inductive construction by letting $\mathcal A_0=\mathcal A$, $\mathcal B_0=\mathcal B$, and $h_0=\varphi_0$.
\smallskip

{\bf Inductive step.} Assume that for some $n\in\omega$ decompositions $\mathcal A_i$, $\mathcal B_i$, $i\le n$, and homeomorphisms $h_i:\mathcal A_i\to\mathcal B_i$, $\varphi_i:\mathcal A\to\mathcal B$, $i\le n$, satisfying the conditions $(1_i)$--$(8_i)$, $1\le i\le n$, have been constructed. We shall construct decompositions $\mathcal A_{n+1}$ and $\mathcal B_{n+1}$ of $X$ and homeomorphisms $h_{n+1}:\mathcal A_{n+1}\to\mathcal B_{n+1}$ and $\varphi_{n+1}:\mathcal A\to\mathcal B$.
Consider the decomposition spaces $\mathcal A_{n}=X/\mathcal A_{n}$, $\mathcal B_{n}=X/\mathcal B_{n}$, and the corresponding quotient maps $q_{\mathcal A_{n}}:X\to \mathcal A_{n}$ and $q_{\mathcal B_{n}}:X\to \mathcal B_{n}$. By the conditions $(2_k)$, $k\le n$, the family $\mathcal A_0^\circ\setminus\mathcal A_{n}^\circ$ is discrete in $X$. Consequently, its union $\bigcup(\mathcal A_0^\circ\setminus\mathcal A_{n}^\circ)$ is closed in $X$ and its projection $\bar A_n=q_{\mathcal A_{n}}\big(\bigcup(\mathcal A_0^\circ\setminus\mathcal A_{n}^\circ)\big)$ is closed in the decomposition space $\mathcal A_{n}=X/\mathcal A_{n}$. For the same reason, the set $\bar B_n=q_{\mathcal B_{n}}\big(\textstyle{\bigcup} (\mathcal B_0^\circ\setminus\mathcal B_{n}^\circ)\big)$ is closed in the decomposition space $\mathcal B_{n}=X/\mathcal B_{n}$.

The density of the decomposition $\mathcal A$ implies that the set $\bigcup\mathcal A_0^\circ$ is dense in $X$ and consequently the set $$\mathcal A^\circ_n=q_{\mathcal A_n}(\textstyle{\bigcup}\mathcal A^\circ_n)=q_{\mathcal A_n}\big(\textstyle{\bigcup}\mathcal A_0^\circ\big)\setminus\bar A_n$$ is dense in the open subspace $\mathcal A_n\setminus \bar A_n$ of the decomposition space $\mathcal A_n=X/\mathcal A_n$. For the same reason, the set $$\mathcal B^\circ_n=q_{\mathcal B_n}(\textstyle{\bigcup}\mathcal B^\circ_n)=q_{\mathcal B_n}\big(\textstyle{\bigcup}\mathcal B_0^\circ\big)\setminus\bar B_n$$ is dense in the open subspace $\mathcal B_n\setminus \bar B_n$ of the decomposition space $\mathcal B_n=X/\mathcal B_n$. Since the decomposition $\mathcal A$ is vanishing and $\mathcal A_n^\circ\subset\mathcal A_0^\circ=\mathcal A^\circ$, the decomposition $\mathcal A_n$ is vanishing too.
Consequently, for each $\varepsilon>0$ the subfamily $\mathcal A^\circ_{n,\varepsilon}=\{A\in\mathcal A_n:\mathrm{diam}(A)\ge\varepsilon\}$ is discrete in $X$, which implies that the set $\mathcal A^\circ_{n,\varepsilon}=q_{\mathcal A_n}(\bigcup\mathcal A^\circ_{n,\varepsilon})$ is closed and discrete in $\mathcal A_n$. Since $\mathcal A_n^\circ=\bigcup_{k=1}^\infty \mathcal A^\circ_{n,2^{-k}}$, the subset $\mathcal A^\circ_n$ is $\sigma$-discrete in $\mathcal A_n\setminus \bar A_n$. By analogy we can show that the set $\mathcal B^\circ_n$ is $\sigma$-discrete in $\mathcal B_n\setminus \bar B_n$.

Now consider the homeomorphisms $h_n:\mathcal A_n\to\mathcal B_n$, $\varphi_n:\mathcal A\to\mathcal B$, and the induced multivalued function $\Phi_n=q_{\mathcal B_n}^{-1}\circ h_n\circ q_{\mathcal A_n}:X\multimap X$. The inductive assumptions $(3_n)$, $(5_n)$, and $(6_n)$ imply that $h_n(\bar A_n)=\bar B_n$ and $h_n(\mathcal A^\circ_n)=\mathcal B_n^\circ$. Since the decomposition $\mathcal A$ is vanishing, the family $\mathcal A^\circ_{n,2^{-n}}=\{A\in\mathcal A^\circ_n:\mathrm{diam}(A)\ge 2^{-n}\}$ is discrete in $X$ and its image $\mathcal A^\circ_{n,2^{-n}}=q_{\mathcal A_n}(\bigcup\mathcal A^\circ_{n,2^{-n}})\subset \mathcal A^\circ_n$ is a closed discrete subset of the decomposition space $\mathcal A_n=X/\mathcal A_n$. For the same reason, the family $\mathcal B^\circ_{n,2^{-n}}=\{B\in\mathcal B^\circ_n:\mathrm{diam}(B)\ge 2^{-n}\}$ is discrete in $X$ and its image $\mathcal B^\circ_{n,2^{-n}}=q_{\mathcal B_n}(\bigcup\mathcal B^\circ_{n,2^{-n}})\subset \mathcal B^\circ_n$ is a closed discrete subset of the decomposition space $\mathcal B_n=X/\mathcal B_n$. Since $h_n(\mathcal A^\circ_n)=\mathcal B_n^\circ$, the closed discrete subset $\mathcal A^\circ_{n,2^{-n}}\cup h_n^{-1}(\mathcal B^\circ_{n,2^{-n}})$ of the decomposition space $\mathcal A_n=X/\mathcal A_n$ is a subset of $\mathcal A^\circ_n$.
For the same reason, the closed discrete subset $\mathcal B^\circ_{n,2^{-n}}\cup h_n(\mathcal A^\circ_{n,2^{-n}})$ of the decomposition space $\mathcal B_n=X/\mathcal B_n$ is a subset of $\mathcal B_n^\circ$. So, we can consider the subfamilies $$\mathcal A^\circ_{n+1}=\big\{q_{\mathcal A_n}^{-1}(y):y\in \mathcal A^\circ_n\setminus\big(\mathcal A^\circ_{n,2^{-n}}\cup h_n^{-1}(\mathcal B^\circ_{n,2^{-n}})\big)\big\}=\mathcal A^\circ_n\setminus\big(\mathcal A^\circ_{n,2^{-n}}\cup h_n^{-1}(\mathcal B^\circ_{n,2^{-n}})\big)\subset\mathcal A^\circ_n$$and $$\mathcal B^\circ_{n+1}=\big\{q_{\mathcal B_n}^{-1}(y):y\in \mathcal B^\circ_n\setminus\big(\mathcal B^\circ_{n,2^{-n}}\cup h_n(\mathcal A^\circ_{n,2^{-n}})\big)\big\}=\mathcal B^\circ_n\setminus\big(\mathcal B^\circ_{n,2^{-n}}\cup h_n(\mathcal A^\circ_{n,2^{-n}})\big)\subset\mathcal B^\circ_n.$$ These subfamilies $\mathcal A_{n+1}^\circ\subset\mathcal A^\circ_n$ and $\mathcal B^\circ_{n+1}\subset\mathcal B_n^\circ$ generate the decompositions
$$
\begin{aligned}
\mathcal A_{n+1}&=\mathcal A_{n+1}^\circ\cup\big\{\{x\}:x\in X\setminus\textstyle{\bigcup}\mathcal A_{n+1}^\circ\big\}\mbox{ \ and \ }\\
\mathcal B_{n+1}&=\mathcal B_{n+1}^\circ\cup\big\{\{x\}:x\in X\setminus\textstyle{\bigcup}\mathcal B_{n+1}^\circ\big\},
\end{aligned}
$$
of the space $X$, satisfying the conditions $(1_{n+1})$ and $(2_{n+1})$ of the inductive construction.

For all numbers $k,m\in\omega$ with $0\le k\le m\le n+1$ the conditions $(1_k)$, $k\le n+1$, guarantee that $\mathcal A_m^\circ\subset\mathcal A_k^\circ$ and hence $\mathcal A_m\prec\mathcal A_k$. So, there is a (unique) map $q_{\mathcal A_k}^{\mathcal A_{m}}:\mathcal A_{m}\to\mathcal A_k$ making the following triangle commutative:
$$\xymatrix{
&X\ar[rd]^{q_{\mathcal A_k}}\ar[ld]_{q_{\mathcal A_m}}&\\
\mathcal A_m\ar[rr]^{q_{\mathcal A_k}^{\mathcal A_m}}&&\mathcal A_k.
}$$
This map $q_{\mathcal A_k}^{\mathcal A_{m}}:\mathcal A_m\to\mathcal A_k$ determines a decomposition $$\mathcal A_k^{m}=\big\{(q_{\mathcal A_k}^{\mathcal A_{m}})^{-1}(y):y\in\mathcal A_{k}\big\}=\big\{q_{\mathcal A_{m}}(A):A\in\mathcal A_k\big\}$$of the decomposition space $\mathcal A_{m}$. The non-degeneracy part $$(\mathcal A_k^{m})^\circ=\big\{q_{\mathcal A_{m}}(A):A\in\mathcal A_k^\circ\setminus\mathcal A_{m}^\circ\big\}$$ of this decomposition is discrete in $\mathcal A_{m}$ by the conditions $(2_i)$, $k<i\le m$, of the inductive construction. By analogy, for any $0\le k\le m\le n+1$ we can define the map $q_{\mathcal B_k}^{\mathcal B_m}:\mathcal B_m\to\mathcal B_k$ and the corresponding decomposition $\mathcal B_k^m=\{(q_{\mathcal B_k}^{\mathcal B_m})^{-1}(y):y\in\mathcal B_k\}=\{q_{\mathcal B_m}(B):B\in\mathcal B_k\}$ of the decomposition space $\mathcal B_{m}$.

Now consider the diagram:
$$\xymatrix{
& X\ar@<2pt>@{..>}[r]^{\Phi_{n+1}}\ar@<-2pt>[r]_{\Phi_n}\ar[d]_{q_{\mathcal A_{n+1}}}&X\ar[d]^{q_{\mathcal B_{n+1}}}& \\
&\mathcal A_{n+1}\ar@<2pt>@{..>}[r]^{h_{n+1}}\ar@<-2pt>@{..>}[r]_{\tilde h_{n+1}}\ar[d]_{q_{\mathcal A_n}^{\mathcal A_{n+1}}}& \mathcal B_{n+1}\ar[d]^{q_{\mathcal B_n}^{\mathcal B_{n+1}}}&\\
\mathcal A_{n}^\circ{\setminus}\mathcal A_{n+1}^\circ\ar[r]\ar[d]& \mathcal A_{n}\ar@<2pt>@{..>}[r]^{\tilde h_{n}}\ar@<-2pt>[r]_{h_{n}}\ar[d]_{q_{\mathcal A_0}^{\mathcal A_{n}}}& \mathcal B_{n}\ar[d]^{q_{\mathcal B_0}^{\mathcal B_{n}}}&\ar[l]\mathcal B_{n}^\circ{\setminus}\mathcal B_{n+1}^\circ\ar[d]\\
\mathcal A^\circ\ar[r]&\mathcal A\ar@<2pt>[r]^{\varphi_n}\ar@<-2pt>@{..>}[r]_{\varphi_{n+1}}&\mathcal B&\ar[l]\mathcal B^\circ
}
$$
In this diagram the straight arrows denote maps that are already defined, while the dotted arrows denote maps that will be constructed during the inductive step in the following way.
First, using Theorem~\ref{t4} we approximate the homeomorphism $h_n$ by an $(\mathcal A_n^{n+1},\mathcal B_n^{n+1})$-liftable homeomorphism $\tilde h_n$, which determines a homeomorphism $\tilde h_{n+1}:\mathcal A_{n+1}\to\mathcal B_{n+1}$. Then using Theorem~\ref{t:dense} we approximate the homeomorphism $\tilde h_{n+1}$ by an $(\mathcal A^{n+1}_0,\mathcal B^{n+1}_0)$-factorizable homeomorphism $h_{n+1}$ such that $h_{n+1}(\mathcal A_{n+1}^\circ)=\mathcal B_{n+1}^\circ$. The homeomorphism $h_{n+1}$ determines a homeomorphism $\varphi_{n+1}:\mathcal A\to\mathcal B$ and the multivalued function $\Phi_{n+1}=q^{-1}_{\mathcal B_{n+1}}\circ h_{n+1}\circ q_{\mathcal A_{n+1}}:X\multimap X$, which will satisfy the inductive assumptions $(3_{n+1})$--$(8_{n+1})$. Now we realize this strategy in detail.

The homeomorphism $\tilde h_n$ will differ from the homeomorphism $h_n$ only on a neighborhood $U'_n\subset\mathcal A_{n}$ of the closed discrete subset $\mathcal A_n^\circ\setminus\mathcal A_{n+1}^\circ$ of the decomposition space $\mathcal A_n$. The neighborhood $U'_n$ will be constructed as follows. Observe that each element $a\in \mathcal A^\circ_n\setminus\mathcal A^\circ_{n+1}\subset\mathcal A_n$ is a compact subset of the space $X$, equal to its own preimage $q^{-1}_{\mathcal A_n}(a)$ under the quotient map $q_{\mathcal A_n}:X\to\mathcal A_n$. The condition $(2_{n})$ of the inductive construction guarantees that $\mathrm{diam}(a)<2^{-n+1}$. The same is true for any point $b\in\mathcal B^\circ_n\setminus\mathcal B_{n+1}^\circ=h_n(\mathcal A^\circ_n\setminus\mathcal A^\circ_{n+1})\subset\mathcal B_n$: it coincides with its own preimage $q_{\mathcal B_n}^{-1}(b)\subset X$ and has diameter $\mathrm{diam}(b)<2^{-n+1}$.
Since the non-degeneracy set $\bar B_n=\bigcup(\mathcal B_0^n)^\circ$ of the map $q^{\mathcal B_n}_{\mathcal B_0}:\mathcal B_n\to\mathcal B$ is disjoint from the closed discrete subset $\mathcal B^\circ_n\setminus \mathcal B_{n+1}^\circ\subset \mathcal B_{n}$, for every point $b\in \mathcal B^\circ_n\setminus\mathcal B_{n+1}^\circ$ we can choose a neighborhood $U_n(b)\subset \mathcal B_n$ such that
\begin{itemize}
\item $U_n(b)\cap \bar B_n=\emptyset$;
\item $\mathrm{diam}\big(q^{-1}_{\mathcal B_n}(U_n(b))\big)<2^{-n+1}$;
\item $\mathrm{diam}\big(q^{-1}_{\mathcal A_n}(h_n^{-1}(U_n(b)))\big)<2^{-n+1}$; and
\item $U_n(b)=(q^{\mathcal B_n}_{\mathcal B_0})^{-1}(W_n(b))$ for some open set $W_n(b)\subset\bigcup\mathcal W\subset\mathcal B$\newline of $\rho$-diameter $\mathrm{diam}\big(W_n(b)\big)<2^{-n-1}\cdot\inf \varepsilon\circ\varphi_0\circ\varphi_n^{-1}(W_n(b))$.
\end{itemize}
Since the set $\mathcal B^\circ_n\setminus\mathcal B_{n+1}^\circ$ is closed and discrete in the (collectionwise normal) decomposition space $\mathcal B_n$, we can additionally assume that the indexed family $\{U_n(b):b\in \mathcal B^\circ_n\setminus\mathcal B_{n+1}^\circ\}$ is discrete in $\mathcal B_n$.
Then \begin{enumerate} \item $U_n=\bigcup\{U_n(b):b\in \mathcal B^\circ_n\setminus\mathcal B_{n+1}^\circ\}$ is an open neighborhood of the closed discrete subset $\mathcal B^\circ_n\setminus\mathcal B_{n+1}^\circ$ in the decomposition space $\mathcal B_n$, \item $W_n=\bigcup\{W_n(b):b\in \mathcal B^\circ_n\setminus\mathcal B_{n+1}^\circ\}$ is an open neighborhood of the closed discrete subset $\mathcal B^\circ_n\setminus\mathcal B_{n+1}^\circ$ in the decomposition space $\mathcal B$, \item $U_n'=h_n^{-1}(U_n)$ is an open neighborhood of the closed discrete subset $\mathcal A^\circ_n\setminus\mathcal A_{n+1}^\circ=h_n^{-1}(\mathcal B^\circ_n\setminus\mathcal B_{n+1}^\circ)$ in the decomposition space $\mathcal A_n$, and \item $W_n'=\varphi_n^{-1}(W_n)$ is an open neighborhood of the closed discrete subset $\mathcal A^\circ_n\setminus\mathcal A_{n+1}^\circ=\varphi_n^{-1}(\mathcal B^\circ_n\setminus\mathcal B_{n+1}^\circ)$ in the decomposition space $\mathcal A$. \end{enumerate} The choice of the neighborhoods $U_n(b)$, $b\in\mathcal B_n^\circ\setminus\mathcal B_{n+1}^\circ$, guarantees that $U_n\cap\bar B_n=\emptyset$, which implies $U_n'\cap\bar A_n=\emptyset$. 
These sets fit into the following commutative diagram: $$\xymatrix{ &&\mathcal A_{n+1}\ar[d]_{q_{\mathcal A_n}^{\mathcal A_{n+1}}}&\mathcal B_{n+1}\ar[d]^{q_{\mathcal B_n}^{\mathcal B_{n+1}}} &&\\ \mathcal A_n^\circ\setminus\mathcal A_{n+1}^\circ\ar[r]\ar[d]&U_n'\ar[r]\ar[d]&\mathcal A_n\ar[r]^{h_n}\ar[d]_{q^{\mathcal A_n}_{\mathcal A_0}}& \mathcal B_n\ar[d]^{q^{\mathcal B_n}_{\mathcal B_0}}&U_n\ar[l]\ar[d] &\mathcal B_n^\circ\setminus\mathcal B_{n+1}^\circ\ar[l]\ar[d]\\ \mathcal A_n^\circ\setminus\mathcal A_{n+1}^\circ\ar[r]&W_n'\ar[r]&\mathcal A\ar[r]^{\varphi_n}&\mathcal B&W_n\ar[l] &\mathcal B_n^\circ\setminus\mathcal B_{n+1}^\circ\ar[l]\\ } $$ It follows that $U_n$ is an open neighborhood of the non-degeneracy set $\mathcal B_n^\circ\setminus\mathcal B_{n+1}^\circ$ of the map $q^{\mathcal B_{n+1}}_{\mathcal B_n}:\mathcal B_{n+1}\to\mathcal B_n$ while $U_n'$ is an open neighborhood of the non-degeneracy set $\mathcal A_n^\circ\setminus \mathcal A_{n+1}^\circ$ of the map $q^{\mathcal A_{n+1}}_{\mathcal A_n}:\mathcal A_{n+1}\to\mathcal A_n$. The shrinkability of the decomposition $\mathcal A$ (which follows from the $\mathcal K$-tameness of $\mathcal A$) implies the shrinkability of the decomposition $\mathcal A_{n+1}\prec\mathcal A$. Then Theorem~\ref{t:shrink} implies that the quotient map $q_{\mathcal A_{n+1}}:X\to\mathcal A_{n+1}$ is a near homeomorphism and hence the decomposition space $\mathcal A_{n+1}=X/\mathcal A_{n+1}$ is homeomorphic to $X$. So, we can consider the tame family $\mathcal K(\mathcal A_{n+1})=\{f(K):K\in\mathcal K,\; f\in\mathcal H(X,\mathcal A_{n+1})\}$ of compact subsets of $\mathcal A_{n+1}$. We claim that $(\mathcal A_{n}^{n+1})^\circ\subset\mathcal K(\mathcal A_{n+1})$. Fix any set $A_{n+1}\in(\mathcal A_n^{n+1})^\circ$ and consider its preimage $A=q_{\mathcal A_{n+1}}^{-1}(A_{n+1})\in \mathcal A^\circ_n\setminus\mathcal A^\circ_{n+1}$ in $X$. 
Observe that $A_{n+1}$ is a compact subset of the decomposition space $\mathcal A_{n+1}$, disjoint from its non-degeneracy part $\mathcal A_{n+1}^\circ$. Since $A\in\mathcal A$, the open set $S=X\setminus A$ is $\mathcal A$-saturated. The strong shrinkability of the decomposition $\mathcal A$ implies the shrinkability of the decompositions $\mathcal A|S$ and $\mathcal A_{n+1}|S$. Then Theorem~\ref{t:shrink} and Lemma~\ref{l:approx} imply that the quotient map $q_{\mathcal A_{n+1}}:X\to\mathcal A_{n+1}$ can be approximated by a homeomorphism $h:X\to \mathcal A_{n+1}$ such that $h(A)=A_{n+1}$, which means that the pairs $(X,A)$ and $(\mathcal A_{n+1},A_{n+1})$ are homeomorphic and hence $A_{n+1}\in\mathcal K(\mathcal A_{n+1})$. So, $\mathcal A_n^{n+1}$ is a discrete $\mathcal K(\mathcal A_{n+1})$-tame decomposition of the space $\mathcal A_{n+1}$. By analogy, we can show that the decomposition $\mathcal B_n^{n+1}$ of the space $\mathcal B_{n+1}$ is discrete and $\mathcal K(\mathcal B_{n+1})$-tame for the tame family $\mathcal K(\mathcal B_{n+1})=\{f(K):K\in\mathcal K,\;\;f\in\mathcal H(X,\mathcal B_{n+1})\}$ of compact subsets of the decomposition space $\mathcal B_{n+1}$ (which is homeomorphic to $X$).

Now one can apply Theorem~\ref{t4} and approximate the homeomorphism $h_n:\mathcal A_n\to \mathcal B_n$ by an $(\mathcal A_n^{n+1},\mathcal B_n^{n+1})$-liftable homeomorphism $\tilde h_n:\mathcal A_n\to\mathcal B_n$ such that $(\tilde h_n,h_n)\prec\mathcal U_n$, where $\mathcal U_n=\{U_n(b):b\in\mathcal B_n^\circ\setminus\mathcal B_{n+1}^\circ\}$. The relation $(\tilde h_n,h_n)\prec\mathcal U_n$ implies that $\tilde h_n|\mathcal A_n\setminus U_n'=h_n|\mathcal A_n\setminus U'_n$ and hence $\tilde h_n|\bar A_n=h_n|\bar A_n$.
The homeomorphism $\tilde h_n$ can be lifted to a homeomorphism $\tilde h_{n+1}:\mathcal A_{n+1}\to\mathcal B_{n+1}$ making the following diagram commutative:
$$\xymatrix{
\mathcal A_{n+1}\ar[d]_{q_{\mathcal A_n}^{\mathcal A_{n+1}}}\ar[r]^{\tilde h_{n+1}}&\mathcal B_{n+1}\ar[d]^{q_{\mathcal B_n}^{\mathcal B_{n+1}}}\\
\mathcal A_n\ar[r]_{\tilde h_n}&\mathcal B_n
}
$$
Since the homeomorphism $\tilde h_n$ is $(\mathcal A_n^{n+1},\mathcal B_n^{n+1})$-liftable, it maps the non-degeneracy set $\mathcal A_n^\circ\setminus\mathcal A_{n+1}^\circ$ of the map $q^{\mathcal A_{n+1}}_{\mathcal A_n}$ onto the non-degeneracy set $\mathcal B_n^\circ\setminus\mathcal B_{n+1}^\circ$ of the map $q^{\mathcal B_{n+1}}_{\mathcal B_n}$. This fact, combined with the equality $\tilde h_n|\bar A_n=h_n|\bar A_n$, implies $\tilde h_{n+1}(\bar A_{n+1})=\bar B_{n+1}$.

Now we shall approximate the homeomorphism $\tilde h_{n+1}$ by a homeomorphism $h_{n+1}:\mathcal A_{n+1}\to\mathcal B_{n+1}$ such that $h_{n+1}|\bar A_{n+1}=\tilde h_{n+1}|\bar A_{n+1}$ and $h_{n+1}(\mathcal A_{n+1}^\circ)=\mathcal B_{n+1}^\circ$. For this, for every point $b\in \mathcal B^\circ_n\setminus \mathcal B^\circ_{n+1}\subset\mathcal B_n$, consider the open set $U_n(b)\setminus\{b\}$ and its preimage $V_{n+1}(b)=(q_{\mathcal B_n}^{\mathcal B_{n+1}})^{-1}(U_n(b)\setminus\{b\})$ in $\mathcal B_{n+1}$. Then $\mathcal V_{n+1}=\{V_{n+1}(b):b\in\mathcal B^\circ_n\setminus\mathcal B^\circ_{n+1}\}$ is an open cover of the open subset $V_{n+1}=\bigcup\mathcal V_{n+1}\subset\mathcal B_{n+1}$, which coincides with the set $(q_{\mathcal B_n}^{\mathcal B_{n+1}})^{-1}\big(U_n\setminus(\mathcal B^\circ_n\setminus\mathcal B^\circ_{n+1})\big)$ and does not intersect the closed subset $\bar B_{n+1}=\bigcup(\mathcal B^{n+1}_0)^\circ$ of the decomposition space $\mathcal B_{n+1}$.
It follows that the open subset $V_{n+1}'=\tilde h_{n+1}^{-1}(V_{n+1})$ of the decomposition space $\mathcal A_{n+1}$ coincides with the set $(q_{\mathcal A_n}^{\mathcal A_{n+1}})^{-1}\big(U'_n\setminus(\mathcal A^\circ_n\setminus\mathcal A^\circ_{n+1})\big)$ and does not intersect the closed subset $\bar A_{n+1}=\bigcup(\mathcal A_0^{n+1})^\circ$ of $\mathcal A_{n+1}$. The density of the decomposition $\mathcal A_0=\mathcal A$ implies that the set $q_{\mathcal A_{n+1}}\big(\bigcup\mathcal A_0^\circ \big)$ is dense in $\mathcal A_{n+1}$ and the set $\mathcal A_{n+1}^\circ=q_{\mathcal A_{n+1}}\big(\bigcup\mathcal A_0^\circ\big)\setminus\bar A_{n+1}$ is dense in $\mathcal A_{n+1}\setminus\bar A_{n+1}$. Taking into account that the decomposition $\mathcal A_{n+1}$ is vanishing, we conclude that its non-degeneracy part $\mathcal A^\circ_{n+1}=\bigcup_{k\in\omega}\mathcal A^\circ_{n+1,2^{-k}}$ is $\sigma$-discrete in $\mathcal A_{n+1}$. Then $\mathcal A_{n+1}^\circ\cap V'_{n+1}$ is a dense $\sigma$-discrete subset of $V'_{n+1}$. By analogy we can show that $\mathcal B_{n+1}^\circ\cap V_{n+1}$ is a dense $\sigma$-discrete subset of $V_{n+1}$.

Applying Theorem~\ref{t:dense}, we can approximate the homeomorphism $\tilde h_{n+1}$ by a homeomorphism $h_{n+1}:\mathcal A_{n+1}\to\mathcal B_{n+1}$ such that $h_{n+1}(V'_{n+1}\cap \mathcal A^\circ_{n+1})=V_{n+1}\cap\mathcal B^\circ_{n+1}$ and $(h_{n+1},\tilde h_{n+1})\prec\mathcal V_{n+1}$, which implies that the homeomorphisms $h_{n+1}$ and $\tilde h_{n+1}$ coincide on the set $\mathcal A_{n+1}\setminus V'_{n+1}\supset\bar A_{n+1}$.

We claim that the homeomorphism $h_{n+1}$ is $(\mathcal A^{n+1}_0,\mathcal B^{n+1}_0)$-factorizable. This will follow as soon as we check that for all sets $A\in \mathcal A^{n+1}_0$ and $B\in \mathcal B^{n+1}_0$ the sets $q^{\mathcal B_{n+1}}_{\mathcal B_0}\circ h_{n+1}(A)\subset\mathcal B$ and $q^{\mathcal A_{n+1}}_{\mathcal A_0}\circ h^{-1}_{n+1}(B)\subset\mathcal A$ are singletons.
First we check that the set $q^{\mathcal B_{n+1}}_{\mathcal B_0}\circ h_{n+1}(A)\subset\mathcal B$ is a singleton. This is clear if $A$ is a singleton. So, we assume that $A$ is not a singleton, in which case $A\subset\bigcup(\mathcal A_0^{n+1})^\circ=\bar A_{n+1}$, $h_{n+1}|A=\tilde h_{n+1}|A$, and $$ q_{\mathcal B_0}^{\mathcal B_{n+1}}\circ h_{n+1}(A) =q_{\mathcal B_0}^{\mathcal B_{n}}\circ q_{\mathcal B_n}^{\mathcal B_{n+1}}\circ \tilde h_{n+1}(A) =q_{\mathcal B_0}^{\mathcal B_{n}}\circ \tilde h_n\circ q_{\mathcal A_n}^{\mathcal A_{n+1}}(A)=q_{\mathcal B_0}^{\mathcal B_{n}}\circ h_n\circ q_{\mathcal A_n}^{\mathcal A_{n+1}}(A).$$ Observe that the set $q_{\mathcal A_n}^{\mathcal A_{n+1}}(A)$ is an element of the decomposition $\mathcal A_0^n$ of the decomposition space $\mathcal A_n$. The condition $(3_n)$ of the inductive assumption guarantees that the homeomorphism $h_n$ is $(\mathcal A_0^n,\mathcal B_0^n)$-factorizable, which implies that the set $$q^{\mathcal B_n}_{\mathcal B_0}\circ h_n\circ q_{\mathcal A_n}^{\mathcal A_{n+1}}(A)=q_{\mathcal B_0}^{\mathcal B_{n+1}}\circ h_{n+1}(A)$$ is a singleton. By analogy we can check that for every set $B\in \mathcal B^{n+1}_0$ the set $q^{\mathcal A_{n+1}}_{\mathcal A}\circ h^{-1}_{n+1}(B)$ is a singleton in $\mathcal A$. This implies that the homeomorphism $h_{n+1}$ is $(\mathcal A_0^{n+1},\mathcal B_0^{n+1})$-factorizable and hence there is a homeomorphism $\varphi_{n+1}:\mathcal A\to\mathcal B$ such that $q_{\mathcal B_0}^{\mathcal B_{n+1}}\circ h_{n+1}=\varphi_{n+1}\circ q_{\mathcal A_0}^{\mathcal A_{n+1}}$. So, the condition $(3_{n+1})$ of the inductive construction is satisfied. To prove the condition $(4_{n+1})$, we need to show that $\rho(\varphi_{n+1}(a),\varphi_n(a))\le2^{-n-1}\cdot\varepsilon\circ\varphi_0(a)$ for each $a\in\mathcal A$. This inequality follows from the equality $\varphi_{n+1}(a)=\varphi_n(a)$ if $a\in\mathcal A\setminus W_n'$.
If $a\in W'_n$, then $\varphi_{n+1}(a),\varphi_n(a)\in W_n(b)$ for some $b\in \mathcal B_n^\circ\setminus\mathcal B_{n+1}^\circ$ and hence $$\rho(\varphi_{n+1}(a),\varphi_n(a))\le\mathrm{diam}\big( W_n(b)\big)\le 2^{-n-1}\cdot\inf \varepsilon\circ\varphi_0\circ\varphi_n^{-1}(W_n(b))\le 2^{-n-1}\cdot\varepsilon\circ\varphi_0(a).$$ It follows from the construction of the homeomorphism $h_{n+1}$ and the choice of the neighborhoods $W_n(b)$, $b\in\mathcal B^\circ_n\setminus\mathcal B_{n+1}^\circ$, that the homeomorphisms $\varphi_{n+1}$ and $\varphi_n$ coincide on the set $(\mathcal A\setminus W'_n)\cup (\mathcal A_n^\circ\setminus\mathcal A_{n+1}^\circ)\supset\mathcal A_0^\circ\setminus\mathcal A_{n+1}^\circ$. So, the inductive condition $(5_{n+1})$ holds. Taking into account that the homeomorphism $\tilde h_n$ coincides with the homeomorphism $h_n$ on the set $\mathcal A^\circ_n\setminus\mathcal A^\circ_{n+1}$ and the homeomorphism $h_{n+1}$ coincides with the lift $\tilde h_{n+1}$ of $\tilde h_n$ on the set $(q_{\mathcal A_n}^{\mathcal A_{n+1}})^{-1}(\mathcal A^\circ_n\setminus\mathcal A^\circ_{n+1})=(\mathcal A_n^{n+1})^\circ$, we conclude that $\varphi_{n+1}(\mathcal A^\circ_n\setminus\mathcal A^\circ_{n+1})=\varphi_n(\mathcal A^\circ_n\setminus\mathcal A^\circ_{n+1})=\mathcal B_n^\circ\setminus\mathcal B_{n+1}^\circ$. To finish the proof of the condition $(6_{n+1})$, observe that the equalities $\varphi_{n+1}|\mathcal A\setminus W_n'=\varphi_n|\mathcal A\setminus W'_n$ and $\varphi_{n+1}(W'_n)=W_n$ and the inductive assumption $(6_n)$ imply $$\varphi_{n+1}(\mathcal A^\circ_{n}\setminus W'_n)=\varphi_n(\mathcal A^\circ_n\setminus W'_n)=\mathcal B^\circ_n\setminus W_n.$$ On the other hand, the equality $h_{n+1}(\mathcal A^\circ_{n+1}\cap V_{n+1}')=\mathcal B^\circ_{n+1}\cap V_{n+1}$ and the definition of the open sets $V_{n+1}$ and $V_{n+1}'$ imply that $\varphi_{n+1}(\mathcal A^\circ_{n+1}\cap W'_n)=\mathcal B^\circ_{n+1}\cap W_n$.
So, $\varphi_{n+1}(\mathcal A_{n+1}^\circ)=\mathcal B_{n+1}^\circ$, which means that the condition $(6_{n+1})$ holds. To complete the inductive step, it remains to check that the multivalued map $\Phi_{n+1}=q_{\mathcal B_{n+1}}^{-1}\circ h_{n+1}\circ q_{\mathcal A_{n+1}}:X\multimap X$ satisfies the conditions $(7_{n+1})$ and $(8_{n+1})$. To see that the condition $(7_{n+1})$ holds, observe that the map $q_{\mathcal A_n}^{\mathcal A_{n+1}}:\mathcal A_{n+1}\to\mathcal A_n$ is injective on the set $\bar A'_n=q_{\mathcal A_{n+1}}\big(\bigcup(\mathcal A_0\setminus\mathcal A_n)\big)$ and the map $q_{\mathcal B_n}^{\mathcal B_{n+1}}:\mathcal B_{n+1}\to\mathcal B_n$ is injective on the set $\bar B'_n=q_{\mathcal B_{n+1}}\big(\bigcup(\mathcal B_0\setminus\mathcal B_n)\big)$. Taking into account that $h_{n+1}|\bar A_n'=\tilde h_{n+1}|\bar A'_n$ and $\tilde h_n|\bar A_n=h_n|\bar A_n$, we conclude that $$h_{n+1}|\bar A'_n=\tilde h_{n+1}|\bar A_n'=(q_{\mathcal B_n}^{\mathcal B_{n+1}})^{-1}\circ \tilde h_n\circ q_{\mathcal A_n}^{\mathcal A_{n+1}}|\bar A'_n=(q_{\mathcal B_n}^{\mathcal B_{n+1}})^{-1}\circ h_n\circ q_{\mathcal A_n}^{\mathcal A_{n+1}}|\bar A'_n$$and hence for every $x\in\bigcup(\mathcal A_0^\circ\setminus\mathcal A_n^\circ)$ we get $$ \begin{aligned} \Phi_{n+1}(x)&=q_{\mathcal B_{n+1}}^{-1}\circ h_{n+1}\circ q_{\mathcal A_{n+1}}(x)=q_{\mathcal B_{n+1}}^{-1}\circ \tilde h_{n+1}\circ q_{\mathcal A_{n+1}}(x)=\\ &=q_{\mathcal B_{n+1}}^{-1}\circ (q_{\mathcal B_n}^{\mathcal B_{n+1}})^{-1}\circ \tilde h_{n}\circ q_{\mathcal A_n}^{\mathcal A_{n+1}}\circ q_{\mathcal A_{n+1}}(x)=\\ &=\big(q^{\mathcal B_{n+1}}_{\mathcal B_n}\circ q_{\mathcal B_{n+1}}\big)^{-1}\circ h_{n}\circ q_{\mathcal A_n}(x) =q_{\mathcal B_n}^{-1}\circ h_{n}\circ q_{\mathcal A_n}(x)=\Phi_n(x). \end{aligned} $$ So, the condition $(7_{n+1})$ holds. To check the condition $(8_{n+1})$, fix any point $x\in X$. 
If the projection $a=q_\mathcal A(x)\in\mathcal A_0$ does not belong to the open set $W'_n$, then $\Phi_{n}(x)=\Phi_{n+1}(x)\in \mathcal B_{n+1}$ and hence $\mathrm{diam}(\Phi_n(x)\cup\Phi_{n+1}(x))=\mathrm{diam}(\Phi_{n+1}(x))<2^{-n}$ by the condition $(2_{n+1})$ of the inductive construction. So, we assume that $a\in W'_n$ and hence $\varphi_n(a),\varphi_{n+1}(a)\in W_n(b)$ for some element $b\in\mathcal B^\circ_n\setminus\mathcal B^\circ_{n+1}$. The choice of the neighborhood $W_n(b)$ guarantees that the set $q_\mathcal B^{-1}(W_n(b))$ has diameter $<2^{-n+1}$. Taking into account that $$\Phi_n(x)\cup\Phi_{n+1}(x)\subset q_{\mathcal B}^{-1}(\{\varphi_n(a),\varphi_{n+1}(a)\})\subset q_{\mathcal B}^{-1}(W_n(b)),$$ we obtain the desired inequality $$\mathrm{diam}\big(\Phi_n(x)\cup\Phi_{n+1}(x)\big)\le \mathrm{diam} \big(q_{\mathcal B}^{-1}(W_n(b))\big)<2^{-n+1}.$$ By analogy we can prove that $\mathrm{diam}\big(\Phi_n^{-1}(x)\cup\Phi_{n+1}^{-1}(x)\big)<2^{-n+1}$. This completes the inductive step. \smallskip After completing the inductive construction, we obtain the sequences of decompositions $(\mathcal A_n)_{n\in\omega}$, $(\mathcal B_n)_{n\in\omega}$ of $X$, the sequences of homeomorphisms $(h_n:\mathcal A_n\to\mathcal B_n)_{n\in\omega}$, $(\varphi_n:\mathcal A\to\mathcal B)_{n\in\omega}$ and the sequence $(\Phi_n:X\multimap X)_{n\in\omega}$ of multivalued functions, satisfying the conditions $(1_n)$--$(8_n)$, $n\in\mathbb N$, of the inductive construction. Taking the limit $\Phi=\lim_{n\to\infty}\Phi_n$ of the multivalued functions $\Phi_n$, we shall obtain a $(\mathcal A,\mathcal B)$-factorizable homeomorphism $\Phi:X\to X$ inducing a $(\mathcal A,\mathcal B)$-liftable homeomorphism $\varphi:\mathcal A\to\mathcal B$ of the decomposition spaces. To define the map $\Phi$, consider for every $x\in X$ the sequence $(\Phi_n(x))_{n\in\omega}$ of compact subsets of the space $X$.
The conditions $(8_n)$, $n\in\mathbb N$, of the inductive construction guarantee that this sequence is Cauchy in the hyperspace $\exp(X)$ of $X$ endowed with the Hausdorff metric $d_H$, which is complete according to \cite[4.5.23]{En}. Let us recall that the {\em hyperspace} $\exp(X)$ is the space of non-empty compact subsets of $X$, endowed with the {\em Hausdorff metric} $d_H$ defined by the (well-known) formula $$d_H(A,B)=\max\{\max_{a\in A}d(a,B),\max_{b\in B}d(A,b)\}\mbox{ where $A,B\in\exp(X)$}.$$ We shall identify the metric space $(X,d)$ with the subspace of singletons in $(\exp(X),d_H)$. The completeness of the hyperspace $(\exp(X),d_H)$ guarantees that the Cauchy sequence $(\Phi_n(x))_{n\in\omega}$ has the limit $\Phi(x)=\lim\limits_{n\to\infty}\Phi_n(x)$ in $\exp(X)$. Moreover, the conditions $(8_n)$, $n\in\mathbb N$, imply that \begin{equation}\label{lim} d_H(\Phi(x),\Phi_n(x))\le \sum_{k=n}^\infty d_H(\Phi_{k+1}(x),\Phi_k(x))\le \sum_{k=n}^\infty\mathrm{diam}(\Phi_{k+1}(x)\cup\Phi_k(x))<\sum_{k=n}^\infty 2^{-k+1}=2^{-n+2} \end{equation} for every $n\in\mathbb N$. Also the conditions $(8_n)$, $n\in\mathbb N$, of the inductive construction imply that $\Phi(x)=\lim_{n\to\infty}\Phi_n(x)$ is a singleton. So, $\Phi:x\mapsto \Phi(x)$ can be thought of as a usual singlevalued function $\Phi:X\to X\subset \exp(X)$. \begin{claim}\label{cl8.2} The function $\Phi:X\to X$ is continuous. \end{claim} \begin{proof} Given any point $x_0\in X$ and $\epsilon>0$, we need to find a neighborhood $O(x_0)\subset X$ such that $\Phi(O(x_0))\subset O_d(\Phi(x_0),\epsilon)$ where $O_d(y,\epsilon)=\{x\in X:d(x,y)<\epsilon\}$ denotes the $\epsilon$-ball centered at a point $y\in X$. Find $n\in\mathbb N$ such that $2^{-n+5}<\epsilon$ and consider the multivalued function $\Phi_n=q_{\mathcal B_n}^{-1}\circ h_n\circ q_{\mathcal A_n}:X\multimap X$. Consider the point $a=q_{\mathcal A_n}(x_0)\in\mathcal A_n$ and its image $b=h_n(a)\in\mathcal B_n$, which is a compact subset of $X$.
Since the quotient map $q_{\mathcal B_n}:X\to \mathcal B_n$ is closed, the point $b\in \mathcal B_n$ has an open neighborhood $O(b)\subset \mathcal B_n$ such that $q_{\mathcal B_n}^{-1}(O(b))\subset O_d(b,2^{-n})$. By the continuity of the homeomorphism $h_n:\mathcal A_n\to\mathcal B_n$, the point $a\in\mathcal A_n$ has a neighborhood $O(a)\subset\mathcal A_n$ such that $h_n(O(a))\subset O(b)$. The continuity of the quotient projection $q_{\mathcal A_n}$ implies that $O(x_0)=q_{\mathcal A_n}^{-1}(O(a))$ is an open neighborhood of the point $x_0\in q_{\mathcal A_n}^{-1}(a)$. We claim that $d(\Phi(x),\Phi(x_0))<\epsilon$ for every $x\in O(x_0)$. Observe that $\Phi_n(x_0)\cup \Phi_n(x)\subset q_{\mathcal B_n}^{-1}\circ h_n\circ q_{\mathcal A_n}(\{x,x_0\})\subset q_{\mathcal B_n}^{-1}\big(h_n(O(a))\big)\subset q_{\mathcal B_n}^{-1}(O(b))\subset O_d(b,2^{-n})$. Now the upper bound (\ref{lim}) implies that $$\Phi(x)\cup\Phi(x_0)\subset O_d(\Phi_n(x)\cup\Phi_n(x_0),2^{-n+2})\subset O_d(b,2^{-n}+2^{-n+2})\subset O_d(b,2^{-n+3}).$$ Since $b\in \mathcal B_n$, the condition $(2_{n})$ of the inductive construction guarantees that $\mathrm{diam}(b)< 2^{-n+1}$. Consequently, $$d(\Phi(x),\Phi(x_0))\le\mathrm{diam}\big( O_d(b,2^{-n+3})\big)\le \mathrm{diam}(b)+2\cdot 2^{-n+3}\le2^{-n+1}+2^{-n+4}<2^{-n+5}<\epsilon.$$ \end{proof} \begin{claim}\label{cl8.3} There exists a continuous function $\varphi:\mathcal A\to\mathcal B$ such that $q_\mathcal B\circ\Phi=\varphi\circ q_\mathcal A$ and $\varphi|\mathcal A^\circ_0\setminus\mathcal A^\circ_{n}=\varphi_{n}|\mathcal A^\circ_0\setminus \mathcal A^\circ_{n}$ for all $n\in\mathbb N$. \end{claim} \begin{proof} To define the function $\varphi:\mathcal A\to\mathcal B$, we shall show that for each element $a\in\mathcal A$ the set $q_\mathcal B\circ\Phi(a)\subset\mathcal B$ is a singleton. This is trivially true if the compact subset $a$ of $X$ is a singleton.
So, we assume that $a$ is not a singleton and hence $a\in\mathcal A^\circ_{n-1}\setminus\mathcal A^\circ_{n}$ for some $n\in\mathbb N$. In this case $a\subset\bigcup(\mathcal A_0^\circ\setminus\mathcal A_{n}^\circ)$ and hence $\Phi|a=\Phi_{n}|a$ by the conditions $(7_k)$, $k>n$, of the inductive construction. Now we see that the set $$ \begin{aligned} q_\mathcal B\circ \Phi(a)&=q_\mathcal B\circ \Phi_{n}(a)=q_\mathcal B\circ q_{\mathcal B_{n}}^{-1}\circ h_n\circ q_{\mathcal A_n}(a)=\\ &=q_{\mathcal B}^{\mathcal B_n}\circ q_{\mathcal B_n}\circ q_{\mathcal B_n}^{-1}\circ h_n\circ q_{\mathcal A_n}(a)=\\ &=q_{\mathcal B}^{\mathcal B_n}\circ h_n\circ q_{\mathcal A_n}(a)=\varphi_n\circ q_{\mathcal A}^{\mathcal A_n}\circ q_{\mathcal A_n}(a)=\\ &=\varphi_n\circ q_\mathcal A(a)=\varphi_n(\{a\})=\{\varphi_n(a)\} \end{aligned} $$ is a singleton. So, there is a unique function $\varphi:\mathcal A\to\mathcal B$ making the following square commutative: $$\xymatrix{ X\ar[d]_{q_\mathcal A}\ar[r]^{\Phi}&X\ar[d]^{q_\mathcal B}\\ \mathcal A\ar[r]_{\varphi}&\mathcal B. }$$ Taking into account that the functions $\Phi$, $q_\mathcal B$ are continuous, and the function $q_\mathcal A$ is closed, we conclude that the function $\varphi$ is continuous. \end{proof} By analogy with the proofs of Claims~\ref{cl8.2} and \ref{cl8.3} we can prove \begin{claim} \begin{enumerate} \item For every point $x\in X$ the sequence $(\Phi_n^{-1}(x))_{n\in\omega}$ of compact subsets of $X$ converges in the hyperspace $(\exp(X),d_H)$ to some singleton $\Psi(x)\subset X$, \item the function $\Psi:X\to X\subset\exp(X)$, $\Psi:x\mapsto\Psi(x)$, is continuous, and \item the function $\Psi$ is $(\mathcal B,\mathcal A)$-factorizable, which means that the square $$\xymatrix{ X\ar[r]^{\Psi}\ar[d]_{q_{\mathcal B}}&X\ar[d]^{q_{\mathcal A}}\\ \mathcal B\ar[r]_{\psi}&\mathcal A } $$is commutative for some continuous function $\psi:\mathcal B\to\mathcal A$. 
\end{enumerate} \end{claim} Next, we show that the functions $\Phi$ and $\Psi$ are inverses of each other. \begin{claim} $\Phi\circ\Psi(x)=\lim_{n\to\infty}\Phi_n\circ \Phi_n^{-1}(x)$ for every $x\in X$. \end{claim} \begin{proof} Given any $\epsilon>0$, we need to find $m\in\mathbb N$ such that $d_H(\Phi\circ\Psi(x),\Phi_n\circ\Phi_n^{-1}(x))<\epsilon$ for all $n\ge m$. By the continuity of the map $\Phi$ at the singleton $\Psi(x)$, there is $\delta>0$ such that $\Phi\big(O_d(\Psi(x),\delta)\big)\subset O_d(\Phi\circ\Psi(x),\epsilon/2)$. Choose $m\in\mathbb N$ so large that $2^{-m+3}<\min\{\epsilon,\delta\}$ and take any $n\ge m$. By analogy with the estimate (\ref{lim}), we can prove that $d_H(\Psi(x),\Phi_n^{-1}(x))<2^{-n+2}$ and hence $\Phi_n^{-1}(x)\subset O_d(\Psi(x),2^{-n+2})\subset O_d(\Psi(x),\delta)$. The choice of $\delta$ guarantees that $\Phi\circ \Phi^{-1}_n(x)\subset \Phi\big(O_d(\Psi(x),\delta)\big)\subset O_d(\Phi\circ\Psi(x),\epsilon/2)$, which implies $d_H(\Phi\circ \Phi^{-1}_n(x),\Phi\circ \Psi(x))<\epsilon/2$. On the other hand, the estimate (\ref{lim}) implies that $$d_H(\Phi_n\circ\Phi^{-1}_n(x),\Phi\circ\Phi^{-1}_n(x))<2^{-n+2}<\epsilon/2$$ and hence $$d_H(\Phi_n\circ\Phi^{-1}_n(x),\Phi\circ\Psi(x))\le d_H(\Phi_n\circ\Phi^{-1}_n(x),\Phi\circ\Phi^{-1}_n(x))+d_H(\Phi\circ \Phi_n^{-1}(x),\Phi\circ\Psi(x))<\epsilon/2+\epsilon/2=\epsilon.$$ \end{proof} \begin{claim} $\Phi\circ \Psi(x)=\{x\}$ for all $x\in X$.
\end{claim} \begin{proof} For every $n\in\mathbb N$, the definition of the multivalued function $\Phi_n$ implies that $$x\in \Phi_n\circ\Phi_n^{-1}(x)\subset q_{\mathcal B_n}^{-1}\circ q_{\mathcal B_n}(x)=q_{\mathcal B_n}(x)\in\mathcal B_n.$$ The condition $(2_{n})$ of the inductive construction guarantees that $$\mathrm{diam}(\Phi_n\circ\Phi_n^{-1}(x))\le \mathrm{diam}\big( q_{\mathcal B_n}(x)\big)<2^{-n+1},$$ which implies that $\Phi_n\circ\Phi_n^{-1}(x)\subset O_d(x,2^{-n+1})$ and hence $\Phi\circ\Psi(x)=\lim_{n\to\infty}\Phi_n\circ\Phi_n^{-1}(x)=\{x\}$. \end{proof} By analogy we can prove that $\Psi\circ\Phi(x)=\{x\}$ for all $x\in X$. So, $\Phi\circ\Psi=\mathrm{id}_X=\Psi\circ\Phi$. \smallskip Now consider the commutative diagram $$ \xymatrix{ X\ar[r]^{\Phi}\ar[d]_{q_{\mathcal A}}&X\ar[r]^\Psi\ar[d]^{q_{\mathcal B}}&X\ar[d]^{q_\mathcal A}\\ \mathcal A\ar[r]_{\varphi}&\mathcal B\ar[r]_{\psi}&\mathcal A }$$ and observe that $\psi\circ\varphi:\mathcal A\to\mathcal A$ is the unique map such that $q_\mathcal A\circ\mathrm{id}_X=q_\mathcal A\circ(\Psi\circ\Phi)=(\psi\circ\varphi)\circ q_\mathcal A$, which implies that $\psi\circ\varphi=\mathrm{id}_\mathcal A$. By analogy we can prove that $\varphi\circ\psi=\mathrm{id}_\mathcal B$. This means that $\varphi:\mathcal A\to\mathcal B$ is a $(\mathcal A,\mathcal B)$-liftable homeomorphism with the inverse $\varphi^{-1}=\psi$. To finish the proof of Theorem~\ref{t5}, it remains to check that the homeomorphism $\varphi$ is $\mathcal W$-near to the homeomorphism $\varphi_0$. By the choice of the function $\varepsilon:\mathcal B\to[0,1]$ this will follow as soon as we check that $\rho(\varphi,\varphi_0)\le\varepsilon\circ\varphi_0$. By the density of the set $\mathcal A^\circ$ in $\mathcal A$ and the continuity of the functions $\varphi$, $\varphi_0$, and $\varepsilon$, it suffices to check that $\rho(\varphi|\mathcal A^\circ,\varphi_0|\mathcal A^\circ)\le \varepsilon\circ\varphi_0|\mathcal A^\circ$.
Given any point $a\in\mathcal A^\circ$, find the (unique) number $n\in\mathbb N$ with $a\in\mathcal A^\circ_{n-1}\setminus\mathcal A^\circ_n$. Then $\varphi(a)=\varphi_n(a)$ and hence $$\rho(\varphi(a),\varphi_0(a))=\rho(\varphi_n(a),\varphi_0(a))\le \sum_{k=1}^n\rho(\varphi_k(a),\varphi_{k-1}(a))\le\sum_{k=1}^n2^{-k}\varepsilon\circ\varphi_0(a)\le\varepsilon\circ\varphi_0(a)$$ by the conditions $(4_k)$, $k\in\mathbb N$, of the inductive construction. \end{proof} \section{Proof of Theorem~\ref{main2}}\label{s:main2} In this section we shall deduce Theorem~\ref{main2} from Theorems~\ref{t5} and \ref{t:dense}. Given a tame collection $\mathcal K$ of compact subsets of a strongly locally homogeneous completely metrizable space $X$ and two dense $\mathcal K$-tame decompositions $\mathcal A,\mathcal B$ of the space $X$, we need to show that the set of $(\mathcal A,\mathcal B)$-liftable homeomorphisms is dense in the homeomorphism space $\mathcal H(\mathcal A,\mathcal B)$. This will be done as soon as for each homeomorphism $f:\mathcal A\to\mathcal B$ and each open cover $\mathcal U$ of the decomposition space $\mathcal B=X/\mathcal B$ we find an $(\mathcal A,\mathcal B)$-liftable homeomorphism $\varphi:\mathcal A\to\mathcal B$ which is $\mathcal U$-near to $f$. By Lemma~\ref{l1}, the decomposition space $\mathcal B$ is metrizable and hence paracompact. So, we can find an open cover $\mathcal V$ of $\mathcal B$ such that $\mathcal{S}t(\mathcal V)\prec\mathcal U$. First we shall find a homeomorphism $g:\mathcal A\to\mathcal B$ such that $(g,f)\prec\mathcal V$ and $g(\mathcal A^\circ)=\mathcal B^\circ$. Fix any complete metric $d$ generating the topology of the completely metrizable space $X$. Since the decomposition $\mathcal B$ is vanishing, for every $\epsilon>0$ the subfamily $\mathcal B^\circ_\epsilon=\{B\in\mathcal B:\mathrm{diam}(B)>\epsilon\}$ is discrete in $X$ and hence $\mathcal B^\circ_\epsilon$ is a closed discrete subset in the decomposition space $\mathcal B$.
Since $\mathcal B^\circ=\bigcup_{n\in\omega}\mathcal B^\circ_{2^{-n}}$, we see that the non-degeneracy part $\mathcal B^\circ$ of the (dense) decomposition $\mathcal B$ is $\sigma$-discrete (and dense) in $\mathcal B$. By analogy we can show that the non-degeneracy part $\mathcal A^\circ$ of the decomposition $\mathcal A$ is dense and $\sigma$-discrete in the decomposition space $\mathcal A$. Then $f(\mathcal A^\circ)$ is a dense $\sigma$-discrete subset of the decomposition space $\mathcal B$. By Theorem~\ref{t:shrink}, the quotient map $q_\mathcal B:X\to X/\mathcal B$ is a strong near homeomorphism, which implies that the decomposition space $\mathcal B$ is homeomorphic to $X$ and hence is strongly locally homogeneous and completely metrizable. Now we can apply Theorem~\ref{t:dense} to find a homeomorphism $h:\mathcal B\to\mathcal B$ such that $(h,\mathrm{id})\prec\mathcal V$ and $h\big(f(\mathcal A^\circ)\big)=\mathcal B^\circ$. Then the homeomorphism $g=h\circ f:\mathcal A\to\mathcal B$ maps $\mathcal A^\circ$ onto $\mathcal B^\circ$ and is $\mathcal V$-near to $f$. Since $g(\mathcal A^\circ)=\mathcal B^\circ$, the homeomorphism $g:\mathcal A\to\mathcal B$ belongs to the space $\mathcal H^\circ(\mathcal A^\circ,\mathcal B^\circ)$. Applying Theorem~\ref{t5}, find a $(\mathcal A,\mathcal B)$-liftable homeomorphism $\varphi:\mathcal A\to\mathcal B$ such that $(\varphi,g)\prec\mathcal V$. It follows from $(\varphi,g)\prec\mathcal V$ and $(g,f)\prec\mathcal V$ that $(\varphi,f)\prec\mathcal{S}t(\mathcal V)\prec\mathcal U$. So, $\varphi:\mathcal A\to\mathcal B$ is the required $(\mathcal A,\mathcal B)$-liftable homeomorphism, which is $\mathcal U$-near to the homeomorphism $f$. \section{Existence of $\mathcal K$-tame decompositions}\label{s:exist} In this section we shall prove Theorem~\ref{t:exist}. Let $(X,d)$ be a metric space and $\mathcal K$ be a tame family of compact subsets of $X$, each containing more than one point.
Given a non-empty open subset $U\subset X$, we need to construct a $\mathcal K$-tame decomposition $\mathcal D$ of $X$ such that $\bigcup\mathcal D^\circ$ is a dense subset of $U$. By induction for every $n\in\omega$ we shall construct a discrete subfamily $\mathcal D_n\subset \mathcal K$ and for every $D\in\mathcal D_n$ an open neighborhood $U_n(D)\subset X$ of $D$, and a homeomorphism $h_{n,D}:X\to X$ such that the following conditions are satisfied: \begin{enumerate} \item $\mathcal D_n\supset\mathcal D_{n-1}$; \item $\bigcup\mathcal D_n\subset U$ and for each $u\in U$ there is a point $x\in\bigcup\mathcal D_n$ with $d(x,u)<2^{-n}$; \item $D\subset U_n(D)\subset U$ for every $D\in\mathcal D_n$; \item $U_n(D)\subset U_{n-1}(D)\cap O_d(D,2^{-n-1})$ for each $D\in\mathcal D_{n-1}$, and $\mathrm{diam} \big( U_n(D)\big)<2^{-n}$ for every $D\in\mathcal D_n\setminus\mathcal D_{n-1}$; \item the family $\big(U_n(D)\big)_{D\in\mathcal D_n}$ is discrete in $X$; \item for each $k<n$, $D\in\mathcal D_k$ and $D'\in \mathcal D_n\setminus \mathcal D_{n-1}$ either $\bar U_n(D')\cap \bar U_k(D)=\emptyset$ or else $U_n(D')\subset U_k(D)$ and $\mathrm{diam}\big(h_{k,D}(U_n(D'))\big)<2^{-n}$, and \item $h_{n,D}|X\setminus U_n(D)=\mathrm{id}$ and $\mathrm{diam}(h_{n,D}(D))<2^{-n}$ for each $D\in\mathcal D_n$. \end{enumerate} We start the inductive construction by letting $\mathcal D_{-1}=\emptyset$. Assume that for some $n\in\omega$ the families $\mathcal D_k$, neighborhoods $U_k(D)$, $D\in\mathcal D_k$, and homeomorphisms $h_{k,D}$, $D\in\mathcal D_k$, have been constructed for all $k<n$. The inductive assumption (5) implies that the union $B=\bigcup_{k<n}\bigcup_{D\in\mathcal D_k}\partial U_k(D)$ of boundaries of the open sets $U_k(D)$ is a closed nowhere dense subset in $X$. Consider the subset $V=U\setminus O_d(\bigcup\mathcal D_{n-1},2^{-n})$ and the dense subset $W=V\setminus B$ of $V$. 
Using Zorn's Lemma, find a maximal subset $S\subset W$, which is $2^{-n-1}$-{\em separated} in the sense that $d(x,y)\ge 2^{-n-1}$ for any distinct points $x,y\in S$. \begin{claim} For every point $v\in V$ there is a point $s\in S$ such that $d(s,v)<\frac34\cdot{2^{-n}}$. \end{claim} \begin{proof} Assume that $d(v,s)\ge \frac34\cdot {2^{-n}}$ for all $s\in S$. Then for any point $w\in O_d(v,2^{-n-2})\setminus B$ and each $s\in S$ we get $d(w,s)\ge d(v,s)-d(v,w)> \frac34{2^{-n}}-\frac14{2^{-n}}=2^{-n-1}$. Consequently, the set $S\cup\{w\}\subset W$ is $2^{-n-1}$-separated, which contradicts the maximality of $S$. \end{proof} For each point $s\in S$ choose a positive number $\varepsilon_s<2^{-n-3}$ such that for the open $\varepsilon_s$-ball $U_s=O_d(s,\varepsilon_s)$ and any $k<n$ and $D\in\mathcal D_k$ the following conditions hold: \begin{itemize} \item $U_s\subset W$; \item if $s\in U_k(D)$, then $\overline{U}_s\subset U_k(D)$ and $\mathrm{diam}(h_{k,D}(U_s))<2^{-n}$, and \item if $s\notin U_k(D)$, then $\overline{U}_s\cap \bar U_k(D)=\emptyset$. \end{itemize} By Definition~\ref{d:K-tame}, we can find in each ball $U_s$ a set $K_s\in\mathcal K$. Put $\mathcal D_n=\mathcal D_{n-1}\cup\{K_s:s\in S\}$. The choice of the set $S$ and the numbers $\varepsilon_s$, $s\in S$, guarantees that the family $\mathcal D_n$ is discrete in $X$ and satisfies the conditions (1) and (2) of the inductive construction. For each $D\in\mathcal D_n$ put $U_n(D)=U_s$ if $D=K_s$ for some $s\in S$ and $U_n(D)=O_d(D,2^{-n-1})\cap U_{n-1}(D)$ if $D\in\mathcal D_{n-1}$. It is easy to see that the family $\big(U_n(D)\big)_{D\in\mathcal D_n}$ satisfies the conditions (3)--(6) of the inductive construction. Since each set $D\in\mathcal D_n\subset\mathcal K$ is locally shrinkable, there is a homeomorphism $h_{n,D}:X\to X$ satisfying the condition (7) of the inductive construction. This completes the inductive step.
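Let us record, only for orientation, the standard observation behind the claim proved in the inductive step above: every maximal $\delta$-separated subset $S$ of a metric space $(Y,d)$ is automatically a $\delta$-net (the slack between $2^{-n-1}$ and $\frac34\cdot2^{-n}$ in the claim is spent on keeping the separated set inside the dense subset $W$). Indeed, if some point $y\in Y$ satisfied $d(y,S)\ge\delta$, then $$d(y,s)\ge\delta\mbox{ \ for all }s\in S,$$ so the set $S\cup\{y\}$ would still be $\delta$-separated, contradicting the maximality of $S$. Hence $d(y,S)<\delta$ for every $y\in Y$.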
After completing the inductive construction, we obtain a disjoint subfamily $\mathcal D_\omega=\bigcup_{n\in\omega}\mathcal D_n\subset\mathcal K$ inducing the decomposition $$\mathcal D=\mathcal D_\omega\cup\big\{\{x\}:x\in X\setminus\textstyle{\bigcup}\mathcal D_\omega\big\}$$ of $X$. Taking into account that the family $\mathcal K\supset\mathcal D_\omega$ does not contain singletons, we conclude that $\mathcal D^\circ=\mathcal D_\omega\subset\mathcal K$. The condition (2) of the inductive construction guarantees that the union $\bigcup\mathcal D^\circ=\bigcup_{n\in\omega}\mathcal D_n$ is dense in $U$. \begin{claim} The decomposition $\mathcal D$ is vanishing. \end{claim} \begin{proof} Given an open cover $\mathcal U$ of $X$, we need to check that the subfamily $$\mathcal D'=\{D\in\mathcal D:\forall U\in\mathcal U\;\;D\not\subset U\}$$ is discrete in $X$. This will follow as soon as for each point $x\in X$ we find a neighborhood $O_x\subset X$ of $x$ that meets at most one set $D\in\mathcal D'$. Find $n\in\omega$ such that the ball $O_d(x,2^{-n})$ is contained in some set $U\in\mathcal U$. We claim that the family $\mathcal D_x=\{D\in\mathcal D':D\cap O_d(x,2^{-n-1})\ne\emptyset\}$ lies in $\mathcal D_n$. Assume for a contradiction that the family $\mathcal D_x$ contains some set $D\in\mathcal D'\setminus\mathcal D_n$. Then $\mathrm{diam}(D)<2^{-n-1}$ by the condition (4) of the inductive construction. Taking into account that $D\cap O_d(x,2^{-n-1})\ne\emptyset$, we conclude that $D\subset O_d(x,2^{-n})\subset U$, which contradicts $D\in\mathcal D'$. So, $\mathcal D_x\subset\mathcal D_n$. Since the family $\mathcal D_n$ is discrete in $X$, the point $x$ has a neighborhood $O_x\subset O_d(x,2^{-n-1})$ that meets at most one set of the family $\mathcal D_n$. Then the neighborhood $O_x$ meets at most one set of the families $\mathcal D_x$ and $\mathcal D'$, thus showing that the family $\mathcal D'$ is discrete in $X$ and $\mathcal D$ is vanishing.
\end{proof} To complete the proof of Theorem~\ref{t:exist}, it remains to check that the decomposition $\mathcal D$ is strongly shrinkable. Given a $\mathcal D$-saturated open subset $W\subset X$, a $\mathcal D$-saturated open cover $\mathcal U$ of $W$, and an open cover $\mathcal V$ of $W$, we need to construct a homeomorphism $h:W\to W$ such that $(h,\mathrm{id})\prec\mathcal U$ and $\{h(D):D\in\mathcal D,\;D\subset W\}\prec\mathcal V$. \begin{claim}\label{cl:sv} The family $\mathcal D'=\{D\in\mathcal D:D\subset W,\;\;\forall V\in\mathcal V\;\;D\not\subset V\}$ is discrete in $W$. \end{claim} \begin{proof} Assuming that the disjoint family $\mathcal D'$ is not discrete in $W$, find a point $x\in W$ such that each neighborhood $O_x\subset W$ meets infinitely many sets of the family $\mathcal D'$. By the regularity of the metrizable space $X$, the point $x$ has a closed neighborhood $N_x\subset X$ such that $N_x\subset W$. Then the open cover $\mathcal V_X=\mathcal V\cup\{X\setminus N_x\}$ witnesses that the decomposition $\mathcal D$ is not vanishing in $X$, which is the desired contradiction. \end{proof} By Claim~\ref{cl:sv}, the family $\mathcal D'$ is discrete in $W$. Consequently, for each set $D\in\mathcal D'$ we can find an open neighborhood $O(D)\subset W$ such that the family $\{O(D)\}_{D\in\mathcal D'}$ is discrete in $W$. Since each set $D\in\mathcal D'$ is compact, we can find a number $n_D\in\mathbb N$ so large that \begin{itemize} \item $D\in \mathcal D_{n_D}$; \item $O_d(D,2^{-n_D})\subset O(D)\cap U$ for some $\mathcal D$-saturated open set $U\in\mathcal U$, and \item each subset $B\subset O_d(D,2^{-n_D})$ of diameter $\mathrm{diam}(B)<2^{-n_D}$ lies in some set $V\in\mathcal V$.
\end{itemize} Now consider the homeomorphism $h:W\to W$ defined by $$h(x)=\begin{cases}h_{n_D,D}(x)&\mbox{if $x\in U_{n_D}(D)$ for some $D\in\mathcal D'$}\\ x&\mbox{otherwise.} \end{cases}$$ The conditions (4) and (7) of the inductive construction and the choice of the numbers $n_D$, $D\in\mathcal D'$, guarantee that $h$ is a well-defined homeomorphism of $W$ with $(h,\mathrm{id}_W)\prec\mathcal U$. Next we show that for each set $K\in\mathcal D$ the image $h(K)$ lies in some set $V\in\mathcal V$. This is clear if $K$ is a singleton. So, assume that the set $K\in\mathcal D$ is not a singleton. If $K=D$ for some $D\in\mathcal D'$, then $\mathrm{diam}(h(K))=\mathrm{diam}(h(D))=\mathrm{diam}(h_{n_D,D}(D))<2^{-n_D}$ by the condition $(7)$ of the inductive construction and hence $h(D)\subset V$ for some set $V\in\mathcal V$ by the choice of the number $n_D$. Next, assume that $K\notin\mathcal D'$. Find the unique number $k\in\omega$ such that $K\in\mathcal D_k\setminus\mathcal D_{k-1}$. If $K\subset U_{n_D}(D)$ for some $D\in\mathcal D'$, then $k>n_{D}$ by the condition (5) of the inductive construction, and the set $h(K)=h_{n_D,D}(K)$ has diameter $\mathrm{diam}(h(K))<2^{-n_{D}}$ by the condition (6) of the inductive construction. If $K\not\subset U_{n_D}(D)$ for all $D\in\mathcal D'$, then $K$ is disjoint from the union $\bigcup_{D\in\mathcal D'}U_{n_{D}}(D)$ by the condition (6) of the inductive construction and then $h(K)=K\subset V$ for some $V\in\mathcal V$ by the definition of the family $\mathcal D'\ni K$. \section{Topological equivalence and universality of $\mathcal K$-spongy sets} In this section we shall derive from Corollary~\ref{c2.6} a general version of Theorem~\ref{main1} treating so-called $\mathcal K$-spongy sets. \begin{definition} Let $\mathcal K$ be a tame family of compact subsets of a topological space $X$ such that each set $K\in\mathcal K$ has non-empty interior $\mathrm{Int}(K)$ in $X$.
A subset $S\subset X$ is called {\em $\mathcal K$-spongy} if there is a dense $\mathcal K$-tame decomposition $\mathcal D$ of $X$ such that $X\setminus S=\bigcup\{\mathrm{Int}(D):D\in\mathcal D\}$. \end{definition} Theorem~\ref{main1} will be derived from the following more general theorem. \begin{theorem}\label{main1K} Let $X$ be a strongly locally homogeneous completely metrizable space, and $\mathcal K$ be a tame family of compact subsets of $X$ such that each set $K\in\mathcal K$ contains more than one point and has a non-empty interior in $X$. Then: \begin{enumerate} \item Each nowhere dense subset of $X$ lies in a $\mathcal K$-spongy subset of $X$; \item Any two $\mathcal K$-spongy subsets of $X$ are ambiently homeomorphic, and \item Any $\mathcal K$-spongy subset of $X$ is a universal nowhere dense subset in $X$. \end{enumerate} \end{theorem} \begin{proof} 1. Given a nowhere dense subset $A\subset X$, consider the open dense subset $W=X\setminus\bar A$, and using Theorem~\ref{t:exist}, find a $\mathcal K$-tame decomposition $\mathcal D$ of $X$ such that $\bigcup\mathcal D^\circ$ is a dense subset of $W$. Then $\mathcal D$ is a dense $\mathcal K$-tame decomposition and $S=X\setminus\bigcup_{D\in\mathcal D}\mathrm{Int}(D)$ is a $\mathcal K$-spongy set containing the nowhere dense set $A$. \smallskip 2. Given two $\mathcal K$-spongy sets $S$ and $S'$ in $X$, find dense $\mathcal K$-tame decompositions $\mathcal D$ and $\mathcal D'$ of $X$ such that $X\setminus S=\bigcup_{D\in\mathcal D}\mathrm{Int}(D)$ and $X\setminus S'=\bigcup_{D\in\mathcal D'}\mathrm{Int}(D)$. By Corollary~\ref{c2.6}, the decompositions $\mathcal D$ and $\mathcal D'$ are topologically equivalent. Consequently, there is a $(\mathcal D,\mathcal D')$-factorizable homeomorphism $\Phi:X\to X$, which maps $X\setminus S$ onto $X\setminus S'$ and witnesses that the $\mathcal K$-spongy sets $S$ and $S'$ are ambiently homeomorphic. \smallskip 3.
The third statement of Theorem~\ref{main1K} follows immediately from the first two statements of this theorem. \end{proof} \section{Spongy sets in Hilbert cube manifolds}\label{s:spongeQ} In this section we shall prove Theorem~\ref{t:spongeQ}. Given a spongy subset $S$ in a Hilbert cube manifold $M$, we need to prove that $S$ is a retract of $M$ and is homeomorphic to $M$. Let $d$ be any metric generating the topology of the space $M$. Let $\mathcal C$ be the family of connected components of the complement $M\setminus S$. Since $S$ is a spongy set, the closure $\bar C$ of each set $C\in\mathcal C$ is a tame ball in the Hilbert cube manifold $M$. This implies that the pair $(\bar C,\partial C)$ is homeomorphic to $(\mathbb I^\omega\times[0,1],\mathbb I^\omega\times\{1\})$. Here by $\partial C$ we denote the boundary of $C$ in $M$. So, we can choose a retraction $r_C:\bar C\to\partial C$ such that the preimage $r_C^{-1}(y)$ of each point $y\in \partial C$ is homeomorphic to the closed interval $\mathbb I=[0,1]$. Extend the retraction $r_C$ to a retraction $\bar r_C:M\to M\setminus C$ defined by $\bar r_C|\bar C=r_C$ and $\bar r_C|M\setminus C=\mathrm{id}$. The vanishing property of the family $\mathcal C$ guarantees that the map $r:M\to M\setminus\bigcup \mathcal C$ defined by $$r(x)=\begin{cases} r_C(x)&\mbox{if $x\in \bar C$ for some $C\in\mathcal C$,}\\ x&\mbox{otherwise} \end{cases} $$is a continuous retraction of $M$ onto the spongy set $S=M\setminus\bigcup\mathcal C$ such that the preimage of each point $y\in S$ is either a singleton or an arc. Being a retract of the Hilbert cube manifold $M$, the spongy set $S$ is a locally compact ANR. \begin{claim} The spongy set $S$ is a Hilbert cube manifold.
\end{claim} \begin{proof} According to the characterization theorem of Toru\'nczyk \cite{Tor80}, it suffices to show that for each $\epsilon>0$ and each continuous map $f:\mathbb I^\omega\times\{0,1\}\to S$ there is a continuous map $\tilde f:\mathbb I^\omega\times \{0,1\}\to S$ such that $d(\tilde f,f)<\epsilon$ and $\tilde f(\mathbb I^\omega\times\{0\})\cap \tilde f(\mathbb I^\omega\times\{1\})=\emptyset$. Since $M$ is an $\mathbb I^\omega$-manifold, by Theorem 18.2 of \cite{Chap}, the map $f:\mathbb I^\omega\times \{0,1\}\to S\subset M$ can be approximated by a map $g:\mathbb I^\omega\times \{0,1\}\to M$ such that $d(g,f)<\frac12\epsilon$ and $g(\mathbb I^\omega\times\{0\})\cap g(\mathbb I^\omega\times\{1\})=\emptyset$. Fix a positive real number $\delta<\epsilon$ such that $$\delta\le \mathrm{dist}\big(g(\mathbb I^\omega\times\{0\}),g(\mathbb I^\omega\times\{1\})\big)=\inf\big\{d(x,y):x\in g(\mathbb I^\omega\times\{0\}),\;\;y\in g(\mathbb I^\omega\times\{1\})\big\}.$$ The vanishing property of the family $\mathcal C$ guarantees that the subfamily $\mathcal C'=\{C\in\mathcal C:\mathrm{diam}(C)\ge\delta/5\}$ is discrete in $M$. By the collectionwise normality of $M$, for each set $C\in\mathcal C'$ its closure $\bar C$ has an open neighborhood $O(\bar C)\subset M$ such that the indexed family $\big(O(\bar C)\big)_{C\in\mathcal C'}$ is discrete in $M$. Since for each set $C\in\mathcal C'$ the closure $\bar C$ is a tame ball in $M$, we can additionally assume that the pair $(O(\bar C),\bar C)$ is homeomorphic to the pair $\big(\mathbb I^\omega\times[0,2),\mathbb I^\omega\times[0,1]\big)$.
\begin{claim}\label{cl10.2} For every $C\in\mathcal C'$ there is a map $g_C:\mathbb I^\omega\times\{0,1\}\to M\setminus C$ such that \begin{enumerate} \item $d(g_C,\bar r_C\circ g)<\delta/5$; \item $g_C|g^{-1}(M\setminus O(\bar C))= g|g^{-1}(M\setminus O(\bar C))$; \item $g_C(g^{-1}(\bar C))\subset\partial C$; \item $g_C(g^{-1}(O(\bar C)))\subset O(\bar C)$, and \item $g_C(\mathbb I^\omega\times\{0\})\cap g_C(\mathbb I^\omega\times\{1\})=\emptyset$. \end{enumerate} \end{claim} \begin{proof} Choose an open neighborhood $U(\bar C)$ of $\bar C$ in $M$ such that $\bar U(\bar C)\subset O(\bar C)$. Consider the closed subset $F_C=g^{-1}(\bar C)\subset\mathbb I^\omega\times\{0,1\}$, and its open neighborhoods $O(F_C)=g^{-1}(O(\bar C))$ and $U(F_C)=g^{-1}(U(\bar C))$. It follows from $\bar U(\bar C)\subset O(\bar C)$ that $\bar U(F_C)\subset O(F_C)$. Next, consider the map $\bar r_C\circ g|O(F_C):O(F_C)\to O(\bar C)\setminus C$. Since $O(\bar C)\setminus C$ is an absolute retract (homeomorphic to $\mathbb I^\omega\times[1,2)$), by Theorems 5.1.1 and 5.1.2 of \cite{Hu}, there is an open cover $\mathcal U_C$ of $O(\bar C)\setminus C$ such that any map $g':F_C\to O(\bar C)\setminus C$ with $(g',\bar r_C\circ g|F_C)\prec\mathcal U_C$ can be extended to a map $g'_C:O(F_C)\to O(\bar C)\setminus C$ such that $g'_C|O(F_C)\setminus U(F_C)=g|O(F_C)\setminus U(F_C)$ and $d(g'_C,g|O(F_C))<\delta/5$. Since the boundary $\partial C$ of the tame ball $\bar C$ in $M$ is homeomorphic to the Hilbert cube $\mathbb I^\omega$, by Theorem~8.1 of \cite{Chap}, the map $\bar r_C\circ g|F_C:F_C\to\partial C$ can be approximated by an injective map $g':F_C\to\partial C$ such that $(g',\bar r_C\circ g|F_C)\prec\mathcal U_C$. By the choice of the cover $\mathcal U_C$ the map $g'$ can be extended to a continuous map $g'_C:O(F_C)\to O(\bar C)\setminus C$ such that $g'_C|O(F_C)\setminus U(F_C)=g|O(F_C)\setminus U(F_C)$ and $d(g'_C,g|O(F_C))<\delta/5$.
Extend the map $g'_C$ to a continuous map $g_C:\mathbb I^\omega\times\{0,1\}\to M\setminus C$ such that $$g_C(x)=\begin{cases} g'_C(x)&\mbox{if $x\in O(F_C)$}\\ g(x)&\mbox{otherwise}. \end{cases} $$ It is easy to see that the map $g_C$ satisfies the conditions (1)--(5). \end{proof} Now define a map $\tilde g:\mathbb I^\omega\times\{0,1\}\to M$ by the formula $$\tilde g(x)=\begin{cases} g_C(x)&\mbox{if $x\in g^{-1}(O(\bar C))$ for some $C\in\mathcal C'$};\\ g(x)&\mbox{otherwise}. \end{cases} $$ Claim~\ref{cl10.2} implies that $d(\tilde g,g)<\delta/5$ and $\tilde g(\mathbb I^\omega\times\{0\})\cap \tilde g(\mathbb I^\omega\times\{1\})=\emptyset$. Finally, put $\tilde f=r\circ \tilde g:\mathbb I^\omega\times\{0,1\}\to S$. The choice of the family $\mathcal C'$ guarantees that $d(\tilde f,\tilde g)<\delta/5$ and hence $d(\tilde f,g)<\frac25\delta$ and $d(\tilde f,f)\le d(\tilde f,g)+d(g,f)<\frac25\delta+\frac12\epsilon<\epsilon$. The choice of $\delta\le\mathrm{dist}\big(g(\mathbb I^\omega{\times}\{0\}),g(\mathbb I^\omega{\times}\{1\})\big)$ guarantees that $$\mathrm{dist}\big(\tilde f(\mathbb I^\omega{\times}\{0\}),\tilde f(\mathbb I^\omega{\times}\{1\})\big)\ge\delta-2d(\tilde f,g)\ge\frac15\delta>0$$and thus $\tilde f(\mathbb I^\omega{\times}\{0\})\cap\tilde f(\mathbb I^\omega{\times}\{1\})=\emptyset$. By the characterization theorem of Toru\'nczyk \cite{Tor80}, the space $S$ is an $\mathbb I^\omega$-manifold. \end{proof} Since for each point $y\in S$ the preimage $r^{-1}(y)$ is either a singleton or an arc, the retraction $r:M\to S$ is a cell-like surjective map between Hilbert cube manifolds $M$ and $S$. By Corollary 43.2 of \cite{Chap}, the map $r$ is a near homeomorphism. So, the Hilbert cube manifolds $M$ and $S$ are homeomorphic.
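\begin{remark} The non-degenerate point preimages of the retraction $r$ can also be seen explicitly in model coordinates. The following formula is only an illustrative choice consistent with the construction above, not the unique one: identifying the pair $(\bar C,\partial C)$ with $\big(\mathbb I^\omega\times[0,1],\mathbb I^\omega\times\{1\}\big)$, one may take $$r_C(x,t)=(x,1)\mbox{ \ for \ }(x,t)\in\mathbb I^\omega\times[0,1],\quad\mbox{so that}\quad r_C^{-1}\big((y,1)\big)=\{y\}\times[0,1]\cong\mathbb I$$ for every $y\in\mathbb I^\omega$. Hence every non-degenerate point preimage of $r$ is an arc and, in particular, the retraction $r:M\to S$ is cell-like. \end{remark}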
\section{The family of tame balls in a manifold is tame} In this section we shall show that the family $\mathcal K$ of tame balls in an $\mathbb I^n$-manifold $X$ is tame and that each vanishing decomposition $\mathcal D\subset\mathcal K\cup\big\{\{x\}:x\in X\big\}$ of $X$ is $\mathcal K$-tame. \begin{theorem}\label{t:tame} Let $n\in\mathbb N\cup\{\omega\}$ and $X$ be an $\mathbb I^n$-manifold. Then: \begin{enumerate} \item The family $\mathcal K$ of tame balls in $X$ is tame. \item Each vanishing decomposition $\mathcal D\subset\mathcal K\cup\big\{\{x\}:x\in X\big\}$ of $X$ is strongly shrinkable and hence is $\mathcal K$-tame. \end{enumerate} \end{theorem} \begin{proof} (1) The definition of a tame ball implies that the family $\mathcal K$ is ambiently invariant. If $X$ is a finite dimensional manifold, then the local shift property of $\mathcal K$ follows from the Annulus Conjecture proved in \cite{Rado}, \cite{Moise}, \cite{Quinn,Edwards}, \cite{Kirby} for dimensions 2, 3, 4, and $\ge 5$, respectively. If $X$ is a Hilbert cube manifold, then the local shift property follows from Theorem 11.1 of \cite{Chap} on extensions of homeomorphisms between $Z$-sets of the Hilbert cube. The strong shrinkability of tame balls in finite dimensional manifolds was proved in Proposition 6.2 of \cite{Dav}. The strong shrinkability of tame balls in Hilbert cube manifolds follows from Theorem 2.4 of \cite{Cerin} and Corollary 43.2 of \cite{Chap}. The fact that each non-empty open subset of the manifold $X$ contains a tame ball is trivial if $X$ is finite dimensional and follows from Theorem 12.1 of \cite{Chap} if $X$ is a Hilbert cube manifold. \smallskip (2) Let $\mathcal D\subset\mathcal K\cup\big\{\{x\}:x\in X\big\}$ be a vanishing decomposition of the manifold $X$ into singletons and tame balls. If $X$ is finite dimensional, then each tame ball $D\in\mathcal D^\circ$ has a neighborhood homeomorphic to $\mathbb R^n$ and hence $D$ does not intersect the boundary $\partial X$ of the manifold $X$.
Then $\mathcal D$ is a vanishing decomposition of the $\mathbb R^n$-manifold $M=X\setminus \partial X$. By Theorem~8.7 of \cite{Dav}, it is strongly shrinkable. If $X$ is a Hilbert cube manifold, then the strong shrinkability of the decomposition $\mathcal D$ follows from Corollary 43.2 of \cite{Chap} (saying that each cell-like map between Hilbert cube manifolds is a near homeomorphism) and Theorem 5.3 of \cite{Cerin} implying that the decomposition space $X/\mathcal D$ is a Hilbert cube manifold. The latter fact can be alternatively deduced from Theorem~\ref{t:spongeQ} and Toru\'nczyk's Theorem 3' of \cite{Tor80} (saying that for a decomposition $\mathcal D$ of an $\mathbb I^\omega$-manifold $M$ the decomposition space $M/\mathcal D$ is an $\mathbb I^\omega$-manifold provided the union $\bigcup\mathcal D^\circ$ is contained in a countable union of $Z$-sets in $M$). \end{proof} \section{Proof of Theorem~\ref{main1}}\label{s:main1} Given an $\mathbb I^n$-manifold $X$ we need to prove the following statements: \begin{enumerate} \item Each nowhere dense subset of $X$ lies in a spongy subset of $X$; \item Any two spongy subsets of $X$ are ambiently homeomorphic, and \item Any spongy subset of $X$ is a universal nowhere dense subset in $X$. \end{enumerate} By Theorem~\ref{t:tame}, a subset $S\subset X$ is spongy if and only if $S$ is $\mathcal K$-spongy for the family $\mathcal K$ of tame balls in $X$. If $X$ is a Hilbert cube manifold, then $X$ is a strongly locally homogeneous completely metrizable space and the statements (1)--(3) follow immediately from Theorem~\ref{main1K}. The same argument works if $X$ is an $\mathbb R^n$-manifold for a finite $n$. It remains to consider the case of an $\mathbb I^n$-manifold $X$ that has non-empty boundary $\partial X$ (which consists of points $x\in X$ that do not have open neighborhoods homeomorphic to $\mathbb R^n$). It follows that $M=X\setminus\partial X$ is an $\mathbb R^n$-manifold.
Theorem~\ref{t:tame} guarantees that the family $\mathcal K(M)$ of tame balls in $M$ is tame. By Theorem~\ref{t:exist}, each nowhere dense subset of $X$ is contained in a $\mathcal K$-spongy subset of $X$, so the statement (1) holds for the $\mathbb I^n$-manifold $X$. To prove the statement (2), fix any two spongy subsets $S$ and $S'$ in $X$. Denote by $\mathcal C$ and $\mathcal C'$ the families of connected components of the complements $X\setminus S$ and $X\setminus S'$. By the definition of a spongy set, for each component $C\in\mathcal C$ its closure $\bar C$ is a tame ball in $X$ and hence $\bar C$ has an open neighborhood homeomorphic to $\mathbb R^n$. Then $\bar C\cap\partial X=\emptyset$ and hence $\bar C\subset M$. Now consider the dense decompositions $$ \begin{aligned} \mathcal A&=\{\bar C:C\in\mathcal C\}\cup\big\{\{x\}:x\in M\setminus\bigcup_{C\in\mathcal C}\bar C\big\} \mbox{ and }\\ \mathcal B&=\{\bar C:C\in\mathcal C'\}\cup\big\{\{x\}:x\in M\setminus\bigcup_{C\in\mathcal C'}\bar C\big\} \end{aligned} $$of the $\mathbb R^n$-manifold $M$. The vanishing property of the families $\mathcal C$ and $\mathcal C'$ implies that the decompositions $\mathcal A$ and $\mathcal B$ of the $\mathbb R^n$-manifold $M$ are vanishing and hence $\mathcal K(M)$-tame according to Theorem~\ref{t:tame}. Fix any metric $d$ generating the topology of the manifold $X$ and, by the paracompactness of $X$, find an open cover $\mathcal U$ of $X$ such that $\mathcal{S}t(x,\mathcal U)\subset O_d(x,d(x,\partial X)/2)$ for each point $x\in M$. By (the proof of) Corollary~\ref{c2.6}, there is a homeomorphism $\Phi:M\to M$ such that $\{\Phi(A):A\in\mathcal A\}=\mathcal B$ and for each point $x\in M$ there are sets $A\in\mathcal A$ and $B\in\mathcal B$ such that $x\in\mathcal{S}t(A,\mathcal U)$, $\Phi(x)\in\mathcal{S}t(B,\mathcal U)$ and $\mathcal{S}t(A,\mathcal U)\cap\mathcal{S}t(B,\mathcal U)\ne\emptyset$.
Extend the homeomorphism $\Phi:M\to M$ to a bijective map $\bar \Phi:X\to X$ such that $\bar\Phi|M=\Phi$ and $\bar\Phi|\partial X=\mathrm{id}$. We claim that the functions $\bar\Phi$ and $\bar\Phi^{-1}$ are continuous. It suffices to check the continuity of these functions at each point $x_0\in\partial X$. First we verify the continuity of the function $\bar\Phi$ at $x_0$. Given any $\epsilon>0$ we need to find $\delta>0$ such that $\bar\Phi(O_d(x_0,\delta))\subset O_d(x_0,\epsilon)$. Repeating the proof of Claim~\ref{cl:ed}, for the number $\epsilon$ we can find a positive real number $\eta\le\epsilon$ such that for each set $B\in \mathcal B$ with $\mathcal{S}t(B,\mathcal U)\cap O_d(x_0,\eta)\ne\emptyset$, we get $\mathcal{S}t(B,\mathcal U)\subset O_d(x_0,\epsilon)$. Next, by the same argument, for the number $\eta$ choose a positive real number $\delta\le\eta$ such that for each set $A\in \mathcal A$ with $\mathcal{S}t(A,\mathcal U)\cap O_d(x_0,\delta)\ne\emptyset$, we get $\mathcal{S}t(A,\mathcal U)\subset O_d(x_0,\eta)$. We claim that for each point $x\in X$ with $d(x,x_0)<\delta$, we get $d(\bar\Phi(x),x_0)<\epsilon$. This inequality trivially holds if $x\in\partial X$. So, we assume that $x\in M$. By the choice of the homeomorphism $\Phi$, there are sets $A\in\mathcal A$ and $B\in\mathcal B$ such that $x\in\mathcal{S}t(A,\mathcal U)$, $\Phi(x)\in\mathcal{S}t(B,\mathcal U)$, and the intersection $\mathcal{S}t(A,\mathcal U)\cap\mathcal{S}t(B,\mathcal U)$ contains some point $y\in X$. Taking into account that the set $\mathcal{S}t(A,\mathcal U)$ meets the ball $O_d(x_0,\delta)\ni x$, we conclude that $y\in\mathcal{S}t(A,\mathcal U)\subset O_d(x_0,\eta)$. Since the set $\mathcal{S}t(B,\mathcal U)\ni y$ meets the ball $O_d(x_0,\eta)$, the choice of the number $\eta$ guarantees that $\bar\Phi(x)=\Phi(x)\in\mathcal{S}t(B,\mathcal U)\subset O_d(x_0,\epsilon)$. This means that the map $\bar\Phi$ is continuous.
By analogy we can show that the inverse map $\bar\Phi^{-1}:X\to X$ is continuous too. So, $\bar\Phi:X\to X$ is a homeomorphism of $X$ such that $\bar\Phi(\bigcup_{C\in\mathcal C}\bar C)=\bar\Phi(\bigcup\mathcal A^\circ)=\bigcup\mathcal B^\circ=\bigcup_{C\in\mathcal C'}\bar C$. This implies that $\bar\Phi(\bigcup\mathcal C)=\bigcup\mathcal C'$ and $\bar\Phi(S)=\bar\Phi(X\setminus \bigcup\mathcal C)=X\setminus\bigcup\mathcal C'=S'$, witnessing that the spongy sets $S$ and $S'$ are ambiently homeomorphic in $X$. This completes the proof of the statement (2) of Theorem~\ref{main1}. The statement (3) follows immediately from the statements (1) and (2). \section{Topological equivalence of cellular decompositions of Hilbert cube manifolds}\label{s:cellular} In this section we shall apply Theorem~\ref{main2} to prove topological equivalence of certain cellular decompositions of Hilbert cube manifolds. But first we shall study the structure of tame families of compact subsets in more general topological spaces. The following proposition shows that for strongly locally homogeneous completely metrizable spaces Definition~\ref{d:K-tame} of a tame family can be simplified. \begin{proposition}\label{p4.1} Let $X$ be a strongly locally homogeneous completely metrizable space and $\mathcal K$ be an ambiently invariant family of locally shrinkable compact subsets of $X$, possessing the local shift property. Then the following conditions are equivalent: \begin{enumerate} \item $\bigcup\mathcal K=X$; \item $\bigcup\mathcal K$ is dense in $X$; \item each non-empty open set $U\subset X$ contains a set $K\in\mathcal K$, and \item for each point $x\in X$ and each open neighborhood $U\subset X$ of $x$ there is a set $K\in\mathcal K$ such that $x\in K\subset U$. \end{enumerate} \end{proposition} \begin{proof} It is clear that $(4)\Rightarrow(3)\Rightarrow(2)\Leftarrow (1)\Leftarrow(4)$. So, it remains to prove the implication $(2)\Rightarrow(4)$.
Given a point $x\in X$ and an open neighborhood $U_x\subset X$ of $x$, consider the orbit $O_x=\{h(x):h\in\mathcal H(X)\}$ of $x$ under the action of the homeomorphism group $\mathcal H(X)$ of $X$. The strong local homogeneity of $X$ implies that this orbit is open-and-closed in $X$. Since the union $\bigcup\mathcal K$ of the family $\mathcal K$ is dense in $X$, there exists a set $K'\in\mathcal K$ that intersects the orbit $O_x$. So, there exists a homeomorphism $f:X\to X$ such that $f(x)\in K'$. Then the compact set $K=f^{-1}(K')$ contains the point $x$ and belongs to the family $\mathcal K$ (by the ambient invariance of $\mathcal K$). Since the set $K\in\mathcal K$ is locally shrinkable, the quotient map $q_K:X\to X/K$ is a strong near homeomorphism by Theorem~\ref{t:shrink}, which implies that the space $X/K$ is homeomorphic to $X$ and hence is strongly locally homogeneous. Then for the point $y=q_K(K)\in X/K$ its orbit $O_y$ under the action of the homeomorphism group $\mathcal H(X/K)$ is closed-and-open in the quotient space $X/K$. Then $W=q_K^{-1}(O_y)$ is a closed-and-open neighborhood of $K$ in $X$. Since the quotient map $q_K:X\to X/K$ is a strong near homeomorphism, there is a homeomorphism $h_1:X\to X/K$ such that $h_1|X\setminus W=q_K|X\setminus W$ and hence $h_1(W)=O_y$. Since $h_1(x)\in O_y$, there is a homeomorphism $h_2:X/K\to X/K$ such that $h_2(h_1(x))=y$. Since the space $X/K$ is strongly locally homogeneous, for the neighborhood $U_y=h_2\circ h_1(U_x)\cap O_y$ of the point $y=q_K(K)$ there is a neighborhood $V_y\subset U_y$ such that for any point $z\in V_y$ there is a homeomorphism $h:X/K\to X/K$ such that $h(z)=y$ and $h(U_y)=U_y$. Since $q_K$ is a strong near homeomorphism, for the neighborhood $V_y$ of the point $y=q_K(K)$ there is a homeomorphism $h_3:X\to X/K$ such that $h_3(K)\subset V_y$.
By the choice of $V_y$, for the point $z=h_3(x)\in h_3(K)\subset V_y$ there is a homeomorphism $h_4:X/K\to X/K$ such that $h_4(z)=y$ and $h_4(U_y)=U_y$. Then the homeomorphism $h=h_1^{-1}\circ h_2^{-1}\circ h_4\circ h_3:X\to X$ has the properties: $$h(x)=h_1^{-1}\circ h_2^{-1}\circ h_4\circ h_3(x)=h_1^{-1}\circ h_2^{-1}\circ h_4(z)=h_1^{-1}\circ h_2^{-1}(y)=h_1^{-1}(h_1(x))=x$$and $$h(K)=h_1^{-1}\circ h_2^{-1}\circ h_4\circ h_3(K)\subset h_1^{-1}\circ h_2^{-1}\circ h_4(U_y)= h_1^{-1}\circ h_2^{-1}(U_y)\subset U_x.$$ Since the family $\mathcal K$ is ambiently invariant, the compact set $h(K)$ belongs to the tame family $\mathcal K$ and has the required properties: $x=h(x)\in h(K)\subset U_x$. \end{proof} \begin{proposition}\label{p4.2} If $\mathcal K$ is an ambiently invariant family of locally shrinkable compact subsets of a topologically homogeneous completely metrizable space $X$ and $\mathcal K$ has the local shift property, then any two sets $A,B\in\mathcal K$ are ambiently homeomorphic. \end{proposition} \begin{proof} By Theorem~\ref{t:shrink}, the quotient maps $q_A:X\to X/A$ and $q_B:X\to X/B$ are strong near homeomorphisms. This implies that the decomposition spaces $X/A$ and $X/B$ are homeomorphic to $X$ and hence are topologically homogeneous. So, we can choose a homeomorphism $f:X/A\to X/B$ that maps the singleton $\{A\}=q_A(A)\in X/A$ onto the singleton $\{B\}=q_B(B)\in X/B$. Since the quotient space $X/B$ is homeomorphic to $X$, we can consider the ambiently invariant family $\mathcal K(X/B)=\{h(K):K\in\mathcal K,\;h\in\mathcal H(X,X/B)\}$ of compact subsets of $X/B$ induced by the tame family $\mathcal K$. Since this family has the local shift property, the point $B\in X/B$ has a neighborhood $U\subset X/B$ such that for any two compact sets $K,K'\in\mathcal K(X/B)$ in $U$ there is a homeomorphism $h:X/B\to X/B$ such that $h(K)=K'$.
Since the quotient maps $q_B:X\to X/B$ and $q_A:X\to X/A$ are strong near homeomorphisms, there are homeomorphisms $h_B:X\to X/B$ and $h_A:X\to X/A$ such that $h_B(B)\subset U$ and $h_A(A)\subset f^{-1}(U)$. Then the compact sets $K=f\circ h_A(A)$ and $K'=h_B(B)$ belong to the family $\mathcal K(X/B)$ and lie in the open set $U\subset X/B$. By the choice of $U$, there is a homeomorphism $h:X/B\to X/B$ such that $h(K)=K'$. Now we see that the homeomorphism $h^{-1}_B\circ h\circ f\circ h_A:X\to X$ maps $A$ onto $B$, and hence the sets $A$ and $B$ are ambiently homeomorphic. \end{proof} Now we consider three shape properties of subsets. A compact subset $K$ of a topological space $X$ will be called \begin{itemize} \item {\em point-like} if for each closed neighborhood $N\subset X$ of $K$ the complement $N\setminus K$ is homeomorphic to the complement $N\setminus\{x\}$ for some interior point $x\in \mathrm{Int}(N)$ of $N$; \item {\em cell-like} if for each neighborhood $U$ of $K$ in $X$ the set $K$ is contractible in $U$; \item {\em cellular} if for each neighborhood $U$ of $K$ in $X$ there is a neighborhood $V\subset U$ of $K$ homeomorphic to $$\begin{cases} \mathbb R^n&\mbox{if $n=\dim(X)$ is finite,}\\ \mathbb I^\omega\times [0,1)&\mbox{if $\dim(X)$ is infinite}. \end{cases} $$\end{itemize} If each singleton $\{x\}\subset X$ of a paracompact topological space is cellular, then $X$ is a manifold modeled on the Hilbert cube $\mathbb I^\omega$ or a Euclidean space $\mathbb R^n$, where $n=\dim(X)$. Each cellular subset in an $\mathbb I^n$-manifold is cell-like but the converse is not true even for $\mathbb R^n$-manifolds, as shown by Whitehead's example; see Example 9.7 of \cite{Dav}. On the other hand, cellularity is equivalent to point-likeness, as shown by the following characterization, whose finite dimensional case was proved in \cite[Proposition 2]{Dav} and \cite{ChriOsb}, and whose infinite dimensional case was proved in \cite{Cerin}.
\begin{proposition}\label{p13.3} Let $X$ be a manifold modeled on a space $E\in\{\mathbb I^\omega,\mathbb R^n:n\in\mathbb N\}$. For a compact subset $K$ of $X$ the following conditions are equivalent: \begin{enumerate} \item $K$ is point-like; \item $K$ is cellular; \item for each neighborhood $U\subset X$ of $K$ there is a tame ball $V\subset U$ that contains $K$; \item $K$ is locally shrinkable, and \item the quotient map $q_K:X\to X/K$ is a strong near homeomorphism. \end{enumerate} \end{proposition} We recall that a topological space $X$ is called {\em locally contractible} if for each point $x\in X$ and each neighborhood $U\subset X$ of $x$ there is another neighborhood $V\subset U$ of $x$, which is contractible in $U$. \begin{proposition}\label{p13.4} If $\mathcal K$ is a tame family of compact subsets of a metrizable topological space $X$, then each compact set $K\in\mathcal K$ is \begin{enumerate} \item point-like in $X$ provided $X$ is completely metrizable; \item cell-like in $X$ provided $X$ is locally contractible, and \item cellular in $X$ provided $X$ is a manifold modeled on a space $E\in\{\mathbb I^\omega,\mathbb R^n:n\in\mathbb N\}$. \end{enumerate} \end{proposition} \begin{proof} Fix a compact set $K\in\mathcal K$ and a neighborhood $U$ of $K$ in $X$. By Definition~\ref{d:K-tame}, the set $K$ is locally shrinkable. (1) If $X$ is completely metrizable, then by Theorem~\ref{t:shrink}, the quotient map $q_K$ is a strong near homeomorphism. Consequently, there exists a homeomorphism $f:X\to X/K$ such that $f|X\setminus U=q_K|X\setminus U$. Consider the point $K\in X/K$ and its image $x=f^{-1}(K)\in U$ under the inverse homeomorphism $f^{-1}:X/K\to X$. It follows that $h=f^{-1}\circ q_K|\bar U\setminus K:\bar U\setminus K\to \bar U\setminus\{x\}$ is a homeomorphism, proving that the set $K$ is point-like in $X$. \smallskip (2) If the space $X$ is locally contractible, then the locally shrinkable subset $K\subset X$ is cell-like by Theorem~3.5 of \cite{Dav}.
\smallskip (3) The third statement follows immediately from Proposition~\ref{p13.3}. \end{proof} Propositions~\ref{p4.2} and \ref{p13.4} imply that each tame family $\mathcal K$ of compact subsets of a topologically homogeneous $\mathbb I^n$-manifold consists of pairwise ambiently homeomorphic cellular subsets and hence $\mathcal K=\{h(K_0):h\in\mathcal H(X)\}$ for some cellular subset $K_0\subset X$. Now we are going to prove the converse statement: for each cellular subset $K_0$ of a topologically homogeneous Hilbert cube manifold $X$ the family $\mathcal K=\{h(K_0):h\in\mathcal H(X)\}$ is tame. \begin{theorem}\label{t4.7} A family $\mathcal K$ of compact subsets of a topologically homogeneous Hilbert cube manifold $X$ is tame if and only if $\mathcal K=\{h(K_0):h\in \mathcal H(X)\}$ for some cellular compact subset $K_0\subset X$. \end{theorem} \begin{proof} The ``only if'' part follows from Propositions~\ref{p4.2} and \ref{p13.4}. To prove the ``if'' part, assume that $\mathcal K=\{h(K_0):h\in \mathcal H(X)\}$ for some cellular compact subset $K_0\subset X$. It is clear that the family $\mathcal K$ defined in this way is ambiently invariant and $\bigcup\mathcal K=X$ is dense in $X$. Since topologically homogeneous manifolds are strongly locally homogeneous, Proposition~\ref{p4.1} implies that each non-empty open subset of $X$ contains a set $K\in\mathcal K$. By Proposition~\ref{p13.3}, each cellular subset of $X$ is locally shrinkable. It remains to show that $\mathcal K$ has the local shift property. Given a point $x\in X$ and a neighborhood $O_x\subset X$ of $x$, we need to find a neighborhood $U_x\subset X$ such that for any sets $K,K'\in\mathcal K$ in $U_x$ there is a homeomorphism $h:X\to X$ such that $h(K)=K'$ and $h|X\setminus O_x=\mathrm{id}$. By Theorem~12.1 of \cite{Chap}, the point $x$ of the Hilbert cube manifold $X$ has a neighborhood $U_x\subset O_x$ homeomorphic to $\mathbb I^\omega\times[0,1)$.
We claim that for any two compact subsets $K_1,K_2\in\mathcal K$ in $U_x$ there is a homeomorphism $h:X\to X$ such that $h(K_1)=K_2$ and $h|X\setminus O_x=\mathrm{id}$. For every $i\in\{1,2\}$ fix a homeomorphism $h_i$ of $X$ such that $h_i(K_0)=K_i$. The set $K_0$, being cellular in $X$, lies in the interior of a tame ball $B_0\subset X$ such that $B_0\subset h_1^{-1}(U_x)\cap h_2^{-1}(U_x)$. Then $B_1=h_1(B_0)$ and $B_2=h_2(B_0)$ are tame balls in $U_x$ and $h_{12}=h_2\circ h_1^{-1}:X\to X$ is a homeomorphism such that $h_{12}(K_1)=K_2$ and $h_{12}(\partial B_1)=\partial B_2$. Since $U_x$ is homeomorphic to $\mathbb I^\omega\times[0,1)$, the union $B_1\cup B_2$ lies in the interior of some tame ball $B$ in $U_x$. Being tame, the ball $B$ is homeomorphic to the Hilbert cube $\mathbb I^\omega$ and its boundary $\partial B$ in $X$ is also homeomorphic to the Hilbert cube $\mathbb I^\omega$. Moreover, $\partial B$ is a $Z$-set in $B$ (which means that the identity map $\mathrm{id}:B\to B$ can be uniformly approximated by maps $B\to B\setminus\partial B$). For the same reason, for every $i\in\{1,2\}$ the boundary $\partial B_i$ of the tame cube $B_i$ is homeomorphic to $\mathbb I^\omega$ and is a $Z$-set both in $B_i$ and in the complement $B\setminus\mathrm{Int}(B_i)$. Moreover, since the boundary $\partial B_i$ is a retract of the tame ball $B_i$, the complement $B\setminus \mathrm{Int}(B_i)$ is a retract of the tame ball $B$ and hence $B\setminus\mathrm{Int}(B_i)$ is homeomorphic to the Hilbert cube $\mathbb I^\omega$, being a compact contractible $\mathbb I^\omega$-manifold; see \cite[22.1]{Chap}. By Theorem 11.1 of \cite{Chap}, the homeomorphism $h_{12}|\partial B_1\cup \mathrm{id}|\partial B:\partial B_1\cup \partial B\to\partial B_2\cup\partial B$ can be extended to a homeomorphism $\bar h_{12}:B\setminus\mathrm{Int}(B_1)\to B\setminus\mathrm{Int}(B_2)$ such that $\bar h_{12}|\partial B_1=h_{12}|\partial B_1$ and $\bar h_{12}|\partial B=\mathrm{id}$.
Then the homeomorphism $h:X\to X$ defined by $h|B_1=h_{12}|B_1$, $h|B\setminus\mathrm{Int}(B_1)=\bar h_{12}$, and $h|X\setminus\mathrm{Int}(B)=\mathrm{id}$ has the required properties: $h(K_1)=K_2$ and $h|X\setminus O_x=\mathrm{id}$. \end{proof} A decomposition $\mathcal D$ of an $\mathbb I^n$-manifold $X$ will be called {\em cellular} if each set $D\in\mathcal D$ is cellular in $X$. Theorem~\ref{t4.7} and Corollary~\ref{c2.6} imply the following corollaries. \begin{corollary}\label{c4.8} Two cellular dense vanishing strongly shrinkable decompositions $\mathcal A,\mathcal B$ of a Hilbert cube manifold $X$ are topologically equivalent if any two sets $A\in\mathcal A^\circ$ and $B\in\mathcal B^\circ$ are ambiently homeomorphic. \end{corollary} \begin{corollary}\label{c4.9} Two cellular dense vanishing decompositions $\mathcal A,\mathcal B$ of a topologically homogeneous Hilbert cube manifold $X$ are topologically equivalent if any two sets $A\in\mathcal A^\circ$ and $B\in\mathcal B^\circ$ are homeomorphic $Z$-sets in $X$. \end{corollary} \begin{proof} By Theorem 3' of \cite{Tor80}, the decomposition space $X/\mathcal A$ is a Hilbert cube manifold and by Corollary 43.2 of \cite{Chap}, the quotient map $q_\mathcal A:X\to X/\mathcal A$ is a near homeomorphism. By Theorem~\ref{t:shrink}, the decomposition $\mathcal A$ is strongly shrinkable. Next, we show that any two sets $A\in\mathcal A^\circ$ and $B\in\mathcal B^\circ$ are ambiently homeomorphic in $X$. By our assumption, $A$ and $B$ are homeomorphic cellular $Z$-sets in $X$. Then there is a homeomorphism $h:A\to B$. Being cellular, the compact sets $A,B$ are connected. Let $X_A$ and $X_B$ be the connected components of $X$ that contain the sets $A,B$, respectively. Since the space $X$ is topologically homogeneous, there is a homeomorphism $f:X\to X$ such that $f(X_B)=X_A$.
By Theorem 15.3 of \cite{Dav}, the maps $i_A:A\to X_A$ and $f^{-1}\circ h:A\to X_A$ are homotopic (being homotopic to constant maps into the path-connected space $X_A$). Since $A$ and $f^{-1}\circ h(A)=f^{-1}(B)$ are $Z$-sets in $X_A$, Theorem 19.4 of \cite{Chap} yields a homeomorphism $\Phi:X\to X$ such that $\Phi|A=f^{-1}\circ h|A$. Then $f\circ\Phi:(X,A)\to (X,B)$ is a homeomorphism of the pairs, witnessing that the sets $A,B$ are ambiently homeomorphic in $X$. By Corollary~\ref{c4.8}, the decompositions $\mathcal A$ and $\mathcal B$ are topologically equivalent. \end{proof} \begin{remark} Corollary~\ref{c4.8} cannot be generalized to finite dimensional $\mathbb R^n$-manifolds. Denote by $\mathcal H_+(\mathbb R^2)$ the subgroup of the homeomorphism group $\mathcal H(\mathbb R^2)$ consisting of orientation preserving homeomorphisms of the real plane $\mathbb R^2$. Take any cellular subset $K_0\subset \mathbb R^2$ such that $K_0\ne h(K_0)$ for each orientation reversing homeomorphism $h\in\mathcal H(\mathbb R^2)\setminus\mathcal H_+(\mathbb R^2)$. Such a set $K_0$ can look as shown in the picture: \begin{picture}(100,40)(-100,-10) \put(20,0){\line(1,0){60}} \put(40,0){\line(0,1){20}} \put(60,0){\line(0,1){20}} \put(60,20){\circle*{5}} \end{picture} Consider the families $\mathcal K_+=\{h(K_0):h\in\mathcal H_+(\mathbb R^2)\}$ and $\mathcal K_-=\{h(K_0):h\in\mathcal H(\mathbb R^2)\setminus\mathcal H_+(\mathbb R^2)\}$.
Repeating the proof of Theorem~\ref{t:exist}, it is possible to construct dense vanishing strongly shrinkable decompositions $\mathcal A$ and $\mathcal B$ of the plane $\mathbb R^2$ such that $$\mathcal A^\circ\subset\mathcal K_+,\;\;\mathcal B^\circ\subset\mathcal K_+\cup\mathcal K_- \mbox{ \ and \ }\mathcal B^\circ\cap\mathcal K_+\ne\emptyset\ne\mathcal B^\circ\cap\mathcal K_-.$$ It can be shown that the decompositions $\mathcal A$ and $\mathcal B$ are not topologically equivalent, in spite of the fact that any two sets $A\in\mathcal A^\circ$ and $B\in\mathcal B^\circ$ are ambiently homeomorphic. \end{remark} \section*{Acknowledgements} This research was supported by the Slovenian Research Agency grants P1-0292-0101 and J1-4144-0101. The first author has been partially financed by NCN means granted by decision DEC-2011/01/B/ST1/01439.
https://arxiv.org/abs/1801.09003
Preperiodic points for quadratic polynomials over cyclotomic quadratic fields
Given a number field $K$ and a polynomial $f(z) \in K[z]$ of degree at least 2, one can construct a finite directed graph $G(f,K)$ whose vertices are the $K$-rational preperiodic points for $f$, with an edge $\alpha \to \beta$ if and only if $f(\alpha) = \beta$. Restricting to quadratic polynomials, the dynamical uniform boundedness conjecture of Morton and Silverman suggests that for a given number field $K$, there should only be finitely many isomorphism classes of directed graphs that arise in this way. Poonen has given a conjecturally complete classification of all such directed graphs over $\mathbb{Q}$, while recent work of the author, Faber, and Krumm has provided a detailed study of this question for all quadratic extensions of $\mathbb{Q}$. In this article, we give a conjecturally complete classification like Poonen's, but over the cyclotomic quadratic fields $\mathbb{Q}(\sqrt{-1})$ and $\mathbb{Q}(\sqrt{-3})$. The main tools we use are dynamical modular curves and results concerning quadratic points on curves.
\section{Introduction} Let $K$ be a number field, and let $\phi \in K(z)$ be a rational map of degree $d \ge 2$. For each integer $n \ge 0$, we let $\phi^n$ denote the $n$-fold composition of $\phi$; that is, $\phi^0$ is the identity, and $\phi^n = \phi \circ \phi^{n-1}$ for each $n \ge 1$. We say that $\alpha \in \mathbb{P}^1(K)$ is {\bf preperiodic} for $\phi$ if there exist integers $m \ge 0$ and $n \ge 1$ such that $\phi^{m + n}(\alpha) = \phi^m(\alpha)$; in this case, the minimal such $m$ and $n$ are called the {\bf preperiod} and {\bf eventual period}, respectively, and we refer to the pair $(m,n)$ as the {\bf (preperiodic) portrait} of $\alpha$. If the preperiod is 0, we say that $\alpha$ is {\bf periodic} with {\bf period} $n$. We set \[ \operatorname{PrePer}(\phi,K) := \{\alpha \in \mathbb{P}^1(K) : \alpha \text{ is preperiodic for } \phi\}. \] We denote by $G(\phi,K)$ the functional graph associated to the restriction of $\phi$ to $\operatorname{PrePer}(\phi,K)$; that is, the vertices of $G(\phi,K)$ are the $K$-rational preperiodic points for $\phi$, and there is a directed edge from $\alpha$ to $\beta$ if and only if $\phi(\alpha) = \beta$. Northcott proved in \cite[Thm. 3]{northcott:1950} that $\operatorname{PrePer}(\phi,K)$ is a finite set. Based on the analogy between preperiodic points for rational maps and torsion points on elliptic curves, Morton and Silverman have conjectured a dynamical analogue of the strong uniform boundedness conjecture (now Merel's theorem \cite{merel:1996}) for elliptic curves: \begin{conj}[{\cite[p. 100]{morton/silverman:1994}}]\label{conj:ubc} Fix $n \ge 1$ and $d \ge 2$. There is a constant $C(n,d)$ such that for any number field $K$ of absolute degree $n$, and for any rational map $\phi \in K(z)$ of degree $d$, \[ \#\operatorname{PrePer}(\phi,K) \le C(n,d). \] \end{conj} It is currently unknown whether such a constant exists for any pair of integers $(n,d)$, even if one restricts to polynomial maps. 
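The portrait $(m,n)$ of a given point is easy to compute in examples by direct iteration with exact arithmetic. The following sketch (ours, not part of the paper; the helper name \texttt{portrait} is our own) illustrates the definitions for $f(z) = z^2 - 1$, whose critical point $0$ satisfies $0 \to -1 \to 0$:

```python
from fractions import Fraction

def portrait(f, alpha, max_steps=100):
    """Return the preperiodic portrait (m, n) of alpha under f,
    or None if no repetition is detected within max_steps."""
    seen = {}              # value -> first index at which it appears
    x = alpha
    for i in range(max_steps):
        if x in seen:
            m = seen[x]    # preperiod: first index of the repeated value
            n = i - seen[x]  # eventual period: gap until the repeat
            return (m, n)
        seen[x] = i
        x = f(x)
    return None

# f(z) = z^2 - 1: the orbit of 0 is 0 -> -1 -> 0, so 0 is periodic with
# portrait (0, 2), while 1 -> 0 enters that cycle after one step.
f = lambda z: z * z - Fraction(1)
assert portrait(f, Fraction(0)) == (0, 2)
assert portrait(f, Fraction(1)) == (1, 2)
```

Exact `Fraction` arithmetic matters here: floating point would eventually misclassify points whose orbits only approximately repeat.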
The simplest polynomial case is $(n,d) = (1,2)$, that is, quadratic polynomials over $\mathbb{Q}$. The difficulty in proving Conjecture~\ref{conj:ubc} in this case is bounding the possible {\it periods} of rational periodic points. It is shown in \cite{walde/russo:1994} that for each period $n \in \{1,2,3\}$, there are infinitely many quadratic polynomials (up to the appropriate notion of equivalence) with a rational point of period $n$. On the other hand, there are no quadratic polynomials with rational points of period $4$ (\cite[Thm. 4]{morton:1998}), period $5$ (\cite[Thm. 1]{flynn/poonen/schaefer:1997}), or---assuming standard conjectures on $L$-series for the Jacobian of a certain curve of genus $4$---period $6$ (\cite[Thm. 7]{stoll:2008}). It was conjectured in \cite{flynn/poonen/schaefer:1997} that no quadratic polynomial over $\mathbb{Q}$ could have a rational point of period greater than 3, and Poonen has shown that this conjecture would imply uniform boundedness for quadratic polynomials over $\mathbb{Q}$, analogous to Mazur's theorem \cite{mazur:1977} for rational torsion points on elliptic curves. \begin{thm}[{\cite[Cor. 1]{poonen:1998}}]\label{thm:poonen} Let $f \in \mathbb{Q}[z]$ be a quadratic polynomial. If $f$ does not admit rational points of period greater than 3, then $G(f,\mathbb{Q})$ is isomorphic to one of the following twelve directed graphs, which appear in Appendix~\ref{app:all_data}: \begin{center} \rm0, 2(1), 3(1,1), 3(2), 4(1,1), 4(2), 5(1,1)a, 6(1,1), 6(2), 6(3), 8(2,1,1), 8(3). \end{center} In particular, $\#\operatorname{PrePer}(f,\mathbb{Q}) \le 9$. \end{thm} \begin{rem}\label{rem:infinity} The bound from Theorem~\ref{thm:poonen} is 9, while the largest graph appearing in the classification has eight vertices. This is due to the fact that, following the convention of \cite{poonen:1998,doyle/faber/krumm:2014,doyle:2018quad}, we omit the fixed point at infinity when describing the preperiodic graph for a polynomial map. 
\end{rem} A reasonable next step in studying preperiodic points for quadratic polynomials is to give a classification like Poonen's, but over quadratic extensions of $\mathbb{Q}$. Conjecture~\ref{conj:ubc} suggests that there should be only finitely many (isomorphism classes of) graphs $G(f,K)$ with $K$ a quadratic field and $f \in K[z]$ quadratic, so there are two natural directions to pursue: \renewcommand{\labelenumi}{(\arabic{enumi})} \begin{enumerate} \item Classify those graphs $G$ that may be realized as $G(f,K)$ for some quadratic field $K$ and some quadratic $f \in K[z]$. \item Fix a collection of quadratic fields $K$, and for each classify those graphs $G$ that may be realized as $G(f,K)$ for some quadratic $f \in K[z]$. \end{enumerate} \renewcommand{\labelenumi}{(\Alph{enumi})} A classification as in (1) would be a dynamical analogue of the corresponding result by Kamienny and Kenku-Momose for quadratic torsion points on elliptic curves; the three articles \cite{doyle/faber/krumm:2014,doyle:2018quad,doyle/krumm/wetherell} give progress in this direction. In the current article, we consider direction (2), giving a conditional classification like Theorem~\ref{thm:poonen} for the quadratic cyclotomic fields $\mathbb{Q}(i)$ and $\mathbb{Q}(\omega)$, where $i = \sqrt{-1}$ and $\omega = (-1 + \sqrt{-3})/2$ are primitive fourth and third roots of unity, respectively. We now state our main result, which should be viewed as a conditional analogue of classification results due to Najman \cite{najman:2011, najman:2010} for torsion on elliptic curves defined over these two quadratic fields. \begin{thm}\label{thm:main}\mbox{} \begin{enumerate} \item Let $K = \mathbb{Q}(i)$, and let $f \in K[z]$ be a quadratic polynomial that does not admit $K$-rational points of period greater than 5. Then $G(f,K)$ is isomorphic to one of the following fourteen graphs: \begin{center} \rm 0, 3(2), 4(1,1), 4(2), 5(1,1)a/b, 5(2)a, 6(1,1), 6(2), 6(2,1), 6(3), 8(2,1,1), 8(3), 10(2,1,1)a. 
\end{center} \item Let $K = \mathbb{Q}(\omega)$, and let $f \in K[z]$ be a quadratic polynomial that does not admit $K$-rational points of period greater than 5. Then $G(f,K)$ is isomorphic to one of the following thirteen graphs: \begin{center} \rm 0, 3(2), 4(1), 4(1,1), 4(2), 5(1,1)a, 6(1,1), 6(2), 6(3), 7(2,1,1)a, 8(2)a, 8(2,1,1), 8(3). \end{center} \end{enumerate} \end{thm} The proofs of all of the results referred to above for torsion points on elliptic curves relied heavily on the use of modular curves, which parametrize elliptic curves (up to isomorphism) together with marked points of a given order. Perhaps unsurprisingly, much of the corresponding work on preperiodic points for quadratic polynomials has relied on {\it dynamical modular curves,} which parametrize quadratic polynomials (up to dynamical equivalence) together with marked preperiodic points. In particular, Theorem~\ref{thm:poonen} required finding the full set of rational points on several dynamical modular curves, just as the articles \cite{doyle/faber/krumm:2014, doyle:2018quad, doyle/krumm/wetherell} involve finding {\it quadratic} points on such curves; this is the strategy we employ in the current article. We give a brief overview of dynamical modular curves in \textsection\ref{sec:dmc}, and we collect in \textsection\ref{sec:quad} several results that will be useful for determining the set of quadratic points on such curves. Section~\ref{sec:genus_gonality} gives some results on dynamical modular curves of low genus, which are later used to prove our main theorem. Finally, \textsection\ref{sec:specific} contains the proof of Theorem~\ref{thm:main}, which has been split into two statements, Propositions~\ref{prop:main_i} and \ref{prop:main_omega}. We also include in an appendix the complete determination of the set of $\mathbb{Q}(i)$-rational points on the dynamical modular curve $X_0(5)$, which parametrizes quadratic polynomials together with a marked periodic cycle of length $5$.
The calculation involves a slight variant of the usual Chabauty-Coleman method. \subsection*{A remark on computations} Nearly all of the required computations were carried out using Magma \cite{magma}. We have included, as an ancillary file to this article's arXiv submission, two files containing Magma code and output. The first contains the calculations for the main body of the paper, and the second includes the computations for the appendix. \subsection*{Acknowledgments} This article began as part of my dissertation at the University of Georgia, though several improvements have been made since that time. I thank my advisor, Bob Rumely, for many insightful conversations and for his guidance during my time at Georgia. I thank Pete Clark for his help with some of the background on algebraic curves and Bjorn Poonen for helpful discussions. I would also like to thank the anonymous referee for the motivation to prove Theorem~\ref{thm:5cycle_cyclotomic} over $\mathbb{Q}(i)$ (the previous draft only included the statement for $\mathbb{Q}(\omega)$), and I am indebted to Joseph Wetherell for introducing me to the method used in \textsection \ref{sec:ChabCalc} to do so. \section{Dynamical modular curves for quadratic polynomial maps}\label{sec:dmc} In this section, $K$ will be a number field. Given a finite directed graph $G$, we describe a \emph{dynamical modular curve} whose $K$-rational points parametrize quadratic maps $f \in K[z]$, up to equivalence, together with a collection of marked points that ``generate" a subgraph of $G(f,K)$ isomorphic to $G$. Throughout this article, we will use the notation $X_1(\cdot)$, $Y_1(\cdot)$, and $U_1(\cdot)$ exclusively to represent various dynamical modular curves. When we need to refer to a \emph{classical} modular curve, which parametrizes elliptic curves together with certain level structure, we will heed the advice of \cite[p. 163]{silverman:2007} and write $X^{\Ell}_1(\cdot)$ and $Y^{\Ell}_1(\cdot)$ to avoid confusion. 
We first describe what we mean by dynamical equivalence: We say that two polynomial maps $f,g \in K[z]$ are {\bf linearly conjugate} if there exists a polynomial $\ell(z) = az + b$, with $a,b \in K$ and $a \ne 0$, such that $g = f^\ell := \ell^{-1} \circ f \circ \ell$. Linear conjugation is the appropriate notion of equivalence dynamically since conjugation commutes with iteration. In particular, $\ell$ induces a graph isomorphism $G(g,K) \overset{\sim}{\longrightarrow} G(f,K)$. It is well known that every quadratic polynomial over a field $K$ of characteristic 0 is linearly conjugate to a polynomial of the form \[ f_c(z) := z^2 + c \] for a unique $c \in K$, so it suffices to restrict our attention to maps of this form. In this paper, we give only an informal description of these dynamical modular curves. A more formal treatment appears in \cite{doyle:2019}. \subsection{Dynatomic curves}\label{sub:per_dyn} Let $N$ be any positive integer. If $x$ is a point of period $N$ for $f_c$, then we have $f_c^N(x) - x = 0$. However, this equation is also satisfied if $x$ has period equal to a proper divisor of $N$. One therefore defines the \textbf{$N$th dynatomic polynomial} to be \[ \Phi_N(x,c) := \prod_{n \mid N} \left(f_c^n(x) - x\right)^{\mu(N/n)} \in \mathbb{Z}[x,c], \] where $\mu$ is the M\"{o}bius function. The dynatomic polynomials provide a natural factorization \begin{equation}\label{eq:Ncycle} f_c^N(x) - x = \prod_{n \mid N} \Phi_n(x,c) \end{equation} for all $N \in \mathbb{N}$---see \cite[p. 571]{morton/vivaldi:1995}. If $(x,c) \in K^2$ satisfies $\Phi_N(x,c) = 0$, we say that $x$ has \textbf{formal period} $N$ for $f_c$. Every point of exact period $N$ has formal period $N$, but in some cases a point of formal period $N$ may have exact period $n$ a proper divisor of $N$. The fact that $\Phi_N(x,c)$ is a polynomial is shown in \cite[Thm. 4.5]{silverman:2007}. 
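The dynatomic polynomials and the factorization \eqref{eq:Ncycle} can be checked symbolically in small cases. A quick sketch (ours; the paper's own computations use Magma) with SymPy:

```python
from sympy import symbols, divisors, cancel, expand, Poly
from sympy.ntheory import mobius

x, c = symbols('x c')

def f_iter(N):
    """The N-fold iterate f_c^N(x) of f_c(z) = z^2 + c."""
    z = x
    for _ in range(N):
        z = z**2 + c
    return z

def dynatomic(N):
    """The N-th dynatomic polynomial Phi_N(x, c), via the Mobius product."""
    num, den = 1, 1
    for n in divisors(N):
        if mobius(N // n) == 1:
            num *= f_iter(n) - x
        elif mobius(N // n) == -1:
            den *= f_iter(n) - x
    return expand(cancel(num / den))   # the quotient is a polynomial

# Phi_1 = f_c(x) - x and Phi_2 = x^2 + x + c + 1 (degree 2 in x).
assert expand(dynatomic(1) - (x**2 - x + c)) == 0
assert expand(dynatomic(2) - (x**2 + x + c + 1)) == 0

# Verify f_c^4(x) - x = Phi_1 * Phi_2 * Phi_4; here Phi_4 has degree
# 2^4 - 2^2 = 12 in x, i.e. twelve points of formal period 4.
prod = 1
for n in divisors(4):
    prod *= dynatomic(n)
assert expand(prod - (f_iter(4) - x)) == 0
assert Poly(dynatomic(4), x).degree() == 12
```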
If we define\label{eq:r(N)} \begin{align*} d(N) &:= \deg_x \Phi_N(x,c) = \sum_{n \mid N} \mu(N/n) 2^n,\\ r(N) &:= \frac{d(N)}{N}, \end{align*} then $d(N)$ (resp., $r(N)$) denotes the number of points (resp., cycles) of period $N$ for a generic quadratic polynomial map. Since $\Phi_N(x,c)$ has coefficients in $\mathbb{Z}$, the equation $\Phi_N(x,c) = 0$ defines an affine plane curve $Y_1(N)$ over $K$, and this curve was shown to be irreducible over $\mathbb{C}$ by Bousch \cite[\textsection 3, Thm. 1]{bousch:1992}. We define $U_1(N)$ to be the Zariski open subset of $Y_1(N)$ on which $\Phi_n(x,c) \ne 0$ for each proper divisor $n$ of $N$. In other words, $(x,c)$ lies on $Y_1(N)$ (resp., $U_1(N)$) if and only if $x$ has formal (resp., exact) period $N$ for $f_c$. We denote by $X_1(N)$ the normalization of the projective closure of $Y_1(N)$. Given a collection of pairwise distinct positive integers $N_1,\ldots,N_m$, we let $Y_1(N_1,\ldots,N_m)$ be the curve given as the subscheme of $\mathbb{A}^{m+1}$ defined by \[ \Phi_{N_1}(x_1,c) = \cdots = \Phi_{N_m}(x_m,c) = 0, \] and we let $X_1(N_1,\ldots,N_m)$ be the normalization of the projective closure of $Y_1(N_1,\ldots,N_m)$. \subsection{Generalized dynatomic curves} More generally, suppose $\alpha$ has preperiodic portrait $(M,N)$ for $f_c$ for some $M \ge 0$ and $N \ge 1$. In this case, we have $f_c^{M+N}(\alpha) - f_c^M(\alpha) = 0$; however, this equation is satisfied whenever $\alpha$ has portrait $(m,n)$ for some $0 \le m \le M$ and $n \mid N$. Therefore, for a pair of positive integers $M,N$, we define the \textbf{generalized dynatomic polynomial} \[ \Phi_{M,N}(x,c) := \frac{\Phi_N(f_c^M(x), c)}{\Phi_N(f_c^{M-1}(x), c)} \in \mathbb{Z}[x,c], \] and we extend this definition to $M = 0$ by setting $\Phi_{0,N} := \Phi_N$. That $\Phi_{M,N}$ is a polynomial is proven in \cite[Thm. 1]{hutz:2015}.
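Hutz's polynomiality result can be observed symbolically in the smallest case. A quick sketch (ours) computing $\Phi_{1,1} = x^2 + x + c$, whose roots are, for generic $c$, the non-fixed preimages of the fixed points:

```python
from sympy import symbols, cancel, expand, fraction, together

x, c = symbols('x c')

def f_iter(N):
    """The N-fold iterate f_c^N(x) of f_c(z) = z^2 + c."""
    z = x
    for _ in range(N):
        z = z**2 + c
    return z

def phi1(expr):
    """Phi_1 evaluated at expr: Phi_1(y, c) = f_c(y) - y."""
    return (expr**2 + c) - expr

# Phi_{1,1} = Phi_1(f_c(x), c) / Phi_1(x, c), per the definition above.
val = cancel(phi1(f_iter(1)) / phi1(x))

# The quotient really is a polynomial, namely x^2 + x + c:
# if beta^2 + beta + c = 0 then f_c(beta) = -beta, which is fixed.
assert fraction(together(val))[1] == 1     # denominator 1: a polynomial
assert expand(val - (x**2 + x + c)) == 0
```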
The generalized dynatomic polynomials give a natural factorization \[ f_c^{M+N}(x) - f_c^M(x) = \prod_{m=0}^M\prod_{n \mid N} \Phi_{m,n}(x,c) \] for all $M \ge 0$ and $N \ge 1$. If $\Phi_{M,N}(\alpha,c) = 0$, we say that $\alpha$ has \textbf{formal (preperiodic) portrait} $(M,N)$ for $f_c$. Just as in the periodic case, every point of exact portrait $(M,N)$ has formal portrait $(M,N)$, but the converse is not true in general. Let $Y_1((M,N))$ be the affine plane curve defined by $\Phi_{M,N}(x,c) = 0$. That these curves are irreducible over $\mathbb{C}$ follows from the work of Bousch \cite[p. 67]{bousch:1992}. We define $U_1((M,N))$ to be the Zariski open subset of $Y_1((M,N))$ given by \begin{equation}\label{eq:phiMNconditions} \Phi_{m,n}(x,c) \ne 0 \text{\ for all $m < M$ and $n < N$ (with $n \mid N$)}, \end{equation} and we denote by $X_1((M,N))$ the normalization of the projective closure of $Y_1((M,N))$. Note that a point $(\alpha,c)$ lies on $Y_1((M,N))$ (resp., $U_1((M,N))$) if and only if $\alpha$ has formal portrait (resp., exact portrait) $(M,N)$ for $f_c$. \subsection{Admissible graphs}\label{sub:admissible} Given a number field $K$ and a parameter $c \in K$, the graph $G(f_c,K)$ necessarily has a great deal of structure and symmetry dictated by the dynamics of quadratic polynomial maps. For this reason, we restrict our attention to finite directed graphs $G$ that possess this additional structure. \begin{defn}\label{defn:admissible} A finite directed graph $G$ is \textbf{admissible} if it has the following two properties: \renewcommand{\labelenumi}{(\alph{enumi})} \begin{enumerate} \item Every vertex of $G$ has out-degree 1 and in-degree either 0 or 2. \item For each $N \ge 2$, $G$ contains at most $r(N)$ $N$-cycles. (See the definition of $r(N)$ on page~\pageref{eq:r(N)}.)
\end{enumerate} We say that $G$ is \textbf{strongly admissible} if it satisfies the following additional condition: \begin{enumerate} \setcounter{enumi}{2} \item If $G$ contains a fixed point (i.e., a vertex with a self-loop), then $G$ contains exactly two such vertices. \end{enumerate} \renewcommand{\labelenumi}{(\Alph{enumi})} \end{defn} Strong admissibility is a property shared by nearly all preperiodic graphs $G(f_c,K)$. Condition (a) can only fail if there is a vertex of in-degree $1$, which happens if and only if the critical point $0$ is preperiodic. Condition (c) can only fail if $f_c$ has exactly one fixed point, and it is well known that only $c = 1/4$ has this property. To summarize, we have the following: \begin{lem}[{\cite[Lem. 2.4 \& Cor. 2.6]{doyle:2019}}]\label{lem:admissible} Let $K$ be a number field, and let $c \in K$. The graph $G(f_c,K)$ is \textit{admissible} if and only if $0 \notin \operatorname{PrePer}(f_c,K)$ and is \textit{strongly admissible} if and only if $0 \notin \operatorname{PrePer}(f_c,K)$ and $c \ne 1/4$. In particular, the set of parameters $c \in K$ for which $G(f_c,K)$ is not strongly admissible is finite. \end{lem} Given an admissible graph $G$, we define the \textbf{cycle structure} of $G$ to be the nondecreasing list of lengths of disjoint cycles occurring in $G$. We will say that $G$ \emph{contains} the cycle structure $\tau = (N_1,\ldots,N_m)$ if $\tau$ is a subsequence of the cycle structure of $G$; that is, if $G$ has an admissible subgraph with cycle structure $\tau$. Before discussing the dynamical modular curves associated to admissible graphs, we require one more definition. \begin{defn}\label{defn:generate} Let $G$ be an admissible graph, and let $\{P_1,\ldots,P_n\}$ be a set of vertices of $G$. Let $H$ be the smallest admissible subgraph of $G$ containing all of the vertices $P_1,\ldots,P_n$. We say that $\{P_1,\ldots,P_n\}$ is a \textbf{generating set} for $H$. 
If every generating set for $G$ contains at least $n$ vertices, then we call $\{P_1,\ldots,P_n\}$ a \textbf{minimal generating set} for $G$. \end{defn} We now describe dynamical modular curves $X_1(G)$ associated to admissible graphs $G$, generalizing those curves $X_1(N)$ and $X_1((M,N))$ defined above. The curves $X_1(G)$ are formally defined in \cite{doyle:2019}, where it is shown that $X_1(G)$ is always an irreducible curve in characteristic $0$. For the purposes of this article, however, we will be content to describe $X_1(G)$ as a curve over $\mathbb{C}$ as follows: Let $\{P_1,\ldots,P_n\}$ be a minimal generating set for $G$. For a subfield $L \subseteq \mathbb{C}$, define\footnote{For simplicity, the definition of $U_1(G)$ given here is slightly more restrictive than our definition of $U_1(G)$ in \cite{doyle:2019}. The difference is that in \cite{doyle:2019}, there were finitely many additional points $(\alpha_1,\ldots,\alpha_n,c)$ on $U_1(G)$ for which $0$ is in the orbit of $\alpha_i$ under $f_c$ for some $i \in \{1,\ldots,n\}$, which forces inadmissibility of the associated preperiodic graph by Lemma~\ref{lem:admissible}.} $U_1(G)(L)$ to be the set of all tuples $(\alpha_1,\ldots,\alpha_n,c) \in \mathbb{A}^{n+1}(L)$ such that $\{\alpha_1,\ldots,\alpha_n\}$ generates a subgraph of $G(f_c,L)$ isomorphic to $G$ via an identification $P_i \longmapsto \alpha_i$. Let $Y_1(G)$ be the Zariski closure in $\mathbb{A}^{n+1}$ of the set $U_1(G)(\mathbb{C})$, and let $X_1(G)$ be the normalization of the projective closure of $Y_1(G)$. Note that if $G$ is generated by a single vertex of portrait $(M,N)$, then $X_1(G) = X_1((M,N))$, and similarly for $Y_1(G)$ and $U_1(G)$. The assignment $G \longmapsto X_1(G)$ is (contravariant) functorial: If $G$ and $H$ are admissible graphs with $H \subseteq G$, there is a nonconstant map $X_1(G) \longrightarrow X_1(H)$ that commutes with projection onto the $c$-line; see \cite[Prop. 3.3]{doyle:2019}.
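The degree conditions in part (a) of Definition~\ref{defn:admissible} are easy to test mechanically. A small sketch (ours; the helper names and the parameter choices are illustrative): for $c = -3/4$ the four rational preperiodic points form the graph 4(1,1) and condition (a) holds, while for $c = -1$ the critical point $0$ is periodic and condition (a) fails, in line with Lemma~\ref{lem:admissible}:

```python
from fractions import Fraction

def check_degrees(edges):
    """Condition (a): every vertex has out-degree 1 and in-degree 0 or 2."""
    verts = {w for e in edges for w in e}
    outd = {w: 0 for w in verts}
    ind = {w: 0 for w in verts}
    for u, v in edges:
        outd[u] += 1
        ind[v] += 1
    return all(outd[w] == 1 for w in verts) and all(ind[w] in (0, 2) for w in verts)

def preperiodic_edges(c, candidates, steps=10):
    """Edges alpha -> f_c(alpha) among the candidates whose orbit repeats."""
    f = lambda z: z * z + c
    prep = []
    for a in candidates:
        orbit, z = [], a
        for _ in range(steps):
            if z in orbit:
                prep.append(a)
                break
            orbit.append(z)
            z = f(z)
    return [(a, f(a)) for a in prep]

half = Fraction(1, 2)
# c = -3/4: fixed points 3/2 and -1/2, plus their extra preimages -3/2, 1/2.
assert check_degrees(preperiodic_edges(Fraction(-3, 4),
                     [3 * half, -3 * half, half, -half])) is True
# c = -1: the cycle 0 <-> -1 with 1 -> 0 gives -1 in-degree 1, so (a) fails.
assert check_degrees(preperiodic_edges(Fraction(-1),
                     [Fraction(1), Fraction(0), Fraction(-1)])) is False
```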
\begin{figure} \centering \begin{overpic}[scale=.5]{graph1and2_3} \put(44,0){$\alpha$} \put(-7,52){$\beta$} \put(110,0){$\alpha'$} \end{overpic} \caption{An admissible graph $G$} \label{fig:U1_ex} \end{figure} We end this section with an example: The graph $G$ in Figure~\ref{fig:U1_ex} is strongly admissible and is minimally generated by the vertices $\alpha$, $\alpha'$, and $\beta$. Therefore, for a number field $K$ we have \begin{align*} U_1(G)(K) = \{(\alpha,\alpha',\beta,c) \in \mathbb{A}^4(K) : & \text{ $\alpha$ and $\alpha'$ are distinct fixed points for $f_c$;}\\ & \text{ $\beta$ has portrait $(3,2)$ for $f_c$; and}\\ & \text{ $0$ is not in the orbit of $\alpha$, $\alpha'$, or $\beta$ under $f_c$}\}. \end{align*} \section{Quadratic points on algebraic curves}\label{sec:quad} Let $K$ be a number field, and let $X$ be an algebraic curve defined over $K$. We say that $P \in X(\overline{K})$ is \textbf{quadratic over $K$} if the field of definition of $P$, denoted $K(P)$, is a quadratic extension of $K$. We will mostly be working in the situation that $K = \mathbb{Q}$, in which case we will simply say that $P$ is {\bf quadratic}. If $X$ is an affine curve, then the \textbf{genus} of $X$, denoted $g(X)$, will be understood to be the geometric genus of $X$; i.e., the genus of the nonsingular projective curve birational to $X$. A {\bf Weierstrass point} on $X$ is a point $P \in X$ for which there exists a rational map of degree at most $g(X)$ vanishing only at $P$. (Equivalently, a Weierstrass point is one for which there exists a nonconstant rational map of degree at most $g(X)$ which is regular away from $P$). \subsection{Hyperelliptic curves}\label{sec:hyp} Several of the curves we consider in this paper are hyperelliptic. Recall that a hyperelliptic curve of genus $g$ defined over $K$ has an affine model of the form $y^2 = f(x)$ for some polynomial $f(x) \in K[x]$ of degree $2g + 1$ or $2g + 2$ with no repeated roots. 
We denote by $\iota$ the hyperelliptic involution on $X$: \[ \iota(x,y) = (x,-y), \] and we say that $\iota P$ is the {\bf hyperelliptic conjugate} of $P$. If $\deg f$ is odd, then $X$ has a single point at infinity, which is necessarily $K$-rational; if $\deg f$ is even, then $X$ has two points at infinity, and they are $K$-rational if and only if the leading coefficient of $f$ is a square in $K$. This can be seen by covering $X$ by two affine patches: The first is given by the equation $y^2 = f(x)$, and the second is given by $v^2 = u^{2g + 2}f(1/u)$, with the identification $x = 1/u$, $y = v/u^{g+1}$. If $\deg f$ is even, and if $c$ is the leading coefficient of $f$, then we take $\infty^\pm$ to be the two points on $X$ corresponding to $(u,v) = (0,\pm \sqrt{c})$. If $\deg f$ is odd, then we take $\infty^+ = \infty^- = \infty$ to be the unique point at infinity, given by $(u,v) = (0,0)$. In either case, we have $\iota \infty^\pm = \infty^\mp$. Weierstrass points on hyperelliptic curves are simple to describe: they are precisely the ramification points for the double cover of $\mathbb{P}^1$ given by $(x,y) \longmapsto x$; equivalently, they are the fixed points of the hyperelliptic involution. More concretely, $P$ is a Weierstrass point on $X$ if and only if $P = (x,0)$ or $\deg f$ is odd and $P$ is the point at infinity. In particular, every hyperelliptic curve of genus $g$ has $2g+2$ Weierstrass points. We now focus on curves of genus $2$; much of what follows may be found in \cite{cassels/flynn:1996}. Let $X$ be a curve of genus $2$ defined over a number field $K$. Since every genus $2$ curve $X$ is hyperelliptic, $X$ has an affine model of the form $y^2 = f(x)$ with $f(x) \in K[x]$ of degree $d \in \{5,6\}$ having no repeated roots. Let $J$ be the Jacobian of $X$. 
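The two-patch description of the points at infinity above is easy to verify symbolically. A small sketch (ours; the sextic $f$ below is an arbitrary choice with square leading coefficient) confirming that on the second patch the fibre over $u = 0$ is $v^2 = (\text{leading coefficient of } f)$:

```python
from sympy import symbols, expand

u, x = symbols('u x')
g = 2                          # genus: y^2 = f(x) with deg f = 2g + 2 = 6
f = 9*x**6 + x**3 + x + 5      # arbitrary sextic; leading coefficient 9 = 3^2

# Second affine patch: v^2 = u^(2g+2) f(1/u), with x = 1/u, y = v/u^(g+1).
patch_rhs = expand(u**(2*g + 2) * f.subs(x, 1/u))
assert expand(patch_rhs - (9 + u**3 + u**5 + 5*u**6)) == 0

# At u = 0 the equation becomes v^2 = 9, so the two points at infinity are
# (u, v) = (0, +-3); they are rational precisely because 9 is a square in Q.
assert patch_rhs.subs(u, 0) == 9
```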
If we assume that $X$ has a $K$-rational point (this is guaranteed if $d = 5$), then we may identify the Mordell-Weil group $J(K)$ with the group of $K$-rational degree $0$ divisors of $X$ modulo linear equivalence (see \cite[p. 39]{cassels/flynn:1996} and \cite[p. 168]{milne:1986}, for example). For $n \in \mathbb{N}$, we set \[ J(K)[n] := \{\mathcal{P} \in J(K) : n\mathcal{P} = \mathcal{O}\}, \] and we denote by $J(K)_{{\operatorname{tors}}} = \bigcup_{n \in \mathbb{N}} J(K)[n]$ the full torsion subgroup of $J(K)$. The divisor $\infty^+ + \infty^-$ is a $K$-rational divisor on $X$, and the divisor class $\mathcal{K}$ containing $\infty^+ + \infty^-$ is the canonical divisor class. By the Riemann-Roch theorem, every degree $2$ divisor class $\mathcal{D}$ contains an effective divisor, and this effective divisor is unique if and only if $\mathcal{D} \ne \mathcal{K}$. The effective divisors in the canonical class $\mathcal{K}$ are precisely those of the form $P + \iota P$. We may therefore represent every nontrivial element of $J(K)$ \emph{uniquely} by a divisor of the form $P + Q - \infty^+ - \infty^-$, up to reordering of $P$ and $Q$, where $P + Q$ is a $K$-rational divisor on $X$ (either $P$ and $Q$ are both $K$-rational points on $X$ or $P$ and $Q$ are Galois conjugate quadratic points on $X$). We therefore represent points of $J(K)$ as unordered pairs $\{P,Q\}$, with the identification \[ \{P,Q\} = \left[ P + Q - \infty^+ - \infty^- \right], \] where $[D]$ denotes the divisor class of the divisor $D$. Note that $\{P,Q\} = \mathcal{O}$ if and only if $[P + Q] = \mathcal{K}$; that is, if and only if $P$ and $Q$ are hyperelliptic conjugates. It follows that $-\{P,Q\} = \{\iota P, \iota Q\}$, since \[ \{P,Q\} + \{\iota P, \iota Q\} = \{P, \iota P\} + \{Q, \iota Q\} = \mathcal{O}. 
\] Note that we have a morphism $X \longrightarrow J$ obtained by mapping $P \longmapsto \{P,P\}$; unlike the standard Albanese map $P \longmapsto [P - P_0]$ (for a fixed base point $P_0$), this map is not an embedding, since every Weierstrass point maps to $\mathcal{O}$. The following statement regarding 2-torsion on genus 2 curves is well known: \begin{lem}\label{lem:2torsion} Let $X$ be a genus $2$ curve defined over a number field $K$, let $J$ be its Jacobian, and let $\{P_1,\ldots,P_6\}$ be the set of Weierstrass points on $X$. Then the set of points on $J$ of exact order $2$ is given by \[ J(\overline{K})[2] \setminus \{\mathcal{O}\} = \{\{P_i,P_j\} : i \ne j\}. \] \end{lem} \begin{proof} The Jacobian $J$ has 15 points of order 2, and that is precisely the number of unordered pairs $\{P_i,P_j\}$ with $i \ne j$, so it suffices to show that each $\{P_i,P_j\}$ is a nonzero 2-torsion point. That $\{P_i,P_j\} \ne \mathcal{O}$ follows from the fact that $\iota P_i = P_i \ne P_j$, and $\{P_i,P_j\}$ has order $2$ since \[ -\{P_i,P_j\} = \{\iota P_i, \iota P_j\} = \{P_i, P_j\}. \] \end{proof} \subsection{Points on curves and their Jacobians after base extension} Let $A$ be an abelian variety over a field $K$. If $L$ is an extension of $K$, then certainly $A(K) \subseteq A(L)$; in this section we give a sufficient condition for equality to hold. We then give a consequence for rational points on a curve $X$ over a number field $K$ upon base change to a finite Galois extension $L/K$. If $G$ is a finitely generated abelian group and $H \subseteq G$ is a subgroup, then the \textbf{saturation} of $H$ in $G$ is the largest subgroup $H' \subseteq G$ containing $H$ such that $[H':H] < \infty$. We will say that $H$ is \textbf{saturated} in $G$ if the saturation of $H$ in $G$ is $H$ itself. \begin{prop}\label{prop:saturation} Let $K$ be a field, and let $A$ be an abelian variety defined over $K$ such that $A(K)$ is finitely generated. 
Let $L$ be a finite Galois extension of $K$ of degree $n := [L:K]$. Suppose that $A(L)_{{\operatorname{tors}}} = A(K)_{{\operatorname{tors}}}$ and $A(K)[n] = 0$. Then $A(K)$ is saturated in $A(L)$. \end{prop} \begin{rem} If $K$ is finitely generated over its prime subfield, for example, then N\'{e}ron's generalization \cite{neron:1952} of the Mordell-Weil theorem states that $A(K)$ is necessarily finitely generated. \end{rem} \begin{proof}[Proof of Proposition~\ref{prop:saturation}] Let $\mathcal{P} \in A(L)$ lie in the saturation of $A(K)$; we claim that $\mathcal{P} \in A(K)$. Let $\sigma \in G := \operatorname{Gal}(L/K)$ be arbitrary. Since $\mathcal{P}$ lies in the saturation of $A(K)$, there exists some $\ell \in \mathbb{Z}$ for which $\ell \mathcal{P} \in A(K)$, and therefore $\sigma(\ell \mathcal{P}) = \ell \mathcal{P}$. Writing this as $\ell (\sigma \mathcal{P} - \mathcal{P}) = 0$ shows that $\sigma \mathcal{P} - \mathcal{P}$ must be a torsion element of $A(L)$. Since $A(L)_{{\operatorname{tors}}} = A(K)_{{\operatorname{tors}}}$, $\sigma \mathcal{P} - \mathcal{P}$ must in fact be $K$-rational, so $\tau(\sigma \mathcal{P} - \mathcal{P}) = \sigma \mathcal{P} - \mathcal{P}$ for all $\tau \in G$. Thus \begin{equation}\label{eq:GaloisEq} n(\sigma \mathcal{P} - \mathcal{P}) = \sum_{\tau \in G} (\sigma \mathcal{P} - \mathcal{P}) = \sum_{\tau \in G} \tau\left(\sigma \mathcal{P} - \mathcal{P}\right) = \sum_{\tau \in G} \tau\sigma \mathcal{P} - \sum_{\tau \in G} \tau \mathcal{P} = 0 \end{equation} Since we assumed that $A(K)[n] = 0$, it follows that $\sigma \mathcal{P} = \mathcal{P}$. Since this holds for all $\sigma \in G$, we have $\mathcal{P} \in A(K)$. \end{proof} \begin{cor}\label{cor:saturation_samerank} Let $K$ be a field, and let $A$ be an abelian variety defined over $K$ such that $A(K)$ is finitely generated. Let $L$ be a finite Galois extension of $K$ of degree $n := [L:K]$. 
Suppose that $\operatorname{rk} A(L) = \operatorname{rk} A(K)$, $A(L)_{{\operatorname{tors}}} = A(K)_{{\operatorname{tors}}}$, and $A(K)[n] = 0$. Then $A(L) = A(K)$. \end{cor} \begin{proof} Since $A(K)$ and $A(L)$ have the same rank, the index $[A(L):A(K)]$ is finite. Therefore $A(L)$ is the saturation of $A(K)$ in $A(L)$, so $A(L) = A(K)$ by Proposition~\ref{prop:saturation}. \end{proof} \begin{ex} It is clear that the conditions $\operatorname{rk} A(L) = \operatorname{rk} A(K)$ and $A(L)_{{\operatorname{tors}}} = A(K)_{{\operatorname{tors}}}$ are necessary for the conclusion of Corollary~\ref{cor:saturation_samerank}. We now give an example to show that we cannot, in general, omit the restriction on the $n$-torsion. Let $K = \mathbb{Q}$, let $A$ be the elliptic curve defined by $y^2 + xy = x^3 - x$, and let $L = \mathbb{Q}(\sqrt{5})$. This curve appears as 65A1 in Cremona's table \cite{cremona:1997}, where one finds that $\operatorname{rk} A(\mathbb{Q}) = 1$ and $A(\mathbb{Q})_{{\operatorname{tors}}} \cong \mathbb{Z}/2\mathbb{Z}$; in particular, the $n$-torsion condition fails in this case. A computation in Magma shows that $\operatorname{rk} A(L) = \operatorname{rk} A(\mathbb{Q}) = 1$ and $A(L)_{{\operatorname{tors}}} = A(\mathbb{Q})_{{\operatorname{tors}}} \cong \mathbb{Z}/2\mathbb{Z}$. However, we have $\left(\frac{1 + \sqrt{5}}{2}, 1 \right) \in A(L) \setminus A(\mathbb{Q})$. \end{ex} We will ultimately apply Corollary~\ref{cor:saturation_samerank} to Jacobian varieties of curves. If $J$ is the Jacobian of a curve $X$, then knowing that $J(L) = J(K)$ essentially determines $X(L)$ from $X(K)$. We make this precise with the following proposition: \begin{prop}\label{prop:same_jac} Let $X$ be a curve of genus $g \ge 2$ defined over a number field $K$, and let $J$ be the Jacobian of $X$. Let $L/K$ be a Galois extension, and suppose that $J(L) = J(K)$. 
Then one of the following must be true: \begin{enumerate} \item[(A)] $X(L) = X(K)$, or \item[(B)] $X$ is hyperelliptic, $X(K) = \emptyset$, and $X(L)$ consists entirely of Weierstrass points. \end{enumerate} \end{prop} \begin{proof} Assume $X(L) \supsetneq X(K)$, and let $P \in X(L) \setminus X(K)$. First, suppose for contradiction that there is a point $Q \in X(K)$. Since $[P - Q] \in J(L) = J(K)$, we have \[ [\sigma P - Q] = [P - Q]^\sigma = [P - Q] \] for all $\sigma \in \operatorname{Gal}(L/K)$. This implies that $[\sigma P - P]$ is the trivial divisor class for all $\sigma \in \operatorname{Gal}(L/K)$; since $g > 0$, it must be that $\sigma P = P$ for all $\sigma \in \operatorname{Gal}(L/K)$, contradicting our assumption that $P \notin X(K)$. Therefore, $X(K) = \emptyset$. Now, choose any $\sigma \in \operatorname{Gal}(L/K)$ for which $\sigma P \ne P$. Since $[\sigma P - P] \in J(L) = J(K)$, we have \[ [P - \sigma^{-1}P] = [\sigma P - P]^{\sigma^{-1}} = [\sigma P - P], \] hence $[2P - \sigma P - \sigma^{-1} P]$ is the trivial class. By assumption, neither $\sigma P$ nor $\sigma^{-1} P$ is equal to $P$, so there is a degree 2 rational map $f$ on $X$ that vanishes only at $P$. The fact that $\deg f = 2$ implies that $X$ is hyperelliptic, and the fact that $\deg f \le g$ implies that $P$ is a Weierstrass point. \end{proof} \begin{ex} We give an example to show that the situation described in part (B) of Proposition~\ref{prop:same_jac} does occur. Let $X$ be the genus 2 curve given by \[ y^2 = (x^2 + 1)(2x^4 + x^3 + 2x^2 + 2x + 2), \] and let $L = \mathbb{Q}(i)$. Then $J(L) = J(\mathbb{Q}) = \{\mathcal{O}, \{P^+, P^-\}\}$, where $P^{\pm} = (\pm i, 0)$ are the two $L$-rational Weierstrass points on $X$. In this case, we have $X(\mathbb{Q}) = \emptyset$ and $X(L) = \{P^+,P^-\}$.
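The claimed points are easy to check directly. The following standalone Python sketch (the helper name `rhs_factors` is ours) uses exact Gaussian-integer arithmetic to verify that $P^{\pm} = (\pm i, 0)$ satisfy the curve equation and are simple roots of the sextic, hence Weierstrass points:

```python
# Check that P^± = (±i, 0) lie on y^2 = (x^2 + 1)(2x^4 + x^3 + 2x^2 + 2x + 2).
# Python complex numbers with integer parts give exact Gaussian-integer arithmetic.

def rhs_factors(x):
    """Return the two factors of the sextic on the right-hand side."""
    return (x**2 + 1, 2*x**4 + x**3 + 2*x**2 + 2*x + 2)

for x in (1j, -1j):
    quad, quart = rhs_factors(x)
    assert quad == 0   # x = ±i is a root of x^2 + 1 ...
    assert quart != 0  # ... but not of the quartic factor,
    # so (±i, 0) is a point with y = 0 over a simple root: a Weierstrass point.
```

This only confirms the two Weierstrass points; the equalities $J(L) = J(\mathbb{Q})$ and $X(\mathbb{Q}) = \emptyset$ are separate computations.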
\end{ex} \section{Dynamical modular curves of genus at most $2$}\label{sec:genus_gonality} Because we are concerned with dynamics over quadratic fields, it would be useful to know for which admissible graphs $G$ we should expect $X_1(G)$ to have infinitely many quadratic points. By \cite[Cor. 3]{harris/silverman:1991}, this is equivalent to asking for which admissible graphs $G$ the curve $X_1(G)$ is rational, elliptic, hyperelliptic, or admits a degree 2 morphism to an elliptic curve with positive Mordell-Weil rank over $\mathbb{Q}$. In \cite{doyle/krumm/wetherell}, we show that the only such curves $X_1(G)$ are those of genus at most 2 (see \cite{ogg:1974, jeon/kim:2004} for similar results for classical modular curves). In this section, we analyze the torsion subgroups of the Jacobians of the curves $X_1(G)$ of genus at most 2. This analysis, together with Proposition~\ref{prop:same_jac}, will be used in \textsection \ref{sec:specific} to determine the set of $K$-rational points on $X_1(G)$ for certain graphs $G$, with $K = \mathbb{Q}(i)$ and $K = \mathbb{Q}(\omega)$. \begin{rem}\label{rem:strongly} When considering the curves $X_1(G)$ for admissible graphs $G$, we lose no generality by restricting to {\it strongly} admissible graphs. Let $G$ be an admissible graph which is not strongly admissible, which implies that $G$ has a single fixed point. Let $G'$ be the strongly admissible graph obtained from $G$ by adjoining a second fixed point (and, necessarily, its nonperiodic preimage). Then the curve $X_1(G)$ is isomorphic to $X_1(G')$: Indeed, let $K_G/\mathbb{C}(c)$ and $K_{G'}/\mathbb{C}(c)$ be the function fields of the curves $X_1(G)$ and $X_1(G')$, respectively, in a common algebraic closure of $\mathbb{C}(c)$. (Here, we are taking $c$ to be an indeterminate over $\mathbb{C}$.) 
Then $K_{G'}$ is generated over $K_G$ by a root of $\Phi_1(x,c) = x^2 - x + c$; however, $G$ already has one fixed point, hence $K_G$ already contains a root of $\Phi_1(x,c)$, and therefore $K_G$ contains {\it both} roots of $\Phi_1(x,c)$. It follows that $K_{G'} = K_G$, thus $X_1(G') \cong X_1(G)$. \end{rem} Since every dynamical modular curve of genus $0$ already has a rational point---hence is isomorphic over $\mathbb{Q}$ to $\mathbb{P}^1$---we restrict our attention to dynamical modular curves of genus $1$ or $2$. For such curves, we will be interested in determining the torsion subgroups $J_1(G)(K)_{{\operatorname{tors}}}$ as $K$ ranges over all quadratic extensions $K/\mathbb{Q}$. In order to do so, we require a complete list of all dynamical modular curves of genus $1$ or $2$. \begin{prop}\label{prop:all_dyn_mod_curves} Let $G$ be a strongly admissible graph. \begin{enumerate} \item The curve $X_1(G)$ has genus 0 if and only if $G$ is isomorphic to one of the following: \begin{center} \rm 4(1,1), 4(2), 6(1,1), 6(2), 6(3), 8(2,1,1). \end{center} \item The curve $X_1(G)$ has genus 1 if and only if $G$ is isomorphic to one of the following: \begin{center} \rm 8(1,1)a, 8(1,1)b, 8(2)a, 8(2)b, 10(2,1,1)a, 10(2,1,1)b. \end{center} \item The curve $X_1(G)$ has genus 2 if and only if $G$ is isomorphic to one of the following: \begin{center} \rm 8(3), 8(4), 10(3,1,1), 10(3,2). \end{center} \end{enumerate} \end{prop} \begin{proof} The genera of $X_1(G)$ for those graphs $G$ listed in the statement of the proposition were determined in earlier papers, specifically \cite{walde/russo:1994, morton:1992, morton:1998, poonen:1998}. Thus, we need only show that if $G$ is any strongly admissible graph {\it not} listed, then $g(X_1(G)) > 2$. The curves $X_1(G)$ naturally form an inverse system, with maps $X_1(G') \longrightarrow X_1(G)$ whenever $G \subseteq G'$. 
Moreover, if $G \subsetneq G'$, the corresponding map of dynamical modular curves has degree at least $2$; these two statements form the content of \cite[Prop. 3.3]{doyle:2019}. Thus, once we have $g(X_1(G)) \ge 2$, it follows from Riemann-Hurwitz that $g(X_1(G')) > 2$ for all $G' \supsetneq G$. Bousch \cite{bousch:1992} gave an explicit formula for the genera of the curves $X_1(n)$; from that formula, one sees that $g(X_1(n))$ grows on the order of $n2^n$ as $n \to \infty$. The values of $g(X_1(n))$ for small values of $n$ are shown in Table~\ref{tab:genus}, and using Bousch's formula one can verify that $X_1(n)$ has genus greater than $2$ when $n > 4$. It follows that if $X_1(G)$ has genus at most $2$, then either $G \cong {\rm8(4)}$ (the minimal admissible graph with a 4-cycle), in which case $g(X_1(G)) = 2$, or $G$ only contains cycles of length 1, 2, or 3. \begin{table} \renewcommand{\arraystretch}{1.5} \caption{Genera of $X_1(n)$ for small values of $n$} \label{tab:genus} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\ \hline $g(X_1(n))$ & 0 & 0 & 0 & 2 & 14 & 34 & 124 & 285 \\ \hline \end{tabular} \end{table} A quadratic polynomial necessarily has at most two fixed points, a single $2$-cycle, and two $3$-cycles. However, Morton \cite{morton:1992} showed that the curve $X_1(3,3)$, which parametrizes maps $f_c$ together with a pair of marked points of period $3$ with {\it disjoint} orbits, has genus $4$. Thus, if $G$ is a strongly admissible graph with $g(X_1(G)) \le 2$, then either $G \cong \rm8(4)$ or the cycle structure of $G$ (defined immediately following Lemma~\ref{lem:admissible}) is one of the following: \begin{equation}\label{eq:CycleStructures} \rm (1,1),\ (2),\ (3),\ (1,1,2),\ (1,1,3),\ (2,3). \end{equation} All strongly admissible graphs with eight vertices are listed in the statement of the proposition; in particular, for each such graph $G$ we have $g(X_1(G)) \le 2$. 
There are only twelve strongly admissible $10$-vertex graphs with one of the cycle structures given above; we list these graphs in Table~\ref{tab:TenVertex}, together with the genera of their dynamical modular curves:\footnote{Given a model for each curve, Magma can easily compute its genus. Models appear in \cite{poonen:1998, doyle/faber/krumm:2014, doyle:2018quad, doyle/krumm/wetherell}.} \begin{table} \renewcommand{\arraystretch}{1.5} \caption{Ten-vertex graphs with cycle structures from \eqref{eq:CycleStructures}} \label{tab:TenVertex} \begin{tabular}{|c|ccccccccc|}\hline $G$ & 10(1,1)a/b & 10(2) & 10(3)a/b & 10(2,1,1)a/b & 10(3,1,1) & 10(3,2) & $G_1$ & $G_2$ & $G_3$\\\hline $g(X_1(G))$ & 5 & 5 & 9 & 1 & 2 & 2 & 5 & 5 & 5\\\hline \end{tabular} \end{table} Now suppose $G$ were a graph that did not appear in the statement of the proposition but for which $X_1(G)$ had genus at most $2$. Then $G$ would have to properly contain $\rm 10(2,1,1)a$ or $\rm 10(2,1,1)b$. Moreover, since $\rm 10(3,1,1)$ and $\rm 10(3,2)$ are the smallest admissible graphs containing points of period $1$ and period $3$ (resp., period $2$ and period $3$), and since their curves have genus $2$, $G$ cannot also have a $3$-cycle. Thus, $G$ must have cycle structure $\rm (1,1,2)$. Any admissible graph of cycle structure $\rm (1,1,2)$ that properly contains $\rm 10(2,1,1)a$ or $\rm 10(2,1,1)b$ must contain one of $\rm 12(2,1,1)a/b$, $G_4$, $G_5$, or $G_6$. However, the dynamical modular curve associated to each of these graphs has genus $5$; see \cite{doyle/faber/krumm:2014, doyle:2018quad}. Therefore, the proposition lists all strongly admissible graphs $G$ such that $g(X_1(G)) \le 2$. \end{proof} We include in Appendix~\ref{app:models} two key pieces of information for each dynamical modular curve of genus $1$ or $2$: First, we provide an explicit model for each such curve. 
Second, each point on $X_1(G)$ carries the information of a map $f_c$ together with a collection of preperiodic points; for the models we provide, we include the rational map $X_1(G) \longrightarrow \mathbb{P}^1$ that maps a point to the corresponding parameter $c$. \subsection{Curves of genus $1$}\label{sec:gen1} Each of the dynamical modular curves of genus $1$ has rational points and is therefore isomorphic to an elliptic curve over $\mathbb{Q}$. All of these curves have small conductor, so we may refer to Cremona's tables \cite{cremona:1997} to determine the Mordell-Weil groups of these curves over $\mathbb{Q}$; in each case, one finds that the rank is $0$, hence $X_1(G)(\mathbb{Q}) = X_1(G)(\mathbb{Q})_{\operatorname{tors}}$. We give in Table~\ref{tab:genus_one_labels} the Cremona labels (found in \cite{poonen:1998}) and rational Mordell-Weil groups for each of the genus $1$ dynamical modular curves. \begin{table} \centering \renewcommand{\arraystretch}{1.5} \caption{Dynamical modular curves of genus $1$} \label{tab:genus_one_labels} \begin{tabular}{|c|c|c|} \hline $G$ & Cremona label for $X_1(G)$ & $X_1(G)(\mathbb{Q})$\\ \hline 8(1,1)a & 24A4 & $\mathbb{Z}/4\mathbb{Z}$ \\ \hline 8(1,1)b & 11A3 & $\mathbb{Z}/5\mathbb{Z}$ \\ \hline 8(2)a & 40A3 & $\mathbb{Z}/4\mathbb{Z}$ \\ \hline 8(2)b & 11A3 & $\mathbb{Z}/5\mathbb{Z}$ \\ \hline 10(2,1,1)a & 17A4 & $\mathbb{Z}/4\mathbb{Z}$ \\ \hline 10(2,1,1)b & 15A8 & $\mathbb{Z}/4\mathbb{Z}$ \\ \hline \end{tabular} \end{table} We now determine, for each of the curves $X$ listed in Table~\ref{tab:genus_one_labels}, the torsion subgroup $X(K)_{{\operatorname{tors}}}$ over all quadratic fields $K$. For the following theorem, we list the genus $1$ dynamical modular curves according to their Cremona labels. \begin{thm}\label{thm:gen_one_torsion} Let $d$ be a squarefree integer, and let $K = \mathbb{Q}(\sqrt{d})$. 
\begin{enumerate} \item[] {\rm(11A3)} If $G = {\rm8(1,1)b}$ or $G = {\rm8(2)b}$, then \[ X_1(G)(K)_{{\operatorname{tors}}} \cong \mathbb{Z}/5\mathbb{Z}. \] \item[] {\rm(15A8)} If $G = {\rm10(2,1,1)b}$, then \[ X_1(G)(K)_{{\operatorname{tors}}} \cong \begin{cases} \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/4\mathbb{Z}, &\mbox{ if } d = -15;\\ \mathbb{Z}/8\mathbb{Z}, &\mbox{ if } d \in \{-3,5\};\\ \mathbb{Z}/4\mathbb{Z}, &\mbox{ otherwise}. \end{cases} \] \item[] {\rm(17A4)} If $G = {\rm10(2,1,1)a}$, then \[ X_1(G)(K)_{{\operatorname{tors}}} \cong \begin{cases} \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/4\mathbb{Z}, &\mbox{ if } d = 17;\\ \mathbb{Z}/4\mathbb{Z}, &\mbox{ otherwise}. \end{cases} \] \item[] {\rm(24A4)} If $G = {\rm8(1,1)a}$, then \[ X_1(G)(K)_{{\operatorname{tors}}} \cong \begin{cases} \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/4\mathbb{Z}, &\mbox{ if } d = -3;\\ \mathbb{Z}/8\mathbb{Z}, &\mbox{ if } d \in \{-1,3\};\\ \mathbb{Z}/4\mathbb{Z}, &\mbox{ otherwise}. \end{cases} \] \item[] {\rm(40A3)} If $G = {\rm8(2)a}$, then \[ X_1(G)(K)_{{\operatorname{tors}}} \cong \begin{cases} \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/4\mathbb{Z}, &\mbox{ if } d = 5;\\ \mathbb{Z}/4\mathbb{Z}, &\mbox{ otherwise}. \end{cases} \] \end{enumerate} \end{thm} \begin{proof} The curves with Cremona labels 11A3, 15A8, and 24A4 are birational to the classical modular curves $X^{\Ell}_1(11)$, $X^{\Ell}_1(15)$, and $X^{\Ell}_1(2,12)$, respectively. Theorem~\ref{thm:gen_one_torsion} was proven for these curves by Rabarison \cite{rabarison:2010}, whose proof relies on the extension of Mazur's theorem to quadratic fields due to Kenku-Momose \cite{kenku/momose:1988} and Kamienny \cite{kamienny:1992}. Our method of proof for the remaining curves---17A4 and 40A3---is more elementary and, though we do not do so here, may be used to give an alternative proof for the curves 11A3, 15A8, and 24A4. 
Let $E_{17}$ and $E_{40}$ denote the curves 17A4 and 40A3, respectively, given in \cite{cremona:1997} by the following models: \begin{align*} E_{17}&: y^2 + xy + y = x^3 - x^2 - x;\\ E_{40}&: y^2 = (x-1)(x^2 + x - 1). \end{align*} The primes 3 and 5 (resp., 3 and 17) are primes of good reduction for $E_{17}$ (resp., $E_{40}$). If $\mathfrak{p}$ is a prime in $\calO_K$ lying above the rational prime $p$, then the residue field $k_{\mathfrak{p}}$ embeds into $\mathbb{F}_{p^2}$; computing in Magma, we find the following: \begin{alignat*}{4} E_{17}(\mathbb{F}_{3^2}) &\cong \mathbb{Z}/4\mathbb{Z} \oplus \mathbb{Z}/4\mathbb{Z} & \hspace{10mm} E_{40}(\mathbb{F}_{3^2}) &\cong \mathbb{Z}/4\mathbb{Z} \oplus \mathbb{Z}/4\mathbb{Z}\\ E_{17}(\mathbb{F}_{5^2}) &\cong \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/16\mathbb{Z} & E_{40}(\mathbb{F}_{17^2}) &\cong \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/160\mathbb{Z}. \end{alignat*} It follows that both $E_{17}(K)_{{\operatorname{tors}}}$ and $E_{40}(K)_{{\operatorname{tors}}}$ embed into $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/4\mathbb{Z}$. Since both $E = E_{17}$ and $E = E_{40}$ have $E(\mathbb{Q})_{{\operatorname{tors}}} \cong \mathbb{Z}/4\mathbb{Z}$, it follows that if $E$ gains additional torsion points after base change to $K$, then $E$ necessarily gains a $K$-rational 2-torsion point, hence the full torsion subgroup $E(K)_{{\operatorname{tors}}}$ is precisely $\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/4\mathbb{Z}$. It remains, then, to find the fields of definition for $E_{17}[2]$ and $E_{40}[2]$. One can easily verify that $E_{17}[2]$ consists of the points $\infty$, $(1,-1)$, and the two points $(x,-1/2(x+1))$ with $4x^2 + x - 1 = 0$. Therefore $E_{17}$ attains full 2-torsion over $K = \mathbb{Q}(\sqrt{17})$. The 2-torsion on $E_{40}$ is perhaps more apparent: $E_{40}[2]$ consists of the points $\infty$, $(1,0)$, and $(x,0)$ with $x^2 + x - 1 = 0$. Hence $E_{40}$ attains full 2-torsion over $K = \mathbb{Q}(\sqrt{5})$. 
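The Magma point counts quoted above can be reproduced independently. The following self-contained Python sketch (all helper names are ours) realizes $\mathbb{F}_{p^2}$ as $\mathbb{F}_p[t]/(t^2 - r)$ for a quadratic non-residue $r$ modulo $p$ and brute-forces the number of points on each model; only the group orders are checked here, while the group structures themselves come from the Magma computation:

```python
# Brute-force the orders of E_17 and E_40 over F_{p^2}, representing
# F_{p^2} = F_p[t]/(t^2 - r) with r a quadratic non-residue mod p.
# Field elements are pairs (a, b) standing for a + b*t.

def add(A, B, p):
    return ((A[0] + B[0]) % p, (A[1] + B[1]) % p)

def neg(A, p):
    return ((-A[0]) % p, (-A[1]) % p)

def mul(A, B, p, r):
    a0, a1 = A
    b0, b1 = B
    return ((a0 * b0 + r * a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def e17(x, y, p, r):
    """Does y^2 + xy + y = x^3 - x^2 - x hold at (x, y)?"""
    x2 = mul(x, x, p, r)
    x3 = mul(x2, x, p, r)
    lhs = add(add(mul(y, y, p, r), mul(x, y, p, r), p), y, p)
    rhs = add(add(x3, neg(x2, p), p), neg(x, p), p)
    return lhs == rhs

def e40(x, y, p, r):
    """Does y^2 = (x - 1)(x^2 + x - 1) hold at (x, y)?"""
    m1 = (p - 1, 0)  # the field element -1
    x2 = mul(x, x, p, r)
    return mul(y, y, p, r) == mul(add(x, m1, p), add(add(x2, x, p), m1, p), p, r)

def count(on_curve, p, r):
    field = [(a, b) for a in range(p) for b in range(p)]
    # affine solutions plus the single point at infinity on each model
    return 1 + sum(on_curve(x, y, p, r) for x in field for y in field)

# 2 is a non-residue mod 3 and mod 5; 3 is a non-residue mod 17.
assert count(e17, 3, 2) == 16    # |Z/4 + Z/4|
assert count(e40, 3, 2) == 16    # |Z/4 + Z/4|
assert count(e17, 5, 2) == 32    # |Z/2 + Z/16|
assert count(e40, 17, 3) == 320  # |Z/2 + Z/160|
```
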
\end{proof} To say that there exists a parameter $c \in K$ such that $G(f_c,K)$ contains a subgraph isomorphic to $G$ is equivalent to saying that $U_1(G)$ has a $K$-rational point. With this in mind, we now apply Theorem~\ref{thm:gen_one_torsion} to show the following: \begin{prop}\label{prop:gen1} Let $G$ be an admissible graph for which $X_1(G)$ has genus $1$, and let $K$ be a quadratic field. \begin{enumerate} \item[(A)] If $\operatorname{rk} X_1(G)(K) = 0$, then $G$ does not occur as a subgraph of $G(f_c,K)$ for any $c \in K$, unless $G \cong {\rm10(2,1,1)b}$, $K = \mathbb{Q}(\sqrt{-15})$, and $c = 3/16$. \item[(B)] If $\operatorname{rk} X_1(G)(K) \ge 1$, then $G$ occurs as a subgraph of $G(f_c,K)$ for infinitely many $c \in K$. \end{enumerate} \end{prop} \begin{proof} We begin by proving (B), so assume that $X_1(G)(K)$ has positive Mordell-Weil rank. Then the curve $X_1(G)$ necessarily contains infinitely many $K$-rational points. Since $U_1(G)$ is open in $X_1(G)$, this implies that the set $U_1(G)(K)$ is infinite, hence the graph $G$ occurs as a subgraph of $G(f_c,K)$ for infinitely many parameters $c \in K$. We now prove (A), so we suppose that $\operatorname{rk} X_1(G)(K) = 0$. If also $X_1(G)(K)_{{\operatorname{tors}}} = X_1(G)(\mathbb{Q})_{{\operatorname{tors}}}$, then necessarily $X_1(G)(K) = X_1(G)(\mathbb{Q})$. For each genus $1$ dynamical modular curve, the set $U_1(G)(\mathbb{Q})$ is empty by \cite{poonen:1998}, so in this case $U_1(G)(K)$ is empty as well. Therefore the graph $G$ never occurs as a subgraph of $G(f_c,K)$ for any $c \in K$. It remains to consider the case that $X_1(G)(K)_{{\operatorname{tors}}} \supsetneq X_1(G)(\mathbb{Q})_{{\operatorname{tors}}}$. We consider the graphs in the same order as in Theorem~\ref{thm:gen_one_torsion}, where all of the corresponding torsion subgroups are described.
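The exceptional parameter $c = 3/16$ in the proposition can be verified directly: for this value of $c$, the map $f_c$ has two rational fixed points and a $2$-cycle defined over $\mathbb{Q}(\sqrt{-15})$, matching the cycle structure $(2,1,1)$. A minimal sketch in exact arithmetic (representing $a + b\sqrt{-15}$ as a pair of rationals; helper names ours):

```python
from fractions import Fraction as F

D = -15  # work in Q(sqrt(-15)); a pair (a, b) stands for a + b*sqrt(-15)

def add(A, B):
    return (A[0] + B[0], A[1] + B[1])

def mul(A, B):
    a, b = A
    c, d = B
    return (a * c + D * b * d, a * d + b * c)

c = (F(3, 16), F(0))

def f(x):
    """The quadratic map f_c(x) = x^2 + c."""
    return add(mul(x, x), c)

# Fixed points: x^2 - x + 3/16 = (x - 1/4)(x - 3/4), so both are rational.
for a in (F(1, 4), F(3, 4)):
    assert f((a, F(0))) == (a, F(0))

# A point of period exactly 2 over Q(sqrt(-15)):
# x = -1/2 + (1/4)*sqrt(-15) is a root of x^2 + x + c + 1.
x = (F(-1, 2), F(1, 4))
assert f(x) != x and f(f(x)) == x
```

This checks only that the required periodic points exist over $\mathbb{Q}(\sqrt{-15})$; that $G(f_c,\mathbb{Q}(\sqrt{-15}))$ is exactly ${\rm10(2,1,1)b}$ is part of the computation described in the proof.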
If $G = {\rm8(1,1)b}$ or $G = {\rm8(2)b}$, then $X_1(G)(K)_{{\operatorname{tors}}} = X_1(G)(\mathbb{Q})_{{\operatorname{tors}}}$ for all quadratic fields $K$, so the proposition holds for these graphs. Now let $G = {\rm10(2,1,1)b}$. The only quadratic fields over which $X_1(G)$ gains torsion points are $K = \mathbb{Q}(\sqrt{d})$ with $d \in \{-15,-3,5\}$, and $X_1(G)$ has rank $0$ over all three of these fields. The four additional points on $X_1(G)(\mathbb{Q}(\sqrt{-15}))$ correspond to $c = 3/16$, in which case $G(f_c,\mathbb{Q}(\sqrt{-15})) \cong {\rm10(2,1,1)b}$, giving us the unique exception in the statement of the proposition. The four additional points on $X_1(G)(\mathbb{Q}(\omega))$ correspond to $c = 0$ and $c = -3/4$, for which $G(f_c,\mathbb{Q}(\omega))$ is isomorphic to 7(2,1,1)a and 6(1,1), respectively. Of the four additional points on $X_1(G)(\mathbb{Q}(\sqrt{5}))$, two are points at infinity, and the other two correspond to $c = -2$, in which case $G(f_c,\mathbb{Q}(\sqrt{5})) \cong {\rm9(2,1,1)}$. In the case $G = {\rm10(2,1,1)a}$, the only quadratic field over which $X_1(G)$ gains torsion points is $\mathbb{Q}(\sqrt{17})$. However, $X_1(G)$ has rank $1$ over $\mathbb{Q}(\sqrt{17})$. We now consider $G = {\rm8(1,1)a}$. In this case, $X_1(G)(K)_{{\operatorname{tors}}}$ is strictly larger than $X_1(G)(\mathbb{Q})_{{\operatorname{tors}}}$ only for $K = \mathbb{Q}(\sqrt{d})$ with $d \in \{-3,-1,3\}$. For $d = -3$, the four additional points correspond to $c = 1/4$, where we have $G(f_c,\mathbb{Q}(\omega)) \cong {\rm4(1)}$. For $d = -1$, two of the additional points are points at infinity, while the other two correspond to $c = 0$, for which we have $G(f_c,\mathbb{Q}(i)) \cong {\rm5(1,1)b}$. When $d = 3$, the extra points on $X_1(G)$ correspond to $c = -2$, in which case $G(f_c,\mathbb{Q}(\sqrt{3})) \cong {\rm7(1,1)b}$. Finally, let $G = {\rm8(2)a}$. The elliptic curve $X_1(G)$ only gains additional torsion over $\mathbb{Q}(\sqrt{5})$.
The new points correspond to $c = -3/4$, where we actually have $G(f_c,\mathbb{Q}(\sqrt{5})) \cong {\rm6(1,1)}$. \end{proof} \subsection[Curves of genus $2$]{Curves of genus $2$}\label{sec:gen2} In this section, we consider the Jacobian of each of the four genus $2$ dynamical modular curves listed in Proposition~\ref{prop:all_dyn_mod_curves}. As we did for the genus $1$ curves in the previous section, we explicitly determine the torsion subgroups of these four Jacobians over all quadratic extensions $K/\mathbb{Q}$. Three of the four genus $2$ dynamical modular curves are also classical modular curves: \begin{equation}\label{eq:X1(1,3)} \begin{split} X_1(4) &\cong X^{\Ell}_1(16) : \hspace{5mm} y^2 = f_{16}(x) := -x(x^2 + 1)(x^2 - 2x - 1)\\ X_1(1,3) &\cong X^{\Ell}_1(18) : \hspace{5mm} y^2 = f_{18}(x) := x^6 + 2x^5 + 5x^4 + 10x^3 + 10x^2 + 4x + 1\\ X_1(2,3) &\cong X^{\Ell}_1(13) : \hspace{5mm} y^2 = f_{13}(x) := x^6 + 2x^5 + x^4 + 2x^3 + 6x^2 + 4x + 1. \end{split} \end{equation} These curves correspond to the graphs 8(4), 10(3,1,1), and 10(3,2), respectively. For $G = {\rm8(3)}$, which is generated by a point of portrait $(2,3)$, it was shown in \cite{poonen:1998} that $X_1(G) = X_1((2,3))$ is given by the equation \[ y^2 = x^6 - 2x^4 + 2x^3 + 5x^2 + 2x + 1. \] Each of these curves $X_1(G)$ has rational points, but none of them lie on $U_1(G)$. \begin{rem} Since the notation is so similar, we pause to emphasize that $X_1(2,3)$ parametrizes maps $f_c$ together with marked points of period $2$ and $3$, respectively, while $X_1((2,3))$ parametrizes maps $f_c$ together with a single marked point of portrait $(2,3)$.
\end{rem} The rational Mordell-Weil groups of the classical modular Jacobians $J^{\Ell}_1(N)$ with $N \in \{13,16,18\}$ are well known, and the same was computed for $J_1((2,3))$ in \cite{poonen:1998}: \begin{align*} J_1(4)(\mathbb{Q}) = J^{\Ell}_1(16)(\mathbb{Q}) &\cong \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/10\mathbb{Z}\\ J_1(1,3)(\mathbb{Q}) = J^{\Ell}_1(18)(\mathbb{Q}) &\cong \mathbb{Z}/21\mathbb{Z}\\ J_1(2,3)(\mathbb{Q}) = J^{\Ell}_1(13)(\mathbb{Q}) &\cong \mathbb{Z}/19\mathbb{Z}\\ J_1((2,3))(\mathbb{Q}) &\cong \mathbb{Z}. \end{align*} We now determine over which quadratic fields $K$ these Jacobians gain new torsion points. \begin{thm}\label{thm:gen_two_torsion} Let $d$ be a squarefree integer, and let $K = \mathbb{Q}(\sqrt{d})$. \begin{enumerate} \item[(A)] \[ J_1(4)(K)_{{\operatorname{tors}}} \cong \begin{cases} (\mathbb{Z}/2\mathbb{Z})^2 \oplus \mathbb{Z}/10\mathbb{Z}, &\mbox{ if } d \in \{-1,2\};\\ \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/10\mathbb{Z}, &\mbox{ otherwise}. \end{cases} \] \item[(B)] \[ J_1(1,3)(K)_{{\operatorname{tors}}} \cong \begin{cases} \mathbb{Z}/3\mathbb{Z} \oplus \mathbb{Z}/21\mathbb{Z}, &\mbox{ if } d = -3;\\ \mathbb{Z}/21\mathbb{Z}, &\mbox{ otherwise}. \end{cases} \] \item[(C)] \[ J_1(2,3)(K)_{{\operatorname{tors}}} \cong \mathbb{Z}/19\mathbb{Z}. \] \item[(D)] \[ J_1((2,3))(K)_{\operatorname{tors}} \cong 0. \] \end{enumerate} \end{thm} \begin{rem} The torsion subgroups of the Jacobians of classical modular curves of genus 2 have been computed over certain quadratic fields---including $\mathbb{Q}(i)$ and $\mathbb{Q}(\omega)$---in \cite{kamienny/najman:2012,najman:2010,najman:2011}. Here, we compute the torsion subgroups over all quadratic fields simultaneously for dynamical modular curves of genus 2, just as we did in the genus 1 case in \textsection \ref{sec:gen1}. \end{rem} The most difficult case of Theorem~\ref{thm:gen_two_torsion} is (B). In order to prove (B), we will require two lemmas concerning the curve $X_1(1,3)$.
Recall that for a divisor $D$ on a curve $X$, the {\it Riemann-Roch space of $D$} is the space \[ \mathcal{L}(D) := \{f \in \mathbb{C}(X) : (f) + D \text{ is effective}\}, \] and the {\it complete linear system} $|D|$ is the set of all effective divisors linearly equivalent to $D$; that is, the set of effective divisors $E$ for which $[E - D] = [0] = \mathcal{O}$. \begin{lem}\label{lem:3infty} Let $X = X_1(1,3)$, given by the model $y^2 = f_{18}(x)$ from \eqref{eq:X1(1,3)}. Then $[3P - 3\infty^+] = \mathcal{O}$ if and only if $P = \infty^+$. \end{lem} \begin{proof} One direction is immediate, so we suppose $P \ne \infty^+$ and show that $[3P - 3\infty^+] \ne \mathcal{O}$. First, take $P = \infty^-$. One can verify in Magma that the point $[\infty^- - \infty^+] \in J_1(1,3)(\mathbb{Q})$ is a point of order 21, which means that $[3\infty^- - 3\infty^+] = 3[\infty^- - \infty^+] \ne \mathcal{O}$. Next, we observe that if $P$ is a Weierstrass point, then \[ [3P - 3\infty^+] = [P + \infty^- - 2\infty^+] + [2P - \infty^+ - \infty^-] = [P + \infty^- - 2\infty^+]. \] Since $\infty^+$ is not a Weierstrass point, there is no rational function of degree $2$ with a pole only at $\infty^+$, so $[P + \infty^- - 2\infty^+]$---and therefore $[3P - 3\infty^+]$---is nonzero. Finally, suppose $P$ is a finite, non-Weierstrass point on $X$; write $P = (x_0,y_0)$ with $y_0 \ne 0$. Suppose for contradiction that $[3P - 3\infty^+] = \mathcal{O}$, and let $g$ be a rational function on $X$ with zero divisor $3P$ and pole divisor $3\infty^+$. Then $g$ lies in $\mathcal{L}(3\infty^+)$, which has dimension $2$ by the Riemann-Roch theorem. The constant functions certainly lie in $\mathcal{L}(3\infty^+)$, and we claim that the function $h := y + x^3 + x^2 + 2x$ also lies in $\mathcal{L}(3\infty^+)$, so that $\mathcal{L}(3\infty^+) = \langle 1, h \rangle$. In other words, we claim that the only pole of $h$ is a triple pole at $\infty^+$. Certainly $h$ has no finite poles.
To better understand the behavior of $h$ at infinity, we cover $X$ by the affine patches $y^2 = f_{18}(x)$ and $v^2 = u^6 f_{18}(1/u)$, with the identifications $x = 1/u$ and $y = v/u^3$ (as described in \textsection \ref{sec:hyp}). The two points $\infty^{\pm}$ on $X$ are given by $(u,v) = (0,\pm 1)$. We rewrite $h$ in terms of $u$ and $v$ to get \begin{equation}\label{eq:h_uv} h = \frac{v + 2u^2 + u + 1}{u^3}. \end{equation} Certainly $h$ has a triple pole at $(u,v) = (0,1)$, since $u = 1/x$ is a uniformizer at $\infty^+$. On the other hand, multiplying each of the numerator and denominator of \eqref{eq:h_uv} by $v - (2u^2 + u + 1)$ yields \[ h = \frac{u^3 + 4u^2 + 6u + 6}{v - (2u^2 + u + 1)}, \] which now visibly does not have a pole at $(u,v) = (0,-1)$. Therefore $h \in \mathcal{L}(3\infty^+)$, as claimed. It follows that the function $g$ may be written as $a + b(y + x^3 + x^2 + 2x)$ for some scalars $a$ and $b$. Since $g$ must be nonconstant, we must have $b \ne 0$; scaling by $1/b$, we may assume $g$ is of the form \[ g = y + x^3 + x^2 + 2x + A, \] which we rewrite as \[ g = \frac{y^2 - (x^3 + x^2 + 2x + A)^2}{y - (x^3 + x^2 + 2x + A)} = -\frac{p(x)}{y - (x^3 + x^2 + 2x + A)}, \] where \[ p(x) := 2(A-3)x^3 + 2(A-3)x^2 + 4(A-1)x + (A+1)(A-1). \] Since $P$ is not a Weierstrass point, $x - x_0$ is a uniformizer at $P$; since $g$ vanishes to order $3$ at $P$, this means that $(x-x_0)^3$ must divide $p(x)$. Thus each of $p(x)$ and $p'(x)$ has a multiple root, so \[ \operatorname{disc}(p) = -4 (A-1) (A-3) \left(27 A^4-118 A^3+180 A^2-42 A+17\right) = 0 \] and \[ \operatorname{disc}(p') = -16(A - 3)(5A - 3) = 0. \] This forces $A = 3$, which contradicts the fact that $p(x)$ must have degree $3$. Having exhausted all possibilities for $P \ne \infty^+$, we have completed the proof. \end{proof} \begin{lem}\label{lem:3torsion} Let $X = X_1(1,3)$ and $J = J_1(1,3)$. 
The $3$-torsion subgroup $J[3]$ contains only nine points of degree at most $2$ over $\mathbb{Q}$, all of which are defined over $\mathbb{Q}(\omega)$. \end{lem} \begin{proof} Suppose $\{P,Q\}$ is a point of order 3 on $J$. This means that \begin{equation}\label{eq:3-torsion} [3P + 3Q - (3 \infty^+ + 3\infty^-)] = 3[P + Q - \infty^+ - \infty^-] = \mathcal{O}, \end{equation} so there is a function $g$ on $X$ whose divisor is $(g) = 3P + 3Q - (3 \infty^+ + 3\infty^-)$. We first show that neither $P$ nor $Q$ may be a point at infinity. Suppose to the contrary that $Q = \infty^-$. (There is no loss of generality here: The pair $\{P,Q\}$ is unordered, so we are free to switch $P$ and $Q$, and if $\{P,\infty^+\}$ is a 3-torsion point, then so is $-\{P,\infty^+\} = \{\iota P, \infty^-\}$.) Then $\mathcal{O} = [3P + 3Q - (3 \infty^+ + 3\infty^-)] = [3P - 3\infty^+]$. However, by Lemma~\ref{lem:3infty} this implies $P = \infty^+$, which means that $\{P,Q\} = \{\infty^+,\infty^-\} = \mathcal{O}$, hence $\{P,Q\}$ does not have order $3$. We now show that neither $P$ nor $Q$ may be a Weierstrass point. Suppose for contradiction that $P$ is a Weierstrass point. (Again, we lose no generality in doing so since $\{P,Q\}$ is unordered.) Then \begin{align*} \mathcal{O} = [3P + 3Q - 3\infty^+ - 3\infty^-] = \{P,P\} + \{Q,Q\} + \{P,Q\} = \{Q,Q\} + \{P,Q\}, \end{align*} since $P$ is assumed to be a Weierstrass point. This implies that \[ \{P,Q\} = -\{Q,Q\} = \{\iota Q, \iota Q\}, \] hence $P = \iota Q = Q$. It follows that $\{P,Q\} = \{P,P\} = \mathcal{O}$, so $\{P,Q\}$ does not have order $3$. Since neither $P$ nor $Q$ is a point at infinity, there is no cancellation in the difference $3P + 3Q - 3\infty^+ - 3\infty^-$, so the function $g$ must have zero divisor equal to $3P + 3Q$ and pole divisor equal to $D := 3\infty^+ + 3\infty^-$. 
By Riemann-Roch, $\dim \mathcal{L}(D) = 5$; since the set \[ \{1, x, x^2, x^3, y\} \] is a linearly independent set of elements of $\mathcal{L}(D)$, it must be a basis. Therefore there exist scalars $a,b,c,d,e \in \mathbb{C}$ for which \[ g = ay + bx^3 + cx^2 + dx + e. \] We claim that $a \ne 0$. Indeed, if $a = 0$, then the set of points on $X$ for which $g = 0$ is \[ \mathcal{S} := \{(x,\pm \sqrt{f_{18}(x)}) : bx^3 + cx^2 + dx + e = 0\}. \] Since $(g) = 3(P + Q - \infty^+ - \infty^-)$, $\mathcal{S}$ contains only two points. Thus either $g = 0$ has a single solution $x_0$, or $g = 0$ has two distinct solutions $x_1$ and $x_2$ with $f_{18}(x_1) = f_{18}(x_2) = 0$. In the former case, $P$ and $Q$ are hyperelliptic conjugates, so $\{P,Q\} = \mathcal{O}$; in the latter, $P$ and $Q$ are distinct Weierstrass points, which we have already ruled out. In either case, $\{P,Q\}$ is not a point of order 3, so we must have $a \ne 0$; dividing by $a$ if necessary, we take $g$ to be of the form \[ g = y - (Ax^3 + Bx^2 + Cx + D), \] which we rewrite as \begin{align*} g &= \frac{y^2 - (Ax^3 + Bx^2 + Cx + D)^2}{y + (Ax^3 + Bx^2 + Cx + D)}\\ &= -\frac{q(x)}{y + (Ax^3 + Bx^2 + Cx + D)}, \end{align*} where \begin{align}\label{eq:3tors_poly1} \begin{split} q(x) = (A+1)(A-1)x^6 &+ 2(AB - 1)x^5 + (2AC + B^2 - 5)x^4 \\ & + 2(AD + BC - 5)x^3 + (2BD + C^2 - 10)x^2 \\ & + 2(CD - 2)x + (D+1)(D-1). \end{split} \end{align} The function $q(x)$ must vanish to order 3 at each of $P = (x_1,y_1)$ and $Q = (x_2,y_2)$. Since $P$ and $Q$ are not Weierstrass points, $(x-x_1)$ and $(x-x_2)$ are uniformizers at $P$ and $Q$, respectively. Thus $(x-x_1)^3(x-x_2)^3$ must divide $q(x)$, hence $(A + 1)(A - 1) \ne 0$ and \begin{equation}\label{eq:3tors_poly2} q(x) = (A+1)(A-1)(x^2 - tx + n)^3, \end{equation} where $t = x_1 + x_2$ and $n = x_1x_2$. 
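The coefficient computation behind \eqref{eq:3tors_poly1} is mechanical, and easy to double-check: on the curve, $q(x) = (Ax^3 + Bx^2 + Cx + D)^2 - f_{18}(x)$, so its coefficients can be compared against the displayed ones at sample integer values of $A, B, C, D$. A standalone Python sketch (helper names ours; coefficient lists are in degree-increasing order):

```python
# Verify the coefficients of q(x) = s(x)^2 - f18(x), s(x) = A x^3 + B x^2 + C x + D,
# against the expressions displayed in the text, for several integer samples.

def polymul(u, v):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            out[i + j] += a * b
    return out

f18 = [1, 4, 10, 10, 5, 2, 1]  # x^6 + 2x^5 + 5x^4 + 10x^3 + 10x^2 + 4x + 1

for (A, B, C, D) in [(2, 3, 5, 7), (-1, 4, 0, 2), (3, -2, 1, -5)]:
    s = [D, C, B, A]
    q = [a - b for a, b in zip(polymul(s, s), f18)]
    expected = [
        (D + 1) * (D - 1),          # constant term
        2 * (C * D - 2),            # x
        2 * B * D + C**2 - 10,      # x^2
        2 * (A * D + B * C - 5),    # x^3
        2 * A * C + B**2 - 5,       # x^4
        2 * (A * B - 1),            # x^5
        (A + 1) * (A - 1),          # x^6
    ]
    assert q == expected
```
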
Equating the coefficients of the expressions for $q(x)$ given in \eqref{eq:3tors_poly1} and \eqref{eq:3tors_poly2} yields the following system of equations: \begin{equation}\label{eq:3tors_system} \left\{ \begin{split} \hfill 2(AB - 1) &= -3(A+1)(A-1)t\\ \hfill 2AC + B^2 - 5 &= 3 (A+1)(A-1)(t^2 + n)\\ \hfill 2(AD + BC - 5) &= -(A+1)(A-1)t(t^2 + 6n)\\ \hfill 2BD + C^2 - 10 &= 3(A+1)(A-1)n(t^2 + n)\\ \hfill 2(CD - 2) &= -3(A+1)(A-1)tn^2\\ \hfill (D+1)(D-1) &= (A+1)(A-1)n^3 \end{split} \right. \end{equation} The system \eqref{eq:3tors_system} defines a $0$-dimensional scheme $S \subseteq \mathbb{A}^6$. A Magma calculation finds all 80 points of $S(\overline{\bbQ})$, each of which corresponds to a point of order 3 in $J(\overline{\bbQ})$. Now, in order for $\{P,Q\}$ to be a \emph{quadratic} point on $J$, say defined over the quadratic field $K$, either $P$ and $Q$ must both be defined over $K$, or $P$ and $Q$ must be Galois conjugates defined over some quadratic extension $L/K$. In either case, the parameters $t$ and $n$ must both lie in $K$. The only points in $S(\overline{\bbQ})$ with $[\mathbb{Q}(t,n):\mathbb{Q}] \le 2$ are the eight points satisfying \[ (t,n) \in \{(-1,1),(-2(\omega + 1),\omega),(-2(\omega^2 + 1),\omega^2)\}, \] all of which are defined over $\mathbb{Q}(\omega)$. Therefore the only quadratic field over which $J$ gains additional 3-torsion is $\mathbb{Q}(\omega)$, and over this field there are a total of eight points (two of which are $\mathbb{Q}$-rational) of order $3$. The lemma now follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:gen_two_torsion}] For (A), we note that 3 is a prime of good reduction for $J_1(4)$, and that \[ J_1(4)(\mathbb{F}_{3^2}) \cong (\mathbb{Z}/2\mathbb{Z})^3 \oplus \mathbb{Z}/10\mathbb{Z}. \] Moreover, since 5 is a prime of good reduction and $\# J_1(4)(\mathbb{F}_{5^2}) = 2^7 \cdot 5$, $J_1(4)(K)$ cannot have 3-torsion. 
Hence \[ J_1(4)(K)_{{\operatorname{tors}}} \hookrightarrow (\mathbb{Z}/2\mathbb{Z})^3 \oplus \mathbb{Z}/10\mathbb{Z}. \] Since $J_1(4)(\mathbb{Q})_{{\operatorname{tors}}} \cong \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/10\mathbb{Z}$, the only way for $J_1(4)(K)_{{\operatorname{tors}}}$ to be strictly larger than $J_1(4)(\mathbb{Q})_{{\operatorname{tors}}}$ is for $J_1(4)$ to gain a 2-torsion point upon base change from $\mathbb{Q}$ to $K$. By Lemma~\ref{lem:2torsion}, the 2-torsion points are the points supported on the Weierstrass locus of $X_1(4)$. The Weierstrass points are $\infty$ and the points \[ P := (0,0), \ Q^{\pm} := (\pm i,0), \text{ and } R^{\pm} := (1 \pm \sqrt{2},0). \] The sixteen points in $J_1(4)[2]$ are therefore those appearing in Table~\ref{tab:2torsion}. Hence the only quadratic fields over which $J_1(4)$ gains additional torsion are $\mathbb{Q}(i)$ and $\mathbb{Q}(\sqrt{2})$, and over each of these fields the torsion subgroup is isomorphic to $(\mathbb{Z}/2\mathbb{Z})^2 \oplus \mathbb{Z}/10\mathbb{Z}$. \begin{table} \centering \renewcommand{\arraystretch}{1.5} \caption{The 2-torsion points on $J_1(4)$} \label{tab:2torsion} \begin{tabular}{|c|c c c c|} \hline Field of definition & \multicolumn{4}{|c|}{Points} \\ \hline $\mathbb{Q}$ & $\mathcal{O}$ & $\{\infty, P\}$ & $\{Q^+,Q^-\}$ & $\{R^+,R^-\}$ \\ \hline $\mathbb{Q}(i)$ & $\{\infty,Q^+\}$ & $\{\infty,Q^-\}$ & $\{P,Q^+\}$ & $\{P,Q^-\}$\\ \hline $\mathbb{Q}(\sqrt{2})$ & $\{\infty,R^+\}$ & $\{\infty,R^-\}$ & $\{P,R^+\}$ & $\{P,R^-\}$\\ \hline $\mathbb{Q}(i,\sqrt{2})$ & $\{Q^+,R^+\}$ & $\{Q^+,R^-\}$ & $\{Q^-,R^+\}$ & $\{Q^-,R^-\}$\\ \hline \end{tabular} \end{table} Next, we consider part (B). 
Since $J_1(1,3)$ has good reduction at the primes 5 and 11, we compute \begin{align*} J_1(1,3)(\mathbb{F}_{5^2}) &\cong (\mathbb{Z}/3\mathbb{Z})^2 \oplus (\mathbb{Z}/7\mathbb{Z})^2 , \ \\ J_1(1,3)(\mathbb{F}_{11^2}) &\cong (\mathbb{Z}/4\mathbb{Z})^2 \oplus (\mathbb{Z}/3\mathbb{Z})^2 \oplus \mathbb{Z}/7\mathbb{Z} \oplus \mathbb{Z}/13\mathbb{Z}. \end{align*} Therefore \[ J_1(1,3)(K)_{{\operatorname{tors}}} \hookrightarrow (\mathbb{Z}/3\mathbb{Z})^2 \oplus \mathbb{Z}/7\mathbb{Z} = \mathbb{Z}/3\mathbb{Z} \oplus \mathbb{Z}/21\mathbb{Z}. \] Since $J_1(1,3)(\mathbb{Q}) \cong \mathbb{Z}/21\mathbb{Z}$, the only way for $J_1(1,3)$ to gain torsion points over a quadratic field $K$ is to gain a point of order 3, in which case the full torsion subgroup is $\mathbb{Z}/3\mathbb{Z} \oplus \mathbb{Z}/21\mathbb{Z}$. We know from Lemma~\ref{lem:3torsion} that the only quadratic field over which $J_1(1,3)$ admits additional $K$-rational points of order 3 is $K = \mathbb{Q}(\omega)$, and (B) now follows. For parts (C) and (D), we observe that 3 and 5 are both primes of good reduction for $J_1(2,3)$ and $J_1((2,3))$, and that \begin{alignat*}{4} \#J_1(2,3)(\mathbb{F}_{3^2}) &= 3 \cdot 19 & \hspace{10mm} \#J_1((2,3))(\mathbb{F}_{3^2}) &= 3^4;\\ \#J_1(2,3)(\mathbb{F}_{5^2}) &= 19^2 & \#J_1((2,3))(\mathbb{F}_{5^2}) &= 19 \cdot 43. \end{alignat*} Therefore $J_1(2,3)(K)_{{\operatorname{tors}}} \hookrightarrow \mathbb{Z}/19\mathbb{Z}$ and $J_1((2,3))(K)_{{\operatorname{tors}}} = 0$. Since $J_1(2,3)(\mathbb{Q}) \cong \mathbb{Z}/19\mathbb{Z}$, this proves (C) and (D). \end{proof} \begin{prop}\label{prop:gen2} Let $G$ be an admissible graph for which $J_1(G)$ has genus $2$, and let $K$ be a quadratic field. Suppose $\operatorname{rk} J_1(G)(K) = \operatorname{rk} J_1(G)(\mathbb{Q})$. \begin{enumerate} \item If $G$ is isomorphic to 8(4), 10(3,1,1), or 10(3,2), then $G$ does not occur as a subgraph of $G(f_c,K)$ for any $c \in K$. 
\item If $G = \rm 8(3)$, then the only $c \in K$ for which $G(f_c,K)$ contains a subgraph isomorphic to $G$ is $c = -29/16$, in which case $G(f_c,\mathbb{Q}) \cong {\rm8(3)}$. \end{enumerate} \end{prop} \begin{proof} We begin with statement (A). If $G$ is one of the graphs 8(4), 10(3,1,1), or 10(3,2), then $\operatorname{rk} J_1(G)(\mathbb{Q}) = 0$, in which case the conditions \[\operatorname{rk} J_1(G)(K) = \operatorname{rk} J_1(G)(\mathbb{Q}) \ \mbox{ and } \ J_1(G)(K)_{{\operatorname{tors}}} = J_1(G)(\mathbb{Q})_{{\operatorname{tors}}} \] automatically imply that $J_1(G)(K) = J_1(G)(\mathbb{Q})$. Since $X_1(G)(\mathbb{Q}) \ne \emptyset$ for each of these three graphs $G$, Proposition~\ref{prop:same_jac} immediately gives us $X_1(G)(K) = X_1(G)(\mathbb{Q})$. Since $U_1(G)(\mathbb{Q}) = \emptyset$ for each $G$ (see \cite{poonen:1998,morton:1998}), we conclude that (A) holds if $J_1(G)(K)_{\operatorname{tors}} = J_1(G)(\mathbb{Q})_{\operatorname{tors}}$. It remains to consider those quadratic fields $K$ for which $J_1(G)(K)_{\operatorname{tors}} \supsetneq J_1(G)(\mathbb{Q})_{\operatorname{tors}}$. We begin by considering $G = {\rm8(4)}$. By Theorem~\ref{thm:gen_two_torsion}, $J_1(G) = J_1(4)$ only gains torsion points over $K = \mathbb{Q}(i)$ and $K = \mathbb{Q}(\sqrt{2})$, and $J_1(G)$ still has rank $0$ over those two fields. Over each of these fields $K$, we can explicitly determine all forty elements of $J_1(G)(K)$, and we find no {\it non-trivial} points of the form $\{P,P\}$ with $P \in X_1(G)(K) \setminus X_1(G)(\mathbb{Q})$. This means that the only additional $K$-rational points on $X_1(G)$ are the Weierstrass points: $(\pm i, 0)$ over $\mathbb{Q}(i)$, and $(1 \pm \sqrt{2},0)$ over $\mathbb{Q}(\sqrt{2})$. However, the points $(\pm i,0)$ correspond to $c = (\mp2i + 1)/4$, for which we have $G(f_c,\mathbb{Q}(i)) \cong {\rm4(1,1)}$, and the points $(1 \pm \sqrt{2},0)$ correspond to $c = -5/4$, for which we have $G(f_c,\mathbb{Q}(\sqrt{2})) \cong {\rm4(2)}$. 
Now let $G = {\rm10(3,1,1)}$. The Jacobian $J_1(G) = J_1(1,3)$ only gains additional torsion over the quadratic field $K = \mathbb{Q}(\omega)$. As in the previous case, we can explicitly find all 63 points on $J_1(G)(K)$, which allows us to completely determine $X_1(G)(K)$. The only new points on $X_1(G)(K)$ are $(\omega, \pm(\omega - 1))$ and their Galois conjugates. These correspond to $c = 1/4 + 3\omega/4$ and its conjugate, for which we have $G(f_c,K) \cong {\rm4(1,1)}$. For $G = {\rm10(3,2)}$, the torsion subgroup of $J_1(G) = J_1(2,3)$ is unchanged upon base change to any quadratic field $K$, so we are already done in this case. We now prove (B), so let $G = {\rm8(3)}$. In this case, we have $\operatorname{rk} J_1(G)(\mathbb{Q}) = 1$, so assume $K$ is a quadratic field with $\operatorname{rk} J_1(G)(K) = 1$. Since $J_1(G)(K)_{\operatorname{tors}}$ is trivial for all quadratic fields $K$, Corollary~\ref{cor:saturation_samerank} tells us that $J_1(G)(K) = J_1(G)(\mathbb{Q})$; since $X_1(G)(\mathbb{Q})$ is nonempty, it follows from Proposition~\ref{prop:same_jac} that $X_1(G)(K) = X_1(G)(\mathbb{Q})$. As shown in \cite[\textsection 4]{poonen:1998}, the only points on $U_1(G)(\mathbb{Q})$---and, therefore, the only points on $U_1(G)(K)$---correspond to $c = -29/16$, in which case we have $G(f_c,\mathbb{Q}) \cong 8(3)$. \end{proof} \subsection[The curve $X_0(5)$]{The curve $X_0(5)$}\label{sec:X0(5)} It is shown in \cite{flynn/poonen/schaefer:1997} that if $c \in \mathbb{Q}$, then $f_c$ cannot admit rational points of period 5. Rather than attempting to directly find all rational points on the curve $X_1(5)$, the authors of \cite{flynn/poonen/schaefer:1997} work with the quotient curve $X_0(5)$, which parametrizes maps $f_c$ together with a marked {\it cycle} of length $5$. The model given in \cite{flynn/poonen/schaefer:1997} for $X_0(5)$ is \begin{equation}\label{eq:X0(5)} y^2 = x^6 + 8x^5 + 22x^4 + 22x^3 + 5x^2 + 6x + 1.
\end{equation} They show that $\operatorname{rk} J_0(5)(\mathbb{Q}) = 1$, and then they determine that \[ X_0(5)(\mathbb{Q}) = \{(0,\pm 1), (-3,\pm 1), \infty^{\pm} \} \] using a version of the Chabauty-Coleman method for genus $2$ curves developed by Flynn \cite{flynn:1997}. They conclude that the only values of $c \in \mathbb{Q}$ for which $f_c$ has a rational 5-cycle (i.e., the cycle is Galois invariant as a set, but not necessarily pointwise) are $-2$, $-16/9$, and $-64/9$. However, for each such $c$ the corresponding points of period 5 generate a degree 5 extension of $\mathbb{Q}$. Since $X_0(5)$ has genus $2$, we may apply the methods used in the previous section to compute the torsion subgroup of $J_0(5)(K)$ for quadratic fields $K$. From this information, we will deduce a sufficient condition for a quadratic field $K$ to contain no elements $c$ for which $f_c$ admits $K$-rational points of period 5. \begin{prop} Let $K$ be a quadratic field. Then \[ J_0(5)(K)_{{\operatorname{tors}}} = 0. \] \end{prop} \begin{proof} The primes $p = 3$ and $p = 5$ are primes of good reduction for the curve $X$ given in \eqref{eq:X0(5)}, which is birational to $X_0(5)$. Letting $J := \operatorname{Jac}(X)$, a Magma computation shows that \[ \#J(\mathbb{F}_{3^2}) = 3^4 \text{\ \ and\ \ } \#J(\mathbb{F}_{5^2}) = 29 \cdot 41. \] As before, if $\mathfrak{p}$ is any prime in $\calO_K$ lying above the rational prime $p$, then $\mathbb{F}_{\mathfrak{p}} \hookrightarrow \mathbb{F}_{p^2}$ and, therefore, $J(\mathbb{F}_{\mathfrak{p}}) \hookrightarrow J(\mathbb{F}_{p^2})$. Since $\#J(\mathbb{F}_{3^2})$ and $\#J(\mathbb{F}_{5^2})$ are coprime, we conclude that $J(K)_{{\operatorname{tors}}} = 0$. \end{proof} \begin{cor}\label{cor:5cycle} Let $K$ be a quadratic field. If $\operatorname{rk} J_0(5)(K) = 1$, then there is no element $c \in K$ for which $f_c$ admits a $K$-rational point of period 5. \end{cor} \begin{proof} Let $X := X_0(5)$ and $J := J_0(5)$. 
If $\operatorname{rk} J(K) = 1$, then we have $\operatorname{rk} J(K) = \operatorname{rk} J(\mathbb{Q})$ and $J(K)_{{\operatorname{tors}}} = 0$, hence $J(K) = J(\mathbb{Q})$ by Corollary~\ref{cor:saturation_samerank}. Since $X$ has rational points, we conclude from Proposition~\ref{prop:same_jac} that $X(K) = X(\mathbb{Q})$, so the only $c \in K$ such that $f_c$ has a $K$-rational 5-cycle are $c \in \{-2,-16/9,-64/9\}$. However, as mentioned above, the points of period 5 must actually lie in a degree 5 extension of $K$, so $f_c$ has no $K$-rational points of period 5. \end{proof} \begin{thm}\label{thm:5cycle_cyclotomic} Let $K$ be the field $\mathbb{Q}(i)$ or $\mathbb{Q}(\omega)$. There is no element $c \in K$ for which $f_c$ admits a $K$-rational point of period 5. \end{thm} \begin{proof} In both cases, we have $X_0(5)(K) = X_0(5)(\mathbb{Q})$: For $K = \mathbb{Q}(\omega)$, this follows from the fact that the twist of $X_0(5)$ by $-3$ has rank $0$ over $\mathbb{Q}$, so the rank of $J_0(5)(K)$ is equal to $1$, and therefore Corollary~\ref{cor:5cycle} applies. For $K = \mathbb{Q}(i)$, the proof of this statement requires a somewhat involved Chabauty-Coleman type argument; we therefore defer the proof to Appendix~\ref{app:X0(5)}. \end{proof} \section[Preperiodic points over cyclotomic quadratic fields]{Preperiodic points over cyclotomic quadratic fields}\label{sec:specific} We now move from making general statements that hold over arbitrary quadratic fields to giving results over two particular quadratic fields---namely, the cyclotomic quadratic fields. Our main result is a conditional classification result like that of Poonen \cite{poonen:1998}, but over these two quadratic extensions of $\mathbb{Q}$ rather than over $\mathbb{Q}$ itself. We begin by restricting the cycle structures that can appear for a graph $G(f_c,K)$ with $K$ a quadratic cyclotomic field and $c \in K$. 
\begin{lem}\label{lem:structures} Let $K$ be the field $\mathbb{Q}(i)$ or $\mathbb{Q}(\omega)$, let $c \in K$, and assume $f_c$ does not admit $K$-rational points of period greater than $5$. If $G(f_c,K)$ is strongly admissible, then the cycle structure of $G(f_c,K)$ is $(1,1)$, $(2)$, $(3)$, or $(1,1,2)$. \end{lem} \begin{rem} It was shown by Erkama in \cite{erkama:2006} that if $c \in \mathbb{Q}(i)$, then $f_c$ cannot have $\mathbb{Q}(i)$-rational points of period 4. His proof uses different techniques from ours, including an interesting $2$-dimensional dynamical system that models iteration of the family $f_c$. \end{rem} \begin{proof}[Proof of Lemma~\ref{lem:structures}] A computation in Magma shows that $\operatorname{rk} J_1(4)(K) = 0$ for both fields $K$, thus by Proposition~\ref{prop:gen2} there is no $c \in K$ for which $f_c$ has a $K$-rational point of period $4$. Further, Theorem~\ref{thm:5cycle_cyclotomic} says that there is no $c \in K$ for which $f_c$ has a $K$-rational point of period $5$. Thus, we now assume that $f_c$ has no $K$-rational points of period greater than $3$. It follows from the results in \cite[\textsection 4]{doyle:2018quad} that if $K$ is {\it any} quadratic field, $c \in K$, and $G(f_c,K)$ is strongly admissible with no cycles of length greater than $3$, then the cycle structure of $G(f_c,K)$ must be one of the following: \[ (1,1),\ (2),\ (3),\ (1,1,2),\ (1,1,3),\ (2,3). \] It therefore remains to show that for both fields $K$ under consideration, if $f_c$ admits a $K$-rational point of period 3 for some $c \in K$, then $f_c$ has no $K$-rational points of period 1 or 2. Indeed, yet another Magma computation shows that for both fields $K$, $\operatorname{rk} J_1(1,3)(K) = \operatorname{rk} J_1(2,3)(K) = 0$, so the result follows from Proposition~\ref{prop:gen2}.
\end{proof} \begin{figure} \[ \footnotesize \xymatrix{ G_1 \ar[d]\ar[dr] & \text{10(1,1)a/b} \ar[d] & \text{10(2,1,1)b} \ar[ddl]\ar[dr] & \fbox{10(2,1,1)a} \ar[d]\ar[ddrr] & G_2 \ar[dr]\ar[drr] & G_3 \ar[dr] & \text{10(2)} \ar[d] & \text{10(3)a/b} \ar[d]\\ \text{8(1,1)a} \ar[dr] & \text{8(1,1)b} \ar[d] & & \dbox{\fbox{8(2,1,1)}} \ar[ddll]\ar[ddrr] & & \dbox{8(2)a} \ar[d] & \text{8(2)b} \ar[dl] & \dbox{\fbox{8(3)}} \ar[d]\\ & \dbox{\fbox{6(1,1)}} \ar[d] & & & & \dbox{\fbox{6(2)}} \ar[d] & & \dbox{\fbox{6(3)}} \\ & \dbox{\fbox{4(1,1)}} & & & & \dbox{\fbox{4(2)}} & } \] \caption{Strongly admissible graphs with at most ten vertices and cycle structure (1,1), (2), (3), or (1,1,2). There is a directed path from $G$ to $H$ if and only if $H \subset G$. A graph has a solid (resp., dashed) box around it if it is realized as $G(f_c,K)$ over $K = \mathbb{Q}(i)$ (resp., $K = \mathbb{Q}(\omega)$).} \label{fig:directed_system} \end{figure} We now briefly sketch an outline of the proof of Theorem~\ref{thm:main}, which we have separated into Propositions~\ref{prop:main_i} and \ref{prop:main_omega} for $\mathbb{Q}(i)$ and $\mathbb{Q}(\omega)$, respectively. For each cyclotomic quadratic field $K$, certain small strongly admissible graphs $G$ do occur as $G(f_c,K)$ for some $c \in K$; the known such graphs appear in Appendix~\ref{app:all_data} and are indicated for convenience in Figure~\ref{fig:directed_system}. For reference, Figure~\ref{fig:directed_system} actually includes \textit{all} strongly admissible graphs with at most ten vertices and having cycle structures allowed by Lemma~\ref{lem:structures}. For each cycle structure allowed by Lemma~\ref{lem:structures}, we then consider those strongly admissible graphs (with the given cycle structure) that are {\it minimal} among those not known to occur over $K$. 
For each such graph $G$, we show that any $K$-rational points on $X_1(G)(K)$ actually correspond to parameters $c \in K$ for which $G(f_c,K)$ is not isomorphic to $G$; i.e., we show that $U_1(G)(K)$ is empty. This final step relies on the results of \textsection \ref{sec:quad}. For many of the arguments, we use the fact that the Jacobians of certain dynamical modular curves have rank $0$ over various quadratic fields. In every such case, the relevant rank computations were performed using the method of $2$-descent implemented in Magma's \texttt{RankBound} function. We now prove Theorem~\ref{thm:main}, which we state as two separate propositions for convenience. As before, all graphs appear in Appendix~\ref{app:all_data}. \begin{prop}\label{prop:main_i} Let $K = \mathbb{Q}(i)$, and let $c \in K$. Suppose $f_c$ does not admit $K$-rational points of period greater than 5. Then $G(f_c,K)$ is isomorphic to one of the following fourteen graphs: \begin{center} \rm 0, 3(2), 4(1,1), 4(2), 5(1,1)a, 5(1,1)b, 5(2)a, 6(1,1), 6(2), 6(2,1), 6(3), 8(2,1,1), 8(3), 10(2,1,1)a. \end{center} \end{prop} \begin{proof} Each of the graphs listed does occur over $\mathbb{Q}(i)$, as seen in Appendix~\ref{app:all_data}. Also, it follows from \cite[\textsection 5]{doyle/faber/krumm:2014} that 3(2), 5(1,1)a/b, 5(2)a, and 6(2,1) are the only graphs that are not strongly admissible but that may be realized as $G(f_c,K)$ for some $c \in K$. We henceforth assume that $G(f_c,K)$ is strongly admissible, and by Lemma~\ref{lem:structures} we may further assume that the cycle structure of $G(f_c,K)$ is (1,1), (2), (3), or (1,1,2). For such $c \in K$, it suffices to show that $G(f_c,K)$ is isomorphic to one of the following: \begin{center} 0, 4(1,1), 4(2), 6(1,1), 6(2), 6(3), 8(2,1,1), 8(3), 10(2,1,1)a. 
\end{center} By considering the system of graphs in Figure~\ref{fig:directed_system}, we must show the following: \begin{enumerate} \item The graph $G(f_c,K)$ does not contain a subgraph isomorphic to 8(1,1)a, 8(1,1)b, 8(2)a, 8(2)b, or 10(2,1,1)b. \item The graph $G(f_c,K)$ does not {\it properly} contain a subgraph isomorphic to 10(2,1,1)a. \item If $G(f_c,K)$ contains a subgraph isomorphic to 8(3), then $c = -29/16$ and $G(f_c,K) \cong$ 8(3). \end{enumerate} Statement (A) holds by applying Proposition~\ref{prop:gen1} to each graph $G$ listed in part (A), since $J_1(G)(K)$ has rank $0$ for each such $G$. For (B), we first note that the only 12-vertex strongly admissible graphs containing 10(2,1,1)a are 12(2,1,1)a, $G_4$, and $G_6$. The graph $G_6$ contains 10(2,1,1)b, so by (A) it cannot be a subgraph of $G(f_c,K)$. It remains to show, then, that $G(f_c,K)$ cannot contain 12(2,1,1)a or $G_4$. For 12(2,1,1)a, this follows from \cite[Cor. 3.36]{doyle/faber/krumm:2014}. It was shown in \cite[Prop. 5.10]{doyle:2018quad} that $X_1(G_4)$ has a model of the form \[ \left\{ \begin{split} y^2 &= 2(x^3 + x^2 - x + 1)\\ z^2 &= 5x^4 + 8x^3 + 6x^2 - 8x + 5, \end{split} \right. \] and that any finite quadratic point $(x,y,z)$ on $X_1(G_4)$ satisfies $x \in \mathbb{Q}$ and $y,z \notin \mathbb{Q}$. Therefore, a finite $K$-rational (but not $\mathbb{Q}$-rational) point on $X_1(G_4)$ yields a {\it rational} point on the twist \begin{equation}\label{eq:twist-1} \left\{ \begin{split} -y^2 &= 2(x^3 + x^2 - x + 1)\\ -z^2 &= 5x^4 + 8x^3 + 6x^2 - 8x + 5. \end{split} \right. \end{equation} The curve $C$ defined by $-y^2 = 2(x^3 + x^2 - x + 1)$ is birational to the elliptic curve labeled 176B1 in \cite{cremona:1997}, which has a single rational point. Since $C$ has a rational point at infinity, $C$ has no finite rational points, hence there are no rational solutions to \eqref{eq:twist-1}. Therefore $G(f_c,K)$ cannot contain a graph isomorphic to $G_4$. 
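The claim that \eqref{eq:twist-1} has no rational solutions is proved above via the curve 176B1. As a quick plausibility check (evidence only, not a proof), a naive height-bounded search for rational points on the first equation of \eqref{eq:twist-1} finds nothing; the bound \texttt{N} below is arbitrary.

```python
from fractions import Fraction
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

def rational_square(r):
    # Fraction is stored in lowest terms, so r is a square iff both parts are
    return is_square(r.numerator) and is_square(r.denominator)

N = 60  # arbitrary height bound; raise it for a more thorough (still non-proof) search
found = []
for q in range(1, N + 1):
    for p in range(-3 * q, q):  # real solutions would need x below roughly -1.84
        x = Fraction(p, q)
        rhs = -2 * (x**3 + x**2 - x + 1)
        if rhs >= 0 and rational_square(rhs):
            found.append(x)
print(found)  # [] -- consistent with the 176B1 argument above
```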
Finally, for (C), we note that for $G = {\rm8(3)}$ we have $\operatorname{rk} J_1(G)(K) = 1$, which means (by Proposition~\ref{prop:gen2}) that the only $c \in K$ with $G(f_c,K)$ containing 8(3) is $c = -29/16$, and a simple calculation verifies that in this case $G(f_c,K) \cong {\rm8(3)}$. \end{proof} \begin{prop}\label{prop:main_omega} Let $K = \mathbb{Q}(\omega)$, and let $c \in K$. Suppose $f_c$ does not admit $K$-rational points of period greater than 5. Then $G(f_c,K)$ is isomorphic to one of the following thirteen graphs: \begin{center} \rm 0, 3(2), 4(1), 4(1,1), 4(2), 5(1,1)a, 6(1,1), 6(2), 6(3), 7(2,1,1)a, 8(2)a, 8(2,1,1), 8(3). \end{center} \end{prop} \begin{proof} Each of these thirteen graphs is realized over $\mathbb{Q}(\omega)$, as indicated in Appendix~\ref{app:all_data} (and Figure~\ref{fig:directed_system}). From \cite[\textsection 5]{doyle/faber/krumm:2014}, the only graphs $G(f_c,K)$ with $c \in K$ that are not strongly admissible are 3(2), 4(1), 5(1,1)a, and 7(2,1,1)a. Just as in the proof of Proposition~\ref{prop:main_i}, we need only consider those $c \in K$ such that $G(f_c,K)$ is strongly admissible with cycle structure (1,1), (2), (3), or (1,1,2). It suffices to show that for such $c \in K$ the graph $G(f_c,K)$ is isomorphic to one of the following: \begin{center} 0, 4(1,1), 4(2), 6(1,1), 6(2), 6(3), 8(2)a, 8(2,1,1), 8(3). \end{center} By considering Figure~\ref{fig:directed_system}, it remains to show the following: \begin{enumerate} \item The graph $G(f_c,K)$ does not contain 8(1,1)a, 8(1,1)b, 8(2)b, 10(2,1,1)a, or 10(2,1,1)b. \item If $G(f_c,K)$ contains a subgraph isomorphic to 8(3), then $c = -29/16$ and $G(f_c,K) \cong {\rm8(3)}$. \end{enumerate} Part (A) follows from Proposition~\ref{prop:gen1}, since $\operatorname{rk} J_1(G)(K) = 0$ for each of the graphs $G$ appearing in (A). Part (B) follows just as in the proof of Proposition~\ref{prop:main_i}, since the Jacobian of the curve associated to 8(3) also has rank $1$ over $K$. 
\end{proof}
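As a concrete check on the recurring parameter $c = -29/16$, a brute-force search over low-height rationals (a sketch with ad hoc height and iteration bounds of our own choosing) finds exactly the eight rational preperiodic points of $f_{-29/16}$, consistent with $G(f_{-29/16},\mathbb{Q}) \cong {\rm 8(3)}$.

```python
from fractions import Fraction

c = Fraction(-29, 16)

def f(z):
    return z * z + c

def is_preperiodic(z, max_iter=25, height_bound=10**6):
    """Exact orbit computation: z is preperiodic iff a value repeats."""
    seen = set()
    for _ in range(max_iter):
        if z in seen:
            return True
        seen.add(z)
        z = f(z)
        if abs(z.numerator) > height_bound or z.denominator > height_bound:
            return False  # orbit height is exploding: not preperiodic
    return False

# search all rationals p/q with 1 <= q <= 32 and |p| <= 64 (ad hoc bounds)
pts = sorted({Fraction(p, q) for q in range(1, 33) for p in range(-64, 65)
              if is_preperiodic(Fraction(p, q))})
print(len(pts), pts)  # 8 points: +-1/4, +-3/4, +-5/4, +-7/4
```

The eight points split as the $3$-cycle $(-1/4, -7/4, 5/4)$, its preimages $1/4$, $7/4$, $-5/4$, and the further preimages $\pm 3/4$ of $-5/4$.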
https://arxiv.org/abs/1907.13481
Wiener index of graphs with fixed number of pendant or cut vertices
The Wiener index of a connected graph is defined as the sum of the distances between all unordered pairs of its vertices. In this paper, we characterize the graphs which extremize the Wiener index among all graphs on $n$ vertices with $k$ pendant vertices. We also characterize the graph which minimizes the Wiener index over the graphs on $n$ vertices with $s$ cut vertices.
\section{Introduction} Throughout this paper, graphs are finite, simple, connected and undirected. Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G).$ For a vertex $v \in V(G),$ $N_G(v)$ denotes the set of all neighbours of $v$ in $G.$ A vertex of degree one is called a {\it pendant vertex}. A vertex $v$ of $G$ is called a {\it cut-vertex} if $G\setminus v$ is disconnected. The distance between two vertices $u,v \in V(G)$, denoted by $d_G(u,v)$ or $d(u,v)$ (if the context is clear), is the number of edges in a shortest path joining $u$ and $v$. The distance of a vertex $v \in V(G),$ denoted by $D_G(v)$, is defined as $D_G(v)= \sum_{u \in V(G)} d_G(u,v).$ We refer to \cite{We} for undefined notations and terminologies. The {\it Wiener index} of $G$, denoted by $W(G)$, is defined as the sum of distances between all unordered pairs of its vertices, {\it i.e.}, $$W(G)=\underset{\{u,v\}\subseteq V(G)}{\sum}d_G(u,v)=\frac{1}{2}\underset{v\in V(G)}{\sum}D_G(v).$$ Different names such as {\it graph distance} \cite{Ejs}, {\it transmission} \cite{Jp}, {\it total status} \cite{Buc} and {\it sum of all distances} \cite{G,Yg} have been used for the graphical invariant $W(G)$. The chemist H. Wiener was apparently the first to point out, in 1947 (see \cite{Wi}), that $W(G)$ is well correlated with certain physico-chemical properties of the organic compound from which $G$ is derived. The {\it mean distance} \cite{D1,W1} or {\it average distance} \cite{A,D} between the vertices is a quantity closely related to $W(G).$ If $G$ is viewed as an interconnection network connecting many processors, then the average distance between the nodes of the network measures the average delay in transmitting a message from one node to another. In the mathematical literature, the Wiener index was first studied by Entringer et al. in \cite{Ejs}.
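To illustrate the definition (a minimal Python sketch; the helper names \texttt{dist}, \texttt{W\_pairs}, and \texttt{W\_half} are ours), the two expressions for $W(G)$ above agree on the path $P_4$:

```python
from collections import deque
from itertools import combinations

def dist(adj, s):
    """BFS distances from vertex s in an adjacency-list dict."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

# the path P_4 on vertices 0-1-2-3
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

# W(G) as a sum over unordered pairs ...
W_pairs = sum(dist(P4, u)[v] for u, v in combinations(P4, 2))
# ... and as half the sum of the vertex distances D_G(v)
W_half = sum(sum(dist(P4, v).values()) for v in P4) // 2
print(W_pairs, W_half)  # 10 10
```

Both computations return $10$, matching the pair distances $1,1,1,2,2,3$ of $P_4$.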
This gave researchers an important direction: characterizing the graphs with extremal Wiener index in certain classes of graphs. In the last $20$ years, many studies of the optimal graphs in different classes of trees and unicyclic graphs have been carried out (see \cite{Hlw,Jt,Lp,T,Twl,W,Wg,Yf,Zx}). Apart from trees and unicyclic graphs, several other classes of graphs have also been studied with a view to characterizing the graphs having extremal Wiener index. The Wiener index of graphs with fixed maximum degree is studied in \cite{S}. The graphs with maximum and minimum Wiener index among all Eulerian graphs on $n$ vertices are characterized in \cite{Gcr}. The Wiener index of unicyclic graphs with a fixed number of pendant vertices or cut vertices is studied in \cite{Twl}. In this paper, we characterize the graphs having maximum and minimum Wiener index over all connected graphs on $n$ vertices with $k$ pendant vertices. We also obtain the graph which minimizes the Wiener index among all connected graphs on $n$ vertices with $s$ cut-vertices. \subsection{Main results} We first construct some classes of graphs. For $g < n$, let $U_{n,g}^p$ be the graph obtained by attaching $n-g$ pendant vertices at one vertex of the cycle $C_g$, and let $U_{n,g}^l$ be the graph obtained by joining a pendant vertex of the path $P_{n-g}$ to a vertex of $C_g$ by an edge. Let $\mathfrak{H}_{n,k}$ denote the class of all connected graphs on $n$ vertices and $k$ pendant vertices. Let $\mathfrak{T}_{n,k}$ be the subclass of $\mathfrak{H}_{n,k}$ containing all the trees on $n$ vertices and $k$ pendant vertices. The path $[v_1 v_2\ldots v_n]$ on $n$ vertices is denoted by $P_n$. For positive integers $k,l,d$ with $n=k+l+d$, let $T(k,l,d)$ be the tree obtained by taking the path $P_d$ and adding $k$ pendant vertices adjacent to $v_1$ and $l$ pendant vertices adjacent to $v_d$. Note that $T(1,1,d)$ is a path on $d+2$ vertices. We define a specific subclass of graphs in $\mathfrak{H}_{n,0}$ as follows.
Let $m_1,m_2$ and $n$ be positive integers with $m_1,m_2\geq 3$ and $n \geq m_1+m_2-1.$ If $n > m_1+m_2-1,$ take a path on $n-(m_1+m_2)+2$ vertices and identify one pendant vertex of the path with a vertex of $C_{m_1}$ and another pendant vertex with a vertex of $C_{m_2}.$ If $n= m_1+m_2-1$, then identify one vertex of $C_{m_1}$ with a vertex of $C_{m_2}.$ We denote this graph by $C_{m_1,m_2}^n$. In this paper, we prove the following results: \begin{theorem}\label{max-thm0} Let $0\leq k \leq n-2$ and let $G \in \mathfrak {H}_{n,k}$. Then \begin{enumerate} \item[(i)] for $2 \leq k \leq n-2,$ $W(G)\leq W\left(T(\lfloor\frac{k}{2}\rfloor,\lceil \frac{k}{2}\rceil,n-k)\right)$ and equality happens if and only if $G= T(\lfloor\frac{k}{2}\rfloor,\lceil \frac{k}{2}\rceil,n-k).$ Furthermore, $W\left(T(\left\lfloor\frac{k}{2}\right\rfloor,\left\lceil \frac{k}{2}\right\rceil,n-k)\right)=$ $$\begin{cases}{n-k+1 \choose 3}+\frac{k^2}{4}(n-k+3)+\frac{k}{2}[(n-k)^2+n-k-2] &\mbox{if k is even} \\ {n-k+1 \choose 3}+\frac{k^2-1}{4}(n-k+3)+\frac{k}{2}[(n-k)^2+n-k-2]+1 & \mbox{if k is odd.}\end{cases}$$ \item[(ii)] for $k=1,$ $W(G) \leq W(U_{n,3}^l)$ and equality holds if and only if $G=U_{n,3}^l.$ Furthermore, $$W(U_{n,3}^l)=\frac{n^3-7n+12}{6}.$$ \item[(iii)] for $k=0$ and $n\geq 7$, $W(G)\leq W(C_{3,3}^n)$ and equality holds if and only if $G=C_{3,3}^n.$ Furthermore, $$W(C_{3,3}^n)=\frac{n^3-13n+24}{6}.$$ \end{enumerate} \end{theorem} For $ 0\leq k \leq n-3 $ and $ n \geq 4,$ let $P_n^k $ be the graph obtained by adding $k$ pendant vertices at one vertex of the complete graph $K_{n-k}.$ \begin{theorem}\label{min-thm0} Let $0\leq k \leq n-2$ and let $G \in \mathfrak {H}_{n,k}$. Then \begin{enumerate} \item[(i)] for $0 \leq k \leq n-3$, $ W(P_n^k) \leq W(G) $ and equality holds if and only if $ G=P_{n}^k $. Furthermore, $$W(P_n^k)= {n-k \choose 2}+k^2+2k(n-k-1).$$ \item[(ii)] for $k=n-2,$ $W(T(1,n-3,2)) \leq W(G)$ and equality holds if and only if $G=T(1,n-3,2)$. 
Furthermore, $$W(T(1,n-3,2))=n^2-n-2.$$ \end{enumerate} \end{theorem} Let $T_{n,k} \in \mathfrak{T}_{n,k}$ be the tree that has a vertex $v$ of degree $k$ and $T_{n,k}\setminus v = r P_{q+1} \cup (k-r) P_q$, where $q=\lfloor\frac{n-1}{k} \rfloor$ and $r = n-1 -kq$. Here, we have $0\leq r <k.$ \begin{theorem}\label{min-thm1} Let $2\leq k\leq n-2$ and $T \in \mathfrak{T}_{n,k}.$ Then $W(T_{n,k}) \leq W(T)$ and equality holds if and only if $T=T_{n,k}.$ \end{theorem} Let $\mathfrak{C}_{n,s}$ be the set of all connected graphs on $n$ vertices and $s$ cut vertices. For $2\leq m\leq n,$ let $v_1,v_2,\ldots,v_m$ be the vertices of a complete graph $K_m$. For $i=1,2,\ldots,m$, consider paths $P_{l_i}$ such that $l_1+l_2+\cdots+l_m=n$. For $i=1,2,\ldots,m$, identify a pendant vertex of the path $P_{l_i}$ with the vertex $v_i$ to obtain a graph on $n$ vertices, which we denote by $K_m^n(l_1,l_2,\ldots, l_m)$. \begin{theorem}\label{min-thm2} Let $0\leq s\leq n-3$. Then the graph $K_{n-s}^n(l_1,l_2,\ldots,l_{n-s})$ with $|l_i-l_j|\leq 1$ for all $i,j\in \{1,2,\ldots,n-s\}$ has the minimum Wiener index over $\mathfrak{C}_{n,s}.$ \end{theorem} In the next section, we discuss some results related to the Wiener index of graphs which are useful in proving our main theorems. \section{Preliminaries} \label{section-2} We start this section with the following lemma. \begin{lemma} \label{ed} Let $G$ be a graph and let $u,v \in V(G)$ be non-adjacent. Let $G'$ be the graph obtained from $G$ by joining the vertices $u$ and $v$ by an edge. Then $W(G')<W(G).$ \end{lemma} It follows from Lemma \ref{ed} that among all connected graphs on $n$ vertices, the Wiener index is minimized by the complete graph $K_n$ and maximized by a tree. Among all trees on $n$ vertices, the Wiener index is minimized by the star $K_{1,n-1}$ and maximized by the path $P_n$ (see \cite{We}, Theorem 2.1.14).
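The extremal statements above can be spot-checked for small parameters by enumerating all labeled trees via Pr\"ufer sequences (a pure-Python sketch; the helper names are ours). Since the extremal graphs in Theorems~\ref{max-thm0}(i) and \ref{min-thm1} are trees, for $n=7$ and $k=4$ the maximum of $W$ over trees with exactly $k$ pendant vertices should match the closed form for $T(2,2,3)$, and the minimum should be attained with the value of the balanced spider $T_{7,4}$:

```python
from collections import deque
from itertools import product
from math import comb

def wiener(adj):
    """Wiener index via one BFS per vertex."""
    total = 0
    for s in adj:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        total += sum(d.values())
    return total // 2

def prufer_to_tree(seq, n):
    """Standard Prufer decoding: bijection with labeled trees on {0,...,n-1}."""
    adj = {v: [] for v in range(n)}
    deg = [1] * n
    for v in seq:
        deg[v] += 1
    for v in seq:
        leaf = next(u for u in range(n) if deg[u] == 1)  # smallest current leaf
        adj[leaf].append(v)
        adj[v].append(leaf)
        deg[leaf] -= 1
        deg[v] -= 1
    u, w = (x for x in range(n) if deg[x] == 1)
    adj[u].append(w)
    adj[w].append(u)
    return adj

n, k = 7, 4
vals = [wiener(T) for seq in product(range(n), repeat=n - 2)
        for T in [prufer_to_tree(seq, n)]
        if sum(1 for v in T if len(T[v]) == 1) == k]

d = n - k
max_formula = comb(d + 1, 3) + (k * k // 4) * (d + 3) + (k // 2) * (d * d + d - 2)
print(max(vals), max_formula, min(vals))  # 48 48 44
```

Both values agree with a hand computation: $W(T(2,2,3)) = 48$ and $W(T_{7,4}) = 44$ (the spider with legs of lengths $2,2,1,1$).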
It is easy to determine the Wiener index of the following graphs (see \cite{We}): (i) $W(K_n)={n \choose 2}$, (ii) $W(P_n)={n+1 \choose 3}$, (iii) $W(K_{1,n-1})=(n-1)^2.$ The Wiener index of the cycle $C_n$ is (see \cite{Jp}, Theorem 5) \begin{equation}\label{eq-c1} W(C_n)=\begin{cases} \frac{1}{8}n^3 &\textit{if n is even}\\ \frac{1}{8}n(n^2-1) & \textit{if n is odd.}\end{cases} \end{equation} Also for $u\in V(C_n)$, \begin{equation}\label{eq-c2} D_{C_n}(u)=\begin{cases}\frac{n^2}{4} &\textit{if n is even} \\ \frac{n^2-1}{4} & \textit{if n is odd.}\end{cases} \end{equation} The following lemma is very useful. \begin{lemma} (\cite{Bsv}, Lemma 1.1)\label{count} Let $G$ be a graph and $u$ be a cut vertex in $G$. Let $G_1$ and $G_2$ be two subgraphs of $G$ with $G= G_1 \cup G_2$ and $V(G_1) \cap V(G_2)= \{u\}$. Then $$W(G)=W(G_1)+W(G_2)+(|V(G_1)|-1)D_{G_2}(u)+(|V(G_2)|-1)D_{G_1}(u).$$ \end{lemma} \begin{corollary}\label{new1} Let $G$ and $H$ be two connected graphs having at least $2$ vertices each. Let $u,v \in V(G)$ and $w \in V(H)$. Let $G_1$ and $G_2$ be the graphs obtained from $G$ and $H$ by identifying the vertex $w$ of $H$ with the vertices $u$ and $v$ of $G$, respectively. If $D_G(v)\geq D_G(u)$ then $W(G_2)\geq W(G_1)$ and equality happens if and only if $D_G(v) = D_G(u).$ \end{corollary} \begin{proof} By Lemma \ref{count}, $$W(G_1)=W(G)+W(H)+(|V(G)|-1)D_H(w)+(|V(H)|-1)D_G(u)$$ and $$W(G_2)=W(G)+W(H)+(|V(G)|-1)D_H(w)+(|V(H)|-1)D_G(v).$$ So $$W(G_2)-W(G_1)=(|V(H)|-1)(D_G(v)-D_G(u))$$ and the result follows. \end{proof} Let $G$ be a connected graph on $n\geq 2$ vertices. Let $v$ be a vertex of $G.$ For $l,k \geq 1,$ let $G_{k,l}$ be the graph obtained from $G$ by attaching two new paths $P:vv_{1}v_{2}\cdots v_{k}$ and $Q:vu_{1}u_{2}\cdots u_{l}$ of lengths $k$ and $l$ respectively, at $v$, where $u_{1},u_{2},\ldots,u_{l}$ and $v_{1},v_{2},\ldots,v_{k}$ are distinct new vertices.
Let ${\widetilde G}_{k,l}$ be the graph obtained from $G_{k,l}$ by removing the edge $\{v_{k-1},v_{k}\}$ and adding the edge $\{u_{l},v_{k}\}$. Observe that the graph ${\widetilde G}_{k,l}$ is isomorphic to the graph $G_{k-1,l+1}$. We say that ${\widetilde G}_{k,l}$ is obtained from $G_{k,l}$ by {\em grafting} an edge. Consider the path $P_n:v_1 v_2\ldots v_n$ on $n$ vertices with $v_i$ adjacent to $v_{i-1}$ and $v_{i+1}$ for $2\leq i\leq n-1.$ Then for $i=1,2,\ldots,n$, $$D_{P_n}(v_i)=D_{P_n}(v_{n-i+1})=\dfrac{(n-i)(n-i+1)+i(i-1)}{2}.$$ So, if $n$ is odd, then $$D_{P_n}(v_1)>D_{P_n}(v_2)>\cdots>D_{P_n}(v_{\frac{n+1}{2}})<D_{P_n}(v_{\frac{n+3}{2}})<\cdots <D_{P_n}(v_{n-1})<D_{P_n}(v_n)$$ and if $n$ is even, then $$D_{P_n}(v_1)>D_{P_n}(v_2)>\cdots>D_{P_n}(v_{\frac{n}{2}})=D_{P_n}(v_{\frac{n+2}{2}})<\cdots<D_{P_n}(v_{n-1})<D_{P_n}(v_n).$$ The next result follows from the above and Corollary \ref{new1}. \begin{corollary}\label{effect-2}(\cite{Lp}, Lemma 2.4) If $1\leq k \leq l,$ then $W(G_{k-1,l+1})>W(G_{k,l}).$ \end{corollary} The following result compares the Wiener index of two graphs, where one is obtained from the other by moving one component from a vertex to another vertex. \begin{lemma}(\cite{Ll}, Lemma 2.4)\label{effect-0} Let $H,X,Y$ be three connected pairwise vertex disjoint graphs having at least $2$ vertices each. Suppose that $u$ and $v$ are two distinct vertices of $H$, $x$ is a vertex of $X$ and $y$ is a vertex of $Y.$ Let $G$ be the graph obtained from $H, X, Y$ by identifying $u$ with $x$ and $v$ with $y$, respectively. Let $G_1^*$ be the graph obtained from $H, X, Y$ by identifying vertices $u,x,y$ and let $G_2^*$ be the graph obtained from $H, X, Y$ by identifying vertices $v,x,y$ (see Figure \ref{fig}). Then $W(G_1^*)< W(G)$ or $W(G_2^*) < W(G)$.
\end{lemma} \begin{figure}[h] \begin{center} \begin{tikzpicture} \filldraw (1,0)node[right]{u} circle [radius=.3mm]; \filldraw (3,0)node[left]{v} circle [radius=.3mm] (2,0)node{H} (0,.7)node{X} (4,.7)node{Y} (2,-2.5)node{G}; \draw (1,0)..controls(2,2)..(3,0)..controls(2,-2)..(1,0); \draw (1,0) to[out=90,in=25] (-.5,1) to[out=275,in=180] (1,0); \draw (3,0) to[out=90,in=155] (4.5,1) to[out=275,in=0] (3,0); \end{tikzpicture} \hskip 1cm \begin{tikzpicture} \filldraw (1,0)node[right]{u} circle [radius=.3mm]; \filldraw (3,0)node[left]{v} circle [radius=.3mm] (2,0)node{H} (0,.7)node{X} (0,-.7)node{Y}(2,-2.5)node{$G_1^*$}; \draw (1,0)..controls(2,2)..(3,0)..controls(2,-2)..(1,0); \draw (1,0) to[out=90,in=25] (-.5,1) to[out=275,in=180] (1,0); \draw (1,0) to[out=180,in=90] (-.5,-1) to[out=-25,in=250] (1,0); \end{tikzpicture} \hskip 1cm \begin{tikzpicture} \filldraw (1,0)node[right]{u} circle [radius=.3mm]; \filldraw (3,0)node[left]{v} circle [radius=.3mm] (2,0)node{H} (4,.7)node{X} (4,-.7)node{Y} (2,-2.5)node{$G_2^*$}; \draw (1,0)..controls(2,2)..(3,0)..controls(2,-2)..(1,0); \draw (3,0) to[out=-90,in=-155] (4.5,-1) to[out=-275,in=0] (3,0); \draw (3,0) to[out=90,in=155] (4.5,1) to[out=275,in=0] (3,0); \end{tikzpicture} \end{center} \caption{Movement of a component from one vertex to another}\label{fig} \end{figure} \begin{corollary}\label{effect-1} Let $G$ be a connected graph on $n \geq 2 $ vertices and let $u,v \in V(G)$. For $n_1,n_2 \geq 0,$ let $G_{uv}(n_1,n_2)$ be the graph obtained from $G$ by attaching $n_1$ pendant vertices at $u$ and $n_2$ pendant vertices at $v$. If $n_1,n_2 \geq 1$, then $$W(G_{uv}(n_1+n_2,0))< W(G_{uv}(n_1,n_2))\;\;\mbox{or} \;\; W(G_{uv}(0,n_1+n_2))< W(G_{uv}(n_1,n_2)).$$ \end{corollary} In \cite{Yf}, Lemma 2.6, if we take $G_0=P_{n_0}$ and take $u_0$ and $v_0$ to be two distinct pendant vertices of $G_0$, then $G_0\cong G_1 \cong G_2.$ So $W(G_0)=W(G_1)=W(G_2)$, and hence that lemma is not true as stated.
In the following result, we give a proof of a corrected version of it. \begin{lemma}\label{effect-3} Let $G$ be a connected graph on $n \geq 3$ vertices and $u,v \in V(G).$ For $l,k\geq 1,$ let $G_{uv}^p(l,k)$ be the graph obtained from $G$ by identifying a pendant vertex of the path $P_l$ with $u$ and identifying a pendant vertex of the path $P_k$ with $v$. Suppose $l,k\geq 2.$ If $G$ is not the $u$-$v$ path and $D_G(u)\geq D_G(v)$, then $$W(G_{uv}^p(l+k-1,1)) >W(G_{uv}^p(l,k)).$$ \end{lemma} \begin{proof} First, consider the graph $G_{uv}^p(l,1)$ as $H$ and let $w$ be the pendant vertex of $H$ corresponding to $P_l.$ Then by Lemma \ref{count}, $$W(G_{uv}^p(l,k))=W(H)+W(P_k)+(|V(H)|-1)D_{P_k}(v)+(k-1)D_H(v)$$ and $$W(G_{uv}^p(l+k-1,1))=W(H)+W(P_k)+(|V(H)|-1)D_{P_k}(w)+(k-1)D_H(w).$$ Since $D_{P_k}(v)=D_{P_k}(w)$, we get $$W(G_{uv}^p(l+k-1,1))-W(G_{uv}^p(l,k))=(k-1)(D_H(w)-D_H(v)).$$ Now $$D_H(w)=D_{P_{l-1}}(w)+(l-1)|V(G)|+D_G(u)$$ and $$D_H(v)=D_G(v)+(l-1)(d_G(u,v)+1)+D_{P_{l-1}}(u'),$$ where $u'$ is the vertex on the path $P_l$ adjacent to $u.$ Since $D_{P_{l-1}}(w)=D_{P_{l-1}}(u')$, we have $$D_H(w)-D_H(v)=(l-1)(|V(G)|-d_G(u,v)-1)+D_G(u)-D_G(v).$$ As $l\geq 2$ and $G$ is not the $u$-$v$ path, $(l-1)(|V(G)|-d_G(u,v)-1)>0$. Hence the result follows from the given condition $D_G(u)\geq D_G(v).$ \end{proof} The Wiener indices of $U_{n,g}^p$ and $U_{n,g}^l$ are useful for our results and can be found in \cite{Yf} (see Theorem 1.1).
\begin{equation}\label{eq1} W(U_{n,g}^p)=\begin{cases} \frac{g^3}{8}+(n-g)(\frac{g^2}{4}+n-1) &\textit{ if g is even} \\ \frac{g(g^2-1)}{8}+(n-g)(\frac{g^2-1}{4}+n-1) & \textit{if g is odd}\end{cases} \end{equation} \begin{equation}\label{eq2} W(U_{n,g}^l)=\begin{cases} \frac{g^3}{8}+(n-g)(\frac{n^2+ng+3g-1}{6}-\frac{g^2}{12}) &\mbox{ if g is even} \\ \frac{g(g^2-1)}{8}+(n-g)(\frac{n^2+ng+3g-1}{6}-\frac{g^2}{12}-\frac{1}{4}) & \mbox{if g is odd}\end{cases} \end{equation} We next calculate the Wiener indices of some more trees, which we need for the extremal bounds in some of our results. Let $S_{d,k}$ be the tree obtained by identifying a pendant vertex of the path $P_d$ with the central vertex of the star $K_{1,k}$. Using Lemma \ref{count}, it is easy to see that \begin{equation}\label{eq3} W(S_{d,k})= {d+1 \choose 3}+k^2+(d-1)k+\frac{d(d-1)k}{2}. \end{equation} Then by Lemma \ref{count}, using the values of $W(S_{d,k})$ and $W(K_{1,l})$, we get \begin{equation}\label{eq4} W(T(l,k,d))={d+1 \choose 3}+l^2+k^2+\frac{(d^2+d-2)(k+l)}{2}+(d+1)kl. \end{equation} For $l \geq 2$ and $q \geq 1$, let $T_l^q$ be the tree on $lq+1$ vertices with $l$ pendant vertices having one vertex $v$ of degree $l$ such that $T_l^q-v=lP_q$ ($l$ copies of $P_q$). Note that $T_1^q$ is the path $P_{q+1}.$ Then \begin{equation}\label{eq5} D_{T_l^q}(v)=l+2l+\cdots+ql=\frac{lq(q+1)}{2}. \end{equation} Now by Lemma \ref{count}, \begin{align*} W(T_l^q)&=W(T_{l-1}^q)+W(T_1^q)+(l-1)qD_{T_1^q}(v)+qD_{T_{l-1}^q}(v)\\ &=W(T_{l-1}^q)+{q+2 \choose 3}+(l-1)q^2(q+1). \end{align*} Solving this recurrence relation, we get \begin{equation}\label{eq6} W(T_l^q)=l{q+2 \choose 3}+\frac{q^2l(q+1)(l-1)}{2}. \end{equation} \section{Proofs of Theorem \ref{max-thm0}, Theorem \ref{min-thm0} and Theorem \ref{min-thm1}} We first recall three known results related to the Wiener index of graphs.
\begin{theorem}\label{pmax-thm1}(\cite{Shi}, Theorem 4) For $2\leq k\leq n-2,$ the tree $T(\lfloor\frac{k}{2}\rfloor,\lceil\frac{k}{2}\rceil,n-k)$ maximizes the Wiener index over $\mathfrak{T}_{n,k}.$ \end{theorem} \begin{theorem}\label{pmax-thm3}(\cite{Yf}, Corollary 1.2) Among all unicyclic graphs on $n > 4$ vertices, the graph $U_{n,3}^l$ has the maximum Wiener index. \end{theorem} \begin{theorem} (\cite{Jp}, Theorem 5) \label{plesnic} Let $G$ be a $2$-connected graph on $n$ vertices. Then $W(G)\leq W(C_n)$, and equality holds if and only if $G=C_n.$ \end{theorem} We now compare the Wiener indices of the graphs $C_{3,3}^n$ and $C_n.$ \begin{lemma}\label{3cycles} For $n \geq 6$, $W(C_n)\leq W(C_{3,3}^n)$ and equality holds if and only if $n=6.$ \end{lemma} \begin{proof} By (\ref{eq2}), we have $W(U_{n,3}^l)=\frac{n^3-7n+12}{6}$. If $u$ is the pendant vertex of $U_{n,3}^l$, then $D_{U_{n,3}^l}(u)=D_{P_{n-2}}(u)+2(n-2)=\frac{(n-3)(n-2)}{2}+2n-4=\frac{n^2-n-2}{2}.$ For $n \geq 6$, let $u$ be the cut vertex of $C_{3,3}^n$ common to $C_3$ and $U_{n-2,3}^l$. Then by Lemma \ref{count}, \begin{align}\label{eq7} W(C_{3,3}^n)&=W(C_3)+W(U_{n-2,3}^l)+2D_{U_{n-2,3}^l}(u)+2(n-3) \nonumber\\ &=3+\frac{(n-2)^3-7(n-2)+12}{6}+(n-2)^2-(n-2)-2+2n-6 \nonumber\\ &=\frac{n^3-13n+24}{6}. \end{align} By (\ref{eq-c1}) and (\ref{eq7}), we have $$W(C_{3,3}^n)-W(C_n)=\begin{cases} \frac{n(n^2-52)}{24}+4, & \mbox{if $n$ is even} \\ \frac{n(n^2-49)}{24}+4, & \mbox{if $n$ is odd.}\end{cases}$$ Hence the result follows. \end{proof} \begin{lemma}\label{c-cmn} Let $m_1,m_2 \geq 3$ be two integers and let $n=m_1+m_2-1$. Then $W(C_n)>W(C_{m_1,m_2}^n).$ \end{lemma} \begin{proof} Let $v$ be the vertex of degree $4$ in $C_{m_1,m_2}^n$. First suppose $n$ is even. Then one of $m_1$ and $m_2$ is odd and the other is even. Without loss of generality, suppose $m_1$ is odd and $m_2$ is even.
Then by Lemma \ref{count}, (\ref{eq-c1}) and (\ref{eq-c2}), we have \begin{align*} W(C_{m_1,m_2}^n)&=W(C_{m_1})+W(C_{m_2})+(m_2-1)D_{C_{m_1}}(v)+(m_1-1)D_{C_{m_2}}(v)\\ &= \frac{m_1^3-m_1}{8}+\frac{m_2^3}{8}+(m_2-1)\frac{m_1^2-1}{4}+(m_1-1)\frac{m_2^2}{4}\\ &=\frac{1}{8}(m_1^3+m_2^3+2m_1^2m_2+2m_1m_2^2-2m_1^2-2m_2^2-m_1-2m_2+2) \end{align*} and \begin{align*} W(C_n)&=\frac{1}{8}(m_1+m_2-1)^3\\ &=\frac{1}{8}(m_1^3+m_2^3+3m_1^2m_2+3m_1m_2^2-3m_1^2-3m_2^2-6m_1m_2+3m_1+3m_2-1). \end{align*} The difference is \begin{align*} W(C_n)-W(C_{m_1,m_2}^n)&=\frac{1}{8}(m_1^2m_2+m_1m_2^2-m_1^2-m_2^2-6m_1m_2+4m_1+5m_2-3)\\ &=\frac{1}{8}\left((m_2-1)m_1^2+(m_1-1)m_2^2+4m_1+5m_2-6m_1m_2-3\right). \end{align*} An easy calculation gives \begin{equation*} W(C_n)-W(C_{m_1,m_2}^n)\begin{cases} = \frac{1}{4}m_2(m_2-2), & \mbox{if $m_1=3$}\\ \geq\frac{1}{8}(3(m_1-m_2)^2+4m_1+5m_2-3), & \mbox{if $m_1 \geq 5$,}\\ \end{cases} \end{equation*} which is greater than $0.$\\ Now suppose $n$ is odd. Then there are two possibilities.\\ \textbf{Case 1:} Both $m_1$ and $m_2$ are even. \begin{align*} W(C_{m_1,m_2}^n)&=W(C_{m_1})+W(C_{m_2})+(m_2-1)D_{C_{m_1}}(v)+(m_1-1)D_{C_{m_2}}(v)\\ &=\frac{m_1^3}{8}+\frac{m_2^3}{8}+(m_2-1)\frac{m_1^2}{4}+(m_1-1)\frac{m_2^2}{4}\\ &=\frac{1}{8}(m_1^3+m_2^3+2m_2m_1^2+2m_1m_2^2-2m_1^2-2m_2^2) \end{align*} and \begin{align*} W(C_n)&=W(C_{m_1+m_2-1})\\ &=\frac{1}{8}\left((m_1+m_2-1)^3-(m_1+m_2-1)\right)\\ &=\frac{1}{8}(m_1^3+m_2^3+3m_1^2m_2+3m_1m_2^2-3m_1^2-3m_2^2-6m_1m_2+2m_1+2m_2). \end{align*} The difference is \begin{align*} W(C_n)-W(C_{m_1,m_2}^n)&=\frac{1}{8}\left((m_1-1)m_2^2+(m_2-1)m_1^2-6m_1m_2+2m_1+2m_2\right)\\ & \geq \frac{1}{8}\left( 3(m_1-m_2)^2+2m_1+2m_2 \right) \\ &>0. \end{align*} \textbf{Case 2:} Both $m_1$ and $m_2$ are odd.
\begin{align*} W(C_{m_1,m_2}^n)&=\frac{m_1^3-m_1}{8}+\frac{m_2^3-m_2}{8}+(m_2-1)\frac{m_1^2-1}{4}+(m_1-1)\frac{m_2^2-1}{4}\\ &=\frac{1}{8}(m_1^3+m_2^3+2m_2m_1^2+2m_1m_2^2-2m_1^2-2m_2^2-3m_1-3m_2+4) \end{align*} and the difference is $$W(C_n)-W(C_{m_1,m_2}^n)=\frac{1}{8}\left((m_1-1)m_2^2+(m_2-1)m_1^2-6m_1m_2+5m_1+5m_2-4\right).$$ An easy calculation gives \begin{align*} W(C_n)-W(C_{m_1,m_2}^n) \begin{cases} >\frac{1}{8}\left( 3(m_1-m_2)^2+5m_1+5m_2-4 \right), & \mbox{if $m_1,m_2 \geq 5$} \\ =\frac{1}{8}(2m_2^2-4m_2+2), &\mbox{if $m_1=3$}\\ =\frac{1}{8}(2m_1^2-4m_1+2), &\mbox{if $m_2=3$,} \end{cases} \end{align*} which is greater than $0$, and this completes the proof. \end{proof} \begin{lemma}\label{24} Let $u$ be the pendant vertex and let $v$ be a non-pendant vertex of the unicyclic graph $U_{n,g}^l$. Then $D_{U_{n,g}^l}(u)>D_{U_{n,g}^l}(v)$. \end{lemma} \begin{proof} Let $g$ denote the vertex of degree $3$ in $U_{n,g}^l$ and let $g+1$ be the vertex adjacent to $g$ not on the $g$-cycle of $U_{n,g}^l$. Then \begin{equation}\label{lol} D_{U_{n,g}^l}(u)=D_{P_{n-g+1}}(u)+(g-1)(n-g)+D_{C_g}(g). \end{equation} If $v$ is a vertex on the cycle $C_g$ of $U_{n,g}^l$, then $$D_{U_{n,g}^l}(v)=D_{C_g}(v)+d(v,g)(n-g)+D_{P_{n-g+1}}(g),$$ and if $w$ is a non-pendant vertex of $U_{n,g}^l$ which is not on the cycle, then $$D_{U_{n,g}^l}(w)=D_{P_{n-g+1}}(w)+d(w,g)(g-1)+D_{C_g}(g).$$ Since $D_{P_{n-g+1}}(u)=D_{P_{n-g+1}}(g)$, $D_{P_{n-g+1}}(u)>D_{P_{n-g+1}}(w)$ and $D_{C_g}(g)=D_{C_g}(v)$, we have \begin{align*} D_{U_{n,g}^l}(u)-D_{U_{n,g}^l}(v)&=(n-g)(g-1-d(v,g))>0,\\ D_{U_{n,g}^l}(u)-D_{U_{n,g}^l}(w)& > (g-1)(n-g-d(w,g))>0. \end{align*} \end{proof} The next corollary follows from Lemma \ref{24} and Corollary \ref{new1}. \begin{corollary}\label{3uni1} Let $G$ be a connected graph with at least two vertices and let $u\in V(G).$ Suppose $v$ is the pendant vertex of $U_{n,g}^l$ and $w$ is a non-pendant vertex of $U_{n,g}^l$.
Let $G_1$ and $G_2$ be the graphs obtained from $G$ and $U_{n,g}^l$ by identifying the vertex $u$ of $G$ with the vertices $v$ and $w$ of $U_{n,g}^l$, respectively. Then $W(G_1)>W(G_2).$ \end{corollary} \begin{lemma}\label{3uni2} Let $u$ be a vertex of a connected graph $G.$ For $m\geq 4,$ let $G_1$ be the graph obtained by identifying the vertex $u$ of $G$ with the pendant vertex of $U_{m+1,m}^l$, and let $G_2$ be the graph obtained by identifying the vertex $u$ with the pendant vertex of $U_{m+1,3}^l$. Then $W(G_2)>W(G_1)$. \end{lemma} \begin{proof} By Lemma \ref{count}, we have $$W(G_1)=W(G)+W(U_{m+1,m}^l)+(|V(G)|-1)D_{U_{m+1,m}^l}(u)+mD_G(u)$$ and $$W(G_2)=W(G)+W(U_{m+1,3}^l)+(|V(G)|-1)D_{U_{m+1,3}^l}(u)+mD_G(u).$$ By Theorem \ref{pmax-thm3}, $W(U_{m+1,3}^l)>W(U_{m+1,m}^l)$. So the difference satisfies $$W(G_2)-W(G_1)>(|V(G)|-1)(D_{U_{m+1,3}^l}(u)-D_{U_{m+1,m}^l}(u)).$$ By (\ref{lol}), we have $D_{U_{m+1,3}^l}(u)=\frac{(m-1)(m+2)}{2}$ and \begin{equation*} D_{U_{m+1,m}^l}(u)=\begin{cases}m+\frac{m^2}{4} &\mbox{if $m$ is even} \\ m+\frac{m^2-1}{4} & \mbox{if $m$ is odd.}\end{cases} \end{equation*} So \begin{equation*} D_{U_{m+1,3}^l}(u)-D_{U_{m+1,m}^l}(u)=\begin{cases}\frac{m^2-2m-4}{4} &\mbox{if $m$ is even}\\ \frac{m^2-2m-3}{4} &\mbox{if $m$ is odd,} \end{cases} \end{equation*} which is greater than $0$, and this completes the proof. \end{proof} \begin{corollary}\label{cor1} Let $m_1,m_2\geq 3$ be two integers with $m_1+m_2\leq n.$ Then $W(C_{3,3}^n)\geq W(C_{m_1,m_2}^n)$, and equality holds if and only if $m_1=m_2=3.$ \end{corollary} \begin{proof}[{\bf Proof of Theorem \ref{max-thm0}:}] \begin{enumerate} \item[(i)] Let $G \in \mathfrak{H}_{n,k}.$ Construct a spanning tree $G'$ from $G$ by deleting some edges if required. Then by Lemma \ref{ed}, $W(G') \geq W(G)$. The number of pendant vertices of $G'$ is greater than or equal to $k$. Suppose $G'$ has more than $k$ pendant vertices. Since $k\geq 2$, $G'$ has at least one vertex of degree greater than $2$ with two paths attached to it.
Consider a vertex $v$ of $G'$ with $d(v)\geq 3$ and two paths $P_{l_1},P_{l_2},\; l_1\geq l_2,$ attached at $v.$ Using the edge-grafting operation on $G'$, we get a new tree $\tilde{G}$ whose number of pendant vertices is one less than that of $G'$, and by Corollary \ref{effect-2}, $W(\tilde{G})>W(G').$ Continue this process until we obtain a tree with $k$ pendant vertices. By Corollary \ref{effect-2}, the Wiener index increases at every step of this process, so we reach a tree of order $n$ with $k$ pendant vertices. Hence the result follows from Theorem \ref{pmax-thm1}. Replacing $d, l$ and $k$ by $n-k,\lfloor \frac{k}{2} \rfloor$ and $\lceil \frac{k}{2} \rceil$, respectively, in (\ref{eq4}), we get $W\left(T(\lfloor\frac{k}{2}\rfloor,\lceil \frac{k}{2}\rceil,n-k)\right)$. \item[(ii)] Let $G\in \mathfrak{H}_{n,1}.$ Since $G$ is connected and has exactly one pendant vertex, it must contain a cycle. Let $C_g$ be a cycle in $G.$ If $G$ has more than one cycle, then construct a new graph $G'$ from $G$ by deleting edges from all cycles other than $C_g$ so that the graph remains connected. Then by Lemma \ref{ed}, $W(G')> W(G)$, and $G'$ is a unicyclic graph on $n$ vertices with girth $g.$ By Theorem \ref{pmax-thm3}, $W(U_{n,3}^l)\geq W(G')$, and equality holds if and only if $G' = U_{n,3}^l.$ Since $U_{n,3}^l\in \mathfrak{H}_{n,1}$, the result follows, and we get the value of $W(U_{n,3}^l)$ from (\ref{eq2}). \item[(iii)] Let $n\geq 7$ and let $G\in \mathfrak{H}_{n,0}$. Then we have two cases:\\ \textbf{Case 1:} There exist integers $m_1,m_2 \geq 3$ with $n=m_1+m_2-1$ such that $C_{m_1,m_2}^n$ is a subgraph of $G$.
Since $C_{m_1,m_2}^n$ is a subgraph of $G,$ by deleting some edges from $G$ we get $C_{m_1,m_2}^n \in \mathfrak{H}_{n,0}$, and by Lemma \ref{ed}, $W(G)<W(C_{m_1,m_2}^n).$ Again by Lemma \ref{c-cmn}, $W(C_{m_1,m_2}^n)<W(C_n).$ Now the result follows from Lemma \ref{3cycles}.\\ \textbf{Case 2:} There are no integers $m_1,m_2 \geq 3$ with $n=m_1+m_2-1$ such that $C_{m_1,m_2}^n$ is a subgraph of $G$. If $G$ is a $2$-connected graph, then by Theorem \ref{plesnic}, $W(G)\leq W(C_n)$, and the result follows from Lemma \ref{3cycles}. So suppose $G$ has at least one cut vertex. \underline{Claim:} $W(G) \leq W(C_{g_1,g_2}^n)$ for some $g_1,g_2 \geq 3$, and equality holds if and only if $G=C_{g_1,g_2}^n.$ Since $G$ has a cut vertex and no pendant vertices, $G$ contains two cycles with at most one common vertex. Let $C_{g_1}$ and $C_{g_2}$ be two cycles of $G$ with at most one common vertex. Since $C_{m_1,m_2}^n$ with $m_1+m_2-1=n$ is not a subgraph of $G$, we have $g_1+g_2\leq n.$ Clearly $G$ has at least $n+1$ edges. If $G$ has exactly $n+1$ edges, then there is no common vertex between $C_{g_1}$ and $C_{g_2}$, and $G=C_{g_1,g_2}^n.$ So suppose $G$ has at least $n+2$ edges, say $|E(G)|=n+k$, where $k\geq 2.$ Choose $k-1$ edges $\{e_1,\ldots,e_{k-1}\}\subset E(G)$ such that $e_i\notin E(C_{g_1})\cup E(C_{g_2})$ for $i=1,\ldots, k-1$ and $G\setminus \{e_1,\ldots,e_{k-1}\}$ is connected. Let $G_1=G\setminus \{e_1,\ldots,e_{k-1}\}$ ($G_1$ may have some pendant vertices). Then by Lemma \ref{ed}, $W(G_1)> W(G).$ If $G_1$ has no pendant vertices, then $G_1=C_{g_1,g_2}^n.$ So suppose $G_1$ has some pendant vertices.
Then for some $l<n,$ $C_{g_1,g_2}^l$ is a subgraph of $G_1.$ By the edge-grafting operation (if required), we can form a new graph $G_2$ from $G_1$, where $G_2$ is a connected graph on $n$ vertices obtained by attaching some paths to some vertices of $C_{g_1,g_2}^l.$ Then by Corollary \ref{effect-2}, $W(G_2)> W(G_1).$ If more than one path is attached to different vertices of $C_{g_1,g_2}^l$ in $G_2$, then using the graph operation mentioned in Lemma \ref{effect-3}, form a new graph $G_3$ from $G_2$, where $G_3$ has exactly one path attached to $C_{g_1,g_2}^l.$ Then by Lemma \ref{effect-3}, $W(G_3)> W(G_2).$ Let the path be attached to the vertex $u$ of $C_{g_1,g_2}^l$ in $G_3.$ Then again we have two cases:\\ \textbf{Case-i:} $u \in V(C_{g_1})\cup V(C_{g_2})$\\ Without loss of generality, assume that $u\in V(C_{g_1}).$ Then the subgraph of $G_3$ induced by the vertices of $C_{g_1}$ and the vertices of the path attached to it is the graph $U_{k,g_1}^l$ for some $k>g_1.$ Let $v$ be the pendant vertex of $U_{k,g_1}^l$.
Since the two cycles $C_{g_1}$ and $C_{g_2}$ have at most one vertex in common, we have two subcases: \underline{Subcase-1}: $V(C_{g_1})\cap V(C_{g_2})=\{w\}$ Let $H_1$ be the subgraph of $G_3$ induced by the vertices $\{V(G_3)\setminus V(U_{k,g_1}^l)\}\cup \{w\}.$ Clearly $H_1$ is the cycle $C_{g_2}.$ Then identify the vertex $v$ of $U_{k,g_1}^l$ with the vertex $w$ of $H_1$ to form a new graph $G_4.$ By Corollary \ref{3uni1}, $W(G_4)> W(G_3)$, and $G_4$ is the graph $C_{g_1,g_2}^n.$ \underline{Subcase-2}: $V(C_{g_1})\cap V(C_{g_2})=\emptyset$ Let $H_2$ be the subgraph of $G_3$ induced by the vertices $V(G_3)\setminus V(U_{k,g_1}^l).$ In $G_3$, exactly one vertex $w_1$ of $U_{k,g_1}^l$ is adjacent to exactly one vertex $w_2$ of $H_2.$ Form a new graph $G_5$ from $G_3$ by deleting the edge $\{w_1,w_2\}$ and adding the edge $\{v,w_2\}.$ By Corollary \ref{3uni1}, $W(G_5)> W(G_3)$, and $G_5$ is the graph $C_{g_1,g_2}^n.$\\ \textbf{Case-ii:} $u \notin V(C_{g_1})\cup V(C_{g_2})$\\ Let $w$ be the pendant vertex of $G_3$ and let $w_3$ be a vertex of $C_{g_1,g_2}^l$ adjacent to $u$ in $G_3.$ Form a new graph $G_6$ from $G_3$ by deleting the edge $\{u,w_3\}$ and adding the edge $\{w,w_3\}.$ By Corollary \ref{3uni1}, $W(G_6)> W(G_3)$, and $G_6$ is the graph $C_{g_1,g_2}^n.$ This proves our claim. Now from Corollary \ref{cor1}, it follows that $W(G) \leq W(C_{3,3}^n)$, and by (\ref{eq7}), $W(C_{3,3}^n)=\frac{n^3-13n+24}{6}.$ This completes the proof.
\end{enumerate} \end{proof} \noindent It can be checked easily that for $n\leq 5,$ the cycle $C_n$ has the maximum Wiener index over $\mathfrak{H}_{n,0}$, and for $n=6,$ the Wiener index is maximized by both of the graphs $C_6$ and $C_{3,3}^6$.\\ \begin{proof}[{\bf Proof of Theorem \ref{min-thm0}:}] \begin{enumerate} \item[(i)] Let $G\in \mathfrak{H}_{n,k}$ and let $v_1,v_2,\ldots,v_{n-k}$ be the non-pendant vertices of $G.$ If the induced subgraph $G [v_1,v_2,\ldots,v_{n-k}]$ is not complete, then form a new graph $G'$ from $G$ by joining all the non-adjacent non-pendant vertices of $G$ with new edges. Then $G' \in \mathfrak{H}_{n,k}$, and by Lemma \ref{ed}, $W(G') < W(G).$ If $G'= P_n ^k$, then we are done; otherwise $G'$ has at least two vertices of degree greater than or equal to $n-k.$ Form a new graph $G''$ from $G'$ by moving all the pendant vertices to one of the vertices $v_1,v_2,\ldots,v_{n-k}$. Then $G'' = P_n ^k$, and the result follows by Corollary \ref{effect-1}. Let $u\in V(P_n ^k)$ be a vertex of degree $n-1.$ Then by Lemma \ref{count}, we have \begin{align*} W(P_n^k)&=W(K_{n-k})+W(K_{1,k})+(|V(K_{n-k})|-1)k+kD_{K_{n-k}}(u)\\ &= {n-k\choose 2}+k^2+2k(n-k-1). \end{align*} \item[(ii)] Let $G \in \mathfrak{H}_{n,n-2}.$ Then $G$ is isomorphic to a tree $T(k,l,2)$ for some $k, l\geq 1.$ If $k$ and $l$ are both greater than or equal to $2$, then form the tree $T(1,n-3,2)$ from $G$ by moving pendant vertices from one end to the other.
Then by Corollary \ref{effect-1}, $W(T(1,n-3,2))<W(G)$, and by taking $d=2,l=1$ and $k=n-3$ in (\ref{eq4}), we have $W(T(1,n-3,2))=n^2-n-2.$ \end{enumerate} \end{proof} \begin{proof}[{\bf Proof of Theorem \ref{min-thm1}}] We first prove that for $k \geq 3$, if $T \in \mathfrak{T}_{n,k}$ has minimum Wiener index, then there is a unique vertex $v \in V(T)$ with $d(v)\geq 3.$ Suppose there are two vertices $u,v \in V(T)$ with $d(u)=n_1 \geq 3$ and $d(v)=n_2 \geq 3.$ Let $N_T(u)=\{u_1,u_2,\ldots,u_{n_1}\}$ and $N_T(v)=\{v_1,v_2,\ldots,v_{n_2}\}$, where $u_1$ and $v_1$ lie on the path joining $u$ and $v$ ($u_1$ may be $v$ and $v_1$ may be $u$). Let $T_1$ be the largest subtree of $T$ containing $u,u_2,u_3,\ldots,u_{n_1-1}$ but not $u_1,u_{n_1}$, and let $T_2$ be the largest subtree of $T$ containing $v,v_2,v_3,\ldots,v_{n_2-1}$ but not $v_1,v_{n_2}.$ We rename the vertices $u \in V(T_1)$ and $v \in V(T_2)$ by $u'$ and $v'$, respectively. Let $H=T \setminus \{u_2,u_3,\ldots,u_{n_1-1},v_2,v_3,\ldots,v_{n_2-1}\}.$ Construct two trees $T'$ and $T''$ from $H$, $T_1$ and $T_2$ by identifying the vertices $u,u',v'$ and $v,u',v'$, respectively. Clearly both $T', T''\in\mathfrak{T}_{n,k}$, and by Lemma \ref{effect-0}, either $W(T') < W(T)$ or $W(T'') < W(T)$, which is a contradiction. Let $T$ be the tree which minimizes the Wiener index in $\mathfrak{T}_{n,k}$. For $k=2$, the only possible tree is the path $P_n$, which is isomorphic to $T_{n,2}.$ So assume $3 \leq k \leq n-2.$ Then there exists a unique vertex $v \in V(T)$ with $d(v) \geq 3.$ Hence the result follows from Corollary \ref{effect-2}. \end{proof} For $r=0,$ the tree $T_{n,k}$ is isomorphic to the tree $T_k^q$ and hence, by (\ref{eq6}), $$W(T_{n,k})=k{q+2 \choose 3}+\frac{q^2(q+1)k(k-1)}{2}.$$ For $1\leq r <k,$ by Lemma \ref{count}, we have $$W(T_{n,k})=W(T_r^{q+1})+W(T_{k-r}^q)+r(q+1)D_{T_{k-r}^q}(v)+(k-r)qD_{T_r^{q+1}}(v),$$ where $v$ is the vertex of $T_{n,k}$ with $T_{n,k}\setminus v = r P_{q+1} \cup (k-r) P_q$.
Thus, using (\ref{eq5}) and (\ref{eq6}), the value of $W(T_{n,k})$ can be obtained. \section{Proof of Theorem \ref{min-thm2}} Any graph on $n$ vertices has at most $n-2$ cut vertices, and the path $P_n$ is the only graph on $n$ vertices with $n-2$ cut vertices. Hence for $\mathfrak{C_{n,s}}$, we consider $0\leq s\leq n-3$. Let $\mathfrak{C_{n,s}^t}$ be the set of all trees on $n$ vertices with $s$ cut vertices. In a tree, every vertex is either a pendant vertex or a cut vertex. So $\mathfrak{C_{n,s}^t}= \mathfrak{T}_{n,n-s}.$ Hence the next result follows from Theorem \ref{pmax-thm1} and Theorem \ref{min-thm1}. \begin{theorem} For $0\leq s\leq n-3$, the tree $T(\lfloor\frac{n-s}{2}\rfloor,\lceil \frac{n-s}{2}\rceil,s)$ maximizes the Wiener index and the tree $T_{n,n-s}$ minimizes the Wiener index over $\mathfrak{C_{n,s}^t}.$ \end{theorem} A block of a graph $G$ is a maximal connected subgraph of $G$ having no cut vertex of its own. Let $B_G$ be the graph corresponding to $G$ with $V(B_G)$ as the set of blocks of $G$, where two vertices $u$ and $v$ of $B_G$ are adjacent whenever the corresponding blocks contain a common cut vertex of $G.$ A vertex of $G$ with minimum eccentricity is called a central vertex. We call a block $B$ of $G$ a pendant block if it contains exactly one cut vertex of $G$. The block corresponding to a central vertex of $B_G$ is called a central block of $G.$ \begin{lemma}\label{c} Let $G$ be a graph which minimizes the Wiener index over $\mathfrak{C_{n,s}}$. Then every block of $G$ is a complete graph. \end{lemma} \begin{proof} Suppose $B$ is a block of $G$ which is not complete. Then there are at least two non-adjacent vertices in $B$; let $u$ and $v$ be two such vertices. Form a new graph $G'$ from $G$ by adding the edge $\{u,v\}.$ Clearly $G'\in \mathfrak{C_{n,s}}$, and by Lemma \ref{ed}, $W(G')<W(G)$, which is a contradiction.
\end{proof} \begin{lemma} \label{c_0} Let $G$ be a graph which minimizes the Wiener index over $\mathfrak{C_{n,s}}.$ Then every cut vertex of $G$ is shared by exactly two blocks. \end{lemma} \begin{proof} Suppose $c$ is a cut vertex of $G$ shared by more than two blocks, say $B_1,B_2,\ldots,B_k$, $k\geq 3.$ Construct a new graph $G'$ from $G$ by joining all the non-adjacent vertices of $\cup_{i=2}^{k} B_i$. Then $G'\in \mathfrak{C_{n,s}}$, and by Lemma \ref{ed}, $W(G')<W(G)$, which is a contradiction. \end{proof} \begin{lemma}\label{c_2} Let $m\geq 3.$ For $i,j\in \{1,2,\ldots,m\}$, if $l_i \leq l_j-2,$ then $$W(K_m^n(l_1,\ldots,l_i+1,\ldots,l_j-1,\ldots,l_m)) < W(K_m^n(l_1,\ldots,l_i,\ldots,l_j,\ldots,l_m)).$$ \end{lemma} \begin{proof} Let $u$ be the pendant vertex of $K_m^n(l_1,\ldots,l_i+1,\ldots,l_j-1,\ldots,l_m)$ on the path $P_{l_i+1}$ and let $v$ be the pendant vertex of $K_m^n(l_1,\ldots,l_i,\ldots,l_j,\ldots,l_m)$ on the path $P_{l_j}$. Let $w_1$ and $w_2$ be the vertices adjacent to $u$ and $v$, respectively. Then, using Lemma \ref{count}, we have \begin{align*} &W(K_m^n(l_1,\ldots,l_i+1,\ldots,l_j-1,\ldots,l_m)) - W(K_m^n(l_1,\ldots,l_i,\ldots,l_j,\ldots,l_m))\\ &=D_{K_m^{n-1}(l_1,\ldots,l_i,\ldots,l_j-1,\ldots,l_m)}(w_1) - D_{K_m^{n-1}(l_1,\ldots,l_i,\ldots,l_j-1,\ldots,l_m)}(w_2). \end{align*} Since $l_i <l_j-1$ and $m\geq 3,$ the result follows. \end{proof} Let $G$ be a graph in which every cut vertex is shared by exactly two blocks. Then $B_G$ is a tree. So $B_G$ has either one central vertex or two adjacent central vertices, and hence $G$ has either one central block or two central blocks with a common cut vertex. \begin{lemma}\label{c_3} Let $G$ be a graph which minimizes the Wiener index over $\mathfrak{C_{n,s}}$. If $s\geq 2$, then every pendant block of $G$ is $K_2.$ \end{lemma} \begin{proof} All the blocks of $G$ are complete by Lemma \ref{c}.
Suppose $B$ is a pendant block of $G$ which is not $K_2.$ Let $V(B)=\{v_1,v_2,\ldots,v_m\}$ with $m>2.$ Assume $v_1$ is the cut vertex of $G$ in $B$, shared by another block $B'$ with $V(B')=\{v_1=u_1,u_2,\ldots,u_r\}$ and $r\geq2.$ Construct a new graph $G'$ from $G$ as follows: delete the edges $\{v_2,v_j\}, j=3,4,\ldots,m$, and add the edges $\{v_j,u_i\}, j=3,4,\ldots,m$, $i=2,3,\ldots,r.$ When $G$ changes to $G'$, the only distances which increase are $d(v_2,v_j), j=3,4,\ldots, m.$ Each such distance increases by one, and hence the total increment in distances for $v_j$, $j=3,\ldots,m$, is exactly $m-2$. Each distance $d(v_j,u_i)$, $j=3,4,\ldots, m$, $i =2,3,\ldots, r$, decreases by one. Since $r\geq 2,$ the total decrease in distance over such pairs of vertices is at least $m-2.$ Since $s\geq 2$, there exists a vertex $w$ belonging to some other block $B''$ such that $d(v_j,w)$, $j=3,4,\ldots, m$, decreases by one. So $W(G')<W(G)$, which is a contradiction. \end{proof} Let $G$ be a graph in which every block is complete and every cut vertex is shared by exactly two blocks. Let $B$ be a central block of $G$. Let $B_1$ be a non-central, non-pendant block of $G$ and let $c_1,c_2 \in V(B_1)$ be two cut vertices of $G$. Suppose that the vertex $c_1$ is identified with a pendant vertex of a path $P_l$, and $c_2$ is shared by another block $B_2$ such that the vertices corresponding to $B_1, B_2$ and $B$ in the tree $B_G$ lie on a path. Let $V(B_1)=\{c_1=u_1,u_2,\ldots,u_{m_1}=c_2\}$ and $V(B_2)=\{v_1,v_2,\ldots,v_{m_2}=c_2\}$.
Construct a new graph $G'$ from $G$ as follows: delete the edges $\{c_1,u_i\}$ for all $u_i\in V(B_1)\setminus \{c_1,c_2\}$, and add the edges $\{u_i,v_j\}$ for all $u_i \in V(B_1)\setminus\{c_1,c_2\}$ and $ v_j \in V(B_2)\setminus \{c_2\}.$ \begin{lemma}\label{c_1} Let $G$ and $G'$ be the graphs defined as above. Then $W(G')<W(G).$ \end{lemma} \begin{proof} For $i=2,\ldots,m_1-1$, let $H_i$ be the maximal connected subgraph of $G$ containing exactly one vertex $u_i$ of $B_1$. Let $P_l:t_1t_2\cdots t_l$ be the path with $t_1$ identified with $c_1.$ When $G$ changes to $G'$, the only distances which increase are $d_{G'}(u,t_j)$, where $u\in \cup_{i=2}^{m_1-1}V(H_i)$ and $j=1,2,\ldots,l$. Each such distance increases by one in $G'.$ For any other pair of vertices, the distance between them either decreases or remains the same. Since $B_1$ is not a central block, for each $t_j, j=1,2,\ldots,l$, there exists a vertex $t_j' \in V(G)\setminus \left(\cup_{i=2}^{m_1-1}V(H_i) \cup \{t_1,t_2,\ldots,t_l,v_1,v_2,\ldots,v_{m_2}\}\right)$ such that $d_{G'}(u,t_j')$ decreases by one, where $u\in \cup_{i=2}^{m_1-1}V(H_i)$. So the increments in distance from the pairs $u,t_j$ are neutralized by the pairs $u,t_j'.$ Apart from this, the distances $d_{G'}(u_i,v_j)$ for $i=2,3,\ldots,m_1-1$ and $j=1,2,\ldots,m_2-1$ decrease by one. So $W(G')<W(G).$ \end{proof} \begin{proof}[{\bf Proof of Theorem \ref{min-thm2}:}] Let $G$ be a graph which minimizes the Wiener index over $\mathfrak{C_{n,s}}.$ We first claim that $G$ is isomorphic to $K_{n-s}^n(l_1,\ldots,l_{n-s})$ for some $l_1,l_2,\ldots,l_{n-s}.$ By Lemma \ref{c} and Lemma \ref{c_0}, every block of $G$ is complete and every cut vertex of $G$ is shared by exactly two blocks. If $s=0$, then $G$ has exactly one block, so $G=K_n$; note that $K_n$ is isomorphic to $K_n^n(1,1,\ldots,1).$ For $s=1,$ $G$ has exactly two complete blocks with a common vertex $w$ (say).
Let $B_1$ and $B_2$ be the two blocks of $G.$ If either $B_1$ or $B_2$ is $K_2$, then $G$ is isomorphic to $K_{n-1}^n(2,1,\ldots,1)$. Otherwise, let $V(B_1)=\{u_1,u_2,\ldots,u_{m_1}=w\}$ and $V(B_2)=\{v_1,v_2,\ldots,v_{m_2}=w\}$ with $m_1,m_2 > 2.$ Construct a new graph $G'$ from $G$ as follows: delete the edges $\{u_1,u_i\}, i=2,3,\ldots,m_1-1$, and add the edges $\{u_i,v_j\}, i=2,3,\ldots,m_1-1$, $j=1,2,\ldots,m_2-1.$ Clearly $G'\in \mathfrak{C_{n,s}}.$ Then the only distances which increase are $d(u_1,u_j)$, $j=2,3,\ldots, m_1-1$, and each such distance increases by one. So the total increment in distance is exactly $m_1-2.$ Also, each distance $d(u_i,v_j)$, $i=2,3,\ldots,m_1-1$, $j=1,2,\ldots, m_2-1$, decreases by one. The total decrement is $(m_1-2)(m_2-1).$ Since $m_1,m_2>2,$ we get $W(G')<W(G)$, which is a contradiction. Hence $G$ is isomorphic to $ K_{n-1}^n(2,1,\ldots,1)$. Now suppose $s\geq 2.$ Then $G$ has $s+1$ blocks, and $G$ has either one central block or two adjacent central blocks. \underline{Claim:} All non-central blocks of $G$ are $K_2.$\\ Suppose $B$ is a non-central block of $G$ which is not $K_2$. Then by Lemma \ref{c_3}, $B$ must be a non-pendant block. Construct $G'$ from $G$ as in Lemma \ref{c_1}. Clearly $G'\in \mathfrak{C_{n,s}}$, and by Lemma \ref{c_1}, $W(G')<W(G)$, which is a contradiction. If $G$ has exactly one central block, then $G$ is isomorphic to $K_{n-s}^n(l_1,\ldots,l_{n-s})$ for some $l_1,l_2,\ldots,l_{n-s}$. Suppose $G$ has two central blocks and $G$ is not isomorphic to $K_{n-s}^n(l_1,\ldots,l_{n-s})$ for any $l_1,l_2,\ldots,l_{n-s}$. Then each of the central blocks of $G$ has at least $3$ vertices. Let $B_1$ and $B_2$ be the two central blocks with a common vertex $w.$ Let $V(B_1)=\{u_1,u_2,\ldots,u_{m_1}=w\}$ and $V(B_2)=\{v_1,v_2,\ldots,v_{m_2}=w\}$ with $m_1,m_2 > 2.$ Let $H_1$ (respectively, $H_2$) be the maximal connected subgraph of $G$ containing exactly one vertex $w$ of $B_2$ (respectively, $B_1$).
Let $P_l:wu_1t_3\cdots t_l$ be the longest path in $H_1$ starting at $w$ and containing $u_1$ such that none of the vertices $t_3,\ldots,t_l$ belongs to $B_1.$ Take $w$ as $t_1$ and $u_1$ as $t_2$ in $P_l.$ Since $B_1$ and $B_2$ are central blocks, there exists a path $P'_l:t_1't_2'\cdots t_l'$ on $l$ vertices in $H_2$ starting at $w=t_1'$ and containing exactly two vertices of $B_2.$ Construct a new graph $G'$ from $G$ as follows: delete the edges $\{u_1,u_i\}, i=2,3,\ldots,m_1-1$, and add the edges $\{u_i,v_j\}, i=2,3,\ldots,m_1-1$, $j=1,2,\ldots,m_2-1.$ Clearly $G'\in \mathfrak{C_{n,s}}.$ The only distances which increase in $G'$ are $d_{G'}(u,t_j)$, where $u\in V(H_1)\setminus V(P_l)$ and $j=2,\ldots,l$, and each such distance increases by one. The distance $d_{G'}(u,t_j')$ decreases by one, where $u\in V(H_1)\setminus V(P_l)$ and $j=2,\ldots,l$. So the increments in distance from the pairs $u,t_j$ are neutralized by the pairs $u,t_j'.$ Since $m_2\geq 3$, there exists at least one vertex $w'$ in $B_2$ which is not in $P_l'.$ For each $u\in V(H_1)\setminus V(P_l)$, the distance $d_{G'}(u,w')$ decreases by one. So $W(G')< W(G)$, which is a contradiction. Hence $G$ is $K_{n-s}^n(l_1,\ldots,l_{n-s})$ for some $l_1,l_2,\ldots,l_{n-s}.$ Now the result follows from Lemma \ref{c_2}. \end{proof}
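The closed forms used in these proofs lend themselves to brute-force verification on small cases. The sketch below (illustrative only; all identifiers are our own choices) reconfirms (\ref{eq2}) with $g=3$, the formula (\ref{eq7}) for $W(C_{3,3}^n)$, the expression for $W(P_n^k)$ from the proof of Theorem \ref{min-thm0}, and Lemma \ref{c-cmn} on a small instance.

```python
from collections import deque
from math import comb

def wiener(adj):
    """W(G) via BFS from every vertex; each unordered pair is counted twice."""
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2

def from_edges(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def cycle(n, off=0):
    return [(off + i, off + i + 1) for i in range(n - 1)] + [(off, off + n - 1)]

def lollipop(n):
    """U_{n,3}^l: a triangle with a pendant path, n vertices in total."""
    return from_edges(cycle(3) + [(i, i + 1) for i in range(2, n - 1)])

def c33(n):
    """C_{3,3}^n: a triangle at each end of a path, n vertices in total."""
    return from_edges(cycle(3) + [(i, i + 1) for i in range(2, n - 3)] + cycle(3, off=n - 3))

def pnk(n, k):
    """P_n^k: the complete graph K_{n-k} with k pendant vertices at vertex 0."""
    clique = [(i, j) for i in range(n - k) for j in range(i + 1, n - k)]
    return from_edges(clique + [(0, n - k + i) for i in range(k)])

for n in range(6, 12):
    assert wiener(lollipop(n)) == (n ** 3 - 7 * n + 12) // 6
    assert wiener(c33(n)) == (n ** 3 - 13 * n + 24) // 6
    for k in range(1, n - 2):
        assert wiener(pnk(n, k)) == comb(n - k, 2) + k * k + 2 * k * (n - k - 1)

# The comparison of C_n with C_{m_1,m_2}^n, on m_1 = m_2 = 4, n = 7:
c44 = from_edges(cycle(4) + [(0, 4), (4, 5), (5, 6), (6, 0)])
assert wiener(from_edges(cycle(7))) > wiener(c44)
```

At $n=6$ the check $W(C_6)=W(C_{3,3}^6)=27$ also illustrates the equality case of Lemma \ref{3cycles}.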
https://arxiv.org/abs/1805.11641
Classifying Rotationally-Closed Languages Having Greedy Universal Cycles
Let $\textbf{T}(n,k)$ be the set of strings of length $n$ over the alphabet $\Sigma=\{1,2,\ldots,k\}$. A universal cycle for $\textbf{T}(n,k)$ can be constructed using a greedy algorithm: start with the string $k^n$, and continually append the least symbol possible without repeating a substring of length $n$. This construction also creates universal cycles for some subsets $\textbf{S}\subseteq\textbf{T}(n,k)$; we will classify all such subsets that are closed under rotations.
\section{Results} Some notational conventions: when we are working with a particular set $\textbf{T}(n,k)$, we will use $\alpha$ and $\beta$ (possibly with subscripts) to represent strings in $\textbf{T}(n,k)$. Other Greek letters (like $\gamma$) will represent strings in $\textbf{T}(m,k)$ for some $m\le n$. (This includes the possibility of $\gamma$ being an empty string.) Latin letters ($a$, $b$, etc.) will represent individual elements of $\{1,\ldots,k\}$. We will sometimes use exponential notation to write strings with repetitions in a shorter form. A couple of examples: $2^5 3$ represents the string $222223$, and $4(21)^3$ represents the string $4212121$. Given a set $\textbf{S}\subseteq\textbf{T}(n,k)$ (where $\textbf{S}\ne\emptyset$), let $\alpha\in\textbf{S}$. Define the string $\textit{Greedy}_\alpha(\textbf{S})$ as follows: let $\beta_0=\alpha$, and having defined $\beta_0$ through $\beta_j$, write $\beta_j=a\gamma$ and let $\beta_{j+1}=\gamma a_{j+1}$, where $a_{j+1}$ is the least element of $\{1,\ldots,k\}$ such that $\gamma a_{j+1}\in\textbf{S}$ and $\gamma a_{j+1}\ne\beta_i$ for all $1\le i\le j$. The process halts when we reach a $\beta_m$ where either $\beta_m=\alpha$, or it is impossible to define $\beta_{m+1}$. (The latter occurs when, given $\beta_m=a\gamma$, we have $\gamma b\in\{\beta_i\}_{i=1}^m$ for all $b\in\{1,\ldots,k\}$ such that $\gamma b\in\textbf{S}$.) Let $m$ be the largest positive integer for which $\beta_m$ exists. We then define $\textit{Greedy}_\alpha(\textbf{S})$ to be: $$\textit{Greedy}_\alpha(\textbf{S})=a_1 a_2\cdots a_m.$$ In the case where the process ends because $\beta_m=\alpha$, the string $\textit{Greedy}_\alpha(\textbf{S})$ may be viewed as a cycle; each of the strings $\beta_1$ through $\beta_m=\alpha$ appears exactly once in that cycle. Let $\mathcal{G}(n,k)$ be the collection of subsets $\textbf{S}\subseteq\textbf{T}(n,k)$ such that, for some $\alpha\in\textbf{S}$, $\textit{Greedy}_\alpha(\textbf{S})$ is a universal cycle for $\textbf{S}$.
That is, the length-$n$ suffix of $\textit{Greedy}_\alpha(\textbf{S})$ is $\alpha$, and for all $\beta\in\textbf{S}$, $\beta$ is a substring of $\textit{Greedy}_\alpha(\textbf{S})$ (treated as a cycle). The goal is to find a characterization of the sets in $\mathcal{G}(n,k)$ that are closed under rotations. Given $\textbf{S}\subseteq\textbf{T}(n,k)$, and given $\alpha,\beta\in\textbf{S}$, we will say that $\beta$ is ``increasable in $\textbf{S}$ to $\alpha$'' if we can transform $\beta$ into $\alpha$ by continually increasing individual symbols, and if the resulting string after each such increase is in $\textbf{S}$. (By convention, we will say that $\alpha\in\textbf{S}$ is increasable in $\textbf{S}$ to $\alpha$.) For example, for our ``no primes'' set $\textbf{S}_2\subseteq\textbf{T}(2,9)$, 57 is increasable in $\textbf{S}_2$ to 99: $$57\rightarrow 58\rightarrow 68\rightarrow 69\rightarrow 99$$ Given this definition, our ultimate result is the following: \begin{theorem} \label{mainresult} Let $\textbf{S}\subseteq\textbf{T}(n,k)$ be closed under rotations, and let $\alpha,\beta\in\textbf{S}$. Then $\beta$ is a substring of $\textit{Greedy}_\alpha(\textbf{S})$ (treated as a cycle) if and only if $\beta$ is increasable in $\textbf{S}$ to a rotation of $\alpha$. Thus, if $\textbf{S}\subseteq\textbf{T}(n,k)$ is closed under rotations, then $\textbf{S}\in\mathcal{G}(n,k)$ if and only if there exists an $\alpha\in\textbf{S}$ such that every $\beta\in\textbf{S}$ is increasable in $\textbf{S}$ to a rotation of $\alpha$. \end{theorem} This theorem explains the absence of 77, 78, 87, and 88 in the string $\textit{Greedy}_{99}(\textbf{S}_2)$: none of those four strings is increasable in $\textbf{S}_2$ to 99, since none of 79, 89, 97, or 98 is in $\textbf{S}_2$. Note that this theorem is a generalization of Theorem 3 from \cite{SWW}. The proof of this result will rely on an analysis of a combinatorial game, which we'll call the Warden's Game.
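Theorem \ref{mainresult} is easy to check mechanically on the ``no primes'' example. The Python sketch below (our own code; the function names are illustrative, not from any library) runs the greedy process from $99$ over $\textbf{S}_2$ and confirms that exactly the four non-increasable strings are missed:

```python
def greedy(S, alpha, k):
    """Compute Greedy_alpha(S) from the definition (a sketch).
    Strings over {1,...,k} are represented as digit strings."""
    seen, beta, out = set(), alpha, []
    while True:
        gamma = beta[1:]                 # beta_j = a.gamma
        for a in "123456789"[:k]:
            if gamma + a in S and gamma + a not in seen:
                break
        else:
            break                        # cannot define beta_{j+1}
        beta = gamma + a
        seen.add(beta)
        out.append(a)
        if beta == alpha:                # back to the start: a cycle
            break
    return "".join(out)

def composite(n):
    return n > 1 and any(n % d == 0 for d in range(2, n))

# S_2: both the string and its reversal read as composite numbers.
S2 = {a + b for a in "123456789" for b in "123456789"
      if composite(int(a + b)) and composite(int(b + a))}

cyc = greedy(S2, "99", 9)
covered = {(cyc + cyc)[i:i + 2] for i in range(len(cyc))}
print(sorted(S2 - covered))   # ['77', '78', '87', '88']
```

Of the 48 strings in $\textbf{S}_2$, the 44-symbol greedy cycle covers all but those four.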
The rules for this game will be given in Section 2, and the proof of Theorem \ref{mainresult} will follow in Section 3. In Section 4, we will look at several interesting families of sets $\textbf{S}\subseteq\textbf{T}(n,k)$ where a universal cycle can be generated with the greedy algorithm. Lastly, a possible avenue for future work will be detailed in Section 5. \section{The Warden's Game} Consider the following fanciful scenario: there's a certain prison warden who loves playing games. He sometimes makes an offer to let his prisoners out of prison, if they can beat him at a particular game. The game works as follows: The warden shows the prisoner a row of $n$ $k$-sided dice on a table. On each die, the faces are numbered from 1 to $k$. (It's possible to have $k=2$ here; the ``dice'' would then be coins with a 1 on one side and a 2 on the other.) A certain string $\alpha\in\textbf{T}(n,k)$ is chosen: the prisoner will earn his freedom if, after any move, the dice on the table are showing the string $\alpha$. (If the dice are showing $\alpha$ at the start of the game, the prisoner does not immediately win; the prisoner only wins when the dice show $\alpha$ after a move.) This game will be played at a rate of one move per day. So, the prisoner wants to reach $\alpha$ as quickly as possible; the warden wants to delay this as long as possible (indefinitely, if he can). Each day, the rightmost die in the row will be moved to the far left, and possibly rotated to show a different number. The warden always has priority; he may transfer the rightmost die to the far left, and lower the number on that die. If he doesn't want to do that (or he can't, because he can't lower the number any further), then the warden passes; then the prisoner must transfer the rightmost die to the far left, and optionally increase the number on that die. As an example: let $n=3$ and $k=6$, so that the game is being played with three 6-sided dice. 
Let's say the current position is 513; the leftmost die shows 5, the middle die shows 1, and the rightmost die shows 3. The warden may transfer the rightmost die to the far left, lowering its value to 1 or 2 (thus producing the position 151 or 251). Or the warden may pass, in which case the prisoner must transfer the rightmost die and optionally increase its value (producing one of the positions 351, 451, 551, or 651). Let's say the warden chooses to move to the position 251. Then on the next move, the warden can't lower the value showing on the rightmost die. So the warden must pass, and the prisoner can move to 125, 225, 325, 425, 525, or 625. And so on. Note: in the case where $k=2$, the rules can be stated even more simply. If the rightmost coin is showing a 2, the warden transfers that coin, and optionally flips it to 1. If the rightmost coin is showing a 1, the prisoner transfers that coin, and optionally flips it to 2. We can generalize this game still further, by limiting the legal positions in the game. We can choose any subset $\textbf{S}\subseteq\textbf{T}(n,k)$, closed under rotations, to be the set of legal positions. (We'll assume that the goal state $\alpha$ is in $\textbf{S}$.) Then each move of the game, whether made by the prisoner or the warden, must be to a position in $\textbf{S}$. We require that $\textbf{S}$ be closed under rotations so that there is a legal move from every legal position; if the warden ever passes, the prisoner always has the option to transfer the rightmost die without changing its value. In \cite{GW}, Weiss analyzed the Warden's Game (though not under that name) in the case where $k=2$, $\textbf{S}=\textbf{T}(n,2)$, and $\alpha$ is the string $2^n$. Weiss proved that the game tree for the game is summarized by the lexicographically minimal de Bruijn sequence for $\textbf{T}(n,2)$; if both players play optimally, the game will proceed backwards through the de Bruijn sequence, one move at a time. 
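Weiss's correspondence can be checked with a small minimax solver, a sketch in our own code (``optimal play'' is read in the usual minimax sense: the prisoner minimizes, and the warden maximizes, the number of days). For $n=3$ and $k=2$, the greedy output starting from $222$ is $11121222$, and the solver confirms that the position of remoteness $m$ is exactly the length-$3$ window ending after the $m$-th symbol of that cycle:

```python
from itertools import product

def remoteness(S, alpha, k):
    """Minimax value iteration for the Warden's Game (a sketch; the
    function name is ours).  From beta = gamma + a the warden either
    moves to b + gamma with b < a, or passes, after which the prisoner
    moves to b + gamma with b >= a; reaching alpha ends the game.
    Returns the number of days the game lasts under optimal play,
    or None where the warden can stall forever."""
    INF = float("inf")
    syms = "123456789"[:k]
    r = {b: INF for b in S}
    for _ in range(len(S) + 1):          # finite remoteness is at most |S|
        for beta in S:
            a, gamma = beta[-1], beta[:-1]
            def val(nxt):
                return 0 if nxt == alpha else r[nxt]
            warden = [val(b + gamma) for b in syms if b < a and b + gamma in S]
            prisoner = [val(b + gamma) for b in syms if b >= a and b + gamma in S]
            # the warden picks the option worst for the prisoner
            r[beta] = 1 + max(warden + [min(prisoner)])
    return {b: (None if v == INF else v) for b, v in r.items()}

# n = 3, k = 2: walking backwards through the greedy cycle 11121222.
S = {"".join(p) for p in product("12", repeat=3)}
r = remoteness(S, "222", 2)
seq = "222" + "11121222"
assert all(r[seq[m:m + 3]] == m for m in range(1, 9))
print(r["222"])   # 8: alpha itself has the largest finite remoteness
```

Here every position is winning for the prisoner, so no remoteness is infinite.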
For example, if $n=4$, the lexicographically minimal de Bruijn sequence is the following: $$(2222)1111211221212222$$ For this game, consider the position 2212. If we move one step backwards in the de Bruijn sequence from 2212, we get 1221; thus, the optimal move from 2212 must be for the warden to flip the rightmost coin before moving it, producing the position 1221. Similarly, the next optimal move is for the prisoner to move from 1221 to 1122, and so on, until the goal position 2222 is finally reached. As we will prove in Section 3, the same holds true for any values of $n$ and $k$, any subset $\textbf{S}\subseteq\textbf{T}(n,k)$ of legal positions (closed under rotations), and any goal state $\alpha\in\textbf{S}$. The greedy algorithm always generates the full game tree for the Warden's Game. As an example, let's once again consider the subset $\textbf{S}_2\subseteq\textbf{T}(2,9)$ consisting of those strings where both the string itself and its reverse are 2-digit composite numbers. Here, once again, is the (not quite universal) cycle generated by the greedy algorithm, starting from $\alpha=99$. $$(99)33621224251526394454648182728496556685758699$$ For example, consider the position 82. The preceding substring of length 2 is 18; thus, the optimal move must be for the warden to move the rightmost die and reduce its value from 2 to 1. The next optimal move must be to 81; both the prisoner and the warden refuse to change the value on the rightmost die. The next optimal move is to 48, which means the warden passes, and the prisoner increases the value on the rightmost die from 1 to 4. And so on. Remember, four positions from $\textbf{S}_2$ do not appear in this cycle: 77, 78, 87, and 88. Why don't they appear? Because they are losing positions for the prisoner! From any of those positions, the warden has a simple way to keep the game going indefinitely: he refuses to ever decrease the value on a die, and passes every time. 
The prisoner will never be able to increase a die to a 9, because 79, 89, 97, and 98 are all illegal positions. So the prisoner will never be able to reach the goal state, 99. Our goal for the next section is to prove that this sort of thing happens regardless of the choices of $\textbf{S}$ and $\alpha$. We will show that the prisoner can win from a given position $\beta\in\textbf{S}$ if and only if he can win from $\beta$ with the warden always passing: this happens when $\beta$ is increasable in $\textbf{S}$ to a rotation of $\alpha$. We will also show that the greedy algorithm generates the game tree for this game; thus, the greedy algorithm generates a universal cycle if and only if every $\beta\in\textbf{S}$ is increasable in $\textbf{S}$ to a rotation of $\alpha$. \section{Proof of Theorem 1} Assume we are given values of $n$ and $k$, a set $\textbf{S}\subseteq\textbf{T}(n,k)$ of legal positions (closed under rotations), and a goal state $\alpha\in\textbf{S}$. Define the ``remoteness function'' $r$ on $\textbf{S}$ as follows: given $\beta\in\textbf{S}$, the remoteness of $\beta$, $r(\beta)$, is the number of moves the game will last starting from $\beta$ if both players play optimally. If the warden can keep the game going forever, then $r(\beta)=\infty$. This definition of remoteness is similar to the concept of remoteness used in \cite{BCG}. Note: we can consider $\alpha$ to either be an end position (of remoteness 0) or a start position (of nonzero remoteness). We will always use the notation $r(\alpha)$ for the number of moves the game will last \textbf{starting} from $\alpha$; thus, $r(\alpha)>0$. \begin{lemma} \label{ifwardenpasses} Let $\beta_1,\beta_2\in\textbf{S}$ be such that $\beta_1=\gamma a$ and $\beta_2=a'\gamma$ for $a'\ge a$. (Thus, if the current position is $\beta_1$ and the warden passes, then the prisoner may move to $\beta_2$.) 
Then, starting from $\beta_1$, the prisoner has a strategy which can force the position to eventually reach $\beta_2$. \end{lemma} \begin{proof} This can be proven by induction on the sum of the symbols in $\beta_1$. Starting from $\beta_1=b_1 b_2\cdots b_{n-1}a$, if the warden passes, then the prisoner may move immediately to $\beta_2$. Otherwise, the warden must move to $\beta_3=a''b_1 b_2\cdots b_{n-1}$ for some $a''<a$. But the sum of the symbols of $\beta_3$ is less than the sum of the symbols of $\beta_1$. So by the inductive hypothesis, the prisoner has a strategy to eventually force the position to $b_{n-1}a''b_1\cdots b_{n-2}$, then to $b_{n-2}b_{n-1}a''b_1\cdots b_{n-3}$, and so on to $b_1 b_2\cdots b_{n-1}a''$. We still have a smaller sum than the sum of the symbols in $\beta_1$, so the prisoner can eventually force the position to $\beta_2=a'b_1 b_2\cdots b_{n-1}$, since $a'\ge a''$. \end{proof} \begin{lemma} Given $\beta\in\textbf{S}$, the prisoner can win from $\beta$ if and only if $\beta$ is increasable in $\textbf{S}$ to a rotation of $\alpha$. \end{lemma} \begin{proof} If $\beta$ is not increasable in $\textbf{S}$ to a rotation of $\alpha$, then the warden can keep the game going indefinitely, simply by passing on every turn. Since the prisoner can only increase values, if the warden always passes, the prisoner will only be able to reach positions $\gamma$ where $\beta$ is increasable in $\textbf{S}$ to a rotation of $\gamma$. Since $\alpha$ is not such a position, the prisoner can never win. Now assume that $\beta$ is increasable in $\textbf{S}$ to a rotation of $\alpha$. If the warden chooses to pass on every move, then there is a sequence of moves $\beta=\beta_0,\beta_1,\beta_2,\ldots,\beta_m=\alpha$ that the prisoner may make to win. By Lemma \ref{ifwardenpasses}, if the game starts from $\beta=\beta_0$, then the prisoner can eventually force the position to be $\beta_1$, then $\beta_2$, and so on until finally reaching $\alpha$ and winning.
\end{proof} Note: while this shows that the prisoner can win eventually from any position $\beta$ that is increasable in $S$ to $\alpha$, the recursive strategy described above will probably \textbf{not} be the prisoner's optimal strategy. \begin{lemma} \label{nojumps} Given any positive integer $m$, if there are no positions of remoteness $m$, then there are no positions of remoteness $m+1$. (Thus, by induction, there are no positions of remoteness $m'$ for any integer $m'\ge m$.) \end{lemma} \begin{proof} If there were a position of remoteness $m+1$, then with optimal play, the first move from such a position would be to a position of remoteness $m$... and no such position exists. So there are no positions of remoteness $m+1$. \end{proof} \begin{lemma} \label{largerhelpsdwarden} Given positions $\gamma a_1,\gamma a_2\in\textbf{S}$, if $a_1<a_2$, then $r(\gamma a_1)\le r(\gamma a_2)$. \end{lemma} The point here is that, the greater the rightmost symbol in the string, the better off the warden is. From $\gamma a_1$, the warden may move to any $a\gamma\in\textbf{S}$ such that $a<a_1$, or the warden may give the prisoner the choice to move to any $a\gamma\in\textbf{S}$ where $a\ge a_1$. From $\gamma a_2$, the warden still may move to any $a\gamma\in\textbf{S}$ such that $a<a_1$, or the warden can ensure that the next move is to $a\gamma\in\textbf{S}$ for some $a\ge a_1$... but in the latter case, the warden may choose a specific $a\gamma\in\textbf{S}$ such that $a_1\le a<a_2$, if he so desires. This extra option can only help the warden, never hurt him. So we must have $r(\gamma a_1)\le r(\gamma a_2)$. Note: we will later see that if $r(\gamma a_1)$ and $r(\gamma a_2)$ are both finite, then $r(\gamma a_1)<r(\gamma a_2)$. \begin{lemma} \label{onechain} For any nonnegative integer $m$, there is at most one position of remoteness $m$. 
\end{lemma} This is a significant result; combined with Lemma \ref{nojumps}, the conclusion is that the ``game tree'' is really a chain, not a tree. There is one position of remoteness 0 (namely, $\alpha$), one position of remoteness 1, one position of remoteness 2, and so on until all the winning positions for the prisoner have been exhausted. And given any position $\beta$ that is winning for the prisoner, if a game starting from $\beta$ is played optimally, the game will pass through all positions of remoteness less than $r(\beta)$ until finally reaching $\alpha$. \begin{proof} We will prove this by contradiction. Let $m$ be the smallest integer where there are multiple positions of remoteness $m$. There is only one position of remoteness 0 (namely, $\alpha)$, so $m\ge 1$. Let $a\gamma$ be the one position of remoteness $m-1$; this position must be reachable in one move from all positions of remoteness $m$, so all such positions must have the form $\gamma b$. Let $b_1<b_2<\cdots <b_l$ be the elements of $\{1,2,\cdots,k\}$ such that $\gamma b_i\in\textbf{S}$ for each $i$. By Lemma \ref{largerhelpsdwarden}, $r(\gamma b_1)\le r(\gamma b_2)\le\cdots\le r(\gamma b_l)$. If $j$ is the smallest natural number such that $r(\gamma b_j)=m$, then because there is just one position of each remoteness less than $m$, we must have $$r(\gamma b_1)<r(\gamma b_2)<\cdots <r(\gamma b_{j-1})<r(\gamma b_j)=r(\gamma b_{j+1}).$$ For each $i\le j$, let $a_i\gamma$ be the next position reached from $\gamma b_i$ if both sides play optimally (the $a_i$'s for $1\le i\le j$ are all distinct). We can now show that it is impossible to have $r(\gamma b_j)=r(\gamma b_{j+1})=m$: Consider the two sets $\{a_i\}_{i=1}^j$ and $\{b_i\}_{i=1}^j$. If these two sets are identical, then consider what happens if the warden passes from the position $\gamma b_{j+1}$. The prisoner is then forced to move to $a\gamma$ for some $a\ge b_{j+1}$. 
This $a$ will not be an element of $\{a_i\}_{i=1}^j$, and hence $a\gamma$ will not have remoteness at most $m-1$. So by passing, the warden can force the next move to be to a position of remoteness at least $m$; $\gamma b_{j+1}$ can't have remoteness $m$. On the other hand, assume $\{a_i\}_{i=1}^j$ and $\{b_i\}_{i=1}^j$ are not identical. That means there is some $b_i<b_{j+1}$ that is not in $\{a_i\}_{i=1}^j$. If the warden moves from $\gamma b_{j+1}$ to $b_i\gamma$, then the warden has not moved to a position of remoteness at most $m-1$. So again, the warden was able to force the next move to be to a position of remoteness at least $m$; $\gamma b_{j+1}$ can't have remoteness $m$. This completes the contradiction; it is impossible to have two positions of the same finite remoteness. \end{proof} \begin{lemma} If $\alpha\in\textbf{S}$ is the goal state, then the position in $\textbf{S}$ of highest finite remoteness is $\alpha$. \end{lemma} The reason: since $\alpha$ is (trivially) increasable in $\textbf{S}$ to a rotation of $\alpha$, $r(\alpha)$ is finite. Say $r(\alpha)=m>0$ (we are treating $\alpha$ as a start position, not an end position). If there were any position $\beta$ such that $r(\beta)=m+1$, then with optimal play, the next move from $\beta$ would be to a position of remoteness $m$: namely, $\alpha$. But that means, with optimal play, $\beta$ is just one move from the goal state; so $r(\beta)=1$, not $m+1$. Thus, $\alpha$ has the maximal finite remoteness of any string in $\textbf{S}$. This also means that the positions in $\textbf{S}$ that are winning for the prisoner form a cycle. The only question remaining is why this is the same cycle that we would get from the greedy algorithm. \begin{theorem} The greedy algorithm generates the game ``tree'' for the warden's game. \end{theorem} \begin{proof} Let $\{\beta_m\}\subseteq\textbf{S}$ be the sequence of strings generated by the greedy algorithm, starting from $\alpha$. 
We have $\beta_0=\alpha$, and for each $m\ge 0$, if $\beta_m=a\gamma$, then $\beta_{m+1}=\gamma b$, where $b$ is the least element of $\{1,\ldots,k\}$ such that $\gamma b\in\textbf{S}$ and $\gamma b$ does not appear in the set $\{\beta_i\}_{i=1}^m$. (If $\gamma b\in \{\beta_i\}_{i=1}^m$ for all $b\in\{1,\ldots,k\}$, then there is no $\beta_{m+1}$; $\beta_m$ is the last string in the sequence.) Obviously, $\beta_0=\alpha$ is the one position of remoteness 0. We must show that $r(\beta_m)=m$ for all $m>0$; we will prove this by induction. Assume we have $r(\beta_i)=i$ whenever $0<i\le m$. Let $\beta_m=a\gamma$. Assume there is a position of remoteness $m+1$; it must be of the form $\gamma b\in\textbf{S}$ (so that there is a move available to $a\gamma$), where $\gamma b$ is not in $\{\beta_i\}_{i=1}^m$ (since all strings in that set have remoteness $m$ or less). Let $b_1<b_2<\cdots <b_j$ be the elements $b$ of $\{1,\ldots,k\}$ such that $\gamma b\in\textbf{S}$ but $\gamma b$ is not in $\{\beta_i\}_{i=1}^m$. (Thus, $\beta_{m+1}=\gamma b_1$.) By Lemma \ref{largerhelpsdwarden}, $r(\gamma b_1)\le r(\gamma b_2)\le\cdots \le r(\gamma b_j)$; by Lemma \ref{onechain}, all of those inequalities are strict except where we have multiple positions of infinite remoteness. We have $r(\gamma b_1)>m$, so the only $i$ where we can have $r(\gamma b_i)=m+1$ is $i=1$. Thus, we must have $r(\beta_{m+1})=r(\gamma b_1)=m+1$. Now assume that $m$ is the largest finite remoteness of any position in $\textbf{S}$; that is, $r(\alpha)=m$. The above inductive argument shows that $\beta_m=\alpha$. And the way we defined $\textit{Greedy}_\alpha(\textbf{S})$, the process halts if we ever have $\beta_m=\alpha$. So we do have $r(\beta_m)=m$ for all $m>0$; the length-$n$ substrings of $\textit{Greedy}_\alpha(\textbf{S})$ are exactly the winning positions for the prisoner, in order of remoteness.
\end{proof} We have thus proven Theorem \ref{mainresult}; the length-$n$ substrings of $\textit{Greedy}_\alpha(\textbf{S})$ are exactly the winning positions for the prisoner, which are exactly the strings in $\textbf{S}$ which are increasable in $\textbf{S}$ to a rotation of $\alpha$. As a final note for this section, here's a comment on the optimal strategy for the warden: \begin{corollary} Given any position $\gamma b\in\textbf{S}$, if $a$ is the greatest number less than $b$ such that $a\gamma\in\textbf{S}$, then the optimal move for the warden from $\gamma b$ is either to move to $a\gamma$, or to pass. (So, when the warden does decrease a number, he should always do so by the smallest amount possible.) \end{corollary} \begin{proof} Assume not. Assume there is a position $\gamma b\in\textbf{S}$, where $a$ is the greatest number less than $b$ such that $a\gamma\in\textbf{S}$, but the warden's optimal move is to $c\gamma$, where $c<a$. Let $r(\gamma b)=m$; then the remoteness of $c\gamma$ is $m-1$ (where, if $c\gamma=\alpha$, we are treating $\alpha$ as an end position). Since $\textbf{S}$ is closed under rotations and $a\gamma\in\textbf{S}$, we have $\gamma a\in\textbf{S}$. From Lemmas \ref{largerhelpsdwarden} and \ref{onechain}, since $a<b$, we have $r(\gamma a)<r(\gamma b)$. So $r(\gamma a)<m$. But from the position $\gamma a$, the warden can move to $c\gamma$, a position of remoteness $m-1$. So $r(\gamma a)\ge m$, a contradiction. \end{proof} There seem to be no similar statements we can make about the optimal strategy for the prisoner; depending on the situation, the prisoner may want to increase the value on a die by the least amount possible, the greatest amount possible, or some amount in between. For example, all such possibilities occur in our ``no primes'' example, $\textbf{S}_2\subseteq\textbf{T}(2,9)$.
There seems to be no way for the prisoner to determine anything about his optimal next move from $\beta$, other than to generate the entire string $\textit{Greedy}_\alpha(\textbf{S})$ until $\beta$ is reached. \section{Interesting examples} In \cite{SWW}, there are a number of examples of interesting sets $\textbf{S}\subseteq\textbf{T}(n,k)$ where the greedy algorithm produces a universal cycle for $\textbf{S}$. Here are some new such sets derived from Theorem \ref{mainresult}. \subsection{Strings increasable to a rotation of $\alpha$} Choose any $\alpha\in\textbf{T}(n,k)$, and let $\textbf{S}\subseteq\textbf{T}(n,k)$ be the strings that are increasable in $\textbf{T}(n,k)$ to a rotation of $\alpha$. Obviously, $\textbf{S}$ is then closed under rotations. Given any $\beta\in\textbf{S}$, $\beta$ is increasable in $\textbf{T}(n,k)$ to a rotation of $\alpha$. So there are strings $\beta_0,\beta_1,\ldots,\beta_m\in\textbf{T}(n,k)$ such that $\beta_0=\beta$, $\beta_m$ is a rotation of $\alpha$, and each $\beta_i$ can be changed to $\beta_{i+1}$ by increasing one symbol. Then each $\beta_i$ is increasable in $\textbf{T}(n,k)$ to a rotation of $\alpha$, so each $\beta_i\in\textbf{S}$. But that means $\beta$ is actually increasable in $\textbf{S}$ to a rotation of $\alpha$. Since this is true for all $\beta\in\textbf{S}$, the greedy algorithm starting from $\alpha$ generates a universal cycle of $\textbf{S}$. For example, let $n=3$, $k=4$, and $\alpha=143$. Here's a universal cycle for the strings in $\textbf{T}(3,4)$ increasable to a rotation of 143: $$(143)1112113122123132133141142143$$ Note: we get the same collection of strings if $\alpha$ is either 314 or 431. But the resulting universal cycle would be different in either such case. 
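Because intermediate strings here range over all of $\textbf{T}(n,k)$, $\beta$ is increasable in $\textbf{T}(n,k)$ to $\tau$ precisely when $\beta$ is componentwise at most $\tau$. The sketch below (our own code) rebuilds $\textbf{S}$ from that observation and checks it against the cycle displayed above for $\alpha=143$:

```python
from itertools import product

def rotations(s):
    return {s[i:] + s[:i] for i in range(len(s))}

# Increasable in T(n,k) to tau just means componentwise <= tau, since
# intermediate strings are unconstrained; so membership in S reduces
# to a comparison against the rotations of alpha (a sketch).
alpha = "143"
S = {s for s in ("".join(p) for p in product("1234", repeat=3))
     if any(all(c <= t for c, t in zip(s, rot)) for rot in rotations(alpha))}

cycle = "1112113122123132133141142143"
windows = {(cycle + cycle)[i:i + 3] for i in range(len(cycle))}
assert windows == S    # the 28 cyclic windows are exactly S
print(len(S))          # 28
```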
Here's the universal cycle for $\alpha=314$: $$(314)1112113114212213214312313314$$ And here's the universal cycle for $\alpha=431$: $$(431)1121131221231321331411421431$$ \subsection{Unions} Let $\textbf{S}_1,\textbf{S}_2\in\mathcal{G}(n,k)$, where $\textbf{S}_1$ and $\textbf{S}_2$ are both closed under rotations. Assume all strings in $\textbf{S}_1$ and $\textbf{S}_2$ are increasable (in their respective sets) to rotations of a single string $\alpha$. Then all strings in $\textbf{S}_1\cup\textbf{S}_2$ are increasable in $\textbf{S}_1\cup\textbf{S}_2$ to a rotation of $\alpha$, so the greedy algorithm starting from $\alpha$ generates a universal cycle of $\textbf{S}_1\cup\textbf{S}_2$. This raises the question of whether the same can be said of intersections. However, this turns out to be false: even if the greedy algorithm generates universal cycles for $\textbf{S}_1$ and $\textbf{S}_2$, the same may not be true of $\textbf{S}_1\cap\textbf{S}_2$. One simple example will demonstrate why. Let $\textbf{S}_1,\textbf{S}_2\subseteq\textbf{T}(2,3)$ be as follows: $$\textbf{S}_1=\{11,13,31,33\}$$ $$\textbf{S}_2=\{11,12,21,23,32,33\}$$ The greedy algorithm (starting from 33) generates universal cycles for both $\textbf{S}_1$ and $\textbf{S}_2$, but does not do so for $\textbf{S}_1\cap\textbf{S}_2=\{11,33\}$. The problem is that there may be an element $\beta\in\textbf{S}_1\cap\textbf{S}_2$ which is increasable to a rotation of $\alpha$ in both $\textbf{S}_1$ and in $\textbf{S}_2$, but the paths from $\beta$ to $\alpha$ may be different in each set. (Here, $\beta=11$; we have $11\rightarrow 13\rightarrow 33$ in $\textbf{S}_1$, and $11\rightarrow 12\rightarrow 32\rightarrow 33$ in $\textbf{S}_2$.) \subsection{Rotations of increasing strings} Assume that $n\le k$. Let $\textbf{S}$ be the set containing all strictly increasing strings in $\textbf{T}(n,k)$ and their rotations. 
For example, in $\textbf{T}(3,5)$, $\textbf{S}$ contains the following strings and their rotations: $$\{123,124,125,134,135,145,234,235,245,345\}$$ By definition, $\textbf{S}$ is closed under rotations. Let $\alpha$ be the lexicographically maximal, strictly increasing string in $\textbf{T}(n,k)$: $$\alpha=(k-n+1)(k-n+2)\cdots(k-1)k.$$ Any string $\beta\in\textbf{S}$ is increasable in $\textbf{S}$ to a rotation of $\alpha$; the greatest symbol in $\beta$ can be increased to $k$, then the next-greatest symbol can be increased to $k-1$, and so on. So a universal cycle for $\textbf{S}$ can be generated with the greedy algorithm. \subsection{Maximum cyclic increment or cyclic decrement} Choose integers $I>0$ and $D>0$. Let $\textbf{S}\subseteq\textbf{T}(n,k)$ be the set of strings with no cyclic increment of size greater than $I$ and no cyclic decrement of size greater than $D$. (A cyclic increment or decrement of a string $b_1b_2\cdots b_n$ is the amount by which the value rises or falls from $b_i$ to $b_{i+1}$, with indices taken cyclically.) For example, if we take $\textbf{T}(3,4)$, $I=2$, and $D=1$, then $\textbf{S}$ consists of the following strings and their rotations: $$\{111,112,122,132,222,223,233,243,333,334,344,444\}$$ By definition, $\textbf{S}$ is closed under rotations. Let $\alpha=k^n$; $\alpha$ contains no cyclic increments or decrements, so $\alpha\in\textbf{S}$. To show that the greedy algorithm works here: choose any $\beta\in\textbf{S}$ such that $\beta\ne k^n$. Assume $\beta=\gamma_1 a\gamma_2$, where $a$ is the least symbol in $\beta$. Let $\beta'=\gamma_1(a+1)\gamma_2$. This change from $\beta$ to $\beta'$ will either decrease the size of cyclic increments/decrements, or will produce a new cyclic increment or decrement of size 1 (which is legal). So $\beta'\in\textbf{S}$. Thus, for any $\beta\in\textbf{S}$ with $\beta\ne k^n$, it's possible to increase a symbol of $\beta$ by 1 to produce another string in $\textbf{S}$. This process can be continued until $\alpha$ is reached. So any $\beta\in\textbf{S}$ is increasable in $\textbf{S}$ to $\alpha$, and the greedy algorithm (starting from $\alpha$) produces a universal cycle.
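The membership test for this family is a one-liner; the sketch below (our code, with illustrative names) rebuilds the $\textbf{T}(3,4)$, $I=2$, $D=1$ example and checks it against the listed representatives:

```python
from itertools import product

def legal(s, I, D):
    """Membership test (a sketch): no cyclic increment larger than I
    and no cyclic decrement larger than D."""
    return all(-D <= int(b) - int(a) <= I for a, b in zip(s, s[1:] + s[0]))

# Rebuild the T(3,4), I = 2, D = 1 example.
S = {s for s in ("".join(p) for p in product("1234", repeat=3)) if legal(s, 2, 1)}

# The listed representatives, closed under rotation, give the same set.
reps = ["111", "112", "122", "132", "222", "223", "233", "243",
        "333", "334", "344", "444"]
closure = {r[i:] + r[:i] for r in reps for i in range(3)}
assert S == closure
print(len(S))   # 28
```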
\subsection{Minimum span, maximum span} Choose integers $m$ and $M$ such that $0\le m<M<k$. Let $\textbf{S}\subseteq\textbf{T}(n,k)$ be the set of strings $\beta$ whose span is at least $m$ and at most $M$. (The ``span'' of a string $\beta$ is the difference between the least and greatest symbols in $\beta$.) For example, if we take $\textbf{T}(3,4)$, $m=1$, and $M=2$, then $\textbf{S}$ consists of the following strings and their permutations: $$\{112,113,122,123,133,223,224,233,234,244,334,344\}$$ Clearly, $\textbf{S}$ is closed under rotations. Any $\beta\in\textbf{S}$ can be increased in $\textbf{S}$ to a rotation of $\alpha=(k-m)k^{n-1}$, as follows: if the span of $\beta$ is greater than $m$, increase the least symbol of $\beta$ by 1. If the span of $\beta$ equals $m$, increase the greatest symbol of $\beta$ by 1, unless the greatest symbol is $k$. Repeat this process until the span of $\beta$ equals $m$ and the greatest symbol of $\beta$ is $k$. At that point, the least symbol in $\beta$ will be $k-m$; leave one copy of that symbol alone, and increase all the other symbols of $\beta$ to $k$. Thus, the greedy algorithm generates a universal cycle for $\textbf{S}$. \subsection{Avoiding a substring} Choose a string $\gamma\in\textbf{T}(m,k)$ for some $m\ge 1$, and let $\textbf{S}\subseteq\textbf{T}(n,k)$ be the set of strings that do not contain $\gamma$ as a cyclic substring. It was proven in \cite{SWW} that if $\gamma$ does not contain $k$, then $\textbf{S}$ can be generated by the greedy algorithm. If $\gamma$ does contain $k$, then we can still make a weaker statement: let $i,j\in\{1,\ldots,k\}$ be two symbols such that $i<j$. If $\gamma$ contains $i$ but not $j$, then $\textbf{S}$ can be generated by the greedy algorithm. As an example, let $\textbf{S}\subseteq\textbf{T}(3,3)$ be the set of strings not containing 13 as a cyclic substring. (In this case, $i=1$ and $j=2$.)
Then $\textbf{S}$ consists of the rotations of the following strings: $$\{111,112,122,123,222,223,233,333\}$$ The reason why the greedy algorithm works: $k^n\in\textbf{S}$, since the forbidden substring $\gamma$ contains the symbol $i$, and $i<j\le k$ implies $i<k$. Given any $\beta\in\textbf{S}$, we can increase $\beta$ in $\textbf{S}$ to $k^n$, as follows: replace any occurrences of $i$ in $\beta$ with $j$, then increase all symbols in $\beta$ to $k$. So all strings in $\textbf{S}$ are increasable in $\textbf{S}$ to $k^n$. It would seem to be a difficult question to completely categorize the forbidden substrings $\gamma$ for which $\textbf{S}$ can be generated by the greedy algorithm. I have not found any examples of a forbidden substring $\gamma$ containing a symbol $a\le k-2$ where the greedy algorithm fails. And I would conjecture that there are none: \begin{conjecture} Let $\gamma\in\textbf{T}(m,k)$ be a string containing a symbol $a\le k-2$. Let $\textbf{S}\subseteq\textbf{T}(n,k)$ (for some $n\ge m$) be the set of strings not containing $\gamma$ as a cyclic substring. Then the greedy algorithm starting from $k^n$ generates a universal cycle for $\textbf{S}$. \end{conjecture} Now, if all symbols in $\gamma$ are either $k$ or $k-1$, then the greedy algorithm may fail. Let $\textbf{S}\subseteq\textbf{T}(n,9)$ (for some $n\ge 4$) be the set of strings not containing a particular $\gamma$ as a cyclic substring. I leave it as an exercise to the interested reader to show that the greedy algorithm (starting from $\alpha=9^n$) succeeds if $\gamma$ is in the set $$\{8899,8989\}$$ but fails if $\gamma$ is in the set $$\{89,889,899,8889,8999\}$$ Note: for some strings $\gamma$, the outcome depends on the value of $n$. One example is $\gamma=8998$; the greedy algorithm will fail if and only if $n$ is a multiple of 3. (All strings in $\textbf{S}$ will be increasable in $\textbf{S}$ to $9^n$, except for $\beta=(889)^{n/3}$ and its rotations.)
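The $n$-dependence for $\gamma=8998$ can be machine-checked, since increasability is just reachability under single-symbol $+1$ moves. The sketch below (function names mine) takes $n=6$, a multiple of 3, and confirms that $(889)^2$ is stuck while a nearby string in $\textbf{S}$ is still increasable to $9^6$:

```python
K, GAMMA = 9, (8, 9, 9, 8)              # k = 9, forbidden cyclic substring 8998

def in_S(w):
    """True if w avoids GAMMA as a cyclic substring."""
    return all(tuple(w[(i + j) % len(w)] for j in range(len(GAMMA))) != GAMMA
               for i in range(len(w)))

def increasable(beta):
    """Can beta reach 9^n by repeatedly increasing one symbol by 1,
    staying inside S at every step?"""
    target = (K,) * len(beta)
    seen, stack = {beta}, [beta]
    while stack:
        w = stack.pop()
        if w == target:
            return True
        for i in range(len(w)):
            if w[i] < K:
                w2 = w[:i] + (w[i] + 1,) + w[i + 1:]
                if w2 not in seen and in_S(w2):
                    seen.add(w2)
                    stack.append(w2)
    return False

assert not increasable((8, 8, 9, 8, 8, 9))   # (889)^2 admits no legal increase
assert increasable((8, 8, 9, 8, 8, 8))       # but this neighbour reaches 9^6
```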
\section{Conclusion and future work: necklaces} At this point, we have classified the subsets $\textbf{S}\subseteq\textbf{T}(n,k)$, closed under rotations, where the greedy algorithm produces a universal cycle for $\textbf{S}$. But there's one major problem with generating universal cycles with the greedy algorithm: we must store the entire cycle (which could be exponential in length) in order to generate the cycle. Fortunately, when $\textbf{S}=\textbf{T}(n,k)$, there is a faster method: Given $\alpha\in\textbf{T}(n,k)$, $\alpha$ is a ``necklace'' if, out of all rotations of $\alpha$, $\alpha$ itself is the lexicographically earliest such rotation. If we take all such necklaces in lexicographic order and concatenate their aperiodic prefixes, we obtain a de Bruijn cycle for $\textbf{T}(n,k)$ (the same cycle produced by the greedy algorithm). For example, here's the resulting universal cycle for $\textbf{T}(3,3)$ (with spaces added between the prefixes): $$(333)1 \ 112 \ 113 \ 122 \ 123 \ 132 \ 133 \ 2 \ 223 \ 233 \ 3$$ In \cite{SWW}, this is called the FKM algorithm. It is proved in \cite{SWW} that this algorithm produces a universal cycle for $\textbf{S}\subseteq\textbf{T}(n,k)$ (the same universal cycle produced by the greedy algorithm) if \begin{enumerate} \item $\textbf{S}$ is closed under rotations, and \item every necklace in $\textbf{S}$ remains a necklace in $\textbf{S}$ if the suffix of length $i$ (for each $i$ with $1\le i\le n$) is replaced with $k^i$. (When this second condition holds, $\textbf{S}$ is referred to as a ``$k$-suffix language''.) \end{enumerate} However, this is not an ``if and only if'' situation. The following example of a set $\textbf{S}\subseteq\textbf{T}(4,3)$ is given in \cite{SWW}: $$\textbf{S}=\{1112,1121,1122,1211,1212,1221,1222,1322,2111,$$ $$2112,2121,2122,2132,2211,2212,2213,2221,3221\}$$ Not all necklaces in $\textbf{S}$ remain in $\textbf{S}$ when a suffix is replaced with all 3's. However, the FKM algorithm works here.
The necklaces in $\textbf{S}$, in lexicographic order, are 1112, 1122, 1212, 1222, and 1322. Reduce 1212 to its aperiodic prefix 12, then concatenate all the strings, and you obtain a universal cycle: $$(1322)1112 \ 1122 \ 12 \ 1222 \ 1322$$ So it is natural to ask whether there is a necessary and sufficient condition on $\textbf{S}\subseteq\textbf{T}(n,k)$ so that the FKM algorithm generates a universal cycle for $\textbf{S}$. I have a possible candidate for just such a condition: Let's generalize the concept of a $k$-suffix language as follows. Given a string $\alpha\in\textbf{T}(n,k)$, call a set $\textbf{S}\subseteq\textbf{T}(n,k)$ an ``$\alpha$-suffix language'' if, for any $\beta\in\textbf{S}$, each symbol in $\beta$ is at most the corresponding symbol in $\alpha$, and we obtain another element of $\textbf{S}$ if we replace any suffix of $\beta$ with an equal-length suffix of $\alpha$. That is, if $\beta=b_1\cdots b_n$ and $\alpha=a_1\cdots a_n$, then for all $m$ such that $1\le m\le n$, we have $b_m\le a_m$ and $b_1\cdots b_{m-1}a_m\cdots a_n\in\textbf{S}$. With this definition in place, I would conjecture the following: \begin{conjecture} Let $\textbf{S}\subseteq\textbf{T}(n,k)$ be a set that is closed under rotations. Then the FKM algorithm generates a universal cycle for $\textbf{S}$ if and only if the set of necklaces in $\textbf{S}$ is an $\alpha$-suffix language, where $\alpha$ is the lexicographically maximal necklace in $\textbf{S}$. \end{conjecture} The reason for this conjecture: assume that $\textbf{S}\subseteq\textbf{T}(n,k)$ is closed under rotations and is an $\alpha$-suffix language, where $\alpha$ is the lexicographically maximal necklace in $\textbf{S}$. Under these circumstances, it appears that the prisoner has a strategy such that, if the current position is a necklace $\beta\in\textbf{S}$, the prisoner can ensure that the next necklace position reached is lexicographically earlier than $\beta$. 
Thus, in the universal cycle generated by the greedy algorithm, the necklaces appear in lexicographic order. Perhaps then, the FKM algorithm generates the same cycle as the greedy algorithm.
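For the full language $\textbf{T}(n,k)$, the FKM construction itself is only a few lines. This sketch (function names mine) reproduces the $\textbf{T}(3,3)$ cycle displayed above and confirms the de Bruijn property:

```python
from itertools import product

def necklaces(n, k):
    """Strings that are lexicographically least among their rotations."""
    for w in product(range(1, k + 1), repeat=n):
        if all(w <= w[i:] + w[:i] for i in range(1, n)):
            yield w

def aperiodic_prefix(w):
    """Shortest prefix p such that w is p repeated."""
    for d in range(1, len(w) + 1):
        if len(w) % d == 0 and w == w[:d] * (len(w) // d):
            return w[:d]

def fkm(n, k):
    """Concatenate the aperiodic prefixes of the necklaces in lex order."""
    return [c for w in necklaces(n, k) for c in aperiodic_prefix(w)]

cycle = fkm(3, 3)
assert ''.join(map(str, cycle)) == '111211312212313213322232333'
windows = {tuple(cycle[(i + j) % 27] for j in range(3)) for i in range(27)}
assert len(windows) == 27        # every string of T(3,3) occurs exactly once
```

Filtering `necklaces` through a membership test for $\textbf{S}$ gives the general FKM algorithm of \cite{SWW}.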
https://arxiv.org/abs/1911.01397
Periodic Orbits on Obtuse Edge Tessellating Polygons
A periodic orbit on a frictionless billiard table is a piecewise linear path of a billiard ball that begins and ends at the same point with the same angle of incidence. The period of a primitive periodic orbit is the number of times the ball strikes a side of the table as it traverses its trajectory exactly once. In this paper we find and classify the periodic orbits on a billiard table in the shape of a 120-isosceles triangle, a 60-rhombus, a 60-90-120-kite, and a 30-right triangle. In each case, we use the edge tessellation (also known as tiling) of the plane generated by the figure to unfold a periodic orbit into a straight line segment and to derive a formula for its period in terms of the initial angle and initial position.
\section{Introduction} Elementary mathematical billiards studies the motion of a point particle moving with unit speed along a piecewise linear path in the interior of a polygon $G$ subject to elastic reflections at the boundary $\partial G$, i.e., the angle of incidence equals the angle of reflection. We think of $G$ as a frictionless billiard table, the edges of $G$ as its bumpers, the vertices of $G$ as its pockets, and the particle in motion as the cue ball. The particle's path is its \emph{orbit}. If the orbit reaches a vertex of $G$, it terminates and is \emph{singular}. A non-singular orbit that begins and ends at the same point is \emph{periodic} if the particle retraces its orbit when allowed to continue. A periodic orbit is \emph{primitive} if the particle traverses its orbit exactly once. The \emph{period} of a periodic orbit is the number of times the particle strikes $\partial G$ as it traverses a primitive sub-orbit. In 2006, A. Baxter and R. Umble found and classified the periodic orbits on equilateral triangles \cite{Ba-Um}. Five years later, A. Baxter, E. McCarthy, and J. Eskreis-Winkler solved the analogous problem on rectangles, isosceles right triangles, and $30^\circ$-right triangles \cite{Ba-Es-Mc}. In this paper we solve the problem on $120^\circ$-isosceles triangles, $60^\circ$-rhombuses, and $60^\circ$-$90^\circ$-$120^\circ$-kites, and we make a conjecture in the case of regular hexagons. An \emph{edge tessellation} of the plane is generated by reflecting a polygon $G$ and its reflected images in their edges. Each such $G$ lies in exactly one of the eight aforementioned families \cite{K-U}. The \emph{edges} of an edge tessellation are the edges of its polygons and the lines containing them are its \emph{inclines}. For example, an equilateral triangle in standard position generates an edge tessellation with inclines of $0^\circ$, $60^\circ$, and $120^\circ$.
We identify a non-singular orbit in an edge tessellating polygon $G$ with a piecewise linear curve $\gamma:I \rightarrow G$ defined on a finite interval $I$. An \emph{unfolding} of $\gamma$ in the edge tessellation $\mathcal{T}$ generated by $G$ is a line segment produced by successively reflecting $\gamma$ and its reflected images in the inclines of $\mathcal{T}$. Unfoldings relate the periodicity of $\gamma$ to the geometry of $\mathcal{T}$ and are sufficient for classifying periodic orbits in the non-obtuse cases. In the obtuse cases treated here, however, the analysis requires more sophisticated techniques involving what we call the ``fence.'' \section{Periodic orbits on a $120^\circ$-isosceles triangle} \label{120isosceles} \subsection{Preliminaries} Consider a $120^\circ$-isosceles $\triangle ABC$ positioned and labeled so that $\overline{AC}$ is horizontal with $A$ to the left of $C,$ the apex $B$ is positioned above $\overline{AC}$, and $\angle B$ is obtuse. The edge tessellation $\mathcal{T}$ generated by $\triangle ABC$ has inclines of $0^\circ$, $30^\circ,$ $60^\circ,$ $90^\circ,$ $120^\circ,$ and $150^\circ$. Given triangles $\triangle_1$ and $\triangle_2$ in $\mathcal T$ such that $\triangle_2=\tau (\triangle_1)$ for some translation $\tau$, two points $P_1$ on $\triangle_1$ and $P_2$ on $\triangle_2$ are \emph{(translationally) aligned (with respect to $\triangle_1$ and $\triangle_2$)} if $P_2=\tau (P_1)$. Let $\gamma:(0,T]\rightarrow \triangle ABC$ be an orbit in $\triangle ABC$. Let $\upsilon: (0,T]\rightarrow \mathcal{T}$ be an unfolding of $\gamma$ with \emph{initial point} $P = \lim_{t \searrow 0} \upsilon(t)$ and \emph{terminal point} $Q = \upsilon(T) = \sigma(\gamma(T))$, where $\sigma$ is the composition of reflections associated with $\upsilon$. The \emph{initial triangle} $\triangle ABC$ and the \emph{terminal triangle} $\triangle A'B'C' := \sigma \left( \triangle ABC \right)$ are \emph{consistently oriented} if $\sigma$ is orientation preserving.
Unlike orientation, which is determined by comparing the labelings $(A,B,C)$ and $(A',B',C')=(\sigma(A),\sigma(B),\sigma(C))$ \cite{U-H}, alignment is determined by the relative positions of $P$ and $Q$ on $\triangle ABC$ and $\triangle A'B'C'$, and is independent of labeling. The periodicity of $\gamma$ is characterized by \begin{Theorem}\label{align+orient} Let $\gamma$ be an orbit with unfolding $\upsilon$ whose initial point is on $\overline{AC}$. Then $\gamma$ is periodic if and only if the initial and terminal triangles are consistently oriented and the initial and terminal points are aligned. \end{Theorem} \begin{proof} When $\triangle A'B'C'$ and $\triangle ABC$ are consistently oriented, $\sigma$ is a rotation or a translation. Since $P$ and $Q$ are aligned, $\sigma$ is a translation and $\sigma (P)=Q$. Hence $P$ and $Q$ are in the same relative position in their respective triangles and the initial and terminal angles are equal. Therefore $\gamma$ is periodic. Conversely, given a periodic orbit $\gamma$, let $\upsilon$ be the unfolding with initial point $P$ on $\overline{AC}$, initial angle $\Theta : =m\angle QPC$, and generated by first reflecting in a non-horizontal incline. If $\Theta\neq 90^\circ$, the reflection of $\overline{PQ}$ in the vertical line through $P$ is another unfolding, and we may restrict our considerations to initial angles in the range $0<\Theta\leq 90^\circ$. The images of $\overleftrightarrow{AC}$ in $\mathcal{T}$ are the horizontal, $60^{\circ}$, and $120^{\circ}$ inclines. We claim that $Q$ is on a horizontal incline. Suppose $Q$ is on a $60^\circ$ incline. Then the terminal angle at $Q$, which equals the initial angle $\Theta$ at $P$, is $60^\circ-\Theta$, $\Theta+120^\circ$, $\Theta-60^\circ$, or $240^\circ-\Theta$. 
But $0<\Theta\leq 90^\circ$ implies $\Theta=30^\circ$ (see Figure~\ref{initial-terminal-angle}), and an initial angle of $\Theta=30^\circ$ produces a period $8$ orbit that terminates on a horizontal incline, which is a contradiction (see Figure~\ref{30-unfolding}). Suppose $Q$ is on a $120^{\circ}$ incline. Then the terminal angle at $Q$ is $120^\circ-\Theta$, $\Theta+60^\circ$, $\Theta-120^\circ$, or $300^\circ-\Theta$. But $0<\Theta\leq 90^\circ$ implies $\Theta=60^\circ$, and an initial angle of $\Theta=60^\circ$ produces an orbit of period $4$ or period $10$, both of which terminate on a horizontal incline, which is a contradiction (see Figure~\ref{alignex2}). Therefore $Q$ is on a horizontal incline as claimed. \begin{figure} \centering \begin{minipage}[b]{.4\textwidth} \centering \include{figure1.tikz} \caption{\small Equal initial and terminal angles of an unfolding.} \label{initial-terminal-angle} \end{minipage}% \hskip2cm\begin{minipage}[b]{.4\textwidth} \centering \include{figure2.tikz} \caption{\small An unfolding with initial angle $30^\circ$.} \label{30-unfolding} \end{minipage} \end{figure} \begin{figure}[ht!] \centering \begin{subfigure}[b]{0.4\textwidth} \include{figure4.tikz} \caption{An unfolding with initial angle $60^\circ$ and period 10.} \label{60-unfolding-10} \end{subfigure} \hskip1cm\begin{subfigure}[b]{0.4\textwidth} \include{figure3.tikz} \caption{An unfolding with initial angle $60^\circ$ and period 4.} \label{60-unfolding-4} \end{subfigure} \caption{} \label{alignex2} \end{figure} Since $Q$ is on a horizontal incline, the base $\overline{A'C'}$ is horizontal. Furthermore, since $\overline{PQ}$ does not cross the interior of $\triangle A'B'C'$, its apex $B'$ lies above the base. Thus $\sigma$ is a translation or a glide reflection. If $\sigma$ is a glide reflection, it reverses orientation. Then $180^\circ -\Theta =\Theta$ implies $\Theta=90^{\circ}$. 
But a periodic orbit with initial angle $90^\circ$ coincides with the period $8$ orbit with initial angle $30^{\circ}$. Since $\sigma$ is a composition of eight reflections, it preserves orientation, which is a contradiction. Therefore $\sigma$ is a translation, $P$ and $Q$ are aligned, and $\triangle A'B'C'$ and $\triangle ABC$ are consistently oriented. \end{proof} \begin{Corollary}\label{evenperiod} The period of a periodic orbit is even, and an unfolding of a periodic orbit with initial point on a horizontal terminates on a horizontal. \end{Corollary} Corollary \ref{evenperiod} does not hold for all edge tessellating polygons. For example, Fagnano's periodic orbit on an equilateral triangle has period $3$ and terminates on a non-horizontal incline \cite{Ba-Um}. Note that the periodic orbit displayed in Figure~\ref{30-unfolding} has initial angle $30^{\circ}$ and period $8$, while the periodic orbits displayed in Figure~\ref{alignex2} have initial angle $60^{\circ}$ and respective periods $4$ and $10$. A periodic orbit $\gamma$ is \emph{monoperiodic} if every periodic orbit with the same initial angle as $\gamma$ has the same period; otherwise, $\gamma$ is \emph{biperiodic}. Our next proposition allows us to restrict our considerations to periodic orbits with initial angles between $60^\circ$ and $90^\circ$. \begin{Proposition}\label{align} Every periodic orbit can be represented by an unfolding with an initial angle $\Theta$ in the range $60^\circ \leq \Theta \leq 90^\circ$. \end{Proposition} \begin{proof} Let $\gamma$ be a periodic orbit with initial angle $\Theta$ in the range $0^\circ<\Theta\leq 90^\circ$. Suppose $0^\circ<\Theta\leq 30^\circ$. Since $Q$ is on a horizontal incline, $\overline{PQ}$ cuts a $120^{\circ}$ incline at a point $P'$ with angle of incidence $\Phi=60^\circ+\Theta.$ Thus, there is an unfolding $\overline{P'Q'}$ of $\gamma$ with initial angle $\Phi$ in the range $60^\circ<\Phi\leq 90^\circ$.
On the other hand, suppose $30^\circ<\Theta< 60^\circ$, and let $\overline{PQ'}$ be the reflection of $\overline{PQ}$ in $\overleftrightarrow{AC}.$ Then $\overline{PQ'}$ cuts a $60^{\circ}$ incline at a point $P'$ with angle of incidence $\Psi=120^\circ-\Theta$. Thus, there is an unfolding $\overline{P'R}$ with initial angle $\Psi$ in the range $60^\circ<\Psi<90^\circ$. \end{proof} Since periodic orbits with initial angles $60^\circ$ and $90^\circ$ are understood, our problem reduces to classifying periodic orbits with initial angles in $(60^\circ, 90^\circ)$. \subsection{Contact points of an unfolding and the fence} Let $AC$ denote the length of $\overline{AC}$ and let $u=\frac{1}{2}AC$. Impose a rectangular coordinate system on $\mathcal T$ with horizontal axis $\overleftrightarrow{AC}$, origin $O$ at the midpoint of $\overline{AC}$, horizontal unit of length $u$, and vertical unit of length $\sqrt{3}u$. Then points on vertical inclines have integer horizontal coordinates, points on horizontal inclines have integer vertical coordinates, and adjacent vertical and adjacent horizontal inclines lie one unit apart. If $\upsilon$ is an unfolding of a periodic orbit with initial point $P$, terminal point $Q$, and initial angle $\Theta$, Theorem~\ref{align+orient} implies that the vector $\mathbf{PQ}$ is parallel to a vector $\left( x,y\right) $ for some $x,y\in \mathbb{N}$; hence $\Theta=\arctan\left(\frac{y}{x}\sqrt{3}\right)$. Furthermore, $\Theta\in\left(60^\circ, 90^\circ\right)$ implies $x < y$, and we may assume $\gcd\left( x,y\right) =1$. Parametrize $\upsilon$ via $\upsilon\left( t\right) :=\left( t+a,\frac{y}{x}t\right),$ $0< t \leq T$, where $a\in(-1,1)$; then $P = \lim_{t \searrow 0} \upsilon(t)=(a,0)$ and $Q = \upsilon(T)$. Each point at which $\upsilon$ cuts a vertical incline lies on a fundamental vertical segment of length 2 connecting the midpoints of two horizontal edges. 
The function $f:\mathbb{Z} \times\mathbb{R}\rightarrow \mathbb{R}/2\mathbb{Z}$ defined by $f\left(\alpha, \beta \right) :=\alpha+ \beta +2 \mathbb Z$ identifies each such segment with the quotient group $\mathbb{R}/2\mathbb{Z}$, called the \emph{fence}. Geometrically, the projection in the $60^\circ$ direction onto the vertical coordinate axis sends $(\alpha,\beta) $ into the coset $f (\alpha,\beta) $. It will often be convenient to think of the fence as the interval $\mathcal{F}:=\left[0,2\right)$ of coset representatives and to write $f\left(\alpha, \beta \right) =\left(\alpha+ \beta \right) \operatorname{mod}2$. The fence $\mathcal{F}$ consists of the \emph{barrier} $\mathcal{B}:=\left( \frac{1}{3},\frac{5}{3}\right] $ and the \emph{gate} $\mathcal{B}^{c}:=\mathcal{F\smallsetminus B}$. Between consecutive horizontal inclines, an unfolding $\upsilon$ always cuts four non-vertical edges. But whether or not $\upsilon$ also cuts a vertical edge is determined by its set of \emph{contact points} \begin{equation*} \mathcal{C}_{T}:=\left\{ f\left(\upsilon(t)\right) : 0< t \leq T \,\, \mbox{and} \,\, t+a \in \mathbb Z\right\}\subset \mathcal F. \end{equation*} Indeed, for $t+a \in \mathbb Z$, the unfolding $\upsilon$ cuts a vertical edge at $\upsilon(t)$ if and only if $f(\upsilon(t))\in \mathcal B$. The \emph{multiplicity} of a contact point $c\in \mathcal{C}_{T}$, denoted by $m_{T}\left( c\right) $, is the number of times $\upsilon$ cuts a vertical incline at a point corresponding to $c$, i.e., \begin{equation*} m_{T}\left( c\right) :=\#\left\{ t :0< t \leq T,\,\, t+a \in \mathbb Z, \,\, \mbox{and} \,\, f\left(\upsilon\left(t \right)\right) =c\right\} . \end{equation*} These ideas are illustrated in Figure~\ref{figure4}. \begin{figure}[ht!]
\centering \begin{subfigure}[b]{0.3\textwidth} \centering \hspace{-2em} \include{figure5a.tikz} \caption{Geometric motivation for the fence.} \label{geometric-motivation} \end{subfigure} \kern10em \begin{subfigure}[b]{0.3\textwidth} \centering \include{figure5b.tikz} \caption{The fence for the unfolding in (a); contact points along the unfolding are indexed sequentially.} \label{fence} \end{subfigure} \caption{} \label{figure4} \end{figure} Define $t_i:= i-a$, where $i \in \mathbb Z$; then $c_i:= f(\upsilon\left( t_{i}\right)) =f ( i,\frac{y}{x}(i-a) )$. Extending the domain of $\upsilon$ to all real numbers allows us to define $c_i$ for all integers $i$, in which case the equality still holds. The following lemma characterizes the purely geometric notion of alignment in terms of analytic conditions on the set of contact points. \begin{Lemma}\label{align-contact} The initial and terminal points of an unfolding $\upsilon\left( t\right), \, 0<t \leq T$, are aligned if and only if $c_i =c_{i+T}$ for all $i \in \mathbb Z$. \end{Lemma} \begin{proof} By inspection, the initial point $\left(a, 0\right)$ and the terminal point $\left(T+a, \frac{y}{x}T\right)$ are aligned in the tessellation $\mathcal T$ if and only if the horizontal change $T$ and the vertical change $ \frac{y}{x}T$ are both integers, and $2\mid \left(T+\frac{y}{x}T\right)$. By definition, $c_i =c_{i+T}$ if and only if $T \in \mathbb Z$ and $i + \frac{y}{x} (i-a) +2 \mathbb Z = i+T +\frac{y}{x}(i+T-a) +2 \mathbb Z$, where the latter condition is equivalent to $2\mid \left(T+\frac{y}{x}T\right)$. The fact that $T \in \mathbb Z$ and $2\mid \left(T+\frac{y}{x}T\right)$ implies $\frac{y}{x}T \in \mathbb Z$ completes the proof. \end{proof} Computing the period of a periodic orbit will be significantly simplified by \begin{Proposition}\label{multiplicity} If the initial and terminal points of an unfolding are aligned, all contact points are equally spaced on the fence and have the same multiplicity. 
\end{Proposition} \begin{proof} If the initial and terminal points of $\upsilon\left( t\right), \, 0<t \leq T$, are aligned, then $c_i = c_{i+T}$ for all $i \in \mathbb Z$ by Lemma \ref{align-contact}. Hence $\mathcal G := \{ (1+\frac{y}{x})i + 2 \mathbb{Z}:i\in \mathbb Z\}$ is a finite subgroup of $\mathbb{R} / 2 \mathbb{Z}$, and the set of contact points $\{c_i\}=(-\frac{y}{x} a+2 \mathbb{Z})+\mathcal {G}$ is a coset of $\mathcal G$ in $\mathbb R / 2 \mathbb Z$. Therefore contact points are equally spaced in $\mathbb{R}/2\mathbb{Z}$. Furthermore, if $|\mathcal G|=m$, then $m \mid T$ and $m_{T}\left(c_i\right)=\frac{T}{m}$ for all $i$. \end{proof} \subsection{Counting the number of edges cut by an unfolding} Since the initial and terminal points of $\upsilon\left( t\right), \, 0<t \leq T$, are aligned if and only if both $T$ and $\frac{y}{x} T$ are integers and $2\mid \left(T+\frac{y}{x}T \right)$, it follows that $x \mid T$ since $\mathrm{gcd}(x,y)=1$. The \emph{first alignment of} $\upsilon$, denoted by $T_1$, is the smallest value of $T$ for which the three conditions above hold. With the parities of $x$ and $y$ in mind, it is easy to check that these three conditions imply \begin{Proposition}\label{firstalignment} The first alignment of an unfolding $\upsilon$ is given by \begin{align*} T_1= \begin{cases} x & \textnormal{if } x\equiv y\operatorname{mod}2\\ 2x & \textnormal{if } x\not\equiv y\operatorname{mod}2. \end{cases} \end{align*} \end{Proposition} Let $N_T$ be the number of edges of $\mathcal T$ cut by $\upsilon(t) , \, 0< t \leq T$. Proposition \ref{firstalignment} implies that the initial and terminal points of $\upsilon\left( t\right),$ $0< t \leq 2x$, are always aligned. Consequently, we can compute $N_{2x}$ by appealing to the regularity of the contact points given by Proposition \ref{multiplicity}. \begin{Lemma}\label{decomposition} Let $b_{2x}$ denote the number of contact points of $\upsilon(t) , \, 0< t \leq 2x$, on the barrier. 
Then $$N_{2x}=8y+m_{2x} b_{2x} .$$ \end{Lemma} \begin{proof} Since $0< t \leq 2x$, the unfolding $\upsilon$ cuts $\frac{y}{x} \cdot 2x =2y$ horizontal inclines. Between consecutive horizontal inclines, $\upsilon$ cuts four non-vertical edges. Recall that the points at which $\upsilon$ cuts vertical edges correspond to the contact points on the barrier. Since the initial and terminal points of $\upsilon$ are aligned, all contact points have the same multiplicity $m_{2x}$ by Proposition \ref{multiplicity}. Thus the contact points on the barrier, counted with multiplicity, number $m_{2x} b_{2x}$, which is also the total number of vertical edges cut by $\upsilon$. Consequently, the total number of edges cut by $\upsilon$ is $4 \cdot 2y+m_{2x} b_{2x}=8y+m_{2x} b_{2x} .$ \end{proof} An explicit formula for $N_{2x}$ follows from explicit formulas for $m_{ 2x}$ and $b_{ 2x}$. \begin{Lemma}\label{spacing} The multiplicity $m_{2x}$ and the spacing $s$ between consecutive contact points are given by \begin{equation*} m_{2x}=\left\{ \begin{array} [c]{cc} 2, & \textnormal{if }x\equiv y\operatorname{mod}2\\ 1, & \textnormal{if }x\not\equiv y\operatorname{mod}2 \end{array} \right. \text{ and \ }s=\left\{ \begin{array} [c]{cc} 2/x, & \textnormal{if } x\equiv y\operatorname{mod}2\\ 1/x, & \, \textnormal{if } x\not\equiv y\operatorname{mod}2. \end{array} \right. \end{equation*} \end{Lemma} \begin{proof} At the first alignment, $\mathcal C_{ T_1}$ consists of $T_1$ distinct contact points each with multiplicity $1$. If $x\equiv y \bmod 2$, then $T_1=x$ by Proposition \ref{firstalignment}, so that each contact point of $\mathcal{C}_{2x}$ has multiplicity $2$. Since there are $x$ distinct contact points on the fence $\mathcal{ F}$ of length $2$, the spacing $s=\frac{2}{x}$. If $x \not \equiv y \bmod 2$, then $T_1=2x$ so that each contact point in $\mathcal{C}_{2x}$ has multiplicity $1$. Consequently, there are $2x$ distinct contact points and the spacing $s=\frac{2}{2x}=\frac{1}{x}$.
\end{proof} Having established the spacing $s$, let us derive a formula for the number $b_{2x}$ of contact points on the barrier. Let $\left[r\right]$ denote the integer part of a real number $r$. \begin{Lemma}\label{perturb} The number of contact points on the barrier is \begin{align*} b_{2x}= \begin{cases} \frac{4}{3s} & \textnormal{if } 3 \mid x\smallskip\\ \left[\frac{4}{3s}\right] \,\, \textnormal{or} \,\, \left[\frac{4}{3s}\right] +1 & \textnormal{if } 3 \nmid x. \end{cases} \end{align*} \end{Lemma} \begin{proof} Horizontally translating the initial point $P$ uniformly shifts the contact points and preserves the spacing $s$. Since $s \in\{\frac{1}{x},\frac{2}{x}\}$, the length $\frac{4}{3}$ of the barrier is an integer multiple of $s$ if and only if $3 \mid x$. Suppose $3 \mid x$. Since contact points are equally spaced and the barrier $\left( \frac{1}{3},\frac{5}{3}\right]$ is half open, uniformly shifting the contact points preserves the number of contact points on the barrier. Thus $b_{2x}= \frac{4}{3s}$. Suppose $3\nmid x$. Uniformly shifting the contact points alternately increases or decreases $b_{2x}$ by $1$ as contact points enter or leave the barrier. Therefore $b_{2x}\in\{\left[\frac{4}{3s}\right],\left[\frac{4}{3s}\right] +1\}$. \end{proof} The facts we need to derive an explicit formula for $N_{ 2x}$ are now in place.
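Lemma \ref{spacing} is also easy to check with exact rational arithmetic. In the sketch below the initial point $a=1/7$ is an arbitrary generic choice (and the function name is mine); it verifies the spacing, the multiplicity, and the barrier count $b_{2x}=4/(3s)$ for $(x,y)=(3,5)$, where $3\mid x$:

```python
from fractions import Fraction as F

def contacts(x, y, a, T):
    """Contact points c_i = (i + (y/x)(i - a)) mod 2 for i = 1, ..., T,
    valid for an initial point a in (0, 1)."""
    return [(i + F(y, x) * (i - a)) % 2 for i in range(1, T + 1)]

x, y, a = 3, 5, F(1, 7)                    # x and y are both odd, so m = 2, s = 2/x
pts = contacts(x, y, a, 2 * x)
distinct = sorted(set(pts))
assert len(distinct) == x                  # x distinct contact points
assert all(pts.count(c) == 2 for c in distinct)      # each of multiplicity 2
gaps = [v - u for u, v in zip(distinct, distinct[1:])]
gaps.append(distinct[0] + 2 - distinct[-1])          # wrap around the fence
assert gaps == [F(2, 3)] * x               # equally spaced with s = 2/x
barrier = [c for c in distinct if F(1, 3) < c <= F(5, 3)]
assert len(barrier) == 2                   # b_{2x} = 4/(3s) = 2 since 3 | x
```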
\begin{Proposition}\label{count} The number $N : =N_{2x}$ is given by the following table: {\small \begin{center} \begin{tabular}{| c | c | c || l | } \hline $N \equiv 1,3 \bmod 4$ & $N \equiv 2 \bmod 4$ & $N \equiv 0 \bmod 4$ & \\ \hline \hline & & $8y+\frac{4x}{3}$ & $x \equiv 0 \bmod 3 \textnormal{ and } x \equiv y \bmod 2$ \\ \hline & & $8y+\frac{4x}{3}$ & $x \equiv 0 \bmod 3 \textnormal{ and } x \not\equiv y \bmod 2$ \\ \hline &$8y+\frac{4x+2}{3}$ & $8y+\frac{4x-4}{3}$ & $x \equiv 1 \bmod 3 \textnormal{ and } x \equiv y \bmod 2$\\ \hline $8y+\frac{4x-1}{3}$ & $8y+\frac{4x+2}{3}$ & &$x \equiv 1 \bmod 3 \textnormal{ and } x \not\equiv y \bmod 2$ \\ \hline &$8y+\frac{4x-2}{3}$ & $8y+\frac{4x+4}{3}$ & $x \equiv 2 \bmod 3 \textnormal{ and } x \equiv y \bmod 2$\\ \hline $8y+\frac{4x+1}{3}$ & $8y+\frac{4x-2}{3}$ & &$x \equiv 2 \bmod 3 \textnormal{ and } x \not\equiv y \bmod 2$ \\ \hline \end{tabular} \end{center} } \end{Proposition} \begin{remark*} The columns are arranged to accommodate Proposition~\ref{terminate}. \end{remark*} \begin{proof} The formula for $N_{2x}$ follows immediately from Lemmas \ref{decomposition}, \ref{spacing}, and \ref{perturb}. \smallskip \noindent \textbf{Case 1}. If $x \equiv 0 \bmod 3$ and $x \equiv y \bmod 2$, then $m_{2x}=2$, $s=\frac{2}{x}$, and $b_{2x}=\frac{4}{3s}$. Thus $N_{2x}=8y+m_{2x} b_{2x}=8y+ 2 \cdot \frac{4}{3} \cdot \frac{x}{2}=8y+ \frac{4x}{3}$. \smallskip \noindent \textbf{Case 4}. If $x \equiv 1 \bmod 3$ and $x \not\equiv y \bmod 2$, then $m_{2x}=1$, $s=\frac{1}{x}$, and either $b_{2x}=\left[\frac{4}{3s}\right]$ or $\left[\frac{4}{3s}\right] +1$. If $b_{2x}=\left[\frac{4}{3s}\right]$, then $N_{2x}=8y+m_{2x} b_{2x}=8y+ 1 \cdot \left[\frac{4}{3} \cdot \frac{x}{1}\right]=8y+\frac{4x-1}{3}$. If $b_{2x}=\left[\frac{4}{3s}\right]+1$, then $N_{2x}=8y+ 1 \cdot \left( \left[\frac{4}{3}\cdot \frac{x}{1}\right]+1 \right)= 8y+\frac{4x+2}{3}$. \smallskip \noindent Proofs of the other cases are similar and left to the reader. 
\end{proof} \subsection{Computing the period of a periodic orbit} Since the period of a periodic orbit is the period of its primitive sub-orbits, let us determine the values of $T$ for which $\upsilon (t), 0<t \leq T$, is an unfolding of a primitive periodic orbit. \begin{Proposition}\label{terminate} Let $x,y\in \mathbb{N}$ such that $x < y$ and $\mathrm{gcd}(x,y)=1$. A primitive periodic orbit $\gamma$ with initial angle $\Theta = \arctan(\frac{y}{x}\sqrt{3})$ has an unfolding $\upsilon(t)$, $0 < t \leq T$, for some $T\in\{x,2x,4x\}$, and period $p(x,y)$ determined as follows: \begin{enumerate} \item If $x\equiv y \bmod 2$ and $N_{2x} \equiv 0 \bmod 4$, then $T=x$ and $p(x,y)=\frac{1}{2} N_{2x}$. \item If $x\equiv y \bmod 2$ and $N_{2x}\equiv 2 \bmod 4$, then $T=2x$ and $p(x,y)=N_{2x}$. \item If $x \not\equiv y \bmod 2$ and $N_{2x}$ is even, then $T=2x$ and $p(x,y) = N_{2x}$. \item If $x\not\equiv y \bmod 2$ and $N_{2x}$ is odd, then $T=4x$ and $p(x,y)=2 N_{2x}$. \end{enumerate} \end{Proposition} \begin{proof} A primitive periodic orbit has an unfolding $\upsilon(t)$, $0 < t \leq T$, if and only if $T$ is the smallest positive integer such that the initial and terminal points are aligned and $N_T$ is even. Recall that the alignment condition implies $x \mid T$, and the first alignment $T_1=x$ when $x\equiv y \bmod 2$ and $T_1=2x$ otherwise. \smallskip \noindent\textbf{Case 1}. If $x\equiv y \bmod 2$, then $T_1=x$. If $N_{2x} \equiv 0 \bmod 4$, then $N_x=\frac{1}{2} N_{2x}$ is even so that $T=x$ and $p(x,y)=N_x=\frac{1}{2}N_{2x}$. If $N_{2x}\equiv 2 \bmod 4$, then $N_x=\frac{1}{2} N_{2x}$ is odd so that $T=2x$ and $p(x,y)=N_{2x}$. \smallskip \noindent\textbf{Case 2}. If $x\not\equiv y \bmod 2$, then $T_1=2x$. If $N_{2x}$ is even, then $T=2x$ and $p(x,y)=N_{2x}$. If $N_{2x}$ is odd, then $N_{4x}=2 N_{2x}$ is even so that $T=4x$ and $p(x,y)=N_{4x}=2N_{2x}$. \end{proof} The period of every periodic orbit is now determined. 
\begin{Theorem}\label{formula} Let $x,y \in \mathbb{N}$ such that $x < y$ and $\mathrm{gcd}(x,y)=1$. Then the period $p(x,y)$ of a periodic orbit with initial angle $\Theta=\arctan(\frac{y}{x}\sqrt{3})$ is $$p(x, y)= \begin{cases} 4y + \frac{2x}{3} & {\rm if} \,\, x \equiv 0 \textnormal{ mod 3} \textnormal{ and } x \equiv y \textnormal{ mod 2}\\ 8y + \frac{4x}{3} & {\rm if} \,\, x \equiv 0 \textnormal{ mod 3} \textnormal{ and } x \not\equiv y \textnormal{ mod 2}\\ 4y + \frac{2x-2}{3} \textnormal{ or } 8y + \frac{4x+2}{3} & {\rm if} \,\, x \equiv 1 \textnormal{ mod 3} \textnormal{ and } x \equiv y \textnormal{ mod 2}\\ 16y + \frac{8x-2}{3} \textnormal{ or } 8y + \frac{4x+2}{3} & {\rm if} \,\, x \equiv 1 \textnormal{ mod 3} \textnormal{ and } x \not\equiv y \textnormal{ mod 2}\\ 4y + \frac{2x+2}{3} \textnormal{ or } 8y + \frac{4x-2}{3} & {\rm if} \,\, x \equiv 2 \textnormal{ mod 3} \textnormal{ and } x \equiv y \textnormal{ mod 2}\\ 16y + \frac{8x+2}{3} \textnormal{ or } 8y + \frac{4x-2}{3} & {\rm if} \,\, x \equiv 2 \textnormal{ mod 3} \textnormal{ and } x \not\equiv y \textnormal{ mod 2}.\\ \end{cases}$$ \end{Theorem} \begin{proof} Propositions \ref{count} and \ref{terminate} immediately lead to the formula for $p(x,y)$. \smallskip \noindent \textbf{Case 1}. If $x \equiv 0 \bmod 3$ and $x \equiv y \bmod 2$, then $N_{2x}=8y+\frac{4x}{3} \equiv 0 \bmod 4$ so that $p(x,y)=\frac{1}{2} N_{2x}= 4y+\frac{2x}{3}$. \smallskip \noindent \textbf{Case 4}. If $x \equiv 1 \bmod 3$ and $x \not\equiv y \bmod 2$, then $N_{2x}=8y+\frac{4x-1}{3}$ or $N_{2x}=8y+\frac{4x+2}{3}$. In the first case, $N_{2x}$ is odd so that $p(x,y)=2 N_{2x}= 16y+\frac{8x-2}{3}$; in the second case, $N_{2x}$ is even so that $p(x,y)=N_{2x}= 8y+\frac{4x+2}{3}$. \smallskip \noindent Proofs of the other cases are similar and left to the reader.
\end{proof} \noindent When $\Theta=60^\circ$, periods $4$ and $10$ are consistent with the formula in Theorem~\ref{formula} with $(x,y)=(1,1)$; when $\Theta=90^\circ$, period $8$ is consistent with $(x,y)=(0,1)$. \begin{Corollary} A biperiodic periodic orbit with initial angle $\Theta = \arctan \big(\frac{y}{x} \sqrt{3} \big)$ has one of two possible periods $p_1 < p_2$, where $p_2 = 2p_1 + 2$ or $p_2 = 2 p_1 - 2$. \end{Corollary} While our understanding of unfoldings follows by considering the fence, the simple statement of Theorem~\ref{formula} makes no mention of the initial point. However, the framework developed here allows us to determine a more precise period formula in terms of the initial and terminal points $P = (a, 0)$ and $Q = (a + x, y)$; doing so requires computing the number of contact points on the barrier as a function of $a$: $$b_a(x,y)=\left[ \frac{5/3 + a y/x}{s_a(x,y)} \right]-\left[ \frac{1/3 + a y/x}{s_a(x,y)} \right],$$ where $s_a(x,y)$ is the spacing function. \section{Periodic orbits on other obtuse polygons}\label{othershapes} The methods developed in Section \ref{120isosceles} can be applied to a $60^\circ$-rhombus and a $60^\circ$-$90^\circ$-$120^\circ$-kite. Let $\mathcal{T}_1$ and $\mathcal{T}_2$ be the respective edge tessellations generated by a $60^\circ$-rhombus and a $60^\circ$-$90^\circ$-$120^\circ$-kite. Note that an analogue of Theorem \ref{align+orient} holds in both of these cases. In either case, periodic orbits can be represented by unfoldings with an initial angle $\Theta \in [60^\circ, 90^\circ]$, and $\Theta \in (60^\circ, 90^\circ)$ can be expressed in the form $\Theta=\arctan(\frac{y}{x}\sqrt{3})$ with $x < y$ and $\gcd(x, y)=1$. When $\Theta=60^\circ$ or $\Theta=90^\circ$, the period can be determined by inspection and fits the general formula to be derived.
\subsection{The $60^\circ$-rhombus} Note that $\mathcal{T}_1$ can be obtained from $\mathcal T$ by removing its $0^\circ$, $60^\circ$, and $120^\circ$ inclines (see Figure~\ref{60-rhombusT}). Impose the same coordinate system on $\mathcal{T}_1$ that we imposed on $\mathcal{T}$. The barrier and gate for $\mathcal{T}_1$ are identical to the barrier and gate for $\mathcal{T}$, and all definitions and techniques in the previous sections apply. Although $\mathcal{T}_1$ has no horizontal inclines, we can position the initial point of an unfolding $\upsilon$ on a horizontal incline of $\mathcal T$ (the dotted line in Figure \ref{60-rhombusT}). Although a $60^\circ$-rhombus exhibits both line and rotational symmetry, a quick check shows that the initial and terminal rhombuses determined by an unfolding with aligned initial and terminal points may differ by a reflection but not by a rotation. The formula $N_{2x}=4y+m_{2x} b_{2x}$ (as in Lemma~\ref{decomposition}) leads to the following table (as in Proposition~\ref{count}): {\small \begin{center} \begin{tabular}{| c | c | c || l | } \hline $N \equiv 1,3 \bmod 4$ & $N \equiv 2 \bmod 4$ & $N \equiv 0 \bmod 4$ & \\ \hline \hline & & $4y+\frac{4x}{3}$ & $x \equiv 0 \bmod 3 \textnormal{ and } x \equiv y \bmod 2$ \\ \hline & & $4y+\frac{4x}{3}$ & $x \equiv 0 \bmod 3 \textnormal{ and } x \not\equiv y \bmod 2$ \\ \hline &$4y+\frac{4x+2}{3}$& $4y+\frac{4x-4}{3}$ & $x \equiv 1 \bmod 3 \textnormal{ and } x \equiv y \bmod 2$\\ \hline $4y+\frac{4x-1}{3}$ & $4y+\frac{4x+2}{3}$ & &$x \equiv 1 \bmod 3 \textnormal{ and } x \not\equiv y \bmod 2$ \\ \hline &$4y+\frac{4x-2}{3}$& $4y+\frac{4x+4}{3}$ & $x \equiv 2 \bmod 3 \textnormal{ and } x \equiv y \bmod 2$\\ \hline $4y+\frac{4x+1}{3}$ & $4y+\frac{4x-2}{3}$ & &$x \equiv 2 \bmod 3 \textnormal{ and } x \not\equiv y \bmod 2$ \\ \hline \end{tabular} \end{center} } Proposition~\ref{terminate} applies, and when combined with the table above, produces the following formula for the period (as in
Theorem~\ref{formula}): $$p(x, y)= \begin{cases} 2y + \frac{2x}{3} & {\rm if} \,\, x \equiv 0 \textnormal{ mod 3} \textnormal{ and } x \equiv y \textnormal{ mod 2}\\ 4y + \frac{4x}{3} & {\rm if} \,\, x \equiv 0 \textnormal{ mod 3} \textnormal{ and } x \not\equiv y \textnormal{ mod 2}\\ 2y + \frac{2x-2}{3} \textnormal{ or } 4y + \frac{4x+2}{3} & {\rm if} \,\, x \equiv 1 \textnormal{ mod 3} \textnormal{ and } x \equiv y \textnormal{ mod 2}\\ 4y + \frac{4x+2}{3} \textnormal{ or } 8y + \frac{8x-2}{3} & {\rm if} \,\, x \equiv 1 \textnormal{ mod 3} \textnormal{ and } x \not\equiv y \textnormal{ mod 2}\\ 2y + \frac{2x+2}{3} \textnormal{ or } 4y + \frac{4x-2}{3} & {\rm if} \,\, x \equiv 2 \textnormal{ mod 3} \textnormal{ and } x \equiv y \textnormal{ mod 2}\\ 4y + \frac{4x-2}{3} \textnormal{ or } 8y + \frac{8x+2}{3} & {\rm if} \,\, x \equiv 2 \textnormal{ mod 3} \textnormal{ and } x \not\equiv y \textnormal{ mod 2}.\\ \end{cases}$$ \begin{figure}[] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \include{rhombus.tikz} \caption{The tessellation $\mathcal T_1$ generated by a $60^\circ$-rhombus.} \label{60-rhombusT} \end{subfigure} \kern2em \begin{subfigure}[b]{0.4\textwidth} \centering \include{kite.tikz} \caption{The tessellation $\mathcal T_2$ generated by a $60^\circ$-$90^\circ$-$120^\circ$-kite.} \label{60-90-120-kiteT} \end{subfigure} \caption{} \label{figure5} \end{figure} \subsection{The $60^\circ$-$90^\circ$-$120^\circ$-kite} The edge tessellation $\mathcal{T}_2$ is also related to $\mathcal T$. Consider the $60^\circ$-$90^\circ$-$120^\circ$-kite positioned as $\Box AOBD$, where $A$ and $B$ coincide with the two vertices of the $120^\circ$-isosceles $\triangle ABC$ and $O$ is the midpoint of $\overline{AC}$ (see Figure~\ref{60-90-120-kiteT}). Impose the same coordinate system on $\mathcal{T}_2$ that we imposed on $\mathcal{T}$ and position the initial point of an unfolding $\upsilon$ on a horizontal incline.
While the fence is the same as before, the barrier and gate are interchanged, i.e., $(\frac{1}{3},\frac{5}{3}]$ is the gate. The formula $N_{2x}=6y+m_{2x} b_{2x}$ (as in Lemma~\ref{decomposition}) leads to the following table (as in Proposition~\ref{count}): {\small \begin{center} \begin{tabular}{| c | c | c || l | } \hline $N \equiv 1,3 \bmod 4$ & $N \equiv 2 \bmod 4$ & $N \equiv 0 \bmod 4$ & \\ \hline \hline & & $6y+\frac{2x}{3}$ & $x \equiv 0 \bmod 3 \textnormal{ and } x \equiv y \bmod 2$ \\ \hline & $6y+\frac{2x}{3}$ & & $x \equiv 0 \bmod 3 \textnormal{ and } x \not\equiv y \bmod 2$ \\ \hline & $6y+\frac{2x-2}{3}$& $6y+\frac{2x+4}{3}$ & $x \equiv 1 \bmod 3 \textnormal{ and } x \equiv y \bmod 2$\\ \hline $6y+\frac{2x+1}{3}$ & &$6y+\frac{2x-2}{3}$ &$x \equiv 1 \bmod 3 \textnormal{ and } x \not\equiv y \bmod 2$ \\ \hline &$6y+\frac{2x+2}{3}$& $6y+\frac{2x-4}{3}$ & $x \equiv 2 \bmod 3 \textnormal{ and } x \equiv y \bmod 2$\\ \hline $6y+\frac{2x-1}{3}$ & & $6y+\frac{2x+2}{3}$ &$x \equiv 2 \bmod 3 \textnormal{ and } x \not\equiv y \bmod 2$ \\ \hline \end{tabular} \end{center} } Proposition~\ref{terminate} applies, and when combined with the table above, produces the following formula for the period (as in Theorem~\ref{formula}): $$p(x, y)= \begin{cases} 3y + \frac{x}{3} & {\rm if} \,\, x \equiv 0 \textnormal{ mod 3} \textnormal{ and } x \equiv y \textnormal{ mod 2}\\ 6y + \frac{2x}{3} & {\rm if} \,\, x \equiv 0 \textnormal{ mod 3} \textnormal{ and } x \not\equiv y \textnormal{ mod 2}\\ 3y + \frac{x+2}{3} \textnormal{ or } 6y + \frac{2x-2}{3} & {\rm if} \,\, x \equiv 1 \textnormal{ mod 3} \textnormal{ and } x \equiv y \textnormal{ mod 2}\\ 6y + \frac{2x-2}{3} \textnormal{ or } 12y + \frac{4x+2}{3} & {\rm if} \,\, x \equiv 1 \textnormal{ mod 3} \textnormal{ and } x \not\equiv y \textnormal{ mod 2}\\ 3y + \frac{x-2}{3} \textnormal{ or } 6y + \frac{2x+2}{3} & {\rm if} \,\, x \equiv 2 \textnormal{ mod 3} \textnormal{ and } x \equiv y \textnormal{ mod 2}\\ 6y +
\frac{2x+2}{3} \textnormal{ or } 12y + \frac{4x-2}{3} & {\rm if} \,\, x \equiv 2 \textnormal{ mod 3} \textnormal{ and } x \not\equiv y \textnormal{ mod 2}.\\ \end{cases}$$ \subsection{The regular hexagon} While the techniques in Section \ref{120isosceles} can be applied to determine the number of edges cut by an unfolding of a periodic orbit on a regular hexagon, the presence of rotational symmetries renders this information insufficient to determine exactly where an unfolding terminates. Position the hexagon so that a subdivision of its edge tessellation is the edge tessellation generated by a $120^\circ$-isosceles triangle, and restrict considerations to initial angles $\Theta\in(30^\circ,60^\circ)$. Although we are unable to rigorously solve the problem, we pose a conjecture based on extensive numerical evidence. We computed the period for 3814 pairs $(x,y)$ and partitioned these pairs into planar groups using a mixture algorithm \cite{G}. We conjecture that the period $p(x,y)$ is \begin{equation*} \begin{cases} 3y + x & {\rm if} \,\, x \equiv 0 \textnormal{ mod 3}, x \equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A_{0,0} \\ y + \frac{x}{3} & {\rm if} \,\, x \equiv 0 \textnormal{ mod 3}, x \equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A^c_{0,0} \\ 6y + 2x & {\rm if} \,\, x \equiv 0 \textnormal{ mod 3}, x \not\equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A_{0,1} \\ 2y + \frac{2x}{3} & {\rm if} \,\, x \equiv 0 \textnormal{ mod 3}, x \not\equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A^c_{0,1} \\ 2y + \frac{2x-2}{3} \textnormal{ or } 3y + x+2 & {\rm if} \,\, x \equiv 1 \textnormal{ mod 3}, x \equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A_{1,0} \\ y + \frac{x+2}{3} \textnormal{ or } 2y + \frac{2x-2}{3} & {\rm if} \,\, x \equiv 1 \textnormal{ mod 3}, x \equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A^c_{1,0} \\ 4y + \frac{4x+2}{3} \textnormal{ or } 6y + 2x-2 & {\rm if} \,\, x
\equiv 1 \textnormal{ mod 3}, x \not\equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A_{1,1} \\ 2y + \frac{2x-2}{3} \textnormal{ or } 4y + \frac{4x+2}{3} & {\rm if} \,\, x \equiv 1 \textnormal{ mod 3}, x \not\equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A^c_{1,1} \\ 2y + \frac{2x+2}{3} \textnormal{ or } 3y + x-2 & {\rm if} \,\, x \equiv 2 \textnormal{ mod 3}, x \equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A_{2,0} \\ y + \frac{x-2}{3} \textnormal{ or } 2y + \frac{2x+2}{3} & {\rm if} \,\, x \equiv 2 \textnormal{ mod 3}, x \equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A^c_{2,0} \\ 4y + \frac{4x-2}{3} \textnormal{ or } 6y + 2x+2 & {\rm if} \,\, x \equiv 2 \textnormal{ mod 3}, x \not\equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A_{2,1} \\ 2y + \frac{2x+2}{3} \textnormal{ or } 4y + \frac{4x-2}{3} & {\rm if} \,\, x \equiv 2 \textnormal{ mod 3}, x \not\equiv y \textnormal{ mod 2}, \textnormal{and } (x,y) \in A^c_{2,1}, \end{cases} \end{equation*} where the sets $A_{i,j}$ have yet to be determined. A numerical grid search appeared to indicate that the sets $A_{i,j}$ cannot be described by a linear modulus condition $c_1 x + c_2 y \bmod c_3$ for any $c_1,c_2=-36,-35,...,35,36$ and $c_3=2,...,36$. However, when $3 \mid x$, we conjecture that whenever $(x,y) \in A_{i,j}$ for any $i,j$, then $(27y-7x,11y-3x)\in A_{i',j'}$ for some possibly different $i',j'$. Under this assumption, we can state our conjecture for the period formula more precisely for a given $x$. \section*{Acknowledgments} We wish to thank David Brown for his participation in an undergraduate research seminar during the 2011-2012 academic year in which this problem was first considered, and Joshua Pavoncello for writing a computer program \cite{P} that visualizes the orbits on an edge tessellating polygon and collects related experimental data. And we wish to thank the referees whose numerous helpful suggestions improved the exposition.
https://arxiv.org/abs/1403.3984
A Study on Integer Additive Set-Graceful Graphs
A set-labeling of a graph $G$ is an injective function $f:V(G)\to \mathcal{P}(X)$, where $X$ is a finite set and a set-indexer of $G$ is a set-labeling such that the induced function $f^{\oplus}:E(G)\rightarrow \mathcal{P}(X)-\{\emptyset\}$ defined by $f^{\oplus}(uv) = f(u){\oplus}f(v)$ for every $uv{\in} E(G)$ is also injective. An integer additive set-labeling is an injective function $f:V(G)\rightarrow \mathcal{P}(\mathbb{N}_0)$, $\mathbb{N}_0$ is the set of all non-negative integers and an integer additive set-indexer is an integer additive set-labeling such that the induced function $f^+:E(G) \rightarrow \mathcal{P}(\mathbb{N}_0)$ defined by $f^+ (uv) = f(u)+ f(v)$ is also injective. In this paper, we extend the concepts of set-graceful labeling to integer additive set-labelings of graphs and provide some results on them.
\section{Introduction} For all terms and definitions, not defined specifically in this paper, we refer to \cite{BM1}, \cite{BLS}, \cite{FH} and \cite{DBW}. For more about graph labeling, we refer to \cite{JAG}. Unless mentioned otherwise, all graphs considered here are simple, finite and have no isolated vertices. Research on graph labeling problems commenced with the introduction of $\beta$-valuations of graphs in \cite{AR}. Analogous to the number valuations of graphs, the concepts of set-labelings and set-indexers of graphs were introduced in \cite{A1} as follows. Let $X$ be a non-empty set and $\mathcal{P}(X)$ be its power set. A \textit{set-labeling} of a graph $G$ with respect to the set $X$ is an injective set-valued function $f:V(G)\to \mathcal{P}(X)$ together with the induced edge function $f^{\oplus}:E(G)\to \mathcal{P}(X)$ defined by $f^{\oplus}(uv)=f(u)\oplus f(v)$ for all $uv\in E(G)$, where $\mathcal{P}(X)$ is the set of all subsets of $X$ and $\oplus$ is the symmetric difference of sets. A set-labeling of $G$ is said to be a \textit{set-indexer} of $G$ if the induced edge function $f^{\oplus}$ is also injective. A graph which admits a set-labeling (or a set-indexer) is called a set-labeled (or set-indexed) graph. It is proved in \cite{A1} that every graph has a set-indexer. A set-indexer $f:V(G)\to \mathcal{P}(X)$ of a given graph $G$ is said to be a {\em set-graceful labeling} of $G$ if $f^{\oplus}(E(G))=\mathcal{P}(X)-\{\emptyset\}$. A graph $G$ which admits a set-graceful labeling is called a {\em set-graceful graph}. The sumset of two non-empty sets $A$ and $B$, denoted by $A+B$, is the set defined by $A+B=\{a+b:a\in A, b\in B\}$. If either $A$ or $B$ is countably infinite, then their sumset $A+B$ is also countably infinite; hence all sets we consider here are non-empty finite sets.
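For concreteness, the sumset operation just defined can be sketched in a few lines of Python:

```python
# Sumset of two finite sets of non-negative integers:
# A + B = {a + b : a in A, b in B}.

def sumset(A, B):
    return {a + b for a in A for b in B}

assert sumset({0, 1}, {2, 5}) == {2, 3, 5, 6}
assert sumset({0}, {3, 7}) == {3, 7}        # {0} acts as an identity
assert sumset({0, 2}, {0, 2}) == {0, 2, 4}  # |A + B| may be less than |A||B|
```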
If $C=A+B$, where $A\neq \{0\}$ and $B\neq\{0\}$, then $A$ and $B$ are said to be the \textit{non-trivial summands} of the set $C$ and $C$ is said to be a \textit{non-trivial sumset} of $A$ and $B$. Using the concept of sumsets, the notion of an integer additive set-labeling of a given graph $G$ is defined as follows. \begin{definition}{\rm Let $\mathbb{N}_0$ be the set of all non-negative integers. Let $X\subseteq \mathbb{N}_0$ and $\mathcal{P}(X)$ be its power set. An {\em integer additive set-labeling} (IASL, in short) of a graph $G$ is an injective function $f:V(G)\to \mathcal{P}(X)$ whose associated function $f^+:E(G)\to \mathcal{P}(X)$ is defined by $f^+(uv)=f(u)+f(v), uv\in E(G)$. A graph $G$ which admits an integer additive set-labeling is called an integer additive set-labeled graph (IASL-graph).} \end{definition} \begin{definition}{\rm \cite{GA}, \cite{GS1} An {\em integer additive set-labeling} $f$ is said to be an \textit{integer additive set-indexer} (IASI, in short) if the induced edge function $f^+:E(G) \to \mathcal{P}(X)$ defined by $f^+ (uv) = f(u)+ f(v)$ is also injective. A graph $G$ which admits an integer additive set-indexer is called an \textit{integer additive set-indexed graph}.} \end{definition} By an element of a graph $G$, we mean a vertex or an edge of $G$. The cardinality of the set-label of an element of a graph $G$ is called the {\em set-indexing number} of that element. An IASL (or an IASI) is said to be a $k$-uniform IASL (or a $k$-uniform IASI) if $|f^+(e)|=k$ for all $e\in E(G)$, where $k$ is a positive integer. The vertex set $V(G)$ is called {\em $l$-uniformly set-indexed} if all the vertices of $G$ have the set-indexing number $l$.
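The two injectivity requirements in the definitions above can be tested mechanically. A small Python sketch (the helper names `sumset` and `is_iasi` are ours, not from the text):

```python
def sumset(A, B):
    return frozenset(a + b for a in A for b in B)

def is_iasi(labels, edges):
    """labels: vertex -> frozenset of non-negative integers (the set-labels);
    edges: list of vertex pairs.  Returns True when both the vertex
    function f and the induced edge function f^+ are injective."""
    if len(set(labels.values())) != len(labels):          # f injective
        return False
    edge_labels = [sumset(labels[u], labels[v]) for u, v in edges]
    return len(set(edge_labels)) == len(edge_labels)      # f^+ injective

P3 = {1: frozenset({0}), 2: frozenset({1}), 3: frozenset({2})}
assert is_iasi(P3, [(1, 2), (2, 3)])      # edge labels {1} and {3}: an IASI

# Distinct vertex labels need not give distinct edge labels:
G = {1: frozenset({0, 1}), 2: frozenset({2}),
     3: frozenset({1}), 4: frozenset({1, 2})}
assert not is_iasi(G, [(1, 2), (3, 4)])   # both edges receive {2, 3}
```

The second example shows why injectivity of $f$ alone does not make an IASL an IASI: distinct vertex set-labels can still produce equal sumsets on two edges.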
With respect to an integer additive set-labeling (or integer additive set-indexer) of a graph $G$, the vertices of $G$ have non-empty set-labels and the set-label of every edge of $G$ is the sumset of the set-labels of its end vertices; hence no element of a given graph can have $\emptyset$ as its set-label. Therefore, we need to consider only non-empty subsets of $X$ for set-labeling the vertices or edges of $G$, and all sets we mention in this paper are finite sets of non-negative integers. We denote the cardinality of a set $A$ by $|A|$. We denote by $X$ the finite ground set of non-negative integers that is used for set-labeling the vertices or edges of $G$, and the cardinality of $X$ by $n$. In this paper, analogous to the set-graceful labeling of graphs, we introduce the notion of an integer additive set-graceful labeling of a given graph $G$ and study its properties. \section{Integer Additive Set-Graceful Graphs} Certain studies have been done on set-graceful graphs in \cite{A1}, \cite{A2}, \cite{AGPR} and \cite{AGKS}. Motivated by these studies, we introduce the notion of a graceful type of integer additive set-labeling as follows. \begin{definition}{\rm Let $G$ be a graph and let $X$ be a non-empty set of non-negative integers. An integer additive set-indexer $f:V(G)\to \mathcal{P}(X)-\{\emptyset\}$ is said to be an {\em integer additive set-graceful labeling} (IASGL) or a {\em graceful integer additive set-indexer} of $G$ if $f^{+}(E(G))=\mathcal{P}(X)-\{\emptyset,\{0\}\}$. A graph $G$ which admits an integer additive set-graceful labeling is called an {\em integer additive set-graceful graph} (in short, IASG-graph).} \end{definition} Since not all graphs admit graceful IASIs, studies on the structural properties of IASG-graphs are of much interest. The choice of the set-labels of the vertices or edges of $G$ and of the corresponding ground set is very important in defining a graceful IASI for a given graph.
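A smallest concrete instance of the definition (anticipating Theorem~\ref{T-IASGL5} below): take $X=\{0,1\}$ and the star $K_{1,2}$ with centre labelled $\{0\}$ and leaves labelled $\{1\}$ and $\{0,1\}$. A quick Python check:

```python
# X = {0, 1}; star K_{1,2}: centre labelled {0}, leaves labelled {1} and {0,1}.

def sumset(A, B):
    return frozenset(a + b for a in A for b in B)

centre = frozenset({0})
leaves = [frozenset({1}), frozenset({0, 1})]

# Each edge joins the centre to one leaf, so it receives {0} + A = A.
edge_labels = {sumset(centre, A) for A in leaves}

# f^+(E) = P(X) - {empty set, {0}}: exactly the IASGL condition.
assert edge_labels == {frozenset({1}), frozenset({0, 1})}
```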
A major property of integer additive set-graceful graphs is established as follows. \begin{property}\label{P-IASGL0a} If $f:V(G)\to \mathcal{P}(X)-\{\emptyset\}$ is an integer additive set-graceful labeling on a given graph $G$, then $\{0\}$ must be the set-label of some vertex of $G$. \end{property} \begin{proof} Suppose, to the contrary, that $\{0\}$ is not the set-label of any vertex of $G$. Let $x_1$ be the minimal non-zero element of $X$. The singleton $\{x_1\}$ can be the sumset of two non-empty sets only if one of them is $\{0\}$, since the sum of two non-zero elements of $X$ is at least $2x_1>x_1$. Hence, $\{x_1\}$ cannot be the set-label of any edge of $G$, a contradiction to the hypothesis that $f$ is an integer additive set-graceful labeling. \end{proof} Examining the above property of IASG-graphs, we see that if an IASL $f:V(G)\to \mathcal{P}(X)-\{\emptyset\}$ of a given graph $G$ is an integer additive set-graceful labeling on $G$, then the ground set $X$ must contain the element $0$. That is, only sets containing the element $0$ can be chosen as ground sets for defining a graceful IASI of a given graph. Another trivial but important property of certain set-labels of vertices of an IASG-graph $G$ is as follows. \begin{property}\label{O-IASGL1} Let $f:V(G)\to \mathcal{P}(X)-\{\emptyset\}$ be an integer additive set-graceful labeling on a given graph $G$. Then, the vertices of $G$ whose set-labels are not non-trivial sumsets of any two subsets of $X$ must be adjacent to the vertex $v$ that has the set-label $\{0\}$. \end{property} \begin{proof} Let $A_i\neq \emptyset$ be a subset of $X$ that is not a non-trivial sumset of any two subsets of $X$. Since $f$ is an IASGL of $G$, $A_i$ must be the set-label of some edge of $G$. This is possible only when $A_i$ is the set-label of some vertex $v_i$ that is adjacent to the vertex $v$ whose set-label is $\{0\}$. \end{proof} \noindent Invoking Property \ref{O-IASGL1}, we have the following remarks.
\begin{remark}\label{O-IASGL1a}{\rm Let $x_i\in X$ be an element that is not the sum of any two non-zero elements of $X$. Since $f$ is an integer additive set-graceful labeling, $\{x_i\}$ must be the set-label of some edge, say $e$, of $G$. This is possible only when one end vertex of $e$ has the set-label $\{0\}$ and the other end vertex has the set-label $\{x_i\}$.} \end{remark} \begin{remark}\label{C-IASGL1a}{\rm Let $f:V(G)\to \mathcal{P}(X)-\{\emptyset\}$ be an integer additive set-graceful labeling on a given graph $G$ and let $x_1$ and $x_2$ be the minimal and second minimal non-zero elements of $X$. Then, by Remark \ref{O-IASGL1a}, the vertices of $G$ that have the set-labels $\{x_1\}$ and $\{x_2\}$ must be adjacent to the vertex $v$ that has the set-label $\{0\}$.} \end{remark} \begin{property}\label{O-IASGL2a} Let $A_i$ and $A_j$ be two distinct subsets of the ground set $X$ and let $x_i$ and $x_j$ be the maximal elements of $A_i$ and $A_j$, respectively. Then, a necessary condition for $A_i$ and $A_j$ to be the set-labels of two adjacent vertices of an IASG-graph $G$ is that $x_i+x_j\le x_n$, the maximal element of $X$. \end{property} \begin{proof} Let $v$ be a vertex of $G$ that has a set-label $A_i$ whose maximal element is $x_i$. If $v$ is adjacent to a vertex, say $u$, with a set-label $A_j$ whose maximal element is $x_j$, then $f^+(uv)$ contains the element $x_i+x_j$. Therefore, $x_i+x_j\in X$ and hence $x_i+x_j\leq x_n$. \end{proof} \noindent In view of Property \ref{O-IASGL2a}, we can make the following remarks. \begin{remark}\label{O-IASGL2}{\rm Let $f:V(G)\to \mathcal{P}(X)-\{\emptyset\}$ be an integer additive set-graceful labeling on a given graph $G$ and let $x_n$ be the maximal element of $X$. If $A_i$ and $A_j$ are the set-labels of two adjacent vertices, then $A_i+A_j$ is the set-label of the edge between them. Hence, for any $x_i\in A_i$ and $x_j\in A_j$, we have $x_i+x_j\le x_n$; consequently, if one of the two sets contains the maximal element $x_n$, then the other cannot contain a non-zero element.
Hence, $x_n$ can be an element of the set-label of a vertex $v$ of $G$ only if $v$ is a pendant vertex adjacent to the vertex labeled by $\{0\}$. It can also be noted that if $G$ is a graph without pendant vertices, then no vertex of $G$ can have a set-label containing the maximal element of the ground set $X$.} \end{remark} The following results establish the relation between the size of an IASG-graph and the cardinality of its ground set. \begin{remark}\label{R-IASGL3}{\rm Let $f$ be an integer additive set-graceful labeling defined on $G$. Then, $f^+(E)=\mathcal{P}(X)-\{\emptyset,\{0\}\}$. Therefore, $|E(G)|=| \mathcal{P}(X)|-2 = 2^{|X|}-2 =2(2^{|X|-1}-1)$. That is, $G$ has an even number of edges.} \end{remark} \begin{remark}\label{R-IASGL4}{\rm Let $G$ be an IASG-graph with an integer additive set-graceful labeling $f$. By Remark \ref{R-IASGL3}, $|E(G)|=2^{|X|}-2$. Therefore, the cardinality of the ground set $X$ is $|X|=\log_2[|E(G)|+2]$.} \end{remark} The conditions for certain graphs and graph classes to admit an integer additive set-graceful labeling are established in the following discussions. \begin{theorem}\label{T-IASGL5} A star graph $K_{1,m}$ admits an integer additive set-graceful labeling if and only if $m=2^n-2$ for some integer $n>1$. \end{theorem} \begin{proof} Let $v$ be the vertex of degree $d(v)>1$ and let $v_1,v_2,\ldots,v_m$ be the vertices of $K_{1,m}$ which are adjacent to $v$. Let $X$ be a set of non-negative integers containing $0$. First, assume that $K_{1,m}$ admits an integer additive set-graceful labeling, say $f$. Then, by Remark \ref{R-IASGL3}, $|E(G)|=m=2^{|X|}-2$. Therefore, $m=2^n-2$, where $n=|X|>1$. Conversely, assume that $m=2^n-2$ for some integer $n>1$. Label the vertex $v$ by the set $\{0\}$ and label the remaining $m$ vertices of $K_{1,m}$ by the $m$ distinct non-empty subsets of $X$ other than $\{0\}$. Clearly, this labeling is an integer additive set-graceful labeling for $K_{1,m}$. This completes the proof.
\end{proof} The following proposition gives a necessary condition for a tree to be an IASG-graph. \begin{proposition}\label{P-IASGL6} If a tree on $m$ vertices admits an integer additive set-graceful labeling, then $1+m=2^n$, for some positive integer $n>1$. \end{proposition} \begin{proof} Let $G$ be a tree on $m$ vertices. Then, $|E(G)|=m-1$. Assume that $G$ admits an integer additive set-graceful labeling, say $f$, with respect to a ground set $X$. Then, by Remark \ref{R-IASGL3}, $m-1=2^{|X|}-2$. Hence, $m+1=2^{|X|}$. \end{proof} \begin{corollary}\label{C-IASGL6a} Let $G$ be a tree on $m$ vertices. For a ground set $X$, let $f:V(G)\to \mathcal{P}(X)$ be an integer additive set-graceful labeling on $G$. Then, $|X|=\log_2(m+1)$. \end{corollary} \begin{proof} Let $G$ be a tree which admits an IASGL. Then, by Proposition \ref{P-IASGL6}, we have $m+1 = 2^{|X|}$, where $X$ is the ground set used for labeling the vertices and edges of $G$. Hence, $|X| = \log_2(m+1).$ \end{proof} \begin{theorem}\label{P-IASGL6b} A tree $G$ is an IASG-graph if and only if it is a star $K_{1,\, 2^n-2}$, for some positive integer $n$. \end{theorem} \begin{proof} If $G=K_{1,2^n-2}$, then by Theorem \ref{T-IASGL5}, $G$ admits an integer additive set-graceful labeling. Conversely, assume that the tree $G$ on $m$ vertices admits an integer additive set-graceful labeling, say $f$, with respect to a ground set $X$ of cardinality $n$. Therefore, all the $2^n-1$ non-empty subsets of $X$ are required for labeling the vertices of $G$. Also, note that $\{0\}$ cannot be the set-label of any edge of $G$. Hence, all the remaining $2^n-2$ non-empty subsets of $X$ are required for labeling the edges of $G$. It is to be noted that the set-labels containing $0$ are either the sumsets of some other set-labels containing $0$ or not sumsets of any subsets of $X$. Let $0\in A_i\subseteq X$ be the set-label of an edge, say $e$, of $G$.
If $A_i$ is not a sumset of subsets of $X$, then, by Property \ref{O-IASGL1}, $A_i$ must be the set-label of a vertex, say $u$, that is adjacent to the vertex $v$ having the set-label $\{0\}$. Assume that $A_i$ is the sumset of two sets $A_r$ and $A_s$. If $e=v_rv_s$, where $A_r$ and $A_s$ are respectively the set-labels of $v_r$ and $v_s$, then $e$ will be an edge of $G$ which lies in a cycle of $G$, a contradiction to the fact that $G$ is a tree. Therefore, the vertices whose set-labels contain $0$ must be adjacent to the vertex $v$ which has the set-label $\{0\}$. Also, note that, by Remark \ref{O-IASGL2}, the vertices whose set-labels contain the maximal element $x_n$ of $X$ must also be adjacent to the vertex labeled by $\{0\}$. Let $X_i$ be a subset of $X$ which contains either $0$ or $x_n$ and let $v_i$ be the vertex of $G$ that has the set-label $X_i$. Then, the set-label of the edge $vv_i$ is also $X_i$. Let $X_j$ be a subset of $X$ that contains neither $0$ nor $x_n$ and is the set-label of an edge $e$ of $G$. If $X_j$ is not a sumset in $\mathcal{P}(X)$, then $X_j$ must be the set-label of a vertex which is adjacent to the vertex $v$ having the set-label $\{0\}$. If $X_j$ is the sumset of two subsets $X_r$ and $X_s$ and $e=v_rv_s$, where $X_r$ and $X_s$ are respectively the set-labels of $v_r$ and $v_s$, then, as explained above, the edge $e$ will lie in a cycle of $G$, a contradiction to the fact that $G$ is a tree. Therefore, $X_j$ must be the set-label of a vertex which is adjacent to the vertex $v$ having the set-label $\{0\}$. Hence, all vertices of $G$ having non-empty subsets of $X$ other than $\{0\}$ as set-labels must be adjacent to the vertex $v$ having the set-label $\{0\}$. Hence, $G$ is a star graph with $\deg(v)=2^n-2$. \end{proof} We now check the admissibility of integer additive set-graceful labelings by path graphs and cycle graphs.
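The constructive direction of Theorem~\ref{T-IASGL5} (equivalently, of Theorem~\ref{P-IASGL6b}) can be verified for small ground sets by brute force. The sketch below (helper names are ours) labels the centre of $K_{1,\,2^n-2}$ with $\{0\}$ and the leaves with the remaining non-empty subsets of $X=\{0,1,\ldots,n-1\}$, and confirms that the edge labels sweep out $\mathcal{P}(X)-\{\emptyset,\{0\}\}$:

```python
from itertools import combinations

def sumset(A, B):
    return frozenset(a + b for a in A for b in B)

def powerset(X):
    s = sorted(X)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def star_labeling_is_iasgl(X):
    """Centre of the star gets {0}; each remaining non-empty subset A of X
    labels one leaf, and the corresponding edge gets {0} + A = A."""
    zero = frozenset({0})
    nonempty = [A for A in powerset(X) if A]
    leaves = [A for A in nonempty if A != zero]
    edge_labels = {sumset(zero, A) for A in leaves}
    return edge_labels == set(nonempty) - {zero}

for n in range(2, 7):
    assert star_labeling_is_iasgl(range(n))   # K_{1, 2^n - 2} is an IASG-graph
```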
\begin{corollary}\label{C-IASGL6p} For a positive integer $m>2$, the path $P_m$ does not admit an integer additive set-graceful labeling. \end{corollary} \begin{proof} Every path is a tree and no path other than $P_2$ is a star graph. Hence, by Theorem \ref{P-IASGL6b}, $P_m,~ m>2$, is not an IASG-graph. \end{proof} \begin{proposition}\label{P-IASGL6c} For any positive integer $m>3$, the cycle $C_m$ does not admit an integer additive set-graceful labeling. \end{proposition} \begin{proof} Let $X$ be a ground set with $n$ elements. Since $C_m$ has $m$ edges, by Remark \ref{R-IASGL3}, \begin{equation} m=2^n-2 \label{myequn1} \end{equation} Since $C_m$ has no pendant vertices, by Remark \ref{O-IASGL2}, the maximal element, say $x_n$, of $X$ will not be an element of any set-label of the vertices of $C_m$. Therefore, only $2^{n-1}-1$ non-empty subsets of $X$ are available for labeling the vertices of $C_m$. Hence, \begin{equation} m\le 2^{n-1}-1 \label{myequn2} \end{equation} Clearly, Equations \ref{myequn1} and \ref{myequn2} cannot hold simultaneously, since $2^n-2>2^{n-1}-1$ for all $n>1$. Hence, $C_m$ does not admit an integer additive set-graceful labeling. \end{proof} An interesting question we need to address here is whether complete graphs admit integer additive set-graceful labelings. We investigate the conditions for a complete graph to admit an integer additive set-graceful labeling and, based on these conditions, check whether the complete graphs are IASG-graphs. \begin{theorem}\label{T-IASGL7} A complete graph $K_m$ does not admit an integer additive set-graceful labeling. \end{theorem} \begin{proof} Since $K_2$ has only one edge and $K_3$ has three edges, by Remark \ref{R-IASGL3}, $K_2$ and $K_3$ do not have an integer additive set-graceful labeling. Hence, we need to consider the complete graphs on more than three vertices. Assume that a complete graph $K_m,~m>3$, admits an integer additive set-graceful labeling. Then, by Remark \ref{R-IASGL3}, $|E(G)|=2^{|X|}-2=\frac{m(m-1)}{2}$.
That is, $2^{|X|-1}-1=\frac{m(m-1)}{4}$. Since $|X|>1$, $2^{|X|-1}-1$ is a positive integer. Hence, $m(m-1)$ is a multiple of $4$. This is possible only when either $m$ or $(m-1)$ is a multiple of $4$. Moreover, since $|X|>1$, $2^{|X|-1}-1$ is a positive odd integer. Hence, for an odd integer $k$, either $m=4k$ or $m-1=4k$. Therefore, $2^{|X|-1}-1 = \frac{4k(4k-1)}{4}=k(4k-1)$ or $2^{|X|-1}-1=\frac{(4k+1)4k}{4}=k(4k+1)$. That is, $2^{|X|-1} =1+k(4k\pm 1)$. That is, if a complete graph $K_m$ admits an integer additive set-graceful labeling, then there exists an integer solution of the equation \begin{equation} 4k^2\pm k +1=2^n \label{myequn3} \end{equation} where $k$ is an odd positive integer and $n>3$ is a positive integer. The equation \eqref{myequn3} can be written as a quadratic equation as follows. \begin{equation} 4k^2\pm k +(1-2^n)=0 \label{myequn4} \end{equation} \noindent The value of $k$ obtained from \eqref{myequn4} is $k=\frac{\mp 1\pm \sqrt{1-16(1-2^n)}}{8}=\frac{\mp 1\pm \sqrt{2^{n+4}-15}}{8}$, which cannot be a non-negative integer for the values $n>3$. Hence, $K_m$ does not admit an integer additive set-graceful labeling. \end{proof} In this context, we need to find the conditions required for given graphs to admit IASGLs. The structural properties of IASG-graphs are discussed in the following results. \begin{proposition}\label{P-IASGL1b} Let $G$ be an IASG-graph. Then, the minimum number of vertices of $G$ that are adjacent to the vertex having the set-label $\{0\}$ is the number of subsets of $X$ which are not the non-trivial sumsets of any subsets of $X$. \end{proposition} \begin{proof} Let $G$ admit an integer additive set-graceful labeling $f$. Then, $f^+(E(G))=\mathcal{P}(X)-\{\emptyset,\{0\}\}$. Let $X_i$ be a non-empty subset of $X$ which is not a non-trivial sumset of any other subsets of $X$. Since $G$ is an IASG-graph, $X_i$ should be the set-label of some edge of $G$.
Since $X_i$ is the set-label of an edge of $G$ and is not a non-trivial sumset of any two subsets of $X$, this is possible only when $X_i$ is the set-label of some vertex of $G$ which is adjacent to the vertex $v$ whose set-label is $\{0\}$. Therefore, the minimum number of vertices adjacent to $v$ is the number of subsets of $X$ which are not the sumsets of any two subsets of $X$. \end{proof} Another important structural property of an IASG-graph is established in the following theorem. \begin{theorem}\label{T-IASGL2b} If a graph $G$ admits an IASGL $f$ with respect to a finite ground set $X$, then the vertices of $G$ whose set-labels are not non-trivial summands of any subset of $X$ are pendant vertices of $G$. \end{theorem} \begin{proof} Let $f$ be an IASGL defined on a given graph $G$. Then, every subset of $X$, other than $\emptyset$ and $\{0\}$, must be the set-label of some edge of $G$. Suppose that $X_i$ is not a non-trivial summand of any subset of $X$. Then, the vertex $v_i$ with set-label $X_i$ cannot be adjacent to any other vertex $v_j$ with set-label $X_j$, where $X_j\neq \{0\}$, since the set-label of the edge $v_iv_j$ would be $X_i+X_j$, which is not a subset of $X$. Hence, $v_i$ can be adjacent only to the vertex $v$ having the set-label $\{0\}$. \end{proof} \noindent Invoking the above theorems, we can establish the following result. \begin{theorem}\label{T-IASGL3a} If $G$ is an IASG-graph, then at least $k$ pendant vertices must be adjacent to a single vertex of $G$, where $k$ is the number of subsets of $X$ which are neither the non-trivial sumsets of subsets of $X$ nor the non-trivial summands of any subset of $X$. \end{theorem} \begin{proof} Let $f$ be an IASGL defined on a given graph $G$. Then, every subset of $X$, other than $\emptyset$ and $\{0\}$, must be the set-label of some edge of $G$.
If $X_i$ is not a sumset of any two subsets of $X$, then, by Proposition \ref{P-IASGL1b}, the vertex $v_i$ of $G$ with the set-label $X_i$ will be adjacent to the vertex $v$ with the set-label $\{0\}$. If $X_i$ is not a non-trivial summand of any subset of $X$, then, by Theorem \ref{T-IASGL2b}, the vertex $v_i$ will be a pendant vertex. Therefore, the minimum number of pendant vertices required for a graph $G$ to admit an IASGL is the number of subsets of $X$ which are neither the non-trivial sumsets of any two subsets of $X$ nor the non-trivial summands of any subset of $X$. \end{proof} The following result is an immediate consequence of the above theorems. \begin{theorem} Let $G$ be an IASG-graph which admits an IASGL $f$ with respect to a finite non-empty set $X$. Then, $G$ must have at least $|X|-1$ pendant vertices. \end{theorem} \begin{proof} Let $G$ admit an IASGL, say $f$, with respect to a ground set $X=\{0,x_1,x_2,\ldots,x_n\}$. By Theorem \ref{T-IASGL3a}, the number of pendant vertices is at least the number of subsets of $X$ which are neither the non-trivial sumsets nor the non-trivial summands of any subsets of $X$. Clearly, the set $\{0, x_n\}$ is neither a sumset of any two subsets of $X$ nor a non-trivial summand of any subset of $X$. By Property \ref{O-IASGL2}, the vertex of $G$ with set-label $\{0, x_n\}$ can be adjacent only to the vertex that has the set-label $\{0\}$. Now, consider the three-element sets of the form $X_i=\{0,x_i,x_n\};~ 1\le i<n$. If possible, let the set $X_i$ be the sumset of two subsets, say $A$ and $B$, of $X$. Then, $A$ and $B$ can have at most two elements. Since $0\in X_i$, the element $0$ must belong to both $A$ and $B$. Hence, let $A=\{0,a\}$ and $B=\{0,b\}$. Then, $X_i=A+B=\{0,a,b,a+b\}$, which is possible only when $a=b$; but then $A=B$, which contradicts the injectivity of $f$. Therefore, $X_i$ is not a sumset of any other subsets of $X$. Since $x_n\in X_i$, by Property \ref{O-IASGL2}, it cannot be a non-trivial summand of any subset of $X$.
Therefore, $X_i$ can only be the set-label of a pendant vertex of $G$. The number of three-element subsets of $X$ of this kind is $n-1$. It is to be noted that a subset $\{0,x_i,x_j,x_n\}$ can be the sumset of the two subsets $\{0,x_i\}$ and $\{0,x_j\}$ of $X$ if $x_n=x_i+x_j$. This property holds for all subsets of $X$ with cardinality greater than $3$. Hence, the minimum number of subsets of $X$ which are neither the non-trivial sumsets of any two subsets of $X$ nor the non-trivial summands of any other subsets of $X$ is $n=|X|-1$. Hence, by Theorem \ref{T-IASGL3a}, the minimum number of pendant vertices of an IASG-graph is $|X|-1$. \end{proof} An interesting question that arises in this context is about the existence of a graph corresponding to a given ground set $X$ such that the function $f:V(G)\to \mathcal{P}(X)$ is a graceful IASI on $G$. Hence, we introduce the following notion. \begin{definition}{\rm Let $X$ be a non-empty finite set of non-negative integers. A graph $G$ which admits a graceful IASI with respect to the set $X$ is said to be a \textit{graceful graph-realisation} of the set $X$ with respect to the IASL $f$.} \end{definition} It can be noted that the star graph $K_{1,2^{|X|}-2}$ admits an IASGL. The question whether there is a non-bipartite graph that admits an IASGL with respect to a given ground set $X$ is addressed in the following theorem. \begin{theorem}\label{T-IASGL6a} Let $X$ be a non-empty finite set of non-negative integers containing the element $0$. Then, there exists a non-bipartite graceful graph-realisation $G$ of $X$. \end{theorem} \begin{proof} Let $X$ be a finite non-empty set of non-negative integers containing $0$. Let $\mathcal{A}$ be the collection of all subsets of $X$ which are not the non-trivial sumsets of any two subsets of $X$ and let $\mathcal{B}$ be the collection of subsets of $X$ which are not the non-trivial summands of any subsets of $X$. We need to construct an IASG-graph $G$ with respect to $X$.
For this, first take a vertex, say $v_0$, and label it by $\{0\}$. Now, mark the vertices $v_1,v_2,\ldots, v_r$, where $r=|\mathcal{A}\cup \mathcal{B}|$, and label these vertices in an injective manner by the sets in $\mathcal{A}\cup \mathcal{B}$. In view of Proposition \ref{P-IASGL1b} and Theorem \ref{T-IASGL2b}, draw an edge from each of these vertices to the vertex $v_0$. Next, mark new vertices $v_{r+1},v_{r+2},\ldots, v_{r+l}=v_n$, where $l=|(\mathcal{A}\cup \mathcal{B})^c|$, and label these vertices in an injective manner by the sets in $(\mathcal{A}\cup \mathcal{B})^c$. Draw an edge between the vertices $v_i$ and $v_j$ whenever $f(v_i)+f(v_j)\subseteq X$, for $0 \le i<j \le n$. Clearly, this labeling is an IASGL for the graph $G$ constructed here. \end{proof} Invoking all the above results, we can summarise a necessary and sufficient condition for a graph $G$ to admit a graceful IASI with respect to a given ground set $X$. \begin{theorem}\label{T-IASGL6} Let $X$ be a non-empty finite set of non-negative integers. Then, a graph $G$ admits a graceful IASI if and only if the following conditions hold. \begin{enumerate}\itemsep0mm \item[(a)] $0\in X$ and $\{0\}$ is the set-label of some vertex, say $v$, of $G$; \item[(b)] the minimum number of pendant vertices in $G$ is the number of subsets of $X$ which are not the non-trivial summands of any subsets of $X$; \item[(c)] the minimum degree of the vertex $v$ is equal to the number of subsets of $X$ which are not the sumsets of any two subsets of $X$ and not non-trivial summands of any other subsets of $X$; \item[(d)] the minimum number of pendant vertices that are adjacent to a given vertex of $G$ is the number of subsets of $X$ which are neither the non-trivial sumsets of any two subsets of $X$ nor the non-trivial summands of any subsets of $X$.
\end{enumerate} \end{theorem} \begin{proof} The necessity part of the theorem follows from Property \ref{O-IASGL1}, Proposition \ref{P-IASGL1b} and Theorems \ref{T-IASGL2b} and \ref{T-IASGL3a}, and the converse follows from Theorem \ref{T-IASGL6a}. \end{proof} \section{Conclusion} In this paper, we have discussed the concepts and properties of integer additive set-graceful graphs, analogous to those of set-graceful graphs, and have given a characterisation based on this labeling. We note that the admissibility of integer additive set-indexers by given graphs depends also upon the nature of the elements in $X$. A graph may admit an IASGL for some ground sets and may not admit an IASGL for some other ground sets. Hence, choosing a ground set $X$ is very important in the process of checking whether a given graph is an IASG-graph. Certain problems in this area are still open. Some of the areas which seem to be promising for further studies are listed below. \begin{problem}{\rm Characterise different graph classes which admit integer additive set-graceful labelings.} \end{problem} \begin{problem}{\rm Verify the existence of integer additive set-graceful labelings for different graph operations and graph products.} \end{problem} \begin{problem}{\rm Analogous to set-sequential labelings, define integer additive set-sequential labelings of graphs and study their properties.} \end{problem} \begin{problem}{\rm Characterise different graph classes which admit integer additive set-sequential labelings.} \end{problem} \begin{problem}{\rm Verify the existence of integer additive set-sequential labelings for different graph operations and graph products.} \end{problem} The integer additive set-indexers under which the vertices of a given graph are labeled by different standard sequences of non-negative integers are also worth studying. All these facts highlight a wide scope for further studies in this area.
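The construction in the proof of Theorem \ref{T-IASGL6a} can be made concrete for a small ground set. The following Python sketch (the ground set $X=\{0,1,2\}$ and the label-realisation strategy are our own illustrative choices, not taken verbatim from the proof) realises every required edge label exactly once: a label that is a sumset of two distinct vertex labels is realised by an edge between those vertices, and every other label by an edge at the vertex labeled $\{0\}$:

```python
from itertools import combinations

X = [0, 1, 2]  # assumed ground set for illustration; 0 ∈ X as required

def nonempty_subsets(s):
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

def sumset(A, B):
    return frozenset(a + b for a in A for b in B)

P = nonempty_subsets(X)
zero = frozenset({0})
labels_needed = [S for S in P if S != zero]  # P(X) - {∅, {0}}

edges = {}  # unordered pair of vertex set-labels -> edge set-label
for S in labels_needed:
    # realise S as a sumset of two distinct non-{0} vertex labels if
    # possible, otherwise attach the vertex labeled S to the vertex v_0
    pair = next((frozenset((A, B)) for A in P for B in P
                 if A != B and A != zero and B != zero
                 and sumset(A, B) == S), None)
    edges[pair or frozenset((zero, S))] = S

# every subset of X except ∅ and {0} occurs exactly once as an edge label
assert sorted(map(sorted, edges.values())) == sorted(map(sorted, labels_needed))
print(len(edges))  # → 6 = 2^{|X|} - 2
```

For this ground set the label $\{1,2\}$ is realised by the edge joining the vertices labeled $\{1\}$ and $\{0,1\}$; together with the edges from $v_0$ to those two vertices this forms a triangle, so the resulting realisation is non-bipartite, in line with Theorem \ref{T-IASGL6a}.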
https://arxiv.org/abs/1403.3984
A Study on Integer Additive Set-Graceful Graphs
A set-labeling of a graph $G$ is an injective function $f:V(G)\to \mathcal{P}(X)$, where $X$ is a finite set, and a set-indexer of $G$ is a set-labeling such that the induced function $f^{\oplus}:E(G)\rightarrow \mathcal{P}(X)-\{\emptyset\}$ defined by $f^{\oplus}(uv) = f(u)\oplus f(v)$ for every $uv\in E(G)$ is also injective. An integer additive set-labeling is an injective function $f:V(G)\rightarrow \mathcal{P}(\mathbb{N}_0)$, where $\mathbb{N}_0$ is the set of all non-negative integers, and an integer additive set-indexer is an integer additive set-labeling such that the induced function $f^+:E(G) \rightarrow \mathcal{P}(\mathbb{N}_0)$ defined by $f^+(uv) = f(u)+f(v)$ is also injective. In this paper, we extend the concepts of set-graceful labeling to integer additive set-labelings of graphs and provide some results on them.
https://arxiv.org/abs/0706.4019
Induction and computation of Bass Nil Groups for finite groups
Let G be a finite group. We show that the Bass Nil-groups $NK_n(RG)$, $n \in Z$, are generated from the p-subgroups of G by induction maps, certain twisting maps depending on elements in the centralizers of the p-subgroups, and the Verschiebung homomorphisms. As a consequence, the groups $NK_n(RG)$ are generated by induction from elementary subgroups. For $NK_0(ZG)$ we get an improved estimate of the torsion exponent.
\section{Introduction}\label{one} In this note we study the Bass $\Nil$-groups (see \cite[Chap.~XII]{bass1}) $$\NK_n(R G) = \ker(K_n(R G[t]) \to K_n(R G)),$$ where $R$ is an associative ring with unit, $G$ is a finite group, $n \in \mathbf Z$, and the augmentation map sends $t\mapsto 0$. Alternatively, the isomorphism $$\NK_n(RG) \cong \widetilde K_{n-1}(\NIL(RG))$$ identifies the Bass $\Nil$-groups with the $K$-theory of the category $\NIL(RG)$ of nilpotent endomorphisms $(Q, f)$ on finitely-generated projective $RG$-modules \cite[Chap.~XII]{bass1}, \cite[Theorem 2]{grayson2}. Farrell \cite{farrell1} proved that $\NK_1(RG)$ is not finitely generated as an abelian group whenever it is non-zero, and the corresponding result holds for $\NK_n(RG)$, $n\in \mathbf Z$ (see \cite[4.1]{weibel1}), so some organizing principle is needed to better understand the structure of the $\Nil$-groups. Our approach is via induction theory. The functors $\NK_n$ are Mackey functors on the subgroups of $G$, and we ask to what extent they can be computed from the $\Nil$-groups of proper subgroups of $G$. The Bass-Heller-Swan formula \cite[Chap.~XII, \S 7]{bass1}, \cite[p.~236]{grayson1} relates the Bass $\Nil$-groups with the $K$-theory of the infinite group $G \times \mathbf Z$. There are two (split) surjective maps $$N_{\pm} \colon K_n(R[G \times \mathbf Z]) \to \NK_n(R G)$$ which form part of the Bass-Heller-Swan direct sum decomposition $$ K_n(R[G \times \mathbf Z])=K_n(R G)\oplus K_{n-1}(R G)\oplus \NK_n(R G)\oplus \NK_n(R G).$$ Notice that both $K_n(R[(-)\times \mathbf Z])$ and $\NK_n(R[-])$ are Mackey functors on the subgroups of $G$ (see Farrell-Hsiang \cite[\S 2]{farrell-hsiang2} for this observation about infinite groups). We observe that the maps $N_{\pm}$ are actually natural transformations of Mackey functors (see Section \ref{seven}).
It follows from Dress induction \cite{dress2} that the functors $\NK_n(RG)$ and the maps $N_{\pm}$ are computable from the hyperelementary family (see Section \ref{four}, and Harmon \cite[Cor.~4]{harmon1} for the case $n=1$). We will show how the results of Farrell \cite{farrell1} and techniques of Farrell-Hsiang \cite{farrell-hsiang3} lead to a better generation statement for the Bass $\Nil$-groups. \smallskip We need some notation to state the main result. For each prime $p$, we denote by $\mathfrak P_p(G)$ the set of finite $p$-subgroups of $G$, and by $\mathfrak E_p(G)$ the set of $p$-elementary subgroups of $G$. Recall that a $p$-elementary group has the form $E=C \times P$, where $P$ is a finite $p$-group, and $C$ is a finite cyclic group of order prime to $p$. For each element $g\in C$, we let $$I(g) = \{ k \in \mathbb N \, | \, \text{\ a\ prime\ } q \text{\ divides\ } k \Rightarrow q \text{\ divides\ } |g|\}$$ where $|g|$ denotes the order of $g$. For each $P \in \mathfrak P_p(G)$, let $$C_G^\perp(P) = \{ g\in G \, | \, gx=xg, \forall x \in P,\text{\ and\ } p\nmid |g|\}$$ and for each $g\in C_G^\perp(P)$ we define a functor $$\phi(P,g)\colon \NIL(RP) \to \NIL(RG)$$ by sending a nilpotent $RP$-endomorphism $f\colon Q\to Q$ of a finitely-generated projective $RP$-module $Q$ to the nilpotent $RG$-endomorphism $$RG\otimes_{RP} Q \to RG\otimes_{RP} Q, \qquad x\otimes q \mapsto xg\otimes f(q)\ .$$ Note that this $RG$-endomorphism is well-defined since $g \in C_G^\perp(P)$. The functor $\phi(P,g)$ induces a homomorphism $$\phi(P,g)\colon \NK_n(RP) \to \NK_n(RG)$$ for each $n\in \mathbf Z$. For each $p$-subgroup $P$ in $G$, define a homomorphism $$\Phi_P\colon \NK_n(RP) \to \NK_n(RG)$$ by the formula $$\Phi_P = \sum_{{g\in C_G^\perp(P), \ k \in I(g)}} {\hskip -6pt}V_k \circ \phi(P,g),$$ where $$V_k \colon \NK_n(RG) \to \NK_n(RG)$$ denotes the Verschiebung homomorphism, $k \geq 1$, recalled in more detail in Section \ref{two}. 
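To make the index set $I(g)$ concrete: a positive integer $k$ lies in $I(g)$ precisely when every prime dividing $k$ also divides $|g|$; in particular $1\in I(g)$ vacuously. A short Python sketch (the helper names \texttt{prime\_divisors} and \texttt{I} are ours, chosen for illustration):

```python
def prime_divisors(n):
    """Set of primes dividing n (empty for n = 1)."""
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def I(order, bound):
    """Initial segment of I(g), up to `bound`, for an element g of the given order."""
    allowed = prime_divisors(order)
    return [k for k in range(1, bound + 1) if prime_divisors(k) <= allowed]

print(I(6, 20))  # → [1, 2, 3, 4, 6, 8, 9, 12, 16, 18]
```

For $|g|=6$ the set $I(g)$ consists of the integers of the form $2^a 3^b$, so its initial segment is $1,2,3,4,6,8,9,12,16,18,\ldots$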
\begin{thma} Let $R$ be an associative ring with unit, and $G$ be a finite group. For each prime $p$, the map $$\Phi = (\Phi_P)\colon \bigoplus_{P \in \mathfrak P_p(G)} \NK_n(RP)_{(p)} \to \NK_n(RG)_{(p)}$$ is surjective for all $n\in \mathbf Z$, after localizing at $p$. \end{thma} For every $g\in C_G^\perp(P)$, the homomorphism $\phi(P,g)$ factorizes as $$\NK_n(RP) \xrightarrow{\phi(P,g)} \NK_n(R[C \times P]) \xrightarrow{i_*} \NK_n(RG)$$ where $C = \langle g \rangle$ and $i\colon C\times P \to G$ is the inclusion map. Since the Verschiebung homomorphisms are natural with respect to the maps induced by group homomorphisms, we obtain: \begin{corb} The sum of the induction maps $$\bigoplus_{E\in \mathfrak E_p(G)} \NK_n(RE)_{(p)} \to \NK_n(RG)_{(p)}$$ from $p$-elementary subgroups is surjective, for all $n\in \mathbf Z$. \end{corb} Note that Theorem A does not show that $\NK_n(RG)$ is generated by induction from $p$-groups, because the maps $\phi(P,g)$ for $g\neq 1$ are not the usual maps induced by the inclusion $P \subset G$. \smallskip The Bass $\Nil$-groups are non-finitely generated torsion groups (if non-zero), so they remain difficult to calculate explicitly, but we have some new qualitative results. For example, Theorem A shows that the order of every element of $\NK_n(R G)$ divides some power of $m=|G|$, whenever $\NK_n(R) =0$ (since its $q$-localization is zero for all $q\nmid m$). For $R = \mathbf Z$ and some related rings, this is a result of Weibel~\cite[(6.5), p.~490]{weibel1}. In particular, we know that every element of $\NK _n(\mathbf Z P)$ has $p$-primary order, for every finite $p$-group $P$. If $R$ is a regular (noetherian) ring (e.g.~$R=\mathbf Z$), then $\NK_n(R) = 0$ for all $n\in \mathbf Z$. Note that an exponent that holds uniformly for all elements in $\NK_n(RP)$, over all $p$-groups of $G$, will be an exponent for $\NK_n(RG)$.
As a special case, we have $\NK _n(\mathbf Z[\mathbf Z/p])=0$ for $n \le 1$, for $p$ a prime (see Bass-Murthy \cite{bass-murthy1}), so Theorem A implies: \begin{corollary} \label{cor:vanishing_of_NK_n(ZG)_for_n_le_1} Let $G$ be a finite group and let $p$ be a prime. Suppose that $p^2$ does not divide the order of $G$. Then $$\NK _n(\mathbf Z G)_{(p)} = 0$$ for $n \le 1$. \end{corollary} As an application, we get a new proof of the fact that $\NK _n(\mathbf Z G) = 0$, for $n \le 1$, if the order of $G$ is square-free (see Harmon \cite{harmon1}). We also get from Theorem A an improved estimate on the exponent of $\NK _0(\mathbf Z G)$, using a result of Connolly-da Silva \cite{connolly-dasilva1}. If $n$ is a positive integer, and $n_q=q^k$ is its $q$-primary part, then let $c_q(n) = q^l$, where $l$ is the least integer with $l \geq \log_q(kn)$. According to \cite{connolly-dasilva1}, the exponent of $\NK_0(\mathbf Z G)$ divides $$c(n) = \prod_{q\mid n} c_q(n), \quad \text{where\ } n = |G|,$$ but according to Theorem A, the exponent of $\NK_0(\mathbf Z G)$ divides $$d(n) = \prod_{q\mid n} c(n_q)\ .$$ For example, $c(60) = 1296000$, but $d(60) = 120$. \medskip \noindent {\textbf{Acknowledgement}}: This paper is dedicated to Tom Farrell and Lowell Jones, whose work in geometric topology has been a constant inspiration. We are also indebted to Frank Quinn, who suggested that the Farrell-Hsiang induction techniques should be re-examined (see \cite{quinn1}). \section{Bass $\Nil$-groups}\label{two} Standard constructions in algebraic $K$-theory for exact categories or Waldhausen categories yield only $K$-groups in degrees $n \ge 0$ (see Quillen \cite{quillen1}, Waldhausen~\cite{waldhausen1}). One approach on the categorical level to negative $K$-groups has been developed by Pedersen-Weibel (see \cite{pedersen-weibel2}, \cite[\S 2.1]{hp2004}). Another ring-theoretic approach is given as follows (see Bartels-L\"uck~\cite[Section~9]{bartels-lueck1}, Wagoner~\cite{wagoner1}).
The {\em cone ring} $\Lambda \bbZ$ of $\bbZ$ is the ring of column- and row-finite $\mathbb N \times \mathbb N$-matrices over $\bbZ$, i.e., matrices such that every column and every row contains only finitely many non-zero entries. The {\em suspension ring} $\Sigma \bbZ$ is the quotient of $\Lambda \bbZ$ by the ideal of finite matrices. For an associative (but not necessarily commutative) ring $A$ with unit, we define $\Lambda A = \Lambda \bbZ \otimes_{\bbZ} A$ and $\Sigma A = \Sigma \bbZ \otimes_{\bbZ} A$. Obviously $\Lambda$ and $\Sigma$ are functors from the category of rings to itself. There are identifications, natural in $A$, \eqncount \begin{eqnarray} K_{n-1}(A) & = & K_n(\Sigma A) \label{K_n-1(A)_is_K_n(SigmaA)} \\ \eqncount \NK_{n-1}(A)& = & \NK_n(\Sigma A) \label{NK_n-1(A)_is_NK_n(SigmaA)} \end{eqnarray} for all $n \in \bbZ$. In our applications, usually $A = RG$ where $R$ is a ring with unit and $G$ is a finite group. Using these identifications it is clear how to extend the definitions of certain maps between $K$-groups given by exact functors to negative degrees. Moreover, we will explain constructions and proofs of the commutativity of certain diagrams only for $n \ge 1$, and will not explicitly mention that these carry over to all $n \in \bbZ$, because of the identifications~\eqref{K_n-1(A)_is_K_n(SigmaA)} and \eqref{NK_n-1(A)_is_NK_n(SigmaA)} and the obvious identification $\Sigma(RG) = (\Sigma R)G$, or because of Pedersen-Weibel~\cite{pedersen-weibel2}. We have a direct sum decomposition \eqncount \begin{eqnarray}K_n(A[t]) & = & K_n(A) \oplus \NK_n(A) \label{K_n(A[t])_is_NK_n(A)_oplus_K_N(A)} \end{eqnarray} which is natural in $A$, using the inclusion $A \to A[t]$ and the ring map $A[t] \to A$ defined by $t\mapsto 0$. Let $\FP(A)$ be the exact category of finitely generated projective $A$-modules, and let $\NIL(A)$ be the exact category of nilpotent endomorphisms of finitely generated projective $A$-modules.
The functor $\FP(A) \to \NIL(A)$ sending $Q\mapsto (Q,0)$ and the functor $\NIL(A) \to \FP(A)$ sending $(Q,f)\mapsto Q$ are exact functors. They yield a split injection on the $K$-groups $K_n(A):= K_n(\FP(A)) \to K_n(\NIL(A))$ for $n \in \bbZ$. Denote by $\widetilde{K}_n(\NIL(A))$ the cokernel for $n \in \bbZ$. There is an identification (see Grayson~\cite[Theorem~2]{grayson2}) \eqncount \begin{eqnarray} \widetilde{K}_{n-1}(\NIL(A)) & = & \NK_{n}(A), \label{K_n(Nil(A))_is_NK_nplus1(A)} \end{eqnarray} for $n \in \bbZ$, essentially given by the passage from a nilpotent $A$-endomorphism $(Q,f)$ to the $A\bbZ$-automorphism $$A\bbZ \otimes_A Q \to A\bbZ \otimes_A Q, \quad u \otimes q \mapsto u \otimes q - ut \otimes f(q),$$ for $t \in \bbZ$ a fixed generator. The Bass $\Nil$-groups appear in the Bass-Heller-Swan decomposition for $n \in \bbZ$ (see \cite[Chapter~XII]{bass1}, \cite{bass-heller-swan1}, \cite[p.~236]{grayson1}, \cite[p.~38]{quillen1}, and \cite[Theorem~10.1]{swan3} for the original sources, or the expositions in \cite[Theorems~3.3.3 and 5.3.30]{rosenberg1}, \cite[Theorem~9.8]{srinivas1}). \eqncount \begin{eqnarray} B \colon K_n(A) \oplus K_{n-1}(A) \oplus \NK_n(A) \oplus \NK_n(A) \xrightarrow{\cong} K_n(A\bbZ).
\label{Bass-Heller-Swan_decomposition} \end{eqnarray} The isomorphism $B$ is natural in $A$ and comes from the localization sequence \eqncount \begin{multline} 0 \to K_n(A) \xrightarrow{K_n(i) \oplus -K_n(i)} K_n(A[t]) \oplus K_n(A[t]) \xrightarrow{K_n(j_+) + K_n(j_-)} K_n(A\bbZ)\\ \xrightarrow{\partial_n} K_{n-1}(A) \to 0 \label{localization_sequence} \end{multline} where $i \colon A \to A[t]$ is the obvious inclusion and the inclusion $j_{\pm} \colon A[t] \to A\bbZ$ sends $t$ to $t^{\pm 1}$ if we write $A\bbZ = A[t,t^{-1}]$, the splitting of $\partial_n$ \eqncount \begin{eqnarray} s_n \colon K_{n-1}(A) \to K_n(A\bbZ) \label{splitting_s_n} \end{eqnarray} which is given by the natural pairing \begin{eqnarray*} K_{n-1}(A) \otimes K_1(\bbZ[t,t^{-1}]) \to K_n(A \otimes_{\bbZ}\bbZ[t,t^{-1}]) = K_n(A\bbZ) \end{eqnarray*} evaluated at the class of the unit $t \in \bbZ[t,t^{-1}]$ in $K_1(\bbZ[t,t^{-1}])$, and the canonical splitting~\eqref{K_n(A[t])_is_NK_n(A)_oplus_K_N(A)}. Let $B$ be the direct sum of $K_n(j_+\circ i)$, $s_n$, and the restrictions of the maps $K_n(j_+)$ and $K_n(j_{-})$ to $\NK_n(A)$. In particular, we get two homomorphisms, both natural in $A$, from the Bass-Heller-Swan decomposition~\eqref{Bass-Heller-Swan_decomposition}: \begin{eqnarray*} i_n \colon \NK_n(A) & \to & K_n(A\bbZ)\\ r_n \colon K_n(A\bbZ) & \to & \NK_n(A), \end{eqnarray*} by focusing on the first copy of $\NK_n(A)$, such that $r_n \circ i_n$ is the identity on $\NK_n(A)$. Let $\sigma_k \colon \bbZ \to \bbZ$ be the injection given by $t\mapsto t^k$. We may consider the ring $A[t]$ as an $A[t]$-$A[t]$-bimodule with the standard left action, and right action $a(t)\cdot b(t) = a(t)b(t^k)$ induced by $\sigma_k$. This bimodule induces an induction functor $$\ind_k\colon \FP(A[t]) \to \FP(A[t])$$ defined by $P \mapsto A[t]\otimes_{\sigma_k} P$.
There is also a restriction functor $$\res_k\colon \FP(A[t]) \to \FP(A[t])$$ defined by equipping $P$ with the new $A[t]$-module structure $a(t)\cdot p = a(t^k)p$, for all $a(t) \in A[t]$ and all $p \in P$. The induction and restriction functors yield two homomorphisms \begin{eqnarray*} \ind_{k} \colon K_n(A\bbZ) & \to & K_n(A\bbZ) \\ \res_{k} \colon K_n(A\bbZ) & \to & K_n(A\bbZ)\ . \end{eqnarray*} See \cite{stienstra1} or \cite[p.~27]{quillen1} for more details. There are also \emph{Verschiebung} and \emph{Frobenius} homomorphisms \eqncount \begin{eqnarray} V_k, F_k \colon \NK_n(A) & \to & \NK_n(A) \label{F_k_and_V_k} \end{eqnarray} induced on the $\Nil$-groups (and related to $\ind_k$ and $\res_k$ respectively). The Frobenius homomorphism is induced by the functor $\NIL(A) \to \NIL(A)$ sending $$(f \colon Q \to Q) \mapsto (f^k \colon Q \to Q),$$ while the Verschiebung homomorphism is induced by the functor sending $(Q,f)$ to the module $\bigoplus_{i=1}^k Q$ equipped with the nilpotent endomorphism $$\bigoplus_{i=1}^k Q \to \bigoplus_{i=1}^k Q,\quad (q_1, q_2, \ldots, q_k) \mapsto (f(q_k),q_1, q_2,\ldots, q_{k-1})\ .$$ The next result is proven by Stienstra~\cite[Theorem~4.7]{stienstra1}. (Notice that Stienstra considers only commutative rings $A$, but his argument goes through in our case since the set of polynomials $T$ we consider is $\{t^n \, | \, n \in \bbZ, n \ge 0\}$ and each polynomial in $T$ is central in $A[t]$ with respect to the multiplicative structure.) \begin{lemma}\label{lem:ind/res_and_V/F} The following diagrams commute for all $n \in \bbZ$ and $k\in \bbZ, k \ge 1$ $$\xymatrix{\NK_n(A)\ar[r]^{i_n}\ar[d]^{F_k}&{K_n(A\bbZ)}\ar[d]^{\res_{k}}\\ {\NK_n(A)}\ar[r]^{i_n}&{K_n(A\bbZ)}}$$ and $$\xymatrix{\NK_n(A)\ar[r]^{i_n}\ar[d]^{V_k}&{K_n(A\bbZ)}\ar[d]^{\ind_k}\\ {\NK_n(A)}\ar[r]^{i_n}&{K_n(A\bbZ)}}$$ \end{lemma} The next result is well-known for $n=1$ (see Farrell \cite[Lemma~3]{farrell1}). The general case is discussed by Weibel \cite[p.~479]{weibel1}, Stienstra \cite[p.~90]{stienstra1}, and Grunewald \cite[Prop.~4.6]{grunewald1}.
\begin{lemma} \label{lem:Frobenius_finally_vanishes} For every $n \in \bbZ$ and each $x \in \NK_n(A)$, there exists a positive integer $M(x)$ such that $ \res_{m} \circ\, i_n(x) = 0$ for $m \ge M(x)$. \end{lemma} \begin{proof} The Frobenius homomorphism $F_m \colon \NK_n(A) \to \NK_n(A)$ is induced by the functor sending $(Q,f)$ in $\NIL(A)$ to $(Q,f^m)$. For a given object $(Q,f)$ in $\NIL(A)$ there exists a positive integer $M(f)$ with $(Q,f^m) = (Q,0)$ for $m \ge M(f)$. This implies, by a filtration argument (see \cite[p.~90]{stienstra1} or \cite[Prop.~4.6]{grunewald1}), that for $x \in\NK_n(A)$ there exists a positive integer $M(x)$ with $F_m(x) = 0$ for $m \ge M(x)$. Now the claim follows from Lemma~\ref{lem:ind/res_and_V/F}. \end{proof} \section{Subgroups of $G \times \mathbf Z$}\label{three} A finite group $G$ is called $p$-hyperelementary if it is isomorphic to an extension $$1\to C \to G \to P \to 1$$ where $P$ is a $p$-group, and $C$ is a cyclic group of order prime to $p$. Such an extension is a semi-direct product, and hence determined by the action map $\alpha\colon P \to \aut(C)$ defined by conjugation. The group $G$ is $p$-elementary precisely when $\alpha$ is the trivial map, or in other words, when there exists a retraction $G \to C$. Notice that for a cyclic group $C=\cy{q^k}$, where $q\neq p$ is a prime, $\aut(C) = \cy{q^{k-1}(q-1)}$ if $q$ is odd, or $\aut(C) = \cy{2^{k-2}} \times \cy{2}$, $k\geq 2$, if $q=2$. In either case, $\aut(C)_{(p)} \cong \aut(Q)_{(p)}$ by projection to any non-trivial quotient group $C \to Q$. \begin{lemma} \label{lem_hyper_elem_versus_elem} Let $p$ be a prime, and let $G$ be a finite $p$-hyper\-elemen\-ta\-ry group. Suppose that for every prime $q\neq p$ which divides the order of $G$, there exists an epimorphism $f_q \colon G \to Q_q$ onto a non-trivial cyclic group $Q_q$ of $q$-power order. Then $G$ is $p$-elementary.
\end{lemma} \begin{proof} Let $Q$ be the product of the groups $Q_q$ over all primes $q\neq p$ which divide the order of $G$. Let $f\colon G \to Q$ be the product of the given epimorphisms. Since every subgroup in $G$ of order prime to $p$ is characteristic, we have a diagram $$\xymatrix@R-3pt{1 \ar[r] & C \ar[r]\ar[d] & G \ar[r]\ar[d] & P \ar@{=}[d]\ar[r]& 1\ \hphantom{.}\cr 1 \ar[r] & Q \ar[r] & \bar{G} \ar[r]\ & P \ar[r]& 1\ . }$$ But the epimorphism $f\colon G \to Q$ induces a retraction $\bar{G} \to Q$ of the lower sequence, hence its action map $\bar{\alpha}\colon P \to \aut(Q)$ is trivial. As remarked above, this implies that $\alpha$ is also trivial and hence $G$ is $p$-elementary. \end{proof} We now combine this result with the techniques of \cite{farrell-hsiang3}. Given positive integers $m$, $n$ and a prime $p$, we choose an integer $N=N(m,n,p)$ satisfying the following conditions: \begin{enumerate}\addtolength{\itemsep}{0.2\baselineskip} \item $p\nmid N$, but $q\mid N$ if and only if $q\mid n$ for all primes $q\neq p$. \item $k \geq \log_q(mn)$ (i.e.~$q^k \geq mn$) for each full prime power $q^k\|N$. \end{enumerate} The Farrell-Hsiang technique is to compute $K$-theory via $p$-hyperelementary subgroups $H \subset G \times \cy N$, and their inverse images $\Gamma_H = \pr^{-1}(H) \subset G \times \mathbf Z$ via the second factor projection map $\pr\colon G\times \mathbf Z \to G \times \cy N$. \begin{lemma} \label{lem:deep_and_p-torsion} Let $G$ be a finite group and let $M$ be a positive integer. Let $p$ be a prime dividing the order of $G$, and choose an integer $N= N(M, |G|,p)$. For every $p$-hyperelementary subgroup $H \subset G \times \cy N$, one of the following holds: \begin{enumerate} \item the inverse image $\Gamma_H \subset G \times m\cdot\mathbf Z$, for some $m \geq M$, or \item \label{lem:deep_and_p-torsion:p-elementary} the group $H$ is $p$-elementary.
\end{enumerate} In the second case, we have the following additional properties: \begin{enumerate}\renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \item \label{lem:deep_and_p-torsion:identification_of_H} There exists a finite $p$-group $P$ and an isomorphism $$\alpha \colon P \times \bbZ \xrightarrow{\ \cong\ } \Gamma_H.$$ \item \label{lem:deep_and_p-torsion:commutative_square} There exists a positive integer $k$, a positive integer $\ell$ with $(\ell,p) =1$, an element $u \in \cy \ell$ and an injective group homomorphism $$j \colon P \times \cy \ell \to G$$ such that the following diagram commutes $$\xymatrix{\hsp{xxxx}{P \times \bbZ}\hsp{xxxx} \ar[r]^(0.6){\alpha}\ar[d]_{\operatorname{id}_P \times \beta}&\ {\Gamma_H}\ar[d]^{i} \\ {P \times \cy \ell \times \bbZ}\ar[r]^(0.6){j \times k \cdot \operatorname{id}_{\bbZ}}&\ {G \times \bbZ}}$$ where $i \colon \Gamma_{H} \to G \times \bbZ$ is the inclusion and $\beta \colon \bbZ \to \cy \ell \times \bbZ$ sends $n$ to $(nu,n)$. \end{enumerate} \end{lemma} \begin{proof} In the proof we will write elements in $\mathbf Z/N$ additively and elements in $G$ multiplicatively. Let $H \subset G \times \cy N$ be a $p$-hyperelementary subgroup, and suppose that $\Gamma_H$ is \emph{not} contained in $G \times m\cdot \mathbf Z$ for any $m\geq M$. We have a pull-back diagram $$\xymatrix@R-2pt{&\ G''_\hsp{x}\ar@{=}[r]\ar@{>->}[d]&\ G''_\hsp{x}\ar@{>->}[d]\\ \cy{N''}\ \ar@{=}[d]\ar@{>->}[r]&H \ar@{->>}[r]\ar@{->>}[d]& G'\ar@{->>}[d]\\ \cy{N''}\ \ar@{>->}[r]&\cy{N'} \ar@{->>}[r]&\cy{\ell} }$$ where $G'\subset G$ and $\cy{N'}$ are the images of $H\subset G \times \cy{N}$ under the first and second factor projections, respectively. Notice that $\cy{\ell}$ is the common quotient group of $G'$ and $\cy{N'}$. In terms of this data, $H \subseteq G'\times \cy{N'}$ and hence the pre-image $\Gamma_H \subseteq G' \times m\cdot \mathbf Z\subseteq G\times m\cdot \mathbf Z$, where $m=N/N'$.
We now show that $G''$ is a $p$-group. Suppose, if possible, that some other prime $q\neq p$ divides $|G''|$. Since $H$ is $p$-hyperelementary the Sylow $q$-subgroup of $H$ is cyclic. However $G''\times \cy{N''} \subseteq H$, so $q\nmid N''$. But $N' = N''\cdot \ell$, hence this implies that the $q$-primary part $N'_q = \ell_q \leq |G'|\leq |G|$. Now $$m = N/N' \geq q^k/N'_q \geq q^k/|G| \geq M$$ by definition of $N = N(M, |G|, p)$. This would imply that $\Gamma_H \subset G \times m\cdot \mathbf Z$ for some $m\geq M$, contrary to our assumption. Hence $P:=G''$ is a $p$-group, or more precisely the $p$-Sylow subgroup of $G'$ since $p\nmid \ell$. Alternative (ii) is an immediate consequence. If $q\neq p$ is a prime dividing $|H|$, then $q\mid N'$ since $G''$ is a $p$-group. Hence $H$ admits an epimorphism onto a non-trivial finite cyclic $q$-group. By Lemma \ref{lem_hyper_elem_versus_elem}, this implies that $H$ is $p$-elementary. Note that there is an isomorphism $$j'=(id_P\times s)\colon P \times \cy \ell \xrightarrow{\cong} G'$$ defined by the inclusion $id_P\colon P\subset G'$ and a splitting $s\colon \cy \ell \to G'$ of the projection $G' \to \cy \ell$. Next we consider assertion (a). A similar pull-back diagram exists for the subgroup $\Gamma_H \subset G \times \mathbf Z$. We obtain a pull-back diagram of exact sequences $$\xymatrix{ 1\ar[r]&P \ar[r]\ar@{=}[d]&\Gamma_H\ar[r]\ar[d]& \mathbf Z\ar[r]\ar[d]& 1\\ 1\ar[r]&P \ar[r]&G'\ar[r]& \cy \ell\ar[r]\ar@/_/[l]_s& 1}$$ since $P = \Gamma_H \cap (G \times 0)$, and $\pr_{\mathbf Z}(\Gamma_H) = k\cdot \mathbf Z$ for some positive integer $k$. This exact sequence splits, since it is the pull-back of the lower split sequence: we can choose the element $(g_0, k) \in \Gamma_H \subseteq G'\times \mathbf Z$ which projects to a generator of $\pr_{\mathbf Z}(\Gamma_H) $, by taking $g_0=s(u)$ where $u \in \cy \ell$ is a generator. 
The isomorphism $\alpha\colon P \times \mathbf Z \xrightarrow{\approx} \Gamma_H $ is defined by $\alpha(g, n) = (gg_0^n, n)$ for $g\in P$ and $n \in \mathbf Z$. Assertion (b) follows by composing the splitting $ (id_P\times s)\colon P \times \cy \ell \cong G'$ with the inclusion $G'\subseteq G$ to obtain an injection $j\colon P \times \cy \ell\to G$. By the definition of $g_0$, the composite $(j\times k\cdot id_\mathbf Z)\circ (id_P \times \beta) = i \circ \alpha$, where $i\colon \Gamma_H \to G\times \mathbf Z$ is the inclusion. \end{proof} \section{The Proof of Theorem A}\label{four} We will need some standard results from induction theory for Mackey functors over finite groups, due to Dress (see \cite{dress1}, \cite{dress2}), as well as a refinement called the Burnside quotient Green ring associated to a Mackey functor (see \cite[\S 1]{h2006} for a description of this construction, and \cite{htw2007} for the detailed account). For any homomorphism $\pr\colon \Gamma \to G$ from an infinite discrete group to a finite group $G$, the functor $$\mathcal M(H) := K_n(R\Gamma_H),$$ where $\Gamma_H =\pr^{-1}(H)$, is a Mackey functor defined on subgroups $H \subseteq G$. The required restriction maps exist because the index $[\Gamma_H: \Gamma_K]$ is finite for any pair of subgroups $K\subset H$ in $G$. This point of view is due to Farrell and Hsiang \cite[\S 2]{farrell-hsiang2}. The Swan ring $SW(G,\mathbf Z)$ acts as a Green ring on $\mathcal M$, and it is a fundamental fact of Dress induction theory that the Swan ring is computable from the family $\mathcal H$ of hyperelementary subgroups of $G$. More precisely, the localized Green ring $SW(G,\mathbf Z)_{(p)}$ is computable from the family $\mathcal H_p$ of $p$-hyperelementary subgroups of $G$, for every prime $p$. It follows from Dress induction that the Mackey functor $\mathcal M(G)_{(p)}$ is also $p$-hyperelementary computable. We need a refinement of this result.
\begin{theorem}[{\cite[Theorem 1.8]{h2006}}] \label{thm: computable} Suppose that $\mathcal G$ is a Green ring which acts on a Mackey functor $\mathcal M$. If $\mathcal G\otimes \mathbf Z_{(p)}$ is $\mathcal H$-computable, then every $x\in \mathcal M(G)\otimes \mathbf Z_{(p)}$ can be written as $$x = \sum_{H \in \mathcal H_p} a_H \Ind_H^G(\Res_G^H(x))$$ for some coefficients $a_H \in \mathbf Z_{(p)}$. \end{theorem} We fix a prime $p$. For each element $x\in \NK_n(RG)$, let $M= M(x)$ as in Lemma \ref{lem:Frobenius_finally_vanishes} applied to the ring $A=RG$. Then $$\res_{m} \colon K_n(R[G\times \bbZ]) \to K_n(R[G\times \bbZ])$$ sends $i_n(x)$ to zero for $m \ge M(x)$. Now let $N= N(M, |G|, p)$, as defined in Section \ref{three}, and consider $\mathcal M(H) = K_n(R\Gamma_H)$ as a Mackey functor on the subgroups $H \subseteq G \times \cy N$, via the projection $\pr\colon G\times \mathbf Z \to G \times \cy N$. Let $\mathcal H_p(x)$ denote the set of $p$-hyperelementary subgroups $H \subseteq G \times\cy N$, such that $\Gamma_H$ is \emph{not} contained in $G\times m\cdot \mathbf Z$, for any $m\geq M(x)$. By the formula of Theorem \ref{thm: computable}, applied to $y = i_n(x)$, we see that $x$ lies in the image of the composite map \eqncount \begin{eqnarray} \bigoplus_{H \in \mathcal H_p(x)} K_n(R\Gamma_H)_{(p)} & \xrightarrow{i_*} & K_n(R[G\times \bbZ])_{(p)} \xrightarrow{r_n} \NK_n(RG)_{(p)}. 
\label{x_lies_in_calh_p(M(x)} \end{eqnarray} We conclude from Lemma~\ref{lem:deep_and_p-torsion}~\eqref{lem:deep_and_p-torsion:commutative_square} (using that notation) that the composite \eqncount \begin{multline}K_n(R[P \times \bbZ]) \xrightarrow{\alpha_*} K_n(R\Gamma_H) \xrightarrow{i_*} K_n(R[G \times \bbZ]) \xrightarrow{r_n} \NK_n(RG) \label{comp(1)} \end{multline} agrees with the composite \eqncount \begin{multline} K_n(R[P \times \bbZ]) \xrightarrow{(\operatorname{id}_P \times \beta)_*} K_n(R[P \times \cy \ell\times \bbZ]) \xrightarrow{(j \times \operatorname{id}_{\bbZ})_*} K_n(R[G\times \bbZ]) \\ \xrightarrow{(\operatorname{id}_G \times k \cdot\operatorname{id}_{\bbZ})_*} K_n(R[G \times \bbZ]) \xrightarrow{r_n} \NK_n(RG). \label{comp(2)} \end{multline} Recall that $\beta\colon \bbZ \to \cy \ell\times \bbZ$ sends $n$ to $(nu,n)$ for some generator $u \in \cy \ell$. Let $ \NIL(RP) \to \NIL(R[P \times \cy{\ell}])$ be the functor which sends a nilpotent $RP$-endomorphism $f \colon Q \to Q$ of a finitely generated $RP$-module $Q$ to the nilpotent $R[P \times \cy{\ell}]$-endomorphism $$ R[P \times \cy{\ell}] \otimes_{RP} Q \to R[P \times \cy{\ell}] \otimes_{RP} Q, \quad x \otimes q \mapsto xu \otimes f(q).$$ Let $\phi\colon \NK_n(RP) \to \NK_n(R[P \times \cy{\ell}])$ denote the induced homomorphism.
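The twisting construction can be made concrete in a toy case. The following sketch is entirely my own illustration, not from the paper: take $P$ trivial, $Q = R^2$, $\ell = 3$, and $u = 1$ a generator of $\cy\ell$ written additively. The induced endomorphism of $R[\cy\ell]\otimes_R Q$ shifts the group coordinate by $u$ while applying $f$, and is again nilpotent:

```python
# Toy model of the twisting functor NIL(RP) -> NIL(R[P x Z/l]) with P = {1}:
# a nilpotent endomorphism f of Q = R^d is sent to the endomorphism of
# R[Z/l] (x) Q mapping e_i (x) q to e_{i+u} (x) f(q), with u = 1 a generator.

l, d = 3, 2                       # order of Z/l and rank of Q
f = [[0, 1], [0, 0]]              # nilpotent: f^2 = 0

def induced(l, f):
    """Matrix of the twisted endomorphism on R[Z/l] (x) Q,
    with basis vectors indexed by pairs (group index i, module index j)."""
    d = len(f)
    F = [[0] * (l * d) for _ in range(l * d)]
    for i in range(l):
        for j in range(d):
            for k in range(d):
                # e_i (x) q_j  |->  e_{i+1 mod l} (x) f(q_j)
                F[((i + 1) % l) * d + k][i * d + j] = f[k][j]
    return F

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

F = induced(l, f)
Fk = F
for _ in range(l * d - 1):        # compute F^(l*d); nilpotency index <= l*d
    Fk = matmul(Fk, F)
assert any(v != 0 for row in F for v in row)
assert all(v == 0 for row in Fk for v in row)   # the induced map is nilpotent
```

Nilpotency is of course clear abstractly: if $f^{k} = 0$, then the $k$-th power of the induced endomorphism is $x \otimes q \mapsto xu^{k} \otimes f^{k}(q) = 0$.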
\begin{lemma} \label{lem:three_commutative_diagrams} \mbox{} \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate} \item \label{lem:three_commutative_diagrams:(1)} The following diagram commutes $$\xymatrix@!C=10em{K_n(R[P \times \bbZ]) \ar[r]^-{(\operatorname{id}_P \times \beta)_*} \ar[d]^-{r_n} & K_n(R[P \times \cy{\ell} \times \bbZ]) \ar[d]^-{r_n} \\ \NK_n(RP) \ar[r]^-{\phi} & \NK_n(R[P \times \cy{\ell}])\ .}$$ \item \label{lem:three_commutative_diagrams:(2)} The following diagram commutes $$\xymatrix@!C=10em{K_n(R[P \times \cy{\ell}\times \bbZ]) \ar[r]^-{(j\times \operatorname{id}_{\bbZ})_*} \ar[d]^-{r_n} & K_n(R[G\times \bbZ]) \ar[d]^-{r_n} \\ \NK_n(R[P \times \cy{\ell}]) \ar[r]^-{j_*}& \NK_n(RG)\ .}$$ \item \label{lem:three_commutative_diagrams:(3)} The following diagram commutes $$\xymatrix@!C=8em{{K_n(R[G\times \bbZ])} \ar[r]^{\ind_k}\ar[d]^{r_n}&{K_n(R[G \times \bbZ])}\ar[d]^ {r_n}\\ {\NK_n(RG)}\ar[r]^{V_k}&{\NK_n(RG)}\ .}$$ \end{enumerate} \end{lemma} \begin{proof} \eqref{lem:three_commutative_diagrams:(1)} The tensor product $\otimes_\mathbf Z$ induces a pairing \eqncount \begin{eqnarray} \label{naturality_of_pairing} &\mu_{R, \Gamma}\colon K_{n-1}(R) \otimes_{\bbZ} K_1(\bbZ \Gamma) \to K_n(R\Gamma)& \end{eqnarray} for every group $\Gamma$, which is natural in $R$ and $\Gamma$. It suffices to prove that the following diagram is commutative for every ring $R$ (since we can replace $R$ by $RP$). Let $A = R[\cy \ell]$ for short.
$$\xymatrix@R30mm {K_n(R) \oplus K_{n-1}(R) \oplus \NK_n(R) \oplus \NK_n(R) \ar[d]_-{\left(\begin{array}{cccc} i_{1,1} & 0 & 0 & 0 \\ i_{1,2} & i_{2,2} & 0 & 0 \\ 0 & 0 & \phi & 0 \\ 0 & 0 & 0 & \phi \end{array}\right)} \ar[r]^-B_-{\cong} & K_n(R\bbZ) \ar[d]^{\beta_*} \\ K_n(A) \oplus K_{n-1}(A) \oplus \NK_n(A) \oplus \NK_n(A) \ar[r]^-B_-{\cong} & K_n(A\bbZ)}$$ Here the vertical arrows are the isomorphisms given by the Bass-Heller-Swan decomposition~\eqref{Bass-Heller-Swan_decomposition}, the homomorphisms $i_{1,1}$ and $i_{2,2}$ are induced by the inclusion $R \to R[\cy \ell]$, and the homomorphism $i_{1,2}$ comes from the pairing \eqncount \begin{eqnarray} &\mu_{R, \cy \ell}\colon K_{n-1}(R) \otimes_{\bbZ} K_1(\bbZ [\cy \ell]) \to K_n(R[\cy \ell])& \end{eqnarray} evaluated at the class of the unit $u \in \bbZ[\cy \ell]$ in $K_1(\bbZ[\cy \ell])$ and the obvious change of rings homomorphism $K_1(R[\cy \ell]) \to K_1(R[\cy \ell \times \bbZ])$. In order to show commutativity it suffices to prove its commutativity after restricting to one of the four summands in the left upper corner. This is obvious for $K_n(R)$ since induction with respect to group homomorphisms is functorial. For $K_{n-1}(R)$ this follows from the naturality of the pairing (\ref{naturality_of_pairing}) in $R$ and the group $\cy \ell$ and the equality $$K_1(\beta)(t) = K_1(R[j_{\bbZ}])(t) + K_1(R[j_{\cy \ell}])(u)$$ where $j_{\bbZ} \colon \bbZ \to \cy \ell \times \bbZ$ and $j_{\cy \ell} \colon \cy \ell \to \cy \ell \times \bbZ$ are the obvious inclusions. The commutativity when restricted to the two Bass $\Nil$-groups follows from a result of Stienstra~\cite[Theorem~4.12 on page~78]{stienstra1}. \\[1mm] \eqref{lem:three_commutative_diagrams:(2)} This follows from the naturality in $R$ of $r_n$.
\\[1mm] \eqref{lem:three_commutative_diagrams:(3)} It suffices to show that the following diagram commutes (since we can replace $R$ by $RG$) $$\xymatrix@R30mm {K_n(R) \oplus K_{n-1}(R) \oplus \NK_n(R) \oplus \NK_n(R) \ar[d]_-{\left(\begin{array}{cccc} \operatorname{id} & 0 & 0 & 0 \\ 0 & k\cdot \operatorname{id} & 0 & 0 \\ 0 & 0 & V_k & 0 \\ 0 & 0 & 0 & V_k \end{array}\right)} \ar[r]^-B_-{\cong} & K_n(R\bbZ) \ar[d]^{\ind_k} \\ K_n(R) \oplus K_{n-1}(R) \oplus \NK_n(R) \oplus \NK_n(R) \ar[r]^-B_-{\cong} & K_n(R\bbZ)}$$ where the vertical arrows are the isomorphisms given by the Bass-Heller-Swan decomposition~\eqref{Bass-Heller-Swan_decomposition}. In order to show commutativity it suffices to prove its commutativity after restricting to one of the four summands in the left upper corner. This is obvious for $K_n(R)$ since induction with respect to group homomorphisms is functorial. Next we inspect $K_{n-1}(R)$. The following diagram commutes $$\xymatrix{{K_{n-1}(R) \otimes_{\bbZ} K_1(\bbZ[\bbZ])}\ar[r]\ar[d]^{\operatorname{id} \otimes \ind_{k}}&{K_n(R\bbZ)}\ar[d]^ {\ind_{k}}\\ {K_{n-1}(R) \otimes_{\bbZ} K_1(\bbZ[\bbZ])}\ar[r]&{K_n(R\bbZ)}}$$ where the horizontal pairings are given by $\mu_{R, \mathbf Z}$ from (\ref{naturality_of_pairing}). Since in $K_1(\mathbf Z[\mathbf Z])$, $k$ times the class $[t]$ of the unit $t$ is the class $[t^k]=\ind_k([t])$, the claim follows for $K_{n-1}(R)$. The commutativity when restricted to the copies of $\NK_n(R)$ follows from Lemma~\ref{lem:ind/res_and_V/F}. This finishes the proof of Lemma~\ref{lem:three_commutative_diagrams}. \end{proof} Lemma~\ref{lem:three_commutative_diagrams} implies that the composite~\eqref{comp(2)} and hence the composite~\eqref{comp(1)} agree with the composite \begin{multline*} K_n(R[P \times \bbZ]) \xrightarrow{r_n} \NK_n(RP) \xrightarrow{\phi} \NK_n(R[P \times \cy \ell]) \\ \xrightarrow{\ j_*\ } \NK_n(RG) \xrightarrow{V_k} \NK_n(RG). 
\label{com(3)} \end{multline*} Since we have already shown that the element $x \in \NK_n(RG)_{(p)}$ lies in the image of \eqref{x_lies_in_calh_p(M(x)}, we conclude that $x$ lies in the image of the map $$ \Phi = (\Phi_P)\colon \bigoplus_{P \in \mathfrak P_p(G)} \NK_n(RP)_{(p)} \to \NK_n(RG)_{(p)} $$ subject only to the restriction $k \in I(g)$ in the definition of $\Phi_P$. Consider $k \ge 1$, $P \in \mathfrak P_p$ and $g \in C^{\perp}_GP_p$. We write $k = k_0k_1$ for $k_1 \in I(g)$ and $(k_0,|g|) = 1$. We have $V_k = V_{k_1} \circ V_{k_0}$ (see Stienstra~\cite[Theorem~2.12]{stienstra1}). Since $(k_0,|g|) = 1$, we can find an integer $l_0$ such that $(l_0,|g|) = 1$ and $(g^{l_0})^{k_0} = g$. We conclude from Stienstra~\cite[page~67]{stienstra1} that $$V_{k_0} \circ \phi(P,g) = V_{k_0} \circ \phi(P,(g^{l_0})^{k_0}) = \phi(P,g^{l_0}) \circ V_{k_0}.$$ Hence the image of $V_k \circ \phi(P,g)$ is contained in the image of $V_{k_1} \circ \phi(P,g^{l_0})$ and $g^{l_0} \in C^{\perp}_GP_p$. This finishes the proof of Theorem A. \qed \section{Examples} We briefly discuss some examples. As usual, $p$ is a prime and $G$ is a finite group. The first example shows that Theorem A gives some information about $p$-elementary groups. \begin{example} \label{exa:p-elementary} Let $P$ be a finite $p$-group and let $\ell \ge 1$ be an integer with $(\ell,p) = 1$. Then Theorem A says that $\NK_n(R[P \times\cy \ell])_{(p)}$ is generated by the images of the maps $$V_k \circ \phi(P,g) \colon \NK_n(RP)_{(p)} \to \NK_n(R[P \times\cy \ell])_{(p)}$$ for all $g \in \cy \ell$, and all $k \in I(g)$. Since the composite $F_k \circ V_k=k \cdot \operatorname{id}$ for $k\geq 1$ (see~\cite[Theorem~2.12]{stienstra1}) and $(k,p) = 1$, the map $$V_k \colon \NK_n(RG)_{(p)} \to \NK_n(RG)_{(p)}$$ is injective for all $k \in I(g)$. For $g=1$, $\phi(P, 1)$ is the map induced by the first factor inclusion $P \to P \times\cy \ell$, and this is a split injection.
In addition, the composition of $\phi(P,g)$ with the functor induced by the projection $P\times \cy \ell \to P$ is the identity on $\NIL(RP)$, for any $g\in \cy \ell$. Therefore, all the maps $V_k \circ \phi(P,g)$ are split injective. It would be interesting to understand better the images of these maps as $k$ and $g$ vary. For example, what is the image of $\Phi_{\{1\}}$ where we take $P = \{1 \}$~? \qed \end{example} In some situations the Verschiebung homomorphisms and the homomorphisms $\phi(P,g)$ for $g \not=1$ do not occur. \begin{example} \label{ex:no_phi(P,g)_and_V_k} Suppose that $R$ is a regular ring. We consider the special situation where the $p$-Sylow subgroup $G_p$ is a normal subgroup of $G$, and furthermore where $C_G(P) \subseteq P$ holds for every non-trivial subgroup $P \subseteq G_p$. For $P \neq \{ 1\}$, we have $C^\perp_G(P) = \{1\}$ and the homomorphism $\Phi_P = \phi(P, 1)$, which is the ordinary induction map. We can ignore the map $\Phi_{\{1\}}$ since $\NK_n(R) = 0$ by assumption. Therefore, the (surjective) image of $\Phi$ in Theorem A is just the image of the induction map $$ \NK_n(RG_p)_{(p)} \to \NK_n(RG)_{(p)}\ .$$ Note that $\NK_n(RG_p)$ is $p$-local, and we can divide out the conjugation action on $ \NK_n(RG_p)$ because inner automorphisms act as the identity on $\NK_n(RG)$. However, $G/G_p$ is a finite group of order prime to $p$, so that $$H_0(G/G_p; \NK_n(RG_p)) = H^0(G/G_p; \NK_n(RG_p)) = \NK_n(RG_p)^{G/G_p}\ .$$ Hence the induction map on this fixed submodule $$\lambda_n\colon \NK_n(RG_p)^{G/G_p} \to \NK_n(RG)_{(p)}$$ is surjective. An easy application of the double coset formula shows that the composition of $\lambda_n$ with the restriction map $\res_{G}^{G_p} \colon \NK_n(RG)_{(p)} \to \NK_n(RG_p)_{(p)}$ is given by $|G/G_p|$-times the inclusion $\NK_n(RG_p)^{G/G_p} \to \NK_n(RG_p)$. Since $(|G/G_p|,p) = 1$ this composition, and hence $\lambda_n$, are both injective. We conclude that $\lambda_n$ is an isomorphism. 
Concrete examples are provided by semi-direct products $G = P \rtimes C$, where $P$ is a cyclic $p$-group, $C$ has order prime to $p$, and the action map $\alpha\colon C \to \aut(P)$ is injective. If we assume, in addition, that the order of $C$ is square-free, then $$\NK_n(\mathbf Z P)^{C} \xrightarrow{\cong} \NK_n(\mathbf Z[P \rtimes C])$$ for all $n\leq 1$ (this dimension restriction, and setting $R=\mathbf Z$, are only needed to apply Bass-Murthy \cite{bass-murthy1} in order to eliminate possible torsion in the $\Nil$-group of orders prime to $p$). \qed \end{example} \section{$NK_n(A)$ as a Mackey functor}\label{seven} Let $G$ be a finite group. We want to show that the natural maps $$i_n \colon \NK_n(RG) \to K_n(R[G\times \bbZ])$$ and $$r_n \colon K_n(R[G\times \bbZ]) \to \NK_n(RG)$$ in the Bass-Heller-Swan isomorphism are maps of Mackey functors (defined on subgroups of $G$). Hence $\NK_n(RG)$ is a direct summand of $K_n(R[G \times \bbZ])$ as a Mackey functor. Since $RG$ is a finitely generated free $RH$-module, for any subgroup $H\subset G$, it is enough to apply the following lemma to $A = RH$ and $B= RG$. \begin{lemma} \label{lem:Bass_Heller_Swan_and_Mackey} Let $i \colon A \to B$ be an inclusion of rings. Then the following diagram commutes $$\xymatrix{K_n(A) \oplus K_{n-1}(A) \oplus \NK_n(A) \oplus \NK_n(A) \ar[r]^-{\cong} \ar[d]^-{i_* \oplus i_* \oplus i_* \oplus i_*} & K_n(A\bbZ) \ar[d]^-{i[\bbZ]_*} \\ K_n(B) \oplus K_{n-1}(B) \oplus \NK_n(B) \oplus \NK_n(B) \ar[r]^-{\cong} & K_n(B\bbZ) }$$ where the vertical maps are given by induction, and the horizontal maps are the Bass-Heller-Swan isomorphisms. 
If $B$ is finitely generated and projective, considered as an $A$-module, then the following diagram commutes $$\xymatrix{K_n(B) \oplus K_{n-1}(B) \oplus \NK_n(B) \oplus \NK_n(B) \ar[r]^-{\cong} \ar[d]^-{i^* \oplus i^* \oplus i^* \oplus i^*} & K_n(B\bbZ) \ar[d]^-{i[\bbZ]^*} \\ K_n(A) \oplus K_{n-1}(A) \oplus \NK_n(A) \oplus \NK_n(A) \ar[r]^-{\cong} & K_n(A\bbZ) }$$ where the vertical maps are given by restriction, and the horizontal maps are the Bass-Heller-Swan isomorphisms. \end{lemma} \begin{proof} One has to show the commutativity of the diagram when restricted to each of the four summands in the left upper corner. In each case these maps are induced by functors, and one shows that the two corresponding composites of functors are naturally equivalent. Hence the two composites induce the same map on $K$-theory. As an illustration we do this in two cases. Consider the third summand $\NK_n(A)$ in the first diagram. The Bass-Heller-Swan isomorphism restricted to it is given by the restriction of the map $(j_+)_* \colon K_n(A[t]) \to K_n(A[t,t^{-1}]) = K_n(A\bbZ)$ induced by the obvious inclusion $j_+ \colon A[t] \to A[t,t^{-1}]$, restricted to $\NK_n(A) = \ker \left(\epsilon_* \colon K_n(A[t]) \to K_n(A)\right)$, where $\epsilon \colon A[t] \to A$ is given by $t = 0$. Since all these maps come from induction with ring homomorphisms, the following two diagrams commute $$\xymatrix{K_n(A[t]) \ar[r]^-{\epsilon_*} \ar[d]^-{i[t]_*} & K_n(A) \ar[d]^-{i_*} \\ K_n(B[t]) \ar[r]^-{\epsilon_*} & K_n(B) } $$ and $$\xymatrix{K_n(A[t]) \ar[r]^-{(j_+)_*} \ar[d]^-{i[t]_*} & K_n(A[t,t^{-1}]) \ar[d]^-{i[t,t^{-1}]_*} \\ K_n(B[t]) \ar[r]^-{(j_+)_*} & K_n(B[t,t^{-1}]) } $$ and the claim follows. Consider the second summand $K_{n-1}(B)$ in the second diagram. The restriction of the Bass-Heller-Swan isomorphism to $K_{n-1}(B)$ is given by evaluating the pairing~\eqref{naturality_of_pairing} for $\Gamma = \bbZ$ at the unit $t \in \bbZ[\bbZ]$.
Hence it suffices to show that the following diagram commutes, where the horizontal maps are given by the pairing~\eqref{naturality_of_pairing} for $\Gamma = \bbZ$ and the vertical maps come from restriction $$ \xymatrix{K_{n-1}(B) \otimes K_1(\bbZ[\bbZ]) \ar[r] \ar[d]_{i^* \otimes \operatorname{id}} & K_n(B\bbZ) \ar[d]_{i[\bbZ]^*} \\ K_{n-1}(A) \otimes K_1(\bbZ[\bbZ]) \ar[r] & K_n(A\bbZ) }$$ This follows from the fact that for a finitely generated projective $B$-module $P$ and a finitely generated projective $\bbZ[\bbZ]$-module $Q$ there is a natural isomorphism of $A\bbZ$-modules $$(\res_A P)\otimes_{\bbZ} Q \xrightarrow{\cong} \res_{A\bbZ} (P \otimes_{\bbZ} Q), \quad p \otimes q \mapsto p \otimes q. \qedhere$$ \end{proof} \begin{corollary} Let $G$ be a finite group, and $R$ be a ring. Then, for any subgroup $H\subset G$, the induction maps $\ind_H^G\colon \NK_n(RH) \to \NK_n(RG)$ and the restriction maps $\res^H_G\colon \NK_n(RG) \to \NK_n(RH)$ commute with the Verschiebung and Frobenius homomorphisms $V_k$, $F_k$, for $k \geq 1$. \end{corollary} \begin{proof} We combine the results of Lemma \ref{lem:Bass_Heller_Swan_and_Mackey} with Stienstra's Lemma \ref{lem:ind/res_and_V/F} (note that these two diagrams also commute with $i_n$ replaced by $r_n$). \end{proof}
https://arxiv.org/abs/1809.10907
An Introduction to Modular Forms
In this course we introduce the main notions of the classical theory of modular forms. A complete treatise in a similar style can be found in the author's joint book with F. Strömberg [1].
\section{Functional Equations} Let $f$ be a complex function defined over some subset $D$ of ${\mathbb C}$. A \emph{functional equation} is some type of equation relating the value of $f$ at any point $z\in D$ to some other point, for instance $f(z+1)=f(z)$. If $\gamma$ is some function from $D$ to itself, one can ask more generally that $f(\gamma(z))=f(z)$ for all $z\in D$ (or even $f(\gamma(z))=v(\gamma,z)f(z)$ for some known function $v$). It is clear that $f(\gamma^m(z))=f(z)$ for all $m\ge0$, and even for all $m\in{\mathbb Z}$ if $\gamma$ is invertible, and more generally the set of bijective functions $u$ such that $f(u(z))=f(z)$ forms a \emph{group}. Thus, the basic setting of functional equations (at least of the type that we consider) is that we have a group of transformations $G$ of $D$, that we ask that $f(u(z))=f(z)$ (or more generally $f(u(z))=j(u,z)f(z)$ for some known $j$) for all $u\in G$ and $z\in D$, and we ask for some type of regularity condition on $f$ such as continuity, meromorphy, or holomorphy. Note that there is a trivial but essential way to construct from scratch functions $f$ satisfying a functional equation of the above type: simply choose any function $g$ and set $f(z)=\sum_{v\in G}g(v(z))$. Since $G$ is a group, it is clear that \emph{formally} $f(u(z))=f(z)$ for $u\in G$. Of course there are convergence questions to be dealt with, but this is a fundamental construction, which we call \emph{averaging} over the group. We consider a few fundamental examples. \subsection{Fourier Series} We choose $D={\mathbb R}$ and $G={\mathbb Z}$ acting on ${\mathbb R}$ by translations. Thus, we ask that $f(x+1)=f(x)$ for all $x\in{\mathbb R}$. 
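The averaging construction is easy to see in action numerically. Here is a minimal sketch (the choice of Gaussian $g$ and the truncation bound are mine, purely for illustration), with $G={\mathbb Z}$ acting on ${\mathbb R}$ by translations:

```python
import math

def average(g, x, N=40):
    """Average g over the group Z acting by translations: f(x) = sum_n g(x + n),
    truncated at |n| <= N (accurate when g decays rapidly)."""
    return sum(g(x + n) for n in range(-N, N + 1))

g = lambda x: math.exp(-x * x)    # any rapidly decaying g works

# f inherits the functional equation f(x + 1) = f(x) from the group structure:
for x in (0.0, 0.3, 0.77):
    assert abs(average(g, x + 1) - average(g, x)) < 1e-9
```

The identity holds formally because replacing $x$ by $x+1$ merely permutes the summands; numerically the only error comes from the truncation, which is negligible here because of the Gaussian decay.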
It is well-known that this leads to the theory of \emph{Fourier series}: if $f$ satisfies suitable regularity conditions (we need not specify them here since in the context of modular forms they will be satisfied) then $f$ has an expansion of the type $$f(x)=\sum_{n\in{\mathbb Z}}a(n)e^{2\pi inx}\;,$$ absolutely convergent for all $x\in{\mathbb R}$, where the \emph{Fourier coefficients} $a(n)$ are given by the formula $$a(n)=\int_0^1e^{-2\pi inx}f(x)\,dx\;,$$ which follows immediately from the orthonormality of the functions $e^{2\pi imx}$ (you may of course replace the integral from $0$ to $1$ by an integral from $z$ to $z+1$ for any $z\in{\mathbb R}$). An important consequence of this, easily proved, is the \emph{Poisson summation formula}: define the \emph{Fourier transform} of $f$ by $$\wh{f}(x)=\int_{-\infty}^{\infty}e^{-2\pi i xt}f(t)\,dt\;.$$ We ignore all convergence questions, although of course they must be taken into account in any computation. Consider the function $g(x)=\sum_{n\in{\mathbb Z}}f(x+n)$, which is exactly the averaging procedure mentioned above. Thus $g(x+1)=g(x)$, so $g$ has a Fourier series, and an easy computation shows the following (again omitting any convergence or regularity assumptions): \begin{proposition}[Poisson summation] We have $$\sum_{n\in{\mathbb Z}}f(x+n)=\sum_{m\in{\mathbb Z}}\wh{f}(m)e^{2\pi imx}\;.$$ In particular $$\sum_{n\in{\mathbb Z}}f(n)=\sum_{m\in{\mathbb Z}}\wh{f}(m)\;.$$ \end{proposition} A typical application of this formula is to the ordinary Jacobi \emph{theta function}: it is well-known (prove it as an exercise if you have never done so) that the function $e^{-\pi x^2}$ is invariant under Fourier transform. This implies the following: \begin{proposition} If $f(x)=e^{-a\pi x^2}$ for some $a>0$ then $\wh{f}(x)=a^{-1/2}e^{-\pi x^2/a}$.
\end{proposition} \begin{proof} Simple change of variable in the integral.\qed\end{proof} \begin{corollary}\label{corth} Define $$T(a)=\sum_{n\in{\mathbb Z}}e^{-a\pi n^2}\;.$$ We have the functional equation $$T(1/a)=a^{1/2}T(a)\;.$$ \end{corollary} \begin{proof} Immediate from the proposition and Poisson summation.\qed\end{proof} This is historically the first example of modularity, which we will see in more detail below. \begin{exercise} Set $S=\sum_{n\ge1}e^{-(n/10)^2}$. \begin{enumerate}\item Compute numerically $S$ to $100$ decimal digits, and show that it is apparently equal to $5\sqrt{\pi}-1/2$. \item Show that in fact $S$ is not exactly equal to $5\sqrt{\pi}-1/2$, and using the above corollary give a precise estimate for the difference. \end{enumerate} \end{exercise} \begin{exercise}\begin{enumerate} \item Show that the function $f(x)=1/\cosh(\pi x)$ is also invariant under Fourier transform. \item In a manner similar to the corollary, define $$T_2(a)=\sum_{n\in{\mathbb Z}}1/\cosh(\pi na)\;.$$ Show that we have the functional equation $$T_2(1/a)=aT_2(a)\;.$$ \item Show that in fact $T_2(a)=T(a)^2$ (this may be more difficult). \item Do the same exercise as the previous one by noticing that $S=\sum_{n\ge1}1/\cosh(n/10)$ is very close to $5\pi-1/2$. \end{enumerate} \end{exercise} Above we have mainly considered Fourier series of functions defined on ${\mathbb R}$. We now consider more generally functions $f$ defined on ${\mathbb C}$ or a subset of ${\mathbb C}$. We again assume that $f(z+1)=f(z)$, i.e., that $f$ is periodic of period $1$. Thus (modulo regularity) $f$ has a Fourier series, but the Fourier coefficients $a(n)$ now depend on $y=\Im(z)$: $$f(x+iy)=\sum_{n\in{\mathbb Z}}a(n;y)e^{2\pi i nx}\text{\quad with\quad}a(n;y)=\int_0^1f(x+iy)e^{-2\pi inx}\,dx\;.$$ If we impose no extra condition on $f$, the \emph{functions} $a(n;y)$ are quite arbitrary. 
But in almost all of our applications $f$ will be \emph{holomorphic}; this means that $\partial f(z)/\partial\ov{z}=0$, or equivalently that $(\partial/\partial x+i\,\partial/\partial y)f=0$. Replacing in the Fourier expansion (recall that we do not worry about convergence issues) gives $$\sum_{n\in{\mathbb Z}}(2\pi ina(n;y)+ia'(n;y))e^{2\pi inx}=0\;,$$ hence by uniqueness of the expansion we obtain the differential equation $a'(n;y)=-2\pi na(n;y)$, so that $a(n;y)=c(n)e^{-2\pi ny}$ for some constant $c(n)$. This allows us to write cleanly the Fourier expansion of a holomorphic function in the form $$f(z)=\sum_{n\in{\mathbb Z}}c(n)e^{2\pi inz}\;.$$ Note that if the function is only \emph{meromorphic}, the region of convergence will be limited by the closest pole. Consider for instance the function $f(z)=1/(e^{2\pi iz}-1)=e^{-\pi iz}/(2i\sin(\pi z))$. If we set $y=\Im(z)$ we have $|e^{2\pi iz}|=e^{-2\pi y}$, so if $y>0$ we have the Fourier expansion $f(z)=-\sum_{n\ge0}e^{2\pi inz}$, while if $y<0$ we have the different Fourier expansion $f(z)=\sum_{n\le-1}e^{2\pi inz}$. \section{Elliptic Functions} The preceding section was devoted to periodic functions. We now assume that our functions are defined on some subset of ${\mathbb C}$ and assume that they are \emph{doubly periodic}: this can be stated either by saying that there exist two ${\mathbb R}$-linearly independent complex numbers $\omega_1$ and $\omega_2$ such that $f(z+\omega_i)=f(z)$ for all $z$ and $i=1,2$, or equivalently by saying that there exists a \emph{lattice} $\Lambda$ in ${\mathbb C}$ (here ${\mathbb Z}\omega_1+{\mathbb Z}\omega_2$) such that for any $\lambda\in\Lambda$ we have $f(z+\lambda)=f(z)$.
Note in passing that if $\omega_1/\omega_2\in{\mathbb Q}$ this is equivalent to (single) periodicity, and if $\omega_1/\omega_2\in{\mathbb R}\setminus{\mathbb Q}$ the set of periods would be dense so the only ``doubly periodic'' (at least continuous) functions would essentially reduce to functions of one variable. For a similar reason there do not exist nonconstant continuous functions which are triply periodic. In the case of simply periodic functions considered above there already existed some natural functions such as $e^{2\pi inx}$. In the doubly-periodic case no such function exists (at least on an elementary level), so we have to construct them, and for this we use the standard averaging procedure seen and used above. Here the group is the lattice $\Lambda$, so we consider functions of the type $f(z)=\sum_{\omega\in\Lambda}\phi(z+\omega)$. For this to converge $\phi(z)$ must tend to $0$ sufficiently fast as $|z|$ tends to infinity, and since this is a double sum ($\Lambda$ is a two-dimensional lattice), it is easy to see by comparison with an integral (assuming $|\phi(z)|$ is regularly decreasing) that $|\phi(z)|$ should decrease at least like $1/|z|^{\alpha}$ for $\alpha>2$. Thus a first reasonable definition is to set $$f(z)=\sum_{\omega\in\Lambda}\dfrac{1}{(z+\omega)^3}=\sum_{(m,n)\in{\mathbb Z}^2}\dfrac{1}{(z+m\omega_1+n\omega_2)^3}\;.$$ This will indeed be a doubly periodic function, and by normal convergence it is immediate to see that it is a meromorphic function on ${\mathbb C}$ having only poles for $z\in\Lambda$, so this is our first example of an \emph{elliptic function}, which is by definition a doubly periodic function which is meromorphic on ${\mathbb C}$. Note for future reference that since $-\Lambda=\Lambda$ this specific function $f$ is odd: $f(-z)=-f(z)$. However, this is not quite the basic elliptic function that we need. 
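The claimed properties of this first elliptic function (oddness and double periodicity) can be confirmed numerically; a minimal sketch for the lattice $\Lambda={\mathbb Z}+{\mathbb Z}i$ (the choice of lattice and the cutoff $M$ are mine, purely for illustration):

```python
def f3(z, M=80):
    """Truncated lattice sum f(z) = sum over omega in Z + Zi of 1/(z + omega)^3."""
    return sum(1 / (z + m + n * 1j) ** 3
               for m in range(-M, M + 1) for n in range(-M, M + 1))

z = 0.3 + 0.4j
assert abs(f3(-z) + f3(z)) < 1e-9     # f is odd, since -Lambda = Lambda
assert abs(f3(z + 1) - f3(z)) < 1e-2  # double periodicity, up to truncation error
assert abs(f3(z + 1j) - f3(z)) < 1e-2
```

The truncated sum is only approximately periodic (shifting $z$ by a period moves terms across the boundary of the box), but the error is $O(1/M^2)$ thanks to the $1/|\omega|^3$ decay, while oddness is exact for the symmetric truncation.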
We can integrate term by term, as long as we choose constants of integration such that the integrated series continues to converge. To avoid stupid multiplicative constants, we integrate $-2f(z)$: all antiderivatives of $-2/(z+\omega)^3$ are of the form $1/(z+\omega)^2+C(\omega)$ for some constant $C(\omega)$, hence to preserve convergence we will choose $C(0)=0$ and $C(\omega)=-1/\omega^2$ for $\omega\ne0$: indeed, $|1/(z+\omega)^2-1/\omega^2|$ is asymptotic to $2|z|/|\omega^3|$ as $|\omega|\to\infty$, so we are again in the domain of normal convergence. We will thus define: $$\wp(z)=\dfrac{1}{z^2}+\sum_{\omega\in\Lambda\setminus\{0\}}\left(\dfrac{1}{(z+\omega)^2}-\dfrac{1}{\omega^2}\right)\;,$$ the \emph{Weierstrass $\wp$-function}. By construction $\wp'(z)=-2f(z)$, where $f$ is the function constructed above, so $\wp'(z+\omega)=\wp'(z)$ for any $\omega\in \Lambda$, hence $\wp(z+\omega)=\wp(z)+D(\omega)$ for some constant $D(\omega)$ depending on $\omega$ but not on $z$. Note a slightly subtle point here: we use the fact that ${\mathbb C}\setminus\Lambda$ is \emph{connected}. Do you see why? Now as before it is clear that $\wp(z)$ is an even function: thus, setting $z=-\omega/2$ we have $\wp(\omega/2)=\wp(-\omega/2)+D(\omega)=\wp(\omega/2)+D(\omega)$, so $D(\omega)=0$ hence $\wp(z+\omega)=\wp(z)$ and $\wp$ is indeed an elliptic function. There is a mistake in this reasoning: do you see it? Since $\wp$ has poles on $\Lambda$, we cannot reason as we do when $\omega/2\in\Lambda$. Fortunately, this does not matter: since $\omega_i/2\notin\Lambda$ for $i=1,2$, we have shown at least that $D(\omega_i)=0$ hence that $\wp(z+\omega_i)=\wp(z)$ for $i=1,2$, so $\wp$ is doubly periodic (so indeed $D(\omega)=0$ for \emph{all} $\omega\in\Lambda$). The theory of elliptic functions is incredibly rich, and whole treatises have been written about them. 
Since this course is mainly about modular forms, we will simply summarize the main properties, and emphasize those that are relevant to us. All are proved using manipulation of power series and complex analysis, and all the proofs are quite straightforward. For instance: \begin{proposition}\label{propellf} Let $f$ be a nonzero elliptic function with period lattice $\Lambda$ as above, and denote by $P=P_a$ a ``fundamental parallelogram'' $P_a=\{z=a+x\omega_1+y\omega_2,\ 0\le x<1,\ 0\le y<1\}$, where $a$ is chosen so that the boundary of $P_a$ does not contain any zeros or poles of $f$ (see Figure~\ref{fig:lattice_contour}). \begin{enumerate}\item The number of zeros of $f$ in $P$ is equal to the number of poles (counted with multiplicity), and this number is called the \emph{order} of $f$. \item The sum of the residues of $f$ at the poles in $P$ is equal to $0$. \item The sum of the zeros minus the sum of the poles of $f$ in $P$ (counted with multiplicity) belongs to $\Lambda$. \item If $f$ is nonconstant its order is at least $2$. \end{enumerate}\end{proposition} \begin{proof} For (1), (2), and (3), simply integrate $f'(z)/f(z)$, $f(z)$, and $zf'(z)/f(z)$ respectively along the boundary of $P$ and use the residue theorem. For (4), we first note that by (2) $f$ cannot have order $1$ since it would have a simple pole with residue $0$. But it also cannot have order $0$: this would mean that $f$ has no pole, so it is an entire function, and since it is doubly-periodic its values are those taken on the closure of $P$, which is compact, so $f$ is \emph{bounded}.
By a famous theorem of Liouville (of which this is perhaps the most famous application) it implies that $f$ is constant, contradicting the assumption of (4).\qed\end{proof} \begin{figure} \tikzset{->-/.style={decoration={ markings, mark=at position .5 with {\arrow{latex}}},postaction={decorate}}}
\begin{tikzpicture}
\coordinate (Origin) at (0,0);
\coordinate (XAxisMin) at (-3,0);
\coordinate (XAxisMax) at (7,0);
\coordinate (YAxisMin) at (0,-2);
\coordinate (YAxisMax) at (0,7);
\coordinate (A) at (1,0);
\draw [thin, gray,-latex] (XAxisMin) -- (XAxisMax);
\draw [thin, gray,-latex] (YAxisMin) -- (YAxisMax);
\clip (-3,-2) rectangle (7cm,7cm);
\pgftransformcm{1}{0.2}{0.5}{1}{\pgfpoint{0cm}{0cm}}
\coordinate (Bone) at (0,3);
\coordinate (Btwo) at (3,0);
\draw[style=help lines,dashed] (-10,-10) grid[step=3cm] (10,10);
\foreach \x in {-5,-4,...,5}{
\foreach \y in {-5,-4,...,5}{
\node[draw,circle,inner sep=1pt,gray,fill=gray] at (3*\x,3*\y) {};
}
}
\node at ($0.5*(Bone)+(Btwo)+(A)$) [right] {$C_a$};
\node[draw,circle,inner sep=1pt,black,fill] at ($(Bone)+(A)$) {};
\node[draw,circle,inner sep=1pt,black,fill] at ($(Btwo)+(A)$) {};
\node[draw,circle,inner sep=1pt,black,fill] at ($(Bone)+(Btwo)+(A)$) {};
\node[draw,circle,inner sep=1pt,black,fill] at (A) {};
\draw [ultra thick,-,black] ($(Bone)+(A)$) -- (A) node [below left] {$a$};
\draw [ultra thick,-,black] (A)-- ($(Btwo)+(A)$) node [below right] {$\omega_2+a$};
\draw [ultra thick,-,black] ($(Bone)+(Btwo)+(A)$) -- ($(Bone)+(A)$) node [above] {$\omega_1+a$};
\draw [ultra thick,-,black] ($(Btwo)+(A)$)-- ($(Btwo)+(Bone)+(A)$) node [below right] {$\omega_1+\omega_2+a$};
\draw (Bone) node [above left] {$\omega_1$};
\draw (Btwo) node [above left] {$\omega_2$};
\end{tikzpicture}
\caption{Fundamental Parallelogram $P_a$}
\label{fig:lattice_contour}
\end{figure}
Note that clearly $\wp$ has order $2$, and the last result shows that we cannot find an elliptic function of order $1$.
Note however the following: \begin{exercise}\begin{enumerate} \item By integrating term by term the series defining $-\wp(z)$ show that if we define the \emph{Weierstrass zeta function} $$\zeta(z)=\dfrac{1}{z}+\sum_{\omega\in\Lambda\setminus\{0\}}\left(\dfrac{1}{z+\omega}-\dfrac{1}{\omega}+\dfrac{z}{\omega^2}\right)\;,$$ this series converges normally on any compact subset of ${\mathbb C}\setminus\Lambda$ and satisfies $\zeta'(z)=-\wp(z)$. \item Deduce that there exist constants $\eta_1$ and $\eta_2$ such that $\zeta(z+\omega_1)=\zeta(z)+\eta_1$ and $\zeta(z+\omega_2)=\zeta(z)+\eta_2$, so that if $\omega=m\omega_1+n\omega_2$ we have $\zeta(z+\omega)=\zeta(z)+m\eta_1+n\eta_2$. Thus $\zeta$ (which would be of order $1$) is not doubly-periodic but only quasi-doubly periodic: this is called a \emph{quasi-elliptic function}. \item By integrating around the usual fundamental parallelogram, show the important relation due to Legendre: $$\omega_1\eta_2-\omega_2\eta_1=\pm2\pi i\;,$$ the sign depending on the ordering of $\omega_1$ and $\omega_2$. \end{enumerate} \end{exercise} The main properties of $\wp$ that we want to mention are as follows: First, for $z$ sufficiently small and $\omega\ne0$ we can expand $$\dfrac{1}{(z+\omega)^2}=\sum_{k\ge0}(-1)^k(k+1)z^k\dfrac{1}{\omega^{k+2}}\;,$$ so $$\wp(z)=\dfrac{1}{z^2}+\sum_{k\ge1}(-1)^k(k+1)z^kG_{k+2}(\Lambda)\;,$$ where we have set $$G_k(\Lambda)=\sum_{\omega\in\Lambda\setminus\{0\}}\dfrac{1}{\omega^k}\;,$$ which are called \emph{Eisenstein series} of weight $k$. Since $\Lambda$ is symmetrical, it is clear that $G_k=0$ if $k$ is odd, so the expansion of $\wp(z)$ around $z=0$ is given by $$\wp(z)=\dfrac{1}{z^2}+\sum_{k\ge1}(2k+1)z^{2k}G_{2k+2}(\Lambda)\;.$$ Second, one can show that \emph{all} elliptic functions are simply rational functions in $\wp(z)$ and $\wp'(z)$, so we need not look any further in our construction. 
Third, and this is probably one of the most important properties of $\wp(z)$, it satisfies a \emph{differential equation} of order $1$: the proof is as follows. Using the above Taylor expansion of $\wp(z)$, it is immediate to check that $$F(z)={\wp'(z)}^2-(4\wp(z)^3-g_2(\Lambda)\wp(z)-g_3(\Lambda))$$ has an expansion around $z=0$ beginning with $F(z)=c_1z+\cdots$, where we have set $g_2(\Lambda)=60G_4(\Lambda)$ and $g_3(\Lambda)=140G_6(\Lambda)$. In addition, $F$ is evidently an elliptic function, and since it has no pole at $z=0$ it has no poles on $\Lambda$ hence no poles at all, so it has order $0$. Thus by Proposition \ref{propellf} (4) $F$ is constant, and since by construction it vanishes at $0$ it is identically $0$. Thus $\wp$ satisfies the differential equation $${\wp'(z)}^2=4\wp(z)^3-g_2(\Lambda)\wp(z)-g_3(\Lambda)\;.$$ A fourth and somewhat surprising property of the function $\wp(z)$ is connected to the theory of \emph{elliptic curves}: the above differential equation shows that $(\wp(z),\wp'(z))$ parametrizes the cubic curve $y^2=4x^3-g_2x-g_3$, which is the general equation of an elliptic curve (you do not need to know the theory of elliptic curves for what follows). Thus, if $z_1$ and $z_2$ are in ${\mathbb C}\setminus\Lambda$, the two points $P_i=(\wp(z_i),\wp'(z_i))$ for $i=1$, $2$ are on the curve, hence if we draw the line through these two points (the tangent to the curve if they are equal), it is immediate to see from Proposition \ref{propellf} (3) that the third point of intersection corresponds to the parameter $-(z_1+z_2)$, and can of course be computed as a rational function of the coordinates of $P_1$ and $P_2$. It follows that $\wp(z)$ (and $\wp'(z)$) possesses an \emph{addition formula} expressing $\wp(z_1+z_2)$ in terms of the $\wp(z_i)$ and $\wp'(z_i)$. \begin{exercise} Find this addition formula.
You will have to distinguish the cases $z_1=z_2$, $z_1=-z_2$, and $z_1\ne\pm z_2$.\end{exercise} An interesting corollary of the differential equation for $\wp(z)$, which we will prove in a different way below, is a \emph{recursion} for the Eisenstein series $G_{2k}(\Lambda)$: \begin{proposition} We have the recursion for $k\ge4$: $$(k-3)(2k-1)(2k+1)G_{2k}=3\sum_{2\le j\le k-2}(2j-1)(2(k-j)-1)G_{2j}G_{2(k-j)}\;.$$ \end{proposition} \begin{proof} Taking the derivative of the differential equation and dividing by $2\wp'$ we obtain $\wp''(z)=6\wp(z)^2-g_2(\Lambda)/2$. If we set by convention $G_0(\Lambda)=-1$ and $G_2(\Lambda)=0$, and for notational simplicity omit $\Lambda$ which is fixed, we have $\wp(z)=\sum_{k\ge-1}(2k+1)z^{2k}G_{2k+2}$, so on the one hand $$\wp''(z)=\sum_{k\ge-1}(2k+1)(2k)(2k-1)z^{2k-2}G_{2k+2}\;,$$ and on the other hand $\wp(z)^2=\sum_{K\ge-2}a(K)z^{2K}$ with $$a(K)=\sum_{k_1+k_2=K}(2k_1+1)(2k_2+1)G_{2k_1+2}G_{2k_2+2}\;.$$ Replacing in the differential equation it is immediate to check that the coefficients agree up to $z^2$, and for $K\ge2$ we have the identification $$6\sum_{\substack{k_1+k_2=K\\k_i\ge-1}}(2k_1+1)(2k_2+1)G_{2k_1+2}G_{2k_2+2}=(2K+3)(2K+2)(2K+1)G_{2K+4}$$ which is easily seen to be equivalent to the recursion of the proposition using $G_0=-1$ and $G_2=0$.\qed\end{proof} For instance $$G_8=\dfrac{3}{7}G_4^2\;,\quad G_{10}=\dfrac{5}{11}G_4G_6\;,\quad G_{12}=\dfrac{18G_4^3+25G_6^2}{143}\;,$$ and more generally this implies that $G_{2k}$ is a \emph{polynomial} in $G_4$ and $G_6$ with rational coefficients which are \emph{independent} of the lattice $\Lambda$. As another corollary, we note that if we choose $\omega_2=1$ and $\omega_1=iT$ with $T$ tending to $+\infty$, then the definition $G_{2k}(\Lambda)=\sum_{(m,n)\in{\mathbb Z}^2\setminus\{(0,0)\}}(m\omega_1+n\omega_2)^{-2k}$ implies that $G_{2k}(\Lambda)$ will tend to $\sum_{n\in{\mathbb Z}\setminus\{0\}}n^{-2k}=2\zeta(2k)$, where $\zeta$ is the Riemann zeta function.
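Both the differential equation and the first case $G_8=\frac37G_4^2$ of the recursion are easy to test numerically from truncated lattice sums, as is the degenerate limit just described, which for instance gives $2\zeta(8)=\frac37(2\zeta(4))^2$. A Python sketch (the lattice $\Lambda(1,0.25+1.1i)$ and all names are our arbitrary choices):

```python
import math

def eisenstein(k, w1=1.0, w2=0.25 + 1.1j, N=80):
    # truncated lattice Eisenstein series G_k = sum' 1/omega^k
    s = 0j
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            s += 1.0 / (m * w1 + n * w2) ** k
    return s

def wp_and_dwp(z, w1=1.0, w2=0.25 + 1.1j, N=80):
    # truncated Weierstrass p and p' over a symmetric box; the symmetry
    # makes most of the truncation error cancel in pairs (omega, -omega)
    p = 1.0 / z ** 2
    dp = -2.0 / z ** 3
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            w = m * w1 + n * w2
            p += 1.0 / (z + w) ** 2 - 1.0 / w ** 2
            dp += -2.0 / (z + w) ** 3
    return p, dp

G4, G6, G8 = eisenstein(4), eisenstein(6), eisenstein(8)
g2, g3 = 60 * G4, 140 * G6
z = 0.2 + 0.15j
p, dp = wp_and_dwp(z)
# the differential equation p'^2 = 4 p^3 - g2 p - g3 ...
de_err = abs(dp ** 2 - (4 * p ** 3 - g2 * p - g3)) / abs(dp ** 2)
# ... the first case G_8 = (3/7) G_4^2 of the recursion ...
rec_err = abs(G8 - (3.0 / 7.0) * G4 ** 2)
# ... and the degenerate limit 2 zeta(8) = (3/7) (2 zeta(4))^2
zeta4, zeta8 = math.pi ** 4 / 90, math.pi ** 8 / 9450
zeta_err = abs(2 * zeta8 - (3.0 / 7.0) * (2 * zeta4) ** 2)
```

All three residuals are small: the first two up to truncation error, the last one to machine precision.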
It follows that for all $k\ge2$, $\zeta(2k)$ is a polynomial in $\zeta(4)$ and $\zeta(6)$ with rational coefficients. Of course this is a weak but nontrivial result, since we know that $\zeta(2k)$ is a rational multiple of $\pi^{2k}$. \smallskip To finish this section on elliptic functions and make the transition to modular forms, we write explicitly $\Lambda=\Lambda(\omega_1,\omega_2)$ and by abuse of notation $G_{2k}(\omega_1,\omega_2):=G_{2k}(\Lambda(\omega_1,\omega_2))$, and we consider the dependence of $G_{2k}$ on $\omega_1$ and $\omega_2$. We note two evident facts: first, $G_{2k}(\omega_1,\omega_2)$ is \emph{homogeneous} of degree $-2k$: for any nonzero complex number $\lambda$ we have $G_{2k}(\lambda\omega_1,\lambda\omega_2)=\lambda^{-2k}G_{2k}(\omega_1,\omega_2)$. In particular, $G_{2k}(\omega_1,\omega_2)=\omega_2^{-2k}G_{2k}(\omega_1/\omega_2,1)$. Second, a general ${\mathbb Z}$-basis of $\Lambda$ is given by $(\omega'_1,\omega'_2)=(a\omega_1+b\omega_2,c\omega_1+d\omega_2)$ with $a$, $b$, $c$, $d$ integers such that $ad-bc=\pm1$. If we choose an \emph{oriented} basis such that $\Im(\omega_1/\omega_2)>0$ we in fact have $ad-bc=1$.
Thus, $G_{2k}(a\omega_1+b\omega_2,c\omega_1+d\omega_2)=G_{2k}(\omega_1,\omega_2)$, and using homogeneity this can be written $$(c\omega_1+d\omega_2)^{-2k}G_{2k}\left(\dfrac{a\omega_1+b\omega_2}{c\omega_1+d\omega_2},1\right)=\omega_2^{-2k}G_{2k}\left(\dfrac{\omega_1}{\omega_2},1\right)\;.$$ Thus, if we set $\tau=\omega_1/\omega_2$ and by an additional abuse of notation abbreviate $G_{2k}(\tau,1)$ to $G_{2k}(\tau)$, we have by definition $$G_{2k}(\tau)=\sum_{(m,n)\in{\mathbb Z}^2\setminus\{(0,0)\}}(m\tau+n)^{-2k}\;,$$ and we have shown the following \emph{modularity} property: \begin{proposition} For any $\psmm{a}{b}{c}{d}\in\SL_2({\mathbb Z})$, the group of $2\times2$ integer matrices of determinant $1$, and any $\tau\in{\mathbb C}$ with $\Im(\tau)>0$ we have $$G_{2k}\left(\dfrac{a\tau+b}{c\tau+d}\right)=(c\tau+d)^{2k}G_{2k}(\tau)\;.$$ \end{proposition} This will be our basic definition of (weak) modularity. \section{Modular Forms and Functions} \subsection{Definitions} Let us introduce some notation: $\bullet$ We denote by $\Gamma$ the \emph{modular group $\SL_2({\mathbb Z})$}. Note that properly speaking the modular group should be the group of transformations $\tau\mapsto(a\tau+b)/(c\tau+d)$, which is isomorphic to the quotient of $\SL_2({\mathbb Z})$ by the equivalence relation saying that $M$ and $-M$ are equivalent, but for this course we will stick to this definition. If $\gamma=\psmm{a}{b}{c}{d}$ we will of course write $\gamma(\tau)$ for $(a\tau+b)/(c\tau+d)$. $\bullet$ The \emph{Poincar\'e upper half-plane} $\H$ is the set of complex numbers $\tau$ such that $\Im(\tau)>0$. Since for $\gamma=\psmm{a}{b}{c}{d}\in\Gamma$ we have $\Im(\gamma(\tau))=\Im(\tau)/|c\tau+d|^2$, we see that $\Gamma$ is a group of transformations of $\H$ (more generally so is $\SL_2({\mathbb R})$, there is nothing special about ${\mathbb Z}$). $\bullet$ The completed upper half-plane $\ov{\H}$ is by definition $\ov{\H}=\H\cup\P_1({\mathbb Q})=\H\cup{\mathbb Q}\cup\{i\infty\}$. 
Note that this is \emph{not} the closure in the topological sense, since we do not include any real irrational numbers. \begin{definition} Let $k\in{\mathbb Z}$ and let $F$ be a function from $\H$ to ${\mathbb C}$. \begin{enumerate}\item We will say that $F$ is \emph{weakly modular} of weight $k$ for $\Gamma$ if for all $\gamma=\psmm{a}{b}{c}{d}\in\Gamma$ and all $\tau\in\H$ we have $$F(\gamma(\tau))=(c\tau+d)^kF(\tau)\;.$$ \item We will say that $F$ is a modular \emph{form} if, in addition, $F$ is holomorphic on $\H$ and if $|F(\tau)|$ remains bounded as $\Im(\tau)\to\infty$. \item We will say that $F$ is a modular \emph{cusp form} if it is a modular form such that $F(\tau)$ tends to $0$ as $\Im(\tau)\to\infty$. \end{enumerate} \end{definition} We make a number of immediate but important remarks. \begin{remarks}{\rm \begin{enumerate} \item The Eisenstein series $G_{2k}(\tau)$ are basic examples of modular forms of weight $2k$, which are not cusp forms since $G_{2k}(\tau)$ tends to $2\zeta(2k)\ne0$ when $\Im(\tau)\to\infty$. \item With the present definition, it is clear that there are no nonzero modular forms of \emph{odd weight} $k$, since if $k$ is odd we have $(-c\tau-d)^k=-(c\tau+d)^k$ and $\gamma(\tau)=(-\gamma)(\tau)$. However, when considering modular forms defined on \emph{subgroups} of $\Gamma$ there may be modular forms of odd weight, so we keep the above definition. \item Applying modularity to $\gamma=T=\psmm{1}{1}{0}{1}$ we see that $F(\tau+1)=F(\tau)$, hence $F$ has a Fourier series expansion, and if $F$ is holomorphic, by the remark made above in the section on Fourier series, we have an expansion $F(\tau)=\sum_{n\in{\mathbb Z}}a(n)e^{2\pi in\tau}$ with $a(n)=e^{2\pi ny}\int_0^1F(x+iy)e^{-2\pi inx}\,dx$ for any $y>0$. Thus, if $|F(x+iy)|$ remains bounded as $y\to\infty$ it follows that $|a(n)|\le Be^{2\pi ny}$ for a suitable constant $B$ and all $y>0$, so we deduce that $a(n)=0$ whenever $n<0$ since then $e^{2\pi ny}\to0$ as $y\to\infty$.
Thus if $F$ is a modular \emph{form} we have $F(\tau)=\sum_{n\ge0}a(n)e^{2\pi in\tau}$, hence $\lim_{\Im(\tau)\to\infty}F(\tau)=a(0)$, so $F$ is a cusp form if and only if $a(0)=0$. \end{enumerate}} \end{remarks} \begin{definition} We will denote by $M_k(\Gamma)$ the vector space of modular forms of weight $k$ on $\Gamma$ ($M$ for Modular of course), and by $S_k(\Gamma)$ the subspace of cusp forms ($S$ for the German Spitzenform, meaning exactly cusp form).\end{definition} Notation: for any matrix $\gamma=\psmm{a}{b}{c}{d}$ with $ad-bc>0$, we will define the weight $k$ \emph{slash operator} $F|_k\gamma$ by $$F|_k\gamma(\tau)=(ad-bc)^{k/2}(c\tau+d)^{-k}F(\gamma(\tau))\;.$$ The reason for the factor $(ad-bc)^{k/2}$ is that $\lambda\gamma$ has the same action on $\H$ as $\gamma$, so this makes the formula homogeneous. For instance, $F$ is weakly modular of weight $k$ if and only if $F|_k\gamma=F$ for all $\gamma\in\Gamma$. We will also use the universal modular form convention of writing $q$ for $e^{2\pi i\tau}$, so that a Fourier expansion is of the type $F(\tau)=\sum_{n\ge0}a(n)q^n$. We use the additional convention that if $\alpha$ is any complex number, $q^{\alpha}$ will mean $e^{2\pi i\tau\alpha}$. \begin{exercise} Let $F(\tau)=\sum_{n\ge0}a(n)q^n\in M_k(\Gamma)$, and let $\gamma=\psmm{A}{B}{C}{D}$ be a matrix in $M_2^+({\mathbb Z})$, i.e., $A$, $B$, $C$, and $D$ are integers and $\Delta=\det(\gamma)=AD-BC>0$. Set $g=\gcd(A,C)$, let $u$ and $v$ be such that $uA+vC=g$, set $b=uB+vD$, and finally let $\zeta_{\Delta}=e^{2\pi i/\Delta}$. Prove the matrix identity $$\begin{pmatrix}A&B\\C&D\end{pmatrix}=\begin{pmatrix}A/g&-v\\C/g&u\end{pmatrix}\begin{pmatrix}g&b\\0&\Delta/g\end{pmatrix}\;,$$ and deduce that we have the more general Fourier expansion $$F|_k\gamma(\tau)=\dfrac{g^{k}}{\Delta^{k/2}}\sum_{n\ge0}\zeta_{\Delta}^{nbg}a(n)q^{ng^2/\Delta}\;,$$ which is of course equal to $F$ if $\Delta=1$, since then $g=1$.
\end{exercise} \subsection{Basic Results} The first fundamental result in the theory of modular forms is that these spaces are \emph{finite-dimensional}. The proof uses exactly the same method that we have used to prove the basic results on elliptic functions. We first note that there is a ``fundamental domain'' (which replaces the fundamental parallelogram) for the action of $\Gamma$ on $\H$, given by $$\mathfrak F=\{\tau\in\H,\ -1/2\le\Re(\tau)<1/2,\ |\tau|\ge1\}\;.$$
\begin{figure}
\centering
\begin{tikzpicture}[scale=2]
\def\b{0.866025403784439};
\tikzstyle{every node}=[font=\footnotesize]
\coordinate (Origin) at (0,0);
\coordinate (XAxisMin) at (-1.5,0);
\coordinate (XAxisMax) at (1.5,0);
\coordinate (YAxisMin) at (0,0);
\coordinate (YAxisMax) at (0,3);
\coordinate (e1) at (0.5,\b);
\coordinate (e2) at (-0.5,\b);
\coordinate (f1) at (0.5,\b+3.5);
\coordinate (f2) at (-0.5,\b+3.5);
\clip (-1,-0.3) rectangle (2,2.5);
\draw (0.5,-0.05) -- (0.5,0.05);
\draw (0.5,0) node [below] {$\frac{1}{2}$};
\draw (-0.5,-0.05) -- (-0.5,0.05);
\draw (-0.55,0) node [below] {$-\frac{1}{2}$};
\draw [thin, gray,-latex] (XAxisMin) -- (XAxisMax);
\draw [thin, gray,-latex] (YAxisMin) -- (YAxisMax);
\draw (0,1.5) node [above right,font=\normalsize] {$\mathfrak F$};
\shade [gray,bottom color=transparent!100, top color=transparent!5,fill opacity=0.02] (f1) -- (e1) arc (60:120:1) -- (f2) -- cycle;
\draw [black] (f1) -- (e1) arc (60:120:1) -- (f2);
\end{tikzpicture}
\caption{The fundamental domain, $\mathfrak F$, of $\Gamma$}
\label{fig:fundamental_domain_gamma}
\end{figure}
The proof that this is a fundamental domain, in other words that any $\tau\in\H$ has a unique image by $\Gamma$ belonging to $\mathfrak F$, is not very difficult and will be omitted. We then integrate $F'(z)/F(z)$ along the boundary of $\mathfrak F$, and using modularity we obtain the following result: \begin{theorem}\label{thmval} Let $F\in M_k(\Gamma)$ be a nonzero modular form.
For any $\tau_0\in\H$, denote by $v_{\tau_0}(F)$ the \emph{valuation} of $F$ at $\tau_0$, i.e., the unique integer $v$ such that $F(\tau)/(\tau-\tau_0)^v$ is holomorphic and nonzero at $\tau_0$, and if $F(\tau)=G(e^{2\pi i\tau})$, define $v_{i\infty}(F)=v_0(G)$ (i.e., the number of initial vanishing Fourier coefficients of $F$). We have the formula $$v_{i\infty}(F)+\sum_{\tau\in\mathfrak F}\dfrac{v_{\tau}(F)}{e_{\tau}}=\dfrac{k}{12}\;,$$ where $e_i=2$, $e_{\rho}=3$, and $e_{\tau}=1$ otherwise ($\rho=e^{2\pi i/3}$). \end{theorem} This theorem has many important consequences but, as already noted, the most important is that it implies that $M_k(\Gamma)$ is finite-dimensional. First, it trivially implies that $k\ge0$, i.e., there are no modular \emph{forms} of negative weight. In addition it easily implies the following: \begin{corollary}\label{cordim} Let $k\ge0$ be an even integer. We have \begin{align*}\dim(M_k(\Gamma))&=\begin{cases} \lfloor k/12\rfloor&\text{\quad if $k\equiv2\pmod{12}$\;,}\\ \lfloor k/12\rfloor+1&\text{\quad if $k\not\equiv2\pmod{12}$\;,} \end{cases}\\ \dim(S_k(\Gamma))&=\begin{cases} 0&\text{\quad if $k<12$\;,}\\ \lfloor k/12\rfloor-1&\text{\quad if $k\ge12$, $k\equiv2\pmod{12}$\;,}\\ \lfloor k/12\rfloor&\text{\quad if $k\ge12$, $k\not\equiv2\pmod{12}$\;.} \end{cases}\end{align*} \end{corollary} Since the product of two modular forms is clearly a modular form (of weight the sum of the two weights), it follows that $M_{*}(\Gamma)=\bigoplus_kM_k(\Gamma)$ (and similarly $S_{*}(\Gamma)$) is an algebra, whose structure is easily described: \begin{corollary}\label{core4e6} We have $M_{*}(\Gamma)={\mathbb C}[G_4,G_6]$, and $S_{*}(\Gamma)=\Delta M_{*}(\Gamma)$, where $\Delta$ is the unique generator of the one-dimensional vector space $S_{12}(\Gamma)$ whose Fourier expansion begins with $\Delta=q+O(q^2)$.
\end{corollary} Thus, for instance, $M_0(\Gamma)={\mathbb C}$, $M_2(\Gamma)=\{0\}$, $M_4(\Gamma)={\mathbb C} G_4$, $M_6(\Gamma)={\mathbb C} G_6$, $M_8(\Gamma)={\mathbb C} G_8={\mathbb C} G_4^2$, $M_{10}(\Gamma)={\mathbb C} G_{10}={\mathbb C} G_4G_6$, $$M_{12}(\Gamma)={\mathbb C} G_{12}\oplus{\mathbb C}\Delta={\mathbb C} G_4^3\oplus{\mathbb C} G_6^2\;.$$ In particular, we recover the fact proved differently that $G_8$ is a multiple of $G_4^2$ (the exact multiple being obtained by computing the Fourier expansions), $G_{10}$ is a multiple of $G_4G_6$, and $G_{12}$ is a linear combination of $G_4^3$ and $G_6^2$. Also, we see that $\Delta$ is a linear combination of $G_4^3$ and $G_6^2$ (we will see this more precisely below). A basic result on the structure of the modular group $\Gamma$ is the following: \begin{proposition} Set $T=\psmm{1}{1}{0}{1}$, which acts on $\H$ by the unit translation $\tau\mapsto\tau+1$, and $S=\psmm{0}{-1}{1}{0}$ which acts on $\H$ by the symmetry-inversion $\tau\mapsto-1/\tau$. Then $\Gamma$ is generated by $S$ and $T$, with relations generated by $S^2=-I$ and $(ST)^3=-I$ ($I$ the identity matrix). \end{proposition} There are several (easy) proofs of this fundamental result, which we do not give. Simply note that this proposition is essentially equivalent to the fact that the set $\mathfrak F$ described above is indeed a fundamental domain. A consequence of this proposition is that to check whether some function $F$ has the modularity property, it is sufficient to check that $F(\tau+1)=F(\tau)$ and $F(-1/\tau)=\tau^kF(\tau)$. \begin{exercise}\label{ex:Bol} (Bol's identity). Let $F$ be any continuous function defined on the upper half-plane $\H$, and define $I_0(F,a)=F$ and for any integer $m\ge1$ and $a\in\ov{\H}$ set: $$I_m(F,a)(\tau)=\int_a^{\tau}\dfrac{(\tau-z)^{m-1}}{(m-1)!}F(z)\,dz\;.$$ \begin{enumerate}\item Show that $I_m(F,a)'(\tau)=I_{m-1}(F,a)(\tau)$, so that $I_m(F,a)$ is an $m$th antiderivative of $F$.
\item Let $\gamma\in\Gamma$, and assume that $k\ge1$ is an integer. Show that $$I_{k-1}(F,a)|_{2-k}\gamma=I_{k-1}(F|_k\gamma,\gamma^{-1}(a))\;.$$ \item Deduce that if we set $F^*_a=I_{k-1}(F,a)$ then $$D^{(k-1)}(F^*_a|_{2-k}\gamma)=F|_k\gamma\;,$$ where $D=(1/2\pi i)d/d\tau=qd/dq$ is the basic differential operator that we will use (see Section \ref{sec:deriv}). \item Assume now that $F$ is weakly modular of weight $k\ge1$ and holomorphic on $\H$ (in particular if $F\in M_k(\Gamma)$, but $|F|$ could be unbounded as $\Im(\tau)\to\infty$). Show that $$(F^*_a|_{2-k}\gamma)(\tau)=F^*_a(\tau)+P_{k-2}(\tau)\;,$$ where $P_{k-2}$ is the polynomial of degree less than or equal to $k-2$ given by $$P_{k-2}(X)=\int_{\gamma^{-1}(a)}^a\dfrac{(X-z)^{k-2}}{(k-2)!}F(z)\,dz\;.$$ \end{enumerate} \end{exercise} What this exercise shows is that the $(k-1)$st derivative of some function which behaves modularly in weight $2-k$ behaves modularly in weight $k$, and conversely that the $(k-1)$st antiderivative of some function which behaves modularly in weight $k$ behaves modularly in weight $2-k$ up to addition of a polynomial of degree at most $k-2$. This duality between weights $k$ and $2-k$ is in fact a consequence of the \emph{Riemann--Roch theorem}. Note also that this exercise is the beginning of the fundamental theories of \emph{periods} and of \emph{modular symbols}. Also, it is not difficult to generalize Bol's identity.
For instance, applied to the Eisenstein series $G_4$ and using Proposition \ref{propg4} below we obtain: \begin{proposition}\label{propg4star}\begin{enumerate} \item Set $$F_4^*(\tau)=-\dfrac{\pi^3}{180}\left(\dfrac{\tau}{i}\right)^3+\sum_{n\ge1}\sigma_{-3}(n)q^n\;.$$ We have the functional equation $$\tau^2F_4^*(-1/\tau)=F_4^*(\tau)+\dfrac{\zeta(3)}{2}(1-\tau^2)-\dfrac{\pi^3}{36}\dfrac{\tau}{i}\;.$$ \item Equivalently, if we set $$F_4^{**}(\tau)=-\dfrac{\pi^3}{180}\left(\dfrac{\tau}{i}\right)^3-\dfrac{\pi^3}{72}\left(\dfrac{\tau}{i}\right)+\dfrac{\zeta(3)}{2}+\sum_{n\ge1}\sigma_{-3}(n)q^n$$ we have the functional equation $$F_4^{**}(-1/\tau)=\tau^{-2}F_4^{**}(\tau)\;.$$ \end{enumerate} \end{proposition} Note that the appearance of $\zeta(3)$ comes from the fact that, up to a multiplicative constant, the $L$-function associated to $G_4$ is equal to $\zeta(s)\zeta(s-3)$, whose value at $s=3$ is equal to $-\zeta(3)/2$. \subsection{The Scalar Product} We begin with the following exercise: \begin{exercise}\label{ex:dmu}\begin{enumerate} \item Denote by $d\mu=dxdy/y^2$ a measure on $\H$, where as usual $x$ and $y$ are the real and imaginary part of $\tau\in\H$. Show that this measure is invariant under $\SL_2({\mathbb R})$. \item Let $f$ and $g$ be in $M_k(\Gamma)$. Show that the function $F(\tau)=f(\tau)\ov{g(\tau)}y^k$ is invariant under the modular group $\Gamma$. \end{enumerate} \end{exercise} It follows in particular from this exercise that if $F(\tau)$ is any integrable function which is invariant by the modular group $\Gamma$, the integral $\int_{\Gamma\backslash\H}F(\tau)d\mu$ makes sense if it converges. Since $\mathfrak F$ is a fundamental domain for the action of $\Gamma$ on $\H$, this can also be written $\int_{\mathfrak F}F(\tau)d\mu$. Thus it follows from the second part that we can define $$<f,g>=\int_{\Gamma\backslash\H}f(\tau)\ov{g(\tau)}y^k\,\dfrac{dxdy}{y^2}\;,$$ whenever this converges.
It is immediate to show that a necessary and sufficient condition for convergence is that at least one of $f$ and $g$ be a cusp form, i.e., lie in $S_k(\Gamma)$. In particular it is clear that this defines a \emph{scalar product} on $S_k(\Gamma)$ called the Petersson scalar product. In addition, any cusp form in $S_k(\Gamma)$ is \emph{orthogonal} to $G_k$ with respect to this scalar product. It is instructive to give a sketch of the simple proof of this fact: \begin{proposition}\label{prop:unfold} If $f\in S_k(\Gamma)$ we have $<G_k,f>=0$.\end{proposition} \begin{proof} Recall that $G_k(\tau)=\sum_{(m,n)\in{\mathbb Z}^2\setminus\{(0,0)\}}(m\tau+n)^{-k}$. We split the sum according to the GCD of $m$ and $n$: we let $d=\gcd(m,n)$, so that $m=dm_1$ and $n=dn_1$ with $\gcd(m_1,n_1)=1$. It follows that $$G_k(\tau)=2\sum_{d\ge1}d^{-k}E_k(\tau)=2\zeta(k)E_k(\tau)\;,$$ where $E_k(\tau)=(1/2)\sum_{\gcd(m,n)=1}(m\tau+n)^{-k}$. We thus need to prove that $<E_k,f>=0$. On the other hand, denote by $\Gamma_\infty$ the group generated by $T$, i.e., translations $\psmm{1}{b}{0}{1}$ for $b\in{\mathbb Z}$. This acts by left multiplication on $\Gamma$, and it is immediate to check that a system of representatives for this action is given by matrices $\psmm{u}{v}{m}{n}$, where $\gcd(m,n)=1$ and $u$ and $v$ are chosen arbitrarily (but only once for each pair $(m,n)$) such that $un-vm=1$. It follows that we can write $$E_k(\tau)=\sum_{\gamma\in\Gamma_\infty\backslash\Gamma}(m\tau+n)^{-k}\;,$$ where it is understood that $\gamma=\psmm{u}{v}{m}{n}$ (the factor $1/2$ has disappeared since $\gamma$ and $-\gamma$ have the same action on $\H$).
Thus \begin{align*}<E_k,f>&=\int_{\Gamma\backslash\H}\sum_{\gamma\in\Gamma_\infty\backslash\Gamma}(m\tau+n)^{-k}\ov{f(\tau)}y^k\,\dfrac{dxdy}{y^2}\\ &=\sum_{\gamma\in\Gamma_\infty\backslash\Gamma}\int_{\Gamma\backslash\H}(m\tau+n)^{-k}\ov{f(\tau)}y^k\,\dfrac{dxdy}{y^2}\;.\end{align*} Now note that by modularity $f(\tau)=(m\tau+n)^{-k}f(\gamma(\tau))$, and since $\Im(\gamma(\tau))=\Im(\tau)/|m\tau+n|^2$ it follows that $$(m\tau+n)^{-k}\ov{f(\tau)}y^k=\ov{f(\gamma(\tau))}\Im(\gamma(\tau))^k\;.$$ Thus, since $d\mu=dxdy/y^2$ is an invariant measure we have \begin{align*}<E_k,f>&=\sum_{\gamma\in\Gamma_\infty\backslash\Gamma}\int_{\Gamma\backslash\H}\ov{f(\gamma(\tau))}\Im(\gamma(\tau))^kd\mu =\int_{\Gamma_\infty\backslash\H}\ov{f(\tau)}y^k\,\dfrac{dxdy}{y^2}\;.\end{align*} Since $\Gamma_\infty$ is simply the group of integer translations, a fundamental domain for $\Gamma_\infty\backslash\H$ is simply the vertical strip $[0,1]\times[0,\infty[$, so that $$<E_k,f>=\int_0^\infty y^{k-2}dy\int_0^1\ov{f(x+iy)}dx\;,$$ which trivially vanishes since the inner integral is simply the conjugate of the constant term in the Fourier expansion of $f$, which is $0$ since $f\in S_k(\Gamma)$.\end{proof} The above procedure (replacing the complicated fundamental domain of $\Gamma\backslash\H$ by the trivial one of $\Gamma_\infty\backslash\H$) is very common in the theory of modular forms and is called \emph{unfolding}. \subsection{Fourier Expansions} The Fourier expansions of the Eisenstein series $G_{2k}(\tau)$ are easy to compute. The result is the following: \begin{proposition}\label{propg4} For $k\ge4$ even we have the Fourier expansion $$G_k(\tau)=2\zeta(k)+2\dfrac{(2\pi i)^{k}}{(k-1)!}\sum_{n\ge1}\sigma_{k-1}(n)q^n\;,$$ where $\sigma_{k-1}(n)=\sum_{d\mid n,\ d>0}d^{k-1}$. 
\end{proposition} Since we know that when $k$ is even $2\zeta(k)=-(2\pi i)^kB_k/k!$, where $B_k$ is the $k$-th Bernoulli number defined by $$\dfrac{t}{e^t-1}=\sum_{k\ge0}\dfrac{B_k}{k!}t^k\;,$$ it follows that $G_k=2\zeta(k)E_k$, with $$E_k(\tau)=1-\dfrac{2k}{B_k}\sum_{n\ge1}\sigma_{k-1}(n)q^n\;.$$ This is the normalization of Eisenstein series that we will use. For instance \begin{align*} E_4(\tau)&=1+240\sum_{n\ge1}\sigma_3(n)q^n\;,\\ E_6(\tau)&=1-504\sum_{n\ge1}\sigma_5(n)q^n\;,\\ E_8(\tau)&=1+480\sum_{n\ge1}\sigma_7(n)q^n\;.\end{align*} In particular, the relations given above which follow from the dimension formula become much simpler and are obtained simply by looking at the first terms in the Fourier expansion: $$E_8=E_4^2\;,\quad E_{10}=E_4E_6\;,\quad E_{12}=\dfrac{441E_4^3+250E_6^2}{691}\;,\quad\Delta=\dfrac{E_4^3-E_6^2}{1728}\;.$$ Note that the relation $E_4^2=E_8$ (and the others) implies a highly nontrivial relation between the sum of divisors functions: if we set by convention $\sigma_3(0)=1/240$, so that $E_4(\tau)=240\sum_{n\ge0}\sigma_3(n)q^n$, we have $$E_8(\tau)=E_4^2(\tau)=240^2\sum_{n\ge0}q^n\sum_{0\le m\le n}\sigma_3(m)\sigma_3(n-m)\;,$$ so that by identification $\sigma_7(n)=120\sum_{0\le m\le n}\sigma_3(m)\sigma_3(n-m)$, so $$\sigma_7(n)=\sigma_3(n)+120\sum_{1\le m\le n-1}\sigma_3(m)\sigma_3(n-m)\;.$$ It is quite difficult (but not impossible) to prove this directly, i.e., without using at least indirectly the theory of modular forms.
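The reader can at least verify the convolution identity numerically; a short Python sketch (helper names ours):

```python
def sigma(k, n):
    # sum of the k-th powers of the positive divisors of n
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

# sigma_7(n) = sigma_3(n) + 120 * sum_{m=1}^{n-1} sigma_3(m) sigma_3(n - m),
# the identity forced by E_4^2 = E_8
checked = 0
for n in range(1, 60):
    conv = sum(sigma(3, m) * sigma(3, n - m) for m in range(1, n))
    assert sigma(7, n) == sigma(3, n) + 120 * conv
    checked += 1
```

For instance, $n=2$ gives $\sigma_7(2)=129=9+120\cdot1$, exactly as predicted.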
\begin{exercise} Find a similar relation for $\sigma_9(n)$ using $E_{10}=E_4E_6$.\end{exercise} This type of reasoning is one of the reasons for which the theory of modular forms is so important (and lots of fun!): if you have a modular form $F$, you can usually express it in terms of a completely explicit basis of the space to which it belongs since spaces of modular forms are \emph{finite-dimensional} (in the present example, the space is one-dimensional), and deduce highly nontrivial relations for the Fourier coefficients. We will see a further example of this below for the number $r_k(n)$ of representations of an integer $n$ as a sum of $k$ squares. \begin{exercise}\begin{enumerate}\item Prove that for any $k\in{\mathbb C}$ we have the identity $$\sum_{n\ge1}\sigma_k(n)q^n=\sum_{n\ge1}\dfrac{n^kq^n}{1-q^n}\;,$$ the right-hand side being called a \emph{Lambert series}. \item Set $F(k)=\sum_{n\ge1}n^k/(e^{2\pi n}-1)$. Using the Fourier expansions given above, compute explicitly $F(5)$ and $F(9)$. \item Using Proposition \ref{propg4star}, compute explicitly $F(-3)$. \item Using Proposition \ref{prop:E2} below, compute explicitly $F(1)$. \end{enumerate}\end{exercise} Note that in this exercise we only compute $F(k)$ for $k\equiv1\pmod4$. It is also possible but more difficult to compute $F(k)$ for $k\equiv3\pmod4$. For instance we have: $$F(3)=\dfrac{\Gamma(1/4)^8}{80(2\pi)^6}-\dfrac{1}{240}\;.$$ \subsection{Obtaining Modular Forms by Averaging} We have mentioned at the beginning of this course that one of the ways to obtain functions satisfying functional equations is to use \emph{averaging} over a suitable group or set: we have seen this for periodic functions in the form of the Poisson summation formula, and for doubly-periodic functions in the construction of the Weierstrass $\wp$-function. We can do the same for modular forms, but we must be careful in two different ways. 
First, we do not want \emph{invariance} by $\Gamma$, but we want an automorphy factor $(c\tau+d)^k$. This is easily dealt with by noting that $(d/d\tau)(\gamma(\tau))=(c\tau+d)^{-2}$: indeed, if $\phi$ is some function on $\H$ we can define $$F(\tau)=\sum_{\gamma\in\Gamma}\phi(\gamma(\tau))((d/d\tau)(\gamma(\tau)))^{k/2}\;.$$ \begin{exercise} Ignoring all convergence questions, by using the chain rule $(f\circ g)'=(f'\circ g)g'$ show that for all $\delta=\psmm{A}{B}{C}{D}\in\Gamma$ we have $$F(\delta(\tau))=(C\tau+D)^kF(\tau)\;.$$ \end{exercise} But the second important way in which we must be careful is that the above construction rarely converges. There are, however, examples where it does converge: \begin{exercise} Let $\phi(\tau)=\tau^{-m}$, so that $$F(\tau)=\sum_{\gamma=\psmm{a}{b}{c}{d}\in\Gamma}\dfrac{1}{(a\tau+b)^m(c\tau+d)^{k-m}}\;.$$ Show that if $2\le m\le k-2$ and $m\ne k/2$ this series converges normally on any compact subset of $\H$ (i.e., it is majorized by a convergent series with positive terms), so defines a modular form in $M_k(\Gamma)$. \end{exercise} Note that the series converges also for $m=k/2$, but this is more difficult. One of the essential reasons for non-convergence of the function $F$ is the trivial observation that for a given pair of coprime integers $(c,d)$ there are infinitely many elements $\gamma\in\Gamma$ having $(c,d)$ as their second row. Thus in general it seems more reasonable to define $$F(\tau)=\sum_{\gcd(c,d)=1}\phi(\gamma_{c,d}(\tau))(c\tau+d)^{-k}\;,$$ where $\gamma_{c,d}$ is \emph{any fixed} matrix in $\Gamma$ with second row equal to $(c,d)$. However, we need this to make sense: if $\gamma_{c,d}=\psmm{a}{b}{c}{d}\in\Gamma$ is one such matrix, it is clear that the general matrix having second row equal to $(c,d)$ is $T^n\psmm{a}{b}{c}{d}=\psmm{a+nc}{b+nd}{c}{d}$, and as usual $T=\psmm{1}{1}{0}{1}$ is translation by $1$: $\tau\mapsto\tau+1$.
Thus, an essential necessary condition for our series to make any kind of sense is that the function $\phi$ be \emph{periodic} of period~$1$. The simplest such function is of course the constant function $1$: \begin{exercise} (See the proof of Proposition \ref{prop:unfold}.) Show that $$F(\tau)=\sum_{\gcd(c,d)=1}(c\tau+d)^{-k}=2E_k(\tau)\;,$$ where $E_k$ is the normalized Eisenstein series defined above. \end{exercise} But by the theory of Fourier series, we know that periodic functions of period $1$ are (infinite) linear combinations of the functions $e^{2\pi i n\tau}$. This leads to the definition of \emph{Poincar\'e series}: $$P_k(n;\tau)=\dfrac{1}{2}\sum_{\gcd(c,d)=1}\dfrac{e^{2\pi in\gamma_{c,d}(\tau)}}{(c\tau+d)^k}\;,$$ where we note that we can choose any matrix $\gamma_{c,d}$ with bottom row $(c,d)$ since the function $e^{2\pi in\tau}$ is $1$-periodic, so that $P_k(n;\tau)\in M_k(\Gamma)$. \begin{exercise} Assume that $k\ge4$ is even. \begin{enumerate} \item Show that if $n<0$ the series defining $P_k$ diverges (wildly in fact). \item Note that $P_k(0;\tau)=E_k(\tau)$, so that $\lim_{\tau\to i\infty}P_k(0;\tau)=1$. Show that if $n>0$ the series converges normally and that we have $\lim_{\tau\to i\infty}P_k(n;\tau)=0$. Thus in fact $P_k(n;\tau)\in S_k(\Gamma)$ if $n>0$. \item By using the same \emph{unfolding method} as in Proposition \ref{prop:unfold}, show that if $f=\sum_{n\ge0}a(n)q^n\in M_k(\Gamma)$ and $n>0$ we have $$<P_k(n),f>=\dfrac{(k-2)!}{(4\pi n)^{k-1}}a(n)\;.$$ \end{enumerate} \end{exercise} It is easy to show that in fact the $P_k(n)$ \emph{generate} $S_k(\Gamma)$. We can also compute their \emph{Fourier expansions} as we have done for $E_k$, but they involve Bessel functions and Kloosterman sums. \subsection{The Ramanujan Delta Function} Recall that by definition $\Delta$ is the generator of the $1$-dimensional space $S_{12}(\Gamma)$ whose Fourier coefficient of $q^1$ is normalized to be equal to $1$. 
By a simple computation, we find the first terms in the Fourier expansion of $\Delta$: $$\Delta(\tau)=q-24q^2+252q^3-1472q^4+\cdots\;,$$ with no apparent formula for the coefficients. The $n$th coefficient is denoted $\tau(n)$ (not to be confused with $\tau\in\H$), and is called Ramanujan's tau function, and $\Delta$ itself is called Ramanujan's Delta function. Of course, using $\Delta=(E_4^3-E_6^2)/1728$ and expanding the powers, one can give a complicated but explicit formula for $\tau(n)$ in terms of the functions $\sigma_3$ and $\sigma_5$, but this is far from being the best way to compute them. In fact, the following exercise already gives a much better method. \begin{exercise}\label{prob1} Let $D$ be the differential operator $(1/(2\pi i))d/d\tau=qd/dq$. \begin{enumerate}\item Show that the function $F=4E_4D(E_6)-6E_6D(E_4)$ is a modular form of weight $12$, then by looking at its constant term show that it is a cusp form, and finally compute the constant $c$ such that $F=c\cdot\Delta$. \item Deduce the formula $$\tau(n)=\dfrac{n}{12}(5\sigma_3(n)+7\sigma_5(n))+70\sum_{1\le m\le n-1}(2n-5m)\sigma_3(m)\sigma_5(n-m)\;.$$ \item Deduce in particular the congruences $\tau(n)\equiv n\sigma_5(n)\equiv n\sigma_1(n)\pmod5$ and $\tau(n)\equiv n\sigma_3(n)\pmod7$. \end{enumerate} \end{exercise} Although there are much faster methods, this is already a very reasonable way to compute $\tau(n)$. The cusp form $\Delta$ is one of the most important functions in the theory of modular forms. Its first main property, which is not at all apparent from its definition, is that it has a \emph{product expansion}: \begin{theorem} We have $$\Delta(\tau)=q\prod_{n\ge1}(1-q^n)^{24}\;.$$ \end{theorem} \begin{proof} We are not going to give a complete proof, but sketch a method which is one of the most natural to obtain the result. We start backwards, from the product $R(\tau)$ on the right-hand side.
The logarithm transforms products into sums, but in the case of \emph{functions} $f$, the \emph{logarithmic derivative} $f'/f$ (more precisely $D(f)/f$, where $D=qd/dq$) also does this, and it is also more convenient. We have $$D(R)/R=1-24\sum_{n\ge1}\dfrac{nq^n}{1-q^n}=1-24\sum_{n\ge1}\sigma_1(n)q^n$$ as is easily seen by expanding $1/(1-q^n)$ as a geometric series. This is exactly the case $k=2$ of the Eisenstein series $E_k$, which we have excluded from our discussion for convergence reasons, so we come back to our series $G_2$ (we will divide by the normalizing factor $2\zeta(2)=\pi^2/3$ at the end), and introduce a convergence factor due to Hecke, setting $$G_{2,s}(\tau)=\sum_{(m,n)\in{\mathbb Z}^2\setminus\{(0,0)\}}(m\tau+n)^{-2}|m\tau+n|^{-2s}\;.$$ As above this converges for $\Re(s)>0$, satisfies $$G_{2,s}(\gamma(\tau))=(c\tau+d)^2|c\tau+d|^{2s}G_{2,s}(\tau)$$ hence in particular is periodic of period $1$. It is straightforward to compute its Fourier expansion, which we will not do here, and the Fourier expansion shows that $G_{2,s}$ has an \emph{analytic continuation} to the whole complex plane. In particular, the limit as $s\to0$ makes sense; if we denote it by $G_2^*(\tau)$, by continuity it will of course satisfy $G_2^*(\gamma(\tau))=(c\tau+d)^2G_2^*(\tau)$, and the analytic continuation of the Fourier expansion that has been computed gives $$G_2^*(\tau)=\dfrac{\pi^2}{3}\left(1-\dfrac{3}{\pi\Im(\tau)}-24\sum_{n\ge1}\sigma_1(n)q^n\right)\;.$$ Note the essential fact that there is now a \emph{nonanalytic term} $3/(\pi\Im(\tau))$. We will of course set the following definition: \begin{definition} We define $$E_2(\tau)=1-24\sum_{n\ge1}\sigma_1(n)q^n\text{\quad and\quad}E_2^*(\tau)=E_2(\tau)-\dfrac{3}{\pi\Im(\tau)}\;.$$ \end{definition} Thus $E_2(\tau)=D(R)/R$, $G_2^*(\tau)=(\pi^2/3)E_2^*(\tau)$, and we have the following: \begin{proposition}\label{prop:E2} For any $\gamma=\psmm{a}{b}{c}{d}\in\Gamma$ we have $E_2^*(\gamma(\tau))=(c\tau+d)^2E_2^*(\tau)$.
Equivalently, $$E_2(\gamma(\tau))=(c\tau+d)^2E_2(\tau)+\dfrac{12}{2\pi i}c(c\tau+d)\;.$$ \end{proposition} \begin{proof} The first result has been seen above, and the second follows from the formula $\Im(\gamma(\tau))=\Im(\tau)/|c\tau+d|^2$.\qed\end{proof} \begin{exercise}\label{ex:e2} Show that $$E_2(\tau)=-24\left(-\dfrac{1}{24}+\sum_{m\ge1}\dfrac{m}{q^{-m}-1}\right)\;.$$ \end{exercise} {\it Proof of the theorem. \/} We can now prove the theorem on the product expansion of $\Delta$: noting that $(d/d\tau)\gamma(\tau)=1/(c\tau+d)^2$, the above formulas imply that if we set $S=R(\gamma(\tau))$ we have \begin{align*}\dfrac{D(S)}{S}&=\dfrac{D(R)}{R}(\gamma(\tau))(d/d\tau)(\gamma(\tau))\\ &=(c\tau+d)^{-2}E_2(\gamma(\tau))=E_2(\tau)+\dfrac{12}{2\pi i}\dfrac{c}{c\tau+d}\\ &=\dfrac{D(R)}{R}(\tau)+12\dfrac{D(c\tau+d)}{c\tau+d}\;.\end{align*} By integrating and exponentiating, it follows that $$R(\gamma(\tau))=(c\tau+d)^{12}R(\tau)\;,$$ and since clearly $R$ is holomorphic on $\H$ and tends to $0$ as $\Im(\tau)\to\infty$ (i.e., as $q\to0$), it follows that $R$ is a cusp form of weight $12$ on $\Gamma$, and since $S_{12}(\Gamma)$ is $1$-dimensional and the coefficient of $q^1$ in $R$ is $1$, we have $R=\Delta$, proving the theorem.\qed\end{proof} \begin{exercise} We have shown in passing that $D(\Delta)=E_2\Delta$. Expanding the Fourier expansion of both sides, show that we have the recursion $$(n-1)\tau(n)=-24\sum_{1\le m\le n-1}\sigma_1(m)\tau(n-m)\;.$$ \end{exercise} \begin{exercise} \begin{enumerate} \item Let $F\in M_k(\Gamma)$, and for some \emph{squarefree} integer $N$ set $$G(\tau)=\sum_{d\mid N}\mu(d)d^{k/2}F(d\tau)\;,$$ where $\mu$ is the M\"obius function. Show that $G|_kW_N=\mu(N)G$, where $W_N=\psmm{0}{-1}{N}{0}$ is the so-called \emph{Fricke involution}. \item Show that if $N>1$ the same result is true for $F=E_2$, although $E_2$ is only quasi-modular. \item Deduce that if $\mu(N)=(-1)^{k/2-1}$ we have $G(i/\sqrt{N})=0$. 
\item Applying this to $E_2$ and using Exercise \ref{ex:e2}, deduce that if $\mu(N)=1$ and $N>1$ we have $$\sum_{\gcd(m,N)=1}\dfrac{m}{e^{2\pi m/\sqrt{N}}-1}=\dfrac{\phi(N)}{24}\;,$$ where $\phi(N)$ is Euler's totient function. \item Using directly the functional equation of $E_2^*$, show that for $N=1$ there is an additional term $-1/(8\pi)$, i.e., that $$\sum_{m\ge1}\dfrac{m}{e^{2\pi m}-1}=\dfrac{1}{24}-\dfrac{1}{8\pi}\;.$$ \end{enumerate} \end{exercise} \subsection{Product Expansions and the Dedekind Eta Function} We continue our study of product expansions. We first mention an important identity due to Jacobi, the triple product identity, as well as some consequences: \begin{theorem}[Triple product identity] If $|q|<1$ and $u\ne0$ we have $$\prod_{n\ge1}(1-q^n)(1-q^nu)\prod_{n\ge0}(1-q^n/u) =\sum_{k\ge0}(-1)^k(u^k-u^{-(k+1)})q^{k(k+1)/2}\;.$$ \end{theorem} \begin{proof} (sketch): denote by $L(q,u)$ the left-hand side. We have clearly $L(q,u/q)=-uL(q,u)$, and since one can write $L(q,u)=\sum_{k\in{\mathbb Z}}a_k(q)u^k$ this implies the recursion $a_k(q)=-q^ka_{k-1}(q)$, so $a_k(q)=(-1)^kq^{k(k+1)/2}a_0(q)$, and separating $k\ge0$ and $k<0$ this shows that $$L(q,u)=a_0(q)\sum_{k\ge0}(-1)^k(u^k-u^{-(k+1)})q^{k(k+1)/2}\;.$$ The slightly longer part is to show that $a_0(q)=1$: this is done by setting $u=i/q^{1/2}$ and $u=1/q^{1/2}$, which after a little computation implies that $a_0(q^4)=a_0(q)$, and from there it is immediate to deduce that $a_0(q)$ is a constant, and equal to $1$.\qed\end{proof} To give the next corollaries, we need to define the \emph{Dedekind eta function} $\eta(\tau)$, by $$\eta(\tau)=q^{1/24}\prod_{n\ge1}(1-q^n)\;,$$ (recall that $q^{\alpha}=e^{2\pi i\alpha\tau}$). Thus by definition $\eta(\tau)^{24}=\Delta(\tau)$.
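Part (5) of the exercise above (the value of $\sum_{m\ge1}m/(e^{2\pi m}-1)$) is easy to confirm numerically; the following check is my addition:

```python
import math

# sum_{m>=1} m/(e^{2 pi m} - 1) should equal 1/24 - 1/(8 pi).
# The terms decay like m*e^{-2 pi m}, so very few of them are needed.
S = sum(m / (math.exp(2 * math.pi * m) - 1) for m in range(1, 40))
assert abs(S - (1 / 24 - 1 / (8 * math.pi))) < 1e-12
```

Both sides agree to full double precision (the common value is about $0.0018779$).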
Since $\Delta(-1/\tau)=\tau^{12}\Delta(\tau)$, it follows that $\eta(-1/\tau)=c\cdot(\tau/i)^{1/2}\eta(\tau)$ for some $24$th root of unity $c$ (where we always use the principal determination of the square root), and since we see from the infinite product that $\eta(i)\ne0$, replacing $\tau$ by $i$ shows that in fact $c=1$. Thus $\eta$ satisfies the two basic modular equations $$\eta(\tau+1)=e^{2\pi i/24}\eta(\tau)\text{\quad and\quad}\eta(-1/\tau)=(\tau/i)^{1/2}\eta(\tau)\;.$$ Of course we have more generally $$\eta(\gamma(\tau))=v_{\eta}(\gamma)(c\tau+d)^{1/2}\eta(\tau)$$ for any $\gamma\in\Gamma$, with a complicated $24$th root of unity $v_{\eta}(\gamma)$, so $\eta$ is in some (reasonable) sense a modular form of weight $1/2$, similar to the function $\th$ that we introduced at the very beginning. \medskip The triple product identity immediately implies the following two identities: \begin{corollary}\label{coreta} We have \begin{align*} \eta(\tau)&=q^{1/24}\left(1+\sum_{k\ge1}(-1)^k(q^{k(3k-1)/2}+q^{k(3k+1)/2})\right)\text{\quad and}\\ \eta(\tau)^3&=q^{1/8}\sum_{k\ge0}(-1)^k(2k+1)q^{k(k+1)/2}\;. \end{align*} \end{corollary} \begin{proof} In the triple product identity, replace $(u,q)$ by $(1/q,q^3)$: we obtain $$\prod_{n\ge1}(1-q^{3n})(1-q^{3n-1})\prod_{n\ge0}(1-q^{3n+1})= \sum_{k\ge0}(-1)^k(q^{-k}-q^{k+1})q^{3k(k+1)/2}\;.$$ The left-hand side is clearly equal to $\eta(\tau)$, and the right-hand side to \begin{align*}&1-q+\sum_{k\ge1}(-1)^k(q^{k(3k+1)/2}-q^{(k+1)(3k+2)/2})\\ &=1+\sum_{k\ge1}(-1)^kq^{k(3k+1)/2}-q+\sum_{k\ge2}(-1)^kq^{k(3k-1)/2}\;, \end{align*} giving the formula for $\eta(\tau)$. For the second formula, divide the triple product identity by $1-1/u$ and make $u\to1$.\qed\end{proof} Thus the first few terms are: \begin{align*} \prod_{n\ge1}(1-q^n)&=1-q-q^2+q^5+q^7-q^{12}-q^{15}+\cdots\\ \prod_{n\ge1}(1-q^n)^3&=1-3q+5q^3-7q^6+9q^{10}-11q^{15}+\cdots\;. \end{align*} The first identity was proved by L.~Euler. 
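Both expansions of the corollary are easy to check coefficient by coefficient; the following verification on truncated power series is my addition:

```python
N = 80  # compare coefficients of q^0 .. q^{N-1}

def mul(a, b):
    """Product of two series truncated mod q^N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

# prod_{n>=1} (1-q^n), truncated
P = [0] * N; P[0] = 1
for n in range(1, N):
    for i in range(N - n - 1, -1, -1):
        P[i + n] -= P[i]          # multiply in place by (1 - q^n)

# Euler: 1 + sum_{k>=1} (-1)^k (q^{k(3k-1)/2} + q^{k(3k+1)/2})
pent = [0] * N; pent[0] = 1
k = 1
while k * (3 * k - 1) // 2 < N:
    for e in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
        if e < N:
            pent[e] += (-1) ** k
    k += 1
assert P == pent

# Jacobi: prod (1-q^n)^3 = sum_{k>=0} (-1)^k (2k+1) q^{k(k+1)/2}
cube = [0] * N
k = 0
while k * (k + 1) // 2 < N:
    cube[k * (k + 1) // 2] = (-1) ** k * (2 * k + 1)
    k += 1
assert mul(mul(P, P), P) == cube
```

Both assertions hold, matching the first few terms displayed above.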
\begin{exercise}\begin{enumerate} \item Show that $24\Delta D(\eta)=\eta D(\Delta)$, and using the explicit Fourier expansion of $\eta$, deduce the recursion $$\sum_{k\in{\mathbb Z}}(-1)^k(75k^2+25k+2-2n)\tau\left(n-\dfrac{k(3k+1)}{2}\right)=0\;.$$ \item Similarly, from $8\Delta D(\eta^3)=\eta^3 D(\Delta)$ deduce the recursion $$\sum_{k\in{\mathbb Z}}(-1)^k(2k+1)(9k^2+9k+2-2n)\tau\left(n-\dfrac{k(k+1)}{2}\right)=0\;.$$ \end{enumerate} \end{exercise} \begin{exercise}\label{expoch} Define the \emph{$q$-Pochhammer symbol} $(q)_n$ by $(q)_n=(1-q)(1-q^2)\cdots(1-q^n)$. \begin{enumerate}\item Set $f(a,q)=\prod_{n\ge1}(1-aq^n)$, and define coefficients $c_n(q)$ by setting $f(a,q)=\sum_{n\ge0}c_n(q)a^n$. Show that $f(a,q)=(1-aq)f(aq,q)$, deduce that $c_n(q)(1-q^n)=-q^nc_{n-1}(q)$ and finally the identity $$\prod_{n\ge1}(1-aq^n)=\sum_{n\ge0}(-1)^na^nq^{n(n+1)/2}/(q)_n\;.$$ \item Write in terms of the Dedekind eta function the identities obtained by specializing to $a=1$, $a=-1$, $a=-1/q$, $a=q^{1/2}$, and $a=-q^{1/2}$. \item Similarly, prove the identity $$1/\prod_{n\ge1}(1-aq^n)=\sum_{n\ge0}a^nq^n/(q)_n\;,$$ and once again write in terms of the Dedekind eta function the identities obtained by specializing to the same five values of $a$. \item By multiplying two of the above identities and using the triple product identity, prove the identity $$\dfrac{1}{\prod_{n\ge1}(1-q^n)}=\sum_{n\ge0}\dfrac{q^{n^2}}{(q)_n^2}\;.$$ \end{enumerate}\end{exercise} Note that this last series is the generating function of the \emph{partition function $p(n)$}, so if one wants to make a table of $p(n)$ up to $n=10000$, say, using the left-hand side would require $10000$ terms, while using the right-hand side only requires $100$. \subsection{Computational Aspects of the Ramanujan $\tau$ Function} Since its introduction, the Ramanujan tau function $\tau(n)$ has fascinated number theorists. 
For instance there is a conjecture due to D.~H.~Lehmer that $\tau(n)\ne0$, and an even stronger conjecture (which would imply the former) that for every prime $p$ we have $p\nmid\tau(p)$ (on probabilistic grounds, the latter conjecture is probably false). To test these conjectures as well as others, it is an interesting computational challenge to \emph{compute} $\tau(n)$ for large $n$ (because of Ramanujan's first two conjectures, i.e., Mordell's theorem that we will prove in Section \ref{sec:ram} below, it is sufficient to compute $\tau(p)$ for $p$ \emph{prime}). We can have two distinct goals. The first is to compute a \emph{table} of $\tau(n)$ for $n\le B$, where $B$ is some (large) bound. The second is to compute \emph{individual values} of $\tau(n)$, equivalently of $\tau(p)$ for $p$ prime. \medskip Consider first the construction of a \emph{table}. The use of the first recursion given in the above exercise needs $O(n^{1/2})$ operations per value of $\tau(n)$, hence $O(B^{3/2})$ operations in all to have a table for $n\le B$. However, it is well known that the \emph{Fast Fourier Transform} (FFT) allows one to compute products of power series in essentially linear time. Thus, using Corollary \ref{coreta}, we can directly write the power series expansion of $\eta^3$, and use the FFT to compute its eighth power $\eta^{24}=\Delta$. This will require $O(B\log(B))$ operations, so is much faster than the preceding method; it is essentially optimal since one needs $O(B)$ time simply to write the result. \medskip Using large computer resources, especially in memory, it is reasonable to construct a table up to $B=10^{12}$, but not much more. Thus, the problem of computing \emph{individual} values of $\tau(p)$ is important. We have already seen one such method in Exercise \ref{prob1} above, which gives a method for computing $\tau(n)$ in time $O(n^{1+\varepsilon})$ for any $\varepsilon>0$. 
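The table construction via $\eta^3$ described above can be sketched in a few lines; this illustration is my addition, and it uses plain $O(B^2)$ convolution for brevity (the $O(B\log B)$ bound stated in the text requires an FFT-based product):

```python
# Table of tau(n) via eta^24 = (eta^3)^8, computed by three squarings of the
# sparse series for eta^3 from the corollary above.
B = 60  # tiny table bound for illustration

eta3 = [0] * B            # coefficients of eta^3 without its q^{1/8} prefactor
k = 0
while k * (k + 1) // 2 < B:
    eta3[k * (k + 1) // 2] = (-1) ** k * (2 * k + 1)
    k += 1

def mul(a, b):
    c = [0] * B
    for i, ai in enumerate(a):
        if ai:
            for j in range(B - i):
                if b[j]:
                    c[i + j] += ai * b[j]
    return c

s = eta3
for _ in range(3):        # square three times: (((eta^3)^2)^2)^2 = eta^24
    s = mul(s, s)
tau = [0] + s[:B - 1]     # Delta = q * s, since (q^{1/8})^8 = q
assert (tau[1], tau[2], tau[3], tau[4]) == (1, -24, 252, -1472)
```

Replacing `mul` by an FFT-based polynomial product turns this into the essentially linear-time method described in the text.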
A deep and important theorem of B.~Edixhoven, J.-M.~Couveignes, et al., says that it is possible to compute $\tau(p)$ in time \emph{polynomial} in $\log(p)$, and in particular in time $O(p^{\varepsilon})$ for any $\varepsilon>0$. Unfortunately this algorithm is not at all practical, and at least for now, completely useless for us. The only practical and important application is for the computation of $\tau(p)$ modulo some small prime numbers $\ell$ (typically $\ell<50$, so far from being sufficient to apply the Chinese Remainder Theorem). However, there exists an algorithm which takes time $O(n^{1/2+\varepsilon})$ for any $\varepsilon>0$, so much better than the one of Exercise \ref{prob1}, and which is very practical. It is based on the use of the Eichler--Selberg \emph{trace formula}, together with the computation of \emph{Hurwitz class numbers} $H(N)$ (essentially the class numbers of imaginary quadratic orders counted with suitable multiplicity): if we set $H_3(N)=H(4N)+2H(N)$ (note that $H(4N)$ can be computed in terms of $H(N)$), then for $p$ prime \begin{align*}\tau(p)&=28p^6-28p^5-90p^4-35p^3-1\\ &\phantom{=}-128\sum_{1\le t<p^{1/2}}t^6(4t^4-9pt^2+7p^2)H_3(p-t^2)\;. \end{align*} See \cite{Coh-Str} Exercise 12.13 of Chapter 12 for details. Using this formula and a cluster, it should be reasonable to compute $\tau(p)$ for $p$ of the order of $10^{16}$. \subsection{Modular Functions and Complex Multiplication} Although the terminology is quite unfortunate, we cannot change it. By definition, a modular \emph{function} is a function $F$ from $\H$ to ${\mathbb C}$ which is weakly modular of weight $0$ (so that $F(\gamma(\tau))=F(\tau)$, in other words is \emph{invariant} under $\Gamma$, or equivalently defines a function from $\Gamma\backslash\H$ to ${\mathbb C}$), meromorphic, including at $\infty$. 
This last statement requires some additional explanation, but in simple terms, this means that the Fourier expansion of $F$ has only finitely many Fourier coefficients for negative powers of $q$: $F(\tau)=\sum_{n\ge n_0}a(n)q^n$, for some (possibly negative) $n_0$. A trivial way to obtain modular functions is simply to take the quotient of two modular forms having the same weight. The most important is the $j$-function defined by $$j(\tau)=\dfrac{E_4^3(\tau)}{\Delta(\tau)}\;,$$ whose Fourier expansion begins with $$j(\tau)=\dfrac{1}{q}+744+196884q+21493760q^2+\cdots$$ Indeed, one can easily prove the following theorem: \begin{theorem}\label{thmratj} Let $F$ be a meromorphic function on $\H$. The following are equivalent: \begin{enumerate}\item $F$ is a modular function. \item $F$ is the quotient of two modular forms of equal weight. \item $F$ is a rational function of $j$. \end{enumerate} \end{theorem} \begin{exercise}\label{exj} \begin{enumerate} \item Noting that Theorem \ref{thmval} is valid more generally for modular functions (with $v_{\tau}(f)=-r<0$ if $f$ has a pole of order $r$ at $\tau$) and using the specific properties of $j(\tau)$, compute $v_{\tau}(f)$ for the functions $j(\tau)$, $j(\tau)-1728$, and $D(j)(\tau)$, at the points $\rho=e^{2\pi i/3}$, $i$, $i\infty$, and $\tau_0$ for $\tau_0$ distinct from these three special points. \item Set $f=f(a,b,c)=D(j)^a/(j^b(j-1728)^c)$. Show that $f$ is a modular \emph{form} if and only if $2c\le a$, $3b\le 2a$, and $b+c\ge a$, and give similar conditions for $f$ to be a \emph{cusp form}. \item Show that $E_4=f(2,1,1)$, $E_6=f(3,2,1)$, and $\Delta=f(6,4,3)$, so that for instance $D(j)=-E_{14}/\Delta=-E_4^2E_6/\Delta$. \end{enumerate} \end{exercise} An important theory linked to modular functions is the theory of \emph{complex multiplication}, which deserves a course in itself. We simply mention one of the basic results.
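Before stating it, here is a quick computational check (my addition) of the Fourier coefficients of $j$ quoted above, using naive power series arithmetic over ${\mathbb Z}$:

```python
N = 8   # enough terms to see the coefficients quoted above

def sigma(k, n):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    """Product of two series truncated mod q^N."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j in range(N - i):
            c[i + j] += ai * b[j]
    return c

def inv(a):
    """Power series inverse mod q^N, assuming a[0] == 1."""
    b = [0] * N; b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

E4 = [1] + [240 * sigma(3, n) for n in range(1, N)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, N)]
E4cube = mul(mul(E4, E4), E4)
Delta = [(x - y) // 1728 for x, y in zip(E4cube, mul(E6, E6))]
qj = mul(E4cube, inv(Delta[1:] + [0]))   # q*j(tau) = E4^3 / (Delta/q)
assert qj[:4] == [1, 744, 196884, 21493760]
```

The shift by one index accounts for the leading $1/q$ in the expansion of $j$.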
We will say that a complex number $\tau\in\H$ is a CM point (CM for Complex Multiplication) if it belongs to an imaginary quadratic field, or equivalently if there exist integers $a$, $b$, and $c$ with $a\ne0$ such that $a\tau^2+b\tau+c=0$. The first basic theorem is the following: \begin{theorem} If $\tau$ is a CM point then $j(\tau)$ is an algebraic integer.\end{theorem} Note that this theorem has two parts: the first and most important part is that $j(\tau)$ is algebraic. This is in fact easy to prove. The second part is that it is an algebraic \emph{integer}, and this is more difficult. Since any modular function $f$ is a rational function of $j$, it follows that if this rational function has algebraic coefficients then $f(\tau)$ will be algebraic (but not necessarily integral). Another immediate consequence is the following: \begin{corollary} Let $\tau$ be a CM point and define $\Omega_\tau=\eta(\tau)^2$, where $\eta$ is as usual the Dedekind eta function. For any modular form $f$ of weight $k$ (in fact $f$ can also be meromorphic) the number $f(\tau)/\Omega_\tau^k$ is algebraic. In fact $E_4(\tau)/\Omega_\tau^4$ and $E_6(\tau)/\Omega_\tau^6$ are always algebraic \emph{integers}. \end{corollary} But the importance of this theorem lies in algebraic number theory. We give the following theorem without explaining the necessary notions: \begin{theorem} Let $\tau$ be a CM point, and $D=b^2-4ac$ its \emph{discriminant}, where we choose $\gcd(a,b,c)=1$. Then ${\mathbb Q}(j(\tau))$ is the \emph{ring class field} of discriminant $D$, and in particular if $D$ is the discriminant of a quadratic field $K={\mathbb Q}(\sqrt{D})$, then $K(j(\tau))$ is the \emph{Hilbert class field} of $K$. In particular, the degree of the minimal polynomial of the algebraic integer $j(\tau)$ is equal to the \emph{class number} $h(D)$ of the order of discriminant $D$.
\end{theorem} Examples: \begin{align*} j((1+i\sqrt3)/2)&=0=1728-3(24)^2\\ j(i)&=1728=12^3=1728-4(0)^2\\ j((1+i\sqrt7)/2)&=-3375=(-15)^3=1728-7(27)^2\\ j(i\sqrt2)&=8000=20^3=1728+8(28)^2\\ j((1+i\sqrt{11})/2)&=-32768=(-32)^3=1728-11(56)^2\\ j((1+i\sqrt{163})/2)&=-262537412640768000=(-640320)^3\\ &=1728-163(40133016)^2\\ j(i\sqrt3)&=54000=2(30)^3=1728+12(66)^2\\ j(2i)&=287496=(66)^3=1728+8(189)^2\\ j((1+3i\sqrt3)/2)&=-12288000=-3(160)^3=1728-3(2024)^2\\ j((1+i\sqrt{15})/2)&=\dfrac{-191025-85995\sqrt5}{2}\\ &=\dfrac{1-\sqrt5}{2}\left(\dfrac{75+27\sqrt5}{2}\right)^3= 1728-3\left(\dfrac{273+105\sqrt5}{2}\right)^2 \end{align*} Note that we give the results in the above form since it can be shown that the functions $j^{1/3}$ and $(j-1728)^{1/2}$ also have interesting arithmetic properties. The example with $D=-163$ is particularly spectacular: \begin{exercise} Using the above table, show that $$(e^{\pi\sqrt{163}}-744)^{1/3}=640320-\varepsilon\;,$$ with $0<\varepsilon<10^{-24}$, and more precisely that $\varepsilon$ is approximately equal to $65628e^{-(5/3)\pi\sqrt{163}}$ (note that $65628=196884/3$). \end{exercise} \begin{exercise}\begin{enumerate} \item Using once again the example of $163$, compute heuristically a few terms of the Fourier expansion of $j$ assuming that it is of the form $1/q+\sum_{n\ge0}c(n)q^n$ with $c(n)$ reasonably small integers using the following method. Set $q=-e^{-\pi\sqrt{163}}$, and let $J=(-640320)^3$ be the exact value of $j((-1+i\sqrt{163})/2)$. By computing $J-1/q$, one notices that the result is very close to $744$, so we guess that $c(0)=744$. We then compute $(J-1/q-c(0))/q$ and note that once again the result is close to an integer, giving $c(1)$, and so on. Go as far as you can with this method. \item Do the same for $67$ instead of $163$. You will find the same Fourier coefficients (but you can go less far). \item On the other hand, do the same for $58$, starting with $J$ equal to the integer close to $e^{\pi\sqrt{58}}$. 
You will find a \emph{different} Fourier expansion: it corresponds in fact to another modular function, this time defined on a subgroup of $\Gamma$, called a \emph{Hauptmodul}. \item Try to find other rational numbers $D$ such that $e^{\pi\sqrt{D}}$ is close to an integer, and do the same exercise for them (an example where $D$ is not integral is $89/3$). \end{enumerate}\end{exercise} \subsection{Derivatives of Modular Forms}\label{sec:deriv} If we differentiate the modular equation $f((a\tau+b)/(c\tau+d))=(c\tau+d)^kf(\tau)$ with $\psmm{a}{b}{c}{d}\in\Gamma$ using the operator $D=(1/(2\pi i))d/d\tau$ (which gives simpler formulas than $d/d\tau$ since $D(q^n)=nq^n$), we easily obtain $$D(f)\left(\dfrac{a\tau+b}{c\tau+d}\right)=(c\tau+d)^{k+2}\left(D(f)(\tau)+\dfrac{k}{2\pi i}\dfrac{c}{c\tau+d}f(\tau)\right)\;.$$ Thus the derivative of a weakly modular form of weight $k$ looks like one of weight $k+2$, except that there is an extra term. This term vanishes if $k=0$, so the derivative of a modular function of weight $0$ is indeed modular of weight $2$ (we have seen above the example of $j(\tau)$ which satisfies $D(j)=-E_{14}/\Delta$). If $k>0$ and we really want a true weakly modular form of weight $k+2$ there are two ways to do this. The first one is called the \emph{Serre derivative}: \begin{exercise} Using Proposition \ref{prop:E2}, show that if $f$ is weakly modular of weight $k$ then $D(f)-(k/12)E_2f$ is weakly modular of weight $k+2$. In particular, if $f\in M_k(\Gamma)$ then $SD_k(f):=D(f)-(k/12)E_2f\in M_{k+2}(\Gamma)$.\end{exercise} The second method is to set $D^*(f):=D(f)-(k/(4\pi\Im(\tau)))f$ since by Proposition \ref{prop:E2} we have $D^*(f)=SD_k(f)+(k/12)E_2^*f$. This loses holomorphy, but is very useful in certain contexts. Note that if more than one modular form is involved, there are more ways to make new modular forms using derivatives: \begin{exercise}\label{ex:RC}\begin{enumerate} \item For $i=1$, $2$ let $f_i\in M_{k_i}(\Gamma)$.
By considering the modular function $f_1^{k_2}/f_2^{k_1}$ of weight $0$, show that $$k_2f_2D(f_1)-k_1f_1D(f_2)\in S_{k_1+k_2+2}(\Gamma)\;.$$ Note that this generalizes Exercise \ref{prob1}. \item Compute constants $a$, $b$, and $c$ (depending on $k_1$ and $k_2$ and not all $0$) such that $$[f_1,f_2]_2=aD^2(f_1)+bD(f_1)D(f_2)+cD^2(f_2)\in S_{k_1+k_2+4}(\Gamma)\;.$$ \end{enumerate}\end{exercise} This gives the first two of the so-called \emph{Rankin--Cohen} brackets. \smallskip As an application of derivatives of modular forms, we give a proof of a theorem of Siegel. We begin by the following: \begin{lemma} Let $a$ and $b$ be nonnegative integers such that $4a+6b=12r+2$. The constant term of the Fourier expansion of $F_r(a,b)=E_4^aE_6^b/\Delta^r$ vanishes. \end{lemma} \begin{proof} By assumption $F_r(a,b)$ is a meromorphic modular form of weight $2$. Since $D(\sum_{n\ge n_0}a(n)q^n)=\sum_{n\ge n_0}na(n)q^n$, it is sufficient to find a modular function $G_r(a,b)$ of weight $0$ such that $F_r(a,b)=D(G_r(a,b))$ (recall that the derivative of a modular function of weight $0$ is still modular). We prove this by an induction first on $r$, then on $b$. Recall that by Exercise \ref{exj} we have $D(j)=-E_{14}/\Delta=-E_4^2E_6/\Delta$, and since $4a+6b=14$ has only the solution $(a,b)=(2,1)$ the result is true for $r=1$. Assume it is true for $r-1$. We now do a recursion on $b$, noting that since $2a+3b=6r+1$, $b$ is odd. Note that $D(j^r)=rj^{r-1}D(j)=-rE_4^{3r-1}E_6/\Delta^r$, so the constant term of $F_r(a,1)$ indeed vanishes. 
However, since $E_4^3-E_6^2=1728\Delta$, if $a\ge3$ we have $$F_r(a-3,b+2)=E_4^{a-3}E_6^b(E_4^3-1728\Delta)/\Delta^r =F_r(a,b)-1728F_{r-1}(a-3,b)\;,$$ proving that the result is true for $r$ by induction on $b$ since we assumed it true for $r-1$.\qed\end{proof} We can now prove (part of) Siegel's theorem: \begin{theorem}\label{thmsieg} For $r=\dim(M_k(\Gamma))$ define coefficients $c_i^k$ by $$\dfrac{E_{12r-k+2}}{\Delta^r}=\sum_{i\ge-r}c_i^kq^i\;,$$ where by convention we set $E_0=1$. Then for any $f=\sum_{n\ge0}a(n)q^n\in M_k(\Gamma)$ we have the relation $$\sum_{0\le n\le r}c_{-n}^ka(n)=0\;.$$ In addition we have $c_0^k\ne0$, so that $a(0)=-\sum_{1\le n\le r}(c_{-n}^k/c_0^k)a(n)$ is a linear combination with \emph{rational coefficients} of the $a(n)$ for $1\le n\le r$. \end{theorem} \begin{proof} First note that by Corollary \ref{cordim} we have $r\ge(k-2)/12$ (with equality only if $k\equiv2\pmod{12}$), so the definition of the coefficients $c_i^k$ makes sense. Note also that since the Fourier expansion of $E_{12r-k+2}$ begins with $1+O(q)$ and that of $\Delta^r$ with $q^r+O(q^{r+1})$, that of the quotient begins with $q^{-r}+O(q^{1-r})$ (in particular $c_{-r}^k=1$). The proof of the first part is now immediate: the modular form $fE_{12r-k+2}$ belongs to $M_{12r+2}(\Gamma)$, so by Corollary \ref{core4e6} is a linear combination of $E_4^aE_6^b$ with $4a+6b=12r+2$. It follows from the lemma that the constant term of $fE_{12r-k+2}/\Delta^r$ vanishes, and this constant term is equal to $\sum_{0\le n\le r}c_{-n}^ka(n)$, proving the first part of the theorem. The fact that $c_0^k\ne0$ (which is of course essential) is a little more difficult and will be omitted, see \cite{Coh-Str} Theorem 9.5.1.\qed\end{proof} This theorem has (at least) two consequences. First, a theoretical one: if one can construct a modular form whose constant term is some interesting quantity and whose Fourier coefficients $a(n)$ are rational, this shows that the interesting quantity is also rational.
This is what allowed Siegel to show that the values at negative integers of Dedekind zeta functions of totally real number fields are rational, see Section \ref{sec:several}. Second, a practical one: it allows one to compute explicitly the constant coefficient $a(0)$ in terms of the $a(n)$, giving interesting formulas, see again Section \ref{sec:several}. \section{Hecke Operators: Ramanujan's discoveries}\label{sec:ram} We now come to one of the most amazing and important discoveries on modular forms due to S.~Ramanujan, which has led to the modern development of the subject. Recall that we set $$\Delta(\tau)=q\prod_{m\ge1}(1-q^m)^{24}=\sum_{n\ge1}\tau(n)q^n\;.$$ We have $\tau(2)=-24$, $\tau(3)=252$, and $\tau(6)=-6048=-24\cdot252$, so that $\tau(6)=\tau(2)\tau(3)$. After some more experiments, Ramanujan conjectured that if $m$ and $n$ are coprime we have $\tau(mn)=\tau(m)\tau(n)$. Thus, by decomposing an integer into products of prime powers, assuming this conjecture, we are reduced to the study of $\tau(p^k)$ for $p$ prime. Ramanujan then noticed that $\tau(4)=-1472=(-24)^2-2^{11}=\tau(2)^2-2^{11}$, and again after some experiments he conjectured that $\tau(p^2)=\tau(p)^2-p^{11}$, and more generally that $\tau(p^{k+1})=\tau(p)\tau(p^k)-p^{11}\tau(p^{k-1})$. Thus $u_k=\tau(p^k)$ satisfies a linear recurrence relation $$u_{k+1}-\tau(p)u_k+p^{11}u_{k-1}=0\;,$$ and since $u_0=1$ the sequence is entirely determined by the value of $u_1=\tau(p)$. It is well known that the behavior of a linear recurrent sequence is determined by its \emph{characteristic polynomial}. Here it is equal to $X^2-\tau(p)X+p^{11}$, and the third of Ramanujan's conjectures is that the discriminant of this equation is always negative, or equivalently that $|\tau(p)|<2p^{11/2}$.
Note that if $\alpha_p$ and $\beta_p$ are the roots of the characteristic polynomial (necessarily distinct since we cannot have $|\tau(p)|=2p^{11/2}$), then $\tau(p^k)=(\alpha_p^{k+1}-\beta_p^{k+1})/(\alpha_p-\beta_p)$, and the last conjecture says that $\alpha_p$ and $\beta_p$ are \emph{complex conjugate}, and in particular of modulus \emph{equal} to $p^{11/2}$. These conjectures are all true. The first two (multiplicativity and recursion) were proved by L.~Mordell only one year after Ramanujan formulated them, and indeed the proof is quite easy (in fact we will prove them below). The third conjecture $|\tau(p)|<2p^{11/2}$ is extremely hard, and was only proved by P.~Deligne in 1970 using the whole machinery developed by the school of A.~Grothendieck to solve the Weil conjectures. The main idea of Mordell, which was generalized later by E.~Hecke, is to introduce certain linear operators (now called Hecke operators) on spaces of modular forms, to prove that they satisfy the multiplicativity and recursion properties (this is in general much easier than proving it directly for the coefficients), and finally to use the fact that $S_{12}(\Gamma)={\mathbb C}\Delta$ is of dimension $1$, so that necessarily $\Delta$ is an \emph{eigenform} of the Hecke operators whose eigenvalues are exactly its Fourier coefficients. \medskip Although there are more natural ways of introducing them, we will define the Hecke operator $T(n)$ on $M_k(\Gamma)$ directly by its action on Fourier expansions $T(n)(\sum_{m\ge0}a(m)q^m)=\sum_{m\ge0}b(m)q^m$, where $$b(m)=\sum_{d\mid\gcd(m,n)}d^{k-1}a(mn/d^2)\;.$$ Note that we can consider this definition as purely formal: apart from the presence of the integer $k$ it is totally unrelated to the possible fact that $\sum_{m\ge0}a(m)q^m\in M_k(\Gamma)$.
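Purely formal as it is, the definition can be implemented directly on coefficient functions; the following sketch checks, on an arbitrarily chosen coefficient function, the two composition identities that are established in the next paragraph:

```python
k = 12  # the weight entering the definition of T(n)

def hecke(n, a):
    """The coefficient map m -> b(m) of T(n) applied to the formal
    expansion with coefficient function a, per the displayed formula."""
    def b(m):
        return sum(d ** (k - 1) * a(m * n // d ** 2)
                   for d in range(1, n + 1)
                   if n % d == 0 and m % d == 0)   # d | gcd(m, n)
    return b

# an arbitrary coefficient function: the identities below are purely formal
a = lambda m: (m + 1) ** 2 + 3 * m

# multiplicativity: T(2)T(3) = T(6)
lhs = hecke(2, hecke(3, a))
rhs = hecke(6, a)
assert all(lhs(m) == rhs(m) for m in range(40))

# recursion: T(2)T(2) = T(4) + 2^(k-1) T(1), where T(1) is the identity
lhs = hecke(2, hecke(2, a))
assert all(lhs(m) == hecke(4, a)(m) + 2 ** (k - 1) * a(m) for m in range(40))
```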
A simple but slightly tedious combinatorial argument shows that these operators satisfy $$T(n)T(m)=\sum_{d\mid\gcd(n,m)}d^{k-1}T(nm/d^2)\;.$$ In particular if $m$ and $n$ are coprime we have $T(n)T(m)=T(nm)$ (multiplicativity), and if $p$ is a prime and $j\ge1$ we have $T(p^j)T(p)=T(p^{j+1})+p^{k-1}T(p^{j-1})$ (recursion). This shows that these operators are indeed good candidates for proving the first two of Ramanujan's conjectures. We need to show the essential fact that they preserve $M_k(\Gamma)$ and $S_k(\Gamma)$ (the latter will follow from the former since by the above definition $b(0)=\sum_{d\mid n}d^{k-1}a(0)=a(0)\sigma_{k-1}(n)=0$ if $a(0)=0$). By recursion and multiplicativity, it is sufficient to show this for $T(p)$ with $p$ prime. Now if $F(\tau)=\sum_{m\ge0}a(m)q^m$, $T(p)(F)(\tau)=\sum_{m\ge0}b(m)q^m$ with $b(m)=a(mp)$ if $p\nmid m$, and $b(m)=a(mp)+p^{k-1}a(m/p)$ if $p\mid m$. On the other hand, let us compute $G(\tau)=\sum_{0\le j<p}F((\tau+j)/p)$. Replacing directly in the Fourier expansion we have $$G(\tau)=\sum_{m\ge0}a(m)q^{m/p}\sum_{0\le j<p}e^{2\pi imj/p}\;.$$ The inner sum is a complete geometric sum which vanishes unless $p\mid m$, in which case it is equal to $p$. Thus, changing $m$ into $pm$ we have $G(\tau)=p\sum_{m\ge0}a(pm)q^m$. On the other hand, we have trivially $\sum_{p\mid m}a(m/p)q^m=\sum_{m\ge0}a(m)q^{pm}=F(p\tau)$. Replacing both of these formulas in the formula for $T(p)(F)$ we see that $$T(p)(F)(\tau)=p^{k-1}F(p\tau)+\dfrac{1}{p}\sum_{0\le j<p}F\left(\dfrac{\tau+j}{p}\right)\;.$$ \begin{exercise} Show more generally that $$T(n)(F)(\tau)=\sum_{ad=n}a^{k-1}\dfrac{1}{d}\sum_{0\le b<d}F\left(\dfrac{a\tau+b}{d}\right)\;.$$\end{exercise} It is now easy to show that $T(p)F$ is modular: replace $\tau$ by $\gamma(\tau)$ in the above formula and make a number of elementary manipulations to prove modularity.
In fact, since $\Gamma$ is generated by $\tau\mapsto\tau+1$ and $\tau\mapsto-1/\tau$, it is immediate to check modularity for these two maps on the above formula. As mentioned above, the proof of the first two Ramanujan conjectures is now immediate: since $T(n)$ acts on the one-dimensional space $S_{12}(\Gamma)$ we must have $T(n)(\Delta)=c\cdot\Delta$ for some constant $c$. Replacing in the definition of $T(n)$, we thus have, for all $m$, the identity $c\tau(m)=\sum_{d\mid \gcd(n,m)}d^{11}\tau(nm/d^2)$. Choosing $m=1$ and using $\tau(1)=1$ shows that $c=\tau(n)$, so that $$\tau(n)\tau(m)=\sum_{d\mid\gcd(n,m)}d^{11}\tau(nm/d^2)$$ which implies (and is equivalent to) the first two conjectures of Ramanujan. \smallskip Denote by $P_k(n)$ the \emph{characteristic polynomial} of the linear map $T(n)$ on $S_k(\Gamma)$. A strong form of the so-called Maeda conjecture states that for $n>1$ the polynomial $P_k(n)$ is \emph{irreducible}. This has been tested up to very large weights. \begin{exercise} The above proof shows that the Hecke operators also preserve the space of modular \emph{functions}, so by Theorem \ref{thmratj} the image of $j(\tau)$ will be a rational function in $j$: \begin{enumerate}\item Show for instance that \begin{align*}T(2)(j)&=j^2/2-744j+81000\text{\quad and}\\ T(3)(j)&=j^3/3-744j^2+356652j-12288000\;.\end{align*} \item Set $J=j-744$, i.e., $j$ with no term in $q^0$ in its Fourier expansion. Deduce that \begin{align*}T(2)(J)&=J^2/2-196884\text{\quad and}\\ T(3)(J)&=J^3/3-196884J-21493760\;,\end{align*} and observe that the coefficients that we obtain are exactly the Fourier coefficients of $J$. \item Prove that $T(n)(j)$ is a \emph{polynomial} in $j$. Does the last observation generalize? \end{enumerate} \end{exercise} \section{Euler Products, Functional Equations} \subsection{Euler Products} The case of $\Delta$ is quite special, in that the modular form space to which it naturally belongs, $S_{12}(\Gamma)$, is only $1$-dimensional.
As can easily be seen from the dimension formula, this occurs (for cusp forms) only for $k=12$, $16$, $18$, $20$, $22$, and $26$ (there are no nonzero cusp forms in weight $14$ and the space is of dimension $2$ in weight $24$), and thus the evident cusp forms $\Delta E_{k-12}$ for these values of $k$ (setting $E_0=1$) are generators of the space $S_k(\Gamma)$, so are eigenforms of the Hecke operators and share exactly the same properties as $\Delta$, with $p^{11}$ replaced by $p^{k-1}$. When the dimension is greater than $1$, we must work slightly harder. From the formulas given above it is clear that the $T(n)$ form a \emph{commutative algebra} of operators on the finite dimensional vector space $S_k(\Gamma)$. In addition, we have seen above that there is a natural \emph{scalar product} on $S_k(\Gamma)$. One can show the not completely trivial fact that $T(n)$ is Hermitian for this scalar product, hence in particular is diagonalizable. It follows by an easy and classical result of linear algebra that these operators are \emph{simultaneously diagonalizable}, i.e., there exists a basis $F_i$ of forms in $S_k(\Gamma)$ such that $T(n)F_i=\lambda_i(n)F_i$ for all $n$ and $i$. Identifying Fourier coefficients as we have done above for $\Delta$ shows that if $F_i=\sum_{n\ge1}a_i(n)q^n$ we have $a_i(n)=\lambda_i(n)a_i(1)$. This implies first that $a_i(1)\ne0$, since otherwise $F_i$ would be identically zero, so that by dividing by $a_i(1)$ we can always \emph{normalize} the eigenforms so that $a_i(1)=1$, and second, as for $\Delta$, that $a_i(n)=\lambda_i(n)$, i.e., the eigenvalues are exactly the Fourier coefficients. In addition, since the $T(n)$ are Hermitian, these eigenvalues are real for any embedding into ${\mathbb C}$, hence are \emph{totally real}, in other words their minimal polynomial has only real roots.
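All of this can be watched concretely in the first case where the dimension is $2$, namely $S_{24}(\Gamma)$ with basis $\Delta^2$, $\Delta E_4^3$ (this anticipates the exercise below). The sketch computes the matrix of $T(2)$ in this basis, checks that its characteristic polynomial has real irrational roots generating the quadratic field ${\mathbb Q}(\sqrt{144169})$ (the eigenvalues turn out to be $540\pm12\sqrt{144169}$), and checks that the matrices of $T(2)$ and $T(3)$ commute:

```python
from math import comb, isqrt

N = 10   # series precision: coefficients of q^0 .. q^(N-1)
k = 24   # the weight

def mul(f, g):
    h = [0] * N
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                if gj and i + j < N:
                    h[i + j] += fi * gj
    return h

def sigma3(n):
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

E4 = [1] + [240 * sigma3(n) for n in range(1, N)]
u = [1] + [0] * (N - 1)             # Delta / q = prod (1 - q^m)^24
for m in range(1, N):
    fac = [0] * N
    for j in range(N):
        if j * m < N and j <= 24:
            fac[j * m] = (-1) ** j * comb(24, j)
    u = mul(u, fac)
Delta = [0] + u[:N - 1]

f1 = mul(Delta, Delta)                   # = q^2 + ...
f2 = mul(Delta, mul(E4, mul(E4, E4)))    # = q + 696 q^2 + ...

def hecke_p(p, a):
    # T(p) on a truncated expansion, in weight k
    return [a[p * m] + (p ** (k - 1) * a[m // p] if m % p == 0 else 0)
            for m in range(N // p)]

def coords(g):
    # g = alpha*f1 + beta*f2, read off from the coefficients of q and q^2
    beta = g[1]
    return (g[2] - beta * f2[2], beta)

def hecke_matrix(p):
    (a1, b1), (a2, b2) = coords(hecke_p(p, f1)), coords(hecke_p(p, f2))
    return [[a1, a2], [b1, b2]]

M2, M3 = hecke_matrix(2), hecke_matrix(3)
t = M2[0][0] + M2[1][1]
d = M2[0][0] * M2[1][1] - M2[0][1] * M2[1][0]
disc = t * t - 4 * d
assert t == 1080 and disc == 576 * 144169   # eigenvalues 540 +- 12*sqrt(144169)
assert isqrt(disc) ** 2 != disc             # irrational: char poly irreducible

def matmul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]
assert matmul(M2, M3) == matmul(M3, M2)     # the Hecke algebra is commutative
```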
Finally, using Theorem \ref{thmval}, it is immediate to show that the field generated by the $a_i(n)$ is finite-dimensional over ${\mathbb Q}$, i.e., is a number field. \begin{exercise} Consider the space $S=S_{24}(\Gamma)$, which is the smallest weight where the dimension is greater than $1$, here $2$. By the structure theorem given above, it is generated for instance by $\Delta^2$ and $\Delta E_4^3$. Compute the matrix of the operator $T(2)$ on this basis of $S$, diagonalize this matrix, so find the \emph{eigenfunctions} of $T(2)$ on $S$ (the prime number $144169$ should occur). Check that these eigenfunctions are also eigenfunctions of $T(3)$. \end{exercise} Thus, let $F=\sum_{n\ge1}a(n)q^n$ be a \emph{normalized} eigenfunction for all the Hecke operators in $S_k(\Gamma)$ (for instance $F=\Delta$ with $k=12$), and consider the \emph{Dirichlet series} $$L(F,s)=\sum_{n\ge1}\dfrac{a(n)}{n^s}\;,$$ for the moment formally, although we will show below that it converges for $\Re(s)$ sufficiently large. The multiplicativity property of the coefficients ($a(nm)=a(n)a(m)$ if $\gcd(n,m)=1$, coming from that of the $T(n)$) is \emph{equivalent} to the fact that we have an \emph{Euler product} (a product over primes) $$L(F,s)=\prod_{p\in P}L_p(F,s)\text{\quad with\quad}L_p(F,s)=\sum_{j\ge0}\dfrac{a(p^j)}{p^{js}}\;,$$ where we will always denote by $P$ the set of prime numbers. The additional recursion property $a(p^{j+1})=a(p)a(p^j)-p^{k-1}a(p^{j-1})$ is equivalent to the identity $$L_p(F,s)=\dfrac{1}{1-a(p)p^{-s}+p^{k-1}p^{-2s}}$$ (multiply both sides by the denominator to check this). We have thus proved the following theorem: \begin{theorem} Let $F=\sum_{n\ge1}a(n)q^n\in S_k(\Gamma)$ be an eigenfunction of all Hecke operators. 
We have an Euler product $$L(F,s)=\sum_{n\ge1}\dfrac{a(n)}{n^s}=\prod_{p\in P}\dfrac{1}{1-a(p)p^{-s}+p^{k-1}p^{-2s}}\;.$$ \end{theorem} Note that we have not really used the fact that $F$ is a cusp form: the above theorem is still valid if $F=F_k$ is the normalized Eisenstein series $$F_k(\tau)=-\dfrac{B_k}{2k}E_k(\tau)=-\dfrac{B_k}{2k}+\sum_{n\ge1}\sigma_{k-1}(n)q^n\;,$$ which is easily seen to be a normalized eigenfunction for all Hecke operators. In fact: \begin{exercise} Let $a\in{\mathbb C}$ be any complex number and let as usual $\sigma_a(n)=\sum_{d\mid n}d^a$. \begin{enumerate}\item Show that $$\sum_{n\ge1}\dfrac{\sigma_a(n)}{n^s}=\zeta(s-a)\zeta(s)=\prod_{p\in P}\dfrac{1}{1-\sigma_a(p)p^{-s}+p^ap^{-2s}}\;,$$ with $\sigma_a(p)=p^a+1$. \item Show that $$\sigma_a(m)\sigma_a(n)=\sum_{d\mid\gcd(m,n)}d^a\sigma_a\left(\dfrac{mn}{d^2}\right)\;,$$ so that in particular $F_k$ is indeed a normalized eigenfunction for all Hecke operators. \end{enumerate} \end{exercise} \subsection{Analytic Properties of $L$-Functions} Everything that we have done up to now is purely formal, i.e., we do not need to assume convergence. However in the sequel we will need to prove some analytic results, and for this we need to prove convergence for certain values of $s$. We begin with the following easy bound, due to Hecke: \begin{proposition} Let $F=\sum_{n\ge1}a(n)q^n\in S_k(\Gamma)$ be a cusp form (not necessarily an eigenform). There exists a constant $c>0$ (depending on $F$) such that for all $n$ we have $|a(n)|\le c n^{k/2}$.\end{proposition} \begin{proof} The trick is to consider the function $g(\tau)=|F(\tau)\Im(\tau)^{k/2}|$: since we have seen that $\Im(\gamma(\tau))=\Im(\tau)/|c\tau+d|^2$, it follows that $g(\tau)$ is \emph{invariant} under $\Gamma$. It follows that $\sup_{\tau\in\H}g(\tau)=\sup_{\tau\in\mathfrak F}g(\tau)$, where $\mathfrak F$ is the fundamental domain used above. 
Now because of the Fourier expansion and the fact that $F$ is a cusp form, $|F(\tau)|=O(e^{-2\pi\Im(\tau)})$ as $\Im(\tau)\to\infty$, so $g(\tau)$ tends to $0$ also. It immediately follows that $g$ is \emph{bounded} on $\mathfrak F$, hence on $\H$, so that there exists a constant $c_1>0$ such that $|F(\tau)|\le c_1\Im(\tau)^{-k/2}$ for all $\tau$. We can now easily prove Hecke's bound: from the Fourier series section we know that for any $y>0$ $$a(n)=e^{2\pi ny}\int_0^1 F(x+iy)e^{-2\pi inx}\,dx\;,$$ so that $|a(n)|\le c_1e^{2\pi ny}y^{-k/2}$, and choosing $y=1/n$ proves the proposition with $c=e^{2\pi}c_1$.\qed\end{proof} The following corollary is now clear: \begin{corollary} The $L$-function of a cusp form of weight $k$ converges absolutely (and uniformly on compact subsets) for $\Re(s)>k/2+1$. \end{corollary} \begin{remark} Deligne's deep result mentioned above on the third Ramanujan conjecture implies that we have the following optimal bound: there exists $c>0$ such that $|a(n)|\le c\sigma_0(n)n^{(k-1)/2}$, and in particular $|a(n)|=O(n^{(k-1)/2+\varepsilon})$ for all $\varepsilon>0$. This implies that the $L$-function of a cusp form converges absolutely and uniformly on compact subsets in fact also for $\Re(s)>(k+1)/2$. \end{remark} \begin{exercise} Define for all $s\in{\mathbb C}$ the function $\sigma_s(n)$ by $\sigma_s(n)=\sum_{d\mid n}d^s$ if $n\in{\mathbb Z}_{>0}$, $\sigma_s(0)=\zeta(-s)/2$ (and $\sigma_s(n)=0$ otherwise). Set $$S(s_1,s_2;n)=\sum_{0\le m\le n}\sigma_{s_1}(m)\sigma_{s_2}(n-m)\;.$$ \begin{enumerate}\item Compute $S(s_1,s_2;n)$ exactly in terms of $\sigma_{s_1+s_2+1}(n)$ for $(s_1,s_2)=(3,3)$ and $(3,5)$, and also for $(s_1,s_2)=(1,1)$, $(1,3)$, $(1,5)$, and $(1,7)$ by using properties of the function $E_2$.
\item Using Hecke's bound for cusp forms, show that if $s_1$ and $s_2$ are odd positive integers the ratio $S(s_1,s_2;n)/\sigma_{s_1+s_2+1}(n)$ tends to a limit $L(s_1,s_2)$ as $n\to\infty$, and compute this limit in terms of Bernoulli numbers. In addition, give an estimate for the \emph{error term} $|S(s_1,s_2;n)/\sigma_{s_1+s_2+1}(n)-L(s_1,s_2)|$. \item Using the values of the Riemann zeta function at even positive integers in terms of Bernoulli numbers, show that if $s_1$ and $s_2$ are odd positive integers we have $$L(s_1,s_2)=\dfrac{\zeta(s_1+1)\zeta(s_2+1)}{(s_1+s_2+1)\binom{s_1+s_2}{s_1}\zeta(s_1+s_2+2)}\;.$$ \item (A little project.) \emph{Define} $L(s_1,s_2)$ by the above formula for all $s_1$, $s_2$ in ${\mathbb C}$ for which it makes sense, interpreting $\binom{s_1+s_2}{s_1}$ as $\Gamma(s_1+s_2+1)/(\Gamma(s_1+1)\Gamma(s_2+1))$. Check on a computer whether it still seems to be true that $$S(s_1,s_2;n)/\sigma_{s_1+s_2+1}(n)\to L(s_1,s_2)\;.$$ Try to \emph{prove} it for $s_1=s_2=2$, and then for general $s_1$, $s_2$. If you succeed, give also an estimate for the error term analogous to the one obtained above. \end{enumerate}\end{exercise} \smallskip We now do some (elementary) analysis. \begin{proposition}\label{propga} Let $F\in S_k(\Gamma)$. For $\Re(s)>k/2+1$ we have $$(2\pi)^{-s}\Gamma(s)L(F,s)=\int_0^{\infty}F(it)t^{s-1}\,dt\;.$$ \end{proposition} \begin{proof} Using $\Gamma(s)=\int_0^{\infty}e^{-t}t^{s-1}\,dt$, this is trivial by uniform convergence which ensures that we can integrate term by term.\qed\end{proof} \begin{corollary}\label{corfeq} The function $L(F,s)$ is a holomorphic function which can be analytically continued to the whole of ${\mathbb C}$. In addition, if we set $\Lambda(F,s)=(2\pi)^{-s}\Gamma(s)L(F,s)$ we have the functional equation $\Lambda(F,k-s)=i^{-k}\Lambda(F,s)$.
\end{corollary} Note that in our case $k$ is even, so that $i^{-k}=(-1)^{k/2}$, but we prefer writing the constant as above so as to be able to use a similar result in odd weights, which occur in more general situations. \begin{proof} Indeed, splitting the integral at $1$, changing $t$ into $1/t$ in one of the integrals, and using modularity shows immediately that $$(2\pi)^{-s}\Gamma(s)L(F,s)=\int_1^{\infty}F(it)(t^{s-1}+i^kt^{k-1-s})\,dt\;.$$ Since the integral converges absolutely and uniformly for all $s$ (recall that $F(it)$ tends exponentially fast to $0$ when $t\to\infty$), this immediately implies the corollary.\qed\end{proof} As an aside, note that the integral formula used in the above proof is a very efficient numerical method to compute $L(F,s)$, since the series obtained on the right by term by term integration is exponentially convergent. For instance: \begin{exercise} Let $F(\tau)=\sum_{n\ge1}a(n)q^n$ be the Fourier expansion of a cusp form of weight $k$ on $\Gamma$. Using the above formula, show that the value of $L(F,k/2)$ at the center of the ``critical strip'' $0\le\Re(s)\le k$ is given by the following exponentially convergent series $$L(F,k/2)=(1+(-1)^{k/2})\sum_{n\ge1}\dfrac{a(n)}{n^{k/2}}e^{-2\pi n}P_{k/2}(2\pi n)\;,$$ where $P_{k/2}(X)$ is the polynomial $$P_{k/2}(X)=\sum_{0\le j<k/2}X^j/j!=1+X/1!+X^2/2!+\cdots+X^{k/2-1}/(k/2-1)!\;.$$ Note in particular that if $k\equiv2\pmod4$ we have $L(F,k/2)=0$. Prove this directly. \end{exercise} \begin{exercise}\begin{enumerate}\item Prove that if $F$ is not necessarily a cusp form we have $|a(n)|\le cn^{k-1}$ for some $c>0$. \item Generalize the proposition and the integral formulas so that they are also valid for non-cusp forms; you will have to add polar parts of the type $1/s$ and $1/(s-k)$. \item Show that $L(F,s)$ still extends to the whole of ${\mathbb C}$ with functional equation, but that it has a pole, simple, at $s=k$, and compute its residue. In passing, show that $L(F,0)=-a(0)$.
\end{enumerate} \end{exercise} \subsection{Special Values of $L$-Functions} A general ``paradigm'' on $L$-functions, essentially due to P.~Deligne, is that if some ``natural'' $L$-function has both an Euler product and functional equations similar to the above, then for suitable integral ``special points'' the value of the $L$-function should be a certain (a priori transcendental) number $\omega$ times an algebraic number. In the case of modular forms, this is a theorem of Yu.~Manin: \begin{theorem} Let $F$ be a normalized eigenform in $S_k(\Gamma)$, and denote by $K$ the number field generated by its Fourier coefficients. There exist two nonzero complex numbers $\omega_{+}$ and $\omega_{-}$ such that for $1\le j\le k-1$ integral we have $$\Lambda(F,j)/\omega_{(-1)^j}\in K\;,$$ where we recall that $\Lambda(F,s)=(2\pi)^{-s}\Gamma(s)L(F,s)$. In addition, $\omega_{\pm}$ can be chosen such that $\omega_{+}\omega_{-}=<F,F>$. \end{theorem} In other words, for $j$ odd we have $L(F,j)/\omega_{-}\in K$ while for $j$ even we have $L(F,j)/\omega_{+}\in K$. For instance, in the case $F=\Delta$, if we choose $\omega_{-}=\Lambda(F,3)$ and $\omega_{+}=\Lambda(F,2)$, we have \begin{align*} (\Lambda(F,j))_{1\le j\le 11\ odd}&=(1620/691,1,9/14,9/14,1,1620/691)\omega_{-}\\ (\Lambda(F,j))_{1\le j\le 11\ even}&=(1,25/48,5/12,25/48,1)\omega_{+}\;, \end{align*} and $\omega_{+}\omega_{-}=(8192/225)<F,F>$. \begin{exercise} (see also Exercise \ref{ex:Bol}). For $F\in S_k(\Gamma)$ define the \emph{period polynomial} $P(F;X)$ by $$P(F;X)=\int_0^{i\infty}(X-\tau)^{k-2}F(\tau)\,d\tau\;.$$ \begin{enumerate} \item For $\gamma\in\Gamma$ show that $$P(F;X)|_{2-k}\gamma=\int_{\gamma^{-1}(0)}^{\gamma^{-1}(i\infty)}(X-\tau)^{k-2}F(\tau)\,d\tau\;.$$ \item Show that $P(F;X)$ satisfies \begin{align*} &P(F;X)|_{2-k}S+P(F;X)=0\text{\quad and}\\ &P(F;X)|_{2-k}(ST)^2+P(F;X)|_{2-k}(ST)+P(F;X)=0\;.
\end{align*} \item Show that $$P(F;X)=-\sum_{j=0}^{k-2}(-i)^{k-1-j}\binom{k-2}{j}\Lambda(F,k-1-j)X^j\;.$$ \item If $F=\Delta$, using Manin's theorem above show that up to the multiplicative constant $\omega_+$, $\Re(P(F;X))$ factors completely in ${\mathbb Q}[X]$ as a product of linear polynomials, and show a similar result for $\Im(P(F;X))$ after omitting the extreme terms involving $691$. \end{enumerate}\end{exercise} \subsection{Nonanalytic Eisenstein Series and Rankin--Selberg} If we replace the expression $(c\tau+d)^k$ by $|c\tau+d|^{2s}$ for some complex number $s$, we can also obtain functions which are invariant by $\Gamma$, although they are nonanalytic. More precisely: \begin{definition}\label{def:nonhol} Write as usual $y=\Im(\tau)$. For $\Re(s)>1$ we define \begin{align*}G(s)(\tau)&=\sum_{(c,d)\in{\mathbb Z}^2\setminus\{(0,0)\}}\dfrac{y^s}{|c\tau+d|^{2s}}\text{\quad and}\\ E(s)(\tau)&=\sum_{\gamma\in\Gamma_\infty\backslash\Gamma}\Im(\gamma(\tau))^s=\dfrac{1}{2}\sum_{\gcd(c,d)=1}\dfrac{y^s}{|c\tau+d|^{2s}}\;.\end{align*}\end{definition} This is again an \emph{averaging} procedure, and it follows that $G(s)$ and $E(s)$ are \emph{invariant} under $\Gamma$. In addition, as in the case of the holomorphic Eisenstein series $G_k$ and $E_k$, it is clear that $G(s)=\zeta(2s)E(s)$. One can also easily compute their Fourier expansion, and the result is as follows: \begin{proposition} Set $\Lambda(s)=\pi^{-s/2}\Gamma(s/2)\zeta(s)$. We have the Fourier expansion $$\Lambda(2s)E(s)=\Lambda(2s)y^s+\Lambda(2-2s)y^{1-s}+4y^{1/2}\sum_{n\ge1}\dfrac{\sigma_{2s-1}(n)}{n^{s-1/2}}K_{s-1/2}(2\pi ny)\cos(2\pi nx)\;.$$ \end{proposition} In the above, $K_{\nu}(x)$ is a $K$-Bessel function which we do not define here. The main properties that we need are that it tends to $0$ exponentially (more precisely $K_{\nu}(x)\sim(\pi/(2x))^{1/2}e^{-x}$ as $x\to\infty$) and that $K_{-\nu}=K_{\nu}$.
It follows from the above Fourier expansion that $E(s)$ has an \emph{analytic continuation} to the whole complex plane, that it satisfies the functional equation $\E(1-s)=\E(s)$, where we set $\E(s)=\Lambda(2s)E(s)$, and that $E(s)$ has a unique pole, at $s=1$, which is simple with residue $3/\pi$, independent of $\tau$. \begin{exercise} Using the properties of the Riemann zeta function $\zeta(s)$, show this last property, i.e., that $E(s)$ has a unique pole, at $s=1$, which is simple with residue $3/\pi$, independent of $\tau$.\end{exercise} There are many reasons for introducing these nonholomorphic Eisenstein series, but for us the main reason is that they are fundamental in \emph{unfolding} methods. Recall that using unfolding, in Proposition \ref{prop:unfold} we showed that $E_k$ (or $G_k$) was orthogonal to any cusp form. In the present case, we obtain a different kind of result called a \emph{Rankin--Selberg convolution}. Let $f$ and $g$ be in $M_k(\Gamma)$, one of them being a cusp form. Since $E(s)$ is invariant by $\Gamma$ the scalar product $<E(s)f,g>$ makes sense, and the following proposition gives its value: \begin{proposition} Let $f(\tau)=\sum_{n\ge0}a(n)q^n$ and $g(\tau)=\sum_{n\ge0}b(n)q^n$ be in $M_k(\Gamma)$, with at least one being a cusp form. 
For $\Re(s)>1$ we have $$<E(s)f,g>=\dfrac{\Gamma(s+k-1)}{(4\pi)^{s+k-1}}\sum_{n\ge1}\dfrac{a(n)\ov{b(n)}}{n^{s+k-1}}\;.$$ \end{proposition} \begin{proof} We essentially copy the proof of Proposition \ref{prop:unfold} so we skip the details: setting temporarily $F(\tau)=f(\tau)\ov{g(\tau)}y^k$ which is invariant by $\Gamma$, we have \begin{align*}<E(s)f,g>&=\int_{\Gamma\backslash\H}\sum_{\gamma\in\Gamma_\infty\backslash\Gamma}\Im(\gamma(\tau))^sF(\gamma(\tau))\,d\mu\\ &=\int_{\Gamma_\infty\backslash\H}\Im(\tau)^sF(\tau)\,d\mu\\ &=\int_0^\infty y^{s+k-2}\int_0^1F(x+iy)\,dx\,dy\;.\end{align*} The inner integral is equal to the constant term in the Fourier expansion of $F$, hence is equal to $\sum_{n\ge1}a(n)\ov{b(n)}e^{-4\pi ny}$ (note that by assumption one of $f$ and $g$ is a cusp form, so the term $n=0$ vanishes), and the proposition follows.\qed\end{proof} \begin{corollary} For $\Re(s)>k$ set $$R(f,g)(s)=\sum_{n\ge1}\dfrac{a(n)\ov{b(n)}}{n^s}\;.$$ \begin{enumerate} \item $R(f,g)(s)$ has an analytic continuation to the whole complex plane and satisfies the functional equation $\mathcal R(2k-1-s)=\mathcal R(s)$ with $$\mathcal R(s)=\Lambda(2s-2k+2)(4\pi)^{-s}\Gamma(s)R(f,g)(s)\;.$$ \item $R(f,g)(s)$ has a single pole, which is simple, at $s=k$ with residue $$\dfrac{3}{\pi}\dfrac{(4\pi)^k}{(k-1)!}<f,g>\;.$$ \end{enumerate} \end{corollary} \begin{proof} This immediately follows from the corresponding properties of $E(s)$: we have $$\Lambda(2s-2k+2)(4\pi)^{-s}\Gamma(s)R(f,g)(s)=<\E(s-k+1)f,g>\;,$$ and the right-hand side has an analytic continuation to ${\mathbb C}$ and is invariant when changing $s$ into $2k-1-s$.
In addition by the proposition $E(s-k+1)=\E(s-k+1)/\Lambda(2s-2k+2)$ has a single pole, which is simple, at $s=k$, with residue $3/\pi$, so $R(f,g)(s)$ also has a single pole, which is simple, at $s=k$ with residue $\dfrac{3}{\pi}\dfrac{(4\pi)^k}{(k-1)!}<f,g>$.\qed\end{proof} It is an important fact (see Theorem 7.9 of my notes on $L$-functions in the present volume) that $L$-functions having analytic continuation and standard functional equations can be very efficiently computed at any point in the complex plane (see the note after the proof of Corollary \ref{corfeq} for the special case of $L(F,s)$). Thus the above corollary gives a very efficient method for computing Petersson scalar products. Note that the \emph{holomorphic} Eisenstein series $E_k(\tau)$ can also be used to give Rankin--Selberg convolutions, but now between forms of different weights: \begin{exercise} Let $f=\sum_{n\ge0}a(n)q^n\in M_{\ell}(\Gamma)$ and $g=\sum_{n\ge0}b(n)q^n\in M_{k+\ell}(\Gamma)$, at least one being a cusp form. Using exactly the same unfolding method as in the above proposition or as in Proposition \ref{prop:unfold}, show that $$<E_kf,g>=\dfrac{(k+\ell-2)!}{(4\pi)^{k+\ell-1}}\sum_{n\ge1}\dfrac{a(n)\ov{b(n)}}{n^{k+\ell-1}}\;.$$\end{exercise} \section{Modular Forms on Subgroups of $\Gamma$} \subsection{Types of Subgroups} We have used as basic definition of (weak) modularity $F|_k\gamma=F$ for all $\gamma\in\Gamma$. But there is no reason to restrict to $\Gamma$: we could very well ask the same modularity condition for some group $G$ of transformations of $\H$ different from $\Gamma$. There are many types of such groups, and they have been classified: for us, we will simply distinguish three types, with no justification. For any such group $G$ we can talk about a fundamental domain, similar to $\mathfrak F$ that we have drawn above (I do not want to give a rigorous definition here). 
We can distinguish essentially three types of such domains, corresponding to three types of groups. The first type is when the domain (more precisely its closure) is \emph{compact}: we say in that case that $G$ is \emph{cocompact}. It is equivalent to saying that it does not have any ``cusp'' such as $i\infty$ in the case of $\Gamma$. These groups are very important, but we will not consider them here. The second type is when the domain is not compact (i.e., it has cusps), but it has \emph{finite volume} for the measure $d\mu=dxdy/y^2$ on $\H$ defined in Exercise \ref{ex:dmu}. Such a group is said to have finite \emph{covolume}, and the main example is $G=\Gamma$ that we have just considered, hence also evidently all the subgroups of $\Gamma$ of \emph{finite index}. \begin{exercise} Show that the covolume of the modular group $\Gamma$ is finite and equal to $\pi/3$.\end{exercise} The third type is when the volume is infinite: a typical example is the group $\Gamma_{\infty}$ generated by integer translations, i.e., the set of matrices $\psmm{1}{n}{0}{1}$. A fundamental domain is then any vertical strip in $\H$ of width $1$, which can trivially be shown to have infinite volume. These groups are not important (at least for us) for the following reason: they would have ``too many'' modular forms. For instance, in the case of $\Gamma_{\infty}$ a ``modular form'' would simply be a holomorphic periodic function of period $1$, and we come back to the theory of Fourier series, which is much less interesting. We will therefore restrict to groups of the second type, which are called \emph{Fuchsian groups of the first kind}. In fact, for this course we will even restrict to subgroups $G$ of $\Gamma$ of \emph{finite index}. \medskip However, even with this restriction, it is still necessary to distinguish two types of subgroups: the so-called \emph{congruence subgroups}, and the others, of course called non-congruence subgroups.
The theory of modular forms on non-congruence subgroups is quite a difficult subject and active research is being done on them. One annoying aspect is that they apparently do not have a theory of Hecke operators. Thus we will restrict even more to congruence subgroups. We give the following definitions: \begin{definition} Let $N\ge1$ be an integer. \begin{enumerate} \item We define \begin{align*} \Gamma(N)&=\{\gamma=\begin{pmatrix}a & b\\ c & d\end{pmatrix}\in\Gamma,\ \gamma\equiv \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}\pmod{N}\}\;,\\ \Gamma_1(N)&=\{\gamma=\begin{pmatrix}a & b\\ c & d\end{pmatrix}\in\Gamma,\ \gamma\equiv \begin{pmatrix}1 & *\\ 0 & 1\end{pmatrix}\pmod{N}\}\;,\\ \Gamma_0(N)&=\{\gamma=\begin{pmatrix}a & b\\ c & d\end{pmatrix}\in\Gamma,\ \gamma\equiv \begin{pmatrix}* & *\\ 0 & *\end{pmatrix}\pmod{N}\}\;,\\ \end{align*} where the congruences are component-wise and $*$ indicates that no congruence is imposed. \item A subgroup of $\Gamma$ is said to be a \emph{congruence subgroup} if it contains $\Gamma(N)$ for some $N$, and the smallest such $N$ is called the \emph{level} of the subgroup. \end{enumerate} \end{definition} It is clear that $\Gamma(N)\subset\Gamma_1(N)\subset\Gamma_0(N)$, and it is trivial to prove that $\Gamma(N)$ is normal in $\Gamma$ (hence in any subgroup of $\Gamma$ containing it), that $\Gamma_1(N)/\Gamma(N)\simeq{\mathbb Z}/N{\mathbb Z}$ (with the map $\psmm{a}{b}{c}{d}\mapsto b\bmod N$), and that $\Gamma_1(N)$ is normal in $\Gamma_0(N)$ with $\Gamma_0(N)/\Gamma_1(N)\simeq({\mathbb Z}/N{\mathbb Z})^*$ (with the map $\psmm{a}{b}{c}{d}\mapsto d\bmod N$). If $G$ is a congruence subgroup of level $N$ we have $\Gamma(N)\subset G$, so (whatever the definition) a modular form on $G$ will in particular be on $\Gamma(N)$.
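Since the reduction map $\Gamma\to\mathrm{SL}_2({\mathbb Z}/N{\mathbb Z})$ is surjective (a standard fact which we use without proof), the index $[\Gamma:\Gamma_0(N)]$ can be computed by brute-force counting in $\mathrm{SL}_2({\mathbb Z}/N{\mathbb Z})$; the sketch below checks the closed formula $N\prod_{p\mid N}(1+1/p)$ for this index, which reappears in the dimension formula of the next subsection, for small $N$:

```python
from itertools import product
from math import prod

def index_gamma0(N):
    """[Gamma : Gamma_0(N)], via the surjection SL_2(Z) -> SL_2(Z/NZ):
    the index equals |SL_2(Z/NZ)| divided by the size of the image of
    Gamma_0(N), i.e. of the matrices with c = 0 mod N."""
    G = [(a, b, c, d) for a, b, c, d in product(range(N), repeat=4)
         if (a * d - b * c) % N == 1]
    G0 = [g for g in G if g[2] == 0]
    return len(G) // len(G0)

def prime_divisors(N):
    return [p for p in range(2, N + 1)
            if N % p == 0 and all(p % q for q in range(2, p))]

for N in range(2, 9):
    ps = prime_divisors(N)
    # N * prod_{p | N} (1 + 1/p), kept in integer arithmetic
    assert index_gamma0(N) == N * prod(p + 1 for p in ps) // prod(ps)
```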
Because of the above isomorphisms, it is not difficult to reduce the study of forms on $\Gamma(N)$ to those on $\Gamma_1(N)$, and the latter to forms on $\Gamma_0(N)$, except that we have to add a slight ``twist'' to the modularity property. Thus for simplicity, we will restrict to modular forms on $\Gamma_0(N)$. \subsection{Modular Forms on Subgroups} In view of the definition given for $\Gamma$, it is natural to say that $F$ is weakly modular of weight $k$ on $\Gamma_0(N)$ if for all $\gamma\in\Gamma_0(N)$ we have $F|_k\gamma=F$, where we recall that if $\gamma=\psmm{a}{b}{c}{d}$ then $F|_k\gamma(\tau)=(c\tau+d)^{-k}F(\gamma(\tau))$. To obtain a modular \emph{form}, we need also to require that $F$ is holomorphic on $\H$, plus some additional technical condition ``at infinity''. In the case of the full modular group $\Gamma$, this condition was that $F(\tau)$ remains bounded as $\Im(\tau)\to\infty$. In the case of a subgroup, this condition is not sufficient (it is easy to show that if we do not require an additional condition the corresponding space will in general be infinite-dimensional). There are several equivalent ways of giving the additional condition. One is the following: writing as usual $\tau=x+iy$, we require that there exists $N$ such that in the strip $-1/2\le x\le 1/2$, we have $|F(\tau)|\le y^N$ as $y\to\infty$ and $|F(\tau)|\le y^{-N}$ as $y\to0$ (since $F$ is $1$-periodic, there is no loss of generality in restricting to the strip). It is easily shown that if $F$ is weakly modular and holomorphic, then the above inequalities imply that $|F(\tau)|$ is in fact \emph{bounded} as $y\to\infty$ (but in general \emph{not} as $y\to0$), so the first condition is exactly the one that we gave in the case of the full modular group. Similarly, we can define a \emph{cusp form} by asking that in the above strip $|F(\tau)|$ tends to $0$ as $y\to\infty$ and as $y\to0$.
\begin{exercise} If $F\in M_k(\Gamma)$ show that the second condition $|F(\tau)|\le y^{-N}$ as $y\to0$ is satisfied. \end{exercise} Now that we have a solid definition of modular form, we can try to proceed as in the case of the full modular group. A number of things can easily be generalized. It is always convenient to choose a system of representatives $(\gamma_j)$ of right cosets for $\Gamma_0(N)$ in $\Gamma$, so that $$\Gamma=\bigsqcup_j\Gamma_0(N)\gamma_j\;.$$ For instance, if $\mathfrak F$ is the fundamental domain of $\Gamma$ seen above, one can choose ${\mathcal D}=\bigsqcup_j\gamma_j(\mathfrak F)$ as fundamental domain for $\Gamma_0(N)$. The theorem that we gave on valuations generalizes immediately: $$\sum_{\tau\in\ov{{\mathcal D}}}\dfrac{v_{\tau}(F)}{e_{\tau}}=\dfrac{k}{12}[\Gamma:\Gamma_0(N)]\;,$$ where $\ov{{\mathcal D}}$ is ${\mathcal D}$ to which is added a finite number of ``cusps'' (we do not explain this; it is \emph{not} the topological closure), $e_{\tau}=2$ (resp., $3$) if $\tau$ is $\Gamma$-equivalent to $i$ (resp., to $\rho$), and $e_{\tau}=1$ otherwise, and we can then deduce the dimension of $M_k(\Gamma_0(N))$ and $S_k(\Gamma_0(N))$ as we did for $\Gamma$: \begin{theorem}\label{thmdim} We have $M_0(\Gamma_0(N))={\mathbb C}$ (i.e., the only modular forms of weight $0$ are the constants) and $S_0(\Gamma_0(N))=\{0\}$.
For $k\ge2$ even, we have \begin{align*}\dim(M_k(\Gamma_0(N)))&=A_1-A_{2,3}-A_{2,4}+A_3\text{\quad and}\\ \dim(S_k(\Gamma_0(N)))&=A_1-A_{2,3}-A_{2,4}-A_3+\delta_{k,2}\;,\end{align*} where $\delta_{k,2}$ is the Kronecker delta ($1$ if $k=2$, $0$ otherwise) and the $A_i$ are given as follows: \begin{align*} A_1&=\dfrac{k-1}{12}N\prod_{p\mid N}\left(1+\dfrac{1}{p}\right)\;,\\ A_{2,3}&=\left(\dfrac{k-1}{3}-\left\lfloor\dfrac{k}{3}\right\rfloor\right)\prod_{p\mid N}\left(1+\leg{-3}{p}\right)\text{\quad if\ \ $9\nmid N$,\quad $0$\ \ otherwise,}\\ A_{2,4}&=\left(\dfrac{k-1}{4}-\left\lfloor\dfrac{k}{4}\right\rfloor\right)\prod_{p\mid N}\left(1+\leg{-4}{p}\right)\text{\quad if\ \ $4\nmid N$,\quad $0$\ \ otherwise,}\\ A_3&=\dfrac{1}{2}\sum_{d\mid N}\phi(\gcd(d,N/d))\;. \end{align*} \end{theorem} \subsection{Examples of Modular Forms on Subgroups} We give a few examples of modular forms on subgroups. First note the following easy lemma: \begin{lemma}\label{lembd} If $F\in M_k(\Gamma_0(N))$ then for any $m\in{\mathbb Z}_{\ge1}$ we have $F(m\tau)\in M_k(\Gamma_0(mN))$.\end{lemma} \begin{proof} Trivial since when $\psmm{a}{b}{c}{d}\in\Gamma_0(mN)$ one can write $(m(a\tau+b)/(c\tau+d))=(a(m\tau)+mb)/((c/m)\tau+d)$.\qed\end{proof} Thus we can already construct many forms on subgroups, but in a sense they are not very interesting, since they are ``old'' in a precise sense that we will define below. \smallskip A second more interesting example is Eisenstein series: there are more general Eisenstein series than those that we have seen for $\Gamma$, but we simply give the following important example: using a similar proof to the above lemma we can construct Eisenstein series of \emph{weight $2$} as follows. Recall that $E_2(\tau)=1-24\sum_{n\ge1}\sigma_1(n)q^n$ is not quite modular, and that $E_2^*(\tau)=E_2(\tau)-3/(\pi\Im(\tau))$ is weakly modular (but of course non-holomorphic).
Consider the function $F(\tau)=NE_2(N\tau)-E_2(\tau)$, analogous to the construction of the lemma with a correction term. We have the evident but crucial fact that we also have $F(\tau)=NE_2^*(N\tau)-E_2^*(\tau)$ (since $\Im(\tau)$ is multiplied by $N$), so $F$ is also weakly modular on $\Gamma_0(N)$, but since it is holomorphic we have thus constructed a (nonzero) modular form of weight $2$ on $\Gamma_0(N)$. \smallskip A third important example is provided by theta series. This would require a book in itself, so we restrict to the simplest case. We have seen in Corollary \ref{corth} that the function $T(a)=\sum_{n\in{\mathbb Z}}e^{-a\pi n^2}$ satisfies $T(1/a)=a^{1/2}T(a)$, which looks like (and is) a modularity condition. This was for $a>0$ real. Let us generalize and for $\tau\in\H$ set $$\th(\tau)=\sum_{n\in{\mathbb Z}}q^{n^2}=\sum_{n\in{\mathbb Z}}e^{2\pi in^2\tau}\;,$$ so that for instance we simply have $T(a)=\th(ia/2)$. The proof of the functional equation for $T$ that we gave using Poisson summation is still valid in this more general case and shows that $$\th(-1/(4\tau))=(2\tau/i)^{1/2}\th(\tau)\;.$$ On the other hand, the definition trivially shows that $\th(\tau+1)=\th(\tau)$. If we denote by $W_4$ the matrix $\psmm{0}{-1}{4}{0}$ corresponding to the map $\tau\mapsto-1/(4\tau)$ and as usual $T=\psmm{1}{1}{0}{1}$, we thus have $\th|_{1/2}W_4=c\th$ and $\th|_{1/2}T=\th$ for some $8$th root of unity $c$. (Note: we always use the principal determination of the square roots; if you are uncomfortable with this, simply square everything, this is what we will do below anyway.) This implies that if we let $\Gamma_{\th}$ be the intersection of $\Gamma$ with the group generated by $W_4$ and $T$ (as transformations of $\H$), then for all $\gamma\in\Gamma_{\th}$ we will have $\th|_{1/2}\gamma=c(\gamma)\th$ for some $8$th root of unity $c(\gamma)$, but in fact $c(\gamma)$ is a $4$th root of unity which we will give explicitly below.
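Both transformation formulas for $\th$ are easy to check numerically to high precision (a quick sanity check; the point $\tau$ below is an arbitrary choice in the upper half-plane):

```python
import cmath
import math

def theta(tau, terms=60):
    """theta(tau) = sum_{n in Z} q^(n^2), with q = exp(2 pi i tau)."""
    q = cmath.exp(2j * math.pi * tau)
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

tau = 0.3 + 0.7j                   # arbitrary point of the upper half-plane
lhs = theta(-1 / (4 * tau))
# principal square root, as in the text; since Re(2*tau/i) = 2*Im(tau) > 0,
# no branch problem occurs here
rhs = cmath.sqrt(2 * tau / 1j) * theta(tau)
periodic = theta(tau + 1) - theta(tau)   # should vanish
```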
One can easily describe this group $\Gamma_{\th}$, and in particular show that it contains $\Gamma_0(4)$ as a subgroup of index $2$. This implies that $\th^4\in M_2(\Gamma_0(4))$, and more generally of course $\th^{4m}\in M_{2m}(\Gamma_0(4))$. As one of the most famous applications of the finite-dimensionality of modular form spaces, solve the following exercise: \begin{exercise}\label{exr4r8}\begin{enumerate} \item Using the dimension formulas, show that $2E_2(2\tau)-E_2(\tau)$ together with $4E_2(4\tau)-E_2(\tau)$ form a basis of $M_2(\Gamma_0(4))$. \item Using the Fourier expansion of $E_2$, deduce an explicit formula for the Fourier expansion of $\th^4$, and hence that $r_4(n)$, the number of representations of $n$ as a sum of $4$ squares (in ${\mathbb Z}$, all permutations counted) is given for $n\ge1$ by the formula $$r_4(n)=8(\sigma_1(n)-4\sigma_1(n/4))\;,$$ where it is understood that $\sigma_1(x)=0$ if $x\notin{\mathbb Z}$. In particular, show that this trivially implies Lagrange's theorem that every integer is a sum of four squares. \item Similarly, show that $r_8(n)$, the $n$th Fourier coefficient of $\th^8$, is given for $n\ge1$ by $$r_8(n)=16(\sigma_3(n)-2\sigma_3(n/2)+16\sigma_3(n/4))\;.$$ \end{enumerate} \end{exercise} \begin{remark} Using more general methods one can give ``closed'' formulas for $r_k(n)$ for $k=1$, $2$, $3$, $4$, $5$, $6$, $7$, $8$, and $10$, see e.g., \cite{Coh-Str}.\end{remark} \subsection{Hecke Operators and $L$-Functions} We can introduce the same Hecke operators as before, but to have a reasonable definition we must add a coprimality condition: we define $T(n)(\sum_{m\ge0}a(m)q^m)=\sum_{m\ge0}b(m)q^m$, with $$b(m)=\sum_{\substack{d\mid\gcd(m,n)\\\gcd(d,N)=1}}d^{k-1}a(mn/d^2)\;.$$ This additional condition $\gcd(d,N)=1$ is of course automatically satisfied if $n$ is coprime to $N$, but not otherwise.
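The action of $T(n)$ on $q$-expansions is straightforward to implement; as a minimal sanity check (a sketch with $N=1$ and $k=12$, where the condition $\gcd(d,N)=1$ is vacuous), one can verify on $\Delta=q\prod_{n\ge1}(1-q^n)^{24}$ that $T(n)\Delta=\tau(n)\Delta$ and that $T(2)T(3)=T(6)$:

```python
from math import gcd

def delta_coeffs(M):
    """Coefficients a(0..M-1) of Delta = q * prod_{n>=1} (1 - q^n)^24."""
    P = [0] * M
    P[0] = 1
    for n in range(1, M):            # multiply 24 times by (1 - q^n)
        for _ in range(24):
            for m in range(M - 1, n - 1, -1):
                P[m] -= P[m - n]
    return [0] + P[:M - 1]           # shift by the leading factor q

def hecke(n, a, k):
    """b(m) = sum_{d | gcd(m,n)} d^(k-1) a(m*n/d^2) at level N = 1,
    returned for m < len(a)//n so every needed coefficient is available."""
    b = []
    for m in range(len(a) // n):
        g = gcd(m, n) if m > 0 else n
        b.append(sum(d ** (k - 1) * a[m * n // (d * d)]
                     for d in range(1, g + 1) if g % d == 0))
    return b

tau_coeffs = delta_coeffs(61)        # tau(1) = 1, tau(2) = -24, ...
t2 = hecke(2, tau_coeffs, 12)        # eigenform: equals tau(2) * Delta
```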
One then shows exactly as in the case of the full modular group that $$T(n)T(m)=\sum_{\substack{d\mid\gcd(n,m)\\\gcd(d,N)=1}}d^{k-1}T(nm/d^2)\;,$$ that they preserve modularity, so in particular the $T(n)$ form a commutative algebra of operators on $S_k(\Gamma_0(N))$. And this is where the difficulties specific to subgroups of $\Gamma$ begin: in the case of $\Gamma$ we stated (without proof or definition) that the $T(n)$ were \emph{Hermitian} with respect to the Petersson scalar product, and deduced the existence of eigenforms for \emph{all} Hecke operators. Unfortunately here the same proof shows that the $T(n)$ are Hermitian when $n$ is coprime to $N$, but \emph{not} otherwise. It follows that there exist common eigenforms for the $T(n)$, but \emph{only} for $n$ coprime to $N$, which creates difficulties. An analogous problem occurs for \emph{Dirichlet characters}: if $\chi$ is a Dirichlet character modulo $N$, it may in fact come by natural extension from a character modulo $M$ for some divisor $M\mid N$, $M<N$. The characters which have nice properties, in particular with respect to the functional equation of their $L$-functions, are the \emph{primitive} characters, for which such an $M$ does not exist. A similar but slightly more complicated thing can be done for modular forms. It is clear that if $M\mid N$ and $F\in M_k(\Gamma_0(M))$, then of course $F\in M_k(\Gamma_0(N))$. More generally, by Lemma \ref{lembd}, for any $d\mid N/M$ we have $F(d\tau)\in M_k(\Gamma_0(N))$. Thus we want to exclude such ``oldforms''. However, it is not sufficient to say that a newform is not an oldform. The correct definition is to define a newform as a form which is \emph{orthogonal} to the space of oldforms with respect to the scalar product, and of course the new space is the space of newforms. Note that in the case of Dirichlet characters this orthogonality condition (for the standard scalar product of two characters) is automatically satisfied so need not be added.
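Digressing briefly back to Exercise \ref{exr4r8}: the closed formulas there for $r_4(n)$ and $r_8(n)$ are easy to confirm by brute-force enumeration for small $n$ (a sanity check, not a proof):

```python
from itertools import product

def count_reps(k, n):
    """r_k(n): ordered representations of n as a sum of k integer squares."""
    B = int(n ** 0.5)
    return sum(1 for v in product(range(-B, B + 1), repeat=k)
               if sum(x * x for x in v) == n)

def sigma(k, x):
    """sigma_k(x), with the convention sigma_k(x) = 0 if x is not a positive integer."""
    if x < 1 or x != int(x):
        return 0
    x = int(x)
    return sum(d ** k for d in range(1, x + 1) if x % d == 0)

def r4(n):
    return 8 * (sigma(1, n) - 4 * sigma(1, n / 4))

def r8(n):
    return 16 * (sigma(3, n) - 2 * sigma(3, n / 2) + 16 * sigma(3, n / 4))
```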
This theory was developed by Atkin--Lehner--Li, and the new space $S_k^{\text{new}}(\Gamma_0(N))$ can be shown to have all the nice properties that we require. Although not trivial, one can prove that it has a basis of common eigenforms for \emph{all} Hecke operators, not only those with $n$ coprime to $N$. More precisely, one shows that in the new space an eigenform for the $T(n)$ for all $n$ coprime to $N$ is automatically an eigenform for \emph{any} operator which commutes with all the $T(n)$, such as, of course, the $T(m)$ for $\gcd(m,N)>1$. In addition, we have not really lost anything by restricting to the new space, since it is easy to show that $$S_k(\Gamma_0(N))=\bigoplus_{M\mid N}\bigoplus_{d\mid N/M}B(d)S_k^{\text{new}}(\Gamma_0(M))\;,$$ where $B(d)$ is the operator sending $F(\tau)$ to $F(d\tau)$. Note that the sums in the above formula are \emph{direct} sums. \begin{exercise} The above formula shows that $$\dim(S_k(\Gamma_0(N)))=\sum_{M\mid N}\sigma_0(N/M)\dim(S_k^{\text{new}}(\Gamma_0(M)))\;,$$ where $\sigma_0(n)$ is the number of divisors of $n$. \begin{enumerate}\item Using the M\"obius inversion formula, show that if we define an arithmetic function $\beta$ by $\beta(p)=-2$, $\beta(p^2)=1$, and $\beta(p^k)=0$ for $k\ge3$, and extend by multiplicativity ($\beta(\prod p_i^{v_i})=\prod\beta(p_i^{v_i})$), we have the following dimension formula for the new space: $$\dim(S_k^{\text{new}}(\Gamma_0(N)))=\sum_{M\mid N}\beta(N/M)\dim(S_k(\Gamma_0(M)))\;.$$ \item Using Theorem \ref{thmdim}, deduce a direct formula for the dimension of the new space. \end{enumerate} \end{exercise} \begin{proposition} Let $F\in S_k(\Gamma_0(N))$ and $W_N=\psmm{0}{-1}{N}{0}$. \begin{enumerate}\item We have $F|_kW_N\in S_k(\Gamma_0(N))$, where $$F|_kW_N(\tau)=N^{-k/2}\tau^{-k}F(-1/(N\tau))\;.$$ \item If $F$ is an eigenform (in the new space) then $F|_kW_N=\pm F$ for a suitable sign $\pm$. 
\end{enumerate} \end{proposition} \begin{proof} (1): this simply follows from the fact that $W_N$ \emph{normalizes} $\Gamma_0(N)$: $W_N^{-1}\Gamma_0(N)W_N=\Gamma_0(N)$ as can easily be checked, and the same result would be true for any other normalizing operator such as the \emph{Atkin--Lehner} operators which we will not define. The operator $W_N$ is called the \emph{Fricke involution}. (2): It is easy to show that $W_N$ commutes with all Hecke operators $T(n)$ when $\gcd(n,N)=1$, so by what we have mentioned above, if $F$ is an eigenform in the new space it is automatically an eigenform for $W_N$, and since $W_N$ acts as an involution, its eigenvalues are $\pm1$.\qed\end{proof} The eigenforms can again be normalized with $a(1)=1$, and their $L$-function has an Euler product, of a slightly more general shape: $$L(F,s)=\prod_{p\nmid N}\dfrac{1}{1-a(p)p^{-s}+p^{k-1}p^{-2s}}\prod_{p\mid N}\dfrac{1}{1-a(p)p^{-s}}\;.$$ Proposition \ref{propga} is of course still valid, but is not the correct normalization to obtain a functional equation. We replace it by $$N^{s/2}(2\pi)^{-s}\Gamma(s)L(F,s)=\int_0^{\infty}F(it/N^{1/2})t^{s-1}\,dt\;,$$ which of course is trivial from the proposition by replacing $t$ by $t/N^{1/2}$. Indeed, thanks to the above proposition we split the integral at $t=1$, and using the action of $W_N$ we deduce the following proposition: \begin{proposition} Let $F\in S_k^{\text{new}}(\Gamma_0(N))$ be an eigenform for all Hecke operators, and write $F|_kW_N=\varepsilon F$ for some $\varepsilon=\pm1$. 
The $L$-function $L(F,s)$ extends to a holomorphic function in ${\mathbb C}$, and if we set $\Lambda(F,s)=N^{s/2}(2\pi)^{-s}\Gamma(s)L(F,s)$ we have the functional equation $$\Lambda(F,k-s)=\varepsilon i^{-k}\Lambda(F,s)\;.$$ \end{proposition} \begin{proof} Indeed, the trivial change of variable $t$ into $1/t$ proves the formula $$N^{s/2}(2\pi)^{-s}\Gamma(s)L(F,s)=\int_1^{\infty}F(it/N^{1/2})(t^{s-1}+\varepsilon i^kt^{k-1-s})\,dt\;,$$ from which the result follows.\qed\end{proof} Once again, we leave it to the reader to check that if $F(\tau)=\sum_{n\ge1}a(n)q^n$ we have $$L(F,k/2)=(1+\varepsilon(-1)^{k/2})\sum_{n\ge1}\dfrac{a(n)}{n^{k/2}}e^{-2\pi n/N^{1/2}}P_{k/2}(2\pi n/N^{1/2})\;.$$ \subsection{Modular Forms with Characters} Consider again the problem of sums of squares, in other words of the powers of $\th(\tau)$. We needed to raise it to a power which is a multiple of $4$ so as to have a pure modularity property as we defined it above. But consider the function $\th^2(\tau)$. The same proof that we mentioned for $\th^4$ shows that for any $\gamma=\psmm{a}{b}{c}{d}\in\Gamma_0(4)$ we have $$\th^2(\gamma(\tau))=\leg{-4}{d}(c\tau+d)\th^2(\tau)\;,$$ where $\lgs{-4}{d}$ is the Legendre--Kronecker character (in this specific case equal to $(-1)^{(d-1)/2}$ since $d$ is odd, being coprime to $c$). Thus it satisfies a modularity property, except that it is ``twisted'' by $\lgs{-4}{d}$. Note that the equation makes sense since if we change $\gamma$ into $-\gamma$ (which does not change $\gamma(\tau)$), then $(c\tau+d)$ is changed into $-(c\tau+d)$, and $\lgs{-4}{d}$ is changed into $\lgs{-4}{-d}=-\lgs{-4}{d}$. It is thus essential that the multiplier that we put in front of $(c\tau+d)^k$, here $\lgs{-4}{d}$, has the same parity as $k$. We mentioned above that the study of modular forms on $\Gamma_1(N)$ could be reduced to those on $\Gamma_0(N)$ ``with a twist''.
Indeed, more precisely it is trivial to show that $$M_k(\Gamma_1(N))=\bigoplus_{\chi(-1)=(-1)^k}M_k(\Gamma_0(N),\chi)\;,$$ where $\chi$ ranges through all Dirichlet characters modulo $N$ of the specified parity, and where $M_k(\Gamma_0(N),\chi)$ is defined as the space of functions $F$ satisfying $$F(\gamma(\tau))=\chi(d)(c\tau+d)^kF(\tau)$$ for all $\gamma=\psmm{a}{b}{c}{d}\in\Gamma_0(N)$, plus the usual holomorphy and conditions at the cusps (note that $\gamma\mapsto\chi(d)$ is the group homomorphism from $\Gamma_0(N)$ to ${\mathbb C}^*$ which induces the above-mentioned isomorphism from $\Gamma_0(N)/\Gamma_1(N)$ to $({\mathbb Z}/N{\mathbb Z})^*$). \begin{exercise}\begin{enumerate} \item Show that a system of coset representatives of $\Gamma_1(N)\backslash\Gamma_0(N)$ is given by matrices $M_d=\psmm{u}{-v}{N}{d}$, where $0\le d<N$ such that $\gcd(d,N)=1$ and $u$ and $v$ are such that $ud+vN=1$. \item Let $f\in M_k(\Gamma_1(N))$. Show that in the above decomposition of $M_k(\Gamma_1(N))$ we have $f=\sum_{\chi(-1)=(-1)^k}f_{\chi}$ with $$f_{\chi}=\dfrac{1}{\phi(N)}\sum_{0\le d<N,\ \gcd(d,N)=1}\overline{\chi(d)}f|_kM_d\;.$$ \end{enumerate}\end{exercise} These spaces are just as nice as the spaces $M_k(\Gamma_0(N))$ and share exactly the same properties. They have finite dimension (which we do not give), there are Eisenstein series, Hecke operators, newforms, Euler products, $L$-functions, etc... An excellent rule of thumb is simply to replace any formula containing $d^{k-1}$ (or $p^{k-1}$) by $\chi(d)d^{k-1}$ (or $\chi(p)p^{k-1}$). In fact, in the Euler product of the $L$-function of an eigenform we do not need to distinguish $p\nmid N$ and $p\mid N$ since we have $$L(F,s)=\prod_{p\in P}\dfrac{1}{1-a(p)p^{-s}+\chi(p)p^{k-1-2s}}\;,$$ and $\chi(p)=0$ if $p\mid N$ since $\chi$ is a character modulo $N$.
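As an illustration of the recursions forced by such an Euler product (a sketch; it uses the classical fact that $\eta(\tau)^2\eta(11\tau)^2$ is the normalized eigenform spanning $S_2(\Gamma_0(11))$, with trivial character), one can check directly on the $q$-expansion that $a(mn)=a(m)a(n)$ for $\gcd(m,n)=1$, that $a(p^2)=a(p)^2-p^{k-1}$ for $p\nmid N$, and that $a(p^2)=a(p)^2$ for $p\mid N$:

```python
def eta_product(M):
    """prod_{n>=1} (1 - q^n), truncated to O(q^M)."""
    P = [0] * M
    P[0] = 1
    for n in range(1, M):
        for m in range(M - 1, n - 1, -1):
            P[m] -= P[m - n]
    return P

def mult(A, B):
    """Product of two truncated power series of equal length."""
    M = len(A)
    C = [0] * M
    for i, x in enumerate(A):
        if x:
            for j in range(M - i):
                C[i + j] += x * B[j]
    return C

M = 150
e = eta_product(M)
e11 = [e[n // 11] if n % 11 == 0 else 0 for n in range(M)]
# eta(tau)^2 eta(11 tau)^2 = q * prod (1 - q^n)^2 (1 - q^(11n))^2
a = [0] + mult(mult(e, e), mult(e11, e11))[:M - 1]
```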
Thus, for instance $\th^2\in M_1(\Gamma_0(4),\chi_{-4})$, more generally $\th^{4m+2}\in M_{2m+1}(\Gamma_0(4),\chi_{-4})$, where we use the notation $\chi_D$ for the Legendre--Kronecker symbol $\lgs{D}{d}$. The space $M_1(\Gamma_0(4),\chi_{-4})$ has dimension $1$, generated by the single Eisenstein series $$1+4\sum_{n\ge1}\sigma_0^{(-4)}(n)q^n\;,\text{\quad where\quad}\sigma_{k-1}^{(D)}(n)=\sum_{d\mid n}\leg{D}{d}d^{k-1}$$ according to our rule of thumb (which does not tell us the constant $4$). Comparing constant coefficients, we deduce that $r_2(n)=4\sigma_0^{(-4)}(n)$, where as usual $r_2(n)$ is the number of representations of $n$ as a sum of two squares. This formula was in essence discovered by Fermat. For $r_6(n)$ we must work slightly harder: $\th^6\in M_3(\Gamma_0(4),\chi_{-4})$, and this space has dimension $2$, generated by two Eisenstein series. The first is the natural ``rule of thumb'' one (which again does not give us the constant) $$F_1=1-4\sum_{n\ge1}\sigma_2^{(-4)}(n)q^n\;,$$ and the second is $$F_2=\sum_{n\ge1}\sigma_2^{(-4,*)}(n)q^n\;,$$ where $$\sigma_{k-1}^{(D,*)}(n)=\sum_{d\mid n}\leg{D}{n/d}d^{k-1}\;,$$ a sort of dual to $\sigma_{k-1}^{(D)}$ (this notation is mine). Since $\th^6=1+12q+\cdots$, comparing the Fourier coefficients of $1$ and $q$ shows that $\th^6=F_1+16F_2$, so we deduce that $$r_6(n)=-4\sigma_2^{(-4)}(n)+16\sigma_2^{(-4,*)}(n) =\sum_{d\mid n}\left(16\leg{-4}{n/d}-4\leg{-4}{d}\right)d^2\;.$$ \subsection{Remarks on Dimension Formulas and Galois Representations} The explicit dimension formulas alluded to above are valid for $k\in{\mathbb Z}$ \emph{except} for $k=1$; in addition, thanks to the theorems mentioned below, we also have explicit dimension formulas for $k\in 1/2+{\mathbb Z}$. Thus, the theory of modular forms of weight $1$ is very special, and their general construction more difficult.
This is also reflected in the construction of \emph{Galois representations} attached to modular eigenforms, which is an important and deep subject that we will not mention in this course, except to say the following: in weight $k\ge2$ these representations are $\ell$-adic (or modulo $\ell$), i.e., with values in $\GL_2({\mathbb Q}_\ell)$ (or $\GL_2({\mathbb F}_\ell)$), while in weight $1$ they are \emph{complex} representations, i.e., with values in $\GL_2({\mathbb C})$. The construction in weight $2$ is quite old, and comes directly from the construction of the so-called \emph{Tate module} $T(\ell)$ attached to an Abelian variety (more precisely the Jacobian of a modular curve), while the construction in higher weight, due to Deligne, is much deeper since it implies the third Ramanujan conjecture $|\tau(p)|<2p^{11/2}$. Finally, the case of weight $1$ is due to Deligne--Serre, in fact using the construction for $k\ge2$ and congruences. \subsection{Origins of Modular Forms} Modular forms are all-pervasive in mathematics, physics, and combinatorics. We just want to mention the most important constructions: \begin{itemize} \item Historically, the first modular forms were probably \emph{theta functions} (this dates back to J.~Fourier at the beginning of the 19th century in his treatment of the \emph{heat equation}) such as $\th(\tau)$ seen above, and more generally theta functions associated to \emph{lattices}. These functions can have integral or half-integral weight (see below) depending on whether the number of variables which occur (equivalently, the dimension of the lattice) is even or odd. Later, these theta functions were generalized by introducing \emph{spherical polynomials} associated to the lattice. For example, the theta function associated to the lattice ${\mathbb Z}^2$ is simply $f(\tau)=\sum_{(x,y)\in{\mathbb Z}^2}q^{x^2+y^2}$, which is clearly equal to $\th^2$, so belongs to $M_1(\Gamma_0(4),\chi_{-4})$.
But we can also consider for instance $$f_5(\tau)=\sum_{(x,y)\in{\mathbb Z}^2}(x^4-6x^2y^2+y^4)q^{x^2+y^2}\;,$$ and show that $f_5\in S_5(\Gamma_0(4),\chi_{-4})$: \begin{exercise}\begin{enumerate} \item Using the notation and results of Exercise \ref{ex:RC}, show that $[\th,\th]_2=cf_5$ for a suitable constant $c$, so that in particular $f_5\in S_5(\Gamma_0(4),\chi_{-4})$. \item Show that the polynomial $P(x,y)=x^4-6x^2y^2+y^4$ is a \emph{spherical polynomial}, in other words that $D(P)=0$, where $D$ is the Laplace differential operator $D=\partial^2/\partial x^2+\partial^2/\partial y^2$. \end{enumerate} \end{exercise} \item The second occurrence of modular forms is probably \emph{Eisenstein series}, which in fact are the first that we encountered in this course. We have only seen the most basic Eisenstein series $G_k$ (or normalized versions) on the full modular group and a few on $\Gamma_0(4)$, but there are very general constructions over any space such as $M_k(\Gamma_0(N),\chi)$. Their Fourier expansions can easily be explicitly computed and are similar to what we have given above. More difficult is the case when $k$ is only half-integral, but this can also be done. As we have seen, an important generalization of Eisenstein series is given by \emph{Poincar\'e series}, which can also be defined over any space as above. \item A third important construction of modular forms comes from the Dedekind eta function $\eta(\tau)$ defined above. In itself it has a complicated \emph{multiplier system}, but if we define an \emph{eta quotient} as $F(\tau)=\prod_{m\in I}\eta(m\tau)^{r_m}$ for a certain set $I$ of positive integers and exponents $r_m\in{\mathbb Z}$, then it is not difficult to write necessary and sufficient conditions for $F$ to belong to some $M_k(\Gamma_0(N),\chi)$. The first example that we have met is of course the Ramanujan delta function $\Delta(\tau)=\eta(\tau)^{24}$.
Other examples are for instance $\eta(\tau)\eta(23\tau)\in S_1(\Gamma_0(23),\chi_{-23})$, $\eta(\tau)^2\eta(11\tau)^2\in S_2(\Gamma_0(11))$, and $\eta(2\tau)^{30}/\eta(\tau)^{12}\in S_9(\Gamma_0(8),\chi_{-4})$. \item Closely related to eta quotients are $q$-identities involving the $q$-Pochhammer symbol $(q)_n$ and generalizing those seen in Exercise \ref{expoch}, many of which give modular forms not related to the eta function. \item A much deeper construction comes from algebraic geometry: by the modularity theorem of Wiles et al., to any elliptic curve defined over ${\mathbb Q}$ is associated a modular form in $S_2(\Gamma_0(N))$ which is a normalized Hecke eigenform, where $N$ is the so-called \emph{conductor} of the curve. For instance the eta quotient of level $11$ just seen above is the modular form associated to the isogeny class of the elliptic curve of conductor $11$ with equation $y^2+y=x^3-x^2-10x-20$. \end{itemize} \section{More General Modular Forms} In this brief section, we will describe modular forms of a more general kind than those seen up to now. \subsection{Modular Forms of Half-Integral Weight} Coming back again to the function $\th$, the formulas seen above suggest that $\th$ itself must be considered a modular form, of weight $1/2$. We have already mentioned that $$\th^2(\gamma(\tau))=\leg{-4}{d}(c\tau+d)\th^2(\tau)\;.$$ But what about $\th$ itself? For this, we must be very careful about the determination of the square root: Notation: $z^{1/2}$ will \emph{always} denote the principal determination of the square root, i.e., such that $-\pi/2<\Arg(z^{1/2})\le\pi/2$. For instance $(2i)^{1/2}=1+i$, $(-1)^{1/2}=i$. Warning: we do not in general have $(z_1z_2)^{1/2}=z_1^{1/2}z_2^{1/2}$, but only up to sign. As a second notation, when $k$ is odd, $z^{k/2}$ will \emph{always} denote $(z^{1/2})^k$ and \emph{not} $(z^k)^{1/2}$ (for instance $(2i)^{3/2}=(1+i)^3=-2+2i$, while $((2i)^3)^{1/2}=2-2i$). 
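These conventions are exactly those of Python's `cmath.sqrt` (principal branch), so they can be checked mechanically (the pair $z_1=z_2=-1$ below is just an illustrative choice):

```python
import cmath

# (2i)^(1/2) = 1 + i and (-1)^(1/2) = i with the principal determination
s1 = cmath.sqrt(2j)
s2 = cmath.sqrt(-1 + 0j)
# for odd k, z^(k/2) means (z^(1/2))^k: here (2i)^(3/2) = (1 + i)^3 = -2 + 2i ...
s3 = cmath.sqrt(2j) ** 3
# ... which differs from ((2i)^3)^(1/2) = sqrt(-8i) = 2 - 2i
s4 = cmath.sqrt((2j) ** 3)
# likewise (z1 z2)^(1/2) = z1^(1/2) z2^(1/2) only holds up to sign:
p1 = cmath.sqrt((-1 + 0j) * (-1 + 0j))           # sqrt(1) = 1
p2 = cmath.sqrt(-1 + 0j) * cmath.sqrt(-1 + 0j)   # i * i = -1
```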
Thus, let us try and take the square root of the modularity equation for $\th^2$: $$\th(\gamma(\tau))=v(\gamma,\tau)\leg{-4}{d}^{1/2}(c\tau+d)^{1/2}\th(\tau)\;,$$ where $v(\gamma,\tau)=\pm1$ and may depend on $\gamma$ and $\tau$. A detailed study of Gauss sums shows that $v(\gamma,\tau)=\lgs{-4c}{d}$, the general Kronecker symbol, so that the modularity equation for $\th$ is, for any $\gamma\in\Gamma_0(4)$: $$\th(\gamma(\tau))=v_{\th}(\gamma)(c\tau+d)^{1/2}\th(\tau)\text{\quad with\quad}v_{\th}(\gamma)=\leg{c}{d}\leg{-4}{d}^{-1/2}\;.$$ Note that there is something very subtle going on here: this complicated \emph{theta multiplier system} $v_{\th}(\gamma)$ must satisfy a complicated \emph{cocycle relation} coming from the trivial identity $\th((\gamma_1\gamma_2)(\tau))=\th(\gamma_1(\gamma_2(\tau)))$ which can be shown to be equivalent to the general \emph{quadratic reciprocity law}. The following definition is due to G.~Shimura: \begin{definition} Let $k\in1/2+{\mathbb Z}$. A function $F$ from $\H$ to ${\mathbb C}$ will be said to be a modular form of (half integral) weight $k$ on $\Gamma_0(N)$ with character $\chi$ if for all $\gamma=\psmm{a}{b}{c}{d}\in\Gamma_0(N)$ we have $$F(\gamma(\tau))=v_{\th}(\gamma)^{2k}\chi(d)(c\tau+d)^kF(\tau)\;,$$ and if the usual holomorphy and conditions at the cusps are satisfied (equivalently if $F^2\in M_{2k}(\Gamma_0(N),\chi^2\chi_{-4})$). \end{definition} Note that if $k\in1/2+{\mathbb Z}$ we have $v_{\th}(\gamma)^{4k}=\chi_{-4}$, which explains the extra factor $\chi_{-4}$ in the above definition. Since $v_{\th}(\gamma)$ is defined only for $\gamma\in\Gamma_0(4)$ we need $\Gamma_0(N)\subset\Gamma_0(4)$, in other words $4\mid N$.
In addition, by definition $v_{\th}(\gamma)(c\tau+d)^{1/2}=\th(\gamma(\tau))/\th(\tau)$ is invariant if we change $\gamma$ into $-\gamma$, so if $k\in1/2+{\mathbb Z}$ the same is true of $v_{\th}(\gamma)^{2k}(c\tau+d)^k$, hence it follows that in the above definition we must have $\chi(-d)=\chi(d)$, i.e., $\chi$ must be an \emph{even} character ($\chi(-1)=1$). As usual, we denote by $M_k(\Gamma_0(N),\chi)$ and $S_k(\Gamma_0(N),\chi)$ the spaces of modular and cusp forms. The theory is more difficult than the theory in integral weight, but is now well developed. We mention a few items: \begin{enumerate}\item There is an explicit but more complicated \emph{dimension formula} due to J.~Oesterl\'e and the author. \item By a theorem of Serre--Stark, modular forms of weight $1/2$ are simply linear combinations of \emph{unary theta functions} generalizing the function $\th$ above. \item One can easily construct Eisenstein series, but the computation of their Fourier expansion, due to Shimura and the author, is more complicated. \item As usual, if we can express $\th^m$ solely in terms of Eisenstein series, this leads to explicit formulas for $r_m(n)$, the number of representations of $n$ as a sum of $m$ squares. Thus, we obtain explicit formulas for $r_3(n)$ (due to Gauss), $r_5(n)$ (due to Smith and Minkowski), and $r_7(n)$, so together with the formulas in integral weight, we have explicit formulas for $r_m(n)$ for $1\le m\le 8$ and $m=10$. \item The deeper part of the theory, which is specific to the half-integral weight case, is the existence of \emph{Shimura lifts} from $M_k(\Gamma_0(N),\chi)$ to $M_{2k-1}(\Gamma_0(N/2),\chi^2)$, the description of the \emph{Kohnen subspace} $S_k^+(\Gamma_0(N),\chi)$, which allows both the Shimura lift to go down to level $N/4$ and the definition of a suitable Atkin--Lehner type new space, and the deep results of Waldspurger, which nicely complement the work of Shimura on lifts.
\end{enumerate} \medskip We could try to find other types of interesting modularity properties than those coming from $\th$. For instance, we have seen that the Dedekind eta function is a modular form of weight $1/2$ (not in Shimura's sense), and more precisely it satisfies the following modularity equation, now for any $\gamma\in\Gamma$: $$\eta(\gamma(\tau))=v_{\eta}(\gamma)(c\tau+d)^{1/2}\eta(\tau)\;,$$ where $v_{\eta}(\gamma)$ is a very complicated $24$-th root of unity. We could of course define $\eta$-modular forms of half-integral weight $k\in1/2+{\mathbb Z}$ by requiring $F(\gamma(\tau))=v_{\eta}(\gamma)^{2k}(c\tau+d)^kF(\tau)$, but it can be shown that this would not lead to any interesting theory (more precisely the only interesting functions would be \emph{eta-quotients} $F(\tau)=\prod_m\eta(m\tau)^{r_m}$, which can be studied directly without any new theory). Note that there are functional relations between $\eta$ and $\th$: \begin{proposition} We have $$\th(\tau)=\dfrac{\eta^2(\tau+1/2)}{\eta(2\tau+1)}=\dfrac{\eta^5(2\tau)}{\eta^2(\tau)\eta^2(4\tau)}\;.$$\end{proposition} \begin{exercise}\begin{enumerate}\item Prove these relations in the following way: first show that the right-hand sides satisfy the same modularity equations as $\th$ for $T=\psmm{1}{1}{0}{1}$ and $W_4=\psmm{0}{-1}{4}{0}$, so in particular that they are weakly modular on $\Gamma_0(4)$, and second show that they are really modular forms, in other words that they are holomorphic on $\H$ and at the cusps. \item Using the definition of $\eta$, deduce two \emph{product expansions} for $\th(\tau)$.\end{enumerate} \end{exercise} We could also try to study modular forms of fractional or even real weight $k$ not integral or half-integral, but this would lead to functions with no interesting \emph{arithmetical} properties.
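Returning to the proposition relating $\th$ and $\eta$, both expressions are easy to confirm numerically (a quick check at an arbitrarily chosen point, truncating the defining series and product):

```python
import cmath
import math

def eta(tau, terms=120):
    """Dedekind eta: q^(1/24) prod_{n>=1} (1 - q^n), q = exp(2 pi i tau)."""
    q = cmath.exp(2j * math.pi * tau)
    val = cmath.exp(2j * math.pi * tau / 24)
    for n in range(1, terms):
        val *= 1 - q ** n
    return val

def theta(tau, terms=40):
    q = cmath.exp(2j * math.pi * tau)
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

tau = 0.2 + 0.6j                   # arbitrary point of the upper half-plane
t = theta(tau)
id1 = eta(tau + 0.5) ** 2 / eta(2 * tau + 1)
id2 = eta(2 * tau) ** 5 / (eta(tau) ** 2 * eta(4 * tau) ** 2)
```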
In a different direction, we can relax the condition of holomorphy (or meromorphy) and ask that the functions be eigenfunctions of the \emph{hyperbolic Laplace operator} $$\Delta=-y^2\left(\dfrac{\partial^2}{\partial x^2}+\dfrac{\partial^2}{\partial y^2}\right)=-4y^2\dfrac{\partial^2}{\partial\tau\partial\ov{\tau}}$$ which can be shown to be invariant under $\Gamma$ (more generally under $\SL_2({\mathbb R})$) together with suitable boundedness conditions. This leads to the important theory of \emph{Maass forms}. The case of the eigenvalue $0$ reduces to ordinary modular forms since $\Delta(F)=0$ is equivalent to $F$ being a linear combination of a holomorphic and antiholomorphic (i.e., conjugate to a holomorphic) function, each of which will be modular or conjugate of modular. The case of the eigenvalue $1/4$ also leads to functions having nice arithmetical properties, but all other eigenvalues give functions with (conjecturally) transcendental coefficients; these functions are nonetheless useful in number theory for other reasons which we cannot explain here. Note that a famous conjecture of Selberg asserts that for \emph{congruence subgroups} there are no eigenvalues $\lambda$ with $0<\lambda<1/4$. For instance, for the full modular group, the smallest nonzero eigenvalue is $\lambda=91.1412\cdots$, which is quite large. \begin{exercise} Using the fact that $\Delta$ is invariant under $\Gamma$ show that $\Delta(\Im(\gamma(\tau))^s)=s(1-s)\Im(\gamma(\tau))^s$ and deduce that the nonholomorphic Eisenstein series $E(s)$ introduced in Definition \ref{def:nonhol} is an eigenfunction of the hyperbolic Laplace operator with eigenvalue $s(1-s)$ (note that it does not satisfy the necessary boundedness conditions, so it is not a Maass form: the functions $E(s)$ with $\Re(s)=1/2$ constitute what is called the \emph{continuous spectrum}, and the Maass forms the \emph{discrete spectrum} of $\Delta$ acting on $\Gamma\backslash\H$).
\end{exercise} \subsection{Modular Forms in Several Variables}\label{sec:several} The last generalization that we want to mention (there are many more!) is to several variables. The natural idea is to consider holomorphic functions from $\H^r$ to ${\mathbb C}$, now for some $r>1$, satisfying suitable modularity properties. If we simply ask that $\gamma\in\Gamma$ (or some subgroup) acts component-wise, we will not obtain anything interesting. The right way to do it, introduced by Hilbert--Blumenthal, is to consider a \emph{totally real} number field $K$ of degree $r$, and denote by $\Gamma_K$ the group of matrices $\gamma=\psmm{a}{b}{c}{d}\in\SL_2({\mathbb Z}_K)$, where ${\mathbb Z}_K$ is the ring of algebraic integers of $K$ (we could also consider the larger group $\GL_2({\mathbb Z}_K)$, which leads to a very similar theory). Such a $\gamma$ has $r$ \emph{embeddings} $\gamma_i$ into $\SL_2({\mathbb R})$, which we will denote by $\gamma_i=\psmm{a_i}{b_i}{c_i}{d_i}$, and the correct definition is to ask that $$F(\gamma_1(\tau_1),\cdots,\gamma_r(\tau_r))=(c_1\tau_1+d_1)^k\cdots(c_r\tau_r+d_r)^kF(\tau_1,\dots,\tau_r)\;.$$ Note that the restriction to totally real number fields is due to the fact that for $\gamma_i$ to preserve the upper-half plane it is necessary that $\gamma_i\in\SL_2({\mathbb R})$. Note also that the $\gamma_i$ are \emph{not} independent: they are conjugates of a single $\gamma\in\SL_2({\mathbb Z}_K)$. A holomorphic function satisfying the above is called a \emph{Hilbert--Blumenthal} modular form (of \emph{parallel weight $k$}; one can also consider forms where the exponents for the different embeddings are not equal), or more simply a Hilbert modular form (note that there are no ``conditions at infinity'', since one can prove that they are automatically satisfied unless $K={\mathbb Q}$).
Since $T=\psmm{1}{1}{0}{1}\in\SL_2({\mathbb Z}_K)$ is equal to all its conjugates, such modular forms have Fourier expansions, but using the action of $\psmm{1}{\alpha}{0}{1}$ with $\alpha\in{\mathbb Z}_K$ it is easy to show that these expansions are of a special type, involving the \emph{codifferent} ${\mathfrak d}^{-1}$ of $K$, which is the fractional ideal of $x\in K$ such that $\Tr(x{\mathbb Z}_K)\subset{\mathbb Z}$, where $\Tr$ denotes the trace. One can construct Eisenstein series, here called Hecke--Eisenstein series, and compute their Fourier expansion. One of the important consequences of this computation is that it gives an explicit formula for the value $\zeta_K(1-k)$ of the \emph{Dedekind zeta function} of $K$ at negative integers (hence by the functional equation of $\zeta_K$, also at positive even integers), and in particular it proves that these values are \emph{rational numbers}, a theorem due to C.-L.~Siegel as an immediate consequence of Theorem \ref{thmsieg}. An example is as follows: \begin{proposition} Let $K={\mathbb Q}(\sqrt{D})$ be a real quadratic field with $D$ a fundamental discriminant. Then: \begin{enumerate} \item We have \begin{align*}\zeta_K(-1)&=\dfrac{1}{60}\sum_{|s|<\sqrt{D}}\sigma_1\left(\dfrac{D-s^2}{4}\right)\;,\\ \zeta_K(-3)&=\dfrac{1}{120}\sum_{|s|<\sqrt{D}}\sigma_3\left(\dfrac{D-s^2}{4}\right)\;.\end{align*} \item We also have formulas such as \begin{align*} \sum_{|s|<\sqrt{D}}\sigma_1(D-s^2)&=60\left(9-2\leg{D}{2}\right)\zeta_K(-1)\;,\\ \sum_{|s|<\sqrt{D}}\sigma_3(D-s^2)&=120\left(129-8\leg{D}{2}\right)\zeta_K(-3)\;.\\ \end{align*} \end{enumerate} \end{proposition} We can of course reformulate these results in terms of $L$-functions by using $L(\chi_D,-1)=-12\zeta_K(-1)$ and $L(\chi_D,-3)=120\zeta_K(-3)$, where as usual $\chi_D$ is the quadratic character modulo $D$. 
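The first formula of the proposition can be tested against an independent computation of $L(\chi_D,-1)=-B_{2,\chi_D}/2$ via generalized Bernoulli numbers (a sketch restricted to prime $D\equiv1\pmod4$, so that $\chi_D$ is just the Legendre symbol; recall $\zeta_K(-1)=-L(\chi_D,-1)/12$):

```python
from fractions import Fraction

def sigma1(x):
    """sigma_1(x), with sigma_1(x) = 0 unless x is a positive integer."""
    if x < 1 or x != int(x):
        return 0
    x = int(x)
    return sum(d for d in range(1, x + 1) if x % d == 0)

def zeta_K_minus1(D):
    """zeta_K(-1) for K = Q(sqrt(D)) via the sigma_1 formula of the proposition."""
    smax = int(D ** 0.5)
    return Fraction(sum(sigma1(Fraction(D - s * s, 4))
                        for s in range(-smax, smax + 1)), 60)

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    t = pow(a, (p - 1) // 2, p)
    return t if t <= 1 else t - p      # 0, 1 or -1

def zeta_K_minus1_via_L(D):
    """Same value from L(chi_D,-1) = -B_{2,chi}/2 (prime D = 1 mod 4 only)."""
    B2 = lambda x: x * x - x + Fraction(1, 6)    # second Bernoulli polynomial
    B2chi = D * sum(legendre(a, D) * B2(Fraction(a, D)) for a in range(1, D))
    return B2chi / 24                            # -L(chi_D,-1)/12 = B2chi/24
```

For instance both computations give $\zeta_{{\mathbb Q}(\sqrt5)}(-1)=1/30$.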
\begin{exercise} Using Exercise \ref{exr4r8} and the above formulas, show that the number $r_5(D)$ of representations of $D$ as a sum of $5$ squares is given by $$r_5(D)=480\left(5-2\leg{D}{2}\right)\zeta_K(-1)=-40\left(5-2\leg{D}{2}\right)L(\chi_D,-1)\;.$$ \end{exercise} Note that this formula can be generalized to arbitrary $D$, and is due to Smith and (much later) to Minkowski. There also exists a similar formula for $r_7(D)$: when $-D$ (\emph{not} $D$) is a fundamental discriminant $$r_7(D)=-28\left(41-4\leg{D}{2}\right)L(\chi_{-D},-2)\;.$$ \smallskip Note also that if we restrict to the \emph{diagonal} $\tau_1=\cdots=\tau_r$, a Hilbert modular form of (parallel) weight $k$ gives rise to an ordinary modular form of weight $kr$. \medskip We finish this section with some terminology with no explanation: if $K$ is \emph{not} a totally real number field, one can also define modular forms, but they will not be defined on products of the upper-half plane $\H$ alone, but will also involve the \emph{hyperbolic $3$-space} $\H_3$. Such forms are called \emph{Bianchi} modular forms. A different generalization, close to the Weierstrass $\wp$-function seen above, is the theory of \emph{Jacobi forms}, due to M.~Eichler and D.~Zagier. One of the many interesting aspects of this theory is that it mixes in a nontrivial way properties of forms of integral weight with forms of half-integral weight. Finally, we mention \emph{Siegel modular forms}, introduced by C.-L.~Siegel, which are defined on higher-dimensional \emph{symmetric spaces}, on which the \emph{symplectic groups} $\Sp_{2n}({\mathbb R})$ act. The case $n=1$ gives ordinary modular forms, and the next simplest, $n=2$, is closely related to Jacobi forms since the Fourier coefficients of Siegel modular forms of degree $2$ can be expressed in terms of Jacobi forms. 
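Returning to the exercise above, the Smith--Minkowski formula for $r_5(D)$ can be checked by brute force for small fundamental discriminants $D$. The sketch below (our own verification) assumes the Kronecker symbol convention $\leg{D}{2}=+1$ for $D\equiv\pm1\pmod 8$ and $-1$ for $D\equiv\pm3\pmod 8$, and the convention that $\sigma_1$ vanishes outside the positive integers.

```python
from fractions import Fraction
from itertools import product

def sigma1(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def kronecker2(D):
    """Kronecker symbol (D|2): +1 for D = ±1 (mod 8), -1 for D = ±3 (mod 8)."""
    return 0 if D % 2 == 0 else (1 if D % 8 in (1, 7) else -1)

def r5_formula(D):
    """480*(5 - 2*(D|2))*zeta_K(-1), with zeta_K(-1) from the sigma_1 formula."""
    zk = Fraction(sum(sigma1((D - s * s) // 4)
                      for s in range(-int(D**0.5), int(D**0.5) + 1)
                      if s * s < D and (D - s * s) % 4 == 0), 60)
    return 480 * (5 - 2 * kronecker2(D)) * zk

def r5_direct(D):
    """Brute-force count of (x1,...,x5) in Z^5 with x1^2 + ... + x5^2 = D."""
    b = int(D**0.5)
    return sum(1 for v in product(range(-b, b + 1), repeat=5)
               if sum(x * x for x in v) == D)

for D in (5, 13):                       # fundamental discriminants
    assert r5_formula(D) == r5_direct(D)
print(r5_direct(5), r5_direct(13))
```

For instance $r_5(5)=112$: the representations $5=4+1$ contribute $5\cdot4\cdot4=80$ tuples and $5=1+1+1+1+1$ contributes $2^5=32$, matching $480\cdot 7\cdot\frac{1}{30}$.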
\section{Some Pari/GP Commands} There exist three software packages which are able to compute with modular forms: {\tt magma}, {\tt Sage}, and {\tt Pari/GP} since the spring of 2018. We give here some basic {\tt Pari/GP} commands with little or no explanation (which is available by typing {\tt ?} or {\tt ??}): we encourage the reader to read the tutorial {\tt tutorial-mf} available with the distribution and to practice with the package, since it is an excellent way to learn about modular forms. All commands begin with the prefix {\tt mf}, with the exception of {\tt lfunmf} which more properly belongs to the $L$-function package. Creation of modular forms: {\tt mfDelta} (Ramanujan Delta), {\tt mfTheta} (ordinary theta function), {\tt mfEk} (normalized Eisenstein series $E_k$), more generally {\tt mfeisenstein}, {\tt mffrometaquo} (eta quotients), {\tt mffromqf} (theta function of lattices with or without spherical polynomial), {\tt mffromell} (from elliptic curves over ${\mathbb Q}$), etc... Arithmetic operations: {\tt mfcoefs} (Fourier coefficients at infinity), {\tt mflinear} (linear combination, so including addition/subtraction and scalar multiplication), {\tt mfmul}, {\tt mfdiv}, {\tt mfpow} (clear), etc... Modular operations: {\tt mfbd}, {\tt mftwist}, {\tt mfhecke}, {\tt mfatkin}, {\tt mfderivE2}, {\tt mfbracket}, etc... Creation of modular form \emph{spaces}: {\tt mfinit}, {\tt mfdim} (dimension of the space), {\tt mfbasis} (random basis of the space), {\tt mftobasis} (decomposition of a form on the {\tt mfbasis}), {\tt mfeigenbasis} (basis of normalized eigenforms). Searching for modular forms with given Fourier coefficients: {\tt mfeigensearch}, {\tt mfsearch}. Expansion of $F|_k\gamma$: {\tt mfslashexpansion}. 
Numerical functions: {\tt mfeval} (evaluation at a point in $\H$ or at a cusp), {\tt mfcuspval} (valuation at a cusp), {\tt mfsymboleval} (computation of integrals over paths in the completed upper-half plane), {\tt mfpetersson} (Petersson scalar product), {\tt lfunmf} ($L$-function associated to a modular form), etc... Note that for now {\tt Pari/GP} is the only package for which these last functions (beginning with {\tt mfslashexpansion}) are implemented. \section{Suggestions for further Reading} The literature on modular forms is vast, so I will only mention the books which I am familiar with and that in my opinion will be very useful to the reader. Note that the classic book \cite{Shi} is absolutely remarkable, but may be difficult for a beginning course. In addition to the recent book \cite{Coh-Str} by F.~Str\"omberg and the author (which of course I strongly recommend !!!), I also highly recommend the paper \cite{Zag}, which is essentially a small book. Perhaps the most classical reference is \cite{Miy}. The more recent book \cite{Dia-Shu} is more advanced since its ultimate goal is to explain the modularity theorem of Wiles et al. \bigskip
https://arxiv.org/abs/2203.16068
Euler-symmetric complete intersection in projective space
Euler-symmetric projective varieties, introduced by Baohua Fu and Jun-Muk Hwang in 2020, are nondegenerate projective varieties admitting many $\mathbb{C}^{\times}$-actions of Euler type. They are quasi-homogeneous and uniquely determined by their fundamental forms at a general point. In this paper, we study complete intersections in projective spaces which are Euler-symmetric. It is proven that such varieties are complete intersections of hyperquadrics and the base locus of the second fundamental form at a general point is again a complete intersection.
\section{Introduction}\label{In} Throughout this paper, we work over the field of complex numbers $\mathbb{C}$. In \cite{FH20}, the following notion is introduced as a quasi-homogeneous generalization of Hermitian symmetric spaces. \begin{defn} Let $Z \subset \mathbb{P}V$ be a projective variety. For a nonsingular point $x\in Z$, a $\mathbb{C}^{\times}$-action on $Z$ coming from a multiplicative subgroup of $\mathrm{GL}(V)$ is said to be of \textit{Euler type} at $x$, if $x$ is an isolated fixed point of the induced action on $Z$ and the isotropy action on the tangent space $T_xZ$ is by scalar multiplication (i.e., the induced action on $\mathbb{P}T_xZ$ is trivial). We say that $Z \subset \mathbb{P}V$ is {\em Euler-symmetric} if for a general point $x\in Z$, there exists a $\mathbb{C}^{\times}$-action on $Z$ of Euler type at $x$. \end{defn} In \cite[Proposition 2.7]{FH20}, it is shown that any Euler-symmetric projective variety is uniquely determined by its fundamental forms at general points. Conversely, given a symbol system $\textbf{F} \subseteq \oplus_{k=0}^r\operatorname{Sym}^k(W^*)$ of rank $r$ (cf. Definition \ref{d.symbol}), there exists a unique Euler-symmetric projective variety (denoted by $M(\textbf{F})$), whose fundamental forms at general points are isomorphic to $\textbf{F}$ (\cite[Theorem 3.7]{FH20}). It is a challenging problem to relate geometrical properties of $M(\textbf{F})$ to algebraic properties of $\textbf{F}$. It turns out that any Euler-symmetric projective variety $Z$ is quasi-homogeneous. More precisely, it is an equivariant compactification of a vector group (\cite[Theorem 3.7]{FH20}), namely there exists a $\mathbb{G}_a^n$-action on $Z$ with an open orbit, where $n=\dim Z$. In this way, we obtain lots of examples of equivariant compactifications of vector groups, which are in general singular. In the smooth case, there are several papers dedicated to the study of different equivariant compactification structures on a given variety. 
The first one is the work of Hassett and Tschinkel \cite{hassett1999geometry}, where they established a correspondence between equivariant compactification structures on projective space $\mathbb{P}^n$ and commutative associative local algebras with unit of dimension $n+1$. This result also follows from a more general correspondence between finite-dimensional commutative associative unital algebras and open equivariant embeddings of commutative linear algebraic groups into projective space established by Knop and Lange \cite{knop1984commutative}. Equivariant compactification structures on projective hypersurfaces are studied in \cite{arzhantsev2014additive}, and the case of flag varieties is studied in \cite{arzhantsev2011flag} and \cite{devyatov2015unipotent}. There are also some works on toric varieties \cite{arzhantsev2017additive}, \cite{dzhunusov2020uniqueness}. The purpose of this article is to determine when an Euler-symmetric projective variety is a complete intersection. As is well-known, a smooth complete intersection has continuous automorphisms if and only if it is either a smooth hyperquadric or a smooth cubic plane curve. It follows that a smooth complete intersection is Euler-symmetric if and only if it is a smooth hyperquadric. Hence, our main focus is on singular complete intersections, which are much less studied. Our first result is the following: \begin{thm}\label{thm1} Let $Z \subseteq \mathbb{P}(V)$ be an Euler-symmetric variety corresponding to a symbol system $\textbf{F}$ of rank $r\geq 3$. Then $Z\subseteq \mathbb{P}(V)$ is not a complete intersection. \end{thm} It follows that if $M(\textbf{F}) \subseteq \mathbb{P}(V_{\textbf{F}}) $ is an Euler-symmetric complete intersection corresponding to a symbol system $\textbf{F}$, then the rank of $\textbf{F}$ is $2$. There are examples of Euler-symmetric varieties of rank 2 which are not complete intersections (see Example \ref{exam38}). 
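To see the hyperquadric case concretely, here is a minimal computational sketch (an illustration of our own choosing, not taken from the paper): on the smooth conic $Z=\{z_0z_2-z_1^2=0\}\subset\mathbb{P}^2$, the action $t\cdot[z_0:z_1:z_2]=[z_0:tz_1:t^2z_2]$ is of Euler type at $x=[1:0:0]$.

```python
# The smooth conic Z = {z0*z2 - z1^2 = 0} in P^2, with the C^x-action
# t.[z0 : z1 : z2] = [z0 : t*z1 : t^2*z2] (weights 0, 1, 2).
def Q(z0, z1, z2):
    return z0 * z2 - z1**2

def act(t, p):
    z0, z1, z2 = p
    return (z0, t * z1, t**2 * z2)

# Q is an eigenvector of the action, Q(t.p) = t^2 * Q(p), so Z is
# preserved; we check this on integer sample points [1 : u : u^2] of Z.
for t in (2, 3, -5):
    for u in (-2, 0, 1, 4):
        p = (1, u, u * u)
        assert Q(*p) == 0 and Q(*act(t, p)) == t**2 * Q(*p) == 0

# x = [1:0:0] is fixed; in the chart z0 = 1 the curve is the parabola
# u -> (u, u^2) and the action reads (u, v) -> (t*u, t^2*v), so x is an
# isolated fixed point and the linearization on the tangent line
# T_xZ = span(d/du) is multiplication by t: scalar, i.e. Euler type.
assert act(7, (1, 0, 0)) == (1, 0, 0)
print("Euler-type action checked on sample points")
```

The weights $0,1,2$ on the coordinates are exactly what makes the isotropy action on $T_xZ$ scalar while fixing $x$ itself.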
Now let $\textbf{F}$ be a symbol system of rank $2$, namely $F^2$ is the subspace of quadratic polynomials generated by $Q_1, \cdots, Q_k$. The associated Euler-symmetric variety $M(\textbf{F})$ is covered by lines. For a general point $x\in M(\textbf{F})$, let $\mathcal{L}_x(M(\textbf{F}))$ denote the variety of lines through $x$. We denote by $\textbf{Bs}(F)\subseteq \mathbb{P}^{n-1}$ the intersection of hyperquadrics $\{Q_1=\cdots=Q_k=0\}$. \begin{thm}\label{thm2} Let $M(\textbf{F})\subseteq \mathbb{P}V_{\textbf{F}}$ be an Euler-symmetric variety of dimension $n$ associated to a symbol system $\textbf{F}$ of rank $2$. The following statements are equivalent: \begin{enumerate} \item $M(\textbf{F})$ is a complete intersection of codimension $c$ in $\mathbb{P}(V_{\textbf{F}})$; \item $\textbf{Bs}(F)\subseteq \mathbb{P}^{n-1}$ is a set-theoretic complete intersection of codimension $c$; \item $\mathcal{L}_x(M(\textbf{F})) \subseteq \mathbb{P}T_xM(\textbf{F})$ is a set-theoretic complete intersection of codimension $c$, for a general point $x\in M(\textbf{F})$. \end{enumerate} If the above equivalent statements hold, then we have $c = \operatorname{codim}_{\mathbb{P}V}(M(\textbf{F})) \leq n$. \end{thm} For the proof, we first show that an Euler-symmetric complete intersection is a complete intersection of hyperquadrics (Proposition \ref{prop31}). When the rank is at least $3$, we then find, after several computations, one more homogeneous polynomial that does not lie in the radical of the ideal generated by the hyperquadrics defined in Section \ref{sect3}, which proves Theorem \ref{thm1}. For Theorem \ref{thm2}, we use the relation between regular sequences of homogeneous polynomials and set-theoretic complete intersections to obtain a numerical criterion for $Y$ to be a set-theoretic complete intersection (Proposition \ref{prop41}). 
Finally, we prove equivalent conditions for $M(\textbf{F})$ to be a complete intersection in Theorem \ref{thm42}, and prove in Theorem \ref{thm41} that $M(\textbf{F}) = Y$ is a set-theoretic complete intersection of hyperquadrics, which allows us to conclude the proof of Theorem \ref{thm2}. \textbf{Acknowledgements.} The author is greatly indebted to his advisor Baohua Fu for illuminating discussions, guidance and revising this paper. The author is also very grateful to Zheng Xu and Renjie Lyu for helpful discussions. \section{Preliminaries} We first recall some definitions from \cite{FH20}. \begin{defn} Let $W$ be a vector space. For $w\in W$, define $$\iota_w: \operatorname{Sym}^{k+1}W^* \to \operatorname{Sym}^{k}W^*, \, \iota_w\varphi(w_1, \cdots, w_k) = \varphi(w, w_1, \cdots, w_k),$$ for any $w_1, \cdots, w_k \in W$. By convention, we define $\iota_w(\operatorname{Sym}^0W^*) = 0$. For a subspace $F^k \subseteq \operatorname{Sym}^{k}W^*$ of symmetric $k$-linear forms on $W$, define its prolongation $\textbf{prolong}(F^k) \subseteq \operatorname{Sym}^{k+1}W^*$ by $$ \textbf{prolong}(F^k):= \bigcap_{w\in W}\iota_w^{-1}(F^k).$$ \end{defn} \begin{rmk} By Lemma 3.5 of \cite{FH20}, the restriction of $\iota_w^k$ to $F^k$ determines an element in $(F^k)^*$, which is just the map $\phi \mapsto \phi(w, \cdots, w)$. By abuse of notation, we just denote it by $\iota_w^k \in (F^k)^*$. \end{rmk} \begin{defn} \label{d.symbol} Let $W$ be a vector space. Fix a natural number $r$. A subspace $$ \textbf{F} = \oplus_{k\geq 0}F^k \subseteq \oplus_{k\geq 0}\operatorname{Sym}^{k}W^* $$ with $$ F^0 = \mathbb{C} = \operatorname{Sym}^{0}W^*, \; F^1 = W^*,\; F^r \neq 0, \; \text{and}\; F^{r+i} = 0 \; \forall \; i \geq 1, $$ is called a \textit{symbol system of rank $r$}, if $F^{k+1} \subseteq \textbf{prolong}(F^k)$ for each $1\leq k \leq r$. \end{defn} \begin{rmk} One can find in Chapter $3$ of \cite{ivey2003cartan} a more general definition of fundamental forms. 
By Cartan's theorem (p.68 \cite{LM03} or Exercise 3.5.10 in \cite{ivey2003cartan}) the fundamental forms at a general point form a symbol system. \end{rmk} \begin{defn}\label{rank} Given a symbol system $\textbf{F}$, define a rational map $$ \phi_{\textbf{F}} : \mathbb{P}(\mathbb{C}\oplus W) \dashrightarrow \mathbb{P}(\mathbb{C}\oplus W \oplus (F^2)^*\oplus \cdots \oplus (F^r)^*), $$ by $$ [t:w] \mapsto [t^r: t^{r-1}w: t^{r-2}\iota^2_w:\cdots: t\, \iota^{r-1}_w:\iota^r_w]. $$ Write $V_{\textbf{F}}:=\mathbb{C} \oplus W \oplus (F^2)^*\oplus \cdots \oplus (F^r)^*$. We will denote the closure of the image of the rational map $\phi_{\textbf{F}}$ by $M(\textbf{F}) \subseteq \mathbb{P}(V_{\textbf{F}})$. Then $\phi_{\textbf{F}} : \mathbb{P}(\mathbb{C}\oplus W) \dashrightarrow M(\textbf{F})$ is a birational map and $M(\textbf{F}) \subseteq \mathbb{P}(V_{\textbf{F}})$ is nondegenerate. We say the projective variety $M(\textbf{F})$ associated to a symbol system $\textbf{F}$ has rank $r$, denoted by $\operatorname{rank}(M(\textbf{F}))$, if the symbol system $\textbf{F}$ has rank $r$. Set $$N = \operatorname{dim}(\mathbb{P}(V_{\textbf{F}})), \; n = \operatorname{dim}(\mathbb{P}(\mathbb{C}\oplus W)) = \operatorname{dim}(M(\textbf{F})). $$ \end{defn} \begin{rmk} Let $r = \operatorname{rank}(M(\textbf{F}))$. If $r = 1$, then $M(\textbf{F}) = \mathbb{P}^n = \mathbb{P}^N$. Hence, we always assume that $r \geq 2$. \end{rmk} \begin{thm}[Theorem 3.7 of \cite{FH20}]\label{thmfh20} Let $o= [1:0:\cdots:0]\in M(\textbf{F})$ be the point $\phi_{\textbf{F}}([t=1: w=0])$. Then: \begin{enumerate} \item The natural action of the vector group $W$ on $\mathbb{P}(\mathbb{C}\oplus W)$ can be extended to an action of $W$ on $\mathbb{P}(V_{\textbf{F}})$ preserving $M(\textbf{F})$ such that the orbit of $o$ is an open subset biregular to $W$. 
\item The $\mathbb{C}^{\times}$-action on $W$ with weight $1$ induces a $\mathbb{C}^{\times}$-action on $M(\textbf{F})$ of Euler type at $o$, making $M(\textbf{F})$ Euler-symmetric. \item The system of fundamental forms of $M(\textbf{F})\subseteq \mathbb{P}(V_{\textbf{F}})$ at $o$ is isomorphic to the symbol system $\textbf{F}$. \end{enumerate} Conversely, any Euler-symmetric projective variety is of the form $M(\textbf{F})$ for some symbol system $\textbf{F}$ on a vector space $W$. \end{thm} Recall that a smooth complete intersection in a projective space has no continuous automorphisms unless it is a smooth hyperquadric or a smooth cubic plane curve. This immediately gives the following. \begin{cor} A smooth Euler-symmetric variety is not a complete intersection unless it is a hyperquadric. \end{cor} From Theorem \ref{thmfh20}, there indeed are lots of Euler-symmetric varieties. The example below from Example 2.2 of \cite{FH20} shows that there are at least as many nonsingular Euler-symmetric varieties as nonsingular projective varieties. \begin{exam} Let $S\subseteq \mathbb{P}^{n-1} \subseteq \mathbb{P}^n$ be a nonsingular algebraic subset in a hyperplane of $\mathbb{P}^n$. For each point $x\in \mathbb{P}^n\backslash \mathbb{P}^{n-1}$, the scalar multiplication on the affine space $\mathbb{P}^n\backslash \mathbb{P}^{n-1}$ regarded as a vector space with the origin at $x$ can be extended to a $\mathbb{C}^{\times}$-action \[ \textbf{A}_x: \mathbb{C}^{\times}\times \mathbb{P}^n \to \mathbb{P}^n, \] which fixes every point of the hyperplane $\mathbb{P}^{n-1}$. Let $\beta:\textbf{Bl}_S(\mathbb{P}^n)\to \mathbb{P}^n$ be the blowup of $\mathbb{P}^n$ along $S$ and let $E$ be the exceptional divisor. For suitable positive integers $a$ and $b$, the line bundle $L := \mathcal{O}(-aE)\otimes\beta^*\mathcal{O}_{\mathbb{P}^n}(b)$ is very ample. 
The action $\textbf{A}_x$ induces an action on the image \[ Z \subseteq \mathbb{P}H^0(\textbf{Bl}_S(\mathbb{P}^n), L)^* \] of the projective embedding, which is of Euler type at $x\in Z$. Thus, $Z$ is an Euler-symmetric projective variety. \end{exam} \begin{defn} For a symbol system $\textbf{F} = \oplus_{k\geq 0}F^k$ of rank $r$, define the projective algebraic subset $\textbf{Bs}(F^k) \subset \mathbb{P}(W)$ as the subvariety whose affine cone in $W$ is \[ \{w\in W\mid \phi(w, \cdots, w) = 0,\; \forall \phi \in F^k\}. \] By the definition of a symbol system, we have the inclusion $ \textbf{Bs}(F^k) \subset \textbf{Bs}(F^{k+1})$ for each $k\in \mathbb{N}$. The \textit{base locus} of $\textbf{F}$ is the nonempty projective algebraic subset \[ \textbf{Bs}(\textbf{F}) = \bigcap_{\textbf{Bs}(F^k) \neq \emptyset, k \leq r}\textbf{Bs}(F^k). \] \end{defn} \section{Euler-symmetric varieties of rank $\geq 3$}\label{sect3} Let $W$ be a vector space of dimension $n$, and let $\textbf{F} = \oplus_{k=0}^r F^k\subseteq \oplus_{k\geq 0}\operatorname{Sym}^{k}W^*$ be a symbol system of rank $r\geq 3$, where $F^k = \left\langle Q^{(k)}_1, \cdots, Q^{(k)}_{s_k} \right\rangle$ is the vector subspace of $\operatorname{Sym}^kW^*$ generated by $Q^{(k)}_1, \cdots, Q^{(k)}_{s_k}$, for $r\geq k\geq 2$. Write the coordinates of $\mathbb{P}V_{\textbf{F}}$ as $[z_0:z_1:\cdots:z_n:w^{(2)}_1:\cdots:w^{(2)}_{s_2}:\cdots:w^{(r)}_1:\cdots:w^{(r)}_{s_r}]$, and let $$\mathcal{I} =I(M(\textbf{F})) \subseteq \mathbb{C}[z_0, z_1, \cdots, z_n, w^{(2)}_1, \cdots, w^{(2)}_{s_2}, \cdots, w^{(r)}_1, \cdots, w^{(r)}_{s_r}]$$ be the defining ideal of $M(\textbf{F})$. The codimension of $M(\textbf{F})$ in $\mathbb{P}V_{\textbf{F}}$ is $m = \sum_{i =2}^r s_i$. 
Obviously, the following $m$ polynomials of degree $2$ lie in $\mathcal{I}$: \begin{align*} f_1^{(2)} &= z_0w_1^{(2)} - Q_1^{(2)}(z_1, \cdots, z_n);\\ & \vdots \\ f_{s_2}^{(2)} &= z_0w_{s_2}^{(2)} - Q_{s_2}^{(2)}(z_1, \cdots, z_n);\\ & \vdots \\ f_{i}^{(j)} &= z_0w_{i}^{(j)} - \sum_{l = 1}^{s_{j-1}}g_l^{(j-1)}(z_1, \cdots, z_n) w_l^{(j-1)} ;\\ \end{align*} where $2 < j \leq r$, $1\leq i \leq s_j$ and $g_l^{(j-1)}$ are linear polynomials satisfying the following Euler identity: \[ Q^{(j)}_i = \sum_{l = 1}^{s_{j-1}}g_l^{(j-1)} Q_l^{(j-1)}. \] \begin{proposition}\label{prop31} An Euler-symmetric complete intersection is a complete intersection of the hyperquadrics defined above. \end{proposition} \begin{proof} Let $M(\textbf{F})\subseteq \mathbb{P}V_{\textbf{F}}$ be an Euler-symmetric complete intersection corresponding to the symbol system $\textbf{F}$. Since $M(\textbf{F})\subseteq \mathbb{P}V_{\textbf{F}}$ is nondegenerate, all polynomials $g \in\mathcal{I}$ have $\operatorname{deg}(g) \geq 2$. Since $M(\textbf{F})$ is a complete intersection, there exist $m = \operatorname{codim}_{\mathbb{P}V_{\textbf{F}}}(M(\textbf{F}))$ homogeneous polynomials $g_i\in \mathcal{I}$ such that \[ \mathcal{I} = (g_1, \cdots, g_m). \] Since each $f_i^{(j)}$ has degree $2$ and each $g_i$ has degree at least $2$, writing $f_i^{(j)}$ in terms of the $g_i$ forces the coefficients to be constants, so we have the inclusion \[ \left\langle f_i^{(j)}\mid 2 \leq j \leq r,\; 1\leq i \leq s_j \right\rangle \subseteq \left\langle g_1, \cdots, g_m \right\rangle, \] where $\left\langle S \right\rangle$ is the vector space generated by elements in $S$. For dimension reasons, the inclusion is an equality, i.e., $\mathcal{I} =(f_i^{(j)}\mid 2 \leq j \leq r,\; 1\leq i \leq s_j)$. \end{proof} Let $\mathcal{J}_1$ be the ideal generated by $f_i^{(j)},\; 2 \leq j \leq r,\; 1\leq i \leq s_j$, and let $\mathcal{J} = \operatorname{rad}(\mathcal{J}_1)\subseteq \mathcal{I}$. 
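For the smallest rank-$3$ case, $W=\langle x\rangle$ with $F^2=\langle x^2\rangle$ and $F^3=\langle x^3\rangle$ (so $M(\textbf{F})$ is the twisted cubic in $\mathbb{P}^3$), the quadrics above specialize to $f_1^{(2)}=z_0w_1^{(2)}-z_1^2$ and $f_1^{(3)}=z_0w_1^{(3)}-z_1w_1^{(2)}$. The sketch below (our own verification; $w_2,w_3$ stand for $w_1^{(2)},w_1^{(3)}$) checks that they vanish on $M(\textbf{F})$ but do not cut it out set-theoretically:

```python
# Rank-3 symbol system with dim W = 1: F^2 = <x^2>, F^3 = <x^3>.
# phi_F([t:x]) = [t^3 : t^2 x : t x^2 : x^3] parametrizes the twisted cubic.
def phi(t, x):
    return (t**3, t**2 * x, t * x**2, x**3)

def f2(z0, z1, w2, w3):
    return z0 * w2 - z1**2       # f_1^(2)

def f3(z0, z1, w2, w3):
    return z0 * w3 - z1 * w2     # f_1^(3), with g = z1 since x^3 = x * x^2

def g(z0, z1, w2, w3):
    return z1 * w3 - w2**2       # one more quadric in I(M(F))

# f2, f3, g all vanish identically on the image of phi_F:
for t in range(-3, 4):
    for x in range(-3, 4):
        p = phi(t, x)
        assert f2(*p) == f3(*p) == g(*p) == 0

# But V(f2, f3) is strictly bigger than M(F): the point [0:0:1:0]
# satisfies f2 = f3 = 0 yet g = -1, so the two quadrics of J_1 alone do
# not cut out the twisted cubic set-theoretically.
P = (0, 0, 1, 0)
assert f2(*P) == 0 and f3(*P) == 0 and g(*P) == -1
print("twisted cubic checks passed")
```

This is consistent with Theorem \ref{thm1}: a rank-$3$ Euler-symmetric variety is never a complete intersection, even though here $m=2$ quadrics of the displayed form exist.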
\begin{lemma}\label{lemma31} Let $G$ be a nonzero homogeneous polynomial in $$\mathbb{C}[z_1, \cdots, z_n, w^{(r-1)}_1, \cdots, w^{(r-1)}_{s_{r-1}}, w_1^{(r)}]$$ such that $G = G_0 + w_1^{(r)}G_1$ and $0 \neq G_0 \in \mathbb{C}[w^{(r-1)}_1, \cdots, w^{(r-1)}_{s_{r-1}}]$. If $G \in \mathcal{I}$ and $s_r = 1$, then $G \notin \mathcal{J}$. \end{lemma} \begin{proof} Define $\operatorname{deg}_t(w_i^{(j)}) = r-j$, $\operatorname{deg}_t(z_i) = r-1$, $\operatorname{deg}_t(z_0) = r$. If $G \in \mathcal{J}$, then there exists a positive integer $k$ such that $F = G^k = F_0 + w_1^{(r)}F_1 \in \mathcal{J}_1$, where $0 \neq F_0 \in \mathbb{C}[w^{(r-1)}_1, \cdots, w^{(r-1)}_{s_{r-1}}]$. Let $\operatorname{deg}(F_0) = s$; then $\operatorname{deg}_t(F_0) = s$ and $\operatorname{deg}_t(f_i^{(j)}) = 2r-j$. Therefore, there exist $m$ homogeneous polynomials $h_i^{(j)}$ of degree $s-2$ in $\mathbb{C}[z_0, z_1, \cdots, z_n, w^{(2)}_1, \cdots, w^{(2)}_{s_2}, \cdots, w^{(r-1)}_1, \cdots, w^{(r-1)}_{s_{r-1}}, w_1^{(r)}]$ such that \[ F = \sum_{i, j} h_i^{(j)} f_i^{(j)}. \] Let $f_i^{(j)} = z_0w_i^{(j)} + \overline{f_i^{(j)}} $, $h_i^{(j)} = \overline{h_i^{(j)}} + z_0 e_i^{(j)} $, where $$\overline{h_i^{(j)}} \in \mathbb{C}[z_1, \cdots, z_n, w^{(2)}_1, \cdots, w^{(2)}_{s_2}, \cdots, w^{(r)}_1].$$ Then we have \begin{align*} F = & \sum_{i, j} h_i^{(j)} f_i^{(j)} \\ = & \sum_{i,j} \overline{h_i^{(j)}} \cdot \overline{f_i^{(j)}} + z_0\sum_{i, j}(z_0e_i^{(j)}w_i^{(j)} + \overline{h_i^{(j)}} w_i^{(j)} + \overline{f_i^{(j)}} e_i^{(j)}). \end{align*} Since \[ F,\; \overline{h_i^{(j)}},\; \overline{f_i^{(j)}} \in \mathbb{C}[z_1, \cdots, z_n, w^{(2)}_1, \cdots, w^{(2)}_{s_2}, \cdots, w^{(r)}_1] \] do not involve $z_0$, we deduce that $F = \sum_{i, j} \overline{h_i^{(j)}} \cdot \overline{f_i^{(j)}}$. 
Let $\overline{h_i^{(j)}}= H_i^{(j)} + w_1^{(r)}l_i^{(j)}$, where $$H_i^{(j)} \in \mathbb{C}[z_1, \cdots, z_n, w^{(2)}_1, \cdots, w^{(2)}_{s_2}, \cdots, w^{(r-1)}_{s_{r-1}}].$$ Hence, \begin{align*} F = & F_0 + w_1^{(r)}F_1 \\ = & \sum_{i, j} \overline{h_i^{(j)}}\cdot \overline{f_i^{(j)}} \\ = & \sum_{i,j} H_i^{(j)} \overline{f_i^{(j)}} + w_1^{(r)}\sum_{i, j}(l_i^{(j)}\overline{f_i^{(j)}} ). \end{align*} This implies $F_0 = \sum_{i, j} H_i^{(j)} \overline{f_i^{(j)}}$. Let $H_i^{(j)} = \sum_{a} H_{i, a}^{(j)}$ be the homogeneous decomposition with respect to $t$, where $H_{i, a}^{(j)}$ is a homogeneous polynomial, and $\operatorname{deg}_t(H_{i, a}^{(j)}) = a$. Note that $H_{i, a}^{(j)}$ are also homogeneous polynomials of degree $s-2$ and \[ H_{i, a}^{(j)} \in \mathbb{C}[z_1, \cdots, z_n, w^{(2)}_1, \cdots, w^{(2)}_{s_2}, \cdots, w^{(r-1)}_{s_{r-1}}]. \] This implies $a = \operatorname{deg}_t(H_{i, a}^{(j)}) \geq s-2$. Then \begin{align*} F_0 =& \sum_{i, j} H_i^{(j)} \overline{f_i^{(j)}}\\ = & \sum_{i, j} \sum_a H_{i, a}^{(j)} \overline{f_i^{(j)}}, \end{align*} where $\operatorname{deg}_t(H_{i, a}^{(j)} \overline{f_i^{(j)}}) \geq s-2 + 2r-j$ for every $i, j$. Since $r\geq 3$, we have $s-2 + 2r-j \geq s + 1 > s$. Then $$s = \operatorname{deg}_t(F_0) =\operatorname{deg}_t(\sum_{i, j, a} H_{i, a}^{(j)} \overline{f_i^{(j)}}) > s ,$$ which is a contradiction. \end{proof} \begin{lemma}\label{lemma32} Let $F\in \mathbb{C}[x_1, \cdots, x_n]$ be a homogeneous polynomial of degree $m$. If $V = \left\langle \partial_{x_1}F, \cdots, \partial_{x_n}F\right\rangle$ is a vector space of dimension $k\leq n$, then there exist $k$ linearly independent linear homogeneous polynomials $f_1, \cdots, f_k$ such that : \[ F \in \mathbb{C}[f_1, \cdots, f_k]_m \] where $\mathbb{C}[f_1, \cdots, f_k]_m$ is the collection of all homogeneous polynomials of degree $m$ with respect to $f_1, \cdots, f_k$. \end{lemma} \begin{proof} Write $\partial_{j}F = \partial_{x_j}F$. 
We may assume that $V = \left\langle \partial_1F, \cdots, \partial_kF\right\rangle$. By the Euler identity, we have \[ mF = \sum_{i=1}^{n}x_i \partial_iF = \sum_{i=1}^kx_i\partial_iF + \sum_{j=k+1}^nx_j\partial_jF. \] Since $\partial_jF \in V$ for $k+1 \leq j \leq n$, we can write $\partial_jF = \sum_{i=1}^ka_{ij}\partial_iF$. Therefore, we have \[ mF = \sum_{i=1}^{k}f_i \partial_iF, \quad f_i = x_i + \sum_{j>k}a_{ij}x_j, \] and $f_i$ are linearly independent linear polynomials. We claim $$(m-s)\partial_{i_1}\partial_{i_2}\cdots\partial_{i_s} F = \sum_{j=1}^{k}f_j \partial_{i_1}\partial_{i_2}\cdots\partial_{i_{s}}\partial_j F,$$ for all $s < m$, $1 \leq i_1, \ldots, i_s \leq k$. For $s = 1$, we have \begin{align*} m\partial_iF &= \sum_{j =1}^{k}(f_j\partial_i(\partial_jF) + \partial_i(f_j)\partial_jF);\\ & = \sum_{j =1}^{k}(f_j\partial_i(\partial_jF) + \delta_{ij}\partial_jF); \\ & =\partial_iF + \sum_{j =1}^{k}f_j\partial_i(\partial_jF). \end{align*} By induction, we have \begin{align*} & (m-s)\partial_{i_{s+1}}\partial_{i_1}\partial_{i_2}\cdots\partial_{i_{s}} F \\ = &\sum_{j=1}^{k}\partial_{i_{s+1}}(f_j \partial_{i_1}\partial_{i_2}\cdots\partial_{i_{s}}\partial_j F); \\ = & \sum_{j=1}^k(f_j\partial_{i_{s+1}}\partial_{i_1}\partial_{i_2}\cdots\partial_{i_{s}}\partial_j F + (\partial_{i_{s+1}}f_j) \partial_{i_1}\partial_{i_2}\cdots\partial_{i_{s}}\partial_j F) \\ = & \sum_{j=1}^k(f_j\partial_{i_{s+1}}\partial_{i_1}\partial_{i_2}\cdots\partial_{i_{s}}\partial_j F + \delta_{ji_{s+1}} \partial_{i_1}\partial_{i_2}\cdots\partial_{i_{s}}\partial_j F) \\ = & \partial_{i_1}\cdots\partial_{i_{s}}\partial_{i_{s+1}} F + \sum_{j=1}^kf_j\partial_{i_{s+1}}\partial_{i_1}\partial_{i_2}\cdots\partial_{i_{s}}\partial_j F, \\ \end{align*} where $1\leq i_{s+1}\leq k$, $s+1 < m$. This proves the claim. As $\operatorname{deg}F = m$, $\partial_{i_1}\partial_{i_2}\cdots\partial_{i_{m-1}}\partial_jF$ are constant for all $1 \leq i_1, \ldots, i_{m-1}, j \leq k$. 
Then by the claim, \[ \partial_{i_1}\partial_{i_2}\cdots\partial_{i_{m-1}} F = \sum_{j=1}^{k}f_j \partial_{i_1}\partial_{i_2}\cdots\partial_{i_{m-1}}\partial_jF \in \mathbb{C}[f_1, \cdots, f_k]_1. \] Therefore, $F \in \mathbb{C}[f_1, \cdots, f_k]_m$ by induction, which concludes the proof. \end{proof} Now, we can prove the first main theorem of this paper. \begin{proof}[Proof of Theorem \ref{thm1}] Assume $\operatorname{rank}(\textbf{F}) = r \geq 3$. Consider the Euler-symmetric projective variety $Z = M(\textbf{F})$ and set $m = \operatorname{codim}_{\mathbb{P}V_{\textbf{F}}}(M(\textbf{F}))$. Since Euler-symmetric projective varieties are nondegenerate, all homogeneous polynomials $g \in \mathcal{I}$ have $\operatorname{deg}(g) \geq 2$. Let $S$ be a finite generating set of $\mathcal{I}$ and let $\left\langle S \right\rangle$ be the vector space generated by elements in $S$; then $f_i^{(j)} \in \left\langle S \right\rangle$. Thus, $\operatorname{dim}(\left\langle S \right\rangle) \geq m$. If $\operatorname{dim}(\left\langle S \right\rangle) > m$ for all finite generating sets $S$ of $\mathcal{I}$, then $M(\textbf{F})$ is not a complete intersection. Thus, if we can find one nonzero homogeneous polynomial $G \in \mathcal{I} \backslash \mathcal{J}$, then $\operatorname{dim}(\left\langle S \right\rangle) > m$ for all finite generating sets of $\mathcal{I}$. Moreover, if we can find one nonzero homogeneous polynomial \[ G \in \mathcal{I} \cap \mathbb{C}[z_0, z_1, \cdots, z_n, w^{(2)}_1, \cdots, w^{(2)}_{s_2}, \cdots, w^{(r-1)}_{s_{r-1}}, w^{(r)}_{1}], \] and \[ G \notin \mathcal{J} \cap\mathbb{C}[z_0, z_1, \cdots, z_n, w^{(2)}_1, \cdots, w^{(2)}_{s_2}, \cdots, w^{(r-1)}_{s_{r-1}}, w^{(r)}_{1}], \] then $G \notin \mathcal{J}$. Therefore, we may also assume that $F^r = \left\langle T \right\rangle$. 
If $\operatorname{dim}(M(\textbf{F})) = 1$, then $F^r = \left\langle x^r \right\rangle$ and $M(\textbf{F})$ is a rational normal curve in $\mathbb{P}(V_{\textbf{F}})$, namely the closure of $\{[t^r:t^{r-1}x:\cdots:tx^{r-1}:x^r]\in \mathbb{P}(V_{\textbf{F}}) \}$, which is not a complete intersection as $r\geq 3$. If $\operatorname{dim}(M(\textbf{F})) = n \geq 2$, then we have the following cases: \begin{enumerate} \item[Case (1)] $\operatorname{dim}(F^{r-1}) = s_{r-1} > n$: \\ We know that $Q^{(r-1)}_1, \cdots, Q^{(r-1)}_{s_{r-1}}$ are algebraically dependent, hence there exists a nonzero polynomial $G \in \mathbb{C}[y_1, \cdots, y_{s_{r-1}}]$ such that $$G(Q^{(r-1)}_1, \cdots, Q^{(r-1)}_{s_{r-1}}) \equiv 0.$$ Since $Q^{(r-1)}_1, \cdots, Q^{(r-1)}_{s_{r-1}}$ are all homogeneous polynomials of degree $r-1$, we can assume that $G \in \mathbb{C}[y_1, \cdots, y_{s_{r-1}}]$ is also homogeneous. Hence, from Lemma \ref{lemma31} we have a polynomial $G(w^{(r-1)}_1, \cdots, w^{(r-1)}_{s_{r-1}}) \in \mathcal{I}\backslash \mathcal{J}$. \item[Case (2)] $\operatorname{dim}(F^{r-1}) = s_{r-1} = n$: \\ Since $\{Q^{(r-1)}_1, \cdots, Q^{(r-1)}_n, T\}$ is algebraically dependent, there exists a nonzero polynomial $G\in \mathbb{C}[y_1, \cdots, y_n, y]$ such that $$G(Q^{(r-1)}_1, \cdots, Q^{(r-1)}_n, T) \equiv 0.$$ Rewrite $G$ as \[ G = a_my^m + \cdots + a_1y + a_0, \] where $a_j$ are polynomials in $y_1, \cdots, y_n$. Define $\operatorname{deg}_xy_i = r-1,\; \operatorname{deg}_xy = r$. Then $\operatorname{deg}_x G$ makes sense, and we say it is the degree of $G$ w.r.t. $x_1, \cdots, x_n$. Firstly, we will reduce $G$ to a homogeneous polynomial w.r.t. $x_1, \cdots, x_n$ and $a_j$ to homogeneous polynomials. Let $a_j = \sum a_j^i$ be the homogeneous decomposition in $\mathbb{C}[y_1, \cdots, y_n]$. Since $y_1, \cdots, y_n$ are homogeneous polynomials of degree $r-1$ w.r.t. $x_1, \cdots, x_n$, $a_j^i$ are homogeneous w.r.t. $x_1, \cdots, x_n$. 
Then we have the following homogeneous decomposition w.r.t. $x_1, \cdots, x_n$: \begin{align*} &G(y_1, \cdots, y_n, y) \\ =& \sum a_m^i y^m + \cdots + \sum a_1^i y + \sum a_0^i \\ =& G_1 + G_2 + \cdots \end{align*} where $G_i = a_m^{i_m}y^m + \cdots + a_1^{i_1}y + a_0^{i_0}$ are homogeneous polynomials w.r.t. $x_1, \cdots, x_n$ and $a_j^{i_j}$ are homogeneous polynomials. As $G(Q^{(r-1)}_1, \cdots, Q^{(r-1)}_n, T)$ is a zero polynomial, we get that the $$G_i(Q^{(r-1)}_1, \cdots, Q^{(r-1)}_n, T)$$ are also zero polynomials. Therefore, we can assume that $G$ is homogeneous w.r.t. $x_1, \cdots, x_n$ and $a_j$ are homogeneous polynomials. Secondly, without loss of generality, we may assume $a_0 \neq 0$; otherwise we consider $\tilde{G} = a_my^{m-1} + \cdots + a_1$, and $\tilde{G}(Q^{(r-1)}_1, \cdots, Q^{(r-1)}_n, T)$ is also a zero polynomial. Finally, let $h = \operatorname{deg}_x(G)$; then $(r-1)| h$ as $a_0\neq 0$, and $h = (r-1)s$, where $s = \operatorname{deg}(a_0)$. This gives $$h =(r-1)s = \operatorname{deg}_x(a_jy^j) = jr+(r-1)\operatorname{deg}(a_j).$$ Hence, $(r-1)|j$, and \begin{align*} G = a_{m(r-1)} y^{m(r-1)} + \cdots + a_{2(r-1)}y^{2(r-1)} + a_{r-1}y^{r-1} + a_0. \end{align*} Therefore, \begin{align*} h = & (r-1)s = \operatorname{deg}_x(a_{j(r-1)}y^{j(r-1)}) \\ = &j(r-1)r+(r-1)\operatorname{deg}(a_{j(r-1)}). \end{align*} Then $\operatorname{deg}(a_{j(r-1)}) = s - jr\geq 0$, for $0\leq j \leq m$. Multiplying $G$ by a suitable power $y_1^i$, we may assume that $s = rp$; then $h = (r-1)s = (r-1)rp$ and $s - jr = rp - jr = r(p-j) \geq 0$. 
Note that \begin{align*} & a_{j(r-1)}(Q^{(r-1)}_1, \cdots, Q^{(r-1)}_n)T^{j(r-1)} \\ =& a_{j(r-1)}(\frac{w^{(r-1)}_1}{t}, \cdots, \frac{w^{(r-1)}_n}{t})(w^{(r)}_1)^{j(r-1)} \\ =& \frac{1}{t^{r(p-j)}}a_{j(r-1)}(w^{(r-1)}_1, \cdots, w^{(r-1)}_n)(w^{(r)}_1)^{j(r-1)}, \end{align*} which yields that \begin{align*} & t^{rp}G(w^{(r-1)}_1, \cdots, w^{(r-1)}_n, w^{(r)}_1) \\ =& t^{rm}a_{m(r-1)}(w^{(r-1)}_1, \cdots, w^{(r-1)}_n)(w^{(r)}_1)^{m(r-1)}+ \cdots \\ &+ a_{0}(w^{(r-1)}_1, \cdots, w^{(r-1)}_n). \end{align*} Set \begin{align*} G^{\prime} = & z_0^ma_{m(r-1)}(w^{(r-1)}_1, \cdots, w^{(r-1)}_n)(w^{(r)}_1)^{m(r-1)}+\cdots \\ &+ a_{0}(w^{(r-1)}_1, \cdots, w^{(r-1)}_n). \end{align*} Then $G^{\prime}$ is a homogeneous polynomial in $\mathbb{C}[z_0, z_1, \cdots, z_n, w^{(r-1)}_1, \cdots, w^{(r)}_{1}]$, and $G^{\prime} \in \mathcal{I}$. Now \begin{align*} G^{\prime} = & a_{m(r-1)}(w^{(r-1)}_1, \cdots, w^{(r-1)}_n)(f_1^{(r)} - \overline{f_1^{(r)}})^m(w^{(r)}_1)^{m(r-2)}+ \\ &\vdots \\ &+ a_{0}(w^{(r-1)}_1, \cdots, w^{(r-1)}_n)\\ =& G_0 + f_1^{(r)}G_1 \equiv G_0 \pmod{\mathcal{J}}, \end{align*} where $G_0= a_{0} + w_1^{(r)}\overline{G_{0}} \in \mathcal{I}\cap \mathbb{C}[z_1, \cdots, z_n, w^{(r-1)}_1, \cdots, w^{(r-1)}_{n}, w^{(r)}_{1}]$ is a nonzero homogeneous polynomial and $\overline{f_1^{(r)}} = f_1^{(r)} - z_0w_1^{(r)}$. Since $a_{0}(w^{(r-1)}_1, \cdots, w^{(r-1)}_n) \neq 0$, we have $G_0 \notin \mathcal{J}$ by Lemma \ref{lemma31}. \item[Case (3)] $\operatorname{dim}(F^{r-1}) = s_{r-1} < n$: \\ We have $\left\langle \partial_{x_1}T, \cdots, \partial_{x_n}T\right\rangle \subseteq F^{r-1}$ and \[ \operatorname{dim}(\left\langle \partial_{x_1}T, \cdots, \partial_{x_n}T\right\rangle) = q \leq s_{r-1} < n. \] Assume that $\left\langle \partial_{x_1}T, \cdots, \partial_{x_n}T\right\rangle = \left\langle Q^{(r-1)}_1, \cdots, Q^{(r-1)}_q \right\rangle$. By Lemma \ref{lemma32} we can assume that \[ Q^{(r-1)}_1, \cdots, Q^{(r-1)}_q, T \in \mathbb{C}[x_1, \cdots, x_q].
\] Similarly, by the above arguments, there exists a homogeneous polynomial $G$ such that $$G(z_1, \cdots, z_n, w^{(r-1)}_1, \cdots, w^{(r-1)}_q, w^{(r)}_1) \in \mathcal{I} \backslash \mathcal{J}.$$ \end{enumerate} \end{proof} \begin{exam} Let $F^2 =\left\langle x_1^2 + x_2^2, x_1x_2 \right\rangle $, $F^3 = \left\langle x_1^3 + 3 x_1x_2^2 \right\rangle$, and $\textbf{F} = F^0 \oplus F^1 \oplus F^2 \oplus F^3$. By a direct calculation, the polynomials of degree $2$ in $\mathcal{I}$ are exactly those in $\left\langle f_1^{(2)}, f_2^{(2)}, f_1^{(3)} \right\rangle$, and $\mathcal{J} = (f_1^{(2)}, f_2^{(2)}, f_1^{(3)}, g)$, where $g = 3z_2w_1^{(2)}w_2^{(2)} - 2z_1(w_2^{(2)})^2 - z_2^2w_1^{(3)}$. Hence $M(\textbf{F})$ is not a complete intersection, and this can be seen without finding one more homogeneous polynomial in $\mathcal{I} \backslash \mathcal{J}$. \end{exam} \begin{exam} Consider the homogeneous polynomial $P = \sum_{i=1}^n x_i^3 \in \operatorname{Sym}^3W^*$. The Euler-symmetric projective variety $M(\textbf{F}_P)$ defined in Example 3.10 of \cite{FH20} is exactly the projective Legendrian variety studied in Section 4.3 of \cite{LM07}. The degree of the polynomial constructed in Theorem \ref{thm1} is extremely large; however, it need not be the polynomial of least degree in $\mathcal{I}\backslash \mathcal{J}$. \end{exam} \begin{exam} The twisted cubic curve in $\mathbb{P}^3$ is an Euler-symmetric projective variety of rank $3$, hence not a complete intersection; its ideal is generated by three polynomials of degree $2$. The rational normal curve $C^r \subseteq \mathbb{P}^r$ with $r\geq 3$ is an Euler-symmetric projective variety of rank $r$, hence not a complete intersection. But in 1941 Perron observed that all rational normal curves are set-theoretic complete intersections. For more details, we refer the reader to \cite{buadescu2010grothendieck}, \cite{torrente2015rational}. \end{exam} \begin{cor}\label{bzy} Let $Z \subset \mathbb{P}^N$ be an Euler-symmetric variety of dimension $n$. If \[ n < \frac{N}{2}, \] then $Z$ is not a complete intersection. \end{cor} \begin{proof} If $\operatorname{rank}(Z) \geq 3$, then by Theorem \ref{thm1}, $Z$ is not a complete intersection. Therefore, we may assume that $\operatorname{rank}(Z) = 2$; then by Theorem \ref{thmfh20} the corresponding symbol system is $\textbf{F} = F^0\oplus F^1 \oplus F^2$ and $s_2 = \operatorname{dim}(F^2) = \operatorname{codim}_{\mathbb{P}^N}(Z) = N - n > n$. From Case $(1)$ of the proof of Theorem \ref{thm1}, we have a homogeneous polynomial $$G \in \mathbb{C}[w_1^{(2)}, \cdots, w_{s_2}^{(2)}]$$ such that $G \in \mathcal{I}$ but $G \notin \mathcal{J}$, where $\mathcal{J} = \operatorname{rad}((f_i^{(2)}\mid 1\leq i \leq s_2))$. Therefore, $Z$ is not a complete intersection either. \end{proof} \begin{rmk} The inequality in Corollary \ref{bzy} is optimal: for every $n \geq \frac{N}{2}$, there exists an Euler-symmetric projective variety of dimension $n$ which is a complete intersection (Example \ref{exam41}). \end{rmk} \begin{exam}\label{exam38} Let $\textbf{F}$ be a symbol system of rank $2$, and let $F^2\subseteq \operatorname{Sym}^2W^*$ be a subspace of dimension $k$. If $k > n= \operatorname{dim}(W)$, then the corresponding Euler-symmetric variety $M(\textbf{F})$ is not a complete intersection by Corollary \ref{bzy}. \end{exam} \section{Euler-symmetric varieties of rank $2$} Now we restrict ourselves to the case where $\textbf{F}$ is a symbol system of rank $2$. Let $F^2 = \left\langle Q_1, \cdots, Q_c \right\rangle$ and $\textbf{Bs}(\textbf{F})= V_+(Q_1, \cdots, Q_c)\subseteq \mathbb{P}^{n-1} = \mathbb{P}W$. Write the coordinates of $\mathbb{P}V_{\textbf{F}}$ as $[w_0:w_1:\cdots:w_n:w_{n+1}:\cdots:w_{N}]$ with $N=n+c$.
Let \begin{align*} f_1 &= w_0w_{n+1} - Q_1(w_1, \cdots, w_n);\\ & \vdots \\ f_c &= w_0w_{N} - Q_c(w_1, \cdots, w_n);\\ \end{align*} be $c$ polynomials of degree $2$, and let $Y$ be the subvariety of $\mathbb{P}V_{\textbf{F}}$ defined by those polynomials, then $\operatorname{dim}(Y)\geq n$. Let $\mathcal{I} = I(M(\textbf{F}))$, $\mathcal{J} = I(Y)$ and $b = \operatorname{codim}_{\mathbb{P}(W)}(\textbf{Bs}(\textbf{F}))$. Corollary \ref{bzy} suggests that we can assume that $c \leq n$, and we always have $b \leq c$. For $c = n$, we say $\textbf{Bs}(\textbf{F})$ is a set-theoretic complete intersection of codimension $c$ if $\textbf{Bs}(\textbf{F}) = \emptyset$. Denote $\operatorname{codim}_{\mathbb{P}^{n-1}}(\textbf{Bs}(\textbf{F})) = n$ and $\operatorname{dim}(\textbf{Bs}(\textbf{F})) = -1$ if $\textbf{Bs}(\textbf{F}) = \emptyset$. \begin{defn} We say an Euler-symmetric projective variety $M(\textbf{F})$ is {\em quadratic}, if $M(\textbf{F}) = Y = V_+(f_1, \cdots, f_c)$ or equivalently $\mathcal{I} = \mathcal{J}$. \end{defn} \begin{defn} A subvariety $X$ of dimension $n$ in $\mathbb{P}^{n+c}$ is a {\em set-theoretic complete intersection} if $X$ can be written as the intersection of $c$ hypersurfaces or equivalently there are $c$ homogeneous polynomials $g_1, \cdots, g_c$ such that $$I(X) = \operatorname{rad}((g_1, \cdots, g_c))\subseteq\mathbb{C}[w_0, w_1, \cdots, w_{n+c}].$$ \end{defn} \begin{lemma}\label{lemma41} If the subvariety $X = V_+(g_1, \cdots, g_c)\subseteq \mathbb{P}^{n+c}$ is a set-theoretic complete intersection of codimension $c$, then the sequence $(g_1, \cdots, g_c)$ of homogeneous polynomials $$g_i \in \mathbb{C}[w_0, w_1, \cdots, w_n, w_{n+1}, \cdots, w_{N}]$$ is a regular sequence. \end{lemma} \begin{proof} If the sequence $(g_1, \cdots, g_c)$ is not regular, then there exists $1 \leq j< c$ such that the sequence $(g_1, \cdots, g_j)$ is regular, while the sequence $(g_1, \cdots, g_j, g_{j+1})$ is not. 
Therefore, every irreducible component of $V_+(g_1, \cdots, g_j)$ has dimension $n + c -j$, and there exists an irreducible component $Z_1$ of $V_+(g_1, \cdots, g_j, g_{j+1})$ with dimension $n +c-j$. Then every irreducible component of $Z_1\cap V_+(g_{j+2})\subseteq V_+(g_1, \cdots, g_j, g_{j+1}, g_{j+2})$ has dimension $\geq n+c-j-1$. Thus, by induction, $X = V_+(g_1, \cdots, g_j, g_{j+1},\cdots, g_c)$ has dimension $\geq n + 1$. This leads to a contradiction, since $X$ is a set-theoretic complete intersection of codimension $c$. \end{proof} \begin{proposition}\label{prop41} The subvariety $Y\subset \mathbb{P}V_{\textbf{F}}$ is a set-theoretic complete intersection of codimension $c$ if and only if $b\geq c-1$. Furthermore, if $b = c$, then $M(\textbf{F})$ is quadratic. \end{proposition} \begin{proof} Let $H$ be the hyperplane of $\mathbb{P}V_{\textbf{F}}$ defined by $w_0 = 0$, and let $U$ be its complement. Note that $H\cap Y$ is a cone over $\textbf{Bs}(\textbf{F})\subseteq \mathbb{P}W \subseteq \mathbb{P}V_{\textbf{F}}$, so $\operatorname{dim}(H\cap Y) = n + c -b -1$ and $U \cap Y \simeq \mathbb{C}^n$. By Lemma \ref{lemma41}, the finite sequence $(f_1, \cdots, f_c)$ is regular if and only if the subvariety $Y\subset \mathbb{P}V_{\textbf{F}}$ is a set-theoretic complete intersection of codimension $c$, which is equivalent to $\operatorname{dim}(Y) = n$. Thus, we have $\operatorname{dim}(H\cap Y) = n + c -b -1 \leq n$, which is equivalent to $b \geq c-1$. If $b = c$, then $\operatorname{dim}(H\cap Y) = n-1$, $H \cap Y$ is a divisor of $Y$. As $U \cap Y \simeq \mathbb{C}^n$ is irreducible, $Y$ is irreducible. Therefore, $Y = M(\textbf{F})$. \end{proof} \begin{cor} If $\textbf{Bs}(\textbf{F})$ is a set-theoretic complete intersection of codimension $c$, then $M(\textbf{F})$ is a set-theoretic complete intersection of codimension $c$. 
\end{cor} \begin{proof} By Proposition \ref{prop41}, $Y = M(\textbf{F})$, and $Y$ is a set-theoretic complete intersection of dimension $n$. \end{proof} \begin{lemma}\label{lemstab} The subvariety $Y$ is stable under the action of $W$ on $\mathbb{P}V_{\textbf{F}}$ defined in Theorem \ref{thmfh20}. To be more precise, $$ g_v\cdot w = [w_0:b_1:\cdots:b_n:b_{n+1}:\cdots:b_{N}], $$ for any $v = (v_1, \cdots, v_n) \in W$ and $w = [w_0:w_1:\cdots:w_n:w_{n+1}:\cdots:w_{N}]\in \mathbb{P}V_{\textbf{F}}$, where \begin{align*} b_i & = w_i + w_0 v_i, \; 1\leq i \leq n; \\ b_{n+j} &= w_{n+j} + 2v\cdot A_j \cdot (w_1, \cdots, w_n)^{t} + w_0Q_j(v), \; 1\leq j \leq c, \end{align*} and $A_j$ is the symmetric matrix corresponding to the quadratic polynomial $Q_j$. \end{lemma} \begin{proof} For any $w = [w_0:w_1:\cdots:w_n:w_{n+1}:\cdots:w_{N}]\in Y$, write $\overline{w} := (w_1, \cdots, w_n)$; we have \begin{align*} & w_0b_{n+j} - Q_j(b_1, \cdots, b_n) \\ = & w_0 w_{n+j} + 2w_0(v_1, \cdots, v_n)\cdot A_j \cdot \overline{w}^{t}+ Q_j(w_0v_1, \cdots, w_0v_n)- Q_j(b_1,\cdots, b_n)\\ = & Q_j(w_1, \cdots, w_n) + (w_0v_1, \cdots, w_0v_n)\cdot A_j \cdot \overline{w}^{t}+ \overline{w} \cdot A_j \cdot (w_0v_1, \cdots, w_0v_n)^{t} \\ & +Q_j(w_0v_1, \cdots, w_0v_n)- Q_j(b_1,\cdots, b_n)\\ = & (w_1+w_0v_1, \cdots, w_n + w_0v_n) \cdot A_j \cdot (w_1+w_0v_1, \cdots, w_n + w_0v_n)^{t} - Q_j(b_1,\cdots, b_n) \\ = & 0, \end{align*} for $1\leq j \leq c$. This completes the proof.
\end{proof} \begin{proposition}\label{bp} For a general smooth point $x\in M(\textbf{F})$, we have \begin{enumerate} \item $\textbf{Bs}(\textbf{F}) = \mathcal{L}_x(Y)$; \item $\mathcal{L}_x(M(\textbf{F})) = \mathcal{L}_x(Y)$; \end{enumerate} \end{proposition} \begin{proof} We divide the proof into three steps. Step 1: for any $y \in O_x$, we have $\mathcal{L}_y(Y) = \mathcal{L}_x(Y)$, where $O_x$ is the orbit of $x$ in $M(\textbf{F})$ under the action of $W$ defined in Lemma \ref{lemstab}. We only need to show that $g_u\cdot l$ is also a line in $Y$ for all $u \in W$ and every line $l$ through $v, w \in Y$, \[ l =\overline{vw}= \{[av_0+bw_0: \cdots: av_N + bw_N]\mid [a:b]\in \mathbb{P}^1\}\subseteq Y. \] For $u = (u_1, \cdots, u_n) \in W$, we have \[ g_u\cdot l = \{[av_0+bw_0: z_1: \cdots: z_N]\mid [a:b]\in \mathbb{P}^1\}, \] where \begin{align*} z_i = & a(v_i+v_0u_i) + b(w_i+w_0u_i) = a(g_u\cdot v)_i + b(g_u\cdot w)_i, \; \forall 1\leq i\leq n;\\ z_{n+j} = & (av_{n+j}+bw_{n+j}) + 2 (u_1, \cdots, u_n)A_j(av_1+bw_1, \cdots, av_n+bw_n)^t \\ &+ (av_0+bw_0)Q_j(u) \\ = &a(v_{n+j} + 2 (u_1, \cdots, u_n)A_j(v_1, \cdots, v_n)^t + v_0Q_j(u)) +\\ & b(w_{n+j} + 2 (u_1, \cdots, u_n)A_j(w_1, \cdots, w_n)^t+ w_0Q_j(u))\\ = &a(g_u\cdot v)_{n+j} + b(g_u\cdot w)_{n+j}. \end{align*} Therefore, $g_u\cdot l$ is a line in $Y$ through $g_u\cdot v$ and $g_u\cdot w$. Step 2: we can assume that $x=[1: 0: \cdots:0]\in M(\textbf{F})$, $v= [v_0: v_1: \cdots: v_N] \in Y$. If the line $l = \overline{xv}$ satisfies $[l] \in \mathcal{L}_x(Y)$, we have \[ (a+bv_0)bv_{n+j} = b^2Q_j(v_1, \cdots, v_n), \; [a:b]\in \mathbb{P}^1\; \forall 1\leq j \leq c; \] therefore, $v_{n+j} = 0$ and $[v_1: \cdots: v_n]\in \textbf{Bs}(\textbf{F})$. Then the line $l$ is of the form \[ l = \{[a: bv_1: \cdots: bv_n: 0:\cdots:0]\mid [a:b]\in \mathbb{P}^1 \}.
\] Therefore, the rational map $\tau_x$ defined in \cite{H01} \[ \tau_x : \mathcal{L}_x(Y) \to \textbf{Bs}(\textbf{F})\subseteq \mathbb{P}T_xY, \; [l]\mapsto \text{tangent direction at } x, \] is a regular morphism. We have the inverse morphism \[ \phi: \textbf{Bs}(\textbf{F}) \to \mathcal{L}_x(Y), v\mapsto [l_v], \] where $l_v = \{[a:bv_1:\cdots:bv_n:0:\cdots :0]\mid [a:b]\in \mathbb{P}^1 \} \subseteq Y$. Hence, we have $\textbf{Bs}(\textbf{F}) = \mathcal{L}_x(Y)$. Step 3: if $Y = M(\textbf{F})$, then $\mathcal{L}_x(M(\textbf{F})) = \mathcal{L}_x(Y)$. Assume now $M(\textbf{F}) \subsetneq Y$ and write $Y = M(\textbf{F}) \cup Y^{\prime}$ with $M(\textbf{F})\nsubseteq Y^{\prime}$. If there exists $[l]\in \mathcal{L}_x(Y)$ such that $[l]\notin \mathcal{L}_x(M(\textbf{F}))$, then $l\cap M(\textbf{F})$ is a finite set and $x\in l \subseteq Y^{\prime}$. For any $u\in W$, from Lemma \ref{lemstab} and Theorem \ref{thmfh20} we have \[ g_u\cdot x \in g_u\cdot l \subseteq Y^{\prime}. \] According to Theorem \ref{thmfh20}, the orbit of $x$ is an open dense subset of $M(\textbf{F})$, hence $M(\textbf{F})\subseteq Y^{\prime}$, which is a contradiction. Therefore, we have $\mathcal{L}_x(M(\textbf{F})) = \mathcal{L}_x(Y)$. \end{proof} \begin{thm}\label{thm41} Let $M(\textbf{F})\subseteq \mathbb{P}V_{\textbf{F}}$ be an Euler-symmetric projective variety of dimension $n$ corresponding to the symbol system $\textbf{F}$ of rank $2$ with $F^2 = \left\langle Q_1, \cdots, Q_c\right\rangle$ of dimension $c$. Let $\textbf{Bs}(\textbf{F})\subseteq \mathbb{P}W$ be the base locus of the symbol system $\textbf{F}$, and $b = \operatorname{codim}_{\mathbb{P}W}(\textbf{Bs}(\textbf{F}))$.
For a general smooth point $x\in M(\textbf{F})$, the following are equivalent: \begin{enumerate} \item $M(\textbf{F})$ is quadratic; \item $\textbf{Bs}(\textbf{F})\subseteq \mathbb{P}W$ is a set-theoretic complete intersection of codimension $c$; \item $\mathcal{L}_x(M(\textbf{F}))$ is a set-theoretic complete intersection of codimension $c$; \item $b = c$; \end{enumerate} \end{thm} \begin{proof} Proposition \ref{bp} implies that $(2)$ and $(3)$ are equivalent. That $(2)$ implies $(4)$ is obvious. $(4)$ implies $(1)$ by Proposition \ref{prop41}. If $M(\textbf{F})$ is quadratic, then $Y$ is irreducible of dimension $n$. Let $H$ be the hyperplane of $\mathbb{P}V_{\textbf{F}}$ defined by $w_0=0$, and let $U$ be its complement. Since $U \cap Y \simeq \mathbb{C}^n$, we have $\operatorname{dim}(H\cap Y) = n-1 = n + c -b -1$. Therefore, $\operatorname{dim}(\textbf{Bs}(\textbf{F})) = n -1 -c$. Since $\textbf{Bs}(\textbf{F}) = V_+(Q_1, \cdots, Q_c) \subseteq \mathbb{P}^{n-1}$, $\textbf{Bs}(\textbf{F})$ is a set-theoretic complete intersection of codimension $c$. \end{proof} \begin{exam} Let $F^2 = \left\langle x_1^2, x_2^2, x_1x_2 \right\rangle \subseteq \mathbb{C}[x_1, x_2, x_3, x_4]$. Then $M(\textbf{F}) \subsetneq Y$ and $Y$ is a set-theoretic complete intersection of codimension $c = 3$, $\textbf{Bs}(\textbf{F}) = \mathbb{P}^1\subseteq \mathbb{P}W = \mathbb{P}^3$, $b = c-1$. \end{exam} \begin{thm}\label{thm42} Let $M(\textbf{F})\subseteq \mathbb{P}V_{\textbf{F}}$ be an Euler-symmetric projective variety of dimension $n$ corresponding to the symbol system $\textbf{F}$ of rank $2$ with $F^2 = \left\langle Q_1, \cdots, Q_c\right\rangle$ of dimension $c$. Let $\textbf{Bs}(\textbf{F})\subseteq \mathbb{P}W$ be the base locus of the symbol system $\textbf{F}$.
The following statements are equivalent: \begin{enumerate} \item $M(\textbf{F})$ is a complete intersection of codimension $c$ in $\mathbb{P}V_{\textbf{F}}$; \item $\textbf{Bs}(\textbf{F})$ is a set-theoretic complete intersection of codimension $c$ in $\mathbb{P}W$; \item The finite sequence $(Q_1, \cdots, Q_c)$ is a regular sequence in $\mathbb{C}[x_1, \cdots, x_n]$. \end{enumerate} \end{thm} \begin{proof} By Lemma \ref{lemma41}, $(2)$ is equivalent to $(3)$. Now we prove $(1)$ is equivalent to $(2)$. \begin{enumerate}[(i)] \item Suppose $M(\textbf{F})$ is a complete intersection. Let $\mathcal{I} = I(M(\textbf{F})) = (g_1, \cdots, g_c)$. Since $f_i \in \mathcal{I}$ and $M(\textbf{F})$ is nondegenerate, we have $\operatorname{deg}(g_i)\geq 2$ and $\left\langle f_1, \cdots, f_c \right\rangle \subseteq \left\langle g_1, \cdots, g_c \right\rangle$. Therefore, $M(\textbf{F}) = Y$. Then $\textbf{Bs}(\textbf{F})$ is a set-theoretic complete intersection of codimension $c$ in $\mathbb{P}W$ by Theorem \ref{thm41}. \item Suppose $\textbf{Bs}(\textbf{F})$ is a set-theoretic complete intersection of codimension $c$; then $Y = M(\textbf{F})$ by Theorem \ref{thm41}. If $M(\textbf{F})$ is not a complete intersection, then there is a nonzero homogeneous polynomial $g(w_0, \cdots, w_N)\in \mathcal{I}\backslash (f_1, \cdots, f_c)$ of degree $k$. Since $(f_1, \cdots, f_c) \subseteq \mathcal{I}$, we can assume that \begin{align*} g = & \displaystyle\sum_{i =1}^{j}w_0^ig_i(w_1, \cdots, w_n) + g_0(w_1, \cdots, w_N). \end{align*} By the definition of $M(\textbf{F})$, $g \in \mathcal{I}$ is equivalent to $g_0 \in \mathcal{I}$ and $g_i \equiv 0$, that is to say, $g = g_0$ is of degree $k$. We have a homogeneous decomposition of $g$ as follows: \[ g = \sum h_{i,j}(w_1, \cdots, w_n, w_{n+1}, \cdots, w_N), \] where $h_{i,j}(w_1, \cdots, w_n, w_{n+1}, \cdots, w_N)$ are homogeneous polynomials of degree $i$ w.r.t. $w_1, \cdots, w_n$, and are homogeneous polynomials of degree $j$ w.r.t.
$w_{n+1}, \cdots, w_{N}$, respectively. Again $g \in \mathcal{I}$ is equivalent to $h_{i,j} \in \mathcal{I}$. If $h_{i,0} \in \mathcal{I}$, then $h_{i,0}\equiv 0$. Therefore, we can also assume that $$g = h_{i,j}(w_1, \cdots, w_n, w_{n+1}, \cdots, w_N) \in \mathcal{I}\backslash (f_1, \cdots, f_c) $$ for some $j > 0$, and $w_{n+i} \nmid g, \; \forall\; 1 \leq i \leq c$. Thus, we get $$g(x_1, \cdots, x_n, Q_1, \cdots, Q_c) \equiv 0.$$ By reassigning the index of $\{w_{n+1}, \cdots, w_{n+c}\}$ if necessary, we can rewrite $g$ in the form \[ g = \sum_{t =1}^l w_{n+t}F_t(w_1, \cdots, w_n, w_{n+t}, \cdots, w_{n+c}), \] with $F_t \notin (w_{n+1}, \cdots, w_{n+t-1})$ and $l \geq 2$. Therefore, \[ \sum_{t =1}^l Q_{t}F_t(x_1, \cdots, x_n, Q_{t}, \cdots, Q_{c}) = 0. \] This implies that the image of $Q_l$ in $\mathbb{C}[x_1, \cdots, x_n]/(Q_1, \cdots, Q_{l-1})$ is a zero-divisor, which contradicts the fact that the sequence $(Q_1, \cdots, Q_{c})$ is a regular sequence. \end{enumerate} This concludes the proof. \end{proof} \begin{rmk} (1) Theorem \ref{thm2} follows from Theorem \ref{thm41} and Theorem \ref{thm42}. \\ (2) For a smooth variety which is covered by lines, a similar result was proved in Theorem 2.4 of \cite{Rasso13}. \end{rmk} \begin{exam}\label{exam41} Let $F^2 = \left\langle x_1^2, \cdots, x_i^2 \right\rangle$ for some $1 \leq i \leq n$, and let $\textbf{F} = F^0 \oplus F^1 \oplus F^2$. Then $M(\textbf{F})$ is a complete intersection of codimension $i$. \end{exam} \begin{exam} Let $X$ be an Euler-symmetric projective surface. If $X$ is a complete intersection, then the corresponding symbol system $\textbf{F}$ is of rank $2$, and it is exactly one of the following: \begin{enumerate} \item $F^2 = \left\langle Q \right\rangle $, where $Q$ is any homogeneous polynomial of degree $2$ in $\mathbb{C}[x_1, x_2]$. In this case, $X$ is smooth if $Q$ is not a square, and $X$ has only one singular point if $Q$ is a square.
\item $F^2 =\left\langle x_1^2, ax_1x_2 + bx_2^2 \right\rangle $, where $b\in \mathbb{C}^{\times}$. In this case, $X$ is non-normal and $\operatorname{Sing}(X) = \mathbb{P}^1$. \end{enumerate} \end{exam} \begin{exam} Let $F^2 = \left\langle \sum_{i=1}^n x_i^2, \sum_{i=1}^n \lambda_i x_i^2 \right\rangle$ with $\lambda_i \neq \lambda_j$ for $i\neq j$. By Proposition 2.1 of \cite{reid72}, $\textbf{Bs}(\textbf{F})$ is smooth and of codimension $2$ in $\mathbb{P}^{n-1}$ if $n\geq 3$. By Theorem \ref{thm2}, $M(\textbf{F})$ is a complete intersection, hence not smooth. This also shows that the converse of Proposition 4.4 in \cite{FH20} is not true. \end{exam} \begin{exam} Let $F^2 = \left\langle Q_1, \cdots, Q_c \right\rangle$, and let $\textbf{Bs}(\textbf{F})$ be a set-theoretic complete intersection. We can easily see that $$\operatorname{Sing}(M(\textbf{F})) \supseteq \mathbb{P}^{c-1} = \{[0:\cdots :0:w_{n+1}:\cdots:w_{n+c}]\in \mathbb{P}^{n+c}\},$$ if $c\geq 2$. This is another way to see that an Euler-symmetric projective variety which is a complete intersection is not smooth. \end{exam} \begin{exam} Let $F^2 = \left\langle x_1x_5-x_2^2, x_1x_6-x_3^2, x_1x_7-x_2x_3 \right\rangle$; then $\textbf{Bs}(\textbf{F})\subseteq \mathbb{P}^{6}$ is a set-theoretic complete intersection of codimension $3$, and the corresponding Euler-symmetric variety $M(\textbf{F})\subseteq \mathbb{P}^{10}$ is a complete intersection of codimension $3$. But $I(\textbf{Bs}(\textbf{F})) = (x_1x_5-x_2^2, x_1x_6-x_3^2, x_1x_7-x_2x_3, x_3x_5-x_2x_7, x_2x_6-x_3x_7)$ and $x_3x_5-x_2x_7 \notin (x_1x_5-x_2^2, x_1x_6-x_3^2, x_1x_7-x_2x_3)$. This implies that $\textbf{Bs}(\textbf{F})\subseteq \mathbb{P}^{6}$ is not a scheme-theoretic complete intersection. \end{exam} \begin{exam} Let $F^2 = \left\langle x_1x_3-x_2^2, x_3x_5-x_4^2, x_1x_5-2x_2x_4 +x_3^2 \right\rangle$; then $\textbf{Bs}(\textbf{F})$ is the rational normal quartic curve in $\mathbb{P}^{4}$ by Theorem 1 of \cite{torrente2015rational}.
By a direct calculation, $M(\textbf{F})\subseteq \mathbb{P}V_{\textbf{F}} = \mathbb{P}^8$ is a complete intersection, i.e., \[ \mathcal{I} = (w_0w_6-w_1w_3+w_2^2, w_0w_7-w_3w_5+ w_4^2, w_0w_8 -w_1w_5+2w_2w_4-w_3^2). \] \end{exam}
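As a sanity check on this last example, recall from the beginning of this section that $M(\textbf{F})$ contains the affine chart $w_0 = 1$, $w_i = x_i$ for $1\leq i\leq 5$, $w_{5+j} = Q_j(x)$ for $1\leq j\leq 3$. The following sketch (an illustration only; the function and variable names are ours) verifies exactly, over random integer points, that each quadric $w_0w_{5+j} - Q_j(w_1, \cdots, w_5)$ vanishes along this parametrization, and that all three $Q_j$ vanish on the rational normal quartic $[t^4:t^3u:t^2u^2:tu^3:u^4]$, so $\textbf{Bs}(\textbf{F})$ indeed contains the quartic curve.

```python
import random

# Quadrics spanning F^2 in the last example (encoding is ours; x[0] is unused
# so that indices match the subscripts in the text).
def Q1(x): return x[1] * x[3] - x[2] ** 2               # x1*x3 - x2^2
def Q2(x): return x[3] * x[5] - x[4] ** 2               # x3*x5 - x4^2
def Q3(x): return x[1] * x[5] - 2 * x[2] * x[4] + x[3] ** 2

quadrics = [Q1, Q2, Q3]

# (a) Bs(F) contains the rational normal quartic [t^4 : t^3 u : ... : u^4].
for _ in range(200):
    t, u = random.randint(-30, 30), random.randint(-30, 30)
    x = [None, t**4, t**3 * u, t**2 * u**2, t * u**3, u**4]
    assert all(Q(x) == 0 for Q in quadrics)

# (b) The quadrics f_j = w0*w_{5+j} - Q_j(w1, ..., w5) vanish on the affine
#     chart w0 = 1, w_i = x_i, w_{5+j} = Q_j(x) of M(F).
for _ in range(200):
    x = [None] + [random.randint(-20, 20) for _ in range(5)]
    w = [1] + x[1:] + [Q(x) for Q in quadrics]          # w0, w1..w5, w6..w8
    for j, Q in enumerate(quadrics):
        assert w[0] * w[6 + j] - Q([None] + w[1:6]) == 0

print("ok")
```

Since all arithmetic is over the integers, these checks are exact; of course they only test containment of the chart in $V_+(f_1, f_2, f_3)$, not the full equality $\mathcal{I} = (f_1, f_2, f_3)$, which requires the regular-sequence criterion of Theorem \ref{thm42}.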
https://arxiv.org/abs/0808.3604
On the dimension of the Hilbert scheme of curves
Consider a component of the Hilbert scheme whose general point corresponds to a degree d genus g smooth irreducible and nondegenerate curve in a projective variety X. We give lower bounds for the dimension of such a component when X is P^3, P^4 or a smooth quadric threefold in P^4 respectively. Those bounds make sense from the asymptotic viewpoint if we fix d and let g vary. Some examples are constructed using determinantal varieties to show the sharpness of the bounds for d and g in a certain range. The results can also be applied to study rigid curves.
\section{Introduction} In this section, we briefly recall some basic facts about Hilbert schemes, and state the main results of this paper. Let $P$ be the Hilbert polynomial of a subscheme in $\mathbb P^{r}$. We can ask if there exists a good parameter space $\mathcal {H}_{P, r}$ parametrizing all the subschemes that have $P$ as their Hilbert polynomial. Grothendieck proved the following fundamental result on the existence of $\mathcal {H}_{P, r}$. \begin{theorem} There exists a fine moduli space $\mathcal {H}_{P, r}$. Moreover, it is a projective scheme. \end{theorem} Very few facts about the global properties of $\mathcal H_{P, r}$ have been obtained. However, the connectedness of $\mathcal H_{P,r}$ has been proved in Hartshorne's thesis. \begin{theorem} The Hilbert scheme $\mathcal H_{P,r}$ is connected for any $P$ and $r$. \end{theorem} Here curves are our main interest. The Hilbert polynomial $P$ of a curve is a linear function with leading coefficient $d$ and constant term $1-g$, where $d$ and $g$ are the degree and genus of the curve. In this case, we use the notation $\mathcal H_{d,g,r}$ instead of $\mathcal H_{P,r}$. Sometimes we will also simply use $\mathcal H$ when there is no confusion. Consider the dimension of $\mathcal H$. We have the following result. See, for instance, \cite[Section 1.E]{HM} for related references. \begin{theorem} \label{dim} Let $C$ be a 1-dimensional subscheme in $\mathbb P^{r}$ such that $[C]\in\mathcal H_{d,g,r}$. The tangent space of $\mathcal H$ at $[C]$ can be identified with $$ T_{[C]}\mathcal H = H^{0}(C,\mathcal N_{C/\mathbb P^{r}}), $$ where $\mathcal N_{C/\mathbb P^{r}}$ is the normal sheaf of $C$ in $\mathbb P^{r}$. Moreover, if $C$ is a local complete intersection, then $$ h^{0}(C, \mathcal N_{C/\mathbb P^{r}})-h^{1}(C, \mathcal N_{C/\mathbb P^{r}})\leq \mbox{dim}_{[C]}\mathcal H\leq h^{0}(C, \mathcal N_{C/\mathbb P^{r}}).
$$ \end{theorem} Let $U$ be a component of $\mathcal H_{d,g,r}$ whose general point corresponds to a smooth irreducible and nondegenerate curve $C$. Also let $l_{d,g,r}$ be the lower bound for the dimension of all such components $U$. Our aim is to estimate $l_{d,g,r}$. Define a number $h_{d,g,r} = \chi(\mathcal N_{C/\mathbb P^{r}}) = h^{0}(C, \mathcal N_{C/\mathbb P^{r}})-h^{1}(C, \mathcal N_{C/\mathbb P^{r}}) = (r+1)d-(r-3)(g-1)$. By Theorem \ref{dim}, we know that $l_{d,g,r}\geq h_{d,g,r}$. However, this bound $h_{d,g,r}$ may not be good in many cases. For the beginning case $r = 3$, $h_{d,g,3} = 4d$ is independent of $g$. If we fix $d$ and let $g$ vary, the genus of a degree $d$ irreducible and nondegenerate curve in $\mathbb P^{3}$ can be as large as the Castelnuovo bound $\pi(d,3) = \frac{d^{2}}{4}+O(d)$. One can refer to \cite[Section 3]{H} for a good introduction to the Castelnuovo theory and related results. When $g$ approaches $\pi(d,3)$, we can compute $l_{d,g,3}$ explicitly. Roughly speaking, $l_{d,g,3}$ is asymptotically equal to $g$, which is much larger than $4d$. Therefore, it would be nice if we can come up with an improved lower bound. \begin{theorem} \label{r=3} Define an integer-valued function $\mu(d,g)$ in the range $g^{2}\geq d^{3}$ as follows: $$ \mu(d,g) = 1+\lfloor\frac{d^{2}-3d-2g}{g+d+\sqrt{g^{2}-d^{3}+4dg+4d^{2}}}\rfloor, $$ where $\lfloor \cdot\rfloor$ is the floor function. Then for any $d \geq 3$ and $g \leq \pi(d,3)$, we have $$ l_{d,g,3} \geq \begin{cases} 4d, &\text{if $g^{2}< d^{3}$}; \\ 4d+g-1-\mu(d,g)d, &\text{if $g^{2}\geq d^{3}$}. \end{cases} $$ \end{theorem} The function $\mu$ involved in Theorem \ref{r=3} may look confusing. But let us analyze this new bound a little bit. If $g^{2}\geq d^{3}$, we always have $g-1-\mu(d,g)d>0$. Moreover, if we fix $d$, $g-1-\mu(d,g)d$ is an increasing function of $g$. This implies that in the range $g^{2}\geq d^{3}$, $l_{d,g,3}$ is strictly larger than the expected dimension $4d$.
It actually goes up to $g$ when the genus approaches the Castelnuovo bound $\pi(d,3)$, as already predicted by the Castelnuovo theory. We can also present an example to show the power of this bound. Suppose $d = 100$ and $g$ can vary from $0$ to the Castelnuovo bound $\pi(100,3) = 2401$. Pick $g = 1100$, which is large but not close to the Castelnuovo bound. The bound $4d$ only tells us that $l_{100,1100,3}\geq 400$. However, by Theorem \ref{r=3}, we get $l_{100, 1100, 3}\geq 1099$, which is much better. Now consider the case $r\geq 4$. The number $h_{d,g,r} = (r+1)d-(r-3)(g-1)$ could be negative if $g$ is larger than $d$. So it makes sense to find at least a positive bound for $l_{d,g,r}$. Furthermore, it may help answer a question about rigid curves. A rigid curve in $\mathbb P^{r}$ is a smooth irreducible and nondegenerate curve that does not have any deformation except those induced from the automorphisms of $\mathbb P^{r}$. Clearly, rational normal curves are rigid. To the author's best knowledge, no other rigid curves have been found. In \cite{HM}, Harris and Morrison conjectured that there do not exist rigid curves other than rational normal curves. One way to attack this conjecture is to bound $l_{d,g,r}$. For instance, if the inequality $l_{d,g,r} > \mbox{dim PGL}(r)=r^{2}+2r$ holds, there cannot exist a degree $d$ genus $g$ rigid curve in $\mathbb P^{r}$. In fact, this is one of our motivations to study $l_{d,g,r}$. For the case $r=4$, we have the following result. \begin{theorem} \label{r=4} Let $C$ be a degree $d$ genus $g$ smooth irreducible and nondegenerate curve in $\mathbb P^{4}$. Fix $d$ and let $g$ vary. If $g>3d\sqrt{d}+O(d)$, then $C$ is not rigid. \end{theorem} Here we could be more precise about the range of $d$ and $g$ as we have done in Theorem \ref{r=3}. However, we choose to only focus on the asymptotic behavior, since the order $d\sqrt{d}$ seems to be important.
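The numerical example with $d = 100$, $g = 1100$ is easy to reproduce by machine, as are the two claims made just before it ($g-1-\mu(d,g)d$ is positive and increasing in $g$ for fixed $d$). The short script below is a sketch (the helper name `mu` is ours) that checks both over the whole range $d^{3/2} \leq g \leq \pi(100,3) = 2401$.

```python
from math import floor, sqrt

def mu(d, g):
    # mu(d, g) = 1 + floor((d^2 - 3d - 2g) / (g + d + sqrt(g^2 - d^3 + 4dg + 4d^2)))
    assert g * g >= d ** 3          # mu is only defined in this range
    denom = g + d + sqrt(g * g - d ** 3 + 4 * d * g + 4 * d * d)
    return 1 + floor((d * d - 3 * d - 2 * g) / denom)

d = 100
print(mu(d, 1100), 4 * d + 1100 - 1 - mu(d, 1100) * d)   # 4 1099

# Spot-check: g - 1 - mu(d, g)*d is positive and nondecreasing in g
# for d = 100 on the range d^{3/2} = 1000 <= g <= pi(d, 3) = 2401.
prev = 0
for g in range(1000, 2402):
    val = g - 1 - mu(d, g) * d
    assert val > 0 and val >= prev
    prev = val
```

So for $d = 100$, $g = 1100$ one gets $\mu = 4$ and the improved bound $4d + g - 1 - \mu d = 1099$, matching the text.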
Currently we have not been able to extend the result to $r\geq 5$. But combined with the results in \cite{CCG}, we expect the following conjecture to hold in general. \begin{conjecture} \label{r>4} For $r\geq 5$, there always exists a constant $\lambda_{r}$ such that if $g\geq \lambda_{r}d\sqrt{d}+O(d)$, a degree $d$ genus $g$ smooth irreducible and nondegenerate curve in $\mathbb P^{r}$ is not rigid. \end{conjecture} In addition to projective spaces, we can also study the deformation of curves on a hypersurface. The beginning case would be a smooth quadric threefold in $\mathbb P^{4}$. Since all the smooth quadrics in $\mathbb P^{4}$ are isomorphic, we fix one and denote it by $Q$. Let $\mathcal H_{d, g}(Q)$ be the union of components of the Hilbert scheme whose general point parameterizes a degree $d$ genus $g$ smooth irreducible and nondegenerate curve on $Q$. Here a nondegenerate curve means that it is not contained in a $\mathbb P^{3}$. For a curve $[C]\in \mathcal H_{d,g}(Q)$, as in Theorem \ref{dim}, $\chi(\mathcal N_{C/Q})= h^{0}(\mathcal N_{C/Q})-h^{1}(\mathcal N_{C/Q}) = 3d$ provides a lower bound for the dimension of any component in $\mathcal H_{d,g}(Q)$. We can still ask how good this lower bound would be. A result similar to Theorem \ref{r=4} can be established as follows. \begin{theorem} \label{quadric} If $g> \frac{1}{\sqrt{2}} d\sqrt{d} + O(d)$, then the dimension of any component of $\mathcal H_{d,g}(Q)$ is strictly greater than the expected dimension $3d$. On the other hand, if $g<\frac{2}{15\sqrt{5}} d\sqrt{d} + O(d)$, then there always exists a component of $\mathcal H_{d,g}(Q)$ whose dimension equals $3d$. \end{theorem} Again, we only focus on the asymptotic behavior. The coefficients of $d\sqrt{d}$ might be improved by refining our techniques, but it seems hard to obtain a better order than $d\sqrt{d}$. Throughout the paper, we work over the complex number field.
A degree $d$ genus $g$ curve in a projective space means a 1-dimensional subscheme that has $dm + 1-g$ as its Hilbert polynomial. Most of the time we will only consider smooth irreducible and nondegenerate curves. {\bf Acknowledgements.} I am grateful to Professor Joe Harris, who first told me about this question and made many useful suggestions during the preparation of this work. \section{The Hilbert scheme of curves in $\mathbb P^{3}$} In this section, we will verify Theorem \ref{r=3}. Let us briefly describe the outline of the proof. Fix $d$ and let $g$ vary. On the one hand, we construct some components of the Hilbert scheme with the expected dimension $4d$ when $g$ is relatively small. On the other hand, if $g$ is quite large, the curve must lie on a surface of low degree. We can estimate the dimension of the deformation of the curve on that surface, which provides a better bound than $4d$. \subsection{Determinantal curves in $\mathbb P^{3}$} As mentioned before, we want to construct some components of the Hilbert scheme that have $4d$ as their dimension. For a curve $C$ in $\mathbb P^{3}$, let $\mathcal I_{C} = \mathcal I_{C/\mathbb P^{3}}$ denote the ideal sheaf of $C$, and let $\mathcal N_{C}$ be the normal sheaf $\mathcal N_{C/\mathbb P^{3}}$. Firstly, let us look at an example constructed in \cite{E}. Consider a curve $C$ whose ideal sheaf has a resolution as follows: \begin{equation} \label{d3} 0\rightarrow \mathcal O^{\oplus s}_{\mathbb P^{3}}(-s-1)\rightarrow \mathcal O^{\oplus (s+1)}_{\mathbb P^{3}}(-s)\rightarrow \mathcal I_{C} \rightarrow 0. \end{equation} It is easy to derive the determinantal model for such a curve from this resolution. Pick an $s \times (s+1)$ matrix $A$ whose entries are general linear forms. Then the ideal sheaf of the curve defined by the $s\times s$ minors of $A$ has the above resolution.
Tensoring the exact sequence (\ref{d3}) with $\mathcal O_{\mathbb P^{3}}(k)$, we get $h^{1}(\mathcal I_{C}(k)) = 0$ for any $k$. Hence, $C$ is projectively normal. We can also get the Hilbert polynomial of $C$. Actually, when $k$ is large enough, we have \begin{eqnarray*} h^{0}(\mathcal I_{C}(k)) & = & (s+1)\cdotp h^{0}(\mathcal O_{\mathbb P^{3}}(k-s)) - s\cdotp h^{0}(\mathcal O_{\mathbb P^{3}}(k-s-1)) \\ & = & \frac{1}{6}(k-s+2)(k-s+1)(k+2s+3). \end{eqnarray*} The Hilbert polynomial of $C$ equals $$h^{0}(\mathcal O_{\mathbb P^{3}}(k))-h^{0}(\mathcal I_{C}(k)) = \frac{1}{2}(s^{2}+s)k - \frac{1}{6}(2s^{3}-3s^{2}-5s).$$ So immediately we obtain the degree and genus of $C$, $$ d=\frac{1}{2}s(s+1), $$ $$ g = 1+\frac{1}{6}(2s^{3}-3s^{2}-5s). $$ If we take all possible linear forms as entries of $A$, by the above construction, we get an irreducible component $U$ in the Hilbert scheme whose general point $[C]$ corresponds to a smooth irreducible and nondegenerate curve. By counting parameters, the dimension of $U$ is $$ 4s(s+1)-1-\mbox{dim PGL}_{s}-\mbox{dim PGL}_{s+1} = 2s^{2}+2s = 4d. $$ Actually $U$ is smooth at $[C]$, due to the fact that $H^{1}(\mathcal N_{C})=0$. By \cite[Remark 2.2.6]{Kl}, we have \begin{eqnarray} H^{0}(\mathcal N_{C}) = \mbox{Ext}^{1}(\mathcal I_{C}, \mathcal I_{C}), \nonumber \\ \label{H1} H^{1}(\mathcal N_{C}) = \mbox{Ext}^{2}(\mathcal I_{C}, \mathcal I_{C}). \end{eqnarray} Apply the functor Hom$(-, \mathcal I_{C})$ to the exact sequence (\ref{d3}).
We get a long exact sequence \begin{eqnarray} \label{LE} 0 \rightarrow \mbox{Hom}(\mathcal I_{C}, \mathcal I_{C}) \rightarrow \mbox{Hom}(\mathcal O^{\oplus (s+1)}_{\mathbb P^{3}}(-s), \mathcal I_{C}) \rightarrow \mbox{Hom}(\mathcal O^{\oplus s}_{\mathbb P^{3}}(-s-1), \mathcal I_{C}) \nonumber \\ \rightarrow \mbox{Ext}^{1}(\mathcal I_{C}, \mathcal I_{C}) \rightarrow \mbox{Ext}^{1}(\mathcal O^{\oplus (s+1)}_{\mathbb P^{3}}(-s), \mathcal I_{C}) \rightarrow \mbox{Ext}^{1}(\mathcal O^{\oplus s}_{\mathbb P^{3}}(-s-1), \mathcal I_{C}) \nonumber \\ \rightarrow \mbox{Ext}^{2}(\mathcal I_{C}, \mathcal I_{C}) \rightarrow \mbox{Ext}^{2}(\mathcal O^{\oplus (s+1)}_{\mathbb P^{3}}(-s), \mathcal I_{C}) \rightarrow \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{eqnarray} Note that $ \mbox{Ext}^{2}(\mathcal O_{\mathbb P^{3}}(-s), \mathcal I_{C}) = H^{2}(\mathcal I_{C}(s)). $ Twisting (\ref{d3}) by $\mathcal O_{\mathbb P^{3}}(s)$, we get $H^{2}(\mathcal I_{C}(s))=0$. Moreover, $\mbox{Ext}^{1}(\mathcal O_{\mathbb P^{3}}(-s-1), \mathcal I_{C}) = H^{1}(\mathcal I_{C}(s+1)) = 0$, since $C$ is projectively normal. Then from (\ref{H1}) and (\ref{LE}), it follows that $H^{1}(\mathcal N_{C}) = 0$ as expected. Let us look at the values of $d$ and $g$ obtained above. Since $d\sim \frac{1}{2}s^{2}$ and $g\sim \frac{1}{3}s^{3}$, we have $g^{2}\sim \frac{8}{9} d^{3}$ asymptotically. This suggests that the ratio $\frac{g^{2}}{d^{3}}$ is an important invariant. More precisely, we want to find a number $\lambda$ such that if $\frac{g^{2}}{d^{3}}\leq \lambda$ asymptotically, then there exists a component of the Hilbert scheme whose dimension is close to $4d$. In this case, the lower bound $4d$ is still good. Actually, we will show that $\lambda = 1$ is almost optimal. Continuing with determinantal curves, we modify the entries of the matrix $A$, using degree $t$ homogeneous polynomials instead of linear forms.
Then the ideal sheaf of $C$ has the resolution \begin{equation} \label{IIE} 0\rightarrow \mathcal O^{\oplus s}_{\mathbb P^{3}}(-t-ts)\rightarrow \mathcal O^{\oplus (s+1)}_{\mathbb P^{3}}(-ts)\rightarrow \mathcal I_{C} \rightarrow 0. \end{equation} Compute the Hilbert polynomial as before. We obtain the degree and genus of $C$ as follows, $$ d = \frac{1}{2}s(s+1)t^{2}, $$ $$ g = 1 + \frac{1}{6}s(s+1)(2s+1)t^{3}-s(1+s)t^{2}. $$ By counting parameters, the dimension of the component of such curves is \begin{eqnarray*} s(s+1)\binom{t+3}{3}-1-\mbox{dim PGL}_{s}-\mbox{dim PGL}_{s+1} \\ = \frac{1}{6}s(s+1)(t^{3}+6t^{2}+11t-6). \end{eqnarray*} We denote the above value by $l$. Note that the ratio $\frac{g^{2}}{d^{3}}$ satisfies the inequality $$ \frac{g^{2}}{d^{3}} < \frac{2}{9}(4+\frac{1}{s^{2}+s}). $$ So asymptotically $\frac{g^{2}}{d^{3}}\leq 1$. Moreover, the ratio tends to 1 if and only if $s=1$, that is, when $C$ is a complete intersection of two degree $t$ surfaces. Another interesting fact is that when $t\leq 3$ we always have $l=4d$ for any $s$; indeed, $l=4d$ if and only if $t^{3}-6t^{2}+11t-6 = (t-1)(t-2)(t-3) = 0$. But as $t$ increases, $l$ will get much larger than $4d$. We have already discussed the case $t=1$. Now we take a look at $t=2$ and $3$. If $t=2$, we have $$ d = 2s(s+1), $$ $$ g = 1 + \frac{8}{3}(s-1)s(s+1). $$ Asymptotically $\frac{g^{2}}{d^{3}}$ goes to $\frac{8}{9}$. If $t=3$, we have $$ d = \frac{9}{2}s(s+1), $$ $$ g = 1 + \frac{9}{2}s(s+1)(2s-1). $$ Asymptotically $\frac{g^{2}}{d^{3}}$ goes to $\frac{8}{9}$ as well, still less than 1. Next, we further modify the matrix $A$ by allowing the entries in different rows to have different degrees. Suppose $A=(F_{ij})$, $1\leq i \leq s, 1\leq j \leq s+1$, and the degree of $F_{ij}$ is $k_{i}$. Let $t=\sum_{i=1}^{s}k_{i}$. Then the ideal sheaf of $C$ has the resolution \begin{equation} \label{IIIE} 0\rightarrow \bigoplus_{i=1}^{s}\mathcal O_{\mathbb P^{3}}(-t-k_{i})\rightarrow \mathcal O^{\oplus (s+1)}_{\mathbb P^{3}}(-t)\rightarrow \mathcal I_{C} \rightarrow 0.
\end{equation} We can obtain the degree and genus of $C$, $$ d = \frac{1}{2}\big(t^{2}+\sum_{i=1}^{s}k_{i}^{2}\big), $$ $$ g = 1 + \frac{1}{6}\Big(2t^{3}-6t^{2}+3\big(\sum_{i=1}^{s}k_{i}^{2}\big)t+\sum_{i=1}^{s}\big(k_{i}^{3}-6k_{i}^{2}\big)\Big). $$ In this case, the dimension estimate by counting parameters depends on how many of the $k_{i}$ share the same value. However, we are more interested in whether $\frac{g^{2}}{d^{3}}$ can exceed $1$ substantially. Let $u=\sum_{i=1}^{s}k_{i}^{2}$; then $\sum_{i=1}^{s}k_{i}^{3}\leq tu$. Hence, we have $$ g < \frac{1}{6}(2t^{3}+4ut), $$ $$ \frac{g^{2}}{d^{3}} < \frac{8}{9}\cdotp\frac{(1+2\alpha)^{2}}{(1+\alpha)^{3}}, $$ where $\alpha = \frac{u}{t^{2}}.$ The derivative of $(1+2\alpha)^{2}/(1+\alpha)^{3}$ vanishes at $\alpha = \frac{1}{2}$, where the right hand side of the last inequality attains its maximum $\frac{256}{243}$, which is still close to 1. The above examples partially explain why we separate the case $g^{2} < d^{3}$ in Theorem \ref{r=3}. The reader may also wonder why we do not construct other examples in addition to determinantal curves, to see if we can get $\frac{g^{2}}{d^{3}}$ much larger than 1 while keeping the dimension of the component relatively low. In fact, this is impossible. The next subsection explains the different situation when $g^{2}\geq d^{3}$. \subsection{Curves on a surface in $\mathbb P^{3}$} In this subsection, we want to show that if $g$ is much larger than $d$, then a curve must be contained in a relatively low degree surface in $\mathbb P^{3}$. Moreover, we can estimate the deformation of the curve on that surface, which provides a proof for the second part of Theorem \ref{r=3}. Firstly, we cite a result originally stated by Halphen and later proved by Gruson and Peskine \cite{GP}. \begin{theorem} \label{s3} Let $C$ be a connected smooth curve of degree $d$ and genus $g$ in $\mathbb P^{3}$, and let $s$ be a positive integer such that $s(s-1)<d$.
If $g$ satisfies \begin{equation} \label{g3} g > \frac{d}{2}(s+\frac{d}{s}-4)-\frac{r(s-r)(s-1)}{2s}, \end{equation} where $0\leq r < s, d+r\equiv 0$ (mod $s$), then $C$ must lie on a surface of degree less than $s$. \end{theorem} Note that if $s\sim \sqrt{d}$, then the right hand side of (\ref{g3}) $\sim \sqrt{d^{3}}$. Hence, Theorem \ref{s3} can help us deal with the case $g^{2}>d^{3}$. Since we only want asymptotic results, Theorem \ref{s3} can be slightly modified for our convenience. \begin{proposition} \label{s3m} Let $C$ be a connected smooth curve of degree $d$ and genus $g$ in $\mathbb P^{3}$, and let $s$ be a positive integer such that $s(s+1)< d$. If $g$ satisfies \begin{equation} \label{g3m} g > \frac{d}{2}(s+\frac{d}{s+1}-3), \end{equation} then $C$ must lie on a surface of degree $k \leq s$. \end{proposition} For fixed $d$ and $g$ in the range $g^{2}\geq d^{3}$, consider the smallest positive integer $s$ satisfying $s(s+1)< d$ and the inequality (\ref{g3m}). Then there exists a surface $S$ of degree $k\leq s$ such that $S$ contains $C$. Let $\mathcal H_{d,g}(S)$ be the Hilbert scheme parameterizing degree $d$ and genus $g$ curves on $S$. $\mathcal H_{d,g}(S)$ can be viewed as a subscheme of $\mathcal H_{d,g,3}$. We want to estimate $\mbox{dim}_{[C]}\mathcal H_{d,g}(S)$. If $S$ is smooth, then $\mathcal X(\mathcal N_{C/S})$ provides a lower bound for $\mbox{dim}_{[C]}\mathcal H_{d,g}(S)$. We have the exact sequence \begin{equation} \label{nb} 0\rightarrow \mathcal N_{C/S}\rightarrow \mathcal N_{C/\mathbb P^{3}}\rightarrow \mathcal N_{S/\mathbb P^{3}}\otimes \mathcal O_{C} \rightarrow 0. \end{equation} Since $\mathcal N_{S/\mathbb P^{3}} = \mathcal O_{S}(k)$, we have $\mathcal N_{S/\mathbb P^{3}}\otimes\mathcal O_{C} = \mathcal O_{C}(k)$.
Then we can compute $\mathcal X(\mathcal N_{C/S})$ by the exact sequence (\ref{nb}) and Riemann-Roch, \begin{eqnarray} \mathcal X(\mathcal N_{C/S}) &=& \mathcal X(\mathcal N_{C/\mathbb P^{3}}) - \mathcal X(\mathcal N_{S/\mathbb P^{3}}\otimes \mathcal O_{C})\nonumber \\ &=& 4d - \mathcal X(\mathcal O_{C}(k)) \nonumber \\ &=& 4d+g-1-kd \nonumber \\ &\geq & 4d+g-1-sd. \nonumber \end{eqnarray} So we have \begin{equation*} \mbox{dim}_{[C]}\mathcal H_{d,g,3}\geq \mbox{dim}_{[C]}\mathcal H_{d,g}(S)\geq 4d+g-1-sd. \end{equation*} Therefore, we get a lower bound for the dimension of $\mathcal H_{d,g,3}$, \begin{equation} \label{se3} l_{d,g,3}\geq 4d+g-1-sd. \end{equation} The advantage of (\ref{se3}) is that in the range $g^{2}\geq d^{3}$, as $g$ increases, $s$ decreases, so $4d+g-1-sd$ is increasingly dominated by $g$. For instance, if we fix $d$ and let $g$ approach the Castelnuovo bound $\pi(d,3)$, then the dimension of $\mathcal H_{d,g,3}$ tends to $g$, while in that regime $s$ is very small. Therefore, the estimate (\ref{se3}) does not lose much information from the asymptotic viewpoint. Now we can finish the proof of Theorem \ref{r=3} easily. \begin{proof} $4d$ is the classical lower bound for any $d,g$. Moreover, in the range $g^{2}\geq d^{3}$, the smallest integer $s$ satisfying $s(s+1)< d$ and $g > \frac{d}{2}(s+\frac{d}{s+1}-3)$ is given by $s = \mu (d,g)$. Applying the lower bound $4d+g-1-sd$ obtained in (\ref{se3}) completes the proof. \end{proof} In the above argument, there is one gap we need to fix: the surface $S$ may be singular, with $C$ passing through singular points of $S$. In that case we cannot simply apply cohomology to estimate the dimension of the deformation of $C$ on $S$. Instead, we have to use Ext groups. Before doing that, we will prove a simple result, which shows that the situation is not very bad even if $S$ is singular. \begin{lemma} \label{fs} Let $S_{sing}$ denote the singular locus of a surface $S$.
Under the assumption of Proposition \ref{s3m}, if $C\cap S_{sing}$ is not empty, then it is 0-dimensional. \end{lemma} \begin{proof} If the dimension of $S_{sing}$ is 0, then the statement is trivial. Otherwise the dimension of $S_{sing}$ is 1. By B\'{e}zout's theorem, the degree of $S_{sing}$ is at most $k(k-1)\leq s(s-1)< d$. Hence, $C$ cannot be contained in $S_{sing}$. \end{proof} By Lemma \ref{fs}, we can apply the following result from \cite[Lemma 2.13, Theorem 2.15]{Ko}. \begin{proposition} \label{K3} Keep the above notation. If $C\cap S_{sing}$ is 0-dimensional, then $C\subset S$ is generically unobstructed and the dimension of every irreducible component of $\mathcal H_{d,g}(S)$ at $[C]$ is at least \begin{eqnarray} \label{ext3} \mathrm{dim \ Hom}_{C}(\mathcal I_{C/S}/\mathcal I_{C/S}^{2}, \mathcal O_{C}) - \mathrm{dim \ Ext}_{C}^{1}(\mathcal I_{C/S}/\mathcal I_{C/S}^{2}, \mathcal O_{C}). \end{eqnarray} \end{proposition} If $S$ is smooth, the value of (\ref{ext3}) is just $\mathcal X(\mathcal N_{C/S})$. When $S$ is singular, we need to verify some exact sequences of K\"{a}hler differentials. We will do this in a more general setting, since the results apply to many other cases. \begin{proposition} \label{Kn} Suppose $C$ is a smooth connected curve and $X$ is an $(n-k)$-dimensional local complete intersection with $C\subset X\subset \mathbb P^{n}$, where $n\geq 3$ and $1\leq k\leq n-2$. If $C\cap X_{sing}$ is 0-dimensional, we have the following exact sequences \begin{eqnarray} \label{cxn} 0\rightarrow\mathcal I_{C/X}/\mathcal I_{C/X}^{2}\xrightarrow{d}\Omega_{X}\otimes \mathcal O_{C}\rightarrow\Omega_{C}\rightarrow 0, \\ \label{cxpn} 0\rightarrow (\mathcal I_{X}/\mathcal I_{X}^{2})\otimes\mathcal O_{C}\xrightarrow{d} \Omega_{\mathbb P^{n}}\otimes\mathcal O_{C}\rightarrow \Omega_{X}\otimes \mathcal O_{C}\rightarrow 0. \end{eqnarray} \end{proposition} Note that if $X$ is smooth, these results are well known.
When $X$ is singular, the above sequences are still exact except that the maps on the left may fail to be injective, cf. \cite[II.8]{Ha}. \begin{proof} It suffices to verify that for each sequence the map into the middle term is injective. Since the question is local, we only need to work on a local affine chart $U$. Suppose $x_{1},\ldots,x_{n}$ are the local coordinates, and $f_{1},\ldots,f_{k}$ locally cut out $X$ in $U$. We have $\Omega_{X}(U) = \Omega_{\mathbb P^{n}}\otimes \mathcal O_{X}(U)/(df_{1},\ldots, df_{k})$. Firstly, let us verify (\ref{cxn}). Pick an element $g\in \mathcal I_{C/X}(U)$. Suppose we have $$dg=\sum_{j=1}^{n}\frac{\partial g}{\partial x_{j}}dx_{j}=0 \in \Omega_{X}\otimes \mathcal O_{C}(U).$$ Then there exist $a_{1},\ldots,a_{k}\in \mathcal O_{C}(U)$ such that restricted to $C$, $$\frac{\partial g}{\partial x_{j}}=\sum_{i=1}^{k}a_{i}\frac{\partial f_{i}}{\partial x_{j}}, \ 1\leq j\leq n. $$ It follows that $d(g - \sum_{i=1}^{k}a_{i}f_{i}) = 0$ on $C$. Since $C$ is smooth, the vanishing of $g - \sum_{i=1}^{k}a_{i}f_{i}$ and of its differential along $C$ tells us that $g - \sum_{i=1}^{k}a_{i}f_{i}\in \mathcal I_{C}^{2}(U)$, which implies $g = g - \sum_{i=1}^{k}a_{i}f_{i} = 0$ as elements in $\mathcal I_{C/X}/\mathcal I_{C/X}^{2}(U)$. Next, let us verify the exactness of (\ref{cxpn}). Take an element $h = \sum_{i=1}^{k}b_{i}f_{i}\in \mathcal I_{X}(U)$. If $dh=0$ restricted to $C$, since $f_{1},\ldots,f_{k}$ vanish on $C$, we have $$\sum_{i=1}^{k}b_{i}\frac{\partial f_{i}}{\partial x_{j}} = 0, \ 1\leq j\leq n$$ on $C$. Note that $X_{sing}\cap U$ consists of those points where the matrix $$\Big(\frac{\partial f_{i}}{\partial x_{j}}\Big)_{1\leq i\leq k, 1\leq j\leq n}$$ drops rank. Since $C\cap X_{sing}$ consists of at most finitely many points, $b_{1},\ldots,b_{k}$ must vanish on a nonempty open subset of $C\cap U$, which forces them to vanish identically on $C\cap U$.
Hence, $h\otimes 1 = \sum_{i=1}^{k} f_{i}\otimes b_{i} = 0 \in (\mathcal I_{X}/\mathcal I_{X}^{2})\otimes \mathcal O_{C}(U).$ \end{proof} Now consider the deformation of $C$ on $X$. We have the following result. \begin{proposition} \label{cx} Keep the above assumptions. If $C\cap X_{sing}$ is 0-dimensional, the dimension of every component of $\mathcal H_{d,g}(X)$ at $[C]$ is at least $$\mathcal X(\mathcal N_{C/\mathbb P^{n}})-\mathcal X(\mathcal N_{X/\mathbb P^{n}}|_{C}).$$ Moreover, suppose $X$ is a complete intersection cut out by hypersurfaces $F_{1},\ldots, F_{k}$, deg $F_{i} = d_{i}, i=1,\ldots, k$. The above lower bound can be written explicitly as $$(n+1-\sum_{i=1}^{k}d_{i})d+(k-n+3)(g-1).$$ \end{proposition} \begin{proof} By the assumption, $C\subset X$ is generically unobstructed, so we can apply the result from \cite[Lemma 2.13, Theorem 2.15]{Ko}. The local dimension of any component of $\mathcal H_{d,g}(X)$ at $[C]$ is at least \begin{eqnarray} \label{extcx} \mathrm{dim \ Hom}_{C}(\mathcal I_{C/X}/\mathcal I_{C/X}^{2}, \mathcal O_{C}) - \mathrm{dim \ Ext}_{C}^{1}(\mathcal I_{C/X}/\mathcal I_{C/X}^{2}, \mathcal O_{C}). \end{eqnarray} Note that if $X$ is smooth, the value of (\ref{extcx}) is $\mathcal X(\mathcal N_{C/X})$, which equals $\mathcal X(\mathcal N_{C/\mathbb P^{n}})-\mathcal X(\mathcal N_{X/\mathbb P^{n}}|_{C})$ due to the well-known exact sequence $$ 0 \rightarrow \mathcal N_{C/X}\rightarrow \mathcal N_{C/\mathbb P^{n}}\rightarrow \mathcal N_{X/\mathbb P^{n}}|_{C}\rightarrow 0.$$ If $X$ is singular, apply the functor Hom$(-, \mathcal O_{C})$ to (\ref{cxn}).
Then we get a long exact sequence \begin{eqnarray} \label{LExtn} 0 \rightarrow \mathrm{Hom} (\Omega_{C}, \mathcal O_{C}) \rightarrow \mathrm{Hom} (\Omega_{X}\otimes \mathcal O_{C}, \mathcal O_{C})\rightarrow \mathrm{Hom} (\mathcal I_{C/X}/\mathcal I_{C/X}^{2},\mathcal O_{C}) \nonumber \\ \rightarrow \mathrm{Ext}^{1} (\Omega_{C}, \mathcal O_{C}) \rightarrow \mathrm{Ext}^{1} (\Omega_{X}\otimes \mathcal O_{C}, \mathcal O_{C})\rightarrow \mathrm{Ext}^{1} (\mathcal I_{C/X}/\mathcal I_{C/X}^{2},\mathcal O_{C}) \nonumber \\ \rightarrow 0. \hspace{9.6cm} \end{eqnarray} The last term is zero, because $\mathrm{Ext}^{2}(\Omega_{C},\mathcal O_{C}) = H^{2}(\mathcal T_{C}) = 0$. Moreover, applying the functor Hom$(-, \mathcal O_{C})$ to (\ref{cxpn}), we get another long exact sequence \begin{eqnarray} \label{LLExtn} 0 \rightarrow \mathrm{Hom}(\Omega_{X}\otimes\mathcal O_{C},\mathcal O_{C})\rightarrow \mathrm{Hom}(\Omega_{\mathbb P^{n}}\otimes \mathcal O_{C},\mathcal O_{C})\rightarrow \mathrm{Hom}((\mathcal I_{X}/\mathcal I_{X}^{2})\otimes \mathcal O_{C},\mathcal O_{C}) \nonumber \\ \rightarrow \mathrm{Ext}^{1}(\Omega_{X}\otimes\mathcal O_{C},\mathcal O_{C})\rightarrow \mathrm{Ext}^{1}(\Omega_{\mathbb P^{n}}\otimes \mathcal O_{C},\mathcal O_{C})\rightarrow \mathrm{Ext}^{1}((\mathcal I_{X}/\mathcal I_{X}^{2})\otimes \mathcal O_{C},\mathcal O_{C}) \nonumber \\ \rightarrow \mathrm{Ext}^{2}(\Omega_{X}\otimes \mathcal O_{C},\mathcal O_{C})\rightarrow 0. \hspace{6.8cm} \end{eqnarray} The last term is zero, because $\mathrm{Ext}^{2}(\Omega_{\mathbb P^{n}}\otimes \mathcal O_{C}, \mathcal O_{C}) = H^{2}(\mathcal T_{\mathbb P^{n}}|_{C})=0$. Note that $C$ is smooth, so $\mathrm{Ext}^{i}(\Omega_{C}, \mathcal O_{C})=H^{i}(\mathcal T_{C})$ and $\mathrm{Ext}^{i}(\Omega_{\mathbb P^{n}}\otimes \mathcal O_{C},\mathcal O_{C}) = H^{i}(\mathcal T_{\mathbb P^{n}}|_{C})$ for any $i$.
From (\ref{cxpn}), we know $(\mathcal I_{X}/\mathcal I_{X}^{2})\otimes \mathcal O_{C}$ is locally free, so $\mathrm{Ext}^{i}((\mathcal I_{X}/\mathcal I_{X}^{2})\otimes\mathcal O_{C}, \mathcal O_{C}) = H^{i}(\mathcal N_{X/\mathbb P^{n}}|_{C})$. Then by (\ref{LExtn}) and (\ref{LLExtn}), we have \begin{eqnarray} &&\mathrm{dim \ Hom}(\mathcal I_{C/X}/\mathcal I_{C/X}^{2}, \mathcal O_{C}) - \mathrm{dim \ Ext}^{1}(\mathcal I_{C/X}/\mathcal I_{C/X}^{2}, \mathcal O_{C}) \nonumber \\ &=& \mathcal X(\mathcal T_{\mathbb P^{n}}|_{C}) - \mathcal X(\mathcal N_{X/\mathbb P^{n}}|_{C}) - \mathcal X(\mathcal T_{C}) - \mathrm{dim \ Ext}^{2}(\Omega_{X}\otimes \mathcal O_{C},\mathcal O_{C})\nonumber \\ &=& \mathcal X(\mathcal N_{C/\mathbb P^{n}}) - \mathcal X(\mathcal N_{X/\mathbb P^{n}}|_{C}) - \mathrm{dim \ Ext}^{2}(\Omega_{X}\otimes \mathcal O_{C},\mathcal O_{C}). \nonumber \end{eqnarray} $\mathcal X(\mathcal N_{C/\mathbb P^{n}})$ equals $h_{d,g,n} = (n+1)d-(n-3)(g-1)$. If $X$ is a complete intersection cut out by $F_{1},\ldots, F_{k}$, the normal sheaf $\mathcal N_{X/\mathbb P^{n}}$ splits into $\bigoplus_{i=1}^{k}\mathcal O_{X}(d_{i})$. Therefore, in this case we can compute $\mathcal X(\mathcal N_{X/\mathbb P^{n}}|_{C})$ explicitly as $\mathcal X(\bigoplus_{i=1}^{k}\mathcal O_{C}(d_{i})) = \sum_{i=1}^{k}(1-g+dd_{i})$. Now the proposition follows once we show that $\mathrm{Ext}^{2}(\Omega_{X}\otimes \mathcal O_{C},\mathcal O_{C}) = 0$. In case $X$ is smooth, we have the well-known exact sequence $$ 0\rightarrow \mathcal T_{X}\otimes \mathcal O_{C}\rightarrow \mathcal T_{\mathbb P^{n}}\otimes \mathcal O_{C}\rightarrow \mathcal N_{X/\mathbb P^{n}}\otimes\mathcal O_{C} \rightarrow 0.$$ If $X$ is singular, the last map may not be surjective.
Instead, we have $$ 0\rightarrow \mathcal T_{X}\otimes \mathcal O_{C}\rightarrow \mathcal T_{\mathbb P^{n}}\otimes \mathcal O_{C}\rightarrow \mathcal N_{X/\mathbb P^{n}}\otimes\mathcal O_{C} \rightarrow \mathcal F\rightarrow 0, $$ where $\mathcal F$ is a sheaf supported at some points of $C\cap X_{sing}$. Split the above sequence into two short exact sequences \begin{eqnarray} \label{En} &0\rightarrow \mathcal T_{X}\otimes \mathcal O_{C}\rightarrow \mathcal T_{\mathbb P^{n}}\otimes \mathcal O_{C}\rightarrow \mathcal E\rightarrow 0 &\\ \label{Fn} &0\rightarrow \mathcal E \rightarrow \mathcal N_{X/\mathbb P^{n}}\otimes\mathcal O_{C} \rightarrow \mathcal F\rightarrow 0. & \end{eqnarray} Since $H^{2}(\mathcal T_{X}\otimes \mathcal O_{C}) = 0$, from (\ref{En}) the map $H^{1}(\mathcal T_{\mathbb P^{n}}\otimes \mathcal O_{C})\rightarrow H^{1}(\mathcal E)$ is surjective. Moreover, $\mathcal F$ is only supported at finitely many points on $C$, so $H^{1}(\mathcal F) = 0$. From (\ref{Fn}), the map $H^{1}(\mathcal E)\rightarrow H^{1}(\mathcal N_{X/\mathbb P^{n}}\otimes\mathcal O_{C})$ is also surjective. Hence, we get a surjective map $H^{1}(\mathcal T_{\mathbb P^{n}}\otimes \mathcal O_{C})\rightarrow H^{1}(\mathcal N_{X/\mathbb P^{n}}\otimes\mathcal O_{C})$, i.e., a surjective map $\mathrm{Ext}^{1}(\Omega_{\mathbb P^{n}}\otimes\mathcal O_{C},\mathcal O_{C})\rightarrow \mathrm{Ext}^{1}((\mathcal I_{X}/\mathcal I_{X}^{2})\otimes \mathcal O_{C},\mathcal O_{C})$. Then from (\ref{LLExtn}), it follows that $\mathrm{Ext}^{2}(\Omega_{X}\otimes \mathcal O_{C},\mathcal O_{C})=0$. \end{proof} Now apply Propositions \ref{K3} and \ref{cx} to our situation, where $X = S$ is a surface in $\mathbb P^{3}$. The bound $4d+g-1-sd$ is still valid as a lower bound for $l_{d,g,3}$. This completes the proof of Theorem \ref{r=3}. At the end of this section, we want to show that the new bound in Theorem \ref{r=3} makes sense.
Using determinantal curves, we already constructed components with the expected dimension $4d$ whose corresponding values of $g$ and $d$ satisfy $\frac{g^{2}}{d^{3}}\sim \frac{8}{9}$. Actually, if $d$ is large and $g\leq \frac{1}{6\sqrt{2}}d\sqrt{d}+k_{1}d+k_{2}\sqrt{d}+k_{3}$, where $k_{1}, k_{2}, k_{3}$ are some constants, there always exists a component of $\mathcal H_{d,g,3}$ with the expected dimension $4d$, cf. \cite{P}. Moreover, we can construct those components up to $g\sim \frac{2\sqrt{2}}{3}d\sqrt{d}$ asymptotically, cf. \cite{F} and \cite{W}. Therefore, in the range $g^{2}< d^{3}$, $4d$ is almost the best lower bound for $l_{d,g,3}$. On the other hand, in the range $g^{2}\geq d^{3}$ we always have $\mu(d,g) < \sqrt{d}$. Furthermore, as $g$ increases, $\mu(d,g)$ decreases and $4d+g-1-\mu(d,g)d$ is dominated by $g$. In fact, we know that the dimension of a component of $\mathcal H_{d,g,3}$ whose general points correspond to smooth irreducible and nondegenerate curves is at most $4d+g$ for any $d,g$, cf. \cite[2.b]{H}. Therefore, the result of Theorem \ref{r=3} does not lose much information from the asymptotic perspective. More importantly, it is better than the expected dimension $4d$ if $g$ is much larger than $d$. \section{The Hilbert scheme of curves in $\mathbb P^{4}$} In this section we will prove Theorem \ref{r=4}. The idea of the proof is simple. We will show that if $g$ is large enough, a degree $d$ genus $g$ smooth irreducible and nondegenerate curve $C$ in $\mathbb P^{4}$ must be contained in a surface $S$ such that $S$ is a complete intersection and $C$ is not contained in its singular locus $S_{sing}$. By estimating the dimension of the deformation of $C$ on $S$, we can derive the desired result. For the first step, let us recall some basic results from Castelnuovo theory. \begin{theorem} \label{Ca} Let $C$ be a degree $d$ genus $g$ reduced irreducible and nondegenerate curve in $\mathbb P^{r}$.
Then $g$ has an upper bound $\pi(d,r) = \frac{d^{2}}{2(r-1)} + O(d)$. \end{theorem} For the precise definition of $\pi(d,r)$ and the proof of the theorem, see, e.g., \cite{H}. By the above theorem, it is easy to find a low degree threefold $F$ that contains $C$. \begin{lemma} \label{F} Let $k$ be a positive integer and $N = {k+4\choose 4}-1$. If $g$ satisfies \begin{equation} \label{gk} g > \pi(dk, N), \end{equation} then $C$ is contained in an irreducible threefold $F$ of degree $a\leq k$. \end{lemma} \begin{proof} Embed $\mathbb P^{4}$ into $\mathbb P^{N}$ by the Veronese map of degree $k$. Then the image $C'$ of $C$ is a curve of degree $dk$ and genus $g$. Since $g$ is larger than the Castelnuovo bound $\pi(dk, N)$, $C'$ must be contained in a hyperplane in $\mathbb P^{N}$. That is, $C$ is contained in a degree $k$ threefold in $\mathbb P^{4}$. Then we take an irreducible component $F$ of this threefold that contains $C$. $F$ has degree $a \leq k$. \end{proof} Fix $F$ and its degree $a$. Our next goal is to find another threefold that contains $C$ as well. \begin{lemma} \label{G} Suppose $l$ is an integer and $l\geq a$. Let $M = {l+4\choose 4}-{l-a+4\choose 4}-1$. If $g$ satisfies \begin{equation} \label{gl} g> \pi(dl, M), \end{equation} then we can find a degree $b$ irreducible threefold $G$ containing $C$ such that $b\leq l$ and the surface $S=F\cap G$ is a complete intersection. \end{lemma} \begin{proof} Embed $\mathbb P^{4}$ into $\mathbb P^{{l+4\choose 4}-1}$ by the Veronese map of degree $l$. By a similar argument as before, we can show that $C$ is contained in at least ${l-a+4\choose 4}+1$ independent degree $l$ threefolds in $\mathbb P^{4}$. Notice that there are at most ${l-a+4\choose 4}$ independent degree $l$ threefolds containing $F$ as a component, since $F$ is irreducible. Hence, we can find a degree $l$ threefold containing $C$ but not $F$. Take an irreducible component $G$ of this threefold that contains $C$.
$G$ has degree $b\leq l$ and $S=F\cap G$ is a complete intersection. \end{proof} In order to apply standard deformation theory for $C\subset S$, we should avoid the situation $C\subset S_{sing}$. \begin{lemma} \label{Sing} Let $S$ be a surface in $\mathbb P^{4}$ cut out by two threefolds of degree $a$ and $b$ respectively. If $S_{sing}$ is 1-dimensional, its degree has an upper bound $\frac{1}{2}ab(a+b-2)$. \end{lemma} \begin{proof} Take a general hyperplane section $X = H\cap S$ in $\mathbb P^{4}$; then $X$ is a curve of degree $ab$ and arithmetic genus $\frac{1}{2}ab(a+b-4)+1$ in $H\cong \mathbb P^{3}$. Even though $X$ might be reducible, it has at most $ab$ components, so the total number of its singularities is at most $ab + \frac{1}{2}ab(a+b-4) +1 - 1 = \frac{1}{2}ab(a+b-2)$. Since $H\cap S_{sing}\subset X_{sing}$, we get deg $S_{sing} \leq$ deg $X_{sing} \leq \frac{1}{2}ab(a+b-2)$. \end{proof} By this lemma, we immediately get the following consequence. \begin{lemma} \label{S} Keep the above assumption. If the degree $d$ of the curve $C$ satisfies \begin{equation} \label{dab} d> \frac{1}{2}ab(a+b-2), \end{equation} then $C\cap S_{sing}$ is either empty or 0-dimensional. \end{lemma} Consider the deformation of $C$ on $S$. Since $S$ is a complete intersection and $C\not\subset S_{sing}$, we can apply Proposition \ref{cx} to derive the following result. \begin{lemma} \label{Def} The dimension of the deformation of $C$ on $S$ is at least $5d+g-1-(a+b)d$. \end{lemma} Now we have all the ingredients to prove Theorem \ref{r=4}. \begin{proof} By an elementary calculation, if $g>3d\sqrt{d}+O(d)$, we can find integers $k, a, l, b$ successively in the above setting such that they satisfy the inequalities (\ref{gk}), (\ref{gl}) and (\ref{dab}). Therefore, by Lemmas \ref{F}, \ref{G} and \ref{S}, we know that $C$ lies in a complete intersection surface $S$ of type $(a, b)$ and $C\not\subset S_{sing}$. Moreover, we can check that $(a+b)d < g$.
Then by Lemma \ref{Def}, the dimension of the deformation of $C$ on $S$ is at least $5d + g -1 - (a+b)d \geq 5d > 24 = \mbox{dim PGL}(5)$. \end{proof} It is possible to enlarge the range $g>3d\sqrt{d}+O(d)$ by refining the results in Lemmas \ref{F}, \ref{G} and \ref{Sing}. However, it seems that only the leading coefficient could be improved rather than the exponent $d^{3/2}$. So when $g$ is only slightly bigger than $d$, the situation remains mysterious to us. On the other hand, by the result of \cite{CCG}, Conjecture \ref{r>4}, stated in the introduction, seems highly plausible and might be handled by an analogous argument. We state the conjecture again to close this section. \begin{conjecture} For $r\geq 5$, there always exists a constant $\lambda_{r}$ such that if $g\geq \lambda_{r}d\sqrt{d}+O(d)$, a degree $d$ genus $g$ smooth irreducible and nondegenerate curve in $\mathbb P^{r}$ is not rigid. \end{conjecture} \section{The Hilbert scheme of curves on a quadric threefold} In this section we will prove Theorem \ref{quadric}. Recall that $\mathcal H_{d, g}(Q)$ parameterizes degree $d$ genus $g$ smooth irreducible and nondegenerate curves on a smooth quadric $Q$ in $\mathbb P^{4}$. For $[C]\in \mathcal H_{d,g}(Q)$, $\mathcal X(\mathcal N_{C/Q}) = 3d$ is a lower bound for the dimension of any component of $\mathcal H_{d,g}(Q)$. Theorem \ref{quadric} provides a further analysis of the sharpness of this bound. Its proof consists of two steps. Firstly, if $g$ is large enough, $C$ must lie on another threefold $F$ of low degree. Considering the deformation of $C$ on the surface $X = Q\cap F$, we can easily derive the first part of Theorem \ref{quadric}. For the second part, we use a method similar to that in \cite{P}. A component whose general element is the intersection of $Q$ with a determinantal surface has dimension $3d$.
Then we apply the smoothing technique in \cite{S} to enlarge the range of the pair $(d, g)$ to cover the case when $g<\frac{2}{15\sqrt{5}} d\sqrt{d} + O(d)$. By the main result of \cite{C}, we can verify the first step easily. \begin{lemma} \label{GB} If $g> \frac{1}{\sqrt{2}}d\sqrt{d} + O(d)$, the dimension of the deformation of $C$ on $Q$ is bigger than $3d$. \end{lemma} \begin{proof} When $d$ and $g$ satisfy the above inequality, we can find an integer $k$ such that $d>2k(k-1)$ and $g>\frac{d^{2}}{4k}+\frac{1}{2}kd$. By the result of \cite{C}, there exists an integral surface $X\in |\mathcal O_{Q}(a)|$ containing $C$, where $a\leq k$. Since $d>2k(k-1)$ and $X$ is of degree $2a$, $C\not\subset X_{sing}$. By Proposition \ref{cx}, $\mathcal X(\mathcal N_{C/X}) = 3d + g - ad - 1$ provides a lower bound for the dimension of the deformation of $C$ on $X$. A simple calculation shows that $3d + g - ad - 1 \geq 3d + g - kd - 1 > 3d$. \end{proof} The second step is harder. We still want to construct a component of the Hilbert scheme that parameterizes certain determinantal curves. But the curves should be contained in the quadric $Q$. A natural idea is to take the intersection of a determinantal surface with $Q$. Let $\big(H_{ij}\big)$ be a $t\times (t+1)$ matrix whose entries $H_{ij}$ are general linear forms on $\mathbb P^{4}$. Its $t\times t$ minors define a determinantal surface $S$. The ideal sheaf of $S$ has the following resolution $$ 0 \rightarrow \mathcal O^{\oplus t}_{\mathbb P^{4}}(-t-1)\rightarrow \mathcal O^{\oplus (t+1)}_{\mathbb P^{4}}(-t) \rightarrow \mathcal I_{S}\rightarrow 0.$$ By Bertini, if we take a general quadric threefold $Q$, $C=Q\cap S$ is smooth. It is not hard to get the degree and genus of $C$, $$ d = t(t+1), $$ $$ g = \frac{2}{3}t^{3} - \frac{1}{2}t^{2} - \frac{7}{6}t + 1.$$ Note that asymptotically $g \sim \frac{2}{3}d\sqrt{d}$. Let us count parameters.
The dimension of the component parameterizing curves generated in the above way is $5t(t+1) - 1 - \text{dim PGL}(t) - \text{dim PGL}(t+1) = 3t(t+1) = 3d$. In order to show that this is an actual component of $\mathcal H_{d,g}(Q)$, we have to check that for $C = S\cap Q$, $H^{1}(\mathcal N_{C/Q}) = 0$. Actually for general $S$ and $Q$, the ideal sheaf $\mathcal I_{C/Q}$ has the resolution $$ 0\rightarrow \mathcal O^{\oplus t}_{Q}(-t-1) \rightarrow \mathcal O^{\oplus (t+1)} _{Q}(-t)\rightarrow \mathcal I_{C/Q}\rightarrow 0. $$ By \cite[Remark 2.2.6]{Kl}, we know that $H^{1}(\mathcal N_{C/Q}) = \text{Ext}_{Q}^{2}(\mathcal I_{C/Q}, \mathcal I_{C/Q})$. Applying the functor $\text{Hom}_{Q}(-, \mathcal I_{C/Q})$ to this resolution, it is easy to derive the conclusion $H^{1}(\mathcal N_{C/Q})= 0$. The above construction is nice, but it imposes strong restrictions on the values of $d$ and $g$. We want to extend the result to more general values of $d$ and $g$. Here we will follow the methods in \cite{P} and \cite{S}. The idea works as follows. Take a smooth determinantal curve $\Gamma$ constructed as above and a smooth rational curve $\gamma$ on $Q$ such that they meet transversely. Further assume that $H^{1}(\mathcal N_{\Gamma/Q})=H^{1}(\mathcal N_{\gamma/Q})=0$. Then the nodal curve $\Gamma\cup \gamma$ can be smoothed out in $Q$. Moreover, the vanishing of $H^{1}(\mathcal N)$ is preserved under this smoothing process. Hence, after smoothing the nodal curve, we obtain pairs $(d, g)$ in a more general range. Firstly, let us introduce an important smoothing technique used in \cite{S}. \begin{lemma} \label{SM} Let $\Gamma' = \Gamma \cup \gamma$ be a nodal union of two smooth irreducible curves on the quadric threefold $Q$. $\Gamma \cap \gamma = \{P_{1},\ldots, P_{\delta}\}$.
If $H^{1}(\mathcal N_{\Gamma/Q})=H^{1}(\mathcal N_{\gamma/Q}) = H^{1}(\mathcal N_{\gamma/Q}(-P_{1}-\ldots-P_{\delta}))=0$, then $H^{1}(\mathcal N_{\Gamma'/Q})=0 $ and $\Gamma'$ is smoothable in $Q$. \end{lemma} \begin{proof} Let us first set up some notation. For a connected reduced curve $C$ on $Q$, denote by $\mathcal N'_{C/Q}$ the cokernel of the map $\mathcal T_{C}\rightarrow \mathcal T_{Q|C}$, and let $\mathcal T^{1}_{C/Q}$ be the first cotangent sheaf of $C$ in $Q$, defined as the cokernel of the map $\mathcal N'_{C/Q}\rightarrow \mathcal N_{C/Q}$. Suppose the singularities of $C$ are only nodes. Then $\mathcal T^{1}_{C/Q}$ is a torsion sheaf supported at the nodes of $C$. Furthermore, if $H^{1}(\mathcal N'_{C/Q}) = 0$, by the argument of \cite[Proposition 1.6]{S}, $C$ is smoothable in $Q$. Now in our case, the ideal sheaves $\mathcal I_{\Gamma/\Gamma'}\cong \mathcal O_{\gamma}(-P_{1}-\ldots - P_{\delta})$ and $\mathcal I_{\gamma/\Gamma'}\cong \mathcal O_{\Gamma}(-P_{1}-\ldots-P_{\delta})$. As in \cite[Lemma 5.1]{S}, we can establish two exact sequences of sheaves on $\Gamma'$, $$ 0\rightarrow \mathcal I_{\Gamma/\Gamma'}\otimes \mathcal N_{\Gamma'/Q} \rightarrow \mathcal N'_{\Gamma'/Q}\rightarrow \mathcal N_{\Gamma/Q}\rightarrow 0, $$ $$ 0\rightarrow \mathcal N_{\gamma/Q}(-P_{1}-\ldots-P_{\delta})\rightarrow \mathcal I_{\Gamma/\Gamma'}\otimes \mathcal N_{\Gamma'/Q}\rightarrow \mathcal T^{1}_{\Gamma'/Q}\rightarrow 0. $$ By the assumptions and the fact that $H^{1}(\mathcal T^{1}_{\Gamma'/Q}) = 0$ (a torsion sheaf has no higher cohomology), we get $H^{1}(\mathcal N'_{\Gamma'/Q})=0$ from the long exact sequences of cohomology. Hence, $\Gamma'$ is smoothable in $Q$. Moreover, the map $\mathcal N'_{\Gamma'/Q}\rightarrow \mathcal N_{\Gamma'/Q}$ is injective and its cokernel $\mathcal T^{1}_{\Gamma'/Q}$ is supported at the nodes. So $H^{1}(\mathcal N'_{\Gamma'/Q})=0$ implies that $H^{1}(\mathcal N_{\Gamma'/Q})=0$. \end{proof} We still need another source curve $\gamma$.
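The genus bookkeeping behind such smoothings is elementary: $\chi(\mathcal O)$ of a nodal union is additive over the components minus one condition per node, so the arithmetic genus of $\Gamma\cup\gamma$ is $g_{\Gamma} + g_{\gamma} + \delta - 1$. A minimal sketch of this arithmetic (illustrative only; the function names are ours):

```python
def chi_O(g):
    # Euler characteristic of the structure sheaf of a smooth curve: chi(O) = 1 - g
    return 1 - g

def nodal_union_genus(g1, g2, delta):
    """Arithmetic genus of a connected nodal union of two smooth curves of genera
    g1, g2 meeting transversely in delta points: chi(O) of the union equals
    chi_O(g1) + chi_O(g2) - delta, and the arithmetic genus is 1 - chi(O)."""
    return 1 - (chi_O(g1) + chi_O(g2) - delta)

# attaching a rational curve (g = 0) through delta points raises the genus by delta - 1
for g1 in range(6):
    for delta in range(1, 5):
        assert nodal_union_genus(g1, 0, delta) == g1 + delta - 1
```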
Here we will consider rational normal curves in $\mathbb P^{4}$ that lie on the quadric $Q$. \begin{lemma} \label{RC} Let $R=Q \cap H$ be a general hyperplane section of $Q$, and let $P_{1},\ldots, P_{m}\in R$ be $m\geq 6$ points in general position. For any integer $\delta\leq 4$, there exists a rational normal curve $\gamma \subset R$ such that $\gamma$ passes through exactly $\delta$ of the points $P_{1},\ldots, P_{m}$. Furthermore, suppose the points on $\gamma$ are $P_{1},\ldots, P_{\delta}$. Then we have $H^{1}(\mathcal N_{\gamma/Q}(-P_{1}-\ldots - P_{\delta}))=0$. \end{lemma} \begin{proof} $R$ is a smooth quadric surface in $H\cong \mathbb P^{3}$. It is easy to find a smooth rational curve $\gamma$ of degree 3 on $R$ that passes through $\delta$ general points, say $P_{1},\ldots, P_{\delta}$. By the exact sequence $$ 0 \rightarrow \mathcal N_{\gamma/R} \rightarrow \mathcal N_{\gamma/Q} \rightarrow \mathcal N_{R/Q}\otimes \mathcal O_{\gamma} \rightarrow 0, $$ we also obtain $H^{1}(\mathcal N_{\gamma/Q}(-P_{1}-\ldots - P_{\delta}))=0$. \end{proof} Now we have all the ingredients to prove the second part of Theorem \ref{quadric}. \begin{proof} Take a determinantal curve $\Gamma\subset Q$ of degree $d_{\Gamma} = t(t+1)$ and genus $g_{\Gamma} = \frac{2}{3}t^{3}-\frac{1}{2}t^{2}-\frac{7}{6}t+1$. Consider a general hyperplane section of $\Gamma$. We get $d_{\Gamma}$ points in general position. By Lemmas \ref{SM} and \ref{RC}, we can pick a suitable degree $3$ rational curve $\gamma$, such that $\Gamma\cap\gamma$ consists of $\delta$ reduced points for any $\delta\leq 4$ and the nodal union $\Gamma \cup\gamma$ is smoothable. Hence, starting from the pair $(d_{\Gamma}, g_{\Gamma})$, we obtain a new pair $(d' = d_{\Gamma} + 3, g' = g_{\Gamma}+\delta - 1)$ for which the Hilbert scheme $\mathcal H_{d',g'}(Q)$ also has a component of expected dimension $3d'$.
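Iterating this step, the attainable pairs can be enumerated directly. The sketch below (illustrative only; it assumes the attached rational curve meets the source curve in $\delta\in\{1,\ldots,4\}$ points at each step) confirms that after $k$ steps one covers exactly the pairs $(d_{\Gamma}+3k,\, g_{\Gamma}+h)$ with $0\le h\le 3k$:

```python
def reachable(d0, g0, steps):
    """Pairs (d, g) obtained from (d0, g0) by `steps` smoothing steps,
    each adding 3 to the degree and delta - 1 to the genus, delta in 1..4."""
    pairs = {(d0, g0)}
    for _ in range(steps):
        pairs = {(d + 3, g + delta - 1) for (d, g) in pairs for delta in (1, 2, 3, 4)}
    return pairs

# start from the determinantal curve with t = 3: (d, g) = (12, 11)
t = 3
d0, g0 = t * (t + 1), (4 * t**3 - 3 * t**2 - 7 * t + 6) // 6
for k in range(1, 5):
    expected = {(d0 + 3 * k, g0 + h) for h in range(3 * k + 1)}
    assert reachable(d0, g0, k) == expected
```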
Repeating this step, we eventually cover every pair $(d, g)$ of the form $(d_{\Gamma} + 3k, g_{\Gamma} + h)$, $0\leq h\leq 3k$. Now we fix $d$. Note that $d_{\Gamma}=t(t+1)\equiv 0$ or 2 (mod 3). So if $d \equiv 0$ (mod 3), by the above construction, the range of $g$ for which $\mathcal H_{d,g}(Q)$ has a component of dimension $3d$ contains the following, $$\frac{1}{6}(4t^{3}-3t^{2}-7t+6)\leq g \leq \frac{1}{6}(4t^{3}-3t^{2}-7t+6) + d -t(t+1), $$ for any $t(t+1) \leq d$ and $t\equiv 0$ or 2 (mod 3). In order to cover the case $d \equiv 1$ or 2 (mod 3), we can use a suitable line $l$ on $Q$ instead of the rational curve $\gamma$ in Lemma \ref{RC} such that $l$ intersects the source curve at exactly one point $P$. One can easily check that $H^{1}(\mathcal N_{l/Q}) = H^{1}(\mathcal N_{l/Q}(-P)) = 0$. Then, after smoothing the nodal union of $l$ and the source curve, this construction provides the step $(d-1,g)\rightarrow (d,g)$. So if $d\not\equiv 0$ (mod 3), we can always consider $d-1$ or $d-2$ instead. In sum, the desired range of genus includes $$ L(t) =\frac{1}{6}(4t^{3}-3t^{2}-7t+6)\leq g \leq \frac{1}{6}(4t^{3}-3t^{2}-7t+6) + d -t(t+1) - 2 = R(t), $$ where $t(t+1)\equiv 0$ (mod 3). Since $t\equiv 0$ or 2 (mod 3), each time $t$ increases by 1 or 2. To ensure that the attainable values of $g$ have no gaps for a fixed $d$, we have to require that $L(t+2)\leq R(t)$. Solving this inequality and plugging the upper bound of $t$ into $R(t)$, we get the desired range of $g$ up to $\frac{2}{15\sqrt{5}} d\sqrt{d} + O(d)$. \end{proof} \begin{remark} We can obtain a similar result for the Hilbert scheme of curves on a general cubic threefold $Y$. It is easy to check that the curve $C$ cut out by a determinantal surface and $Y$ also satisfies $H^{1}(\mathcal N_{C/Y})=0$. However, when we attempt the same process for quartic threefolds, the determinantal model no longer works.
Another long-standing problem concerns quintic threefolds, since the expected dimension of the Hilbert scheme is 0 in that case. Even for rational curves, the famous Clemens conjecture has been solved only when the degree of the curve is small. If we consider threefolds of higher degree, things become even less clear. To the author's best knowledge, we do not even know whether a general threefold of degree $k>5$ in $\mathbb P^{4}$ contains an irreducible curve whose degree is not divisible by $k$. In sum, the Hilbert scheme of curves on a threefold of higher degree remains mysterious to us. \end{remark}
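Returning to the genus range in the proof above: a short computation shows that the no-gap condition $L(t+2)\le R(t)$ is equivalent to $d\ge 5t^{2}+7t+3$, and with the largest admissible $t$ the endpoint $R(t)$ is asymptotic to $\frac{2}{15\sqrt{5}}d\sqrt{d}$. The sketch below checks this numerically (ignoring, for simplicity, the congruence conditions on $t$ and $d$; it works with $6L(t)$ and $6R(t)$ to keep the arithmetic exact):

```python
import math

def L6(t):
    # 6 * L(t), kept integral to avoid rounding
    return 4 * t**3 - 3 * t**2 - 7 * t + 6

def R6(t, d):
    # 6 * R(t) for a fixed degree d
    return L6(t) + 6 * (d - t * (t + 1) - 2)

# L(t+2) <= R(t) holds exactly when d >= 5t^2 + 7t + 3
for t in range(1, 200):
    d_min = 5 * t**2 + 7 * t + 3
    assert L6(t + 2) <= R6(t, d_min)
    assert L6(t + 2) > R6(t, d_min - 1)

# with the largest admissible t, R(t) is close to (2 / (15 sqrt 5)) d^{3/2}
d = 10**8
t = math.isqrt(d // 5)
while 5 * t**2 + 7 * t + 3 > d:
    t -= 1
assert abs(R6(t, d) / 6 / d**1.5 - 2 / (15 * math.sqrt(5))) < 1e-3
```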
https://arxiv.org/abs/0808.3604
On the dimension of the Hilbert scheme of curves
Consider a component of the Hilbert scheme whose general point corresponds to a degree d genus g smooth irreducible and nondegenerate curve in a projective variety X. We give lower bounds for the dimension of such a component when X is P^3, P^4 or a smooth quadric threefold in P^4 respectively. Those bounds make sense from the asymptotic viewpoint if we fix d and let g vary. Some examples are constructed using determinantal varieties to show the sharpness of the bounds for d and g in a certain range. The results can also be applied to study rigid curves.
https://arxiv.org/abs/1210.8122
Non-maximality of known extremal metrics on torus and Klein bottle
El Soufi-Ilias' theorem establishes a connection between minimal submanifolds of spheres and extremal metrics for eigenvalues of the Laplace-Beltrami operator. Recently, this connection was used to provide several explicit examples of extremal metrics. We investigate the maximality of these metrics and prove that all of them are not maximal.
\section*{Introduction} Let $M$ be a closed surface and $g$ be a Riemannian metric on $M$. Then the Laplace-Beltrami operator $\Delta$ acts on the space of smooth functions on $M$ by the formula $$ \Delta f = -\frac{1}{\sqrt{|g|}}\frac{\partial}{\partial x^i}\bigl(\sqrt{|g|}g^{ij}\frac{\partial f}{\partial x^j}\bigr). $$ It is known that the spectrum of $\Delta$ is discrete and consists only of eigenvalues. Moreover, the multiplicity of any eigenvalue is finite and the sequence of eigenvalues tends to infinity. Let us denote this sequence by $$ 0 = \lambda_0(M,g) < \lambda_1(M,g) \leqslant \lambda_2(M,g) \leqslant \lambda_3(M,g) \leqslant \ldots, $$ where the eigenvalues are written with their multiplicities. For a fixed $M$ the following quantities can be considered as functionals on the space of all Riemannian metrics on $M$, $$ \Lambda_i(M,g) = \lambda_i(M,g) \area(M,g). $$ Several recent papers~\cite{EGJ, ElSoufiIlias1, ElSoufiIlias2, Hersh, JNP, Korevaar, LiYau, Nadirashvili1, Nadirashvili2, YangYau} deal with finding the suprema of these functionals in the space of all Riemannian metrics on $M$. An upper bound for $\Lambda_1(M,g)$ in terms of the genus of $M$ was provided in the paper~\cite{YangYau}, and the existence of such a bound for $\Lambda_i(M,g)$ was shown in the paper~\cite{Korevaar}. The exact upper bounds are known for a limited number of functionals: $\Lambda_1(\mathbb{S}^2,g)$ (see~\cite{Hersh}), $\Lambda_1(\mathbb{RP}^2,g)$ (see~\cite{LiYau}), $\Lambda_1(\mathbb{T}^2,g)$ (see~\cite{Nadirashvili1}), $\Lambda_1(\mathbb{K}\mathrm{l},g)$ (see~\cite{EGJ, JNP}), $\Lambda_2(\mathbb{S}^2,g)$ (see~\cite{Nadirashvili2}). We refer to the introduction of the paper~\cite{PenskoiOtsuki} for more details. The functional $\Lambda_i(M,g)$ depends continuously on $g$, but it is not differentiable.
However, it is known that for an analytic deformation $g_t$ of the initial metric $g$ there exist left and right derivatives of $\Lambda_i(M,g_t)$ with respect to $t$, see e.g. the papers~\cite{ElSoufiIlias2,BU,Berger}. This motivates the following definition. \begin{definition}[see~\cite{ElSoufiIlias1,Nadirashvili1}] A Riemannian metric $g$ on a closed surface $M$ is called an {\em extremal} metric for a functional $\Lambda_i(M,g)$ if for any analytic deformation $g_t$ such that $g_0 = g$ the following inequality holds, $$ \frac{d}{dt}\Lambda_i(M,g_t)\Bigl|_{t=0+} \leqslant 0 \leqslant \frac{d}{dt}\Lambda_i(M,g_t)\Bigl|_{t=0-}. $$ \end{definition} \begin{definition} A metric $g$ is called a {\em maximal} metric for a functional $\Lambda_i(M,g)$ if for any metric $h$ on $M$ $$ \Lambda_i(M,g)\geqslant\Lambda_i(M,h). $$ \end{definition} The question of whether a smooth maximal metric exists is not trivial. For example, there is no smooth maximal metric for $\Lambda_2(\mathbb{S}^2,g)$ (see~\cite{Nadirashvili2}). The list of known extremal metrics is longer than the list of known exact upper bounds for $\Lambda_i(M,g)$, but until now their maximality has not been studied. In the present paper we investigate the maximality of the known extremal metrics. The list of currently known extremal metrics follows. \begin{itemize} \item[(A)] Metrics on the Otsuki tori $O_{p/q}$ were studied in the paper~\cite{PenskoiOtsuki}. \item[(B)] Metrics on the Lawson tori and Klein bottles $\tau_{m,k}$ were studied in the paper~\cite{PenskoiLawson}. \item[(C)] Metrics on the surfaces $\tilde\tau_{m,k}$ bipolar to Lawson surfaces were studied in the paper~\cite{Lapointe}. \item[(D)] Metrics on the bipolar surfaces $\tilde O_{p/q}$ to Otsuki tori were studied in the paper~\cite{Karpukhin}. \end{itemize} In what follows, a Klein bottle is denoted by $\mathbb{K}$. The definitions of these surfaces are given in the following sections.
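To make the normalization $\Lambda_i=\lambda_i\cdot\area$ concrete, here is an illustrative computation (not from the paper): for a flat torus $\mathbb{R}^2/L$ the Laplace eigenvalues are $4\pi^2|\xi|^2$ with $\xi$ ranging over the dual lattice $L^*$, so $\Lambda_1$ can be computed directly and is invariant under rescaling the metric. For the flat equilateral torus this recovers the value $8\pi^2/\sqrt{3}$:

```python
import math
from itertools import product

def lambda1_flat_torus(v1, v2):
    """First nonzero eigenvalue and area of the flat torus R^2 / (Z v1 + Z v2):
    the eigenvalue is 4 pi^2 |xi|^2 over the shortest nonzero dual-lattice vector."""
    a, b = v1
    c, d = v2
    det = a * d - b * c
    # dual basis u1, u2 with u_i . v_j = delta_ij (rows of the inverse matrix)
    u1 = (d / det, -c / det)
    u2 = (-b / det, a / det)
    norms = [
        (m * u1[0] + n * u2[0]) ** 2 + (m * u1[1] + n * u2[1]) ** 2
        for m, n in product(range(-5, 6), repeat=2)
        if (m, n) != (0, 0)
    ]
    return 4 * math.pi**2 * min(norms), abs(det)

# equilateral torus: Lambda_1 = lambda_1 * area = 8 pi^2 / sqrt(3)
lam, area = lambda1_flat_torus((1.0, 0.0), (0.5, math.sqrt(3) / 2))
assert abs(lam * area - 8 * math.pi**2 / math.sqrt(3)) < 1e-9

# Lambda_1 is invariant under rescaling the lattice
lam_c, area_c = lambda1_flat_torus((2.0, 0.0), (1.0, math.sqrt(3)))
assert abs(lam_c * area_c - lam * area) < 1e-9
```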
The main result of the present paper is the following theorem. \begin{theorem} There are no maximal metrics among the metrics (A)-(D) except for $\tilde\tau_{3,1}$. \label{MainTheorem} \end{theorem} \begin{remark} The metric on the Lawson bipolar Klein bottle $\tilde\tau_{3,1}$ is maximal for the functional $\Lambda_1(\mathbb{K},g)$, see~\cite{EGJ,JNP}. \end{remark} We also prove the following proposition. \begin{proposition} The metric on the Clifford torus is extremal for an infinite number of functionals $\Lambda_i(M,g)$, but it is not maximal for any of them. \label{CliffordTheorem} \end{proposition} The extremality of the Clifford torus for an infinite number of functionals $\Lambda_i(M,g)$ is known, but to the best of the author's knowledge has not yet been published. In the present paper we fill this gap. In the following description we use the notations $K(k)$, $E(k)$ and $\Pi(n,k)$ for the elliptic integrals of the first, second and third kind respectively, see~\cite{Friedman}, \begin{equation*} \begin{split} &K(k) = \int\limits_0^1\frac{1}{\sqrt{1-x^2}\sqrt{1-k^2x^2}}\,dx,\qquad E(k) = \int\limits_0^1\frac{\sqrt{1-k^2x^2}}{\sqrt{1-x^2}}\,dx, \\ &\Pi(n,k) = \int\limits_0^1\frac{1}{(1-nx^2)\sqrt{1-x^2}\sqrt{1-k^2x^2}}\,dx. \end{split} \end{equation*} The paper is organized in the following way. In Section~\ref{bound} we prove lower bounds for $\sup\Lambda_n(\mathbb{T}^2,g)$ and $\sup\Lambda_n(\mathbb{K},g)$. These bounds are used throughout the paper in order to prove the non-maximality of metrics (A)-(D). In Section~\ref{Connection} we recall a connection between extremal metrics and minimal submanifolds of the unit sphere. Section~\ref{OtsukiDef} contains a description of Otsuki tori as $SO(2)$-invariant minimal submanifolds of $\mathbb{S}^3$ of cohomogeneity 1.
Sections~\ref{OtsukiEstimate}, \ref{LawsonEstimate}, \ref{BipLawsonEstimate}, \ref{BipOtsukiEstimate} are dedicated to estimates for the extremal metrics (A)-(D) respectively; this finishes the proof of Theorem~\ref{MainTheorem}. Finally, Section~\ref{CliffordEstimate} contains the proof of Proposition~\ref{CliffordTheorem}. \section{Lower bounds for $\sup\Lambda_n$} The aim of this section is to prove the following proposition (compare with Corollary 4 in the paper~\cite{CE}). \label{bound} \begin{proposition} One has the following inequalities, $$ \sup\Lambda_n(\mathbb{T}^2,g)\geqslant 8\pi\left(n-1+\frac{\pi}{\sqrt{3}}\right), $$ $$ \sup\Lambda_n(\mathbb{K},g)\geqslant 8\pi(n-1)+12\pi E\left(\frac{2\sqrt{2}}{3}\right), $$ where $E(k)$ stands for the elliptic integral of the second kind. \label{LowerBound} \end{proposition} \subsection{Attaching handles due to Chavel-Feldman} Let $M$ be a compact smooth Riemannian manifold of dimension $n\geqslant 2$. Let us pick two distinct points $p_1,p_2\in M$. For $\varepsilon > 0$ we define \begin{itemize} \item[$B_\varepsilon$] $\colon =$ union of open geodesic balls of radius $\varepsilon$ about $p_1$ and $p_2$, \item[$\Omega_\varepsilon$] $\colon = M\backslash B_\varepsilon$, \item[$\Gamma_\varepsilon$] $\colon = \partial B_\varepsilon = \partial \Omega_\varepsilon$. \end{itemize} Here the number $\varepsilon$ is chosen to be less than $\frac{1}{4}$ of the injectivity radius of $M$ and less than $\frac{1}{4}$ of the distance between $p_1$ and $p_2$ if $p_1$ and $p_2$ lie in the same connected component of $M$.
We say that a manifold $M_\varepsilon$ is obtained from $M$ by adding a handle across $\Gamma_\varepsilon$ if \begin{itemize} \item[1)] $\Omega_\varepsilon$ is isometrically embedded in $M_\varepsilon$; \item[2)] there exists a diffeomorphism $\Psi_\varepsilon\colon M_\varepsilon\backslash\Omega_{2\varepsilon}\to [-1,1]\times \mathbb{S}^{n-1}$ such that $$ M_\varepsilon\backslash\Omega_\varepsilon = \Psi_\varepsilon^{-1}\left(\left[-\frac{1}{2},\frac{1}{2}\right]\times\mathbb{S}^{n-1}\right). $$ \end{itemize} Let us denote by $\lambda_j$ and $\lambda_j(\varepsilon)$ the Laplace spectra of $M$ and $M_\varepsilon$ respectively. Chavel and Feldman in their paper~\cite{ChFeld} obtained a sufficient condition for the convergence $\lambda_j(\varepsilon)\to\lambda_j$ as $\varepsilon$ tends to $0$. In order to formulate this condition we need the following definition. \begin{definition} For any compact connected Riemannian manifold $X$ of dimension $n\geqslant 2$, the isoperimetric constant $c_1(X)$ is defined by $$ c_1(X)=\inf\limits_Y\frac{(\vol_{n-1}(Y))^n}{(\min(\vol_n(X_1),\vol_n(X_2)))^{n-1}}, $$ where $\vol_k$ stands for the $k$-dimensional Riemannian measure, and $Y$ ranges over all compact $(n-1)$-dimensional submanifolds of $X$ that divide $X$ into two open submanifolds $X_1$, $X_2$ each having boundary $Y$. \end{definition} \begin{theorem}[Chavel, Feldman~\cite{ChFeld}] Assume that $M_\varepsilon$ is connected for any $\varepsilon$ and there exists a constant $c>0$ such that $c_1(M_\varepsilon)\geqslant c$ for all $\varepsilon>0$. Then $\lim_{\varepsilon\to 0}\lambda_j(\varepsilon) = \lambda_j$ for all $j=1,2,\ldots$ \label{Ch1} \end{theorem} \begin{remark} The assumption of the previous theorem implies $\lim_{\varepsilon\to 0}\vol_n(M_\varepsilon)=\vol_n(M)$ by picking $Y=\Gamma_\varepsilon$. \end{remark} In the same paper the existence of such an $M_\varepsilon$ is verified for any surface $M$ and almost any pair of points $p_1$, $p_2$.
\begin{theorem} Let $M$ be a compact $2$-dimensional Riemannian manifold, $K$ be its Gaussian curvature and $\tilde M = (M\backslash K^{-1}(0))\cup \mathrm{int}\, K^{-1}(0)$. Then $\tilde M$ is open and dense in $M$. Suppose that $p_1,p_2\in\tilde M$ and one of the following possibilities occurs: \begin{itemize} \item $M$ is connected, \item $M$ has two connected components and $p_i$ lie in different connected components. \end{itemize} Then $M_\varepsilon$ can be constructed so that the assumption of Theorem~\ref{Ch1} holds. In particular, $\area(M_\varepsilon)\to \area(M)$ as $\varepsilon\to 0$. \label{Ch2} \end{theorem} \begin{remark} Let us remark that Chavel and Feldman considered only the case of a connected manifold $M$. However, their arguments can be extended almost without changes to the non-connected case as stated above. \end{remark} \subsection{Proof of Proposition~\ref{LowerBound}.} Consider the flat equilateral torus $\tau_{eq}$. After a suitable rescaling of the metric we have $\area(\tau_{eq}) = 4\pi^2/\sqrt{3}$ and $\lambda_1(\tau_{eq}) = 2$. For the Euclidean sphere $\mathbb{S}^2$ of area $4\pi$ one also has $\lambda_1(\mathbb{S}^2) = 2$. Let us take $n-1$ copies of $\mathbb{S}^2$ denoted by $S_i$, $i = 1,2,\ldots,n-1$. Thus for $T_n = \tau_{eq}\coprod_{i=1}^{n-1}S_i$ we have $\lambda_n(T_n) = 2$ and therefore $\Lambda_n(T_n) = 8\pi\left(n-1+\pi/\sqrt{3}\right)$. Consecutive application of Theorem~\ref{Ch2} yields the existence of a sequence of surfaces $M_\varepsilon$, each diffeomorphic to a torus, such that $\Lambda_n(M_\varepsilon)\to\Lambda_n(T_n)$ as $\varepsilon$ tends to $0$. This observation completes the proof of the first inequality. The second inequality can be proved in the same fashion. The only difference is that instead of $\tau_{eq}$ one has to use the Lawson bipolar Klein bottle $\tilde\tau_{3,1}$ (see Section~\ref{BipLawsonEstimate} for a definition).
It was proven in the paper~\cite{JNP} that $\Lambda_1(\tilde\tau_{3,1}) = 12\pi E\left(2\sqrt{2}/3\right)$. By a suitable rescaling of the metric on $\tilde\tau_{3,1}$, one can assume that $\lambda_1(\tilde\tau_{3,1}) = 2$ and then apply the construction of the previous paragraph. \section{Otsuki tori} \subsection{Connection with minimal submanifolds of the sphere.} Let $\psi\colon M \looparrowright \mathbb{S}^n$ be a minimal immersion in the unit sphere with the canonical metric $g_{can}$. We denote by $\Delta$ the Laplace-Beltrami operator on $M$ associated with the metric $\psi^*g_{can}$. Let us introduce Weyl's eigenvalue counting function $$ N(\lambda) = \#\{i|\lambda_i(M,g)<\lambda\}. $$ The following theorem provides a general approach to finding smooth extremal metrics. \begin{theorem}[El Soufi, Ilias,~\cite{ElSoufiIlias2}] Let $\psi\colon M \looparrowright \mathbb{S}^n$ be a minimal immersion in the unit sphere $\mathbb{S}^n$ endowed with the canonical metric $g_{can}$. Then the metric $\psi^*g_{can}$ on $M$ is extremal for the functional $\Lambda_{N(2)}(M,g)$. \label{th1} \end{theorem} Therefore, one can start with a minimal submanifold $N$ of the unit sphere, compute $N(2)$, and conclude that the metric induced on $N$ by this immersion is extremal for the functional $\Lambda_{N(2)}(N,g)$. However, for a given minimal submanifold there is no algorithm for computing the exact value of $N(2)$. Nevertheless, this approach was successfully realized by Penskoi in the papers~\cite{PenskoiOtsuki, PenskoiLawson} for metrics (A),(B) as well as by the author in the paper~\cite{Karpukhin} for metrics (D). Some ideas of this approach were also used in the paper~\cite{Lapointe} for metrics (C). \subsection{Reduction theorem for minimal submanifolds.} Let $M$ be a \label{Connection} Riemannian manifold equipped with a metric $g'$ and let $G$ be a compact group acting on $M$ by isometries. For every point $x\in M$ let us denote by $G_x$ the stability subgroup of $x$.
\begin{definition} For two points $x,y\in M$ we say that $x\preccurlyeq y$ if $G_x\subset gG_yg^{-1}$ for some $g\in G$. The orbit $Gx$ is an {\em orbit of principal type} if for any point $y\in M$ one has $x\preccurlyeq y$. \end{definition} Let $M^*$ stand for the union of all orbits of principal type; then $M^*$ is an open dense submanifold of $M$ (see~\cite{MSY}). Moreover, $M^*/G$ carries a natural Riemannian metric $g$ defined by the formula $g(X,Y) = g'(X',Y')$, where $X,Y$ are tangent vectors at $x\in M^*/G$ and $X',Y'$ are tangent vectors at a point $x'\in\pi^{-1}(x)\subset M^*$ such that $X'$ and $Y'$ are orthogonal to the orbit $\pi^{-1}(x)$ and $d\pi(X')=X,\,d\pi(Y')=Y$. Let $f\colon N \looparrowright M$ be a $G$-invariant immersed submanifold, i.e. a manifold equipped with an action of $G$ by isometries such that $g\cdot f(x) = f(g\cdot x)$ for any $x\in N$. \begin{definition} The {\em cohomogeneity} of a $G$-invariant immersed submanifold $N$ is the number $\dim N - \nu$, where $\nu$ is the dimension of the orbits of principal type. \end{definition} Let us define for $x\in M^*/G$ a volume function $V(x)$ by the formula $V(x) = \mathrm{Vol}(\pi^{-1}(x))$. Also for each integer $k \geqslant 1$ let us define a metric $g_k = V^{\frac{2}{k}}g$. \begin{proposition}[Hsiang, Lawson~\cite{HsiangLawson}] Let $f\colon N\looparrowright M^*$ be a $G$-invariant immersed submanifold of cohomogeneity $k$, and let $M^*/G$ be equipped with the metric $g_k$. Then $f\colon N\looparrowright M^*$ is minimal if and only if $\bar f\colon N/G\looparrowright M^*/G$ is minimal. \label{HsiangTh} \end{proposition} \subsection{Otsuki tori.} Otsuki tori \label{OtsukiDef} were introduced by Otsuki in the paper~\cite{Otsuki}. Let us recall the concise description by Penskoi from the paper~\cite{PenskoiOtsuki}. For more details see Section 1.2 of the paper~\cite{PenskoiOtsuki}.
Consider the action of $SO(2)$ on the three-dimensional unit sphere $\mathbb{S}^3 \subset \mathbb{R}^4$ given by the formula $$ \alpha\cdot(x,y,z,t) = (\cos\alpha x+\sin\alpha y, -\sin\alpha x +\cos\alpha y, z, t), $$ where $\alpha \in [0,2\pi)$ is a coordinate on $SO(2)$. The space of orbits $\mathbb{S}^3/SO(2)$ is the closed half-sphere $\mathbb{S}^2_+$, $$ q^2+z^2+t^2=1, \qquad q\geqslant 0, $$ where a point $(q,z,t)$ corresponds to the orbit $(q\cos\alpha,q\sin\alpha,z,t) \in \mathbb{S}^3$. The space of principal orbits $(\mathbb{S}^3)^*/SO(2)$ is the open half-sphere $$ \mathbb{S}^2_{>0} = \{(q,z,t)\in\mathbb{S}^2|q>0\}. $$ Let us introduce the spherical coordinates in the space of orbits, $$ \left\{ \begin{array}{rcl} t &=& \cos\varphi\sin\theta,\\ z &=& \cos\varphi\cos\theta,\\ q &=& \sin\varphi.\\ \end{array} \right. $$ Since we look for minimal submanifolds of cohomogeneity 1, the Hsiang-Lawson metric is given by the formula \begin{equation} V^2(d\varphi^2 + \cos^2\varphi d\theta^2) = 4\pi^2\sin^2\varphi(d\varphi^2 + \cos^2\varphi d\theta^2). \label{OtsukiMetric} \end{equation} \begin{definition} An immersed minimal $SO(2)$-invariant two-dimensional torus in $\mathbb{S}^3$ such that its image under the projection $\pi\colon\mathbb{S}^3\to\mathbb{S}^3/SO(2)$ is a closed geodesic in $(\mathbb{S}^3)^*/SO(2)$ endowed with the metric (\ref{OtsukiMetric}) is called an {\em Otsuki torus}. \end{definition} The following proposition was proved in the paper~\cite{PenskoiOtsuki}. \begin{proposition} Except for one particular case given by the equation $\varphi = \pi/4$, Otsuki tori are in\label{PenskoiProp} one-to-one correspondence with rational numbers $p/q$ such that $$ \frac{1}{2}<\frac{p}{q}<\frac{\sqrt{2}}{2},\qquad p,q>0,\,(p,q) = 1. $$ \end{proposition} \begin{definition} By $O_{p/q}$ we denote the Otsuki torus corresponding to $p/q$. Following the paper~\cite{PenskoiOtsuki} we reserve the term ``Otsuki tori'' for the tori $O_{p/q}$.
\end{definition} In order to fix notation we give a sketch of the proof of Proposition~\ref{PenskoiProp}. \begin{proof} Let us use the standard notation for the coefficients of the metric~(\ref{OtsukiMetric}), $$ E = 4\pi^2\sin^2\varphi, \qquad G = 4\pi^2\sin^2\varphi\cos^2\varphi. $$ As is well known, the velocity vector of a geodesic has constant length. Suppose this length equals 1. Then this assumption as well as the equation of geodesics for $\ddot\theta$ provides the following two equations, \begin{equation} \label{dtheta} \dot\theta = \frac{\sin a\cos a}{2\pi\cos^2\varphi\sin^2\varphi}, \end{equation} \begin{equation} \label{dvarphi} \dot\varphi^2 = \frac{\sin^2\varphi\cos^2\varphi - \sin^2 a\cos^2 a}{4\pi^2\sin^4\varphi\cos^2\varphi}, \end{equation} where $a$ is the minimal value of $\varphi$ on the geodesic. Then the geodesic is situated in the annulus $a\leqslant \varphi \leqslant \dfrac{\pi}{2} - a$. We choose a natural parameter $t$ such that $\varphi(0) = a$. Let us denote by $\Omega(a)$ the difference between the value of $\theta$ corresponding to $\varphi = a$ and the closest value of $\theta$ corresponding to $\varphi = \pi/2-a$. It is clear that $$ \Omega(a) = \sin a\cos a\int\limits_a^{\pi/2-a}\frac{d\varphi}{\cos\varphi\sqrt{\sin^2\varphi\cos^2\varphi - \sin^2 a\cos^2 a}}. $$ The geodesic is closed iff $\Omega(a) = p\pi/q$ for some rational number $p/q$. The rest of the proof follows from the following properties of the function $\Omega(a)$, see the paper~\cite{Otsuki}, \begin{itemize} \item[1)] $\Omega(a)$ is continuous and monotone on $\left(0,\pi/4\right]$, \item[2)] $\lim_{a\to 0+}\Omega(a) = \pi/2$ and $\Omega\left(\pi/4\right) = \pi/\sqrt{2}$. \end{itemize} \end{proof} \subsection{Estimates for $\Lambda_{2p-1}(O_{p/q})$.} According to the paper~\cite{PenskoiOtsuki}, the metric on an Otsuki torus $O_{p/q}$ is extremal \label{OtsukiEstimate} for the functional $\Lambda_{2p-1}(\mathbb{T}^2,g)$. The goal of this section is to prove the following proposition.
\begin{proposition} For $p,q$ such that $(p,q)=1$ and $1/2<p/q<\sqrt{2}/2$, the following inequality holds, $$ 8\pi\left(2p-2 +\frac{\pi}{\sqrt{3}} \right)>\Lambda_{2p-1}(O_{p/q}). $$ \label{OtsukiEstimateProp} \end{proposition} In order to prove Proposition~\ref{OtsukiEstimateProp} we have to prove several auxiliary propositions. \begin{proposition} For $a\in\left(0,\pi/4\right)$ such that $\Omega(a) = p\pi/q$ one has $$ \Lambda_{2p-1}(O_{p/q}) = 8\pi q\cos aE\left(\sqrt{1-\tan^2a}\right). $$ \label{valueOtsuki} \end{proposition} \begin{proof} Let us use the notation of Proposition~\ref{PenskoiProp}. As we know, $$ \dot \varphi = \pm\frac{\sqrt{G-c^2}}{\sqrt{EG}}, $$ where $c = 2\pi\sin a\cos a$. Therefore, the length of the segment of the geodesic $\pi(O_{p/q})$ between the closest points with $\varphi = a$ and $\varphi=\pi/2-a$ is equal to $2\pi I$, where $$ I = \int_a^{\pi/2-a}\frac{\sin\varphi}{\sqrt{1-\sin^2a\cos^2a/(\sin^2\varphi\cos^2\varphi)}}d\varphi. $$ Let us express $I$ in terms of elliptic integrals, \begin{equation*} \begin{split} I =& \int_{\sin a}^{\cos a}\frac{x\sqrt{1-x^2}}{\sqrt{x^2(1-x^2) - \cos^2a\sin^2a}}dx = \frac{1}{2}\int_{\sin^2a}^{\cos^2a}\frac{\sqrt{1-u}}{\sqrt{u(1-u)-\cos^2a\sin^2a}}\,du \\= &\frac{1}{2}\int_0^1\frac{\sqrt{(1-\sin^2a) - (\cos^2a-\sin^2a)t}}{\sqrt{t(1-t)}}dt = \frac{1}{2}\cos a\int_0^1\frac{\sqrt{1-(1-\tan^2a)t}}{\sqrt{t(1-t)}}dt \\=& \cos a\int_0^1\frac{\sqrt{1-(1-\tan^2a)y^2}}{\sqrt{1-y^2}}\,dy = \cos aE(\sqrt{1-\tan^2a}). \end{split} \end{equation*} Here the following changes of variables were used, $$ \cos\varphi = x,\quad x^2=u, \quad u=(\cos^2a-\sin^2a)t+\sin^2a,\quad t=y^2. $$ Since the maps $\theta\mapsto\theta+\theta_0$ and $\theta\mapsto\theta_0-\theta$ are isometries, the length of the geodesic $\pi(O_{p/q})$ is equal to $4\pi q\cos aE(\sqrt{1-\tan^2a})$. By Proposition 13 from the paper~\cite{PenskoiOtsuki}, $\Lambda_{2p-1}(O_{p/q})$ is equal to the doubled length of the geodesic $\pi(O_{p/q})$.
\end{proof} \begin{proposition} For $k\in[0,1]$ one has the following inequality, $$ K(k) - \frac{2}{2-k^2}E(k)\geqslant 0. $$ \label{MainLemma} \end{proposition} \begin{proof} Let us expand the left hand side using the definitions of $E$ and $K$, \begin{equation*} \begin{split} K(k) - \frac{2}{2-k^2}E(k) = &\int_0^{\pi/2}\frac{d\theta}{\sqrt{1-k^2\sin^2\theta}} - \frac{2}{2-k^2}\int_0^{\pi/2}\sqrt{1-k^2\sin^2\theta}\, d\theta \\=&\frac{k^2}{2-k^2}\int _0^{\pi/2}\frac{2\sin^2\theta - 1}{\sqrt{1-k^2\sin^2\theta}}d\theta. \end{split} \end{equation*} Since the integrand is negative on $(0,\pi/4)$ and positive on $(\pi/4,\pi/2)$, one has \begin{equation*} \begin{split} \int_0^{\pi/2}&\frac{2\sin^2\theta-1}{\sqrt{1-k^2\sin^2\theta}}d\theta = \int_0^{\pi/4} \frac{2\sin^2\theta-1}{\sqrt{1-k^2\sin^2\theta}}d\theta + \int_{\pi/4}^{\pi/2}\frac{2\sin^2\theta-1} {\sqrt{1-k^2\sin^2\theta}}d\theta\\ &\geqslant \int_0^{\pi/4}\frac{2\sin^2\theta-1}{\sqrt{1-k^2/2}}\,d\theta + \int_{\pi/4}^{\pi/2} \frac{2\sin^2\theta-1}{\sqrt{1-k^2/2}}\,d\theta \\&= -\frac{1}{\sqrt{1-k^2/2}}\int_0^{\pi/2}\cos 2\theta d\theta = 0. \end{split} \end{equation*} \end{proof} Let us introduce the notation $$ \Phi(a) = \cos aE\left(\sqrt{1-\tan^2a}\right). $$ \begin{proposition} The function $\Phi(a)$ is non-decreasing and $\Phi'(a)<1/2$ for any $a\in \left(0,\dfrac{\pi}{4}\right)$. In particular, $1 = \Phi(0) \leqslant \Phi(a) \leqslant \Phi\left(\pi/4\right) = \pi/(2\sqrt{2})$. \label{PhiProp} \end{proposition} \begin{corollary} One has \begin{equation} \label{inOtsuki} 4\sqrt{2}\pi^2 q\geqslant\Lambda_{2p-1}(O_{p/q})\geqslant 8\pi q. \end{equation} \label{ineqOtsuki} \end{corollary} \begin{remark} Let us remark that during the preparation of the manuscript inequality~(\ref{inOtsuki}) appeared in the paper~\cite{HuSong}. \end{remark} \begin{proof}[Proof of Proposition~\ref{PhiProp}.]
Let us recall the following formulae for the derivatives of elliptic integrals, \begin{equation} \label{dedk} \frac{dE(k)}{dk} = \frac{E(k)-K(k)}{k}\,,\qquad \frac{dK(k)}{dk} = \frac{E(k)}{k(1-k^2)} - \frac{K(k)}{k}, \end{equation} \begin{equation} \label{dpi} \begin{split} \frac{\partial\Pi(n,k)}{\partial n} = \frac{1}{2(k^2-n)(n-1)}&\left(E(k) +\frac{(k^2-n)}{n}K(k) + \frac{(n^2-k^2)}{n}\Pi(n,k)\right), \\ \frac{\partial\Pi(n,k)}{\partial k} = &\frac{k}{n-k^2}\left(\frac{E(k)}{k^2-1} + \Pi(n,k)\right). \end{split} \end{equation} Let us introduce a notation $\beta = \sqrt{1-\tan^2a}$. One obtains \begin{equation} \label{dphi} \begin{split} \Phi'(a) &= \cos a\left(-2\tan a \frac{E(\beta)-K(\beta)}{2\cos^2a(1-\tan^2a)}\right) - \sin a E(\beta)\\ &= -\sin a\left(E(\beta) + \frac{E(\beta)-K(\beta)}{\cos^2a - \sin^2a}\right)\\ &= \frac{\sqrt{(1-\beta^2)(2-\beta^2)}}{\beta^2}\left(K(\beta) - \frac{2}{2-\beta^2}E(\beta)\right). \end{split} \end{equation} Now the monotonicity of the function $\Phi(a)$ follows from Proposition~\ref{MainLemma}. For the proof of the second part, let us go back to formula~(\ref{dphi}). One has \begin{equation*} \begin{split} \Phi'(a) =& -\sin a\left(\frac{2\cos^2aE(\beta)-K(\beta)}{\cos^2a-\sin^2a}\right) \\=& -\frac{\sin a}{\cos^2a-\sin^2a} \int_0^{\pi/2}\frac{2\cos^2a(1-\beta^2\sin^2\theta)-1}{\sqrt{1-\beta^2\sin^2\theta}}d\theta\\ =& \sin a\int_0^{\pi/2}\frac{2\sin^2\theta-1}{\sqrt{1-\beta^2\sin^2\theta}}d\theta\leqslant \sin a\int_{\pi/4}^{\pi/2}\frac{2\sin^2\theta-1}{\sqrt{1-\beta^2}}d\theta\\ =& -\cos a \int_{\pi/4}^{\pi/2}\cos 2\theta d\theta = \cos a\frac{\sin 2\theta}{2}\Bigl|_{\pi/2} ^{\pi/4}\leqslant \frac{1}{2}. \end{split} \end{equation*} This finishes the proof of Proposition~\ref{PhiProp}. \end{proof} \begin{proposition} The function $(2/\pi)\Omega(a) - \Phi(a)$ is increasing on the interval $\left(0,\pi/4\right)$. 
\end{proposition} \begin{proof} In the paper~\cite{HuSong} the following formula was proved, $$ \Omega(a) = \frac{1}{\sin a}\Pi\left(-\frac{\cos 2a}{\sin^2a},\sqrt{1-\tan^2a}\right). $$ Using formulae~(\ref{dpi}) one obtains the following formula, $$ \frac{d\Omega(a)}{da} = \frac{1}{\cos a\cos 2a}K\left(\sqrt{1-\tan^2a}\right) - \frac{2\cos a}{\cos 2a}E\left(\sqrt{1-\tan^2a}\right). $$ Let us recall the notation $\beta(a) = \sqrt{1 - \tan^2a}$. Then one has \begin{equation} \label{domega} \Omega'(a) = \frac{(2-\beta^2)^{\frac{3}{2}}}{\beta^2}\left(K(\beta) - \frac{2}{2-\beta^2}E(\beta)\right), \end{equation} $$ \Omega(a) = \sqrt{\frac{2-\beta^2}{1-\beta^2}}\Pi\left(-\frac{\beta^2}{1-\beta^2},\beta\right). $$ Moreover, by formula~(\ref{dphi}) one has $$ \Phi'(a) = \frac{\sqrt{(1-\beta^2)(2-\beta^2)}}{\beta^2}\left(K(\beta) - \frac{2}{2-\beta^2}E(\beta)\right). $$ The inequality $(2/\pi)(2-\beta^2) - \sqrt{1-\beta^2}>0$ and Proposition~\ref{MainLemma} imply the inequality $$ \frac{2}{\pi}\Omega'(a) - \Phi'(a) = \frac{\sqrt{2-\beta^2}}{\beta^2}\left(K(\beta) - \frac{2}{2-\beta^2}E(\beta)\right)\left(\frac{2}{\pi}(2-\beta^2) - \sqrt{1-\beta^2}\right) > 0. $$ \end{proof} \begin{corollary} For $a\in\left[1/5,\pi/4\right]$ one has $$ \frac{2}{\pi}\Omega(a) - \Phi(a) > \frac{2\sqrt{3}-\pi}{3\sqrt{3}}. $$ \label{cor1} \end{corollary} \begin{proof} Using the tables of elliptic integrals, e.g. the book~\cite{Friedman}, one obtains the inequality $$ \frac{2}{\pi}\Omega\left(\frac{1}{5}\right) - \Phi\left(\frac{1}{5}\right) > \frac{2\sqrt{3}-\pi}{3\sqrt{3}}. $$ The rest of the proof follows from the monotonicity of the function on the left hand side.
\end{proof} \begin{proposition} For $\xi\in\left[0,1/5\right]$ one has $$ \Omega'(\xi) > \frac{\pi}{4}\left(\frac{\pi}{\sqrt{3}}-1\right)^{-1}. $$ \label{prop1} \end{proposition} \begin{proof} By formula~(\ref{domega}) for $\xi\in\left[0,1/5\right]$ one has \begin{equation*} \begin{split} \Omega'(\xi) &= \frac{(2-\beta(\xi)^2)^{\frac{3}{2}}}{\beta(\xi)^2}\left(K(\beta(\xi)) - \frac{2}{2-\beta(\xi)^2}E(\beta(\xi))\right)\\ &\geqslant K\left(\beta\left(\frac{1}{5}\right)\right) - 2\frac{2-\beta^2\left(1/5\right)}{\beta^2\left(1/5\right)}E\left(\beta\left(\frac{1}{5}\right)\right). \end{split} \end{equation*} In the last inequality we used the facts that $K(k)$ is an increasing function while $E(k)$ and $\beta(a)$ are decreasing functions. The tables of elliptic integrals in the book~\cite{Friedman} provide the inequality $$ K\left(\beta\left(\frac{1}{5}\right)\right) - 2\frac{2-\beta^2\left(1/5\right)}{\beta^2\left(1/5\right)}E\left(\beta\left(\frac{1}{5}\right)\right)>\frac{\pi}{4}\left(\frac{\pi}{\sqrt{3}}-1\right)^{-1}, $$ which completes the proof. \end{proof} \begin{proof}[Proof of Proposition~\ref{OtsukiEstimateProp}] We want to prove that $$ 8\pi\left(2p-2 + \frac{\pi}{\sqrt{3}}\right) > 8\pi q\Phi(a), $$ where $\Omega(a) = p\pi/q$. This inequality is equivalent to the following one $$ 2\frac{p}{q} - \frac{2\sqrt{3}-\pi}{q\sqrt{3}} > \Phi(a). $$ Since $\Omega(a) = p\pi/q$, it is sufficient to prove that \begin{equation} \frac{2}{\pi}\Omega(a) - \Phi(a) > \frac{2\sqrt{3}-\pi}{q\sqrt{3}}. \label{mainineq} \end{equation} Since $q\geqslant 3$, the application of Corollary~\ref{cor1} provides inequality~(\ref{mainineq}) for $a\in\left[1/5,\pi/4\right]$.
In order to prove the inequality for $a\in\left[0,1/5\right]$ let us note that by Proposition~\ref{PhiProp} \begin{equation*} \begin{split} \frac{2}{\pi}\Omega(a) - \Phi(a) &= \frac{2}{\pi}(\Omega(a)-\Omega(0)) - (\Phi(a)-\Phi(0))\\ &= a\left(\frac{2}{\pi}\Omega'(\xi)-\Phi'(\eta)\right)\geqslant a\left(\frac{2}{\pi}\Omega'(\xi)-\frac{1}{2}\right) \end{split} \end{equation*} for some $\xi,\eta\in(0,a)$. Moreover, $$ \frac{1}{2q}\pi\leqslant\frac{2p-q}{2q}\pi = \frac{p}{q}\pi - \frac{1}{2}\pi = \Omega(a)-\Omega(0) = a\Omega'(\xi), $$ and hence $$ \frac{1}{q}\leqslant\frac{2a}{\pi}\Omega'(\xi). $$ Therefore, inequality~(\ref{mainineq}) follows from the inequality $$ \frac{2}{\pi}\Omega'(\xi) - \frac{1}{2} > \frac{2}{\pi}\left(2-\frac{\pi}{\sqrt{3}}\right)\Omega'(\xi), $$ that is, from the inequality $$ \Omega'(\xi) > \frac{\pi}{4}\left(\frac{\pi}{\sqrt{3}}-1\right)^{-1}. $$ The last inequality easily follows from Proposition~\ref{prop1}. \end{proof} \section{Lawson surfaces} A Lawson tau-surface $\tau_{m,k}$ is an immersed surface in the sphere $\mathbb{S}^3$ \label{LawsonEstimate} defined by the doubly periodic immersion of $\mathbb{R}^2$ given by the formula $$ (\cos mx \cos y,\sin mx\cos y, \cos kx\sin y,\sin kx\sin y). $$ It was introduced by Lawson in the paper~\cite{Lawson}. He also proved that the surfaces $\tau_{m,k}$ with $m\geqslant k\geqslant 1$ and $(m,k)=1$ are pairwise distinct compact minimal surfaces in $\mathbb{S}^3$. Assume that $(m,k)=1$. If both $m$ and $k$ are odd, then $\tau_{m,k}$ is a torus, which we call a Lawson torus; otherwise $\tau_{m,k}$ is a Klein bottle, which we call a Lawson Klein bottle. \begin{proposition}[Penskoi \cite{PenskoiLawson}] Let $\tau_{m,k}$ be a Lawson surface.
Then the induced metric on $\tau_{m,k}$ is an extremal metric for the functional $\Lambda_j(M,g)$, where \begin{equation} j = 2\left[\sqrt{\frac{m^2+k^2}{2}}\right] + m + k - 1, \label{j} \end{equation} $M = \mathbb{T}^2$ if both $m,k$ are odd and $M=\mathbb{K}$ otherwise. The corresponding value of the functional is $$ \Lambda_j(\tau_{m,k}) = 8\pi mE\left(\frac{\sqrt{m^2-k^2}}{m}\right). $$ \end{proposition} \begin{proposition} Let $j$ be defined by formula~(\ref{j}). If $\tau_{m,k}$ is a Lawson torus, then $$ \Lambda_j(\tau_{m,k}) < 8\pi\left(j-1+\frac{\pi}{\sqrt{3}}\right). $$ If $\tau_{m,k}$ is a Klein bottle, then $$ \Lambda_j(\tau_{m,k}) < 8\pi(j-1)+12\pi E\left(\frac{2\sqrt{2}}{3}\right). $$ \label{LawsonProp} \end{proposition} \begin{proof} It is sufficient to obtain the inequality \begin{equation} \label{LawsonIneq} j \geqslant mE\left(\frac{\sqrt{m^2-k^2}}{m}\right). \end{equation} Let us remark that the function $$ \varphi(x) = 1 + x - E(\sqrt{1-x^2}) $$ is non-negative on the interval $[0,1]$. Indeed, \begin{equation*} \begin{split} E(x) =& \int_0^{\pi/2}\sqrt{1 - x^2\sin^2\psi}\,d\psi\leqslant\int_0^{\pi/2}(\sqrt{1 - \sin^2\psi} + \sqrt{(1-x^2)\sin^2\psi})\, d\psi\\ =& 1 + \sqrt{1-x^2}. \end{split} \end{equation*} Let us divide both sides of inequality (\ref{LawsonIneq}) by $m$ and denote by $x$ the ratio $\dfrac{k}{m}\in (0,1]$. Since $$ \left[\sqrt{\frac{m^2+k^2}{2}}\right]\geqslant \left[\frac{m+k}{2}\right]\geqslant\left[\frac{m+1}{2}\right] \geqslant \frac{m}{2}, $$ inequality (\ref{LawsonIneq}) follows from the non-negativity of $\varphi(x)$. \end{proof} \section{Bipolar surfaces to the Lawson surfaces} Let $I\colon N \looparrowright \mathbb{S}^3$ be a minimal immersion. \label{BipLawsonEstimate} A Gauss map $I^*\colon N \to\mathbb{S}^3$ is defined pointwise as the image of the unit normal in $\mathbb{S}^3$ translated to the origin in $\mathbb{R}^4$.
Then the exterior product $\tilde I = I\wedge I^*$ is an immersion of $N$ in $\mathbb{S}^5\subset\mathbb{R}^6$. Lawson proved in the paper~\cite{Lawson} that this immersion is minimal. The image $\tilde I(N)$ is called a bipolar surface to $N$. Let us denote by $\tilde \tau_{m,k}$ the bipolar surface to the surface $\tau_{m,k}$. Lapointe proved in the paper~\cite{Lapointe} that \begin{itemize} \item if $mk\equiv 0\,\,(\mathrm{mod}\, 2)$ then $\tilde \tau_{m,k}$ is a torus carrying the extremal metric for the functional $\Lambda_{4m-2}(\mathbb{T}^2,g)$ and $$ \Lambda_{4m-2}(\tilde\tau_{m,k}) = 16\pi mE\left(\dfrac{\sqrt{m^2-k^2}}{m}\right); $$ \item if $mk\equiv 1\,\,(\mathrm{mod}\, 4)$ then $\tilde \tau_{m,k}$ is a torus carrying the extremal metric for the functional $\Lambda_{2m-2}(\mathbb{T}^2,g)$ and $$ \Lambda_{2m-2}(\tilde\tau_{m,k}) = 8\pi mE\left(\dfrac{\sqrt{m^2-k^2}}{m}\right); $$ \item if $mk\equiv 3\,\,(\mathrm{mod}\, 4)$ then $\tilde \tau_{m,k}$ is a Klein bottle carrying the extremal metric for the functional $\Lambda_{m-2}(\mathbb{K},g)$ and $$ \Lambda_{m-2}(\tilde\tau_{m,k}) = 4\pi mE\left(\dfrac{\sqrt{m^2-k^2}}{m}\right). $$ \end{itemize} \begin{proposition} If $mk\equiv 1 \,\,(\mathrm{mod}\, 4)$ then the following inequality holds $$ \Lambda_{2m-2}(\tilde\tau_{m,k})<8\pi\left(2m-3 + \frac{\pi}{\sqrt{3}}\right). $$ If $mk\equiv 0 \,\,(\mathrm{mod}\, 2)$ then the following inequality holds $$ \Lambda_{4m-2}(\tilde\tau_{m,k})<8\pi\left(4m-3 + \frac{\pi}{\sqrt{3}}\right). $$ If $mk\equiv 3\,\,(\mathrm{mod}\, 4)$ and $\{m,k\}\ne\{3,1\}$ then the following inequality holds $$ 8\pi(m-3) + 12\pi E\left(\frac{2\sqrt{2}}{3}\right) > \Lambda_{m-2}(\tilde\tau_{m,k}). $$ \label{BipLawsonProp} \end{proposition} \begin{proof} In order to prove the first inequality it is sufficient to prove that \begin{equation} \label{dot} mE\left(\frac{\sqrt{m^2-k^2}}{m}\right)\leqslant (2m-2). 
\end{equation} It is well-known that $E(\tilde k)\leqslant \pi/2$ for $\tilde k\in [0,1]$. This implies that it is sufficient to prove that $$ \pi m\leqslant 4m-4. $$ This inequality holds for $m\geqslant 5$. The statement for $\tilde \tau_{1,1}$ follows from the fact that $\tilde\tau_{1,1}$ is a Clifford torus and $\Lambda_1(\tilde\tau_{1,1}) = 4\pi^2$. In the same way, in order to prove the second inequality in Proposition~\ref{BipLawsonProp} it is sufficient to prove that $$ \pi m\leqslant 4m-3 + \frac{\pi}{\sqrt{3}}. $$ This inequality holds for $m\geqslant 2$. The third inequality is equivalent to the following one $$ 2(m-3) + 3E\left(\frac{2\sqrt{2}}{3}\right) > m E\left(\frac{\sqrt{m^2-k^2}}{m}\right). $$ Since $E(\tilde k) < \pi/2$ it is sufficient to prove that $$ \left(2-\frac{\pi}{2}\right)m > 6 - 3E\left(\frac{2\sqrt{2}}{3}\right). $$ This inequality holds for $m\geqslant 7$. For the exceptional case $\{m,k\} = \{5,3\}$ one verifies the third inequality explicitly using the tables of elliptic integrals in the book~\cite{Friedman}. \end{proof} \section{Bipolar surfaces to Otsuki tori} In the paper~\cite{Karpukhin} the following proposition was proved. \label{BipOtsukiEstimate} \begin{proposition} The bipolar surface $\tilde O_{p/q}$ to an Otsuki torus $O_{p/q}$ is a torus. If $q$ is odd then the metric on the bipolar Otsuki torus $\tilde O_{p/q}$ is extremal for the functional $\Lambda_{2q+4p-2}(\mathbb{T}^2,g)$ and $\Lambda_{2q+4p-2}(\tilde O_{p/q}) < 4\sqrt{2}q\pi^2$. If $q$ is even then the metric on the bipolar Otsuki torus $\tilde O_{p/q}$ is extremal for the functional $\Lambda_{q+2p-2}(\mathbb{T}^2,g)$ and $\Lambda_{q+2p-2}(\tilde O_{p/q}) < 2\sqrt{2}q\pi^2$.\label{bipOtsuki} \end{proposition} \begin{proposition} If $q$ is even then the following inequality holds, $$ \Lambda_{q+2p-2}(\tilde O_{p/q}) < 8\pi\left(q+2p-3+\frac{\pi}{\sqrt{3}}\right).
$$ If $q$ is odd then one has the following inequality, $$ \Lambda_{2q+4p-2}(\tilde O_{p/q}) < 8\pi\left(2q+4p-3+\frac{\pi}{\sqrt{3}}\right). $$ \label{BipOtsukiProp} \end{proposition} \begin{proof} If $q$ is even, then we have $$ 8\pi\left(q+2p-3+\frac{\pi}{\sqrt{3}}\right)>8\pi(q+2p-2)>12\pi q > 2\sqrt{2}\pi^2q. $$ We used the inequalities $2p>q$ and $p>1$ in order to prove the last inequality. In the same way, if $q$ is odd, then we have $$ 8\pi\left(2q+4p-3+\frac{\pi}{\sqrt{3}}\right)>8\pi(2q+4p-2)>24\pi q > 4\sqrt{2}\pi^2q. $$ \end{proof} Now it is easy to see that Propositions~\ref{OtsukiEstimateProp},~\ref{LawsonProp},~\ref{BipLawsonProp},~\ref{BipOtsukiProp} together with Proposition~\ref{LowerBound} imply Theorem~\ref{MainTheorem}. \section{Clifford torus} Let us represent the Clifford torus as a flat torus with the square lattice with edges equal to $2\pi$. In this case \label{CliffordEstimate} the Laplace--Beltrami operator coincides up to sign with the classical two-dimensional Laplace operator. Therefore, using separation of variables one obtains that the eigenfunctions are $$ \sin nx\sin my, \quad \sin nx\cos ly,\quad \cos kx\sin my, \quad\cos kx\cos ly, $$ where $n,m \in \mathbb{N}$ and $k,l\in\mathbb{Z}_{\geqslant 0}$. The corresponding eigenvalues are equal to $n^2+m^2$, $n^2+l^2$, $k^2+m^2$, $k^2+l^2$ respectively. \begin{proposition} For the Clifford torus, Weyl's counting function $N(\lambda)$ is equal to the number of integer points in the open disk of radius $\sqrt{\lambda}$ with the center at the origin of $\mathbb{R}^2$. \end{proposition} \begin{proof} Let us introduce a one-to-one correspondence $\nu$ between eigenfunctions and integer points in $\mathbb{R}^2$. We set $$ \left\{ \begin{array}{lcl} \nu(\sin nx\sin my) &=& (n,m),\\ \nu(\sin nx\cos ly) &=& (n,-l),\\ \nu(\cos kx\sin my) &=& (-k,m),\\ \nu(\cos kx\cos ly) &=& (-k,-l).\\ \end{array} \right.
$$ Let us also remark that the eigenvalue of the function $f$ is equal to the squared distance between $(0,0)$ and $\nu(f)$. This observation completes the proof. \end{proof} \subsection{Proof of Proposition~\ref{CliffordTheorem}.} It is easy to check that the set of functions $$ (\sin kx,\cos kx,\sin ky,\cos ky) $$ defines an isometric immersion of the Clifford torus in the unit sphere. The same is true for the set $$ (\sin kx\sin ky,\sin kx\cos ky, \cos kx\sin ky, \cos kx\cos ky) $$ and the set \begin{equation*} \begin{split} (\sin k&x\sin ly,\sin kx\cos ly, \cos kx\sin ly, \cos kx\cos ly,\\ &\sin lx\sin ky,\sin lx\cos ky, \cos lx\sin ky, \cos lx\cos ky), \end{split} \end{equation*} where $k\ne l$. Therefore, according to Theorem~\ref{th1}, the metric on the Clifford torus is extremal for the functionals $\Lambda_{N(r^2)}(\mathbb{T}^2,g)$, where $r^2 = n^2+m^2$ with $n,m\in\mathbb{Z}$, and $\Lambda_{N(r^2)}(\mathbb{T}_{Cl}) = 4\pi^2 r^2$. Let $B_r$ be a disc of radius $r$. Then one has the simple estimate $$ N(r^2)\geqslant \mathrm{Area}\left(B_{r-\sqrt{2}/2}\right) =\pi\left(r-\frac{\sqrt{2}}{2}\right)^2. $$ So, it is sufficient to prove that $$ 2\left(r-\frac{\sqrt{2}}{2}\right)^2 > r^2, $$ and this inequality holds for $r^2\geqslant 6$. For $r^2<6$ the inequality $8\pi N(r^2) > 4\pi^2 r^2$ holds; it can be obtained by direct enumeration of all possible values of $r^2$. This completes the proof of Proposition~\ref{CliffordTheorem}. \subsection*{Acknowledgements} The author thanks A.V. Penskoi for the statement of this problem, fruitful discussions and invaluable help in the preparation of the manuscript. The research of the author was partially supported by the Dobrushin Fellowship and by the Simons-IUM Fellowship.
https://arxiv.org/abs/1210.8122
Non-maximality of known extremal metrics on torus and Klein bottle
https://arxiv.org/abs/1304.5676
Bundles of spectra and algebraic K-theory
A parametrized spectrum E is a family of spectra E_x continuously parametrized by the points x of a topological space X. We take the point of view that a parametrized spectrum is a bundle-theoretic geometric object. When R is a ring spectrum, we consider parametrized R-module spectra and show that they give cocycles for the cohomology theory determined by the algebraic K-theory K(R) of R in a manner analogous to the description of topological K-theory K^0(X) as the Grothendieck group of vector bundles over X. We prove a classification theorem for parametrized spectra, showing that parametrized spectra over X whose fibers are equivalent to a fixed R-module M are classified by homotopy classes of maps from X to the classifying space BAut_R(M) of the A_\infty space of R-module equivalences from M to M. In proving the classification theorem for parametrized spectra, we define the notion of a principal G-fibration where G is an A_\infty space and prove a similar classification theorem for principal G-fibrations.
\section{Introduction} Contemporary algebraic topology features a vast array of generalized cohomology theories, but our knowledge of their geometric content remains limited to the examples of ordinary cohomology theories, topological $K$-theory and cobordism theories. In this paper we describe the geometry underlying the class of cohomology theories given by the algebraic $K$-theory of a ring, or more generally a ring spectrum. The higher algebraic $K$-groups $K_n(R)$ of a ring spectrum $R$ may be defined as the homotopy groups of the algebraic $K$-theory spectrum $K(R)$. By the geometry of $K(R)$-theory, we mean a geometric description of the cocycles $K(R)^{*}(X)$ for the cohomology theory that the spectrum $K(R)$ represents. The result is reminiscent of the description of topological $K$-theory $K^0(X)$ in terms of the Grothendieck group of vector bundles over $X$. The analogues of vector bundles for $K(R)$-theory are parametrized spectra that are modules over the ring spectrum $R$. We call these objects $R$-bundles. The main result is the following: \begin{theorem}\label{main_theorem} Let $R$ be a connective ring spectrum and let $K(R)$ be the algebraic $K$-theory spectrum of $R$. Then for any finite CW complex $X$, there is a natural isomorphism \[ K(R)^{0}(X) \cong \mathrm{Gr} [ \text{lifted finite rank free $R$-bundles over $X$}] \] between the degree zero cocycles for the $K(R)$-theory of $X$ and the Grothendieck group completion of the abelian monoid of equivalence classes of lifted finite rank free $R$-bundles over $X$. \end{theorem} \noindent We will give a precise meaning to all of the terms occurring in the statement of the theorem in \S \ref{section:bundle_classification} and \S\ref{proving_main_theorem_section}, but for now we note that an $R$-bundle $E$ over $X$ is free of finite rank if every fiber $E_x$ admits an equivalence of $R$-modules to the $n$-fold wedge $R^{\vee n}$ for some $n \geq 0$.
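For orientation, it may help to record the case of a point; this is only a consistency check, not part of the proof. Over $X = \ast$, a finite rank free $R$-bundle is simply a free $R$-module $R^{\vee n}$, and the theorem reduces to the familiar identification
\[
K(R)^{0}(\ast) \cong \pi_0 K(R) = K_0(R) = K_0^{f}(\pi_0 R),
\]
the Grothendieck group completion of the monoid of finite rank free modules over the discrete ring $\pi_0 R$, in agreement with the description of $\Omega^{\infty}K(R)$ recalled below.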
Our geometric description of $K(R)$-theory is inspired by previous work. When $R$ is a discrete ring, Karoubi gave a similar description of the cocycles for $K(R)$-theory in terms of fibrations of projective $R$-modules \cite{Kar}. When $R$ is the connective complex $K$-theory spectrum $ku$, Baas, Dundas, Richter and Rognes interpreted the cocycles of $K(ku)$-theory as 2-vector bundles, which are a categorification of complex vector bundles \citelist{\cite{BDR} \cite{BDRR}}. In forthcoming work with Jonathan Campbell \cite{CL}, we explain how to directly compare 2-vector bundles and $ku$-bundles. By definition, $K(R)^0(X)$ is the group of homotopy classes of maps from $X$ to the zeroeth space of the algebraic $K$-theory spectrum, whose homotopy type can be described using Quillen's plus construction: \[ \Omega^{\infty} K(R) \simeq K_0(R) \times B \mGL_{\infty}^{+}(R). \] Here $K_0(R) = K_0^{f} (\pi_0 R)$ is the Grothendieck group of free modules over the discrete ring $\pi_0 R$ and $B \mGL_{\infty}^{+}(R)$ is Quillen's plus construction applied to the H-space $B \mGL_{\infty}(R) = \colim_n B \mGL_n(R)$, where $B \mGL_n(R)$ is the classifying space of the grouplike $A_{\infty}$ space $GL_n(R) = \Aut_R(R^{\vee n})$ of $R$-module equivalences $R^{\vee n} \arr R^{\vee n}$. The $H$-space structure on $B \mGL_{\infty}(R)$ arises via the usual block-sum of matrices formula, and we take the plus construction with respect to the commutator subgroup as usual. One important point is that, unlike the case of vector bundles and complex $K$-theory, the plus construction can radically change the homotopy type. This forces the bundles that define cocycles for $K(R)$-theory to be \emph{lifted} $R$-bundles over $X$, meaning $R$-bundles defined up to covers of $X$ with homologically trivial fibers---see \S\ref{proving_main_theorem_section} for a precise definition. 
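The role of covers in the definition of lifted bundles can be motivated by the standard theory of acyclic maps; the following remark is heuristic and is not used later. The comparison map $B \mGL_{\infty}(R) \arr B \mGL_{\infty}^{+}(R)$ is acyclic, i.e.\ its homotopy fiber has trivial reduced integral homology. A map $f \colon X \arr B \mGL_{\infty}^{+}(R)$ therefore need not lift to $B \mGL_{\infty}(R)$, but the pullback
\[ \xymatrix{ X' \ar[r] \ar[d] & B \mGL_{\infty}(R) \ar[d] \\ X \ar[r]_-{f} & B \mGL_{\infty}^{+}(R) } \]
produces a cover $X' \arr X$ with homologically trivial fibers over which $f$ does lift. This is the phenomenon that the notion of a lifted $R$-bundle is designed to capture.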
The term ``bundle'' is perhaps a little naive: as one continuously varies the basepoint in $X$, the fibers of a parametrized spectrum are weakly homotopy equivalent, but need not be strictly isomorphic. Put another way, to describe a parametrized spectrum in terms of cocycle data would require a derived or infinitely homotopy coherent descent condition. This point of view naturally leads to the description of parametrized objects in a quasicategory, as developed by Ando, Blumberg, Gepner, Hopkins and Rezk \citelist{\cite{ABG1} \cite{ABG2} \cite{ABGHR2}}. Rather than using quasicategories, we follow the foundations of parametrized stable homotopy theory developed by May and Sigurdsson \cite{MS}. In their framework, parametrized spectra are defined in terms of a ``total object'' over $X$ instead of cocycle data. Homotopical control of the fiber homotopy type of parametrized spectra is maintained via the framework of Quillen model categories. Theorem \ref{main_theorem} follows from a general classification theorem for parametrized $R$-module spectra. In this paper, a spectrum means an orthogonal spectrum, and we use the stable model structure on orthogonal ring and module spectra from Mandell-May-Schwede-Shipley \cite{MMSS}. Given an $R$-module $M$, we say that a parametrized $R$-module spectrum $E$ over $X$ has fiber $M$ if the fiber $E_{x}$ of $E$ over every point $x \in X$ admits a stable equivalence $E_{x} \simeq M$ of $R$-modules. We use the terms $R$-bundle with fiber $M$ and parametrized $R$-module with fiber $M$ interchangeably. Let $\Aut_{R} M = \mGL_1 F^{R}(M, M)$ be the $A_{\infty}$ space of stable equivalences of $R$-modules $M \arr M$, and let $B \Aut_{R} M$ be its classifying space. \begin{theorem}\label{thm:classification_of_Rbundles} Let $X$ be a CW complex, let $R$ be an $s$-cofibrant ring spectrum and let $M$ be an $s$-cofibrant and $s$-fibrant $R$-module.
There is a natural bijection between stable equivalence classes of $R$-bundles over $X$ with fiber $M$ and homotopy classes of maps $[X, B \Aut_{R}M]$. \end{theorem} Ando-Blumberg-Gepner \cite{ABG2} prove that the quasicategory of functors $X \arr \mathscr{S}_{\infty}$ from a Kan complex $X$ to the quasicategory of spectra $\mathscr{S}_{\infty}$ is equivalent to the quasicategory associated to the May-Sigurdsson model category of parametrized spectra over the geometric realization $\abs{X}$. Variants of their arguments can be used to prove results in the same vein as Theorem \ref{thm:classification_of_Rbundles}. The proof in this paper is more concrete, using the pullback of a universal bundle to induce the equivalence instead of Lurie's straightening functor \cite{HTT}*{\S3.2.1} and universal properties of functors of quasicategories. When $M = R$, Theorem \ref{thm:classification_of_Rbundles} says that line $R$-bundles over $X$ are classified by the classifying space $B \mGL_1 R$ of the units of $R$. The construction of the line $R$-bundle associated to a map $f \colon X \arr B \mGL_1 R$ is the generalised Thom spectrum of $f$ as studied by Ando-Blumberg-Gepner-Hopkins-Rezk \citelist{\cite{ABGHR1} \cite{ABGHR2}}; see Remark \ref{construction_thomspectra}. From another point of view, a parametrized spectrum with fiber $M$ gives a twisted form of the cohomology theory $M$. We can then view Theorem \ref{thm:classification_of_Rbundles} as giving a general classification theorem of the twists of $M$-theory. In order to prove Theorem \ref{thm:classification_of_Rbundles}, we develop an associated theory of principal $A_{\infty}$ fibrations. Intuitively, a principal $A_{\infty}$ fibration is a homotopical version of a principal $G$-bundle. Instead of a group, $G$ is a grouplike $A_{\infty}$ space and we let $G$ act on fibers via weak equivalences instead of isomorphisms. 
In order to make this notion precise, it is useful to work with a symmetric monoidal product $\boxtimes$ whose monoids are $A_{\infty}$ spaces. This is possible, but at the cost of working within a different category than the category of topological spaces---in fact there are a few different options, which are described and compared in \cite{diagram_spaces} (see also \citelist{\cite{BCS} \cite{SS}}). Since May-Sigurdsson parametrized spectra are built out of orthogonal spectra, it is most natural to use the category of $\mathcal{I}$-spaces. An $\mathcal{I}$-space is a continuous functor from the category $\mathcal{I}$ of finite dimensional inner product spaces and isometries to the category of topological spaces. There is a symmetric monoidal product $\boxtimes$ on $\mathcal{I}$-spaces whose monoids we call $\mathcal{I}$-monoids (these are equivalent to, but slightly different from, the $\mathcal{I}$-monoids studied by Schlichtkrull and Sagave \cite{SS}). The category of $\mathcal{I}$-monoids is Quillen equivalent to the category of $A_{\infty}$ spaces, which justifies our use of this technology to model $A_{\infty}$ multiplications. If $G$ is an $\mathcal{I}$-monoid, then there is a classifying $\mathcal{I}$-space $B^{\boxtimes}G$ built as a two-sided bar construction out of the symmetric monoidal product $\boxtimes$. We prove the classification theorem for principal $G$-fibrations (Theorem \ref{classification_theorem_for_diagram_fibrations}) using $B^{\boxtimes}G$. Under the equivalence of homotopy categories between the homotopy category of $\mathcal{I}$-spaces and the homotopy category of spaces, $B^{\boxtimes}G$ represents the usual classifying space $BG$ of an $A_{\infty}$ space equivalent to $G$. The classification theorem for principal $A_{\infty}$ fibrations in the context of $\mathcal{I}$-spaces takes the following form. \begin{theorem}\label{generic_Gbundle_classification} Let $G$ be a grouplike $q$-cofibrant $\mathcal{I}$-monoid and let $X$ be a CW-complex.
Then equivalence classes of principal $G$-fibrations over $X$ are in bijective correspondence with the set of homotopy classes of maps $[X, BG]$. \end{theorem} The classification of principal fibrations originated in the work of Stasheff \cite{stasheff}, and has been studied recently in more structured and abstract contexts motivated in part by higher gauge theory \citelist{ \cite{NSS1} \cite{NSS2} \cite{wendt} \cite{RS} \cite{stevenson} }. The framework developed here does not strive for maximal generality, but aims to interweave with the theory of parametrized spectra as seamlessly as possible. We follow May's generalization \cite{class_and_fib} of Stasheff's original results, enhanced to connect with the modern model structures of parametrized homotopy theory but still fundamentally based on the homotopy lifting property of Hurewicz fibrations in $\mathcal{I}$-spaces. \S\ref{principal_fibrations_section}--\S\ref{classification_section_Ispaces} are devoted to the proof of Theorem \ref{generic_Gbundle_classification}. We prove in \S\ref{model_cat_param_spaces_section}--\S\ref{section:bundle_classification} that the associated bundle construction \[ Y \longmapsto M \sma_{\Sigma^{\infty}_{+}\Aut_{R} M} \Sigma^{\infty}_{X} Y \] induces a bijection between equivalence classes of principal $\Aut_{R}M$-fibrations and equivalence classes of $R$-bundles with fiber $M$, which proves Theorem \ref{thm:classification_of_Rbundles} from Theorem \ref{generic_Gbundle_classification}. This material is almost entirely based on model category theory, with a notable exception in \S\ref{section:prep}, where we construct the principal $\Aut_{R}M$-fibration associated to an $R$-bundle with fiber $M$. Our work, particularly this construction, has been used by Cohen and Jones \citelist{\cite{cohen_jones} \cite{cohen_jonesII}} in their study of the gauge group of parametrized spectra and the $K$-theory of string topology. 
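The associated bundle construction above is, at least heuristically, a spectrum-level analogue of the classical associated bundle of a principal $G$-bundle $P$ over $X$ and a $G$-space $F$:
\[
P \longmapsto P \times_{G} F \quad \text{corresponds to} \quad Y \longmapsto M \sma_{\Sigma^{\infty}_{+}\Aut_{R} M} \Sigma^{\infty}_{X} Y,
\]
with the balanced product over the group replaced by the balanced smash product over the ring spectrum $\Sigma^{\infty}_{+}\Aut_{R} M$.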
{\it Topological Conventions.} We will rely heavily on the foundations for parametrized homotopy theory developed by May-Sigurdsson \cite{MS}. As explained there, it is advantageous to leave the category $\mathscr{U}$ of compactly generated spaces. By a ``space'' we mean a $k$-space, and we denote the category of spaces by $\mathscr{K}$. We will use versions of $\mathcal{I}$-spaces and $\ast$-modules from \cite{diagram_spaces} based on the category $\mathscr{K}$ instead of the category $\mathscr{U}$ of compactly generated spaces. All of the results from that paper are still valid in this context, although some results proven by inducting over cell complexes require the additional assumption of being well-grounded. We will always assume that the base space (denoted by $B$ or $X$) is compactly generated. {\it Outline.} In \S\ref{principal_fibrations_section} we set up the general theory of principal fibrations in a symmetric monoidal category. In \S\ref{section:homotopical_analysis} we introduce the categories of $\mathcal{I}$-spaces and $\ast$-modules and prove some homotopical results regarding bar constructions in these categories. \S\ref{classification_section_Ispaces} is devoted to the classification theorem for principal fibrations of $\mathcal{I}$-spaces (Theorem \ref{generic_Gbundle_classification}). In \S\ref{model_cat_param_spaces_section} and \S\ref{section:modelcat_spectra} we switch gears and introduce model category structures on the categories of parametrized $\mathcal{I}$-spaces and parametrized spectra. In \S\ref{section:prep} and \S\ref{section:bundle_classification} we show that principal $\Aut_{R}M$-fibrations are equivalent to $R$-bundles with fiber $M$ and deduce Theorem \ref{thm:classification_of_Rbundles} from Theorem \ref{generic_Gbundle_classification}. We complete the proof of Theorem \ref{main_theorem} in \S\ref{proving_main_theorem_section}. 
The reader willing to take the classification theorem for principal $G$-fibrations for granted can skip directly to the material on parametrized spectra by starting in \S\ref{model_cat_param_spaces_section}. {\it Acknowledgments.} I thank Peter May for his support and interest in the project. I also thank Matt Ando, Andrew Blumberg and Mike Shulman for stimulating conversations. \section{Principal fibrations}\label{principal_fibrations_section} Let $(\mathscr{C}, \boxtimes, *)$ be a topologically bicomplete symmetric monoidal category whose unit $*$ is a terminal object. Suppose that $\mathscr{C}$ is equipped with a class of morphisms called weak equivalences that contains all isomorphisms in $\mathscr{C}$, and further assume that the associated homotopy category $\ho \mathscr{C}$ exists. We will define the notion of a principal fibration structured by a $\boxtimes$-monoid in $\mathscr{C}$. In the following sections, we will be interested in the cases where $\mathscr{C}$ is the category of $\mathbb{I}$-spaces, $\mathcal{I}$-spaces or $\ast$-modules from \cite{diagram_spaces}, but for now we work in full generality. In those examples, $\boxtimes$-monoids model $A_{\infty}$ spaces so we may think of a principal fibration as a principal $A_{\infty}$ fibration. Let $B$ be an object of $\mathscr{C}$. Let $(X, p) = (p \colon X \arr B)$ be an object of the category $\mathscr{C}/B$ of objects of $\mathscr{C}$ over $B$. 
A point $b$ of $B$ is a map $i_{b} \colon * \arr B$ from the terminal object into $B$, and we define the fiber of $(X, p)$ over $b$ to be the object $X_{b} = i_{b}^* X$ of $\mathscr{C}$ defined by the following pullback square: \[ \xymatrix{ X_{b} \ar[r] \ar[d] & X \ar[d]^{p} \\ \ast \ar[r]_{i_{b}} & B } \] The symmetric monoidal product $\boxtimes$ on $\mathscr{C}$ extends to a bifunctor: \[ - \boxtimes - \colon \mathscr{C} \times \mathscr{C}/B \arr \mathscr{C}/B \] defined by \[ A \boxtimes (X, p) = (A \boxtimes X \xrightarrow{ \pi \boxtimes p} * \boxtimes B \cong B), \] where $\pi$ is the map to the terminal object. The category $\mathscr{C}/B$ is enriched in $\mathscr{C}$ and the bifunctor $\boxtimes$ makes $\mathscr{C}/B$ tensored over $\mathscr{C}$. Taking pullbacks along a map $f \colon A \arr B$ in $\mathscr{C}$ defines a base change functor $f^* \colon \mathscr{C}/B \arr \mathscr{C}/A$. The functor $f^*$ has a left adjoint $f_{!} \colon \mathscr{C}/A \arr \mathscr{C}/B$ and a right adjoint $f_{*} \colon \mathscr{C}/A \arr \mathscr{C}/B$. By identifying the category $\mathscr{C}$ with the category $\mathscr{C}/\ast$ of objects over a point, the functor $X \longmapsto X_{b}$ is the base change functor $i_{b}^* \colon \mathscr{C}/B \arr \mathscr{C}/\ast$. Since $i_{b}^*$ has both a left and right adjoint, it commutes with limits and colimits. In particular, if $A$ is an object of $\mathscr{C}$, we have a natural isomorphism $(A \boxtimes X)_{b} \cong A \boxtimes X_{b}$. Let $G$ be a monoid under $\boxtimes$ in $\mathscr{C}$. A $G$-module over $B$ is an object $(E, p)$ of $\mathscr{C}/B$ with an associative and unital action $\alpha \colon G \boxtimes E \arr E$ of $G$ that is a map over $B$. Equivalently, $E$ is a $G$-module and $p \colon E \arr B$ is a map of $G$-modules, where $B$ is given the trivial $G$-module structure. A map $E \arr E'$ of $G$-modules over $B$ is a map of $G$-modules in the category $\mathscr{C}/B$. 
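As a basic example, immediate from the definitions though not spelled out here, one can form the trivial $G$-module over $B$: take $(X, p) = (B, \mathrm{id}_{B})$ and apply $G \boxtimes -$, giving
\[
G \boxtimes B \xrightarrow{\ \pi \boxtimes \mathrm{id}_{B}\ } \ast \boxtimes B \cong B
\]
with $G$ acting by multiplication on the first factor. Since the fiber of $(B, \mathrm{id}_{B})$ over any point $b$ is the terminal object $\ast$, the natural isomorphism $(A \boxtimes X)_{b} \cong A \boxtimes X_{b}$ identifies every fiber with $G \boxtimes \ast \cong G$ via the unit isomorphism for $\boxtimes$.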
Each fiber $E_{b}$ inherits the structure of a $G$-module by applying the functor $i_b^*$ to the action map $\alpha$. Under our assumptions, the category $\mathscr{C}$ is tensored over the category $\mathscr{K}$ of unbased spaces. Let $I$ denote the unit interval $[0, 1]$. Given an object $X$ of $\mathscr{C}$, the tensor $X \times I$ defines a cylinder object on $X$. Using these cylinders, we have an intrinsic notion of a homotopy of morphisms in $\mathscr{C}$. The category of $G$-modules has tensors defined in the underlying category $\mathscr{C}$, so we can also define homotopies between maps of $G$-modules. We use the terms homotopy equivalence, $h$-fibration and $h$-cofibration for the usual notions defined in terms of such homotopies. This means that an $h$-fibration is a map satisfying the covering homotopy property (CHP) and an $h$-cofibration is a map satisfying the homotopy extension property (HEP). \begin{definition}\label{structured_CHP} A principal $G$-fibration over $B$ is a $G$-module $(E, p)$ over $B$ for which: \begin{itemize} \item[(i)] the structure map $p \colon E \arr B$ is an $h$-fibration in the category of $G$-modules, and \item[(ii)] every fiber $E_b$ admits a chain of weak equivalences of $G$-modules to $G$. \end{itemize} A map of principal $G$-fibrations over $B$ is a map of $G$-modules over $B$. A weak equivalence of principal $G$-fibrations is a map of principal $G$-fibrations that is a weak equivalence on underlying objects. \end{definition} Our candidate for a classifying space $BG$ for principal $G$-fibrations will be built using the two-sided bar construction based on the symmetric monoidal product $\boxtimes$ on $\mathscr{C}$. Suppose that $X$ and $Y$ are left and right $G$-modules, respectively. The two-sided bar construction $B^{\boxtimes}(X, G, Y)$ is the geometric realization of the simplicial object in $\mathscr{C}$ whose $q$-simplices are the object $B^{\boxtimes}_{q}(X, G, Y) = X \boxtimes G^{\boxtimes q} \boxtimes Y$. 
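In heuristic element notation (objects of $\mathscr{C}$ need not have underlying points, but this records which structure maps are involved), the simplicial structure maps are the familiar ones:
\[
d_i(x; g_1, \dots, g_q; y) =
\begin{cases}
(x g_1; g_2, \dots, g_q; y), & i = 0,\\
(x; g_1, \dots, g_i g_{i+1}, \dots, g_q; y), & 0 < i < q,\\
(x; g_1, \dots, g_{q-1}; g_q y), & i = q,
\end{cases}
\]
and the degeneracy $s_j$ inserts the unit $\ast \arr G$ in the $j$-th slot.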
The face maps are defined by the action of $G$ on $X$, the monoid structure of $G$ and the action of $G$ on $Y$. The degeneracy maps are induced by the unit map $* \arr G$. In particular, we may form: \[ B^{\boxtimes}G = B^{\boxtimes}(*, G, *) \quad \text{and} \quad E^{\boxtimes}G = B^{\boxtimes}(G, G, *). \] Projecting the copy of $G$ on the left to the terminal object defines a natural map $\pi \colon E^{\boxtimes}G \arr B^{\boxtimes}G$ and $E^{\boxtimes}G$ is a left $G$-module. Furthermore, the usual contracting simplicial homotopy shows that $E^{\boxtimes} G$ is contractible. Consider the equivalence relation on the collection of principal $G$-fibrations over $X$ generated by the weak equivalences of principal $G$-fibrations. Let $\mathscr{E}_{G}(X)$ denote the collection of equivalence classes under this equivalence relation. \begin{definition}\label{def:classthm} We say that the classification theorem for principal $G$-fibrations over $X$ holds if there is a natural bijection $\mathscr{E}_{G}(X) \cong \ho \mathscr{C}(X, B^{\boxtimes} G)$ between the set of equivalence classes of principal $G$-fibrations and the set of maps $X \arr B^{\boxtimes} G$ in the homotopy category of $\mathscr{C}$. \end{definition} \noindent We will prove the classification theorem for $\mathcal{I}$-spaces in \S\ref{classification_section_Ispaces}. \section{Homotopical analysis of structured spaces}\label{section:homotopical_analysis} We now introduce the categories of structured spaces that we will work with. Let $\mathcal{I}$ denote the category of finite dimensional real inner product spaces and linear isometries $V \arr W$. An $\mathcal{I}$-space is a continuous functor $X \colon \mathcal{I} \arr \mathscr{K}$ from $\mathcal{I}$ to the category of (unbased) spaces. We write \[ X_{h \mathcal{I}} = \hocolim_{\mathcal{I}} X = \hocolim_{\mathcal{I}^{\dagger}} X \] for the homotopy colimit of $X$. 
Here and throughout the paper, a homotopy colimit written over the category $\mathcal{I}$ is actually taken over a small equivalent subcategory $\mathcal{I}^{\dagger}$ that takes into account the Grassmann topology on the space of linear subspaces of a chosen universe $U$. The details of this construction are described in \S A of \cite{diagram_spaces}. A morphism of $\mathcal{I}$-spaces $f \colon X \arr Y$ is a natural transformation of functors, and we say that $f$ is a $q$-equivalence if the induced map of homotopy colimits $f_{h \mathcal{I}} \colon X_{h \mathcal{I}} \arr Y_{h \mathcal{I}}$ is a weak homotopy equivalence of spaces. The category $\mathcal{I}\mathscr{K}$ of $\mathcal{I}$-spaces is a symmetric monoidal category under the bifunctor $X \boxtimes Y$ defined as the left Kan extension of the external cartesian product \begin{align*} X \overline{\times} Y \colon \mathcal{I} \times \mathcal{I} &\arr \mathscr{K} \\ (V, W) &\longmapsto X(V) \times Y(W) \end{align*} along the direct sum functor $\oplus \colon \mathcal{I} \times \mathcal{I} \arr \mathcal{I}$. We call a monoid under $\boxtimes$ an $\mathcal{I}$-monoid. The category of $\mathcal{I}$-monoids is Quillen equivalent to the category of algebras over the linear isometries operad without symmetric group actions (see \cite{diagram_spaces}*{Theorem 1.3}, but note that $\mathcal{I}$-monoids are called $\mathcal{I}$-FCPs in that paper). Thus an $\mathcal{I}$-monoid is a model for an $A_{\infty}$ space. Similarly, commutative $\mathcal{I}$-monoids model $E_{\infty}$ spaces. Let $X$ and $Y$ be $\mathcal{I}$-spaces, and consider the natural transformation \[ l \colon X_{h\mathcal{I}} \times Y_{h\mathcal{I}} \arr ((X \boxtimes Y) \circ \oplus)_{h \mathcal{I}^2} \overset{\oplus_{*}}{\arr} (X \boxtimes Y)_{h \mathcal{I}} \] induced by the canonical map to the left Kan extension $X \boxtimes Y$ followed by the map of homotopy colimits induced by $\oplus \colon \mathcal{I}^2 \arr \mathcal{I}$.
The map $l$ gives the homotopy colimit functor $(-)_{h \mathcal{I}}$ the structure of a lax monoidal functor. In particular, if $G$ is an $\mathcal{I}$-monoid, then the homotopy colimit $G_{h \mathcal{I}}$ is a topological monoid. We say that $G$ is grouplike if $\pi_0 G = \pi_0 G_{h \mathcal{I}}$ is a group. We record the following lemma for later use. \begin{lemma}\label{interchange_lemma} The induced map of components \[ \pi_0 l \colon \pi_0 X_{h \mathcal{I}} \times \pi_0 Y_{h \mathcal{I}} \arr \pi_0 (X \boxtimes Y)_{h \mathcal{I}} \] is an isomorphism. \end{lemma} \noindent \begin{proof} There is also a natural transformation in the reverse direction \[ d \colon (X \boxtimes Y)_{h \mathcal{I}} \xrightarrow{\pi_1 \times \pi_2} (X \times Y)_{h \mathcal{I}} \overset{\Delta_{*}}{\arr} X_{h \mathcal{I}} \times Y_{h \mathcal{I}} \] given by the product of the projection maps of the form $\pi_1 \colon X \boxtimes Y \arr X \boxtimes \ast \cong X$, followed by the map of homotopy colimits induced by the diagonal functor $\Delta \colon \mathcal{I} \arr \mathcal{I}^2$. A diagram chase shows that $\pi_1 \circ d \circ l$ is homotopic to the projection onto the first factor (compare with the proof of \cite{SS}*{2.27}). Combined with the same fact for the other side, this proves that $d \circ l$ is homotopic to the identity map, and so $\pi_0 l$ is injective. For surjectivity, notice that before taking the homotopy colimit, the first map in the composite defining $l$ is the level-wise quotient map \[ X(V) \times Y(W) \arr X \boxtimes Y(V \oplus W) \] to the enriched coend defining the value of the left Kan extension $X \boxtimes Y$ at $V \oplus W$. In particular, after passing to homotopy colimits the map is surjective. The second map $\oplus_*$ has a section induced by the functor $\sigma \colon \mathcal{I} \arr \mathcal{I}^2$ which takes $V$ to the pair $(V, 0)$ and similarly for morphisms. Hence the composite $l$ is also surjective, and thus surjective on $\pi_0$. 
\end{proof} We now turn to $\mathbb{L}$-spaces, following \citelist{\cite{BCS} \cite{diagram_spaces}}. Fix a universe $U$, by which we mean an infinite dimensional real inner product space. Let $\mathbb{L}$ denote the functor on spaces defined by $X \longmapsto \mathscr{L}(1) \times X$, where $\mathscr{L}(1) = \mathcal{I}_{c}(U, U)$ is the space of linear isometries from $U$ to $U$. Composition of linear isometries gives $\mathbb{L}$ the structure of a monad, and we define the category $\mathbb{L} \mathscr{K}$ of $\mathbb{L}$-spaces to be the category of algebras for the monad $\mathbb{L}$. A map of $\mathbb{L}$-spaces is a $q$-equivalence if it is a weak homotopy equivalence of underlying spaces. There is a symmetric monoidal structure $\boxtimes_{\mathscr{L}}$ on the category of $\mathbb{L}$-spaces defined in analogy with the EKMM smash product $\sma_{\mathscr{L}}$ of $\mathscr{L}$-spectra. The one point space $\ast$ is an $\mathbb{L}$-space under the trivial action, and for any $\mathbb{L}$-space $X$ there is a natural unit map $\lambda \colon \ast \boxtimes_{\mathscr{L}} X \arr X$. A $\ast$-module is an $\mathbb{L}$-space for which the unit map $\lambda$ is an isomorphism. The full subcategory of $\mathbb{L}$-spaces consisting of the $\ast$-modules is a symmetric monoidal category under $\boxtimes_{\mathscr{L}}$ with unit the one point space $\ast$. The category of $\boxtimes_{\mathscr{L}}$-monoids (resp.\ commutative $\boxtimes_{\mathscr{L}}$-monoids) is equivalent to the category of $A_{\infty}$ (resp.\ $E_{\infty}$) spaces structured by the linear isometries operad, without (resp.\ with) the symmetric group actions taken into account. Both the category of $\mathcal{I}$-spaces and the category of $\ast$-modules satisfy the assumptions placed on the category $\mathscr{C}$ in \S\ref{principal_fibrations_section}. Furthermore, they are both well-grounded compactly generated topological monoidal model categories \citelist{\cite{diagram_spaces}*{3.4} \cite{BCS}*{\S 4.6}}.
In either case, the weak equivalences are the $q$-equivalences and we use the terms $q$-cofibration and $q$-fibration for the cofibrations and fibrations. For both $\mathcal{I}$-spaces and $\mathbb{L}$-spaces, there are induced model structures on $\boxtimes$-monoids and modules over a fixed $\boxtimes$-monoid where the weak equivalences and fibrations are created in the underlying category. We will ultimately be interested in $\mathcal{I}$-spaces because they are the natural home for the infinite loop space information of orthogonal spectra, and our model for parametrized spectra is based on orthogonal spectra. If instead we used symmetric spectra, then we would employ the category of $\mathbb{I}$-spaces, where $\mathbb{I}$ is the category of finite sets and injections. When $\mathscr{C}$ is the category of $*$-modules, we will write $B^{\mathscr{L}}(X, G, Y)$ for the bar construction $B^{\boxtimes}(X, G, Y)$ as a reminder that we are working with the linear isometries operad. When $\mathscr{C}$ is the category of $\mathcal{I}$-spaces, we will write $B^{\mathcal{I}}(X, G, Y)$ for the bar construction $B^{\boxtimes}(X, G, Y)$ as a reminder that we are working diagrammatically with diagram category $\mathcal{I}$. We say that a monoid $G$ under $\boxtimes$ has a non-degenerate basepoint if the unit map $* \arr G$ is an $h$-cofibration. \begin{lemma}\label{Ispace_bar_constructions_are_cofibrant} Let $(\mathscr{C}, \boxtimes, \ast)$ denote either the category of $\mathcal{I}$-spaces or the category of $\ast$-modules, and let $G$ be a $q$-cofibrant $\boxtimes$-monoid. \begin{itemize} \item[(i)] $G$ is $q$-cofibrant as an object of $\mathscr{C}$ and has a non-degenerate basepoint. \item[(ii)] If $X$ is a $q$-cofibrant right $G$-module and $Y$ is a $q$-cofibrant left $G$-module, then the two-sided bar construction $B^{\boxtimes}(X, G, Y)$ is $q$-cofibrant. \item[(iii)] $B^{\boxtimes}(-, G, -)$ preserves $q$-equivalences in either entry. 
\end{itemize} \end{lemma} \begin{proof}We may assume that $G$ is a cellular $\boxtimes$-monoid constructed out of cells of the form $\coprod_{n \geq 0} i^{\boxtimes n}$, where $i$ is a generating $q$-cofibration for the underlying model structure on $\mathscr{C}$. A cellular $\boxtimes$-monoid is cellular as an object of $\mathscr{C}$; the proof in \cite{BCS}*{4.25} for $\ast$-modules works just as well for $\mathcal{I}$-spaces. That same proof exhibits the unit map $\ast \arr G$ as a colimit of cellular inclusions, each of which is an $h$-cofibration since the generating $q$-cofibrations are $h$-cofibrations \citelist{\cite{BCS}*{\S4.6}\cite{diagram_spaces}*{\S15}}. This proves that $G$ has a non-degenerate basepoint. From here, the proofs of (ii) and (iii) are standard: see \cite{EKMM}*{X.2.7.(i) and X.2.4.(ii)} or \cite{shulman_hocolim}*{23.10} for a general statement that includes the present situation. \end{proof} We say that a $\boxtimes$-monoid $G$ is grouplike if the monoid $\pi_0 G$ is a group and that a map $p \colon E \arr B$ of $\ast$-modules is a quasifibration if the underlying map of spaces $\mathbb{U} p$ is a quasifibration. \begin{lemma}\label{*module_quasifibration_lemma} Let $G$ be a grouplike $q$-cofibrant $\boxtimes_{\mathscr{L}}$-monoid in $\mathscr{M}_{*}$, let $X$ be a $q$-cofibrant right $G$-module and let $Y$ be a $q$-cofibrant left $G$-module. Then the projection maps \[ \pi \colon B^{\mathscr{L}}(X, G, Y) \arr B^{\mathscr{L}}(X, G, *) \qquad \text{and} \qquad \pi \colon B^{\mathscr{L}}(X, G, Y) \arr B^{\mathscr{L}}(*, G, Y) \] are quasifibrations. \end{lemma} \noindent The proof of this result for $X = \ast$ and $Y = G$ given in \cite{ABGHR1}*{3.8} works in the same way here.
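The case needed below is $X = G$ and $Y = \ast$, where the second projection specializes to
\[
\pi \colon E^{\mathscr{L}}G = B^{\mathscr{L}}(G, G, \ast) \arr B^{\mathscr{L}}(\ast, G, \ast) = B^{\mathscr{L}}G.
\]
Since $E^{\mathscr{L}}G$ is contractible and the fiber over each point is a copy of $G$, one expects $\pi_{i+1} B^{\mathscr{L}}G \cong \pi_{i} G$ for $i \geq 0$, just as for the classifying space of a grouplike topological monoid.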
\begin{comment} \begin{proof} The space $\mathbb{V} G$ is a grouplike topological monoid with non-degenerate basepoint, so the projection \[ \pi \colon B (\mathbb{V} X, \mathbb{V} G, \mathbb{V} Y) \arr B(\mathbb{V} X, \mathbb{V} G, \ast) \] is a quasifibration of spaces \cite{class_and_fib}*{7.6}. Now consider the following commutative diagram: \[ \xymatrix{ \mathbb{U} B^{\mathscr{L}}(X, G, Y) \ar[d]_{\mathbb{U} \pi} \ar[r]^-{\psi} & B (\mathbb{V} X, \mathbb{V} G, \mathbb{V} Y) \ar[d]^{\pi} \\ \mathbb{U} B^{\mathscr{L}}(X, G, \ast) \ar[r]^-{\psi} & B (\mathbb{V} X, \mathbb{V} G, \ast) } \] The bar constructions $B^{\mathscr{L}}(X, G, Y)$ and $B^{\mathscr{L}}(X, G, \ast)$ are $q$-cofibrant by Lemma \ref{Ispace_bar_constructions_are_cofibrant}, and it follows that both instances of $\psi$ are weak homotopy equivalences. Pick a point $b \in \mathbb{U} B^{\mathscr{L}}(X, G, \ast)$ and let $\psi(b)$ be its image in $B (\ast, \mathbb{V} G, Y)$. Denote the homotopy fibers of $U \pi$ and $\pi$ over $b$ and $\psi(b)$ by $F_{b}(\mathbb{U} \pi)$ and $F_{\psi(b)}(\pi)$, respectively. Since $\pi$ is a quasifibration, the inclusion of the fiber $\pi^{-1}(\psi(b)) \arr F_{\psi(b)}(\pi)$ is a weak equivalence. Now consider the following commutative diagram relating fibers and homotopy fibers: \[ \xymatrix{ \mathbb{U} \pi^{-1}(b) \ar[r] \ar[d] & \pi^{-1}(\psi(b)) \ar[d]^{\simeq} \\ F_{b}(\mathbb{U} \pi) \ar[r] & F_{\psi(b)}(\pi) } \] The map of homotopy fibers is a weak homotopy equivalence by the five lemma. The top horizontal map of fibers may be identified with $\psi \colon \mathbb{U} Y \arr \mathbb{V} Y$, which is a weak equivalence because $Y$ is $q$-cofibrant. Therefore the left vertical map is a weak homotopy equivalence, which proves that $\mathbb{U} \pi$ is a quasifibration. The other projection map is proved to be a quasifibration using the same argument. 
\end{proof} \end{comment} Before proving the analogue of this result for $\mathcal{I}$-spaces, let us recall some material in \cite{diagram_spaces}*{\S8--\S9}. The left adjoint in the Quillen equivalence between $\mathcal{I}$-spaces and $*$-modules is the functor $\mathbb{Q}_* \colon \mathcal{I} \mathscr{K} \arr \mathscr{M}_{*}$ defined by: \[ X \longmapsto * \boxtimes_{\mathscr{L}} (\mathcal{I}( - \otimes U, U) \odot_{\mathcal{I}} X). \] Here $\mathcal{I}( - \otimes U, U) \odot_{\mathcal{I}} X$ denotes the tensor product of the functor $X \colon \mathcal{I} \arr \mathscr{K}$ and the represented functor $\mathcal{I}(- \otimes U, U) \colon \mathcal{I}^{\op} \arr \mathbb{L}\mathscr{K}$, and can be computed using an enriched coend \cite{diagram_spaces}*{A.4}. The functor $\ast \boxtimes_{\mathscr{L}} (-)$ takes $\mathbb{L}$-spaces to $\ast$-modules, and is a left adjoint. The composite functor $\mathbb{Q}_*$ is symmetric monoidal and a left adjoint, so we have natural isomorphisms \[ \mathbb{Q}_* B^{\boxtimes_{\mathcal{I}}}(X, G, Y) \cong B^{\boxtimes_{\mathscr{L}}}(\mathbb{Q}_* X, \mathbb{Q}_* G, \mathbb{Q}_* Y). \] In particular, $\mathbb{Q}_* E^{\mathcal{I}} G \cong E^{\mathscr{L}} \mathbb{Q}_* G$ and $\mathbb{Q}_* B^{\mathcal{I}} G \cong B^{\mathscr{L}} \mathbb{Q}_* G$. We record the following consistency result, which is a consequence of the weak equivalence $\psi \colon \mathbb{U} \arr \mathbb{V}$ on cofibrant objects (see \cite{SS_groupcompletion} for the analogous result in $\mathbb{I}$-spaces). \begin{lemma}\label{lemma:bar_construction_consistency} If $G$ is a $q$-cofibrant $\mathcal{I}$-monoid, there is a natural weak homotopy equivalence of spaces $\mathbb{U} \mathbb{Q}_* B^{\mathcal{I}} G \simeq B(\mathbb{U} \mathbb{Q}_{*} G)$. Therefore the $\mathcal{I}$-space bar construction $B^{\mathcal{I}}$ and the usual classifying space functor $B$ induce the same derived functor on the homotopy category of $A_{\infty}$ spaces.
\end{lemma} Let $\mathcal{J}$ denote the subcategory of $\mathcal{I}$ consisting of subspaces $V \subset U$ with morphisms the inclusions $V \arr W$. There is a natural weak homotopy equivalence of spaces \cite{diagram_spaces}*{9.4} \[ \hocolim_{\mathcal{J}} X \simeq \hocolim_{\mathcal{I}} X, \] so $q$-equivalences of $\mathcal{I}$-spaces are detected by taking the homotopy colimit over $\mathcal{J}$. Both $\hocolim_{\mathcal{J}} X$ and $\colim_{\mathcal{J}} X$ are $\mathbb{L}$-spaces, as described in \cite{diagram_spaces}*{A.12, 9.7}. We now compare these $\mathbb{L}$-spaces with $\mathbb{Q}_* X$. \begin{lemma}\label{nat_trans_equiv_for_cofibrants} Let $X$ be a $q$-cofibrant $\mathcal{I}$-space. Then there is a natural chain of $q$-equivalences of $\mathbb{L}$-spaces: \[ \underset{\mathcal{J}}{\hocolim} \, X \arr \underset{\mathcal{J}}{\colim} \, X \longleftarrow \mathbb{Q} X \longleftarrow \mathbb{Q}_* X. \] \end{lemma} \begin{proof} The natural projection from the homotopy colimit to the colimit is a weak homotopy equivalence for cofibrant $\mathcal{I}$-spaces \cite{diagram_spaces}*{9.2}. The second natural transformation depends on a choice of one-dimensional subspace of the universe $U$, and is also a weak homotopy equivalence \cite{diagram_spaces}*{9.7}. The third natural transformation is the weak homotopy equivalence given by the unit map $\lambda \colon \ast \boxtimes_{\mathscr{L}} \mathbb{Q} X \arr \mathbb{Q} X$. \end{proof} Next we define quasifibrations of $\mathcal{I}$-spaces and relate them to quasifibrations of spaces. To do this we will use relative mapping path-spaces. Given a map of spaces $f \colon X \arr Y$ and a subspace $A \subset Y$, let $P(f ; A) = X \times_{Y} Y^{I} \times_{Y} A$ be the space of pairs $(\gamma, x)$, where $\gamma$ is a path in $Y$ and $x \in X$, such that $\gamma(1) \in A$ and $\gamma(0) = f(x)$. Notice that $P(f ; \{b\})$ is the homotopy fiber $F_{b}(f)$ of $f$ above $b \in Y$.
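Taking $A = Y$ recovers the usual mapping path space, giving the standard factorization of $f$ as a homotopy equivalence followed by an $h$-fibration:
\[
X \arr P(f ; Y) \arr Y, \qquad x \longmapsto (\mathrm{const}_{f(x)}, x), \qquad (\gamma, x) \longmapsto \gamma(1).
\]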
Given a map $\pi \colon E \arr B$ of $\mathcal{I}$-spaces, let $P\pi = P(\pi ; B)$ be the level-wise mapping path space. In other words, the $\mathcal{I}$-space $P\pi$ is given by $P\pi(V) = P(\pi(V) ; B(V))$. The projection $P \pi \arr B$ is an $h$-fibration of $\mathcal{I}$-spaces, and in particular a level-wise $h$-fibration. We define the homotopy fiber $F_{b}(\pi)$ of $\pi \colon E \arr B$ over a point $i_b \colon * \arr B$ to be the $\mathcal{I}$-space given by taking the level-wise fiber of $P \pi \arr B$ over $b$. We say that $\pi \colon E \arr B$ is a quasifibration of $\mathcal{I}$-spaces if for every point $b$ of $B$, the inclusion of the fiber into the homotopy fiber $E_{b} \arr F_{b}(\pi)$ is a $q$-equivalence of $\mathcal{I}$-spaces. Notice that a level-wise quasifibration of $\mathcal{I}$-spaces is a quasifibration of $\mathcal{I}$-spaces. Unlike the case of topological spaces, an $h$-fibration of $\mathcal{I}$-spaces need not be a $q$-fibration of $\mathcal{I}$-spaces. However, an $h$-fibration of $\mathcal{I}$-spaces is a quasifibration of $\mathcal{I}$-spaces. \begin{lemma}\label{transfer_quasifib_lemma} Let $\pi \colon E \arr B$ be a map of $\mathcal{I}$-spaces. For every point $i_b \colon * \arr B$ in the base, the homotopy fiber $F_{b}(\pi)$ of $\pi$ fits into a long exact sequence of homotopy groups \[ \dotsm \arr \pi_i F_{b}(\pi) \arr \pi_i E \arr \pi_i B \arr \pi_{i - 1}F_{b}(\pi) \arr \dotsm \] that ends with: \[ \dotsm \arr \pi_0 F_{b}(\pi) \arr \pi_0 E \arr \pi_0 B. \] Assume further that $E$, $B$ and every fiber $E_{b}$ are $q$-cofibrant $\mathcal{I}$-spaces. If the map $\mathbb{Q}_* \pi \colon \mathbb{Q}_* E \arr \mathbb{Q}_* B$ is a quasifibration of spaces, then the map $\pi$ is a quasifibration of $\mathcal{I}$-spaces. \end{lemma} \begin{proof} We will prove the second claim first.
Taking homotopy colimits, we have the following commutative diagram: \begin{equation*} \xymatrix{ & \underset{\mathcal{J}}{\hocolim} \, E_{b} \ar[dl] \ar[dr] \ar[rr]^{\simeq} & & \colim_{\mathcal{J}} E_{b} \ar[d] & \mathbb{Q}_* E_{b} \ar[d] \ar[l]_-{\simeq} \\ \underset{\mathcal{J}}{\hocolim} \, F_{b}(\pi) \ar[d]_{\pi_1} \ar[r] & \underset{\mathcal{J}}{\hocolim} \, P\pi \ar[d]_{\pi_2} & \underset{\mathcal{J}}{\hocolim} \, E \ar[l]_-{\simeq} \ar[d]_{\pi_3} \ar[r]^-{\simeq} & \colim_{\mathcal{J}} E \ar[d]_{\pi_4} & \mathbb{Q}_* E \ar[d]^{\mathbb{Q}_* \pi} \ar[l] _-{\simeq} \\ B \mathcal{J} \ar[r]_-{i_b} & \underset{\mathcal{J}}{\hocolim} \, B \ar@{=}[r] & \underset{\mathcal{J}}{\hocolim} \, B \ar[r]^-{\simeq} & \colim_{\mathcal{J}} B & \mathbb{Q}_* B \ar[l]_-{\simeq} } \end{equation*} The horizontal maps in the right three columns are components of the chain of natural transformations in Lemma \ref{nat_trans_equiv_for_cofibrants}. They are $q$-equivalences by the cofibrancy assumptions. The other displayed equivalence is induced by the level-wise homotopy equivalence $E \arr P \pi$. If $B \mathcal{J}$ were a point, we could work directly with the homotopy fibers of $\pi_2$ and $\pi_3$. However, $B \mathcal{J}$ is only contractible, so we must work with the relative mapping path-spaces $P(\pi_i ; B \mathcal{J})$. For $\pi_2$ and $\pi_3$, we identify $B \mathcal{J}$ with its image in $\hocolim_{\mathcal{J}} B$ under the map of homotopy colimits induced by the inclusion $i_b \colon * \arr B$. Since the functor $\mathbb{Q}_*$ is strong symmetric monoidal, the map $\mathbb{Q}_* i_b \colon \mathbb{Q}_*(*) \arr \mathbb{Q}_* B$ is the inclusion of a point, whose image we denote by $b \in \mathbb{Q}_*B$. The relative mapping path spaces of $\mathbb{Q}_* \pi$ and $\pi_4$ over $b$ and its image $b \in \colim_{\mathcal{J}} B$ are the homotopy fibers $F_b(\mathbb{Q}_* \pi)$ and $F_b(\pi_4)$. 
This notation allows us to make the identifications of fibers $(\mathbb{Q}_*E)_b = \mathbb{Q}_*E_b$ and $(\colim_{\mathcal{J}} E)_b = \colim_{\mathcal{J}}E_b$. We now have a commutative diagram of fibers and relative mapping path spaces: \addtocounter{theorem}{1} \begin{equation}\label{fiber_diagramII} \xymatrix{ \underset{\mathcal{J}}{\hocolim} \, F_{b}(\pi) \ar[d]_{\simeq} & \underset{\mathcal{J}}{\hocolim} \, E_{b} \ar[l] \ar[rr]^-{\simeq} \ar[dl] \ar[dr] & & \colim_{\mathcal{J}} E_b \ar[d] & \mathbb{Q}_* E_{b} \ar[d] \ar[l]_-{\simeq} \\ P(\pi_1; B \mathcal{J}) \ar[r]^{\simeq} & P(\pi_2 ; B \mathcal{J}) & P(\pi_3 ; B \mathcal{J}) \ar[l]_{\simeq} \ar[r]^-{\simeq} & F_{b}(\pi_4) & F_{b} (\mathbb{Q}_* \pi) \ar[l]_{\simeq} } \end{equation} The two right vertical maps are the canonical inclusions of the fiber into the homotopy fiber. The maps along the bottom row arise by the functoriality of the relative mapping path-space construction, and the right three are weak homotopy equivalences because they are induced by weak homotopy equivalences of total and base spaces. The left vertical map $\hocolim_{\mathcal{J}} F_{b}(\pi) \arr P(\pi_1 ; B\mathcal{J})$ is a weak homotopy equivalence because $B \mathcal{J}$ is contractible. To prove that the map $P(\pi_1 ; B \mathcal{J}) \arr P(\pi_2 ; B \mathcal{J})$ is a weak homotopy equivalence, we compare it to the map of the homotopy fibers of $\pi_1$ and $\pi_2$: \[ \xymatrix{ P(\pi_1 ; *) \ar[r] \ar[d] & P(\pi_2 ; *) \ar[d] \\ P(\pi_1 ; B \mathcal{J}) \ar[r] & P(\pi_2 ; B \mathcal{J}) } \] Here $* \in B \mathcal{J}$ is any choice of basepoint and the vertical maps are induced by its inclusion into $B \mathcal{J}$. The map $P(\pi_1; *) \arr P(\pi_2; *)$ of homotopy fibers is a weak homotopy equivalence by \cite{diagram_spaces}*{15.7.(i)}. Since $B \mathcal{J}$ is contractible, the vertical maps are also weak homotopy equivalences.
Thus the map $P(\pi_1 ; B \mathcal{J}) \arr P(\pi_2 ; B \mathcal{J})$ in diagram \eqref{fiber_diagramII} is a weak homotopy equivalence. We now complete the proof of the second claim in the lemma. If $\mathbb{Q}_* \pi$ is a quasifibration of spaces, then the right vertical map $\mathbb{Q}_* (E_{b}) \arr F_{b}(\mathbb{Q}_* \pi)$ in diagram \eqref{fiber_diagramII} is a weak equivalence. It then follows from the diagram that $E_b \arr F_{b}(\pi)$ is a $q$-equivalence and so $\pi$ is a quasifibration of $\mathcal{I}$-spaces. For the first claim, notice that diagram \eqref{fiber_diagramII} contains a natural chain of weak homotopy equivalences (that do not depend on the cofibrancy assumptions) \[ \hocolim_{\mathcal{J}} F_{b}(\pi) \simeq P(\pi_3; B \mathcal{J}). \] Since $B \mathcal{J}$ is contractible, this implies that $\hocolim_{\mathcal{J}} F_{b}(\pi)$ is weak homotopy equivalent to the homotopy fiber of $\pi_3 \colon \hocolim_{\mathcal{J}} E \arr \hocolim_{\mathcal{J}} B$, from which we deduce the long exact sequence of homotopy groups. \end{proof} Before proving that the projection $E^{\mathcal{I}} G \arr B^{\mathcal{I}} G$ is a quasifibration, we need to be able to identify its fiber. \begin{lemma}\label{identification_of_fiber_lemma} The fiber $F$ of the projection $\pi_X \colon X \boxtimes_{\mathcal{I}} Y \arr X$ over a point $i_{x} \colon * \arr X$ is naturally isomorphic to $Y$. 
\end{lemma} \begin{proof} Define a map $f \colon X \boxtimes_{\mathcal{I}} Y \arr F$ using the universal mapping property of the pullback: \[ \xymatrix{ X \boxtimes_{\mathcal{I}} Y \ar@/_2pc/[ddr] \ar[dr]_{f} \ar[rr]^{\pi_Y} & & Y \ar[dr]^{i_{x} \boxtimes \id} & \\ & F \ar[d] \ar[rr]^{i_{F}} & & X \boxtimes_{\mathcal{I}} Y \ar[d]^{\pi_X} \\ & \ast \ar[rr]_{i_x} & & X } \] A diagram chase shows that the composite \[ Y \xrightarrow{i_{x} \boxtimes \id} X \boxtimes_{\mathcal{I}} Y \overset{f}{\arr} F \overset{i_{F}}{\arr} X \boxtimes_{\mathcal{I}} Y \overset{\pi_Y}{\arr} Y \] is the identity map of $Y$. Thus $Y$ is a retract of $F$, and it suffices to show that $\pi_Y \circ i_{F}$ is a monomorphism. We will use the coequalizer description of $X \boxtimes Y$ from \cite{diagram_spaces}*{18.6} to show that this is true after evaluation at $\mathbf{R}^m$. The map $\pi_Y$ is induced by the map of coequalizer diagrams, collapsing the entries $X(-)$ to a point, whose domain is \[ \xymatrix{ \underset{a + b + c = m}{\displaystyle\coprod} O(m) \times_{O(a) \times O(b) \times O(c)} X(\mathbf{R}^a) \times Y(\mathbf{R}^c) \ar@<.5ex>[d] \ar@<1.5ex>[d] \\ \underset{a + b = m}{\displaystyle\coprod} O(m) \times_{O(a) \times O(b)} X(\mathbf{R}^a) \times Y(\mathbf{R}^b) \ar[d] \\ (X \boxtimes_{\mathcal{I}} Y)(\mathbf{R}^m) } \] Let us write $x_{n} \in X(\mathbf{R}^n)$ for the image of $i_{x} \colon * \arr X$ at level $\mathbf{R}^n$. Let $(\phi, x_{a}, y)$ and $(\phi', x_{a'}, y')$ be two points of the fiber $F$ coming from the $(a, b)$ and $(a', b')$ summands, respectively, and suppose that they have the same image under $\pi_Y$. Then $(\phi \mspace{-6mu}\mid_{\mathbf{R}^b})(y) = (\phi' \mspace{-6mu}\mid_{\mathbf{R}^{b'}})(y')$ and it follows that $(\phi, x_{a}, y)$ and $(\phi', x_{a'}, y')$ are identified under the coequalizer. Therefore $(\pi_Y \circ i_{F})(\mathbf{R}^m)$ is a monomorphism for every $m$, and hence $\pi_Y \circ i_{F}$ is a monomorphism.
\end{proof} \begin{lemma}\label{fiber_of_bar_construction_lemma} Suppose that $G$ is an $\mathcal{I}$-monoid, $X$ is a right $G$-module and $Y$ is a left $G$-module. Consider the projection map of bar constructions \[ \pi \colon B^{\mathcal{I}}(X, G, Y) \arr B^{\mathcal{I}}(X, G, \ast) \] induced by the unique map $Y \arr \ast$. The fiber of $\pi$ over any point $* \arr B^{\mathcal{I}}(X, G, \ast)$ is naturally isomorphic to $Y$. \end{lemma} \begin{proof} A point in the realization has a unique representative $\ast \arr \Delta^{q} \times B^{\mathcal{I}}_{q}(X, G, \ast)$ in some simplicial degree $q$ that is not in the image of the simplicial degeneracy maps. By the previous lemma, the fiber over a point $* \arr B^{\mathcal{I}}_{q}(X, G, \ast)$ can be canonically identified with $Y$. The claim follows. \end{proof} \begin{proposition}\label{pi_is_quasifib_prop} Let $G$ be a grouplike $q$-cofibrant $\mathcal{I}$-monoid. Let $X$ be a right $G$-module and let $Y$ be a left $G$-module. Then the projection maps \[ \pi \colon B^{\mathcal{I}}(X, G, Y) \arr B^{\mathcal{I}}(X, G, *) \qquad \text{and} \qquad \pi \colon B^{\mathcal{I}}(X, G, Y) \arr B^{\mathcal{I}}(*, G, Y) \] are quasifibrations of $\mathcal{I}$-spaces. \end{proposition} \begin{proof} We prove the claim for the first map; the proof for the other is the same. Let us first prove the claim under the additional assumption that $X$ and $Y$ are $q$-cofibrant $G$-modules. Let $b \colon * \arr B^{\mathcal{I}}(X, G, *)$ be a point in the base. By Lemma \ref{fiber_of_bar_construction_lemma}, the fiber of $\pi$ over $b$ is isomorphic to $Y$, and in particular is $q$-cofibrant. The $\mathcal{I}$-spaces $B^{\mathcal{I}}(X, G, Y)$ and $B^{\mathcal{I}}(X, G, *)$ are also $q$-cofibrant by Lemma \ref{Ispace_bar_constructions_are_cofibrant}.(ii). The cofibrancy hypotheses in Lemma \ref{transfer_quasifib_lemma} are now satisfied, so it remains to show that $\mathbb{Q}_* \pi$ is a quasifibration of spaces.
Consider the following commutative diagram: \[ \xymatrix{ \mathbb{Q}_* B^{\mathcal{I}}(X, G, Y) \ar[rr]^-{\mathbb{Q}_* \pi} \ar[d]^{\cong} & & \mathbb{Q}_* B^{\mathcal{I}}(X, G, \ast) \ar[d]^{\cong} \\ B^{\mathscr{L}}(\mathbb{Q}_* X, \mathbb{Q}_* G, \mathbb{Q}_* Y) \ar^{\pi}[rr] & & B^{\mathscr{L}}(\mathbb{Q}_* X, \mathbb{Q}_* G, \ast) } \] The vertical maps are the canonical isomorphisms interchanging the functor $\mathbb{Q}_*$ and the bar construction. Since the functor $\mathbb{Q}_*$ is left Quillen, $\mathbb{Q}_* G$ is a grouplike $q$-cofibrant monoid in $*$-modules. Thus Lemma \ref{*module_quasifibration_lemma} applies, proving that the bottom horizontal arrow is a quasifibration of spaces. To deduce the general case, notice that cofibrant approximations of $X$ and $Y$ induce $q$-equivalences of the base and total objects by Lemma \ref{Ispace_bar_constructions_are_cofibrant}.(iii), as well as $q$-equivalences of fibers and homotopy fibers. \end{proof} \section{The classification of principal fibrations}\label{classification_section_Ispaces} Let $G$ be a grouplike $q$-cofibrant $\mathcal{I}$-monoid. This section is devoted to the proof of the classification theorem for principal $G$-fibrations. We will only work with $\mathcal{I}$-spaces, so let us drop the superscript $\mathcal{I}$ from the notation for bar constructions. We follow May's method of proof for the classification theorem of fibrations of topological spaces \cite{class_and_fib}. There is some subtlety in our more general context because not all $\mathcal{I}$-spaces are fibrant. Let $[X, Y]$ denote the set of homotopy equivalence classes of maps of $\mathcal{I}$-spaces $X \arr Y$, defined in terms of the tensor $X \times I$. When $X$ is $q$-cofibrant, $X \times I$ is a cylinder object in the $q$-model structure on $\mathcal{I}$-spaces. 
Thus for $X$ $q$-cofibrant and $Y$ $q$-fibrant, there is a natural isomorphism $[X, Y] \cong \ho \mathcal{I} \mathscr{K}(X, Y)$ between homotopy classes of maps and maps in the homotopy category of $\mathcal{I}$-spaces. The classification theorem for principal $G$-fibrations, in the sense of Definition \ref{def:classthm}, is a consequence of the following theorem. \begin{theorem}\label{classification_theorem_for_diagram_fibrations} Let $X$ be a $q$-cofibrant $\mathcal{I}$-space and let $B'G$ be a $q$-fibrant approximation of $B^{\mathcal{I}}G = BG$. There is a natural bijection $\mathscr{E}_{G}(X) \cong [X, B'G]$ between $q$-equivalence classes of principal $G$-fibrations over $X$ and homotopy classes of maps $X \arr B'G$. \end{theorem} \noindent A CW complex $X$ determines a constant $\mathcal{I}$-space $\Delta X(V) = X$, which is $q$-cofibrant by the description of the generating $q$-cofibrations in \cite{diagram_spaces}*{\S15}. By the consistency result in Lemma \ref{lemma:bar_construction_consistency}, the equivalence of homotopy categories $\ho \mathcal{I} \mathscr{K} \simeq \ho \mathscr{K}$ gives a natural bijection \[ [X, B \mathbb{Q}_*G] \cong \ho \mathscr{K} (X, B \mathbb{Q}_*G) \cong \ho \mathcal{I} \mathscr{K} (\Delta X, B^{\boxtimes} G). \] We thus deduce Theorem \ref{generic_Gbundle_classification} from Theorem \ref{classification_theorem_for_diagram_fibrations}. The proof of Theorem \ref{classification_theorem_for_diagram_fibrations} takes up the rest of this section. We will need the following version of Whitehead's theorem. The proof is standard model category theory. \begin{lemma}\label{getting_homotopy_inverses} Suppose that $X$ is a $q$-cofibrant $\mathcal{I}$-space. \begin{itemize} \item[(i)] A $q$-equivalence $f \colon A \arr B$ of $q$-fibrant $\mathcal{I}$-spaces induces an isomorphism \[ f_* \colon [X, A] \arr [X, B].
\] \item[(ii)] If $p \colon B \arr X$ is a $q$-equivalence of $q$-fibrant $\mathcal{I}$-spaces, then there exists a map $s \colon X \arr B$ such that $p \circ s$ is homotopic to $\id_{X}$. Up to homotopy, $s$ is unique among maps with this property. Furthermore, the construction of $s$ is natural up to homotopy. \end{itemize} \end{lemma} It is straightforward to verify that principal $G$-fibrations behave as expected under pullback: \begin{lemma}\label{homotopic_maps_equiv_fibrations_prop} Let $p \colon E \arr B$ be a principal $G$-fibration and suppose that $f \colon A \arr B$ is a map of $\mathcal{I}$-spaces. Then the pullback $(f^* E, f^* p)$ of $(E, p)$ along $f$ is a principal $G$-fibration. Furthermore, if $f, g \colon A \arr B$ are homotopic maps of $\mathcal{I}$-spaces, then there is a fiberwise homotopy equivalence of principal $G$-fibrations $f^* E \simeq g^* E$ over $A$. \end{lemma} We will let $\Gamma p \colon \Gamma E \arr B$ denote the approximation functor of \cite{class_and_fib}*{Def. 3.2} applied level-wise to a map of $\mathcal{I}$-spaces $p \colon E \arr B$. The cited construction is a version of the mapping path space $P(p, B)$ considered in \S\ref{section:homotopical_analysis}, but using Moore paths so that $\Gamma$ is a monad. This extra structure is not necessary for us and we could just as well use $P$. The purpose of $\Gamma$ is to replace quasifibrations with $h$-fibrations that still have the correct fiber homotopy type. \begin{comment} Given an $\mathcal{I}$-space $B$, let $B^{[0, \infty]}$ be the cotensor of the $\mathcal{I}$-space $B$ with the space $[0, \infty]$, and let $\_{\Pi} B = B^{[0, \infty]} \times [0, \infty)$ be the tensor with $[0, \infty)$. We think of $\_{\Pi} B$ as the space of all paths in $B$ with a specified time for endpoint evaluation.
The inclusion $[0, \infty) \arr [0, \infty]$ and the counit of the tensor/cotensor adjunction define the endpoint evaluation map \[ e \colon \_{\Pi} B \arr B^{[0, \infty]} \times [0, \infty] \overset{\epsilon}{\arr} B. \] To restrict to paths that are constant after a compact amount of time, let $+$ be the shift of endpoint map \begin{align*} + \colon \_{\Pi}B \times(0, \infty) &\arr \_{\Pi}B \\ (\gamma, s, \epsilon) &\longmapsto (\gamma, s + \epsilon), \end{align*} and define $\Pi B$ to be the equalizer of the following two composites: \[ \Pi B = \mathrm{equalizer} \begin{cases} \hspace{-1pc} &\_{\Pi} B \overset{\eta}{\arr} (\_{\Pi} B \times (0, \infty))^{(0, \infty)} \xrightarrow{+^{(0, \infty)}} \_{\Pi} B^{(0, \infty)} \xrightarrow{e^{(0, \infty)}} B^{(0, \infty)} \\ \hspace{-1pc} &\_{\Pi} B \overset{e}{\arr} B \xrightarrow{\mathrm{const}} B^{(0, \infty)} \end{cases} \] The $\mathcal{I}$-space $\Pi B$ is the appropriate version of Moore paths on an $\mathcal{I}$-space. Given an $\mathcal{I}$-space $(E, p)$ over $B$, define the $\mathcal{I}$-space $\Gamma E$ as the pullback of $p \colon E \arr B$ along the evaluation at $0$ map $e_{0} \colon \Pi B \arr B$. Let $\Gamma p$ be the induced endpoint evaluation map: \[ \Gamma p \colon \Gamma E \arr \Pi B \overset{e}{\arr} B. \] Then $(\Gamma E, \Gamma p)$ is an $\mathcal{I}$-space over $B$, and $\Gamma p$ is an $h$-fibration of $\mathcal{I}$-spaces. We record the basic properties of $\Gamma$. \end{comment} $\Gamma$ defines a functor $\Gamma \colon \mathcal{I}\mathscr{K}/B \arr \mathcal{I}\mathscr{K}/B$ and for any $p \colon E \arr B$, the map $\Gamma p \colon \Gamma E \arr B$ is an $h$-fibration of $\mathcal{I}$-spaces. The construction $\Gamma$ restricts to a functor from $G$-modules over $B$ to $G$-modules over $B$ and has the following basic properties. 
\begin{proposition}\label{gamma_properties} \hspace{2in} \begin{itemize} \item[(i)] If $(E, p)$ is a $G$-module over $B$ for which every homotopy fiber of $(E, p)$ admits a chain of $q$-equivalences of $G$-modules $F_{b}(p) \simeq G$, then $(\Gamma E, \Gamma p)$ is a principal $G$-fibration over $B$. In particular, if $p$ is a quasifibration of $\mathcal{I}$-spaces and every fiber $E_b$ is $q$-equivalent to $G$, then $(\Gamma E, \Gamma p)$ is a principal $G$-fibration over $B$. \item[(ii)] Suppose that $(D, p) \arr (E, q)$ is a $q$-equivalence of $G$-modules over $B$. Then the induced map $(\Gamma D, \Gamma p) \arr (\Gamma E, \Gamma q)$ is a $q$-equivalence of principal $G$-fibrations. \item[(iii)] The map $\eta \colon (E, p) \arr (\Gamma E, \Gamma p)$ defined by the inclusion into constant paths is a homotopy equivalence of $G$-modules over $B$. If $p$ is a quasifibration of $\mathcal{I}$-spaces, then $\eta$ restricts to a $q$-equivalence on fibers. \end{itemize} \end{proposition} \begin{lemma}\label{action_map_is_equiv_lemma} Suppose that $Y$ is a left $G$-module that admits a $q$-equivalence of $G$-modules $Y \simeq G$ and let $y \colon \ast \arr G^{\boxtimes q} \boxtimes Y$ be a point. Then the composite \begin{equation*} G \cong G \boxtimes * \xrightarrow{ \id \boxtimes y } G \boxtimes (G^{\boxtimes q} \boxtimes Y) \overset{\alpha}{\arr} Y \end{equation*} of $y$ with the $(q + 1)$-fold iteration of the left $G$-module structure map $\alpha$ is a $q$-equivalence. \end{lemma} \begin{proof} Choose a zig-zag of $q$-equivalences of $G$-modules between $G$ and $Y$. In the following diagram, the upper right square represents the induced chain of commuting naturality squares relating the induced actions of the topological monoid $G_{h\mathcal{I}}$ on $G_{h \mathcal{I}}$ and $Y_{h \mathcal{I}}$. 
The bottom portion of the diagram is induced by the lax monoidal structure map $X_{h \mathcal{I}} \times Y_{h \mathcal{I}} \arr (X \boxtimes Y)_{h \mathcal{I}}$ of the homotopy colimit functor, and the lower right square commutes by definition. \[ \xymatrix{ G_{h \mathcal{I}} \times * \ar[r]^-{\id \times g} & G_{h \mathcal{I}} \times G_{h \mathcal{I}}^{q} \times G_{h \mathcal{I}} \ar[r]^-{\mu} \ar@{-}[d]_{\simeq} & G_{h \mathcal{I}} \ar@{-}[d]_{\simeq} \\ G_{h \mathcal{I}} \times B \mathcal{I} \ar[r]^-{\id \times y} \ar@{-}[u]^{\simeq} \ar[d] & G_{h \mathcal{I}} \times G_{h \mathcal{I}}^q \times Y_{h \mathcal{I}} \ar[r]^-{\alpha} \ar[d] & Y_{h \mathcal{I}} \\ (G \boxtimes \ast)_{h \mathcal{I}} \ar[r]^-{\id \boxtimes y} & (G \boxtimes G^{\boxtimes q} \boxtimes Y)_{h \mathcal{I}} \ar[r]^-{\alpha} & Y_{h \mathcal{I}} \ar@{=}[u] } \] Since $B \mathcal{I}$ is contractible and $\pi_0 Y_{h \mathcal{I}} \cong \pi_0 G_{h \mathcal{I}}$, we may choose a point $g \in G_{h \mathcal{I}}^q \times G_{h \mathcal{I}}$ so that the upper left hand square represents a chain of commuting squares. The vertical maps on the left are both homotopy equivalences, and the top composite is a homotopy equivalence because $G_{h \mathcal{I}}$ is a grouplike topological monoid. Therefore the bottom composite is a weak homotopy equivalence. \end{proof} Suppose that $X$ is a right $G$-module, $Y$ is a left $G$-module, and that we are given a map of $\mathcal{I}$-spaces $f \colon X \boxtimes_{G} Y \arr Z$. We will write $\epsilon(f)$ for the composite \[ \epsilon(f) \colon B(X, G, Y) \arr X \boxtimes_{G} Y \overset{f}{\arr} Z \] of $f$ with the canonical projection from the two sided bar construction. 
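Unwinding the definitions (this is only a sketch, using the identification of the $q$-simplices of the bar construction with $X \boxtimes G^{\boxtimes q} \boxtimes Y$), the map $\epsilon(f)$ is induced on simplicial level $q$ by iterating the module structure maps and passing to the quotient: \[ X \boxtimes G^{\boxtimes q} \boxtimes Y \arr X \boxtimes Y \arr X \boxtimes_{G} Y \overset{f}{\arr} Z. \]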
In particular, the canonical isomorphism $\alpha \colon G \boxtimes_{G} Y \arr Y$ induces a map of left $G$-modules $\epsilon(\alpha) \colon B(G, G, Y) \arr Y$, and the appropriate version of a standard simplicial homotopy argument (for example, \cite{geom_infinite_loops}*{9.8}) shows that $\epsilon(\alpha)$ is a homotopy equivalence. Now suppose that $p \colon Y \arr X$ is a principal $G$-fibration. Since the action of $G$ on $X$ is trivial, the map $p$ factors through the quotient of $Y$ by the action of $G$ to give a map $\overline{p} \colon \ast \boxtimes_{G} Y \arr X$. \begin{proposition}\label{epsilon_equiv} The map \[ \epsilon(\overline{p}) \colon B(*, G, Y) \arr X \] is a $q$-equivalence of $\mathcal{I}$-spaces. \end{proposition} \begin{proof} Consider the following commutative diagram. \[ \xymatrix{ B(G, G, Y) \ar[rr]^-{\epsilon(\alpha)} \ar[d]^{\pi} & & Y \ar[d]^{p} \\ B(*, G, Y) \ar[rr]_-{\epsilon(\overline{p})} & & X} \] Let $b \colon * \arr B(*, G, Y)$ be a point and let $x = \epsilon(\overline{p}) \circ b \colon * \arr X$. The map of fibers \[ \epsilon(\alpha) \colon B(G, G, Y)_{b} \arr Y_{x} \] is isomorphic to the composite \begin{equation*} G \cong G \boxtimes * \xrightarrow{ \id \boxtimes y } G \boxtimes (G^{\boxtimes q} \boxtimes Y_{x}) \overset{\alpha}{\arr} Y_{x}, \end{equation*} where $y \colon * \arr G^{\boxtimes q} \boxtimes Y_{x}$ is determined by a choice of representative \[ (t, y) \in \Delta^{q} \times B_q(\ast, G, Y) \] for $b$ at some simplicial level. Thus Lemma \ref{action_map_is_equiv_lemma} implies that $\epsilon(\alpha)$ induces a $q$-equivalence of fibers. The projection $\pi$ is a quasifibration by Proposition \ref{pi_is_quasifib_prop}. Since $p$ is an $h$-fibration of $G$-modules, by neglect of structure it is a level $h$-fibration, hence a level quasifibration. It follows that $p$ is also a quasifibration of $\mathcal{I}$-spaces. This implies that $\epsilon(\alpha)$ induces a $q$-equivalence of homotopy fibers.
The map $\epsilon(\alpha)$ is a homotopy equivalence of $\mathcal{I}$-spaces, so the five lemma implies that $\epsilon(\overline{p})$ is a $q$-equivalence. \end{proof} We are now ready to prove Theorem \ref{classification_theorem_for_diagram_fibrations}. We will construct natural transformations $\Psi \colon [X, B' G] \arr \mathscr{E}_{G}(X)$ and $\Phi \colon \mathscr{E}_{G}(X) \arr [X, B' G]$ that are mutually inverse. Let us denote $q$-fibrant approximation by $(-)'$, where we take $q$-fibrant approximation in the category of $G$-modules when that structure is present. Write $B'(X, G, Y)$ for the $q$-fibrant approximation of the bar construction $B(X, G, Y)$, and make the abbreviations $B'G = B'(\ast, G, \ast)$ and $E'G = B'(G, G, \ast)$. Although we have no control over the fiber of $\pi' \colon E'G \arr B'G$, we know that its homotopy fiber is $q$-equivalent to $G$ because $\pi$ is a quasifibration. It follows that $\Gamma \pi' \colon \Gamma E'G \arr B'G$ is a principal $G$-fibration. Given $[f] \in [X, B' G]$, define $\Psi[f] = (f^*\Gamma E' G, f^*\Gamma \pi')$, the pullback of the principal $G$-fibration $\Gamma E' G \arr B' G$ along $f$. The map $\Psi$ is well-defined by Lemma \ref{homotopic_maps_equiv_fibrations_prop}. Now suppose that $p \colon Y \arr X$ is a principal $G$-fibration. Define the classifying map $\Phi(Y, p) \colon X \arr B' G$ as follows. In the following commutative diagram, \[ \xymatrix{ X \ar[d]_{r} & & B(\ast, G, Y) \ar[rr]^-{q} \ar[ll]_-{\epsilon(\overline{p})} \ar[d]^{r} & & BG \ar[d]^{r} \\ X' \ar@{-->}@<-.5ex>[rr]_-{s} & & B'(*, G, Y) \ar[rr]^-{q'} \ar@<-.5ex>[ll]_-{\epsilon(\overline{p})'} & & B' G } \] the map $q$ is induced by the unique map $Y \arr \ast$ and the vertical maps are all fibrant approximations.
The resulting map $\epsilon(\overline{p})'$ is a $q$-equivalence by Proposition \ref{epsilon_equiv}, so by Lemma \ref{getting_homotopy_inverses}.(ii) we may choose a map $s \colon X' \arr B'(*, G, Y)$ such that $\epsilon(\overline{p})' \circ s$ is homotopic to $\id_{X'}$. Define $\Phi(Y, p) = [q' \circ s \circ r]$. The function $\Phi$ is well-defined because $s$ is unique up to homotopy. We will now prove that $\Psi \Phi = \id$. Given $(Y, p) \in \mathscr{E}_{G}(X)$, we have the classifying map $f = q' \circ s \circ r$, and we may choose a homotopy $H$ from $\epsilon(\overline{p})' \circ s$ to $\id_{X'}$. Write $g = q' \circ s$ and form the following diagram. \addtocounter{theorem}{1} \begin{equation}\label{3d_diagram} \xymatrix{ & \Gamma B'(G, G, Y) \ar@{-} '[d] \ar[dl]_-{\Gamma \epsilon(\alpha)'} \ar[rr]^{\Gamma q'} & & \Gamma E' G \ar@{-} '[d] & \\ \Gamma Y' \ar[dd]_{\Gamma p'} & \ar[d]_-{\Gamma \pi'} & s^* \Gamma B'(G, G, Y) \ar[ll]_<<<<<<<<<<<<<<<<{\widetilde{H}_{1}} \ar[rr]^<<<<<<<<<<<{K} \ar[dd]^>>>>>>>>>>>>>>>>>>{s^*\Gamma \pi'} \ar[ul]_-{\widetilde{s}} & \ar[d]^-{\Gamma \pi'} & g^* \Gamma E' G \ar[dd]^{g^* \Gamma \pi'} \ar[ul]_-{\widetilde{g}} \\ & B'(*, G, Y) \ar[dl]_{\epsilon(\overline{p})'} \ar@{-} '[r] & \ar[r]^{q'} & B' G & \\ X' \ar@{=}[rr] & & X' \ar@{=}[rr] \ar[ul]_{s} & & X' \ar[ul]_{g} } \end{equation} The squares on the right that face into the page are both pullback squares and $K$ is the induced map of pullbacks. The lower triangle commutes up to homotopy via $H$ and the CHP for the $h$-fibration of $G$-modules $\Gamma p' \colon \Gamma Y' \arr X'$ allows us to lift $H$ to a homotopy $\widetilde{H}$ of $G$-modules that makes the upper triangle commute up to homotopy. The rest of the diagram commutes strictly. 
The map $\Gamma \epsilon(\alpha)'$ is a $q$-equivalence because $\epsilon(\alpha)$ is a $q$-equivalence, and $\widetilde{s}$ is a $q$-equivalence because it is a map of $h$-fibrations that induces a $q$-equivalence on fibers and on base spaces. It follows that the map $\widetilde{H}_{1}$ is a $q$-equivalence as well. To analyze the right side of the diagram, first notice that in the diagram \[ \xymatrix{ B(G, G, Y) \ar[d]_{\pi} \ar[r]^-{q} & EG \ar[d]^{\pi} \\ B(\ast, G, Y) \ar[r] & BG } \] the upper map $q$ induces an isomorphism on fibers. Since the projections $\pi$ are quasifibrations, the map $q$ is then a $q$-equivalence on homotopy fibers. Taking fibrant approximation of the entire diagram, the induced map of total objects $q' \colon B'(G, G, Y) \arr E'G$ is likewise a $q$-equivalence on homotopy fibers. Applying the approximation functor $\Gamma$, we see that $\Gamma q'$ is also a $q$-equivalence on homotopy fibers. Since its domain and codomain are $h$-fibrations, this means that $\Gamma q'$ induces a $q$-equivalence on fibers. Returning to diagram \eqref{3d_diagram}, we see that the induced map of pullbacks $K$ is also a $q$-equivalence on fibers. Since $K$ is a map of $h$-fibrations over the same base object, $K$ is a $q$-equivalence. We have now established that the front wall of diagram \eqref{3d_diagram} displays a chain of $q$-equivalences of principal $G$-fibrations between $(\Gamma Y', \Gamma p')$ and $(g^* \Gamma E'G, g^* \Gamma \pi')$. Taking the pullback of these $q$-equivalences along the $q$-fibrant approximation map $r \colon X \arr X'$ gives a chain of $q$-equivalences between $(r^*\Gamma Y', r^*\Gamma p')$ and $\Psi \Phi (Y, p) = (f^* \Gamma E'G, f^* \Gamma \pi')$. A short diagram chase shows that the induced map $Y \arr r^* \Gamma Y'$ is a $q$-equivalence, so that the original principal $G$-fibration $(Y, p)$ is equivalent to the pullback $(r^*\Gamma Y', r^*\Gamma p')$. This proves that $\Psi \Phi = \id$.
\medskip To show that $\Phi \Psi = \id$, let $f \colon X \arr B' G$ and consider the following commutative diagram: \[ \xymatrix{ X \ar[r]^{r} \ar[d]_{f} & X' \ar[d]_{f'} \ar@{-->}@<-.5ex>[rr]_-{s_1} & & B'(*, G, f^* \Gamma E' G) \ar@<-.5ex>[ll]_-{\epsilon(\overline{f^*\Gamma \pi'})'} \ar[d]_{B'(\id, \id, \widetilde{f})} \ar[dr]^{q'} & \\ B'G \ar[r]_{r} & B'' G \ar@{-->}@<-.5ex>[rr]_-{s_2} & & B'(*, G, \Gamma E'G) \ar@<-.5ex>[ll]_-{\epsilon(\overline{\Gamma \pi'})'} \ar[r]_-{q'} & B'G } \] The left square is a naturality square for fibrant approximation and the middle square is the result of taking the fibrant approximation of the corresponding square that relates the maps $\epsilon(\overline{f^* \Gamma \pi'})$ and $\epsilon(\overline{\Gamma \pi'})'$ via $f$ and $\widetilde{f}$. The existence of the section up to homotopy $s_1$ is part of the construction $\Phi$. Applying Proposition \ref{epsilon_equiv} to the principal $G$-fibration $\Gamma \pi' \colon \Gamma E' G \arr B' G$, we see that the map $\epsilon(\overline{\Gamma \pi'})'$ is a $q$-equivalence. Thus we may also find a section $s_2$ up to homotopy as indicated. The diagram involving $s_1$ and $s_2$ commutes up to homotopy by the uniqueness statement in Lemma \ref{getting_homotopy_inverses}.(ii). Since $\Gamma E' G$ is contractible, the lower instance of $q'$ is a $q$-equivalence. By Lemma \ref{getting_homotopy_inverses}.(i), the map \[ \Phi \Psi \colon [f] \longmapsto [q' \circ s_1 \circ r] = [(q' \circ s_2 \circ r) \circ f] \] is an automorphism of $[X, B' G]$. Thus $\Psi$ is injective, so the identity $\Psi = (\Psi \Phi) \Psi = \Psi (\Phi \Psi)$ implies that $\Phi \Psi = \id$. This concludes the proof of Theorem \ref{classification_theorem_for_diagram_fibrations}. \section{Model categories of parametrized diagram spaces}\label{model_cat_param_spaces_section} In this section we discuss model category structures on parametrized $\mathcal{I}$-spaces. 
We first recall some basic material from May-Sigurdsson \cite{MS}. The category $\mathscr{K}$ of $k$-spaces admits a compactly generated topological model structure with weak equivalences the weak homotopy equivalences, fibrations the Serre fibrations, and cofibrations the retracts of relative CW-complexes. We refer to this model structure as the $q$-model structure, and use the terms $q$-equivalences, $q$-fibrations, and $q$-cofibrations for its weak equivalences, fibrations, and cofibrations. Let $B$ be a compactly generated topological space. The category $\mathscr{K}/B$ of spaces $(X, p) = (p \colon X \arr B)$ over $B$ admits a model structure whose weak equivalences and fibrations are detected by the forgetful functor $(X, p) \longmapsto X$ to the $q$-model structure on $\mathscr{K}$. An ex-space is a space $(X, p)$ over $B$ along with a map $s \colon B \arr X$ such that $p \circ s = \id_{B}$. The category $\mathscr{K}_{B}$ of ex-spaces $(X, p, s)$ also admits a model structure given by the forgetful functor to the $q$-model structure on $\mathscr{K}$. We refer to these model structures as the $q$-model structure on $\mathscr{K}/B$ and $\mathscr{K}_{B}$, respectively. While both of these model structures are compactly generated and topological, they are not well-grounded, in the sense of \cite{MS}*{\S 5.3-5.6}. The problem is that the generating $q$-cofibrations and acyclic $q$-cofibrations do not satisfy the HEP defined in terms of the cylinder objects native to their category---they are only Hurewicz cofibrations in the underlying category of spaces. As a result, applications of the gluing lemma that would allow standard inductive arguments over cell complexes built out of the generating sets fail for these model structures.
If one attempts to construct a stable model structure on parametrized spectra based on the $q$-model structure, the verification that relative cell complexes built out of the generating acyclic cofibrations are weak equivalences cannot be carried out. As an alternative, May-Sigurdsson develop the $qf$-model structure on $\mathscr{K}/B$ and $\mathscr{K}_{B}$. The $qf$-model structure also has the $q$-equivalences as weak equivalences, so that the associated homotopy category is still the homotopy category of spaces over $B$, but there are fewer $qf$-cofibrations than $q$-cofibrations. A $qf$-fibration need not be a Serre fibration but is a quasifibration. For our purposes, we do not need the details of the definitions, only the fact that in each case the $qf$-model structure is a well-grounded compactly generated model category. We will work in the un-sectioned context, building well-grounded compactly generated model structures on parametrized diagram spaces out of the $qf$-model structure on $\mathscr{K}/B$. We will now introduce the $qf$-model structure on parametrized $\mathcal{I}$-spaces. There is also a $q$-model structure on parametrized $\mathcal{I}$-spaces, but we will not need to use it. When $B = \ast$, both the $q$- and $qf$-model structures on parametrized $\mathcal{I}$-spaces agree with the model structure on $\mathcal{I}$-spaces constructed in \cite{diagram_spaces}*{\S15}. We always assume that the base space $B$ is a compactly generated topological space. By considering $B$ as a constant $\mathcal{I}$-space $B(V) = B$, the category $\mathcal{I} \mathscr{K}/B$ of $\mathcal{I}$-spaces over $B$ may be identified with the category of continuous functors from $\mathcal{I}$ to the category $\mathscr{K}/B$ of spaces over $B$.
As such, there is a level $qf$ model structure on $\mathcal{I}$-spaces over $B$ with weak equivalences and fibrations those maps $X \arr Y$ for which $X(V) \arr Y(V)$ is a weak homotopy equivalence or $qf$-fibration, respectively, in the $qf$-model structure on $\mathscr{K}/B$ for every object $V$ of $\mathcal{I}$. Define a $qf$-fibration of $\mathcal{I}$-spaces over $B$ to be a map that has the right lifting property with respect to level-wise $qf$-cofibrations that are $q$-equivalences of $\mathcal{I}$-spaces. Define a $qf$-cofibration to be a level-wise $qf$-cofibration. Replacing $\mathscr{U}$ under the $q$-model structure with $\mathscr{K}/B$ under the $qf$-model structure, we may make the same arguments as in \cite{diagram_spaces}*{\S 15} to prove the following theorem. \begin{theorem} Let $B$ be a compactly generated space. \begin{itemize} \item[(i)] The category $\mathcal{I} \mathscr{K}/B$ of $\mathcal{I}$-spaces over $B$ is a well-grounded compactly generated model category with respect to the $q$-equivalences, $qf$-fibrations and $qf$-cofibrations. We refer to this model structure as the $qf$-model structure on $\mathcal{I} \mathscr{K}/B$. \item[(ii)] Let $G$ be an $\mathcal{I}$-monoid. There is a well-grounded compactly generated model structure on the category $\Mod_{G}/B$ of $G$-modules over $B$ with weak equivalences and fibrations created by the forgetful functor to the $qf$-model structure on $\mathcal{I}$-spaces over $B$. \end{itemize} \end{theorem} \noindent It is a formal consequence that the category $\mathcal{I} \mathscr{K}_{B}$ of ex-$\mathcal{I}$-spaces inherits a well-grounded compactly generated model structure from the $qf$-model structure on $\mathcal{I} \mathscr{K}/B$. We will pass through this category briefly when constructing parametrized spectra, but we will not need to do any real work there. It will be useful to have the following description of the $qf$-fibrations. 
\begin{proposition}\label{qf_fibration_description_prop} A map $(X, p) \arr (Y, q)$ of $\mathcal{I}$-spaces over $B$ is a $qf$-fibration if and only if it is a level-wise $qf$-fibration and for every morphism $\phi \colon V \arr W$ of $\mathcal{I}$, the induced map to the pullback \[ X(V) \arr X(W) \times_{Y(W)} Y(V) \] is a $q$-equivalence of spaces over $B$. In particular, $(X, p)$ is $qf$-fibrant if and only if each structure map $p(V) \colon X(V) \arr B$ is a $qf$-fibration of spaces and every morphism $\phi \colon V \arr W$ of $\mathcal{I}$ induces a $q$-equivalence $X(V) \arr X(W)$ of spaces over $B$. Since $qf$-fibrations of spaces are quasifibrations and level-wise quasifibrations of $\mathcal{I}$-spaces are quasifibrations of $\mathcal{I}$-spaces, it follows that every $qf$-fibration of $\mathcal{I}$-spaces is a quasifibration of $\mathcal{I}$-spaces. \end{proposition} The next result is proved in the same way as its analogue for the $s$-model structure on parametrized spectra \cite{MS}*{12.6.7}, using the appropriate generating sets for $\mathcal{I}$-spaces. \begin{proposition}\label{prop:base_change_quillen} Let $f \colon A \arr B$ be a map. Then $(f_{!}, f^*)$ is a Quillen adjoint pair with respect to the $qf$-model structures on $\mathcal{I} \mathscr{K}/A$ and $\mathcal{I} \mathscr{K}/B$. If $f$ is a $q$-equivalence of spaces, then $(f_{!}, f^*)$ is a Quillen equivalence. The same statements hold for the $qf$-model structure on parametrized $G$-modules when $G$ is an $\mathcal{I}$-monoid. \end{proposition} In particular, the fiber functor $i_{b}^* = (-)_{b}$ is right Quillen on the category of $G$-modules over $B$. We let $\mathbf{F}_{b} = \mathbf{R} i_{b}^*$ denote its right derived functor. In other words, $\mathbf{F}_{b} Y$ is the object of the homotopy category of $G$-modules determined by the fiber $(R^{qf}Y)_{b}$ of a $qf$-fibrant approximation of $Y$.
The inclusions of the fibers into the homotopy fibers for $(Y, p)$ and $(R^{qf}Y, R^{qf}p)$ are related by the commutative diagram \addtocounter{theorem}{1} \begin{equation}\label{diagram:justify} \xymatrix{ Y_{b} \ar[d] \ar[r] & (R^{qf}Y)_{b} \ar[d]^{\simeq} \\ F_{b}(p) \ar[r]^-{\simeq} & F_{b}(R^{qf}p) } \end{equation} induced by fibrant approximation. Since the fibrant approximation is a $q$-equivalence of total spaces, it induces a $q$-equivalence of the homotopy fibers. The $qf$-fibration $R^{qf}p$ is in particular a quasifibration of $\mathcal{I}$-spaces, which gives the other displayed $q$-equivalence. It follows that the derived fiber $\mathbf{F}_{b}Y$ is canonically $q$-equivalent to the homotopy fiber $F_{b}(p)$. While the following terminology is non-standard, it will be useful as an intermediary between the highly structured notion of a principal $G$-fibration and the model-theoretic fiber conditions on parametrized spectra. \begin{definition}\label{def:Gtorsor} A $G$-torsor over $B$ is a $G$-module $(Y, p)$ over $B$ for which every derived fiber $\mathbf{F}_{b}Y$ admits a zig-zag of $q$-equivalences of $G$-modules to $G$. \end{definition} Diagram \eqref{diagram:justify} gives the following characterization of $G$-torsors: \begin{lemma}\label{lemma:Gtorsor_characterization} A $G$-module $(Y, p)$ over $B$ is a $G$-torsor if and only if for every $b \in B$, the homotopy fiber $F_{b}(p)$ admits a chain of $q$-equivalences of $G$-modules $F_{b}(p) \simeq G$. \end{lemma} Let $\ho (G \Mod/B)$ denote the homotopy category of $G$-modules over $B$ formed using the $qf$-model structure on $G$-modules over $B$. Let $\ho (G \Tor/B)$ be the subcategory of $\ho (G \Mod/B)$ consisting of $G$-torsors and $q$-equivalences of $G$-torsors. Recall the $h$-fibration approximation functor $\Gamma$ from \S\ref{classification_section_Ispaces}.
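In outline (this is only a sketch of the cited construction), $\Gamma$ replaces $p \colon E \arr B$ by an endpoint evaluation out of a space of Moore paths: writing $\Pi B$ for the $\mathcal{I}$-space of Moore paths in $B$, with $e_{0}, e \colon \Pi B \arr B$ the evaluations at time $0$ and at the endpoint, one sets \[ \Gamma E = E \times_{B} \Pi B, \qquad \Gamma p \colon \Gamma E \arr \Pi B \overset{e}{\arr} B, \] where the pullback defining $\Gamma E$ is taken along $e_{0}$.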
Using the characterization of $G$-torsors in Lemma \ref{lemma:Gtorsor_characterization}, it follows from Proposition \ref{gamma_properties} that $\Gamma$ takes $G$-torsors to principal $G$-fibrations and preserves $q$-equivalences. Conversely, every principal $G$-fibration satisfies the condition in Lemma \ref{lemma:Gtorsor_characterization} and thus is a $G$-torsor. The map $\eta \colon Y \arr \Gamma Y$ in Proposition \ref{gamma_properties}.(iii) is a $q$-equivalence of $G$-modules, so we have: \begin{proposition}\label{torsor_principal_fibration_equiv_prop} The functor $\Gamma$ induces a natural isomorphism between the set $\pi_0 \ho (G\Tor/B)$ of isomorphism classes of $G$-torsors over $B$ in the homotopy category and the set $\mathscr{E}_{G}(B)$ of equivalence classes of principal $G$-fibrations over $B$. \end{proposition} \section{Model categories of parametrized spectra}\label{section:modelcat_spectra} We now summarize what we need from the theory of parametrized spectra, following chapters 11 and 12 of May-Sigurdsson \cite{MS}. A spectrum over $B$ is an orthogonal spectrum in the category of ex-spaces over $B$. In other words, a parametrized spectrum $X$ consists of an $O(V)$-equivariant ex-space $(X(V), p(V), s(V))$ for each finite dimensional real inner product space $V$, along with compatible $(O(V) \times O(W))$-equivariant structure maps \[ \sigma \colon X(V) \sma_{B} S^{W}_{B} \arr X(V \oplus W) \] over and under $B$. Here $S^{V}_{B} = r^* S^{V} = S^{V} \times B$ is the trivially twisted ex-space with fiber the one-point compactification $S^{V}$. The section of $S_{B}^{V}$ is determined by the basepoint of $S^{V}$. The smash product $\sma_{B}$ is the fiberwise smash product of ex-spaces. A map $f \colon X \arr Y$ of spectra over $B$ consists of equivariant maps $f(V) \colon X(V) \arr Y(V)$ of ex-spaces, one for each indexing space $V$, that are suitably compatible with the structure maps $\sigma$.
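Spelled out fiberwise, the compatibility in question is the usual coherence for orthogonal spectra (we record it here only as a sketch): the structure map for $W = 0$ is the canonical isomorphism $X(V) \sma_{B} S^{0}_{B} \cong X(V)$, and for a pair of indexing spaces $W, W'$ the composite \[ X(V) \sma_{B} S^{W}_{B} \sma_{B} S^{W'}_{B} \xrightarrow{\sigma \sma_{B} \id} X(V \oplus W) \sma_{B} S^{W'}_{B} \overset{\sigma}{\arr} X(V \oplus W \oplus W') \] agrees with the structure map for $W \oplus W'$ under the canonical isomorphism $S^{W}_{B} \sma_{B} S^{W'}_{B} \cong S^{W \oplus W'}_{B}$.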
For each point $b \in B$, the fiber of $X$ over $b$ is the spectrum $X_{b} = i_{b}^* X$ given by the pullback of $X$ along the base change functor associated to the inclusion map $i_{b} \colon \{b\} \arr B$. The fiber spectrum is described level-wise in terms of the fibers of its constituent ex-spaces by the formula $X_{b}(V) = X(V)_{b}$. The level model structure on the category $\mathscr{S}_{B}$ of spectra over $B$ has as weak equivalences, respectively fibrations, those maps $f$ such that each $f(V)$ is a $q$-equivalence, respectively $qf$-fibration, of ex-spaces. We refer to these maps as the level-wise $q$-equivalences and level-wise $qf$-fibrations, respectively. The homotopy groups of a level-wise $qf$-fibrant spectrum $X$ over $B$ are the homotopy groups $\pi_q X_{b}$ of all of the fibers of $X$. The homotopy groups of a spectrum $X$ over $B$ are the homotopy groups $\pi_q (R^{l} X)_{b}$ of the fibers of a level-wise $qf$-fibrant approximation $R^{l}X$ of $X$. We say that a map $X \arr Y$ of spectra over $B$ is a stable equivalence if it induces an isomorphism on all homotopy groups of all fibers. An $\Omega$-spectrum over $B$ is a level $qf$-fibrant spectrum $X$ over $B$ whose adjoint structure maps \[ \widetilde{\sigma} \colon X(V) \arr \Omega_{B}^{W} X(V \oplus W) \] are $q$-equivalences of ex-spaces over $B$. \begin{theorem}\label{s_model_structure_param_spectra}\cite{MS}*{12.3.10} The category $\mathscr{S}_{B}$ of spectra over $B$ admits the structure of a well-grounded compactly generated model category whose weak equivalences are the stable equivalences. The fibrations and cofibrations are called the $s$-fibrations and the $s$-cofibrations, and the $s$-fibrant objects are the $\Omega$-spectra over $B$. We refer to this model structure as the $s$-model structure (or stable model structure) on $\mathscr{S}_{B}$. 
\end{theorem} \noindent In the case $B = *$, we recover the stable model structure on orthogonal spectra from Mandell-May-Schwede-Shipley \cite{MMSS}. Parametrized $\mathcal{I}$-spaces and parametrized spectra are related by suspension spectrum and underlying infinite loop space functors. If $(Y, p)$ is an $\mathcal{I}$-space over $B$, the suspension spectrum $\Sigma^{\bullet}_{B} Y$ is the spectrum over $B$ defined by \[ (\Sigma^{\bullet}_{B}Y) (V) = (Y(V), p)_{B} \sma_{B} S_B^{V}, \] where $(Y(V), p)_{B}$ is the ex-space over $B$ obtained from $(Y(V), p)$ by adjoining a disjoint section: \[ (Y(V), p)_{B} = (Y(V) \amalg B, p \amalg \id_{B}, \id_{B}). \] The spectrum structure maps $\sigma$ are defined using the map $Y(V) \arr Y(V \oplus W)$ of ex-spaces induced by the canonical inclusion $V \arr V \oplus W$ and the canonical isomorphism $S_{B}^{V} \sma_{B} S_{B}^{W} \cong S_{B}^{V \oplus W}$. When $B = \ast$, this agrees with the definition in \cite{diagram_spaces} of the suspension spectrum functor $\Sigma^{\bullet}_{+}$ carrying $\mathcal{I}$-spaces to spectra. Notice that we define $\Sigma^{\bullet}_{B}$ to take un-sectioned $\mathcal{I}$-spaces as input. In other words, $\Sigma^{\bullet}_{B}$ is the parametrized analogue of $\Sigma^{\bullet}_{+}$. This is in notational conflict with May-Sigurdsson's use of $\Sigma^{\infty}_{B}$ to denote the suspension spectrum of an ex-space, no disjoint section added. If $X$ is a spectrum over $B$, we define the $\mathcal{I}$-space $\Omega^{\bullet}_{B} X$ over $B$ by \[ (\Omega^{\bullet}_{B} X)(V) = \Omega^{V}_{B} X(V) = F_{B}(S^{V}_{B}, X(V)). \] Here $F_{B}(Y, Z)$ is the ex-space of fiberwise based maps $Y \arr Z$. The fiber $F_{B}(Y, Z)_{b}$ over each point $b \in B$ consists of the space of based maps $Y_{b} \arr Z_{b}$, and $F_{B}(Y, -)$ is right adjoint to the fiberwise smash product $(-) \sma_{B} Y$. The functoriality of $\Omega^{\bullet}_{B} X$ in $\mathcal{I}$ follows as in the non-parametrized context. 
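To sketch this functoriality (in outline, following the non-parametrized case), write $W - \varphi(V)$ for the orthogonal complement of $\varphi(V)$ in $W$. A morphism $\varphi \colon V \arr W$ of $\mathcal{I}$ then acts via the adjoint structure maps of $X$: \[ \Omega^{V}_{B} X(V) \arr \Omega^{V}_{B} \Omega^{W - \varphi(V)}_{B} X(V \oplus (W - \varphi(V))) \cong \Omega^{W}_{B} X(W), \] where the first map applies $\Omega^{V}_{B}$ to the adjoint structure map $\widetilde{\sigma}$ and the isomorphism is induced by $\varphi$ and the canonical identification $V \oplus (W - \varphi(V)) \cong W$.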
These functors form an adjunction: \begin{equation*} \xymatrix{ \mathcal{I} \mathscr{K}/B \ar@<.5ex>[rr]^-{\Sigma^{\bullet}_B} & & \mathscr{S}_{B} \ar@<.5ex>[ll]^-{\Omega^\bullet_{B}} }. \end{equation*} Inspection of the definitions gives natural isomorphisms of fibers \[ (\Sigma^{\bullet}_{B} Y)_{b} \cong \Sigma^{\bullet}_{+} Y_{b} \] and \[ (\Omega^{\bullet}_{B} X)_{b} \cong \Omega^{\bullet} X_{b}. \] The category $\mathcal{I} \mathscr{K}/B$ is enriched and tensored over the category $\mathcal{I} \mathscr{K}$ of $\mathcal{I}$-spaces with tensor given by the symmetric monoidal product $\boxtimes$. Similarly, the category $\mathscr{S}_{B}$ of spectra over $B$ is enriched and tensored over the category $\mathscr{S}$ of spectra with tensor the fiberwise smash product $\sma$ (May-Sigurdsson denote this by $\overline{\sma}$). The adjunction $(\Sigma^{\bullet}_{B}, \Omega^{\bullet}_{B})$ respects the enrichments and tensoring, in the following sense. \begin{proposition}\label{Om_respects_tensoring} \hspace{2in} \begin{itemize} \item[(i)] Let $A$ be an $\mathcal{I}$-space and let $Y$ be an $\mathcal{I}$-space over $B$. There is a natural isomorphism of parametrized spectra over $B$ \[ \Sigma^{\bullet}_{B}(A \boxtimes Y) \cong \Sigma^{\bullet}_{+} A \sma \Sigma^{\bullet}_{B} Y \] that satisfies the analogues of the associativity and unit diagrams for a monoidal natural transformation. \item[(ii)] Let $D$ be a spectrum and let $X$ be a parametrized spectrum over $B$. There is a natural transformation of $\mathcal{I}$-spaces over $B$ \[ \Omega^{\bullet} D \boxtimes \Omega^{\bullet}_{B} X \arr \Omega^{\bullet}_{B}(D \sma X) \] satisfying the analogues of the associativity and unit diagrams for a monoidal natural transformation. \end{itemize} \end{proposition} We now turn to parametrized module spectra. Let $R$ be a (non-parametrized) ring spectrum. We assume, once and for all, that $R$ is well-grounded, meaning that each $R(V)$ is well-based and compactly generated. 
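Since the fiberwise smash product of ex-spaces is computed fiberwise and the fiber functor $i_{b}^*$ preserves the colimits defining the smash product of spectra, one checks that there is a natural isomorphism
\[
(D \sma X)_{b} \cong D \sma X_{b}
\]
for a spectrum $D$ and a spectrum $X$ over $B$. In particular, each fiber of a module over $B$ (in the sense defined next) inherits a module structure by restricting the parametrized action map to the fiber.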
An $R$-module over $B$ is a spectrum $N$ over $B$ with an associative and unital map of spectra over $B$ \[ R \sma N \arr N, \] where $\sma$ denotes the tensor of a spectrum with a spectrum over $B$. \begin{theorem}\cite{MS}*{14.1.7} The category $R \Mod_{B}$ of $R$-modules over $B$ is a well-grounded compactly generated model category with weak equivalences and fibrations created by the forgetful functor to $\mathscr{S}_{B}$. We refer to this model structure as the $s$-model structure on $R\Mod_{B}$. \end{theorem} It is a consequence of Proposition \ref{Om_respects_tensoring} that, for an $\mathcal{I}$-monoid $G$, the adjunction $(\Sigma^{\bullet}_{B}, \Omega^{\bullet}_{B})$ restricts to an adjunction between $G$-modules over $B$ and $\Sigma^{\bullet}_{+} G$-module spectra over $B$. We now consider the homotopical properties of the adjunction $(\Sigma^{\bullet}_{B}, \Omega^{\bullet}_{B})$. \begin{proposition}\label{prop:sus_is_quillen} \hspace{2in} \begin{itemize} \item[(i)] The adjoint pair $(\Sigma^{\bullet}_{B}, \Omega^{\bullet}_{B})$ is a Quillen adjunction between the $qf$-model structure on $\mathcal{I}$-spaces over $B$ and the $s$-model structure on spectra over $B$. \item[(ii)] Let $G$ be an $\mathcal{I}$-monoid. The adjoint pair $(\Sigma^{\bullet}_{B}, \Omega^{\bullet}_{B})$ is a Quillen adjunction between the $qf$-model structure on $G$-modules over $B$ and the $s$-model structure on $\Sigma^{\bullet}_{+} G$-modules over $B$. \end{itemize} \end{proposition} \begin{proof} By the descriptions of fibrant objects and fibrations in Theorem \ref{s_model_structure_param_spectra} and Proposition \ref{qf_fibration_description_prop}, (i) follows using the same proof as for the non-parametrized Quillen adjunction $(\Sigma^{\bullet}_{+}, \Omega^{\bullet})$ \cite{diagram_spaces}*{2.5 and 3.6}. The only difference is that $qf$-fibrations are level $qf$-fibrations instead of level $q$-fibrations.

However, $qf$-fibrations of spaces over $B$ are closed under pullbacks and are quasifibrations \cite{MS}*{6.5.1}, so we can still use the five lemma argument given in the cited proof. For the preservation of acyclic fibrations, use \cite{MS}*{12.6.2} to deduce that an acyclic $s$-fibration of parametrized spectra is a level $q$-equivalence. Part (ii) follows because fibrations and acyclic fibrations of modules are detected in the underlying model structures in (i). \end{proof} It is a formal consequence that the left Quillen functor $\Sigma^{\bullet}_{B}$ preserves weak equivalences between cofibrant objects. However, it will be useful to know that a stronger result is true. \begin{lemma}\label{suspension_preserves_we_lemma} The functors $\Sigma^{\bullet}_{+} \colon \mathcal{I} \mathscr{K} \arr \mathscr{S}$ and $\Sigma^{\bullet}_{B} \colon \mathcal{I} \mathscr{K}/B \arr \mathscr{S}_{B}$ preserve all weak equivalences. \end{lemma} \begin{proof} Let us write $\underset{m}{\hocolim}^* (-)$ for the based homotopy colimit over the poset category $\mathcal{N}$ of natural numbers. Let $X$ be an $\mathcal{I}$-space and write $X(m) = X(\mathbf{R}^m)$. By filtering the homotopy colimit by finite stages, we see that the natural map \[ \underset{m}{\hocolim}^{*} \, \Omega^{n} \Sigma^{n}_{+} X(m) \arr \Omega^{n} \underset{m}{\hocolim}^* \, \Sigma^{n}_{+} X(m) \] is a weak homotopy equivalence. We now take the homotopy colimits over $n$ and observe that the functor $\Sigma^{n}_{+}$ converts unbased homotopy colimits to based homotopy colimits, giving a weak homotopy equivalence: \[ \underset{n}{\hocolim}^* \underset{m}{\hocolim}^{*} \Omega^{n} \Sigma^{n}_{+} X(m) \overset{\simeq}{\arr} \underset{n}{\hocolim}^* \Omega^{n} \Sigma^{n}_{+} \underset{m}{\hocolim} \, X(m).
\] The inclusion of categories $\mathcal{N} \arr \mathcal{J} \arr \mathcal{I}$ is homotopy cofinal, so by applying the homotopy cofinality criterion for topological homotopy colimits \cite{diagram_spaces}*{A.5}, the codomain is homotopy equivalent to $\hocolim_n^* \Omega^{n} \Sigma^{n}_{+} X_{h \mathcal{I}}$. On the other hand, the diagonal functor $\mathcal{N} \arr \mathcal{N} \times \mathcal{N}$ is also homotopy cofinal, so the domain is homotopy equivalent to $\hocolim_n^* \Omega^{n} \Sigma^{n}_{+} X(n)$. Since all of the equivalences are induced by natural transformations, it follows that if $f$ is a $q$-equivalence of $\mathcal{I}$-spaces, then $\hocolim_n^* \Omega^{n} \Sigma^{n}_{+} f(n)$ is a weak homotopy equivalence, which implies that $\Sigma^{\bullet}_{+} f$ is a stable equivalence of orthogonal spectra. We now deduce the result for $\Sigma^{\bullet}_{B}$. Assume that $f \colon X \arr Y$ is a $q$-equivalence of $\mathcal{I}$-spaces over $B$. We will use the notion of an ex-quasifibrant space over $B$; this is an ex-space $(Y, p, s)$ whose structure map $p$ is a quasifibration and whose section $s$ is an $\overline{f}$-cofibration, in the language of \cite{MS}*{8.1.1, 8.5.1}. The functor $\Sigma^{\bullet}_{B}$ preserves level-wise ex-quasifibrant objects by \cite{MS}*{8.5.3.(iii)}. Taking a level-wise ex-quasifibrant approximation $f' \colon X' \arr Y'$ of $f$, we see by \cite{MS}*{Lemma 12.4.1} that $\Sigma^{\bullet}_{B} f$ is a stable equivalence if and only if the induced map of fibers of the approximation $\Sigma^{\bullet}_{+} f'_{b} \colon \Sigma^{\bullet}_{+} (X')_{b} \arr \Sigma^{\bullet}_{+} (Y')_{b}$ is a stable equivalence. Since the structure maps $X' \arr B$ and $Y' \arr B$ are level-wise quasifibrations, they are quasifibrations of $\mathcal{I}$-spaces. By the long exact sequence of Lemma \ref{transfer_quasifib_lemma}, the $q$-equivalence $f' \colon X' \arr Y'$ induces a $q$-equivalence of fibers $f'_{b} \colon (X')_{b} \arr (Y')_{b}$. 
The lemma now follows since $\Sigma^{\bullet}_{+}$ preserves weak equivalences. \end{proof} We will work in the non-parametrized setting for a moment in order to fix notation on some constructions. Suppose that $R$ and $A$ are ring spectra. Consider the function spectrum $F^{R}(-, -)$ of $R$-modules as defined in \cite{MMSS}*{diagram (22.3)}. If $P$ is an $A$-module, $M$ is an $(R, A)$-bimodule and $N$ is an $R$-module, then $F^{R}(M, N)$ is an $A$-module and we have the following adjunction: \addtocounter{theorem}{1} \begin{equation}\label{non_param_function_spectrum_adjunction} \Mod_{R}(M \sma_{A} P, N) \cong \Mod_{A}(P, F^{R}(M, N)). \end{equation} The following invariance result is a consequence of the fact that the category of $R$-modules is a spectrally enriched model category via the function spectra $F^{R}(-,-)$. \begin{lemma}\label{non_param_invariance_prop} If $M'$ is a cofibrant $R$-module, then the functor $F^{R}(M', -)$ preserves stable equivalences between fibrant $R$-modules. If $N$ is a fibrant $R$-module, then the functor $F^{R}(-, N)$ preserves stable equivalences between cofibrant $R$-modules. \end{lemma} We will be interested in the generalization of the adjunction \eqref{non_param_function_spectrum_adjunction} where $N$ and $P$ are parametrized spectra. The smash product $M \sma_{A} P$ occurring in the parametrized version of the adjunction is built out of the external smash product $\overline{\sma} \colon \mathscr{S} \times \mathscr{S}_{B} \arr \mathscr{S}_{B}$, as described in \cite{MS}*{\S14.1}. In particular, there is never a need to internalize the smash product by taking the pullback $\Delta^*$ of a spectrum over $B \times B$ along the diagonal map. As May-Sigurdsson explain, we are able to maintain homotopical control of the smash product $\sma_{A}$ in this situation. \begin{lemma}\label{half_parametrized_monoid_axiom} Suppose that $R$ and $A$ are $s$-cofibrant ring spectra. 
Let $i \colon X \arr Y$ be an $s$-cofibration of $(R, A)$-bimodules and let $j \colon Z \arr W$ be an $s$-cofibration of $A$-modules over $B$. Then the pushout product \[ i \boxempty j \colon (Y \sma_{A} Z) \cup_{X \sma_{A} Z} (X \sma_{A} W) \arr Y \sma_{A} W \] is an $s$-cofibration of $R$-modules over $B$ which is a stable equivalence if either $i$ or $j$ is. \end{lemma} \begin{proof} Using the fact that all of the module categories are well-grounded, we may induct up the cellular filtration of $i$ and $j$, so it suffices to verify the result when $i$ and $j$ are generating cofibrations or generating acyclic cofibrations. For generating maps, the extra factor of $A$ on either side cancels with the smash product over $A$, so we may deduce the result from the corresponding result for $A = S$ \cite{MS}*{12.6.5}, as in the proof of \cite{schwede_shipley}*{4.1.(2)}. We use the fact that $A$ and $R$ are $s$-cofibrant spectra to ensure that $A \sma (-)$ preserves $s$-cofibrations and acyclic $s$-cofibrations of spectra over $B$ and that $R \sma (-)$ takes $s$-cofibrations and acyclic $s$-cofibrations to $s$-cofibrations and acyclic $s$-cofibrations of $R$-modules. \end{proof} The lemma has the following consequence. \begin{proposition}\label{prop:param_quillen_adjunction} Let $A$ and $R$ be $s$-cofibrant ring spectra. Suppose that $M$ is a cofibrant $(R, A)$-bimodule. Then the adjunction \[ \xymatrix{ (\text{$A$-modules over $B$}) \ar@<.5ex>[rrr]^-{M \sma_{A} (-) } & & & (\text{$R$-modules over $B$}) \ar@<.5ex>[lll]^-{F^{R}(M, -)} } \] is a Quillen adjunction. \end{proposition} \section{The principal $\Aut_{R}M$-fibration associated to an $R$-bundle}\label{section:prep} Let $R$ be a ring spectrum and let $M$ be an $R$-module. In this section, we will define the $\mathcal{I}$-monoid $\Aut_{R}M$ of equivalences of $R$-modules $M \arr M$. We then describe the construction of an $\Aut_{R}M$-torsor from an $R$-bundle with fiber $M$.
The construction requires a subtle mixing of the $qf$-model structure with approximation by an $h$-fibration. We first recall a general construction from \cite{diagram_spaces}*{\S12} (see also \cite{SS_groupcompletion}). Suppose that $G$ is an $\mathcal{I}$-monoid. While $G$ may not be grouplike, there is a maximal grouplike sub $\mathcal{I}$-monoid $G^{\x} \subset G$ defined as the pullback \begin{equation}\label{diagram:def_units} \xymatrix{ G^{\times} \ar[r] \ar[d] & G \ar[d] \\ (\pi_0 G)^{\x} \ar[r] & \pi_0 G } \end{equation} where $(\pi_0 G)^{\times} \subset \pi_0 G$ is the subset of invertible elements of the monoid $\pi_0 G$. In other words, the inclusion $G^{\times} \arr G$ is given level-wise by the inclusion of those path components that are stably invertible under the $\mathcal{I}$-monoid multiplication. For example, if $G = \Omega^{\bullet} R$ is the $\mathcal{I}$-monoid underlying a ring spectrum $R$, then $G^{\times} = \mGL_1^{\bullet}R$ is the $\mathcal{I}$-monoid of units of $R$. We assume from now on that $R$ is an $s$-cofibrant ring spectrum and that $M$ is an $s$-fibrant and $s$-cofibrant $R$-module. The function spectrum $F^{R}(M, M)$ is a ring spectrum under composition of maps. Let $\End_{R}M = \Omega^{\bullet} F^{R}(M, M)$ be the underlying $\mathcal{I}$-monoid. We define $\Aut_{R} M$ to be the $\mathcal{I}$-monoid of units of the ring spectrum $F^{R}(M, M)$: \[ \Aut_{R} M = \mGL_{1}^{\bullet} F^{R}(M, M) = (\Omega^{\bullet} F^{R}(M, M))^{\times} \] We think of $\Aut_{R} M$ as the $A_{\infty}$ space of weak equivalences of $R$-modules $M \arr M$. The suspension spectrum of the $\mathcal{I}$-monoid $\Aut_{R} M$ is a ring spectrum $\Sigma^{\bullet}_{+} \Aut_{R} M$. 
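For orientation, consider the rank one case $M = R$ (after fibrant and cofibrant approximation of $R$ as an $R$-module). Composition gives a canonical equivalence of ring spectra $F^{R}(R, R) \simeq R$, so that $\End_{R} R \simeq \Omega^{\bullet} R$ and
\[
\Aut_{R} R \simeq \mGL_{1}^{\bullet} R,
\]
the $\mathcal{I}$-monoid of units of $R$. For $R = S$ this recovers the classical space $GL_1(S)$, the union of the components of $QS^0 \simeq (\Omega^{\bullet} S)_{h \mathcal{I}}$ lying over $\pm 1 \in \pi_0 S \cong \mathbb{Z}$.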
The $R$-module $M$ also has the structure of a right $\Sigma^{\bullet}_{+} \Aut_{R} M$-module, with action map \[ M \sma_{S} \Sigma^{\bullet}_{+} \Aut_{R} M \arr M \] the adjoint of the composite map of ring spectra \[ \Sigma^{\bullet}_+ \Aut_{R} M \arr \Sigma^{\bullet}_{+} \Omega^{\bullet} F^{R}(M, M) \overset{\epsilon}{\arr} F^{R}(M, M) \] induced by the canonical inclusion $\mGL_1^{\bullet} \arr \Omega^{\bullet}$ and the counit of the adjunction $(\Sigma^{\bullet}_{+}, \Omega^{\bullet})$. Thus $M$ is an $(R, \Sigma^{\bullet}_{+} \Aut_{R} M)$-bimodule. Let $\Aut_{R}^{c}M \arr \Aut_{R} M$ be a $q$-cofibrant approximation of $\Aut_{R} M$ as an $\mathcal{I}$-monoid. The right $\Sigma^{\bullet}_{+} \Aut_{R}M$-module structure of $M$ pulls back to give a right $\Sigma^{\bullet}_{+} \Aut_{R}^{c}M$-module structure on $M$. By identifying $R \sma_{S} (\Sigma^{\bullet}_{+} \Aut_{R}^{c} M)^{\op}$-modules with $(R, \Sigma^{\bullet}_{+} \Aut_{R}^{c} M)$-bimodules, the category of $(R, \Sigma^{\bullet}_{+} \Aut_{R}^{c} M)$-bimodules is a well-grounded compactly generated model category with weak equivalences and fibrations created in the $s$-model structure on spectra \cite{MMSS}*{12.1}. Let $M^{\circ} \arr M$ be an $s$-cofibrant approximation of $M$ as an $(R, \Sigma^{\bullet}_{+} \Aut_{R}^{c} M)$-bimodule. Note that since $\Sigma^{\bullet}_{+}$ is left Quillen, $\Sigma^{\bullet}_{+} \Aut_{R}^{c}M$ is $s$-cofibrant as a ring spectrum, and thus $s$-cofibrant as a spectrum. We record a basic consequence. \begin{lemma}\label{underling_rightmodule_ofM_cofibrant} The underlying left $R$-module of $M^{\circ}$ is $s$-cofibrant. The underlying right $\Sigma^{\bullet}_{+}\Aut_{R}^{c}M$-module of $M^{\circ}$ is $s$-cofibrant. \end{lemma} \begin{proof} The right adjoint of the forgetful functor from $(R, \Sigma^{\bullet}_{+}\Aut_{R}^{c} M)$-bimodules to left $R$-modules is the function spectrum functor $F^{S}(\Sigma^{\bullet}_{+} \Aut_{R}^{c}M, -)$.
This functor preserves fibrations and acyclic fibrations because $\Sigma^{\bullet}_{+} \Aut_{R}^{c} M$ is $s$-cofibrant. Therefore its left adjoint, the forgetful functor, preserves cofibrations and acyclic cofibrations. This proves the first claim. The second claim follows using a similar argument and the fact that $R$ is $s$-cofibrant. \end{proof} We write $\mathbf{F}_{b} = \mathbf{R} i_{b}^* (-)$ for the right derived fiber functor. If $N$ is an $R$-module over $B$, the derived fiber $\mathbf{F}_{b} N$ is the object of the homotopy category of $R$-modules determined by the fiber $i_{b}^* R^{s}N$ of an $s$-fibrant approximation $R^{s}N$ of $N$ as an $R$-module over $B$. \begin{definition}\label{def:Rbundle} An $R$-bundle over $B$ with fiber $M$ is an $R$-module $N$ over $B$ such that every derived fiber $\mathbf{F}_{b}N$ of $N$ admits a zig-zag of stable equivalences of $R$-modules to $M$. \end{definition} Let $N$ be an $R$-bundle over $B$. The function spectrum $F_{B}^{R}(M^{\circ}, N)$ is a $\Sigma^{\bullet}_{+}\End_{R} M$-module over $B$. Applying $\Omega^{\bullet}_{B}$, we get an $\End_{R}M$-module $\Omega^{\bullet}_{B}F_{B}^{R}(M^{\circ}, N)$ over $B$ which is $qf$-fibrant when $N$ is $s$-fibrant. \begin{lemma}\label{choice_of_equiv_lemma} Suppose that $N$ is $s$-fibrant and fix a point $b \in B$. A stable equivalence of $R$-modules $N_{b} \simeq M$ determines: \begin{itemize} \item[(i)] a stable equivalence of $\Sigma^{\bullet}_{+}\End_{R}M$-modules $F^{R}(M^{\circ}, N)_{b} \simeq F^{R}(M, M)$, and \item[(ii)] a $q$-equivalence of $\End_{R}M$-modules $\Omega^{\bullet}_{B} F^{R}(M^{\circ}, N)_{b} \simeq \Omega^{\bullet} F^{R}(M, M).$ \end{itemize} \end{lemma} \begin{proof} This follows from the fact that the fiber functor $i_{b}^*$ commutes with $F^{R}(M, -)$, $F^{R}(M^{\circ}, -)$ and $\Omega^{\bullet}$ up to canonical isomorphism, as well as the fibrancy of $N$ and the cofibrancy of $M$ and $M^{\circ}$.
\end{proof} \noindent Notice that the second condition in the lemma implies that $\Omega^{\bullet}_{B} F^{R}(M^{\circ}, N)$ is an $\End_{R}M$-torsor. We will now construct an $\Aut_{R}M$-torsor \[ E^{R}(M^{\circ}, N) \subset\Omega^{\bullet}_{B} F^{R}(M^{\circ}, N). \] The idea of the construction is to restrict to the sub-parametrized $\mathcal{I}$-space whose fiber over $b \in B$ consists of the stable equivalences of $R$-modules $M^{\circ} \arr N_{b}$. To make this idea rigorous, we need to access the components $\pi_0 \Omega^{\bullet}_{B} F^{R}(M^{\circ}, N)_{b}$ of each fiber in a way that remembers the topology of $B$. To this end, we will define the parametrized components $\pi_0^{B} X$ of a parametrized space $p \colon X \arr B$. As a set, $\pi_0^{B} X$ consists of all components of all fibers of $X$: \[ \pi_0^{B} X = \bigcup_{b \in B} \pi_0 X_b. \] Give $\pi_0^{B} X$ the quotient topology induced by the map $X \arr \pi_0^{B} X$ that sends a point $x \in X$ to its component $[x] \in \pi_0 X_{p(x)}$. Since the quotient map is a map over $B$, the space $\pi_0^{B} X$ is a parametrized space over $B$. \begin{construction}\label{def:subtorsor} We now define a fiberwise version of \eqref{diagram:def_units}. Let $G$ be an $\mathcal{I}$-monoid and let $(Y, p)$ be a $G$-torsor over $B$ whose structure map $p \colon Y \arr B$ is a quasifibration of $\mathcal{I}$-spaces. A choice of $q$-equivalence of $G$-modules $Y_{b} \simeq G$ gives an isomorphism of $\pi_0 G$-modules $\pi_0 Y_{b} \cong \pi_0 G$. Define $\pi_0 Y_b^{\x}$ to be the subset of $\pi_0 Y_b$ corresponding to $\pi_0 G^{\x}$ under this isomorphism. Although the isomorphism $\pi_0 Y_b^{\x} \cong \pi_0 G^{\x}$ of $\pi_0 G^{\x}$-modules depends on the choice of $q$-equivalence $Y_{b} \simeq G$, the subset $\pi_0 Y_{b}^{\x}$ does not. Let $\pi_0^{B} Y^{\x} \subset \pi_0^{B} Y$ be the subspace consisting of the sets $\pi_0 Y_{b}^{\x}$ in each fiber. 
Define the $\mathcal{I}$-space $Y^{\x}$ over $B$ to be the following pullback of $\mathcal{I}$-spaces: \[ \xymatrix{ Y^{\x} \ar[r]^{\iota} \ar[d] & Y \ar[d] \\ \pi_0^{B} Y^{\x} \ar[r] & \pi_0^{B} Y } \] Notice that there is a canonical isomorphism $(Y^{\times})_{b} \cong Y_{b}^{\times}$. It is straightforward to verify that the construction $Y \mapsto Y^{\times}$ is functorial for maps of $G$-modules. We will at times write $\mu = (-)^{\times}$ for the resulting functor. The assumption that $p$ is a quasifibration is the minimal hypothesis necessary for the construction to be sensible. In practice, $p$ will be either a $qf$-fibration or an $h$-fibration. Notice that a $G$-torsor with $p$ an $h$-fibration of $G$-modules is exactly a principal $G$-fibration, as defined in \S\ref{principal_fibrations_section}. \end{construction} \begin{lemma}\label{lemma:subtorsor_path_comp} Suppose that the base space $B$ is semi-locally contractible and that $(Y, p)$ is a principal $G$-fibration over $B$. Then for each object $V$ of $\mathcal{I}$, the map $\iota(V) \colon Y^{\times}(V) \arr Y(V)$ is the inclusion of a subspace of path components of $Y(V)$. \end{lemma} \begin{proof} Let $\gamma$ be a path in $Y(V)$ with $\gamma(0) \in Y^{\times}(V)$. Assuming that $\gamma(1) \notin Y^{\times}(V)$, let $t_0 = \inf \{ t \in [0, 1] \mid \gamma(t) \notin Y^{\times}(V) \}$. Set $b_0 = p(\gamma(t_0))$ and choose an open neighborhood $U$ of $b_0$ along with a nullhomotopy $h \colon U \times I \arr B$ of $U$ in $B$. Consider the $G$-module $h^*Y$ over $U \times I$ obtained from $Y$ by pullback along $h$. The restriction $h^*Y \vert_{U \times \{0\}}$ is isomorphic to $Y \vert_{U}$, while the restriction $h^*Y \vert_{U \times\{1\}}$ is isomorphic to $U \times Y_{b_{0}}$. It follows from Lemma \ref{homotopic_maps_equiv_fibrations_prop} that we may find a fiberwise homotopy equivalence of $G$-modules $\rho \colon Y \vert_{U} \arr U \times Y_{b_{0}}$ over $U$. 
Applying the functor $(-)^{\times}$ to $\rho$ and evaluating at $V$, we have a commutative diagram \[ \xymatrix{ Y^{\times}(V)\vert_{U} \ar[r]^-{\rho^{\times}(V)} \ar[d] & U \times Y_{b_0}^{\times}(V) \ar[d] \\ Y (V)\vert_{U} \ar[r]^-{\rho(V)} & U \times Y_{b_0}(V) } \] which shows that in a neighborhood of $t_0$, the path $\rho(V) \circ \gamma$ must lie in $U \times Y_{b_0}^{\times}(V)$. Since $\rho(V)$ is a fiberwise homotopy equivalence, it follows that $\gamma(t) \in Y_{p(\gamma(t))}^{\times}(V)$ for $t$ near $t_0$, contradicting our initial assumption. \end{proof} \begin{proposition}\label{prop:subtorsor_properties} Suppose that $(Y, p)$ is a $G$-torsor over a semi-locally contractible space $B$. \begin{itemize} \item[(i)] The $\mathcal{I}$-space $Y^{\times}$ is a $G^{\times}$-module over $B$ and the canonical inclusion $\iota \colon Y^{\times} \arr Y$ is a map of $G^{\times}$-modules. \item[(ii)] If the structure map $p \colon Y \arr B$ is an $h$-fibration of $G$-modules, so that $Y$ is a principal $G$-fibration, then $p^{\times} \colon Y^{\times} \arr B$ is a $G^{\times}$-torsor. \item[(iii)] The functor $\mu \colon Y \mapsto Y^{\times}$ preserves $q$-equivalences between principal $G$-fibrations. \end{itemize} \end{proposition} \begin{proof} (i) In order to define a $G^{\x}$-module structure on $Y^{\x}$, it suffices to show that the composite \[ \alpha^{\x} \colon G^{\x} \boxtimes Y^{\x} \arr G \boxtimes Y \overset{\alpha}{\arr} Y \] of the canonical inclusions followed by the action of $G$ on $Y$ factors through $Y^{\x}$. By the construction of $Y^{\x}$, it suffices to verify the factorization on $\pi_{0}$ of each fiber. To this end, consider the following diagram, in which the left vertical map is induced by the lax monoidal structure map $l$ of the homotopy colimit functor $(-)_{h \mathcal{I}}$.
\[ \xymatrix{ \pi_0 G^{\x} \times \pi_0 Y^{\x}_{b} \ar[r] \ar[d]_{\pi_0 l} & \pi_0 Y^{\x}_{b} \ar[d] \\ \pi_0 (G^{\x} \boxtimes Y^{\x}_{b}) \ar@{-->}[ur] \ar[r]^-{\alpha^{\x}} & \pi_0 Y_{b} } \] The definition of the invertible components $\pi_0 Y_{b}^{\x}$ implies that the action of $\pi_0 G^{\x}$ on $\pi_0 Y^{\x}_{b}$ lands in $\pi_0 Y^{\x}_{b}$, as indicated by the commutative square. The map $\pi_0 l$ is an isomorphism by Lemma \ref{interchange_lemma}, so there exists a diagonal lift and $\alpha^{\x}$ factors through $Y^{\x}$ as claimed. The associativity and unit conditions for the action follow from the associativity and unit conditions for $\alpha$. (ii) By Lemma \ref{lemma:subtorsor_path_comp}, $p^{\times}$ is a level-wise $h$-fibration. It follows that the natural map $Y^{\times}_{b} \arr F_{b}Y^{\times}$ from the fiber to the homotopy fiber is a $q$-equivalence. A given chain of $q$-equivalences of $G$-modules $Y_{b} \simeq G$ induces a chain of $q$-equivalences of $G^{\times}$-modules $Y_{b}^{\times} \simeq G^{\times}$. Since the homotopy fiber represents the derived fiber (Lemma \ref{lemma:Gtorsor_characterization}), we conclude that $Y^{\times}$ is a $G^{\times}$-torsor. (iii) Assume that $(Y, p) \arr (Z, q)$ is a $q$-equivalence of $G$-torsors with $p$ and $q$ both $h$-fibrations of $\mathcal{I}$-spaces. For any $b \in B$, the induced map of fibers $Y_{b} \arr Z_{b}$ is a $q$-equivalence of $G$-modules, and so the induced map of $G^{\times}$-modules $Y^{\times}_{b} \arr Z^{\times}_{b}$ is a $q$-equivalence. As observed in the proof of (ii), the homotopy fibers of $p^{\times}$ and $q^{\times}$ are naturally equivalent to the fibers, so it follows that $Y^{\times} \arr Z^{\times}$ is a $q$-equivalence on total spaces.
\end{proof} As a consequence of Proposition \ref{prop:subtorsor_properties}, we may define the derived functor of $\mu$ to be the functor from the homotopy category of $G$-torsors to the homotopy category of $G^{\times}$-torsors \begin{align*} \boldsymbol{\mu} \colon \ho (G \Tor/B) &\arr \ho (G^{\times} \Tor/B) \\ Y &\longmapsto \mu(\Gamma Y) = (\Gamma Y)^{\times}, \end{align*} where $\Gamma$ is the $h$-fibrant approximation functor from \S\ref{classification_section_Ispaces}. Since levelwise $h$-fibrations of $\mathcal{I}$-spaces are in particular quasifibrations of $\mathcal{I}$-spaces, Lemma \ref{lemma:subtorsor_path_comp} implies that when $p \colon Y \arr B$ is an $h$-fibration, the fiber $Y_{b}^{\times} \cong (Y^{\times})_{b}$ represents the derived fiber $\mathbf{F}_{b} Y^{\times}$ of $Y^{\times}$. In other words: \begin{lemma}\label{lemma:mu_commutes} There is a canonical isomorphism of derived functors $\mathbf{F}_{b} \boldsymbol{\mu} \cong \boldsymbol{\mu} \mathbf{F}_{b}$. \end{lemma} \begin{definition}\label{def:associated_torsor} Let $N$ be an $R$-bundle with fiber $M$ and let $R^{s}N$ be an $s$-fibrant approximation of $N$ as an $R$-module over $B$. Since $M^{\circ}$ is an $s$-cofibrant $R$-module, the $\End_{R} M$-torsor $\Omega^{\bullet}_{B} F^{R}(M^{\circ}, R^{s}N)$ is $qf$-fibrant as an $\End_{R}M$-module. Thus we can apply Construction \ref{def:subtorsor} and define: \[ \quad E^{R}(M^{\circ}, R^{s}N) = (\Omega^{\bullet}_{B} F^{R}_{B}(M^{\circ}, R^{s}N))^{\x} \] In general, $E^{R}(M^{\circ}, R^{s}N)$ is an $\Aut_{R}M$-module over $B$, but need not be an $\Aut_{R}M$-torsor. If we instead take the derived functor $\boldsymbol{\mu}$ by applying the $h$-fibration approximation functor $\Gamma$ before $(-)^{\times}$, then the value of the associated derived functor \[ \mathbf{E}^{R}(M^{\circ}, N) = \boldsymbol{\mu}\Omega^{\bullet}_{B} F^{R}_{B}(M^{\circ}, R^{s}N) \] is our definition of the $\Aut_{R}M$-torsor associated to the $R$-bundle $N$. 
Restricting the module action along the $q$-equivalence of $\mathcal{I}$-monoids $\Aut_{R}^{c}M \arr \Aut_{R}M$, we see that $\mathbf{E}^{R}(M^{\circ}, N)$ is an $\Aut_{R}^{c}M$-torsor as well. Since $\Omega^{\bullet}_{B}$ and $F^{R}_{B}(M^{\circ}, -)$ are both right Quillen, we can summarize the definition by saying that \[ \mathbf{E} = \mathbf{E}^{R}(M^{\circ}, -) \colon \ho (\text{$R$-bundles with fiber $M$}) \arr \ho (\text{$\Aut_{R}^{c}M$-torsors}) \] is the composite derived functor $\mathbf{E} = \boldsymbol{ \mu} \circ \mathbf{R} \Omega^{\bullet}_{B} F^{R}_{B}(M^{\circ}, - )$. We can also define $\mathbf{E}^{R}(M, N)$ by replacing $M^{\circ}$ by $M$ throughout. \end{definition} \section{The classification of $R$-bundles}\label{section:bundle_classification} In the previous section we constructed an $\Aut_{R}^{c}M$-torsor from an $R$-bundle with fiber $M$. We now construct an $R$-bundle with fiber $M$ from an $\Aut_{R}^{c}M$-torsor and show that the constructions are homotopy inverse to each other. This will complete the proof of Theorem \ref{thm:classification_of_Rbundles}. We assume that $B$ is a CW-complex, in particular semi-locally contractible, so the functor $\mu$ from the previous section is well-behaved. \begin{definition}\label{def:associated_bundle} Let $Y$ be an $\Aut_{R}^{c}M$-torsor over $B$. The fiberwise suspension spectrum $\Sigma^{\bullet}_{B} Y$ is a $\Sigma^{\bullet}_{+} \Aut_{R}^{c}M$-module spectrum over $B$. We define the $R$-bundle associated to $Y$ to be the $R$-module $T(Y) = M^{\circ} \sma_{\Sigma^{\bullet}_{+}\Aut_{R}^{c}M} \Sigma^{\bullet}_{B} Y$ over $B$. The construction $T$ defines a functor from $\Aut_{R}^{c}M$-modules over $B$ to $R$-module spectra over $B$ which is left Quillen by Prop. \ref{prop:sus_is_quillen} and \ref{prop:param_quillen_adjunction}. We let $\mathbf{T} = \mathbf{L} T$ denote its left derived functor. 
We will verify that $\mathbf{T} Y$ is in fact an $R$-bundle with fiber $M$ in Lemma \ref{lemma:derived_commute}. \end{definition} \begin{remark}\label{construction_thomspectra} In the case $M = R$, the definition recovers the construction of Thom spectra from \citelist{\cite{ABGHR1} \cite{BCS} \cite{E_infty_rings}}. Given a map of spaces $f \colon B \arr B \mGL_1 R$, the classification of principal $\mGL_1 R$-fibrations gives a principal $\mGL_1 R$-fibration $Y_f$ over $B$. Applying the functor $T$ then gives a rank one $R$-bundle over $B$. The Thom spectrum associated to the map $f$ is the (non-parametrized) $R$-module spectrum \[ \mathrm{M} f= r_{!} T Y_{f} \cong R^{\circ} \sma_{\Sigma^{\bullet}_{+} \mGL_1^{c} R} \Sigma^{\bullet}_{+} Y_f , \] where $r_{!} \colon \mathscr{S}_{B} \arr \mathscr{S}$ is left adjoint to the pullback functor $r^* \colon \mathscr{S} \arr \mathscr{S}_{B}$. \end{remark} The fiber functor $(-)_b = i_{b}^*$ is a left adjoint, but is not left Quillen for either the stable model structure on parametrized spectra or the $qf$-model structure on parametrized $\mathcal{I}$-spaces. However, $i_{b}^*$ is a right Quillen functor. On the other hand, $T = M^{\circ} \sma_{\Sigma^{\bullet}_{+} \Aut_{R}^{c}M} \Sigma^{\bullet}_{B}(-)$ is a left Quillen functor. There is a natural isomorphism of functors \[ (M^{\circ} \sma_{\Sigma^{\bullet}_{+} \Aut_{R}^{c} M} \Sigma^{\bullet}_{B} Y)_{b} \cong M^{\circ} \sma_{\Sigma^{\bullet}_{+} \Aut_{R}^{c} M} \Sigma^{\bullet}_{+} Y_{b} \] at the point-set level, but we are not guaranteed an isomorphism of derived functors after passage to homotopy categories because we are composing left and right derived functors. If $B = \ast$, the fact that $M^{\circ}$ is $s$-cofibrant as a $\Sigma^{\bullet}_{+}\Aut_{R}^{c}M$-module implies that the functor $M^{\circ} \sma_{\Sigma^{\bullet}_{+} \Aut_{R}^{c}M} (-)$ preserves stable equivalences \cite{MMSS}*{12.7}. 
Along with Lemma \ref{suspension_preserves_we_lemma}, this shows that the functor $T$ takes $q$-equivalences to stable equivalences when the base is a point. This fact will allow us to deduce the desired commutation of derived functors. The proof of the next result is inspired by Shulman's examples in \S9 of \cite{shulman_doubles}. We temporarily revert to the usual notation $\mathbf{L}$ and $\mathbf{R}$ for left and right derived functors. \begin{lemma}\label{lemma:derived_commute} Let $f \colon \ast \arr B$ be the inclusion of a point. Then there is a natural isomorphism of derived functors $\mathbf{R} f^* \mathbf{L} T \cong \mathbf{L} T \mathbf{R} f^*$. \end{lemma} \begin{proof} Suppose that $X$ is a $qf$-bifibrant $\Aut_R^{c}M$-module over $B$, and consider the following natural transformation of $R$-modules \begin{equation}\label{commutation_transformation} T Q^{qf} f^* X \arr T f^* X \overset{\cong}{\arr} f^* T X \arr f^* R^{s} T X, \end{equation} where the first and third maps are induced by $qf$-cofibrant approximation and $s$-fibrant approximation, respectively. Since $T$ preserves all weak equivalences when the base is a point, the first map is a stable equivalence. The second map is the canonical isomorphism. It remains to show that $f^*$ preserves the stable equivalence $TX \arr R^{s} TX$. Factor $f$ as a $q$-equivalence followed by a $q$-fibration, and consider the two cases separately. In the first case, the Quillen adjunction $(f_{!}, f^*)$ is a Quillen equivalence for parametrized $\Aut_{R}^{c}M$-modules (Prop. \ref{prop:base_change_quillen}) and parametrized $R$-modules (the case of $R = S$ is \cite{MS}*{12.6.7} and the general case follows since stable equivalences and $s$-fibrations of $R$-modules are detected by the forgetful functor to parametrized spectra). 
It follows that the natural transformation of derived functors \[ \mathbf{L} T \mathbf{R} f^* \overset{\eta}{\arr} \mathbf{R} f^* \mathbf{L} f_{!} \mathbf{L} T \mathbf{R} f^* \cong \mathbf{R} f^* \mathbf{L} T \mathbf{L} f_{!} \mathbf{R} f^* \overset{\epsilon}{\arr} \mathbf{R} f^* \mathbf{L} T \] is an isomorphism. As discussed in \cite{shulman_doubles}*{\S7}, this isomorphism of derived functors is represented by the composite \eqref{commutation_transformation}. In particular, $f^* TX \arr f^* R^{s} T X$ is a stable equivalence in this case, since the map $f$ is still the inclusion of a point. When $f$ is a $q$-fibration, we instead consider a level-wise $qf$-fibrant approximation $TX \arr R^{l} TX$. There is a stable equivalence $R^{l} TX \arr R^{s} TX$ under $TX$ \cite{MS}*{12.6.1} and the induced map $f^* R^{l} TX \arr f^* R^{s} TX$ is a stable equivalence because $f^*$ preserves stable equivalences between level-wise $qf$-fibrant spectra. Pullback along $q$-fibrations preserves weak homotopy equivalences of topological spaces, so $f^* TX \arr f^* R^{l} TX$ is a level-wise $q$-equivalence, hence a stable equivalence. Therefore $f^* TX \arr f^* R^{s} TX$ is also a stable equivalence. \end{proof} We return to using bold-face letters to denote derived functors: $\mathbf{T}$ is the left derived functor of $T$ and $\mathbf{F}_{b} = \mathbf{R} i_{b}^*$ is the right derived fiber functor. Recall that the $\Aut_{R}^{c}M$-torsor associated to an $R$-bundle with fiber $M$ is given by \[ \mathbf{E} = \mathbf{E}^{R}(M^{\circ}, -) = \boldsymbol{\mu} \circ \boldsymbol{\Omega}, \] where $\boldsymbol{\mu}$ is the derived functor of Construction \ref{def:subtorsor} and $\boldsymbol{\Omega}$ is the right derived functor of $\Omega = \Omega^{\bullet}_{B}F^{R}(M^{\circ}, - )$. 
\begin{proposition}\label{prop:derived_commute} There are natural isomorphisms of derived functors $\mathbf{F}_{b} \mathbf{T} \cong \mathbf{T} \mathbf{F}_{b}$ and $\mathbf{F}_{b} \mathbf{E} \cong \mathbf{E} \mathbf{F}_{b}$. \end{proposition} \begin{proof} The first isomorphism is Lemma \ref{lemma:derived_commute}. For the second, observe that the canonical isomorphism $i_{b}^* \Omega \cong \Omega i_{b}^*$ descends to a canonical isomorphism of derived functors $\mathbf{F}_{b} \boldsymbol{\Omega} \cong \boldsymbol{\Omega} \mathbf{F}_{b}$ because $i_{b}^*$ and $\Omega$ are both right Quillen. By Lemma \ref{lemma:mu_commutes}, there is a natural isomorphism $\mathbf{F}_{b} \boldsymbol{\mu} \cong \boldsymbol{\mu} \mathbf{F}_{b}$, completing the proof. \end{proof} The cofibrant approximation map $M^{\circ} \arr M$ is a stable equivalence of $R$-modules, so the derived functors $\mathbf{E} = \mathbf{E}^{R}(M^{\circ}, -)$ and $\mathbf{E}^{R}(M, -)$ are canonically isomorphic. To ease the exposition in the next proof we will identify these two functors. \begin{theorem}\label{torsor_bundle_equiv_theorem} The pair of functors $(\mathbf{T}, \mathbf{E})$ defines a bijection between the set of $q$-equivalence classes of $\Aut_{R}^{c} M$-torsors over $B$ and the set of stable equivalence classes of $R$-bundles with fiber $M$ over $B$. \end{theorem} \noindent Along with Theorem \ref{generic_Gbundle_classification} and the equivalence between principal $\Aut_{R}^{c}M$-fibrations and $\Aut_{R}^{c}M$-torsors in Proposition \ref{torsor_principal_fibration_equiv_prop}, this theorem completes the proof of Theorem \ref{thm:classification_of_Rbundles}. \begin{proof} Suppose that $Y$ is an $\Aut_{R}^{c}M$-torsor over $B$.
We will construct a natural transformation of derived functors $\zeta \colon Y \arr \mathbf{E} \mathbf{T} Y$ by showing that the composite of the units of the adjunctions $(\Sigma^{\bullet}_{B}, \Omega^{\bullet}_{B})$ and $(\mathbf{T}, \mathbf{F}^{R}(M, -))$ factors through $\mathbf{E}^{R}(M, \mathbf{T} Y)$ as indicated in the following diagram. \begin{equation}\label{diagram:wanted_factorization} \xymatrix{ Y \ar[r]^-{\eta} \ar@{-->}[drr]_-{\zeta} & \Omega^{\bullet}_{B} \Sigma^{\bullet}_{B} Y \ar[r]^-{\eta} & \Omega^{\bullet}_{B} F^{R}(M, \mathbf{T} Y ) \\ & & \mathbf{E}^{R}(M, \mathbf{T} Y) \ar[u]_{\iota}\\ } \end{equation} By the construction of $\mathbf{E}^{R}(M, -)$, it suffices to show that the factorization exists on $\pi_0$ along each fiber after taking a fibrant approximation, and for this it suffices to show that the factorization exists in the derived category of $\Aut_{R}^{c}M$-modules after applying the derived fiber functor $\mathbf{F}_{b}$. Apply $\mathbf{F}_{b}$ to diagram \eqref{diagram:wanted_factorization} and commute $\mathbf{F}_{b}$ past the constituent functors to the input variable $Y$. Now fix an isomorphism in the derived category $\mathbf{F}_{b} Y \cong \Aut_{R}^{c}M$ and consider the isomorphic diagram with $\mathbf{F}_{b} Y$ replaced by $\Aut_{R}^{c}M$. The composite of the two instances of $\eta$ in this new diagram is the left vertical composite in the following commutative diagram.
\[ \xymatrix{ \Aut_{R}^{c}M \ar[d] \ar[r] & \End_{R} M \ar[d] \\ \Omega^{\bullet} \Sigma^{\bullet}_{+} \Aut_{R}^{c}M \ar[d] \ar[r] & \Omega^{\bullet} \Sigma^{\bullet}_{+} \End_{R} M \ar[d] \\ \Omega^{\bullet} \mathbf{F}^{R}(M, M \sma_{\Sigma^{\bullet}_{+} \Aut_{R}^{c}M} \Sigma^{\bullet}_{+} \Aut_{R}^{c} M) \ar[r] \ar[dr]^{\cong} & \Omega^{\bullet} \mathbf{F}^{R}(M, M \sma_{\Sigma^{\bullet}_{+} \Aut_{R}^{c}M} \Sigma^{\bullet}_{+} \End_{R} M) \ar[d] \\ & \Omega^{\bullet} \mathbf{F}^{R}(M, M) } \] Here the horizontal maps are induced by the composite $\Aut_{R}^{c}M \arr \Aut_{R} M \arr \End_{R} M$ of the cofibrant approximation map and the canonical inclusion. The diagonal map is induced by the action map \[ M \sma_{\Sigma^{\bullet}_{+} \Aut_{R}^{c}M} \Sigma^{\bullet}_{+} \Aut_{R}^{c} M \arr M \] for the right $\Sigma^{\bullet}_{+} \Aut_{R}^{c}M$-module structure on $M$ and it is an isomorphism as indicated. Since $M$ is bifibrant, we may choose to represent the derived functor $\Omega^{\bullet} \mathbf{F}^{R}(M, M)$ in the homotopy category by $\End_{R}M$. A diagram chase involving the triangle identities for the adjunctions $(\Sigma^{\bullet}_{+}, \Omega^{\bullet})$ and $(\mathbf{T}, \mathbf{F}^{R}(M, -))$ shows that the right vertical composite is then the identity map. It follows that the left vertical composite factors through $\Aut_{R} M = E^{R}(M, M)$ via the cofibrant approximation map. This verifies the requested factorization in diagram \eqref{diagram:wanted_factorization}, and so we have constructed the natural transformation $\zeta \colon Y \arr \mathbf{E} \mathbf{T} Y$. As a consequence of the preceding argument, we see that $\mathbf{F}_{b} \zeta$ is equivalent to the cofibrant approximation map $\Aut_{R}^{c} M \arr \Aut_{R} M$. It follows that $\zeta$ is a fiberwise equivalence, and thus induces a natural isomorphism of derived functors. Now let $N$ be an $R$-bundle with fiber $M$.
Define $\xi \colon \mathbf{T} \mathbf{E} N \arr N$ to be the composite \[ \mathbf{T} \Sigma^{\bullet}_{B} \mathbf{E}^{R}(M, N) \overset{\iota}{\arr} \mathbf{T} \Sigma^{\bullet}_{B} \Omega^{\bullet}_{B} \mathbf{F}^{R}(M, N) \overset{\epsilon}{\arr} \mathbf{T} \mathbf{F}^{R}(M, N) \overset{\epsilon}{\arr} N \] of the map induced by the inclusion $\iota \colon \mathbf{E}^{R}(M, N) \arr \Omega^{\bullet}_{B} \mathbf{F}^{R}(M, N)$ followed by the counits for the adjunctions $(\Sigma^{\bullet}_{B}, \Omega^{\bullet}_{B})$ and $(\mathbf{T}, \mathbf{F}^{R}(M, -))$. After applying the derived fiber functor $\mathbf{F}_{b}$, commuting it through to the variable $N$, and using a chosen equivalence $\mathbf{F}_{b} N \simeq M$, an argument similar to that just given for $\zeta$ proves that $\mathbf{F}_{b} \xi$ is a fiberwise equivalence. Hence $\xi$ also induces a natural isomorphism of derived functors. \end{proof} \section{Lifted $R$-bundles and algebraic $K$-theory}\label{proving_main_theorem_section} In this section we will prove Theorem \ref{main_theorem}. Having used diagram spaces to prove the classification theorem for $R$-bundles with fiber $M$, we now return to the category of spaces for the following discussion. The arguments are adapted from \citelist{\cite{Kar} \cite{BDR}}. Let $X$ be a finite CW complex and let $R$ be a connective cofibrant orthogonal ring spectrum. Let \[ \mGL_{n}R = \mathbb{Q}_{*} \Aut_{R}^{c}(R^{\vee n}) \] be the grouplike $A_{\infty}$ space associated to a $q$-cofibrant approximation of the grouplike $\mathcal{I}$-monoid $\Aut_{R}(R^{\vee n})$. By Theorem \ref{thm:classification_of_Rbundles}, the classifying space $B \mGL_{n}R$ classifies stable equivalence classes of $R$-bundles with fiber $R^{\vee n}$. Let $B \mGL_{\infty} R = \colim_{n} B \mGL_n R$.
Recall the following description of the zeroth space of the algebraic $K$-theory spectrum of $R$: \[ \Omega^{\infty} K(R) \simeq K_0 R \times B \mGL_{\infty} R^{+}. \] The group $K_0 R = K^{f}_0 \pi_0 R$ is the algebraic $K$-theory of free $\pi_0 R$-modules, and the plus denotes Quillen's plus construction with respect to the commutator subgroup of $\pi_1 B \mGL_{\infty} R$. Since the plus construction changes the homotopy type in general, we will need to work with lifted bundles, in the following sense. \begin{definition} A lifted $R$-bundle over $X$ is the data of: \begin{itemize} \item[(i)] An $H_*$-acyclic fibration $p \colon Y \arr X$ of CW complexes, by which we mean a $q$-fibration with $\widetilde{H}_{*}(\mathrm{fiber}(p);\mathbf{Z}) = 0$. \item[(ii)] An $R$-bundle $E$ over $Y$. \end{itemize} We say that a lifted $R$-bundle $(E, Y, p)$ over $X$ is free if every fiber of $E$ admits a stable equivalence of $R$-modules $E_{y} \simeq R^{\vee n}$ for some $n$. \end{definition} Define a relation on lifted $R$-bundles over $X$ by declaring $(E, Y, p) \sim (E', Y', p')$ if there exists a map $f \colon Y \arr Y'$ over $X$ such that the induced map of $R$-modules $E \arr f^*E'$ over $Y$ is a stable equivalence. This does \emph{not} define an equivalence relation in general, so we will work with the equivalence relation on lifted $R$-bundles over $X$ generated by $\sim$. We assume from now on that $X$ is a finite CW complex. Let $\Phi_{R}(X)$ be the set of equivalence classes of lifted free $R$-bundles over $X$. The set $\Phi_{R}(X)$ is an abelian monoid, where the sum $(E_1, Y_1) \oplus (E_2, Y_2)$ of two lifted $R$-bundles over $X$ is the lifted $R$-bundle \[ (g_1^*E_1 \vee_{Z} g_2^*E_2, Z), \quad \text{where $Z$ is the pullback} \quad \xymatrix{ Z \ar[r]^{g_2} \ar[d]_{g_1} & Y_2 \ar[d] \\ Y_1 \ar[r] & X } \] The zero of $\Phi_{R}(X)$ is the trivial $R$-bundle $(\ast_{X}, X)$ over $X$. Let $\overline{K}_{R}(X)$ be the Grothendieck group of the monoid $\Phi_{R}(X)$.
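The Grothendieck group construction used here is the standard one for abelian monoids; we record it for convenience. Given an abelian monoid $(A,\oplus)$, its Grothendieck group is the set of formal differences \[ K(A) = (A\times A)/\sim, \qquad (a,b)\sim(c,d) \iff a\oplus d\oplus e = b\oplus c\oplus e \text{ for some } e\in A, \] with $[(a,b)]$ playing the role of $a - b$; for instance, the Grothendieck group of $(\mathbb{N},+)$ is $\mathbf{Z}$. Thus an element of $\overline{K}_{R}(X)$ is a formal difference of classes of lifted free $R$-bundles over $X$.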
We say that a lifted $R$-bundle is virtually trivial if there exists a space $T$ such that $\widetilde{H}_*(T ; \mathbf{Z}) = 0$ and a map $f \colon Y \arr T$ (not necessarily over $X$) along with an $R$-bundle $(E', T)$ over $T$ and a stable equivalence of $R$-bundles $E \simeq f^*E'$. \begin{lemma}\label{virtual_inverse_lemma} Let $(E_1, Y_1)$ be a lifted free $R$-bundle over $X$. Then there exists a lifted free $R$-bundle $(E_2, Y_2)$ over $X$ such that $(E_1, Y_1) \oplus(E_2, Y_2)$ is virtually trivial. \end{lemma} \begin{proof} Let $f_1 \colon Y_1 \arr B \mGL_n R$ be a classifying map for $E_1$. Let $P$ be the homotopy fiber of the $H_*$-acyclic fibration $Y_1 \arr X$. By \cite{haus_hus}*{1.3}, the kernel of $\pi_1 Y_1 \arr \pi_1 X$ is the perfect normal subgroup $\im (\pi_1 P \arr \pi_1 Y_1)$. This is annihilated by the following map to the plus construction: \[ \pi_1 P \arr \pi_1 Y_1 \overset{f_1}{\arr} \pi_1 B\mGL_nR \arr \pi_1 B\mGL_nR^{+}. \] By \cite{haus_hus}*{3.1}, $f_1$ descends to a map $g_1 \colon X \arr B\mGL_nR^{+}$. Use the grouplike $H$-space structure on $B \mGL_{\infty} R^{+}$ to find $g_2 \colon X \arr B\mGL_m R^{+}$ such that $g_1 \oplus g_2 \colon X \arr B \mGL_{m + n} R^{+}$ is nullhomotopic. Define $Y_2$ as the following pullback: \[ \xymatrix{ Y_2 \ar[r]^-{f_2} \ar[d] & B \mGL_m R \ar[d] \\ X \ar[r]^-{g_2} & B \mGL_m R^{+} } \] We choose a model for the plus construction such that the right vertical map (and thus the left vertical map) is a $q$-fibration of CW complexes. Let $E_2$ be the free $R$-bundle over $Y_2$ classified by the map $f_2$. The sum $(E_1, Y_1) \oplus (E_2, Y_2)$ is a lifted $R$-bundle over the pullback $Y = Y_1 \times_{X} Y_2$ that is classified by a lift $f \colon Y \arr B \mGL_{m + n} R$ of $g_1 \oplus g_2$. Since $g_1 \oplus g_2$ is nullhomotopic, the composite of $f$ with the map to the plus construction is nullhomotopic, and so $f$ factors up to homotopy through the $H_*$-acyclic homotopy fiber of $B \mGL_{m + n} R \arr B \mGL_{m + n} R^{+}$, proving that $(E_1, Y_1) \oplus (E_2, Y_2)$ is virtually trivial.
\end{proof} \begin{lemma} Suppose that $(E, Y)$ is a virtually trivial lifted $R$-bundle over $X$. Then there exists a lifted $R$-bundle $(r^* M, Y')$ over $X$ that is equivalent to $(E, Y)$ as a lifted $R$-bundle: $[(E, Y)] = [(r^* M, Y')]$ in $\Phi_{R}(X)$. If $E$ is a free $R$-bundle, then $M = R^{\vee n}$ for some $n$. \end{lemma} \begin{proof} We are given $f \colon Y \arr T$ where $\widetilde{H}_{*}(T) = 0$ and a stable equivalence $E \simeq f^* E'$ where $E'$ is an $R$-bundle over $T$. Choose a point $p \colon * \arr T$. Consider the following commutative diagram: \[ \xymatrix{ & Y \ar[dl]_{g} \ar[d]_{\tau} \ar[dr]^{f} & \\ X & T \times X \ar[l]_{\pi_2} \ar[r]^{\pi_1} & T \\ & X \ar[ul]^{\id} \ar[u]^{\chi} \ar[r]_{r} & \ast \ar[u]_{p} } \] where $\tau(y) = (f(y), g(y))$, $\chi(x) = (p, x)$. The maps $g, \pi_2$ and $\id$ are all $H_*$-acyclic fibrations. Form the $R$-bundle $\pi_1^* E'$ over $T \times X$. Then we have a stable equivalence of $R$-bundles over $Y$: $\tau^* \pi_1^* E' = f^* E' \simeq E$. On the other hand $\chi^* \pi_1^* E' = r^* p^* E'$, which is a trivial bundle over $X$ with fiber $M = p^* E'$, since $p \circ r$ factors through a point. The two triangles on the left show that $(E, Y) \sim (\pi_1^* E', T \times X)$ and $(\pi_1^* E', T \times X) \sim (r^* M, X)$. \end{proof} Let $\psi \colon \overline{K}_R(X) \arr [X, K_0(R)]$ be the extension of the map of monoids $\Phi_R(X) \arr [X, K_0(R)]$ that takes a lifted free $R$-bundle to the class of the fiber in $K_0(R) = K_0^{f}(\pi_0 R)$ over each component. Here $K_0(R)$ is a discrete space. There is a natural splitting \[ \overline{K}_{R}(X) \cong \ker \psi \oplus [X, K_0(R)]. \] Let $\Phi_R^n(X)$ be the set of equivalence classes of lifted free $R$-bundles of rank $n$. \begin{proposition} There is a natural isomorphism \[ \ker \psi \cong \colim_{n} \Phi_{R}^{n}(X). \] \end{proposition} \begin{proof} Suppose that $[E] - [F]$ is a formal difference of lifted free $R$-bundles in $\ker \psi$.
We associate to $[E] - [F]$ the element $[E \oplus F'] \in \colim_{n} \Phi_{R}^{n}(X)$ where $F'$ is a lifted free $R$-bundle such that $F \oplus F'$ is virtually trivial (Lemma \ref{virtual_inverse_lemma}). Conversely, to a class $[E] \in \Phi^{n}_{R}(X)$ we associate the formal difference $[E] - [T_n] \in \ker \psi$, where $T_n = r^* R^{\vee n}$ is the trivial bundle of rank $n$. \end{proof} \begin{proposition} There is a natural isomorphism \[ \colim_{n} \Phi_{R}^{n}(X) \cong [X, B\mGL_{\infty}(R)^{+}]. \] \end{proposition} \begin{proof} Given the class of a lifted free $R$-bundle $(E, Y)$ over $X$ in $\colim_{n} \Phi_{R}^{n}(X)$, the arguments of Lemma \ref{virtual_inverse_lemma} show that the classifying map $f$ of $E$ extends to a map $g$ from $X$ to the plus construction: \[ \xymatrix{ Y \ar[r]^-{f} \ar[d]_{p} & B \mGL_{n} R \ar[d] \\ X \ar[r]^-{g} & B \mGL_n R^{+} } \] Conversely, given a classifying map $g$ define $Y$ as the pullback displayed in the same diagram. Then $p$ is an $H_*$-acyclic fibration and $f$ classifies a lifted free $R$-bundle $(E, Y)$ over $X$. \end{proof} Altogether, we have proved: \[ \overline{K}_{R}(X) \cong [X, K_0(R)] \oplus [X, B \mGL_{\infty}(R)^{+}] \cong [X, \Omega^{\infty}K(R)]. \] This completes the proof of Theorem \ref{main_theorem}. \begin{bibdiv} \begin{biblist} \bib{ABG1}{article}{ title={Twists of $K$-theory and $tmf$} author={M. Ando} author={A.J. Blumberg} author={D. Gepner} journal={Proc. Sympos. Pure Math.} volume={81} pages={27--63} publisher={Amer. Math. Soc.} date={2010} } \bib{ABG2}{article}{ title={Parametrized spectra, multiplicative Thom spectra, and the twisted Umkehr map} author={M. Ando} author={A.J. Blumberg} author={D. Gepner} journal={arXiv:1112.2203 [math.AT]} } \bib{ABGHR1}{article}{ title={Units of ring spectra, orientations, and Thom spectra via rigid infinite loop space theory} author={M. Ando} author={A.J. Blumberg} author={D. Gepner} author={M.J. Hopkins} author={C. Rezk} journal={J.
Topol.} volume={7} date={2014} number={4} pages={1077--1117} } \bib{ABGHR2}{article}{ title={An $\infty$-categorical approach to R-line bundles, R-module Thom spectra, and twisted R-homology} author={M. Ando} author={A.J. Blumberg} author={D. Gepner} author={M.J. Hopkins} author={C. Rezk} journal={J. Topol.} volume={7} date={2014} number={3} pages={869--893} } \bib{BCS}{article}{ title={Topological Hochschild homology of Thom spectra and the free loop space} author={A.J. Blumberg} author={R.L. Cohen} author={C. Schlichtkrull} journal={Geom. Topol.} volume={14} number={2} year={2010} pages={1165-1242} } \bib{BDR}{book}{ title={Two-vector bundles and forms of elliptic cohomology} author={N. Baas} author={B. Dundas} author={J. Rognes} date={2004} series={London Mathematical Society Lecture Notes} volume={308} pages={18--45} } \bib{BDRR}{article}{ title={Stable bundles over rig categories} author={N. Baas} author={B. Dundas} author={B. Richter} author={J. Rognes} journal={J. Topol.} volume={4} year={2011} number={3} pages={623--640} } \bib{CL}{article}{ author={J.A. Campbell} author={J.A. Lind} title={From 2-vector bundles to parametrized $ku$-bundles} journal={in preparation} } \bib{cohen_jones}{article}{ title={Gauge theory and string topology} author={R. Cohen} author={J.D.S. Jones} journal={arXiv:1304.0613} } \bib{cohen_jonesII}{article}{ title={Homotopy automorphisms of R-module bundles, and the K-theory of string topology} author={R. Cohen} author={J.D.S. Jones} journal={arXiv:1310.4797} } \bib{EKMM}{book}{ title={Rings, modules, and algebras in stable homotopy theory} author={A.D. Elmendorf} author={I. Kriz} author={M.A. Mandell} author={J.P. May} date={1997} series={Mathematical Surveys and Monographs} volume={47} publisher={American Mathematical Society} } \bib{haus_hus}{article}{ title={Acyclic maps} author={J.C. Hausmann} author={D. Husemoller} journal={Enseign. 
Math.} volume={25} date={1979} pages={53--75} } \bib{Kar}{book}{ title={Homologie cyclique et $K$-th\'eorie} author={M. Karoubi} date={1987} series={Ast\'erisque} volume={149} } \bib{LM07}{article}{ title={Modules in monoidal model categories} author={L.G. Lewis Jr.} author={M.A. Mandell} journal={J. Pure Appl. Alg.} volume={210} date={2007} pages={395--421} } \bib{diagram_spaces}{article}{ title={Diagram spaces, diagram spectra, and spectra of units} author={J. A. Lind} journal={Algebr. Geom. Topol.} volume={13} date={2013} number={4} pages={1857--1935} } \bib{HTT}{book}{ title={Higher topos theory} author={J. Lurie} series={Annals of Mathematics Studies} volume={170} publisher={Princeton University Press, Princeton, NJ} year={2009} } \bib{MMSS}{article}{ title={Model categories of diagram spectra} author={M.A. Mandell} author={J.P. May} author={S. Schwede} author={B. Shipley} journal={Proc. London Math. Soc. (3)} volume={82} date={2001} pages={441--512} } \bib{class_and_fib}{book}{ title={Classifying spaces and fibrations} author={J.P. May} date={1975} series={Memoirs Amer. Math. Soc.} volume={155} } \bib{E_infty_rings}{book}{ title={$E_{\infty}$ ring spaces and $E_{\infty}$ ring spectra (with contributions by F. Quinn, N. Ray, and J. Tornehave)} author={J.P. May} date={1977} series={Springer Lecture Notes in Mathematics} volume={577} } \bib{geom_infinite_loops}{book}{ title={The geometry of iterated loop spaces} author={J.P. May} date={1972} series={Springer Lecture Notes in Mathematics} volume={271} } \bib{MS}{book}{ title={Parametrized homotopy theory} author={J.P. May} author={J. Sigurdsson} date={2006} series={Mathematical Surveys and Monographs} volume={132} publisher={American Mathematical Society} } \bib{NSS1}{article}{ title={Principal $\infty$-bundles--general theory} author={T. Nikolaus} author={U. Schreiber} author={D. Stevenson} journal={arXiv:1207.0248} } \bib{NSS2}{article}{ title={Principal $\infty$-bundles--presentations} author={T.
Nikolaus} author={U. Schreiber} author={D. Stevenson} journal={arXiv:1207.0249} } \bib{RS}{article}{ author={D. Roberts} author={D. Stevenson} title={Simplicial principal bundles in parametrized spaces} journal={arXiv:1203.2460} } \bib{SS}{article}{ title={Diagram spaces and symmetric spectra} author={C. Schlichtkrull} author={S. Sagave} journal={Adv. Math.} volume={231} date={2012} number={3--4} pages={2116--2193} } \bib{SS_groupcompletion}{article}{ title={Group completion and units in I-spaces} author={C. Schlichtkrull} author={S. Sagave} journal={Algebr. Geom. Topol.} volume={13} date={2013} number={2} pages={625--686} } \bib{schwede_shipley}{article}{ title={Algebras and modules in monoidal model categories} author={S. Schwede} author={B. Shipley} journal={Proc. London Math. Soc. (3)} volume={80} date={2000} pages={491--511} } \bib{shulman_doubles}{article}{ title={Comparing composites of left and right derived functors} author={M. Shulman} journal={New York J. Math} volume={17} year={2011} pages={75-125} } \bib{shulman_hocolim}{article}{ title={Homotopy limits and colimits and enriched homotopy theory} author={M. Shulman} journal={arXiv:math/0610194} } \bib{stasheff}{article}{ title={A classification theorem for fibre spaces} author={J. Stasheff} journal={Topology} volume={2} date={1963} pages={239--246} } \bib{stevenson}{article}{ author={D. Stevenson} title={Classifying theory for simplicial parametrized groups} journal={arXiv:1203.2461} } \bib{wendt}{article}{ title={Classifying spaces and fibrations of simplicial presheaves} author={M. Wendt} journal={J. Homotopy Relat. Struct.} volume={6} date={2011} number={1} pages={1--38} } \end{biblist} \end{bibdiv} \end{document}
https://arxiv.org/abs/1304.5676
Bundles of spectra and algebraic K-theory
A parametrized spectrum E is a family of spectra E_x continuously parametrized by the points x of a topological space X. We take the point of view that a parametrized spectrum is a bundle-theoretic geometric object. When R is a ring spectrum, we consider parametrized R-module spectra and show that they give cocycles for the cohomology theory determined by the algebraic K-theory K(R) of R in a manner analogous to the description of topological K-theory K^0(X) as the Grothendieck group of vector bundles over X. We prove a classification theorem for parametrized spectra, showing that parametrized spectra over X whose fibers are equivalent to a fixed R-module M are classified by homotopy classes of maps from X to the classifying space BAut_R(M) of the A_\infty space of R-module equivalences from M to M. In proving the classification theorem for parametrized spectra, we define the notion of a principal G fibration where G is an A_\infty space and prove a similar classification theorem for principal G fibrations.
https://arxiv.org/abs/1508.06241
Fractional Perimeter and Nonlocal Minimal Surfaces
This Master's thesis presents a study of the basic properties of the s-fractional perimeter and of the regularity theory of the corresponding s-minimal sets. In particular, we give full detailed proofs for all the Theorems contained in the article "Nonlocal Minimal Surfaces" by Caffarelli, Roquejoffre and Savin, where these s-minimal sets were introduced and studied for the first time.
\chapter*{Introduction} This thesis presents a study of the basic properties of the fractional $s$-perimeter and of the regularity theory of the corresponding $s$-minimal sets.\\ The fractional $s$-perimeter arises naturally in nonlocal phase transition problems (e.g., as the $\Gamma$-limit of a nonlocal version of the Ginzburg-Landau energy, \cite{Phase}) and the related notion of fractional mean curvature appears in nonlocal evolution equations for surfaces (e.g., in \cite{Cso} and \cite{CMP}).\\ Given an open set $\Omega\subset\R$, we can define the fractional $s$-perimeter of a measurable set $E\subset\R$ in $\Omega$, with $s\in(0,1)$, as the functional \begin{equation*}\begin{split} P_s(E,\Omega)&:=\Ll_s(E\cap\Omega,\Co E\cap\Omega) +\Ll_s(E\cap\Omega,\Co E\setminus\Omega)\\ & \qquad+\Ll_s(E\setminus\Omega,\Co E\cap\Omega), \end{split} \end{equation*} where \begin{equation*} \Ll_s(A,B):=\int_A\int_B\frac{1}{\kers}dx\,dy, \end{equation*} for every pair of disjoint sets $A,\,B\subset\R$.\\ We simply write $P_s(E)=P_s(E,\R)$ when $\Omega=\R$. Formally, this coincides with \begin{equation*} P_s(E,\Omega)=\frac{1}{2}\big([\chi_E]_{W^{s,1}(\R)}-[\chi_E]_{W^{s,1}(\Co \Omega)}\big), \end{equation*} where $[u]_{W^{s,1}}$ denotes the Gagliardo seminorm of $u$ in the Sobolev space $W^{s,1}$. We neglect the interactions coming from $\Co\Omega$ because these might be infinite; moreover, since in the end we are interested in the minimization of $P_s(F,\Omega)$ among all sets $F\subset\R$ with fixed `boundary data' $F\setminus\Omega=E_0\setminus\Omega$, they would not contribute to the minimization anyway. The $s$-perimeter is a nonlocal functional in the sense that $P_s(E,\Omega)$ is not determined by the behavior of $E$ in a neighborhood of $\Omega$ alone: modifying $E$ arbitrarily far from $\Omega$ changes the value of $P_s(E,\Omega)$.
Moreover, the $s$-perimeter can be thought of as a fractional perimeter, in the sense that $P_s(E,\Omega)$ can be finite even when the Hausdorff dimension of $\partial E$ is strictly bigger than $n-1$ (see below for more details).\\ \begin{section}*{Nonlocal Minimal Surfaces} The main part of the thesis is devoted to the study of $s$-minimal sets and their regularity properties. We follow the paper \cite{CRS}, where $s$-minimal sets were introduced and studied for the first time. In particular, we give full detailed proofs for all the Theorems of \cite{CRS}. A set $E\subset\R$ is $s$-minimal in $\Omega$ if \begin{equation*} P_s(E,\Omega)\leq P_s(F,\Omega)\quad\textrm{for every }F\subset\R\textrm{ s.t. }F\setminus\Omega=E\setminus\Omega. \end{equation*} Once we fix the exterior data $E_0\setminus\Omega$, the existence of an $s$-minimal set $E$ coinciding with $E_0$ outside $\Omega$ is obtained through the direct method of the Calculus of Variations. Namely, a fractional Sobolev inequality guarantees the compactness of a minimizing sequence, while Fatou's Lemma is enough to obtain the lower semicontinuity. Then an interesting problem consists in studying the regularity of $\partial E\cap\Omega$.\\ This is done using techniques similar to those employed in the classical framework. As a first step we obtain uniform density estimates for $s$-minimal sets.\\ An important consequence is the locally uniform convergence of minimizers, which is a fundamental tool in many proofs. Moreover the uniform density estimates guarantee a clean ball condition.\\ To be more precise, this means that if $E$ is $s$-minimal in $\Omega$ and $x\in\partial E$, with $B_r(x)\subset\Omega$, then there exist balls \begin{equation*} B_{cr}(y_1)\subset E\cap B_r(x),\qquad B_{cr}(y_2)\subset\Co E\cap B_r(x), \end{equation*} for some universal constant $c$.
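Before proceeding, it may help to see the $s$-perimeter computed in the simplest possible case. For $n=1$ and $E=(0,1)$, an elementary computation (not taken from \cite{CRS}) gives \begin{equation*} P_s(E)=\int_0^1\frac{x^{-s}+(1-x)^{-s}}{s}\,dx=\frac{2}{s(1-s)}, \end{equation*} since the inner integral of the kernel over $\Co E$ can be evaluated exactly. The following numerical sketch (the function name and grid size are illustrative choices) verifies the closed form with a midpoint rule:

```python
import numpy as np

def frac_perimeter_interval(s, N=200_000):
    """Midpoint-rule estimate of P_s((0,1)) for E = (0,1) in R (n = 1).

    The inner integral over the complement is computed exactly:
        int_{CE} |x-y|^{-(1+s)} dy = (x^{-s} + (1-x)^{-s}) / s,
    which leaves a 1-d integral with integrable endpoint singularities.
    """
    x = (np.arange(N) + 0.5) / N            # midpoints of a uniform grid on (0,1)
    integrand = (x ** (-s) + (1.0 - x) ** (-s)) / s
    return integrand.mean()                 # midpoint rule: mean value times |(0,1)|

if __name__ == "__main__":
    s = 0.5
    print(frac_perimeter_interval(s))       # approximately 8
    print(2.0 / (s * (1.0 - s)))            # exact value 2/(s(1-s)) = 8
```

Note that $P_s(E)$ blows up as $s\to1^-$; it is the rescaled quantity $(1-s)P_s$ that recovers, up to a dimensional constant, the classical perimeter in the limit.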
\begin{subsection}*{Euler-Lagrange Equation} We prove that a set $E$ which is $s$-minimal in $\Omega$ satisfies the Euler-Lagrange equation \begin{equation*} \I_s[E](x)=0,\quad x\in\partial E\cap\Omega \end{equation*} in the viscosity sense. Here $\I_s[E](x)$ denotes the $s$-fractional mean curvature of $\partial E$ at $x$, \begin{equation*} \I_s[E](x):=P.V.\int_{\R}\frac{\chi_E(y)-\chi_{\Co E}(y)}{\kers}dy. \end{equation*} Roughly speaking, if we think that $(\chi_E-\chi_{\Co E})(x_0)=0$ for every $x_0\in\partial E$, the Euler-Lagrange equation can be thought of as \begin{equation*} (-\Delta)^\frac{s}{2}(\chi_E-\chi_{\Co E})=0\quad\textrm{along }\partial E\cap\Omega, \end{equation*} in the viscosity sense. This is quite a difficult and delicate result because of the many estimates involved.\\ First of all we remark that we can define the fractional mean curvature only in the principal value sense, \begin{equation*} \I_s[E](x)=\lim_{\rho\to0}\I_s^\rho[E](x),\qquad\I_s^\rho[E](x):=\int_{\Co B_\rho(x)}\frac{\chi_E(y)-\chi_{\Co E}(y)}{\kers}dy, \end{equation*} since the integrand is not in $L^1$. Moreover we need to require some sort of `cancellation' between $E$ and $\Co E$ to guarantee that the limit exists. In particular, following \cite{curvature}, we show that asking $E$ to have both an interior and an exterior tangent paraboloid at $x\in\partial E$ is enough. Since, a priori, we do not know anything about the regularity of the boundary of an $s$-minimal set, this explains why we obtain the equation only in the viscosity sense. Namely, we prove the following \begin{teo} Let $E$ be $s$-minimal in the open set $\Omega$. If $x\in\partial E\cap \Omega$ and $E\cap\Omega$ has an interior tangent ball at $x$, then \begin{equation}\label{resume980} \limsup_{\delta\to0}\I_s^\delta[E](x)\leq0. \end{equation} \end{teo} Similarly with exterior tangent balls.
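To illustrate the definition, we point out a standard observation (a sanity check, recorded here for convenience): hyperplanes have zero fractional mean curvature. Let $H=\{x_n<0\}$ and consider the point $0\in\partial H$. The reflection $y\mapsto y^*:=(y_1,\dots,y_{n-1},-y_n)$ preserves $|y|$ and exchanges $H$ and $\Co H$ up to a set of measure zero, so the integrand in the definition of the curvature is odd under this reflection and every truncated integral vanishes, \begin{equation*} \I_s^\rho[H](0)=\int_{\Co B_\rho}\frac{\chi_H(y)-\chi_{\Co H}(y)}{|y|^{n+s}}\,dy=0\quad\textrm{for every }\rho>0, \end{equation*} so that $\I_s[H](0)=0$. In particular half-spaces satisfy the Euler-Lagrange equation at every boundary point, consistently with the fact that half-spaces are $s$-minimal.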
Therefore a first difficulty comes from the limit defining the principal value.\\ To obtain the Euler-Lagrange equation, a natural thing to do would be to look, for example, at the ratios \begin{equation*} \frac{1}{|A\setminus E|}\big(P_s(E\cup A,\Omega)-P_s(E,\Omega)\big), \end{equation*} where the `perturbing' set $A$ is a small neighborhood of some point $x_0\in\partial E$. We can think for simplicity that $A=B_r(x_0)\subset\Omega$. Then we expect that letting $|A|\to0$ gives the Euler-Lagrange equation.\\ Carrying out the computation of the ratio gives \begin{equation*} -\frac{1}{|A\setminus E|}\int_{A\setminus E}\Big(\int_{\R}\frac{\chi_E(y)-\chi_{\Co(E\cup A)}(y)}{\kers}\,dy\Big)dx. \end{equation*} Roughly speaking, since $A\searrow\{x_0\}$ as $|A|\to0$, we can think that the inner integral converges to the fractional mean curvature, while the outer integral `disappears' in the limit. However, carrying out all the estimates involved is really difficult, even when $A$ is a ball, mainly because we cannot control what sort of cancellation we have, if any, between $E$ and $\Co (E\cup A)$ in the inner integral. Also, as remarked above, we do not even know if the fractional mean curvature at $x_0$ is well defined. Therefore to obtain inequality $(\ref{resume980})$ we consider a very particular kind of perturbation. Namely, we exploit the existence of an interior tangent ball $B$ to define a small perturbing set, which is symmetric in an appropriate sense.\\ Exploiting the symmetry of this construction, we can control all the error terms.\\ We remark that, even for these particular perturbations, the estimates are really delicate.\\ In any case we also prove, following \cite{CMP}, that the fractional mean curvature gives the first variation of the fractional perimeter, at least when we consider regular sets. To be more precise, let $E$ be a bounded open set with $C^2$ boundary.
If $\Phi_t:\R\to\R$ is a one-parameter family of $C^2$-diffeomorphisms which is $C^2$ also in $t$ and $\Phi_0=Id$, then \begin{equation*} \frac{d}{dt}P_s(\Phi_t(E))\Big|_{t=0}=-\int_{\partial E}\I_s[E](x)\nu_E(x)\cdot\phi(x)\,d\Han(x), \end{equation*} where $\phi(x):=\frac{\partial}{\partial t}\Phi_t(x)\big|_{t=0}$. \end{subsection} \begin{subsection}*{Regularity} The remaining part of the thesis is a careful study of the `basic' regularity properties of $\partial E$. We remark that if $E$ is $s$-minimal in $\Omega$, then it is $s$-minimal also in every $\Omega'\subset\Omega$. Thus, when we want to study the regularity of $\partial E$ in the neighborhood of some point $x\in\partial E$, using translations and dilations we can reduce to the case of a set $E$, which is $s$-minimal in $B_1$ and s.t. $0\in\partial E$. \begin{subsubsection}{Improvement of Flatness} One of the fundamental results is the following \begin{teo}\label{reg} Let $\alpha\in(0,s)$. There exists $\epsilon_0=\epsilon_0(n,s,\alpha)>0$ s.t. if $E$ is $s$-minimal in $B_1$, with $0\in\partial E$ and \begin{equation*} \partial E\cap B_1\subset\{|x_n|\leq\epsilon_0\}, \end{equation*} then $\partial E\cap B_{1/2}$ is a $C^{1,\alpha}$ surface. \end{teo} \noindent The proof (which is quite long and technical) relies on an improvement of flatness technique, in the style of De Giorgi. Roughly speaking the idea consists in showing that if $\partial E$ is contained in some small cylinder, in a neighborhood of $x_0\in\partial E$, then in a smaller neighborhood it is actually contained in a flatter cylinder, up to a change of coordinates. Then a compactness argument shows that, if the height of the first cylinder is small enough, we can go on inductively, finding flatter and flatter cylinders. In the end, since we are controlling the oscillation of $\partial E$ in smaller and smaller neighborhoods of $x_0$, we obtain our $C^{1,\alpha}$ regularity.\\ Actually the proof is more delicate. 
Indeed, mainly because of the nonlocality of the fractional perimeter, we also need to control what happens far from the point $x_0$. \end{subsubsection} \begin{subsubsection}{Monotonicity Formula} The next step consists in proving a monotonicity formula for a `localized' version of the fractional perimeter functional, obtained through an extension technique introduced in \cite{extension}. To be more precise, let $u:=\chi_E-\chi_{\Co E}$ and consider the function $\tilde{u}:\mathbb{R}^{n+1}_+\to\mathbb{R}$ which solves \begin{equation*} \left\{\begin{array}{cc} \textrm{div}(z^{1-s}\nabla\tilde{u})=0&\textrm{in }\mathbb{R}^{n+1}_+,\\ \tilde{u}=u&\textrm{on }\{z=0\}, \end{array}\right. \end{equation*} where \begin{equation*} \mathbb{R}^{n+1}_+=\{(x,z)\in\mathbb{R}^{n+1}\,|\,x\in\R,\,z>0\}. \end{equation*} Let $a:=1-s$. We use capital letters, like $X$, to denote points in $\mathbb{R}^{n+1}$. We remark that the first equation above is the Euler-Lagrange equation for the functional \begin{equation*} \mathcal{E}(\tilde{u})=\int_{\{z>0\}}|\nabla\tilde{u}|^2z^a\,dX. \end{equation*} We relate this energy to the fractional perimeter, showing in particular the following \begin{prop} The set $E$ is $s$-minimal in $B_1$ if and only if the extension $\tilde{u}$ of $u=\chi_E-\chi_{\Co E}$ satisfies \begin{equation*} \int_{\Omega\cap\{z>0\}}|\nabla\bar{v}|^2z^a\,dX\geq\int_{\Omega\cap\{z>0\}}|\nabla\tilde{u}|^2z^a\,dX, \end{equation*} for all bounded open sets $\Omega$ with Lipschitz boundary s.t. $\Omega\cap\{z=0\}\subset\subset B_1$ and all functions $\bar{v}$ that equal $\tilde{u}$ in a neighborhood of $\partial\Omega$ and take the values $\pm1$ on $\Omega\cap\{z=0\}$. \end{prop} Notice that asking $\bar{v}$ to take only the values $\pm1$ on $\Omega\cap\{z=0\}$ corresponds to asking that $\bar{v}$ be of the form $\chi_F-\chi_{\Co F}$, for some set $F$, on $\Omega\cap\{z=0\}$.
Roughly speaking, this means that using the extension $\tilde{u}$ we can reduce the minimization problem of the fractional perimeter to a PDE problem in $\mathbb{R}^{n+1}_+$.\\ Finally, exploiting the extension $\tilde{u}$ of $\chi_E-\chi_{\Co E}$ we can define the `localized' energy we were looking for. To be more precise, if $E$ is $s$-minimal in $B_R$ we define the rescaled functional \begin{equation*} \Phi_E(r):=\frac{1}{r^{n+a-1}}\int_{\mathcal{B}_r^+}|\nabla\tilde{u}|^2z^a\,dX,\quad\textrm{for }r\in(0,R). \end{equation*} We remark that rescaling guarantees that $\Phi_{\lambda E}(\lambda r)=\Phi_E(r)$. The monotonicity formula then says that $\Phi_E(r)$ is increasing. \end{subsubsection} \begin{subsubsection}{Blow-up and Cones} The functional $\Phi_E$ is a fundamental tool to study the regularity of $\partial E$. Indeed, exploiting the monotonicity formula, we can study the limit of the blow-ups $\lambda E$ as $\lambda\to\infty$, showing that it is a cone $C$, which is locally $s$-minimal in $\R$.\\ We call $C$ a tangent cone.\\ To be more precise, we prove that $\Phi_E$ is constant if and only if $\tilde{u}$ is homogeneous of degree 0. In particular, since the trace of $\tilde{u}$ on $\{z=0\}$ is $\chi_E-\chi_{\Co E}$, this implies that $E$ is a cone. Now suppose that $\lambda_k E\to C$. Exploiting the scaling property, we can prove that the functional $\Phi_C$ is constant, so $C$ is indeed a cone.\\ Roughly speaking, considering the blow-up $\lambda_k E$ corresponds to zooming in on a neighborhood of $0\in\partial E$. If we see the boundary become flatter and flatter, tending to a plane, then $\partial E$ must be $C^{1,\alpha}$ in a neighborhood of 0. Indeed, using Theorem $\ref{reg}$ and the locally uniform convergence of minimizers, we obtain the following \begin{teo} Let $E\subset\R$ be $s$-minimal in $B_1$ with $0\in\partial E$. If $E$ has a half-space as a tangent cone, then $\partial E$ is a $C^{1,\alpha}$ surface in a neighborhood of 0.
\end{teo} On the other hand, if $C$ is not a half-space, the point is singular.\\ Notice that if $C$ is not a half-space, then $\partial C$ is singular at 0. \end{subsubsection} \begin{subsubsection}{Singular Set} The last part of the thesis studies the dimension of the singular set of $\partial E$, i.e. of the subset $\Sigma_E\subset\partial E\cap\Omega$ of points having a singular cone as tangent cone. Adapting the classical dimension reduction argument by Federer, we prove that the singular set has Hausdorff dimension at most $n-3$. \begin{teo} Let $E$ be $s$-minimal in $\Omega$. Then \begin{equation*} \h^d(\Sigma_E)=0\quad\textrm{for every }d>n-3. \end{equation*} \end{teo} \noindent In particular we see that, as we would expect, $\partial E\cap\Omega$ has Hausdorff dimension at most $n-1$, \begin{equation*} \h^d(\partial E\cap\Omega)=0\quad\textrm{for every }d>n-1. \end{equation*} The idea of the dimension reduction argument is the following.\\ Suppose that $C\subset\R$ is a singular $s$-minimal cone, having a singularity also at some $x_0\in\partial C$, $x_0\not=0$. Then, if we blow up at $x_0$ we get another singular $s$-minimal cone $C'$.\\ Now the delicate part consists in showing that $C'=K\times\mathbb{R}$ (up to rotation). Then it is easily seen that $K\subset\mathbb{R}^{n-1}$ is also a singular $s$-minimal cone. Proceeding inductively in this way we reduce the dimension of the ambient space until we end up with a singular $s$-minimal cone $\tilde{K}\subset\mathbb{R}^k$, which is singular only at 0. Finally, since in \cite{cones} it is shown that there are no singular $s$-minimal cones in dimension 2, we obtain our estimate.
\end{subsubsection} \end{subsection} \end{section} \begin{section}*{Original Contributions} In the thesis we study the asymptotics of the $s$-perimeter as $s\to1^-$, obtaining an original result which, in particular, improves a previous theorem of \cite{cafenr}.\\ To be more precise, we obtain the asymptotics for $P_s(E,\Omega)$, for any bounded open set $\Omega$ with Lipschitz boundary, asking minimal regularity on $E$, namely we only require $E$ to have finite (classical) perimeter in a neighborhood of $\Omega$. On the other hand, the result obtained in \cite{cafenr} holds only when $\Omega=B_R$ is a ball and requires $\partial E$ to be $C^{1,\alpha}$.\\ We provide an original example of a set which has finite $s$-perimeter for every $s\in(0,\sigma)$ and infinite $s$-perimeter for every $s\in(\sigma,1)$. To be more precise, we consider the von Koch snowflake $S\subset\mathbb{R}^2$ and we show that its Minkowski dimension coincides with the fractal dimension which can be defined using the fractional perimeter.\\ We use a formula proved in \cite{unifor} to prove in an original way that the fractional curvature is continuous with respect to $C^{1,\alpha}$-convergence of sets.\\ We also remark that we provide full details for all the proofs of \cite{CRS}. In particular we add many details to the proof of the Improvement of Flatness. \begin{subsection}*{Asymptotics as $s\to1$} We state our result, then we sketch an explanation, but first we need to introduce some notation.\\ We split the fractional perimeter into the following two parts \begin{equation*} P_s(E,\Omega)=P_s^L(E,\Omega)+P_s^{NL}(E,\Omega), \end{equation*} where \begin{equation*}\begin{split} &P^L_s(E,\Omega):=\Ll_s(E\cap\Omega,\Co E\cap\Omega)=\frac{1}{2}[\chi_E]_{W^{s,1}(\Omega)},\\ & P^{NL}_s(E,\Omega):=\Ll_s(E\cap\Omega,\Co E\setminus\Omega)+\Ll_s(E\setminus\Omega,\Co E\cap\Omega).
\end{split} \end{equation*} We can think of $P_s^L(E,\Omega)$ as the local contribution to the fractional perimeter, in the sense that it is determined by the behavior of $E$ inside $\Omega$. Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary and let $\bar{d}_\Omega$ denote the signed distance function from $\Omega$, negative inside. Define, for any $\rho\in\mathbb{R}$ with $|\rho|$ small, the open set \begin{equation*} \Omega_\rho:=\{\bar{d}_\Omega<\rho\}. \end{equation*} It can be shown that $\Omega_\rho$ has Lipschitz boundary for every $|\rho|<\alpha$, for some $\alpha>0$ small enough. Notice that $\Omega_\rho\subset\subset\Omega$ when $\rho<0$ and $\Omega\subset\subset\Omega_\rho$ when $\rho>0$. Also notice that for $\rho>0$ \begin{equation*} N_\rho(\partial\Omega)=\Omega_\rho\setminus\overline{\Omega_{-\rho}}=\{-\rho<\bar{d}_\Omega<\rho\} \end{equation*} is an open tubular neighborhood of $\partial\Omega$. Our result is the following \begin{teo} Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary. Then $(i)\quad\quad E\subset\R$ has finite perimeter in $\Omega$ if and only if $P_s(E,\Omega)<\infty$ for every $s\in(0,1)$, and \begin{equation}\label{resume9} \liminf_{s\to1}(1-s)P_s^L(E,\Omega)<\infty. \end{equation} In this case we have \begin{equation}\label{resume6} \lim_{s\to1}(1-s)P_s^L(E,\Omega)=\omega_{n-1}P(E,\Omega). \end{equation} $(ii)\quad$ Suppose that $E$ has finite perimeter in $\Omega_\beta$, for some $0<\beta<\alpha$. Then \begin{equation}\label{resume10} \limsup_{s\to1}(1-s)P_s^{NL}(E,\Omega) \leq2\omega_{n-1}\lim_{\rho\to0^+}P(E,N_\rho(\partial\Omega)). \end{equation} In particular, if $P(E,\partial\Omega)=0$, then \begin{equation} \lim_{s\to1}(1-s)P_s(E,\Omega)=\omega_{n-1}P(E,\Omega). \end{equation} $(iii)\quad$ Let $E$ be as in $(ii)$; then there exists a set $S\subset(-\alpha,\beta)$, at most countable, s.t.
\begin{equation}\label{resume123} \lim_{s\to1}(1-s)P_s(E,\Omega_\delta)=\omega_{n-1}P(E,\Omega_\delta), \end{equation} for every $\delta\in(-\alpha,\beta)\setminus S$. \end{teo} In \cite{cafenr} the authors obtained point (iii) only for $\Omega=B_R$ a ball, asking $C^{1,\alpha}$ regularity of $\partial E$ in $B_R$. They proved the convergence in every ball $B_r$ with $r\in(0,R)\setminus S$, with $S$ at most countable, exploiting uniform estimates. On the other hand, asking $E$ to have finite perimeter in a neighborhood (as small as we want) of the open set $\Omega$ is optimal.\\ Indeed, if $E\subset\R$ is s.t. $(\ref{resume123})$ holds true, then point $(i)$ guarantees that $E$ has finite perimeter in $\Omega_\delta$. In \cite{Gamma} the authors studied the asymptotics as $s\to1$ in the $\Gamma$-convergence sense. In particular, for the proof of a $\Gamma$-limsup inequality, which is typically constructive and by density, they show that if $\Pi$ is a polyhedron, then \begin{equation*} \limsup_{s\to1}(1-s)P_s(\Pi,\Omega) \leq\Gamma_n^*P(\Pi,\Omega)+2\Gamma_n^*\lim_{\rho\to0^+}P(\Pi,N_\rho(\partial\Omega)), \end{equation*} which is $(\ref{resume10})$, once we sum the local part of the perimeter. Their proof relies on the fact that $\Pi$ is a polyhedron to obtain the convergence of the local part of the perimeter, which is then used, like we do (see below), also in the estimate of the nonlocal part. Moreover, to prove that the constant is $\Gamma_n^*=\omega_{n-1}$ they need a delicate approximation result. They also prove, in particular, \begin{equation*} \Gamma-\liminf_{s\to1}(1-s)P_s^L(E,\Omega)\geq\omega_{n-1}P(E,\Omega), \end{equation*} which is a stronger result than our point $(i)$.\\ Our proof relies only on a convergence result by D\'avila which says, roughly speaking, \begin{equation*} (1-s)[u]_{W^{s,1}(\Omega)}\xrightarrow{s\to1}C_n[u]_{BV(\Omega)}, \end{equation*} when $\Omega$ is a bounded open set with Lipschitz boundary.
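As an elementary sanity check of these limits (an illustration added here, not part of the thesis), in dimension $n=1$ everything can be computed by hand: for the interval $E=(0,1)\subset\mathbb{R}$, direct integration of the kernel gives $P_s(E)=\frac{2}{s(1-s)}$, so that $(1-s)P_s(E)=2/s\to2=\omega_0\,P(E)$ as $s\to1$, consistently with $C_1=2\omega_0=2$. The helper names below (`omega_ball`, `P_s_interval`) are ours:

```python
from math import gamma, pi

def omega_ball(d):
    # volume of the d-dimensional unit ball, omega_d = pi^{d/2} / Gamma(d/2 + 1)
    return pi ** (d / 2) / gamma(d / 2 + 1)

def P_s_interval(s):
    # exact s-perimeter of E = (0,1) in R: each of the two tail integrals
    #   int_0^1 dx int_{-inf}^0 (x - y)^{-(1+s)} dy  equals  1 / (s (1 - s))
    return 2.0 / (s * (1.0 - s))

# (1 - s) P_s(E) = 2/s -> omega_0 * P(E) = 1 * 2 = 2 as s -> 1
for s in (0.9, 0.99, 0.999):
    print(s, (1 - s) * P_s_interval(s))
```

The same bookkeeping with the seminorm $[\chi_E]_{W^{s,1}(\mathbb{R})}=2P_s(E)$ recovers D\'avila's limit with constant $C_1=2$.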
In the thesis we explicitly compute the constant in an elementary way, showing that \begin{equation*} C_n=2\omega_{n-1}=2\Ll^{n-1}(B_1), \end{equation*} twice the volume of the $(n-1)$-dimensional unit ball $B_1\subset\mathbb{R}^{n-1}$. Using this result we immediately get the convergence of the rescaled `local' part of the fractional perimeter, $(\ref{resume6})$. Then we approximate the nonlocal part of the perimeter showing that \begin{equation*} P^{NL}_s(E,\Omega)\leq 2P_s^L(E,N_\rho(\partial\Omega))+O(1),\quad\textrm{as }s\to1. \end{equation*} This gives $(ii)$, and $(iii)$ is a simple consequence based on the fact that \begin{equation*} P(E,A)=\h^{n-1}(\partial^*E\cap A), \end{equation*} for every $A\subset\R$, where $\partial^*E$ denotes the reduced boundary of $E$.\\ Now, if we ask $P(E,\Omega_\beta)<\infty$, the set of $\delta\in(-\alpha,\beta)$ s.t. $P(E,\{\bar{d}_\Omega=\delta\})>0$ can be at most countable, proving $(iii)$.\\ We also provide an original example to show that condition ($\ref{resume9}$) is necessary. Namely, we construct a bounded set $E\subset\mathbb{R}$ s.t. $P_s(E)<\infty$ for every $s$, but $P(E)=\infty$. \end{subsection} \begin{subsection}*{Von Koch Snowflake} Before stating our result, we briefly define the Minkowski dimension in an informal way and we sketch the result obtained in \cite{Visintin}. Roughly speaking, to define the Minkowski dimension of a set $\Gamma\subset\R$ in an open set $\Omega\subset\R$, we consider the $\rho$-neighborhoods $N_\rho(\Gamma)$ and we look at the limits \begin{equation*} m_r=\lim_{\rho\to0}\frac{|N_\rho(\Gamma)\cap\Omega|}{\rho^{n-r}},\qquad r\in(0,n]. \end{equation*} Then the dimension is defined as $\Dim_{\mathcal{M}}(\Gamma,\Omega):=\inf\{r\,|\,m_r=0\}$.\\ (We remark that a correct definition is much more delicate: we would have to consider the limsup and the liminf of the ratios, and then take the inf and sup of the quantities we obtain.)\\ As usual, when $\Omega=\R$ we drop it in the formulas.
Following \cite{Visintin}, we can introduce a notion of fractal dimension by setting \begin{equation*} \Dim_F(\partial E,\Omega):=n-\sup\{s\in(0,1)\,|\,P_s(E,\Omega)<\infty\}, \end{equation*} whenever $\Omega$ is a bounded open set with Lipschitz boundary or $\Omega=\R$.\\ As shown in \cite{Visintin}, we can relate this dimension to the Minkowski dimension showing, roughly, the following. Suppose that $E\subset\R$ is s.t. $\Dim_\mathcal{M}(\partial E,\Omega)\in[n-1,n)$. Then \begin{equation}\label{resume776} P_s(E,\Omega)<\infty\qquad\textrm{for every }s\in\left(0,n-\Dim_\mathcal{M}(\partial E,\Omega)\right), \end{equation} i.e. \begin{equation}\label{resume1} \Dim_F(\partial E,\Omega)\leq \Dim_\mathcal{M}(\partial E,\Omega). \end{equation} It would be interesting to also have a lower bound on $\Dim_F$. To be more precise, $(\ref{resume776})$ guarantees that a set $E$ can have finite $s$-perimeter even when the boundary $\partial E$ is very irregular, at least for every $s$ below some threshold $\sigma$. However, we do not know what happens above this threshold, when $s>\sigma$. We provide an example of a set for which this threshold is sharp. \begin{prop} Let $S\subset\mathbb{R}^2$ be the von Koch snowflake. Then \begin{equation*} \Dim_\mathcal{M}(\partial S)=\Dim_F(\partial S)=\frac{\log 4}{\log 3}, \end{equation*} i.e. \begin{equation*} P_s(S)<\infty,\qquad\forall\,s\in\Big(0,2-\frac{\log4}{\log3}\Big) \end{equation*} and \begin{equation*} P_s(S)=\infty,\qquad\forall\,s\in\Big(2-\frac{\log4}{\log3},1\Big). \end{equation*} \end{prop} To show that $\Dim_F(\partial S)\geq\Dim_{\mathcal{M}}(\partial S)$, we exploited the self-similarity of $S$ and the scaling property of the fractional perimeter to prove that \begin{equation*} P_s(S)\geq\sum_{k=1}^\infty a_k(s), \end{equation*} which is a divergent series precisely when $s$ is bigger than $2-\frac{\log 4}{\log 3}$. It is also worth noting that to find an example of a set $E$ s.t.
$\Dim_F(\partial E)\geq\Dim_{\mathcal{M}}(\partial E)$, we did not construct an ad hoc `pathological' set. Indeed, the von Koch snowflake is a classical and well-known example of a fractal set. \end{subsection} \begin{subsection}*{Continuity of Fractional Curvature} In \cite{unifor} the authors proved a formula to compute the `local' contribution to the fractional mean curvature $\I_s[E](x)$, when $\partial E$ is a $C^{1,\alpha}$ graph in a neighborhood of $x$, for some $\alpha>s$. To be more precise, let $K_1$ be the cylinder $B_1'\times(-1,1)$ and suppose that $x=0$ and \begin{equation*} E\cap K_1=\{(x',x_n)\in\R\,|\,x'\in B_1',\,-1<x_n<u(x')\}, \end{equation*} for some $u\in C^{1,\alpha}(B_1')$ s.t. $u(0)=0$, $\nabla u(0)=0$ and $\alpha>s$. Then \begin{equation*} P.V.\int_{K_1}\frac{\chi_E(y)-\chi_{\Co E}(y)}{|y|^{n+s}}\,dy =2\int_{B'_1}\Big(\int_0^{\frac{u(y')}{|y'|}}\frac{dt}{(1+t^2)^\frac{n+s}{2}}\Big)\frac{dy'}{|y'|^{n+s-1}}. \end{equation*} Exploiting this formula we prove the following \begin{prop} Let $E$ and $E_k$ be bounded open sets with $C^{1,\alpha}$ boundary for some $\alpha>0$ s.t. $E_k\longrightarrow E$ in $C^{1,\alpha}$ and let $x_k\in\partial E_k$, $x\in\partial E$ be s.t. $x_k\longrightarrow x$. Then \begin{equation}\label{resume889} \I_s[E_k](x_k)\longrightarrow\I_s[E](x), \end{equation} for every $s\in(0,\alpha)$.\\ In particular, if we ask $C^2$ regularity of the boundaries and the convergence to be in the $C^2$ sense, this holds true for every $s\in(0,1)$. \end{prop} By $C^{1,\alpha}$ convergence of sets we mean that our sets can locally be described as the graphs of functions which converge in $C^{1,\alpha}$. We remark that this result is stated for $C^2$ convergence only, without a proof, in \cite{CMP}.
We provide an original proof and lower the required regularity.\\ Actually, if we are interested in the convergence only in a neighborhood of some point $x_0\in\partial E$, we can lower our regularity requirements and still get the convergence of the curvatures. To be more precise, let $E$ and $E_k$ be defined in $K_1$ as the subgraphs of $u$ and $u_k$ respectively, with $0\in\partial E$ and $0\in\partial E_k$ (up to translations we can always reduce to this case). Then, using the formula above we get \begin{equation*} |\I_s[E](0)-\I_s[E_k](0)|\leq C\|u-u_k\|_{C^{1,\alpha}(B_1')}+|(E\Delta E_k)\setminus K_1|. \end{equation*} This shows that, if we are looking for convergence of the curvatures only in a fixed neighborhood $U$ of $x_0\in\partial E$, we need not ask $C^{1,\alpha}$ regularity for the whole boundaries. For example, we can ask $E_k$ and $E$ to be $C^{1,\alpha}$ subgraphs in $U$ and only ask them to be measurable in $\Co U$. Then we obtain $(\ref{resume889})$ by asking $C^{1,\alpha}$ convergence of $E_k$ to $E$ in $U$ and only convergence in measure in $\Co U$.\\ A similar problem is also studied in \cite{matteo}, where the author estimates the difference between the fractional mean curvature of a set $E$ with $C^{1,\alpha}$ boundary and that of the set $\Phi(E)$, where $\Phi$ is a $C^{1,\alpha}$ diffeomorphism of $\R$.\\ The estimates obtained there are much more precise than ours.\\ However, as far as convergence alone is concerned, our result is more general in that the sets involved need not be diffeomorphic. Moreover, as remarked above, our convergence is somewhat local, while the setting in \cite{matteo} is global. Indeed, the author estimates the difference between the curvatures in terms of the $C^{0,\alpha}$ norm of the Jacobian of the diffeomorphism $\Phi$. Thus, even if we want the convergence of the curvatures only in a neighborhood $U$ of $x_0$, to use \cite{matteo} we still need to ask $C^{1,\alpha}$ regularity for the whole boundaries.
We remark that in \cite{matteo} the author also studies the stability of these estimates as $s\to1$. \end{subsection} \end{section} \chapter*{Notation} \begin{itemize} \item All sets and functions considered are assumed to be Lebesgue measurable. \item We will usually write $D\varphi$ to denote the distributional gradient of $\varphi$.\\We will write $\nabla\varphi$ when the gradient exists (at least) in the weak sense of Sobolev spaces. \item We denote by $\mathcal{L}^k$ the $k$-dimensional Lebesgue measure.\\In $\R$ we will usually write $|E|=\mathcal{L}^n(E)$ for the $n$-dimensional Lebesgue measure of a set $E\subset\R$.\\ We write $\h^d$ for the $d$-dimensional Hausdorff measure, for any $d\geq0$. \item Equality and inclusions of sets will usually be considered in the measure sense, e.g. $E=F$ will usually mean $|E\Delta F|=0$. \item We define the dimensional constants \begin{equation*} \omega_d:=\frac{\pi^\frac{d}{2}}{\Gamma\big(\frac{d}{2}+1\big)},\qquad d\geq0. \end{equation*} In particular, we remark that $\omega_k=\mathcal{L}^k(B_1)$ is the volume of the $k$-dimensional unit ball $B_1\subset\mathbb{R}^k$ and $k\,\omega_k=\h^{k-1}(\mathbb{S}^{k-1})$ is the surface area of the $(k-1)$-dimensional sphere \begin{equation*} \mathbb{S}^{k-1}=\partial B_1=\{x\in\mathbb{R}^k\,|\,|x|=1\}. \end{equation*} \end{itemize} \tableofcontents \begin{chapter}{Caccioppoli Sets} \pagenumbering{arabic} \begin{section}{Caccioppoli Sets} In this section we recall the definitions and the main properties of $BV$-functions and of Caccioppoli sets. For all the details we refer to \cite{GiaSou}, \cite{Giusti} and \cite{Maggi}. \begin{defin} Let $\Omega\subset\R$ be an open set. We say that $f\in L^1_{loc}(\Omega)$ is a function of locally bounded variation in $\Omega$ if \begin{equation}\label{variation} V(f,A):=\sup\left\{\int_Af\textrm{ div }\varphi\,dx\Big|\varphi\in C_c^1(A,\R), |\varphi|\leq1\right\}<\infty, \end{equation} for every open set $A\subset\subset\Omega$.
We write $BV_{loc}(\Omega)$ for the space of functions with locally bounded variation.\\ If $f\in L^1(\Omega)$ and $V(f,\Omega)<\infty$, we say that $f$ is a function of bounded variation in $\Omega$ and we write $BV(\Omega)$ for the space of such functions. \end{defin} By the Riesz representation theorem, it is easy to see that a function $u\in L^1_{loc}(\Omega)$ has locally bounded variation if and only if its distributional gradient $Du=(D_1u,\dots,D_nu)$ is a vector-valued Radon measure. Then \begin{equation*}\begin{split} \int_\Omega u\textrm{ div }\varphi\,dx&=-\sum_{i=1}^n\int_\Omega\varphi_i\,dD_iu\\ & =-\int_\Omega\varphi\cdot\sigma\,d|Du|,\qquad\forall\varphi\in C_c^1(\Omega,\R), \end{split}\end{equation*} for some $\sigma:\Omega\to\R$ s.t. $|\sigma(y)|=1$ for $|Du|$-almost every $y\in\Omega$. Since $|Du|$ is a Radon measure on $\Omega$, we can consider $|Du|(A)$ for every $A\subset\Omega$ open (actually Borel), not necessarily bounded, and it is immediate to see that \begin{equation*} |Du|(A)=V(u,A)\qquad\forall A\subset\Omega\textrm{ open.} \end{equation*} Moreover, if $u\in L^1(\Omega)$, then $u\in BV(\Omega)$ if and only if $Du$ is a vector-valued Radon measure with finite total variation \begin{equation*} |Du|(\Omega)<\infty. \end{equation*} The Sobolev space $W^{1,1}(\Omega)$ is contained in $BV(\Omega)$. Indeed, for any $u\in W^{1,1}(\Omega)$ the distributional derivative is $\nabla u\Ll^n\llcorner\Omega$ and \begin{equation*} |Du|(\Omega)=\int_\Omega|\nabla u|\,dx<\infty. \end{equation*} Notice that the inclusion is strict, as is shown by considering e.g. the Heaviside function $\chi_{[a,\infty)}$.
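To see the strict inclusion concretely (a small illustrative computation, not from the thesis): the distributional derivative of the Heaviside function is the Dirac mass $\delta_a$, which has total variation 1 but is not absolutely continuous, so $\chi_{[a,\infty)}\in BV_{loc}(\mathbb{R})\setminus W^{1,1}_{loc}(\mathbb{R})$. A discrete analogue of the total variation makes the jump visible; the helper name `discrete_total_variation` is ours:

```python
def discrete_total_variation(samples):
    # discrete analogue of |Du|(R) for a sampled function of one variable:
    # sum of the absolute increments between consecutive samples
    return sum(abs(b - a) for a, b in zip(samples, samples[1:]))

xs = [i / 100.0 - 0.5 for i in range(101)]        # grid on [-0.5, 0.5]
heaviside = [1.0 if x >= 0 else 0.0 for x in xs]  # chi_{[0,infty)} sampled
smooth = [x ** 2 for x in xs]                     # a W^{1,1} function, for contrast

print(discrete_total_variation(heaviside))  # 1.0: exactly the jump size
print(discrete_total_variation(smooth))     # ~0.5: the variation of x^2 on [-0.5, 0.5]
```

Refining the grid leaves both values unchanged: the jump always contributes 1, which is the total mass of $\delta_0$.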
Since for every fixed vector field $\varphi\in C_c^1(\Omega,\R)$ the mapping \begin{equation*} u\longmapsto\int_\Omega u\textrm{ div }\varphi\,dx \end{equation*} is continuous in the $L^1_{loc}(\Omega)$ topology, the functional \begin{equation*} V(\cdot,\Omega):L^1_{loc}(\Omega)\longrightarrow[0,\infty] \end{equation*} is lower semicontinuous and hence in particular we have the following result. \begin{prop} Let $\{u_k\}\subset BV(\Omega)$ be s.t. $u_k\to u$ in $L^1_{loc}(\Omega)$. Then \begin{equation}\label{bv_semicont} V(u,\Omega)\leq\liminf_{k\to\infty}|Du_k|(\Omega). \end{equation} \end{prop} Before giving the definitions of perimeter and Caccioppoli set, we recall the following useful approximation result. \begin{prop}\label{bv_approx} Let $u\in BV(\Omega)$. Then $\exists\{u_k\}\subset C^\infty(\Omega)\cap BV(\Omega)$ s.t. \begin{equation*}\begin{split} &(i)\quad u_k\longrightarrow u,\qquad\textrm{in } L^1(\Omega),\\ &(ii)\quad\lim_{k\to\infty}\int_\Omega|\nabla u_k|\,dx=|Du|(\Omega). \end{split}\end{equation*} \end{prop} We can define a norm by \begin{equation*} \|u\|_{BV(\Omega)}:=\|u\|_{L^1(\Omega)}+|Du|(\Omega), \end{equation*} which makes $BV(\Omega)$ a Banach space.\\ Notice that for an approximating sequence we have \begin{equation*} \lim_{k\to\infty}\|u_k\|_{BV(\Omega)}=\|u\|_{BV(\Omega)}, \end{equation*} but in general not convergence in the $BV$-norm. \begin{defin} We say that a set $E\subset\R$ has locally finite perimeter in $\Omega$ if $\chi_E\in BV_{loc}(\Omega)$. It has finite perimeter if $\chi_E\in BV(\Omega)$.\\ A set $E$ of locally finite perimeter in $\R$ is called a Caccioppoli set.\\ The perimeter of $E$ in $\Omega$ is defined as \begin{equation}\begin{split} P(E,\Omega)&:=|D\chi_E|(\Omega)=V(\chi_E,\Omega)\\ & =\sup\left\{\int_E\textrm{div }\varphi\,dx\Big|\varphi\in C_c^1(\Omega,\R), |\varphi|\leq1\right\}. \end{split} \end{equation} If $\Omega=\R$, we simply write $P(E)=P(E,\R)$ for the perimeter.
\end{defin} If $E$ is a Caccioppoli set, we write $\nu_E:=\sigma$ for the function obtained in the polar decomposition of the Radon measure $D\chi_E$. Then for every bounded open set $\Omega\subset\R$ \begin{equation} \int_E\textrm{div }\varphi\,dx=-\int_\Omega\varphi\cdot\nu_E\,d|D\chi_E|, \end{equation} for all vector fields $\varphi\in C^1_c(\Omega,\R)$. This formula can be viewed as a generalization of the Gauss-Green formula, so the measure \begin{equation*} D\chi_E=\nu_E|D\chi_E| \end{equation*} is sometimes called the Gauss-Green measure. Unlike Sobolev spaces, the space $BV(\Omega)$ contains the characteristic functions of all sufficiently regular sets. Indeed, consider a bounded open set $E\subset\R$ with $C^2$ boundary $\partial E$.\\ Let $\Omega$ be an open set; since $E$ is bounded, $|E\cap\Omega|<\infty$, i.e. $\chi_E\in L^1(\Omega)$.\\ Now take a vector field $\varphi\in C^1_c(\Omega,\R)$ s.t. $|\varphi|\leq1$. Then the classical Gauss-Green formula gives \begin{equation*} \int_E\textrm{div }\varphi\,dx=-\int_{\partial E}\varphi\cdot\nu\,d\Han=-\int_{\partial E\cap\Omega}\varphi\cdot\nu\,d\Han, \end{equation*} where $\nu$ is the inner unit normal to $\partial E$, and hence \begin{equation*} P(E,\Omega)\leq\Han(\partial E\cap\Omega)<\infty, \end{equation*} so $\chi_E\in BV(\Omega)$. Actually we have \begin{equation}\label{smooth_perimeter} D\chi_E=\nu\Han\llcorner\partial E \end{equation} and \begin{equation}\label{smooth_perimeter2} P(E,\Omega)=\Han(\partial E\cap\Omega). \end{equation} This shows that the perimeter coincides with the $(n-1)$-dimensional Hausdorff measure, at least when the set is regular.
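To make the Gauss-Green computation above concrete (an illustrative numerical check, not part of the text), take $E=B_1\subset\mathbb{R}^2$ and the smooth field $\varphi(x,y)=(x,y)$ (not compactly supported, but the identity extends to such fields by a standard cutoff). Then $\operatorname{div}\varphi=2$, while on $\partial B_1$ the inner normal is $\nu=-(x,y)$, so $\varphi\cdot\nu=-1$ and the identity $\int_E\operatorname{div}\varphi\,dx=-\int_{\partial E}\varphi\cdot\nu\,d\mathcal{H}^1$ reads $2|B_1|=\mathcal{H}^1(\partial B_1)=2\pi$. A midpoint-grid approximation of the left-hand side:

```python
from math import pi

def lhs_divergence_integral(m=1000):
    # midpoint-grid approximation of the volume integral over B_1 of
    # div phi = 2, for phi(x, y) = (x, y); exact value is 2 * |B_1| = 2 * pi
    h = 2.0 / m
    total = 0.0
    for i in range(m):
        for j in range(m):
            x = -1.0 + (i + 0.5) * h
            y = -1.0 + (j + 0.5) * h
            if x * x + y * y <= 1.0:
                total += 2.0 * h * h
    return total

# boundary side: -int_{dB_1} phi . nu dH^1 = int_{dB_1} 1 dH^1 = 2*pi, since phi . nu = -1
print(lhs_divergence_integral(), 2 * pi)
```

Both sides agree with $\mathcal{H}^1(\partial B_1)=2\pi$, in accordance with $(\ref{smooth_perimeter2})$.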
However, unlike the Hausdorff measure, we have lower semicontinuity and compactness properties which make this definition of perimeter very useful when dealing with variational problems.\\ For example, we can write the semicontinuity property for Caccioppoli sets as \begin{prop}[Semicontinuity]\label{smicont_Cacc} Let $\{E_k\}$ be a sequence of Caccioppoli sets s.t. $E_k\xrightarrow{loc}E$. Then for every $\Omega\subset\R$ open (bounded or not) \begin{equation*} P(E,\Omega)\leq\liminf_{k\to\infty}P(E_k,\Omega). \end{equation*} \end{prop} By $E_k\xrightarrow{loc}E$ we mean the local convergence of sets in measure i.e. of their characteristic functions in $L^1_{loc}(\R)$. Since $|\chi_E-\chi_F|=\chi_{E\Delta F}$, this means \begin{equation*} |(E_k\Delta E)\cap K|\longrightarrow0, \end{equation*} for every compact set $K\subset\R$. Moreover we have the following compactness property. \begin{prop}[Compactness]\label{compact_Cacc} Let $\{E_k\}$ be a sequence of Caccioppoli sets s.t. \begin{equation*} \sup_{k\in\mathbb{N}}P(E_k,\Omega)\leq c(\Omega)<\infty, \end{equation*} for any bounded open set $\Omega$. Then there exists a Caccioppoli set $E$ and a subsequence $\{E_{k_i}\}$ of $\{E_k\}$ s.t. \begin{equation*} E_{k_i}\xrightarrow{loc}E. \end{equation*} \end{prop} Since $P(E,\Omega)=V(\chi_E,\Omega)$, we also have the following property.\\ (Locality)$\quad$The mapping $E\longmapsto P(E,\Omega)$ is local i.e. \begin{equation}\label{locality_perimeter} P(E,\Omega)=P(F,\Omega),\qquad\textrm{whenever }|(E\Delta F)\cap\Omega|=0, \end{equation} (even if $E\not= F$ in measure outside $\Omega$).\\ As we will see, this will not be true for the fractional perimeter. Actually, since the characteristic functions of the two sets are equal in $L^1_{loc}(\Omega)$, the equality is at the level of measures i.e. \begin{equation*} D\chi_E\llcorner\Omega=D\chi_F\llcorner\Omega. 
\end{equation*} As a consequence we can modify a Caccioppoli set with a set of negligible Lebesgue measure without changing its perimeter. Therefore the notion of topological boundary is not very useful when dealing with Caccioppoli sets (as is shown by the example below) and we cannot expect formula $(\ref{smooth_perimeter2})$ to hold for every Caccioppoli set. \begin{ese} Consider a bounded open set $E\subset\R$ with $C^2$ boundary, as above; now let $F:=E\cup\mathbb{Q}^n$. Then $|(E\Delta F)\cap\Omega|=0$ for every $\Omega\subset\R$ open and hence $D\chi_F=D\chi_E=\nu\Han\llcorner\partial E$. However, $\partial F=\R\setminus E$. \end{ese} \end{section} \begin{section}{Regularity of the Boundary} We saw that we can modify a set of (locally) finite perimeter with a set of zero Lebesgue measure, making its topological boundary as big as we want, without changing its perimeter. For this reason one introduces measure theoretic notions of interior, exterior and boundary. We will see below that in some sense we can also minimize the size of the topological boundary. \begin{defin} Let $E\subset\R$. For every $t\in[0,1]$ define the set \begin{equation}\label{density_t} E^{(t)}:=\left\{x\in\R\,\big|\,\exists\lim_{r\to0}\frac{|E\cap B_r(x)|}{\omega_nr^n}=t\right\}, \end{equation} of points of density $t$ of $E$. The sets $E^{(0)}$ and $E^{(1)}$ are respectively the measure theoretic exterior and interior of the set $E$. The set \begin{equation}\label{ess_bdry} \partial_eE:=\R\setminus(E^{(0)}\cup E^{(1)}) \end{equation} is the essential boundary of $E$. \end{defin} Using the Lebesgue points theorem for the characteristic function $\chi_E$, we see that the limit in $(\ref{density_t})$ exists for a.e. $x\in\R$ and \begin{equation*} \lim_{r\to0}\frac{|E\cap B_r(x)|}{\omega_nr^n}=\left\{\begin{array}{cc}1,&\textrm{a.e. }x\in E,\\ 0,&\textrm{a.e. }x\in\Co E. \end{array} \right.
\end{equation*} So \begin{equation*} |E\Delta E^{(1)}|=0,\qquad|\Co E\Delta E^{(0)}|=0\qquad\textrm{and }|\partial_eE|=0. \end{equation*} In particular every set $E$ is equivalent to its measure theoretic interior. Notice that $E^{(1)}$ in general is not open. Recall that the support of a Radon measure $\mu$ on $\R$ is defined as the set \begin{equation*} \supp\mu:=\{x\in\R\,|\,\mu(B_r(x))>0\textrm{ for every }r>0\}. \end{equation*} Notice that, being the complement of the union of all open sets of measure zero, it is a closed set. In particular, if $E$ is a Caccioppoli set, we have \begin{equation*} \supp|D\chi_E|=\{x\in\R\,|\,P(E,B_r(x))>0\textrm{ for every }r>0\}. \end{equation*} \begin{defin} The reduced boundary of a Caccioppoli set $E$ is the set \begin{equation} \partial^*E:=\left\{x\in\supp|D\chi_E|\,\Big|\,\exists\,\lim_{r\to0}\frac{D\chi_E(B_r(x))}{|D\chi_E|(B_r(x))}=:\nu_E(x)\in\Sp\right\}. \end{equation} The function $\nu_E:\partial^*E\longrightarrow\Sp$ is called the measure theoretic inner unit normal to $E$. \end{defin} \noindent Notice that the function $\nu_E$ is (by definition) the Radon-Nikodym derivative \begin{equation*} \nu_E=\frac{d\,D\chi_E}{d\,|D\chi_E|}. \end{equation*} Since $|D\chi_E|(\R\setminus\partial^*E)=0$, we have \begin{equation*} |D\chi_E|=|D\chi_E|\llcorner\partial^*E\qquad\textrm{and}\qquad D\chi_E=\nu_E|D\chi_E|\llcorner\partial^*E. \end{equation*} As the following theorem shows, the reduced boundary of a Caccioppoli set is quite regular and allows us to generalize formula $(\ref{smooth_perimeter2})$ in some sense. \begin{teo}[De Giorgi] Let $E\subset\R$ be a Caccioppoli set. Then (i) The reduced boundary $\partial^*E$ is locally $(n-1)$-rectifiable and its approximate tangent plane at $x$ is normal to $\nu_E(x)$ for $\Han$-a.e. $x\in\partial^*E$. (ii) $|D\chi_E|=\Han\llcorner\partial^*E$ and $D\chi_E=\nu_E\Han\llcorner\partial^*E$.
(iii) We have the following Gauss-Green formula \begin{equation} \int_E\textrm{div }\varphi\,dx=-\int_{\partial^*E}\varphi\cdot\nu_E\,d\Han\qquad\forall\,\varphi\in C^1_c(\R,\R). \end{equation} \end{teo} The following proposition shows the relation between the essential and the reduced boundaries. \begin{prop} Let $E\subset\R$ be a Caccioppoli set. Then \begin{equation*} \partial^*E\subset E^{(1/2)}\subset\partial_eE, \end{equation*} and \begin{equation} \Han(\partial_eE\setminus\partial^*E)=0. \end{equation} \end{prop} In particular, we may as well use the essential boundary in the Gauss-Green formula or when calculating the perimeter.\\ We also have the following characterization. \begin{teo} A set $E\subset\R$ is a Caccioppoli set if and only if \begin{equation*} \Han(\partial_e E\cap K)<\infty, \end{equation*} for every $K\subset\R$ compact. \end{teo} \begin{rmk} In particular this implies that any bounded open set with Lipschitz boundary has finite perimeter. \end{rmk} We have yet another natural way to define a measure theoretic boundary. \begin{defin} Let $E\subset\R$ and define the sets \begin{equation*}\begin{split} &E_1:=\{x\in\R\,|\,\exists r>0,\,|E\cap B_r(x)|=\omega_nr^n\},\\ & E_0:=\{x\in\R\,|\,\exists r>0,\,|E\cap B_r(x)|=0\}. \end{split}\end{equation*} Then we define \begin{equation*}\begin{split} \partial^-E&:=\R\setminus(E_0\cup E_1)\\ & =\{x\in\R\,|\,0<|E\cap B_r(x)|<\omega_nr^n\textrm{ for every }r>0\}. \end{split} \end{equation*} \end{defin} Notice that $E_0$ and $E_1$ are open sets and hence $\partial^-E$ is closed. Moreover, since \begin{equation}\label{density_subsets} E_0\subset E^{(0)}\qquad\textrm{and}\qquad E_1\subset E^{(1)}, \end{equation} we have \begin{equation*} \partial_eE\subset\partial^-E. \end{equation*} We have \begin{equation}\label{ess_bdry_top1} F\subset\R\textrm{ s.t. }|E\Delta F|=0\quad\Longrightarrow\quad\partial^-E\subset\partial F.
\end{equation} Indeed, if $|E\Delta F|=0$, then $|F\cap B_r(x)|=|E\cap B_r(x)|$ for every $r>0$. In particular for any $x\in\partial^-E$ we have \begin{equation*} 0<|F\cap B_r(x)|<\omega_nr^n, \end{equation*} which implies \begin{equation*} F\cap B_r(x)\not=\emptyset\quad\textrm{and}\quad\Co F\cap B_r(x)\not=\emptyset\quad\textrm{for every }r>0, \end{equation*} and hence $x\in\partial F$.\\ In particular, $\partial^-E\subset\partial E$. Moreover \begin{equation}\label{ess_bdry_top2} \partial^-E=\partial E^{(1)}. \end{equation} Indeed, since $|E\Delta E^{(1)}|=0$, we already know that $\partial^-E\subset\partial E^{(1)}$ and the converse inclusion is clear from $(\ref{density_subsets})$.\\ From $(\ref{ess_bdry_top1})$ and $(\ref{ess_bdry_top2})$ we see that \begin{equation*} \partial^-E=\bigcap_{F\sim E}\partial F, \end{equation*} where the intersection is taken over all sets $F\subset\R$ s.t. $|E\Delta F|=0$, so we can think of $\partial^-E$ as a way of minimizing the size of the topological boundary of $E$. In particular \begin{equation*} F\subset\R\textrm{ s.t. }|E\Delta F|=0\quad\Longrightarrow\quad\partial^-F=\partial^-E. \end{equation*} If $E$ is a Caccioppoli set, then it is easy to verify that \begin{equation*} \partial^-E=\supp|D\chi_E|=\overline{\partial^*E}. \end{equation*} However, notice that in general the inclusions \begin{equation*} \partial^*E\subset\partial_eE\subset\partial^-E\subset\partial E \end{equation*} can all be strict, and in principle we could have \begin{equation*} \Han(\partial^-E\setminus\partial^*E)>0. \end{equation*} \begin{rmk}\label{gmt_assumption} Let $E\subset\R$. From what we have seen above, up to modifying $E$ on a set of measure zero, we can assume that \begin{equation}\label{gmt_assumption_eq} \begin{split} &E_1\subset E,\qquad E\cap E_0=\emptyset\\ \textrm{and}\quad\partial E=\partial^-E&=\{x\in\R\,|\,0<|E\cap B_r(x)|<\omega_nr^n,\,\forall\,r>0\}. \end{split} \end{equation} We will make this assumption in later chapters.
\end{rmk} \end{section} \begin{section}{Minimal Surfaces} In this section we recall the definition of minimal surfaces and we present a sketch of a proof, due to Savin, of the regularity of their reduced boundary.\\ The classical proof relies on the monotonicity formula and the approximation of a minimal surface with appropriate harmonic functions. The proof by Savin makes use of viscosity solutions and a suitable Harnack inequality, which allows one to prove an improvement of flatness Theorem; from this one easily obtains $C^{1,\alpha}$ regularity (and hence also smoothness) of the reduced boundary of a minimal surface.\\ In this section we suppose that every set satisfies $(\ref{gmt_assumption_eq})$. In particular every Caccioppoli set $E$ satisfies \begin{equation*} \partial E=\partial^-E=\supp |D\chi_E|. \end{equation*} \begin{defin} Let $\Omega\subset\R$ be a bounded open set. We say that a Caccioppoli set $E$ has minimal perimeter in $\Omega$, or that $\partial E$ is a minimal surface in $\Omega$, if it has minimal perimeter among the sets which agree with $E$ outside a compact subset of $\Omega$, i.e. \begin{equation} P(E,\Omega)\leq P(F,\Omega), \end{equation} for every Caccioppoli set $F$ s.t. $E\Delta F\subset\subset\Omega$.\\ If $\Omega$ is not bounded, in particular if $\Omega=\R$, we say that $E$ has minimal perimeter in $\Omega$ if $P(E,\Omega')\leq P(F,\Omega')$ for every bounded open subset $\Omega'\subset\Omega$ and every Caccioppoli set $F$ s.t. $E\Delta F\subset\subset\Omega'$. \end{defin} Using the compactness and semicontinuity theorems, the existence of minimal surfaces follows immediately via the direct method of the calculus of variations. \begin{prop} Let $\Omega\subset\R$ be a bounded open set and fix a set $E_0$ of finite perimeter. Then there exists a set of finite perimeter $E$ s.t. $E=E_0$ in $\Co\Omega$ and \begin{equation}\label{local_Plateau} P(E)=\inf\left\{P(F)\,|\,F\setminus\Omega=E_0\setminus\Omega\right\}.
\end{equation} \end{prop} The problem of finding a minimal set as in $(\ref{local_Plateau})$ is known as the Plateau problem. Since the perimeter is local, it is quite clear that the behavior of the set $E_0$ far from $\Omega$ does not matter. Roughly speaking, $E_0$ plays the role of boundary data and we want to find the surface which minimizes the area among all surfaces having as boundary $\partial E_0\cap\partial\Omega$. \begin{rmk} Notice that a set solving the Plateau problem $(\ref{local_Plateau})$ has minimal perimeter in $\Omega$. \end{rmk} Now there are two main problems: to show that the reduced boundary of a set of minimal perimeter is actually smooth and to understand how big the singular set $\partial E\setminus\partial^*E$ can be.\\ First of all notice that if $E$ has minimal perimeter in $\Omega$, then $\lambda E$ has minimal perimeter in $\lambda\Omega$ for every $\lambda>0$, thanks to the scaling of the perimeter.\\ Moreover if $E$ has minimal perimeter in $\Omega$, then it has minimal perimeter also in every open subset $\Omega'\subset\Omega$.\\ Indeed, let $F$ be a Caccioppoli set s.t. $E\Delta F\subset\subset\Omega'$. Then also $E\Delta F\subset\subset\Omega$ and \begin{equation*}\begin{split} P(E,\Omega')+P(E,\Omega\setminus\Omega')&=P(E,\Omega)\\ & \leq P(F,\Omega)=P(F,\Omega')+P(F,\Omega\setminus\Omega')\\ & =P(F,\Omega')+P(E,\Omega\setminus\Omega'). \end{split}\end{equation*} In particular when we want to study the regularity of a point $x\in\partial E$, using translations and dilations we can reduce to the study of a set $E$ of minimal perimeter in $B_1$ s.t. $0\in\partial E$. Using the isoperimetric inequality we can obtain the following uniform density estimates. \begin{prop} Let $E$ be a set of minimal perimeter in $B_1$ s.t. $0\in\partial E$. There exists a constant $c=c(n)>0$ s.t. for every $r\in (0,1)$ we have \begin{equation*} |E\cap B_r|\geq cr^n,\qquad|\Co E\cap B_r|\geq cr^n.
\end{equation*} \end{prop} These and similar estimates, holding for the perimeter instead of the Lebesgue measure, yield the first regularity result for minimal surfaces. \begin{prop} If $E$ is a set of minimal perimeter in $\Omega$, then \begin{equation*} \Han((\partial E\setminus\partial^*E)\cap\Omega)=0. \end{equation*} \end{prop} Moreover we have the following compactness property. \begin{prop} Let $\{E_k\}$ be a sequence of sets of minimal perimeter in $\Omega$. Then there exists a subsequence $\{E_{k_i}\}$ converging to a set $E$ of minimal perimeter in $\Omega$ \begin{equation*} E_{k_i}\xrightarrow{loc}E. \end{equation*} \end{prop} Actually, using the uniform estimates above we can show that the minimal surfaces $\partial E_k$ converge in the Hausdorff sense to $\partial E$ on any compact subset of $\Omega$, i.e. for every $\epsilon>0$ the surfaces $\partial E_k$ lie in an $\epsilon$-neighborhood of $\partial E$ for every $k$ big enough (inside a compact subset of $\Omega$). The case which interests us the most is the following. Let $E$ be a set of minimal perimeter in $B_1$ s.t. $0\in\partial E$ and consider the blow-ups \begin{equation*} E_r:=\{x\in\R\,|\,rx\in E\}=\frac{1}{r}E, \end{equation*} for $r\to0$.\\ As remarked above, these are sets of minimal perimeter in $B_{1/r}$, and hence also in $B_1$ if $r\leq1$.\\ The compactness property implies that we can find a sequence $r_k\to0$ s.t. \begin{equation*} E_{r_k}\xrightarrow{loc} E_\infty, \end{equation*} for a set $E_\infty$ of minimal perimeter in $B_1$.\\ Actually the set $E_\infty$ has minimal perimeter in $\R$ and it is easily seen that it is a cone, i.e. there exists a set $A$ s.t. \begin{equation*} E_\infty=\{tx\,|\,t\geq0,\,x\in A\}. \end{equation*} The set $E_\infty$ is called the tangent cone to $E$ at $0$.
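For instance, if $E$ is itself a cone with vertex at $0$, then the blow-ups are all equal to $E$, \begin{equation*} E_r=\frac{1}{r}E=E\qquad\textrm{for every }r>0, \end{equation*} so that the tangent cone at $0$ is $E_\infty=E$. In general, however, $E_\infty$ need not be a half-space.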
We say that it is a singular cone if it is not a half-space.\\ If the point $0$ belongs to the reduced boundary of $E$, $0\in\partial^*E$, then the blow-up limit is actually the half-space bounded by the approximate tangent plane at $0$, \begin{equation*} \begin{split} &\qquad\qquad\partial E_\infty=H(0)=\nu_E(0)^\bot,\\ &E_\infty=H^-(0)=\{x\in\R\,|\,x\cdot\nu_E(0)\leq0\}. \end{split} \end{equation*} \begin{rmk} This is true for a general Caccioppoli set $E$, not necessarily of minimal perimeter. Let $x\in\partial^*E$; up to translation we can suppose $x=0\in\partial^*E$. Then \begin{equation*} E_r\xrightarrow{loc}H^-(0),\qquad\textrm{as }r\to0. \end{equation*} \end{rmk} Also notice that $0\in\partial E_r$ for every $r>0$. Roughly speaking, we are zooming in on $0$ and if it is a regular point, then we see the boundary of $E$ becoming flatter and flatter, until it becomes a plane.\\ Up to rotation we can suppose $\nu_E(0)=e_n$, so that \begin{equation*} H^-(0)=\{x\in\R\,|\,x_n\leq0\}. \end{equation*} As remarked above, we see that if $E$ is of minimal perimeter in $B_1$ and $0\in\partial E$ is s.t. $E_\infty=\{x_n\leq0\}$ (in particular if $0\in\partial^*E$), then for every $\epsilon>0$ \begin{equation*} \partial E_r\cap B_1\subset\{x\in\R\,|\,|x_n|<\epsilon\}, \end{equation*} for $r$ small enough.\\ The fundamental result for the regularity of a minimal surface is the following flatness Theorem. \begin{teo}[De Giorgi]\label{original_flatness} Let $E$ be a set of minimal perimeter in $B_1$ with $0\in\partial E$ s.t. \begin{equation}\label{loc_min_per} \partial E\cap B_1\subset\{|x_n|\leq\epsilon_0\}, \end{equation} where $\epsilon_0=\epsilon_0(n)>0$ is a small constant depending only on $n$. Then $\partial E$ is an analytic surface in $B_{1/2}$.
\end{teo} It is clear from the discussion above that we can apply this Theorem to every point $x\in\partial E\cap\Omega$ of a surface of minimal perimeter in $\Omega$ which has as blow-up limit a half-space.\\ Indeed let $x$ be such a point; after translation we can suppose that $x=0$. Then for $r$ small enough the set $E_r$ satisfies the hypothesis of the Theorem, so scaling back we see that $\partial E$ is an analytic surface in $B_{r/2}$.\\ Also notice that every such point $x$ must belong to the reduced boundary of $E$, since $\partial E$ is smooth in a neighborhood of $x$. This implies that the reduced boundary $\partial^*E\cap\Omega$ of a set of minimal perimeter in $\Omega$ is an analytic surface and the singular set $(\partial E\setminus\partial^*E)\cap\Omega$ coincides with the set of points of $\partial E\cap\Omega$ having as blow-up limit a singular minimal cone.\\ Notice that such a cone must indeed be singular (at least) in $0$ by construction. A result of Simons shows that in dimension $n\leq7$ every minimal cone is a half-space. Since there are no singular minimal cones, the singular set is empty if $n\leq7$. Moreover, using a dimension reduction argument, Federer proved that in dimension $n\geq8$ the singular set can have Hausdorff dimension at most equal to $n-8$. Also notice that there exist examples of minimal sets for which this estimate is sharp; for instance, the Simons cone \begin{equation*} \{x\in\mathbb{R}^8\,|\,x_1^2+x_2^2+x_3^2+x_4^2<x_5^2+x_6^2+x_7^2+x_8^2\} \end{equation*} was shown by Bombieri, De Giorgi and Giusti to have minimal perimeter in $\mathbb{R}^8$, and its boundary is singular exactly at the origin. To sum up, we have \begin{teo}\label{local_min_reg} Let $\Omega\subset\R$ be a bounded open set. If $E$ is a set of minimal perimeter in $\Omega$, then $\partial^*E\cap\Omega$ is an analytic hypersurface, which is relatively open in $\partial E\cap\Omega$.
Moreover the singular set $(\partial E\setminus\partial^*E)\cap\Omega$ satisfies the following properties: (i) if $2\leq n\leq7$, then $(\partial E\setminus\partial^*E)\cap\Omega=\emptyset$, (ii) if $n=8$, then $(\partial E\setminus\partial^*E)\cap\Omega$ has no accumulation points, (iii) if $n\geq9$, then $\mathcal{H}^s((\partial E\setminus\partial^*E)\cap\Omega)=0$ for every $s>n-8$.\\ There exists a perimeter minimizer $E$ in $\mathbb{R}^8$ s.t. $\mathcal{H}^0(\partial E\setminus\partial^*E)=1$ and if $n\geq9$ there exists a perimeter minimizer $E$ in $\R$ s.t. $\mathcal{H}^{n-8}(\partial E\setminus\partial^*E)=\infty$. \end{teo} The main difficulty in proving Theorem $\ref{original_flatness}$ is the fact that a priori $\partial E$ need not be the graph of a function.\\ Define for every $r>0$ the cylinder \begin{equation*} C_r:=B'_r\times(-r,r)=\{(x',x_n)\in\R\,|\,|x'|<r,\,|x_n|<r\}. \end{equation*} Now let $E$ be a set of minimal perimeter in $C_1$, with $0\in\partial E$. Suppose that $\partial E$ is the graph of a Lipschitz function $u:B'_1\longrightarrow\mathbb{R}$, with $u(0)=0$ and $\textrm{Lip}(u)\leq1$, i.e. \begin{equation*} \partial E\cap C_1=\{(x',u(x'))\in\R\,|\,x'\in B_1'\} \end{equation*} and that $E$ is the subgraph of $u$ \begin{equation*} E\cap C_1=\{(x',x_n)\in\R\,|\,x'\in B_1',\,-1<x_n<u(x')\}. \end{equation*} Then for every Borel set $A\subset B_1'$ the perimeter of $E$ is computed as \begin{equation*} P(E,C_1\cap p^{-1}A)=\mathcal{A}(u,A):=\int_A\sqrt{1+|\nabla'u(x')|^2}\,dx', \end{equation*} where $p:\R\longrightarrow\mathbb{R}^{n-1}$, $(x',x_n)\mapsto x'$. Since $E$ is a perimeter minimizer in $C_1$, it is easily seen that $u$ is a local minimizer of the area functional $\mathcal{A}(\cdot,B_1')$, meaning that for every compact subset $K\subset B_1'$ there exists $\epsilon>0$ s.t.
\begin{equation*} \mathcal{A}(u,B_1')\leq\mathcal{A}(u+\varphi,B_1'), \end{equation*} for every $\varphi\in C^\infty_c(B_1')$ with $\supp\varphi\subset K$ and $|\varphi|<\epsilon$.\\ Moreover $u$ is a Lipschitz local minimizer for the area functional $\mathcal{A}(\cdot,B_1')$ if and only if \begin{equation*} \int_{B_1'}\frac{\nabla'u(x')}{\sqrt{1+|\nabla'u(x')|^2}}\cdot\nabla'\varphi(x')\,dx'=0, \end{equation*} for every $\varphi\in C_c^\infty(B_1')$, i.e. if and only if it is a weak solution in $B_1'$ of the Euler-Lagrange equation \begin{equation}\label{min_surf_eq} \textrm{div}\left(\frac{\nabla'u}{\sqrt{1+|\nabla'u|^2}}\right)=0, \end{equation} which is the so-called minimal surface equation.\\ Notice that this equation expresses in local coordinates the vanishing of the mean curvature of the hypersurface $\partial E\cap C_1$, defined as the graph of $u$ over $B_1'$. Moreover notice that once we prove that $u\in C^{1,\alpha}(B_1')$, then classical bootstrapping arguments of elliptic regularity theory imply that $u$ is actually analytic.\\ As we said earlier, the problem is that we do not know a priori whether $\partial E$ can be written as a graph.\\ In any case it is proved in \cite{CC} that if $E$ is a set of minimal perimeter in $B_1$, then $\partial E\cap B_1$ is a viscosity solution of equation $(\ref{min_surf_eq})$. \begin{defin} The boundary $\partial E$ of a set $E$ satisfies the minimal surface equation $(\ref{min_surf_eq})$ in the viscosity sense if for every smooth function $\varphi$ which has the subgraph \begin{equation*} S=\{x_n<\varphi(x')\} \end{equation*} included in $E$ or in $\Co E$ in a small ball $B_r(y)$ centered at some point \begin{equation*} y\in\partial S\cap\partial E \end{equation*} we have \begin{equation*} \textrm{div}\left(\frac{\nabla'\varphi}{\sqrt{1+|\nabla'\varphi|^2}}\right)\leq0, \end{equation*} and if we consider supergraphs then the opposite inequality holds.
\end{defin} This means that if we touch the boundary $\partial E$ with the graph of a smooth function from the inside (outside) of $E$, then the corresponding inequality holds. For some details about viscosity solutions see the Appendix.\\ Now the idea is to adapt the methods used in the proof of the Harnack inequality for viscosity solutions (see Theorem $\ref{Harnack_vi}$ in Appendix A) in order to get the following \begin{teo}[Harnack inequality] Let $E$ be a set of minimal perimeter in $B_1$ s.t. $0\in\partial E$ and \begin{equation*} \partial E\cap B_1\subset\{|x_n|\leq\epsilon\}, \end{equation*} with $\epsilon\leq\epsilon_0(n)$. Then \begin{equation*} \partial E\cap B_{1/2}\subset\{|x_n|\leq(1-\eta)\epsilon\}, \end{equation*} where $\eta>0$ is a small universal constant. \end{teo} \noindent Then, arguing by contradiction, we obtain the following improvement of flatness Theorem for $\partial E$ \begin{teo}[Improvement of flatness] Let $E$ be a set of minimal perimeter in $B_1$ s.t. $0\in\partial E$ and \begin{equation*} \partial E\cap B_1\subset\{|x_n|\leq\epsilon\}, \end{equation*} with $\epsilon\leq\epsilon_0(n)$. Then there exists $\nu_1\in\Sp$ s.t. \begin{equation} \partial E\cap B_{r_0}\subset\left\{|x\cdot\nu_1|\leq\frac{\epsilon}{2}r_0\right\}, \end{equation} where $r_0$ is a small universal constant. \end{teo} Roughly speaking this means that once we know that $\partial E$ in a neighborhood of $0$ is contained in a cylinder of small height, then in a smaller neighborhood it is actually contained in a flatter cylinder, up to changing the coordinates.\\ Applying this Theorem inductively one can then show that $\partial E$ is actually a $C^{1,\alpha}$ graph in $B_{3/4}$ and hence, as remarked above, it is analytic.\\ We conclude this section by recalling the monotonicity formula, since we will later need an analogue of it for fractional perimeters.
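Before doing so, let us sketch the standard iteration behind the $C^{1,\alpha}$ bound (with $\epsilon_0$ and $r_0$ as in the improvement of flatness Theorem; we assume also $r_0<1/2$, which is not restrictive since $r_0$ can be taken smaller). Applying the Theorem inductively at a point of $\partial E$ we find unit vectors $\nu_k\in\Sp$ s.t. \begin{equation*} \partial E\cap B_{r_0^k}\subset\left\{|x\cdot\nu_k|\leq\frac{\epsilon_0}{2^k}\,r_0^k\right\},\qquad|\nu_{k+1}-\nu_k|\leq C\,2^{-k}, \end{equation*} so the directions $\nu_k$ converge to some $\nu\in\Sp$. Writing $2^{-k}=(r_0^k)^\alpha$ with \begin{equation*} \alpha=\frac{\log2}{\log(1/r_0)}\in(0,1), \end{equation*} we see that the flatness of $\partial E$ at scale $r$ decays like $r^\alpha$, which yields the $C^{1,\alpha}$ estimate.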
\begin{teo}[Monotonicity Formula] Let $E$ be a set of minimal perimeter in $\Omega$ and let $x_0\in\partial E\cap\Omega$. Then the density ratios \begin{equation*} \frac{P(E,B_r(x_0))}{n\omega_nr^{n-1}} \end{equation*} are increasing for $r\in(0,d(x_0,\partial\Omega))$. \end{teo} \noindent Notice that if $E$ is a cone, then the above ratio is constant. \end{section} \end{chapter} \begin{chapter}{Fractional Perimeter} \begin{section}{Fractional Sobolev Spaces} We recall (see \cite{HitGuide}) the definition of fractional Sobolev space and some embedding properties which will be used in the sequel. \begin{defin} Let $\Omega\subset\R$ be an open set and fix $p\in[1,\infty)$, $s\in(0,1)$. Then we define the fractional Sobolev space \begin{equation*} W^{s,p}(\Omega):=\left\{u\in L^p(\Omega)\Big|\int_\Omega\int_\Omega\frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}}\,dx\,dy<\infty\right\}. \end{equation*} The term \begin{equation*} [u]_{W^{s,p}(\Omega)}:=\left(\int_\Omega\int_\Omega\frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}}\,dx\,dy\right)^\frac{1}{p} \end{equation*} is called Gagliardo seminorm of $u$. \end{defin} Endowed with the norm \begin{equation*} \|u\|_{W^{s,p}(\Omega)}:=\left(\|u\|_{L^p(\Omega)}^p+[u]_{W^{s,p}(\Omega)}^p\right)^\frac{1}{p}, \end{equation*} $W^{s,p}(\Omega)$ is a Banach space. When $p=2$, we write $H^s(\Omega)$ for the Hilbert space $W^{s,2}(\Omega)$. For a fixed $p$, the fractional Sobolev spaces are intermediate between $L^p(\Omega)$ and $W^{1,p}(\Omega)$, as is shown by the following Propositions. \begin{prop}\label{cont_scale} Let $\Omega\subset\R$ be an open set and let $p\in[1,\infty)$, $0<s\leq t<1$. Then $\exists C=C(n,s,p)\geq1$ s.t. for every measurable $u:\Omega\longrightarrow\mathbb{R}$ \begin{equation*} \|u\|_{W^{s,p}(\Omega)}\leq C\|u\|_{W^{t,p}(\Omega)}. \end{equation*} In particular we have the continuous embedding \begin{equation*} W^{t,p}(\Omega)\hookrightarrow W^{s,p}(\Omega). 
\end{equation*} \end{prop} In order to prove the embedding $W^{1,p}(\Omega)\hookrightarrow W^{s,p}(\Omega)$, we need to impose some regularity condition on the boundary of $\Omega$, because in the proof we make use of an extension property.\\ To be more precise, we say that an open set $\Omega\subset\R$ is an extension domain for $W^{s,p}$ if $\exists C=C(s,p,\Omega)\geq0$ s.t. for every $u\in W^{s,p}(\Omega)$ there exists $\tilde{u}\in W^{s,p}(\R)$ with $\tilde{u}_{|\Omega}=u$ and $\|\tilde{u}\|_{W^{s,p}(\R)}\leq C\|u\|_{W^{s,p}(\Omega)}$. We say that $\Omega$ is an extension domain if it is an extension domain for $W^{s,p}$ for every $p\in[1,\infty)$ and $s\in(0,1)$. It can be proved that any open set $\Omega$ with bounded $C^{0,1}$ boundary $\partial\Omega$ is an extension domain (see \cite{HitGuide} for a proof and a counterexample). We also consider $\R$ itself as an extension domain (for simplicity of terminology). \begin{prop} Let $\Omega\subset\R$ be an extension domain and let $p\in[1,\infty)$, $0<s<1$. Then $\exists C=C(n,s,p)\geq1$ s.t. for every measurable $u:\Omega\longrightarrow\mathbb{R}$ \begin{equation*} \|u\|_{W^{s,p}(\Omega)}\leq C\|u\|_{W^{1,p}(\Omega)}. \end{equation*} In particular we have the continuous embedding \begin{equation*} W^{1,p}(\Omega)\hookrightarrow W^{s,p}(\Omega). \end{equation*} \end{prop} We have the following Sobolev-type inequality. \begin{teo}\label{fractional_sobolev} Let $p\in[1,\infty)$ and $0<s<1$ s.t. $sp<n$. Then $\exists C=C(n,s,p)\geq0$ s.t. for every measurable $u:\R\longrightarrow\mathbb{R}$ with compact support we have \begin{equation}\label{sobolev_inequality} \|u\|_{L^{p^*}(\R)}^p\leq C\int_{\R}\int_{\R}\frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}}\,dx\,dy=C[u]_{W^{s,p}(\R)}^p, \end{equation} where $p^*=\frac{np}{n-sp}$ is the fractional critical exponent. \end{teo} As a consequence, using the H\"older inequality we get the following embeddings. \begin{coroll} Let $p\in[1,\infty)$ and $0<s<1$ s.t. $sp<n$.
Then we have the continuous embedding \begin{equation*} W^{s,p}(\R)\hookrightarrow L^q(\R),\qquad\textrm{for every }q\in[p,p^*]. \end{equation*} \end{coroll} Exploiting the extension property and the above results, we find \begin{teo} Let $\Omega\subset\R$ be an extension domain and let $p\in[1,\infty)$, $s\in(0,1)$ s.t. $sp<n$. Then $\exists C=C(s,p,\Omega)\geq0$ s.t. for every $u\in W^{s,p}(\Omega)$ \begin{equation} \|u\|_{L^q(\Omega)}\leq C\|u\|_{W^{s,p}(\Omega)}, \qquad\textrm{for every }q\in[p,p^*], \end{equation} i.e. we have the continuous embedding \begin{equation*} W^{s,p}(\Omega)\hookrightarrow L^q(\Omega),\qquad\textrm{for every }q\in[p,p^*]. \end{equation*} Moreover, if $\Omega$ is bounded, then \begin{equation*} W^{s,p}(\Omega)\hookrightarrow L^q(\Omega),\qquad\textrm{for every }q\in[1,p^*]. \end{equation*} \end{teo} Actually, in a bounded extension domain the embedding is compact, except in the case of the critical exponent. \begin{teo}\label{compact_embd_th} Let $\Omega\subset\R$ be a bounded extension domain and let $p\in[1,\infty)$, $s\in(0,1)$ s.t. $sp<n$. Then we have the compact embedding \begin{equation*} W^{s,p}(\Omega)\hookrightarrow\hookrightarrow L^q(\Omega),\qquad\textrm{for every }q\in[1,p^*). \end{equation*} \end{teo} \end{section} \begin{section}{Fractional Perimeter} First of all we fix an index $s\in(0,1)$. Now for every pair of disjoint sets $E$, $F\subset\R$ we define the functional \begin{equation} \Ll_s(E,F):=\int_E\int_F\frac{1}{\kers}\,dx\,dy=\int_{\R}\int_{\R}\frac{\chi_E(x)\chi_F(y)}{\kers}\,dx\,dy. \end{equation} \begin{defin} Let $\Omega\subset\R$ be an open set. Then for every $E\subset\R$ we define the fractional $s$-perimeter of $E$ in $\Omega$ as \begin{equation} P_s(E,\Omega):=\Ll_s(E\cap\Omega,\Co E)+\Ll_s(E\setminus\Omega,\Co E\cap\Omega). \end{equation} If $\Omega=\R$, then we write $P_s(E):=P_s(E,\R)$ for the (global) $s$-perimeter of $E$.
\end{defin} Notice that, if $E\subset\Omega$, then $E\setminus\Omega=\emptyset$ and $E\cap\Omega=E$, so \begin{equation} P_s(E,\Omega)=\Ll_s(E,\Co E)=P_s(E). \end{equation} Moreover \begin{equation*} \begin{split} \Ll_s(E,\Co E)&=\int_E\int_{\Co E}\frac{1}{\kers}\,dx\,dy\\ & =\frac{1}{2}\int_{\R}\int_{\R}\frac{|\chi_E(x)-\chi_E(y)|}{\kers}\,dx\,dy=\frac{1}{2}[\chi_E]_{W^{s,1}(\R)} \end{split} \end{equation*} \begin{rmk} Since $|\chi_E(x)-\chi_E(y)|^p=|\chi_E(x)-\chi_E(y)|$, we could as well consider the $W^{t,p}$ norm, with $t=\frac{s}{p}$. For this reason in the literature the index $\sigma\in(0,\frac{1}{2})$ is sometimes used in place of $s$. In the sequel we will consider the index $s\in(0,1)$ and define $\sigma:=\frac{s}{2}\in(0,\frac{1}{2})$, which is the natural index when considering $H^\sigma$ norms. \end{rmk} For a general open set $\Omega\subset\R$ we can split the $s$-perimeter in the three terms \begin{equation*} P_s(E,\Omega)=\Ll_s(E\cap\Omega,\Co E\cap\Omega)+\Ll_s(E\cap\Omega,\Co E\setminus\Omega)+\Ll_s(E\setminus\Omega,\Co E\cap\Omega), \end{equation*} and regroup them as \begin{equation*}\begin{split} &P^L_s(E,\Omega):=\Ll_s(E\cap\Omega,\Co E\cap\Omega)=\frac{1}{2}[\chi_E]_{W^{s,1}(\Omega)},\\ & P^{NL}_s(E,\Omega):=\Ll_s(E\cap\Omega,\Co E\setminus\Omega)+\Ll_s(E\setminus\Omega,\Co E\cap\Omega). \end{split} \end{equation*} The term $P^L_s(E,\Omega)$ can be considered as the local contribution to the fractional perimeter of $E$ in $\Omega$ in the sense that, given two sets $E$, $F\subset\R$ s.t. $|(E\Delta F)\cap\Omega|=0$, we clearly have $P^L_s(E,\Omega)= P_s^L(F,\Omega)$.\\ However if $|(E\Delta F)\cap\Co\Omega|>0$, then (in general) we will have $P_s(E,\Omega)\not=P_s(F,\Omega)$, unlike what happens with the classical perimeter.\\ This means that the fractional perimeter is nonlocal. In the following Proposition we collect some elementary properties of the $s$-perimeter. 
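As a simple explicit example, take $n=1$, $E=(-\infty,0)$ and $\Omega=(-1,1)$ (all the integrals below are elementary). Then \begin{equation*}\begin{split} P^L_s(E,\Omega)&=\int_{-1}^0\int_0^1\frac{dy\,dx}{|x-y|^{1+s}}=\frac{1}{s}\int_0^1\left(t^{-s}-(1+t)^{-s}\right)dt\\ & =\frac{2-2^{1-s}}{s(1-s)}, \end{split}\end{equation*} so that, with the convention $\omega_0=1$, \begin{equation*} \lim_{s\to1}(1-s)P^L_s(E,\Omega)=1=\omega_0P(E,\Omega), \end{equation*} in accordance with Proposition $\ref{local_frac_converge}$ below.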
\begin{prop}\label{elementary_properties} Let $s\in(0,1)$ and $\Omega\subset\R$ open. (i) (Subadditivity)$\quad$ Let $E,\,F\subset\R$ s.t. $|E\cap F|=0$. Then \begin{equation}\label{subadditive} P_s(E\cup F,\Omega)\leq P_s(E,\Omega)+P_s(F,\Omega). \end{equation} (ii) (Translation invariance)$\quad$ Let $E\subset\R$ and $x\in\R$. Then \begin{equation}\label{translation_invariance} P_s(E+x,\Omega+x)=P_s(E,\Omega). \end{equation} (iii) (Rotation invariance)$\quad$ Let $E\subset\R$ and $\mathcal{R}\in SO(n)$ a rotation. Then \begin{equation}\label{rotation_invariance} P_s(\mathcal{R}E,\mathcal{R}\Omega)=P_s(E,\Omega). \end{equation} (iv) (Scaling)$\quad$ Let $E\subset\R$ and $\lambda>0$. Then \begin{equation}\label{scaling} P_s(\lambda E,\lambda\Omega)=\lambda^{n-s}P_s(E,\Omega). \end{equation} \begin{proof} (i) follows from the following observations. Let $A_1,\,A_2,\,B\subset\R$. If $|A_1\cap A_2|=0$, then \begin{equation*}\begin{split} \Ll_s(A_1\cup A_2,B)&=\int_{A_1\cup A_2}\int_B\frac{dx\,dy}{\kers}\\ &= \int_{A_1}\int_B\frac{dx\,dy}{\kers}+\int_{A_2}\int_B\frac{dx\,dy}{\kers}\\ & =\Ll_s(A_1,B)+\Ll_s(A_2,B). \end{split} \end{equation*} Moreover \begin{equation} A_1\subset A_2\quad\Longrightarrow\quad\Ll_s(A_1,B)\leq\Ll_s(A_2,B), \end{equation} and \begin{equation*} \Ll_s(A,B)=\Ll_s(B,A). \end{equation*} Therefore \begin{equation*}\begin{split} P_s(E\cup F,\Omega)&=\Ll_s((E\cup F)\cap\Omega,\Co(E\cup F))+\Ll_s((E\cup F)\setminus\Omega,\Co(E\cup F)\cap\Omega)\\ & =\Ll_s(E\cap\Omega,\Co(E\cup F))+\Ll_s(F\cap\Omega,\Co(E\cup F))\\ & \qquad+\Ll_s(E\setminus\Omega,\Co(E\cup F)\cap\Omega)+\Ll_s(F\setminus\Omega,\Co(E\cup F)\cap\Omega)\\ & \leq\Ll_s(E\cap\Omega,\Co E)+\Ll_s(F\cap\Omega,\Co F)\\ & \qquad+\Ll_s(E\setminus\Omega,\Co E\cap\Omega)+\Ll_s(F\setminus\Omega,\Co F\cap\Omega)\\ & =P_s(E,\Omega)+P_s(F,\Omega). 
\end{split}\end{equation*} (ii), (iii) and (iv) follow simply by a change of variables in $\Ll_s$ and the following observations: \begin{equation*}\begin{split} &(x+A_1)\cap(x+A_2)=x+A_1\cap A_2,\qquad x+\Co A=\Co(x+A),\\ & \mathcal{R}A_1\cap\mathcal{R}A_2=\mathcal{R}(A_1\cap A_2),\qquad\mathcal{R}(\Co A)=\Co(\mathcal{R}A),\\ & (\lambda A_1)\cap(\lambda A_2)=\lambda(A_1\cap A_2),\qquad\lambda(\Co A)=\Co(\lambda A). \end{split}\end{equation*} For example, for claim (iv) we have \begin{equation*}\begin{split} \Ll_s(\lambda A,\lambda B)&=\int_{\lambda A}\int_{\lambda B}\frac{dx\,dy}{\kers} =\int_A\lambda^n\,dx\int_B\frac{\lambda^n\,dy}{\lambda^{n+s}\kers}\\ & =\lambda^{n-s}\Ll_s(A,B). \end{split} \end{equation*} Then \begin{equation*}\begin{split} P_s(\lambda E,\lambda\Omega)&=\Ll_s(\lambda E\cap\lambda\Omega,\Co(\lambda E))+ \Ll_s(\lambda E\cap\Co(\lambda\Omega),\Co(\lambda E)\cap\lambda\Omega)\\ & =\Ll_s(\lambda(E\cap\Omega),\lambda\Co E)+\Ll_s(\lambda(E\setminus\Omega),\lambda(\Co E\cap\Omega))\\ & =\lambda^{n-s}\left(\Ll_s(E\cap\Omega,\Co E)+\Ll_s(E\setminus\Omega,\Co E\cap\Omega)\right)\\ & =\lambda^{n-s}P_s(E,\Omega). \end{split}\end{equation*} \end{proof} \end{prop} Now a natural question is: what kind of sets (if any) have finite fractional perimeter? The following embedding implies that any set with finite perimeter also has finite $s$-perimeter for every $s\in(0,1)$. \begin{prop}\label{bv_embd} Let $\Omega\subset\R$ be an extension domain and let $s\in(0,1)$. Then $\exists C=C(n,s)\geq1$ s.t. for every measurable $u:\Omega\longrightarrow\mathbb{R}$ \begin{equation} \|u\|_{W^{s,1}(\Omega)}\leq C\left(\|u\|_{L^1(\Omega)}+V(u,\Omega)\right)=C\|u\|_{BV(\Omega)}. \end{equation} In particular we have the continuous embedding \begin{equation*} BV(\Omega)\hookrightarrow W^{s,1}(\Omega).
\end{equation*} \begin{proof} The claim is trivially satisfied if the right hand side is infinite, so let $u\in\bvo$. We only need to check that the Gagliardo seminorm of $u$ is bounded by its $BV$-norm.\\ Since $\Omega$ is an extension domain, we know that $\exists C=C(n,s)\geq1$ s.t. \begin{equation*} \|v\|_{\fracso}\leq C\|v\|_{W^{1,1}(\Omega)}. \end{equation*} Take an approximating sequence $\{u_k\}\subset C^\infty(\Omega)\cap\bvo$ as in Proposition $\ref{bv_approx}$.\\ Then \begin{equation*} [u_k]_{\fracso}\leq\|u_k\|_{\fracso}\leq C\|u_k\|_{W^{1,1}(\Omega)}=C\|u_k\|_{\bvo}, \end{equation*} for each $k\in\mathbb{N}$.\\ Now using Fatou's Lemma we get \begin{equation*}\begin{split} [u]_{\fracso}&\leq\liminf_{k\to\infty}[u_k]_{\fracso}\leq C\liminf_{k\to\infty}\|u_k\|_{\bvo}=C\lim_{k\to\infty}\|u_k\|_{\bvo}\\ & =C\|u\|_{\bvo} \end{split} \end{equation*} and hence the claim. \end{proof} \end{prop} \begin{rmk} In particular we have \begin{equation*} W^{1,1}(\Omega)\subset\bvo\subset\bigcap_{s\in(0,1)}\fracso \end{equation*} and we know that the first inclusion is strict. We will show below that (in general) the second inclusion is strict too. \end{rmk} As a consequence, since $P_s(E)=\frac{1}{2}[\chi_E]_{\fracs}$, we have \begin{coroll} Let $E\subset\R$ be a set of finite perimeter i.e. $\chi_E\in\bv$. Then $E$ has finite $s$-perimeter for every $s\in(0,1)$. \end{coroll} Actually, in case $E$ is bounded we can say more. \begin{teo}\label{cacc_equiv} Let $E\subset\R$ be bounded. Then the following are equivalent: (i) $E$ has finite perimeter, (ii) $E$ has finite $s$-perimeter for every $s\in(0,1)$ and \begin{equation*} \liminf_{s\to1}(1-s)P_s(E)<\infty, \end{equation*} (iii) $\exists\{s_k\}\subset(0,1),s_k\nearrow1$ s.t. $E$ has finite $s_k$-perimeter for each $k\in\mathbb{N}$ and \begin{equation*} \sup_{k\in\mathbb{N}}(1-s_k)P_{s_k}(E)<\infty. \end{equation*} Moreover in this case we have \begin{equation} \lim_{s\to1}(1-s)P_s(E)=\omega_{n-1}P(E).
\end{equation} \end{teo} This is a consequence of the following results (see \cite{BBM} and \cite{Davila}). \begin{teo}[Bourgain, Brezis, Mironescu]\label{bb} Let $\Omega\subset\R$ be a smooth bounded domain. Let $u\in L^1(\Omega)$. Then $u\in\bvo$ if and only if \begin{equation*} \liminf_{k\to\infty}\int_\Omega\int_\Omega\frac{|u(x)-u(y)|}{|x-y|}\rho_k(x-y)\,dxdy<\infty, \end{equation*} and then \begin{equation}\label{rough} \begin{split} C_1|Du|(\Omega)&\leq\liminf_{k\to\infty}\int_\Omega\int_\Omega\frac{|u(x)-u(y)|}{|x-y|}\rho_k(x-y)\,dxdy\\ & \leq\limsup_{k\to\infty}\int_\Omega\int_\Omega\frac{|u(x)-u(y)|}{|x-y|}\rho_k(x-y)\,dxdy\leq C_2|Du|(\Omega), \end{split} \end{equation} for some constants $C_1$, $C_2$ depending only on $\Omega$. \end{teo} This result was refined by Davila. \begin{teo}[Davila] Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary. Let $u\in\bvo$. Then \begin{equation}\label{correct} \lim_{k\to\infty}\int_\Omega\int_\Omega\frac{|u(x)-u(y)|}{|x-y|}\rho_k(x-y)\,dxdy=K_{1,n}|Du|(\Omega), \end{equation} where \begin{equation*} K_{1,n}=\frac{1}{n\omega_n}\int_{\mathbb{S}^{n-1}}|v\cdot e|\,d\sigma(v), \end{equation*} with $e\in\R$ any unit vector. \end{teo} In the above Theorems $\{\rho_k\}$ is any sequence of radial mollifiers, i.e. of functions satisfying \begin{equation}\label{rule1} \rho_k(x)\geq0,\quad\rho_k(x)=\rho_k(|x|),\quad\int_{\R}\rho_k(x)\,dx=1 \end{equation} and \begin{equation}\label{rule2} \lim_{k\to\infty}\int_\delta^\infty\rho_k(r)r^{n-1}dr=0\quad\textrm{for all }\delta>0. \end{equation} In particular, for $R>\textrm{diam}(\Omega)$, we can consider \begin{equation*} \rho(x):=\chi_{[0,R]}(|x|)\frac{1}{|x|^{n-1}} \end{equation*} and define for any sequence $\{s_k\}\subset(0,1),s_k\nearrow1$, \begin{equation*} \rho_k(x):=(1-s_k)\rho(x)c_{s_k}\frac{1}{|x|^{s_k}}, \end{equation*} where the $c_{s_k}$ are normalizing constants.
Then \begin{equation*}\begin{split} \int_{\R}\rho_k(x)\,dx&=(1-s_k)c_{s_k}n\omega_n\int_0^R\frac{1}{r^{n-1+s_k}}r^{n-1}\,dr\\ & =(1-s_k)c_{s_k}n\omega_n\int_0^R\frac{1}{r^{s_k}}\,dr=c_{s_k}n\omega_nR^{1-s_k}, \end{split} \end{equation*} and hence taking $c_{s_k}:=\frac{1}{n\omega_n}R^{s_k-1}$ gives $(\ref{rule1})$; notice that $c_{s_k}\to\frac{1}{n\omega_n}$.\\ Also \begin{equation*}\begin{split} \lim_{k\to\infty}\int_\delta^\infty\rho_k(r)r^{n-1}\,dr&= \lim_{k\to\infty}(1-s_k)c_{s_k}\int_\delta^R\frac{1}{r^{s_k}}\,dr\\ & =\lim_{k\to\infty}c_{s_k}(R^{1-s_k}-\delta^{1-s_k})=0, \end{split} \end{equation*} giving $(\ref{rule2})$.\\ With this choice we get \begin{equation*} \int_\Omega\int_\Omega\frac{|u(x)-u(y)|}{|x-y|}\rho_k(x-y)\,dxdy=c_{s_k}(1-s_k)[u]_{W^{s_k,1}(\Omega)}. \end{equation*} Then, if $u\in\bvo$, Davila's Theorem gives \begin{equation}\label{limitperimeter}\begin{split} \lim_{s\to1}(1-s)[u]_{W^{s,1}(\Omega)}&=\lim_{s\to1}\frac{1}{c_s}(c_s(1-s)[u]_{W^{s,1}(\Omega)})\\ & =n\omega_nK_{1,n}|Du|(\Omega). \end{split} \end{equation} Now we need to compute the constant $K_{1,n}$.\\ Notice that we can choose $e$ in such a way that $v\cdot e=v_n$.\\ Then using spherical coordinates for $\s^{n-1}$ we obtain $|v\cdot e|=|\cos\theta_{n-1}|$ and \begin{equation*} d\sigma=\sin\theta_2(\sin\theta_3)^2\dots(\sin\theta_{n-1})^{n-2}d\theta_1\dots d\theta_{n-1}, \end{equation*} with $\theta_1\in[0,2\pi)$ and $\theta_j\in[0,\pi)$ for $j=2,\dots,n-1$. Notice that \begin{equation*}\begin{split} \h^k(\s^k)&=\int_0^{2\pi}\,d\theta_1\int_0^\pi\sin\theta_2\,d\theta_2\dots \int_0^\pi(\sin\theta_{k})^{k-1}\,d\theta_{k}\\ & =\h^{k-1}(\s^{k-1})\int_0^\pi(\sin t)^{k-1}\,dt.
\end{split} \end{equation*} Then we get \begin{equation*} \begin{split} \int_{\s^{n-1}}|v\cdot e|\,d\sigma(v)&=\h^{n-2}(\s^{n-2})\int_0^\pi(\sin t)^{n-2}|\cos t|\,dt\\ & =\h^{n-2}(\s^{n-2})\Big(\int_0^\frac{\pi}{2}(\sin t)^{n-2}\cos t\,dt-\int_\frac{\pi}{2}^\pi(\sin t)^{n-2}\cos t\,dt\Big)\\ & =\frac{\h^{n-2}(\s^{n-2})}{n-1}\Big(\int_0^\frac{\pi}{2}\frac{d}{dt}(\sin t)^{n-1}\,dt-\int_\frac{\pi}{2}^\pi\frac{d}{dt}(\sin t)^{n-1}\,dt\Big)\\ & =\frac{2\h^{n-2}(\s^{n-2})}{n-1}. \end{split} \end{equation*} Therefore \begin{equation} n\omega_nK_{1,n}=2\frac{\h^{n-2}(\s^{n-2})}{n-1}=2\mathcal{L}^{n-1}(B_1(0))=2\omega_{n-1}, \end{equation} and hence $(\ref{limitperimeter})$ becomes \begin{equation*} \lim_{s\to1}(1-s)[u]_{W^{s,1}(\Omega)}=2\omega_{n-1}|Du|(\Omega), \end{equation*} for any $u\in\bvo$. Putting everything together we obtain \begin{coroll} Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary. Let $u\in L^1(\Omega)$. Then $u\in\bvo$ if and only if \begin{equation*} \liminf_{s\to1}(1-s)[u]_{\fracso}<\infty. \end{equation*} Moreover in that case \begin{equation}\label{davila_conv_def} \lim_{s\to1}(1-s)[u]_{W^{s,1}(\Omega)}=2\omega_{n-1}|Du|(\Omega). \end{equation} \end{coroll} We can rewrite this corollary for sets as \begin{prop}\label{local_frac_converge} Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary. Let $E\subset\R$. Then $P(E,\Omega)<\infty$ if and only if \begin{equation*} \liminf_{s\to1}(1-s)P_s^L(E,\Omega)<\infty. \end{equation*} Moreover in that case \begin{equation}\label{perimeter_conv_def} \lim_{s\to1}(1-s)P_s^L(E,\Omega)=\omega_{n-1}P(E,\Omega). \end{equation} \end{prop} Now we can give the proof of Theorem $\ref{cacc_equiv}$, but first it is convenient to point out the following easy but useful estimate. \begin{lem}\label{positive_distance} Let $B\subset\R$ and $x\in\R$ s.t. $d(x,B)\geq d>0$; then \begin{equation}\label{punctual_eq_positive_distance} \int_B\frac{dy}{\kers}\leq\frac{n\omega_n}{s}\frac{1}{d^s}.
\end{equation} In particular, if $A\subset\R$ is s.t. $|A|<\infty$ and $d(A,B)\geq d>0$, then \begin{equation}\label{positive_distance_eq} \Ll_s(A,B)\leq\frac{n\omega_n}{s}|A|\frac{1}{d^s}. \end{equation} \begin{proof} Since $d(x,B)\geq d$, we have $B\subset\Co B_d(x)$ and hence \begin{equation*}\begin{split} \int_B\frac{dy}{\kers}&\leq\int_{\Co B_d(x)}\frac{dy}{\kers}=\int_{\Co B_d}\frac{dz}{|z|^{n+s}}\\ & =\int_{\Sp}d\Han\int_d^\infty\frac{1}{\rho^{n+s}}\rho^{n-1}d\rho\\ & =-\frac{n\omega_n}{s}\int_d^\infty\frac{d}{d\rho}\rho^{-s}d\rho=\frac{n\omega_n}{s}\frac{1}{d^s}. \end{split} \end{equation*} The second inequality follows by integrating over $A$. \end{proof} \end{lem} \begin{proof}[Proof of Theorem $\ref{cacc_equiv}$] Clearly (ii) and (iii) are equivalent.\\ Since $E$ is bounded, we can fix $R>1$ s.t. $E\subset\subset B_R$; then $\Omega:=B_R$ is clearly a bounded open set with smooth boundary and $d:=\textrm{dist}(E,\partial\Omega)>0$. (i)$\Rightarrow$(ii): we only need to show that the liminf is finite; actually, we prove that the full limit exists.\\ Notice that we have \begin{equation*} P_s(E)=P^L_s(E,\Omega)+\Ll_s(E,\Co\Omega). \end{equation*} Now, as $\chi_E\in\bv$ and $E\subset\subset\Omega$, we have $\chi_E\in\bvo$ and $P(E,\Omega)=P(E)$; in particular we also have $\chi_E\in\fracso$ for every $s\in(0,1)$.\\ Since $d(E,\partial\Omega)\geq d>0$, we have \begin{equation*} \Ll_s(E,\Co\Omega)\leq\frac{n\omega_n}{s}|E|\frac{1}{d^s}. \end{equation*} Multiplying by $(1-s)$ we get \begin{equation*} 0\leq(1-s)\Ll_s(E,\Co\Omega)\leq(1-s)\frac{|E|n\omega_n}{sd^s}\to0, \end{equation*} as $s\to1$. This and ($\ref{perimeter_conv_def}$) give \begin{equation*}\begin{split} \lim_{s\to1}(1-s)P_s(E)&=\lim_{s\to1}(1-s)P_s^L(E,\Omega)=\omega_{n-1}P(E,\Omega)\\ & =\omega_{n-1}P(E). \end{split} \end{equation*} (iii)$\Rightarrow$(i): We have \begin{equation*}\begin{split} c_{s_k}(1-s_k)[\chi_E]_{W^{s_k,1}(\Omega)}&\leq\frac{R}{n\omega_n}(1-s_k)[\chi_E]_{W^{s_k,1}(\Omega)}\\ & \leq\frac{2R}{n\omega_n}(1-s_k)P_{s_k}(E).
\end{split} \end{equation*} Thus the hypothesis implies \begin{equation*} \liminf_{k\to\infty}c_{s_k}(1-s_k)[\chi_E]_{W^{s_k,1}(\Omega)}<\infty, \end{equation*} and the Theorem of Bourgain, Brezis and Mironescu gives $\chi_E\in\bvo$.\\ Finally, as $E\subset\subset\Omega$, we also get $\chi_E\in\bv$. \end{proof} \begin{rmk} The finiteness assumption on the liminf cannot be dropped, i.e.\ there exist bounded measurable sets having finite $s$-perimeter for every $s\in(0,1)$ which are not of finite perimeter.\\ This also shows that in general the inclusion $\bvo\subset\bigcap_{s\in(0,1)}\fracso$ is strict. \end{rmk} \begin{ese}\label{inclusion_counterexample} Let $a\in(0,1)\subset\mathbb{R}$ and consider the open intervals $I_k:=(a^{k+1},a^k)$ for every $k\in\mathbb{N}$. Define $E:=\bigcup_{k\in\mathbb{N}}I_{2k}$, which is a bounded (open) set. Due to the infinite number of jumps, $\chi_E\not\in BV(\mathbb{R})$. However it can be proved that $E$ has finite $s$-perimeter for every $s\in(0,1)$. We postpone the proof to the end of the chapter. \end{ese} Notice that, since \begin{equation*} A_1\subset A_2\quad\Longrightarrow\quad\Ll_s(A_1,B)\leq\Ll_s(A_2,B), \end{equation*} we always have \begin{equation}\begin{split} P^{NL}_s(E,\Omega)&=\Ll_s(E\cap\Omega,\Co E\cap\Co\Omega)+\Ll_s(\Co E\cap\Omega,E\cap\Co\Omega)\\ & \leq2\Ll_s(\Omega,\Co\Omega)=2P_s(\Omega). \end{split}\end{equation} \noindent In particular, if $\Omega\subset\R$ is a bounded open set with Lipschitz boundary, then $\Omega$ has finite perimeter and hence also finite $s$-perimeter, as we saw above. Therefore in that case \begin{equation*} P^{NL}_s(E,\Omega)\leq2P_s(\Omega)<\infty, \end{equation*} for every $E\subset\R$, so we only need to check the local part of the fractional perimeter. In particular, Proposition $\ref{local_frac_converge}$ then implies that \begin{equation*} P(E,\Omega)<\infty\quad\Longrightarrow\quad P_s(E,\Omega)<\infty, \end{equation*} for every $s\in(0,1)$.
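As a concrete sanity check of the limit $\lim_{s\to1}(1-s)P_s(E)=\omega_{n-1}P(E)$ established above, one can work out the model case $n=1$, $E=(0,1)$ numerically (the interval is our own choice of test case; here $\omega_0=1$ and $P(E)=2$, so the limit should be $2$). For $x\in(0,1)$ the inner integral over $\Co E$ can be computed by hand, reducing $P_s(E)$ to a one-dimensional integral:

```python
# Sanity check (our own test case, not part of the argument above) of
# lim_{s->1} (1-s) P_s(E) = omega_{n-1} P(E) for n = 1 and E = (0,1),
# where omega_0 = 1 and P(E) = 2. For x in (0,1) one computes by hand
#   int_{R \ (0,1)} |x - y|^(-1-s) dy = (x^(-s) + (1-x)^(-s)) / s,
# so P_s(E) becomes a one-dimensional integral with integrable endpoint
# singularities, approximated here by a midpoint rule.

def P_s_interval(s, n=400000):
    """Midpoint-rule approximation of P_s((0,1)) = (1/s) int_0^1 x^(-s) + (1-x)^(-s) dx."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x ** (-s) + (1.0 - x) ** (-s)
    return total * h / s
```

The exact value is $P_s((0,1))=\frac{2}{s(1-s)}$, so that $(1-s)P_s(E)=\frac{2}{s}\to2=\omega_0\,P(E)$ as $s\to1$, in accordance with the theorem.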
We now wish to estimate $P^{NL}_s(E,\Omega)$ and show that the convergence in $(\ref{perimeter_conv_def})$ holds for the whole fractional perimeter instead of only its local part. First we have to introduce some notation.\\ Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary. Then we can find two sequences of bounded open sets $A_k,\, D_k\subset\R$ with Lipschitz boundary strictly approximating $\Omega$ from the inside and from the outside respectively, that is\\ $(i)\quad A_k\subset A_{k+1}\subset\subset\Omega$ and $A_k\nearrow\Omega$, i.e. $\bigcup_k A_k=\Omega$,\\ $(ii)\quad \Omega\subset\subset D_{k+1}\subset D_k$ and $D_k\searrow\overline{\Omega}$, i.e. $\bigcap_k D_k=\overline{\Omega}$.\\ For a proof we refer to \cite{LipApprox} and the references cited therein. We define for every $k$ \begin{equation*}\begin{split} &\Omega_k^+:=D_k\setminus\overline{\Omega},\qquad\Omega_k^-:=\Omega\setminus\overline{A_k},\qquad T_k:=\Omega_k^+\cup\partial\Omega\cup\Omega_k^-,\\ &\qquad\qquad d_k:=\min\{d(A_k,\partial\Omega),\,d(D_k,\partial\Omega)\}>0. \end{split} \end{equation*} Now we can prove the following \begin{teo} Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary and let $E\subset\R$ be a set having finite perimeter in $D_1$.\\ Then $P_s(E,\Omega)<\infty$ for every $s\in(0,1)$, \begin{equation*} \omega_{n-1}P(E,\Omega)\leq\liminf_{s\to1}(1-s)P_s(E,\Omega) \end{equation*} and \begin{equation} \limsup_{s\to1}(1-s)P_s(E,\Omega) \leq\omega_{n-1}P(E,\Omega)+2\omega_{n-1}\lim_{k\to\infty}P(E,T_k). \end{equation} In particular, if $P(E,\partial\Omega)=0$, then \begin{equation} \lim_{s\to1}(1-s)P_s(E,\Omega)=\omega_{n-1}P(E,\Omega).
\end{equation} \begin{proof} Since $\Omega$ is regular and $P(E,\Omega)<\infty$, we already know from Proposition $\ref{local_frac_converge}$ that $P_s(E,\Omega)$ is finite for every $s$ and \begin{equation*} \lim_{s\to1}(1-s)P_s^L(E,\Omega)=\omega_{n-1}P(E,\Omega), \end{equation*} so we only need to prove the second inequality. Notice that, since $|D\chi_E|$ is a finite Radon measure on $D_1$ and $T_k\searrow\partial\Omega$ as $k\nearrow\infty$, we have \begin{equation*} \exists\lim_{k\to\infty}P(E,T_k)=P(E,\partial\Omega). \end{equation*} Consider the nonlocal part of the fractional perimeter \begin{equation*} P_s^{NL}(E,\Omega)=\Ll_s(E\cap\Omega,\Co E\setminus\Omega)+\Ll_s(\Co E\cap\Omega,E\setminus\Omega), \end{equation*} and take any $k$. Then \begin{equation*}\begin{split} \Ll_s(E\cap\Omega,\Co E\setminus\Omega)&=\Ll_s(E\cap\Omega,\Co E\cap\Omega_k^+)+\Ll_s(E\cap\Omega,\Co E\cap(\Co\Omega\setminus D_k))\\ & \leq\Ll_s(E\cap\Omega,\Co E\cap\Omega_k^+)+\frac{n\omega_n}{s}|\Omega|\frac{1}{d_k^s}\\ & \leq\Ll_s(E\cap\Omega_k^-,\Co E\cap\Omega_k^+)+2\frac{n\omega_n}{s}|\Omega|\frac{1}{d_k^s}\\ & \leq\Ll_s(E\cap(\Omega_k^-\cup\Omega_k^+),\Co E\cap(\Omega_k^-\cup\Omega_k^+))+2\frac{n\omega_n}{s}|\Omega|\frac{1}{d_k^s}\\ & =P^L_s(E,T_k)+2\frac{n\omega_n}{s}|\Omega|\frac{1}{d_k^s}. \end{split}\end{equation*} Since we can bound the other term in the same way, we get \begin{equation} P^{NL}_s(E,\Omega)\leq2P^L_s(E,T_k)+4\frac{n\omega_n}{s}|\Omega|\frac{1}{d_k^s}. \end{equation} By construction $T_k$ is a bounded open set with Lipschitz boundary \begin{equation*} \partial T_k=\partial A_k\cup\partial D_k. \end{equation*} Therefore, for every $k$, using again Proposition $\ref{local_frac_converge}$, we have \begin{equation*} \lim_{s\to1}(1-s)P^L_s(E,T_k)=\omega_{n-1}P(E,T_k), \end{equation*} and hence \begin{equation*} \limsup_{s\to1}(1-s)P_s(E,\Omega) \leq\omega_{n-1}P(E,\Omega)+2\omega_{n-1}P(E,T_k).
\end{equation*} Since this holds true for any $k$, we get the claim. \end{proof} \end{teo} \begin{rmk} We remark that in the proof above we showed that we can bound the nonlocal part of the perimeter as \begin{equation*} P^{NL}_s(E,\Omega)\leq2P^L_s(E,T_k)+4\frac{n\omega_n}{s}|\Omega|\frac{1}{d_k^s}, \end{equation*} for every $k$, without making assumptions on the set $E\subset\R$. Then, using Theorem $\ref{bb}$, we get \begin{equation*} \limsup_{s\to1}(1-s)P^{NL}_s(E,\Omega)\leq CP(E,T_k), \end{equation*} and hence \begin{equation} \limsup_{s\to1}(1-s)P^{NL}_s(E,\Omega)\leq C\liminf_{k\to\infty}P(E,T_k). \end{equation} Note that this estimate holds true for any set $E\subset\R$.\\ However, if we suppose that $P(E,D_1)<\infty$, then the liminf is actually a limit, which is equal to $P(E,\partial\Omega)$, and we can use the constant $C=2\omega_{n-1}$, as we did above. \end{rmk} Actually we can use the signed distance function to find our approximating sets.\\ Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary and let $\bar{d}_\Omega$ denote the signed distance function from $\Omega$, negative inside. Define, for any $\rho\in\mathbb{R}$ with $|\rho|$ small, the open set \begin{equation*} \Omega_\rho:=\{\bar{d}_\Omega<\rho\}. \end{equation*} It is proved in \cite{LipApprox} that $\Omega_\rho$ has Lipschitz boundary for every $|\rho|<\alpha$, for some $\alpha>0$ small enough. Notice that $\Omega_\rho\subset\subset\Omega$ when $\rho<0$ and $\Omega\subset\subset\Omega_\rho$ when $\rho>0$.\\ Therefore we can consider the sets $\Omega_\rho$ as our approximating sequences $A_k,\, D_k$, with $\rho<0$ and $\rho>0$ respectively. Having ``continuous'' approximating families rather than countable ones allows us to improve the previous result. Notice that for $\rho>0$ \begin{equation*} N_\rho(\partial\Omega)=\Omega_\rho\setminus\overline{\Omega_{-\rho}}=\{-\rho<\bar{d}_\Omega<\rho\}, \end{equation*} is an open tubular neighborhood of $\partial\Omega$.
These take the place of the sets $T_k$. Now suppose that $E$ has finite perimeter in $\Omega_\alpha$. In particular this implies that $E$ has finite perimeter in $N_\alpha(\partial\Omega)$.\\ Notice that $P(E,B)=\Han(\partial^*E\cap B)$ for every $B\subset\R$, where $\partial^*E$ is the reduced boundary of $E$. In particular \begin{equation*} P(E,\{\bar{d}_\Omega=\delta\})=\Han(\partial^*E\cap\{\bar{d}_\Omega=\delta\}), \end{equation*} for every $\delta\in(-\alpha,\alpha)$. Now, since \begin{equation*} \Han(\partial^*E\cap N_\alpha(\partial\Omega))=P(E,N_\alpha(\partial\Omega))<\infty, \end{equation*} the set \begin{equation*} S:=\left\{\delta\in(-\alpha,\alpha)\,|\,P(E,\{\bar{d}_\Omega=\delta\})>0\right\} \end{equation*} is at most countable.\\ Moreover for every $\delta\in(-\alpha,\alpha)\setminus S$ we have \begin{equation*} \lim_{s\to1}(1-s)P_s(E,\Omega_\delta)=\omega_{n-1}P(E,\Omega_\delta). \end{equation*} This shows that even if the limit fails for $\Omega$ itself, it holds as soon as we slightly enlarge or shrink $\Omega$. Moreover, since $(-\alpha,\alpha)\setminus S$ is dense in $(-\alpha,\alpha)$, we can enlarge (or shrink) $\Omega$ as little as we want. To sum up \begin{coroll} Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary and let $E\subset\R$ be a set having finite perimeter in $\Omega_\beta$, for some $0<\beta<\alpha$.\\ Then there exists a set $S\subset(-\alpha,\beta)$, at most countable, s.t. \begin{equation*} \lim_{s\to1}(1-s)P_s(E,\Omega_\delta)=\omega_{n-1}P(E,\Omega_\delta), \end{equation*} for every $\delta\in(-\alpha,\beta)\setminus S$. \end{coroll} This is an improvement of Theorem 1 in \cite{cafenr}, which was obtained through uniform estimates.
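For a concrete instance of the sets involved (the unit disc is our own toy example, not taken from the discussion above), take $\Omega=B_1\subset\mathbb{R}^2$, so that $\bar d_\Omega(x)=|x|-1$, $\Omega_\rho=B_{1+\rho}$, and the tubular neighborhood is an open annulus:

```python
import math

# Toy example (our own choice): Omega = B_1 in R^2, signed distance
# d(x) = |x| - 1, so Omega_rho = B_{1+rho} and the tubular neighbourhood
# N_rho(boundary) = {1 - rho < |x| < 1 + rho} is an open annulus.

def tube_area(rho):
    """Area of N_rho(S^1), the rho-tube around the unit circle (equals 4*pi*rho)."""
    return math.pi * ((1.0 + rho) ** 2 - (1.0 - rho) ** 2)
```

As $\rho\to0$ the tubes shrink to the (Lebesgue-null) circle, while $\texttt{tube\_area}(\rho)/(2\rho)=2\pi=\h^1(\partial B_1)$: the normalized volumes of the tubes recover the measure of the boundary, in the spirit of the Minkowski contents introduced in the next section.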
In particular, our result holds for any bounded Lipschitz domain $\Omega$, without requiring $C^{1,\alpha}$ regularity of $\partial E$.\\ For a complete analysis of the asymptotics of the fractional perimeter as $s\to1$ in the context of $\Gamma$-convergence, see \cite{Gamma}.\\ Finally, we remark that the asymptotics as $s\to0$ has also been studied; see \cite{asymptzero} for a complete analysis. \end{section} \begin{section}{(Ir)Regularity of the Boundary} We give a definition of fractal dimension related to the fractional perimeter, which was introduced in \cite{Visintin}. In particular we show that a set can have finite fractional perimeter even if the dimension of its boundary is greater than $n-1$.\\ Using Proposition $\ref{cont_scale}$ we immediately get the following \begin{prop} Let $\Omega$ be an open set. For every measurable $u:\Omega\longrightarrow\mathbb{R}$ there exists one and only one $R(u)\in[0,1]$ s.t. \begin{equation*} [u]_{\fracso}\quad\left\{\begin{array}{cc} <\infty,& \forall\,s\in(0,R(u))\\ =\infty, &\forall\,s\in(R(u),1) \end{array}\right. \end{equation*} that is \begin{equation}\begin{split}\label{frac_range} R(u)&=\sup\left\{s\in(0,1)\,\big|\,[u]_{\fracso}<\infty\right\}\\ & =\inf\left\{s\in(0,1)\,\big|\,[u]_{\fracso}=\infty\right\}. \end{split} \end{equation} \end{prop} In particular, using the above Proposition for characteristic functions, we can give the following definition of fractal dimension. \begin{defin} Let $E\subset\R$ s.t. $\partial^-E\not=\emptyset$. We define \begin{equation} \Dim_F(\partial^-E,\Omega):=n-R(\chi_E), \end{equation} the fractal dimension of $\partial^-E$ in $\Omega$ relative to the fractional perimeter. \end{defin} Notice that in the case of sets $(\ref{frac_range})$ becomes \begin{equation} \begin{split}\label{frac_range_sets} R(\chi_E)&=\sup\left\{s\in(0,1)\,\big|\,P_s^L(E,\Omega)<\infty\right\}\\ & =\inf\left\{s\in(0,1)\,\big|\,P_s^L(E,\Omega)=\infty\right\}.
\end{split} \end{equation} In particular we can take $\Omega$ to be the whole of $\R$, or a bounded open set with Lipschitz boundary.\\ In the first case the local part of the fractional perimeter coincides with the whole fractional perimeter, while in the second case we know that we can bound the nonlocal part with $2P_s(\Omega)<\infty$ for every $s\in(0,1)$. Therefore in both cases in $(\ref{frac_range_sets})$ we can as well take the whole fractional perimeter $P_s(E,\Omega)$ instead of just the local part. Using the embedding of Proposition $\ref{bv_embd}$, we know that if $\Omega$ is an extension domain, then \begin{equation*} P(E,\Omega)<\infty\quad\Longrightarrow\quad\Dim_F(\partial^-E,\Omega)=n-1. \end{equation*} However the Example $\ref{inclusion_counterexample}$ shows that (in general) the converse is false. Now we recall the definition of Minkowski content and dimension.\\ For simplicity set \begin{equation*} \bar{N}_\rho^\Omega(E):=\overline{N_\rho(E)}\cap\Omega =\{x\in\Omega\,|\,d(x,E)\leq\rho\}, \end{equation*} for any $\rho>0$. \begin{defin} Let $\Omega\subset\R$ be an open set. For any $\Gamma\subset\R$ and $r\in[0,n]$ we define the inferior and superior $r$-dimensional Minkowski contents of $\Gamma$ relative to the set $\Omega$ as, respectively \begin{equation*} \underline{\mathcal{M}}^r(\Gamma,\Omega):=\liminf_{\rho\to0}\frac{|\bar{N}_\rho^\Omega(\Gamma)|}{\rho^{n-r}},\qquad \overline{\mathcal{M}}^r(\Gamma,\Omega):=\limsup_{\rho\to0}\frac{|\bar{N}_\rho^\Omega(\Gamma)|}{\rho^{n-r}}. 
\end{equation*} Then we define the lower and upper Minkowski dimensions of $\Gamma$ in $\Omega$ as \begin{equation*}\begin{split} \underline{\Dim}_\mathcal{M}(\Gamma,\Omega)&:=\inf\left\{r\in[0,n]\,|\,\underline{\mathcal{M}}^r(\Gamma,\Omega)=0\right\}\\ & =n-\sup\left\{r\in[0,n]\,|\,\underline{\mathcal{M}}^{n-r}(\Gamma,\Omega)=0\right\}, \end{split}\end{equation*} \begin{equation*}\begin{split} \overline{\Dim}_\mathcal{M}(\Gamma,\Omega)&:=\sup\left\{r\in[0,n]\,|\,\overline{\mathcal{M}}^r(\Gamma,\Omega)=\infty\right\}\\ & =n-\inf\left\{r\in[0,n]\,|\,\overline{\mathcal{M}}^{n-r}(\Gamma,\Omega)=\infty\right\}. \end{split} \end{equation*} If they agree, we write \begin{equation*} \Dim_\mathcal{M}(\Gamma,\Omega) \end{equation*} for the common value and call it the Minkowski dimension of $\Gamma$ in $\Omega$.\\ If $\Omega=\R$ or $\Gamma\subset\subset\Omega$, we drop the $\Omega$ in the formulas. \end{defin} \begin{rmk} Let $\Dim_\mathcal{H}$ denote the Hausdorff dimension. In general one has \begin{equation*} \Dim_\mathcal{H}(\Gamma)\leq\underline{\Dim}_\mathcal{M}(\Gamma)\leq\overline{\Dim}_\mathcal{M}(\Gamma), \end{equation*} and all the inequalities might be strict. However for some sets (e.g. self-similar sets satisfying suitable symmetry and regularity conditions) they are all equal. \end{rmk} The following proposition is proved in \cite{Visintin}, though it is not stated there explicitly. \begin{prop} Let $\Omega\subset\R$ be a bounded open set. Then for every $E\subset\R$ s.t. $\partial^-E\not=\emptyset$ and $\overline{\Dim}_\mathcal{M}(\partial^-E,\Omega)\geq n-1$ we have \begin{equation} \Dim_F(\partial^-E,\Omega)\leq\overline{\Dim}_\mathcal{M}(\partial^-E,\Omega).
\end{equation} \begin{proof} By hypothesis we have \begin{equation*} \overline{\Dim}_\mathcal{M}(\partial^-E,\Omega)=n-\inf\left\{r\in(0,1)\,|\,\overline{\mathcal{M}}^{n-r}(\partial^-E,\Omega)=\infty\right\}, \end{equation*} and we need to show that \begin{equation*} \inf\left\{r\in(0,1)\,|\,\overline{\mathcal{M}}^{n-r}(\partial^-E,\Omega)=\infty\right\} \leq \sup\{s\in(0,1)\,|\,P_s^L(E,\Omega)<\infty\}. \end{equation*} Up to modifying $E$ on a set of Lebesgue measure zero we can suppose that $\partial E=\partial^-E$, as in Remark $\ref{gmt_assumption}$. Notice that this does not affect the $s$-perimeter. Now for any $s\in(0,1)$ \begin{equation*}\begin{split} 2P_s^L(E,\Omega)&=\int_\Omega\,dx\int_\Omega\frac{|\chi_E(x)-\chi_E(y)|}{\kers}\,dy\\ & =\int_\Omega dx\int_0^\infty d\rho\int_{\partial B_\rho(x)\cap\Omega}\frac{|\chi_E(x)-\chi_E(y)|}{\kers}\,d\Han(y)\\ & =\int_\Omega dx\int_0^\infty\frac{d\rho}{\rho^{n+s}}\int_{\partial B_\rho(x)\cap\Omega}|\chi_E(x)-\chi_E(y)|\,d\Han(y). \end{split} \end{equation*} Notice that \begin{equation*} d(x,\partial E)>\rho\quad\Longrightarrow\quad\chi_E(y)=\chi_E(x),\quad\forall\,y\in\overline{B_\rho(x)}, \end{equation*} and hence \begin{equation*}\begin{split} \int_{\partial B_\rho(x)\cap\Omega}|\chi_E(x)-\chi_E(y)|\,d\Han(y)& \leq\int_{\partial B_\rho(x)\cap\Omega}\chi_{\bar{N}_\rho(\partial E)}(x)\,d\Han(y)\\ & \leq n\omega_n\rho^{n-1}\chi_{\bar{N}_\rho(\partial E)}(x). \end{split} \end{equation*} Therefore \begin{equation}\label{visintin_pf} 2P_s^L(E,\Omega)\leq n\omega_n\int_0^\infty\frac{d\rho}{\rho^{1+s}}\int_\Omega \chi_{\bar{N}_\rho(\partial E)}(x)\,dx =n\omega_n\int_0^\infty\frac{|\bar{N}^\Omega_\rho(\partial E)|}{\rho^{1+s}}\,d\rho. \end{equation} We now prove the following claim: \begin{equation}\label{visintin_proof} \overline{\mathcal{M}}^{n-r}(\partial E,\Omega)<\infty\quad\Longrightarrow\quad P_s^L(E,\Omega)<\infty,\quad\forall\,s\in(0,r).
\end{equation} Indeed \begin{equation*} \limsup_{\rho\to0}\frac{|\bar{N}^\Omega_\rho(\partial E)|}{\rho^r}<\infty\quad\Longrightarrow\quad\exists\,C,\,M>0\textrm{ s.t. } \sup_{\rho\in(0,C]}\frac{|\bar{N}^\Omega_\rho(\partial E)|}{\rho^r}\leq M<\infty. \end{equation*} Then \begin{equation*}\begin{split} 2P_s^L(E,\Omega)&\leq n\omega_n\left\{\int_0^C\frac{|\bar{N}^\Omega_\rho(\partial E)|}{\rho^{1-(r-s)+r}}\,d\rho +\int_C^\infty\frac{|\bar{N}^\Omega_\rho(\partial E)|}{\rho^{1+s}}\,d\rho\right\}\\ & \leq n\omega_n\left\{ M\int_0^C\frac{1}{\rho^{1-(r-s)}}\,d\rho+|\Omega|\int_C^\infty\frac{1}{\rho^{1+s}}\,d\rho \right\}\\ & =n\omega_n\left\{ \frac{M}{r-s}C^{r-s}+\frac{|\Omega|}{sC^s} \right\}<\infty, \end{split}\end{equation*} proving the claim.\\ This implies \begin{equation*} r\leq\sup\{s\in(0,1)\,|\,P_s^L(E,\Omega)<\infty\}, \end{equation*} for every $r\in(0,1)$ s.t. $\overline{\mathcal{M}}^{n-r}(\partial E,\Omega)<\infty$.\\ Thus, for any small $\epsilon>0$, we have \begin{equation*} \inf\left\{r\in(0,1)\,|\,\overline{\mathcal{M}}^{n-r}(\partial^-E,\Omega)=\infty\right\}-\epsilon \leq\sup\{s\in(0,1)\,|\,P_s^L(E,\Omega)<\infty\}. \end{equation*} Letting $\epsilon$ tend to zero, we conclude the proof. \end{proof} \end{prop} In particular, if $\Omega$ has a regular boundary, we obtain \begin{coroll} Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary. Let $E\subset\R$ s.t. $\partial^-E\not=\emptyset$ and $\overline{\Dim}_\mathcal{M}(\partial^-E,\Omega)\in[n-1,n)$. Then \begin{equation}\label{fractal_per} P_s(E,\Omega)<\infty\qquad\textrm{for every }s\in\left(0,n-\overline{\Dim}_\mathcal{M}(\partial^-E,\Omega)\right). \end{equation} \end{coroll} \noindent This shows that a set $E$ can have finite fractional perimeter even if its boundary is really irregular (unlike what happens with Caccioppoli sets and their reduced boundary).
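The quantity $|\bar{N}^\Omega_\rho(\partial E)|$ appearing in the proof also suggests a practical way to estimate Minkowski dimensions: since $|\bar{N}_\rho(\Gamma)|\approx\rho^{n-r}$ when $\Gamma$ has dimension $r$, the slope of $\log|\bar{N}_\rho(\Gamma)|$ against $\log\rho$ determines $n-r$. The following sketch (the middle-thirds Cantor set is our own test case, with $n=1$ and dimension $\log2/\log3$, and the stage-$k$ intervals are used as a stand-in for the set itself) implements this:

```python
import math

# Sketch (test case our own, not from the text): estimate the Minkowski
# dimension of the middle-thirds Cantor set C from the scaling
# |N_rho(C)| ~ rho^(n - D), i.e. D ~ n - slope of log|N_rho| vs log rho.
# Here n = 1 and the exact value is D = log 2 / log 3.

def cantor_intervals(k):
    """The 2^k closed intervals of the k-th stage of the Cantor construction."""
    iv = [(0.0, 1.0)]
    for _ in range(k):
        iv = [p for (a, b) in iv
              for p in ((a, a + (b - a) / 3.0), (b - (b - a) / 3.0, b))]
    return iv

def nbhd_measure(intervals, rho):
    """Lebesgue measure of the rho-neighbourhood of a finite union of intervals."""
    pts = sorted((a - rho, b + rho) for (a, b) in intervals)
    cur_a, cur_b = pts[0]
    total = 0.0
    for a, b in pts[1:]:
        if a > cur_b:                 # disjoint piece: flush the current run
            total += cur_b - cur_a
            cur_a, cur_b = a, b
        else:                         # overlapping or touching: merge
            cur_b = max(cur_b, b)
    return total + (cur_b - cur_a)

def minkowski_dim_estimate(k1=6, k2=12):
    """Finite-difference slope of log|N_rho| against log rho at two scales."""
    rho1, rho2 = 3.0 ** (-k1) / 2.0, 3.0 ** (-k2) / 2.0
    m1 = nbhd_measure(cantor_intervals(k1), rho1)
    m2 = nbhd_measure(cantor_intervals(k2), rho2)
    slope = (math.log(m1) - math.log(m2)) / (math.log(rho1) - math.log(rho2))
    return 1.0 - slope
```

With $\rho=3^{-k}/2$ one has $|N_\rho|=2^{k+1}3^{-k}$ exactly, so the two-scale slope gives precisely $1-\log2/\log3$ and the estimate recovers $D=\log2/\log3\approx0.6309$.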
Now we give some equivalent definitions of the Minkowski dimensions, usually referred to as box-counting dimensions, which are easier to compute. For the details and the relation between the Minkowski and the Hausdorff dimensions, see \cite{Mattila} and \cite{Falconer} and the references cited therein.\\ For simplicity we only consider the case $\Gamma$ bounded and $\Omega=\R$ (or $\Gamma\subset\subset\Omega$). \begin{defin} Given a nonempty bounded set $\Gamma\subset\R$, define for every $\delta>0$ \begin{equation*} \mathcal{N}(\Gamma,\delta):=\min\left\{k\in\mathbb{N}\,\big|\,\Gamma\subset\bigcup_{i=1}^kB_\delta(x_i),\textrm{ for some }x_i\in\R\right\}, \end{equation*} the smallest number of $\delta$-balls needed to cover $\Gamma$, and \begin{equation*} \mathcal{P}(\Gamma,\delta):=\max\left\{k\in\mathbb{N}\,|\,\exists\,\textrm{disjoint balls }B_\delta(x_i),\,i=1,\dots,k\textrm{ with }x_i\in \Gamma\right\}, \end{equation*} the greatest number of disjoint $\delta$-balls with centres in $\Gamma$. \end{defin} Then it is easy to verify that \begin{equation}\label{counting} \mathcal{N}(\Gamma,2\delta)\leq\mathcal{P}(\Gamma,\delta)\leq\mathcal{N}(\Gamma,\delta/2). \end{equation} Moreover, since any union of $\delta$-balls with centers in $\Gamma$ is contained in $N_\delta(\Gamma)$, and any union of $(2\delta)$-balls covers $N_\delta(\Gamma)$ if the union of the corresponding $\delta$-balls covers $\Gamma$, we get \begin{equation}\label{counting2} \mathcal{P}(\Gamma,\delta)\omega_n\delta^n\leq|N_\delta(\Gamma)|\leq \mathcal{N}(\Gamma,\delta)\omega_n(2\delta)^n. \end{equation} Using $(\ref{counting})$ and $(\ref{counting2})$ we see that \begin{equation*}\begin{split} &\underline{\Dim}_\mathcal{M}(\Gamma)=\inf\left\{r\in[0,n]\,\big|\,\liminf_{\delta\to0}\mathcal{N}(\Gamma,\delta)\delta^r=0\right\},\\ & \overline{\Dim}_\mathcal{M}(\Gamma)=\sup\left\{r\in[0,n]\,\big|\,\limsup_{\delta\to0}\mathcal{N}(\Gamma,\delta)\delta^r=\infty\right\}. 
\end{split} \end{equation*} Then it can be proved that \begin{equation}\label{log_counting}\begin{split} &\underline{\Dim}_\mathcal{M}(\Gamma)=\liminf_{\delta\to0}\frac{\log\mathcal{N}(\Gamma,\delta)}{-\log\delta},\\ & \overline{\Dim}_\mathcal{M}(\Gamma)=\limsup_{\delta\to0}\frac{\log\mathcal{N}(\Gamma,\delta)}{-\log\delta}. \end{split} \end{equation} Actually notice that, due to $(\ref{counting})$, we can take $\mathcal{P}(\Gamma,\delta)$ in place of $\mathcal{N}(\Gamma,\delta)$ in the above formulas.\\ It is also easy to see that if in the definition of $\mathcal{N}(\Gamma,\delta)$ we take cubes of side $\delta$ instead of balls of radius $\delta$, then we get exactly the same dimensions. Moreover in $(\ref{log_counting})$ it is enough to consider limits as $\delta\to0$ through any decreasing sequence $\delta_k$ s.t. $\delta_{k+1}\geq c\delta_k$ for some constant $c\in(0,1)$; in particular for $\delta_k=c^k$. Indeed if $\delta_{k+1}\leq\delta<\delta_k$, then \begin{equation*}\begin{split} \frac{\log\mathcal{N}(\Gamma,\delta)}{-\log\delta}&\leq\frac{\log\mathcal{N}(\Gamma,\delta_{k+1})}{-\log\delta_k} =\frac{\log\mathcal{N}(\Gamma,\delta_{k+1})}{-\log\delta_{k+1}+\log(\delta_{k+1}/\delta_k)}\\ & \leq\frac{\log\mathcal{N}(\Gamma,\delta_{k+1})}{-\log\delta_{k+1}+\log c}, \end{split}\end{equation*} so that \begin{equation*} \limsup_{\delta\to0}\frac{\log\mathcal{N}(\Gamma,\delta)}{-\log\delta}\leq \limsup_{k\to\infty}\frac{\log\mathcal{N}(\Gamma,\delta_k)}{-\log\delta_k}. \end{equation*} The opposite inequality is clear and in a similar way we can treat the lower limits. Now we can study the following example. \begin{ese}[von Koch Snowflake] The von Koch snowflake, whose construction we recall below, is a bounded open set $S\subset\mathbb{R}^2$ whose boundary has fractal dimension $\Dim_\mathcal{M}(\partial S)=\frac{\log4}{\log3}$. Therefore we have \begin{equation}\label{koch1} P_s(S)<\infty,\qquad\forall\,s\in\left(0,2-\frac{\log4}{\log3}\right). 
\end{equation} Moreover \begin{equation}\label{koch2} P_s(S)=\infty,\qquad\forall\,s\in\left(2-\frac{\log4}{\log3},1\right), \end{equation} and hence \begin{equation*} \Dim_F(\partial S)=\Dim_\mathcal{M}(\partial S)=\frac{\log4}{\log3}. \end{equation*} \begin{proof} First of all we construct the von Koch curve. Then the snowflake is made of three von Koch curves.\\ Let $E_0$ be a line segment of unit length. The set $E_1$ consists of the four segments obtained by removing the middle third of $E_0$ and replacing it by the other two sides of the equilateral triangle based on the removed segment. We construct $E_2$ by applying the same procedure to each of the segments in $E_1$ and so on. Thus $E_k$ is obtained by replacing the middle third of each straight line segment of $E_{k-1}$ by the other two sides of an equilateral triangle.\\ As $k$ tends to infinity, the sequence of polygonal curves $E_k$ approaches a limiting curve $E$, called the von Koch curve.\\ If we start with an equilateral triangle with unit length side and perform the same construction on all three sides, we obtain the von Koch snowflake $\Sigma$.\\ Let $S$ be the bounded region enclosed by $\Sigma$, so that $S$ is open and $\partial S=\Sigma$.\\ Now we calculate the dimension of $E$ using the remarks above about the box-counting dimensions.\\ The idea is to exploit the self-similarity of $E$ and consider covers made of squares with side $\delta_k=3^{-k}$.\\ The key observation is that $E$ can be covered by three squares of side $1/3$ (and cannot be covered by only two), so that $\mathcal{N}(E,1/3)=3$.\\ Then consider $E_1$. We can think of $E$ as being made of four von Koch curves starting from the set $E_1$ and with initial segments of length $1/3$ instead of 1.
Therefore we can cover each of these four pieces with three squares of side $1/9$, so that $E$ can be covered with $3\cdot4$ squares of side $1/9$ (and with no fewer) and $\mathcal{N}(E,1/9)=4\cdot3$.\\ We can repeat the same argument starting from $E_2$ to get $\mathcal{N}(E,1/27)=4^2\cdot3$, and so on. In general we obtain \begin{equation*} \mathcal{N}(E,3^{-k})=4^{k-1}\cdot3. \end{equation*} Then, taking logarithms we get \begin{equation*} \frac{\log\mathcal{N}(E,3^{-k})}{-\log3^{-k}}=\frac{\log3+(k-1)\log4}{k\log3}\longrightarrow\frac{\log4}{\log3}, \end{equation*} so that $\Dim_\mathcal{M}(E)=\frac{\log4}{\log3}$.\\ Notice that the Minkowski dimensions of the snowflake and of the curve are the same, so we get the claim and $(\ref{koch1})$. Now we prove $(\ref{koch2})$.\\ As starting point for the snowflake take the equilateral triangle $T$ of side 1, with barycenter at the origin and a vertex on the $y$-axis. Then $T_1$ is made of three triangles of side $1/3$, $T_2$ of $3\cdot4$ triangles of side $1/3^2$ and so on. In general $T_k$ is made of $3\cdot4^{k-1}$ triangles of side $1/3^k$; call them $T_k^1,\dots,T_k^{3\cdot4^{k-1}}$ and let the $x^i_k$'s be their barycenters. For each triangle $T^i_k$ there exists a rotation $\mathcal{R}_k^i\in SO(2)$ s.t. \begin{equation*} \mathcal{R}_k^i\left(\frac{1}{3^k}T\right)+x_k^i=T_k^i. \end{equation*} Fix a ball $B_1(x)\subset\Co S$, far from $S$, e.g.\ with $x=(0,15)$. Then \begin{equation}\label{koch3} B_{3^{-k}}(x+x_k^i)=\mathcal{R}_k^i\left(\frac{1}{3^k}B_1(x)\right)+x_k^i\subset\Co S, \end{equation} for every $i,\,k$.\\ Notice that $T_k$ and $T_{k-1}$ touch only on a set of measure zero and $S=T\cup\bigcup T_k$.
Then \begin{equation*}\begin{split} P_s(S)&=\Ll_s(S,\Co S)=\Ll_s(T,\Co S)+\sum_{k=1}^\infty\Ll_s(T_k,\Co S)\\ & =\Ll_s(T,\Co S)+\sum_{k=1}^\infty\sum_{i=1}^{3\cdot4^{k-1}}\Ll_s(T_k^i,\Co S) \geq\sum_{k=1}^\infty\sum_{i=1}^{3\cdot4^{k-1}}\Ll_s(T_k^i,\Co S)\\ & \geq\sum_{k=1}^\infty\sum_{i=1}^{3\cdot4^{k-1}}\Ll_s(T_k^i,B_{3^{-k}}(x+x_k^i))\qquad\textrm{(by }(\ref{koch3}))\\ & =\sum_{k=1}^\infty\sum_{i=1}^{3\cdot4^{k-1}}\Big(\frac{1}{3^k}\Big)^{2-s}\Ll_s(T,B_1(x))\qquad\textrm{(by Proposition }\ref{elementary_properties})\\ & =\frac{3}{3^{2-s}}\Ll_s(T,B_1(x))\sum_{k=0}^\infty\Big(\frac{4}{3^{2-s}}\Big)^k. \end{split}\end{equation*} To conclude, notice that the last series is divergent if $s>2-\frac{\log4}{\log3}$. \end{proof} \end{ese} \end{section} \begin{section}{Proof of Example $\ref{inclusion_counterexample}$} Note that $E\subset (0,a^2]$. Let $\Omega:=(-1,1)\subset\mathbb{R}$. Then $E\subset\subset\Omega$ and $\textrm{dist}(E,\partial\Omega)=1-a^2=:d>0$. Now \begin{equation*} P_s(E)=\int_E\int_{CE\cap\Omega}\frac{dxdy}{|x-y|^{1+s}}+ \int_E\int_{C\Omega}\frac{dxdy}{|x-y|^{1+s}} \end{equation*} As for the second term, we have \begin{equation*} \int_E\int_{C\Omega}\frac{dxdy}{|x-y|^{1+s}}\leq\frac{2|E|}{sd^s}<\infty. \end{equation*} We split the first term into three pieces \begin{equation*}\begin{split} \int_E&\int_{CE\cap\Omega}\frac{dxdy}{|x-y|^{1+s}}\\ & =\int_E\int_{-1}^0\frac{dxdy}{|x-y|^{1+s}} +\int_E\int_{CE\cap(0,a)}\frac{dxdy}{|x-y|^{1+s}}+\int_E\int_a^1\frac{dxdy}{|x-y|^{1+s}}\\ & =\mathcal{I}_1+\mathcal{I}_2+\mathcal{I}_3. \end{split} \end{equation*} Note that $CE\cap(0,a)=\bigcup_{k\in\mathbb{N}}I_{2k-1}=\bigcup_{k\in\mathbb{N}}(a^{2k},a^{2k-1})$.\\ A simple calculation shows that, if $a<b\leq c<d$, then \begin{equation}\label{rectangle_integral}\begin{split} \int_a^b&\int_c^d\frac{dxdy}{|x-y|^{1+s}}=\\ & \frac{1}{s(1-s)}\big[(c-a)^{1-s}+(d-b)^{1-s}-(c-b)^{1-s}-(d-a)^{1-s}\big]. 
\end{split} \end{equation} Also note that, if $n>m\geq1$, then \begin{equation}\label{derivative_bound}\begin{split} (1-a^n)^{1-s}-(1-a^m)^{1-s}&=\int_m^n\frac{d}{dt}(1-a^t)^{1-s}\,dt\\ & =(s-1)\log a\int_m^n\frac{a^t}{(1-a^t)^s}\,dt\\ & \leq a^m (s-1)\log a\int_m^n\frac{1}{(1-a^t)^s}\,dt\\ & \leq(n-m)a^m\frac{(s-1)\log a}{(1-a)^s}. \end{split} \end{equation} Now consider the first term \begin{equation*} \mathcal{I}_1=\sum_{k=1}^\infty\int_{a^{2k+1}}^{a^{2k}}\int_{-1}^0\frac{dxdy}{|x-y|^{1+s}}. \end{equation*} Use $(\ref{rectangle_integral}$) and notice that $(c-a)^{1-s}-(d-a)^{1-s}\leq0$ to get \begin{equation*} \int_{-1}^0\int_{a^{2k+1}}^{a^{2k}}\frac{dxdy}{|x-y|^{1+s}} \leq\frac{1}{s(1-s)}\big[(a^{2k})^{1-s}-(a^{2k+1})^{1-s}\big]\leq\frac{1}{s(1-s)}(a^{2(1-s)})^k. \end{equation*} Then, as $a^{2(1-s)}<1$ we get \begin{equation*} \mathcal{I}_1\leq\frac{1}{s(1-s)}\sum_{k=1}^\infty(a^{2(1-s)})^k<\infty. \end{equation*} As for the last term \begin{equation*} \mathcal{I}_3=\sum_{k=1}^\infty\int_{a^{2k+1}}^{a^{2k}}\int_a^1\frac{dxdy}{|x-y|^{1+s}}, \end{equation*} use $(\ref{rectangle_integral}$) and notice that $(d-b)^{1-s}-(d-a)^{1-s}\leq0$ to get \begin{equation*}\begin{split} \int_{a^{2k+1}}^{a^{2k}}\int_a^1\frac{dxdy}{|x-y|^{1+s}}& \leq\frac{1}{s(1-s)}\big[(1-a^{2k+1})^{1-s}-(1-a^{2k})^{1-s}\big]\\ & \leq\frac{-\log a}{s(1-a)^s}a^{2k}\quad\textrm{by }(\ref{derivative_bound}). \end{split} \end{equation*} Thus \begin{equation*} \mathcal{I}_3\leq\frac{-\log a}{s(1-a)^s}\sum_{k=1}^\infty(a^2)^k<\infty. \end{equation*} Finally we split the second term \begin{equation*} \mathcal{I}_2=\sum_{k=1}^\infty\sum_{j=1}^\infty\int_{a^{2k+1}}^{a^{2k}}\int_{a^{2j}}^{a^{2j-1}} \frac{dxdy}{|x-y|^{1+s}} \end{equation*} into three pieces according to the cases $j>k$, $j=k$ and $j<k$. 
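The closed formula $(\ref{rectangle_integral})$, which is used repeatedly in these estimates, can be verified numerically; the endpoints and the value of $s$ in the following sketch are our own arbitrary choices, subject to $a<b<c<d$ so that the integrand stays bounded:

```python
# Numerical check (values chosen arbitrarily) of the rectangle formula:
# for a < b <= c < d,
#   int_a^b int_c^d |x-y|^(-1-s) dx dy
#     = [(c-a)^(1-s) + (d-b)^(1-s) - (c-b)^(1-s) - (d-a)^(1-s)] / (s(1-s)).

def closed_form(a, b, c, d, s):
    """Right-hand side of the rectangle formula."""
    p = 1.0 - s
    return ((c - a) ** p + (d - b) ** p - (c - b) ** p - (d - a) ** p) / (s * p)

def midpoint_double(a, b, c, d, s, n=400):
    """Midpoint-rule approximation of the double integral over (a,b) x (c,d)."""
    hx, hy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * hx
        for j in range(n):
            y = c + (j + 0.5) * hy
            total += abs(x - y) ** (-(1.0 + s))
    return total * hx * hy
```

Taking for instance $(a,b,c,d)=(0,1,1.5,3)$ and $s=0.4$, the quadrature agrees with the closed form to well below one percent.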
If $j=k$, using $(\ref{rectangle_integral})$ we get \begin{equation*}\begin{split} \int_{a^{2k+1}}^{a^{2k}}&\int_{a^{2k}}^{a^{2k-1}} \frac{dxdy}{|x-y|^{1+s}}=\\ & =\frac{1}{s(1-s)}\big[(a^{2k}-a^{2k+1})^{1-s}+(a^{2k-1}-a^{2k})^{1-s}-(a^{2k-1}-a^{2k+1})^{1-s}\big]\\ & =\frac{1}{s(1-s)}\big[a^{2k(1-s)}(1-a)^{1-s}+a^{(2k-1)(1-s)}(1-a)^{1-s}\\ & \quad\quad\quad\quad\quad-a^{(2k-1)(1-s)}(1-a^2)^{1-s}\big]\\ & =\frac{1}{s(1-s)}(a^{2(1-s)})^k\Big[(1-a)^{1-s}+\frac{(1-a)^{1-s}}{a^{1-s}}-\frac{(1-a^2)^{1-s}}{a^{1-s}}\Big]. \end{split} \end{equation*} Summing over $k\in\mathbb{N}$ we get \begin{equation*}\begin{split} \sum_{k=1}^\infty&\int_{a^{2k+1}}^{a^{2k}}\int_{a^{2k}}^{a^{2k-1}} \frac{dxdy}{|x-y|^{1+s}}=\\ & =\frac{1}{s(1-s)}\frac{a^{2(1-s)}}{1-a^{2(1-s)}}\Big[(1-a)^{1-s}+\frac{(1-a)^{1-s}}{a^{1-s}}-\frac{(1-a^2)^{1-s}}{a^{1-s}}\Big]<\infty. \end{split} \end{equation*} In particular note that \begin{equation*}\begin{split} (1-s)&P_s(E)\geq(1-s)\mathcal{I}_2\\ & \geq\frac{1}{s(1-a^{2(1-s)})}\big[a^{2(1-s)}(1-a)^{1-s}+a^{1-s}(1-a)^{1-s}-a^{1-s}(1-a^2)^{1-s}\big], \end{split} \end{equation*} which tends to $+\infty$ when $s\to1$. This shows that $E$ cannot have finite perimeter. To conclude let $j>k$, the case $j<k$ being similar, and consider \begin{equation*} \sum_{k=1}^\infty\sum_{j=k+1}^\infty\int_{a^{2j}}^{a^{2j-1}}\int_{a^{2k+1}}^{a^{2k}} \frac{dxdy}{|x-y|^{1+s}}. 
\end{equation*} Again, using $(\ref{rectangle_integral})$ and $(d-b)^{1-s}-(d-a)^{1-s}\leq0$, we get \begin{equation*}\begin{split} \int_{a^{2j}}^{a^{2j-1}}&\int_{a^{2k+1}}^{a^{2k}} \frac{dxdy}{|x-y|^{1+s}}\\ & \leq\frac{1}{s(1-s)}\big[(a^{2k+1}-a^{2j})^{1-s}-(a^{2k+1}-a^{2j-1})^{1-s}\big]\\ & =\frac{a^{1-s}}{s(1-s)}(a^{2(1-s)})^k\big[(1-a^{2(j-k)-1})^{1-s}-(1-a^{2(j-k)-2})^{1-s}\big]\\ & \leq\frac{a^{1-s}}{s(1-s)}(a^{2(1-s)})^k\frac{(s-1)\log a}{(1-a)^s}a^{2(j-k)-2}\quad\quad\textrm{by }(\ref{derivative_bound})\\ & =\frac{-\log a}{s(1-a)^sa^{s+1}}(a^{2(1-s)})^k(a^2)^{j-k}, \end{split} \end{equation*} for $j\geq k+2$. Then \begin{equation*} \begin{split} \sum_{k=1}^\infty&\sum_{j=k+2}^\infty\int_{a^{2j}}^{a^{2j-1}}\int_{a^{2k+1}}^{a^{2k}} \frac{dxdy}{|x-y|^{1+s}}\\ & \leq\frac{-\log a}{s(1-a)^sa^{s+1}}\sum_{k=1}^\infty(a^{2(1-s)})^k\sum_{h=2}^\infty(a^2)^h<\infty. \end{split} \end{equation*} If $j=k+1$, we get \begin{equation*}\begin{split} \sum_{k=1}^\infty\int_{a^{2k+2}}^{a^{2k+1}}\int_{a^{2k+1}}^{a^{2k}}\frac{dxdy}{|x-y|^{1+s}}& \leq\frac{1}{s(1-s)}\sum_{k=1}^\infty(a^{2k+1}-a^{2k+2})^{1-s}\\ & =\frac{a^{1-s}(1-a)^{1-s}}{s(1-s)}\sum_{k=1}^\infty(a^{2(1-s)})^k<\infty. \end{split} \end{equation*} This shows that $\mathcal{I}_2<\infty$ as well, so that $P_s(E)<\infty$ for every $s\in(0,1)$, as claimed. \end{section} \end{chapter} \begin{chapter}{Nonlocal Minimal Surfaces} \begin{section}{Nonlocal Minimal Surfaces} In this section we give the definition of a nonlocal minimal surface and prove existence and compactness results. \begin{defin} Let $\Omega\subset\R$ be an open set and let $s\in(0,1)$. The set $E\subset\R$ is said to be $s$-minimal in $\Omega$ if $P_s(E,\Omega)<\infty$ and \begin{equation} P_s(E,\Omega)\leq P_s(F,\Omega), \end{equation} for every $F\subset\R$ s.t. $F\setminus\Omega=E\setminus\Omega$. \end{defin} As in the classical case, $E\setminus\Omega$ plays the role of boundary data.
However, here it is not enough to know how it behaves in a neighborhood of $\partial\Omega$; indeed, since $P_s$ is nonlocal, we need to know the whole of $E\setminus\Omega$. \begin{rmk} Since from now on the index $s\in(0,1)$ will be fixed, we will usually write $\J_\Omega(E):=P_s(E,\Omega)$.\\ If $E$ is $s$-minimal in $\Omega$, we will also say that it is a minimizer for $\J_\Omega$. \end{rmk} Although the definition makes sense for every open set $\Omega$, we will usually consider bounded open sets with Lipschitz boundary. This ensures that for any fixed set $E_0$ we have \begin{equation}\label{inf_existence}\begin{split} \inf\{\J_\Omega(F)\,|\,F\setminus\Omega=E_0\setminus\Omega\}&\leq \J_\Omega(E_0\setminus\Omega)\\ & =\Ll_s((E_0\setminus\Omega)\setminus\Omega,\Co(E_0\setminus\Omega)\cap\Omega)\\ & \leq\Ll_s(\Co\Omega,\Omega)=P_s(\Omega)<\infty. \end{split} \end{equation} \begin{lem} If $E$ is $s$-minimal in $\Omega$, then it is also $s$-minimal in every open subset $\Omega'\subset\Omega$. \begin{proof} Indeed, let $\Omega'\subset\Omega$ and $F\subset\R$. Then \begin{equation}\label{eq1}\begin{split} P_s(F,\Omega)-P_s(F,\Omega')=P_s^L(F,&\Omega\setminus\Omega') +\Ll_s(F\setminus\Omega,\Co F\cap(\Omega\setminus\Omega'))\\ & +\Ll_s(F\cap(\Omega\setminus\Omega'),\Co F\setminus\Omega). \end{split} \end{equation} Now notice that if $F\setminus\Omega'=E\setminus\Omega'$, then the corresponding right hand sides in $(\ref{eq1})$ are equal and clearly we also have $F\setminus\Omega=E\setminus\Omega$. Therefore \begin{equation*} \J_{\Omega'}(F)-\J_{\Omega'}(E)=\J_\Omega(F)-\J_\Omega(E)\geq0. \end{equation*} \end{proof} \end{lem} \begin{rmk}\label{elem_properties_sets} Using Proposition $\ref{elementary_properties}$ it is easy to see that if $E$ is $s$-minimal in $\Omega$ and we dilate, rotate or translate both $E$ and $\Omega$, then we end up with an $s$-minimal set in the corresponding open set.\\ For example, let $\lambda>0$ and take a set $F$ s.t.
$F\setminus\lambda\Omega=\lambda E\setminus\lambda\Omega$. Then $(\lambda^{-1}F)\setminus\Omega=E\setminus\Omega$ and \begin{equation*} \J_{\lambda\Omega}(F)=\lambda^{n-s}\J_\Omega(\lambda^{-1}F)\geq \lambda^{n-s}\J_\Omega(E)=\J_{\lambda\Omega}(\lambda E). \end{equation*} \end{rmk} \begin{defin} We say that a set $E$ is a (variational) supersolution in $\Omega$ if \begin{equation}\label{var_supersol} A\subset\Co E\cap\Omega\qquad\Longrightarrow\qquad\Ll_s(A,E)-\Ll_s(A,\Co(E\cup A))\leq0. \end{equation} It is a subsolution if \begin{equation}\label{var_subsol} A\subset E\cap\Omega\qquad\Longrightarrow\qquad\Ll_s(A,E\setminus A)-\Ll_s(A,\Co E)\geq0. \end{equation} \end{defin} These definitions are justified by the following observation, which is easily obtained by writing out all the terms explicitly. \begin{lem} Let $E\setminus\Omega=F\setminus\Omega$. Denote $A^+:= F\setminus E$ and $A^-:=E\setminus F$. Then \begin{equation}\label{sper_difference}\begin{split} \J_\Omega(F)-\J_\Omega(E)&=\left\{\Ll_s(A^-,E\setminus A^-)-\Ll_s(A^-,\Co E)\right\}+2\Ll_s(A^-,A^+)\\ & \qquad\quad-\left\{\Ll_s(A^+,E)-\Ll_s(A^+,\Co(E\cup A^+))\right\}. \end{split}\end{equation} \end{lem} As a consequence we get \begin{prop} The set $E$ is $s$-minimal in $\Omega$ if and only if it is both a subsolution and a supersolution. \begin{proof} Suppose $E$ is $s$-minimal. Let $A\subset\Co E\cap\Omega$ and define $F:=E\cup A$. Using the notation of the Lemma, we have $A^+=A$ and $A^-=\emptyset$. Therefore, since $E$ is $s$-minimal and $F\setminus\Omega=E\setminus\Omega$, the right hand side of $(\ref{sper_difference})$ reduces to \begin{equation*} -\left\{\Ll_s(A,E)-\Ll_s(A,\Co(E\cup A))\right\}=\J_\Omega(F)-\J_\Omega(E)\geq0, \end{equation*} proving $(\ref{var_supersol})$. In the same way we also get $(\ref{var_subsol})$.\\ On the other hand, if $F\setminus\Omega=E\setminus\Omega$, then $A^+\subset\Co E\cap\Omega$ and $A^-\subset E\cap\Omega$.
If we suppose that $E$ is both a subsolution and a supersolution, then all the terms in the right hand side of $(\ref{sper_difference})$ are non-negative, and hence \begin{equation*} \J_\Omega(F)-\J_\Omega(E)\geq0, \end{equation*} proving that $E$ is $s$-minimal in $\Omega$. \end{proof} \end{prop} \begin{rmk}\label{elem_properties_variational} Notice that $E$ is a subsolution (supersolution) in $\Omega$ if and only if $\Co E$ is a supersolution (subsolution) in $\Omega$.\\ Moreover, if $E$ is a subsolution (supersolution) in $\Omega$, then it is also a subsolution (supersolution) in every open subset $\Omega'\subset\Omega$.\\ Analogues of the statements in Remark $\ref{elem_properties_sets}$ hold for subsolutions and supersolutions. \end{rmk} \begin{prop}[Lower semicontinuity] Let $\Omega\subset\R$ be an open set and $\{E_k\}$ a sequence of sets s.t. $E_k\xrightarrow{loc}E$. Then \begin{equation*} \J_\Omega(E)\leq\liminf_{k\to\infty}\J_\Omega(E_k). \end{equation*} \begin{proof} The claim is just a consequence of Fatou's Lemma. Indeed, it is enough to notice that if $\chi_{A_k}\longrightarrow\chi_A$ and $\chi_{B_k}\longrightarrow\chi_B$ in $L^1_{loc}(\R)$, then we can find $\{k_i\}\subset\mathbb{N}$ s.t. $k_i\nearrow\infty$ strictly and \begin{equation*} \chi_{A_{k_i}}(x)\chi_{B_{k_i}}(y)\longrightarrow\chi_A(x)\chi_B(y), \end{equation*} for a.e. $(x,y)\in\R\times\R$. Then Fatou's Lemma implies \begin{equation*}\begin{split} \Ll_s(A,B)&=\int\int\frac{\chi_A(x)\chi_B(y)}{\kers}\,dx\,dy\\ & \leq\liminf_{i\to\infty}\int\int\frac{\chi_{A_{k_i}}(x)\chi_{B_{k_i}}(y)}{\kers}\,dx\,dy\\ & =\liminf_{i\to\infty}\Ll_s(A_{k_i},B_{k_i}). \end{split} \end{equation*} Applying this inequality to both terms in the definition of $\J_\Omega$ we get the claim. \end{proof} \end{prop} Using this Proposition and a compactness result for fractional Sobolev spaces we can now prove the existence of $s$-minimal sets via the direct method of the Calculus of Variations.
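The identity $(\ref{sper_difference})$ uses only the symmetry and the additivity of $\Ll_s$ over disjoint sets, so it can be tested exactly on a discretized model. The following Python sketch (a one-dimensional toy example on a truncated grid; the choice of $E$, $F$, $\Omega$ and all names are ours, not part of the text) checks that the two sides of the identity agree to floating-point accuracy.

```python
S, H = 0.5, 0.05                                   # exponent s and grid spacing
GRID = [-2 + (i + 0.5) * H for i in range(100)]    # midpoints covering (-2, 3)

def L(A, B):
    # Discrete analogue of L_s(A, B): Riemann sum of the kernel |x-y|^{-(1+s)}
    # over grid points of A and B inside the truncated window
    return sum(H * H / abs(x - y)**(1 + S)
               for x in GRID if A(x) for y in GRID if B(y) and x != y)

omega = lambda x: 0 < x < 1                        # Omega = (0, 1)
E = lambda x: x < 0.3 or x > 2                     # reference set
F = lambda x: x < 0.1 or 0.5 < x < 0.8 or x > 2    # competitor, F \ Omega = E \ Omega
cE = lambda x: not E(x)

def J(G):
    # Discrete J_Omega(G) = P_s(G, Omega): all kernel interactions except the
    # ones between G \ Omega and the complement of G outside Omega
    cG = lambda x: not G(x)
    return (L(lambda x: G(x) and omega(x), cG)
            + L(lambda x: G(x) and not omega(x), lambda x: cG(x) and omega(x)))

Ap = lambda x: F(x) and not E(x)                   # A^+ = F \ E
Am = lambda x: E(x) and not F(x)                   # A^- = E \ F

lhs = J(F) - J(E)
rhs = ((L(Am, lambda x: E(x) and not Am(x)) - L(Am, cE))
       + 2 * L(Am, Ap)
       - (L(Ap, E) - L(Ap, lambda x: not (E(x) or Ap(x)))))
```

Since the discrete $L$ is symmetric and additive over disjoint unions, the two sides coincide up to rounding errors, independently of the grid, the window, or the particular sets chosen.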
\begin{teo}[Existence of minimizers] Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary, and fix a set $E_0\subset\Co\Omega$. Then there exists a set $E$ s.t. $E\setminus\Omega=E_0$ and \begin{equation*} \J_\Omega(E)=\inf_{F\setminus\Omega=E_0}\J_\Omega(F). \end{equation*} \begin{proof} As remarked above, since $\Omega$ is bounded and has Lipschitz boundary, we have \begin{equation*} \inf_{F\setminus\Omega=E_0}\J_\Omega(F)\leq\J_\Omega(E_0)\leq P_s(\Omega)<\infty. \end{equation*} Let $\{F_k\}$ be a minimizing sequence, i.e. s.t. $F_k\setminus\Omega=E_0$ and \begin{equation*} \lim_{k\to\infty}\J_\Omega(F_k)=\inf_{F\setminus\Omega=E_0}\J_\Omega(F). \end{equation*} We can suppose that \begin{equation*} \J_\Omega(F_k)\leq M<\infty,\qquad\textrm{for every }k, \end{equation*} since in any case this is true for $k$ large enough. In particular \begin{equation*} [\chi_{F_k}]_{\fracso}=2P_s^L(F_k,\Omega)\leq2\J_\Omega(F_k)\leq2M, \end{equation*} and \begin{equation*} \|\chi_{F_k}\|_{L^1(\Omega)}=|F_k\cap\Omega|\leq|\Omega|, \end{equation*} for every $k$. Then, using the compactness of the embedding $\fracso\hookrightarrow L^1(\Omega)$ (see Theorem $\ref{compact_embd_th}$), we get $u\in L^1(\Omega)$ s.t. $\chi_{F_k}\longrightarrow u$ in $L^1(\Omega)$ (we relabel the subsequence). It is clear that $u$ is equal (in $L^1$) to the characteristic function of a set $F\subset\Omega$. Then, if we define $E:=E_0\cup F$ we have $F_k\xrightarrow{loc}E$ and hence the semicontinuity result implies \begin{equation*} \J_\Omega(E)\leq\liminf_{k\to\infty}\J_\Omega(F_k)=\inf_{F\setminus\Omega=E_0}\J_\Omega(F). \end{equation*} \end{proof} \end{teo} It is convenient to have an estimate for the difference $\J_\Omega(E)-\J_\Omega(F)$ also in the case $E\cap\Omega=F\cap\Omega$. \begin{lem} Let $\Omega\subset\R$ be a bounded open set with Lipschitz boundary.
Assume $E=F$ inside $\Omega$; then \begin{equation}\label{confront_equal_inside} |\J_\Omega(E)-\J_\Omega(F)|\leq\Ll_s(\Omega,(E\Delta F)\setminus\Omega). \end{equation} \begin{proof} Since $E$ is equal to $F$ inside $\Omega$, \begin{equation*}\begin{split} \J_\Omega(E)-\J_\Omega(F)&=\Ll_s(E\cap\Omega,\Co E\setminus\Omega)+\Ll_s(E\setminus\Omega,\Co E\cap\Omega)\\ & \qquad\quad-\Ll_s(F\cap\Omega,\Co F\setminus\Omega)-\Ll_s(F\setminus\Omega,\Co F\cap\Omega)\\ & =\Ll_s(E\cap\Omega,\Co E\setminus\Omega)-\Ll_s(E\cap\Omega,\Co F\setminus\Omega)\\ & \qquad\quad+\Ll_s(E\setminus\Omega,\Co E\cap\Omega)-\Ll_s(F\setminus\Omega,\Co E\cap\Omega). \end{split}\end{equation*} Now, taking the absolute value and writing out all the terms, we get \begin{equation*}\begin{split} |\J_\Omega(E)-\J_\Omega(F)|&=\left|\int_{E\cap\Omega}\int_{\Co\Omega}\frac{\chi_F(y)-\chi_E(y)}{\kers}+ \int_{\Co E\cap\Omega}\int_{\Co\Omega}\frac{\chi_E(y)-\chi_F(y)}{\kers}\right|\\ & \leq \int_{E\cap\Omega}\int_{\Co\Omega}\frac{|\chi_F(y)-\chi_E(y)|}{\kers}+ \int_{\Co E\cap\Omega}\int_{\Co\Omega}\frac{|\chi_E(y)-\chi_F(y)|}{\kers}\\ & = \int_{E\cap\Omega}\int_{\Co\Omega}\frac{\chi_{E\Delta F}(y)}{\kers}+ \int_{\Co E\cap\Omega}\int_{\Co\Omega}\frac{\chi_{E\Delta F}(y)}{\kers}\\ & =\Ll_s(\Omega,(E\Delta F)\setminus\Omega). \end{split}\end{equation*} \end{proof} \end{lem} Now we can prove a compactness result for $s$-minimal sets. \begin{teo}\label{nonlocal_compactness} Let $\{E_k\}$ be a sequence of $s$-minimal sets in $B_1$ s.t. $ E_k\xrightarrow{loc} E. $ Then $E$ is $s$-minimal in $B_1$ and \begin{equation*} \J_{B_1}(E)=\lim_{k\to\infty}\J_{B_1}(E_k). \end{equation*} \begin{proof} Assume $F=E$ outside $B_1$ and let \begin{equation*} F_k:=(F\cap B_1)\cup(E_k\setminus B_1). \end{equation*} Then, since $F_k=E_k$ outside $B_1$ and $E_k$ is a minimizer, we have \begin{equation*} \J_{B_1}(F_k)\geq\J_{B_1}(E_k).
\end{equation*} On the other hand, since $F_k=F$ inside $B_1$, inequality $(\ref{confront_equal_inside})$ gives \begin{equation*} |\J_{B_1}(F_k)-\J_{B_1}(F)|\leq\Ll_s(B_1,(F_k\Delta F)\setminus B_1) =\Ll_s(B_1,(E_k\Delta E)\setminus B_1)=:b_k. \end{equation*} Now we have \begin{equation*} \J_{B_1}(F)+b_k\geq\J_{B_1}(F_k)\geq\J_{B_1}(E_k). \end{equation*} If we prove that $b_k\to0$, then \begin{equation*} \J_{B_1}(F)\geq\limsup_{k\to\infty}\J_{B_1}(E_k)\geq\liminf_{k\to\infty}\J_{B_1}(E_k) \geq\J_{B_1}(E), \end{equation*} by lower semicontinuity, proving that $E$ is a minimizer for $\J_{B_1}$.\\ Also notice that taking $F=E$ gives \begin{equation*} \lim_{k\to\infty}\J_{B_1}(E_k)=\J_{B_1}(E). \end{equation*} It remains to prove that $b_k\to0$.\\ Define \begin{equation*} a_k(r):=\Han((E_k\Delta E)\cap\partial B_r) \end{equation*} and take any $r_0>1$. Then \begin{equation*} b_k=\Ll_s(B_1,(E_k\Delta E)\cap(B_{r_0}\setminus B_1))+\Ll_s(B_1,(E_k\Delta E)\setminus B_{r_0}) \end{equation*} and the second term is \begin{equation*} \Ll_s(B_1,(E_k\Delta E)\setminus B_{r_0})\leq\frac{n\omega_n}{s}|B_1|(r_0-1)^{-s}. \end{equation*} As for the first term, we have \begin{equation*}\begin{split} \Ll_s(B_1,(E_k\Delta E)&\cap(B_{r_0}\setminus B_1))=\int_{B_{r_0}\setminus B_1}\chi_{E_k\Delta E}(x)\left(\int_{B_1}\frac{dy}{\kers}\right)dx\\ & =\int_1^{r_0}\left(\int_{\partial B_r}\chi_{E_k\Delta E}(x)\left(\int_{B_1}\frac{dy}{\kers}\right)d\Han(x)\right)dr\\ & \leq\frac{n\omega_n}{s}\int_1^{r_0}\left(\int_{\partial B_r}\frac{\chi_{E_k\Delta E}(x)}{(r-1)^s}d\Han(x)\right)dr\\ & =\frac{n\omega_n}{s}\int_1^{r_0}\frac{a_k(r)}{(r-1)^s}dr. \end{split}\end{equation*} Now notice that \begin{equation*} \int_1^{r_0}\frac{1}{(r-1)^s}dr=\frac{(r_0-1)^{1-s}}{1-s}<\infty \end{equation*} and \begin{equation*} \int_1^{r_0}a_k(r)dr=|(E_k\Delta E)\cap(\overline{B_{r_0}}\setminus B_1)|\longrightarrow0.
\end{equation*} Then, since \begin{equation*} a_k(r)\leq\Han(\partial B_r)=n\omega_nr^{n-1}\leq n\omega_nr_0^{n-1}, \end{equation*} for every $r\leq r_0$, Lebesgue's dominated convergence Theorem gives \begin{equation*} \int_1^{r_0}\frac{a_k(r)}{(r-1)^s}dr\longrightarrow0. \end{equation*} Therefore \begin{equation*} \limsup_{k\to\infty}b_k\leq \frac{n\omega_n^2}{s}(r_0-1)^{-s}. \end{equation*} Since $r_0$ was arbitrary, this concludes the proof. \end{proof} \end{teo} \begin{rmk} Notice that the same proof applies if we take any ball $B_r(x)$ in place of $B_1$. \end{rmk} \end{section} \begin{section}{Uniform Density Estimates} Now we prove an analogue of the density estimates which hold in the classical case and we derive some consequences which will be fundamental in the sequel. \begin{rmk} From now on we suppose that any set $E$ satisfies $(\ref{gmt_assumption_eq})$. In particular \begin{equation*} \partial E=\partial^-E=\{x\in\R\,|\,0<|E\cap B_r(x)|<\omega_nr^n,\,\forall\,r>0\}. \end{equation*} \end{rmk} The estimates will follow easily from the following result \begin{lem} Let $E$ be a subsolution in $B_1$. There exists a universal constant $c=c(n,s)>0$ s.t. \begin{equation*} |E\cap B_1|\leq c\qquad\Longrightarrow\qquad|E\cap B_{1/2}|=0. \end{equation*} \begin{proof} Define for every $r\in(0,1]$ \begin{equation*} V_r:=|E\cap B_r|,\qquad a(r):=\Han(E\cap\partial B_r). \end{equation*} Using the fractional Sobolev inequality (see Theorem $\ref{fractional_sobolev}$) with $p=1$, \begin{equation*} \|u\|_{L^\frac{n}{n-s}(\R)}\leq C[u]_{\fracs}, \end{equation*} for $u=\chi_{E\cap B_r}$, we get \begin{equation*} V_r^\frac{n-s}{n}\leq2C\Ll_s(E\cap B_r,\Co(E\cap B_r)). \end{equation*} Now we split \begin{equation*} \Ll_s(E\cap B_r,\Co(E\cap B_r))=\Ll_s(E\cap B_r,\Co E)+\Ll_s(E\cap B_r,E\setminus B_r). 
\end{equation*} Since $E$ is a subsolution in $B_1$, $(\ref{var_subsol})$ with $A=E\cap B_r\subset E\cap B_1$ implies \begin{equation*} \Ll_s(E\cap B_r,\Co E)\leq\Ll_s(E\cap B_r,E\setminus B_r), \end{equation*} and, since $E\setminus B_r\subset \Co B_r$, we get \begin{equation*} \Ll_s(E\cap B_r,\Co(E\cap B_r))\leq2\Ll_s(E\cap B_r,\Co B_r). \end{equation*} For every $x\in E\cap B_r$ we have $d(x,\Co B_r)\geq r-|x|>0$, and hence (see Lemma $\ref{positive_distance}$) \begin{equation*} \int_{\Co B_r}\frac{dy}{\kers}\leq\frac{n\omega_n}{s}(r-|x|)^{-s}. \end{equation*} Therefore \begin{equation*}\begin{split} \Ll_s(E\cap B_r,\Co B_r)&=\int_0^r\left(\int_{\partial B_\rho}\chi_E(x)\left(\int_{\Co B_r}\frac{dy}{\kers}\right) \,d\Han(x)\right)\,d\rho\\ & \leq\frac{n\omega_n}{s}\int_0^r\left(\int_{\partial B_\rho}\chi_E(x)\,d\Han(x)\right)\frac{d\rho}{(r-\rho)^s}\\ & =\frac{n\omega_n}{s}\int_0^r\frac{a(\rho)}{(r-\rho)^s}\,d\rho. \end{split} \end{equation*} Putting everything together we have \begin{equation*} V_r^\frac{n-s}{n}\leq C\int_0^r\frac{a(\rho)}{(r-\rho)^s}\,d\rho. \end{equation*} Integrating on $(0,t)$, with $t\leq1$, we obtain \begin{equation*}\begin{split} \int_0^t V_r^\frac{n-s}{n}\,dr&\leq C\int_0^t\left(\int_0^r\frac{a(\rho)}{(r-\rho)^s}\,d\rho\right)\,dr\\ & =C\int_0^ta(\rho)\left(\int_\rho^t(r-\rho)^{-s}\,dr\right)\,d\rho\\ & =\frac{C}{1-s}\int_0^ta(\rho)(t-\rho)^{1-s}\,d\rho\\ & \leq C\frac{t^{1-s}}{1-s}\int_0^ta(\rho)\,d\rho=Ct^{1-s}V_t. \end{split}\end{equation*} Now we consider the above inequality with \begin{equation*} t_k:=\frac{1}{2}+\frac{1}{2^k},\qquad k\geq1. \end{equation*} Notice that $t_1=1$, $t_k$ is strictly decreasing with $t_k-t_{k+1}=\frac{1}{2^{k+1}}$, and $t_k\to\frac{1}{2}$.
We have \begin{equation*} \int_0^{t_k}V_r^\frac{n-s}{n}\,dr\leq C t_k^{1-s}V_{t_k}\leq CV_{t_k}, \end{equation*} and \begin{equation*}\begin{split} \int_0^{t_k}V_r^\frac{n-s}{n}\,dr&=\int_0^{t_{k+1}}V_r^\frac{n-s}{n}\,dr+\int_{t_{k+1}}^{t_k}V_r^\frac{n-s}{n}\,dr \geq\int_{t_{k+1}}^{t_k}V_r^\frac{n-s}{n}\,dr\\ & \geq\left(t_k-t_{k+1}\right)V^\frac{n-s}{n}_{t_{k+1}}=\frac{1}{2^{k+1}}V^\frac{n-s}{n}_{t_{k+1}}, \end{split} \end{equation*} since $V_r$ is nondecreasing. If we set $v_k:=V_{t_k}$, then \begin{equation*} 2^{-(k+1)}v_{k+1}^\frac{n-s}{n}\leq Cv_k, \end{equation*} i.e. \begin{equation*} v_{k+1}\leq C_0 \left(2^\frac{n}{n-s}\right)^kv_k^\frac{n}{n-s}, \end{equation*} where the constant $C_0=(2C)^\frac{n}{n-s}$ can be supposed to be strictly bigger than 1 and depends only on $n$ and $s$.\\ We claim that if $v_1$ is small enough, $v_1\leq c(n,s)$, then $v_k\longrightarrow0$. Notice that this concludes the proof.\\ Let $b:=2^\frac{n}{n-s}>1$ and $\alpha:=\frac{s}{n-s}>0$ so that our inequality reads \begin{equation*} v_{k+1}\leq C_0b^kv_k^{1+\alpha}. \end{equation*} Now, if $v_1\leq C_0^{-\frac{1}{\alpha}}b^{-\frac{1}{\alpha^2}-\frac{1}{\alpha}}$, then \begin{equation}\label{ineq_iter} v_k\leq b^{-\frac{k-1}{\alpha}}v_1, \end{equation} for every $k\geq1$ and hence in particular $v_k\longrightarrow0$.\\ We prove $(\ref{ineq_iter})$ by induction. It is trivially satisfied if $k=1$. Suppose it is true for $k$; then \begin{equation*}\begin{split} v_{k+1}&\leq C_0b^kv_k^{1+\alpha}\leq C_0b^k\left(b^{-\frac{k-1}{\alpha}}v_1\right)^{1+\alpha} =C_0b^{1-\frac{k-1}{\alpha}}v_1^\alpha v_1\\ & \leq b^{-\frac{k}{\alpha}}v_1. \end{split}\end{equation*} \end{proof} \end{lem} Now we can prove the density estimates. \begin{teo}[Uniform density estimate] Let $E$ be a subsolution in $\Omega$. There exists a universal constant $c=c(n,s)>0$ s.t. if $x\in\partial E$ and $B_r(x)\subset\Omega$ then \begin{equation*} |E\cap B_r(x)|\geq cr^n. 
\end{equation*} In particular, if $E$ is $s$-minimal in $\Omega$, then \begin{equation}\label{uniform_density_estimate} |E\cap B_r(x)|\geq cr^n,\qquad |\Co E\cap B_r(x)|\geq cr^n. \end{equation} \begin{proof} The second statement is an immediate consequence of the first. Indeed, if $E$ is $s$-minimal, then $E$ is also a supersolution, i.e. $\Co E$ is a subsolution, and hence it satisfies the hypothesis of the Theorem. We now prove the first statement, using the previous Lemma.\\ Since $B_r(x)\subset\Omega$, the set $E$ is a subsolution in $B_r(x)$ and hence $\frac{E-x}{r}$ is a subsolution in $B_1$. Let $c$ be the constant in the Lemma. If we suppose that \begin{equation*} r^n\left|\frac{E-x}{r}\cap B_1\right|=|E\cap B_r(x)|<cr^n, \end{equation*} we have \begin{equation*} \left|\frac{E-x}{r}\cap B_1\right|<c, \end{equation*} and hence \begin{equation*} \left|\frac{E-x}{r}\cap B_\frac{1}{2}\right|=0,\qquad\textrm{i.e. }\left|E\cap B_\frac{r}{2}(x)\right|=0. \end{equation*} However, $x\in\partial E$ and hence $|E\cap B_\rho(x)|>0$ for every $\rho>0$. This gives a contradiction, proving that \begin{equation*} |E\cap B_r(x)|\geq cr^n. \end{equation*} \end{proof} \end{teo} A first consequence of the uniform density estimate is that we can always find a small ball completely contained in $E\cap B_1$ and one contained in $\Co E\cap B_1$, if $0\in\partial E$. \begin{coroll}[Clean ball condition]\label{clean_ball} Let $E$ be $s$-minimal in $\Omega$, $x\in\partial E$ and $B_r(x)\subset\Omega$. There exist balls \begin{equation*} B_{cr}(y_1)\subset E\cap B_r(x),\qquad B_{cr}(y_2)\subset\Co E\cap B_r(x), \end{equation*} for some small universal constant $c=c(n,s)>0$. \begin{proof} We can assume that $x=0$ and $B_r(x)=B_1$. Without loss of generality we can also suppose that $B_1\subset\subset\Omega$, otherwise we could consider $B_{1/2}$. In particular we can suppose that $d(B_1,\partial\Omega)>1/2$, i.e. that $B_{3/2}\subset\Omega$.\\ We decompose the space into cubes of side $\delta$.
We want to show that $N_\delta$, the number of cubes contained in $B_1$ which intersect $\partial E$, satisfies \begin{equation}\label{cube_number} N_\delta\leq C\delta^{s-n}, \end{equation} for some constant $C=C(n,s)>0$. Then, since $0\in\partial E$, the density estimate for the ball $B_{1/2}$ gives \begin{equation*} |E\cap B_{1/2}|\geq c 2^{-n}, \end{equation*} and hence at least $2^{-n}c\delta^{-n}$ of the cubes intersect $E\cap B_{1/2}$. Moreover, if $\delta$ is small, say $\delta<\frac{1}{12\sqrt{n}}$, then all these cubes are contained in $B_1$.\\ If none of these cubes is completely contained in $E\cap B_1$, then they all intersect $\partial E$, so that \begin{equation*} N_\delta\geq2^{-n}c\delta^{-n}. \end{equation*} This and $(\ref{cube_number})$ give \begin{equation*} \frac{c}{2^n C}\leq\delta^s, \end{equation*} where the left hand side does not depend on $\delta$. Therefore, if we take $\delta$ small enough we get a contradiction. This proves that there is a cube of side $\delta$ completely contained in $E\cap B_1$, and hence also a ball of radius $\delta/2$. To conclude, notice that we can take \begin{equation*} \delta=\delta(n,s)=\min\left\{\frac{1}{2}\left(\frac{c}{2^n C}\right)^\frac{1}{s},\,\frac{1}{12\sqrt{n}}\right\}, \end{equation*} and hence get the claim with $c=\delta/2$. It is clear that we can apply the same argument for $\Co E$. We are left to prove $(\ref{cube_number})$.\\ Let $Q_\delta\subset B_1$ be a cube of side $\delta$ s.t. $Q_\delta\cap\partial E\not=\emptyset$ and let $y$ be a point in the intersection. Now if $\delta$ is small enough, $\delta<\frac{1}{12\sqrt{n}}$, then the cube $Q_{3\delta}$ (with same center and side $3\delta$) is contained in $B_{3/2}$ and hence in $\Omega$.\\ Therefore there is a ball $B_\delta(y)\subset Q_{3\delta}\subset\Omega$, with $y\in\partial E$, so the density estimates give \begin{equation*} |E\cap Q_{3\delta}|\geq c\delta^n,\qquad|\Co E\cap Q_{3\delta}|\geq c\delta^n.
\end{equation*} If $x\in E\cap Q_{3\delta}$ and $y\in\Co E\cap Q_{3\delta}$, then $|x-y|\leq$diam$(Q_{3\delta})=\sqrt{n}\,3\delta$, so \begin{equation*}\begin{split} \Ll_s(E\cap Q_{3\delta},\,\Co E\cap Q_{3\delta})&=\int_{E\cap Q_{3\delta}}\int_{\Co E\cap Q_{3\delta}}\frac{dx\,dy}{\kers}\\ & \geq\int_{E\cap Q_{3\delta}}\int_{\Co E\cap Q_{3\delta}}\frac{dx\,dy}{(\sqrt{n}\,3\delta)^{n+s}}\\ & =\frac{|E\cap Q_{3\delta}|\cdot|\Co E\cap Q_{3\delta}|}{(\sqrt{n}\,3\delta)^{n+s}}\\ & \geq\frac{c^2}{(3\,\sqrt{n})^{n+s}}\,\frac{\delta^n\cdot\delta^n}{\delta^{n+s}}=c_0\delta^{n-s}. \end{split}\end{equation*} Let $F'_\delta$ be the family of cubes of side $\delta$ contained in $B_{3/2}$ and $F_\delta$ the subfamily made of those contained in $B_1$; finally let $G_\delta\subset F_\delta$ be the subfamily of those intersecting $\partial E$, so that $N_\delta$ is the cardinality of $G_\delta$.\\ Notice that if $Q_\delta\in F_\delta$, then the cube $Q_{3\delta}$ covers $3^n$ cubes of $F'_\delta$.\\ Therefore, since the intersection of two distinct cubes has zero Lebesgue measure, we get \begin{equation*}\begin{split} \sum_{Q_\delta,Q_\delta'\in F_\delta} \int_{Q_{3\delta}\cap E}&\int_{Q'_{3\delta}\cap \Co E}\frac{dx\,dy}{\kers}\\ & \leq9^n \sum_{Q_\delta,Q_\delta'\in F'_\delta} \int_{Q_\delta \cap E}\int_{Q_\delta'\cap \Co E}\frac{dx\,dy}{\kers}\\ & \leq 9^n \int_{B_{3/2}\cap E}\int_{B_{3/2}\cap \Co E}\frac{dx\,dy}{\kers}. \end{split}\end{equation*} On the other hand we have \begin{equation*}\begin{split} \sum_{Q_\delta,Q_\delta'\in F_\delta} \int_{Q_{3\delta}\cap E}\int_{Q_{3\delta}'\cap \Co E}\frac{dx\,dy}{\kers} &\ge \sum_{Q_\delta\in F_\delta} \int_{Q_{3\delta}\cap E}\int_{Q_{3\delta}\cap\Co E}\frac{dx\,dy}{\kers}\\ & \ge \sum_{Q_\delta\in G_\delta} \int_{Q_{3\delta}\cap E}\int_{Q_{3\delta}\cap\Co E}\frac{dx\,dy}{\kers}\\ & \ge \sum_{Q_\delta\in G_\delta} c_0\delta^{n-s} = c_0\delta^{n-s} N_\delta. 
\end{split}\end{equation*} These give \begin{equation*} c_0\delta^{n-s} N_\delta\leq9^n\Ll_s(E\cap B_{3/2},\Co E\cap B_{3/2}). \end{equation*} Finally, from the minimality of $E$ we get \begin{equation*}\begin{split} \Ll_s(E\cap B_{3/2},\Co E\cap B_{3/2})&\leq\Ll_s(E\cap B_{3/2},\Co E)\leq\Ll_s(E\cap B_{3/2},E\cap\Co B_{3/2})\\ & \leq\Ll_s(B_{3/2},\Co B_{3/2})=P_s(B_{3/2}). \end{split} \end{equation*} This proves $(\ref{cube_number})$, concluding the proof. \end{proof} \end{coroll} From the proof of this Corollary we can deduce an estimate on the Hausdorff measure of $\partial E\cap\Omega$. \begin{coroll} If $E$ is $s$-minimal in $\Omega$, then \begin{equation*} \mathcal{H}^{n-s}(\partial E\cap\Omega)<\infty. \end{equation*} \end{coroll} Actually we will prove that $\partial E\cap\Omega$ has Hausdorff dimension equal to $n-1$.\\ For this reason we can think of $\partial E\cap\Omega$ as a nonlocal minimal surface.\\ Another important consequence of the density estimates is the following improvement of the convergence of $s$-minimal sets. \begin{coroll}\label{haus_conv_min} Let $\{E_k\}$ be a sequence of $s$-minimal sets in $\Omega$ s.t. $ E_k\xrightarrow{loc} E. $ For every compact set $K\subset\Omega$ and every $\epsilon>0$ there exists $n_0$ s.t. \begin{equation}\label{sminimal_hausdorff_convergence} \partial E_k\cap K\subset N_\epsilon(\partial E)\cap K,\qquad\textrm{for }k\ge n_0, \end{equation} where $N_\epsilon(\partial E)$ is the $\epsilon$-neighborhood of $\partial E$. \begin{proof} Let $d:=d(K,\partial\Omega)>0$. Suppose by contradiction that there exist a sequence $\{x_k\}$ (possibly relabeled) and $\epsilon_0>0$ s.t. \begin{equation*} x_k\in\partial E_k\cap K\qquad\textrm{and}\quad d(x_k,\partial E)\geq\epsilon_0. \end{equation*} We can suppose that $\epsilon_0<d$.
Since $d(x_k,\partial E)\geq\epsilon_0$, the ball $B_{\epsilon_0/2}(x_k)$ does not intersect $\partial E$, so it lies entirely in $E$ or in $\Co E$; up to replacing all the sets with their complements (which are again $s$-minimal), we may assume $B_{\epsilon_0/2}(x_k)\subset\Co E$, so that \begin{equation*} E_k\cap B_{\epsilon_0/2}(x_k)\subset E_k\setminus E, \end{equation*} and, since $\epsilon_0<d$, $B_{\epsilon_0/2}(x_k)\subset\Omega$, so that the density estimate gives \begin{equation*} |E_k\cap B_{\epsilon_0/2}(x_k)|\ge \frac{c}{2^n}\epsilon_0^n. \end{equation*} But this contradicts the $L^1_{loc}$ convergence. Indeed, for $R$ large enough we have $K\subset B_R$; now, since $x_k\in K\subset B_R$, we have $B_{\epsilon_0/2}(x_k)\subset B_{R+\epsilon_0}=:B_{R'}$. Then \begin{equation*} |(E_k\Delta E)\cap B_{R'}|\geq|(E_k\setminus E)\cap B_{R'}|\ge|E_k\cap B_{\epsilon_0/2}(x_k)|\ge \frac{c}{2^n}\epsilon_0^n, \end{equation*} for every $k$. \end{proof} \end{coroll} \end{section} \end{chapter} \begin{chapter}{Fractional Mean Curvature} In Section 2 we show that an $s$-minimal set $E$ satisfies the Euler-Lagrange equation \begin{equation}\label{euler-lagrange_frac} \I_s[E](x)=0,\qquad x\in\partial E \end{equation} in the viscosity sense, where $\I_s[E](x)$ denotes the fractional mean curvature of $\partial E$ at $x$, defined below. This can be thought of as the fractional analogue of the equation $(\ref{min_surf_eq})$ satisfied by classical minimal surfaces. Moreover, in Section 3 we show that the fractional mean curvature is the first variation of the fractional perimeter, at least when the set is regular enough. To be more precise, we show that if $\Phi_t:\R\to\R$ is a one-parameter family of $C^2$-diffeomorphisms which is $C^2$ also in $t$ and s.t. $\Phi_0=Id$, then \begin{equation}\label{first_variation} \frac{d}{dt}P_s(\Phi_t(E))\Big|_{t=0}=-\int_{\partial E}\I_s[E](x)\nu_E(x)\cdot\phi(x)\,d\Han(x), \end{equation} where $\phi(x):=\frac{\partial}{\partial t}\Phi_t(x)\big|_{t=0}$ and $E$ is any bounded open set with $C^2$ boundary. \begin{section}{Definition} \begin{rmk} Again, in this chapter we suppose that any set $E$ satisfies $(\ref{gmt_assumption_eq})$.
In particular \begin{equation*} \partial E=\partial^-E=\{x\in\R\,|\,0<|E\cap B_r(x)|<\omega_nr^n,\,\forall\,r>0\}. \end{equation*} \end{rmk} \begin{defin} Given a set $E$, the $s$-fractional mean curvature of $\partial E$ at a point $x\in\partial E$ is formally defined as \begin{equation*} \I_s[E](x):=\textrm{P.V.}\int_{\R}\frac{\chi_E(y)-\chi_{\Co E}(y)}{\kers}\,dy. \end{equation*} This means \begin{equation}\label{frac_mc} \I_s[E](x)=\lim_{\rho\to0}\I^\rho_s[E](x), \end{equation} where \begin{equation*} \I^\rho_s[E](x):=\int_{\R\setminus B_\rho(x)}\frac{\chi_E(y)-\chi_{\Co E}(y)}{\kers}\,dy. \end{equation*} \end{defin} \begin{rmk} The integral above has to be considered in the principal value sense because the integrand is not in the space $L^1(\R)$. Moreover, in order for the limit in $(\ref{frac_mc})$ to be well defined, we need some sort of cancellation near the point $x$. \end{rmk} In \cite{curvature} it is shown that if the boundary of the set $E$ is $C^2$ near $x$, then the limit exists. The proof exploits the cancellation provided by the existence of tangent interior and exterior paraboloids in a neighborhood of $x$. To be more precise, let $E$ be an open set s.t. $\partial E$ is $C^2$ in a neighborhood of $x\in\partial E$. We can suppose $x=0$. 
Then in normal coordinates we have \begin{equation}\label{cancellation}\begin{split} E\cap B_\rho\subset\left\{(y',y_n)\in\R\,|\,y_n\leq M|y'|^2\right\},\\ \Co E\cap B_\rho\subset\left\{(y',y_n)\in\R\,|\,y_n\geq- M|y'|^2\right\}, \end{split} \end{equation} for some $M>0$ and $\rho$ small.\\ Given $0<\delta'<\delta$ small, \begin{equation*}\begin{split} \left|\I_s^\delta[E](0)-\I_s^{\delta'}[E](0)\right|& =\left|\int_{B_\delta\setminus B_{\delta'}}\frac{\chi_E(y)-\chi_{\Co E}(y)}{|y|^{n+s}}\,dy\right|\\ & =\left|\int_{\delta'}^\delta\,d\rho\int_{\partial B_\rho}\frac{\chi_E(y)-\chi_{\Co E}(y)}{\rho^{n+s}}\,d\Han(y)\right|\\ & =\left|\int_{\delta'}^\delta\frac{\Han(E\cap\partial B_\rho)-\Han(\Co E\cap\partial B_\rho)}{\rho^{n+s}}\,d\rho\right|. \end{split} \end{equation*} Now thanks to $(\ref{cancellation})$ we get \begin{equation*}\begin{split} \Han(E\cap\partial B_\rho)-\Han(\Co E\cap\partial B_\rho)&=\Han(E\cap\Sigma_\rho)-\Han(\Co E\cap\Sigma_\rho)\\ & \leq\Han(\Sigma_\rho), \end{split} \end{equation*} where \begin{equation*} \Sigma_\rho:=\left\{(y',y_n)\in\R\,|\,|y_n|\leq M|y'|^2\right\}\cap\partial B_\rho, \end{equation*} and hence \begin{equation*} \left|\I_s^\delta[E](0)-\I_s^{\delta'}[E](0)\right|\leq\int_{\delta'}^\delta\frac{\Han(\Sigma_\rho)}{\rho^{n+s}}\,d\rho. \end{equation*} Therefore, if we show that $\Han(\Sigma_\rho)=O(\rho^n)$ as $\rho\to0$, then the sequence $\I_s^\delta[E](0)$ is a Cauchy sequence and the fractional curvature is well defined. Notice that $\Sigma_\rho=\left\{y\in \partial B_\rho\,|\,|y_n|\leq h\right\}$, with \begin{equation*} h=\frac{\sqrt{\frac{1}{M^2}+4\rho^2}-\frac{1}{M}}{2}. \end{equation*} If we take polar coordinates in such a way that $y_n=\rho\cos\theta_{n-1}$, then the condition $|y_n|\leq h$ translates into $\theta_{n-1}\in[\tau,\pi-\tau]$, with $ \tau=\arccos\big(\frac{h}{\rho}\big).
$ \noindent Therefore \begin{equation*}\begin{split} \Han(\Sigma_\rho)&=\rho^{n-1}\int_0^{2\pi}\,d\theta_1\dots\int_0^\pi\sin(\theta_{n-2})^{n-3}\,d\theta_{n-2}\int_\tau^{\pi-\tau} \sin(\theta_{n-1})^{n-2}\,d\theta_{n-1}\\ & =2\rho^{n-1}\h^{n-2}(\mathbb{S}^{n-2})\int_\tau^{\frac{\pi}{2}}(\sin t)^{n-2}\,dt\\ & \leq2\rho^{n-1}\h^{n-2}(\mathbb{S}^{n-2})\Big(\frac{\pi}{2}-\tau\Big). \end{split} \end{equation*} Using Taylor expansion we find $\tau=\frac{\pi}{2}-M\rho+o(\rho)$ and hence $\Han(\Sigma_\rho)=O(\rho^n)$, as $\rho\to0$.\\ Since the existence of tangent interior and exterior balls to $\partial E$ in $x$ is enough to get $(\ref{cancellation})$ (up to rotation and translation), we get the following \begin{lem} Suppose there exist two open sets $F_1\subset E$ and $F_2\subset\Co E$ with $x\in\partial E\cap\partial F_i$. If $\partial F_i$ is $C^2$ in a neighborhood of $x$ for $i=1,2$, then $\I_s[E](x)$ is well defined. \end{lem} \begin{rmk} In order to study the existence of the fractional mean curvature at $x\in\partial E$ it is enough to know the behavior of $E$ in a neighborhood of $x$.\\ However, unlike what happens with the classical mean curvature, we need to know the whole of $E$ to determine $\I_s[E](x)$, meaning that the fractional mean curvature is nonlocal. \end{rmk} \begin{lem} If $E\subset F$ and $x\in\partial E\cap\partial F$, then \begin{equation}\label{frac_conf} \I_s^\delta[E](x)\leq\I_s^\delta[F](x), \end{equation} for every $\delta>0$. \begin{proof} It is enough to notice that \begin{equation*}\begin{split} E\subset F&\quad\Longrightarrow\quad\chi_E\leq\chi_F\quad\textrm{and}\quad \chi_{\Co E}\geq\chi_{\Co F}\\ & \quad\Longrightarrow\quad\chi_E-\chi_{\Co E}\leq\chi_F-\chi_{\Co F}. 
\end{split} \end{equation*} \end{proof} \end{lem} For more details about the fractional mean curvature, see \cite{curvature}.\\ In particular, it is proved there that if $E$ is open and $\partial E$ is $C^2$ near $x$, then the $s$-fractional mean curvature approaches the classical mean curvature as $s\to1$, i.e. \begin{equation*} \lim_{s\to1}(1-s)\I_s[E](x)=(n-1)\omega_{n-1}H(x), \end{equation*} where $H(x)$ is the classical mean curvature of $\partial E$ at $x$, i.e. the arithmetic mean of the principal curvatures of $\partial E$ at $x$.\\ See also \cite{unifor} for the asymptotics. \end{section} \begin{section}{Euler-Lagrange Equation} We begin by showing a comparison principle between the boundary $\partial E$ of an $s$-minimal set and the hyperplane $\{x_n=0\}$.\\ The same technique used in the proof, with some complications due to error terms, will be used in the proof of the Euler-Lagrange equation. \begin{prop} Let $E$ be an $s$-minimal set in $B_1$. Then \begin{equation*} \{x_n\leq0\}\setminus B_1\subset E\quad\Longrightarrow\quad\{x_n\leq0\}\subset E. \end{equation*} \begin{proof} Define \begin{equation*} A^-:=\{x_n\leq0\}\setminus E, \end{equation*} and notice that $A^-\subset B_1\cap\Co E$. We want to show that $|A^-|=0$. To this end we define a new set as a perturbation and exploit symmetry in order to obtain cancellation in the integrals.\\ Let $T$ be the reflection across $\{x_n=0\}$, i.e. $T(x',x_n)=(x',-x_n)$, and define \begin{equation*} A^+:=T(A^-)\setminus E, \end{equation*} and \begin{equation*} A:=A^-\cup A^+. \end{equation*} Decompose $A$ into two sets: $A_1$, which is symmetric with respect to $\{x_n=0\}$, and the remaining part $A_2\subset A^-$, i.e. \begin{equation*} A_1:=A^+\cup T(A^+),\qquad A_2:=A^-\setminus T(A^+). \end{equation*} Notice that \begin{equation*} \{x_n\leq0\}\subset E\cup A, \end{equation*} and define \begin{equation*} F:=T(\Co(E\cup A)). \end{equation*} Then \begin{equation*} F\subset\{x_n\leq0\}\setminus A^-\subset E. 
\end{equation*} From the minimality of $E$, since $A\subset\Co E\cap B_1$, we get \begin{equation*} 0\geq\Ll_s(A,E)-\Ll_s(A,\Co(E\cup A))=\sum_{i=1,2}\left(\Ll_s(A_i,E)-\Ll_s(A_i,\Co(E\cup A))\right). \end{equation*} Using the reflection $T$, since $A_1$ is symmetric we get \begin{equation*} \Ll_s(A_1,\Co(E\cup A))=\Ll_s(A_1,F), \end{equation*} and hence \begin{equation*} \Ll_s(A_1,E)-\Ll_s(A_1,\Co(E\cup A))=\Ll_s(A_1,E\setminus F). \end{equation*} As for the second term, \begin{equation*} \Ll_s(A_2,E)-\Ll_s(A_2,\Co(E\cup A))=\Ll_s(A_2,E\setminus F)+\Ll_s(A_2,F)-\Ll_s(T(A_2),F). \end{equation*} For every $x,y\in\{x_n\leq0\}$ we have \begin{equation*} |x-y|\leq|T(x)-y|, \end{equation*} and hence \begin{equation*} \Ll_s(A_2,F)-\Ll_s(T(A_2),F)\geq0. \end{equation*} Putting everything together we get \begin{equation*} 0\geq\Ll_s(A_1,E\setminus F)+\Ll_s(A_2,E\setminus F)+[\Ll_s(A_2,F)-\Ll_s(T(A_2),F)], \end{equation*} and all three terms are nonnegative, so they must all be equal to 0.\\ This can happen only if $|A_2|=0$ and either $|A_1|=0$ or $|E\setminus F|=0$. If $|A_1|=0$, we're done. On the other hand, if $|E\setminus F|=0$, we can repeat the same argument with the hyperplane $\{x_n=-\epsilon\}$ for every $\epsilon>0$ small and in this case we have $|E\setminus F|>0$. Letting $\epsilon$ tend to 0, we get the claim. \end{proof} \end{prop} Now we state and prove the main result. \begin{teo} Let $E$ be a supersolution in $B_R$, with $0\in\partial E$. Suppose that $B_2(-2e_n)\subset E$. Then \begin{equation*} \limsup_{\delta\to0}\I_s^\delta[E](0)\leq0. \end{equation*} \begin{proof} Fix $\delta>0$ small and $0<\epsilon\ll\delta$.\\ Denote by $d_x$ the distance of $x$ from the sphere $\partial B_{1+\epsilon}(-e_n)$ and let $T$ be the radial reflection with respect to the sphere $\partial B_{1+\epsilon}(-e_n)$ in the annulus $d_x<2\delta$, i.e. \begin{equation*} \frac{x+T(x)}{2}+e_n=(1+\epsilon)\frac{x+e_n}{|x+e_n|}. 
\end{equation*} It can be shown that \begin{equation}\label{reflex1} |\det DT(x)|\leq1+Cd_x\leq 2, \end{equation} since $d_x<2\delta$ and $\delta$ is small, and \begin{equation}\label{reflex2} |T(x)-T(y)|\geq(1-c\max\{d_x,d_y\})|x-y|, \end{equation} for every $x,\,y\in\{z\in\R\,|\,d_z<2\delta\}$.\\ We want to define a small perturbation $A$ near the point $0$, in such a way that we can exploit some sort of cancellation given by $T$, in order to control the difference $\J_{B_R}(E)-\J_{B_R}(A\cup E)$.\\ We define \begin{equation*} A^-:=B_{1+\epsilon}(-e_n)\setminus E, \end{equation*} \begin{equation*} A^+:=T(A^-)\setminus E,\qquad A:=A^-\cup A^+. \end{equation*} Notice that \begin{equation*} A^-\subset B_{1+\epsilon}(-e_n)\setminus B_2(-2 e_n), \end{equation*} and hence, if $\epsilon$ is small enough, \begin{equation*} A\subset B_{2\sqrt{\epsilon}}\subset B_\delta\subset B_R. \end{equation*} We decompose $A$ into two disjoint sets, with $A_1=T(A_1)$, i.e. \begin{equation*} A_1:=T(A^+)\cup A^+,\qquad A_2:=A\setminus A_1\subset A^-. \end{equation*} Finally let \begin{equation*} F:=T(B_\delta\cap\Co(E\cup A)). \end{equation*} Notice that \begin{equation*} B_\delta\cap\Co(E\cup A)\subset B_\delta\cap \Co B_{1+\epsilon}(-e_n), \end{equation*} and \begin{equation*} T(B_\delta\cap \Co B_{1+\epsilon}(-e_n))\subset B_\delta\cap B_{1+\epsilon}(-e_n). \end{equation*} Therefore \begin{equation*} F\subset (B_\delta\cap B_{1+\epsilon}(-e_n))\setminus A^-\subset E\cap B_\delta. 
\end{equation*} Now, since $A\subset B_R\cap\Co E$ and $E$ is a supersolution in $B_R$, we get \begin{equation*}\begin{split} 0&\geq\Ll_s(A,E)-\Ll_s(A,\Co(E\cup A))\\ & =\Ll_s(A,E\setminus B_\delta)+\Ll_s(A,E\cap B_\delta)-\Ll_s(A,\Co(E\cup A)\cap B_\delta)\\ & \qquad\qquad-\Ll_s(A,\Co(E\cup A)\setminus B_\delta)\\ & =[\Ll_s(A,E\setminus B_\delta)-\Ll_s(A,\Co E\setminus B_\delta)]+\Ll_s(A,E\setminus F)\\ & \qquad\qquad+[\Ll_s(A,F)-\Ll_s(A,T(F))]\\ & \geq[\Ll_s(A,E\setminus B_\delta)-\Ll_s(A,\Co E\setminus B_\delta)]+[\Ll_s(A,F)-\Ll_s(A,T(F))]\\ & =:I_1+I_2. \end{split} \end{equation*} For simplicity, in the following estimates we will denote by $C$ all the constants appearing, with the understanding that $C$ may change from line to line. We first estimate $I_1$ \begin{equation*}\begin{split} &\left|\frac{1}{|A|}I_1-\I_s^\delta[E](0)\right|\\ & \qquad\quad=\left|\frac{1}{|A|}\int_A\left(\int_{\Co B_\delta}(\chi_E(y)-\chi_{\Co E}(y))\left(\frac{1}{\kers}- \frac{1}{|y|^{n+s}}\right)dy\right)dx \right|\\ & \qquad\quad=\left|\frac{1}{|A|}\int_{\Co B_\delta}(\chi_E(y)-\chi_{\Co E}(y))\left(\int_A\left(\frac{1}{\kers}- \frac{1}{|y|^{n+s}}\right)dx\right)dy \right|\\ & \qquad\quad\leq\int_{\Co B_\delta}\left(\frac{1}{|A|}\int_A\left|\frac{1}{\kers}- \frac{1}{|y|^{n+s}}\right|dx\right)dy. \end{split} \end{equation*} We know that for every $x\in A$ and $y\in\Co B_\delta$ \begin{equation*} \left|\frac{1}{\kers}-\frac{1}{|y|^{n+s}}\right| =(n+s)\frac{1}{\xi^{n+s+1}}||x-y|-|y||, \end{equation*} for some $\xi$ between $\min\{|y|,|x-y|\}$ and $\max\{|y|,|x-y|\}$, by the mean value theorem applied to the function $r\mapsto r^{-(n+s)}$.\\ Moreover, since $A\subset B_{2\sqrt{\epsilon}}$, we have \begin{equation*} ||x-y|-|y||\leq2\sqrt{\epsilon}, \end{equation*} and hence \begin{equation*} \left|\frac{1}{\kers}-\frac{1}{|y|^{n+s}}\right| \leq C\sqrt{\epsilon}\,\min\{|y|,|x-y|\}^{-(n+s+1)}. 
\end{equation*} For every fixed $y\in\Co B_\delta$ we can split $A$ into the two sets \begin{equation*} S_1:=\{x\in A\,|\, |x-y|\geq|y|\},\qquad S_2:=\{x\in A\,|\, |x-y|<|y|\}. \end{equation*} On the second set we have \begin{equation*}\begin{split} \int_{\Co B_\delta}&\left(\frac{1}{|A|}\int_{S_2}\left|\frac{1}{\kers}- \frac{1}{|y|^{n+s}}\right|dx\right)dy\\ & \qquad\leq C\sqrt{\epsilon}\,\int_{\Co B_\delta}\left(\frac{1}{|A|}\int_{S_2}\frac{1}{|x-y|^{n+s+1}}\,dx\right)dy\\ & \qquad\leq C\sqrt{\epsilon}\,\int_{\Co B_\delta}\left(\frac{1}{|A|}\int_A\frac{1}{|x-y|^{n+s+1}}\,dx\right)dy\\ & \qquad= C\sqrt{\epsilon}\,\frac{1}{|A|}\int_A\left(\int_{\Co B_\delta}\frac{1}{|x-y|^{n+s+1}}\,dy\right)dx\\ & \qquad\leq C\sqrt{\epsilon}\,\frac{1}{|A|}\int_A\left(\int_{\Co B_{\delta-2\sqrt{\epsilon}}}\frac{1}{|z|^{n+s+1}}\,dz\right)dx\\ & \qquad =C\sqrt{\epsilon}\,\int_{\Co B_{\delta-2\sqrt{\epsilon}}}\frac{1}{|z|^{n+s+1}}\,dz \end{split} \end{equation*} where in the last inequality we have used the change of variables $z=x-y$, noting that $|x-y|\geq\delta-2\sqrt{\epsilon}$ for every $x\in B_{2\sqrt{\epsilon}}$ and $y\in\Co B_\delta$. Since $\epsilon\ll\delta$, we get \begin{equation*} C\sqrt{\epsilon}\,\int_{\Co B_{\delta-2\sqrt{\epsilon}}}\frac{1}{|z|^{n+s+1}}\,dz \leq C\sqrt{\epsilon}\,\int_{\Co B_{\delta/2}}\frac{1}{|z|^{n+s+1}}\,dz \leq C \epsilon^{1/2}\delta^{-1-s}. \end{equation*} As for the first set $S_1$, we simply have \begin{equation*}\begin{split} \int_{\Co B_\delta}&\left(\frac{1}{|A|}\int_{S_1}\left|\frac{1}{\kers}- \frac{1}{|y|^{n+s}}\right|dx\right)dy\\ & \qquad\leq C\sqrt{\epsilon}\,\int_{\Co B_\delta}\left(\frac{1}{|A|}\int_{S_1}\frac{1}{|y|^{n+s+1}}\,dx\right)dy\\ & \qquad\leq C\sqrt{\epsilon}\,\int_{\Co B_\delta}\frac{1}{|y|^{n+s+1}}\,dy\\ & \qquad\leq C \epsilon^{1/2}\delta^{-1-s}. \end{split}\end{equation*} Therefore \begin{equation}\label{first_fmc} \left|\frac{1}{|A|}I_1-\I_s^\delta[E](0)\right|\leq C \epsilon^{1/2}\delta^{-1-s}. 
\end{equation} To estimate $I_2$ we write \begin{equation*} I_2=[\Ll_s(A_1,F)-\Ll_s(A_1,T(F))]+[\Ll_s(A_2,F)-\Ll_s(A_2,T(F))]. \end{equation*} Changing variables via $T$, since $A_1=T(A_1)$, we get \begin{equation*}\begin{split} \Ll_s(A_1,T(F))&=\Ll_s(T(A_1),T(F))\\ & =\int_{A_1}\int_F\frac{|\det DT(x)||\det DT(y)|}{|T(x)-T(y)|^{n+s}}\,dx\,dy\\ & \leq\int_{A_1}\int_F\frac{1+C\max\{d_x,\,d_y\}}{\kers}\,dx\,dy\\ & =\Ll_s(A_1,F)+C\int_{A_1}\int_F\frac{\max\{d_x,\,d_y\}}{\kers}\,dx\,dy, \end{split} \end{equation*} where we have used $(\ref{reflex1})$, $(\ref{reflex2})$ and Taylor expansion.\\ Since for every $x,\,y\in B_{1+\epsilon}(-e_n)$ \begin{equation*} |x-y|\leq|x-T(y)|, \end{equation*} and $A_2\subset A^-\subset B_{1+\epsilon}(-e_n),\quad F\subset B_{1+\epsilon}(-e_n)$, we have \begin{equation*}\begin{split} \Ll_s(A_2,T(F))&=\int_{A_2}dx\int_F\frac{|\det DT(y)|}{|x-T(y)|^{n+s}}\,dy \leq\int_{A_2}dx\int_F\frac{1+C\,d_y}{\kers}\,dy\\ & \leq\Ll_s(A_2,F)+C \int_{A_2}\int_F\frac{\max\{d_x,\,d_y\}}{\kers}\,dx\,dy. \end{split} \end{equation*} Therefore \begin{equation}\label{2fmc_int} -I_2\leq C\int_A\int_F\frac{\max\{d_x,\,d_y\}}{\kers}\,dx\,dy. \end{equation} We want to show that \begin{equation}\label{second_fmc} -\frac{I_2}{|A|}\leq C\delta^{1-s}+o(1)\qquad\textrm{as }\epsilon\to0. \end{equation} Then, since $I_1+I_2\leq0$, we have \begin{equation*} \frac{I_1}{|A|}\leq-\frac{I_2}{|A|}\leq C\delta^{1-s}+o(1), \end{equation*} and hence from $(\ref{first_fmc})$ we get \begin{equation*} \I_s^\delta[E](0)\leq\frac{I_1}{|A|}+C\epsilon^{1/2}\delta^{-1-s}\leq C\delta^{1-s}+o(1)+C\epsilon^{1/2}\delta^{-1-s}. \end{equation*} Passing to the limit $\epsilon\to0$ we obtain \begin{equation*} \I_s^\delta[E](0)\leq C\delta^{1-s}, \end{equation*} and this concludes the proof. We are left to show $(\ref{second_fmc})$.\\ We begin by estimating the contribution to the integral in $(\ref{2fmc_int})$ given by $x$ outside $B_{1+\epsilon}(-e_n)$, i.e. 
$x\in A^+$.\\ Recall that $T(A^+)\subset A^-$ and $|\det DT(x)|\leq2$. Therefore changing variables via $T$ we get \begin{equation*} \begin{split} \int_{A^+}dx\int_F\frac{\max\{d_x,\,d_y\}}{\kers}\,dy&=\int_{T(A^+)}|\det DT(x)|dx\int_F\frac{\max\{d_x,\,d_y\}}{|T(x)-y|^{n+s}}\,dy\\ & \leq2\int_{A^-}dx\int_F\frac{\max\{d_x,\,d_y\}}{|T(x)-y|^{n+s}}\,dy\\ & \leq2\int_{A^-}dx\int_F\frac{\max\{d_x,\,d_y\}}{\kers}\,dy, \end{split}\end{equation*} and hence \begin{equation*} -I_2\leq C\int_{A^-}dx\int_F\frac{\max\{d_x,\,d_y\}}{\kers}\,dy. \end{equation*} For a fixed $x\in A^-$ we can distinguish the cases $y\in B_{2d_x}(x)$ and $y\in\Co B_{2d_x}(x)$. Also recall that $F\subset B_\delta$ and $A^-\subset B_\delta$. Then \begin{equation*}\begin{split} \int_{F\setminus B_{2d_x}(x)}\frac{\max\{d_x,\,d_y\}}{\kers}\,dy&\leq \int_{B_\delta\setminus B_{2d_x}(x)}\frac{\max\{d_x,\,d_y\}}{\kers}\,dy\\ & \leq\int_{B_{2\delta}(x)\setminus B_{2d_x}(x)}\frac{\max\{d_x,\,d_y\}}{\kers}\,dy\\ & =\int_{2 d_x}^{2\delta}dr\int_{\partial B_r(x)}\frac{\max\{d_x,\,d_y\}}{r^{n+s}}\,d\Han(y)\\ & \leq\int_{2 d_x}^{2\delta}dr\int_{\partial B_r(x)}\frac{r}{r^{n+s}}\,d\Han(y)\\ & =n\omega_n\int_{2 d_x}^{2\delta}\frac{r}{r^{n+s}}r^{n-1}\,dr\\ & =\frac{n\omega_n}{1-s}\int_{2 d_x}^{2\delta}\frac{d}{dr}r^{1-s}dr\\ & \leq C\delta^{1-s}. \end{split}\end{equation*} Integrating over $A^-$ we obtain the first term on the right hand side of $(\ref{second_fmc})$ \begin{equation*} \int_{A^-}dx\int_{F\setminus B_{2d_x}(x)}\frac{\max\{d_x,\,d_y\}}{\kers}\,dy\leq C|A|\delta^{1-s}. \end{equation*} On the other hand, if $y\in B_{2d_x}(x)$, then since $A^-\subset B_{1+\epsilon}(-e_n)\setminus B_2(-2e_n)$, \begin{equation*} \max\{d_x,\,d_y\}\leq3 d_x\leq 3\epsilon, \end{equation*} and hence \begin{equation*}\begin{split} \int_{A^-}dx\int_{F\cap B_{2d_x}(x)}\frac{\max\{d_x,\,d_y\}}{\kers}\,dy&\leq3\epsilon \int_{A^-}\int_{F\cap B_{2d_x}(x)}\frac{1}{\kers}\,dx\,dy\\ & \leq3\epsilon\,\Ll_s(A^-,F). 
\end{split}\end{equation*} Therefore \begin{equation*} -I_2\leq C|A|\delta^{1-s}+C\epsilon\,\Ll_s(A^-,F). \end{equation*} To conclude the proof it is now enough to prove the following \begin{lem} There exists a sequence $\epsilon\to0$ s.t. \begin{equation*} \epsilon\,\Ll_s(A^-,F)\leq C\epsilon^\eta\,|A^-|, \end{equation*} for some $\eta\in(0,1-s)$. \begin{proof} Since $E$ is a supersolution in $B_R$ and $A^-\subset \Co E\cap B_R$, we have \begin{equation*} \Ll_s(A^-,E)\leq\Ll_s(A^-,\Co(E\cup A^-)). \end{equation*} Therefore, since $B_{1+\epsilon}(-e_n)\subset E\cup A^-$, we get \begin{equation*}\begin{split} \Ll_s(A^-,F)&\leq\Ll_s(A^-,E)\leq\Ll_s(A^-,\Co(E\cup A^-))\\ & \leq\Ll_s(A^-,\Co B_{1+\epsilon}(-e_n)). \end{split}\end{equation*} For every $x\in B_{1+\epsilon}(-e_n)$ we have \begin{equation*} \int_{\Co B_{1+\epsilon}(-e_n)}\frac{1}{\kers}\,dy\leq\int_{\Co B_{d_x}(x)}\frac{1}{\kers}\,dy\leq C\,d_x^{-s}. \end{equation*} We denote \begin{equation*} a(r):=\Han(\Co E\cap\partial B_{1+r}(-e_n)), \end{equation*} for every $r\in[0,\epsilon)$. Then \begin{equation*}\begin{split} \Ll_s(A^-,\Co B_{1+\epsilon}(-e_n))&=\int_{A^-}dx\int_{\Co B_{1+\epsilon}(-e_n)}\frac{1}{\kers}\,dy\\ & \leq C\int_{A^-}d_x^{-s}\,dx\\ & =C\int_0^\epsilon dr\int_{\Co E\cap\partial B_{1+r}(-e_n)}(\epsilon-r)^{-s}\,d\Han(x)\\ & =C\int_0^\epsilon a(r) (\epsilon-r)^{-s}\,dr. \end{split} \end{equation*} In order to prove the claim, we show that for a sequence $\epsilon\to0$ we have \begin{equation*} \epsilon\,\int_0^\epsilon a(r) (\epsilon-r)^{-s}\,dr\leq\epsilon^\eta\,\int_0^\epsilon a(r)\,dr=\epsilon^\eta\,|A^-|. \end{equation*} Assume by contradiction that for all $\epsilon$ small we have the opposite inequality \begin{equation*} \int_0^\epsilon a(r)(\epsilon-r)^{-s}\,dr>\epsilon^{\eta-1}\int_0^\epsilon a(r)\,dr. 
\end{equation*} Integrating this inequality in $\epsilon$ between $0$ and $\lambda$ we get \begin{equation}\label{lemma52} \lambda^{1-s}\int_0^\lambda a(r)\,dr\geq c(s,\eta)\lambda^\eta\int_0^{\frac{\lambda}{2}}a(r)\,dr. \end{equation} Indeed the left hand side gives \begin{equation*}\begin{split} \int_0^\lambda\left(\int_0^\epsilon a(r)(\epsilon-r)^{-s}\,dr\right)d\epsilon&= \int_0^\lambda a(r)\left(\frac{1}{1-s}\int_r^\lambda\frac{d}{d\epsilon}(\epsilon-r)^{1-s}\,d\epsilon\right)dr\\ & =\frac{1}{1-s}\int_0^\lambda a(r)(\lambda-r)^{1-s}\,dr\\ & \leq \frac{1}{1-s}\lambda^{1-s}\int_0^\lambda a(r)\,dr. \end{split} \end{equation*} As for the right hand side we have \begin{equation*}\begin{split} \int_0^\lambda\left(\epsilon^{\eta-1}\int_0^\epsilon a(r)\,dr\right)d\epsilon& \geq \int_{\frac{\lambda}{2}}^\lambda\left(\epsilon^{\eta-1}\int_0^\epsilon a(r)\,dr\right)d\epsilon\\ & \geq \int_{\frac{\lambda}{2}}^\lambda\left(\epsilon^{\eta-1}\int_0^{\frac{\lambda}{2}} a(r)\,dr\right)d\epsilon\\ & =\frac{1}{\eta}\int_{\frac{\lambda}{2}}^\lambda\frac{d}{d\epsilon}\,\epsilon^\eta\,d\epsilon\int_0^{\frac{\lambda}{2}}a(r)\,dr\\ & =\frac{1}{\eta}(1-2^{-\eta})\lambda^\eta\int_0^{\frac{\lambda}{2}}a(r)\,dr, \end{split} \end{equation*} and we get $(\ref{lemma52})$. Let $\alpha:=1-s-\eta>0$; then $(\ref{lemma52})$ reads \begin{equation*} \int_0^\lambda a(r)\,dr\geq c\lambda^{-\alpha}\int_0^{\frac{\lambda}{2}}a(r)\,dr, \end{equation*} and hence for every $M>0$, if $\lambda$ is small enough, $\lambda<\lambda_0$, we get \begin{equation*} \int_0^\lambda a(r)\,dr\geq M\int_0^{\frac{\lambda}{2}}a(r)\,dr. 
\end{equation*} If we take $\lambda=2^{-k}$, with $k\geq k_0$ and $2^{-k_0}<\lambda_0$, \begin{equation*}\begin{split} \int_0^{2^{-k}}a(r)\,dr&\leq M^{-1}\int_0^{2^{-k+1}}a(r)\,dr \leq M^{-2}\int_0^{2^{-k+2}}a(r)\,dr\leq\dots\\ & \leq M^{k_0-k}\int_0^{2^{-k_0}}a(r)\,dr, \end{split} \end{equation*} and we have \begin{equation*} \int_0^{2^{-k_0}}a(r)\,dr=\left|\Co E\cap B_{1+2^{-k_0}}(-e_n)\right|\leq |B_2(-e_n)\setminus B_1(-e_n)|, \end{equation*} for every $k_0\in\mathbb{N}$. Therefore \begin{equation*} \left|\Co E\cap B_{1+2^{-k}}(-e_n)\right|\leq C M^{k_0-k}. \end{equation*} However, since $E$ is a supersolution in $B_R$, with $0\in\partial E$ and $B_{2^{-k}}\subset B_R$, the uniform density estimate gives \begin{equation*} \left|\Co E\cap B_{1+2^{-k}}(-e_n)\right|\geq\left|\Co E\cap B_{2^{-k}}\right|\geq c2^{-nk}. \end{equation*} Choosing $M=2^{n+1}$ we obtain \begin{equation*} c2^{-nk}\leq C 2^{(n+1)(k_0-k)}, \end{equation*} for every $k\geq k_0$, i.e. \begin{equation*} 2^{-k}\geq\frac{c}{C2^{(n+1)k_0}}, \end{equation*} which yields a contradiction once we choose $k$ large enough. This concludes the proof. \end{proof} \end{lem} \end{proof} \end{teo} Scaling and translating we get \begin{coroll}\label{Euler_Lag_ball_eq} Let $E$ be a supersolution in the open set $\Omega$. If $x\in\partial E\cap \Omega$ and $E\cap\Omega$ has an interior tangent ball at $x$, then \begin{equation*} \limsup_{\delta\to0}\I_s^\delta[E](x)\leq0. \end{equation*} \end{coroll} Now let $E$ be a supersolution in $\Omega$, with $x\in\partial E\cap\Omega$.\\ Suppose we have an open set $F$ which is contained in $E$ and touches $E$ at $x$, i.e. $F\subset E$ s.t. $x\in\partial F$, and suppose $\partial F$ is $C^2$ in a neighborhood of $x$.\\ Then, since $\partial F$ is $C^2$ near $x$, we can find an interior tangent ball at $x$, i.e. \begin{equation*} B_r(y)\subset F\qquad\textrm{s.t. }x\in\partial B_r(y)\cap\partial F. 
\end{equation*} Taking a smaller ball if necessary, we can suppose that $B_r(y)\subset\Omega$.\\ Clearly, since $F\subset E$ and $x\in\partial E\cap\partial F$, $B_r(y)$ is also an interior tangent ball to $E$ at $x$. Therefore the previous corollary gives \begin{equation*} \limsup_{\delta\to0}\I_s^\delta[E](x)\leq0. \end{equation*} Since $F$ is regular near $x$, we know that the fractional mean curvature of $F$ at $x$ is well defined. Moreover $(\ref{frac_conf})$ gives \begin{equation*} \I_s^\delta[F](x)\leq\I_s^\delta[E](x), \end{equation*} and hence passing to the limit $\delta\to0$ we get \begin{equation*} \I_s[F](x)\leq0. \end{equation*} This proves that a supersolution is also a viscosity supersolution, in the following sense \begin{coroll} Let $E$ be a supersolution in the open set $\Omega$, with $x\in\partial E\cap\Omega$. If $F$ is an open set contained in $E$ with $x\in\partial F$ and s.t. $\partial F$ is $C^2$ near $x$, then \begin{equation*} \I_s[F](x)\leq0. \end{equation*} \end{coroll} \begin{rmk} Notice that if $E$ is a subsolution we get the analogous statements just by considering $\Co E$, which is then a supersolution.\\ For example, if we have an exterior tangent ball at $x\in\partial E\cap\Omega$, then \begin{equation*} \liminf_{\delta\to0}\I_s^\delta[E](x)=-\limsup_{\delta\to0}\I_s^\delta[\Co E](x)\geq0. \end{equation*} \end{rmk} In particular, when $E$ is minimal we have both inequalities and hence we get the following \begin{coroll} Let $E$ be an $s$-minimal set in the open set $\Omega$. If $x\in\partial E\cap \Omega$ and $E$ has an interior and an exterior tangent ball at $x$, both contained in $\Omega$, then \begin{equation*} \I_s[E](x)=0. 
\end{equation*} \end{coroll} The above Corollary says that an $s$-minimal set is a classical solution of the zero fractional mean curvature equation $(\ref{euler-lagrange_frac})$ at every sufficiently regular point $x\in\partial E\cap\Omega$.\\ As a consequence of the Euler-Lagrange equation we can also improve the comparison principle shown earlier. If the boundary of $E$ is contained in a strip outside of $\Omega$, then it is contained in the same strip also inside $\Omega$. \begin{coroll} Let $E$ be an $s$-minimal set in the bounded open set $\Omega$. If \begin{equation*} \{x_n\leq a\}\setminus\Omega\subset E\subset\{x_n\leq b\}\setminus\Omega, \end{equation*} then \begin{equation*} \{x_n\leq a\}\subset E\subset\{x_n\leq b\}. \end{equation*} \begin{proof} We only show that \begin{equation*} \{x_n\leq a\}\subset E, \end{equation*} the other inclusion being analogous. It is enough to prove \begin{equation*} \inf_{x\in\partial E}x_n\geq a. \end{equation*} Notice that by hypothesis we know \begin{equation*} a\leq\inf_{x\in\partial E\setminus\Omega}x_n\leq b, \end{equation*} and, since $\Omega$ is bounded, \begin{equation*} \inf_{x\in\Omega}x_n\leq\inf_{x\in\partial E\cap\Omega}x_n\leq\sup_{x\in\Omega}x_n, \end{equation*} so that $\inf_{x\in\partial E}x_n$ is finite.\\ By contradiction suppose that \begin{equation*} \inf_{x\in\partial E}x_n< a. \end{equation*} Then we can translate a hyperplane $\{x_n=t\}$ until it touches $\partial E$.\\ We can suppose that the contact point is $x=0\in\partial E\cap\Omega$ and that the tangent hyperplane is $\{x_n=0\}$, with $0<a<b$. Since $P:=\{x_n\leq0\}\subset E$ and $0\in\partial E\cap \partial P$, \begin{equation*} (\chi_E(y)-\chi_{\Co E}(y))-(\chi_P(y)-\chi_{\Co P}(y))\geq0, \end{equation*} for every $y\in\R$. 
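This pointwise inequality is just the monotonicity of $y\mapsto\chi_E(y)-\chi_{\Co E}(y)$ with respect to set inclusion, exactly as in the proof of $(\ref{frac_conf})$:

```latex
% Since P \subset E, we have \chi_P \le \chi_E and \chi_{\Co E} \le \chi_{\Co P};
% summing the two inequalities gives, for every y,
\begin{equation*}
(\chi_E(y)-\chi_{\Co E}(y))-(\chi_P(y)-\chi_{\Co P}(y))
=(\chi_E(y)-\chi_P(y))+(\chi_{\Co P}(y)-\chi_{\Co E}(y))\geq0.
\end{equation*}
```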
Let $T$ denote the reflection across $\{x_n=0\}$; then changing coordinates via $T$ we get \begin{equation*} \int_{\Co B_\delta}\frac{\chi_{\Co P}(y)}{|y|^{n+s}}\,dy= \int_{\Co B_\delta}\frac{\chi_{\Co P}(T(y))}{|T(y)|^{n+s}}\,dy= \int_{\Co B_\delta}\frac{\chi_P(y)}{|y|^{n+s}}\,dy, \end{equation*} so that $\I_s^\delta[P](0)=0$ for every $\delta>0$.\\ Moreover the Euler-Lagrange equation for $E$ gives \begin{equation*} \limsup_{\delta\to0}\I_s^\delta[E](0)\leq0. \end{equation*} Therefore \begin{equation*}\begin{split} 0&\leq \limsup_{\delta\to0} \int_{\Co B_\delta} \frac{(\chi_E(y)-\chi_{\Co E}(y))-(\chi_P(y)-\chi_{\Co P}(y))}{|y|^{n+s}}\,dy\\ & \qquad\qquad=\limsup_{\delta\to0}\I_s^\delta[E](0)\leq 0, \end{split} \end{equation*} which implies $\chi_E(y)=\chi_P(y)$ for a.e. $y\in\R$, i.e. $E=P$, but this contradicts the hypothesis \begin{equation*} \{x_n\leq a\}\setminus\Omega\subset E. \end{equation*} \end{proof} \end{coroll} Taking thinner and thinner strips we get the following \begin{coroll} A hyperplane is locally $s$-minimal, meaning that it is $s$-minimal in every bounded open set $\Omega\subset\R$. \end{coroll} \begin{rmk} We can think of equation $(\ref{euler-lagrange_frac})$ as saying that \begin{equation*} -(-\Delta)^\frac{s}{2}(\chi_E-\chi_{\Co E})=0\qquad\textrm{along}\quad\partial E, \end{equation*} if we think that a point $x_0\in\partial E$ belongs both to $E$ and $\Co E$. To be more precise, following the notation of Section 1.2, we define \begin{equation*} \tilde{\chi}_E(y):=\left\{\begin{array}{cc} 1,&y\in E_1,\\ 0,&y\in\partial E,\\ -1,&y\in E_0. \end{array}\right. \end{equation*} Then, if $|\partial E|=0$ we have $\tilde{\chi}_E(y)=\chi_E(y)-\chi_{\Co E}(y)$ for a.e. 
$y\in\R$.\\ Therefore for every $x_0\in\partial E$ and $\delta>0$ \begin{equation*}\begin{split} \I_s^\delta[E](x_0)&=\int_{\Co B_\delta(x_0)}\frac{\chi_E(y)-\chi_{\Co E}(y)}{|x_0-y|^{n+s}}\,dy =\int_{\Co B_\delta(x_0)}\frac{\tilde{\chi}_E(y)}{|x_0-y|^{n+s}}\,dy\\ & =-\int_{\Co B_\delta(x_0)}\frac{\tilde{\chi}_E(x_0)-\tilde{\chi}_E(y)}{|x_0-y|^{n+s}}\,dy, \end{split} \end{equation*} and hence passing to the limit $\delta\to0$ formally yields \begin{equation*} \I_s[E](x_0)=-(-\Delta)^\frac{s}{2}(\tilde{\chi}_E)(x_0). \end{equation*} \end{rmk} \end{section} \begin{section}{First Variation of the Fractional Perimeter} First of all, we show that the fractional mean curvature is continuous with respect to $C^2$ convergence of sets. \begin{defin} If $E$ and $E_k$ are bounded open sets with $C^2$ boundary, we say that $E_k\longrightarrow E$ in $C^2$ if $|E_k\Delta E|\longrightarrow0$ and the boundaries converge in the $C^2$ sense, meaning for example that they can be described locally with a finite number of graphs of functions which converge in $C^2$. \end{defin} \begin{prop}\label{continuity_curv_prop} Let $E$ and $E_k$ be bounded open sets with $C^2$ boundary s.t. $E_k\longrightarrow E$ in $C^2$ and let $x_k\in\partial E_k$, $x\in\partial E$ s.t. $x_k\longrightarrow x$. Then \begin{equation} \I_s[E_k](x_k)\longrightarrow\I_s[E](x). \end{equation} \begin{proof} Without loss of generality we can simplify the situation somewhat.\\ We can suppose that $x=0\in\partial E$ and that the normal vector to $\partial E$ at $0$ is $\nu_E(0)=e_n$. Moreover notice that the translated sets $E_k-x_k$ still converge in $C^2$ to $E$ and \begin{equation*} \I_s^\rho[E_k-x_k](0)=\I_s^\rho[E_k](x_k), \end{equation*} for every $\rho>0$, so that we can assume $x_k=0\in\partial E_k$ for every $k$.\\ We can also rotate the sets $E_k$ so that the normal vector at $0$ is $e_n$. Indeed, for every $k\in\mathbb{N}$ let $\mathcal{R}_k\in SO(n)$ be a rotation s.t. $\nu_{\mathcal{R}_kE_k}(0)=e_n$. 
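This normalization is harmless because $\I_s^\rho$ is invariant under rotations: since the kernel is radial and $|\det\mathcal{R}_k|=1$, the change of variables $y=\mathcal{R}_kz$ gives

```latex
% With y = \mathcal{R}_k z we have |y|=|z| and
% \chi_{\mathcal{R}_k E_k}(\mathcal{R}_k z)=\chi_{E_k}(z), hence
\begin{equation*}
\int_{\Co B_\rho}\frac{\chi_{\mathcal{R}_kE_k}(y)-\chi_{\Co(\mathcal{R}_kE_k)}(y)}{|y|^{n+s}}\,dy
=\int_{\Co B_\rho}\frac{\chi_{E_k}(z)-\chi_{\Co E_k}(z)}{|z|^{n+s}}\,dz.
\end{equation*}
```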
Then we still have $\mathcal{R}_kE_k\longrightarrow E$ in $C^2$ and \begin{equation*} \I_s^\rho[\mathcal{R}_kE_k](0)=\I_s^\rho[E_k](0), \end{equation*} for every $\rho>0$. We begin by showing that far from $0$ the $L^1$ convergence of the sets is enough.\\ Indeed, let $A\subset\R$ be s.t. $B_r\subset A$ for some $r>0$. Then \begin{equation*} \left|\int_{\Co A}\left\{(\chi_{E_k}(y)-\chi_{\Co E_k}(y))-(\chi_E(y)-\chi_{\Co E}(y))\right\}\,\frac{dy}{|y|^{n+s}}\right| \leq\frac{|(E_k\Delta E)\cap\Co A|}{r^{n+s}}, \end{equation*} which tends to 0 as $k\to\infty$. Therefore we only need to study what happens near 0.\\ We work in the cylinder \begin{equation*} A=K_r:=B'_r\times (-r,r)=\{(y',y_n)\in\R\,|\,|y'|<r\textrm{ and }y_n\in(-r,r)\}. \end{equation*} Taking $r$ small enough we can write \begin{equation*} E\cap K_r=\{(y',y_n)\in\R\,|\,y'\in B'_r,\,-r<y_n<u(y')\}, \end{equation*} with $u\in C^2(B'_r)$ s.t. $u(0)=0$ and also $\nabla u(0)=0$, since $\nu_E(0)=e_n$.\\ Using our assumptions we can also write for $k$ large enough \begin{equation*} E_k\cap K_r=\{(y',y_n)\in\R\,|\,y'\in B'_r,\,-r<y_n<u_k(y')\}, \end{equation*} with $u_k\in C^2(B'_r)$ s.t. $u_k(0)=0$, $\nabla u_k(0)=0$ and $u_k\longrightarrow u$ in $C^2(B'_r)$. As shown in \cite{unifor}, we have an explicit formula to calculate the contribution to the fractional mean curvature of $E$ at $0$ coming from $K_r$, \begin{equation}\label{curv_graph} P.V.\int_{K_r}\frac{\chi_E(y)-\chi_{\Co E}(y)}{|y|^{n+s}}\,dy =2\int_{B'_r}\left(\int_0^{\frac{u(y')}{|y'|}}\frac{dt}{(1+t^2)^\frac{n+s}{2}}\right)\frac{dy'}{|y'|^{n+s-1}}. \end{equation} We give a proof of this formula in the Lemma below. Notice that the right hand side is well defined in the classical sense. 
Indeed using Taylor expansion and our hypothesis on $u$ we see that \begin{equation*} \left|\frac{u(y')}{|y'|}\right|\leq\frac{1}{2}\|D^2u\|_{C^0(B'_r)}|y'|, \end{equation*} and hence \begin{equation*}\begin{split} \int_{B'_r}\Big|\int_0^{\frac{u(y')}{|y'|}}\frac{dt}{(1+t^2)^\frac{n+s}{2}}\Big|&\frac{dy'}{|y'|^{n+s-1}} \leq\frac{1}{2}\|D^2u\|_{C^0}\int_{B'_r}\frac{|y'|}{|y'|^{n+s-1}}\,dy'\\ & =\frac{1}{2}\|D^2u\|_{C^0}\h^{n-2}(\s^{n-2})\int_0^r\frac{\rho^{n-2}}{\rho^{n+s-2}}\,d\rho\\ & =\frac{1}{2}\|D^2u\|_{C^0}\frac{\h^{n-2}(\s^{n-2})}{1-s}r^{1-s}. \end{split} \end{equation*} Clearly formula $(\ref{curv_graph})$ holds also for every $E_k$. Moreover, using Taylor again, we see that \begin{equation*} \left|\frac{u(y')-u_k(y')}{|y'|}\right|\leq\frac{1}{2}\|D^2u-D^2u_k\|_{C^0(B'_r)}|y'| \leq\frac{1}{2}\|u-u_k\|_{C^2(B'_r)}|y'|. \end{equation*} Therefore \begin{equation*}\begin{split} &\left|P.V.\int_{K_r}\frac{\chi_E(y)-\chi_{\Co E}(y)}{|y|^{n+s}}\,dy -P.V.\int_{K_r}\frac{\chi_{E_k}(y)-\chi_{\Co E_k}(y)}{|y|^{n+s}}\,dy\right|\\ & \qquad\quad =2\left|\int_{B'_r}\Big(\int_0^{\frac{u(y')}{|y'|}}\frac{dt}{(1+t^2)^\frac{n+s}{2}} -\int_0^{\frac{u_k(y')}{|y'|}}\frac{dt}{(1+t^2)^\frac{n+s}{2}}\Big)\frac{dy'}{|y'|^{n+s-1}}\right|\\ & \qquad\quad \leq2\int_{B'_r}\left|\frac{u(y')-u_k(y')}{|y'|}\right|\frac{dy'}{|y'|^{n+s-1}}\\ & \qquad\quad \leq\frac{\h^{n-2}(\s^{n-2})}{1-s}\,r^{1-s}\,\|u-u_k\|_{C^2(B'_r)}, \end{split} \end{equation*} which goes to 0 as $k\to\infty$. Putting the two estimates together we get, \begin{equation}\label{curv_cont_last} \left|\I_s[E](0)-\I_s[E_k](0)\right|\leq\frac{|(E_k\Delta E)\cap\Co K_r|}{r^{n+s}} +C\,r^{1-s}\|u-u_k\|_{C^2(B'_r)}, \end{equation} and hence the claim. 
\end{proof} \end{prop} \begin{rmk} Actually, to get $(\ref{curv_cont_last})$ it is enough to suppose that the sets $E_k$ converge in the $L^1$ sense to $E$ outside some small ball $B$ centered at $0$ and that inside $B$ we can represent $E_k$ and $E$ as $C^2$ graphs, with the graphs of $E_k$ converging to that of $E$ in the $C^2$ sense.\\ In particular, if we are interested in the continuity of the fractional curvature only at 0, there is no need to require $C^2$-regularity of the boundaries far from 0. \end{rmk} Setting $E_k=E$ for every $k$ we trivially get the following \begin{coroll} Let $E$ be a bounded open set with $C^2$ boundary. Then the function \begin{equation*} \I_s[E](-):\partial E\longrightarrow\mathbb{R},\qquad x\longmapsto\I_s[E](x) \end{equation*} is continuous. \end{coroll} \begin{rmk} Notice that we can weaken our regularity assumptions on the sets involved and ask their boundaries to be only $C^{1,\alpha}$ for some $\alpha>s$ and the convergence to be in the $C^{1,\alpha}$ sense.\\ Indeed, suppose we can write \begin{equation*} E\cap K_r=\{(y',y_n)\in\R\,|\,y'\in B'_r,\,-r<y_n<u(y')\}, \end{equation*} with $u\in C^{1,\alpha}(B'_r)$ s.t. $u(0)=0$ and $\nabla u(0)=0$ and $\alpha>s$. Then the mean value theorem gives \begin{equation*}\begin{split} |u(y')|&\leq|\nabla u(\xi)|\,|y'|=|\nabla u(\xi)-\nabla u(0)|\,|y'| \leq\|\nabla u\|_{C^{0,\alpha}}|\xi|^\alpha\,|y'|\\ & \leq\|u\|_{C^{1,\alpha}}|y'|^{1+\alpha}. \end{split} \end{equation*} Since we assume $\alpha>s$, this inequality is enough to guarantee that the right hand side of $(\ref{curv_graph})$ is well defined, \begin{equation*}\begin{split} \int_{B'_r}&\Big|\int_0^{\frac{u(y')}{|y'|}}\frac{dt}{(1+t^2)^\frac{n+s}{2}}\Big|\frac{dy'}{|y'|^{n+s-1}} \leq\|u\|_{C^{1,\alpha}}\int_{B'_r}\frac{|y'|^\alpha}{|y'|^{n+s-1}}\,dy'\\ & =\|u\|_{C^{1,\alpha}}\h^{n-2}(\s^{n-2})\int_0^r\frac{\rho^{n-2}}{\rho^{n+s-\alpha-1}}\,d\rho =\|u\|_{C^{1,\alpha}}\frac{\h^{n-2}(\s^{n-2})}{\alpha-s}r^{\alpha-s}. 
\end{split} \end{equation*} Once we have formula $(\ref{curv_graph})$, arguing as above we find \begin{equation*}\begin{split} &\left|P.V.\int_{K_r}\frac{\chi_E(y)-\chi_{\Co E}(y)}{|y|^{n+s}}\,dy -P.V.\int_{K_r}\frac{\chi_{E_k}(y)-\chi_{\Co E_k}(y)}{|y|^{n+s}}\,dy\right|\\ & \qquad\qquad\qquad \leq C\,r^{\alpha-s}\|u-u_k\|_{C^{1,\alpha}(B'_r)}, \end{split}\end{equation*} and hence the convergence. \end{rmk} Now we prove $(\ref{curv_graph})$. \begin{lem}\label{explicit_curv_formula} Let $E$ be an open set with $0\in\partial E$. Suppose that \begin{equation*} E\cap K_r=\{(y',y_n)\in\R\,|\,y'\in B'_r,\,-r<y_n<u(y')\}, \end{equation*} with $u\in C^{1,\alpha}(B'_r)$ s.t. $u(0)=0$, $\nabla u(0)=0$ and $\alpha>s$. Then \begin{equation} P.V.\int_{K_r}\frac{\chi_E(y)-\chi_{\Co E}(y)}{|y|^{n+s}}\,dy =2\int_{B'_r}\left(\int_0^{\frac{u(y')}{|y'|}}\frac{dt}{(1+t^2)^\frac{n+s}{2}}\right)\frac{dy'}{|y'|^{n+s-1}}. \end{equation} In particular the $s$-fractional mean curvature of $E$ at $0$ is well defined. \begin{proof} We take $0<\rho<r$ and split the set $K_r\setminus B_\rho$ as \begin{equation*}\begin{split} K_r\setminus B_\rho& =(B'_r\setminus B'_\rho)\times(-r,r)\cup\{(y',y_n)\,|\,y'\in B'_\rho,\,y_n\in(-r,r)\setminus\pi(y')\}\\ & =:S_1\cup S_2, \end{split}\end{equation*} where \begin{equation*} \pi(y'):=\left[-\sqrt{\rho^2-|y'|^2},\sqrt{\rho^2-|y'|^2}\right]\subset\mathbb{R}. \end{equation*} \noindent We first calculate the contribution coming from $S_1$.
We have \begin{equation*}\begin{split} \int_{K_r\setminus B_\rho}&\frac{\chi_E(y)-\chi_{\Co E}(y)}{|y|^{n+s}}\,dy\\ & =\int_{S_1}\frac{\chi_E(y)-\chi_{\Co E}(y)}{|y|^{n+s}}\,dy +\int_{S_2}\frac{\chi_E(y)-\chi_{\Co E}(y)}{|y|^{n+s}}\,dy\\ & =:I_1+I_2, \end{split} \end{equation*} and \begin{equation*}\begin{split} I_1& =\int_{B'_r\setminus B'_\rho}\left(\int_{-r}^{u(y')}\frac{dy_n}{(|y'|^2+y_n^2)^\frac{n+s}{2}} -\int^{r}_{u(y')}\frac{dy_n}{(|y'|^2+y_n^2)^\frac{n+s}{2}}\right)dy'\\ & =\int_{B'_r\setminus B'_\rho}\Big(\int_{-r}^{u(y')}\Big(1+\Big(\frac{y_n}{|y'|}\Big)^2\Big)^{-\frac{n+s}{2}}\,dy_n\\ & \qquad\qquad\qquad -\int^{r}_{u(y')}\Big(1+\Big(\frac{y_n}{|y'|}\Big)^2\Big)^{-\frac{n+s}{2}}\,dy_n\Big)\frac{dy'}{|y'|^{n+s}}. \end{split} \end{equation*} \noindent We change variables $y_n=|y'|t$ and we write for simplicity $g(t):=(1+t^2)^{-\frac{n+s}{2}}$, obtaining \begin{equation*} I_1=\int_{B'_r\setminus B'_\rho}\left(\int_{-\frac{r}{|y'|}}^\frac{u(y')}{|y'|}g(t)\,dt -\int^\frac{r}{|y'|}_\frac{u(y')}{|y'|}g(t)\,dt\right)\frac{dy'}{|y'|^{n+s-1}}. \end{equation*} \noindent Since the function $g$ is even, we get \begin{equation*} I_1 =2\int_{B'_r\setminus B'_\rho}\Big(\int_0^\frac{u(y')}{|y'|}g(t)\,dt\Big)\frac{dy'}{|y'|^{n+s-1}}. \end{equation*} In a similar way, we see that the contribution coming from $S_2$ gives \begin{equation*}\begin{split} I_2&=\int_{B'_\rho}\Big(\int_\mathbb{R}\chi_{|y'|^{-1}(-r,u(y'))}(t)\big(1-\chi_{|y'|^{-1}\pi(y')}(t)\big)g(t)\,dt\\ & \qquad -\int_\mathbb{R}\chi_{|y'|^{-1}(u(y'),r)}(t)\big(1-\chi_{|y'|^{-1}\pi(y')}(t)\big)g(t)\,dt\Big)\frac{dy'}{|y'|^{n+s-1}}.
\end{split} \end{equation*} \noindent If we let \begin{equation*} U:=\{y'\in B'_\rho\,|\,u(y')\in\pi(y')\}, \end{equation*} then playing with the characteristic functions and using again that $g$ is even, we see that \begin{equation*} I_2=2\int_{B'_\rho}\Big(\int_0^\frac{u(y')}{|y'|}g(t)\,dt\Big)\frac{dy'}{|y'|^{n+s-1}} -2\int_U\Big(\int_0^\frac{u(y')}{|y'|}g(t)\,dt\Big)\frac{dy'}{|y'|^{n+s-1}}. \end{equation*} Summing gives \begin{equation*} I_1+I_2=2\int_{B'_r}\Big(\int_0^\frac{u(y')}{|y'|}g(t)\,dt\Big)\frac{dy'}{|y'|^{n+s-1}} -2\int_U\Big(\int_0^\frac{u(y')}{|y'|}g(t)\,dt\Big)\frac{dy'}{|y'|^{n+s-1}}, \end{equation*} and, as remarked above \begin{equation*}\begin{split} \left|\int_U\Big(\int_0^\frac{u(y')}{|y'|}g(t)\,dt\Big)\frac{dy'}{|y'|^{n+s-1}}\right|& \leq\int_{B'_\rho}\Big|\int_0^\frac{u(y')}{|y'|}g(t)\,dt\Big|\frac{dy'}{|y'|^{n+s-1}}\\ & \leq C\,\|u\|_{C^{1,\alpha}(B'_r)}\rho^{\alpha-s}, \end{split} \end{equation*} which goes to 0 as $\rho\to0$.\\ Therefore passing to the limit concludes the proof. \end{proof} \end{lem} \begin{rmk} In a similar context, we might want to have estimates for the difference between the fractional mean curvature of a set $E$ with $C^{1,\alpha}$ boundary and that of the set $\Phi(E)$, where $\Phi$ is a $C^{1,\alpha}$-diffeomorphism of $\R$. For a detailed study of the estimates involved see \cite{matteo}. \end{rmk} Now it is convenient to switch our attention from graphs to level sets. We consider a function $\varphi\in C^2_c(\R)$ s.t. $\nabla \varphi\not=0$ in $\{t_1\leq \varphi\leq t_2\}$, for some $0<t_1<t_2$. That $\varphi$ has compact support guarantees that the sets $\{\varphi\geq t\}$ are compact for every $t>0$. 
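As a concrete example of an admissible $\varphi$ (this particular choice is only for illustration), one can take the standard radial bump function \begin{equation*} \varphi(x):=\left\{\begin{matrix}e^{-\frac{1}{1-|x|^2}}, & |x|<1\\ 0, & |x|\geq1\end{matrix}\right., \end{equation*} which belongs to $C^\infty_c(\R)$, attains its maximum $e^{-1}$ only at $0$ and satisfies \begin{equation*} \nabla\varphi(x)=-\frac{2x}{(1-|x|^2)^2}\,\varphi(x)\not=0\qquad\textrm{for }0<|x|<1. \end{equation*} Hence every $0<t_1<t_2<e^{-1}$ is admissible, and the superlevel sets $E_t=\{\varphi\geq t\}$ are closed balls centered at the origin, with $\{\varphi=t\}$ a sphere for every $t\in(0,e^{-1})$.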
Moreover $\nabla \varphi\not=0$ in $\{t_1\leq \varphi\leq t_2\}$ implies that $\{\varphi=t\}$ is a $C^2$ hypersurface for every $t\in[t_1,t_2]$.\\ In particular $\I_s[\{\varphi\geq \varphi(x)\}](x)$ is well defined for every $x\in\{t_1\leq \varphi\leq t_2\}$.\\ For simplicity we write $E_t:=\{\varphi\geq t\}$, for $t\in\mathbb{R}$.\\ Given $x\in\{t_1\leq \varphi\leq t_2\}$, we have for every $\rho>0$ \begin{equation*}\begin{split} \I^\rho_s[E_{\varphi(x)}](x)& =\int_{\Co B_\rho(x)}\frac{\chi_{\{\varphi\geq\varphi(x)\}}(y)-\chi_{\{\varphi<\varphi(x)\}}(y)}{\kers}dy\\ & =\int_{\Co B_\rho(x)}\frac{\sig(\varphi(y)-\varphi(x))}{\kers}dy, \end{split} \end{equation*} where sgn is the sign function \begin{equation*} \sig(t):=\left\{\begin{matrix}1, & t\geq0\\-1,&t<0\end{matrix}\right.. \end{equation*} \noindent Since by symmetry \begin{equation*} \int_{\Co B_\rho(x)}\frac{\sig(\nabla\varphi(x)\cdot (y-x))}{\kers}dy=0, \end{equation*} we get \begin{equation*}\begin{split} \qquad\I^\rho_s[E_{\varphi(x)}](x)& =\int_{\Co B_\rho(x)}\frac{\sig(\varphi(y)-\varphi(x))-\sig(\nabla\varphi(x)\cdot (y-x))}{\kers}dy\\ & =2\int_{\Co B_\rho(x)}\Big(\frac{\chi_{\{y|\varphi(y)\geq\varphi(x),\nabla\varphi(x)\cdot (y-x)\leq0\}}(y)} {\kers}\\ &\qquad\qquad\qquad -\frac{\chi_{\{y|\varphi(y)<\varphi(x),\nabla\varphi(x)\cdot (y-x)>0\}}(y)}{\kers}\Big)dy. 
\end{split} \end{equation*} We want to exploit this formula to show that the limit \begin{equation}\label{unif_curv} \I^\rho_s[E_{\varphi(x)}](x)\xrightarrow{\rho\to0^+}\I_s[E_{\varphi(x)}](x) \end{equation} is uniform in $x\in\{t_1\leq \varphi\leq t_2\}$.\\ To be more precise, since \begin{equation*}\begin{split} |\I_s^\rho[E_{\varphi(x)}](x)-\I_s[E_{\varphi(x)}](x)|&= \lim_{\delta\to0}|\I_s^\rho[E_{\varphi(x)}](x)-\I_s^\delta[E_{\varphi(x)}](x)|\\ & =\lim_{\delta\to0}\Big|\int_{B_\rho(x)\setminus B_\delta(x)}\frac{\chi_{E_{\varphi(x)}}(y)-\chi_{\Co E_{\varphi(x)}}(y)}{\kers}dy\Big|, \end{split} \end{equation*} we want to use the above formula to bound \begin{equation*} \Big|\int_{B_\rho(x)\setminus B_\delta(x)}\frac{\chi_{E_{\varphi(x)}}(y)-\chi_{\Co E_{\varphi(x)}}(y)}{\kers}dy\Big| \end{equation*} independently of $\delta$, so that we can let $\delta\to0$, then show that what we obtain goes to 0 as $\rho\to0$, independently of $x$. Roughly speaking, we are going to show that the sets $E_{\varphi(x)}$ satisfy a uniform paraboloid condition, meaning that we have tangent inner and outer paraboloids with the same opening width for every $x$.\\ Let $x\in\{t_1\leq\varphi\leq t_2\}$. Using Taylor expansion, we get \begin{equation*} |\varphi(y)-\varphi(x)-\nabla\varphi(x)\cdot(y-x)|\leq\frac{1}{2}\|D^2\varphi\|_{C^0}|y-x|^2, \end{equation*} and hence we have \begin{equation*}\begin{split} \{y\in\R\,|&\,\varphi(y)\geq\varphi(x),\nabla\varphi(x)\cdot (y-x)\leq0\}\\ & =\{y\in\R\,|\,0\leq-\nabla\varphi(x)\cdot(y-x)\leq\varphi(y)-\varphi(x)-\nabla\varphi(x)\cdot(y-x)\}\\ & \subset\left\{y\in\R\,|\,0\leq-\nabla\varphi(x)\cdot(y-x)\leq\frac{1}{2}\|D^2\varphi\|_{C^0}|y-x|^2\right\}.
\end{split} \end{equation*} We write \begin{equation*} e:=-\frac{\nabla\varphi(x)}{|\nabla\varphi(x)|} \quad\textrm{ and }\quad p_e(z):=|z-(e\cdot z)e|, \end{equation*} so that \begin{equation*} -\nabla\varphi(x)\cdot(y-x)=|\nabla\varphi(x)|\,e\cdot(y-x) \end{equation*} and, if we consider $y\in B_\rho(x)$ with $e\cdot(y-x)\geq0$, \begin{equation*} |y-x|^2=\big(e\cdot(y-x)\big)^2+\big(p_e(y-x)\big)^2 \leq \rho\, e\cdot(y-x)+\big(p_e(y-x)\big)^2. \end{equation*} Moreover, if we let \begin{equation*} \beta:=\inf_{\{t_1\leq\varphi\leq t_2\}}|\nabla\varphi|>0, \end{equation*} and we take \begin{equation*} 0<\rho<\frac{\beta}{\|D^2\varphi\|_{C^0}}, \end{equation*} then \begin{equation*} |\nabla\varphi(x)|-\frac{1}{2}\|D^2\varphi\|_{C^0}\,\rho>\frac{\beta}{2}>0. \end{equation*} Therefore \begin{equation*}\begin{split} &\Big\{y\in B_\rho(x)\setminus B_\delta(x)\,\big|\,0\leq-\nabla\varphi(x)\cdot(y-x)\leq\frac{1}{2}\|D^2\varphi\|_{C^0}|y-x|^2\Big\}\\ &\subset\Big\{y\in B_\rho(x)\setminus B_\delta(x)\,\big|\,0\leq\Big(|\nabla\varphi(x)|-\frac{\|D^2\varphi\|_{C^0}}{2}\rho\Big)e\cdot(y-x)\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad \leq\frac{\|D^2\varphi\|_{C^0}}{2}\big(p_e(y-x)\big)^2\Big\}\\ & =\Big\{y\in B_\rho(x)\setminus B_\delta(x)\,\big|\,0\leq e\cdot(y-x)\leq \frac{\|D^2\varphi\|_{C^0}}{2|\nabla\varphi(x)|-\|D^2\varphi\|_{C^0}\,\rho}\big(p_e(y-x)\big)^2\Big\}\\ & \subset\Big\{y\in B_\rho(x)\setminus B_\delta(x)\,\big|\,0\leq e\cdot(y-x)\leq\frac{\|D^2\varphi\|_{C^0}}{\beta}\big(p_e(y-x)\big)^2\Big\}, \end{split} \end{equation*} which is a paraboloid whose opening width is independent of $x$, as wanted.
Now we obtain \begin{equation*}\begin{split} \int_{B_\rho(x)\setminus B_\delta(x)}&\frac{\chi_{\{y|\varphi(y)\geq\varphi(x),\nabla\varphi(x)\cdot (y-x)\leq0\}}(y)} {\kers}dy\\ & \leq\int_{B_\rho(x)\setminus B_\delta(x)}\frac{\chi_{\{y|0\leq e\cdot(y-x)\leq\frac{\|D^2\varphi\|_{C^0}}{\beta}(p_e(y-x))^2\}}(y)}{\kers}dy\\ & \leq\int_{B'_\rho}\Big(\int_0^{\frac{\|D^2\varphi\|_{C^0}}{\beta}|z'|}\frac{dt}{(1+t^2)^\frac{n+s}{2}}\Big)\frac{dz'}{|z'|^{n+s-1}}\\ & \leq\frac{\h^{n-2}(\s^{n-2})}{1-s}\frac{\|D^2\varphi\|_{C^0}}{\inf_{\{t_1\leq\varphi\leq t_2\}}|\nabla\varphi|}\,\rho^{1-s}, \end{split} \end{equation*} which is uniform in $\delta$ and does not depend on $x$.\\ Reasoning in a similar way yields the same inequality for \begin{equation*} \int_{B_\rho(x)\setminus B_\delta(x)}\frac{\chi_{\{y|\varphi(y)<\varphi(x),\nabla\varphi(x)\cdot (y-x)>0\}}(y)} {\kers}dy, \end{equation*} and hence we can estimate \begin{equation} |\I_s^\rho[E_{\varphi(x)}](x)-\I_s[E_{\varphi(x)}](x)| \leq4\frac{\h^{n-2}(\s^{n-2})}{1-s}\frac{\|D^2\varphi\|_{C^0}}{\inf_{\{t_1\leq\varphi\leq t_2\}}|\nabla\varphi|}\,\rho^{1-s}, \end{equation} which tends to 0 uniformly in $x\in\{t_1\leq\varphi\leq t_2\}$ as $\rho\to0$. We have just proved the following \begin{lem} Let $\varphi\in C_c^2(\R)$ s.t. $\nabla\varphi\not=0$ in $\{t_1\leq\varphi\leq t_2\}$, for some $0<t_1<t_2$. Then the limit \begin{equation} \I^\rho_s[E_{\varphi(x)}](x)\xrightarrow{\rho\to0^+}\I_s[E_{\varphi(x)}](x) \end{equation} is uniform in $x\in\{t_1\leq \varphi\leq t_2\}$.\\ In particular \begin{equation}\label{L1_curv_conv} \int_S\I_s^\rho[E_{\varphi(x)}](x)\,dx\xrightarrow{\rho\to0^+} \int_S\I_s[E_{\varphi(x)}](x)\,dx, \end{equation} for every $S\subset\{t_1\leq\varphi\leq t_2\}$. \begin{proof} We have proved the first assertion above. 
In more abstract language, this means that the functions \begin{equation*} \I_s^\rho[E_{\varphi(-)}](-):\{t_1\leq\varphi\leq t_2\}\longrightarrow\mathbb{R} \end{equation*} converge in $L^\infty$ to $\I_s[E_{\varphi(-)}](-)$, and hence, since $\{t_1\leq\varphi\leq t_2\}$ is bounded, we also have $L^1$ convergence. \end{proof} \end{lem} We can now relate the difference between the $s$-perimeter of the superlevel set $E_{t_1}$ and that of $E_{t_2}$ to the $s$-fractional mean curvature. \begin{prop} Let $\varphi\in C_c^2(\R)$ s.t. $\nabla\varphi\not=0$ in $\{t_1\leq\varphi\leq t_2\}$, for some $0<t_1<t_2$. Then \begin{equation}\label{fracurveq} P_s(E_{t_1})=P_s(E_{t_2})-\int_{\{t_1<\varphi<t_2\}}\I_s[E_{\varphi(x)}](x)\,dx. \end{equation} Moreover for every $W\subset\R$ s.t. $\{\varphi\geq t_2\}\subset W\subset\{\varphi\geq t_1\}$ \begin{equation}\label{fracurvineq} P_s(W)\geq P_s(E_{t_2})-\int_{W\setminus E_{t_2}}\I_s[E_{\varphi(x)}](x)\,dx. \end{equation} \begin{proof} We remark that under our hypotheses $\I_s[E_{\varphi(x)}](x)$ is well defined for every $x\in\{t_1\leq\varphi\leq t_2\}$ and Proposition $\ref{continuity_curv_prop}$ implies that the function $\I_s[E_{\varphi(-)}](-)$ is continuous there.\\ On the other hand, recall that $\I_s^\rho[E_{\varphi(x)}](x)$ is defined for every $x\in\R$. Let $A\subset\R$ be a bounded set. Then \begin{equation*}\begin{split} \int_A&\I_s^\rho[E_{\varphi(x)}](x)\,dx =\int_A\Big(\int_{\R\setminus B_\rho(x)}\frac{\chi_{\{\varphi\geq\varphi(x)\}}(y)-\chi_{\{\varphi<\varphi(x)\}}(y)}{\kers}dy\Big)dx\\ & =\int_{\R\times\R}\chi_A(x)\big(\chi_{\{\varphi\geq\varphi(x)\}}(y)-\chi_{\{\varphi<\varphi(x)\}}(y)\big) \frac{\chi_{\Co(0,\rho)}(|x-y|)}{\kers}dx\,dy\\ & =\frac{1}{2}\int_{\R\times\R}\big(\chi_A(x)-\chi_A(y)\big)\big(\chi_{\{\varphi\geq\varphi(x)\}}(y)-\chi_{\{\varphi<\varphi(x)\}}(y)\big)\\ & \qquad\qquad\qquad\qquad\qquad\cdot \frac{\chi_{\Co(0,\rho)}(|x-y|)}{\kers}dx\,dy.
\end{split} \end{equation*} As for the last equality, simply notice that \begin{equation*} \chi_{\{\varphi\geq\varphi(x)\}}(y)-\chi_{\{\varphi<\varphi(x)\}}(y) =-\big(\chi_{\{\varphi\geq\varphi(y)\}}(x)-\chi_{\{\varphi<\varphi(y)\}}(x)\big), \end{equation*} for every $(x,y)\in\R\times\R$ s.t. $\varphi(x)\not=\varphi(y)$, and then just exchange $x$ and $y$ in the integral. For a general bounded set $A$ this gives \begin{equation*} -\int_A\I_s^\rho[E_{\varphi(x)}](x)\,dx\leq\frac{1}{2} \int_{\R\times\R}|\chi_A(x)-\chi_A(y)|\frac{\chi_{\Co(0,\rho)}(|x-y|)}{\kers}dx\,dy, \end{equation*} while if we take $A=E_t$ for some $t>0$, we have \begin{equation*} \big(\chi_{\{\varphi\geq t\}}(x)-\chi_{\{\varphi\geq t\}}(y)\big)\big(\chi_{\{\varphi\geq\varphi(x)\}}(y)-\chi_{\{\varphi<\varphi(x)\}}(y)\big) =-|\chi_{E_t}(x)-\chi_{E_t}(y)|, \end{equation*} and hence \begin{equation} -\int_{E_t}\I_s^\rho[E_{\varphi(x)}](x)\,dx =\frac{1}{2} \int_{\R\times\R}|\chi_{E_t}(x)-\chi_{E_t}(y)|\frac{\chi_{\Co(0,\rho)}(|x-y|)}{\kers}dx\,dy. \end{equation} Now let $\{\varphi\geq t_2\}\subset W\subset\{\varphi\geq t_1\}$. If $P_s(W)=\infty$, then $(\ref{fracurvineq})$ is clear.\\ On the other hand, if $P_s(W)<\infty$, then Lebesgue's dominated convergence Theorem implies that \begin{equation*} \lim_{\rho\to0}\frac{1}{2} \int_{\R\times\R}|\chi_W(x)-\chi_W(y)|\frac{\chi_{\Co(0,\rho)}(|x-y|)}{\kers}dx\,dy =P_s(W). \end{equation*} Therefore \begin{equation*}\begin{split} \frac{1}{2}\int_{\R\times\R}&|\chi_W(x)-\chi_W(y)|\frac{\chi_{\Co(0,\rho)}(|x-y|)}{\kers}dx\,dy \geq-\int_W\I_s^\rho[E_{\varphi(x)}](x)\,dx\\ & =-\int_{W\setminus E_{t_2}}\I_s^\rho[E_{\varphi(x)}](x)\,dx -\int_{E_{t_2}}\I_s^\rho[E_{\varphi(x)}](x)\,dx\\ & =-\int_{W\setminus E_{t_2}}\I_s^\rho[E_{\varphi(x)}](x)\,dx\\ & \qquad\qquad +\frac{1}{2} \int_{\R\times\R}|\chi_{E_{t_2}}(x)-\chi_{E_{t_2}}(y)|\frac{\chi_{\Co(0,\rho)}(|x-y|)}{\kers}dx\,dy. 
\end{split}\end{equation*} Since $W\setminus E_{t_2}\subset\{t_1\leq\varphi\leq t_2\}$, the previous Lemma guarantees that \begin{equation*} \lim_{\rho\to0}-\int_{W\setminus E_{t_2}}\I_s^\rho[E_{\varphi(x)}](x)\,dx =-\int_{W\setminus E_{t_2}}\I_s[E_{\varphi(x)}](x)\,dx, \end{equation*} and hence passing to the limit $\rho\to0$ proves $(\ref{fracurvineq})$ (notice that $E_{t_2}$ is a bounded set with $C^2$ boundary and hence $P_s(E_{t_2})<\infty$).\\ To get $(\ref{fracurveq})$ simply take $W=E_{t_1}$, so that \begin{equation*}\begin{split} \frac{1}{2}\int_{\R\times\R}&|\chi_{E_{t_1}}(x)-\chi_{E_{t_1}}(y)|\frac{\chi_{\Co(0,\rho)}(|x-y|)}{\kers}dx\,dy =-\int_{E_{t_1}}\I_s^\rho[E_{\varphi(x)}](x)\,dx\\ & =-\int_{\{t_1<\varphi<t_2\}}\I_s^\rho[E_{\varphi(x)}](x)\,dx\\ & \qquad\qquad +\frac{1}{2} \int_{\R\times\R}|\chi_{E_{t_2}}(x)-\chi_{E_{t_2}}(y)|\frac{\chi_{\Co(0,\rho)}(|x-y|)}{\kers}dx\,dy, \end{split}\end{equation*} and pass to the limit $\rho\to0$. \end{proof} \end{prop} As shown in \cite{CMP} in the broader context of generalized perimeters and curvatures, the previous Proposition is enough (actually, equivalent) to show that the fractional mean curvature $\I_s$ is the first variation of the fractional perimeter $P_s$, i.e. to prove $(\ref{first_variation})$. \begin{teo}[First Variation of the Perimeter] Let $E$ be a bounded open set with $C^2$ boundary and let $\Phi_t:\R\longrightarrow\R$ be a one-parameter family of diffeomorphisms of class $C^2$ both in $x$ and in $t$, with $\Phi_0=\textrm{Id}$. Then \begin{equation} \frac{d}{dt}P_s(\Phi_t(E))\Big|_{t=0}=-\int_{\partial E}\I_s[E](x)\nu_E(x)\cdot\phi(x)\,d\Han(x), \end{equation} where $\phi(x):=\frac{\partial}{\partial t}\Phi_t(x)\big|_{t=0}$ and $\nu_E(x)$ is the outer unit normal to $\partial E$ at $x$. \begin{proof} We can write $E=\{\varphi\geq\frac{1}{2}\}$ for some $\varphi\in C^2_c(\R)$ with $\nabla\varphi\not=0$ in $\{\frac{1}{8}\leq\varphi\leq\frac{7}{8}\}$ (see Appendix C).
Moreover notice that, since $\Phi_t$ is $C^2$ in $t$ and $\Phi_0=\textrm{Id}$, for $|t|$ small we have \begin{equation}\label{symmdifffirstvar} \Phi_t(E)\Delta E\subset N_{M|t|}(\partial E), \end{equation} the $(M|t|)$-neighborhood of $\partial E$, for some $M>0$. Therefore, for $|t|$ sufficiently small, we can construct a $C^2$ diffeomorphism $\tilde{\Phi}_t$ s.t. $\tilde{\Phi}_t=\textrm{Id}$ outside $N_{2M|t|}(\partial E)$ (and in particular outside $\{\frac{1}{4}\leq\varphi\leq\frac{3}{4}\}$), with $\tilde{\Phi}_t(E)=\Phi_t(E)$ and $\|\tilde{\Phi}_t-\textrm{Id}\|_{C^2}\longrightarrow0$ as $t\to0$.\\ In particular we have $P_s(\tilde{\Phi}_t(E))=P_s(\Phi_t(E))$. Moreover, since \begin{equation*} \Phi_t(E)=\tilde{\Phi}_t(E)=\Big\{\varphi\circ\tilde{\Phi}_t^{-1}\geq\frac{1}{2}\Big\}, \end{equation*} and \begin{equation*} \Big\{\varphi\circ\tilde{\Phi}_t^{-1}\geq\frac{7}{8}\Big\}=\Big\{\varphi\geq\frac{7}{8}\Big\}, \end{equation*} using $(\ref{fracurveq})$ we find, for $|t|$ small enough, \begin{equation*}\begin{split} P_s&(\Phi_t(E))-P_s(\{\varphi\geq7/8\})=P_s(\{\varphi\circ\tilde{\Phi}_t^{-1}\geq1/2\})-P_s(\{\varphi\circ\tilde{\Phi}_t^{-1}\geq7/8\})\\ & =-\int_{\{1/2<\varphi\circ\tilde{\Phi}_t^{-1}<7/8\}}\I_s\big[E_{\varphi\circ\tilde{\Phi}_t^{-1}(x)}\big](x)\,dx\\ & =-\int_{\Phi_t(E)\setminus\{\varphi\geq7/8\}}\I_s\big[E_{\varphi\circ\tilde{\Phi}_t^{-1}(x)}\big](x)\,dx\\ & =-\int_{N_{2M|t|}(\partial E)\cap\Phi_t(E)} \Big(\I_s\big[E_{\varphi\circ\tilde{\Phi}_t^{-1}(x)}\big](x)-\I_s[E_{\varphi(x)}](x)\Big)dx\\ & \qquad\qquad\qquad\qquad -\int_{\Phi_t(E)\setminus\{\varphi\geq7/8\}}\I_s[E_{\varphi(x)}](x)\,dx.
\end{split}\end{equation*} Now \begin{equation*}\begin{split} \Big|\int_{N_{2M|t|}(\partial E)\cap\Phi_t(E)}& \Big(\I_s\big[E_{\varphi\circ\tilde{\Phi}_t^{-1}(x)}\big](x)-\I_s[E_{\varphi(x)}](x)\Big)dx\Big|\\ & \leq\int_{N_{2M|t|}(\partial E)} \Big|\I_s\big[E_{\varphi\circ\tilde{\Phi}_t^{-1}(x)}\big](x)-\I_s[E_{\varphi(x)}](x)\Big|dx, \end{split}\end{equation*} and by Proposition $\ref{continuity_curv_prop}$ \begin{equation*} \|\I_s\big[E_{\varphi\circ\tilde{\Phi}_t^{-1}(x)}\big](x)-\I_s[E_{\varphi(x)}](x)\|_{L^\infty(N_{2M|t|}(\partial E))} \xrightarrow{t\to0}0, \end{equation*} so that \begin{equation*} \int_{N_{2M|t|}(\partial E)\cap\Phi_t(E)} \Big(\I_s\big[E_{\varphi\circ\tilde{\Phi}_t^{-1}(x)}\big](x)-\I_s[E_{\varphi(x)}](x)\Big)dx=o(t), \end{equation*} as $t\to0$. Therefore for $|t|$ small we get \begin{equation*} P_s(\Phi_t(E))=P_s(\{\varphi\geq7/8\})-\int_{\Phi_t(E)\setminus\{\varphi\geq7/8\}}\I_s[E_{\varphi(x)}](x)\,dx+o(t), \end{equation*} and hence \begin{equation*}\begin{split} \frac{d}{dt}P_s(\Phi_t(E))\Big|_{t=0}&=-\frac{d}{dt}\Big(\int_{\Phi_t(E)\setminus\{\varphi\geq7/8\}}\I_s[E_{\varphi(x)}](x)\,dx\Big)\Big|_{t=0}\\ & =-\int_{\partial E}\I_s[E](x)\nu_E(x)\cdot\phi(x)\,d\Han(x), \end{split} \end{equation*} where the last equality is classical, see for example Proposition 17.8 of \cite{Maggi}. We give a sketch of the proof. We know that $\I_s[E_{\varphi(-)}](-)$ is a continuous function in $\{1/8\leq\varphi\leq7/8\}$. We can find a continuous function $g\in C^0(\R)$ s.t. $g(x)=\I_s[E_{\varphi(x)}](x)$ for every $x\in\{1/4\leq\varphi\leq3/4\}$. Then, since we have $(\ref{symmdifffirstvar})$ and $\partial E=\{\varphi=1/2\}$, for $|t|$ small enough we obtain \begin{equation*}\begin{split} \int_{\Phi_t(E)\setminus\{\varphi\geq7/8\}}&\I_s[E_{\varphi(x)}](x)\,dx -\int_{E\setminus\{\varphi\geq7/8\}}\I_s[E_{\varphi(x)}](x)\,dx\\ & =\int_{\Phi_t(E)}g(x)\,dx-\int_Eg(x)\,dx. 
\end{split}\end{equation*} We suppose for simplicity that $g\in C^1(\R)$; in general we would need to approximate $g$ with $C^1$ functions and then show that we can pass to the limit.\\ Let $\Omega\subset\R$ be a bounded open set s.t. $E\subset\subset\Omega$. Then we can write \begin{equation*} \Phi_t(x)=x+t\phi(x)+O(t^2),\qquad D_x\Phi_t(x)=\textrm{I}_n+t D\phi(x)+O(t^2), \end{equation*} as $t\to0$, uniformly in $x\in\Omega$. As a consequence it can be shown that \begin{equation*} |\det D_x\Phi_t(x)|=1+t\textrm{ div }\phi(x)+O(t^2), \end{equation*} uniformly in $x\in\Omega$, as $t\to0$.\\ Then changing variables and using the divergence Theorem we find \begin{equation*}\begin{split} \int_{\Phi_t(E)}&g(x)\,dx-\int_Eg(x)\,dx\\ & =\int_E\big(g(x+t\phi(x)+O(t^2))|\det D_x\Phi_t(x)|-g(x)\big)dx\\ & =\int_E\Big( \big(g(x)+t\nabla g(x)\cdot\phi(x)\big)\big(1+t\textrm{ div }\phi(x)\big)-g(x)+O(t^2)\Big)dx\\ & =t\int_E\textrm{div}(g(x)\phi(x))dx+ O(t^2)\\ & =t\int_{\partial E}g(x)\nu_E(x)\cdot\phi(x)\,d\Han(x)+O(t^2). \end{split} \end{equation*} \end{proof} \end{teo} \end{section} \end{chapter} \begin{chapter}{Regularity of Nonlocal Minimal Surfaces} \begin{rmk} Again, in this chapter we suppose that every set satisfies $(\ref{gmt_assumption_eq})$. \end{rmk} \begin{section}{Flatness Improvement} In this section we exploit an improvement of flatness technique, similar to the one used in the classical case (see Chapter 1), in order to show that the boundary of an $s$-minimal set is a $C^{1,\gamma}$ graph in a neighborhood of every point which has an interior tangent ball. The main result is the following Theorem, from which we can easily obtain our $C^{1,\gamma}$ regularity, see Theorem $\ref{flat_reg_teo1}$ below. Roughly speaking, the idea consists in showing that if $\partial E$ is contained in some cylinder, in a neighborhood of $x_0\in\partial E$, then in a smaller neighborhood it is actually contained in a flatter cylinder, up to a change of coordinates. 
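Quantitatively (this is only a heuristic reformulation of the Theorem below): an inclusion of the form \begin{equation*} \partial E\cap B_{2^{-i}}\subset\{|x\cdot\nu_i|\leq2^{-i(\alpha+1)}\} \end{equation*} traps $\partial E$, at scale $2^{-i}$, inside a slab whose height-to-diameter ratio is $2^{-i(\alpha+1)}/2^{-i}=2^{-i\alpha}$. If such inclusions hold at every dyadic scale, these geometrically decaying flatnesses force $\partial E$ to be differentiable at the center point, with normal directions $\nu_i$ converging at a geometric rate, which is what ultimately yields the $C^{1,\gamma}$ regularity.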
\begin{teo}[Improvement of Flatness]\label{imp_flat_teo1} Let $s\in(0,1)$ and fix $\alpha\in(0,s)$. There exists $k_0=k_0(n,s,\alpha) $ s.t. the following result holds.\\ Let $E\subset\R$ be $s$-minimal in $B_1$, with $0\in\partial E$, and assume that \begin{equation*} \partial E\cap B_{2^{-i}}\subset\{|x\cdot\nu_i|\leq 2^{-i(\alpha+1)}\}, \end{equation*} for every $i\in\{0,\dots,k_0\}$, for some $\nu_i\in\mathbb{S}^{n-1}$.\\ Then there exist vectors $\nu_i\in\mathbb{S}^{n-1}$ for every $i>k_0$ s.t. the above inclusion remains valid, i.e. \begin{equation*} \partial E\cap B_{2^{-i}}\subset\{|x\cdot\nu_i|\leq 2^{-i(\alpha+1)}\}, \end{equation*} for every $i\in\mathbb{N}$. \end{teo} If we dilate everything by a factor $2^k$, we get \begin{equation*} \partial (2^kE)\cap B_{2^{k-i}}\subset\{|x\cdot\nu_i|\leq 2^k2^{-i(\alpha+1)}\}=\{|x\cdot\nu_i|\leq 2^{-\alpha k}2^{(k-i)(\alpha+1)}\}. \end{equation*} Also notice that we can start with a set $E$ which is $s$-minimal in $B_2$, rather than only $B_1$, and this guarantees that $2^kE$ is $s$-minimal in $B_{2^{k+2}}$, so if we slightly translate $E$ we still have an $s$-minimal set in $B_{2^{k+1}}$. Let $a_k:=2^{-\alpha k}$. The situation can then be reduced to the following.\\ CLAIM:$\quad$There exists a universal $k_0\in\mathbb{N}$ s.t. if $F\subset\R$ is $s$-minimal in $B_{2^{k+2}}$, with $k\geq k_0$, and \begin{equation*} \partial F\cap B_{2^j}\subset\{|x\cdot\nu'_j|\leq a_k2^{j(\alpha+1)}\},\quad\textrm{for every } j\in\{0,\dots,k\}, \end{equation*} then there exists $\nu'_{-1}$ s.t. \begin{equation*} \partial F\cap B_{1/2}\subset\{|x\cdot\nu_{-1}'|\leq a_k2^{-1-\alpha}\}. 
\end{equation*} Notice that up to a rotation we can always suppose that $\nu'_0=e_n$.\\ We define the flatness of a cylinder to be the ratio between its height and the diameter of the base.\\ Roughly speaking, requiring flatness of $\partial E\cap B_1$ of order $a_k$, but also flatness of order $a_k2^{i\alpha}$ for all dyadic balls $B_{2^i}$ from $B_1$ to $B_{2^k}$, i.e. until flatness becomes of order one, gives flatness of order $a_k2^{-\alpha}$ in $B_{1/2}$. If we manage to prove this, then scaling back and forth we get Theorem $\ref{imp_flat_teo1}$ by induction on $k\geq k_0$. The proof is by contradiction. Suppose that for every $k$ there is a set $E_k\subset\R$ which is $s$-minimal in $B_{2^{k+2}}$, s.t. $0\in\partial E_k$ and \begin{equation*} \partial E_k\cap B_{2^j}\subset\{|x\cdot\nu^k_j|\leq a_k2^{j(\alpha+1)}\},\quad\textrm{for every } j\in\{0,\dots,k\}, \end{equation*} for some $\nu^k_j\in\mathbb{S}^{n-1}$, with $\nu^k_0=e_n$, but s.t. $\partial E_k\cap B_{1/2}$ cannot fit in any cylinder of flatness $a_k2^{-\alpha}$. Then we show that the rescaled sets \begin{equation*} \partial E^*_k:=\Big\{\big(x',\frac{x_n}{a_k}\big)\,\big|\,(x',x_n)\in\partial E_k\Big\} \end{equation*} converge (up to a subsequence) to a plane $P=\{x\cdot\nu=0\}$, uniformly on compact sets, reaching a contradiction. To do this we first show that there is a limiting set $P$, which is the graph of a Hölder function $u$. Then we control the growth of $u$ at infinity and we show that it is a $\frac{s+1}{2}$-harmonic function. This will imply that it is actually linear, concluding the proof.\\ One of the main tools is the following geometric Harnack-type inequality. \begin{lem} Let $s\in(0,1)$ and $\alpha\in(0,s)$.
There exist $k_0\in\mathbb{N}$ and $\delta\in(0,1)$ which only depend on $n,\,s$ and $\alpha$, for which the following result holds.\\ Let $k\geq k_0$ and let $a:=2^{-k\alpha}$.\\ Let $E\subset\R$ be $s$-minimal in $B_{2^{k+2}}$ and assume that \begin{equation} \partial E\cap B_1\subset\{|x_n|\leq a\} \end{equation} and, for every $i\in\{0,\dots,k\}$, \begin{equation} \partial E\cap B_{2^i}\subset\{|x\cdot\nu_i|\leq a2^{i(1+\alpha)}\}, \end{equation} for some $\nu_i\in\mathbb{S}^{n-1}$. Then \begin{equation}\begin{split}\label{harnack_inclusions} &\textrm{either}\quad\partial E\cap B_\delta\subset\{x_n\leq a(1-\delta^2)\},\\ & \textrm{or}\quad\partial E\cap B_\delta\subset\{x_n\geq a(-1+\delta^2)\}. \end{split} \end{equation} \begin{proof} Given $y\in\partial E\cap B_{1/2}$ we have for every $i\in\{0,\dots,k-1\}$ \begin{equation*} \partial E\cap B_{2^i}(y)\subset\partial E\cap B_{2^i+\frac{1}{2}}\subset\partial E\cap B_{2^{i+1}} \subset\big\{|x\cdot\nu_{i+1}|\leq a2^{(i+1)(\alpha+1)}\big\}, \end{equation*} and also \begin{equation*} |y\cdot\nu_{i+1}|\leq a2^{(i+1)(\alpha+1)}. \end{equation*} Thus \begin{equation*} \partial E\cap B_{2^i}(y)\subset\big\{|(x-y)\cdot\nu_{i+1}|\leq 2a2^{(i+1)(\alpha+1)}\big\}.
\end{equation*} This provides some cancellation for the integral of the contribution coming from $\Co B_{1/2}(y)$ to the $s$-fractional mean curvature of $E$ in $y$, yielding \begin{equation*}\begin{split} \big|\I_s^\frac{1}{2}[E](y)\big|&=\Big|\int_{\Co B_\frac{1}{2}(y)}\frac{\chi_E(x)-\chi_{\Co E}(x)}{\kers}dx\Big|\\ & \leq\Big|\int_{B_{2^{k-1}}(y)\setminus B_\frac{1}{2}(y)}\frac{\chi_E(x)-\chi_{\Co E}(x)}{\kers}dx\Big| +\big|\I_s^{2^{k-1}}[E](y)\big|\\ & \leq\sum_{i=0}^{k-1}\Big|\int_{B_{2^i}(y)\setminus B_{2^{i-1}}(y)}\frac{\chi_E(x)-\chi_{\Co E}(x)}{\kers}dx\Big| +\frac{n\omega_n}{s}2^s2^{-ks}\\ & \leq C\Big\{\sum_{i=0}^{k-1}\int_{B_{2^i}(y)\setminus B_{2^{i-1}}(y)}\frac{\chi_{\{|(x-y)\cdot\nu_{i+1}|\leq 2a2^{(i+1)(\alpha+1)}\}}(x)}{\kers}dx+2^{-ks}\Big\}\\ & \leq C\Big\{\sum_{i=0}^{k-1}\int_{2^{i-1}}^{2^i}\frac{a2^{(i+1)(\alpha+1)}r^{n-2}}{r^{n+s}}dr +2^{-ks}\Big\}\\ & \leq C_1 a, \end{split} \end{equation*} since $\alpha<s$, for some $C_1=C_1(n,s,\alpha)>0$. Since by hypothesis $\partial E\cap B_1\subset\{|x_n|\leq a\}$, we can assume that \begin{equation*} \{x_n<-a\}\cap B_1\subset E. \end{equation*} We also assume that $E$ contains more than half of the measure of the cylinder \begin{equation*} D:=\{|x'|\leq\delta\}\times\{|x_n|\leq a\}, \end{equation*} i.e. that \begin{equation}\label{E_cyl_meas_ass} |E\cap D|\geq\frac{1}{2}|D|=\omega_{n-1}\delta^{n-1}a. \end{equation} Then we show that \begin{equation}\label{Har_claim_eq} \{x_n< a(-1+\delta^2)\}\cap B_\delta\subset E, \end{equation} which implies \begin{equation*} \partial E\cap B_\delta\subset\{x_n\geq a(-1+\delta^2)\}. \end{equation*} Suppose that $(\ref{Har_claim_eq})$ doesn't hold. Then there is a portion of $\partial E\cap B_\delta$ trapped in the strip $\{-a\leq x_n\leq (-1+\delta^2)a\}$.\\ Now we slide the plane $x_n=t$ upwards, starting from $t=-a$ until we first touch $\partial E$. 
Let $y\in\partial E \cap B_\delta$ be a contact point; then \begin{equation*} |y'|\leq\delta\quad\textrm{and}\quad-a\leq y_n\leq(-1+\delta^2)a. \end{equation*} Since \begin{equation*} \{x_n<y_n\}\cap B_\delta\subset E, \end{equation*} we can touch $\partial E$ at $y$ with an interior tangent paraboloid of opening $-\frac{a}{2}$.\\ To be more precise, let $P$ be the (interior of the) subgraph of the paraboloid \begin{equation*} x_n=-\frac{a}{2}|x'-y'|^2+y_n. \end{equation*} Then \begin{equation*} P\cap B_\delta\subset\{x_n<y_n\}\cap B_\delta\subset E \end{equation*} and \begin{equation*} (\partial P\cap\partial E)\cap B_\delta=\{y\}. \end{equation*} In particular from Corollary $\ref{Euler_Lag_ball_eq}$ we have \begin{equation}\label{Eu_La_eq_flat} \limsup_{\rho\to0}\I_s^\rho[E](y)\leq0. \end{equation} On the other hand \begin{equation*}\begin{split} P.V.\int_{B_\frac{1}{2}(y)}&\frac{\chi_E(x)-\chi_{\Co E}(x)}{\kers}dx\\ & = P.V.\int_{B_\frac{1}{2}(y)}\frac{\chi_P(x)-\chi_{\Co P}(x)}{\kers}dx +2P.V.\int_{B_\frac{1}{2}(y)}\frac{\chi_{E\setminus P}(x)}{\kers}dx\\ & =:I_1+I_2, \end{split} \end{equation*} and it is readily seen that \begin{equation*} I_1\geq -C_2\,a, \end{equation*} for some $C_2=C_2(n,s)>0$ (see the calculations in Section 4.1 and also Lemma $\ref{explicit_curv_formula}$). Moreover, since $\delta$ and $a$ are very small, we have $D\subset B_{1/2}(y)$. Also, taking $k_0$ big enough, we assume that $a<\delta$.\\ Using $(\ref{E_cyl_meas_ass})$ we can estimate \begin{equation*}\begin{split} |(E\setminus P)\cap D|&=|E\cap D|-|P\cap D| \geq\frac{1}{2}|D|-|B'_\delta\times\{-a\leq x_n\leq(-1+\delta^2)a\}|\\ & =\omega_{n-1}\delta^{n-1}a-\omega_{n-1}\delta^{n-1}\delta^2 a\geq\frac{1}{2}\omega_{n-1}\delta^{n-1}a =\frac{1}{4}|D|, \end{split} \end{equation*} provided that $\delta^2<\frac{1}{2}$.
Now we have \begin{equation*} I_2\geq2\int_D\frac{\chi_{E\setminus P}(x)}{\kers}dx \geq2\int_D\frac{\chi_{E\setminus P}(x)}{(4\delta)^{n+s}}dx \geq C_3\delta^{-1-s}a, \end{equation*} for some $C_3=C_3(n,s)>0$, where we have estimated $|x-y|\leq|x|+|y|<4\delta$, since both $x$ and $y$ belong to $D\subset B_{2\delta}$.\\ Putting the three estimates together we obtain \begin{equation*} \liminf_{\rho\to0}\I_s^\rho[E](y)\geq (-C_1-C_2+C_3\delta^{-1-s})a>0, \end{equation*} once we choose $\delta$ small enough. But this contradicts $(\ref{Eu_La_eq_flat})$, concluding the proof. Notice that if $E$ doesn't satisfy $(\ref{E_cyl_meas_ass})$, then $\Co E$ does, and arguing as above with $\Co E$ in place of $E$ yields \begin{equation*} \partial E\cap B_\delta\subset\{x_n\leq a(1-\delta^2)\}. \end{equation*} \end{proof} \end{lem} In any case this provides flatness of order $a(1-\delta^2/2)/\delta$ for $\partial E\cap B_\delta$.\\ Indeed, suppose e.g. that the second inclusion in $(\ref{harnack_inclusions})$ is satisfied. Then \begin{equation}\label{Harnack_ind_eq0} \partial E\cap B_\delta\subset\{(-1+\delta^2)a\leq x_n\leq a\}, \end{equation} which is a cylinder with base diameter $2\delta$ and height $(2-\delta^2)a$. Now we want to apply Harnack inequality again.\\ Suppose we have $(\ref{Harnack_ind_eq0})$ with $k\gg k_0$ and let $t:=\frac{\delta^2}{2}\,a$. If we translate $E$ downwards by $t$, then \begin{equation*} \partial(E-t\,e_n)\cap B_{\delta/2} \subset\Big\{a\Big(-1+\frac{\delta^2}{2}\Big)\leq x_n\leq a\Big(1-\frac{\delta^2}{2}\Big)\Big\}, \end{equation*} hence if we dilate by a factor $2/\delta$, we get \begin{equation*} \partial\Big(\frac{2}{\delta}(E-te_n)\Big)\cap B_1 \subset\Big\{\frac{-2+\delta^2}{\delta}\,a\leq x_n\leq\frac{2-\delta^2}{\delta}\,a\Big\}. 
\end{equation*} Notice that \begin{equation*} \frac{2-\delta^2}{\delta}\,a> a, \end{equation*} and let \begin{equation}\label{K_har} k':=\max\Big\{j\in\mathbb{N}\,|\,2^{-\alpha j}\geq\frac{2-\delta^2}{\delta}\,a\Big\}, \end{equation} so that \begin{equation*} a':=2^{-\alpha k'}> a\quad\Longrightarrow\quad k'<k, \end{equation*} and \begin{equation*} \partial F\cap B_1\subset\{-a'\leq x_n\leq a'\}, \end{equation*} where $F:=\frac{2}{\delta}(E-t\,e_n)$. Notice that we can take $\delta$ of the form $\delta=2^{-M_0}$.\\ Now for $i\geq1$ \begin{equation*} \partial F\cap B_{2^i}\subset\partial\Big(\frac{2}{\delta}E\Big)\cap B_{2^{i+1}} =2^{M_0+1}\big(\partial E\cap B_{2^{i-M_0}}\big). \end{equation*} If $i\leq M_0$, then $B_{2^{i-M_0}}\subset B_1$ and hence \begin{equation*} \partial F\cap B_{2^i}\subset\{|x_n|\leq 2^{M_0+1}a\}. \end{equation*} Since for every $i\geq1$ \begin{equation}\label{est_har} 2^{M_0+1}a=\frac{2}{\delta}\,a\leq\frac{2-\delta^2}{\delta}\,a\,2^{1+\alpha}\leq \,2^{1+\alpha}a'\leq 2^{i(1+\alpha)}a', \end{equation} we obtain \begin{equation*} \partial F\cap B_{2^i}\subset\{|x\cdot\nu_i'|\leq2^{i(1+\alpha)}a'\},\quad\textrm{for }0\leq i\leq M_0, \end{equation*} with $\nu_i'=e_n$. On the other hand, for $M_0<i\leq k$ we get using $(\ref{est_har})$ \begin{equation*}\begin{split} \partial F\cap B_{2^i}&\subset 2^{M_0+1}\big\{|x\cdot \nu_{i-M_0}|\leq 2^{(i-M_0)(1+\alpha)}a\big\}\\ & \subset\{|x\cdot\nu_i'|\leq 2^{i(1+\alpha)}a'\}, \end{split} \end{equation*} with $\nu'_i=\nu_{i-M_0}$. Notice that these inclusions hold for $0\leq i\leq k$ and hence in particular for $0\leq i\leq k'$. Therefore, if $k'$ as defined in $(\ref{K_har})$ is s.t. $k'\geq k_0$, we can apply Harnack inequality to the set $F$ and get \begin{equation*}\begin{split} &\textrm{either}\quad\partial F\cap B_\delta\subset\{x_n\leq a'(1-\delta^2)\},\\ & \textrm{or}\quad\partial F\cap B_\delta\subset\{x_n\geq a'(-1+\delta^2)\}.
\end{split} \end{equation*} Since \begin{equation*} a'\sim \frac{2-\delta^2}{\delta}\,a, \end{equation*} (actually we can take this as an equality, since Harnack inequality would still hold), scaling and translating back, we get flatness of order $\big(\frac{2}{\delta}\big)^2\big(1-\frac{\delta^2}{2}\big)^2a$ for $\partial E\cap B_{(\delta/2)^2}$. Notice that the flatness increases but the height of the cylinder, and hence the oscillation of $\partial E$ in the $e_n$ direction, decreases. We can repeat the same argument and go on applying Harnack inequality as long as the hypotheses are satisfied, that is until the flatness becomes of the order of $a_0:=2^{-k_0\alpha}$.\\ This gives flatness of order $\big(\frac{2}{\delta}\big)^j\big(1-\frac{\delta^2}{2}\big)^ja$ for $\partial E\cap B_{(\delta/2)^j}$, until \begin{equation}\label{limit_exp_har} j\sim c_0(\delta)\log\frac{a_0}{a},\qquad\textrm{with }c_0(\delta):=\Big(\log\frac{2}{\delta}\Big(1-\frac{\delta^2}{2}\Big)\Big)^{-1}. \end{equation} Notice that if $a\to0$, then $j\to\infty$. Clearly after slightly dilating and translating the set $E$, we can repeat the above analysis and get the same estimate for every $x_0\in\partial E\cap B_{1/2}$, that is, we have flatness of order $c\,\big(\frac{2}{\delta}\big)^j\big(1-\frac{\delta^2}{2}\big)^ja$ for $\partial E\cap B_{(\delta/2)^j}(x_0)$, until $j$ becomes as in $(\ref{limit_exp_har})$. Here $c$ is a small constant appearing as a consequence of the scaling and depends neither on $E$ nor on $a$.\\ Now we want to prove the CLAIM, so we consider our sequence of sets $E_k$ as above. That is, for every $k$ the set $E_k\subset\R$ is $s$-minimal in $B_{2^{k+2}}$, we have $0\in\partial E_k$, and \begin{equation*} \partial E_k\cap B_{2^j}\subset\{|x\cdot\nu^k_j|\leq a_k2^{j(\alpha+1)}\},\quad\textrm{for every } j\in\{0,\dots,k\}, \end{equation*} for some $\nu^k_j\in\mathbb{S}^{n-1}$, with $\nu^k_0=e_n$.
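The count of admissible iterations in $(\ref{limit_exp_har})$ can be checked numerically. The following sketch (the sample values of $\delta$ and $a_0$ are arbitrary, and the constant $c$ is ignored) verifies that the flatness $(2/\delta)^j(1-\delta^2/2)^j a$ reaches $a_0$ after $j\approx c_0(\delta)\log(a_0/a)$ steps, and that $j\to\infty$ as $a\to0$:

```python
import math

# Flatness after j Harnack iterations: (2/delta)^j * (1 - delta^2/2)^j * a.
# It reaches a_0 when j ~ c0(delta) * log(a_0 / a), with
# c0(delta) = 1 / log( (2/delta) * (1 - delta^2/2) ).  Sample values only.
delta, a0 = 0.1, 2.0 ** -3

def iterations_until(a):
    """Largest j with (2/delta)^j (1 - delta^2/2)^j a <= a0."""
    growth = (2 / delta) * (1 - delta ** 2 / 2)
    j = 0
    while growth ** (j + 1) * a <= a0:
        j += 1
    return j

c0 = 1 / math.log((2 / delta) * (1 - delta ** 2 / 2))

for k in (20, 40, 80):
    a = 2.0 ** (-k)
    j = iterations_until(a)
    predicted = c0 * math.log(a0 / a)
    assert abs(j - predicted) <= 1  # matches c0(delta) log(a0/a) up to rounding

# As a -> 0 the admissible number of iterations diverges.
assert iterations_until(2.0 ** -200) > iterations_until(2.0 ** -20)
```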
Moreover $\partial E_k\cap B_{1/2}$ cannot fit in any cylinder of flatness $a_k2^{-\alpha}$. We want to show that the flatness hypotheses on the sets $E_k$ imply that the vectors $\nu_j^k$ cannot oscillate too much and must remain close to $e_n$. Consider a set $E_k$ and fix any index $j$. From the two inclusions \begin{equation*}\begin{split} \partial E_k\cap B_{2^j}\subset\{|x\cdot\nu^k_j|\leq 2^j2^{(j-k)\alpha}\},\\ \partial E_k\cap B_{2^{j+1}}\subset\{|x\cdot\nu^k_{j+1}|\leq 2^{j+1}2^{(j+1-k)\alpha}\}, \end{split} \end{equation*} we deduce that \begin{equation} |\nu_j^k-\nu_{j+1}^k|\leq C\,2^{\alpha(j-k)}, \end{equation} for some constant $C>0$ independent of $k$ and $j$.\\ Therefore we get for every $j\ge1$ \begin{equation}\label{contr_osc_ineq} |\nu_j^k-e_n|\leq|\nu_j^k-\nu_{j-1}^k|+\dots+|\nu_1^k-e_n|\leq C\Big(\sum_{i=0}^{j-1}2^{\alpha i}\Big)2^{-\alpha k}. \end{equation} In particular, for every fixed $j$ we have \begin{equation*} \nu_j^k\xrightarrow{k\to\infty}e_n. \end{equation*} Now we stretch our sets in the $e_n$ direction and consider the sets \begin{equation*} \partial E^*_k:=\Big\{\big(x',\frac{x_n}{a_k}\big)\,\big|\,(x',x_n)\in\partial E_k \Big\}. \end{equation*} \begin{lem} There exists a Holder continuous function $u:\mathbb{R}^{n-1}\longrightarrow\mathbb{R}$ and a sequence $k_i\nearrow\infty$ s.t. if we define \begin{equation*} A_\infty:=\{(x',u(x'))\,|\,x'\in\mathbb{R}^{n-1}\}, \end{equation*} then $\partial E^*_{k_i}\longrightarrow A_\infty$ uniformly on compact sets, in the following sense. For every fixed $K\subset\R$ compact, for any $\epsilon>0$, \begin{equation*} \partial E^*_{k_i}\cap K\subset N_\epsilon(A_\infty)\cap K,\quad\textrm{for }k_i\geq k(\epsilon). \end{equation*} Moreover we have \begin{equation*} u(0)=0,\qquad|u(x')|\leq C(1+|x'|^{1+\alpha}). \end{equation*} \begin{proof} We use a diagonal argument to prove the existence of such a function $u$.
Then we estimate the growth of $u$ at infinity using the flatness estimates of the sets $E_k$. We first consider the sets \begin{equation*} A_k:=\Big\{\big(x',\frac{x_n}{a_k}\big)\,\big|\,(x',x_n)\in\partial E_k\cap B_1 \Big\}, \end{equation*} which are contained in $\{|x_n|\leq1\}$, and show that there exist a Holder function $u$ and a sequence $\{k_i\}$ s.t. $A_{k_i}\cap\{|x'|\leq1/2\}$ lies in $N_\epsilon(A_\infty)\cap\{|x'|\leq1/2\}$ for $k_i$ big enough.\\ Suppose that \begin{equation*} y_0=(y'_0,y_{0n})\in A_k,\quad\textrm{with}\quad|y'_0|\leq1/2. \end{equation*} From the discussion above about Harnack inequality, we know that \begin{equation}\label{comp_oscil_eq} A_k\cap\Big\{|y'-y'_0|<\frac{1}{2}\Big(\frac{\delta}{2}\Big)^j\Big\} \subset\Big\{|y_n-y_{0n}|<2c\Big(1-\frac{\delta^2}{2}\Big)^j\Big\}, \end{equation} for every $j$ s.t. \begin{equation*} j<j_k\sim c_0(\delta)\log\frac{a_0}{a_k}. \end{equation*} For the moment we fix an index $j_0$ and consider $j\leq j_0$. Notice that for $k$ big enough, say $k\geq k(j_0)$, inclusion $(\ref{comp_oscil_eq})$ is satisfied.\\ We show that $A_k\cap\{|x'|\leq1/2\}$ is above the graph of \begin{equation} \Psi_{y_0,k}(y'):=y_{0n}-2c\Big(1-\frac{\delta^2}{2}\Big)^{j_0}-\theta|y'-y'_0|^\beta, \end{equation} where $\theta$ and $\beta>0$ depend only on $\delta$.\\ Let $(y',y_n)\in A_k\cap\{|x'|\leq1/2\}$, so that $|y'-y'_0|\leq1$. Now we distinguish three cases: \begin{equation*}\begin{split} &(i)\quad|y'-y'_0|<\frac{1}{2}\Big(\frac{\delta}{2}\Big)^{j_0},\\ & (ii)\quad \frac{1}{2}\Big(\frac{\delta}{2}\Big)^{j_0}\leq|y'-y'_0|\leq\frac{1}{2},\\ & (iii)\quad\frac{1}{2}<|y'-y'_0|\leq1. \end{split} \end{equation*} In case $(i)$ our claim follows immediately from $(\ref{comp_oscil_eq})$ with $j=j_0$.\\ In case $(ii)$ we argue as follows. Notice that in this case there exists $0\leq j\leq j_0$ s.t.
\begin{equation}\label{eq_osc_proof} \frac{1}{2}\Big(\frac{\delta}{2}\Big)^{j+1}\leq|y'-y'_0|\leq\frac{1}{2}\Big(\frac{\delta}{2}\Big)^j. \end{equation} From $(\ref{comp_oscil_eq})$ we obtain \begin{equation*} 2c\Big(1-\frac{\delta^2}{2}\Big)^j\geq|y_n-y_{0n}|. \end{equation*} By $(\ref{eq_osc_proof})$ and the fact that $0<\delta/2<1$ we find \begin{equation*} j\leq\frac{-\log(2|y'-y'_0|)}{\log\frac{2}{\delta}}\leq j+1, \end{equation*} and hence \begin{equation*}\begin{split} \Big(1-\frac{\delta^2}{2}\Big)^j&\leq \Big(1-\frac{\delta^2}{2}\Big)^{\big(\frac{-\log(2|y'-y'_0|)}{\log\frac{2}{\delta}}-1\big)} =\frac{1}{\big(1-\frac{\delta^2}{2}\big)}e^{\beta \log(2|y'-y'_0|)}\\ & \qquad\qquad=\frac{(2|y'-y'_0|)^\beta}{\big(1-\frac{\delta^2}{2}\big)}, \end{split}\end{equation*} where $\beta:=\frac{-\log(1-\frac{\delta^2}{2})}{\log(\frac{2}{\delta})}$.\\ Therefore \begin{equation*} |y_n-y_{0n}|\leq\frac{2^{\beta+1}c}{\big(1-\frac{\delta^2}{2}\big)}|y'-y'_0|^\beta, \end{equation*} which is the desired result with $\theta:=\frac{2^{\beta+1}c}{(1-\frac{\delta^2}{2})}$.\\ Finally, possibly after enlarging $\theta$, the result holds in case $(iii)$ as well.\\ Indeed in this case $|y'-y'_0|^\beta\geq(1/2)^\beta$ and \begin{equation*} |y_n-y_{0n}|\leq|y_n|+|y_{0n}|\leq2. \end{equation*} So we get the claim provided that $\theta(1/2)^\beta\geq2$. Notice that, as $y_0$ varies, $\Psi_{y_0,k}$ are Holder continuous functions with Holder modulus of continuity bounded via the function $\theta t^\beta$. Therefore, if we set \begin{equation*} \psi_k(y'):=\sup_{y_0\in A_k\cap\{|x'|\leq1/2\}}\Psi_{y_0,k}(y'), \end{equation*} then $\psi_k$ is a Holder continuous function, with Holder modulus of continuity still bounded via the function $\theta t^\beta$, and $A_k\cap\{|x'|\leq1/2\}$ is above the graph of $\psi_k$.
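The dyadic bookkeeping behind the exponent $\beta$ can be checked numerically; the following sketch (with an arbitrary sample value of $\delta$) verifies that $(1-\delta^2/2)^j\leq(2|y'-y'_0|)^\beta/(1-\delta^2/2)$ whenever $(\ref{eq_osc_proof})$ holds:

```python
import math

# Check that (1/2)(delta/2)^(j+1) <= r <= (1/2)(delta/2)^j implies
# (1 - delta^2/2)^j <= (2r)^beta / (1 - delta^2/2), with
# beta = -log(1 - delta^2/2) / log(2/delta).  Sample delta only.
delta = 0.2
q = 1 - delta ** 2 / 2
beta = -math.log(q) / math.log(2 / delta)

for j in range(25):
    lo = 0.5 * (delta / 2) ** (j + 1)
    hi = 0.5 * (delta / 2) ** j
    for t in (0.0, 0.3, 0.7, 1.0):
        r = lo + t * (hi - lo)  # any horizontal distance in the dyadic bracket
        assert q ** j <= (2 * r) ** beta / q + 1e-12
```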
Arguing in the same way, possibly taking $\theta$ and $\beta$ larger, but still depending only on $\delta$, we find that, if we define \begin{equation*} \Phi_{y_0,k}(y'):=y_{0n}+2c\Big(1-\frac{\delta^2}{2}\Big)^{j_0}+\theta|y'-y'_0|^\beta, \end{equation*} then $A_k\cap\{|x'|\leq1/2\}$ is below the graph of $\Phi_{y_0,k}$. Again, we define \begin{equation*} \phi_k(y'):=\inf_{y_0\in A_k\cap\{|x'|\leq1/2\}}\Phi_{y_0,k}(y'), \end{equation*} so that $\phi_k$ is a Holder continuous function, with Holder modulus of continuity bounded via the function $\theta t^\beta$, and $A_k\cap\{|x'|\leq1/2\}$ is below the graph of $\phi_k$. Thus $A_k\cap\{|x'|\leq1/2\}$ lies between the graphs of $\psi_k$ and $\phi_k$ for every $k\geq k(j_0)$ and, by construction, \begin{equation}\label{another_flat_eq} 0\leq\phi_k(y')-\psi_k(y')\leq 4c\Big(1-\frac{\delta^2}{2}\Big)^{j_0}. \end{equation} Also, for $j_0$ fixed, by Ascoli-Arzel\`a Theorem, letting $k\to\infty$, it follows that $\psi_k$ uniformly converges (up to a subsequence) to a Holder function which depends on $j_0$, say $ \psi_k\longrightarrow w_{j_0}^- $.\\ Analogously we find a Holder continuous function $w_{j_0}^+$ s.t. $\phi_k\longrightarrow w_{j_0}^+$ uniformly (up to a subsequence). Moreover we have by construction that $w_{j_0}^-\leq w_{j_0}^+$ and that \begin{equation}\label{nomoreeqplease}\begin{split} A_{k_i}\cap\{|x'|\leq1/2\}\textrm{ lies between}\\ \textrm{the graphs of }w_{j_0}^--\epsilon/2 \textrm{ and }w_{j_0}^++\epsilon/2, \end{split}\end{equation} for $k_i$ large enough. Now we let $j_0\to\infty$. Notice that by the construction of $\theta$ and $\beta$ above, the Holder constants of $w_{j_0}^\pm$ depend on $\delta$ but are independent of $j_0$.\\ Therefore by Ascoli-Arzel\`a Theorem we find that there exists a Holder continuous function $u$ s.t. $w_{j_0}^-$ converges uniformly (up to subsequences) to $u$. By $(\ref{another_flat_eq})$, also $w_{j_0}^+$ uniformly converges to $u$.
From $(\ref{nomoreeqplease})$ we get our claim. Using $(\ref{contr_osc_ineq})$ we can translate the estimate for the flatness of $\partial E_{k_i}\cap B_{2^j}$ from an estimate in direction $\nu^{k_i}_j$ to an estimate in direction $e_n$, for every fixed $j$.\\ In this way we can repeat the above argument in bigger and bigger balls, getting a graph in the $e_n$ direction. To be more precise, consider $x\in\partial E_k\cap B_{2^j}$. Then \begin{equation*}\begin{split} |x\cdot e_n|&\leq|x\cdot\nu_j^k|+2^j|e_n-\nu_j^k|\leq a_k2^{j(\alpha+1)}+2^jC\Big(\sum_{i=0}^{j-1}2^{\alpha i}\Big)a_k\\ & \leq Ca_k2^j2^{j\alpha}+C2^ja_k\sum_{i=0}^{j-1}2^{\alpha i} =Ca_k2^j\sum_{i=0}^j2^{\alpha i}\\ & =Ca_k2^j\frac{2^{\alpha(j+1)}-1}{2^\alpha-1} \leq Ca_k2^{j(\alpha+1)}, \end{split} \end{equation*} which gives \begin{equation}\label{oscillation_uglyeq_stop} \partial E_k\cap B_{2^j}\subset\{|x_n|\leq C a_k2^{j(\alpha+1)}\}. \end{equation} We remark that the constant $C$ is independent of $k$ and $j$. Now we can consider the sets \begin{equation*} A^1_{k_i}:=\Big\{\Big(x',\frac{x_n}{a_{k_i}}\Big)\,\big|\,(x',x_n)\in\partial E_{k_i}\cap B_2\Big\} \end{equation*} and repeat the argument above to obtain, in $\{|x'|\leq1\}$, the convergence (up to a subsequence) to the graph $\{(x',v(x'))\}$ of a Holder function $v$, which must coincide with $u$ on $\{|x'|\le1/2\}$. Proceeding in this way with the sets \begin{equation*} A^j_{k_i}:=\Big\{\Big(x',\frac{x_n}{a_{k_i}}\Big)\,\big|\,(x',x_n)\in\partial E_{k_i}\cap B_{2^j}\Big\}, \end{equation*} we get our claim via a diagonal argument. Clearly $u(0)=0$, so we are left to prove the growth estimate for $u$.\\ From $(\ref{oscillation_uglyeq_stop})$ we know that for every fixed $j$ \begin{equation*} A_{k_i}^j\subset\{|x_n|\leq C2^{j(\alpha+1)}\}, \end{equation*} for every $k_i$.
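The geometric series manipulation in the oscillation estimate above can be verified directly; in the following sketch the values of $\alpha$, $C$ and $a_k$ are arbitrary samples:

```python
# Verify the geometric sum identity and the resulting oscillation bound:
#   sum_{i=0}^{j} 2^(alpha i) = (2^(alpha(j+1)) - 1) / (2^alpha - 1),
# hence a_k 2^(j(alpha+1)) + C 2^j a_k sum_{i=0}^{j-1} 2^(alpha i)
#   <= C' a_k 2^(j(alpha+1)), with C' = 1 + C / (2^alpha - 1).
alpha, C, a_k = 0.5, 3.0, 1e-6
Cp = 1 + C / (2 ** alpha - 1)

for j in range(1, 30):
    partial = sum(2 ** (alpha * i) for i in range(j + 1))
    closed = (2 ** (alpha * (j + 1)) - 1) / (2 ** alpha - 1)
    assert abs(partial - closed) <= 1e-9 * closed  # geometric sum identity

    lhs = a_k * 2 ** (j * (alpha + 1)) \
        + C * 2 ** j * a_k * sum(2 ** (alpha * i) for i in range(j))
    assert lhs <= Cp * a_k * 2 ** (j * (alpha + 1)) * (1 + 1e-12)
```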
Then, since \begin{equation*} A_{k_i}^j\cap B'_{2^{j-1}}\longrightarrow A_\infty\cap B'_{2^{j-1}}=\big\{(x',u(x'))\,|\,|x'|\leq 2^{j-1}\big\} \end{equation*} uniformly, we obtain \begin{equation*} |u(x')|\leq 2^{\alpha+1}C\,2^{j(\alpha+1)}\,\quad\textrm{in }B'_{2^j}\,, \end{equation*} for every $j$. This implies our growth estimate. Indeed, let $x'\in\mathbb{R}^{n-1}$. Then, if $|x'|\le1$, we have $|u(x')|\leq C$ and, if $x'\in B'_{2^{j+1}}\setminus B'_{2^j}$, for some $j$, we have \begin{equation*} \frac{|u(x')|}{1+|x'|^{1+\alpha}}\leq C\frac{2^{(j+1)(\alpha+1)}}{1+2^{j(\alpha+1)}} \leq 2^{\alpha+1}C\sup_{t\in[0,\infty)}\frac{2^{t(\alpha+1)}}{1+2^{t(\alpha+1)}}<\infty. \end{equation*} \end{proof} \end{lem} Now we show that the function $u$ found in the previous Lemma must be linear. \begin{lem} The limit function $u$ satisfies \begin{equation*} (-\Delta)^\frac{s+1}{2}u=0\quad\textrm{in }\mathbb{R}^{n-1}, \end{equation*} in the viscosity sense, and therefore is linear. \begin{proof} Assume $\varphi+|x'|^2$ is a smooth tangent function that touches $u$ from below, say for simplicity at the origin.\\ By construction of $u$ we can find $E$ $s$-minimal and $a>0$ small, s.t. $\partial E$ is included in an $a\epsilon$-neighborhood of $\{(x',au(x'))\}$ for $|x'|\leq R$ and $\partial E$ is touched from below at $x_0$, with $|x'_0|\leq\epsilon$ by a vertical translation of $a\varphi$. From the Euler-Lagrange equation we know that \begin{equation*} \limsup_{\rho\to0}\frac{1}{a}\int_{\Co B_\rho(x_0)}\frac{\chi_E-\chi_{\Co E}}{|x-x_0|^{n+s}}dx\leq 0. \end{equation*} We are going to estimate this integral in terms of the function $u$ by integrating on square cylinders with center $x_0$, i.e. \begin{equation*} D_r:=\{(x',x_n)\,|\,|x'-x'_0|<r,\,|(x-x_0)\cdot e_n|<r\}. \end{equation*} For simplicity we forget about the principal values in the following integrals.
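The conclusion of the Lemma can be illustrated numerically: for a linear function $u(x')=b\cdot x'$ the integrand $(u(x')-u(0))/|x'|^{n+s}$ is odd, so its integral over any symmetric ring $B'_R\setminus B'_\delta$ vanishes. A sketch with midpoint quadrature in dimension $n-1=2$ (all sample values are arbitrary):

```python
# For linear u(x') = b . x', the ring integral
#   int_{delta < |x'| < R} (u(x') - u(0)) / |x'|^(n+s) dx'
# vanishes by odd symmetry.  Midpoint quadrature on a grid symmetric
# about the origin; here n - 1 = 2 (so n = 3) and all values are samples.
n, s = 3, 0.5
delta, R = 0.5, 4.0
b = (1.3, -0.7)

h = 0.02
N = int(R / h)
total = 0.0
for i in range(-N, N):
    for j in range(-N, N):
        x, y = (i + 0.5) * h, (j + 0.5) * h  # cell midpoint
        r2 = x * x + y * y
        if delta ** 2 < r2 < R ** 2:
            u = b[0] * x + b[1] * y
            total += u / r2 ** ((n + s) / 2) * h * h

# Each grid point (x, y) cancels against (-x, -y).
assert abs(total) < 1e-6
```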
We fix $\delta$ small and $R$ large, and we assume $a,\,\epsilon\ll\delta$.\\ Since $E$ contains the subgraph $P$ of a translation of $a\varphi$, using Lemma $\ref{explicit_curv_formula}$ we have \begin{equation*} \frac{1}{a}\int_{D_\delta}\frac{\chi_E-\chi_{\Co E}}{|x-x_0|^{n+s}}dx \geq\frac{1}{a}\int_{D_\delta}\frac{\chi_P-\chi_{\Co P}}{|x-x_0|^{n+s}}dx \geq -C(\varphi)\delta^{1-s}. \end{equation*} Using the flatness hypothesis of $E$ in the balls $B_{2^j}$, we know that \begin{equation*} \partial E\cap B_{2R}(x_0)\subset\{|(x-x_0)\cdot e_n|\leq C(R)a\}, \end{equation*} for some $C(R)>0$, and hence we get by symmetry \begin{equation*} \frac{1}{a}\int_{D_R\setminus D_\delta}\frac{\chi_E-\chi_{\Co E}}{|x-x_0|^{n+s}}dx = \frac{1}{a}\int_{A}\frac{\chi_E-\chi_{\Co E}}{|x-x_0|^{n+s}}dx, \end{equation*} where \begin{equation*} A:=(D_R\setminus D_\delta)\cap\{|(x-x_0)\cdot e_n|\leq C(R)a\}. \end{equation*} Taking $a$ small enough, we can assume that $C(R)a<\delta/2$. Also notice that $A\subset \Co B_\delta(x_0)$. Then for every $x\in A$ we get \begin{equation}\label{uglyeq1} |x'-x'_0|^2\geq\delta^2-|(x-x_0)\cdot e_n|^2\geq\delta^2-C(R)^2a^2\geq\frac{3}{4}\delta^2. \end{equation} Now consider the function \begin{equation*} F(t):=\big(|x'-x'_0|^2+t|(x-x_0)\cdot e_n|^2\big)^{-\frac{n+s}{2}}, \end{equation*} so that \begin{equation*} \Big|\frac{1}{|x-x_0|^{n+s}}-\frac{1}{|x'-x'_0|^{n+s}}\Big|=|F(1)-F(0)|=\Big|\int_0^1F'(t)\,dt\Big|. \end{equation*} We have \begin{equation*} F'(t)=-\frac{n+s}{2}\big(|x'-x'_0|^2+t|(x-x_0)\cdot e_n|^2\big)^{-\frac{n+s+2}{2}}|(x-x_0)\cdot e_n|^2, \end{equation*} and hence we find \begin{equation*}\begin{split} \Big|\int_0^1F'(t)\,dt\Big|&\leq\frac{n+s}{2}C(R)^2a^2\int_0^1 \frac{dt}{(|x'-x'_0|^2+t|(x-x_0)\cdot e_n|^2)^\frac{n+s+2}{2}}\\ & \leq\frac{n+s}{2}C(R)^2a^2\int_0^1 \frac{dt}{|x'-x'_0|^{n+s+2}}\leq C(R,\delta)a^2, \end{split} \end{equation*} where we used $(\ref{uglyeq1})$ to obtain the last inequality. 
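The kernel-replacement step above can be checked numerically; the following sketch (with arbitrary sample values) verifies the bound produced by the function $F(t)$:

```python
# Check the kernel-replacement bound from the F(t) computation:
#   | (rho^2 + d^2)^(-p/2) - rho^(-p) | <= (p/2) d^2 rho^(-(p+2)),
# where p = n + s, rho = |x' - x_0'| and d = |(x - x_0) . e_n|.
# Sample values of n, s, rho, d only.
n, s = 3, 0.5
p = n + s

for rho in (0.6, 1.0, 2.5):      # horizontal distance, of size >= (sqrt(3)/2) delta
    for d in (1e-3, 1e-2, 0.1):  # vertical offset, of size C(R) a
        full = (rho ** 2 + d ** 2) ** (-p / 2)
        horiz = rho ** (-p)
        bound = (p / 2) * d ** 2 * rho ** (-(p + 2))
        assert abs(full - horiz) <= bound + 1e-15
```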
Then, using this and the fact that $\partial E$ is included in an $a\epsilon$-neighborhood of $\{(x',au(x'))\}$ for $|x'|\leq R$, we find \begin{equation*}\begin{split} \frac{1}{a}\int_{A}\frac{\chi_E-\chi_{\Co E}}{|x-x_0|^{n+s}}dx& = \frac{1}{a}\int_{B'_R\setminus B'_\delta}\frac{2a(u(x')-u(x'_0)+O(\epsilon))}{|x'-x'_0|^{n+s}}dx'+O(a^2)\\ & =2\int_{B'_R\setminus B'_\delta}\frac{u(x')-u(x'_0)}{|x'-x'_0|^{n+s}}dx'+O(\epsilon)+O(a^2). \end{split} \end{equation*} We are left to estimate the contribution coming from $\Co D_R$.\\ Let $a=2^{-k\alpha}$. We argue as we did in the beginning of the proof of Harnack inequality to estimate $\I_s^{1/2}[E](y)$, exploiting our flatness hypothesis for $\partial E\cap B_{2^i}$ for $0\leq i\leq k$. We have \begin{equation*}\begin{split} \frac{1}{a}\int_{\Co D_R}\frac{\chi_E-\chi_{\Co E}}{|x-x_0|^{n+s}}dx&\leq\frac{1}{a}C\Big(\int_{R/2}^{2^k} \frac{ar^{\alpha+1}r^{n-2}}{r^{n+s}}dr+\int_{2^k}^\infty\frac{r^{n-1}}{r^{n+s}}dr\Big)\\ & \leq\frac{1}{a}C\Big(a\int_{R/2}^\infty \frac{d}{dr}r^{\alpha-s}dr+2^{-ks}\Big)\\ & \leq\frac{1}{a}C\big(a\,R^{\alpha-s}+a^{1+\eta}\big)\leq C(R^{\alpha-s}+a^\eta), \end{split}\end{equation*} where $\eta=\frac{s-\alpha}{\alpha}$. Putting these estimates together and letting $\epsilon,\,a\to0$, we obtain from the Euler-Lagrange equation for $E$ \begin{equation*} \int_{B'_R\setminus B'_\delta}\frac{u(x')-u(0)}{|x'|^{n+s}}dx'\leq C(\delta^{1-s}+R^{\alpha-s}). \end{equation*} Letting $\delta\to0$ and $R\to\infty$ shows that $u$ is a viscosity solution at $0$ (we can repeat the same argument if $u$ is touched from above). Then Theorem $\ref{liouville_frac}$ guarantees that $u$ is linear, concluding the proof. \end{proof} \end{lem} This gives a contradiction, proving our CLAIM and hence Theorem $\ref{imp_flat_teo1}$. From Theorem $\ref{imp_flat_teo1}$ we can deduce \begin{teo}[Regularity]\label{flat_reg_teo1} Let $\alpha\in(0,s)$. There exists $\epsilon_0=\epsilon_0(n,s,\alpha)>0$ s.t.
if $E$ is $s$-minimal in $B_1$, with $0\in\partial E$ and \begin{equation*} \partial E\cap B_1\subset\{|x_n|\leq\epsilon_0\}, \end{equation*} then $\partial E\cap B_{1/2}$ is a $C^{1,\alpha}$ surface. \begin{proof} Let $k_0$ be from Theorem $\ref{imp_flat_teo1}$. If $\epsilon_0<2^{-k_0(\alpha+1)}$, then $E$ satisfies the hypotheses of the Theorem, with $\nu_i=e_n$ for every $i\in\{0,\dots,k_0\}$, and hence there exist $\nu_i\in\mathbb{S}^{n-1}$ for every $i$, s.t. \begin{equation*} \partial E\cap B_{2^{-i}}\subset\{|x\cdot\nu_i|\leq2^{-i(\alpha+1)}\}. \end{equation*} This implies \begin{equation*} |\nu_i-\nu_{i+1}|\leq C2^{-i\alpha}, \end{equation*} with $C>0$ independent of $i$, and hence \begin{equation*} \nu_i\longrightarrow\nu(0), \end{equation*} for some $\nu(0)\in\mathbb{S}^{n-1}$. Moreover we easily get by induction \begin{equation*} |\nu_i-\nu(0)|\leq 2C\,2^{-i\alpha}. \end{equation*} Thus, if $x\in\partial E\cap B_{2^{-i}}$, \begin{equation*} |x\cdot\nu(0)|\leq|x\cdot\nu_i|+|x|\,|\nu_i-\nu(0)|\leq C2^{-i(\alpha+1)}, \end{equation*} and hence \begin{equation*} \partial E\cap B_{2^{-i}}\subset\big\{|x\cdot\nu(0)|\leq C2^{-i(\alpha+1)}\big\}, \end{equation*} for every $i$. This implies that $\partial E$ is a differentiable surface at $0$, with normal $\nu(0)$.\\ If we take $\epsilon_0$ smaller, say $\epsilon_0<\frac{1}{4}2^{-k_0(1+\alpha)}$, then, after translating $E$, we can repeat the same argument at every point $x_0\in\partial E\cap B_{1/2}$ and get that $\partial E\cap B_{1/2}$ is actually a $C^{1,\alpha}$ surface. \end{proof} \end{teo} Now we show that if an $s$-minimal set $E$ has an interior tangent ball at some point, say $B_r(-re_n)$ at $0\in\partial E$, with $r$ large, then $\partial E\cap B_1$ must lie below $\{x_n=1/2\}$. \begin{lem} There exists $R_0=R_0(n,s)>0$ s.t. the following result holds. Let $E\subset\R$ be $s$-minimal in $B_2$, with $0\in\partial E$.
If $B_R(-R\,e_n)\subset E$ for some $R\geq R_0$, then \begin{equation*} \partial E\cap B_1\subset\{x_n\leq1/2\}. \end{equation*} \begin{proof} Suppose the claim is false. Then there is a point $y\in\partial E\cap B_1$ with $y_n>1/2$. Since $B_{1/4}(y)\subset B_2$, the clean ball condition guarantees the existence of a ball \begin{equation*} B_{\frac{1}{4}c}(p)\subset E\cap B_\frac{1}{4}(y). \end{equation*} Moreover, since $\partial E$ has an interior tangent ball at 0, we have \begin{equation*} \limsup_{\delta\to0}\I_s^\delta[E](0)\leq0. \end{equation*} Let $D:=B_R(-R\,e_n)\cup B_R(R\,e_n)$ and let $K$ be the convex envelope of $D$; notice that $B_R\subset K$. We can split \begin{equation*}\begin{split} P.V.\int_{\R}&\frac{\chi_E(z)-\chi_{\Co E}(z)}{|z|^{n+s}}dz=\int_{\Co K}\frac{\chi_E(z)-\chi_{\Co E}(z)}{|z|^{n+s}}dz\\ &+P.V.\int_{K\setminus D}\frac{\chi_E(z)-\chi_{\Co E}(z)}{|z|^{n+s}}dz +P.V.\int_D\frac{\chi_E(z)-\chi_{\Co E}(z)}{|z|^{n+s}}dz\\ &=:I_1+I_2+I_3. \end{split} \end{equation*} We can bound \begin{equation*} |I_1|\leq\int_{\Co B_R}\frac{1}{|z|^{n+s}}dz\leq C_1(n,s)R^{-s}. \end{equation*} As for $I_2$, we have \begin{equation*}\begin{split} |I_2|&\leq P.V.\int_{K\setminus D}\frac{1}{|z|^{n+s}}dz\\ & =P.V.\int_{(K\setminus D)\setminus B_{R/2}}\frac{1}{|z|^{n+s}}dz +P.V.\int_{(K\setminus D)\cap B_{R/2}}\frac{1}{|z|^{n+s}}dz\\ & \leq C_1(n,s)\Big(\frac{R}{2}\Big)^{-s}+P.V.\int_{(K\setminus D)\cap B_{R/2}}\frac{1}{|z|^{n+s}}dz.
\end{split} \end{equation*} If we let $F(h):=R-(R^2-h^2)^{1/2}$, then \begin{equation*} F'(0)=0\quad\textrm{and}\quad F''(h)=\frac{R^2}{(R^2-h^2)^{3/2}}\leq\Big(\frac{4}{3}\Big)^\frac{3}{2}\frac{1}{R}, \end{equation*} for every $h\in[0,R/2]$, and hence, arguing as in Lemma $\ref{explicit_curv_formula}$, we get \begin{equation*}\begin{split} P.V.\int_{(K\setminus D)\cap B_{R/2}}&\frac{1}{|z|^{n+s}}dz =2\int_{\mathbb{S}^{n-2}}d\mathcal{H}^{n-2}\int_0^{R/2}\Big(\int_0^\frac{F(\rho)}{\rho}(1+t^2)^{-\frac{n+s}{2}}\,dt\Big) \frac{d\rho}{\rho^{s+1}}\\ & \leq C\frac{1}{R}\int_0^{R/2}\frac{d}{d\rho}\rho^{1-s}\,d\rho\leq CR^{-s}. \end{split} \end{equation*} Therefore \begin{equation*} |I_2|\leq C_2(n,s)R^{-s}. \end{equation*} We are left to estimate $I_3$. Notice that $D$ is symmetric with respect to $\{x_n=0\}$ and $B_R(-R\,e_n)\subset E$ by hypothesis. Moreover the small ball $B_{c/4}(p)$ is contained in $E$ and $B_{c/4}(p)\subset B_R(R\,e_n)$.\\ Roughly speaking, the contribution coming from any point $z\in\Co E\cap D$ is canceled by that of the point $-z\in E\cap B_R(-R\, e_n)$ and we are left with (at least) the contribution coming from $B_{c/4}(p)$, which is positive. That is \begin{equation*} I_3=P.V.\int_D\frac{\chi_E(z)-\chi_{\Co E}(z)}{|z|^{n+s}}dz \geq \int_{B_{\frac{1}{4}c}(p)}\frac{1}{|z|^{n+s}}dz. \end{equation*} Now notice that for every $z\in B_{c/4}(p)$ we have \begin{equation*} |z|\leq |z-p|+|p-y|+|y|\leq \frac{1}{4}c+\frac{1}{4}+1\leq2, \end{equation*} and hence we obtain \begin{equation*} I_3\geq \frac{1}{2^{n+s}}\,\omega_n\frac{1}{4^n}c^n=:C_3(n,s). \end{equation*} We remark that this last estimate depends neither on $R$, nor on the set $E$, nor on the point $y\in\partial E\cap B_1$ lying in the strip $\{1/2<y_n\leq1\}$. Therefore \begin{equation*} 0\geq\limsup_{\delta\to0}\I_s^\delta[E](0)\geq\liminf_{\delta\to0}\I_s^\delta[E](0) \geq C_3-\big(C_1+C_2\big)R^{-s}>0, \end{equation*} provided $R$ is big enough, giving a contradiction.
\end{proof} \end{lem} \begin{rmk} For any fixed $\beta\in(0,1/2]$, we can repeat the same argument to show that $\partial E\cap B_1$ lies below the plane $\{x_n=\beta\}$, provided that $E$ has an interior tangent ball at $0$, with radius $R\geq R(n,s,\beta)$.\\ In this case the constant $C_3$ appearing in the proof becomes \begin{equation*} C_3=C_3(n,s,\beta)\sim\beta^n, \end{equation*} and hence \begin{equation*} \lim_{\beta\to0}R(n,s,\beta)=\infty. \end{equation*} \end{rmk} As a consequence we see that if $E$ has an interior tangent ball at some point $x_0$ then we can control the flatness of $\partial E$ in a small enough neighborhood of $x_0$. Therefore, using this Lemma we can prove the following consequence of the Regularity Theorem. \begin{coroll}\label{int_tang_flat_reg} Let $E\subset\R$ be $s$-minimal in $\Omega$ and let $x_0\in\partial E\cap\Omega$.\\ If $E$ has an interior tangent ball $B_r(p)\subset E$ at $x_0$, then $\partial E$ is a $C^{1,\alpha}$ surface in a neighborhood of $x_0$. \begin{proof} After a translation and a rotation, we can suppose $x_0=0$ and \begin{equation*} B_r(-re_n)\subset E. \end{equation*} If we dilate everything by a factor $\lambda':=\lambda/r>0$ we obtain \begin{equation*} B_\lambda(-\lambda e_n)\subset\lambda' E. \end{equation*} Clearly $0\in\partial(\lambda' E)$ and, taking $\lambda$ big enough, the set $\lambda' E$ is $s$-minimal in $B_2\subset\lambda'\Omega$. Using the previous Lemma and the Remark above, we know that if $\lambda\geq R(n,s,\epsilon_0/2)$, then $\partial(\lambda' E)\cap B_1\subset\{x_n\leq\epsilon_0/2\}$. Moreover $\partial(\lambda' E)\cap B_1$ lies above the ball $B_\lambda(-\lambda e_n)$. Thus, possibly taking a larger $\lambda$, we also have $\partial(\lambda' E)\cap B_1\subset\{x_n\geq-\epsilon_0/2\}$.\\ Now the set $\lambda'E$ satisfies the hypotheses of the Regularity Theorem and hence $\partial (\lambda' E)\cap B_{1/2}$ is a $C^{1,\alpha}$ surface. Scaling back concludes the proof.
\end{proof} \end{coroll} Considering $\Co E$ in place of $E$ we see that the same holds if we have an exterior tangent ball. \end{section} \begin{section}{Monotonicity Formula} In this section we prove a monotonicity formula for a quantity related to the fractional perimeter of an $s$-minimal set.\\ This formula can be seen as an extension to the fractional framework of the classical monotonicity formula which holds true for minimal surfaces (see the end of Chapter 1). However in order to define this quantity we need to consider an appropriate extension function in one extra variable. \begin{subsection}{Intermezzo About the Fractional Laplacian: The Extension Problem} For all the details about the extension problem we refer to \cite{extension}. See also \cite{Sire}. For every $s\in(0,1)$ we define the weighted $L^1$-space \begin{equation*} L_\frac{s}{2}:=\left\{u:\R\longrightarrow\mathbb{R}\,\Big|\,\int_{\R}\frac{|u(y)|}{(1+|y|^2)^\frac{n+s}{2}}\,dy<\infty\right\}. \end{equation*} For a function $u\in L_{s/2}$ we consider the extension $\tilde{u}:\R\times[0,\infty)\longrightarrow\mathbb{R}$, which solves \begin{equation}\label{extension} \left\{\begin{array}{cc} \textrm{div}(z^{1-s}\nabla\tilde{u})=0&\textrm{in }\mathbb{R}^{n+1}_+,\\ \tilde{u}=u&\textrm{on }\{z=0\}, \end{array}\right. \end{equation} where \begin{equation*} \mathbb{R}^{n+1}_+=\{(x,z)\in\mathbb{R}^{n+1}\,|\,x\in\R,\,z>0\}. \end{equation*} \begin{rmk} Actually such an extension can be defined in the same way for every $s\in(0,2)$ and the case $s=1$ is well known, but we are interested only in the range $s\in(0,1)$. \end{rmk} Let $a:=1-s$. We use capital letters, like $X$, to denote points in $\mathbb{R}^{n+1}$. 
\begin{rmk} It is clear that the first equation in $(\ref{extension})$ is the Euler-Lagrange equation for the functional \begin{equation}\label{extension_energy} \int_{\{z>0\}}|\nabla\tilde{u}|^2z^a\,dX, \end{equation} and it can be rewritten as \begin{equation*} \Delta_x\tilde{u}+\frac{a}{z}\tilde{u}_z+\tilde{u}_{zz}=0, \end{equation*} where $\Delta_x$ denotes the Laplacian in the first $n$ variables and the subscript $z$ denotes differentiation with respect to the last variable. \end{rmk} The solution $\tilde{u}$ to $(\ref{extension})$ can be explicitly computed via the Poisson formula \begin{equation*} \tilde{u}(\cdot,z)=P(\cdot,z)\ast u,\qquad\textrm{i.e.}\quad\tilde{u}(x,z)=\int_{\R}P(x-\xi,z)u(\xi)\,d\xi, \end{equation*} where the Poisson kernel $P$ is \begin{equation*} P(x,z):=c_1(n,a)\frac{z^{1-a}}{\left(|x|^2+z^2\right)^\frac{n+1-a}{2}}. \end{equation*} If we define \begin{equation*} H(x):=c_1\frac{1}{(1+|x|^2)^\frac{n+s}{2}}, \end{equation*} then we have \begin{equation*} P(x,1)=H(x)\qquad\textrm{and}\quad P(x,z)=\frac{1}{z^n}H\Big(\frac{x}{z}\Big). \end{equation*} In particular \begin{equation*} \int_{\R}P(x,z)\,dx=\frac{1}{z^n}\int_{\R}H\Big(\frac{x}{z}\Big)\,dx=\int_{\R}H(\xi)\,d\xi, \end{equation*} for every $z>0$. Then the constant $c_1$ is chosen in such a way that \begin{equation*} \int_{\R}H(x)\,dx=1. \end{equation*} The extension $\tilde{u}$ is related to the $\frac{s}{2}$-fractional Laplacian of $u$ via the formula \begin{equation}\label{frac_ext_trace} \lim_{z\to0}-z^a\tilde{u}_z(\cdot,z)=c_2(n,a)(-\Delta)^\frac{s}{2}u, \end{equation} which holds in the distributional sense, i.e. $\tilde{u}$ is a weak solution of the Neumann problem \begin{equation} \left\{\begin{array}{cc} \textrm{div}(z^a\nabla\tilde{u})=0&\textrm{in }\mathbb{R}^{n+1}_+,\\ -z^a\frac{\partial\tilde{u}}{\partial z}=c_2(n,a)(-\Delta)^\frac{s}{2}u&\textrm{on }\partial\mathbb{R}^{n+1}_+. \end{array}\right.
\end{equation} Moreover it can be shown that \begin{equation*} \int_{\mathbb{R}^{n+1}_+}|\nabla\tilde{u}|^2z^a\,dX=c_3(n,a)[u]_{H^\frac{s}{2}(\R)}^2, \end{equation*} for every $u\in H^\frac{s}{2}(\R)$ with compact support.\\ Now we consider the local contribution of the $H^\frac{s}{2}$-seminorm of $u$ in the ball $B_r$, i.e. \begin{equation*} \J_r(u):=\int_{B_r}\int_{B_r}\frac{|u(x)-u(y)|^2}{\kers}\,dx\,dy+2\int_{B_r}\int_{\Co B_r}\frac{|u(x)-u(y)|^2}{\kers}\,dx\,dy. \end{equation*} Notice that the first term is just $[u]_{H^\frac{s}{2}(B_r)}^2$ and, if $u\in H^\frac{s}{2}(\R)$, \begin{equation*} [u]_{H^\frac{s}{2}(\R)}^2=\J_r(u)+[u]_{H^\frac{s}{2}(\Co B_r)}^2. \end{equation*} In particular, if $u,\,v\in H^\frac{s}{2}(\R)$ and $u=v$ outside $B_r$, then \begin{equation*} [u]_{H^\frac{s}{2}(\R)}^2-[v]_{H^\frac{s}{2}(\R)}^2= \J_r(u)-\J_r(v). \end{equation*} \begin{rmk} Notice that the functional $\J_r$ is simply the extension to generic functions of the fractional perimeter in the ball $B_r$, i.e. \begin{equation} \J_{B_r}(E)=P_s(E,B_r)=\frac{1}{2}\J_r(\chi_E). \end{equation} In particular, if $u=\chi_E-\chi_{\Co E}$, then \begin{equation*} \J_r(u)=8P_s(E,B_r). \end{equation*} \end{rmk} Now we prove some estimates which relate our functional $\J_1(u)$ with the energy $(\ref{extension_energy})$ of the extension $\tilde{u}$. \begin{prop} Let $\Omega\subset\mathbb{R}^{n+1}$ be a bounded open set with Lipschitz boundary and denote \begin{equation*} \Omega_0:=\Omega\cap\{z=0\}\subset\R,\qquad\Omega_+:=\Omega\cap\{z>0\}. \end{equation*} (a) If $\Omega_0\subset\subset B_1$ then \begin{equation}\label{local_energy1} \int_{\Omega_+}|\nabla\tilde{u}|^2z^a\,dX\leq C\J_1(u), \end{equation} with $C$ depending on $\Omega$. (b) If $B_1\subset\subset\Omega_0$ and $u$ is bounded in $\R$ then \begin{equation*} \J_1(u)\leq C\left(1+\int_{\Omega_+}|\nabla\tilde{u}|^2z^a\,dX\right), \end{equation*} with $C$ depending on $\Omega$ and $\|u\|_{L^\infty(\R)}$. 
\begin{proof} (a) We can assume without loss of generality that $\int_{B_1}u=0$. Then \begin{equation*} 2\int_{B_1}\int_{\R}\frac{u(x)u(y)}{(1+|x|^2)^\frac{n+s}{2}}\,dx\,dy= 2\left(\int_{B_1}u(y)\,dy\right)\left(\int_{\R}\frac{u(x)}{(1+|x|^2)^\frac{n+s}{2}}\,dx\right)=0, \end{equation*} and hence \begin{equation*}\begin{split} \int_{\R}\frac{|u(x)|^2}{(1+|x|^2)^\frac{n+s}{2}}\,dx& =\frac{1}{|B_1|}\int_{B_1}\int_{\R}\frac{|u(x)|^2}{(1+|x|^2)^\frac{n+s}{2}}\,dx\,dy\\ & \leq\frac{1}{|B_1|}\int_{B_1}\int_{\R}\frac{|u(x)-u(y)|^2}{(1+|x|^2)^\frac{n+s}{2}}\,dx\,dy. \end{split}\end{equation*} For every $y\in B_1$ we have $|x-y|\leq1+|x|$ and hence \begin{equation*} |x-y|^{n+s}\leq(1+|x|)^{n+s}\leq C(1+|x|^2)^\frac{n+s}{2}. \end{equation*} Therefore \begin{equation*}\begin{split} \frac{1}{|B_1|}\int_{B_1}\int_{\R}\frac{|u(x)-u(y)|^2}{(1+|x|^2)^\frac{n+s}{2}}\,dx\,dy& \leq C\int_{B_1}\int_{\R}\frac{|u(x)-u(y)|^2}{\kers}\,dx\,dy\\ & \leq C\J_1(u), \end{split} \end{equation*} and \begin{equation*} \int_{\R}\frac{|u(x)|^2}{(1+|x|^2)^\frac{n+s}{2}}\,dx\leq C\J_1(u). \end{equation*} Thus, by Holder inequality \begin{equation*}\begin{split} \int_{\R}\frac{|u(x)|}{(1+|x|^2)^\frac{n+s}{2}}\,dx& =\int_{\R}\frac{|u(x)|}{(1+|x|^2)^\frac{n+s}{4}}\,\frac{1}{(1+|x|^2)^\frac{n+s}{4}}\,dx\\ & \leq\left(\int_{\R}\frac{|u(x)|^2}{(1+|x|^2)^\frac{n+s}{2}}\,dx\right)^\frac{1}{2} \left(\int_{\R}\frac{1}{(1+|x|^2)^\frac{n+s}{2}}\,dx\right)^\frac{1}{2}\\ & \leq C\J_1(u)^\frac{1}{2}. \end{split} \end{equation*} Now let $\varphi\in C_c^\infty(\R)$ be a smooth cutoff function s.t. $\varphi=1$ in $N_b(\Omega_0)$, the $b$-neighborhood of $\Omega_0$, for some $b\in(0,1)$ small enough to have $N_b(\Omega_0)\subset\subset B_1$, and $\supp\varphi\subset B_1$. We write \begin{equation*} u=\varphi u+(1-\varphi)u=:u_1+u_2. \end{equation*} Clearly $\tilde{u}=\tilde{u}_1+\tilde{u}_2$.
Since $u_1$ is compactly supported in $B_1$, we have \begin{equation*} \int_{\mathbb{R}_+^{n+1}}|\nabla\tilde{u}_1|^2z^a\,dX=C[u_1]_{H^\frac{s}{2}}^2=C\J_1(u_1)\leq C\J_1(u). \end{equation*} On the other hand, it can be shown that for every $(x,z)\in\Omega_+$ \begin{equation}\label{punctual_gradient_ext_estimate} z^a|\nabla\tilde{u}_2(x,z)|\leq C\int_{\R}\frac{|u_2(y)|}{(1+|y|^2)^\frac{n+s}{2}}\,dy\leq C\J_1(u)^\frac{1}{2}, \end{equation} and hence \begin{equation*} \int_{\Omega_+}|\nabla\tilde{u}_2|^2z^a\,dX =\int_{\Omega_+}|\nabla\tilde{u}_2|^2z^{2a}\,\frac{1}{z^a}\,dX \leq C\J_1(u)\int_{\Omega_+}\frac{1}{z^a}\,dX. \end{equation*} Since $\Omega_+$ is bounded, it is contained in the cylinder $C_R$, \begin{equation*} \Omega_+\subset C_R:=\{(x,z)\in\mathbb{R}^{n+1}\,|\,0\leq z<R,\,|x|<R\}, \end{equation*} for some $R=R(\Omega)$ big enough. Thus \begin{equation*} \int_{\Omega_+}\frac{1}{z^a}\,dX\leq\int_{C_R}\frac{1}{z^a}\,dX =\frac{1}{1-a}\int_{B_R}dx\int_0^R\frac{d}{dz}z^{1-a}\,dz =\frac{\omega_n}{1-a}R^{n+1-a}, \end{equation*} and \begin{equation*} \int_{\Omega_+}|\nabla\tilde{u}_2|^2z^a\,dX\leq C\J_1(u). \end{equation*} Therefore \begin{equation*}\begin{split} \int_{\Omega_+}|\nabla\tilde{u}|^2z^a\,dX&=\int_{\Omega_+}|\nabla\tilde{u}_1+\nabla\tilde{u}_2|^2z^a\,dX\\ & \leq\int_{\Omega_+}|\nabla\tilde{u}_1|^2z^a\,dX+\int_{\Omega_+}|\nabla\tilde{u}_2|^2z^a\,dX\\ & \qquad\qquad\qquad +2\int_{\Omega_+}|\nabla\tilde{u}_1\cdot\nabla\tilde{u}_2|z^a\,dX\\ & \leq2C\J_1(u)+2\int_{\Omega_+}|\nabla\tilde{u}_1|z^\frac{a}{2}\,|\nabla\tilde{u}_2|z^\frac{a}{2}\,dX\\ & \leq2C\J_1(u)+2\left(\int_{\Omega_+}|\nabla\tilde{u}_1|^2z^a\,dX\right)^\frac{1}{2}\left(\int_{\Omega_+}|\nabla\tilde{u}_2|^2z^a\,dX\right)^\frac{1}{2}\\ & \leq C\J_1(u). 
\end{split}\end{equation*} (b) Since $u$ is bounded, we have \begin{equation*}\begin{split} \int_{B_1}\int_{\Co B_1}\frac{|u(x)-u(y)|^2}{\kers}\,dx\,dy&\leq4\|u\|_{L^\infty(\R)}^2\int_{B_1}\int_{\Co B_1}\frac{1}{\kers}\,dx\,dy\\ & =4P_s(B_1)\|u\|^2_{L^\infty(\R)}. \end{split} \end{equation*} Let $\psi\in C_c^\infty(\mathbb{R}^{n+1})$ be a smooth cutoff function s.t. $\psi=1$ in $B_1$ and $\supp\psi\subset\Omega$, and define \begin{equation*} v(x):=\psi(x,0)u(x). \end{equation*} Notice that, since $u=v$ in $B_1$, \begin{equation*} \int_{B_1}\int_{B_1}\frac{|u(x)-u(y)|^2}{\kers}\,dx\,dy=\int_{B_1}\int_{B_1}\frac{|v(x)-v(y)|^2}{\kers}\,dx\,dy\leq\J_1(v). \end{equation*} Moreover, since $\supp v\subset\Omega_0$ is compact, \begin{equation*} \int_{\mathbb{R}^{n+1}_+}|\nabla\tilde{v}|^2z^a\,dX=C[v]_{H^\frac{s}{2}(\R)}^2\geq C\J_1(v). \end{equation*} Since the function $\tilde{v}$ minimizes \begin{equation*} \int_{\mathbb{R}^{n+1}_+}|\nabla\tilde{v}|^2z^a\,dX=\inf\left\{\int_{\mathbb{R}^{n+1}_+}|\nabla w|^2z^a\,dX\,\Big|\, w(\cdot,0)=v\right\} \end{equation*} and $\psi(x,0)\tilde{u}(x,0)=\psi(x,0)u(x)=v(x)$, we have \begin{equation*} \int_{\mathbb{R}^{n+1}_+}|\nabla\tilde{v}|^2z^a\,dX \leq \int_{\mathbb{R}^{n+1}_+}|\nabla(\psi\tilde{u})|^2z^a\,dX. \end{equation*} To conclude it is enough to compute the right hand side. Recalling that $\supp\psi\subset\Omega$, \begin{equation*}\begin{split} \int_{\mathbb{R}^{n+1}_+}|\nabla(\psi\tilde{u})|^2z^a\,dX& = \int_{\mathbb{R}^{n+1}_+}|(\nabla\psi)\tilde{u}+\psi\nabla\tilde{u}|^2z^a\,dX\\ & =\int_{\Omega_+}|(\nabla\psi)\tilde{u}|^2z^a\,dX+ \int_{\Omega_+}|\psi\nabla\tilde{u}|^2z^a\,dX\\ & \qquad\qquad +\int_{\Omega_+}2(\nabla\psi\cdot\nabla\tilde{u})\,\psi\tilde{u}z^a\,dX. \end{split} \end{equation*} Notice that, since $u$ is bounded, also $\tilde{u}$ is bounded \begin{equation*} |\tilde{u}(x,z)|\leq\int_{\R}P(\xi,z)|u(x-\xi)|\,d\xi\leq\|u\|_{L^\infty(\R)}\int_{\R}P(\xi,z)\,d\xi=\|u\|_{L^\infty(\R)}. 
\end{equation*} In particular this gives \begin{equation*} \|\tilde{u}\|_{L^\infty(\mathbb{R}^{n+1}_+)}\leq\|u\|_{L^\infty(\R)}. \end{equation*} Therefore the first term in the right hand side above is \begin{equation*} \int_{\Omega_+}|(\nabla\psi)\tilde{u}|^2z^a\,dX \leq\mathfrak{L}^{n+1}(\Omega_+)\sup_{\Omega_+}|\nabla\psi|^2\|u\|_{L^\infty(\R)}^2\sup_{\Omega_+}z^a<\infty. \end{equation*} The second term gives \begin{equation*} \int_{\Omega_+}|\psi\nabla\tilde{u}|^2z^a\,dX\leq\sup_{\Omega_+}|\psi|^2\int_{\Omega_+}|\nabla\tilde{u}|^2z^a\,dX. \end{equation*} As for the last term, we have \begin{equation*} 2(\nabla\psi\cdot\nabla\tilde{u})\,\psi\tilde{u}z^a=\textrm{div}(\tilde{u}^2\psi z^a\nabla\psi) -\tilde{u}^2\psi z^a\Delta\psi-\tilde{u}^2z^a|\nabla\psi|^2 -\tilde{u}^2\psi\,\psi_z\frac{d}{dz}z^a. \end{equation*} Since $\supp\psi\subset\Omega$, using the Gauss-Green formula for the first term (the outer normal on $\Omega_0$ being $-e_{n+1}$) gives \begin{equation*} \int_{\Omega_+}\textrm{div}(\tilde{u}^2\psi z^a\nabla\psi)\,dX =\int_{\partial\Omega_+}\tilde{u}^2\psi z^a\nabla\psi\cdot\nu_{\Omega_+}\,d\sigma =-\int_{\Omega_0}u^2\psi z^a\psi_z\,dx.
\end{equation*} Integrating and taking absolute values gives \begin{equation*}\begin{split} \int_{\Omega_+}2(\nabla\psi&\cdot\nabla\tilde{u})\,\psi\tilde{u}z^a\,dX \leq\left|\int_{\Omega_+}2(\nabla\psi\cdot\nabla\tilde{u})\,\psi\tilde{u}z^a\,dX\right|\\ & \leq\int_{\Omega_0}|u^2\psi z^a\psi_z|\,dx +\int_{\Omega_+}|\tilde{u}^2\psi z^a\Delta\psi|\,dX +\int_{\Omega_+}|\tilde{u}^2z^a|\nabla\psi|^2|\,dX\\ & \qquad +\int_{\Omega_+}\Big|\tilde{u}^2\psi\,\psi_z\frac{a}{z^{1-a}}\Big|\,dX. \end{split}\end{equation*} As we did in (a), we can enclose $\Omega_+\subset C_R$, for $R=R(\Omega)$ big enough; then integrating the last term in the right hand side above gives \begin{equation*}\begin{split} \int_{\Omega_+}\Big|\tilde{u}^2\psi\,\psi_z\frac{a}{z^{1-a}}\Big|\,dX& \leq\|u\|_{L^\infty(\R)}^2\sup_{\Omega_+}|\psi\,\psi_z|\int_{B_R}dx\int_0^R az^{a-1}\,dz\\ & =\|u\|_{L^\infty(\R)}^2\sup_{\Omega_+}|\psi\,\psi_z|\omega_nR^{n+a}. \end{split} \end{equation*} The other three terms are simply bounded by a constant depending on $\|u\|_{L^\infty(\R)}$, $\Omega$ and $\psi$; for example the second term is \begin{equation*} \int_{\Omega_+}|\tilde{u}^2\psi z^a\Delta\psi|\,dX \leq\|u\|_{L^\infty(\R)}^2\sup_{\Omega_+}|\psi\Delta\psi| R^a\mathfrak{L}^{n+1}(\Omega_+). \end{equation*} Putting everything together gives the claim. \end{proof} \end{prop} \begin{rmk} Let $\Omega\subset\mathbb{R}^{n+1}_+$ be a bounded open set with Lipschitz boundary and let $\bar{v}:\Omega\longrightarrow\mathbb{R}$ be s.t. \begin{equation*} \int_\Omega|\nabla\bar{v}|^2z^a\,dX<\infty. \end{equation*} Then from H\"older's inequality \begin{equation*} \int_\Omega|\nabla\bar{v}|\,dX<\infty, \end{equation*} and hence we can define the trace of $\bar{v}$ on $\partial\Omega$.\\ In particular if $\bar{v}=\tilde{u}$, the trace of $\tilde{u}$ on $\Omega_0$ is clearly $u$. \end{rmk} \begin{rmk} Assume $\bar{v}$ is compactly supported in the open set $\Omega\subset\mathbb{R}^{n+1}$ and has trace $v$ on $\Omega_0$.
Then \begin{equation*} \int_{\Omega_+}|\nabla\bar{v}|^2z^a\,dX\geq\int_{\mathbb{R}^{n+1}_+}|\nabla\tilde{v}|^2z^a\,dX. \end{equation*} We briefly sketch the proof. Denote by $\bar{v}_k$ the solution of equation \begin{equation*} \textrm{div}(z^a\nabla\bar{v}_k)=0\qquad\textrm{in}\quad \mathcal{B}_k^+, \end{equation*} which has trace $v$ on $\{z=0\}$ and $0$ on $\partial\mathcal{B}_k^+\cap\{z>0\}$, where $\mathcal{B}_k^+$ denotes the upper half of the $(n+1)$-dimensional ball centered at 0, i.e. \begin{equation*} \mathcal{B}_k^+=\left\{(x,z)\in\mathbb{R}^{n+1}_+\,|\,(|x|^2+z^2)^\frac{1}{2}<k\right\}. \end{equation*} Extend $\bar{v}_k$ to be 0 outside $\mathcal{B}_k^+$. If $k$ is big enough, so that $\supp\bar{v}\subset\mathcal{B}_k$, then $\bar{v}$ and $\bar{v}_k$ have the same trace on $\partial\mathcal{B}_k^+$ and hence \begin{equation*}\begin{split} \int_{\mathbb{R}^{n+1}_+}|\nabla\bar{v}_k|^2z^a\,dX&= \int_{\mathcal{B}^+_k}|\nabla\bar{v}_k|^2z^a\,dX\leq\int_{\mathcal{B}^+_k}|\nabla\bar{v}|^2z^a\,dX\\ & =\int_{\Omega_+}|\nabla\bar{v}|^2z^a\,dX. \end{split}\end{equation*} It can be checked that $\nabla\bar{v}_k$ converges to $\nabla\tilde{v}$ in $L^2(\mathbb{R}^{n+1}_+,z^a\,dx\,dz)$, so we get the claim letting $k\to\infty$. \end{rmk} \begin{lem} Assume $u,\, v:\R\longrightarrow\mathbb{R}$ are s.t. $\J_1(u),\,\J_1(v)<\infty$ and $u-v$ is compactly supported in $B_1$. Then \begin{equation} \inf_{\Omega,\,\bar{v}}\int_{\Omega_+}\left(|\nabla\bar{v}|^2-|\nabla\tilde{u}|^2\right)z^a\,dX=c_3(n,a)\left(\J_1(v)-\J_1(u)\right), \end{equation} where the infimum is taken among all bounded open sets $\Omega\subset\mathbb{R}^{n+1}$ with Lipschitz boundary and $\Omega_0\subset B_1$, and among all functions $\bar{v}$ s.t. $\bar{v}-\tilde{u}$ is compactly supported in $\Omega$ and the trace of $\bar{v}$ on $\{z=0\}$ equals $v$. 
\begin{proof} First of all notice that, since the trace of $\bar{v}$ is equal to $v$ on the whole of $\{z=0\}$ and $\bar{v}=\tilde{u}$ out of $\Omega_+$, we must have $\supp(u-v)\subset\Omega_0$.\\ If $u,\,v\in C^\infty_c(\R)$, then \begin{equation*}\begin{split} \inf_{\Omega,\,\bar{v}}\int_{\Omega_+}&\left(|\nabla\bar{v}|^2-|\nabla\tilde{u}|^2\right)z^a\,dX =\int_{\mathbb{R}^{n+1}_+}|\nabla\tilde{v}|^2z^a\,dX-\int_{\mathbb{R}^{n+1}_+}|\nabla\tilde{u}|^2z^a\,dX\\ & =c_3(n,a)\left([v]_{H^\frac{s}{2}(\R)}^2-[u]_{H^\frac{s}{2}(\R)}^2\right) =c_3(n,a)\left(\J_1(v)-\J_1(u)\right). \end{split}\end{equation*} The first equality is a consequence of the previous Remark: given an open set $\Omega$ as above and an admissible $\bar{v}$, let $K:=\supp(\bar{v}-\tilde{u})$ and define $\bar{w}:=\bar{v}$ in $K$ and 0 outside; then the trace of $\bar{w}$ is $v$ on $\Omega_0$ and $\supp\bar{w}\subset\Omega$, so \begin{equation*} \int_{\Omega_+}|\nabla\bar{v}|^2z^a\,dX\geq\int_{K_+}|\nabla\bar{v}|^2z^a\,dX=\int_{\Omega_+}|\nabla\bar{w}|^2z^a\,dX \geq\int_{\mathbb{R}^{n+1}_+}|\nabla\tilde{v}|^2z^a\,dX. \end{equation*} On the other hand, it is clear that taking a sequence of admissible pairs of open sets $\Omega^k$ converging to $\mathbb{R}^{n+1}_+$ and functions $\bar{v}_k$ converging to $\tilde{v}$ gives the opposite inequality.\\ The other two equalities are a consequence of the compact supports of $u$ and $v$ and the hypothesis $u=v$ out of $B_1$, respectively. In the general case let \begin{equation*} \Omega^1\subset\Omega^2\subset\Omega^3\dots,\qquad\bigcup_k\Omega^k=\mathbb{R}^{n+1}\setminus\{(x,0)\,|\, x\in\Co B_1\}, \end{equation*} and denote by $\bar{w}_k$ the solution of the equation \begin{equation*} \textrm{div}(z^a\nabla\bar{w}_k)=0\qquad\textrm{in}\quad\Omega_+^k, \end{equation*} which has trace $w:=v-u$ on $\Omega_0^k$ and 0 on $\partial\Omega^k\cap\{z>0\}$. We extend $\bar{w}_k$ to be 0 outside $\Omega^k$.
Notice that the function $\tilde{u}+\bar{w}_k$ satisfies the equation \begin{equation*} \textrm{div}(z^a\nabla(\tilde{u}+\bar{w}_k))=0\qquad\textrm{in}\quad\Omega_+^k, \end{equation*} and has trace $v$ on the whole of $\{z=0\}$.\\ If $\Omega\subset\Omega^k$, then $\bar{v}$ and $\tilde{u}+\bar{w}_k$ have the same trace on $\partial\Omega^k_+$, equal to $v$ on $\Omega^k_0$ and $\tilde{u}$ on $\partial\Omega^k\cap\{z>0\}$; actually $\bar{v}=\tilde{u}=\tilde{u}+\bar{w}_k$ in $\mathbb{R}^{n+1}_+\setminus\Omega^k$. Therefore \begin{equation*}\begin{split} \int_{\mathbb{R}^{n+1}_+}\left(|\nabla\bar{v}|^2-|\nabla\tilde{u}|^2\right)z^a\,dX \geq\int_{\mathbb{R}^{n+1}_+}\left(|\nabla(\tilde{u}+\bar{w}_k)|^2-|\nabla\tilde{u}|^2\right)z^a\,dX\\ =\int_{\mathbb{R}^{n+1}_+}|\nabla\bar{w}_k|^2z^a\,dX +2\int_{\mathbb{R}^{n+1}_+}z^a\nabla\tilde{u}\cdot\nabla\bar{w}_k\,dX. \end{split}\end{equation*} The second term is independent of $k$. Indeed $\tilde{u}$ satisfies \begin{equation*} \textrm{div}(z^a\nabla\tilde{u})=0\qquad\textrm{in}\quad\mathbb{R}^{n+1}_+ \end{equation*} and $\bar{w}_{k_1}-\bar{w}_{k_2}$ is compactly supported in $\mathbb{R}^{n+1}$ and has trace 0 on $\{z=0\}$; therefore using Gauss-Green and the equality \begin{equation*}\begin{split} z^a\nabla\tilde{u}\cdot\nabla(\bar{w}_{k_1}-\bar{w}_{k_2})&=\textrm{div}(z^a(\bar{w}_{k_1}-\bar{w}_{k_2})\nabla\tilde{u}) -(\bar{w}_{k_1}-\bar{w}_{k_2})\textrm{div}(z^a\nabla\tilde{u})\\ & =\textrm{div}(z^a(\bar{w}_{k_1}-\bar{w}_{k_2})\nabla\tilde{u}), \end{split}\end{equation*} we get \begin{equation*} \int_{\mathbb{R}^{n+1}_+}z^a\nabla\tilde{u}\cdot\nabla(\bar{w}_{k_1}-\bar{w}_{k_2})\,dX=0, \end{equation*} and hence \begin{equation*} \int_{\mathbb{R}^{n+1}_+}z^a\nabla\tilde{u}\cdot\nabla\bar{w}_k\,dX = \int_{\mathbb{R}^{n+1}_+}z^a\nabla\tilde{u}\cdot\nabla\bar{w}_1\,dX, \end{equation*} for every $k$.
As in the case of balls in the previous Remark, it can be checked that $\nabla\bar{w}_k$ converges to $\nabla\tilde{w}$ in $L^2(\mathbb{R}^{n+1}_+,z^a\,dx\,dz)$.\\ Thus if we let $k\to\infty$ we find that the infimum equals \begin{equation*} \int_{\mathbb{R}^{n+1}_+}\left(|\nabla\tilde{w}|^2+2\nabla\tilde{u}\cdot\nabla\bar{w}_1\right)z^a\,dX=c_3(n,a)\J_1(w) +2\int_{\mathbb{R}^{n+1}_+}z^a\nabla\tilde{u}\cdot\nabla\bar{w}_1\,dX, \end{equation*} since $w$ has compact support in $\R$. In the particular case $u,\,v\in C_c^\infty(\R)$ we already showed that the infimum is $c_3(n,a)\left(\J_1(v)-\J_1(u)\right)$ and hence we get \begin{equation*} c_3(n,a)\J_1(w) +2\int_{\mathbb{R}^{n+1}_+}z^a\nabla\tilde{u}\cdot\nabla\bar{w}_1\,dX =c_3(n,a)\left(\J_1(u+w)-\J_1(u)\right), \end{equation*} for every $u,\,w\in C_c^\infty(\R)$. Then by approximation we find that this equality holds for all $u,\,w$ with $\J_1(u),\,\J_1(w)<\infty$, concluding the proof. \end{proof} \end{lem} As a consequence, if we restrict our attention to functions $v=\chi_F-\chi_{\Co F}$, we obtain the following \begin{prop}\label{minim_functional} The set $E$ is $s$-minimal in $B_1$ if and only if the extension $\tilde{u}$ of $u=\chi_E-\chi_{\Co E}$ satisfies \begin{equation*} \int_{\Omega_+}|\nabla\bar{v}|^2z^a\,dX\geq\int_{\Omega_+}|\nabla\tilde{u}|^2z^a\,dX, \end{equation*} for all bounded open sets $\Omega$ with Lipschitz boundary s.t. $\Omega_0\subset\subset B_1$ and all functions $\bar{v}$ that equal $\tilde{u}$ in a neighborhood of $\partial\Omega$ and take the values $\pm1$ on $\Omega_0$. \end{prop} \end{subsection} \begin{subsection}{Monotonicity Formula} Finally we are ready to define the promised quantity and prove the monotonicity formula. Assume $E$ is $s$-minimal in $B_R$. For all $r<R$ we define the functional \begin{equation} \Phi_E(r):=\frac{1}{r^{n+a-1}}\int_{\mathcal{B}_r^+}|\nabla\tilde{u}|^2z^a\,dX, \end{equation} where \begin{equation*} u=\chi_E-\chi_{\Co E}.
\end{equation*} \begin{lem}[Scale Invariance] The functional $\Phi_E$ is scale invariant in the sense that the rescaled set $\lambda E$ satisfies \begin{equation*} \Phi_{\lambda E}(\lambda r)=\Phi_E(r). \end{equation*} \begin{proof} Let $v:=\chi_{\lambda E}-\chi_{\Co(\lambda E)}$ and notice that $v(x)=u\left(\frac{x}{\lambda}\right)$.\\ Since \begin{equation*} P(x,z)=c_1\frac{\lambda^{1-a}}{\lambda^{n+1-a}} \,\frac{\left(\frac{z}{\lambda}\right)^{1-a}}{\left(\left|\frac{x}{\lambda}\right|^2+\left(\frac{z}{\lambda}\right)^2\right)^\frac{n+1-a}{2}} =\frac{1}{\lambda^n} P\Big(\frac{x}{\lambda},\frac{z}{\lambda}\Big), \end{equation*} we have \begin{equation*} \tilde{v}(x,z)=\int_{\R}P(x-\xi,z)v(\xi)\,d\xi =\frac{1}{\lambda^n}\int_{\R}P\left(\frac{x-\xi}{\lambda},\frac{z}{\lambda}\right)u\left(\frac{\xi}{\lambda}\right)\,d\xi =\tilde{u}\left(\frac{x}{\lambda},\frac{z}{\lambda}\right), \end{equation*} and hence \begin{equation*} \nabla\tilde{v}(x,z)=\frac{1}{\lambda}\nabla\tilde{u}\left(\frac{x}{\lambda},\frac{z}{\lambda}\right). \end{equation*} Therefore \begin{equation*}\begin{split} \int_{\mathcal{B}_{\lambda r}^+}|\nabla\tilde{v}(x,z)|^2z^a\,dX& =\frac{1}{\lambda^{2-a}}\int_{\mathcal{B}_{\lambda r}^+}\left|\nabla\tilde{u}\left(\frac{x}{\lambda},\frac{z}{\lambda}\right)\right|^2\left(\frac{z}{\lambda}\right)^a\,dX\\ & =\lambda^{n+a-1}\int_{\mathcal{B}_r^+}|\nabla\tilde{u}(x,z)|^2z^a\,dX, \end{split} \end{equation*} proving the claim. \end{proof} \end{lem} \begin{lem}\label{bounded_monotonicity_functional} There exists a constant $C=C(n,a)>0$ s.t. \begin{equation*} \Phi_E(r)\leq C \end{equation*} for every $r\leq R/2$. \begin{proof} From the inequality $(\ref{local_energy1})$, with $\Omega=\mathcal{B}_{1/2}$, we obtain \begin{equation*} \Phi_E\Big(\frac{1}{2}\Big)=2^{n+a-1}\int_{\mathcal{B}_{1/2}^+}|\nabla\tilde{u}|^2z^a\,dX\leq2^{n+a-1}C\, \J_1(u)=C P_s(E,B_1), \end{equation*} where $C=C(n,a)$.
Now, using the scaling invariance of $\Phi_E$ and the scaling of the $s$-perimeter, we get \begin{equation*}\begin{split} \Phi_E(r)&=\Phi_{\frac{1}{2r}E}\Big(\frac{1}{2}\Big)\leq C P_s\Big(\frac{1}{2r}E,B_1\Big)=C\frac{1}{(2r)^{n+a-1}}P_s(E,B_{2r})\\ & \leq C\frac{1}{(2r)^{n+a-1}}P_s(B_{2r})\qquad\left(\textrm{since }E\textrm{ is }s\textrm{-minimal in }B_R\textrm{ and }2r\leq R\right)\\ & =CP_s(B_1)=:C(n,a). \end{split}\end{equation*} \end{proof} \end{lem} \begin{teo}[Monotonicity Formula] Let $E$ be an $s$-minimal set in $B_R$. Then the function $r\longmapsto\Phi_E(r)$ is increasing. \begin{proof} Notice that $\Phi_E$ is continuous (see Remark $\ref{continuity_of_functional_mon}$ below) and differentiable at $r$ for almost every $r\in(0,R)$, with \begin{equation*}\begin{split} \frac{d}{dr}\Phi_E(r)&=-(n+a-1)\frac{1}{r^{n+a-2}}\int_{\mathcal{B}_r^+}|\nabla\tilde{u}|^2z^a\,dX\\ & \qquad +\frac{1}{r^{n+a-1}}\int_{(\partial\mathcal{B}_r)^+}|\nabla\tilde{u}|^2z^a\,d\mathcal{H}^n(X), \end{split} \end{equation*} where \begin{equation*} (\partial\mathcal{B}_r)^+:=\partial\mathcal{B}_r\cap\{z>0\}. \end{equation*} To prove the claim we show that $\frac{d}{dr}\Phi_E(r)\geq0$. Due to the scale invariance, it is enough to prove the inequality for $r=1$, i.e. that \begin{equation}\label{eq_some} \int_{(\partial\mathcal{B}_1)^+}|\nabla\tilde{u}|^2z^a\,d\mathcal{H}^n(X)\geq (n+a-1)\int_{\mathcal{B}_1^+}|\nabla\tilde{u}|^2z^a\,dX. \end{equation} To do so, we define the function $\bar{v}$ in $\mathbb{R}^{n+1}_+$, as \begin{equation*} \bar{v}(x,z):=\left\{\begin{array}{cc}\tilde{u}((1+\epsilon)(x,z)),&\textrm{in }\mathcal{B}_{1/(1+\epsilon)}^+,\\ \tilde{u}\big(\frac{(x,z)}{|(x,z)|}\big),&\textrm{in }\mathcal{B}_1^+\setminus\mathcal{B}_{1/(1+\epsilon)}^+, \end{array}\right. \quad\textrm{and }\bar{v}:=\tilde{u}\textrm{ in }\Co\mathcal{B}_1^+. 
\end{equation*} In particular the trace $v$ of $\bar{v}$ on $\{z=0\}$ is equal to $\chi_F-\chi_{\Co F}$, for some set $F$ which coincides with $E$ in $\R\setminus B_1$. Therefore the minimality of $E$ implies, thanks to Proposition $\ref{minim_functional}$, \begin{equation*} \int_{\mathcal{B}_1^+}|\nabla\bar{v}|^2z^a\,dX\geq\int_{\mathcal{B}_1^+}|\nabla\tilde{u}|^2z^a\,dX. \end{equation*} Moreover by construction the function $\bar{v}$ is constant along radial directions in the strip $\mathcal{B}_1^+\setminus\mathcal{B}_{1/(1+\epsilon)}^+$ and hence its gradient there is equal to \begin{equation*} \nabla\bar{v}(x,z)=\frac{1}{|(x,z)|}\nabla_\tau\tilde{u}\Big(\frac{(x,z)}{|(x,z)|}\Big), \end{equation*} where $\nabla_\tau$ denotes the tangential component of the gradient in $(\partial\mathcal{B}_1)^+$.\\ Thus we obtain \begin{equation*}\begin{split} &\int_{\mathcal{B}_1^+}|\nabla\tilde{u}|^2z^a\,dX \leq \int_{\mathcal{B}_{1/(1+\epsilon)}^+}|\nabla\bar{v}|^2z^a\,dX + \int_{\mathcal{B}_1^+\setminus\mathcal{B}_{1/(1+\epsilon)}^+}|\nabla\bar{v}|^2z^a\,dX\\ & =\frac{1}{(1+\epsilon)^{n+a-1}}\int_{\mathcal{B}_1^+}|\nabla\tilde{u}|^2z^a\,dX + \int_{\mathcal{B}_1^+\setminus\mathcal{B}_{1/(1+\epsilon)}^+}\Big|\nabla_\tau\tilde{u}\Big(\frac{(x,z)}{|(x,z)|}\Big)\Big|^2\frac{z^a}{|(x,z)|^2}\,dX, \end{split} \end{equation*} and hence \begin{equation*}\begin{split} \frac{1}{\epsilon}\Big(1&-\frac{1}{(1+\epsilon)^{n+a-1}}\Big)\int_{\mathcal{B}_1^+}|\nabla\tilde{u}|^2z^a\,dX\\ & \leq \frac{1}{\epsilon}\int_\frac{1}{1+\epsilon}^1 dt\int_{(\partial\mathcal{B}_t)^+} \Big|\nabla_\tau\tilde{u}\Big(\frac{(x,z)}{|(x,z)|}\Big)\Big|^2\frac{z^a}{|(x,z)|^2}\,d\mathcal{H}^n(X). \end{split} \end{equation*} Then passing to the limit as $\epsilon\to0$ gives \begin{equation*} \int_{(\partial\mathcal{B}_1)^+}|\nabla_\tau\tilde{u}|^2z^a\,d\mathcal{H}^n(X) \geq(n+a-1)\int_{\mathcal{B}_1^+}|\nabla\tilde{u}|^2z^a\,dX. 
\end{equation*} Thus \begin{equation}\label{homog_minim_eq}\begin{split} \int_{(\partial\mathcal{B}_1)^+}&|\nabla\tilde{u}|^2z^a\,d\mathcal{H}^n(X)\\ & \geq(n+a-1)\int_{\mathcal{B}_1^+}|\nabla\tilde{u}|^2z^a\,dX + \int_{(\partial\mathcal{B}_1)^+}|\nabla_\nu\tilde{u}|^2z^a\,d\mathcal{H}^n(X), \end{split}\end{equation} which implies $(\ref{eq_some})$, concluding the proof. \end{proof} \end{teo} In particular notice that from $(\ref{homog_minim_eq})$ we obtain \begin{equation*} \frac{d}{dr}\Phi_E(r)=0\qquad\Longrightarrow\qquad\nabla_\nu\tilde{u}=0\qquad\textrm{on }(\partial\mathcal{B}_r)^+. \end{equation*} As a consequence we have the following \begin{coroll} The function $r\longmapsto\Phi_E(r)$ is constant if and only if $\tilde{u}$ is homogeneous of degree 0. \end{coroll} \end{subsection} \end{section} \begin{section}{Minimal Cones} We study the blow-up limit $\lambda E$, as $\lambda\to\infty$, of a set $E$ which is $s$-minimal in $B_1$ and s.t. $0\in\partial E$, showing that it is an $s$-minimal cone $C$. In particular we exploit the improvement of flatness to show that if $C$ is a half-space, then $\partial E$ is $C^{1,\alpha}$ near 0.\\ We begin with the following technical result \begin{prop} Let $E_k\subset\R$ be $s$-minimal in $B_k$ for every $k\in\mathbb{N}$ and suppose $E_k\xrightarrow{loc}E$. Then the corresponding extensions $\tilde{u}_k$, respectively $\tilde{u}$, satisfy (i)$\qquad\tilde{u}_k\longrightarrow\tilde{u}$ uniformly on compact sets of $\mathbb{R}^{n+1}_+$, (ii)$\qquad\nabla\tilde{u}_k\longrightarrow\nabla\tilde{u}$ in $L^2_{loc}(\mathbb{R}^{n+1}_+,z^a\,dxdz)$.\\ In particular $\Phi_{E_k}(r)\longrightarrow\Phi_E(r)$. \begin{proof} Notice that the functions $\tilde{u}_k$ are uniformly Lipschitz continuous on each compact set of $\{z>0\}$ (which is easily shown e.g. using the Poisson formula). Consider a subsequence $\tilde{u}_{k_i}$ that converges uniformly on compact sets to a function $\tilde{v}$. We want to show that $\tilde{v}=\tilde{u}$. 
The uniform convergence implies that $\tilde{v}$ also satisfies the equation \begin{equation*} \textrm{div}(z^a\nabla\tilde{v})=0,\qquad\textrm{in }\mathbb{R}^{n+1}_+, \end{equation*} and it is bounded. Thus, if we prove that the trace $v$ of $\tilde{v}$ is equal to $u=\chi_E-\chi_{\Co E}$ on $\{z=0\}$, then we get $\tilde{v}=\tilde{u}$. Let $r>0$. Fatou's Lemma gives \begin{equation*} \int_{\mathcal{B}_r^+}|\nabla\tilde{v}|^2z^a\,dX\leq\liminf_{i\to\infty} \int_{\mathcal{B}_r^+}|\nabla\tilde{u}_{k_i}|^2z^a\,dX \leq C r^{n+a-1}, \end{equation*} the last inequality being a consequence of Lemma $\ref{bounded_monotonicity_functional}$ (notice that for $i$ big enough the set $E_{k_i}$ is $s$-minimal in $B_{2r}$).\\ Then near the boundary $\{z=0\}$ of $\mathbb{R}^{n+1}_+$, using H\"older's inequality we get \begin{equation*}\begin{split} \int_{\mathcal{B}_r\cap\{0<z<\delta\}}|\nabla(\tilde{u}_{k_i}-\tilde{v})|\,dX& =\int_{\mathcal{B}_r\cap\{0<z<\delta\}}z^{-\frac{a}{2}}|\nabla(\tilde{u}_{k_i}-\tilde{v})|z^\frac{a}{2}\,dX\\ & \leq\Big(\frac{\mathcal{L}^n(B_r)}{1-a}\Big)^\frac{1}{2}\delta^\frac{1-a}{2} \Big(\int_{\mathcal{B}_r^+}|\nabla(\tilde{u}_{k_i}-\tilde{v})|^2z^a\,dX\Big)^\frac{1}{2}\\ & \leq C\delta^\frac{1-a}{2}, \end{split}\end{equation*} with $C$ depending on $r$, but not on $\delta$ or $k_i$. On the other hand $\nabla\tilde{u}_{k_i}$ converges to $\nabla\tilde{v}$ uniformly on compact sets of $\mathbb{R}^{n+1}_+$. Therefore \begin{equation*} \int_{\mathcal{B}_r^+}|\nabla(\tilde{u}_{k_i}-\tilde{v})|\,dX \leq C\delta^\frac{1-a}{2}+\int_{\mathcal{B}_r\cap\{z\geq\delta\}}|\nabla(\tilde{u}_{k_i}-\tilde{v})|\,dX, \end{equation*} with the last term going to 0 as $i\to\infty$. Thus taking the limit gives \begin{equation*} \limsup_{i\to\infty}\int_{\mathcal{B}_r^+}|\nabla(\tilde{u}_{k_i}-\tilde{v})|\,dX\leq C\delta^\frac{1-a}{2}, \end{equation*} for every $\delta>0$ small.
Since $\delta$ is arbitrary, we see that $\tilde{u}_{k_i}$ converges to $\tilde{v}$ in $W^{1,1}(\mathcal{B}_r^+)$ and this implies the convergence of the traces $u_{k_i}\longrightarrow v$ in $L^1(B_r)$.\\ Since this holds for every $r>0$, we get $v=u$, as wanted, proving $(i)$. Now we prove $(ii)$. It is enough to prove the convergence in $\mathcal{B}_r^+$ for every $r>0$. From inequality $(\ref{local_energy1})$ we get \begin{equation*} \limsup_{k\to\infty}\int_{\mathcal{B}_r^+}|\nabla(\tilde{u}_k-\tilde{u})|^2z^a\,dX\leq C\limsup_{k\to\infty}\J_{2r}(u_k-u). \end{equation*} We want to show that $\J_{2r}(u_k-u)\longrightarrow0$. Define the functions \begin{equation*} f_k(x,y):=\frac{u_k(x)-u_k(y)}{|x-y|^\frac{n+s}{2}}\chi_{B_{2r}}(x)\big(\chi_{B_{2r}}(y)+\sqrt{2}\chi_{\Co B_{2r}}(y)\big), \end{equation*} so that \begin{equation*} \|f_k\|_{L^2(\R\times\R)}^2=\J_{2r}(u_k)=8P_s(E_k,B_{2r}). \end{equation*} According to Theorem $\ref{nonlocal_compactness}$ \begin{equation*} \lim_{k\to\infty}P_s(E_k,B_{2r})=P_s(E,B_{2r}) =\frac{1}{8}\|f\|_{L^2(\R\times\R)}^2, \end{equation*} with \begin{equation*} f(x,y):=\frac{u(x)-u(y)}{|x-y|^\frac{n+s}{2}}\chi_{B_{2r}}(x)\big(\chi_{B_{2r}}(y)+\sqrt{2}\chi_{\Co B_{2r}}(y)\big). \end{equation*} Moreover, since $u_k\longrightarrow u$ in $L^1_{loc}(\R)$, from every subsequence of $\{u_k\}$ we can extract a subsequence $\{u_{k_i}\}$ converging pointwise (almost everywhere) to $u$, and hence also $f_{k_i}$ converges pointwise to $f$.\\ Then for every such subsequence we have the standard implication \begin{equation*}\begin{split} f_{k_i}\longrightarrow f\quad&\textrm{a.e. in } \R\times\R,\quad\|f_{k_i}\|_{L^2(\R\times\R)} \longrightarrow\|f\|_{L^2(\R\times\R)}\\ & \Longrightarrow\quad f_{k_i}\longrightarrow f\quad\textrm{in }L^2(\R\times\R), \end{split} \end{equation*} and hence \begin{equation*} \J_{2r}(u_{k_i}-u)=\|f_{k_i}-f\|_{L^2(\R\times\R)}^2\longrightarrow0.
\end{equation*} This proves the claim: indeed, if for some $\epsilon>0$ there existed a subsequence of $\{u_k\}$ (which we relabel for simplicity) s.t. $\J_{2r}(u_k-u)\geq\epsilon$ for every $k$, then what we have just shown would allow us to extract a further subsequence $\{u_{k_i}\}$ s.t. $\J_{2r}(u_{k_i}-u)\longrightarrow0$, giving a contradiction. \end{proof} \end{prop} \begin{rmk}\label{general_seq_conv} The same Proposition remains true if we consider a sequence $E_k\xrightarrow{loc}E$ of sets $E_k$ $s$-minimal in $B_{\lambda_k}$, with $\lambda_k\longrightarrow\infty$. Thus in particular it applies to the blow-up sequence $E_k:=\lambda_k E$ of a set $E$ which is $s$-minimal in $B_1$, with $0\in\partial E$, provided that such a sequence admits a limit. \end{rmk} \begin{rmk}\label{continuity_of_functional_mon} Exploiting the same argument used in the proof of $(ii)$ we can show that the functional $\Phi_E$ is continuous in $r$.\\ Indeed, let $E$ be $s$-minimal in $B_R$ and take $\tilde{r}\in(0,R)$; we want to show that \begin{equation*} \lim_{r\to\tilde{r}}\Phi_E(r)=\Phi_E(\tilde{r}).
\end{equation*} Using the scaling invariance we have \begin{equation*} \Phi_E(r)=\Phi_{\frac{\tilde{r}}{r}E}(\tilde{r}), \end{equation*} so it is enough to show that \begin{equation*} \nabla\tilde{u}_r\longrightarrow\nabla\tilde{u}\quad\textrm{in}\quad L^2(\mathcal{B}_{\tilde{r}}^+,z^a\,dxdz), \end{equation*} where $u_r:=\chi_{\frac{\tilde{r}}{r}E}-\chi_{\Co(\frac{\tilde{r}}{r}E)}$ and $u=\chi_E-\chi_{\Co E}$.\\ Notice that the set $\frac{\tilde{r}}{r}E$ is $s$-minimal in $B_{\tilde{r}\frac{R}{r}}$ and $\frac{\tilde{r}}{r}E\xrightarrow{loc}E$ as $r\to\tilde{r}$.\\ Therefore, for some small $0<\epsilon<R-\tilde{r}$ and $\delta>0$, both the sets $E$ and $\frac{\tilde{r}}{r}E$ are $s$-minimal in $B_{\tilde{r}+\epsilon}\subset B_R$, for every $|r-\tilde{r}|\leq\delta$.\\ Now we have \begin{equation*} \limsup_{r\to\tilde{r}}\int_{\mathcal{B}_{\tilde{r}}^+}|\nabla(\tilde{u}_r-\tilde{u})|^2z^a\,dX\leq C \limsup_{r\to\tilde{r}}\J_{\tilde{r}+\epsilon}(u_r-u), \end{equation*} and reasoning as above proves the claim. \end{rmk} Now we can use the pointwise convergence of the functions $\Phi_{E_k}$ to show that the blow-up limit of $E$ at 0 (if it exists) is a cone. We recall that a set $C$ is a cone (with vertex at 0) if for every $t>0$ we have $tC=C$. Moreover we say that $C$ is an $s$-minimal cone if it is locally $s$-minimal in $\R$, meaning that $C$ is $s$-minimal in every ball $B\subset\R$. \begin{teo}[Blow-up Limit]\label{blowup_teo} Let $E\subset\R$ be $s$-minimal in $B_1$ with $0\in\partial E$ and let $\lambda_k\longrightarrow\infty$ be a sequence s.t. \begin{equation*} \lambda_k E\xrightarrow{loc}C. \end{equation*} Then $C$ is an $s$-minimal cone.
\begin{proof} Theorem $\ref{nonlocal_compactness}$ proves that $C$ is $s$-minimal in every ball $B\subset\R$.\\ Using the previous Proposition and Remark $\ref{general_seq_conv}$ we get \begin{equation*} \Phi_E\Big(\frac{r}{\lambda_k}\Big)=\Phi_{\lambda_k E}(r)\xrightarrow{k\to\infty}\Phi_C(r), \end{equation*} for every $r>0$. Therefore $\Phi_C$ is a constant function, with \begin{equation*} \Phi_C(r)=\lim_{t\to0}\Phi_E(t), \end{equation*} the existence of the limit being guaranteed by the monotonicity of $\Phi_E$.\\ Since $\Phi_C$ is constant, we conclude that the extension $\tilde{u}_C$, and hence also its trace $u_C=\chi_C-\chi_{\Co C}$, is homogeneous of degree 0, proving that $C$ is a cone. \end{proof} \end{teo} \begin{defin} We say that a cone $C$ as in Theorem $\ref{blowup_teo}$ is a tangent cone for $E$ at 0. \end{defin} Now we prove the existence of tangent cones. \begin{prop}[Existence of Blow-up Limits]\label{blowup_exist} Let $E\subset\R$ be $s$-minimal in $B_1$ with $0\in\partial E$ and let $\lambda_k\longrightarrow\infty$. Then there exist an $s$-minimal cone $C$ and a subsequence $\{\lambda_{k_i}\}$ of $\{\lambda_k\}$ s.t. \begin{equation*} \lambda_{k_i}E\xrightarrow{loc}C. \end{equation*} \begin{proof} We want to use Theorem $\ref{compact_embd_th}$ to construct a limit set via a diagonal argument. Then the previous Theorem shows that it is an $s$-minimal cone. Since we are working with subsequences, we may as well assume that $\lambda_k\nearrow\infty$.\\ Let $h\in\mathbb{N}$. Notice that $\lambda_k E$ is $s$-minimal in $B_{\lambda_k}$. Then for $k\geq k(h)$ we have $\lambda_k\geq h$ and hence in particular $\lambda_k E$ is $s$-minimal in $B_h$.
For every such $k$ the minimality implies \begin{equation*}\begin{split} [\chi_{\lambda_kE}]_{W^{s,1}(B_h)}&=2P^L_s(\lambda_k E,B_h)\leq2P_s(\lambda_k E,B_h)\\ & \leq2P_s((\lambda_kE)\setminus B_h,B_h)\\ & =2\Ll_s((\lambda_kE)\setminus B_h,\Co((\lambda_kE)\setminus B_h)\cap B_h)\\ & \leq2\Ll_s(\Co B_h,B_h)=2P_s(B_h), \end{split}\end{equation*} and clearly \begin{equation*} \|\chi_{\lambda_kE}\|_{L^1(B_h)}\leq|B_h|. \end{equation*} Therefore \begin{equation*} \forall\, h\quad\exists\, k(h)\quad\textrm{s.t.}\quad\|\chi_{\lambda_kE}\|_{W^{s,1}(B_h)}\leq c_h<\infty,\quad\forall\, k\geq k(h). \end{equation*} Thus Theorem $\ref{compact_embd_th}$ guarantees the existence of a subsequence $\{\lambda_{k_i}\}$ (with $k_1\geq k(h)$) s.t. \begin{equation*} (\lambda_{k_i}E)\cap B_h\xrightarrow{i\to\infty}E^h, \end{equation*} in measure, for some set $E^h\subset B_h$. Applying this argument for $h=1$ we get a subsequence $\{\lambda^1_k\}$ of $\{\lambda_k\}$ with \begin{equation*} (\lambda_k^1E)\cap B_1\longrightarrow E^1. \end{equation*} Applying again the argument in $B_2$, with $\{\lambda_k^1\}$ in place of $\{\lambda_k\}$, we get a subsequence $\{\lambda^2_k\}$ of $\{\lambda^1_k\}$, with \begin{equation*} (\lambda_k^2E)\cap B_2\longrightarrow E^2. \end{equation*} Notice that we must have $E^1\subset E^2$ in measure (by the uniqueness of the limit in $B_1$). We can also suppose that $\lambda_1^2>\lambda^1_1$.\\ Proceeding inductively in this way we get a subsequence $\{\lambda_1^k\}$ of $\{\lambda_k\}$ s.t. \begin{equation*} (\lambda_1^kE)\cap B_h\xrightarrow{k\to\infty} E^h,\qquad\textrm{for every }h\in\mathbb{N}, \end{equation*} with $E^h\subset E^{h+1}$. Therefore if we define $C:=\bigcup_hE^h$ we get \begin{equation*} \lambda_1^kE\xrightarrow{loc}C, \end{equation*} concluding the proof. \end{proof}\end{prop} We remark that, as in the classical setting of Caccioppoli sets, two different subsequences might converge to different cones i.e. tangent cones need not be unique. 
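The scaling of the $s$-perimeter invoked in the proof of Lemma \ref{bounded_monotonicity_functional} (and implicitly in the scale invariance of $\Phi_E$) can be verified by a direct change of variables; we record the short computation for the reader's convenience.

\begin{rmk}
For every $\lambda>0$ the change of variables $x=\lambda\xi$, $y=\lambda\eta$ gives
\begin{equation*}
\Ll_s(\lambda A,\lambda B)=\int_{\lambda A}\int_{\lambda B}\frac{dx\,dy}{|x-y|^{n+s}}
=\lambda^{2n}\int_A\int_B\frac{d\xi\,d\eta}{\lambda^{n+s}|\xi-\eta|^{n+s}}
=\lambda^{n-s}\Ll_s(A,B),
\end{equation*}
and hence, applying this to the two interaction terms defining $P_s(\,\cdot\,,B_{\lambda r})$,
\begin{equation*}
P_s(\lambda E,B_{\lambda r})=\lambda^{n-s}P_s(E,B_r)=\lambda^{n+a-1}P_s(E,B_r),
\end{equation*}
since $a=1-s$. With $\lambda=\frac{1}{2r}$ this is exactly the identity $P_s\big(\tfrac{1}{2r}E,B_1\big)=\frac{1}{(2r)^{n+a-1}}P_s(E,B_{2r})$ used in the proof of Lemma \ref{bounded_monotonicity_functional}.
\end{rmk}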
However this can happen only at singular points: if $\partial E$ is regular in a neighborhood of 0, then the tangent cone is necessarily the half-space \begin{equation*} H^-(0)=\{x\in\R\,|\,x\cdot\nu_E(0)\leq0\}. \end{equation*} Moreover, exploiting the improvement of flatness, we can show that if $E$ has a half-space $C$ as tangent cone, then $\partial E$ is regular in a neighborhood of 0. Thus in particular $C=H^-(0)$ is the unique tangent cone at 0. \begin{teo}[Regularity]\label{tangent_cone_reg} Let $E\subset\R$ be $s$-minimal in $B_1$ with $0\in\partial E$. If $E$ has a half-space as a tangent cone, then $\partial E$ is a $C^{1,\alpha}$ surface in a neighborhood of 0. \begin{proof} Let $C$ be the tangent half-space of the hypothesis and let $\lambda_k\nearrow\infty$ be s.t. $\lambda_kE\xrightarrow{loc}C$. Then Corollary $\ref{haus_conv_min}$ implies that for every $\epsilon>0$ \begin{equation*} \partial(\lambda_k E)\cap B_1\subset N_\epsilon(\partial C)\cap B_1\subset N_\epsilon(\partial C),\qquad\textrm{for }k\geq k(\epsilon). \end{equation*} Since $C$ is a half-space, up to rotation we have \begin{equation*} N_\epsilon(\partial C)=\{|x\cdot e_n|<\epsilon\}. \end{equation*} Taking $\epsilon=\frac{\epsilon_0}{2}$ and setting $k_0:=k(\epsilon_0/2)$, we see that $\lambda_{k_0}E$ satisfies the hypothesis of Theorem $\ref{flat_reg_teo1}$, and hence $\partial(\lambda_{k_0}E)=\lambda_{k_0}\partial E$ is a $C^{1,\alpha}$ surface in $B_{1/2}$.\\ Therefore scaling back we see that $\partial E$ is a $C^{1,\alpha}$ surface in $B_{1/(2\lambda_{k_0})}$. \end{proof} \end{teo} \begin{defin} Let $E\subset\R$ be $s$-minimal in $\Omega$. A point $x_0\in\partial E\cap\Omega$ that has a half-space as a tangent cone is called a regular point. The points in $\partial E\cap\Omega$ that are not regular are called singular points. \end{defin} For a minimal cone $C$ we denote by $\Phi_C$ its energy, i.e. the constant value of the function $\Phi_C(r)$.
We show that half-spaces have minimal energy amongst minimal cones, with a gap separating their energy from the energies of all other cones. Let $\Pi:=\{x_1>0\}$ be a half-space. \begin{teo}[Energy Gap] Let $C$ be an $s$-minimal cone. Then \begin{equation}\label{energy_gap1} \Phi_C\geq\Phi_\Pi. \end{equation} Moreover, if $C$ is not a half-space, then \begin{equation}\label{energy_gap2} \Phi_C\geq\Phi_\Pi+\delta_0, \end{equation} where $\delta_0>0$ is a constant depending only on $n$ and $s$. \begin{proof} We have $0\in\partial C\cap B_1$ and hence the Clean ball condition (Corollary $\ref{clean_ball}$) guarantees the existence of some small ball $B\subset C\cap B_1$. Sliding $B$ vertically until it first touches $\partial C$, we find a point $x_0\in\partial C$ having an interior tangent ball. Corollary $\ref{int_tang_flat_reg}$ then implies that $\partial C$ is $C^{1,\alpha}$ in a neighborhood of $x_0$ and hence the tangent cone of $C$ at $x_0$ is a half-space. Therefore \begin{equation*} \lim_{r\to0}\Phi_{C-x_0}(r)=\Phi_\Pi. \end{equation*} On the other hand, since $\frac{1}{k}(C-x_0)=C-\frac{1}{k}x_0$, we obtain \begin{equation*} \frac{1}{k}(C-x_0)\xrightarrow{loc}C \end{equation*} and hence \begin{equation*} \Phi_{C-x_0}(k)\longrightarrow\Phi_C,\quad\textrm{as }k\to\infty. \end{equation*} The monotonicity of $\Phi_{C-x_0}$ gives $(\ref{energy_gap1})$. We have equality only when $\Phi_{C-x_0}$ is constant, i.e. when $C-x_0$ is a cone, thus when $C-x_0$ is a half-space, which in turn implies that $C$ is a half-space. To prove $(\ref{energy_gap2})$ we use a compactness argument.\\ Assume by contradiction that there exist minimal cones $C_k$ with \begin{equation}\label{energy_gap_contr_ass} \Phi_{C_k}\leq\Phi_\Pi+\frac{1}{k},\quad\textrm{for every }k\in\mathbb{N}, \end{equation} that are not half-spaces.
Arguing as in the proof of Proposition $\ref{blowup_exist}$ we can find a convergent subsequence \begin{equation*} C_{k_i}\xrightarrow{loc} C_0, \end{equation*} for some minimal cone $C_0$. Since $\Phi_{C_0}\geq\Phi_\Pi$, from $(\ref{energy_gap_contr_ass})$ we get $\Phi_{C_0}=\Phi_\Pi$ and hence $C_0$ is a half-space.\\ Therefore, using again Corollary $\ref{haus_conv_min}$ we get (up to rotation) \begin{equation*} \partial C_{k_i}\cap B_1\subset\{|x\cdot e_n|\leq\epsilon_0\}, \end{equation*} for all $k_i$ large enough.\\ Then Theorem $\ref{flat_reg_teo1}$ implies that $\partial C_{k_i}$ are $C^{1,\alpha}$ surfaces around 0. Thus we find that $C_{k_i}$ is a half-space for all large $k_i$, giving a contradiction. \end{proof} \end{teo} \end{section} \begin{section}{Dimension Reduction} In this section we adapt Federer's classical dimension reduction argument to estimate the size of the singular set of an $s$-minimal set $E$. We will need the following result from \cite{cones}, which states that in dimension 2 there are no singular minimal cones. \begin{teo}\label{min2cones} If $E$ is an $s$-minimal cone in $\mathbb{R}^2$, then $E$ is a half-space. \end{teo} We remark that if $\partial C$ is singular at $x_0\not=0$, then also the vertex 0 is a singular point. Indeed, if $\partial C$ is regular at 0, then $C$ must be a half-space, and hence $\partial C$ is a plane, which is regular at every point. The dimension reduction result is the following: if $C\subset\R$ is a minimal cone with a singularity at $x_0\in\partial C$, with $x_0\not=0$, then we can find a minimal cone $K\subset\mathbb{R}^{n-1}$ which is singular at $0$.
Roughly speaking, using dimension reduction we can inductively reduce the dimension of the ambient space until we are left with a minimal cone which is singular only in the vertex 0. Moreover, from Theorem $\ref{min2cones}$ we see that in dimension 3 a singular minimal cone can be singular only in the vertex 0 (otherwise we could find a singular minimal cone in dimension 2). As a consequence we will prove (see Corollary 2 in \cite{cones}) that the Hausdorff dimension of the singular set is at most $n-3$.\\ We begin with the following technical result. \begin{lem}\label{interpol} Let $\bar{w}$ be a bounded function defined in $\mathcal{B}_1^+\subset\mathbb{R}^{n+1}$ s.t. $\bar{w}=0$ in a neighborhood of $\partial\mathcal{B}_1$ and \begin{equation*} \int_{\mathcal{B}_1^+}|\nabla\bar{w}|^2z^a\,dX<\infty. \end{equation*} There exists a function $\mathcal{W}=\mathcal{W}(x,x_{n+1},z)$ defined in \begin{equation*} \mathcal{B}_1^+\times[-1,1]=\{(x,x_{n+1},z)\in\mathbb{R}^{n+2}\,|\,(x,z)\in\mathcal{B}_1^+,\,x_{n+1}\in[-1,1]\}, \end{equation*} with the following properties \begin{equation}\label{sharp_switch}\begin{split} &(i)\quad\mathcal{W}=0,\textrm{ if }x_{n+1}<-\frac{1}{2},\\ & (ii)\quad\mathcal{W}(x,x_{n+1},z)=\bar{w}(x,z),\textrm{ if }x_{n+1}>\frac{1}{2},\\ & (iii)\quad\mathcal{W}=0\textrm{ in a neighborhood of }\partial\mathcal{B}_1^+\times[-1,1],\\ & (iv)\quad\mathcal{W}(x,x_{n+1},0)=\left\{\begin{array}{cc} 0&\textrm{if }x_{n+1}\leq0,\\ \bar{w}(x,0)&\textrm{if }x_{n+1}>0, \end{array}\right.\\ & (v)\quad\int_{\mathcal{B}_1^+\times[-1,1]}|\nabla\mathcal{W}|^2z^a\,d\mathcal{X}<\infty, \end{split}\end{equation} where $\mathcal{X}=(x,x_{n+1},z)\in\mathbb{R}^{n+2}$. \begin{proof} First we assume that $0\leq\bar{w}\leq1$. Moreover we may regard $\bar{w}$ as defined in $\mathbb{R}^{n+2}$ and constant in the $x_{n+1}$ variable. Let $\pi$ be the extension in $\mathbb{R}^{n+2}$ corresponding to $\chi_{\{x_{n+1}>0\}}$.
Then the function \begin{equation*} \mathcal{W}_1:=\min\{\bar{w},\pi\} \end{equation*} satisfies the last three points of $(\ref{sharp_switch})$. Now we modify $\mathcal{W}_1$ so that points $(i)$ and $(ii)$ of $(\ref{sharp_switch})$ also hold.\\ Let $\phi_1$ be a smooth cutoff function on $\mathbb{R}$, with $\phi_1=0$ outside $[-1/2,1/2]$ and $\phi_1=1$ on $[-1/4,1/4]$. Now define $\phi_2:=1-\phi_1$ on $[0,\infty)$ and $\phi_2:=0$ on $(-\infty,0)$. Then \begin{equation*} \mathcal{W}:=\phi_1(x_{n+1})\mathcal{W}_1+\phi_2(x_{n+1})\bar{w} \end{equation*} satisfies all the properties in $(\ref{sharp_switch})$.\\ The general case follows by repeating this construction for the (scaled) positive and negative parts, $\bar{w}^+$ and $\bar{w}^-$, and then subtracting the functions we obtain. \end{proof} \end{lem} We will exploit this result in the proof of the following theorem. \begin{teo} The set $E\subset\R$ is locally $s$-minimal in $\R$ if and only if the set $E\times\mathbb{R}$ is locally $s$-minimal in $\mathbb{R}^{n+1}$. \begin{proof} First of all, notice that if $\tilde{u}=\tilde{u}(x,z)$ is the extension in $\mathbb{R}^{n+1}$ for the function $\chi_E-\chi_{\Co E}$, then by taking $\tilde{u}$ to be constant in the $x_{n+1}$ variable we obtain the extension in $\mathbb{R}^{n+2}$ corresponding to $E\times\mathbb{R}$. $\Longrightarrow)$ Assume $E$ is locally $s$-minimal in $\R$. As we remarked above, the extension in $\mathbb{R}^{n+2}$ corresponding to $E\times\mathbb{R}$ is the function $\mathcal{U}(x,x_{n+1},z)=\tilde{u}(x,z)$, which is constant in the $x_{n+1}$ variable. Therefore $|\nabla\mathcal{U}|=|\nabla_X\mathcal{U}|=|\nabla\tilde{u}|$, where $\nabla_X$ denotes the gradient in the $(x,z)$ variables.\\ Also notice that for any function $\bar{v}(x,x_{n+1},z)$ we have $|\nabla\bar{v}|^2\geq|\nabla_X\bar{v}|^2$ for every fixed $x_{n+1}$. Now suppose that $\bar{v}$ is s.t.
supp$(\bar{v}-\mathcal{U})\subset Q$, where $Q$ is a bounded open cube in $\mathbb{R}^{n+2}$, and the trace of $\bar{v}$ on $\{z=0\}$ takes only the values $\pm1$. Let $Q_+:=Q\cap\{z>0\}$, $Q_t:=Q\cap\{x_{n+1}=t\}$ and $Q_t^+:=Q_t\cap\{z>0\}$. Then slicing we get \begin{equation*} \int_{Q}|\nabla\bar{v}|^2z^a\,d\mathcal{X}\geq\int\Big(\int_{Q_t}|\nabla_X\bar{v}|^2z^a\,d X\Big)dt, \end{equation*} and the minimality of $E$ implies, as in Proposition $\ref{minim_functional}$, \begin{equation*} \int_{Q_t^+}|\nabla_X\bar{v}|^2z^a\,d X\geq \int_{Q_t^+}|\nabla\tilde{u}|^2z^a\,d X=\int_{Q_t^+}|\nabla\mathcal{U}|^2z^a\,d X. \end{equation*} Therefore \begin{equation*} \int_{Q_+}|\nabla\bar{v}|^2z^a\,d\mathcal{X} \geq \int_{Q_+}|\nabla\mathcal{U}|^2z^a\,d\mathcal{X}, \end{equation*} which implies the minimality of $E\times\mathbb{R}$ in $Q$, again via Proposition $\ref{minim_functional}$, and hence the local $s$-minimality of $E\times\mathbb{R}$ in $\mathbb{R}^{n+1}$. $\Longleftarrow)$ Assume $E\times\mathbb{R}$ is locally $s$-minimal in $\mathbb{R}^{n+1}$. Let $\bar{v}(x,z)$ be s.t. supp$(\bar{v}-\tilde{u})\subset\mathcal{B}_R\subset\mathbb{R}^{n+1}$ and the trace of $\bar{v}$ on $\{z=0\}$ takes only the values $\pm1$. We want to show \begin{equation}\label{cylindersfun} \int_{\mathcal{B}_R^+}|\nabla\bar{v}|^2z^a\,dX\geq \int_{\mathcal{B}_R^+}|\nabla\tilde{u}|^2z^a\,dX. \end{equation} We can suppose that the first integral is finite. Moreover the local minimality of $E\times\mathbb{R}$ implies \begin{equation*} \int_{-1}^1\Big(\int_{\mathcal{B}_R^+}|\nabla\tilde{u}|^2z^a\,dX\Big)dx_{n+1} = \int_{\mathcal{B}_R^+\times[-1,1]}|\nabla\mathcal{U}|^2z^a\,d\mathcal{X}<\infty, \end{equation*} so the second integral above is also finite. Using the previous Lemma, we can construct a competitor for $\mathcal{U}$ and obtain $(\ref{cylindersfun})$ by comparing the corresponding energies.\\ Let $c>1$ and let $\mathcal{W}$ be the function obtained in Lemma $\ref{interpol}$ for $\bar{w}:=\tilde{u}-\bar{v}$.
Now we consider the function $\mathcal{V}(x,x_{n+1},z)$, defined in $\mathcal{D}:=\mathcal{B}_R^+\times[-(c+1),c+1]$ as \begin{equation*} \mathcal{V}(x,x_{n+1},z):=\left\{ \begin{array}{cc} \bar{v}(x,z),&\textrm{if }|x_{n+1}|\leq c-1,\\ \bar{v}(x,z)+\mathcal{W}(x,|x_{n+1}|-c,z),&\textrm{if }-1<|x_{n+1}|-c\leq1. \end{array}\right. \end{equation*} This function $\mathcal{V}$ is a competitor for $\mathcal{U}$ in $\mathcal{D}_+:=\mathcal{D}\cap\{z>0\}$ in the sense of Proposition $\ref{minim_functional}$. Moreover notice that $\mathcal{V}$ is even in the $x_{n+1}$ variable and the energy is s.t. \begin{equation*} \int_{\mathcal{B}_R^+\times[c-1,c+1]}|\nabla\mathcal{V}|^2z^a\,d\mathcal{X}=\Lambda \end{equation*} is independent of $c$ and \begin{equation*} \int_{\mathcal{B}_R^+\times[0,c-1]}|\nabla\mathcal{V}|^2z^a\,d\mathcal{X} =(c-1)\int_{\mathcal{B}_R^+}|\nabla\bar{v}|^2z^a\,dX. \end{equation*} Therefore the minimality of $E\times\mathbb{R}$ gives \begin{equation*}\begin{split} 2(c+1)&\int_{\mathcal{B}_R^+}|\nabla\tilde{u}|^2z^a\,dX= \int_{-(c+1)}^{c+1}\Big(\int_{\mathcal{B}_R^+}|\nabla\tilde{u}|^2z^a\,dX\Big)dx_{n+1}\\ & =\int_{\mathcal{D}_+}|\nabla\mathcal{U}|^2z^a\,d\mathcal{X} \leq \int_{\mathcal{D}_+}|\nabla\mathcal{V}|^2z^a\,d\mathcal{X}\\ & =2(c-1)\int_{\mathcal{B}_R^+}|\nabla\bar{v}|^2z^a\,dX+2\Lambda. \end{split}\end{equation*} Dividing by $2c$ and letting $c\to\infty$ we get $(\ref{cylindersfun})$, concluding the proof. \end{proof} \end{teo} Now we prove the dimension reduction result. The idea is to blow up a singular cone at a non-vertex singularity. The tangent cone thus obtained is a cylinder whose base is a cone which is singular in the vertex. \begin{teo}[Dimension Reduction]\label{dimeredteo} Let $C$ be an $s$-minimal cone in $\R$, with $x_0=e_n\in\partial C$ (and vertex in 0). From any sequence converging to $\infty$ we can extract a subsequence $\lambda_k\to\infty$ s.t.
\begin{equation*} \lambda_k(C-x_0)\xrightarrow{loc} A\times\mathbb{R}, \end{equation*} where $A$ is an $s$-minimal cone in $\mathbb{R}^{n-1}$.\\ Moreover, if $x_0$ is a singular point for $\partial C$ then 0 is a singular point for $A$. \begin{proof} We already know (Proposition $\ref{blowup_exist}$) that a blow-up limit $D$ exists and that it is an $s$-minimal cone (Theorem $\ref{blowup_teo}$). Suppose for the moment that $D=A\times\mathbb{R}$. Then the previous Theorem implies that $A$ is locally $s$-minimal in $\mathbb{R}^{n-1}$; moreover, since $A\times\mathbb{R}$ is a cone, also $A$ must be a cone. \\ As for the last claim, we remark that $A\times\mathbb{R}$ is the tangent cone for $C$ at $x_0$. Therefore if it were regular in 0, the cone $C$ would be regular in $x_0$ (Theorem $\ref{tangent_cone_reg}$). To conclude, notice that 0 is a regular point of $A\times\mathbb{R}$ if and only if 0 is regular for $A$. We are left to show that the blow-up limit $D$ is of the form $D=A\times\mathbb{R}$, i.e. that $D$ is constant in the $e_n$ direction. To do this, we show that if $x$ is an interior point of $D$, then the whole line $L:=\{x+te_n\,|\,t\in\mathbb{R}\}$ is included in the interior of $D$. Indeed, let $x$ be an interior point of $D$, i.e. $B_\epsilon(x)\subset D$. Then by uniform density (exploiting Corollary $\ref{haus_conv_min}$) we have \begin{equation*} B_{\epsilon/2}(x)\subset C_k:=\lambda_k(C-x_0), \end{equation*} for all $k$ large enough. Notice that $C_k=C-\lambda_ke_n$ is a cone with vertex in $-\lambda_ke_n$. Let $T_k$ be the cone with vertex in $-\lambda_ke_n$ generated by $B_{\epsilon/2}(x)$. Then $T_k\subset C_k$ and \begin{equation*} T_k\xrightarrow{loc}\bigcup_{t\in\mathbb{R}}\big(B_{\epsilon/2}(x)+te_n\big)=N_{\epsilon/2}(L). \end{equation*} As a consequence we get $N_{\epsilon/2}(L)\subset D$, as desired, concluding the proof.
\end{proof}\end{teo} \begin{rmk} Since there are no singular $s$-minimal cones in dimension 2, the previous Theorem implies that $s$-minimal cones in $\mathbb{R}^3$ can have at most one singular point, the origin. \end{rmk} Finally we are ready to estimate the dimension of the singular set.\\ The argument is the same as the one for classical minimal surfaces. We recall some definitions and results about Hausdorff measures which we will need in the proof. For the details we refer to e.g. \cite{Maggi} or Chapter 11 of \cite{Giusti}. Let $E\subset\R$, $d\in[0,\infty)$ and $\delta\in(0,\infty]$. Then \begin{equation*} \h_\delta^d(E):=\frac{\omega_d}{2^d}\inf\Big\{\sum_{j=1}^\infty (\textrm{diam }S_j)^d\,\big|\, E\subset\bigcup_{j=1}^\infty S_j,\,\textrm{diam }S_j<\delta\Big\}, \end{equation*} and \begin{equation*} \h^d(E):=\lim_{\delta\to0}\h_\delta^d(E)=\sup_\delta\h^d_\delta(E). \end{equation*} It is easy to show that \begin{equation}\label{hauseq1} \h^d_\infty(E)=0\quad\Longleftrightarrow\quad\h^d(E)=0, \end{equation} and \begin{equation}\label{hauseq2} \h^d_\infty(E)=0\quad\Longrightarrow\quad\h^{d+1}_\infty(E\times\mathbb{R})=0. \end{equation} We also recall the following density property: if $E\subset\R$ and $d>0$, then \begin{equation}\label{hausdenseq} \limsup_{r\to0}\frac{\h_\infty^d(E\cap B_r(x))}{\omega_dr^d}\geq\frac{1}{2^d},\qquad\textrm{for }\h^d\textrm{-a.e. }x\in E. \end{equation} \begin{teo}[Dimension of the Singular Set]\label{singsetdim} Let $E\subset\R$ be $s$-minimal in $\Omega$. The singular set $\Sigma_E\subset\partial E\cap\Omega$ has Hausdorff dimension at most $n-3$, i.e. \begin{equation*} \h^d(\Sigma_E)=0,\qquad\textrm{for every }d>n-3. \end{equation*} \end{teo} The main part of the proof is the following proposition. \begin{prop} Let $\{E_h\}$ and $E$ be $s$-minimal sets in $\Omega$ s.t. $E_h\xrightarrow{loc}E$.
Then, for every compact set $K\subset\Omega$ and every $\epsilon>0$, \begin{equation}\label{subsingular} \Sigma_{E_h}\cap K\subset N_\epsilon(\Sigma_E\cap K), \end{equation} for $h$ sufficiently large. Therefore \begin{equation}\label{estimatesizefund} \h^d_\infty(\Sigma_E\cap K)\geq\limsup_{h\to\infty}\h^d_\infty(\Sigma_{E_h}\cap K). \end{equation} \begin{proof} We remark that $(\ref{estimatesizefund})$ is an immediate consequence of $(\ref{subsingular})$. Suppose $(\ref{subsingular})$ is false. Then (up to a subsequence) for every $h$ there exists $x_h\in\Sigma_{E_h}\cap K$ s.t. $d(x_h,\Sigma_E\cap K)\geq\epsilon$. Since $K$ is compact, we can suppose $x_h\longrightarrow x_0\in K$ and using Corollary $\ref{haus_conv_min}$ we find $x_0\in\partial E$.\\ We only need to show that $x_0\in\Sigma_E$; then for $h$ large enough we have \begin{equation*} x_h\in B_\epsilon(x_0)\subset N_\epsilon(\Sigma_E\cap K), \end{equation*} giving a contradiction. Up to considering translated sets and $\Omega'\subset\subset\Omega$ s.t. $K\subset\Omega'$, we can suppose that $x_h=0=x_0$ for every $h$. If $0$ is a regular point for $E$, then (up to dilation and rotation) we have $\partial E\cap B_{3/2}\subset\{|x_n|<\epsilon_0/2\}$.\\ But then Corollary $\ref{haus_conv_min}$ implies \begin{equation*} \partial E_h\cap B_1\subset\{|x_n|\leq\epsilon_0\}, \end{equation*} for all $h$ large enough and hence $0$ is a regular point for $E_h$ thanks to Theorem $\ref{flat_reg_teo1}$. This gives a contradiction, concluding the proof. \end{proof} \end{prop} \begin{proof}[Proof of Theorem $\ref{singsetdim}$] We begin by proving the following implication: \begin{equation}\label{firststepproof} \h^d(\Sigma_E)>0\quad\Longrightarrow\quad\exists C\subset\R\,s\textrm{-minimal cone s.t. }\h^d(\Sigma_C)>0. \end{equation} Since $\h^d(\Sigma_E)>0$, using $(\ref{hausdenseq})$ we know that there exist $x\in\Sigma_E$ and a sequence $r_h\to0$ s.t.
\begin{equation}\label{dunno1} \h^d_\infty(\Sigma_E\cap B_{r_h}(x))\geq\frac{\omega_dr_h^d}{2^{d+1}}. \end{equation} We can suppose that $x=0$. Up to a subsequence we know that $E_h:=r_h^{-1}E$ converges locally in $\R$ to an $s$-minimal cone $C$. Moreover each $E_h$ is $s$-minimal in $r_h^{-1}\Omega\supset\Omega$. Also notice that clearly \begin{equation*} \Sigma_{E_h}=r_h^{-1}\Sigma_E, \end{equation*} and hence scaling in $(\ref{dunno1})$ we get \begin{equation*} \h_\infty^d(\Sigma_{E_h}\cap B_1)=r_h^{-d}\h_\infty^d(\Sigma_E\cap B_{r_h})\geq\frac{\omega_d}{2^{d+1}}. \end{equation*} Passing to the limit in $h$ and using $(\ref{estimatesizefund})$ we get $\h^d_\infty(\Sigma_C)>0$. Thanks to $(\ref{hauseq1})$, this proves $(\ref{firststepproof})$.\\ Now, if $C\subset\R$ is a singular $s$-minimal cone s.t. $\h^d(\Sigma_C)>0$, we can repeat the same argument blowing up near some $x\in\Sigma_C$, $x\not=0$. By Theorem $\ref{dimeredteo}$ we know that the limiting cone is of the form $A\times\mathbb{R}$, with $A\subset\mathbb{R}^{n-1}$ a singular $s$-minimal cone. From the preceding discussion we know that \begin{equation*} \h^d(\Sigma_{A\times\mathbb{R}})>0 \end{equation*} and hence, using $(\ref{hauseq2})$, \begin{equation*} \h^{d-1}(\Sigma_A)>0. \end{equation*} Repeating this argument, we get the existence of a singular $s$-minimal cone $C_k\subset\mathbb{R}^{n-k}$, with $\h^{d-k}(\Sigma_{C_k})>0$, for every $k<d$. Since, as remarked above, $s$-minimal cones in $\mathbb{R}^3$ can have at most one singular point, we conclude that $d\leq n-3$. \end{proof} As a consequence of this estimate, since we know that $\partial E$ is $C^{1,\alpha}$ in a neighborhood of any regular point, we immediately get the following \begin{coroll} Let $E\subset\R$ be $s$-minimal in $\Omega$. Then $\partial E\cap\Omega$ has Hausdorff dimension at most $n-1$, i.e. \begin{equation*} \h^d(\partial E\cap\Omega)=0,\qquad\textrm{for every }d>n-1.
\end{equation*} \end{coroll} \end{section} \begin{section}{Further Results} In this last section we collect some more results about the regularity of nonlocal minimal surfaces. For the details and the proofs we refer to the corresponding articles.\\ We begin by saying something more about the dimension of the singular set.\\ Suppose we know that there are no singular $s$-minimal cones in $\mathbb{R}^m$. Then we can apply the same argument used in the proof of Theorem $\ref{singsetdim}$ to prove that the singular set $\Sigma_E$ has Hausdorff dimension at most $n-(m+1)$. In the classical case it is well known that there are no singular minimal cones in $\mathbb{R}^m$ for any $m\leq7$. Therefore, using the dimension reduction argument we obtain Theorem $\ref{local_min_reg}$, which completely characterizes the regularity of a classical minimal surface. However in the nonlocal case it is not known, for a general $s$, whether or not there exist singular $s$-minimal cones in dimension $m\geq3$. In \cite{unifor} the authors exploited uniform estimates as $s\to1$ to prove that when $s$ is sufficiently close to 1, no such cones exist, and hence the singular set has the same dimension as in the local case. \begin{teo} Let $n\leq7$. There exists $\epsilon_0>0$ s.t. if $s\in(1-\epsilon_0,1)$, then any $s$-minimal cone is a half-space. \end{teo} \begin{teo} There exists $\epsilon_0>0$ s.t. if $s\in(1-\epsilon_0,1)$, then $(i)\quad$ if $n\leq 7$ then the boundary of any $s$-minimal set is locally a $C^{1,\alpha}$-hypersurface, $(ii)\quad$ if $n=8$ then the boundary of any $s$-minimal set is locally a $C^{1,\alpha}$-hypersurface, except at most at countably many isolated points, $(iii)\quad$ if $n\geq9$ then the boundary of any $s$-minimal set is locally a $C^{1,\alpha}$-hypersurface outside a closed set $\Sigma_E$, with $\h^d(\Sigma_E)=0$ for any $d>n-8$.
\end{teo} On the other hand in \cite{DelPino} the authors studied a particular kind of cones, the Lawson cones \begin{equation*} C_\alpha=\{(u,v)\in\mathbb{R}^m\times\mathbb{R}^n\,|\,|v|=\alpha|u|\}, \end{equation*} with $m,\,n\geq1$, and proved that there is a unique $\alpha=\alpha(s,m,n)>0$ s.t. $C_\alpha$ is $s$-minimal in $\mathbb{R}^{m+n}$. We denote such a cone by $C_m^n(s)$. Unlike what happens in the classical case, when $n\geq3$ a nontrivial $s$-minimal cone, $C^{n-1}_1(s)$, does indeed exist. Moreover when $s$ is sufficiently close to 0, all Lawson cones $C_m^n(s)$ are shown to be unstable if $N:=n+m\leq6$ and stable if $N=7$. This suggests that a regularity theory up to a singular set of dimension $N-7$ should be the best possible for a general $s$.\\ In \cite{ShortReg} the authors improved the regularity in the neighborhood of a regular point, showing that Lipschitz regularity actually implies $C^\infty$ regularity. \begin{teo} Let $n\geq2$ and let $E$ be $s$-minimal in $B_1$. If $\partial E\cap B_1$ is locally Lipschitz, then $\partial E\cap B_1$ is $C^\infty$. \end{teo} This improves Theorem $\ref{tangent_cone_reg}$.\\ In particular this guarantees that the $s$-fractional mean curvature $\I_s[E](x)$ is well defined at every regular point $x\in(\partial E\setminus\Sigma_E)\cap\Omega$ and the Euler-Lagrange equation is satisfied in the classical sense at every such point.\\ Since $\h^{n-1}(\Sigma_E\cap\Omega)=0$, we see that if $E\subset\R$ is $s$-minimal in $\Omega$, then \begin{equation} \I_s[E](x)=0,\quad\h^{n-1}\textrm{-a.e. }x\in\partial E\cap\Omega. \end{equation} In \cite{ShortReg} the authors also proved a Bernstein-type result. \begin{teo} Let $n\geq2$ and let $E=\{(x',x_n)\in\mathbb{R}^{n-1}\times\mathbb{R}\,|\,x_n<u(x')\}$ be an $s$-minimal graph. If there are no singular $s$-minimal cones in dimension $n-1$, then $u$ is an affine function (thus $E$ is a half-space).
\end{teo} Another interesting property concerning $s$-minimal sets and graphs is given in the recent paper \cite{graph}. Roughly speaking, an $s$-minimal set which is a subgraph outside a cylinder is actually a subgraph in the whole space. \begin{teo} Let $n\geq2$ and let $\Omega_0\subset\mathbb{R}^{n-1}$ be a bounded open set with $\partial \Omega_0$ of class $C^{1,1}$. Let $\Omega:=\Omega_0\times\mathbb{R}$ and let $E\subset\R$ be $s$-minimal in $\Omega$. Assume that \begin{equation*} E\setminus\Omega=\{x_n<u(x')\,|\,x'\in\mathbb{R}^{n-1}\setminus\Omega_0\}, \end{equation*} for some continuous $u:\mathbb{R}^{n-1}\longrightarrow\mathbb{R}$. Then \begin{equation*} E\cap\Omega=\{x_n<v(x')\,|\,x'\in\Omega_0\}, \end{equation*} for some $v:\mathbb{R}^{n-1}\longrightarrow\mathbb{R}$. \end{teo} A delicate point is the fact that in general $s$-minimal sets are not regular (not even continuous) up to the boundary. Indeed boundary stickiness phenomena may occur. The boundary behavior is studied in another recent paper by the same authors, \cite{bdary}. We briefly describe one of the many results obtained in this article, an example of the stickiness phenomenon. For any $\delta>0$, let \begin{equation*} K_\delta:=(B_{1+\delta}\setminus B_1)\cap\{x_n<0\}, \end{equation*} a small half-ring. Define $E_\delta$ to be the set minimizing $P_s(F,B_1)$ among all sets $F\subset\R$ s.t. $F\setminus B_1=K_\delta$. We remark that in the local framework the set minimizing the perimeter, among all sets having $K_\delta$ as boundary value at $\partial B_1$, is always the flat set $\{x_n<0\}\cap B_1$, independently of $\delta$. However in the nonlocal framework this changes dramatically, since nonlocal minimizers stick to the boundary $\partial B_1$, provided $\delta$ is suitably small. To be more precise, \begin{teo} There exists $\delta_0=\delta_0(n,s)>0$ s.t. for any $\delta\in(0,\delta_0]$ we have \begin{equation*} E_\delta=K_\delta.
\end{equation*} \end{teo} As described in the article, these rather surprising sticking effects have some (at least vague) heuristic explanations. For example, to see that \begin{equation*} E:=(\{x_n<0\}\cap B_1)\cup K_\delta=\{x_n<0\}\cap B_{1+\delta} \end{equation*} cannot be our nonlocal minimizer, we can look at $\I_s[E](0)$.\\ There is no contribution coming from inside $B_{1+\delta}$ because of symmetry, \begin{equation*} P.V.\int_{B_{1+\delta}}\frac{\chi_E(y)-\chi_{\Co E}(y)}{|y|^{n+s}}dy=0. \end{equation*} On the other hand, outside $B_{1+\delta}$ we have no contribution coming from $E$, \begin{equation*} \int_{\Co B_{1+\delta}}\frac{\chi_E(y)-\chi_{\Co E}(y)}{|y|^{n+s}}dy= -\int_{\Co B_{1+\delta}}\frac{1}{|y|^{n+s}}dy=-\frac{n\,\omega_n}{s}(1+\delta)^{-s}. \end{equation*} Since $\I_s[E](0)<0$, $E$ cannot be the $s$-minimal set we are looking for.\\ Now the idea is that, in order to compensate for the contribution coming from outside $B_{1+\delta}$ (which is the same for every competitor), our set $E$ has to bend near 0, becoming convex. However when $\delta$ is very small this bending is not enough to compensate for the other contribution and the set $E$ has to stick to the half-ring $K_\delta$ in order to satisfy the Euler-Lagrange equation. \end{section} \end{chapter}
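As a numerical aside (ours, not part of the thesis; the function name `exterior_contribution` and its parameters are our own), the exterior integral appearing in the heuristic above can be checked directly. The sketch below evaluates $\int_{|y|>R}|y|^{-n-s}\,dy$ by radial quadrature and is meant to be compared with the closed form $\frac{n\,\omega_n}{s}R^{-s}$ used in the last display, with $R=1+\delta$:

```python
import math

def exterior_contribution(R, s, n, steps=100_000, cutoff=1000.0):
    # \int_{|y|>R} |y|^{-n-s} dy in R^n equals, in polar coordinates,
    # n*omega_n * \int_R^infty r^{-1-s} dr, where omega_n is the volume of
    # the unit ball (so n*omega_n is the measure of the unit sphere).
    omega_n = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    h = (cutoff - R) / steps
    # midpoint rule on [R, cutoff]
    radial = sum((R + (i + 0.5) * h) ** (-1 - s) for i in range(steps)) * h
    radial += cutoff ** (-s) / s  # exact tail beyond the cutoff
    return n * omega_n * radial
```

With, say, $n=3$, $s=1/2$ and $R=1.1$, the quadrature agrees with $\frac{n\,\omega_n}{s}R^{-s}$ to high accuracy.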
https://arxiv.org/abs/0812.2240
The number of elements in the mutation class of a quiver of type $D_n$
We show that the number of quivers in the mutation class of a quiver of Dynkin type $D_n$ is given by $\sum_{d|n} \phi(n/d)\binom{2d}{d}/(2n)$ for $n \geq 5$. To obtain this formula, we give a correspondence between the quivers in the mutation class and certain rooted trees.
\section*{Introduction} Quiver mutation is an important ingredient in the definition of cluster algebras \cite{fz1}. It is an operation on quivers, which induces an equivalence relation on the set of quivers. The mutation class $\mathcal{M}$ of a quiver $Q$ consists of all quivers mutation equivalent to $Q$. If $Q$ is a Dynkin quiver, then $\mathcal{M}$ is finite. In \cite{t} an explicit formula for $|\mathcal{M}|$ is given for Dynkin type $A_n$. Here we give an explicit formula for the number of quivers in the mutation class of a quiver of Dynkin type $D_n$. The formula is given by \[ d(n) = \left\{ \begin{array}{l l} \sum_{d|n} \phi(n/d)\binom{2d}{d}/(2n) & \quad \mbox{if $n \geq 5$}, \\ 6 & \quad \mbox{if $n=4$}, \end{array} \right. \] where $\phi$ is the Euler function. The proof of this formula consists of two parts. The first part shows that the mutation class of type $D_n$ is in 1--1 correspondence with the triangulations (with tagged edges) of a punctured $n$-gon, up to rotation and inversion of tags. This is a generalization of the method used in \cite{t} to count the number of elements in the mutation class of quivers of Dynkin type $A_n$. Here we make strong use of the ideas in \cite{fst} and \cite{s}. In the second part we count the number of (equivalence classes of) triangulations of a punctured $n$-gon, by describing an explicit correspondence with a certain class of rooted trees. A tree in this class is constructed by taking a family of full binary trees $T_1, \dots, T_s$ such that the total number of leaves is $n$, and then adding a node $S$ and an edge from this node to the root of $T_i$ for each $i$, such that $S$ becomes the root (Figure \ref{tree tree} displays all such trees for $n=5$). When these rooted trees are considered up to rotation at the root, they are in 1--1 correspondence with the above mentioned equivalence classes of triangulations of the punctured $n$-gon.
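The formula for $d(n)$ is easy to evaluate in practice. The following Python sketch (ours, not part of the paper; the helper names `euler_phi` and `d` are our own) computes it with exact integer arithmetic:

```python
from math import comb, gcd

def euler_phi(m):
    # Euler's totient function (naive but exact; fine for small m)
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def d(n):
    # Number of quivers in the mutation class of a quiver of Dynkin type D_n,
    # following the formula stated in the introduction (n >= 4).
    if n == 4:
        return 6
    total = sum(euler_phi(n // k) * comb(2 * k, k)
                for k in range(1, n + 1) if n % k == 0)
    assert total % (2 * n) == 0  # the divisor sum is always divisible by 2n
    return total // (2 * n)
```

For instance, this gives $d(5)=26$ and $d(6)=80$.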
To count these rooted trees we use a simple adaptation of a known formula found in \cite{i} and \cite[exercise 7.112 b]{st}. We also point out a mutation operation on these rooted trees, corresponding to the other mutation operations involved (on triangulations and on quivers). Our formula and the bijection to triangulations of the punctured $n$-gon were presented at the ICRA in Torun, August 2007 \cite{t2}. After completing our work, we learnt about the paper \cite{glz}. They also generalize the methods in \cite{t} to prove the bijection from the mutation class of $D_n$ to triangulations of the punctured $n$-gon. However, their method of counting triangulations is very different from ours. They use the classification of quivers of mutation type $D_n$, recently given in \cite{v}. The authors of \cite{glz} end up with a very different formula from ours. In particular, their formula is not explicit, and it seems their formula gives a different value from ours, e.g. for $n=6$. We are grateful to Hugh Thomas for several useful discussions and for the idea of making use of binary trees as an alternative to rooted planar trees. We would also like to thank Dagfinn Vatne for useful discussions. \section{Quiver mutation} Let $Q$ be a quiver with no multiple arrows, no loops and no oriented cycles of length two. Mutation of $Q$ at the vertex $k$ gives a quiver $Q'$ obtained from $Q$ in the following way. \begin{enumerate} \item Add a vertex $k^{*}$. \item For each path $i\rightarrow k \rightarrow j$: if there is an arrow from $j$ to $i$, remove this arrow; if there is no arrow from $j$ to $i$, add an arrow from $i$ to $j$. \item For any vertex $i$ replace all arrows from $i$ to $k$ with arrows from $k^{*}$ to $i$, and replace all arrows from $k$ to $i$ with arrows from $i$ to $k^{*}$. \item Remove the vertex $k$. \end{enumerate} It is easy to see that mutating $Q$ twice at $k$ gives $Q$.
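For quivers encoded by a skew-symmetric matrix $B$, with $B_{ij}>0$ recording $B_{ij}$ arrows $i\to j$, the steps above can be carried out via the standard Fomin--Zelevinsky matrix mutation rule, which agrees with the description above for quivers without loops, 2-cycles or multiple arrows. A minimal Python sketch (ours, for illustration):

```python
def mutate(B, k):
    # Matrix mutation of the exchange matrix B (list of lists of ints) at vertex k.
    n = len(B)
    Bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                # arrows incident to k are reversed
                Bp[i][j] = -B[i][j]
            else:
                # paths i -> k -> j create or cancel arrows i -> j
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j]
                                      + B[i][k] * abs(B[k][j])) // 2
    return Bp
```

Mutating the linearly oriented path $0\to1\to2$ at the middle vertex yields an oriented 3-cycle, and mutating again returns the original quiver, illustrating the involution property noted above.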
We say that two quivers $Q$ and $Q'$ are mutation equivalent if $Q'$ can be obtained from $Q$ by a finite number of mutations. The mutation class of $Q$ consists of all quivers mutation equivalent to $Q$. Figure \ref{figmutationclassD4} gives all quivers in the mutation class of $D_4$, up to isomorphism. \begin{figure}[htp] \begin{center} $$\xymatrix{&\bullet_{4}\ar[d]&\\\bullet_{1}\ar[r]&\bullet_{2}\ar[r]&\bullet_{3}} \hspace{1.5 cm} \xymatrix{&\bullet_{4}\ar[d]&\\\bullet_{1}\ar[r]&\bullet_{2}&\bullet_{3}\ar[l]}$$ $$\xymatrix{&\bullet_{4}\ar[d]&\\\bullet_{1}&\bullet_{2}\ar[l]\ar[r]&\bullet_{3}} \hspace{1.5 cm} \xymatrix{&\bullet_{4}\ar[dr]&\\\bullet_{1}\ar@/_/[rr]&\bullet_{2}\ar[u]\ar[l]&\bullet_{3}\ar[l]}$$ $$\xymatrix{&\bullet_{4}&\\\bullet_{1}&\bullet_{2}\ar[r]\ar[l]\ar[u]&\bullet_{3}} \hspace{1.5 cm} \xymatrix{&\bullet_{4}\ar[d]&\\\bullet_{1}\ar@/_/[rr]&\bullet_{2}\ar[l]&\bullet_{3}\ar[ul]}$$ \end{center}\caption{The mutation class of $D_4$.} \label{figmutationclassD4} \end{figure} It is known from \cite{fz3} that the mutation class of a Dynkin quiver $Q$ is finite. An explicit formula for the number of equivalence classes in the mutation class of any quiver of type $A_n$ was given in \cite{t}. The Catalan number $C(i)$ can be defined as the number of triangulations of an $(i+2)$-gon with $i-1$ diagonals. It is given by $$C(i)=\frac{1}{i+1}\binom{2i}{i}.$$ The number of equivalence classes in the mutation class of any quiver of type $A_n$ is then given by the formula \cite{t} $$a(n)=C(n+1)/(n+3)+C((n+1)/2)/2+(2/3)C(n/3)$$ where the second term is omitted if $(n+1)/2$ is not an integer and the third term is omitted if $n/3$ is not an integer. This formula counts the triangulations of the disk with $n$ diagonals \cite{b}. \section{Cluster-tilted algebras} The cluster category was defined independently in \cite{bmrrt} for the general case and in \cite{ccs} for the $A_n$ case.
Let $\mathcal{D}^b (\operatorname{mod}\nolimits H)$ be the bounded derived category of the finitely generated modules over a finite dimensional hereditary algebra $H$ over a field $K$. In \cite{bmrrt} the cluster category was defined as the orbit category $\mathcal{C}=\mathcal{D}^b (\operatorname{mod}\nolimits H) / \tau^{-1}[1]$, where $\tau$ is the Auslander-Reiten translation and $[1]$ is the suspension functor. The cluster-tilted algebras are the algebras of the form $\Gamma=\operatorname{End}\nolimits_{\mathcal{C}}(T)^{\operatorname{op}\nolimits}$, where $T$ is a cluster-tilting object in $\mathcal{C}$ (see \cite{bmr1}). In this paper we will mostly consider the case where the underlying graph of the quiver of $H$ is of Dynkin type $D$. If $\Gamma = \operatorname{End}\nolimits_{\mathcal{C}}(T)^{\operatorname{op}\nolimits}$ for a cluster-tilting object $T$ in $\mathcal{C}$, and $\mathcal{C}$ is the cluster category of a path algebra of type $D_n$, then we say that $\Gamma$ is of type $D_n$. Let $Q$ be a quiver of a cluster-tilted algebra $\Gamma$. From \cite{bmr2} it is known that if $Q'$ is obtained from $Q$ by a finite number of mutations, then there is a cluster-tilted algebra $\Gamma '$ with quiver $Q'$. Moreover, $\Gamma$ is of finite representation type if and only if $\Gamma '$ is of finite representation type \cite{bmr1}. We also have that $\Gamma$ is of type $D_n$ if and only if $\Gamma '$ is of type $D_n$. It is well known that we can obtain all orientations of a Dynkin quiver by reflections, and hence all orientations of a Dynkin quiver are mutation equivalent. From \cite{bmr3,birs} we know that a cluster-tilted algebra is up to isomorphism uniquely determined by its quiver (see also \cite{ccs2}). It follows from this that the number of non-isomorphic cluster-tilted algebras of type $D_n$ is equal to the number of equivalence classes in the mutation class of any quiver with underlying graph $D_n$.
\section{Category of diagonals of a regular $(n+3)$-gon} In \cite{ccs} Caldero, Chapoton and Schiffler considered regular polygons with $n+3$ vertices and triangulations of such polygons. A diagonal is a straight line between two non-adjacent vertices on the border of the polygon, and a triangulation is a maximal set of diagonals which do not cross. A triangulation of an $(n+3)$-gon consists of exactly $n$ diagonals. In \cite{ccs} the category of diagonals of such polygons was defined, and it was shown to be equivalent to the cluster category, as defined in Section 2, in the $A_n$ case. It was also shown that a cluster-tilting object in the cluster category $\mathcal{C}$ corresponds to a triangulation of the regular $(n+3)$-gon in the $A_n$ case. In \cite{t} it was shown that there is a bijection between isomorphism classes of cluster-tilted algebras of type $A_n$ (or equivalently isomorphism classes of quivers in the mutation class of any quiver with underlying graph $A_n$) and triangulations of the disk with $n$ diagonals (i.e. triangulations of the regular $(n+3)$-gon up to rotation). For any triangulation of the regular $(n+3)$-gon we can define a quiver with $n$ vertices in the following way. The vertices are the midpoints of the diagonals. There is an arrow between $i$ and $j$ if the corresponding diagonals bound a common triangle. The orientation is $i \rightarrow j$ if the diagonal corresponding to $j$ can be obtained from the diagonal corresponding to $i$ by rotating anticlockwise about their common vertex. It is also known from \cite{ccs} that all quivers obtained in this way are quivers of cluster-tilted algebras of type $A_n$. This means that we can define a function $\gamma_n$ from the set of all triangulations of the regular $(n+3)$-gon to the mutation class of $A_n$. There is an induced function $\widetilde{\gamma}_n$ from the set of all triangulations of the disk with $n$ diagonals to the mutation class of $A_n$.
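The construction of the quiver from a triangulation can be made concrete. The sketch below (Python; the function name and conventions are ours) labels the vertices of the polygon $0, \dots, m-1$ counterclockwise, finds the triangles as $3$-cliques of edges (in a polygon triangulation every $3$-clique of sides and diagonals bounds a triangle), and orients arrows by the anticlockwise-rotation rule above.

```python
from itertools import combinations

def quiver_of_triangulation(m, diagonals):
    # Quiver of a triangulation of a convex m-gon with vertices
    # 0, ..., m-1 labelled counterclockwise; `diagonals` is a list of
    # pairs of polygon vertices.  Returns the set of arrows.
    diagset = {tuple(sorted(d)) for d in diagonals}
    border = {tuple(sorted((i, (i + 1) % m))) for i in range(m)}
    edges = diagset | border
    arrows = set()
    for a, b, c in combinations(range(m), 3):
        # every 3-clique of edges bounds a triangle of the triangulation
        if {(a, b), (b, c), (a, c)} <= edges:
            for v, x, y in [(a, b, c), (b, a, c), (c, a, b)]:
                d1, d2 = tuple(sorted((v, x))), tuple(sorted((v, y)))
                if d1 in diagset and d2 in diagset:
                    # arrow d1 -> d2 if d2 is obtained from d1 by rotating
                    # anticlockwise about the common vertex v
                    if (x - v) % m < (y - v) % m:
                        arrows.add((d1, d2))
                    else:
                        arrows.add((d2, d1))
    return arrows

# fan triangulation of the hexagon: the quiver is the linear A_3 quiver
print(quiver_of_triangulation(6, [(0, 2), (0, 3), (0, 4)]))
```

For the fan triangulation of the hexagon this yields the two arrows $(0,2) \rightarrow (0,3) \rightarrow (0,4)$, i.e. a linearly oriented quiver of type $A_3$, as expected.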
It was shown in \cite{t} that $\widetilde{\gamma}_n$ is a bijection. \begin{figure}[htp] \begin{center} \includegraphics[width=5.0cm]{eksan.eps} \end{center}\caption{A triangulation $\Delta$ of the regular 8-gon and the corresponding quiver $\gamma_5(\Delta)$ of type $A_5$.} \label{figeksan} \end{figure} \section{Category of diagonals of a punctured regular $n$-gon} In this paper we will consider the $D_n$ case and we will first recall some results and notions from \cite{s} and \cite{fst}. Let $\mathcal{P}_n$ be a regular polygon with $n$ vertices and one puncture in the center. Diagonals (or edges) will be homotopy classes of paths between two vertices on the border of the polygon. We follow the definitions from \cite{s}. Let $\delta_{a,b}$ be an oriented path between two vertices $a \neq b$ on the border of $\mathcal{P}_n$ in counterclockwise direction, such that $\delta_{a,b}$ does not run through the same point twice. Also let $\delta_{a,a}$ be the path that runs from $a$ to $a$, i.e.\ exactly once around the polygon. We define $|\delta_{a,b}|$ to be the number of vertices on the path $\delta_{a,b}$, including $a$ and $b$. An edge is a triple $(a,\alpha,b)$ where $a$ and $b$ are vertices on the border of the polygon and $\alpha$ is an oriented path from $a$ to $b$ lying in the interior of $\mathcal{P}_n$ and homotopic to $\delta_{a,b}$. Furthermore, the path should not cross itself, and we require $|\delta_{a,b}| \geq 3$. Two edges are equivalent if they start in the same vertex, end in the same vertex and are homotopic. Let $E$ be the set of equivalence classes of edges, and denote by $M_{a,b}$ the equivalence class of edges in $E$ going from $a$ to $b$. In \cite{s} the set of tagged edges is defined as follows. $$\{ M_{a,b}^{\epsilon}|M_{a,b} \in E\text{, } \epsilon \in \{-1,1\} \text{ with } \epsilon = 1 \text{ if } a \neq b \}$$ From now on tagged edges will be called \textit{diagonals}.
Diagonals starting and ending in the same vertex $a$ will be represented as lines between the puncture and the vertex $a$. Diagonals with $\epsilon = -1$ will be drawn with a tag on them. In some cases we will draw them as loops. The crossing number $e(M_{a,b}^\epsilon,N_{c,d}^{\epsilon'})$ is the minimal number of intersections of representations of $M_{a,b}^\epsilon$ and $N_{c,d}^{\epsilon'}$ in the interior of the punctured polygon. When $a=b$ and $c=d$, we let the crossing number be $1$ if $a \neq c$ and $\epsilon \neq \epsilon'$ and $0$ otherwise. If $e(M_{a,b}^\epsilon,N_{c,d}^{\epsilon'}) = 0$, we say that $M_{a,b}^\epsilon$ and $N_{c,d}^{\epsilon'}$ do not cross. Now we can define a triangulation of the punctured $n$-gon, which is a maximal set of non-crossing diagonals. Any such set will have $n$ elements \cite{s}. See some examples of triangulations of the punctured $6$-gon in Figure \ref{figekstagged}. \begin{figure}[htp] \begin{center} \includegraphics[width=4.0cm]{ekstagged1.eps} \includegraphics[width=4.0cm]{ekstagged2.eps} \includegraphics[width=4.0cm]{ekstagged3.eps} \end{center}\caption{Examples of triangulations of the punctured 6-gon.} \label{figekstagged} \end{figure} In \cite{s} a category equivalent to the cluster category in the $D_n$ case is defined in the following way. The objects are direct sums of diagonals (tagged edges), and the morphism space from $\alpha$ to $\beta$ is spanned by sequences of elementary moves modulo the mesh-relations. The equivalence between this category $\mathcal{C}$ and the cluster category in the $D_n$ case was proved in \cite{s}. Furthermore we have the following important results: \begin{itemize} \item $\operatorname{dim}\nolimits \operatorname{Ext}\nolimits_{\mathcal{C}}^1(\alpha,\beta)$ is equal to the crossing number of $\alpha$ and $\beta$. \item A cluster-tilting object corresponds to a triangulation.
\item The Auslander-Reiten translation of a diagonal from $a$ to $b$ is given by clockwise rotation of the diagonal if $a \neq b$. If $a=b$ the AR-translation is given by clockwise rotation and inverting the tag. \end{itemize} Let $\mathcal{T}_n$ be the set of all triangulations of $\mathcal{P}_n$, and let $\Delta$ be an element in $\mathcal{T}_n$. We can assign to $\Delta$ a quiver in the following way (see \cite{fst}). Just as in the $A_n$ case, the vertices are the midpoints of the diagonals. There is an arrow between $i$ and $j$ if the corresponding diagonals bound a common triangle. The orientation is $i \rightarrow j$ if the diagonal corresponding to $j$ can be obtained from the diagonal corresponding to $i$ by rotating anticlockwise about their common vertex. In the case when there are two diagonals $\alpha$ and $\alpha'$ between the puncture and the same vertex on the border, both adjacent to a diagonal $\beta$ and a border edge $\delta$, we consider the triangle with edges $\alpha$, $\beta$ and $\delta$ separately from the triangle with edges $\alpha'$, $\beta$ and $\delta$, when thinking of $\alpha$ and $\alpha'$ as loops around the puncture. If we end up with an oriented cycle of length $2$, we delete both arrows in the cycle. See some examples in Figure \ref{figtriquiv}. \begin{figure}[htp] \begin{center} \includegraphics[width=4.0cm]{triquiv1.eps} \includegraphics[width=4.0cm]{triquiv2.eps} \end{center}\caption{Some examples of triangulations and the corresponding quivers.} \label{figtriquiv} \end{figure} Let $\mathcal{M}_n$ be the mutation class of $D_n$, i.e. all quivers obtained by repeated mutations from $D_n$, up to isomorphism of quivers. We can define a function $\epsilon_n : \mathcal{T}_n \rightarrow \mathcal{M}_n$, where we set $\epsilon_n(\Delta) = Q_{\Delta}$ for any triangulation in $\mathcal{T}_n$.
It is known that $Q_{\Delta}$ is a quiver of Dynkin type $D_n$ and that all quivers of type $D$ can be obtained this way, hence $\epsilon_n$ is surjective. We can define a mutation operation on a triangulation. If $\alpha$ is a diagonal in a triangulation, then mutation at $\alpha$ is defined as replacing $\alpha$ with another diagonal such that we obtain a new triangulation. This can be done in one and only one way. It is known that mutation of quivers commutes with mutation of triangulations under $\epsilon_n$ (see \cite{s,fst}). \section{Bijection between the mutation class of a quiver of type $D_n$ and triangulations up to rotation and inverting tags} Here we adapt the methods and ideas of \cite{t} to obtain a bijection between the mutation class of a quiver of type $D_n$ and the set of triangulations of a punctured $n$-gon up to rotations and inversion of tags. See also \cite{glz}. We say that a diagonal from $a$ to $b$ is \textit{close to the border} if $|\delta_{a,b}|=3$. For a quiver $Q_\Delta$ corresponding to a triangulation $\Delta$, we will always denote by $v_\alpha$ the vertex in $Q_\Delta$ corresponding to the diagonal $\alpha$. From now on we let $n \geq 5$. Let us denote by $S_n$ the triangulation of $\mathcal{P}_n$ shown in Figure \ref{fignotoriented}. Note that this triangulation and the triangulation $S_n$ with all tags inverted are the only triangulations that correspond to the quiver consisting of the oriented cycle of length $n$, $Q_n$. \begin{figure}[htp] \begin{center} \includegraphics[width=4.0cm]{notoriented.eps} \end{center}\caption{The triangulation $S_n$ corresponding to the quiver $Q_n$, the oriented cycle of length $n$.} \label{fignotoriented} \end{figure} \begin{lem}\label{exist diagonal} Let $\Delta$ be a triangulation of $\mathcal{P}_n$, with $\Delta \neq S_n$. Then there exists a diagonal in $\Delta$ which is close to the border. \end{lem} \begin{proof} Let $\Delta$ be a triangulation of $\mathcal{P}_n$.
If $\Delta$ is not $S_n$, then there is at least one diagonal $\alpha$ which connects two vertices on the border. See Figure \ref{figdividedpolygon}. \begin{figure}[h] \begin{center} \includegraphics[width=5cm]{dividedpolygon.eps} \end{center} \caption{The diagonal $\alpha$ divides the polygon into a punctured and a non-punctured surface.} \label{figdividedpolygon} \end{figure} Consider the non-punctured surface $B$ determined by this diagonal. If $\alpha$ is not close to the border, there exists a diagonal that divides the surface $B$ into two smaller surfaces. By induction, there exists a diagonal close to the border. \end{proof} \begin{lem}\label{close to the border sink source cycle} If a diagonal $\alpha$ of a triangulation $\Delta$ is close to the border, then the corresponding vertex $v_{\alpha}$ in $\epsilon_n(\Delta)=Q_{\Delta}$ is either a source, a sink or lies on an oriented cycle of length 3. \end{lem} \begin{proof} Suppose $\alpha$ is a diagonal close to the border. We have to consider the eight cases shown in Figure \ref{figsinksourcecycleD}. In the first picture in Figure \ref{figsinksourcecycleD}, $v_\alpha$ is a source, since no vertex other than $v_\beta$ can be adjacent to $v_\alpha$, or else the corresponding diagonal would cross $\beta$. In the second picture $v_\alpha$ is a sink. In pictures three to six, there are arrows between $v_\alpha$, $v_\beta$ and $v_{\beta'}$, and in the last two pictures, there are arrows between $v_\alpha$, $v_\beta$ and $v_\gamma$, so $v_\alpha$ lies on an oriented cycle of length $3$.
\begin{figure}[htp] \begin{center} \includegraphics[width=3.0cm]{sinksourcecycle1D.eps} \includegraphics[width=3.0cm]{sinksourcecycle2D.eps} \includegraphics[width=3.0cm]{sinksourcecycle3D.eps} \includegraphics[width=3.0cm]{sinksourcecycle3bD.eps} \includegraphics[width=3.0cm]{sinksourcecycle6D.eps} \includegraphics[width=3.0cm]{sinksourcecycle7D.eps} \includegraphics[width=3.0cm]{sinksourcecycle4D.eps} \includegraphics[width=3.0cm]{sinksourcecycle5D.eps} \end{center}\caption{The eight cases in the proof of Lemma \ref{close to the border sink source cycle}.} \label{figsinksourcecycleD} \end{figure} \end{proof} Let $\Delta$ be a triangulation of $\mathcal{P}_n$ and let $\alpha$ be a diagonal close to the border. We define a triangulation $\Delta / \alpha$ of $\mathcal{P}_{n-1}$ obtained from $\Delta$ by letting $\alpha$ be a border edge and leaving all the other diagonals unchanged; we say that we factor out $\alpha$. See Figure \ref{figfactoringD}. Note that this operation is well-defined for each case in Figure \ref{figsinksourcecycleD}. \begin{figure}[h] \begin{center} \includegraphics[width=7cm]{factoringD.eps} \end{center} \caption{Factoring out a diagonal close to the border.} \label{figfactoringD} \end{figure} \begin{lem}\label{is of type Dn} Let $\Delta$ be a triangulation of $\mathcal{P}_n$, with $\Delta \neq S_n$ and let $\epsilon_n(\Delta) = Q_{\Delta}$ be the corresponding quiver. If $\alpha$ is a diagonal close to the border in $\Delta$, then the quiver $Q_\Delta/v_\alpha$ obtained from $Q_\Delta$ by factoring out the vertex $v_{\alpha}$ is connected and of type $D_{n-1}$. Furthermore, we have that $\epsilon_{n-1}(\Delta / \alpha) = Q_\Delta / v_\alpha$, when $\alpha$ is close to the border. \end{lem} \begin{proof} By Lemma \ref{close to the border sink source cycle} we have that $Q_\Delta / v_\alpha$ is connected.
It is also straightforward to verify that $\epsilon_{n-1}(\Delta/\alpha) = Q_\Delta /v_\alpha$ for each case, and hence $Q_\Delta / v_\alpha$ is of type $D_{n-1}$ since $\Delta/\alpha$ is a triangulation of $\mathcal{P}_{n-1}$. \end{proof} Now we describe what happens when we factor out a vertex corresponding to a diagonal not close to the border. We need to consider two cases. We first deal with the case when $\alpha$ is a diagonal not going between the puncture and the border. \begin{lem}\label{not close to the border 1} Let $\Delta$ be a triangulation and $\epsilon_n(\Delta)=Q_\Delta$. If we factor out a vertex in $Q_{\Delta}$ corresponding to a diagonal that is not close to the border and that is not a diagonal between the puncture and the border, then the resulting quiver is disconnected. \end{lem} \begin{proof} Let $\alpha$ be a diagonal not close to the border and not between the puncture and the border. Then the diagonal divides $\mathcal{P}_{n}$ into two surfaces $A$ and $B$. See Figure \ref{figdividedpolygon}. Let $\beta$ be a diagonal in $A$ and $\beta'$ a diagonal in $B$. If $\beta$ and $\beta'$ determined a common triangle, the third side of that triangle would cross $\alpha$; hence there are no arrows between the subquiver determined by $A$ and the subquiver determined by $B$, except those passing through $v_\alpha$. It follows that factoring out $v_\alpha$ disconnects the quiver. \end{proof} Let $\Delta$ be a triangulation of $\mathcal{P}_n$ and let $\alpha$ be a diagonal between the puncture and a vertex $b_i$ on the border of the polygon. We want to understand the effect of factoring out $v_\alpha$ (see Figure \ref{figfactpunctD}). In $\mathcal{P}_n$, create a new vertex $c$ between $b_{i-1}$ and $b_i$ and a new vertex $d$ between $b_i$ and $b_{i+1}$, such that we obtain an $(n+2)$-gon. Let all diagonals that started in $b_i$ now start in $d$ and all diagonals ending in $b_i$ now end in $c$.
Remove the diagonal $\alpha$ and identify the puncture with the vertex $b_i$. If there were two diagonals between the puncture and $b_i$, remove both and draw a diagonal from $c$ to $d$. Leave all the other diagonals unchanged. We will see in the next lemma that this is a triangulation of the non-punctured $(n+2)$-gon. \begin{figure}[htp] \begin{center} \includegraphics[width=12.5cm]{factor_punct.eps} \includegraphics[width=12.5cm]{factor_punct_punct.eps} \end{center}\caption{Factoring out a diagonal from the puncture to the border.} \label{figfactpunctD} \end{figure} Recall that $\gamma_n$ is the function from the set of all triangulations of the regular $(n+3)$-gon to the mutation class of $A_n$, defined in Section 3. We have the following. \begin{lem}\label{not close to the border 2} Let $\Delta$ be a triangulation and $\epsilon_n(\Delta)=Q_\Delta$. If $\alpha$ is a diagonal between the puncture and the border, then the quiver $Q_\Delta / v_\alpha$ obtained from $Q_\Delta$ by factoring out $v_\alpha$ is connected and of type $A_{n-1}$. Furthermore, we have that $\gamma_{n-1}(\Delta/\alpha) = Q_\Delta / v_\alpha$ when $\alpha$ is a diagonal between the puncture and a vertex on the border. \end{lem} \begin{proof} It is clear that $\Delta / \alpha$ has $n-1$ diagonals and that no diagonals cross. This means that the new triangulation is a triangulation of the $(n+2)$-gon without a puncture. We want to show that all triangles are preserved by factoring out a diagonal as described above, and hence we will have that $\gamma_{n-1}(\Delta / \alpha) = Q_\Delta / v_\alpha$, and that $Q_\Delta / v_\alpha$ is of type $A_{n-1}$. First suppose that there is only one diagonal from the puncture to the vertex $b_i$ (see Figure \ref{figfactpunctD}). Then it is easy to see that all triangles are preserved. Next, suppose there are two diagonals $\alpha$ and $\beta$ from the puncture to $b_i$.
In this case we add a new diagonal $\beta'$ between $c$ and $d$ and remove $\alpha$ and $\beta$. Then the diagonals bounding a common triangle with $\beta$ before factoring out $\alpha$ will bound a common triangle with $\beta'$ after factoring out $\alpha$. \end{proof} Summarizing, we get the following proposition. \begin{prop}\label{if and only if} Let $\Delta$ be a triangulation and let $\epsilon_n(\Delta)=Q_\Delta$ be the corresponding quiver. Then $Q_\Delta / v_\alpha$ is connected and of type $D_{n-1}$ if and only if the corresponding diagonal $\alpha$ is close to the border, in which case $\epsilon_{n-1}(\Delta / \alpha) = Q_\Delta / v_\alpha$. \end{prop} \begin{proof} From Lemma \ref{is of type Dn}, we have that if $\alpha$ is close to the border, then $Q_\Delta / v_\alpha$ is of type $D_{n-1}$. If $\alpha$ is not close to the border, we have by Lemma \ref{not close to the border 1} and Lemma \ref{not close to the border 2} that $Q_\Delta / v_\alpha$ is either disconnected or of type $A_{n-1}$. \end{proof} If $\Delta$ is a triangulation of $\mathcal{P}_n$, we want to add a diagonal $\alpha$ and a vertex on the polygon such that $\alpha$ is a diagonal close to the border and such that $\Delta \cup \alpha$ is a triangulation of $\mathcal{P}_{n+1}$. Consider any border edge $m$ on $\mathcal{P}_n$. We consider the eight different cases for the triangle containing $m$, as shown in Figure \ref{figextendingD}. We can define the extension at $m$ for each case. See Figure \ref{figsinksourcecycleD} for the corresponding extensions.
\begin{figure}[htp] \begin{center} \includegraphics[width=3.0cm]{extending1D.eps} \includegraphics[width=3.0cm]{extending2D.eps} \includegraphics[width=3.0cm]{extending3D.eps} \includegraphics[width=3.0cm]{extending4D.eps} \includegraphics[width=3.0cm]{extending5D.eps} \includegraphics[width=3.0cm]{extending6D.eps} \includegraphics[width=3.0cm]{extending7D.eps} \includegraphics[width=3.0cm]{extending8D.eps} \end{center}\caption{Extension at $m$.} \label{figextendingD} \end{figure} For a given diagonal $\beta$, there are at most three ways to extend the polygon with a diagonal $\alpha$ such that $\alpha$ is adjacent to $\beta$. These extensions give non-isomorphic quivers, except when the triangulation is $S_n$. Combining Lemma \ref{exist diagonal} and Lemma \ref{is of type Dn}, we get that for a quiver $Q$ which is not $Q_n$, there always exists a vertex $v$ such that the quiver $Q'$ obtained from $Q$ by factoring out $v$ is connected and a quiver of a cluster-tilted algebra of type $D$. Furthermore, such a vertex must correspond to a diagonal close to the border in any triangulation $\Delta$ such that $\epsilon_n(\Delta) = Q$. For a triangulation $\Delta$ of $\mathcal{P}_n$, let us denote by $\Delta(i)$ the triangulation obtained from $\Delta$ by rotating $i$ steps in the clockwise direction. Also denote by $\Delta^{-1}$ the triangulation obtained from $\Delta$ by inverting all tags. We define an equivalence relation on $\mathcal{T}_n$ where we let $\Delta \sim \Delta(i)$ for all $i$ and $\Delta^{-1} \sim \Delta$. We define a new function $\widetilde{\epsilon}_n:(\mathcal{T}_n / \! \! \sim) \rightarrow \mathcal{M}_n$ induced from $\epsilon_n$. This is well-defined, and since $\epsilon_n$ is a surjection, we also have that $\widetilde{\epsilon}_n$ is a surjection. We actually have the following. \begin{thm}\label{maintheorem} The function $\widetilde{\epsilon}_n:(\mathcal{T}_n / \! \! \sim) \rightarrow \mathcal{M}_n$ is bijective for all $n \geq 5$.
\end{thm} \begin{proof} We already know that $\widetilde{\epsilon}_n$ is surjective. Suppose $\widetilde{\epsilon}_n(\Delta) = \widetilde{\epsilon}_n(\Delta')$. We want to show that $\Delta = \Delta'$ in $(\mathcal{T}_n / \! \! \sim)$ by induction on $n$. It is straightforward to check that $\widetilde{\epsilon}_5:(\mathcal{T}_5 / \! \! \sim) \rightarrow \mathcal{M}_5$ is injective. Suppose $\widetilde{\epsilon}_{n-1}:(\mathcal{T}_{n-1} / \! \! \sim) \rightarrow \mathcal{M}_{n-1}$ is injective. Let $\alpha$ be a diagonal close to the border in $\Delta$, with image $v_\alpha$ in $Q$, where $Q$ is a representative for $\widetilde{\epsilon}_n(\Delta)$. Then the diagonal $\alpha'$ in $\Delta'$ corresponding to $v_\alpha$ in $Q$ is also close to the border by Proposition \ref{if and only if}. We have $\widetilde{\epsilon}_{n-1}(\Delta/\alpha)=\widetilde{\epsilon}_{n-1}(\Delta'/\alpha') = Q/v_\alpha$, and hence by the induction hypothesis, $\Delta/\alpha = \Delta'/\alpha'$ in $(\mathcal{T}_{n-1}/ \! \! \sim)$. We can obtain $\Delta$ and $\Delta'$ from $\Delta/\alpha = \Delta'/\alpha'$ by extending the polygon at some border edge. Fix a diagonal $\beta$ in $\Delta$ such that $v_\alpha$ and $v_\beta$ are adjacent. This can be done since $Q$ is connected. Let $\beta'$ be the diagonal in $\Delta'$ corresponding to $v_\beta$. By the above there are at most three ways to extend $\Delta/\alpha$ such that the new diagonal is adjacent to $\beta$. It is clear that these extensions will be mapped by $\widetilde{\epsilon}_n$ to non-isomorphic quivers. Also there are at most three ways to extend $\Delta'/\alpha'$ such that the new diagonal is adjacent to $\beta'$, and all these extensions are mapped to non-isomorphic quivers, thus $\Delta=\Delta'$ in $(\mathcal{T}_n / \! \! \sim)$. \end{proof} \begin{cor} The number $d(n)$ of elements in the mutation class of any quiver of type $D_n$ is equal to the number of triangulations of the regular punctured $n$-gon up to rotations and inverting all tags.
\end{cor} \section{Equivalences on the cluster category in the $D_n$ case} Since the Auslander-Reiten translation $\tau$ is an equivalence, it is clear that if $T$ is a cluster-tilting object in $\mathcal{C}$, then the cluster-tilted algebras $\operatorname{End}\nolimits_{\mathcal{C}}(T)^{\operatorname{op}\nolimits}$ and $\operatorname{End}\nolimits_{\mathcal{C}}(\tau T)^{\operatorname{op}\nolimits}$ are isomorphic. We know that $\tau$ corresponds to rotation of diagonals. In \cite{t} it was proven that, in the $A_n$ case, if $T$ and $T'$ are cluster-tilting objects in $\mathcal{C}$, then the cluster-tilted algebras $\operatorname{End}\nolimits_{\mathcal{C}}(T)^{\operatorname{op}\nolimits}$ and $\operatorname{End}\nolimits_{\mathcal{C}}(T')^{\operatorname{op}\nolimits}$ are isomorphic if and only if $T' = \tau^i T$ for some $i \in \mathbb{Z}$. Let $\alpha$ be a diagonal (indecomposable object in $\mathcal{C}$). If $\alpha$ is a diagonal between the puncture and the border, let $\alpha^{-1}$ denote the diagonal $\alpha$ with inverted tag. We define $$\mu \alpha = \begin{cases}\alpha^{-1}& \mbox{ if } \alpha \mbox{ is a diagonal between the puncture and the border,} \\ \alpha& \mbox{ otherwise.}\end{cases}$$ If $\alpha$ is not a diagonal between the puncture and the border, then clearly $\tau^n \alpha = \alpha$. Now, let $\alpha$ be a diagonal between the puncture and the border. Suppose $n$ is even. Then it is clear for combinatorial reasons that $\tau^n \alpha = \alpha$ and that $\tau^i \alpha \neq \alpha^{-1}$ for any $i$. If $n$ is odd, then $\tau^n \alpha = \alpha^{-1}$ and hence $\tau^{n} = \mu$. See Figure \ref{figARquiverD_5} for an example of an AR-quiver in the $D_5$ case. \begin{thm}\label{maintheorem2} Let $T$ and $T'$ be cluster-tilting objects in $\mathcal{C}$.
Then the cluster-tilted algebras $\operatorname{End}\nolimits_{\mathcal{C}}(T)^{\operatorname{op}\nolimits}$ and $\operatorname{End}\nolimits_{\mathcal{C}}(T')^{\operatorname{op}\nolimits}$ are isomorphic if and only if $T' = \mu^i \tau^j T$ for some $i,j \in \mathbb{Z}$. \end{thm} \begin{proof} Let $\Delta$ be a triangulation corresponding to $T$ and $\Delta'$ a triangulation corresponding to $T'$. If $T' \not\simeq \mu^i \tau^j T$ for all $i,j \in \mathbb{Z}$, then $\Delta'$ cannot be obtained from $\Delta$ by a rotation, possibly combined with inverting all tags, so $\Delta \neq \Delta'$ in $(\mathcal{T}_n / \! \! \sim)$. It then follows from Theorem \ref{maintheorem} that $\operatorname{End}\nolimits_{\mathcal{C}}(T)^{\operatorname{op}\nolimits}$ is not isomorphic to $\operatorname{End}\nolimits_{\mathcal{C}}(T')^{\operatorname{op}\nolimits}$. \end{proof} It is clear that $\mu$ is an equivalence on the cluster category, since $\mu^2 = \operatorname{id}$. \begin{figure}[htp] \begin{center} \includegraphics[width=12.0cm]{ARquiverD_5.eps} \end{center}\caption{AR quiver for the cluster category in the $D_5$ case.} \label{figARquiverD_5} \end{figure} \section{The number of triangulations of punctured polygons} In this section we want to find an explicit formula for the number of triangulations of punctured polygons up to rotation and inversion of tags. Let $\mathcal{B}_n$ be the set of equivalence classes of trees such that \begin{itemize} \item every full subtree not containing the root is a full binary tree, i.e.\ every inner node has either two or no children, \item there are exactly $n$ leaves and \item two trees are equivalent if one can be obtained from the other by rotating at the root. \end{itemize} As before, let $\mathcal{T}_n / \! \! \sim$ be the set of triangulations of the punctured $n$-gon, where rotations and inverting tags give equivalent triangulations. In this section we will draw certain tagged edges as loops. If there are two diagonals between the puncture and the same vertex, we will draw one diagonal as a loop. See Figure \ref{figtagtoloop}.
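For small $n$ the set $\mathcal{B}_n$ can be enumerated directly, which gives an independent check of the count below. A brute-force sketch in Python (the names are ours): a tree in $\mathcal{B}_n$ is represented by the sequence of full binary subtrees hanging off the root, taken up to cyclic rotation.

```python
from itertools import product

def full_binary(k):
    # all full binary trees with k leaves, encoded as nested pairs
    if k == 1:
        return [()]
    out = []
    for i in range(1, k):
        for left in full_binary(i):
            for right in full_binary(k - i):
                out.append((left, right))
    return out

def compositions(n):
    # ordered sequences of positive integers summing to n
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def count_B(n):
    # elements of B_n: sequences of full binary trees with n leaves in
    # total, hanging off a common root, taken up to rotation at the root
    seen = set()
    for comp in compositions(n):
        for trees in product(*(full_binary(k) for k in comp)):
            rotations = [trees[i:] + trees[:i] for i in range(len(trees))]
            seen.add(min(rotations))  # canonical representative
    return len(seen)

print([count_B(n) for n in range(5, 9)])  # agrees with d(n): [26, 80, 246, 810]
```

Taking the lexicographically smallest cyclic rotation as canonical representative is what implements "up to rotation at the root".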
\begin{figure}[htp] \begin{center} \includegraphics[width=10cm]{tagtoloop.eps} \end{center}\caption{Drawing tagged edges as loops} \label{figtagtoloop} \end{figure} We define a function $\sigma : \mathcal{T}_n / \! \! \sim \rightarrow \mathcal{B}_n$ by assigning a tree to each triangulation. Let $\Delta$ be a triangulation. We let $\sigma(\Delta)$ be the tree obtained in the following way. The nodes are the triangles of $\Delta$; draw an edge between two triangles $E$ and $E'$ if they are adjacent and their common diagonal is not a diagonal between the puncture and the border. Note that a loop in this case is not an edge between the puncture and the border. When a triangle $E$ contains one or two border edges, also draw one or two edges from the corresponding node to the outside of the polygon, crossing the border edges. These will be the leaf edges. Then identify the nodes corresponding to the triangles adjacent to the puncture; the identified node is the root of the tree. See Figure \ref{figtreitree} for some examples. \begin{figure}[htp] \begin{center} \includegraphics[width=12.5cm]{tritreedraw1.eps} \includegraphics[width=12.5cm]{tritreedraw2.eps} \end{center}\caption{Triangulation and corresponding tree.} \label{figtreitree} \end{figure} It is clear that $\sigma$ is a well-defined function. Our aim is to show that $\sigma$ is a bijection. Let the tree $R_n$ be the tree consisting of exactly $n$ edges from the root, as shown in Figure \ref{root tree}. Note that this is the unique tree which is the image of the triangulation $S_n$. \begin{figure}[htp] \begin{center} \includegraphics[width=3cm]{roottree.eps} \end{center}\caption{The tree $R_n$ consisting of exactly $n$ edges from the root.} \label{root tree} \end{figure} Now we want to define a function $\lambda:\mathcal{B}_n \rightarrow \mathcal{T}_n / \! \! \sim$ and we will see that this is the inverse of $\sigma$. Given a tree $T$ with $n$ leaves, we will here describe $\lambda(T)$.
We know that an inner edge of a tree (an edge not going to a leaf) corresponds to a diagonal $\alpha$ not going between the puncture and the border. Suppose $\alpha$ is an inner edge of $T$. Let $T'$ be the full subtree of $T$ rooted at the lower endpoint of $\alpha$. If $T'$ has $n$ leaves, we draw a segment of a polygon consisting of $n$ border edges. See Figure \ref{correspondingpart2}. Suppose the subtree to the left of the root in $T'$ has $r \geq 2$ leaves. Then we draw a diagonal $\beta$ from $v_1$ to $v_{r+1}$. If $r+1 \neq n$ we draw a diagonal $\delta$ from $v_{r+1}$ to $v_{n+1}$. We can continue like this with $\beta$ and $\delta$ until we have, by induction, a complete triangulation of the segment of the polygon. \begin{figure}[htp] \begin{center} \includegraphics[width=10.0cm]{correspondingpart2.eps} \end{center}\caption{Triangulating the segment corresponding to the subtree $T'$.} \label{correspondingpart2} \end{figure} Now, suppose $T$ has $k$ edges from the root, namely $t_1, t_2, \dots, t_k$. Suppose the full subtree with root ending in $t_i$ has $d_i$ leaves. Then $\sum_i d_i = n$. Draw a punctured polygon with $n$ border edges and draw $k$ diagonals between the puncture and vertices on the border such that each segment has $d_i$ border edges in anticlockwise direction. For each segment defined by $t_i$, apply the procedure described above to obtain a triangulation of the segment. See Figure \ref{correspondingpart}. \begin{figure}[htp] \begin{center} \includegraphics[width=10.0cm]{correspondingpart.eps} \end{center}\caption{The segments corresponding to the edges at the root.} \label{correspondingpart} \end{figure} It is clear from the construction that $\lambda$ is the inverse of $\sigma$, so we have the following. \begin{thm} $\sigma: \mathcal{T}_n / \! \! \sim \; \rightarrow \mathcal{B}_n$ is a bijection.
\end{thm} The number of rooted planar trees with $n+1$ nodes, where trees obtained from each other by rotating at the root are considered equivalent, is given by the formula \[\sum_{d|n} \phi(n/d)\binom{2d}{d}/(2n)\] where $\phi$ is the Euler function (see \cite{i} and the references given there, and exercise 7.112 b in \cite{st}). The number of planar trees with $n+1$ nodes and the number of planar binary trees with $n+1$ leaves are both given by the $n$th Catalan number. It follows that the number of elements in $\mathcal{B}_n$ is given by the above formula. \begin{cor} The number $d(n)$ of elements in the mutation class of any quiver of type $D_n$ is given by: \[ d(n) = \left\{ \begin{array}{l l} \sum_{d|n} \phi(n/d)\binom{2d}{d}/(2n) & \quad \mbox{if $n \geq 5$}, \\ 6 & \quad \mbox{if $n=4$}, \end{array} \right. \] where $\phi$ is the Euler function. \end{cor} We proved this for $n \geq 5$; for $n=4$ the number is $6$. See Figure \ref{figmutationclassD4} for all quivers in the mutation class of $D_4$. See Table \ref{tableexamples} for some values of $d(n)$. \begin{table} \begin{tabular}{l|l} $n$&$d(n)$\\ \hline 3&4\\ 4&6\\ 5&26\\ 6&80\\ 7&246\\ \end{tabular} \; \; \begin{tabular}{l|l} $n$&$d(n)$\\ \hline 8&810\\ 9&2704\\ 10&9252\\ 11&32066\\ 12&112720\\ \end{tabular} \caption{Some values of $d(n)$.}\label{tableexamples} \end{table} \section{Mutation of trees} We want to define a mutation operation on the elements in $\mathcal{B}$, and we want this to commute with mutation of triangulations. Mutating a triangulation at a given diagonal is defined as removing this diagonal and replacing it with another one to obtain a new triangulation. This can be done in one and only one way. Let $\Delta$ be a triangulation in $\mathcal{T}_n$ and let $\sigma(\Delta) = T$ be the corresponding tree. An inner edge of $T$ corresponds to a diagonal in $\Delta$ not going from the puncture to the border, since such edges cross these diagonals when we construct $T$ from $\Delta$.
However, when we construct $T$ from $\Delta$, no edge in $T$ crosses a diagonal between the puncture and the border. To define mutation on $T$ corresponding to mutating at a diagonal $\alpha$ between the puncture and the border, we instead define mutation at two adjacent edges from the root in $T$, namely the two edges from the root in $T$ separated by $\alpha$. \begin{enumerate} \item Let $v_1$ be an edge from the root in $T$. The mutation of $T$ at $v_1$ is a new tree obtained in the following way. Remove the edge $v_1$. Identify the root of the full subtree of $T$ ending in $v_1$ with the root of $T$. See the first picture in Figure \ref{figtreemutate1}. \item Let $x$ and $y$ be two adjacent edges from the root of $T$. The mutation of $T$ at $x$ and $y$ is a new tree obtained in the following way. Disconnect the full subtree of $T$ containing $x$ and $y$. Add an edge $v_1$ from the root and connect the subtree to the end of $v_1$. See the second picture in Figure \ref{figtreemutate1}. \item Let $v$ be an inner edge neither going from the root nor to a leaf. The mutation of $T$ at $v$ is a new tree obtained in the following way. Suppose $v$ is an edge from node $r$ to node $t$, going down in the tree. Let $x$ be the other edge starting in $r$, and let $y$ and $z$ be the two edges starting in $t$. See the third and fourth pictures in Figure \ref{figtreemutate1}. Suppose $x$ goes to the left from $r$ and $v$ goes to the right, as in the third picture. Disconnect the full subtree with $t$ as a root. Remove the edge $v$ and identify $r$ with $t$. Disconnect the full subtree $T'$ containing $x$ and $y$. Create a new edge $v'$ starting in $r$ and identify the root of $T'$ with the node at the end of $v'$. See the third picture in Figure \ref{figtreemutate1}. If $x$ goes to the right from $r$ and $v$ goes to the left, we define mutation at $v$ in a similar way, as shown in the fourth picture.
\end{enumerate} \begin{figure}[htp] \begin{center} \includegraphics[width=12.5cm]{treemutate1.eps} \includegraphics[width=12.5cm]{treemutate2.eps} \includegraphics[width=12.5cm]{treemutate3a.eps} \includegraphics[width=12.5cm]{treemutate3b.eps} \end{center}\caption{Mutation of a tree.} \label{figtreemutate1} \end{figure} We claim that mutation of a tree as defined above commutes with mutation of triangulations. We leave the details of the proof to the reader. \begin{prop}\label{treetricommutes} Mutation of trees commutes with mutation of triangulations and quivers. \end{prop} \begin{proof}[Sketch of proof] For mutation of type 3, we mutate at a diagonal not going between the puncture and the border, so we are in the situation shown in Figure \ref{muttype3}. We see that mutation of trees commutes with mutation of triangulations. \begin{figure}[htp] \begin{center} \includegraphics[width=10cm]{muttype3.eps} \end{center}\caption{Mutation of triangulations and trees commute. See proof of Proposition \ref{treetricommutes}.} \label{muttype3} \end{figure} For mutations of type 1 and 2, we have to consider the three cases shown in Figure \ref{muttype1}. In these cases we also see that mutation as defined above commutes with mutation of triangulations. \begin{figure}[htp] \begin{minipage}[b]{12.5 cm} \hspace{1.5 cm} \includegraphics[width=10cm]{muttype1a.eps} \vspace{1 cm} \end{minipage} \begin{minipage}[b]{12.5 cm} \hspace{1.5 cm} \includegraphics[width=10cm]{muttype1b.eps} \vspace{1 cm} \end{minipage} \begin{minipage}[b]{12.5 cm} \hspace{1.5 cm} \includegraphics[width=10cm]{muttype1c.eps} \end{minipage} \caption{Mutation of triangulations and trees commute. See proof of Proposition \ref{treetricommutes}.} \label{muttype1} \end{figure} \end{proof} Figures \ref{triangulations tree} and \ref{tree tree} show the mutations of type 1 and 2 for both triangulations and trees in the $D_5$ case.
Note that mutation of type 2 adds an edge from the root or, equivalently, replaces a diagonal not between the puncture and the border with a diagonal between the puncture and the border. Mutation of type 1 is the opposite operation. This defines a tree of mutations as shown in Figures \ref{triangulations tree} and \ref{tree tree}, where going down in the tree corresponds to mutation of type 1 and going up in the tree corresponds to mutation of type 2. If we drew arrows for mutations of type 3, the arrows would go to trees (or triangulations) at the same level in the tree of mutations. It is easy to see that this holds in general for any $n$. \small \begin{figure}\centering \includegraphics[width=12.0cm]{triang_tree_r.eps} \caption{All triangulations of type $D_5$.} \label{triangulations tree} \end{figure} \begin{figure}\centering \includegraphics[width=11.5cm]{tree_tree_r.eps} \caption{All trees of type $D_5$.} \label{tree tree} \end{figure} \clearpage \newpage \small
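As a quick sanity check of the counting formula for $d(n)$ above, the following short Python script (an editorial illustration, not part of the paper) reproduces the values from Table \ref{tableexamples}:

```python
from math import comb, gcd

def phi(m):
    # Euler's totient function by direct counting (fine for the small m used here)
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

def d(n):
    # Number of quivers in the mutation class of a quiver of type D_n (n >= 5)
    total = sum(phi(n // dd) * comb(2 * dd, dd)
                for dd in range(1, n + 1) if n % dd == 0)
    return total // (2 * n)

print([d(n) for n in range(5, 13)])
# → [26, 80, 246, 810, 2704, 9252, 32066, 112720]
```

Note that the same formula evaluated at $n=4$ gives $10$, not $6$, consistent with the separate case in the corollary.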
https://arxiv.org/abs/1307.2262
Approximate the k-Set Packing Problem by Local Improvements
We study algorithms based on local improvements for the $k$-Set Packing problem. The well-known local improvement algorithm by Hurkens and Schrijver has been improved by Sviridenko and Ward from $\frac{k}{2}+\epsilon$ to $\frac{k+2}{3}$, and by Cygan to $\frac{k+1}{3}+\epsilon$ for any $\epsilon>0$. In this paper, we achieve the approximation ratio $\frac{k+1}{3}+\epsilon$ for the $k$-Set Packing problem using a simple polynomial-time algorithm based on the method by Sviridenko and Ward. With the same approximation guarantee, our algorithm runs in time singly exponential in $\frac{1}{\epsilon^2}$, while the running time of Cygan's algorithm is doubly exponential in $\frac{1}{\epsilon}$. On the other hand, we construct an instance with locality gap $\frac{k+1}{3}$ for any algorithm using local improvements of size $O(n^{1/5})$, where $n$ is the total number of sets. Thus, our approximation guarantee is optimal with respect to results achievable by algorithms based on local improvements.
\section{Introduction} Given a universe of elements $U$ and a collection $\mathcal{S}$ of subsets of $U$ of size at most $k$, the $k$-Set Packing problem asks to find a maximum number of disjoint sets from $\mathcal{S}$. The most prominent approach for the $k$-Set Packing problem is based on local improvements. In each round, the algorithm selects $p$ sets from the current packing and replaces them with $p+1$ sets such that the new solution is still a valid packing. It is well-known that for any $\epsilon>0$, there exists a constant $p$ such that the local improvement algorithm has an approximation ratio $\frac{k}{2}+\epsilon$ \cite{schrijver}. In quasi-polynomial time, the result has been improved to $\frac{k+2}{3}$ \cite{Halldorsson1995} and later to $\frac{k+1}{3}+\epsilon$ for any $\epsilon>0$ \cite{sellhyperedge}, using local improvements of size $O(\log n)$, where $n$ is the size of $\mathcal{S}$. In \cite{sellhyperedge}, the algorithm looks for any local improvement of size $O(\log n)$, while in \cite{Halldorsson1995}, only sets which intersect with at most 2 sets in the current solution are considered and the algorithm looks for improvements of a binocular shape. One can obtain a polynomial-time algorithm which looks for local improvements of logarithmic size using the color coding technique \cite{ksetpacking2013,bestksetpacking}. The algorithm in \cite{ksetpacking2013} looks for local improvements similar to \cite{Halldorsson1995} and has an approximation ratio $\frac{k+2}{3}$. In \cite{bestksetpacking}, local improvements of bounded pathwidth are considered and an approximation ratio $\frac{k+1}{3}+\epsilon$, for any $\epsilon>0$, is achieved. In this paper, we obtain an approximation ratio $\frac{k+1}{3}+\epsilon$ for the $k$-Set Packing problem, for any $\epsilon>0$.
On the other hand, we improve the lower bound given in \cite{ksetpacking2013} by constructing an instance on which any algorithm using local improvements of size $O(n^{1/5})$ has a performance ratio of at least $\frac{k+1}{3}$. Thus, our result is optimal with respect to the performance guarantee achievable by algorithms using local improvements. Our algorithm extends the types of local improvements considered in \cite{Halldorsson1995,ksetpacking2013} by first looking for a series of set replacements which swap some sets in the current packing $\mathcal{A}$ with the same number of disjoint sets $\mathcal{T}$ which are not in $\mathcal{A}$. We then look for local improvements, which can be decomposed into cycles and paths, formed from sets in $\mathcal{S}\setminus(\mathcal{A}\cup\mathcal{T})$ which intersect with at most 2 sets in $\mathcal{A}$. We also use the color-coding technique \cite{colorcoding,fixparapacking} to ensure a polynomial time complexity when the local improvement has logarithmic size. Our algorithm is more efficient as it runs in time singly exponential in $\frac{1}{\epsilon^2}$, while the running time of Cygan's algorithm \cite{bestksetpacking} is doubly exponential in $\frac{1}{\epsilon}$. We believe that our approach makes an important step towards a practical algorithm for the $k$-Set Packing problem. {\bf Related works.} The Set Packing problem has been studied for decades. H{\aa}stad has shown that the general Set Packing problem cannot be approximated within $N^{1-\epsilon}$ unless $NP\subseteq ZPP$ \cite{setpackinglowerbound}. Here $N$ is the size of the universe $U$. The bounded Set Packing problem assumes an upper bound on the size of the sets. In the unweighted case, i.e. the $k$-Set Packing problem, besides algorithms based on local improvements \cite{schrijver,Halldorsson1995,sellhyperedge,ksetpacking2013,bestksetpacking}, Chan and Lau have shown that the standard linear programming relaxation has an integrality gap of $k-1+1/k$ \cite{lpsdppacking}.
They have also constructed a polynomial-sized semi-definite program with integrality gap $\frac{k+1}{2}$, but no rounding strategy is provided. The problem is also known to have a lower bound $\Omega(\frac{k}{\log k})$ \cite{ksetpackinglowerbound}. In the weighted case, Chandra and Halld\'{o}rsson have given a nontrivial approximation ratio $\frac{2(k+1)}{3}$ \cite{weightpacking2}. The result was improved to $\frac{k+1}{2}+\epsilon$ by Berman \cite{weightpacking}, which remains the best so far. The paper is organized as follows. In Section 2, we review previous local search algorithms and define useful tools for the analysis. In Section 3, we introduce the new local improvement and analyze its performance guarantee. In Section 4, we give an efficient implementation of our algorithm. In Section 5, we give a lower bound for algorithms based on local improvements for the $k$-Set Packing problem. We conclude in Section 6. \section{Preliminaries} \subsection{Local improvements} Let $\mathcal{S}$ be a collection of subsets of size at most $k$ of the universe $U$, and let the size of $\mathcal{S}$ be $n$. Let $\mathcal{A}$ be the collection of disjoint sets chosen by the algorithm. In this paper, we are interested in the unweighted $k$-Set Packing problem. We assume without loss of generality that every set is of uniform size $k$; otherwise, we could add distinct elements to any set until it is of size $k$. In the following context, we use calligraphic letters to represent collections of $k$-sets and capital letters to represent sets of vertices which correspond to $k$-sets. The most widely used algorithm for the $k$-Set Packing problem is local search. The algorithm starts by picking an arbitrary maximal packing.
If there exists a collection of $p+1$ sets $\mathcal{P}$ which are not in $\mathcal{A}$ and a collection of $p$ sets $\mathcal{Q}$ in $\mathcal{A}$, such that $(\mathcal{A}\setminus \mathcal{Q})\cup \mathcal{P}$ is a valid packing, the algorithm will replace $\mathcal{Q}$ with $\mathcal{P}$. We call this a {\it $(p+1)$-improvement}. With $p$ being a constant which depends on $\epsilon$, it is well-known that this local search algorithm achieves an approximation ratio $\frac{k}{2}+\epsilon$, for any $\epsilon>0$ \cite{schrijver}. \begin{theorem}[\cite{schrijver}] \label{thm:hs} For any $\epsilon>0$, there exists an integer $p=O(\log_k \frac{1}{\epsilon})$, such that the local search algorithm which looks for any local improvement of size $p$ has an approximation ratio $\frac{k}{2}+\epsilon$ for the $k$-Set Packing problem. \end{theorem} Halld\'{o}rsson \cite{Halldorsson1995} and later Cygan et al. \cite{sellhyperedge} showed that when $p$ is $O(\log n)$, the approximation ratio can be improved at the cost of quasi-polynomial time complexity. Based on the methods of \cite{Halldorsson1995}, Sviridenko and Ward \cite{ksetpacking2013} have obtained a polynomial-time algorithm using the color coding technique \cite{colorcoding}. We summarize their algorithm as follows. Let $\mathcal{A}$ be the packing chosen by the algorithm and $\mathcal{C}=\mathcal{S}\setminus \mathcal{A}$. Construct an {\it auxiliary multi-graph $G_{A}$} as follows. The vertices in $G_{A}$ represent sets in $\mathcal{A}$. For any set in $\mathcal{C}$ which intersects with exactly two sets $s_1,s_2\in\mathcal{A}$, add an edge between $s_1$ and $s_2$. For any set in $\mathcal{C}$ which intersects with only one set $s\in \mathcal{A}$, add a self-loop on $s$. The algorithm searches for local improvements which can be viewed as {\it binoculars} in $G_{A}$, called {\it canonical improvements} in \cite{ksetpacking2013}. A binocular can be decomposed into paths and cycles.
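To make the construction of the auxiliary multi-graph $G_A$ concrete, here is a minimal Python sketch (the function name and the encoding of sets as frozensets are our own choices, not from \cite{ksetpacking2013}):

```python
def auxiliary_multigraph(packing, candidates):
    """Build the auxiliary multigraph G_A: vertices are indices of sets in
    the packing; each candidate set touching exactly two packing sets
    becomes an edge, and one touching a single packing set becomes a
    self-loop.  Candidates touching three or more packing sets are ignored."""
    edges = []  # list of (i, j, candidate); i == j encodes a self-loop
    for s in candidates:
        hits = [i for i, a in enumerate(packing) if a & s]
        if len(hits) == 1:
            edges.append((hits[0], hits[0], s))   # self-loop on the one set hit
        elif len(hits) == 2:
            edges.append((hits[0], hits[1], s))   # edge between the two sets hit
        # degree >= 3: such a candidate is not represented in G_A
    return edges

# Tiny example with k = 3: a candidate meeting both packing sets yields an
# edge, a candidate meeting only the first yields a self-loop.
A = [frozenset({1, 2, 3}), frozenset({4, 5, 6})]
C = [frozenset({3, 4, 7}), frozenset({1, 8, 9})]
print(auxiliary_multigraph(A, C))
```

The example prints one edge between vertices 0 and 1 followed by one self-loop at vertex 0, mirroring the construction in the text.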
The color coding technique \cite{colorcoding} and the dynamic programming algorithm are employed to efficiently locate paths and cycles of logarithmic size. This algorithm has an approximation ratio of at most $\frac{k+2}{3}$. \begin{theorem}[\cite{ksetpacking2013}] \label{thm:sw} With $p=4\log n+1$, there exists a polynomial-time algorithm which solves the $k$-Set Packing problem with an approximation ratio $\frac{k+2}{3}$. \end{theorem} Cygan \cite{bestksetpacking} has shown that an approximation ratio $\frac{k+1}{3}+\epsilon$ can be obtained in polynomial time by restricting the local improvements from anything of size $O(\log n)$ \cite{sellhyperedge} to local improvements of bounded pathwidth. Namely, let $G(A,C)$ be the {\it bipartite conflict graph} whose two parts $A$ and $C$ represent the collections $\mathcal{A}$ and $\mathcal{C}=\mathcal{S}\setminus\mathcal{A}$, respectively. For any $u\in A, v\in C$, if the corresponding sets are not disjoint, we put an edge between $u$ and $v$. For any disjoint collection $P\subseteq C$, if the subgraph induced by $P$ and its neighborhood $N(P)$ in $A$ has bounded pathwidth, the replacement of $N(P)$ with $P$ is called a local improvement of bounded pathwidth.
The color coding technique is also employed for efficiently locating such an improvement. \begin{theorem}[\cite{bestksetpacking}] For any $\epsilon> 0$, there exists a local search algorithm which runs in time $2^{O(kr)}n^{O(pw)}$ with an approximation ratio $\frac{k+1}{3}+\epsilon$ for the $k$-Set Packing problem. Here $r=2(k+1)^{\frac{1}{\epsilon}}\log n$ is an upper bound on the size of a local improvement and $pw=2(k+1)^{\frac{1}{\epsilon}}$ is an upper bound on the pathwidth. \end{theorem} \subsection{Partitioning the bipartite conflict graph} Consider a bipartite conflict graph $G(A,B)$ where one part of the vertices, $A$, represents the sets $\mathcal{A}$ chosen by the algorithm and the other part, $B$, represents an arbitrary disjoint collection of sets $\mathcal{B}$. We assume without loss of generality that $\mathcal{B}\cap \mathcal{A}=\emptyset$. The collection $\mathcal{B}$ can be thought of as an optimal solution; it is only used for the analysis. Given $\epsilon>0$, let $c_k=k-1$ and $b=|B|$; we further partition $G(A, B)$ iteratively as follows. Let $B_1^1$ be the set of vertices in $B$ with degree 1 to $A$. Denote the neighbors of $B_1^1$ in $A$ by $A_1^1$. If $|B_1^1|<\epsilon b$, stop the partitioning. Otherwise, we consider $B_1^2$, the set of vertices whose degree drops to 1 if we remove $A^1_1$. Denote the neighbors of $B_1^2$ in $A\setminus A^1_1$ by $A_1^2$. If $|B_1^1\cup B_1^2|< c_k\epsilon b$, stop the partitioning. In general, for any $j\geq 2$, let $B_1^j$ be the set of vertices whose degree drops to 1 if the vertices in $\cup_{l=1}^{j-1}A_1^l$ are removed, and let $A_1^j$ be the neighbors of $B_1^j$ which are not in $\cup_{l=1}^{j-1}A^l_1$. If $|\cup_{l=1}^j B_1^l|<c_k^{j-1}\epsilon b$, we stop. Otherwise, continue the partitioning. Let $i$ be the smallest integer such that $|\cup_{l=1}^i B_1^l|<c_k^{i-1}\epsilon b$.
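The iterative partition just described can be sketched as follows (an illustrative simplification with invented names; we assume $k\geq 3$, so that the stopping rule is eventually met):

```python
def partition_levels(nbrs, eps, k):
    """Sketch of the iterative partition: peel off the levels B_1^1, B_1^2, ...
    of the bipartite conflict graph.  `nbrs` maps each vertex of B to its set
    of neighbours in A.  Stops at the first j with
    |B_1^{<=j}| < c_k^{j-1} * eps * b, where c_k = k - 1."""
    c_k, b = k - 1, len(nbrs)
    removed_A, peeled, levels, j = set(), set(), [], 0
    while True:
        j += 1
        # B_1^j: vertices whose degree drops to 1 once A_1^1, ..., A_1^{j-1} are removed
        level = {v for v in nbrs
                 if v not in peeled and len(nbrs[v] - removed_A) == 1}
        levels.append(level)
        peeled |= level
        if len(peeled) < (c_k ** (j - 1)) * eps * b:
            return levels          # levels[l-1] corresponds to B_1^l
        # A_1^j: the neighbours of B_1^j not removed before
        for v in level:
            removed_A |= nbrs[v]

# Toy instance: a chain b1-a1-b2-a2-b3-a3-b4, with b4 of degree 3.
nbrs = {'b1': {'a1'}, 'b2': {'a1', 'a2'},
        'b3': {'a2', 'a3'}, 'b4': {'a3', 'a4', 'a5'}}
print(partition_levels(nbrs, 0.1, 3))  # → [{'b1'}, {'b2'}, {'b3'}, set()]
```

With $\epsilon=0.1$ the peeling proceeds through three levels before the threshold $c_k^{j-1}\epsilon b$ overtakes the number of peeled vertices; with $\epsilon=0.5$ it stops after the first level.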
This integer $i$ exists: since $c_k^{i-2}\epsilon b\leq |\cup_{l=1}^{i-1} B_1^l|\leq b$, we have $i\leq 2+\log_{c_k}\frac{1}{\epsilon}$. Let $B_1^{\leq j}$ ($A_1^{\leq j}$) denote the union $\cup_{l=1}^j B_1^l$ ($\cup_{l=1}^j A_1^l$), for $j\geq 1$.

\section{Canonical Improvements with Tail Changes}

In this section, we present a local search algorithm based on \cite{ksetpacking2013}, and show that it achieves an approximation ratio $\frac{k+1}{3}+\epsilon$, for any $\epsilon>0$, for the $k$-Set Packing problem.

\subsection{The new local improvement}

We now introduce a new type of local improvement. Let $\mathcal{A}$ be a packing chosen by the algorithm, and let $\mathcal{C}=\mathcal{S}\setminus \mathcal{A}$. We create the bipartite conflict graph $G(A,C)$ as in Section 2.1. Recall that only those sets in $\mathcal{C}$ which intersect with at most 2 sets in $\mathcal{A}$ are considered in \cite{ksetpacking2013}.
Our approach tries to include sets of higher degree in a local improvement by swapping $p$ sets in $\mathcal{A}$ with $p$ sets in $\mathcal{C}$. In this way, if the degree of a vertex in $C$ drops to 2, it can be included in a local improvement. \begin{definition}[Tail change] For any vertex $v\in C$ of degree at least 3, we call a swapping of $p$ sets $U$ in $A$ with $p$ disjoint sets $V$ in $C$ a {\bf tail change} associated with an edge $(v, u)$ of $v$ if the following three requirements are satisfied: (1) $v\notin V$. (2) $u$ is the unique neighbor of $v$ in $U$. (3) The neighbors of $V$ in $A$ are exactly $U$. The {\bf size} of this tail change is defined to be $p$. \end{definition} We denote a tail change associated with $e$ which swaps $U$ with $V$ by $T_e(U,V)$. We say that two tail changes $T_e(U,V),T_{e'}(U',V')$ of vertices $v,v'$, respectively, are {\it consistent} if either $v\neq v'$ and $(\{v\}\cup V)\cap (\{v'\}\cup V')=\emptyset$, or $v=v'$, $e\neq e'$ and $V\cap V'=\emptyset$. Moreover, we require that the degrees of $v,v'$ after the tail changes remain at least 2. Therefore, for any vertex $v\in C$ of degree $d\geq 3$, we can perform at most $d-2$ tail changes for $v$. We are now ready to introduce the new local search algorithm. We first consider an algorithm that runs in quasi-polynomial time. Given a parameter $\epsilon>0$, in each iteration, the algorithm starts by performing local improvements of constant size $O(\frac{k}{\epsilon})$. If no such local improvement is present, the algorithm starts looking for improvements of size $O(\log n)$. Construct the bipartite conflict graph $G(A,C)$. For any set $I$ of at most $\frac{4}{\epsilon}\log n$ vertices in $C$, let $I_3\subseteq I$ be the set of vertices of degree at least 3 in $G(A,C)$.
The algorithm checks if there exists a collection of consistent tail changes for $I_3$, each of size at most $\frac{2(k-1)}{\epsilon}$, which together replace $U\subseteq A$ with $V\subseteq C$, such that $V\cap I=\emptyset$ and after the replacement the degree of every vertex in $I_3$ drops to 2. If so, the algorithm goes on checking, in the auxiliary multi-graph $G_A$ whose edges are constructed from the vertices in $I$ assuming the swapping of $U$ with $V$ is performed, whether there is a subgraph of one of the following six types (illustrated in Figure 1): (1) two cycles intersecting at a single point, (2) two disjoint cycles connected by a path, (3) two cycles with a common arc (these three types are the binoculars also considered in \cite{ksetpacking2013}), (4) a path, (5) a path and a cycle intersecting at a single point, (6) a cycle. Let $U'$ be the vertices in this subgraph, and let $V'$ be its edges. The algorithm checks if the replacement of $U\cup U'$ with $V\cup V'$ is an improvement. We call this new local improvement a {\bf canonical improvement with tail changes}, and we call this quasi-polynomial-time algorithm {\bf Algorithm LI} (LI stands for local improvement). We will explain the parameter settings of Algorithm LI in the next section. \begin{figure}[!t] \centering \includegraphics[width=4in]{canonicalimprovement6.pdf} \caption{Improvements composed of cycles and lines.} \label{canonicalimprovement} \end{figure} Before showing how to efficiently locate a canonical improvement with tail changes, we first show that the approximation ratio of Algorithm LI is $\frac{k+1}{3}+\epsilon$.

\subsection{Analysis}

Given a packing $A$ chosen by Algorithm LI and an arbitrary packing $B$, consider the bipartite conflict graph $G(A,B)$ defined in Section 2.2. The notation in this section is taken from Section 2.2. First, we remark that since we make all $O(\frac{k}{\epsilon})$-improvements at the beginning of Algorithm LI, for any set $V\subseteq B$ of size $O(\frac{k}{\epsilon})$, there are at least $|V|$ neighbors of $V$ in $A$. In $G(A, B)$, we give every vertex $a$ in $A$ full degree $k$ by adding self-loops at $a$, which we call {\it null edges}. We define a {\it surplus edge} to be either a null edge, or an edge incident to some vertex in $B$ which is of degree at least 3. We first show that there exists a one-to-one matching from almost all vertices in $B_1^1$ to surplus edges, with the condition that after excluding the matched surplus edges of any vertex in $B$, the degree of this vertex remains at least 2. We define such a matching in the following matching process.
{\bf The matching process.} Pick an arbitrary order of vertices in $B_1^1$. Mark all edges and vertices as unmatched. Try to match every vertex with a surplus edge in this order one by one. For any vertex $v_1\in B_1^{1}$, starting from $v_1$, go to its neighbor $u_1\in A_1^1$. If $u_1$ has an unmatched null edge, match $v_1$ to it, mark this null edge as matched and stop. Otherwise, if $u_1$ has a neighbor $v$ in $B$, such that the degree of $v$ is at least 3, $(u_1,v)$ is unmatched and $v$ is unmatched, match $v_1$ to $(u_1,v)$ and mark this edge as matched. If the degree of $v$ drops to 2 by excluding all matched edges of $v$, mark $v$ as matched. If $u_1$ does not have a neighbor satisfying the requirement, try every neighbor $v_2$ (except $v_1$) of $u_1$ and continue the process from $v_2$. In general, suppose we are at a vertex $v_j\in B_1^j$ and it has a neighbor $u_j\in A_1^j$. We try to match $v_1$ with a null edge of $u_j$, or a surplus edge of an unmatched neighbor of $u_j$. If no matching edge is found, continue by trying every neighbor of $u_j$ in $B_1^{j_1}$ for $j_1>j$, until either $v_1$ is matched, or $j> 2+\log_{c_k}\frac{1}{\epsilon}$. In the latter case, we mark $v_1$ as unmatched. \begin{figure}[!t] \centering \label{conflictgraph} \includegraphics[width=3.5in]{iscoconflictgraph.pdf} \caption{The bipartite conflict graph. $k=3$.} \end{figure} We give an example of the matching process illustrated in Figure 2. The vertices and edges of the same color are matched. We match $v_1$ to the null edge (dotted line) of its neighbor $u_1$. $v_2$ is matched to a surplus edge $(u_2,v_5)$ of $v_5$. After that, the degree of $v_5$ drops to 2 by excluding the edge $(u_2,v_5)$ and $v_5$ is marked as matched. For $v_3$, we go on to $u_3$, $v_5$, $u_5$, $v_6$, $u_6$, then $v_8$ with a surplus edge $(u_6,v_8)$. We match $v_3$ to this edge. 
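The matching process can be loosely sketched in Python (a simplification of the description above; all data-structure names are our own, and ties are broken arbitrarily):

```python
def run_matching(B1, nbrs_B, nbrs_A, level_of, k, max_level):
    """Loose sketch of the matching process.  `nbrs_B[v]` / `nbrs_A[u]` are
    the neighbours of v in A / of u in B; `level_of[v]` is the level j with
    v in B_1^j (absent keys are treated as level 0)."""
    matched_edge = {}                     # v1 in B_1^1 -> its matched surplus edge
    used_edges, done_B = set(), set()
    free_degree = {v: len(nbrs_B[v]) for v in nbrs_B}

    def try_from(u, j, v1, seen):
        if len(nbrs_A[u]) < k:            # (1) u still has a null edge
            matched_edge[v1] = (u, None)  # None encodes a null edge
            return True
        for v in nbrs_A[u]:               # (2) surplus edge to an unmatched
            if (free_degree[v] >= 3 and v not in done_B
                    and (u, v) not in used_edges):  # degree >= 3 neighbour
                matched_edge[v1] = (u, v)
                used_edges.add((u, v))
                free_degree[v] -= 1
                if free_degree[v] == 2:
                    done_B.add(v)
                return True
        for v in nbrs_A[u]:               # (3) descend into deeper levels
            jv = level_of.get(v, 0)
            if j < jv <= max_level and v not in seen:
                seen.add(v)
                for u2 in nbrs_B[v]:
                    if try_from(u2, jv, v1, seen):
                        return True
        return False

    for v1 in B1:
        (u1,) = nbrs_B[v1]                # v1 has a unique neighbour in A
        try_from(u1, 1, v1, {v1})
    return matched_edge

# Tiny example with k = 3: u1 has full degree, so v1 is matched to the
# surplus edge (u1, w) of its degree-3 neighbour w.
demo = run_matching(['v1'],
                    {'v1': {'u1'}, 'w': {'u1', 'u2', 'u3'}, 'z': {'u1'}},
                    {'u1': {'v1', 'w', 'z'}},
                    {'v1': 1}, k=3, max_level=3)
print(demo)  # → {'v1': ('u1', 'w')}
```

When $u_1$ is not of full degree, the sketch instead matches $v_1$ to a null edge, encoded as `(u1, None)`.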
\begin{comment}
\begin{figure}[!t]
\centering
\label{figmatch}
\includegraphics[width=3.5in]{tailchange2.pdf}
\caption{An example of the matching process for the 3-Set Packing problem. The self-loop in dotted line represents a null edge. Vertices and edges in the same color are matched. }
\end{figure}
\end{comment}

\begin{lemma}
\label{lemma:matchb1}
For any $\epsilon>0$, there exists a set of surplus edges $E_1$ such that all but at most $\epsilon |B|$ vertices of $B_1^{1}$ can be matched to $E_1$ one-to-one. Moreover, every endpoint of $E_1$ in $B$ has degree at least 2 after excluding $E_1$.
\end{lemma}

\begin{proof}
It suffices to prove that at most $\epsilon |B|$ vertices in $B_1^1$ are unmatched. Let $v$ be an unmatched vertex in $B_1^1$. The neighbor $u$ of $v$ in $A$ has no null edges and thus has degree $k$, and no neighbor of $u$ has an unmatched surplus edge. The matching process runs in exactly $i-1=1+\log_{c_k}\frac{1}{\epsilon}$ iterations for $v$ and has tried $k-1+(k-1)^2+\cdots +(k-1)^{i-1}= \frac{(k-1)^i-(k-1)}{k-2}$ vertices in $B_1^{\leq i}$. Notice that for two different vertices $v,v'\in B_1^1$ which are marked unmatched, the sets of vertices the matching process has tried in $B_1^{\leq i}$ must be disjoint. Otherwise, we could either find a matching for one of them, or there would exist a local improvement of size $O(\frac{k}{\epsilon})$. Suppose there are $n_{um}$ unmatched vertices in $B_1^1$. Recall that $|B_1^{\leq i}|\leq c_k^{i-1}\epsilon |B|$. Therefore, $n_{um}\cdot (1+\frac{(k-1)^i-(k-1)}{k-2})\leq |B_1^{\leq i}|\leq c_k^{i-1}\epsilon |B|$. Since $c_k= k-1$, we have $n_{um}\leq \epsilon |B|$. $\square$
\end{proof}

Consider any surplus edge $e=(u,v)$ matched to $w\in B_1^1$. We can obtain a tail change $T_e(U,V)$ associated with $e$ by running the matching process in reverse. Assume $u\in A_1^i$ for $i<2+\log_{c_k}\frac{1}{\epsilon}$. Let $U_i=\{u\}$ and let $V_i$ be the set of neighbors of $U_i$ in $B_1^i$.
In general, let $V_j$ be the set of neighbors of $U_j$ in $B_1^j$, and define $U_{j-1}$ to be the neighbors of $V_j$ excluding $U_j$. This decomposition ends when $j=1$ and $V_1\subseteq B_1^1$. As $w$ is matched to $e$, we know that $w\in V_1$. Let $U=\cup_{j=1}^iU_j$ and $V=\cup_{j=1}^i V_j$; then a swapping of $U$ with $V$ is a tail change associated with the edge $e$. First, we have $|V_j|=|U_j|$ for $1\leq j\leq i$; otherwise there exists an $O(\frac{k}{\epsilon})$-improvement. Hence $|U|=|V|$. Second, the set of neighbors of $V$ is $U$ by construction, and $u$ is the only neighbor of $v$ in $U$; otherwise $w$ would be matched to another surplus edge. For example, in Figure 2, $U=(u_6,u_5,u_3,u_2),V=(v_6,v_5,v_3,v_2)$ is a tail change associated with the edge $(u_6,v_8)$, which is matched to $v_3$. We now estimate the size of such a tail change. Since every vertex in $U$ has at most $k$ neighbors, there are at most $\sum_{j=1}^{i} (k-1)^{j-1}=\frac{(k-1)^i-1}{k-2}$ vertices in $V$. Let $i=2+\log_{k-1}\frac{1}{\epsilon}$. Then the size of this tail change is at most $\frac{2(k-1)}{\epsilon}$.

\begin{comment}
If we view the matching process reversely, we can decompose a tail change $T_e(U,V)$ with $e=(v,u)$, $U\subseteq A, V\subseteq B$ as follows. The decomposition is similar as in Section 2.2. We assume $V_1=V\cap B_1^1$ is not empty. Let $U_1$ be the neighbors of $V_1$. In general, let $V_j$ be the set of vertices in $V$ with degree 1 to $A\setminus \cup_{l=1}^{j-1}U_l$, and let $U_j$ be the neighbors of $V_j$ excluding those in $\cup_{l=1}^{j-1}U_l$. We call a tail change a {\it special tail change} if it has the following properties: (1) $V_1\neq\emptyset$. (2) Every vertex in $U_j$ has a neighbor in $V_{j+1}$. (3) There exists some integer $i\leq 1+\log_{c_k}\frac{1}{\epsilon}$, such that the decomposition ends at $U_i$, and $u$ is the only vertex in $U_i$. (4) $V=\cup_{l=1}^{i}V_l$, $U=\cup_{l=1}^{i}U_l$.
(5) $v\in V_m$ for $i<m\leq 2+\log_{c_k}\frac{1}{\epsilon}$ with $u$ being its only neighbor in $U$. We denote a special tail change by $T_e^*(U,V,i)$.
\end{comment}

Let $\mathcal{T}_S$ be the collection of tail changes associated with the surplus edges which are not null edges, as defined above. Assume these tail changes together replace $U\subseteq A$ with $V\subseteq B$. Let $B_L=B_1^1\cap V$, and let $B_N\subseteq B_1^1$ be the set of vertices which are matched to null edges. By Lemma \ref{lemma:matchb1}, we know that $|B_1^1\setminus (B_L\cup B_N)|\leq \epsilon |B|$. Moreover, we show in the following corollary that a consistent collection of tail changes with the same property can be extracted from $\mathcal{T}_S$.

\begin{corollary}
\label{corofmatchprocess}
There exists a subcollection $\mathcal{T}_c$ of consistent tail changes from $\mathcal{T}_S$, which together replace a set $U_c\subseteq A$ with $V_c\subseteq B$, such that $V_c\cap B_1^1=B_L$.
\end{corollary}

\begin{proof}
We consider the tail changes associated with the surplus edges one by one; $\mathcal{T}_c$ is initialized to be empty. If the tail change $T_{e_i}(U_i,V_i)$ is consistent with every tail change in $\mathcal{T}_c$, we include it in $\mathcal{T}_c$. If there exists a tail change $T_{e_j}(U_j,V_j)\in\mathcal{T}_c$ such that $V_i\cap V_j \neq \emptyset$, assume $e_j=(u_j,v_j)$ is matched with the vertex $w_j$ and $e_i=(u_i,v_i)$ with $w_i$. We know that at the time the matching for $w_i$ tries the edges of $v_j$, $v_j$ has been marked as matched. Hence, $V_j\subseteq V_i$. We discard $T_{e_j}(U_j,V_j)$ and include $T_{e_i}(U_i,V_i)$ in $\mathcal{T}_c$. $\square$
\end{proof}

\begin{theorem}
\label{thm:ls3}
For any $\epsilon>0$, Algorithm LI has an approximation ratio of $\frac{k+1}{3}+\epsilon$.
\end{theorem}

Before proving the theorem, we state the following result from \cite{ksetpacking2013}, which is derived from a lemma in \cite{indsetbounddeg}.
The lemma in \cite{indsetbounddeg} states that when the density of a graph is greater than a constant $c>1$, there exists a subgraph of size $O(\log n)$ with more edges than vertices. If the underlying graph is the auxiliary multi-graph $G_A$ defined in Section 2.1 and this condition holds, we know from \cite{ksetpacking2013} that there exists a binocular of size $O(\log n)$.

\begin{lemma}[\cite{ksetpacking2013}]
\label{lemmacycle}
For any integer $s\geq 1$ and any undirected multigraph $G=(V,E)$ with $|E|\geq \frac{s+1}{s}|V|$, there exists a binocular of size at most $4s\log n-1$.
\end{lemma}

\begin{proof}[Theorem \ref{thm:ls3}]
For a given $\epsilon$, let $\epsilon' = \frac{2k+5}{3}\epsilon>3\epsilon$. Let $A$ be the packing returned by Algorithm LI with parameter $\epsilon$; for any other packing $B$, we show that $(\frac{k+1}{3}+\epsilon)|A|\geq |B|$. In the following, we use the lowercase letter corresponding to a capital letter (which represents a set) to denote the size of that set. By Corollary \ref{corofmatchprocess}, the collection of consistent tail changes $\mathcal{T}_c$ together replaces a set $A_t$ of $a_t$ vertices in $A$ with a set $B_t$ of $b_t=a_t$ vertices in $B$. We exclude these vertices from the original bipartite conflict graph $G(A, B)$ and denote the remaining graph by $G(A',B')$. We add null edges to vertices in $A'$ until every vertex in $A'$ has degree $k$; there are $ka'$ edges counted from $A'$. Let $B'_N= B_N\cap B'$ and $b^1_n=|B_N'|$. We may also assume that there is a null edge at each vertex in $B_N'$ when we count the number of edges from $B'$. By Lemma \ref{lemma:matchb1}, there are at most $\epsilon' b$ unmatched vertices in $B_1^1$. We further partition the vertices in $B'$ as follows.
Let $B_3^2$ ($B_3^{2'}$) be the set of vertices in $B'$ whose degree drops to 2 after performing the tail changes in $\mathcal{T}_c$ and for which there is at least one (no) tail change in $\mathcal{T}_c$ associated with the vertex. Let $B^2_2$ be the set of vertices in $B'$ with degree 2 in $G(A',B')$ and no neighbors in $A_t$. Let $B_2^1$ ($B_3^1$) be the set of vertices in $B'$ whose degree drops from 2 (at least 3) in $G(A,B)$ to 1 in $G(A',B')$. Let $B_3^3$ be the set of vertices in $B'$ with degree at least 3 in $G(A',B')$. Moreover, there is no vertex of degree 0 in $G(A',B')$; otherwise there would be a local improvement. By Lemma \ref{lemma:matchb1}, the number of edges in $G(A', B')$ is at least $2b^1_n+\epsilon' b+b_2^1+2b^2_2+2b_3^2+3b_3^3+2b_3^{2'}+b_3^1$. Therefore,
\begin{equation}
\label{relationab1}
k(a-a_t) \geq 2b^1_n + \epsilon' b+b_2^1+2b^2_2+2b_3^2+3b_3^3+2b_3^{2'}+b_3^1.
\end{equation}
Next, we show that $b_n^1+b_2^1+b^2_2+b_3^2\leq (1+\epsilon')(a-a_t)$. Let $A_2$ be the set of neighbors of $B_N',B^2_2,B_2^1,B_3^2$ in $A'$. Construct an auxiliary multi-graph $G_{A_2}$ as in Section 2.1, where the vertices are $A_2$, every vertex in $B_N',B_2^1$ creates a self-loop, and every vertex in $B_2^2,B_3^2$ creates an edge. If $G_{A_2}$ has at least $(1+\epsilon')|A_2|$ edges, then by Lemma \ref{lemmacycle} there exists a binocular of size at most $\frac{4}{\epsilon'}\log |A_2|-1$ in $G_{A_2}$. Let $G_A$ be the auxiliary multi-graph with vertex set $A$, in which every degree-1 vertex in $B$ creates a self-loop, every degree-2 vertex in $B$ creates an edge, and every vertex $v$ whose degree drops to 2 by performing some consistent tail changes of this vertex in $\mathcal{T}_c$ creates an edge between the two neighbors $u_1,u_2$ of $v$, where $(u_1,v),(u_2,v)$ are not associated with any tail change in $\mathcal{T}_c$.
(Notice that, contrary to the auxiliary multi-graph considered in \cite{ksetpacking2013}, here some vertices in $B$ might simultaneously create an edge in $G_A$ and be involved in tail changes.) We have the following claim for sufficiently large $n$ ($n>(\frac{k}{\epsilon})^{O(\epsilon)}$).

\begin{Claim}
\label{claimimprovement}
If there is a binocular of size $p\leq \frac{4}{\epsilon'}\log |A_2|-1$ in $G_{A_2}$, there exists a canonical improvement with tail changes in $G_A$ of size at most $\frac{12}{\epsilon'}\log n$.
\end{Claim}

The claim implies that there exists a canonical improvement with tail changes in $G_A$ of size $\frac{12}{\epsilon'}\log n<\frac{4}{\epsilon}\log n$, which can be found by Algorithm LI. Therefore
\begin{equation}
\label{relationab2}
(1+\epsilon')(a-a_t)\geq (1+\epsilon')|A_2| \geq b_n^1+b_2^1+b^2_2+b_3^2.
\end{equation}
Combining (\ref{relationab1}) and (\ref{relationab2}), we have
\begin{eqnarray}
(k+1+\epsilon')(a-a_t) &\geq& 3b_n^1+\epsilon' b+2b_2^1+3b^2_2+3b_3^2+3b_3^3+2b_3^{2'}+b_3^1 \nonumber \\
&=& 3(b-b_t-\epsilon' b)-b_2^1-b_3^{2'}-2b_3^1+\epsilon' b.
\end{eqnarray}
Hence, $(3-2\epsilon')b \leq (k+1+\epsilon')a-(k-2+\epsilon')a_t+b_2^1+b_3^{2'}+2b_3^1$. Since every vertex in $A_t$ can have at most $k-2$ edges to $B'$, we have $b_2^1+b_3^{2'}+2b_3^1 \leq (k-2)a_t$. Therefore, $(3-2\epsilon')b \leq (k+1+\epsilon')a$. As $\epsilon' =\frac{2k+5}{3}\epsilon$, we have $b\leq (\frac{k+1}{3}+\epsilon)a$. $\square$
\end{proof}

The proof of Claim \ref{claimimprovement} helps explain why we consider three more types of local improvements in addition to binoculars, and it motivates the algorithm design in the next section.

\begin{comment}
\begin{proof}[Claim \ref{claimimprovement}]
Consider any binocular $I$ in $G_{A_2}$. If there is no edge in $I$ which is from $B_2^1$, we have a corresponding improvement $I'$ in $G_A$ by performing tail changes for any edge from $B_3^2$ in $I$.
Otherwise, we assume that there is one self-loop in $I$ from $v\in B_2^1$. By definition, one neighbor $u_1$ of $v$ lies in $A_t$ and the other neighbor $u_2$ in $A'$. Suppose $u_1$ belongs to a tail change in $\mathcal{T}_c$ which is associated with $w\in B_3^2$. If $w\in I$, we associate $w$ with tail changes in $G_A$. In $G_A$, we remove the self-loop on $u_2$ and add edge $(u_1,u_2)$. In this way, we have a path together with the other cycle in $I$ which form an improvement in $G_A$, assuming the other cycle in $I$ is not a self-loop from $B_2^1$. If the other cycle in $I$ is also a self-loop from $v'\in B_2^1$, let $u_1'$ be a neighbor of $v'$ in $A_t$ and $u_2'$ be the other neighbor of $v'$ in $A'$. If $u_1'$ belongs to the tail change associated with $w'\in B_3^2$ and $w'\in I$, the path between $u_2,u_2'$ in $I$ together with the edges $(u_1,u_2),(u_1',u_2')$ form an improvement. If $u_1=u_1'$, we have an improvement in $G_A$ as a cycle. Other cases can be analyzed similarly. $\square$
\end{proof}
\end{comment}

\begin{proof}[Claim \ref{claimimprovement}]
There are two differences between $G_{A_2}$ and $G_A$. First, no tail changes are involved in any improvement from $G_{A_2}$, while in $G_A$, if we want to include in an improvement edges which come from vertices of degree at least 3 in $B$, we also need to perform tail changes. Second, a vertex in $B_2^1$ creates a self-loop in $G_{A_2}$, while in $G_A$ it creates an edge. Consider any binocular $I$ in $G_{A_2}$ which forms a pattern in the first row of Figure 1. If there is no edge in $I$ which is from $B_2^1$, we have a corresponding improvement $I'$ in $G_A$ by performing tail changes for any edge from $B_3^2$ in $I$. As we select a consistent collection of tail changes $\mathcal{T}_c$, $I'$ is a valid improvement. Otherwise, we first assume that there is one self-loop in $I$ from $v\in B_2^1$. By definition, one neighbor $u_1$ of $v$ lies in $A_t$ and the other, $u_2$, in $A'$.
Suppose $u_1$ belongs to a tail change in $\mathcal{T}_c$ which is associated with $w\in B_3^2$. By the matching process and Corollary \ref{corofmatchprocess}, there exists a path from $u_1$ to a vertex $b_1\in B_L$, where the edges in this path might come from degree-2 vertices, or from higher-degree vertices with consistent tail changes. Let $a_1$ be the neighbor of $b_1$. If $w\notin I$, we obtain a canonical improvement in $G_A$ by replacing the self-loop from $v$ with the path from $u_2,u_1$ to $a_1$ and a self-loop on $a_1$ from $b_1$. The size of the improvement increases by at most $1+\log_{k-1}\frac{1}{\epsilon}$. If $w\in I$, we associate $w$ with tail changes in $G_A$, and we replace the self-loop on $u_2$ from $v$ with the edge $(u_1,u_2)$. In this way, we have a path which, together with the other cycle in $I$, forms an improvement in $G_A$, assuming the other cycle in $I$ is not a self-loop from $B_2^1$. Finally, if the other cycle in $I$ is also a self-loop from $v'\in B_2^1$, let $u_1'$ be a neighbor of $v'$ in $A_t$ and $u_2'$ be the other neighbor of $v'$ in $A'$. If $u_1'$ belongs to the tail change associated with $w'\in B_3^2$ and $w'\in I$, the path between the two self-loops in $I$ together with the edges $(u_1,u_2),(u_1',u_2')$ forms an improvement (as there are tail changes involved); here $w'$ and $w$ could be the same vertex. Otherwise, we can also replace the self-loop of $v'$ by a path with one endpoint attaching a new self-loop. In this way, there is an improvement in $G_A$ whose size increases by at most $2(1+\log_{k-1}\frac{1}{\epsilon})$. Notice that if the two new self-loops are the same, the two new paths and the original path between $v,v'$ form a cycle. Notice also that $1+\log_{k-1}\frac{1}{\epsilon}<\frac{4}{\epsilon'}\log n$ for $n>(\frac{k}{\epsilon})^{O(\epsilon)}$.
Therefore, for any binocular of size $p\leq \frac{4}{\epsilon'}\log |A_2|-1$ in $G_{A_2}$, there exists a corresponding canonical improvement in $G_A$ of size at most $\frac{12}{\epsilon'}\log n$. $\square$
\end{proof}

\section{The Algorithm and Main Results}

In this section, we give an efficient implementation of Algorithm LI using the color coding technique \cite{colorcoding} and dynamic programming. Let $U$ be the universe of elements and $K$ be a collection of $kt$ colors, where $t=\frac{4}{\epsilon}\log n \cdot \frac{2(k-1)}{\epsilon}\cdot (k-2)\leq \frac{4}{\epsilon}\log n \cdot \frac{2k^2}{\epsilon}$. We assign every element in $U$ one color from $K$ uniformly at random. If two $k$-sets contain $2k$ distinct colors, they are recognized as disjoint. Applying color coding is crucial to obtaining a polynomial-time algorithm for finding a logarithmic-sized local improvement.

\subsection{Efficiently finding canonical improvements with tail changes}

In this section, we show how to efficiently find canonical improvements with tail changes using the color coding technique. Let $C(S)$ be the set of distinct colors contained in the sets in $S$. We say a collection of sets is {\it colorful} if every set contains $k$ distinct colors and any two sets contain different colors.

{\bf Tail changes.} We say a tail change $T_e(U,V)$ of a vertex $v$ is {\it colorful} if $V$ is colorful and the colors in $C(V)$ are distinct from those in $C(v)$. A surplus edge can be associated with many tail changes. Let $\mathcal{T}_v(e)$ be the set of all colorful tail changes of size at most $\frac{2(k-1)}{\epsilon}$ which are associated with an edge $e$ of $v$. We enumerate all subsets of $\mathcal{S}\setminus \mathcal{A}$ of size at most $\frac{2(k-1)}{\epsilon}$ and check whether they are colorful and whether they are tail changes associated with $e$. The time to complete the search for all vertices is at most $n^{O(k/\epsilon)}$.
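The colorfulness checks used throughout this section can be made concrete as follows. This is a minimal sketch; the function names and the dictionary-based coloring are illustrative, not from the paper.

```python
import random

def random_coloring(universe, num_colors, seed=0):
    """Color coding: assign each element one of num_colors colors
    uniformly at random."""
    rng = random.Random(seed)
    return {x: rng.randrange(num_colors) for x in universe}

def is_colorful(sets, coloring, k):
    """A collection of k-sets is colorful if every set carries k distinct
    colors and no color is shared between two sets, i.e. all |sets| * k
    colors are pairwise distinct."""
    cs = [coloring[x] for s in sets for x in s]
    return all(len({coloring[x] for x in s}) == k for s in sets) \
        and len(cs) == len(set(cs))

def is_colorful_tail_change(V, v, coloring, k):
    """A tail change replacing U with V is colorful if V is colorful and
    the colors C(V) avoid the colors of the set v."""
    cv = {coloring[x] for x in v}
    return is_colorful(V, coloring, k) and \
        all(coloring[x] not in cv for s in V for x in s)
```

With this interface, enumerating candidate subsets of $\mathcal{S}\setminus\mathcal{A}$ and filtering them through \texttt{is\_colorful\_tail\_change} mirrors the search described above.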
The next step is to find all colorful groups of tail changes associated with $v$ such that, after performing one group of tail changes, the degree of $v$ drops to 2. Notice that the tail changes in a colorful group are consistent. For every two edges $e_i,e_j$ of $v$, we can compute a collection of colorful groups of tail changes associated with all edges of $v$ except $e_i, e_j$ by comparing all possible combinations of tail changes from $E(v)\setminus \{e_i,e_j\}$. There are at most $(n^{O(k/\epsilon)})^{k-2}$ combinations. For every group of colorful tail changes which together replace $U$ with $V$, we explicitly keep the information of which vertices are in $U,V$ and the colors of $V$. This takes at most $n^{O(k^2/\epsilon)}$ space. To summarize, the time for finding the colorful groups of tail changes of every vertex of degree at least 3 is $n^{O(k^2/\epsilon)}$.

\begin{comment}
We consider tail changes of height at most $2+\log_{k-1}\frac{1}{\epsilon}$. Let $\mathcal{T}_v(e)$ be all colorful tail changes associated with an edge $e$ of $v$. In the following, we fix a packing $\mathcal{A}$ and construct the bipartite conflict graph $G(A,B)$ where $\mathcal{B}=\mathcal{C}\setminus \mathcal{A}$. We consider the collection of sets $\mathcal{B}_3\subseteq \mathcal{B}$, where every set in $\mathcal{B}_3$ intersects with at least 3 sets in $\mathcal{A}$. We compute $\mathcal{T}_v(e)$ for every edge of a vertex $v$ in $B_3$. For any $v\in B_3$ and its neighbor $u\in A$, let $\mathcal{T}_v(u,h,L,P,C)$ be an indicator function of whether there exists a tree representing a partially tail change (leaves might not lie in $B_1^1$) which is of height $h$, has the root $u$, a set of leaves $L$ with their parents $P$, and the colors of all the nodes in the odd level are $C$. We compute $\mathcal{T}_v(u,h,L,P,C)$ by dynamic programming.
To initialize the computation table, we set $\mathcal{T}_v(u,0,\{u\}, \emptyset, C)$ to be true if and only if $C(v)$ contains $k$ colors and $C=C(v)$. In general, when $h$ is even, for a fixed set of vertices $L$ and a fixed set of colors $C$, $\mathcal{T}_v(u,h,L,P,C)$ is true if there exists an entry $\mathcal{T}_v(u,h-1,P,P',C')$ in the table such that there exists a perfect matching between $L$ and $P$, moreover $C'\cup C(L)=C$ and $C'\cap C(L)=\emptyset$. When $h$ is odd, we set $\mathcal{T}_v(u,h,L,P,C)$ to be true if there exists an entry $\mathcal{T}_v(u,h-1,P,P',C)$ such that $L=N(P)\setminus P'$, where $N(P)$ is the set of neighbors of $P$. The computation stops when $L\subseteq B_1^1$. We now analyze the running time of computing this table. We have $h\leq 2(2+\log_{k-1}\frac{1}{\epsilon})$. $L$ is at most of size $(k-1)^{1+\log_{k-1}\frac{1}{\epsilon}}= \frac{k-1}{\epsilon}$. $C$ is at most of size $k\cdot \frac{2(k-1)}{\epsilon}$. As $|K|\leq \frac{8k^2}{\epsilon^2}\log n$, there are at most $\sum_{j=1}^{|C|}{|K|\choose j} \leq |K|^{|C|}\leq (\frac{8k^2}{\epsilon^2}\log n)^{2k^2/\epsilon}$ many color combinations of a colorful tail change. It takes at most $O(|C|)$ time to check the disjointness of two set of colors, and at most $O(|L|^3)$ to check if there exists a perfect matching between $P$ and $L$. Therefore, the time to compute the table is at most \begin{equation} \label{timetailchange} \frac{2k^2}{\epsilon}\cdot (\frac{k-1}{\epsilon})^3\cdot 2\log_{k-1}\frac{(k-1)^2}{\epsilon} \cdot n^{\frac{k-1}{\epsilon}} \cdot n^{\frac{k-1}{\epsilon}} \cdot (\frac{8k^2}{\epsilon^2}\log n)^{2k^2/\epsilon}=n^{O(\frac{k}{\epsilon})}. \end{equation} The next step is to find all groups of colorful tail changes for $v$ such that after performing one group of tail changes, the degree of $v$ drops to 2. A group of tail changes is colorful if every tail change is colorful, and any two tail changes contain different colors. 
Notice a colorful group of tail changes are consistent. Let the degree of $v$ be $d_v$. Let $\mathcal{T}_v(e_i, e_j)$ be a collection of colorful groups of tail changes which associate with all edges of $v$ excluding $e_i, e_j$. We can compute $\mathcal{T}_v(e_i, e_j)$ by comparing all possible combinations of tail changes from $E(v)\setminus \{e_i,e_j\}$. There are at most $(n^{O(\frac{k}{\epsilon})})^{k-2}$ combinations. For each combination, it takes at most $O(\frac{k^2}{\epsilon})$ time to check if every vertex to be included in the packing by the tail change has different colors. Hence, the time of finding all consistent tail changes for $B_3$ is in the order of
\begin{equation}
\label{timeconsistenttc}
n{k\choose 2}\cdot (n^{O(\frac{k}{\epsilon})})^{k-2} \cdot O(\frac{k^2}{\epsilon}) = n^{O(\frac{k^2}{\epsilon})}.
\end{equation}
Combining (\ref{timetailchange}) and (\ref{timeconsistenttc}), the time of finding groups of colorful tail changes for every vertex in $B_3$ is $n^{O(\frac{k^2}{\epsilon})}$.
\end{comment}

{\bf Canonical improvements with tail changes.} After finding all colorful tail changes for every vertex of degree at least 3, we construct the auxiliary multi-graph $G_{A}$. For vertices $a_1,a_2$ in $G_{A}$, we put an edge $e(a_1,a_2)$ between $a_1$ and $a_2$ if either (i) there is a set $b\in \mathcal{C}=\mathcal{S}\setminus \mathcal{A}$ intersecting only $a_1,a_2$, or (ii) there is a set $b\in \mathcal{C}$ of degree $d_b\geq 3$ intersecting $a_1,a_2$ such that for the other edges of $b$ there exists at least one group of $d_b-2$ colorful tail changes. In the first case, we assign the colors of $b$ to $e(a_1,a_2)$. In the second case, we add up to $n^{O(k^2/\epsilon)}$ edges between $a_1$ and $a_2$, and assign to each edge the colors of $b$ together with the colors of the corresponding group of $d_b-2$ tail changes. The number of edges between two vertices in $G_{A}$ is at most $n\cdot n^{O(k^2/\epsilon)}$.
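A schematic sketch of this construction of $G_A$ follows (color bookkeeping is omitted; \texttt{intersects} and \texttt{tail\_change\_groups} are assumed callables supplied by the earlier enumeration steps, and the edge labels are illustrative):

```python
import itertools

def build_auxiliary_multigraph(C, intersects, tail_change_groups):
    """Build the edge list of the auxiliary multigraph G_A.

    C: the candidate sets outside the packing (S \ A).
    intersects(b): the packing sets that b intersects.
    tail_change_groups(b, a1, a2): the groups of d_b - 2 colorful tail
        changes covering the other edges of b (empty list if none).
    Returns edges (a1, a2, label); a self-loop has a1 == a2.
    """
    edges = []
    for b in C:
        nbrs = intersects(b)
        if len(nbrs) == 1:
            a, = nbrs
            edges.append((a, a, (b, None)))            # self-loop
        elif len(nbrs) == 2:
            a1, a2 = nbrs
            edges.append((a1, a2, (b, None)))          # plain edge
        else:                                          # degree >= 3
            for a1, a2 in itertools.combinations(nbrs, 2):
                for g in tail_change_groups(b, a1, a2):
                    edges.append((a1, a2, (b, g)))     # edge via tail changes
    return edges
```

In the full algorithm each edge in the third case would additionally carry the union of the colors of $b$ and of its group of tail changes, which is what the dynamic program below consumes.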
The number of colors assigned to each edge is at most $\frac{2k^3}{\epsilon}$. (Notice that the number of colors on an edge is at most $k(1+\frac{(k-1)^2/\epsilon-1}{k-2})(k-2)$, which is at most $\frac{2k^3}{\epsilon}$ for $\epsilon<k+5$, which is usually the case.) Moreover, we add a self-loop at a vertex $a$ in $G_A$ if there exists a set $b\in\mathcal{C}$ such that $b$ intersects only the set $a$, and we assign the colors of $b$ to this self-loop. We use a dynamic programming algorithm to find all colorful paths and cycles of length at most $p=\frac{4}{\epsilon}\log n$ in $G_{A}$. A path/cycle is colorful if all of its edges contain distinct colors. If we did not need to consider improvements containing at most one cycle, we could use an algorithm similar to the one in \cite{ksetpacking2013}. In our case, when extending a path by an edge $e$ in the dynamic program, we would like to keep the information of the vertices replaced by the tail changes of $e$ in this path; this would take quasi-polynomial time when backtracking through the computation table. By Claim \ref{claimimprovement}, it is sufficient to check, for every path with endpoints $u,v$, whether there is an edge of this path containing a tail change $T_e(U,V)$ such that $u\in U$ or $v\in U$. We sketch the algorithm as follows. For a given set of colors $C$, let $\mathcal{P}(u, v, j, C, q_u, q_v)$ be an indicator function of whether there exists a path of length $j$ from vertex $u$ to $v$ with the union of the colors of its edges equal to $C$. Here $q_u$ ($q_v$) is an indicator variable of whether there is a tail change $T_e(U,V)$ of some edge in the path such that $u\in U$ ($v\in U$). The computation table is initialized as $\mathcal{P}(u, u, 0, \emptyset, 0, 0)=1$ and $\mathcal{P}(u, v, 0, \emptyset, 0, 0)=1$ for every $u,v\in A$.
In general, for a fixed set of colors $C$ and integer $j\geq 1$, $\mathcal{P}(u, v, j, C, q_u, q_v)=1$ if there exists a neighbor $w$ of $v$ such that $\mathcal{P}(u, w, j-1, C', q_u, q_w)=1$, $C'\cup C((w,v))= C$ and $C'\cap C((w,v))=\emptyset$. If $|C((w,v))|>k$ (i.e., there are tail changes), we check every edge between $w$ and $v$ which satisfies the previous conditions. If there exists an edge associated with a tail change $T_e(U,V)$ such that $u,v\in U$, we mark $q_u=1,q_v=1$. Otherwise, if there exists an edge associated with a tail change $T_e(U,V)$ such that $u\in U$, we mark $q_u=1$. To find colorful cycles, we query the result of $\mathcal{P}(u,u,j,C, q_u, q_u)$ for $j\geq 1$. Recall that we use $kt$ colors, where $t\leq p\cdot \frac{2k^2}{\epsilon}$. The running time of finding all colorful paths and cycles is $O(n^3kp 2^{kt})$, which is $n^{O(k^3/\epsilon^2)}$. The final step is to find canonical improvements with tail changes by enumerating all ways of combining colorful paths and cycles to form one of the six types defined in Section 3.1. The running time of this step is $n^{O(k^3/\epsilon^2)}$. In conclusion, the total running time of finding colorful tail changes, colorful paths/cycles, and canonical improvements with tail changes is $n^{O(k^3/\epsilon^2)}$. We call this color-coding-based algorithm {\bf Algorithm CITC} (canonical improvement with tail changes). The running time analysis of Algorithm CITC is given in the appendix.

\subsection{Main results}

In this section, we present our main results. We first present a randomized local improvement algorithm. The probability that Algorithm CITC succeeds in finding a canonical improvement with tail changes, if one exists, can be calculated as follows. The number of sets involved in a canonical improvement with tail changes is at most $\frac{2k^2}{\epsilon}\cdot \frac{4\log n}{\epsilon}$.
The probability that an improvement consisting of $i$ sets has all of its $ki$ elements colored distinctly is
\begin{equation}
\label{colorsuccesspr}
\frac{{kt\choose ki}(ki)!}{(kt)^{ki}} = \frac{(kt)!}{(kt-ki)!(kt)^{ki}}\geq \frac{(kt)!}{(kt)^{kt}} >e^{-kt} \geq n^{-8k^3/\epsilon^2}.
\end{equation}
Let $N=n^{8k^3/\epsilon^2}\ln n$. We run Algorithm CITC $2N$ times, each time with a fresh random coloring. From (\ref{colorsuccesspr}), we know that the probability that at least one call of CITC succeeds in finding an improvement is at least
\begin{equation}
1-(1-n^{-8k^3/\epsilon^2})^{2N} \geq 1-\exp(-2N\cdot n^{-8k^3/\epsilon^2}) = 1-n^{-2}. \nonumber
\end{equation}
Since there are at most $n$ local improvements for the problem, the probability that all attempts succeed is at least $(1-n^{-2})^n \geq 1-n^{-1}\longrightarrow 1$ as $n\longrightarrow \infty$. Hence this randomized algorithm has an approximation ratio of $\frac{k+1}{3}+\epsilon$ with high probability. We call this algorithm {\bf Algorithm RLI} (R for randomized). The running time of the algorithm is $2N\cdot n^{O(k^3/\epsilon^2)}$, which is $n^{O(k^3/\epsilon^2)}$.

\begin{comment}
We summarize Algorithm RLI3 as follows.
\renewcommand{\thealgorithm}{RLI3}
\begin{algorithm}
\caption{Randomized implementation of canonical improvements with tail changes for the $k$-Set Packing problem}
\label{localsearch}
\begin{algorithmic}[1]
\State {\bf Input:} a collection of $k$-sets $\mathcal{S}$ on universe $U$, parameter $\epsilon>0$.
\State Pick a packing $\mathcal{A}$ greedily.
\Repeat
\State Perform all local improvements of size $O(\frac{k}{\epsilon})$.
\State {\bf Loop} $2n^{8k^3/\epsilon^2}\ln n$ times:
\State - Assign $kt$ colors to $U$ uniformly at random, $t=\frac{2(k-1)(k-2)}{\epsilon}\cdot \frac{4\log n}{\epsilon}$.
\State - Find all colorful groups of tail changes for sets in $\mathcal{S}\setminus\mathcal{A}$ of degree at least 3.
\State - Construct the auxiliary multi-graph and find canonical improvements by dynamic programming.
\State - Update $\mathcal{A}$ if an improvement is found.
\Until{there is no local improvement.}
\State {\bf Return} $\mathcal{A}$.
\end{algorithmic}
\end{algorithm}
\end{comment}

We can also obtain a deterministic implementation of Algorithm RLI which always succeeds in finding a canonical improvement with tail changes if one exists. We call this deterministic algorithm {\bf Algorithm DLI} (D for deterministic). The general approach is given by Alon et al. \cite{derandom}. The idea is to find a collection of colorings $\mathcal{K}$ such that for every improvement there exists a coloring $K\in \mathcal{K}$ that assigns distinct colors to the sets involved in this improvement. Algorithm CITC can then be run on every coloring until an improvement is found. A collection of colorings satisfying this requirement can be constructed using perfect hash functions from $U$ to $K$. A perfect hash function for a set $S\subseteq U$ is a mapping which is one-to-one on $S$; a $p$-perfect family of hash functions contains one perfect hash function for each set $S\subseteq U$ of size at most $p$. Alon and Naor \cite{derandom} show how to explicitly construct a perfect hash function from $[m]$ to $[p]$ in time $O(p\log m)$; this function can be described in $O(p+\log p\log\log m)$ bits and can be evaluated in $O(\log m/\log p)$ time. In our case, we use a $kt$-perfect family of hash functions from $U$ to each $K\in \mathcal{K}$. Since $\mathcal{S}$ covers at most $nk$ elements, the number of perfect hash functions in this family is at most $2^{kt+\log kt\log\log nk}$, which is $n^{O(k^3/\epsilon^2)}$. Hence, we need $n^{O(k^3/\epsilon^2)}$ runs of the dynamic program to find an improvement.

\begin{theorem}
For any $\epsilon>0$, Algorithm DLI achieves an approximation ratio of $\frac{k+1}{3}+\epsilon$ for the $k$-Set Packing problem in time $n^{O(k^3/\epsilon^2)}$.
\end{theorem}

\section{Lower bound}

In this section, we construct an instance with locality gap $\frac{k+1}{3}$ such that there is no local improvement of size up to $O(n^{1/5})$. Our construction is randomized and extends the lower bound construction in \cite{ksetpacking2013}. This lower bound matches the performance guarantee of Algorithm DLI.

\begin{theorem}
\label{lowerbound}
For any $t\leq \left(\frac{3e^3n}{k}\right)^{1/5}$, there exist two disjoint collections of $k$-sets $\mathcal{A}$ and $\mathcal{B}$ with $|\mathcal{A}|=3n$ and $|\mathcal{B}|=(k+1)n$, such that any collection of $t$ sets in $\mathcal{B}$ intersects at least $t$ sets in $\mathcal{A}$.
\end{theorem}

\begin{proof}
Consider a universe $U_A$ of $3kn$ elements. Let $\mathcal{A}$ be a collection of $a=3n$ disjoint $k$-sets on $U_A$, indexed from 1 to $3n$. Let $\mathcal{B}$ be a collection of $(k+1)n$ disjoint $k$-sets such that the restriction of every set to $U_A$ is a 2-set or a 3-set. There are $b_2=3n$ 2-sets covering $m_2=6n$ elements and $b_3=(k-2)n$ 3-sets covering $m_3=3kn-6n$ elements in $\mathcal{B}$. We index the 2-sets in $\mathcal{B}$ from 1 to $3n$; the $i$-th 2-set intersects the $(i-1)$-th and the $i$-th set in $\mathcal{A}$ (the 0-th set being the $3n$-th set in $\mathcal{A}$). The 3-sets are constructed by partitioning the elements of $U_A$ not covered by 2-sets into groups of three uniformly at random. Consider an arbitrary collection $\mathcal{A}_t$ of $t$ sets in $\mathcal{A}$. We compute the probability that there are $t$ sets $\mathcal{B}_t$ in $\mathcal{B}$ which are contained entirely in $\mathcal{A}_t$; we call such an event {\it unstable}. Assume there are $t_2$ 2-sets $\mathcal{B}_t^2$ and $t_3$ 3-sets $\mathcal{B}_t^3$ in $\mathcal{B}_t$. Suppose the sets in $\mathcal{A}_t$ form $r$ disjoint index intervals of lengths $i_1,\ldots,i_r\geq 1$, for some $1\leq r\leq t$, where indices are taken modulo $3n$.
Then $t_2=i_1-1+i_2-1+\cdots +i_r-1=i_1+\cdots+i_r-r=t-r$ and $\mathcal{B}_t^2$ covers $2(t-r)$ elements. $t_3=r$ and $\mathcal{B}_t^3$ covers $3r$ elements from a set of $m_t=kt-2(t-r)=(k-2)t+2r$ elements. Let $\tau(m)$ be the number of ways to partition $m$ elements into $m/3$ disjoint 3-sets. We have \begin{equation} \label{tau} \tau(m)=\frac{m!}{(3!)^{m/3}(m/3)!}. \end{equation} Let $\Pr(t,r)$ be the probability that all $t_3$ 3-sets are contained in $\mathcal{A}_t$. We have \begin{equation} \label{prunstable} \Pr(t,r) = {m_t\choose 3r}\cdot \frac{\tau(3r)\tau(m_3-3r)}{\tau(m_3)}=\frac{{m_t\choose 3r}\cdot{m_3/3\choose r}}{{m_3\choose 3r}}. \end{equation} Let $U_{t,r}$ be the number of unstable events, a random variable over the distribution of $\mathcal{B}$. There are ${t-1\choose r-1}$ positive integer solutions of the equation $t=i_1+i_2+\cdots +i_r$. There are ${a-t\choose r}$ ways to place disjoint intervals of lengths $i_1,\dots,i_r$ in the index range 1 to $a$. Hence, the expected number of unstable events can be estimated as follows, \begin{eqnarray} \label{expunstable1} \mathbb{E}[U_{t,r}]&=&{t-1\choose r-1} \cdot {a-t\choose r} \cdot \Pr(t,r) \approx {t\choose r} \cdot {a-t\choose r} \cdot \frac{{(k-2)t+2r\choose 3r}\cdot{(k-2)a/3\choose r}}{{(k-2)a\choose 3r}} \nonumber \\ &\leq& \left(\frac{\frac{t}{r}\cdot\frac{a-t}{r}\cdot(\frac{(k-2)t+2r}{3r})^3\cdot \frac{(k-2)a}{3r}}{(\frac{e(k-2)a}{3r})^3} \right)^r = \left(\frac{t(a-t)((k-2)t+2r)^3}{3e^3r^3(k-2)^2a^2} \right)^r \end{eqnarray} The inequality in (\ref{expunstable1}) follows from the standard upper and lower bounds on binomial coefficients, \begin{equation} \label{nchoosekbound} \left(\frac{n}{k}\right)^k \leq {n\choose k} \leq \left(\frac{en}{k}\right)^k.
\end{equation} $\mathbb{E}[U_{t,r}]$ can be further bounded from (\ref{expunstable1}) as \begin{eqnarray} \label{expunstable2} \mathbb{E}[U_{t,r}] &=& \left(\frac{t(a-t)}{3e^3(k-2)^2a^2}\cdot\left(\frac{(k-2)t}{r}+2\right)^3 \right)^r \nonumber \\ &\leq& \left(\frac{t}{3e^3(k-2)^2a}\cdot\frac{k^3t^3}{r^3}\right)^r \leq \left(\frac{kt^4}{3e^3ar^3}\right)^r. \end{eqnarray} Since $t\leq \left(\frac{3e^3n}{k}\right)^{1/5}=\left(\frac{e^3a}{k}\right)^{1/5}=t_0$ by assumption, we have $\frac{kt^4}{3e^3ar^3}\leq \frac{kt^4}{3e^3a}\leq \frac{1}{3}(\frac{k}{e^3a})^{1/5}<\frac{1}{2}$ as $a\gg 1$. Hence, summing over $r$ from 1 to $t$ and $t$ from 1 to $t_0=\left(\frac{e^3a}{k}\right)^{1/5}$ in (\ref{expunstable2}), we have \begin{eqnarray} \label{expunstable3} \sum_{t=1}^{t_0}\sum_{r=1}^t \mathbb{E}[U_{t,r}] &<& \sum_{t=1}^{t_0} \sum_{r=1}^t \left(\frac{kt^4}{3e^3a}\right)^r < \sum_{t=1}^{t_0} \frac{2kt^4}{3e^3a} < \frac{2k}{3e^3a}\cdot t_0^5 <1. \end{eqnarray} Therefore, there exists some collection $\mathcal{B}$ for the given $\mathcal{A}$ which does not contain any unstable collection of size at most $t_0$. $\square$ \end{proof} \section{Conclusion} In this paper, we propose a new polynomial-time local search algorithm for the $k$-Set Packing problem and show that its performance ratio is $\frac{k+1}{3}+\epsilon$ for any $\epsilon>0$. While the approximation guarantee is the same as for Cygan's algorithm \cite{bestksetpacking}, our algorithm has a better running time, which is singly exponential in $\frac{1}{\epsilon^2}$. We also give a matching lower bound, which shows that any algorithm using local improvements of size at most $O(n^{1/5})$ cannot have a better performance guarantee. This indicates that this is possibly the best result that can be achieved by a local improvement algorithm for the $k$-Set Packing problem. On the other hand, algorithms based on LP/SDP for the $k$-Set Packing problem are far from being well understood.
It is interesting to explore possibilities in that direction. A more general open question is to further close the gap between the upper bound $\frac{k+1}{3}+\epsilon$ and the lower bound $\Omega(\frac{k}{\log k})$ for the $k$-Set Packing problem \cite{ksetpackinglowerbound}. \bibliographystyle{plain}
https://arxiv.org/abs/1803.09266
New SOCP relaxation and branching rule for bipartite bilinear programs
A bipartite bilinear program (BBP) is a quadratically constrained quadratic optimization problem where the variables can be partitioned into two sets such that fixing the variables in any one of the sets results in a linear program. We propose a new second order cone representable (SOCP) relaxation for BBP, which we show is stronger than the standard SDP relaxation intersected with the boolean quadratic polytope. We then propose a new branching rule inspired by the construction of the SOCP relaxation. We describe a new application of BBP called the finite element model updating problem, which is a fundamental problem in structural engineering. Our computational experiments on this problem class show that the new branching rule together with a polyhedral outer approximation of the SOCP relaxation outperforms a state-of-the-art commercial global solver in obtaining dual bounds.
\section{Introduction: Bipartite bilinear program (BBP)}\label{sec:intro} A quadratically constrained quadratic program (QCQP) is called a bilinear optimization problem if every degree-two term in the constraints and objective involves the product of two distinct variables. For a given instance of a bilinear optimization problem, one often associates a simple graph constructed as follows: The set of vertices corresponds to the variables in the instance, and there is an edge between two vertices if there is a degree-two term involving the corresponding variables in the instance formulation. The strength of various convex relaxations for bilinear optimization problems can be analyzed using combinatorial properties of this graph~\cite{LuedtkeNL12,BolandDKMR17,GupteKRW17}. When this graph is bipartite, we call the resulting bilinear problem a bipartite bilinear program (BBP). In other words, a BBP is an optimization problem of the following form: \begin{eqnarray}\label{eq:BBP} \begin{array}{rcl} &\min & x^{\top}Q_0y + d_1^{\top} x + d_2^{\top} y\label{P}\\ &\textup{s.t.}& x^{\top}Q_ky + a_k^{\top}x + b_k^{\top}y + c_k = 0, \ k\in\{1, \dots, m\}\\ && l\leq (x,y) \leq u \\ &&(x,y)\in {\rr}^{n_1+n_2}, \end{array} \end{eqnarray} where $n_1,n_2 \in \mathbb{Z}_+, \ Q_0, Q_k\in\rr^{n_1\times n_2}, \ d_1, a_k\in \rr^{n_1}, \ d_2, b_k\in\rr^{n_2}, \ c_k\in\rr$, ${\forall} k\in\{1, \dots, m\}$. The vectors $l,u\in\rr^{n_1+n_2}$ define the box constraints on the decision variables and, without loss of generality, we assume that $l_i=0, \ u_i = 1, \ {\forall} i\in \{1, \dots, n_1+n_2\}$. BBP~(\ref{eq:BBP}) may include bipartite bilinear inequality constraints, which can be converted into equality constraints by adding slack variables, and these slack variables will also be bounded since the original variables are bounded. We note that BBP is a special case of the more general biconvex optimization problem~\cite{gorski2007biconvex}.
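The bipartiteness test that defines the BBP class can be made concrete with a short sketch (ours, not from the paper): we build the variable-interaction graph from the degree-two terms of an instance and attempt a 2-coloring by breadth-first search; the function names \texttt{build\_interaction\_graph} and \texttt{is\_bipartite} are our own.

```python
from collections import deque

def build_interaction_graph(bilinear_terms):
    """Vertices are variables; an edge joins u and v whenever a
    degree-two term u*v appears in the instance."""
    adj = {}
    for u, v in bilinear_terms:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def is_bipartite(adj):
    """Attempt a 2-coloring by BFS; the instance is a BBP iff it succeeds,
    and the two color classes give the partition (x, y) of the variables."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False  # odd cycle of terms: not a BBP
    return True

# Terms x1*y1, x2*y1, x1*y2 give a bipartite graph, so the instance is a BBP;
# a triangle of terms u*v, v*w, w*u is not bipartite.
print(is_bipartite(build_interaction_graph([("x1", "y1"), ("x2", "y1"), ("x1", "y2")])))  # True
print(is_bipartite(build_interaction_graph([("u", "v"), ("v", "w"), ("w", "u")])))  # False
```

This only illustrates the graph construction in the text; a real implementation would extract the terms from the matrices $Q_0, Q_1, \dots, Q_m$.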
BBP has many applications such as waste water management~\cite{faria2011novel,castro2015tightening,galan1998optimal}, the pooling problem~\cite{GupteADC17,haverly1978studies}, and supply chain management~\cite{nahapetyan2008bilinear}. \section{Our results} \subsection{Second order cone representable relaxation of BBP}\label{sec:SOCPintro} A common and successful approach in integer linear programming is to generate cutting-planes implied by single-constraint relaxations, see for example~\cite{crowder1983solving,marchand2001aggregation,dey2017analysis,Bodur2017}. We take a similar approach here. We begin by examining the one-row relaxation of BBP, that is, we study the convex hull of the set defined by a single constraint defining the feasible region of (\ref{eq:BBP}). Our first result is to show that the convex hull of this set is second order cone (SOCP) representable in the extended space, where we have introduced new variables $w_{ij}$ for $x_iy_j$. We formally present this result next. \begin{theorem}\label{thm:conv} Let $n_1,n_2 \in \mathbb{Z}_{+}$, $V_1 = \{1,\dots, n_1\}$, $V_2 = \{1, \dots, n_2\}$, and $E \subseteq V_1 \times V_2$. Consider the one-constraint BBP set $$S: = \left\{ (x,y,w)\in[0,1]^{n_1 + n_2 + |E|} \, \left| \, \begin{array}{l}\sum_{(i,j)\in E}q_{ij}w_{ij} + \sum_{i\in V_1}a_i x_i + \sum_{j\in V_2}b_j y_j + c = 0, \\ w_{ij}=x_iy_j, \ {\forall} (i,j)\in E \end{array}\right\}\right. .$$ Then: \begin{enumerate} \item[(i)] Let $(\bar{x},\bar{y}, \bar{w})$ be an extreme point of $S$. Then, there exists $U \subseteq V_1 \cup V_2$, of the form \begin{enumerate} \item $U = \{i_0,j_0\}$ where $(i_0,j_0)\in E$, or \item $U = \{i_0\}$ where $i_0 \in V_1$ is an isolated node, or \item $U = \{j_0\}$ where $j_0 \in V_2$ is an isolated node, \end{enumerate} such that $\bar{x}_{i}\in\{0,1\}, \ \forall i\in V_1 \setminus U$, and $\bar{y}_{j}\in\{0,1\}, \ \forall j \in V_2\setminus U$. \item[(ii)] $\textup{conv}(S)$ is SOCP-representable.
\end{enumerate} \end{theorem} A proof of Theorem~\ref{thm:conv} is presented in Section \ref{sec: proof of thm 1}. \begin{remark}\label{rem:size} In Theorem~\ref{thm:conv}, part (ii) follows from part (i). For any given choice of $U$, we first fix all the variables to $0$ or $1$ except for those in $U$. It is then shown that the convex hull of the resulting set is SOCP-representable, and we obtain (ii) by convexifying the union of a finite set of SOCP representable sets. It is easy to see that the number of distinct $U$ sets is $\mathcal{O}(n_1n_2)$, and the number of possible fixings is $\mathcal{O}(2^{n_1 + n_2})$. Thus, the number of resulting SOCP representable objects is $\mathcal{O}(n_1n_22^{n_1 + n_2})$. \end{remark} We note that the literature in global optimization theory has many results on convexifying functions, see for example~\cite{al1983jointly,Rikun1997,meyer2005convex,tawarmalani2002convexification,tuy2016convex}. However, as is well-known, replacing a constraint $f(x) = b$ by $\{x\,|\, \hat{f}(x) \geq b, \ \breve{f}(x) \leq b\}$, where $\hat{f}$ and $\breve{f}$ are the concave and convex envelopes of $f$, respectively, does not necessarily yield the convex hull of the set $\{x \,|\, f(x) = b\}$. There are relatively few results on the convexification of sets~\cite{tawarmalani2013explicit,nguyen2013deriving,nguyen2011convexification,tawarmalani2010strong}. Theorem~\ref{thm:conv} generalizes results presented in~\cite{Tawarmalani2010,akshayguptethesis,kocuk2017matrix} and is related to results presented in~\cite{davarnia2017simultaneous}. \paragraph{The SOCP relaxation for the feasible region of the general BBP (\ref{eq:BBP})} that we propose, henceforth referred to as $S^{SOCP}$, is the intersection of the convex hulls of the individual constraints of (\ref{eq:BBP}).
Formally: $$S^{SOCP} = \bigcap_{k = 1}^m \textup{conv}(S_k),$$ where $S_k = \{(x,y,w)\in [0,1]^{n_1 + n_2 + |E|}\, | \, x^{\top}Q_ky + a_k^{\top}x + b_k^{\top}y + c_k = 0, w_{ij} = x_iy_j \ \forall (i,j) \in E \}$ and $E$ is the edge set of the graph corresponding to the BBP instance (and not just of one row). As an aside, note that $S^{SOCP}$ can be further strengthened by adding the convex hulls of single-row BBP sets arrived at by taking linear combinations of rows. Next we discuss the strength of $S^{SOCP}$ vis-\`a-vis the strength of other standard relaxations. Consider the following two standard relaxations of the feasible region of BBP (\ref{eq:BBP}): Let $S^{SDP}$ be the standard semi-definite programming (SDP) relaxation and let \begin{eqnarray} S^{QBP} &:=& \{(x, y, w)\in[0,1]^{n_1+n_2+|E|} \,|\, \sum_{(ij) \in E} (Q_k)_{ij}w_{ij} + a_k^{\top}x + b_k^{\top}y + c_k = 0 \ k \in \{1, \dots, m\}\} \nonumber\\ && \bigcap \textup{ conv}\left(\{(x, y, w) \in[0,1]^{n_1+n_2+ |E|}\,|\, w_{ij} = x_i y_j \ \forall (i,j) \in E\}\right).\label{eq:QBP0} \end{eqnarray} Note that $S^{QBP}$ is a polyhedral set, since the second set in the right-hand side of (\ref{eq:QBP0}) is equal to the Boolean Quadratic Polytope~\cite{Burer2009}. Two well-known classes of valid inequalities for this set are the McCormick inequalities~\cite{al1983jointly} and the triangle inequalities {\cite{padberg1989boolean}}. \begin{theorem}\label{thm:str} For any BBP, we have that $$\textup{proj}_{x, y, w} \left(S^{SDP}\right) \bigcap S^{QBP} \supseteq S^{SOCP}.$$ \end{theorem} A proof of Theorem~\ref{thm:str} is presented in Section \ref{sec:thm2}. \begin{remark}\label{remark: sparse graph} It is possible to show that the convex hull of a one-row BBP is SOCP representable, even without introducing the $w$ variables. Thus, it is possible to construct, similar to $S^{SOCP}$, a SOCP-representable relaxation of BBP without introducing $w$ variables. However, this SOCP relaxation would be weaker.
In particular, we are unable to prove the corresponding version of Theorem~\ref{thm:str} for this SOCP relaxation. The strength of the $S^{SOCP}$ relaxation is due to the fact that the extended-space $w$ variables `interact' across different constraints. \end{remark} We note that other SOCP relaxations for QCQPs have been proposed~\cite{kim2001second,Burer2014}. However, these are all weaker than the standard SDP relaxation. We also note that one can optimize over $S^{SDP}$ in polynomial time, although the tractability of solving SDPs in practice is still limited. On the other hand, solvers for SOCPs are significantly better in practice. It is NP-hard to optimize over $S^{QBP}$, and, as discussed in Remark~\ref{rem:size}, the extended formulation used to obtain $S^{SOCP}$ is exponential in size. \subsection{A new branching rule} For details about general branch-and-bound schemes for global optimization see, for example, \cite{Ryoo1996}. Inspired by the convex relaxation described in Section \ref{sec:SOCPintro}, we propose a new rule for partitioning the domain of a given variable in order to produce two branches. Details of this new branching rule, together with the node selection and variable selection rules that we used in our computational experiments, are presented in Section~\ref{sec:bb}. Here, we sketch the main ideas behind the new branching rule. Suppose we have decided to branch on the variable $x_1$. As explained in Remark~\ref{rem:size}, the convex hull of the one-constraint set is obtained by taking the convex hull of a union of sets obtained by fixing all but two (or one) variables. If we are branching on $x_1$, we examine all such two-variable sets involving $x_1$ obtained from each of the constraints. For each of these sets, there is an ideal point at which to divide the range of $x_1$ so that the sum of the volumes of the convex hulls of the two-dimensional sets corresponding to the two resulting branches is minimized.
(See \cite{speakman2017branching} for recent work on the importance of volume minimization in branch-and-bound algorithms.) We present a heuristic to find an ``ideal range''. We collect all such ideal ranges corresponding to all the two-dimensional sets involving $x_1$. Then we present a heuristic to select one point (based on the corresponding volume reduction) to finally partition the domain of $x_1$. We also use similar arguments to propose a new variable selection rule. \subsection{A new application of BBP and computational experiments} A new application of BBP, which motivated the work presented here, is called the \textit{finite element model updating problem}, which is a fundamental methodological problem in structural engineering. See Section~\ref{sec: finite element model} for a description of the problem. All the new methods we develop here are tested on instances of this problem. Due to the large size of $S^{SOCP}$, in practice we consider a lighter version of this relaxation. In particular, we write the extended formulation of each row of BBP corresponding only to the variables in that row (see details in Section \ref{sec: lighter version}). As our instances are row sparse, the resulting SOCP relaxation can be solved in reasonable time. Unfortunately, there are no theoretical guarantees for the bounds of this light version of the relaxation. After some preliminary experimentation, we observed that a polyhedral outer approximation of the SOCP relaxation produces similar bounds but solves much faster. Therefore, we used this linear programming (LP) relaxation in our experiments. Details of this outer approximation are presented in Section~\ref{sec:Polyhedralrelaxation}. Our computational experiments are aimed at making three comparisons. First, we examine the quality of the dual bound produced at the root node via our new method (the polyhedral outer approximation of the SOCP relaxation) against SDP, McCormick, and SDP together with McCormick inequalities.
The bounds produced are better for the new method. Second, we test the performance of the new branching rule against traditional branching rules. Our experiments show that the new branching rule significantly outperforms the other branching rules. Finally, we compare the performance of our naive branch-and-bound implementation against BARON. In all instances, we close significantly more gap in an equal amount of time. All these results are discussed in detail in Section~\ref{sec:compres}. \section{Second order cone representable relaxation and its strength}\label{sec:thm1} \subsection{Proof of Theorem~\ref{thm:conv}} \label{sec: proof of thm 1} Consider the bipartite graph $G=(V_1,V_2,E)$ defined by the vertex sets $V_1 = \{1, \dots, n_1\}$ and $V_2= \{1, \dots, n_2\}$, which is associated with the equation \begin{align} \sum_{(i,j)\in E}q_{ij}x_i y_j + \sum_{i\in V_1}a_i x_i + \sum_{j\in V_2}b_j y_j+ c = 0.\tag{EQ} \label{eq_single} \end{align} In this section, we prove that the convex hull of the set \begin{align} S = \{(x,y, w)\in[0,1]^{{n_1}+{n_2} + |E|} \,|\, (\ref{eq_single}), \ w_{ij} = x_iy_j \ \forall (i,j) \in E \} \label{set: S EQ} \end{align} is SOCP representable. In addition, the proof provides an implementable procedure to obtain $\convex(S)$. The key idea underlying this result is the fact that, at each extreme point of $S$, at most two variables are not fixed to 0 or 1 and, once all variables but two (or one) are fixed, the convex hull of the resulting object is SOCP representable in $\rr^2$ (or $\rr$). Hence, $\convex(S)$ can be written as the convex hull of a union of SOCP representable sets. \subsubsection{Preliminary results} First we present a few preliminary results that will be used to prove that $\convex(S)$ is SOCP representable. \begin{lemma}\label{lemma: convex of convex eq}\cite{Tawarmalani2013} Let $f:[0,1]^n\to\mathbb{R}$ be a continuous function and $B \subseteq [0, 1]^n$ be a convex set.
Then $$ \convex({\{x\in B\,|\, f(x)=0\}})= \convex \left({\{x\in B\,|\, f(x)\leq 0\}}\right) \bigcap \convex({\{x\in B \,|\, f(x)\geq 0\}}). $$ \end{lemma} \begin{lemma}\label{lemma: reverse convex set}\cite{Hillestad1980} Let $f:[0,1]^n\to\mathbb{R}$ be a convex function. Then $$ G:= \convex({\{x\in [0,1]^n\,|\, f(x)\geq 0\}}), $$ is a polytope. Indeed, $G$ can be obtained as the convex hull of finite number of points obtained as follows: fix all but one variable to $0$ or $1$ and solve for $f(x) = 0$. \end{lemma} \begin{lemma}\label{lemma: convex of union of convex sets}\cite{ben2001lectures} Let $T\subset \rr^n$ be a compact set and $\{T_k\}_{k\in K}$ be a partition of the set of all extreme points of $T$. Then, \begin{align} \convex(T) = \convex\left(\bigcup_{k\in K} T_k \right) = \convex \left(\bigcup_{k\in K}\convex(T_k) \right). \end{align} In addition, if $\convex(T_k)$ is a SOCP representable set for every $k\in K$, then $\convex(T)$ is also a SOCP representable set. \end{lemma} \begin{lemma}\label{lemma: convex of intersection with affine set} Let $B=\{(x,w)\in[0,1]^{n} \times \mathbb{R}\,|\, x\in B_0, \ w=l^{\top}x+l_0\}$, where $B_0\subseteq\rr^{n}$, and $l^{\top}x + l_0$ is an affine function of $x$. Then, $$\convex(B) = \{(x,w)\in[0,1]^{n} \times \mathbb{R}\,|\, x\in \convex(B_0), \ w=l^{\top}x+l_0\}.$$ \end{lemma} \begin{proof} We assume $B_0$ is non-empty, otherwise, there is nothing to prove. Let $(x,w)\in \convex(B)$. Then there exist $(x^i,w^i)\in B$ and $\lambda_i\geq 0, \ \forall i\in \{1, \dots, n+2\}$, such that $\sum_{i=1}^{n+2}\lambda_i=1$, $x = \sum_{i=1}^{n+2}\lambda_i x^i$ and $w = \sum_{i=1}^{n+2}\lambda_i w^i$. It follows by the definition of $B$ that $x^i\in B_0, \ \forall i\in \{1, \dots, n+2\}$, and hence $x\in \convex(B_0)$. 
It also follows from the definition of $B$ that $w^i = l^{\top}x^i+l_0, \ \forall i\in \{1, \dots, n+2\}$, and hence $$w = \sum_{i=1}^{n + 2} \lambda_iw^i = \sum_{i=1}^{n + 2}\lambda_i(l^{\top}x^i+l_0) = l^{\top}\left(\sum_{i=1}^{n+ 2}\lambda_ix^i\right)+l_0 = l^{\top}x + l_0.$$ Conversely, let $(x,w)$ be such that $x\in \convex(B_0)$ and $w=l^{\top}x+l_0$. Then, there exist $x^i\in B_0$ and $\lambda_i\geq 0, \ \forall i\in \{1, \dots, n+1\}$, such that $\sum_{i=1}^{n + 1}\lambda_i=1$, $x = \sum_{i=1}^{n + 1}\lambda_i x^i$. Define $w^i = l^{\top}x^i+l_0, \ \forall i\in \{1, \dots, n+1\}$. Then $(x^i,w^i)\in B, \ \forall i\in \{1, \dots, n+1\}$. In addition, $$ w = l^{\top}x+l_0 = l^{\top}\left(\sum_{i=1}^{n + 1}\lambda_i x^i\right) + l_0 = \sum_{i=1}^{n + 1}\lambda_i (l^{\top}x^i + l_0) = {\sum_{i=1}^{n+1}}\lambda_i w^i, $$ which completes the proof. \end{proof} \subsubsection{Proof of part (i) of Theorem~\ref{thm:conv}} We restate part (i) of Theorem~\ref{thm:conv} next for easy reference: \begin{proposition}\label{prop: extreme points of a single eq} Let $(\bar{x},\bar{y}, \bar{w})$ be an extreme point of the set $S$ defined in (\ref{set: S EQ}). Then, there exists $U \subseteq V_1 \cup V_2$, of the form \begin{enumerate} \item $U = \{i_0,j_0\}$ where $(i_0,j_0)\in E$, or, \item $U = \{i_0\}$ where $i_0 \in V_1$ is an isolated node, or, \item $U = \{j_0\}$ where $j_0 \in V_2$ is an isolated node, \end{enumerate} such that $\bar{x}_{i} \in\{0,1\}, \ \forall i\in V_1 \setminus U$, and $\bar{y}_{j}\in\{0,1\}, \ \forall j \in V_2\setminus U$. \end{proposition} \begin{proof} Suppose, for the sake of contradiction and without loss of generality, that $0 < \bar{x}_{1}, \bar{x}_{2} < 1$.
Consider the system of equations \begin{eqnarray} \bar{a}_1x_1 + \bar{a}_2x_2 + \bar{c} & = & 0, \nonumber \\ w_{1j} - x_1 \bar{y}_j & = & 0 \ \forall j: (1,j) \in E \nonumber\\ w_{2j} - x_2 \bar{y}_j & = & 0 \ \forall j: (2,j) \in E, \nonumber \end{eqnarray} obtained from (\ref{set: S EQ}) by fixing $x_i=\bar{x}_i$ for all $i\in V_1\setminus\{1,2\}$, $y_j=\bar{y}_j$ for all $j\in V_2$, and $w_{ij} = \bar{x}_i\bar{y}_j$ for all $(i,j)\in E$ with $i\in V_1\setminus\{1,2\}$; here $\bar{a}_1$, $\bar{a}_2$ and $\bar{c}$ collect the coefficients of $x_1$ and $x_2$ in (\ref{eq_single}) after this fixing. Since $(\bar{x}_1,\bar{x}_2)$ is in the relative interior of $\{(x_1,x_2)\in [0,1]^2 \,|\, \bar{a}_1x_1 + \bar{a}_2x_2 + \bar{c} = 0\}$, and the $w$ variables in the system above are affine functions of $(x_1,x_2)$, perturbing $(\bar{x}_1,\bar{x}_2)$ along this set preserves feasibility; hence $(\bar{x},\bar{y}, \bar{w})$ cannot be an extreme point of $S$. \end{proof} \subsubsection{Proof of part (ii) of Theorem~\ref{thm:conv}} First, we prove that the two-variable sets we encounter after fixing variables are SOCP representable. \begin{proposition}\label{prop: hyp is SOCP} Let $ S_0 = \{(x,y)\in[0,1]^2\,|\, \ ax + by + q x y + c = 0\}. $ Then, $\convex(S_0)$ is SOCP representable. \end{proposition} \begin{proof} We may assume $S_0\neq \emptyset$ and $q\neq 0$, otherwise the result follows trivially. Define $r=-b/q, \ s=-a/q$ and $\tau=(ab-cq)/q^2$ to write $ax + by + q x y + c =0$ equivalently as \begin{align} (x-r)(y-s)=\tau.\label{hyp_eq} \end{align} If $\tau=0$, then (\ref{hyp_eq}) is equivalent to $x=r$ or $y=s$. In this case, $S_0 = \{(x,y)\in[0,1]^2\,|\, x=r\}\cup \{(x,y)\in[0,1]^2\, |\, y=s\}$ and hence $\convex(S_0)$ is a polytope. Suppose $\tau > 0$ (if $\tau < 0$, we multiply (\ref{hyp_eq}) by $-1$ and repeat the same proof with $x-r$ and $\tau$ replaced with $-(x-r)$ and $-\tau$). Either $x-r,y-s\geq 0$ or $x-r,y-s \leq 0$. Thus, $S_0 = S_0^>\cup S_0^<$, where $S_0^>=\{(x,y)\in[0,1]^2\,|\, x-r,y-s \geq 0, \ (\ref{hyp_eq})\}$ and $S_0^<=\{(x,y)\in[0,1]^2 \,|\, x-r,y-s \leq 0, \ (\ref{hyp_eq})\}$. Next, we show that if $S_0^>\neq \emptyset$, then $\convex(S_0^>)$ is SOCP representable.
Using that $4uv = (u+v)^2-(u-v)^2$, we can rewrite (\ref{hyp_eq}) as \begin{align*} \sqrt{[(x-r)-(y-s)]^2 + (2\sqrt{\tau})^2} = (x-r)+(y-s). \end{align*} It now follows from Lemma~\ref{lemma: convex of convex eq} that $\convex(S_0^>) = \convex(S_{1}^>) \cap \convex(S_{2}^>)$, where \begin{align*} S_1^> = \{(x,y)\in [0,1]^2 \,|\, x-r,y-s \geq 0, \ \sqrt{[(x-r)-(y-s)]^2 + (2\sqrt{\tau})^2} \leq (x-r)+(y-s)\} \ \\ S_2^> = \{(x,y)\in [0,1]^2 \,|\, x-r,y-s\geq 0, \ \sqrt{[(x-r)-(y-s)]^2 + (2\sqrt{\tau})^2} \geq (x-r)+(y-s)\}. \end{align*} Notice that $S_1^>$ is SOCP representable. Also, as the square root term in the definition of $S_2^>$ is a convex function in $x$ and $y$, it follows from Lemma~\ref{lemma: reverse convex set} that $S_2^>$ is a polytope. Thus, $\convex(S_0^>)$ is SOCP representable. Similarly, we can prove that $\convex(S_0^<)$ is SOCP representable by repeating the arguments above after replacing $x-r,y-s$ with $-(x-r),-(y-s)$. Therefore, $\convex(S_0)=\convex(S_0^>\cup S_0^<)=\convex(\convex(S_0^>)\cup \convex(S_0^<))$ is SOCP representable by Lemma~\ref{lemma: convex of union of convex sets}. \end{proof} \begin{proposition}\label{prop: parabola is SOCP} Let $ S_0 = \{(x,y)\in[0,1]^2 \,|\, \ y = a_0 + a_1 x + a_2 x^2\}. $ Then $\convex(S_0)$ is SOCP representable. \end{proposition} \begin{proof} We may assume $S_0\neq \emptyset$ and $a_2\neq 0$, otherwise the result follows trivially. By completing the square, we can write $y = a_0+a_1 x + a_2 x^2$ equivalently as $ (x + 0.5a_1/a_2)^2-(a_1/2a_2)^2 + a_0/a_2 = y/a_2, $ and then as \begin{align} (x+\bar{a})^2 = t \ \Leftrightarrow \ \sqrt{(x+\bar{a})^2+\left(\frac{t-1}{2 }\right)^2} = \frac{t+1}{2}, \end{align} where $\bar{a}=0.5a_1/a_2, \ t= y/a_2 + (a_1/2a_2)^2 - a_0/a_2$, using that $4t = (t+1)^2 - (t-1)^2$.
It now follows from Lemma~\ref{lemma: convex of convex eq} that $\convex(S_0) = \convex(S_{1}) \cap \convex(S_{2})$, where \begin{align*} S_1 = \{(x,y)\in [0,1]^2\,|\, \sqrt{(x+\bar{a})^2+\left(\frac{t-1}{2 }\right)^2} \leq \frac{t+1}{2}\} \ \\ S_2 = \{(x,y)\in [0,1]^2 \,|\,\sqrt{(x+\bar{a})^2+\left(\frac{t-1}{2 }\right)^2} \geq \frac{t+1}{2}\}. \end{align*} Notice that $S_1$ is SOCP representable. Also, as the square root term in the definition of $S_2$ is a convex function in $x$ and $y$ (because $t$ is an affine function of $y$), it follows from Lemma~\ref{lemma: reverse convex set} that $S_2$ is a polytope. Thus, $\convex(S_0)$ is SOCP representable. \end{proof} \begin{proposition}\label{prop: hyp is SOCP in space of w} Let $ S_0 = \{(x,y,w)\in[0,1]^3\,|\, ax + by + q w + c = 0, \ w = x y\}. $ Then, $\convex(S_0)$ is SOCP representable. \end{proposition} \begin{proof} If $q\neq 0$, then we can write \begin{eqnarray}\label{eq:soqneq0} S_0 = \{(x,y,w)\in[0,1]^2 \times \mathbb{R}\,|\, (x,y)\in B_0, \ w = (-c-ax-by)/q\}, \end{eqnarray} where $B_0 = \{(x,y)\in[0,1]^2 \,|\, ax + by + q x y + c = 0\}$. (Note that the bounds on $w$ are automatically enforced in (\ref{eq:soqneq0}), so it is sufficient to say $w \in \mathbb{R}$.) Hence, by Proposition~\ref{prop: hyp is SOCP} and Lemma~\ref{lemma: convex of intersection with affine set}, $\convex(S_0)$ is SOCP representable. Now, suppose $q=0$. There are four cases: (i) $a=b=0$. In this case, we may assume $c=0$, otherwise $S_0=\emptyset$. Then, $ S_0 = \{(x,y,w)\in[0,1]^3 \,|\, w = xy\}, $ in which case $\convex(S_0)$ is a well-known polytope given by the McCormick envelope. (ii) $a = 0$, $b \neq 0$. In this case, if $-c/b \notin [0,1]$, then $S_0$ is infeasible. Otherwise, this case is trivial. (iii) $a \neq 0$, $b = 0$. Similar to the previous case. (iv) $a\neq 0$ and $b\neq 0$. In this case, we can solve $ax + by + c = 0$ for $x$, i.e. $x = (-c - by)/a$.
Let $[\alpha, \beta]$ be the bounds on $y$ such that the line $ax + by + c = 0$ intersects the $[0,1]^2$ box. If $\alpha = \beta$, then we can set $y = \alpha$ and the result follows trivially. Otherwise, substitute in $w=xy$ to rewrite $S_0$ as follows: $$ S_0 = \{(x,y,w)\in\mathbb{R} \times [\alpha, \beta] \times [0,1] \,|\, (y,w)\in B_0, \ x = (-by-c)/a\}, $$ where $B_0 = \{(y,w)\in[\alpha, \beta] \times [0,1]\,|\, w = (-c/a)y-(b/a)y^2\}$. Now, it is straightforward via Proposition~\ref{prop: parabola is SOCP} (after affinely scaling $y$ to the bounds $[0, 1]$) and Lemma~\ref{lemma: convex of intersection with affine set} that $\convex(S_0)$ is a SOCP representable set. \end{proof} Now we are ready to prove part (ii) of Theorem~\ref{thm:conv}. \begin{proposition}\label{thm: convex of single eq is SOCP in the space of w} Let ${S}$ be the set defined in (\ref{set: S EQ}). Then $\convex({S})$ is SOCP representable. \end{proposition} \begin{proof} By Proposition~\ref{prop: extreme points of a single eq}, we can fix various sets of $x$ and $y$ variables that correspond to the $U$ sets and prove that the convex hull of each of these sets is SOCP representable. Case (i): $|U| = 1$. In this case, the unfixed variables satisfy a set of linear equations. Thus this set is clearly SOCP representable. Case (ii): $U = \{i_0, j_0\}$, where $(i_0, j_0)\in E$. In this case, the unfixed variables satisfy the following constraints: \begin{eqnarray} ax_{i_0} + by_{j_0} + q w_{i_0 j_0} + c & = & 0, \\ w_{i_0 j_0} &=& x_{i_0} y_{j_0}\\ w_{i j_0} &=& \bar{x}_i y_{j_0} \ \forall (i,j_0) \in E, i \neq i_0\\ w_{i_0 j} &=& \bar{y}_j x_{i_0} \ \forall (i_0,j) \in E, j \neq j_0, \end{eqnarray} where the bound constraints on the $w_{i j_0}$ and $w_{i_0 j}$ variables are not needed explicitly.
Thus, by Lemma ~\ref{lemma: convex of union of convex sets}, we obtain that $\convex({S})$ is SOCP representable. \end{proof} \subsection{Proof of Theorem~\ref{thm:str}}\label{sec:thm2} In order to prove Theorem~\ref{thm:str} it is sufficient to prove that: \begin{eqnarray}\label{eq:SDP} \textup{proj}_{x, y, w}\left(S^{SDP}\right) \supseteq S^{SOCP} \end{eqnarray} and \begin{eqnarray}\label{eq:QBP} S^{QBP} \supseteq S^{SOCP}. \end{eqnarray} We prove these two containments next. \begin{proposition} For any BBP, (\ref{eq:SDP}) holds. \end{proposition} \begin{proof} In order to prove (\ref{eq:SDP}), it is convenient to introduce some notation. Let $H$ be the matrix variable representing $\left[\begin{array}{c}x \\ y\end{array}\right][x^{\top} y^{\top}]$. We write $w = \textup{proj}_E(H)$, to imply that if $(i,j) \in E$, then $w_{ij} = \frac{1}{2}\left(H_{i(j+n_1)} + H_{(j + n_1)i}\right)$. Then the standard SDP relaxation may be written as: \begin{eqnarray} \sum_{ij \in E}(Q_k)_{ij}w_{ij} + a_k^{\top}x + b_k^{\top}y + c_k & = & 0, \ k\in\{1, \dots, m\} \label{eq:row}\\ \textup{proj}_E(H) & = & w \label{eq:projw}\\ \left[\begin{array}{cc} H & [x^{\top} y^{\top}] \\ \left[\begin{array}{c}x \\ y\end{array}\right] & 1\end{array} \right] &\succeq& 0. \label{eq:sdpcon} \end{eqnarray} Let $$T^k: = \{ (x, y, H, w)\,|\, (\ref{eq:row}) \textup{ corresponding to }k, (\ref{eq:projw}), \textup{ and }(\ref{eq:sdpcon})\}$$ and as before let $$S^k:= \{(x, y, w)\,|\, (\ref{eq:row}) \textup{ corresponding to }k, w_{ij} = x_i y_j \ \forall (ij) \in E\}.$$ Then by construction \begin{eqnarray} \textup{proj}_{x,y,w}\left(T^k\right) \supseteq \textup{conv}(S^k)\label{eq:perrow}. 
\end{eqnarray} Next we need the following: \paragraph{Claim 1} $\bigcap_{k =1}^m \textup{proj}_{x,y,w}\left(T^k\right) = \textup{proj}_{x,y,w}\left(\bigcap_{k =1}^m \left(T^k\right)\right)$: Trivially, $$\bigcap_{k =1}^m \textup{proj}_{x,y,w}\left(T^k\right) \supseteq \textup{proj}_{x,y,w}\left(\bigcap_{k =1}^m \left(T^k\right)\right)$$ holds. We now verify the converse. For some $(\bar{x},\bar{y}, \bar{w}) \in \textup{proj}_{x,y,w}\left(T^{k}\right)$, let $$\mathcal{H}^k(\bar{x}, \bar{y}, \bar{w}):= \left\{H \,|\, (\bar{x}, \bar{y}, \bar{w}, H) \in T^k\right\}.$$ Then observe that $\mathcal{H}^k(\bar{x}, \bar{y}, \bar{w})$ is the set of matrices $H$ satisfying \begin{eqnarray} \textup{proj}_E(H) &=& \bar{w}\\ \left[\begin{array}{cc} H & [\bar{x}^{\top} \bar{y}^{\top}] \\ \left[\begin{array}{c}\bar{x} \\ \bar{y}\end{array}\right] & 1\end{array} \right] &\succeq& 0. \end{eqnarray} Thus $\mathcal{H}^k(\bar{x},\bar{y}, \bar{w})$ is independent of $k$, i.e., if $(\bar{x},\bar{y}, \bar{w}) \in \bigcap_{k = 1}^m\textup{proj}_{x,y,w}\left(T^{k}\right)$ then $\mathcal{H}^{k_1}(\bar{x},\bar{y},\bar{w}) = \mathcal{H}^{k_2}(\bar{x},\bar{y},\bar{w})$ for all $k_1 \neq k_2$. Therefore, in particular, if $(\bar{x},\bar{y}, \bar{w}) \in \bigcap_{k = 1}^m\textup{proj}_{x,y,w}\left(T^{k}\right) $, then there exists $\bar{H}$ such that $(\bar{x}, \bar{y}, \bar{w}, \bar{H}) \in \bigcap_{k = 1}^mT^{k}$. Thus, $(\bar{x},\bar{y}, \bar{w}) \in \textup{proj}_{x,y,w}\left( \bigcap_{k = 1}^mT^{k}\right).$~$\diamond$ Now, we return to the proof of the original statement.
Intersecting (\ref{eq:perrow}) for all $k \in \{1, \dots, m\}$ we obtain, \begin{eqnarray} \textup{proj}_{x,y,w}\left(S^{SDP}\right) = \textup{proj}_{x,y,w}\left(\bigcap_{k =1}^m \left(T^k\right)\right) = \bigcap_{k =1}^m \textup{proj}_{x,y,w}\left(T^k\right) \supseteq \bigcap_{k =1}^m \textup{conv}(S^k) = S^{SOCP}, \nonumber \end{eqnarray} where the first equality is by definition of $S^{SDP}$, the second equality via Claim 1, the inequality is due to (\ref{eq:perrow}) and the last equality is by definition of $S^{SOCP}$. \end{proof} \begin{proposition} For any BBP, (\ref{eq:QBP}) holds. \end{proposition} \begin{proof} Recall that $S^{QBP}$ is the set \begin{eqnarray} && \left\{(x, y, w)\in[0,1]^{n_1+n_2+|E|} \,|\, \sum_{(ij) \in E} (Q_k)_{ij}w_{ij} + a_k^{\top}x + b_k^{\top}y + c_k = 0 \ k \in \{1, \dots, m\}\right\} \label{eq:row1}\\ &&\bigcap \textup{ conv}\left(\{(x, y, w) \in[0,1]^{n_1+n_2+ |E|}\,|\, w_{ij} = x_i y_j \ \forall (i,j) \in E\}\right).\label{eq:QBP1} \end{eqnarray} Let $$T^k:= \{(x, y, w)\in[0,1]^{n_1+n_2+|E|} \,|\, (\ref{eq:row1}) \textup{ corresponding to }k, (\ref{eq:QBP1})\}$$ and let $$S^k:= \{(x, y, w)\,|\, (\ref{eq:row}) \textup{ corresponding to }k, w_{ij} = x_i y_j \ \forall (ij) \in E\}.$$ Then by construction \begin{eqnarray} T^k \supseteq \textup{conv}(S^k)\label{eq:perrow1}. \end{eqnarray} Intersecting (\ref{eq:perrow1}) for all $k \in \{1, \dots, m\}$ we obtain, $$S^{QBP} = \bigcap_{k = 1}^m T^k \supseteq \bigcap_{k = 1}^m\textup{conv}(S^k) = S^{SOCP}.$$ \end{proof} \section{Proposed branch-and-bound algorithm}\label{sec:bb} In this section, we discuss some details of our proposed branch-and-bound algorithm to solve BBP~(\ref{eq:BBP}). \subsection{Node selection and partitioning strategies} The most common node selection rule used in the literature is the so-called \textit{best-bound-first}, in which a node with the least lower bound (assuming minimization) is chosen for branching. 
Other rules may include selection of nodes that have the potential of identifying good feasible solutions earlier. In our computational experiments, we only use the best-bound-first rule. Also, we use the simplest partitioning operation: rectangular. Examples of other operations adopted in the literature are conical and simplicial~\cite{Linderoth2005}. \subsection{Variable selection and point of partitioning}\label{sec: variable and branching point selection} A simple rule for variable selection is to choose a variable with the largest range. Another common rule is to prioritize the variable that is most responsible for the approximation error of the nonlinear terms. For example, suppose we are optimizing in the extended space of $(x,y,w)$; then we could choose the $x_i$ (or $y_j$) for which the absolute error $|\bar{w}_{ij}-\bar{x}_i\bar{y}_j|$ is maximized over the set of all possible pairs $(i,j)$, where $(\bar{x},\bar{y},\bar{w})$ is the relaxation solution for the current node. We refer to this rule as the \textit{gap-error-rule}. Once the variable is selected, say $x_1$ (without loss of generality), we can list three standard rules for choosing the partitioning point:\\ \textit{Bisection}: partition at the mid point of the domain of $x_1$ in the current node.\\ \textit{Maximum-deviation}: partition at $\bar{x}_1$, where $(\bar{x},\bar{y},\bar{w})$ is the relaxation solution for the current node.\\ \textit{Incumbent}: partition at $x^*_1$, where $(x^*,y^*,w^*)$ is the current best feasible solution, if $x^*_1$ is in the range of $x_1$ in the current node. Combinations of the above rules have also been proposed. For example, Tawarmalani et al.~\cite{Sahinidis2005} propose a rule that is a convex combination of the bisection and maximum-deviation branching rules (biased towards maximum-deviation), and uses incumbent branching whenever possible.
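The gap-error-rule and the three standard partitioning-point rules can be sketched as follows. This is a hedged illustration: the function names and the dictionary-based data layout are ours for exposition, not the implementation used in our experiments.

```python
# Hedged sketch of the gap-error-rule and the three standard
# partitioning-point rules; names and data layout are illustrative only.

def gap_error_variable(x_bar, y_bar, w_bar, E):
    """Pick the bilinear term (i, j) maximizing |w_ij - x_i * y_j|,
    where (x_bar, y_bar, w_bar) is the current relaxation solution."""
    return max(E, key=lambda ij: abs(w_bar[ij] - x_bar[ij[0]] * y_bar[ij[1]]))

def branch_point(rule, lo, hi, x_relax, x_incumbent=None):
    """Partitioning point for a variable with domain [lo, hi] at this node."""
    if rule == "maximum-deviation":
        return x_relax                       # relaxation value at current node
    if rule == "incumbent" and x_incumbent is not None and lo < x_incumbent < hi:
        return x_incumbent                   # value in the best feasible solution
    return (lo + hi) / 2.0                   # bisection (also the fallback)

# Example: the relaxation sets w_11 = 0.6 while x_1 * y_1 = 0.25
x_bar, y_bar, w_bar = {1: 0.5}, {1: 0.5}, {(1, 1): 0.6}
i, j = gap_error_variable(x_bar, y_bar, w_bar, [(1, 1)])  # -> (1, 1)
```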
In our proposed algorithm, we use specialized variable and branching point selection rules, which use information collected from multiple disjunctions and, therefore, take into account the coefficients of the constraints in the model in addition to the variable ranges at the current node. \paragraph{New proposed rule} Note that we always branch on only one set of variables, either $x$ or $y$. We describe our rule assuming we are branching on the $x$ variables. To further ease exposition, we explain our proposed branching rules for the root node, i.e., we assume that all variables range from $0$ to $1$. Consider the three-variable set: $$ S_0 = \{(x_{1},y_{1},w_{11})\in{\rr}^3\,|\, q w_{11} + ax_{1} + by_{1} + c = 0, \ w_{11}=x_{1} y_{1}\}, $$ which is obtained by fixing $x_i, y_j$ to either $0$ or $1$ in (\ref{eq_single}), $\forall i\in V_1\setminus\{1\}, \ \forall j\in V_2\setminus \{1\}$. As in the proof of Proposition~\ref{prop: hyp is SOCP in space of w}, there are two cases of interest. \begin{itemize} \item $q\neq 0$. In this case, $w_{11}$ can be written as an affine function of $x_1$ and $y_1$. We can then write the projection of $S_0$ in the space of $(x_1,y_1)$ as (we drop the indices to simplify notation, and we also omit the projection operator) $$S_0 = \{(x,y)\in[0,1]^2 \,|\, \ (x-r)(y-s) = \tau\},$$ where $r,s,\tau$ are constants. The equation $(x-r)(y-s) = \tau$ represents a hyperbola with asymptotes $x=r$ and $y=s$. Two typical instances are plotted in Figures~\ref{fig: hyp 2 branches}--\ref{fig: hyp one branch}, where the continuous thick portion of the curves represents $S_0$ and the whole dotted areas represent $\convex(S_0)$. \begin{figure}[h] \centering \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.7]{fig_hyp_two_branches.png} \caption{Convex hull of the set defined by the intersection of two branches of a hyperbola with the $[0,1]^2$ box. Here, $x_l$ (resp.\ $x_u$) is the $x$-coordinate of the intersection point of the left (resp.\ right) branch with the line $y=0$ (resp.\ $y=1$).\newline} \label{fig: hyp 2 branches} \end{minipage}% \hspace{0.3cm} \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.7]{fig_hyp_one_branch.png} \caption{Convex hull of the set defined by the intersection of a single branch of a hyperbola with the $[0,1]^2$ box. Here, $A$ and $B$ are the intersection points of the curve with the $[0,1]^2$ box, and $C$ is the point of the curve at which the tangent line is parallel to $AB$; $x_a$, $x_b$ and $x_c$ are the projections of $A$, $B$ and $C$ onto the $x$ axis.} \label{fig: hyp one branch} \end{minipage} \end{figure} Our goal is to branch at a point that maximizes the eliminated area upon branching. Case 1: Both branches of the hyperbola intersect the $[0,1]^2$ box. Let $x_l$ (resp.\ $x_u$) be the $x$-coordinate of the intersection point of the left (resp.\ right) branch with either of the lines $y=0$ or $y=1$. The plot in Figure~\ref{fig: hyp 2 branches} suggests that branching $x$ at any point $x_0\in [x_l,x_u]$ is a reasonable choice for the case where both branches of the hyperbola intersect the $[0,1]^2$ box. Indeed, such branching would eliminate the entire dotted area between the two branches of the curve. Case 2: Only one branch of the hyperbola intersects the $[0, 1]^2$ box. For this case, as illustrated in Figure~\ref{fig: hyp one branch}, we could in principle compute the point $C$ that maximizes the area of the triangle $\bigtriangleup_{ABC}$. To simplify the rule and avoid excessive computations, we simply choose $C$ to be the point at which the tangent line to the curve is parallel to the line $AB$. Moreover, for points in some interval $[x_l,x_u]$ containing $x_c$, the area of the triangle $\bigtriangleup_{ABC}$ does not change much, implying that every point in $[x_l,x_u]$ may be a good choice to branch at.
In our computational experiments, we compute $x_l$ and $x_u$ such that $x_c-x_l = \gamma (x_c-x_a)$ and $x_u-x_c = \gamma (x_b-x_c)$ with $\gamma = 2/3$. \item $q = 0$, and $a\neq 0$ or $b\neq 0$. Without loss of generality assume $b\neq 0$. In this case, $y_{1}$ is an affine function of $x_{1}$, as shown in the proof of Proposition~\ref{prop: hyp is SOCP in space of w}. Thus, we can study $S_0$ in the space of $(x_{1},w_{11})$, where it is defined by a parabola, and we adopt the same rule defined for the case of Figure~\ref{fig: hyp one branch}, i.e., choose points $x_l$ and $x_u$ as a function of $x_a$ and $x_b$. If the parabola intersects the $[0,1]^2$ box in more than two points, we define $A$ and $B$ to be the leftmost and rightmost intersection points. Note that if $a \neq 0$, then $x_1$ is an affine function of $y_1$. We can identify appropriate points in the $y_1$ space as above and then translate them to the $x$ space via the affine function. \end{itemize} Thus, corresponding to every three-variable set $S_0$, we associate (i) an $x$-variable $x_i$, (ii) an interval $[x_l, x_u]$ within the domain of $x_i$, and (iii) an approximation $A_0$ of the area of $\textup{conv}(S_0)$ (either in the space of $(x_1,y_1)$, if $q\neq 0$, or in the space of $(x_1,w_{11})$, if $q=0$). The actual area we use is that of the polyhedral outer approximation, as discussed in Section~\ref{sec:Polyhedralrelaxation}. Once the above data is collected for all disjunctions, we use the following algorithm to decide on the variable to branch on and the point of partitioning for this variable. \begin{algorithm}[H] \caption{Branching rule} \label{alg: branching rule} \begin{algorithmic}[1] \State \textbf{Input:} $\delta = 1/K$, for some positive integer $K$. Let $\varepsilon_1,\varepsilon_2 > 0$. \State Let $A_{ik} = 0, \ i \in \{1, \dots, n_1\}, \ k \in \{1, \dots, K\}$.
Let $p_i = 0, \ i \in \{1, \dots, n_1\}$ \State Define $I_{ik} = [(k-1)\delta,k\delta]$, for $k \in \{1, \dots, K\}$ (which defines a partition of the range of $x_i$). \For{each disjunction $S_0$} \State Compute (a) the index $i$ of the $x$-variable corresponding to $S_0$, (b) the domain $[x_l,x_u]$ and (c) the area $A_0$. \State Set $p_i = p_i + 1$ \State If $[x_l,x_u]\cap I_{ik}\neq \emptyset$ for some $k\in\{1, \dots, K\}$, set $A_{ik} = A_{ik} + A_0$. \EndFor \For{$i \in \{1, \dots, n_1\}$} \If{$\frac{p_i}{\sum_{l=1}^{n_1}p_l} < {\varepsilon_1}$} \State Declare variable $i$ irrelevant. \EndIf \EndFor \State Let $(i^*,k^*) \in \Argmax\{A_{i,k}\,|\, i\in\{1, \dots, n_1\}, i \textup{ is not irrelevant}, k\in \{1,\dots, K\}\}$ \If{$A_{i^*k^*} \geq {\varepsilon_2}$} \State Branch on the variable $x_{i^*}$ at the mid point of the interval $I_{i^*k^*}$. \Else \State Use the bisection rule. \EndIf \end{algorithmic} \end{algorithm} In our computational experiments, whenever we use Algorithm~\ref{alg: branching rule}, we set $\varepsilon_1 = 0.01$, $\varepsilon_2 = 1/16$ and $K = 8$. Our implementation is naive, and we have not tried to fine-tune any of these parameters. \section{Computational experiments} \subsection{Finite element updating model}\label{sec: finite element model} The instances of BBP that we use come from finite element (FE) model updating in structural engineering. The goal is to update the parameter values in an FE model so that the model provides the same resonance frequencies and mode shapes that are physically measured from vibration testing of the as-built structure. In this study we adopt the modal dynamic residual formulation, for which the details can be found in \cite{Wang2015}. The formulation is briefly summarized as follows. Consider the model updating of a structure with $m$ degrees of freedom (DOFs).
Corresponding to the stiffness parameters that are being updated, the (scaled) updating variables are first denoted as $x\in [-1,1]^{n_1}$. Since only some DOFs can be instrumented, we suppose $n_2$ of them are not instrumented, leaving $m-n_2$ of them instrumented. In addition, it is assumed that $n_3$ vibration modes are measured/observed from the vibration testing data. For each $l$-th measured mode, $\forall l\in \{1, \dots, n_3\}$, the experimental results provide $\lambda_l$ as the square of the (angular) resonance frequency, and $\bar{y}^l\in \mathbb{R}^{m-n_2}$ as the mode shape entries at the instrumented DOFs. In mathematical terms, the modal dynamic residual formulation can be stated as the problem of simultaneously solving the following set of equations in the stiffness updating variables $x\in [-1,1]^{n_1}$ and the (scaled) unmeasured mode shape entries $y^l\in[-2,2]^{n_2}$, $\forall l\in \{1, \dots, n_3\}$: \begin{align} [K_0+\sum_{i=1}^{n_1}x_iK_i-\lambda_lM] \begin{bmatrix} \bar{y}^l\\ y^l \end{bmatrix} = 0, \ l\in \{1, \dots, n_3\},\label{setEqs} \end{align} where $M,K_0,K_i\in \mathbb{R}^{m\times m}$, $\forall i\in \{1, \dots, n_1\}$, $\lambda_l\in \mathbb{R}_+$ and $\bar{y}^l\in \mathbb{R}^{m-n_2}, \ \forall l\in \{1, \dots, n_3\}$, are problem data. In practice, (\ref{setEqs}) is unlikely to have a feasible solution set of $x$ and $y^l$, $l\in \{1, \dots, n_3\}$, because of modeling and measurement inaccuracies. Therefore, we convert the problem of solving (\ref{setEqs}) into an optimization problem that aims to minimize the sum of the residuals, i.e., the absolute differences between the left- and right-hand sides of each equation.
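To make the residual objective concrete, the following is a minimal sketch of evaluating the sum of modal dynamic residuals for a candidate $(x, y^1, \dots, y^{n_3})$. The matrices here are random stand-ins for the actual FE data, and all names are ours.

```python
import numpy as np

# Hedged sketch: sum of modal dynamic residuals for given stiffness
# updating variables x and unmeasured mode shape entries ys. K0, Ks, M,
# lams, ybars are illustrative stand-ins for the problem data in (setEqs).
def residual_sum(x, K0, Ks, M, lams, ybars, ys):
    total = 0.0
    for lam, ybar, y in zip(lams, ybars, ys):
        K = K0 + sum(xi * Ki for xi, Ki in zip(x, Ks))
        phi = np.concatenate([ybar, y])      # measured + unmeasured entries
        total += np.abs((K - lam * M) @ phi).sum()
    return total

# Toy example: m = 3 DOFs, n1 = 1 parameter, n3 = 1 mode, n2 = 1 unmeasured DOF
rng = np.random.default_rng(0)
K0, K1, M = rng.random((3, 3)), rng.random((3, 3)), np.eye(3)
val = residual_sum([0.5], K0, [K1], M, [1.0], [rng.random(2)], [rng.random(1)])
```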
After some affine transformations and simplifications, this optimization problem can be stated as follows: \begin{eqnarray} &\min & \sum_{k=1}^{m} z_{k}\label{P1}\\ &\textup{s.t.}& |x^{\top}Q_{k}y + a_{k}^{\top}x + b_{k}^{\top}y + c_{k}| = z_{k}, \ k\in \{1, \dots, m\}\nonumber\\ & & x\in[0,1]^{n_1},y\in[0,1]^{n_2},\nonumber \end{eqnarray} where $n_2$ and $m$ correspond to $n_2n_3$ and $mn_3$, respectively, in the notation of (\ref{setEqs}). Finally, (\ref{P1}) is equivalent to the following BBP. \begin{eqnarray} &\min & \sum_{k=1}^{m} z'_{k}+ z''_{k}\label{P2}\\ &\textup{s.t.}& x^{\top}Q_{k}y + a_{k}^{\top}x + b_{k}^{\top}y + c_{k} = z'_{k}-z''_{k}, \ k\in \{1, \dots, m\}\nonumber\\ & & x\in[0,1]^{n_1},y\in[0,1]^{n_2}.\nonumber\\ & & 0 \leq z'_{k}, z''_{k} \leq u, \ k \in \{1, \dots, m\}.\nonumber \end{eqnarray} \noindent\textbf{Instances:}\\ The simulated structural example is similar to the planar truss structure in \cite{Wang2015}. In order to simulate measurement noise, we add a normally distributed random variable to the parameters $\lambda_l$ and $\bar{y}^l$, $\forall l\in\{1,\dots,n_3\}$, with mean zero and variance equal to $2\%$ of the actual value. In our case there are six modes, i.e., $n_3=6$. By taking different values for $n_2$, we then generate ten instances whose numbers of variables and constraints are given in Table~\ref{table: inst}.
\begin{table}[h] \centering \caption{Instances description} \label{table: inst} \begin{tabular}{ccccc} Inst & $\#$ of x-variables & $\#$ of y-variables & $\#$ of equations & $\#$ of bilinear terms\\ \hline inst1 & 6 & 180 & 312 & 990 \\ inst2 & 6 & 180 & 312 & 954 \\ inst3 & 6 & 168 & 312 & 966 \\ inst4 & 6 & 168 & 312 & 972 \\ inst5 & 6 & 156 & 312 & 900 \\ inst6 & 6 & 144 & 312 & 780 \\ inst7 & 6 & 132 & 312 & 756 \\ inst8 & 6 & 132 & 312 & 756 \\ inst9 & 6 & 120 & 312 & 684 \\ inst10 & 6 & 120 & 312 & 684 \end{tabular} \end{table} \subsection{Simplifying $S^{SOCP}$} \subsubsection{A lighter version of $S^{SOCP}$}\label{sec: lighter version} According to Remark~\ref{rem:size}, the number of disjunctions needed to model the convex hull of a single bilinear equation can be computationally prohibitive for many instances of interest. To overcome this issue, in our computational experiments, we write the convex hull of each row only in the space of the variables appearing in it. In particular, for constraint $k$ we work with $G(V^k, E^k)$, where $V^k$ is the set of variables appearing in constraint $k$ and $E^k$ represents the complete bipartite graph between the $x$ and $y$ variables appearing in $V^k$. This possibly weaker relaxation is computationally much cheaper than $S^{SOCP}$ for our instances, due to the sparsity of the coefficients of each bilinear equation. We denote this relaxation as $\textup{light}-S^{SOCP}$. \subsubsection{Polyhedral outer approximation}\label{sec:Polyhedralrelaxation} As shown in Proposition \ref{prop: hyp is SOCP} and Proposition \ref{prop: parabola is SOCP}, all the sets obtained after fixings are SOCP representable. Some are polyhedral while many of the others are not. Since linear programming techniques are more efficient and robust than their nonlinear counterparts, we outer approximate the non-polyhedral sets by polyhedral sets.
As shown in the proof of Proposition~\ref{thm: convex of single eq is SOCP in the space of w}, all the nonlinear sets that we need to convexify in order to obtain the convex hull of the set ${S}$ defined in (\ref{eq_single}) are of the form \begin{align*} S_{i_0j_0} = \{(x,y,w)\in[0,1]^{n_1+n_2+n_1n_2}\,|\, x_i, y_j\in\{0,1\}, \ \forall i\in V_1\setminus\{i_0\}, \ \forall j\in V_2\setminus \{j_0\}, \\ \bar{q} w_{i_0j_0} + \bar{a}x_{i_0} + \bar{b}y_{j_0} + \bar{c} = 0 , \ w_{ij} = x_{i} y_{j}, \ i \in V_1, \ j\in V_2\}, \end{align*} for some $(i_0,j_0)\in E$. Without loss of generality, suppose $i_0=1$ and $j_0=1$, in which case we want to outer approximate the following set $ S_0 = \{(x_{1},y_{1},w_{11})\in{\rr}^3\,|\, q w_{11} + ax_{1} + by_{1} + c = 0, \ w_{11}=x_{1} y_{1}\}. $ There are two cases of interest. The first case occurs when $q\neq 0$. In this case, $w_{11}$ is an affine function of $x_{1}$ and $y_{1}$, as follows: $w_{11} = (-c-ax_{1}-by_{1})/q$; $w_{1j} = x_{1}y_j, \ \forall j\in \{1, \dots, n_2\}$; $w_{i1} = x_{i}y_{1}, \ \forall i\in \{1, \dots, n_1\}$; and $w_{ij}=x_iy_j, \ \forall i\in \{1, \dots, n_1\}\setminus\{1\}, \ \forall j\in \{1, \dots, n_2\}\setminus\{1\}$. Hence, we only need to approximate $\convex(S_0)$ in the space of $(x_{1},y_{1})$. If both branches of the hyperbola defined by $q x_{1} y_{1} + ax_{1} + by_{1} + c = 0$ intersect the $[0,1]^2$ box, then $\convex(S_0)$ is polyhedral. Suppose only one branch of the hyperbola intersects the box. Then, we outer approximate $\convex(S_0)$ by using tangent lines to the curve. In our implementation, we only use the tangent lines at the intersection points of the curve with the box, see Figure~\ref{fig: hyperbola}. More tangent lines could be added to better approximate $\convex(S_0)$, but based on our preliminary experience on our instances this does not make a significant difference.
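As an illustration, the tangent line used in these cuts can be computed in closed form by implicit differentiation. The sketch below is ours (the function name and the choice of solving for $y$ as a function of $x$ are assumptions for exposition), not the paper's code.

```python
# Hedged sketch: tangent line to the curve q*x*y + a*x + b*y + c = 0,
# solved as y(x) = -(a*x + c)/(q*x + b), at a given point x0 of the curve.

def tangent_cut(q, a, b, c, x0):
    """Return (slope, intercept) of the tangent line at x = x0; the cut is
    y <= or >= slope*x + intercept, depending on the side of conv(S_0)."""
    y0 = -(a * x0 + c) / (q * x0 + b)
    # implicit differentiation: (q*y + a) + (q*x + b) * y' = 0
    slope = -(q * y0 + a) / (q * x0 + b)
    return slope, y0 - slope * x0

# Example: x*y = 1/4 (q = 1, a = b = 0, c = -1/4); tangent at (1/2, 1/2)
m, t = tangent_cut(1.0, 0.0, 0.0, -0.25, 0.5)  # -> slope -1, intercept 1
```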
The second case of interest is $q = 0$ and $a\neq 0$ (or $b\neq 0$), for which we can rewrite $S_0$ as $ S_0 = \{(x_{1},y_{1},w_{11})\in[0,1]^3\,|\, aw_{11} = -by_{1}^2-cy_{1}, \ ax_{1} = -by_{1}-c\}. $ In this case, $x_{1}$ is an affine function of $y_{1}$ and we only need to approximate $\convex(S_0)$ in the space of $(y_{1},w_{11})$, where $ aw_{11} = -cy_{1}-by_{1}^2$ defines a parabola as shown in Figure~\ref{fig: parabola}. As in the previous case, we outer approximate the curve by using tangent lines, as illustrated in Figure~\ref{fig: parabola}. \begin{figure}[!h] \centering \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.8]{fig_hyperbola.png} \caption{Convex hull of the set defined by the intersection of one branch of a hyperbola with the $[0,1]^2$ box, and its tangential linear outer approximation.} \label{fig: hyperbola} \end{minipage}% \hspace{0.3cm} \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.8]{fig_parabola.png} \caption{Convex hull of the set defined by the intersection of a parabola with the $[0,1]^2$ box, and its tangential linear outer approximation.\newline} \label{fig: parabola} \end{minipage} \end{figure} \subsection{Computational results}\label{sec:compres} \subsubsection{Software and hardware} All of our experiments were run on a Windows 10 machine with a 64-bit operating system, an x64-based processor at 2.19GHz, and 32GB of RAM. We call MOSEK via CVX from MATLAB R2015b to solve SDPs. We used Gurobi 7.5.1 to solve LPs and integer programs. We used BARON 15.6.5 (with CPLEX 12.6 as LP solver and IPOPT as nonlinear solver) as our choice of commercial global solver, which we call from MATLAB R2015b. \subsubsection{Root node} We assess the strength of our proposed polyhedral outer approximation of the $\textup{light}-S^{SOCP}$ relaxation (defined in Section~\ref{sec: lighter version} and referred to as SOCP in the tables) against the classical SDP and McCormick (Mc) relaxations.
The numerical results are reported in Table~\ref{table:Root relaxation comparisons}, where SDP+Mc denotes the intersection of the SDP and Mc relaxations. Similarly, SOCP+Mc denotes the intersection of the SOCP and Mc relaxations (since we are not using $S^{SOCP}$, this could potentially be stronger than SOCP). \begin{table}[h] \centering \caption{Root relaxations} \label{table:Root relaxation comparisons} \begin{tabular}{ccccccccccc} & \multicolumn{2}{c}{Mc} & \multicolumn{2}{c}{SDP} & \multicolumn{2}{c}{SDP+Mc} & \multicolumn{2}{c}{SOCP} & \multicolumn{2}{c}{SOCP+Mc} \\ Inst & Bound & Time & Bound & Time & Bound & Time & Bound & Time & Bound & Time \\ \hline 1 & 0.17771 & 0.07 & 0.17771 & 1.81 & 0.17771 & 35.89 & 0.17793 & 17.59 & 0.17793 & 18.42 \\ 2 & 0.00000 & 0.05 & 0.00000 & 1.70 & 0.00000 & 38.98 & 0.00000 & 20.93 & 0.00000 & 21.14 \\ 3 & 0.27543 & 0.07 & 0.27194 & 1.81 & 0.27543 & 44.02 & 0.28202 & 16.22 & 0.28202 & 49.61 \\ 4 & 0.10095 & 0.08 & 0.10012 & 2.14 & 0.10095 & 36.13 & 0.10101 & 20.71 & 0.10101 & 25.87 \\ 5 & 0.34766 & 0.05 & 0.34766 & 1.67 & 0.34766 & 31.58 & 0.34925 & 13.17 & 0.34925 & 12.88 \\ 6 & 0.97758 & 0.05 & 0.91629 & 1.80 & 0.97758 & 28.47 & 1.00267 & 11.64 & 1.00267 & 11.07 \\ 7 & 1.73437 & 0.07 & 1.70329 & 1.38 & 1.73437 & 25.29 & 1.74015 & 10.76 & 1.74015 & 11.68 \\ 8 & 1.99887 & 0.07 & 1.97107 & 1.30 & 1.99887 & 21.95 & 2.01260 & 17.53 & 2.01260 & 21.51 \\ 9 & 1.89400 & 0.05 & 1.89222 & 1.17 & 1.89400 & 22.94 & 1.90191 & 10.53 & 1.90191 & 9.32 \\ 10 & 2.41036 & 0.05 & 2.40658 & 1.16 & 2.41036 & 18.95 & 2.41959 & 10.07 & 2.41959 & 12.29 \end{tabular} \end{table} As we see, SOCP produces the best dual bounds among SDP, Mc and SDP+Mc. Also, SOCP runs faster than SDP+Mc for all the instances. Finally, SOCP+Mc produces no better bounds than SOCP alone.
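For reference, the Mc relaxation compared above replaces each bilinear equation $w_{ij} = x_i y_j$ over $[0,1]^2$ by the four McCormick envelope inequalities; a minimal sketch of the bounds they imply on $w$:

```python
# McCormick envelopes of w = x*y on [0,1]^2: w >= 0, w >= x + y - 1,
# w <= x, w <= y. These are the convex and concave envelopes of the
# bilinear term over the unit box.

def mccormick_bounds(x, y):
    """Interval of w values allowed by the McCormick relaxation at (x, y)."""
    lower = max(0.0, x + y - 1.0)
    upper = min(x, y)
    return lower, upper

# At x = y = 1/2 the relaxation allows any w in [0, 1/2], while the true
# value is 1/4 -- the gap that discretization and branching try to close.
lo, hi = mccormick_bounds(0.5, 0.5)  # -> (0.0, 0.5)
```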
A strong relaxation can be obtained by partitioning the domain of some variables and writing a MILP formulation to model the union of McCormick relaxations over each piece~\cite{meyer2006global, dey2015analysis}. We call it McCormick discretization and use the MILP formulation with binary expansion. We only partition the domain of the variables $x_i$, as the number of $x$ variables is much smaller than the number of $y$ variables for all of our instances. In Table~\ref{table: Mc Disc}, $T$ defines the level of discretization, meaning that the range of each variable $x_i$ is partitioned into $2^T+1$ uniform sub-intervals. This relaxation becomes tighter as $T$ increases. However, the MILP that needs to be solved becomes harder, since the number of binary variables increases as a function of $T$. Thus, we give GUROBI a time limit of 10 hours, which is the amount of time given to all the branch-and-bound algorithms that we report in Section~\ref{sec: branch and bound} below. Table~\ref{table: Mc Disc} reports the computational results, where an asterisk indicates that GUROBI reached the time limit with the given level of discretization. If this is the case, then we report the MILP dual bound returned by the solver, which is a valid dual bound for our problem. The last column displays the best bound obtained among all the levels of discretization reported.
\begin{table}[h] \centering \caption{McCormick discretization: dual bounds} \label{table: Mc Disc} \begin{tabular}{cccccccc} Inst & T=6 & T=8 & T=10 & T=12* & T=14* & T=16* & Best \\ \hline 1 & 0.18611 & 0.20512 & 1.11852 & 1.85387 & 1.40586 & 0.96121 & 1.85387 \\ 2 & 0.00000 & 0.03133 & 1.05662 & 2.14709 & 1.38374 & 0.04654 & 2.14709 \\ 3 & 0.29443 & 0.33575 & 1.39375 & 2.14270 & 1.42642 & 1.42007 & 2.14270 \\ 4 & 0.10524 & 0.11387 & 1.21446 & 2.44853 & 1.63495 & 1.27218 & 2.44853 \\ 5 & 0.36159 & 0.47559 & 2.15416 & 3.40272 & 3.22915 & 2.67721 & 3.40272 \\ 6 & 1.25052 & 2.61325 & 4.16459 & 4.06782 & 3.96512 & 3.78165 & 4.16459 \\ 7 & 1.96682 & 2.17988 & 3.60737 & 4.92133 & 4.69632 & 4.47471 & 4.92133 \\ 8 & 2.48886 & 2.69510 & 3.63400 & 4.81890 & 4.48014 & 4.19095 & 4.81890 \\ 9 & 2.05584 & 2.42150 & 4.16064 & 5.54076 & 5.63110 & 5.15290 & 5.63110 \\ 10 & 2.57751 & 2.80795 & 4.07475 & 5.40977 & 5.28173 & 5.16376 & 5.40977 \end{tabular} \end{table} Clearly, McCormick discretization produces better results than SOCP. Therefore, if one does not want to use branch-and-bound, then McCormick discretization is the best option. However, as we see in the next section, better dual bounds can be obtained by combining SOCP with the new proposed branch-and-bound algorithm. \subsubsection{Branch-and-bound}\label{sec: branch and bound} We assess and compare the performance of the following methods:\\ - \textit{BB}: This stands for our implementation of a branch-and-bound algorithm coded in Python. We use GUROBI as LP solver and run IPOPT at each node to search for feasible solutions. Our algorithm uses best-bound-first as node selection rule and rectangular partitioning.
We consider three variants that differ from each other based on the relaxation adopted at each node and on the way variables and branching points are selected: \begin{itemize} \item[-] \textit{BB-SOCP-1}: Uses the polyhedral relaxation described in Section~\ref{sec:Polyhedralrelaxation}, with variable selection and branching point given by Algorithm~\ref{alg: branching rule}. \item[-] \textit{BB-SOCP-2}: Uses the same relaxation as BB-SOCP-1 above. The branching variable is selected according to the gap-error-rule explained in Section~\ref{sec: variable and branching point selection}. It then uses the incumbent rule for branching point selection whenever possible, and otherwise uses the maximum-deviation rule. \item[-] \textit{BB-SOCP-3}: Same as BB-SOCP-2, except that it uses bisection for branching point selection. \item[-] \textit{BB-Mc}: Uses the McCormick relaxation, with the gap-error-rule as branching variable selection rule and bisection for branching point selection. \end{itemize} The dual bounds from our computational experiments are reported in Table~\ref{table: dual bounds}. The stopping criterion for all the methods was a time limit of 10 hours. \begin{table}[h] \centering \caption{Branch-and-bound methods: dual bounds} \label{table: dual bounds} \begin{tabular}{ccccc} Inst & BB-SOCP-1 & BB-SOCP-2 & BB-SOCP-3 & BB-Mc \\ \hline 1 & 2.50744 & 0.18473 & 0.18228 & 0.18343 \\ 2 & 2.86438 & 0.00000 & 0.00000 & 0.00000 \\ 3 & 3.13078 & 0.29109 & 0.28983 & 0.28884 \\ 4 & 3.11154 & 0.10526 & 0.10246 & 0.10410 \\ 5 & 3.78958 & 0.35253 & 0.35392 & 0.35405 \\ 6 & 4.63992 & 1.11105 & 1.09537 & 1.15191 \\ 7 & 5.26603 & 1.99569 & 1.88331 & 1.94949 \\ 8 & 5.13128 & 2.18546 & 2.18193 & 2.28761 \\ 9 & 6.10860 & 2.17509 & 2.08068 & 2.10144 \\ 10 & 5.77051 & 2.48039 & 2.45158 & 2.47965 \end{tabular} \end{table} The best dual bound for each instance is clearly given by BB-SOCP-1, which uses our proposed relaxation and branching rule.
All the standard branching rules yield significantly worse bounds. \subsubsection{McCormick relaxation with BB-SOCP-1 branching rules} The computational results from Section~\ref{sec: branch and bound} suggest that the good performance of BB-SOCP-1 is highly dependent on its branching rules, defined according to Algorithm~\ref{alg: branching rule}. In this section we show that the branching rules of Algorithm~\ref{alg: branching rule} on their own are not enough to produce good dual bounds. Consider the variant of BB-SOCP-1, referred to as BB-SOCP-Mc, which uses only the McCormick relaxation and the same branching rule given by Algorithm~\ref{alg: branching rule}. Thus, at each node, we collect data from each disjunction $S_0$ and run Algorithm~\ref{alg: branching rule} to select the branching variable and the branching point, but we only use the McCormick inequalities to define the relaxation. In Table~\ref{table: SOCP1 vc SOCP_Mc}, we compare the performance of BB-SOCP-1 and BB-SOCP-Mc. It becomes clear that the strength of BB-SOCP-1 does not come only from the branching rules of Algorithm~\ref{alg: branching rule} but also from our proposed relaxation. The discrepancy in the performance of BB-SOCP-1 and BB-SOCP-Mc means that, as the algorithm goes down the tree, the SOCP relaxation becomes much tighter than the McCormick relaxation. \begin{table}[h] \centering \caption{BB-SOCP-1 vs.
McCormick relaxation with BB-SOCP-1 branching rules} \label{table: SOCP1 vc SOCP_Mc} \begin{tabular}{ccccc} & \multicolumn{2}{c}{BB-SOCP-1} & \multicolumn{2}{c}{BB-SOCP-Mc} \\ Inst & Dual Bound & Gap ($\%$) & Dual Bound & Gap ($\%$) \\ \hline 1 & 2.50744 & 27.9 & 0.19776 & 94.3 \\ 2 & 2.86438 & 18.2 & 0.02752 & 99.2 \\ 3 & 3.13078 & 14.9 & 0.30514 & 91.7 \\ 4 & 3.11154 & 17.1 & 0.11188 & 97.0 \\ 5 & 3.78958 & 8.3 & 0.40497 & 90.2 \\ 6 & 4.63992 & 18.0 & 1.52070 & 73.1 \\ 7 & 5.26603 & 6.0 & 2.26765 & 59.5 \\ 8 & 5.13128 & 9.5 & 2.68861 & 52.6 \\ 9 & 6.10860 & 1.5 & 2.51461 & 59.5 \\ 10 & 5.77051 & 7.9 & 2.85232 & 54.2 \end{tabular} \end{table} \subsubsection{Comparison of primal bounds and duality gaps} Finally, we report in Table~\ref{table: primal bounds} a summary of the performance of BB-SOCP-1, McCormick discretization, BARON and BB-Mc. Recall that the stopping criterion for all the methods was a time limit of 10 hours. Also recall that primal solutions for BB-SOCP-1 and BB-Mc are obtained using IPOPT.
\begin{table}[h] \centering \caption{Primal bounds and duality gaps} \label{table: primal bounds} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccccccc} & \multicolumn{3}{c}{BB-SOCP-1} & \multicolumn{2}{c}{Mc Disc} & \multicolumn{3}{c}{BARON} & \multicolumn{3}{c}{BB-Mc} \\ Inst & Dual & Primal & Gap($\%$) & Dual & Gap($\%$) & Dual & Primal & Gap($\%$) & Dual & Primal & Gap($\%$) \\ \hline 1 & 2.50744 & 3.47847 & 27.9 & 1.85387 & 46.7 & 0.33122 & 3.47887 & 90.5 & 0.18343 & 3.47849 & 94.7 \\ 2 & 2.86438 & 3.49983 & 18.2 & 2.14709 & 38.6 & 0.52447 & 3.49931 & 85.0 & 0.00000 & 3.49983 & 100.0 \\ 3 & 3.13078 & 3.68103 & 14.9 & 2.14270 & 41.8 & 0.47599 & 3.68306 & 87.1 & 0.28884 & 3.73308 & 92.3 \\ 4 & 3.11154 & 3.75223 & 17.1 & 2.44853 & 34.7 & 0.78630 & 3.75297 & 79.0 & 0.10410 & 3.75225 & 97.2 \\ 5 & 3.78958 & 4.13277 & 8.3 & 3.40272 & 17.7 & 0.38396 & 4.13541 & 90.7 & 0.35405 & 4.28165 & 91.7 \\ 6 & 4.63992 & 5.66096 & 18.0 & 4.16459 & 26.4 & 2.26566 & 5.66053 & 60.0 & 1.15191 & 5.66096 & 79.7 \\ 7 & 5.26603 & 5.60009 & 6.0 & 4.92133 & 12.1 & 3.07096 & 5.60020 & 45.2 & 1.94949 & 5.69318 & 65.8 \\ 8 & 5.13128 & 5.67022 & 9.5 & 4.81890 & 15.0 & 2.70237 & 5.67025 & 52.3 & 2.28761 & 5.67252 & 59.7 \\ 9 & 6.10860 & 6.20343 & 1.5 & 5.63110 & 9.2 & 3.67301 & 6.20346 & 40.8 & 2.10144 & 6.29365 & 66.6 \\ 10 & 5.77051 & 6.26853 & 7.9 & 5.40977 & 13.1 & 2.94060 & 6.22639 & 52.8 & 2.47965 & 6.30477 & 60.7 \end{tabular}} \end{table} The primal bounds from all three branch-and-bound methods are similar, suggesting that the solutions found are close to a global optimum. On the other hand, the dual bounds from BB-SOCP-1 are significantly better than the dual bounds from all the other methods, which can be seen by comparing the duality gaps.
In particular, the duality gap from BB-SOCP-1 is considerably smaller than the duality gap from Mc Disc, even though we are reporting the best dual bound obtained among all discretization levels $T=6,8,\cdots,16$, and the primal bound we use to compute the duality gap of Mc Disc is the best primal bound from BB-SOCP-1, BARON and BB-Mc. The standard branching, i.e., the McCormick relaxation with bisection, yields the worst performance for all the instances.

\section*{Acknowledgments}

The authors would like to thank Xinjun Dong in Civil and Environmental Engineering at Georgia Tech for his assistance with preparing the structural example data. Santanu S. Dey would like to acknowledge the discussion on a preliminary version of this paper at Dagstuhl workshop \# 18081, which helped improve the paper.

Funding: This work was supported by the NSF CMMI [grant number 1149400]; the NSF CMMI [grant number 1150700]; and the CNPq-Brazil [grant number 248941/2013-5].

\bibliographystyle{plain}
https://arxiv.org/abs/0812.3374
A remarkable sequence of integers
A survey of properties of a sequence of coefficients appearing in the evaluation of a quartic definite integral is presented. These properties are of analytical, combinatorial and number-theoretical nature.
\section{A quartic integral} \label{intro} \setcounter{equation}{0} The problem of explicit evaluation of definite integrals has been greatly simplified due to the advances in symbolic languages like Mathematica and Maple. Some years ago the first author described in \cite{moll-notices} how he got interested in these topics and the appearance of the sequence of rational numbers \begin{equation} d_{l,m} = 2^{-2m} \sum_{k=l}^{m} 2^{k} \binom{2m-2k}{m-k} \binom{m+k}{m} \binom{k}{l}, \label{positive-0} \end{equation} \noindent for $0 \leq l \leq m$. These are rational numbers with a simple denominator. The numbers $2^{2m}d_{l,m}$ are the remarkable integers in the title. These rational coefficients $d_{l,m}$ appeared in the evaluation of the {\em quartic integral} \begin{equation} N_{0,4}(a;m) := \int_{0}^{\infty} \frac{dx}{(x^{4} + 2ax^{2} + 1)^{m+1}}, \label{int-deg4} \end{equation} \noindent for $a> -1, \, m \in \mathbb{N}$. The formula \begin{equation} N_{0,4}(a;m) = \frac{\pi}{2} \frac{P_{m}(a)}{\left[ 2(a+1) \right]^{m + \tfrac{1}{2} } }, \label{int-qua} \end{equation} \noindent with \begin{equation} P_{m}(a) = \sum_{l=0}^{m} d_{l,m}a^{l} \label{polyP-def} \end{equation} \noindent has been established by a variety of methods, some of which are reviewed in \cite{amram}. The symbolic status of (\ref{int-deg4}) has not changed much since we last reported on \cite{moll-notices}. Mathematica 6.0 is unable to compute it when $a$ and $m$ are entered as parameters. On the other hand, the corresponding indefinite integral is evaluated in terms of the Appell-F1 function defined by \begin{equation} F_{1}(a;b_{1},b_{2};c;x,y) := \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \frac{(a)_{m+n} (b_{1})_{m} (b_{2})_{n}}{m! n! 
(c)_{m+n}} x^{m}y^{n} \end{equation} \noindent as \begin{eqnarray} \int \frac{dx}{(x^{4}+2ax^{2}+1)^{m+1}} & = & x \, F_{1} \left[ \frac{1}{2}, 1+m,1+m, \frac{3}{2}, - \frac{x^{2}}{a_{+}}, \frac{x^{2}}{-a_{-}} \right], \nonumber \end{eqnarray} \noindent where $a_{\pm} := a \pm \sqrt{-1+a^{2}}$. Here $(a)_{k} = a(a+1) \cdots (a+k-1)$ is the ascending factorial. The coefficients $\{d_{l,m}: 0 \leq l \leq m \}$ have remarkable properties that will be discussed here. Those properties have mainly been discovered by following the methodology of Experimental Mathematics, as presented in \cite{borw1, borw2}. Many of the properties presented here have been {\em guessed} using a symbolic language and subsequently established by traditional methods. The reader will find in \cite{irrbook} a detailed introduction to the polynomial $P_{m}(a)$ in (\ref{polyP-def}). \section{A triple sum expression for $d_{l,m}$} \label{sec-triple} \setcounter{equation}{0} Our first approach to the evaluation of (\ref{int-qua}) was a byproduct of a new proof of Wallis's formula, \begin{eqnarray} J_{2,m} := \int_{0}^{\infty} \frac{dx}{(x^{2} + 1)^{m+1}} & = & \frac{\pi}{2^{2m+1}} \binom{2m}{m}, \label{wallis} \end{eqnarray} \noindent where $m$ is a nonnegative integer. Wallis' formula has the equivalent form \begin{equation} \frac{\pi}{2} = \frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdots \frac{2n}{2n-1} \cdot \frac{2n}{2n+1} \cdots. \end{equation} \noindent The reader will find in \cite{irrbook} a proof of the equivalence of these two formulations. We describe in \cite{sarah1} our first proof of (\ref{wallis}). Section \ref{sec-single} shows that a simple extension leads naturally to the concept of {\em rational Landen transformations}. These are transformations on the coefficients of a rational integrand that preserve the value of the integral. 
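Before proceeding, the closed form (\ref{positive-0}) and the evaluation (\ref{int-qua}) lend themselves to a machine check (ours, not part of the original text). At $a=1$ the quartic factors as $(x^{2}+1)^{2}$, so $N_{0,4}(1;m) = J_{2,2m+1}$, and comparing with (\ref{wallis}) forces $P_{m}(1) = \binom{4m+2}{2m+1} 2^{-(2m+1)}$. The following Python sketch verifies this with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def d(l, m):
    # d_{l,m} = 2^{-2m} * sum_{k=l}^{m} 2^k C(2m-2k, m-k) C(m+k, m) C(k, l)
    return Fraction(sum(2**k * comb(2*m - 2*k, m - k) * comb(m + k, m) * comb(k, l)
                        for k in range(l, m + 1)), 4**m)

def P1(m):
    # P_m(1) is the sum of the coefficients d_{l,m}
    return sum(d(l, m) for l in range(m + 1))

# At a = 1 the integral reduces to Wallis' formula with exponent 2m+2,
# which forces P_m(1) = C(4m+2, 2m+1) / 2^(2m+1).
for m in range(12):
    assert P1(m) == Fraction(comb(4*m + 2, 2*m + 1), 2**(2*m + 1))
```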
It is the rational analog of the well known transformation \begin{equation} a \mapsto \frac{a+b}{2}, \quad b \mapsto \sqrt{ab} \end{equation} \noindent that preserves the elliptic integral \begin{equation} G(a,b) = \int_{0}^{\pi/2} \frac{d \theta}{\sqrt{a^{2} \cos^{2} \theta + b^{2} \sin^{2} \theta}}. \end{equation} \noindent The reader will find in \cite{borwein1} and \cite{manna-moll3} details about these topics. The proof of Wallis' formula begins with the change of variables $x = \tan \theta$. This converts $J_{2,m}$ to its trigonometric form \begin{eqnarray} J_{2,m} = \int_{0}^{\pi/2} \cos^{2m} \theta \, d \theta & = & \frac{\pi}{2^{2m+1}} \binom{2m}{m}. \label{wallis12} \end{eqnarray} \noindent The usual elementary proof of (\ref{wallis12}) presented in textbooks is to produce a recurrence for $J_{2,m}$. Writing $\cos^{2} \theta = 1 - \sin^{2} \theta$ and using integration by parts yields \begin{eqnarray} J_{2,m} & = & \frac{2m-1}{2m} J_{2,m-1}. \label{recur1} \end{eqnarray} \noindent Now verify that the right side of (\ref{wallis12}) satisfies the same recursion and that both sides give $\pi/2$ for $m=0$. A second elementary proof of Wallis's formula, also given in \cite{sarah1}, is done using a simple {\em double-angle trick}: \begin{eqnarray} J_{2,m} & = & \int_{0}^{\pi/2} \cos^{2m} \theta \, d \theta = \int_{0}^{\pi/2} \left( \frac{1 + \cos 2 \theta}{2} \right)^{m} \, d \theta. \nonumber \end{eqnarray} \noindent Now introduce the change of variables $\psi = 2 \theta$, expand and simplify the result by observing that the odd powers of cosine integrate to zero. Hence (\ref{wallis12}) is reduced to an inductive proof of the binomial recurrence \begin{eqnarray} J_{2,m} & = & 2^{-m} \sum_{i=0}^{\lfloor{ m/2 \rfloor}} \binom{m}{2i} J_{2,i}. \label{recur} \end{eqnarray} \noindent Note that $J_{2,m}$ is uniquely determined by (\ref{recur}) along with the initial value $J_{2,0} = \pi/2$. 
Thus (\ref{wallis12}) now follows from the identity \begin{eqnarray} f(m) := \sum_{i=0}^{\lfloor{ m/2 \rfloor}} 2^{-2i} \binom{m}{2i} \binom{2i}{i} & = & 2^{-m} \binom{2m}{m} \label{sum1} \end{eqnarray} \noindent since (\ref{sum1}) can be written as \begin{eqnarray} J_{2,m} & = & 2^{-m} \sum_{i=0}^{\lfloor{m/2\rfloor}} \binom{m}{2i} J_{2,i}, \nonumber \end{eqnarray} \noindent where \begin{eqnarray} J_{2,i} & = & \frac{\pi}{2^{2i+1}} \binom{2i}{i}. \nonumber \end{eqnarray} The last step is to verify the identity (\ref{sum1}). This can be done {\em mechanically} using the theory developed by Wilf and Zeilberger, which is explained in \cite{nemes,aequalsb}. The sum in (\ref{sum1}) is the example used in \cite{aequalsb} (page 113) to illustrate their method. \smallskip \noindent {\bf Note}. The WZ-method is an algorithm in Computational Algebra that, among other things, will produce for a hypergeometric/holonomic sum, such as (\ref{bin-sum}), a recurrence like (\ref{rec-22}). The reader will find in \cite{nemes} and \cite{aequalsb} information about this algorithm. \\ \noindent The command $$ct(binomial(m,2i) \, binomial(2i,i) 2^{-2i}, 1, i, m,N)$$ produces \begin{eqnarray} f(m+1) & = & \frac{2m+1}{m+1} \; f(m), \label{recur2} \end{eqnarray} \noindent a recursion satisfied by the sum. One completes the proof by verifying that $2^{-m} \binom{2m}{m}$ satisfies the same recursion. Note that (\ref{recur1}) and (\ref{recur2}) are equivalent since $J_{2,m}$ and $f(m)$ differ only by a factor of $\pi/2^{m+1}$. We have seen that Wallis's formula can be proven by an angle-doubling trick followed by a hypergeometric sum evaluation. Perhaps the most interesting application of the double-angle trick is in the theory of rational Landen transformations. See \cite{manna-moll3} for an overview. Now we employ the same ideas in the evaluation of (\ref{int-qua}). 
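As an illustration (ours), both the closed form (\ref{sum1}) and the WZ recurrence (\ref{recur2}) can be confirmed for small $m$ with exact arithmetic:

```python
from fractions import Fraction
from math import comb

def f(m):
    # Left-hand side of (sum1): sum_i 2^{-2i} C(m, 2i) C(2i, i)
    return sum(Fraction(comb(m, 2*i) * comb(2*i, i), 4**i) for i in range(m // 2 + 1))

for m in range(30):
    assert f(m) == Fraction(comb(2*m, m), 2**m)         # identity (sum1)
    assert f(m + 1) == Fraction(2*m + 1, m + 1) * f(m)  # recurrence (recur2)
```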
The change of variables $x = \tan \theta$ yields \begin{equation} N_{0,4}(a;m) = \int_{0}^{\pi/2} \left( \frac{\cos^{4} \theta} {\sin^{4}\theta + 2a \sin^{2}\theta \cos^{2}\theta + \cos^{4}\theta} \right)^{m+1} \times \frac{d \theta}{\cos^{2} \theta}. \nonumber \end{equation} \noindent Observe first that the denominator of the trigonometric function in the integrand can be written as a polynomial in $\cos u$, where $u = 2 \theta$. In detail, \begin{equation} \sin^{4}\theta + 2a \sin^{2}\theta \cos^{2}\theta + \cos^{4}\theta = \tfrac{1}{2} \left[ (1+a) + (1-a) \cos^{2} u \right]. \nonumber \end{equation} \noindent In terms of the double-angle $u = 2 \theta$, the original integral becomes \begin{equation} N_{0,4}(a;m) = 2^{-(m+1)} \int_{0}^{\pi} \left( \frac{(1+ \cos u)^{2}}{(1+a) + (1-a) \cos^{2}u } \right)^{m+1} \times \frac{du}{1+ \cos u}. \nonumber \end{equation} \noindent Next, expand the binomial $(1+ \cos u)^{2m+1}$ and check that \begin{equation} \int_{0}^{\pi} \left[ (1 + a) + (1-a) \cos^{2}u \right]^{-(m+1)} \, \cos^{j} u \, du = 0 \label{vanishing} \end{equation} \noindent for $j$ odd. The vanishing of half of the terms in the binomial expansion turns out to be a crucial property. The remaining integrals, those with $j$ even, can be simplified by using the double-angle trick one more time. The result is \begin{equation} N_{0,4}(a;m) = \sum_{j=0}^{m} 2^{-j} \binom{2m+1}{2j} \int_{0}^{\pi} \left[ (3+a) + (1-a) \cos v \right]^{-(m+1)} ( 1 + \cos v)^{j} \, dv, \nonumber \end{equation} \noindent where $v = 2u$ and we have used the symmetry of cosine about $v = \pi$ to reduce the integrals from $[0, 2 \pi]$ to $[0, \pi]$. The familiar change of variables $z = \tan(v/2)$ produces (\ref{int-qua}) with the complicated formula \begin{equation} d_{l,m} = \sum_{j=0}^{l} \sum_{s=0}^{m-l} \sum_{k=s+l}^{m} \frac{(-1)^{k-l-s}}{2^{3k}} \binom{2k}{k} \binom{2m+1}{2s+2j} \binom{m-s-j}{m-k} \binom{s+j}{j} \binom{k-s-j}{l-j}. \nonumber \end{equation} \medskip \noindent {\bf Note}.
In spite of its complexity, obtaining this expression was the first step in the mathematical road described in this paper. It was precisely what Kauers and Paule \cite{kauers-paule} required to clarify some combinatorial properties of $d_{l,m}$. Some arithmetical properties can be read directly from it. For example, we can see that $d_{l,m}$ is a rational number and that $2^{3m} d_{l,m} \in \mathbb{Z}$; that is, its denominator is a power of $2$ whose exponent is at most $3m$. Improvements on this bound are outlined in Section \ref{sec-single}.

\section{A single sum expression for $d_{l,m}$} \label{sec-single} \setcounter{equation}{0}

The idea of doubling the angle that proved productive in Section \ref{sec-triple} can be expressed in the realm of rational functions via the change of variables \begin{equation} y = R_{2}(x) := \frac{x^{2}-1}{2x}. \label{transf-r2} \end{equation} \noindent The inverse has two branches \begin{equation} x = y \pm \sqrt{y^{2}+1}, \end{equation} \noindent where the plus sign is valid for $x \in (0, \, +\infty)$ and the minus sign on $(-\infty, 0)$. The rational function $R_{2}$ arises from the identity \begin{equation} \cot 2 \theta = R_{2}(\cot \theta). \end{equation} \noindent This change of variables gives the proof of the next theorem. \begin{Thm} \label{thm-inv} Let $f$ be a rational function and assume that the integral of $f$ over $\mathbb{R}$ is finite. Then \begin{eqnarray} \int_{-\infty}^{\infty} f(x) \, dx & = & \int_{-\infty}^{\infty} \left[ f(y + \sqrt{y^{2}+1}) + f(y - \sqrt{y^{2}+1}) \right] \, dy + \label{invar} \\ & + & \int_{-\infty}^{\infty} \left[ f(y + \sqrt{y^{2}+1}) - f(y - \sqrt{y^{2}+1}) \right] \, \frac{y \, dy}{\sqrt{y^{2}+1}}. \nonumber \end{eqnarray} \noindent Moreover, if $f$ is an {\em even} rational function, the identity (\ref{invar}) remains valid if one replaces each interval of integration by ${\mathbb{R}}^{+}$.
\end{Thm} \begin{Thm} For $m \in \mathbb{N}$, let \begin{equation} Q(x) = \frac{1}{(x^{4}+2ax^{2}+1)^{m+1}}. \end{equation} \noindent Define \begin{eqnarray} Q_{1}(y) & := & \left[ Q(y+\sqrt{y^{2}+1}) + Q(y-\sqrt{y^{2}+1}) \right] + \nonumber \\ & + & \frac{y}{\sqrt{y^{2}+1}} \left[ Q(y+\sqrt{y^{2}+1}) - Q(y-\sqrt{y^{2}+1}) \right]. \nonumber \end{eqnarray} \noindent Then \begin{equation} Q_{1}(y) = \frac{T_{m}(2y)}{2^{m}(1+a+2y^{2})^{m+1}}, \end{equation} \noindent where \begin{equation} T_{m}(y) = \sum_{k=0}^{m} \binom{m+k}{m-k} y^{2k}. \label{bin-sum} \end{equation} \end{Thm} \begin{proof} Introduce the variable $\phi = y + \sqrt{y^{2}+1}$. Then $y - \sqrt{y^{2}+1} = - \phi^{-1}$ and $y = \tfrac{1}{2}(\phi - \phi^{-1})$. Moreover, \begin{eqnarray} Q_{1}(y) & = & \left[ Q(\phi) + Q(\phi^{-1}) \right] + \frac{\phi^{2}-1}{\phi^{2}+1} \left( Q(\phi) - Q(\phi^{-1}) \right) \nonumber \\ & = & \frac{2}{\phi^{2}+1} \left[ \phi^{2}Q(\phi) + Q(\phi^{-1}) \right] \nonumber \\ & := & S_{m}(\phi). \nonumber \end{eqnarray} \noindent The result of the theorem is therefore equivalent to \begin{equation} 2^{m} \left(1 + a + \tfrac{1}{2}(\phi - \phi^{-1})^{2} \right)^{m+1} \, S_{m}(\phi) = T_{m}(\phi - \phi^{-1}). \label{newform} \end{equation} \noindent A direct simplification of the left hand side of (\ref{newform}) shows that this identity is equivalent to proving \begin{equation} \frac{\phi^{2m+1}+ \phi^{-(2m+1)}}{\phi+\phi^{-1}} = T_{m}(\phi - \phi^{-1}). \label{newform2} \end{equation} To establish this, one simply checks that both sides of (\ref{newform2}) satisfy the second order recurrence \begin{equation} c_{m+2}- ( \phi^{2} + \phi^{-2}) c_{m+1} + c_{m} = 0, \label{rec-22} \end{equation} \noindent and the values for $m=0$ and $m=1$ match. This is straightforward for the expression on the left hand side, while the WZ-method settles the right hand side. \end{proof} \bigskip We now prove (\ref{int-qua}).
The identity in Theorem \ref{thm-inv} shows that \begin{equation} \int_{0}^{\infty} Q(x) \, dx = \int_{0}^{\infty} Q_{1}(y) \, dy, \label{equalint} \end{equation} \noindent and this last integral can be evaluated in elementary terms. Indeed, \begin{eqnarray} \int_{0}^{\infty} Q_{1}(y) \, dy & = & \int_{0}^{\infty} \frac{T_{m}(2y) \, dy}{2^{m} (1+ a + 2y^{2})^{m+1}} \nonumber \\ & = & \frac{1}{2^{m}} \sum_{k=0}^{m} \binom{m+k}{m-k} \int_{0}^{\infty} \frac{(2y)^{2k} \, dy}{(1 + a + 2y^{2})^{m+1}}. \nonumber \end{eqnarray} \noindent The change of variables $y = t \, \sqrt{1+a}/\sqrt{2}$ gives \begin{equation} \int_{0}^{\infty} Q_{1}(y) \, dy = \frac{1}{[2(1+a)]^{m+1/2}} \sum_{k=0}^{m} \binom{m+k}{m-k} 2^{k} (1+a)^{k} \int_{0}^{\infty} \frac{t^{2k} \, dt}{(1+t^{2})^{m+1}}, \nonumber \end{equation} \noindent and the elementary identity \begin{equation} \int_{0}^{\infty} \frac{t^{2k} \, dt}{(1+t^{2})^{m+1}} = \frac{\pi}{2^{2m+1}} \binom{2k}{k} \binom{2m-2k}{m-k} \binom{m}{k}^{-1} \nonumber \end{equation} gives \begin{equation} \int_{0}^{\infty} Q_{1}(y) \, dy = \frac{\pi}{2^{2m+1}} \frac{1}{[2(1+a)]^{m+1/2}} \sum_{k=0}^{m} \binom{m+k}{m-k} 2^{k} \binom{2k}{k} \binom{2m-2k}{m-k} \binom{m}{k}^{-1} (1+a)^{k}. \nonumber \end{equation} \noindent This can be simplified further using \begin{equation} \binom{m+k}{m-k} \binom{2k}{k} = \binom{m+k}{m} \binom{m}{k} \end{equation} \noindent and the equality (\ref{equalint}) to produce \begin{equation} \int_{0}^{\infty} Q(y) \, dy = \frac{\pi}{2^{2m+1}} \frac{1}{[2(1+a)]^{m+1/2}} \sum_{k=0}^{m} 2^{k} \binom{m+k}{m} \binom{2m-2k}{m-k} (1+a)^{k}. \end{equation} This completes the proof of (\ref{int-qua}). The coefficients $d_{l,m}$ are given by \begin{equation} d_{l,m} = 2^{-2m} \sum_{k=l}^{m} 2^{k} \binom{2m-2k}{m-k} \binom{m+k}{m} \binom{k}{l}. \label{positive} \end{equation} \noindent This is clearly an improvement over the expression for $d_{l,m}$ given in the previous section.
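As a machine check (ours, not in the original paper), the single sum (\ref{positive}) agrees with the triple sum of Section \ref{sec-triple} for small parameters; a sketch with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def d_single(l, m):
    # Single-sum expression (positive).
    return Fraction(sum(2**k * comb(2*m - 2*k, m - k) * comb(m + k, m) * comb(k, l)
                        for k in range(l, m + 1)), 4**m)

def d_triple(l, m):
    # Triple-sum expression from the elementary derivation of Section 2.
    total = Fraction(0)
    for j in range(l + 1):
        for s in range(m - l + 1):
            for k in range(s + l, m + 1):
                total += (Fraction((-1)**(k - l - s), 2**(3*k))
                          * comb(2*k, k) * comb(2*m + 1, 2*s + 2*j)
                          * comb(m - s - j, m - k) * comb(s + j, j)
                          * comb(k - s - j, l - j))
    return total

for m in range(7):
    for l in range(m + 1):
        assert d_single(l, m) == d_triple(l, m)
```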
We now see that $d_{l,m}$ is a {\em positive} rational number. The bound on the exponent of $2$ in the denominator is now improved to $2m-1$. This comes directly from (\ref{positive}) and the familiar fact that the central binomial coefficients $\binom{2m}{m}$ are even.

\section{A finite sum} \label{sec-finite} \setcounter{equation}{0}

The previous two sections have provided two expressions for the polynomial $P_{m}(a)$. The elementary evaluation in Section \ref{sec-triple} gives \begin{eqnarray} P_{m}(a) & = & \sum_{j = 0}^{m} \binom{2m+1}{2j} (a + 1)^{j} \sum_{k=0}^{m - j} \binom{m - j}{k} \binom{2(m-k)}{m-k} 2^{-3(m-k)} (a - 1)^{m - k - j} \nonumber \\ & & \label{poly1} \end{eqnarray} \noindent and the results described in Section \ref{sec-single} provide the alternative expression \begin{eqnarray} P_{m}(a) & = & 2^{-m} \sum_{k=0}^{m} 2^{-k} \binom{2k}{k} \binom{2m-k}{m} (a+1)^{m-k}. \label{poly2} \\ \nonumber \end{eqnarray} \noindent The reader will find details in \cite{sarah1}. Comparing the values at $a=1$ given by both expressions leads to \begin{equation} \sum_{k=0}^{m} 2^{-2k} \binom{2k}{k} \binom{2m+1}{2k} = \sum_{k=0}^{m} 2^{-2k} \binom{2k}{k} \binom{2m-k}{m}. \label{pretty} \end{equation} \noindent The identity (\ref{pretty}) can be verified using D. Zeilberger's package EKHAD \cite{aequalsb}. Indeed, EKHAD tells us that both sides of (\ref{pretty}) satisfy the recursion \begin{equation} (2m+3)(2m+2)f(m+1) = (4m+5)(4m+3) f(m). \nonumber \end{equation} \noindent To conclude the proof by recursion, we check that they agree at $m=1$. A symbolic evaluation of both sides of (\ref{pretty}) leads to \begin{equation} \frac{2^{2m+1} \Gamma(2m+3/2)}{\sqrt{\pi} \, \Gamma(2m+2)} = - \frac{2^{2m+1} \, \sqrt{\pi}}{\Gamma(-2m-1/2) \Gamma(2m+2)}. \end{equation} \noindent The identity (\ref{pretty}) now follows from \begin{equation} \Gamma( m + \tfrac{1}{2} ) = \frac{\sqrt{\pi}}{2^{2m}} \frac{(2m)!}{m!} \text{ for } m \in \mathbb{N}.
\end{equation} An elementary proof of (\ref{pretty}) would be desirable. \\ The left hand sum admits a combinatorial interpretation: multiply by $2^{2m+1}$ to produce \begin{equation} S_{1}(m) := \sum_{j=0}^{m} \binom{2m+1}{2j} \binom{2j}{j} 2^{2m+1-2j}. \end{equation} \noindent Consider the set $X$ of all paths in the plane that start at $(0,0)$ and take $2m+1$ steps in any of the four compass directions ($N = (0,1), \, S = (0,-1), \, E = (1,0)$ and $W= (-1,0)$) so that the path ends on the $y$-axis. Clearly there must be the same number of $E$'s and $W$'s, say $j$ of them. Then to produce one of these paths, choose the $2j$ positions of the horizontal steps in $\binom{2m+1}{2j}$ ways and decide which of them are $E$ in $\binom{2j}{j}$ ways. Finally, choose the remaining $2m+1-2j$ steps to be either $N$ or $S$, in $2^{2m+1-2j}$ ways. This shows that the set $X$ has $S_{1}(m)$ elements. Now let $Y$ be the set of all paths on the $x$-axis that start and end at $0$, take steps $e = 1$ and $w = -1$, and have length $4m+2$. The cardinality of $Y$ is clearly $\binom{4m+2}{2m+1}$. There is a simple bijection between the sets $X$ and $Y$ given by $E \to ee, \, W \to ww, N \to ew, \, S \to we$. Therefore, \begin{equation} S_{1}(m) = \binom{4m+2}{2m+1}. \end{equation} We have been unable to produce a combinatorial proof for the right hand side of (\ref{pretty}).

\section{A related family of polynomials} \label{sec-related} \setcounter{equation}{0}

The expression (\ref{positive}) provides an efficient formula for the evaluation of $d_{l,m}$ when $l$ is close to $m$. For example, \begin{equation} d_{m,m} = 2^{-m} \binom{2m}{m} \text{ and } d_{m-1,m} = (2m+1)2^{-(m+1)} \binom{2m}{m}. \end{equation} \noindent Our attempt to produce a similar formula for small $l$ led us into a surprising family of polynomials.
\\ The original idea is very simple: start with \begin{equation} P_{m}(a) = \frac{2}{\pi} \left[ 2(a+1) \right]^{m + \tfrac{1}{2}} \int_{0}^{\infty} \frac{dx}{(x^{4} + 2ax^{2} + 1)^{m+1}}, \end{equation} \noindent and compute $d_{l,m}$ as coming from the Taylor expansion at $a=0$ of the right hand side. This yields \begin{equation} d_{l,m} = \frac{1}{l!m!2^{m+l}} \left( \alpha_{l}(m) \prod_{k=1}^{m} (4k-1) - \beta_{l}(m) \prod_{k=1}^{m} (4k+1) \right), \label{dlm-long} \end{equation} \noindent where $\alpha_{l}$ and $\beta_{l}$ are polynomials in $m$ of degrees $l$ and $l-1$, respectively. The explicit expressions \begin{equation} \alpha_{l}(m) = \sum_{t=0}^{\lfloor l/2 \rfloor} \binom{l}{2t} \prod_{\nu=m+1}^{m+t} (4 \nu -1) \prod_{\nu=m-l+2t+1}^{m} (2 \nu +1) \prod_{\nu=1}^{t-1} (4 \nu +1), \label{alpha} \end{equation} \noindent and \begin{equation} \beta_{l}(m) = \sum_{t=1}^{\lfloor (l+1)/2 \rfloor} \binom{l}{2t-1} \prod_{\nu=m+1}^{m+t-1} (4 \nu +1) \prod_{\nu=m-l+2t}^{m} (2 \nu +1) \prod_{\nu=1}^{t-1} (4 \nu -1), \label{beta} \end{equation} \noindent are given in \cite{bomosha}. \\ Trying to obtain more information about $\alpha_{l}$ and $\beta_{l}$ directly from (\ref{alpha}) and (\ref{beta}) proved difficult. One uninspired day, we decided to compute their roots numerically. We were pleasantly surprised to discover the following property. \begin{theorem} For all $l \geq 1$, all the roots of $\alpha_{l}(m) = 0$ lie on the line $\mathop{\rm Re}\nolimits{m} = - \tfrac{1}{2}$. Similarly, the roots of $\beta_{l}(m) = 0$ for $l \geq 2$ lie on the same vertical line. \end{theorem} The proof of this theorem, due to J.
Little \cite{little}, starts by writing \begin{equation} A_{l}(s) := \alpha_{l}( (s-1)/2) \text{ and } B_{l}(s) := \beta_{l}( (s-1)/2) \end{equation} \noindent and proving that $A_{l}$ is equal to $l!$ times the coefficient of $u^{l}$ in $f(s,u) g(s,u)$, where $f(s,u) = (1+ 2u)^{s/2}$ and $g(s,u)$ is the hypergeometric series \begin{equation} g(s,u) = {_{2}F_{1}} \left( \frac{s}{2}+ \frac{1}{4}, \frac{1}{4}; \frac{1}{2}; 4 u^{2} \right). \end{equation} \noindent A similar expression is obtained for $B_{l}(s)$. From here it follows that $A_{l}$ and $B_{l}$ each satisfy the three-term recurrence \begin{equation} x_{l+1}(s) = 2sx_{l}(s) - (s^{2} - (2l-1)^{2})x_{l-1}(s). \end{equation} \noindent Little then establishes a version of Sturm's theorem to prove the final result. \\ The location of the zeros of $\alpha_{l}(m)$ now suggests studying the behavior of this family as $l \to \infty$. In the best of all worlds, one would obtain an analytic function of $m$ with all the zeros on a vertical line. Perhaps some Number Theory will enter and ... {\em one never knows}.

\section{Arithmetical properties} \label{sec-arith} \setcounter{equation}{0}

The expression (\ref{dlm-long}) gives \begin{equation} m! 2^{m+1} \, d_{1,m} = (2m+1) \prod_{k=1}^{m} (4k-1) - \prod_{k=1}^{m} (4k+1), \label{d1} \end{equation} \noindent from which it follows that the right hand side is an even number. This led naturally to the problem of determining the $2$-adic valuation of \begin{eqnarray} A_{l,m} := l! m! 2^{m+l} d_{l,m} & = & \alpha_{l}(m) \prod_{k=1}^{m} (4k-1) - \beta_{l}(m) \prod_{k=1}^{m} (4k+1) \label{new-A} \\ & = & \frac{l! m!}{2^{m-l}} \sum_{k=l}^{m} 2^{k} \binom{2m-2k}{m-k} \binom{m+k}{k} \binom{k}{l}. \label{new1-A} \end{eqnarray} Recall that, for $x \in \mathbb{N}$, the $2$-adic valuation $\nu_{2}(x)$ is the largest $k$ such that $2^{k}$ divides $x$. This is extended to $x = a/b \in \mathbb{Q}$ via $\nu_{2}(x) = \nu_{2}(a) - \nu_{2}(b)$, leaving $\nu_{2}(0)$ as undefined.
It follows from (\ref{new1-A}) that \begin{equation} A_{m,m} = 2^{m} (2m)! \text{ and } A_{m-1,m} = 2^{m-1} (2m-1)! (2m+1), \end{equation} \noindent so these $2$-adic valuations can be computed directly from Legendre's classical formula \begin{equation} \nu_{2}(x!) = x - s_{2}(x), \end{equation} \noindent where $s_{2}(x)$ counts the number of $1$'s in the binary expansion of $x$. At the other end of the $l$-axis, \begin{equation} A_{0,m} = \prod_{k=1}^{m} (4k-1) \end{equation} \noindent is clearly odd, so $\nu_{2}(A_{0,m}) = 0$. The first interesting case is $l=1$: \begin{equation} A_{1,m} = (2m+1) \prod_{k=1}^{m} (4k-1) - \prod_{k=1}^{m} (4k+1). \label{d1-new} \end{equation} The main result of \cite{bomosha} is that \begin{equation} \nu_{2}(A_{1,m}) = \nu_{2}(m(m+1)) + 1. \end{equation} This was extended in \cite{amm1}. \begin{theorem} \label{2adicall} The $2$-adic valuation of $A_{l,m}$ satisfies \begin{equation} \nu_{2}(A_{l,m}) = \nu_{2}((m+1-l)_{2l}) + l, \label{2valuel} \end{equation} \noindent where $(a)_{k} = a(a+1) \cdots (a+k-1)$ is the Pochhammer symbol for $k \geq 1$. For $k=0$, we define $(a)_{0}=1$. \end{theorem} The proof is an elementary application of the WZ-method. Define the numbers \begin{eqnarray} B_{l,m} & := & \frac{A_{l,m}}{2^{l} (m+1-l)_{2l}}, \end{eqnarray} \noindent and use the WZ-method to obtain the recurrence \begin{equation} B_{l-1,m} = (2m+1)B_{l,m} -(m-l)(m+l+1)B_{l+1,m}, \quad 1 \leq l \leq m-1. \nonumber \end{equation} \noindent Since the initial values $B_{m,m} = 1$ and $B_{m-1,m} = 2m+1$ are odd, it follows inductively that $B_{l,m}$ is an odd integer. The reader will also find in \cite{amm1} a WZ-free proof of the theorem. \\ \noindent {\bf Note}. The reader will find in \cite{amm2} a study of the $2$-adic valuation of the Stirling numbers. This study was motivated by the results described in this section.
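Theorem \ref{2adicall} lends itself to direct verification; the following sketch (illustrative, ours) computes $A_{l,m}$ from (\ref{new1-A}) and checks (\ref{2valuel}) for small $l$ and $m$:

```python
from math import comb, factorial

def nu2(x):
    # 2-adic valuation of a nonzero positive integer.
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

def A(l, m):
    # A_{l,m} = (l! m! / 2^{m-l}) * sum_{k=l}^{m} 2^k C(2m-2k, m-k) C(m+k, k) C(k, l)
    s = sum(2**k * comb(2*m - 2*k, m - k) * comb(m + k, k) * comb(k, l)
            for k in range(l, m + 1))
    num = factorial(l) * factorial(m) * s
    assert num % 2**(m - l) == 0  # A_{l,m} is an integer
    return num // 2**(m - l)

def pochhammer(a, n):
    # (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1
    p = 1
    for i in range(n):
        p *= a + i
    return p

for m in range(1, 12):
    for l in range(m + 1):
        assert nu2(A(l, m)) == nu2(pochhammer(m + 1 - l, 2*l)) + l
```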
The papers \cite{cohen1, cohn1, lengyel1, lengyel2, wannemacker1, wannemacker2} contain information about $2$-adic valuations of related sequences. \\

\section{The combinatorics of the valuations} \label{sec-combina1} \setcounter{equation}{0}

The sequence of valuations $\{ \nu_{2}(A_{l,m}): \, m \geq l \}$ increases in complexity with $l$. Some of the combinatorial nature of this sequence is described next. The first feature of this sequence is that it has a block structure, reminiscent of the simple functions of Real Analysis. \begin{definition} We say that a sequence $\{ a_{j}: \, j \in \mathbb{N} \}$ has {\em block structure} if there is an $s \in \mathbb{N}$, $s \geq 2$, such that for each $t \in \{ 0, \, 1, \, 2, \, \cdots \}$ we have \begin{equation} a_{st + 1} = a_{st + 2} = \cdots = a_{s(t+1)}. \label{repeat} \end{equation} The sequence is called {\it $s$-simple} if $s$ is the largest value for which (\ref{repeat}) occurs. \end{definition} \begin{theorem} \label{period-thm} For each $l \geq 1$, the sequence $X(l) := \{ \nu_{2}(A_{l,m}): \, m \geq l \, \}$ is an $s$-simple sequence, with $s = 2^{1+ \nu_{2}(l)}$. \end{theorem} We now provide a combinatorial interpretation for $X(l)$. This requires the maps \begin{eqnarray} F ( \{ a_{1}, \, a_{2}, \, a_{3}, \, \cdots \} ) & := & \{ a_{1}, \, a_{1}, \, a_{2}, \, a_{3}, \, \cdots \}, \nonumber \\ T ( \{ a_{1}, \, a_{2}, \, a_{3}, \, \cdots \} ) & := & \{ a_{1}, \, a_{3}, \, a_{5}, \, a_{7}, \, \cdots \}. \nonumber \end{eqnarray} \noindent We will also employ the notation $c := \{ \nu_{2}(m): \, m \geq 1 \} = \{ 0, \, 1, \, 0, \, 2, \, 0, \, 1, \cdots \}$. \medskip We describe an algorithm that reduces the sequence $X(l)$ to a constant sequence. The algorithm starts with the sequence $X(l) := \left\{ \nu_{2}( A_{l,l+m-1}): \quad m \geq 1 \, \right\}$ and then finds an $n \in \mathbb{N}$ so that $X(l)$ is $2^{n}$-simple. Define $Y(l) := T^{n} \left( X(l) \right)$.
At the initial stage, Theorem \ref{period-thm} ensures that $n= 1 + \nu_{2}(l)$. The next step is to introduce the shift $Z(l) := Y(l) - c$ and finally define $W(l):= F(Z(l))$. If $W(l)$ is a constant sequence, then STOP; otherwise repeat the process with $W$ instead of $X$. Define $X_k(l)$ as the new sequence at the end of the $(k-1)$th cycle of this process, with $X_1(l)=X(l)$. This algorithm produces a sequence of integers $n_{j}$, so that $X_{k}(l)$ is $2^{n_{k}}$-simple. The integer vector $\Omega(l) := \left\{ n_{1}, \, n_{2}, \, n_{3}, \cdots, n_{\omega(l)} \right\} $ is called the {\em reduction sequence} of $l$. The number $\omega(l)$ is the number of cycles required to obtain a constant sequence. \begin{definition} \label{def-compo} Let $l \in \mathbb{N}$. The {\em composition} of $l$, denoted by $\Omega_{1}(l)$, is an integer sequence defined as follows: Write $l$ in binary form. Read the digits from right to left. The first part of $\Omega_{1}(l)$ is the number of digits up to and including the first $1$ read in the corresponding binary sequence; the second one is the number of additional digits up to and including the second $1$ read, and so on until the number has been read completely. \end{definition} \begin{theorem} \label{thm-reduc} Let $\{ k_{1}, \cdots, k_{n}: \, 0 \leq k_{1} < k_{2} < \cdots < k_{n} \}$ be the unique collection of distinct nonnegative integers such that $ l = \sum_{i=1}^{n} 2^{k_{i}}$. Then the reduction sequence $\Omega(l)$ of $l$ is $\{ k_{1}+1, \, k_{2}-k_{1}, \cdots, k_{n}-k_{n-1} \}$. \end{theorem} It follows that the reduction sequence $\Omega(l)$ is precisely the composition of $l$, that is, $\Omega(l) = \Omega_{1}(l)$. This is the combinatorial interpretation of the algorithm used to reduce $X(l)$ to a constant sequence.
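Both descriptions of the composition are easy to compare by machine; the following sketch (ours, for illustration) checks that Definition \ref{def-compo} and the formula of Theorem \ref{thm-reduc} agree:

```python
def composition(l):
    # Omega_1(l): read the binary digits of l right to left, counting digits
    # up to and including each 1 (Definition of the composition).
    parts, count = [], 0
    while l:
        count += 1
        if l & 1:
            parts.append(count)
            count = 0
        l >>= 1
    return parts

def reduction(l):
    # The formula of the theorem: exponents k_1 < ... < k_n of the binary
    # expansion of l give {k_1 + 1, k_2 - k_1, ..., k_n - k_{n-1}}.
    ks = [k for k in range(l.bit_length()) if (l >> k) & 1]
    return [ks[0] + 1] + [ks[i] - ks[i - 1] for i in range(1, len(ks))]

for l in range(1, 200):
    assert composition(l) == reduction(l)
```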
\section{Valuation patterns encoded in binary trees} \label{sec-tree} \setcounter{equation}{0}

In this section we describe the precise structure of the graph of the sequence $\{ \nu_{2}(A_{l,m}), m \geq l \}$. The reader is referred to \cite{moll-sun} for complete details. In view of the block structure described in the previous section, it suffices to consider the sequences $\{ \nu_{2}(C_{l,m}), m \geq l \}$, which are defined by $$ C_{l,m} = $$ The emerging patterns are still very complicated. For instance, Figure \ref{val-c13} shows the case $l=13$ and Figure \ref{val-c59} corresponds to $l=59$. The remarkable fact is that in spite of the complexity of $\nu_{2}(C_{l,m})$ there is {\em an exact formula} for it. The rest of this section describes how to find it. {{ \begin{figure}[h] \begin{center} \centerline{\psfig{file=a13type1.eps,width=20em,angle=0}} \caption{The valuation $\nu_{2}(C_{13,m})$} \label{val-c13} \end{center} \end{figure} }} {{ \begin{figure}[h] \begin{center} \centerline{\psfig{file=a59type1.eps,width=20em,angle=0}} \caption{The valuation $\nu_{2}(C_{59,m})$} \label{val-c59} \end{center} \end{figure} }} We describe now the {\em decision tree} associated to the index $l$. Start with a root $v_{0}$ at level $k=0$. To this vertex we attach the sequence $\{ \nu_{2}(C_{l,m}): m \geq 1 \}$ and ask whether $\nu_{2}(C_{l,m})-\nu_{2}(m)$ has a constant value {\em independent} of $m$. If the answer is yes, we say that $v_{0}$ is a {\em terminal vertex} and label it with this constant. The tree is complete. If the answer is negative, we split the integers modulo $2$ and produce two new vertices, $v_{1}, \, v_{2}$, connected to $v_{0}$, and attach the classes $\{ \nu_{2}(C_{l,2m-1}): m \geq 1 \}$ and $\{ \nu_{2}(C_{l,2m}): m \geq 1 \}$ to these vertices. We now ask whether $\nu_{2}(C_{l,2m-1})-\nu_{2}(m)$ is independent of $m$ and the same for $\nu_{2}(C_{l,2m})-\nu_{2}(m)$.
Each vertex that yields a positive answer is considered terminal and the corresponding constant value is attached to it. Every vertex with a negative answer produces two new ones at the next level. Assume the vertex $v$ corresponding to the sequence $\{ 2^{k}(m-1) + a: \, m \geq 1 \}$ produces a negative answer. Then it splits in the next generation into two vertices corresponding to the sequences $\{ 2^{k+1}(m-1) + a: \, m \geq 1 \}$ and $\{ 2^{k+1}(m-1) + 2^{k} + a: \, m \geq 1 \}$. For instance, in Figure \ref{tree5}, the vertex corresponding to $\{ 4m: \, m \geq 1 \}$, which is not terminal, splits into $\{ 8m: \, m \geq 1 \}$ and $\{ 8m - 4: \, m \geq 1 \}$. These two edges lead to terminal vertices. Theorem \ref{formula-val2} shows that this process ends in a finite number of steps. \\ {{ \begin{figure}[ht] \begin{center} {\centerline{\psfig{file=graph5.eps,width=15em,angle=0} }} \caption{The decision tree for $l=5$} \label{tree5} \end{center} \end{figure} }} \begin{theorem} \label{formula-val2} Let $l \in \mathbb{N}$ and $T(l)$ be its decision tree. Define $k^{*}(l) := \lfloor \log_{2} l \rfloor$. Then \\ \noindent 1) $T(l)$ depends only on the odd part of $l$; that is, for $r \in \mathbb{N}$, we have $T(l) = T(2^{r}l)$, up to the labels. \\ \noindent 2) The generations of the tree are labelled starting at $0$; that is, the root is generation $0$. Then, for $0 \leq k \leq k^{*}(l)$, the $k$-th generation of $T(l)$ has $2^{k}$ vertices. Up to that point, $T(l)$ is a complete binary tree. \\ \noindent 3) The $k^{*}$-th generation contains $2^{k^{*}+1}-l$ terminal vertices. The constants associated with these vertices are given by the following algorithm. Define \begin{equation} j_{1}(l,k,a) := -l + 2(1+2^{k}-a), \end{equation} \noindent and \begin{equation} \gamma_{1}(l,k,a) := l+k+1 + \nu_{2} \left( (j_{1}+l-1)! \right) + \nu_{2} \left( (l-j_{1})! \right). 
\end{equation} \noindent Then, for $1 \leq a \leq 2^{k^{*}+1}-l$, we have (with $k = k^{*}(l)$) \begin{equation} \nu_{2} \left( C_{l,2^{k}(m-1)+a} \right) = \nu_{2}(m) + \gamma_{1}(l,k,a). \end{equation} \noindent Thus, the vertices at the $k^{*}$-th generation have constants given by $\gamma_{1}(l,k,a)$. \\ \noindent 4) The remaining terminal vertices of the tree $T(l)$ appear in the next generation. There are $2(l-2^{k^{*}(l)})$ of them. The constants attached to these vertices are defined as follows: let \begin{equation} j_{2}(l,k,a) := -l + 2(1+2^{k+1}-a), \end{equation} \noindent and \begin{equation} j_{3}(l,k,a) := j_{2}(l,k,a+2^{k}). \end{equation} \noindent Define \begin{equation} \gamma_{2}(l,k,a) := l+k+2 + \nu_{2} \left( (j_{2} + l -1)! \right) + \nu_{2} \left( (l- j_{2})! \right), \end{equation} \noindent and \begin{equation} \gamma_{3}(l,k,a) := l+k+2 + \nu_{2} \left( (j_{3} + l -1)! \right) + \nu_{2} \left( (l- j_{3})! \right). \end{equation} \noindent Then, for $2^{k^{*}(l)+1}-l+1 \leq a \leq 2^{k^{*}(l)}$, the formulas \begin{equation} \nu_{2} \left( C_{l,2^{k^{*}(l) +1}(m-1)+a} \right) = \nu_{2}(m) + \gamma_{2}(l,k^{*}(l),a), \end{equation} \noindent and \begin{equation} \nu_{2} \left( C_{l,2^{k^{*}(l) +1}(m-1)+a+2^{k^{*}(l)} } \right) = \nu_{2}(m) + \gamma_{3}(l,k^{*}(l),a), \end{equation} \noindent give the constants attached to these remaining terminal vertices. \end{theorem} \medskip We now use the theorem to produce a formula for $\nu_{2}(C_{3,m})$. The value $k^{*}(3) = 1$ shows that the first level contains $2^{1+1}-3 = 1$ terminal vertex. This corresponds to the sequence $2m-1$ and has constant value $7$, thus \begin{equation} \nu_{2} \left(C_{3,2m-1} \right) = 7 + \nu_{2}(m). \end{equation} \noindent The next level has $2(3 - 2^{1}) = 2$ terminal vertices. These correspond to the sequences $4m$ and $4m-2$, with constant values $9$ for both of them. 
This tree produces \begin{equation} \nu_{2} \left( C_{3,m} \right) = \begin{cases} 7 + \nu_{2} \left( \tfrac{m+1}{2} \right) & \text{ if } m \equiv 1 \bmod 2, \\ 9 + \nu_{2} \left( \tfrac{m}{4} \right) & \text{ if } m \equiv 0 \bmod 4, \\ 9 + \nu_{2} \left( \tfrac{m+2}{4} \right) & \text{ if } m \equiv 2 \bmod 4. \end{cases} \end{equation} The complexity of the graph for $l=13$ is reflected in the analytic formula for this valuation. The theorem yields \begin{equation} \nu_{2} \left( C_{13,m} \right) = \begin{cases} 36 + \nu_{2} \left( \tfrac{m+7}{8} \right) & \text{ if } m \equiv 1 \bmod 8, \\ 37 + \nu_{2} \left( \tfrac{m+6}{8} \right) & \text{ if } m \equiv 2 \bmod 8, \\ 36 + \nu_{2} \left( \tfrac{m+5}{8} \right) & \text{ if } m \equiv 3 \bmod 8, \\ 40 + \nu_{2} \left( \tfrac{m+12}{16} \right) & \text{ if } m \equiv 4 \bmod 16, \\ 38 + \nu_{2} \left( \tfrac{m+11}{16} \right) & \text{ if } m \equiv 5 \bmod 16, \\ 39 + \nu_{2} \left( \tfrac{m+10}{16} \right) & \text{ if } m \equiv 6 \bmod 16, \\ 38 + \nu_{2} \left( \tfrac{m+9}{16} \right) & \text{ if } m \equiv 7 \bmod 16, \\ 40 + \nu_{2} \left( \tfrac{m+8}{16} \right) & \text{ if } m \equiv 8 \bmod 16, \\ 40 + \nu_{2} \left( \tfrac{m+4}{16} \right) & \text{ if } m \equiv 12 \bmod 16, \\ 38 + \nu_{2} \left( \tfrac{m+3}{16} \right) & \text{ if } m \equiv 13 \bmod 16, \\ 39 + \nu_{2} \left( \tfrac{m+2}{16} \right) & \text{ if } m \equiv 14 \bmod 16, \\ 38 + \nu_{2} \left( \tfrac{m+1}{16} \right) & \text{ if } m \equiv 15 \bmod 16, \\ 40 + \nu_{2} \left( \tfrac{m}{16} \right) & \text{ if } m \equiv 0 \bmod 16. \end{cases} \end{equation} The details for Theorem \ref{formula-val2} are given in \cite{moll-sun}. \\ \noindent {\bf Note}. The $p$-adic valuations of $A_{l,m}$ for $p$ odd present phenomena different from those explained for the case $p=2$. Figure \ref{val-17} shows the plot of $\nu_{17}(A_{1,m})$ where we observe linear growth. 
Experimental data suggest that, for any odd prime $p$, one has \begin{equation} \nu_{p}(A_{l,m}) \sim \frac{m}{p-1}. \end{equation} \noindent Figure \ref{error-17} depicts the error term $\nu_{17}(A_{1,m}) - m/16$. The structure of the error remains to be explored. {{ \begin{figure}[h] \begin{center} \centerline{\psfig{file=val17.eps,width=20em,angle=0}} \caption{The valuation $\nu_{17}(A_{1,m})$} \label{val-17} \end{center} \end{figure} }} {{ \begin{figure}[h] \begin{center} \centerline{\psfig{file=error17.eps,width=20em,angle=0}} \caption{The error term $\nu_{17}(A_{1,m}) - m/16$} \label{error-17} \end{center} \end{figure} }} \section{Unimodality and log-concavity} \label{sec-unimodal} \setcounter{equation}{0} A finite sequence of real numbers $\{ a_{0}, \, a_{1}, \cdots, a_{m} \}$ is said to be {\em unimodal} if there exists an index $0 \leq j \leq m$ such that $a_{0} \leq a_{1} \leq \cdots \leq a_{j}$ and $a_{j} \geq a_{j+1} \geq \cdots \geq a_{m}$. A polynomial is said to be unimodal if its sequence of coefficients is unimodal. The sequence $\{a_{0}, a_{1}, \cdots, a_{m} \}$ with $a_{j} \geq 0$ is said to be {\it logarithmically concave} (or {\em log-concave} for short) if $a_{j+1}a_{j-1} \leq a_{j}^{2}$ for $ 1 \leq j \leq m-1$. It is easy to see that if a sequence is log-concave then it is unimodal \cite{wilf1}. Unimodal polynomials arise often in combinatorics, geometry, and algebra, and have been the subject of considerable research in recent years. The reader is referred to \cite{stanley1} and \cite{brenti1} for surveys of the diverse techniques employed to prove that specific families of polynomials are unimodal. For $m \in \mathbb{N}$, the sequence $\{ d_{l,m}: 0 \leq l \leq m \}$ is unimodal. This is a consequence of the following criterion established in \cite{bomouni1}. \begin{theorem} Let $a_{k}$ be a nondecreasing sequence of positive numbers and let $A(x) = \sum_{k=0}^{m} a_{k} x^{k}$. Then $A(x+1)$ is unimodal. 
\end{theorem} We applied this theorem to the polynomial \begin{equation} A(x) := 2^{-2m} \sum_{k=0}^{m} 2^{k} \binom{2m-2k}{m-k} \binom{m+k}{m} x^{k} \label{niceA} \end{equation} \noindent that satisfies $P_{m}(x) = A(x+1)$. The criterion was extended in \cite{bomouni2} to include the shifts $A(x+j)$ and in \cite{wang2} to arbitrary shifts. The original proof of the unimodality of $P_{m}(a)$ can be found in \cite{bomouni3}. \\ In \cite{moll-notices} we conjectured the log-concavity of $\{ d_{l,m}: 0 \leq l \leq m \}$. This turned out to be a more difficult question. Here we describe some of our failed attempts. \\ \noindent 1) A result of Brenti \cite{brenti1} states that if $A(x)$ is log-concave then so is $A(x+1)$. Unfortunately this does not apply in our case since (\ref{niceA}) is not log-concave. Indeed, \begin{eqnarray} 2^{4m-2k} \left( a_{k}^{2} - a_{k-1}a_{k+1} \right) & = & \binom{2m}{m-k}^{2} \binom{m+k}{m}^{2} \times \nonumber \\ & \times & \left( 1 - \frac{k(m-k)(2m-2k+1)(m+k+1)}{(k+1)(m+k)(2m-2k-1)(m-k+1)} \right) \nonumber \end{eqnarray} \noindent and this last factor could be negative---for example, for $m=5$ and $k=4$. The number of negative terms in this sequence is small, so perhaps there is a way out of this. \\ \noindent 2) The coefficients $d_{l,m}$ satisfy many recurrences. For example, \begin{equation} d_{j+1}(m) = \frac{2m+1}{j+1} d_{j}(m) - \frac{(m+j)(m+1-j)}{j(j+1)} d_{j-1}(m). \end{equation} This can be found by a direct application of the WZ method. Therefore, $d_{l,m}$ is log-concave provided \begin{equation} j(2m+1) d_{j-1}(m) d_{j}(m) \leq (m+j)(m+1-j) d_{j-1}(m)^{2} + j(j+1) d_{j}(m)^{2}. \end{equation} \noindent We have conjectured that the smallest value of the expression \begin{equation} (m+j)(m+1-j)d_{j-1}(m)^{2} + j(j+1)d_{j}(m)^{2} - j(2m+1) d_{j-1}(m)d_{j}(m) \end{equation} \noindent is $2^{-2m} m(m+1) \binom{2m}{m}^{2}$ and it occurs at $j=m$. This would imply the log-concavity of $\{ d_{l,m} : 0 \leq l \leq m \}$. 
Unfortunately, it has not yet been proven. \\ Actually we have conjectured that the $d_{l,m}$ satisfy a stronger version of log-concavity. Given a sequence $ \{ a_{j} \}$ of positive numbers, define a map \begin{equation} \mathfrak{L} \left( \{ a_{j} \} \right) := \{ b_{j} \} \nonumber \end{equation} \noindent by $b_{j} := a_{j}^{2} - a_{j-1}a_{j+1}$. Thus $\{ a_{j} \}$ is log-concave if $\{ b_{j} \}$ has nonnegative terms. The nonnegative sequence $\{ a_{j} \}$ is called {\em infinitely log-concave} if any number of applications of $\mathfrak{L}$ produces a nonnegative sequence. \\ \begin{conjecture} \label{conj-inf} For each fixed $m \in \mathbb{N}$, the sequence $\{ d_{l,m} : 0 \leq l \leq m \}$ is infinitely log-concave. \end{conjecture} \medskip The log-concavity of $\{ d_{l,m} : 0 \leq l \leq m \}$ has recently been established by M. Kauers and P. Paule \cite{kauers-paule} as an application of their work on establishing inequalities by automatic means. The starting point is the triple sum expression in Section \ref{sec-triple} written as \begin{equation} d_{l,m} = \sum_{j,s,k} \frac{(-1)^{k+j-l}}{2^{3(k+s)}} \binom{2m+1}{2s} \binom{m-s}{k} \binom{2(k+s)}{k+s} \binom{s}{j} \binom{k}{l-j}. \nonumber \end{equation} \noindent Using the RISC package Multisum \cite{wegschaider1} they derive the recurrence \begin{equation} 2(m+1)d_{l,m+1} = 2(l+m)d_{l-1,m} + (2l+4m+3)d_{l,m}. \label{recu1} \end{equation} \noindent The positivity of $d_{l,m}$ follows directly from here. 
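Since $P_{m}(x) = A(x+1)$ with $A$ as in (\ref{niceA}), expanding $(x+1)^{k}$ gives the closed form $d_{l,m} = 2^{-2m}\sum_{k=l}^{m} 2^{k}\binom{2m-2k}{m-k}\binom{m+k}{m}\binom{k}{l}$, which makes it easy to test the recurrence (\ref{recu1}) and the log-concavity of a row numerically. A Python sketch in exact rational arithmetic (our own function names):

```python
from fractions import Fraction
from math import comb

def d(l, m):
    """d_{l,m}: coefficient of x^l in P_m(x) = A(x+1), with A as in (niceA)."""
    return Fraction(sum(2**k * comb(2*m - 2*k, m - k) * comb(m + k, m) * comb(k, l)
                        for k in range(l, m + 1)), 4**m)

m = 8
row = [d(l, m) for l in range(m + 1)]
# recurrence (recu1): 2(m+1) d_{l,m+1} = 2(l+m) d_{l-1,m} + (2l+4m+3) d_{l,m}
assert all(2*(m+1)*d(l, m+1) == 2*(l+m)*d(l-1, m) + (2*l+4*m+3)*d(l, m)
           for l in range(1, m + 1))
# log-concavity of the row, as proved by Kauers and Paule
assert all(row[j]**2 >= row[j-1]*row[j+1] for j in range(1, m))
```

For instance, $d(0,1)$ evaluates to $3/2$, matching $P_{1}(a) = 3/2 + a$ obtained directly from (\ref{niceA}).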
To establish the log-concavity of $d_{l,m}$ the new recurrence \begin{equation} 4l(l+1)d_{l+1,m} = -2(2l-4m-3)(l+m+1)d_{l,m} + 4(l-m-1)(m+1)d_{l,m+1} \nonumber \end{equation} \noindent is derived automatically and the log-concavity of $d_{l,m}$ is reduced to establishing the inequality \begin{eqnarray} d_{l,m}^{2} & \geq & \frac{4(m+1) \left( 4(l-m-1)(m+1)-(2l^{2}-4m^{2}-7m-3) \right) d_{l,m+1}d_{l,m}} {16m^{3}+16lm^{2}+40m^{2}+28lm + 33m +9l+9}. \nonumber \end{eqnarray} \noindent The $2$-log-concavity of $\{ d_{l,m} : 0 \leq l \leq m \}$, that is, ${\mathfrak{L}}^{(2)}( \{ d_{l,m}\}) \geq \{0,0, \dots, 0\}$, remains an open question. At the end of \cite{kauers-paule} the authors state that ``...we have little hope that a proof of $2$-logconcavity could be completed along these lines, not to mention that a human reader would have a hard time digesting it.'' \\ The general concept of infinite log-concavity has generated some interest. D. Uminsky and K. Yeats \cite{umi-yeats} have studied the action of $\mathfrak{L}$ on sequences of the form \begin{equation} \{\cdots, 0, 0, 1, x_{0}, x_{1}, \cdots, x_{n}, \cdots, x_{1}, x_{0}, 1,0,0, \cdots \} \end{equation} \noindent and \begin{equation} \{\cdots, 0, 0, 1, x_{0}, x_{1}, \cdots, x_{n}, x_{n}, \cdots, x_{1}, x_{0}, 1,0,0, \cdots \} \end{equation} \noindent and established the existence of a large unbounded region in the positive orthant of $\mathbb{R}^{n}$ that consists only of infinitely log-concave sequences $\{x_0, \dots, x_n\}$. P. McNamara and B. Sagan \cite{mcnamara1} have considered sequences satisfying the condition $a_{k}^{2} \geq r a_{k-1}a_{k+1}$. Clearly this implies log-concavity when $r \geq 1$. Their techniques apply to the rows of the Pascal triangle. Choosing appropriate $r$-factors and a computer verification procedure, they obtain the following. \begin{theorem} For fixed $n \leq 1450$, the sequence $ \{ \binom{n}{k}: 0 \leq k \leq n\}$ is infinitely log-concave. 
\end{theorem} \noindent In particular, they looked for values of $r$ for which the $r$-factor condition is preserved by the $\mathfrak{L}$-operator. The factor that works is $r = \frac{3 + \sqrt{5}}{2}$ (the square of the golden mean). This technique can be used on a variety of finite sequences. See \cite{mcnamara1} for a complete discussion of the technique. McNamara and Sagan have also considered $q$-analogues of the binomial coefficients. In order to describe these extensions we introduce the basic notation and refer the reader to \cite{kac-chung} and \cite{andrews3} for more details on the world of $q$-analogues. Let $q$ be a variable and for $n \in \mathbb{N}$ define \begin{equation} [n]_{q} = \frac{1-q^{n}}{1-q} = 1 + q + q^{2} + \cdots + q^{n-1}. \end{equation} \noindent The {\em Gaussian polynomials} or {\em $q$-binomial coefficients} are defined by \begin{equation} \begin{bmatrix} n \\ k \end{bmatrix}_{q} = \frac{ [n]_{q}!}{[k]_{q}! \, [n-k]_{q}!}, \end{equation} \noindent where $[n]_{q}! := [1]_{q} [2]_{q} \cdots [n]_{q}$. The Gaussian polynomials have nonnegative coefficients. We will say that the sequence of polynomials $ \{ f_{k}(q) \}$ is {\it $q$-log-concave} if $\mathfrak{L}( f_{k}(q) )$ is a sequence of polynomials with nonnegative coefficients. The extension of this definition to {\it infinite $q$-log-concavity} is made in the obvious way. Observe that \begin{equation} \binom{n}{k} = \lim\limits_{q \to 1} \begin{bmatrix} n \\ k \end{bmatrix}_{q}. \end{equation} \noindent McNamara and Sagan have established the surprising result: \begin{theorem} The sequence $ {\left \{ \begin{bmatrix} n \\ k \end{bmatrix}_{q} \right\}}_{k \geq 0}$ is not infinitely $q$-log-concave. \end{theorem} In fact they established that applying $\mathfrak{L}$ twice gives polynomials with some negative coefficients. 
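One application of $\mathfrak{L}$ to a row of Gaussian polynomials can be checked directly; the negative coefficients asserted in the theorem above appear only at the second application. A Python sketch working with coefficient lists (index = power of $q$; the helper names are ours):

```python
def padd(a, b):
    """Sum of two polynomials given as coefficient lists."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def pmul(a, b):
    """Product of two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def psub(a, b):
    return padd(a, [-c for c in b])

def gauss(n, k):
    """[n over k]_q as a coefficient list, via the q-Pascal rule
    [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    return padd(gauss(n - 1, k - 1), [0] * k + gauss(n - 1, k))

# [4 over 2]_q = 1 + q + 2q^2 + q^3 + q^4
assert gauss(4, 2) == [1, 1, 2, 1, 1]

# one application of L to the row n = 6: every b_k = f_k^2 - f_{k-1} f_{k+1}
# has nonnegative coefficients (q-log-concavity of the rows)
row = [gauss(6, k) for k in range(7)]
for k in range(1, 6):
    b = psub(pmul(row[k], row[k]), pmul(row[k - 1], row[k + 1]))
    assert all(c >= 0 for c in b)
```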
As a compensation, they propose: \begin{conjecture} The sequence $ \left \{ \begin{bmatrix} n \\ k \end{bmatrix}_{q} \right\}_{n \geq k}$ is infinitely $q$-log-concave for all fixed $k \geq 0$. \end{conjecture} Another $q$-analogue of the binomial coefficients that arises in the study of quantum groups is defined by \begin{equation} \langle n \rangle := \frac{q^{n}-q^{-n}}{q-q^{-1}} = \frac{1}{q^{n-1}}( 1 + q^{2}+q^{4} + \cdots + q^{2n-2}). \end{equation} \noindent From here we proceed as in the case of Gaussian polynomials and define \begin{equation} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} := \frac{\langle n \rangle !}{\langle k \rangle ! \, \langle n-k \rangle !} \end{equation} \noindent where $\langle n \rangle ! = \langle 1 \rangle \langle 2 \rangle \cdots \langle n \rangle$. For these coefficients McNamara and Sagan have proposed \begin{conjecture} \noindent a) The row sequence $ \left\{ \genfrac{\langle}{\rangle}{0pt}{}{n}{k} \right\}_{k \geq 0}$ is infinitely $q$-log-concave for all $n \geq 0$. \noindent b) The column sequence $ \left\{ \genfrac{\langle}{\rangle}{0pt}{}{n}{k} \right\}_{n \geq k}$ is infinitely $q$-log-concave for all fixed $k \geq 0$. \noindent c) For all integers $0 \leq u < v$, the sequence $ \left\{ \genfrac{\langle}{\rangle}{0pt}{}{n+mu}{mv} \right\}_{m \geq 0}$ is infinitely $q$-log-concave for all $n \geq 0$. \end{conjecture} This conjecture has been verified for all $n \leq 24$ with $v \leq 10$. When $u>v$, using \begin{equation} \genfrac{\langle}{\rangle}{0pt}{}{n}{k} = \frac{1}{q^{nk-k^{2}}} \begin{bmatrix} n \\ k \end{bmatrix}_{q^{2}} \end{equation} \noindent one checks that the coefficient of the lowest-degree term of $\genfrac{\langle}{\rangle}{0pt}{}{n+u}{v}^{2} - \genfrac{\langle}{\rangle}{0pt}{}{n+2u}{2v}$ is $-1$, so the sequence is not even $q$-log-concave. McNamara and Sagan observe that when $u=v$, the quantum group analogue has exactly the same behavior as the Gaussian polynomials. 
Newton began the study of log-concave sequences by establishing the following result (paraphrased in Section $2.2$ of \cite{hardy7}). \begin{theorem} Let $\{ a_{k} \}$ be a finite sequence of positive real numbers. Assume all the roots of the polynomial \begin{equation} P[a_{k};x]:= a_{0} + a_{1}x + \cdots + a_{n}x^{n} \end{equation} \noindent are real. Then the sequence $\{ a_{k} \}$ is log-concave. \end{theorem} McNamara and Sagan \cite{mcnamara1} and, independently, R. Stanley have proposed the next conjecture. \begin{conjecture} Let $\{ a_{k} \}$ be a finite sequence of positive real numbers. If $P[a_{k};x]$ has only real roots then the same is true for $P[ \mathfrak{L}(a_{k}); x]$. \end{conjecture} \medskip This conjecture was also independently made by Fisk. See \cite{mcnamara1} for the complete details on the conjecture. \\ The polynomial $P_{m}(a)$ in (\ref{polyP-def}) is the generating function of the sequence $\{ d_{l,m} \}$ described here. It is an unfortunate fact that it does not have real roots \cite{bomouni3}, so this conjecture would not imply Conjecture \ref{conj-inf}. In spite of this, the asymptotic behavior of the zeros has remarkable properties. Dimitrov \cite{dimitrov1} has shown that, in the right scale, the zeros converge to a lemniscate. \\ The infinite log-concavity of $\{ d_{l,m} \}$ has resisted all our efforts. It remains to be established. \\ \medskip \noindent {\bf Acknowledgements}. The authors wish to thank B. Sagan for many comments on an earlier version of the paper. The first author acknowledges the partial support of NSF-DMS 0713836. \bigskip
https://arxiv.org/abs/1501.02090
Parameter choice strategies for least-squares approximation of noisy smooth functions on the sphere
We consider a polynomial reconstruction of smooth functions from their noisy values at discrete nodes on the unit sphere by a variant of the regularized least-squares method of An et al., SIAM J. Numer. Anal. 50 (2012), 1513--1534. As nodes we use the points of a positive-weight cubature formula that is exact for all spherical polynomials of degree up to $2M$, where $M$ is the degree of the reconstructing polynomial. We first obtain a reconstruction error bound in terms of the regularization parameter and the penalization parameters in the regularization operator. Then we discuss a priori and a posteriori strategies for choosing these parameters. Finally, we give numerical examples illustrating the theoretical results.
\section{Introduction} In recent decades methods for approximation of a continuous function $y$ on the sphere $\mathbb{S}^2:=\left\{\mathbf{x}=(x_1,x_2,x_3)^T\in\mathbb{R}^3:\ x_1^2+x_2^2+x_3^2=1\right\}$ by means of polynomials have been discussed by many authors (see, for example, \cite{G1997, R2003, S1979, W1981}). Often the underlying motivation has been the need to approximate geophysical quantities. For example, such a task appears in the satellite gravity gradiometry problem (SGG-problem) \cite{F1999}, p. 120, 262, \cite{P2008}, in which the task is to find a spherical harmonic representation of Earth's gravitational potential from satellite observations. The present study was motivated by this example. We shall return to it several times throughout the paper. The mathematical problem considered in this paper is to find a polynomial approximation to $y\in C(\mathbb{S}^2)$, given noisy data values $y^\epsilon(\mathbf{x}_i)$ at points $\mathbf{x}_i\in\mathbb{S}^2$, $i=1,\ldots,N$, using a least-squares strategy developed in \cite{A2012}. (In the SGG application the sphere in question is determined by the satellite orbits. The gravitational potential at the satellite height is smoother than at Earth's surface, a complicating feature for the inverse problem.) We shall assume, in a slight generalization of \cite{A2012}, that the point set $X_N:=\{\mathbf{x}_1,\ldots,\mathbf{x}_N\}$ consists of the points of a cubature rule which is exact for all polynomials $p\in \mathbb{P}_{2M}$, where $\mathbb{P}_M$ is the set of all spherical polynomials of degree less than or equal to $M$, or in other words the restriction to $\mathbb{S}^2$ of the set of all polynomials in $\mathbb{R}^3$ of degree less than or equal to $M$. 
Thus the point set must satisfy \begin{equation}\label{cubature} \forall p\in\mathbb{P}_{2M},\ \ \sum_{i=1}^N w_ip(\mathbf{x}_i)=\int_{\mathbb{S}^2}p(\mathbf{x})d\omega(\mathbf{x}), \end{equation} where $d \omega(\mathbf{x})$ denotes area measure on $\mathbb{S}^2$, and $w_i, i=1,\ldots,N$ are positive cubature weights associated with the point set $X_N$. For sufficiently large $N$ one can find in the literature a variety of suitable cubature formulas (see, e.g., \cite{M2001, HSW2010, LM2006, Xu2003}). Moreover, in principle the point sets for such a rule can be generated by selecting from any sufficiently dense set of points on the sphere, see \cite{N2006a, LM2008, G2009}. The strategy is to take the approximant $y_M\in\mathbb{P}_M$ to be the minimizer of the regularized discrete least-squares problem \begin{equation}\label{leastsquares} y_M=\arg\min\left\{\sum_{i=1}^Nw_i(p(\mathbf{x}_i)-y^\epsilon(\mathbf{x}_i))^2+\alpha\sum_{i=1}^Nw_i(R_Mp(\mathbf{x}_i))^2,\ p\in\mathbb{P}_M\right\}, \end{equation} where $y^\epsilon(\mathbf{x}_i):=y(\mathbf{x}_i)+\epsilon_i$ represent noisy values of a perturbed version $y^\epsilon$ of the original function $y$ calculated at the points of $X_N$, $\alpha$ is a regularization parameter, and $R_M:\mathbb{P}_M\rightarrow\mathbb{P}_M$ is a linear ``penalization'' operator given by \begin{eqnarray}\label{regularizer} R_Mp(\mathbf{x}) &:=&\sum_{k=0}^M\beta_k\frac{2k+1}{4\pi}\int_{\mathbb{S}^2}P_k(\mathbf{x}\cdot\mathbf{z})p(\mathbf{z})d\omega(\mathbf{z})\\ \nonumber &=&\sum_{k=0}^M\beta_k\frac{2k+1}{4\pi}\sum_{i=1}^N w_i P_k(\mathbf{x}\cdot\mathbf{x}_i)p(\mathbf{x}_i),\ \mathbf{x}\in\mathbb{S}^2,\ p\in\mathbb{P}_M, \end{eqnarray} where $P_k$ is the Legendre polynomial of degree $k$, and in the last step we used (\ref{cubature}). Here the numbers $\beta_k, k=1,\ldots,M$ are a non-decreasing sequence of positive parameters. 
With $\beta_0$ fixed in some appropriate way, the important feature of the parameters $\beta_k$ is their rate of growth. The central task in this paper will be to assign appropriate values for the $\beta_k$. As pointed out in \cite{A2012}, the expression in (\ref{regularizer}) is the most general rotationally invariant expression for a linear operator on the space $\mathbb{P}_M$. In \cite{A2012} the point set $X_N$ was taken to be a spherical $2M$-design, which simply means that (\ref{cubature}) must hold with equal weights $w_i=4\pi/N$. We gain considerable freedom in this paper by allowing general positive weights $w_i$ in (\ref{cubature}). The only effective difference in the present approximation scheme is that the least-squares problem (\ref{leastsquares}) is slightly non-standard because of the appearance of the cubature weights $w_i$. It was observed in numerical experiments in \cite{A2012} that a proper choice of the penalization operator $R_M$ together with the regularization parameter $\alpha$ can significantly improve the approximation. However, the choice of the model parameters in (\ref{regularizer}) was not settled, and still remains an open issue. In our paper we will tackle this crucial question by proposing parameter choice strategies (strategies for choosing $\beta_k$ and $\alpha$) that allow good approximation of noisy smooth functions on the sphere. The paper is organized as follows: in the next section we present the necessary preliminaries. In Section 3 we give an explicit solution of the regularized least-squares problem. In Section 4 we derive theoretical error bounds for the resulting approximation, and Section 5 discusses parameter choice strategies. Finally, in the last section we present some numerical experiments that test the theoretical results from the previous sections. 
\section{Preliminaries} We introduce a real spherical harmonic basis for $\mathbb{P}_M$, see \cite{M1966} \[ \left\{Y_{k,j}:\ k=0,1,...,M,\ j=1,...,2k+1\right\}, \] assumed to be orthonormal with respect to the standard $L^2$ inner product, \[ \left\langle f,g\right\rangle_{L^2(\mathbb{S}^2)} :=\int_{\mathbb{S}^2}f(\mathbf{x})g(\mathbf{x}) d\omega(\mathbf{x}). \] Then for an arbitrary spherical polynomial $p\in\mathbb{P}_M$ there exists a unique vector $\gamma=(\gamma_{k,j})\in\mathbb{R}^{(M+1)^2}$ such that \begin{equation}\label{eq:1} p(\mathbf{x})=\sum_{k=0}^M\sum_{j=1}^{2k+1}\gamma_{k,j}Y_{k,j}(\mathbf{x}),\ \ \mathbf{x}\in\mathbb{S}^2. \end{equation} The addition theorem for spherical harmonics (see \cite{M1966}), which asserts \begin{equation}\label{addition} \sum_{j=1}^{2k+1}Y_{k,j}(\textbf{x})Y_{k,j}(\textbf{z}) = \frac{2k+1}{4\pi}P_k(\textbf{x}\cdot\textbf{z}), \quad \textbf{x},\textbf{z}\in\mathbb{S}^2, \end{equation} will play an important role. The assumption that a function $y$ on the unit sphere is continuous implies that $y\in L^2(\mathbb{S}^2)$, and hence that its Fourier coefficients $\left\langle Y_{k,j},y\right\rangle_{L^2(\mathbb{S}^2)}$ with respect to the basis of spherical harmonics are square-summable, i.e. \[ \sum_{k=0}^\infty\sum_{j=1}^{2k+1}\left|\left\langle Y_{k,j},y\right\rangle_{L^2(\mathbb{S}^2)}\right|^2<\infty. \] To measure any additional smoothness of $y$ it is convenient to introduce a Hilbert space $W^{\phi,\beta}$ that is especially tailored to the particular problem, namely \[ y\in W^{\phi,\beta}:=\left\{g: \left\|g\right\|^2_{W^{\phi,\beta}}:=\sum_{k=0}^\infty\sum_{j=1}^{2k+1}\frac{\left|\left\langle Y_{k,j},g\right\rangle_{L^2(\mathbb{S}^2)}\right|^2}{\phi^2(\beta_k^{-2})}<\infty\right\}, \] where $\phi$ is a non-decreasing function such that $\phi(0)=0$ and $\beta=\left\{\beta_0,\beta_1,...,\beta_M,\ldots\right\}$ is the sequence of coefficients appearing in the regularizer (\ref{regularizer}). 
In the literature, see, e.g., \cite{L2013}, the function $\phi$ goes under the name of \emph{smoothness index function}. In this context the smoothness of $y$ is encoded in $\phi$ and $\beta$. For example, if the smoothness index function $\phi(t)$ and the sequence $\beta=\left\{\beta_k\right\}$ increase polynomially with $t$ and $k$ such that $\phi(t)=O\left(t^{\nu_1}\right), \beta_k=O\left(k^{\nu_2}\right), \nu_1\nu_2>1/2$, then the space $W^{\phi,\beta}$ becomes a spherical Sobolev space $H_{2\nu_1\nu_2}$ (see, e.g., \cite{F1999}, p. 64), and a spherical analog of the fundamental lemma due to Sobolev (see \cite{F1999}, Lemma 2.1.5) says that $H_{2\nu_1\nu_2}$ is embedded in the space $C^{(\nu)}(\mathbb{S}^2)$ of functions, which have $\nu$ continuous derivatives on $\mathbb{S}^2, \nu<2\nu_1\nu_2-1$, and are the restrictions to $\mathbb{S}^2$ of functions satisfying the Laplace equation in the outer space of $\mathbb{S}^2$ and being regular at infinity. Then Jackson's theorem on the sphere (see \cite{R1971}, Theorem 3.3) tells us that for $y\in W^{\phi,\beta}$, there holds \begin{equation}\label{eq:3} \inf_{p\in\mathbb{P}_M}\left\|y-p\right\|_{C(\mathbb{S}^2)}=O\left(M^{-\nu}\right), \ \nu<2\nu_1\nu_2-1. \end{equation} On the other hand, if the sequence $\beta=\left\{\beta_k\right\}$ increases exponentially then for polynomially increasing $\phi$ and $y\in W^{\phi,\beta}$ we have \[ \inf_{p\in\mathbb{P}_M}\left\|y-p\right\|_{C(\mathbb{S}^2)}=O\left(e^{-qM}\right), \] where $q$ is some positive number that does not depend on $M$. In the error analysis later in the paper we make use of a linear polynomial approximation that in a certain precise sense mimics best approximation in the space of spherical polynomials of half the degree. 
The approximation, studied in \cite{Mh2005, F2008, S2011}, approximates a function $y\in C(\mathbb{S}^2)$ by $V_M y\in\mathbb{P}_M$ defined by \begin{eqnarray}\label{V_M} V_M y(\mathbf{x})&:=\sum_{k=0}^M h\left(\frac{k}{M}\right)\sum_{j=1}^{2k+1}Y_{k,j}(\mathbf{x})\left\langle Y_{k,j},y\right\rangle_{L^2(\mathbb{S}^2)}\\ &=\sum_{k=0}^M h\left(\frac{k}{M}\right)\frac{2k+1}{4\pi}\int_{\mathbb{S}^2}P_k(\mathbf{x}\cdot\mathbf{z})y(\mathbf{z})d\omega(\mathbf{z}),\nonumber \end{eqnarray} where $h$ is a real-valued function on $\mathbb{R}^+$, called a filter function, which satisfies \begin{equation}\label{hspec} h(t)\in[0,1]\,\forall\, t\in \mathbb{R}^+,\quad h(t) = \left\{ \begin{array}{l l} 1, & \quad t\in\left[0,1/2\right],\\ 0, & \quad t\in\left(1,\infty\right). \end{array} \right. \end{equation} It is shown in \cite{S2011} that for suitable choices of the filter $h$ (including any filter in $C^3(\mathbb{R}^+)$, or the unique $C^1$ quadratic spline with breakpoints at 1/2, 3/4 and 1 that satisfies (\ref{hspec})), the norm of the operator $V_M$ as an operator from $\mathbb{P}_M$ to $C(\mathbb{S}^2)$ is bounded independently of $M$. Since, as is easily seen, $V_M$ reproduces polynomials of degree less than or equal to $M/2$, it follows in the usual way that \[ \left\|y-V_My\right\|_{C(\mathbb{S}^2)}\leq c\inf_{p\in\mathbb{P}_{\left[ M/2\right]}}\left\|y-p\right\|_{C(\mathbb{S}^2)}, \] where $\left[\cdot\right]$ denotes the floor function. (In this paper $c$ is a generic constant, which may take different values at different occurrences.) In view of (\ref{eq:3}), for polynomially increasing $\phi, \beta$ and $y\in W^{\phi,\beta}$ we have \[ \left\|y-V_My\right\|_{C(\mathbb{S}^2)}\leq c\left[M/2\right]^{-\nu}\leq cM^{-\nu}. 
\] On the other hand, for exponentially increasing $\beta$ and polynomially increasing $\phi$ the theory \cite{R2000} suggests taking $h(t) = 1$ for $t\in\left[0,1\right]$ (in which case $V_M y$ is just the $M$th-degree partial sum of the Fourier-Laplace series). Then for $y\in W^{\phi,\beta}$ there holds \[ \left\|y-V_My\right\|_{C(\mathbb{S}^2)}\leq c\sqrt{M}\inf_{p\in\mathbb{P}_M}\left\|y-p\right\|_{C(\mathbb{S}^2)}\leq c\sqrt{M}e^{-qM}. \] \section{Weighted regularized least-squares problem and its solution} The penalization operator (\ref{regularizer}) can equivalently be written, using the addition theorem (\ref{addition}) and (\ref{eq:1}), as \begin{eqnarray}\label{equivalentR} R_Mp(\mathbf{x})&=&\sum_{k=0}^M\beta_k\sum_{j=1}^{2k+1}Y_{k,j}(\mathbf{x})\left\langle Y_{k,j},p \right\rangle_{L^2(\mathbb{S}^2)}\\ &=&\sum_{k=0}^M\beta_k\sum_{j=1}^{2k+1}\gamma_{k,j}Y_{k,j}(\textbf{x}),\nonumber \end{eqnarray} allowing us to write the minimization problem as one of linear algebra. For the noisy function $y^\epsilon$ defined on $\mathbb{S}^2$, let $\mathbf{y^\epsilon}:=\mathbf{y^\epsilon}(X_N)$ be the column vector \[ \mathbf{y^\epsilon}=\left[y^\epsilon(\mathbf{x}_1),...,y^\epsilon(\mathbf{x}_N)\right]^T\ \in \mathbb{R}^N, \] and let $\mathbf{Y_M}:=\mathbf{Y_M}(X_N)\in\mathbb{R}^{(M+1)^2\times N}$ be the matrix of spherical harmonics evaluated at the points of $X_N$. 
Using this notation we can reduce the minimization problem in (\ref{leastsquares}) to the following discrete minimization problem: \begin{equation}\label{eq:10} \min_{\boldsymbol{\gamma}\in\mathbb{R}^{(M+1)^2}}\left\|\mathbf{W}^{1/2}\mathbf{Y_M}^T\boldsymbol{\gamma}-\mathbf{W}^{1/2}\mathbf{y^\epsilon}\right\|^2_2+\alpha\left\|\mathbf{W}^{1/2}\mathbf{R_M}^T\boldsymbol{\gamma}\right\|^2_2,\ \alpha>0, \end{equation} where $\left\|\cdot\right\|_2$ is the standard Euclidean vector norm, $\mathbf{R_M}:=\mathbf{R_M}(X_N)=\mathbf{B_M}\mathbf{Y_M}\in\mathbb{R}^{(M+1)^2\times N}$, $\mathbf{B_M}$ is a positive diagonal matrix defined by \begin{equation}\label{eq:11} \mathbf{B_M}:=\diag(\beta_0,\underbrace{\beta_1,\beta_1,\beta_1}_{3},...,\underbrace{\beta_M,\beta_M,...,\beta_M}_{2M+1})\in\mathbb{R}^{(M+1)^2\times(M+1)^2}, \end{equation} and $\mathbf{W}$ is a diagonal matrix of cubature weights \[ \mathbf{W}:=\diag(w_1,...,w_N)\in\mathbb{R}^{N\times N}. \] The solution of (\ref{leastsquares}) can be found from the following system of linear equations \begin{equation}\label{eq:12} (\mathbf{Y_M}\mathbf{W}\mathbf{Y_M}^T+\alpha\mathbf{B_M}\mathbf{Y_M}\mathbf{W}\mathbf{Y_M}^T\mathbf{B_M})\boldsymbol{\gamma}=\mathbf{Y_M}\mathbf{W}\mathbf{y^\epsilon}. \end{equation} \begin{theorem} \label{th:solution} Assume $y^\epsilon\in C(\mathbb{S}^2)$. Let $M>0$ be given, and let (\ref{cubature}) hold true for the set of points $X_N$. 
Then (\ref{eq:12}) has the unique solution $\boldsymbol{\gamma}=(\gamma_{k,j})\in\mathbb{R}^{(M+1)^2}$, \begin{equation}\label{eq:13} \gamma_{k,j}=\frac{1}{1+\alpha\beta_k^2}\sum_{i=1}^N w_i Y_{k,j}(\mathbf{x}_i)y^\epsilon(\mathbf{x}_i), \end{equation} and the minimizer of (\ref{leastsquares}) is given by \begin{eqnarray}\label{eq:14} y_M(\mathbf{x})=T_{\alpha,M}^\beta y^\epsilon(\mathbf{x}):&=&\sum_{k=0}^M\sum_{j=1}^{2k+1}\frac{Y_{k,j}(\mathbf{x})}{1+\alpha\beta_k^2} \sum_{i=1}^N w_iY_{k,j}(\mathbf{x}_i)y^\epsilon(\mathbf{x}_i)\\ &=&\sum_{k=0}^M\frac{2k+1}{4\pi}\frac{1}{1+\alpha\beta_k^2} \sum_{i=1}^N w_iP_{k}(\mathbf{x}\cdot\mathbf{x}_i)y^\epsilon(\mathbf{x}_i).\nonumber \end{eqnarray} \end{theorem} \begin{proof} On using (\ref{cubature}) we have \[ \sum_{i=1}^N w_iY_{k,j}(\mathbf{x}_i)Y_{\kappa,\iota}(\mathbf{x}_i)= \left\langle Y_{k,j},Y_{\kappa,\iota}\right\rangle_{L^2(\mathbb{S}^2)}=\delta_{k,\kappa}\delta_{j,\iota}, \] where $k,\kappa=0,...,M,\ j=1,...,2k+1,\ \iota=1,...,2\kappa+1$. Thus $\mathbf{Y_M}\mathbf{W}\mathbf{Y_M}^T$ is the identity matrix. Since $\mathbf{B_M}$ and $\mathbf{W}$ are diagonal matrices, the solution of (\ref{eq:12}) is given by (\ref{eq:13}) and from (\ref{eq:1}) we obtain (\ref{eq:14}). \qquad\end{proof} \textbf{Remark 3.1}. Note that one can also employ fast iterative algorithms for scattered least squares \cite{K2007} to find the minimizer (\ref{leastsquares}). Moreover, the evaluation of the coefficients (\ref{eq:13}) can be realized with fast spherical Fourier transform presented in \cite{K2008}. \section{Error bounds} In this section we estimate the uniform error of approximation of $y$ by $y_M$, see (\ref{eq:14}). It is convenient here to regard $y^\epsilon$ as a continuous function on $\mathbb{S}^2$, constructed by some interpolation process from its values on the discrete set $X_N$. The operator $T_{\alpha,M}^\beta$ defined in (\ref{eq:14}) can then be considered as an operator on the space $C(\mathbb{S}^2)$. 
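As a computational aside to Theorem \ref{th:solution}: since the cubature exactness makes $\mathbf{Y_M}\mathbf{W}\mathbf{Y_M}^T$ the identity, the coefficients (\ref{eq:13}) require no linear solve, only a weighted sum followed by a diagonal scaling. A minimal NumPy sketch (array and function names ours):

```python
import numpy as np

def regularized_coeffs(Y, w, y_eps, beta, alpha):
    """Coefficients (13): gamma_{k,j} = (1 + alpha*beta_k^2)^{-1} *
    sum_i w_i Y_{k,j}(x_i) y^eps(x_i).

    Y     : ((M+1)^2, N) spherical harmonics evaluated at the nodes
    w     : (N,) cubature weights;  y_eps : (N,) noisy data values
    beta  : ((M+1)^2,) penalization weights, with beta_k repeated
            2k+1 times as in the diagonal of B_M in (11)."""
    return (Y @ (w * y_eps)) / (1.0 + alpha * beta**2)
```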
Since $y_M=T_{\alpha,M}^{\beta} y^\epsilon$ it is clear that \[ y-y_M=y-T_{\alpha,M}^\beta V_M y+T_{\alpha,M}^\beta(V_M y - y + y - y^\epsilon), \] and hence \begin{eqnarray}\label{eq:15} \left\|y-y_M\right\|_{C(\mathbb{S}^2)}&\leq&\left\|y-T_{\alpha,M}^{\beta} V_My\right\|_{C(\mathbb{S}^2)} \\ \nonumber &+&\left\|T_{\alpha,M}^{\beta}\right\|_{C(\mathbb{S}^2)}\left(\left\|y-V_My\right\|_{C(\mathbb{S}^2)}+\left\|y-y^\epsilon\right\|_{C(\mathbb{S}^2)}\right), \end{eqnarray} where $\left\|T_{\alpha,M}^{\beta}\right\|_{C(\mathbb{S}^2)}$ is the norm of the operator $T_{\alpha,M}^{\beta}:C(\mathbb{S}^2)\rightarrow C(\mathbb{S}^2)$. Let $\epsilon=\left[\epsilon_1,\epsilon_2,...,\epsilon_N\right] \in \mathbb{R}^N$, and $\left\|\epsilon\right\|_\infty=\max\left|\epsilon_i\right|$. It is natural to assume, and from now on we shall do so, that $\left\|y-y^\epsilon\right\|_{C(\mathbb{S}^2)}= \left\|\epsilon\right\|_\infty$. This means that we adopt the deterministic noise model, which allows the worst noise level at any point of $X_N$. Then it is also natural to assume that $M$ is large enough to ensure $\left\|y-V_My\right\|_{C(\mathbb{S}^2)}\le\left\|\epsilon\right\|_\infty$, since otherwise data noise is dominated by the approximation error and no regularization is required. Then the bound (\ref{eq:15}) can be reduced to \begin{equation}\label{eq:16} \left\|y-y_M\right\|_{C(\mathbb{S}^2)}\leq\left\|y-T_{\alpha,M}^{\beta}V_My\right\|_{C(\mathbb{S}^2)}+2\left\|T_{\alpha,M}^{\beta}\right\|_{C(\mathbb{S}^2)}\left\|\epsilon\right\|_\infty. \end{equation} We will call the first term of the right-hand side in (\ref{eq:16}) the \emph{regularization error} and the second the \emph{noise propagation error}. The noise propagation error can be quantified by the following result for the norm of $T_{\alpha, M}^\beta$, which is a consequence of (\ref{eq:14}). 
\begin{theorem} \label{th:regerr} Under the conditions of Theorem \ref{th:solution} \begin{eqnarray}\label{eq:17} \left\|T_{\alpha,M}^{\beta}\right\|_{C(\mathbb{S}^2)}&=&\max_{\mathbf{x}\in\mathbb{S}^2}\sum_{i=1}^N w_i \left| \sum_{k=0}^M \frac{2k+1}{4\pi(1+\alpha\beta_k^2)}P_k(\mathbf{x}\cdot\mathbf{x}_i)\right|\\ \nonumber & \le &\max_{\mathbf{x}\in\mathbb{S}^2}\sum_{i=1}^N w_i\sum_{k=0}^M\frac{2k+1}{4\pi(1+\alpha\beta_k^2)}\left|P_k(\mathbf{x}\cdot\mathbf{x}_i)\right|. \end{eqnarray} \end{theorem} Theorem \ref{th:regerr} reduces to Proposition 5.1 in \cite{A2012} on setting $w_i=4\pi/N$, but note that the result as stated in \cite{A2012} corresponds to the upper bound in (\ref{eq:17}), and so is not correctly stated. Now we are going to bound the regularization error $\left\|y-T_{\alpha,M}^{\beta}V_My\right\|_{C(\mathbb{S}^2)}$. We start with the following decomposition \begin{equation}\label{eq:18} y-T_{\alpha,M}^{\beta}V_My=y-T_{0,M}V_My+(T_{0,M}-T_{\alpha,M}^{\beta})V_My, \end{equation} where $T_{0,M}$ is the so-called hyperinterpolation operator \cite{S1995}, \begin{eqnarray}\label{eq:19} T_{0,M}g(\mathbf{x})&=&\sum_{k=0}^M\sum_{j=1}^{2k+1}Y_{k,j}(\mathbf{x}) \sum_{i=1}^N w_iY_{k,j}(\mathbf{x}_i)g(\mathbf{x}_i)\\ \nonumber &=&\sum_{k=0}^M \frac{2k+1}{4\pi} \sum_{i=1}^N w_i P_k(\mathbf{x}\cdot \mathbf{x}_i)g(\mathbf{x}_i). \end{eqnarray} From (\ref{eq:19}) and (\ref{cubature}) it immediately follows that for any $p\in\mathbb{P}_M$ we have $T_{0,M}p=p$. Therefore, $T_{0,M}V_My=V_My$. In view of this property and the decomposition (\ref{eq:18}) we can derive a bound for the regularization error \begin{eqnarray}\label{eq:20} \left\|y-T_{\alpha,M}^{\beta}V_My\right\|_{C(\mathbb{S}^2)}&\leq& \left\|y-V_My\right\|_{C(\mathbb{S}^2)}+\left\|(T_{0,M}-T_{\alpha,M}^{\beta})V_My\right\|_{C(\mathbb{S}^2)} \\ \nonumber &\leq& \left\|\epsilon\right\|_\infty+\left\|(T_{0,M}-T_{\alpha,M}^{\beta})V_My\right\|_{C(\mathbb{S}^2)}. 
\end{eqnarray} An estimate of the term $\left\| (T_{0,M}-T_{\alpha,M}^{\beta})V_M\right\|_{C(\mathbb{S}^2)}$ in (\ref{eq:20}) is given by the following theorem. \begin{theorem} \label{th:apb} Assume that the smoothness index function $\phi$ is such that the function $t\rightarrow t/\phi(t)$ is monotone. Then for $y\in W^{\phi,\beta}$ there holds \begin{equation}\label{eq:21} \left\|(T_{0,M}-T_{\alpha,M}^{\beta})V_My\right\|_{C(\mathbb{S}^2)}\leq cM\hat{\phi}(\alpha)\left\|y\right\|_{W^{\phi,\beta}}, \end{equation} where $\hat{\phi}(\alpha)=\phi(\alpha)$ if $t/\phi(t)$ is non-decreasing, and $\hat{\phi}(\alpha)=\alpha$ if $t/\phi(t)$ is non-increasing. \end{theorem} \begin{proof} In view of (\ref{eq:14}), (\ref{eq:19}) and (\ref{V_M}), together with the fact that the cubature formula in (\ref{cubature}) is exact for $p\in\mathbb{P}_{2M}$, we may write \begin{eqnarray*} &&\left\|(T_{0,M}-T_{\alpha,M}^{\beta})V_My\right\|_{C(\mathbb{S}^2)}\\ \nonumber &&=\left\|\sum_{k=0}^M\sum_{j=1}^{2k+1}Y_{k,j}(\cdot)\frac{\alpha\beta_k^2}{1+\alpha\beta_k^2}\left\langle Y_{k,j},V_M y\right\rangle_{L^2(\mathbb{S}^2)}\right\|_{C(\mathbb{S}^2)}\\ \nonumber &&=\left\|\sum_{k=0}^M\sum_{j=1}^{2k+1}h\left(\frac{k}{M}\right)Y_{k,j}(\cdot)\frac{\alpha\beta_k^2}{1+\alpha\beta_k^2}\left\langle Y_{k,j},y\right\rangle_{L^2(\mathbb{S}^2)}\right\|_{C(\mathbb{S}^2)}, \end{eqnarray*} where in the last step we used $\langle Y_{k,j},V_M y\rangle_{L^2(\mathbb{S}^2)}=h(k/M) \langle Y_{k,j},y\rangle_{L^2(\mathbb{S}^2)}$. 
Now using the Nikolskii inequality (see, e.g., \cite{N2006}, Proposition 2.5) and also $h(k/M)\le 1$, we obtain \begin{eqnarray*} \|(T_{0,M}-T_{\alpha,M}^{\beta})V_My\|_{C(\mathbb{S}^2)}&&\leq cM\left\|\sum_{k=0}^M\sum_{j=1}^{2k+1}h\left(\frac{k}{M}\right)Y_{k,j}\frac{\alpha\beta_k^2}{1+\alpha\beta_k^2}\left\langle Y_{k,j},y\right\rangle_{L^2(\mathbb{S}^2)}\right\|_{L^2(\mathbb{S}^2)} \\ &&= cM\left(\sum_{k=0}^M\sum_{j=1}^{2k+1}h\left(\frac{k}{M}\right)^2\left(\frac{\alpha\beta_k^2}{1+\alpha\beta_k^2}\right)^2\left|\left\langle Y_{k,j},y\right\rangle_{L^2(\mathbb{S}^2)}\right|^2\right)^{1/2}\\ &&\le cM\left(\sum_{k=0}^M\sum_{j=1}^{2k+1}\left(\frac{\alpha\beta_k^2}{1+\alpha\beta_k^2}\right)^2\phi^2(\beta_k^{-2})\frac{\left|\left\langle Y_{k,j},y\right\rangle_{L^2(\mathbb{S}^2)}\right|^2}{\phi^2(\beta_k^{-2})}\right)^{1/2} \\ &&\leq cM\sup_{u\in[0,\beta_0^{-2}]}\left|\frac{\alpha}{\alpha+u}\phi(u)\right|\left\|y\right\|_{W^{\phi,\beta}}\leq cM\hat{\phi}(\alpha)\left\|y\right\|_{W^{\phi,\beta}}, \end{eqnarray*} where the last inequality follows from \cite{L2013}, Proposition 2.7. \qquad\end{proof} It is instructive to note that if, for example, $\phi(t)=t^\nu$, then the function $\hat{\phi}$ defined in Theorem \ref{th:apb} is given by \[ \hat{\phi}(\alpha)=\begin{cases} \alpha,\quad \quad \nu\ge 1,\\ \alpha^\nu, \quad 0<\nu<1. \end{cases} \] Thus the error bound in the theorem does not improve if $\phi(t)$ grows faster than $t$. \section{Parameter choice strategies} In this section we will be concerned with the choice of the design parameters for the least-squares approximation $y_M$, namely the regularization parameter $\alpha$ and the penalization parameters $\beta_k$. In the first subsection we discuss an \emph{a priori} choice for the penalization parameters $\beta_k$. In the next subsection we consider an adaptive strategy for choosing the regularization parameter $\alpha$. 
In the third subsection we present an \emph{a posteriori} choice for the penalization parameters $\beta_k$. The choice of parameters is motivated by the error bound (\ref{eq:16}) for $y-y_M$. From (\ref{eq:20}) and (\ref{eq:21}) it follows that the bound (\ref{eq:16}) can be reduced to the following: \begin{eqnarray}\label{eq:26} \left\|y-y_M\right\|_{C(\mathbb{S}^2)}&\leq& \left\|\epsilon\right\|_\infty+cM\hat{\phi}(\alpha)\left\|y\right\|_{W^{\phi,\beta}}+2\left\|\epsilon\right\|_\infty\left\|T_{\alpha,M}^{\beta}\right\|_{C(\mathbb{S}^2)} \\ \nonumber &\leq&cM\hat{\phi}(\alpha)\left\|y\right\|_{W^{\phi,\beta}}+c\left\|\epsilon\right\|_\infty\left\|T_{\alpha,M}^{\beta}\right\|_{C(\mathbb{S}^2)}. \end{eqnarray} \subsection{A priori choice of the penalization parameters} For definiteness, we assume in this subsection that $\phi(t)=t$, which means that $\hat{\phi}$ has the highest order in $\alpha$, namely $\hat{\phi}(\alpha)=\alpha$. The error bound (\ref{eq:26}) now provides useful guidance in the choice of the penalization parameters $\beta_k$. If $\beta_0$ is considered to be fixed, and we increase the rate of growth of the $\beta_k$, then the first term on the right-hand side of the last line of (\ref{eq:26}) will increase, while from (\ref{eq:17}) the second term has an upper bound that decreases with increasing rate of growth of the $\beta_k$. Even more can be said: for the first term to be finite the $W^{\phi,\beta}$ norm of $y$ must be finite, which imposes the constraint \begin{equation}\label{eq:22} \sum_{k=0}^\infty\sum_{j=1}^{2k+1}\beta_k^4\left\langle Y_{k,j},y\right\rangle^2_{L^2(\mathbb{S}^2)}<\infty. \end{equation} To see what this condition means in a particular application, we consider the SGG-problem mentioned in the Introduction. In this problem $y$ is the second order radial derivative of the gravitational potential measured pointwise at the orbital sphere of a satellite. 
It can be shown \cite{F2001,L2010,S1983} that after a proper normalization of this sphere to $\mathbb{S}^2$ we have \begin{equation}\label{eq:23} \left\langle Y_{k,j},y\right\rangle_{L^2(\mathbb{S}^2)}=a_kg_{k,j}, \end{equation} where $a_k=\left(\frac{R}{\rho}\right)^k\frac{(k+1)(k+2)}{\rho^2}$, $\rho$ is the radius of the orbital sphere, $R$ is the radius of the surface of the Earth considered as a sphere, and $\left\{g_{k,j}\right\}$ is some (unknown) sequence of scaled Fourier coefficients of the gravitational potential $g$ measured at the surface of the Earth. It is well-known (see, e.g., \cite{S1983a, F2001}) that in the scale of the spherical Sobolev spaces $\left\{H_s\right\}$ mentioned above the Earth's gravitational potential has a smoothness index $s=3/2$ at least, which means that the sequence $\left\{g_{k,j}\right\}$ should satisfy the requirement \[ \sum_{k=0}^\infty\sum_{j=1}^{2k+1}(k+1/2)^3g^2_{k,j}<\infty. \] In view of the last requirement, the condition (\ref{eq:22}) is satisfied by the choice \begin{equation}\label{eq:25} \beta_k=a_k^{-1/2}(k+1/2)^{3/4},\ \ k=0,1,... . \end{equation} Of course the condition (\ref{eq:22}) will also be satisfied if the $\beta_k$ increase more slowly, but at the likely expense of a larger second term in the error bound (\ref{eq:26}). Since $R<\rho$, it is clear that the $\beta_k$ given by (\ref{eq:25}) increase exponentially. This is natural in view of the exponential decrease of the Fourier coefficients (\ref{eq:23}) of the approximated function, which implies that the exact function as measured at the satellite height is very smooth, even analytic. The regularization scheme (\ref{leastsquares}) with weights (\ref{eq:25}) will penalize the presence of oscillating coefficients with large indexes in the approximant $T_{\alpha,M}^\beta y^\epsilon$. In the last section we illustrate a good performance of the scheme (\ref{leastsquares}) with these penalization weights. 
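To make (\ref{eq:25}) concrete, here is a short sketch computing these weights; the numerical values of $R$ and $\rho$ below (Earth radius and an orbital radius, in km) are ours, purely for illustration:

```python
import numpy as np

def sgg_weights(M, R, rho):
    """A priori penalization weights (25): beta_k = a_k^{-1/2} (k + 1/2)^{3/4},
    with a_k = (R/rho)^k (k+1)(k+2)/rho^2 as in (23)."""
    k = np.arange(M + 1)
    a = (R / rho) ** k * (k + 1) * (k + 2) / rho**2
    return a ** (-0.5) * (k + 0.5) ** 0.75

beta = sgg_weights(30, R=6371.0, rho=6621.0)  # assumed illustrative radii
```

Since $R<\rho$, the factor $a_k^{-1/2}$ grows like $(\rho/R)^{k/2}$, which is the exponential growth discussed above.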
\subsection{Regularization parameter choice strategy} For regularization of our problem we will implement an adaptive regularization parameter choice strategy known as the balancing principle (see, e.g., \cite{L2013, M2003, P2005} and references therein). In this method the regularization parameter $\alpha$ is selected from some finite set, say $\Delta_L:=\left\{\alpha_i=q^i\alpha_0, i=1,2,...,L\right\}$, with $q\in (0,1)$ and $L$ large enough. Applying the balancing principle to our problem, we start with the smallest parameter $\alpha_L$ and increase stepwise $\alpha_{i-1}=\alpha_i/q, i=L,L-1,...,$ until $\alpha_*:=\alpha_z$ is the parameter for which \[ \left\|T_{\alpha_z,M}^\beta y^\epsilon-T_{\alpha_{z+1},M}^\beta y^\epsilon\right\|_{C(\mathbb{S}^2)}>{{\omega}}\left\|\epsilon\right\|_\infty\left\|T_{\alpha_{z+1},M}^\beta\right\|_{C(\mathbb{S}^2)} \] for the first time. Here ${{\omega}}$ is a design parameter. In all our numerical tests with the balancing principle (BP) reported below in Section 6, the value of ${{\omega}}$ is fixed as ${{\omega}}=0.002$ and is data independent, while the value of the regularization parameter $\alpha_*$ chosen according to BP varies with data. Note that for choosing $\alpha_*$ we need only the knowledge of (\ref{eq:14}) and an upper bound for $\left\|T_{\alpha,M}^{\beta}\right\|_{C(\mathbb{S}^2)}$ given by (\ref{eq:17}). In Section 6 we will present a numerical test showing a good reconstruction of the function on the sphere from noisy observations with the above \emph{a posteriori} regularization parameter. It is instructive to see that in all tests BP performs at the level of the ideal parameter choice $\alpha\in\Delta_L$. \subsection{A posteriori choice of the penalization weights} We start with the observation that the space $\mathbb{P}_M$ of spherical polynomials $p$ is a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$. 
By the Riesz representation theorem, to every RKHS $\mathcal{H}$ there corresponds a unique symmetric positive definite function $K:\mathbb{S}^2\times\mathbb{S}^2\rightarrow \mathbb{R}$, called the reproducing kernel of $\mathcal{H}=\mathcal{H}_K$, that has the following reproducing property: $p(\mathbf{x})=\left\langle p(\cdot),K(\cdot,\mathbf{x})\right\rangle_{\mathcal{H}_K}$. A comprehensive theory of RKHSs can be found in \cite{A1950}. It is easy to check that the kernel \begin{equation}\label{eq:a} K(\mathbf{x},\mathbf{z})=\sum_{k=0}^M \beta_k^{-2}\sum_{j=1}^{2k+1} Y_{k,j}(\mathbf{x}) Y_{k,j}(\mathbf{z}),\ \mathbf{x}, \mathbf{z}\in\mathbb{S}^2 \end{equation} has the above mentioned reproducing property if the inner product in $\mathbb{P}_M$ is defined as follows \[ \left\langle f,g \right\rangle_{\mathcal{H}_K}=\sum_{k=0}^M \beta_k^2\sum_{j=1}^{2k+1} \left\langle Y_{k,j},f\right\rangle_{L^2(\mathbb{S}^2)} \left\langle Y_{k,j},g\right\rangle_{L^2(\mathbb{S}^2)}. \] Indeed, for $p\in\mathbb{P}_M$ we find \begin{eqnarray*} \left\langle p(\cdot),K(\mathbf{x},\cdot) \right\rangle_{\mathcal{H}_K}&=&\sum_{k=0}^M \beta_k^2\sum_{j=1}^{2k+1} \left\langle Y_{k,j},p\right\rangle_{L^2(\mathbb{S}^2)} \left\langle Y_{k,j},K(\mathbf{x},\cdot)\right\rangle_{L^2(\mathbb{S}^2)} \\ \nonumber &=&\sum_{k=0}^M \beta_k^2\sum_{j=1}^{2k+1}\left\langle Y_{k,j},p\right\rangle_{L^2(\mathbb{S}^2)}\beta_k^{-2}Y_{k,j}(\mathbf{x})=p(\mathbf{x}). \end{eqnarray*} In this RKHS setting the spherical polynomial $y_M=y_M(N,K,\alpha)$ defined by (\ref{leastsquares}) can also be seen, using the addition theorem and (\ref{cubature}), as the minimizer of the following quadratic functional \begin{equation}\label{eq:b} T_\alpha(N,K;p)=\sum_{i=1}^Nw_i(p(\mathbf{x}_i)-y^\epsilon(\mathbf{x}_i))^2+\alpha\left\|p\right\|^2_{\mathcal{H}_K},\ p\in\mathbb{P}_M, \end{equation} which makes (\ref{eq:a}) a natural way of defining the reproducing kernel in this context. 
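By the addition theorem, the kernel (\ref{eq:a}) collapses to a univariate Legendre series in $t=\mathbf{x}\cdot\mathbf{z}$, which is how one would evaluate it in practice. A sketch (function name ours):

```python
import numpy as np

def kernel(t, beta):
    """Evaluate K(x, z) = sum_k beta_k^{-2} (2k+1)/(4 pi) P_k(x . z)
    at t = x . z, i.e. the kernel (a) via the addition theorem."""
    M = len(beta) - 1
    coeffs = (2.0 * np.arange(M + 1) + 1.0) / (4.0 * np.pi * np.asarray(beta) ** 2)
    return np.polynomial.legendre.legval(t, coeffs)
```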
At this point the problem of the choice of the penalization weights $\left\{\beta_k\right\}$ is transformed into that of selecting a kernel $K$ from the set $\mathcal{K}$ of kernels of the form (\ref{eq:a}). In the literature there are several methods for choosing a kernel from the available set of kernels (see, e.g., \cite{M2005, N2011}, and references therein). For example, in \cite{M2005} the authors suggest selecting a kernel by minimizing the value of the functional (\ref{eq:b}) evaluated at its minimizer $y_M$. In the present context, according to \cite{M2005}, the kernel $K=K_*$ of choice is given as \[ K_*=\arg \min\left\{T_\alpha(N,K;y_M(N,K,\alpha)),\ K\in\mathcal{K}\right\}. \] Note that such $K_*$ depends on the value of the regularization parameter $\alpha$. Therefore, the approach of \cite{M2005} can be realized only for an \emph{a priori} known $\alpha$. However, in practice we are not provided with this knowledge, and have to use \emph{a posteriori} regularization parameter choice strategies (for example, the balancing principle described in Subsection 4.2). Thus, in practice we are dealing with a kernel-dependent regularization parameter $\alpha=\alpha(K)$. This situation has been discussed in \cite{N2011}. In the present context the kernel choice suggested in \cite{N2011} can be written as follows \begin{equation}\label{eq:c} K_+=\arg \min\left\{T_{\alpha(K)}(N,K;y_M(N,K,\alpha(K))),\ K\in\mathcal{K}\right\}. \end{equation} The existence of such $K_+$ has been proved in \cite{N2011} under rather general assumptions on the set of admissible kernels $\mathcal{K}$ and the regularization parameter choice strategy $\alpha=\alpha(K)$. From a practical point of view, it is a challenging issue to use the strategy (\ref{eq:c}) in our case because one has to minimize a function depending on $M+1$ unknown penalization weights $\beta_k$. Therefore, it is natural to reduce the complexity of the model before applying the strategy from \cite{N2011}. 
For example, one may parametrize $\left\{\beta_k\right\}$ as follows: $\beta_k^2=e^{\lambda_1(k+1)}(k+1)^{\lambda_2}, \lambda_1,\lambda_2\geq 0$. In other words, in (\ref{eq:c}) the set of kernels $\mathcal{K}$ consists of the functions \begin{equation}\label{eq:d} K(t,\tau)=\sum_{k=0}^M e^{-\lambda_1(k+1)}(k+1)^{-\lambda_2}\sum_{j=1}^{2k+1} Y_{k,j}(t) Y_{k,j}(\tau),\ t,\tau\in\mathbb{S}^2. \end{equation} Then the kernel $K_+$ can be found by minimizing a function of the two variables $\lambda_1,\lambda_2$. In the last section we will illustrate such a reduced approach by a numerical test showing good performance of the scheme (\ref{leastsquares}) with \emph{a posteriori} chosen penalization weights. \section{Numerical examples} In this section we present some numerical experiments to verify the analysis from the previous sections. Note that we work not with real data but with artificially generated data. In all our experiments we follow \cite{G2009, N2013} and assume that the set of points $X_N$ is the set of Gauss-Legendre points, for which the positive quadrature weights are known analytically. The number of points in this case is $N=2(M+1)^2$, and the corresponding cubature formula (\ref{cubature}) is indeed exact for all spherical polynomials of degree $2M$. In all our experiments $M=30$. Note that in real applications spherical polynomials of much higher degree are used \cite{P2008}. Moreover, the Gauss-Legendre points are known to have the drawback of being too concentrated at the poles, making them unsuitable for real satellite data. In our experiments below we use the Gauss-Legendre points and polynomials of modest degree only for illustration purposes and as a proof of concept. At the same time, we note that even for the case $M=30$ the corresponding discrete problem is rather ill-conditioned and, thus, should be treated with regularization (see Figure \ref{fig:1} and the discussion below). 
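A Gauss-Legendre point set of this kind can be generated as a tensor product of Gauss-Legendre nodes in the polar direction and equispaced nodes in longitude; with $M+1$ polar nodes and $2(M+1)$ longitudes the rule has $N=2(M+1)^2$ points and, by standard results, integrates all spherical polynomials of degree at most $2M$ exactly. A sketch (names ours):

```python
import numpy as np

def gauss_legendre_sphere(M):
    """Tensor-product cubature on S^2 with N = 2(M+1)^2 points:
    Gauss-Legendre in z = cos(theta), equispaced in longitude.
    The weights sum to 4*pi, the surface area of the sphere."""
    z, wz = np.polynomial.legendre.leggauss(M + 1)      # M+1 nodes in [-1, 1]
    phi = 2.0 * np.pi * np.arange(2 * (M + 1)) / (2 * (M + 1))
    Z, PHI = np.meshgrid(z, phi, indexing="ij")
    s = np.sqrt(1.0 - Z**2)
    X = np.stack([s * np.cos(PHI), s * np.sin(PHI), Z], axis=-1).reshape(-1, 3)
    W = (np.pi / (M + 1)) * np.repeat(wz, 2 * (M + 1))
    return X, W
```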
We start with an experiment illustrating that a proper choice of the penalization weights $\beta_0,...,\beta_M$ is crucial for the approximation of functions on the sphere. Consider again the SGG-problem corresponding to (\ref{eq:23}). Note that for $k=1,2,...,30$ the values $a_k=\left(\frac{R}{\rho}\right)^k \frac{(k+1)(k+2)}{\rho^2}$ in (\ref{eq:23}) are increasing, and so they do not exhibit the typical behavior of the singular values of compact operators. This effect is well-known (see, e.g., \cite{F1999}, Fig. 4.2.3, p. 280). Therefore, to mimic the SGG-problem for $M=30$ one usually omits the factor $\frac{(k+1)(k+2)}{\rho^2}$ (see, e.g., \cite{B2007}). In this case the decay character of the coefficients $a_k$ in (\ref{eq:23}) can be modeled, for example, as \[ a_k=(1.2)^{-k},\ k=0,1,...,M. \] We conduct our first experiment in the following way. First we generate a spherical function \[ y=y(\mathbf{x})=\sum_{k=0}^M (1.2)^{-k}\sum_{j=1}^{2k+1}g_{k,j}\frac{1}{\rho}Y_{k,j}(\mathbf{x}),\ \ \mathbf{x}\in\mathbb{S}^2, \] where $g_{k,j}=(k+1/2)^{-3/2}x_{k,j}, \ k=0,...,M, \ j=1,...,2k+1$, and $x_{k,j}$ are random numbers uniformly distributed on $\left[0,1\right]$. The blurred spherical function $y^\epsilon$ is simulated by adding random pointwise noise to the values of the initial function $y$ at the point set $X_N$. The simulated noise values are given as the components of a random vector $0.05\epsilon / \left\|\epsilon\right\|_\infty$, where $\epsilon=\left[\epsilon_1,\epsilon_2,...,\epsilon_N\right]$, and $\epsilon_i$ are uniformly distributed on $[-1,1]$. To mimic the SGG-problem we reconstruct the vector $g=(g_{k,j})$ by $g^{\alpha,M}=(g_{k,j}^{\alpha,M})$, where $g_{k,j}^{\alpha,M}=a_k^{-1}\gamma_{k,j}$, and $\gamma_{k,j}$ are given by (\ref{eq:13}). \begin{figure*} \includegraphics[width=\textwidth]{penalization.eps} \caption{Numerical illustration. The figure presents relative errors for 50 simulations of the data. 
The errors are plotted in ascending order for each of the discussed methods. Note that the two bottom curves corresponding to penalization according to (\ref{eq:25}) nearly overlap.} \label{fig:1} \end{figure*} To assess the obtained results and compare the performance of the considered schemes we measure the relative error \[ \frac{\left\|g-g^{\alpha,M}\right\|_2}{\left\|g\right\|_2}. \] The results are displayed in Figure \ref{fig:1}, where along the vertical axis we plot the relative errors in solving the problem for each of the 50 simulated data sets. The relative errors are plotted in ascending order for each of four methods: a straightforward least-squares fit to noisy data without any regularization; regularization with the penalization weights (\ref{eq:25}) and $\alpha$ chosen according to the balancing principle (BP) from $\Delta_{60}=\left\{\alpha_i=8\cdot q^i,\ i=1,2,\ldots,60\right\}$, $q=0.8$; regularization with default penalization weights $\beta_k=1$, $k=0,1,\ldots,M,$ and the best $\alpha\in\Delta_{60}$; and regularization with the penalization weights (\ref{eq:25}) and the best $\alpha\in\Delta_{60}$. Thus, in the latter two cases the choice of the regularization parameter $\alpha$ was made to achieve the best possible performance of each method. As can be seen from Figure \ref{fig:1}, the balancing principle (BP) performs at the level of the ideal parameter choice strategy. From Figure \ref{fig:1} one can also conclude that the proper choice of the penalization weights according to the proposed \emph{a priori} recipe can significantly improve the accuracy of the reconstruction. Moreover, Figure \ref{fig:1} shows that a straightforward least-squares fit to noisy data without regularization leads to a relative error that is about 2--3 times larger than that after regularization. This confirms that in the considered experiment we are really dealing with a rather ill-conditioned problem. 
In our second experiment we again confirm that the balancing principle gives a value of the regularization parameter $\alpha_*$ that is competitive with the best value manually found in \cite{A2012}. We choose the regularization parameter from the same geometric sequence $\Delta_{60}$ and use the same value of the design parameter ${\omega}=0.002$ in BP. \begin{figure} \centering \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=\textwidth]{fr1.eps} \caption{$y$} \label{fig:fr1} \end{subfigure}% \quad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=\textwidth]{fr2.eps} \caption{$y$ with $N(0,0.25)$ noise} \label{fig:fr2} \end{subfigure} \\ \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\textwidth]{fr3.eps} \caption{$T_{\alpha_*,M}y^\epsilon$} \label{fig:fr3} \end{subfigure} \caption{Franke function recovery}\label{fig:fr} \end{figure} Similarly to \cite{A2012}, as a test function $y$ we take the sum of the Franke function $y_1$ modified by Renka \cite{R1988} (p.146) and a function $y_{\rm{cap}}$ \cite{W1992}, namely $y=y_1+y_{\rm{cap}}$ with \begin{eqnarray}\label{eq:e} y_1(x_1,x_2,x_3)&=&0.75e^{-(9x_1-2)^2/4-(9x_2-2)^2/4-(9x_3-2)^2/4} \\ \nonumber &+&0.75e^{-(9x_1+1)^2/49-(9x_2+1)/49-(9x_3+1)/10} \\ \nonumber &+&0.5e^{-(9x_1-7)^2/4-(9x_2-3)^2/4-(9x_3-5)^2/4}\\ \nonumber &-&0.2e^{-(9x_1-4)^2-(9x_2-7)^2-(9x_3-5)^2},\ (x_1,x_2,x_3)\in\mathbb{S}^2, \end{eqnarray} and \begin{equation}\label{eq:f} y_{\rm{cap}}(\mathbf{x}) = \left\{ \begin{array}{l l} 2\cos\left(\pi\arccos(\mathbf{x}_c\cdot\mathbf{x})\right), & \quad \mathbf{x}_c\cdot\mathbf{x}\geq\cos(0.5),\\ 0, & \quad \rm{otherwise}, \end{array} \right. \end{equation} where $\mathbf{x}_c=\left(-\frac{1}{2},-\frac{1}{2},\frac{1}{\sqrt{2}}\right)^T$ and $(\cdot)$ defines the dot product of two vectors. 
The function $y$ was then contaminated by noise, taking for the noise $\epsilon(\mathbf{x})$ at each $\mathbf{x}\in X_N$ an independent sample of a normal random variable with mean 0 and standard deviation $\sigma=0.5$. Figure \ref{fig:fr1} illustrates the function $y$, while Figure \ref{fig:fr2} shows the blurred function $y^\epsilon(\mathbf{x})=y(\mathbf{x})+\epsilon(\mathbf{x})$. For the reconstruction, following \cite{A2012} we choose a Laplace-Beltrami penalization operator that corresponds to the matrix \[ \mathbf{B_M}:=\diag(0,\underbrace{4,4,4}_{3},...,\underbrace{\left(M(M+1)\right)^2,...,\left(M(M+1)\right)^2}_{2M+1})\in\mathbb{R}^{(M+1)^2\times(M+1)^2}. \] Figure \ref{fig:fr3} illustrates the reconstructed function $T_{\alpha_*,M}^\beta y^\epsilon$. The regularization parameter $\alpha_*$ was obtained according to the balancing principle described above; it automatically yielded $\alpha_*=1.42\cdot 10^{-4}$, which agrees well with the value $10^{-4}$ from \cite{A2012} obtained manually. \begin{figure*} \includegraphics[width=\textwidth]{aposterpen.eps} \caption{Numerical illustration. The figure presents relative errors for 50 simulations of the data. The errors are plotted in ascending order for each of the discussed methods.} \label{fig:3} \end{figure*} In our last experiment we will illustrate an application of the \emph{a posteriori} rule (\ref{eq:c}) for choosing the penalization weights. As a test function $y^\epsilon$ we again consider the blurred function from the previous example, where we used the \emph{a priori} chosen penalization weights $\beta_k=k(k+1)$ corresponding to the Laplace-Beltrami operator. Now we are going to estimate the penalization weights using the \emph{a posteriori} strategy described in Subsection 4.3. Recall that we are looking for the minimizer (\ref{eq:c}) among the set of admissible kernels $\mathcal{K}$ consisting of the functions (\ref{eq:d}). 
This approach allows us to take into account exponential as well as polynomial growth of $\beta_k$. To find an approximate minimizer in (\ref{eq:c}) we have implemented the Random Search method \cite{M1965} over the set of parameters $(\lambda_1,\lambda_2)\in \left[0,5\right]\times\left[0,5\right]$. The method was run 10 times, with 10 random steps performed in each run. The mean values of the parameters $\lambda_1, \lambda_2$ over these runs were then taken as an approximate minimum point. As a result, the values $\lambda_1=0.32$, $\lambda_2=1.9$ were obtained. Figure \ref{fig:3} displays the relative errors in solving the problem (\ref{eq:e}), (\ref{eq:f}) for each of the 50 simulated noisy data sets, for each of two methods: regularization with the penalization weights $\beta_k=k(k+1)$, and regularization with \emph{a posteriori} chosen weights. From Figure \ref{fig:3} we see that the choice of the penalization weights according to the proposed \emph{a posteriori} choice rule can improve the accuracy of the reconstruction. \section*{Acknowledgments} The first and the third authors are supported by the Austrian Fonds Zur Forderung der Wissenschaftlichen Forschung (FWF), grant P25424. The work was initiated when the second author visited the Johann Radon Institute for Computational and Applied Mathematics (RICAM) within the Special Semester on Applications of Algebra and Number Theory. The second author acknowledges the support of the Australian Research Council.
https://arxiv.org/abs/2210.06424
Computing Persistence Diagram Bundles
Persistence diagram (PD) bundles, a generalization of vineyards, were introduced as a way to study the persistent homology of a set of filtrations parameterized by a topological space $B$. In this paper, we present an algorithm for computing piecewise-linear PD bundles, a wide class that includes many of the PD bundles that one may encounter in practice. Full details are given for the case in which $B$ is a triangulated surface, and we outline the generalization to higher dimensions and other cases.
\section{Introduction} Suppose one has a set $\{X(t)\}_{t \in \mathcal{T}}$ of point clouds parameterized by a topological space $\mathcal{T}$. For example, a time-varying point cloud is parameterized by $\mathcal{T} = \mathbb{R}$. At each $t \in \mathcal{T}$, one can construct a filtration (such as the Vietoris--Rips filtration) for $X(t)$ and compute its persistent homology (PH). More generally, one may have a \emph{fibered filtration function}, a set $\{f_t:\mathcal{K}^t \to \mathbb{R}\}_{t \in \mathcal{T}}$ of filtrations parameterized by $\mathcal{T}$. The associated \emph{persistence diagram (PD) bundle} is the space of persistence diagrams $PD(f_t)$ for all $t \in \mathcal{T}$. For example, a vineyard \cite{vineyards} is the special case in which $\mathcal{T}$ is an interval in $\mathbb{R}$, and the persistent homology transform \cite{pht} is a special case in which $\mathcal{T} = S^d$. For more examples, see \cite{pd_bundle}. \subsection{Contributions} I generalize the algorithm for computing vineyards \cite{vineyards} to an algorithm for efficiently computing PD bundles. We restrict to the case in which the PD bundle is \emph{piecewise linear}. This means that $\mathcal{T}$ is a simplicial complex, $\mathcal{K}^t \equiv \mathcal{K}$ for all $t \in \mathcal{T}$, and for every simplex $\sigma \in \mathcal{K}$, the function $f_{\sigma}(t) := f_t(\sigma)$ is linear on every simplex of $\mathcal{T}$. The restriction to piecewise-linear PD bundles allows us to take advantage of work in computational geometry such as the Bentley--Ottmann plane-sweep algorithm \cite{compgeo} for finding intersections of lines in a plane. An analogous piecewise-linear restriction was made for computing vineyards in \cite{vineyards}. The idea of the algorithm is to partition the base space $\mathcal{T}$ into polyhedrons and compute a PD ``template'' for each polyhedron. The partition is given by Proposition \ref{prop:polyhedrons_constant} (\cite{pd_bundle}).
For any $t \in \mathcal{T}$, the persistence diagram $PD(f_t)$ can be computed in $O(N)$ time from the template for the polyhedron that contains $t$, where $N$ is the number of simplices in $\mathcal{K}$. The piecewise-linear restriction is reasonable for most applications. For example, suppose that we have a point cloud $X(t, \mu)$ whose coordinates depend on time $t$ and a system-parameter value $\mu \in \mathbb{R}$. If the data set is obtained through either real-world data collection or numerical simulation, then we likely only know the coordinates of the point cloud $X(t, \mu)$ at a discrete set $\{t_i\}$ of time steps and a discrete set $\{\mu_j\}$ of system-parameter values. For every $(t_i, \mu_j)$, there is the filtration function $f_{(t_i, \mu_j)}:\mathcal{K} \to \mathbb{R}$ associated with the Vietoris--Rips filtration (or any other filtration) of $X(t_i, \mu_j)$. To obtain a fibered filtration function, let $\mathcal{T}$ be a triangulation of $[\min t_i, \max t_i] \times [\min \mu_j, \max \mu_j]$ whose vertices are $\{(t_i, \mu_j)\}_{ij}$. We can extend $\{f_{(t_i, \mu_j)}\}_{ij}$ to a fibered filtration function on all of $\mathcal{T}$ by defining the filtration values of a simplex $\sigma$ via linear interpolation of $\{f_{(t_i, \mu_j)}(\sigma)\}_{ij}$. By construction, the resulting PD bundle is piecewise linear. I give full implementation details only for the case in which $\dim(\mathcal{T}) \leq 2$, but I discuss the mathematical generalization to higher dimensions in Section \ref{sec:high}. Even when the base space $\mathcal{T}$ is $2$D, this is already an improvement over a vineyard. If $\dim(\mathcal{T}) = 2$, this means there are three parameters in total: the filtration parameter $r$ as well as two parameters that locally parameterize $\mathcal{T}$. In higher dimensions, we are limited by the availability of computational geometry algorithms for working with a partition of a space into polyhedrons.
When $\dim(\mathcal{T}) = 2$, the partition of $\mathcal{T}$ into polygons can be represented by a doubly-connected edge list (DCEL) data structure, and we can use a standard point-location algorithm to locate the polygon that contains a given point. However, to the best of my knowledge, no one has yet generalized these to arbitrarily high dimensions. \subsection{Related Work} PD bundles were introduced in \cite{pd_bundle} as a generalization of vineyards \cite{vineyards}. The algorithm I present in this paper for computing PD bundles is a generalization of the algorithm presented in \cite{vineyards}. In many ways, the algorithm in this paper is also reminiscent of the Rivet algorithm for computing fibered barcodes of 2D multiparameter persistence modules \cite{rivet}. \subsection{Organization} The paper proceeds as follows. I review the relevant background on persistent homology, vineyards, and PD bundles in Section \ref{sec:background}. I present my algorithm for computing piecewise-linear PD bundles in Section \ref{sec:compute}. Finally, I conclude and discuss possible directions for future research in Section \ref{sec:conclusion}. \section{Background}\label{sec:background} We begin by reviewing persistent homology, vineyards, and PD bundles; for more details on persistent homology, see \cite{edel_book, roadmap}; for more on vineyards, see \cite{vineyards}; and for an introduction to PD bundles, see \cite{pd_bundle}. \subsection{Persistent homology}\label{sec:PH} Let $\mathcal{K}$ be a simplicial complex. A \emph{filtration function} $f: \mathcal{K} \to \mathbb{R}$ is a real-valued function on $\mathcal{K}$ that is \emph{monotonic}, i.e., $f(\tau) \leq f(\sigma)$ if $\tau$ is a face of $\sigma$. Monotonicity guarantees that the $r$-sublevel sets $\mathcal{K}_r := \{\sigma \in \mathcal{K} \mid f(\sigma) \leq r\}$ are simplicial complexes. In persistent homology, we study how the homology of $\mathcal{K}_r$ changes as $r$ increases.
Let $\{r_i\}$ be the image of $f$, ordered such that $r_i < r_{i+1}$. These are the critical values at which $\mathcal{K}_r$ changes; for $r \in [r_i, r_{i+1})$, we have $\mathcal{K}_r = \mathcal{K}_{r_i}$. For every $i \leq j$, the inclusion $\iota^{i, j}: \mathcal{K}_{r_i} \hookrightarrow \mathcal{K}_{r_j}$ induces a map $\iota^{i, j}_*: H_*(\mathcal{K}_{r_i}, \mathbb{F}) \to H_*(\mathcal{K}_{r_j}, \mathbb{F})$ on homology. For the remainder of this paper, we compute homology over the field $\mathbb{F} = \mathbb{Z}/2\mathbb{Z}$. The \emph{$p$th-persistent homology} (PH) is the pair \begin{equation*} \Big( \{H_p(\mathcal{K}_{r_i}, \mathbb{F})\}_{1 \leq i \leq N}\,, \{\iota_*^{i, j}\}_{1 \leq i \leq j \leq N} \Big)\,. \end{equation*} A homology class is \emph{born} at $r_i$ if it is not in the image of $\iota_*^{i-1, i}$. The homology class \emph{dies} at $r_j > r_i$ if $j$ is the minimum index such that $\iota_*^{i, j}$ maps it to zero. (Such a $j$ may not exist; in that case, the homology class never dies.) The Fundamental Theorem of Persistent Homology yields compatible choices of bases for the vector spaces $H_p(\mathcal{K}_{r_i}, \mathbb{F})$. The generators in our definition of a persistence diagram, below, are the basis elements in the decomposition given by the Fundamental Theorem of Persistent Homology. Persistent homology is often visualized as a \emph{persistence diagram} (PD). The $p$th persistence diagram $PD_p(f)$ is a multiset of points in the extended plane $\overline{\mathbb{R}}^2$ that summarizes the $p$th persistent homology. It contains the diagonal (for technical reasons) and one point for every generator. If a generator is born at $b$ and dies at $d$, then the coordinates of the corresponding point in the PD are $(b, d)$, and if the generator is born at $b$ and never dies, then the coordinates of the point are $(b, \infty)$. One of the standard methods for computing PH is the algorithm introduced in \cite{RU}.
The algorithm requires a choice of compatible \emph{simplex ordering} $\alpha: \mathcal{K} \to \{1, \ldots, N\}$, where $N$ is the number of simplices in $\mathcal{K}$. We require that $\alpha(\sigma) < \alpha(\tau)$ if $f(\sigma) < f(\tau)$ or $\sigma$ is a face of $\tau$. A compatible ordering $\alpha$ exists because monotonicity ensures that $f(\sigma) \leq f(\tau)$ if $\sigma$ is a face of $\tau$. Let $D$ be the boundary matrix compatible with this ordering. That is, let $D$ be the matrix whose $(i, j)$th entry is \begin{equation*} D_{ij} = \begin{cases} 1\,, & \alpha^{-1}(i) \text{ is a face of } \alpha^{-1}(j) \\ 0 \,, & \text{otherwise.} \end{cases} \end{equation*} We decompose the boundary matrix $D$ into a matrix product $D = RU$ such that $U$ is upper triangular and $R$ is a binary matrix that is \emph{reduced}. A binary matrix $R$ is reduced if $low_R(j) \neq low_R(j')$ whenever $j \neq j'$ are the indices of nonzero columns in $R$. The quantity $low_R(j)$ is the row index of the last $1$ in column $j$ if column $j$ is nonzero and undefined if column $j$ is zero. An RU decomposition can be computed in $O(N^3)$ time \cite{RU, edel_book}. The function $low_R(j)$ is called the \emph{pairing function}. The authors of \cite{vineyards} showed that the pairing function $low_R(j)$ depends only on the boundary matrix $D$, and not on the particular reduced binary matrix $R$ in the decomposition $D = RU$. A pair of simplices $(\alpha^{-1}(i), \alpha^{-1}(j))$, for which $i = low_R(j)$, represents a persistent homology class. The \emph{birth simplex} $\alpha^{-1}(i)$ creates the homology class and the \emph{death simplex} $\alpha^{-1}(j)$ destroys the homology class. The two simplices in a pair have consecutive dimensions (i.e., if dim$(\alpha^{-1}(i)) = p$ then dim$(\alpha^{-1}(j)) = p + 1$). 
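The reduction behind an RU decomposition can be sketched compactly. The following illustrative Python is not the paper's implementation: it computes a reduced matrix $R$ and the pairing function by left-to-right column additions over $\mathbb{Z}/2\mathbb{Z}$, but does not maintain the matrix $U$ that the vineyard update rules require. Columns are encoded as sets of row indices (an assumption for illustration).

```python
def low(col):
    """Row index of the last 1 in a column; None for a zero column."""
    return max(col) if col else None

def reduce_boundary(D):
    """Left-to-right column reduction of a Z/2 boundary matrix.

    D is a list of columns, each a set of row indices that hold a 1.
    Returns the reduced matrix R and the (birth, death) index pairs.
    """
    R = [set(c) for c in D]
    pivot = {}   # low value -> index of the column that owns it
    pairs = []
    for j in range(len(R)):
        while R[j] and low(R[j]) in pivot:
            R[j] ^= R[pivot[low(R[j])]]   # add an earlier column (mod 2)
        if R[j]:
            pivot[low(R[j])] = j
            pairs.append((low(R[j]), j))
    return R, pairs

# Filled triangle, ordered: 3 vertices, 3 edges, 1 two-cell.
D = [set(), set(), set(), {0, 1}, {0, 2}, {1, 2}, {3, 4, 5}]
R, pairs = reduce_boundary(D)
# pairs: (1, 3) and (2, 4) are H0 deaths, (5, 6) kills the 1-cycle;
# simplex 0 stays unpaired (the essential H0 class).
```

Each column addition strictly decreases the column's low value, so the inner loop terminates; the whole reduction runs in $O(N^3)$ time in the worst case, matching the bound quoted above.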
If dim$(\alpha^{-1}(i)) = p$ and dim$(\alpha^{-1}(j)) = p + 1$, then a point with coordinates $(f(\alpha^{-1}(i)), f(\alpha^{-1}(j)))$ is added to the $p$th persistence diagram. We refer to $f(\alpha^{-1}(i))$ as its \emph{birth} and $f(\alpha^{-1}(j))$ as its \emph{death}. Some simplices are not paired. If $i \neq low_R(j)$ for all $j$, then the simplex $\alpha^{-1}(i)$ is a birth simplex for a homology class that never dies. Its birth is $f(\alpha^{-1}(i))$ and its death is $\infty$. If $\dim(\alpha^{-1}(i)) = p$, then a point with coordinates $(f(\alpha^{-1}(i)), \infty)$ is added to the $p$th persistence diagram. \subsection{Vineyards}\label{sec:vineyards} Let $\mathcal{K}$ be a simplicial complex. A \emph{$1$-parameter filtration function} on $\mathcal{K}$ is a function $f: \mathcal{K} \times I \to \mathbb{R}$, where $I = [t_0, t_1]$ is an interval in $\mathbb{R}$, such that $f(\cdot, t)$ is a filtration function on $\mathcal{K}$ for all $t \in I$. For each $t \in I$, the $r$-sublevel sets $\mathcal{K}_r^t = \{\sigma \in \mathcal{K} \mid f(\sigma, t) \leq r\}$ are a filtration of $\mathcal{K}$. The set $\{\{\mathcal{K}_r^t\}_{r \in \mathbb{R}}\}_{t \in I}$ is a set of filtrations parameterized by $t \in I$. For each $t \in I$, one can compute the persistence diagram $PD(f(\cdot, t))$. The associated \emph{vineyard} is the 1-parameter set $\{PD(f(\cdot, t))\}_{t \in I}$ of persistence diagrams. We visualize the vineyard in $\mathbb{R}^2 \times I$ as a continuous stack of PDs (see Figure~\ref{fig:vineyard}). The points in the PDs trace out curves with time; these curves are the \emph{vines}. \begin{figure} \centering \includegraphics[width = .4\textwidth]{Figures/vineyard_example.png} \caption{An illustration of a vineyard. There is a persistence diagram for each time $t$.
(This figure is a slightly modified version of a figure that appeared originally in \cite{vineyard_figure}, which is available under a Creative Commons license.)} \label{fig:vineyard} \end{figure} An algorithm for computing vineyards is given by \cite{vineyards}, and we review it here. As in Section \ref{sec:PH}, we define a simplex ordering function $\alpha: \mathcal{K} \times I \to \{1, \ldots, N\}$ such that $\alpha(\sigma, t) < \alpha(\tau, t)$ if $f(\sigma, t) < f(\tau, t)$ or $\sigma$ is a face of $\tau$. Let $D(t)$ be the boundary matrix compatible with the ordering at time $t$. There is a corresponding pairing function $low_R(j, t)$. The simplex ordering is constant on intervals $J \subseteq I$ for which we have that if $f(\sigma, t) \leq f(\tau, t)$ for some $t \in J$, then $f(\sigma, s) \leq f(\tau, s)$ for all $s \in J$. On an interval $J$ on which the simplex ordering is constant, we let $\alpha(\cdot, J) : \mathcal{K} \to \{1, \ldots, N\}$ denote the simplex ordering in $J$. If the simplex ordering is constant in $J$, then so is $D(t)$, and thus so is the pairing function $low_R(j, t)$. We denote the pairing function in $J$ by $low_R(\cdot, J)$. In order to compute the pairing function for all $t \in I$, we only need to compute the pairing function once per interval $J$ on which the simplex ordering is constant. For all $t \in J$ and pairs $i, j$ such that $i = low_R(j, J)$, the $p$th persistence diagram at time $t$ has a point with coordinates $(f(\alpha^{-1}(i, J)), f(\alpha^{-1}(j, J)))$. For all $i$ such that $i \neq low_R(j, J)$ for all $j$, the $p$th persistence diagram at time $t$ has a point with coordinates $(f(\alpha^{-1}(i, J)), \infty)$. The algorithm for computing a vineyard can be broken down into three steps: \begin{enumerate} \item {\bf Compute the transposition times:} Compute the times $t$ at which there is a change in the relative order of a pair $(\sigma, \tau)$ of simplices. 
This means there are intervals $J_1$, $J_2$ with $J_1 \cap J_2 = \{t\}$ such that the simplex ordering is constant on each $J_i$ and $(\alpha(\sigma, J_1) - \alpha(\tau, J_1))(\alpha(\sigma, J_2) - \alpha(\tau, J_2)) < 0$. \item {\bf Compute the pairing function:} For the boundary matrix $D(t_0)$ at initial time $t = t_0$, compute an RU decomposition $D(t_0) = R(t_0)U(t_0)$, where $R(t_0)$ is a reduced binary matrix and $U(t_0)$ is upper triangular. Using the initial pairing function $low_R(\cdot, t_0)$, we compute the birth and death simplices for the persistent homology of the initial filtration $f(\cdot, t_0)$. If the $i$th and $(i+1)$st simplices $\alpha^{-1}(i, t)$ and $\alpha^{-1}(i+1, t)$ are transposed at time $t$, we update the $RU$ decomposition by following the case work in \cite{vineyards}. (Note that if there is more than one pair $(\sigma, \tau)$ of simplices whose relative order changes at $t$, then the permutation can be decomposed into a sequence of such transpositions.) At worst, updating $R(t)$ requires adding one column to another and adding one row to another---similarly for $U(t)$. The addition of columns and rows is an $O(N)$ operation, but in experiments, the authors of \cite{vineyards} found that updating $R(t)$ and $U(t)$ can be done in approximately constant time if one uses the sparse matrix representations that are given in \cite{vineyards}. After an update of the $RU$ decomposition, we update the birth and death simplices. At most two (birth, death) simplex pairs are updated, and these updates occur in constant time. This updating procedure yields the birth and death simplices for the filtration function $f(\cdot, t)$. \item {\bf Evaluate the PD at each time:} At time $t$, let $J$ be the interval such that $t \in J$ and the simplex ordering is constant in $J$. 
For every (birth, death) simplex pair $(\sigma_b, \sigma_d)$ for the interval $J$, the diagram $PD_p(f(\cdot, t))$ contains the point $(f(\sigma_b, t), f(\sigma_d, t))$ if $\dim(\sigma_b) = p$. For every $p$-dimensional simplex $\sigma_b$ that is unpaired in $J$, the diagram $PD_p(f(\cdot, t))$ contains the point $(f(\sigma_b, t), \infty)$. \end{enumerate} A special type of vineyard is a \emph{piecewise-linear vineyard}. If we are only given $f(\sigma, t_i)$ at discrete time steps $t_i$, then for all $i$ we extend $f(\sigma, t)$ to $t \in [t_i, t_{i+1}]$ by linear interpolation. In this case, one can perform step (1) of the algorithm above by using the Bentley--Ottmann plane-sweep algorithm \cite{compgeo}. This is because computing when (if) two simplices $\sigma, \tau$ get transposed in $[t_i, t_{i+1}]$ is equivalent to finding the intersection (if it exists) between the lines \begin{align*} y = \frac{f(\sigma, t_{i+1}) - f(\sigma, t_i)}{t_{i+1} - t_i}(t - t_i) + f(\sigma, t_i)\,, \\ y = \frac{f(\tau, t_{i+1}) - f(\tau, t_i)}{t_{i+1} - t_i}(t - t_i) + f(\tau, t_i)\,. \end{align*} \subsection{PD bundles}\label{sec:pd_bundle} PD bundles were introduced in \cite{pd_bundle} as a generalization of vineyards in which a set of filtrations is parameterized by a base space $\mathcal{T}$. A vineyard is the special case in which $\mathcal{T}$ is an interval in $\mathbb{R}$. \begin{definition} A \emph{fibered filtration function} is a set $\{f_t: \mathcal{K}^t \to \mathbb{R}\}_{t \in \mathcal{T}}$ of filtration functions parameterized by a topological space $\mathcal{T}$. When $\mathcal{K}^t \equiv \mathcal{K}$ for all $t \in \mathcal{T}$, we define $f(\sigma, t) := f_t(\sigma)$ for all $\sigma \in \mathcal{K}$ and $t \in \mathcal{T}$. \end{definition} \begin{definition} Let $\{f_t: \mathcal{K}^t \to \mathbb{R}\}_{t \in \mathcal{T}}$ be a fibered filtration function. The topological space $\mathcal{T}$ is the \emph{base space}.
The space $E := \{(t, z) \mid z \in PD_p(f_t)\,, t \in \mathcal{T}\}$ is the $p$th \emph{total space}. We give $E$ the subspace topology inherited from the inclusion $E \hookrightarrow \mathcal{T} \times \overline{\mathbb{R}}^2$. The associated \emph{$p$th PD bundle} is the triple $(E, \mathcal{T}, \pi)$, where $\pi$ is the projection from $E$ to $\mathcal{T}$. \end{definition} In \cite{vineyards}, it was computationally easier to work with a piecewise-linear vineyard, which is a vineyard for a fibered filtration function $f: \mathcal{K} \times [t_0, t_1] \to \mathbb{R}$ in which $f(\sigma, \cdot)$ is piecewise linear for all $\sigma \in \mathcal{K}$. (See the discussion at the end of Section \ref{sec:vineyards}.) Below, we define an analog of piecewise-linear vineyards. \begin{definition}[Piecewise-linear PD bundles] Let $\{f_t : \mathcal{K}^t \to \mathbb{R}\}_{t \in \mathcal{T}}$ be a fibered filtration function in which $\mathcal{K}^t \equiv \mathcal{K}$. As before, we define $f(\sigma, t) := f_t(\sigma)$ for all $\sigma \in \mathcal{K}$ and $t \in \mathcal{T}$. If $\mathcal{T}$ is a simplicial complex and $f(\sigma, \cdot)$ is linear on each simplex of $\mathcal{T}$ for all simplices $\sigma \in \mathcal{K}$, then $f$ is a \emph{piecewise-linear fibered filtration function.} The resulting PD bundle is a \emph{piecewise-linear PD bundle}. \end{definition} \noindent For example, in the introduction we considered a point cloud $X(t, \mu)$ whose coordinates depended on time $t \in \mathbb{R}$ and system-parameter value $\mu \in \mathbb{R}$. Given only the coordinates of the point cloud at a discrete set $\{t_i\}$ and a discrete set $\{\mu_j\}$, we had a filtration function $f_{(t_i, \mu_j)}$ for every $(t_i, \mu_j)$. We extended this to a piecewise-linear fibered filtration function on $\mathcal{T} = [\min t_i, \max t_i] \times [\min \mu_j, \max \mu_j]$ via linear interpolation of the filtration values for each simplex $\sigma \in \mathcal{K}$. 
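The linear interpolation used above can be made concrete for a single triangle of $\mathcal{T}$ via barycentric coordinates. The helper names, vertex coordinates, and filtration values below are illustrative assumptions, not from the paper:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in the triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return l1, l2, 1.0 - l1 - l2

def interp_filtration(p, tri, values):
    """Linearly interpolate f(sigma, .) at p from its values at the
    three vertices of the triangle tri = (a, b, c)."""
    l1, l2, l3 = barycentric(p, *tri)
    return l1 * values[0] + l2 * values[1] + l3 * values[2]

# One triangle of the base space, with vertices at grid points (t_i, mu_j):
tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
# Filtration values of a fixed simplex sigma at those three grid points:
vals = (2.0, 4.0, 6.0)
```

Applying `interp_filtration` per simplex $\sigma$ and per triangle of $\mathcal{T}$ is exactly the piecewise-linear extension: it agrees with the given values at the vertices and is linear on each triangle.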
More generally, suppose we are given a fibered filtration function $f: \mathcal{K} \times \prod_{i=1}^m \mathcal{I}_i \to \mathbb{R}$, where each $\mathcal{I}_i$ is a finite subset of $\mathbb{R}$, and we wish to extend $f$ to a fibered filtration function whose base is $\mathcal{T} = \prod_{i=1}^m [\min \mathcal{I}_i, \max \mathcal{I}_i]$ (e.g., in the example given above, we had $\mathcal{I}_1 = \{t_i\}$ and $\mathcal{I}_2 = \{\mu_j\}$). First, we construct a triangulation $\mathcal{T}$ (i.e., an $m$-dimensional simplicial complex) of $\prod_{i=1}^m [\min \mathcal{I}_i, \max \mathcal{I}_i]$ whose set of vertices is $\prod_{i=1}^m \mathcal{I}_i$. (See Appendix \ref{sec:triangulation}.) Then, one can extend $f$ to a piecewise-linear fibered filtration function $f: \mathcal{K} \times \mathcal{T} \to \mathbb{R}$ by linearly interpolating $f(\sigma, \cdot)$ on each simplex $T \in \mathcal{T}$ for all simplices $\sigma \in \mathcal{K}$. In \cite{pd_bundle}, I showed that if $f: \mathcal{K} \times \mathcal{T} \to \mathbb{R}$ is a piecewise-linear fibered filtration function on an $n$-dimensional simplicial complex $\mathcal{T}$, then $\mathcal{T}$ can be partitioned into $n$-dimensional polyhedrons such that within each polyhedron $P$, there is a ``template'' from which $PD_p(f(\cdot, t))$ can be computed for all $t \in P$. The template is a list of (birth, death) simplex pairs $(\sigma_b, \sigma_d)$. To make this more precise, we define \begin{equation*} I(\sigma, \tau) := \{t \in \mathcal{T} \mid f(\sigma, t) = f(\tau, t)\} \,. \end{equation*} For every $n$-simplex $T$ in $\mathcal{T}$, the intersection $I(\sigma, \tau) \cap T$ is $\emptyset$, $T$, a vertex of $T$, or the intersection of an $(n-1)$-dimensional hyperplane with $T$. 
The set \begin{equation}\label{eq:partition} \bigcup_{T \in \mathcal{T}} \partial T \cup \{I(\sigma, \tau) \cap T \mid I(\sigma, \tau) \cap T \text{ is } (n-1)\text{-dimensional}\} \end{equation} partitions $\mathcal{T}$ into polyhedrons, where $T$ denotes an $n$-simplex of $\mathcal{T}$ and $\partial T$ denotes the boundary of $T$. As in Section \ref{sec:PH}, we define a simplex ordering function $\alpha : \mathcal{K} \times \mathcal{T} \to \{1, \ldots, N\}$ such that $\alpha(\sigma, t) < \alpha(\tau, t)$ if $f(\sigma, t) < f(\tau, t)$ or $\sigma \subseteq \tau$. If there is a $t \in \mathcal{T}$ such that $f(\sigma, t) = f(\tau, t)$ and neither $\sigma \subseteq \tau$ nor $\tau \subseteq \sigma$, then the ordering $\alpha(\cdot, t)$ is not uniquely defined. For consistency over the base space, we fix some ``intrinsic ordering'' $\beta: \mathcal{K} \to \{1, \ldots, N\}$ such that $\beta(\sigma) < \beta(\tau)$ if $\sigma \subseteq \tau$. We now fix $\alpha: \mathcal{K} \times \mathcal{T} \to \{1, \ldots, N\}$ to be the unique simplex ordering function such that $\alpha(\sigma, t) < \alpha(\tau, t)$ if $f(\sigma, t) < f(\tau, t)$ or $\beta(\sigma) < \beta(\tau)$. \begin{proposition}[\cite{pd_bundle}]\label{prop:polyhedrons_constant} If $f: \mathcal{K} \times \mathcal{T} \to \mathbb{R}$ is a piecewise-linear fibered filtration function, then the set in Equation \ref{eq:partition} partitions $\mathcal{T}$ into polyhedra $P$ on which the simplex ordering is constant (i.e., $\alpha(\sigma, \cdot)\vert_P$ is constant for all $\sigma \in \mathcal{K}$). Therefore the set of (birth, death) simplex pairs for $f$ is constant within each $P$.
\end{proposition} \section{Computing piecewise-linear PD bundles}\label{sec:compute} The algorithm for computing a piecewise-linear PD bundle is split into three main steps: \begin{enumerate} \item {\bf Compute the polyhedrons:} Compute the polyhedrons on which the simplex ordering (and thus pairing function) is constant (see Proposition \ref{prop:polyhedrons_constant}). For every pair of adjacent polyhedrons, there is a permutation $\pi$ that relates the differing simplex orders in each polyhedron. We compute and record the list of simplex pairs $(\sigma, \tau)$ such that $\pi$ changes the relative positions of $\sigma$ and $\tau$. In the ``generic case'' (defined below at the beginning of Section \ref{sec:findpolygons}), $\pi$ is the transposition of a single pair $(\sigma, \tau)$ of simplices with consecutive indices in the simplex ordering. \item {\bf Compute the pairing function:} Choose a point $t_* \in \mathcal{T}$. Compute the simplex ordering at $t_*$, the boundary matrix $D(t_*)$, and an RU decomposition $D(t_*) = R(t_*)U(t_*)$, where $R(t_*)$ is a reduced binary matrix and $U(t_*)$ is upper triangular. We traverse the polyhedrons, starting with the polyhedron that contains $t_*$. As we move from one polyhedron to the next, we perform the simplex permutation $\pi$ computed above. We update the $RU$ decomposition and pairing function via the update rules that are used when computing vineyards (see \cite{vineyards}). In each polyhedron, we store its pairing function (i.e., the (birth, death) simplex pairs $(\sigma_b, \sigma_d)$ and also the unpaired simplices $\sigma_b$, which are birth simplices for homology classes that never die). \item {\bf Query the PD bundle:} To obtain the $p$th persistence diagram $PD_p(f(\cdot, t))$ associated with a point $t \in \mathcal{T}$, first locate the polyhedron $P$ that contains $t$.
For each pair $(\sigma_b, \sigma_d)$ of simplices in the pairing function for $P$, the diagram $PD_p(f(\cdot, t))$ has a point with coordinates $(f(\sigma_b, t), f(\sigma_d, t))$ if $\dim(\sigma_b) = p$. For every $p$-dimensional simplex $\sigma_b$ that is unpaired in $P$, the diagram $PD_p(f(\cdot, t))$ contains the point $(f(\sigma_b, t), \infty)$. \end{enumerate} Steps 1--3 are directly analogous to steps 1--3 in the algorithm for computing vineyards that was presented in Section \ref{sec:vineyards}. In what follows, I elaborate on each step of the algorithm above. We focus on the case in which $\mathcal{T}$ is 2D. \subsection{Special case: $\mathcal{T}$ is $2$-dimensional }\label{sec:2D} Let $\mathcal{K}$ be a simplicial complex, let $f: \mathcal{K} \times \mathcal{T} \to \mathbb{R}$ be a piecewise-linear fibered filtration function, and suppose $\mathcal{T}$ is $2$D. If $T$ is a triangle in $\mathcal{T}$, then $I(\sigma, \tau) \cap T$ is one of $\emptyset$, $T$, a vertex of $T$, or a line segment whose endpoints are on $\partial T$. In Figure \ref{fig:intersection}, we show a few possible cases for $I(\sigma,\tau)$. The set in Equation \ref{eq:partition} is a set $L$ of line segments, and the planar subdivision induced by $L$ is a \emph{line arrangement} $\mathcal{A}(L)$. For example, see Figure \ref{fig:polygon partition}. 
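In generic position (where the difference $g = f(\sigma, \cdot) - f(\tau, \cdot)$ is nonzero at the vertices of $T$), the segment $I(\sigma, \tau) \cap T$ is the zero set of the linear function $g$ on $T$, so it can be found from the vertex values alone by locating sign changes along the edges. A minimal illustrative sketch (names and encodings are assumptions, not from the paper):

```python
def crossing_segment(tri, g_vals):
    """Zero set of a linear function g on a triangle, given its values
    at the three vertices.  Returns the two segment endpoints, or None
    if g has the same (nonzero) sign at all vertices."""
    pts = []
    for i in range(3):
        j = (i + 1) % 3
        gi, gj = g_vals[i], g_vals[j]
        if gi * gj < 0:                     # sign change along edge (i, j)
            s = gi / (gi - gj)              # g((1-s)*v_i + s*v_j) = 0
            (xi, yi), (xj, yj) = tri[i], tri[j]
            pts.append((xi + s * (xj - xi), yi + s * (yj - yi)))
    return tuple(pts) if len(pts) == 2 else None

# g = f(sigma, .) - f(tau, .) on one triangle of the base space:
tri = ((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))
seg = crossing_segment(tri, (-1.0, 1.0, 1.0))   # sign change at one vertex
```

Since $g$ is linear on $T$, it changes sign along either exactly two edges or none, which is why the function returns either a full segment or `None`.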
\begin{figure} \centering \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection1.png}\label{fig:intersection 1}} \hspace{5mm} \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection2.png}\label{fig:intersection 2}} \hspace{5mm} \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection3.png}} \\ \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection4.png} \label{fig:triangle_intersection}} \hspace{5mm} \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection5.png}} \hspace{5mm} \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection6.png}} \caption{A few possible cases for the set $I(\sigma,\tau)$, in pink. The black lines are the 1-skeleton of $\mathcal{T}$.} \label{fig:intersection} \end{figure} \begin{figure} \centering \includegraphics[width = .5\textwidth]{Figures/polygon_partition_2.png} \caption{A line arrangement $\mathcal{A}(L)$ that represents the partition of a triangulated base space $\mathcal{T}$ into polygons (see Proposition \ref{prop:polyhedrons_constant}). Within each polygon, the simplex ordering is constant. (This figure appeared originally in \cite{pd_bundle}.)} \label{fig:polygon partition} \end{figure} For ease of exposition, we will make two genericity assumptions for the remainder of Section \ref{sec:2D}. The idea of the algorithm is no different in the general case, but it requires some technical modifications, which I discuss in Appendix \ref{sec:technical}. The assumptions are as follows: \begin{enumerate} \item For all distinct simplices $\sigma, \tau \in \mathcal{K}$ and all vertices $v \in \mathcal{T}$, we have that $f(\sigma, v) \neq f(\tau, v)$. This implies that for all triangles $T \in \mathcal{T}$, the intersection $I(\sigma, \tau) \cap T$ is either $\emptyset$ or a line segment whose endpoints are not vertices of $T$. For examples, see Figures \ref{fig:intersection 1} and \ref{fig:intersection 2}.
\item For all distinct simplices $\sigma_1, \tau_1, \sigma_2, \tau_2 \in \mathcal{K}$ and every triangle $T \in \mathcal{T}$ such that $I(\sigma_1, \tau_1) \cap T$ and $I(\sigma_2, \tau_2) \cap T$ are nonempty, the line segments $I(\sigma_1, \tau_1) \cap T$ and $I(\sigma_2, \tau_2) \cap T$ do not share any endpoints. \end{enumerate} \subsubsection{Computing the polygons}\label{sec:findpolygons} For a piecewise-linear vineyard, computing the intervals on which the simplex ordering is constant can be reduced to finding the intersections between the piecewise-linear functions $y = f(\sigma, t)$ and $y = f(\tau, t)$ for all pairs $(\sigma, \tau)$ of simplices in $\mathcal{K}$. Likewise for a piecewise-linear PD bundle, computing the polygons on which the simplex ordering is constant can be reduced to finding the intersections $I(\sigma, \tau)$ for all pairs $(\sigma, \tau)$ of simplices. \begin{definition}\label{def:swap} Let $\ell$ be a line segment whose endpoints are on the boundary of a triangle $T$ in $\mathcal{T}$. The line segment $\ell$ partitions $T$ into polygons $Q_1$ and $Q_2$. We say that simplices $\sigma, \tau \in \mathcal{K}$ \emph{swap along $\ell$} if $(\alpha(\sigma, t_1) - \alpha(\tau, t_1))(\alpha(\sigma, t_2) - \alpha(\tau, t_2)) < 0$ for all $t_1 \in Q_1$ and $t_2 \in Q_2$ (i.e., $\sigma$ and $\tau$ have different relative orders in $Q_1$ and $Q_2$). \end{definition} \noindent Under the genericity assumptions above, a pair $(\sigma, \tau)$ swaps along $\ell$ if and only if $\ell = I(\sigma, \tau) \cap T$ for a triangle $T \in \mathcal{T}$. (See Lemma \ref{lem:linesegs} for a discussion of the general case.) We wish to compute the line arrangement $\mathcal{A}(L)$, where $L$ is the set of line segments defined by Equation \ref{eq:partition}. (See Figure \ref{fig:polygon partition}.) The polygons of $\mathcal{A}(L)$ are the polygons on which the simplex ordering is constant.
We store $\mathcal{A}(L)$ using a doubly-connected edge list (DCEL) data structure \cite{compgeo}. A DCEL is a standard data structure for storing a polygonal subdivision of the plane. We compute $\mathcal{A}(L)$ using the following algorithm, illustrated in Figure \ref{fig:step1}: \begin{figure} \centering \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step1.png}\label{fig:step1.1}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step2.png}\label{fig:step1.2}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step3.png}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step4.png}} \\ \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step5.png}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step6.png}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step7.png}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step8.png}} \caption{Computing the polygons. (A) The line arrangement $\mathcal{A}(L)$ is initialized to represent the triangulated base space $\mathcal{T}$, which in this case consists of two triangles. (B) We find the vertices $v$ of $\mathcal{A}(L)$ that lie on the $1$-skeleton of $\mathcal{T}$. (C)--(H) We incrementally add the line segments of the form $I(\sigma, \tau) \cap T$ for a triangle $T$ in $\mathcal{T}$. The endpoints of a given line segment are a pair $(v, w)$ of vertices in (B).} \label{fig:step1} \end{figure} \begin{enumerate} \item We initialize $\mathcal{A}(L)$ so that it represents the triangulation $\mathcal{T}$. (See Figure \ref{fig:step1.1}.) In addition to the usual data that a DCEL stores, we enumerate the triangles in $\mathcal{T}$ and every half edge $e$ stores the index of the triangle in $\mathcal{T}$ that $e$ is on the boundary of. \item Let $\gamma$ be a path through the $1$-skeleton of $\mathcal{T}$ that traverses each edge at least once. 
For example, suppose $\mathcal{T}$ is a triangulation of a grid, as in Figure \ref{fig:triangulation grid}. A path $\gamma$ through $\mathcal{T}$ is shown in Figure \ref{fig:triangulation path}. \begin{figure} \centering \subfloat[]{\includegraphics[width = .35\textwidth]{Figures/triangulation_grid.png}\label{fig:triangulation grid}} \subfloat[]{\includegraphics[width = .35\textwidth]{Figures/triangulation_path_rainbow.png}\label{fig:triangulation path}} \caption{(A) A triangulated base space $\mathcal{T}$. (B) A path $\gamma$ that traverses each edge of the $1$-skeleton of $\mathcal{T}$, starting at the bottom-left vertical edge (violet) and ending at the top-right vertical edge (red).} \end{figure} \item The restriction of $f$ to the path $\gamma$ is a $1$-parameter filtration function (the input to a vineyard). We use the Bentley--Ottmann algorithm \cite{compgeo} to find the points $v$ on $\gamma$ at which pairs of simplices swap (i.e., the relative order of a pair of simplices changes). See Figure \ref{fig:step1.2}. The points $v$ are the points on the $1$-skeleton of $\mathcal{T}$ at which $f(\sigma, v) = f(\tau, v)$ for some $\sigma \neq \tau$. When we find a vertex $v$, we add it to the DCEL that represents $\mathcal{A}(L)$; we do this by splitting the edge in the DCEL that the vertex lies on. For each triangle $T \in \mathcal{T}$, we maintain a dictionary whose keys are pairs $(\sigma, \tau)$ of simplices. The value associated with $(\sigma, \tau)$ is the list of vertices $v$ on $\partial T$ at which $\sigma$ and $\tau$ swap. This list is updated as we find vertices; at the end of the Bentley--Ottmann algorithm, every $(\sigma, \tau)$ in the dictionary for $T$ is associated with a pair $(v, w)$ of vertices that lie on $\partial T$. \item For each triangle $T \in \mathcal{T}$ and for each pair $(\sigma, \tau)$ of simplices in its dictionary, there is an associated pair $(v, w)$ of vertices that lie on the boundary $\partial T$.
The vertices $v, w$ must be the endpoints of a line segment $\ell \in L$ along which $(\sigma, \tau)$ swap, so for each $(v, w)$, we add a line segment with endpoints $(v, w)$ to the DCEL that represents $\mathcal{A}(L)$. (See Figures \ref{fig:step1}C--H.) There are many standard algorithms for doing this: one example is the incremental algorithm (see e.g., Chapter 8.3 of \cite{compgeo}), in which the line segments are incrementally added one at a time. The worst-case running time of the incremental algorithm is $O(n_T^2)$, where $n_T$ is the number of line segments in triangle $T$. The algorithm also requires $O(n_T^2)$ space. The incremental algorithm can be parallelized over the triangles $T \in \mathcal{T}$. \end{enumerate} \noindent In Figure \ref{fig:step1}, we illustrate the algorithm for computing the polygons. Adding a single line segment to $\mathcal{A}(L)$ typically creates more than one new edge in $\mathcal{A}(L)$. For example, in Figure \ref{fig:step1}H, adding the last line segment creates two new edges and splits an existing edge into two edges. As we add line segments to $\mathcal{A}(L)$, we keep track of the pairs $(\sigma, \tau)$ of simplices that correspond to each edge. More precisely, if edge $e$ is a subset of a line segment for the pair $(\sigma, \tau)$ of simplices, then $e$ stores a reference to the pair $(\sigma, \tau)$. We add the reference to $(\sigma, \tau)$ at the time that edge $e$ is created in $\mathcal{A}(L)$. The two polygons adjacent to $e$ have simplex orderings that are related via the transposition of $\sigma$ and $\tau$. \subsubsection{Computing the pairing function}\label{sec:simplexpairs} Let $G$ be the dual graph to the line arrangement $\mathcal{A}(L)$. The graph $G$ contains a vertex $v_P$ for every polygon $P$ of $\mathcal{A}(L)$ and an edge between two vertices if the corresponding polygons are adjacent. Next, we compute a path $\Gamma$ that visits every vertex of $G$ at least once. 
For example, see Figure \ref{fig:path}. One way to obtain such a path is the following algorithm, used by Rivet \cite{rivet}: We first compute a minimum spanning tree $S$ for $G$ via an algorithm such as Prim's algorithm or Kruskal's algorithm \cite{mst}. By performing a depth-first search of $S$, we obtain a path $\Gamma$ that visits every edge of $S$ at most twice. The path $\Gamma$ may not be of minimal length, but it has the following guarantee: If $\Gamma^*$ is a path of minimal length that visits every vertex of $G$ at least once, then $\text{length}(\Gamma) \leq 2 \times (\text{number of edges in } S) \leq 2 \times \text{length}(\Gamma^*)$. \begin{figure} \centering \includegraphics[width = .3\textwidth]{pd_bundle_path.png} \caption{A path $\Gamma$ that visits every polygon in the line arrangement $\mathcal{A}(L)$.} \label{fig:path} \end{figure} At the first vertex $v_P$ of $\Gamma$, we compute the simplex ordering in polygon $P$, the RU decomposition for the boundary matrix in $P$, and the (birth, death) simplex pairs in $P$. The polygon $P$ stores a reference to its (birth, death) simplex pairs. To store the current simplex ordering, every simplex stores a reference to its index in the current ordering (initialized to the ordering in $P$). We traverse the path $\Gamma$. As we walk from one polygon $P_1$ to the next polygon $P_2$ by crossing an edge $e$ in $\mathcal{A}(L)$, we update the simplex ordering, the RU decomposition, and the (birth, death) simplex pairs. To update the simplex ordering, we recall that edge $e$ stores a reference to the simplex pair $(\sigma, \tau)$ such that the simplex orderings in $P_1$ and $P_2$ are related via the transposition of $\sigma$ and $\tau$. We update the order by swapping the indices that $\sigma$ and $\tau$ store. To update the RU decomposition and the (birth, death) simplex pairs, we apply the update algorithm of \cite{vineyards}. In $P_2$, we store the new (birth, death) simplex pairs.
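The path-construction step can be sketched as follows. Since the dual graph $G$ is unweighted, any spanning tree serves as a minimum spanning tree, so this sketch (with illustrative names, not the paper's code) uses a depth-first-search tree directly. The walk records each vertex on entry and again on every return from a child, so it traverses each tree edge at most twice.

```python
def tree_walk(adj, root):
    """Walk of a DFS spanning tree of the graph `adj` (vertex -> set of
    neighbours), starting at `root`. Each vertex is recorded on entry and
    on every return from a child, so the walk visits every vertex and uses
    each tree edge at most twice."""
    walk, seen = [], set()

    def dfs(u):
        seen.add(u)
        walk.append(u)
        for v in sorted(adj[u]):  # deterministic order for reproducibility
            if v not in seen:
                dfs(v)
                walk.append(u)  # record u again when backtracking

    dfs(root)
    return walk
```

For a path graph 0--1, 0--2, 2--3 rooted at 0, the walk is `[0, 1, 0, 2, 3, 2, 0]`: seven vertices for a three-edge tree, matching the $2 \times (\text{number of edges})$ bound.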
\subsubsection{Querying the PD bundle} We consider the scenario in which a user would like to query many points $t \in \mathcal{T}$ in real time and see the $p$th persistence diagram $PD_p(f(\cdot, t))$ associated with each point $t$ that is queried. To compute the $p$th persistence diagram $PD_p(f(\cdot, t))$ associated with a given $t$, we first identify the polygon $P$ of $\mathcal{A}(L)$ that contains $t$. This is a well-studied problem in computational geometry; it is known as the \emph{point-location problem}. When one is planning to perform many point-location queries on the same line arrangement $\mathcal{A}(L)$ (i.e., if one is querying many points $t \in \mathcal{T}$), the standard strategy is to precompute a data structure so that the subsequent point-location queries can be done efficiently. There are many strategies for doing this (see e.g., Chapter 38 in \cite{handbook}). One method is the slab-and-persistence method \cite{slab}, in which one precomputes a ``persistent search tree'' for $\mathcal{A}(L)$. The slab-and-persistence method takes $O(k \log k)$ preprocessing time, $O(k)$ space, and $O(\log k)$ time per query, where $k$ is the number of vertices in $\mathcal{A}(L)$.\footnote{By Euler's formula, the number of edges in $\mathcal{A}(L)$ is bounded above by $3k - 6$ and the number of faces is bounded above by $2k - 4$. Therefore, the number of vertices, edges, and faces are all $O(k)$ \cite{compgeo}.} If the triangulation $\mathcal{T}$ is such that one can locate the triangle $T$ that contains the point $t$ in $O(1)$ time, then one can reduce the computational complexity by constructing separate persistent search trees for each triangle in $\mathcal{T}$. For example, if $\mathcal{T}$ is a triangulation of the form in Figure \ref{fig:triangulation grid}, then one can locate the triangle $T$ in constant time by examining the coordinates of $t$.
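For concreteness, here is a sketch of such a constant-time lookup. It assumes a grid triangulation in which each square cell of side length $h$ is split by its bottom-left-to-top-right diagonal; the actual triangulation fixes its own convention, so the names and conventions below are illustrative.

```python
import math

def locate_triangle(t, x0=0.0, y0=0.0, h=1.0):
    """Locate, in O(1) time, the grid triangle containing the point t = (x, y)
    for a triangulation of a grid with origin (x0, y0) and cell size h, in
    which each cell is split by its bottom-left-to-top-right diagonal (an
    assumed convention). Returns (column, row, which half of the cell)."""
    x, y = t
    col = math.floor((x - x0) / h)
    row = math.floor((y - y0) / h)
    # local coordinates in [0, 1) within the cell
    u, v = (x - x0) / h - col, (y - y0) / h - row
    half = 'lower' if v <= u else 'upper'  # below or above the diagonal
    return col, row, half
```

Each query performs a fixed number of arithmetic operations, independently of the number of triangles, which is what makes the per-triangle persistent search trees worthwhile.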
Using separate persistent search trees for the planar subdivisions in each triangle, the slab-and-persistence method takes $O(\sum_{T \in \mathcal{T}} k_T \log(k_T))$ preprocessing time, $O(\sum_{T \in \mathcal{T}} k_T)$ space, and $O(\max_{T \in \mathcal{T}} \log k_T)$ time per query, where $k_T$ is the number of vertices in $\mathcal{A}(L) \cap T$. The pairing function in polygon $P$ was precomputed in the previous step (see Section \ref{sec:simplexpairs}). For every (birth, death) pair $(\sigma_b, \sigma_d)$ of simplices, $PD_p(f(\cdot, t))$ has a point with coordinates $(f(\sigma_b, t), f(\sigma_d, t))$ if $\dim(\sigma_b) = p$. For every unpaired $p$-dimensional simplex $\sigma_b$, the diagram $PD_p(f(\cdot, t))$ has a point with coordinates $(f(\sigma_b, t), \infty)$. \subsection{Generalizing to higher-dimensional $\mathcal{T}$}\label{sec:high} In higher dimensions, we are somewhat limited by the availability of computational geometry data structures and algorithms. When $n = 2$, we use the DCEL data structure to represent the partition of $\mathcal{T}$ into polygons, and we can use one of several algorithms for solving the point-location problem. To the best of my knowledge, there is not an analogous data structure (yet) for storing a partition of a space into $n$-dimensional polyhedrons for arbitrary $n \geq 3$, and there are not algorithms (yet) for solving the point-location problem in higher dimensions. Otherwise, the algorithm of Section \ref{sec:2D} (as outlined at the beginning of Section \ref{sec:compute}) requires almost no modifications for higher-dimensional $\mathcal{T}$. The partition of $\mathcal{T}$ into polygons is replaced by a partition of $\mathcal{T}$ into $n$-dimensional polyhedrons, where $n = \dim(\mathcal{T})$. Only the first step (computing the polyhedrons) requires a meaningful modification, which I describe below.
When $n = 2$, the intersection of $I(\sigma, \tau)$ with a triangle $T \in \mathcal{T}$ is the intersection of a line with $T$, which is a line segment $L_{\sigma, \tau, T}$. These line segments completely determine the polygonal partition of $\mathcal{T}$ because the line segments are the faces of the polygons. In turn, each line segment $L_{\sigma, \tau, T}$ is completely determined by the intersection of $L_{\sigma, \tau, T}$ with the $1$-skeleton of $\mathcal{T}$; this intersection is a pair $(v_{\sigma, \tau, T}, w_{\sigma, \tau, T})$ of points. We computed the set $\{(v_{\sigma, \tau, T}, w_{\sigma, \tau, T})\}_{\sigma, \tau, T}$ by restricting the fibered filtration function $f$ to a path through the $1$-skeleton of $\mathcal{T}$ and applying the Bentley--Ottmann planesweep algorithm. In general, the intersection of $I(\sigma, \tau)$ with an $n$-simplex $T \in \mathcal{T}$ is the intersection of an $(n-1)$-dimensional hyperplane $H_{\sigma, \tau, T}$ with $T$, which is an $(n-1)$-dimensional polyhedron $P_{\sigma, \tau, T}$. The set $\{P_{\sigma, \tau, T}\}_{\sigma, \tau, T}$ completely determines the polyhedral partition of $\mathcal{T}$ that is given by Proposition \ref{prop:polyhedrons_constant} because the polyhedrons $P_{\sigma, \tau, T}$ are the $(n-1)$-dimensional faces of the $n$-dimensional polyhedrons in the partition. In turn, each polyhedron $P_{\sigma, \tau, T}$ is completely determined by its intersection with the $1$-skeleton of $\mathcal{T}$, as follows. The $m$-dimensional faces of $P_{\sigma, \tau, T}$ are the set $\{H_{\sigma, \tau, T} \cap F \mid F \text{ is an } (m+1)\text{-dimensional face of } T \text{ and } H_{\sigma, \tau, T} \cap F \neq \emptyset \}$.
For every $(m+1)$-dimensional face $F$ of $T$ such that $H_{\sigma, \tau, T} \cap F \neq \emptyset$, the intersection $H_{\sigma, \tau, T} \cap F$ is the $m$-dimensional polyhedron whose $(m-1)$-dimensional faces are the set $\{H_{\sigma, \tau, T} \cap g \mid g \text{ is an } m\text{-dimensional face of } F\}$. By induction, the faces of $P_{\sigma, \tau, T}$ are determined by $\{H_{\sigma, \tau, T} \cap e \mid e \text{ is a } 1 \text{-dimensional face of } T \text{ (i.e., an edge)}\}$, which are the vertices of $P_{\sigma, \tau, T}$. Consequently, we can compute each $P_{\sigma, \tau, T}$ by computing its vertices. As in the case in which $\dim(\mathcal{T}) = 2$, we do this by restricting the fibered filtration function $f$ to a path through the $1$-skeleton of $\mathcal{T}$ and applying the Bentley--Ottmann planesweep algorithm. \section{Conclusions and Discussion}\label{sec:conclusion} I introduced an algorithm for efficiently computing PD bundles when the fibered filtration function is piecewise-linear. I gave full implementation details for the case in which the base space $\mathcal{T}$ is 2D, and in Section \ref{sec:high} I discussed how one may generalize to higher dimensions. I conclude with some questions and proposals for future work: \begin{itemize} \item What invariants can we use for summarizing and analyzing PD bundles in ways that do not require exploratory data analysis? The current algorithm requires a user to ``query'' the PD bundle at various points in the base space. \item Can we generalize the implementation of the algorithm to higher-dimensional base spaces $\mathcal{T}$? Generalizing to higher-dimensional base spaces will require generalizing certain computational geometry data structures and algorithms. For example, we must generalize the doubly-connected edge list, which we used to represent a partition of the plane into polygons, to a data structure that can represent a partition of a higher-dimensional space into polyhedrons.
Such higher-dimensional computational geometry algorithms may be useful for other applications as well. \item Can we generalize the algorithm to non-piecewise-linear fibered filtration functions? For piecewise-linear fibered filtration functions, we used the fact that the base space $\mathcal{T}$ can be partitioned into polyhedrons such that there is a single PD ``template'' (a list of (birth, death) simplex pairs) for each polyhedron. The template can then be used to obtain $PD_p(f_t)$ at any point $t$ in the polyhedron. For ``generic'' fibered filtration functions, I showed in \cite{pd_bundle} that the base space $\mathcal{T}$ is stratified such that for each $\dim(\mathcal{T})$-dimensional stratum, there is a single PD template which can be used to obtain $PD_p(f_t)$ at any point $t$ in the stratum. \end{itemize} \section*{Acknowledgements} I thank Michael Lesnick and Nina Otter for helpful discussions about the algorithm for computing PD bundles. \section{Introduction} Suppose one has a set $\{X(t)\}_{t \in \mathcal{T}}$ of point clouds parameterized by a topological space $\mathcal{T}$. For example, a time-varying point cloud is parameterized by $\mathcal{T} = \mathbb{R}$. At each $t \in \mathcal{T}$, one can construct a filtration (such as the Vietoris--Rips filtration) for $X(t)$ and compute its persistent homology (PH). More generally, one may have a \emph{fibered filtration function}, a set $\{f_t:\mathcal{K}^t \to \mathbb{R}\}_{t \in \mathcal{T}}$ of filtrations parameterized by $\mathcal{T}$. The associated \emph{persistence diagram (PD) bundle} is the space of persistence diagrams $PD(f_t)$ for all $t \in \mathcal{T}$. For example, a vineyard \cite{vineyards} is the special case in which $\mathcal{T}$ is an interval in $\mathbb{R}$, and the persistent homology transform \cite{pht} is a special case in which $\mathcal{T} = S^d$. For more examples, see \cite{pd_bundle}.
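As a concrete instance of the per-$t$ construction, the following sketch computes the filtration function of the Vietoris--Rips complex of a point cloud; this is the standard construction (the function names are illustrative). The filtration value of a simplex is the largest pairwise distance among its vertices, so each vertex enters at $0$ and each higher simplex enters once all of its edges are present.

```python
from itertools import combinations
from math import dist

def rips_filtration(points, max_dim=2):
    """Vietoris--Rips filtration function of a point cloud.

    points: list of coordinate tuples. Returns a dict mapping each simplex
    (a tuple of point indices) to its filtration value, i.e., the diameter
    of its vertex set. Simplices up to dimension max_dim are included.
    """
    f = {}
    n = len(points)
    for k in range(1, max_dim + 2):  # simplices with k vertices
        for simplex in combinations(range(n), k):
            f[simplex] = max(
                (dist(points[i], points[j]) for i, j in combinations(simplex, 2)),
                default=0.0,  # vertices have no edges, so they enter at 0
            )
    return f
```

The resulting function is monotonic by construction (a face's diameter never exceeds its coface's), so its sublevel sets are simplicial complexes, as required of a filtration function.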
\subsection{Contributions} I generalize the algorithm for computing vineyards \cite{vineyards} to an algorithm for efficiently computing PD bundles. We restrict to the case in which the PD bundle is \emph{piecewise linear}. This means that $\mathcal{T}$ is a simplicial complex, $\mathcal{K}^t \equiv \mathcal{K}$ for all $t \in \mathcal{T}$, and for every simplex $\sigma \in \mathcal{K}$, the function $f_{\sigma}(t) := f_t(\sigma)$ is linear on every simplex of $\mathcal{T}$. The restriction to piecewise-linear PD bundles allows us to take advantage of work in computational geometry such as the Bentley--Ottmann planesweep algorithm \cite{compgeo} for finding intersections of lines in a plane. An analogous piecewise-linear restriction was made for computing vineyards in \cite{vineyards}. The idea of the algorithm is to partition the base space $\mathcal{T}$ into polyhedrons and compute a PD ``template'' for each polyhedron. The partition is given by Proposition \ref{prop:polyhedrons_constant} (\cite{pd_bundle}). For any $t \in \mathcal{T}$, the persistence diagram $PD(f_t)$ can be computed in $O(N)$ time from the template for the polyhedron that contains $t$, where $N$ is the number of simplices in $\mathcal{K}$. The piecewise-linear restriction is reasonable for most applications. For example, suppose that we have a point cloud $X(t, \mu)$ whose coordinates depend on time $t$ and a system-parameter value $\mu \in \mathbb{R}$. If the data set is obtained either through real-world data collection or through numerical simulation, then we likely only know the coordinates of the point cloud $X(t, \mu)$ at a discrete set $\{t_i\}$ of time steps and a discrete set $\{\mu_j\}$ of system-parameter values. For every $(t_i, \mu_j)$, there is the filtration function $f_{(t_i, \mu_j)}:\mathcal{K} \to \mathbb{R}$ associated with the Vietoris--Rips filtration (or any other filtration) of $X(t_i, \mu_j)$.
To obtain a fibered filtration function, let $\mathcal{T}$ be a triangulation of $[\min t_i, \max t_i] \times [\min \mu_j, \max \mu_j]$ whose vertices are $\{(t_i, \mu_j)\}_{ij}$. We can extend $\{f_{(t_i, \mu_j)}\}_{ij}$ to a fibered filtration function on all of $\mathcal{T}$ by defining the filtration values of a simplex $\sigma$ via linear interpolation of $\{f_{(t_i, \mu_j)}(\sigma)\}_{ij}$. By construction, the resulting PD bundle is piecewise linear. I give full implementation details only for the case in which $\dim(\mathcal{T}) \leq 2$, but I discuss the mathematical generalization to higher dimensions in Section \ref{sec:high}. Even when the base space $\mathcal{T}$ is 2D, the algorithm already goes beyond vineyards: if $\dim(\mathcal{T}) = 2$, there are three parameters in total, namely the filtration parameter $r$ as well as two parameters that locally parameterize $\mathcal{T}$. In higher dimensions, we are limited by the availability of computational geometry algorithms for working with a partition of a space into polyhedrons. When $\dim(\mathcal{T}) = 2$, the partition of $\mathcal{T}$ into polygons can be represented by a doubly-connected edge list (DCEL) data structure, and we can use a standard point-location algorithm to locate the polygon that contains a given point. However, to the best of my knowledge, no one has generalized these yet to arbitrarily high dimensions. \subsection{Related Work} PD bundles were introduced in \cite{pd_bundle} as a generalization of vineyards \cite{vineyards}. The algorithm I present in this paper for computing PD bundles is a generalization of the algorithm presented in \cite{vineyards}. In many ways, the algorithm in this paper is also reminiscent of the Rivet algorithm for computing fibered barcodes of 2D multiparameter persistence modules \cite{rivet}. \subsection{Organization} The paper proceeds as follows.
I review the relevant background on persistent homology, vineyards, and PD bundles in Section \ref{sec:background}. I present my algorithm for computing piecewise-linear PD bundles in Section \ref{sec:compute}. Finally, I conclude and discuss possible directions for future research in Section \ref{sec:conclusion}. \section{Background}\label{sec:background} We begin by reviewing persistent homology, vineyards, and PD bundles; for more details on persistent homology, see \cite{edel_book, roadmap}; for more on vineyards, see \cite{vineyards}; and for an introduction to PD bundles, see \cite{pd_bundle}. \subsection{Persistent homology}\label{sec:PH} Let $\mathcal{K}$ be a simplicial complex. A \emph{filtration function} $f: \mathcal{K} \to \mathbb{R}$ is a real-valued function on $\mathcal{K}$ that is \emph{monotonic}, i.e., $f(\tau) \leq f(\sigma)$ if $\tau$ is a face of $\sigma$. Monotonicity guarantees that the $r$-sublevel sets $\mathcal{K}_r := \{\sigma \in \mathcal{K} \mid f(\sigma) \leq r\}$ are simplicial complexes. In persistent homology, we study how the homology of $\mathcal{K}_r$ changes as $r$ increases. Let $\{r_i\}$ be the image of $f$, ordered such that $r_i < r_{i+1}$. These are the critical values at which $\mathcal{K}_r$ changes; for $r \in [r_i, r_{i+1})$, we have $\mathcal{K}_r = \mathcal{K}_{r_i}$. For every $i \leq j$, the inclusion $\iota^{i, j}: \mathcal{K}_{r_i} \hookrightarrow \mathcal{K}_{r_j}$ induces a map $\iota^{i, j}_*: H_*(\mathcal{K}_{r_i}, \mathbb{F}) \to H_*(\mathcal{K}_{r_j}, \mathbb{F})$ on homology. For the remainder of this paper, we compute homology over the field $\mathbb{F} = \mathbb{Z}/2\mathbb{Z}$. The \emph{$p$th persistent homology} (PH) is the pair \begin{equation*} \Big( \{H_p(\mathcal{K}_{r_i}, \mathbb{F})\}_{1 \leq i \leq N}\,, \{\iota_*^{i, j}\}_{1 \leq i \leq j \leq N} \Big)\,. \end{equation*} A homology class is \emph{born} at $r_i$ if it is not in the image of $\iota_*^{i-1, i}$.
The homology class \emph{dies} at $r_j > r_i$ if $j$ is the minimum index such that $\iota_*^{i, j}$ maps it to zero. (Such a $j$ may not exist; in that case, the homology class never dies.) The Fundamental Theorem of Persistent Homology yields compatible choices of bases for the vector spaces $H_p(\mathcal{K}_{r_i}, \mathbb{F})$. The generators in our definition of a persistence diagram, below, are the basis elements in the decomposition given by the Fundamental Theorem of Persistent Homology. Persistent homology is often visualized as a \emph{persistence diagram} (PD). The $p$th persistence diagram $PD_p(f)$ is a multiset of points in the extended plane $\overline{\mathbb{R}}^2$ that summarizes the $p$th persistent homology. It contains the diagonal (for technical reasons) and one point for every generator. If a generator is born at $b$ and dies at $d$, then the coordinates of the corresponding point in the PD are $(b, d)$, and if the generator is born at $b$ and never dies, then the coordinates of the point are $(b, \infty)$. One of the standard methods for computing PH is the algorithm introduced in \cite{RU}. The algorithm requires a choice of compatible \emph{simplex ordering} $\alpha: \mathcal{K} \to \{1, \ldots, N\}$, where $N$ is the number of simplices in $\mathcal{K}$. We require that $\alpha(\sigma) < \alpha(\tau)$ if $f(\sigma) < f(\tau)$ or $\sigma$ is a face of $\tau$. A compatible ordering $\alpha$ exists because monotonicity ensures that $f(\sigma) \leq f(\tau)$ if $\sigma$ is a face of $\tau$. Let $D$ be the boundary matrix compatible with this ordering. That is, let $D$ be the matrix whose $(i, j)$th entry is \begin{equation*} D_{ij} = \begin{cases} 1\,, & \alpha^{-1}(i) \text{ is a face of } \alpha^{-1}(j) \\ 0 \,, & \text{otherwise.} \end{cases} \end{equation*} We decompose the boundary matrix $D$ into a matrix product $D = RU$ such that $U$ is upper triangular and $R$ is a binary matrix that is \emph{reduced}. 
A binary matrix $R$ is reduced if $low_R(j) \neq low_R(j')$ whenever $j \neq j'$ are the indices of nonzero columns in $R$. The quantity $low_R(j)$ is the row index of the last $1$ in column $j$ if column $j$ is nonzero and undefined if column $j$ is zero. An RU decomposition can be computed in $O(N^3)$ time \cite{RU, edel_book}. The function $low_R(j)$ is called the \emph{pairing function}. The authors of \cite{vineyards} showed that the pairing function $low_R(j)$ depends only on the boundary matrix $D$, and not on the particular reduced binary matrix $R$ in the decomposition $D = RU$. A pair of simplices $(\alpha^{-1}(i), \alpha^{-1}(j))$, for which $i = low_R(j)$, represents a persistent homology class. The \emph{birth simplex} $\alpha^{-1}(i)$ creates the homology class and the \emph{death simplex} $\alpha^{-1}(j)$ destroys the homology class. The two simplices in a pair have consecutive dimensions (i.e., if $\dim(\alpha^{-1}(i)) = p$ then $\dim(\alpha^{-1}(j)) = p + 1$). If $\dim(\alpha^{-1}(i)) = p$ and $\dim(\alpha^{-1}(j)) = p + 1$, then a point with coordinates $(f(\alpha^{-1}(i)), f(\alpha^{-1}(j)))$ is added to the $p$th persistence diagram. We refer to $f(\alpha^{-1}(i))$ as its \emph{birth} and $f(\alpha^{-1}(j))$ as its \emph{death}. Some simplices are not paired. If $i \neq low_R(j)$ for all $j$, then the simplex $\alpha^{-1}(i)$ is a birth simplex for a homology class that never dies. Its birth is $f(\alpha^{-1}(i))$ and its death is $\infty$. If $\dim(\alpha^{-1}(i)) = p$, then a point with coordinates $(f(\alpha^{-1}(i)), \infty)$ is added to the $p$th persistence diagram. \subsection{Vineyards}\label{sec:vineyards} Let $\mathcal{K}$ be a simplicial complex. A \emph{$1$-parameter filtration function} on $\mathcal{K}$ is a function $f: \mathcal{K} \times I \to \mathbb{R}$, where $I = [t_0, t_1]$ is an interval in $\mathbb{R}$, such that $f(\cdot, t)$ is a filtration function on $\mathcal{K}$ for all $t \in I$.
For each $t \in I$, the $r$-sublevel sets $\mathcal{K}_r^t = \{\sigma \in \mathcal{K} \mid f(\sigma, t) \leq r\}$ are a filtration of $\mathcal{K}$. The set $\{\{\mathcal{K}_r^t\}_{r \in \mathbb{R}}\}_{t \in I}$ is a set of filtrations parameterized by $t \in I$. For each $t \in I$, one can compute the persistence diagram $PD(f(\cdot, t))$. The associated \emph{vineyard} is the 1-parameter set $\{PD(f(\cdot, t))\}_{t \in I}$ of persistence diagrams. We visualize the vineyard in $\mathbb{R}^2 \times I$ as a continuous stack of PDs (see Figure~\ref{fig:vineyard}). The points in the PDs trace out curves with time; these curves are the \emph{vines}. \begin{figure} \centering \includegraphics[width = .4\textwidth]{Figures/vineyard_example.png} \caption{An illustration of a vineyard. There is a persistence diagram for each time $t$. (This figure is a slightly modified version of a figure that appeared originally in \cite{vineyard_figure}, which is available under a Creative Commons license.)} \label{fig:vineyard} \end{figure} An algorithm for computing vineyards is given by \cite{vineyards}, and we review it here. As in Section \ref{sec:PH}, we define a simplex ordering function $\alpha: \mathcal{K} \times I \to \{1, \ldots, N\}$ such that $\alpha(\sigma, t) < \alpha(\tau, t)$ if $f(\sigma, t) < f(\tau, t)$ or $\sigma$ is a face of $\tau$. Let $D(t)$ be the boundary matrix compatible with the ordering at time $t$. There is a corresponding pairing function $low_R(j, t)$. The simplex ordering is constant on intervals $J \subseteq I$ for which we have that if $f(\sigma, t) \leq f(\tau, t)$ for some $t \in J$, then $f(\sigma, s) \leq f(\tau, s)$ for all $s \in J$. On an interval $J$ on which the simplex ordering is constant, we let $\alpha(\cdot, J) : \mathcal{K} \to \{1, \ldots, N\}$ denote the simplex ordering in $J$. If the simplex ordering is constant in $J$, then so is $D(t)$, and thus so is the pairing function $low_R(j, t)$. 
We denote the pairing function in $J$ by $low_R(\cdot, J)$. In order to compute the pairing function for all $t \in I$, we only need to compute the pairing function once per interval $J$ on which the simplex ordering is constant. For all $t \in J$ and pairs $i, j$ such that $i = low_R(j, J)$, the $p$th persistence diagram at time $t$ has a point with coordinates $(f(\alpha^{-1}(i, J), t), f(\alpha^{-1}(j, J), t))$. For all $i$ such that $i \neq low_R(j, J)$ for all $j$, the $p$th persistence diagram at time $t$ has a point with coordinates $(f(\alpha^{-1}(i, J), t), \infty)$. The algorithm for computing a vineyard can be broken down into three steps: \begin{enumerate} \item {\bf Compute the transposition times:} Compute the times $t$ at which there is a change in the relative order of a pair $(\sigma, \tau)$ of simplices. This means there are intervals $J_1$, $J_2$ with $J_1 \cap J_2 = \{t\}$ such that the simplex ordering is constant on each $J_i$ and $(\alpha(\sigma, J_1) - \alpha(\tau, J_1))(\alpha(\sigma, J_2) - \alpha(\tau, J_2)) < 0$. \item {\bf Compute the pairing function:} For the boundary matrix $D(t_0)$ at initial time $t = t_0$, compute an RU decomposition $D(t_0) = R(t_0)U(t_0)$, where $R(t_0)$ is a reduced binary matrix and $U(t_0)$ is upper triangular. Using the initial pairing function $low_R(\cdot, t_0)$, we compute the birth and death simplices for the persistent homology of the initial filtration $f(\cdot, t_0)$. If the $i$th and $(i+1)$st simplices $\alpha^{-1}(i, t)$ and $\alpha^{-1}(i+1, t)$ are transposed at time $t$, we update the $RU$ decomposition by following the case work in \cite{vineyards}. (Note that if there is more than one pair $(\sigma, \tau)$ of simplices whose relative order changes at $t$, then the permutation can be decomposed into a sequence of such transpositions.) At worst, updating $R(t)$ requires adding one column to another and adding one row to another---similarly for $U(t)$.
The addition of columns and rows is an $O(N)$ operation, but in experiments, the authors of \cite{vineyards} found that updating $R(t)$ and $U(t)$ can be done in approximately constant time if one uses the sparse matrix representations that are given in \cite{vineyards}. After an update of the $RU$ decomposition, we update the birth and death simplices. At most two (birth, death) simplex pairs are updated, and these updates occur in constant time. This updating procedure yields the birth and death simplices for the filtration function $f(\cdot, t)$. \item {\bf Evaluate the PD at each time:} At time $t$, let $J$ be the interval such that $t \in J$ and the simplex ordering is constant in $J$. For every (birth, death) simplex pair $(\sigma_b, \sigma_d)$ for the interval $J$, the diagram $PD_p(f(\cdot, t))$ contains the point $(f(\sigma_b, t), f(\sigma_d, t))$ if $\dim(\sigma_b) = p$. For every $p$-dimensional simplex $\sigma_b$ that is unpaired in $J$, the diagram $PD_p(f(\cdot, t))$ contains the point $(f(\sigma_b, t), \infty)$. \end{enumerate} A special type of vineyard is a \emph{piecewise-linear vineyard}. If we are only given $f(\sigma, t_i)$ at discrete time steps $t_i$, then for all $i$ we extend $f(\sigma, t)$ to $t \in [t_i, t_{i+1}]$ by linear interpolation. In this case, one can perform step (1) of the algorithm above by using the Bentley--Ottmann planesweep algorithm \cite{compgeo}. This is because computing when (if ever) two simplices $\sigma$ and $\tau$ are transposed in $[t_i, t_{i+1}]$ is equivalent to finding the intersection (if it exists) between the lines \begin{align*} y = \frac{f(\sigma, t_{i+1}) - f(\sigma, t_i)}{t_{i+1} - t_i}(t - t_i) + f(\sigma, t_i)\,, \\ y = \frac{f(\tau, t_{i+1}) - f(\tau, t_i)}{t_{i+1} - t_i}(t - t_i) + f(\tau, t_i)\,. \end{align*} \subsection{PD bundles}\label{sec:pd_bundle} PD bundles were introduced in \cite{pd_bundle} as a generalization of vineyards in which a set of filtrations is parameterized by a base space $\mathcal{T}$.
A vineyard is the special case in which $\mathcal{T}$ is an interval in $\mathbb{R}$. \begin{definition} A \emph{fibered filtration function} is a set $\{f_t: \mathcal{K}^t \to \mathbb{R}\}_{t \in \mathcal{T}}$ of filtration functions parameterized by a topological space $\mathcal{T}$. When $\mathcal{K}^t \equiv \mathcal{K}$ for all $t \in \mathcal{T}$, we define $f(\sigma, t) := f_t(\sigma)$ for all $\sigma \in \mathcal{K}$ and $t \in \mathcal{T}$. \end{definition} \begin{definition} Let $\{f_t: \mathcal{K}^t \to \mathbb{R}\}_{t \in \mathcal{T}}$ be a fibered filtration function. The topological space $\mathcal{T}$ is the \emph{base space}. The space $E := \{(t, z) \mid z \in PD_p(f_t)\,, t \in \mathcal{T}\}$ is the $p$th \emph{total space}. We give $E$ the subspace topology inherited from the inclusion $E \hookrightarrow \mathcal{T} \times \overline{\mathbb{R}}^2$. The associated \emph{$p$th PD bundle} is the triple $(E, \mathcal{T}, \pi)$, where $\pi$ is the projection from $E$ to $\mathcal{T}$. \end{definition} In \cite{vineyards}, it was computationally easier to work with a piecewise-linear vineyard, which is a vineyard for a fibered filtration function $f: \mathcal{K} \times [t_0, t_1] \to \mathbb{R}$ in which $f(\sigma, \cdot)$ is piecewise linear for all $\sigma \in \mathcal{K}$. (See the discussion at the end of Section \ref{sec:vineyards}.) Below, we define an analog of piecewise-linear vineyards. \begin{definition}[Piecewise-linear PD bundles] Let $\{f_t : \mathcal{K}^t \to \mathbb{R}\}_{t \in \mathcal{T}}$ be a fibered filtration function in which $\mathcal{K}^t \equiv \mathcal{K}$. As before, we define $f(\sigma, t) := f_t(\sigma)$ for all $\sigma \in \mathcal{K}$ and $t \in \mathcal{T}$. 
If $\mathcal{T}$ is a simplicial complex and $f(\sigma, \cdot)$ is linear on each simplex of $\mathcal{T}$ for all simplices $\sigma \in \mathcal{K}$, then $f$ is a \emph{piecewise-linear fibered filtration function.} The resulting PD bundle is a \emph{piecewise-linear PD bundle}. \end{definition} \noindent For example, in the introduction we considered a point cloud $X(t, \mu)$ whose coordinates depended on time $t \in \mathbb{R}$ and system-parameter value $\mu \in \mathbb{R}$. Given only the coordinates of the point cloud at a discrete set $\{t_i\}$ and a discrete set $\{\mu_j\}$, we had a filtration function $f_{(t_i, \mu_j)}$ for every $(t_i, \mu_j)$. We extended this to a piecewise-linear fibered filtration function on $\mathcal{T} = [\min t_i, \max t_i] \times [\min \mu_j, \max \mu_j]$ via linear interpolation of the filtration values for each simplex $\sigma \in \mathcal{K}$. More generally, suppose we are given a fibered filtration function $f: \mathcal{K} \times \prod_{i=1}^m \mathcal{I}_i \to \mathbb{R}$, where each $\mathcal{I}_i$ is a finite subset of $\mathbb{R}$, and we wish to extend $f$ to a fibered filtration function whose base is $\mathcal{T} = \prod_{i=1}^m [\min \mathcal{I}_i, \max \mathcal{I}_i]$ (e.g., in the example given above, we had $\mathcal{I}_1 = \{t_i\}$ and $\mathcal{I}_2 = \{\mu_j\}$). First, we construct a triangulation $\mathcal{T}$ (i.e., an $m$-dimensional simplicial complex) of $\prod_{i=1}^m [\min \mathcal{I}_i, \max \mathcal{I}_i]$ whose set of vertices is $\prod_{i=1}^m \mathcal{I}_i$. (See Appendix \ref{sec:triangulation}.) Then, one can extend $f$ to a piecewise-linear fibered filtration function $f: \mathcal{K} \times \mathcal{T} \to \mathbb{R}$ by linearly interpolating $f(\sigma, \cdot)$ on each simplex $T \in \mathcal{T}$ for all simplices $\sigma \in \mathcal{K}$. 
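The interpolation step can be sketched in code. The following Python snippet (illustrative, not from any reference implementation; the function names are hypothetical) recovers the barycentric coordinates of a point $t$ in an $m$-simplex of $\mathcal{T}$ and uses them to linearly interpolate the filtration values stored at the simplex's vertices.

```python
import numpy as np

def barycentric_coords(vertices, t):
    """Barycentric coordinates of the point t in the m-simplex whose
    vertex coordinates are the rows of `vertices` (an (m+1) x m array)."""
    V = np.asarray(vertices, dtype=float)
    # Solve A @ lam = t - v0, where the columns of A are the edge
    # vectors from vertex 0 to the other vertices.
    A = (V[1:] - V[0]).T
    lam = np.linalg.solve(A, np.asarray(t, dtype=float) - V[0])
    return np.concatenate(([1.0 - lam.sum()], lam))

def interpolate_filtration(vertices, vertex_values, t):
    """Linearly interpolate f(sigma, .) at t from its values at the
    vertices of the simplex of the base space that contains t."""
    lam = barycentric_coords(vertices, t)
    return float(lam @ np.asarray(vertex_values, dtype=float))
```

For instance, on the triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$ and filtration values $0$, $2$, $4$, the point $(0.5, 0.5)$ interpolates to $3$.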
In \cite{pd_bundle}, I showed that if $f: \mathcal{K} \times \mathcal{T} \to \mathbb{R}$ is a piecewise-linear fibered filtration function on an $n$-dimensional simplicial complex $\mathcal{T}$, then $\mathcal{T}$ can be partitioned into $n$-dimensional polyhedrons such that within each polyhedron $P$, there is a ``template'' from which $PD_p(f(\cdot, t))$ can be computed for all $t \in P$. The template is a list of (birth, death) simplex pairs $(\sigma_b, \sigma_d)$. To make this more precise, we define \begin{equation*} I(\sigma, \tau) := \{t \in \mathcal{T} \mid f(\sigma, t) = f(\tau, t)\} \,. \end{equation*} For every $n$-simplex $T$ in $\mathcal{T}$, the intersection $I(\sigma, \tau) \cap T$ is $\emptyset$, $T$, a vertex of $T$, or the intersection of an $(n-1)$-dimensional hyperplane with $T$. The set \begin{equation}\label{eq:partition} \bigcup_{T \in \mathcal{T}} \partial T \cup \{I(\sigma, \tau) \cap T \mid I(\sigma, \tau) \cap T \text{ is } (n-1)\text{-dimensional}\} \end{equation} partitions $\mathcal{T}$ into polyhedrons, where $T$ denotes an $n$-simplex of $\mathcal{T}$ and $\partial T$ denotes the boundary of $T$. As in Section \ref{sec:PH}, we define a simplex ordering function $\alpha : \mathcal{K} \times \mathcal{T} \to \mathbb{R}$ such that $\alpha(\sigma, t) < \alpha(\tau, t)$ if $f(\sigma, t) < f(\tau, t)$ or $\sigma \subsetneq \tau$. If there is a $t \in \mathcal{T}$ such that $f(\sigma, t) = f(\tau, t)$ and neither $\sigma \subseteq \tau$ nor $\tau \subseteq \sigma$, then the ordering $\alpha(\cdot, t)$ is not uniquely defined. For consistency over the base space, we fix some ``intrinsic ordering'' $\beta: \mathcal{K} \to \{1, \ldots, N\}$ such that $\beta(\sigma) < \beta(\tau)$ if $\sigma \subsetneq \tau$. We now fix $\alpha: \mathcal{K} \times \mathcal{T} \to \{1, \ldots, N\}$ to be the unique simplex ordering function such that $\alpha(\sigma, t) < \alpha(\tau, t)$ if $f(\sigma, t) < f(\tau, t)$, or if $f(\sigma, t) = f(\tau, t)$ and $\beta(\sigma) < \beta(\tau)$.
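In code, the tie-broken ordering $\alpha(\cdot, t)$ amounts to sorting the simplices lexicographically by the pair $(f(\sigma, t), \beta(\sigma))$. A minimal Python sketch (the data structures are illustrative):

```python
def simplex_ordering(simplices, f_at_t, beta):
    """Return alpha(., t) as a dict mapping each simplex to its index.

    Simplices are sorted by filtration value, with ties broken by the
    intrinsic ordering beta (which must refine the face relation).
    """
    ordered = sorted(simplices, key=lambda s: (f_at_t[s], beta[s]))
    return {s: i for i, s in enumerate(ordered)}
```

Because $\beta$ refines the face relation and a filtration function never assigns a face a larger value than its cofaces, the resulting order is a valid filtration order at every $t$.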
\begin{proposition}[\cite{pd_bundle}]\label{prop:polyhedrons_constant} If $f: \mathcal{K} \times \mathcal{T} \to \mathbb{R}$ is a piecewise-linear fibered filtration function, then the set in Equation \ref{eq:partition} partitions $\mathcal{T}$ into polyhedra $P$ on which the simplex ordering is constant (i.e., $\alpha(\sigma, \cdot)\vert_P$ is constant for all $\sigma \in \mathcal{K}$). Therefore, the set of (birth, death) simplex pairs for $f$ is constant within each $P$. \end{proposition} \section{Computing piecewise-linear PD bundles}\label{sec:compute} The algorithm for computing a piecewise-linear PD bundle is split into three main steps: \begin{enumerate} \item {\bf Compute the polyhedrons:} Compute the polyhedrons on which the simplex ordering (and thus pairing function) is constant (see Proposition \ref{prop:polyhedrons_constant}). For every pair of adjacent polyhedrons, there is a permutation $\pi$ that relates the differing simplex orders in each polyhedron. We compute and record the list of simplex pairs $(\sigma, \tau)$ such that $\pi$ changes the relative positions of $\sigma$ and $\tau$. In the ``generic case'' (defined at the beginning of Section \ref{sec:findpolygons}), $\pi$ is the transposition of a single pair $(\sigma, \tau)$ of simplices with consecutive indices in the simplex ordering. \item {\bf Compute the pairing function:} Choose a point $t_* \in \mathcal{T}$. Compute the simplex ordering at $t_*$, the boundary matrix $D(t_*)$, and an RU decomposition $D(t_*) = R(t_*)U(t_*)$, where $R(t_*)$ is a reduced binary matrix and $U(t_*)$ is upper triangular. We traverse the polyhedrons, starting with the polyhedron that contains $t_*$. As we move from one polyhedron to the next, we perform the simplex permutation $\pi$ computed above. We update the $RU$ decomposition and pairing function via the update rules that are used when computing vineyards (see \cite{vineyards}).
In each polyhedron, we store its pairing function (i.e., the (birth, death) simplex pairs $(\sigma_b, \sigma_d)$ and the unpaired simplices $\sigma_b$, which are birth simplices for homology classes that never die). \item {\bf Query the PD bundle:} To see the $p$th persistence diagram $PD_p(f(\cdot, t))$ associated with point $t \in \mathcal{T}$, first locate the polyhedron $P$ that contains $t$. For each pair $(\sigma_b, \sigma_d)$ of simplices in the pairing function for $P$, the diagram $PD_p(f(\cdot, t))$ has a point with coordinates $(f(\sigma_b, t), f(\sigma_d, t))$ if $\dim(\sigma_b) = p$. For every $p$-dimensional simplex $\sigma_b$ that is unpaired in $P$, the diagram $PD_p(f(\cdot, t))$ contains the point $(f(\sigma_b, t), \infty)$. \end{enumerate} Steps 1--3 are directly analogous to steps 1--3 in the algorithm for computing vineyards that was presented in Section \ref{sec:vineyards}. In what follows, I elaborate on each step of the algorithm above. We focus on the case in which $\mathcal{T}$ is 2D. \subsection{Special case: $\mathcal{T}$ is $2$-dimensional}\label{sec:2D} Let $\mathcal{K}$ be a simplicial complex, let $f: \mathcal{K} \times \mathcal{T} \to \mathbb{R}$ be a piecewise-linear fibered filtration function, and suppose $\mathcal{T}$ is $2$D. If $T$ is a triangle in $\mathcal{T}$, then $I(\sigma, \tau) \cap T$ is one of $\emptyset$, $T$, a vertex of $T$, or a line segment whose endpoints are on $\partial T$. In Figure \ref{fig:intersection}, we show a few possible cases for $I(\sigma,\tau)$. The set in Equation \ref{eq:partition} is a set $L$ of line segments, and the planar subdivision induced by $L$ is a \emph{line arrangement} $\mathcal{A}(L)$. For example, see Figure \ref{fig:polygon partition}.
\begin{figure} \centering \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection1.png}\label{fig:intersection 1}} \hspace{5mm} \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection2.png}\label{fig:intersection 2}} \hspace{5mm} \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection3.png}} \\ \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection4.png} \label{fig:triangle_intersection}} \hspace{5mm} \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection5.png}} \hspace{5mm} \subfloat[]{\includegraphics[width=.2\textwidth]{Figures/intersection6.png}} \caption{A few possible cases for the set $I(\sigma,\tau)$, in pink. The black lines are the 1-skeleton of $\mathcal{T}$.} \label{fig:intersection} \end{figure} \begin{figure} \centering \includegraphics[width = .5\textwidth]{Figures/polygon_partition_2.png} \caption{A line arrangement $\mathcal{A}(L)$ that represents the partition of a triangulated base space $\mathcal{T}$ into polygons (see Proposition \ref{prop:polyhedrons_constant}). Within each polygon, the simplex ordering is constant. (This figure appeared originally in \cite{pd_bundle}.)} \label{fig:polygon partition} \end{figure} For ease of exposition, we will make two genericity assumptions for the remainder of Section \ref{sec:2D}. The idea of the algorithm is not different in the general case, but it requires some technical modifications, which I discuss in Appendix \ref{sec:technical}. The assumptions are as follows: \begin{enumerate} \item For all distinct simplices $\sigma, \tau \in \mathcal{K}$ and all vertices $v \in \mathcal{T}$, we have that $f(\sigma, v) \neq f(\tau, v)$. This implies that for all triangles $T \in \mathcal{T}$, the intersection $I(\sigma, \tau) \cap T$ is either $\emptyset$ or a line segment whose endpoints are not vertices of $T$. For examples, see Figures \ref{fig:intersection 1} and \ref{fig:intersection 2}.
\item For all distinct simplices $\sigma_1, \tau_1, \sigma_2, \tau_2 \in \mathcal{K}$ and every triangle $T \in \mathcal{T}$ such that $I(\sigma_1, \tau_1) \cap T$ and $I(\sigma_2, \tau_2) \cap T$ are nonempty, the line segments $I(\sigma_1, \tau_1) \cap T$ and $ I(\sigma_2, \tau_2) \cap T$ do not share any endpoints. \end{enumerate} \subsubsection{Computing the polygons}\label{sec:findpolygons} For a piecewise-linear vineyard, computing the intervals on which the simplex ordering is constant can be reduced to finding the intersections between the piecewise-linear functions $y = f(\sigma, t)$ and $y = f(\tau, t)$ for all pairs $(\sigma, \tau)$ of simplices in $\mathcal{K}$. Likewise, for a piecewise-linear PD bundle, computing the polygons on which the simplex ordering is constant can be reduced to finding the intersections $I(\sigma, \tau)$ for all pairs $(\sigma, \tau)$ of simplices. \begin{definition}\label{def:swap} Let $\ell$ be a line segment whose endpoints are on the boundary of a triangle $T$ in $\mathcal{T}$. The line segment $\ell$ partitions $T$ into polygons $Q_1$ and $Q_2$. We say that simplices $\sigma, \tau \in \mathcal{K}$ \emph{swap along $\ell$} if $(\alpha(\sigma, t_1) - \alpha(\tau, t_1))(\alpha(\sigma, t_2) - \alpha(\tau, t_2)) < 0$ for all $t_1 \in Q_1$ and $t_2 \in Q_2$ (i.e., $\sigma$ and $\tau$ have different relative orders in $Q_1$ and $Q_2$). \end{definition} \noindent Under the genericity assumptions we made earlier, a pair $(\sigma, \tau)$ swaps along $\ell$ if and only if $\ell = I(\sigma, \tau) \cap T$ for a triangle $T \in \mathcal{T}$. (See Lemma \ref{lem:linesegs} for a discussion of the general case.) We wish to compute the line arrangement $\mathcal{A}(L)$, where $L$ is the set of line segments defined by Equation \ref{eq:partition}. (See Figure \ref{fig:polygon partition}.) The polygons of $\mathcal{A}(L)$ are the polygons on which the simplex ordering is constant.
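Under assumption (1), the segment $I(\sigma, \tau) \cap T$ can be computed directly from the values of $g := f(\sigma, \cdot) - f(\tau, \cdot)$ at the three vertices of $T$: because $g$ is affine on $T$, the segment is nonempty exactly when $g$ changes sign, and each endpoint is the root of $g$ along an edge of $T$. A Python sketch (the names are illustrative):

```python
def swap_segment(triangle, g_values):
    """Endpoints of I(sigma, tau) inside a triangle T of the base space.

    triangle -- three vertex coordinates (x, y)
    g_values -- g(v) = f(sigma, v) - f(tau, v) at the three vertices,
                assumed nonzero (genericity assumption (1))
    Returns the two endpoints of the segment, or None if g has constant
    sign on T (i.e., I(sigma, tau) misses T).
    """
    endpoints = []
    for i in range(3):
        j = (i + 1) % 3
        gi, gj = g_values[i], g_values[j]
        if gi * gj < 0:  # g changes sign along edge (i, j)
            s = gi / (gi - gj)  # root of the linear function on the edge
            (xi, yi), (xj, yj) = triangle[i], triangle[j]
            endpoints.append((xi + s * (xj - xi), yi + s * (yj - yi)))
    return tuple(endpoints) if len(endpoints) == 2 else None
```

The returned segment is precisely the line segment along which $(\sigma, \tau)$ swaps.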
We store $\mathcal{A}(L)$ using a doubly-connected edge list (DCEL) data structure \cite{compgeo}. A DCEL is a standard data structure for storing a polygonal subdivision of the plane. We compute $\mathcal{A}(L)$ using the following algorithm, illustrated in Figure \ref{fig:step1}: \begin{figure} \centering \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step1.png}\label{fig:step1.1}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step2.png}\label{fig:step1.2}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step3.png}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step4.png}} \\ \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step5.png}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step6.png}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step7.png}} \subfloat[]{\includegraphics[width = .25\textwidth]{step1/pd_step8.png}} \caption{Computing the polygons. (A) The line arrangement $\mathcal{A}(L)$ is initialized to represent the triangulated base space $\mathcal{T}$, which in this case consists of two triangles. (B) We find the vertices $v$ of $\mathcal{A}(L)$ that lie on the $1$-skeleton of $\mathcal{T}$. (C)--(H) We incrementally add the line segments of the form $I(\sigma, \tau) \cap T$ for a triangle $T$ in $\mathcal{T}$. The endpoints of a given line segment are a pair $(v, w)$ of vertices in (B).} \label{fig:step1} \end{figure} \begin{enumerate} \item We initialize $\mathcal{A}(L)$ so that it represents the triangulation $\mathcal{T}$. (See Figure \ref{fig:step1.1}.) In addition to the usual data that a DCEL stores, we enumerate the triangles in $\mathcal{T}$, and each half-edge $e$ stores the index of the triangle in $\mathcal{T}$ on whose boundary $e$ lies. \item Let $\gamma$ be a path through the $1$-skeleton of $\mathcal{T}$ that traverses each edge at least once.
For example, suppose $\mathcal{T}$ is a triangulation of a grid, as in Figure \ref{fig:triangulation grid}. A path $\gamma$ through $\mathcal{T}$ is shown in Figure \ref{fig:triangulation path}. \begin{figure} \centering \subfloat[]{\includegraphics[width = .35\textwidth]{Figures/triangulation_grid.png}\label{fig:triangulation grid}} \subfloat[]{\includegraphics[width = .35\textwidth]{Figures/triangulation_path_rainbow.png}\label{fig:triangulation path}} \caption{(A) A triangulated base space $\mathcal{T}$. (B) A path $\gamma$ that traverses each edge of the $1$-skeleton of $\mathcal{T}$, starting at the bottom-left vertical edge (violet) and ending at the top-right vertical edge (red).} \end{figure} \item The restriction of $f$ to the path $\gamma$ is a $1$-parameter filtration function (the input to a vineyard). We use the Bentley--Ottmann algorithm \cite{compgeo} to find the points $v$ on $\gamma$ at which pairs of simplices swap (i.e., the relative order of a pair of simplices changes). See Figure \ref{fig:step1.2}. The points $v$ are the points on the $1$-skeleton of $\mathcal{T}$ at which $f(\sigma, v) = f(\tau, v)$ for some $\sigma \neq \tau$. When we find a vertex $v$, we add it to the DCEL that represents $\mathcal{A}(L)$; we do this by splitting the edge in the DCEL that the vertex lies on. For each triangle $T \in \mathcal{T}$, we maintain a dictionary whose keys are pairs $(\sigma, \tau)$ of simplices. The value associated with $(\sigma, \tau)$ is the list of vertices $v$ on $\partial T$ at which $\sigma$ and $\tau$ swap. This list is updated as we find vertices; at the end of the Bentley--Ottmann algorithm, every $(\sigma, \tau)$ in the dictionary for $T$ is associated with a pair $(v, w)$ of vertices that lie on $\partial T$. \item For each triangle $T \in \mathcal{T}$ and for each pair $(\sigma, \tau)$ of simplices in its dictionary, there is an associated pair $(v, w)$ of vertices that lie on the boundary $\partial T$.
The vertices $v, w$ must be the endpoints of a line segment $\ell \in L$ along which $(\sigma, \tau)$ swaps, so for each $(v, w)$, we add a line segment with endpoints $(v, w)$ to the DCEL that represents $\mathcal{A}(L)$. (See Figures \ref{fig:step1}C--H.) There are many standard algorithms for doing this: one example is the incremental algorithm (see e.g., Chapter 8.3 of \cite{compgeo}), in which the line segments are added one at a time. The worst-case running time of the incremental algorithm is $O(n_T^2)$, where $n_T$ is the number of line segments in triangle $T$. The algorithm also requires $O(n_T^2)$ space. The incremental algorithm can be parallelized over the triangles $T \in \mathcal{T}$. \end{enumerate} \noindent In Figure \ref{fig:step1}, we illustrate the algorithm for computing the polygons. Adding a single line segment to $\mathcal{A}(L)$ typically creates more than one new edge in $\mathcal{A}(L)$. For example, in Figure \ref{fig:step1}H, adding the last line segment creates two new edges and splits an existing edge into two edges. As we add line segments to $\mathcal{A}(L)$, we keep track of the pairs $(\sigma, \tau)$ of simplices that correspond to each edge. More precisely, if edge $e$ is a subset of a line segment for the pair $(\sigma, \tau)$ of simplices, then $e$ stores a reference to the pair $(\sigma, \tau)$. We add the reference to $(\sigma, \tau)$ at the time that edge $e$ is created in $\mathcal{A}(L)$. The two polygons adjacent to $e$ have simplex orderings that are related via the transposition of $\sigma$ and $\tau$. \subsubsection{Computing the pairing function}\label{sec:simplexpairs} Let $G$ be the dual graph to the line arrangement $\mathcal{A}(L)$. The graph $G$ contains a vertex $v_P$ for every polygon $P$ of $\mathcal{A}(L)$ and an edge between two vertices if the corresponding polygons are adjacent. Next, we compute a path $\Gamma$ that visits every vertex of $G$ at least once.
For example, see Figure \ref{fig:path}. One way to obtain such a path is the following algorithm, used by Rivet \cite{rivet}: We first compute a minimum spanning tree $S$ for $G$ via an algorithm such as Prim's algorithm or Kruskal's algorithm \cite{mst}. By performing a depth-first search of $S$, we obtain a path $\Gamma$ that visits every edge of $S$ at most twice. The path $\Gamma$ may not have minimal length, but it has the following guarantee: If $\Gamma^*$ is a path of minimal length that visits every vertex of $G$ at least once, then $\mathrm{length}(\Gamma) \leq 2 \times (\text{number of edges in } S) \leq 2 \times \mathrm{length}(\Gamma^*)$. \begin{figure} \centering \includegraphics[width = .3\textwidth]{pd_bundle_path.png} \caption{A path $\Gamma$ that visits every polygon in the line arrangement $\mathcal{A}(L)$.} \label{fig:path} \end{figure} At the first vertex $v_P$ of $\Gamma$, we compute the simplex ordering in polygon $P$, the RU decomposition for the boundary matrix in $P$, and the (birth, death) simplex pairs in $P$. The polygon $P$ stores a reference to its (birth, death) simplex pairs. To store the current simplex ordering, every simplex stores a reference to its index in the current ordering (initialized to the ordering in $P$). We traverse the path $\Gamma$. As we walk from one polygon $P_1$ to the next polygon $P_2$ by crossing an edge $e$ in $\mathcal{A}(L)$, we update the simplex ordering, the RU decomposition, and the (birth, death) simplex pairs. To update the simplex ordering, we recall that edge $e$ stores a reference to the simplex pair $(\sigma, \tau)$ such that the simplex orderings in $P_1$ and $P_2$ are related via the transposition of $\sigma$ and $\tau$. We update the order by swapping the indices that $\sigma$ and $\tau$ store. To update the RU decomposition and the (birth, death) simplex pairs, we apply the update algorithm of \cite{vineyards}. In $P_2$, we store the new (birth, death) simplex pairs.
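The traversal described above can be sketched as follows. In this minimal Python version (hypothetical adjacency-list representation of the dual graph $G$), the depth-first search itself selects a spanning tree and emits the path $\Gamma$, recording a vertex each time the walk enters or returns to it; one could instead compute a minimum spanning tree first, as in the text.

```python
def polygon_traversal(adjacency, start):
    """Walk a spanning tree of the dual graph G depth-first, returning a
    path (list of vertices, with repeats) that visits every vertex in
    the connected component of `start` and crosses each tree edge twice.

    adjacency -- dict: vertex -> iterable of neighboring vertices
    """
    path, visited = [], set()

    def dfs(v):
        visited.add(v)
        path.append(v)
        for w in adjacency[v]:
            if w not in visited:
                dfs(w)
                path.append(v)  # walk back across the tree edge

    dfs(start)
    return path
```

Each tree edge is crossed at most twice, so the returned path has length at most $2|V| - 1$.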
\subsubsection{Querying the PD bundle} We consider the scenario in which a user would like to query many points $t \in \mathcal{T}$ in real time and see the $p$th persistence diagram $PD_p(f(\cdot, t))$ associated with each point $t$ that is queried. To compute the $p$th persistence diagram $PD_p(f(\cdot, t))$ associated with a given $t$, we first identify the polygon $P$ of $\mathcal{A}(L)$ that contains $t$. This is a well-studied problem in computational geometry; it is known as the \emph{point-location problem}. When one is planning to perform many point-location queries on the same line arrangement $\mathcal{A}(L)$ (i.e., if one is querying many points $t \in \mathcal{T}$), the standard strategy is to precompute a data structure so that the subsequent point-location queries can be done efficiently. There are many strategies for doing this (see e.g., Chapter 38 of \cite{handbook}). One method is the slab-and-persistence method \cite{slab}, in which one precomputes a ``persistent search tree'' for $\mathcal{A}(L)$. The slab-and-persistence method takes $O(k \log k)$ preprocessing time, $O(k)$ space, and $O(\log k)$ time per query, where $k$ is the number of vertices in $\mathcal{A}(L)$.\footnote{By Euler's formula, the number of edges in $\mathcal{A}(L)$ is bounded above by $3k - 6$ and the number of faces is bounded above by $2k - 4$. Therefore, the number of vertices, edges, and faces are all $O(k)$ \cite{compgeo}.} If the triangulation $\mathcal{T}$ is such that one can locate the triangle $T$ that contains the point $t$ in $O(1)$ time, then one can reduce the computational complexity by constructing separate persistent search trees for each triangle in $\mathcal{T}$. For example, if $\mathcal{T}$ is a triangulation of the form in Figure \ref{fig:triangulation grid}, then one can locate the triangle $T$ in constant time by examining the coordinates of $t$.
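As an illustration of such a constant-time lookup, suppose (hypothetically) that each unit cell of the grid is split by its diagonal into a lower and an upper triangle; then the triangle containing $t$ follows from the integer and fractional parts of its coordinates:

```python
import math

def locate_triangle(t, nx, ny):
    """Locate the triangle of a triangulated nx-by-ny unit grid that
    contains the point t = (x, y), in O(1) time.

    Each unit cell [i, i+1] x [j, j+1] is assumed split by a diagonal
    into a lower triangle (fractional part fx >= fy) and an upper
    triangle (fx < fy).  Returns (i, j, which) with which in {0, 1}.
    """
    x, y = t
    i = min(int(math.floor(x)), nx - 1)
    j = min(int(math.floor(y)), ny - 1)
    fx, fy = x - i, y - j
    return (i, j, 0 if fx >= fy else 1)
```

The clamping via `min` handles points on the top or right boundary of the grid; any other fixed splitting convention works the same way.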
Using separate persistent search trees for the planar subdivisions in each triangle, the slab-and-persistence method takes $O(\sum_{T \in \mathcal{T}} k_T \log(k_T))$ preprocessing time, $O(\sum_{T \in \mathcal{T}} k_T)$ space, and $O(\max_{T \in \mathcal{T}} \log k_T)$ time per query, where $k_T$ is the number of vertices in $\mathcal{A}(L) \cap T$. The pairing function in polygon $P$ was precomputed in the previous step (see Section \ref{sec:simplexpairs}). For every (birth, death) pair $(\sigma_b, \sigma_d)$ of simplices, $PD_p(f(\cdot, t))$ has a point with coordinates $(f(\sigma_b, t), f(\sigma_d, t))$ if $\dim(\sigma_b) = p$. For every unpaired $p$-dimensional simplex $\sigma_b$, the diagram $PD_p(f(\cdot, t))$ has a point with coordinates $(f(\sigma_b, t), \infty)$. \subsection{Generalizing to higher-dimensional $\mathcal{T}$}\label{sec:high} In higher dimensions, we are somewhat limited by the availability of computational geometry data structures and algorithms. When $n = 2$, we use the DCEL data structure to represent the partition of $\mathcal{T}$ into polygons, and we can use one of several algorithms for solving the point-location problem. To the best of my knowledge, there is no analogous data structure (yet) for storing a partition of a space into $n$-dimensional polyhedrons for arbitrary $n \geq 3$, and there are no algorithms (yet) for solving the point-location problem in higher dimensions. Otherwise, the algorithm of Section \ref{sec:2D} (as outlined at the beginning of Section \ref{sec:compute}) requires almost no modifications for higher-dimensional $\mathcal{T}$. The partition of $\mathcal{T}$ into polygons is replaced by a partition of $\mathcal{T}$ into $n$-dimensional polyhedrons, where $n = \dim(\mathcal{T})$. Only the first step (computing the polyhedrons) requires a meaningful modification, which I describe below.
When $n = 2$, the intersection of $I(\sigma, \tau)$ with a triangle $T \in \mathcal{T}$ is the intersection of a line with $T$, which is a line segment $L_{\sigma, \tau, T}$. These line segments completely determine the polygonal partition of $\mathcal{T}$ because the line segments are the faces of the polygons. In turn, each line segment $L_{\sigma, \tau, T}$ is completely determined by the intersection of $L_{\sigma, \tau, T}$ with the $1$-skeleton of $\mathcal{T}$; this intersection is a pair $(v_{\sigma, \tau, T}, w_{\sigma, \tau, T})$ of points. We computed the set $\{(v_{\sigma, \tau, T}, w_{\sigma, \tau, T})\}_{\sigma, \tau, T}$ by restricting the fibered filtration function $f$ to a path through the $1$-skeleton of $\mathcal{T}$ and applying the Bentley--Ottmann planesweep algorithm. In general, the intersection of $I(\sigma, \tau)$ with an $n$-simplex $T \in \mathcal{T}$ is the intersection of an $(n-1)$-dimensional hyperplane $H_{\sigma, \tau, T}$ with $T$, which is an $(n-1)$-dimensional polyhedron $P_{\sigma, \tau, T}$. The set $\{P_{\sigma, \tau, T}\}_{\sigma, \tau, T}$ completely determines the polyhedral partition of $\mathcal{T}$ that is given by Proposition \ref{prop:polyhedrons_constant} because the polyhedrons $P_{\sigma, \tau, T}$ are the $(n-1)$-dimensional faces of the $n$-dimensional polyhedrons in the partition. In turn, each polyhedron $P_{\sigma, \tau, T}$ is completely determined by its intersection with the $1$-skeleton of $\mathcal{T}$, as follows. The $m$-dimensional faces of $P_{\sigma, \tau, T}$ are the set $\{H_{\sigma, \tau, T} \cap F \mid F \text{ is an } (m+1)\text{-dimensional face of } T \text{ and } H_{\sigma, \tau, T} \cap F \neq \emptyset \}$.
For every $(m+1)$-dimensional face $F$ of $T$ such that $H_{\sigma, \tau, T} \cap F \neq \emptyset$, the intersection $H_{\sigma, \tau, T} \cap F$ is the $m$-dimensional polyhedron whose $(m-1)$-dimensional faces are the set $\{H_{\sigma, \tau, T} \cap g \mid g \text{ is an } m\text{-dimensional face of } F\}$. By induction, the faces of $P_{\sigma, \tau, T}$ are determined by $\{H_{\sigma, \tau, T} \cap e \mid e \text{ is a } 1 \text{-dimensional face of } T \text{ (i.e., an edge)}\}$, which are the vertices of $P_{\sigma, \tau, T}$. Consequently, we can compute each $P_{\sigma, \tau, T}$ by computing its vertices. As in the case in which $\dim(\mathcal{T}) = 2$, we do this by restricting the fibered filtration function $f$ to a path through the $1$-skeleton of $\mathcal{T}$ and applying the Bentley--Ottmann planesweep algorithm.
Such higher-dimensional computational geometry algorithms may be useful for other applications as well. \item Can we generalize the algorithm to non-piecewise-linear fibered filtration functions? For piecewise-linear fibered filtration functions, we used the fact that the base space $\mathcal{T}$ can be partitioned into polyhedrons such that there is a single PD ``template'' (a list of (birth, death) simplex pairs) for each polyhedron. The template can then be used to obtain $PD_p(f_t)$ at any point $t$ in the polyhedron. For ``generic'' fibered filtration functions, I showed in \cite{pd_bundle} that the base space $\mathcal{T}$ is stratified such that for each $\dim(\mathcal{T})$-dimensional stratum, there is a single PD template that can be used to obtain $PD_p(f_t)$ at any point $t$ in the stratum. \end{itemize} \section*{Acknowledgements} I thank Michael Lesnick and Nina Otter for helpful discussions about the algorithm for computing PD bundles.
https://arxiv.org/abs/1210.0729
An Algorithm to Solve Polyhedral Convex Set Optimization Problems
An algorithm which computes a solution of a set optimization problem is provided. The graph of the objective map is assumed to be given by finitely many linear inequalities. A solution is understood to be a set of points in the domain satisfying two conditions: the attainment of the infimum and minimality with respect to a set relation. In the first phase of the algorithm, a linear vector optimization problem, called the vectorial relaxation, is solved. The resulting pre-solution yields the attainment of the infimum but, in general, not minimality. In the second phase of the algorithm, minimality is established by solving certain linear programs in combination with vertex enumeration of some values of the objective map.
\section{Introduction} In this paper we consider the problem of minimizing a set-valued map $F:\mathbb{R}^n\rightrightarrows\mathbb{R}^q$ with polyhedral convex graph with respect to the relation \[ F(x) \preceq F(u) \quad:\iff\quad F(x) +C \supset F(u), \] where $C$ denotes a polyhedral convex ordering cone that contains no lines and has nonempty interior. The objective map can be considered as a function from $\mathbb{R}^n$ into the space $\mathcal{G}$ of all closed convex subsets of $\mathbb{R}^q$. With the above ordering relation, one obtains a complete lattice, i.e., the infimum $\inf\cb{F(x)\st x\in\mathbb{R}^n}$ in the sense of a greatest lower bound with respect to $\preceq$ always exists. Solution concepts for complete-lattice-valued problems have been introduced in \cite{HeyLoe11}. The main idea is that, beyond scalar optimization, {\em minimality} and {\em infimum attainment} are two different conditions, and a solution should involve both. Such a solution concept is also useful in vector optimization \cite{LoeTam07,HeyLoe11,Loehne11,HamLoeRud12}. In a set-valued framework, it has been used, for instance, in \cite{H09,HamLoe12,HamSch12}. Applications of set optimization based on the above ordering relation and solution concept can be found in mathematical finance in the framework of markets with frictions; see e.g. \cite{HamRud08,HamHey09,HHR11,HamelRudloffYankova12,LoeRud11,HamLoeRud12}. But, in specific calculations, only the infimum attainment has been considered so far. The algorithm presented in this note also ensures minimality. Optimization problems with a set-valued objective function, based in part on other ordering relations or other solution concepts, have been investigated by many authors. References and results can be found, for instance, in \cite{Luc88,GoeTamRiaZal03,Jahn04,Hamel05,BotGraWan09,HerRodSam10,Loehne11,JahHa11}.
\section{Preliminaries} A set $P \subseteq \mathbb{R}^q$ is said to be {\em polyhedral convex} if there is a representation \begin{equation}\label{eq_H} P=\bigcap_{i=1}^r \cb{y \in \mathbb{R}^q \st (w^i)^T y \geq \gamma_i} \end{equation} where $w^1, \ldots, w^r \in \mathbb{R}^q\setminus\cb{0}$ and $\gamma_1, \ldots, \gamma_r \in \mathbb{R}$. Equation \eqref{eq_H} is called an {\em H-representation} of $P$. Every non-empty polyhedral convex set $P \subseteq \mathbb{R}^q$ can be expressed as a (generalized) convex hull of finitely many points $x^1, \ldots, x^s \in \mathbb{R}^q$ ($s\in \cb{1,2,3,\dots}$) and finitely many directions $d^1, \ldots, d^t \in \mathbb{R}^q\setminus \cb{0}$ ($t \in \cb{0,1,2,\dots}$) through \begin{equation}\label{eq_V} P = \cb{ \sum_{i=1}^s \lambda_i x^i + \sum_{j=1}^t \mu_j d^j \bigg|\; \lambda_i \geq 0,\; \sum_{i=1}^s \lambda_i = 1,\; \mu_j \geq 0}, \end{equation} where $d \in \mathbb{R}^q\setminus\{0\}$ is called a direction of $P$ if $P + \cb{\lambda\cdot d} \subseteq P$ for all $\lambda>0$. This can also be written with the convex hull of the points and the cone generated by the directions as $P = \conv \cb{x^1, \ldots,x^s} + {\rm cone\,}\cb{d^1, \ldots, d^t}$. We set ${\rm cone\,} \emptyset = \cb{0}$; thus, $P$ is bounded if and only if $t=0$. Equation \eqref{eq_V} is called a {\em V-representation} of $P$. Numerical methods to compute a V-representation from an H-representation and vice versa are called {\em vertex enumeration}; see e.g. \cite{BarDobHuh96,BreFukMar98}. We denote by $\cl P$ and $\Int P$, respectively, the closure and interior of a set $P \subset \mathbb{R}^q$. We assume throughout that $C \subset \mathbb{R}^q$ is a pointed (i.e., $C \cap (-C)= \cb{0}$) polyhedral convex cone with $\Int C \neq \emptyset$. The cone $C$ yields a partial ordering $\leq_C$ on $\mathbb{R}^q$ where $y \leq_C v$ is defined by $v-y \in C$.
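Given an H-representation $C=\cb{y \in \mathbb{R}^q \st Z^T y \geq 0}$ of the cone, as is assumed below, checking $y \leq_C v$ reduces to finitely many scalar inequalities; a minimal numpy sketch (the function name and tolerance are illustrative):

```python
import numpy as np

def leq_C(y, v, Z, tol=1e-9):
    """Check y <=_C v, i.e., v - y in C = {w : Z^T w >= 0}."""
    w = np.asarray(v, dtype=float) - np.asarray(y, dtype=float)
    return bool(np.all(Z.T @ w >= -tol))
```

With $Z$ the identity matrix, $C = \mathbb{R}^q_+$ and $\leq_C$ is the component-wise ordering $\leq$.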
If $C=\mathbb{R}^q_+ :=\cb{y\in \mathbb{R}^q \st y_1 \geq 0, \ldots , y_q \geq 0}$, the component-wise ordering $\leq_{\mathbb{R}^q_+}$ is abbreviated to $\leq$. The polar cone of $C$ is the set $C^\circ:=\cb{v \in \mathbb{R}^q \st \forall y \in C: v^T y \leq 0}$. A point $y \in \mathbb{R}^q$ is said to be {\em $C$-minimal} in $P\subset \mathbb{R}^q$ if $y \in P$ and $(\cb{y} -C\setminus\{0\}) \cap P = \emptyset$. We assume that an H-representation of $C$ is given, that is, a matrix $Z\in \mathbb{R}^{q \times p}$ such that \begin{equation}\label{eq_c} C=\cb{y \in \mathbb{R}^q \st Z^T y \geq 0}. \end{equation} Let $\mathcal{G}$ denote the family of all closed convex subsets of $\mathbb{R}^q$. By $\mathcal{G}_C$ we denote the subfamily of those elements $P$ of $\mathcal{G}$ having the additional property $P=P+C$. For $P,Q \in \mathcal{G}$ we define \[ P \preceq Q \quad :\iff \quad P + C \supset Q + C,\] which can be equivalently written as $P + C \supset Q$. The ordering $\preceq$ is reflexive and transitive in $\mathcal{G}$ (quasi ordering) and, additionally, antisymmetric in $\mathcal{G}_C$ (partial ordering). For $P,Q \in \mathcal{G}$ we define an equivalence relation by \[ P \sim Q \quad :\iff \quad P + C = Q + C. \] Clearly, the quotient space $\mathcal{G}/\!\sim$ is isomorphic to $\mathcal{G}_C$ and thus $\preceq$ is a partial ordering in $\mathcal{G}/\!\sim$. \begin{remark}\label{rem1} In a theoretical framework the space $\mathcal{G}_C$ is often more convenient and leads to easier formulations. From a computational viewpoint, however, the usage of $\mathcal{G}$ and $\mathcal{G}/\!\sim$ seems to be more natural. This is due to the fact, that an H/V-representation of some $P \in \mathcal{G}$ with $P\subsetneq P+C$ might be known whereas getting an H/V-representation of $P+C$ would require computational effort. 
\end{remark} The partially ordered set $(\mathcal{G}/\!\!\sim,\preceq)$ provides a {\em complete lattice}, i.e., for every subset of $\mathcal{G}/\!\!\sim$ there exist the infimum and supremum, see e.g. \cite{Loehne11} for more details. To simplify the notation we express the infimum and supremum in terms of the (quasi-ordered) space $(\mathcal{G},\preceq)$, where we have in mind that we actually deal with representatives of equivalence classes. Thus, for nonempty sets $\P \subseteq \mathcal{G}$ we have \[ \inf \P = \cl\conv \bigcup_{P \in \P} (P+C) \qquad \sup \P = \bigcap_{P \in \P} (P+C). \] Furthermore, we set $\inf \emptyset = \emptyset$ and $\sup \emptyset = \mathbb{R}^q$. To express minimality we define for $P,Q \in \mathcal{G}$: \[ P \precneq Q \quad:\iff\quad ( P\preceq Q \text{ and } P\nsim Q).\] Let $F:\mathbb{R}^n\rightrightarrows\mathbb{R}^q$ be a {\em polyhedral convex} set-valued map, that is, its graph \[\gr F :=\cb{(x,y)\in \mathbb{R}^n\times\mathbb{R}^q\st y\in F(x)}\] is a polyhedral convex set. We assume throughout that an H-representation of $\gr F$ is known. This means, there are $A\in\mathbb{R}^{m\times n}$, $B\in \mathbb{R}^{m \times q}$ and $b \in \mathbb{R}^m$ such that \begin{equation}\label{eq_grH} \gr F :=\cb{(x,y)\in \mathbb{R}^n\times\mathbb{R}^q \st A x + B y \geq b}. \end{equation} The {\em domain} of $F$ is the set ${\rm dom\,} F := \cb{x \in \mathbb{R}^n \st F(x)\neq \emptyset}$. We consider the following set optimization problem: \begin{equation}\tag{P}\label{p} \text{ minimize } F: \mathbb{R}^n\rightrightarrows\mathbb{R}^q \text{ with respect to } \preceq. \end{equation} Problem \eqref{p} is called {\em feasible} if ${\rm dom\,} F \neq \emptyset$. We assume throughout that \eqref{p} is {\em bounded} in the sense that \[ \exists v \in \mathbb{R}^q:\; \cb{v} \preceq \inf_{x \in \mathbb{R}^n} F(x). 
\] In our setting, it is sufficient for \eqref{p} to be bounded that ${\rm dom\,} F$ is a bounded set and \[ \forall x \in \mathbb{R}^n,\exists v\in\mathbb{R}^q: \; \cb{v} \preceq F(x),\] where the latter condition is obviously satisfied for a map $F$ with bounded values $F(x)$. Note that the algorithm introduced below can verify whether the problem is bounded or not. The following solution concept is based on a combination of minimality and infimum attainment, as these notions no longer coincide in vector and set optimization. It is an adaptation of the concepts introduced in \cite{HeyLoe11,Loehne11,HamLoeRud12} to the present setting. \begin{definition} A point $\bar x \in {\rm dom\,} F$ is said to be a {\em minimizer} for \eqref{p} if there is no $x \in \mathbb{R}^n$ with $F(x) \precneq F(\bar x)$. A finite set $\bar X \subseteq {\rm dom\,} F$ is called a {\em finite infimizer} for \eqref{p} if the infimum is attained in $\bar X$, that is, \[ \inf_{x \in \bar X} F(x) = \inf_{x \in \mathbb{R}^n} F(x). \] A finite infimizer $\bar X$ of \eqref{p} is called a {\em solution} to \eqref{p} if it consists of only minimizers. \end{definition} \section{Vectorial relaxation and pre-solution} Consider the linear function $f:\mathbb{R}^n\times\mathbb{R}^q \to \mathbb{R}^q$, $f(x,y)=y$. For formal reasons, we understand $f$ as a set-valued map whose values are singleton sets, i.e., \[ f:\mathbb{R}^n\times\mathbb{R}^q\rightrightarrows\mathbb{R}^q, \quad f(x,y)=\cb{y}.\] The {\em vectorial relaxation} of the set optimization problem \eqref{p} is defined as: \begin{equation}\tag{VR}\label{r} \text{ minimize } f: \mathbb{R}^n\times\mathbb{R}^q\rightrightarrows\mathbb{R}^q \text{ with respect to } \preceq \text{ subject to } y \in F(x). \end{equation} Of course, \eqref{r} can be seen as a special case of a set optimization problem, whence the above definitions also apply to \eqref{r}. Obviously, \eqref{r} is feasible if and only if so is \eqref{p}. 
As $f$ is single-valued and the constraint $y \in F(x)$ can be expressed by finitely many linear inequalities, \eqref{r} is (equivalent to) a linear vector optimization problem. We have \begin{equation}\label{eq_inf} \inf_{x \in \mathbb{R}^n} F(x) = \inf_{x \in \mathbb{R}^n} \inf_{y \in F(x)} \cb{y} = \inf_{x\in \mathbb{R}^n\!\!,\,y \in F(x)} f(x,y), \end{equation} i.e., \eqref{p} and \eqref{r} have the same infima. This implies that \eqref{p} is bounded if and only if so is \eqref{r}. Equation \eqref{eq_inf} motivates the following concept. \begin{definition} A finite set $\{x^i \in \mathbb{R}^n \st i=1,\dots,k\}$ is called a pre-solution of \eqref{p} if there exist $y^i \in \mathbb{R}^q$, $i=1,\dots,k$ such that $\{(x^i,y^i)^T \in \mathbb{R}^n\times\mathbb{R}^q \st i=1,\dots,k\}$ is a solution of the vectorial relaxation \eqref{r} of \eqref{p}. \end{definition} The following example shows that the $x$-component of a minimizer of \eqref{r} is in general not a minimizer of \eqref{p}. This means that a pre-solution of \eqref{p} is in general not a solution of \eqref{p}. 
\begin{example} Consider the set-valued map $F:\mathbb{R}^2 \rightrightarrows \mathbb{R}^2$ where, according to \eqref{eq_grH}, $\gr F \subset \mathbb{R}^4$ is given by \[ \renewcommand{\arraystretch}{0.8} A=\left(\begin{array}{rr} 1 & 0 \\ 0 & 1 \\ -1 & 0\\ 0 & -1\\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 1 \\ 0 & -1\\ -2 & 2\\ -1 & 1\\ 2 & -2\\ 1 & -1 \end{array}\right)\quad B=\left(\begin{array}{rr} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ -1 & -1\\ 2 & 1 \\ 1 & 2 \\ 1 & 1 \\ 1 & 0 \\ 0 & 1 \\ 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{array}\right)\quad b=\left(\begin{array}{r} -1 \\ 0 \\ -1 \\ -1 \\ -3 \\ 2 \\ 2 \\ 2 \\ 0 \\ -1 \\ 0 \\ 0 \\ -2 \\ -1 \end{array}\right)\quad \] Setting \[ x^1:=(-1,0)^T,\; x^2:=(-1,1)^T,\; x^3:=(0,0),\; x^4:=(0,1)^T,\] we see that \[ \renewcommand{\arraystretch}{1.6} \begin{array}{l} F(x^1)= \conv\cb{\of{0,2}^T,\of{2,0}^T,\of{0,3}^T,\of{3,0}^T}, \\ F(x^2)= \conv\cb{\of{0,2}^T,\of{-1,4}^T,\of{1,2}^T},\\ F(x^3) = \conv\cb{\of{0,2}^T,\of{2,0}^T,\of{0,3}^T,\of{4,-1}^T},\\ F(x^4) = \conv\cb{\of{0,2}^T,\of{2,0}^T,\of{-1,4}^T,\of{3,0}^T}. \end{array} \] Consider the problems \eqref{p} and \eqref{r} for the ordering cone $C:=\mathbb{R}^2_+$. Set \[ y^1:=(0,2)^T,\; y^2:=(2,0)^T,\; y^3:=(-1,4),\; y^4:=(4,-1)^T.\] The infimum for both problems \eqref{p} and \eqref{r} can be expressed as \[ \inf_{x \in \mathbb{R}^2} F(x) = \conv\cb{y^1,y^2,y^3,y^4}+\mathbb{R}^2_+ =:Q,\] where each of the points $y^1,\dots,y^4$ is $\leq_C$-minimal in the set $Q$. It follows that, for instance, the set $\cb{(x^1,y^1)^T,(x^1,y^2)^T,(x^2,y^3)^T,(x^3,y^4)^T}$ is a solution of \eqref{r}. Hence $\cb{x^1,x^2,x^3}$ is a pre-solution of \eqref{p}. But $\cb{x^1,x^2,x^3}$ is not a solution of \eqref{p}. Indeed, we have $F(x^3) \precneq F(x^1)$ and $F(x^4) \precneq F(x^2)$ which means $x^1$ and $x^2$ are not minimizers for problem \eqref{p}. Note further that $\cb{x^3,x^4}$ is a solution to \eqref{p}. 
\end{example} The following modification of an example provided by Frank Heyde shows that a solution to \eqref{p} is in general not a pre-solution to \eqref{p}. \begin{example} Let $C=\mathbb{R}^2_+$ and let $F:\mathbb{R}^2 \rightrightarrows \mathbb{R}^2$ with ${\rm dom\,} F = \{(x_1,x_2) \st x_1 \geq 0,\; x_1 + |x_2| \leq 1\}$ be defined by $$ F(x) := \cb{(z_1,z_2) \st z_1\geq -x_1+x_2,\, z_2\geq -x_1-x_2,\, z_1+z_2\geq x_1}.$$ The set $\{(0,1)^T, (0,-1)^T, (\frac{1}{2},0)^T\}$ is a solution, but not a pre-solution. Indeed, $(\frac{1}{2},0)^T$ is a minimizer for \eqref{p}, but no point of the form $(\frac{1}{2},0,y_1,y_2) \in \gr F$ is a minimizer for \eqref{r}. \end{example} As a consequence of the following statement we obtain that every solution to \eqref{p} contains a pre-solution to \eqref{p}. \begin{proposition}\label{prop:Corr2} Every finite infimizer for \eqref{p} contains a pre-solution to \eqref{p}. \end{proposition} \begin{proof} There are $y^1,\dots, y^r \in \mathbb{R}^q$ such that $\bar P = \inf_{x \in \mathbb{R}^n} F(x)$ holds for the polyhedron $\bar P :=\conv\cb{y^1,...,y^r}+C$. Without loss of generality we assume that $y^1,\dots, y^r$ are the vertices of $\bar P$. Hence, any set $\cb{(x^i,y^i)\st i=1,...,r}$ with $y^i\in F(x^i)$ is a solution to \eqref{r} and consequently $\cb{x^1,...,x^r}$ is a pre-solution to \eqref{p}. Let $\cb{\bar x^1,...,\bar x^k}$ be a finite infimizer to \eqref{p} and assume that $y^m\notin \bigcup_{j=1,...,k}F(\bar x^j)$ for some $m\in\cb{1,...,r}$. Then $y^m$ is not a vertex of $\inf_{j=1,\dots,k} F(\bar x^j) = \bar P$, a contradiction. \end{proof} \section{Algorithm} According to \eqref{eq_grH} and \eqref{eq_c}, let an instance of problem \eqref{p} be given by $A\in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{m\times q}$, $b \in \mathbb{R}^m$, $Z \in \mathbb{R}^{q \times p}$. The algorithm computes a solution to \eqref{p} if the problem is feasible and bounded. 
Otherwise it detects whether \eqref{p} is infeasible or unbounded. In the first phase of the algorithm, the vectorial relaxation \eqref{r}, which is (equivalent to) a linear vector optimization problem, is solved. A solution can be obtained, for instance, with Benson's algorithm, see e.g. \cite{Benson98a,EhrLoeSha12,ShaEhr08,ShaEhr08-1,Loehne11,HamLoeRud12}. We know that \eqref{p} is bounded if and only if so is \eqref{r}. But Benson's algorithm is able to detect if \eqref{r} is unbounded. Note further that in \cite{Loehne11}, Benson's algorithm was extended for unbounded linear vector optimization problems. In the second phase, for every point $x^0$ of the pre-solution obtained in the first phase, we construct a sequence $(x^0,x^1,x^2,\dots,x^l)$ with $F(x^0) \succneq F(x^1) \succneq F(x^2) \succneq \dots\succneq F(x^l)$ until, after finitely many steps, a minimizer $x^l$ is obtained. For parameters $w \in \mathbb{R}^q$ and $\bar x \in {\rm dom\,} F$, we consider the following scalar problem: \begin{equation}\tag{P($w$,$\bar x$)}\label{px} \text{ maximize } w^T y \;\text{ subject to }\; y \in F(x), \; F(x) \preceq F(\bar x) . \end{equation} As the $y_1,\dots,y_q\in\mathbb{R}$ are considered to be auxiliary variables, we use the following convention: $\hat x$ is said to be a solution to \eqref{px} if there exists $\hat y\in \mathbb{R}^q$ such that $(\hat x,\hat y)$ is a solution to \eqref{px} in the ordinary sense. In practice this means that $(\hat x,\hat y)$ is computed but only $\hat x$ is used. Obviously, we have the following lower bound $\beta$ for the optimal value $\alpha$ of \eqref{px}: \begin{equation}\label{eq_albe} \renewcommand{\arraystretch}{1.3} \begin{array}{ll} \alpha(w,\bar x) &:=\sup\cb{w^T y \st y \in F(x),\,F(x) \preceq F(\bar x)} \\ &\,\geq \sup\cb{w^T y \st y \in F(\bar x)}=:\beta(w,\bar x). \end{array} \end{equation} This leads to an optimality condition. \begin{lemma}\label{lem2} Let \eqref{p} be feasible and bounded. 
For some $\bar x \in {\rm dom\,} F$ let an H-representation of the set $F(\bar x)+C$ be given, that is, \[ F(\bar x)+C = \cb{y \in \mathbb{R}^q \big|\; \ofg{w^j}^T y \leq \gamma_j,\; j=1,\dots,r}.\] If $\alpha\ofg{w^j,\bar x}=\beta\ofg{w^j,\bar x}$ for all $j\in \cb{1,\dots,r}$, then $\bar x$ is a minimizer for \eqref{p}. \end{lemma} \begin{proof} Assume that $\bar x$ is not a minimizer for \eqref{p}, i.e., there exists $x \in \mathbb{R}^n$ with $F(x) \precneq F(\bar x)$. Hence there is some $y \in F(x)$ and some $c \in C$ such that $y+c \not\in F(\bar x)+C$. Since $C+C=C$, we conclude that $y \not\in F(\bar x)+C$. Thus there is some $j \in \cb{1,\dots,r}$ such that ${w^j}^T y > \gamma_j$ which implies $\alpha\ofg{w^j,\bar x}> \gamma_j \geq \beta\ofg{w^j,\bar x}$. \end{proof} With the aid of a V-representation of the set $F(\bar x)+C$, that is, \begin{equation}\label{eq_Vrep} F(\bar x)+C = \conv\cb{y^1,\dots,y^s}+C, \end{equation} \eqref{px} can be transformed into a linear program. Note that the assumption that \eqref{p} is bounded was used in \eqref{eq_Vrep}, otherwise the cone on the right hand side can be a superset of $C$. By \eqref{eq_grH}, the first constraint $y\in F(x)$ can be expressed as $Ax+By \geq b$. 
The second constraint can be transformed as follows: \begin{equation}\label{eq_constr} \renewcommand{\arraystretch}{1.1} \begin{array}{lcl} F(x) \preceq F(\bar x) &\iff& F(x)+C \supseteq F(\bar x) + C \\ &\iff& \forall i=1,\dots,s:\; y^i \in F(x)+C\\ &\iff& \forall i=1,\dots,s,\; \exists c^i \in C: y^i - c^i\in F(x)\\ &\iff& \forall i=1,\dots,s,\; \exists c^i \in C: A x - B c^i \geq b - B y^i \end{array} \end{equation} Thus, \eqref{px} is equivalent to the linear program \begin{equation}\tag{P($w;y^1,\dots,y^s$)}\label{py} \renewcommand{\arraystretch}{1.1} \text{ max } w^T y \;\text{ s.t.} \cb{\begin{array}{rll} Ax+By \!\!&\geq b & \\ Ax - Bc^i \!\!&\geq b - B y^i &(i=1,\dots,s)\\ Z^T c^i \!\!&\geq 0 &(i=1,\dots,s) \end{array}} \end{equation} which has $n+q(s+1)$ variables and $m+ms+ps$ constraints. According to the above convention we will speak about a solution $x\in \mathbb{R}^n$ and do not mention the $q(s+1)$ auxiliary variables $y,c^1,\dots,c^s\in \mathbb{R}^q$. \smallskip \subsubsection*{Algorithm \bfseries{\scshape{SetOpt}}.} Input:\\ \indent H-representation of $\gr F$ according to \eqref{eq_grH}: $A\in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{m \times q}$, $b \in \mathbb{R}^m$;\\ \indent H-representation of the ordering cone according to \eqref{eq_c}: $Z\in \mathbb{R}^{q\times p}$; \\ \noindent Output:\\ \indent A solution $\bar X$ of \eqref{p} if \eqref{p} is feasible and bounded, $\bar X = \emptyset$ otherwise;\\ \indent The solution status for \eqref{p};\\ \noindent Phase 1:\\ \indent $\bar X \leftarrow \emptyset$;\\ \indent solve \eqref{r};\\ \indent {\bf if} \eqref{r} is infeasible {\bf then} $status \leftarrow \text{\em''\eqref{p} is infeasible.''}$; stop; {\bf end}; \\ \indent {\bf if} \eqref{r} is unbounded {\bf then} $status \leftarrow \text{\em''\eqref{p} is unbounded.''}$; stop; {\bf end};\\ \indent store a pre-solution $\cb{x^1,\dots,x^k}$ of \eqref{p};\\ \noindent Phase 2:\\ \indent {\bf for} $i \leftarrow 1$ {\bf to} $k$ {\bf do}\\ 
\indent\indent flag $\leftarrow$ 1;\\ \indent\indent$K \leftarrow \emptyset$;\\ \indent\indent {\bf while} $flag=1$ {\bf do}\\ \indent\indent\indent compute a V-representation and an H-representation of $F(x^i)+C$:\\ \indent\indent\indent\indent $\begin{array}{ll} F(x^i)+C &=\conv\cb{y^1,\dots,y^s}+C\\ &=\cb{y \in \mathbb{R}^q \big|\; \of{w^j}^T y \leq \gamma_j,\; j=1,\dots,r}; \end{array} $ \\ \indent\indent\indent {\bf for} $j \leftarrow 1$ {\bf to} $r$ {\bf do}\\ \indent\indent\indent\indent {\bf if} $w^j/\|w^j\| \notin K$ {\bf then}\\ \indent\indent\indent\indent\indent $K \leftarrow K \cup \cb{w^j/\|w^j\|}$;\\ \indent\indent\indent\indent\indent $\text{solve (P($w^j$; $y^1,\dots, y^s$))}$; $x^i \leftarrow$ solution; $\alpha \leftarrow$ optimal value;\\ \indent\indent\indent\indent\indent $\beta \leftarrow \max\cb{\ofg{w^j}^T y^1,\dots,\ofg{w^j}^T y^s}$;\\ \indent\indent\indent\indent\indent {\bf if} $\alpha > \beta$ {\bf then} break (i.e., exit the inner-most loop);\\ \indent\indent\indent\indent {\bf end}; \\ \indent\indent\indent\indent {\bf if} $j=r$ {\bf then} $flag \leftarrow$ 0; \\ \indent\indent\indent {\bf end}; \\ \indent\indent {\bf end};\\ \indent\indent $\bar X \leftarrow \bar X \cup \cb{x^i}$;\\ \indent {\bf end};\\ \indent $status \leftarrow \text{\em''\eqref{p} has been solved.''}$; \bigskip \noindent We next show that the algorithm works correctly and is finite. We prepare the theorem with two lemmas. \begin{lemma}\label{lem0} Let \eqref{p} be feasible and bounded. Consider some $\bar x \in {\rm dom\,} F$, a halfspace $H=\cb{y \in \mathbb{R}^q \st w^T y \leq \gamma}$ containing the set $F(\bar x) + C$ and finitely many points $y^1,\dots,y^s\in \mathbb{R}^q$ such that \eqref{eq_Vrep} holds. 
Then, \begin{itemize} \item[(i)] The linear program {\rm(P($w$; $y^1,\dots y^s$))} has an optimal solution; \item[(ii)] The lower bound $\beta$ defined in \eqref{eq_albe} can be expressed as $$\beta(w,\bar x) = \max\cb{w^T y^1,\dots,w^T y^s}.$$ \end{itemize} \end{lemma} \begin{proof} Let $\bar y \in F(\bar x)$. Then, $A \bar x + B \bar y \geq b$. By \eqref{eq_Vrep}, for all $i\in \cb{1,\dots,s}$, we have $y^i \in F(\bar x) +C$, i.e., there is some $c^i\in C$ such that $y^i-c^i\in F(\bar x)$, or equivalently, $A \bar x - B c^i \geq b - B y^i$. Hence, the point $(\bar x,\bar y,c^1,\dots,c^s)$ is feasible for \eqref{py}. Since \eqref{p} is assumed to be bounded, there exists $v \in \mathbb{R}^q$ such that $\cb{v} + C \supset F(x)$ for all $x \in \mathbb{R}^n$. As $H$ contains $F(\bar x) + C$, we must have $w \in C^\circ$. It follows that \[ \sup\cb{w^T y \st x\in\mathbb{R}^n\!, y \in F(x)} \leq w^T v +\sup\cb{w^T c\st c \in C} = w^T v,\] which implies that \eqref{px} and hence \eqref{py} is bounded. This proves (i). Statement (ii) follows from \eqref{eq_Vrep} taking into account that $\sup_{c \in C} w^T c = 0$ for $w \in C^\circ$. \end{proof} \begin{lemma}\label{lem1} Let \eqref{p} be feasible and bounded and let $\bar x \in {\rm dom\,} F$. If $\hat x \in \mathbb{R}^n$ is a solution of \eqref{px} for some $w \in C^\circ$, then $\alpha(w,u) = \beta(w,u)$ for every $u\in \mathbb{R}^n$ with $F(u)\preceq F(\hat x)$. \end{lemma} \begin{proof} Obviously, we have $\beta(w,u)\leq \alpha(w,u)$. For $u\in \mathbb{R}^n$ with $F(u)\preceq F(\hat x)$, we get \[ \renewcommand{\arraystretch}{1.2} \begin{array}{ll} \alpha(w,u) &=\sup\cb{w^T y \st y \in F(x),\,F(x) \preceq F(u)} \\ &\leq \sup\cb{w^T y \st y \in F(x),\,F(x) \preceq F(\hat x)} = \alpha(w,\hat x). 
\end{array} \] Since $F(u)\preceq F(\hat x)$ can be written as $F(u) \supseteq F(\hat x)+C$ and, as $w \in C^\circ$, we obtain \[ \beta(w,u)= \sup\cb{w^T y \st y \in F(u)} \geq \sup\cb{w^T y \st y \in F(\hat x)} = \beta(w,\hat x).\] The point $\hat x$ being a solution of \eqref{px} implies that there exists $\hat y\in F(\hat x)$ such that $\beta(w,\hat x)\geq w^T \hat y = \alpha(w,\bar x)\geq\alpha(w,\hat x)$. Altogether we get $\alpha(w,u)\leq \alpha(w,\hat x) \leq \beta(w,\hat x) \leq \beta(w,u)\leq \alpha(w,u)$, which yields the desired equality. \end{proof} \begin{theorem} {\scshape SetOpt} computes a solution of (P) whenever \eqref{p} is feasible and bounded. Otherwise {\scshape SetOpt} states whether \eqref{p} is infeasible or unbounded. If the H-representation computed in phase 2 contains no redundant inequalities, {\scshape SetOpt} terminates after finitely many steps. To be more precise, let the pre-solution computed in the first phase consist of $k$ points and let $l$ be the number of linear inequalities necessary to describe the polyhedral convex set $\gr F + (0_{\mathbb{R}^n}\times C)$. In the second phase of {\scshape SetOpt}, at most $l\cdot k$ linear programs have to be solved. \end{theorem} \begin{proof} If \eqref{p} is infeasible or unbounded, so is \eqref{r} and the algorithm terminates with the corresponding status. If \eqref{p} is feasible and bounded, a pre-solution $\cb{x^1,\dots,x^k}$, $k\geq 1$ of \eqref{p} is obtained by solving \eqref{r}. For fixed $i \in \cb{1,\dots,k}$, denote by $\bar x^i$ the value of the variable $x^i$ before the while loop has been entered and let $\hat x^i$ be the value after the while loop has been left. Then, $\bar X:=\cb{\bar x^1,\dots,\bar x^k}$ is the pre-solution computed in phase 1 and $\hat X:=\cb{\hat x^1,\dots,\hat x^k}$ is the result of the algorithm. 
We will show that for all $i \in \cb{1,\dots,k}$, \begin{equation}\label{eq_th41} F\ofg{\hat x^i} \preceq F\ofg{\bar x^i} \qquad \text{and} \qquad \not\exists\, x \in \mathbb{R}^n: F(x) \precneq F\ofg{\hat x^i}. \end{equation} The first condition in \eqref{eq_th41} implies \[ \inf_{x \in \hat X} F(x) \preceq \inf_{x \in \bar X} F(x).\] Since $\bar X$ is a pre-solution of \eqref{p}, the infimum is attained in $\bar X$, that is, \[ \inf_{x \in \bar X} F(x) = \inf_{x \in \mathbb{R}^n} F(x).\] It follows that the infimum is also attained in $\hat X$. The second condition in \eqref{eq_th41} states that $\hat X$ consists of only minimizers, whence $\hat X$ is a solution to \eqref{p}. To show \eqref{eq_th41}, let $i \in \cb{1,\dots,k}$ be fixed. The first condition of \eqref{eq_th41} follows directly from the constraint $F(x) \preceq F\ofg{x^i}$ of the equivalent formulation (P($w^j,x^i$)) of the linear program (P($w^j$; $y^1,\dots, y^s$)). Note that by Lemma \ref{lem0} an optimal solution of (P($w^j$; $y^1,\dots, y^s$)) always exists in the case where \eqref{p} is feasible and bounded. The while loop is left only after $flag$ has been set to zero. This requires $r$ iterations in the inner for loop, which occurs only if $\alpha=\alpha(w^j,\hat x^i)$ equals $\beta=\beta(w^j,\hat x^i)$ for every $j \in \cb{1,\dots,r}$. In case of $w^j/\|w^j\|\in K$, this is known by Lemma \ref{lem1}, i.e., (P($w^j$; $y^1,\dots, y^s$)) does not need to be solved. But $\alpha(w^j,\hat x^i)=\beta(w^j,\hat x^i)$ for all $j \in \cb{1,\dots,r}$ implies that $\hat x^i$ is a minimizer, compare Lemma \ref{lem2}. To show finiteness, note first that there is a finite algorithm (such as Benson's algorithm) to solve \eqref{r} in phase 1. Consider the map $\tilde F:\mathbb{R}^n\rightrightarrows\mathbb{R}^q$, $\tilde F(x):=F(x)+C$. Of course, $\gr \tilde F = \gr F + (0_{\mathbb{R}^n}\times C)$ is a polyhedral convex set. 
Thus it can be expressed as \[ \gr \tilde F = \cb{(x,y)\in \mathbb{R}^n\times\mathbb{R}^q\st \tilde A x + \tilde B y \geq \tilde b},\] for some $\tilde A \in \mathbb{R}^{l \times n}$, $\tilde B \in \mathbb{R}^{l \times q}$, $\tilde b \in \mathbb{R}^l$. For every $x \in {\rm dom\,} F$, we have \[ F(x)+C=\cb{y \in \mathbb{R}^q \st \tilde B y \geq \tilde b - \tilde A x}.\] Consequently, every H-representation of $F(x)+C$ that contains no redundant inequalities consists of at most $l$ inequalities. In other words, at most $l$ different normalized vectors $w^j/\norm{w^j}$ can occur in the algorithm, which proves the claim. \end{proof} The theorem immediately implies the following existence result. \begin{corollary} If \eqref{p} is feasible and bounded, a solution exists. \end{corollary} To reduce the computational effort of the algorithm for specific problems, we suggest an additional rule. Let $\cb{(x_1,y_1),\dots,(x_k,y_k)}$ denote the solution of \eqref{r} obtained in phase 1 and consider iteration $i\in \cb{1,\dots,k}$ of the outer for loop in phase 2: \[ \text{\em If $y_j \in F(x_i)$ for $j$ with $i < j \leq k$ then skip all commands in iteration $j$.}\] Clearly, this rule maintains the attainment of the infimum and thus the algorithm still works correctly. Finally, we consider the special situation where $F:\mathbb{R}^n \to \mathcal{G}_C$, i.e., we have $F(x)=F(x)+C$ for all $x \in \mathbb{R}^n$. In this case, \eqref{eq_constr} can be replaced by \[ \begin{array}{lcl} F(x) \preceq F(\bar x) &\iff& F(x) \supseteq F(\bar x) \\ &\iff& \forall i=1,\dots,s:\; y^i \in F(x)\\ &\iff& \forall i=1,\dots,s: A x \geq b - B y^i\\ &\iff& A x \geq \max\cb{b - B y^i \st i=1,\dots,s}, \end{array} \] where the maximum is taken componentwise. This means that the linear program \eqref{py} has only $n+q$ variables and $2m$ constraints. 
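The componentwise maximum in the last equivalence can be assembled directly from $B$, $b$ and the points $y^1,\dots,y^s$. A minimal Python sketch (our own helper names, not part of {\scshape SetOpt}):

```python
def matvec(M, v):
    # matrix-vector product with plain lists
    return [sum(Mij * vj for Mij, vj in zip(row, v)) for row in M]

def reduced_rhs(b, B, ys):
    # componentwise maximum of b - B y^i over i = 1, ..., s, yielding
    # the single reduced constraint  A x >= reduced_rhs(b, B, ys)
    rows = [[bi - Byi for bi, Byi in zip(b, matvec(B, y))] for y in ys]
    return [max(col) for col in zip(*rows)]
```

For example, with $B$ the identity, $b=0$ and $y^1=(1,2)$, $y^2=(2,1)$, the reduced right-hand side is $(-1,-1)$.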
The problem of obtaining an H-representation of $\gr (F(\cdot)+C)$ from an H-representation of $\gr F$ appears to be difficult in practice, where it is typical that $n \gg q$; compare also Remark \ref{rem1}. One way to obtain it is vertex enumeration of a polyhedral convex set in $\mathbb{R}^n\times\mathbb{R}^q$. In contrast, {\scshape SetOpt} involves vertex enumeration only in $\mathbb{R}^q$. We finally show a special property of solutions obtained by {\sc SetOpt}. \begin{proposition} Every solution to \eqref{p} obtained by the algorithm {\sc SetOpt} is a pre-solution to \eqref{p}. \end{proposition} \begin{proof} In phase 1 of the algorithm, a pre-solution $\bar X = \cb{\bar x^1,\dots,\bar x^k}$ is computed. In phase 2 of the algorithm, $\bar X$ is replaced by $\hat X = \cb{\hat x^1,\dots,\hat x^k}$, where $F(\hat x^i) \preceq F(\bar x^i)$, $i\in \cb{1,\dots,k}$, by \eqref{eq_th41}. Hence, $(\hat x^i, y)$ is a minimizer of \eqref{r} whenever $(\bar x^i, y)$ is a minimizer of \eqref{r}, which proves the claim. \end{proof}
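As a final illustration, the relation $\preceq$ can be tested numerically when $C=\mathbb{R}^2_+$ and the values are given by V-representations: in the plane, $\conv V+\mathbb{R}^2_+$ contains a point $y$ exactly when some point on an edge or vertex of $\conv V$ lies componentwise below $y$. The following sketch (our own code and names, not part of {\scshape SetOpt}) reproduces the comparison $F(x^3) \precneq F(x^1)$ from the example in Section 3:

```python
def dominated_point_exists(y, V):
    # Is there z in conv(V) with z <= y componentwise?  In the plane it
    # suffices to scan points of the form t*a + (1-t)*b for vertices a, b.
    for a in V:
        for b in V:
            lo, hi = 0.0, 1.0
            feasible = True
            for k in (0, 1):
                c, d = a[k] - b[k], y[k] - b[k]  # need t*c <= d
                if abs(c) < 1e-12:
                    if d < -1e-9:
                        feasible = False
                        break
                elif c > 0:
                    hi = min(hi, d / c)
                else:
                    lo = max(lo, d / c)
            if feasible and lo <= hi + 1e-9:
                return True
    return False

def set_leq(P, Q):
    # P ⪯ Q  iff  conv(P) + R^2_+ contains every vertex of Q
    return all(dominated_point_exists(q, P) for q in Q)

F1 = [(0, 2), (2, 0), (0, 3), (3, 0)]   # vertices of F(x^1)
F3 = [(0, 2), (2, 0), (0, 3), (4, -1)]  # vertices of F(x^3)
```

Here \texttt{set\_leq(F3, F1)} holds while \texttt{set\_leq(F1, F3)} fails, confirming $F(x^3) \precneq F(x^1)$.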
https://arxiv.org/abs/1906.10645
Central Limit Theorems for Compound Paths on the 2-Dimensional Lattice
Zeckendorf proved that every integer can be written uniquely as a sum of non-consecutive Fibonacci numbers $\{F_n\}$, and later researchers showed that the distribution of the number of summands needed for such decompositions of integers in $[F_n, F_{n+1})$ converges to a Gaussian as $n\to\infty$. Decomposition problems have been studied extensively for a variety of different sequences and notions of legal decomposition; for the Fibonacci numbers, a legal decomposition is one for which each summand is used at most once and no two consecutive summands may be chosen. Recently, Chen et al. [CCGJMSY] generalized earlier work to $d$-dimensional lattices of positive integers; there, a legal decomposition is a path such that every point chosen has each component strictly less than the corresponding component of the previously chosen point in the path. They were able to prove Gaussianity results despite the lack of uniqueness of the decompositions; however, their results should hold in the more general case where some components are identical. The strictly decreasing assumption was needed in that work to obtain simple, closed form combinatorial expressions, which could then be well approximated and led to the limiting behavior. In this work we remove that assumption through inclusion-exclusion arguments. These lead to more involved combinatorial sums; using generating functions and recurrence relations we obtain tractable forms in $2$ dimensions and prove Gaussianity again; a more involved analysis should work in higher dimensions.
\section{Introduction} Among the many fascinating properties of the Fibonacci numbers is the following observation, credited to Zeckendorf \cite{Ze}: Every positive integer admits a unique representation as a sum of non-adjacent Fibonacci numbers $\{F_n\}$, where\footnote{If we started with $F_0 = 0$ and $F_1 = 1$, then $F_2 = 1$ and we trivially lose uniqueness.} $F_1 = 1, F_2 = 2$ and $F_{n+1} = F_{n} + F_{n-1}$. Interestingly, we can treat this property as an equivalent definition of the Fibonacci numbers: they are the only sequence from which every positive integer can be decomposed uniquely as a sum of non-adjacent terms. It turns out that there is often a relationship between rules for legal decompositions and sequences $\{G_n\}$, and the literature is now filled with many results on properties of the summands in legal decompositions of numbers in intervals $[G_n, G_{n+1})$ as $n\to\infty$. These range from the mean number of summands growing linearly, with the factor related to the roots of the characteristic polynomial of the recurrence, to the distribution of the number of summands converging to a Gaussian, to the distribution of gaps between summands; see for example \cite{Bes,Bow,Br,Day,Dem,FGNPT,Fr,GTNP,Ha, Ho, HW, Ke,Lek,Mw1,Mw2,Ste1,Ste2} and the references therein. Most of the sequences studied have been one-dimensional. Additional sequences, such as those in \cite{CFHMN2,CFHMNPX}, appear two-dimensional but can be converted into one-dimensional sequences and attacked using existing techniques. This motivated Chen et al.\ \cite{CCGJMSY} to consider a true multi-dimensional sequence by looking at paths among lattice points with non-negative integer coordinates. 
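For concreteness, the decomposition guaranteed by Zeckendorf's theorem is produced by the classical greedy algorithm, which repeatedly subtracts the largest Fibonacci number that fits; a short Python sketch (ours, for illustration) with the indexing $F_1 = 1$, $F_2 = 2$ from above:

```python
def zeckendorf(n):
    # greedy Zeckendorf decomposition of a positive integer n into
    # non-adjacent Fibonacci numbers (F_1 = 1, F_2 = 2, F_3 = 3, ...)
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    summands = []
    for f in reversed(fibs):
        if f <= n:
            summands.append(f)
            n -= f
    return summands
```

For instance, $100 = 89 + 8 + 3$.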
Chen et al.\ defined a legal decomposition in $d$ dimensions to be a finite collection of lattice points for which \begin{enumerate} \item{each point is used at most once}, and \item{if the point $(i_1, i_2, \dots, i_d)$ is included then all subsequent points $(i_1', i_2', \dots, i_d')$ have $i'_j < i_j$ for all $j \in \{1, 2, \dots, d\}$ (i.e., \emph{all} coordinates must decrease from each point in the decomposition to the next).} \end{enumerate} They called the path of chosen lattice points a \textbf{simple jump path}; at each step, each component was \emph{strictly less than} the corresponding component of the previous step. One can construct a sequence on the lattice in many ways. For example, in two dimensions one can go along diagonal paths parallel to $y=-x$ and at each lattice point add the first number which cannot be legally represented. The situation is slightly more involved in higher dimensions, though for most of the problems studied the values of the ordered points do not matter; what matters is the geometry of the lattice walks. In \eqref{ZeckendorfDiagonalSequenceSimp2D} we illustrate several diagonals' worth of entries when $d = 2$. Unlike for the Fibonacci sequence, we find that the uniqueness of these decompositions fails (for example, $25$ has two legal decompositions: $20+5$ and $24+1$). 
\begin{eqnarray} \begin{array}{cccccccccc}280 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\157 & 263 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\84 & 155 & 259 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\50 & 82 & 139 & 230 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\28 & 48 & 74 & 123 & 198 & \cdots & \cdots & \cdots & \cdots & \cdots \\14 & 24 & 40 & 66 & 107 & 184 & \cdots & \cdots & \cdots & \cdots \\7 & 12 & 20 & 33 & 59 & 100 & 171 & \cdots & \cdots & \cdots \\3 & 5 & 9 & 17 & 30 & 56 & 93 & 160 & \cdots & \cdots \\1 & 2 & 4 & 8 & 16 & 29 & 54 & 90 & 159 & \cdots \end{array} \label{ZeckendorfDiagonalSequenceSimp2D} \end{eqnarray} These simple jump paths have a severe limitation as every coordinate must decrease. Thus in \eqref{ZeckendorfDiagonalSequenceSimp2D} we had to add 5 to our 2-dimensional sequence in the $(2,3)$ location, as we cannot use $4+1$ to get 5 as that is only horizontal movement. This strict decreasing condition was needed in \cite{CCGJMSY} to obtain simple closed form combinatorial expressions, which were then well approximated. Finally, this led to a proof that the limiting behavior of the distribution of the number of summands converges to a Gaussian. This alternative, more general formulation leads to what we call a \(\textbf{generalized jump path}\). Formally, a generalized jump path from the lattice point \(p \in \mathbb{N}^d\) is a sequence of lattice points where each point is used only once, and if the point \((i_1, i_2, \dots, i_d)\) is in the sequence, then each subsequent point \((i_1', i_2', \dots, i_d')\) must satisfy the following properties: \begin{itemize} \item for all \(j\in \{1,2,\dots, d\}\), \(i_j \geq i_j'\), and \item for at least one \(k \in \{1,2, \dots, d\}\), \(i_k > i_k'\). 
\end{itemize} These conditions imply that for each point in the sequence, at least one coordinate must decrease while the remaining coordinates cannot increase. Below is the number of generalized jump paths in two dimensions from the point \((i,j)\) for \(i,j \leq 3\). \begin{equation} \begin{matrix}\label{generalizedJumpPathsSmallPoints} \vdots & \vdots & \vdots & \vdots & \iddots \\ 8 & 40 & 152 & 504 & \cdots \\ 4 & 16 & 52 & 152 & \cdots \\ 2 & 6 & 16 & 40 & \cdots \\ 1 & 2 & 4 & 8 & \cdots \end{matrix} \end{equation} We can find a recursive formula which allows us to calculate the total number of generalized jump paths starting at the point \(p=(p_1,p_2, \dots, p_d)\). For convenience we include the requirement that all paths end at one point outside the lattice, at the origin; note moving directly to the origin is equivalent to not choosing any additional points in a path and is thus considered a legal option. If \(S((p_1, p_2, \dots, p_d))\) represents the total number of paths starting at $p$, then we may do the following: partition all the generalized jump paths from \(p\) by the location of their first step. Either the path goes directly from \(p\) to the origin, or \(p\) first goes to some other lattice point \(a\). Note \(a\) must have at least one coordinate, say \(a_i\), such that \(a_i < p_i\), hence \(a \in \left([0,p_1] \times [0,p_2] \times \cdots \times [0, p_d]\right)\setminus\{p\}\). By definition there are \(S(a)\) paths from the point $a$. Hence, summing over all possible \(a\) and the one additional case (immediately jumping to the origin, or equivalently not choosing any additional lattice points), we find a formula for \(S(p)\): \begin{equation}\label{totalPathsRecurrence} S(p) \ = \ 1 + \sum\limits_{\substack{a\neq p, \\ {a\in [0,p_1]\times [0, p_2]\times \cdots}}} S(a). \end{equation} We are able to perform an asymptotic analysis both on the total number of generalized jump paths and on the number of such paths of length $k$.
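The recurrence \eqref{totalPathsRecurrence} is straightforward to memoize. The following sketch (ours, not from the paper) reproduces the table in \eqref{generalizedJumpPathsSmallPoints}:

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def S(p):
    """Total number of generalized jump paths from the lattice point p
    (a tuple), counting the direct jump to the origin as one path."""
    boxes = [range(pi + 1) for pi in p]
    # 1 for jumping directly to the origin, plus one path extension for
    # every path from each lattice point a <= p (componentwise), a != p.
    return 1 + sum(S(a) for a in product(*boxes) if a != p)

# Rebuild \eqref{generalizedJumpPathsSmallPoints}, rows j = 0, 1, 2, 3.
table = [[S((i, j)) for i in range(4)] for j in range(4)]
print(table)
# [[1, 2, 4, 8], [2, 6, 16, 40], [4, 16, 52, 152], [8, 40, 152, 504]]
```

Note that choosing the lattice point at the origin and choosing no point at all count as different paths, which is why `S((1, 0)) == 2`.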
Our main result is as follows. \begin{thm}\label{thm:main} Let $X_{p, q}$ be the random variable denoting the length of a generalized jump path starting at the point $(p, q) \in \mathbb{N}^2$ and ending at the origin, chosen uniformly at random from all such paths. Suppose $p := n$ and $q := cn$ for $n \in \mathbb{N}^+$ and $c \geq 1$ is fixed. Then $X_{p, q}$ converges in distribution to a Gaussian as $n \rightarrow \infty$, with mean $\frac{q + p + \sqrt{p^2 + 6pq + q^2}}{4}$ and variance $\frac{p+q}8+\frac{(p+q)^2}{8\sqrt{p^2+6pq+q^2}}$. \end{thm} In Section \ref{sec2}, we introduce some notation for our problem and prove some basic properties of unrestricted generalized jump paths. In Section \ref{sec3}, we use these properties in conjunction with various analytical and combinatorial methods to obtain a generating function for the number of paths from a fixed point as a function of path length. Using that result, in Section \ref{sec4} we prove that the distribution of the number of summands in decompositions converges to a Gaussian in the limit (Theorem \ref{thm:main}). We use a method similar to that found in \cite{CCGJMSY}; the difficulty in the argument is in determining a good count of the number of paths and, from that, a good estimate for the number of paths of a given length. We conclude with a brief discussion of future questions to study. \section{Properties of Generalized Jump Paths} \label{sec2} We define a \textbf{generalized jump path} of length $n$ that starts at $(p_1, p_2, \dots, p_d)$ to be a sequence of points $\{(x_{i,1},x_{i,2},\dots,x_{i,d})\}_{i=0}^n$ such that: \begin{itemize} \item $(x_{0,1},x_{0,2},\dots,x_{0,d}) = (p_1, p_2, \dots, p_d)$, \item for all $i$ and $j$ we have $x_{i,j} \geq x_{i+1,j}$, \item for all $i$ we have $(x_{i,1},x_{i,2},\dots,x_{i,d}) \neq (x_{i+1,1},x_{i+1,2},\dots,x_{i+1,d})$ \footnote{A natural future question would be to remove this condition, which would allow the same point to be used multiple times in a decomposition.
See Section \ref{s:conclusion} for more information.}, and \item $(x_{n,1},x_{n,2},\dots,x_{n,d}) = (0, 0, \dots, 0)$. \end{itemize} Let $g((p_1, p_2, \dots, p_d),n)$ be the number of such paths of length $n$ starting at $(p_1, p_2, \dots, p_d)$. We define an \textbf{unrestricted generalized jump path} to be a sequence of points $\{(x_{i,1},x_{i,2},\dots,x_{i,d})\}_{i=0}^n$ such that: \begin{itemize} \item $(x_{0,1},x_{0,2},\dots,x_{0,d}) = (p_1, p_2, \dots, p_d)$, \item for all $i$ and $j$ we have $x_{i,j} \geq x_{i+1,j}$, and \item for all $i$ we have $(x_{i,1},x_{i,2},\dots,x_{i,d}) \neq (x_{i+1,1},x_{i+1,2},\dots,x_{i+1,d})$. \end{itemize} Moreover, let $u((p_1,p_2,\dots,p_d),n)$ be the number of \emph{unrestricted} generalized jump paths of length $n$ starting from the point $(p_1, p_2, \dots, p_d)$; this is analogous to the definition of $g$. Note that unrestricted generalized jump paths are simply generalized jump paths with the last restriction lifted (i.e., the sequence does not need to end at the bottom left corner). We now state and prove two basic properties of $g$ and $u$, the first of which was alluded to when we defined $u$. \begin{lemma}[Unrestricted-Restricted Relationship] \label{unrestricted-restricted} Let $v := (p_1,p_2,\dots,p_d)$. For all $n \in \mathbb{N}$, \be \label{unrestricted-restricted-recurrence-eq1} u(v,n) \ = \ g(v,n) + g(v,n+1). \ee \end{lemma} \begin{proof} An unrestricted jump path of length $n$ that ends at $(0,0,\dots,0)$ is itself a generalized jump path of length $n$, while appending the origin to an unrestricted jump path of length $n$ that does not end at $(0,0,\dots,0)$ gives a bijection with the generalized jump paths of length $n+1$. This immediately implies the result. \end{proof} \begin{theorem}[2-Dimensional Path Recurrence] \label{2d-recurrence} For all $p, q, n \in \mathbb{N}^+$, \begin{align} \label{2dimPathRecEqA} u((p,q),n) \ = \ ~ & u((p,q-1),n) + u((p,q-1),n-1) \nonumber \\ & + u((p-1,q),n) + u((p-1,q),n-1) \nonumber \\ & - u((p-1,q-1),n) - u((p-1,q-1),n-1).
\end{align} \end{theorem} \begin{proof} Let $\ell((p,q),n)$, $d((p,q),n)$, and $m((p,q),n)$ denote the number of unrestricted jump paths of length $n$ starting from the point $(p,q)$ whose first jump moves only \emph{left}, only \emph{down}, and both \emph{left and down}, respectively. Then, \begin{align}\label{2dimPathRecEqAA} \ell((p,q),n) & \ = \ \ell((p-1,q),n) + u((p-1,q),n-1) \nonumber\\ d((p,q),n) & \ = \ d((p,q-1),n) + u((p,q-1),n-1) \nonumber\\ m((p,q),n) & \ = \ d((p-1,q),n) + m((p-1,q),n) \nonumber\\ & \ = \ \ell((p,q-1),n) + m((p,q-1),n) \nonumber\\ & \ = \ u((p-1, q-1), n-1) + u((p-1, q-1), n). \end{align} Notice by the definitions of the functions $\ell((p, q), n), d((p, q), n)$, and $m((p, q), n)$ that \be \label{oneDirectionPartition} u((p,q),n) \ = \ \ell((p,q),n) + d((p,q),n) + m((p,q),n). \ee Using identity \eqref{oneDirectionPartition} gives \begin{align} u((p,q),n) \ = \ \ ~ & \ell((p-1,q),n) + u((p-1,q),n-1) \nonumber \\ & + d((p,q-1),n) + u((p,q-1),n-1) \nonumber \\ & + d((p-1,q),n) + m((p-1,q),n) \nonumber \\ & + \ell((p,q-1),n) + m((p,q-1),n) \nonumber \\ & - (u((p-1, q-1), n-1) + u((p-1, q-1), n)) \nonumber \\ \ = \ \ ~ & \ell((p-1,q),n) + d((p-1,q),n) + m((p-1,q),n) \nonumber \\ & + d((p,q-1),n) + \ell((p,q-1),n) + m((p,q-1),n) \nonumber \\ & + u((p-1,q),n-1) + u((p,q-1),n-1) \nonumber \\ & - (u((p-1, q-1), n-1) + u((p-1, q-1), n)) \nonumber \\ \ = \ \ ~ & u((p-1,q),n) + u((p,q-1),n) \nonumber \\ & + u((p-1,q),n-1) + u((p,q-1),n-1) \nonumber \\ & - u((p-1,q-1),n-1) - u((p-1,q-1),n), \label{2dimPathRecEqAB} \end{align} as desired. \end{proof} \section{2-Dimensional Generating Function} \label{sec3} For every lattice point in $\mathbb{N}^2$ there is a generating function for the lengths of the paths from that point. Explicitly, we define $F_{p,q}(x)$ by \be \label{f-def} F_{p,q}(x) \ = \ \sum_{k=0}^{p+q} u((p,q),k) x^k.
\ee Our main result in this section is an alternative form for $F_{p,q}$ that is more readily studied using asymptotic techniques. \begin{theorem}\label{pathLenGenFn} If $p \leq q$, then \be F_{p,q}(x) \ = \ (1+x)^p\sum_{k=0}^{q} \binom{q}{k}\binom{p+k}{k} x^k. \ee \end{theorem} We give two proofs. The first is a pure generating function approach that only proves the claim for the case $p=q=n$, whereas the second is purely combinatorial and proves the theorem in generality. For technical convenience, one often analyzes all paths starting at a point on the main diagonal; thus if we restrict our investigation to this case, the simpler first proof suffices. \subsection{Pure Generating Functions Method} Consider the generating function \begin{align} B(x,y,z) \ = \ \sum_{p,q,k \geq 0} u((p,q),k)x^py^qz^k. \end{align} Using Theorem \ref{2d-recurrence}, we immediately see that \begin{align} B(x,y,z) \ = \ 1+(1+z)(x+y-xy)B(x,y,z), \end{align} which implies that \begin{align} B(x,y,z) \ = \ \dfrac{1}{1-(1+z)(x+y-xy)}. \end{align} Now we use a method adapted from \cite{Stan} to determine the central terms. Let $u := 1+z$ and define \be D(x,s,u)\ :=\ B(s,x/s,z);\ee thus \begin{align} \label{centralTermCompA} D(x,s,u) \ = \ \dfrac{1}{1-u(s+x/s-x)} \ = \ \dfrac{-s/u}{s^2-s(x+1/u)+x}. \end{align} Our generating function for the central terms will be the $s^0$ coefficient of $D(x,s,u)$. We denote the two solutions for $s$ in the denominator of $D(x,s,u)$ as $\alpha$ and $\beta$, where for ease of reading we suppress the arguments of these functions as follows: \be \alpha \ := \ \dfrac{ux+1-\sqrt{u^2 x^2 - 4 u^2 x + 2 u x + 1}}{2u},\ \ \ \beta \ := \ \dfrac{ux+1+\sqrt{u^2 x^2 - 4 u^2 x + 2 u x + 1}}{2u}. \ee Using the method of partial fractions on \eqref{centralTermCompA}, we find that \begin{align} \label{centralTermCompB} D(x,s,u) \ = \ \frac{1}{u(\beta-\alpha)}\left(\frac{\alpha}{s-\alpha}-\frac{\beta}{s-\beta}\right).
\end{align} We expand each term by using the geometric series formula, which is applicable for sufficiently small values of the parameters, and get \begin{align} \label{centralTermCompC} D(x,s,u) \ = \ \frac{1}{u(\beta-\alpha)}\left(\sum_{n\geq1}\alpha^ns^{-n}+\sum_{n\geq0}\beta^{-n}s^n\right). \end{align} The $s^0$ term is clearly just $\frac{1}{u(\beta-\alpha)}$ as $\beta^{-0}s^0 = 1$. Since $\beta-\alpha = \frac{\sqrt{u^2x^2-4u^2x+2ux+1}}{u}$, we have \begin{align} \label{centralTermCompD} \sum_{n,k\geq 0} u((n,n),k)x^nz^k \ = \ \frac{1}{\sqrt{u^2x^2-4u^2x+2ux+1}} \ = \ \frac{1}{\sqrt{(ux+1)^2-4u^2x}}, \end{align} where we used that $u = z+1$. We now represent $1 / \sqrt{(ux+1)^2-4u^2x}$ as a power series of the form $\sum_{i\geq0}b_i(z)x^i$. Differentiating, we obtain the recurrence relation \begin{align} \label{centralTermCompE} b_i \ = \ \begin{cases} 1 & i = 0 \\ 1+3z+2z^2 & i = 1 \\ \dfrac{(2i-1)(1+2z)(1+z)b_{i-1} - (1+z)^2(i-1)b_{i-2}}{i} & i \ge 2. \end{cases} \end{align} We define $a_i$ such that $a_i (1+z)^i = b_i$; then, this sequence satisfies the recurrence \begin{align} \label{centralTermCompF} a_i \ = \ \begin{cases} 1 & i = 0 \\ 1+2z & i = 1 \\ \dfrac{(2i-1)(1+2z)a_{i-1} - (i-1)a_{i-2}}{i} & i \ge 2. \end{cases} \end{align} The solution to this recurrence relation is $a_i = P_i(2z+1)$, where $P_i$ is the $i$\textsuperscript{th} Legendre Polynomial \begin{align} \label{centralTermCompG} P_\ell(x) \ = \ \sum_{k=0}^{\ell} \binom{\ell}{k} \binom{-\ell-1}{k}\left(\frac{1-x}{2}\right)^k; \end{align} see \cite{Ko}. It follows that \begin{align} \label{centralTermCompH} a_n \ = \ P_n(2z+1) \ = \ \sum_{k=0}^n\binom{n}{k}\binom{n+k}{k}z^k. \end{align} Thus, changing variable names, \begin{align} \label{centralTermCompI} F_{n,n}(x) \ = \ b_n(x) \ = \ (1+x)^n a_n(x) \ = \ (1+x)^n\sum_{k=0}^n\binom{n}{k}\binom{n+k}{k}x^k, \end{align} which proves Theorem \ref{pathLenGenFn} in the $p=q=n$ case, as desired.
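Theorem \ref{pathLenGenFn} can also be checked directly against a brute-force count of unrestricted paths by length. The sketch below is ours, not part of either proof; it compares the coefficient lists of $F_{p,q}$ computed both ways for a few small points with $p \leq q$:

```python
from functools import lru_cache
from itertools import product
from math import comb

@lru_cache(maxsize=None)
def u(p, n):
    """Brute-force count of unrestricted generalized jump paths of
    length n starting from the lattice point p (a tuple)."""
    if n == 0:
        return 1
    boxes = [range(pi + 1) for pi in p]
    return sum(u(a, n - 1) for a in product(*boxes) if a != p)

def F_brute(p, q):
    # Coefficient list of F_{p,q}(x) from the definition \eqref{f-def}.
    return [u((p, q), k) for k in range(p + q + 1)]

def F_formula(p, q):
    """Coefficients of (1+x)^p * sum_k C(q,k) C(p+k,k) x^k, p <= q."""
    coeffs = [0] * (p + q + 1)
    for i in range(p + 1):              # expand (1+x)^p
        for k in range(q + 1):
            coeffs[i + k] += comb(p, i) * comb(q, k) * comb(p + k, k)
    return coeffs

for p, q in [(1, 1), (1, 3), (2, 2), (2, 3)]:
    assert F_brute(p, q) == F_formula(p, q)
print("Theorem verified for small cases")
```

For instance, $F_{2,2}(x) = (1+x)^2(1+6x+6x^2) = 1+8x+19x^2+18x^3+6x^4$, matching the brute-force counts.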
\subsection{Combinatorial Method} We attempt to obtain a nicer formula for $g(p,n)$ by first relaxing our constraint to allow for paths with stationary points (where consecutive points are allowed to be exactly the same) and then later correcting for the over-counting. We do this as it is significantly easier to count the total number of paths with this relaxation in place. Let $r(p,n)$ denote the number of length-$n$ paths from $p$ to the origin with this relaxed constraint. Arguing as in the Stars and Bars problem\footnote{The number of ways to divide $N$ identical items into $G$ groups, where all that matters is how many items are placed in a group, is $\ncr{N+G-1}{G-1}$. To see this, consider $N+G-1$ items in a line; choosing $G-1$ of these partitions the remaining $N$ items into $G$ sets. The first set is all the elements up to the first chosen one, and so on. There is thus a one-to-one correspondence between the two counting problems. This method was first used for Zeckendorf decomposition problems in \cite{KKMW}, and has been successfully used in many works since then.}, it follows that \be r(p,n) \ = \ \prod_{i = 1}^d\binom{p_i+n-1}{p_i}. \ee Let $s(p,n,k)$ be the number of such relaxed paths of length $n$ from $p$ with at least $k$ stationary steps. Then \be s(p,n,k) \ = \ \binom{n}k r(p,n-k). \ee By the principle of Inclusion-Exclusion, we have that the number of paths with no stationary points is \begin{align} \label{orig-statement} g(p,n) \ = \ \sum\limits_{k=0}^{n-1} (-1)^k\binom{n}k r(p,n-k). \end{align} Now, the following identity and its proof serve as motivation for how to proceed in evaluating \eqref{orig-statement}. \begin{lemma} \label{macc} For $m, n \in \mathbb{N}$, \be \label{maccEqn} \sum_{i = 0}^n \binom{n}{i} \binom{m+n-i}{k-i}(-1)^i \ = \ \binom{m}{k}. \ee \end{lemma} Recall that $[n]$ denotes $\{1, 2, \dots, n\}$. We now justify this identity.
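As a quick numerical sanity check (ours, not part of the argument), the identity \eqref{maccEqn} can be confirmed by brute force over a small parameter range:

```python
from math import comb

# Check sum_i (-1)^i C(n,i) C(m+n-i, k-i) = C(m,k) for small n, m, k.
# Terms with i > k vanish since C(m+n-i, k-i) = 0 there by convention,
# so we truncate the sum at min(n, k) (math.comb rejects negative args).
for n in range(6):
    for m in range(6):
        for k in range(m + n + 1):
            lhs = sum((-1) ** i * comb(n, i) * comb(m + n - i, k - i)
                      for i in range(min(n, k) + 1))
            assert lhs == comb(m, k)
print("identity verified for all n, m < 6")
```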
\begin{proof} We view the inner term as counting the number of ordered pairs $(S,T)$ such that $S \subseteq [n]$, $T \subseteq [m+n] \setminus S$, and $|S| + |T| = k$. Let the set of all such valid ordered pairs be $V$. We consider the sign-reversing involution $f : V \setminus E \to V \setminus E$ (where $E$ is the set of ``exceptions,'' for which $f$ is ill-defined) defined by toggling the smallest term in $S \cup T$ between $S$ and $T$. For example, when $k = 5$, \be \label{involutionEx} f(\{2,3,5\}, \{7,8\}) \ = \ (\{3,5\}, \{2,7,8\}). \ee Given this definition, it is not difficult to see that $f$ is its own inverse (hence, an involution) and always ``flips'' the parity of $|S|$, sending ordered pairs with a positive coefficient in our sum to pairs with negative coefficients (and vice versa). Therefore, for all the ordered pairs on which $f$ is well-defined, our desired sum is 0. Thus, we have \be \label{sumToSetExclusion} \sum_{i = 0}^n \binom{n}{i} \binom{m+n-i}{k-i}(-1)^i \ = \ |E|, \ee and it is not difficult to see that the only pairs $(S, T) \in E$ are those where $S = \emptyset$ and $T \subseteq [m+n] \setminus [n]$. There are clearly $\binom{m}{k}$ such choices for these sets, and the desired result follows. \end{proof} Viewed properly, our formula in \eqref{orig-statement} looks similar to the above lemma; we re-write it as \begin{align} \label{genPathsClosedA} g(p,n) \ = \ \sum_{i = 0}^n (-1)^i \binom{n}{i} \prod_{k=1}^d \binom{(p_k - 1) + n-i}{(n-1)-i}. \end{align} Furthermore, when $d = 1$, this formula agrees with \eqref{maccEqn}, as expected. Although similar methods may be applied in higher dimensions, we now consider the special case $d = 2$. \begin{theorem} For $p, q \in \mathbb{N}$, \begin{align} \label{genPathsClosedB} g((p,q),n) \ = \ \sum_{i=0}^{n-1} \binom{p-1}{i}\binom{p-1+n-i}{p}\binom{q}{n-i-1}. \end{align} \end{theorem} \begin{proof} We first assume without loss of generality that $p \leq q$.
We have that \begin{align} \label{genPathsClosedC} g(p,n) \ = \ \sum_{i = 0}^n (-1)^i \binom{n}{i} \binom{(p - 1) + n-i}{(n-1)-i} \binom{(q - 1) + n-i}{(n-1)-i}. \end{align} We now proceed in a similar manner to the proof of Lemma \ref{macc}. We view the inner term as counting the number of ordered triples $(S,T,U)$ such that $S \subseteq [n]$, $T \subseteq [p+n-1] \setminus S$, $U \subseteq [q+n-1] \setminus S$, and $|S| + |T| = |S| + |U| = n-1$. Let the set of all such valid ordered triples be $V$. We consider the sign-reversing involution $f : V \setminus E \to V \setminus E$ (where $E$ is the set of ``exceptions'' for which $f$ is ill-defined) defined by toggling the smallest term in $S \cup (T\cap U)$ between all three sets. Given this definition, it is not difficult to see that $f$ is its own inverse (hence, an involution) and always ``flips'' the parity of $|S|$, sending ordered triples with a positive coefficient in our sum to triples with negative coefficients (and vice versa). Hence, for all the ordered triples on which $f$ is well-defined, our desired sum is 0. Thus, we have \begin{align} \label{genPathsClosedD} \sum_{i = 0}^n (-1)^i \binom{n}{i} \binom{(p - 1) + n-i}{(n-1)-i} \binom{(q - 1) + n-i}{(n-1)-i}\ = \ |E|, \end{align} and our problem reduces to counting $|E|$. For $f$ to be ill-defined, it is necessary and sufficient that $S = \emptyset$ and $T\cap U \cap [n] = \emptyset$. To count $|E|$, we index every such triple $(S,T,U)$ by $i := |T \cap U|$. With $i$ fixed, we then choose the $i$ terms that are common to both $T$ and $U$, which must be a subset of $[p + n - 1] \setminus [n]$. Consequently, there are exactly $\binom{p-1}{i}$ ways to do this. We may now freely choose the remaining $n-i-1$ members of $T$ (of which there are $\binom{p+n-i - 1}{n-i-1}$ ways to do so) and the remaining $n-i-1$ members of $U$, which must avoid all of $T$ (for which there are $\binom{q}{n-i-1}$ ways to select).
Multiplying these three terms (and simplifying the second term), we find exactly the desired term, concluding the proof. \end{proof} \begin{proof}[Proof (Theorem \ref{pathLenGenFn})] We now use \eqref{unrestricted-restricted-recurrence-eq1} to obtain \begin{align} \label{genPathsClosedE} u((p,q),n) \ = \ \sum_{i=0}^n \binom{p}{i} \binom{p+n-i}{p}\binom{q}{n-i}. \end{align} By the definition of $F_{p,q}$ given in \eqref{f-def} we know that \begin{align} \label{genPathsClosedF} F_{p,q}(x) & \ = \ \sum_{k=0}^{p+q}\sum_{i=0}^k \binom{p}{i} \binom{p+k-i}{p}\binom{q}{k-i}x^k \nonumber \\ & \ = \ \Bigg(\sum_{i=0}^p\binom{p}{i}x^i\Bigg) \Bigg(\sum_{k=0}^q \binom{q}{k}\binom{p+k}{p}x^k\Bigg) \nonumber \\ & \ = \ (1+x)^p\sum_{k=0}^{q} \binom{q}{k}\binom{p+k}{k} x^k, \end{align} which proves Theorem \ref{pathLenGenFn}. \end{proof} \section{Proving 2-Dimensional Gaussianity} \label{sec4} We begin this section with some notation. Let $X_{p,q}$ be the random variable giving the length of a generalized jump path starting from the point $(p, q)$, chosen uniformly at random from all such paths. From Theorem \ref{pathLenGenFn}, we have \be \label{gaussDecomp} X_{p,q} \ = \ A_{p,q} + B_{p,q}, \ee where $A$ and $B$ are independent random variables whose distributions are proportional to a binomial coefficient and a product of binomial coefficients, respectively; explicitly \be \label{gaussDecompRandomVar} P(A = k) \ \propto\ \binom{p}{k}, \ \ \ \ P(B = k) \ \propto\ \binom{q}{k}\binom{p+k}{k}. \ee Note $A$ is just the well-studied binomial random variable, which converges to a Gaussian with mean $p/2$ and variance $p/4$ as $p \to \infty$. We now prove Theorem \ref{thm:main}. We start by restating it with the notation above: \emph{ Suppose $p := n$ and $q := cn$ for $n \in \mathbb{N}^+$ and $c \geq 1$ is fixed.
Then $X_{p, q}$ converges to a Gaussian as $n \rightarrow \infty$, with mean $\frac{q + p + \sqrt{p^2 + 6pq + q^2}}{4}$ and variance $\frac{p+q}8+\frac{(p+q)^2}{8\sqrt{p^2+6pq+q^2}}$.} Due to our definition of $X_{p, q}$ in \eqref{gaussDecomp}, it suffices to show $B$ also converges to a Gaussian, with mean $\frac{q-p+\sqrt{p^2+6pq+q^2}}{4}$ and variance $\frac{q-p}8+\frac{(p + q)^2}{8\sqrt{p^2+6pq+q^2}}$ for the aforementioned choices of $p$ and $q$. The proof will use similar techniques to those found in \cite{CCGJMSY}; the algebra is more involved due to the more complicated structure of the formulas for the number of paths. To begin, let $p := n$ and $q := cn$, where $c \geq 1$ is fixed. Then $P(B=k)$ is proportional to a ratio of factorials: \be \label{stirlingApprox1} P(B = k) \ \propto\ \frac{q!}{p!}\frac{(p+k)!}{k!k!(q-k)!} \ = \ \frac{q!}{p!}\frac{(n+k)!}{k!k!(cn-k)!}.\ee Applying Stirling's formula to approximate the factorials in \eqref{stirlingApprox1}, we find for $p, q$ large that \be \label{stirlingApprox2} P(B = k) \ \propto\ \frac{q!}{p!}\frac{(n+k)^{n+k+\frac12}}{2\pi k^{2k+1}(cn-k)^{cn-k+\frac12}}.\ee We now define \begin{align} M \ :=\ \frac{(n+k)^{n+k+\frac12}}{k^{2k+1}(cn-k)^{cn-k+\frac12}} \label{m-def} \end{align} (i.e., we remove the terms in \eqref{stirlingApprox2} that are constant with respect to $k$). Taking the logarithm of both sides of (\ref{m-def}), we find \be \log M\ =\ \Big(n+k+\frac12\Big)\log(n+k) - \Big(2k+1\Big) \log(k) - \Big(cn-k+\frac12\Big)\log(cn-k).\ee Now we write $k$ as $an + t\sqrt{n}$, where $a := \frac{c-1+\sqrt{c^2+6c+1}}4$. This allows us to introduce a more natural variable for the number of steps in a path, where this number is written in terms of its distance (in natural units) from the mean. We Taylor expand, using \be \log(u+x)\ =\ \log(u)+\log\left(1 + \frac{x}{u}\right)\ =\ \log(u)+\frac{x}{u} - \frac12\left(\frac{x}{u}\right)^2+O\left(\frac{x^3}{u^3}\right). 
\ee We do this because later in Lemma \ref{eq:decay} we show that $k=an$ is the center of the distribution, and almost all (i.e., with probability 1 in the limit) the mass of the distribution is located where $t$ is small. We find \begin{align} \label{gaussProofLogDeriv} \log M \ = \ ~ & \Big(n+k+\frac12\Big)\log\Big(n\Big(1+a+\frac{t}{\sqrt{n}}\Big)\Big) \nonumber\\ & -\Big(2k+1\Big)\log\Big(n\Big(a+\frac{t}{\sqrt{n}}\Big)\Big) \nonumber\\ & -\Big(cn-k+\frac12\Big)\log\Big(n\Big(c-a-\frac{t}{\sqrt{n}}\Big)\Big) \nonumber\\ \ = \ ~ & (n-cn-1)\log(n) \nonumber\\ & + \Big(n+k+\frac12\Big)\log\Big(1+a+\frac{t}{\sqrt{n}}\Big) \nonumber\\ & -\Big(2k+1\Big)\log\Big(a+\frac{t}{\sqrt{n}}\Big) \nonumber\\ & -\Big(cn-k+\frac12\Big)\log\Big(c-a-\frac{t}{\sqrt{n}}\Big) \nonumber\\ \ = \ ~ & (n-cn-1)\log(n) \nonumber\\ & + \Big(n+k+\frac12\Big)\Big(\log(1+a)+\frac{t}{(1+a)\sqrt{n}}-\frac{t^2}{2(1+a)^2n}+O\Big(\frac{t^3}{n^{3/2}}\Big)\Big) \nonumber\\ & - \Big(2k+1\Big)\Big(\log(a)+\frac{t}{a\sqrt{n}}-\frac{t^2}{2a^2n}+O\left(\frac{t^3}{n^{3/2}}\right)\Big) \nonumber\\ & - \left(cn-k+\frac12\right)\left(\log(c-a)-\frac{t}{(c-a)\sqrt{n}}-\frac{t^2}{2(c-a)^2n}+O\left(\frac{t^3}{n^{3/2}}\right)\right). \end{align} After standard but tedious computations, the details of which are in Appendix \ref{app:computations}, we obtain \be \label{gaussProofE} \log M \ = \ -\chi t^2 + f(n) + O\left(\frac{t^3}{\sqrt{n}}\right), \ee where \bea \chi\ =\ \frac{2 c^2 + 10 c^3 - 10 c^4 - 2 c^5 + 2 c^2 \sqrt{1 + c (6 + c)} + 4 c^3 \sqrt{1 + c (6 + c)} + 2 c^4 \sqrt{1 + c (6 + c)}}{8c^4}\nonumber\\ \eea and $f(n)$ is independent of $k$ and $t$. We now prove the following lemma, which demonstrates that the tails of this distribution decay sufficiently quickly. \vspace{0.2 cm} \begin{lemma} \label{eq:decay} As $n \to \infty$, $P(|t|>n^{0.1}) \to 0$. \end{lemma} \begin{proof} First, we show $P(t>n^{0.1})$ goes to 0; the proof for $P(t<-n^{0.1})$ follows similarly. 
Note that if we increase $k$ by 1, then the number of paths of length $k$ increases by a factor of around $\frac{(p+k)(q-k)}{k^2}$ (note that this is strictly decreasing in $k$). Plugging in $k = an+t\sqrt{n}$, we see that this growth factor is \begin{align} \label{pathIncrProportion} 1 - \left(\frac{8\sqrt{c^2+6c+1}}{(1+c)^2+(c-1)\sqrt{c^2+6c+1}}\frac{t}{\sqrt{n}}\right) + O\left(\frac{t^2}n\right). \end{align} Let the coefficient of the $t/\sqrt{n}$ term be $c'$. Plugging in $t=1$ and ignoring the higher order terms, we see that for $k \geq an+\sqrt{n}$ the ratio is at most $r = 1- c'/\sqrt{n}$. Let $B_0$ be the number of paths of length $k$ when $t = 1$. The number of paths with $t>n^{0.1}$ is bounded above by a geometric series with first term $B_0\cdot r^{n^{0.6}-n^{0.5}}$ and ratio $r$, and we have \begin{align} r^{n^{0.5}} \ = \ \left(1-\frac{c'}{\sqrt{n}}\right)^{\sqrt{n}} \approx e^{-c'}<1. \end{align} Thus as $n$ goes to infinity, $B_0\cdot r^{n^{0.6}-n^{0.5}} \approx B_0\cdot \left(e^{-c'}\right)^{n^{0.1}-1}$, where the last exponent, $n^{0.1}-1$, goes to infinity. Now note that the total number of paths of any length is strictly greater than $B_0$, so the probability that $t > n^{0.1}$ is at most \begin{align} \frac{B_0\cdot \left(e^{-c'}\right)^{n^{0.1}-1}/(1-r)}{B_0} \ = \ \frac{\sqrt{n}}{c'}\left(e^{-c'}\right)^{n^{0.1}-1}, \end{align} which goes to 0 as $n$ grows. Thus, the probability that $t > n^{0.1}$ goes to 0 as $n$ gets large, as desired. \end{proof} Consequently, on the region carrying almost all of the mass we have $t^3/\sqrt{n} \to 0$ as $n \to \infty$, so by sending $p$ and $q$ to $\infty$, \begin{align} \label{proportionEquality} P(B = k) \ \to \ Ce^{-\chi t^2} \ = \ Ce^{-\frac{(an-k)^2}{n/\chi}}, \end{align} where $C$ is some constant in terms of $p$ and $q$.
This is the equation for a Gaussian distribution with mean \begin{align} \label{GaussianBMean} an \ = \ \frac{q-p+\sqrt{p^2+6pq+q^2}}4 \end{align} and variance \begin{align} \label{GaussianBVariance} \frac{n}{2\chi} \ = \ n\left(\frac{c-1}8+\frac{c^2+2c+1}{8\sqrt{1+6c+c^2}}\right) \ = \ \frac{q-p}8+\frac{q^2+2pq+p^2}{8\sqrt{p^2+6pq+q^2}}. \end{align} We conclude that $X_{p,q}$ is asymptotically Gaussian with mean \begin{align} E[X_{p,q}] \ = \ E[A_{p,q}] + E[B_{p,q}] \ = \ \frac{p}2+\frac{q-p+\sqrt{p^2+6pq+q^2}}4 \ = \ \frac{p+q}4+\frac{\sqrt{p^2+6pq+q^2}}4, \end{align} and variance \begin{eqnarray} {\rm Var}\left(X_{p,q}\right) & \ = \ & {\rm Var}\left(A_{p,q}\right) + {\rm Var}\left(B_{p,q}\right) \nonumber\\ & \ = \ & \frac{p}4+\frac{q-p}8+\frac{q^2+2pq+p^2}{8\sqrt{p^2+6pq+q^2}} \ = \ \frac{p+q}8+\frac{(p+q)^2}{8\sqrt{p^2+6pq+q^2}}. \end{eqnarray} This completes the proof of Theorem \ref{thm:main}. \hfill $\Box$ \section{Further Work and Open Questions} \label{sec5} \label{s:conclusion} We conclude with some suggestions for future research. \begin{enumerate} \item Is it possible to generalize the generating function approach to points not on the diagonal? Although empirical results suggest similar behavior, the associated Taylor expansion becomes significantly harder to work with. \item How do our results generalize to higher dimensions? Our combinatorial approach admits a natural extension to higher dimensions, although the casework becomes significantly more cumbersome. Furthermore, it is of note that the generating function approach does not generalize to three or more dimensions. \item How quickly does our distribution converge to a Gaussian? \item What is the behavior if we allow a point to be used more than once? Of course, if we can use a point arbitrarily many times it is unclear how to define the terms of our sequence; thus the natural question would be what happens if each point can be used at most $T$ times, for some fixed $T$.
\item There are even more combinatorial questions worth exploring. Many different additional restrictions can be put on the generalized jump paths. One such restriction is prohibiting any path from visiting points that lie above the line $y = x$. \end{enumerate}
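As a final numerical illustration (ours, not part of the paper), the exact counts from \eqref{genPathsClosedE} can be compared with the asymptotic mean and variance of Theorem \ref{thm:main}; the tolerances below are deliberately loose, since the formulas are only asymptotic:

```python
from math import comb, sqrt

def length_distribution(p, q):
    """Exact counts u((p,q),k) via \eqref{genPathsClosedE} (p <= q)."""
    return [sum(comb(p, i) * comb(p + k - i, p) * comb(q, k - i)
                for i in range(min(p, k) + 1))
            for k in range(p + q + 1)]

p = q = 30
w = length_distribution(p, q)
total = sum(w)
mean = sum(k * c for k, c in enumerate(w)) / total
var = sum(k * k * c for k, c in enumerate(w)) / total - mean ** 2

root = sqrt(p ** 2 + 6 * p * q + q ** 2)
mean_asym = (p + q + root) / 4
var_asym = (p + q) / 8 + (p + q) ** 2 / (8 * root)

# The exact moments should already be close to the asymptotic formulas.
assert abs(mean / mean_asym - 1) < 0.1
assert abs(var / var_asym - 1) < 0.2
```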
https://arxiv.org/abs/1302.2556
Intersection Cuts for Nonlinear Integer Programming: Convexification Techniques for Structured Sets
We study the generalization of split, k-branch split, and intersection cuts from Mixed Integer Linear Programming to the realm of Mixed Integer Nonlinear Programming. Constructing such cuts requires calculating the convex hull of the difference between a convex set and an open set with a simple geometric structure. We introduce two techniques to give precise characterizations of such convex hulls and use them to construct split, k-branch split, and intersection cuts for several classes of non-polyhedral sets. In particular, we give simple formulas for split cuts for essentially all convex sets described by a single quadratic inequality. We also give simple formulas for k-branch split cuts and some general intersection cuts for a wide variety of convex quadratic sets.
\section{Introduction.} An important area of Mixed Integer Linear Programming (MILP) is the characterization of the convex hull of specially structured non-convex polyhedral sets to develop strong valid inequalities or cutting planes such as split and intersection cuts \cite{conforti2010polyhedral,conforti2011corner,cornuejols2008valid,del2012relaxations}. This approach has led to highly effective branch-and-cut algorithms \cite{DBLP:journals/mpc/Achterberg09,DBLP:journals/anor/BixbyR07,bixby2004,DBLP:journals/informs/JohnsonNS00,Lodi2009}, so there has recently been significant interest in extending the associated theoretical and computational results to the realm of Mixed Integer Nonlinear Programming (MINLP) \cite{Atamturk07,AN:conicmir,Goez2012,DBLP:conf/ipco/Bonami11,DBLP:journals/mp/CezikI05,DBLP:journals/mor/DadushDV11,DBLP:conf/ipco/DadushDV11,Dadush2011121,Drewes,mustafa,springerlink:10.1007/s101070050103}. Unfortunately, this extension requires the study of the convex hull of a non-convex and non-polyhedral set, which has proven significantly harder than the polyhedral case. Most of the known results in this area are limited to very specific sets \cite{horst2003global,sherali1998reformulation,tawarmalani2002convexification} or to approximations of semi-algebraic sets through Semidefinite Programming (SDP) \cite{fujie1997semidefinite,lasserre2001global,nesterov,oustry2001sdp,parrilo2003semidefinite,polik2007survey,poljak1995recipe}. While some precise SDP representations of the convex hulls of semi-algebraic sets exist \cite{Gouveia12,Helton12,henrion2011semidefinite,Scheiderer20112606}, these require the use of auxiliary variables. Such higher dimensional, extended, or lifted representations are extremely powerful. However, there are theoretical and computational reasons to want representations in the original space and/or in the same class as the original set (e.g. representations that do not jump from quadratic basic semi-algebraic to SDP). 
We refer to characterizations that satisfy both these requirements as \emph{projected} and \emph{class preserving}. Projected and class preserving are in general incompatible (e.g. the convex hull of the basic semi-algebraic set $\set{x\in \Real^2\,:\, (x_1^2-x_2)x_1\geq 0,\, x_2\geq 0}$ has no projected basic semi-algebraic representation, but has a lifted basic semi-algebraic representation \cite{lecnote}). Furthermore, even giving an algebraic characterization of the boundary of the convex hull of a variety \cite{ranestad2011convex,ranestad2009convex} or giving a projected SDP representation of the convex hull of certain varieties and quadratic semi-algebraic sets \cite{sanyal2011orbitopes,yildiran2009convex,Kose} requires very complex techniques from algebraic geometry. In this paper we show that simple class preserving projected characterizations can be given for a wide range of specially structured sets. More specifically, we show how two very simple techniques can be used to construct projected class preserving characterizations of the convex hull of certain non-polyhedral non-convex sets that mimic the special structures studied in MILP. These special structures correspond to the difference between two convex sets with simple geometry. The techniques we consider are tailored to this special structure and do not require any additional algebraic properties (e.g. being quadratic, basic semi-algebraic, etc.). Thanks to this, the resulting characterizations are quite general, but give simple closed form expressions. While the structures we study are somewhat specific, we use them to extend split cuts to essentially all convex sets described by a single quadratic inequality, and to extend general intersection cuts to a wide variety of quadratic sets of interest to trust region and lattice problems. The rest of this paper is organized as follows. We begin with Section~\ref{LitReview} where we introduce some notation and review some known results. 
Section~\ref{NSCSec} then introduces an interpolation technique that can be used to construct split cuts for many classes of sets including convex quadratic sets. Finally, Section~\ref{intersection_cut_section} introduces an aggregation technique that can be used to construct a wide array of general intersection cuts. \section{Notation, Known Results and Other Preliminaries.} \label{LitReview} We use the following notation. Let $e^i\in\mathbb{R}^n$ be the $i$-th unit vector and $I\in \mathbb{R}^{n\times n}$ be the identity matrix where $n$ is an appropriate dimension that we omit if evident from the context. We also let $\norm{x}_2:=\sqrt{\sum_{i=1}^n x_i^2}$ denote the Euclidean norm of a given vector $x\in \mathbb{R}^n$ and for a vector $v \in \Real^n$, we let the projection onto its span be $P_{v} := \frac{v v^T}{\|v \|_2^2}$ and onto its orthogonal complement be $P_{v}^ {\perp} := I - \frac{v v^T}{\|v \|_2^2}$. For a set $S\subseteq \Real^n$, we let $\Int\bra{S}$ be its interior, $\conv\bra{S}$ be its convex hull, and $\cconv\bra{S}$ be the closure of its convex hull. For a function $F:\mathbb{R}^n\to \mathbb{R}$ we let $\epi\bra{F}:=\set{\bra{x,t}\in \Real^n\times\Real\,:\, F(x)\leq t}$ be its epigraph, $\gr\bra{F}:=\set{\bra{x,t}\in \Real^n\times\Real\,:\, F(x)= t}$ be its graph, and $\hyp\bra{F}:=\set{\bra{x,t}\in \Real^n\times\Real\,:\, F(x)\geq t}$ be its hypograph. In addition, we let $[n]:=\set{1,\ldots,n}$. \begin{definition}[Intersection and Split Cuts] \label{firstdef} Let $C,\,S\subseteq\mathbb{R}^{n}$ be two closed convex sets and $g:\Real^n\to\Real$ be an arbitrary function. 
We say inequality $g(x)\leq 0$ is: \begin{itemize} \item \emph{Valid} for $\cconv\bra{C\setminus \Int\bra{S}}$ if $\cconv\bra{C\setminus \Int\bra{S}}\subseteq \set{x\in \Real^n\,:\, g(x)\leq 0}$, \item an \emph{intersection cut} for $C$ and $S$ if it is valid for $\cconv\bra{C\setminus \Int\bra{S}}$ and $g$ is convex, and \item a \emph{non-trivial intersection cut} for $C$ and $S$ if additionally $\set{x\in C\,:\, g(x)>0}\neq \emptyset$. \end{itemize} We let a \emph{split} be a set of the form $\set{x\in \Real^n\,:\, \pi^Tx\in[\pi_0,\pi_1]}$ for some $\pi\in \Real^n\setminus\set{\bf 0}$ and $\pi_0,\,\pi_1\in \Real$ such that $\pi_0<\pi_1$. If $S$ is a split, we say that the associated intersection cut is a \emph{split cut}. If $S$ is additionally a split with $\pi=e^i$ for some $i\in [n]$, we say the associated split cut is an \emph{elementary split cut}. \end{definition} We note that the term intersection cut was introduced by Balas \cite{balas1971intersection} for the case in which $C$ is a translated simplicial cone and its unique vertex is in $\Int\bra{S}$. In this setting, we have that $\conv\bra{C\setminus \Int\bra{S}}$ is closed and can be described by adding a single linear inequality to $C$. Furthermore, this single linear inequality has a simple formula dependent on the \emph{intersections} of the extreme rays of $C$ with $S$. While we do not always have such intersection formulas for other classes of sets, we continue to use the term in the more general setting and avoid any additional qualifiers for simplicity. In particular, we do not use the term \emph{generalized intersection cut} as it has already been used for the case of polyhedral $C$ and $S$ and in conjunction with an improved cut generation procedure for MILP \cite{genintersec}. The term split cut was introduced by Cook, Kannan, and Schrijver \cite{DBLP:journals/mp/CookKS90}, and their original definition directly generalizes to non-polyhedral sets as in Definition~\ref{firstdef}. 
The interest of intersection cuts for MILP and MINLP arises from the fact that if $\Int(S)\cap \mathbb{Z}^p\times \mathbb{R}^q=\emptyset$, an intersection cut for $C$ and $S$ is valid for $\cconv\bra{C\cap \mathbb{Z}^p\times \mathbb{R}^q}$. Hence, intersection cuts can be used to strengthen the continuous relaxation of MILP and MINLP problems. Intersection cuts are particularly attractive in the MILP setting, since they can be quite strong and can be easily constructed. They were extensively studied when they were first proposed in the 1970s \cite{balas1971intersection,Gomory1969451,springerlink:10.1007/BF01584976} and have recently received renewed interest \cite{conforti2011corner,del2012relaxations}. Part of the relative simplicity and effectiveness of intersection cuts for MILP stems from two basic facts. The first is that in the MILP setting, $C$ is a polyhedron (i.e. the continuous relaxation of a MILP is an LP). The second is that every convex set $S$ such that $\Int(S)\cap \mathbb{Z}^n=\emptyset$ (usually denoted a \emph{lattice free convex set}) that is maximal with respect to inclusion for this property is also a polyhedron \cite{lovasz1989geometry}. Restricting both $C$ and $S$ to be polyhedra gives intersection cuts for MILP several useful properties. For instance, if $C$ and $S$ are polyhedra, then $\cconv\bra{C\setminus \Int\bra{S}}$ is a polyhedron \cite{del2012relaxations}. Hence, in the MILP setting, we can restrict our attention to linear intersection cuts. Furthermore, if $C$ is a translated simplicial cone and its unique vertex is in $\Int\bra{S}$, then $\conv\bra{C\setminus \Int\bra{S}}$ is closed, can be described by adding a single linear inequality to $C$, and this linear inequality has a relatively simple formula \cite{balas1971intersection,Gomory1969451,springerlink:10.1007/BF01584976}. 
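To help fix ideas, the classical formula can be worked out on a tiny instance. The following sketch (our own illustrative example with made-up data, not taken from the cited references) computes Balas' intersection cut for a two-dimensional translated simplicial cone whose apex lies in the interior of a split; the cut is built from the step lengths at which the extreme rays leave $S$.

```python
import math

# Illustrative sketch (our own example, not from the cited references):
# Balas' intersection cut for a translated simplicial cone C = v + cone{r1, r2}
# in R^2 and the split S = {x : pi0 <= x_1 <= pi1}, with the apex v in Int(S).
v = (1.5, 2.0)                    # apex of the cone, inside the split
rays = [(-1.5, -2.0), (1.5, -2.0)]
pi0, pi1 = 1.0, 2.0               # split boundaries on x_1

def step_to_leave_split(r):
    """Largest alpha with v + alpha*r still inside the split (inf if it never leaves)."""
    if r[0] > 0:
        return (pi1 - v[0]) / r[0]
    if r[0] < 0:
        return (pi0 - v[0]) / r[0]
    return math.inf

alphas = [step_to_leave_split(r) for r in rays]   # here both steps equal 1/3

def violates_cut(x):
    """True if x violates the cut lam1/alpha1 + lam2/alpha2 >= 1 (ray coordinates)."""
    det = rays[0][0]*rays[1][1] - rays[1][0]*rays[0][1]
    dx, dy = x[0] - v[0], x[1] - v[1]
    lam1 = (dx*rays[1][1] - rays[1][0]*dy) / det
    lam2 = (rays[0][0]*dy - dx*rays[0][1]) / det
    return lam1/alphas[0] + lam2/alphas[1] < 1 - 1e-9

# The rays meet the split boundary at (1, 4/3) and (2, 4/3), so the cut is x_2 <= 4/3:
boundary_pts = [tuple(v[i] + alphas[j]*rays[j][i] for i in range(2)) for j in range(2)]
assert all(math.isclose(p[1], 4/3) for p in boundary_pts)
assert not any(violates_cut(p) for p in [(0, 0), (3, 0), (1, 0), (2, 0)])  # valid
assert violates_cut(v)                                                      # non-trivial
```

Here the cut passes exactly through the two points where the extreme rays cross the boundary of the split, which is the geometric content of the intersection formula.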
In particular, if $S$ is a split and $C$ is a polyhedron, then all linear intersection cuts for $C$ and $S$ can be constructed from simplicial relaxations of $C$ and hence have simple formulas \cite{DBLP:journals/mp/AndersenCL05,dash2011note,DBLP:journals/orl/Vielma07}. Gomory Mixed Integer (GMI) cuts \cite{Gomory1969451,springerlink:10.1007/BF01584976} and Mixed Integer Rounding (MIR) cuts \cite{Marchand,DBLP:books/daglib/0090563,springerlink:10.1007/BF01585752,Wolsey} are two versions of these formulas that have made split cuts one of the most effective cuts for MILP \cite{bixby2004}. For more information on the ongoing efforts to duplicate this effectiveness for other lattice free polyhedra, we refer the reader to \cite{conforti2011corner,del2012relaxations}. In this context, we note that $\conv\bra{C\setminus \Int\bra{S}}$ can fail to be closed even if $C$ and $S$ are polyhedra and $S$ is not a split (e.g. consider $C=\set{x\in \Real^2\,:\, x_2\geq 0}$ and $S=\set{x\in \Real^2\,:\, x_2\leq 1,\,x_1+x_2\leq 1}$). However, $\conv\bra{C\setminus \Int\bra{S}}$ is closed in the polyhedral case if $S$ is full-dimensional and the recession cone of $S$ is a linear subspace \cite{andersen2010analysis}. In the MINLP setting, there has been significant work on the computational use of linear split cuts \cite{DBLP:conf/ipco/Bonami11,DBLP:journals/mp/CezikI05,springerlink:10.1007/s101070050103,Drewes, mustafa}. From the theoretical side, we know that if $S$ is a split, then $\conv\bra{C\setminus \Int\bra{S}}$ is closed even if $C$ is not polyhedral \cite{Dadush2011121}. With respect to formulas for intersection cuts, there has been some progress in the description of split cuts for quadratic sets in \cite{Atamturk07,AN:conicmir,Dadush2011121,Goez2012}. Dadush et al. 
\cite{Dadush2011121} show that, if $C$ is an ellipsoid and $S$ is a split, then $\conv\bra{C\setminus \Int\bra{S}}$ can be described by intersecting $C$ with either a linear half space, a linear transformation of the second-order cone (a.k.a. Lorentz cone), or an ellipsoidal cylinder. In addition, they give simple closed form expressions for all these linear and nonlinear split cuts. Independently, \cite{Goez2012} studies split cuts for more general quadratic sets, but only for splits in which $\{x\in C\,:\, \pi^Tx= \pi_0\}$ and $\{x\in C\,:\, \pi^Tx= \pi_1\}$ are bounded. They give a procedure to find the associated split cuts, but do not give closed form expressions for them. Finally, \cite{Atamturk07,AN:conicmir} give a simple formula for an elementary split cut for the standard three dimensional second-order cone. While \cite{Goez2012} develops a procedure to construct split cuts through a detailed algebraic analysis of quadratic constraints developed in \cite{BelottiGPRT2011}, \cite{Atamturk07,AN:conicmir,Dadush2011121} give formulas for split cuts through simple geometric arguments. As we have recently shown at the MIP 2012 Workshop, these geometric techniques can be extended to additional quadratic and basic semi-algebraic sets \cite{poster}. In this paper we show that the principles behind these geometric arguments can be abstracted from the semi-algebraic setting to develop simple split cut formulas for a wider class of specially structured convex sets. This abstraction greatly simplifies the proofs and can be used to construct split cuts for essentially all convex sets described by a single quadratic inequality through simple linear algebra arguments. In addition to studying split cuts, we show how a commonly used aggregation technique can be used to develop formulas for general nonlinear intersection cuts for the case in which $C$ and $S$ are both non-polyhedral, but share a common structure. 
While a non-polyhedral $S$ is not necessary in the MINLP setting (it should still be sufficient to consider maximal lattice free convex sets, which are polyhedral), such sets could still provide an advantage and are important in other settings such as trust region problems \cite{dan,polik2007survey} and lattice problems \cite{DBLP:conf/ipco/BuchheimCL10,springerlink:10.1007/s10107-011-0475-x,MGbook}. We finally note that similar results for the quadratic case have recently been independently developed in \cite{Kent}. To describe our approach, we use the following additional definitions. Because we restrict to the cases in which $\conv\bra{C\setminus \Int\bra{S}}$ is closed, we drop the closure from the definitions. \begin{definition} Let $C,\,S\subseteq\mathbb{R}^{n}$ be two closed convex sets and $g:\Real^n\to\Real$ be an arbitrary function. We say inequality $g(x)\leq 0$ is a: \begin{itemize} \item \emph{Binding valid cut} for $\conv\bra{C\setminus \Int\bra{S}}$ if it is valid and $\set{x\in C\setminus \Int\bra{S}\,:\, g(x)=0}\neq \emptyset$, and \item \emph{Sufficient cut} if $\set{x\in C \,:\, g(x)\leq0}\subseteq \conv\bra{C\setminus\Int(S)}$. \end{itemize} In this setting, we refer to $C$ as the \emph{base set} and to $S$ as the \emph{forbidden set}. \end{definition} Binding valid cuts correspond to valid cuts that cannot be improved by translations, and sufficient cuts are those that are violated outside $\conv\bra{C\setminus\Int(S)}$. We can show that a convex cut that is sufficient and valid is enough to describe $\conv\bra{C\setminus\Int(S)}$ together with the original constraints defining $C$. Our approach to generating such cuts will be to construct cuts that are binding and valid by design, and that have simple structures from which sufficiency can easily be proved. 
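The two definitions can be checked numerically on a small instance. The sketch below is our own example (the candidate cut $|x_2|\leq \sqrt{3}/2$ anticipates the ball formulas derived later in the paper, and the friends used for sufficiency are those of the symmetric case): it verifies validity, bindingness, and sufficiency by sampling for the unit disk and a symmetric split on $x_1$.

```python
import math, random

# Our own illustrative instance (not from the text): base set C = unit disk in R^2,
# forbidden split S = {x : -0.5 <= x_1 <= 0.5}, candidate convex cut
# g(x) = |x_2| - sqrt(3)/2 <= 0.
random.seed(0)
g = lambda x: abs(x[1]) - math.sqrt(3)/2
in_C = lambda x: x[0]**2 + x[1]**2 <= 1 + 1e-12
in_intS = lambda x: -0.5 < x[0] < 0.5

# Validity: every sampled point of C \ Int(S) satisfies g(x) <= 0
# (x_1^2 >= 1/4 forces x_2^2 <= 3/4 inside the disk).
pts = [(math.cos(t), math.sin(t)) for t in [2*math.pi*k/200 for k in range(200)]]
pts += [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(2000)]
assert all(g(x) <= 1e-9 for x in pts if in_C(x) and not in_intS(x))

# Binding: the point (1/2, sqrt(3)/2) lies in C \ Int(S) and satisfies g(x) = 0.
xb = (0.5, math.sqrt(3)/2)
assert in_C(xb) and not in_intS(xb) and abs(g(xb)) < 1e-12

# Sufficiency: a point of C inside the split with g(x) <= 0 has friends
# (-1/2, x_2) and (1/2, x_2) in C \ Int(S), and lies on the segment between them.
for x in pts:
    if in_C(x) and in_intS(x) and g(x) <= 0:
        f0, f1 = (-0.5, x[1]), (0.5, x[1])
        assert in_C(f0) and in_C(f1)      # friends stay in the disk
        lam = x[0] + 0.5                  # x = (1-lam)*f0 + lam*f1
        assert 0 <= lam <= 1
```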
\section{Nonlinear Split Cuts Through Interpolation.} \label{NSCSec} In this section we consider the case in which the base set is either the epigraph or lower level set of a convex function and the forbidden set corresponds to a split set. Our cut construction approach is based on a simple interpolation technique that can be more naturally explained for epigraphs of specially structured functions. For this reason, we begin with such a case and then consider special cases of non-epigraphical sets and discuss the limits of the interpolation technique. While the structures for which the technique yields simple formulas are quite specific, we can consider broader classes by considering linear transformations. We illustrate the power of this approach by showing how the interpolation technique yields formulas for split cuts for convex quadratic sets. For convenience, we use the following notation for the sets associated to specific classes of splits. \begin{definition} Let $\pi \in \mathbb{R}^n\setminus\set{\bf 0}, \pi_0, \pi_1, \hat{\pi} \in \mathbb{R}$ be such that $\pi_0 < \pi_1$ and $\hat{\pi}\neq 0$. 
\begin{itemize} \item {\bf General Sets:} For a closed convex base set $C \subseteq \mathbb{R}^n$ and a forbidden split set \mbox{$S=\set{x\in \Real^n\,:\, \pi^Tx\in[\pi_0,\pi_1]}$}, we define \[C^{\pi, \pi_0, \pi_1} := \conv\bra{C\setminus \Int\bra{S}}=\conv \bra{\set{x \in C : \pi^T x \le \pi_0} \cup\set{ x \in C : \pi^T x \ge \pi_1}}.\] \item {\bf Epigraphical Sets:} For a closed convex base set $C \subseteq \mathbb{R}^n\times \mathbb{R}$ and a forbidden split set \mbox{$S_x=\set{\bra{x,t}\in \Real^n\times \Real\,:\, \pi^Tx\in[\pi_0,\pi_1]}$} that does not affect $t$, we define \[C^{\pi, \pi_0, \pi_1} := \conv\bra{C\setminus \Int\bra{S_x}}= \conv \bra{\set{\bra{x,t} \in C : \pi^T x \le \pi_0} \cup \set{ \bra{x,t} \in C : \pi^T x \ge \pi_1}},\] and for a forbidden split set \mbox{$S_{x,t}=\set{\bra{x,t}\in \Real^n\times \Real\,:\, \pi^Tx+\hat{\pi} t\in[\pi_0,\pi_1]}$} that does affect $t$, we define \[C^{\pi,\hat{\pi}, \pi_0, \pi_1}:=\conv\bra{C\setminus \Int\bra{S_{x,t}}}=\conv\bra{ \set{\bra{x,t} \in C : \pi^T x+\hat{\pi} t \le \pi_0}\cup \set{ \bra{x,t} \in C : \pi^T x +\hat{\pi} t\ge \pi_1}}.\] Note that in this latter case, we also allow disjunctions of the form $\phat t \le \pi_0$ and $\phat t \ge \pi_1$ for which $\pi = \bf 0$. 
\item {\bf Epigraphical Sets with Elementary Disjunctions:} For a closed convex base set $C \subseteq \mathbb{R}\times \mathbb{R}^n\times \mathbb{R}$ and a forbidden elementary split set \mbox{$S=\set{\bra{z,y,t}\in \mathbb{R}\times \mathbb{R}^n\times \mathbb{R}\,:\, z\in[\pi_0,\pi_1]}$}, we define \[C^{\pi_0, \pi_1} :=\conv\bra{C\setminus \Int\bra{S}}= \conv \bra{\set{\bra{z,y,t} \in C : z \le \pi_0} \cup \set{ \bra{z,y,t} \in C : z \ge \pi_1}}.\] \end{itemize} \end{definition} \subsection{Epigraphical Sets with Simple Split Disjunctions.} Let $F:\mathbb{R}\to \mathbb{R}$ be a closed convex function, \begin{equation} \epi(F) := \set{\bra{z, t} \in \Real \times \Real : F\left(z\right) \le t} \end{equation} be its epigraph, and consider an elementary split disjunction on $z$. As illustrated in Figure~\ref{firstex1}, $\epi(F)^{\pi_0,\pi_1}=\epi(F)\cap \epi(G)$ for \begin{equation}\label{simple_splitcut} G(z)=\frac{F(\pi_1) - F(\pi_0)}{\pi_1 - \pi_0} z +\frac{\pi_1 F(\pi_0) - \pi_0 F(\pi_1)}{\pi_1- \pi_0}. \end{equation} \begin{figure}[htb] \centering \subfigure[Graph of $F$ in black and graph of $G$ in blue.]{\includegraphics[scale=0.35]{separable1.pdf}\label{firstex1}} \subfigure[Naive friends construction.]{\includegraphics[scale=0.35]{separable2.pdf}\label{firste2}} \subfigure[Friends by following the slope.]{\includegraphics[scale=0.35]{separable3.pdf}\label{firstex3}} \caption{Interpolation technique for univariate functions.} \end{figure} Indeed, since $G$ is a linear function, $\epi(F)\cap \epi(G)$ is convex, so it suffices to show that $G(z)\leq t$ is a valid and sufficient cut. We can check that $G(z)\leq t$ is a binding valid cut by design. Indeed, $G$ is the (affine) linear interpolation of $F$ through $z=\pi_0$ and $z=\pi_1$. Convexity of $F$ then implies that this interpolation is below $F$ outside $z\in(\pi_0,\pi_1)$. 
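As a quick sanity check of the interpolation \eqref{simple_splitcut}, the following snippet (our own toy instance $F(z)=z^2$ with $\pi_0=1$, $\pi_1=3$, not from the text) confirms that $G$ agrees with $F$ at the split boundary, lies below $F$ outside $(\pi_0,\pi_1)$, and lies strictly above $F$ inside the split.

```python
# Numeric sanity check of the interpolation cut (our own toy instance):
# F(z) = z^2 with pi0 = 1, pi1 = 3, so G is the secant of F through pi0 and pi1,
# which works out to G(z) = 4z - 3.
F = lambda z: z*z
pi0, pi1 = 1.0, 3.0
G = lambda z: (F(pi1) - F(pi0))/(pi1 - pi0) * z + (pi1*F(pi0) - pi0*F(pi1))/(pi1 - pi0)

assert G(pi0) == F(pi0) and G(pi1) == F(pi1)             # binding at the boundary
assert all(G(z) <= F(z) for z in [-2, 0, 0.9, 3.1, 5])   # valid outside (pi0, pi1)
assert all(G(z) > F(z) for z in [1.5, 2, 2.5])           # cuts into the split:
# points (z, t) of epi(F) with pi0 < z < pi1 and F(z) <= t < G(z) are cut off.
```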
To show that the cut is sufficient, we need to show that any point $\bra{\overline{z},\overline{t}} \in \epi(F)$ that satisfies the cut is in $\epi(F)^{\pi_0,\pi_1}$. To achieve this, we can find two points $\bra{z^0,t^0}$ and $\bra{z^1,t^1}$ in $\epi(F)$ such that $z^0\leq \pi_0$, $z^1\geq \pi_1$, and $\bra{\overline{z},\overline{t}}\in\conv\bra{\set{\bra{z^0,t^0},\bra{z^1,t^1}} }$. Following \cite{dgv11}, we will denote these points the \emph{friends} of $\bra{\overline{z},\overline{t}}$. One naive way to construct the friends is to wiggle $\bra{\overline{z},\overline{t}}$ by decreasing and increasing $\overline{z}$ until it reaches $\pi_0$ and $\pi_1$, respectively. However, as illustrated in Figure~\ref{firste2}, this can result in one of the friends falling outside $\epi(F)$. Fortunately, as illustrated in Figure~\ref{firstex3}, we can always wiggle by following the slope of cut $G$ to assure that the friends are at least in $\epi(G)$. Correctness then follows by noting that $G(z)=F(z)$ at $z=\pi_0$ and $z=\pi_1$, since $G(z)\leq t$ is a binding valid cut. The previous reasoning can be formalized for multivariate functions and general split disjunctions as follows. \begin{proposition}\label{method1propB} Let $F:\mathbb{R}^n\to \mathbb{R}$ be a closed convex function, $\pi\in \Real^n\setminus\set{\bf 0}$, and $\pi_0, \pi_1 \in \Real$ such that $\pi_0<\pi_1$. If $G:\mathbb{R}^n\to \mathbb{R}$ is a closed convex function such that \begin{subequations}\label{generalintcond} \begin{alignat}{4} G(x)&=F(x) &\quad& \forall x \text{ s.t. } \pi^Tx\in\set{\pi_0,\pi_1}\label{generalintcondb}\\ G(x)&\leq F(x)&\quad& \forall x \text{ s.t. } \pi^Tx\notin\bra{\pi_0,\pi_1},\label{generalintcondc} \end{alignat} \end{subequations} and if every $\bra{\overline{x},\tbar}\in \epi(G)$ such that $\pi^T \xbar \in (\pi_0,\pi_1)$ has friends in \[\set{\bra{x,t}\in \epi(F):\pi^Tx\notin (\pi_0,\pi_1)},\] then \begin{equation} \epi(F)^{\pi,\pi_0,\pi_1}= \epi(F)\cap \epi(G). 
\end{equation} \end{proposition} \begin{proof} Let $Q=\set{\bra{x,t}\in \epi(F):\pi^Tx\notin (\pi_0,\pi_1)}$. We have that \begin{equation}\label{AAAAA} Q\subseteq \epi(F)\cap\epi(G) \subseteq \conv(Q)= \epi(F)^{\pi,\pi_0,\pi_1}, \end{equation} where the first containment comes from \eqref{generalintcondc} and the last from the friends condition. The result follows by taking convex hull in \eqref{AAAAA} and noting that $\epi(F)\cap\epi(G)$ is convex because both $F$ and $G$ are convex. \end{proof} Our general approach to using Proposition~\ref{method1propB} is to construct a convex function that yields binding valid cuts (i.e. satisfies \eqref{generalintcond}) and to follow its slope to construct friends for sufficiency. We now consider two structures in which the appropriate interpolation can easily be constructed once we identify its general structure. \subsubsection{Separable Functions.} If $F$ is a separable function of the form $F(z,y)=f(z)+g(y)$ with $f:\Real \to \Real$ and $g:\Real^d\to \Real$ closed convex functions, we can simply interpolate $F$ parametrically on $y$ to obtain \begin{equation}\label{simple_splitcut_para} G(z,y)=\frac{F(\pi_1,y) - F(\pi_0,y)}{\pi_1 - \pi_0} z +\frac{\pi_1 F(\pi_0,y) - \pi_0 F(\pi_1,y)}{\pi_1- \pi_0}. \end{equation} In this case, the interpolation simplifies to \begin{equation}\label{simple_splitcut_para_sep} G(z,y)= \frac{f(\pi_1) - f(\pi_0)}{\pi_1 - \pi_0} z +\frac{\pi_1 f(\pi_0) - \pi_0 f(\pi_1)}{\pi_1- \pi_0}+g(y), \end{equation} which is convex on $\bra{z,y}$ and linear on $z$. Our original univariate argument goes through directly. We can then use the general version of this parametric interpolation to show the following result. 
\begin{restatable}{proposition}{GFormSplitCut} \label{GFormSplitCut} Let $\pi \in \Real^n\setminus\set{\bf 0}$, $\pi_0, \pi_1 \in \Real$ such that $\pi_0 < \pi_1$, $g:\mathbb{R}^n\to \mathbb{R}$ and $f:\mathbb{R}\to \mathbb{R}$ be closed convex functions, \[ S_{g,f} := \set{\bra{x, t} \in \Real^n \times \Real : g\left(P_\pi^{\perp} x\right) + f\left(\pi^T x\right) \le t},\] $a = \frac{f(\pi_1) - f(\pi_0)}{\pi_1 - \pi_0}$, and $b = \frac{\pi_1 f(\pi_0) - \pi_0 f(\pi_1)}{\pi_1- \pi_0}$. Then we have that \[S^{\pi,\pi_0,\pi_1}_{g,f} = \set{ \bra{x, t} \in S_{g,f} : g\left(P_\pi^{\perp} x\right) + a \pi^T x + b \le t}.\] \end{restatable} \begin{proof} Let $F(x)= g\left(P_\pi^{\perp} x\right) + f\left(\pi^T x\right)$ and $G(x)=g\left(P_\pi^{\perp} x\right) + a \pi^T x + b$. Interpolation condition \eqref{generalintcond} holds by the definition of $a$ and $b$ and convexity of $f$. Now let $\bra{\overline{x},\tbar}\in \epi(G)$ such that $\pi^T \xbar \in (\pi_0,\pi_1)$. To construct the friends of $\bra{\overline{x},\tbar}$, we consider two cases. \noindent \textbf{Case 1.} If $f(\pi_0) = f(\pi_1)$, then $\bra{\bar{x},\tbar}$ can be written as a convex combination of $\bra{x^0,\tbar}$ and $\bra{x^1,\tbar}$, where \beqn x^0 = P_{\pi}^{\perp} \bar{x} + \frac{\pi_0}{\norm{\pi}_2^2} \pi \eeqn and \beqn x^1 = P_{\pi}^{\perp} \bar{x} + \frac{\pi_1}{\norm{\pi}_2^2} \pi. \eeqn We have that $\bra{x^0,\tbar}, \bra{x^1,\tbar} \in \epi(F)\cap\set{\bra{x,t}:\pi^Tx\notin (\pi_0,\pi_1)}$, since $\pi^T x^0 = \pi_0$ and $\pi^T x^1 = \pi_1$, and because interpolation condition \eqref{generalintcondb} holds. 
\textbf{Case 2.} If $f(\pi_0) \neq f(\pi_1)$, $\bra{\bar{x},\tbar}$ can be written as a convex combination of $\bra{x^0,t^0}$ and $\bra{x^1,t^1}$ given by \beq x^0 = P_{\pi}^{\perp} \bar{x} + \frac{\pi_0}{\norm{\pi}_2^2} \pi, \quad t^0 = \tbar + a \bra{\pi_0 - \pi^T \bar{x}}, \eeq and \beq x^1 = P_{\pi}^{\perp} \bar{x} + \frac{\pi_1}{\norm{\pi}_2^2} \pi, \quad t^1 = \tbar + a \bra{\pi_1 - \pi^T \bar{x}}. \eeq Indeed, since $\pi^T \bar{x} \in \bra{\pi_0, \pi_1}$, there exists $\alpha \in (0,1)$ such that $\pi^T \bar{x} = \alpha \pi_0 + (1-\alpha) \pi_1$. One can check that $\bra{\bar{x},\tbar} = \alpha \bra{x^0,t^0} + (1-\alpha) \bra{x^1,t^1}$. We again have that $\bra{x^0,t^0}, \bra{x^1,t^1} \in \set{\bra{x,t}\in \epi(F):\pi^Tx\notin (\pi_0,\pi_1)}$, since $\pi^T x^0 = \pi_0$ and $\pi^T x^1 = \pi_1$, and because interpolation condition \eqref{generalintcondb} holds. The result then follows from Proposition~\ref{method1propB}. \end{proof} \subsubsection{Non-Separable Positive Homogeneous Convex Functions.} \label{ENSF Section} Proposition~\ref{method1propB} can also be used for some non-separable functions, but as illustrated in the following example, we need slightly more complicated interpolations. Consider $F:\Real\times \Real\to \Real$ given by $F(z,y)=\sqrt{z^2+y^2}$ and let $\pi_0=-10$ and $\pi_1=1$. Constructing a parametric \emph{linear} interpolation as in \eqref{simple_splitcut_para} yields \begin{equation} G_L(z,y)=\frac{10 \sqrt{1 + y^2} + \sqrt{100 + y^2} + z \bra{\sqrt{1 + y^2} - \sqrt{100 + y^2}}}{11}. \end{equation} The associated cut is certainly valid, binding and sufficient (we can always find friends by wiggling $z$ toward $\pi_0$ and $\pi_1$, and using $t$ to correct by following the slope of $G_L$ for fixed $y$). However, while $G_L$ is linear with respect to $z$, it is not convex with respect to $y$. We hence cannot use Proposition~\ref{method1propB} for this interpolation. 
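The lack of convexity is easy to confirm numerically. The following check (our own, not from the text) verifies that $G_L$ interpolates $F$ at $z\in\set{\pi_0,\pi_1}$ but exhibits a midpoint-convexity violation in $y$ at the fixed value $z=12$.

```python
import math

# Numeric verification (our own check) that G_L from the example interpolates
# F(z,y) = sqrt(z^2 + y^2) at z in {-10, 1}, yet is not convex in y.
F = lambda z, y: math.sqrt(z*z + y*y)
def G_L(z, y):
    a, b = math.sqrt(1 + y*y), math.sqrt(100 + y*y)
    return (10*a + b + z*(a - b)) / 11

# G_L agrees with F on the split boundary z in {-10, 1} ...
assert all(abs(G_L(z, y) - F(z, y)) < 1e-9 for z in (-10, 1) for y in (0, 2, 7))
# ... but violates midpoint convexity along y at the fixed value z = 12,
# where G_L(12, y) = 2*sqrt(1 + y^2) - sqrt(100 + y^2):
assert G_L(12, 3) > (G_L(12, 2) + G_L(12, 4)) / 2
```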
Fortunately, we can construct an alternative interpolation given by \begin{equation} G_C(z,y)={\sqrt{\bra{\frac{ 20 - 9 z}{11}}^2 + y^2 }} \end{equation} that is convex on $(z,y)$. This function is not linear on $z$ for fixed $y$, but we can still show it satisfies the interpolation condition \eqref{generalintcond} by noting that $\bra{\frac{ 20 - 9 z}{11}}^2\leq z^2$ for any $z\notin (\pi_0,\pi_1)$ and that equality holds for $z\in \set{\pi_0,\pi_1}$. This is illustrated in Figure~\ref{firstpos1} which shows that for fixed $y=4$, $G_C\leq t$ is a nonlinear binding valid cut, but is strictly weaker than $G_L\leq t$. While $G_C$ yields a weaker cut than $G_L$, $G_C$ is in fact the strongest \emph{convex} function that satisfies the interpolation condition \eqref{generalintcond} and we can show that $\epi(F)^{\pi_0,\pi_1}=\epi(F)\cap\epi(G_C)$. However, as illustrated in Figure~\ref{firstpos2}, constructing friends for $\bra{\overline{z},\overline{y},\overline{t}}$ can no longer be done by leaving $\overline{y}$ fixed and following the slope of $G_C$ or $G_L$ to wiggle $\overline{z}$ towards $\pi_0$ and $\pi_1$. \begin{figure}[htb] \centering \subfigure[Graphs of $F$, $G_L$ and $G_C$ in black, green and blue, respectively.]{\includegraphics[scale=0.5]{nonsep1.pdf}\label{firstpos1}} \subfigure[Following slopes fails in friends construction.]{\includegraphics[scale=0.5]{nonsep2.pdf}\label{firstpos2}} \caption{Nonlinear interpolation and friends.} \end{figure} The issue with the friends construction stems from the fact that for certain fixed $y$, $G_C$ is not linear on $z$. However, for any $\bra{\overline{z},\overline{y},\overline{t}}$ such that $\bar{z} \in (\pi_0,\pi_1)$, we can always find a direction in which $G_C$ is linear in a neighborhood of $\bra{\overline{z},\overline{y},\overline{t}}$. Discovering one such direction is straightforward once we note that the epigraph of $G_C$ is a convex cone. 
Let $\bra{\hat{z},\hat{y}}$ be the projection of the apex of this cone onto $(z,y)$ and let $r=\bra{r_z,r_y}$ be a vector that starts at this point and goes through $\bra{\overline{z},\overline{y}}$ (See Figure~\ref{sectionfig0}). As illustrated in Figure~\ref{sectionfig1}, we can then check that $G_C$ is linear on $\set{\bra{\hat{z},\hat{y}}+r \alpha\,:\, \alpha\geq0}$ (Figure~\ref{sectionfig1} shows $F\bra{\bra{\hat{z},\hat{y}}+r\bra{1-\alpha}}$ and $G_C\bra{\bra{\hat{z},\hat{y}}+r\bra{1-\alpha}}$ with $\alpha_i$ such that $\hat{z}+r_z\bra{1-\alpha_i}=\pi_i$ for $i\in\set{0,1}$). \begin{figure}[htb] \centering \subfigure[Level sets of $G_C$ in blue and ray $r$ in purple.]{\includegraphics[scale=0.5]{section0.pdf}\label{sectionfig0}} \subfigure[Graphs of $F$ and $G_C$ in black and blue plotted in the direction in which $G_C$ is locally linear. ]{\includegraphics[scale=0.5]{section1.pdf}\label{sectionfig1}} \subfigure[Friends by following the slope of $G_C$.]{\includegraphics[scale=0.5]{section2.pdf}\label{sectionfig2}} \subfigure[Friends by using the fact that $\epi\bra{G_C}$ is a cone.]{\includegraphics[scale=0.5]{section3.pdf}\label{sectionfig3}} \caption{Friends construction for positive homogeneous non-separable functions.} \end{figure} The friends construction can now be done by wiggling $z$ towards $\pi_0$ and $\pi_1$ while modifying $y$ to remain in the affine subspace in which $G_C$ is locally linear (i.e. $\set{\bra{\hat{z},\hat{y}}+r \alpha\,:\, \alpha\in \Real}$), and modifying $t$ to follow the slope of $G_C$ as before. This is illustrated in Figure~\ref{sectionfig2}, which also shows that we can always wiggle all the way to $\pi_0$ and $\pi_1$ as $G_C$ fails to be linear in the subspace only outside $z\in [\pi_0,\pi_1]$. Alternatively, we can wiggle towards $\pi_0$ and $\pi_1$ by following the ray that starts in the apex of the epigraph of $G_C$ and goes through $\bra{\overline{z},\overline{y},\overline{t}}$ as illustrated in Figure~\ref{sectionfig3}. 
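The construction can also be validated numerically. The sketch below (our own check, using the sample point $(\bar z,\bar y)=(0,4)$) confirms the interpolation condition for $G_C$ and that the two friends obtained by following the ray from the apex of $\epi(G_C)$ indeed lie in $\epi(F)$.

```python
import math

# Numeric check (our own) of the conic interpolation G_C for F(z,y) = sqrt(z^2 + y^2)
# with pi0 = -10, pi1 = 1, and of the friends construction along the ray from the
# apex of epi(G_C), which sits at (20/9, 0, 0).
pi0, pi1 = -10.0, 1.0
F = lambda z, y: math.sqrt(z*z + y*y)
G_C = lambda z, y: math.sqrt(((20 - 9*z)/11)**2 + y*y)

# Interpolation condition: equality on the split boundary, G_C <= F outside it.
assert all(abs(G_C(z, y) - F(z, y)) < 1e-9 for z in (pi0, pi1) for y in (0, 3, 8))
assert all(G_C(z, y) <= F(z, y) + 1e-9 for z in (-30, -15, 2, 12) for y in (0, 3, 8))

# Friends for the point P = (0, 4, G_C(0, 4)) on the graph of G_C: follow the ray
# from the apex A through P until the z-coordinate reaches pi1 and pi0, respectively.
A = (20/9, 0.0, 0.0)
P = (0.0, 4.0, G_C(0.0, 4.0))
d = tuple(P[i] - A[i] for i in range(3))
ray = lambda alpha: tuple(A[i] + alpha*d[i] for i in range(3))
alpha1 = (pi1 - A[0]) / d[0]     # ray hits the hyperplane z = pi1
alpha0 = (pi0 - A[0]) / d[0]     # ray hits the hyperplane z = pi0
for alpha in (alpha0, alpha1):
    z, y, t = ray(alpha)
    assert F(z, y) <= t + 1e-9   # each friend lies in epi(F)
# P itself is the point alpha = 1 of the ray, strictly between the two friends.
assert alpha1 < 1 < alpha0
```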
The key in this procedure was guessing that the appropriate interpolation had the form $ \sqrt{ (az+b)^2 + y^2 }$. After that, we could easily find the appropriate coefficients $a$ and $b$, and construct a direction to find friends. This can be easily generalized to $p$-order norms by using the following simple lemma whose proof is included in the appendix. \begin{restatable}{lemma}{pordersplitlemma} \label{interpolation lemma} Let $p \in {\mathbb{N}}$, $\pi_0, \pi_1 \in \Real$ such that $\pi_0 < 0 < \pi_1$, $a = \frac{\pi_0 + \pi_1}{\pi_1 - \pi_0}$, and $b = - \frac{2 \pi_1 \pi_0}{\pi_1 - \pi_0}$. \begin{itemize} \item If $s \notin \left[\pi_0, \pi_1\right]$, then $\abs{as +b}^p < \abs{s}^p$, \item if $s \in \set{\pi_0, \pi_1}$, then $\abs{as +b}^p = \abs{s}^p$, and \item if $s \in \bra{\pi_0, \pi_1}$, then $\abs{as +b}^p > \abs{s}^p$. \end{itemize} \end{restatable} Using a similar interpolation form and Lemma~\ref{interpolation lemma}, we can also calculate general split cuts for a wide range of positive homogeneous convex functions as follows. \begin{restatable}{proposition}{pordersplit}\label{porder split} Let $\pi \in \Real^n\setminus\set{\bf 0}$, $\pi_0,\,\pi_1, \beta \in \mathbb{R}$ such that $\pi_0 < \pi_1$, $p\in \mathbb{N}$, $g:\mathbb{R}^n\to \mathbb{R}$ be a positive homogeneous closed convex function, $ a = \frac{\pi_1+\pi_0 }{\pi_1-\pi_0} $ , $ b = - \frac{2 {\pi_1} {\pi_0}}{\pi_1-\pi_0} $, and \[C_{p,g} := \set{\bra{x, t} \in \Real^n \times \Real : \left(g\left(P_\pi^{\perp} x\right)^p + \abs{\beta \pi^Tx}^p \right)^{1/p}\le t}.\] If $0\notin (\pi_0,\pi_1)$, then $C_{p,g}^{\pi,\pi_0,\pi_1}=C_{p,g}$. Otherwise, we have that \[ C_{p,g}^{\pi,\pi_0,\pi_1} = \set{ (x,t) \in C_{p,g} : \left(g\left(P_\pi^{\perp} x\right)^p + \left|a \beta \pi^T x+\beta b\right|^p \right)^{1/p}\le t}.\] \end{restatable} \begin{proof} We first show the case $0 \in \bra{\pi_0, \pi_1}$. 
Let $F(x)= \left(g\left(P_\pi^{\perp} x\right)^p + \abs{\beta \pi^Tx}^p \right)^{1/p}$ and $G(x)=\left(g\left(P_\pi^{\perp} x\right)^p + \left|a \beta \pi^T x+\beta b\right|^p \right)^{1/p}$. Interpolation condition \eqref{generalintcond} holds by the definition of $a$ and $b$ and Lemma~\ref{interpolation lemma}. Now let $\bra{\overline{x},\overline{t}}\in \epi(G)$ such that $\pi^T \xbar \in (\pi_0,\pi_1)$. To construct the friends of $\bra{\overline{x},\overline{t}}$, we consider two cases. \noindent \textbf{Case 1.} If $\abs{\pi_0 } = \abs{\pi_1 }$, then the proof is analogous to case $f(\pi_0) = f(\pi_1)$ in the proof of Proposition~\ref{GFormSplitCut}. \noindent\textbf{Case 2.} If $\abs{\pi_0 } \ne \abs{\pi_1 }$, one can check that for \beqn x^{*} = \bra{\frac{-b}{a {\| \pi \|}_2^2} \pi} , \eeqn all points on the ray $ R := \set{ \bra{x^{*} ,0} + \alpha \bra{ \bar{x} -x^{*},\tbar} : \alpha \in \mathbb{R}_+ }$ belong to $\epi(G)$. Let the intersections of $R$ with the hyperplanes $\pi^T x = \pi_0$ and $\pi^T x = \pi_1$ be $\bra{x^0,t^0}$ and $\bra{x^1,t^1}$, respectively. Such points are obtained from $R$ by setting \beqn \alpha_0 = \frac{\pi_0 + {\frac{b}{a}}}{\pi^T \bar{x} + {\frac{b}{a}}} \eeqn and \beqn \alpha_1 = \frac{\pi_1 + {\frac{b}{a}}}{\pi^T \bar{x} + {\frac{b}{a}}}, \eeqn respectively. We have that $\bra{x^0,t^0}, \bra{x^1,t^1} \in \set{\bra{x,t}\in \epi(F):\pi^Tx\notin (\pi_0,\pi_1)}$, since $\pi^T x^0 = \pi_0$ and $\pi^T x^1 = \pi_1$, and because interpolation condition \eqref{generalintcondb} holds. Now note that $(\bar{x},\tbar)$ is obtained from $R$ by setting $\alpha=1$. If $\alpha_0 < 1 < \alpha_1$ or $\alpha_1 < 1 < \alpha_0$, then there exists $\lambda \in (0, 1)$ such that $\bra{\bar{x},\tbar} = \lambda \bra{x^0, t^0} + \bra{1-\lambda} \bra{x^1,t^1}$. Since $\pi^T \bar{x} \in \bra{\pi_0, \pi_1}$, $\abs{\pi_0} \ne \abs{\pi_1 }$, and $0 \in \bra{\pi_0, \pi_1}$, one can check that either $\alpha_0 < 1 < \alpha_1$ or $\alpha_1 < 1 < \alpha_0$ holds. 
The result then follows from Proposition~\ref{method1propB}. Finally, consider the case $0 \notin \bra{\pi_0, \pi_1}$. We need to show that any point $\bra{\bar{x}, \tbar}\in \set{\bra{x,t}\in \epi(F):\pi^Tx\in (\pi_0,\pi_1)}$ has friends in $\set{\bra{x,t}\in\epi(F):\pi^Tx\notin (\pi_0,\pi_1)}$. We can construct the friends in a similar way as before by noting that the ray $R:= \set{ \alpha \bra{ \bar{x},\tbar} : \alpha \in \mathbb{R}_+ }$ is contained in $\epi(F)$ and intersects both $\pi^T x = \pi_0$ and $\pi^T x = \pi_1$. \end{proof} In particular, if $g$ is a $p$-norm, the following direct corollary characterizes elementary split cuts for $p$-order cones. \begin{corollary}\label{p-order cone split cut} Let $e^k$ be the $k$-th unit vector, $\pi_0, \pi_1 \in \mathbb{R}$ such that $\pi_0 < \pi_1$, $\| x \|_p = \bra{\sum \nolimits_{i=1}^{n} \abs{x_i }^p}^{1/p}$, \[C_p := \{ (x,t) \in \mathbb{R}^n \times \mathbb{R}_{+} : \|x\|_p\leq t \},\] $ a = \frac{\pi_1+\pi_0 }{\pi_1-\pi_0} $, $b = - \frac{2 \pi_1 \pi_0}{\pi_1-\pi_0}$, and $\widehat{B}:=I+(a-1)e^k {e^k}^T$. If $0\notin (\pi_0,\pi_1)$, then $C_p ^{e^k,\pi_0,\pi_1}=C_p $. Otherwise, we have that \[C_p ^{e^k,\pi_0,\pi_1} = \set{ (x,t) \in C_p : \left\lVert\widehat{B}x+b e^k\right\rVert_p\leq t }.\] \end{corollary} \subsection{Level Sets and Disjunctions Considering $t$.} We have concentrated on epigraphical sets, since having $t$ not be affected by the disjunction simplifies the friends constructions. Fortunately, the effect of $t$ can be replicated by a positive homogeneous function of variables not affected by the disjunction. With this modification, we can show the following proposition for non-epigraphical sets. 
\begin{proposition} \label{boundedSpli} Let $\pi \in \Real^n\setminus\set{\bf 0}$, $\pi_0,\pi_1\in \mathbb{R}$ such that $\pi_0 < \pi_1$, $g:\mathbb{R}^n\to \mathbb{R}$ be a positive homogeneous convex function, $f:\mathbb{R}\to \mathbb{R}\cup\{+\infty\}$ be a closed convex function such that $f(\pi_0), f(\pi_1) \le 0$, \[ D_{f,g} := \set{x \in \Real^n : g\left(P_\pi^{\perp} x\right) + f(\pi^T x) \le 0},\] $a = \frac{f(\pi_1) - f(\pi_0)}{\pi_1 - \pi_0}$, and $b = \frac{\pi_1 f(\pi_0) - \pi_0 f(\pi_1)}{\pi_1- \pi_0}$. Then we have that \begin{equation} D_{f,g}^{\pi,\pi_0,\pi_1} = \set{ x \in D_{f,g} : g\left(P_\pi^{\perp} x\right) +a \pi^T x+b\le 0}. \label{homogeq} \end{equation} \end{proposition} \begin{proof} The left to right containment in \eqref{homogeq} holds by convexity of $f$ and the definition of $a$ and $b$. For the right to left containment, let $\overline{x}\in\Real^n$ be such that \begin{equation}\label{eee} g\left(P_\pi^{\perp} \overline{x}\right) +a \pi^T \overline{x}+b\le 0 \end{equation} and $\pi^T \overline{x}\in \bra{\pi_0,\pi_1}$. To construct friends of $\overline{x}$, we consider two cases. \noindent \textbf{Case 1.} If $f(\pi_0) = f(\pi_1)$, one can check that \beqn x^0 = P_{\pi}^{\perp} \bar{x} + \frac{\pi_0}{\norm{\pi}_2^2} \pi \eeqn and \beqn x^1 = P_{\pi}^{\perp} \bar{x} + \frac{\pi_1}{\norm{\pi}_2^2} \pi \eeqn are friends of $\bar{x}$. \noindent\textbf{Case 2.} If $f(\pi_0) \neq f(\pi_1)$, let $\alpha \in (0,1)$ be such that $\pi^T \bar{x} = \alpha \pi_0 + (1-\alpha) \pi_1$. Then $\bar{x}$ can be written as a convex combination of $x^0$ and $x^1$, where \beqn x^0 = \beta^0 P_{\pi}^{\perp} \bar{x} + \frac{\pi_0}{\norm{\pi}_2^2} \pi \eeqn and \beqn x^1 =\beta^1 P_{\pi}^{\perp} \bar{x} + \frac{\pi_1}{\norm{\pi}_2^2} \pi, \eeqn where $\beta^i:= f(\pi_i)/(\alpha f(\pi_0)+(1-\alpha)f(\pi_1))$ for $i\in\{0,1\}$, which is well defined and lies in $[0,\infty)$ by the assumptions on $f$ and the fact that $f(\pi_0) \neq f(\pi_1)$. By construction, $\pi^T x^0 = \pi_0$ and $\pi^T x^1 = \pi_1$.
To check that $x^i\in D_{f,g}$, first note that by positive homogeneity of $g$ we have \beq\label{validbounded} g\left(P_\pi^{\perp} x^i\right) +f\left( \pi^T x^i \right)=\frac{f(\pi_i)}{\alpha f(\pi_0)+(1-\alpha)f(\pi_1)} g\left(P_\pi^{\perp} \bar{x}\right)+f(\pi_i). \eeq If $f(\pi_i)=0$, then we directly obtain $g\left(P_\pi^{\perp} x^i\right) +f\left(\pi^T x^i\right)\leq 0$. Otherwise, if $f(\pi_i)<0$, \eqref{validbounded} implies that $g\left(P_\pi^{\perp} x^i\right) +f\left( \pi^T x^i \right)\leq 0$ is equivalent to $g\left(P_\pi^{\perp} \bar{x}\right) +\alpha f(\pi_0)+(1-\alpha)f(\pi_1)\leq 0$, which holds because of \eqref{eee} and $\alpha f(\pi_0)+(1-\alpha)f(\pi_1)=a \pi^T \bar{x}+b$. The result then follows from convexity of $g\left(P_\pi^{\perp} x\right) +a \pi^T x+b$, similar to the proof of Proposition~\ref{method1propB}. \end{proof} As a direct corollary of Proposition~\ref{boundedSpli}, we obtain formulas for elementary split cuts for balls of $p$-norms. \begin{corollary}\label{p-order ball split cut} Let $e^k$ be the $k$-th unit vector, $\pi_0, \pi_1, r \in \mathbb{R}$ such that $\pi_0 < \pi_1$, $r>0$, and $\abs{\pi_0},\abs{\pi_1}\leq r$, \[B_p := \{ x \in \mathbb{R}^n \ : \|x\|_p\leq r \},\] $f(u):=-\bra{r^p-\abs{u }^p}^{1/p}$, $a = \frac{f(\pi_1) - f(\pi_0)}{\pi_1 - \pi_0}$, $b = \frac{\pi_1 f(\pi_0) - \pi_0 f(\pi_1)}{\pi_1- \pi_0}$ , and $\widehat{B}:=I-e^k {e^k}^T$. Then \[B_p ^{e^k,\pi_0,\pi_1} = \set{ x \in B_p : \left\lVert\widehat{B}x\right\rVert_p+ax_k+b\leq 0 }.\] \end{corollary} \begin{proof} Direct from Proposition~\ref{boundedSpli} by noting that \[B_p =\set{ x \in \mathbb{R}^n \ : \left\lVert\widehat{B}x\right\rVert_p+f(x_k)\leq 0 }.\] \end{proof} From the proof of Proposition~\ref{boundedSpli}, we can glimpse a natural extension of Proposition~\ref{boundedSpli} to general non-epigraphical sets. 
We do not explore such an extension further, since we have not yet encountered non-epigraphical structures beyond those of Proposition~\ref{boundedSpli} that allow for simple split cut characterizations. However, we do consider the following direct extension of Proposition~\ref{method1propB}, as we need it to give a complete characterization of split cuts for convex quadratic sets. We will see in Section~\ref{complicated} that the resulting formulas are significantly more complicated than those obtained through Proposition~\ref{method1propB}. \begin{proposition}\label{method1propC} Let $F:\mathbb{R}^n\to \mathbb{R}$ be a closed convex function, $\pi\in \Real^n$, $\pi_0, \pi_1, \phat \in \Real$ such that $\pi_0<\pi_1$ and $\hat{\pi}\neq 0$. If $G:\mathbb{R}^n\to \mathbb{R}$ is a closed convex function such that \begin{subequations}\label{generalintconC} \begin{alignat}{4} \set{ \bra{x,t}\in \gr(F)\,:\, \pi^Tx +\hat{\pi}t =\pi_0}&=\set{ \bra{x,t}\in \gr(G)\,:\, \pi^Tx +\hat{\pi}t =\pi_0} \label{generalintcondbC}\\ \set{ \bra{x,t}\in \gr(F)\,:\, \pi^Tx +\hat{\pi}t =\pi_1}&=\set{ \bra{x,t}\in \gr(G)\,:\, \pi^Tx +\hat{\pi}t =\pi_1} \label{generalintcondb2C}\\ \set{ \bra{x,t}\in \epi(F)\,:\, \pi^Tx +\hat{\pi}t \notin\bra{\pi_0,\pi_1}}&\subseteq \set{ \bra{x,t}\in \epi(G)\,:\, \pi^Tx +\hat{\pi}t \notin\bra{\pi_0,\pi_1}}, \label{generalintcondcC} \end{alignat} \end{subequations} and if every $\bra{\overline{x},\tbar}\in \epi(G)$ such that $\pi^T \xbar+\hat{\pi} \tbar \in (\pi_0,\pi_1)$ has friends in \[\set{\bra{x,t}\in \epi(F):\pi^Tx+\hat{\pi}t\notin (\pi_0,\pi_1)},\] then \begin{equation} \epi(F)^{\pi,\hat{\pi},\pi_0,\pi_1}= \epi(F)\cap \epi(G). \end{equation} \end{proposition} \subsection{Split Cuts for Quadratic Sets.} In this section we consider split cuts for convex sets described by a single quadratic inequality.
Since we also want to include the second-order cone, we additionally consider quadratic sets that are the union of two convex sets and can be separated by a single linear inequality. It is well known (see, e.g., Section~2.1 of \cite{BelottiGPRT2011}) that all such \emph{convex quadratic} sets correspond to the following list: \begin{enumerate} \item A full dimensional paraboloid, \item a full dimensional ellipsoid (or a single point), \item a full dimensional second-order cone, \item one side of a full dimensional hyperboloid of two sheets, \item a cylinder generated by a lower-dimensional version of one of the previous sets, or \item an invertible affine transformation of one of the previous sets. \end{enumerate} To give formulas for split cuts for all the above sets, it suffices to give formulas for cases 1--4. With these, we can construct formulas for cylinders by using the following straightforward lemma. \begin{lemma} Let $C\subseteq \Real^n$ and let $L\subseteq \Real^n$ be a linear subspace. Then $\conv\bra{C+L}=\conv\bra{C}+L$. \end{lemma} Finally, split cuts for affine transformations can be obtained through the following simple lemma that we prove in the appendix. We include an \emph{epigraphical} version of this lemma, since it will simplify the formulas in some cases. \begin{restatable}{lemma}{basictransformlemmares}\label{basictransformlemma} Let $c \in \mathbb{R}^n, \pi \in \Real^n\setminus\set{\bf 0}, \pi_0, \pi_1 \in \mathbb{R}$ such that $\pi_0 < \pi_1$, $B \in \mathbb{R}^{n \times n}$ be an invertible matrix, $\tilde{\pi}=B^{-T} \pi$, $\tilde{\pi}_0=\pi_0-\pi^T c$, $\tilde{\pi}_1=\pi_1-\pi^T c$, and $f,g,\tilde{f},\tilde{g}:\mathbb{R}^n\to \mathbb{R}\cup \{+\infty\}$ be proper closed convex functions such that $ f(x)=\tilde{f}\bra{B(x-c)}$ and $g(x)=\tilde{g}\bra{B(x-c)}$.
If \[\epi\bra{\tilde{f}}^{\tilde{\pi}, \tilde{\pi}_0, \tilde{\pi}_1} = \epi\bra{\tilde{f}}\cap \epi\bra{\tilde{g}},\] then $\epi\bra{{f}}^{{\pi}, {\pi}_0, {\pi}_1} = \epi\bra{{f}}\cap \epi\bra{{g}}$. Similarly, if \[\set{x \in \mathbb{R}^n : \tilde{f}(x)\leq 0}^{\tilde{\pi}, \tilde{\pi}_0, \tilde{\pi}_1} = \set{x \in \mathbb{R}^n : \tilde{f}(x)\leq 0,\quad \tilde{g}(x)\leq 0},\] then $\set{x \in \mathbb{R}^n : {f}(x)\leq 0}^{\pi, {\pi}_0, {\pi}_1} = \set{x \in \mathbb{R}^n : f(x)\leq 0, \quad g(x)\leq 0}$. \end{restatable} We first consider split cuts for quadratic sets with simple structures, which can be obtained as direct corollaries of Propositions~\ref{GFormSplitCut}, \ref{porder split}, and \ref{boundedSpli}. We refer to these as \emph{simple split cuts}. We then consider split cuts for sets with more complicated structures that require ad hoc proofs based on Propositions~\ref{method1propB} or \ref{method1propC}. As expected, we will see that formulas for the first case are significantly simpler than those for the second case. However, in either case, it is crucial to exploit the symmetry of the Euclidean norm through the following well-known lemma. \begin{lemma} \label{norm decomposition} For $v \in \Real^n\setminus\set{\bf 0}$ and $x \in \Real^n$, $\|x\|_2^2 = \norm{P_v x }_2^2+\norm{P_{v}^{\perp}x}_2^2$. \end{lemma} \subsubsection{Simple Split Cuts.}\label{simplequasplitcuts} Simple split cuts can be obtained for general ellipsoids and for paraboloids and cones that, when interpreted as epigraphs of quadratic or conic functions (i.e. based on the Euclidean norm), are such that $t$ is unaffected by the split disjunctions. We note that the ellipsoid case has already been proven in \cite{Goez2012,Dadush2011121}, and that the conic case generalizes Proposition~2 in \cite{AN:conicmir}, which considers elementary disjunctions for the standard three-dimensional second-order cone.
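The decomposition in Lemma~\ref{norm decomposition} is simply the Pythagorean identity for the orthogonal projection onto $\mathrm{span}\{v\}$ and its complement. As a minimal numerical sanity check (the helper \texttt{proj} and the random instances below are our own illustration, not part of the development), one can verify the identity on random vectors:

```python
import random

def proj(v, x):
    # Orthogonal projection P_v x of x onto span{v}; v must be nonzero.
    s = sum(vi * xi for vi, xi in zip(v, x)) / sum(vi * vi for vi in v)
    return [s * vi for vi in v]

random.seed(0)
for _ in range(100):
    v = [random.gauss(0, 1) for _ in range(5)]
    x = [random.gauss(0, 1) for _ in range(5)]
    p = proj(v, x)                                # P_v x
    q = [xi - pi for xi, pi in zip(x, p)]         # P_v^perp x = x - P_v x
    lhs = sum(xi * xi for xi in x)                # ||x||_2^2
    rhs = sum(t * t for t in p) + sum(t * t for t in q)
    assert abs(lhs - rhs) < 1e-9                  # Pythagorean identity
```

The same decomposition, applied to $B(x-c)$ with $v=B^{-T}\pi$, is what drives the proofs of the corollaries below.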
\begin{corollary}[Simple split cuts for paraboloids]\label{quadratic coro} Let $B \in \mathbb{R}^{n \times n}$ be an invertible matrix, $ c \in \mathbb{R}^n$, $\pi \in \Real^n\setminus\set{\bf 0}$, $\pi_0, \pi_1 \in \mathbb{R}$ such that $\pi_0 < \pi_1$, \[LC^2 \bra{B,c} := \set{\bra{x,t} \in \mathbb{R}^n \times \mathbb{R}_{+} : \|B\bra{x-c}\|_2^2 \leq t},\] $ a = \frac{\pi_0 + \pi_1 - 2 \pi^T c }{\|B^{-T}\pi\|_2^2} $, $ b = - \frac{\bra{\pi_1 - \pi^T c} \bra{\pi_0 - \pi^T c}}{\|B^{-T}\pi\|_2^2}, $ and $\widehat{B}= P_{B^{-T} \pi}^ {\perp} B$. Then \[LC^2 \bra{B,c}^{\pi,\pi_0,\pi_1} = \set{\bra{x,t} \in LC^2 \bra{B,c}: \left\lVert \widehat{B} \bra{x-c}\right\rVert_2^2 + a \pi^T\bra{ x-c } + b \le t }.\] \end{corollary} \begin{proof} Using Lemma \ref{basictransformlemma}, we prove the corollary by finding a closed form expression for $LC^2 \bra{I, 0}^{\tilde{\pi}, \tilde{\pi}_0 ,\tilde{\pi}_1}$, where $\tilde{\pi} = B^{-T}\pi$, $\tilde{\pi}_0 = \pi_0 - \pi^T c$, and $\tilde{\pi}_1 = \pi_1 - \pi^T c$. By Lemma \ref{norm decomposition}, we have $LC^2 \bra{I,0} = \set{\bra{x,t} \in \mathbb{R}^n \times \mathbb{R}_{+} : \| P_{\tilde{\pi}}^ {\perp} x \|_2^2 + \frac{(\tilde{\pi}^T x)^2}{\norm{\tilde{\pi}}_2^2} \le t}$. The result then follows from Proposition~\ref{GFormSplitCut}. \end{proof} \begin{corollary}[Simple split cuts for cones]\label{nlscut prop} Let $B \in \mathbb{R}^{n \times n}$ be an invertible matrix, $ c \in \mathbb{R}^n$, $\pi \in \Real^n\setminus\set{\bf 0}$, $\pi_0, \pi_1 \in \mathbb{R}$ such that $\pi_0 < \pi_1$, \[LC \bra{B,c} := \set{\bra{x,t} \in \mathbb{R}^n \times \mathbb{R}_{+} : \|B\bra{x-c}\|_2 \leq t},\] $a = \frac{\pi_1 + \pi_0 -2 \pi^T c }{\pi_1 - \pi_0} $, $ b = \frac{-2 \bra{\pi_1 - \pi^T c} \bra{\pi_0 - \pi^T c}}{\pi_1 - \pi_0} $, $\widehat{B}=\bra{P_{B^{-T} \pi}^ {\perp}+a P_{B^{-T} \pi}} B$, and $\widehat{c}=\bra{b/\norm{B^{-T} \pi}_2^2} B^{-T} \pi$. If $ \pi^T c\notin (\pi_0,\pi_1)$, then $LC \bra{B,c}^{\pi,\pi_0,\pi_1} =LC \bra{B,c}$.
Otherwise, we have that \[LC \bra{B,c}^{\pi,\pi_0,\pi_1}=\set{(x,t) \in LC \bra{B,c}: \left\lVert \widehat{B} \bra{x-c}+\widehat{c}\, \right\rVert_2 \le t}.\] \end{corollary} \begin{proof} Using Lemma \ref{basictransformlemma}, we prove the corollary by finding a closed form expression for $LC \bra{I, 0}^{\tilde{\pi}, \tilde{\pi}_0 ,\tilde{\pi}_1}$, where $\tilde{\pi} = B^{-T}\pi$, $\tilde{\pi}_0 = \pi_0 - \pi^T c$, and $\tilde{\pi}_1 = \pi_1 - \pi^T c$. By Lemma \ref{norm decomposition}, we have $LC \bra{I,0} = \set{\bra{x,t} \in \mathbb{R}^n \times \mathbb{R}_{+} : \bra{\| P_{\tilde{\pi}}^ {\perp} x \|_2^2 + \frac{(\tilde{\pi}^T x)^2}{\norm{\tilde{\pi}}_2^2}}^{1/2} \le t}$. The result then follows using Proposition \ref{porder split}. \end{proof} A particularly interesting application of Corollaries~\ref{quadratic coro} and \ref{nlscut prop} is the Closest Vector Problem \cite{MGbook}, which can be alternatively written as $\min\set{\norm{B\bra{x-c}}_2^2\,:\, x\in \mathbb{Z}^n}$ or $\min\set{\norm{B\bra{x-c}}_2\,:\, x\in \mathbb{Z}^n}$. In turn, these problems can be reformulated as $\min\set{t\,:\, \bra{x,t}\in LC^2 \bra{B,c},\,x\in \mathbb{Z}^n}$ and $\min\set{t\,:\, \bra{x,t}\in LC \bra{B,c},\, x\in \mathbb{Z}^n}$, respectively. We can then use Corollaries~\ref{quadratic coro} and \ref{nlscut prop} with lattice free splits to construct cuts that could improve the solution speed of these problems. We are currently studying the effectiveness of such cuts. We can also obtain as a corollary the following result from \cite{Goez2012,Dadush2011121}. 
\begin{corollary}[General split cuts for ellipsoids] \label{ellip coro} Let $B \in \mathbb{R}^{n \times n}$ be an invertible matrix, $ c \in \mathbb{R}^n$, $\pi \in \Real^n\setminus\set{\bf 0}$, $r \in \mathbb{R}_{+}$, \[E \bra{B,c, r} := \set{x\in \mathbb{R}^n : \|B\bra{x-c}\|_2 \leq r},\] $\pi_0, \pi_1 \in \mathbb{R}$ such that $\pi_0< \pi_1$, $f(u):=-\sqrt{r^2-\frac{u^2}{\|B^{-T} \pi \|_2^2 }}$, \[b = \frac{\bra{\pi_1-\pi^T c} f(\pi_0-\pi^T c) - \bra{\pi_0-\pi^T c} f(\pi_1-\pi^T c)}{\pi_1- \pi_0},\] and $a = \frac{f(\pi_0-\pi^T c)-f(\pi_1-\pi^T c)}{\pi_1 - \pi_0}$. If $\pi^T c-r \norm{B^{-T}\pi}_2\leq {\pi_0 < \pi_1}\leq \pi^T c+r \norm{B^{-T}\pi}_2$, then \bear\label{splitproper} E\bra{B,c,r}^{\pi,\pi_0,\pi_1} = \set{ x \in E \bra{B,c, r} : \| P_{B^{-T} \pi}^ {\perp} B \bra{x-c} \|_2 \le a \pi^T (x-c) - b },\,\quad \eear {if $\pi_0 < \pi^T c-r \norm{B^{-T}\pi}_2 < \pi_1\leq \pi^T c+r \norm{B^{-T}\pi}_2$, then} \bear\label{plitcg1} E\bra{B,c,r}^{\pi,\pi_0,\pi_1} = \set{ x \in E \bra{B,c, r} : \pi^T x\geq \pi_1 },\eear {if $\pi^T c-r \norm{B^{-T}\pi}_2\leq \pi_0 < \pi^T c+r \norm{B^{-T}\pi}_2 < \pi_1$, then} \bear\label{plitcg2} E\bra{B,c,r}^{\pi,\pi_0,\pi_1} = \set{ x \in E \bra{B,c, r} : \pi^T x\leq \pi_0 },\eear {if $\pi^T c-r \norm{B^{-T}\pi}_2 \ge \pi_1$ or $\pi_0 \ge \pi^T c+r \norm{B^{-T}\pi}_2$, then $E\bra{B,c,r}^{\pi,\pi_0,\pi_1} = E\bra{B,c,r}$}, and otherwise, $E\bra{B,c,r}^{\pi,\pi_0,\pi_1} = \emptyset$. \end{corollary} \begin{proof} Using Lemma \ref{basictransformlemma}, we prove \eqref{splitproper} by finding a closed form expression for $E\bra{I,0,r}^{\tilde{\pi}, \tilde{\pi}_0 ,\tilde{\pi}_1}$, where $\tilde{\pi} = B^{-T}\pi$, $\tilde{\pi}_0 = \pi_0 - \pi^T c$, and $\tilde{\pi}_1 = \pi_1 - \pi^T c$. By Lemma \ref{norm decomposition}, we have $E\bra{I,0,r} = \set{x \in \mathbb{R}^n : \| P_{\tilde{\pi}}^ {\perp} x \|_2 - \sqrt{r^2 - \frac{(\tilde{\pi}^T x)^2}{\norm{\tilde{\pi}}_2^2}} \le 0}$. 
The result then follows from Proposition \ref{boundedSpli}. The other cases can be shown by studying when the ellipsoid is partially or completely contained in one side of the disjunction, or when it is completely contained strictly between the disjunctions. \end{proof} We note that Corollary \ref{ellip coro} shows there are two types of split cuts for $E \bra{B,c, r}$. In \eqref{splitproper}, we obtain a nonlinear split cut that we would expect from Proposition~\ref{boundedSpli}, while in \eqref{plitcg1}--\eqref{plitcg2} we obtain simple linear split cuts. These linear inequalities are actually Chv\'atal-Gomory (CG) cuts for $E \bra{B,c, r}$ \cite{Chvatal73,DBLP:journals/mor/DadushDV11,DBLP:conf/ipco/DadushDV11,DBLP:conf/ipco/DeyV10,Gomory58}, but they are still sufficient to describe $E\bra{B,c,r}^{\pi,\pi_0,\pi_1}$ together with the original constraint. We hence follow the same MILP convention used in \cite{Dadush2011121} and still consider them split cuts. Finally, we note that we can also consider ``CG split cuts'' in Proposition~\ref{boundedSpli} if we include additional structure on the functions such as $g$ being non-negative. Similarly, we can also do the case analysis for CG cuts in Corollary~\ref{p-order ball split cut}. \subsubsection{Other Split Cuts.}\label{complicated} The split cut formulas in this section are significantly more complicated. For this reason, we only present them for standard sets (i.e. with $B=I$ and $c=0$). Formulas for the general case may be obtained by combining the formulas for the standard case with Lemma~\ref{basictransformlemma}. 
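Before stating these formulas, we note that closed forms such as \eqref{splitproper} lend themselves to quick numerical verification. The following sketch is our own sanity check on a hypothetical instance with $B=\bigl(\begin{smallmatrix}2&0\\1&1\end{smallmatrix}\bigr)$, $c=(0.2,-0.1)$, $\pi=(1,1)$, and $r=1$, chosen so that $B^{-T}\pi=(0,1)$; it confirms that every sampled ellipsoid point outside the open split strip satisfies the nonlinear split cut of \eqref{splitproper}:

```python
import itertools, math

# Hypothetical instance: B = [[2, 0], [1, 1]], c = (0.2, -0.1), pi = (1, 1).
pic = 0.1                                 # pi^T c
pi0, pi1, r = -0.3, 0.7, 1.0
# Here B^{-T} pi = (0, 1), so ||B^{-T} pi||_2 = 1 and f(u) = -sqrt(r^2 - u^2).
f = lambda u: -math.sqrt(r * r - u * u)
a = (f(pi0 - pic) - f(pi1 - pic)) / (pi1 - pi0)
b = ((pi1 - pic) * f(pi0 - pic) - (pi0 - pic) * f(pi1 - pic)) / (pi1 - pi0)

# Parametrize ellipsoid points as x = c + B^{-1} u with ||u||_2 <= r; then
# pi^T x = pic + u2, pi^T(x - c) = u2, and ||P_{B^{-T}pi}^perp B(x-c)||_2 = |u1|.
grid = [i / 50.0 for i in range(-50, 51)]
for u1, u2 in itertools.product(grid, repeat=2):
    if u1 * u1 + u2 * u2 <= r * r and not (pi0 < pic + u2 < pi1):
        assert abs(u1) <= a * u2 - b + 1e-9   # the split cut holds
```

By convexity of $f$, the secant through $\bra{\pi_0-\pi^Tc, f(\pi_0-\pi^Tc)}$ and $\bra{\pi_1-\pi^Tc, f(\pi_1-\pi^Tc)}$ lies below $f$ outside the strip, which is exactly what the loop above confirms numerically; points inside the strip near the ellipsoid boundary are cut off.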
\begin{proposition}[General split cuts for paraboloids]\label{newpnlscut prop} Let $\pi \in \mathbb{R}^n$, $\pi_0, \pi_1, \hat{\pi} \in \mathbb{R}$ such that $\pi_0 < \pi_1$ and $\hat{\pi} \neq 0$, \[LC^2 \bra{I, 0} := \{\bra{x,t} \in \mathbb{R}^n \times \mathbb{R}_{+} : \|x\|_2^2 \leq t\}.\] If $\phat > 0$ and $\pi_0 < \pi_1 \le \frac{-\normss{\pi}}{4\phat}$, or if $\phat < 0$ and $\frac{-\normss{\pi}}{4\phat} \le \pi_0 < \pi_1$, then \[LC^2 \bra{I, 0}^{\pi, \hat{\pi}, \pi_0, \pi_1} = LC^2 \bra{I, 0},\] if $\phat > 0$ and $\pi_0 < \frac{-\normss{\pi}}{4\phat} < \pi_1$, then \[LC^2 \bra{I, 0}^{\pi, \hat{\pi}, \pi_0, \pi_1} = \set{(x, t) \in LC^2\bra{I, 0} : \pi^T x + \phat t \ge \pi_1},\] if $\phat < 0$ and $\pi_0 < \frac{-\normss{\pi}}{4\phat} < \pi_1$, then \[LC^2 \bra{I, 0}^{\pi, \hat{\pi}, \pi_0, \pi_1} = \set{(x, t) \in LC^2\bra{I, 0}: \pi^T x + \phat t \le \pi_0},\] and if $\phat > 0$ and $\frac{-\normss{\pi}}{4\phat} \le \pi_0 < \pi_1$, or if $\phat < 0$ and $\pi_0 < \pi_1 \le \frac{-\normss{\pi}}{4\phat}$, then \begin{equation}\label{generalpa} LC^2 \bra{I, 0}^{\pi, \hat{\pi}, \pi_0, \pi_1} = \set{ (x, t) \in LC^2\bra{I, 0} : \norm{P_\pi^\perp x + \frac{\pi^T x + b }{\norm{\pi}_2^2} \pi}_2 \le c \pi^T x + d t + e}, \end{equation} for \bearn &&b = \frac{\norm{\pi}_2^2}{2 \hat{\pi}} \\ &&c = \frac{ f }{\sqrt{2} \bra{\pi_1 - \pi_0}\hat{\pi}} \\ && d = c \phat \\ &&e= \frac{ \normss{\pi} + \sqrt{\normss{\pi} + 4 \pi_0 \hat{\pi}} \sqrt{\normss{\pi} + 4 \pi_1 \hat{\pi}} }{4 \sqrt{2} \bra{\pi_1 - \pi_0} \hat{\pi}^2} f\\ &&f = \sqrt{\norm{\pi}_2^2 + 2 \bra{\pi_0 + \pi_1} \hat{\pi} - \sqrt{\norm{\pi}_2^2 + 4 \pi_0 \hat{\pi}} \sqrt{\norm{\pi}_2^2 + 4 \pi_1 \hat{\pi}}}, \eearn where we use the convention $0/0 \eqq 0$ for the case $\norms{\pi}=0$. \end{proposition} \begin{proof} We include the case analysis in Lemma~\ref{NPValidity} in the appendix. All the cases above except the last one follow from Lemma \ref{NPValidity}. 
Now consider the last case where $\phat > 0$ and $\frac{-\normss{\pi}}{4\phat} \le \pi_0 < \pi_1$, or $\phat < 0$ and $\pi_0 < \pi_1 \le \frac{-\normss{\pi}}{4\phat}$. We prove this case using Proposition~\ref{method1propC}. Let $F(x)= \normss{x}$ and $G(x)= \bra{\norms{P_\pi^\perp x + \frac{\pi^T x + b }{\normss{\pi}} \pi} - c \pi^T x - e}/d$. One can check that $f, d>0$. Now we consider two cases. \noindent\textbf{Case 1.} Assume that $\norms{\pi} \ne 0$. First, we prove \eqref{generalintcondbC}. Let $S_F \eqq \set{ \bra{x,t}\in \gr(F)\,:\, \pi^Tx +\hat{\pi}t =\pi_0}$ and $S_G \eqq \set{ \bra{x,t}\in \gr(G)\,:\, \pi^Tx +\hat{\pi}t =\pi_0}$. To prove $S_G \subseteq S_F$, let $(\xbar, \tbar) \in S_G$. We need to show that $\normss{\xbar} = \tbar$. By squaring the split cut equality and using Lemma \ref{norm decomposition}, we can equivalently show that \beq \bra{c \pi^T \xbar + d\tbar + e}^2 - \frac{\bra{\pi^T \xbar + b}^2}{\normss{\pi}} = \tbar - \frac{\bra{\pi^T \xbar}^2}{\normss{\pi}}. \label{15a_para} \eeq Replacing $\tbar$ with $\bra{\pi_0 - \pi^T \xbar}/\phat$, one can check that \eqref{15a_para} follows from the definition of $b, c, d,$ and $e$. To prove $S_F \subseteq S_G$, let $(\xbar, \tbar) \in S_F$. We only need to show that $c \pi^T \xbar + d\tbar + e \ge 0$. Since $d = c \phat$, we need to show that $c \bra{\pi^T \xbar + \phat \tbar} \ge -e$, which, after a few simplifications, can be written as \beq \phat \bra{\pi^T \xbar + \phat \tbar} \ge - \bra{\normss{\pi} + \sqrt{\normss{\pi}+ 4 \pi_0 \phat} \sqrt{\normss{\pi}+ 4 \pi_1 \phat}}/{4}. \label{nonneg} \eeq Inequality \eqref{nonneg} follows from noting that $\min \set{\phat \bra{\pi^Tx + \phat t} \,:\, (x, t) \in LC^2 \bra{I, 0}} = -\frac{\normss{\pi}}{4}$. Proving \eqref{generalintcondb2C} is analogous. Now we prove \eqref{generalintcondcC}.
Let $\tilde{S}_F \eqq \{\bra{x,t}\in \epi(F)\,:\, \pi^Tx +\hat{\pi}t \notin\bra{\pi_0,\pi_1}\}$ and $\tilde{S}_G \eqq \{\bra{x,t}\in \epi(G)\,:\, \pi^Tx +\hat{\pi}t \notin\bra{\pi_0,\pi_1}\}$. To prove $\tilde{S}_F \subseteq \tilde{S}_G$, let $(\xbar, \tbar) \in \tilde{S}_F$. To show that $(\xbar, \tbar)$ satisfies the split cut inequality \eqref{generalpa}, it suffices to show that \beq \bra{\bra{c \pi^T \xbar + d\tbar + e}^2 - \frac{\bra{\pi^T \xbar + b}^2}{\normss{\pi}}} - \bra{\tbar - \frac{\bra{\pi^T \xbar}^2}{\normss{\pi}}} \ge 0. \label{rhs} \eeq One can check that proving \eqref{rhs} is equivalent to showing that \beqn \frac{f^2 \bra{\pi^T \xbar + \phat \tbar - \pi_0} \bra{\pi^T \xbar + \phat \tbar - \pi_1}}{2 \bra{\pi_1 - \pi_0}^2 \phat^2} \ge 0, \eeqn which follows from $\pi^T \xbar + \phat \tbar \notin (\pi_0, \pi_1)$. Proving $c \pi^T \xbar + d\tbar + e \ge 0$ is similar to before. Now let $\bra{\overline{x},\tbar}\in \epi(G)$ such that $\pi^T \xbar + \phat \tbar \in (\pi_0,\pi_1)$. To construct the friends of $\bra{\overline{x},\tbar}$, one can check that for \beqn x^{*} = \frac{-b}{{\| \pi \|}_2^2} \pi , \quad t^{*} = \frac{bc-e}{d}, \eeqn all points on the ray $ R := \set{ \bra{x^{*} ,t^{*}} + \alpha \bra{ \bar{x} -x^{*},\tbar - t^{*}} : \alpha \in \mathbb{R}_+ }$ belong to $\epi(G)$. Let the intersections of $R$ with the hyperplanes $\pi^T x + \phat t = \pi_0$ and $\pi^T x + \phat t= \pi_1$ be $\bra{x^0,t^0}$ and $\bra{x^1,t^1}$, respectively. Such points are obtained from $R$ by setting \beqn \alpha_0 = \frac{\pi_0 + h}{\pi^T \bar{x} + \hat{\pi} \tbar + h} \eeqn and \beqn \alpha_1 = \frac{\pi_1 + h}{\pi^T \bar{x} + \hat{\pi} \tbar + h}, \eeqn respectively, where $h = \frac{\normss{\pi}+\sqrt{\normss{\pi}+ 4 \pi_0 \phat} \sqrt{\normss{\pi} + 4 \pi_1 \phat}}{4 \phat}$.
We have that $\bra{x^0,t^0}, \bra{x^1,t^1} \in \epi(F)\cap\set{\bra{x,t}:\pi^Tx+\phat t \notin (\pi_0,\pi_1)}$, since $\pi^T x^0 + \phat t^0 = \pi_0$ and $\pi^T x^1 + \phat t^1 = \pi_1$, and because interpolation conditions \eqref{generalintcondbC} and \eqref{generalintcondb2C} hold. Now note that $(\bar{x},\tbar)$ is obtained from $R$ by setting $\alpha=1$. If $\alpha_0 < 1 < \alpha_1$ or $\alpha_1 < 1 < \alpha_0$, then there exists $\beta \in (0, 1)$ such that $\bra{\bar{x},\tbar} = \beta \bra{x^0, t^0} + \bra{1-\beta} \bra{x^1,t^1}$. Seeing that $\pi^T \bar{x} + \phat \tbar \in \bra{\pi_0, \pi_1}$, one can check $\alpha_0 < 1 < \alpha_1$ or $\alpha_1 < 1 < \alpha_0$. \noindent\textbf{Case 2.} If $\norms{\pi} = 0$, the split cut \eqref{generalpa} simplifies to $\norms{x} \le d t +e$. Let $G(x) = \bra{\norms{x} - e}/d$. One can prove \eqref{generalintcondbC} and \eqref{generalintcondb2C} by showing that for $(\xbar, \tbar) \in \epi(F)$ such that $\phat \tbar \in \set{\pi_0, \pi_1}$, we have $\bra{d \tbar + e}^2 = \tbar$. The latter follows from the definition of the interpolation coefficients. Non-negativity of $d, e$, and $t$ also implies $d \tbar + e \ge 0$. Proving \eqref{generalintcondcC} is also equivalent to showing that \beqn \frac{f^2 \bra{\phat \tbar - \pi_0} \bra{\phat \tbar - \pi_1}}{2 \bra{\pi_1 - \pi_0}^2 \phat^2} \ge 0, \eeqn which follows from $\phat \tbar \notin (\pi_0, \pi_1)$. We can construct the friends in a similar way as in Case 1 by noting that the ray $ R := \set{ \bra{0 ,t^{*}} + \alpha \bra{ \bar{x},\tbar - t^{*}} : \alpha \in \mathbb{R}_+ }$, where $t^{*} = \frac{-e}{d}$, is contained in $\epi(G)$ and intersects both $\phat t = \pi_0$ and $\phat t = \pi_1$. The result then follows from Proposition~\ref{method1propC}.
\end{proof} \begin{proposition}[General split cuts for cones]\label{newnlscut prop} Let $\pi \in \mathbb{R}^n$, $\pi_0, \pi_1, \hat{\pi} \in \mathbb{R}$ such that $\pi_0 < \pi_1$ and $\hat{\pi} \neq 0$, \[LC \bra{I, 0} := \{\bra{x,t} \in \mathbb{R}^n \times \mathbb{R}_{+} : \|x\|_2 \leq t\}.\] If $0 \notin (\pi_0, \pi_1)$, then $LC \bra{I, 0}^{\pi, \hat{\pi}, \pi_0, \pi_1} = LC\bra{I, 0}$. Otherwise, if $0 \in (\pi_0, \pi_1)$ and $\hat{\pi} \le - \norm{\pi}_2$, then \[LC \bra{I, 0}^{\pi, \hat{\pi}, \pi_0, \pi_1} = \set{ (x, t) \in LC\bra{I, 0} : \pi^T x + \hat{\pi} t \le \pi_0},\] if $0 \in (\pi_0, \pi_1)$ and $\hat{\pi} \ge \norm{\pi}_2$, then \[LC \bra{I, 0}^{\pi, \hat{\pi}, \pi_0, \pi_1} = \set{ (x, t) \in LC\bra{I, 0} : \pi^T x + \hat{\pi} t \ge \pi_1},\] and if $0 \in (\pi_0, \pi_1)$ and $\hat{\pi} \in \bra{-\norm{\pi}_2, \norm{\pi}_2}$, then \[LC \bra{I, 0}^{\pi, \hat{\pi}, \pi_0, \pi_1} = \set{ (x, t) \in LC\bra{I, 0} : \norm{P_\pi^\perp x + \frac{a \pi^T x + b }{\norm{\pi}_2^2} \pi}_2 \le c \pi^T x + d t + e},\] where \bearn &&a = \frac{ \bra{\pi_0 + \pi_1} \bra{\norm{\pi}_2^2 - \hat{\pi}^2} } { f } \\ &&b = - \frac{ 2 \pi_0 \pi_1 \norm{\pi}_2^2} { f } \\ &&c = - \frac{ 4 \pi_0 \pi_1 \hat{\pi} }{ \bra{\pi_1 - \pi_0} f} \\ &&d = \frac{ f }{ \bra{\pi_1 - \pi_0} \bra{\normss{\pi} - \phat^2} }\\ &&e= \frac{ 2 \pi_0 \pi_1 \bra{\pi_0 + \pi_1} \hat{\pi} }{ \bra{\pi_1 - \pi_0} f }\\ && f = \sqrt{ \bra{\norm{\pi}_2^2 - \hat{\pi}^2} \bra{ \norm{\pi}_2^2 \bra{\pi_0 - \pi_1}^2 - \bra{\pi_0 + \pi_1}^2 \hat{\pi}^2} }. \eearn \end{proposition} \begin{proof} We include the case analysis in Lemma~\ref{NValidity} in the appendix. The second and third cases where $0 \in (\pi_0, \pi_1)$ and $\phat \le - \norms{\pi}$ or $\phat \ge \norms{\pi}$ follow from Lemma \ref{NValidity}. Now we show the case $0 \in (\pi_0, \pi_1)$ and $\phat \in (-\norms{\pi}, \norms{\pi})$. Note that $\phat \neq 0$ and $\phat \in (-\norms{\pi}, \norms{\pi})$ imply $\norms{\pi} \neq 0$.
Let $F(x)= \norms{x}$ and $G(x)= \bra{\norms{P_\pi^\perp x + \frac{a \pi^T x + b }{\normss{\pi}} \pi} - c \pi^T x - e}/d$. One can check that $d>0$. Similarly to the proof of Proposition~\ref{newpnlscut prop}, we can show that interpolation condition \eqref{generalintconC} holds by the definition of $a, b, c, d$, and $e$. Now let $\bra{\overline{x},\tbar}\in \epi(G)$ such that $\pi^T \xbar + \phat \tbar \in (\pi_0,\pi_1)$. To construct the friends of $\bra{\overline{x},\tbar}$, we consider two cases. \textbf{Case 1.} If $\abs{\pi_0} = \abs{\pi_1}$, $\bra{\bar{x},\tbar}$ can be written as a convex combination of $\bra{x^0,t^0}$ and $\bra{x^1,t^1}$ given by \beq x^0 = P_{\pi}^{\perp} \bar{x} + \frac{\pi_0 - \hat{\pi} \tbar - \frac{c}{d} \hat{\pi} \pi^T \bar{x}}{\bra{1 - \frac{c}{d} \hat{\pi}}\norm{\pi}_2^2} \pi, \quad t^0 = \tbar + \frac{c \bra{\pi^T \bar{x} - \pi^T x^0}}{d}, \eeq and \beq x^1 = P_{\pi}^{\perp} \bar{x} + \frac{\pi_1 - \hat{\pi} \tbar - \frac{c}{d} \hat{\pi} \pi^T \bar{x}}{\bra{1 - \frac{c}{d} \hat{\pi}}\norm{\pi}_2^2} \pi, \quad t^1 = \tbar + \frac{c \bra{\pi^T \bar{x} - \pi^T x^1}}{d}. \eeq Indeed, since $\pi^T \bar{x} + \phat \tbar \in \bra{\pi_0, \pi_1}$, there exists $\alpha \in (0,1)$ such that $\pi^T \bar{x} + \phat \tbar = \alpha \pi_0 + (1-\alpha) \pi_1$. One can check that $\bra{\bar{x},\tbar} = \alpha \bra{x^0,t^0} + (1-\alpha) \bra{x^1,t^1}$. We also have that $\bra{x^0,t^0}, \bra{x^1,t^1} \in \epi(F)\cap\set{\bra{x,t}:\pi^Tx+\phat t \notin (\pi_0,\pi_1)}$, since $\pi^T x^0 + \phat t^0 = \pi_0$ and $\pi^T x^1 + \phat t^1 = \pi_1$, and because interpolation conditions \eqref{generalintcondbC} and \eqref{generalintcondb2C} hold. 
\noindent\textbf{Case 2.} If $\abs{\pi_0} \neq \abs{\pi_1}$, one can check that for \beqn x^{*} = \frac{-b}{{a \| \pi \|}_2^2} \pi , \quad t^{*} = \frac{bc-ae}{ad}, \eeqn all points on the ray $ R := \set{ \bra{x^{*} ,t^{*}} + \alpha \bra{ \bar{x} -x^{*},\tbar - t^{*}} : \alpha \in \mathbb{R}_+ }$ belong to $\epi(G)$. Let the intersections of $R$ with the hyperplanes $\pi^T x + \phat t = \pi_0$ and $\pi^T x + \phat t= \pi_1$ be $\bra{x^0,t^0}$ and $\bra{x^1,t^1}$, respectively. Such points are obtained from $R$ by setting \beqn \alpha_0 = \frac{\pi_0 + h}{\pi^T \bar{x} + \hat{\pi} \tbar + h} \eeqn and \beqn \alpha_1 = \frac{\pi_1 + h}{\pi^T \bar{x} + \hat{\pi} \tbar + h}, \eeqn respectively, where $h = - \frac{2 \pi_0 \pi_1}{\pi_0 + \pi_1}$. We again have that $\bra{x^0,t^0}, \bra{x^1,t^1} \in \epi(F)\cap\set{\bra{x,t}:\pi^Tx+\phat t \notin (\pi_0,\pi_1)}$, since $\pi^T x^0 + \phat t^0 = \pi_0$ and $\pi^T x^1 + \phat t^1 = \pi_1$, and because interpolation conditions \eqref{generalintcondbC} and \eqref{generalintcondb2C} hold. Now note that $(\bar{x},\tbar)$ is obtained from $R$ by setting $\alpha=1$. If $\alpha_0 < 1 < \alpha_1$ or $\alpha_1 < 1 < \alpha_0$, then there exists $\beta \in (0, 1)$ such that $\bra{\bar{x},\tbar} = \beta \bra{x^0, t^0} + \bra{1-\beta} \bra{x^1,t^1}$. Seeing that $\pi^T \bar{x} + \phat \tbar \in \bra{\pi_0, \pi_1}$, $\abs{\pi_0} \neq \abs{\pi_1}$, and $0 \in (\pi_0, \pi_1)$, one can check $\alpha_0 < 1 < \alpha_1$ or $\alpha_1 < 1 < \alpha_0$. The result then follows from Proposition~\ref{method1propC}. Finally, consider the case $0 \notin \bra{\pi_0, \pi_1}$. We need to show that any point $\bra{\bar{x}, \tbar}\in \epi(F)\cap\set{\bra{x,t}:\pi^Tx + \phat t \in (\pi_0,\pi_1)}$ has friends in $\epi(F)\cap\set{\bra{x,t}:\pi^Tx + \phat t\notin (\pi_0,\pi_1)}$. 
We can construct the friends in a similar way as before by noting that the ray $R:= \set{ \alpha \bra{ \bar{x},\tbar} : \alpha \in \mathbb{R}_+ }$ is contained in $\epi(F)$ and intersects both $\pi^T x + \phat t= \pi_0$ and $\pi^T x + \phat t= \pi_1$. \end{proof} \begin{proposition}[Simple split cuts for hyperboloids]\label{simplehyper} Let $\pi \in \mathbb{R}^n\setminus\set{\bf 0}$, $\pi_0, \pi_1, l \in \mathbb{R}$ such that $\pi_0 < \pi_1$ and $l \neq 0$, \[H := \set{\bra{x,t} \in \mathbb{R}^n \times \mathbb{R}_{+} : \sqrt{\normss{x} + l^2} \leq t}.\] If $\abs{\pi_0} = \abs{\pi_1}$, then \[H^{\pi, \pi_0, \pi_1} = \set{\bra{x,t} \in H : \norms{P_{\pi}^{\perp} x + \frac{\hat{b}}{\normss{\pi}} \pi} \le t},\] where $\hat{b} = \sqrt{l^2 \normss{\pi} + \pi_1^2}$, and if $\abs{\pi_0} \neq \abs{\pi_1}$, then \[H^{\pi, \pi_0, \pi_1} = \set{\bra{x,t} \in H : \norms{P_{\pi}^{\perp} x + \frac{a\pi^T x + b}{\normss{\pi}} \pi} \le t},\] where \bearn &&a = \frac{ \sqrt{ 2 l^2 \normss{\pi} + \pi_0^2 + \pi_1^2 - 2 \sqrt{\bra{l^2 \normss{\pi} + \pi_0^2} \bra{l^2 \normss{\pi} + \pi_1^2}} } }{\pi_1 - \pi_0} \\ &&b = \frac{l^2 \normss{\pi} - \pi_0 \pi_1 + \sqrt{\bra{l^2 \normss{\pi} + \pi_0^2} \bra{l^2 \normss{\pi} + \pi_1^2}} }{\pi_0 + \pi_1} a. \eearn \end{proposition} \begin{proof} We first show the case $\abs{\pi_0} = \abs{\pi_1}$. Let $F(x)= \sqrt{\normss{x} + l^2}$ and $G(x)= \norms{P_{\pi}^{\perp} x + \frac{\hat{b}}{\normss{\pi}} \pi}$. Interpolation condition \eqref{generalintcond} holds by the definition of $\hat{b}$. Now let $\bra{\overline{x},\tbar}\in \epi(G)$ such that $\pi^T \xbar \in (\pi_0,\pi_1)$. We can construct the friends of $\bra{\overline{x},\tbar}$ in a similar way as in Case 1 in the proof of Proposition~\ref{porder split}. The result then follows from Proposition~\ref{method1propB}. Now consider the case $\abs{\pi_0} \neq \abs{\pi_1}$. Let $G(x)= \norms{P_{\pi}^{\perp} x + \frac{a\pi^T x + b}{\normss{\pi}} \pi}$. 
Interpolation condition \eqref{generalintcond} holds by the definition of $a$ and $b$. Now let $\bra{\overline{x},\tbar}\in \epi(G)$ such that $\pi^T \xbar \in (\pi_0,\pi_1)$. We can construct the friends of $\bra{\overline{x},\tbar}$ in a similar way as in Case 2 in the proof of Proposition~\ref{porder split}. The result then follows from Proposition~\ref{method1propB}. \end{proof} We have partially generalized Proposition~\ref{simplehyper} to general split cuts for hyperboloids. However, in most cases, the resulting formulas are unmanageably complicated, so we do not pursue this further. \section{General Intersection Cuts Through Aggregation.}\label{intersection_cut_section} In this section we consider the case in which the base sets are either epigraphs or lower level sets of convex functions and the forbidden sets are hypographs or upper level sets of concave functions. Our cut construction approach in this case is based on a simple aggregation technique, which again can be more naturally explained for epigraphs of specially structured functions. Following the structure of Section~\ref{NSCSec}, we begin by studying epigraphical sets and then consider non-epigraphical sets. We end the section by illustrating the power and limitations of the approach by considering intersection cuts for quadratic constraints. \subsection{Intersection Cuts for Epigraphs.} Let $F,G:\Real\times \Real \to \Real$ be a convex and a concave function given by $F(z,y)=z^2+2 y^2$ and $G(z,y)=-(z-1)^2+1-y^2$, and let \[\epi(F)=\set{\bra{z,y,t}\,:\, F(z,y)\leq t} \text{ and } \hyp(G)=\set{\bra{z,y,t}\,:\, t\leq G(z,y)}\] be the epigraph of $F$ and the hypograph of $G$, respectively. For $\lambda\in[0,1]$, let $H_\lambda(z,y)=(1-\lambda)F(z,y)+\lambda G(z,y)$. As illustrated in Figure~\ref{aggcutfig1}, for any $\lambda\in[0,1]$, we have that $H_\lambda(z,y)\leq t$ is a binding valid cut for $\epi(F)\setminus\Int\bra{\hyp(G)}$.
However, depending on the choice of $\lambda$, the inequality could be non-convex, or it could be convex but unnecessarily weak. It is clear from Figure~\ref{aggcutfig1} that in this case, the correct choice of $\lambda$ is $1/2=\max\set{\lambda\in [0,1]\,:\, H_\lambda \text{ is convex}}$, which yields the strongest convex cut from this class. Furthermore, as illustrated in Figure~\ref{aggcutfig2}, we have that for any $\bra{\overline{z},\overline{y},\overline{t}}\in \epi(H_{1/2})\cap \Int\bra{\hyp(G)}$, we can find friends in $\epi(F)\setminus\Int\bra{\hyp(G)}$ by following the slope of $H_{1/2}$, similarly to what we did for split cuts of separable functions. We can then show that \begin{equation*} \conv\bra{\epi(F)\setminus\Int\bra{\hyp(G)}}=\epi(F)\cap\epi\bra{H_{1/2}}. \end{equation*} A similar construction can also be obtained if we instead study $\conv\bra{\set{(z,y,t)\in \epi(F)\,:\, G(z,y)\leq 0}}$. \begin{figure}[htb] \centering \subfigure[$F$ in black, $G$ in blue and valid aggregation cuts $H_\lambda$ for $\lambda\in \{1/4,1/2,3/4\}$ in red, green and brown.]{\includegraphics[scale=0.5]{intersect1.pdf}\label{aggcutfig1}} \subfigure[Friends construction by following slope of $H_{1/2}$.]{\includegraphics[scale=0.5]{intersect2.pdf}\label{aggcutfig2}} \caption{Cuts from aggregation.}\label{aggcutfig} \end{figure} $H_\lambda$ and the convexity requirement on it are the basis of many techniques such as Lagrangian/SDP relaxations of quadratic programming problems \cite{fujie1997semidefinite,oustry2001sdp,polik2007survey,poljak1995recipe}, the QCR method for integer quadratic programming \cite{billionnet2012extending,billionnet2009improving}, and an algorithm for constructing projected SDP representations of the convex hull of quadratic constraints introduced in \cite{yildiran2009convex}. It is hence not surprising that the approach works in the quadratic case.
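The claims in this two-variable example are easy to check numerically. The following sketch (an illustration we add here, not part of the paper's argument) verifies that the Hessian of $H_\lambda$ is positive semi-definite exactly for $\lambda\le 1/2$, and that every $H_\lambda$ satisfies $H_\lambda(z,y)\le t$ on $\epi(F)\setminus\Int\bra{\hyp(G)}=\set{(z,y,t)\,:\,F(z,y)\le t,\ G(z,y)\le t}$.

```python
import numpy as np

rng = np.random.default_rng(3)

F = lambda z, y: z**2 + 2 * y**2
G = lambda z, y: -(z - 1)**2 + 1 - y**2
H = lambda lam, z, y: (1 - lam) * F(z, y) + lam * G(z, y)

# Hessian of H_lam is diag(2 - 4*lam, 4 - 6*lam): convex iff lam <= 1/2
for lam, convex in [(0.25, True), (0.5, True), (0.75, False)]:
    hess = np.diag([2 - 4 * lam, 4 - 6 * lam])
    assert (np.linalg.eigvalsh(hess).min() >= -1e-12) == convex

# validity: on {F <= t, G <= t}, each H_lam = (1-lam)F + lam*G <= t
for _ in range(1000):
    z, y = rng.uniform(-2, 2, size=2)
    t = max(F(z, y), G(z, y)) + rng.uniform(0, 1)
    for lam in (0.25, 0.5, 0.75):
        assert H(lam, z, y) <= t + 1e-12
```

Validity for every $\lambda$ is immediate since $H_\lambda$ is a convex combination of $F$ and $G$; convexity of the cut is the only binding requirement.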
However, as shown in \cite{yildiran2009convex}, even in the quadratic case the approach can fail to yield convex constraints or closed form expressions. Furthermore, for general functions, $H_\lambda$ can easily be non-convex for every $\lambda$. Fortunately, as the following proposition shows, the approach can yield closed form expressions for general intersection cuts for problems with special structures. \begin{proposition}\label{generalintprop} Let $g_i:\Real \to\Real$ be convex functions for each $i\in[n]$, $m,l\in\Real^n$, $r,q\in \Real$, and $\gamma\in \Real_+$. Furthermore, let $\set{a_i}_{i=1}^n\subseteq \Real^n$ be such that $a_n\neq 0$ and $a_i \perp a_j$ for every $i\neq j$, and $\set{\alpha_i}_{i=1}^n\subseteq \Real_+$ be such that $0\neq \alpha_n\geq \alpha_i$ for all $i$. Let \begin{align*} F(x)=&\sum_{i=1}^n g_i\bra{a_i^Tx}+m^Tx+r,\\ G(x)=&-\sum_{i=1}^n \alpha_i g_i\bra{a_i^Tx}-l^Tx-q, \end{align*} $C:=\epi(F)$, and $S:=\set{\bra{x,t}\in\Real^n\times\Real\,:\, \gamma t\leq G(x)}$. If $\bra{1+\gamma/\alpha_n}>0$ and \begin{equation}\label{intersectlimcondition} \lim_{|s|\to \infty} -\alpha_n g_n\bra{ s a_n^Ta_n} - s\bra{l^Ta_n+\gamma \frac{\bra{m-l/\alpha_n}^T a_n}{1+\gamma/\alpha_n}}=-\infty, \end{equation} then \begin{equation}\label{intersect1} \conv\bra{C\setminus\Int\bra{S}}= \conv\bra{\set{\bra{x,t}\in \epi(F)\,:\, G(x)\leq \gamma t}}=\epi(F)\cap \epi(H), \end{equation} where \begin{equation} H(x):=\frac{F(x)+(1/\alpha_n)G(x)}{1+\gamma/\alpha_n}=\frac{\sum_{i=1}^{n-1} \bra{1-\alpha_i/\alpha_n} g_i\bra{a_i^Tx}+\bra{m-l/\alpha_n}^Tx+(r-q/\alpha_n)}{\bra{1+\gamma/\alpha_n}}. \end{equation} \end{proposition} \begin{proof} The first equality in \eqref{intersect1} is direct. For the second equality, we proceed as follows. $H$ is a non-negative linear combination of $F$ and $G$ that is also a convex function, from which it is easy to see that the left to right containment holds.
To show the right to left containment, let $\bra{\overline{x},\overline{t}}\in \epi(F)\cap \epi(H)$ be such that $G\bra{\overline{x}}> \gamma \overline{t}$. Let $k= \frac{\bra{m-l/\alpha_n}^Ta_n}{1+\gamma/\alpha_n}$. Because of \eqref{intersectlimcondition}, there exist $s_1>0$ and $s_2<0$ for which $\bra{x^i,t^i}= \bra{\overline{x}+s_i a_n,\overline{t}+s_i k}$ for $i=1,2$ are such that $G\bra{x^i}= \gamma t^i$. Furthermore, by design $\bra{x^i,t^i}\in \epi(H)$ for $i=1,2$, which implies $F\bra{x^i}+G\bra{x^i}/\alpha_n\leq \bra{1+\gamma/\alpha_n} t^i$ and hence $F\bra{x^i}\leq t^i$. The result then follows by noting that $ \bra{\overline{x},\overline{t}}\in\conv\bra{\set{\bra{x^1,t^1},\bra{x^2,t^2}}}$. \end{proof} \subsection{Intersection Cuts for Level Sets.} In contrast to the split cut case, we cannot use positively homogeneous functions to play the role of $t$ in non-epigraphical sets. Nevertheless, we can extend the aggregation approach to certain non-epigraphical sets through the following proposition, whose proof is a direct analog to that of Proposition~\ref{generalintprop}. \begin{proposition}\label{boundinterprop} Let $g_i:\Real \to\Real$ be convex functions for each $i\in[n]$, $m\in\Real^n$, $r,q\in \Real$. Furthermore, let $\set{a_i}_{i=1}^n\subseteq \Real^n$ be such that $a_n\neq 0$ and $a_i \perp a_j$ for every $i\neq j$, and $\set{\alpha_i}_{i=1}^n\subseteq \Real_+$ be such that $0\neq \alpha_n\geq \alpha_i$ for all $i$. Let \begin{align*} F(x)&=\sum_{i=1}^n g_i\bra{a_i^Tx}+m^Tx+r,\\ G(x)&=-\sum_{i=1}^n \alpha_i g_i\bra{a_i^Tx}-\alpha_n m^Tx-q, \end{align*} $C:=\set{x\in\Real^n\,:\, F(x)\leq 0}$, and $S:=\set{x\in\Real^n\,:\, G(x)\geq 0}$.
If \begin{equation} \lim_{|s|\to \infty} -\alpha_n g_n\bra{ s a_n^Ta_n} - s\alpha_n m^Ta_n=-\infty, \end{equation} then \begin{equation} \conv\bra{C\setminus\Int\bra{S}}= \conv\bra{\set{x\in \Real^n\,:\, \begin{aligned}F(x)&\leq 0,\\G(x)&\leq 0\end{aligned}}}=\set{x\in \Real^n\,:\, \begin{aligned}F(x)&\leq 0,\\H(x)&\leq 0\end{aligned}}, \end{equation} where \begin{equation} H(x):=F(x)+(1/\alpha_n)G(x)=\sum_{i=1}^{n-1} \bra{1-\alpha_i/\alpha_n} g_i\bra{a_i^Tx}+(r-q/\alpha_n). \end{equation} \end{proposition} The special structure in both of these propositions is extremely simple, but thanks to the symmetry of quadratic constraints, they can be used to get formulas for several quadratic intersection cuts. \subsection{Intersection Cuts for Quadratic Sets.} \begin{corollary}\label{coro} Let $B\in \Real^{n\times n}$ be an invertible matrix, $A\in \Real^{n\times n}$, $c,d\in \Real^n$, $q \in \Real$, $\gamma \in\Real_+$, \[LC^2 \bra{B,c} := \set{\bra{x,t} \in \mathbb{R}^n \times \mathbb{R}_{+} : \|B\bra{x-c}\|_2^2 \leq t},\] and \[S:=\set{\bra{x,t}\in \Real^n\times\Real\,:\,\gamma t+q\leq -\norm{A\bra{x-d}}^2}.\] Then \begin{equation} \conv\bra{LC^2 \bra{B,c}\setminus\Int\bra{S}}=\set{\bra{x,t}\in \Real^n\times\mathbb{R}_{+}\,:\,\begin{aligned}\norm{B\bra{x-c}}^2&\leq t\\x^TEx+a^Tx+f&\leq (\alpha_n+\gamma)t\end{aligned}}, \end{equation} for \[E=B^THB, \] \[a=-2B^Te-2B^THBc, \] \[f=c^TB^THBc+2\bra{B^Te}^Tc-w-q,\] \[ H = \sum_{i=1}^{n-1} \bra{\alpha_n-\alpha_i}v_i v_i^T,\] \[ e = \sum_{i=1}^n\alpha_iv_i^TB (c-d) v_i,\] \[ w = \sum_{i=1}^n\alpha_i\bra{v_i^T B(c-d)}^2,\] where $\bra{v_i}_{i=1}^n\subseteq\Real^n$ and $\bra{\alpha_i}_{i=1}^n\subseteq \Real$ correspond to an eigenvalue decomposition of $B^{-T}A^TAB^{-1}$ so that \[B^{-T}A^TAB^{-1}=\sum_{i=1}^n\alpha_i v_i v_i^T,\] $\norms{v_i} =1$ for all $i$, $v_i^T v_j = 0$ for all $i \neq j$, and $\alpha_n\geq \alpha_i$ for all $i$. \end{corollary} \begin{proof} Let $y=B(x-c)$ and $T \eqq LC^2 \bra{B,c}\setminus\Int\bra{S}$. 
Using orthonormality of the vectors $v_i$, $T$ can be written in the $y$ variables as \[T=\set{\bra{y,t}\in \Real^n\times\mathbb{R}_{+}\,:\,\begin{aligned} \sum_{i=1}^n \bra{v_i^Ty}^2&\leq t\\ -\sum_{i=1}^n \alpha_i \bra{v_i^Ty}^2-2e^T y-w-q &\leq \gamma t\end{aligned}}.\] The result then follows by using Proposition~\ref{generalintprop}. \end{proof} An interesting case of Corollary~\ref{coro} arises when $\gamma=0$. In this case, the base set $C$ corresponds to a paraboloid and the forbidden set $S$ corresponds to an ellipsoidal cylinder. In such a case, the minimization of $t$ over $(x,t)\in C\setminus\Int\bra{S}$ is equivalent to the minimization of a convex quadratic function outside an ellipsoid, which corresponds to the simplest indefinite version of the well-known trust region problem. While this is a non-convex optimization problem, it can be solved in polynomial time through Lagrangian/SDP approaches \cite{polik2007survey}. It is known that optimal dual multipliers of an SDP relaxation of a non-convex quadratic programming problem such as the trust region problem can be used to construct a finite convex quadratic optimization problem with the same optimal value as the original non-convex problem (e.g. \cite{DBLP:conf/ipco/GiandomenicoLRS11}). Furthermore, the complete feasible region induced by an SDP relaxation on the original variable space (in this case $(x,t)$) can be characterized by an infinite number of convex quadratic constraints \cite{kojima2000cones}. This characterization has recently been simplified for the feasible region of the trust region problem in \cite{dan}. This work gives a semi-infinite characterization of $T$ for $\gamma=0$ composed of the convex quadratic constraint $\norm{B\bra{x-c}}^2\leq t$ plus an infinite number of linear inequalities that can be separated in polynomial time.
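The aggregation identity behind this proof — the equality between $H=\bra{F+G/\alpha_n}/\bra{1+\gamma/\alpha_n}$ and its closed form in Proposition~\ref{generalintprop} — can also be sanity-checked numerically. The sketch below is an illustration only: it instantiates the proposition with the simple choice $g_i(s)=s^2$, random orthogonal directions $a_i$, and arbitrarily chosen data, and verifies that the two expressions for $H$ agree.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # columns a_i, mutually orthogonal
alpha = np.array([0.3, 0.7, 1.2])              # alpha_n = 1.2 is the largest
m, l = rng.normal(size=n), rng.normal(size=n)
r, q, gamma = 0.5, -1.0, 0.4
g = lambda s: s**2                             # a simple choice of convex g_i

def F(x): return sum(g(Q[:, i] @ x) for i in range(n)) + m @ x + r
def G(x): return -sum(alpha[i] * g(Q[:, i] @ x) for i in range(n)) - l @ x - q
def H_agg(x):                                  # aggregated form (F + G/alpha_n)/(1 + gamma/alpha_n)
    return (F(x) + G(x) / alpha[-1]) / (1 + gamma / alpha[-1])
def H_closed(x):                               # closed form from the proposition
    s = sum((1 - alpha[i] / alpha[-1]) * g(Q[:, i] @ x) for i in range(n - 1))
    return (s + (m - l / alpha[-1]) @ x + (r - q / alpha[-1])) / (1 + gamma / alpha[-1])

for _ in range(200):
    x = rng.normal(size=n)
    assert abs(H_agg(x) - H_closed(x)) < 1e-9
```

Note that the $i=n$ term cancels exactly because $\alpha_n/\alpha_n=1$, which is why the closed form only sums to $n-1$.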
Corollary~\ref{coro} shows that these linear inequalities can be subsumed by a single convex quadratic constraint, which gives another explanation for their polynomial time separability. We note that the techniques in \cite{dan} are also adapted to other non-convex optimization problems (both quadratic and non-quadratic). Hence, combining Corollary~\ref{coro} with these techniques could yield valid convex quadratic inequalities for more general non-convex problems. Another interesting application of Corollary~\ref{coro} for the case $\gamma=0$ is the Shortest Vector Problem (SVP) \cite{MGbook} of the form $\min\set{\norm{Bx}^2\,:\, x\in \mathbb{Z}^n\setminus\set{\bf 0}}$. Similar to the Closest Vector Problem (CVP) studied in Section~\ref{simplequasplitcuts}, we can transform this problem to $\min_{\bra{x,t}\in Q\cap\bra{\mathbb{Z}^n\times \Real}} t$ for \[Q=\set{(x,t)\in \Real^n\times\mathbb{R}_{+} \,:\,\norm{Bx}^2\leq t,\, x\neq {\bf 0}},\] so that we can strengthen the problem by generating valid inequalities for $Q$. Unfortunately, as the following simple lemma shows, traditional split cuts will not add any strength. \begin{lemma} Let $Q_0 = Q \cup \set{{({\bf 0}, 0)}}$. For any $B \in \Real^{n \times n}$, \beqn t^{*} = \min \set{t \,:\, (x, t) \in \cap_{(\pi, \pi_0) \in \mathbb{Z}^n \times \mathbb{Z}} \,Q_0^{\pi, \pi_0, \pi_0+1}} = 0. \eeqn \end{lemma} \begin{proof} Note that for all integer splits $(\pi, \pi_0) \in \mathbb{Z}^n \times \mathbb{Z}$, $(\xbar, \tbar) = ({\bf 0}, 0)$ belongs to one side of each disjunction. Thus, we have $t^{*} \le 0$ and the result follows from non-negativity of the norm. \end{proof} However, we can easily construct \emph{near} lattice free ellipsoids centered at $\bf 0$ that do not contain any point from $\mathbb{Z}^n\setminus\{\bf 0\}$ in their interior, and use them to get some bound improvement.
For instance, in the trivial case of $B = I$, Corollary~\ref{coro} applied to the single \emph{near} lattice free ellipsoid given by the unit ball $\set{x \in \Real^n \,:\, \norms{x} \le 1}$ yields a cut that provides the optimal value $t^{*}= 1$. Similar ellipsoids could be used to generate strong convex quadratic valid inequalities for non-trivial cases to significantly speed up the solution of SVP problems. Studying the effectiveness of these cuts is left for future research. We end this section with a brief discussion about the strength and possible extensions of the aggregation technique. For this, we begin by presenting the following corollary of Proposition~\ref{boundinterprop} whose proof is analogous to that of Corollary~\ref{coro}. \begin{corollary}\label{coro2} Let $B\in \Real^{n\times n}$ be an invertible matrix, $A\in \Real^{n\times n}$, $c\in \Real^n$, $r_1,r_2\in\Real_+$, \[E^2 \bra{B,c, r_1} := \set{x\in \mathbb{R}^n : \|B\bra{x-c}\|_2^2 \leq r_1},\] and \[S:=\set{x\in \Real^n\,:\,\norm{B\bra{x-c}}^2_2\geq r_2}.\] Then there exist a positive semi-definite matrix $E \in \Real^{n\times n}$, $a\in\Real^n$, and $f\in\Real$ such that \begin{equation} \conv\bra{E^2 \bra{B,c, r_1}\setminus\Int\bra{S}}=\set{x \in \Real^n:\,\begin{aligned}\norm{B\bra{x-c}}^2_2&\leq r_1\\x^TEx+a^Tx+f&\leq 0\end{aligned}}. \end{equation} \end{corollary} Corollary \ref{coro2} shows how to construct the convex hull of the set obtained by removing an ellipsoid or an ellipsoidal cylinder from an ellipsoid. However, this construction only works if the ellipsoids have a common center $c$. The following example shows how the construction can fail for non-common centers. In addition, the example shows that the aggregation technique does not subsume the interpolation technique and sheds some light on the relationship between Corollaries~\ref{coro} and \ref{coro2} and SDP relaxations for quadratic programming.
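For the trivial SVP case $B=I$ above, the formulas of Corollary~\ref{coro} can be instantiated directly. In the sketch below we encode the unit ball as the forbidden set via $A=B=I$, $c=d={\bf 0}$, $\gamma=0$, and $q=-1$ (our reading of how the example maps onto the corollary's notation, stated as an assumption), and check that the resulting cut collapses to $t\ge 1$ as claimed.

```python
import numpy as np

n = 3
B = A = np.eye(n)
c = d = np.zeros(n)
gamma, q = 0.0, -1.0  # forbidden set S = {(x,t) : ||x||^2 <= 1}, the unit ball

# eigendecomposition of B^{-T} A^T A B^{-1} = I: alpha_i = 1, v_i = e_i
alpha = np.ones(n)
V = np.eye(n)
Hm = sum((alpha[-1] - alpha[i]) * np.outer(V[:, i], V[:, i]) for i in range(n - 1))
e = sum(alpha[i] * (V[:, i] @ B @ (c - d)) * V[:, i] for i in range(n))
w = sum(alpha[i] * (V[:, i] @ B @ (c - d))**2 for i in range(n))
E = B.T @ Hm @ B
a = -2 * B.T @ e - 2 * B.T @ Hm @ B @ c
f = c @ B.T @ Hm @ B @ c + 2 * (B.T @ e) @ c - w - q

# the cut x^T E x + a^T x + f <= (alpha_n + gamma) t collapses to 1 <= t
assert np.allclose(E, 0) and np.allclose(a, 0) and abs(f - 1.0) < 1e-12
assert alpha[-1] + gamma == 1.0
```

Since all eigenvalues $\alpha_i$ coincide and the centers agree, both $E$ and $a$ vanish, leaving exactly the cut $t\ge 1$.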
\begin{example} Consider the set $C=\set{\bra{z,y}\,:\, z^2+y^2\leq 4}$ and the split disjunction $z\leq 0 \vee z\geq 1$. From Corollary~\ref{ellip coro}, we have that \begin{align*} C^{0,1}&:=\conv\bra{\set{\bra{z,y}\in C\,:\, z\leq 0}\cup \set{\bra{z,y}\in C\,:\, z\geq 1}}\\ &=\set{\bra{z,y}\,:\, z^2+y^2\leq 4,\quad |y|\leq \bra{\sqrt{3}-2}z+2}. \end{align*} Now let $F(z,y)= z^2+y^2-4$ and $G(z,y)=-(z-1/2)^2+1/4$. Since the split disjunction $z\leq 0 \vee z\geq 1$ is equivalent to $G(z,y)\leq 0$, we have \begin{equation} C^{0,1}=\conv\bra{\set{\bra{z,y}\in \Real^2\,:\, F(z,y)\leq 0,\quad G(z,y)\leq 0}}. \end{equation} Now consider $H_\lambda=(1-\lambda) F+\lambda G$. One can check that the split cut $|y|\leq \bra{\sqrt{3}-2}z+2$ obtained through Corollary~\ref{ellip coro} can be equivalently written as \begin{subequations} \begin{align} y^2-\bra{\bra{\sqrt{3}-2}z+2}^2&\leq 0 \label{splitcut} \\ \label{boundconst}\bra{\sqrt{3}-2}z+2&\geq 0. \end{align} \end{subequations} In turn, \eqref{splitcut} is equivalent to $H_{\lambda^*}\leq 0$ for $\lambda^*=\frac{4}{33}\bra{6-\sqrt{3}}$ because $H_{\lambda^*} / \bra{\frac{1}{33} \bra{9+4\sqrt{3}}} = y^2-\bra{\bra{\sqrt{3}-2}z+2}^2$. By noting that \eqref{boundconst} holds for $C$, we conclude that \begin{equation} C^{0,1}=\set{\bra{z,y}\in \Real^2\,:\, z^2+y^2\leq 4,\quad H_{\lambda^*}(z,y)\leq 0}. \end{equation} Unfortunately, $H_{\lambda^*}$ is not a convex function, so it does not fit in the aggregation framework described in this section. In particular, $H_{\lambda^*}$ is an indefinite quadratic function, so it cannot be obtained from an SDP relaxation of $\set{\bra{z,y}\in \Real^2\,:\, F(z,y)\leq 0,\quad G(z,y)\leq 0}$. Indeed, we can show that the SDP relaxation of $\set{\bra{z,y}\in \Real^2\,:\, F(z,y)\leq 0,\quad G(z,y)\leq 0}$ strictly contains $C^{0,1}$.
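The algebraic identity used in this example, $H_{\lambda^*} / \bra{\frac{1}{33}\bra{9+4\sqrt{3}}} = y^2-\bra{\bra{\sqrt{3}-2}z+2}^2$, can be confirmed numerically; the following standalone sketch (an illustration we add, separate from the example's argument) checks it at random points.

```python
import numpy as np

s3 = np.sqrt(3.0)
lam = 4 / 33 * (6 - s3)            # lambda* from the example
c = (9 + 4 * s3) / 33              # the scaling constant (1/33)(9 + 4*sqrt(3))

F = lambda z, y: z**2 + y**2 - 4
G = lambda z, y: -(z - 0.5)**2 + 0.25
H = lambda z, y: (1 - lam) * F(z, y) + lam * G(z, y)
split_cut = lambda z, y: y**2 - ((s3 - 2) * z + 2)**2

rng = np.random.default_rng(2)
for _ in range(200):
    z, y = rng.normal(size=2)
    assert abs(H(z, y) / c - split_cut(z, y)) < 1e-9
```

Comparing coefficients, the $z^2$ coefficient of $H_{\lambda^*}/c$ is $-(7-4\sqrt{3})<0$, which makes the indefiniteness of $H_{\lambda^*}$ explicit.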
Finally, while we can obtain $H_{\lambda^*}$ through a procedure described in \cite{yildiran2009convex}, this procedure requires the execution of a numerical algorithm and does not give closed form expressions such as those provided by Corollary~\ref{ellip coro}. \end{example} \section{APPENDIX: Omitted proofs and auxiliary lemmas.} \pordersplitlemma* \begin{proof} We prove the lemma for $p=2$ only. The same result follows for any $p \in \mathbb{N}$ by taking the appropriate power of both sides of the inequalities. Let $p=2$. Consider the quadratic equation $\bra{as +b}^2 - s^2 =0$. One can see that $s = \pi_0$ and $s=\pi_1$ solve the equation above. With some rearrangements, this quadratic equation can be written as $\bra{a^2 -1} s^2 + 2abs + b^2 =0$, where $a^2 - 1 = \frac{4 \pi_0 \pi_1}{\bra{\pi_1 - \pi_0}^2} < 0$. The result follows from noting that $\bra{a^2 -1} s^2 + 2ab s + b^2 < 0$ for $s \notin [\pi_0, \pi_1]$, and $\bra{a^2 -1} s^2 + 2ab s + b^2 > 0$ for $s \in \bra{\pi_0, \pi_1}$. \end{proof} \basictransformlemmares* \begin{proof} We prove the first case only. The second case is analogous. Let \beqn S_0 \eqq \set{ \bra{x,t} \in \epi(f) : \pi^T x \le \pi_0},\quad S_1 \eqq \set{ \bra{x,t} \in \epi(f) : \pi^T x \ge \pi_1}, \eeqn and \beqn \tilde{S}_0 \eqq \set{ \bra{x,t} \in \epi\bra{\tilde{f}} : \tilde{\pi}^T x \le \tilde{\pi}_0},\quad \tilde{S}_1 \eqq \set{ \bra{x,t} \in \epi\bra{\tilde{f}} : \tilde{\pi}^T x \ge \tilde{\pi}_1}.\eeqn By definition, ${{f}}^{{\pi}, {\pi}_0, {\pi}_1} = \conv \bra{S_0 \cup S_1}$. To show that $\conv \bra{S_0 \cup S_1}\subseteq \epi(f)\cap\epi(g)$, take $(\bar{x}, \tbar) \in \conv \bra{S_0 \cup S_1}$. There exist $(x^0, t^0) \in S_0$ and $(x^1, t^1) \in S_1$ such that $\bra{\bar{x}, \tbar} = \alpha (x^0, t^0) + \bra{1-\alpha} (x^1, t^1)$ for some $\alpha \in [0,1]$.
Then $\bra{B \bra{x^i -c}, t^i} \in \tilde{S}_i$ for $i\in\set{0,1}$ and hence \beqn \alpha \bra{B \bra{x^0 -c}, t^0} + \bra{1-\alpha} \bra{B \bra{x^1 -c}, t^1} = \bra{B \bra{\bar{x} -c}, \tbar} \in \epi\bra{\tilde{f}}^{\tilde{\pi}, \tilde{\pi}_0, \tilde{\pi}_1}. \eeqn Then by assumption, $\bra{B \bra{\bar{x} -c}, \tbar}\in \epi\bra{\tilde{f}}\cap\epi\bra{\tilde{g}}$ and the result follows from the definition of $f$ and $g$. For the reverse inclusion, take $(\bar{x}, \tbar) \in \epi(f)\cap\epi(g)$. Then $\bra{B \bra{\bar{x} -c}, \tbar}\in \epi\bra{\tilde{f}}\cap\epi\bra{\tilde{g}}$ and by assumption, there exist $(x^0, t^0) \in \tilde{S}_0$ and $(x^1, t^1) \in \tilde{S}_1$ such that $\bra{B \bra{\bar{x} -c}, \tbar} = \alpha (x^0, t^0) + \bra{1-\alpha} (x^1, t^1)$ for some $\alpha \in [0,1]$. Thus, \beqn \bar{x} = B^{-1} \bra{\alpha x^0 + \bra{1-\alpha} x^1} + c \quad \mbox{and} \quad \alpha t^0 + \bra{1-\alpha} t^1 = \tbar. \eeqn The result then follows by noting that $\bra{B^{-1} x^0 +c, t^0} \in S_0$ and $\bra{B^{-1}x^1 +c , t^1} \in S_1$. \begin{comment} Now we prove \eqref{transformation ineq2} following the similar steps as those for \eqref{transformation ineq1}. Define \beqn Q_0 \eqq \set{ x \in \mathbb{R}^n : h\bra{ B \bra{x-c}}\leq 0,\, \pi^T x \le \pi_0} \eeqn and \beqn Q_1 \eqq \set{ x \in \mathbb{R}^n : h\bra{ B \bra{x-c}}\leq 0,\, \pi^T x \ge \pi_1}. \eeqn Also define \beqn \tilde{Q}_0 \eqq \set{ x \in \mathbb{R}^n : h(x)\leq 0,\, \tilde{\pi}^T x \le \tilde{\pi}_0} \eeqn and \beqn \tilde{Q}_1 \eqq \set{ x \in \mathbb{R}^n : h(x)\leq 0,\, \tilde{\pi}^T x \ge \tilde{\pi}_1}. \eeqn By definition, $K_1(B,c)^{\pi, \pi_0, \pi_1} = \conv \bra{Q_0 \cup Q_1}$ and $K_1(I,\mathbf{0})^{\tilde{\pi}, \tilde{\pi}_0, \tilde{\pi}_1} = \conv \bra{\tilde{Q}_0 \cup \tilde{Q}_1}$. We prove \eqref{transformation ineq2} by showing that $\conv \bra{Q_0 \cup Q_1} = D$, where \beqn D \eqq \set{ x \in \mathbb{R}^n : h\bra{ B \bra{x-c}}\leq 0,\, \tilde{h}\bra{ B \bra{x-c}}\leq 0}.
\eeqn First, we prove that $\conv \bra{Q_0 \cup Q_1} \subseteq D$. Take $\bar{x} \in \conv \bra{Q_0 \cup Q_1}$. There exist $x^0 \in Q_0$ and $x^1 \in Q_1$ such that $\bar{x} = \alpha x^0 + \bra{1-\alpha} x^1$ for some $\alpha \in [0,1]$. Thus, we have \beq h\bra{ B \bra{x^0-c}}\leq 0,\quad \pi^T x^0 \le \pi_0\label{friend0_ineq4} \eeq and \beq h\bra{ B \bra{x^1-c}}\leq 0,\quad \pi^T x^1 \ge \pi_1. \label{friend1_ineq5} \eeq By \eqref{friend0_ineq4} and \eqref{friend1_ineq5}, one can see that $B \bra{x^0 -c} \in \tilde{Q}_0$ and $B \bra{x^1 -c} \in \tilde{Q}_1$. Therefore \beqn \alpha B \bra{x^0 -c} + \bra{1-\alpha} B \bra{x^1 -c} = B \bra{\bar{x} -c} \in K_1(I,\mathbf{0})^{\tilde{\pi}, \tilde{\pi}_0, \tilde{\pi}_1}. \eeqn From the above, we conclude that $h\bra{ B \bra{\bar{x}-c}}\leq 0$ and $\tilde{h}\bra{ B \bra{\bar{x}-c}}\leq 0$, which imply $\bar{x} \in D$, as needed. Now we show that $D \subseteq \conv \bra{Q_0 \cup Q_1}$. Take $\bar{x} \in D$. We have that $h\bra{ B \bra{\bar{x}-c}}\leq 0$ and $\tilde{h}\bra{ B \bra{\bar{x}-c}}\leq 0$. As a result, $B \bra{\bar{x} -c} \in K_1(I,\mathbf{0})^{\tilde{\pi}, \tilde{\pi}_0, \tilde{\pi}_1}$ and therefore, there exist $x^0 \in \tilde{Q}_0$ and $x^1 \in \tilde{Q}_1$ such that $B \bra{\bar{x} -c} = \alpha x^0 + \bra{1-\alpha} x^1$ for some $\alpha \in [0,1]$. Thus, we have \beq h\bra{ x^0}\leq 0,\quad \tilde{\pi}^T x^0 \le \tilde{\pi}_0 \label{friend0_ineq6} \eeq and \beq h\bra{ x^1}\leq 0,\quad \tilde{\pi}^T x^1 \ge \tilde{\pi}_1. \label{friend1_ineq7} \eeq Moreover, we have \beq \bar{x} = B^{-1} \bra{\alpha x^0 + \bra{1-\alpha} x^1} + c. \label{xbar_equality2} \eeq Using \eqref{friend0_ineq6} and \eqref{friend1_ineq7}, one can check that $B^{-1} x^0 +c \in Q_0$ and $B^{-1}x^1 +c \in Q_1$. Hence, $\alpha \bra{B^{-1} x^0 +c} + \bra{1-\alpha} \bra{B^{-1}x^1 +c}$, which by \eqref{xbar_equality2} is equivalent to $\bar{x}$, belongs to $\conv \bra{Q_0 \cup Q_1}$. This completes the proof of \eqref{transformation ineq2}. 
\end{comment} \end{proof} \begin{lemma} \label{NPValidity} Let $\pi \in \mathbb{R}^n$, $\pi_0, \pi_1, \hat{\pi} \in \mathbb{R}$ such that $\pi_0 < \pi_1$ and $\hat{\pi} \neq 0$. Also define \[LC^2 \bra{I, 0} \eqq \set{(x,t) \in \Real^n \times \mathbb{R}_{+} : \normss{x} \le t},\] $S_0 \eqq \{\bra{x,t} \in LC^2 \bra{I, 0} : \pi^T x + \hat{\pi} t \le \pi_0\}$, and $S_1 \eqq \{\bra{x,t} \in LC^2 \bra{I, 0} : \pi^T x + \hat{\pi} t \ge \pi_1\}$. \begin{itemize} \item If $\phat > 0$ and $\pi_0 < \pi_1 \le \frac{-\normss{\pi}}{4\phat}$, then $S_0 = \emptyset$ and $S_1 = LC^2 \bra{I, 0}$, \item if $\phat > 0$ and $\pi_0 < \frac{-\normss{\pi}}{4\phat} < \pi_1$, then $S_0 = \emptyset$, $S_1 \subsetneq LC^2 \bra{I, 0}$, and $S_1 \neq \emptyset$, \item if $\phat > 0$ and $\frac{-\normss{\pi}}{4\phat} \le \pi_0 < \pi_1$, then $S_0, S_1 \subsetneq LC^2 \bra{I, 0}$ and $S_0, S_1 \neq \emptyset$, \item if $\phat < 0$ and $\pi_0 < \pi_1 \le \frac{-\normss{\pi}}{4\phat}$, then $S_0, S_1 \subsetneq LC^2 \bra{I, 0}$ and $S_0, S_1 \neq \emptyset$, \item if $\phat < 0$ and $\pi_0 < \frac{-\normss{\pi}}{4\phat} < \pi_1$, then $S_1 = \emptyset$, $S_0 \subsetneq LC^2 \bra{I, 0}$, and $S_0 \neq \emptyset$, \item if $\phat < 0$ and $\frac{-\normss{\pi}}{4\phat} \le \pi_0 < \pi_1$, then $S_1 = \emptyset$ and $S_0 = LC^2 \bra{I, 0}$. \end{itemize} \end{lemma} \begin{proof} We prove the first three cases only. The other cases are analogous. First, consider the case that $\phat > 0$ and $\pi_0 < \pi_1 \le \frac{-\normss{\pi}}{4\phat}$. If $\norm{\pi}_2 = 0$, the result follows from non-negativity of $t$. Now assume that $\norm{\pi}_2 \neq 0$. Note that if $S_0 \neq \emptyset$, one can find $(\xbar, \tbar) \in S_0$ such that $\bra{\pi^T \xbar}^2/\norm{\pi}_2^2 \le \bra{\pi_0 - \pi^T \xbar}/\hat{\pi}$. Therefore, we prove $S_0 = \emptyset$ by showing that $\bra{\pi^T x}^2/\norm{\pi}_2^2 > \bra{\pi_0 - \pi^T x}/\hat{\pi}$. 
This follows from noting that for $y \in \Real$, the quadratic equation $\frac{y^2}{\norm{\pi}_2^2} = \frac{\pi_0 - y}{\hat{\pi}}$ does not have any solution. To prove $S_1 = LC^2 \bra{I, 0}$, we show that $\pi^T x + \phat t \ge \pi_1$ is a valid inequality for $LC^2 \bra{I, 0}$. This comes from the fact that the quadratic equation $\frac{y^2}{\norm{\pi}_2^2} = \frac{\pi_1 - y}{\hat{\pi}}$ has at most a single solution and as a result, we have $\bra{\pi_1 - \pi^T x}/\hat{\pi} \le \bra{\pi^T x}^2/\norm{\pi}_2^2 \le t$. Now consider the second case that $\phat > 0$ and $\pi_0 < \frac{-\normss{\pi}}{4\phat} < \pi_1$. Proving $S_0 = \emptyset$ is analogous to the previous case. We have $S_1 \subsetneq LC^2 \bra{I, 0}$, since $\bra{\bar{x}, \tbar} = \bra{\frac{- \pi}{2 \phat}, \frac{\normss{\pi}}{4 \phat^2}} \in LC^2\bra{I, 0}$, but $\bra{\bar{x}, \tbar} \notin S_1$. To prove $S_1 \neq \emptyset$, one can check that for any $\bar{x} \in \Real^n$ and $\tbar = \Max \set{\normss{\xbar}, \frac{\pi_1 - \pi^T \bar{x}}{\hat{\pi}}}$, $\bra{\bar{x}, \tbar} \in S_1$. Finally, consider the third case that $\phat > 0$ and $\frac{-\normss{\pi}}{4\phat} \le \pi_0 < \pi_1$. To prove $S_0, S_1 \subsetneq LC^2\bra{I, 0}$, one can see that $\bra{\bar{x}, \tbar} = \bra{\frac{- \pi}{2 \phat}, \frac{\pi_0 + \pi_1}{2 \phat}+ \frac{\normss{\pi}}{2 \phat^2}} \in LC^2\bra{I, 0}$, but $\bra{\bar{x}, \tbar} \notin S_0 \cup S_1$. Proving $S_1 \neq \emptyset$ is analogous to the previous case. Now we prove $S_0 \neq \emptyset$. If $\norms{\pi}=0$, one can note that $\bra{\bar{x}, \tbar} = \bra{{\bf 0}, 0} \in S_0$. If $\norms{\pi}\neq 0$, one can check that $\bra{\bar{x}, \tbar} = \bra{\frac{\hat{y}}{\normss{\pi}} \pi, \frac{\pi_0 - \hat{y}}{\phat}} \in S_0$, where $\hat{y} = \frac{-\normss{\pi} -\sqrt{\norm{\pi}_2^4 + 4 \normss{\pi} \pi_0 \phat}}{2\phat}$ is a solution to the quadratic equation $\frac{y^2}{\norm{\pi}_2^2} = \frac{\pi_0 - y}{\hat{\pi}}$.
\end{proof} \begin{lemma} \label{NValidity} Let $\pi \in \mathbb{R}^n$, $\pi_0, \pi_1, \hat{\pi} \in \mathbb{R}$ such that $\pi_0 < 0 < \pi_1$ and $\hat{\pi} \neq 0$. Also define \[LC \bra{I, 0} \eqq \set{(x,t) \in \Real^n \times \mathbb{R}_{+} : \norms{x} \le t},\] $S_0 \eqq \{\bra{x,t} \in LC\bra{I, 0} : \pi^T x + \hat{\pi} t \le \pi_0\}$, and $S_1 \eqq \{\bra{x,t} \in LC\bra{I, 0} : \pi^T x + \hat{\pi} t \ge \pi_1\}$. \begin{itemize} \item If $\hat{\pi} \ge \norm{\pi}_2$, then $S_0 = \emptyset$, $S_1 \subsetneq LC\bra{I, 0}$, and $S_1 \neq \emptyset$, \item if $\hat{\pi} \le - \norm{\pi}_2$, then $S_1 = \emptyset$, $S_0 \subsetneq LC\bra{I, 0}$, and $S_0 \neq \emptyset$, \item if $\hat{\pi} \in \bra{-\norm{\pi}_2, \norm{\pi}_2}$, then $S_0, S_1 \subsetneq LC \bra{I, 0}$ and $S_0, S_1 \neq \emptyset$. \end{itemize} \end{lemma} \begin{proof} First, we prove the case that $\hat{\pi} \ge \norm{\pi}_2$. If $\norm{\pi}_2 = 0$, the result follows from non-negativity of $t$. Now assume that $\norm{\pi}_2 \neq 0$. Note that if $S_0 \neq \emptyset$, one can find $(\xbar, \tbar) \in S_0$ such that $\bra{\pi^T \xbar}^2/\norm{\pi}_2^2 \le \bra{\pi_0 - \pi^T \xbar}^2/\hat{\pi}^2$. Therefore, we prove $S_0 = \emptyset$ by showing that $\bra{\pi^T x}^2/\norm{\pi}_2^2 > \bra{\pi_0 - \pi^T x}^2/\hat{\pi}^2$. Note that non-negativity of $t$ and $\phat$, together with $\pi^T x + \hat{\pi} t \le \pi_0$ imply $\pi^T x \le \pi_0 < 0$. One can see that $\pi^T x < \pi_0 - \pi^T x < -\pi^T x$, where the first inequality follows from $\pi^T x \le \pi_0$ and $- \pi^T x > 0$, and the second inequality comes from the fact that $\pi_0 < 0$. Thus, $\bra{\pi^T x}^2 > \bra{\pi_0 - \pi^T x}^2$ and the result follows by noting that $\frac{1}{\norm{\pi}_2^2} \ge \frac{1}{\hat{\pi}^2}$. We have $S_1 \subsetneq LC\bra{I, 0}$, since $\bra{\bar{x}, \tbar} = \bra{{\bf 0}, 0} \in LC\bra{I, 0}$, but $\bra{\bar{x}, \tbar} \notin S_1$. 
To prove $S_1 \neq \emptyset$, one can check that for any $\bar{x} \in \Real^n$ and $\tbar = \Max \set{\norms{\xbar}, \frac{\pi_1 - \pi^T \bar{x}}{\hat{\pi}}}$, $\bra{\bar{x}, \tbar} \in S_1$. Now consider the second case that $\hat{\pi} \le -\norm{\pi}_2$. Again for $\norm{\pi}_2 = 0$, the result follows from non-negativity of $t$. Now assume that $\norm{\pi}_2 \neq 0$. Note that if $S_1 \neq \emptyset$, one can find $(\xbar, \tbar) \in S_1$ such that $\bra{\pi^T \xbar}^2/\norm{\pi}_2^2 \le \bra{\pi_1 - \pi^T \xbar}^2/\hat{\pi}^2$. Therefore, we prove $S_1 = \emptyset$ by showing that $\bra{\pi^T x}^2/\norm{\pi}_2^2 > \bra{\pi_1 - \pi^T x}^2/\hat{\pi}^2$. Note that non-negativity of $t$, $\phat < 0$, and $\pi^T x + \hat{\pi} t \ge \pi_1$ imply $\pi^T x \ge \pi_1 > 0$. One can see that $-\pi^T x < \pi_1 - \pi^T x < \pi^T x$, where the first inequality comes from the fact that $\pi_1 > 0$, and the second inequality follows from $\pi_1 \le \pi^T x$ and $- \pi^T x < 0$. Thus, $\bra{\pi^T x}^2 > \bra{\pi_1 - \pi^T x}^2$ and the result follows by noting that $\frac{1}{\norm{\pi}_2^2} \ge \frac{1}{\hat{\pi}^2}$. We have $S_0 \subsetneq LC\bra{I, 0}$, since $\bra{\bar{x}, \tbar} = \bra{{\bf 0}, 0} \in LC\bra{I, 0}$, but $\bra{\bar{x}, \tbar} \notin S_0$. To prove $S_0 \neq \emptyset$, one can check that for any $\bar{x} \in \Real^n$ and $\tbar = \Max \set{\norms{\xbar}, \frac{\pi_0 - \pi^T \bar{x}}{\hat{\pi}}}$, $\bra{\bar{x}, \tbar} \in S_0$. Finally, consider the last case that $\hat{\pi} \in \bra{-\norm{\pi}_2, \norm{\pi}_2}$. Note that $\phat \neq 0$ and $\phat \in \bra{-\norms{\pi}, \norms{\pi}}$ imply $\norm{\pi}_2 \neq 0$. We have $S_0, S_1 \subsetneq LC\bra{I, 0}$, since $\bra{\bar{x}, \tbar} = \bra{{\bf 0}, 0} \in LC\bra{I, 0}$, but $\bra{\bar{x}, \tbar} \notin S_0 \cup S_1$.
To prove $S_0, S_1 \neq \emptyset$, one can check that for \beqn x^0 = \frac{\pi_0}{\norm{\pi}_2 \bra{\norm{\pi}_2 - \hat{\pi}}} \pi, \quad t^0 = - \frac{\pi_0}{\norm{\pi}_2 - \hat{\pi}}, \eeqn and \beqn x^1 = \frac{\pi_1}{\norm{\pi}_2 \bra{\norm{\pi}_2 + \hat{\pi}}} \pi, \quad t^1 = \frac{\pi_1}{\norm{\pi}_2 + \hat{\pi}}, \eeqn we have $\bra{x^0, t^0} \in S_0$ and $\bra{x^1, t^1} \in S_1$. \end{proof} \bibliographystyle{amsplain}
https://arxiv.org/abs/2108.05474
An asymptotically tight lower bound for superpatterns with small alphabets
A permutation $\sigma \in S_n$ is a $k$-superpattern (or $k$-universal) if it contains each $\tau \in S_k$ as a pattern. This notion of "superpatterns" can be generalized to words on smaller alphabets, and several questions about superpatterns on small alphabets have recently been raised in the survey of Engen and Vatter. One of these questions concerned the length of the shortest $k$-superpattern on $[k+1]$. A construction by Miller gave an upper bound of $(k^2+k)/2$, which we show is optimal up to lower-order terms. This implies a weaker version of a conjecture by Eriksson, Eriksson, Linusson and Wastlund. Our results also refute a 40-year-old conjecture of Gupta.
\section{Introduction} Given permutations $\tau \in S_k, \sigma \in S_n$, we say $\sigma$ \textit{contains} $\tau$ as a \textit{pattern} if there exist indices $1\le i_1< \dots<i_k\le n$ such that $\sigma(i_j) < \sigma(i_{j'})$ if and only if $\tau(j) < \tau(j')$ for all choices $j,j'$ (e.g., $312$ is contained in $2\underline{514}3$ as a pattern; we may choose $i_1,i_2,i_3 = 2,3,4$). We say that $\sigma\in S_n$ is a \textit{$k$-superpattern} if it contains each $\tau \in S_k$ as a pattern. Naturally, this leads us to consider the ``superpattern problem''. \begin{prob} For $k \ge 1$, let $f(k)$ be the minimum $n$ such that there exists $\sigma\in S_n$ which is a $k$-superpattern. What is the asymptotic growth of $f(k)$? \end{prob}\noindent In 1999, Arratia \cite{arratia} showed that $(1/e^2-o(1))k^2 \le f(k) \le k^2$, hence $f(k)$ is well-defined. There have been several competing conjectures about the asymptotic growth of $f(k)$. The conjecture relevant to this paper is that of Eriksson, Eriksson, Linusson and Wästlund, which claimed $f(k) = (1/2\pm o(1))k^2$ \cite{eriksson}. As some evidence towards this conjecture, Miller showed that there exist $k$-superpatterns of length $(k^2+k)/2$ (i.e., $f(k) \le (k^2+k)/2$) \cite{miller}. Later, Engen and Vatter improved this to show $f(k) \le (k^2+1)/2$ \cite{engen}. However, in forthcoming work \cite{hunter}, the author will show that $f(k)\le \frac{15}{32}k^2+O(k)$, refuting the claim that the constant $1/2$ is tight. \hide{ In 2009, Miller showed that there exist $k$-superpatterns of length $(k^2+k)/2$ using the alphabet $[k+1]$ (i.e., $f(k;k+1) \le (k^2+k)/2$) \cite{miller}. The linear factor was improved in 2018 by Engen and Vatter, though this used an alphabet that grows quadratically with $k$ \cite{engen}. In forthcoming work \cite{hunter}, the author will show that $f(k)\le \frac{15}{32}k^2+O(k)$ using quadratically large alphabets. This contradicts a conjecture by Eriksson et al.
\cite{eriksson} that $f(k) = (1/2\pm o(1))k^2$.} In light of this, one is left to wonder if a revised version of the conjecture from \cite{eriksson} holds true. We answer this in the affirmative by considering a ``stricter regime'' of the superpattern problem which has received attention recently (see \cite{chroman,engen}). \hide{ In this paper we prove a weakening of the conjecture by showing the constant $1/2$ is asymptotically tight in a stricter regime of the superpattern problem. Recently, the survey of Engen and Vatter \cite{engen} there have been breakthroughs in the superpattern problem and some of its variants \cite{chroman,he}. We study a stricter regime of the superpattern problem, which was mentioned in \cite[Section~6]{chroman}. Meanwhile, in this paper, we will show that the constant $1/2$ is asymptotically tight when we consider a stricter regime of the superpattern problem. In this paper, we Recently, there have been breakthroughs in the superpattern problem and some of its variants \cite{chroman,he}. In this paper, we study a certain regime of the superpattern problem, which was mentioned in \cite[Section~6]{chroman}. Our results are asymptotically tight in said regime. \zc{this sentence i can't read:} They have several implication of our results also refute a 40-year conjecture of Gupta \cite{gupta}; see Section~\ref{implications} for more details. \zc{usually, people introduce history before saying their own results. just something to think about.}} The regime in question concerns ``alphabet size''. Instead of having $\sigma$ be a permutation, what if it was a word (i.e., sequence) on the alphabet $[r]:=\{1,\dots,r\}$? For $\sigma\in [r]^n$ and $\tau \in S_k$, we say $\sigma$ contains $\tau$ as a pattern for the same reasons as before (i.e., if there are indices $1\le i_1<\dots<i_k\le n$ such that $\sigma(i_j) < \sigma(i_{j'})$ if and only if $\tau(j)<\tau(j')$). 
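To make the containment definition concrete, here is a small Python sketch (ours, purely illustrative; the helper names are not from the paper) that checks containment directly from the definition, for permutations and words alike, along with a brute-force superpattern test.

```python
from itertools import combinations, permutations

def contains_pattern(sigma, tau):
    # Brute-force test, straight from the definition: look for indices
    # i_1 < ... < i_k whose values compare exactly as tau's values do.
    k = len(tau)
    return any(
        all((sigma[i[a]] < sigma[i[b]]) == (tau[a] < tau[b])
            for a in range(k) for b in range(k))
        for i in combinations(range(len(sigma)), k))

def is_superpattern(sigma, k):
    # sigma may be a permutation or a word on a small alphabet.
    return all(contains_pattern(sigma, tau)
               for tau in permutations(range(1, k + 1)))

# The example from the text: 312 occurs in 25143 (positions 2,3,4).
assert contains_pattern([2, 5, 1, 4, 3], [3, 1, 2])
# 1,2,1 is a 2-superpattern; 1,2 is not, as it misses the pattern 21.
assert is_superpattern([1, 2, 1], 2) and not is_superpattern([1, 2], 2)
```

The quadratic pair comparison handles repeated letters correctly: equal values in $\sigma$ match neither $\tau(j)<\tau(j')$ nor $\tau(j')<\tau(j)$, so they are never used to realize a strict comparison.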
As before, we say $\sigma \in [r]^n$ is a $k$-superpattern if it contains every $\tau \in S_k$ as a pattern. We define $f(k;r)$ to be the minimum $n$ such that there is a $\sigma\in [r]^n$ which is a $k$-superpattern. One could revise the conjecture of \cite{eriksson} by claiming that, in regimes with ``small'' alphabets, the shortest $k$-superpatterns have length $(1/2\pm o(1))k^2$. In this paper, we prove this revised conjecture for the regime where $r=r_k = (1+o(1))k$. The lower bound is given by our main result. \begin{thm}\label{opt}For every $\epsilon > 0$, there exists $\delta > 0$ so that the following holds for sufficiently large $k$. For\footnote{Throughout the paper, we omit floor functions when there is no risk of confusion.} $r_k = (1+\delta)k$ and $n < (1/2-\epsilon)k^2$, no word $\sigma \in [r_k]^n$ is a $k$-superpattern. \end{thm}\noindent Hence, together with Miller's construction (which uses the alphabet $[k+1]$, and thus shows $f(k;k+1)\le (k^2+k)/2$), we have asymptotically sharp bounds on the length of the shortest superpatterns in this regime. \begin{cor}\label{asy}Suppose $r_k = (1+o(1))k$ and also $r_k > k$ for all $k$. Then \[f(k;r_k) = \left(\frac{1}{2}\pm o(1)\right)k^2.\] \end{cor} In Section~\ref{outline} we go over past lower bounds of $f$, and outline a proof of Theorem~\ref{opt}. In Section~\ref{notation} we go over notation. In Section~\ref{reduction} we go over a reduction which shows that Theorem~\ref{opt} follows from a more technical Theorem~\ref{goodwalks}, which we state later. We came across two proofs of Theorem~\ref{goodwalks}; we include both, with different levels of detail. In Section~\ref{det} we prove Theorem~\ref{goodwalks} by a simple coupling argument. In Section~\ref{alt} we sketch a second proof which uses the differential method. We believe our second proof is more likely to find applications in future research; however, the first proof is more natural and was easier to present in full detail.
In Section~\ref{conclusion} we discuss some open problems and go over some results about the lower-order terms of our bounds. One part which may be of particular interest is Section~\ref{implications}, where we refute a conjecture made by Gupta in 1981 \cite{gupta} about the length of ``bi-directional circular superpatterns''. \subsection{Past lower bounds and an outline of our proof}\label{outline} We mention two trivial lower bounds for the length of superpatterns. Any $\sigma \in S_n$ contains at most $\binom{n}{k}$ permutations $\tau \in S_k$ as a pattern, since $\binom{n}{k}$ counts the number of choices of indices $1\le i_1<\dots<i_k\le n$. This implies that $\binom{f(k)}{k}\ge k!$ must hold, which gives the bound $f(k) \ge (1/e^2-o(1))k^2$. Meanwhile, if $r_k = (1+o(1))k$, then one can get $f(k;r_k) \ge (1/e-o(1))k^2$ by a convexity argument (more specifically, one shows that any $\sigma \in [k]^n$ contains at most $(n/k)^k$ patterns of length $k$, and then one uses Remark~\ref{nfromk}, which we mention shortly). In 1976, Kleitman and Kwiatkowski \cite{kleitman} used inductive methods to show that $f(k;k) \ge (1-o(1))k^2$, which is asymptotically tight (indeed, to see $f(k;k)\le k^2$ one can consider $1,\dots,k$ repeated $k$ times). But it was only in 2020 that Chroman, Kwan, and Singhal \cite{chroman} proved non-trivial lower bounds for superpatterns on alphabets larger than $[k]$. Their methodology was based on ``encoding'' patterns in a more efficient manner. They show that typical choices of indices $1\le i_1<\dots<i_k\le n$ have many ``large gaps'' (choices $j$ where $i_{j+2}-i_j > Ck$ for a certain $C>0$), and that this property is particularly redundant (loosely, they create equivalence classes for choices of indices with many large gaps, and show that each equivalence class contains many choices of indices, yet few distinct patterns).
This was used to show $f(k) \ge (1.000076/e^2)k^2$ for large $k$, and $f(k;(1+e^{-1000})k) \ge ((1+e^{-600})/e)k^2$ for large $k$. In proving Theorem~\ref{opt}, we take a rather different approach from either of the previous papers which established non-trivial lower bounds (namely \cite{chroman,kleitman}). We actually reformulate the problem in terms of random walks on deterministic finite automata (DFAs). To get there, we need a definition and an observation. \begin{defn} For positive integers $k,n$, we let $F(k,n)$ be the maximum number of patterns $\tau \in S_k$ that a $\sigma \in [k]^n$ can contain. \end{defn} \begin{rmk}\label{nfromk} For any $\sigma \in [r]^n$, we have that $\sigma$ contains at most $\binom{r}{k}F(k,n)$ patterns $\tau \in S_k$. Consequently, if $r$ is such that\[\binom{r}{k}F(k,n) < k!\] then there is no $\sigma \in [r]^n$ which is a $k$-superpattern (i.e., $f(k;r)>n$). \end{rmk}\noindent To confirm this remark, it suffices to verify its first sentence, which can be briefly justified as follows. Note that for each of the $\binom{r}{k}$ subsets $Y\subset [r]$ with $|Y|=k$, there are at most $F(k,n)$ permutations $\tau\in S_k$ which are a pattern of $\sigma|_{\sigma^{-1}(Y)}$. Conversely, if $\tau\in S_k$ is a pattern of $\sigma$, it is contained as a pattern of $\sigma|_{\sigma^{-1}(Y)}$ for some set $Y\subset [r]$ with $|Y| =k$. Thus, for fixed $\epsilon > 0$, we want to show that when $n < (1/2-\epsilon)k^2$ and $k$ is large, $F(k,n)$ will be ``extremely small''. We are able to show this by considering random walks on certain DFAs. What we specifically prove about DFAs is a bit technical, so we defer the rigorous statement to Section~\ref{thereduction}. Essentially, it implies an exponentially small upper bound for $F(k,n)/k!$ when $n < (1/2-\Omega(1))k^2$.
\begin{repthm}{goodwalks}[Informal statement]There exists a function $G(k,n,N)$ (which is defined in terms of a family of DFAs) such that \[F(k,(1/2-\epsilon)k^2) \le G(k,(1/2-\epsilon)k^2,k^2).\]For fixed $\epsilon >0$, the quantity \[G(k,(1/2-\epsilon)k^2,k^2)\] gets ``very small'' as $k\to \infty$. \end{repthm}\noindent Here, the notion of ``very small'' is such that Theorem~\ref{opt} will follow from an application of Remark~\ref{nfromk}. Intuitively, one may expect our results to hold true by considering the following argument sketch. The rest of our paper will be dedicated to rigorously grounding this sketch. Consider any $\sigma \in [k]^n$. Let $t$ be sampled from $[k]$ uniformly at random. For any $i_0 \in [n]$, we'll have that $\mathbb{E}[\inf\{i>i_0: \sigma(i) = t\}-i_0]\ge (k+1)/2$, which is minimized when $\sigma(i_0+1),\dots,\sigma(i_0+k)$ is a permutation (we use the convention $\infty-i_0 = \infty$ so that the quantity $\inf\{i>i_0: \sigma(i) = t\}-i_0$ is always well-defined). Thus, if $t_1,\dots,t_k$ are i.i.d. and sample $[k]$ uniformly at random, and we set $i_j = \inf\{i>i_{j-1}:\sigma(i) =t_j\}$ for each $j\in [k]$, then it should be exponentially likely (in terms of $k$) that $i_k-i_0 > (1/2-\epsilon)k^2$ (this essentially follows from a Chernoff bound). This quantity $i_k-i_0$ essentially tells us how long $\sigma$ needs to be so that we can ``embed'' $t_1,\dots,t_k$ into $\sigma$. In Section~\ref{reduction}, we go over our reduction from pattern containment to this deterministic embedding process, and then show how to use DFAs to track the quantity $i_k-i_0$. We conclude Section~\ref{reduction} by precisely stating Theorem~\ref{goodwalks} and showing how it implies Theorem~\ref{opt}. We then prove Theorem~\ref{goodwalks} in Section~\ref{det}. In our argument sketch above, we show that $i_k-i_0 < (1/2-\epsilon)k^2$ is exponentially unlikely when $t_1,\dots,t_k$ are sampled uniformly at random.
What we do in Section~\ref{det} is show that the probability continues to be small when we condition on $t_1,\dots,t_k$ being a permutation. This is done by choosing $\alpha >0$ sufficiently small relative to $\epsilon$, and considering the behavior of substrings of length $\alpha k$. Here, if we choose the letters of our substring uniformly at random, the probability that our substring contains no repeated letters (i.e., it could be a substring of a permutation) is much larger than the probability that $i_{j+\alpha k}-i_j < (1/2-\epsilon)\alpha k^2$. \subsection{Notation}\label{notation} For positive integers $n$ we let $[n]:= \{1,\dots,n\}$. We let $[\infty]:= \{1,2,3,\dots\} \cup\{\infty\}$. We use some standard asymptotic notation, detailed below. Let $f=f(k),g=g(k)$ be functions. We say $f = O(g)$ if there exists $C> 0$ such that $f\le Cg$ for sufficiently large $k$; similarly, we say $f = \Omega(g)$ if there is $c> 0$ so that $f\ge cg$ for all large $k$. We use $o(1)$ to denote a non-negative\footnote{This is slightly non-standard; in most contexts $o(1)$ is allowed to be negative. We primarily use this convention to make the paper easier to read. We never implicitly make use of this convention in any of our proofs.} quantity that tends to zero as $k\to \infty$. Following \cite{keevash}, for a function $h= h(k)$, we say $h = f \pm g$ to mean $f-g\le h \le f+g$. We remind the readers of the Kleene star operator. Given an alphabet (i.e., a set) $\Sigma$, we let $\Sigma^*$ denote the set of finite words on the alphabet $\Sigma$ (so $\Sigma^* = \bigcup_{n=0}^\infty \Sigma^n$). For our purposes, a DFA is a $3$-tuple $D = (V,\delta,\root(D))$, where $V$ is the set (of ``states'') of $D$, $\delta:V\times \Sigma \to V;(v,t)\mapsto \delta(v,t)$ is a transition function defined on some alphabet $\Sigma$, and $\root(D) \in V$ is the ``root'' of $D$.
For the purposes of this paper, one may think of each DFA $D$ as being a rooted (not necessarily simple) directed graph, with its transition function, $\delta$, being a convenient way to describe walks on said graph. Given a word $w \in [k]^*$ and $v\in V$ we define a walk in $D$, $\w{v}{w}$, as follows. Let $L$ be the number of letters in $w$, so $w = w_1,\dots,w_L$. We set $\w{v}{w} = v_0,\dots,v_L$, where $v_0 = v$ and for $j \in [L]$, $v_j = \d{v_{j-1}}{w_j}$. Let $D$ be a DFA with a set of states $V$, and suppose we have defined $\delta:V \times[k]\to V;(v,w)\mapsto \d{v}{w}$. We shall extend the function $\delta$ to the domain $V\times[k]^*$. Consider $w \in [k]^*$. If $w$ has length zero, then set $\d{v}{w} = v$. Otherwise, proceeding inductively, writing $w = w_1,\dots,w_L$, we can set $\d{v}{w} = \d{\d{v}{w_1}}{w_2,\dots,w_L}$. \subsubsection{Cost} Now we shall go over how we define a ``cost function''. We will start with an initial function $c:V\times [k]\to [\infty]$, and then extend it, similar to how we extended the transition function $\delta$. The end result will be a way to assign cost to walks that behaves additively; for those familiar with weighted graphs and the traveling salesman problem, we will effectively be translating the concept of weighted walks into the language of DFAs. Let $D$ be a DFA with a set of states $V$, and suppose we have defined $\cost:V \times[k]\to [\infty]$. We shall extend this to the domain $V\times[k]^*$. Given $v\in V$ and $w \in [k]^*$, we let $v_0,\dots,v_{|w|} = \w{v}{w}$, and set\[\c{v}{w} = \sum_{j \in [|w|]}\c{v_{j-1}}{w(j)}.\]In English, we initialize with net cost zero and do a walk according to $w$ that starts at state $v$ and let $\c{v}{w}$ be our net cost at the end of the walk.
When doing the $j$-th step of our walk, we read the letter $w(j)$ while at state $v_{j-1}$ and shall increment our net cost by $\c{v_{j-1}}{w(j)}$ (if we think of $v_{j-1}$ as being a toll booth, this is the cost of taking the $w(j)$-th route of $v_{j-1}$). A weighted DFA is simply a 2-tuple $(D,\cost)$ where $D$ is a DFA and $\cost$ is a cost function defined on $V\times [k]$, with $V$ the set of states of $D$. Given a weighted DFA $X =(D,\cost)$, we call $D$ the \textit{underlying DFA} of $X$. Also, for a weighted DFA $X = (D,\cost)$, we will identify $X$ with $D$, so if we say something like ``let $V$ be the set of states of $X$'' we mean ``let $V$ be the set of states of $D$''. When talking about two DFAs $A,B$, we respectively denote the transition function of $A$ and the transition function of $B$ by $\delta_A$ and $\delta_B$. We similarly denote their walk functions by $\walk_A$ and $\walk_B$. In the same fashion, given two weighted DFAs $A,B$, we respectively denote their cost functions by $\cost_A$ and $\cost_B$. This allows us to compare functions when $A,B$ have a common set of states $V$. Thus, if we say $\c[A]{v}{t} \ge \c[B]{v}{t}$, this means that if we wanted to read the letter $t$ while at the state $v$, the associated cost of doing this in $A$ is at least as much as doing this in $B$. We now introduce the concept of making a weighted DFA ``cheaper''. For a weighted DFA $X = (D,\cost)$ we say that $Y = (D',\cost')$ is a \textit{cheapening} of $X$ if $D = D'$ (i.e., they have the same underlying DFA) and for each $(v,t) \in V\times [k]$ we have that $\cost(v,t) \ge \cost'(v,t)$ (here $V$ is the set of states of $D$ and $[k]$ is the alphabet of letters which $D$ reads). The implication of this definition is that a cheapening has a more relaxed cost function, one that assigns costs no larger than the original (just like what would happen if one decreased the weights of some edges in an instance of the traveling salesman problem).
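The definitions above can be illustrated with a toy example. The following Python sketch (ours, not part of the paper's argument; the class and names are hypothetical) extends a transition table and a cost table to words by walking and summing step costs, and builds a cheapening by lowering one entry of the cost table.

```python
class WeightedDFA:
    # Minimal sketch: a transition table delta[(v, t)] -> state and a cost
    # table cost[(v, t)], extended to words by walking and adding step costs.
    def __init__(self, root, delta, cost):
        self.root, self.delta, self.cost = root, delta, cost

    def walk(self, v, word):
        states = [v]
        for t in word:
            states.append(self.delta[(states[-1], t)])
        return states

    def walk_cost(self, v, word):
        states = self.walk(v, word)
        return sum(self.cost[(states[j], t)] for j, t in enumerate(word))

# A toy DFA on states {0, 1} reading the alphabet {1, 2}.
delta = {(0, 1): 1, (0, 2): 0, (1, 1): 1, (1, 2): 0}
cost_A = {(0, 1): 2, (0, 2): 1, (1, 1): 1, (1, 2): 2}
cost_B = {(0, 1): 1, (0, 2): 1, (1, 1): 1, (1, 2): 2}  # pointwise <= cost_A
A, B = WeightedDFA(0, delta, cost_A), WeightedDFA(0, delta, cost_B)

word = [1, 1, 2, 1]
assert A.walk(0, word) == B.walk(0, word) == [0, 1, 1, 0, 1]
assert B.walk_cost(0, word) <= A.walk_cost(0, word)  # cheapenings cost less
```

Since a cheapening shares the underlying DFA, both automata traverse the same states on any word, and the step-by-step cost comparison propagates to whole walks.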
\begin{rmk}\label{cheap}If $B$ is a cheapening of $A$, then for $(v,w) \in V\times [k]^*$ we have that $\c[B]{v}{w}\le \c[A]{v}{w}$. \begin{proof} Consider any $v \in V$ and $w\in [k]^*$. As $A,B$ have the same underlying DFA, we'll have that $\w[A]{v}{w} = v_0,\dots,v_{|w|} = \w[B]{v}{w}$. Hence, \[\c[A]{v}{w}- \c[B]{v}{w} = \sum_{j\in [|w|]} \left(\c[A]{v_{j-1}}{w_j}-\c[B]{v_{j-1}}{w_j}\right) \ge 0\](because $B$ is ``cheaper'' than $A$,\footnote{i.e., $\c[A]{v}{t} \ge \c[B]{v}{t}$ for all $(v,t) \in V\times [k]$} each summand is non-negative). It follows that $\c[A]{v}{w}\ge \c[B]{v}{w}$ as desired. \end{proof} \end{rmk} Finally, here are some meta-notational conventions we will use. The symbol $\sigma$ will refer to a word we want to be a superpattern. The symbol $\tau$ will be an element of $S_k$; we will wish to check whether $\tau$ is a pattern of $\sigma$. We use $i$ to denote an index of $\sigma$, $j$ to denote an index of $\tau$, and $t$ to denote an image of $\tau$ (i.e., it would make sense to write ``with $\tau = t_1,\dots,t_k$'' or ``suppose $\tau(j) = t$''). \section{Reduction}\label{reduction} In this section, we will properly state Theorem~\ref{goodwalks} (which shall be proven in Section~\ref{det}), and prove that it implies Theorem~\ref{opt}. First, in Section~\ref{gstrat}, we formalize a ``greedy strategy'' for embedding $\tau$ into $\sigma$, and show that, when $\sigma \in [k]^n$ and $\tau \in S_k$, $\tau$ is a pattern of $\sigma$ if and only if the greedy strategy works. Then in Section~\ref{gdfa}, we will introduce a way to associate $\sigma \in [k]^n$ with a weighted DFA that will simulate this greedy embedding. Next in Section~\ref{kdfa}, we introduce a family of weighted DFAs, called $k$-DFAs, and show they generalize the weighted DFAs from Section~\ref{gdfa}. Lastly, in Section~\ref{thereduction} we state Theorem~\ref{goodwalks} in terms of $k$-DFAs, and then prove Theorem~\ref{opt} assuming this result.
\subsection{Greedy Strategy}\label{gstrat} Let $\sigma\in [k]^n$ and $\tau \in S_k$. Since $\sigma$ uses the alphabet $[k]$, and $\tau$ uses every element of that alphabet, we have that $\tau$ is a pattern of $\sigma$ if and only if there are indices $1\le i_1< \dots <i_k \le n$ with $\sigma(i_j) = \tau(j)$ for each $j\in[k]$. Now, if such a choice/embedding of indices exists, then so will the ``greedy embedding'' of $\tau$ where we take $i_1 = \min\{i:i \in \sigma^{-1}(\tau(1))\}$ and iteratively for $j \in [k]\setminus \{1\}$ take $i_j = \min\{i>i_{j-1}: i\in \sigma^{-1}(\tau(j))\}$.\footnote{Indeed, suppose $i'_1<\dots < i'_k$ is one such embedding. We claim that the $i_j$ defined according to the greedy embedding will exist for all $j \in [k]$. First, we have that $\sigma(i'_1) =\tau(1)$, thus $i_1$ exists and we'll have $i_1 \le i'_1$. Then inductively, for any $j \in [k-1]$, assuming $i_j$ exists and $i_j\le i'_j$, we see that, as $\sigma(i'_{j+1}) =\tau(j+1)$ and $i'_{j+1}> i'_j \ge i_j$, the index $i_{j+1}$ exists and satisfies $i_{j+1} \le i'_{j+1}$. Hence we can construct $i_j$ for all $j\in [k]$ as required.} Conversely, if we can construct $i_1,\dots,i_k$ according to the greedy embedding, it is clear that we'll have $i_1\ge 1$ and $i_k \le n$, which will imply $\sigma$ contains $\tau$ as a pattern. Hence, $\tau$ being a pattern of $\sigma$ is equivalent to being able to greedily embed $\tau$ into $\sigma$. \subsection{Greedy DFA}\label{gdfa} Given $\sigma \in [k]^n$, we shall create a weighted DFA $A_\sigma$ on $n+1$ states such that for $\tau \in S_k$, $\tau$ can be greedily embedded into $\sigma$ if and only if $c_\tau\le n$, where $c_\tau$ is the ``cost'' of the walk which $\tau$ induces in $A_\sigma$. We start by letting the states of $A_\sigma$ be $V=\{0\}\cup [n]$, with $0$ being the root. We will now define the transition function $\delta$ and the associated cost function $\cost$ on the domain $V\times[k]$. See Figure~\ref{fig:gdfa} for an example.
For $v \in V$ and $t \in [k]$, we let $u=u(v,t) = \inf\{i \in \sigma^{-1}(t): i>v\}$. If $u < \infty$, then $u\in [n] \subset V$, thus we define $\d{v}{t} = u$ and $\c{v}{t} = u-v$. Otherwise, if $u = \infty$, we let $\d{v}{t} = v$ and $\c{v}{t} = \infty$. \begin{figure}[ht] \centering\begin{tikzpicture} \node[state, initial] (q0) {$0$}; \node[state, right of=q0] (q1) {$1$}; \node[state, right of=q1] (q2) {$2$}; \node[state, right of=q2] (q3) {$3$}; \node[state, right of=q3] (q4) {$4$}; \draw (q0) edge[above] node{$a$ (1)} (q1) (q0) edge[bend left, above] node{$b$ (2)} (q2) (q0) edge[bend right, below] node{$c$ (3)} (q3) (q1) edge[above] node{$b$ (1)} (q2) (q1) edge[bend left, above] node{$c$ (2)} (q3) (q2) edge[bend right, below] node{$b$ (2)} (q4) (q2) edge[above] node{$c$ (1)} (q3) (q3) edge[above] node{$b$ (1)} (q4); \end{tikzpicture} \caption{A sketch of $A_\sigma$ where $\sigma = a,b,c,b$ (here we use the alphabet $\{a,b,c\}$ rather than $[3]$ for clarity). The labels of the edges are of the form ``$x$ ($y$)'' where $x \in \{a,b,c\}$ is the letter being read and $y$ is the cost of the step. All omitted edges are self-loops with cost $\infty$.} \label{fig:gdfa} \end{figure} As we went over in Section~\ref{notation}, we can extend $\delta,\cost$ to functions on the domain $V\times [k]^*$ by considering finite walks. We can now also define the walk function $\walk$ for $A_\sigma$. Now, given $v\in V, w \in [k]^*$ we consider $v_0,\dots,v_{|w|} = \w{v}{w}$. If $v_j = v_{j-1}$ for some $j \in [|w|]$, we say there was a failure. It is easy to see that if there is a failure, then $\c{v}{w} = \infty$, and otherwise we will have $\c{v}{w} = v_{|w|}-v_0$ (by induction). We can now express pattern containment of permutations in terms of walks along $A_\sigma$. This is morally because $\w{0}{\tau}$ will mimic the greedy embedding of $\tau$, and has infinite cost if and only if the greedy embedding fails.
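The construction of $A_\sigma$ is easy to sketch in code. The following Python snippet (ours, purely illustrative; the helper names are hypothetical) builds the transition and cost tables for the word $a,b,c,b$ from the figure (encoded as $1,2,3,2$), checks a few edge labels against the figure, and confirms by brute force that a permutation is a pattern of $\sigma$ exactly when its walk from the root costs at most $n$.

```python
import math
from itertools import combinations, permutations

def build_A(sigma, k):
    # States {0,...,n}; delta[(v,t)] is the next occurrence of t after
    # position v (a self-loop with infinite cost if there is none).
    n = len(sigma)
    delta, cost = {}, {}
    for v in range(n + 1):
        for t in range(1, k + 1):
            u = next((i for i in range(v + 1, n + 1) if sigma[i - 1] == t),
                     None)
            delta[(v, t)] = v if u is None else u
            cost[(v, t)] = math.inf if u is None else u - v
    return delta, cost

def walk_cost(delta, cost, v, word):
    total = 0
    for t in word:
        total += cost[(v, t)]
        v = delta[(v, t)]
    return total

def contains_pattern(sigma, tau):
    k = len(tau)
    return any(all((sigma[i[a]] < sigma[i[b]]) == (tau[a] < tau[b])
                   for a in range(k) for b in range(k))
               for i in combinations(range(len(sigma)), k))

k, sigma = 3, [1, 2, 3, 2]        # the word a,b,c,b from the figure
delta, cost = build_A(sigma, k)
assert cost[(0, 2)] == 2 and cost[(2, 3)] == 1   # matches the edge labels
# tau is a pattern of sigma iff its walk from the root costs at most n:
for tau in permutations(range(1, k + 1)):
    assert contains_pattern(sigma, tau) == \
        (walk_cost(delta, cost, 0, tau) <= len(sigma))
```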
\begin{lem}For $\sigma \in [k]^n, \tau \in S_k$, we have that $\tau$ is a pattern of $\sigma$ if and only if $\c[A_\sigma]{0}{\tau}\le n$. \begin{proof}Let $\cost,\walk$ be the cost and walk functions of $A_\sigma$. Consider any $w \in [k]^*$. We shall show that $w$ has a greedy embedding into $\sigma$ if and only if $\c{0}{w} \le n$. By Section~\ref{gstrat}, the result will follow, since $\tau$ will be a pattern of $\sigma$ if and only if it has a greedy embedding into $\sigma$. By design/definition, we see that if $w\in [k]^*$ has a greedy embedding $i_1,\dots,i_{|w|}$ into $\sigma$, then $\w{0}{w} = 0,i_1,i_2,\dots,i_{|w|}$. Since $i_1 \ge 1 > 0$, and $i_j< i_{j+1}$ for $j \in [|w|-1]$, we get $\c{0}{w} = i_{|w|}\le n$ (because the walk does not have a failure). Meanwhile, we have that if $\c{0}{w} \le n$, then $0 = v_0 <v_1<\dots<v_{|w|}\le n$ with $v_0,\dots,v_{|w|}= \w{0}{w}$, making $v_1,\dots,v_{|w|}$ a greedy embedding of $w$ into $\sigma$. \end{proof} \end{lem} \subsection{The family of \texorpdfstring{$k$}{k}-DFAs}\label{kdfa} We will now define a family of weighted DFAs that will generalize the weighted DFAs $A_\sigma$ created in the last subsection. Let $D$ be a DFA with a set of states $V$ and a cost function $\cost:V\times[k]^*\to [\infty]$. We say $D$ is a \textit{$k$-DFA} if for each $v \in V$, we have that there is $\pi_v \in S_k$ such that $\pi_{v}(t) = \c{v}{t}$ for each $t\in [k]$. Now we will show how $k$-DFAs ``generalize'' the family of $A_\sigma$ from Section~\ref{gdfa}. Recall that given two weighted DFAs $X,Y$, we say $X$ is a cheapening of $Y$ if they both have the same underlying DFA, and we have $\c[X]{v}{t} \le \c[Y]{v}{t}$ for all $(v,t) \in V\times [k]$. \begin{lem}\label{kcheap} Let $k$ be a positive integer. For any $\sigma \in [k]^n$, there exists a $k$-DFA $B_\sigma$ which is a cheapening of $A_\sigma$.
\begin{proof} Let $V$ be the set of states for $A_\sigma$ and let $\cost$ be the cost function for $A_\sigma$ restricted to $V\times [k]$. We shall take $B_\sigma$ to have the same underlying DFA as $A_\sigma$, and we need to define some cost function $\cost_*$ for $B_\sigma$. It suffices to define $\c[*]{v}{t}$ for all $(v,t) \in V\times [k]$. For each $v\in V$, we wish to find a permutation $\pi_v \in S_k$ such that $\pi_{v}(t)\le \c{v}{t}$ for all $t\in [k]$. We will then set $\c[*]{v}{t} = \pi_v(t)$ for all $(v,t)\in V\times[k]$. If we can do this, then it is clear that $B_\sigma$ will be a $k$-DFA (by definition) and that it will be a cheapening of $A_\sigma$ (by our choices of $\pi_v$). We now fix some $v\in V$, and find $\pi_v$. By construction of $A_\sigma$, we have that $\cost_v$ is injective on finite values. Indeed, for $t\in [k]$, we have $\c{v}{t} = c < \infty \implies \sigma(v+c) = t$, thus if $t,t'\in [k]$ have the same finite cost $c$ (starting at $v$) we have that $t = \sigma(v+c) = t'$. Letting $T = \{t\in [k]: \c{v}{t} \le k\}$, we have that $\cost_v|_T$ is an injection into $[k]$ and $t\in [k]\setminus T $ will imply $\c{v}{t} > k$. Thus, it works to let $\pi_v = \pi \in S_k$ for any $\pi$ where $\pi|_T = \cost_v|_T$ (such $\pi$ will exist as $\cost_v|_T$ is an injection into $[k]$). \end{proof} \end{lem} Recalling Remark~\ref{cheap}, as $B_\sigma$ is a cheapening of $A_\sigma$, we have that $\c[B_\sigma]{v}{w}\le \c[A_\sigma]{v}{w}$ for all $(v,w) \in V\times [k]^*$. Hence, for any $\sigma \in [k]^n$, we get \[ \{\tau \in S_k: \c[A_\sigma]{0}{\tau}\le n \} \subseteq \{\tau \in S_k: \c[B_\sigma]{0}{\tau}\le n \}\]where the set on the RHS is defined with respect to $B_\sigma$, which is a $k$-DFA. \subsection{The Reduction}\label{thereduction} We define $G(k,n,N)$ to be the maximum, over all $k$-DFAs $D$ on $N$ states, of the number of ``permutational walks'' $w\in S_k$ with $\c{\root(D)}{w} \le n$. Observe that\[F(k,n) \le G(k,n,n+1)\le G(k,n,k^2)\]when $n\le k^2/2$ (here the first inequality follows from our previous work, and the second follows from the monotonicity of $G$ in the third variable). We can now make our informal statement of Theorem~\ref*{goodwalks} precise.\begin{thm}[Formal statement]\label{goodwalks} Fix $\epsilon^*>0$. Then there exists $c>0$ such that for sufficiently large $k$, \[G(k,(1/2-\epsilon^*)k^2,k^2) \le \exp(-ck)k!.\]\end{thm} By Remark~\ref{nfromk}, we see that Theorem~\ref{opt} will follow. \begin{proof}[Proof of Theorem~\ref{opt} given Theorem~\ref{goodwalks}] Fix $\epsilon>0$. We take $\epsilon^* = \epsilon$. We may apply Theorem~\ref{goodwalks} to get $c>0$ such that \[F(k,(1/2-\epsilon)k^2) \le \exp(-ck)k!\]for all sufficiently large $k$. One can easily verify that there exists $\delta_0 >0$ such that $2\delta +\delta \log(\delta^{-1})\le c$ for all $\delta \in (0,\delta_0]$. We take $\delta =\min\{1,\delta_0\}$.
Letting $r_k = (1+\delta)k$, a standard bound gives \[\binom{r_k}{k} \le (e(1+\delta)\delta^{-1})^{\delta k} <\exp((2+\log(\delta^{-1}))\delta k) \le \exp(ck).\]Thus by Remark~\ref{nfromk}, we get that $f(k;r_k) >(1/2-\epsilon)k^2$ for sufficiently large $k$. \end{proof} \section{A coupling argument}\label{det} \subsection{Machinery}\label{machinery} In this subsection, we will fix some variables. We let $k$ be a (fixed) positive integer. We let $D$ be a (fixed) $k$-DFA with state set $V$; we respectively denote the transition, walk, and cost functions of $D$ by $\delta,\walk,$ and $\cost$. We will say $w \in [k]^*$ is a \textit{permutational word} if $w(j) = w(j') \implies j=j'$ (i.e., if $w$ is injective). Note that permutational words will always use the alphabet $[k]$. Also, for $w = w_1,\dots,w_L$, and $E \subset [L]$, we write $w|_E$ to denote the word $w_{e_1},w_{e_2},\dots,w_{e_{|E|}}$, where $e_1< e_2<\dots<e_{|E|}$ are the elements of $E$ in increasing order. We will make use of the following fact several times. \begin{rmk}\label{uniform} Suppose $w$ is sampled uniformly from permutational words of length $L$. For any $ E\subset [L]$, we have that $w|_E$ will sample permutational words of length $|E|$ uniformly at random. \end{rmk}\noindent This remark follows by symmetry. We will be concerned with bounding the following quantity, $P$. \begin{defn}For $v \in V,\epsilon >0,L$, we define \[P(v,L,\epsilon) = \mathbb{P}(\c{v}{w} < (1/2-\epsilon)kL)\]where $w$ is a permutational word of length $L$ chosen uniformly at random. \end{defn} \noindent For convenience, for $v \in V, w\in [k]^*,\epsilon>0$, we say $w$ is \textit{$(v,\epsilon)$-bad} if $\c{v}{w} < (1/2-\epsilon)k|w|$. Otherwise we say $w$ is $(v,\epsilon)$-good. Note that $P$ and this concept of ``goodness'' are defined with respect to $D$. We now move on to proving some necessary lemmas.
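Remark~\ref{uniform} can be verified exhaustively in a small case. The Python snippet below (ours, purely illustrative) enumerates all permutational words of length $3$ on $[4]$, restricts each to $E = \{1,3\}$, and checks that every permutational word of length $2$ arises equally often.

```python
from itertools import permutations
from collections import Counter

k, L = 4, 3
E = [0, 2]   # the set E = {1, 3}, written 0-indexed
counts = Counter()
for w in permutations(range(1, k + 1), L):   # all permutational words
    counts[tuple(w[i] for i in E)] += 1

# The restriction w|_E hits every permutational word of length |E|
# equally often, so it is uniform whenever w is uniform.
assert set(counts) == set(permutations(range(1, k + 1), len(E)))
assert len(set(counts.values())) == 1
```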
\begin{lem}For any $v\in V,\epsilon> 0, L=L_0+L_1+\dots +L_M$\[P(v,L,\epsilon) \le P(v,L_0,\epsilon) + \sum_{u \in V} \sum_{m\in [M]}P(u,L_m,\epsilon).\] \begin{proof} Set $I_0 = [L_0]$, and similarly for $m \in [M]$ set $I_m = [L_0+\dots+L_m]\setminus [L_0+\dots +L_{m-1}]$. Observe that $I_0,\dots,I_M$ partition $[L]$. Also, for each $m \in \{0\}\cup[M]$ it is clear that $|I_m| = L_m$. Consider a word $w\in [k]^L$. For each $m\in \{0\}\cup[M]$, let $w^m = w|_{I_m}$. Observe that for each $v \in V$, we can choose $u_1,\dots,u_M \in V$ so that \[\c{v}{w} = \c{v}{w^0}+\sum_{m\in[M]}\c{u_m}{w^m}\] (indeed, we can start by taking $u_1 = \d{v}{w^0}$, and then for $m\in [M-1]$ take $u_{m+1} = \d{u_m}{w^m}$). This is because $w$ is the sequential concatenation of $w^0,w^1,\dots,w^M$. Now suppose $w \in [k]^L$ is a $(v,\epsilon)$-bad word. It follows (essentially by pigeonhole) that there must exist some $m\in \{0\}\cup [M]$ where the event $E_m(w)$ is true, where \begin{itemize} \item $E_0(w)$ is the event that $w^0$ is $(v,\epsilon)$-bad \item and for $m\in [M]$, $E_m(w)$ is the event that $w^m$ is $(u_m,\epsilon)$-bad. \end{itemize} Let $w$ be sampled from permutational words of length $L$ uniformly at random. As above, for each $m\in \{0\}\cup[M]$ we define $w^m = w|_{I_m}$. Now, recalling Remark~\ref{uniform}, we will have that each $w^m$ will be sampled uniformly at random from permutational words of length $L_m$. Immediately, we see that the probability of the event $E_0(w)$ being true is exactly $P(v,L_0,\epsilon)$, by definition. We now consider each $m\in [M]$. As $w^m$ is a uniform random permutational word of length $L_m$, we'll get \[\mathbb{P}(w^m \textrm{ is } (u,\epsilon)\textrm{-bad for some }u) \le \sum_{u \in V} P(u,L_m,\epsilon)\] by the union bound. Hence as the event $E_m(w)$ is contained in the event on the LHS, the probability of $E_m(w)$ occurring is upper-bounded by the RHS.
So by a union bound we observe \[\mathbb{P}(w\textrm{ is $(v,\epsilon)$-bad}) \le \sum_{m\in \{0\}\cup[ M]} \mathbb{P}(E_m(w)),\]which gives the desired result due to the bounds given in the preceding paragraph. \end{proof} \end{lem} Writing $P(L,\epsilon):= \max_{v \in V}\{P(v,L,\epsilon)\}$, we immediately get \begin{cor}\label{doubling} For $\epsilon>0$ and positive integers $L,M$, \[P(ML,\epsilon) \le M|V| P(L,\epsilon).\] \end{cor} Next, we observe \begin{lem}\label{forL} For $ \epsilon> 0, L$, \[P(L,\epsilon) \le \frac{k^L(k-L)!}{k!}\exp(-\frac{\epsilon^2}{4} L).\] \begin{proof} Let $w$ be a uniform random word of length $L$. For each $v \in V$, we have that \begin{align*} P(v,L,\epsilon) &= \mathbb{P}(w \textrm{ is $(v,\epsilon)$-bad}|w\textrm{ is permutational})\\ &\le \frac{\mathbb{P}(w \textrm{ is $(v,\epsilon)$-bad})}{\mathbb{P}(w\textrm{ is permutational})}\\ \end{align*}by the definition of conditional probability. Immediately, we note that $\mathbb{P}(w\textrm{ is permutational}) = \frac{k!}{(k-L)! k^L}$, which justifies the first term in our lemma. Meanwhile, by a Chernoff bound \cite[Theorem~6.ii]{goemans}, we have that $\mathbb{P}(w \textrm{ is }(v,\epsilon)\textrm{-bad})\le \exp(-\frac{\epsilon^2}{4} L)$ as $\c{v}{w}$ is the sum of $L$ i.i.d. samples from the uniform distribution on $[k]$ (this is true by definition of $D$ being a $k$-DFA). This justifies the second term in our lemma. Hence, $P(v,L,\epsilon) \le \frac{k^L(k-L)!}{k!}\exp(-\frac{\epsilon^2}{4}L)$. As $v\in V$ was arbitrary, the same bound applies to $P(L,\epsilon)$, giving the result. \end{proof} \end{lem} \subsection{Proof of Theorem~\ref{goodwalks}} We require a standard bound for the birthday problem: \begin{rmk}\label{birth} There exists $\alpha_0>0$ such that for $\alpha \in (0,\alpha_0)$, if we take $L = \alpha k$, then we have that \[\frac{k^L(k-L)!}{k!} \le \exp((\alpha^2/2+\alpha^3/4)k).\] \end{rmk}\noindent This follows from \cite[Slide~11]{maji}. We can now prove Theorem~\ref{goodwalks} by choosing $L$ appropriately.
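Remark~\ref{birth} can be spot-checked numerically. The sketch below (our own illustration, with arbitrary sample values of $k$ and $\alpha$) computes $\log\big(k^L(k-L)!/k!\big)$ exactly via `math.lgamma` and compares it against the claimed exponent $(\alpha^2/2+\alpha^3/4)k$:

```python
import math

def log_inv_birthday(k, L):
    """log of k^L (k-L)! / k!, i.e. minus the log-probability that a
    uniform word of length L over [k] is injective (permutational)."""
    return L * math.log(k) + math.lgamma(k - L + 1) - math.lgamma(k + 1)

# Check the bound of Remark (birth) for a few illustrative (k, alpha) pairs.
for k, alpha in [(500, 0.05), (1000, 0.1), (2000, 0.2)]:
    L = int(alpha * k)
    lhs = log_inv_birthday(k, L)
    rhs = (alpha**2 / 2 + alpha**3 / 4) * k
    assert 0 < lhs <= rhs, (k, alpha, lhs, rhs)
```

This matches the heuristic $-\sum_{i<L}\log(1-i/k) \approx L^2/(2k) + L^3/(6k^2)$, which the slack term $\alpha^3/4$ comfortably absorbs for small $\alpha$.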
\begin{proof}[Proof of Theorem~\ref{goodwalks}] Fix $\epsilon^* > 0$ and set $\epsilon = 2\epsilon^*/3$ so that \[(1/2-\epsilon)(1-\epsilon)>(1/2-\epsilon^*).\tag{$\dagger$}\label{epchoice}\]Without loss of generality, we may assume $\epsilon<\alpha_0$ where $\alpha_0$ is the constant from Remark~\ref{birth}. Let $D$ be any $k$-DFA with $k^2$ states. We define $P(\cdot,\cdot)$ and $(\cdot,\cdot)$-bad with respect to $D$ as we did in Section~\ref{machinery}. Now, we will take $L = \lfloor \alpha k\rfloor $ for some $\alpha\in (0,\epsilon)$ which we determine later. We shall bound $P(L,\epsilon)$ by directly applying Lemma~\ref{forL}. When $0<\alpha<\epsilon<\alpha_0$, the conclusion of Remark~\ref{birth} holds. Hence, plugging $L$ into Lemma~\ref{forL} gives \[P(L,\epsilon) \le \exp\left((\alpha^2/2+\alpha^3/4)k -\frac{\epsilon^2}{4}L\right) \le \exp\left((\alpha^2/2+\alpha^3/4-\frac{\epsilon^2}{4}\alpha)k+1\right) .\] (Here the $+1$ accounts for $L$ being the floor of $\alpha k$.) Taking $\alpha = \sqrt{\epsilon^2/2+1}-1 \in (0,\epsilon)$,\footnote{It should be clear that defining $\alpha$ in this way ensures $\alpha>0$; by checking derivatives one can confirm that $\epsilon> 0 \implies \alpha< \epsilon$. Hence $\alpha \in (0,\epsilon)$ as desired.} we get \[P(L,\epsilon)\le \exp(1-c_0k),\textrm{ with }c_0=\frac{\epsilon^2(\sqrt{\epsilon^2/2 +1}-1)}{8}.\] Next, we set $M = \lfloor k/L\rfloor$. Because $L \ge 1$, we will have $M \le k$, and by assumption $D$ has at most $k^2$ states. By Corollary~\ref{doubling}, \[P(ML,\epsilon)\le k^3\exp(1-c_0k).\] For later use, we remark that \[k-\epsilon k< k-\alpha k \le k-L < ML \tag{$\ddagger$}\label{floor}.\]The above follows from properties of the floor function and the fact that $\alpha < \epsilon$. Now, let $w \in S_k$ be sampled uniformly at random. By Remark~\ref{uniform}, $w':=w|_{[ML]}$ samples permutational words of length $ML$ uniformly at random.
Trivially, \[\c{\root(D)}{w'}\le \c{\root(D)}{w}\]as $w'$ is a prefix of $w$. So, assuming $w'$ is $(\root(D),\epsilon)$-good, we get \begin{align*} \c{\root(D)}{w} &\ge \c{\root(D)}{w'}\\ &\ge (1/2-\epsilon)kML \\ &>(1/2-\epsilon^*)k^2.\\ \end{align*} (The last line quickly follows from \ref{epchoice} and \ref{floor}.) Thus, by our bound on $P(ML,\epsilon)$ from above \[\mathbb{P}(\c{\root(D)}{w}\le (1/2-\epsilon^*)k^2) \le P(ML,\epsilon) \le k^3\exp(1-c_0k).\]As $D$ was arbitrary, this holds for all $k$-DFAs on $k^2$ states, thus \[G(k,(1/2-\epsilon^*)k^2,k^2) \le k^3\exp(1-c_0k)k!.\]We conclude by fixing some choice of $c \in (0,c_0)$. By basic asymptotics, it follows that for sufficiently large $k$, we have \[G(k,(1/2-\epsilon^*)k^2,k^2) \le \exp(-ck)k!.\] \end{proof} \section{An alternate approach}\label{alt} In this section, we sketch another way to get bounds on $G$. Here, we break the cost of each walk into two parts, which we bound separately. Fix a $k$-DFA $D$. Suppose we sample $\tau \in S_k$ uniformly at random. We write $\tau = t_1,\dots,t_k$ and $v_0,\dots,v_k = \w[D]{\root(D)}{\tau}$. For each $j \in [k]$, let $C_j =\c[D]{v_{j-1}}{t_j}$. By definition of cost, \[\c[D]{\root(D)}{\tau} = \sum_{j=1}^k C_j \label{eq:cost}\tag{$*$}.\] Now, given $t_1,\dots, t_{j-1}$, there exists $S_j \subset [k],|S_j| = k-j+1$ such that $C_j$ samples $S_j$ uniformly at random (in particular, $t_1,\dots,t_{j-1}$ determines $v_{j-1}$ thus we get $S_j = \{\c[D]{v_{j-1}}{t}:t \in [k]\setminus \{t_1,\dots,t_{j-1}\}\}$). Let $X_j$ be such that $C_j$ is the $X_j$-th smallest element of $S_j$. Since $C_j$ samples $S_j$ uniformly, it follows that $X_j$ samples $[k-j+1]$ uniformly at random.
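The decomposition of $C_j$ into the rank $X_j$ plus the non-negative remainder $C_j - X_j$ (denoted $Y_j$ below) can be made concrete on a toy example of our own: a single-state $k$-DFA whose cost function is the identity, so that $C_j = t_j$ and $S_j$ is simply the set of unread letters.

```python
def rank_decomposition(tau):
    """For the one-state DFA with cost(t) = t, split each C_j = tau(j)
    into X_j (the rank of tau(j) among the unread letters, i.e. among
    S_j) plus a non-negative remainder Y_j = C_j - X_j."""
    k = len(tau)
    unread = set(range(1, k + 1))
    X, Y = [], []
    for t in tau:
        x = sorted(unread).index(t) + 1  # rank of t in S_j
        X.append(x)
        Y.append(t - x)
        unread.remove(t)
    return X, Y

X, Y = rank_decomposition([3, 1, 4, 2])
assert X == [3, 1, 2, 1] and Y == [0, 0, 2, 1]
# C_j = X_j + Y_j, with X_j in [k-j+1] and Y_j >= 0:
assert sum(X) + sum(Y) == 3 + 1 + 4 + 2
assert all(1 <= X[j] <= 4 - j for j in range(4)) and all(y >= 0 for y in Y)
```

The point of the section is that $\sum_j X_j$ is distributed the same way for every $k$-DFA, while $\sum_j Y_j$ is where the particular automaton enters.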
We remark without proof that $X_1,\dots,X_k$ are independently distributed. Next, we define $Y_j = C_j-X_j$, and observe that $Y_j$ is always non-negative. By \ref{eq:cost}, we get \[\c[D]{\root(D)}{\tau} = \sum_{j=1}^k X_j +Y_j.\] We shall now consider $\sum_{j=1}^k X_j$ and $\sum_{j=1}^k Y_j$ individually. The first sum is not very complicated and does not depend on our choice of $D$. It suffices to apply Hoeffding's inequality. \begin{lem}\label{Xbound} For any $\epsilon > 0$, and sufficiently large $k$, \[\mathbb{P}(\sum_{j=1}^k X_j \le (1/4-\epsilon)k^2) < \exp(-6\epsilon^2 k).\] \begin{proof} By linearity, \[\mathbb{E}\left[\sum_{j=1}^k X_j\right] = \sum_{j=1}^{k} \frac{k-j+2}{2} = \frac{1}{4}(k^2+3k) > k^2/4.\]Meanwhile, for each $j$ the support of $X_j$ is contained in the interval $[1,k-j+1]$. We have that \[\sum_{j=1}^k (k-j+1-1)^2 = \sum_{j=1}^{k-1} j^2 = \frac{1}{6}(k-1)k(2k-1) < k^3/3.\] Thus, applying a standard Hoeffding bound, we get \begin{align*} \mathbb{P}(\sum_{j=1}^k X_j \le (1/4-\epsilon)k^2) &< \exp\left(-\frac{2(\epsilon k^2)^2}{k^3/3}\right)\\ &= \exp(-6\epsilon^2 k).\\ \end{align*} \end{proof} \end{lem} Next, we want to control the sum over $Y_j$. We first note \begin{align*} Y_j &= \sum_{t=1}^{C_j} I(\c[D]{v_{j-1}}{t}=\c[D]{v_{j-1}}{t_{j'}}\textrm{ for some }j' \in [j-1])\\ &\ge \min_{v \in V} \left\{\sum_{t=1}^{X_j} I(\c[D]{v}{t} =\c[D]{v}{t_{j'}} \textrm{ for some }j'\in [j-1] )\right\}. \end{align*} Thus, for $v \in V, j\in [k],x \in [k-j+1]$, we define \[T_{v,j,x} := \sum_{t=1}^x I(\c[D]{v}{t} = \c[D]{v}{t_{j'}} \textrm{ for some }j'\in [j-1])\] \[\textrm{ and } T_{j,x} := \min_{v \in V}\{T_{v,j,x}\}.\] We will next need two concentration results. These will allow us to bound $\sum_{j=1}^k Y_j$ in a manner reminiscent of Riemann sums. \begin{prp}\label{con1} Fix $\epsilon^* > 0$ and a positive integer $M$.
There exists $c = c_{\ref{con1}}(\epsilon^*,M) > 0$ such that for each $m_1,m_2 \in [M-1]$, \[\mathbb{P}(|\{m_1k/M< j\le (m_1+1)k/M: X_j/(k-j+1)>\frac{m_2}{M}\}| < (1-\epsilon^*)\left(1-\frac{m_2}{M}\right)k/M)\le \exp(-ck)\]when $k$ is sufficiently large. We may in particular take $c_{\ref{con1}}(\epsilon^*,M) =\frac{1}{2}\left(\frac{\epsilon^*}{M}\right)^2$. \end{prp} \begin{prp}\label{con2} Fix $\epsilon^* > 0$ and a positive integer $M$. There exists $c = c_{\ref{con2}}(\epsilon^*,M) > 0$ such that for each $m_1,m_2 \in [M-1]$, \[\mathbb{P}(T_{j,m_2k/M} < (1-\epsilon^*)\left(\frac{m_2}{M}(j-1)\right) \textrm{ for some }\frac{m_1}{M}k<j\le \frac{m_1+1}{M}k)\le \exp(-ck)\]when $k$ is sufficiently large. We may in particular take any $c_{\ref{con2}}(\epsilon^*,M) <\frac{1}{2}\left(\frac{\epsilon^*}{M}\right)^2$. \end{prp}\noindent The first result immediately follows from a Chernoff bound, since the size of the set behaves exactly like a binomial random variable. To prove the second result it suffices to control $T_{v,j,m_2k/M}$ and then take a union bound over all $v,j$. To control $T_{v,j,m_2k/M}$, one can couple it with a binomial random variable $B$ with success probability slightly less than $m_2/M$ so that $\mathbb{P}(B> T_{v,j,m_2k/M})$ is exponentially small, and then apply a Chernoff bound. We leave the details as an exercise for the reader. We note that Proposition~\ref{con2} is the only result whose proof will make use of the number of states in $D$ not being too large. In Section~\ref{bestdfa}, we give an example of a $k$-DFA with $2^k$ states such that $\sum_{j=1}^k Y_j = 0$ always holds, thus limiting the growth of the number of states is necessary. We now go over how to bound $\sum_{j=1}^kY_j$. \begin{lem}\label{Ybound}Fix $\epsilon >0$.
There exists $c > 0$ such that for sufficiently large $k$, \[\mathbb{P}(\sum_{j=1}^kY_j < (1/4-\epsilon)k^2)<\exp(-ck).\] \begin{proof}[Proof of Lemma~\ref{Ybound} given Proposition~\ref{con1} and Proposition~\ref{con2}] Fix $\epsilon^*> 0$ and a positive integer $M$. Now assume that the events of Proposition~\ref{con1} and Proposition~\ref{con2} for the given $\epsilon^*$ and $M$ do not hold for any $m_1,m_2 \in [M-1]$. For $m_1 \in [M-1]$, let $E_{m_1} = [m_1k/M:(m_1+1)k/M]$. For $m_2 \in [M-1]$, let $F_{m_2} =\{ j: X_j/(k-j+1)>\frac{m_2}{M}\}$. We will have that \begin{align*} \sum_{j\in E_{m_1}} Y_j &\ge \sum_{j \in E_{m_1}} T_{j,X_j}\\ &\ge \sum_{m_2 \in [M-1]}\sum_{j \in E_{m_1}\cap F_{m_2}} (1-\epsilon^*)\frac{ (j-1)}{M}\\ &\ge \frac{(1-\epsilon^*)}{M}\sum_{m_2 \in [M-1]}|E_{m_1}\cap F_{m_2}|k\frac{m_1}{M}\\ &\ge \frac{(1-\epsilon^*)^2}{M^2}k^2 \sum_{m_2 \in [M-1]}(1-\frac{m_2}{M})\frac{m_1}{M}\\ \end{align*}(here the second inequality makes use of Proposition~\ref{con2} not holding and also applies telescoping; the last inequality makes use of Proposition~\ref{con1} not holding). Hence, \begin{align*} \sum_{j=1}^k Y_j &\ge \frac{(1-\epsilon^*)^2}{M^2}k^2\sum_{m_1 \in [M-1]} \sum_{m_2\in [M-1]} (1-\frac{m_2}{M})\frac{m_1}{M}\\ &= \frac{(1-\epsilon^*)^2}{M^2}k^2\left(\frac{M-1}{2}\right)^2\\ &\ge (1-\epsilon^*)^2(1-1/M)^2 \frac{1}{4}k^2\\ \end{align*}\noindent Here the second line follows by separating the double sum into the product of two sums (which both happen to equal $(M-1)/2$). Thus, if $\epsilon^*,M$ are such that $(1-\epsilon^*)^2(1-1/M)^2 \ge 1-4\epsilon$, the RHS will be at least $(1/4-\epsilon)k^2$. Hence, the probability that $\sum_{j=1}^kY_j <(1/4-\epsilon)k^2$ is at most the probability that there exist $m_1,m_2\in [M-1]$ such that the event from Proposition~\ref{con1} or Proposition~\ref{con2} holds with respect to the specified $\epsilon^*,M$.
By a union bound, this is at most \[(M-1)^2(\exp(-c_{\ref{con1}}(\epsilon^*,M)k)+\exp(-c_{\ref{con2}}(\epsilon^*,M)k)) \le \exp(-ck) \textrm{ for sufficiently large }k\] for any $c <\min\{c_{\ref{con1}}(\epsilon^*,M),c_{\ref{con2}}(\epsilon^*,M)\}$. \end{proof} \end{lem} It is clear that combining Lemma~\ref{Xbound} and Lemma~\ref{Ybound} gives another proof of Theorem~\ref{goodwalks}. \section{Conclusions}\label{conclusion} \subsection{Lower order terms for \texorpdfstring{$f(k;k+1)$}{f(k;k+1)}}\label{loworder} From Corollary~\ref{asy}, we know that $f(k;k+1) = (1/2\pm o(1))k^2$, meaning Miller's construction is optimal up to lower order terms. However, the statement of Theorem~\ref{opt} does not immediately yield any explicit function for this $o(1)$-term. We briefly mention an explicit function our methods yield. To prove $f(k;k+1) < n$, it suffices to show $kG(k,n,k^2) < k!$ (by Remark~\ref{nfromk}). The following comes from looking at the proof of Theorem~\ref{goodwalks}, and observing $c_0 > \epsilon^4/33$ for sufficiently small $\epsilon$ ($33$ may be replaced with any constant greater than $32$). \begin{rmk}For all sufficiently small $\epsilon>0$, \[\epsilon^4 > \frac{33+132\log(k)}{k} \implies f(k;k+1)<(1/2-3\epsilon/2)k^2.\] \end{rmk}\noindent Analyzing the work from Section~\ref{alt} should give a similar bound, where $33+132\log(k)$ is replaced by some other function of the same shape. Thus, we can say \begin{cor} For all $k$, \[\frac{k^2}{2}-k^{7/4+o(1)}\le f(k;k+1)\le \frac{k^2+k}{2}.\] \end{cor}\noindent It is interesting to note that the best lower bound for $f(k;k)$ is of the form $k^2-k^{7/4+o(1)}$ \cite{kleitman}. The lower bound for $f(k;k)$ was proved in 1976 and has remained unimproved for 45 years. It would be interesting to see if the lower-order error in the lower bound for $f(k;k)$ or $f(k;k+1)$ can be improved. As we will demonstrate in Section~\ref{dstrat}, there is a limit to how well we can bound $f(k;k+1)$ by our methods.
In particular, for large $k$ we have $G(k,k^2/2-k^{3/2},k+1) = \Omega(k!)$. In fact, a more careful calculation would give that $kG(k,k^2/2-h(k)k^{3/2},k+1) \ge k!$ with $h(k)$ being some slowly growing function which is roughly $|\Phi^{-1}(C/\sqrt{k})|$ for a certain absolute constant $C>0$ (here $\Phi$ is the cdf of the standard normal distribution). \subsection{Other Problems on \texorpdfstring{$k$}{k}-DFAs}\label{otherkdfa}We believe understanding the cost of permutational walks on $k$-DFAs might be of independent interest. We provide some useful constructions and ask a few future problems. \subsubsection{Upper bound on \texorpdfstring{$G(k,n,N)$}{G(k,n,N)} independent of \texorpdfstring{$N$}{N}}\label{bestdfa} We note that there's an ``optimally cheap'' $k$-DFA for reading permutations, by which we mean there is a $k$-DFA $A$ such that for any other $k$-DFA $B$, there exists a bijection $\phi:S_k \to S_k$ such that for $\tau \in S_k$ we have $\c[A]{\root(A)}{\tau} \le \c[B]{\root(B)}{\phi(\tau)}$. It follows that for any $k$-DFA $B$, \[|\{\tau \in S_k:\c[B]{\root(B)}{\tau} \le n\}|\le |\{\tau \in S_k:\c[A]{\root(A)}{\tau} \le n\}|.\]Thus the RHS will exactly be $\max_{N}\{G(k,n,N)\}$. We sketch one construction of $A$. For the set of states, $V$, we use all subsets of $[k]$ (with the empty set being the root). For $v \in V$, and $t \in [k]$, we set $\d{v}{t} = v\cup \{t\}$. For the cost, we impose for each $v\in V$, that $t \in v \iff \c{v}{t}> k-|v|$. Essentially, the DFA will remember which letters have been read thus far, and assigns the highest costs to these letters (since when reading a permutation, we never read a letter twice). To see optimality, it suffices to show that we'll always have $\sum_{j=1}^k Y_j = 0$ (here we use the terminology from Section~\ref{alt}). This follows immediately from how the cost is defined.
If we have walked to a vertex $v$, then the letters we have read while walking to $v$ are exactly the elements of $v$, and these will have greater cost at $v$ than any letter which is not an element of $v$ (and thus none of the summands $Y_j$ can be non-zero). \subsubsection{\texorpdfstring{$k$}{k}-DFA's with many low cost permutations}\label{dstrat} It would be interesting to better understand how fast $n_k$ must grow when \[G(k,n_k,k^2) = \Omega(k!).\]Repeating the analysis from Section~\ref{loworder}, we get that $ n_k\ge k^2/2 - k^{7/4+o(1)}$ must hold. We will describe a construction (provided by Zachary Chase in personal communication) of a $k$-DFA $D$ on $k+1$ states such that for ``many'' $\tau \in S_k$, $\c[D]{\root(D)}{\tau} \le k^2/2 - k^{3/2}$. This will show that it is possible to have $n_k \le k^2/2 - \Omega(k^{3/2})$. We first partition $[k]$ into two sets $A,B$ as evenly as possible, such that $|A| \le |B| \le |A|+1$. Our set of states will be $V:=\{-|A|,1-|A|,\dots,|B|\}$ with root $0$. For $t\in A$, we let $\d[D]{v}{t} = v-1$ if $v\neq -|A|$ and for $t\in B$ we let $\d[D]{v}{t} = v+1$ if $v\neq |B|$ (otherwise we let $\delta$ be constant, though this will not matter when reading permutations). With $v_0,\dots,v_L = \w[D]{0}{w_1,\dots,w_L}$, we observe that we'll have $v_j = |B\cap \{w_1,\dots,w_j\}|-|A\cap \{w_1,\dots,w_j\}|$, unless some letter repeats, that is, $w_{j'} = w_{j''}$ for some $j''<j'\le j$. Whenever $w$ is a permutation, the second case will not happen, so $v_0,\dots,v_k := \w[D]{0}{w}$ satisfies \[v_j = |B\cap \{w_1,\dots,w_j\}|-|A\cap \{w_1,\dots,w_j\}|\] whenever $w\in S_k$. For our cost function, we will assign the elements of $A$ lower weights when we are in a negative state and do the opposite otherwise. For simplicity, we consider the case where $k = 2m$, $A = [m],B = [2m]\setminus [m]$.
Then for $v \in V,t\in [k]$, we let \[\c[D]{v}{t}=\begin{cases}t &\textrm{if }v<0\\ t+m&\textrm{if $v\ge 0$ and }t\in A\\ t-m&\textrm{if $v\ge 0$ and }t\in B\\\end{cases}.\] We now analyze the cost of reading permutations in $D$. We may write $\c[D]{v}{t} =mq(v,t)+r(t)$, where $q(v,t) \in \{0,1\}, r(t)\in [m]$ (it is easily verified that $r(t)$ does not depend on $v$). Thus, for $\tau \in S_k$, if $\w[D]{0}{\tau} = v_0,\dots,v_k$, then \[\c[D]{0}{\tau} = \sum_{t \in [k]}r(t) + m\sum_{j\in [k]} q(v_{j-1},\tau(j)).\] Noting $\sum_{t \in [k]}r(t) =\frac{k^2}{4}+\frac{k}{2}$, it remains to control the second term. Now, we claim (without proof) that if $\tau \in S_k$ is chosen uniformly at random, there is a coupling with $X_1,\dots,X_k$ (where $X_i$ are i.i.d. Bernoulli variables with $\mathbb{P}(X_i=1) = 1/2$) so that $X_j = 0 \implies q(v_{j-1},\tau(j)) = 0$. By the Berry--Esseen theorem, one can see that \[\mathbb{P}(\sum_{j=1}^k X_j \le k/2-3\sqrt{k})\to \Phi(-6)>0\] (where $\Phi$ is the cdf of the standard normal distribution). As $X_j \ge q(v_{j-1},\tau(j))$ for each $j$, it follows that for large $k$, \[\mathbb{P}\left(\sum_{j\in [k]}q(v_{j-1},\tau(j)) \le k/2-3\sqrt{k}\right) \ge \Phi(-6)/2,\] in which case the total cost is at most $\frac{k^2}{4}+\frac{k}{2}+m(k/2-3\sqrt{k}) \le \frac{k^2}{2}-k^{3/2}$, whence \[G(k,k^2/2-k^{3/2},k+1) \ge \frac{\Phi(-6)}{2} k!.\] \subsection{Refuting a conjecture of Gupta}\label{implications} Lastly, we demonstrate how our result contradicts a conjecture by Gupta \cite{gupta} (see also the second item in the final section of \cite{engen}). This conjecture is concerned with ``bi-directional circular pattern containment''. Essentially, given a word $w\in [r]^n$, we say $\tau \in S_k$ is a \textit{circular pattern} of $w$ if there exists $i \in [n]$ such that $\tau$ is a pattern of \[w(i),w(i+1),\dots,w(n),w(1),w(2),\dots,w(i-1).\]We say $\tau\in S_k$ is a \textit{bi-directional circular pattern} (BCP) of $w\in [r]^n$ if $\tau$ is a circular pattern of $w$ and/or of $w$'s reversal, $w(n),w(n-1),\dots,w(2),w(1)$.
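For small parameters these containment notions can be tested by brute force. In the sketch below (our own illustration), `is_pattern` looks for an order-isomorphic subsequence, and `is_bcp` tries every rotation of $w$ and of its reversal:

```python
from itertools import combinations

def is_pattern(tau, w):
    """True if the permutation tau occurs in w as an order-isomorphic
    subsequence (equal letters of w never match a strict comparison)."""
    k = len(tau)
    return any(
        all((sub[a] < sub[b]) == (tau[a] < tau[b]) and
            (sub[a] > sub[b]) == (tau[a] > tau[b])
            for a in range(k) for b in range(a + 1, k))
        for sub in ([w[i] for i in idx]
                    for idx in combinations(range(len(w)), k)))

def is_bcp(tau, w):
    """True if tau is a bi-directional circular pattern of w."""
    w = list(w)
    return any(is_pattern(tau, u[i:] + u[:i])
               for u in (w, w[::-1]) for i in range(len(w)))

w = [1, 2, 3, 1, 2, 3]
assert is_pattern((1, 3, 2), w) and not is_pattern((3, 2, 1), w)
assert is_bcp((3, 2, 1), w)
```

For $w = 1,2,3,1,2,3$ the permutation $321$ is not a classical pattern of $w$ (its longest decreasing subsequence has length two), but it is a circular, and hence bi-directional circular, pattern.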
Gupta conjectured that for each $k$, there is $\sigma \in [k]^n$ with $n \le \frac{3}{8}k^2 +\frac{1}{2}$ such that each $\tau \in S_k$ is a BCP of $\sigma$. By definition of BCPs, this would mean that there exist $2n$ words $w_1,\dots,w_{2n}\in [k]^n$ such that for any $\tau \in S_k$, there exists $i\in [2n]$ such that $\tau$ is a pattern of $w_i$. This would imply that $k! \le 2nF(k,n) \le k^2F(k,n)$. Hence, by our bounds on $F(k,n)$ we get a contradiction for large $k$. In fact, essentially repeating the analysis from Section~\ref{loworder}, we can show that if $\sigma\in [k]^n$ contains each $\tau \in S_k$ as a BCP, then $n \ge \frac{k^2}{2}-k^{7/4+o(1)}$. In 2012, Lecouturier and Zmiaikou proved that there exists $\sigma \in [k]^{k^2/2 +O(k)}$ which contains each $\tau \in S_k$ as a circular pattern (and hence as a BCP), thus our bound is tight up to lower-order terms \cite{lecouturier}. \subsection{A 0-1 phenomenon} In \cite[Section~6]{chroman}, it was asked how large $n_k$ must be for there to exist $\sigma \in [k]^{n_k}$ which contains almost all patterns in $S_k$ (i.e., what is the growth of sequences $n_k$ for which $F(k,n_k) = (1-o(1))k!$). Again, the analysis of Section~\ref{loworder} shows that $n_k \ge k^2/2 - k^{7/4 + o(1)}$ is necessary for $F(k,n_k) = \Omega(k!)$ to hold. Meanwhile, if we consider the word $w_k^m$ obtained by concatenating $m$ copies of $1,2,\dots,k$, we have that $w_k^m$ contains all $\tau \in S_k$ with at least $k-m$ ascents (the number of ascents in a permutation $\tau \in S_k$ is the number of $j \in [k-1]$ such that $\tau(j)<\tau(j+1)$). By reversing a permutation $\tau \in S_k$ with $a$ ascents, one gets a permutation with $k-a-1$ ascents. Thus, with $m = \lceil k/2\rceil$ we have that $w_k^{m}$ contains at least half of the $\tau \in S_k$ as a pattern (thus $n_k = (k^2+k)/2$ satisfies $F(k,n_k) \ge k!/2$). Finally, using standard martingale concentration results (see e.g.
\cite[Proposition~2.3]{alon}) if $m = k/2 +C\sqrt{k}$ then $w_k^m$ contains $(1-2\exp(-\Omega(C^2)))k!$ patterns; thus $n_k = k^2/2 + \omega(k^{3/2})$ suffices for $F(k,n_k) = (1-o(1))k!$. \subsection{Open Problems} To recap Sections~\ref{loworder} and \ref{otherkdfa}, we find the following problems concerning lower-order terms interesting. \begin{prob}\label{p2} Is there $c_1<7/4$ such that \[k^2-O(k^{c_1}) \le f(k;k)?\]It is known that $c_1$ must be taken to be $\ge 1$. \end{prob} \begin{prob}\label{p3} Is there $c_2<7/4$ such that \[\frac{k^2+k}{2}-O(k^{c_2}) \le f(k;k+1)?\] It is possible that no error term is needed, and $(k^2+k)/2 = f(k;k+1)$ simply holds. \end{prob} \begin{prob}\label{p4} Is there $c_3<7/4$ such that \[ G\left(k,\frac{k^2}{2}-\Omega(k^{c_3}),k^2\right) = o(k!) ?\]Due to Section~\ref{dstrat}, it is clear that $c_3$ must be taken so that $c_3> 3/2$ (but potentially we can take $c_3$ to be any value $>3/2$). \end{prob} It would also be interesting to extend the conclusion of Corollary~\ref{asy} to alphabets with linearly many extra letters. Specifically, we pose the following problem. \begin{prob}\label{linear} Does there exist $\delta > 0$ such that $f(k;(1+\delta)k) \ge (1/2-o(1))k^2$? \end{prob}\noindent This would require a significant new idea. In particular, we think a proof would use some ``redundancy result'' to replace Remark~\ref{nfromk}. We further remark that the stronger statement, which claims $f(k;Ck)\ge (1/2-o(1))k^2$ for every $C>1$, could quite possibly be true. However, our methods fail to prove that $f(k;1.0001k)\ge (1/4-o(1))k^2$, so this currently seems out of reach. While we believe Problems~1--5 have affirmative answers, we are uncertain whether this stronger statement holds true. Our (lack of) understanding about more efficient superpatterns on small alphabets will be further discussed in \cite{hunter}.
\section{Acknowledgements} The author would like to thank Daniel Carter and Zachary Chase for helpful conversations and looking at previous drafts of this paper. The author would also like to thank Carla Groenland for many suggestions on the presentation of the paper. Lastly, the author thanks Vincent Vatter and Mihir Singhal for giving comments on the final draft of this preprint.
https://arxiv.org/abs/1908.01265
Relative Spectral Invariants of Elliptic Operators on Manifolds
We introduce and study {\it new} relative spectral invariants of {\it two} elliptic partial differential operators of Laplace and Dirac type on compact smooth manifolds without boundary that depend on both the eigenvalues and the eigensections of these operators and contain much more information about geometry. We prove the existence of the homogeneous short time asymptotics of the new invariants with the coefficients of the asymptotic expansion being integrals of some invariants that depend on the symbols of both operators. The first two coefficients of the asymptotic expansion are computed explicitly.
\section{Introduction} \setcounter{equation}0 Elliptic operators on manifolds, in particular, first and second-order partial differential operators, play a crucial role in global analysis, spectral geometry and mathematical physics \cite{gilkey95,berline92,berger03,avramidi00,avramidi10,avramidi15}. The study of the spectrum of elliptic operators is of paramount importance since it describes various important objects in quantum field theory and differential geometry such as correlation functions, functional determinants, integrals of infinite-dimensional Hamiltonian systems etc. The spectrum of elliptic operators does, of course, depend on the geometry of the manifold. Therefore, one can ask the question: ``To what extent does the spectrum of a {\it single} elliptic operator describe the geometry?'', or, as M. Kac put it, ``Can one hear the shape of a drum?'' It is well known by now that the answer to this question is negative, that is, there are non-isometric manifolds that have the same spectrum. In general one cannot compute the spectrum exactly. One usually studies the spectrum indirectly by studying some spectral invariants such as the heat trace or the zeta function \cite{gilkey95}. These spectral invariants depend only on the eigenvalues of the operators and do not depend on the eigensections. The classical heat trace of Laplace type operators has been studied for decades, going back to H. Weyl \cite{weyl11} and to S. Minakshisundaram and A. Pleijel \cite{minakshisundaram49}. There is a vast literature on the subject (see \cite{gilkey95,berline92,avramidi00,avramidi10,avramidi15} and references therein). In \cite{avramidi16} and \cite{avramidi17} we studied more general spectral invariants that appear naturally in quantum statistical physics and geometry. In the present paper we introduce and study {\it new relative spectral invariants} of {\it two} elliptic operators. We hope that these new invariants could shed new light on the old questions of spectral geometry.
We generalize the question as follows: ``Does the spectral data of {\it two} elliptic operators determine the geometry?'' These invariants depend both on the eigenvalues and the eigensections and contain much more information about geometry. Such relative spectral invariants appear naturally, in particular, in the study of particle creation in quantum field theory and quantum gravity \cite{birrel80,dewitt75,avramidi19a}. They determine the number of created particles from the vacuum when the dynamical operator depends on time. In Sec. 2 we motivate the study of the relative spectral invariants. We describe the so-called {\it Bogolyubov invariant} in quantum field theory and show how it can be expressed in terms of the relative spectral invariant. We consider a smooth $n$-dimensional compact manifold $M$ without boundary and a vector bundle ${\cal V}$ over the manifold $M$. Let $L_\pm$ be two self-adjoint elliptic second-order partial differential operators acting on smooth sections of the vector bundle ${\cal V}$ with positive definite scalar leading symbols, that is, operators of Laplace type. Let $D_\pm$ be two self-adjoint elliptic first-order partial differential operators of Dirac type acting on smooth sections of the vector bundle ${\cal V}$ such that the squares $L_\pm=D_\pm^2$ are Laplace type operators. The spectral information about the operators $L_\pm$ and $D_\pm$ is contained in the classical heat traces \bea \Theta_\pm(t) &=& \mathrm{ Tr} \exp(-tL_\pm), \label{237ssb} \\ H_\pm(t) &=& \mathrm{ Tr} D_\pm \exp(-tD_\pm^2).
\label{12via} \eea We show that the Bogolyubov invariants can be expressed in terms of the traces \bea \Psi(t,s) &=& \mathrm{ Tr}\left\{\exp(-tL_+)-\exp(-tL_-)\right\} \left\{\exp(-sL_+)-\exp(-sL_-)\right\}, \label{220ssb} \\ \Phi(t,s) &=& \mathrm{ Tr}\left\{D_+\exp(-tD^2_+)-D_-\exp(-tD^2_-)\right\} \left\{D_+\exp(-sD^2_+)-D_-\exp(-sD^2_-)\right\}, \nonumber\\ \label{248ssb} \eea that we call {\it relative spectral invariants}; these can be expressed further in terms of the classical heat traces and the {\it combined heat traces} \bea X(t,s) &=& \mathrm{ Tr}\exp(-tL_+)\exp(-s L_-), \label{238ccx} \\ Y(t,s) &=& \mathrm{ Tr} D_+\exp(-tD^2_+)D_-\exp(-sD^2_-), \label{18ssb} \eea by \bea \Psi(t,s) &=& \Theta_+(t+s)+\Theta_-(t+s) -X(t,s)-X(s,t), \label{16via} \\ \Phi(t,s) &=& -\partial_t\Theta_+(t+s)-\partial_t\Theta_-(t+s) -Y(t,s)-Y(s,t). \label{17via} \eea In Sec. 3 we describe the relevant differential operators and their spectral traces and introduce the relevant notation. The operators $L_\pm$ naturally define the metrics $g_{ij}^\pm$, the connection one-forms ${\cal A}^\pm_i$ and the endomorphisms $Q_\pm$ by \be L_\pm = -g_\pm^{-1/4}(\partial_i+{\cal A}^\pm_i)g_\pm^{1/2}g_\pm^{ij} (\partial_j+{\cal A}^\pm_j)g_\pm^{-1/4}+Q_\pm, \ee where $g^{ij}_\pm$ are the inverse metrics and $g_\pm=\det g^\pm_{ij}$. Similarly, the operators $D_\pm$ define the endomorphisms $S_\pm$ by \be D_\pm = g_\pm^{1/4}i\gamma^j_\pm(\partial_j+{\cal A}_j^\pm)g_\pm^{-1/4}+S_\pm, \ee where $\gamma_\pm^j$ are the Dirac matrices satisfying (\ref{39viax}); here the connection ${\cal A}^\pm_i$ is supposed to satisfy the compatibility condition (\ref{310viax}).
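The identity (\ref{16via}) is a purely algebraic consequence of $\mathrm{Tr}\, e^{-tL}e^{-sL} = \Theta(t+s)$ and can be sanity-checked in finite dimensions, with random symmetric matrices standing in for $L_\pm$ (a toy computation of ours, not part of the text):

```python
import numpy as np

def heat(L, t):
    """exp(-t L) for a symmetric matrix L, via its eigendecomposition."""
    w, V = np.linalg.eigh(L)
    return (V * np.exp(-t * w)) @ V.T

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); Lp = A + A.T   # stand-in for L_+
B = rng.standard_normal((6, 6)); Lm = B + B.T   # stand-in for L_-
t, s = 0.3, 0.7

Theta = lambda L, u: np.trace(heat(L, u))              # classical heat trace
X = lambda u, v: np.trace(heat(Lp, u) @ heat(Lm, v))   # combined heat trace

# Psi(t,s) = Theta_+(t+s) + Theta_-(t+s) - X(t,s) - X(s,t)
Psi = np.trace((heat(Lp, t) - heat(Lm, t)) @ (heat(Lp, s) - heat(Lm, s)))
assert np.isclose(Psi, Theta(Lp, t + s) + Theta(Lm, t + s) - X(t, s) - X(s, t))
```

The analogous check of (\ref{17via}) works the same way, with $D_\pm e^{-tD_\pm^2}$ in place of $e^{-tL_\pm}$ and $-\partial_t\Theta_\pm(t+s) = \mathrm{Tr}\,D_\pm^2 e^{-(t+s)D_\pm^2}$.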
Also, we suppose that the endomorphisms $S_\pm$ anticommute with the Dirac matrices $\gamma_\pm^i$, (\ref{312viax}), so that the square of the Dirac type operator $D_\pm^2$ is a Laplace type operator with the potential \be Q_\pm=-\frac{1}{2}\gamma^{ij}_\pm\mathcal{R}^\pm_{ij} +S_\pm^2+i\gamma_\pm^j\nabla^\pm_j S_\pm, \label{qsxxab} \ee where $\mathcal{R}^\pm_{ij}$ is the curvature of the connection ${\cal A}^\pm_i$ and $\gamma_\pm^{ij}=\gamma_\pm^{[i}\gamma_\pm^{j]}$. We follow the standard convention \cite{zhelnorovich19} and denote the antisymmetrized products of Dirac matrices by $\gamma^{i_1\dots i_k}=\gamma^{[i_1}\cdots\gamma^{i_k]}$. In Sec. 4 we present a detailed review of the Ruse-Synge function (which is equal to one half of the square of the geodesic distance between two points in a Riemannian manifold) with particular emphasis on its dependence on the metric. We compute the diagonal values of the covariant derivatives (defined with respect to a metric $g$) of a Ruse-Synge function $\sigma^h(x,x')$ defined with respect to another metric $h$. In Sec. 5 we study the asymptotics of the integrals of Laplace type and prove some important lemmas used in the proof of the main theorems. In Sec. 6 we study the asymptotics of the combined heat traces and prove the general theorems. The spectral information about the operators $L_\pm$ is contained in the classical heat traces (\ref{237ssb}).
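For orientation, on the circle of circumference $2\pi$ with $L=-d^2/dx^2$ the heat trace (\ref{237ssb}) is the theta function $\sum_{k\in\mathbb{Z}}e^{-tk^2}$, whose small-$t$ behaviour is governed by the leading Weyl term $(4\pi t)^{-1/2}A_0$ with $A_0=\mathrm{vol}(S^1)=2\pi$. A minimal numerical illustration (not part of the formal development):

```python
import numpy as np

# Heat trace of L = -d^2/dx^2 on a circle of circumference 2*pi:
# Theta(t) = sum_{k in Z} exp(-t k^2).  As t -> 0 it approaches the
# leading Weyl term (4 pi t)^{-1/2} A_0 with A_0 = vol(S^1) = 2*pi.
def theta_circle(t, kmax=5000):
    k = np.arange(-kmax, kmax + 1)
    return np.exp(-t * k**2).sum()

t = 1e-3
weyl = (4 * np.pi * t) ** (-0.5) * 2 * np.pi
# the corrections are exponentially small (of order exp(-pi^2/t))
assert abs(theta_circle(t) / weyl - 1.0) < 1e-8
```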
In particular, the asymptotic expansion as $t\to 0$ \bea \Theta_\pm\left(t\right) &\sim& (4\pi)^{-n/2} \sum_{k=0}^\infty t^{k-n/2}A^\pm_k, \label{513xxc} \eea defines the sequence of spectral invariants \bea A^\pm_k &=& \frac{(-1)^k}{k!}\int\limits_M dx\; g^{1/2}_\pm \tr[a^\pm_k], \eea where $\tr[a^\pm_k]$ are some scalar invariants which are polynomial in the jets of the symbols of the operators $L_\pm$, that is, in the covariant derivatives of the curvatures $R^\pm_{ijkl}$ of the metrics $g_\pm$, the curvatures $\mathcal{R}^\pm_{ij}$ of the connections ${\cal A}^\pm_i$ and the potentials $Q_\pm$ (notice the different normalization factor in (\ref{513xxc}) compared to our earlier papers \cite{avramidi91,avramidi00,avramidi10,avramidi15}). It is well known that the first two classical heat kernel coefficients are \cite{gilkey95,avramidi00,avramidi15} \bea A^\pm_0 &=& \int_M dx\;g_\pm^{1/2} \tr I, \\{} A^\pm_1 &=& \int_M dx\;g^{1/2}_\pm \tr\left(\frac{1}{6}R_\pm I-Q_\pm\right), \eea where $\tr$ is the fiber trace, $I$ is the identity endomorphism and $R_\pm$ is the scalar curvature of the metric $g_\pm$. Therefore, for the Dirac type operators the coefficient $A_1$ takes the form \bea A^\pm_1 &=& \int_M dx\;g^{1/2}_\pm \tr\left(\frac{1}{6}R_\pm I +\frac{1}{2}\gamma^{ij}_\pm\mathcal{R}^\pm_{ij}-S_\pm^2 \right). \label{126viax} \eea In this paper we study the asymptotics of the combined heat traces (\ref{238ccx}) and (\ref{18ssb}). We define the time-dependent metric $g_{ij}=g_{ij}(t,s)$ as the inverse of the matrix \be g^{ij}=tg_+^{ij}+sg_-^{ij}, \label{113via} \ee with $t,s> 0$; throughout the paper we use the notation $g=\det g_{ij}$ for the determinant of the metric. Also, we define the time-dependent connection ${\cal A}_i={\cal A}_i(t,s)$ by \be {\cal A}_i=g_{ij}\left(tg^{jk}_+{\cal A}^+_k+sg^{jk}_-{\cal A}^-_k\right), \label{114via} \ee and the vectors \be {\cal C}^\pm_i = {\cal A}^\pm_i-{\cal A}_i.
\label{115via} \ee We omit the variables $t$ and $s$ where it does not cause any confusion. Note that the inverse metric $g^{ij}$ is a homogeneous function of $t$ and $s$ of degree $1$, and, therefore, the metric $g_{ij}$ is a homogeneous function of $t$ and $s$ of degree $(-1)$, and the determinant $g=\det g_{ij}$ is a homogeneous function of $t$ and $s$ of degree $(-n)$; furthermore, the Christoffel symbols, $\Gamma_{g}{}^i{}_{jk}$, the Riemann tensor $R_g{}^i{}_{jkl}$ and the Ricci tensor $R^g_{ij}$ of the metric $g$ are homogeneous functions of $t$ and $s$ of degree $0$. Similarly, the connection ${\cal A}_i$ and its curvature $\mathcal{R}^{\cal A}_{ij}$ are homogeneous functions of $t$ and $s$ of degree $0$. \begin{theorem} \label{theorem1} There are asymptotic expansions as $\varepsilon\to 0$ \bea X(\varepsilon{}t,\varepsilon{}s) &\sim& (4\pi\varepsilon)^{-n/2} \sum_{k=0}^\infty \varepsilon^{k} B_k(t,s), \label{1zaab} \\ Y(\varepsilon{}t,\varepsilon{}s) &\sim& (4\pi\varepsilon)^{-n/2} \sum_{k=0}^\infty \varepsilon^{k-1} C_k(t,s), \label{15zaac} \eea where \bea B_k(t,s) &=& \int\limits_M dx\; g^{1/2}(t,s)b_k(t,s), \\ C_k(t,s) &=& \int\limits_M dx\; g^{1/2}(t,s)c_k(t,s). \eea \begin{enumerate} \item The coefficients $b_k(t,s)$ and $c_k(t,s)$ are scalar invariants built polynomially from the covariant derivatives (defined with respect to the metric $g_{ij}$ and the connection ${\cal A}_i$) of the metrics $g^\pm_{ij}$, the vectors ${\cal C}^\pm_i$ and the potentials $Q_\pm$ and $S_\pm$. \item The coefficients $b_k(t,s)$ and $c_k(t,s)$ are symmetric under the exchange $(t,L_+)\leftrightarrow (s,L_-)$. \item The coefficients $b_k(t,s)$ are homogeneous functions of $t$ and $s$ of degree $k$ and the coefficients $c_k(t,s)$ are homogeneous functions of $t$ and $s$ of degree $(k-1)$. \end{enumerate} \end{theorem} This gives the asymptotic expansion of the relative spectral invariants (\ref{220ssb}) and (\ref{248ssb}). 
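The homogeneity properties listed above, as well as the factorization (\ref{128viab}) of the dual metric introduced below, are elementary matrix identities and can be confirmed numerically. In the following sketch (an illustration only) random symmetric positive definite matrices stand in for $g^\pm_{ij}$ at a point.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_spd(n):
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

n = 4
gp = rand_spd(n)                       # g^+_{ij}
gm = rand_spd(n)                       # g^-_{ij}
gp_inv, gm_inv = np.linalg.inv(gp), np.linalg.inv(gm)

def g_lower(t, s):                     # g_{ij}(t,s): inverse of t g_+^{ij} + s g_-^{ij}
    return np.linalg.inv(t * gp_inv + s * gm_inv)

t, s, eps = 0.4, 0.9, 3.0
# degree -1 homogeneity of g_{ij}; degree -n homogeneity of det g_{ij}
assert np.allclose(g_lower(eps * t, eps * s), g_lower(t, s) / eps)
assert np.isclose(np.linalg.det(g_lower(eps * t, eps * s)),
                  eps ** (-n) * np.linalg.det(g_lower(t, s)))

# factorization of the dual metric G_{ij} = s g^+_{ij} + t g^-_{ij}
G = s * gp + t * gm
assert np.allclose(G, gp @ (t * gp_inv + s * gm_inv) @ gm)
assert np.allclose(np.linalg.inv(G), gm_inv @ g_lower(t, s) @ gp_inv)
```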
\begin{corollary} \label{corollary1} There are asymptotic expansions as $\varepsilon\to 0$ \bea \Psi(\varepsilon{}t,\varepsilon{}s) &\sim& (4\pi\varepsilon)^{-n/2} \sum_{k=0}^\infty \varepsilon^{k} \Psi_k(t,s), \label{120via} \\ \Phi(\varepsilon{}t,\varepsilon{}s) &\sim& (4\pi\varepsilon)^{-n/2} \sum_{k=0}^\infty \varepsilon^{k-1} \Phi_k(t,s), \label{121via} \eea where \bea \Psi_k(t,s) &=& (t+s)^{k-n/2}(A_k^++A_k^-) -B_k(t,s)-B_k(s,t), \label{543xxca} \\ \Phi_k(t,s) &=& -\left(k-\frac{n}{2}\right)(t+s)^{k-1-n/2}\left(A_k^++A_k^-\right) -C_k(t,s)-C_k(s,t). \label{542saax} \eea \end{corollary} In Sec. 7 we consider some particular cases when the relative spectral invariants can be computed exactly in terms of the classical heat trace and compute explicitly the first two coefficients of the asymptotic expansions. To describe the main results we introduce a symmetric tensor $G_{ij}=G_{ij}(t,s)$ (that we call the {\it dual metric}) by \be G_{ij} = s g^+_{ij} + t g^-_{ij}, \label{126via} \ee and its inverse $G^{ij}$, which is related to the metric $g^{ij}$, (\ref{113via}), by \bea G_{ij} & =& g^+_{ik}g^{kl}g^-_{lj} = g^-_{ik}g^{kl}g^+_{lj}, \label{128viab} \\ G^{ij} &=& g_-^{ip}g_{pq}g_+^{qj} = g_+^{ip}g_{pq}g_-^{qj}. \label{127via} \eea Notice that \bea g_{ij}(1,0) &=& G_{ij}(0,1)=g^+_{ij}, \\ g_{ij}(0,1) &=& G_{ij}(1,0)=g^-_{ij}. \eea Also, we introduce the non-compatibility tensors \bea K^\pm_{ijk} &=& \nabla_i^g g^\pm_{jk}, \label{128via} \eea and the tensors \bea W_\pm{}^i{}_{jk} &=& \frac{1}{2}g_\pm^{im}\left(K^\pm_{jkm}+K^\pm_{kjm} -K^\pm_{mjk}\right), \label{129via} \\ W^\pm_j &=& W_\pm{}^i{}_{ij} =\nabla^g_j W^\pm, \label{130via} \eea with \be W^\pm = \frac{1}{2}\log\left(\frac{g_\pm}{g}\right).
\label{131via} \ee Finally, we define \bea W_i &=& \frac{1}{2}\left(W^+_i+W^-_i\right) =\frac{1}{2}\nabla^g_i\left(W^++W^-\right), \label{137zax} \\ W_{ij}&=& \frac{1}{2}\left(\nabla^g_jW^+_i +\nabla^g_jW^-_i\right) =\frac{1}{2}\nabla^g_i\nabla^g_j(W^++W^-), \label{130viaxz} \\ \Sigma_{ijk} &=& \frac{3}{2}sK^+_{(ijk)}+\frac{3}{2}tK^-_{(ijk)}, \label{139zax} \\ \Sigma_{ijkl} &=& s S^+_{ijkl} + t S^-_{ijkl}, \label{140zax} \eea where \be S^\pm_{ijkl} = 4g^\pm_{m(i} \nabla^g{}_{j}W_\pm^m{}_{kl)} +4g^\pm_{m(i}W_\pm^n{}_{jk}W_\pm^{m}{}_{l)n} +3g^{\pm}_{nm}W_\pm^n{}_{(ij}W_\pm^{m}{}_{kl)}. \label{141zax} \ee Here and everywhere below parentheses denote symmetrization over all included indices. The indices excluded from the symmetrization are separated by vertical lines. \begin{theorem} \label{theorem2} The first two coefficients of the asymptotic expansion of the combined heat trace $X(t,s)$ are \bea b_0(t,s) &=& \tr I, \label{132via} \\ b_1(t,s) &=& \tr\Biggl\{ t\left(\frac{1}{6}R_+I -Q_+\right) +s\left(\frac{1}{6}R_-I -Q_-\right) +ts\Biggl[\frac{1}{6}G^{ij}\left( R^+_{ij}+R^-_{ij} -2R^g_{ij}\right)I \nonumber\\ && +\Biggl(\frac{1}{6} G^{ij}\left(W_{ij} +W_{i}W_{j}\right) -G^{ij}G^{kl}\Sigma_{ikl}W_{j} -\frac{1}{4}G^{ij}G^{kl}\Sigma_{ijkl} \nonumber\\ && +\frac{1}{12}\left(2G^{il}G^{jm} +3G^{ij}G^{lm}\right)G^{kn}\Sigma_{ijk}\Sigma_{lmn} \Biggr)I \nonumber\\ && +G^{ij}({\cal C}^+_{i}-{\cal C}^-_{i})({\cal C}^+_{j}-{\cal C}^-_{j}) \Biggr] \Biggr\}. \label{134via} \eea \end{theorem} \begin{corollary} \label{corollary2} The first two coefficients of the asymptotic expansion of the relative spectral invariant $\Psi(t,s)$ are \bea \Psi_0(t,s) &=& \int\limits_M dx\; \left\{(t+s)^{-n/2}\left(g_+^{1/2}+g_-^{1/2}\right) -g^{1/2}(t,s)-g^{1/2}(s,t)\right\}\tr I, \\ \Psi_1(t,s) &=& \int_M dx\;\Biggl\{ (t+s)^{1-n/2}\Biggl[g_+^{1/2}\tr\left(\frac{1}{6}R_+ I-Q_+\right) +g_-^{1/2}\tr\left(\frac{1}{6}R_-I-Q_-\right)\Biggr] \nonumber\\ && -g^{1/2}(t,s)b_1(t,s)-g^{1/2}(s,t)b_1(s,t) \Biggr\}.
\label{543xxcab} \eea \end{corollary} Further, we define the auxiliary tensors \bea N^{jkl} &=& 2G^{ij}G^{kl}W_{i} -\frac{1}{3}\left( 2G^{ij}G^{qk} +3G^{iq}G^{jk} \right)G^{pl}\Sigma_{ipq}, \label{144viax} \\ M^{kl} &=& \left(G^{kl}G^{ij} +2G^{ik}G^{jl}\right) (W_{ij}+W_iW_j) \nonumber\\ && -\left(2G^{ij}G^{mk}G^{pl} +2G^{im}G^{jk}G^{pl} +G^{kl}G^{im}G^{pj} \right)\Sigma_{pim}W_{j} \nonumber\\ && -\frac{1}{4}\left(G^{pq}G^{kl} +4G^{kp}G^{lq}\right)G^{ij}\Sigma_{ijpq} \nonumber\\ && +\frac{1}{72}\Biggl(2G^{ij}G^{pr}G^{qs}G^{kl} +3G^{ij}G^{pq}G^{rs}G^{kl} +6G^{ik}G^{jl}G^{pq}G^{rs} \nonumber\\ && +12G^{ij}G^{pq}G^{kr}G^{ls} +12G^{ij}G^{pr}G^{kq}G^{sl} \Biggr)\Sigma_{ipq}\Sigma_{jrs}, \label{147zax} \eea and \bea V_{pqijkl}&=& \mathrm{Sym}(i,j,k,l) \Biggl\{ \Biggl( 4g^+_{mp} \nabla^g{}_{k}W_+^m{}_{ij} +4g^+_{mp}W_+^n{}_{jk}W_+^{m}{}_{in} +12g^+_{mi}W_+{}^m{}_{nk}W_+{}^n{}_{jp} \nonumber\\ && -6g^+_{mi}W_+{}^n{}_{kj}W_+{}^m{}_{np} \Biggr) g^-_{lq} +g^+_{lp}\Biggl( 4g^-_{mq} \nabla^g{}_{k}W_-^m{}_{ij} +4g^-_{mq}W_-^n{}_{jk}W_-^{m}{}_{in} \nonumber\\ && +12g^-_{mi}W_-{}^m{}_{nk}W_-{}^n{}_{jq} -6g^-_{mi}W_-{}^n{}_{kj}W_-{}^m{}_{nq} \Biggr) +6g^+_{mp}W_+^m{}_{ij}g^-_{nq}W_-^n{}_{kl} \Biggr\}.
\label{145viaxz} \eea \begin{theorem} \label{theorem3} The first two coefficients of the asymptotic expansion of the combined heat trace $Y(t,s)$ are \bea c_0(t,s) &=& \frac{1}{2}g_{ij}(t,s)\tr\, \left(\gamma_+^{i}\gamma_-^{j}\right), \label{138via} \\ c_1(t,s) &=& \tr\Biggl\{ \frac{1}{6}t\left(\frac{1}{2}g_{pq} R_+ -g_{qi}g_+^{ij}R^+_{jp} \right) \gamma_+^p\gamma_-^q +\frac{1}{6}s\left(\frac{1}{2}g_{pq}R_- -g_{pi}g_-^{ij}R^-_{jq} \right)\gamma_+^p\gamma_-^q \nonumber\\ && +\frac{1}{4}tg_{pq}\gamma_-^{q}\gamma_+^{pij}\mathcal{R}^+_{ij} +\frac{1}{4}sg_{pq}\gamma_+^{p}\gamma_-^{qij} \mathcal{R}^-_{ij} +S_+S_- -\frac{1}{2}tg_{pq}\gamma_-^{q}\gamma_+^{p}S_+^2 -\frac{1}{2}sg_{pq}\gamma_+^{p}\gamma_-^{q} S_-^2 \nonumber\\ && -\frac{1}{2}tg_{q p}i\gamma_-^{q}\gamma_+^{pj}\nabla_{j }^+ S_+ -\frac{1}{2}sg_{pq}i\gamma_+^{p}\gamma_-^{qj}\nabla^-_jS_- \nonumber\\ && +ts\Biggl[ \frac{1}{12}\left(G^{kl}G^{ij} +2G^{ik}G^{jl}\right) \left(R^+_{ij}+R^-_{ij}-2R^g_{ij}\right) g^+_{p(k}g^-_{l)q}\gamma_+^{p}\gamma_-^q \nonumber\\ && +\frac{1}{8}G^{(ij}G^{kl)}V_{pqijkl}\gamma_+^{p}\gamma_-^q +\frac{3}{4}N^{jkl} \left(g^+_{mp}W_+{}^m{}_{(jk}g^-_{l)q} +g^-_{mq}W_-{}^m{}_{(jk} g^+_{l)p} \right) \gamma_+^{p}\gamma_-^q \nonumber\\ && +\frac{1}{2} M^{kl}g^+_{p(k}g^-_{l)q}\gamma_+^{p}\gamma_-^q -\frac{3}{4} G^{(ij}G^{kl)}g^+_{jp}g^-_{qi} [\gamma_+^p,\gamma_-^q] \nabla^{g,{\cal A}}_{k}({\cal C}^+_{l}-{\cal C}^-_{l}) \nonumber\\ && -\frac{3}{4} \Bigl[ G^{(ij}G^{kl)}\left(g^+_{mp}W_+^m{}_{ij}g^-_{kq} +g^+_{kp}g^-_{mq}W_-^m{}_{ij} \right) +N^{jkl}g^+_{pk}g^-_{jq} \Bigr] [\gamma_+^p,\gamma_-^{q}]({\cal C}^+_{l}-{\cal C}^-_{l}) \nonumber\\ && +\frac{3}{4} g^+_{pi}g^-_{jq} G^{(ij}G^{kl)} \Bigl[\left({\cal C}^+_{k}{\cal C}^+_{l}+{\cal C}^-_{k}{\cal C}^-_{l}\right) \left( \gamma_+^p\gamma_-^q +\gamma_-^q\gamma_+^p \right) \nonumber\\ && -2{\cal C}^+_{k}{\cal C}^-_{l}\gamma_+^p\gamma_-^q -2{\cal C}^-_{l}{\cal C}^+_{k}\gamma_-^q\gamma_+^p \Bigr] \Biggr] \Biggr\}. 
\label{150zax} \eea \end{theorem} \begin{corollary} \label{corollary3} The first two coefficients of the asymptotic expansion of the relative spectral invariant $\Phi(t,s)$ are \bea \Phi_0(t,s) &=& \int\limits_M dx\;\Biggl\{ \frac{n}{2}(t+s)^{-1-n/2}\left(g_+^{1/2}+g^{1/2}_-\right)\tr I \nonumber\\ && -\frac{1}{2} \left[g^{1/2}(t,s)g_{ij}(t,s)+g^{1/2}(s,t)g_{ij}(s,t)\right] \tr\,\left(\gamma_+^{i}\gamma_-^{j}\right) \Biggr\}, \label{141via}\\ \Phi_1(t,s) &=& \int_M dx\;\Biggl\{ -g^{1/2}(t,s)c_1(t,s)-g^{1/2}(s,t)c_1(s,t) \label{542saaa}\\ && +\left(\frac{n}{2}-1\right)(t+s)^{-n/2} \Biggl[g_+^{1/2}\tr\left(\frac{1}{6}R_+I-Q_+\right) +g_-^{1/2} \tr\Biggl(\frac{1}{6}R_-I-Q_-\Biggr) \Biggr]\Biggr\}, \nonumber \eea where $Q_\pm$ are given by (\ref{qsxxab}). \end{corollary} \section{Bogolyubov Invariant} We motivate the definition of the relative spectral invariants by quantum field theory. We will be very brief here; the detailed exposition will appear elsewhere \cite{avramidi19a}. We now describe the standard method for the calculation of particle creation via the Bogolyubov transformation \cite{birrel80,dewitt75}. Let $({\cal M}, h)$ be a pseudo-Riemannian $(n+1)$-dimensional manifold. We assume that $({\cal M}, h)$ is globally hyperbolic so that there is a foliation of ${\cal M}$ with space slices $M_t$ at a time $t$; moreover, we assume that there is a global time coordinate $t$ varying from $-\infty$ to $+\infty$ and that at all times $M_t$ is a compact $n$-dimensional Riemannian manifold without boundary. We will also assume that there are well defined limits $M_{\pm}$ as $ t\to \pm\infty$. For simplicity, we will just assume that the manifold ${\cal M}$ has two cylindrical ends, $(-\infty,-\beta)\times M$ and $(\beta,\infty)\times M$ for some positive parameter $\beta$. So, the foliation slices $M_t$ depend on $t$ only on a compact interval $[-\beta,\beta]$.
Let ${\cal W}$ be a Hermitian vector bundle over ${\cal M}$ and ${\cal V}_t$ be the corresponding time slices (vector bundles over $M_t$). In quantum field theory there are two types of particles, bosons and fermions. The bosonic fields are described by second-order Laplace type partial differential operators whereas the fermionic fields are described by first-order Dirac type partial differential operators. Let $L_t$ be a one-parameter family of {\it positive self-adjoint elliptic second-order} partial differential operators of Laplace type acting on smooth sections of the vector bundle ${\cal V}_t$. We assume that there are well defined limits $L_{\pm}$ as $t \to \pm\infty$. Let $D_t$ be a one-parameter family of {\it self-adjoint elliptic first-order} partial differential operators of Dirac type acting on sections of the vector bundles ${\cal V}_{t}$ such that the square $ L_t=D_t^2 $ is a self-adjoint second-order positive elliptic partial differential operator of Laplace type. We assume that there are well defined limits $D_{\pm}$ as $t\to \pm\infty$. Then one defines the so-called in-vacuum and out-vacuum and the corresponding in-particles and out-particles. The out-vacuum then contains some in-particles (and vice versa). The total number of in-particles in the out-vacuum is determined by the so-called {\it Bogolyubov invariant}. Let $E_{b,f,0}$ be the functions defined by \bea E_f(x) &=& \frac{1}{e^x+1}, \\ E_b(x) &=& \frac{1}{e^x-1}, \\ E_0(x) &=& \frac{1}{2\sinh x}, \eea and let $\omega_\pm$ be the pseudo-differential operators defined by \be \omega_\pm=\sqrt{L_\pm}.
\ee Then in some approximation (for details, see \cite{avramidi19a}) the Bogolyubov invariants for bosons and fermions are determined by the following traces \bea B_b(\beta)&=&\mathrm{ Tr} \left\{E_f(\beta\omega_+)-E_f(\beta\omega_-)\right\} \Bigl\{E_b(\beta\omega_+)-E_b(\beta\omega_-)\Bigr\}, \label{311xxa} \\ B_f(\beta) &=& 2\beta^2\mathrm{ Tr}\Bigl\{D_+E_0(\beta\omega_+)-D_-E_0(\beta\omega_-)\Bigr\}^2. \label{318xxa} \eea The Bogolyubov invariants can be expressed in terms of the {\it relative spectral invariants} $\Psi(t,s)$ and $\Phi(t,s)$ defined in (\ref{220ssb}) and (\ref{248ssb}). Let $h_{b,f,0}$ be the functions defined by \bea h_f(t) &=& \frac{1}{2\pi }\fint_\RR dp\; p \tan\left(\frac{p}{2}\right)\exp(-tp^2) \nonumber\\ &=& (4\pi)^{-1/2}t^{-3/2} \sum_{k=1}^\infty (-1)^{k+1} k\exp\left(-\frac{k^2}{4t}\right), \\ h_b(t) &=& \frac{1}{2\pi }\fint_\RR dp\;p \cot\left(\frac{p}{2}\right)\exp(-tp^2) \nonumber\\ &=& (4\pi)^{-1/2}t^{-3/2} \sum_{k=1}^\infty k\exp\left(-\frac{k^2}{4t}\right), \\ h_0(t) &=& \frac{1}{2\pi }\fint_\RR dp\; \frac{p}{\sin p}\exp(-tp^2) \nonumber\\ &=& (4\pi)^{-1/2}t^{-3/2} \sum_{k=0}^\infty \left( 2k+1\right) \exp\left(-\frac{\left(2k+1\right)^2}{4t}\right), \eea where the integrals are taken in the principal value sense. Then the Bogolyubov invariants take the form \bea B_b(\beta) &=& \int\limits_0^\infty dt\int\limits_0^\infty ds\; h_f\left(s\right) h_b\left(t\right) \Psi\left(\beta^2s,\beta^2t\right), \label{413xxa} \\ B_f(\beta) &=& \int\limits_0^\infty dt\int\limits_0^\infty ds\;\, h_0\left(s\right) h_0\left(t\right) 2\beta^2\Phi\left(\beta^2t,\beta^2s\right). \label{414xxa} \eea It is the relative spectral invariants $\Psi(t,s)$ and $\Phi(t,s)$ that we study in the present paper.
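The reason representations of the type (\ref{413xxa}) hold is that the kernels $h_{b,f}$ are inverse Laplace transforms of the statistics factors: term by term, $\int_0^\infty dt\,h_b(t)e^{-t\lambda}=\sum_{k\ge 1}e^{-k\sqrt{\lambda}}=E_b(\sqrt{\lambda})$, and similarly $\int_0^\infty dt\,h_f(t)e^{-t\lambda}=E_f(\sqrt{\lambda})$, so that the $t,s$ integrations convert $\Psi(\beta^2 s,\beta^2 t)$ into the differences of statistics factors appearing in (\ref{311xxa}). A numerical check of these transforms (an illustration only, using the series representations of $h_{b,f}$):

```python
import numpy as np
from scipy.integrate import quad

def h_b(t, kmax=80):      # (4 pi)^{-1/2} t^{-3/2} sum_k k exp(-k^2/(4t))
    k = np.arange(1, kmax + 1)
    return (4 * np.pi) ** (-0.5) * t ** (-1.5) * np.sum(k * np.exp(-k**2 / (4 * t)))

def h_f(t, kmax=80):      # same series with alternating signs
    k = np.arange(1, kmax + 1)
    return (4 * np.pi) ** (-0.5) * t ** (-1.5) * np.sum((-1.0) ** (k + 1) * k * np.exp(-k**2 / (4 * t)))

lam = 2.0
omega = np.sqrt(lam)
Eb = 1.0 / (np.exp(omega) - 1.0)      # E_b(omega)
Ef = 1.0 / (np.exp(omega) + 1.0)      # E_f(omega)

ib, _ = quad(lambda t: h_b(t) * np.exp(-t * lam), 0.0, np.inf)
i_f, _ = quad(lambda t: h_f(t) * np.exp(-t * lam), 0.0, np.inf)
assert abs(ib - Eb) < 1e-6
assert abs(i_f - Ef) < 1e-6
```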
Obviously, the combined heat traces $X(t,s)$ and $Y(t,s)$ contain information about the spectra of both operators $L_\pm$ (and $D_\pm$) since, in particular, \bea X(0,s) &=& \Theta_-(s), \qquad X(t,0)=\Theta_+(t), \\ Y(0,s) &=& H_-(s), \qquad Y(t,0)=H_+(t). \eea Also, although for any $t,s>0$ \bea \Psi(0,s)=\Psi(t,0)= \Phi(0,s)=\Phi(t,0)=0, \eea the asymptotics as $t,s\to 0$ are non-trivial. It is these asymptotics that we study in the present paper. We can also define the corresponding {\it relative zeta functions} \bea Z_\Psi(p,q) &=& \frac{1}{\Gamma(p)\Gamma(q)}\int\limits_0^\infty dt \int\limits_0^\infty ds\;t^{p-1}s^{q-1} \Psi(t,s), \\ Z_\Phi(p,q) &=& \frac{1}{\Gamma(p)\Gamma(q)}\int\limits_0^\infty dt \int\limits_0^\infty ds\;t^{p-1}s^{q-1} \Phi(t,s), \eea and, similarly, $Z_X(p,q)$ and $Z_Y(p,q)$. Then \bea Z_X(p,q) &=& \mathrm{ Tr} L_+^{-p}L_-^{-q}, \\ Z_Y(p,q) &=& \mathrm{ Tr} D_+^{-2p+1}D_-^{-2q+1}, \eea and \bea Z_\Psi(p,q) &=& \mathrm{ Tr} \left(L_+^{-p}-L_-^{-p}\right) \left(L_+^{-q}-L_-^{-q}\right), \\ Z_\Phi(p,q) &=& \mathrm{ Tr} \left(D_+^{-2p+1}-D_-^{-2p+1}\right) \left(D_+^{-2q+1}-D_-^{-2q+1}\right). \eea To avoid confusion, the complex power of the operator $D_\pm$ (which is not positive) is defined as follows: $D_\pm^{-2p+1}=D_\pm(D_\pm^2)^{-p}$. For the Dirac case one can also introduce the more general traces \bea W_\pm(t,\alpha) &=& \mathrm{ Tr} \exp(-t D^2_\pm+i\alpha D_\pm), \label{424zzc} \\ V(t,s;\alpha,\beta) &=& \mathrm{ Tr}\exp(-tD^2_++i\alpha D_+)\exp(-sD^2_-+i\beta D_-). \label{51xxa} \eea Then, obviously, \bea \Theta_\pm(t) &=& W_\pm(t,0), \\ X(t,s) &=& V(t,s;0,0), \\ Y(t,s) &=& -\frac{\partial}{\partial \alpha} \frac{\partial}{\partial \beta} V(t,s;\alpha,\beta) \Big|_{\alpha=\beta=0}. \eea Therefore, all traces can be obtained from the traces (\ref{424zzc}) and (\ref{51xxa}).
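Since the double Mellin integral defining $Z_X$ factorizes, the relation $Z_X(p,q)=\mathrm{Tr}\,L_+^{-p}L_-^{-q}$ reduces to the elementary representation $L^{-p}=\Gamma(p)^{-1}\int_0^\infty dt\,t^{p-1}e^{-tL}$ applied to each factor. A finite-dimensional numerical check (an illustration only, with random symmetric positive definite matrices standing in for $L_\pm$):

```python
import numpy as np
from scipy.linalg import expm, fractional_matrix_power
from scipy.integrate import quad
from scipy.special import gamma

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # stand-in for L_+
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # stand-in for L_-
p, q = 1.5, 1.25

def mellin_inverse_power(L, r):
    # L^{-r} = (1/Gamma(r)) int_0^infty dt t^{r-1} exp(-t L), entry by entry
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j], _ = quad(lambda t: t ** (r - 1) * expm(-t * L)[i, j],
                                0.0, np.inf, limit=200)
    return out / gamma(r)

Z = np.trace(mellin_inverse_power(A, p) @ mellin_inverse_power(B, q))
Z_direct = np.trace(fractional_matrix_power(A, -p) @ fractional_matrix_power(B, -q))
assert abs(Z - Z_direct) < 1e-6
```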
Notice that the trace $W_\pm(t,\alpha)$ can be written in the form \be W_\pm(t,\alpha)=(4\pi t)^{-1/2}\int\limits_\RR d\alpha' \exp\left\{-\frac{(\alpha-\alpha')^2}{4t}\right\} T_\pm(\alpha'), \label{278gil} \ee where \be T_\pm(\alpha)=\mathrm{ Tr}\exp\left(i\alpha D_\pm\right); \ee strictly speaking, $T_\pm(\alpha)$ is a distribution and eq. (\ref{278gil}) should be understood in the distributional sense. Similarly, the invariant $V(t,s;\alpha,\beta)$ can be written in the form \be V(t,s;\alpha,\beta)=(4\pi)^{-1} (ts)^{-1/2}\int\limits_{\RR^2} d\alpha'd\beta' \exp\left\{-\frac{(\alpha-\alpha')^2}{4t} -\frac{(\beta-\beta')^2}{4s}\right\} S(\alpha',\beta'), \ee where \be S(\alpha,\beta)=\mathrm{ Tr}\exp\left(i\alpha D_+\right) \exp\left(i\beta D_-\right). \ee In this paper we will be interested primarily in the asymptotic expansion of the combined heat traces as $t,s\to 0$. \section{Generalized Heat Traces} \label{secxxx} \setcounter{equation}0 \subsection{Differential Operators} Let $M$ be a compact $n$-dimensional Riemannian manifold without boundary. Throughout the whole paper we denote tensor indices by Latin letters and use the Einstein summation convention. We use parentheses for the symmetrization of indices and square brackets for the anti-symmetrization. The indices excluded from the symmetrization or anti-symmetrization are separated by vertical lines. Also, we denote the local coordinates by $x^i$ and the partial derivatives by $\partial_i$. Let ${\cal V}$ be a vector bundle of densities of weight $1/2$ over $M$ and let $L^2({\cal V})$ be the corresponding Hilbert space; we use the notation $\tr$ for the fiber trace and $\mathrm{ Tr}$ for the corresponding $L^2$ trace. We study {\it positive self-adjoint elliptic second-order} partial differential operators $L$ with a scalar positive definite leading symbol of {\it Laplace type} acting on smooth sections of the bundle ${\cal V}$.
A Laplace type operator $L$ naturally defines a Riemannian metric $g$ and a connection $\nabla^{{\cal A}}$ on the vector bundle with a connection one-form ${\cal A}_i$. Since we will be working with different operators we do not have a single metric; therefore, following \cite{avramidi04}, we prefer to work with the vector bundle of densities of weight $1/2$ and with the Lebesgue measure $dx$ instead of the Riemannian one. Then the heat kernel $U(t;x,x')$ of the heat semigroup $\exp(-tL)$ is also a density of weight $1/2$ at each point $x$ and $x'$, and the heat kernel diagonal $U(t;x,x)$ is a density of weight $1$. Then a Laplace type operator has the form \bea L &=& g^{1/4}\left(-\Delta^{g,{\cal A}} +Q\right)g^{-1/4}, \label{lap} \eea where $\Delta^{g,{\cal A}}=g^{ij}\nabla^{g,{\cal A}}_i\nabla^{g,{\cal A}}_j$ is the Laplacian, $g=\det g_{ij}$, and $Q$ is some smooth endomorphism of the vector bundle ${\cal V}$; locally it has the form \bea L &=& -g^{-1/4}(\partial_i+{\cal A}_i)g^{1/2}g^{ij} (\partial_j+{\cal A}_j)g^{-1/4}+Q. \eea Let $L_\pm$ be two {\it Laplace type} operators defined by the metrics $g^\pm_{ij}$, the connections ${\cal A}_i^\pm$ and the potential terms $Q_\pm$. By using the metric $g_{ij}(t,s)$, (\ref{113via}), the connection ${\cal A}_i(t,s)$, (\ref{114via}), and the identity \be tg_+^{ij}{\cal C}_j^+ +sg_-^{ij}{\cal C}_j^-=0 \ee one can now rewrite the operators $L_\pm$ in the form \bea L_\pm &=& g^{1/4}\left(-\nabla^{g,{\cal A}}_ig_\pm^{ij}\nabla^{g,{\cal A}}_j -g_\pm^{ij}{\cal C}^\pm_i\nabla^{g,{\cal A}}_j -\nabla^{g,{\cal A}}_i g_\pm^{ij}{\cal C}^\pm_j +q_\pm\right)g^{-1/4}, \eea where \bea q_\pm =Q_\pm -g_\pm^{ij}{\cal C}^\pm_i{\cal C}^\pm_j +\frac{1}{2}\nabla^g_i(g_\pm^{ij}W^\pm_{j}) +\frac{1}{4}g_\pm^{ij}W^\pm_{i}W^\pm_{j}, \eea and $W^\pm_j$ is defined by (\ref{130via}).
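The identity $tg_+^{ij}{\cal C}^+_j+sg_-^{ij}{\cal C}^-_j=0$ follows directly from the definitions (\ref{113via})--(\ref{115via}). A pointwise numerical check (an illustration only, with random data standing in for the metrics and connection coefficients at a single point):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

def rand_spd(m):
    a = rng.standard_normal((m, m))
    return a @ a.T + m * np.eye(m)

gp_inv, gm_inv = rand_spd(n), rand_spd(n)        # g_+^{ij}, g_-^{ij}
Ap = rng.standard_normal(n)                      # A^+_i at a point
Am = rng.standard_normal(n)                      # A^-_i at a point

t, s = 0.6, 1.1
g_lo = np.linalg.inv(t * gp_inv + s * gm_inv)    # g_{ij}(t,s), eq. (113via)
A = g_lo @ (t * gp_inv @ Ap + s * gm_inv @ Am)   # A_i(t,s), eq. (114via)
Cp, Cm = Ap - A, Am - A                          # C^+_i, C^-_i, eq. (115via)

# the identity used to rewrite L_pm in terms of the common connection
assert np.allclose(t * gp_inv @ Cp + s * gm_inv @ Cm, 0.0)
```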
Notice that the sum of Laplace type operators is a Laplace type operator; in particular, the operator \bea L(t,s) &=& tL_++sL_- \nonumber\\ &=&g^{1/4}\left(-\Delta^{g,{\cal A}}+Q\right)g^{-1/4} \eea is a Laplace type operator with the metric $g_{ij}(t,s)$, (\ref{113via}), the connection ${\cal A}_i(t,s)$, (\ref{114via}), and the potential term \bea Q(t,s)&=&tQ_++sQ_- -tg_+^{ij}{\cal C}^+_i{\cal C}^+_j -sg_-^{ij}{\cal C}^-_i{\cal C}^-_j \label{25zaa}\\ && +\frac{1}{2}t\nabla_j^g(g_+^{ij}W^+_{i}) +\frac{1}{4}tg_+^{ij}W^+_{i}W^+_{j} +\frac{1}{2}s\nabla_j^g(g_-^{ij}W^-_{i}) +\frac{1}{4}sg_-^{ij}W^-_{i}W^-_{j}. \nonumber \eea Now, assume that ${\cal V}$ is a Clifford bundle. Let $D_\pm$ be two {\it self-adjoint first-order elliptic} partial differential operators of {\it Dirac type} acting on sections of the bundle ${\cal V}$ such that their squares $D_\pm^2$ are self-adjoint second-order positive elliptic partial differential operators. Let $\gamma_\pm: T^*M\to \End({\cal V})$ be the Clifford maps (determined by traceless Dirac matrices) satisfying \be \gamma_\pm^i\gamma_\pm^j+\gamma_\pm^j\gamma_\pm^i=2 g^{ij}_\pm I, \label{39viax} \ee where $I$ is the identity endomorphism. Let ${\cal A}^\pm_i$ be connection one-forms on the vector bundle ${\cal V}$; they are required to satisfy \be \partial_i\gamma_\pm^k+\Gamma_\pm{}^k{}_{ij}\gamma_\pm^j +[{\cal A}^\pm_i,\gamma_\pm^k]=0, \label{310viax} \ee where $\Gamma_\pm{}^k{}_{ij}$ are the Christoffel symbols of the metric $g^\pm_{ij}$; in particular, this means that $\nabla^\pm_i\gamma_\pm^k=0$. Let $S_\pm$ be some endomorphisms of the vector bundle ${\cal V}$ anticommuting with $\gamma_\pm^i$. By using the representation of the Dirac matrices in terms of the orthonormal frames, $\gamma_\pm^i(x)=e_\pm{}^i{}_a(x)\gamma^a$, this means that the matrices $S_\pm$ anti-commute also with the Dirac matrices $\gamma_\mp^i$, that is, \be \{S_\pm,\gamma_\pm^i\}=\{S_\pm,\gamma_\mp^i\}=0.
\label{312viax} \ee Then the Dirac type operators have the form \bea D_\pm &=& g_\pm^{1/4}i\gamma^j_\pm(\partial_j+{\cal A}_j^\pm)g_\pm^{-1/4}+S_\pm \\ &=& g_\pm^{1/4}\left(i\gamma^j_\pm\nabla^\pm_j +S_\pm\right)g_\pm^{-1/4}, \eea and $D_\pm^2$ is a Laplace type operator of the form \be D_\pm^2=g_\pm^{1/4}\left(-\Delta_\pm +Q_\pm\right)g_\pm^{-1/4}, \ee with \be Q_\pm=-\frac{1}{2}\gamma^{ij}_\pm\mathcal{R}^\pm_{ij}+S_\pm^2+i\gamma_\pm^j\nabla^\pm_j S_\pm, \label{qsxxa} \ee where $\gamma_\pm^{ij}=\gamma_\pm^{[i}\gamma_\pm^{j]}$ and $\mathcal{R}^\pm_{ij}$ is the curvature of the connection ${\cal A}^\pm_i$. If the Clifford bundle is a twisted spinor bundle then the connection ${\cal A}^\pm_i$ has the form \be {\cal A}^\pm_i=\frac{1}{4}\omega^\pm_{abi}\gamma^{ab}+{\cal E}^\pm_i , \ee where $\omega^\pm_{abi}$ is the spin connection, and the curvature has the form \be \mathcal{R}^\pm_{ij}=\frac{1}{4}R^\pm_{abij}\gamma^{ab}+{\cal F}^\pm_{ij}, \ee where ${\cal F}_{ij}^\pm$ is the curvature of the connection ${\cal E}_{i}^\pm$ and $R^\pm_{abij}$ is the Riemann tensor of the metric $g^\pm_{ij}$. \subsection{Heat Traces} Let $\{\lambda_k^\pm\}_{k=1}^\infty$ be the eigenvalues (counted with multiplicities and ordered in nondecreasing order) and $\{\varphi_k^\pm\}_{k=1}^\infty$ be the corresponding orthonormal sequence of eigensections of the operator $L_\pm$. The heat kernel of the operator $L_\pm$ has the following spectral representation \be U_\pm(t;x,x')=\sum_{k=1}^\infty \exp\left(-t\lambda^\pm_k\right) \varphi^\pm_k(x)\varphi^{\pm*}_k(x').
\ee Then the classical heat trace (\ref{237ssb}) has the form \bea \Theta_{\pm}(t)&=& \sum_{k=1}^\infty \exp\left(-t\lambda^\pm_k\right) \label{532xxc} \nonumber\\ &=&\int\limits_M dx\;\tr U_\pm(t;x,x) \label{532xxca} \eea and the combined heat trace (\ref{238ccx}) is \bea X(t,s)&=& \sum_{k,j=1}^\infty \exp\left(-t\lambda^+_k-s\lambda^-_j\right) \left|(\varphi^-_j,\varphi^{+}_k)\right|^2 \label{533xxc} \nonumber\\ &=& \int\limits_{M\times M}dx\;dx'\;\tr\left\{ U_+(t;x,x')U_-(s;x',x)\right\}. \label{533xxca} \eea Let $\{\mu_k^\pm\}_{k=1}^\infty$ be the eigenvalues of the operator $D_\pm$ (counted with multiplicities and ordered in nondecreasing order of the absolute value) and $\{\varphi_k^\pm\}_{k=1}^\infty$ be the corresponding orthonormal sequence of eigensections of the operators. The integral kernels of the heat semigroups $\exp(-tD^2_\pm+i\alpha D_\pm)$ and $\exp(-tD^2_\pm)$ have the form \bea V_\pm(t,\alpha;x,x') &=&\sum_{k=1}^\infty \exp\left[-t(\mu^\pm_k)^2+i\alpha\mu^\pm_k\right] \varphi^\pm_k(x)\varphi^{\pm*}_k(x'), \\ U_\pm(t;x,x')&=& \sum_{k=1}^\infty \exp\left[-t(\mu^\pm_k)^2\right] \varphi^\pm_k(x)\varphi^{\pm*}_k(x'). \eea Then the classical heat trace (\ref{12via}) has the form \bea H_\pm(t) &=& \sum_{k=1}^\infty \mu^\pm_k\exp\left[-t(\mu^\pm_k)^2\right] \nonumber\\ &=& \int\limits_{M}dx\;\tr\left\{ D_\pm U_\pm(t;x,x)\right\}, \eea where the operators $D_\pm$ act {\it only} on the first argument of the heat kernel, and the combined heat trace (\ref{18ssb}) is \bea Y(t,s)&=& \sum_{k,j=1}^\infty \exp\left[-t(\mu^+_k)^2-s(\mu^-_j)^2\right] \mu^+_k\mu^-_j\left|(\varphi^-_j,\varphi^{+}_k)\right|^2 \nonumber\\ &=&\int\limits_{M\times M}dx\;dx'\;\tr\left\{ D_+U_+(t;x,x')D_-U_-(s;x',x)\right\}, \label{534xxc} \eea where the differential operators act on the first spatial argument of the heat kernel.
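The spectral representation (\ref{533xxc}) makes explicit that $X(t,s)$ involves the overlaps $|(\varphi^-_j,\varphi^+_k)|^2$ of the two eigenbases. In a finite-dimensional model the two sides of (\ref{533xxc}) can be compared directly (an illustration only, with random symmetric positive definite matrices standing in for $L_\pm$):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
n = 6
Lp = rng.standard_normal((n, n)); Lp = Lp @ Lp.T + n * np.eye(n)
Lm = rng.standard_normal((n, n)); Lm = Lm @ Lm.T + n * np.eye(n)

lp, vp = np.linalg.eigh(Lp)           # eigenvalues/eigenvectors of L_+
lm, vm = np.linalg.eigh(Lm)           # eigenvalues/eigenvectors of L_-

t, s = 0.2, 0.5
overlap2 = np.abs(vm.T @ vp) ** 2     # |(phi^-_j, phi^+_k)|^2
X_spectral = np.sum(np.exp(-t * lp[None, :] - s * lm[:, None]) * overlap2)
X_trace = np.trace(expm(-t * Lp) @ expm(-s * Lm))
assert abs(X_spectral - X_trace) < 1e-10
```

The same comparison with the eigendata of stand-ins for $D_\pm$ illustrates (\ref{534xxc}).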
The generalized traces (\ref{424zzc}) and (\ref{51xxa}) have the form \bea W_\pm(t,\alpha)&=& \sum_{k=1}^\infty \exp\left[-t(\mu^\pm_k)^2+i\alpha\mu^\pm_k\right] \nonumber\\ &=&\int\limits_M dx\;\tr V_\pm(t,\alpha;x,x), \label{56zzaa} \\ V(t,s;\alpha,\beta)&=& \sum_{k,j=1}^\infty \exp\left[-t(\mu^+_k)^2+i\alpha\mu^+_k -s(\mu^-_j)^2+i\beta\mu_j^-\right] \left|(\varphi^-_j,\varphi^{+}_k)\right|^2 \nonumber\\ &=&\int\limits_{M\times M}dx\;dx'\tr\left\{ V_+(t,\alpha;x,x')V_-(s,\beta;x',x)\right\}. \label{534xxda} \eea We would like to stress that whereas the classical invariants $\Theta_\pm(t)$, $H_\pm(t)$ and $W_\pm(t)$ depend only on the eigenvalues of the operators, the new invariants $X(t,s)$, $Y(t,s)$ and $V(t,s;\alpha,\beta)$ depend on the eigensections as well and, therefore, contain much more information about the spectra of these operators. \section{Ruse-Synge Function} \setcounter{equation}0 In this section we follow our books \cite{avramidi00,avramidi15}. We fix the notation for the rest of the paper. Let $x'$ be a {\it fixed point} in a manifold $M$. We denote indices of tensors in the tangent space at the point $x'$ by primed Latin letters. The derivatives with respect to the coordinates $x'^i$ will be denoted by primed indices as well. We will also denote the {\it partial derivatives} of a scalar function $f$ with respect to $x$ and $x'$ by just adding indices to the function after a comma, e.g. $f_{,ij'}=\partial_i\partial_{j'}f$. Obviously, the derivatives with respect to $x$ and with respect to $x'$ commute. Finally, everywhere below the square brackets denote the diagonal value of a two-point function $f(x,x')$, that is, $[f]=f(x',x')$. It is also easy to see that the derivative of a coincidence limit is equal to the sum of the coincidence limits of the derivatives with respect to $x$ and $x'$, \be [f]_{,j} =[f_{,j}]+[f_{,j'}].
\label{51qqq} \ee Let $g$ be a Riemannian metric and $r_{\rm inj}(M,g)$ be the injectivity radius of the manifold $M$. Let $B_r(x')$ be the geodesic ball of radius $r$ less than the injectivity radius of the manifold, $r<r_{\rm inj}(M,g)$. Let $U\subset B_r(x')$ be a sufficiently small neighborhood of the point $x'$ in the ball $B_r(x')$ so that it is covered by a single coordinate patch with coordinates $x^i$. Each point $x$ in the neighborhood $U$ can be connected with the point $x'$ by a {\it unique} geodesic. The \index{ Ruse-Synge function} {\it Ruse-Synge function} $\sigma(x,x')$ is a symmetric smooth function defined as one half of the square of the geodesic distance $d(x,x')$ between the points $x$ and $x'$, \be \sigma(x,x')=\frac{1}{2}d^2(x,x'); \ee it was introduced by Ruse \cite{ruse31} and used extensively by Synge \cite{synge60} and others \cite{dewitt75,birrel80} in general relativity under the name {\it world function}. There are many ways to show that the Ruse-Synge function satisfies the (modified) Hamilton-Jacobi equation \be \sigma= \frac{1}{2}g^{ij}(x)\sigma_{,i}\sigma_{,j} =\frac{1}{2}g^{i'j'}(x')\sigma_{,i'}\sigma_{,j'} \,, \label{3133xx} \ee with the initial conditions \be [\sigma]= [\sigma_{,i}] =[\sigma_{,i'}] =0\,.\qquad \label{3130zza} \ee Furthermore, by differentiating eq. (\ref{3133xx}) and taking the coincidence limit it is easy to see that \be [\sigma_{,ij}] = [\sigma_{,i'j'}] = -[\sigma_{,ij'}] = g_{ij}. \label{45via} \ee The Hamilton-Jacobi equation (\ref{3133xx}) with the above initial conditions (\ref{3130zza}) has a unique solution; it can be solved, for example, in form of a (noncovariant) Taylor series \be \sigma(x,x')= \sum_{k=2}^\infty\frac{1}{k!} [\sigma_{,i_1\dots i_k}](x')y^{i_1}\cdots y^{i_k}, \label{36ttt} \ee where $y^i=x^i-x'^{i}$. 
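In the simplest one-dimensional example $g_{11}=e^{2x}$ the geodesic distance can be computed in closed form, $d(x,x')=|e^x-e^{x'}|$, and the defining properties (\ref{3133xx})--(\ref{45via}) can be verified symbolically (an illustration only):

```python
import sympy as sp

x, xp = sp.symbols('x xp')
g = sp.exp(2 * x)                     # 1d metric g_{11}(x) = e^{2x}
# d(x,x') = |e^x - e^{x'}|, so sigma = d^2/2:
sigma = sp.Rational(1, 2) * (sp.exp(x) - sp.exp(xp)) ** 2

# Hamilton-Jacobi equation (3133xx): sigma = (1/2) g^{11} (sigma_{,x})^2
hj = sp.simplify(sigma - sp.Rational(1, 2) * sp.exp(-2 * x) * sp.diff(sigma, x) ** 2)
assert hj == 0

# initial conditions (3130zza) and coincidence limits (45via)
assert sigma.subs(xp, x) == 0
assert sp.simplify(sp.diff(sigma, x).subs(xp, x)) == 0
assert sp.simplify(sp.diff(sigma, x, 2).subs(xp, x) - g) == 0    # [sigma_{,xx}] = g
assert sp.simplify(sp.diff(sigma, x, xp).subs(xp, x) + g) == 0   # [sigma_{,xx'}] = -g
```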
The coincidence limits of partial derivatives of higher orders $[\sigma_{,i_1\dots i_k}]$, $k\ge 3$, are {\it uniquely} determined in terms of some polynomials in the partial derivatives of the metric $g_{ij,m_1\dots m_p}$ and the metric $g_{ij}$ and its inverse $g^{ij}$, that is, some polynomials in the partial derivatives $[\sigma_{,ij}]_{,m_1\dots m_p}$ and the matrix $[\sigma_{,ij}]$ and its inverse. Therefore, there are {\it non-trivial relations} between the coincidence limits of partial derivatives. By using these relations one can obtain the coincidence limits of the partial derivatives \bea [\sigma_{,ijk}] &=& 3 g_{m(k}\Gamma{}^m{}_{ij)}= \frac{3}{2}g_{(ij,k)}, \\ {}[\sigma_{,ijkl}] &=& 4g_{m(l}\Gamma^m{}_{ij,k)} +4g_{m(l}\Gamma^n{}_{ij}\Gamma^{m}{}_{k)n} +3g_{nm}\Gamma^n{}_{(ij}\Gamma^{m}{}_{kl)}, \label{48viax} \\ {}[\sigma_{,i'jkl}] &=& -g_{m(l}\Gamma^m{}_{ij,k)} -g_{m(l}\Gamma^n{}_{ij}\Gamma^{m}{}_{k)n}, \label{49viax} \eea where $\Gamma^i{}_{jk}$ are the Christoffel symbols for the metric $g$. Here and everywhere below the parentheses denote the symmetrization over all included indices and the vertical lines denote the indices excluded from the symmetrization. By differentiating eq. (\ref{3133xx}) we also find \be \sigma_{,k'}=g^{ij}\sigma_{,ik'}\sigma_{,j}. \ee Let $\gamma^{j'i}$ be the inverse of the matrix of mixed derivatives $\sigma_{,jk'}$ (it should not be confused with Dirac matrices). Then we obtain \be \gamma^{k'i}\sigma_{,k'}=g^{ij}\sigma_{,j}, \label{524zzz} \ee and, therefore, the Ruse-Synge function satisfies a {\it non-trivial equation without any metric} \be \sigma=\frac{1}{2}\gamma^{i'j}\sigma_{,i'}\sigma_{,j}. \label{525zzz} \ee This enables one to compute the Ruse-Synge function in terms of diagonal values of its own partial derivatives. The usual Taylor series (\ref{36ttt}) is not symmetric whereas the function $\sigma(x,x')$ is. Thus, it is more appropriate to represent it in a manifestly symmetric Taylor series.
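The first of these limits can be checked in the same one-dimensional example: for a single coordinate, $[\sigma_{,xxx}]=\frac{3}{2}g_{xx,x}$. A sketch, again assuming the illustrative metric $g(x)=e^{2x}$ (so that $\frac{3}{2}g_{xx,x}=3e^{2x'}$):

```python
import math

# Assumed 1D metric g(x) = exp(2x), with sigma = (exp(x) - exp(x'))^2 / 2.
def sigma(x, xp):
    return 0.5 * (math.exp(x) - math.exp(xp)) ** 2

# Third derivative of sigma in x on the diagonal, via a central 5-point stencil.
xp, h = 0.3, 1e-3
f = lambda k: sigma(xp + k * h, xp)
sigma_xxx_diag = (f(2) - 2 * f(1) + 2 * f(-1) - f(-2)) / (2 * h**3)

# Claimed coincidence limit: [sigma_{,xxx}] = (3/2) g' = 3 exp(2 x')
expected = 3 * math.exp(2 * xp)
print(sigma_xxx_diag, expected)
```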
Let us introduce new coordinates \be z^i=x^i+x'^i, \qquad y^i=x^i-x'^i. \ee In these coordinates the Ruse-Synge function becomes a function of $z$ and $y$, \be \sigma(x,x')=f(z,y). \ee The derivatives are related by \bea \partial^z_i &=& \frac{1}{2}\left(\partial^x_i+\partial^{x'}_i\right), \qquad \partial^{y}_i = \frac{1}{2}\left(\partial^x_i-\partial^{x'}_i\right), \\ \partial^x_i &=& \partial^z_i+\partial^{y}_i, \qquad \partial^{x'}_i = \partial^z_i-\partial^{y}_i. \eea We can expand the Ruse-Synge function in the Taylor series in the variables $y$ with coefficients depending on the variables $z$. Since it is symmetric, the series contains only even powers of $y$, \be \sigma(x,x')=\sum_{k=1}^\infty \frac{1}{(2k)!}F_{i_1\dots i_{2k}}(z)y^{i_1}\dots y^{i_{2k}}. \ee The derivatives of the Ruse-Synge function are then \bea \sigma_{,i} &=& A_i+B_i, \\ \sigma_{,j'} &=& A_j-B_j, \\ \sigma_{,ij'} &=& -F_{ij}+C_{ij}+D_{ij}, \eea where \bea A_j&=& \sum_{k=1}^\infty \frac{1}{(2k)!}F_{i_1\dots i_{2k},j} y^{i_1}\dots y^{i_{2k}}, \\ B_j &=& \sum_{k=0}^\infty \frac{1}{(2k+1)!}F_{i_1\dots i_{2k+1}j}y^{i_1}\dots y^{i_{2k+1}}, \\ C_{ij} &=& \sum_{k=1}^\infty \frac{1}{(2k)!}\left(F_{i_1\dots i_{2k},ij} -F_{i_1\dots i_{2k}ij}\right)y^{i_1}\dots y^{i_{2k}}, \\ D_{ij}&=&\sum_{k=0}^\infty \frac{1}{(2k+1)!}\left(F_{i_1\dots i_{2k+1}i,j} -F_{i_1\dots i_{2k+1}j,i} \right)y^{i_1}\dots y^{i_{2k+1}}. \eea Now, by using these expansions one can compute the expansion of the matrix $\gamma^{ij'}$ and then use the equation (\ref{525zzz}) to obtain a recursive relation for the coefficients $F_{i_1\dots i_k}$. {\it All of the higher-order coefficients $F_{i_1\dots i_k}$, with $k\ge 4$, will be determined by the derivatives of the first coefficient $F_{ij}$}.
The diagonal values of the {\it covariant derivatives} of the Ruse-Synge function are expressed in terms of polynomials in the covariant derivatives of the curvature tensor, in particular, \bea [\nabla^g_{i}\nabla^g_j\nabla^g_{k}\sigma] &=& [\nabla^g_{i}\nabla^g_j\nabla^g_{k'}\sigma]=0, \label{324mmm} \\ {}[\nabla^g_{l}\nabla^g_k\nabla^g_{j}\nabla^g_{i}\sigma] &=&-[\nabla^g_{l'}\nabla^g_k\nabla^g_{j}\nabla^g_{i}\sigma] = [\nabla^g_{l'}\nabla^g_{k'}\nabla^g_{j}\nabla^g_{i}\sigma] =-\frac{2}{3}R^g{}_{(i|k|j)l}. \label{325mmm} \eea That is, {\it the diagonal values of all higher order covariant derivatives of the Ruse-Synge function $[\nabla^g_{j'_m}\cdots\nabla^g_{j'_1}\nabla^g_{i_k}\cdots\nabla^g_{i_1} \sigma]$, with $k+m\ge 4$, are expressed in terms of the derivatives $[\sigma_{,ij'}]_{,i_1\dots i_k}$ of the diagonal values of the second derivatives $[\sigma_{,ij'}]$}. One can also show that the Ruse-Synge function satisfies the following coincidence limits \cite{avramidi00}: for any $k\ge 2$, \be [\nabla^g_{(i_1}\cdots\nabla^g_{i_k)}\nabla^g_{j'}\sigma]= [\nabla^g_{(i_1}\cdots\nabla^g_{i_k)}\nabla^g_j\sigma]=0. \ee An important ingredient is the Van Vleck-Morette determinant, defined by \bea M(x,x') &=& \det\left(-\sigma_{,ij'}(x,x')\right)\,; \label{vvmd} \eea it is a two-point density of weight $1$ at each point (we denote it by $M(x,x')$ instead of the usual $D(x,x')$ to avoid confusion with the Dirac type operators $D_\pm$). Therefore, we find it convenient to define the function \be \zeta(x,x') =\frac{1}{2}\log\left(g^{-1/2}(x)M(x,x')g^{-1/2}(x')\right), \label{410qqq} \ee which is a scalar function at each point.
The first coincidence limits of this function are \cite{avramidi00} \bea [\zeta] &=& [\zeta_{,i}]=0, \label{511mmm} \\ {}[\nabla^g_{i}\nabla^g_{j}\zeta] &=& \frac{1}{6}R^g_{ij}, \label{512mmm} \\ {}[\nabla^g_{(i}\nabla^g_{j}\nabla^g_{k)}\zeta] &=& \frac{1}{4}\nabla^g_{(i}R^g{}_{jk)}, \label{513via} \\ {}[\nabla^g_{(i}\nabla^g_{j}\nabla^g_{k}\nabla^g_{l)}\zeta] &=& \frac{3}{10}\nabla^g_{(i}\nabla^g_{j}R^g{}_{kl)} +\frac{1}{15}R^g{}_{m(i}{}^n{}_{j}R^{g}{}_k{}^{m}{}_{l)n}. \label{514via} \eea The tangent vector at the point $x'$ to the geodesic connecting the points $x'$ and $x$, pointing toward the point $x$, is given by the derivative of the Ruse-Synge function \cite{avramidi00} \be \xi^{i'}=-g^{i'j'}\sigma_{,j'}, \label{514xxz} \ee so that \be \sigma=\frac{1}{2}g_{i'j'}\xi^{i'}\xi^{j'}. \ee The variables $\xi^{i'}$ are related to the so-called Morse variables; they provide normal coordinates in geometry. The Jacobian of the transformation $x\mapsto \xi$ is expressed in terms of the Van Vleck-Morette determinant and does not vanish for sufficiently close points $x$ and $x'$. The volume element and the derivatives in these coordinates have the form \bea dx &=& M^{-1}(x,x')g(x')d\xi \nonumber\\ &=& g^{1/2}(x')g^{-1/2}(x)e^{-2\zeta(x,x')}\;d\xi, \label{59qqq} \\ \frac{\partial}{\partial x^i} &=& -\sigma_{,ik'}g^{k'j'}\frac{\partial}{\partial \xi^{j'}}\,. \eea Then an arbitrary analytic scalar function $f$ can be expanded in a covariant Taylor series \cite{avramidi00} \be f=\sum_{k=0}^\infty\frac{1}{k!} f_{i'_1 \dots i'_k} \xi^{i'_1}\cdots \xi^{i'_k}\,, \ee where $f_{i'_1 \dots i'_k}= [\nabla^g_{(i_1}\cdots\nabla^g_{i_k)}f](x')$. One can show that the metric is determined by the Ruse-Synge function as follows. Let $V$ be the matrix defined by \be V_{k'l'}=\sigma_{,j'}\gamma^{ij'}\sigma_{,k'l'i}. \ee Further, let $Y$ be a matrix defined by \be Y_{k'l'}=\sigma_{,k'l'}-V_{k'l'}, \ee and $X=(X^{i'j'})$ be the inverse of the matrix $Y$.
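A numerical illustration of (\ref{514xxz}), not from the text: in one dimension, with the same assumed metric $g(x)=e^{2x}$, the variable $\xi$ computed from the derivative of $\sigma$ indeed satisfies $\sigma=\frac{1}{2}g_{x'x'}\xi^2$.

```python
import math

# Assumed 1D metric g(x) = exp(2x); sigma in closed form as before.
def sigma(x, xp):
    return 0.5 * (math.exp(x) - math.exp(xp)) ** 2

def d_dxp(f, x, xp, h=1e-6):
    # central finite difference in the second argument
    return (f(x, xp + h) - f(x, xp - h)) / (2 * h)

x, xp = 0.9, 0.4
g_xp = math.exp(2 * xp)                      # g_{x'x'}
xi = -(1.0 / g_xp) * d_dxp(sigma, x, xp)     # xi = -g^{x'x'} sigma_{,x'}

# Check sigma = (1/2) g_{x'x'} xi^2
residual = sigma(x, xp) - 0.5 * g_xp * xi ** 2
print(residual)
```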
Then the matrix $X$ is given by the series \be X=\sum_{n=0}^\infty (\beta V)^n\beta, \ee where $\beta^{k'l'}$ is the inverse of the matrix $\sigma_{,k'l'}$, that is, \bea X^{k'l'} &=& \beta^{k'l'}+ \beta^{k'm'}V_{m'p'}\beta^{p'l'} +\beta^{k'm'}V_{m'p'}\beta^{p'q'}V_{q'r'}\beta^{r'l'} +\cdots \nonumber \eea By differentiating eq. (\ref{524zzz}) with respect to $x'^l$ we obtain \bea g^{ij}\sigma_{,jl'} &=& \gamma^{ik'}Y_{k'l'}. \eea Finally, by multiplying by the matrix $\gamma^{jl'}$ we prove the following lemma. \begin{lemma} \label{lemmaruse} The metric is uniquely determined by the partial derivatives of the Ruse-Synge function by \bea g^{ij} &=&\gamma^{ik'}\gamma^{jl'}Y_{k'l'}, \label{57zzz} \\ g_{ij} &=& \sigma_{,ik'}\sigma_{,jl'}X^{k'l'}. \eea \end{lemma} \noindent Even though the metric is determined by the off-diagonal derivatives of $\sigma$, it does not depend on the point $x'$. Also, for $x=x'$ we get, of course, $g_{ij}(x')=g_{i'j'}$. Notice that the matrix $V$ is of first order in $y^i=x^i-x'^i$; therefore, this power series is well defined near the diagonal. Thus, we obtain for the metric \bea g_{ij} &=& \sigma_{,ik'}\sigma_{,jl'}\beta^{k'l'} +\sigma_{,ik'}\sigma_{,jl'}\beta^{k'm'}V_{m'p'}\beta^{p'l'} \nonumber\\ &&+\sigma_{,ik'}\sigma_{,jl'}\beta^{k'm'}V_{m'p'}\beta^{p'q'}V_{q'r'}\beta^{r'l'} +\cdots \eea Therefore, one can find the metric in terms of the Taylor series \bea g^{ij}(x) &=& \sum_{k=0}^\infty\frac{1}{k!} g^{ij}{}_{,i_1\dots i_k}(x')y^{i_1}\cdots y^{i_k}, \\ g_{ij}(x) &=& \sum_{k=0}^\infty\frac{1}{k!} g_{ij}{}_{,i_1\dots i_k}(x')y^{i_1}\cdots y^{i_k}. \eea The Taylor coefficients $g^{ij}{}_{,i_1\dots i_k}(x')$ and $g_{ij}{}_{,i_1\dots i_k}(x')$ are expressed in terms of polynomials in the coincidence limits $[\sigma_{,k'i_1\dots i_p}]$ and $[\sigma_{,k'l'i_1\dots i_p}]$ and the metric $g^{i'j'}(x')$. This gives an expression for the metric entirely in terms of the partial derivatives of the Ruse-Synge function.
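Lemma \ref{lemmaruse} can be tested numerically in one dimension, where it reduces to $g^{xx}=\gamma^2 Y$ with $\gamma=1/\sigma_{,xx'}$, $V=\sigma_{,x'}\,\gamma\,\sigma_{,x'x'x}$ and $Y=\sigma_{,x'x'}-V$. A sketch with the assumed metric $g(x)=e^{2x}$ and all derivatives taken by finite differences:

```python
import math

# Assumed 1D metric g(x) = exp(2x), sigma = (exp(x) - exp(x'))^2 / 2.
def sigma(x, xp):
    return 0.5 * (math.exp(x) - math.exp(xp)) ** 2

h = 1e-3
x, xp = 0.6, 0.2

def s(i, j):
    # sigma evaluated on a grid shifted by (i*h, j*h)
    return sigma(x + i * h, xp + j * h)

sigma_xp    = (s(0, 1) - s(0, -1)) / (2 * h)
sigma_x_xp  = (s(1, 1) - s(1, -1) - s(-1, 1) + s(-1, -1)) / (4 * h**2)
sigma_xp_xp = (s(0, 1) - 2 * s(0, 0) + s(0, -1)) / h**2
# third mixed derivative sigma_{,x'x'x}: x-difference of the x'x' second difference
sigma_xp_xp_x = ((s(1, 1) - 2 * s(1, 0) + s(1, -1))
                 - (s(-1, 1) - 2 * s(-1, 0) + s(-1, -1))) / (2 * h**3)

gamma = 1.0 / sigma_x_xp
V = sigma_xp * gamma * sigma_xp_xp_x
Y = sigma_xp_xp - V

g_inv_recovered = gamma**2 * Y   # should equal g^{xx}(x) = exp(-2x)
print(g_inv_recovered, math.exp(-2 * x))
```

Note that the recovered $g^{xx}$ is evaluated at $x$, not $x'$, in agreement with the remark after the lemma.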
Finally, we study the dependence of the Ruse-Synge function on the metric. Let $h_{ij}$ be another metric and $\sigma^h(x,x')$ be the Ruse-Synge function for the metric $h$. We will need the covariant Taylor expansion of this function in powers of the variables $\xi^{i'}$ defined by (\ref{514xxz}) (with respect to the metric $g$), that is, \be \sigma^h(x,x')=\sum_{k=2}^\infty \frac{1}{k!} [\nabla^g_{(i_1}\cdots\nabla^g_{i_k)}\sigma^h](x')\xi^{i'_1}\cdots \xi^{i'_k}. \ee To avoid confusion we use the notation $\nabla^g$ and $\nabla^h$ to denote the covariant derivatives with respect to the metrics $g$ and $h$. All indices will be raised and lowered by the metric $g$. The function $\sigma^h$ satisfies the equation \be \sigma^h=\frac{1}{2}h^{ij}\sigma^h_{,i}\sigma^h_{,j} \ee with the initial conditions \be [\sigma^h]=[\sigma^h_{,i}]=0; \ee therefore, the first two terms in the Taylor series of the function $\sigma^h$ vanish. The non-compatibility of the metrics $h$ and $g$ is measured by the non-metricity tensor \be K_{ijk}=\nabla^g_i h_{jk} \ee and the disformation tensor \be W^i{}_{jk}=\Gamma_h{}^i{}_{jk}-\Gamma_g{}^i{}_{jk}, \ee where $\Gamma_{h,g}{}^i{}_{jk}$ are the Levi-Civita connections of the metrics $h$ and $g$. These two tensors are related by \bea W^i{}_{jk} &=& \frac{1}{2}h^{im}\left(K_{jkm}+K_{kjm}-K_{mjk}\right), \\ K_{ijk} &=& h_{km}W^{m}{}_{ij} + h_{jm} W^{m}{}_{ik}. \eea The covariant derivatives with respect to the metrics $g$ and $h$ are related by \bea \nabla^h_i T^k{}_j &=& \nabla^g_i T^k{}_j +W^k{}_{im}T^m{}_j -W^m{}_{ij}T^k{}_m. \label{360mmm} \eea We introduce the scalar $W=\frac{1}{2}\log \left(\frac{h}{g}\right)$, where $h=\det h_{ij}$ and $g=\det g_{ij}$, and the vector $W_j =\partial_j W$; then \be W^i{}_{ij}=\frac{1}{2}h^{kl}K_{jkl}=W_j. \ee The Riemann tensors are related by \bea R_{h}{}^i{}_{jkl} &=& R_{g}{}^i{}_{jkl} +\nabla_k^g W^i{}_{lj} -\nabla_l^g W^i{}_{kj} +W^i{}_{km}W^m{}_{lj} -W^i{}_{lm}W^m{}_{kj}.
\eea One has to be careful with this equation when lowering or raising indices. For example, the Ricci tensors are obtained by just contracting the indices \bea R^h{}_{jl} &=& R^g{}_{jl} +\nabla_i^g W^i{}_{lj} -\nabla^g_{j}W_{l} +W_{m}W^m{}_{lj} -W^i{}_{lm}W^m{}_{ij}, \label{355zaa} \eea but for the Riemann tensor $R^h{}_{ijkl}$ with all indices lowered we have to use the metric $h_{ij}$ and, therefore, it will not be directly related to the Riemann tensor $R^g{}_{ijkl}$, which is obtained by using the metric $g_{ij}$, that is, \be R^{h}{}_{njkl} = h_{ni}g^{im}R^{g}{}_{mjkl} +h_{ni}\nabla_k^g W^i{}_{lj} -h_{ni}\nabla_l^g W^i{}_{kj} +h_{ni}W^i{}_{km}W^m{}_{lj} -h_{ni}W^i{}_{lm}W^m{}_{kj}. \ee We will need to compute the following tensors determined by the diagonal values of the symmetrized covariant derivatives with respect to the metric $g$ of the Ruse-Synge function $\sigma^h$ of the metric $h$ and the vectors $\sigma^h_{,j}$ and $\sigma^h_{,j'}$, \bea S_{i_1\dots i_k} &=& [\nabla^g_{(i_1}\cdots\nabla^g_{i_k)}\sigma^h], \label{449viax} \\ T_{ji_1\dots i_k} &=& [\nabla^g_{(i_1}\cdots\nabla^g_{i_k)}\nabla^g_j\sigma^h], \\ V_{ji_1\dots i_k} &=& [\nabla^g_{(i_1}\cdots\nabla^g_{i_k)}\nabla^g_{j'}\sigma^h]. \eea We know that \bea [\nabla^h_{i}\sigma^h] &=& 0, \\ {}[\nabla^h_{j}\nabla^h_{i}\sigma^h] &=& h_{ij}, \label{461viax} \\ {}[\nabla^h_{k}\nabla^h_{j}\nabla^h_{i}\sigma^h] &=& [\nabla^h_{k}\nabla^h_{j}\nabla^h_{i'}\sigma^h] = 0, \label{462viax}\\ {}[\nabla^h_{l}\nabla^h_{k}\nabla^h_{j}\nabla^h_{i}\sigma^h] &=& -{}[\nabla^h_{l'}\nabla^h_{k}\nabla^h_{j}\nabla^h_{i}\sigma^h] =-\frac{2}{3}R^h{}_i{}_{(k|j|l)}. 
\eea By using these equations and the relation (\ref{360mmm}) between the covariant derivatives we obtain \bea [\nabla^g_i \sigma^h] &=& S_i=T_i=V_i=0, \\ {}[\nabla^g_j\nabla^g_i\sigma^h] &=& -[\nabla^g_j\nabla^g_{i'}\sigma^h] = S_{ij}=T_{ij}=-V_{ij}= h_{ij}, \label{465viax} \\{} {}[\nabla^g_k\nabla^g_j\nabla^g_i \sigma^h] &=& S_{ijk}=T_{ijk} =3h_{m(i}W^m{}_{jk)} =\frac{3}{2}K_{(ijk)}. \label{361zaa} \eea Also, by using the relation (\ref{51qqq}) and (\ref{361zaa}) (or (\ref{462viax}) and (\ref{360mmm})) we obtain \bea [\nabla^g_k\nabla^g_j\nabla^g_{i'} \sigma^h] &=& V_{ijk} =-h_{mi}W^m{}_{kj}. \label{467viax} \eea Similarly, we compute \bea [\nabla^g_{(l}\nabla^g_k\nabla^g_{j)}\nabla^g_{i} \sigma^h] &=& T_{ijkl} \nonumber\\ &=& 3h_{m(j} \nabla^g{}_{k}W^m{}_{l)i} +h_{mi} \nabla^g{}_{(j}W^m{}_{kl)} +3h_{m(j}W^n{}_{k|i|}W^{m}{}_{l)n} \nonumber\\ && +h_{mi}W^{n}{}_{(jk}W^{m}{}_{l)n} + 3h_{nm}W^n{}_{(jk}W^{m}{}_{l)i}, \label{469viax}\\ {}[\nabla^g_{(l}\nabla^g_k\nabla^g_{j)}\nabla^g_{i'} \sigma^h] &=& V_{ijkl} \nonumber\\ &=& -h_{mi} \nabla^g{}_{(j}W^m{}_{kl)} -h_{mi}W^n{}_{(jk}W^{m}{}_{l)n}. \label{470viax} \\ {}[\nabla^g_{(l}\nabla^g_k\nabla^g_j\nabla^g_{i)} \sigma^h] &=& S_{ijkl} \label{362zaa}\\ &=& 4h_{m(i} \nabla^g{}_{j}W^m{}_{kl)} +4h_{m(i}W^n{}_{jk}W^{m}{}_{l)n} +3h_{nm}W^n{}_{(ij}W^{m}{}_{kl)}. \nonumber \eea This can also be written in terms of the tensors $K_{ijk}$ \be S_{ijkl} =2\nabla^g{}_{(i}K_{jkl)} -h^{mn}K_{(ij|m|}K_{kl)n} +h^{mn}K_{n(ij}K_{kl)m} -\frac{1}{4}h^{mn}K_{m(ij}K_{|n|kl)}. \label{362zaaz} \ee Let $\nabla^{\cal A}$ be a connection on a vector bundle ${\cal V}$ over a manifold $M$. It defines the {\it operator of parallel transport} ${\cal P}_{g,{\cal A}}(x,x')$ of sections of the vector bundle ${\cal V}$ along geodesics of the metric $g$ from the point $x'$ to the point $x$. 
It satisfies the equation of parallel transport \cite{avramidi00} \be g^{ij}\sigma^g_{,j}\nabla^{\cal A}_i{\cal P}_{g,{\cal A}}=0 \ee with the initial condition \bea [{\cal P}_{g,{\cal A}}]&=&I, \eea where $I$ is the identity endomorphism. By using these equations we obtain the coincidence limits of partial derivatives \bea [{\cal P}^{g,{\cal A}}_{,i}] &=& -{\cal A}_i, \\{} [{\cal P}^{g,{\cal A}}_{,ij}] &=& -{\cal A}_{(i,j)}+{\cal A}_{(i}{\cal A}_{j)}. \eea The coincidence limits of covariant derivatives are \bea [\nabla^{\cal A}_i {\cal P}_{g,{\cal A}}] &=&0, \label{420baa} \\{} [\nabla^{g,{\cal A}}_i\nabla^{g,{\cal A}}_j{\cal P}_{g,{\cal A}}] &=& \frac{1}{2}\mathcal{R}^{\cal A}_{ij}, \label{422qqc} \eea where $\mathcal{R}^{\cal A}_{ij}$ is the curvature of the connection $\nabla^{\cal A}$. Moreover, one can show that the diagonal values of the symmetrized covariant derivatives vanish, \be [\nabla^{g,{\cal A}}_{(i_1}\cdots\nabla^{g,{\cal A}}_{i_k)}{\cal P}_{g,{\cal A}}]=0. \ee Now, suppose that there is another metric $h$ and another connection $\nabla^{h,{\cal B}}$, and let ${\cal P}_{h,{\cal B}}$ be the corresponding operator of parallel transport. We need to compute the diagonal values of the derivatives $[\nabla^{g,{\cal A}}_{(i_1}\cdots\nabla^{g,{\cal A}}_{i_k)}{\cal P}_{h,{\cal B}}]$. The difference of the connection one-forms defines the tensor \be {\cal C}_i={\cal B}_i-{\cal A}_i, \ee so that \be \nabla^{g,{\cal A}}_i{\cal P}_{h,{\cal B}}=\nabla^{h,{\cal B}}_i{\cal P}_{h,{\cal B}}-{\cal C}_i{\cal P}_{h,{\cal B}}. \label{483viax} \ee By using eqs. (\ref{420baa}), (\ref{422qqc}) and (\ref{483viax}) we obtain \bea [\nabla^{g,{\cal A}}_{i}{\cal P}_{h,{\cal B}}] &=& -{\cal C}_i, \label{372zaa} \\ {}[\nabla^{g,{\cal A}}_{(i}\nabla^{g,{\cal A}}_{j)}{\cal P}_{h,{\cal B}}] &=& -\nabla^{g,{\cal A}}_{(i}{\cal C}_{j)}+{\cal C}_{(i}{\cal C}_{j)}.
\label{373zaa} \eea \section{Asymptotics of Integrals} \setcounter{equation}0 We use the Laplace method to compute the asymptotics as $\varepsilon\to 0$ of Laplace type integrals \be F(\varepsilon)= (4\pi\varepsilon)^{-n/2} \int\limits_{U}dx\;\exp\left(-\frac{1}{2\varepsilon}\Sigma(x,x')\right) \varphi(x), \label{541zzam} \ee with a smooth non-negative function $\Sigma$ and a smooth function $\varphi$ on a sufficiently small neighborhood $U$ of a point $x'$ in a manifold $M$. \subsection{Gaussian Integrals on Riemannian Manifolds} First of all, we recall the standard Gaussian integrals. Let $G_{ij}$ be a real symmetric positive definite matrix, $G^{ij}$ be its inverse, $G=\det G_{ij}$ and $\left<y, Gy\right>=G_{ij}y^iy^j$. We define the {\it Gaussian average} of a smooth function $f$ on $\RR^n$ by \be \left<f\right>_G = (4\pi)^{-n/2}G^{1/2}\int\limits_{\RR^n}dy\; \exp\left(-\frac{1}{4}\left<y, Gy\right>\right) f(y). \ee The Gaussian averages of odd monomials vanish, while for any $k\ge 0$ the averages of even monomials are (see, e.g., \cite{avramidi15, prudnikov83}) \be \left<y^{i_1}\cdots y^{i_{2k}}\right>_G = \frac{(2k)!}{k!} G^{(i_1i_2}\cdots G^{i_{2k-1}i_{2k})}. \label{53via} \ee By integrating by parts it is easy to obtain a useful relation for the averages of derivatives \be \left<\partial_{i_1}\cdots\partial_{i_k}f\right>_G= \left<{\cal H}_{i_1\dots i_k}f\right>_G, \ee where ${\cal H}_{i_1\dots i_k}$ are Hermite polynomials defined by \cite{erdelyi53} \bea {\cal H}_{i_1\dots i_k}(y) &=& (-1)^k \exp\left(\frac{1}{4}\left<y, Gy\right>\right) \partial_{i_1}\cdots\partial_{i_k} \exp\left(-\frac{1}{4}\left<y, Gy\right>\right) \nonumber\\ &=& (-1)^k{\cal D}_{i_1}\cdots{\cal D}_{i_k}\cdot 1, \eea and \be {\cal D}_{i}=\partial_i-\frac{1}{2}G_{ij}y^j.
\ee As a result, the Gaussian average of a Hermite polynomial of degree $k$ with any polynomial $f$ of degree less than $k$ vanishes \be \left<{\cal H}_{i_1\dots i_k}f\right>_G=0, \ee and the average of the product of Hermite polynomials of the same degree is \be \left<{\cal H}_{i_1\dots i_k}{\cal H}_{j_1\dots j_k}\right>_G =\frac{k!}{2^k}G_{i_1(j_1}\cdots G_{|i_k|j_k)}. \ee By the same trick one could get the relations \bea \left<y^if\right>_G &=& 2G^{ij}\left<\partial_j f\right>_G, \\ \left<y^iy^jf\right>_G &=& 2G^{ij}\left<f\right>_G +4G^{ik}G^{jm}\left<\partial_k\partial_m f\right>_G, \eea etc. \begin{lemma} \label{lemma1} Let $U$ be an open set in $\RR^n$ containing the origin and $\varphi$ be a smooth real function on $U$. Let $\varepsilon>0$ be a positive real parameter, and \be F(\varepsilon) =(4\pi\varepsilon)^{-n/2} \int\limits_{U}dy\; \exp\left(-\frac{1}{4\varepsilon}\left<y, Gy\right>\right)\varphi(y). \ee Then there is the asymptotic expansion of the integral $F(\varepsilon)$ as $\varepsilon\to 0^+$, independent of $U$, \be F(\varepsilon) \sim \sum_{k=0}^\infty \varepsilon^{k}c_k, \ee where \be c_k = \frac{1}{k!} G^{(i_1i_2}\cdots G^{i_{2k-1}i_{2k})} G^{-1/2}\varphi_{i_1 \dots i_{2k}}, \label{55via} \ee and $\varphi_{i_1\dots i_k}=\varphi_{,i_1\dots i_k}(0)$. \end{lemma} {\it Remark.} This can also be written as \be c_k=\frac{1}{k!}(\Delta_G^k G^{-1/2}\varphi)(0), \ee where \be \Delta_{G} = G^{ij}\frac{\partial}{\partial y^i} \frac{\partial}{\partial y^j}. \ee {\it Proof.} This lemma can be proved by using the Taylor expansion. The open set $U$ must contain an open ball $B_\delta(0)$ of some radius $\delta>0$ centered at the origin. After rescaling of the variables $y^i\mapsto \sqrt{\varepsilon}\,y^i$ the domain of the integration becomes $U_\varepsilon$ containing the ball $B_{\delta/\sqrt{\varepsilon}}(0)$ of radius $\delta/\sqrt{\varepsilon}$ and as $\varepsilon\to 0$ it becomes the whole space $\RR^n$. 
Computing the Gaussian average then gives the result. We will need the following lemma to compute the coefficients of the asymptotic expansion. We use the notation introduced at the beginning of this section. We pick a metric $g$ and let $R^g{}_{ijkl}$ be the Riemann tensor, $R^g_{ij}$ be the Ricci tensor, $R_g$ be the scalar curvature, $\nabla^g_i$ be the covariant derivative (also denoted by a semicolon) and $\Delta_g$ be the scalar Laplacian of the metric $g$. Further, let $\sigma$ be the Ruse-Synge function of this metric and $\zeta$ be the function (\ref{410qqq}) associated with the Van Vleck-Morette determinant. We generalize the Gaussian integrals in Euclidean space to Riemannian manifolds by replacing the quadratic form in the exponential by the Ruse-Synge function. \begin{lemma} \label{lemma5} Let $U$ be a sufficiently small neighborhood of a fixed point $x'$ in a manifold $M$, $\varphi(x)$ be a smooth scalar density of weight $1$, and \be F(\varepsilon) =(4\pi\varepsilon)^{-n/2} \int\limits_{U}dx\, \exp\left(-\frac{\sigma(x,x')}{2\varepsilon}\right) \varphi(x)\,. \label{58via} \ee Then as $\varepsilon\to 0^+$ there is the asymptotic expansion independent of $U$ \be F(\varepsilon) \sim \sum_{k=0}^\infty \varepsilon^{k} c_k, \ee where \be c_k=\frac{1}{k!} g^{i_1i_2}\cdots g^{i_{2k-1}i_{2k}} \hat\varphi_{i_1\dots i_{2k}}, \ee and \be \hat\varphi_{i'_1\dots i'_k}=\left[ \left(\nabla^g_{(i_1}-2\zeta_{,(i_1}\right)\cdots \left(\nabla^g_{i_{k})}-2\zeta_{,i_{k})}\right) (g^{-1/2}\varphi)\right](x'). \ee The coefficients $c_k$ are polynomial in the derivatives of the curvature of the metric $g$ and linear in the derivatives of the function $\varphi$.
In particular, \bea c_0 &=& g^{-1/2}[\varphi], \\ c_1 &=& [\Delta_g g^{-1/2}\varphi] -\frac{1}{3}R_g g^{-1/2}[\varphi], \label{555zaz} \\ c_2 &=& \frac{1}{2}g^{ij}g^{kl}[\nabla^g_{(i}\nabla^g_{j}\nabla^g_{k}\nabla^g_{l)}(g^{-1/2}\varphi)] -\frac{2}{3}R_g^{ij}[\nabla^g_{(i}\nabla^g_{j)}(g^{-1/2}\varphi)] \nonumber\\ && -\frac{1}{3}R_g[\Delta_g(g^{-1/2}\varphi)] -\frac{2}{3}R_g{}^{;i}[\nabla^g_{i}(g^{-1/2}\varphi)] \nonumber\\ && +\left(\frac{1}{18}R_g^2 -\frac{1}{5}(\Delta_g R_g) +\frac{4}{45}R_g{}^{ij}R^{g}{}_{ij} -\frac{1}{30}R_g{}^{ijkl}R^g{}_{ijkl}\right)g^{-1/2}[\varphi]. \label{522viax} \eea \end{lemma} \noindent {\it Proof.} Let $\xi^{i'}=-g^{i'j'}\sigma_{,j'}$ and $|\xi|^2=g_{i'j'}\xi^{i'}\xi^{j'}$. Then by changing the variables $x\mapsto \xi$ and using eq. (\ref{59qqq}) we obtain \bea F(\varepsilon) &=& (4\pi\varepsilon)^{-n/2} \int\limits_{\hat U} d\xi\,g^{1/2}(x') \exp\left(-\frac{1}{4\varepsilon}|\xi|^2\right) \hat\varphi(\xi)\,, \eea where $\hat U$ is the corresponding domain in the variables $\xi$ and \be \hat\varphi(\xi)=\exp\left\{-2\zeta(x,x')\right\}g^{-1/2}(x)\varphi(x)\,. \label{557ccx} \ee We rescale the variables $\xi\mapsto\sqrt{\varepsilon}\xi$. Then as $\varepsilon\to 0$ we can extend the integration domain to the whole space $\RR^n$; this does not affect the asymptotic expansion. Therefore, the asymptotic expansion is determined by the Gaussian average \be F(\varepsilon)\sim \left<\hat\varphi(\sqrt{\varepsilon}\xi)\right>_{g}. \ee Next, we expand the function $\hat\varphi$ in the covariant Taylor series \bea \hat\varphi(\sqrt{\varepsilon}\xi) &=&\sum_{k=0}^\infty \frac{\varepsilon^{k/2}}{k!} \hat\varphi_{i'_1\dots i'_k} \xi^{i'_1}\cdots\xi^{i'_k}, \label{558zaz} \eea where \bea \hat\varphi_{i'_1\dots i'_k} &=& \left[\nabla^g_{(i_1}\cdots\nabla^g_{i_k)} e^{-2\zeta}g^{-1/2}\varphi\right](x') \\ &=& \left[ \left(\nabla^g_{(i_1}-2\zeta_{;(i_1}\right)\cdots \left(\nabla^g_{i_{k})}-2\zeta_{;i_{k})}\right) (g^{-1/2}\varphi)\right](x').
\nonumber \eea and compute the Gaussian average over $\xi$ to get the result. Notice that the diagonal values of the derivatives of the function $\zeta$, and therefore the coefficients $\hat\varphi_{i_1\dots i_k}$ and $c_k$, are polynomial in the derivatives of the curvature of the metric $g$. By using (\ref{511mmm}) we obtain, in particular, \bea \hat\varphi &=& g^{-1/2}[\varphi], \label{520via} \\ \hat\varphi_i &=& [\nabla^g_i(g^{-1/2}\varphi)], \label{473naa}\\ \hat\varphi_{ij} &=& [\nabla^g_{(i}\nabla^g_{j)} (g^{-1/2}\varphi)] -2[\zeta_{;ij}g^{-1/2}\varphi], \label{474naa} \\ \hat\varphi_{ijk} &=& [\nabla^g_{(i}\nabla^g_{j}\nabla^g_{k)} (g^{-1/2}\varphi)] -6[\zeta_{;(ij}\nabla^g_{k)}(g^{-1/2}\varphi)] -2[\zeta_{;(ijk)}g^{-1/2}\varphi], \label{474naab} \\ \hat\varphi_{ijkl} &=& [\nabla^g_{(i}\nabla^g_{j}\nabla^g_{k}\nabla^g_{l)} (g^{-1/2}\varphi)] -12[\zeta_{;(ij}\nabla^g_{k}\nabla^g_{l)}(g^{-1/2}\varphi)] -8[\zeta_{;(ijk}\nabla^g_{l)}(g^{-1/2}\varphi)] \nonumber\\ &&+\left(12[\zeta_{;(ij}\zeta_{;kl)}] -2[\zeta_{;(ijkl)}]\right) g^{-1/2}[\varphi]. \label{474via} \eea Finally, by using the diagonal values of the derivatives of the function $\zeta$, (\ref{511mmm})-(\ref{514via}), we obtain the coefficients $c_0$, $c_1$ and $c_2$. Of course, in the case of the flat metric we recover the earlier result (\ref{55via}).
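In the flat case Lemma \ref{lemma1} can be checked directly. A sketch in one dimension with the illustrative choices $G=1$ and $\varphi(y)=\cos y$ (assumptions made here for the test, not taken from the text), for which $F(\varepsilon)=e^{-\varepsilon}$ exactly and the lemma gives $c_0=1$, $c_1=-1$, $c_2=\frac{1}{2}$:

```python
import math

# Flat 1D check: G = 1, phi(y) = cos(y) (illustrative assumptions).  Then
# F(eps) = (4 pi eps)^{-1/2} Int exp(-y^2/(4 eps)) cos(y) dy = exp(-eps),
# and the expansion coefficients are c_0 = 1, c_1 = -1, c_2 = 1/2.
def F(eps, a=-3.0, b=3.0, n=6000):
    # simple trapezoidal quadrature; the tails beyond |y| = 3 are negligible
    dy = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        y = a + i * dy
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-y * y / (4 * eps)) * math.cos(y)
    return total * dy / math.sqrt(4 * math.pi * eps)

eps = 0.05
approx = 1 - eps + eps**2 / 2        # c_0 + eps*c_1 + eps^2*c_2
print(F(eps), math.exp(-eps), approx)
```

The numerical value agrees with $e^{-\varepsilon}$ to machine precision, and with the two-term expansion up to the expected $O(\varepsilon^3)$ error.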
\subsection{Morse Lemma} We say that a smooth real-valued symmetric function $\Sigma(x,x')$ on $M\times M$ has a {\it non-degenerate critical point on the diagonal} if: \begin{enumerate} \item the first derivatives vanish on the diagonal, $ [\Sigma_{,i}]=0, $ and \item the Hessian is positive definite on the diagonal, $G_{ij}=[\Sigma_{,ij}]>0.$ \end{enumerate} \begin{lemma} \label{lemma0} Let $U$ be a sufficiently small open set in a manifold $M$, $\Sigma: U\times U\to\RR $ be a smooth real-valued symmetric non-negative function that has a non-degenerate critical point on the diagonal and vanishes on the diagonal, that is, $[\Sigma]=[\Sigma_{,i}]=0$ and $G_{ij}=[\Sigma_{,ij}]>0$. Then there exists a local diffeomorphism $\eta^{a}=\eta^{a}(x,x')$ such that the function $\Sigma$ has the form \be \Sigma(x,x')=\frac{1}{2}G_{ab}(x')\eta^{a}(x,x')\eta^{b}(x,x'). \label{513qq} \ee \end{lemma} \noindent {\it Proof.} We pick some metric $g_{ij}$ and define the corresponding Ruse-Synge function $\sigma(x,x')$ and the variables $\xi^{i'}=-g^{i'j'}\sigma_{,j'}$ introduced in (\ref{514xxz}). We expand the function $\Sigma$ in a covariant Taylor series \be \Sigma(x,x')= \sum_{k=2}^\infty\frac{1}{k!} \Sigma_{i'_1\dots i'_k} \xi^{i'_1}\cdots \xi^{i'_k}\,, \label{525zza} \ee where $\Sigma_{i'_1\dots i'_k}= \left[\Sigma_{;(i_1\dots i_k)}\right](x')$. This can be written in the form \be \Sigma(x,x')=\frac{1}{2}A_{i'j'}(x,x')\xi^{i'}\xi^{j'}, \ee where \be A_{i'j'}(x,x')= \sum_{k=0}^\infty\frac{2}{(k+2)!} \Sigma_{i'j'i'_1\dots i'_k} \xi^{i'_1}\cdots \xi^{i'_k}\,. \ee The matrix $A$ is real and symmetric, so it can be written (nonuniquely) in the form $A=B^TGB$; that is, \be A_{i'j'}=G_{ab}B^{a}{}_{i'} B^{b}{}_{j'}. \ee The matrix $B$ is defined up to a ($G$-orthogonal) transformation $B\mapsto UB$ with the matrix $U$ satisfying $U^TGU=G$. Then the function $\Sigma$ takes the Morse form (\ref{513qq}) with $ \eta^{a}=B^{a}{}_{i'}\xi^{i'}.
$ The Morse diffeomorphism is obviously also defined up to an orthogonal transformation $\eta\mapsto U\eta$. It can be computed explicitly in terms of the Taylor series \be \eta^{a}= \sum_{k=1}^\infty \frac{1}{k!} \eta^a{}_{i'_1\dots i'_k}\xi^{i'_1}\cdots \xi^{i'_k}; \ee the coefficients $\eta^a{}_{i'_1\dots i'_k} =[\eta^a{}_{;(i_1\dots i_k)}](x')$ are expressed in terms of the derivatives $\Sigma_{i'_1\dots i'_k}$ of the function $\Sigma$ on the diagonal at the point $x'$. They can be obtained by substituting this Taylor series in (\ref{513qq}) and comparing it with (\ref{525zza}). The solution is {\it not unique}. The first coefficient can be chosen to be a frame of vectors $\eta^a{}_{i'}$ at the point $x'$ determined by \be G_{ab}\eta^a{}_{i'}\eta^b{}_{j'}=\Sigma_{i'j'}. \ee Of course, the matrix $\eta^a{}_{i'}$ is defined up to an orthogonal transformation. \subsection{Asymptotics of Laplace Type Integrals} We will need the following lemma. We fix some metric $g_{ij}$; all covariant derivatives and the curvature are defined with respect to this metric. \begin{lemma} \label{lemma2} Let $x'$ be a point in a manifold $M$ and $U$ be a sufficiently small neighborhood of this point. Let $\Sigma: U\times U\to\RR $ be a smooth real-valued symmetric non-negative function that has a non-degenerate critical point on the diagonal and vanishes on the diagonal. Let $\varphi(x,x')$ be a smooth scalar density of weight $1$ and \be F(\varepsilon)=(4\pi\varepsilon)^{-n/2} \int\limits_{U}dx\;\exp\left(-\frac{1}{2\varepsilon}\Sigma(x,x')\right) \varphi(x,x'). \label{541zza} \ee Then there is the asymptotic expansion as $\varepsilon\to 0^+$ \be F(\varepsilon) \sim \sum_{k=0}^\infty \varepsilon^{k} F_k. \label{531via} \ee The coefficients $F_k$ do not depend on the domain $U$; they depend only on the derivatives of the functions $\varphi$ and $\Sigma$ at the point $x'$.
Let $\Sigma_{i'_1\dots i'_k} =[\nabla^g_{(i_1}\cdots\nabla^g_{i_k)}\Sigma](x')$ be the symmetrized covariant derivatives of the function $\Sigma$ on the diagonal at the point $x'$, in particular, let $G_{ij}=[\Sigma_{,ij}](x')$ be the Hessian on the diagonal, $G^{ij}$ be the inverse of this matrix and $G=\det G_{ij}$ be its determinant. Let $\varphi_{i'_1\dots i'_k} =[\nabla^g_{(i_1}\cdots\nabla^g_{i_k)}g^{-1/2}\varphi](x')$ be the symmetrized covariant derivatives of the function $\varphi$ on the diagonal at the point $x'$. Then: \begin{enumerate} \item the coefficients $F_k$ have the form $F_k=G^{-1/2}\tilde F_k$, where \item $\tilde F_k$ are linear in the derivatives, $\varphi_{i'_1\dots i'_k}$, of the function $\varphi$ at the point $x'$ and \item polynomial in the inverse Hessian, $G^{ij}$, and the derivatives, $\Sigma_{i'_1\dots i'_k}$, $k>2$, of the function $\Sigma$ on the diagonal at $x'$ of order higher than $2$. \item The first two coefficients are \bea F_0 &=&G^{-1/2}[\varphi], \label{532via} \\ F_1 &=& G^{-1/2}g^{1/2}\Biggl\{G^{ij}[\nabla^g_{i}\nabla^g_{j}(g^{-1/2}\varphi)] -G^{ij}G^{pq}\Sigma_{ipq}[\nabla^g_j(g^{-1/2}\varphi)] \\ && +\Biggl[ -\frac{1}{3}G^{ij}R^g_{ij} +\frac{1}{12}\Bigl(2G^{il}G^{jm} +3G^{ij}G^{lm} \Bigr)G^{kn}\Sigma_{ijk}\Sigma_{lmn} \nonumber\\ &&-\frac{1}{4}G^{ij}G^{kl}\Sigma_{ijkl} \Biggr] g^{-1/2}[\varphi]\Biggr\}. 
\label{533via} \eea \item In the case when the function $\varphi$ and its first derivative vanish on the diagonal, $[\varphi]=[\nabla^g_i\varphi]=0$, the third coefficient is \bea F_2 &=& g^{1/2}G^{-1/2}\Biggl\{ \frac{1}{2}G^{ij}G^{kl}[\nabla^g_{(i}\nabla^g_j\nabla^g_k\nabla^g_{l)}g^{-1/2}\varphi] \label{533viax}\\ && -\frac{1}{3}\left( 2G^{ij}G^{qk} +3G^{iq}G^{jk} \right)G^{pl}\Sigma_{ipq} [\nabla^g_{(j}\nabla^g_k\nabla^g_{l)}g^{-1/2}\varphi] \nonumber \\ && +\Biggl[ -\frac{1}{3}\left(G^{ij}G^{kl}+2G^{ik}G^{jl}\right)R^g_{ij} -\frac{1}{4}\left(G^{pq}G^{kl} +4G^{kp}G^{lq}\right)G^{ij}\Sigma_{ijpq} \nonumber\\ && +\frac{1}{72}\Biggl(2G^{ij}G^{pr}G^{qs}G^{kl} +3G^{ij}G^{pq}G^{rs}G^{kl} +6G^{ik}G^{jl}G^{pq}G^{rs} \nonumber\\ && +12G^{ij}G^{pq}G^{kr}G^{ls} +12G^{ij}G^{pr}G^{kq}G^{sl} \Biggr)\Sigma_{ipq}\Sigma_{jrs} \Biggr][\nabla^g_{(k}\nabla^g_{l)}g^{-1/2}\varphi] \Biggr\}. \nonumber \eea \end{enumerate} \end{lemma} \noindent {\it Remark.} Notice that the metric $g$ is arbitrary; in particular, it could be taken to be equal to the Hessian, $g_{ij}=G_{ij}$. \noindent {\it Proof.} First, it is easy to show that the asymptotic expansion does not depend on the size of the domain $U$, so $U$ can be assumed to be sufficiently small. For such $U$ there is a Morse diffeomorphism $\eta^a=\eta^a(x,x')$ such that $\Sigma=\frac{1}{2}G_{ab}\eta^a\eta^b$, and the integral takes the form \be F(\varepsilon)=(4\pi\varepsilon)^{-n/2} \int\limits_{\tilde U}d\eta\; \exp\left(-\frac{1}{4\varepsilon}\left<\eta,G\eta\right>\right) f(\eta), \ee where $\tilde U$ is the corresponding domain for the variables $\eta$ and \be f(\eta)= \left(\det\left(\frac{\partial \eta^a}{\partial x^i}\right)\right)^{-1} \varphi(x(\eta)). \ee Now, by applying Lemma \ref{lemma1} we get the asymptotic expansion \be F(\varepsilon)\sim \sum_{k=0}^\infty \varepsilon^{k}F_k, \ee where $F_k=\frac{1}{k!}G^{-1/2}\left(\left(\Delta^\eta_G\right)^k f\right)(0)$ and $\Delta^\eta_G$ is the operator $\Delta_G$ in the variables $\eta$.
The coefficients $F_k$ can now be computed explicitly by using Taylor series. We decompose the function $\Sigma$ via \be \Sigma=\frac{1}{2}G_{i'j'}\xi^{i'}\xi^{j'}+\hat \Sigma, \ee where $\xi^{i'}$ are the variables introduced in (\ref{514xxz}) and \be \hat \Sigma(\xi)=\sum_{k=3}^\infty \frac{1}{k!} \Sigma_{i'_1\dots i'_k}\xi^{i'_1}\cdots \xi^{i'_k}, \label{573zaz} \ee where $\Sigma_{i'_1\dots i'_k}=[\Sigma_{;(i_1\dots i_k)}](x')$. Then by changing the variables $x\mapsto \xi$ and using (\ref{59qqq}) the integral takes the form \be F(\varepsilon)=(4\pi\varepsilon)^{-n/2} \int\limits_{\hat U}d\xi\; g^{1/2}(x') \exp\left(-\frac{1}{4\varepsilon}\left<\xi,G\xi\right>\right) \psi(\xi,\varepsilon), \label{541zzb} \ee where $\hat U$ is the modified domain and \be \psi(\xi,\varepsilon)= \exp\left(-\frac{1}{2\varepsilon}\hat\Sigma(\xi)\right) \hat\varphi(\xi), \ee with $\hat\varphi(\xi)$ defined by (\ref{557ccx}). Next, by rescaling the variables $\xi^{i'}\mapsto \sqrt{\varepsilon}\,\xi^{i'}$, we extend the integration to the whole space $\RR^n$ so that the asymptotics of the integral are given by the Gaussian average \be F(\varepsilon)\sim g^{1/2}G^{-1/2} \left<\psi(\sqrt{\varepsilon}\xi,\varepsilon)\right>_{G}. \ee Now, we expand the function $\psi(\sqrt{\varepsilon}\xi,\varepsilon)$ in powers of $\varepsilon$ \be \psi(\sqrt{\varepsilon}\xi,\varepsilon)= \sum_{k=0}^\infty \varepsilon^{k/2} \psi_{k/2}(\xi), \ee where $\psi_{k/2}(\xi)$ are polynomials in $\xi$. It is easy to see that the half-integer order coefficients $\psi_{k+1/2}(\xi)$ are odd polynomials and the integer order coefficients $\psi_{k}(\xi)$ are even polynomials. Therefore, the Gaussian averages of the half-integer order coefficients vanish, $\left<\psi_{k+1/2}(\xi)\right>_G=0$. Thus, finally we obtain the asymptotic expansion (\ref{531via}) with only integer powers of $\varepsilon$ with \be F_k=g^{1/2}G^{-1/2} \left<\psi_{k}(\xi)\right>_G.
\ee Then by computing the Gaussian average (\ref{53via}) we get the explicit form of the coefficients of the asymptotic expansion. By using the covariant Taylor expansions of the function $\hat\varphi$, (\ref{558zaz}), and of the function $\hat\Sigma$, (\ref{573zaz}), we obtain \bea \psi_0(\xi) &=& \hat\varphi, \\ \psi_1(\xi) &=& \frac{1}{2}\hat\varphi_{i'j'}\xi^{i'}\xi^{j'} -\frac{1}{48}\left( 4\Sigma_{i'j'k'}\hat\varphi_{l'} +\Sigma_{i'j'k'l'}\hat\varphi \right) \xi^{i'}\xi^{j'}\xi^{k'}\xi^{l'} \nonumber\\ && +\frac{1}{288}\Sigma_{i'j'k'}\Sigma_{l'm'n'}\xi^{i'}\xi^{j'}\xi^{k'}\xi^{l'} \xi^{m'}\xi^{n'}\hat\varphi, \eea where $\hat\varphi$, $\hat\varphi_i$ and $\hat\varphi_{ij}$ are given by (\ref{520via})-(\ref{474naa}). By computing the Gaussian average this gives \bea F_0 &=& g^{1/2}G^{-1/2}\hat\varphi, \\ F_1 &=& g^{1/2}G^{-1/2}\Biggl\{ G^{ij}\hat\varphi_{ij} -G^{ij}G^{kl}\Sigma_{(ijk}\hat\varphi_{l)} -\frac{1}{4}G^{ij}G^{kl}\Sigma_{ijkl}\hat\varphi \nonumber\\ && +\frac{5}{12}G^{ij}G^{kl}G^{mn}\Sigma_{(ijk}\Sigma_{lmn)}\hat\varphi\Biggr\}. \eea By using Lemma 2.1 of \cite{novoseltsev05} we get \be G^{ij}G^{kl}G^{mn}\Sigma_{(ijk}\Sigma_{lmn)} =\frac{1}{5}G^{ij}G^{kl}G^{mn}\left( 2\Sigma_{ikm}\Sigma_{jln} +3\Sigma_{ijm}\Sigma_{kln} \right). \label{559via} \ee Therefore, \bea F_1 &=& g^{1/2}G^{-1/2}\Biggl\{ G^{ij}\hat\varphi_{ij} -G^{ij}G^{kl}\Sigma_{ijk}\hat\varphi_{l} -\frac{1}{4}G^{ij}G^{kl}\Sigma_{ijkl}\hat\varphi \nonumber\\ && +\frac{1}{12} \left(2G^{il}G^{jm} +3G^{ij}G^{lm}\right)G^{kn}\Sigma_{ijk}\Sigma_{lmn} \hat\varphi\Biggr\}. \eea Finally, by using eqs. (\ref{520via})-(\ref{474naa}) we get the result (\ref{533via}). 
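The combinatorics behind this symmetrization identity can be verified by brute force. The following sketch (an illustrative aside; the dimension $n=2$, the random seed and the tolerance are arbitrary choices) symmetrizes $\Sigma_{ijk}\Sigma_{lmn}$ over all six indices and compares both sides of (\ref{559via}) for a generic symmetric $3$-tensor and a generic symmetric matrix $G^{ij}$:

```python
import itertools
import numpy as np

# Brute-force check of the symmetrization identity (559via) in dimension n = 2:
# G^{ij}G^{kl}G^{mn} S_{(ijk}S_{lmn)}
#   = (1/5) G^{ij}G^{kl}G^{mn} (2 S_{ikm}S_{jln} + 3 S_{ijm}S_{kln}).
rng = np.random.default_rng(0)
n = 2
S = rng.standard_normal((n, n, n))
S = sum(S.transpose(p) for p in itertools.permutations(range(3))) / 6.0  # symmetric 3-tensor
Gi = rng.standard_normal((n, n))
Gi = (Gi + Gi.T) / 2.0                                                   # plays the role of G^{ij}

# Symmetrize S_{ijk} S_{lmn} over all six indices (720 permutations).
T = np.einsum('ijk,lmn->ijklmn', S, S)
T = sum(T.transpose(p) for p in itertools.permutations(range(6))) / 720.0

lhs = np.einsum('ij,kl,mn,ijklmn->', Gi, Gi, Gi, T)
rhs = (2.0 * np.einsum('ij,kl,mn,ikm,jln->', Gi, Gi, Gi, S, S)
       + 3.0 * np.einsum('ij,kl,mn,ijm,kln->', Gi, Gi, Gi, S, S)) / 5.0
print(lhs, rhs)
```

The two sides agree to rounding error; the weights $2/5$ and $3/5$ are just the fractions of the $720$ permutations realizing the two inequivalent contraction patterns.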
In the case when the function $\varphi$ and its first derivative vanish on the diagonal we also get \bea \psi_2(\xi) &=& \frac{1}{24} \hat\varphi_{i'j'k'l'}\xi^{i'}\xi^{j'}\xi^{k'}\xi^{l'} -\frac{1}{72}\Sigma_{i'j'k'}\hat\varphi_{l'm'n'}\xi^{i'}\xi^{j'}\xi^{k'}\xi^{l'}\xi^{m'}\xi^{n'} \\ && -\frac{1}{96}\Sigma_{i'j'k'l'}\hat\varphi_{m'n'}\xi^{i'}\xi^{j'}\xi^{k'}\xi^{l'}\xi^{m'}\xi^{n'} +\frac{1}{576}\Sigma_{i'j'k'}\Sigma_{l'm'n'}\hat\varphi_{p'q'} \xi^{i'}\xi^{j'}\xi^{k'}\xi^{l'}\xi^{m'}\xi^{n'} \xi^{p'}\xi^{q'}. \nonumber \eea By computing the Gaussian average over $\xi$ we get \bea F_2 &=& g^{1/2}G^{-1/2}\Biggl\{ \frac{1}{2}G^{ij}G^{kl}\hat\varphi_{ijkl} -\frac{5}{3}G^{ij}G^{kl}G^{mn}\Sigma_{(ijk}\hat\varphi_{lmn)} \\ && -\frac{5}{4}G^{ij}G^{kl}G^{mn}\Sigma_{(ijkl}\hat\varphi_{mn)} +\frac{35}{12}G^{ij}G^{kl}G^{mn}G^{pq}\Sigma_{(ijk}\Sigma_{lmn}\hat\varphi_{pq)} \Biggr\}. \eea We use Lemma 2.1 of \cite{novoseltsev05} to get \bea G^{ij}G^{kl}G^{mn}\Sigma_{(ijkl}\hat\varphi_{mn)} &=&\frac{1}{5}G^{ij}G^{kl}G^{mn}\left(\Sigma_{ijkl}\hat\varphi_{mn} +4\Sigma_{ikmn}\hat\varphi_{jl}\right) \\ G^{ij}G^{kl}G^{mn}G^{pq}\Sigma_{(ijk}\Sigma_{lmn}\hat\varphi_{pq)} &=& \frac{1}{35} G^{ij}G^{kl}G^{mn}G^{pq} \Biggl( 2\Sigma_{ikm}\Sigma_{jln}\hat\varphi_{pq} +3\Sigma_{ijk}\Sigma_{lmn}\hat\varphi_{pq} \nonumber\\ && +6\Sigma_{ijk}\Sigma_{mnp}\hat\varphi_{lq} +12\Sigma_{ijk}\Sigma_{lmp}\hat\varphi_{nq} +12\Sigma_{ikm}\Sigma_{jlp}\hat\varphi_{nq} \Biggr).
\nonumber\\ \eea Now, by using these equations together with (\ref{559via}) we obtain \bea F_2 &=& g^{1/2}G^{-1/2}\Biggl\{ \frac{1}{2}G^{ij}G^{kl}\hat\varphi_{ijkl} -\frac{1}{3}\left(2 G^{ij}G^{qk} +3G^{iq}G^{jk} \right)G^{pl}\Sigma_{ipq}\hat\varphi_{jkl} \\ && +\Biggl[ -\frac{1}{4}\left(G^{pq}G^{kl} +4G^{kp}G^{lq}\right)G^{ij}\Sigma_{ijpq} \nonumber\\ && +\frac{1}{12}\Biggl(2G^{ij}G^{pr}G^{qs}G^{kl} +3G^{ij}G^{pq}G^{rs}G^{kl} +6G^{ik}G^{jl}G^{pq}G^{rs} \nonumber\\ && +12G^{ij}G^{pq}G^{kr}G^{ls} +12G^{ij}G^{pr}G^{kq}G^{sl} \Biggr)\Sigma_{ipq}\Sigma_{jrs} \Biggr]\hat\varphi_{kl} \Biggr\}. \nonumber \eea Finally, by using eqs. (\ref{520via})-(\ref{474via}) and (\ref{511mmm})-(\ref{514via}) we obtain (\ref{533viax}). Of course, in the particular case when $\Sigma$ is the Ruse-Synge function, $\Sigma=\sigma^g$ of the metric $g$, all its symmetrized covariant derivatives of order higher than two vanish on the diagonal and the second derivative is equal to the metric, which gives the earlier result (\ref{555zaz})-(\ref{522viax}). \section{Asymptotics of Heat Traces} \setcounter{equation}0 \subsection{Classical Heat Trace} First of all, it is easy to see that the asymptotics of the classical heat trace as $t\to \infty$ are determined by the bottom eigenvalue \be \Theta_{\pm}(t) \sim d^\pm_1\exp\left(-t\lambda_1^\pm\right), \ee where $d^\pm_1=\tr P^\pm_1$ is the multiplicity of the first eigenvalue and $P^\pm_1$ is the projection to the first eigenspace. We will be primarily interested in the asymptotics as $t\to 0$. For Laplace type operators $L_\pm$ there is an asymptotic expansion of the heat kernel $U_\pm(t;x,x')$ in the neighborhood of the diagonal as $t\to 0$ (see e.g.
\cite{avramidi91,avramidi00,avramidi10,avramidi15}) \bea U_\pm(t;x,x')\sim (4\pi)^{-n/2}\exp\left(-\frac{\sigma_\pm}{2t}\right) \sum_{k=0}^\infty t^{k-n/2} \tilde a^\pm_k(x,x'), \label{513xxcd} \eea where \bea \tilde a^\pm_k(x,x') &=& \frac{(-1)^k}{k!}M_\pm^{1/2}(x,x'){\cal P}_\pm(x,x') a^\pm_k(x,x') \nonumber\\ &=& \frac{(-1)^k}{k!}g_\pm^{1/4}(x)g_\pm^{1/4}(x') e^{\zeta_\pm(x,x')}{\cal P}_\pm(x,x') a^\pm_k(x,x'), \label{513xxcq} \eea $\sigma_\pm=\sigma_\pm(x,x')$ is the Ruse-Synge function of the metric $g_\pm$, $M_\pm=M_\pm(x,x')$ is the Van Vleck-Morette determinant, ${\cal P}_\pm={\cal P}_\pm(x,x')$ is the operator of parallel transport of sections along the geodesic in the connection $\nabla^\pm$ and the metric $g_\pm$ from the point $x'$ to the point $x$ and $a^\pm_k=a^\pm_k(x,x')$ are the usual heat kernel coefficients. In particular, \cite{avramidi00} \be a^\pm_0 = I, \label{64via} \ee and \be [a^\pm_1] = Q_\pm-\frac{1}{6}R_\pm I, \label{65via} \ee where $R_\pm$ is the scalar curvature of the metric $g_\pm$ and for the Dirac operator $Q_\pm$ is given by (\ref{qsxxa}). Therefore, there is the asymptotic expansion (\ref{513xxc}) of the classical heat trace (\ref{532xxc}). This is the classical heat trace asymptotics of Laplace type operators. By using the off-diagonal expansion of the heat kernel (\ref{513xxcd}) for the Laplace type operator (and using the diagonal values of the derivatives of the functions $\sigma_\pm, M_\pm, {\cal P}_\pm$) one can also obtain the asymptotic expansion of the classical spectral invariant $H(t)$, (\ref{12via}), for the Dirac type operator, \be H_\pm(t)\sim (4\pi)^{-n/2}\sum_{k=0}^\infty t^{k-n/2} H_k^\pm, \ee where \be H_k^\pm =\frac{(-1)^k}{k!}\int\limits_M dx\; g_\pm^{1/2}\tr\; [D_\pm a^\pm_k]. 
\ee Here we used, in particular, a useful relation \be [D_\pm \tilde a_k]=\frac{(-1)^k}{k!}g_\pm^{1/2}[D_\pm a_k]. \label{68via} \ee Further, by using \be [D_\pm {\cal P}_\pm]= S_\pm \label{68viab} \ee and the results of \cite{avramidi91,avramidi00,avramidi15} we get \bea H_0^\pm &=& \int\limits_M dx\; g_\pm^{1/2}\tr\; S_\pm, \\ H_1^\pm &=& \int\limits_M dx\; g_\pm^{1/2} \tr\left\{i\gamma^j_\pm\left(-\frac{1}{2}\nabla^\pm_jQ_\pm -\frac{1}{6}\nabla^\pm_k\mathcal{R}_\pm^k{}_j\right) +S_\pm\left(\frac{1}{6}R_\pm-Q_\pm\right) \right\}. \label{69via} \eea For manifolds without boundary we can safely neglect the total derivative terms in (\ref{69via}). \subsection{Combined Heat Trace for Laplace Type Operators} To compute the asymptotics of the relative spectral invariants $\Psi(t,s)$ and $\Phi(t,s)$ we rescale the variables $t \mapsto \varepsilon t$ and $s\mapsto \varepsilon s$ and study the asymptotics as $\varepsilon\to 0$ or $\varepsilon\to\infty$. It is easy to see that the asymptotics of the combined heat traces $X(t,s)$, (\ref{238ccx}), and $Y(t,s)$, (\ref{534xxc}), as $\varepsilon\to \infty$ are determined by the bottom eigenvalues $\lambda^\pm_1$ and $\mu^\pm_1$, \bea X\left(\varepsilon{}t,\varepsilon{}s\right) &\sim & \exp\left[-\varepsilon{}\left(t\lambda_1^++s\lambda_1^-\right)\right] \mathrm{ Tr} P_1^+P_1^-, \\ Y\left(\varepsilon{}t,\varepsilon{}s\right) &\sim & \exp\left[-\varepsilon{} \left(t(\mu_1^+)^2+s(\mu_1^-)^2\right)\right]\;\mu_1^+\mu_1^-\; \mathrm{ Tr} P_1^+ P_1^-. \eea We will be interested mainly in the asymptotics as $\varepsilon\to 0$. In this subsection we prove Theorem \ref{theorem1} for the combined heat trace $X(t,s)$. {\bf Proof of Theorem \ref{theorem1}, Part I.} The combined trace $X(t,s)$ is given by the integral (\ref{533xxca}) over $M\times M$ of the form \bea X(t,s) &=& \int\limits_{M\times M} dx\; dx' f_1(t,s;x,x'), \eea where \be f_1(t,s;x,x')=\frac{1}{2}\tr\Bigl\{U_+(t,x,x')U_-(s,x',x) +U_-(s,x,x')U_+(t,x',x)\Bigr\}.
\ee Notice that we made it manifestly symmetric under the exchange $(t,L_+)\leftrightarrow (s,L_-)$ by symmetrizing the integrand in $x$ and $x'$ (alternatively, we could do the symmetrization $(t,L_+)\leftrightarrow (s,L_-)$ at the end of the calculations). Let $r^\pm_{\rm inj}$ be the injectivity radii of the metrics $g_\pm$ and \be \rho=\min\{r^+_{\rm inj},r^-_{\rm inj}\}. \ee We fix a point $x'$ in the manifold $M$ (of course, we could instead fix the point $x$; we have to do it both ways to achieve the required symmetry $(t,L_+)\leftrightarrow(s,L_-)$ of the heat trace). Let $B^\pm_r(x')$ be the geodesic balls in the metric $g^\pm_{ij}$ centered at $x'$ of radius $r<\rho$. Let $B_r(x')\subset B^+_r(x')\cap B^-_r(x')$ be an open set contained in both of these balls. We decompose the combined trace as follows \bea X(t,s) &=& X_{\rm diag}(t,s)+X_{\rm off-diag}(t,s), \eea where \bea X_{\rm diag}(t,s)&=& \int\limits_{M}dx'\,\int\limits_{B_r(x')}dx\,f_1(t,s;x,x'), \\ X_{\rm off-diag}(t,s)&=&\int\limits_M dx' \int\limits_{M-B_r(x')}dx\;f_1(t,s;x,x')\,. \eea To estimate these integrals we will need the following lemma. \begin{lemma} \label{lemma6} The off-diagonal part $X_{\rm off-diag}(\varepsilon{}t,\varepsilon{}s)$ of the combined heat trace is exponentially small as $\varepsilon\to 0$ and does not contribute to its asymptotic expansion, that is, as $\varepsilon\to 0$ \be X(\varepsilon{}t,\varepsilon{}s)\sim X_{\rm diag}(\varepsilon{}t,\varepsilon{}s). \ee \end{lemma} \noindent {\it Proof.} This can be proved by using the standard elliptic estimates of the heat kernel. For any $x\in M-B_r(x')$ and $0<t<1$ there is an estimate \cite{grigoryan09} \be \left|U_\pm(t;x,x')\right|\le C_1 t^{-n/2}\exp\left(-\frac{r^2}{4t}\right), \ee where $C_{1}=C_1(r)$ is some constant.
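The mechanism of this estimate can be seen already for the flat one-dimensional heat kernel; the following sketch (an illustrative aside with arbitrary values of $r$ and $t$) checks that the total mass outside $|x|>r$, which equals $\mathrm{erfc}(r/(2\sqrt{t}))$, is bounded by $e^{-r^2/(4t)}$ and is therefore smaller than any power of $t$ as $t\to 0$:

```python
import math

# For the flat 1D heat kernel K(t,x) = (4 pi t)^{-1/2} exp(-x^2/(4t)), the
# off-diagonal mass int_{|x|>r} K(t,x) dx = erfc(r/(2 sqrt(t))) <= exp(-r^2/(4t)),
# so it is exponentially small as t -> 0 (illustrative values of r and t).
r = 1.0
for t in (0.1, 0.05, 0.01):
    mass = math.erfc(r / (2.0 * math.sqrt(t)))
    bound = math.exp(-r**2 / (4.0 * t))
    print(t, mass, bound)
```

Already at $t=0.01$ the off-diagonal mass is of order $10^{-12}$, which is the numerical counterpart of the statement of Lemma \ref{lemma6}.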
Therefore, \be |f_1(t,s;x,x')|\le C_2\varepsilon^{-n}t^{-n/2}s^{-n/2}\exp\left[-\frac{r^2}{4\varepsilon{}} \left(\frac{1}{t}+\frac{1}{s}\right)\right] \ee with some constant $C_2$. This means that as $\varepsilon\to 0$ \be X_{\rm off-diag}(\varepsilon{}t,\varepsilon{}s)\sim 0. \ee The statement follows. Now, by using Lemma \ref{lemma6} and the asymptotic expansion of the heat kernel (\ref{513xxcd}) we obtain the asymptotic expansion of the combined heat trace (\ref{533xxc}) as $\varepsilon\to 0$, \be X\left(\varepsilon{}t,\varepsilon{}s\right) \sim (4\pi\varepsilon)^{-n/2} \sum_{m=0}^\infty \varepsilon^{m} X_{m}(\varepsilon,t,s), \label{622via} \ee where \bea X_{m}(\varepsilon,t,s) \label{527xxc} =\left(4\pi\varepsilon ts\right)^{-n/2} \int\limits_M dx' \int\limits_{B_r(x')}dx\; \exp\left\{-\frac{1}{2\varepsilon ts{}}\Sigma(t,s;x,x')\right\} \Lambda_{m}(t,s;x,x'), \nonumber\\ \eea and \bea \Sigma(t,s;x,x') &=& s\sigma_+(x,x') +t\sigma_-(x,x'), \label{529zzc} \\[5pt] \Lambda_{m}(t,s;x,x') &=& \sum_{j=0}^m t^{m-j}s^{j} \frac{1}{2}\tr\,\left\{ \tilde a^+_{m-j}(x,x')\tilde a^-_j(x',x) +\tilde a^-_{m-j}(x,x')\tilde a^+_j(x',x)\right\}. \nonumber\\ \label{530zzc} \eea Next, we compute the asymptotic expansion of the function $X_m(\varepsilon,t,s)$. \begin{lemma} \label{lemma7} There is the asymptotic expansion as $\varepsilon\to 0$ \be X_{m}\left(\varepsilon,t,s\right) \sim \sum_{k=0}^\infty \varepsilon^{k} X_{m,k}(t,s), \label{626via} \ee where \be X_{m,k}(t,s)=\int\limits_M dx\; g^{1/2}(t,s)b_{m,k}(t,s). \label{631viab} \ee The coefficients $b_{m,k}(t,s)$ are scalar invariants constructed polynomially from the diagonal values of the derivatives of the function $\Lambda_{m}$ and the derivatives of the function $\Sigma(t,s)$ of order higher than $2$ as well as the metrics $g_+^{ij}$, $g_-^{ij}$ and $g_{ij}(t,s)$, (\ref{113via}).
The coefficients $b_{m,k}(t,s)$ are homogeneous functions of $t$ and $s$ of degree $(m+k)$ and the coefficients $X_{m,k}(t,s)$ are homogeneous functions of $t$ and $s$ of degree $(m+k-n/2)$. In particular, \bea b_{m,0}(t,s) &=&\sum_{j=0}^m \frac{(-1)^{m}}{(m-j)!j!}t^{m-j}s^{j} \tr \left\{[a_{m-j}^+]\,[a_j^-]\right\}. \label{628via} \eea \end{lemma} {\it Proof.} The function $\Sigma(t,s)=\Sigma(t,s;x,x')$, (\ref{529zzc}), is smooth and positive (here and below we omit the space variables $x$ and $x'$). It has the absolute minimum on the diagonal, at $x=x'$, equal to zero, \be [\Sigma(t,s)]=0 \ee and has a non-degenerate critical point on the diagonal, that is, the first derivatives vanish on the diagonal \be [\Sigma_{,i}(t,s)]=0, \ee and the Hessian $ [\Sigma_{,ij}(t,s)] =G_{ij}(t,s), $ which is exactly equal to the matrix $G_{ij}(t,s)$, defined by (\ref{126via}), is positive on the diagonal. Also, by using (\ref{128viab}) we can see that the determinant of the Hessian has the form \be G = \det G_{ij} = \frac{g_+g_-}{g}. \label{631via} \ee where $g=\det g_{ij}$ and $g_\pm=\det g^\pm_{ij}$. Now, by using Lemma \ref{lemma2} we compute the asymptotic expansion of the integral (\ref{527xxc}) which gives (\ref{626via}) and proves the first part of the lemma. The coefficient $b_{m,0}(t,s)$ is given by (\ref{532via}), so \be b_{m,0} = g^{-1/2}G^{-1/2}[\Lambda_m] = g_+^{-1/2}g_-^{-1/2}[\Lambda_m]. \label{631viac} \ee Further, we compute the diagonal value of the functions $\Lambda_{m}(t,s)$, (\ref{530zzc}) \be [\Lambda_{m}(t,s)] = g_+^{1/2}g_-^{1/2}\sum_{j=0}^m\frac{(-1)^{m}}{(m-j)!j!} t^{m-j}s^{j} \tr\{ [a_{m-j}^+]\,[a_j^-]\}, \ee to get (\ref{628via}). This proves Lemma \ref{lemma7}. Thus, by using (\ref{622via}) and (\ref{626via}) we obtain the asymptotic expansion (\ref{1zaab}) of the combined heat trace $X(t,s)$ with the coefficients \be b_k(t,s)=\sum_{j=0}^k b_{k-j,j}(t,s). \label{634via} \ee This proves Theorem \ref{theorem1} for the trace $X(t,s)$. 
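The homogeneity claim of Lemma \ref{lemma7} can be checked directly on the explicit formula (\ref{628via}). The following sketch treats the traces $\tr\{[a^+_{m-j}]\,[a^-_j]\}$ as opaque symbols (an assumption made purely for illustration) and verifies $b_{m,0}(\lambda t,\lambda s)=\lambda^{m}\,b_{m,0}(t,s)$ for small $m$:

```python
import sympy as sp

# Homogeneity check of b_{m,0} from (628via): with the traces tr{[a^+_{m-j}][a^-_j]}
# replaced by opaque symbols A[m-j, j], one has b_{m,0}(lam*t, lam*s) = lam**m * b_{m,0}(t, s).
t, s, lam = sp.symbols('t s lam', positive=True)
A = sp.IndexedBase('A')

def b_m0(m, tt, ss):
    return sum((-1)**m / (sp.factorial(m - j) * sp.factorial(j))
               * tt**(m - j) * ss**j * A[m - j, j] for j in range(m + 1))

for m in range(4):
    assert sp.expand(b_m0(m, lam * t, lam * s) - lam**m * b_m0(m, t, s)) == 0
print("b_{m,0} is homogeneous of degree m for m = 0, 1, 2, 3")
```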
Finally, by using the relation (\ref{16via}) and the asymptotic expansions (\ref{513xxc}) and (\ref{1zaab}) we obtain the asymptotic expansion (\ref{120via}) of the relative spectral invariant $\Psi(t,s)$ with the coefficients (\ref{543xxca}). This proves Corollary \ref{corollary1} for the function $\Psi(t,s)$. \subsection{Combined Heat Trace for Dirac Type Operators} The case of Dirac type operators is similar to that of Laplace type operators, so we will omit some details. In this subsection we prove Theorem \ref{theorem1} for the combined heat trace $Y(t,s)$. {\bf Proof of Theorem \ref{theorem1}, Part II.} The combined heat trace $Y(t,s)$ is given by the integral (\ref{534xxc}) over $M\times M$ of the form \bea Y(t,s) &=& \int\limits_{M\times M} dx\; dx' f_2(t,s;x,x'), \eea where \be f_2(t,s;x,x')=\frac{1}{2}\tr\Bigl\{D_+U_+(t,x,x')D_-U_-(s,x',x) +D_-U_-(s,x,x')D_+U_+(t,x',x)\Bigr\}, \ee and the operators $D_\pm$ act on the first space argument of the heat kernels. The method for computing this integral is essentially the same as for the Laplace type operators. In this case we use the estimate for the derivative of the heat kernel: for any $x\in M-B_r(x')$ and $0<t<1$ \be \left|D_\pm U_\pm(t;x,x')\right|\le C_3 t^{-1-n/2}\exp\left(-\frac{r^2}{4t}\right), \ee where $C_3$ is some constant. By using this estimate it is easy to see that the off-diagonal part of the integral is exponentially small and does not contribute to the asymptotic expansion of the trace $Y(\varepsilon{}t,\varepsilon{}s)$ as $\varepsilon\to 0$.
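The derivative estimate can again be illustrated on the flat one-dimensional heat kernel, where on $|x|\ge r$ the derivative $|\partial_x K(t,x)|$ is maximized at $x=r$ (for $r\ge\sqrt{2t}$) and the bound is saturated with the constant $C_3=r/(4\sqrt{\pi})$; the sketch below (with arbitrary illustrative values) confirms this numerically:

```python
import numpy as np

# For K(t,x) = (4 pi t)^{-1/2} exp(-x^2/(4t)) one has, on |x| >= r,
# |d/dx K(t,x)| <= C * t^{-1-1/2} * exp(-r^2/(4t)) with C = r/(4 sqrt(pi)),
# the n = 1 analogue of the estimate for |D U| quoted above (illustrative values).
r = 1.0
C = r / (4.0 * np.sqrt(np.pi))
x = np.linspace(r, 10.0, 10001)
for t in (0.1, 0.05, 0.01):
    dK = (x / (2.0 * t)) * (4.0 * np.pi * t) ** -0.5 * np.exp(-x**2 / (4.0 * t))
    bound = C * t ** -1.5 * np.exp(-r**2 / (4.0 * t))
    assert dK.max() <= bound * (1.0 + 1e-12)   # equality is attained at x = r
print("derivative estimate verified on |x| >= r")
```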
Therefore, we obtain \be Y(\varepsilon{}t,\varepsilon{}s) \sim (4\pi\varepsilon)^{-n/2} \sum_{m=0}^\infty \varepsilon^{m}\tilde Y_{m}(\varepsilon,t,s), \label{636via} \ee where \bea \tilde Y_{m}(\varepsilon,t,s) \label{642aabc} =(4\pi\varepsilon ts)^{-n/2} \int\limits_M dx'\int\limits_{B_r(x')}dx\; \exp\left\{-\frac{1}{2\varepsilon ts{}}\Sigma(t,s;x,x')\right\} \tilde N_{m}(\varepsilon,t,s,x,x'), \nonumber\\ \eea with \bea && \tilde N_{m}(\varepsilon,t,s,x,x') \label{6438via} \\ &&= \frac{1}{2}\sum_{j=0}^m t^{m-j} s^{j}\tr\Biggl\{ \left( D_+ -\frac{1}{2\varepsilon{}t}\nu_+(x,x')\right)\tilde a^+_{m-j}(x,x') \left( D_- -\frac{1}{2\varepsilon{}s}\nu_-(x',x)\right) \tilde a^-_j(x',x) \Biggr\} \nonumber\\ && + x\leftrightarrow x', \nonumber \eea and \bea \nu_\pm(x,x') &=& i\gamma^j_\pm(x)\sigma^\pm_{,j}(x,x'). \label{639via} \eea To avoid confusion we stress here once again that the operators $D_\pm$ act on the first space argument of the coefficients $\tilde a^\pm_k$. The functions $\tilde N_{m}$ depend on $\varepsilon$ in the following way \bea \tilde N_{m} = \frac{1}{\varepsilon^2}N^{(0)}_{m} +\frac{1}{\varepsilon{}}N^{(1)}_{m} +N^{(2)}_{m}, \eea where \bea N^{(0)}_{m}&=& \frac{1}{8}(ts)^{-1}\sum_{j=0}^mt^{m-j} s^{j} \tr\left\{\nu_+(x,x') \tilde a_{m-j}^+(x,x')\nu_-(x',x)\tilde a_j^-(x',x)\right\} + x\leftrightarrow x', \nonumber\\ \label{642via} \\ N^{(1)}_{m}&=& -\frac{1}{4}(ts)^{-1}\sum_{j=0}^m t^{m-j} s^{j}\tr\Biggl\{ s\nu_+(x,x')\tilde a_{m-j}^+(x,x') (D_-\tilde a_j^-(x',x)) \nonumber\\ && +t(D_+\tilde a_{m-j}^+(x,x'))\nu_-(x',x)\tilde a_j^-(x',x) \Biggr\}+ x\leftrightarrow x', \label{645viax} \\ N^{(2)}_{m}&=& \frac{1}{2}\sum_{j=0}^m t^{m-j} s^{j} \tr\left\{(D_+\tilde a_{m-j}^+(x,x'))(D_-\tilde a_j^-(x',x))\right\} + x\leftrightarrow x'. 
\eea Therefore, \be \tilde Y_{m} = \frac{1}{\varepsilon^2} Y^{(0)}_{m} +\frac{1}{\varepsilon} Y^{(1)}_{m} +Y_{m}^{(2)}, \label{642viax} \ee with the obvious notation \bea Y^{(i)}_{m}(\varepsilon,t,s) =(4\pi\varepsilon ts)^{-n/2} \int\limits_M dx'\int\limits_{B_r(x')}dx\; \exp\left\{-\frac{1}{2\varepsilon ts{}}\Sigma(t,s;x,x')\right\} N^{(i)}_{m}(t,s,x,x'). \label{642aab} \nonumber\\ \eea Therefore, by using (\ref{642viax}) we have \be Y(\varepsilon{}t,\varepsilon{}s) \sim (4\pi\varepsilon)^{-n/2} \sum_{m=0}^\infty\varepsilon^{m-2} Y_m(\varepsilon,t,s), \label{649via} \ee where \bea Y_0 &=& Y_0^{(0)}, \\ Y_1 &=& Y^{(0)}_1+Y^{(1)}_0, \eea and for $m\ge 2$ \be Y_m = Y^{(0)}_{m} + Y^{(1)}_{m-1} + Y^{(2)}_{m-2}. \ee By using Lemma \ref{lemma2} again to compute the integral (\ref{642aab}) we prove the following lemma. \begin{lemma} \label{lemma8} There is the asymptotic expansion as $\varepsilon\to 0$ \bea Y^{(i)}_{m}(\varepsilon,t,s) &\sim & \sum_{k=0}^\infty \varepsilon^{k} Y^{(i)}_{m,k}(t,s), \label{650via} \eea where \be Y^{(i)}_{m,k}(t,s)=\int\limits_M dx\; g^{1/2}(t,s) c^{(i)}_{m,k}(t,s). \ee The coefficients $c^{(i)}_{m,k}(t,s)$ are scalars constructed polynomially from the diagonal values of the derivatives of the functions $N^{(i)}_{m}$ and the derivatives of the function $\Sigma(t,s)$ of order higher than $2$ as well as the metrics $g_{ij}(t,s)$ and $g_\pm^{ij}$. The coefficients $c^{(i)}_{m,k}(t,s)$ are homogeneous functions of $t$ and $s$ of degree $(m+k+i-2)$ and the coefficients $Y^{(i)}_{m,k}(t,s)$ are homogeneous functions of $t$ and $s$ of degree $(m+k+i-2-n/2)$. The first coefficients are \be c^{(0)}_{m,0}=c^{(1)}_{m,0}=0, \ee \be c^{(2)}_{m,0}(t,s) = \sum_{j=0}^{m} \frac{(-1)^{m}}{(m-j)!j!} t^{m-j}s^{j} \tr\left\{[D_+ a^+_{m-j}] [D_- a^-_j]\right\}. \label{652via} \ee \end{lemma} \noindent {\it Proof.} The proof of this lemma is essentially the same as that of Lemma \ref{lemma7}.
By using Lemma \ref{lemma2} we compute the asymptotic expansion of the integral (\ref{642aab}) which gives (\ref{650via}) and proves the first part of the lemma. The coefficient $c^{(i)}_{m,0}(t,s)$ is given by (\ref{532via}), so \be c^{(i)}_{m,0}=g^{-1/2}G^{-1/2}[N^{(i)}_m] =g_+^{-1/2}g_-^{-1/2}[N^{(i)}_m]. \label{657via} \ee Thus, we need to compute the diagonal values of the functions $N^{(i)}_m$. First of all, since the diagonal values of the functions $\sigma_\pm$ and their first derivatives vanish, it is easy to see that the diagonal values of the functions $\nu_\pm$, (\ref{639via}), vanish, \be [\nu_\pm]=0, \ee and, therefore, \be [N_{m}^{(0)}]=[N_{m}^{(1)}]=0. \ee This means that \be c^{(0)}_{m,0}=c^{(1)}_{m,0}=0. \ee The functions $N^{(2)}_m$ are expressed in terms of the coefficients $\tilde a^\pm_k$, which are related to the standard heat kernel coefficients $a^\pm_k$ by (\ref{513xxcq}). Therefore, by using the diagonal values of the functions $\sigma_\pm, M_\pm, {\cal P}_\pm$, and their derivatives we obtain \be [N^{(2)}_{m}(t,s)] = g_+^{1/2}g_-^{1/2} \sum_{j=0}^{m} \frac{(-1)^{m}}{(m-j)!j!}t^{m-j}s^j \tr\left\{[D_+ a^+_{m-j}] [D_- a^-_j]\right\}. \ee Now, by using (\ref{631via}) and (\ref{657via}) we get (\ref{652via}). This proves Lemma \ref{lemma8}. By using this lemma, we obtain the asymptotic expansion \bea Y_{m}(\varepsilon,t,s) &\sim & \sum_{k=0}^\infty \varepsilon^{k} Y_{m,k}(t,s), \label{650viab} \eea where \be Y_{m,k}(t,s)=\int\limits_M dx\;g^{1/2}(t,s)c_{m,k}(t,s) \ee with \bea c_{0,k} &=& c_{0,k}^{(0)}, \\ c_{1,k} &=& c^{(0)}_{1,k}+c^{(1)}_{0,k}, \eea and for $m\ge 2$ \be c_{m,k} = c^{(0)}_{m,k} + c^{(1)}_{m-1,k} + c^{(2)}_{m-2,k}. \ee Now, by using (\ref{649via}) and the equations above we obtain \be Y(\varepsilon{}t,\varepsilon{}s) \sim (4\pi\varepsilon)^{-n/2} \sum_{k=-1}^\infty \varepsilon^{k-1} C_k(t,s), \label{662via} \ee with the coefficients \be C_k=\sum_{j=0}^{k+1} Y_{j, k+1-j}.
\label{662viab} \ee Finally, we notice that the first coefficient $C_{-1}$ vanishes since \be C_{-1} = Y_{0,0} = Y^{(0)}_{0,0} = 0. \ee Thus, we obtain the asymptotic expansion (\ref{15zaac}) of the combined heat trace $Y(t,s)$ with the coefficients \be c_k(t,s)=\sum_{j=0}^{k+1} c_{j, k+1-j}(t,s). \label{662vian} \ee This proves Theorem \ref{theorem1} for the trace $Y(t,s)$. Finally, by using the relation (\ref{17via}) and the asymptotic expansions (\ref{513xxc}) and (\ref{15zaac}) we obtain the asymptotic expansion (\ref{121via}) of the relative spectral invariant $\Phi(t,s)$ with the coefficients (\ref{542saax}). This proves Corollary \ref{corollary1} for the function $\Phi(t,s)$. \subsection{Specific Cases} First of all, we notice that for equal operators $L_-=L_+$ the combined trace $X(t,s)$ can be expressed in terms of the classical heat trace \be X(t,s) =\Theta(t+s), \ee so, by comparing (\ref{1zaab}) and (\ref{513xxc}), we see that in this case \be B_k(t,s)=(t+s)^{k-n/2} A_k. \label{674viax} \ee Similarly, for equal operators $D_-=D_+$ the combined trace $Y(t,s)$ can be expressed in terms of the classical heat trace \be Y(t,s) = - \partial_t\Theta(t+s), \ee so, by comparing (\ref{15zaac}) and (\ref{513xxc}), we see that in this case \be C_k(t,s)=-\left(k-\frac{n}{2}\right)(t+s)^{k-1-n/2} A_k. \label{676viax} \ee This gives non-trivial relations between the heat kernel coefficients and their derivatives and provides a useful check of the results. It is easy to see then that for equal operators $L_-=L_+$ and $D_-=D_+$ the relative spectral invariants $\Psi(t,s)$ and $\Phi(t,s)$ vanish. If the Laplace type operators differ by just a constant, \be L_+=L_-+M^2, \ee then the metrics and the connections are the same and \bea \Theta_+(t) &=& e^{-tM^2}\Theta_-(t), \\ X(t,s) &=& e^{-tM^2}\Theta_-(t+s), \eea and, therefore, \be \Psi(t,s)=\left(e^{-tM^2}-1\right)\left(e^{-sM^2}-1\right)\Theta_-(t+s).
\ee In this case \bea B_0(t,s) &=& (t+s)^{-n/2}A_0^-, \\ B_1(t,s) &=& (t+s)^{1-n/2}A_1^--t(t+s)^{-n/2}M^2A_0^-. \eea For the Dirac case suppose that there is an endomorphism $M$ that anticommutes with the operator $D_-$, \be D_-M=-MD_-, \ee and $M^2$ is a scalar. Then it is easy to see that \be \mathrm{ Tr} MD_-\exp(-sD^2_-)=0. \ee Now, suppose that \be D_+=D_-+M, \ee so that (recall that $L_+=D_+^2$) \be L_+=L_-+M^2. \ee Then it is easy to show that \bea H_+(t) &=& H_-(t)+\mathrm{ Tr} M\exp(-tD_-^2), \\ Y(t,s) &=& -e^{-tM^2}\partial_t\Theta_-(t+s), \eea and, hence, \be \Phi(t,s)=-\left(e^{-tM^2}-1\right)\left(e^{-sM^2}-1\right) \partial_t\Theta_-(t+s) +M^2e^{-(t+s)M^2}\Theta_-(t+s). \ee Therefore, \bea C_0(t,s) &=& \frac{n}{2}(t+s)^{-1-n/2}A_0^-, \\ C_1(t,s) &=& \left(\frac{n}{2}-1\right)(t+s)^{-n/2}A_1^- -\frac{n}{2}t(t+s)^{-1-n/2}M^2A_0^-. \eea A more general case is the case of {\it commuting} operators; then the combined heat traces still simplify significantly: they can be expressed in terms of the classical ones, \bea X(t,s) &=& \mathrm{ Tr}\exp(-tL_+-s L_-), \label{238ccxa} \\ Y(t,s) &=& \mathrm{ Tr} D_-D_+\exp(-tD^2_+ -sD^2_-). \eea Therefore, the asymptotics of the combined traces can be obtained from the classical ones. Notice that the leading symbol of the operators $L=tL_++sL_-$ and $L=tD_+^2+sD_-^2$ is determined exactly by the metric $g^{ij}(t,s)$. Therefore, in this case the combined traces are given by the classical trace for the operator $L=tL_++sL_-$. \section{Explicit Results} \setcounter{equation}0 \subsection{Laplace Type Operators (Proof of Theorem \ref{theorem2})} Coming back to the general case, it is easy to see that the first coefficients are the same as in the commuting case. The coefficient $B_0$ is obtained by using (\ref{634via}), (\ref{628via}) and (\ref{64via}) \be b_0=b_{0,0}=\tr I. \ee This proves eq. (\ref{132via}). The coefficient $b_1$ has the form (by using (\ref{634via})) \be b_1=b_{1,0}+b_{0,1}.
\ee Here the first coefficient is easy to compute. By using the well known results (\ref{64via}), (\ref{65via}), for the coefficients $[a^\pm_k]$ we obtain from (\ref{628via}) \bea b_{1,0} &=& \tr\left\{ -t\,[a_1^+a_0^-] -s\,[a_0^+a_1^-] \right\} \nonumber\\ &=& \tr\left\{ t\,\left(\frac{1}{6}R_+I-Q_+\right) + s\,\left(\frac{1}{6}R_-I-Q_-\right) \right\}. \label{72viax} \eea The coefficient $b_{0,1}$ is determined by the second term of the asymptotics (\ref{626via}) of the quantity $X_{0}(\varepsilon t,\varepsilon s)$, (\ref{527xxc}). By using (\ref{530zzc}), (\ref{513xxcq}), (\ref{631via}) and (\ref{410qqq}) we have \bea \Lambda_0 &=& g^{1/2}(x)G^{1/2}(x') e^{\omega(x,x')}\varphi_1(x,x'), \eea where \be \omega(x,x') = \frac{1}{4}\log\left(\frac{g_+(x)}{g(x)} \frac{g_-(x)}{g(x)}\frac{g(x')}{g_+(x')}\frac{g(x')}{g_-(x')}\right) +\zeta_+(x,x')+\zeta_-(x,x') \label{75via} \ee and \be \varphi_1(x,x')= \frac{1}{2}\tr\left\{{\cal P}_+(x,x'){\cal P}_-(x',x) +{\cal P}_-(x,x'){\cal P}_+(x',x)\right\}. \ee By using the fact that ${\cal P}(x',x)={\cal P}^{-1}(x,x')$ we find it convenient to rewrite the function $\varphi_1$ in the form \be \varphi_1= \frac{1}{2}\tr\left(\Pi+\Pi^{-1}\right), \ee where \be \Pi(x,x')={\cal P}_-(x',x){\cal P}_+(x,x')={\cal P}_-^{-1}{\cal P}_+. \ee Here the function $\zeta_\pm(x,x')$ is defined by (\ref{410qqq}). Now, by using Lemma \ref{lemma2} and eq. (\ref{533via}) we obtain \bea b_{0,1} &=& ts G^{ij}[\nabla^g_{i}\nabla^g_{j}(e^{\omega}\varphi_1)] - tsG^{ij}G^{kl}\Sigma_{ijk}[\nabla^g_l(e^{\omega}\varphi_1)] \nonumber\\ && +ts\Biggl(-\frac{1}{3}G^{ij}R^g_{ij} -\frac{1}{4}G^{ij}G^{kl}\Sigma_{ijkl} +\frac{1}{6}G^{il}G^{jm}G^{kn}\Sigma_{ijk}\Sigma_{lmn} \nonumber\\ && +\frac{1}{4}G^{ij}G^{lm}G^{kn}\Sigma_{ijk}\Sigma_{lmn} \Biggr)[\varphi_1], \eea where $\Sigma_{i_1\dots i_k}=[\nabla^g_{(i_1}\cdots\nabla^g_{i_k)}\Sigma]$ are the coincidence limits of symmetrized covariant derivatives of $\Sigma$ determined by the metric $g_{ij}$ (\ref{113via}). 
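The Gaussian averages entering this computation are evaluated with the Wick rules $\left<\xi^i\xi^j\right>_G=2G^{ij}$ and $\left<\xi^i\xi^j\xi^k\xi^l\right>_G=4\left(G^{ij}G^{kl}+G^{ik}G^{jl}+G^{il}G^{jk}\right)$. The sketch below checks them by direct quadrature in $n=2$; the particular matrix $G$, the grid and the tolerances are illustrative assumptions:

```python
import numpy as np

# Quadrature check, in n = 2, of the Wick rules for the Gaussian average with
# weight exp(-<xi, G xi>/4) normalized to <1> = 1:
#   <xi^i xi^j>           = 2 G^{ij},
#   <xi^i xi^j xi^k xi^l> = 4 (G^{ij}G^{kl} + G^{ik}G^{jl} + G^{il}G^{jk}).
G = np.array([[1.0, 0.3], [0.3, 2.0]])
Gi = np.linalg.inv(G)                          # the inverse Hessian G^{ij}
u = np.linspace(-12.0, 12.0, 601)
X, Y = np.meshgrid(u, u, indexing='ij')
w = np.exp(-(G[0, 0] * X * X + 2.0 * G[0, 1] * X * Y + G[1, 1] * Y * Y) / 4.0)
w /= w.sum()                                   # normalize the discrete measure
xi = (X, Y)

def avg(*idx):
    """Gaussian average of the monomial xi^{i1} ... xi^{ik}."""
    m = np.ones_like(X)
    for i in idx:
        m = m * xi[i]
    return (w * m).sum()

second = avg(0, 1)
fourth = avg(0, 0, 1, 1)
print(second, 2.0 * Gi[0, 1])
print(fourth, 4.0 * (Gi[0, 0] * Gi[1, 1] + 2.0 * Gi[0, 1] ** 2))
```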
First of all, we notice that $[\omega]=0$. We will denote the diagonal values of the derivatives of the function $\omega$ by just adding indices, that is, $\omega_i=[\nabla^g_i\omega]$ and $\omega_{ij}=[\nabla^g_i\nabla^g_j\omega]$. By using (\ref{511mmm}) and (\ref{512mmm}) and the fact that $[\zeta^\pm_{,i}]=0$ we compute the diagonal values of the first two derivatives \bea \omega_{i}&=&W_i, \\{} \omega_{ij} &=& \frac{1}{6}R^+_{ij}+\frac{1}{6}R^-_{ij}+W_{ij}, \label{711viaxz} \eea where $W_i$ and $W_{ij}$ are defined by (\ref{137zax}) and (\ref{130viaxz}). Next, it is easy to see that \be [\varphi_1]=\tr I. \ee Then, since $[\nabla^\pm_i{\cal P}_\pm]=0$ and $[{\cal P}_\pm]=I$ we have \be [\nabla^g_i{\cal P}_\pm{}(x,x')] =-[\nabla^g_{i'}{\cal P}_\pm{}(x,x')] =-{\cal A}^\pm_i, \label{716viax} \ee and \be [\nabla^g_i\Pi]=-[\nabla^g_i \Pi^{-1}]={\cal C}^-_i-{\cal C}^+_i; \ee therefore, \be [\nabla^g_i\varphi_{1}]=0, \ee and \be [\nabla^g_i(e^{\omega}\varphi_1)] =\omega_{i}\tr I. \ee Further, we compute \bea [\nabla^g_{i}\nabla^g_{j} (e^{\omega}\varphi_1)] &=&[\nabla^g_{i}\nabla^g_{j}\varphi_{1}] +\left(\omega_{ij}+\omega_{i}\omega_{j} \right)\tr I. \label{713gia} \eea By using (\ref{372zaa}) and (\ref{373zaa}) we compute \bea [\nabla^{g}_{i}\nabla^{g}_{j}\varphi_1] &=&\tr\left\{({\cal C}^+_{(i}-{\cal C}^-_{(i})({\cal C}^+_{j)}-{\cal C}^-_{j)}) \right\}. \eea Finally, we obtain \bea [\nabla^g_{i}\nabla^g_{j} (e^{\omega}\varphi_1)] &=& \left(\omega_{ij}+\omega_{i}\omega_{j} \right)\tr I +\tr\left\{({\cal C}^+_{(i}-{\cal C}^-_{(i})({\cal C}^+_{j)}-{\cal C}^-_{j)}) \right\}.
\eea By collecting the above results we obtain \bea b_{0,1} &=& ts\tr\Biggl\{ \Biggl[\frac{1}{6} G^{ij}\left( R^+_{ij}+R^-_{ij} -2R^g_{ij}\right) +G^{ij}\left(W_{ij} +W_{i}W_{j}\right) -G^{ij}G^{kl}\Sigma_{ikl}W_{j} -\frac{1}{4}G^{ij}G^{kl}\Sigma_{ijkl} \nonumber\\ && +\frac{1}{6}G^{il}G^{jm}G^{kn}\Sigma_{ijk}\Sigma_{lmn} +\frac{1}{4}G^{ij}G^{lm}G^{kn}\Sigma_{ijk}\Sigma_{lmn} \Biggr]I \nonumber\\ && +G^{ij}({\cal C}^+_{i}-{\cal C}^-_{i})({\cal C}^+_{j}-{\cal C}^-_{j}) \Biggr\}. \label{720viax} \eea Next, we compute the derivatives of the function $\Sigma$ defined by (\ref{529zzc}). By using the eqs. (\ref{361zaa}) and (\ref{362zaa}) we obtain eqs. (\ref{139zax}) and (\ref{140zax}). By using the results (\ref{72viax}) and (\ref{720viax}) we obtain (\ref{134via}), which proves Theorem \ref{theorem2}; Corollary \ref{corollary2} follows. It is easy to see that for equal operators $L_+=L_-$ the coefficient $B_1$ is equal to $(t+s)^{1-n/2}A_1$, as it should be. \subsection{Dirac Type Operators (Proof of Theorem \ref{theorem3})} \subsubsection{Coefficient $c_0$} The coefficient $c_0$ is given by (\ref{662vian}), \be c_0 = c^{(0)}_{0,1}+c^{(0)}_{1,0}+c^{(1)}_{0,0}; \ee and, since $c^{(0)}_{1,0}=c^{(1)}_{0,0}=0$, it is equal to $c_0= c^{(0)}_{0,1}$, which is determined by the second coefficient of the asymptotic expansion (\ref{650via}) of the function $Y_0=Y_0^{(0)}$ given by (\ref{642aab}). First, we have \bea N^{(0)}_{0} &=& \frac{1}{4}(ts)^{-1} g^{1/2}(x)G^{1/2}(x')e^{\omega(x,x')}\varphi_2(x,x'), \label{724via} \eea where $\omega$ is defined by (\ref{75via}) and \be \varphi_2(x,x')= \frac{1}{2} \tr\left\{\mu_+(x,x')\mu_-(x',x)+\mu_-(x,x')\mu_+(x',x)\right\}, \ee with \be \mu_\pm(x,x')=\nu_\pm(x,x'){\cal P}_\pm(x,x')= \sigma^\pm_{,j}(x,x')i\gamma_\pm^{j}(x){\cal P}_\pm(x,x'). \ee We use Lemma \ref{lemma2}, namely eq. (\ref{533via}), to compute it.
We notice that the diagonal values of the function $\varphi_2$ and its first derivative vanish, \be [\varphi_2]=[\nabla^g_i\varphi_2]=0. \label{734viax} \ee Therefore, by using (\ref{533via}), (\ref{734viax}) and (\ref{631via}) we get \be c^{(0)}_{0,1} = \frac{1}{4} G^{ij}[\nabla^g_i\nabla^g_j(e^\omega\varphi_2)]. \ee We now use the connection $\nabla^{g,{\cal A}}$ defined with respect to the metric $g_{ij}(t,s)$ given by (\ref{113via}) and the connection ${\cal A}_i(t,s)$ given by (\ref{114via}). Then, by using (\ref{639via}) and the diagonal values of the second derivatives of the Ruse-Synge function $\sigma_\pm$ (\ref{45via}), we compute \bea [\nabla^{g}_j\nu_\pm(x,x')]=[\nabla^g_{j}\mu_\pm(x,x')] =-[\nabla^{g}_{j}\nu_\pm(x',x)] =-[\nabla^g_j\mu_\pm(x',x)] =ig^\pm_{ji}\gamma_\pm^i. \label{661via} \eea By using (\ref{642via}), the derivatives of the functions $M_\pm$ and ${\cal P}_\pm$, (\ref{511mmm}), (\ref{420baa}), and (\ref{661via}), we obtain \bea [\nabla^g_i\nabla^g_j (e^\omega\varphi_2)] =[\nabla^g_i\nabla^g_j \varphi_2] =2g^+_{k(i}g^-_{j)m}\tr \left(\gamma_+^k\gamma_-^{m}\right), \label{734via} \eea and, therefore, by using (\ref{127via}) we obtain \be c_0(t,s)=\frac{1}{2}g_{ij}(t,s)\tr \left(\gamma_+^{i}\gamma_-^{j}\right), \ee which gives (\ref{138via}). \subsubsection{Coefficient $c_1$} The coefficient $c_1$ is given by (\ref{662vian}) \bea c_1 &=& c^{(0)}_{2,0}+ c^{(1)}_{1,0} +c^{(2)}_{0,0} +c^{(0)}_{1,1} + c^{(1)}_{0,1} +c^{(0)}_{0,2} \eea and, since $c^{(1)}_{1,0}=c^{(0)}_{2,0}=0$, is equal to \bea c_1 &=& c^{(2)}_{0,0} +c^{(0)}_{1,1} + c^{(1)}_{0,1} +c^{(0)}_{0,2}. \eea The coefficient $c^{(2)}_{0,0}$ is given by (\ref{652via}) \be c^{(2)}_{0,0}=\tr \left\{[D_+ a^+_0][D_-a^-_0] \right\}, \ee and, therefore, by using (\ref{68viab}) we obtain \be c^{(2)}_{0,0}=\tr \left(S_+S_-\right).
\label{742viax} \ee The coefficient $c^{(0)}_{1,1}$ is determined by the second coefficient of the asymptotic expansion of the integral $Y^{(0)}_{1}$, (\ref{642aab}), of the function $N^{(0)}_1$, (\ref{642via}), which we can rewrite in the form \be N^{(0)}_{1} = \frac{1}{4}(ts)^{-1} g^{1/2}(x)G^{1/2}(x')e^{\omega(x,x')}\varphi_3(x,x'), \ee where \bea \varphi_3(x,x') &=& -\frac{1}{2}\tr\Biggl\{ t \nu_+(x,x')a_{1}^+(x,x')\mu_-(x',x) +s \mu_+(x,x')\nu_-(x',x) a_1^-(x',x) \nonumber\\ && +s \nu_-(x,x') a_1^-(x,x')\mu_+(x',x) +t\mu_-(x,x') \nu_+(x',x)a_{1}^+(x',x) \Biggr\}. \eea We use Lemma {\ref{lemma2}} and eq. (\ref{533via}) to compute it. First of all, we notice that \be [\varphi_3]=[\nabla^g_i\varphi_3]=0. \ee Therefore, we get \bea c^{(0)}_{1,1} &=& \frac{1}{4} G^{ij}[\nabla^g_i\nabla^g_j(e^\omega\varphi_3)]. \eea Next, by using (\ref{661via}) we compute the diagonal values of the second derivatives \be [\nabla^g_i\nabla^g_j(e^\omega\varphi_3)] =[\nabla^g_i\nabla^g_j\varphi_3] = 2g^+_{m(i}g^-_{j)k}\tr\Biggl\{ t\gamma_-^{k}\gamma_+^{m} \left(\frac{1}{6}R_+ I-Q_+\right) +s\gamma_+^{m}\gamma_-^{k} \left(\frac{1}{6}R_- I-Q_-\right) \Biggr\}. \ee This gives \bea c^{(0)}_{1,1} &=& \frac{1}{2}g_{ij}\tr\Biggl\{ t\gamma_-^{j}\gamma_+^{i} \left(\frac{1}{6}R_+ I-Q_+\right) +s \gamma_+^{i}\gamma_-^{j} \left(\frac{1}{6}R_- I-Q_-\right) \Biggr\}. \label{748viax} \eea Recall that $Q_\pm$ for Dirac type operators is given by (\ref{qsxxa}). 
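The coefficients $c_0$ and $c^{(0)}_{1,1}$ are expressed through traces of products $\gamma_+^i\gamma_-^j$. For equal operators on a flat background these reduce to the standard identity $\tr(\gamma^\mu\gamma^\nu)=4\eta^{\mu\nu}$; a quick numerical check in four dimensions with the Dirac-representation matrices (an illustration only, not the curved-space objects used above):

```python
import numpy as np

# Pauli matrices and 2x2 blocks
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^k = offdiag(sigma_k, -sigma_k)
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for mu in range(4):
    for nu in range(4):
        assert abs(np.trace(gamma[mu] @ gamma[nu]) - 4*eta[mu, nu]) < 1e-12
```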
The coefficient $c^{(1)}_{0,1}$ is determined by the second coefficient of the asymptotic expansion of the integral $Y_0^{(1)}$, (\ref{642aab}), of the function $N^{(1)}_0$, (\ref{645viax}), which can be written in the form \be N^{(1)}_{0} = \frac{1}{4}(ts)^{-1}g^{1/2}(x)G^{1/2}(x') e^{\omega(x,x')} \varphi_4(x,x'), \ee where \bea \varphi_4(x,x') &=& \tr\Biggl\{ -t\left( \theta_+(x,x')\mu_-(x',x) + \mu_-(x,x')\theta_+(x',x)\right) \nonumber\\ && -s\left( \theta_-(x,x')\mu_+(x',x) +\mu_+(x,x')\theta_-(x',x) \right) \Biggr\}, \eea with \bea \theta_\pm(x,x') &=& e^{-\zeta_\pm}D_\pm \left(e^{\zeta_\pm}{\cal P}_\pm\right) =\left\{i\gamma_\pm^k(x)(\nabla^\pm_k+\zeta^\pm_{,k})+S_\pm(x)\right\}{\cal P}_\pm(x,x'). \eea We use Lemma \ref{lemma2} and eq. (\ref{533via}) to compute it. First of all, we notice that \be [\varphi_4]=0. \ee Next, by using (\ref{661via}) and the obvious limit \be [\theta_\pm] = S_\pm \label{741via} \ee we compute \be [\nabla^g_j\varphi_{4}]=0. \ee Therefore, \bea c^{(1)}_{0,1} &=& \frac{1}{4} G^{ij}[\nabla^g_{i}\nabla^g_{j}(e^\omega\varphi_4)]. \label{753viax} \eea Further, by using (\ref{741via}) and (\ref{661via}) and omitting all terms that vanish on the diagonal we obtain \bea [\nabla^g_i\nabla^g_j(e^\omega\varphi_4)] &=& [\nabla^g_i\nabla^g_j\varphi_4]= -2\tr\Biggl\{\frac{1}{2}t S_+\left[\nabla^{g}_{(i}\nabla^{g}_{j)}\mu_-(x',x) +\nabla^{g}_{(i}\nabla^{g}_{j)}\mu_-(x,x') \right] \nonumber\\ && +\frac{1}{2}s S_-\left[\nabla^{g}_{(i}\nabla^{g}_{j)}\mu_+(x',x) +\nabla^{g}_{(i}\nabla^{g}_{j)}\mu_+(x,x')\right] \nonumber\\ && +t i\gamma_-^kg^-_{k(i}\left[\nabla^g_{j)}\theta_+(x',x) -\nabla^{g}_{j)}\theta_+(x,x') \right] \nonumber\\ && +s i\gamma_+^kg^+_{k(i}\left[\nabla^g_{j)}\theta_-(x',x) -\nabla^{g}_{j)}\theta_-(x,x') \right] \Biggr\}.
\label{754viax} \eea Next, by using (\ref{716viax}) and (\ref{661via}) we compute \bea [\nabla^g_{(i}\nabla^g_{j)}\mu_\pm(x,x')] &=& [\nabla^g_{(i}\nabla^g_{j)}\nu_\pm(x,x')] -2i\gamma^k_\pm g^\pm_{k(i}{\cal A}^\pm_{j)}, \\ {}[\nabla^g_{(i}\nabla^g_{j)}\mu_\pm(x',x)] &=& [\nabla^g_{(i}\nabla^g_{j)}\nu_\pm(x',x)] -2i\gamma^k_\pm g^\pm_{k(i}{\cal A}^\pm_{j)}. \eea Now, by using (\ref{310viax}) and (\ref{360mmm}) we have \be \nabla^{g}_i\gamma_\pm^k =-W_\pm{}^k{}_{im}\gamma_\pm^m -[{\cal A}^\pm_i,\gamma_\pm^k]. \label{758viax} \ee By using this equation and (\ref{361zaa}) we compute \bea [\nabla^{g}_{(i}\nabla^{g}_{j)}\nu_\pm(x,x')] =-2g^\pm_{k(i}i [{\cal A}^\pm_{j)},\gamma_\pm^{k}] +i\gamma_\pm^k g^\pm_{mk}W_\pm{}^m{}_{ij}. \label{763via} \eea Further, by using (\ref{467viax}) we have \bea {}[\nabla^{g}_{(i}\nabla^{g}_{j)}\nu_\pm(x',x)] =-i\gamma_\pm^kg^\pm_{mk}W_\pm{}^m{}_{ij}. \label{764via} \eea Therefore, \bea [\nabla^g_{(i}\nabla^g_{j)}\mu_\pm(x,x')] &=& i\gamma_\pm^k g^\pm_{mk}W_\pm{}^m{}_{ij} -2{\cal A}^\pm_{(j}g^\pm_{i)k}i\gamma_\pm^{k}, \nonumber\\ \label{765via}\\ {}[\nabla^g_{(i}\nabla^g_{j)}\mu_\pm(x',x)] &=& -i\gamma_\pm^kg^\pm_{mk}W_\pm{}^m{}_{ij} -2i\gamma^k_\pm g^\pm_{k(i}{\cal A}^\pm_{j)}. \label{766via} \eea Next, by using (\ref{512mmm}) and (\ref{422qqc}) we compute the diagonal values \bea [\nabla^+_j\theta_+(x,x')] &=& i\gamma_+^k\Omega^+_{jk}+\nabla_j^+S_+, \\{} [\nabla^-_j\theta_-(x',x)] &=& -i\gamma_-^k\Omega^-_{jk}, \eea where \be \Omega^\pm_{jk}=\frac{1}{2}\mathcal{R}^\pm_{jk} +\frac{1}{6}R^\pm_{jk}I. \label{768viax} \ee To avoid confusion, we note that the derivatives $\nabla^g$ here do not include the connection ${\cal A}_\pm$; therefore, \bea [\nabla^{g}_j\theta_\pm(x,x')] &=& i\gamma_\pm^k\Omega^\pm_{jk}+\nabla_j^\pm S_\pm-{\cal A}^\pm_jS_\pm, \\{} [\nabla^{g}_j\theta_\pm(x',x)] &=& -i\gamma_\pm^k\Omega^\pm_{jk}+S_\pm{\cal A}^\pm_j.
\eea By using the above results we compute \bea [\nabla^g_i\nabla^g_j\varphi_4] &=& -2\tr\Biggl\{ -t i\gamma_-^mg^-_{m(i}\nabla_{j)}^+ S_+ +t S_+ \left[ \left({\cal C}^+_{(j}-{\cal C}^-_{(j}\right)g^-_{i)k}i\gamma_-^{k} +i\gamma_-^{k}g^-_{k(i}\left({\cal C}^+_{j)}-{\cal C}^-_{j)}\right) \right] \nonumber\\ && -s i\gamma_+^mg^+_{m(i}\nabla_{j)}^- S_- -s S_- \left[ \left({\cal C}^+_{(j}-{\cal C}^-_{(j}\right)g^+_{i)k}i\gamma_+^{k} +i\gamma_+^{k}g^+_{k(i}\left({\cal C}^+_{j)}-{\cal C}^-_{j)}\right) \right] \nonumber\\ && +2t \gamma_-^m \gamma_+^kg^-_{m(i}\Omega^+_{j)k} +2s \gamma_+^m \gamma_-^kg^+_{m(i}\Omega^-_{j)k} \Biggr\}. \eea Now, by using (\ref{312viax}) and the cyclicity of the trace we obtain a simpler form \bea [\nabla^g_i\nabla^g_j\varphi_4] &=& -2\tr\Biggl\{ -t i\gamma_-^mg^-_{m(i}\nabla_{j)}^+ S_+ -s i\gamma_+^mg^+_{m(i}\nabla_{j)}^- S_- \nonumber\\ && +2t \gamma_-^m \gamma_+^kg^-_{m(i}\Omega^+_{j)k} +2s \gamma_+^m \gamma_-^kg^+_{m(i}\Omega^-_{j)k} \Biggr\}. \eea Thus, we obtain the coefficient $c^{(1)}_{0,1}$ from (\ref{753viax}) \bea c^{(1)}_{0,1} &=& -\frac{1}{2} G^{ij}\tr\Biggl\{ -t i\gamma_-^mg^-_{m(i}\nabla_{j)}^+ S_+ -s i\gamma_+^mg^+_{m(i}\nabla_{j)}^- S_- \nonumber\\ && +2t \gamma_-^m \gamma_+^kg^-_{m(i}\Omega^+_{j)k} +2s \gamma_+^m \gamma_-^kg^+_{m(i}\Omega^-_{j)k} \Biggr\}. \label{771viax} \eea The coefficient $c^{(0)}_{0,2}$ is determined by the third coefficient of the asymptotic expansion of the integral $Y^{(0)}_0$, (\ref{642aab}), of the function $N^{(0)}_0$, (\ref{724via}). We use Lemma \ref{lemma2} to compute it. Since the function $\varphi_2$ and its first derivative vanish on the diagonal, (\ref{734viax}), it is given by the eq. (\ref{533viax}). 
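Because $[\varphi_2]=[\nabla^g_i\varphi_2]=0$, only a few terms survive in the fourth-order coincidence limit of $e^\omega\varphi_2$. As an informal one-dimensional check of this Leibniz pattern (scalar stand-ins $w$ and $\phi$, with $x=0$ playing the role of the diagonal):

```python
import sympy as sp

x, w1, w2, w3, w4, p2, p3, p4 = sp.symbols('x w1 w2 w3 w4 p2 p3 p4')

# w(0) = 0 mirrors [omega] = 0; phi(0) = phi'(0) = 0 mirrors (734viax)
w = w1*x + w2*x**2/2 + w3*x**3/6 + w4*x**4/24
phi = p2*x**2/2 + p3*x**3/6 + p4*x**4/24

d4 = sp.diff(sp.exp(w)*phi, x, 4).subs(x, 0)

# only w' and w'' survive: [f''''] = [phi''''] + 4 w' [phi'''] + 6 (w'' + w'^2) [phi'']
expected = p4 + 4*w1*p3 + 6*(w2 + w1**2)*p2
assert sp.expand(d4 - expected) == 0
```

The coefficients $4$ and $6\,(w''+w'^2)$ are the one-dimensional shadows of the terms $4\omega_{(i}$ and $6(\omega_{(ij}+\omega_{(i}\omega_{j})$ in the covariant expansion.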
We use the equations \bea [\nabla^g_{(i}\nabla^g_j\nabla^g_k\nabla^g_{l)}(e^\omega\varphi_2)] &=& [\nabla^g_{(i}\nabla^g_j\nabla^g_k\nabla^g_{l)}\varphi_2] +4\omega_{(i}[\nabla^g_{j}\nabla^g_k\nabla^g_{l)}\varphi_2] \nonumber\\ && +6\left(\omega_{(ij}+\omega_{(i}\omega_{j}\right)[\nabla^g_{k}\nabla^g_{l)}\varphi_2], \\{} [\nabla^g_{(j}\nabla^g_k\nabla^g_{l)}(e^\omega\varphi_2)] &=& [\nabla^g_{(j}\nabla^g_k\nabla^g_{l)}\varphi_2] +3\omega_{(j}[\nabla^g_{k}\nabla^g_{l)}\varphi_2], \\{} [\nabla^g_{(k}\nabla^g_{l)}(e^\omega\varphi_2)] &=& [\nabla^g_{(k}\nabla^g_{l)}\varphi_2], \eea and (\ref{711viaxz}) to obtain \bea c^{(0)}_{0,2} &=& \frac{1}{4}ts \Biggl\{ \frac{1}{2}G^{ij}G^{kl} [\nabla^g_{(i}\nabla^g_j\nabla^g_k\nabla^g_{l)}\varphi_2] +N^{jkl}[\nabla^g_{(j}\nabla^g_k\nabla^g_{l)}\varphi_2] \label{774viax}\\ && +\left[ \frac{1}{6}\left(G^{kl}G^{ij} +2G^{ik}G^{jl}\right) \left(R^+_{ij}+R^-_{ij}-2R^g_{ij}\right) +M^{kl} \right] [\nabla^g_{(k}\nabla^g_{l)}\varphi_2] \Biggr\}, \nonumber \eea where $N^{jkl}$ and $M^{kl}$ are defined by (\ref{144viax}) and (\ref{147zax}). The second derivative of $\varphi_2$ was computed in (\ref{734via}). So, we compute the third derivative; we have \bea [\nabla^g_{(i}\nabla^g_j\nabla^g_{k)}\varphi_2] &=& \frac{3}{2}\tr\Biggl[\nabla^g_{(i}\nabla^g_j\mu_+(x,x')\nabla^g_{k)}\mu_-(x',x) +\nabla^g_{(i}\mu_+(x,x')\nabla^g_{j}\nabla^g_{k)}\mu_-(x',x) \nonumber\\ && +\nabla^g_{(i}\nabla^g_j\mu_-(x,x')\nabla^g_{k)}\mu_+(x',x) +\nabla^g_{(i}\mu_-(x,x')\nabla^g_{j}\nabla^g_{k)}\mu_+(x',x) \Biggr]. \nonumber\\ \eea By using (\ref{661via}), (\ref{765via}) and (\ref{766via}) we get \bea [\nabla^g_{(i}\nabla^g_j\nabla^g_{k)}\varphi_2] &=& 3\tr\Biggl\{ \left[W_+{}^m{}_{(ij}g^-_{k)q}g^+_{mp} +W_-{}^m{}_{(ij} g^+_{k)p} g^-_{mq}\right]\gamma_+^p\gamma_-^{q} \label{779viax} \\ && +g^-_{q(i}({\cal C}^+_{j}-{\cal C}^-_j) g^+_{k)p}\gamma_-^{q}\gamma_+^p -g^+_{p(i}({\cal C}^+_{j}-{\cal C}^-_j)g^-_{k)q}\gamma_+^p\gamma_-^{q} \nonumber \Biggr\}.
\eea Finally, we compute the fourth derivative of the function $\varphi_2$; we rewrite it in the form \be \varphi_{2}=-\frac{1}{2}A_{pq'}(x,x')B^{pq'}(x,x') +(+\leftrightarrow -), \ee where \bea A_{pq'} &=& \sigma^+_{,p}\sigma^-_{,q'}, \\ B^{pq'} &=& \tr\left\{ {\cal P}_-^{-1}(x,x')\gamma^p_+(x){\cal P}_+(x,x')\gamma_-^{q'}(x') \right\}, \eea and the symbol $(+\leftrightarrow -)$ indicates that one should add the same term with $+$ and $-$ switched. The diagonal value of the fourth symmetrized derivative is then \bea [\nabla^g_{(i}\nabla^g_j\nabla^g_{k}\nabla^g_{l)}\varphi_2] &=& -\frac{1}{2} [\nabla^g_{(i}\nabla^g_j\nabla^g_{k}\nabla^g_{l)}A_{pq'}] \tr\left(\gamma_+^p\gamma_-^q\right) -2[\nabla^g_{(i}\nabla^g_j\nabla^g_{k}A_{pq'}][\nabla^g_{l)}B^{pq'}] \nonumber\\ && -3[\nabla^g_{(i}\nabla^g_jA_{pq'}] [\nabla^g_{k}\nabla^g_{l)}B^{pq'}] +(+\leftrightarrow -). \eea First of all, it is easy to get \bea [\nabla^g_{(i}\nabla^g_{j)}A_{pq'}] &=& -2g^+_{p(i}g^-_{j)q}, \\{} [\nabla^g_{(i}\nabla^g_j\nabla^g_{k)}A_{pq'}] &=& 3V^-_{q(ij}g^+_{k)p} -3T^+_{p(ij}g^-_{k)q}, \eea where the tensors $V^\pm_{ijk}$ and $T^\pm_{ijk}$ are given by (\ref{467viax}), (\ref{361zaa}) \bea T^\pm_{ijk} &=& 3g^\pm_{m(i}W_\pm^m{}_{jk)}, \\{} V^\pm_{ijk} &=& -g^\pm_{mi}W_\pm^m{}_{kj}. \eea Similarly, we obtain \bea [\nabla^g_{(i}\nabla^g_j\nabla^g_{k}\nabla^g_{l)}A_{pq'}] &=& 4 V^-_{q(ijk}g^+_{l)p} +6 V^-_{q(ij}T^+_{kl)p} -4 T^+_{p(ijk}g^-_{l)q}, \eea where the tensors $T^\pm_{ijkl}$ and $V^\pm_{ijkl}$ are given by (\ref{469viax}), (\ref{470viax}), \bea T^\pm_{ijkl} &=& 3g^\pm_{m(j} \nabla^g{}_{k}W_\pm^m{}_{l)i} +g^\pm_{mi} \nabla^g{}_{(j}W_\pm^m{}_{kl)} +3g^\pm_{m(j}W_\pm^n{}_{k|i|}W_\pm^{m}{}_{l)n} \nonumber\\ && +g^\pm_{mi}W_\pm^{n}{}_{(jk}W_\pm^{m}{}_{l)n} +3g^\pm_{nm}W_\pm^n{}_{(jk}W_\pm^{m}{}_{l)i}, \label{469viaxz}\\ V^\pm_{ijkl} &=& -g^\pm_{mi} \nabla^g{}_{(j}W_\pm^m{}_{kl)} -g^\pm_{mi}W_\pm^n{}_{(jk}W_\pm^{m}{}_{l)n}.
\label{470viaxz} \eea Next, by using \be [\nabla_l^+B^{pq'}] =-\tr\left\{({\cal C}^+_l-{\cal C}^-_l)\gamma_+^p\gamma_-^q\right\} \ee we compute \be [\nabla^g_lB^{pq'}] =\tr\left\{ -({\cal C}^+_l-{\cal C}^-_l)\gamma_+^p\gamma_-^q -W_+{}^p{}_{lm}\gamma_+^m\gamma_-^q \right\}. \ee Further, we have \bea [\nabla^g_{(k}\nabla^g_{l)}B^{pq'}] &=& \tr\Biggl\{\left[-\nabla^g_{(k}W_+{}^p{}_{l)m} +3W_+{}^p{}_{n(k}W_+{}^n{}_{l)m} -W_+{}^n{}_{kl}W_+{}^p{}_{nm}\right] \gamma_+^m\gamma_-^q \nonumber\\ && +2W_+{}^p{}_{m(k}({\cal C}^+_{l)}-{\cal C}^-_{l)})\gamma_+^m\gamma_-^q -W_+{}^n{}_{kl}({\cal C}^+_n-{\cal C}^-_n)\gamma_+^p\gamma_-^q \Biggr\} \nonumber\\ && +[\nabla^+_{(k}\nabla^{+}_{l)}B^{pq'}]. \eea Next, by using the equations $[\nabla^+_{i}{\cal P}_+]=[\nabla^+_{(i}\nabla^+_{j)}{\cal P}_+]=0$, $\nabla^+_i\gamma_+^j=0$, and \bea [\nabla^+_{i}{\cal P}_-] &=& {\cal C}^+_i-{\cal C}^-_i, \label{372zaaz} \\ {}[\nabla^+_{(i}\nabla^+_{j)}{\cal P}_-] &=& \nabla^+_{(i}({\cal C}^+_{j)}-{\cal C}^-_{j)}) +({\cal C}^+_{(i}-{\cal C}^-_{(i})({\cal C}^+_{j)}-{\cal C}^-_{j)}), \eea we obtain \bea [\nabla^+_{(i}\nabla^{+}_{j)}B^{pq'}] &=& \tr\left\{\left[ -\nabla^+_{(i}({\cal C}^+_{j)}-{\cal C}^-_{j)}) +({\cal C}^+_{(i}-{\cal C}^-_{(i})({\cal C}^+_{j)}-{\cal C}^-_{j)}) \right]\gamma_+^p\gamma_-^q\right\} \nonumber\\ &=& \tr\Biggl\{\Bigl[ -\nabla^{g,{\cal A}}_{(i}({\cal C}^+_{j)}-{\cal C}^-_{j)}) +W_+^s{}_{ij}({\cal C}^+_{s}-{\cal C}^-_{s}) \nonumber\\ && -2{\cal C}^+_{(i}{\cal C}^-_{j)} +{\cal C}^+_{(i}{\cal C}^+_{j)}+{\cal C}^-_{(i}{\cal C}^-_{j)} \Bigr]\gamma_+^p\gamma_-^q\Biggr\}, \eea and, therefore, \bea [\nabla^g_{(k}\nabla^g_{l)}B^{pq'}] &=& \tr\Biggl\{ \Bigl[-\nabla^g_{(k}W_+{}^p{}_{l)m} +3W_+{}^p{}_{n(k}W_+{}^n{}_{l)m} -W_+{}^n{}_{kl}W_+{}^p{}_{nm} \Bigr] \gamma_+^m\gamma_-^q \nonumber\\ && +2W_+{}^p{}_{m(k}({\cal C}^+_{l)}-{\cal C}^-_{l)})\gamma_+^m\gamma_-^q -\nabla^{g,{\cal A}}_{(k}({\cal C}^+_{l)}-{\cal C}^-_{l)})\gamma_+^p\gamma_-^q \nonumber\\ && +\left({\cal C}^+_{(k}{\cal 
C}^+_{l)}-2{\cal C}^+_{(k}{\cal C}^-_{l)}+{\cal C}^-_{(k}{\cal C}^-_{l)}\right) \gamma_+^p\gamma_-^q \Biggr\}. \eea Finally, by collecting all these results we obtain \bea [\nabla^g_{(i}\nabla^g_j\nabla^g_{k}\nabla^g_{l)}\varphi_2] &=& \mathrm{Sym}(i,j,k,l) \tr\Biggl\{ V_{pqijkl}\gamma_+^p\gamma_-^q -6g^+_{jp}g^-_{qi} [\gamma_+^p,\gamma_-^q] \nabla^{g,{\cal A}}_{k}({\cal C}^+_{l}-{\cal C}^-_{l}) \nonumber\\ && -6\left( g^+_{mp}W_+^m{}_{ij}g^-_{kq} +g^+_{kp}g^-_{mq}W_-^m{}_{ij} \right) [\gamma_+^p,\gamma_-^q]({\cal C}^+_{l}-{\cal C}^-_{l}) \nonumber\\ && +6g^+_{pi}g^-_{jq}\left({\cal C}^+_{k}{\cal C}^+_{l}+{\cal C}^-_{k}{\cal C}^-_{l}\right) \left( \gamma_+^p\gamma_-^q +\gamma_-^q\gamma_+^p \right) \nonumber\\ && -12g^+_{pi}g^-_{jq}{\cal C}^+_{k}{\cal C}^-_{l}\gamma_+^p\gamma_-^q -12g^+_{pi}g^-_{jq}{\cal C}^-_{l}{\cal C}^+_{k}\gamma_-^q\gamma_+^p \Biggr\}, \eea where $V_{pqijkl}$ is defined by (\ref{145viaxz}). This enables us to compute the coefficient $c^{(0)}_{0,2}$, (\ref{774viax}), \bea c^{(0)}_{0,2} &=& \frac{1}{4}ts\; \tr\Biggl\{ \frac{1}{3}\left(G^{kl}G^{ij} +2G^{ik}G^{jl}\right) \left(R^+_{ij}+R^-_{ij}-2R^g_{ij}\right) g^+_{p(k}g^-_{l)q}\gamma_+^{p}\gamma_-^q \nonumber\\ && +\Biggl[ \frac{1}{2}G^{(ij}G^{kl)}V_{pqijkl} +3N^{jkl} \left(g^+_{mp}W_+{}^m{}_{(jk}g^-_{l)q} +g^-_{mq}W_-{}^m{}_{(jk} g^+_{l)p} \right) \Biggr] \gamma_+^{p}\gamma_-^q \nonumber\\ && +2M^{kl}g^+_{p(k}g^-_{l)q}\gamma_+^{p}\gamma_-^q +3G^{(ij}G^{kl)}g^+_{jp}g^-_{qi} \left( \gamma_-^q\gamma_+^p -\gamma_+^p\gamma_-^q \right) \nabla^{g,{\cal A}}_{k}({\cal C}^+_{l}-{\cal C}^-_{l}) \nonumber\\ && +\Bigl[3G^{(ij}G^{kl)}\left( g^+_{mp}W_+^m{}_{ij}g^-_{kq} +g^+_{kp}g^-_{mq}W_-^m{}_{ij} \right) +3N^{jkl}g^+_{pk}g^-_{jq} \Bigr] \nonumber\\ && \times \left(\gamma_-^{q}\gamma_+^p -\gamma_+^p\gamma_-^{q} \right)({\cal C}^+_{l}-{\cal C}^-_{l}) \nonumber\\ && +3G^{(ij}G^{kl)}\left({\cal C}^+_{k}{\cal C}^+_{l}+{\cal C}^-_{k}{\cal C}^-_{l}\right) \left( \gamma_+^p\gamma_-^q +\gamma_-^q\gamma_+^p 
\right)g^+_{pi}g^-_{jq} \nonumber\\ && -6G^{(ij}G^{kl)}{\cal C}^+_{k}{\cal C}^-_{l}\gamma_+^p\gamma_-^qg^+_{pi}g^-_{jq} -6G^{(ij}G^{kl)}{\cal C}^-_{l}{\cal C}^+_{k}\gamma_-^q\gamma_+^pg^+_{pi}g^-_{jq} \Biggr\}. \eea Thus, after some tedious but straightforward manipulations using (\ref{qsxxa}) and well-known algebraic properties of the Dirac matrices \cite{zhelnorovich19}, we obtain the coefficient $c_1$, (\ref{150zax}). This proves Theorem \ref{theorem3}; Corollary \ref{corollary3} follows. For equal operators $D_-=D_+$ the coefficient $C_1$ takes the form \be C_1=(t+s)^{-n/2}\int_M dx g^{1/2}\tr\Biggl\{ \left(\frac{n}{2}-1\right) \left(\frac{1}{6}R+\frac{1}{2}\gamma^{ij}\mathcal{R}_{ij}-S^2\right) -\frac{(n-1)}{2}i\gamma^j\nabla_jS \Biggr\}. \ee Notice that since $\nabla_j S$ anticommutes with $\gamma^j$ the last term here vanishes under the trace, and $C_1$ is indeed equal to $(n/2-1)(t+s)^{-n/2}A_1$, with $A_1$ given by (\ref{126viax}). \section{Conclusion} \setcounter{equation}0 The primary goal of this paper was to introduce and to study some new spectral invariants of a pair of elliptic partial differential operators on manifolds, which we call the relative spectral invariants and the combined heat traces. Of special interest are the asymptotics of these invariants. We established a general asymptotic expansion of these invariants and computed the first two coefficients of the asymptotic expansions.
https://arxiv.org/abs/1507.01152
Discriminants and Higher K-energies on Polarized Kähler Manifolds
Given a compact polarized Kähler manifold $X\hookrightarrow\mathbb{CP}^N$, the space of Bergman metrics on $X$, parameterized by $\mathrm{SL}(N+1,\mathbb{C})$, corresponds to a dense set in the space of Kähler potentials in the Kähler class as $N\to\infty$. Critical points of the $k$th K-energy functional, which is defined on the Kähler class, correspond to metrics with harmonic $k$th Chern form. In this paper it is shown that the higher K-energy functionals, when restricted to the Bergman metrics, are expressible as the energies of certain pairs of vectors (tensor products of discriminants). Consequently, we obtain results on the asymptotic behavior of these functionals along 1-parameter subgroups and their boundedness properties.
\section{Introduction} A major body of research in K\"ahler geometry has been guided by the Tian-Yau-Donaldson Problem, which asks for necessary and sufficient conditions for the existence of a canonical metric (e.g., extremal, cscK, K\"ahler-Einstein) in a given Hodge class. This problem has been solved in the Fano case by Chen-Donaldson-Sun \cite{CDSI,CDSII,CDSIII} and Tian \cite{tian15} in 2012. Alternatively, with the establishment of the partial $C^0$ estimate by Sz{\'e}kelyhidi\ in 2013 \cite{szekelyhidi15}, S.\ Paul's 2012 paper \cite{paul12} showed that for a Fano manifold $X$ with finite automorphism group, $(X,-K_X)$ is asymptotically K-stable (in Paul's sense of \emph{pairs}) if and only if it admits a K\"ahler-Einstein metric. A key step in \cite{paul12} was the algebraic reformulation of the Mabuchi K-energy in terms of the classical algebro-geometric \emph{discriminants}, i.e., defining polynomials of hypersurface dual varieties. The \emph{K-energy map} was introduced analytically by Mabuchi in 1986 (\cite{mabuchi86}) in order to detect K\"ahler-Einstein metrics in a given K\"ahler class. The Mabuchi K-energy is an integrated form of the \emph{Futaki invariant}, which vanishes if the K\"ahler class admits a cscK metric. Accordingly, a cscK metric is an extremum of the Mabuchi K-energy. Paul's reformulation of the Mabuchi K-energy in algebraic terms thus allowed for a GIT-style stability criterion to replace the extremal condition for K\"ahler-Einstein metrics. It is hoped that the stability criterion will be easier to check in explicit examples. In the 1986 paper \cite{BM86}, Bando and Mabuchi defined a broader class of functionals $M_k$, $k=1,2,\ldots$, that generalized the Mabuchi K-energy, the case $k=1$. These \emph{higher K-energy functionals} integrate a corresponding class of higher Futaki invariants (\cite{bando06}\footnote{written in 1983, but not published until 2006}).
Moreover, a K\"ahler metric with harmonic $k$th Chern form gives an extremum of the $k$th K-energy. However, for $k>1$ very little is known about the behavior of these functionals. It is unknown whether these functionals are bounded above or below, or whether they enjoy any sort of convexity properties, in analogy with the case $k=1$. We now outline the key results of this paper. As a preliminary result, we obtain the following \begin{Thm} \label{thm:mainthm} Let $X^n\hookrightarrow\ensuremath{\mathbb{CP}}^N$ be a smooth, nonlinear, irreducible subvariety embedded by a complete linear system and $k\leq n$ be a positive integer. If $k\geq 3$, assume further that at least one of the Chern classes $c_j(J_1(\ensuremath{\mathcal{O}}_X(1)))\neq 0$ for $k\leq j\leq n$, where $J_1(\ensuremath{\mathcal{O}}_X(1))$ is the bundle of 1-jets of the hyperplane bundle. Then there exist $\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$-modules $V_k$ and $W_k$ equipped with Hermitian norms, and nonzero vectors $v_k\in V_k$ and $w_k\in W_k$ such that the $k$th K-energy restricted to the Bergman metrics is given by \begin{equation} \label{eq:maineq} M_k(\sigma) = \lognorm{v_k}-\lognorm{w_k}, \end{equation} for $\sigma\in\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$. \end{Thm} The $\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$-modules $V_k$ and $W_k$ and the vectors $v_k$ and $w_k$ are given explicitly in Section \ref{sec:higherKen} in terms of Chow forms and $X$-discriminants. The construction of the norms, due to Tian (\cite{tian94}), is given in Section \ref{sec:logpolygrowth}. The assumption on the Chern classes of the jet bundle is there to ensure the relevant $X$-discriminants exist. With Theorem \ref{thm:mainthm} in hand, we obtain two results on the global behavior of $M_k$ on the space of Bergman metrics $\ensuremath{\mathcal{B}}$.
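Since each term in (\ref{eq:maineq}) is the log-norm of a vector in a finite-dimensional $\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$-module, its behavior along a one-parameter subgroup $\lambda$ is governed by the weight decomposition: $\|\lambda(t)\cdot v\|^2=\sum_i\abs{t}^{2m_i}\|v_i\|^2$, which after taking logarithms is $m_{\min}\log\abs{t}^2+O(1)$ as $\abs{t}\to 0$, where $m_{\min}$ is the least weight with nonzero component. A numerical illustration with hypothetical weights and component norms:

```python
import numpy as np

weights = np.array([-2, 0, 3])     # hypothetical lambda-weights of v
comps = np.array([1.0, 0.5, 2.0])  # norms of the corresponding weight components

def log_norm_sq(t):
    # ||lambda(t).v||^2 = sum_i |t|^(2 m_i) ||v_i||^2
    return np.log(np.sum(np.abs(t)**(2*weights) * comps**2))

m_min = weights.min()              # -2 dominates as |t| -> 0
t = 1e-4
remainder = log_norm_sq(t) - m_min*np.log(abs(t)**2)
# the O(1) term tends to log ||v_{m_min}||^2 as |t| -> 0
assert abs(remainder - np.log(comps[0]**2)) < 1e-8
```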
\begin{cor}[Asymptotics of $M_k$] \label{cor:asymptotics} Let $\lambda:\ensuremath{\mathbb{C}}^*\to\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$ be a 1-parameter subgroup. Then there exist asymptotic expansions of the higher K-energies as $\abs{t}\to 0:$ \begin{equation} M_k(\lambda(t)) = A_k(\lambda) \log\abs{t}^2 + B_k(\lambda), \end{equation} where $A_k(\lambda)\in\ensuremath{\mathbb{Z}}$ and $B_k(\lambda)$ is $O(1)$. \end{cor} \begin{cor}[Boundedness of $M_k$] \label{cor:boundedness} The following are equivalent: \begin{enumerate} \item $M_k$ is bounded below on $\ensuremath{\mathcal{B}}$ \item $M_k$ is bounded along all 1-parameter subgroups $\lambda:\ensuremath{\mathbb{C}}^*\to\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$ \item $M_k$ is bounded below on all algebraic tori in $\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$ \item the pair $(v_k,w_k)$ is K-semistable. \end{enumerate} \end{cor} This paper is organized as follows. In Section \ref{sec:notation} we establish notation and recall the definitions of the higher K-energies and discriminants. We set the notation for the embedding of $X$ into $\ensuremath{\mathbb{CP}}^N$ and the related Bergman metrics and then define the higher K-energies. Finally, we define discriminants and recall certain facts from the literature that relate to our work. In Section \ref{sec:logpolygrowth} we show a technical lemma: the higher K-energies have $\log$-polynomial growth on \mbox{$\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$.} This fact enables us to employ Tian's ``$\ensuremath{\partial\ensuremath{\bar{\partial}}}$'' technique from \cite{tian94}, needed for Lemma \ref{lem:lognorm}. In Section \ref{sec:discrdegrees} we obtain a formula for the discriminant degrees $\deg\left(\Delta_X^{(n-k)}\right)$ in terms of the integrals $\mu_k$ defined in Section \ref{subsec:higherenergies}.
This requires computing a top Chern class $c_{2n-k}$ of the bundle of 1-jets of the hyperplane bundle $J_1(\ensuremath{\mathcal{O}}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}(1))$. In Section \ref{sec:higherKen}, this Chern class is then used to obtain the formula (\ref{eq:maineq}), relating higher K-energies to discriminants. The author would like to thank his PhD advisor, Sean Paul, for encouraging him to study these functionals, and for helpful advice along the way. Also, he would like to acknowledge Joel Robbin, Jeff Viaclovsky, and Bing Wang for their roles in his education. Finally, he would like to thank the mathematics department at UW-Madison for its stimulating environment. This work will provide part of his dissertation. \section{Background and Notation} \label{sec:notation} \subsection*{Polarized K\"ahler Manifolds} \label{subsec:embedding} A K\"ahler manifold $(X,\omega)$ is said to be \emph{polarized} by a Hermitian holomorphic line bundle $(L,h)$ over $X$ if $\omega$ is the curvature of $h$, i.e., if $\omega=-\ensuremath{\sqrt{-1}}\ensuremath{\partial\ensuremath{\bar{\partial}}}\log \abs{s}^2_h$ for any local non-vanishing holomorphic section $s$. By Kodaira's embedding theorem any compact polarized K\"ahler manifold holomorphically embeds into $\ensuremath{\mathbb{P}}(H^0(X,L^{\otimes m}))\cong\ensuremath{\mathbb{CP}}^N$, for $m$ sufficiently large and $N:=\dim_{\ensuremath{\mathbb{C}}} H^0(X,L^{\otimes m})-1$. Conversely, any projective embedding $X\hookrightarrow \ensuremath{\mathbb{CP}}^N$ polarizes $X$. Specifically, given a smooth, compact, projectively embedded K\"ahler manifold $\iota:X\hookrightarrow \ensuremath{\mathbb{CP}}^N$ of complex dimension $n$ with K\"ahler form $\omega$, we have that $X$ is polarized by the (positive, holomorphic) line bundle $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^N}(1)|_{\iota(X)}$.
The standard metric $h_{\ensuremath{\mathbb{C}}^{N+1}}$ on $\ensuremath{\mathbb{C}}^{N+1}$ restricts to a Hermitian metric $h$ (unique up to rescaling) on $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^N}(-1)|_{\iota(X)}\subset\ensuremath{\mathbb{CP}}^N\times\ensuremath{\mathbb{C}}^{N+1}$ such that $\omega$ is the curvature $F$ of the Chern connection $\nabla^{h^\vee}$ for $h^\vee$, the metric on $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^N}(1)|_{\iota(X)}:=\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^N}(-1)|_{\iota(X)}^\vee$ induced by $h$; thus $ \omega = \iota^*F(\nabla^{h^\vee}).$ A local nonvanishing holomorphic section $s$ of $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^N}(1)|_{\iota(X)}$ gives a local K\"ahler potential via \begin{equation} \omega =_\ensuremath{\mathrm{loc}} -\ensuremath{\sqrt{-1}} \iota^* \ensuremath{\partial\ensuremath{\bar{\partial}}} \log \abs{s}^2_{h^\vee} = \ensuremath{\sqrt{-1}} \iota^* \ensuremath{\partial\ensuremath{\bar{\partial}}} \log \abs{s}^2_{h}, \end{equation} where $\abs{s}^2_{h^\vee}$ is the (pointwise) square norm of $s$ with respect to $h^\vee$ and similarly for $\abs{s}^2_h$. \subsection*{Bergman Metrics} \label{subsec:bergman} Given a basis $\set{s_0,\ldots,s_N}$ of sections of $H^0\!\left(X,\iota^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^N}(1)|_{\iota(X)}\right)$, the embedding $\iota$ can be written explicitly for an open set $U_i\subset X$ on which some $s_i$ is nonvanishing as $\iota|_{U_i} :U_i\hookrightarrow \ensuremath{\mathbb{CP}}^N $ given by \begin{align} \iota|_{U_i} &:p\mapsto \left[\frac{s_0(p)}{s_i(p)}:\cdots:\frac{s_N(p)}{s_i(p)} \right]. \end{align} Let $z=(z^1,\ldots,z^n):U_0\to\ensuremath{\mathbb{C}}^n$ be local coordinates on $U_0$ and set $T:\ensuremath{\mathbb{C}}^n\to\ensuremath{\mathbb{C}}^{N+1} $ \begin{align} T_i(z(p))&:=\frac{s_i(p)}{s_0(p)} , \end{align} $i=0,\ldots,N$, so that $ \iota|_{U_0}(p) = [1:T_1(z(p)):\cdots:T_N(z(p))]. 
$ The K\"ahler metric $\omega = \frac{\ensuremath{\sqrt{-1}}}{2\pi}\sum g_{i\ensuremath{\bar{j}}}\,\ensuremath{\mathrm{d}} z^i\wedge\ensuremath{\mathrm{d}} \ensuremath{\bar{z}}^j$ on $X$ is the pullback under $\iota$ of the Fubini-Study metric on $\ensuremath{\mathbb{CP}}^N$ restricted to $\iota(X)$. Let $\abs{\;\cdot\:}$ and $\ip{\,\cdot\,,\,\cdot\,}$ denote the norm and inner product on $\ensuremath{\mathbb{C}}^{N+1}$, respectively, and put $\partial_i:=\partial_{z^i}$, etc. Since $T$ is holomorphic, we have \begin{align} g_{i\ensuremath{\bar{j}}}(z) &= \partial_i \partial_{\ensuremath{\bar{j}}} \log \abs{T(z)}^2 = \partial_i \left(\frac{ \ip{T(z),\partial_jT(z) } }{\abs{T(z)}^2} \right) = \frac{\ip{\partial_{i}T(z),\partial_jT(z)}}{\abs{T(z)}^2} - \frac{\ip{T(z),\partial_jT(z)}\ip{\partial_iT(z),T(z)}}{\abs{T(z)}^4} . \end{align} The standard action of $\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$ on $\ensuremath{\mathbb{C}}^{N+1}$ induces an action of $\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$ on $X$ so that for $\sigma\in\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$ and $p\in X$, $$ \sigma\cdot p := \left[\sigma\cdot \left(\sum s_i(p)e_i\right)\right], $$ where the $e_i$ are the standard basis vectors in $\ensuremath{\mathbb{C}}^{N+1}$. The metric on $\sigma\cdot X$ is given locally by \begin{align} \omega_\sigma(z) := \sigma^*\omega (z) &=_\ensuremath{\mathrm{loc}} \ensuremath{\sqrt{-1}}\iota^*\ensuremath{\partial\ensuremath{\bar{\partial}}}\log\abs{\sigma\cdot T(z)}^2 . \end{align} It follows that $\omega_\sigma = \omega+\ensuremath{\sqrt{-1}}\ensuremath{\partial\ensuremath{\bar{\partial}}}\varphi_\sigma$, where \begin{align} \varphi_\sigma(z) &= \log \frac{\abs{\sigma\cdot T(z)}^2}{\abs{T(z)}^2} \end{align} on $U_0\subset X$. 
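The two-term expression for $g_{i\ensuremath{\bar{j}}}$ above can be checked numerically in the simplest nonlinear case, the Veronese curve $z\mapsto[1:z:z^2]$ in $\ensuremath{\mathbb{CP}}^2$, by comparing against a finite-difference evaluation of $\partial_z\partial_{\ensuremath{\bar{z}}}\log\abs{T(z)}^2=\frac{1}{4}\Delta\log\abs{T(z)}^2$ (an illustration only; the base point is chosen arbitrarily):

```python
import numpy as np

def T(z):                     # Veronese embedding of CP^1 in an affine chart
    return np.array([1.0, z, z**2])

def dT(z):
    return np.array([0.0, 1.0, 2*z])

def ip(a, b):                 # Hermitian inner product <a,b> = sum_k a_k conj(b_k)
    return np.sum(a * np.conj(b))

def g_formula(z):             # the two-term expression for g_{1 bar1}
    t, dt = T(z), dT(z)
    n2 = ip(t, t).real
    return (ip(dt, dt)/n2 - ip(dt, t)*ip(t, dt)/n2**2).real

def g_fd(z, h=1e-4):          # del delbar log|T|^2 via the 5-point Laplacian / 4
    f = lambda u: np.log(ip(T(u), T(u)).real)
    return (f(z+h) + f(z-h) + f(z+1j*h) + f(z-1j*h) - 4*f(z)) / (4*h**2)

z0 = 0.3 + 0.2j               # arbitrary base point in the chart U_0
assert abs(g_formula(z0) - g_fd(z0)) < 1e-5
```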
Writing $\omega_\sigma = \frac{\ensuremath{\sqrt{-1}}}{2\pi}\sum h_{i\ensuremath{\bar{j}}}\,\ensuremath{\mathrm{d}} z^i\wedge\ensuremath{\mathrm{d}} \ensuremath{\bar{z}}^j$ on $U_0$, we have \begin{align} h_{i\ensuremath{\bar{j}}}(z) &= \frac{\ip{\sigma(\partial_iT(z)),\sigma(\partial_jT(z))}}{\abs{\sigma T(z)}^2} - \frac{\ip{\sigma(\partial_iT(z)),\sigma T(z)}\ip{\sigma T(z),\sigma(\partial_jT(z))}}{\abs{\sigma T(z)}^4}. \end{align} For a general compact K\"ahler manifold the space of K\"ahler metrics in the cohomology class $[\omega]$ is parameterized by real $C^\infty$ plurisubharmonic functions up to the addition of a constant via $\omega_\varphi=\omega+\sqrt{-1}\ensuremath{\partial\ensuremath{\bar{\partial}}}\varphi$, where $\varphi\in\ensuremath{C^\infty}(X,\ensuremath{\mathbb{R}})$. We denote the function space $$ \ensuremath{\mathcal{H}}_\omega := \set{\varphi\in\ensuremath{C^\infty}(X,\ensuremath{\mathbb{R}})\,:\, \omega_\varphi=\omega+\sqrt{-1}\ensuremath{\partial\ensuremath{\bar{\partial}}}\varphi>0}, $$ where here $>0$ means ``is positive definite.'' The key result due to Tian, Ruan, Zelditch, and Catlin shows that the Bergman metrics are dense in the $C^\infty$ topology on $\ensuremath{\mathcal{H}}_\omega$, in the sense that for any $\varphi\in\ensuremath{\mathcal{H}}_\omega$ there exists a sequence $\frac{1}{k}\rho_k:= \frac{1}{k} \log(\sum_j\abs{s_j}^2_{h^k})$ converging to $\varphi$ as $k\ensuremath{\rightarrow}\infty$ in the $C^\infty$ topology, where $\set{s_0,\ldots,s_{N_{k}}}$ is a basis of $H^0(X,L^{\otimes k})$. \subsection*{Higher K-Energies} \label{subsec:higherenergies} Let $(X,\omega)$ be a compact K\"ahler manifold, $\varphi\in\ensuremath{\mathcal{H}}_{\omega}$, and $k\in\set{1,\ldots,n}$.
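For $X=\ensuremath{\mathbb{CP}}^1$ and $L=\ensuremath{\mathcal{O}}(1)$ this approximation is exact at every $k$: with the weighted monomial basis $s_j=\binom{k}{j}^{1/2}z^j$ of $H^0(X,L^{\otimes k})$, the local Bergman potential is $\frac{1}{k}\log\sum_j\binom{k}{j}\abs{z}^{2j}=\log(1+\abs{z}^2)$, the Fubini-Study potential itself. A sympy check of the underlying binomial identity (an illustration, with $x$ standing for $\abs{z}^2$):

```python
import sympy as sp

x = sp.symbols('x')  # stands for |z|^2

for k in range(1, 9):
    s = sum(sp.binomial(k, j) * x**j for j in range(k + 1))
    assert sp.expand(s - (1 + x)**k) == 0
```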
Given a smooth path $\Phi:[0,1]\to\ensuremath{\mathcal{H}}_{\omega_0}$, $\varphi_t:=\Phi(t)$, from $\varphi_0:=0$ to $\varphi_1:=\varphi$, the \emph{$k$th K-energy functional} $M_k:\ensuremath{\mathcal{H}}_\omega\to\ensuremath{\mathbb{R}}$ is defined to be \begin{equation} \label{eq:genKenergy} M_k(\varphi):= -(n+1)(n-k+1)V\int_0^1 \set{ \int_X \dot\varphi_t \left[c_k(\omega_t)-\mu_k\omega_t^k\right]\wedge \omega_t^{n-k} } \ensuremath{\mathrm{d}} t, \end{equation} where \begin{align} \omega_t&:=\omega_0+\ensuremath{\sqrt{-1}}\ensuremath{\partial\ensuremath{\bar{\partial}}}\varphi_t & \mu_k &:= \frac{1}{V} \int_X c_k(\omega_0)\wedge \omega_0^{n-k} \label{eq:muk} & V &:= \int _X \omega_0^n. \end{align} Note that $\mu_k$ and $V$ are constants on $\ensuremath{\mathcal{H}}_\omega$. The factor $-(n+1)(n-k+1)V$ coincides with the normalization for the Mabuchi K-energy in \cite{paul12}. Its presence simplifies the formula for $M_k$ in terms of discriminants and ensures that $A_k(\lambda)$ in Corollary \ref{cor:asymptotics} lies in $\ensuremath{\mathbb{Z}}$ instead of just $\ensuremath{\mathbb{Q}}$. Bando and Mabuchi showed that these functionals are independent of the chosen path $\varphi_t$ in $\ensuremath{\mathcal{H}}_{\omega}$. In the case $k=1$, $M_1$ is the Mabuchi energy, whose extrema are cscK metrics. In the general case, the extrema of $M_k$ are metrics whose $k$th Chern form is harmonic. \subsection*{Discriminants} \label{subsec:discriminants} In this subsection we provide the necessary background material on discriminants and projective duality. While many of the definitions and results are classical, we include them here since they may not be familiar to many K\"ahler geometers. A good reference for the material in this section is the excellent book by Gelfand, Kapranov, and Zelevinsky \cite{GKZ08}. See also \cite{tevelev05} for a more compact treatment. Let $X^n\hookrightarrow\ensuremath{\mathbb{CP}}^N$ be a projectively embedded K\"ahler manifold.
For each $p\in X$, denote by $\ensuremath{\mathbb{T}}_pX$ the \emph{embedded tangent space} to $X$ at $p$. This is an $n$-dimensional linear subspace of $\ensuremath{\mathbb{CP}}^N$. The set of all hyperplanes in $\ensuremath{\mathbb{CP}}^N$ is the \emph{dual} of $\ensuremath{\mathbb{CP}}^N$, denoted $(\ensuremath{\mathbb{CP}}^N)^\vee$. \begin{definition} Let $X\hookrightarrow\ensuremath{\mathbb{CP}}^N$ be a projectively embedded K\"ahler manifold. Assume further that $X$ is a nonlinear, linearly normal subvariety. Then the \emph{dual variety} $X^\vee$ of $X$ is the variety of tangent hyperplanes to $X$: \begin{equation} X^\vee := \cl{\set{H\in(\ensuremath{\mathbb{CP}}^N)^\vee\,|\, \exists p\in X:\ensuremath{\mathbb{T}}_pX \subseteq H}} , \end{equation} the closure taken in the Zariski sense. \end{definition} The \emph{linear normality} condition is added to avoid trivialities, and is not restrictive. It ensures that $X$ is nondegenerate and not equal to a nontrivial projection. Here, \emph{nondegeneracy} means that $X\subset\ensuremath{\mathbb{CP}}^N$ is not contained in any hyperplane. The essential content is that the embedding is optimal: $X$ spans $\ensuremath{\mathbb{CP}}^N$, and the embedding does not arise by projecting an embedding of $X$ into a larger projective space. \begin{quote} \emph{In the remainder of this section it is assumed that $X^n\hookrightarrow\ensuremath{\mathbb{CP}}^N$ is a smooth, nonlinear, irreducible, linearly normal, degree $d$ projective variety with $n<N$.} \end{quote} Most dual varieties are hypersurfaces in $(\ensuremath{\mathbb{CP}}^N)^\vee$. The deviation of the codimension of $X^\vee\subset(\ensuremath{\mathbb{CP}}^N)^\vee$ from $1$ is measured by the \emph{dual defect} \begin{equation} \delta(X) := (N-1)-\dim(X^\vee) \geq 0. \end{equation} An upper bound for $X^n\hookrightarrow \ensuremath{\mathbb{CP}}^N$ with $n\geq 2$ is $$ \delta(X) \leq n-2.
$$ We have a formula for the dual defect in terms of the Chern classes of the bundle of $1$-jets of the hyperplane bundle: $$ \delta(X) = \min\set{k\in\ensuremath{\mathbb{Z}}\,|\,c_{n-k}(J_1(\ensuremath{\mathcal{O}}_X(1)))\neq 0}. $$ \begin{definition} If $\delta(X)=0$ so that $X^\vee\subset(\ensuremath{\mathbb{CP}}^N)^\vee$ is a hypersurface, the defining polynomial $\Delta_X$ (unique up to scaling) is called the \emph{$X$-discriminant}: \begin{equation} (\Delta_X)\ensuremath{^{-1}}(0) := X^\vee \subset (\ensuremath{\mathbb{CP}}^N)^\vee. \end{equation} \end{definition} We usually just speak of \emph{the} discriminant when $X$ is understood. We can say more about the dual defect if we follow Cayley and look at Segre embeddings. \begin{definition} In general, we consider the Segre embedding $$X\times\ensuremath{\mathbb{CP}}^k \hookrightarrow \ensuremath{\mathbb{P}}(\ensuremath{\mathbb{C}}^{N+1}\otimes\ensuremath{\mathbb{C}}^{k+1}).$$ If $\delta(X\times\ensuremath{\mathbb{CP}}^k)=0$, the \emph{$X$-hyperdiscriminant} of \emph{format}\footnote{As the notation suggests, there is a multi-index formulation of the $X$-hyperdiscriminant. We omit this as it is unnecessary for our purposes.} $(k)$ is the irreducible defining polynomial of the hypersurface $(X\times\ensuremath{\mathbb{CP}}^k)^\vee$: \begin{equation} (\Delta^{(k)}_X)\ensuremath{^{-1}}(0):=(X\times\ensuremath{\mathbb{CP}}^k)^\vee \subset \ensuremath{\mathbb{P}}(\ensuremath{\mathbb{C}}^{N+1}\otimes\ensuremath{\mathbb{C}}^{k+1})^\vee . \end{equation} \end{definition} \begin{lem} The $X$-hyperdiscriminant $\Delta^{(k)}_X$ exists if and only if \begin{equation} \delta(X)\leq k\leq n.\end{equation} \end{lem} In particular, since $\delta(X)\leq n-2$ whenever $n\geq 2$, we have in that case that $\Delta_{X}^{(n-2)},\Delta_{X}^{(n-1)},\Delta_{X}^{(n)}$ always exist. There is a nice relationship between discriminants and resultants.
Recall that the \emph{Cayley-Chow form} of $X$, or \emph{$X$-resultant}, is the defining polynomial $R_X$ (unique up to scaling) of the divisor \begin{equation} (R_X)\ensuremath{^{-1}}(0) := \set{L\in\ensuremath{\mathbb{G}}(N-n-1,\ensuremath{\mathbb{CP}}^N) \,|\, L\cap X\neq\varnothing} , \end{equation} where $\ensuremath{\mathbb{G}}(k,\ensuremath{\mathbb{CP}}^N)$ denotes the Grassmannian variety of $k$-planes in $\ensuremath{\mathbb{CP}}^N$; note that a generic $(N-n-1)$-plane misses $X^n$, so this locus is indeed a hypersurface. We note that $R_X$ is irreducible since $X$ is irreducible, and in Pl\"ucker coordinates on $\ensuremath{\mathbb{G}}(N-n-1,\ensuremath{\mathbb{CP}}^N)$, $\deg(R_X)=\deg(X)$. The \emph{Cayley trick} relates $X$-hyperdiscriminants and $X$-resultants by \begin{align} \Delta_{X}^{(\delta(X))} &= R_{X^\vee} \\ \Delta_{X}^{(n)} &= R_{X} . \end{align} We think of intermediate hyperdiscriminants $\Delta_{X}^{(k)}$ with $\delta(X)<k<n$ as interpolating between $R_{X^\vee}$ and $R_X$. \section{Log-Polynomial Growth of K-Energies on the Space of Bergman Metrics} \label{sec:logpolygrowth} The purpose of this section is to generalize the Main Lemma in \cite{paul12}, stated below, to the higher K-energies. Recall that the \emph{Donaldson functional} of a $\ensuremath{\mathrm{GL}}(n,\ensuremath{\mathbb{C}})$-invariant polynomial $\Phi$ of degree $n+1$ on a vector bundle $E$ is \begin{equation} D_E(\Phi;H_0,H_1) := \int_X \ensuremath{\mathbf{BC}}(E,\Phi;H_0,H_1), \end{equation} where $\ensuremath{\mathbf{BC}}(E,\Phi;H_0,H_1)$ is the \emph{Bott-Chern form} of $\Phi$ on the vector bundle $E$ between the Hermitian metrics $H_0$ and $H_1$ on $E$. The Bott-Chern form \emph{transgresses} between $\Phi(F_0)$ and $\Phi(F_1)$, where $F_i$ denotes the curvature of the metric $H_i$, i.e. \begin{equation} \ensuremath{\sqrt{-1}}\ensuremath{\partial\ensuremath{\bar{\partial}}} \ensuremath{\mathbf{BC}}(E,\Phi;H_0,H_1) = \Phi(F_1)-\Phi(F_0). \end{equation} \begin{lem}[\cite{paul12}] \label{lem:paul} Let $X\hookrightarrow \ensuremath{\mathbb{CP}}^N$ be a smooth, linearly normal $n$-dimensional subvariety.
Assume that $X^\vee$, the dual of $X$, is a hypersurface with defining polynomial $\Delta_X$ of degree $d^\vee$ and that $D_{J_1(\ensuremath{\mathcal{O}}_X(1))^\vee}(c_{n+1};H(\sigma),H(e))$ has log-polynomial growth in $\sigma$, where $\sigma\in\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$. Then there is a continuous norm $\norm{\,\cdot\,}$ on the vector space of degree-$d^\vee$ polynomials on $(\ensuremath{\mathbb{C}}^{N+1})^\vee$ such that for all $\sigma \in \ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$, we have \begin{equation} (-1)^{n+1} D_{J_1(\ensuremath{\mathcal{O}}_X(1))^\vee}(c_{n+1};H(\sigma),H(e)) = \lognorm{\Delta_X} \end{equation} where $e$ denotes the identity of $\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$. \end{lem} Here we recall the construction of the continuous norm on $\ensuremath{\mathcal{O}}_B(-1)$, where $B$ is the projective space \begin{equation} B:= \ensuremath{\mathbb{P}}(H^0((\ensuremath{\mathbb{CP}}^N)^\vee,\ensuremath{\mathcal{O}}(d^\vee))) \end{equation} and $d^\vee:=\deg(X^\vee)$. The discriminant determines a point $\Delta_X\in B$ and, given a linear form $a_0z_0+\cdots+a_Nz_N$ (that is, a point of $(\ensuremath{\mathbb{CP}}^N)^\vee$), we can write \begin{equation} \Delta_X(a_0z_0+\cdots+a_Nz_N) = \sum_{i_0+\cdots+i_N=d^\vee} c_{i_0,\ldots,i_N} a_0^{i_0} \cdots a_N^{i_N}. \end{equation} In these coordinates, we define a norm on $\ensuremath{\mathcal{O}}_B(-1)$ by \begin{equation} \norm{\Delta_X}^2_{FS} := \sum_{i_0+\cdots+i_N=d^\vee} \frac{\abs{c_{i_0,\ldots,i_N}}^2}{i_0! \cdots i_N!}. \end{equation} Define a new norm conformal to $\norm{\,\cdot\,}_{FS}$ by \begin{equation} \norm{\,\cdot\,} := e^\theta \norm{\,\cdot\,}_{FS}, \end{equation} where $\theta$, defined below, is a continuous function on $B$. Note that $\theta$ is bounded since $B$ is compact.
To define $\theta$, first recall that the \emph{universal hypersurface associated to $B$} is the zero locus of the evaluation map \begin{equation} \Sigma := \set{([F],[a_0z_0+\cdots+a_Nz_N])\in B\times(\ensuremath{\mathbb{CP}}^N)^\vee\,:\, F(a_0,\ldots,a_N)=0}. \end{equation} Now define $u$ to be the $(1,1)$-current on $B$ given by \begin{equation} \int_B u\wedge \psi = \int_\Sigma (\ensuremath{\mathrm{pr}}_2)^*(\omega_{(\ensuremath{\mathbb{CP}}^N)^\vee})\wedge(\ensuremath{\mathrm{pr}}_1)^*(\psi) \end{equation} for all smooth $(b-1,b-1)$-forms $\psi$ on $B$, where $b=\dim_\ensuremath{\mathbb{C}} B$, $\omega_{(\ensuremath{\mathbb{CP}}^N)^\vee}$ is the Fubini-Study K\"ahler form on $(\ensuremath{\mathbb{CP}}^N)^\vee$, and $\ensuremath{\mathrm{pr}}_1$ and $\ensuremath{\mathrm{pr}}_2$ are the projection maps on $B\times(\ensuremath{\mathbb{CP}}^N)^\vee$. Tian (\cite{tian97}) showed that $[u]=[\omega_B]$ in cohomology, where $\omega_B$ is the Fubini-Study form on $B$, and there exists a continuous function $\theta$ on $B$ such that \begin{equation} u=\omega_B + \ensuremath{\sqrt{-1}}\ensuremath{\partial\ensuremath{\bar{\partial}}}\theta, \end{equation} in the sense of currents. We explain the ``log-polynomial growth'' mentioned in the lemma. Denote \begin{equation} D(\sigma) := D_{J_1(\ensuremath{\mathcal{O}}_X(1))^\vee}(c_{n+1};H(\sigma),H(e)). \end{equation} In the proof of this lemma (Prop.\ 4.2 \emph{ibid.}) it was shown that the quantity \begin{equation} (-1)^{n+1}D(\sigma) - \lognorm{\Delta_X} \end{equation} is a pluriharmonic function on $G:=\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$, i.e. \begin{equation} \ensuremath{\partial\ensuremath{\bar{\partial}}}\left((-1)^{n+1} D(\sigma) - \lognorm{\Delta_X} \right) = 0. \end{equation} We would like to drop the $\ensuremath{\partial\ensuremath{\bar{\partial}}}$ from this formula. To this end, note that pluriharmonicity implies that there is a holomorphic function $F$ on $G$ such that \begin{equation} (-1)^{n+1}D(\sigma) - \lognorm{\Delta_X} = \log\abs{F(\sigma)}^2.
\end{equation} Following \cite{tian97} pp.33--34, consider $G$ as a quasi-affine subvariety of $\ensuremath{\mathbb{CP}}^{(N+1)^2}$. More precisely, given homogeneous coordinates $z_{ij}$ for $0\leq i,j\leq N$, define $W$ to be the projective variety $$ W = \set{[z_{00}:z_{01}:\cdots:z_{N,N}:w]\,:\, \det(z_{ij}) = w^{N+1} } \subset \ensuremath{\mathbb{CP}}^{(N+1)^2}. $$ Then $G=W\cap\set{w\neq 0}$. We have that $F$ extends to $W$ as a meromorphic function provided $F$ \emph{grows polynomially} near $W\ensuremath{\smallsetminus} G$, i.e. there are constants $\ell>0$ and $C>0$ such that $$ \abs{F(\sigma)} \leq C \cdot \ensuremath{\mathrm{dist}}(\sigma,W\ensuremath{\smallsetminus} G)^{-\ell}, $$ where the distance is measured using the Fubini-Study metric on $\ensuremath{\mathbb{CP}}^{(N+1)^2}$. All the poles of $F$ must live in $W\ensuremath{\smallsetminus} G$, and since $\log\abs{F}^2$ is finite on $G$, all the zeroes of $F$ must live there as well. But $W\ensuremath{\smallsetminus} G$ is irreducible and $W$ is normal, so the divisor of $F$ is an integer multiple of $W\ensuremath{\smallsetminus} G$; a principal divisor supported on a single irreducible hypersurface must vanish. Therefore $F$ is constant and, since $\log\abs{F(e)}^2=0$, where $e\in G$ is the identity, \begin{equation} (-1)^{n+1}D(\sigma) = \lognorm{\Delta_X}. \end{equation} While the polynomial growth of $F$ was given for the $D(\sigma)$ corresponding to $M_1(\sigma)$, we must establish the log-polynomial growth for the higher K-energies. This is done in the next two lemmas. \begin{lem} Given a compact polarized K\"ahler manifold $\iota:X\hookrightarrow \ensuremath{\mathbb{CP}}^N$, let $\omega_\sigma$ denote the Bergman metric induced by $\sigma\in\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$.
Then there exist constants $C_1,C_2,C_3,C_4$ such that \begin{align} \label{eq:hest} \norm{\omega_\sigma}_{\omega_e} &\leq C_1(\norm{\iota}_{C^1(X)}) \\ \label{eq:hinvset} \norm{\omega_\sigma^\#}_{\omega_e} &\leq C_2(\norm{\iota}_{C^1(X)}) \\ \norm{\ensuremath{\mathrm{Rm}}_\sigma}_{\omega_e} &\leq C_3(\norm{\iota}_{C^2(X)}) \\ \norm{c_k(\omega_\sigma)}_{\omega_e} &\leq C_4(\norm{\iota}_{C^2(X)}), \end{align} where $e\in\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$ is the identity. By $\norm{\iota}_{C^k(X)}$ we mean \begin{equation} \norm{\iota}_{C^k(X)} := \sup_{f\in C^k(\ensuremath{\mathbb{CP}}^N)\ensuremath{\smallsetminus}\set{0}} \frac{\norm{\iota^* f}_{C^k(X)} }{ \norm{f}_{C^k(\ensuremath{\mathbb{CP}}^N)} } \end{equation} and the musical isomorphism $\#$ is induced by $\omega_e$. \end{lem} \begin{proof} Locally, \begin{equation} \omega_\sigma = \ensuremath{\sqrt{-1}}\ensuremath{\partial\ensuremath{\bar{\partial}}}\log\abs{\sigma T}^2 = \ensuremath{\sqrt{-1}}\sum_{i,j} h_{i\ensuremath{\bar{j}}} \,\ensuremath{\d z}^i\wedge\ensuremath{\d\ensuremath{\bar{z}}}^j \end{equation} on an open subset $U\subset X$ so that \begin{equation} \label{eq:Bergmanmetricincoordinates} h_{i\ensuremath{\bar{j}}} = \frac{\ip{\sigma(\partial_iT),\sigma(\partial_jT)}}{\abs{\sigma T}^2} - \frac{\ip{\sigma(\partial_iT),\sigma T}\ip{\sigma T,\sigma(\partial_jT)}}{\abs{\sigma T}^4}, \end{equation} where the norms and inner products in Eq.\ (\ref{eq:Bergmanmetricincoordinates}) are on $\ensuremath{\mathbb{C}}^{N+1}$. Recall the definition of $T:\ensuremath{\mathbb{C}}^n\to\ensuremath{\mathbb{C}}^{N+1}$ given in Section \ref{subsec:bergman}. Consider a fixed point $p\in U$. We see that each term is rational in $\sigma$ with matching degrees in the numerator and denominator. Since the norm $\abs{\sigma T}$ is nondegenerate and $T(p)\in\ensuremath{\mathbb{C}}^{N+1}\ensuremath{\smallsetminus}\set{0}$, each rational expression is uniformly bounded above and below. 
Thus, at $p\in U$ \begin{equation} \abs{h_{i\ensuremath{\bar{j}}}(p)}(\sigma) \leq C_1(T(p),\bar{T}(p),\partial_iT(p),\partial_{\ensuremath{\bar{j}}}\bar{T}(p)), \end{equation} which shows (\ref{eq:hest}). By the same token \begin{align} \partial_kh_{i\ensuremath{\bar{j}}} &= \frac{\ip{\sigma (\partial_k\partial_iT),\sigma (\partial_jT)}}{\abs{\sigma T}^2} - \frac{\ip{\sigma (\partial_iT),\sigma (\partial_jT)}\ip{\sigma (\partial_kT),\sigma T}}{\abs{\sigma T}^4} -\frac{\ip{\sigma (\partial_k\partial_iT),\sigma T}\ip{\sigma T,\sigma (\partial_jT)}}{\abs{\sigma T}^4} \notag \\ &\qquad -\frac{\ip{\sigma (\partial_iT),\sigma T}\ip{\sigma (\partial_kT),\sigma (\partial_jT)}}{\abs{\sigma T}^4} +2\frac{\ip{\sigma (\partial_kT),\sigma T}\ip{\sigma (\partial_iT),\sigma T}\ip{\sigma T,\sigma (\partial_jT)}}{\abs{\sigma T}^6}, \end{align} etc., so that at $p\in U$, but suppressing this dependence in the notation, \begin{align} \abs{\partial_k h_{i\ensuremath{\bar{j}}}}(\sigma) &\leq C_2(T,\bar{T},\partial_iT,\partial_{\ensuremath{\bar{j}}}\bar{T},\partial_kT,\partial_k\partial_iT) \\ \abs{\partial_{\ensuremath{\overline{\ell}}} h_{i\ensuremath{\bar{j}}}}(\sigma) &\leq C_3(T,\bar{T},\partial_iT,\partial_{\ensuremath{\bar{j}}}\bar{T},\partial_{\ensuremath{\overline{\ell}}} \bar{T},\partial_{\ensuremath{\overline{\ell}}}\partial_{\ensuremath{\bar{j}}}\bar T) \\ \abs{\partial_k\partial_{\ensuremath{\overline{\ell}}} h_{i\ensuremath{\bar{j}}}}(\sigma) &\leq C_4(T,\bar{T},\partial_iT,\partial_{\ensuremath{\bar{j}}}\bar{T},\partial_{\ensuremath{\overline{\ell}}} \bar{T},\partial_{\ensuremath{\overline{\ell}}}\partial_{\ensuremath{\bar{j}}}\bar T,\partial_kT,\partial_k\partial_iT). \end{align} To bound the inverse metric $H\ensuremath{^{-1}}$, just note that the constant term of the characteristic polynomial is $(-1)^n\det(H)$, which is nonzero since $H$ is positive definite. Applying $H\ensuremath{^{-1}}$ to both sides of the Cayley--Hamilton identity then shows that $H\ensuremath{^{-1}}$ is a polynomial in $H$ divided by $\det(H)$.
Thus \begin{equation} \big|h^{i\ensuremath{\bar{j}}}\big|(\sigma)\leq C_5(T,\bar{T},\partial_iT,\partial_{\ensuremath{\bar{j}}}\bar{T}) , \end{equation} which shows (\ref{eq:hinvset}). Next, we see that \begin{align} \abs{R_{i\ensuremath{\bar{j}} k\ensuremath{\overline{\ell}}}}(\sigma) &\leq \abs{\partial_k\partial_{\ensuremath{\overline{\ell}}}h_{i\ensuremath{\bar{j}}}}+\abs{h^{p\ensuremath{\bar{q}}}}\abs{\partial_k h_{i\ensuremath{\bar{q}}}}\abs{\partial_{\ensuremath{\overline{\ell}}}h_{p\ensuremath{\bar{j}}}} \leq C_6(T,\bar{T},\partial_iT,\partial_{\ensuremath{\bar{j}}}\bar{T},\partial_{\ensuremath{\overline{\ell}}} \bar{T},\partial_{\ensuremath{\overline{\ell}}}\partial_{\ensuremath{\bar{j}}}\bar T,\partial_kT,\partial_k\partial_iT). \end{align} (Similarly, the Ricci and scalar curvatures are uniformly bounded with respect to $\sigma$, since contractions with $h^{i\ensuremath{\bar{j}}}$ are controlled.) Finally, we note that the Chern forms are given by polynomials in the curvature $2$-form, which is uniformly bounded. \end{proof} \begin{lem} \label{lem:estimate} Assume $X\times\ensuremath{\mathbb{CP}}^{n-k}$ is dually nondegenerate in its Segre embedding. Then the higher K-energies have log-polynomial growth. Thus, for each $k=1,\ldots,n$, there is a holomorphic function $F_k$ on $G$ and constants $\ell_k>0$ and $C_k>0$ such that for all $\sigma \in G$, \begin{equation} \label{eq:estresult} (-1)^{n+1}D_k(\sigma) - \lognorm{\Delta_{X\times\ensuremath{\mathbb{CP}}^{n-k}}} = \log\abs{F_k(\sigma)}^2, \end{equation} and \begin{equation} \label{eq:theestimate} \abs{F_k(\sigma)} \leq C_k\cdot \ensuremath{\mathrm{dist}}(\sigma,W\ensuremath{\smallsetminus} G)^{-\ell_k}.
\end{equation} \end{lem} \begin{proof} We study the asymptotic behavior in $\sigma$ of \begin{align} \label{eq:higherKen} M_k(\sigma) &= -(n+1)(n-k+1)V \int_0^1\int_X \dot\varphi_t \left[c_k(\omega_t)\wedge \omega_t^{n-k}-\mu_k\omega_t^n\right] \,\ensuremath{\mathrm{d}} t \end{align} by considering the particular path in $\ensuremath{\mathcal{H}}_\omega$ given by \begin{align} \varphi_t &= \log\frac{\abs{e^{\xi t}T}^2}{\abs{T}^2} \end{align} where $\xi\in\ensuremath{\mathfrak{sl}}(N+1,\ensuremath{\mathbb{C}})$ satisfies $e^\xi=\sigma$. With this path $\omega_t:=\omega+\ensuremath{\sqrt{-1}}\ensuremath{\partial\ensuremath{\bar{\partial}}}\varphi_t$ is a Bergman metric for each $t\in[0,1]$. By the previous lemma the factor in brackets in Eq. (\ref{eq:higherKen}) is uniformly bounded in $\sigma$. Also \begin{align} \abs{\dot\varphi_t} (\sigma) &= \frac{\abs{\ip{e^{\xi t} T,(\xi^*+\xi)e^{\xi t} T}}}{\abs{e^{\xi t}T}^2} \leq \norm{\xi^*+\xi}_{\ensuremath{\mathrm{op}}} \leq \log\ensuremath{\mathrm{Tr}}(\sigma^*\sigma), \end{align} where $\norm{\;\cdot\;}_{\ensuremath{\mathrm{op}}}$ is the operator norm on matrices. The last inequality follows since the eigenvalues of $\sigma$ are the exponentials of the eigenvalues of $\xi$. This establishes the estimate (\ref{eq:theestimate}); Eq.\ \ref{eq:estresult} now follows from Proposition 4.2 in \cite{paul12}. \end{proof} This establishes Lemma \ref{lem:paul} for the Donaldson functionals corresponding to the higher K-energies. \section{Discriminant Degrees} \label{sec:discrdegrees} The purpose of this section is to compute the degree of the $X$-hyperdiscriminant of format $(n-k)$. To accomplish this we use the following result of Beltrametti, Fania, and Sommese \cite{BFS92}. 
\begin{lem}[\cite{BFS92}] \label{lem:BFS92} If $X^n\hookrightarrow\ensuremath{\mathbb{CP}}^N$ is smooth, then $X^\vee$ is a hypersurface if and only if $c_n(J_1(\ensuremath{\mathcal{O}}_X(1))) \neq 0$, where $J_1(\ensuremath{\mathcal{O}}_X(1))$ is the bundle of 1-jets of the hyperplane bundle on $\ensuremath{\mathbb{CP}}^N$ restricted to $X$. In this case \begin{equation} \label{eq:BFSdegformula} \deg(\Delta_X) = \int_X c_n(J_1(\ensuremath{\mathcal{O}}_X(1))) . \end{equation} \end{lem} In the case of the $X$-hyperdiscriminant of format $(n-k)$ the integral becomes \begin{equation} \label{eq:degreeintegral} \deg\left(\Delta_X^{(n-k)}\right) = \int_{X\times \ensuremath{\mathbb{CP}}^{n-k}} s^* c_{2n-k}(J_1(\ensuremath{\mathcal{O}}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}(1))), \end{equation} where $s:X\times \ensuremath{\mathbb{CP}}^{n-k}\hookrightarrow \ensuremath{\mathbb{CP}}^\ell$, $\ell=(n+1)(n-k+1)-1$, denotes the Segre embedding. To compute this integral, our strategy will be to split up the Chern classes until each factor is supported either on $X$ or on $\ensuremath{\mathbb{CP}}^{n-k}$. This is accomplished by the following. \begin{lem} \label{lem:JettopChern} We have \begin{equation} \label{eq:JettopChern} c_{2n-k}(J_1) = \sum_{i=0}^{k} (-1)^i (n-i+1)\binom{n-i}{n-k} c_i(\omega)\wedge \omega^{n-i} \wedge \omega_{FS}^{n-k} \end{equation} where \begin{align} J_1 &:= J_1(\ensuremath{\mathcal{O}}_{s(X\times \ensuremath{\mathbb{CP}}^{n-k})}(1)) & \omega &:= \ensuremath{\mathrm{pr}}_1^*\omega=\ensuremath{\mathrm{pr}}_1^*c_1(\ensuremath{\mathcal{O}}_X(1)) \\ c_i(\omega) &:= \ensuremath{\mathrm{pr}}_1^*c_i(T^{1,0}_X) = (-1)^i\ensuremath{\mathrm{pr}}_1^*c_i(\Omega_X^{1,0}) & \omega_{FS} &:= \ensuremath{\mathrm{pr}}_2^*\omega_{FS} = \ensuremath{\mathrm{pr}}_2^* c_1(\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)).
\end{align} \end{lem} \begin{proof} \noindent\textbf{Bundle Factorization Formulas.} \noindent\emph{Smooth} Euler Splitting: \begin{align} \bigoplus^{k+1} \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^k}(-1) &\cong \Omega^{1,0}_{\ensuremath{\mathbb{CP}}^k} \oplus \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^k} \end{align} Jet Bundle Sequence: for any holomorphic line bundle $L\to X$ \begin{align} \SelectTips{cm}{} \xymatrix{ 0 \ar[r] & \Omega^{1,0}_X \otimes L \ar[r] & J_1(L) \ar[r] & L \ar[r] & 0 } \end{align} Segre Factorization: setting $s^*\ensuremath{\mathcal{O}}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}(1):=s^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^\ell}(1)|_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}$ and $\ensuremath{\mathcal{O}}_X(1):=\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^N}(1)|_X$ \begin{align} s^*\ensuremath{\mathcal{O}}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}(1) &\cong \ensuremath{\mathrm{pr}}_1^*\left(\ensuremath{\mathcal{O}}_X(1)\right) \otimes \ensuremath{\mathrm{pr}}_2^*\left(\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)\right) \end{align} (Holomorphic) Base Product Splitting: \begin{align} s^*\Omega^{1,0}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}(1) &\cong \left(\ensuremath{\mathrm{pr}}_1^*\Omega^{1,0}_X(1) \otimes \ensuremath{\mathrm{pr}}_2^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)\right) \oplus \left( \ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1) \otimes \ensuremath{\mathrm{pr}}_2^*\Omega^{1,0}_{\ensuremath{\mathbb{CP}}^{n-k}}(1) \right) \end{align} Twisted Smooth Euler Splitting: \begin{align} \bigoplus^{k+1} \ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1) &\cong \left(\ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1) \otimes \ensuremath{\mathrm{pr}}_2^*\Omega^{1,0}_{\ensuremath{\mathbb{CP}}^k}(1)\right) \oplus \left(\ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1) \otimes 
\ensuremath{\mathrm{pr}}_2^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^k}(1) \right) \end{align} \bigskip By the jet bundle sequence for \begin{align} L &=\ensuremath{\mathcal{O}}_{s(X\times \ensuremath{\mathbb{CP}}^{n-k})}(1):=\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^\ell}(1)|_{s(X\times \ensuremath{\mathbb{CP}}^{n-k})} \\ J_1\! :\!\! &= J_1(L) = J_1(\ensuremath{\mathcal{O}}_{s(X\times \ensuremath{\mathbb{CP}}^{n-k})}(1)) \end{align} over $s(X\times\ensuremath{\mathbb{CP}}^{n-k})$, the total Chern class of the jet bundle is \begin{align} s^*c(J_1) &= s^*c\left(\Omega^{1,0}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}\otimes \ensuremath{\mathcal{O}}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}(1)\right) \wedge s^*c\left(\ensuremath{\mathcal{O}}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}(1)\right) \\ &= s^*c\left(\Omega^{1,0}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}(1)\right) \wedge s^*c\left(\ensuremath{\mathcal{O}}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}(1)\right) \\ &= c\left(s^*\Omega^{1,0}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}(1)\right) \wedge c\left(s^*\ensuremath{\mathcal{O}}_{s(X\times\ensuremath{\mathbb{CP}}^{n-k})}(1)\right). 
\end{align} Applying the holomorphic base product splitting to the first factor and the Segre factorization to the second factor, we see that \begin{align} s^*c(J_1) &= c\left(\left(\ensuremath{\mathrm{pr}}_1^*\Omega^{1,0}_X(1) \otimes \ensuremath{\mathrm{pr}}_2^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)\right) \oplus \left( \ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1) \otimes \ensuremath{\mathrm{pr}}_2^*\Omega^{1,0}_{\ensuremath{\mathbb{CP}}^{n-k}}(1) \right)\right) \\ & \hspace{2cm} \wedge c\left(\ensuremath{\mathrm{pr}}_1^*\left(\ensuremath{\mathcal{O}}_X(1)\right) \otimes \ensuremath{\mathrm{pr}}_2^*\left(\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)\right)\right) \\ &= c\left(\ensuremath{\mathrm{pr}}_1^*\Omega^{1,0}_X(1) \otimes \ensuremath{\mathrm{pr}}_2^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)\right) \\ & \hspace{1cm} \wedge c\left( \left( \ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1) \otimes \ensuremath{\mathrm{pr}}_2^*\Omega^{1,0}_{\ensuremath{\mathbb{CP}}^{n-k}}(1) \right) \oplus \left(\ensuremath{\mathrm{pr}}_1^*\left(\ensuremath{\mathcal{O}}_X(1)\right) \otimes \ensuremath{\mathrm{pr}}_2^*\left(\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)\right)\right)\right) , \end{align} where we used the Whitney product formula in the second equality. 
By the twisted smooth Euler splitting this becomes \begin{align} s^*c(J_1) &= c\left(\ensuremath{\mathrm{pr}}_1^*\Omega^{1,0}_X(1) \otimes \ensuremath{\mathrm{pr}}_2^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)\right) \wedge c\left(\bigoplus^{n-k+1} \ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1)\right) \\ &= c\left(\ensuremath{\mathrm{pr}}_1^*(\Omega^{1,0}_X \otimes \ensuremath{\mathcal{O}}_X(1)) \otimes \ensuremath{\mathrm{pr}}_2^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)\right) \wedge c\left(\ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1)\right)^{n-k+1} \\ &= c\left(\ensuremath{\mathrm{pr}}_1^*\Omega^{1,0}_X \otimes \left(\ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1) \otimes \ensuremath{\mathrm{pr}}_2^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)\right) \right) \wedge c\left(\ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1)\right)^{n-k+1} . \end{align} To obtain $p$th Chern classes, we apply the general formula \begin{equation} c_p(E\otimes L) = \sum_{i=0}^p \binom{r-i}{p-i} c_i(E)\wedge c_1(L)^{p-i}, \end{equation} where $E$ is a rank $r$ vector bundle, $L$ is a line bundle, and $0\leq p\leq r$ is an integer.
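As a sanity check on this formula (a standard consequence of the splitting principle), the extreme cases $p=1$ and $p=r$ read \begin{align} c_1(E\otimes L) &= c_1(E) + r\, c_1(L), & c_r(E\otimes L) &= \sum_{i=0}^r c_i(E)\wedge c_1(L)^{r-i}, \end{align} using $\binom{r}{1}=r$ and $\binom{r-1}{0}=\binom{r-i}{r-i}=1$.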
Taking $E=\ensuremath{\mathrm{pr}}_1^*\Omega_X^{1,0}$ and $L=\ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1) \otimes \ensuremath{\mathrm{pr}}_2^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)$, it follows that \begin{align} s^*c(J_1) &= \sum_{p=0}^n \sum_{i=0}^p \binom{n-i}{p-i} c_i\!\left(\ensuremath{\mathrm{pr}}_1^*\Omega^{1,0}_X\right) \wedge c_1\!\left(\ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1)\otimes \ensuremath{\mathrm{pr}}_2^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{CP}}^{n-k}}(1)\right)^{p-i} \wedge c\left(\ensuremath{\mathrm{pr}}_1^*\ensuremath{\mathcal{O}}_X(1)\right)^{n-k+1} \\ &= \sum_{p=0}^n \sum_{i=0}^p \binom{n-i}{p-i} (-1)^i c_i(\omega) \wedge \left( \omega + \omega_{FS} \right)^{p-i} \wedge (1+\omega)^{n-k+1} \\ &= \sum_{p=0}^n \sum_{i=0}^p \sum_{j=0}^{p-i} \sum_{q=0}^{n-k+1} \binom{n-i}{p-i} \binom{p-i}{j} \binom{n-k+1}{q} (-1)^i c_i(\omega) \wedge \omega^{p+q-j-i} \wedge \omega_{FS}^{j}. \end{align} We are now ready to compute $c_{2n-k}(J_1)$. When $j=n-k$ and $p+q-j=n$, it follows that $q=2n-k-p$. Then $p\leq n$ implies $q\geq n-k$ so that $q\in\set{n-k,n-k+1}$. This, in turn, implies that $p\in\set{n-1,n}$. Also $j\leq p-i$ implies $i\leq p-j=p-(n-k)$. Thus \begin{align} c_{2n-k}(J_1) &= \sum_{p=n-1}^n \sum_{i=0}^{p-(n-k)} \sum_{q=n-k}^{n-k+1} (-1)^i \binom{n-i}{p-i} \binom{p-i}{n-k} \binom{n-k+1}{q} \, c_i(\omega) \wedge \omega^{p+q-n+k-i} \wedge \omega_{FS}^{n-k} \\ &= \sum_{p=n-1}^n \sum_{i=0}^{p-(n-k)} (-1)^i \binom{n-i}{p-i} \binom{p-i}{n-k} \, c_i(\omega) \wedge \left[ (n-k+1)\omega^{p-i}+\omega^{p-i+1}\right] \wedge \omega_{FS}^{n-k} \\ &= \sum_{i=0}^{k-1} (-1)^i \left[(n-k+1) \binom{n-i}{n-k}+(n-i)\binom{n-i-1}{n-k} \right] \, c_i(\omega) \wedge \omega^{n-i} \wedge \omega_{FS}^{n-k} \notag \\ & \qquad + (-1)^k (n-k+1) \, c_k(\omega) \wedge \omega^{n-k} \wedge \omega_{FS}^{n-k} . 
\end{align} A quick calculation, using $(n-i)\binom{n-i-1}{n-k}=(k-i)\binom{n-i}{n-k}$, shows that \begin{equation} (n-k+1) \binom{n-i}{n-k}+(n-i)\binom{n-i-1}{n-k} = (n-i+1)\binom{n-i}{n-k}. \end{equation} \end{proof} \begin{lem} \label{lem:degrees} Let $\Delta_{X}^{(n-k)}$ denote the $X$-hyperdiscriminant of format $(n-k)$ and $\mu_k$ be as in Eq.\ (\ref{eq:muk}). Then the degree of $\Delta_X^{(n-k)}$ is given by \begin{align} \deg\left(\Delta_{X}^{(n-k)}\right) &= \deg(X) \sum_{i=0}^k (-1)^i (n-i+1) \binom{n-i}{n-k}\, \mu_i \label{eq:discrdeg} \end{align} \end{lem} \begin{proof} This follows immediately from Eqs.\ (\ref{eq:muk}), (\ref{eq:BFSdegformula}), and (\ref{eq:JettopChern}). \begin{comment} we have \begin{align} \deg\left(\Delta_{X}^{(n-k)}\right) &= \sum_{i=0}^{k} (-1)^i (n-i+1)\binom{n-i}{n-k} \int_X c_i \omega^{n-i}\int_{\ensuremath{\mathbb{CP}}^{n-k}}\omega_{FS}^{n-k} \\ &=\deg(X) \sum_{i=0}^k (-1)^i (n-i+1) \binom{n-i}{n-k}\,\mu_i. \end{align} \end{comment} \end{proof} \begin{comment} \begin{lem} \label{lem:linearsystem} For each $n\geq 0$ and $k\leq n$, the linear system \begin{align} Y_j &= \sum_{i=0}^j \binom{n-i}{n-j} X_i, \qquad j=0,1,\ldots,k \end{align} has the solution \begin{align} X_j &= \sum_{i=0}^j (-1)^{i+j} \binom{n-i}{n-j} Y_i, \qquad j=0,1,\ldots,k. \end{align} \end{lem} \begin{proof} This is an elementary consequence of the general formula \begin{equation} \sum_{k=0}^n (-1)^k \binom{n}{k} = 0. \end{equation} \end{proof} \begin{cor} \label{cor:muk} Let $\Delta_{X}^{(n-k)}$ denote the $X$-hyperdiscriminant of format $(n-k)$ and $\mu_k$ be as in Eq.\ (\ref{eq:muk}). Then \begin{align} \mu_k &= \frac{1}{n-k+1} \sum_{i=0}^k (-1)^i \binom{n-i}{n-k} \frac{\deg\left(\Delta_{X}^{(n-i)}\right)}{ \deg(X) }. \end{align} \end{cor} \begin{proof} Apply Lemma \ref{lem:linearsystem} to Lemma \ref{lem:degrees}.
\end{proof} \end{comment} \section{Relations among Discriminants and Higher K-Energies} \label{sec:higherKen} \begin{lem} \label{lem:lognorm} Let $X\hookrightarrow\ensuremath{\mathbb{CP}}^N$ be a smooth, linearly normal $n$-dimensional subvariety. Assume that $\delta(X)\leq n- k$, where $\delta(X)$ is the dual defect of $X$. Then there is a continuous norm $\norm{\,\cdot\,}$ on the vector space of degree $d_k^\vee := \deg(\Delta_X^{(n-k)})$ polynomials on $(\ensuremath{\mathbb{C}}^{N+1}\otimes\ensuremath{\mathbb{C}}^{n-k+1})^\vee$ such that for all $\sigma\in\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$, we have \begin{align} \log \frac{\norm{\sigma\cdot\Delta_X^{(n-k)}}^2}{\norm{\Delta_X^{(n-k)}}^2} &= \sum_{i=0}^{k} (-1)^i (n-i+1) \binom{n-i}{n-k} \int_0^1 \int_{X} \dot\varphi_t \, c_i(\omega_t) \wedge \omega_t^{n-i}\,\ensuremath{\mathrm{d}} t. \label{eq:logdiscr} \end{align} \end{lem} \begin{proof} Combining equations (5.50) and (5.52) in \cite{paul12} we see that \begin{equation} D_{J_1(\ensuremath{\mathcal{O}}_X(1))^\vee}(c_{n+1};H(\sigma),H(e)) = (-1)\int_0^1 \int_X \dot\varphi_t c_n(J_1(\ensuremath{\mathcal{O}}(1)|_X)^\vee;h_t)\,\ensuremath{\mathrm{d}} t \end{equation} on the one hand; on the other hand, by the Main Lemma (p.\ 276 \emph{ibid.}), which we have extended to the higher K-energies in Lemma \ref{lem:estimate}, \begin{equation} D_{J_1(\ensuremath{\mathcal{O}}_X(1))^\vee}(c_{n+1};H(\sigma),H(e)) = (-1)^{n+1} \log\frac{\norm{\sigma\cdot\Delta_X}^2}{\norm{\Delta_X}^2} . \end{equation} Hence, \begin{align} \log\frac{\norm{\sigma\cdot\Delta_X}^2}{\norm{\Delta_X}^2} &= (-1)^n \int_0^1 \int_X \dot\varphi_t c_n(J_1(\ensuremath{\mathcal{O}}(1)|_X)^\vee;h_t)\,\ensuremath{\mathrm{d}} t \\ &= \int_0^1 \int_X \dot\varphi_t c_n(J_1(\ensuremath{\mathcal{O}}(1)|_X);h_t)\,\ensuremath{\mathrm{d}} t \label{eq:donaldsonlog}.
\end{align} By Lemma \ref{lem:JettopChern} it follows that \begin{align} \log \frac{\norm{\sigma\cdot\Delta_X^{(n-k)}}^2}{\norm{\Delta_X^{(n-k)}}^2} &= \sum_{i=0}^{k} (-1)^i (n-i+1)\binom{n-i}{n-k} \int_0^1 \int_{X\times\ensuremath{\mathbb{CP}}^{n-k}} \dot\varphi_t \, c_i(\omega_t) \wedge \omega_t^{n-i}\wedge \omega_{FS}^{n-k} \wedge \ensuremath{\mathrm{d}} t \\ &= \sum_{i=0}^{k} (-1)^i (n-i+1) \binom{n-i}{n-k} \int_0^1 \int_{X} \dot\varphi_t \, c_i(\omega_t) \wedge \omega_t^{n-i}\wedge \ensuremath{\mathrm{d}} t . \end{align} \end{proof} \begin{comment} \begin{cor} \label{cor:lognorm} Under the hypotheses of Lemma \ref{lem:lognorm}, we have \begin{align} \int_0^1 \int_{X} \dot\varphi_t \, c_k(\omega_t) \wedge \omega_t^{n-k}\wedge \ensuremath{\mathrm{d}} t &= \frac{1}{n-k+1} \sum_{i=0}^{k} (-1)^i \binom{n-i}{n-k} \lognorm{\Delta_X^{(n-i)}}. \end{align} \end{cor} \begin{proof} Apply Lemma \ref{lem:linearsystem} to Lemma \ref{lem:lognorm}. \end{proof} \end{comment} \begin{Thm} \label{thm:mainformula} Under the hypotheses of Lemma \ref{lem:lognorm}, we have \begin{equation} M_k(\sigma) = \sum_{i=1}^k (-1)^{i+1} \binom{n-i}{n-k} \left[ \deg\left(R_X\right) \lognorm{\Delta_X^{(n-i)}} - \deg\left(\Delta_{X}^{(n-i)}\right) \lognorm{R_X} \right]. \end{equation} \end{Thm} \begin{proof} First, note that for each $n\geq 0$ and $k\leq n$, the linear system \begin{align} Y_j &= \sum_{i=0}^j \binom{n-i}{n-j} X_i, \qquad j=0,1,\ldots,k \end{align} has the solution \begin{align} X_j &= \sum_{i=0}^j (-1)^{i+j} \binom{n-i}{n-j} Y_i, \qquad j=0,1,\ldots,k.
\end{align} When applied to Eqs.(\ref{eq:discrdeg}) and (\ref{eq:logdiscr}), this gives, respectively, \begin{align} \mu_k &= \frac{1}{n-k+1} \sum_{i=0}^k (-1)^i \binom{n-i}{n-k} \frac{\deg\left(\Delta_{X}^{(n-i)}\right)}{ \deg(X) } \label{eq:mukbydiscrim} \\ \int_0^1 \int_{X} \dot\varphi_t \, c_k(\omega_t) \wedge \omega_t^{n-k}\wedge \ensuremath{\mathrm{d}} t &= \frac{1}{n-k+1} \sum_{i=0}^{k} (-1)^i \binom{n-i}{n-k} \lognorm{\Delta_X^{(n-i)}} \label{eq:intbydiscrim}. \end{align} Applying Eqs.(\ref{eq:mukbydiscrim}) and (\ref{eq:intbydiscrim}) to Eq.(\ref{eq:genKenergy}) gives the result. \begin{comment} , we have \begin{align} M_k (\sigma) &= -(n-k+1)(n+1) V \int_0^1 \int_X \dot\varphi_t [c_k(\omega_t)-\mu_k \omega_t^{k}]\wedge \omega_t^{n-k}\,\ensuremath{\mathrm{d}} t \\ &= -V \left[ (n+1)\sum_{i=0}^k (-1)^i \binom{n-i}{n-k} \lognorm{\Delta_X^{(n-i)}} - \sum_{i=0}^k (-1)^i \binom{n-i}{n-k} \frac{\deg\left(\Delta_{X}^{(n-i)}\right)}{ \deg(X) } \lognorm{\Delta_X^{(n)}} \right] \\ &= \sum_{i=1}^k (-1)^{i+1} \binom{n-i}{n-k} \left[ \deg\left(\Delta_X^{(n)}\right) \lognorm{\Delta_X^{(n-k)}} - \deg\left(\Delta_{X}^{(n-i)}\right) \lognorm{\Delta_X^{(n)}} \right] . \end{align} \end{comment} \end{proof} \begin{remark} It is interesting that the $X$-hyperdiscriminants $\Delta_X^{(n-i)}$ of format $(n-i)$, $i=0,\ldots,k$ are collectively responsible for encoding the presence of the $k$th Chern form in $M_k$. \end{remark} \begin{proof}[Proof of Theorem \ref{thm:mainthm}] The theorem now follows directly from Theorem \ref{thm:mainformula}, after gathering even and odd powers of $(-1)$. 
Explicitly, the vectors are \begin{align} v &= R_X^{\sum_{j=1}^{\left\lfloor\frac{k}{2}\right\rfloor}\binom{n-2j}{n-k}d^\vee_{2j}} \otimes \displaystyle\bigotimes_{j=1}^{\left\lceil\frac{k}{2}\right\rceil} \left(\Delta_X^{(n-2j+1)}\right)^{\binom{n-2j+1}{n-k}d_0^\vee} \notag\\ w &= R_X^{\sum_{j=1}^{\left\lceil\frac{k}{2}\right\rceil}\binom{n-2j+1}{n-k}d^\vee_{2j-1}} \otimes \displaystyle\bigotimes_{j=1}^{\left\lfloor\frac{k}{2}\right\rfloor} \left(\Delta_X^{(n-2j)}\right)^{\binom{n-2j}{n-k}d_0^\vee}, \end{align} where $d^\vee_i := \deg\left(\Delta_X^{(n-i)}\right)$. We regard the polynomials $R_X^r$ and $\left(\Delta_X^{(n-i)}\right)^r$ as vectors in the irreducible $\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$-modules \begin{align} R_X^r &\in \ensuremath{\mathbb{C}}_{rd_0^\vee}[M_{(n+1)\times (N+1)}]^{\ensuremath{\mathrm{SL}}(n+1,\ensuremath{\mathbb{C}})} \\ \left(\Delta_X^{(n-i)}\right)^r &\in \ensuremath{\mathbb{C}}_{rd_i^\vee}[M_{(n-i+1)\times (N+1)}]^{\ensuremath{\mathrm{SL}}(n-i+1,\ensuremath{\mathbb{C}})} \end{align} for $r$ a positive integer and $i=1,2,\ldots,k$, with $\delta(X)\leq n-k$. The $\ensuremath{\mathrm{SL}}(N+1,\ensuremath{\mathbb{C}})$-modules $V$ and $W$ are then the appropriate tensor product modules containing $v$ and $w$, respectively. \end{proof} \begin{remark} From Lemmas \ref{lem:degrees} and \ref{lem:lognorm}, we have a recursion relation \begin{align} M_k(\sigma) &= (-1)^{k+1} \left[\deg\left(R_X\right) \lognorm{\Delta_{X}^{(n-k)}} - \deg\left(\Delta_{X}^{(n-k)}\right) \lognorm{R_X} + \sum_{i=1}^{k-1} (-1)^i \binom{n-i}{n-k} M_i(\sigma) \right]. \end{align} \end{remark} \begin{remark} When $k=1$ we recover formula (1.1) in Theorem A in \cite{paul12}: \begin{equation} M_1(\sigma) = \deg\left(R_X\right) \lognorm{\Delta_{X}^{(n-1)}} - \deg\left(\Delta_{X}^{(n-1)}\right) \lognorm{R_X} . \end{equation} In this case $V$ and $W$ are \emph{irreducible}; in contrast, for $k>1$, $V$ and $W$ may no longer be irreducible.
\end{remark} Corollary \ref{cor:asymptotics} now follows from the asymptotic expansions (\cite{paul12} p.268) \begin{align} \log \norm{\lambda(t) v}^2 &= w_\lambda(v) \log\abs{t}^2 + O(1) \quad (\abs{t}\ensuremath{\rightarrow} 0) \\ \log \norm{\lambda(t) w}^2 &= w_\lambda(w) \log\abs{t}^2 + O(1) \quad (\abs{t}\ensuremath{\rightarrow} 0), \end{align} where $v\in V$, $w\in W$, and $w_\lambda(v)$, $w_\lambda(w)$ are the weights of $\lambda$ on $v$ and $w$, respectively. Corollary \ref{cor:boundedness} follows from the general formula (\cite{paul13} p.18 Lemma 4.1) \begin{equation} \lognorm{v}-\lognorm{w} = \log \tan^2 d_g(\sigma\cdot[(v,w)],\sigma\cdot[(v,0)]), \end{equation} where $d_g$ denotes the distance in the Fubini-Study metric on $\ensuremath{\mathbb{P}}(V\oplus W)$, and the numerical criterion established in \cite{paul12a}. \bibliographystyle{alpha}
https://arxiv.org/abs/1507.02990
Spanning trees in directed circulant graphs and cycle power graphs
The number of spanning trees in a class of directed circulant graphs with generators depending linearly on the number of vertices $\beta n$, and in the $n$-th and $(n-1)$-th power graphs of the $\beta n$-cycle are evaluated as a product of $\lceil\beta/2\rceil-1$ terms.
\section{Introduction} In this paper we study the number of spanning trees in a class of directed and undirected circulant graphs. Let $1\leqslant\gamma_1\leqslant\cdots\leqslant\gamma_d\leqslant\lfloor n/2\rfloor$ be positive integers. A circulant directed graph, or circulant digraph, on $n$ vertices generated by $\gamma_1,\ldots,\gamma_d$ is the directed graph on $n$ vertices labelled $0,1,\ldots,n-1$ such that for each vertex $v\in\mathbb{Z}/n\mathbb{Z}$ there is an oriented edge connecting $v$ to $v+\gamma_m$ mod $n$ for all $m\in\{1,\ldots,d\}$. We will denote such graphs by $\overrightarrow{C}^{\gamma_1,\ldots,\gamma_d}_n$. Similarly, a circulant graph on $n$ vertices generated by $\gamma_1,\ldots,\gamma_d$, denoted by $C^{\gamma_1,\ldots,\gamma_d}_n$, is the undirected graph on $n$ vertices labelled $0,1,\ldots,n-1$ such that each vertex $v\in\mathbb{Z}/n\mathbb{Z}$ is connected to $v\pm\gamma_m$ mod $n$ for all $m\in\{1,\ldots,d\}$. Circulant graphs and digraphs are used as models in network theory. In this context, they are called multi-loop networks, or double-loop networks when they are $2$-generated, see for example \cite{MR1846929,MR1973148}. The number of spanning trees measures the reliability of a network.\\ The evaluation of the number of spanning trees in circulant graphs and digraphs has been widely studied, and both exact and asymptotic results have been obtained as the number of vertices grows, see \cite{MR2565193,MR2574828,louis2015asymptotics,louis2015formula,MR2445039} and references therein. In \cite{MR2261780,MR2320194}, the authors showed that the number of spanning trees in such graphs satisfies linear recurrence relations. Yong, Zhang and Golin developed a technique in \cite{MR2445039} to evaluate the number of spanning trees in a particular class of double-loop networks $\overrightarrow{C}^{p,\gamma n+p}_{\beta n}$.
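For concreteness, the neighbourhood structure just described can be sketched as follows (an illustrative snippet of ours, not part of the original paper; the function names are our own):

```python
# Illustrative sketch of the definitions above (names are ours, not the paper's).
def digraph_neighbours(n, gens):
    """Out-neighbours of each vertex of the circulant digraph on n vertices."""
    return {v: sorted((v + g) % n for g in gens) for v in range(n)}

def graph_neighbours(n, gens):
    """Neighbours of each vertex of the undirected circulant graph."""
    return {v: sorted({(v + s * g) % n for g in gens for s in (1, -1)})
            for v in range(n)}

# Vertex 0 of the digraph generated by 1, 2 on 6 vertices points to 1 and 2;
# in the undirected graph it is also joined to -1 = 5 and -2 = 4.
print(digraph_neighbours(6, [1, 2])[0])  # -> [1, 2]
print(graph_neighbours(6, [1, 2])[0])    # -> [1, 2, 4, 5]
```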
In the first section of this work, we derive a closed formula for these graphs, and more generally for $d$-generated circulant digraphs with generators depending linearly on the number of vertices, that is $\overrightarrow{C}^{p,\gamma_1n+p,\ldots,\gamma_{d-1}n+p}_{\beta n}$ where $p,\gamma_1,\ldots,\gamma_{d-1},\beta,n$ are positive integers. This partially answers an open question posed in \cite{MR2565193} by simplifying the formula given in \cite[Corollary $1$]{MR2565193}.\\ In the second section we calculate the number of spanning trees in the $n$-th and $(n-1)$-th power graphs of the $\beta n$-cycle, which are the circulant graphs generated by the first $n$, respectively $n-1$, consecutive integers, denoted by $\boldsymbol{C}^n_{\beta n}$ and $\boldsymbol{C}^{n-1}_{\beta n}$ respectively, where $\beta\in\mathbb{N}_{\geqslant2}$. As a consequence, its asymptotic behaviour is derived. Cycle power graphs appear, for example, in graph colouring problems, see \cite{MR2587027,MR1720404}.\\ The results obtained here are derived from the matrix tree theorem (see \cite{MR2339282,MR1271140}), which provides a closed formula consisting of a product of $\beta n-1$ terms for a graph on $\beta n$ vertices. Our formulas are a product of $\lceil\beta/2\rceil-1$ terms and are therefore interesting when $n$ is large. In both cases, the symmetry of the graphs is reflected in the formulas, which are expressed in terms of eigenvalues of subgraphs of the original graph. This fact was already observed in \cite{louis2015formula}. \par\vspace{\baselineskip} \noindent \textbf{Acknowledgements:} The author thanks Anders Karlsson for reading the manuscript and useful discussions. \section{Spanning trees in directed circulant graphs} Let $G$ be a directed graph and $V(G)$ its vertex set. A spanning arborescence converging to $v\in V(G)$ is an oriented subgraph of $G$ such that the out-degree of every vertex except $v$ equals one, and the out-degree of $v$ is zero.
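To make the definition concrete, the sketch below (ours, not part of the paper) counts spanning arborescences converging to a vertex $v$ of a circulant digraph by brute force: every vertex other than $v$ chooses one out-edge, and the choice is kept when following the chosen edges from any vertex eventually reaches $v$.

```python
# Brute-force count of spanning arborescences converging to v in the circulant
# digraph on n vertices with the given generators (illustrative sketch only).
from itertools import product

def arborescences_to(n, gens, v):
    others = [u for u in range(n) if u != v]
    count = 0
    for choice in product(gens, repeat=len(others)):
        succ = {u: (u + g) % n for u, g in zip(others, choice)}
        ok = True
        for u in others:
            seen, w = set(), u
            while w != v and w not in seen:  # follow out-edges until v or a cycle
                seen.add(w)
                w = succ[w]
            if w != v:
                ok = False
                break
        if ok:
            count += 1
    return count

# The directed n-cycle has exactly one arborescence converging to each vertex.
print(arborescences_to(5, [1], 0))     # -> 1
print(arborescences_to(4, [1, 2], 0))  # -> 5
```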
We define the combinatorial Laplacian of a directed graph $G$ as an operator acting on the space of functions defined on $V(G)$, by \begin{equation} \label{Delta-} \Delta^-_Gf(x)=\sum_{y:\ x\rightarrow y}(f(x)-f(y)) \end{equation} where the sum is over all vertices $y$ such that there is an oriented edge from $x$ to $y$. Equivalently, the combinatorial Laplacian can be defined as a matrix by $\Delta^-_G=D^--A$, where $D^-$ is the out-degree matrix and $A$ is the adjacency matrix such that $(A)_{ij}$ is the number of directed edges from $i$ to $j$. Let $\tau^-(G,v)$ denote the number of arborescences converging to $v$. The Tutte matrix tree theorem (see \cite{MR2339282}) states that for all $v\in V(G)$, \begin{equation*} \tau^-(G,v)=\det\Delta^-_{G,v} \end{equation*} where $\det\Delta^-_{G,v}$ is the $v$-th cofactor of the Laplacian $\Delta^-_G$ obtained by deleting the row and column of $\Delta^-_G$ corresponding to the vertex $v$. For a regular directed graph $G$, we define the number of spanning trees in $G$, $\tau(G)$, by the sum over all vertices $v\in V(G)$ of the number of arborescences converging to $v$, that is \begin{equation*} \tau(G)=\sum_{v\in V(G)}\tau^-(G,v). \end{equation*} Notice that we could equally have defined the number of spanning trees as the sum over all vertices $v\in V(G)$ of the number of spanning arborescences diverging from $v$.\\ By symmetry, all cofactors of the Laplacian of a directed circulant graph are equal, and their common value is the product of the non-zero eigenvalues of the Laplacian divided by the number of vertices. Therefore we have that \begin{equation*} \tau(G)=\prod_{k=1}^{\lvert V(G)\rvert-1}\lambda_k \end{equation*} where $\lambda_k$, $k=1,\ldots,\lvert V(G)\rvert-1$, denote the non-zero eigenvalues of the Laplacian of $G$.
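As a hedged numerical sanity check (ours, not part of the paper; it assumes `numpy` is available), the identity $\tau(G)=\prod_k\lambda_k$ can be compared with the cofactor description on a small circulant digraph:

```python
# Numerical check (illustrative) that the product of the non-zero Laplacian
# eigenvalues equals n times one cofactor for a directed circulant graph.
import numpy as np

def circulant_laplacian(n, gens):
    A = np.zeros((n, n))
    for v in range(n):
        for g in gens:
            A[v, (v + g) % n] += 1
    return np.diag(A.sum(axis=1)) - A

def tau_from_eigenvalues(n, gens):
    lam = np.linalg.eigvals(circulant_laplacian(n, gens))
    lam = lam[np.argsort(np.abs(lam))][1:]  # discard the zero eigenvalue
    return np.prod(lam).real

def tau_from_cofactors(n, gens):
    L = circulant_laplacian(n, gens)
    minor = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
    return n * np.linalg.det(minor)  # all cofactors coincide by symmetry

print(round(tau_from_eigenvalues(9, [1, 4])), round(tau_from_cofactors(9, [1, 4])))
print(round(tau_from_eigenvalues(5, [1])))  # directed 5-cycle -> 5
```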
The non-zero eigenvalues of the Laplacian of the directed circulant graph $\overrightarrow{C}^{\gamma_1,\ldots,\gamma_d}_n$ are given by (see \cite[Proposition $3.5$]{MR1271140}) \begin{equation*} \lambda_k=d-\sum_{m=1}^de^{2\pi i\gamma_mk/n},\quad k=1,\ldots,n-1. \end{equation*} This can also be derived by noticing that the eigenvectors are given by the characters $\chi_k(x)=e^{2\pi ikx/n}$, $k=0,1,\ldots,n-1$, and then applying the Laplacian (\ref{Delta-}) to them.\\ In this section, we establish a formula for the number of spanning trees in directed circulant graphs $\overrightarrow{C}^\Gamma_{\beta n}$ generated by $\Gamma=\{p,\gamma_1n+p,\ldots,\gamma_{d-1}n+p\}$, and in the particular case of two generators $\overrightarrow{C}^{p,\gamma n+p}_{\beta n}$. Figure \ref{directedgraphs} illustrates a $2$-generated and a $3$-generated directed circulant graph. We denote by $\mu_k=d-1-\sum_{m=1}^{d-1}e^{2\pi i\gamma_mk/\beta}$, $k=1,\ldots,\beta-1$, the non-zero eigenvalues of the Laplacian on the directed circulant graph $\overrightarrow{C}^{\gamma_1,\ldots,\gamma_{d-1}}_\beta$ and by $\eta_k=2(d-1)-2\sum_{m=1}^{d-1}\cos(2\pi\gamma_mk/\beta)$, $k=1,\ldots,\beta-1$, the non-zero eigenvalues of the Laplacian on the circulant graph $C^{\gamma_1,\ldots,\gamma_{d-1}}_\beta$. Let $A$ be a statement and $\delta_A$ be defined by \begin{equation*} \delta_A=\left\{\begin{array}{rl}1&\textnormal{if }A\textnormal{ is satisfied}\\0&\textnormal{otherwise}\end{array}.\right. \end{equation*} \begin{figure}[!ht] \centering \subfigure[$\protect\overrightarrow{C}^{2,n}_{3n}$ with $n=5$]{\includegraphics[width=5cm]{dic2515}} \hspace{2cm} \subfigure[$\protect\overrightarrow{C}^{1,n+1,2n+1}_{4n}$ with $n=5$]{\includegraphics[width=5cm]{dic161120}} \caption{Examples of directed graphs} \label{directedgraphs} \end{figure} \begin{theorem} \label{dicd} Let $1\leqslant\gamma_1\leqslant\cdots\leqslant\gamma_{d-1}\leqslant\beta$ and $p$, $n$ be positive integers.
For all even $n\in\mathbb{N}_{\geqslant2}$ such that $(p,n)=1$, the number of spanning trees in the directed circulant graph $\overrightarrow{C}^{\Gamma}_{\beta n}$, where $\Gamma=\{p,\gamma_1n+p,\ldots,\gamma_{d-1}n+p\}$, is given by \begin{align*} \tau(\overrightarrow{C}^{\Gamma}_{\beta n})&=nd^{\beta n-1}\Big(1-\delta_{\beta\textnormal{ even}}\frac{(-1)^p}{d^n}(1+\sum_{m=1}^{d-1}(-1)^{\gamma_m})^n\Big)\\ &\times\prod_{k=1}^{\lceil\beta/2\rceil-1}\left(1-2\Big|1-\frac{\mu_k}{d}\Big|^n\cos\left(\frac{2\pi pk}{\beta}+n\Arctg\left(\frac{\sum_{m=1}^{d-1}\sin(2\pi\gamma_mk/\beta)}{d-\eta_k/2}\right)\right)+\Big|1-\frac{\mu_k}{d}\Big|^{2n}\right) \end{align*} and for odd $n\in\mathbb{N}_{\geqslant1}$, \begin{align*} &\tau(\overrightarrow{C}^{\Gamma}_{\beta n})=nd^{\beta n-1}\Big(1-\delta_{\beta\textnormal{ even}}\frac{(-1)^p}{d^n}(1+\sum_{m=1}^{d-1}(-1)^{\gamma_m})^n\Big)\\ &\times\prod_{k=1}^{\lceil\beta/2\rceil-1}\left(1-2\textnormal{sgn}(d-\eta_k/2)\Big|1-\frac{\mu_k}{d}\Big|^n\cos\left(\frac{2\pi pk}{\beta}+n\Arctg\left(\frac{\sum_{m=1}^{d-1}\sin(2\pi\gamma_mk/\beta)}{d-\eta_k/2}\right)\right)+\Big|1-\frac{\mu_k}{d}\Big|^{2n}\right) \end{align*} where $\lceil x\rceil$ is the smallest integer greater than or equal to $x$, $\lvert.\rvert$ denotes the modulus and we set $\textnormal{sgn}(0)=1$. The number of spanning trees in $\overrightarrow{C}^{\Gamma}_{\beta n}$ is zero if $(p,n)\neq1$, or if $(p,n)=1$ and $\beta$, $p$, $\gamma_m$, $m=1,\ldots,d-1$, are all even. \end{theorem} \begin{proof} From the Tutte matrix tree theorem, the number of spanning trees in $\overrightarrow{C}^\Gamma_{\beta n}$ is given by \begin{equation*} \tau(\overrightarrow{C}^{\Gamma}_{\beta n})=\prod_{k=1}^{\beta n-1}(d-e^{2\pi ipk/(\beta n)}-\sum_{m=1}^{d-1}e^{2\pi i(\gamma_mn+p)k/(\beta n)}).
\end{equation*} By splitting the product over $k=1,\ldots,\beta n-1$ into two products, when $k$ is a multiple of $\beta$, that is $k=l\beta$ with $l=1,\ldots,n-1$, and over non-multiples of $\beta$, that is, $k=k'+l'\beta$ with $k'=1,\ldots,\beta-1$ and $l'=0,1,\ldots,n-1$, we have \begin{equation} \label{tau} \tau(\overrightarrow{C}^{\Gamma}_{\beta n})=\prod_{l=1}^{n-1}(d-de^{2\pi ipl/n})\prod_{k=1}^{\beta-1}\prod_{l'=0}^{n-1}(d-(1+\sum_{m=1}^{d-1}e^{2\pi i\gamma_mk/\beta})e^{2\pi ipk/(\beta n)}e^{2\pi ipl'/n}). \end{equation} We have that \begin{equation*} \prod_{l=1}^{n-1}(d-de^{2\pi ipl/n})=d^{n-1}\prod_{l=1}^{n-1}(1-e^{2\pi ipl/n})=nd^{n-1}\delta_{(p,n)=1}. \end{equation*} This equality comes from the fact that $\prod_{l=1}^{n-1}(1-e^{2\pi ipl/n})$ is the number of spanning trees of the directed graph $\overrightarrow{C}^p_n$, which is isomorphic to the directed cycle on $n$ vertices if $(p,n)=1$, and is not connected if $(p,n)\neq1$. Therefore the product is equal to $n\delta_{(p,n)=1}$.\\ Hence, if $(p,n)\neq1$, we have \begin{equation*} \tau(\overrightarrow{C}^{\Gamma}_{\beta n})=0. \end{equation*} Let $p$ be relatively prime to $n$. Using that the complex numbers $e^{2\pi il/n}$, $l=0,1,\ldots,n-1$, are the $n$-th roots of unity, we have, for all $x$, \begin{equation} \label{unityroots} \prod_{l=0}^{n-1}(x-e^{2\pi ilp/n})=x^n-1, \end{equation} since $(p,n)=1$. Equivalently, \begin{equation*} \prod_{l=0}^{n-1}(1-xe^{2\pi ilp/n})=1-x^n. \end{equation*} Using this identity in (\ref{tau}) to evaluate the product over $l'$, we obtain \begin{equation} \label{betaproduct} \tau(\overrightarrow{C}^{\Gamma}_{\beta n})=nd^{\beta n-1}\prod_{k=1}^{\beta-1}(1-\frac{1}{d^n}(1+\sum_{m=1}^{d-1}e^{2\pi i\gamma_mk/\beta})^ne^{2\pi ipk/\beta}).
\end{equation} For odd $\beta$ we write the product over $k$, $k=1,\ldots,\beta-1$, as a product from $1$ to $(\beta-1)/2$, and for even $\beta$ we write it as a product from $1$ to $\beta/2-1$ and add the $k=\beta/2$ factor, which is given by $1-(-1)^p(1+\sum_{m=1}^{d-1}(-1)^{\gamma_m})^n/d^n$. Writing the above expression in terms of \linebreak$\mu_k=d-1-\sum_{m=1}^{d-1}e^{2\pi i\gamma_mk/\beta}$, we obtain \begin{align} \tau(\overrightarrow{C}^{\Gamma}_{\beta n})&=nd^{\beta n-1}\Big(1-\delta_{\beta\textnormal{ even}}\frac{(-1)^p}{d^n}(1+\sum_{m=1}^{d-1}(-1)^{\gamma_m})^n\Big)\nonumber\\ &\times\prod_{k=1}^{\lceil\beta/2\rceil-1}(1-(1-\mu_k/d)^ne^{2\pi ipk/\beta})(1-(1-\mu_k^\ast/d)^ne^{-2\pi ipk/\beta})\nonumber\\ &=nd^{\beta n-1}\Big(1-\delta_{\beta\textnormal{ even}}\frac{(-1)^p}{d^n}(1+\sum_{m=1}^{d-1}(-1)^{\gamma_m})^n\Big)\nonumber\\ &\times\prod_{k=1}^{\lceil\beta/2\rceil-1}(1-2\lvert1-\mu_k/d\rvert^n\cos(2\pi pk/\beta+n\phi_k)+\lvert1-\mu_k/d\rvert^{2n}) \label{tau2} \end{align} where $\phi_k$ is the phase of the complex number $1-\mu_k/d$ such that $1-\mu_k/d=\lvert1-\mu_k/d\rvert e^{i\phi_k}$. We have \begin{equation*} \lvert1-\mu_k/d\rvert=\frac{1}{d}\Big((d-\eta_k/2)^2+\Big(\sum_{m=1}^{d-1}\sin(2\pi\gamma_mk/\beta)\Big)^2\Big)^{1/2} \end{equation*} and \begin{equation*} \cos{\phi_k}=\frac{d-\eta_k/2}{\lvert d-\mu_k\rvert},\quad\sin{\phi_k}=\frac{\sum_{m=1}^{d-1}\sin(2\pi\gamma_mk/\beta)}{\lvert d-\mu_k\rvert}. \end{equation*} Therefore for $k$ such that $d-\eta_k/2\neq0$, the phase is given by \begin{equation} \label{phik} \phi_k=\Arctg\left(\frac{\sum_{m=1}^{d-1}\sin(2\pi\gamma_mk/\beta)}{d-\eta_k/2}\right)+\epsilon\pi \end{equation} where $\epsilon=0$ if $\textnormal{sgn}(d-\eta_k/2)=1$ and $\epsilon\in\{-1,1\}$ if $\textnormal{sgn}(d-\eta_k/2)=-1$. For $k$ such that $d-\eta_k/2=0$, we take the limit as $d-\eta_k/2\rightarrow0$ in (\ref{phik}), with $\epsilon=0$.
The theorem follows by putting equation (\ref{phik}) into equation (\ref{tau2}).\\ When $\beta$, $p$ and $\gamma_m$, $m=1,\ldots,d-1$ are all even, the directed circulant graph $\overrightarrow{C}^{\Gamma}_{\beta n}$ is not connected and therefore the number of spanning trees is zero; this is reflected in the formula. \end{proof} In the following theorem we state the particular case of two-generated directed circulant graphs. \begin{theorem} \label{d=2} Let $1\leqslant\gamma\leqslant\beta$ and $p$, $n$ be positive integers. For odd $\beta$ and all $n\in\mathbb{N}_{\geqslant1}$ such that $(p,n)=1$, the number of spanning trees in the directed circulant graph $\overrightarrow{C}^{p,\gamma n+p}_{\beta n}$ is given by \begin{equation*} \tau(\overrightarrow{C}^{p,\gamma n+p}_{\beta n})=n2^{\beta n-1}\prod_{k=1}^{(\beta-1)/2}\Big(1-2\cos(2\pi(p+\gamma n/2)k/\beta)\cos^n(\pi\gamma k/\beta)+\cos^{2n}(\pi\gamma k/\beta)\Big) \end{equation*} and for even $\beta$, if $\gamma$ or $p$ is odd, then \begin{equation*} \tau(\overrightarrow{C}^{p,\gamma n+p}_{\beta n})=n2^{\beta n-1+\delta_{\gamma\textnormal{ even}}}\prod_{k=1}^{\beta/2-1}\Big(1-2\cos(2\pi(p+\gamma n/2)k/\beta)\cos^n(\pi\gamma k/\beta)+\cos^{2n}(\pi\gamma k/\beta)\Big). \end{equation*} The number of spanning trees in $\overrightarrow{C}^{p,\gamma n+p}_{\beta n}$ is zero if $(p,n)\neq1$, or if $(p,n)=1$ and $\beta$, $p$ and $\gamma$ are all even. \end{theorem} \begin{proof} From equation (\ref{betaproduct}) it follows that \begin{equation*} \tau(\overrightarrow{C}^{p,\gamma n+p}_{\beta n})=n2^{\beta n-1}\prod_{k=1}^{\beta-1}(1-e^{2\pi i(p+\gamma n/2)k/\beta}\cos^n(\pi\gamma k/\beta)).
\end{equation*} For odd $\beta$, we have \begin{align*} \tau(\overrightarrow{C}^{p,\gamma n+p}_{\beta n})&=n2^{\beta n-1}\prod_{k=1}^{(\beta-1)/2}(1-e^{2\pi i(p+\gamma n/2)k/\beta}\cos^n(\pi\gamma k/\beta))\\ &\qquad\qquad\quad\qquad\times(1-e^{-2\pi i(p+\gamma n/2)k/\beta}\cos^n(\pi\gamma k/\beta))\\ &=n2^{\beta n-1}\prod_{k=1}^{(\beta-1)/2}(1-2\cos(2\pi(p+\gamma n/2)k/\beta)\cos^n(\pi\gamma k/\beta)+\cos^{2n}(\pi\gamma k/\beta)). \end{align*} For even $\beta$, the factor $k=\beta/2$ is added: \begin{equation*} 1-e^{\pi i(p+\gamma n/2)}\cos^n(\pi\gamma/2)=\left\{\begin{array}{rl}0&\textnormal{if }p\textnormal{ and }\gamma\textnormal{ are even}\\1&\textnormal{if }\gamma\textnormal{ is odd}\\2&\textnormal{otherwise}\end{array}.\right. \end{equation*} For even $\beta$, $p$ and $\gamma$, the graph $\overrightarrow{C}^{p,\gamma n+p}_{\beta n}$ is not connected and therefore the number of spanning trees is zero. Therefore if $p$ or $\gamma$ is odd, we have \begin{align*} \tau(\overrightarrow{C}^{p,\gamma n+p}_{\beta n})&=n2^{\beta n-1+\delta_{\gamma\textnormal{ even}}}\prod_{k=1}^{\beta/2-1}(1-e^{2\pi i(p+\gamma n/2)k/\beta}\cos^n(\pi\gamma k/\beta))\\ &\quad\qquad\qquad\qquad\qquad\times(1-e^{-2\pi i(p+\gamma n/2)k/\beta}\cos^n(\pi\gamma k/\beta))\\ &=n2^{\beta n-1+\delta_{\gamma\textnormal{ even}}}\prod_{k=1}^{\beta/2-1}(1-2\cos(2\pi(p+\gamma n/2)k/\beta)\cos^n(\pi\gamma k/\beta)+\cos^{2n}(\pi\gamma k/\beta)). \end{align*} \end{proof} \begin{example} Consider the case when $p=\beta=3$ and $\gamma=2$. It follows from Theorem \ref{d=2} that $\tau(\overrightarrow{C}^{3,2n+3}_{3n})=0$ if $n$ is a multiple of $3$, otherwise, \begin{align*} \tau(\overrightarrow{C}^{3,2n+3}_{3n})&=n2^{3n-1}(1-2\cos(2\pi n/3)\cos^n(2\pi/3)+\cos^{2n}(2\pi/3))\\ &=n(2^{3n-1}-2^{2n}\cos(\pi n/3)+2^{n-1}) \end{align*} as stated in \cite[Example $4$.(iii)]{MR2445039}. As another example, consider the case when $p=2$, $\gamma=5$ and $\beta=6$. 
From Theorem \ref{d=2}, for even $n$, $\tau(\overrightarrow{C}^{2,5n+2}_{6n})=0$, and for odd $n$, \begin{align*} \tau(\overrightarrow{C}^{2,5n+2}_{6n})&=n2^{6n-1}(1-2\cos(2\pi(2+5n/2)/6)\cos^n(5\pi/6)+\cos^{2n}(5\pi/6))\\ &\quad\times(1-2\cos(4\pi(2+5n/2)/6)\cos^n(10\pi/6)+\cos^{2n}(10\pi/6))\\ &=\frac{n}{2}(2^{3n}+2^{2n}3^{n/2}\cos(\pi n/6)-2^{2n}3^{(n+1)/2}\sin(\pi n/6)+6^n)\\ &\quad\times(2^{3n}-2^{2n-1}3^{n/2}\cos(\pi n/3)+2^{n-1}3^{(n+1)/2}\sin(\pi n/3)+2^n). \end{align*} \end{example} \section{Spanning trees in cycle power graphs} The $k$-th power graph of the $n$-cycle, denoted by $\boldsymbol{C}^k_n$, is the graph with the same vertex set as the $n$-cycle where two vertices are connected if their distance on the $n$-cycle is at most $k$. It is therefore the circulant graph on $n$ vertices generated by the first $k$ consecutive integers. In this section, we derive a formula for the number of spanning trees in the $n$-th and $(n-1)$-th power graphs of the $\beta n$-cycle, where $\beta\in\mathbb{N}_{\geqslant2}$. As a consequence, we derive its asymptotic behaviour as $n$ goes to infinity.\\ The combinatorial Laplacian of an undirected graph $G$ with vertex set $V(G)$, defined as an operator acting on the space of functions on $V(G)$, is \begin{equation*} \Delta_Gf(x)=\sum_{y\sim x}(f(x)-f(y)) \end{equation*} where the sum is over all vertices adjacent to $x$. The matrix tree theorem \cite{MR1271140} states that the number of spanning trees in $G$, $\tau(G)$, is given by \begin{equation*} \tau(G)=\frac{\prod_{k=1}^{\lvert V(G)\rvert-1}\lambda_k}{\lvert V(G)\rvert} \end{equation*} where $\lambda_k$, $k=1,\ldots,\lvert V(G)\rvert-1$, are the non-zero eigenvalues of $\Delta_G$. The eigenvectors of the Laplacian on the circulant graph $C^{1,\ldots,n}_{\beta n}$ are given by the characters $\chi_k(x)=e^{2\pi ikx/(\beta n)}$, $k=0,1,\ldots,\beta n-1$.
Therefore the non-zero eigenvalues are given by \begin{equation*} \lambda_k=2n-2\sum_{m=1}^n\cos(2\pi km/(\beta n)),\quad k=1,\ldots,\beta n-1. \end{equation*} Similarly, the non-zero eigenvalues on $C^{1,\ldots,n-1}_{\beta n}$ are given by \begin{equation*} \lambda_k=2(n-1)-2\sum_{m=1}^{n-1}\cos(2\pi km/(\beta n)),\quad k=1,\ldots,\beta n-1. \end{equation*} Figure \ref{powergraphs} below illustrates two power graphs of the $24$-cycle. \begin{figure}[H] \centering \subfigure[$\boldsymbol{C}^8_{24}$]{\includegraphics[width=5cm]{c824}} \hspace{2cm} \subfigure[$\boldsymbol{C}^7_{24}$]{\includegraphics[width=5cm]{c724}} \caption{$8$-th and $7$-th power graphs of the $24$-cycle} \label{powergraphs} \end{figure} \begin{theorem} \label{circ} Let $\beta\geqslant2$ be an integer and $\mu_k=2-2\cos(2\pi k/\beta)$, $k=1,\ldots,\beta-1$, be the non-zero eigenvalues of the Laplacian on the $\beta$-cycle. The number of spanning trees in the $n$-th power graph of the $\beta n$-cycle $\boldsymbol{C}^n_{\beta n}$, for $\beta\geqslant3$, is given by \begin{align*} \tau(\boldsymbol{C}^n_{\beta n})&=\frac{2^{\beta(n+1)}}{(2\beta)^2}n^{\beta n-2}\left(1+\frac{1}{2n}\right)^{\beta n}(1-(2n+1)^{-\beta})^n\\ &\times\prod_{k=1}^{\lceil\beta/2\rceil-1}\sin^2\left(\frac{\pi(n+1)k}{\beta}-n\Arcsin\left(\frac{n+1}{\sqrt{4n^2/\mu_k+2n+1}}\right)\right) \end{align*} where $\lceil x\rceil$ denotes the smallest integer greater than or equal to $x$. For $\beta=2$, it is given by \begin{equation*} \tau(\boldsymbol{C}^n_{2n})=(2n)^{2n-2}(1+1/n)^n.
\end{equation*} The number of spanning trees in the $(n-1)$-th power graph of the $\beta n$-cycle $\boldsymbol{C}^{n-1}_{\beta n}$, for $\beta\geqslant3$, is given by \begin{align*} \tau(\boldsymbol{C}^{n-1}_{\beta n})&=\frac{2^{\beta(n+1)}}{(2\beta)^2}n^{\beta n-2}\left(1-\frac{1}{2n}\right)^{\beta n}\lvert(-1)^\beta-(2n-1)^{-\beta}\rvert^n\\ &\times\prod_{k=1}^{\lceil\beta/2\rceil-1}\sin^2\left(\frac{\pi(n-1)k}{\beta}-n\Arcsin\left(\frac{n-1}{\sqrt{4n^2/\mu_k-(2n-1)}}\right)\right). \end{align*} For $\beta=2$, it is given by \begin{equation*} \tau(\boldsymbol{C}^{n-1}_{2n})=(2n)^{2n-2}(1-1/n)^n. \end{equation*} \end{theorem} \begin{remark} We emphasise that the cycle power graphs $\boldsymbol{C}^{n-1}_{\beta n}$ and $\boldsymbol{C}^n_{\beta n}$ contain $\beta$ copies of the $n$-clique as subgraphs. This fact appears in the formula through the factor $n^{\beta n-2}=(n^{n-2})^\beta n^{2(\beta-1)}$, since the number of spanning trees in the complete graph on $n$ vertices is $n^{n-2}$. \end{remark} \begin{proof} We prove the theorem only for the first type of graphs $\boldsymbol{C}^n_{\beta n}$. The proof for the second type $\boldsymbol{C}^{n-1}_{\beta n}$ is very similar. The matrix tree theorem states that \begin{equation*} \tau(\boldsymbol{C}^n_{\beta n})=\frac{1}{\beta n}\prod_{k=1}^{\beta n-1}(2n-2\sum_{m=1}^n\cos(2\pi km/(\beta n))). \end{equation*} Lagrange's trigonometric identity expresses the sum of cosines appearing in the above formula in terms of a quotient of sines: \begin{equation*} 2\sum_{m=1}^n\cos(2\pi km/(\beta n))=\frac{\sin((n+1/2)2\pi k/(\beta n))}{\sin(\pi k/(\beta n))}-1. \end{equation*} Hence, \begin{equation*} \tau(\boldsymbol{C}^n_{\beta n})=\frac{1}{\beta n}\prod_{k=1}^{\beta n-1}(\sin(\pi k/(\beta n)))^{-1}((2n+1)\sin(\pi k/(\beta n))-\sin(\pi k/(\beta n)+2\pi k/\beta)).
\end{equation*} Using that there are $\beta n$ spanning trees in the $\beta n$-cycle, that is $\frac{1}{\beta n}\prod_{k=1}^{\beta n-1}(2-2\cos(2\pi k/(\beta n)))=\beta n$, it follows that \begin{equation} \label{taucycle} \prod_{k=1}^{\beta n-1}\sin(\pi k/(\beta n))=\frac{\beta n}{2^{\beta n-1}}. \end{equation} For the second factor, as in the proof of Theorem \ref{dicd}, we split the product over $k=1,\ldots,\beta n-1$ into two products, first when $k$ is a multiple of $\beta$, that is $k=l\beta$ with $l=1,\ldots,n-1$, and second when $k$ is not a multiple of $\beta$, that is, $k=k'+l'\beta$ with $k'=1,\ldots,\beta-1$ and $l'=0,1,\ldots,n-1$. The product over the multiples of $\beta$ reduces to \begin{equation*} \prod_{l=1}^{n-1}2n\sin(\pi l/n)=n^n. \end{equation*} We have \begin{equation} \label{2prod} \tau(\boldsymbol{C}^n_{\beta n})=\frac{2^{\beta n-1}n^n}{(\beta n)^2}\prod_{k=1}^{\beta-1}\prod_{l=0}^{n-1}((2n+1)\sin(\pi k/(\beta n)+\pi l/n)-\sin(\pi k/(\beta n)+\pi l/n+2\pi k/\beta)). \end{equation} The difference of sines in the above product can be written as \begin{equation} \label{sine} (2n+1)\sin(\pi k/(\beta n)+\pi l/n)-\sin(\pi k/(\beta n)+\pi l/n+2\pi k/\beta)=\lvert z_k\rvert\sin(\pi(n+1)k/(\beta n)+\theta_k+\pi l/n) \end{equation} where \begin{equation*} z_k=2n\cos(\pi k/\beta)-i(2n+2)\sin(\pi k/\beta)\eqdef\lvert z_k\rvert e^{i\theta_k}. \end{equation*} Letting $\omega_k=\pi(n+1)k/(\beta n)+\theta_k$, we have \begin{align} \prod_{l=0}^{n-1}\sin(\omega_k+\pi l/n)&=\frac{1}{(2i)^n}\prod_{l=0}^{n-1}(e^{i(\omega_k+\pi l/n)}-e^{-i(\omega_k+\pi l/n)})\nonumber\\ &=\frac{1}{(2i)^n}e^{-i\omega_kn}e^{\pi i(n-1)/2}\prod_{l=0}^{n-1}(e^{2i\omega_k}-e^{-2\pi il/n})\nonumber\\ &=\frac{\sin(\omega_kn)}{2^{n-1}} \label{prodsines} \end{align} where in the last equality we used equation (\ref{unityroots}).
Putting equations (\ref{2prod}), (\ref{sine}) and (\ref{prodsines}) together yields \begin{equation*} \tau(\boldsymbol{C}^n_{\beta n})=\frac{2^{\beta n-1}n^n}{(\beta n)^2}\prod_{k=1}^{\beta-1}\frac{\lvert z_k\rvert^n}{2^{n-1}}\sin(\pi(n+1)k/\beta+n\theta_k). \end{equation*} Notice that for even $\beta$, the phase of $z_{\beta/2}$ is $\theta_{\beta/2}=-\pi/2$, so that $\sin(\pi(n+1)/2+n\theta_{\beta/2})=1$. For $\beta=2$, $z_1=-2(n+1)i$, hence \begin{equation*} \tau(\boldsymbol{C}^n_{2n})=(2n)^{2n-2}(1+1/n)^n. \end{equation*} For $\beta\geqslant3$, we have \begin{equation*} \tau(\boldsymbol{C}^n_{\beta n})=\frac{2^{n+\beta-2}n^n}{(\beta n)^2}\big(\prod_{k=1}^{\beta-1}\lvert z_k\rvert^n\big)\prod_{k=1}^{\lceil\beta/2\rceil-1}\sin(\pi(n+1)k/\beta+n\theta_k)\sin(\pi(n+1)(\beta-k)/\beta+n\theta_{\beta-k}). \end{equation*} For $1\leqslant k\leqslant\lceil\beta/2\rceil-1$, the phase of $z_k$ is $\theta_k=-\Arcsin((2n+2)\sin(\pi k/\beta)/\lvert z_k\rvert)$. The phase of $z_{\beta-k}$ satisfies \begin{equation*} \cos\theta_{\beta-k}=-\cos\theta_k,\quad\sin\theta_{\beta-k}=\sin\theta_k \end{equation*} so that, $\theta_{\beta-k}=\pi-\theta_k$. The modulus of $z_k$ is given by \begin{equation*} \lvert z_k\rvert=((2n+1)^2+1-2(2n+1)\cos(2\pi k/\beta))^{1/2}=(4n^2+(2n+1)\mu_k)^{1/2} \end{equation*} where $\mu_k=2-2\cos(2\pi k/\beta)$, $k=1,\ldots,\beta-1$, are the non-zero eigenvalues of the Laplacian on the $\beta$-cycle. We have $\sin(\pi k/\beta)=\mu_k^{1/2}/2$. Hence for $1\leqslant k\leqslant\lceil\beta/2\rceil-1$, the phase is given by $\theta_k=-\Arcsin((n+1)/\sqrt{4n^2/\mu_k+2n+1})$. Therefore \begin{equation} \label{taucirc} \tau(\boldsymbol{C}^n_{\beta n})=\frac{2^{n+\beta-2}n^n}{(\beta n)^2}\big(\prod_{k=1}^{\beta-1}\lvert z_k\rvert^n\big)\prod_{k=1}^{\lceil\beta/2\rceil-1}\sin^2\left(\frac{\pi(n+1)k}{\beta}-n\Arcsin\left(\frac{(n+1)}{\sqrt{4n^2/\mu_k+2n+1}}\right)\right). 
\end{equation} The product of the modulus of $z_k$ is given by \begin{align} \prod_{k=1}^{\beta-1}\lvert z_k\rvert&=\frac{(2n+1)^{\beta/2}}{2n}\prod_{k=0}^{\beta-1}(2n+1+1/(2n+1)-2\cos(2\pi k/\beta))^{1/2}\nonumber\\ &=\frac{(2n+1)^{\beta/2}}{2n}(2\cosh(\beta\argcosh(n+1/2+1/(4n+2)))-2)^{1/2}\nonumber\\ &=\frac{(2n+1)^{\beta}}{2n}(1-(2n+1)^{-\beta}) \label{prod_modulus} \end{align} where the second equality comes from the identity (see \cite[section $2$]{louis2015formula}) \begin{equation*} \prod_{k=0}^{\beta-1}(2\cosh\theta-2\cos(2\pi k/\beta))=2\cosh(\beta\theta)-2. \end{equation*} Putting equality (\ref{prod_modulus}) into (\ref{taucirc}) gives the theorem. \end{proof} \begin{remark} We point out that the proof above cannot easily be applied to other powers of the $\beta n$-cycle, such as $\boldsymbol{C}^{n-p}_{\beta n}$ with $p\geqslant2$ or $p\leqslant-1$, because in this case $z_k$ defined in equation (\ref{sine}) would also depend on $l$, and the phase $\theta_k$ of $z_k$ cannot be easily determined. As a consequence, the product over $l$ cannot be evaluated in the same way as in the proof. It would be interesting to find a derivation for this class of more general circulant graphs. \end{remark} From Theorem \ref{circ}, we derive the asymptotic behaviour of the number of spanning trees in the $n$-th, respectively $(n-1)$-th, power graph of the $\beta n$-cycle as $n\rightarrow\infty$. \begin{corollary} Let $\beta\in\mathbb{N}_{\geqslant2}$. The asymptotic number of spanning trees in the $n$-th and $(n-1)$-th power graphs of the $\beta n$-cycle $\boldsymbol{C}^n_{\beta n}$ and $\boldsymbol{C}^{n-1}_{\beta n}$ as $n\rightarrow\infty$ is respectively given by \begin{equation*} \tau(\boldsymbol{C}^n_{\beta n})=\frac{2^{\beta n}}{2\beta}n^{\beta n-2}(e^{\beta/2}+o(1)) \end{equation*} and \begin{equation*} \tau(\boldsymbol{C}^{n-1}_{\beta n})=\frac{2^{\beta n}}{2\beta}n^{\beta n-2}(e^{-\beta/2}+o(1)).
\end{equation*} \end{corollary} \begin{proof} Observing that for all $k\in\{1,\ldots,\lceil\beta/2\rceil-1\}$, \begin{equation*} \lim_{n\rightarrow\infty}\frac{n+1}{\sqrt{4n^2/\mu_k+2n+1}}=\sin(\pi k/\beta)\quad\textnormal{and}\quad\lim_{n\rightarrow\infty}\frac{n-1}{\sqrt{4n^2/\mu_k-(2n-1)}}=\sin(\pi k/\beta) \end{equation*} where $\mu_k=2-2\cos(2\pi k/\beta)$, and using relation (\ref{taucycle}), we see that the corollary is a direct consequence of Theorem \ref{circ}. \end{proof} \nocite{*} \bibliographystyle{plain}
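The closed form $\tau(\boldsymbol{C}^n_{2n})=(2n)^{2n-2}(1+1/n)^n$ obtained above is easy to check numerically via the Matrix-Tree theorem. The following sketch is illustrative only (the function name is mine); it assumes $\boldsymbol{C}^n_{\beta n}$ is read as the circulant multigraph on $\beta n$ vertices with jumps $1,\dots,n$, so that for $\beta=2$ the antipodal jump $n$ contributes a double edge.

```python
from fractions import Fraction

def tau_cycle_power(beta, n):
    """Spanning-tree count of the n-th power of the (beta*n)-cycle, read as
    the circulant multigraph with jumps 1..n (for beta = 2 the antipodal
    jump n contributes a double edge), via the Matrix-Tree theorem."""
    N = beta * n
    L = [[0] * N for _ in range(N)]  # graph Laplacian
    for v in range(N):
        for j in range(1, n + 1):
            w = (v + j) % N
            L[v][w] -= 1
            L[w][v] -= 1
            L[v][v] += 1
            L[w][w] += 1
    # Matrix-Tree theorem: tau equals any cofactor of L; delete the last
    # row and column and take an exact determinant over the rationals.
    M = [[Fraction(L[i][j]) for j in range(N - 1)] for i in range(N - 1)]
    det = Fraction(1)
    for c in range(N - 1):
        p = next(r for r in range(c, N - 1) if M[r][c] != 0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, N - 1):
            f = M[r][c] / M[c][c]
            for k in range(c, N - 1):
                M[r][k] -= f * M[c][k]
    return int(det)
```

For instance, `tau_cycle_power(2, 2)` returns $36$, in agreement with $(2n)^{2n-2}(1+1/n)^n$ at $n=2$, and `tau_cycle_power(3, 1)` returns $3$ (the triangle).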
https://arxiv.org/abs/1703.03743
The Marcinkiewicz-type discretization theorems
The paper is devoted to discretization of integral norms of functions from a given finite dimensional subspace. This problem is very important in applications, but it has not been studied systematically. We present here a new technique, which works well for discretization of the integral norm. It is a combination of a probabilistic technique, based on chaining, with results on the entropy numbers in the uniform norm.
\section{Introduction} \label{I} Discretization is a very important step in making a continuous problem computationally feasible. The problem of construction of good sets of points in a multidimensional domain is a fundamental problem of mathematics and computational mathematics. A prominent example of a classical discretization problem is the problem of metric entropy (covering numbers, entropy numbers). Bounds for the entropy numbers of function classes are important by themselves and also have important connections to other fundamental problems (see, for instance, \cite{Tbook}, Ch.3 and \cite{DTU}, Ch.6). Another prominent example of a discretization problem is the problem of numerical integration. Numerical integration in the mixed smoothness classes requires deep number-theoretic results for constructing optimal (in the sense of order) cubature formulas (see, for instance, \cite{DTU}, Ch.8). A typical approach to solving a continuous problem numerically -- the Galerkin method -- suggests looking for an approximate solution from a given finite dimensional subspace. A standard way to measure an error of approximation is an appropriate $L_q$ norm, $1\le q\le\infty$. Thus, the problem of discretization of the $L_q$ norms of functions from a given finite dimensional subspace arises in a very natural way. The main goal of this paper is to study the discretization problem for a finite dimensional subspace $X_N$ of a Banach space $X$. We are interested in discretizing the $L_q$, $1\le q\le \infty$, norm of elements of $X_N$. We call such results the Marcinkiewicz-type discretization theorems. There are different settings and different ingredients, which play an important role in this problem. We now discuss these issues. {\bf Marcinkiewicz problem.} Let $\Omega$ be a compact subset of ${\mathbb R}^d$ with the probability measure $\mu$.
We say that a linear subspace $X_N$ of the $L_q(\Omega)$, $1\le q < \infty$, admits the Marcinkiewicz-type discretization theorem with parameters $m$ and $q$ if there exist a set $\{\xi^\nu \in \Omega, \nu=1,\dots,m\}$ and two positive constants $C_j(d,q)$, $j=1,2$, such that for any $f\in X_N$ we have \begin{equation}\label{1.1} C_1(d,q)\|f\|_q^q \le \frac{1}{m} \sum_{\nu=1}^m |f(\xi^\nu)|^q \le C_2(d,q)\|f\|_q^q. \end{equation} In the case $q=\infty$ we define $L_\infty$ as the space of functions continuous on $\Omega$ and ask for \begin{equation}\label{1.2} C_1(d)\|f\|_\infty \le \max_{1\le\nu\le m} |f(\xi^\nu)| \le \|f\|_\infty. \end{equation} We will also use a brief way to express the above property: the $\mathcal M(m,q)$ theorem holds for a subspace $X_N$ or $X_N \in \mathcal M(m,q)$. {\bf Numerical integration problem.} In the case $1\le q<\infty$ the above problem can be reformulated as a problem on numerical integration of special classes of functions. Define a class $|X_N|^q := \{|f|^q: f\in X_N, \|f\|_q \le 1\}$ and consider the numerical integration problem: for a given $\varepsilon>0$ find $m = m(N,q,\varepsilon)$ such that \begin{equation}\label{1.3} \inf_{\xi^1,\dots,\xi^m}\sup_{f\in X_N,\|f\|_q\le 1} \left|\frac{1}{m}\sum_{\nu=1}^m |f(\xi^\nu)|^q -\|f\|_q^q\right| \le \varepsilon. \end{equation} In (\ref{1.3}) we limit our search for good numerical integration methods to cubature formulas with equal weights $1/m$. Cubature formulas of this special kind are called Quasi-Monte Carlo methods. In numerical integration general cubature formulas (with weights) are also very important. In this case the above problem (\ref{1.3}) is reformulated as follows \begin{equation}\label{1.4} \inf_{\xi^1,\dots,\xi^m;\lambda_1,\dots,\lambda_m}\sup_{f\in X_N,\|f\|_q\le 1} \left|\sum_{\nu=1}^m\lambda_\nu |f(\xi^\nu)|^q -\|f\|_q^q\right| \le \varepsilon.
\end{equation} Thus, in this case we are optimizing both over the knots $\xi^1,\dots,\xi^m$ and over the weights $\lambda_1,\dots,\lambda_m$. {\bf Marcinkiewicz problem with weights.} The above remark on numerical integration encourages us to consider the following variant of the Marcinkiewicz problem. We say that a linear subspace $X_N$ of the $L_q(\Omega)$, $1\le q < \infty$, admits the weighted Marcinkiewicz-type discretization theorem with parameters $m$ and $q$ if there exist a set of knots $\{\xi^\nu \in \Omega\}$, a set of weights $\{\lambda_\nu\}$, $\nu=1,\dots,m$, and two positive constants $C_j(d,q)$, $j=1,2$, such that for any $f\in X_N$ we have \begin{equation}\label{1.5} C_1(d,q)\|f\|_q^q \le \sum_{\nu=1}^m \lambda_\nu |f(\xi^\nu)|^q \le C_2(d,q)\|f\|_q^q. \end{equation} Then we also say that the $\mathcal M^w(m,q)$ theorem holds for a subspace $X_N$ or $X_N \in \mathcal M^w(m,q)$. Obviously, $X_N\in \mathcal M(m,q)$ implies that $X_N\in \mathcal M^w(m,q)$. {\bf Marcinkiewicz problem with $\varepsilon$.} We write $X_N\in \mathcal M(m,q,\varepsilon)$ if (\ref{1.1}) holds with $C_1(d,q)=1-\varepsilon$ and $C_2(d,q)=1+\varepsilon$. Respectively, we write $X_N\in \mathcal M^w(m,q,\varepsilon)$ if (\ref{1.5}) holds with $C_1(d,q)=1-\varepsilon$ and $C_2(d,q)=1+\varepsilon$. We note that the most powerful results are for $\mathcal M(m,q,0)$, when the $L_q$ norm of $f\in X_N$ is discretized exactly by the formula with equal weights $1/m$. In this paper we mostly concentrate on the Marcinkiewicz problem and on its variant with $\varepsilon$. Our main results are for $q=1$. We now give some general remarks for the case $q=2$, which illustrate the problem. We discuss the case $q=2$ in more detail in Section \ref{L2}. We describe the properties of the subspace $X_N$ in terms of a system $\mathcal U_N:=\{u_i\}_{i=1}^N$ of functions such that $X_N = \operatorname{span}\{u_i, i=1,\dots,N\}$. 
In the case $X_N \subset L_2$ we assume that the system is orthonormal on $\Omega$ with respect to measure $\mu$. In the case of real functions we associate with $x\in\Omega$ the matrix $G(x) := [u_i(x)u_j(x)]_{i,j=1}^N$. Clearly, $G(x)$ is a symmetric positive semi-definite matrix of rank $1$. It is easy to see that for a set of points $\xi^k\in \Omega$, $k=1,\dots,m$, and $f=\sum_{i=1}^N b_iu_i$ we have $$ \sum_{k=1}^m\lambda_k f(\xi^k)^2 - \int_\Omega f(x)^2 d\mu = {\mathbf b}^T\left(\sum_{k=1}^m \lambda_k G(\xi^k)-I\right){\mathbf b}, $$ where ${\mathbf b} = (b_1,\dots,b_N)^T$ is the column vector and $I$ is the identity matrix. Therefore, the $\mathcal M^w(m,2)$ problem is closely connected with a problem of approximation (representation) of the identity matrix $I$ by an $m$-term approximant with respect to the system $\{G(x)\}_{x\in\Omega}$. It is easy to understand that under our assumptions on the system $\mathcal U_N$ there exist a set of knots $\{\xi^k\}_{k=1}^m$ and a set of weights $\{\lambda_k\}_{k=1}^m$, with $m\le N^2$ such that $$ I = \sum_{k=1}^m \lambda_k G(\xi^k) $$ and, therefore, we have for any $X_N \subset L_2$ that \begin{equation}\label{1.6} X_N \in \mathcal M^w(N^2,2,0). \end{equation} However, we do not know a characterization of those $X_N$ for which $X_N \in \mathcal M(N^2,2,0)$. In the above formulations of the problems we only ask about existence of either good $\{\xi^\nu\}$ or good $\{\xi^\nu,\lambda_\nu\}$. Certainly, it is important to have either explicit constructions of good $\{\xi^\nu\}$ ($\{\xi^\nu,\lambda_\nu\}$) or deterministic ways to construct good $\{\xi^\nu\}$ ($\{\xi^\nu,\lambda_\nu\}$). 
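As a toy illustration of exact equal-weight discretization in $L_2$ (the property $X_N\in\mathcal M(m,2,0)$), consider the following sketch. The system and point counts are my own choices, not from the paper: the orthonormal trigonometric system $1,\sqrt2\cos x,\sqrt2\sin x$ on $[0,2\pi)$ with $d\mu=dx/(2\pi)$, discretized at equispaced knots.

```python
import math, random

# Orthonormal system on [0, 2*pi) w.r.t. d(mu) = dx/(2*pi):
# u_1 = 1, u_2 = sqrt(2)*cos(x), u_3 = sqrt(2)*sin(x), so N = 3.
def f_val(coeffs, x):
    a0, a1, b1 = coeffs
    return a0 + a1 * math.sqrt(2) * math.cos(x) + b1 * math.sqrt(2) * math.sin(x)

def discretized_sq_norm(coeffs, m):
    # equal weights 1/m at m equispaced knots (a Quasi-Monte-Carlo-type rule)
    return sum(f_val(coeffs, 2 * math.pi * j / m) ** 2 for j in range(m)) / m

def exact_sq_norm(coeffs):
    # by orthonormality, ||f||_2^2 is the sum of squared coefficients
    return sum(c * c for c in coeffs)

random.seed(0)
coeffs = [random.uniform(-1, 1) for _ in range(3)]
```

For this particular subspace already $m=N=3$ equispaced knots discretize $\|f\|_2^2$ exactly (all nonzero frequencies of $f^2$ are killed by the equispaced sum), while the general guarantee (\ref{1.6}) only promises some weighted rule with $m\le N^2$ knots.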
Thus, the Marcinkiewicz-type problem can be split into the following four problems: under some assumptions on $X_N$ (I) Find a condition on $m$ for $X_N \in \mathcal M(m,q)$; (II) Find a condition on $m$ for $X_N \in \mathcal M^w(m,q)$; (III) Find a condition on $m$ such that there exists a deterministic construction of $\{\xi^\nu\}_{\nu=1}^m$ satisfying (\ref{1.1}) for all $f\in X_N$; (IV) Find a condition on $m$ such that there exists a deterministic construction of $\{\xi^\nu,\lambda_\nu\}_{\nu=1}^m$ satisfying (\ref{1.5}) for all $f\in X_N$. The main results of this paper address the problem (I) in the case $q=1$. Our method is probabilistic. We impose the following assumptions on the system $\{u_i\}_{i=1}^N$ of real functions. {\bf A.} There exist $\alpha>0$, $\beta$, and $K_1$ such that for all $i\in [1,N]$ we have \begin{equation}\label{4.1i} |u_i(\mathbf x)-u_i(\mathbf y)| \le K_1N^\beta\|\mathbf x-\mathbf y\|_\infty^\alpha,\quad \mathbf x,\mathbf y \in \Omega. \end{equation} {\bf B.} There exists a constant $K_2$ such that $\|u_i\|_\infty^2 \le K_2$, $i=1,\dots,N$. {\bf C.} Denote $X_N:= \operatorname{span}(u_1,\dots,u_N)$. There exist two constants $K_3$ and $K_4$ such that the following Nikol'skii-type inequality holds for all $f\in X_N$ \begin{equation}\label{4.3i} \|f\|_\infty \le K_3N^{K_4/p}\|f\|_p,\quad p\in [2,\infty). \end{equation} The main result of this paper is the following theorem (see Theorem \ref{T4.9}). \begin{Theorem}\label{T1.1} Suppose that a real orthonormal system $\{u_i\}_{i=1}^N$ satisfies conditions {\bf A}, {\bf B}, and {\bf C}. Then there exists a set of $m \le C_1N(\log N)^{7/2}$ points $\xi^j\in \Omega$, $j=1,\dots,m$, $C_1=C(d,K_1,K_2,K_3,K_4,\Omega,\alpha,\beta)$, such that for any $f\in X_N$ we have $$ \frac{1}{2}\|f\|_1 \le \frac{1}{m}\sum_{j=1}^m |f(\xi^j)| \le \frac{3}{2}\|f\|_1. 
$$ \end{Theorem} An important particular case for application of Theorem \ref{T1.1} is the case when $X_N$ is a subspace of trigonometric polynomials. For a finite $Q\subset \mathbb Z^d$ denote $$ \mathcal T(Q) := \{f: f(\mathbf x) = \sum_{\mathbf k\in Q}c_\mathbf k e^{i(\mathbf k,\mathbf x)}\}. $$ The hyperbolic cross polynomials $\mathcal T(Q_n)$ are of special interest (see, for instance, \cite{DTU}): for $\mathbf s \in \mathbb Z^d_+$ let $$ Q_n := \cup_{\|\mathbf s\|_1 \le n} \rho(\mathbf s), $$ where $$ \rho(\mathbf s):= \{\mathbf k=(k_1,\dots,k_d)\in \mathbb Z^d : [2^{s_j-1}]\le |k_j|<2^{s_j},\quad j=1,\dots,d\} $$ and $[a]$ denotes the integer part of a number $a$. The following two theorems were proved in \cite{VT160}. \begin{Theorem}\label{T3.1} Let $d=2$. For any $n\in {\mathbb{N}}$ there exists a set of $m \le C_1|Q_n|n^{7/2}$ points $\xi^j\in \mathbb T^2$, $j=1,\dots,m$ such that for any $f\in \mathcal T(Q_n)$ we have $$ C_2\|f\|_1 \le \frac{1}{m}\sum_{j=1}^m |f(\xi^j)| \le C_3\|f\|_1. $$ \end{Theorem} \begin{Theorem}\label{T3.2} For any $d\in {\mathbb{N}}$ and $n\in {\mathbb{N}}$ there exists a set of \newline $m \le C_1(d)|Q_n|n^{d/2+3}$ points $\xi^j\in \mathbb T^d$, $j=1,\dots,m$ such that for any $f\in \mathcal T(Q_n)$ we have $$ C_2\|f\|_1 \le \frac{1}{m}\sum_{j=1}^m |f(\xi^j)| \le C_3\|f\|_1. $$ \end{Theorem} Theorem \ref{T3.1} addresses the case $d=2$ and Theorem \ref{T3.2} extends Theorem \ref{T3.1} to the case of all $d$. We point out that for $d=2$ Theorem \ref{T3.2} is weaker than Theorem \ref{T3.1}. Theorem \ref{T1.1} gives Theorem \ref{T3.1} and improves Theorem \ref{T3.2} by replacing an extra factor $n^{d/2+3}$ by $n^{7/2}$ in the bound for $m$. The technique for proving Theorem \ref{T1.1} presented in this paper is a development of the technique from \cite{VT160}. It is a combination of a probabilistic technique, based on chaining, with results on the entropy numbers. We present this technique in the following way.
In Section \ref{entropy} we discuss new elements of a method, which gives good upper bounds for the entropy numbers of the unit $L_1$ ball $\mathcal T(Q)_1$ in the $L_\infty$ norm. In Section \ref{M} we present results on the Marcinkiewicz-type theorems for the trigonometric polynomials. In Section \ref{MX} we show how the technique developed in Section \ref{M} for the trigonometric polynomials can be generalized for subspaces $X_N$ satisfying conditions {\bf A}, {\bf B}, and {\bf C}. The main results of the paper are in Sections \ref{entropy} -- \ref{MX}. They are about discretization theorems in $L_1$. In Section \ref{L2} we give some comments on the discretization theorems in $L_2$. This case (the $L_2$ case) is much better understood than the $L_1$ case and it has nice connections to recent strong results on submatrices of orthogonal matrices and on random matrices. \section{The entropy numbers of $\mathcal T(Q)_1$} \label{entropy} We begin with the definition of the entropy numbers. Let $X$ be a Banach space and let $B_X$ denote the unit ball of $X$ with the center at $0$. Denote by $B_X(y,r)$ a ball with center $y$ and radius $r$: $\{x\in X:\|x-y\|\le r\}$. For a compact set $A$ and a positive number $\varepsilon$ we define the covering number $N_\varepsilon(A)$ as follows $$ N_\varepsilon(A) := N_\varepsilon(A,X) :=\min \{n : \exists y^1,\dots,y^n, y^j\in A :A\subseteq \cup_{j=1}^n B_X(y^j,\varepsilon)\}. $$ It is convenient to consider along with the entropy $H_\varepsilon(A,X):= \log_2 N_\varepsilon(A,X)$ the entropy numbers $\varepsilon_k(A,X)$: $$ \varepsilon_k(A,X) :=\inf \{\varepsilon : \exists y^1,\dots ,y^{2^k} \in A : A \subseteq \cup_{j=1} ^{2^k} B_X(y^j,\varepsilon)\}. $$ In our definition of $N_\varepsilon(A)$ and $\varepsilon_k(A,X)$ we require $y^j\in A$. In a standard definition of $N_\varepsilon(A)$ and $\varepsilon_k(A,X)$ this restriction is not imposed. 
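For a finite set $A$, the covering number $N_\varepsilon(A)$ just defined (with centers restricted to $A$) can be computed by exhaustive search. A minimal sketch, illustrative only and exponential in $|A|$ (the function names are mine):

```python
from itertools import combinations

def covering_number(A, eps, dist):
    """N_eps(A): the smallest number of closed eps-balls with centers in A
    needed to cover A (brute force over all center subsets)."""
    for n in range(1, len(A) + 1):
        for centers in combinations(A, n):
            if all(any(dist(a, c) <= eps for c in centers) for a in A):
                return n
    return len(A)

def abs_dist(a, b):
    return abs(a - b)
```

For example, for $A=\{0,1,2,3\}\subset\mathbb R$ one gets $N_1(A)=2$ (centers $1$ and $3$ suffice), while $N_{1/2}(A)=4$; dropping the restriction that centers lie in $A$ would give $N_{1/2}=2$, consistent with the factor-$2$ comparison discussed next.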
However, it is well known (see \cite{Tbook}, p.208) that these characteristics may differ by at most a factor of $2$. We use the technique developed in \cite{VT156}, which is based on the following two-step strategy. At the first step we obtain bounds on the best $m$-term approximations with respect to a dictionary. At the second step we use general inequalities relating the entropy numbers to the best $m$-term approximations. We begin the detailed discussion with the second step of the above strategy. Let ${\mathcal D}=\{g_j\}_{j=1}^N$ be a system of elements of cardinality $|{\mathcal D}|=N$ in a Banach space $X$. Consider best $m$-term approximations of $f$ with respect to ${\mathcal D}$ $$ \sigma_m(f,{\mathcal D})_X:= \inf_{\{c_j\};\Lambda:|\Lambda|=m}\|f-\sum_{j\in \Lambda}c_jg_j\|. $$ For a function class $F$ set $$ \sigma_m(F,{\mathcal D})_X:=\sup_{f\in F}\sigma_m(f,{\mathcal D})_X. $$ The following results are from \cite{VT138}. \begin{Theorem}\label{T2.1} Let a compact $F\subset X$ be such that there exists a system ${\mathcal D}$, $|{\mathcal D}|=N$, and a number $r>0$ such that $$ \sigma_m(F,{\mathcal D})_X \le m^{-r},\quad m\le N. $$ Then for $k\le N$ \begin{equation}\label{2.1} \varepsilon_k(F,X) \le C(r) \left(\frac{\log(2N/k)}{k}\right)^r. \end{equation} \end{Theorem} \begin{Remark}\label{R2.1} Suppose that a compact $F$ from Theorem \ref{T2.1} belongs to an $N$-dimensional subspace $X_N:=\operatorname{span}({\mathcal D})$. Then in addition to (\ref{2.1}) we have for $k\ge N$ \begin{equation}\label{2.2} \varepsilon_k(F,X) \le C(r)N^{-r}2^{-k/(2N)}. \end{equation} \end{Remark} We point out that Remark \ref{R2.1} is formulated for a complex Banach space $X$. In the case of a real Banach space $X$ we have $2^{-k/N}$ instead of $2^{-k/(2N)}$ in (\ref{2.2}).
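The hypothesis of Theorem \ref{T2.1}, a bound $\sigma_m(F,{\mathcal D})_X\le m^{-r}$, is typically supplied by a greedy algorithm. As a toy numerical illustration (the setup is my own: the coordinate dictionary in $\mathbb R^{100}$ and $f$ a convex combination of it), an orthogonal-greedy loop exhibits the $m^{-1/2}$ decay:

```python
import math, random

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def oga_residual_norms(f, dictionary, m):
    """Orthogonal greedy algorithm (weakness parameter t = 1): at each step
    pick the dictionary element with the largest inner product with the
    residual, then project f onto the span of all selected elements
    (running Gram-Schmidt).  Returns residual norms after 0, 1, ..., m steps."""
    residual = list(f)
    basis = []  # orthonormal basis of the span of the selected elements
    norms = [math.sqrt(inner(residual, residual))]
    for _ in range(m):
        g = max(dictionary, key=lambda h: abs(inner(residual, h)))
        v = list(g)
        for b in basis:  # orthogonalize the new element against the basis
            c = inner(v, b)
            v = [vi - c * bi for vi, bi in zip(v, b)]
        nv = math.sqrt(inner(v, v))
        if nv > 1e-12:
            v = [vi / nv for vi in v]
            basis.append(v)
            c = inner(residual, v)
            residual = [ri - c * vi for ri, vi in zip(residual, v)]
        norms.append(math.sqrt(inner(residual, residual)))
    return norms

# f in the convex hull of the coordinate dictionary in R^100
random.seed(0)
N = 100
weights = [random.random() for _ in range(N)]
s = sum(weights)
f = [w / s for w in weights]
coord_dict = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
norms = oga_residual_norms(f, coord_dict, 20)
```

In this setting the residual norm after $m$ steps is at most $(1+m)^{-1/2}$, since the $m+1$ largest coefficients of a convex combination are each at most $1/(m+1)$ on average.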
We begin with the best $m$-term approximation of elements of $\mathcal T(Q)_1:=\{f\in\mathcal T(Q):\|f\|_1\le 1\}$ in $L_2$ with respect to a special dictionary ${\mathcal D}^1:={\mathcal D}^1(Q)$ associated with $Q$. Denote $$ {\mathcal D}_Q(\mathbf x) := \sum_{\mathbf k\in Q} e^{i(\mathbf k,\mathbf x)},\quad w_Q := |Q|^{-1/2}{\mathcal D}_Q. $$ Then $\|w_Q\|_2 =1$. Consider the dictionary $$ {\mathcal D}^1:= {\mathcal D}^1(Q):= \{w_Q(\mathbf x-\mathbf y)\}_{\mathbf y\in\mathbb T^d}. $$ For a dictionary ${\mathcal D}$ in a Hilbert space $H$ with an inner product $\<\cdot,\cdot\>$ denote by $A_1({\mathcal D})$ the closure of the convex hull of the dictionary ${\mathcal D}$. In the case of a complex Hilbert space define the symmetrized dictionary ${\mathcal D}^s := \{e^{i\theta} g: g\in {\mathcal D}, \theta\in [0,2\pi]\}$. We use the Weak Orthogonal Greedy Algorithm (Weak Orthogonal Matching Pursuit) for $m$-term approximation. We recall the corresponding definition and state the known result that we will use. {\bf Weak Orthogonal Greedy Algorithm (WOGA).} Let $t\in (0,1]$ be a weakness parameter. We define $f^{o,t}_0 :=f$. Then for each $m\ge 1$ we inductively define: (1) $\varphi^{o,t}_m \in {\mathcal D}$ is any element satisfying $$ |\langle f^{o,t}_{m-1},\varphi^{o,t}_m\rangle | \ge t \sup_{g\in {\mathcal D}} |\langle f^{o,t}_{m-1},g\rangle |. $$ (2) Let $H_m^t := \operatorname{span} (\varphi_1^{o,t},\dots,\varphi^{o,t}_m)$ and let $P_{H_m^t}(f)$ denote an operator of orthogonal projection onto $H_m^t$. Define $$ G_m^{o,t}(f,{\mathcal D}) := P_{H_m^t}(f). $$ (3) Define the residual after the $m$th iteration of the algorithm $$ f^{o,t}_m := f-G_m^{o,t}(f,{\mathcal D}). $$ In the case $t=1$ the WOGA is called the Orthogonal Greedy Algorithm (OGA). The following theorem is from \cite{T1} (see also \cite{Tbook}, Ch.2). \begin{Theorem}\label{T2.2} Let ${\mathcal D}$ be an arbitrary dictionary in $H$.
Then for each $f \in A_1({\mathcal D}^s)$ we have \begin{equation}\label{2.3} \|f-G^{o,t}_m(f,{\mathcal D})\| \le (1+mt^2)^{-1/2}. \end{equation} \end{Theorem} We now prove the following assertion. \begin{Theorem}\label{T2.3} For any finite $Q\subset \mathbb Z^d$ we have $$ \sigma_m(\mathcal T(Q)_1,{\mathcal D}^1(Q))_2 \le (|Q|/m)^{1/2}. $$ \end{Theorem} \begin{proof} Each $f\in \mathcal T(Q)_1$ has a representation \begin{equation}\label{2.4} f(\mathbf x) = (2\pi)^{-d} \int_{\mathbb T^d} f(\mathbf y){\mathcal D}_Q(\mathbf x-\mathbf y)d\mathbf y = |Q|^{1/2}(2\pi)^{-d} \int_{\mathbb T^d} f(\mathbf y)w_Q(\mathbf x-\mathbf y)d\mathbf y. \end{equation} It follows from $\|f\|_1 \le 1$ and (\ref{2.4}) that $f|Q|^{-1/2} \in A_1(({\mathcal D}^1)^s)$. Therefore, by Theorem \ref{T2.2} we get the required bound. \end{proof} Dictionary ${\mathcal D}^1(Q)$ is an infinite dictionary. In our further applications we would like to have a finite dictionary. Here we consider $Q\subset \Pi(\mathbf N)$ with $\mathbf N=(2^n,\dots,2^n)$, where $\Pi(\mathbf N):=[-N_1,N_1]\times\cdots\times[-N_d,N_d]$, $\mathbf N=(N_1,\dots,N_d)$. We denote \begin{align*} P(\mathbf N) := \bigl\{\mathbf n = (n_1 ,\dots,n_d),&\qquad n_j\ \text{ nonnegative integers},\\ &0\le n_j\le 2N_j ,\qquad j = 1,\dots,d \bigr\}, \end{align*} and set $$ \mathbf x^{\mathbf n}:=\left(\frac{2\pi n_1}{2N_1+1},\dots,\frac{2\pi n_d} {2N_d+1}\right),\qquad \mathbf n\in P(\mathbf N) . $$ Then for any $t\in \mathcal T(\Pi(\mathbf N))$ (see \cite{Z}, Ch.10) \begin{equation}\label{2.5} \vartheta(\mathbf N)^{-1}\sum_{\mathbf n\in P(\mathbf N)} \bigl|t(\mathbf x^{\mathbf n})\bigr|\le C(d)\|t\|_1, \end{equation} where $\vartheta(\mathbf N) := \prod_{j=1}^d (2N_j + 1)=\dim\mathcal T(\Pi(\mathbf N))$. Specify $\mathbf N:=(2^n,\dots,2^n)$ and define $$ {\mathcal D}^2:={\mathcal D}^2(Q):= \{w_Q(\mathbf x-\mathbf x^\mathbf n)\}_{\mathbf n\in P(\mathbf N)}.
$$ Then, clearly, $|{\mathcal D}^2(Q)|=\vartheta(\mathbf N) = (2^{n+1}+1)^d$. Also, it is well known that for $f\in \mathcal T(\Pi(\mathbf N))$ one has \begin{equation}\label{2.6} f(\mathbf x) = \vartheta(\mathbf N)^{-1}\sum_{\mathbf n\in P(\mathbf N)} f(\mathbf x^{\mathbf n}){\mathcal D}_{\Pi(\mathbf N)}(\mathbf x-\mathbf x^\mathbf n) \end{equation} and, therefore, for $f\in \mathcal T(Q)$, $Q\subset \Pi(\mathbf N)$ \begin{equation}\label{2.6Q} f(\mathbf x) = \vartheta(\mathbf N)^{-1}\sum_{\mathbf n\in P(\mathbf N)} f(\mathbf x^{\mathbf n}){\mathcal D}_Q(\mathbf x-\mathbf x^\mathbf n). \end{equation} In particular, (\ref{2.5}) and (\ref{2.6Q}) imply that there exists $C(d)>0$ such that for every $f\in\mathcal T(Q)_1$ we have $C(d)^{-1}|Q|^{-1/2} f\in A_1(({\mathcal D}^2)^s)$. Therefore, we have the following version of Theorem \ref{T2.3}. \begin{Theorem}\label{T2.4} For any $Q\subset \Pi(\mathbf N)$ with $\mathbf N=(2^n,\dots,2^n)$ we have $$ \sigma_m(\mathcal T(Q)_1,{\mathcal D}^2(Q))_2 \le C(d)(|Q|/m)^{1/2} $$ and $|{\mathcal D}^2(Q)|\le C'(d)2^{nd}$. \end{Theorem} Theorems \ref{T2.3} and \ref{T2.4} provide bounds for the best $m$-term approximation of elements of $\mathcal T(Q)_1$ in the $L_2$ norm. For applications in the Marcinkiewicz discretization theorem we need bounds for the entropy numbers in the $L_\infty$ norm. As we explained above, we derive appropriate bounds for the entropy numbers from the corresponding bounds on the best $m$-term approximations with the help of Theorem \ref{T2.1}. Thus we need bounds on the best $m$-term approximations in the $L_\infty$ norm. We proceed in the same way as in \cite{VT156} and use the following dictionary $$ {\mathcal D}^\mathcal T:={\mathcal D}^\mathcal T(Q):= \{e^{i(\mathbf k,\mathbf x)}: \mathbf k\in Q\}. $$ In order to obtain the bounds in the $L_\infty$ norm we use the following theorem from \cite{VT156}, which in turn is a corollary of the corresponding result from \cite{VT150}.
\begin{Theorem}\label{T2.5} Let $\Lambda\subset \Pi(\mathbf N)$ with $N_j=2^n$, $j=1,\dots,d$. There exist constructive greedy-type approximation methods $G^\infty_m(\cdot)$, which provide $m$-term polynomials with respect to $\mathcal T^d$ with the following properties: \newline for $f\in \mathcal T(\Lambda)$ we have $G^\infty_m(f)\in \mathcal T(\Lambda)$ and $$ \|f-G^\infty_m(f)\|_\infty \le C_3(d)({\bar m})^{-1/2}n^{1/2}|\Lambda|^{1/2}\|f\|_2,\quad \bar m := \max(m,1). $$ \end{Theorem} We now consider a dictionary $$ {\mathcal D}^3:={\mathcal D}^3(Q):= {\mathcal D}^2(Q)\cup {\mathcal D}^\mathcal T(Q). $$ \begin{Lemma}\label{L2.1} For any $Q\subset \Pi(\mathbf N)$ with $\mathbf N=(2^n,\dots,2^n)$ we have $$ \sigma_m(\mathcal T(Q)_1,{\mathcal D}^3(Q))_\infty \le C(d)n^{1/2}|Q|/m $$ and $|{\mathcal D}^3(Q)|\le C'(d)2^{nd}$. \end{Lemma} \begin{proof} Take $f\in\mathcal T(Q)_1$. Applying first Theorem \ref{T2.4} with $[m/2]$ terms and then Theorem \ref{T2.5} with $\Lambda = Q$ and $[m/2]$ terms, we obtain $$ \sigma_m(f,{\mathcal D}^3(Q))_\infty \ll n^{1/2}(|Q|/m)\|f\|_1, $$ which proves the lemma. \end{proof} Lemma \ref{L2.1}, Theorem \ref{T2.1}, and Remark \ref{R2.1} imply the following result on the entropy numbers. \begin{Theorem}\label{T2.6} For any $Q\subset \Pi(\mathbf N)$ with $\mathbf N=(2^n,\dots,2^n)$ we have $$ \varepsilon_k(\mathcal T(Q)_1,L_\infty) \ll \left\{\begin{array}{ll} n^{3/2}(|Q|/k), &\quad k\le 2|Q|,\\ n^{3/2}2^{-k/(2|Q|)},&\quad k\ge 2|Q|.\end{array} \right. $$ \end{Theorem} The above theorem with $Q=Q_n$ can be used for proving the upper bounds for the entropy numbers of the mixed smoothness classes. We define the classes which were studied in \cite{VT152} and \cite{VT156}.
Let $\mathbf s=(s_1,\dots,s_d)$ be a vector with nonnegative integer coordinates ($\mathbf s \in \mathbb Z^d_+$) and, as above, $$ \rho(\mathbf s):= \{\mathbf k=(k_1,\dots,k_d)\in \mathbb Z^d : [2^{s_j-1}]\le |k_j|<2^{s_j},\quad j=1,\dots,d\} $$ where $[a]$ denotes the integer part of a number $a$. Define for $f\in L_1$ $$ \delta_\mathbf s(f) := \sum_{\mathbf k\in\rho(\mathbf s)} \hat f(\mathbf k) e^{i(\mathbf k,\mathbf x)}, $$ and $$ f_l:=\sum_{\|\mathbf s\|_1=l}\delta_\mathbf s(f), \quad l\in {\mathbb{N}}_0,\quad {\mathbb{N}}_0:={\mathbb{N}}\cup \{0\}. $$ Consider the class (see \cite{VT152}) $$ \mathbf W^{a,b}_q:=\{f: \|f_l\|_q \le 2^{-al}(\bar l)^{(d-1)b}\},\quad \bar l:=\max(l,1). $$ Define $$ \|f\|_{\mathbf W^{a,b}_q} := \sup_l \|f_l\|_q 2^{al}(\bar l)^{-(d-1)b}. $$ Here is one more class, which is equivalent to $\mathbf W^{a,b}_q$ in the case $1<q<\infty$ (see \cite{VT152}). Consider a class ${\bar \mathbf W}^{a,b}_q$, which consists of functions $f$ with a representation $$ f=\sum_{n=1}^\infty t_n, \quad t_n\in \mathcal T(Q_n), \quad \|t_n\|_q \le 2^{-an} n^{b(d-1)}. $$ In the case $q=1$ classes ${\bar \mathbf W}^{a,b}_1$ are wider than $\mathbf W^{a,b}_1$. The following theorem was proved in \cite{VT156}. \begin{Theorem}\label{T2.7} Let $d=2$ and $a>1$. Then \begin{equation}\label{2.7} \varepsilon_k(\mathbf W^{a,b}_1,L_\infty) \asymp \varepsilon_k(\bar\mathbf W^{a,b}_1,L_\infty) \asymp k^{-a} (\log k)^{a+b+1/2}. \end{equation} \end{Theorem} We prove here an extension of Theorem \ref{T2.7} to all $d$. We note that this extension -- Theorem \ref{T2.8} -- is weaker than Theorem \ref{T2.7} in the case $d=2$. \begin{Theorem}\label{T2.8} Let $a>1$. Then \begin{equation}\label{2.8} \varepsilon_k(\mathbf W^{a,b}_1,L_\infty) \le \varepsilon_k(\bar\mathbf W^{a,b}_1,L_\infty) \ll k^{-a} (\log k)^{(a+b)(d-1)+3/2}. \end{equation} \end{Theorem} \begin{proof} The proof is based on the following general result from \cite{VT156}. Let $X$ and $Y$ be two Banach spaces.
We discuss a problem of estimating the entropy numbers of an approximation class, defined in the space $X$, in the norm of the space $Y$. Suppose a sequence of finite dimensional subspaces $X_n \subset X$, $n=1,\dots $, is given. Define the following class $$ {\bar \mathbf W}^{a,b}_X:={\bar \mathbf W}^{a,b}_X\{X_n\} := \{f\in X: f=\sum_{n=1}^\infty f_n,\quad f_n\in X_n, $$ $$ \|f_n\|_X \le 2^{-an}n^{b},\quad n=1,2,\dots\}. $$ In particular, $$ {\bar \mathbf W}^{a,b}_q = {\bar \mathbf W}^{a,b(d-1)}_{L_q}\{\mathcal T(Q_n)\} . $$ Denote $D_n:=\dim X_n$ and assume that for the unit balls $B(X_n):=\{f\in X_n: \|f\|_X\le 1\}$ we have the following upper bounds for the entropy numbers: there exist real $\alpha$ and nonnegative $\gamma$ and $\beta\in(0,1]$ such that \begin{equation}\label{EA} \varepsilon_k(B(X_n),Y) \ll n^\alpha \left\{\begin{array}{ll} (D_n/(k+1))^\beta (\log (4D_n/(k+1)))^\gamma, &\quad k\le 2D_n,\\ 2^{-k/(2D_n)},&\quad k\ge 2D_n.\end{array} \right. \end{equation} \begin{Theorem}\label{T2.1G} Assume $D_n \asymp 2^n n^c$, $c\ge 0$, $a>\beta$, and subspaces $\{X_n\}$ satisfy (\ref{EA}). Then \begin{equation}\label{2.0G} \varepsilon_k(\bar \mathbf W^{a,b}_X\{X_n\},Y) \ll k^{-a} (\log k)^{ac+b+\alpha}. \end{equation} \end{Theorem} Theorem \ref{T2.6} with $Q=Q_n$ provides (\ref{EA}) with $\alpha=3/2$, $\beta=1$, $\gamma =0$. It remains to apply Theorem \ref{T2.1G} with $X_n=\mathcal T(Q_n)$ and $c=d-1$. \end{proof} \section{The Marcinkiewicz-type discretization theorem for the trigonometric polynomials} \label{M} In this section we improve Theorem \ref{T3.2} from the Introduction, which was proved in \cite{VT160}, in two directions. We prove the Marcinkiewicz-type discretization theorem for $\mathcal T(Q)$ instead of $\mathcal T(Q_n)$ for a rather general $Q$. Also, even in a more general situation, we improve the bound from $m \le C_1(d)|Q_n|n^{d/2+3}$ to $m \le C_1(d)|Q_n|n^{7/2}$ similar to that in Theorem \ref{T3.1}. 
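For concreteness, the step hyperbolic cross $Q_n$ from the Introduction can be enumerated directly. The following sketch (for $d=2$; function names are mine) also confirms the growth $|Q_n|\asymp 2^n n$ behind the bounds above, using that the dyadic blocks $\rho(\mathbf s)$ are pairwise disjoint, so $|Q_n|=1+\sum_{l=1}^n(l+1)2^l$.

```python
def rho(s1, s2):
    """The dyadic block rho(s): [2^(s_j - 1)] <= |k_j| < 2^(s_j) in each
    coordinate (s_j = 0 forces k_j = 0), written out for d = 2."""
    def coords(sj):
        lo, hi = (2 ** (sj - 1) if sj > 0 else 0), 2 ** sj
        return [k for k in range(-hi + 1, hi) if lo <= abs(k) < hi]
    return {(k1, k2) for k1 in coords(s1) for k2 in coords(s2)}

def hyperbolic_cross(n):
    """Q_n = union of rho(s) over ||s||_1 <= n, for d = 2."""
    pts = set()
    for s1 in range(n + 1):
        for s2 in range(n + 1 - s1):
            pts |= rho(s1, s2)
    return pts
```

For example, $|Q_0|=1$, $|Q_1|=5$ and $|Q_2|=17$.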
Our proof goes along the lines of the proof of Theorem \ref{T2.1} from \cite{VT160}. We use the following results from \cite{VT160}. Lemma \ref{L3.1} is from \cite{BLM}. \begin{Lemma}\label{L3.1} Let $\{g_j\}_{j=1}^m$ be independent random variables with $\mathbb E g_j=0$, $j=1,\dots,m$, which satisfy $$ \|g_j\|_1\le 2,\qquad \|g_j\|_\infty \le M,\qquad j=1,\dots,m. $$ Then for any $\eta \in (0,1)$ we have the following bound on the probability $$ \mathbb P\left\{\left|\sum_{j=1}^m g_j\right|\ge m\eta\right\} < 2\exp\left(-\frac{m\eta^2}{8M}\right). $$ \end{Lemma} We now consider measurable functions $f(\mathbf x)$, $\mathbf x\in \Omega$. For $1\le q<\infty$ define $$ L^q_\mathbf z(f) := \frac{1}{m}\sum_{j=1}^m |f(\mathbf x^j)|^q -\|f\|_q^q,\qquad \mathbf z:= (\mathbf x^1,\dots,\mathbf x^m). $$ Let $\mu$ be a probability measure on $\Omega$. Denote by $\mu^m := \mu\times\cdots\times\mu$ the probability measure on $\Omega^m := \Omega\times\cdots\times\Omega$. We will need the following inequality, which is a corollary of Lemma \ref{L3.1} (see \cite{VT160}). \begin{Proposition}\label{P3.1} Let $f_j\in L_1(\Omega)$ be such that $$ \|f_j\|_1 \le 1/2,\quad j=1,2;\qquad \|f_1-f_2\|_\infty \le \delta. $$ Then \begin{equation}\label{3.1} \mu^m\{\mathbf z: |L^1_\mathbf z(f_1) -L^1_\mathbf z(f_2)| \ge \eta\} < 2\exp\left(-\frac{m\eta^2}{16\delta}\right). \end{equation} \end{Proposition} We now prove the Marcinkiewicz-type theorem for discretization of the $L_1$ norm of polynomials from $\mathcal T(Q)$. \begin{Theorem}\label{T3.3} For any $Q\subset \Pi(\mathbf N)$ with $\mathbf N=(2^n,\dots,2^n)$ there exists a set of $m \le C_1(d)|Q|n^{7/2}$ points $\xi^j\in \mathbb T^d$, $j=1,\dots,m$ such that for any $f\in \mathcal T(Q)$ we have $$ C_2\|f\|_1 \le \frac{1}{m}\sum_{j=1}^m |f(\xi^j)| \le C_3\|f\|_1. $$ \end{Theorem} \begin{proof} We use the technique developed in learning theory and in the distribution-free theory of regression, known under the name of {\it chaining technique}.
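The probabilistic mechanism behind the proof, namely that random points discretize the $L_1$ norm with high probability, is easy to observe in a toy experiment. This is an illustration only, not the proof's construction; the univariate setting and all parameters below are my own choices, and the "true" $L_1$ norm is approximated by a fine Riemann sum.

```python
import math, random

def trig_poly(coeffs, x):
    # real trigonometric polynomial: sum_k (a_k cos(kx) + b_k sin(kx))
    return sum(a * math.cos(k * x) + b * math.sin(k * x)
               for k, (a, b) in enumerate(coeffs))

random.seed(0)
coeffs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(6)]

# reference value of ||f||_1 via a fine equispaced Riemann sum
M = 20000
l1_norm = sum(abs(trig_poly(coeffs, 2 * math.pi * j / M)) for j in range(M)) / M

# equal-weight discretization at m i.i.d. uniform points
m = 4000
sample = [random.uniform(0, 2 * math.pi) for _ in range(m)]
l1_disc = sum(abs(trig_poly(coeffs, x)) for x in sample) / m
```

With overwhelming probability the empirical average lands within the factor-$[1/2,3/2]$ band of Theorem \ref{T1.1}; the point of Theorem \ref{T3.3} is that far fewer, carefully counted points suffice.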
Proposition \ref{P3.1} plays an important role in our proof. It is used in the proof of the bound on the probability of the event $\{\sup_{f\in W}|L^1_\mathbf z(f)|\ge \eta\}$ for a function class $W$. The corresponding proof is in terms of the entropy numbers of $W$. We consider the case when $X$ is ${\mathcal C}(\Omega)$, the space of functions continuous on a compact subset $\Omega$ of ${\mathbb{R}}^d$, with the norm $$ \|f\|_\infty:= \sup_{\mathbf x\in \Omega}|f(\mathbf x)|. $$ We use the abbreviated notations $$ \varepsilon_n(W):= \varepsilon_n(W,{\mathcal C}). $$ In our case \begin{equation}\label{3.3} W:=W(Q) := \{t\in \mathcal T(Q): \|t\|_1 = 1/2\}. \end{equation} We use Theorem \ref{T2.6} proved in Section \ref{entropy}. We formulate it here for the reader's convenience. We stress that Theorem \ref{T2.6'} is the only result on the specific features of $\mathcal T(Q)$ that we use in the proof of Theorem \ref{T3.3}. \begin{Theorem}\label{T2.6'} For any $Q\subset \Pi(\mathbf N)$ with $\mathbf N=(2^n,\dots,2^n)$ we have $$ \varepsilon_k(\mathcal T( Q)_1,L_\infty) \le 2\varepsilon_k:= 2C_4(d) \left\{\begin{array}{ll} n^{3/2}(|Q|/k), &\quad k\le 2| Q|,\\ n^{3/2}2^{-k/(2| Q|)},&\quad k\ge 2| Q|.\end{array} \right. $$ \end{Theorem} Specify $\eta=1/4$. Denote $\delta_j := \varepsilon_{2^j}$, $j=0,1,\dots$, and consider minimal $\delta_j$-nets ${\mathcal N}_j \subset W$ of $W$ in ${\mathcal C}(\mathbb T^d)$. We use the notation $N_j:= |{\mathcal N}_j|$. Let $J$ be the minimal $j$ satisfying $\delta_j \le 1/16$. For $j=1,\dots,J$ we define a mapping $A_j$ that associates with a function $f\in W$ a function $A_j(f) \in {\mathcal N}_j$ closest to $f$ in the ${\mathcal C}$ norm. Then, clearly, $$ \|f-A_j(f)\|_{\mathcal C} \le \delta_j. $$ We use the mappings $A_j$, $j=1,\dots, J$, to associate with a function $f\in W$ a sequence (a chain) of functions $f_J, f_{J-1},\dots, f_1$ in the following way $$ f_J := A_J(f),\quad f_j:=A_j(f_{j+1}),\quad j=J-1,\dots,1.
$$ Let us find an upper bound for $J$, defined above. Certainly, we can carry out the proof under the assumption that $C_4(d)\ge 1$. Then the definition of $J$ implies that $2^J\ge 2|Q|$ and \begin{equation}\label{3.5} C_4(d)n^{3/2}2^{-2^{J-1}/(2| Q|)} \ge 1/16. \end{equation} We derive from (\ref{3.5}) \begin{equation}\label{3.6} 2^J \le 4|Q| C(d)\log n,\qquad J \le 2dn \end{equation} for sufficiently large $n\ge C(d)$. Set $$ \eta_j := \frac{1}{16nd},\quad j=1,\dots,J. $$ We now proceed to the estimate of $\mu^m\{\mathbf z:\sup_{f\in W}|L^1_\mathbf z(f)|\ge 1/4\}$. First of all, by the following simple Proposition \ref{P3.2}, the assumption $\delta_J\le 1/16$ implies that if $|L^1_\mathbf z(f)| \ge 1/4$ then $|L^1_\mathbf z(f_J)|\ge 1/8$. \begin{Proposition}\label{P3.2} If $\|f_1-f_2\|_\infty\le \delta$, then $$ |L^1_\mathbf z(f_1)-L^1_\mathbf z(f_2)| \le 2\delta. $$ \end{Proposition} Rewriting $$ L^1_\mathbf z(f_J) = L^1_\mathbf z(f_J)-L^1_\mathbf z(f_{J-1}) +\dots+L^1_\mathbf z(f_{2})-L^1_\mathbf z(f_1)+L^1_\mathbf z(f_1) $$ we conclude that if $|L^1_\mathbf z(f)| \ge 1/4$ then at least one of the following events occurs: $$ |L^1_\mathbf z(f_j)-L^1_\mathbf z(f_{j-1})|\ge \eta_j\quad\text{for some}\quad j\in (1,J] \quad\text{or}\quad |L^1_\mathbf z(f_1)|\ge \eta_1. $$ Therefore \begin{eqnarray}\label{3.6'} \mu^m\{\mathbf z:\sup_{f\in W}|L^1_\mathbf z(f)|\ge1/4\} \le \mu^m\{\mathbf z:\sup_{f\in {\mathcal N}_1}|L^1_\mathbf z(f)|\ge\eta_1\} \nonumber \\ +\sum_{j\in(1,J]}\sum_{f\in {\mathcal N}_j}\mu^m \{\mathbf z:|L^1_\mathbf z(f)-L^1_\mathbf z(A_{j-1}(f))|\ge\eta_j\}\nonumber\\ \le \mu^m\{\mathbf z:\sup_{f\in {\mathcal N}_1}|L^1_\mathbf z(f)|\ge\eta_1\}\nonumber\\ +\sum_{j\in(1,J]} N_j\sup_{f\in W}\mu^m \{\mathbf z:|L^1_\mathbf z(f)-L^1_\mathbf z(A_{j-1}(f))|\ge\eta_j\}.
\end{eqnarray} Applying Proposition \ref{P3.1} we obtain $$ \sup_{f\in W} \mu^m\{\mathbf z:|L^1_\mathbf z(f)-L^1_\mathbf z(A_{j-1}(f))|\ge \eta_j\} \le 2\exp\left(-\frac{m\eta_j^2}{16\delta_{j-1}}\right). $$ We now make further estimates for a specific $m=C_1(d)|Q|n^{7/2}$ with large enough $C_1(d)$. For $j$ such that $2^j\le 2|Q|$ we obtain from the definition of $\delta_j$ $$ \frac{m\eta_j^2}{\delta_{j-1}} \ge \frac{C_1(d)n^{3/2}2^{j-1}}{C_5(d)n^{3/2} } \ge \frac{C_1(d)}{2C_5(d)}2^j. $$ By our choice of $\delta_j=\varepsilon_{2^j}$ we get $N_j\le 2^{2^j} <e^{2^j}$ and, therefore, \begin{equation}\label{3.7} N_j\exp\left(-\frac{m\eta_j^2}{16\delta_{j-1}}\right)\le \exp(-2^j) \end{equation} for sufficiently large $C_1(d)$. In the case $2^j\in (2|Q|, 2^J]$ we have $$ \frac{m\eta_j^2}{\delta_{j-1}} \ge \frac{C_1(d)|Q|n^{3/2}}{C_6(d)n^{3/2}2^{-2^{j-1}/(2|Q|)}} \ge \frac{C_1(d)}{C_7(d)}2^j $$ and \begin{equation}\label{3.8} N_j\exp\left(-\frac{m\eta_j^2}{16\delta_{j-1}}\right)\le \exp(-2^j) \end{equation} for sufficiently large $C_1(d)$. We now estimate $\mu^m\{\mathbf z:\sup_{f\in {\mathcal N}_1}|L^1_\mathbf z(f)|\ge\eta_1\}$. We use Lemma \ref{L3.1} with $g_j(\mathbf z) = |f(\mathbf x^j)|-\|f\|_1$. To estimate $\|g_j\|_\infty$ it is sufficient to use the following trivial Nikol'skii-type inequality for the trigonometric polynomials: \begin{equation}\label{3.9} \|f\|_\infty \le |Q| \|f\|_1,\qquad f\in \mathcal T(Q). \end{equation} Then Lemma \ref{L3.1} gives $$ \mu^m\{\mathbf z:\sup_{f\in {\mathcal N}_1}|L^1_\mathbf z(f)|\ge\eta_1\}\le N_1\exp\left(-\frac{m\eta_1^2}{C|Q|}\right) \le 1/4 $$ for sufficiently large $C_1(d)$. Substituting the above estimates into (\ref{3.6'}) we obtain $$ \mu^m\{\mathbf z:\sup_{f\in W}|L^1_\mathbf z(f)|\ge1/4\} <1. $$ Therefore, there exists $\mathbf z_0=(\xi^1,\dots,\xi^m)$ such that for any $f\in W$ we have $$ |L^1_{\mathbf z_0}(f)| \le 1/4. 
$$ Taking into account that $\|f\|_1=1/2$ for $f\in W$ we obtain the statement of Theorem \ref{T3.3} with $C_2 =1/2$, $C_3=3/2$. \end{proof} In the above proof of Theorem \ref{T3.3} we specified $\eta =1/4$. If instead we take $\eta \in [2^{-2^{nd/2}},1/4]$, define $J(\eta)$ to be the minimal $j$ satisfying $\delta_j \le \eta/4$ and set $$ \eta_j := \frac{\eta}{4nd}, $$ then we obtain the following generalization of Theorem \ref{T3.3}. \begin{Theorem}\label{T3.3e} For any $Q\subset \Pi(\mathbf N)$ with $\mathbf N=(2^n,\dots,2^n)$ and $\epsilon \in [2^{1-2^{nd/2}},1/2]$ there exists a set of $m \le C_1(d)|Q|n^{7/2}\epsilon^{-2}$ points $\xi^j\in \mathbb T^d$, $j=1,\dots,m$ such that for any $f\in \mathcal T(Q)$ we have $$ (1-\epsilon)\|f\|_1 \le \frac{1}{m}\sum_{j=1}^m |f(\xi^j)| \le (1+\epsilon)\|f\|_1. $$ \end{Theorem} \section{Some Marcinkiewicz-type discretization theorems for general polynomials} \label{MX} In this section we extend the technique developed in Sections \ref{entropy} and \ref{M} to the case of a general orthonormal system $\{u_i\}_{i=1}^N$ on a compact $\Omega \subset {\mathbb R}^d$, which satisfies conditions {\bf A}, {\bf B}, and {\bf C} from the Introduction. Let $\mu$ be a probability measure on $\Omega$. It is convenient for us to assume that $u_i$, $i=1,\dots,N$, are real functions and denote $$ \<u,v\> := \int_\Omega u(\mathbf x)v(\mathbf x) d\mu, \quad \|u\|_2 := \<u,u\>^{1/2}. $$ Denote the unit $L_p$ ball in $X_N$ by $$ X^p_N :=\{f\in X_N: \|f\|_p\le 1\}. $$ We begin with the estimates of the entropy numbers $\varepsilon_k(X^1_N,L_\infty)$. 
We use the same strategy as above: first we get bounds on $m$-term approximations for $X^1_N$ in $L_2$ with respect to a dictionary ${\mathcal D}^1$, second we obtain bounds on $m$-term approximations for $X^2_N$ in $L_\infty$ with respect to a dictionary ${\mathcal D}^2$, third we get bounds on $m$-term approximations for $X^1_N$ in $L_\infty$ with respect to a dictionary ${\mathcal D}^3={\mathcal D}^1\cup {\mathcal D}^2$. Then we apply Theorem \ref{T2.1} to obtain the entropy numbers estimates. \subsection{Sparse approximation in $L_2$} We begin with the study of $m$-term approximations with respect to the dictionary $$ {\mathcal D}^0:= \{g_\mathbf y(\mathbf x)\}_{\mathbf y\in\Omega},\quad g_\mathbf y(\mathbf x):= (K_2N)^{-1/2}{\mathcal D}_N(\mathbf x,\mathbf y), $$ where $$ {\mathcal D}_N(\mathbf x,\mathbf y) := \sum_{i=1}^N u_i(\mathbf x)u_i(\mathbf y) $$ is the Dirichlet kernel for the system $\{u_i\}_{i=1}^N$. Then assumption {\bf B} guarantees that $\|g_\mathbf y\|_2 \le 1$. We now use the following greedy-type algorithm (see \cite{Tbook}, p.82). {\bf Relaxed Greedy Algorithm (RGA).} Let $f^r_0:=f$ and $G_0^r(f):= 0$. For a function $h$ from a real Hilbert space $H$, let $g=g(h)$ denote the function from ${\mathcal D}^\pm:=\{\pm g:g\in{\mathcal D}\}$ that maximizes $\langle h,g\rangle$ (we assume the existence of such an element). Then, for each $m\ge 1$, we inductively define $$ G_m^r(f):= \left(1-\frac{1}{m}\right)G_{m-1}^r(f)+\frac{1}{m}g(f_{m-1}^r), \quad f_m^r:= f-G_m^r(f). $$ We use the following known result (see \cite{Tbook}, p.90). \begin{Theorem}\label{T4.1} For the Relaxed Greedy Algorithm we have, for each $f\in A_1({\mathcal D}^\pm)$, the estimate $$ \|f-G_m^r(f)\|\le \frac{2}{\sqrt{m}},\quad m\ge 1. $$ \end{Theorem} In our application of the above RGA the Hilbert space $H$ is $X_N$ with the $L_2$ norm, and the dictionary ${\mathcal D}$ is the dictionary ${\mathcal D}^0$ defined above.
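To make the mechanics of the RGA concrete, here is a minimal numerical sketch in a finite-dimensional Hilbert space. The random finite dictionary below is an assumption made only for illustration (the dictionary ${\mathcal D}^0$ of our application is a continuous family of normalized kernels); the sketch checks the rate $2/\sqrt{m}$ guaranteed by Theorem \ref{T4.1}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite dictionary of unit vectors in R^20 (illustration only).
n, K = 20, 60
D = rng.standard_normal((K, n))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# Target f in A_1(D^{+-}): a convex combination of signed dictionary elements.
w = rng.random(K)
w /= w.sum()
signs = rng.choice([-1.0, 1.0], size=K)
f = (w[:, None] * signs[:, None] * D).sum(axis=0)

# Relaxed Greedy Algorithm: G_m = (1 - 1/m) G_{m-1} + g(f_{m-1}) / m,
# where g(h) maximizes <h, g> over D^{+-} = {+-g : g in D}.
G = np.zeros(n)
errors = []
for m in range(1, 401):
    ip = D @ (f - G)              # inner products <f_{m-1}, g> for g in D
    k = int(np.argmax(np.abs(ip)))
    g = np.sign(ip[k]) * D[k]     # best element of D^{+-}
    G = (1.0 - 1.0 / m) * G + g / m
    errors.append(np.linalg.norm(f - G))

# Theorem T4.1: ||f - G_m|| <= 2 / sqrt(m) for every f in A_1(D^{+-}).
assert all(e <= 2.0 / np.sqrt(m + 1) for m, e in enumerate(errors))
```

The bound holds for every iteration, with no hidden constants, which is what makes the RGA convenient in the constructive arguments below.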
Using the representation \begin{equation}\label{4.4} f(\mathbf x) = \int_{\Omega} f(\mathbf y) {\mathcal D}_N(\mathbf x,\mathbf y)d\mu(\mathbf y) \end{equation} we see that the search for $g\in({\mathcal D}^0)^\pm$ maximizing $\<h,g\>$, $h\in X_N$, is equivalent to the search for $\mathbf y\in\Omega$ maximizing $|h(\mathbf y)|$. A function $h$ from $X_N$ is continuous on the compact $\Omega$ and, therefore, such a maximizing $\mathbf y_{\max}$ exists. This means that we can run the RGA. For $f\in X^1_N$, by representation (\ref{4.4}) we obtain $$ f(\mathbf x) = \int_{\Omega} f(\mathbf y) {\mathcal D}_N(\mathbf x,\mathbf y)d\mu(\mathbf y) $$ $$ = (K_2N)^{1/2}\int_{\Omega} f(\mathbf y) (K_2N)^{-1/2}{\mathcal D}_N(\mathbf x,\mathbf y)d\mu(\mathbf y). $$ Therefore, $$ (K_2N)^{-1/2} f \in A_1(({\mathcal D}^0)^\pm),\quad \text{or}\quad f\in A_1(({\mathcal D}^0)^\pm,(K_2N)^{1/2}). $$ Applying Theorem \ref{T4.1} we get the following result. \begin{Theorem}\label{T4.2} For the Relaxed Greedy Algorithm with respect to ${\mathcal D}^0$ we have, for each $f\in X^1_N$, the estimate $$ \|f-G_m^r(f)\|\le 2(K_2N/m)^{1/2},\quad m\ge 1. $$ \end{Theorem} We need an analog of Theorem \ref{T4.2} for a discrete version of ${\mathcal D}^0$. Take a $\delta>0$ and let $\{\mathbf y^1,\dots,\mathbf y^M\}$, $M=M(\delta)$, be a $\delta$-net of points in $\Omega$, which means that for any $\mathbf y\in \Omega$ there is a $\mathbf y^j$ from the net such that $\|\mathbf y-\mathbf y^j\|_\infty\le \delta$. It is clear that \begin{equation}\label{4.5} M(\delta) \le (C(\Omega)/\delta)^d. \end{equation} It follows from the definition of the RGA that $G_m^r(f)\in A_1({\mathcal D}^\pm)$ provided $f\in A_1({\mathcal D}^\pm)$. Let $f\in X^1_N$ and let $G_m^r(f)$ be its approximant from Theorem \ref{T4.2}. Then \begin{equation}\label{4.6} G_m^r(f) = \sum_{k=1}^m c_kg_{\mathbf y(k)},\quad \sum_{k=1}^m |c_k| \le (K_2N)^{1/2}.
\end{equation} For each $\mathbf y(k)$ find $\mathbf y^{j(k)}$ from the net such that $\|\mathbf y(k)-\mathbf y^{j(k)}\|_\infty \le \delta$. Then, using assumption {\bf A} we get $$ \|g_{\mathbf y(k)}-g_{\mathbf y^{j(k)}}\|_2^2 = (K_2N)^{-1}\sum_{i=1}^N |u_i(\mathbf y(k))-u_i(\mathbf y^{j(k)})|^2 $$ \begin{equation}\label{4.7} \le (K_2N)^{-1} K_1^2 N^{1+2\beta} \delta^{2\alpha}. \end{equation} Denote $$ t_m(f) := \sum_{k=1}^m c_kg_{\mathbf y^{j(k)}}. $$ Combining (\ref{4.6}) and (\ref{4.7}) we obtain \begin{equation}\label{4.8} \|G_m^r(f)-t_m(f)\|_2 \le (K_2N)^{1/2}K_2^{-1/2} K_1N^\beta\delta^\alpha. \end{equation} Choosing $\delta=\delta_0$ such that $$ \delta_0^\alpha = K_1^{-1}N^{-1/2-\beta} $$ we obtain by Theorem \ref{T4.2} and (\ref{4.8}) that for $f\in X^1_N$ \begin{equation}\label{4.9} \|f-t_m(f)\|_2 \le 3(K_2N/m)^{1/2},\quad m\le N. \end{equation} Inequality (\ref{4.5}) gives \begin{equation}\label{4.10} M(\delta_0) \le C(K_1,\Omega,d) N^{c(\alpha,\beta,d)}. \end{equation} Define the dictionary ${\mathcal D}^1$ as follows $$ {\mathcal D}^1 := \{g_{\mathbf y^j}\}_{j=1}^M. $$ Relation (\ref{4.9}) gives us the following theorem. \begin{Theorem}\label{T4.3} We have $$ \sigma_m(X^1_N,{\mathcal D}^1)_2 \le 3(K_2N/m)^{1/2}. $$ \end{Theorem} \subsection{Sparse approximation in $L_\infty$} In this subsection we study $m$-term approximations of $f\in X^2_N$ in the $L_\infty$ norm with respect to the following dictionary $$ {\mathcal D}^2 := \{\pm g_i\}_{i=1}^N,\quad g_i:= u_i K_2^{-1/2}. $$ Then by property {\bf B} for all $p$ we have $\|g_i\|_p \le 1$. In this subsection we use greedy algorithms in Banach spaces. We recall some notation from the theory of greedy approximation in Banach spaces. The reader can find a systematic presentation of this theory in \cite{Tbook}, Chapter 6. Let $X$ be a Banach space with norm $\|\cdot\|$.
We say that a set of elements (functions) ${\mathcal D}$ from $X$ is a dictionary if each $g\in {\mathcal D}$ has norm less than or equal to one ($\|g\|\le 1$) and the closure of $\operatorname{span} {\mathcal D}$ coincides with $X$. We note that in \cite{T8} we required in the definition of a dictionary the normalization of its elements ($\|g\|=1$). However, it is pointed out in \cite{T12} that it is easy to check that the arguments from \cite{T8} work under the assumption $\|g\|\le 1$ instead of $\|g\|=1$. In applications it is more convenient for us to assume $\|g\|\le 1$ than to require normalization of the dictionary. For an element $f\in X$ we denote by $F_f$ a norming (peak) functional for $f$: $$ \|F_f\| =1,\qquad F_f(f) =\|f\|. $$ The existence of such a functional is guaranteed by the Hahn-Banach theorem. We proceed to the Incremental Greedy Algorithm (see \cite{T12} and \cite{Tbook}, Chapter 6). Let $\epsilon=\{\epsilon_n\}_{n=1}^\infty $, $\epsilon_n> 0$, $n=1,2,\dots$ . For a Banach space $X$ and a dictionary ${\mathcal D}$ define the following algorithm IA($\epsilon$) $:=$ IA($\epsilon,X,{\mathcal D}$). {\bf Incremental Algorithm with schedule $\epsilon$ (IA($\epsilon,X,{\mathcal D}$)).} Denote $f_0^{i,\epsilon}:= f$ and $G_0^{i,\epsilon} :=0$. Then, for each $m\ge 1$ we have the following inductive definition. (1) $\varphi_m^{i,\epsilon} \in {\mathcal D}$ is any element satisfying $$ F_{f_{m-1}^{i,\epsilon}}(\varphi_m^{i,\epsilon}-f) \ge -\epsilon_m. $$ (2) Define $$ G_m^{i,\epsilon}:= (1-1/m)G_{m-1}^{i,\epsilon} +\varphi_m^{i,\epsilon}/m. $$ (3) Let $$ f_m^{i,\epsilon} := f- G_m^{i,\epsilon}. $$ We consider here approximation in uniformly smooth Banach spaces. For a Banach space $X$ we define the modulus of smoothness $$ \rho(u) := \sup_{\|x\|=\|y\|=1}(\frac{1}{2}(\|x+uy\|+\|x-uy\|)-1). $$ A uniformly smooth Banach space is one with the property $$ \lim_{u\to 0}\rho(u)/u =0.
$$ It is well known (see, for instance, \cite{DGDS}, Lemma B.1) that in the case $X=L_p$, $1\le p < \infty$ we have \begin{equation}\label{t6.2} \rho(u) \le \begin{cases} u^p/p & \text{if}\quad 1\le p\le 2 ,\\ (p-1)u^2/2 & \text{if}\quad 2\le p<\infty. \end{cases} \end{equation} Denote by $A_1({\mathcal D}):=A_1({\mathcal D})(X)$ the closure in $X$ of the convex hull of ${\mathcal D}$. In order to be able to run the IA($\epsilon$) for all iterations we need the existence of an element $\varphi_m^{i,\epsilon} \in {\mathcal D}$ at step (1) of the algorithm for all $m$. It is clear that the following condition guarantees such existence (see \cite{VT149}). {\bf Condition B.} We say that for a given dictionary ${\mathcal D}$ an element $f$ satisfies Condition B if for all $F\in X^*$ we have $$ F(f) \le \sup_{g\in{\mathcal D}}F(g). $$ It is well known (see, for instance, \cite{Tbook}, p.343) that any $f\in A_1({\mathcal D})$ satisfies Condition B. For completeness we give this simple argument here. Take any $f\in A_1({\mathcal D})$. Then for any $\epsilon >0$ there exist $g_1^\epsilon,\dots,g_N^\epsilon \in {\mathcal D}$ and numbers $a_1^\epsilon,\dots,a_N^\epsilon$ such that $a_i^\epsilon>0$, $a_1^\epsilon+\dots+a_N^\epsilon = 1$ and $$ \|f-\sum_{i=1}^Na_i^\epsilon g_i^\epsilon\| \le \epsilon. $$ Thus $$ F(f) \le \|F\|\epsilon + F(\sum_{i=1}^Na_i^\epsilon g_i^\epsilon) \le \epsilon \|F\| +\sup_{g\in {\mathcal D}} F(g) $$ which proves Condition B. We note that Condition B is equivalent to the property $f\in A_1({\mathcal D})$. Indeed, as we showed above, the property $f\in A_1({\mathcal D})$ implies Condition B. Let us show that Condition B implies that $f\in A_1({\mathcal D})$. Assuming the contrary, $f\notin A_1({\mathcal D})$, by the separation theorem for convex bodies we find $F\in X^*$ such that $$ F(f) > \sup_{\phi\in A_1({\mathcal D})} F(\phi) \ge \sup_{g\in{\mathcal D}}F(g) $$ which contradicts Condition B.
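The implication $f\in A_1({\mathcal D})\Rightarrow$ Condition B is easy to observe numerically. The following sketch works with a hypothetical random dictionary in $\mathbb R^8$ (chosen only for illustration) and tests $F(f)\le\sup_{g\in{\mathcal D}}F(g)$ for many random linear functionals $F$; in $\mathbb R^n$ a functional is represented by a vector, $F(h)=\langle F,h\rangle$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dictionary in R^8 (illustration only): elements of norm <= 1.
n, K = 8, 15
D = rng.standard_normal((K, n))
D /= np.maximum(np.linalg.norm(D, axis=1, keepdims=True), 1.0)

# f in A_1(D): a convex combination of dictionary elements.
a = rng.random(K)
a /= a.sum()
f = a @ D

# Condition B: F(f) = sum_i a_i F(g_i) <= max_i F(g_i) for every functional F.
for _ in range(500):
    F = rng.standard_normal(n)
    assert F @ f <= (D @ F).max() + 1e-12
```

The check is just convexity: a convex combination of values never exceeds their maximum, which is exactly the argument given above.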
We formulate results on the IA($\epsilon$) in terms of Condition B because in the applications it is easy to check Condition B. \begin{Theorem}\label{T4.4} Let $X$ be a uniformly smooth Banach space with modulus of smoothness $\rho(u)\le \gamma u^q$, $1<q\le 2$. Define $$ \epsilon_n := \beta\gamma ^{1/q}n^{-1/p},\qquad p=\frac{q}{q-1},\quad n=1,2,\dots . $$ Then, for every $f$ satisfying Condition B we have $$ \|f_m^{i,\epsilon}\| \le C(\beta) \gamma^{1/q}m^{-1/p},\qquad m=1,2\dots. $$ \end{Theorem} In the case $f\in A_1({\mathcal D})$ this theorem is proved in \cite{T12} (see also \cite{Tbook}, Chapter 6). As we mentioned above Condition B is equivalent to $f\in A_1({\mathcal D})$. For $f\in X_N$ write $f=\sum_{i=1}^N c_ig_i$ and define $$ \|f\|_A := \sum_{i=1}^N |c_i|. $$ \begin{Theorem}\label{T4.5} Assume that $X_N$ satisfies {\bf C}. For any $t\in X_N$ the IA($\epsilon,X_N\cap L_p,{\mathcal D}^2$) with an appropriate $p$ and schedule $\epsilon$, applied to $f:=t/\|t\|_A$, provides after $m$ iterations an $m$-term polynomial $G_m(t):=G^{i,\epsilon}_m(f)\|t\|_A$ with the following approximation property $$ \|t-G_m(t)\|_\infty \le Cm^{-1/2}(\ln N)^{1/2}\|t\|_A, \quad \|G_m(t)\|_A =\|t\|_A, $$ with a constant $C=C(K_3,K_4)$. \end{Theorem} \begin{proof} It is clear that it is sufficient to prove Theorem \ref{T4.5} for $t\in X_N$ with $\|t\|_A =1$. Then $t\in A_1({\mathcal D}^2)(X_N\cap L_p)$ for all $p\in [2,\infty)$. Applying the IA($\epsilon$) to $f$ with respect to ${\mathcal D}^2$ we obtain by Theorem \ref{T4.4} after $m$ iterations \begin{equation}\label{4.12} \| t -\sum_{j\in \Lambda} \frac{a_j}{m}g_j\|_p \le C\gamma^{1/2}m^{-1/2},\qquad \sum_{j\in\Lambda}a_j = m, \end{equation} where $\sum_{j\in \Lambda} \frac{a_j}{m}g_j$ is the $G^{i,\epsilon}_m(t)$. By (\ref{t6.2}) we find $\gamma \le p/2$. 
Next, by the Nikol'skii inequality from assumption {\bf C} we get from (\ref{4.12}) $$ \| t -\sum_{j\in \Lambda} \frac{a_j}{m}g_j\|_\infty \le CN^{K_4/p}\| t -\sum_{j\in \Lambda} \frac{a_j}{m}g_j\|_p \le Cp^{1/2}N^{K_4/p}m^{-1/2}. $$ Choosing $p\asymp \ln N$ we obtain the bound claimed in Theorem \ref{T4.5}. \end{proof} Using the following simple relations $$ \|f\|_2^2 = \|\sum_{i=1}^N c_ig_i\|_2^2 = \|K_2^{-1/2}\sum_{i=1}^N c_iu_i\|_2^2 = K_2^{-1}\sum_{i=1}^N |c_i|^2, $$ $$ \sum_{i=1}^N |c_i| \le N^{1/2} \left(\sum_{i=1}^N |c_i|^2\right)^{1/2} = (K_2N)^{1/2}\|f\|_2 $$ we obtain from Theorem \ref{T4.5} the following estimate. \begin{Theorem}\label{T4.6} We have $$ \sigma_m(X^2_N,{\mathcal D}^2)_\infty \ll (N/m)^{1/2} (\ln N)^{1/2}. $$ \end{Theorem} Combining Theorems \ref{T4.3} and \ref{T4.6} we obtain the following. \begin{Theorem}\label{T4.7} We have $$ \sigma_m(X^1_N,{\mathcal D}^1\cup{\mathcal D}^2)_\infty \ll (N/m) (\ln N)^{1/2}. $$ \end{Theorem} \subsection{The entropy numbers} By our construction (see (\ref{4.10})) we obtain $$ |{\mathcal D}^1\cup{\mathcal D}^2| \ll N^{c(\alpha,\beta,d)}. $$ Theorem \ref{T4.7}, Theorem \ref{T2.1}, and Remark \ref{R2.1} (its version for the real case) imply the following result on the entropy numbers. \begin{Theorem}\label{T4.8} Suppose that a real orthonormal system $\{u_i\}_{i=1}^N$ satisfies conditions {\bf A}, {\bf B}, and {\bf C}. Then we have $$ \varepsilon_k(X^1_N,L_\infty) \ll \left\{\begin{array}{ll} (\log N)^{3/2}(N/k), &\quad k\le N,\\ (\log N)^{3/2}2^{-k/N},&\quad k\ge N.\end{array} \right. $$ \end{Theorem} In the same way as Theorem \ref{T3.3} was derived from Theorem \ref{T2.6}, the following Theorem \ref{T4.9} can be derived from Theorem \ref{T4.8}. \begin{Theorem}\label{T4.9} Suppose that a real orthonormal system $\{u_i\}_{i=1}^N$ satisfies conditions {\bf A}, {\bf B}, and {\bf C}.
Then there exists a set of $m \le C_1N(\log N)^{7/2}$ points $\xi^j\in \Omega$, $j=1,\dots,m$, $C_1=C(d,K_1,K_2,K_3,K_4,\Omega,\alpha,\beta)$, such that for any $f\in X_N$ we have $$ C_2\|f\|_1 \le \frac{1}{m}\sum_{j=1}^m |f(\xi^j)| \le C_3\|f\|_1. $$ \end{Theorem} The following analog of Theorem \ref{T3.3e} holds for general systems. \begin{Theorem}\label{T4.9e} Suppose that a real orthonormal system $\{u_i\}_{i=1}^N$ satisfies conditions {\bf A}, {\bf B}, and {\bf C}. Then for $\epsilon \in [2^{-N},1/2]$ there exists a set of \newline $m \le C_1N(\log N)^{7/2}\epsilon^{-2}$ points $\xi^j\in \Omega$, $j=1,\dots,m$, \newline$C_1=C(d,K_1,K_2,K_3,K_4,\Omega,\alpha,\beta)$, such that for any $f\in X_N$ we have $$ (1-\epsilon)\|f\|_1 \le \frac{1}{m}\sum_{j=1}^m |f(\xi^j)| \le (1+\epsilon)\|f\|_1. $$ \end{Theorem} \subsection{Conditional theorem} We already pointed out in the proof of Theorem \ref{T3.3} that the only special properties of the subspace $\mathcal T(Q)$, which we used in the proof of Theorem \ref{T3.3}, were stated in Theorem \ref{T2.6} on the entropy numbers $\varepsilon_k(\mathcal T(Q)_1,L_\infty)$. Similarly, in Section \ref{MX} above we used assumptions {\bf A}, {\bf B}, and {\bf C} to prove (constructively) Theorem \ref{T4.8} on the entropy numbers $\varepsilon_k(X_N^1,L_\infty)$ and, then, derived from it Theorem \ref{T4.9}. This encourages us to formulate the following conditional result. \begin{Theorem}\label{T4.10} Suppose that a real $N$-dimensional subspace $X_N$ satisfies the condition $(B\ge 1)$ $$ \varepsilon_k(X^1_N,L_\infty) \le B\left\{\begin{array}{ll} N/k, &\quad k\le N,\\ 2^{-k/N},&\quad k\ge N.\end{array} \right. $$ Then there exists a set of $m \le C_1NB(\log_2(2N\log_2(8B)))^2$ points $\xi^j\in \Omega$, $j=1,\dots,m$, with large enough absolute constant $C_1$, such that for any $f\in X_N$ we have $$ \frac{1}{2}\|f\|_1 \le \frac{1}{m}\sum_{j=1}^m |f(\xi^j)| \le \frac{3}{2}\|f\|_1. 
$$ \end{Theorem} \section{The Marcinkiewicz-type theorem in $L_2$} \label{L2} In this section we discuss some known results directly connected with the discretization theorems and demonstrate how recent results on random matrices can be used to obtain the Marcinkiewicz-type theorem in $L_2$. We begin with the formulation of Rudelson's result from \cite{Rud}. In the paper \cite{Rud} it is formulated in terms of submatrices of an orthogonal matrix. We reformulate it in our notation. Let $\Omega_M=\{x^j\}_{j=1}^M$ be a discrete set with the probability measure $\mu(x^j)=1/M$, $j=1,\dots,M$. Assume that $\{u_i(x)\}_{i=1}^N$ is a real system orthonormal on $\Omega_M$ satisfying the following condition: for all $j$ \begin{equation}\label{5.1} \sum_{i=1}^Nu_i(x^j)^2 \le Nt^2 \end{equation} with some $t\ge 1$. Then for every $\epsilon>0$ there exists a set $J\subset \{1,\dots,M\}$ of indices with cardinality \begin{equation}\label{5.1a} m:=|J| \le C\frac{t^2}{\epsilon^2}N\log\frac{Nt^2}{\epsilon^2} \end{equation} such that for any $f=\sum_{i=1}^N c_iu_i$ we have $$ (1-\epsilon)\|f\|_2^2 \le \frac{1}{m} \sum_{j\in J} f(x^j)^2 \le (1+\epsilon)\|f\|_2^2. $$ In particular, the above result implies that for any orthonormal system $\{u_i\}_{i=1}^N$ on $\Omega_M$, satisfying (\ref{5.1}) we have $$ {\mathcal U}_N := \operatorname{span}(u_1,\dots,u_N) \in \mathcal M(m,2)\quad \text{provided}\quad m\ge CN\log N $$ with large enough $C$. We note that (\ref{5.1}) is satisfied if the system $\{u_i\}_{i=1}^N$ is uniformly bounded: $\|u_i\|_\infty \le t$, $i=1,\dots,N$. We first demonstrate how the Bernstein-type concentration inequality for matrices can be used to prove an analog of the above result of Rudelson for a general $\Omega$. Our proof is based on a different idea than Rudelson's proof. Let $\{u_i\}_{i=1}^N$ be an orthonormal system on $\Omega$, satisfying the condition {\bf D.} For $x\in \Omega$ we have \begin{equation}\label{5.2a} w(x):=\sum_{i=1}^N u_i(x)^2 = N.
\end{equation} With each $x\in\Omega$ we associate the matrix $G(x) := [u_i(x)u_j(x)]_{i,j=1}^N$. Clearly, $G(x)$ is a symmetric matrix. We will also need the matrix $G(x)^2$. We have for the $(k,l)$ element of $G(x)^2$ $$ (G(x)^2)_{k,l} = \sum_{j=1}^N u_k(x)u_j(x)u_j(x)u_l(x) = w(x)u_k(x)u_l(x). $$ Therefore, \begin{equation}\label{5.2} G(x)^2 = w(x)G(x)\quad \text{and}\quad \|G(x)\| = w(x). \end{equation} We use the following Bernstein-type concentration inequality for matrices (see \cite{Tro12}). \begin{Theorem}\label{T5.1} Let $\{T_k\}_{k=1}^n$ be a sequence of independent random symmetric $N\times N$ matrices. Assume that each $T_k$ satisfies: $$ \mathbb E(T_k) =0 \quad \text{and}\quad \|T_k\| \le R \quad \text{almost surely}. $$ Then for all $\eta\ge 0$ $$ \mathbb P\left\{\left\|\sum_{k=1}^n T_k \right\|\ge \eta\right\} \le N\exp\left(-\frac{\eta^2}{2\sigma^2 +(2/3)R\eta}\right) $$ where $\sigma^2 := \left\|\sum_{k=1}^n \mathbb E(T_k^2)\right\|$. \end{Theorem} We now consider a sequence $T_k := G(x^k)-I$, $k=1,\dots,m$ of independent random symmetric matrices. Orthonormality of the system $\{u_i\}_{i=1}^N$ implies that $\mathbb E(T_k)=0$ for all $k$. Relation (\ref{5.2}) and our assumption {\bf D} imply for all $k$ \begin{equation}\label{5.3} \|T_k\|\le \|G(x^k)\| +1 = N+1=: R. \end{equation} Denote $T(x):= G(x)-I$ and, using (\ref{5.2a}) and (\ref{5.2}), represent $$ T(x)^2 = G(x)^2 -2G(x) +I = (N-2)G(x) +I. $$ Then by the orthonormality of the system $\{u_i\}_{i=1}^N$ we get $$ \mathbb E(T(x)^2) = (N-1)I $$ and, therefore, we obtain \begin{equation}\label{5.6} \|\mathbb E(T^2)\| \le N-1. \end{equation} Thus, by Theorem \ref{T5.1} we obtain for $\eta \le 1$ \begin{equation}\label{5.8} \mathbb P\left\{\left\|\sum_{k=1}^n (G(x^k)-I) \right\|\ge n\eta\right\} \le N\exp\left(-\frac{n\eta^2}{cN}\right) \end{equation} with an absolute constant $c$. 
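The concentration phenomenon in (\ref{5.8}) is easy to observe numerically. Below is a minimal sketch (an illustration under stated assumptions, not part of the proof) for the real trigonometric system $\{1,\sqrt2\cos kx,\sqrt2\sin kx\}$, which is orthonormal with respect to $dx/(2\pi)$ on $[0,2\pi)$ and satisfies condition {\bf D}: for $m$ of order $N\log N$ independent uniform points, the empirical matrix $\frac1m\sum_k G(x^k)$ is close to $I$ in the spectral norm.

```python
import numpy as np

rng = np.random.default_rng(2)

def trig_system(x, N):
    """Real trigonometric system {1, sqrt(2)cos kx, sqrt(2)sin kx},
    orthonormal w.r.t. dx/(2*pi) on [0, 2*pi); here N is odd."""
    cols = [np.ones_like(x)]
    k = 1
    while len(cols) < N:
        cols.append(np.sqrt(2.0) * np.cos(k * x))
        cols.append(np.sqrt(2.0) * np.sin(k * x))
        k += 1
    return np.stack(cols, axis=1)          # shape (m, N), rows u(x)^T

N = 9
m = 2000                                   # of order N log N, generous constant
x = rng.uniform(0.0, 2.0 * np.pi, size=m)
U = trig_system(x, N)

# Condition D: w(x) = sum_i u_i(x)^2 = N at every point.
assert np.allclose((U * U).sum(axis=1), N)

# (1/m) sum_k G(x^k) = U^T U / m concentrates around the identity matrix.
M = U.T @ U / m
eta = np.linalg.norm(M - np.eye(N), ord=2)
# For every f = sum_i b_i u_i the quadratic-form identity then gives
# |m^{-1} sum_k f(x^k)^2 - ||f||_2^2| <= eta * ||b||_2^2, so a small eta
# yields a Marcinkiewicz-type discretization of the L_2 norm.
assert eta < 0.5
```

Here `eta` plays the role of $\epsilon$ in the resulting two-sided discretization inequality; increasing `m` shrinks it at the rate suggested by the exponential bound.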
For a set of points $\xi^k\in \Omega$, $k=1,\dots,m$, and $f=\sum_{i=1}^N b_iu_i$ we have $$ \frac{1}{m}\sum_{k=1}^m f(\xi^k)^2 - \int_\Omega f(x)^2 d\mu = {\mathbf b}^T\left(\frac{1}{m}\sum_{k=1}^m G(\xi^k)-I\right){\mathbf b}, $$ where ${\mathbf b} = (b_1,\dots,b_N)^T$ is the column vector. Therefore, \begin{equation}\label{5.8'} \left|\frac{1}{m}\sum_{k=1}^m f(\xi^k)^2 - \int_\Omega f(x)^2 d\mu \right| \le \left\|\frac{1}{m}\sum_{k=1}^m G(\xi^k)-I\right\|\|{\mathbf b}\|_2^2. \end{equation} We now take $m = [CN\eta^{-2}\log N]$ with large enough $C$. Then, using (\ref{5.8}) with $n=m$, we see that the corresponding probability is $<1$. Thus, we have proved the following theorem. \begin{Theorem}\label{T5.2} Let $\{u_i\}_{i=1}^N$ be an orthonormal system, satisfying condition {\bf D}. Then for every $\epsilon>0$ there exists a set $\{\xi^j\}_{j=1}^m \subset \Omega$ with $$ m \le C \epsilon^{-2}N\log N $$ such that for any $f=\sum_{i=1}^N c_iu_i$ we have $$ (1-\epsilon)\|f\|_2^2 \le \frac{1}{m} \sum_{j=1}^m f(\xi^j)^2 \le (1+\epsilon)\|f\|_2^2. $$ \end{Theorem} We note that Theorem \ref{T5.2} treats the special case when (\ref{5.2a}) holds instead of (\ref{5.1}). This is the case, for instance, for the trigonometric and the Walsh systems. In this special case Theorem \ref{T5.2} is more general and slightly stronger than the Rudelson theorem discussed at the beginning of this section. Theorem \ref{T5.2} provides the Marcinkiewicz-type discretization theorem for a general domain $\Omega$ instead of a discrete set $\Omega_M$. Also, in Theorem \ref{T5.2} we have $\log N$ instead of the $\log\frac{Nt^2}{\epsilon^2}$ in (\ref{5.1a}). Second, we demonstrate another way of proof, which allows us to replace condition {\bf D} by the following more general condition {\bf E}, which is similar to (\ref{5.1}). {\bf E.} There exists a constant $t$ such that \begin{equation}\label{5.2b} w(x):=\sum_{i=1}^N u_i(x)^2 \le Nt^2.
\end{equation} The new way of proof uses the fact that the matrix $G(x)$ is positive semi-definite. It is based on the following result (see \cite{Tro12}, Theorem 1.1) on random matrices. \begin{Theorem}\label{T5.3} Consider a finite sequence $\{T_k\}_{k=1}^m$ of independent, random, self-adjoint matrices with dimension $N$. Assume that each random matrix is positive semi-definite and satisfies $$ \lambda_{\max}(T_k) \le R\quad \text{almost surely}. $$ Define $$ s_{\min} := \lambda_{\min}\left(\sum_{k=1}^m \mathbb E(T_k)\right) \quad \text{and}\quad s_{\max} := \lambda_{\max}\left(\sum_{k=1}^m \mathbb E(T_k)\right). $$ Then $$ \mathbb P\left\{\lambda_{\min}\left(\sum_{k=1}^m T_k\right) \le (1-\eta)s_{\min}\right\} \le N\left(\frac{e^{-\eta}}{(1-\eta)^{1-\eta}}\right)^{s_{\min}/R} $$ for $\eta\in[0,1)$ and for $\eta\ge 0$ $$ \mathbb P\left\{\lambda_{\max}\left(\sum_{k=1}^m T_k\right) \ge (1+\eta)s_{\max}\right\} \le N\left(\frac{e^{\eta}}{(1+\eta)^{1+\eta}}\right)^{s_{\max}/R}. $$ \end{Theorem} As above, we consider the matrix $G(x) := [u_i(x)u_j(x)]_{i,j=1}^N$. Clearly, $G(x)$ is a symmetric matrix. Consider a sequence $T_k := G(x^k)$, $k=1,\dots,m$ of independent random symmetric matrices. It is easy to see that the $T_k$ are positive semi-definite. Orthonormality of the system $\{u_i\}_{i=1}^N$ implies that $\mathbb E(T_k)=I$ for all $k$. This implies that $s_{\min}=s_{\max} = m$. Relation (\ref{5.2}) and condition {\bf E} show that we can take $R:=Nt^2$. Then Theorem \ref{T5.3} implies for $\eta\le 1$ \begin{equation}\label{5.8a} \mathbb P\left\{\left\|\sum_{k=1}^m (G(x^k)-I) \right\|\ge m\eta\right\} \le N\exp\left(-\frac{m\eta^2}{ct^2N}\right) \end{equation} with an absolute constant $c$ (we can take $c=2/\ln 2$). Using inequality (\ref{5.8'}), which was used in the above proof of Theorem \ref{T5.2}, we derive from here the following theorem. \begin{Theorem}\label{T5.4} Let $\{u_i\}_{i=1}^N$ be an orthonormal system, satisfying condition {\bf E}.
Then for every $\epsilon>0$ there exists a set $\{\xi^j\}_{j=1}^m \subset \Omega$ with $$ m \le C\frac{t^2}{\epsilon^2}N\log N $$ such that for any $f=\sum_{i=1}^N c_iu_i$ we have $$ (1-\epsilon)\|f\|_2^2 \le \frac{1}{m} \sum_{j=1}^m f(\xi^j)^2 \le (1+\epsilon)\|f\|_2^2. $$ \end{Theorem} We note that Theorem \ref{T5.4} is more general and slightly stronger than the Rudelson theorem discussed at the beginning of this section. Theorem \ref{T5.4} provides the Marcinkiewicz-type discretization theorem for a general domain $\Omega$ instead of a discrete set $\Omega_M$. Also, in Theorem \ref{T5.4} we have $\log N$ instead of the $\log\frac{Nt^2}{\epsilon^2}$ in (\ref{5.1a}). {\bf The Marcinkiewicz theorem and sparse approximation.} Our above argument, in particular inequality (\ref{5.8'}), shows that the Marcinkiewicz-type discretization theorem in $L_2$ is closely related to the approximation of the identity matrix $I$ by an $m$-term approximant of the form $\frac{1}{m}\sum_{k=1}^m G(\xi^k)$ in the operator norm from $\ell^N_2$ to $\ell^N_2$ (spectral norm). Therefore, we can consider the following sparse approximation problem. Assume that the system $\{u_i(x)\}_{i=1}^N$ satisfies (\ref{5.2b}) and consider the dictionary $$ {\mathcal D}^u := \{g_x\}_{x\in\Omega},\quad g_x:= G(x)(Nt^2)^{-1},\quad G(x):=[u_i(x)u_j(x)]_{i,j=1}^N. $$ Then condition (\ref{5.2b}) guarantees that for the Frobenius norm of $g_x$ we have \begin{equation}\label{5.11} \|g_x\|_F = w(x)(Nt^2)^{-1} \le 1. \end{equation} Our assumption on the orthonormality of the system $\{u_i\}_{i=1}^N$ gives $$ I = \int_\Omega G(x)d\mu = Nt^2\int_\Omega g_x d\mu, $$ which implies that $I \in A_1({\mathcal D}^u,Nt^2)$.
Let the Hilbert space $H$ be the closure in the Frobenius norm of $\operatorname{span}\{g_x, x\in\Omega\}$ with the inner product generated by the Frobenius norm: for $A=[a_{i,j}]_{i,j=1}^N$ and $B=[b_{i,j}]_{i,j=1}^N$ $$ \<A,B\> = \sum_{i,j=1}^N a_{i,j}b_{i,j} $$ in the case of real matrices (with the standard modification in the complex case). By Theorem \ref{T4.1} for any $m\in {\mathbb{N}}$ we constructively find (by the RGA) points $\xi^1,\dots,\xi^m$ such that \begin{equation}\label{5.12} \left\|\frac{1}{m}\sum_{k=1}^m G(\xi^k)-I\right\|_F \le 2Nt^2 m^{-1/2}. \end{equation} Taking into account the inequality $\|A\|\le \|A\|_F$ we get from here and from (\ref{5.8'}) the following proposition. \begin{Proposition}\label{P5.1} Let $\{u_i\}_{i=1}^N$ be an orthonormal system, satisfying condition {\bf E}. Then one can constructively find a set $\{\xi^j\}_{j=1}^m \subset \Omega$ with $m\le C(t)N^2$ such that for any $f=\sum_{i=1}^N c_iu_i$ we have $$ \frac{1}{2}\|f\|_2^2 \le \frac{1}{m} \sum_{j=1}^m f(\xi^j)^2 \le \frac{3}{2}\|f\|_2^2. $$ \end{Proposition} {\bf The Marcinkiewicz-type theorem with weights.} We now comment on a recent breakthrough result by J. Batson, D.A. Spielman, and N. Srivastava \cite{BSS}. We formulate their result in our notation. Let, as above, $\Omega_M=\{x^j\}_{j=1}^M$ be a discrete set with the probability measure $\mu(x^j)=1/M$, $j=1,\dots,M$. Assume that $\{u_i(x)\}_{i=1}^N$ is a real system orthonormal on $\Omega_M$. Then for any number $d>1$ there exists a set of weights $w_j\ge 0$ with $|\{j: w_j\neq 0\}| \le dN$ such that for any $f\in \operatorname{span}\{u_1,\dots,u_N\}$ we have $$ \|f\|_2^2 \le \sum_{j=1}^M w_jf(x^j)^2 \le \frac{d+1+2\sqrt{d}}{d+1-2\sqrt{d}}\|f\|_2^2. $$ The proof of this result is based on a delicate study of the $m$-term approximation of the identity matrix $I$ with respect to the system ${\mathcal D} := \{G(x)\}_{x\in \Omega}$, $G(x):=[u_i(x)u_j(x)]_{i,j=1}^N$ in the spectral norm.
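The constructive $m$-term approximation of $I$ behind Proposition \ref{P5.1} can be sketched numerically (this is the RGA of Theorem \ref{T4.1} in the Frobenius inner product, not the algorithm of \cite{BSS}). The sketch below makes two simplifying assumptions for illustration: the real trigonometric system with condition {\bf D} (so $t=1$), and $\Omega$ replaced by an equispaced grid, on which $I/N$ is an exact convex combination of the dictionary elements $g_x=G(x)/N$ by exact quadrature.

```python
import numpy as np

def trig_system(x, N):
    """Real trigonometric system, orthonormal w.r.t. dx/(2*pi); N odd."""
    cols = [np.ones_like(x)]
    k = 1
    while len(cols) < N:
        cols.append(np.sqrt(2.0) * np.cos(k * x))
        cols.append(np.sqrt(2.0) * np.sin(k * x))
        k += 1
    return np.stack(cols, axis=1)

N = 5
grid = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)  # discretized Omega
U = trig_system(grid, N)                                   # rows u(x)^T

# Dictionary g_x = G(x)/N with G(x) = u(x) u(x)^T; target I/N lies in A_1(D).
target = np.eye(N) / N
G = np.zeros((N, N))
m_iters = 400
for m in range(1, m_iters + 1):
    R = target - G
    # <R, g_x>_F = u(x)^T R u(x) / N; pick the grid point (and sign)
    # maximizing the Frobenius inner product with the residual.
    vals = np.einsum('ij,jk,ik->i', U, R, U) / N
    k = int(np.argmax(np.abs(vals)))
    g = np.sign(vals[k]) * np.outer(U[k], U[k]) / N
    G = (1.0 - 1.0 / m) * G + g / m

# (5.12)-type bound with t = 1: ||I - N * G_m||_F <= 2 N / sqrt(m).
err = np.linalg.norm(np.eye(N) - N * G, 'fro')
assert err <= 2.0 * N / np.sqrt(m_iters)
```

Since $\|A\|\le\|A\|_F$, the resulting spectral-norm error feeds directly into the quadratic-form inequality (\ref{5.8'}), giving the two-sided bounds of Proposition \ref{P5.1}.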
The authors of \cite{BSS} control the change of the maximal and minimal eigenvalues of a matrix when a rank-one matrix of the form $wG(x)$ is added. Their proof provides an algorithm for constructing the weights $\{w_j\}$. In particular, this implies that $$ X_N(\Omega_M) \in \mathcal M^w(m,2,\epsilon)\quad \text{provided} \quad m \ge CN\epsilon^{-2} $$ with large enough $C$. In this section we discussed two deep general results -- the Rudelson theorem and the Batson-Spielman-Srivastava theorem -- about submatrices of orthogonal matrices, which provide very good Marcinkiewicz-type discretization theorems for $L_2$. The reader can find the corresponding historical comments in \cite{Rud}. We also refer the reader to the paper \cite{Ka} for a discussion of recent outstanding progress in the theory of submatrices of orthogonal matrices. \section{Discussion} As we pointed out in the Introduction, the main results of this paper are on the Marcinkiewicz-type discretization theorems in $L_1$. We proved here that under certain conditions on a subspace $X_N$ we can get the corresponding discretization theorems with the number of knots $m\ll N(\log N)^{7/2}$. This result falls short of the ideal case $m=N$ only by the factor $(\log N)^{7/2}$. We point out that the situation with the discretization theorems in the $L_\infty$ case is fundamentally different. A very nontrivial and surprising negative result was proved for the $L_\infty$ case (see \cite{KT3}, \cite{KT4}, and \cite{KaTe03}). The authors proved that a necessary condition for $\mathcal T(Q_n)\in\mathcal M(m,\infty)$ is $m\gg |Q_n|^{1+c}$ with an absolute constant $c>0$. Theorem \ref{T4.10} shows that an important ingredient of our technique of proving the Marcinkiewicz discretization theorems in $L_1$ consists in the study of the entropy numbers $\varepsilon_k(X^1_N,L_\infty)$. We note that this problem is nontrivial by itself. We demonstrate this with the example of trigonometric polynomials.
It is proved in \cite{VT156} that in the case $d=2$ we have \begin{equation}\label{6.1} \varepsilon_k(\mathcal T( Q_n)_1,L_\infty)\ll n^{1/2} \left\{\begin{array}{ll} (| Q_n|/k) \log (4| Q_n|/k), &\quad k\le 2| Q_n|,\\ 2^{-k/(2| Q_n|)},&\quad k\ge 2| Q_n|.\end{array} \right. \end{equation} The proof of estimate (\ref{6.1}) is based on an analog of the Small Ball Inequality for the trigonometric system, proved for a wavelet type system (see \cite{VT156}). This proof uses specifically two-dimensional features of the problem, and we do not know how to extend it to the case $d>2$. Estimate (\ref{6.1}) is used in the proof of the upper bounds in Theorem \ref{T2.7}. Theorem \ref{T2.7} gives the right order of the entropy numbers for the classes of mixed smoothness. This means that (\ref{6.1}) cannot be substantially improved. The trivial inequality $\log (4| Q_n|/k) \ll n$ shows that (\ref{6.1}) implies the following estimate \begin{equation}\label{6.2} \varepsilon_k(\mathcal T( Q_n)_1,L_\infty)\ll n^{3/2} \left\{\begin{array}{ll} | Q_n|/k , &\quad k\le 2| Q_n|,\\ 2^{-k/(2| Q_n|)},&\quad k\ge 2| Q_n|.\end{array} \right. \end{equation} Estimate (\ref{6.2}) is not as good as (\ref{6.1}) when applied to proving upper bounds for the entropy numbers of smoothness classes. For instance, instead of the bound in Theorem \ref{T2.7}, the use of (\ref{6.2}) gives $$ \varepsilon_k(\mathbf W^{a,b}_1,L_\infty) \ll k^{-a}(\log k)^{a+b+3/2}. $$ However, it turns out that in application to the Marcinkiewicz-type discretization theorems, estimates (\ref{6.1}) and (\ref{6.2}) give the same bounds on the number of knots $m\ll |Q_n|n^{7/2}$ (see Theorem \ref{T3.1} and Theorem \ref{T3.3}). As we pointed out above, we do not have an extension of (\ref{6.1}) to the case $d>2$.
A somewhat straightforward technique presented in \cite{VT160} gives the following result for all $d$ \begin{equation}\label{6.3} \varepsilon_k(\mathcal T( Q_n)_1,L_\infty)\ll n^{d/2} \left\{\begin{array}{ll} (| Q_n|/k) \log (4| Q_n|/k), &\quad k\le 2| Q_n|,\\ 2^{-k/(2| Q_n|)},&\quad k\ge 2| Q_n|.\end{array} \right. \end{equation} This result is used in \cite{VT160} to prove Theorem \ref{T3.2}. An interesting contribution of this paper is the proof of (\ref{6.2}) for all $d$ and for rather general sets $\mathcal T(Q)_1$ instead of $\mathcal T(Q_n)_1$. An important new ingredient here is the use of the dictionary ${\mathcal D}^2(Q)$, consisting of shifts of normalized Dirichlet kernels associated with $Q$, in $m$-term approximations. Certainly, it would be nice to understand, even in the special case of the hyperbolic cross polynomials $\mathcal T(Q_n)$, whether the embedding $\mathcal T(Q_n) \in \mathcal M(m,1)$ with $m\asymp |Q_n|$ holds. The results of this paper only show that the above embedding holds with $m \gg |Q_n|n^{7/2}$. The extra factor $n^{7/2}$ arises from the use of (\ref{6.2}), which contributed $n^{3/2}$, and from the chaining technique, which contributed $n^2$.
https://arxiv.org/abs/1907.03944
More accurate numerical radius inequalities (II)
In a recent work of the authors, we showed some general inequalities governing the numerical radius using convex functions. In this article, we present results that complement the aforementioned inequalities. In particular, the new versions can be looked at as refined and generalized forms of some well-known numerical radius inequalities. Among many other results, we show that \[\left\| f\left( \frac{{{A}^{*}}A+A{{A}^{*}}}{4} \right) \right\|\le \left\| \int_{0}^{1}{f\left( \left( 1-t \right){{B}^{2}}+t{{C}^{2}} \right)dt} \right\|\le f\left( {{w}^{2}}\left( A \right) \right),\] when $A$ is a bounded linear operator on a Hilbert space having the Cartesian decomposition $A=B+iC.$ This result, for example, extends and refines a celebrated result by Kittaneh.
\section{Introduction} Let $\mathcal{B}(\mathcal{H})$ stand for the $C^*$-algebra of all bounded linear operators on a complex Hilbert space $\mathcal{H}.$ Every $A\in\mathcal{B}(\mathcal{H})$ admits the Cartesian decomposition $A=B+iC,$ in which $B$ and $C$ are self-adjoint operators. In this context, an operator $X\in \mathcal{B}(\mathcal{H})$ is said to be self-adjoint if $X^*=X$, where $X^*$ is the adjoint operator of $X$.\\ For $A\in\mathcal{B}(\mathcal{H})$, the absolute value $|A|$ is defined by $\left| A \right|={{\left( {{A}^{*}}A \right)}^{\frac{1}{2}}}$. Notice that $|A|$ is a positive semi-definite operator, in the sense that $\left<|A|x,x\right>\geq 0$ for all $x\in\mathcal{H}.$ Among the most interesting numerical values associated with an operator $A\in\mathcal{B}(\mathcal{H})$ are the operator norm $\|A\|$ and the numerical radius $w(A)$ of $A$, defined respectively by $$\|A\|=\sup_{\|x\|=1}\|Ax\|\;{\text{and}}\;w(A)=\sup_{\|x\|=1}\left|\left<Ax,x\right>\right|.$$ It is easy to see that $\|A\|=\sup\limits_{\|x\|=\|y\|=1}\left|\left<Ax,y\right>\right|.$ Also, it is well known that when $A$ is normal, then $\|A\|=w(A).$ If $A$ is not normal, then $\|A\|$ and $w(A)$ are related via the inequalities \begin{equation}\label{norm and numerical radius} \frac{1}{2}\|A\|\leq w(A)\leq \|A\|. \end{equation} Research in this direction includes obtaining better bounds in \eqref{norm and numerical radius}. We refer the reader to \cite{10, 9, 8} for a sample of such research. In \cite{1}, Kittaneh proved \begin{equation}\label{kittaneh_first_ineq} \frac{1}{4}\left\| {{A}^{*}}A+A{{A}^{*}} \right\|\le {{w}^{2}}\left( A \right)\le \frac{1}{2}\left\| {{A}^{*}}A+A{{A}^{*}} \right\|. \end{equation} He also proved the following inequality in \cite{2}: \[w\left( A \right)\le \frac{1}{2}\left\|\; \left| A \right|+\left| {{A}^{*}} \right|\; \right\|,\] and the inequality is reversed if $\frac{1}{2}$ is replaced by $\frac{1}{4}$.
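For the reader who wants to experiment, the quantities above are easy to compute for matrices. The following sketch (illustration only; the matrix size, the angular grid, and the random seed are arbitrary choices) approximates $w(A)$ via the standard identity $w(A)=\max_{\theta}\lambda_{\max}\big(\tfrac{e^{i\theta}A+e^{-i\theta}A^{*}}{2}\big)$ and checks \eqref{norm and numerical radius} and \eqref{kittaneh_first_ineq}:

```python
import numpy as np

def numerical_radius(A, n_angles=5000):
    # w(A) = max_theta lambda_max(H_theta), where
    # H_theta = (e^{i theta} A + e^{-i theta} A^*) / 2, because
    # |<Ax, x>| = max_theta Re(e^{i theta} <Ax, x>) = max_theta <H_theta x, x>.
    w = 0.0
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
        w = max(w, float(np.linalg.eigvalsh(H)[-1]))
    return w

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

w = numerical_radius(A)
op_norm = np.linalg.norm(A, 2)                       # ||A||
s_norm = np.linalg.norm(A.conj().T @ A + A @ A.conj().T, 2)  # ||A*A + AA*||
```

For this random matrix one can verify $\frac{1}{2}\|A\|\le w(A)\le\|A\|$ and $\frac{1}{4}\|A^*A+AA^*\|\le w^2(A)\le\frac{1}{2}\|A^*A+AA^*\|$, up to the discretization error in $\theta$.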
In fact, noting that $\|\;|A|\;\|=\|\;|A^*|\;\|$, one has \begin{equation}\label{6} \frac{1}{4}\left\|\; \left| A \right|+\left| {{A}^{*}} \right|\; \right\|\le \frac{1}{4}\left( \left\|\; \left| A \right|\; \right\|+\left\| \;\left| {{A}^{*}} \right|\; \right\| \right)=\frac{1}{2}\left\| A \right\|\le w\left( A \right). \end{equation} In this article, we target both \eqref{kittaneh_first_ineq} and \eqref{6}, where we show that these inequalities follow from a more general treatment of convex and operator convex functions. We emphasize here that the original treatment of \eqref{kittaneh_first_ineq} and \eqref{6} did not involve any convexity approach. Therefore, we claim that our results are not only new but also introduce a new approach to treating such inequalities. We would also like to mention that this work can be considered as an extension of our earlier work \cite{7}, where a different set of inequalities is targeted. However, we present Theorem \ref{new_thm} and Proposition \ref{new_prop} as refinements of some results in \cite{7}, just to show how the current paper is related to \cite{7}. For example, we show that for $A\in\mathcal{B}(\mathcal{H})$, we have the two-sided inequality \[\frac{1}{4}\left\| {{A}^{*}}A+A{{A}^{*}} \right\|\le {{\left\| \int_{0}^{1}{{{\left( \left( 1-t \right){{B}^{2}}+t{{C}^{2}} \right)}^{2}}dt} \right\|}^{\frac{1}{2}}}\le {{w}^{2}}\left( A \right),\] where $A=B+iC$ is the Cartesian decomposition of $A$, as a refinement of \eqref{kittaneh_first_ineq}.
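This refined chain can be sanity-checked numerically (illustration only; the matrix size and random seed are arbitrary choices). Here $A=B+iC$ is the Cartesian decomposition, and the operator integral is evaluated in closed form: with $P=B^2$ and $Q=C^2$, $\int_{0}^{1}\left((1-t)P+tQ\right)^{2}dt=\frac{P^{2}+Q^{2}}{3}+\frac{PQ+QP}{6}$.

```python
import numpy as np

def numerical_radius(A, n_angles=5000):
    # w(A) = max_theta lambda_max((e^{i theta} A + e^{-i theta} A^*) / 2).
    w = 0.0
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
        w = max(w, float(np.linalg.eigvalsh(H)[-1]))
    return w

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# Cartesian decomposition A = B + iC with B, C Hermitian.
B = (A + A.conj().T) / 2
C = (A - A.conj().T) / (2j)

# Closed form of int_0^1 ((1-t) B^2 + t C^2)^2 dt, with P = B^2, Q = C^2.
P, Q = B @ B, C @ C
M = (P @ P + Q @ Q) / 3 + (P @ Q + Q @ P) / 6

lower = np.linalg.norm(A.conj().T @ A + A @ A.conj().T, 2) / 4
middle = np.linalg.norm(M, 2) ** 0.5
upper = numerical_radius(A) ** 2
```

For this random matrix, `lower <= middle <= upper`, in agreement with the displayed two-sided inequality (up to the angular discretization error in `upper`).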
However, even this last inequality follows as a special case of the more general statement that when $f:\left[ 0,\infty \right)\to \left[ 0,\infty \right)$ is an increasing operator convex function, then \[\left\| f\left( \frac{{{A}^{*}}A+A{{A}^{*}}}{4} \right) \right\|\le \left\| \int_{0}^{1}{f\left( \left( 1-t \right){{B}^{2}}+t{{C}^{2}} \right)dt} \right\|\le f\left( {{w}^{2}}\left( A \right) \right),\] where $A=B+iC$ is the Cartesian decomposition of $A$.\\ Another generalization is given for \eqref{6}, in a similar form. Many other generalizations and refinements of some well-known results will be presented too. Further, we will present a refined version of \eqref{from 7_1}, as one can see in Theorem \ref{new_thm}. At this stage, we draw the reader's attention to the fact that in our recent work \cite{7}, a reverse-type version of the above inequality was shown as follows \begin{equation}\label{from 7_1} f\left( w\left( A \right) \right)\le \left\| \int_{0}^{1}{f\left( t\left| A \right|+\left( 1-t \right)\left| {{A}^{*}} \right| \right)dt} \right\|\le \frac{1}{2}\left\| f\left( \left| A \right| \right)+f\left( \left| {{A}^{*}} \right| \right) \right\|. \end{equation} Therefore, the current work can be thought of as an extension of some results appearing in \cite{7}. A further refinement of this last inequality will be shown too. Independently, we will prove a general result that implies a refinement of the well-known inequality \cite{02} $$w^p(A)\leq \left\| \left( 1-t \right){{\left| A \right|}^{p}}+t{{\left| {{A}^{*}} \right|}^{p}} \right\|,\quad 2\leq p\leq 4,$$ using certain properties of operator convex functions. In our results, operator convex functions will be an essential assumption. Recall that a function $f:J\to\mathbb{R}$ is said to be operator convex if it is continuous and $f\left(\frac{A+B}{2}\right)\leq \frac{f(A)+f(B)}{2},$ for all self-adjoint operators $A,B$ with spectra in the interval $J$.
Of course, this implies that for all $0\leq t\leq 1,$ one has $f((1-t)A+tB)\leq (1-t)f(A)+tf(B).$\\ In this context, if $f:J\to\mathbb{R}$ is a given function and $A$ is a self-adjoint operator with spectrum in $J$, then $f(A)$ is defined via functional calculus. It can be easily seen that when $f$ is an increasing function, then $\|f(|X|)\|=f(\|X\|)$ for any operator $X$. It is well known that a convex function in the usual sense is not necessarily operator convex, and it is also known that the function $f(t)=t^r, r>0$ defined on $[0,\infty)$ is operator convex if and only if $r\in [1,2]$. We refer the reader to \cite{11} for related literature about operator convex functions. It is also readily seen that an operator convex function $f:J\to\mathbb{R}$ is convex in the usual sense. Therefore, such functions comply with the Hermite-Hadamard inequality \begin{equation}\label{HH_ineq} f\left(\frac{a+b}{2}\right)\leq\int_{0}^{1}f((1-t)a+tb)dt\leq \frac{f(a)+f(b)}{2},\quad a,b\in J. \end{equation} Notice that \eqref{HH_ineq} is a refinement of the convexity inequality $f\left(\frac{a+b}{2}\right)\leq\frac{f(a)+f(b)}{2}.$\\ The following modified operator version of \eqref{HH_ineq} was proved in \cite{4}: \begin{equation}\label{1} \begin{aligned} f\left( \frac{X+Y}{2} \right)&\le \frac{1}{2}\left[ f\left( \frac{3X+Y}{4} \right)+f\left( \frac{X+3Y}{4} \right) \right] \\ & \le \int_{0}^{1}{f\left( \left( 1-t \right)X+tY \right)dt} \\ & \le \frac{1}{2}\left[ f\left( \frac{X+Y}{2} \right)+\frac{f\left( X \right)+f\left( Y \right)}{2} \right] \\ & \le \frac{f\left( X \right)+f\left( Y \right)}{2}, \end{aligned} \end{equation} where $f:J\to \mathbb{R}$ is an operator convex function and $X, Y$ are two self-adjoint operators with spectra in $J$. Our proofs will rely heavily on properties of convex functions and their interplay with the inner product.
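A quick numerical check of the scalar inequality \eqref{HH_ineq} (illustration only; the convex function $f=\exp$ and the endpoints are arbitrary choices):

```python
import numpy as np

f = np.exp              # convex on the real line
a, b = 0.5, 3.0

mid_value = f((a + b) / 2)
# Midpoint-rule approximation of int_0^1 f((1-t) a + t b) dt.
t = (np.arange(100000) + 0.5) / 100000
integral = float(np.mean(f((1 - t) * a + t * b)))
avg_value = (f(a) + f(b)) / 2
```

As expected, the midpoint value, the integral mean, and the endpoint average appear in increasing order.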
Recall that if $f:J\to\mathbb{R}$ is a convex function and $A$ is a self-adjoint operator with spectrum in $J$, then one has \cite{6} \begin{equation}\label{convex_inner} f\left( \left\langle Ax,x \right\rangle \right)\le \left\langle f\left( A \right)x,x \right\rangle \end{equation} for any unit vector $x\in \mathcal{H}$. \section{Main Results} In this section, we present our main results, which mainly extend \eqref{kittaneh_first_ineq} and \eqref{6}. The main results are shown in Proposition \ref{thm_second_main} and Theorem \ref{thm_first_main}. However, the connection with \eqref{kittaneh_first_ineq} and \eqref{6} is given in Corollaries \ref{cor_first} and \ref{cor_second}. We would also like to mention that the operator convexity assumption will be relaxed to scalar convexity in Theorem \ref{prop_convex}, but this leads to a weaker form. \subsection{Some related norm inequalities} In \eqref{1}, replacing $X$ by $\frac{1}{2}|A|$ and $Y$ by $\frac{1}{2}|B|$, then noting that when $f$ is an increasing non-negative function, one has $\|f(|A|)\|=f(\|A\|)$, we reach the following inequalities. \begin{proposition}\label{thm_second_main} Let $A,B\in \mathcal{B}\left( \mathcal{H} \right)$. If $f:\left[ 0,\infty \right)\to \left[ 0,\infty \right)$ is an increasing operator convex function, then \begin{align*} \begin{aligned} \left\| f\left( \frac{\left| A \right|+\left| B \right|}{4} \right) \right\|&\le \left\| \int_{0}^{1}{f\left( \frac{\left( 1-t \right)\left| A \right|+t\left| B \right|}{2} \right)dt} \right\| \\ & \le \frac{1}{2}f\left( \frac{1}{2}\left\| A \right\| \right)+\frac{1}{2}f\left( \frac{1}{2}\left\| B \right\| \right). \end{aligned} \end{align*} \end{proposition} Notice that we did not consider all the inequalities appearing in \eqref{1}. Although using the other inequalities would imply further refinements, our goal in this article is to convey the idea without getting into unnecessary computations.
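Proposition \ref{thm_second_main} can be tested numerically with the increasing operator convex function $f(t)=t^2$ (illustration only; the matrix size and random seed are arbitrary choices). Here $|A|$ is computed from the eigendecomposition of $A^*A$, and with $P=|A|$, $Q=|B|$ the integral has the closed form $\int_{0}^{1}\big(\tfrac{(1-t)P+tQ}{2}\big)^{2}dt=\tfrac{1}{4}\big(\tfrac{P^{2}+Q^{2}}{3}+\tfrac{PQ+QP}{6}\big)$.

```python
import numpy as np

def abs_op(A):
    # |A| = (A^* A)^{1/2} via the Hermitian eigendecomposition of A^* A.
    vals, vecs = np.linalg.eigh(A.conj().T @ A)
    vals = np.clip(vals, 0.0, None)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.conj().T

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Bm = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

P, Q = abs_op(A), abs_op(Bm)

# f(t) = t^2:  || f((|A|+|B|)/4) || = || (P+Q)/4 ||^2 since (P+Q)/4 is PSD.
lhs = np.linalg.norm((P + Q) / 4, 2) ** 2
# Closed form of int_0^1 f( ((1-t) P + t Q)/2 ) dt.
mid = np.linalg.norm(((P @ P + Q @ Q) / 3 + (P @ Q + Q @ P) / 6) / 4, 2)
# (1/2) f(||A||/2) + (1/2) f(||B||/2).
rhs = ((np.linalg.norm(A, 2) / 2) ** 2 + (np.linalg.norm(Bm, 2) / 2) ** 2) / 2
```

For this random pair, `lhs <= mid <= rhs`, matching the chain in the proposition.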
We would like to emphasize that the significance of the inequalities in Proposition \ref{thm_second_main} is not the inequalities themselves, but their applications in obtaining numerical radius and norm inequalities refining \eqref{kittaneh_first_ineq} and \eqref{6}. Now letting $B=A^*$ in Proposition \ref{thm_second_main}, we obtain the following extension and refinement of \eqref{6}. The proof follows immediately noting that $\|A\|=\|\;|A|\;\|=\|\;|A^*|\;\|.$ \begin{corollary}\label{cor_second} Let $A\in \mathcal{B}\left( \mathcal{H} \right)$. Then for any $1\le r\le 2$, \[\frac{1}{{{4}^{r}}}{{\left\|\; \left| A \right|+\left| {{A}^{*}} \right|\; \right\|}^{r}}\le \left\| \int_{0}^{1}{{{\left( \frac{\left( 1-t \right)\left| A \right|+t\left| {{A}^{*}} \right|}{2} \right)}^{r}}dt} \right\|\le \frac{1}{{{2}^{r}}}{{\left\| A \right\|}^{r}}.\] In particular, \[\frac{1}{4}\left\| \;\left| A \right|+\left| {{A}^{*}} \right| \;\right\|\le {{\left\| \int_{0}^{1}{{{\left( \frac{\left( 1-t \right)\left| A \right|+t\left| {{A}^{*}} \right|}{2} \right)}^{2}}dt} \right\|}^{\frac{1}{2}}}\le \frac{1}{2}\left\| A \right\|.\] \end{corollary} Notice that operator convexity of $f$ is a necessary condition, since it is so in \eqref{1}. In the next result, we show the convex version of Proposition \ref{thm_second_main}. \begin{theorem}\label{prop_convex} Let $A,B\in \mathcal{B}\left( \mathcal{H} \right)$. 
If $f:\left[ 0,\infty \right)\to \left[ 0,\infty \right)$ is a convex function, then \begin{equation}\label{1st_concl_cov_thm} \begin{aligned} f\left(\left<\frac{|A|+|B|}{2}x,x\right>\right)&\leq \int_{0}^{1}f\left(\left\|((1-t)|A|+t|B|)^{1/2}x\right\|^2\right)dt\\ &\leq \frac{1}{2}\left\|f(|A|)+f(|B|)\right\|, \end{aligned} \end{equation} for any unit vector $x\in\mathcal{H}$, and \begin{equation}\label{2ndd_concl_cov_thm} f\left( \left\|\frac{\left| A \right|+\left| B \right|}{2} \right\|\right) \le \underset{\left\| x \right\|=1}{\mathop{\underset{x\in \mathcal{H}}{\mathop{\sup }}\,}}\,\int_{0}^{1}{f\left( {{\left\| {{\left( \left( 1-t \right)\left| A \right|+t\left| B \right| \right)}^{\frac{1}{2}}}x \right\|}^{2}} \right)dt\le \frac{1}{2}\left\| f\left( \left| A \right| \right)+f\left( \left| B \right| \right) \right\|}. \end{equation} Further, if $f$ is increasing then \begin{equation}\label{2nd_concl_cov_thm} \left\| f\left( \frac{\left| A \right|+\left| B \right|}{2} \right) \right\|\le \underset{\left\| x \right\|=1}{\mathop{\underset{x\in \mathcal{H}}{\mathop{\sup }}\,}}\,\int_{0}^{1}{f\left( {{\left\| {{\left( \left( 1-t \right)\left| A \right|+t\left| B \right| \right)}^{\frac{1}{2}}}x \right\|}^{2}} \right)dt\le \frac{1}{2}\left\| f\left( \left| A \right| \right)+f\left( \left| B \right| \right) \right\|}. \end{equation} \end{theorem} \begin{proof} Let $x\in\mathcal{H}$ be a unit vector. Then \eqref{HH_ineq} implies \begin{equation}\label{needed_1_conv_thm} \begin{aligned} f\left(\left<\frac{|A|+|B|}{2}x,x\right>\right)&=f\left(\frac{\left<|A|x,x\right>+\left<|B|x,x\right>}{2}\right)\\ &\leq \int_{0}^{1}f\left((1-t)\left<|A|x,x\right>+t\left<|B|x,x\right>\right)dt\\ &\leq \frac{f\left(\left<|A|x,x\right>\right)+f\left(\left<|B|x,x\right>\right)}{2}.
\end{aligned} \end{equation} Denoting $\int_{0}^{1}f\left((1-t)\left<|A|x,x\right>+t\left<|B|x,x\right>\right)dt$ by $I$, we have \begin{equation}\label{needed_2_conv_thm} \begin{aligned} I&=\int_{0}^{1}f\left(\left<((1-t)|A|+t|B|)x,x\right>\right)dt\\ &=\int_{0}^{1}f\left(\left<((1-t)|A|+t|B|)^{1/2}x,((1-t)|A|+t|B|)^{1/2}x\right>\right)dt\\ &=\int_{0}^{1}f\left(\left\|((1-t)|A|+t|B|)^{1/2}x\right\|^2\right)dt.\\ \end{aligned} \end{equation} Further, noting convexity of $f$ and \eqref{convex_inner} we have \begin{equation}\label{needed_3_conv_thm} \begin{aligned} \frac{f\left(\left<|A|x,x\right>\right)+f\left(\left<|B|x,x\right>\right)}{2}&\leq\frac{\left<f(|A|)x,x\right>+\left<f(|B|)x,x\right>}{2}\\ &=\frac{1}{2}\left<\left(f(|A|)+f(|B|)\right)x,x\right>\\ &\leq \frac{1}{2}\left\|f(|A|)+f(|B|)\right\|. \end{aligned} \end{equation} Combining \eqref{needed_1_conv_thm}, \eqref{needed_2_conv_thm} and \eqref{needed_3_conv_thm}, we obtain \begin{equation} \begin{aligned} f\left(\left<\frac{|A|+|B|}{2}x,x\right>\right)&\leq \int_{0}^{1}f\left(\left\|((1-t)|A|+t|B|)^{1/2}x\right\|^2\right)dt\\ &\leq \frac{1}{2}\left\|f(|A|)+f(|B|)\right\|. \end{aligned} \end{equation} This proves \eqref{1st_concl_cov_thm}.\\ To prove \eqref{2ndd_concl_cov_thm}, notice that for any such $f$, \begin{align*} \underset{\left\| x \right\|=1}{\mathop{\underset{x\in \mathcal{H}}{\mathop{\sup }}\,}}f\left(\left<\frac{|A|+|B|}{2}x,x\right>\right)&\geq f\left(\underset{\left\| x \right\|=1}{\mathop{\underset{x\in \mathcal{H}}{\mathop{\sup }}\,}}\left<\frac{|A|+|B|}{2}x,x\right>\right)\\ &=f\left(\left\|\frac{|A|+|B|}{2}\right\|\right). 
\end{align*} To prove \eqref{2nd_concl_cov_thm}, notice that when $f$ is increasing, then \begin{align*} \underset{\left\| x \right\|=1}{\mathop{\underset{x\in \mathcal{H}}{\mathop{\sup }}\,}}f\left(\left<\frac{|A|+|B|}{2}x,x\right>\right)&= f\left(\underset{\left\| x \right\|=1}{\mathop{\underset{x\in \mathcal{H}}{\mathop{\sup }}\,}}\left<\frac{|A|+|B|}{2}x,x\right> \right)\\ &=f\left(\left\|\frac{|A|+|B|}{2}\right\|\right)\\ &=\left\|f\left(\frac{|A|+|B|}{2}\right)\right\|, \end{align*} where, in the last line, we have used the fact that $\|f(|X|)\|=f(\|X\|)$ when $f$ is increasing. This together with \eqref{1st_concl_cov_thm} implies \eqref{2nd_concl_cov_thm}. \end{proof} In \cite[Corollary 2.2]{bourin}, it is shown that for an increasing convex function $f:[0,\infty)\to [0,\infty),$ one has $$f\left(\frac{|A|+|B|}{2}\right)\leq U\frac{f(|A|)+f(|B|)}{2}U^*,$$ for some unitary matrix $U$. Notice that this inequality implies that $$\left\|f\left(\frac{|A|+|B|}{2}\right)\right\|\leq \frac{1}{2}\left\|f(|A|)+f(|B|)\right\|.$$ It is clear that \eqref{2nd_concl_cov_thm} provides a refinement of this inequality. \subsection{Sharper lower bounds of the numerical radius} In order to present our next main result (Theorem \ref{thm_first_main}), we will need the following lemma. \begin{lemma}\label{3} Let $A\in \mathcal{B}\left( \mathcal{H} \right)$ have the Cartesian decomposition $A=B+iC$.
Then \[{{\left\| B \right\|}^{2}},{{\left\| C \right\|}^{2}}\le {{w}^{2}}\left( A \right).\] \end{lemma} \begin{proof} Let $A=B+iC$ be the Cartesian decomposition of $A$. Then for any unit vector $x\in \mathcal{H}$, \[{{\left\langle Bx,x \right\rangle }^{2}}+{{\left\langle Cx,x \right\rangle }^{2}}={{\left| \left\langle Ax,x \right\rangle \right|}^{2}}.\] Therefore, \[{{\left\langle Bx,x \right\rangle }^{2}}\le {{\left| \left\langle Ax,x \right\rangle \right|}^{2}}.\] Taking the supremum over $x\in \mathcal{H}$ with $\left\| x \right\|=1$, we obtain \[{{\left\| B \right\|}^{2}}={{w}^{2}}\left( B \right)\le {{w}^{2}}\left( A \right).\] Similarly one can prove ${{\left\| C \right\|}^{2}}\le {{w}^{2}}\left( A \right)$. This completes the proof. \end{proof} Now Proposition \ref{thm_second_main} is utilized with the Cartesian decomposition of $A$ to obtain the following generalized form of \eqref{kittaneh_first_ineq}. This shows the significance of the proposition. \begin{theorem}\label{thm_first_main} Let $A\in \mathcal{B}\left( \mathcal{H} \right)$ have the Cartesian decomposition $A=B+iC$. If $f:\left[ 0,\infty \right)\to \left[ 0,\infty \right)$ is an increasing operator convex function, then \begin{align*} \left\| f\left( \frac{{{A}^{*}}A+A{{A}^{*}}}{4} \right) \right\|&\le \left\| \int_{0}^{1}{f\left( \left( 1-t \right){{B}^{2}}+t{{C}^{2}} \right)dt} \right\|\\ &\le \frac{1}{2}\left\| f\left( {{B}^{2}} \right)+f\left( {{C}^{2}} \right) \right\|\\ &\le f\left( {{w}^{2}}\left( A \right) \right).
\end{align*} \end{theorem} \begin{proof} In Proposition \ref{thm_second_main}, replace $|A|$ by $2B^2$ and $|B|$ by $2C^2.$ Then a direct application of the proposition implies the first and second inequalities.\\ For the third inequality, notice that \begin{align*} \frac{1}{2}\left\| f\left( {{B}^{2}} \right)+f\left( {{C}^{2}} \right) \right\|&\leq \frac{1}{2}\left(\|f(B^2)\|+\|f(C^2)\|\right)\quad \text{(by the triangle inequality)}\\ &= \frac{1}{2}\left(f(\|B^2\|)+f(\|C^2\|)\right)\quad \text{(since $\left\| f\left( \left| X \right| \right) \right\|=f\left( \left\| X \right\| \right)$)}\\ &\leq f\left( {{w}^{2}}\left( A \right) \right)\quad \text{(by Lemma \ref{3})}. \end{align*} This completes the proof. \end{proof} Noting that the function $f(t)=t^r$ is an increasing operator convex function when $1\leq r\leq 2,$ Theorem \ref{thm_first_main} implies the following extension and refinement of the first inequality in \eqref{kittaneh_first_ineq}. \begin{corollary}\label{cor_first} Let $A\in \mathcal{B}\left( \mathcal{H} \right)$ with the Cartesian decomposition $A=B+iC$.
Then for any $1\le r\le 2$, \[\frac{1}{{{4}^{r}}}{{\left\| {{A}^{*}}A+A{{A}^{*}} \right\|}^{r}}\le \left\| \int_{0}^{1}{{{\left( \left( 1-t \right){{B}^{2}}+t{{C}^{2}} \right)}^{r}}dt} \right\|\le {{w}^{2r}}\left( A \right).\] In particular, \[\frac{1}{4}\left\| {{A}^{*}}A+A{{A}^{*}} \right\|\le {{\left\| \int_{0}^{1}{{{\left( \left( 1-t \right){{B}^{2}}+t{{C}^{2}} \right)}^{2}}dt} \right\|}^{\frac{1}{2}}}\le {{w}^{2}}\left( A \right).\] \end{corollary} \subsection{Sharper upper bounds of the numerical radius} In \cite[Theorem 2.1]{7} it has been shown that \begin{equation}\label{needed_fom_first_paper} f\left( w\left( A \right) \right)\le \left\| \int_{0}^{1}{f\left( t\left| A \right|+\left( 1-t \right)\left| {{A}^{*}} \right| \right)dt} \right\|\le \frac{1}{2}\left\| f\left( \left| A \right| \right)+f\left( \left| {{A}^{*}} \right| \right) \right\| \end{equation} where $f:\left[ 0,\infty \right)\to \left[ 0,\infty \right)$ is an increasing operator convex function. This can be improved in the following theorem. \begin{theorem}\label{new_thm} Let $A\in \mathcal{B}\left( \mathcal{H} \right)$. If $f:\left[ 0,\infty \right)\to \left[ 0,\infty \right)$ is an increasing convex function, then \[f\left( w\left( A \right) \right)\le \frac{1}{2}\left\| f\left( \frac{3\left| A \right|+\left| {{A}^{*}} \right|}{4} \right)+f\left( \frac{\left| A \right|+3\left| {{A}^{*}} \right|}{4} \right) \right\|.\] \end{theorem} \begin{proof} It is easy to see that if $f:J\to \mathbb{R}$ is a convex function and $a,b\in J$, \[\begin{aligned} f\left( \frac{a+b}{2} \right)&=f\left( \frac{1}{2}\left( \frac{3a+b}{4}+\frac{a+3b}{4} \right) \right) \\ & \le \frac{1}{2}\left[ f\left( \frac{3a+b}{4} \right)+f\left( \frac{a+3b}{4} \right) \right]. \end{aligned}\] Let $x\in \mathcal{H}$ be a unit vector. 
Replacing $a$ and $b$ by $\left\langle \left| A \right|x,x \right\rangle $ and $\left\langle \left| {{A}^{*}} \right|x,x \right\rangle $ in the above inequality, we get \begin{equation}\label{9} \begin{aligned} & f\left( \left\langle \frac{\left| A \right|+\left| {{A}^{*}} \right|}{2}x,x \right\rangle \right) \\ & \le \frac{1}{2}\left[ f\left( \left\langle \frac{3\left| A \right|+\left| {{A}^{*}} \right|}{4}x,x \right\rangle \right)+f\left( \left\langle \frac{\left| A \right|+3\left| {{A}^{*}} \right|}{4}x,x \right\rangle \right) \right] \\ & \le \frac{1}{2}\left[ \left\langle f\left( \frac{3\left| A \right|+\left| {{A}^{*}} \right|}{4} \right)x,x \right\rangle +\left\langle f\left( \frac{\left| A \right|+3\left| {{A}^{*}} \right|}{4} \right)x,x \right\rangle \right] \quad \text{(by \eqref{convex_inner})}\\ & =\frac{1}{2}\left\langle \left\{f\left( \frac{3\left| A \right|+\left| {{A}^{*}} \right|}{4} \right)+f\left( \frac{\left| A \right|+3\left| {{A}^{*}} \right|}{4} \right)\right\}x,x \right\rangle. \\ \end{aligned} \end{equation} On the other hand, since $f$ is increasing, \begin{equation}\label{10} \begin{aligned} & f\left( \left| \left\langle Ax,x \right\rangle \right| \right)\\ &\le f\left( \sqrt{\left\langle \left| A \right|x,x \right\rangle \left\langle \left| {{A}^{*}} \right|x,x \right\rangle } \right) \quad \text{(by the mixed Schwarz inequality \cite[pp 75-76]{5})}\\ & \le f\left( \left\langle \frac{\left| A \right|+\left| {{A}^{*}} \right|}{2}x,x \right\rangle \right) \quad \text{(by the arithmetic-geometric mean inequality)}. \end{aligned} \end{equation} Combining \eqref{9} and \eqref{10} we get \[f\left( \left| \left\langle Ax,x \right\rangle \right| \right)\le \frac{1}{2}\left\langle \left\{f\left( \frac{3\left| A \right|+\left| {{A}^{*}} \right|}{4} \right)+f\left( \frac{\left| A \right|+3\left| {{A}^{*}} \right|}{4} \right)\right\}x,x \right\rangle \] for any unit vector $x\in \mathcal{H}$. 
Taking the supremum over unit vectors, we get \[f\left( w\left( A \right) \right)\le \frac{1}{2}\left\| f\left( \frac{3\left| A \right|+\left| {{A}^{*}} \right|}{4} \right)+f\left( \frac{\left| A \right|+3\left| {{A}^{*}} \right|}{4} \right) \right\|.\] This completes the proof of the theorem. \end{proof} The fact that Theorem \ref{new_thm} improves \eqref{needed_fom_first_paper} is justified in the following proposition. \begin{proposition}\label{new_prop} Let $A\in \mathcal{B}\left( \mathcal{H} \right)$. If $f:\left[ 0,\infty \right)\to \left[ 0,\infty \right)$ is an operator convex function, then \begin{equation}\label{8} \frac{1}{2}\left\| f\left( \frac{3\left| A \right|+\left| {{A}^{*}} \right|}{4} \right)+f\left( \frac{\left| A \right|+3\left| {{A}^{*}} \right|}{4} \right) \right\|\le \left\| \int_{0}^{1}{f\left( t\left| A \right|+\left( 1-t \right)\left| {{A}^{*}} \right| \right)dt} \right\|. \end{equation} \end{proposition} \begin{proof} By the second inequality in \eqref{1}, \[\frac{1}{2}\left[ f\left( \frac{3A+B}{4} \right)+f\left( \frac{A+3B}{4} \right) \right]\le \int_{0}^{1}{f\left( tA+\left( 1-t \right)B \right)dt},\] where $A$ and $B$ are two self-adjoint operators with spectra in $J$ and $f$ is operator convex on $J$. Replacing $A$ and $B$ by $\left| A \right|$ and $\left| {{A}^{*}} \right|$, respectively, we get \[ \frac{1}{2}\left[ f\left( \frac{3\left| A \right|+\left| {{A}^{*}} \right|}{4} \right)+f\left( \frac{\left| A \right|+3\left| {{A}^{*}} \right|}{4} \right) \right]\le \int_{0}^{1}{f\left( t\left| A \right|+\left( 1-t \right)\left| {{A}^{*}} \right| \right)dt}.\] From this we infer \eqref{8}. \end{proof} Letting $f(t)=t^2,$ Theorem \ref{new_thm} together with Proposition \ref{new_prop} implies the following two-term refinement of the right inequality in \eqref{kittaneh_first_ineq}. \begin{corollary} Let $A\in \mathcal{B}\left( \mathcal{H} \right)$.
Then \begin{align} {{w}^{2}}\left( A \right)&\le \frac{1}{32}\left\| {{\left( 3\left| A \right|+\left| {{A}^{*}} \right| \right)}^{2}}+{{\left( \left| A \right|+3\left| {{A}^{*}} \right| \right)}^{2}} \right\| \label{11}\\ & \le \left\| \int_{0}^{1}{{{\left( t\left| A \right|+\left( 1-t \right)\left| {{A}^{*}} \right| \right)}^{2}}dt} \right\| \nonumber \\ & \le \frac{1}{2}\left\| {{A}^{*}}A+A{{A}^{*}} \right\|. \nonumber \end{align} \end{corollary} \begin{remark} The constant $\frac{1}{32}$ is best possible in \eqref{11}. Indeed, if we assume that \eqref{11} holds with a constant $C>0$, i.e., \begin{equation}\label{7} {{w}^{2}}\left( A \right)\le C\left\| {{\left( 3\left| A \right|+\left| {{A}^{*}} \right| \right)}^{2}}+{{\left( \left| A \right|+3\left| {{A}^{*}} \right| \right)}^{2}} \right\| \end{equation} for any $A\in \mathcal{B}\left( \mathcal{H} \right)$, then choosing $A$ to be a normal operator and using the fact that $w\left( A \right)=\left\| A \right\|$ for normal operators, we deduce from \eqref{7} that $\frac{1}{32}\le C$, which proves the sharpness of the constant. \end{remark} \subsection{Some additive refinements} We have already seen that Corollary \ref{cor_first} refines the first inequality in \eqref{kittaneh_first_ineq}. The refinement this corollary presents was based on a convexity approach, and the refining term contains an operator integral. In the next result, we use a different approach to present a new refinement of the first inequality in \eqref{kittaneh_first_ineq}. The main tool will be the basic inequality \begin{equation}\label{sum_diff_pos} {\left( \frac{X+Y}{2} \right)}^{2}\leq {{\left( \frac{X+Y}{2} \right)}^{2}}+{{\left( \frac{\left| X-Y \right|}{2} \right)}^{2}}= \frac{{{X}^{2}}+{{Y}^{2}}}{2}, \end{equation} valid for self-adjoint operators $X$ and $Y$. \begin{theorem} Let $A\in\mathcal{B}(\mathcal{H})$.
Then \[\frac{1}{4}\left\| {{A}^{*}}A+A{{A}^{*}} \right\|\le \frac{1}{4}{{\left\| {{\left( {{A}^{*}}A+A{{A}^{*}} \right)}^{2}}+{{\left| {{A}^{2}}+{{\left( {{A}^{*}} \right)}^{2}} \right|}^{2}} \right\|}^{\frac{1}{2}}}\le {{w}^{2}}\left( A \right).\] \end{theorem} \begin{proof} Let $A=B+iC$ be the Cartesian decomposition of $A$. Then \begin{equation}\label{needed_last_1} {{B}^{2}}+{{C}^{2}}=\frac{{{A}^{*}}A+A{{A}^{*}}}{2}\text{ and }{{B}^{2}}-{{C}^{2}}=\frac{{{A}^{2}}+{{\left( {{A}^{*}} \right)}^{2}}}{2}. \end{equation} Replacing $X$ and $Y$ by ${{B}^{2}}$ and ${{C}^{2}}$, respectively, in \eqref{sum_diff_pos}, we get \[\left( \frac{{{B}^{2}}+{{C}^{2}}}{2} \right)^2\leq {{\left( \frac{{{B}^{2}}+{{C}^{2}}}{2} \right)}^{2}}+{{\left( \frac{\left| {{B}^{2}}-{{C}^{2}} \right|}{2} \right)}^{2}}= \frac{{{B}^{4}}+{{C}^{4}}}{2}.\] Consequently, \[\left\| \left( \frac{{{B}^{2}}+{{C}^{2}}}{2} \right)\right\|^{2}\leq \left\| {{\left( \frac{{{B}^{2}}+{{C}^{2}}}{2} \right)}^{2}}+{{\left( \frac{\left| {{B}^{2}}-{{C}^{2}} \right|}{2} \right)}^{2}} \right\|= \left\| \frac{{{B}^{4}}+{{C}^{4}}}{2} \right\|.\] Now \eqref{needed_last_1} implies \begin{align*} \frac{1}{16}\left\| {{A}^{*}}A+A{{A}^{*}} \right\|^2&\le \frac{1}{16}{{\left\| {{\left( {{A}^{*}}A+A{{A}^{*}} \right)}^{2}}+{{\left| {{A}^{2}}+{{\left( {{A}^{*}} \right)}^{2}} \right|}^{2}} \right\|}}\\ &= \left\| \frac{{{B}^{4}}+{{C}^{4}}}{2} \right\|\\ &\leq\frac{\|B\|^4+\|C\|^4}{2}\\ &\le {{w}^{4}}\left( A \right), \end{align*} where we have used Lemma \ref{3} to obtain the last inequality. This completes the proof. \end{proof} Our last result in this direction extends the inequality \begin{equation}\label{had_kitt_ineq} w^p(A)\leq \left\| \left( 1-t \right){{\left| A \right|}^{p}}+t{{\left| {{A}^{*}} \right|}^{p}} \right\|,\quad 2\leq p\leq 4, \end{equation} which was shown in \cite[Theorem 2]{02}. 
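The two-sided bound just proved is easy to spot-check numerically. The following sketch is ours, not part of the paper: the helper names (\texttt{absval}, \texttt{num\_radius}, \texttt{check\_chain}) are invented for illustration, and the numerical radius is approximated by maximizing $\lambda_{\max}\bigl(\mathrm{Re}(e^{i\theta}A)\bigr)$ over a grid of angles, which can only underestimate $w(A)$; hence the small tolerances.

```python
import numpy as np

def absval(A):
    # |A| = (A^*A)^{1/2}, computed via the spectral decomposition of A^*A
    w, V = np.linalg.eigh(A.conj().T @ A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def num_radius(A, grid=4000):
    # w(A) = max_theta lambda_max((e^{i theta} A + e^{-i theta} A^*)/2),
    # approximated on a uniform grid of angles (never an overestimate)
    return max(
        np.linalg.eigvalsh(
            (np.exp(1j * th) * A + np.exp(-1j * th) * A.conj().T) / 2)[-1]
        for th in np.linspace(0.0, 2 * np.pi, grid, endpoint=False))

def check_chain(A):
    op = lambda M: np.linalg.norm(M, 2)          # operator (spectral) norm
    S = A.conj().T @ A + A @ A.conj().T          # A^*A + AA^*
    T = A @ A + A.conj().T @ A.conj().T          # A^2 + (A^*)^2 (Hermitian)
    aT = absval(T)
    left = op(S) / 4
    mid = np.sqrt(op(S @ S + aT @ aT)) / 4
    right = num_radius(A) ** 2
    return left <= mid + 1e-9 and mid <= right + 1e-3

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
```

Since the grid search only underestimates $w(A)$, the tolerance on the right-hand comparison merely covers the near-equality cases (e.g. normal $A$).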
The approach we use here is again based on convexity: we present the main result in terms of operator convex functions, and then deduce the desired refinement as a special case. For this result, we will need the following lemma. \begin{lemma}\label{lemma_sab}(\cite[Lemma 3.12]{01}) Let $f:\mathbb{R}\to\mathbb{R}$ be operator convex, let $A,B$ be two Hermitian matrices in $\mathbb{M}_n$, and let $0\leq t \leq 1$. Then \begin{eqnarray*} f\left((1-t)A+t B\right)&+&2r\left(f(A)\nabla f(B)-f(A\nabla B)\right)\leq (1-t)f(A)+t f(B), \end{eqnarray*} where $r=\min\{t,1-t\}$ and $A\nabla B=\frac{A+B}{2}.$ \end{lemma} Now we prove our last result. \begin{theorem} Let $A\in\mathcal{B}(\mathcal{H})$ and let $f:[0,\infty)\to [0,\infty)$ be an increasing operator convex function. Then \begin{equation}\label{wanted_last_1} \begin{aligned} & f\left( {{w}^{2}}\left( A \right) \right) \\ & \le \left\| \left( 1-t \right)f\left( {{\left| A \right|}^{2}} \right)+tf\left( {{\left| {{A}^{*}} \right|}^{2}} \right)-2r\left( \frac{f\left( {{\left| A \right|}^{2}} \right)+f\left( {{\left| {{A}^{*}} \right|}^{2}} \right)}{2}-f\left( \frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{2} \right) \right) \right\| \\ & \le \left\| \left( 1-t \right)f\left( {{\left| A \right|}^{2}} \right)+tf\left( {{\left| {{A}^{*}} \right|}^{2}} \right) \right\|, \end{aligned} \end{equation} for any $0\le t\le 1$, where $r=\min\{t,1-t\}$. In particular, \begin{equation}\label{wanted_last_2} \begin{aligned} {{w}^{p}}\left( A \right)&\le \left\| \left( 1-t \right){{\left| A \right|}^{p}}+t{{\left| {{A}^{*}} \right|}^{p}}-2r\left( \frac{{{\left| A \right|}^{p}}+{{\left| {{A}^{*}} \right|}^{p}}}{2}-{{\left( \frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{2} \right)}^{\frac{p}{2}}} \right) \right\| \\ & \le \left\| \left( 1-t \right){{\left| A \right|}^{p}}+t{{\left| {{A}^{*}} \right|}^{p}} \right\| \end{aligned} \end{equation} for $2\le p\le 4$. 
\end{theorem} \begin{proof} Lemma \ref{lemma_sab} implies \[\begin{aligned} & f\left( \left( 1-t \right){{\left| A \right|}^{2}}+t{{\left| {{A}^{*}} \right|}^{2}} \right) \\ & \le \left( 1-t \right)f\left( {{\left| A \right|}^{2}} \right)+tf\left( {{\left| {{A}^{*}} \right|}^{2}} \right)-2r\left( \frac{f\left( {{\left| A \right|}^{2}} \right)+f\left( {{\left| {{A}^{*}} \right|}^{2}} \right)}{2}-f\left( \frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{2} \right) \right) \\ & \le \left( 1-t \right)f\left( {{\left| A \right|}^{2}} \right)+tf\left( {{\left| {{A}^{*}} \right|}^{2}} \right). \end{aligned}\] Therefore, \[\begin{aligned} & \left\| f\left( \left( 1-t \right){{\left| A \right|}^{2}}+t{{\left| {{A}^{*}} \right|}^{2}} \right) \right\| \\ & \le \left\| \left( 1-t \right)f\left( {{\left| A \right|}^{2}} \right)+tf\left( {{\left| {{A}^{*}} \right|}^{2}} \right)-2r\left( \frac{f\left( {{\left| A \right|}^{2}} \right)+f\left( {{\left| {{A}^{*}} \right|}^{2}} \right)}{2}-f\left( \frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{2} \right) \right) \right\| \\ & \le \left\| \left( 1-t \right)f\left( {{\left| A \right|}^{2}} \right)+tf\left( {{\left| {{A}^{*}} \right|}^{2}} \right) \right\|. 
\end{aligned}\] On the other hand, \[\begin{aligned} f\left( {{\left| \left\langle Ax,x \right\rangle \right|}^{2}} \right)&\le f\left( \left\langle {{\left| A \right|}^{2\left( 1-t \right)}}x,x \right\rangle \left\langle {{\left| {{A}^{*}} \right|}^{2t}}x,x \right\rangle \right) \\ & \le f\left( {{\left\langle {{\left| A \right|}^{2}}x,x \right\rangle }^{1-t}}{{\left\langle {{\left| {{A}^{*}} \right|}^{2}}x,x \right\rangle }^{t}} \right) \\ & \le f\left( \left( 1-t \right)\left\langle {{\left| A \right|}^{2}}x,x \right\rangle +t\left\langle {{\left| {{A}^{*}} \right|}^{2}}x,x \right\rangle \right) \\ & =f\left( \left\langle \left( \left( 1-t \right){{\left| A \right|}^{2}}+t{{\left| {{A}^{*}} \right|}^{2}} \right)x,x \right\rangle \right) \end{aligned}\] for any unit vector $x\in \mathcal{H}$, where the first inequality follows from the mixed Schwarz inequality and the monotonicity of $f$, the second from the H\"older--McCarthy inequality, and the third from the weighted arithmetic--geometric mean inequality. This implies that \[f\left( {{w}^{2}}\left( A \right) \right)\le \left\| f\left( \left( 1-t \right){{\left| A \right|}^{2}}+t{{\left| {{A}^{*}} \right|}^{2}} \right) \right\|.\] Consequently, \[\begin{aligned} & f\left( {{w}^{2}}\left( A \right) \right) \\ & \le \left\| \left( 1-t \right)f\left( {{\left| A \right|}^{2}} \right)+tf\left( {{\left| {{A}^{*}} \right|}^{2}} \right)-2r\left( \frac{f\left( {{\left| A \right|}^{2}} \right)+f\left( {{\left| {{A}^{*}} \right|}^{2}} \right)}{2}-f\left( \frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{2} \right) \right) \right\| \\ & \le \left\| \left( 1-t \right)f\left( {{\left| A \right|}^{2}} \right)+tf\left( {{\left| {{A}^{*}} \right|}^{2}} \right) \right\|. \end{aligned}\] This proves \eqref{wanted_last_1}. For \eqref{wanted_last_2}, apply \eqref{wanted_last_1} with $f(t)=t^{p/2}$, which is increasing and operator convex since $1\le p/2\le 2$. \end{proof}
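The inequality \eqref{wanted_last_2} lends itself to a numerical sanity check. The sketch below is ours, not from the paper: it forms the positive operators $|A|^p$, $|A^*|^p$ and $\big(\frac{|A|^2+|A^*|^2}{2}\big)^{p/2}$ by spectral calculus, and approximates $w(A)$ on a grid of angles; since the grid search only underestimates $w(A)$, the first assertion is conservative.

```python
import numpy as np

def psd_pow(M, p):
    # M^p for a positive semidefinite matrix M, via the spectral decomposition
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0.0, None) ** p) @ V.conj().T

def num_radius(A, grid=4000):
    # grid approximation of w(A) = max_theta lambda_max(Re(e^{i theta} A))
    return max(
        np.linalg.eigvalsh(
            (np.exp(1j * th) * A + np.exp(-1j * th) * A.conj().T) / 2)[-1]
        for th in np.linspace(0.0, 2 * np.pi, grid, endpoint=False))

def check_refined(A, p=3.0, t=0.3):
    op = lambda M: np.linalg.norm(M, 2)               # operator norm
    r = min(t, 1 - t)
    P2, Q2 = A.conj().T @ A, A @ A.conj().T           # |A|^2 and |A^*|^2
    Fp, Gp = psd_pow(P2, p / 2), psd_pow(Q2, p / 2)   # |A|^p and |A^*|^p
    plain = (1 - t) * Fp + t * Gp
    refined = plain - 2 * r * ((Fp + Gp) / 2 - psd_pow((P2 + Q2) / 2, p / 2))
    return (num_radius(A) ** p <= op(refined) + 1e-9
            and op(refined) <= op(plain) + 1e-9)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
```

The subtracted refinement term is positive semidefinite, so the second comparison holds exactly up to floating-point rounding; the interesting content of the check is the first one.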
% arXiv:1907.03944, ``More accurate numerical radius inequalities (II)'' (math.FA).
https://arxiv.org/abs/1306.3355
Recurrence relations for patterns of type $(2,1)$ in flattened permutations
We consider the problem of counting the occurrences of patterns of the form $xy-z$ within flattened permutations of a given length. Using symmetric functions, we find recurrence relations satisfied by the distributions on $\mathcal{S}_n$ for the patterns 12-3, 21-3, 23-1 and 32-1, and develop a unified approach to obtain explicit formulas. By these recurrences, we are able to determine simple closed form expressions for the number of permutations that, when flattened, avoid one of these patterns as well as expressions for the average number of occurrences. In particular, we find that the average number of 23-1 patterns and the average number of 32-1 patterns in $\text{Flatten}(\pi)$, taken over all permutations $\pi$ of the same length, are equal, as are the number of permutations avoiding either of these patterns. We also find that the average number of 21-3 patterns in $\text{Flatten}(\pi)$ over all $\pi$ is the same as it is for 31-2 patterns.
\section{Introduction} The pattern counting problem for permutations has been studied extensively from various perspectives in both enumerative and algebraic combinatorics; see, e.g., \cite{EN, Ki}. The comparable problem has also been considered on other discrete structures such as $k$-ary words \cite{BM}, compositions \cite{MSi}, and set partitions \cite {MSY} (see also \cite{HM} and the references contained therein). In his recent study~\cite{C} on finite set partitions, Callan introduced the notion of flattened partitions. In a previous paper~\cite{MSW}, we considered flattened permutations in the same sense and obtained formulas for the generating functions which count the flattened permutations of size $n$ according to the number of peaks and valleys. Here, we continue this work for some related statistics on flattened permutations. Let $[n]=\{1,2,\ldots,n\}$ if $n \geq 1$, with $[0]=\emptyset$. Denote the set of permutations of~$[n]$ by~$\mathcal{S}_n$. Let $\pi=\pi_1\pi_2\cdots\pi_n\in\mathcal{S}_n$. A {\em pattern} is any permutation $\sigma$ of shorter length, and an {\em occurrence} of~$\sigma$ in~$\pi$ is a subsequence of~$\pi$ that is order-isomorphic to~$\sigma$. If $r$ denotes the number of occurrences of a given pattern $\sigma$ within a permutation $\pi$ in general, then the case that has been studied most often in previous research has been when $r=0$, i.e., the avoidance of $\sigma$ by $\pi$. Relatively little work has been done concerning the case when $r>0$, and in what has been done, the patterns were usually of length three. Simple algebraic maps show that the six patterns of length three are classified into two classes with respect to the pattern counting enumeration, that is, the class with the representative pattern $\sigma=123$ (see \cite{NZ12,NZ96}) and the class with the representative pattern $\sigma=132$ (see \cite{MV02} and references therein). 
By specifying the length of adjacent letters allowed, Claesson and Mansour~\cite{CM02} further generalized the notion of patterns. Precisely, a pattern $\sigma=\sigma_1\text{-}\sigma_2\text{-}\cdots\text{-}\sigma_k$ is said to be of {\em type} $(\ell_1,\ell_2,\ldots,\ell_k)$ if the subword~$\sigma_i$ of~$\sigma$ has length~$\ell_i$. In this notation, a classical pattern of length~$k$ is of type $(1,1,\ldots,1)$ which consists of~$k$ occurrences of~$1$. In particular, the permutation~$\pi$ is said to contain a pattern $\tau=xy\text{-} z$ of type $(2,1)$ if there exist indices $2\le i<j\le n$ such that $\pi_{i-1}\pi_i\pi_j$ is order-isomorphic to $xyz$, where $xyz$ is some permutation of $\{1,2,3\}$. Otherwise, we say that $\pi$ avoids $\tau$. Let $\pi\in\mathcal{S}_n$ be a permutation of length~$n$ represented in its {\em standard cycle form}, that is, cycles arranged from left to right in ascending order according to the size of the smallest elements, where the smallest element is written first within each cycle. Define $\text{Flatten}(\pi)$ to be the permutation of length $n$ obtained by erasing the parentheses enclosing the cycles of $\pi$ and considering the resulting word. For example, if $\pi=71564328 \in \mathcal{S}_8$, then the standard cycle form of $\pi$ is $(172)(3546)(8)$ and $\text{Flatten}(\pi)=17235468$. One can combine the ideas of the previous two paragraphs and say that a permutation $\pi$ contains a pattern $\tau$ in the \emph{flattened sense} if and only if $\text{Flatten}(\pi)$ contains $\tau$ in the usual sense and avoids $\tau$ otherwise. Here, we will use this definition of pattern containment and consider the case when $\tau$ is a pattern of type $(2,1)$. For example, the permutation $\pi=71564328$ avoids $23\dash1$ but has four occurrences of $31\text{-} 2$ by this definition since $\text{Flatten}(\pi)=17235468$ avoids $23\dash1$ but has four occurrences of $31\dash2$. 
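These definitions are easy to make executable. The sketch below (function names are ours) computes $\text{Flatten}(\pi)$ from one-line notation and counts occurrences of a type-$(2,1)$ pattern $xy\text{-} z$, reproducing the example above; a brute-force avoidance count over $\mathcal{S}_n$ is included for small-$n$ experiments.

```python
from itertools import permutations
from math import comb

def flatten(perm):
    # Standard cycle form: cycles sorted by their minima, each cycle written
    # starting at its minimum; Flatten erases the parentheses.
    # perm is one-line notation: perm[i-1] is the image of i.
    n = len(perm)
    seen = [False] * (n + 1)
    word = []
    for s in range(1, n + 1):        # cycle minima are met in increasing order
        c = s
        while not seen[c]:
            seen[c] = True
            word.append(c)
            c = perm[c - 1]
    return word

def occurrences(word, pattern):
    # Occurrences of the type-(2,1) pattern xy-z: adjacent positions i-1, i and
    # an arbitrary later position j, with the triple order-isomorphic to xyz.
    count = 0
    for i in range(1, len(word)):
        for j in range(i + 1, len(word)):
            triple = (word[i - 1], word[i], word[j])
            ranks = tuple(sorted(triple).index(v) + 1 for v in triple)
            count += ranks == tuple(pattern)
    return count

def avoiders(n, pattern):
    # number of permutations of [n] whose flattening avoids the pattern
    return sum(occurrences(flatten(p), pattern) == 0
               for p in permutations(range(1, n + 1)))
```

For $\pi=71564328$ this returns $\text{Flatten}(\pi)=17235468$, with four occurrences of $31\dash2$ and none of $23\dash1$, exactly as in the example; for small $n$, brute force also agrees with the closed forms obtained below (e.g. \texttt{avoiders(n, (3,1,2))} equals ${2n-2\choose n-1}$).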
Let $\mathrm{st}$ denote the statistic on $\mathcal{S}_n$ which records the number of occurrences in the flattened sense of one of five patterns under consideration in this paper. In accordance with the previous paper~\cite{MSW}, we will use the notation \[ g^\mathrm{st}_n(a_1a_2\cdots a_k) =\sum_{\pi}q^{\mathrm{st}(\text{Flatten}(\pi))}, \] where $\pi$ ranges over all permutations of length~$n$ such that $\text{Flatten}(\pi)$ starts with $a_1a_2\cdots a_k$. It is easy to see that $g^\mathrm{st}_n(a_1a_2\cdots a_k)=0$ if $a_1\ne1$. We will write $g_n=g^\mathrm{st}_n(1)$ when considering a particular pattern. In this paper, we use symmetric functions to develop recurrences for the generating functions $g^\mathrm{st}_n(1)$ in the cases when $\mathrm{st}$ is the statistic recording the number of occurrences of $\tau$, where $\tau$ is any pattern of type $(2,1)$ (except for $13\dash2$). As a consequence, we obtain simple closed formulas for the number of permutations avoiding a pattern of type $(2,1)$ in the flattened sense as well as the average number of occurrences of a pattern over all permutations of a given size. We provide algebraic proofs of these results, as well as combinatorial proofs in all but two cases. The results are summarized in Table \ref{tab1} below. 
\begin{table}[htdp] \caption{The number of permutations avoiding, and the average number of occurrences of, patterns of type $(2,1)$ in flattened permutations of length $n$.} \begin{center} \begin{tabular}{|c|c|c|c|} \hline pattern & number avoiding & average number & reference \\ \hline\rule{0pt}{.8cm}\raisebox{1.4ex}[0pt] {13-2} &\raisebox{1.4ex}[0pt]{$\displaystyle2^{n-1}$} &\raisebox{1.4ex}[0pt]{$\displaystyle{n^2+3n+8\over12}-H_n$} &\raisebox{1.4ex}[0pt]{\cite{MW13X}} \\ \hline\rule{0pt}{.9cm}\raisebox{1.7ex}[0pt] {31-2} &\raisebox{1.7ex}[0pt]{$\displaystyle{2n-2\choose n-1}$ } & &\raisebox{1.7ex}[0pt]{Corollary \ref{cor:31-2:average}} \\ \cline{1-1}\cline{2-2}\cline{4-4}\rule{0pt}{1.1cm}\raisebox{2.4ex}[0pt]{21-3} &\raisebox{2.7ex}[0pt]{$\displaystyle2\sum_{k=1}^{n-1}kS(n-1,k)$} &\raisebox{5.3ex}[0pt]{$\displaystyle{n^3-3n^2+26n-12\over 12n}-H_n$} &\raisebox{2.4ex}[0pt]{Corollary \ref{cor:21-3:average}} \\ \hline\rule{0cm}{.5cm}\raisebox{.08cm}[0pt] {32-1} & & &\raisebox{.08cm}[0pt]{Corollary \ref{cor:32-1:average}} \\ \cline{1-1} \cline{4-4}\rule{0pt}{.5cm}\raisebox{.08cm}[0pt] {23-1} &\raisebox{.4cm}[0pt]{$\displaystyle\sum_{k=1}^{n-1}2^kS(n-1,k)$} &\raisebox{.38cm}[0pt]{$\displaystyle{n^2-9n-4\over 12}+H_n$} &\raisebox{.08cm}[0pt]{Corollary \ref{cor:23-1:average}} \\ \hline\rule{0pt}{1cm}\raisebox{1.4ex}[0pt] {12-3} &\raisebox{2.4ex}[0pt]{$\displaystyle-2\sum_{i=0}^{n-2}{n-2\choose i}(B_i+B_{i+1})\tilde{B}_{n-i-3}$} &\raisebox{2ex}[0pt]{$\displaystyle{n^3+3n^2-40n+24\over 12n}+H_n$} &\raisebox{2.2ex}[0pt]{Corollary \ref{cor:12-3:average}} \\ \hline \end{tabular} \end{center} \label{tab1} \end{table} Note that as an immediate consequence, we have the following result. \begin{theorem} For any pattern $p$ of type $(2,1)$, the average number $\mathrm{avr}(n)$ of occurrences of $p$ in $\text{Flatten}(\pi)$ over all permutations~$\pi$ of length~$n$ satisfies \[ \lim_{n\to\infty}{\mathrm{avr}(n)\over n^2}={1\over 12}. 
\] \end{theorem} We will need the following notation related to symmetric functions. Let $X=\{x_1,x_2,\ldots,x_m\}$ be an ordered set. Define \begin{align*} e_j(X) &=\sum_{1\le i_1<i_2<\cdots< i_j\le m}x_{i_1}x_{i_2}\cdots x_{i_j},\\ e_j'(X) &=\sum_{1\le i_1\le i_2-2\le i_3-4\le\cdots\le i_j-2(j-1)\le m}x_{i_1}x_{i_2}\cdots x_{i_j},\\ h_j(X) &=\sum_{1\le i_1\le i_2\le\cdots\le i_j\le m}x_{i_1}x_{i_2}\cdots x_{i_j}. \end{align*} In other words, $e_j(X)$ is the sum of products of any $j$ distinct elements of~$X$; $e_j'(X)$ is the sum of products of any $j$ pairwise non-adjacent elements in~$X$; $h_j(X)$ is the sum of products of any $j$ elements (non-distinct allowed) of~$X$. For convenience, for any function $s_j(X)$ of these three, let $s_0(X)=1$, $s_j(\emptyset)=\delta_{j,1}$ and $s_j(X)=0$ if $j<0$. Throughout this paper, we will make use of the Kronecker delta notation $\delta_{i,j}$ defined by \[ \delta_{i,j}=\begin{cases} 1,&\text{if $i=j$};\\ 0,&\text{if $i\ne j$}. \end{cases} \] We will follow the standard notation $[n]=\sum_{j=0}^{n-1}q^j={1-q^n\over 1-q}$ and $[m]!=\prod_{i=1}^m[i]$, with \[ {n\brack k}=\begin{cases} {[n]!\over[k]![n-k]!},&\text{if }0\le k\le n;\\ 0,&\text{otherwise}, \end{cases} \] where $q$ is an indeterminate (from this point on, $[n]$ denotes this $q$-integer rather than the set $\{1,2,\ldots,n\}$). Note that $[n]\big|_{q=0}=1$ and $[n]\big|_{q=1}=n$. Moreover, $[n]'|_{q=1}={n\choose 2}$. Let $H_n=\sum_{k=1}^n{1\over k}$ denote the $n$-th harmonic number; see Graham, Knuth and Patashnik~\cite{GKP94}. Denote the Stirling number of the second kind by $S(n,k)$, the $n$th Bell number by $B_n$, and the $n$th complementary Bell number by $\tilde{B}_n$. 
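The three families just defined can be enumerated directly. The sketch below (names are ours) implements $e_j$, $e_j'$, $h_j$ and the $q$-bracket $[n]$ by brute force over index sets; setting all variables equal to $1$ recovers the monomial counts $\binom{m}{j}$, $\binom{m-j+1}{j}$ and $\binom{m+j-1}{j}$, respectively.

```python
from fractions import Fraction
from itertools import combinations, combinations_with_replacement
from math import prod

def e(j, X):
    # elementary symmetric function: products over j distinct indices
    return sum(prod(s) for s in combinations(X, j))

def e_prime(j, X):
    # sum over j pairwise non-adjacent indices (i_{t+1} >= i_t + 2)
    return sum(prod(X[i] for i in s)
               for s in combinations(range(len(X)), j)
               if all(b - a >= 2 for a, b in zip(s, s[1:])))

def h(j, X):
    # complete homogeneous symmetric function: indices may repeat
    return sum(prod(s) for s in combinations_with_replacement(X, j))

def qint(n, q):
    # the q-bracket [n] = 1 + q + ... + q^{n-1}
    return sum(q ** i for i in range(n))
```

With four variables all equal to $1$: \texttt{e(2, [1]*4)} $=6=\binom42$, \texttt{e\_prime(2, [1]*4)} $=3=\binom32$, and \texttt{h(2, [1]*4)} $=10=\binom52$. Using an exact rational $q$ via \texttt{Fraction} keeps experiments with $q$-identities free of rounding error.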
\section{Counting $31\dash2$-patterns} For any $3\le i\le n$, we have \begin{align} g_n(1i) &=\sum_{j<i}g_n(1ij)+\sum_{j>i}g_n(1ij) =\sum_{j<i}q^{i-j-1}g_{n-1}(1j)+\sum_{j\ge i}g_{n-1}(1j)\notag\\ &=g_{n-1}+\sum_{j\le i-1}(q^{i-j-1}-1)g_{n-1}(1j).\label{rec1:31-2:g1i} \end{align} Define \begin{align} G_{n,r}(v)&=\sum_{i\ge2}g_{n,r}(1i)v^{i-2},\\ G_{r}(x,v)&=\sum_{n\ge 2}G_{n,r}(v)x^n=\sum_{n\ge 2}\sum_{i\ge2}g_{n,r}(1i)v^{i-2}x^{n}. \end{align} It follows that $G_{n,r}(1)=g_{n,r}$ and $G_{r}(x,1)=\sum_{n\ge 2}g_{n,r}x^n$. As before, we have $g_{n,r}(12)=2g_{n-1,r}$. \begin{theorem} For any integer $r\ge0$, we have \begin{align} \Bigl(1+{v^2x\over 1-v}\Bigr)G_{r}(x,v) =x\sum_{j=0}^{r-1}v^{r-j+1}G_j(x,v)+{2-v\over 1-v}xG_{r}(x,1)+2(2+v)x^3\delta_{r,0} +H_r(x,v), \label{rec:31-2:Gr} \end{align} where $H_r(x,v)=-x\sum_{s=0}^{r-1}\sum_{j\ge2}\sum_{n=3}^{j+r-s-1}g_{n,s}(1j)v^{j+r-s-1}x^n$. \end{theorem} \begin{proof} Let $r\ge0$. Extracting the coefficient of $q^r$ in~(\ref{rec1:31-2:g1i}) gives \begin{equation}\label{eq:2.1} g_{n,r}(1i)-g_{n-1,r}+\sum_{j=2}^{i-2}g_{n-1,r}(1j)-\sum_{j=2}^{i-2}g_{n-1,r-i+j+1}(1j)=0, \quad 3\le i\le n, \end{equation} with $g_{2,r}=g_{2,r}(12)=2\delta_{r,0}$. 
Multiplying each of the four terms on the left-hand side of (\ref{eq:2.1}) by ${v^{i-2}x^{n}}$, and summing over $n \geq 3$ and $3\le i\le n$, yields \begin{align*} &\sum_{n\ge 3}\sum_{i=3}^{n}g_{n,r}(1i)v^{i-2}x^{n} =G_{r}(x,v)-2xG_r(x,1)-4x^{3}\delta_{r,0},\\ &\sum_{n\ge 3}\sum_{i=3}^{n}g_{n-1,r}v^{i-2}x^{n} ={vx\over 1-v}G_{r}(x,1) -{x\over 1-v}G_{r}(vx,1) +2vx^{3}\delta_{r,0},\\ &\sum_{n\ge 3}\sum_{i=3}^{n}\sum_{j=2}^{i-2}g_{n-1,r}(1j)v^{i-2}x^{n} ={x\over 1-v}\Bigl(v^2G_r(x,v)-G_{r}(vx,1)\Bigr), \end{align*} and \begin{align*} &\sum_{n\ge 3}\sum_{i=3}^{n}\sum_{j=i-1-r}^{i-2}g_{n-1,r-i+j+1}(1j)v^{i-2}x^{n}\\ =&x\sum_{n\ge 3}\sum_{i=4}^{n+1}\sum_{j=i-1-r}^{i-2}g_{n,r-i+j+1}(1j)v^{i-2}x^{n}\\ =&x\sum_{i\ge 4}\sum_{j=i-1-r}^{i-2}\sum_{n\ge i-1}g_{n,r-i+j+1}(1j)v^{i-2}x^{n}\\ =&x\sum_{s\le r-1}\sum_{j\ge2}\sum_{n\ge r+j-s}g_{n,s}(1j)v^{r+j-s-1}x^{n}, \end{align*} which combine to give \eqref{rec:31-2:Gr}. \end{proof} Taking $r=0$ in recurrence~(\ref{rec:31-2:Gr}) gives \begin{align*} \Bigl(1+{v^2x\over 1-v}\Bigr)G_{0}(x,v)={2-v\over 1-v}xG_0(x)+2(2+v)x^3. \end{align*} To solve this equation, we use the kernel method and substitute $v=C(x)=\frac{1-\sqrt{1-4x}}{2x}$, the generating function for the Catalan numbers, which satisfies $C(x)=1+xC(x)^2$, to obtain $$G_0(x)=\frac{x}{\sqrt{1-4x}}-x-2x^2$$ and \begin{align}\label{eqAAG0} G_{0}(x,C(x))=\lim_{v\rightarrow C(x)}G_0(x,v)=\frac{x^2(\sqrt{1-4x}+8x-1)}{1-4x}. \end{align} Taking $r=1$ in (\ref{rec:31-2:Gr}) and using the fact that $H_1(x,v)=-x\sum_{j\ge3}g_{j,0}(1j)v^jx^j=-\frac{2x^4v^3}{1-xv}$, we obtain \begin{align} \Bigl(1+{v^2x\over 1-v}\Bigr)G_1(x,v) =xv^2G_0(x,v)+{2-v\over 1-v}xG_1(x)-2\frac{x^4v^3}{1-xv}. \end{align} Substituting $v=C(x)=\frac{1-\sqrt{1-4x}}{2x}$ into this equation, and using (\ref{eqAAG0}), yields $$G_1(x)=\frac{(3x-1)(1-5x+2x^2)+(1-6x+7x^2)\sqrt{1-4x}}{x\sqrt{(1-4x)^3}},$$ and thus \begin{align*} G_{1}(x,C(x))&=\lim_{v\rightarrow C(x)}G_1(x,v)\\ &= \frac{(1-4x)(5x^4-x^3-16x^2+8x-1)-(17x^4+19x^3-30x^2+10x-1)\sqrt{1-4x}}{x\sqrt{(1-4x)^5}}. 
\end{align*} Continuing in this way for $r=2,3$, we obtain the following result. \begin{corollary} For $0\le r\le 3$, we have \[ G_r^{31\dash2}(x)=\frac{a_r(x)+b_r(x)\sqrt{1-4x}}{\sqrt{(1-4x)^{2r+1}}}, \] where \begin{itemize} \item[(i)] $a_0(x)=x$, $b_0(x)=-x-2x^2$; \item[(ii)] $a_1(x)=\frac{1}{x}(3x-1)(1-5x+2x^2)$, $b_1(x)=\frac{1}{x}(1-6x+7x^2)$; \item[(iii)] $a_2(x)=\frac{1}{x}(1-12x+50x^2-76x^3+22x^4)$, $b_2(x)=\frac{1}{x}(-1+10x-32x^2+28x^3)$; \item[(iv)] $a_3(x)=\frac{1}{x^2}(2-37x+270x^2-972x^3+1748x^4-1346x^5+220x^6)$, $b_3(x)=\frac{1}{x^2}(-2+33x-208x^2+614x^3-824x^4+368x^5)$. \end{itemize} \end{corollary} Applying a second-order difference transformation to~(\ref{rec1:31-2:g1i}) gives \begin{equation}\label{rec:31-2:g1k} g_n(1k) =(q+1)g_n\bigl(1(k-1)\bigr)-q\cdotp g_n\bigl(1(k-2)\bigr)+(q-1)g_{n-1}\bigl(1(k-2)\bigr), \quad 5\le k\le n. \end{equation} We shall solve it with the initial values \begin{equation}\label{ini:31-2:g1k} \begin{split} g_n(13)&=g_{n-1},\quad n\ge3,\\ g_n(14)&=g_{n-1}+2(q-1)g_{n-2},\quad n\ge4. \end{split} \end{equation} \begin{theorem} For any $n\ge2$, we have \begin{equation}\label{rec:31-2:gn} g_n=\sum_{j=1}^{\lfloor n/2\rfloor}(q-1)^{j-1}b_{n,j}g_{n-j}, \end{equation} where \[ b_{n,j}=\sum_{k=0}^{n-j-1}{n-k\over j}{n-j-1-k\choose j-1}{j-2+k\choose j-2}q^k. \] \end{theorem} \begin{proof} For any $k\ge3$ and any integer $j$, define $a_{k,j}$ by $a_{3,j}=\delta_{j,1}$, $a_{4,j}=\delta_{j,1}+2\delta_{j,2}$ and \begin{equation}\label{def:31-2:a} a_{k,j}=(q+1)a_{k-1,j}-q\cdotp a_{k-2,j}+a_{k-2,j-1},\quad k\ge 5. \end{equation} By~(\ref{rec:31-2:g1k}) and~(\ref{ini:31-2:g1k}), it is routine to verify that \[ g_n(1k)=\sum_{j\le \lfloor k/2\rfloor}a_{k,j}(q-1)^{j-1}g_{n-j}, \quad 3\le k\le n. \] Consequently, for any $n\ge2$, we have \[ g_n=\sum_{k\ge2}g_n(1k) =ng_{n-1}+\sum_{j=2}^{\lfloor n/2\rfloor}(q-1)^{j-1}b_{n,j}g_{n-j}, \] where $b_{n,j}=\sum_{k=3}^na_{k,j}$. 
By~(\ref{def:31-2:a}), we have \[ b_{n,j} =(q+1)b_{n-1,\,j} -qb_{n-2,\,j} +b_{n-2,\,j-1} +2\chi(j=2\text{ and }n\ge4),\quad j\ge2. \] It follows that \begin{align*} B(x,y)=\sum_{n\ge4}\sum_{j\ge2}b_{n,j}x^{n-4}y^{j-2} ={2-x\over(1-x)^2}\cdot{1\over(1-x)(1-qx)-x^2y}. \end{align*} We obtain the desired expression of $b_{n,j}$ by extracting the coefficient of $x^{n-4}y^{j-2}$ from $B(x,y)$. \end{proof} \begin{corollary}\label{cor:31-2:average} For any $n\ge1$, the number of permutations~$\pi$ of length~$n$ with $\text{Flatten}(\pi)$ avoiding $31\dash2$ is ${2n-2\choose n-1}$, and the average number of occurrences of $31\dash2$ in $\text{Flatten}(\pi)$ over $\pi\in\mathcal{S}_n$ is given by ${n^3-3n^2+26n-12\over 12n}-H_n$. \end{corollary} \section{Recurrence in terms of symmetric functions} \subsection{Counting $32\dash1$-patterns} Let $3\le i\le n$. We have \begin{align} g_n(1i) &=\sum_{j<i}g_n(1ij)+\sum_{j>i}g_n(1ij) =\sum_{j<i}q^{j-2}g_{n-1}(1j)+\sum_{j\ge i}g_{n-1}(1j)\notag\\ &=g_{n-1}+\sum_{j\le i-1}(q^{j-2}-1)g_{n-1}(1j).\label{rec1:32-1:g1i} \end{align} Define \begin{align} G_{n,r}(v)&=\sum_{i}g_{n,r}(1i)v^{i-2},\\ G_{r}(x,v)&=\sum_{n\ge 3}G_{n,r}(v)x^n=\sum_{n\ge 3}\sum_{i}g_{n,r}(1i)v^{i-2}x^{n}. \end{align} It follows that $G_{n,r}(1)=g_{n,r}$ and $G_{r}(x,1)=\sum_{n\ge 3}g_{n,r}x^n$. As before, we have $g_{n,r}(12)=2g_{n-1,r}$. \begin{lemma} For any integer $r\ge0$, we have \begin{align} \Bigl(1+{vx\over 1-v}\Bigr)G_{r}(x,v) =&{x(2-v+2vx)\over 1-v}G_{r}(x,1) -{2vx^2\over 1-v}G_r(vx,1)\notag\\ &+2x^3(2+v+2vx+2v^2x)\delta_{r,0} +H_r(x,v), \label{rec:32-1:Gr} \end{align} where $H_r(x,v)={x\over 1-v}\sum_{n\ge3}\sum_{j=3}^ng_{n,r-j+2}(1j)x^{n}(v^{j-1}-v^{n})$. \end{lemma} \begin{proof} Let $r\ge0$. Extracting the coefficient of $q^r$ in~(\ref{rec1:32-1:g1i}) gives \begin{equation}\label{eq:32-1:gnr1i} g_{n,r}(1i)-g_{n-1,r}+\sum_{j\le i-1}g_{n-1,r}(1j)-\sum_{j\le i-1}g_{n-1,r-j+2}(1j)=0,\quad 3\le i\le n. \end{equation} In the last sum in this formula, the subscript $j$ has upper bound $\min(i-1,r+2)$. Note that $G_{2,r}(v)=g_{2,r}=2\delta_{r,0}$. We obtain equation~(\ref{rec:32-1:Gr}) by multiplying~(\ref{eq:32-1:gnr1i}) by ${v^{i-2}x^{n}}$ and summing over $n\ge 3$ and $3\le i\le n$. The expressions that result from performing these operations on the four summands in \eqref{eq:32-1:gnr1i} are \begin{align*} &\sum_{n\ge 3}\sum_{i=3}^{n}g_{n,r}(1i)v^{i-2}x^{n} =G_{r}(x,v)-2xG_r(x,1)-4x^{3}\delta_{r,0},\\ &\sum_{n\ge 3}\sum_{i=3}^{n}g_{n-1,r}v^{i-2}x^{n} ={vx\over 1-v}G_{r}(x,1) -{x\over 1-v}G_{r}(vx,1) +2vx^{3}\delta_{r,0},\\ &\sum_{n\ge 3}\sum_{i=3}^{n}\sum_{j=2}^{i-1}g_{n-1,r}(1j)v^{i-2}x^{n} ={vx\over 1-v}G_r(x,v) -{x\over 1-v}G_r(vx,1) +2vx^3\delta_{r,0}, \end{align*} and \begin{align*} &\sum_{n\ge 3}\sum_{i=3}^{n}\sum_{j=2}^{i-1}g_{n-1,r-j+2}(1j)v^{i-2}x^{n}\\ =&x\sum_{n\ge2}\sum_{i=3}^{n+1}\sum_{j=2}^{i-1}g_{n,r-j+2}(1j)v^{i-2}x^{n} ={x\over 1-v}\sum_{n\ge2}\sum_{j=2}^ng_{n,r-j+2}(1j)x^{n}(v^{j-1}-v^{n})\\ =&2vx^3(1+2x+2vx)\delta_{r,0} +{2vx^2\over 1-v}\bigl(G_r(x,1)-G_r(vx,1)\bigr) +H_r(x,v). \end{align*} This completes the proof. \end{proof} The first-order difference transformation applied to \eqref{rec1:32-1:g1i} gives the recurrence \begin{equation}\label{rec:32-1:g1k} g_n(1k)=g_n\bigl(1(k-1)\bigr)+(q^{k-3}-1)g_{n-1}\bigl(1(k-1)\bigr),\quad 4\le k\le n. \end{equation} We will solve it with the initial value $g_n(13)=g_{n-1}$. \begin{lemma}\label{lemeee} Let $k\geq3$. 
For all $j\geq1$, $$e_j([1],\ldots,[k-3])=\frac{1}{(1-q)^j}\sum_{a=0}^{k-3}(-1)^aq^{\binom{a+1}{2}}{k-3\brack a}\binom{k-3-a}{k-3-j}.$$ \end{lemma} \begin{proof} For variables $x_1,\ldots,x_m$, set $F(x_1,\ldots,x_m)=\sum_{j=0}^me_j(x_1,\ldots,x_m)z^j$. By the definition of elementary symmetric functions, we deduce \begin{align*} F([1],\ldots,[k-3]) &=\prod_{a=1}^{k-3}(1+[a]z) =\biggl(1+\frac{z}{1-q}\biggr)^{k-3}\biggl(\frac{qz}{1-q+z};q\biggr)_{k-3}\\ &=\sum_{a=0}^{k-3}q^{\binom{a}{2}}{k-3\brack a}\frac{q^a(-z)^a(1-q+z)^{k-3-a}}{(1-q)^{k-3}}\\ &=\sum_{a=0}^{k-3}\sum_{b=0}^{k-3-a}(-1)^aq^{\binom{a+1}{2}}{k-3\brack a}\binom{k-3-a}{b}\frac{z^{a+b}}{(1-q)^{a+b}}. \end{align*} The desired formula now follows from comparing coefficients of $z^j$ on both sides of the above identity. \end{proof} \begin{theorem} For any $n\ge2$, we have \begin{align}\label{rec:32-1:gn} g_n &=ng_{n-1}+ \sum_{j=2}^{n-2}\Biggl( \sum_{a=1}^{j}(-1)^{j-a}q^{{a\choose 2}}\sum_{k=0}^{n-2-j}{j-a+k\choose k}{j-1+k\brack a-1} \Biggr)g_{n-j}. \end{align} \end{theorem} \begin{proof} For any $k\ge3$ and any integer $j$, define $a_{k,j}=e_{j-1}([1],\,[2],\,\ldots,[k-3])$. By~(\ref{rec:32-1:g1k}), it is routine to verify that \[ g_n(1k)=\sum_{j=1}^{k-2}a_{k,j}(q-1)^{j-1}g_{n-j},\quad 3\le k\le n. \] Therefore, \[ g_n=\sum_{k\ge2}g_n(1k)=ng_{n-1}+\sum_{j=2}^{n-2}b_{n,j}(q-1)^{j-1}g_{n-j}, \] where $b_{n,j}=\sum_{k=j+2}^ne_{j-1}([1],\,[2],\,\ldots,[k-3])$. By Lemma \ref{lemeee}, we deduce $$b_{n,j}=\frac{1}{(1-q)^j}\sum_{k=j+2}^n\sum_{a=0}^{k-3}(-1)^aq^{\binom{a+1}{2}}{k-3\brack a}\binom{k-3-a}{k-2-j},$$ which gives \eqref{rec:32-1:gn}. 
\end{proof} \begin{corollary}\label{cor:32-1:average} For any $n\ge2$, the number of permutations~$\pi$ of length~$n$ with $\text{Flatten}(\pi)$ avoiding $32\dash1$ is $\sum_{k=1}^{n-1}2^kS(n-1,k)$, and the average number of occurrences of $32\dash1$ in $\text{Flatten}(\pi)$ over $\pi\in\mathcal{S}_n$ is given by ${n^2-9n-4\over 12}+H_n$. \end{corollary} \begin{proof} Let $n\ge2$. Setting $q=0$ in~(\ref{rec:32-1:gn}), we get \[ g_{n,0}=ng_{n-1,0}+\sum_{j=2}^{n-2}(-1)^{j-1}{n-2\choose j}g_{n-j,0}. \] Note that $g_{1,0}=1$. One can prove by induction that \[ g_{n,0}={1\over e^2}\sum_{k\ge1}{2^kk^{n-1}\over k!}=\sum_{k=1}^{n-1}2^kS(n-1,k). \] The average number can be found in a similar manner to the case $13\dash2$. \end{proof} For more information on the sequence $g_n(0)$, see sequence A001861 in the OEIS~\cite{OEIS}. \subsection{Counting $12\dash3$-patterns} Let $3\le i\le n$. We have \begin{align} g_n(1i) &=\sum_{j<i}g_n(1ij)+\sum_{j>i}g_n(1ij) =\sum_{j<i}q^{j-i+1}g_{n-1}(1j)+q^{n-i}\sum_{j\ge i}g_{n-1}(1j)\notag\\ &=q^{n-i}g_{n-1}-\sum_{j\le i-1}(q^{n-i}-q^{j-i+1})g_{n-1}(1j).\label{rec1:12-3:g1i} \end{align} The first-order difference transformation of~(\ref{rec1:12-3:g1i}) gives the recurrence \begin{equation}\label{rec:12-3:g1k} g_n(1k) =q^{-1}g_n\bigl(1(k-1)\bigr)+(1-q^{n-k})g_{n-1}\bigl(1(k-1)\bigr), \quad 4\le k\le n. \end{equation} We will solve it with the initial value \begin{equation}\label{ini:12-3:g1k} g_n(13)=q^{n-3}g_{n-1}-2q^{n-3}(q^{n-3}-1)g_{n-2},\quad n\ge3. \end{equation} \begin{lemma}\label{lem:h} Let $n\geq0$, $0\le j\le k-1$ and $X=\{[n+i]\colon0\le i\le k-j-1\}$. Then $$h_{j-1}(X) =\sum_{i=0}^{j-1}(-1)^i {k-j-1+i\brack i} \binom{k-2}{j-1-i}\frac{q^{in}}{(1-q)^{j-1}}.$$ \end{lemma} \begin{proof} By the definition of complete symmetric functions, we have $\sum_{j\ge0}h_{j}(X)z^{j}=\prod_{x\in X}(1-xz)^{-1}$ for any set~$X$. 
Taking $X=\{[n+i]\colon0\le i\le k-j-1\}$, we deduce \begin{align*} \sum_{m\ge0}h_{m}(X)z^{m} &=\biggl(1-\frac{z}{1-q}\biggr)^{j-k} \prod_{i=0}^{k-j-1}\biggl(1+\frac{zq^{n}}{1-q-z}q^i\biggr)^{-1}\\ &=\sum_{a\geq0}(-1)^a {k-j-1+a\brack a}\frac{q^{na}z^a}{(1-q)^a}\biggl(1-\frac{z}{1-q}\biggr)^{j-k-a}\\ &=\sum_{a,b\geq0}(-1)^a {k-j-1+a\brack a}\binom{k-j+a+b-1}{b}\frac{q^{na}z^{a+b}}{(1-q)^{a+b}}. \end{align*} The desired formula now follows from comparing coefficients of $z^{j-1}$ on both sides of the above identity. \end{proof} \begin{theorem} For any $n\ge3$, we have \begin{equation}\label{rec:12-3:gn} g_n=\sum_{j=2}^{n-1}c_{n,j}g_{n-j}, \end{equation} where \[ c_{n,j}=\sum_{i=0}^{j-1}\sum_{k=0}^{n-1-j}(-1)^i\left( 2{k+i\brack i}{k+j-1\choose j-i-1} -{k+i-1\brack i}{k+j-2\choose j-i-1}\right)q^{(i+1)(n-j-k-1)}. \] \end{theorem} \begin{proof} By~(\ref{rec:12-3:g1k}) and~(\ref{ini:12-3:g1k}), it is routine to verify that \[ {g_n(1k)}=q^{n-k}\sum_{j=1}^{k-1}(1-q)^{j-1}a_{n,k,j}g_{n-j}, \qquad 3\le k\le n, \] where $a_{n,k,j} =2h_{j-1}\bigl(\bigl\{[n-i]\colon j+1\le i\le k\bigr\}\bigr) -h_{j-1}\bigl(\bigl\{[n-i]\colon j+2\le i\le k\bigr\}\bigr)$. Therefore, \begin{equation}\label{rec:12-3:gna} g_n=\sum_{k\ge2}g_n(1k) =\bigl(2q^{n-2}+[n-2]\bigr)g_{n-1}+\sum_{j=2}^{n-1}b_{n,j}(1-q)^{j-1}g_{n-j}, \end{equation} where \begin{equation}\label{def:12-3:a} b_{n,j} =\sum_{k=j+1}^n \biggl(2h_{j-1}\bigl(\bigl\{[n-i]\colon j+1\le i\le k\bigr\}\bigr) -h_{j-1}\bigl(\bigl\{[n-i]\colon j+2\le i\le k\bigr\}\bigr)\biggr)q^{n-k}. \end{equation} Recurrence \eqref{rec:12-3:gn} now follows from Lemma \ref{lem:h}. 
\end{proof} \begin{corollary}\label{cor:12-3:average} For any $n\ge2$, the number of permutations~$\pi$ of length~$n$ with $\text{Flatten}(\pi)$ avoiding $12\dash3$ is $-2\sum_{i=0}^{n-2}{n-2\choose i}(B_i+B_{i+1})\tilde{B}_{n-i-3}$, and the average number of occurrences of $12\dash3$ in $\text{Flatten}(\pi)$ over $\pi\in\mathcal{S}_n$ is given by ${n^3+3n^2-40n+24\over 12n}+H_n$. \end{corollary} \begin{proof} Letting $q=0$ in~(\ref{rec:12-3:gn}), we get \[ g_n(0)=\sum_{j=1}^{n-2}\Biggl({n-3\choose j-1}+{n-4\choose j-2}\Biggr)g_{n-j}(0), \quad n\ge4, \] with $g_1(0)=1$ and $g_2(0)=g_3(0)=2$. Define $G(x)=\sum_{n\ge2}g_n(0){x^{n-2}\over (n-2)!}$. Then the above recurrence translates to \[ G''(x)=(e^xG(x))'+e^xG(x). \] Solving this differential equation gives \[ G(x)=2(e^x+1)e^{e^x-1}\biggl(1-\int_{0}^xe^{1-e^t} dt\biggr)-2. \] Note that \begin{align*} e^{e^x-1}&=\sum_{n\ge0}B_n{x^n\over n!},\\ e^{e^x-1+x}&=\sum_{n\ge0}B_{n+1}{x^n\over n!},\\ \int_{0}^xe^{1-e^t} dt&=\int_0^x\sum_{n\ge0}\tilde{B}_n{t^n\over n!} dt =\sum_{n\ge1}\tilde{B}_{n-1}{x^{n}\over n!}. \end{align*} Recall that the sequence $\{\tilde{B}_n\}$ contains both positive and negative integers (see Rao Uppuluri and Carpenter \cite{RC69} and entry A000587 in the OEIS \cite{OEIS}). We define $\tilde{B}_{-1}=-1$. Then extracting the coefficient of $x^{n-2}$ from the formula above for $G(x)$ gives \[ g_n(0)=-2\sum_{i=0}^{n-2}{n-2\choose i}(B_i+B_{i+1})\tilde{B}_{n-i-3},\quad n\ge3, \] which completes the proof of the first statement. For the average number of occurrences, we differentiate both sides of (\ref{rec:12-3:gna}) and set $q=1$ to obtain \begin{align*} g_n'(1) &=\Bigl(2(n-2)q^{n-3}+[n-2]'\Bigr)\Big|_{q=1}g_{n-1}(1)+ng_{n-1}'(1)-b_{n,2}\big|_{q=1}g_{n-2}(1)\\ &=\biggl(2(n-2)+{n-2\choose 2}\biggr)(n-1)!+ng_{n-1}'(1)-{(n+2)(n-2)(n-3)\over3}(n-2)!. \end{align*} The desired result now follows from solving this recurrence and noting that the average number of occurrences is given by $g_n'(1)/n!$. 
\end{proof}

\section{Recurrence in terms of generalized symmetric functions}

\subsection{Counting $23\dash1$-patterns}

Let $3\le i\le n$. We have
\begin{align}
g_n(1i) &=\sum_{j<i}g_n(1ij)+\sum_{j>i}g_n(1ij) =\sum_{j<i}g_{n-1}(1j)+\sum_{j\ge i}q^{i-2}g_{n-1}(1j)\notag\\
&=q^{i-2}g_{n-1}+(1-q^{i-2})\sum_{j\le i-1}g_{n-1}(1j).\label{rec1:23-1:g1i}
\end{align}
The first-order difference transformation of the above formula gives the recurrence
\begin{equation}\label{rec:23-1:g1k}
g_n(1k) =-{q^{k-3}g_{n-1}\over[k-3]} +{[k-2]\over[k-3]}g_n\bigl(1(k-1)\bigr) +(1-q)[k-2]g_{n-1}\bigl(1(k-1)\bigr),\quad 4\le k\le n.
\end{equation}
We will solve it with the initial value $g_n(13)=q\cdot g_{n-1}+2(1-q)g_{n-2}$.

\begin{theorem}
For all $n\geq2$, we have
\begin{equation}\label{rec:23-1:gn}
g_n=\bigl(1+[n-1]\bigr)g_{n-1}+\sum_{j=2}^{n-1}b_{n,j}(1-q)^{j-1}g_{n-j},
\end{equation}
where
\[
b_{n,2} =\sum_{k=1}^{n-2}[k]\bigl(1+[k]\bigr) ={(2-q)n\over(q-1)^2} +{q^3-3q^2+q+4\over(q-1)^3(q+1)} +{(q-3)q^{n-1}\over(q-1)^3} +{q^{2n-2}\over(q-1)^3(q+1)},
\]
and for $j\ge3$,
\[
b_{n,j}=\sum_{k=j+1}^n \sum_{1\le i_1<i_2<\cdots<i_{j-2}\le k-3}\bigl(1+[i_1]\bigr)[i_1][i_2]\cdots[i_{j-2}][k-2].
\]
\end{theorem}

\begin{proof}
For any $3\le k\le n$ and any integer $j$, define $d_{n,k,j}$ by $d_{n,3,j}=q\delta_{j,1}+2(1-q)\delta_{j,2}$ and
\[
d_{n,k,j} =-\frac{q^{k-3}}{[k-3]}\delta_{j,1} +\frac{[k-2]}{[k-3]}d_{n,k-1,j} +(1-q)[k-2]d_{n-1,k-1,j-1}, \quad 4\le k\le n.
\]
By~(\ref{rec:23-1:g1k}), it is easy to verify that $g_n(1k)=\sum_{j=1}^{k-1}d_{n,k,j}g_{n-j}$ for any $3\le k\le n$. On the other hand, we can solve $d_{n,k,j}$ by iteration as follows.
For any $k\ge4$, we have
\begin{align*}
d_{n,k,1}&=-[k-2]\sum_{j=3}^{k-1}{q^{k-j}\over[k-j][k-j+1]}+[k-2]d_{n,3,1}=q^{k-2},\\
d_{n,k,2}&=[k-2]d_{n,3,2}+(1-q)[k-2]\sum_{j=1}^{k-3}q^j =(1-q)[k-2]\bigl(1+[k-2]\bigr),
\end{align*}
and for $j\ge3$,
\begin{align*}
d_{n,k,j} &=(1-q)[k-2]\sum_{i=j}^{k-1}d_{n-1,i,j-1}\\
&=(1-q)^{j-2}[k-2] \sum_{2<i_{j-2}<\cdots<i_2<i_1<k}[i_1-2][i_2-2]\cdots[i_{j-3}-2]d_{n-j+2,i_{j-2},2}\\
&=(1-q)^{j-1}[k-2]a_{k,j},
\end{align*}
where
\[
a_{k,j} =\sum_{1\le i_1<i_2<\cdots<i_{j-2}\le k-3}\bigl(1+[i_1]\bigr)[i_1][i_2]\cdots[i_{j-2}],\quad j\ge3.
\]
Therefore, for all $3\le k\le n$, we have
\[
g_n(1k)=q^{k-2}g_{n-1}+[k-2]\sum_{j=2}^{k-1}a_{k,j}(1-q)^{j-1}g_{n-j},
\]
where $a_{k,2}=1+[k-2]$. Consequently, we obtain the desired formula by using $g_n=\sum_{k=2}^ng_n(1k)$.
\end{proof}

\begin{corollary}\label{cor:23-1:average}
For any $n\ge2$, the number of permutations $\pi$ of length~$n$ with $\text{Flatten}(\pi)$ avoiding $23\dash1$ is $\sum_{k=1}^{n-1}2^kS(n-1,k)$, and the average number of occurrences of $23\dash1$ in $\text{Flatten}(\pi)$ over $\pi\in\mathcal{S}_n$ is given by ${n^2-9n-4\over 12}+H_n$.
\end{corollary}

\begin{proof}
Let $n\ge2$. Taking $q=0$ in the recurrence~(\ref{rec:23-1:gn}), we obtain
\begin{equation}\label{rec:23-1:avoidance}
g_n(0)=2\sum_{j=1}^{n-1}{n-2\choose j-1}g_{n-j}(0).
\end{equation}
Note that $g_1(0)=1$. One can prove by induction that
\[
g_n(0)={1\over e^2}\sum_{k\ge1}{2^kk^{n-1}\over k!}=\sum_{k=1}^{n-1}2^kS(n-1,k).
\]
The average number of occurrences may be obtained as it was for previous patterns.
\end{proof}

\subsection{Counting $21\dash3$-patterns}

Let $3\le i\le n$.
We have
\begin{align}
g_n(1i) &=\sum_{j<i}g_n(1ij)+\sum_{j>i}g_n(1ij) =\sum_{j<i}q^{n-i}g_{n-1}(1j)+\sum_{j\ge i}g_{n-1}(1j)\notag\\
&=g_{n-1}-(1-q^{n-i})\sum_{j\le i-1}g_{n-1}(1j).\label{eq:21-3:g1i}
\end{align}
In particular, since $g_n(12)=2g_{n-1}$, we have
\begin{equation}\label{fm:21-3:g13}
g_n(13)=g_{n-1}-(1-q^{n-3})g_{n-1}(12) =g_{n-1}+2(q-1)[n-3]g_{n-2},\quad n\ge3.
\end{equation}
So we can focus on $n\ge4$. The first-order difference transformation of~(\ref{eq:21-3:g1i}) gives
\begin{align}\label{rec:21-3:g1k}
g_n(1k) ={q^{n-k}g_{n-1}\over[n-k+1]} +{[n-k]g_n\bigl(1(k-1)\bigr)\over[n-k+1]} +(q-1)[n-k]g_{n-1}\bigl(1(k-1)\bigr),\quad 4\le k\le n.
\end{align}

\begin{theorem}
For all $n\geq2$, we have
\begin{equation}\label{rec:21-3:gn}
g_n=ng_{n-1}+\sum_{j=2}^{n-1}b_{n,j}(q-1)^{j-1}g_{n-j},
\end{equation}
where
\[
b_{n,2} =\sum_{k=3}^n(k-1)[n-k] =-{n^2\over 2(q-1)}+{(q-3)n\over2(q-1)^2}+{q(2q^{n-2}-q^{n-3}+q-2)\over(q-1)^3},
\]
and for $j\ge3$,
\[
b_{n,j}=\sum_{k=j+1}^n \sum_{n-k\le i_1\le i_2\le \cdots\le i_{j-2}\le n-j-1}(n-j-i_{j-2}+1)[i_1][i_2]\cdots[i_{j-2}][n-k].
\]
\end{theorem}

\begin{proof}
For any $3\le k\le n$ and any integer $j$, define $d_{n,k,j}$ by $d_{n,3,j}=\delta_{j,1}+2(q-1)[n-3]\delta_{j,2}$ and
\[
d_{n,k,j} =\frac{q^{n-k}}{[n-k+1]}\delta_{j,1} +\frac{[n-k]}{[n-k+1]}d_{n,k-1,j} +(q-1)[n-k]d_{n-1,k-1,j-1},\quad 4\le k\le n.
\]
By~(\ref{rec:21-3:g1k}), it is easy to verify that $g_n(1k)=\sum_{j=1}^{k-1}d_{n,k,j}g_{n-j}$ for any $3\le k\le n$. On the other hand, we can solve $d_{n,k,j}$ by iteration as follows:
\begin{align*}
d_{n,k,1}&=\frac{[n-k]}{[n-3]}+\sum_{j=4}^k\frac{[n-k]}{[n-j][n+1-j]}q^{n-j}=1,\\
d_{n,k,2}&=\frac{[n-k]}{[n-3]}d_{n,3,2}+(k-3)(q-1)[n-k]=(q-1)(k-1)[n-k],
\end{align*}
and for $j\ge3$,
\begin{align*}
d_{n,k,j}&=(q-1)[n-k]\sum_{i=j+1}^{k-1}d_{n-1,i,j-1}\\
&=(q-1)^{j-1}[n-k]\sum_{3\le i_{j-2}<\cdots<i_1\le k-1}(i_{j-2}-1) \prod_{\ell=1}^{j-2}[n-\ell-i_\ell].
\end{align*} Combining these formulas, we obtain $$g_n(1k)=g_{n-1}+[n-k]\sum_{j=2}^{k-1}a_{n,k,j}(q-1)^{j-1}g_{n-j},\quad 3\le k\le n,$$ where $a_{n,k,2}=k-1$ and \begin{equation}\label{def:21-3:a} a_{n,k,j} =\sum_{n-k\le i_1\le i_2\le \cdots\le i_{j-2}\le n-j-1}(n-j-i_{j-2}+1)[i_1][i_2]\cdots[i_{j-2}]. \end{equation} We derive~(\ref{rec:21-3:gn}) by using $g_n=\sum_{k=2}^ng_n(1k)$. This completes the proof. \end{proof} \begin{corollary}\label{cor:21-3:average} For any $n\ge2$, the number of permutations~$\pi$ of length~$n$ with $\text{Flatten}(\pi)$ avoiding $21\dash3$ equals $2\sum_{k=1}^{n-1}kS(n-1,k)$, and the average number of occurrences of $21\dash3$ in $\text{Flatten}(\pi)$ over $\pi\in\mathcal{S}_n$ is given by ${n^3-3n^2+26n-12\over 12n}-H_n$. \end{corollary} \begin{proof} Let $n\ge3$. Letting $q=0$ in~(\ref{rec:21-3:gn}) gives \begin{equation}\label{rec:21-3:avoidance} g_n(0)=ng_{n-1}(0)-{n(n-3)\over 2}g_{n-2}(0) +\sum_{j=3}^{n-1}(-1)^{j-1}\Biggl({n-2\choose j}+{n-3\choose j-1}\Biggr)g_{n-j}(0). \end{equation} Note that $g_1(0)=1$ and $g_2(0)=2$. By a routine application of the generating function technique, we may deduce that \[ \sum_{n\ge0}g_{n+2}(0){x^n\over n!}=2e^{e^x+2x-1}, \] which gives the desired formula of $g_n(0)$. Differentiating both sides of (\ref{rec:21-3:gn}), and setting $q=1$, yields \[ g_n'(1) =ng_{n-1}'(1)+b_{n,2}\big|_{q=1}g_{n-2}(1) =ng_{n-1}'(1)+{(n+2)(n-2)(n-3)\over6}(n-2)!. \] Solving this recurrence, we obtain the requested formula for the average number $g_n'(1)/n!$. \end{proof} \section{Combinatorial proofs} In this section, we provide combinatorial proofs of Corollaries \ref{cor:31-2:average}, \ref{cor:32-1:average}, and \ref{cor:23-1:average} and of the statements concerning the average number of occurrences in Corollaries \ref{cor:12-3:average} and \ref{cor:21-3:average}. 
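Before turning to the bijective arguments, the two avoidance counts just established are easy to confirm by machine for small $n$. The following standalone Python script is a sanity check of ours, not part of the original text; all function names are our own. It flattens each permutation using the standard-cycle-form convention used throughout the paper (each cycle begins with its smallest element, cycles listed in increasing order of first elements) and compares brute-force counts with the closed forms of Corollaries \ref{cor:23-1:average} and \ref{cor:21-3:average}.

```python
from itertools import permutations

def flatten(perm):
    # perm is a tuple with perm[i-1] = image of i.  Standard cycle form:
    # each cycle starts with its smallest element and cycles are listed in
    # increasing order of first elements; erasing parentheses gives the word.
    n, seen, word = len(perm), set(), []
    for start in range(1, n + 1):
        x = start
        while x not in seen:
            seen.add(x)
            word.append(x)
            x = perm[x - 1]
    return word

def count_23_1(w):
    # occurrences of 23-1: adjacent ascent w[i] < w[i+1], later w[j] < w[i]
    return sum(1 for i in range(len(w) - 1) if w[i] < w[i + 1]
               for j in range(i + 2, len(w)) if w[j] < w[i])

def count_21_3(w):
    # occurrences of 21-3: adjacent descent w[i] > w[i+1], later w[j] > w[i]
    return sum(1 for i in range(len(w) - 1) if w[i] > w[i + 1]
               for j in range(i + 2, len(w)) if w[j] > w[i])

def stirling2(m, k):
    # Stirling numbers of the second kind S(m, k)
    if m == 0 or k == 0:
        return 1 if m == k else 0
    return k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)

for n in range(2, 7):
    words = [flatten(p) for p in permutations(range(1, n + 1))]
    assert sum(1 for w in words if count_23_1(w) == 0) == \
        sum(2**k * stirling2(n - 1, k) for k in range(1, n))
    assert sum(1 for w in words if count_21_3(w) == 0) == \
        2 * sum(k * stirling2(n - 1, k) for k in range(1, n))
```

For instance, for $n=4$ the script finds $22$ permutations avoiding $23\dash1$ and $20$ avoiding $21\dash3$ in the flattened sense, matching $\sum_k 2^k S(3,k)$ and $2\sum_k kS(3,k)$.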
We first prove the statements concerning the avoidance of the pattern in question.\\ \noindent\textbf{Combinatorial proofs of Corollaries \ref{cor:32-1:average} and \ref{cor:23-1:average} (avoiding).}\\ We first treat the case $23\dash1$. Recall that the Stirling number of the second kind $S(m,k)$ counts the partitions of an $m$-element set into exactly $k$ blocks. Then the sum $\sum_{k=1}^{n-1}2^kS(n-1,k)$ counts the partitions of the set $\{2,3,\ldots,n\}$ having any number of blocks in which some subset of the blocks are marked. We will denote the set of such partitions by $\pi(n)^*$. Let $\pi=B_1/B_2/ \cdots/B_k \in \pi(n)^*$, $1 \leq k \leq n-1$, where the $B_i$ are arranged in \emph{ascending} order of smallest elements and some subset of the $B_i$ are marked. Furthermore, we assume within each block $B_i$ that the elements are written in \emph{descending} order. Finally, let $m_i$ denote the smallest element of block $B_i$, $1 \leq i \leq k$. We now transform $\pi$ into a permutation of size $n$ whose flattened form avoids the pattern $23\dash1$. We start by writing the element $1$ in a cycle by itself as $(1\cdots)$. We first consider the block $B_1$. If $B_1$ is not marked, then write the elements of $B_1$ in descending order after $1$ within its cycle to obtain the longer cycle $(1B_1\cdots)$. If $B_1$ is marked, then write all elements of $B_1$ except for the last one in the cycle with $1$ and start a new cycle with $m_1=2$; at this point, one would have two cycles $(1\widetilde{B}_1\cdots)$ and $(2\cdots)$, where $\widetilde{B}_1=B_1-\{m_1\}$. Continue in this fashion, inductively, as follows. If $i \geq 2$ and block $B_i$ is not marked, then write all of the elements of $B_i$ at the end of the last current cycle, while if $B_i$ is marked, write all of the elements of $B_i$ except $m_i$ at the end of the last current cycle and then write the element $m_i$ in a cycle by itself. 
Doing this for each of the $k$ blocks of $\pi$ yields a permutation $\sigma$ of length $n$ which avoids $23\dash1$ such that $\text{Flatten}(\sigma)$ has exactly $k$ ascents. The above procedure is seen to be reversible, and hence is a bijection, upon considering whether or not the smaller number in an ascent within $\text{Flatten}(\sigma)$ starts a new cycle of $\sigma$. For example, if $n=10$ and $\pi=\{6,5,2\},\{10,7,3\},\{4\},\{9,8\} \in \pi(10)^*$, with the second and third blocks marked, then the corresponding permutation in standard cycle form would be $\sigma=(1,6,5,2,10,7),~(3),~(4,9,8)$. For Corollary \ref{cor:32-1:average}, we now define a bijection between permutations avoiding $23\dash1$ and those avoiding $32\dash1$ in the flattened sense. To do so, given $\sigma$ avoiding $23\dash1$, let $t_1=1<t_2<\cdots<t_\ell$ denote the set of numbers consisting of the first (i.e., left) letters of the ascents of $\text{Flatten}(\sigma)$, going from left to right. Note that the $t_i$ are increasing since there is no occurrence of $23\dash1$ in $\text{Flatten}(\sigma)$. Given $1 < i \leq \ell$, let $\alpha_i$ denote the sequence (possibly empty) of numbers occurring between $t_{i-1}$ and $t_i$ in $\text{Flatten}(\sigma)$. Note that the letters of an $\alpha_i$ must belong to the same cycle of $\sigma$, by definition of the $t_i$, since $\sigma$ is assumed to be in standard cycle form. Write the sequence $\alpha_i$ in reverse order for each $i$, where the letters remain in the same cycle of $\sigma$. If $\sigma'$ denotes the resulting permutation, then it may be verified that the mapping $\sigma \mapsto \sigma'$ is the requested bijection.
For example, if $\sigma$ is as above, then the order of the letters between $1$ and $2$ and between $2$ and $3$ is reversed, which gives $\sigma'=(1,5,6,2,7,10),(3),(4,9,8)$.\linebreak \indent Given a permutation $\rho=\rho_1\rho_2\cdots\rho_n$ which avoids $32\dash1$, the above mapping is reversed by considering the subsequence $\rho_{i_r}$ of $\text{Flatten}(\rho)$ where $\rho_{i_1}=1$ and $\rho_{i_r}$ is the smallest letter to the right of $\rho_{i_{r-1}}$ if $r>1$ and changing the order of the letters between $\rho_{i_{r-1}}$ and $\rho_{i_r}$ for each $r$. \hfill \qed\\ \noindent\textbf{Combinatorial proof of Corollary \ref{cor:31-2:average} (avoiding).}\\ We will show that a permutation avoids $31\dash2$ in the flattened sense if and only if it avoids $3\dash1\dash2$, whence the result will follow from Theorem 2.4 in \cite{MS} where a combinatorial proof was given. Clearly, a permutation avoiding $3\dash1\dash2$ also avoids $31\dash2$. So suppose a permutation $\sigma$ contains an occurrence of $3\dash1\dash2$ in the flattened sense. We will show that it must contain an occurrence of $31\dash2$. Let $\text{Flatten}(\sigma)=\sigma_1\sigma_2\cdots \sigma_n$, which we'll denote by $\sigma'$. First suppose that there is at least one ascent to the right of $n$ in $\sigma'$. Let $j$ denote the index of the left-most such ascent. That is, there exist indices $i$ and $j$ with $i<j<n$ such that $\sigma_i=n$, $\sigma_i>\sigma_{i+1}>\cdots>\sigma_j$ and $\sigma_{j+1}>\sigma_j$. Since $\sigma_j<\sigma_{j+1}<\sigma_i$, there exists an index $\ell$ with $i<\ell\leq j$ such that $\sigma_\ell<\sigma_{j+1}<\sigma_{\ell-1}$. Then the subsequence $\sigma_{\ell-1}\sigma_{\ell}\sigma_{j+1}$ would be an occurrence of $31\dash2$ in $\sigma'$, which completes this case. On the other hand, suppose there is no ascent in $\sigma'$ to the right of the letter $n$. If $\sigma_i=n$, then $\sigma_i>\sigma_{i+1}>\cdots>\sigma_n$. 
Apply now the reasoning of the previous paragraph on the subpermutation $\sigma_1\sigma_2\cdots \sigma_{i-1}$, considering instead of $n$, the largest element among the first $i-1$ positions of $\sigma'$. If an occurrence of $31\dash2$ arises as before, then we are done. Otherwise, continue with still a smaller subpermutation. If no occurrence of $31\dash2$ arises before all of the positions of $\sigma'$ are exhausted, then it must be the case that there is the following decomposition of $\sigma'$: $$\sigma'=T_rT_{r-1}\cdots T_1,$$ for some $r \geq 1$, where $T_1$ starts with the letter $n$ and is decreasing and $T_i$, $1 < i \leq r$, starts with the largest letter to the left of $T_{i-1}T_{i-2}\cdots T_1$ followed by a possibly empty decreasing sequence. Now suppose $\sigma'$ contains an occurrence of $3\dash1\dash2$ consisting of the letters $x$, $y$, and $z$, respectively. If $m_k=\max(T_k)$, $1 \leq k \leq r$, then $m_1>m_2>\cdots>m_r$, which implies that we may assume that $x$ and $y$ belong to the same block $T_i$ for some $i$, with $x=m_i$, and that $z$ belongs to a block $T_j$ for some $j$ with $j<i$. (Note that if $x\in T_{i_1}$ and $y \in T_{i_2}$, where $i_1>i_2$, then we have $m_{i_2}>m_{i_1}\geq x>y$, with $m_{i_2}$ occurring to the left of $y$ in $\sigma'$.) Since the letters are decreasing between $x$ and $y$, inclusive, with $x>z>y$, there must exist an occurrence of $31\dash2$ in $\sigma'$ where the $3$ and $1$ correspond to a pair of adjacent letters between $x$ and $y$ (and possibly including $x$ or $y$) and the $2$ corresponds to the letter $z$. Thus, $\sigma'$ contains an occurrence of $31\dash2$ in all cases, which completes the proof. 
\hfill \qed\\ Given a pattern $\tau$ of type $(2,1)$, we will refer to an occurrence of $\tau$ (in the flattened sense) in which the $2$ corresponds to the actual letter $i$ as an $i$-occurrence of $\tau$ and an occurrence in which the $2$ and $3$ correspond to the letters $i$ and $j$, respectively, as an $(i,j)$-occurrence. In the proofs that follow, $\text{tot}(\tau)$ denotes the total number of occurrences of the pattern $\tau$ under consideration. Furthermore, given positive integers $m$ and $n$, we let $[m,n]=\{m,m+1,\ldots,n\}$ if $n\geq m$, with $[m,n]=\emptyset$ if $m>n$.\\ \noindent\textbf{Combinatorial proofs of Corollaries \ref{cor:32-1:average} and \ref{cor:23-1:average} (average).}\\ We first treat the case $32\dash1$ and argue that the total number of $i$-occurrences of $32\dash1$ (in the flattened sense) within all of the permutations of size $n$ is given by $(n-i)\binom{i-1}{2}\frac{(n-1)!}{i}$ for $i \in [3,n-1]$. Summing over $i$ would then give the total number of occurrences of $32\dash1$. Note that within an $i$-occurrence of $32\dash1$, the letter $i$ cannot start a cycle since there is a letter to the right of it in the flattened form which is smaller. Note further that the position of $j$ is determined by that of $i$ within an $(i,j)$-occurrence of $32\dash1$, where $i+1\leq j \leq n$. Given $i$ and $j$, we count the permutations of size $n$ for which there are exactly $r$ $(i,j)$-occurrences of $32\dash1$, where $1 \leq r \leq i-2$. Note that the position of $i$ is determined within such permutations once the positions of the elements of $[i-1]$ have been, which also determines the position of $j$ (note that $i$ must be placed within a current cycle so that there are exactly $r$ members of $[i-1]$ to its right within the flattened form).
Thus, there are $\frac{(n-1)!}{i}$ such permutations for each $r$, which implies that the total number of $(i,j)$-occurrences of $32\dash1$ is given by $$\sum_{r=1}^{i-2}r\frac{(n-1)!}{i}=\binom{i-1}{2}\frac{(n-1)!}{i}.$$ Since there are $n-i$ choices for $j$, given $i$, with each choice yielding the same number of $(i,j)$-occurrences of $32\dash1$, it follows that there are $(n-i)\binom{i-1}{2}\frac{(n-1)!}{i}$ $i$-occurrences of $32\dash1$, as desired. Summing over $3 \leq i \leq n-1$, and simplifying, then gives \begin{align*} \text{tot}(32\dash1)&=\sum_{i=3}^{n-1}(n-i)\binom{i-1}{2}\frac{(n-1)!}{i}=n!\sum_{i=3}^{n-1}\left(\frac{i-3}{2}+\frac{1}{i}\right)-(n-1)!\binom{n-1}{3}\\ &=\frac{n!}{2}\binom{n-3}{2}+n!\left(H_{n-1}-\frac{3}{2}\right)-(n-1)!\binom{n-1}{3}\\ &=\frac{n!}{12}(n^2-9n-4)+n!H_n. \end{align*} Dividing by $n!$ yields the average value formula found in Corollary \ref{cor:32-1:average}. Writing the letter corresponding to $3$ directly after (instead of directly before) the letter corresponding to $2$ shows that the total number of $(i,j)$-occurrences of $23\dash1$ is the same as the total number of $(i,j)$-occurrences of $32\dash1$ for each $i$ and $j$, which implies Corollary \ref{cor:23-1:average}. \hfill \qed\\ \noindent\textbf{Combinatorial proof of Corollary \ref{cor:31-2:average} (average).}\\ Similar reasoning as in the prior proof shows that the total number of $i$-occurrences of $31\dash2$ in the flattened sense within all of the permutations of $[n]$ is given by $(n-i)\left(\binom{i}{2}-1\right)\frac{(n-1)!}{i}$ for $i \in [3,n-1]$. To see this, first note that there are $n-i$ choices for the letter $j$ to play the role of the $3$, given $i$, within an occurrence of $31\dash2$. Let $\sigma \in \mathcal{S}_n$. 
Note that for each $r$, $1 \leq r \leq i-3$, there are $\frac{(n-1)!}{i}r$ permutations which have an $(i,j)$-occurrence of $31\dash2$ in which the letter $i$ comes somewhere between the $(r+1)$-st and $(r+2)$-nd members of $[i-1]$ from the left within $\text{Flatten}(\sigma)$, and $2\frac{(n-1)!}{i}(i-2)$ permutations in which $i$ occurs to the right of all the members of $[i-1]$ within $\text{Flatten}(\sigma)$. Observe that in the latter case, the letter $i$ would either occur within a cycle whose first letter is a member of $[i-1]$ or as the first letter of a cycle. Note that in all cases, the possible positions for $j$ are determined by the value of $r$ and are independent of the value of $i$. Summing over $r$, the total number of $(i,j)$-occurrences of $31\dash2$ is thus given by $$(1+2+\cdots+(i-3)+2(i-2))\frac{(n-1)!}{i}=\left(\binom{i}{2}-1\right)\frac{(n-1)!}{i}.$$ Summing over $i$, and simplifying, then implies \begin{align*} \text{tot}(31\dash2)&=\sum_{i=3}^{n-1}(n-i)\left(\binom{i}{2}-1\right)\frac{(n-1)!}{i}\\ &=\frac{n!}{2}\sum_{i=2}^{n-1}\left(i-1-\frac{2}{i}\right)-(n-1)!\left(\binom{n}{3}-(n-2)\right)\\ &=\frac{n!}{12n}(n^3-3n^2+26n-12)-n!H_n. \end{align*} Dividing by $n!$ yields the average value formula found in Corollary \ref{cor:31-2:average}. \hfill \qed\\ \noindent\textbf{Combinatorial proofs of Corollaries \ref{cor:12-3:average} and \ref{cor:21-3:average} (average).}\\ We first treat the case $21\dash3$. To handle this case, we will simultaneously consider occurrences of the pattern $3\dash21$. If $i \in [3,n-1]$, first note that there are $(i-2)(n-1)!$ permutations $\sigma$ of size $n$ in which the letter $i$ directly precedes a member of $[i-1]$ in $\text{Flatten}(\sigma)$. To show this, first insert $i$ directly before some member of $[2,i-1]$ within a permutation of $[i-1]$ expressed in standard cycle form.
Upon treating $i$ and the letter directly thereafter as a single object, we see that there are $\prod_{s=i+1}^n(s-1)$ choices for the positions of the elements of $[i+1,n]$ and thus the total number of such permutations is $(i-2)(i-1)!\prod_{s=i+1}^n(s-1)=(i-2)(n-1)!$, as claimed. Within each of these permutations $\sigma$, every letter of $[i+1,n]$ contributes either an $i$-occurrence of $3\dash21$ or $21\dash3$ depending on whether the letter goes somewhere before or somewhere after $i$ within $\text{Flatten}(\sigma)$. This implies $$\text{tot}(i\text{-}\text{occurrences~of } 21\dash3)+\text{tot}(i\text{-}\text{occurrences~of } 3\dash21)=(n-i)(i-2)(n-1)!, \qquad 3 \leq i \leq n-1,$$ and summing this over $i$ gives \begin{equation}\label{cpr1} \text{tot}(21\dash3)+\text{tot}(3\dash21)=(n-1)!\sum_{i=3}^{n-1}(n-i)(i-2). \end{equation} We now count the total number of occurrences of $3\dash21$ within permutations of size $n$, which is apparently easier. We first count the number of permutations having an $(i,j)$-occurrence of $3\dash21$, where $3 \leq i < j \leq n$ are given. To do so, we first create permutations of the set $[i]\cup\{j\}$ by writing some permutation of $[i-1]$ in standard cycle form and then deciding on the positions of the letters $i$ and $j$. Either $i$ and $j$ can directly precede different members of $[2,i-1]$ or can precede the same member (in which case $i$ would come before $j$), whence there are $\binom{i-2}{2}+(i-2)=\binom{i-1}{2}$ choices regarding the placement of $i$ and $j$. 
Upon treating $i$ and the letter directly thereafter as a single object and adding the remaining members $r$ of $[i+1,n]-\{j\}$, we see that the number of permutations of length $n$ having an $(i,j)$-occurrence of $3\dash21$ is $$\binom{i-1}{2}(i-1)!\prod_{r=i+1}^{j-1}r \prod_{r=j+1}^n(r-1)=\binom{i-1}{2}\frac{(n-1)!}{i}.$$ Since there are $n-i$ choices for $j$, given $i$, the total number of $i$-occurrences of $3\dash21$ is then given by $(n-i)\binom{i-1}{2}\frac{(n-1)!}{i}.$ Summing over $3 \leq i \leq n-1$ gives \begin{equation}\label{cpr2} \text{tot}(3\dash21)=(n-1)!\sum_{i=3}^{n-1}\frac{n-i}{i}\binom{i-1}{2}. \end{equation} Subtracting \eqref{cpr2} from \eqref{cpr1} yields \begin{align*} \text{tot}(21\dash3)&=(n-1)!\sum_{i=3}^{n-1}(n-i)\left(i-2-\frac{1}{i}\binom{i-1}{2}\right)\\ &=\frac{n!}{2}\sum_{i=2}^{n-1}\left(i-1-\frac{2}{i}\right)-\frac{(n-1)!}{2}\sum_{i=2}^{n-1}(i^2-i-2)\\ &=\frac{n!}{12n}(n^3-3n^2+26n-12)-n!H_n, \end{align*} which completes the proof in the case $21\dash3$. A similar proof may be given for the case $12\dash3$. In fact, there are the comparable formulas $$\text{tot}(12\dash3)+\text{tot}(3\dash12)=(n-1)!\sum_{i=2}^{n-1}(n-i)i$$ and $$\text{tot}(3\dash12)=(n-1)!\sum_{i=2}^{n-1}\frac{n-i}{i}\left(\binom{i}{2}-1\right).$$ Subtracting, simplifying, and dividing by $n!$ gives the average value formula found in Corollary \ref{cor:12-3:average}. \hfill \qed
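The four average-occurrence formulas can likewise be confirmed numerically. The sketch below is our own verification code, not part of the original text, with ad hoc names; it assumes the standard-cycle-form flattening used throughout the paper. It computes exact averages over $\mathcal{S}_n$ with rational arithmetic and checks them against the closed forms involving the harmonic numbers $H_n$.

```python
from fractions import Fraction
from itertools import permutations

def flatten(perm):
    # Standard cycle form with parentheses erased, e.g. the paper's example
    # sigma = (1,6,5,2,10,7)(3)(4,9,8) has Flatten = 1 6 5 2 10 7 3 4 9 8.
    n, seen, word = len(perm), set(), []
    for start in range(1, n + 1):
        x = start
        while x not in seen:
            seen.add(x)
            word.append(x)
            x = perm[x - 1]
    return word

def occ(w, pat):
    # number of occurrences of the type (2,1) pattern xy-z encoded by pat,
    # i.e. triples (w[i], w[i+1], w[j]) with j > i + 1 order-isomorphic to pat
    total = 0
    for i in range(len(w) - 1):
        for j in range(i + 2, len(w)):
            triple = (w[i], w[i + 1], w[j])
            if [sorted(triple).index(t) + 1 for t in triple] == list(pat):
                total += 1
    return total

def harmonic(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

def avg(n, pat):
    # exact average number of occurrences of pat over all flattened
    # permutations of length n
    perms = list(permutations(range(1, n + 1)))
    return Fraction(sum(occ(flatten(p), pat) for p in perms), len(perms))

for n in range(3, 7):
    H = harmonic(n)
    assert avg(n, (1, 2, 3)) == Fraction(n**3 + 3*n**2 - 40*n + 24, 12*n) + H
    assert avg(n, (2, 1, 3)) == Fraction(n**3 - 3*n**2 + 26*n - 12, 12*n) - H
    assert avg(n, (2, 3, 1)) == Fraction(n**2 - 9*n - 4, 12) + H
    assert avg(n, (3, 2, 1)) == Fraction(n**2 - 9*n - 4, 12) + H
```

In particular, the script exhibits the coincidences noted in the abstract: the $23\dash1$ and $32\dash1$ averages agree, as do the $21\dash3$ and $31\dash2$ averages.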
https://arxiv.org/abs/1306.3355
Recurrence relations for patterns of type $(2,1)$ in flattened permutations
We consider the problem of counting the occurrences of patterns of the form $xy-z$ within flattened permutations of a given length. Using symmetric functions, we find recurrence relations satisfied by the distributions on $\mathcal{S}_n$ for the patterns 12-3, 21-3, 23-1 and 32-1, and develop a unified approach to obtain explicit formulas. By these recurrences, we are able to determine simple closed form expressions for the number of permutations that, when flattened, avoid one of these patterns as well as expressions for the average number of occurrences. In particular, we find that the average number of 23-1 patterns and the average number of 32-1 patterns in $\text{Flatten}(\pi)$, taken over all permutations $\pi$ of the same length, are equal, as are the number of permutations avoiding either of these patterns. We also find that the average number of 21-3 patterns in $\text{Flatten}(\pi)$ over all $\pi$ is the same as it is for 31-2 patterns.
https://arxiv.org/abs/1902.08513
Furstenberg boundaries for pairs of groups
Furstenberg has associated to every topological group $G$ a universal boundary $\partial(G)$. If we consider in addition a subgroup $H<G$, the relative notion of $(G,H)$-boundaries admits again a maximal object $\partial(G,H)$. In the case of discrete groups, an equivalent notion was introduced by Bearden--Kalantar as a very special instance of their constructions. However, the analogous universality does not always hold, even for discrete groups. On the other hand, it does hold in the affine reformulation in terms of convex compact sets, which admits a universal simplex $\Delta(G,H)$, namely the simplex of measures on $\partial(G,H)$. We determine the boundary $\partial(G,H)$ in a number of cases, highlighting properties that might appear unexpected.
\section{Introduction}

A compact topological space on which a group $G$ acts by homeomorphisms is called a \textbf{$G$-flow}. When $G$ is a topological group, the action is assumed to be jointly continuous and $G$ to be Hausdorff. Furstenberg~\cite{Furstenberg63,Furstenberg73_bnd} discovered the particular importance of the case where the action is \emph{minimal} and \emph{strongly proximal}; the flow is then called a \textbf{$G$-\penalty0\hskip0pt\relax{}boundary}. He showed for instance that each group $G$ admits a \emph{universal} boundary, now called the \textbf{Furstenberg boundary $\partial(G)$ of $G$}. Although $\partial(G)$ is often a huge non-metrisable space, Furstenberg showed that for semi-simple Lie groups it reduces to a homogeneous space $\partial(G) = G/P$, where $P$ is a minimal parabolic subgroup.

\medskip

It turns out, following Furstenberg and Glasner~\cite{Glasner_LNM}, that the notion of boundary is even more natural and transparent if we recast the whole discussion in the setting of \emph{convex compact spaces}: An \textbf{affine $G$-flow} refers to a compact convex set $K$ endowed with a $G$-action preserving both the topology and the affine structure of $K$. Here $K$ is understood to lie in an arbitrary locally convex (Hausdorff) topological vector space over the reals. A \textbf{$G$-\penalty0\hskip0pt\relax{}morphism} is a $G$-equivariant continuous affine map. Any $G$-flow $X$ gives an affine $G$-flow $\mathrm{P}(X)$, the space of probability measures on $X$. There are of course many other affine $G$-flows. Now $X$ is a $G$-\penalty0\hskip0pt\relax{}boundary if and only if $\mathrm{P}(X)$ satisfies just one single minimality condition: namely that it is \textbf{$G$-\penalty0\hskip0pt\relax{}irreducible}. This means by definition that it does not contain a proper affine subflow. The simplex of probability measures on $\partial(G)$, which we denote by $\Delta(G)$, is universal in that setting.
\bigskip This article considers the more general \emph{relative case}, where we are given a topological group $G$ together with a subgroup $H<G$. We shall see that there exist again canonical relative objects $\partial(G,H)$ and $\Delta(G,H)$. However, there are interesting complications; notably, the topological flow $\partial(G,H)$ behaves less well than its affine counterpart $\Delta(G,H)$. We therefore start off in the affine setting. \begin{defi} An \textbf{affine $(G,H)$-flow} is an affine $G$-flow with an $H$-fixed point. It is called \textbf{$(G,H)$-\penalty0\hskip0pt\relax{}irreducible} if it does not contain any smaller affine $(G,H)$-flow. \end{defi} The classical case corresponds to the trivial subgroup $H=1$. \begin{prop}\label{prop:univ} There exists a $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flow that is \textbf{universal} in the sense that it admits a $G$-\penalty0\hskip0pt\relax{}morphism onto every $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flow. Moreover, this universal flow is unique up to unique $G$-\penalty0\hskip0pt\relax{}morphisms. \end{prop} \begin{defi} We denote the universal $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flow by $\Delta(G, H)$. \end{defi} At this point we have a universal \emph{convex} compact object, but we lost sight of the initial discussion of $G$-flows: actions on arbitrary compact spaces. Not to worry: $\Delta(G, H)$ is in fact the simplex of probability measures $\mathrm{P}(\partial(G,H))$ of a flow $\partial(G,H)$\,! It is indeed well understood (since Bauer~\cite[Satz~13]{Bauer61}) when a convex compact space $K$ is of the form $\mathrm{P}(X)$. This happens exactly when the set of extremal points $\mathrm{Ext}(K)$ is closed and when moreover every point of $K$ is a \emph{unique} Choquet integral on $\mathrm{Ext}(K)$. This realises $K$ as $\mathrm{P}(\mathrm{Ext}(K))$ and one calls $K$ a \textbf{Bauer simplex}~\cite[II.4.1]{Alfsen_book}. 
\begin{thm}\label{thm:Bauer} $\Delta(G, H)$ is a Bauer simplex. \end{thm} \begin{defi} The \textbf{Furstenberg boundary $\partial(G,H)$} of the pair $(G,H)$ is the set of extremal points $\partial(G,H) = \mathrm{Ext}\left(\Delta(G,H)\right)$. \end{defi} In other words, Theorem~\ref{thm:Bauer} states that there is a canonical $G$-flow, the Furstenberg boundary $\partial(G,H)$, such that \begin{equation*} \mathrm{P}\left(\partial(G,H)\right) = \Delta(G, H). \end{equation*} Since we return to topological $G$-flows, we should define general $(G,H)$-\penalty0\hskip0pt\relax{}boundaries: \begin{defi} A $G$-flow $X$ is a \textbf{$(G,H)$-\penalty0\hskip0pt\relax{}boundary} if $\mathrm{P}(X)$ is $(G,H)$-\penalty0\hskip0pt\relax{}irreducible. \end{defi} In fact this definition is equivalent to a characterisation given, in the case of discrete groups, by Bearden--Kalantar~\cite{Bearden-Kalantar_arx} in the context of their much more general non-commutative notion of Furstenberg--Hamana boundaries for unitary representations. To make this explicit, recall first that a probability measure $\mu$ on a $G$-flow $X$ is said to be \textbf{$G$-contracted} to a point $x$ if the Dirac mass $\delta_x$ belongs to the orbit closure $\overline{G \mu}$. Using the Krein--Milman theorem, one shows: \begin{prop}\label{prop:traduc} Let $X$ be a $G$-flow. Then $X$ is a $(G,H)$-\penalty0\hskip0pt\relax{}boundary if and only if $H$ fixes a measure in $\mathrm{P}(X)$ and every such fixed measure is $G$-contracted to every point of $X$. \end{prop} This property coincides with the characterisation from~\cite{Bearden-Kalantar_arx}. \medskip Of course, we have not defined anything new when $H\lhd G$ is a normal subgroup: $(G,H)$-flows, boundaries and universal objects reduce to the classical objects for the quotient group $G/H$. 
However, general pairs $H<G$ exhibit completely different behaviours, as will become clear with a few examples which should serve as a warning, or better as an advertisement, for the new phenomena. For instance, although the Furstenberg boundary $\partial(G,H)$ is canonical, unique up to unique identification, and in a sense the \emph{maximal} $(G,H)$-\penalty0\hskip0pt\relax{}boundary, it is however not \emph{universal} in the strong sense of Proposition~\ref{prop:univ}. \begin{prop}\label{prop:not:univ} There is not always a $G$-map from $\partial(G,H)$ to every $(G,H)$-\penalty0\hskip0pt\relax{}boundary, even for discrete groups. \end{prop} We can illustrate this very concretely on an example. \begin{exam}\label{exam:F4} Let $G=F_4$ be a free group on four generators and let $H<G$ be the first free factor $H=F_2$ in a splitting $G=F_2 * F_2$. Let $X$ be a topological circle. We endow $X$ with a $G$-action by specifying the following actions for each of the two copies of $F_2$. For $H$, we choose two arbitrary rotations of the circle, at least one of which is non-trivial. Thus $H$ acts via a non-trivial abelian quotient. For the second copy of $F_2$, we identify $X$ with the projective line and map $F_2$ to $\mathbf{SL}_2(\mathbf{Z})$ by sending its generators to $\left(\begin{smallmatrix} 1&1\\ 0&1 \end{smallmatrix}\right)$ and $\left(\begin{smallmatrix} 1&0\\ 1&1 \end{smallmatrix}\right)$. This yields a projective action on $X$. We claim that $X$ is a $(G,H)$-\penalty0\hskip0pt\relax{}boundary. Indeed, $H$ fixes the round measure on the circle. On the other hand, the second $F_2$ acts minimally and strongly proximally and hence \emph{any} measure can be contracted to any point. The claim follows. \itshape However, there is no $G$-map from $\partial(G,H)$ to $X$. The key point is that $\partial(G,H)$ will be shown to contain an $H$-fixed point. 
But, by construction, $X$ does not.\upshape \end{exam} \begin{rem}\label{rem:image:contains} What remains true in general is that for any $(G,H)$-\penalty0\hskip0pt\relax{}boundary $X$ there is a $G$-map $\partial(G,H) \to \mathrm{P}(X)$ whose image \emph{contains} $X$, see Proposition~\ref{prop:image:contains}. \end{rem} A major advantage of considering non-discrete groups is that Furstenberg boundaries are completely understood for all connected Lie groups, where they are always homogeneous spaces~\cite[IV.3.3]{Glasner_LNM}. This again gets more complicated for the relative theory: \begin{exam}\label{exam:affine} Let $G= \mathbf{R}^2 \rtimes \mathbf{SL}_2(\mathbf{R})$ be the special affine group of $\mathbf{R}^2$ and let $H=\mathbf{SL}_2(\mathbf{R})$. Consider the visual compactification $D=\mathbf{R}^2 \sqcup \mathbf{S}^1$ of $\mathbf{R}^2$ obtained by gluing the circle of (oriented) directions; thus $D$ is a topological disc. We view $D$ as a $G$-flow where the action on the open disc is the affine action on $\mathbf{R}^2$, while the action on $\mathbf{S}^1$ is induced by the linear representation on the space of directions via the quotient map to $H$. \end{exam} This example illustrates several interesting points, proved in Section~\ref{sec:LC} and particularly in Theorem~\ref{thm:af}: \medskip \begin{enumerate}[(1)] \item $D$ is a $(G,H)$-\penalty0\hskip0pt\relax{}boundary.\label{af:pt:bnd} \item $D$ is not a minimal $G$-flow. It will follow that the Furstenberg boundary $\partial(G,H)$ is not minimal either --- much less homogeneous.\label{af:pt:no-min} \item The Furstenberg boundary $\partial(G)$ is realised by the natural $G$-action on the projective line $\mathbf{P}^1$.
Therefore, there is no $G$-map from $\partial(G)$ to $D$.\\ We shall deduce that there is no $G$-map from $\partial(G)$ to $\partial(G,H)$.\label{af:pt:no-map} \item $D$ is not maximal; in fact, the Furstenberg boundary $\partial(G,H)$ consists of the open disc together with a non-metrisable compact space on which $\mathbf{R}^2$ acts trivially, glued above $\mathbf{S}^1$.\label{af:pt:no-uni} \end{enumerate} \medskip There are however also cases where the Furstenberg boundary of pairs behaves very simply and is indeed a quotient of the Furstenberg boundary of the ambient group: \begin{exam}\label{exam:Gras} Let $0<p<n$, let $G=\mathbf{SL}_n(\mathbf{R})$ and let $H<G$ be the block-wise upper-triangular subgroup with diagonal blocks of size $p$ and $n-p$. Then \begin{equation*} \partial(G,H) \cong G/H \cong \mathrm{Gr}_p(\mathbf{R}^n), \end{equation*} the Grassmannian of $p$-spaces in $\mathbf{R}^n$. \end{exam} This is a particular case of a general phenomenon for co-\penalty0\hskip0pt\relax{}compact subgroups $H<G$ in any topological group $G$. We shall prove that in this case $\partial(G,H)$ is a homogeneous space $G/\widehat{H}$ for a \emph{hull} $\widehat{H}$ canonically attached to $H<G$ up to conjugation. In Example~\ref{exam:Gras}, we have $\widehat{H}=H$, which holds more generally for parabolic subgroups: \begin{thm}\label{thm:parabolic} Let $G$ be a connected semi-simple Lie group with finite center and let $H$ be any parabolic subgroup. Then $\partial(G,H) \cong G/H$. The corresponding statement holds for semi-simple algebraic groups over local fields. \end{thm} We already pointed out that the relative theory reduces to the classical one when $H<G$ is a normal subgroup. This invites the question: what happens at the opposite extreme, when $H$ is \textbf{malnormal} in $G$? That is, when $g H g^{-1}$ intersects $H$ trivially for all $g\notin H$. \begin{thm}\label{thm:malnormal} Let $H$ be a malnormal subgroup of a discrete group $G$. Suppose $H$ non-amenable. 
Then $\partial(G,H) \cong \beta(G/H)$, the Stone--\v{C}ech compactification of the $G$-set $G/H$. \end{thm} In a sense, this completely describes the malnormal case for discrete groups; indeed we shall see that when $H$ is amenable $\partial(G,H)$ reduces again to the classical boundary $\partial(G)$. For non-discrete groups, an example of a similar nature is given in Example~\ref{exam:Samuel}. \begin{cor}\label{cor:free} Let $G=H*H'$ be the free product of two discrete groups $H, H'$. If $H$ is non-amenable, then $\partial(G,H) \cong \beta(G/H)$.\qed \end{cor} Indeed $H$ is malnormal in $G$, see~\cite[4.1.5]{Magnus-Karrass-Solitar}. This justifies the claim that the subgroup $H$ of Example~\ref{exam:F4} fixes a point in $\partial(G,H)$. \smallskip We shall in fact prove a slightly stronger version of Theorem~\ref{thm:malnormal}, which has a consequence for the \textbf{wreath product} $J\wr H$ of two discrete groups. We recall that $J\wr H$ is the semi-direct product of the restricted product $\oplus_H J$ with $H$. This includes for instance the lamplighter group on $H$. \begin{cor}\label{cor:wreath} If $H$ is non-amenable, then $\partial(J\wr H, H) \cong \beta\left(\oplus_H J\right)$. \end{cor} Another consequence of this method concerns \emph{relatively hyperbolic} groups: \begin{cor}\label{cor:relhyp} Let $G$ be a discrete group that is hyperbolic relative to some family of subgroups $\{H_i\}_{i\in I}$. Then $\partial(G,H_i) \cong \beta(G/H_i)$ whenever $H_i$ is non-amenable. \end{cor} Boundary theory is inseparable from \textbf{amenability}: recall that the topological group $G$ is amenable if and only if its Furstenberg boundary $\partial(G)$ is trivial. It is therefore not surprising that the relative Furstenberg boundary $\partial(G, H)$ and its affine version $\Delta(G,H)$ are intimately connected to relative versions of amenability for pairs $H<G$. There are two complementary such relative notions.
First, we can ask when the relative Furstenberg boundary $\partial(G, H)$ is trivial; the answer is straightforward and was already recorded in~\cite{Bearden-Kalantar_arx} for discrete groups: \begin{prop}\label{prop:coamen} $\partial(G, H)$ is trivial if and only if $H$ is co-amenable in $G$. \end{prop} The notion of co-amenability extends to arbitrary pairs the notion of amenability of the quotient group $G/H$ when $H\lhd G$ is a normal subgroup. For discrete groups, it is equivalent to the existence of a $G$-invariant mean on $G/H$, and more generally in the locally compact case to the weak containment of the trivial $G$-representation in the quasi-regular representation on $L^2(G/\overline{H})$, see~\cite{Eymard72}. For arbitrary topological groups it is defined by requiring that every affine $(G, H)$-flow has a $G$-fixed point and the above proposition is expected. Nonetheless there are some subtleties, such as the following direct consequence of a construction in~\cite{Monod-Popa} or~\cite{Pestov}. \begin{prop}\label{prop:MP} There is a discrete group $G$ with subgroups $H_1 < H_2 < G$ such that $\partial(G, H_1)$ and $\partial(G, H_2)$ are trivial but not $\partial(H_2, H_1)$. Moreover we can choose $H_1$ normal in $H_2$ and $H_2$ normal in $G$. \end{prop} At the opposite end from co-amenability, we get the other relative notion by asking when $\partial(G,H)$ is isomorphic (as $G$-flow) to $\partial(G)$. \begin{prop}\label{prop:relamen} $\partial(G, H) \cong \partial(G) $ if and only if $H$ is amenable relative to $G$. \end{prop} We recall that $H$ is called \textbf{amenable relative to $G$} if every affine $G$-flow has an $H$-fixed point. Although it is clearly complementary to co-amenability, this notion comes with a surprise. For discrete groups, it simply amounts to the amenability of the group $H$. But already in the locally compact setting, it is a priori weaker than the amenability of $H$, see~\cite{Caprace-Monod_rel}. 
It remains an open problem to exhibit a locally compact example where the notions do not coincide. Finally, beyond locally compact groups, there is a complete divergence. Indeed, any subgroup of an amenable group will be relatively amenable, but need not be amenable. The first example is from 1973, when de la Harpe~\cite{Harpe73} showed that the unitary group of the separable Hilbert space in the strong operator topology is an amenable Polish group, whereas it contains any countable group as a discrete closed subgroup, including the non-amenable ones, as witnessed by the regular representation. \medskip In fact these two relative notions are just extreme cases of a very basic pre-order relation on the set of all subgroups of a given group $G$; this definition is taken from~\cite[\S2.3]{Portmann_PhD} and~\cite[\S7.C]{Caprace-Monod_rel}. \begin{defi}\label{defi:relco} Let $H, H'$ be subgroups of a topological group $G$. We say that \textbf{$H$ is co-amenable to $H'$ relative to $G$} if every affine $(G,H)$-flow has an $H'$-fixed point. \end{defi} We see that a co-amenable subgroup $H$ corresponds to the special case $H'=G$, whilst a relatively amenable subgroup $H'$ corresponds to $H=1$. Even for discrete groups, this notion has clarifying virtues. For instance, the situation of Proposition~\ref{prop:MP} can be rephrased by saying that $H_1 \lhd H_2$ is co-amenable to $H_2$ relative to $G$, even though it is not co-amenable \emph{in} $H_2$. \section{Irreducible affine flows} Let $G$ be a topological group and $H<G$ a subgroup. We first comment on the fact that an affine $G$-flow $K$ and $G$-\penalty0\hskip0pt\relax{}morphisms were defined intrinsically on $K$ rather than on some locally convex topological vector space containing $K$. There is no loss of generality in assuming that the $G$-action on $K$ actually comes from a representation by continuous linear operators on the ambient space, although this might require us to modify that ambient space without changing $K$.
Indeed we can embed $K$ in the state space of the space of continuous affine functions on $K$. However, our focus will always be on $K$ only. A similar remark applies to morphisms. \smallskip A straightforward compactness argument with Zorn's lemma and the fact that fixed points constitute a closed convex subset gives the following. \begin{lem}\label{lem:exists:min} Every affine $(G,H)$-flow contains a $(G,H)$-\penalty0\hskip0pt\relax{}irreducible one.\qed \end{lem} On the other hand, the definitions imply: \begin{lem}\label{lem:onto} Every $G$-\penalty0\hskip0pt\relax{}morphism from an affine $(G,H)$-flow to a $(G,H)$-\penalty0\hskip0pt\relax{}irreducible one is onto.\qed \end{lem} Given a convex compact set $K$, we denote the set of its \textbf{extremal points} by $\mathrm{Ext}(K)$. We recall that $\mathrm{Ext}(K)$ can variously be closed, or dense~\cite{Poulsen}, or non-Borel~\cite[\S VII]{Bishop-deLeeuw}. The following reformulation is essentially an exercise around the Krein--Milman theorem. \begin{lem}\label{lem:KM} Let $K$ be an affine $(G,H)$-flow. Then \begin{equation*} \text{$K$ is $(G,H)$-\penalty0\hskip0pt\relax{}irreducible} \kern2mm\Longleftrightarrow \kern2mm \forall x\in K^H : \kern2mm \mathrm{Ext}(K) \subseteq \overline{G x}. \end{equation*} \end{lem} \begin{proof} The condition on the right is sufficient. Indeed, if $L\subseteq K$ is an affine $(G,H)$-flow, it contains an $H$-fixed point $x$, hence contains $\overline{G x}$ and therefore $\mathrm{Ext}(K)$. Thus $L=K$ by Krein--Milman. Conversely, suppose that $K$ is $(G,H)$-\penalty0\hskip0pt\relax{}irreducible and let $x\in K^H$. Then the closed convex hull of $\overline{G x}$ is $K$ and hence $\overline{G x}$ contains $\mathrm{Ext}(K)$ by Milman's partial converse to Krein--Milman~\cite[V.8.5]{Dunford-Schwartz_I}. \end{proof} \begin{lem}\label{lem:coincidence} Let $K,L$ be affine $(G,H)$-flows and let $f, f'\colon K\to L$ be $G$-\penalty0\hskip0pt\relax{}morphisms. Consider the coincidence set $K_0=\{x\in K : f(x) = f'(x)\}$.
If $L$ is $(G,H)$-\penalty0\hskip0pt\relax{}irreducible, then $f(K_0)=L$. \end{lem} The point here is that we do not know a priori that $f(K_0)$ contains an $H$-fixed point (or even is non-empty). \begin{proof} By Krein--Milman, it suffices to prove that every extremal point $\zeta$ of $L$ belongs to $f(K_0)$. Let $x\in K^H$. Then $f(x)$ and $f'(x)$ belong to $L^H$; hence so does $z= (f(x) + f'(x))/2$. By Lemma~\ref{lem:KM}, there is a net $(g_\alpha)_{\alpha\in A}$ in $G$ such that $g_\alpha z \to \zeta$. Upon replacing $(g_\alpha)$ by a subnet, $g_\alpha x$ converges to some $y\in K$. Now $\zeta$ is the limit of \begin{equation*} g_\alpha z = g_\alpha \tfrac12 (f(x) + f'(x)) = \tfrac12 (f(g_\alpha x) + f'(g_\alpha x)) \to \tfrac12 (f(y) + f'(y) ) . \end{equation*} Since $\zeta$ is extremal, we deduce that $f(y)=f'(y)=\zeta$, which witnesses that $\zeta\in f(K_0)$. \end{proof} If we apply Lemma~\ref{lem:coincidence} to $K=L$ and $f$ the identity, we deduce the following. \begin{cor}\label{cor:no:endo} Let $K$ be a $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flow. Then the identity is the only $G$-\penalty0\hskip0pt\relax{}morphism $K\to K$.\qed \end{cor} We now establish the existence of the universal $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flow $\Delta(G,H)$. \begin{proof}[Proof of Proposition~\ref{prop:univ}] Let $\{K_i\}_{i\in I}$ be a family of isomorphism representatives of all $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flows. Such a set indeed exists, see Remark~\ref{rem:cardi} below. The product $\prod_{i\in I} K_i$ has $H$-fixed points and hence it contains a $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine subflow $K$ by Lemma~\ref{lem:exists:min}. The coordinate projections provide $G$-\penalty0\hskip0pt\relax{}morphisms to all $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flows. These morphisms are onto by Lemma~\ref{lem:onto}.
Suppose now that $K'$ is another $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flow with this property. Then we have $G$-\penalty0\hskip0pt\relax{}morphisms $K\to K'$ and $K'\to K$. Applying Corollary~\ref{cor:no:endo} to the composition of these morphisms in the two possible orders shows that they are isomorphisms, and then that they are unique. This completes the proof of Proposition~\ref{prop:univ}. \end{proof} \begin{rem}\label{rem:cardi} Given $G$, we can bound the cardinal of an arbitrary $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flow $K$ using basic cardinal functions from general topology such as \emph{density} and \emph{weight}, for which we refer to~\cite[a-3]{Hart-Nagata-Vaughan}. For instance, the cardinal of $K$ is bounded by the exponential of its weight, which is bounded by the exponential of its density. The density of $K$ is bounded by the density of $\mathrm{Ext}(K)$ (as soon as the latter is infinite), which is bounded by the density of $G$ in view of Lemma~\ref{lem:KM}. Now that the cardinal of $K$ is bounded, we also have a bound on all possible structures of affine $G$-flow on $K$. \end{rem} \begin{rem}\label{rem:univ} Combining universality with Lemma~\ref{lem:exists:min}, we also see that $\Delta(G, H)$ admits a $G$-\penalty0\hskip0pt\relax{}morphism to every affine $(G,H)$-flow. \end{rem} A standard observation is that whenever a $G$-object is unique up to \emph{unique isomorphisms}, it automatically inherits an action of the automorphism group ${\rm Aut}(G)$. This fact was used by Furstenberg for the boundary $\partial(G)$, see~\cite[II.4.3]{Glasner_LNM}. In the present case, this holds for the automorphisms that preserve the subgroup $H$. We denote by ${\rm Aut}(G,H)<{\rm Aut}(G)$ the group of these automorphisms. Notice that the pre-image of ${\rm Aut}(G,H)$ in $G$ under the conjugation homomorphism $G\to{\rm Aut}(G)$ is precisely the normaliser ${\rm N}_G(H)$ of $H$ in $G$. 
\begin{cor}\label{cor:bnda:extends} The automorphism group ${\rm Aut}(G,H)$ has an affine action on $\Delta(G, H)$ such that the resulting action of ${\rm N}_G(H)$ induced by $G\to{\rm Aut}(G)$ coincides with the one given by the original $G$-action on $\Delta(G, H)$.\qed \end{cor} Indeed, given $\alpha\in {\rm Aut}(G)$, we have a new $G$-action on $\Delta(G, H)$ given by $\alpha$-twisting the original action. This turns $\Delta(G, H)$ into a universal affine $(G,H)$-flow if $\alpha\in {\rm Aut}(G,H)$. The unique isomorphism identifying this new $(G,H)$-flow $\Delta(G, H)$ with the original one defines the action of $\alpha$. The argument in~\cite[II.4.3]{Glasner_LNM} can be copied verbatim. \begin{rem}\label{rem:morph:pairs} We can also consider more generally morphisms of pairs of groups $H<G$ and $H'<G'$. Let thus $f\colon G' \to G$ be a continuous group homomorphism such that $f(H')\subseteq H$. Then every affine $(G,H)$-flow becomes an affine $(G',H')$-flow by pull-back. By Remark~\ref{rem:univ}, there exists an $f$-equivariant morphism $\Delta(G', H')\to \Delta(G, H)$. We shall see in Remark~\ref{rem:no-morph:pairs} below that the corresponding fact does not hold for $\partial(G', H')$ and $\partial(G, H)$. \end{rem} \section{Back to topological flows} By definition, the functor $X\mapsto \mathrm{P}(X)$ sends $G$-flows to affine $G$-flows and $(G,H)$-\penalty0\hskip0pt\relax{}boundaries to $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flows. What is less clear is to what extent one can go in the other direction. Specifically, it is not clear how to obtain a $(G,H)$-\penalty0\hskip0pt\relax{}boundary from a general $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flow. This contrasts with the classical case, where the closure $\overline{\mathrm{Ext}(K)}$ is a $G$-\penalty0\hskip0pt\relax{}boundary for every $G$-\penalty0\hskip0pt\relax{}irreducible affine flow~\cite[III.2.3]{Glasner_LNM}.
Nonetheless, much more is true for the universal $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flow $\Delta(G,H)$, since we will prove that it is a Bauer simplex. \medskip Before starting, we recall that one can always homeomorphically identify $X$ with its image in $\mathrm{P}(X)$. Moreover, if $Y$ is a closed subspace of the compact space $X$, then the Tietze--Urysohn extension theorem realises $\mathrm{P}(Y)$ as a convex compact subspace of $\mathrm{P}(X)$. Finally, recall that for every convex compact $K$ there is a \textbf{barycentre map} $\beta\colon \mathrm{P}(K) \to K$ given by integrating the measures. \begin{proof}[Proof of Theorem~\ref{thm:Bauer}] Notice that $\mathrm{P}(\Delta(G,H))$ is an affine $(G,H)$-flow since $\Delta(G,H)$, and hence $\mathrm{P}(\Delta(G,H))$, contains an $H$-fixed point. Thus, applying Lemma~\ref{lem:exists:min}, there is a $(G,H)$-\penalty0\hskip0pt\relax{}irreducible subflow $L$ in $\mathrm{P}(\Delta(G,H))$. Now the barycentre map $\beta$ associated to $\Delta(G,H)$ induces a $G$-\penalty0\hskip0pt\relax{}morphism $L\to \Delta(G,H)$. This is an isomorphism by the universality of $\Delta(G,H)$. Let now $x$ be any extremal point of $\Delta(G,H)$. Then the only measure in $\mathrm{P}(\Delta(G,H))$ mapped to $x$ is the Dirac mass $\delta_x$, see~\cite[1.4]{Phelps_LNM}. Therefore $\delta_x \in L$ for any $x\in\mathrm{Ext}(\Delta(G,H))$. Since $L$ is closed, it contains $\delta_x$ for all $x\in\overline{\mathrm{Ext}(\Delta(G,H))}$. Since $L$ is also convex, it contains $\mathrm{P}\left(\overline{\mathrm{Ext}(\Delta(G,H))}\right)$ viewed as a subspace of $\mathrm{P}(\Delta(G,H))$. By irreducibility, $L$ is exactly $\mathrm{P}\left(\overline{\mathrm{Ext}(\Delta(G,H))}\right)$.
The fact that the barycentre map restricts to an isomorphism \begin{equation*} \mathrm{P}\left(\overline{\mathrm{Ext}(\Delta(G,H))}\right) \longrightarrow\Delta(G,H) \end{equation*} precisely means that the set $\partial(G,H)$ defined as $\mathrm{Ext}(\Delta(G,H))$ is closed and that $\Delta(G,H)$ identifies with $\mathrm{P}(\partial(G,H))$, see~\cite[II.4.1]{Alfsen_book}. \end{proof} Given that the Furstenberg boundary $\partial(G, H)$ is canonically defined in terms of $\Delta(G, H)$, Corollary~\ref{cor:bnda:extends} implies: \begin{cor}\label{cor:bnd:extends} The automorphism group ${\rm Aut}(G,H)$ has an action by homeomorphisms on $\partial(G, H)$ such that the resulting action of ${\rm N}_G(H)$ induced by $G\to{\rm Aut}(G)$ coincides with the one given by the original $G$-action on $\partial(G, H)$.\qed \end{cor} Likewise, Corollary~\ref{cor:no:endo} implies the corresponding statement for $(G,H)$-\penalty0\hskip0pt\relax{}boundaries thanks to the functor $X\mapsto \mathrm{P}(X)$: \begin{cor}\label{cor:no:endo:top} Let $X$ be a $(G,H)$-\penalty0\hskip0pt\relax{}boundary. Then the identity is the only continuous $G$-map $X\to X$.\qed \end{cor} We now justify Remark~\ref{rem:image:contains}: \begin{prop}\label{prop:image:contains} For any $(G,H)$-\penalty0\hskip0pt\relax{}boundary $X$ there is a continuous $G$-map from $\partial(G,H)$ to $\mathrm{P}(X)$ whose image contains $X$ --- when $X$ is viewed as a subspace of $\mathrm{P}(X)$. \end{prop} \begin{proof} Since $X$ is a $(G,H)$-\penalty0\hskip0pt\relax{}boundary, there is a $G$-\penalty0\hskip0pt\relax{}morphism from $\Delta(G,H)$ onto $\mathrm{P}(X)$. Now the proposition is a consequence of the following observation. Given any surjective continuous affine map $\pi\colon K\to L$ between convex compact sets, $\pi(\mathrm{Ext}(K))$ contains $\mathrm{Ext}(L)$. Indeed, let $x$ be an extremal point of $L$. Then the fiber $\pi^{-1}(\{x\})$ is a convex compact subset of $K$.
One checks that any extremal point of $\pi^{-1}(\{x\})$ is extremal in $K$, whence the observation. \end{proof} \section{Relative co-amenability} Let $G$ be a topological group. As indicated in the Introduction, the amenability of $G$ and the relative position of its subgroups $H, H'$ are all subsumed in one simple notion: $H$ is called co-amenable to $H'$ relative to $G$ if every affine $(G,H)$-flow has an $H'$-fixed point. It is natural that this can be rephrased using the universal affine spaces: \begin{prop}\label{prop:relco} Let $H, H'$ be subgroups of $G$. The following are equivalent: \begin{enumerate}[(i)] \item $H$ is co-amenable to $H'$ relative to $G$.\label{pt:relco:def} \item There exists a $G$-\penalty0\hskip0pt\relax{}morphism $\Delta(G, H') \to \Delta(G, H)$. \label{pt:relco:map} \item $H'$ fixes a point in $\Delta(G, H)$. \label{pt:relco:fp} \end{enumerate} \end{prop} \begin{proof} \eqref{pt:relco:def}$\Rightarrow$\eqref{pt:relco:fp} holds by definition and \eqref{pt:relco:fp}$\Rightarrow$\eqref{pt:relco:map} follows from Remark~\ref{rem:univ} applied to $H'$. Assume now~\eqref{pt:relco:map} and let $K$ be any affine $(G,H)$-flow. Composing the $G$-\penalty0\hskip0pt\relax{}morphism $\Delta(G, H') \to \Delta(G, H)$ with the $G$-\penalty0\hskip0pt\relax{}morphism $\Delta(G, H)\to K$ of Remark~\ref{rem:univ}, we see that $K$ has an $H'$-fixed point, whence~\eqref{pt:relco:def}. \end{proof} Combining Proposition~\ref{prop:relco} with the fact that $\Delta(G, H')$ and $\Delta(G, H)$ have no non-trivial endomorphisms (Corollary~\ref{cor:no:endo}), we deduce a characterisation that also holds with the Furstenberg boundary $\partial(G, H)$ instead of $\Delta(G, H)$: \begin{cor}\label{cor:same:bnd} Let $H, H'$ be subgroups of $G$. The following are equivalent: \begin{enumerate}[(i)] \item $H,H'$ are co-amenable to each other relative to $G$. \item $\Delta(G, H)$ and $\Delta(G, H')$ are isomorphic affine $G$-flows.
\item The Furstenberg boundaries $\partial(G, H)$ and $\partial(G, H')$ are isomorphic $G$-spaces. \end{enumerate}\qed \end{cor} For nested subgroups, we deduce: \begin{cor}\label{cor:nested} Let $H_1 < H_2 < G$. Then \begin{equation*} \partial(G, H_1) \cong \partial(G, H_2) \kern2mm\Longleftrightarrow \kern2mm \text{$H_1$ is co-amenable to $H_2$ relative to $G$.} \end{equation*} This holds in particular if $H_1$ is co-amenable in $H_2$.\qed \end{cor} The fact that the co-amenability of $H_1$ in $H_2$ is not necessary is illustrated by the examples in~\cite{Monod-Popa} or~\cite{Pestov}, where one can assume $H_1\lhd H_2\lhd G$. This, together with Corollary~\ref{cor:nested}, justifies Proposition~\ref{prop:MP}. Notice that Corollary~\ref{cor:nested} contains the following particular cases, which were established in~\cite{Bearden-Kalantar_arx} for discrete groups: \begin{cor} Let $H$ be a subgroup of the topological group $G$. Then \begin{align*} \partial(G, H) \cong \partial(G) \kern2mm&\Longleftrightarrow \kern2mm \text{$H$ is amenable relative to $G$,}\\ \text{$\partial(G, H)$ is trivial} \kern2mm&\Longleftrightarrow \kern2mm \text{$H$ is co-amenable in $G$.} \end{align*} \qed \end{cor} Using the characterisation~\eqref{pt:relco:fp} of Proposition~\ref{prop:relco}, the following reduces to a compactness argument together with the continuity of the action on $\Delta(G, H)$. \begin{cor} Let $H, H'$ be subgroups of $G$. There is a subgroup $H''$ of $G$ containing $H'$ which is maximal amongst all subgroups to which $H$ is co-amenable relative to $G$. Moreover, $H''$ is closed in $G$.\qed \end{cor} In the case $H=H'$, we can further apply Corollary~\ref{cor:same:bnd} and deduce: \begin{cor}\label{cor:hull} Any subgroup $H<G$ is contained in a closed subgroup $\widehat{H}<G$ which is maximal amongst all subgroups to which $H$ is co-amenable relative to $G$.
Moreover, we have $\partial(G, H) =\partial(G, \widehat{H})$.\qed \end{cor} We shall refer to $\widehat{H}$ as a \textbf{hull} of $H$ in $G$. For instance, when $H$ is trivial, $\widehat{H}$ is just a maximal relatively amenable subgroup of $G$. In the discrete case, or for semi-simple groups, this coincides with maximal amenable subgroups. \section{Co-compact subgroups} The key reason why the Furstenberg boundary $\partial(G)$ of a semi-simple Lie group $G$ is just a homogeneous space is that $G$ contains a co-\penalty0\hskip0pt\relax{}compact amenable subgroup. In the relative case, we can benefit from the co-\penalty0\hskip0pt\relax{}compactness of certain subgroups even when they are not amenable. Specifically, let $H<G$ be a subgroup and consider a hull $H< \widehat{H} < G$ in the sense of Corollary~\ref{cor:hull}. \begin{thm}\label{thm:coc:hull} If $G/\widehat{H}$ is compact (for instance if $H$ itself is co-\penalty0\hskip0pt\relax{}compact), then \begin{equation*} \partial(G, H) =\partial(G, \widehat{H}) \cong G/\widehat{H}. \end{equation*} Moreover in that case $\widehat{H} < G$ is unique up to conjugacy amongst co-\penalty0\hskip0pt\relax{}compact hulls of $H$. \end{thm} The classical case is again contained here by setting $H=1$. However, in general, we caution that it is important to take $\widehat{H}$ containing $H$. Indeed, the group $G$ of Example~\ref{exam:affine} contains a co-\penalty0\hskip0pt\relax{}compact amenable subgroup $H'$, but $\partial(G, H)$ is far from homogeneous even though $H$ is trivially co-amenable to $H'$ relative to $G$. \begin{proof}[Proof of Theorem~\ref{thm:coc:hull}] We know already $\partial(G, H) =\partial(G, \widehat{H})$ from Corollary~\ref{cor:hull}. By definition and Proposition~\ref{prop:relco}, $\widehat{H}$ fixes a point $x$ in $\Delta(G, \widehat{H})$. The orbit $Gx$ is a quotient of $G/\widehat{H}$ and hence is closed in $\Delta(G, \widehat{H})$ because $\widehat{H}$ is co-\penalty0\hskip0pt\relax{}compact.
On the other hand, $x$ is $H$-fixed and therefore Lemma~\ref{lem:KM} implies that $Gx$ contains the extremal points $\partial(G, \widehat{H})$ of $\Delta(G, \widehat{H})$. This implies $G x = \partial(G, \widehat{H})$. Moreover, the stabiliser of $x$ cannot be larger than $\widehat{H}$ by maximality and Proposition~\ref{prop:relco} again. The uniqueness of $\widehat{H}$ up to conjugacy now follows from the uniqueness of $\partial(G, H)$. \end{proof} Suppose that a topological group $G$ contains a co-\penalty0\hskip0pt\relax{}compact amenable subgroup $P$. This is for instance the case of all connected locally compact groups~\cite[3.3]{Anantharaman02} and algebraic groups over local fields. More generally we allow $P$ to be amenable relative to $G$, although this makes no difference in the two classes of examples just mentioned, see~\cite{Caprace-Monod_rel}. Then $P$ is contained in a \emph{maximal} relatively amenable subgroup, which is just $\widehat{P}$; the latter is still co-\penalty0\hskip0pt\relax{}compact of course. In the case of semi-simple Lie groups or algebraic groups over local fields, $\widehat{P}$ is a minimal parabolic subgroup. This is the motivation for the following statement. \begin{thm}\label{thm:parabolic:abstract} Let $G$ be a topological group admitting a co-\penalty0\hskip0pt\relax{}compact relatively amenable subgroup $P$. For any closed subgroup $H$ containing $\widehat{P}$, we have $\partial(G, H) = G/H$. \end{thm} In particular this implies Theorem~\ref{thm:parabolic}. \begin{proof}[Proof of Theorem~\ref{thm:parabolic:abstract}] In view of Theorem~\ref{thm:coc:hull}, we know already $\partial(G, H) =\partial(G, \widehat{H}) = G/\widehat{H}$. It remains to prove $\widehat{H}=H$. It is known since Furstenberg that $G/\widehat{P}$ is the Furstenberg boundary of $G$; we can see this as a special case of Theorem~\ref{thm:coc:hull}.
In particular, $G/H$ is a $G$-\penalty0\hskip0pt\relax{}boundary and hence a $(G,H)$-\penalty0\hskip0pt\relax{}boundary. By Proposition~\ref{prop:image:contains}, we have a continuous $G$-map $G/\widehat{H}\to \mathrm{P}(G/H)$ whose image contains $G/H$. Since $G/\widehat{H}$ is homogeneous, this means that we have in fact a $G$-map $G/\widehat{H}\to G/H$. It follows that $H$ contains $\widehat{H}$ and hence $\widehat{H}=H$. \end{proof} Here is an explicit illustration of the interplay between Theorem~\ref{thm:coc:hull} and Theorem~\ref{thm:parabolic:abstract}: \begin{exam} Let $G=\mathbf{SL}_3(\mathbf{R})$ and let $H$ be the (non-cocompact!) subgroup of matrices of the form $\left(\begin{smallmatrix} * & * & *\\ * & * & *\\ 0 & 0 & 1\end{smallmatrix}\right)$. Thus $H$ is isomorphic to the special affine group of $\mathbf{R}^2$ studied more closely in Section~\ref{sec:LC}. Let moreover $Q$ be the parabolic subgroup $\left(\begin{smallmatrix} * & * & *\\ * & * & *\\ 0 & 0 & *\end{smallmatrix}\right)$. Then we have \begin{equation*} \widehat{H}= Q, \kern2mm \partial(G, H) = \partial(G, Q) = G/Q = \mathrm{Gr}_2(\mathbf{R}^3) \end{equation*} where the first equality holds up to conjugation. Indeed, if we justify $\widehat{H}= Q$, the remaining identifications hold by Theorems~\ref{thm:coc:hull} and~\ref{thm:parabolic:abstract}. Notice first that $H$ is co-amenable in $Q$. Therefore $H$ has a hull $\widehat{H}$ containing $Q$. On the other hand, $\widehat{Q}=Q$, as established in Theorem~\ref{thm:parabolic:abstract}. But $H<Q<\widehat{H}$ implies that $Q$ is co-amenable to $\widehat{H}$ relative to $G$, and hence $\widehat{Q}$ contains $\widehat{H}$; the claim follows. \end{exam} We note that in this example $Q$ accidentally happens to be a maximal subgroup of $G$, but the reasoning used above applies beyond this special case.
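The assertion that $H$ is co-amenable in $Q$ can be checked by a routine computation, which we record for the reader's convenience: the bottom-right entry is multiplicative on block upper-triangular matrices.

```latex
% Writing elements of Q in block form with A a 2x2 block, b a
% column vector and t a scalar, the product rule
\[
\begin{pmatrix} A_1 & b_1 \\ 0 & t_1 \end{pmatrix}
\begin{pmatrix} A_2 & b_2 \\ 0 & t_2 \end{pmatrix}
=
\begin{pmatrix} A_1 A_2 & A_1 b_2 + b_1 t_2 \\ 0 & t_1 t_2 \end{pmatrix}
\]
% shows that q -> t is a continuous homomorphism Q -> R^* whose
% kernel is exactly H (the matrices with t = 1):
\[
H = \ker\bigl(Q \longrightarrow \mathbf{R}^{*},\; q \mapsto t\bigr),
\qquad
Q/H \cong \mathbf{R}^{*}.
\]
```

Since $Q/H \cong \mathbf{R}^{*}$ is abelian, hence amenable, $H$ is indeed co-amenable in $Q$.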
\section{Malnormal subgroups} Recall that a subgroup $H$ of a group $G$ is called \textbf{almost malnormal} if the intersection $H\cap g H g^{-1}$ is finite for all $g\notin H$. Theorem~\ref{thm:malnormal} from the Introduction holds in this wider setting, indeed in an even more general one: \begin{thm}\label{thm:a:malnormal} Let $H$ be a subgroup of a discrete group $G$. Suppose that $H$ is non-amenable but that $H\cap g H g^{-1}$ is amenable for all $g\notin H$. Then $\partial(G,H) \cong \beta(G/H)$. \end{thm} \begin{proof} We first claim that the Stone--\v{C}ech compactification $\beta(G/H)$ of the $G$-set $G/H$ is a $(G,H)$-\penalty0\hskip0pt\relax{}boundary, using the characterisation of Proposition~\ref{prop:traduc}. There is an $H$-invariant measure, namely the Dirac mass at the trivial coset in $G/H$. We write $*$ for this coset. That measure can be $G$-contracted to any point of $\beta(G/H)$ since it is defined by a point whose orbit is dense. It therefore suffices to show that there is no other $H$-invariant measure on $\beta(G/H)$. To this end, we recall that probability measures on $\beta(G/H)$ correspond to means (i.e.\ finitely additive measures) on $G/H$. Indeed, by the definition of the Stone--\v{C}ech compactification, there is a natural identification between the algebras $\mathrm{C}(\beta(G/H))$ and $\ell^\infty(G/H)$. Therefore we want to show that the Dirac mass at the trivial coset $*$ is the only $H$-invariant mean on $G/H$. Equivalently, that there is no $H$-invariant mean on the $H$-set $S=G/H\setminus\{*\}$. Our assumption is equivalent to the amenability of the stabiliser in $H$ of any point in $S$. Therefore, the existence of an $H$-invariant mean on $S$ would contradict the non-amenability of $H$ (see e.g.\ Lemma~4.5 in~\cite{Glasner-Monod}). The claim is proven. It remains to verify that $\mathrm{P}(\beta(G/H))$ is also universal. Let thus $K$ be any $(G,H)$-\penalty0\hskip0pt\relax{}irreducible affine flow.
Pick an $H$-fixed point in $K$; the associated orbital map yields a $G$-map $G/H\to K$. By the universal property of the Stone--\v{C}ech compactification, this extends to a continuous $G$-map $\beta(G/H)\to K$. The latter induces a $G$-\penalty0\hskip0pt\relax{}morphism from $\mathrm{P}(\beta(G/H))$ to $\mathrm{P}(K)$ which gives the desired $G$-\penalty0\hskip0pt\relax{}morphism to $K$ when composed with the barycentre map on $K$. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:wreath}] Let $G=J\wr H$ with $H$ non-amenable. In order to be able to apply Theorem~\ref{thm:a:malnormal}, it suffices to check that $H$ is almost malnormal in $G$, which is equivalent to checking that the stabiliser in $H$ of a \emph{non-trivial} element of $\oplus_H J$ is finite. This stabiliser must in particular preserve the support of this element, which is a non-empty subset of $H$. The definition of the restricted product $\oplus_H J$ shows that this subset is finite and hence its stabiliser in $H$ is finite too. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:relhyp}] It was proved by Osin that every $H_i$ is almost malnormal, see Theorem~1.4(2) in~\cite{Osin06_AMS}. Therefore, we can again apply Theorem~\ref{thm:a:malnormal} above. \end{proof} \section{The facts in the case of the affine group}\label{sec:LC} This section is devoted to the study of the special affine group of $\mathbf{R}^2$. All the arguments also hold, mutatis mutandis, for $\mathbf{R}^n$ with $n\geq 3$, and in fact even simplify slightly as regards the uniqueness of a certain finitely additive measure below, thanks to Kazhdan's property~(T). Let thus \begin{equation*} G= \mathbf{R}^2 \rtimes H \kern2mm \text{with} \kern2mm H=\mathbf{SL}_2(\mathbf{R}) \end{equation*} and let $D$ be a topological disc. Let $G$ act on the boundary circle $\mathbf{S}^1$ of $D$ by the action on the space of directions. Thus this is a double cover of the projective $G$-action on $\mathbf{P}^1$. 
As for the interior $\mathring D$ of the disc, we identify it with $\mathbf{R}^2$ endowed with the affine $G$-action. Notice that the $G$-actions on $\mathring D$ and on $\mathbf{S}^1$ do indeed combine to turn $D$ into a $G$-flow. \begin{prop}\label{prop:D:bnd} $D$ is a $(G,H)$-\penalty0\hskip0pt\relax{}boundary. \end{prop} \begin{proof} The group $H$ has an invariant probability measure on $D$ since it fixes a point in $\mathbf{R}^2\cong \mathring D$, the origin. Moreover this measure can be $G$-contracted to any point of $D$ since the orbit of that point is dense. It suffices therefore to show that $H$ does not admit any other invariant probability measure $\mu$ on $D$. First, $\mu$ gives mass zero to $\mathbf{S}^1$. Indeed, even the quotient $\mathbf{P}^1$ has no $H$-invariant measure --- in fact this quotient is well known to be the Furstenberg boundary of $H$. Next, on $\mathring D \cong \mathbf{R}^2$, we have an even stronger fact, which we will need again later: the only $H$-invariant \emph{finitely additive} probability measure on the algebra of Borel subsets of $\mathbf{R}^2$ is the Dirac mass at the origin, see e.g.~\cite[ch.~2]{Harpe-Valette}. \end{proof} Notice that $D$ has two $G$-orbits; therefore it is not a minimal $G$-flow. Proposition~\ref{prop:image:contains} now implies that $\partial(G,H)$ is not minimal either. Thus we have already established points~\eqref{af:pt:bnd} and~\eqref{af:pt:no-min} of Example~\ref{exam:affine}. We now proceed to a somewhat non-explicit characterisation of the Furstenberg boundary $\partial(G,H)$. Denote by $\mathrm{C}^\mathrm{b}_\mathrm{ru}(G)$ the space of right uniformly continuous bounded functions on $G$. Recall that the right uniform continuity is equivalent to the continuity in sup-norm of the \emph{left} regular representation. We further denote by $\mathrm{C}^\mathrm{b}_\mathrm{ru}(G/H)$ the subspace of those functions that are right $H$-invariant. 
Although these functions can be viewed as continuous functions on $\mathbf{R}^2$, we caution that the uniform continuity requirement here is much stronger than the uniform continuity with respect to $\mathbf{R}^2$. In fact, using the semi-direct product structure of $G$, we have: \begin{lem}\label{lem:ru} A bounded uniformly continuous function $f$ on $\mathbf{R}^2$ is in $\mathrm{C}^\mathrm{b}_\mathrm{ru}(G/H)$ if and only if \begin{equation*} \lim_{n\to\infty}\sup_{v\in\mathbf{R}^2} \left| f(h_n v) - f(v)\right| =0 \end{equation*} for every sequence $h_n\to 1$ in $H$.\qed \end{lem} In any case, the restriction map $\mathrm{C}^\mathrm{b}_\mathrm{ru}(G/H)\to \mathrm{C}^\mathrm{b}(\mathbf{R}^2)$ realises $\mathrm{C}^\mathrm{b}_\mathrm{ru}(G/H)$ as a closed subalgebra of $\mathrm{C}^\mathrm{b}(\mathbf{R}^2)$ and therefore determines a compactification $\alpha(\mathbf{R}^2)$ which carries the structure of a $G$-flow. The continuity of the action follows from the definition of $\mathrm{C}^\mathrm{b}_\mathrm{ru}(G/H)$. \begin{thm}\label{thm:af} \leavevmode \begin{enumerate}[(i)] \item The Furstenberg boundary $\partial(G,H)$ is isomorphic to $\alpha(\mathbf{R}^2)$.\label{thm:af:pt:isom} \item The $\mathbf{R}^2$-action on the corona $\alpha(\mathbf{R}^2)\setminus \mathbf{R}^2$ is trivial.\label{thm:af:pt:triv} \item The identification $\mathbf{R}^2\cong\mathring D$ extends to a $G$-map $\alpha(\mathbf{R}^2)\to D$ mapping the corona onto $\mathbf{S}^1$.\label{thm:af:pt:quot} \item The compact space $\alpha(\mathbf{R}^2)$ is non-metrisable.\label{thm:af:pt:huge} \end{enumerate} \end{thm} The point of~\eqref{thm:af:pt:quot} is that even when we know that $\alpha(\mathbf{R}^2)$ is $\partial(G,H)$, the corona is a priori only mapped to $\mathrm{P}(\mathbf{S}^1)$. \begin{proof}[Proof of Theorem~\ref{thm:af}] We begin with~\eqref{thm:af:pt:triv}. 
Let $(p_i)_{i\in I}$ be any net tending to infinity in the locally compact space $\mathbf{R}^2$ (this means by definition that $p_i$ eventually leaves any compact subset). Write $p_i=\left(\begin{smallmatrix} x_i\\y_i\end{smallmatrix}\right)$ and suppose that it converges in $\alpha(\mathbf{R}^2)$. By symmetry under rotations and scaling, it suffices to show that $p_i$ and $p'_i=\left(\begin{smallmatrix} x_i+1\\y_i\end{smallmatrix}\right)$ have the same limit. Suppose first that $|y_i|$ tends to infinity. Let $h_i\in H$ be the matrix $\left(\begin{smallmatrix} 1&1/y_i\\ 0&1 \end{smallmatrix}\right)$. Since $h_i\to 1$ and $h_i p_i = p'_i$, indeed $p_i$ and $p'_i$ have the same limit. Therefore we can reduce to the case where $y_i$ remains bounded. Now $|x_i|$ tends to infinity and we take $a_i\in H$ to be the diagonal matrix with diagonal entries $(x_i +1)/x_i$ and $x_i/(x_i +1)$. Again, $a_i\to 1$. This time $a_i p_i - p'_i$ tends to zero in $\mathbf{R}^2$, and therefore the continuity of the $G$-action implies that $p_i$ and $p'_i$ have the same limit. We now turn to~\eqref{thm:af:pt:isom}. We claim that the Dirac mass at the origin is the only $H$-invariant probability measure on $\alpha(\mathbf{R}^2)$. Let indeed $\mu$ be $H$-invariant; it suffices to show that $\mu$ gives mass zero to the corona. Otherwise, by point~\eqref{thm:af:pt:triv}, we would obtain a $G$-invariant measure on $\alpha(\mathbf{R}^2)$. This corresponds to a $G$-invariant mean on the space $\mathrm{C}^\mathrm{b}_\mathrm{ru}(G/H)$, which is one of the criteria for $H$ to be co-amenable in $G$, see~\cite[No.~2 \S4]{Eymard72}. However this co-amenability does not hold, because another equivalent criterion (same reference) is the existence of a $G$-invariant mean on $L^\infty(G/H)$, which does not exist due to the phenomenon already mentioned in the proof of Proposition~\ref{prop:D:bnd}. 
Now the claim is established and it follows that $\alpha(\mathbf{R}^2)$ is a $(G,H)$-\penalty0\hskip0pt\relax{}boundary in view of the density of $\mathbf{R}^2$. To prove that it is the Furstenberg boundary of the pair, it suffices to show that there is a $G$-\penalty0\hskip0pt\relax{}morphism from $\mathrm{P}(\alpha(\mathbf{R}^2))$ to $\Delta(G,H)$. Let $x\in \Delta(G,H)$ be an $H$-fixed point. By continuity of the action and compactness, the orbital map of $x$ is a right uniformly continuous map $G/H\to \Delta(G,H)$. Therefore it extends to a continuous $G$-map from $\alpha(\mathbf{R}^2)$ to $\Delta(G,H)$ and finally using the barycentre map we obtain a $G$-\penalty0\hskip0pt\relax{}morphism from $\mathrm{P}(\alpha(\mathbf{R}^2))$ to $\Delta(G,H)$ as desired. To establish~\eqref{thm:af:pt:quot}, it suffices to show that any net in $\mathbf{R}^2$ that converges to a point in the corona of $\alpha(\mathbf{R}^2)$ must converge in direction. Let $s,c\colon \mathbf{R}^2\to \mathbf{R}$ be any continuous functions that coincide respectively with the sine and cosine of the direction of vectors in $\mathbf{R}^2$ outside some compact neighbourhood of the origin. What we have to show is that $s$ and $c$ are in $\mathrm{C}^\mathrm{b}_\mathrm{ru}(G/H)$. This follows from Lemma~\ref{lem:ru}. Finally, to justify~\eqref{thm:af:pt:huge}, it suffices to exhibit an embedding of $\mathrm{C}^\mathrm{b}_\mathrm{u}(\mathbf{R}_{\geq 0})$ into $\mathrm{C}(\alpha(\mathbf{R}^2))$. Here the uniform structure on $\mathbf{R}_{\geq 0}$ is the usual one (induced from the group $\mathbf{R}$, or equivalently the usual metric). Given $f\in \mathrm{C}^\mathrm{b}_\mathrm{u}(\mathbf{R}_{\geq 0})$, one checks that the function $\widetilde{f}$ defined on $v\in \mathbf{R}^2$ by $\widetilde{f}(v)= f(\log(\|v\|+1))$ is in $\mathrm{C}^\mathrm{b}_\mathrm{ru}(G/H)$ by using Lemma~\ref{lem:ru}. \end{proof} We have now justified all claims made in the Introduction about Example~\ref{exam:affine}. 
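The elementary matrix manipulations used in part~\eqref{thm:af:pt:triv} of the proof above (the unipotent matrices $h_i$ and the diagonal matrices $a_i$) can be sanity-checked with exact rational arithmetic. The following Python sketch is purely illustrative, with arbitrarily chosen sample values for the coordinates $x_i$, $y_i$; it is of course not part of the argument.

```python
from fractions import Fraction

# Sample point p_i = (x_i, y_i) with y_i != 0, and its translate p'_i = (x_i + 1, y_i).
x, y = Fraction(7), Fraction(5)
p = (x, y)
p_prime = (x + 1, y)

# Case |y_i| -> infinity: h_i = [[1, 1/y_i], [0, 1]] satisfies h_i p_i = p'_i exactly.
h_p = (p[0] + p[1] / y, p[1])
assert h_p == p_prime

# Case y_i bounded, |x_i| -> infinity: a_i = diag((x_i+1)/x_i, x_i/(x_i+1)).
a_p = ((x + 1) / x * p[0], x / (x + 1) * p[1])
assert a_p[0] == p_prime[0]                  # first coordinates agree exactly
assert a_p[1] - p_prime[1] == -y / (x + 1)   # error y_i/(x_i+1) -> 0 as |x_i| -> infinity
```

The second case only shows $a_i p_i - p'_i \to 0$, which is why the proof there additionally invokes continuity of the $G$-action.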
In particular, point~\eqref{thm:af:pt:quot} above implies: \begin{cor}\label{cor:nomap} There is no $G$-map $\partial(G)\to \partial(G,H)$.\qed \end{cor} \begin{rem}\label{rem:no-morph:pairs} This example also justifies the claim in Remark~\ref{rem:morph:pairs} that there is in general no $f$-equivariant morphism $\partial(G', H')\to \partial(G, H)$ associated to a morphism of pairs $f$. Indeed, in the context of Corollary~\ref{cor:nomap}, we can take $G'=G$ with $f$ the identity and $H'$ trivial. \end{rem} We note in passing that $D$ is not a smallest non-trivial $(G,H)$-\penalty0\hskip0pt\relax{}boundary: it admits as a quotient the projective plane $\mathbf{P}^2$ obtained by identifying each direction with its opposite in the boundary circle of $D$. The example of $\mathbf{P}^2$ would not, however, have allowed us to deduce Corollary~\ref{cor:nomap} since there is an obvious $G$-map $\mathbf{P}^1\to\mathbf{P}^2$. \bigskip A small variation of Example~\ref{exam:affine}, leaving the connected case, gives a Furstenberg boundary that is a very classical object and in a sense even larger than $\alpha(\mathbf{R}^2)$ (onto which it naturally projects). \begin{exam}\label{exam:Samuel} Let $G= \mathbf{R}^2 \rtimes \mathbf{SL}_2(\mathbf{Z})$ and $H=\mathbf{SL}_2(\mathbf{Z})$. Then $\partial(G,H) \cong \sigma(\mathbf{R}^2)$, the Samuel compactification of $\mathbf{R}^2$. This coincides with the \emph{greatest ambit} of $\mathbf{R}^2$ and can be defined as the Gelfand spectrum of the algebra of uniformly continuous bounded functions on $\mathbf{R}^2$. \end{exam} The proof is a much simpler version of that of Theorem~\ref{thm:af} since there is no $H$-continuity to take into account. In fact it is almost the same as that of Theorem~\ref{thm:malnormal}, only with the topology of $\mathbf{R}^2$ coming into the picture. The same arguments show the following. Define $H$ to be $\mathbf{SL}_2^\mathrm{d}(\mathbf{R})$, which denotes the group $\mathbf{SL}_2(\mathbf{R})$ in the discrete topology. 
Set $G= \mathbf{R}^2 \rtimes H$, which is still a locally compact group. Then $\partial(G,H) \cong \sigma(\mathbf{R}^2)$, in contrast to Theorem~\ref{thm:af}. \bibliographystyle{amsplain}
https://arxiv.org/abs/1902.08513
Furstenberg boundaries for pairs of groups
Furstenberg has associated to every topological group $G$ a universal boundary $\partial(G)$. If we consider in addition a subgroup $H<G$, the relative notion of $(G,H)$-boundaries admits again a maximal object $\partial(G,H)$. In the case of discrete groups, an equivalent notion was introduced by Bearden--Kalantar as a very special instance of their constructions. However, the analogous universality does not always hold, even for discrete groups. On the other hand, it does hold in the affine reformulation in terms of convex compact sets, which admits a universal simplex $\Delta(G,H)$, namely the simplex of measures on $\partial(G,H)$. We determine the boundary $\partial(G,H)$ in a number of cases, highlighting properties that might appear unexpected.
https://arxiv.org/abs/1408.2786
Hook Weighted Increasing Trees, Cayley Trees and Abel-Hurwitz Identities
Recently Féray, Goulden and Lascoux gave a proof of a new hook summation formula for unordered increasing trees by means of a generalization of the Prüfer code for labelled trees and posed the problem of finding a bijection between weighted increasing trees and Cayley trees. We give such a bijection, providing an answer to the problem posed by Féray, Goulden and Lascoux as well as showing a combinatorial connection to the theory of tree volumes defined by Kelmans. In addition we give two simple proofs of the hook summation formula. As an application we describe how the hook summation formula gives a combinatorial proof of a generalization of Abel and Hurwitz' theorem, originally proven by Strehl.
\section{Introduction} We begin by fixing some terminology. A tree $T$ is an acyclic connected graph and we denote by $V(T)$ the set of vertices of $T$ and $E(T)$ the set of edges. A tree is said to be rooted if one of its vertices is distinguished. This distinguished vertex is called the root. We only consider unordered trees, that is, trees in which the children of any vertex are unordered. Given a finite set $A$, we let $\mathbf{m}(A) = \min(A)$ and $\mathbf{M}(A) = \max(A)$. Let $T$ be a labelled tree with vertex labels given by the finite set $A$ and rooted at the vertex labelled with $\mathbf{m}(A)$. We direct the edges of $T$ away from the root so that if $(i,j)$ is an edge in $T$ then $i$ is on the unique path from the root of $T$ to $j$. In this case we call $i$ the father of $j$ in $T$ and denote this by $\mathfrak{f}_T(j)$. A vertex $i$ is said to be increasing if $\mathfrak{f}_T(i) < i$ and decreasing otherwise. Note that $\mathfrak{f}_T(\mathbf{m}(A))$ is not defined and so the root of $T$ is neither increasing nor decreasing. We say that a tree $T$ is increasing if every non-root vertex in $T$ is an increasing vertex. Given a tree $T$ and a vertex $i \in V(T)$ we define the hook generated by $i$ in $T$, written $\mathfrak{h}_T(i)$, to be the set of vertices $j$ such that $i$ is in the unique path from the root to $j$ in $T$. In other words, $\mathfrak{h}_T(i)$ is the set of vertices in the subtree of $T$ rooted at $i$. Note that $i \in \mathfrak{h}_T(i)$. Consider the family $\mathfrak{T}_A$ of increasing unordered labelled trees with vertex labels given by $A$. F\'eray, Goulden and Lascoux\cite{FGL2} studied a combinatorial sum involving a hook weight summed over increasing trees with a fixed number of vertices. Using a generalization of the Pr\"ufer code, they showed that these sums have an appealing multiplicative closed form. In particular, the following theorem is proven. 
\begin{thm}[Theorem 1.1 in \cite{FGL2}]\label{Thm.2} For a tree $T \in \mathfrak{T}_A$ define a weight on $T$ as \[ \widetilde{w}(T) = \prod_{i \in A \backslash \mathbf{m}(A)} x_{\mathfrak{f}_T(i)} \left( \sum_{ j \in \mathfrak{h}_T(i) } y_{i,j} \right). \] Then the generating series is given by \begin{align*} \Theta_A &= \sum_{T \in \mathfrak{T}_A} \widetilde{w}(T) \\ &= x_{\mathbf{m}(A)} y_{\mathbf{M}(A),\mathbf{M}(A)} \prod_{i \in A \backslash \{\mathbf{M}(A),\mathbf{m}(A)\}} \left( y_{i,i}\sum_{\substack{j \in A \\ j \leq i}} x_j + x_i \sum_{\substack{j \in A \\ j > i}} y_{i,j} \right). \end{align*} \end{thm} If one makes the specialization $x_i \to 1$ and $y_{i,j} \to 1$ for all $i$ and $j$ in Theorem~\ref{Thm.2} then the right hand side of the identity becomes $|A|^{|A|-2}$, which is the number of Cayley trees with vertices labelled by the set $A$ as shown by Cayley\cite{Cayley}. In other words, $|A|^{|A|-2}$ is the number of trees with labels given by $A$ and which are not rooted and not necessarily increasing (although for convenience we may assume that a Cayley tree is rooted at the vertex labelled by $\mathbf{m}(A)$). This observation prompted F\'eray, Goulden and Lascoux to ask for a combinatorial bijection between increasing trees and Cayley trees which could be used to prove Theorem~\ref{Thm.2}. Further evidence for the existence of such a bijection was provided by some results in F\'eray and Goulden's earlier paper\cite{FG1} in which the authors study a specialization of Theorem~\ref{Thm.2} and are able to give a combinatorial bijection for the top degree of the polynomial identity (Section 2.2 in \cite{FG1}) which involves Cayley trees. In Section~\ref{Section.Direct} we give a bijective proof of Theorem~\ref{Thm.2} which involves Cayley trees, solving the problem posed by F\'eray, Goulden and Lascoux. 
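Theorem~\ref{Thm.2} is easy to verify by brute force for small sets: when $A = \{1, 2, \cdots, n\}$, an increasing tree is exactly a choice of a father $\mathfrak{f}_T(i) \in \{1, \cdots, i-1\}$ for each $i \geq 2$, so there are $(n-1)!$ such trees. The following Python/SymPy sketch (illustrative only; the symbol names are ours) enumerates them and checks the identity symbolically for $n = 4$.

```python
import itertools
import sympy

n = 4
A = list(range(1, n + 1))
x = {i: sympy.Symbol(f"x{i}") for i in A}
y = {(i, j): sympy.Symbol(f"y{i}_{j}") for i in A for j in A if j >= i}

def hooks(father):
    """h[i] = vertices j whose path from the root 1 passes through i."""
    h = {i: {i} for i in A}
    for j in A[1:]:
        v = j
        while v != 1:          # walk up to the root, adding j to each ancestor's hook
            v = father[v]
            h[v].add(j)
    return h

lhs = sympy.Integer(0)
for fathers in itertools.product(*[range(1, i) for i in A[1:]]):
    father = dict(zip(A[1:], fathers))   # one increasing tree
    h = hooks(father)
    w = sympy.Integer(1)
    for i in A[1:]:
        w *= x[father[i]] * sum(y[(i, j)] for j in h[i])
    lhs += w

rhs = x[1] * y[(n, n)]
for i in A[1:-1]:
    rhs *= (y[(i, i)] * sum(x[j] for j in A if j <= i)
            + x[i] * sum(y[(i, j)] for j in A if j > i))

assert sympy.expand(lhs - rhs) == 0
```

Note that only $y_{i,j}$ with $j \geq i$ is needed on the left, since in an increasing tree every vertex of $\mathfrak{h}_T(i)$ is at least $i$.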
In addition to answering the question posed by F\'eray, Goulden and Lascoux, the contents of Section~\ref{Section.Direct} also indicate a connection between the hook sum formula in Theorem~\ref{Thm.2} and the theory of tree volume formulas defined by Kelmans\cite{K92} and further studied by Kelmans, Postnikov and Pitman\cite{P01,P02,KP08}. This connection comes from Theorem~\ref{Bijection} below, which implies that the generating polynomial $\Theta_A$ in Theorem~\ref{Thm.2} is in fact a tree volume polynomial corresponding to the complete graph. More generally, this gives a connection between the hook sum formula in Theorem~\ref{Thm.2} and various generalizations of the binomial theorem, such as Abel and Hurwitz' identities. As an application of Theorem~\ref{Thm.2} we will take a moment to discuss more directly the connection to a multivariate generalization of the binomial theorem. In \cite{Strehl}, Strehl proves the following multivariate generalization of the binomial theorem. \begin{thm}[Theorem 1(7) in \cite{Strehl}]\label{Thm.S} Suppose $A$ is a finite set of positive integers and let \[ w_A(z) = z \prod_{i \in A \backslash \{\mathbf{M}(A)\}} \left( z + \sum_{\substack{j \in A \\ j \leq i}} x_j + \sum_{\substack{j \in A \\ j > i}} y_{i,j} \right). \] Then \[ w_A(u+v) = \sum_{B \sqcup C = A} w_B(u)w_C(v), \] where $B \sqcup C = A$ means that $B \cup C = A$ and $B \cap C = \emptyset$. \end{thm} By specializing variables, Theorem~\ref{Thm.S} can be seen to be a generalization of the binomial theorem. In particular, if we let $y_{i,j} \to 0$ and $x_i \to 0$ for all $i$ and $j$ then it is easily seen that the identity in Theorem~\ref{Thm.S} is the binomial identity. If we let $y_{i,j} \to 1$ and $x_i \to 1$ for all $i$ and $j$ then Theorem~\ref{Thm.S} gives Abel's generalization\cite{Abel,Riordan} of the binomial theorem, \[ (u+v)(u+v+n)^{n-1} = \sum_{k = 0}^n \binom{n}{k} u (u+k)^{k-1}v(v+(n-k))^{n-k-1}, \] where $n = |A|$. 
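Abel's identity as displayed above can be checked numerically; the sketch below (an illustration only) uses exact rational arithmetic so that the $k = 0$ and $k = n$ terms, where the exponent $k - 1$ or $n - k - 1$ equals $-1$, come out exactly as $1$ under the usual convention.

```python
from fractions import Fraction
from math import comb

def abel_lhs(u, v, n):
    return (u + v) * (u + v + n) ** (n - 1)

def abel_rhs(u, v, n):
    # The k = 0 term is u * u**(-1) * v * (v + n)**(n-1) = v * (v + n)**(n-1).
    return sum(comb(n, k) * u * (u + k) ** (k - 1)
               * v * (v + (n - k)) ** (n - k - 1)
               for k in range(n + 1))

u, v = Fraction(2), Fraction(3)
assert abel_lhs(u, v, 5) == abel_rhs(u, v, 5)
```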
Lastly, if we let $y_{i,j} \to x_j$ for $i < j$ then Theorem~\ref{Thm.S} becomes Hurwitz' generalization\cite{Hurwitz} of Abel's identity, \[ (u+v)\left(u+v+ \sum_{i \in A} x_i\right)^{|A|-1} = \sum_{B \sqcup C = A} u\left(u+\sum_{i \in B} x_i\right)^{|B|-1} v \left(v+\sum_{i \in C}x_i\right)^{|C|-1}. \] We refer the reader to Strehl's paper \cite{Strehl} for additional specializations of interest as well as a number of applications. Theorem~\ref{Thm.2} can be used to give a new proof of Theorem~\ref{Thm.S}. \begin{thm}\label{Thm.Binom} Let $x_i, i \geq 0$ and $y_{i,j}, 0 \leq i < j$ be indeterminates and for any finite set $A$ of integers let \[ \Theta_A = x_{\mathbf{m}(A)} y_{\mathbf{M}(A),\mathbf{M}(A)} \prod_{i \in A \backslash \{\mathbf{M}(A),\mathbf{m}(A)\}} \left( y_{i,i}\sum_{\substack{j \in A \\ j \leq i}} x_j + x_i \sum_{\substack{j \in A \\ j > i}} y_{i,j} \right). \] Then for any finite set $A$ of positive integers, \[ \left. \Theta_{A \cup \{0\}} \right|_{x_0 = u + v} = \sum_{B \sqcup C = A} \left. \Theta_{B \cup \{0\}} \right|_{x_0 = u} \left. \Theta_{C \cup \{0\}} \right|_{x_0 = v}. \] \end{thm} \begin{proof} Our Theorem~\ref{Thm.Binom} above follows directly from Theorem~\ref{Thm.2} since both sides of the equality in Theorem~\ref{Thm.Binom} count trees in which each root edge is coloured either red or blue and then each blue edge is marked with a $u$ and each red edge is marked with a $v$. \end{proof} Note that if $A$ is a finite set of positive integers and we let $y_{i,i} \to 1$ for all $i$ and $y_{i,j} \to \frac{y_{i,j}}{x_i}$ for all $i < j$ then we recover Theorem~\ref{Thm.S} from Theorem~\ref{Thm.Binom} where $w_A(z) = \left. \Theta_{A \cup \{0\}} \right|_{x_0 = z}$. Similarly, if we let $z \to x_0 y_{0,0}$, $x_i \to x_i y_{i,i}$ for $i \in A$ and $y_{i,j} \to x_i y_{i,j}$ for $i < j \in A$ then we recover Theorem~\ref{Thm.Binom} from Theorem~\ref{Thm.S} where $\Theta_{A \cup \{0\}} = w_A$. 
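Hurwitz' identity displayed above admits the same kind of finite check, now over all $2^{|A|}$ splittings $B \sqcup C = A$. The sketch below is again only a numerical illustration, with exact rationals so that the empty-part terms $u(u+0)^{-1} = 1$ behave correctly.

```python
from fractions import Fraction
from itertools import combinations

def hurwitz_holds(u, v, xs):
    """Check (u+v)(u+v+sum(xs))^{n-1} against the sum over splittings B, C of xs."""
    n = len(xs)
    lhs = (u + v) * (u + v + sum(xs)) ** (n - 1)
    rhs = Fraction(0)
    for r in range(n + 1):
        for B in combinations(range(n), r):   # B runs over all subsets of the index set
            sB = sum(xs[i] for i in B)
            sC = sum(xs) - sB
            rhs += (u * (u + sB) ** (len(B) - 1)
                    * v * (v + sC) ** (n - len(B) - 1))
    return lhs == rhs

xs = [Fraction(1), Fraction(3), Fraction(4)]
assert hurwitz_holds(Fraction(1), Fraction(2), xs)
```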
It should be noted that the method of proof for Theorem~\ref{Thm.S} and Theorem~\ref{Thm.Binom} is very similar, the main difference being the combinatorial description of the generating series involved. Strehl uses the description of the generating series given in Proposition~\ref{MatrixTreeProp} below as sums over Cayley trees. Instead, we use the description of the generating series as hook weighted sums over increasing trees as given in the statement of Theorem~\ref{Thm.2}. The remainder of this paper is organized as follows. In Section~\ref{Section.Direct} we give a combinatorial proof of Theorem~\ref{Thm.2} by describing an `unsorting' operation which can be applied to increasing trees and, after repeated application, results in a Cayley tree. Following the combinatorial proof we also describe two simple proofs of Theorem~\ref{Thm.2}. In Section~\ref{Section.Indirect} we give an indirect combinatorial proof by showing that both expressions for the polynomials $\Theta_A$ given in Theorem~\ref{Thm.2} satisfy the same recursion and initial conditions and in Section~\ref{Section.Algebraic} we give a direct algebraic proof which uses the fact that increasing trees can be constructed inductively by adding leaves. \section{A Bijective Proof}\label{Section.Direct} Let $\mathcal{L}_{i,j}(A)$ be the set of pairs $(T, \phi)$ where $T$ is a labelled tree with vertex labels given by $A$, $\phi$ is a function from the set of increasing vertices in $T$ to $A$ and the pair $(T,\phi)$ satisfies the following conditions. \begin{enumerate} \item [(1)] For any increasing vertex $v$ in $T$, $\phi(v) \in \mathfrak{h}_T(v)$ and $\phi(v) \geq v$. \item [(2)] If $v$ is an increasing vertex in $T$ and $\phi(v) \not = v$ then every vertex on the unique path from $v$ to the root (not including the root) is increasing. \item [(3)] If $v$ is a decreasing vertex in $T$ and $u$ is an increasing vertex with $\phi(u) \not = u$ then $u < v$. \item [(4)] $T$ has $i$ decreasing vertices. 
\item [(5)] $T$ has $j$ increasing vertices $v$ with $\phi(v) \not = v$. \end{enumerate} Define a weight function on $\mathcal{L}_{i,j}(A)$ by \[ \omega(T, \phi) = \prod_{\substack{\text{increasing} \\ w \in V(T)}} x_{\mathfrak{f}_T(w)} y_{w,\phi(w)} \prod_{\substack{\text{decreasing} \\ w \in V(T)}} x_w y_{w,\mathfrak{f}_T(w)}. \] The goal of the following theorem is to describe a method by which we can transform trees contained in the sets $\mathcal{L}_{0,j}(A)$ (increasing trees) into trees counted by the sets $\mathcal{L}_{i,0}(A)$. The reason for this is that the increasing trees contained in the $\mathcal{L}_{0,j}(A)$ sets are the objects of interest for the purposes of Theorem~\ref{Thm.2}; however, the weight function depends on non-local information. In particular, for some vertex $v$ it may be the case that $\phi(v)$ is not adjacent to $v$ since the only condition is that $\phi(v) \in \mathfrak{h}_T(v)$. Fortunately, the following theorem says that we can repeatedly `unsort' the increasing trees so that they become Cayley trees in which the weight function is entirely local. That is, in $\mathcal{L}_{i,0}(A)$ the weight of each tree depends only on vertices and their neighbors and so the generating series can be computed in a straightforward way. \begin{thm}\label{Bijection} There exists a weight preserving bijection between $\mathcal{L}_{i,j}(A)$ and $\mathcal{L}_{i+1,j-1}(A).$ \end{thm} \begin{proof} Let $(T,\phi) \in \mathcal{L}_{i,j}(A)$ and let $v$ be the increasing vertex with greatest label such that $\phi(v) \not = v$. Let $b \in \mathfrak{h}_T(v)$ be the vertex adjacent to $v$ with $\phi(v) \in \mathfrak{h}_T(b)$ and $a$ be the vertex adjacent to $v$ on the unique path from $v$ to the root. In other words, $a = \mathfrak{f}_T(v)$ and $b$ is the child of $v$ whose hook contains $\phi(v)$. Note that condition (2) on $(T,\phi)$ implies that every vertex in the path from $v$ to the root is increasing. 
Form a new tree $T'$ by removing edges $av$ and $vb$ and adding edges $ab$ and $\phi(v) v$. Since condition (3) implies that $b$ is an increasing vertex in $T$, it is still increasing in $T'$. Also, vertex $v$ is decreasing in $T'$ by condition (1). If we construct a function $\phi'$ from the set of increasing vertices in $T'$ to $A$ such that $\phi'(u) = \phi(u)$ for all increasing vertices $u$ in $T'$, then it is easily checked that $(T', \phi')$ satisfies the conditions for $\mathcal{L}_{i+1,j-1}(A)$. To see that this map is invertible we only need $a$, $b$ and $v$ to be uniquely determined in $T'$ since $\phi$ must be equal to $\phi'$ for all increasing vertices in $T'$ and $\phi(v) = \mathfrak{f}_{T'}(v)$. However, $v$ is the decreasing vertex in $T'$ with the smallest label (this follows from condition (4) and the choice of $v$ in $T$). Once we know this, $a$ and $b$ must be the unique pair of adjacent vertices on the path from $v$ to the root in $T'$ such that $a < v < b$ and every vertex on the path from the root to $a$ in $T'$ is increasing (this follows from conditions (2) and (3)). It is then straightforward to check that the constructed bijection is weight preserving since the weight corresponding to vertices $v$ and $b$ in $(T,\phi)$ is equal to their weight in $(T', \phi')$ (although $v$ becomes decreasing in $T'$). \end{proof} Now we need to determine the generating series for trees in the collection of sets of the form $\mathcal{L}_{i,0}(A)$. However, note that in this case the map $\phi$ is redundant since every increasing vertex $v$ in such a tree must have $\phi(v) = v$. In other words, this amounts to determining the generating series for the set of Cayley trees. \begin{pro}\label{MatrixTreeProp} Let $\mathfrak{D}(A)$ be the set of Cayley trees labelled by $A$, rooted at $\mathbf{m}(A)$ and with edges directed toward the root. 
For $T \in \mathfrak{D}(A)$ let \[ w(T) = \prod_{(i,j) \in E(T)} w_{i,j}, \] where \[ w_{i,j} = \begin{cases} x_i y_{i,j} & \mbox{ if } i < j, \\ x_j y_{i,i} & \mbox{ if } i > j. \end{cases} \] Then \[ \sum_{T \in \mathfrak{D}(A)} w(T) = x_{\mathbf{m}(A)} y_{\mathbf{M}(A),\mathbf{M}(A)} \prod_{i \in A \backslash \{\mathbf{M}(A),\mathbf{m}(A)\}} \left( y_{i,i} \sum_{\substack{j \in A \\ j \leq i}} x_j + x_i \sum_{\substack{j \in A \\ j > i}} y_{i,j} \right). \] \end{pro} \begin{proof} This follows from a straightforward application of the matrix tree theorem and is essentially the same as the method used in part of the proof of Proposition~1 in Strehl\cite{Strehl}. Without loss of generality we may assume that $A = \{1, 2, \cdots, n\}$. By the matrix tree theorem, \[ \sum_{T \in \mathfrak{D}(\{1, 2, \cdots, n\})} w(T) = \det(K_{1,1}), \] where \[ k_{i,j} = \begin{cases} -x_i y_{i,j} & \mbox{ if } 1 \leq i < j \leq n, \\ -x_j y_{i,i} & \mbox{ if } 1 \leq j < i \leq n, \\ y_{i,i} \sum_{m=1}^{i-1} x_m + x_i \sum_{m = i+1}^n y_{i,m} & \mbox{ if } i = j, \end{cases} \] and $K_{1,1}$ is the matrix $K$ with the first row and column removed. Adding each of the columns to the last column and then subtracting $\frac{y_{i-1,i-1}}{y_{i,i}}$ times row $i$ from row $i-1$ for each $i$ gives the matrix $L$ with \[ \ell_{i,n} = 0 \mbox{ for } 1 \leq i < n-1, \qquad \ell_{i,j} = 0 \mbox{ for } 1 \leq j < i \leq n-1, \] \[ \ell_{i,i} = y_{i+1,i+1} \sum_{j=1}^{i+1} x_j + x_{i+1} \sum_{j=i+2}^n y_{i+1,j} \mbox{ for } 1 \leq i < n-1, \] and $\ell_{n-1,n-1} = y_{n,n}x_1.$ Since $\det(K_{1,1}) = \det(L)$ is the product of the entries on the main diagonal of $L$, the result follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{Thm.2}.] 
Without loss of generality we may assume that $A = \{1, 2, \cdots, n\}$, so that $\mathbf{m}(A) = 1$. First note that by expanding it is easily seen that \[ \sum_{T \in \mathfrak{T}_A} \prod_{v = 2}^n x_{\mathfrak{f}_T(v)} \left( \sum_{u \in \mathfrak{h}_T(v)} y_{v,u} \right) = \sum_{(T,\phi)} \prod_{v=2}^n x_{\mathfrak{f}_T(v)} y_{v,\phi(v)} \] where the sum is over all pairs $(T, \phi)$ where $T$ is in $\mathfrak{T}_A$ and $\phi$ is a map from the set of increasing vertices in $T$ to $A$ with $\phi(v) \in \mathfrak{h}_T(v)$ for all increasing vertices $v$. However, this is equal to the sum \[ \sum_{i \geq 0} \sum_{(T,\phi) \in \mathcal{L}_{0,i}(A)} \omega(T,\phi). \] Letting $\mathfrak{D}(A)$ be the set of Cayley trees with vertex labels given by $A$ as in Proposition~\ref{MatrixTreeProp}, repeatedly applying Theorem~\ref{Bijection} then gives \[ \sum_{j \geq 0} \sum_{(T,\phi) \in \mathcal{L}_{j,0}(A)} \omega(T,\phi) = \sum_{T \in \mathfrak{D}(A)} \prod_{\substack{\text{increasing} \\ v \in V(T)}} x_{\mathfrak{f}_T(v)}y_{v,v} \prod_{\substack{\text{decreasing} \\ v \in V(T)}} x_v y_{v,\mathfrak{f}_T(v)}. \] The result then follows by applying Proposition~\ref{MatrixTreeProp}. \end{proof} \section{An Indirect Combinatorial Proof}\label{Section.Indirect} We will now give an indirect combinatorial proof of Theorem~\ref{Thm.2} which relies on Proposition~\ref{MatrixTreeProp}. Given a set $A$ of positive integers, let $t_A = 1$ if $|A| = 1$ and for $|A| > 1$, \[ t_A = x_{\mathbf{m}(A)} y_{\mathbf{M}(A),\mathbf{M}(A)} \prod_{i \in A \backslash \{\mathbf{M}(A),\mathbf{m}(A)\}} \left( y_{i,i} \sum_{\substack{j \in A \\ j \leq i}} x_j + x_i \sum_{\substack{j \in A \\ j > i}} y_{i,j} \right). \] From Proposition~\ref{MatrixTreeProp} above we immediately get the following two results. \begin{lem}\label{singleEdge} Let $\mathfrak{E}(A)$ be the subset of trees in $\mathfrak{D}(A)$ which have a unique edge incident with $\mathbf{m}(A)$. 
If \[ r_A = \sum_{T \in \mathfrak{E}(A)} w(T), \] with $w(T)$ as defined in Proposition~\ref{MatrixTreeProp}, then \[ r_A = x_{\mathbf{m}(A)} y_{\mathbf{M}(A),\mathbf{M}(A)}\prod_{i \in A \backslash \{\mathbf{M}(A),\mathbf{m}(A)\}} \left( y_{i,i} \sum_{\substack{j \in A \backslash \mathbf{m}(A) \\ j \leq i}} x_j + x_i \sum_{\substack{j \in A \backslash \mathbf{m}(A) \\ j > i}} y_{i,j} \right). \] \end{lem} \begin{proof} This follows from the observation that \[ r_A = x_{\mathbf{m}(A)} \left. \frac{d}{d x_{\mathbf{m}(A)}} t_A \right|_{x_{\mathbf{m}(A)} = 0}. \] \end{proof} \begin{pro}\label{LittleRecursion} With the polynomials $t_A$ and $r_A$ as defined above, with $|A| > 1$ and for any $a \in A \backslash \{ \mathbf{m}(A) \}$, \[ t_A = \sum_{\substack{B \sqcup C = A \\ \mathbf{m}(A) \in C \\ a \in B}} x_{\mathbf{m}(A)} \left( \sum_{j \in B} y_{\mathbf{m}(B), j} \right) t_B t_C. \] \end{pro} \begin{proof} By Proposition~\ref{MatrixTreeProp} we know that $t_A = \sum_{T \in \mathfrak{D}(A)} w(T)$. For any $T \in \mathfrak{D}(A)$ there is a unique child $v$ of $\mathbf{m}(A)$ for which the subtree rooted at $v$ contains $a$. Letting $B$ be the set of labels in this subtree, it follows from Lemma~\ref{singleEdge} that $w(T) = w(T_1)w(T_2)$ where $T_1 \in \mathfrak{E}(B \cup \mathbf{m}(A))$ and $T_2 \in \mathfrak{D}(A \backslash B)$. Thus, \[ t_A = \sum_{\substack{B \sqcup C = A \\ \mathbf{m}(A) \in C \\ a \in B}} r_{B \cup \mathbf{m}(A)} t_C. \] We also see that \[ r_{B \cup \mathbf{m}(A)} = x_{\mathbf{m}(A)} \left( \sum_{j \in B} y_{\mathbf{m}(B),j} \right) t_B, \] from which the result follows. \end{proof} Given a finite set $A$ of positive integers, let \[ \Gamma_A = x_{\mathbf{m}(A)} \left. \frac{d}{d x_{\mathbf{m}(A)}} \Theta_A \right|_{x_{\mathbf{m}(A)} = 0} = \sum_{T \in \mathfrak{R}_A} \widetilde{w}(T) \] where $\mathfrak{R}_A$ is the subset of $\mathfrak{T}_A$ in which there is a single edge incident with $\mathbf{m}(A)$. 
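The determinant evaluation behind Proposition~\ref{MatrixTreeProp} can likewise be confirmed symbolically for small $n$. The SymPy sketch below (illustrative only; the symbol names are ours) builds the reduced matrix $K_{1,1}$ from the entries $k_{i,j}$ given in that proof and compares its determinant with the product formula for $n = 4$.

```python
import sympy

n = 4
x = sympy.symbols(f"x1:{n + 1}")   # x[0], ..., x[n-1] stand for x_1, ..., x_n
y = {(i, j): sympy.Symbol(f"y{i}_{j}")
     for i in range(1, n + 1) for j in range(i, n + 1)}

def k_entry(i, j):
    """Entry k_{i,j} (1-based) of the matrix K from the proof of the proposition."""
    if i < j:
        return -x[i - 1] * y[(i, j)]
    if i > j:
        return -x[j - 1] * y[(i, i)]
    return (y[(i, i)] * sum(x[m - 1] for m in range(1, i))
            + x[i - 1] * sum(y[(i, m)] for m in range(i + 1, n + 1)))

# K_{1,1}: delete the first row and column of K, leaving indices 2..n.
K11 = sympy.Matrix(n - 1, n - 1, lambda r, c: k_entry(r + 2, c + 2))

product = x[0] * y[(n, n)]
for i in range(2, n):
    product *= (y[(i, i)] * sum(x[j - 1] for j in range(1, i + 1))
                + x[i - 1] * sum(y[(i, j)] for j in range(i + 1, n + 1)))

assert sympy.expand(K11.det() - product) == 0
```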
\begin{pro}\label{BigRecursion} With $|A| > 1$ and for any $a \in A \backslash \{\mathbf{m}(A)\}$, \[ \Theta_A = \sum_{\substack{B \sqcup C = A \\ \mathbf{m}(A) \in C \\ a \in B}} x_{\mathbf{m}(A)} \left( \sum_{j \in B} y_{\mathbf{m}(B), j} \right) \Theta_B \Theta_C. \] \end{pro} \begin{proof} As in the proof of Proposition~\ref{LittleRecursion}, by considering the subtree of $\mathbf{m}(A)$ which contains $a$ we see that \[ \Theta_A = \sum_{\substack{B \sqcup C = A \\ \mathbf{m}(A) \in C \\ a \in B}} \Gamma_{B \cup \mathbf{m}(A)} \Theta_C. \] Since for any tree $T \in \mathfrak{R}_{B \cup \mathbf{m}(A)}$ we have $\mathfrak{h}_T(\mathbf{m}(B)) = B$ this gives \[ \Gamma_{B \cup \mathbf{m}(A)} = x_{\mathbf{m}(A)} \left( \sum_{j \in B} y_{\mathbf{m}(B),j} \right) \Theta_B \] from which the result follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{Thm.2}.] That \[ \Theta_A = x_{\mathbf{m}(A)} y_{\mathbf{M}(A),\mathbf{M}(A)} \prod_{i \in A \backslash \{\mathbf{M}(A),\mathbf{m}(A)\}} \left( y_{i,i}\sum_{\substack{j \in A \\ j \leq i}} x_j + x_i \sum_{\substack{j \in A \\ j > i}} y_{i,j} \right), \] for $|A| > 1$ follows by induction after comparing Proposition~\ref{LittleRecursion} and Proposition~\ref{BigRecursion} and checking the base case \[ \Theta_A = 1 = t_A, \] when $|A| = 1$. \end{proof} \begin{rem} The indirect combinatorial proof above uses the canonical decomposition of an unordered increasing tree obtained by removing the edge with vertex labels $1$ and $2$. The same result can be obtained by using the decomposition in which the vertex labelled $1$ is removed. In either case the proof is essentially the same: the generating series for hook-weighted increasing trees and for weighted labelled trees are shown to satisfy the same recursion. \end{rem} \section{An Algebraic Proof}\label{Section.Algebraic} Lastly we give an algebraic proof of Theorem~\ref{Thm.2} which proceeds by induction on the number of vertices.
In fact, we prove a small variation of Theorem~\ref{Thm.2} as it will make the algebraic manipulations that follow a little easier. \begin{thm}[Variation on Theorem~\ref{Thm.2}]\label{Thm.3} Let $\mathfrak{T}_n = \mathfrak{T}_{\{1, 2, \cdots, n\}}$ and let \[ \Theta_n = \sum_{T \in \mathfrak{T}_n} \left( \prod_{i = 2}^n x_{\mathfrak{f}_T(i)} \right)\left( \prod_{i = 1}^n \left( \sum_{j \in \mathfrak{h}_T(i)} y_{i,j} \right) \right). \] Then for $n \geq 1$, \[ \Theta_n = y_{n,n} \prod_{i=1}^{n-1} \left( y_{i,i} \sum_{j=1}^i x_j + x_i \sum_{j=i+1}^n y_{i,j} \right). \] \end{thm} Note that Theorem~\ref{Thm.2} follows very easily from Theorem~\ref{Thm.3}. \begin{proof}[Proof of Theorem~\ref{Thm.2}.] Without loss of generality we may assume that the set $A$ in Theorem~\ref{Thm.2} is $A = \{1, 2, \cdots, n\}$. In this case, \[ \Theta_n = \left( \sum_{i = 1}^n y_{1,i} \right) \Theta_A, \] and so the result follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{Thm.3}.] First, notice that \[ \Theta_1 = y_{1,1}, \qquad \mbox{ and } \qquad \Theta_2 = x_1 (y_{1,1} + y_{1,2}) y_{2,2} \] agree with the combinatorial definition. Now, suppose that $\Theta_n$ is as above and let, for $1 \leq i < j \leq n$, $\psi^i_j$ be the evaluation map which takes $y_{k,i}$ to $y_{k,i} + y_{k,j}$ for all $1 \leq k \leq i$. Then since every increasing tree on $n+1$ vertices is created by adding the vertex labelled $n+1$ to some other vertex, we see that \[ \Theta_{n+1} = y_{n+1,n+1} \sum_{i=1}^n x_i \psi^i_{n+1} \Theta_n. \] Let $\alpha_i(n) = y_{i,i} \sum_{j=1}^i x_j + x_i(\sum_{j=i+1}^n y_{i,j})$ so that $\Theta_n = y_{n,n} \prod_{i=1}^{n-1} \alpha_i(n)$. 
For $1 \leq i \leq n-1$ we have \begin{align*} \psi^i_{n+1} \Theta_n &= y_{n,n} \prod_{k=1}^{n-1} \left( \psi^i_{n+1} y_{k,k} \sum_{j=1}^k x_j + x_k \sum_{j=k+1}^n \psi^i_{n+1} y_{k,j} \right) \\ &= y_{n,n} \left( \prod_{k=1}^i \alpha_k(n+1) \prod_{k=i+1}^{n-1} \alpha_k(n) + \sum_{j=1}^{i-1} x_j y_{i,n+1} \prod_{k=1}^{i-1} \alpha_k(n+1) \prod_{k=i+1}^{n-1} \alpha_k(n) \right). \end{align*} If we let \[ \beta_i(n) = \prod_{k=1}^{i-1} \alpha_k(n+1) \prod_{k=i+1}^{n-1} \alpha_k(n), \] this shows that for $1 \leq i \leq n-1$, \[ \psi^i_{n+1} \Theta_n = y_{n,n} \left( \prod_{k=1}^i \alpha_k(n+1) \prod_{k=i+1}^{n-1} \alpha_k(n) + \sum_{j=1}^{i-1} x_j y_{i,n+1} \beta_i(n) \right). \] Also, \[ \psi^n_{n+1} \Theta_n = (y_{n,n} + y_{n,n+1}) \prod_{k=1}^{n-1} \alpha_k(n+1). \] Putting this together gives, after some algebraic manipulation, \begin{align*} \frac{\Theta_{n+1}}{y_{n+1,n+1}} &= \sum_{i=1}^n \psi^i_{n+1} \Theta_n \\ &= x_n(y_{n,n} + y_{n,n+1}) \prod_{k=1}^{n-1} \alpha_k(n+1) \\ &\qquad + \sum_{i=1}^{n-1} y_{n,n} x_i \left( \prod_{k=1}^i \alpha_k(n+1) \prod_{k=i+1}^{n-1} \alpha_k(n) + \sum_{j=i+1}^{n-1} x_j y_{j,n+1} \beta_j(n) \right). \end{align*} Now, since $\alpha_k(n+1) = \alpha_k(n) + x_k y_{k,n+1}$, by expanding from the largest index to the smallest, \[ \prod_{k=1}^{n-1} \alpha_k(n+1) = \prod_{k=1}^i \alpha_k(n+1) \prod_{k=i+1}^{n-1} \alpha_k(n) + \sum_{j=i+1}^{n-1} x_j y_{j,n+1} \beta_j(n). \] Thus, \begin{align*} \frac{\Theta_{n+1}}{y_{n+1,n+1}} &= x_n(y_{n,n} + y_{n,n+1}) \prod_{k=1}^{n-1} \alpha_k(n+1) + \sum_{i=1}^{n-1} y_{n,n} x_i \prod_{k=1}^{n-1} \alpha_k(n+1) \\ &= \prod_{k=1}^{n-1} \alpha_k(n+1) \left( y_{n,n} \sum_{i=1}^n x_i + x_n y_{n,n+1} \right) \\ &= \prod_{k=1}^n \alpha_k(n+1). \end{align*} \end{proof} \bibliographystyle{plain}
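As an illustrative aside (not part of the original paper), the identity of Theorem~\ref{Thm.3} can be checked by brute force for small $n$. The Python sketch below enumerates unordered increasing trees on $\{1,\dots,n\}$ by choosing a parent $\mathfrak{f}_T(i) < i$ for each $i \geq 2$; the hook $\mathfrak{h}_T(i)$ is taken to be the vertex set of the subtree rooted at $i$, which is consistent with the base cases $\Theta_1 = y_{1,1}$ and $\Theta_2 = x_1(y_{1,1}+y_{1,2})y_{2,2}$ above. The function names are my own.

```python
import itertools
import random

def theta_combinatorial(n, x, y):
    # Enumerate unordered increasing trees on {1,...,n}: each vertex i >= 2
    # chooses a parent f(i) < i.  The hook h(i) is taken to be the vertex set
    # of the subtree rooted at i (this matches Theta_1 and Theta_2 above).
    total = 0
    for parents in itertools.product(*[range(1, i) for i in range(2, n + 1)]):
        f = {i: parents[i - 2] for i in range(2, n + 1)}
        hooks = {i: {i} for i in range(1, n + 1)}
        for i in range(n, 1, -1):          # push each subtree up to its parent
            hooks[f[i]] |= hooks[i]
        term = 1
        for i in range(2, n + 1):
            term *= x[f[i]]
        for i in range(1, n + 1):
            term *= sum(y[(i, j)] for j in hooks[i])
        total += term
    return total

def theta_product(n, x, y):
    # Right-hand side of the product formula in Theorem 3.
    prod = y[(n, n)]
    for i in range(1, n):
        prod *= (y[(i, i)] * sum(x[j] for j in range(1, i + 1))
                 + x[i] * sum(y[(i, j)] for j in range(i + 1, n + 1)))
    return prod

# Verify the identity exactly on random integer data for n <= 5.
random.seed(0)
for n in range(1, 6):
    x = {i: random.randint(1, 9) for i in range(1, n + 1)}
    y = {(i, j): random.randint(1, 9)
         for i in range(1, n + 1) for j in range(1, n + 1)}
    assert theta_combinatorial(n, x, y) == theta_product(n, x, y)
```

Since both sides are polynomials of degree at most $2(n-1)$ in each variable, agreement on enough random integer points makes such a check quite convincing, though of course it is no substitute for the proofs above.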
https://arxiv.org/abs/1408.2786
Hook Weighted Increasing Trees, Cayley Trees and Abel-Hurwitz Identities
Recently Féray, Goulden and Lascoux gave a proof of a new hook summation formula for unordered increasing trees by means of a generalization of the Prüfer code for labelled trees and posed the problem of finding a bijection between weighted increasing trees and Cayley trees. We give such a bijection, providing an answer to the problem posed by Féray, Goulden and Lascoux as well as showing a combinatorial connection to the theory of tree volumes defined by Kelmans. In addition we give two simple proofs of the hook summation formula. As an application we describe how the hook summation formula gives a combinatorial proof of a generalization of Abel and Hurwitz' theorem, originally proven by Strehl.
https://arxiv.org/abs/math-ph/0504063
Maslov Indices and Monodromy
We prove that for a Hamiltonian system on a cotangent bundle that is Liouville-integrable and has monodromy the vector of Maslov indices is an eigenvector of the monodromy matrix with eigenvalue 1. As a corollary the resulting restrictions on the monodromy matrix are derived.
\section{Introduction} The Liouville-Arnold theorem describes the local structure of an integrable system: for regular values of the energy-momentum map $F: T^*M \to \mathbb{R}^n$, the preimage of a regular value is an $n$-torus (or a union of disconnected $n$-tori, but for simplicity we assume there is just one), and there exist action-angle variables in a neighbourhood of this torus. Thus locally phase space has the structure of a trivial $n$-torus-bundle over an open neighbourhood of a regular value in the image of $F$. Duistermaat \cite{Duistermaat80} pointed out that globally the torus-bundle over the regular values of $F$ may be non-trivial. This phenomenon is called monodromy. As a result there may not exist global action-angle variables. In two degrees of freedom monodromy is well understood \cite{Matveev96,Zung97}. It is a common phenomenon because it occurs in a neighbourhood of an equilibrium of focus-focus type. In three degrees of freedom many examples \cite{WD02,WDR03,DGC03,BDV05} are now known as well. Quantisation of a classical system with monodromy leads to quantum monodromy \cite{CushDuist88,VuNgoc99,Child98,SadovskiiZhilinskii99,ECS04}. The fact that the classical actions cannot be globally defined implies that the quantum numbers suffer the same problem. The Maslov index is interesting not only for semiclassical quantisation; also in classical mechanics it is an invariant object defined for paths on Lagrangian submanifolds, e.g.\ on invariant tori, see \cite{Arnold67,Maslov81,AG80}. Recently it has been shown that the Maslov index is related to the singular points of the energy-momentum map \cite{FoxmanRobbins05}. In this letter we show that if the vector of Maslov indices is non-zero, then it is an eigenvector of the monodromy matrix with eigenvalue 1. This has some interesting consequences for the structure of admissible monodromy matrices.
Since the Maslov index is only defined on cotangent bundles, our results are only valid when the phase space is a symplectic manifold of the form $T^*M$. \section{Maslov indices} Let $C$ be a closed curve in the set of regular values of the energy-momentum map. We take $C$ to be parameterised by $0 \le s \le 1$. Let $T_s$ denote the corresponding one-parameter family of $n$-tori in phase space. Fix a basis of cycles $\gamma_0$ for $T_0$. By continuation this defines a basis of cycles $\gamma_s$ for every $s$. The curve $C$ has monodromy when the matrix ${\mathbf M} \in SL(n, \mathbb{Z})$ determined by $\gamma_1 = {\mathbf M} \gamma_0$ is nontrivial. More precisely, monodromy is a nontrivial automorphism of the first homology group, and it implies that the preimage of $C$ under $F$ is a nontrivial $n$-torus-bundle over $C$. The basis $\gamma_s$ determines actions $I_s$ and Maslov indices $\mu_s$ on $T_s$. In fact, the Maslov indices are independent of $s$, as they depend continuously on $s$ and are integer-valued \cite{Trofimov95}. Let us denote their common value by $\mu$. Our main result is the following simple observation: \begin{theo}\label{thm:maslov-indices} If the vector of Maslov indices $\mu$ is not equal to zero, then $\mu$ is an eigenvector of the monodromy matrix ${\mathbf M}$ with eigenvalue 1. \end{theo} \begin{proof} We have that $\mu_1 = {\mathbf M} \mu_0$ (just as $I_1 = {\mathbf M} I_0$), since in general a change of basis cycles $\gamma' = {\mathbf T} \gamma$, where ${\mathbf T} \in SL(n,\mathbb{Z})$, induces the transformation of Maslov indices $\mu' = {\mathbf T} \mu$ (and the transformation of actions $I' = {\mathbf T} I$). Since $\mu_s = \mu$ for all $s$, we have $\mu_1 = \mu_0 = \mu$, and hence \[ {\mathbf M} \mu = \mu \,. \] \end{proof} We remark that the Maslov indices $\mu$, the actions $I$, and the monodromy matrix ${\mathbf M}$ depend on the initial choice of basis $\gamma_0$.
Under a change of basis $\gamma_0' = {\mathbf T}\gamma_0$, where ${\mathbf T} \in SL(n,\mathbb{Z})$, we have that $\mu' = {\mathbf T}\mu$ and ${\mathbf M}' = {\mathbf T}{\mathbf M}{\mathbf T}^{-1}$. \section{Monodromy matrices} From Theorem~\ref{thm:maslov-indices} we immediately obtain the well-known result \cite{Matveev96,Zung97} about the structure of monodromy matrices in two degrees of freedom: \begin{cor}\label{cor:2} For $n=2$ degrees of freedom and a loop $C$ with $\mu \not = 0$ there exists a basis of cycles such that the monodromy matrix of $C$ has the form \[ {\mathbf M} = \begin{pmatrix} 1 & m \\ 0 & 1 \end{pmatrix} \,. \] \end{cor} \begin{proof} Since ${\mathbf M} \in SL(2,\mathbb{Z})$ the eigenvalues $\lambda_1, \lambda_2$ must satisfy $\lambda_1 \lambda_2 = 1$. But one eigenvalue must be 1 by Theorem~\ref{thm:maslov-indices}, hence $\lambda_1 = \lambda_2 = 1$. Finally, a matrix in $SL(2,\mathbb{Z})$ with a single eigenvalue equal to 1 is conjugate to the stated form by some matrix from $SL(2, \mathbb{Z})$. The Maslov index in this basis is $\mu = (\mu_1, 0)$. \end{proof} Notice that this does not give a complete classification of monodromy matrices on cotangent bundles because we have assumed that $\mu \not = 0$. When $\mu \not = 0$, Corollary~\ref{cor:2} is quite strong because no assumption is needed on the type of singularity that is encircled by $C$; in particular the usual non-degeneracy condition is not needed. Corollary~\ref{cor:2} is a special case of the following simple general lemma. \begin{lem}\label{lem:monodromy-matrices} Suppose ${\mathbf M} \in SL(n,\mathbb{Z})$ has eigenvalue $\pm 1$. Then there exists ${\mathbf T} \in SL(n,\mathbb{Z})$ such that ${\mathbf M}' = {\mathbf T}{\mathbf M}{\mathbf T}^{-1}$ has first column equal to $\pm{\mathbf e}_1 = (\pm 1, 0,\dots, 0)^t$. \end{lem} \begin{proof} Let ${\mathbf u}$ denote an eigenvector of ${\mathbf M}$ with eigenvalue $\pm 1$, chosen so that its components are coprime integers.
Then one can construct a matrix ${\mathbf S} \in SL(n,\mathbb{Z})$ whose first column is ${\mathbf u}$ (see, e.g., \cite{Cassels59}). Let ${\mathbf T} = {\mathbf S}^{-1}$ and ${\mathbf M}' = {\mathbf T}{\mathbf M}{\mathbf T}^{-1}$. It is easy to check that ${\mathbf e}_1$ is an eigenvector of ${\mathbf M}'$ with eigenvalue $\pm 1$, so that ${\mathbf M}'$ has first column equal to $\pm {\mathbf e}_1$. \end{proof} Using Lemma \ref{lem:monodromy-matrices} and again the fact that $\det {\mathbf M} = 1$ and $\lambda_1 = 1$, we can obtain the classification of monodromy matrices (for non-zero Maslov index) in $n=3$ degrees of freedom: \begin{cor}\label{cor:4} For $n=3$ degrees of freedom and a loop $C$ with $\mu \not = 0$ there exists a basis of cycles such that the monodromy matrix ${\mathbf M}$ of $C$ has one of the following forms: \begin{equation} \label{eq:1} \begin{pmatrix} 1 & * & * \\ 0 & 1 & * \\ 0 & 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & * & * \\ 0 & -1 & * \\ 0 & 0 & -1 \end{pmatrix}, \quad \begin{pmatrix} 1 & * \\ 0 & {\mathbf B} \end{pmatrix}, \end{equation} where ${\mathbf B} \in SL(2, \mathbb{Z})$ has irrational eigenvalues and $*$ denotes integers. \end{cor} \begin{proof} The eigenvalue 1 can appear with algebraic multiplicity $m_a = 1$ or $m_a= 3$ only; $m_a = 2$ is impossible because $\det {\mathbf M} = \lambda_1 \lambda_2 \lambda_3 = 1$. \footnote{For general $n$ the multiplicity cannot be $n-1$.} The case $m_a = 3$ corresponds to the first form above. When $m_a = 1$, the remaining eigenvalues are either both $-1$, corresponding to the second form, or they are irrational, corresponding to the third. Other combinations of eigenvalues are not possible, because rational eigenvalues of matrices in $SL(n,\mathbb{Z})$ are necessarily equal to $\pm 1$.
If the eigenvalues are all $\pm 1$ (corresponding to the first two forms), the matrices can be made upper triangular by applying Lemma \ref{lem:monodromy-matrices} recursively, using a transformation of the form \[ {\mathbf T}_n = \begin{pmatrix} 1 & * \\ {\bf 0} & {\mathbf T}_{n-1} \end{pmatrix} \,. \] Matrices with two irrational eigenvalues cannot be made upper triangular in $SL(n,\mathbb{Z})$ (as the diagonal elements of a triangular matrix are its eigenvalues). \end{proof} It is interesting to consider how the entries denoted $*$ in (\ref{eq:1}) can be normalised. In the first form the eigenvalue $1$ has geometric multiplicity $m_g$ equal to $1$ or $2$ (i.e., there are either one or two independent eigenvectors with eigenvalue $1$). The normal form for $m_g = 2$ has been computed in \cite{WD02}. The result is that only a single nonzero element remains above the diagonal. Essentially this means that when $m_g= 2$ the matrix can be block-diagonalised in $SL(3,\mathbb{Z})$. In the remaining cases in (\ref{eq:1}) a block-diagonal form is in general not possible: Conjugating a block triangular matrix with a block triangular matrix gives \[ \begin{pmatrix} 1 & -{\bf d} {\bf D}^{-1} \\ {\bf 0} & {\bf D}^{-1} \end{pmatrix} \begin{pmatrix} 1 & {\bf a} \\ {\bf 0} & {\bf A} \end{pmatrix} \begin{pmatrix} 1 & {\bf d} \\ {\bf 0} & {\bf D} \end{pmatrix} = \begin{pmatrix} {\bf 1} & ( {\bf a} - {\bf d} {\bf D}^{-1} ({\bf A} - {\bf 1})) {\bf D} \\ {\bf 0} & {\bf D}^{-1}{\bf A}{\bf D} \end{pmatrix} \,. \] Setting the upper right element of the right hand side to zero and solving for ${\bf d}$ involves the inverse of ${\bf A - 1}$ which is in general not an integer matrix. Using a more general transformation leads to the same condition. Thus for general ${\bf a}$ and ${\bf A}$ the monodromy matrix cannot be block-diagonalised.
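The block-conjugation identity above is easy to spot-check numerically. The following sketch (the integer blocks are arbitrary choices of mine, not taken from the letter) verifies it with numpy for random $1\times 2$ blocks ${\bf a}, {\bf d}$ and a random $2\times 2$ block ${\bf A}$, with a unimodular ${\bf D}$ so that ${\bf D}^{-1}$ is integral:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary integer test data: 1x2 blocks a and d, a 2x2 block A,
# and a unimodular D (so D^{-1} is an integer matrix).
a = rng.integers(-5, 5, size=(1, 2))
A = rng.integers(-5, 5, size=(2, 2))
d = rng.integers(-5, 5, size=(1, 2))
D = np.array([[1, 1], [0, 1]])

I2 = np.eye(2)
Dinv = np.linalg.inv(D)

# Left, middle and right factors of the conjugation identity in the text.
L = np.block([[np.eye(1), -d @ Dinv], [np.zeros((2, 1)), Dinv]])
Mid = np.block([[np.eye(1), a], [np.zeros((2, 1)), A]])
R = np.block([[np.eye(1), d], [np.zeros((2, 1)), D]])

lhs = L @ Mid @ R
# Claimed right-hand side: upper-right block (a - d D^{-1}(A - 1)) D,
# lower-right block D^{-1} A D.
rhs = np.block([[np.eye(1), (a - d @ Dinv @ (A - I2)) @ D],
                [np.zeros((2, 1)), Dinv @ A @ D]])
assert np.allclose(lhs, rhs)
# L is indeed the inverse of R, so lhs really is a conjugation of Mid.
assert np.allclose(R @ L, np.eye(3))
```

The check also confirms that the left factor is the inverse of the right one, so the left-hand side is genuinely a conjugate of the block triangular matrix.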
However, e.g., for the special matrix ${\bf A} = \begin{pmatrix} 2 & 1 \\ 1 & 1\end{pmatrix}$ (the `cat map') it is always possible since $\det( {\bf A - 1} ) = -1$. If ${\bf A - 1}$ is singular (corresponding to the first case in Corollary~\ref{cor:4}) the resulting Diophantine equation may or may not have a solution. Our results need the condition $\mu \ne 0$. It may be possible to show that $\mu \ne 0$ necessarily holds for certain configuration spaces $M$. We suspect, for example, that this is the case when $M=\mathbb{R}^n$, although we have not been able to prove this. Our result would then give the complete classification of monodromy matrices on $T^*\mathbb{R}^n$. In particular the construction of arbitrary monodromy matrices given in \cite{CushmanVuNgoc02} would be impossible on these cotangent bundles. In three degrees of freedom, the known examples of monodromy are either of the first form with $m_g = 2$ \cite{WD02, WDR03} or of the last form and block-diagonal. The last form of ${\mathbf M}$ is realised for geodesic flows on $Sol$-manifolds, where an arbitrary hyperbolic ${\mathbf B} \in SL(2, \mathbb{Z})$ may appear \cite{BDV05}. The main implication of the above is that when $m_a = 3$ and $m_g = 2$ there are always two invariant actions, i.e.\ actions that do not change globally along the path $C$. Obviously there is always one invariant action, namely the one corresponding to the eigenvector ${\mathbf e}_1$, and when $m_g = 1$ it is the only one. With eigenvalues $-1$ there is at most one invariant action, but another action is invariant when $C$ is traversed twice. Hence on a covering space this may reduce to $m_a = 3$ and $m_g = 2$. It would be very interesting to find an example of this type. \section*{ Acknowledgement } HRD would like to thank AP Veselov and VS Matveev for helpful discussions. This work was supported by the EU Research Training Network ``Mechanics and Symmetry in Europe'' (MASIE), (HPRN\_CT-2000-00113).
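The cat map case can be made completely concrete. In the sketch below (the block ${\bf a}$ is an arbitrary choice of mine), taking ${\bf D} = {\bf 1}$ in the conjugation identity, the condition for block-diagonalisation reduces to ${\bf d}({\bf A} - {\bf 1}) = {\bf a}$, which has an integer solution ${\bf d} = {\bf a}({\bf A} - {\bf 1})^{-1}$ precisely because $\det({\bf A} - {\bf 1}) = -1$:

```python
import numpy as np

A = np.array([[2, 1], [1, 1]])                 # the `cat map'
I2 = np.eye(2, dtype=int)
assert round(np.linalg.det(A - I2)) == -1      # so (A - 1)^{-1} is integral

a = np.array([[5, 7]])                         # an arbitrary integer row block
d = np.rint(a @ np.linalg.inv(A - I2)).astype(int)
assert (d @ (A - I2) == a).all()               # d solves a - d(A - 1) = 0

# Conjugate M = [[1, a], [0, A]] by P = [[1, d], [0, 1]]  (D = 1 in the identity).
P = np.block([[np.eye(1, dtype=int), d],
              [np.zeros((2, 1), dtype=int), I2]])
M = np.block([[np.eye(1, dtype=int), a],
              [np.zeros((2, 1), dtype=int), A]])
Mp = np.rint(np.linalg.inv(P) @ M @ P).astype(int)

assert (Mp[0, 1:] == 0).all()                  # upper-right block vanishes
assert (Mp[1:, 1:] == A).all()                 # M is block-diagonalised to diag(1, A)
```

The same computation with a singular ${\bf A} - {\bf 1}$ would leave a genuine Diophantine solvability question, as noted in the text.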
\bibliographystyle{plain}
https://arxiv.org/abs/1409.5056
Complexity and directional entropy in two dimensions
We study the directional entropy of the dynamical system associated to a $\Z^2$ configuration in a finite alphabet. We show that under local assumptions on the complexity, either every direction has zero topological entropy or some direction is periodic. In particular, we show that all nonexpansive directions in a $\Z^2$ system with the same local assumptions have zero directional entropy.
\section{Introduction} A classic problem in dynamics is to deduce global properties of a system from local assumptions. A beautiful example of such a result is the Morse-Hedlund Theorem~\cite{MH}: a local assumption on the {\em complexity} of a system is equivalent to the global property of {\em periodicity} of the system. Any periodic system trivially has zero topological entropy. In higher dimensions, this local to global connection is less well understood. Again, there is a natural local assumption on the system that implies zero topological entropy, but now one can study the finer notion of directional behavior and new subtleties arise: under this assumption some directions may have positive directional entropy, while others do not. We prove that there are natural local assumptions on the complexity of a ${\mathbb Z}^2$ system under which either every direction has zero topological directional entropy or some direction is periodic. To explain the results more precisely, for a finite alphabet $\mathcal{A}$, we study functions of the form $\eta\colon{\mathbb Z}^2\to\mathcal{A}$ which we view as colorings of ${\mathbb Z}^2$. For $\mathbf{n}\in{\mathbb Z}^2$, define the translation $T^{\mathbf{n}}\colon{\mathbb Z}^2\to{\mathbb Z}^2$ by $T^{\mathbf{n}}({\bf x}):={\bf x}+\mathbf{n}$ and for fixed $\eta\colon{\mathbb Z}^2\to\mathcal{A}$, define $T^{\mathbf{n}}\eta\colon{\mathbb Z}^2\to\mathcal{A}$ by $T^{\mathbf{n}}\eta({\bf x}):=\eta(T^{\mathbf{n}}{\bf x})$. If $\mathcal{S}\subset{\mathbb Z}^2$, then an $\eta$-coloring of $\mathcal{S}$ is any function of the form $T^{\mathbf{n}}\eta\rst{\mathcal{S}}$, where by $\eta\rst{\mathcal{S}}$ we mean the restriction of the coloring $\eta$ of ${\mathbb Z}^2$ to the set $\mathcal{S}$. 
If $K\subset\mathbb{R}^2$ is compact, we define the complexity $P_\eta(K)$ to be the number of distinct $\eta$-colorings of $K \cap {\mathbb Z}^2$: $$P_\eta(K) = \bigl\vert\{T^{\mathbf{n}}\eta\rst{K \cap {\mathbb Z}^2} \colon \mathbf{n} \in {\mathbb Z}^2\}\bigr\vert,$$ where $\vert\cdot\vert$ denotes the cardinality. This is a generalization to two dimensions of the one dimensional complexity $P_\alpha\colon {\mathbb N}\to{\mathbb N}$ defined for $\alpha\colon{\mathbb Z}\to\mathcal{A}$, where $P_\alpha(n)$ is defined to be the number of distinct words of length $n$ appearing in $\alpha$. It follows immediately from the definition that $P_\eta$ is monotonic: If $A \subseteq B \subset \mathbb{R}^2$ are compact sets, then $P_\eta(A) \leq P_\eta(B)$. When $K$ is the rectangle $[0,n-1] \times [0,k-1]$, we write $P_\eta(n,k)$ instead of $P_\eta(K)$. There is a standard dynamical system associated to a configuration such as $\eta\colon{\mathbb Z}^2\to\mathcal{A}$. We endow $\mathcal{A}$ with the discrete topology and $\mathcal{A}^{{\mathbb Z}^2}$ with the product topology. For $\eta\colon{\mathbb Z}^2\to\mathcal{A}$, we let $X_\eta$ denote the orbit closure of $\eta$ under the ${\mathbb Z}^2$ translations $\{T^{\mathbf{n}}\colon\mathbf{n}\in{\mathbb Z}^2\}$. Then $X_\eta$, endowed with a distance $\rho$ defined by $$\rho(x,y) = 2^{-\min \{\|\mathbf{m}\| \colon x(\mathbf{m}) \neq y(\mathbf{m})\}}$$ for $x, y\in \mathcal{A}^{{\mathbb Z}^2}$, and with the ${\mathbb Z}^2$ action by translation, is a ${\mathbb Z}^2$ topological dynamical system. We refer to an element of the system $X_\eta$ as an {\em $\eta$-coloring} of ${\mathbb Z}^2$. We say that $\eta\colon{\mathbb Z}^2\to\mathcal{A}$ is {\em periodic} if there exists $\mathbf{m}\neq\bf{0}$ such that $\eta(\mathbf{m}+\mathbf{n}) = \eta(\mathbf{n})$ for all $\mathbf{n}\in{\mathbb Z}^2$ (note that this means that $\eta$ has a direction of periodicity but is not necessarily doubly periodic). 
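To make the rectangular complexity function concrete, here is a small Python sketch (illustrative only; the coloring is my own choice, not from the paper) computing $P_\eta(n,k)$ for the doubly periodic coloring $\eta(i,j) = (i + 2j) \bmod 3$. Every rectangular pattern of this $\eta$ is determined by the value at its corner, so $P_\eta(n,k) = 3$ for every rectangle, comfortably below the Nivat-type bound $nk$ once $nk \geq 3$:

```python
def eta(i, j):
    # A doubly periodic coloring of Z^2 with 3 colors.
    return (i + 2 * j) % 3

def complexity(n, k, window=12):
    # Count distinct n x k patterns among translates of eta; a 12 x 12 window
    # of base points is more than enough to see every pattern of a coloring
    # that is periodic with period 3 in both directions.
    patterns = set()
    for a in range(window):
        for b in range(window):
            patterns.add(tuple(eta(a + i, b + j)
                               for i in range(n) for j in range(k)))
    return len(patterns)

# Each pattern is determined by the residue (a + 2b) mod 3 of its corner,
# so the count is exactly 3, independent of the rectangle.
for n in range(1, 5):
    for k in range(1, 5):
        assert complexity(n, k) == 3
```

Of course, for a configuration that is not periodic one would have to justify that a finite window of translates sees all patterns; here the double periodicity makes the finite window exhaustive.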
Again, this is a two dimensional generalization of periodicity for some $\alpha\colon{\mathbb Z}\to\mathcal{A}.$ It was conjectured by Nivat~\cite{nivat} that for $\eta\colon{\mathbb Z}^2\to\mathcal{A}$, if there exist $n, k \in {\mathbb N}$ such that $P_\eta(n,k) \le nk$, then $\eta$ is periodic. This question remains open, but in~\cite{CK} it is observed that a system with such a complexity bound has zero topological entropy. In this work, we study the finer notion of directional entropy, which was introduced for cellular automata by Milnor~\cite{milnor} (see Section~\ref{sec:directional} for the definition). If the directional entropy is finite in all directions, then the system has zero topological entropy, but the converse is false: zero topological entropy does not imply anything more than the existence of a single direction with finite directional entropy. We study the directional entropy of a system under a low complexity assumption (this assumption is made precise in Theorem~\ref{trichotomy}). Boyle and Lind~\cite{BL} further analyzed directional entropy for topological dynamical systems and related it to expansive subdynamics. We use their definition, but restricted to our two dimensional setting: \begin{definition} If $X$ is a dynamical system with a continuous ${\mathbb Z}^2$ action $(T^{\mathbf{n}}\colon \mathbf{n}\in{\mathbb Z}^2)$, we say that a line $\ell\subset\mathbb{R}^2$ is {\em expansive} if there exists $r> 0$ such that if $x,y\in X$ satisfy $x({\mathbf{n}}) = y(\mathbf{n})$ for all $\mathbf{n}\in{\mathbb Z}^2$ with $\operatorname{dist}(\mathbf{n}, \ell)< r$, where $\operatorname{dist}$ denotes Euclidean distance, then $x=y$. If $\ell$ is not an expansive line, we say that it is {\em nonexpansive}. \end{definition} For the full shift $X = \mathcal{A}^{{\mathbb Z}^2}$ with the ${\mathbb Z}^2$ action by translations, it is easy to check that there are no expansive lines. However, restricting to a system of the form $X_\eta$ for some $\eta\colon{\mathbb Z}^2\to\mathcal{A}$, there are more possibilities.
If $\eta$ is periodic, then either the directional entropy $h(\mathbf{u})$ vanishes for all $\mathbf{u} \in S^1$ or there is a single direction of zero entropy. In the former case, $\eta$ has an expansive direction with zero entropy and in the latter case, the unique direction of zero entropy is nonexpansive. Thus, assuming Nivat's conjecture, if there exist $n,k \in {\mathbb N}$ such that $P_\eta(n,k) \leq nk$, then the directional entropy of $\eta$ is either zero in all directions or there is a unique direction of zero directional entropy. We show that this conclusion holds under the stronger hypothesis that a complexity assumption holds for infinitely many pairs $n_i, k_i$ (as usual, $\mathbf{e}_1$ and $\mathbf{e}_2$ denote the standard basis vectors): \begin{theorem} \label{trichotomy} Assume $\mathcal{A}$ is a finite alphabet and $\eta\colon {\mathbb Z}^2\to\mathcal{A}$. If there exists an infinite sequence $n_i, k_i\in{\mathbb N}$ such that $P_\eta(n_i,k_i) \le n_ik_i$, then either \begin{enumerate} \item $h(\mathbf{u}) = 0$ for all $\mathbf{u} \in S^1$, or \item there is a unique nonexpansive direction for $\eta$, which is either $\mathbf{e}_1$ or $\mathbf{e}_2$, and $\eta$ is periodic in this direction. \end{enumerate} \end{theorem} An immediate consequence is the following. \begin{cor} Assume $\mathcal{A}$ is a finite alphabet and $\eta\colon {\mathbb Z}^2\to\mathcal{A}$. If there exists an infinite sequence $n_i, k_i\in{\mathbb N}$ such that $P_\eta(n_i,k_i) \le n_ik_i$, then $\eta$ has zero directional entropy along each of its nonexpansive directions. \end{cor} \section{Sufficient conditions for zero directional entropy} \label{sec:directional} We start by reviewing some definitions from~\cite{CK}. If $\mathcal{S}\subset\mathbb{R}^2$, we denote the convex hull of $\mathcal{S}$ by ${\rm conv}(\mathcal{S})$. We say $\mathcal{S}\subset{\mathbb Z}^2$ is {\em convex} if $\mathcal{S}={\rm conv}(\mathcal{S})\cap{\mathbb Z}^2$.
Define the {\em volume} of convex $\mathcal{S}\subset{\mathbb Z}^2$ to be the volume of its convex hull and define the \emph{boundary} $\partial(\mathcal{S})$ of a convex set $\mathcal{S}\subset {\mathbb Z}^2$ to be the boundary of ${\rm conv}(\mathcal{S})$. Given a convex set in ${\mathbb Z}^2$ of positive volume, we endow its boundary with the positive orientation, so that it consists of directed line segments. If $\mathcal{S}\subset{\mathbb Z}^2$ is convex and has zero volume, then ${\rm conv}(\mathcal{S})$ is a line segment in $\mathbb{R}^2$ and in this case, we do not define an orientation on $\partial(\mathcal{S})$. For finite $\mathcal{S}\subset{\mathbb Z}^2$ and $\eta\colon{\mathbb Z}^2\to\mathcal{A}$, we define $X_\mathcal{S}(\eta)$ to be the ${\mathbb Z}^2$ subshift of finite type generated by the $\mathcal{S}$ words of $\eta$, meaning that $X_\mathcal{S}(\eta)$ consists of all $f\in\mathcal{A}^{{\mathbb Z}^2}$ such that all $f$-colorings of $\mathcal{S}$ appear as $\eta$-colorings of $\mathcal{S}$. \begin{defn} Suppose $\mathcal{S}\subset\mathcal{T}\subset{\mathbb Z}^2$ are nonempty, finite sets and that $f\colon\mathcal{S}\to\mathcal{A}$ is an $\eta$-coloring of $\mathcal{S}$. We say that $f$ {\em extends uniquely} to an $\eta$-coloring of $\mathcal{T}$ if there is exactly one $\eta$-coloring of $\mathcal{T}$ whose restriction to $\mathcal{S}$ coincides with $f$. \end{defn} \begin{defn} If $\mathcal{S} \subset {\mathbb Z}^2$ is a nonempty, finite, convex set, then $x\in\mathcal{S}$ is {\em $\eta$-generated} by $\mathcal{S}$ if every $\eta$-coloring of $\mathcal{S}\setminus\{x\}$ extends uniquely to an $\eta$-coloring of $\mathcal{S}$, and $\mathcal{S}$ is an {\em $\eta$-generating set} if every boundary vertex of $\mathcal{S}$ is $\eta$-generated. When $\eta$ is clear from context, we refer to an $\eta$-generating set as a {\em generating set}. 
\end{defn} Generating sets give rise to zero topological entropy: \begin{lemma}[\cite{CK}, Lemma 2.15] \label{topentropy} If $\mathcal{S} \subset {\mathbb Z}^2$ is a generating set for $\eta \colon {\mathbb Z}^2\to \mathcal{A}$ and $\mathcal{S}^{\prime} \supset \mathcal{S}$ is finite, then the topological entropy of the ${\mathbb Z}^2$ dynamical system $(X_{\mathcal{S}^{\prime}}, \{T^{\mathbf{u}}\}_{\mathbf{u}\in{\mathbb Z}^2})$ is zero. \end{lemma} We review the definition of directional entropy introduced by Milnor~\cite{milnor}. \begin{notation} \label{notation:larger} If $T$ is a continuous ${\mathbb Z}^2$ action on the compact metric space $(X,\rho)$, $E\subset \mathbb{R}^2$ is a compact set, and $\varepsilon > 0$, set $N_{T}(E,\varepsilon)$ to be the cardinality of the smallest set $Y \subset X$ such that for each $x \in X$ there exists $y \in Y$ with $\rho(T^{\mathbf{n}}(x) , T^{\mathbf{n}}(y)) < \varepsilon$ for each $\mathbf{n} \in E \cap {\mathbb Z}^2$. For a compact set $E$ and $t > 0$, let $$ E^{(t)} = \{\mathbf{v}\colon \|\mathbf{v} - \mathbf{u}\| < t \text{ for some } \mathbf{u} \in E\}$$ denote the {\em $t$-neighborhood of $E$} and let $tE = \{t\mathbf{u}\colon \mathbf{u} \in E\}$ denote the {\em $t$-dilation} of $E$. \end{notation} \begin{defn} If $\Phi$ is a set of $k$ linearly independent vectors and $Q_\Phi$ is the parallelepiped spanned by $\Phi$, then the \emph{$k$-dimensional topological directional entropy} $h_k(\Phi)$ is defined to be $$h_k(\Phi) = \lim_{\varepsilon\to0} \sup_{t > 0} \overline{\lim_{s\to \infty}} \frac{\log N_T((sQ_\Phi)^{(t)},\varepsilon)}{s^k}.$$ \end{defn} We compute the directional entropy for the ${\mathbb Z}^2$ action by translations on the space $X_\eta$, in which case it is straightforward to recast the definition in terms of complexity. To do so, we make a slight abuse of notation and for a vector $\mathbf{v} \in \mathbb{R}^2$, we write $[0,\mathbf{v}]$ for $\{\varepsilon \mathbf{v} \colon 0 \le \varepsilon \le 1\}$.
As we are interested in one dimensional directional entropy, to simplify notation, when $\Phi = \{\mathbf{u}\}$ for some unit vector $\mathbf{u}$, we write $h(\mathbf{u})$, instead of $h_1(\Phi)$. \begin{lemma} \label{entropycomplex} Assume $\eta \colon {\mathbb Z}^2 \to \mathcal{A}$. If $\mathbf{u}$ is a unit vector, then the ($1$-dimensional) topological directional entropy $h(\mathbf{u})$ of the ${\mathbb Z}^2$ action by translation on $X_\eta$ in the direction of $\mathbf{u}$ is given by $$h(\mathbf{u}) = \sup_{t>0}\overline{\lim_{s\to\infty}}\frac{\log P_\eta([0,s\mathbf{u}]^{(t)})}{s}.$$ \end{lemma} \begin{proof} Let $T$ denote the ${\mathbb Z}^2$ action by translation. Fix $0 < \varepsilon < 1$ and let $M = \lfloor -\log_{2} \varepsilon\rfloor$. If $x,y \in X_\eta$ satisfy \begin{equation} \label{eq:close} \rho\bigl(T^{\mathbf{n}}(x),T^\mathbf{n}(y)\bigr) < \varepsilon \text{ for all } \mathbf{n} \in [0,s\mathbf{u}]^{(t)}, \end{equation} then $x$ and $y$ agree on $[0,s\mathbf{u}]^{(t+M)}$. Conversely, if $x$ and $y$ agree on $[0,s\mathbf{u}]^{(t+M+1)}$, they satisfy~\eqref{eq:close}. 
Thus, $$P_\eta([0,s\mathbf{u}]^{(t+M)}) \le N_T([0,s\mathbf{u}]^{(t)},\varepsilon) \le P_\eta([0,s\mathbf{u}]^{(t+M+1)})$$ and so $$\lim_{M\to\infty} \sup_{t>0}\overline{\lim_{s\to\infty}}\frac{\log P_\eta([0,s\mathbf{u}]^{(t+M)}) }{s} \le h(\mathbf{u}) \le \lim_{M\to\infty} \sup_{t>0}\overline{\lim_{s\to\infty}} \frac{\log P_\eta([0,s\mathbf{u}]^{(t+M+1)})}{s}.$$ But since $P_\eta([0,s\mathbf{u}]^{(t)})$ is non-decreasing in $t$, \begin{align*}\sup_{t>0}\overline{\lim_{s\to\infty}}\frac{\log P_\eta([0,s\mathbf{u}]^{(t+M)}) }{s} & = \sup_{t>0}\overline{\lim_{s\to\infty}}\frac{\log P_\eta([0,s\mathbf{u}]^{(t+M+1)}) }{s} \\ & = \sup_{t>0}\overline{\lim_{s\to\infty}}\frac{\log P_\eta([0,s\mathbf{u}]^{(t)}) }{s}.\qedhere \end{align*} \end{proof} Given a generating set $\mathcal{S}\subset{\mathbb Z}^2$ for some $\eta\colon{\mathbb Z}^2\to\mathcal{A}$, applying Lemma~\ref{topentropy} with $\mathcal{S}'=\mathcal{S}$ shows that the associated dynamical system $X_\eta$ has zero topological entropy. We use Lemma~\ref{entropycomplex} to strengthen this result: \begin{prop} \label{generating-directional} Assume $\eta \colon {\mathbb Z}^2 \to \mathcal{A}$ has an $\eta$-generating set and let $X_\eta$ be the associated dynamical system endowed with the ${\mathbb Z}^2$ action by translation. Then there exists $c > 0$ such that $h(\mathbf{u}) < c$ for all unit vectors $\mathbf{u} \in \mathbb{R}^2$. \end{prop} \begin{proof} Let $\mathcal{S}$ be an $\eta$-generating set and let $d = \operatorname{diam}(\mathcal{S})$. Fix a unit vector $\mathbf{u}\in\mathbb{R}^2$ and let $t > 0$. We claim that there exists $C>0$ such that for sufficiently large $s > 0$, we have $P_\eta([0,s\mathbf{u}]^{(t)}) \le C |\mathcal{A}|^{2ds}$. Once the claim is proven, the proposition follows from Lemma~\ref{entropycomplex}. Fix $s> d$. Since $P_\eta([0,s\mathbf{u}]^{(t)})$ is non-decreasing in $t$, we can assume that $t > d$. Then $[0,s\mathbf{u}]^{(t)}$ contains a translate of $\mathcal{S}$.
Let $\mathbf{u}^\perp$ be one of the two unit vectors perpendicular to $\mathbf{u}$. Let $\mathcal{S}'$ be a translate of $\mathcal{S}$ such that exactly one row of $\mathcal{S}'$ in the direction of $\mathbf{u}^\perp$ lies outside $[0,s\mathbf{u}]^{(t)}$ and such that $\mathcal{S}'$ touches one of the two border segments of $[0,s\mathbf{u}]^{(t)}$ in the direction of $\mathbf{u}$. Thus $\mathcal{S}'$ satisfies $$\mathcal{S}' \subseteq [0,(s+1)\mathbf{u}]^{(t)} \text{ but } \mathcal{S}' \not\subseteq [0,(s+\varepsilon)\mathbf{u}]^{(t)} \text{ for } \varepsilon < 1, $$ and $$\mathcal{S}' \cap ([0,(s+1)\mathbf{u}] - t\mathbf{u}^{\perp}) \neq \emptyset.$$ Fix a coloring of $[0,s\mathbf{u}]^{(t)} \cup \mathcal{S}'$. Since $\mathcal{S}$ is a generating set, this coloring extends uniquely to a coloring of $[0,s\mathbf{u}]^{(t)} \cup (\mathcal{S}' + [0,\mathbf{u}^\perp])$. This in turn extends uniquely to a coloring of $[0,s\mathbf{u}]^{(t)} \cup (\mathcal{S}' + [0,2\mathbf{u}^\perp])$. Continuing to extend and using that $\operatorname{diam}(\mathcal{S}) = d$, each coloring of $[0,s\mathbf{u}]^{(t)} \cup (s\mathbf{u} - t\mathbf{u}^\perp+[0,d\mathbf{u}^\perp] + [0,\mathbf{u}])$ extends uniquely to a coloring of $[0,(s+1)\mathbf{u}]^{(t)} \setminus (s\mathbf{u} + t\mathbf{u}^\perp+[0,-d\mathbf{u}^\perp]+[0,\mathbf{u}])$. It follows that $$P_\eta([0,(s+1)\mathbf{u}]^{(t)}) \le |\mathcal{A}|^{2d}P_\eta([0,s\mathbf{u}]^{(t)}),$$ and iterating this inequality at most $\lceil s\rceil$ times, starting from a segment of length at most $d$, yields $P_\eta([0,s\mathbf{u}]^{(t)}) \le C|\mathcal{A}|^{2ds}$ with $C = |\mathcal{A}|^{2d}P_\eta([0,d\mathbf{u}]^{(t)})$, which completes the proof. \end{proof} It was shown in~\cite{sinai} that if a ${\mathbb Z}^2$ topological dynamical system has bounded directional entropy in all directions, then it has zero topological entropy. Thus if $\eta\colon{\mathbb Z}^2\to\mathcal{A}$ admits an $\eta$-generating set (as in Proposition~\ref{generating-directional}), then the system $X_\eta$ has zero topological entropy.
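The example below involves the sparse set $A = \{10^n + i^2 \colon 1\le i \le n\}$, whose indicator function has complexity of polynomial order. As a quick numerical sanity check of that cubic bound (this sketch is ours and is not part of any proof; `subword_complexity` is a hypothetical helper), one can count the distinct windows on a finite portion of the indicator:

```python
def subword_complexity(bits, k):
    """Number of distinct length-k windows occurring in a finite 0/1 word."""
    return len({tuple(bits[m:m + k]) for m in range(len(bits) - k + 1)})

# Indicator of A = {10**n + i**2 : 1 <= i <= n} restricted to [0, N).
N = 120_000
A = set()
n = 1
while 10 ** n < N:
    A.update(10 ** n + i * i for i in range(1, n + 1))
    n += 1
beta = [1 if m in A else 0 for m in range(N)]

# A finite window can only undercount the true complexity, so these
# checks are consistent with (though weaker than) P_beta(k) <= (k+1)**3.
for k in (5, 10, 20, 50):
    assert subword_complexity(beta, k) <= (k + 1) ** 3
```

In fact the counts are far below the cubic bound, since a window of fixed length sees only a bounded number of elements of $A$.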
The following example shows that a converse to this result fails, even for a system $X_\eta$ endowed with translations, showing that Proposition~\ref{generating-directional} strengthens existing results on the entropy of $X_\eta$: \begin{example} Let $\alpha \colon {\mathbb Z} \to \{0,1\}$ with $P_\alpha(n) = 2^n$ and let $A = \{10^n + i^2 \colon i, n \in {\mathbb N}, 1\le i \le n\}$. Define $\eta \colon {\mathbb Z}^2 \to \{0,1\}$ by $$\eta(i,j) = \begin{cases} \alpha(i+j) & \text{if }j \in A\\ \alpha(i) & \text{otherwise.} \end{cases} $$ Then the topological entropy of the ${\mathbb Z}^2$ action on $X_\eta$ by translations is zero and $h(\mathbf{e}_1) = \infty$. \end{example} \begin{proof} Let $\beta\colon {\mathbb Z} \to \{0,1\}$ denote the indicator function of the set $A$. We first bound the complexity function $P_\beta(k)$. Fix $k \in {\mathbb N}$. Given $m \in {\mathbb Z}$, let $I(m) = \{m, m+1, \dots, m+k-1\}$. Let $n(m)$ be the smallest $n \in {\mathbb N}$ such that $10^n + i^2 \in I(m)$ for some $1 \le i \le n$, or take $n(m) = 0$ if no such $n$ exists. If $n(m) > 0$, let $i(m)$ be the minimal $1 \le i \le n(m)$ such that $10^{n(m)} + i^2 \in I(m)$, and let $i(m) = 0$ if $n(m) = 0$. Finally, let $a(m) = \min(A \cap I(m)) - m$. If $10^{n(m)} > k$, then $10^n + i^2 \not\in I(m)$ for any $n \neq n(m)$ and any $1 \le i \le n$. Note that if $i = i(m) > k$, then $(i+1)^2 - i^2 \ge i^2 - (i-1)^2 \ge 2i - 1 > k$, and so $I(m) \cap A = \{10^{n(m)} + i(m)\}$. Thus, $\beta\rst{I(m)}$ is determined by $0 \le \min\{n(m), k\} \le k$, $0 \le \min\{i(m), k\} \le k$, and $0 \le a(m) \le k$, and so $P_\beta(k) \le (k+1)^3$.
Since $I(m)$ contains at most $\sqrt{k}\log_{10} k \le k^{3/4}$ elements of $A$, $$P_\eta(n,k) \le P_\beta(k) (2^n)^{k^{3/4}} \le (k+1)^3 2^{nk^{3/4}}$$ and so $$\lim_{n\to\infty}\frac{\log P_\eta(n,n)}{n^2} \leq \lim_{n\to\infty}\frac{3\log (n+1) + n^{7/4}\log 2}{n^2} = 0.$$ But since there are exactly $\lfloor \sqrt{k}\rfloor$ elements of $A$ in $10^n + [1,k]$ when $k \le n$, we have that $$\frac{\log P_\eta([0, n\mathbf{e}_1]^{(k/2)})}{n} \ge \frac{\log P_\eta(n,k)}{n} \ge \frac{\log (2^n)^{\lfloor\sqrt{k}\rfloor}}{n} =\lfloor\sqrt{k}\rfloor\log 2.$$ By Lemma~\ref{entropycomplex}, it follows that \begin{equation*} h(\mathbf{e}_1) \ge \sup_{k>0}\lfloor\sqrt{k}\rfloor\log 2= \infty. \qedhere \end{equation*} \end{proof} As shown in~\cite{CK}, if there exist $n, k \in {\mathbb N}$ such that $P_\eta(n,k) \le nk$, then there is an $\eta$-generating set. Thus for $\eta$ satisfying such a complexity bound, the ${\mathbb Z}^2$ action by translations on $X_\eta$ has bounded directional entropy in all directions. In particular, it has zero topological entropy. \section{A sequence of complexity bounds} \begin{notation} \label{notation:thick} For a unit vector $\mathbf{u} \in \mathbb{R}^2$ and a compact set $K\subset\mathbb{R}^2$, we let $\tau_\mathbf{u}(K)$ denote the thickness of the compact set $K$ in the direction of $\mathbf{u}$, defined by $$\tau_\mathbf{u}(K) = \sup\{ \tau \colon [0,\tau \mathbf{u}] + \mathbf{n} \subset K\text{ for some }\mathbf{n} \in {\mathbb Z}^2\}.$$ \end{notation} \begin{prop} \label{convexentropy} Assume $\eta\colon{\mathbb Z}^2\to\mathcal{A}$. The ${\mathbb Z}^2$ action by translation on $X_\eta$ has zero entropy in direction $\mathbf{u}\in\mathbb{R}^2$ if and only if there exist compact sets $K_i \subset \mathbb{R}^2$ such that $\lim_{i \to \infty} \frac{\log P_\eta(K_i)}{\tau_\mathbf{u}(K_i)} = 0$. \end{prop} \begin{proof} Assume there exists such a sequence $K_i$ and assume for contradiction that $h(\mathbf{u}) = 4\delta > 0$.
Since $P_\eta([0,s\mathbf{u}]^{(t)})$ is non-decreasing in $t$ (recall Notation~\ref{notation:larger}), using Lemma~\ref{entropycomplex}, there exists $t_0 > 0$ such that whenever $t \ge t_0$, $$\overline{\lim_{s \to \infty}} \frac{\log P_\eta([0,s\mathbf{u}]^{(t)})}{s} \ge 3\delta.$$ Thus there exists a sequence $(s_m)$ such that $\log P_\eta([0,s_m\mathbf{u}]^{(t_0)})\ge 2\delta s_m$. Set $\tau_i = \tau_\mathbf{u}(K_i)$. If $s_m \le s \le s_m+\tau_i$ and $s_m \ge \tau_i$, then $$\log P_\eta([0,s\mathbf{u}]^{(t_0)})\ge \log P_\eta ([0,s_m\mathbf{u}]^{(t_0)}) \ge 2\delta s_m \ge 2\delta s \frac{s_m}{s} \ge 2\delta s \frac{s_m}{s_m + \tau_i} \ge \delta s.$$ Hence, there exist infinitely many $j \in {\mathbb N}$ such that for all $t \ge t_0$, $$\log P_\eta ([0,j\tau_i\mathbf{u}]^{(t)}) \ge \delta j \tau_i.$$ But since $K_i$ contains a translate of $[0,\tau_i\mathbf{u}]$, it follows that $$\log P_\eta(K_i)^{2t_0j} \ge \log P_\eta ([0,j\tau_i\mathbf{u}]^{(t_0)}) \ge \delta j \tau_i,$$ and so $$\frac{\log P_\eta(K_i)}{\tau_i} \ge \frac{\delta}{2t_0}$$ for all $i \in {\mathbb N}$, a contradiction. Conversely, if no such sequence exists, then setting $K_i = [0,i\mathbf{u}]^{(1)}$, there exists a constant $c > 0$ such that $\log P_\eta(K_{i_j}) \ge ci_j$ for some increasing sequence $(i_j)$. By Lemma~\ref{entropycomplex}, $h(\mathbf{u}) \ge c$. \end{proof} \begin{cor} \label{exponential-eccentricity} For $\eta\colon{\mathbb Z}^2\to\mathcal{A}$, if there exist $n_i, k_i\in{\mathbb N}$ tending to infinity such that $P_\eta(n_i,k_i) \le n_ik_i$ and $\lim \frac{\log (n_i)}{k_i} = \lim \frac{\log (k_i)}{n_i} = 0$, then the ${\mathbb Z}^2$ action by translation on $X_\eta$ has zero entropy in all directions. \end{cor} \begin{proof} We apply Proposition~\ref{convexentropy} to the sets $K_i = [0,n_i-1]\times [0,k_i-1]$. If $\mathbf{u}$ is a unit vector, then $\tau_{\mathbf{u}}(K_i) \ge \min(n_i, k_i)$. 
Without loss of generality, we can assume that $k_i \ge n_i$ for all $i \in {\mathbb N}$. Then $$\lim_{i \to \infty} \frac{\log P_\eta(K_i)}{\tau_{\mathbf{u}}(K_i)} \le \lim_{i \to \infty}\frac{\log(n_i k_i)}{n_i} \le \lim_{i \to \infty}\frac{2\log(k_i)}{n_i} = 0.$$ \end{proof} \begin{remark} \label{exponential-eccentricity-remark} By passing to a subsequence, the conclusion of Corollary~\ref{exponential-eccentricity} holds unless the rectangles for which we have complexity assumptions have eccentricity unbounded either above or below. In particular, it holds unless there exists $C > 1$ such that either $k_i \ge C^{n_i}$ for all $i \in {\mathbb N}$ or $n_i \ge C^{k_i}$ for all $i \in {\mathbb N}$, and this is the setting studied in the next section. \end{remark} \section{Proof of Theorem~\ref{trichotomy}} We say two vectors $\mathbf{v}, \mathbf{w} \in \mathbb{R}^2 \setminus \{0\}$ are \emph{parallel} if $\mathbf{v} = c\mathbf{w}$ for some $c > 0$, and we say they are \emph{antiparallel} if $\mathbf{v} =c\mathbf{w}$ for some $c < 0$. We define these terms analogously for directed lines and line segments. Recall that we endow the boundary of a convex set $S\subset {\mathbb Z}^2$ with positive orientation. Given $\mathbf{v}\in \mathbb{R}^2 \setminus \{0\}$, a $\mathbf{v}$-plane is a closed half-plane whose boundary, directed so that the half-plane lies on its left, is parallel to $\mathbf{v}$. For example, $\{(x,y) \colon x \le 2\}$ is an $\mathbf{e}_2$-plane, while $\{(x,y) \colon x \ge 2\}$ is a $(-\mathbf{e}_2)$-plane. \begin{notation} If $\mathcal{S}\subset{\mathbb Z}^2$ is convex and $\ell$ is a directed line, we write $E(\ell, \mathcal{S}) = \ell' \cap \mathcal{S}$, where $\ell'$ is the boundary of the intersection of all $\ell$-planes containing $\mathcal{S}$. \end{notation} Note that $E(\ell, \mathcal{S})$ is the set of integer points on some edge of $\partial(\mathcal{S})$, and it may reduce to a single vertex. We recall a definition from~\cite{CK}: \begin{definition} \label{def:balanced} Suppose $\ell\subset\mathbb{R}^2$ is a directed line.
A finite, convex set $\mathcal{S}\subset{\mathbb Z}^2$ is {\em $\ell$-balanced for $\eta$} if \begin{enumerate} \item The endpoints of $E(\ell, \mathcal{S})$ are $\eta$-generated by $\mathcal{S}$; \item The set $\mathcal{S}$ satisfies $P_{\eta}(\mathcal{S}\setminus E(\ell, \mathcal{S})) > P_{\eta}(\mathcal{S})-|E(\ell, \mathcal{S})|$; \label{item:two} \item Every line parallel to $\ell$ that has nonempty intersection with $\mathcal{S}$ intersects $\mathcal{S}$ in at least $|E(\ell, \mathcal{S})|-1$ integer points. \end{enumerate} We also call such a set \emph{$\mathbf{v}$-balanced for $\eta$} whenever $\mathbf{v}$ is a vector parallel to $\ell$. \end{definition} We note that the two endpoints of $E(\ell, \mathcal{S})$ may coincide, since $E(\ell, \mathcal{S})$ can reduce to a single point. Furthermore, $P_\eta(\emptyset)$ is not defined; since a generating set $\mathcal{S}$ never consists of a single line segment, we have $\mathcal{S}\neq E(\ell, \mathcal{S})$, and so condition~\eqref{item:two} is well defined. \begin{lemma} \label{balanced} Suppose $\eta\colon {\mathbb Z}^2 \to \mathcal{A}$ satisfies $P_\eta(n_1,k_1) \le n_1k_1$ for some $n_1, k_1 \in {\mathbb N}$. If $\ell$ is a horizontal or vertical directed line, then there is an $\ell$-balanced set for $\eta$. Furthermore, there exist $\mathbf{e}_1$- and $(-\mathbf{e}_1)$-balanced sets with the same height, and there exist $\mathbf{e}_2$- and $(-\mathbf{e}_2)$-balanced sets with the same width. \end{lemma} This is essentially proved in Lemma 4.7 in~\cite{CK}. There, the assumption that there exist $n, k$ with $P_\eta(n,k) \le nk$ is replaced with the stronger assumption that we can find $n, k$ with $P_\eta(n,k) \le \dfrac{nk}{2}$, but this hypothesis is not necessary in the case of a horizontal or vertical line, the only ones needed here. We include a proof for completeness. \begin{proof} We prove the case that $\ell$ is parallel to $\mathbf{e}_2$; the other three cases are handled similarly.
Let $n_1' \le n_1$ be the minimal positive integer such that $P_\eta(n_1', k_1) \le n_1'k_1$. First suppose $n_1'\leq 1$. Then for each $n \in {\mathbb Z}$, $\eta \rst {\{n\}\times {\mathbb Z}}$ is vertically periodic with period at most $k_1$ by the Morse-Hedlund Theorem \cite{MH}, and so $\eta$ is vertically periodic. Let $p$ denote its period. Set $c = P_\eta(\{0\}\times [0,p-1])$ and take $k > \max\{p, c^2 - c\}$. Define $\mathcal{S} = [0,1]\times [0,k-1]$. Then $E(\ell, \mathcal{S}) = \{1\}\times [0,k-1]$ and property (iii) of Definition~\ref{def:balanced} is clearly satisfied. Property (i) is satisfied since $k > p$. Finally, property (ii) follows since $$P_\eta(\mathcal{S}) \le c^2 < c + k = P_\eta(\mathcal{S} \setminus E(\ell, \mathcal{S})) + |E(\ell, \mathcal{S})|.$$ Note that in this case, $\mathcal{S}$ is both an $\mathbf{e}_2$-balanced set and a $(-\mathbf{e}_2)$-balanced set. Hence, we may assume that $n_1'\geq2$. Set $R_1 = [0, n_1'-2] \times [0, k_1-1]$ and set $R_2 = [0, n_1' - 1] \times [0, k_1-1]$. Note that $R_1$ is simply the result of removing the rightmost vertical line from $R_2$. By the minimality of $n_1'$, we have \begin{equation} \label{edgeremoval} P_\eta(R_1) = P_\eta(n_1' - 1, k_1) > (n_1'-1)k_1 \ge P_\eta(R_2) - k_1. \end{equation} Take $\mathcal{S} \subset {\mathbb Z}^2$ to be a convex set of minimal size with $R_1 \subsetneq \mathcal{S} \subset R_2$ satisfying $P_\eta(\mathcal{S}) - |\mathcal{S}| = P_\eta(R_2) - |R_2|$. Then by the minimality of $n_1'$, $\mathcal{S}$ contains at least one point on the vertical line $x = n_1'-1$, and so $E(\ell, \mathcal{S}) \subset \{(x,y) \colon x = n_1'-1\}$. By the minimality of $\mathcal{S}$, removing an endpoint of $E(\ell, \mathcal{S})$ results in a set $\mathcal{S}'$ with $P_\eta(\mathcal{S}') - |\mathcal{S}'| > P_\eta(\mathcal{S}) - |\mathcal{S}| = P_\eta(\mathcal{S}) - |\mathcal{S}'| - 1$. 
Thus, since $\mathcal{S}' \subset \mathcal{S}$, $P_\eta(\mathcal{S}') \le P_\eta(\mathcal{S}) \le P_\eta(\mathcal{S}')$, which implies that every $\eta$-coloring of $\mathcal{S}'$ extends uniquely to an $\eta$-coloring of $\mathcal{S}$ (or, in other words, the endpoint we removed was $\eta$-generated by $\mathcal{S}$). Hence, property (i) in Definition~\ref{def:balanced} holds. By construction, we have $$P_\eta(\mathcal{S}) - |E(\ell, \mathcal{S})| = P_\eta(\mathcal{S}) - |\mathcal{S}| + |R_1| = P_\eta(R_2) - |R_2| + |R_1| = P_\eta(R_2) - k_1.$$ By~\eqref{edgeremoval}, it follows that $P_\eta(\mathcal{S}) - |E(\ell, \mathcal{S})| < P_\eta(R_1) = P_\eta(\mathcal{S} \setminus E(\ell, \mathcal{S}))$, which is property (ii) of Definition~\ref{def:balanced}. Finally, any vertical line other than $x=n_1'-1$ that intersects $\mathcal{S}$ meets that set in $k_1 \ge |E(\ell, \mathcal{S})|$ points, and so property (iii) of Definition~\ref{def:balanced} holds as well, and $\mathcal{S}$ is $\mathbf{e}_2$-balanced for $\eta$. Lastly, notice that the width of $\mathcal{S}$ is $n_1'$ and the choice of this $n_1'$ would be the same if the argument is repeated to produce a $(-\mathbf{e}_2)$-balanced set. \end{proof} \begin{definition} Given a directed line $\ell$, an $\ell$-balanced set $\mathcal{S}$ for $\eta$, and a convex set $\mathcal{T} \subset {\mathbb Z}^2$ that contains a translate of $\mathcal{S}$ and has an edge parallel to $\ell$, we define the \emph{$(\ell, \mathcal{S})$-extension} of $\mathcal{T}$ to be $$\operatorname{ext}_{\ell,\mathcal{S}}^1(\mathcal{T}) = \mathcal{T} \cup \bigcup_{\mathbf{j} \in J_{\mathcal{T}, \ell, \mathcal{S}}} (\mathcal{S} + \mathbf{j}),$$ where $$J_{\mathcal{T}, \ell, \mathcal{S}} = \{\mathbf{j} \colon (\mathcal{S} + \mathbf{j}) \setminus \mathcal{T} = E(\ell, \mathcal{S}) + \mathbf{j}\}.$$ Note that $J_{\mathcal{T}, \ell, \mathcal{S}}$ is the set of integer points on some line segment parallel to $\ell$.
Given an integer $p \ge 0$, let $J_{\mathcal{T}, \ell, \mathcal{S}, p}$ be the result of removing the $p$ integer points nearest each endpoint of $J_{\mathcal{T}, \ell, \mathcal{S}}$ and define $$\operatorname{ext}_{\ell,\mathcal{S}, p}^1(\mathcal{T}) = \mathcal{T} \cup \bigcup_{\mathbf{j} \in J_{\mathcal{T}, \ell, \mathcal{S}, p}} (\mathcal{S} + \mathbf{j}).$$ For each $n\geq 1$, we then inductively define $$\operatorname{ext}_{\ell,\mathcal{S}, p}^{n+1}(\mathcal{T}) = \operatorname{ext}_{\ell, \mathcal{S}, p}^1(\operatorname{ext}_{\ell, \mathcal{S}, p}^n(\mathcal{T})).$$ Note that $\operatorname{ext}_{\ell,\mathcal{S}}^{n}(\mathcal{T}) = \operatorname{ext}_{\ell,\mathcal{S},0}^{n}(\mathcal{T})$. For notational convenience, we set $\operatorname{ext}_{\ell, \mathcal{S}, p}^0(\mathcal{T}) = \mathcal{T}$. We define the \emph{$(\ell, \mathcal{S})$-border} $\partial_{\ell, \mathcal{S}}(\mathcal{T})$ of $\mathcal{T}$ to be $\bigcup_{\mathbf{j} \in J_{\mathcal{T}, \ell, \mathcal{S}}}(\tilde{\mathcal{S}} + \mathbf{j})$, where $\tilde{\mathcal{S}} = \mathcal{S} \setminus E(\ell, \mathcal{S})$. We define the \emph{$(\ell,\mathcal{S},p)$-border} $\partial_{\ell,\mathcal{S},p}(\mathcal{T})$ of $\mathcal{T}$ to be $\bigcup_{\mathbf{j} \in J_{\mathcal{T}, \ell, \mathcal{S},p}}(\tilde{\mathcal{S}} + \mathbf{j})$. Given two sets $\mathcal{S}, R \subset {\mathbb Z}^2$, we say $f \colon R \to \mathcal{A}$ is an \emph{$(\mathcal{S},\eta)$-coloring} of $R$ if $f = g\rst{R}$ for some $g \colon {\mathbb Z}^2 \to \mathcal{A}$ such that $g\rst{{\mathcal{S}+ \mathbf{j}}}$ is an $\eta$-coloring of $\mathcal{S}+\mathbf{j}$ for each $\mathbf{j} \in {\mathbb Z}^2$. \end{definition} Note that every $\eta$-coloring of $R$ is also an $(\mathcal{S},\eta)$-coloring of $R$, but the converse does not always hold. To prove the next lemma, we recall a finite version of the Morse-Hedlund Theorem.
\begin{definition} If $a\in{\mathbb Z}$ and $f\colon\{a,a+1,a+2,\dots,a+i-1\}\to\mathcal{A}$, define $Tf\colon\{a-1,a,\dots,a+i-2\}\to\mathcal{A}$ by $(Tf)(n):=f(n+1)$ and define $P_f(n)$ to be the number of distinct functions of the form $(T^mf)\rst{\{a,a+1,\dots,a+n-1\}}$, where $0\leq m\leq i-n$ and $0\leq n\leq i$. \end{definition} The following is essentially due to Morse and Hedlund~\cite{MH}, and appears with this formulation in~\cite{CK2}: \begin{theorem} \label{MorseHedlundTheorem} Suppose $f\colon\{a,a+1,\dots,a+i-1\}\to\mathcal{A}$ and suppose there exists $n_0\in{\mathbb N}$ such that $P_f(n_0)\leq n_0$. If $i>3n_0$, then the restriction of $f$ to the set $\{a+n_0,a+n_0+1,\dots,a+i-n_0\}$ is periodic of period at most $n_0$. \end{theorem} \begin{lemma} \label{nonunique-periodic} Let $\ell$ be a vertical (respectively, horizontal) directed line, and assume $\eta \colon {\mathbb Z}^2 \to \mathcal{A}$ has an $\ell$-balanced set $\mathcal{S}$. Suppose $R$ is a rectangle (with horizontal and vertical sides) large enough to contain $\mathcal{S}$ and define $q:=|\mathcal{S}|$. If $f$ is an $\eta$-coloring of $\operatorname{ext}_{\ell, \mathcal{S}}^1(R)$ such that $f\rst{R}$ does not extend uniquely to an $(\mathcal{S},\eta)$-coloring of $\operatorname{ext}_{\ell, \mathcal{S}}^1(R)$, then $f$ is vertically (respectively, horizontally) periodic on $\partial_{\ell,\mathcal{S},2q}(R)$ with period at most $|E(\ell, \mathcal{S})| - 1$. \end{lemma} \begin{proof} For convenience, set $R = [0,n-1] \times [0, k-1]$ and $R^1 = \operatorname{ext}_{\ell, \mathcal{S}}^1(R)$. We prove the claim when $\ell$ is parallel to $\mathbf{e}_2$; the other cases are similar. In this case, $R^1 \setminus R = \{n\} \times I$ for some interval $I$. Write $I = \{j_0, j_0 + 1, \dots, j_0 + L - 1\}$, where $k - q \le L \le k$. We may assume without loss of generality that $I$ is also the set of integers $j$ such that $(\mathcal{S}+ (0,j)) \setminus R = E(\ell, \mathcal{S}) + (0,j)$. 
(Otherwise replace $\mathcal{S}$ with a translate that has this property.) Let $\tilde{\mathcal{S}} = \mathcal{S} \setminus E(\ell, \mathcal{S})$. The assumptions imply that for each $j \in I$, $f\rst{{\tilde{\mathcal{S}}+(0,j)}}$ does not extend uniquely to an $\eta$-coloring of $\mathcal{S} + (0,j)$. Indeed, if it did extend uniquely, then since the endpoints of $E(\ell, \mathcal{S})$ are $\eta$-generated, $f\rst{R}$ would extend uniquely to an $(\mathcal{S},\eta)$-coloring of $R^1$. Since $\mathcal{S}$ is $\ell$-balanced, $P_\eta(\tilde{\mathcal{S}}) > P_\eta(\mathcal{S}) - |E(\ell, \mathcal{S})|$. Thus there are at most $|E(\ell, \mathcal{S})|-1$ distinct $\eta$-colorings of $\tilde{\mathcal{S}}$ that do not extend uniquely to an $\eta$-coloring of $\mathcal{S}$. Thus, at most $|E(\ell, \mathcal{S})|-1$ such colorings appear as a coloring of the form $f\rst{{\tilde{\mathcal{S}} + (0,j)}}$ for $j \in I$. Set $\tilde{\mathcal{S}}_0 = \{(i,j) \in \tilde{\mathcal{S}} \colon (i,j-1) \not\in \tilde{\mathcal{S}}\}$. Since $\mathcal{S}$ is a balanced set, $\tilde{\mathcal{S}}_0 + (0,j) \subset \tilde{\mathcal{S}}$ for each $0 \le j \le |E(\ell, \mathcal{S})| - 2$. Let $\mathcal{B}$ be the set of $\mathcal{A}$-colorings of $\tilde{\mathcal{S}}_0$ and define $g \colon I \to \mathcal{B}$ by $g(j) = f\rst{{\tilde{\mathcal{S}}_0 + (0,j)}}$. Then the one dimensional complexity $P_g(|E(\ell, \mathcal{S})|-1)$ is bounded above by the number of colorings of $\tilde{\mathcal{S}} \supset \bigcup_{0 \le j \le |E(\ell, \mathcal{S})| - 2} (\tilde{\mathcal{S}}_0 + (0,j))$ that arise as a coloring of the form $f\rst{\tilde{\mathcal{S}} + (0,j)}$ with $j \in I$, and so $P_g(|E(\ell, \mathcal{S})| -1) \le |E(\ell, \mathcal{S})|-1$. Hence by Theorem~\ref{MorseHedlundTheorem}, $g$ is periodic on $$\{j_0 + |E(\ell, \mathcal{S})| -1, j_0 + |E(\ell, \mathcal{S})|, \dots, j_0 + L - (|E(\ell, \mathcal{S})| - 1) \}$$ with period at most $|E(\ell, \mathcal{S})| - 1$.
Since $|E(\ell, \mathcal{S})| - 1 < q$ and $L \ge k - q$, this implies that $f\rst{\partial_{\ell,\mathcal{S}, 2q}(R)}$ is vertically periodic with period at most $|E(\ell, \mathcal{S})|-1$. \end{proof} \begin{lemma} \label{extend-periodicity} Let $\ell$ be a vertical (respectively, horizontal) directed line and let $\eta$, $\mathcal{S}$, $q$, and $R$ be as in Lemma~\ref{nonunique-periodic}. Let $p\in {\mathbb N}$ and let $a = \max\{p, 2q\}$. Let $m \in {\mathbb N}$ and let $f$ be an $\eta$-coloring of $R$ which is vertically (respectively, horizontally) periodic on $\partial_{\ell, \mathcal{S}}(R)$ with period at most $p$. Then any extension of $f$ to an $\eta$-coloring of $\operatorname{ext}_{\ell, \mathcal{S}, a}^m(R)$ must be vertically (respectively, horizontally) periodic on $\operatorname{ext}_{\ell,\mathcal{S}, a}^m(R) \setminus \operatorname{ext}_{\ell, \mathcal{S}, a}^{m-1}(R)$ with period at most $p$, and therefore vertically (respectively, horizontally) periodic on $\operatorname{ext}_{\ell, \mathcal{S}, a}^m(R)$ with period at most $p!$. \end{lemma} We note that the proof is similar in spirit to the proof of Proposition 4.8 in \cite{CK}. \begin{proof} We prove this when $\ell$ is parallel to $\mathbf{e}_2$, by induction on $m$. The other cases are proved similarly. Let $T_0 = R = [0,n-1] \times [0,k-1]$, and for $m \ge 1$ let $T_{m} = \operatorname{ext}_{\ell, \mathcal{S}, a}^m(R)$. Let $\tilde{f}$ be an extension of $f$ to an $\eta$-coloring of $T_{m+1}$ and suppose that the restriction of $\tilde{f}$ to $\partial_{\ell,\mathcal{S}}(T_m)$ is vertically periodic of period $p' \le p$. Then $\tilde{f}\rst{T_m}$ either extends uniquely to an $(\mathcal{S},\eta)$-coloring of $T_{m+1}$ or it does not. We claim that in either case, $\tilde{f}\rst{\partial_{\ell, \mathcal{S}}(T_{m+1})}$ is vertically periodic of period at most $p$.
{\bf Case 1:} \emph{$\tilde{f}\rst{T_m}$ does extend uniquely to an $(\mathcal{S},\eta)$-coloring of $T_{m+1}$.} Since $\tilde{f}$ is an $\eta$-coloring of $T_{m+1}$, there exists $(i_0,j_0)$ such that for all $(i,j) \in T_{m+1}$, $\tilde{f}(i,j) = \eta(i + i_0, j + j_0)$. Define $g \colon T_{m+1} \to \mathcal{A}$ by $$g(i,j) = \begin{cases} \eta(i+i_0,j+j_0) &\mbox{ if } (i,j) \in T_m\\ \eta(i + i_0,j+j_0 + p') &\mbox{ if } (i,j) \in T_{m+1}\setminus T_m. \end{cases}$$ We claim that $g$ is an extension of $\tilde{f}\rst{T_m}$ to an $(\mathcal{S},\eta)$-coloring of $T_{m+1}$. To see this, fix $(i_1, j_1) \in {\mathbb Z}^2$ such that $\mathcal{S} + (i_1, j_1)$ intersects $T_{m+1}$. If this translate does not intersect $T_{m+1} \setminus T_m$, then $g\rst{\mathcal{S} + (i_1,j_1)} = \tilde{f}\rst{\mathcal{S}+(i_1,j_1)}$ and so $g\rst{\mathcal{S} + (i_1,j_1)}$ is (or can be extended to) an $\eta$-coloring of $\mathcal{S}$. If $\mathcal{S}+(i_1,j_1)$ intersects $T_{m+1} \setminus T_m$, then by the definition of $T_{m+1}$, we must have $(\mathcal{S}+(i_1,j_1+p')) \cap ([0, n+m-1] \times {\mathbb Z}) \subset T_m$. Since $\tilde{f}$ is vertically periodic on $\partial_{\ell, \mathcal{S}}(T_m)$ with period $p'$, we see that $g(i,j) = \eta(i+i_0,j+j_0+p')$ for $(i,j) \in (\mathcal{S} + (i_1,j_1)) \cap T_{m+1}$. Thus, $g$ is an $(\mathcal{S}, \eta)$-coloring of $T_{m+1}$ which restricts to $\tilde{f}$ on $T_m$. By assumption we must have $g= \tilde{f}$, meaning that $\tilde{f}$ is vertically periodic on $T_{m+1} \setminus T_m$ with period dividing $p'$, and therefore periodic on $\partial_{\ell, \mathcal{S}}(T_{m+1})$ with period $p' \le p$. {\bf Case 2:} \emph{$\tilde{f}\rst{T_m}$ does not extend uniquely to an $(\mathcal{S},\eta)$-coloring of $T_{m+1}$.} Let $\tilde{\mathcal{S}} = \mathcal{S} \setminus E(\ell, \mathcal{S})$.
By Lemma~\ref{nonunique-periodic}, $\tilde{f}\rst{\partial_{\ell,\mathcal{S}, 2q}(T_m)}$ is vertically periodic of period at most $h = |E(\ell, \mathcal{S})|-1$. Let $N$ be the number of colorings $\alpha$ of $\mathcal{S}$ such that $\alpha\rst{\tilde{\mathcal{S}}}$ extends in more than one way to an $\eta$-coloring of $\mathcal{S}$. We claim that $N \le 2h$. For each such $\alpha$, let $C_\alpha$ be the set of $\eta$-colorings $\alpha^{\prime}$ of $\mathcal{S}$ such that $\alpha\rst{\tilde{\mathcal{S}}} = \alpha'\rst{\tilde{\mathcal{S}}}$. Since $\mathcal{S}$ is $\mathbf{e}_2$-balanced for $\eta$, $$P_\eta(\tilde{\mathcal{S}}) + h \ge P_\eta(\mathcal{S}) = P_\eta(\tilde{\mathcal{S}}) + \sum_{C_\alpha} (|C_\alpha| - 1),$$ where the sum is taken over the distinct sets of the form $C_\alpha$. In particular, $\alpha\rst{\tilde{\mathcal{S}}}$ extends in more than one way exactly when $|C_\alpha| > 1$. Enumerating the colorings of $\tilde{\mathcal{S}}$ that extend in more than one way to a coloring of $\mathcal{S}$ as $\alpha_1, \dots, \alpha_r$ (where $r \le h$), we have that $$N = \sum_{i=1}^r |C_{\alpha'_i}| = \sum_{i=1}^r (|C_{\alpha'_i}| - 1) + \sum_{i=1}^r 1 \le h + r \le 2h,$$ where $\alpha'_i$ is a choice of a coloring of $\mathcal{S}$ that restricts to $\alpha_i$ on $\tilde{\mathcal{S}}$. Without loss of generality, we can assume that $\mathcal{S} \setminus T_m = E(\ell, \mathcal{S})$, but $(\mathcal{S} + (0,-1)) \setminus T_m \neq E(\ell, \mathcal{S}) + (0,-1)$. (If not we can replace $\mathcal{S}$ with a translate that has this property and it continues to be an $\mathbf{e}_2$-balanced set for $\eta$.) Then by the pigeonhole principle, there exist integers $0 \le i < j \le 2h$ such that $\tilde{f}\rst{\mathcal{S} + (0,i)} = \tilde{f}\rst{{\mathcal{S}}+(0,j)}$.
Since $\tilde{f}\rst{\partial_{\ell, \mathcal{S}, 2q}(T_m)}$ is vertically periodic of period at most $h$ and each vertical line intersecting $\mathcal{S}$ intersects it in at least $h$ points (since $\mathcal{S}$ is a vertically balanced set), it follows that $\tilde{f}\rst{\tilde{\mathcal{S}} + (0,i+y)} = \tilde{f}\rst{\tilde{\mathcal{S}} + (0,j+y)}$ for each $y$ such that $(\tilde{\mathcal{S}} + (0,i+y)) \cup (\tilde{\mathcal{S}} + (0,j+y)) \subset \partial_{\ell, \mathcal{S}, 2q}(T_m)$. But since each endpoint of $E(\ell, \mathcal{S})$ is $\eta$-generated by $\mathcal{S}$, an easy induction argument shows that $\tilde{f}\rst{\mathcal{S} + (0,i+y)} = \tilde{f}\rst{\mathcal{S} + (0,j+y)}$. Thus $\tilde{f}\rst{\partial_{\ell,\mathcal{S}}(T_{m+1})}$ is vertically periodic of period at most $2h \le p$. \end{proof} \begin{lemma} \label{few-periodic} If $\eta\colon{\mathbb Z}^2\to\mathcal{A}$ is not vertically periodic and there exist sequences $n_i, k_i \in {\mathbb N}$ tending to infinity such that $P_\eta(n_i, k_i) \le n_i k_i$, then for any $p \in {\mathbb N}$ and $\lambda > 1$, there exists $h \in {\mathbb N}$ such that for sufficiently large $w \in {\mathbb N}$, the number of $\eta$-colorings of $[0,w-1] \times [0,h-1]$ that are vertically periodic with period at most $p$ is less than $\lambda^w$. \end{lemma} The proof requires the following technical lemma: \begin{lemma} \label{period-break} Let $\eta$, $n_i$, $k_i$, $w$ and $p$ be as in Lemma~\ref{few-periodic}. Let $a = \max\{p, 2n_1k_1\}$, set $n_i' = \left\lfloor n_i/3\right\rfloor$ for $i\in{\mathbb N}$, $h_i = 2(p+k_i)$, $R_{i,w} = [0, w -1] \times [0,h_i- 1]$, and $S_i = [0, n_i' -1] \times [0, h_i-1]$.
There exists a constant $C$ independent of $w$ and $i$ such that for any $i \in {\mathbb N}$ with $k_i > 4p$, there exist $\eta$-colorings $g_1, \ldots, g_C$ of $S_i$ such that if $n_i \le w$, $(x_0, y_0) \in {\mathbb Z}^2$, and $\eta\rst{R_{i,w} + (x_0, y_0)}$ is vertically periodic with period at most $p$, then the following hold: \begin{enumerate}[label=(\alph*)] \item\label{it:a1} Either there exists minimal $y_1 \ge y_0 + h_i$ such that for some $x_1 \in [x_0, x_0 + w - 1]$, $\eta(x_1,y_1) \neq \eta(x_1,y_1-p)$, or \item \label{it:a2} there exists maximal $y_1 < y_0$ such that $\eta(x_1,y_1) \neq \eta(x_1,y_1+p)$ for some $x_1 \in [x_0, x_0 + w - 1]$, \end{enumerate} and exactly one of the following holds: \begin{enumerate}[label=(\roman*)] \item \label{it:b1} $x_1$ can be chosen to lie in $[x_0 + n_1, x_0 + w - n_1 -1 ]$, in which case $\eta$ is horizontally periodic on $[x_0 + a(p+2), x_0 + w - a(p+2)] \times [y_0, y_0 + h_i-1]$ with period at most $(2n_1)!$, \item \label{it:b2} $x_1$ cannot be chosen to lie in $[x_0 + n_1, x_0 + w - n_1 -1 ]$ but can be chosen to lie in $[x_0, x_0 + n_1-1]$, in which case $\eta\rst{S_i + (x_0, y_0)} = g_j$ for some $1 \le j \le C$, or \item \label{it:b3} $x_1$ can only be chosen to lie in $[x_0 + w - n_1, x_0 + w - 1]$, in which case $\eta\rst{S_i + (x_0 + w - n_i', y_0)} = g_j$ for some $1 \le j \le C$. \end{enumerate} \end{lemma} Before proving Lemma~\ref{period-break}, we first use it to prove Lemma~\ref{few-periodic}. \begin{proof}[Proof of Lemma~\ref{few-periodic}] For convenience, write $R = R_{i,w}$, $S = S_{i}$, and $h = h_i$. Let $f$ be an $\eta$-coloring of $R$ that is vertically periodic with period $p' \le p$, and let $x_0, y_0$ be such that $f(x,y) = \eta(x + x_0, y+ y_0)$ for all $(x,y) \in R$. Define a finite sequence of rectangles in the following way. Let $R_0 = R = R_{i,w}$. For each $j \ge 0$, until the process terminates, apply Lemma~\ref{period-break} to $R_j + (x_0,y_0)$. If case~\ref{it:b1} holds, terminate the process.
If case~\ref{it:b2} holds, let $R_{j+1}'$ be the translate of $S_{i}$ that shares a left edge with $R_j$ and let $R_{j+1} = R_j \setminus R_{j+1}'$, which is a rectangle to which the claim also applies, so long as $w - (j+1)n_i' \ge n_i$. (If this inequality fails, terminate the process instead.) If case~\ref{it:b3} holds, let $R_{j+1}'$ be the translate of $S_{i}$ that shares a right edge with $R_{j}$ and let $R_{j+1} = R_j \setminus R_{j+1}'$, which is also a rectangle to which the claim applies for $w - (j+1)n_i' \ge n_i$. The coloring $f$ is completely determined by the following data: \begin{itemize} \item The length $m$ of the sequence of rectangles, which satisfies $m \le \left\lfloor \dfrac{w-n_i}{n_i'}\right\rfloor + 1$. \item Whether $R_{j+1}'$ is on the right or left side of $R_j$ for each $0 \le j < m$. \item The indices $1 \le a_j \le C$ for which $\eta\rst{R_{j}'+(x_0,y_0)} = g_{a_j}$ for $1 \le j \le m$. \item The restriction of $\eta$ to $R_m+(x_0,y_0)$. \end{itemize} Since $\left\lfloor \dfrac{w-n_i}{n_i'}\right\rfloor + 1 \le 4w/n_i$, the number of colorings $f$ with these properties is at most $$\frac{4w}{n_i}2^{4w/n_i}C^{4w/n_i} \max\{C_1, C_2\},$$ where $C_1$ is the number of $\eta$-colorings of $R$ that are horizontally periodic on $[x_0 + a(p+2), x_0 + w - a(p+2)]\times [0,h-1]$ with period at most $(2n_1)!$ and vertically periodic on $R$ with period at most $p$ and $C_2$ is the number of $\eta$-colorings of $[0, n_i -1]\times [0, h-1]$. Clearly we have $$C_1 \le p(2n_1)!|\mathcal{A}|^{p(2n_1)! + a(p+2)h}$$ and $$C_2 \le |\mathcal{A}|^{n_ih}.$$ In particular, $C_1$ and $C_2$ are independent of $w$, and so the number of $\eta$-colorings of $R$ that are vertically periodic with period at most $p$ is at most $K_iw(2C)^{4w/n_i}$, where $K_i$ is independent of $w$ and $C$ is independent of both $w$ and $i$. Choose $i$ large enough such that $(2C)^{4/n_i} < \sqrt{\lambda}$. 
Then for large enough $w$, the number of colorings $f$ of $R$ that are vertically periodic with period at most $p$ is less than \begin{equation*} K_iw(2C)^{4w/n_i} < K_iw\lambda^{w/2} < \lambda^w. \hfill\qedhere \end{equation*} \end{proof} \begin{proof}[Proof of Lemma~\ref{period-break}] If $\eta$ is vertically periodic on some strip of width $n_1$, then by Lemma~\ref{extend-periodicity} it is periodic on all such strips, with bounded period, and so $\eta$ is vertically periodic. Hence, we may assume that $\eta$ is not periodic on any vertical strip of width $n_1$, meaning that either~\ref{it:a1} or~\ref{it:a2} holds. We assume throughout the rest of the proof that~\ref{it:a1} holds; the argument in the other case is similar. Let $p' \le p$ be the vertical period of $\eta\rst{R_{i,w}+(x_0,y_0)}$, and again for convenience set $R = R_{i,w}$, $S = S_i$, and $h = h_i$. Let $\ell$ be a line parallel to $-\mathbf{e}_1$, and let $\mathcal{S} \subset [0,n_1-1] \times [0, k_1-1]$ be an $\ell$-balanced set for $\eta$. First suppose $x_1$ may be chosen to lie in $[x_0 + n_1, x_0 + w - n_1 - 1]$. Then the restriction of $\eta$ to $[x_0, x_0 + w - 1] \times [y_1 - k_1, y_1 - 1]$ extends nonuniquely to an $\eta$-coloring of $\operatorname{ext}_{\ell, \mathcal{S}}^1([x_0, x_0 + w - 1] \times [y_1 - k_1, y_1 - 1])$, and hence also extends nonuniquely to an $(\mathcal{S}, \eta)$-coloring of that set. By Lemma~\ref{nonunique-periodic}, it follows that $\eta$ is horizontally periodic on $\partial_{\ell, \mathcal{S}, a}([x_0, x_0 + w - 1] \times [y_1 - k_1, y_1 - 1])$ with period at most $|E(\ell, \mathcal{S})| -1 < n_1$. Therefore by Lemma~\ref{extend-periodicity}, it is horizontally periodic with period at most $(2n_1)!$ on $$\operatorname{ext}_{\ell', \mathcal{S}', a}^p(\partial_{\ell, \mathcal{S}, a}([x_0, x_0 + w - 1] \times [y_1 - k_1, y_1 - 1])), $$ where $\ell'$ is parallel to $\mathbf{e}_1$ and $\mathcal{S}'$ is an $\ell'$-balanced set for $\eta$.
Since $n_1 + a + pa \le a(p+2)$, it follows that $\eta$ is horizontally periodic with period at most $(2n_1)!$ on $[x_0 + a(p+2), x_0 + w - a(p+2)] \times [y_1 - p, y_1 - 1]$. Thus by the vertical periodicity assumption, $\eta$ is horizontally periodic on $[x_0 + a(p+2), x_0 + w - a(p+2)]\times [0, h-1]$ with period at most $(2n_1)!$. Otherwise, $x_1$ cannot be chosen to lie in $[x_0 + n_1, x_0 + w - n_1 - 1]$ but can be chosen to lie in either $[x_0, x_0 + n_1 -1]$ or $[x_0 + w - n_1, x_0 +w - 1]$. Let us assume it is the former; the argument in the other case is similar. Let $x_0', y_0'$ be another pair of integers such that $\eta\rst{R + (x_0', y_0')}$ is vertically periodic with period $p'$. Assume that~\ref{it:a1} holds for $(x_0',y_0')$ as well and that $y_1'$ is as in~\ref{it:a1}. Suppose also that $x_1'$ cannot be chosen in $[x_0' + n_1, x_0' + w - n_1 - 1]$ but can be chosen in $[x_0', x_0' + n_1 -1]$. Assume further that $x_1' - x_0' = x_1 - x_0$. We claim that $\eta(x_0 + x,y_1 + y) = \eta(x_0'+x,y_1'+y)$ for $(x,y) \in [0,n_i'-1] \times [-p+1,0]$. Indeed, let $B = [0, n_i - 1] \times [0, k_i - 1]$ and, for an integer vector $\mathbf{t} \in [0, n_i - n_i' - 1] \times [1, k_i - p - 1]$, let $B_{\mathbf{t}} = B + (x_0,y_1 - k_i) + \mathbf{t}$ and $B_{\mathbf{t}}' = B + (x_0', y_1' - k_i) + \mathbf{t}$. For a coloring $\alpha\colon B \to \mathcal{A}$, define $y(\alpha)$ to be the minimal integer $p' \le y \le k_i-1$ such that $\alpha(x,y) \neq \alpha(x,y-p')$ for some $0 \le x\le n_i - 1$, and let $x(\alpha)$ be the maximal such $x$. Setting $\alpha_{\mathbf{t}} = \eta\rst{B_{\mathbf{t}}}$, then $(x(\alpha_{\mathbf{t}}), y(\alpha_{\mathbf{t}})) = (x(\alpha_{\mathbf{t}'}), y(\alpha_{\mathbf{t}'}))$ if and only if $\mathbf{t} = \mathbf{t}'$ and so the colorings $\alpha_{\mathbf{t}}$ are all distinct. Similarly, setting $\alpha'_{\mathbf{t}} = \eta\rst{B'_{\mathbf{t}}}$ these colorings of $B$ are also distinct from one another.
Since there are $(n_i-n_i')(k_i - p)$ choices of $\mathbf{t}$, we have $\alpha_{\mathbf{t}} = \alpha'_{\mathbf{t}'}$ for some $\mathbf{t}$ and $\mathbf{t}'$: otherwise we would have $$2(n_i-n_i')(k_i - p) \ge \frac{4}{3}n_i (k_i - p) > \frac{4}{3} n_i\left(k_i - \frac{k_i}{4}\right) = n_i k_i$$ distinct $\eta$-colorings of $B$, a contradiction. However, since we assume that $x_1-x_0 = x_1'-x_0'$, we can have $\alpha_{\mathbf{t}} = \alpha'_{\mathbf{t}'}$ only if $\mathbf{t}= \mathbf{t}'$. Since $[x_0,x_0+n_i' - 1] \times [y_1 - p +1 , y_1] \subset B_{\mathbf{t}}$ for all $\mathbf{t}$, it follows that $\eta(x_0 + x,y_1 + y) = \eta(x_0'+x,y_1'+y)$ for $(x,y) \in [0,n_i'-1] \times [-p+1,0]$, as claimed. By the vertical periodicity assumptions, there exists $0 \le j \le p'$ such that $\eta(x_0 + x, y_0 + y) = \eta(x_0' + x, y_0' + y + j)$ for all $(x,y) \in [0, n_i' - 1] \times [0, h-p-1]$. Thus, for a pair $(x_0, y_0)$ such that~\ref{it:a1} and~\ref{it:b2} hold, $\eta\rst{S + (x_0, y_0)}$ is determined by \begin{itemize} \item the vertical period $p' \le p$ of $\eta\rst{R+(x_0, y_0)}$, \item the integer $x_1 - x_0 \in [0, n_1 - 1]$, and \item the integer $(y_1 - y_0) \mod p' \in [0, p-1]$. \end{itemize} Thus, there are at most $p^2n_1$ possibilities for $\eta\rst{S + (x_0, y_0)}$. Arguing similarly, we can bound the number of possibilities if~\ref{it:a2} and~\ref{it:b2} hold, and if~\ref{it:b3} holds, all independent of $w$ and $i$. Taking $C$ to be the sum of these bounds completes the proof. \end{proof} \begin{lemma} \label{trapezoid} Suppose $\eta\colon{\mathbb Z}^2\to\mathcal{A}$ satisfies $P_\eta(n_1,k_1)\leq n_1k_1$ for some $n_1, k_1\in{\mathbb N}$.
For any $k, m \in {\mathbb N}$ with $k > 14mn_1k_1$, any $\eta$-coloring of $[0,n_1-1]\times[0,k-1]$ either \begin{enumerate} \item[$(i)$] extends uniquely to an $\eta$-coloring of \begin{multline*} T_{m, k} = \\ {\mathbb Z}^2 \cap ([0,n_1-1] \times [0,k-1] \cup \{(i,j)\colon n_1 \le i \le m-1, k_1 (i-n_1) \le j \le k - k_1(i-n_1)\}), \end{multline*} or \item[$(ii)$] extends only to vertically periodic (with period independent of $k$ and $m$) colorings of $$B_{m ,k} = {\mathbb Z}^2 \cap [0,m-1] \times [7mn_1k_1, k - 7mn_1k_1] \subset T_{m,k}.$$ \end{enumerate} \end{lemma} \begin{proof} Fix an $\eta$-coloring $f$ of $T_{m,k}$. Let $\ell_1$ and $\ell_2$ be lines parallel to $\mathbf{e}_1$ and $\mathbf{e}_2$ respectively. By Lemma~\ref{balanced}, there exist balanced sets $\mathcal{S}_1, \mathcal{S}_2 \subset [0,n_1-1]\times[0,k_1-1]$ for $\ell_1$ and $\ell_2$ respectively, with the same width, which is at most $n_1$. Let $\tilde{\mathcal{S}_1} = \mathcal{S}_1 \setminus E(\ell_1, \mathcal{S}_1)$ and $\tilde{\mathcal{S}_2} = \mathcal{S}_2 \setminus E(\ell_2, \mathcal{S}_2)$. Suppose $(i)$ does not hold for $f$ restricted to $[0,n_1-1] \times [0, k - 1]$. Letting $d$ denote the smallest integer such that some translate of $\tilde{\mathcal{S}_1}$ is contained in $[0,d-1]\times {\mathbb Z}$, there exists $0 \le i \le m - d-1$ such that the restriction of $f$ to $W_i = T_{m,k} \cap ([i, i + d-1] \times {\mathbb Z})$ does not extend uniquely to an $\eta$-coloring of $W_i \cup W_{i+1}$. By Lemma~\ref{nonunique-periodic}, $f\rst{\partial_{\ell_1, \mathcal{S}_1, 2n_1k_1}(W_i)}$ is vertically periodic with period at most $|E(\ell_1, \mathcal{S}_1)| - 1 \le k_1$.
Since $\partial_{\ell_1, \mathcal{S}_1, 2n_1k_1}(W_i)$ contains the rectangle $R = [i, i+d-1]\times [mk_1+ 3n_1k_1, k - (mk_1 + 3n_1k_1)]$, by Lemma~\ref{extend-periodicity} we have that $f$ is vertically periodic, with period at most $(2|E(\ell_1, \mathcal{S}_1)| - 1)!$, on $\operatorname{ext}_{\ell_1,\mathcal{S}_1, 2n_1k_1}^n(R)$ and $\operatorname{ext}_{\ell_2, \mathcal{S}_2, 2n_1k_1}^n(R)$ for each $n \in {\mathbb N}$. Since $B_{m,k}$ is a subset of the union of these two sets for $n = m$, $(ii)$ follows. \end{proof} We are now ready to complete the proof of the main theorem: \begin{proof}[Proof of Theorem~\ref{trichotomy}] By Corollary~\ref{exponential-eccentricity} and Remark~\ref{exponential-eccentricity-remark}, without loss of generality we can assume that there exists $C > 1$ such that $k_i \ge C^{n_i}$ for all $i \in {\mathbb N}$. In particular, $$\dfrac{\log(P_\eta(n_i,k_i))}{k_i} \le \dfrac{2\log(k_i)}{k_i} \to 0.$$ Thus by Proposition~\ref{convexentropy}, $h(\mathbf{e}_2) = 0$. If $\mathbf{e}_2$ is an expansive direction, then by Theorem 6.3, Part (4) of~\cite{BL}, the directional entropy is zero in all directions. Thus we can assume that $\mathbf{e}_2$ is nonexpansive, and we are left with showing that this is the unique nonexpansive direction and $\eta$ is periodic in this direction. For any $m, k \in {\mathbb N}$, the complexity of $B_{m,k}$ can be written as the sum of the number of colorings of $[0,n_1-1]\times[0,k-1]$ that extend uniquely to $T_{m, k}$ (and therefore extend uniquely to the smaller set $B_{m,k}$) plus the number of colorings of $B_{m,k}$ that do not arise as the unique extension of a coloring of the rectangle $[0,n_1-1] \times [0,k-1]$. The number of colorings of the first type is clearly bounded above by $P_\eta(n_1,k)$. By Lemma~\ref{trapezoid}, each of the colorings of $B_{m,k}$ of the latter type is vertically periodic with period independent of $m$ and $k$.
Applying Lemma~\ref{few-periodic}, we see that for sufficiently large $m$ and $k$, the number of such colorings is at most $(C^{1/8})^m$. Set $k = k_i$ and $m = 8n_i$, with $i$ chosen large enough that this bound holds and that $112 n_i n_1 k_1 \le k_i/2$. Then the number of such colorings is at most $$(C^{1/8})^m \le n_i C^{n_i} \le n_i k_i.$$ Thus $$P_\eta(B_{m, k}) \le P_\eta(n_1, k_i) + (C^{1/8})^m \le P_\eta(n_i, k_i) + n_ik_i \le 2n_ik_i.$$ But by the choice of $i$, $$|B_{m,k}| = 8n_i (k_i - 14(8n_i)n_1k_1) \ge 8n_i (k_i/2) = 4n_ik_i,$$ and so $P_\eta(B_{m,k}) \le \dfrac{|B_{m,k}|}{2}$. Hence, by Theorems 1.4 and 1.5 in \cite{CK}, the vertical direction is the unique nonexpansive direction for $\eta$, and $\eta$ is periodic. \end{proof} \section{Further directions} We conjecture a stronger result than Theorem~\ref{trichotomy}, namely that it holds under the same assumption as that in Nivat's Conjecture: \begin{conjecture} For $\eta\colon{\mathbb Z}^2\to\mathcal{A}$, if there exist $n,k\in {\mathbb N}$ such that $P_\eta(n,k)\leq nk$, then the directional entropy of every nonexpansive direction of $X_\eta$ is zero. \end{conjecture} If this conjecture fails, any counterexample to it would also be a counterexample to Nivat's Conjecture, and if it holds, it provides further evidence in favor of that conjecture. A weaker conjecture would be that under the same hypothesis, $X_\eta$ has some direction with zero directional entropy. Both statements follow from Theorem~\ref{trichotomy} under the stronger assumptions on the complexity. Alternately, it is likely easier to show that a generalization of Nivat's Conjecture holds under a stronger complexity assumption (recall Notation~\ref{notation:thick}): \begin{conjecture} If there exist $K_i \subset \mathbb{R}^2$ compact and convex with $\lim_{i \to \infty} \frac{\log P_\eta(K_i)}{\tau_\mathbf{u}(K_i)} = 0$, then $\eta$ is periodic.
\end{conjecture} Closely related, we ask: \begin{question} \label{isolatedzero} Suppose that $\eta$ has an isolated, rational direction of zero directional entropy and that $P_\eta(n,k) \le nk$. Must $\eta$ be periodic? \end{question} The following example shows that if the complexity assumption is removed, then the answer is no: \begin{example} Let $\alpha\colon {\mathbb Z}\to \{0,1\}$ be such that $P_{\alpha}(n) = 2^n$ for all $n \in {\mathbb N}$. Define $\eta \colon {\mathbb Z}^2 \to \{0,1,2,3\}$ by $\eta(n,k) = \alpha(n)$ for each $k \neq 0$, and $\eta(n,0) = \alpha(n) + 2$. Then for the ${\mathbb Z}^2$ action on $X_\eta$ by translations, $h(\mathbf{e}_2) = h(-\mathbf{e}_2) = 0$, but $h(\mathbf{u}) > 0$ for all other unit vectors $\mathbf{u}$. \end{example} \begin{proof} We first prove $h(\mathbf{e}_2) = 0$ (the proof that $h(-\mathbf{e}_2) = 0$ is analogous). Fix $t > 0$. There are $2^{2t+1}$ $\alpha$-colorings of $[-t,t]$. For each of these $\alpha$-colorings $f$, there are at most $s+2t +2$ $\eta$-colorings of $[-t,t] \times [-t,s+t]$ for which $\eta(i,j) = f(i)$ for some $-t \le j \le s+t$. Hence, $$P_\eta([0,s\mathbf{e}_2]^{(t)}) \le P_\eta([-t,t] \times [-t,s+t]) \le 2^{2t+1}(s+2t+2).$$ Thus, $$\overline{\lim_{s\to \infty}} \frac{\log P_\eta([0,s\mathbf{e}_2]^{(t)})}{s} = \overline{\lim_{s\to \infty}} \frac{(2t+1)\log 2 + \log(s+2t+2)}{s} = 0.$$ By Lemma~\ref{entropycomplex}, it follows that $h(\mathbf{e}_2) = 0$. For $\mathbf{u} \neq \pm \mathbf{e}_2$, let $m = \frac{1}{\|\operatorname{proj}_{\mathbf{e}_1} \mathbf{u}\|}$ where $\operatorname{proj}_{\mathbf{v}}$ is the projection onto the direction $\mathbf{v}$. By assumption $m < \infty$. Then $\|\operatorname{proj}_{\mathbf{e}_1} m\mathbf{u}\| = 1$, and so $[0,ms\mathbf{u}]^{(1)} \cap \{i\} \times {\mathbb Z} \neq \emptyset$ for each $0 \le i \le s$.
For any set $K \subset \mathbb{R}^2$, if $\vert\{i \in {\mathbb Z} \colon K \cap \{i\} \times {\mathbb Z}\neq \emptyset\} \vert= k_1$ and $\vert\{j \in {\mathbb Z} \colon K \cap {\mathbb Z} \times \{j\}\neq \emptyset\}\vert = k_2$, then $P_\eta(K) = (k_2+1)2^{k_1}$. Hence, \begin{equation*} \overline{\lim_{s\to \infty}} \frac{\log P_\eta([0,s\mathbf{u}]^{(t)})}{s} \ge \overline{\lim_{s\to \infty}} \frac{\log(2^{(s+2t)/m})}{s} = \frac{1}{m}. \hfill \qedhere \end{equation*} \end{proof} Finally, we can ask how much of this holds in higher dimensions. While there are examples~\cite{ST2} showing that the analog of Nivat's Conjecture is false for dimension $d\geq 3$, it is possible that the results on directional entropy generalize. We remark that Cassaigne~\cite{Cas} constructed aperiodic examples in higher dimensions which satisfy the higher dimensional analog of the complexity assumptions used in our results. These all have zero directional entropy in all directions, and so do not rule out a higher dimensional version of our theorem: \begin{question} Does the analog of Theorem~\ref{trichotomy} hold for $\eta\colon{\mathbb Z}^d\to\mathcal{A}$, where $d\geq 3$? \end{question}
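The window count $P_\eta(K) = (k_2+1)2^{k_1}$ used in the example above can be verified by direct enumeration. The following is a minimal sketch of our own (not part of the paper): the Champernowne-style binary word is merely a stand-in for $\alpha$, an assumption that gives full complexity at the small scales sampled here, whereas the example only requires $P_\alpha(n) = 2^n$.

```python
# stand-in for a full-complexity alpha: the binary Champernowne word
alpha = "".join(bin(m)[2:] for m in range(1, 400))

def eta(n, k):
    # the Example's configuration: row k = 0 is marked by adding 2
    a = int(alpha[n])
    return a + 2 if k == 0 else a

k1, k2 = 3, 2
colorings = set()
# columns [i, i+k1-1]; rows [j, j+k2-1], where j = -1, 0 place the marked
# row 0 at the two possible heights inside the window and j = 2 misses it
for i in range(len(alpha) - k1):
    for j in (-1, 0, 2):
        colorings.add(tuple(tuple(eta(i + x, j + y) for x in range(k1))
                            for y in range(k2)))

print(len(colorings), (k2 + 1) * 2**k1)  # 24 24
```

The $2^{k_1}$ factor counts the $\alpha$-words visible in the window, and the factor $k_2+1$ records where the marked row sits in the window, or that it is absent.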
https://arxiv.org/abs/1409.5056
Complexity and directional entropy in two dimensions
We study the directional entropy of the dynamical system associated to a $\Z^2$ configuration in a finite alphabet. We show that under local assumptions on the complexity, either every direction has zero topological entropy or some direction is periodic. In particular, we show that all nonexpansive directions in a $\Z^2$ system with the same local assumptions have zero directional entropy.
https://arxiv.org/abs/1808.09520
On a Conjecture of Ashbaugh and Benguria about Lower Eigenvalues of the Neumann Laplacian
In this paper, we prove an isoperimetric inequality for lower order eigenvalues of the free membrane problem on bounded domains of a Euclidean space or a hyperbolic space which strengthens the well-known Szegö-Weinberger inequality and supports strongly an important conjecture of Ashbaugh-Benguria.
\section{Introduction} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} Let $(M, g)$ be a complete Riemannian manifold of dimension $n$, $n\geq 2$. We denote by $\D $ the Laplace operator on $M$. For a bounded domain $\om$ with smooth boundary in $M$ we consider the free membrane problem \be\label{1.1}\left\{\begin{array}{l} \Delta f = -\mu f \ \ \ {\rm in \ }\ \ \om,\\ \fr{\pa f}{\pa\nu} = 0 \ \ \ \ \ \ \ \ {\rm on \ }\ \pa\om. \end{array}\right. \en Here $\fr{\pa }{\pa\nu}$ denotes the outward unit normal derivative on $\pa\om$. It is well known that the problem (\ref{1.1}) has a discrete spectrum consisting of a sequence \be \no \mu_0=0<\mu_1\leq \mu_2\leq\cdots \rightarrow +\infty.\en In the two dimensional case, G. Szeg\"o \cite{s} proved via conformal mapping techniques that if $\om\subset \R^2 $ is simply connected, then \be\label{in1} \mu_1(\om)A(\om)\leq \left.(\mu_1 A)\right|_{disk}= \pi p_{1,1}^2 \en where $A$ denotes the area. Later, using more general methods, Weinberger \cite{w} showed that (\ref{in1}) and its $n$-dimensional analogue, \be \label{int2} \mu_1(\om)\leq \left(\fr{\omega_n}{|\om|}\right)^{2/n} p_{n/2,1}^2,\en hold for arbitrary domains in $\R^2$ and $\R^n$, respectively. Here $J_v$ is the Bessel function of the first kind of order $v$, $p_{v,k}$ is the $k$th positive zero of the derivative of $x^{1-v}J_v(x)$ and $|\om|$ denotes the volume of $\om$. Szeg\"o and Weinberger also noticed that Szeg\"o's proof of (\ref{in1}) for simply connected domains in $\R^2$ extends to prove the bound \be\label{in3}\fr 1{\mu_1}+\fr 1{\mu_2}\geq \fr{2A}{\pi p_{1,1}^2}\en for such domains. The bounds of Szeg\"o and Weinberger are isoperimetric with equality if and only if $\om$ is a disk ($n$-dimensional ball in the case of Weinberger's result (\ref{int2})).
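For $n = 2$ the constant can be computed directly: $p_{1,1}$ is the first positive zero of $\frac{d}{dx}J_1(x) = J_0(x) - J_1(x)/x$. The following short numerical sketch (our own illustration, not part of the paper) locates it by bisection using the power series of $J_v$:

```python
from math import factorial

def J(v, t, terms=40):
    # power series J_v(t) = sum_k (-1)^k (t/2)^(2k+v) / (k! (k+v)!), integer v
    return sum((-1)**k * (t/2)**(2*k + v) / (factorial(k) * factorial(k + v))
               for k in range(terms))

def j1_prime(t):
    # J_1'(t) = J_0(t) - J_1(t)/t
    return J(0, t) - J(1, t)/t

lo, hi = 1.5, 2.5        # J_1' changes sign on this interval
for _ in range(60):      # bisection
    mid = (lo + hi)/2
    if j1_prime(lo) * j1_prime(mid) <= 0:
        hi = mid
    else:
        lo = mid
p11 = (lo + hi)/2

print(round(p11, 6))     # 1.841184
print(round(p11**2, 4))  # 3.39, the sharp Neumann bound for the unit disk
```

For the unit disk $|\om| = \omega_2 = \pi$, so the Szeg\"o-Weinberger bound reduces to $\mu_1 \le p_{1,1}^2 \approx 3.39$, attained with equality.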
A quantitative improvement of the Szeg\"o-Weinberger inequality (\ref{int2}) was made by Brasco and Pratelli in \cite{bp}, who showed that for any bounded domain with smooth boundary $\om\subset\R^n$ we have \be \omega_{n}^{2/n}p_{n/2,1}^2 -\mu_1(\om)|\om|^{2/n}\geq c(n)\mathcal{A}(\om)^2.\en Here, $c(n)$ is a positive constant depending only on $n$ and ${\cal A}(\om)$ is the so-called {\it Fraenkel asymmetry}, defined by $$ {\cal A}(\om)=\inf\left\{\fr{|\om\Delta B|}{|\om|}: \ B\ {\rm ball\ in \ \mathbb{R}^n\ such\ that\ } |B|=|\om|\right\}. $$ Nadirashvili obtained in \cite{n} a quantitative improvement of (\ref{in3}) which states that there exists a constant $C>0$ such that for every smooth simply connected bounded open set $\om\subset\R^2$ it holds that \be \fr 1{|\om|}\left(\fr 1{\mu_1(\om)}+\fr 1{\mu_2(\om)}\right) -\fr 1{|B|}\left(\fr 1{\mu_1(B)}+\fr 1{\mu_2(B)}\right)\geq \fr 1C{\cal A}(\om)^2, \en where $B$ is any disk in $\R^2$. On the other hand, Ashbaugh and Benguria \cite{ab1} showed that \be \label{int3} \fr 1{\mu_1(\om)}+\cdots +\fr 1{\mu_n(\om)} \geq \fr n{n+2}\left(\fr{|\om|}{\omega_n}\right)^{2/n} \en holds for any $\om\subset\R^n$. Some generalizations of (1.7) have been obtained, e.g., in \cite{hx} and \cite{x}. In \cite{ab1}, Ashbaugh and Benguria also proposed the following important conjecture. \vskip0.3cm {\bf Conjecture I }(\cite{ab1}). {\it For any bounded domain $\om$ with smooth boundary in $\R^n$, we have \be \fr 1{\mu_1(\om)}+\fr 1{\mu_2(\om)}+\cdots +\fr 1{\mu_n(\om)} \geq \fr{n\left(|\om|/\omega_n\right)^{2/n}}{ p_{n/2,1}^2} \en with equality holding if and only if $\om$ is a ball in $\R^n$. } \vskip0.3cm Ashbaugh \cite{a} and Henrot \cite{h} mentioned this conjecture again. A more general conjecture might be true. That is, for any bounded domain $\om$ with smooth boundary in $\R^n$, it would hold that \be\no \mu_n(\om)\leq \left(\fr{\omega_n}{|\om|}\right)^{2/n} p_{n/2,1}^2,\en with equality holding if and only if $\om$ is a ball in $\R^n$.
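Since the right-hand sides of (1.7) and the bound in Conjecture I differ only in the factors $n/(n+2)$ versus $n/p_{n/2,1}^2$, the conjecture is at least as strong as (1.7) exactly when $p_{n/2,1}^2 \le n+2$; this indeed holds, since applying (1.7) to a ball (where the left side equals $n/p_{n/2,1}^2$) forces it. A small numerical sketch of our own for $n = 3$, using the classical closed form $J_{3/2}(t) = \sqrt{2/(\pi t)}\,(\sin t/t - \cos t)$:

```python
from math import sin, cos

# n = 3: t^{1-n/2} J_{n/2}(t) = t^{-1/2} J_{3/2}(t) is proportional to the
# spherical Bessel function j_1(t) = sin t / t^2 - cos t / t.
def dg3(t):
    # derivative of j_1(t)
    return sin(t)/t + 2*cos(t)/t**2 - 2*sin(t)/t**3

lo, hi = 1.5, 2.5   # dg3 changes sign on this interval
for _ in range(60):
    mid = (lo + hi)/2
    if dg3(lo) * dg3(mid) <= 0:
        hi = mid
    else:
        lo = mid
p3 = (lo + hi)/2            # p_{3/2,1}

p2 = 1.8411837813           # p_{1,1} for n = 2 (first zero of J_1')
print(round(p3, 4))                  # 2.0816
print(p2**2 < 2 + 2, p3**2 < 3 + 2)  # True True
```

So $p_{1,1}^2 \approx 3.39 < 4$ and $p_{3/2,1}^2 \approx 4.33 < 5$, consistent with the conjecture strengthening (1.7) in these dimensions.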
The Szeg\"o-Weinberger inequality (\ref{int2}) has been generalized to bounded domains in a hyperbolic space by Ashbaugh-Benguria \cite{ab2} and Xu \cite{x} independently. In his book, Chavel \cite{c} mentioned that one can use Weinberger's method to prove this result. In \cite{ab2}, Ashbaugh-Benguria also proved the Szeg\"o-Weinberger inequality for bounded domains in a hemisphere. One can also consider similar estimates for lower order eigenvalues of the Neumann Laplacian for bounded domains in a hyperbolic space or a hemisphere. \vskip0.3cm {\bf Conjecture II. } {\it Let $M$ be an $n$-dimensional complete simply connected Riemannian manifold of constant sectional curvature $\kappa\in\{-1, 1\}$ and $\om$ be a bounded domain in $M$ which is contained in a hemisphere in the case that $\kappa=1$. Let $B_{\om}$ be a geodesic ball in $M$ such that $|\om|=|B_{\om}|$ and denote by $\mu_1(B_{\om})$ the first nonzero eigenvalue of the Neumann Laplacian of $B_{\om}$. Then the first $n$ non-zero eigenvalues of the Neumann Laplacian of $\om$ satisfy \be \label{conj2.1} \fr 1{\mu_1(\om)}+\fr 1{\mu_2(\om)}+\cdots +\fr 1{\mu_n(\om)} \geq \fr{n}{\mu_1(B_{\om})} \en with equality holding if and only if $\om$ is isometric to $B_{\om}$.} \vskip0.3cm In this paper, we prove an isoperimetric inequality for the sums of the reciprocals of the first $(n-1)$ non-zero eigenvalues of the Neumann Laplacian on bounded domains in $\R^n$ or a hyperbolic space which supports the above conjectures. \begin{theorem}\label{th1} Let $\om$ be a bounded domain with smooth boundary in $\R^n$. Then \be \fr 1{\mu_1(\om)}+\cdots +\fr 1{\mu_{n-1}(\om)} \geq \fr{(n-1)\left(|\om|/\omega_n\right)^{2/n}}{ p_{n/2,1}^2} \en with equality holding if and only if $\om$ is a ball in $\R^n$. \end{theorem} \begin{theorem}\label{th2} Let $\mathbb{H}^n$ be an $n$-dimensional hyperbolic space of curvature $-1$ and $\om$ be a bounded domain in $\mathbb{H}^n$.
Let $B_{\om}$ be a geodesic ball in $\mathbb{H}^n$ such that $|\om|=|B_{\om}|$. Then we have \be \fr 1{\mu_1(\om)}+\cdots +\fr 1{\mu_{n-1}(\om)} \geq \fr{n-1}{\mu_1(B_{\om})} \en with equality holding if and only if $\om$ is isometric to $B_{\om}$. \end{theorem} \section{A proof of Theorem 1.1.} \setcounter{equation}{0} In this section, we shall prove the following result which implies Theorem \ref{th1}. \begin{theorem}\label{th2.1} Let $\om$ be a bounded domain with smooth boundary in $\R^n$. There exists a positive constant $d(n)$ depending only on $n$ such that the first $(n-1)$ nonzero Neumann eigenvalues of the Laplacian of $\om$ satisfy the inequality \be \omega_{n}^{2/n}p_{n/2,1}^2 -\fr{(n-1)|\om|^{2/n}}{\fr 1{\mu_1}+\cdots + \fr 1{\mu_{n-1}}}\geq d(n)\mathcal{A}(\om)^2,\en with equality holding if and only if $\om$ is an $n$-ball. \end{theorem} {\bf Remark.} One can easily see that (2.1) strengthens (1.5). \vskip0.2cm Before proving Theorem \ref{th2.1}, we recall some known facts we need (cf. \cite{c}, \cite{h}, \cite{sy}). Let $\{u_j\}_{j=0}^{\infty}$ be an orthonormal set of eigenfunctions of the problem (\ref{1.1}), that is, \be\label{pth1.6} \left\{\begin{array}{l} \Delta u_i= -\mu_i u_i \ \ \ {\rm in} \ \ \ \om,\\ \left.\fr{\pa u_i}{\pa\nu}\right|_{\pa\om}=0,\\ \int_{\om} u_i u_j dv_g=\delta_{ij}, \end{array}\right. \en where $dv_g$ denotes the volume element of the metric $g$. For each $i=1,2,\cdots,$ the variational characterization of $\mu_i(\om)$ is given by \be\label{pth0.5} \mu_i(\om)=\inf_{u\in H^1(\om)\setminus\{0\}}\left\{\fr{\int_{\om}|\na u|^2 dv_g}{\int_{\om} u^2 dv_g}: \int_{\om} uu_j dv_g=0, j=0,\cdots, i-1\right\}. \en Let $B_r$ be a ball of radius $r$ centered at the origin in $\R^n$. It is known that $\mu_1(B_r)$ has multiplicity $n$, that is, $\mu_1(B_r)=\cdots =\mu_{n}(B_r)$. This value can be explicitly computed together with its corresponding eigenfunctions.
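For $n = 2$ and $r = 1$ the value $\mu_1(B_1) = p_{1,1}^2$ can be confirmed numerically: the Rayleigh quotient of the radial eigenfunction $g(t) = J_1(p_{1,1}t)$ recalled below must reproduce the eigenvalue. A sketch of our own (taking $p_{1,1} \approx 1.8412$ as given and evaluating the integrals by Simpson's rule):

```python
from math import factorial

P = 1.8411837813  # p_{1,1}, the first positive zero of J_1'

def J(v, t, terms=40):
    # power series of the Bessel function J_v (integer v >= 0)
    return sum((-1)**k * (t/2)**(2*k + v) / (factorial(k) * factorial(k + v))
               for k in range(terms))

def g(t):
    return J(1, P*t)                          # radial part for n = 2, r = 1

def gp(t):
    return P*(J(0, P*t) - J(1, P*t)/(P*t))    # g'(t), via J_1' = J_0 - J_1/t

def num(t):   # numerator integrand (with area weight t); limit 0 as t -> 0
    return 0.0 if t == 0 else (gp(t)**2 + (g(t)/t)**2) * t

def den(t):   # denominator integrand (with area weight t)
    return 0.0 if t == 0 else g(t)**2 * t

def simpson(f, a, b, n=2000):                 # composite Simpson rule, n even
    h = (b - a)/n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
    return s*h/3

ratio = simpson(num, 0, 1) / simpson(den, 0, 1)
print(round(ratio, 4), round(P**2, 4))  # both 3.39
```

The quotient agrees with $p_{1,1}^2$ to the accuracy of the quadrature, as expected for an exact eigenfunction.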
A basis for the eigenspace corresponding to $\mu_1(B_r)$ consists of \be\label{pth1.2} \xi_i(x)=|x|^{1-\fr n2} J_{n/2}\left(\fr{p_{n/2, 1}|x|}r\right)\fr{x_i}{|x|}, \ \ i=1,\cdots, n. \en The radial part of $\xi_i$, \be\label{pth1.3} g(|x|)=|x|^{1-\fr n2} J_{n/2}\left(\fr{p_{n/2, 1}|x|}r\right), \en satisfies the differential equation of Bessel type \be\label{pth1.4}\left\{\begin{array}{l} g^{\prime\prime}(t)+\fr{n-1}t g^{\prime}(t)+\left(\mu_1(B_r)-\fr{n-1}{t^2}\right)g(t)=0,\\ g(0)=0, \ \ g^{\prime}(r)=0. \end{array}\right. \en We can compute \be \label{pth1.52} \mu_1(B_r)&=& \fr{\int_{B_r}\left( g^{\prime}(|x|)^2 +(n-1)\fr{g(|x|)^2}{|x|^2}\right)dx}{\int_{B_r}g(|x|)^2 dx} \\ \no &=& \left(\fr{p_{n/2, 1}}r\right)^2. \en {\it Proof of Theorem \ref{th2.1}.} Let \be\label{pth1.51} r=\left(\fr{|\om|}{\omega_n}\right)^{1/n}\en and define $G: [0, +\infty)\ri\R$ by \be G(t)=\left\{\begin{array}{l} g(t), \ t\leq r,\\ g(r), \ t > r. \end{array} \right. \en We need to choose suitable trial functions $\phi_i$ for each of the eigenfunctions $u_i$ and ensure that these are orthogonal to the preceding eigenfunctions $u_0,\cdots, u_{i-1}$. For the $n$ trial functions $\phi_1, \phi_2, \cdots, \phi_n,$ we choose: \be \phi_i= G(|x|)\fr{x_i}{|x|}, \ \ {\rm for}\ \ i=1,\cdots, n, \en but before we can use these we need to make adjustments so that \be \phi_i \perp{\rm span}\{u_0,\cdots, u_{i-1}\} \en in $L^2(\om)$. In order to do this, let us fix an orthonormal basis $\{e_i\}_{i=1}^n$ of $\R^n$. From the well-known argument of Weinberger \cite{w} using the Brouwer fixed point theorem, we know that it is always possible to choose the origin of $\R^n$ so that \be \label{pth1.7} \int_{\om}\langle x, e_i\rangle\fr{G(|x|)}{|x|} dx =0,\ \ i=1,\cdots, n,\en that is, $ \langle x, e_i\rangle\fr{G(|x|)}{|x|}\perp u_0$ (which is actually just the constant function $1/\sqrt{|\om|}$).
Here $dx$ and $\langle , \rangle$ denote the standard Lebesgue measure and the inner product of $\R^n$, respectively. Now we show that there exists a new orthonormal basis $\{e_i^{\prime}\}_{i=1}^n$ of $\R^n$ such that \be\label{pth1.8} \langle x, e_i^{\prime}\rangle\fr{G(|x|)}{|x|}\perp u_j, \en for $j=1,\cdots, i-1$ and $i=2,\cdots, n$. To see this, we define an $n \times n$ matrix $Q=\left(q_{ij}\right)$ by \be q_{ij}=\int_{\om} \langle x, e_i\rangle\fr{G(|x|)}{|x|} u_j(x) dx, \ i,j=1,2,\cdots,n.\en Using Gram-Schmidt orthogonalization (the QR factorization theorem), we know that there exist an upper triangular matrix $T=(T_{ij})$ and an orthogonal matrix $U=(a_{ij})$ such that $T=UQ$, i.e., \begin{eqnarray*} T_{ij}=\sum_{k=1}^n a_{ik}q_{kj}=\int_{\om} \sum_{k=1}^n a_{ik}\langle x, e_k\rangle\fr{G(|x|)}{|x|} u_j(x) dx =0,\ \ 1\leq j<i\leq n. \end{eqnarray*} Letting $e_i^{\prime}=\sum_{k=1}^n a_{ik}e_k, \ i=1,\cdots,n$, we arrive at (\ref{pth1.8}). Let us denote by $x_1, x_2,\cdots, x_n$ the coordinate functions with respect to the basis $\{e_i^{\prime}\}_{i=1}^n$, that is, $x_i=\langle x, e_i^{\prime}\rangle, \ x\in\R^n$. From (\ref{pth1.7}) and (\ref{pth1.8}), we have \be\label{pth1.9}\int_{\om}\phi_i u_j dx= \int_{\om} G(|x|)\fr{x_i}{|x|} u_j(x) dx=0, \ i=1,\cdots, n, \ j=0,\cdots, i-1. \en It then follows from the variational characterization (\ref{pth0.5}) that \begin{eqnarray}\label{pth1.11} \mu_{i}\int_{\om} \phi_i^2dx \leq\int_{\om} |\na\phi_i|^2 dx,\ i=1,\cdots,n.
\en Substituting \be\label{p} |\na \phi_i|^2&=& G^{\prime}(|x|)^2\fr{x_i^2}{|x|^2}+\fr{G(|x|)^2}{|x|^2}\left(1-\fr{x_i^2}{|x|^2}\right)\\ \no &=& \fr{G(|x|)^2}{|x|^2}+\left(G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\right)\fr{x_i^2}{|x|^2} \en into (\ref{pth1.11}) and dividing by $\mu_i$, one gets for $i=1,\cdots,n$ that \be\label{pth1.12} \int_{\om}\phi_i^2 dx\leq\fr 1{\mu_i}\int_{\om} \fr{G(|x|)^2}{|x|^2}dx + \fr 1{\mu_i}\int_{\om} \left(G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\right)\fr{x_i^2}{|x|^2}dx. \en Summing over $i$, we get \be\label{d7}\int_{\om} G(|x|)^2 dx &\leq& \sum_{i=1}^n \fr 1{\mu_i}\int_{\om}\fr{G(|x|)^2}{|x|^2}dx \\ \no & & +\sum_{i=1}^n \fr 1{\mu_i}\int_{\om} \left(G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\right)\fr{x_i^2}{|x|^2}dx. \en Since \be \sum_{i=1}^n\fr 1{\mu_i}\fr{x_i^2}{|x|^2}&=& \sum_{i=1}^{n-1}\fr 1{\mu_i}\fr{x_i^2}{|x|^2}+ \fr 1{\mu_n} \fr{x_n^2}{|x|^2}\\ \no &=& \sum_{i=1}^{n-1}\fr 1{\mu_i} \fr{x_i^2}{|x|^2}+ \fr 1{\mu_n}\left(1-\sum_{i=1}^{n-1} \fr{x_i ^2}{|x|^2}\right), \en we have \be & &\sum_{i=1}^n \fr 1{\mu_i}\int_{\om} \left(G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\right)\fr{x_i^2}{|x|^2}dx \\ \no &=& \sum_{i=1}^{n-1} \fr 1{\mu_i}\int_{\om} \left(G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\right)\fr{x_i^2}{|x|^2}dx \\ \no & & + \fr 1{\mu_n}\int_{\om} \left(G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\right)dx \\ \no & & - \fr 1{\mu_n}\int_{\om} \left(G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\right)\sum_{i=1}^{n-1}\fr{x_i^2}{|x|^2} dx \\ \no&=& \sum_{i=1}^{n-1}\int_{\om}\left(\fr 1{\mu_i}-\fr 1{\mu_n}\right)\left(G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\right)\fr{x_i^2}{|x|^2}dx \\ \no & & + \fr 1{\mu_n}\int_{\om} \left(G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\right) dx.\en \begin{lemma}\label{le1} We have $g^{\prime}|_{[0, r)}>0$, $g|_{(0, r]}>0$ and $g^{\prime}(t)-\fr{g(t)} t\leq 0, \ \forall t\in (0, r].$ \end{lemma} {\it Proof of Lemma \ref{le1}.} The Bessel function of the first kind $J_v(t)$ is given by \be 
J_v(t)=\sum_{k=0}^{+\infty}\fr{(-1)^k\left(\fr t2\right)^{2k+v}}{k!\Gamma(k+v+1)}, \en which, combining with (\ref{pth1.3}), gives \be\label{le1.2} g(t) =\left(\fr{p_{n/2, 1}}{2r}\right)^{\fr n2}t\sum_{k=0}^{+\infty}\fr{(-1)^k\left(\fr{p_{n/2, 1}}{2r}t\right)^{2k}}{k!\Gamma(k+\fr n2 +1)}. \en Thus, $g(0)=0, g^{\prime}(0)>0$. Since $r$ is the first positive zero of $g^{\prime}$, we have $g|_{(0, r]}>0$ and $g^{\prime}|_{[0, r)}>0.$ Observe that \be\label{le1.21} \ \lim_{t\ri 0}\left(g^{\prime}(t)-\fr{g(t)}t\right)=0, \ \ g^{\prime}(r)-\fr{g(r)}r<0. \en Let us assume by contradiction that there exists a $t_0\in (0, r)$ such that \be g^{\prime}(t_0)-\fr{g(t_0)}{t_0}>0.\en In this case, we know from (\ref{le1.21}) that the function $g^{\prime}(t)-\fr{g(t)}t$ attains its maximum at some $t_1\in (0, r)$ and so we have \be \label{le1.4} g^{\prime\prime}(t_1)-\fr{t_1g^{\prime}(t_1)-g(t_1)}{t_1^2}=0. \en From (2.6), we have \be\label{le1.5} g^{\prime\prime}(t_1)+\fr{n-1}{t_1} g^{\prime}(t_1)+\left(\mu_1(B_r)-\fr{n-1}{t_1^2}\right)g(t_1)=0. \en Eliminating $g^{\prime\prime}(t_1)$ from (\ref{le1.4}) and (\ref{le1.5}), we get \be \fr n{t_1}\left(g^{\prime}(t_1)-\fr{g(t_1)}{t_1}\right) = -\mu_1(B_r)g(t_1)<0. \en This is a contradiction and completes the proof of Lemma~\ref{le1}. From Lemma~\ref{le1} and the definition of $G$, we know that \be G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\leq 0 \ \ \ {\rm on\ \ }\om. \en Hence \be \sum_{i=1}^{n-1}\int_{\om}\left(\fr 1{\mu_i}-\fr 1{\mu_n}\right)\left(G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\right)\fr{x_i^2}{|x|^2}dx\leq 0.
\en Combining (2.19), (2.21) and (2.30), one gets \be\int_{\om} G(|x|)^2 dx &\leq& \fr 1{\mu_n}\int_{\om} \left(G^{\prime}(|x|)^2-\fr{G(|x|)^2}{|x|^2}\right)dx\\ \no & & +\sum_{i=1}^{n} \fr 1{\mu_i}\int_{\om}\fr{G(|x|)^2}{|x|^2}dx \\ \no &=& \fr 1{\mu_n}\int_{\om} G^{\prime}(|x|)^2 dx +\sum_{i=1}^{n-1} \fr 1{\mu_i}\int_{\om}\fr{G(|x|)^2}{|x|^2}dx \\ \no &\leq& \fr 1{n-1} \sum_{i=1}^{n-1}\fr 1{\mu_i}\int_{\om}\left(G^{\prime}(|x|)^2 +(n-1)\fr{G(|x|)^2}{|x|^2}\right)dx, \en that is, \be \fr {n-1}{\sum_{i=1}^{n-1}\fr 1{\mu_i} }\int_{\om} G(|x|)^2 dx \leq \int_{\om}\left(G^{\prime}(|x|)^2 +(n-1)\fr{G(|x|)^2}{|x|^2}\right)dx. \en Using the fact that $G(t)$ is nondecreasing, one gets \be \label{th1.9} \int_{\om} G(|x|)^2dx &=& \int_{\om\cap B_r} G(|x|)^2dx+\int_{\om\setminus B_r} G(|x|)^2dx\\ \no &\geq & \int_{\om\cap B_r} G(|x|)^2dx+ g(r)^2|\om\setminus B_r| \\ \no &=& \int_{\om\cap B_r} g(|x|)^2dx+ g(r)^2|B_r\setminus \om| \\ \no &\geq & \int_{\om\cap B_r} g(|x|)^2dx+\int_{B_r\setminus \om} g(|x|)^2dx \\ \no &=& \int_{B_r}g(|x|)^2dx, \en which, combining with (2.32), gives \be \fr {n-1}{\sum_{i=1}^{n-1}\fr 1{\mu_i} }\int_{B_r}g(|x|)^2dx \leq \int_{\om}\left(G^{\prime}(|x|)^2 +(n-1)\fr{G(|x|)^2}{|x|^2}\right)dx. \en We know from (2.7) that \be \left(\fr{p_{n/2,1}}{r}\right)^2\int_{B_r}g(|x|)^2dx &=& \int_{B_r}\left(g^{\prime}(|x|)^2 +(n-1)\fr{g(|x|)^2}{|x|^2}\right)dx \\ \no &=& \int_{B_r}\left(G^{\prime}(|x|)^2 +(n-1)\fr{G(|x|)^2}{|x|^2}\right)dx.\en Consequently, we have \be & & \left(\left(\fr{p_{n/2,1}}{r}\right)^2-\fr {n-1}{\sum_{i=1}^{n-1}\fr 1{\mu_i} } \right)\int_{B_r}g(|x|)^2dx \\ \no &\geq& \int_{B_r} \left(G^{\prime}(|x|)^2 +(n-1)\fr{G(|x|)^2}{|x|^2}\right)dx-\int_{\om}\left(G^{\prime}(|x|)^2 +(n-1)\fr{G(|x|)^2}{|x|^2}\right)dx. \en We have \be \no\fr d{dt}\left[G^{\prime}(t)^2+(n-1)\fr{G(t)^2}{t^2}\right]=2G^{\prime}(t)G^{\prime\prime}(t)+2(n-1)(tG(t)G^{\prime}(t)-G(t)^2)/t^3. \en For $t>r$ this is negative since $G$ is constant there.
For $t\leq r$ we use the differential equation (2.6) to obtain \be\no \fr d{dt}\left[G^{\prime}(t)^2+(n-1)\fr{G(t)^2}{t^2}\right]=-2 \mu_1(B_r)GG^{\prime}-2(n-1)(tG^{\prime}-G)^2/t^3<0.\en Thus the function $ G^{\prime}(t)^2+(n-1)\fr{G(t)^2}{t^2}$ is decreasing for $t>0$. {\lemma \label{le2.2} (\cite{bp}) Let $f: \mathbb{R}_+\ri \mathbb{R}_+$ be a decreasing function. Then we have \be \label{le1.0} \int_{B_r} f(|x|)dx -\int_{\om} f(|x|)dx\geq n\omega_n \int_{\rho_1}^{\rho_2} |f(t)-f(r)|t^{n-1} dt. \en Here \be \label{le1.2} \rho_1=\left(\fr{|\om\cap B_r|}{\omega_n}\right)^{\fr 1n} \ \ \ {\rm and}\ \ \ \rho_2=\left(\fr{|\om|+|\om\setminus B_r|}{\omega_n}\right)^{\fr 1n}.\en } Taking $f(t)=G^{\prime}(t)^2+(n-1)\fr{G(t)^2}{t^2}$ in Lemma 2.3, we obtain \be\no & & \int_{B_r} \left(G^{\prime}(|x|)^2 +(n-1)\fr{G(|x|)^2}{|x|^2}\right)dx-\int_{\om}\left(G^{\prime}(|x|)^2 +(n-1)\fr{G(|x|)^2}{|x|^2}\right)dx\\ \no &\geq & n\omega_n \int_{r}^{\rho_2} |f(t)-f(r)|t^{n-1} dt \\ & =& n\omega_n \int_{r}^{\rho_2} (f(r)-f(t))t^{n-1} dt. \en Observe that \be (f(r)-f(t))t^{n-1}= (n-1)g(r)^2\left(\fr 1{r^2}-\fr 1{t^2}\right)t^{n-1}, \ \ \ {\rm for\ } \rho_2\geq t\geq r. \en Therefore, \be\no & &\int_{r}^{\rho_2} (f(r)-f(t))t^{n-1} dt\\ & = & g(r)^2\cdot\left\{\begin{array}{l} \fr{n-1}{nr^2}\left(\rho_2^n-r^n\right)-\fr{n-1}{n-2}\left(\rho_2^{n-2}-r^{n-2}\right), \ \ \ {\rm if\ \ } n>2,\\ \fr 1{2r^2}\left(\rho_2^2-r^2\right)-\ln\fr{\rho_2}r, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm if\ \ } n=2. \end{array}\right.
\en By using the definition of $\rho_2$ we have when $n>2$, \be \rho_2^{n-2}-r^{n-2}&=& r^{n-2}\left[\left(1+\fr{|\om\setminus B_r|}{|\om|}\right)^{\fr{n-2}n}-1\right]\\ \no &\leq& r^{n-2}\left(\fr{n-2}n\fr{|\om\setminus B_r|}{|\om|}-\fr{(n-2)2^{-\fr 2n-1}}{n^2}\left(\fr{|\om\setminus B_r|}{|\om|}\right)^2\right), \en thanks to the elementary inequality \be\no (1+t)^{\delta}\leq 1+\delta t +\fr{\delta(\delta -1)}2\cdot 2^{\delta-2} t^2, \forall \ \delta\in (0, 1), \ \forall t\in [0, 1], \en and when $n=2$, \be \ln\fr{\rho_2}r &=&\fr 12 \ln\left(1+\fr{|\om\setminus B_r|}{|\om|}\right) \\ \no &\leq& \fr 12 \left(\fr{|\om\setminus B_r|}{|\om|}-\fr 14 \left(\fr{|\om\setminus B_r|}{|\om|}\right)^2\right), \en thanks to the elementary inequality \be\no \ln(1+t)\leq t-\fr{t^2}4, \ \forall t\in [0, 1]. \en Since $|B_r|=|\om|$, we have $|\om\Delta B_r|=2|\om\setminus B_r|$ and so $$ \fr{|\om\setminus B_r|}{|\om|}\geq \fr 12 \mathcal{A}(\om).$$ It then follows by substituting (2.42) and (2.43) into (2.41) that \be& & \int_{r}^{\rho_2} (f(r)-f(t))t^{n-1} dt\\ \no &\geq& g(r)^2\left(\fr{|\om\setminus B_r|}{|\om|}\right)^2\cdot\left\{\begin{array}{l} r^{n-2}\cdot \fr{(n-1)2^{-\fr 2n -1}}{n^2}, \ \ \ {\rm if\ \ } n>2,\\ \fr 18, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm if\ \ } n=2. \end{array}\right. \\ \no &\geq&\fr 14 g(r)^2\mathcal{A}(\om)^2\cdot\left\{\begin{array}{l} r^{n-2}\cdot \fr{(n-1)2^{-\fr 2n -1}}{n^2}, \ \ \ {\rm if\ \ } n>2,\\ \fr 18, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm if\ \ } n=2. \end{array}\right.
\en Thus, concerning the right hand side of (2.36), one gets from (2.39) and (2.44) that \be\no & & \int_{B_r} \left(G^{\prime}(|x|)^2 +(n-1)\fr{G(|x|)^2}{|x|^2}\right)dx-\int_{\om}\left(G^{\prime}(|x|)^2 +(n-1)\fr{G(|x|)^2}{|x|^2}\right)dx\\ \no &\geq & \fr{\omega_n}4g(r)^2\mathcal{A}(\om)^2\cdot \left\{\begin{array}{l} r^{n-2}\cdot \fr{(n-1)2^{-\fr 2n -1}}{n}, \ \ \ {\rm if\ \ } n>2,\\ \fr 14 , \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm if\ \ } n=2, \end{array}\right. \\ \no &=& \fr{\omega_n}4 J_{n/2}(p_{n/2,1})^2\mathcal{A}(\om)^2\cdot \left\{\begin{array}{l}\fr{(n-1)2^{-\fr 2n -1}}{n}, \ \ \ {\rm if\ \ } n>2,\\ \fr 14, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm if\ \ } n=2, \end{array}\right. \\ &\equiv& \alpha(n)\mathcal{A}(\om)^2.\en Concerning the left hand side of (2.36), we have \be& &\left(\left(\fr{p_{n/2,1}}r\right)^2-\fr {n-1}{\sum_{i=1}^{n-1}\fr 1{\mu_i} } \right)\int_{B_r}g(|x|)^2dx\\ \no &=&\left(\left(\fr{p_{n/2,1}}r\right)^2-\fr {n-1}{\sum_{i=1}^{n-1}\fr 1{\mu_i} } \right) r^2\int_{\{|y|\leq 1\}}|y|^{2-n} J_{\fr n2}(p_{n/2,1}|y|)^2 dy \\ \no &=&\left(p_{n/2,1}^2\omega_n^{2/n}-\fr {(n-1)|\om|^{2/n}}{\sum_{i=1}^{n-1}\fr 1{\mu_i} } \right)\beta(n), \en where \be\no \beta(n)=\omega_n^{-2/n}\int_{\{|y|\leq 1\}}|y|^{2-n} J_{\fr n2}(p_{n/2,1}|y|)^2 dy. \en Combining (2.36), (2.45) and (2.46), we obtain \be p_{n/2,1}^2\omega_n^{2/n}-\fr {(n-1)|\om|^{2/n}}{\sum_{i=1}^{n-1}\fr 1{\mu_i} }\geq \alpha(n)\beta(n)^{-1}\mathcal{A}(\om)^2\equiv d(n)\mathcal{A}(\om)^2. \en Moreover, we can see that equality holds in (2.47) only when $\om$ is a ball. This completes the proof of Theorem \ref{th2.1}. \section{A Proof of Theorem \ref{th2}} \setcounter{equation}{0} In this section, we shall prove Theorem \ref{th2}. Firstly, we list some important facts we need.
About each point $p\in \mathbb{H}^n$ there exists a coordinate system $(t,\xi)\in [0, +\infty)\times\mathbb{S}^{n-1}$ relative to which the Riemannian metric reads as \be ds^2=dt^2 +\sinh^2t d\sigma^2, \en where $d\sigma^2$ is the canonical metric on the $(n-1)$-dimensional unit sphere $\mathbb{S}^{n-1}$. \begin{lemma}(Cf. \cite{c}, \cite{x}). \label{le2} Let $B(p, r)$ be a geodesic ball of radius $r$ with center $p$ in $\mathbb{H}^n$. Then the eigenfunction corresponding to the first nonzero eigenvalue $\mu_1(B(p, r))$ of the Neumann problem on $B(p, r)$ must be of the form \be\label{le3.1} h(t, \xi)=f(t)\omega(\xi), \ \xi\in \mathbb{S}^{n-1}, \en where $\omega(\xi)$ is an eigenfunction corresponding to the first nonzero eigenvalue of $\mathbb{S}^{n-1}$, $f$ satisfies \be\label{le2.2}\left\{\begin{array}{l}f^{\prime\prime}+(n-1)\coth t\, f^{\prime}+\left(\mu_1(B(p,r))-\fr{n-1}{\sinh^2 t}\right)f=0, \\ f(0)=f^{\prime}(r)=0, \ f^{\prime}|_{[0, r)}\neq 0, \end{array} \right. \en and \be\label{le2.3} \mu_1(B(p,r))=\fr{\int_{B(p,r)}\left(f^{\prime}(t)^2+(n-1)\fr{f(t)^2}{\sinh^2t}\right)dv}{\int_{B(p,r)}f(t)^2dv}. \en \end{lemma} {\it Proof of Theorem \ref{th2}.} Assume that the radius of $B_{\om}$ is $r$. Let $f$ be as in Lemma 3.1. Noticing $f(t)\neq 0$ when $0<t\leq r$, we may assume that $f(t)>0$ for $0<t\leq r$ and so $f$ is nondecreasing on $[0, r]$. Let $\{{\bf e}_i\}_{i=1}^n$ be an orthonormal basis of $\mathbb{R}^n$ and set $\omega_i(\xi)=\langle{\bf e}_i, \xi\rangle, \ \xi\in \mathbb{S}^{n-1}\subset \mathbb{R}^n$. Define \be F(t)=\left\{\begin{array}{l} f(t), \ \ t\leq r,\\ f(r), \ \ t>r. \end{array} \right. \en Let us take a point $p\in \mathbb{H}^n$ such that in the above coordinate system at $p$ we have \be\label{pth2.1} \int_{\om} F(t)\omega_i(\xi)dv=0, \ \ i=1,\cdots,n. \en Here, $dv$ is the volume element of $\mathbb{H}^n$.
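Lemma 3.1 reduces the computation of $\mu_1(B(p,r))$ to a radial ODE. As a purely numerical illustration (a rough shooting-method sketch, not part of the proof; the choices $n=2$, $r=1$, the starting offset, the step count and the search bracket are all ad hoc assumptions):

```python
from math import sinh, cosh

def shoot(mu, n=2, r=1.0, t0=0.05, steps=2000):
    """Integrate f'' + (n-1) coth(t) f' + (mu - (n-1)/sinh(t)^2) f = 0
    from t0 to r, starting from the near-origin behaviour f ~ t,
    and return f'(r)."""
    h = (r - t0) / steps
    f, fp = t0, 1.0  # f behaves like t near the origin

    def rhs(t, f, fp):
        coth = cosh(t) / sinh(t)
        return -(n - 1) * coth * fp - (mu - (n - 1) / sinh(t) ** 2) * f

    t = t0
    for _ in range(steps):
        # classical RK4 on the first-order system (f, f')
        k1f, k1p = fp, rhs(t, f, fp)
        k2f, k2p = fp + h / 2 * k1p, rhs(t + h / 2, f + h / 2 * k1f, fp + h / 2 * k1p)
        k3f, k3p = fp + h / 2 * k2p, rhs(t + h / 2, f + h / 2 * k2f, fp + h / 2 * k2p)
        k4f, k4p = fp + h * k3p, rhs(t + h, f + h * k3f, fp + h * k3p)
        f += h / 6 * (k1f + 2 * k2f + 2 * k3f + k4f)
        fp += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        t += h
    return fp

# Scan for the first sign change of f'(r) as mu increases, then bisect.
lo, hi = 0.2, None
flo = shoot(lo)
mu = lo
while hi is None and mu < 50.0:
    mu += 0.25
    fm = shoot(mu)
    if flo * fm <= 0:
        hi = mu
    else:
        lo, flo = mu, fm
assert hi is not None  # a sign change was found below mu = 50

for _ in range(40):
    mid = (lo + hi) / 2
    fm = shoot(mid)
    if flo * fm <= 0:
        hi = mid
    else:
        lo, flo = mid, fm
mu1 = (lo + hi) / 2
```

The bisection locates the first value of $\mu$ at which $f^{\prime}(r)$ changes sign, i.e.\ a numerical approximation of the Neumann eigenvalue $\mu_1(B(p,1))$.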
By using the same arguments as in the proof of Theorem 2.1, we can assume further that \be\label{pth2.2} \int_{\om}F(t)\omega_i(\xi) u_jdv=0, \en for $i=2,3,\cdots, n$ and $j=1,\cdots,i-1$. Here $\{u_i\}_{i=0}^{+\infty}$ is an orthonormal set of eigenfunctions corresponding to the eigenvalues $\{\mu_i(\om)\}_{i=0}^{+\infty}$. Hence, we conclude from the Rayleigh-Ritz variational characterization (\ref{pth0.5}) that \be & & \mu_i(\om)\int_{\om} F(t)^2 \omega_i^2(\xi)dv\\ \no &\leq& \int_{\om}|\nabla(F(t)\omega_i(\xi))|^2 dv\\ \no &=& \int_{\om}\left(|F^{\prime}(t)|^2\omega_i^2(\xi)+F^2(t)|\tilde{\nabla}\omega_i(\xi)|^2\sinh^{-2} t\right)dv, \ i=1,\cdots,n, \en where $\tilde{\nabla}$ denotes the gradient operator of $\mathbb{S}^{n-1}.$ Thus \be\label{pth2.3} & & \int_{\om} F(t)^2 \omega_i^2(\xi)dv\\ \no &\leq& \fr 1{\mu_i(\om)}\int_{\om}|F^{\prime}(t)|^2\omega_i^2(\xi)dv+ \fr 1{\mu_i(\om)}\int_{\om}F^2(t)|\tilde{\nabla}\omega_i(\xi)|^2\sinh^{-2} t dv. \en Observing $F^{\prime}(t)=0, t\geq r$, one gets \be\label{pth2.4} \int_{\om}|F^{\prime}(t)|^2\omega_i^2(\xi)dv&=&\int_{\om\cap B(p,r)}|F^{\prime}(t)|^2\omega_i^2(\xi)dv\\ \no &\leq& \int_{ B(p,r)}|F^{\prime}(t)|^2\omega_i^2(\xi)dv\\ \no &=& \int_0^r \int_{\mathbb{S}^{n-1}}|F^{\prime}(t)|^2\omega_i^2(\xi)\sinh^{n-1} t dA\ dt \\ \no &=& \fr 1n \int_0^r \int_{\mathbb{S}^{n-1}}|F^{\prime}(t)|^2\sinh^{n-1} t dA\ dt \\ \no &=& \fr 1n\int_{B(p,r)}|F^{\prime}(t)|^2 dv, \en where $dA$ denotes the area element of $\mathbb{S}^{n-1}$.
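The factor $\fr 1n$ in the last computation comes from $\int_{\mathbb{S}^{n-1}}\omega_i^2(\xi)dA=\fr 1n\int_{\mathbb{S}^{n-1}}dA$, which can be checked numerically (a sketch for $n=3$; the formula $\tilde{\nabla}\omega_i={\bf e}_i-\langle{\bf e}_i,\xi\rangle\xi$ used at the end is the standard expression for the tangential gradient of a linear function):

```python
import random

# Monte Carlo average of omega_i(xi)^2 = <e_i, xi>^2 over S^{n-1}:
# by symmetry it equals 1/n.
random.seed(0)
n, N = 3, 200_000
acc = [0.0] * n
for _ in range(N):
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    s = sum(x * x for x in v) ** 0.5
    xi = [x / s for x in v]  # uniform point on the unit sphere
    for i in range(n):
        acc[i] += xi[i] ** 2
means = [a / N for a in acc]
for m in means:
    assert abs(m - 1.0 / n) < 1e-2   # Monte Carlo tolerance

# Pointwise, sum_i omega_i(xi)^2 = |xi|^2 = 1, so the averages sum to 1.
assert abs(sum(means) - 1.0) < 1e-9

# With grad omega_i = e_i - <e_i, xi> xi one has |grad omega_i|^2 = 1 - omega_i^2,
# so the squared gradients sum to n - 1, the identity used just below.
xi = [0.6, 0.8, 0.0]
grads2 = [1.0 - xi[i] ** 2 for i in range(n)]
assert abs(sum(grads2) - (n - 1)) < 1e-12
```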
Since \be |\tilde{\nabla}\omega_i(\xi)|\leq 1, \ \ \sum_{i=1}^n |\tilde{\nabla}\omega_i(\xi)|^2=n-1, \en we have \be\label{pth2.5} & & \sum_{i=1}^n \fr 1{\mu_i(\om)}|\tilde{\nabla}\omega_i(\xi)|^2\\ \no &=&\sum_{i=1}^{n-1} \fr 1{\mu_i(\om)}|\tilde{\nabla}\omega_i(\xi)|^2 + \fr 1{\mu_n(\om)}\sum_{i=1}^{n-1}\left(1-|\tilde{\nabla}\omega_i(\xi)|^2\right) \\ \no &\leq&\sum_{i=1}^{n-1} \fr 1{\mu_i(\om)}|\tilde{\nabla}\omega_i(\xi)|^2 + \sum_{i=1}^{n-1}\fr 1{\mu_i(\om)}\left(1-|\tilde{\nabla}\omega_i(\xi)|^2\right)\\ \no &=& \sum_{i=1}^{n-1}\fr 1{\mu_i(\om)}.\en Summing on $i$ from $1$ to $n$ in (\ref{pth2.3}) and using (\ref{pth2.4}) and (\ref{pth2.5}), we get \be\label{pth2.6} & & \int_{\om} F(t)^2dv\\ \no &\leq& \sum_{i=1}^n \fr 1{n\mu_i(\om)}\int_{B(p,r)}|F^{\prime}(t)|^2dv+\sum_{i=1}^{n-1}\fr 1{\mu_i(\om)}\int_{\om}F^2(t)\sinh^{-2}tdv. \en We need the following lemma. \begin{lemma}\label{le3.2} The function $h(t)=\fr{F(t)}{\sinh t}$ is decreasing. \end{lemma} {\it Proof of Lemma \ref{le3.2}.} Observe that \be\no \lim_{t\ri 0} h(t)= f^{\prime}(0). \en Let us show that \be\label{ple2.1} \gamma(t)\equiv f^{\prime}(t)-\coth t f(t)\leq 0, \ t\in (0, r]. \en Since \be\lim_{t\ri 0} \gamma(t)=0, \ \gamma(r)= -\coth r f(r)<0, \en if $\gamma(t_0)>0$ for some $t_0\in (0, r),$ then $\gamma$ attains its maximum at some $t_1\in (0, r)$ and so \be\label{ple2.2} 0=\gamma^{\prime}(t_1)= f^{\prime\prime}(t_1)+\fr{f(t_1)}{\sinh^2t_1}-\coth t_1 f^{\prime}(t_1). \en We have from (\ref{le2.2}) that \be\label{ple2.3} f^{\prime\prime}(t_1)+(n-1)\coth t_1f^{\prime}(t_1)+ \mu_1(B(p,r))f(t_1)-\fr{n-1}{\sinh^2 t_1}f(t_1)=0. \en Hence \be f^{\prime}(t_1)-\fr{f(t_1)}{\cosh t_1\sinh t_1} = -\fr{\mu_1(B(p,r))f(t_1)\sinh t_1}{n\cosh t_1}<0, \en which contradicts \be f^{\prime}(t_1)-\coth t_1 f(t_1)>0, \en since $\coth t_1\geq \fr 1{\cosh t_1\sinh t_1}$ and $f(t_1)>0$. Thus (\ref{ple2.1}) holds. Consequently $h^{\prime}(t)\leq 0, \ \forall t\in (0, r]$, and $h$ is decreasing (for $t>r$ this is clear, since there $F$ is constant while $\sinh t$ is increasing). The proof of Lemma 3.2 is completed. \vskip0.2cm We now continue with the proof of Theorem \ref{th2}.
Since $F$ is nondecreasing and $\fr{F(t)}{\sinh t}$ is decreasing, we can use the same arguments as in the proof of (2.33) to conclude that \be\label{pth2.7} \int_{\om} F(t)^2 dv\geq \int_{B(p,r)}f(t)^2 dv\en and \be\label{pth2.8} \int_{\om} \fr{F(t)^2}{\sinh^2t} dv\leq \int_{B(p,r)} \fr{f(t)^2}{\sinh^2t} dv. \en Substituting (\ref{pth2.7}) and (\ref{pth2.8}) into (\ref{pth2.6}), one gets \be\label{pth2.9} \fr 1{n-1}\sum_{i=1}^{n-1}\fr 1{\mu_i(\om)}&\geq&\fr{\int_{B(p,r)}f(t)^2dv}{\int_{B(p,r)}\left(f^{\prime}(t)^2+(n-1)\fr{f(t)^2}{\sinh^2t}\right)dv}\\ \no &=&\fr 1{\mu_1(B(p,r))} \en and equality holds if and only if $\om=B(p,r)$. This completes the proof of Theorem \ref{th2}. \section*{Acknowledgments} Q. Wang was partially supported by CNPq, Brazil (Grant No. 307089/2014-2). C. Xia was partially supported by CNPq, Brazil (Grant No. 306146/2014-2).
https://arxiv.org/abs/1412.3366
Frattini and related subgroups of Mapping Class Groups
Let $\Gamma_{g,b}$ denote the orientation-preserving Mapping Class Group of a closed orientable surface of genus $g$ with $b$ punctures. For a group $G$ let $\Phi_f(G)$ denote the intersection of all maximal subgroups of finite index in $G$. Motivated by a question of Ivanov as to whether $\Phi_f(G)$ is nilpotent when $G$ is a finitely generated subgroup of $\Gamma_{g,b}$, in this paper we compute $\Phi_f(G)$ for certain subgroups of $\Gamma_{g,b}$. In particular, we answer Ivanov's question in the affirmative for these subgroups of $\Gamma_{g,b}$.
\section{Introduction} We fix the following notation throughout this paper: Let $\Gamma_{g,b}$ denote the orientation-preserving Mapping Class Group of a closed orientable surface of genus $g$ with $b$ punctures. When $b=0$ we simply write $\Gamma_g$. In addition, when $b>0$ we let $\P\Gamma_{g,b}$ denote the {\em pure Mapping Class Group}; i.e. the subgroup of $\Gamma_{g,b}$ consisting of those elements that fix the punctures pointwise. The Torelli group ${\cal I}_g$ is the subgroup of $\Gamma_g$ arising as the kernel of the homomorphism $\Gamma_g \rightarrow \Sp(2g,{\bf Z})$ coming from the action of $\Gamma_g$ on $H_1(\Sigma_g,{\bf Z})$. As usual $\Out(F_n)$ will denote the Outer Automorphism Group of a free group of rank $n\geq 2$. For a group $G$, the {\em Frattini subgroup} $\Phi(G)$ of $G$ is defined to be the intersection of all maximal subgroups of $G$ (if they exist), otherwise it is defined to be the group $G$ itself. (Here a maximal subgroup is a strict subgroup which is maximal with respect to inclusion.) In addition we define $\Phi_f(G)$ to be the intersection of all maximal subgroups of finite index in $G$. Note that $\Phi(G) < \Phi_f(G)$.\footnote{We use the notation $G_1<G_2$ to indicate that $G_1$ is a subgroup of $G_2$ (including the case where $G_1=G_2$).} Frattini's original theorem is that if $G$ is finite then $\Phi(G)=\Phi_f(G)$ is a nilpotent group (see for example \cite[Theorem 11.5]{Ro}). For infinite groups this is not the case: there are examples of finitely generated infinite groups $G$ with $\Phi(G)$ not nilpotent \cite[p.~328]{H}. On the other hand, in \cite{Pl} Platonov showed that if $G$ is any finitely generated linear group then $\Phi(G)$ and $\Phi_f(G)$ are nilpotent. Motivated by the question as to whether $\Gamma_g$ is a linear group, in \cite{Lo}, Long proved that $\Phi(\Gamma_g)=1$ for $g\geq 3$, and $\Phi(\Gamma_2)={\bf Z}/2{\bf Z}$. 
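The definitions above are easy to explore by brute force for small finite groups (a sketch unrelated to the mapping class groups of this paper; recall that for a finite group $\Phi=\Phi_f$):

```python
from itertools import combinations, permutations
from functools import reduce

def closure(gens, op, e):
    """Subgroup generated by gens inside a finite group (closure under op)."""
    S = {e, *gens}
    while True:
        new = {op(a, b) for a in S for b in S} - S
        if not new:
            return frozenset(S)
        S |= new

def frattini(G, op, e):
    """Intersection of the maximal (proper) subgroups of a finite group G.

    Subgroups are enumerated as closures of generating sets of size <= 2,
    which suffices for the small groups used here."""
    subs = {closure(gens, op, e)
            for r in (1, 2) for gens in combinations(sorted(G), r)}
    proper = [H for H in subs if len(H) < len(G)]
    maximal = [H for H in proper if not any(H < K for K in proper)]
    return reduce(frozenset.intersection, maximal)

# S_3 as permutations of {0,1,2}: its maximal subgroups intersect trivially.
S3 = set(permutations(range(3)))
comp = lambda a, b: tuple(a[b[i]] for i in range(len(b)))
assert frattini(S3, comp, (0, 1, 2)) == frozenset({(0, 1, 2)})

# Z/12: the maximal subgroups are 2Z/12 and 3Z/12, so Phi = 6Z/12.
Z12 = set(range(12))
assert frattini(Z12, lambda a, b: (a + b) % 12, 0) == frozenset({0, 6})
```

Both answers are nilpotent, as Frattini's original theorem predicts: $\Phi(S_3)=1$ and $\Phi({\bf Z}/12{\bf Z})\cong{\bf Z}/2{\bf Z}$.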
This was extended by Ivanov \cite{Iv1} who showed that, as in the linear case, $\Phi(G)$ is nilpotent for any finitely generated subgroup $G<\Gamma_{g,b}$. Regarding $\Phi_f(G)$, in \cite{Iv1} (and then again in \cite{Iv2}), Ivanov asks whether the same is true for $\Phi_f$: \vskip 8pt \noindent{\bf Question:} (Ivanov \cite{Iv1,Iv2}) Is $\Phi_f(G)$ nilpotent for every finitely generated subgroup $G$ of $\Gamma_{g,b}$? \\[\baselineskip] The aim of this note is to prove some results in the direction of answering Ivanov's question. In particular, the following theorem answers Ivanov's question in the affirmative for $\Gamma_g$ and some of its subgroups in the case where $g\geq 3$. \begin{theorem} \label{main1} Suppose that $g\geq 3$, and that $G$ is either \begin{enumerate} \item[(i)] the Mapping Class Group $\Gamma_g$, or \item[(ii)] a normal subgroup of $\Gamma_g$ (for example the Torelli group ${\cal I}_g$, the Johnson kernel ${\cal K}_g$, or any higher term in the Johnson filtration of $\Gamma_g$), or \item[(iii)] a subgroup of $\Gamma_g$ which contains a finite index subgroup of the Torelli group ${\cal I}_g$. \end{enumerate} Then $\Phi_f(G)=1$.\end{theorem} \vskip 8pt \noindent{\bf Remarks:} (1)~Since $\Phi(G) < \Phi_f(G)$, our methods also give a different proof of Long's result that $\Phi(\Gamma_g)=1$ for $g\geq 3$. As for the case $g\leq 2$, note that $\Gamma_1$ and $\Gamma_2$ are linear (see \cite{BB} for $g=2$) and so Platonov's result \cite{Pl} applies to answer Ivanov's question in the affirmative in these cases for all finitely generated subgroups. On the other hand, for $g\geq 3$ the Mapping Class Group is not known to be linear and no other technique for answering Ivanov's question was known. In fact, as pointed out by Ivanov, neither the methods of \cite{Lo} nor those of \cite{Iv1} apply to $\Phi_f$, and so even the case of $\Phi_f$ of the Mapping Class Group itself was not known.
\\[\baselineskip] \noindent (2)~Note that in Platonov's and Ivanov's theorems and in Ivanov's question, the Frattini subgroup and its variant $\Phi_f$ are considered for finitely generated subgroups. In reference to Theorem \ref{main1} above, it remains an open question whether the Johnson kernel ${\cal K}_g$, or any higher term in the Johnson filtration of $\Gamma_g$, is finitely generated or not.\\[\baselineskip] Perhaps the most interesting feature of the proof of Theorem \ref{main1} is that it is another application of the projective unitary representations arising in Topological Quantum Field Theory (TQFT) first constructed by Reshetikhin and Turaev \cite{RT} (although as in \cite{MR}, the perspective here is that of the skein-theoretical approach of \cite{BHMV}). We are also able to prove: \begin{theorem} \label{main1add} Assume that $b>0$. Then $\Phi_f(\P\Gamma_{g,b})$ is either trivial or ${\bf Z}/2{\bf Z}$. Indeed, $\Phi_f(\P\Gamma_{g,b})=1$ unless $g\in\{1,2\}$. \end{theorem} \noindent The reason for separating out the case when $b>0$ is that the proof does not directly use the TQFT framework, but rather makes use of Theorem \ref{main1}(i) in conjunction with the Birman exact sequence and a general group theoretic lemma (see \S \ref{sec6}). We expect that the methods of this paper will also answer Ivanov's question for $\Gamma_{g,b}$ but at present we are unable to do so. We comment further on this at the end of \S \ref{sec6}. In addition, our methods can also be used to give a straightforward proof of the following. \begin{theorem} \label{main2} Suppose that $n\geq 3$. Then $\Phi(\Out(F_n))=\Phi_f(\Out(F_n))=1$.\end{theorem} \noindent Note that it was shown in \cite{Hu} that $\Phi(\Out(F_n))$ is finite. As remarked upon above, this note was largely motivated by the questions of Ivanov.
To that end, we discuss a possible approach to answering Ivanov's question in general using the aforementioned projective unitary representations arising from TQFT, coupled with Platonov's work \cite{Pl}. Another motivation for this work arose from attempts to understand the nature of the Frattini subgroup and the center of the profinite completion of $\Gamma_g$ and ${\cal I}_g$. We discuss these further in \S \ref{sec7}.\\[\baselineskip] \noindent{\bf Acknowledgements:}~{\em The authors wish to thank the organizers of the conference ``Braids and Arithmetic'' at CIRM Luminy in October 2014, where this work was completed.} \section{Proving triviality of $\Phi_f$} Before stating and proving an elementary but useful technical result we introduce some notation. Let $\Gamma$ be a finitely generated group, and let ${\cal S}=\{G_n\}$ be a collection of finite groups together with epimorphisms $\phi_n:\Gamma\rightarrow G_n$. We say that $\Gamma$ is {\em residually}-${\cal S}$ if, given any non-trivial element $\gamma\in\Gamma$, there is some group $G_n\in{\cal S}$ and an epimorphism $\phi_n$ for which $\phi_n(\gamma)\neq 1$. Note that, as usual, this is equivalent to the statement $\bigcap \ker\phi_n = 1$. \begin{proposition} \label{tool} Let $\Gamma$ and ${\cal S}$ be as above with $\Gamma$ being residually-${\cal S}$. Assume further that $\Phi(G_n)=1$ for every $G_n\in{\cal S}$. Then $\Phi(\Gamma)=\Phi_f(\Gamma)=1$.\end{proposition} Before commencing with the proof of this proposition, we recall the following property. \begin{lemma} \label{frattini_under_epi} Let $\Gamma$ and $G$ be groups and $\alpha:\Gamma\rightarrow G$ an epimorphism. Then $\alpha(\Phi(\Gamma)) \subset \Phi(G)$ and $\alpha(\Phi_f(\Gamma)) \subset \Phi_f(G)$.\end{lemma} \noindent{\bf Proof:}~We prove the last statement. Let $M$ be a maximal subgroup of $G$ of finite index. Then $\alpha^{-1}(M)$ is a maximal subgroup of finite index in $\Gamma$, and hence $\Phi_f(\Gamma) \subset \alpha^{-1}(M)$.
Thus $\alpha(\Phi_f(\Gamma)) \subset M$ for all maximal subgroups $M$ of finite index in $G$ and the result follows.\qed\\[\baselineskip] \noindent{\bf Remark:}~As pointed out in \S 1, for a finite group $G$, $\Phi(G)=\Phi_f(G)$.\\[\baselineskip] \noindent{\bf Proof of Proposition \ref{tool}:}~We give the argument for $\Phi_f(\Gamma)$, the argument for $\Phi(\Gamma)$ is exactly the same. Thus suppose that $g\in\Phi_f(\Gamma)$ is a non-trivial element. Since $\Gamma$ is residually-${\cal S}$, there exists some $n$ so that $\phi_n(g) \in G_n$ is non-trivial. However, by Lemma \ref{frattini_under_epi} (and the remark following it) we have: $$\phi_n(g) \in \phi_n(\Phi_f(\Gamma)) < \Phi_f(G_n)=\Phi(G_n),$$ and in particular $\Phi(G_n)\neq 1$, a contradiction.\qed \section{The quantum representations and finite quotients} We briefly recall some of \cite{MR} (which uses \cite{BHMV} and \cite{GM}). As in \cite{MR} we only consider the case of $p$ a prime satisfying $p\equiv 3 \pmod 4 $. Let $\Sigma$ be a closed orientable surface of genus $g\geq 3$. The integral $SO(3)$-TQFT constructed in \cite{GM} provides a representation of a central extension $\widetilde \Gamma_g$ of $\Gamma_g$ $$\rho_p \,:\, \widetilde \Gamma_g \longrightarrow \GL(N_g(p),\BZ[\zeta_p])~,$$ where $\zeta_p$ is a primitive $p$-th root of unity, $\BZ[\zeta_p]$ is the ring of cyclotomic integers and $N_g(p)$ the dimension of a vector space $V_p(\Sigma)$ on which the representation acts. It is known that $N_g(p)$ is given by a Verlinde-type formula and goes to infinity as $p\rightarrow \infty$. For convenience we simply set $N=N_g(p)$. As in \cite{MR} the image group $\rho_p(\widetilde{\Gamma}_g)$ will be denoted by $\Delta_g$. As is pointed out in \cite{MR}, $\Delta_g< \SL(N, \BZ[\zeta_p])$, and moreover, $\Delta_g$ is actually contained in a special unitary group $\SU(V_p,H_p;\BZ[\zeta_p])$, where $H_p$ is a Hermitian form defined over the real field ${\bf Q}(\zeta_p+\zeta_p^{-1})$. 
Furthermore, the homomorphism $\rho_p$ descends to a projective representation of $\Gamma_g$ (which we denote by $\overline{\rho}_p$): $$\overline{\rho}_p : \Gamma_g \longrightarrow \PSU(V_p,H_p;\BZ[\zeta_p]).$$ What we need from \cite{MR} is the following. We can find infinitely many rational primes $q$ which split completely in $\BZ[\zeta_p]$, and for every such prime $\tilde q$ of $\BZ[\zeta_p]$ lying over such a $q$, we can consider the group $$\pi_{\tilde q}(\Delta_g) \subset \SL(N,q),$$ where $\pi_{\tilde q}$ is the reduction homomorphism from $\SL(N,\BZ[\zeta_p])$ to $\SL(N,q)$ induced by the isomorphism $\BZ[\zeta_p]/\tilde q\simeq \BF_q$. As is shown in \cite{MR} (see also \cite{Fu}) we obtain epimorphisms $\Delta_g\twoheadrightarrow \SL(N,q)$ for all but finitely many of these primes $\tilde{q}$, and it then follows easily that we obtain epimorphisms $\Gamma_g \twoheadrightarrow \PSL(N,q)$. We denote these epimorphisms by $\rho_{p,\tilde{q}}$. These should be thought of as reducing the images of $\overline{\rho}_{p}$ modulo $\tilde{q}$. That one obtains finite simple groups of the form $\PSL$ rather than $\PSU$ when $q$ is a split prime is discussed in \cite{MR} \S2.2.
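A toy illustration of the reduction maps $\pi_{\tilde q}$ (with the deliberately small choices $p=3$ and $q=7$, which are assumptions for illustration only; the actual construction uses large split primes and matrix groups rather than scalars): a rational prime $q\equiv 1 \pmod p$ splits completely in $\BZ[\zeta_p]$, and each prime above it reduces $\zeta_p$ to a primitive $p$-th root of unity in $\BF_q$.

```python
p, q = 3, 7   # toy choices (illustration only): q = 7 is 1 mod 3,
              # so q splits completely in Z[zeta_3]
assert q % p == 1

# The possible images of zeta_3: the primitive cube roots of unity in F_7.
roots = [x for x in range(2, q) if pow(x, p, q) == 1]
assert roots == [2, 4]

# Each is a genuine root mod 7 of x^2 + x + 1, the minimal polynomial of zeta_3.
for x in roots:
    assert (x * x + x + 1) % q == 0

def reduce_elt(a, b, x):
    """Image of a + b*zeta_3 under the reduction zeta_3 -> x in F_q."""
    return (a + b * x) % q

def mul_zeta(a, b, c, d):
    """(a + b*zeta_3)(c + d*zeta_3) in Z[zeta_3], using zeta_3^2 = -1 - zeta_3."""
    return (a * c - b * d, a * d + b * c - b * d)

# The reduction is a ring homomorphism: it respects multiplication.
x = roots[0]
for (a, b, c, d) in [(1, 2, 3, 4), (5, -1, 2, 7)]:
    e, f = mul_zeta(a, b, c, d)
    assert reduce_elt(e, f, x) == (reduce_elt(a, b, x) * reduce_elt(c, d, x)) % q
```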
Now $\rho_{p,\tilde{q}}(\gamma)$ is obtained by reducing $\overline{\rho}_p(\gamma)$ modulo $\tilde{q}$, and so there clearly exists $\tilde{q}$ so that $\rho_{p,\tilde{q}}(\gamma)\neq 1$, a contradiction.\qed\\[\baselineskip] \section{Proofs of Theorems \ref{main1} and \ref{main2}} The proof of Theorem \ref{main1} for $G=\Gamma_g$ follows easily as a special case of our next result. To state this, we introduce some notation: If $H<\Gamma_g$, we denote by $\widetilde H$ the inverse image of $H$ under the projection $\widetilde{\Gamma}_g \rightarrow \Gamma_g$. \begin{proposition} \label{saturating_trivial_frattini} Let $g\geq 3$, and assume that $H$ is a finitely generated subgroup of $\Gamma_g$ for which $\rho_p(\widetilde H)$ has the same Zariski closure and adjoint trace field as $\Delta_g$. Then $\Phi(H)=\Phi_f(H)=1$.\end{proposition} \noindent{\bf Proof:}~We begin with a remark. That the homomorphisms $\rho_{p,\tilde{q}}$ of \S 3 are surjective is proved using Strong Approximation. The main ingredients of this are the Zariski density of $\Delta_g$ in the algebraic group $\SU(V_p,H_p)$, and the fact that the adjoint trace field of $\Delta_g$ is the field ${\bf Q}(\zeta_p+\zeta_p^{-1})$ over which the group $\SU(V_p,H_p)$ is defined (see \cite{MR} for more details). In particular, the proof establishes surjectivity of $\rho_{p,\tilde{q}}$ when restricted to any subgroup $H<\Gamma_g$ satisfying the hypothesis of the proposition. To complete the proof, note that the groups $\PSL(N,q)$ are finite simple groups (since the dimensions $N$ are all very large), so their Frattini subgroups are trivial. This follows from Frattini's theorem, or, more simply, from the fact that the Frattini subgroup of a finite group is a normal subgroup which is moreover a strict subgroup (since finite groups do have maximal subgroups).
Hence the result follows from Lemma \ref{residual}, Proposition \ref{tool} and the remark at the start of the proof.\qed\\[\baselineskip] \noindent In particular, $\Gamma_g$ satisfies the hypothesis of Proposition \ref{saturating_trivial_frattini}, and so $\Phi_f(\Gamma_g)=1$. This also recovers the result of Long \cite{Lo} proving triviality of the Frattini subgroup.\\[\baselineskip] \noindent The proof of Theorem \ref{main1} in case (ii), that is, when $G$ is a normal subgroup of $\Gamma_g$, follows from this and the following general fact: \begin{proposition}\label{fFN} If $N$ is a normal subgroup of a group $\Gamma$, then $\Phi_f(N) < \Phi_f(\Gamma)$. \end{proposition} This fact is known for Frattini subgroups of finite groups, and the proof can be adapted to our situation. We defer the details to Section~\ref{FFN}.\\[\baselineskip] In the remaining case (iii) of Theorem \ref{main1}, $G$ is a subgroup of $\Gamma_g$ which contains a finite index subgroup of the Torelli group ${\cal I}_g$. We shall show that $G$ satisfies the hypothesis of Proposition \ref{saturating_trivial_frattini}, and deduce $\Phi_f(G)=1$ as before. Consider first the case where $G$ is the Torelli group ${\cal I}_g$ itself. Recall the short exact sequence $$1 \longrightarrow {\cal I}_g \longrightarrow \Gamma_g \longrightarrow \Sp(2g,{\bf Z})\longrightarrow 1~.$$ We now use the following well-known facts. \begin{enumerate} \item[$\bullet$] $\Gamma_g$ is generated by Dehn twists, which map to transvections in $ \Sp(2g,{\bf Z})$. \item[$\bullet$] The central extension $\widetilde \Gamma_g$ of $\Gamma_g$ is generated by certain lifts of Dehn twists, and the image under $\rho_p$ of every such lift is a matrix of order $p$. \item[$\bullet$] The quotient of $\Sp(2g,{\bf Z})$ by the normal subgroup generated by $p$-th powers of transvections is the finite group $\Sp(2g,{\bf Z}/p{\bf Z})$ (see \cite{BMS} for example).
\end{enumerate} \noindent It follows that the finite group $\Sp(2g,{\bf Z}/p{\bf Z})$ admits a surjection onto the quotient group $$\Delta_g / \rho_p(\widetilde{\cal I}_g)$$ (recall that $\Delta_g= \rho_p(\widetilde{\Gamma}_g)$) and hence the group $\rho_p(\widetilde{\cal I}_g)$ has finite index in $\Delta_g$. But the Zariski closure of $\Delta_g$ is the connected, simple, algebraic group $\SU(V_p,H_p)$. Thus $\rho_p(\widetilde{\cal I}_g)$ and $\Delta_g$ have the same Zariski closure. Again using the fact that $\SU(V_p,H_p)$ is a simple algebraic group, we also deduce that $\rho_p(\widetilde{\cal I}_g)$ has the same adjoint trace field as $\Delta_g$ (this follows from \cite{DM} Proposition 12.2.1 for example). This shows that ${\cal I}_g$ indeed satisfies the hypothesis of Proposition~\ref{saturating_trivial_frattini}, and so once again $\Phi_f({\cal I}_g)=1$. The same arguments work when $G$ has finite index in ${\cal I}_g$, and also when $G$ is any subgroup of $\Gamma_g$ which contains a finite index subgroup of ${\cal I}_g$. This completes the proof of Theorem \ref{main1}.\qed\\[\baselineskip] We now turn to the proof of Theorem \ref{main2}. To deal with the case of $\Out(F_n)$, we recall that R.~Gilman \cite{Gil} showed that for $n\geq 3$, $\Out(F_n)$ is residually alternating: i.e. in the notation of \S 2, the collection ${\cal S}$ consists of alternating groups.\\[\baselineskip] \noindent{\bf Proof of Theorem \ref{main2}:}~For $n\geq 3$, the abelianization of $\Out(F_n)$ is ${\bf Z}/2{\bf Z}$ (as can be seen directly from Nielsen's presentation of $\Out(F_n)$, see \cite{Vo} \S 2.1). Hence, $\Out(F_n)$ does not admit a surjection onto $A_3$ or $A_4$. Thus all the alternating quotients described by Gilman's result above have trivial Frattini subgroups (as in the proof of Proposition \ref{saturating_trivial_frattini}). 
The proof is completed using the residual alternating property and Proposition \ref{tool}.\qed \section{Proof of Proposition~\protect{\ref{fFN}}}\label{FFN} Let $\Gamma$ be a group and $N $ a normal subgroup of $\Gamma$. We wish to show that $\Phi_f(N) < \Phi_f(\Gamma)$. We proceed as follows. First a preliminary observation. Let $K=\Phi_f(N)$. It is easy to see that $K$ is characteristic in $N$ (i.e., fixed by every automorphism of $N$). Since $N$ is normal in $\Gamma$, it follows that $K$ is normal in $\Gamma$. This implies that for every subgroup $M$ of $\Gamma$, the set $$KM=\{km\,|\, k\in K, m\in M\}$$ is a subgroup of $\Gamma$. Moreover, since $K < N$, we have \begin{equation}\label{eq1} KM \cap N = KM_1\end{equation} where $M_1= M\cap N$. To see the inclusion $KM \cap N \subset KM_1$, write an element of $KM \cap N$ as $km=n$ and observe that $m\in N$ since $K<N$. Thus $m\in M_1$. The reverse inclusion is immediate. Now suppose for a contradiction that $K =\Phi_f(N)$ is not contained in $ \Phi_f(\Gamma)$. Then there exists a maximal subgroup $M<\Gamma$ of finite index such that $K$ is not contained in $M$. Write $$M_1=M\cap N$$ as above. Then $M_1$ is a finite index subgroup of $N$. If $M_1=N$ then $N$ is contained in $M$, and hence so is $K$, which is a contradiction. Thus $M_1$ is a strict subgroup of $N$, and since its index in $N$ is finite, $M_1$ is contained in a maximal subgroup $H$, say, of $N$. The proof is now concluded as follows. By definition, $K=\Phi_f(N)$ is also contained in $H$. Hence the group $KM_1$ is contained in $H$ and therefore strictly smaller than $N$. 
On the other hand, by the maximality of $M$ in $\Gamma$, we have $KM=\Gamma$, and hence, using (\ref{eq1}), we have $$ KM_1 = KM \cap N =\Gamma \cap N = N~.$$ This contradiction completes the proof.\qed\\[\baselineskip] \noindent{\bf Remark:} If we consider the original Frattini group $\Phi$ in place of $\Phi_f$, one can show similarly that $\Phi(N) < \Phi(\Gamma)$, provided that every strict subgroup of $N$ is contained in a maximal subgroup of $N$; e.g. when $N$ is finitely generated. \section{Proof of Theorem \ref{main1add}}\label{sec6} We begin by recalling the Birman exact sequence. Let $\Sigma_{g,b}$ denote the closed orientable surface of genus $g$ with $b$ punctures. If $b=0$ we abbreviate to $\Sigma_g$. There is a short exact sequence (the {\em Birman exact sequence}): $$1\rightarrow \pi_1(\Sigma_{g,(b-1)})\rightarrow \P\Gamma_{g,b}\rightarrow \P\Gamma_{g,(b-1)}\rightarrow 1,$$ \noindent where the map $\P\Gamma_{g,b}\rightarrow \P\Gamma_{g,(b-1)}$ is the forgetful map, and the map $\pi_1(\Sigma_{g,(b-1)})\rightarrow \P\Gamma_{g,b}$ the point pushing map (see \cite{FM} Chapter 4.2 for details). Also in the case when $b=1$, the symbol $\P\Gamma_{g,0}$ simply denotes the Mapping Class Group $\Gamma_g$. It will be useful to recall that an alternative description of $\P\Gamma_{g,b}$ is as the kernel of an epimorphism $\Gamma_{g,b}\rightarrow S_b$ (the symmetric group on $b$ letters). The proof will proceed by induction, using Theorem \ref{main1}(i) to get started, together with the following (which is an adaptation of Lemma 3.5 of \cite{ABetal} to the case of $\Phi_f$). The proof is included in \S \ref{sec.all} below. We introduce the following notation. Recalling \S 2, let $G$ be a group; we say that $G$ is {\em residually simple} if the collection ${\cal S}=\{G_n\}$ (as in \S 2) consists of finite non-abelian simple groups. \begin{lemma} \label{allenby} Let $N$ be a finitely generated normal subgroup of the group $G$ and assume that $N$ is residually simple.
Then $N\cap \Phi_f(G)=1$. In particular, if $\Phi_f(G/N)=1$, then $\Phi_f(G)=1$.\end{lemma} Given this we now complete the proof. In the cases of $(0,1)$, $(0,2)$ and $(0,3)$, it is easily seen that the subgroup $\Phi_f$ is trivial. Thus we now assume that we are not in those cases. As is well known, the surface groups $\pi_1(\Sigma_{g,b})$ under consideration are residually simple, except in the case of $\pi_1(\Sigma_1)$, which we deal with separately below. For example, this follows by uniformizing the surface by a Fuchsian group with algebraic matrix entries and then using Strong Approximation. Assume first that $g\geq 3$. Then Theorem \ref{main1}(i), Lemma \ref{allenby} and the Birman exact sequence immediately prove that the statement holds for $\P\Gamma_{g,1}$. The remarks above, Lemma \ref{allenby} and induction then prove the result for $\P\Gamma_{g,b}$ whenever $g\geq 3$ and $b>0$. Now assume that $g=0$. The base case of the induction here is $\P\Gamma_{0,4}$. From the above, it is easy to see that $\P\Gamma_{0,3}$ is trivial, and so $\P\Gamma_{0,4}$ is a free group of rank $2$. As such, it follows that $\Phi_f(\P\Gamma_{0,4})=1$. The remarks above, Lemma \ref{allenby} and induction then prove the result for $\P\Gamma_{0,b}$ whenever $b>0$. When $g=1$, $\Gamma_1\cong \Gamma_{1,1}\cong \SL(2,{\bf Z})$ and it is easy to check that $\Phi_f(\SL(2,{\bf Z}))={\bf Z}/2{\bf Z}$ (coinciding with the center of $\SL(2,{\bf Z})$). Now $\P\Gamma_{1,1}=\Gamma_{1,1}$ and so these facts together with Lemma \ref{allenby} and induction then prove the result (i.e. that $\Phi_f(\P\Gamma_{1,b})$ is either trivial or ${\bf Z}/2{\bf Z}$). In the case of $g=2$, by \cite{BB} $\Gamma_2$ is linear, and so \cite{Pl} also proves that $\Phi_f(\Gamma_2)$ is nilpotent. We claim that this forces $\Phi_f(\Gamma_2)={\bf Z}/2{\bf Z}$. To see this we argue as follows. If $\Phi_f(\Gamma_2)$ is finite, it is central by \cite{Lo} Lemma 2.2.
Since $\Phi_f(\Gamma_2)$ contains $\Phi(\Gamma_2)$, which is equal to the center ${\bf Z}/2{\bf Z}$ of $\Gamma_2$ by \cite{Lo} Theorem 3.2, it follows that $\Phi_f(\Gamma_2) = {\bf Z}/2{\bf Z}$. Thus it is enough to show that $\Phi_f(\Gamma_2)$ is finite. Assume that it is not. Then by \cite{Lo} Lemma 2.5, $\Phi_f(\Gamma_2)$ contains a pseudo-Anosov element. Indeed, \cite{Lo} Lemma 2.6 shows that the set of invariant laminations of pseudo-Anosov elements in $\Phi_f(\Gamma_2)$ is dense in projective measured lamination space. This contradicts $\Phi_f(\Gamma_2)$ being nilpotent (e.g. the argument of \cite{Lo} p. 86 constructs a free subgroup). As before, using Lemma \ref{allenby} and by induction via the Birman exact sequence, we can now handle the cases of $\Gamma_{2,b}$ with $b>0$.\qed\\[\baselineskip] \noindent{\bf Remark 1:}~Recall that the {\em hyperelliptic Mapping Class Group} (which we denote by $\Gamma_g^h$) is defined to be the subgroup of $\Gamma_g$ consisting of those elements that commute with a fixed hyperelliptic involution. It is pointed out in \cite{BB} p. 706, that the arguments used in \cite{BB} prove that $\Gamma_g^h$ is linear. Hence once again $\Phi_f(G)$ is nilpotent for every finitely generated subgroup $G$ of $\Gamma_g^h$.\\[\baselineskip] \noindent{\bf Remark 2:}~We make some comments on the case of $\Gamma_{g,b}$ with $b>0$. First, since $\P\Gamma_{g,b}=\ker\{\Gamma_{g,b}\rightarrow S_b\}$ and $\Phi_f(S_b)=1$, if $\P\Gamma_{g,b}$ were known to be residually simple then the argument in the proof of Theorem \ref{main1add} could be used to show that $\Phi_f(\Gamma_{g,b})=1$. Hence we raise here:\\[\baselineskip] \noindent{\bf Question:}~{\em Is $\P\Gamma_{g,b}$ residually simple?}\\[\baselineskip] Another approach to showing that $\Phi_f(\Gamma_{g,b})=1$ is to directly use the representations arising from TQFT. In this case the result of Larsen and Wang \cite{LW} that allows us to prove Zariski density in \cite{MR} needs to be established. 
Given this, the proof (for most $(g,b)$) would then follow as above. \section{Proof of Lemma~\protect{\ref{allenby}}}\label{sec.all} As already mentioned, in what follows we adapt the proof of Lemma 3.5 of \cite{ABetal} to the case of $\Phi_f$. We argue by contradiction and assume that there exists a non-trivial element $x\in N\cap \Phi_f(G)$. By the residually simple assumption, we can find a non-abelian finite simple group $S_0$ and an epimorphism $f:N\rightarrow S_0$ for which $f(x)\neq 1$. Set $K_0=\ker~f$ and let $K_0,K_1,\ldots, K_n$ be the distinct images of $K_0$ under the automorphism group of $N$ (this set being finite since $N$ is finitely generated). Set $K=\bigcap K_i$, a characteristic subgroup of finite index in $N$. As in \cite{ABetal}, it follows from standard finite group theory that $N/K$ is isomorphic to a direct product of finite simple groups (all of which are isomorphic to $S_0=N/K_0$). Now $K$ being characteristic in $N$ implies that $K$ is a normal subgroup of $G$. Put $G_1=G/K$ and let $f_1:G\rightarrow G_1$ denote the canonical homomorphism. Also write $N_1$ for $N/K=f_1(N)$. Now $f_1(x) \in N_1$ and $f_1(x)\in f_1(\Phi_f(G))$, which by Lemma \ref{frattini_under_epi} implies that $f_1(x)\in \Phi_f(G_1)$. Hence $$f_1(x) \in N_1\cap \Phi_f(G_1)~.$$ Following \cite{ABetal}, let $C$ denote the centralizer of $N_1$ in $G_1$, and as in \cite{ABetal}, we can deduce various properties about the groups $N_1$ and $C$. Namely: \medskip \noindent (i) since $N_1$ is a finite group, its centralizer $C$ in $G_1$ is of finite index in $G_1$. \smallskip \noindent (ii) since $N_1$ is a product of non-abelian finite simple groups it has trivial center, and so $C\cap N_1=1$. \smallskip \noindent (iii) since $N_1$ is normal in $G_1$, $C$ is normal in $G_1$. \medskip Using (iii), put $G_2=G_1/C$ and let $f_2:G_1\rightarrow G_2$ denote the canonical homomorphism. Also write $N_2$ for $f_2(N_1)$.
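Property (ii) above rests on the fact that a direct product of non-abelian finite simple groups has trivial center. As an illustrative sanity check (not part of the argument itself), the following sketch verifies by brute force that the smallest non-abelian simple group, $A_5$, has trivial center.

```python
# Sketch: a non-abelian finite simple group has trivial center; here we
# check this by brute force for A_5, the even permutations of 5 points.

from itertools import permutations

def mul(p, q):                       # composition of permutations (tuples)
    return tuple(p[q[i]] for i in range(len(q)))

def sign(p):                         # parity via inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return (-1) ** inv

A5 = [p for p in permutations(range(5)) if sign(p) == 1]
assert len(A5) == 60

center = [z for z in A5 if all(mul(z, g) == mul(g, z) for g in A5)]
assert center == [tuple(range(5))]   # Z(A_5) is trivial
```

Since a non-trivial nilpotent group always has non-trivial center, this is exactly the tension exploited in the final contradiction of the proof below.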
Arguing as before (again invoking Lemma \ref{frattini_under_epi}), we have $$f_2(f_1(x)) \in N_2\cap \Phi_f(G_2)~.$$ Moreover, since $f_1(x)\in N_1$ and $f_1(x)\neq 1$ by construction, we have from (ii) that $f_2f_1(x)\neq 1$. Thus the intersection $$H:= N_2\cap \Phi_f(G_2)$$ is a non-trivial group. As in \cite{ABetal}, we will now get a contradiction by showing that $H$ is both a nilpotent group and a direct product of non-abelian finite simple groups, which is possible only if $H$ is trivial. Here is the argument. From (i) above we deduce that $G_2$ is a finite group, hence $\Phi_f(G_2)=\Phi(G_2)$ is nilpotent by Frattini's theorem. Thus $H<\Phi_f(G_2)$ is nilpotent. On the other hand, $N_2$ is a quotient of $N_1$ and hence a direct product of non-abelian finite simple groups. But $H$ is normal in $N_2$ (since $\Phi_f(G_2)$ is normal in $G_2$). Thus $H$ is a direct product of non-abelian finite simple groups. This contradiction shows that $N\cap\Phi_f(G)=1$, which was the first assertion of the Lemma. The second assertion of the lemma now follows from Lemma \ref{frattini_under_epi}. This completes the proof. \qed\\[\baselineskip] \section{Final comments}\label{sec7} \subsection{An approach to Ivanov's question} We will now discuss an approach to answering Ivanov's question (i.e. the nilpotency of $\Phi_f(G)$ for finitely generated subgroups $G$ of $\Gamma_g$) using the projective unitary representations described in \S 3. In the remainder of this section $G$ is an infinite, finitely generated subgroup of $\Gamma_g$.\\[\baselineskip] The following conjecture is the starting point to this approach. Recall that a subgroup $G$ of $\Gamma_g$ is {\em reducible} if there is a collection of essential simple closed curves $C$ on the surface $\Sigma_g$, such that for any $\beta\in G$ there is a diffeomorphism $\overline{\beta}:\Sigma_g\rightarrow \Sigma_g$ in the isotopy class of $\beta$ so that $\overline{\beta}(C)=C$. Otherwise $G$ is called {\em irreducible}. 
As shown in \cite{Iv1} Theorem 2, an irreducible subgroup $G$ is either virtually an infinite cyclic group generated by a pseudo-Anosov element, or $G$ contains a free subgroup of rank $2$ generated by two pseudo-Anosov elements.\\[\baselineskip] \noindent{\bf Conjecture:}~{\em If $G$ is a finitely generated irreducible non-virtually cyclic subgroup of $\Gamma_g$, then $\Phi_f(G)=1$.}\\[\baselineskip] The motivation for this conjecture is that the irreducible (non-virtually cyclic) hypothesis should be enough to guarantee that the image group $\rho_p(\widetilde{G}) <\Delta_g$ is Zariski dense (with the same adjoint trace-field). Roughly speaking, the irreducibility hypothesis should ensure that there is no reason for Zariski density to fail (i.e. the image is sufficiently complicated). Indeed, in this regard, we note that an emerging theme in linear groups is that random subgroups of linear groups are Zariski dense (see \cite{Ao} and \cite{Ri} for example). Below we discuss a possible approach to proving the Conjecture. The idea now is to follow Ivanov's proof in \cite{Iv1} that the Frattini subgroup is nilpotent. Very briefly, if the subgroup is reducible then we first identify $\Phi_f$ on the pieces and then build up to identify $\Phi_f(G)$. In Ivanov's argument, this involves passing to certain subgroups of $G$ (``pure subgroups''), understanding the Frattini subgroup of these pure subgroups when restricted to the connected components of $S\setminus C$, and then building $\Phi(G)$ from this information.
This uses several statements about the Frattini subgroup, at least one of which (Part (iv) of Lemma 10.2 of \cite{Iv1}) does not seem to extend easily to $\Phi_f$.\\[\baselineskip] \noindent{\bf Remark:}~As a cautionary note to the previous discussion, at present, it still remains conjectural that the image of a fixed pseudo-Anosov element of $\Gamma_g$ under the representations $\overline{\rho}_p$ has infinite order for big enough $p$ (which was raised in \cite{AMU}).\\[\baselineskip] \noindent{\bf An approach to the Conjecture:}\\[\baselineskip] We begin by recalling that in \cite{Pl} Platonov also proves that $\Phi_f(H)$ is nilpotent for every finitely generated linear group $H$. Note that if $G$ is irreducible and virtually infinite cyclic then $G$ is a linear group, and so \cite{Pl} implies that $\Phi_f(G)$ is nilpotent. Thus we now assume that $G$ is irreducible as in the conjecture. Consider $\rho_p(\widetilde{\Phi_f(G)})$: by Lemma \ref{frattini_under_epi} above we deduce that $\rho_p(\widetilde{\Phi_f(G)})$ is a nilpotent normal subgroup of $\rho_p(\widetilde{G})$. Now $\overline{\rho}_p(\Gamma_g)<\PSU(V_p,H_p;\BZ[\zeta_p])$ and it follows from this that (in the notation of \S 3) $\Delta_g < \Lambda_p=\SU(V_p,H_p;\BZ[\zeta_p])$. As discussed in \cite{MR}, $\Lambda_p$ is a cocompact arithmetic lattice in the algebraic group $\SU(V_p,H_p)$. Thus $\rho_p(\widetilde{\Phi_f(G)})< \rho_p(\widetilde{G}) < \Lambda_p$. It follows from general properties of cocompact lattices acting on symmetric spaces (see e.g. \cite{Eb} Proposition 10.3.7) that $\rho_p(\widetilde{\Phi_f(G)})$ contains a maximal normal abelian subgroup of finite index. Now there is a general bound on the index of this abelian subgroup that is a function of the dimension $N_g(p)$. However, in our setting, if the index can be bounded by some fixed constant $R$ independent of $N_g(p)$, then we claim that $\Phi_f(G)$ can at least be shown to be finite. To see this we argue as follows.
Assume that $\Phi_f(G)$ is infinite. Since $G$ is an irreducible subgroup containing a free subgroup generated by a pair of pseudo-Anosov elements, the same holds for the infinite normal subgroup $\Phi_f(G)$ (by standard dynamical properties of pseudo-Anosov elements, see for example \cite{Lo} pp. 83--84). Thus we can find $x,y\in \Phi_f(G)$ a pair of non-commuting pseudo-Anosov elements. Also note that $[x^t,y^t]\neq 1$ for all non-zero integers $t$. From Lemma \ref{frattini_under_epi} we have that $\rho_p(\widetilde{\Phi_f(G)}) < \Phi_f(\rho_p(\widetilde{G}))$ and from the assumption above it therefore follows that $\rho_p(\widetilde{\Phi_f(G)})$ contains a maximal normal abelian subgroup $A_p$ of index bounded by $R$ (independent of $p$). Thus, setting $R_1=R!$, we have $[\overline{\rho}_p(x^{R_1}),\overline{\rho}_p(y^{R_1})]=1$ for all $p$. However, as noted above, $[x^{R_1},y^{R_1}]$ is a non-trivial element of $G$, and by asymptotic faithfulness this cannot be mapped trivially for all $p$. This is a contradiction.\\[\baselineskip] \subsection{The profinite completion of $\Gamma_g$} We remind the reader that the profinite completion $\widehat{\Gamma}$ of a group $\Gamma$ is the inverse limit of the finite quotients $\Gamma/N$ of $\Gamma$. (The maps in the inverse system are the obvious ones: if $N_1 <N_2$ then $\Gamma/N_1\rightarrow \Gamma/N_2$.) The Frattini subgroup $\Phi(G)$ of a profinite group $G$ is defined to be the intersection of all maximal open subgroups of $G$. Open subgroups are of finite index, and if $G$ is finitely generated as a profinite group, then Nikolov and Segal \cite{NS} show that finite index subgroups are always open. Hence we can simply take $\Phi(G)$ to be the intersection of all maximal finite index subgroups of $G$. 
Now if $\Gamma$ is a finitely generated residually finite discrete group, the correspondence theorem between finite index subgroups of $\Gamma$ and its profinite completion (see \cite{RZ} Proposition 3.2.2) shows that $\overline{\Phi_f(\Gamma)} < \Phi(\widehat{\Gamma})$. There is a well-known connection between the center of a group $G$, denoted $Z(G)$ (profinite or otherwise), and $\Phi(G)$. We include a proof for completeness. Note that for a profinite group $G$, the subgroups $\Phi(G)$ and $Z(G)$ are closed in $G$, and by \cite{NS} the commutator subgroup $[G,G]$ is also a closed subgroup. \begin{lemma} \label{profinitecenter} Let $G$ be a finitely generated profinite group. Then $\Phi(G) > Z(G) \cap [G,G]$.\end{lemma} \noindent{\bf Proof:}~Let $U$ be a maximal finite index subgroup of $G$. If $Z(G)$ is contained in $U$, then certainly $Z(G)\cap [G,G] < U$. Otherwise, $<Z(G),U>=G$ by maximality. It also easily follows that $U$ is a normal subgroup of $G$. But then $G/U = Z(G)U/U \cong Z(G)/(U\cap Z(G))$, which is abelian, and so $[G,G]<U$. In either case $Z(G)\cap [G,G] < U$, and since this holds for every maximal finite index subgroup $U$, we deduce that $\Phi(G) > Z(G) \cap [G,G]$ as required.\qed\\[\baselineskip] We now turn to the following questions, which were also part of the motivation of this note. \\[\baselineskip] \noindent{\bf Question 1:}~{\em For $g\geq 3$, is $Z(\widehat{\Gamma}_g)=1$?} \medskip \noindent{\bf Question 2:}~{\em For $g\geq 3$, is $Z(\widehat{\cal I}_g)=1$?}\\[\baselineskip] Regarding Question 1, it is shown in \cite{HM} that the completion of $\Gamma_g$ arising from the congruence topology on $\Gamma_g$ has trivial center. Regarding Question 2, if $Z(\widehat{\cal I}_g)=1$, then the profinite topology on $\Gamma_g$ will induce the full profinite topology on ${\cal I}_g$ (see \cite{LS} Lemma 2.6).
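The containment in Lemma \ref{profinitecenter} can be checked directly for a small finite group, where $\Phi(G)$ is the classical Frattini subgroup. The following brute-force sketch (the choice $G=D_4$ is illustrative) computes $\Phi(G)$ as the intersection of the maximal subgroups and compares it with $Z(G)\cap[G,G]$.

```python
# Sketch: checking Phi(G) > Z(G) ∩ [G,G] for the finite group G = D_4,
# where in fact Z(G) = [G,G] = Phi(G) = {e, r^2}.

from itertools import combinations, product

def mul(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

e, r, s = (0, 1, 2, 3), (1, 2, 3, 0), (0, 3, 2, 1)
G, frontier = set(), {e, r, s}
while frontier:                       # generate G = <r, s>, |G| = 8
    new = {mul(a, b) for a, b in product(G | frontier, repeat=2)}
    frontier = new - G
    G |= new
Glist = sorted(G)

# all subgroups by brute force: a finite subset containing e and
# closed under the product is a subgroup
subgroups = [set(c) for k in range(1, 9) for c in combinations(Glist, k)
             if e in c and all(mul(a, b) in c for a in c for b in c)]
maximal = [S for S in subgroups if len(S) < 8 and
           not any(len(T) < 8 and S < T for T in subgroups)]

frattini = set(G)                     # intersection of maximal subgroups
for S in maximal:
    frattini &= S

inv = {g: next(h for h in G if mul(g, h) == e) for g in G}
Z = {z for z in G if all(mul(z, g) == mul(g, z) for g in G)}
comm = {mul(mul(a, b), mul(inv[a], inv[b])) for a in G for b in G}

assert Z & comm <= frattini           # the containment of the lemma
assert frattini == {e, mul(r, r)}     # here it is an equality
```

For $D_4$ the three maximal subgroups all have order $4$ and intersect in the center $\{e,r^2\}$, so the containment of the lemma is an equality in this example.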
Motivated by this and Lemma \ref{profinitecenter} we can also ask:\\[\baselineskip] \noindent{\bf Question 1':}~{\em For $g\geq 3$, is $\Phi(\widehat{\Gamma}_g)=1$?} \medskip \noindent{\bf Question 2':}~{\em For $g\geq 3$, is $\Phi(\widehat{\cal I}_g)=1$?}\\[\baselineskip] \noindent Although the results in this paper do not bear directly on Questions 1, 1', 2 and 2', we note that since $\Gamma_g$ is finitely generated and perfect for $g\geq 3$, it follows that $\widehat{\Gamma}_g$ is also perfect and hence $$ Z(\widehat{\Gamma}_g)<\Phi(\widehat{\Gamma}_g)$$ by Lemma \ref{profinitecenter}. As remarked above, the correspondence theorem gives $$\overline{\Phi_f(\Gamma_g)} < \Phi(\widehat{\Gamma}_g)~.$$ Thus our result that $\Phi_f(\Gamma_g)=1$ for $g\geq 3$ (which implies $\overline{\Phi_f(\Gamma_g)}=1$) is consistent with triviality of $Z(\widehat{\Gamma}_g)$ (and similarly for $Z(\widehat{\cal I}_g)$).
https://arxiv.org/abs/2208.02726
Algebraic Experimental Design: Theory and Computation
Over the past several decades, algebraic geometry has provided innovative approaches to biological experimental design that resolved theoretical questions and improved computational efficiency. However, guaranteeing uniqueness and perfect recovery of models are still open problems. In this work we study the problem of uniqueness of wiring diagrams. We use as a modeling framework polynomial dynamical systems and utilize the correspondence between simplicial complexes and square-free monomial ideals from Stanley-Reisner theory to develop theory and construct an algorithm for identifying input data sets $V\subset \mathbb F_p^n$ that are guaranteed to correspond to a unique minimal wiring diagram regardless of the experimental output. We apply the results on a tumor-suppression network mediated by epidermal derived growth factor receptor and demonstrate how careful experimental design decisions can lead to a unique minimal wiring diagram identification. One of the insights of the theoretical work is the connection between the uniqueness of a wiring diagram for a given $V\subset \mathbb F_p^n$ and the uniqueness of the reduced Gröbner basis of the polynomial ideal $I(V)\subset \mathbb F_p[x_1,\ldots, x_n]$. We discuss existing results and introduce a new necessary condition on the points in $V$ for uniqueness of the reduced Gröbner basis of $I(V)$. These results also point to the importance of the relative proximity of the experimental input points on the number of minimal wiring diagrams, which we then study computationally. We find that there is a concrete heuristic way to generate data that tends to result in fewer minimal wiring diagrams.
\section{Introduction} The abundance of numerous substantial data sets from laboratory experiments and myriad diverse methods for modeling and analysis render network inference a critical component of systems biology research; for a recent example, see~\cite{grand}. A vital process linked to inference is experimental design, which optimizes data generation and collection for effective prediction of network structure. While traditional experimental design is rooted in statistical methods~\cite{litwin}, algebraic geometry has offered innovative approaches to experimental design \cite{wynn,he-unique-gbs}. In fact a fractional factorial design can be viewed as a set of $n$-tuples over a finite field and a special class of discrete models called \emph{polynomial dynamical systems} can be used to capture all models which fit the design points for a network with $n$ nodes. Associated to a polynomial dynamical system is a directed graph called the \emph{wiring diagram}, which encodes the topology (connectivity) of the network. While the wiring diagram represents only a static picture of the network, knowledge of the connectivity is crucial for studying network robustness, regulation, and control strategies in order to develop, for example, therapeutic interventions~\cite{tan2013, Wang:2013aa} and drug delivery strategies~\cite{yousefi2012,Lee:2012aa}, or to understand the mechanisms for the spread of an infectious disease~\cite{Madrahimov:2013dq,PMID:20478257}. Moreover, it has been demonstrated that the role of network connectivity goes beyond static properties and can in fact dictate certain dynamical properties and be used for their control~\cite{jarrah2010dynamics,campbell,veliz2011reduction,zamal,wu,albert,sontag2008effect,murrugarra15, murrugarra19}. In this work, we will develop theory and algorithms for experimental design which reduce the size of the space of possible wiring diagrams. 
The central object of study is a \emph{minimal set} for a node $x$, that is, a set of variables representing the incoming edges to $x$ in the wiring diagram. Each minimal set, or \emph{minset} for short, has the property that there exists a polynomial in those variables that fits the data (design points) and there is no such polynomial for any proper subset. Specifically, we aim to find properties on input-output data $(V,T)$ that guarantee that it has a unique minimal set. In this way, we contribute a number of distinct results. When only the design points~$V$, referred to as \emph{inputs}, are known, we prove a necessary and sufficient condition on $V$ (Theorem~\ref{unique-minset}); a necessary condition on $V$ (Theorem~\ref{diagonal}); and a sufficient condition on $V$ (Corollary~\ref{ugb-minset}). Each of these conditions on $V$ guarantees that for any corresponding output assignment~$T$, the input-output data set $(V,T)$ has a unique minimal set. Furthermore, when both inputs~$V$ and outputs~$T$ are known, we provide a sufficient condition on polynomial functions which fit $(V,T)$ in Theorem~\ref{unique-nf-minset}. In parallel, this work has uncovered interesting results for ideals of points. While it is known that for every monomial order $\prec$ there is a unique reduced Gr\"obner basis $G_\prec$ for $I(V)$, there are cases when the Gr\"obner basis is the same across all monomial orders: that is, there exists a generating set $G$ for $I(V)$ such that for all monomial orders $\prec$ the associated reduced Gr\"obner basis $G_\prec = G$. In this case we say that \emph{$I(V)$ has a unique reduced Gr\"obner basis for all monomial orders.} We prove a necessary condition on fixed inputs $V$ (Corollary~\ref{ugb-diag-free}); a necessary condition on arbitrary outputs $T$ (Corollary~\ref{ugb-minset}); and a necessary and sufficient condition on polynomial functions which fit $(V,T)$ for any output $T$ (Corollary~\ref{nf-ugb}).
In an effort to provide guidance for designing experiments, we performed computational experiments that suggest the following rubric: having data with small Hamming distance between points results in fewer minsets than data with large Hamming distance between points. Moreover, we provide computational evidence that design points generated using a \emph{small-distance scheme} result in fewer minsets than randomly generated points. The paper is organized as follows. We provide the relevant background in \Cref{sec:background}. Theoretical results are in \Cref{sec:main}, while computational results are in \Cref{sec:experiments}. We close with a discussion in \Cref{sec:conclusions}. \section{Background} \label{sec:background} Much of the language in this section is taken from \cite{macauley-stigler}. Discrete models have been used extensively and there is evidence that they provide a good framework for a variety of applications, e.g. \cite{davidson, albert, thomas91, laubenbacher04, dimitrova-zardecki}. Such models are collections of functions defined over a finite state set $X$ and can be described using polynomials when the state set size is constrained to be a power of a prime. In the latter case, discrete models are often referred to as polynomial models and can be written as $n$-tuples of polynomial functions, one for each node in the network, i.e. $f=(f_1,\ldots,f_n): X^n\to X^n$, where $f_i:X^n\to X$ is a polynomial which determines the behavior of node (variable) $x_i$. Examples of polynomial models are Boolean networks ($X=\mathbb F_2$) and more generally \emph{polynomial dynamical systems} (PDSs) over finite fields ($X=\mathbb F_p$). Specifically, a \emph{polynomial dynamical system} over $F=\mathbb F_p$ is a polynomial map $f:F^n\rightarrow F^n$ where $f=(f_1,\ldots,f_n)$ and each coordinate function $f_i:F^n\rightarrow F$ is a polynomial in $F[x_1,\ldots ,x_n]$.
We say that $f$ \emph{fits} the input-output data $D=\{(s_1,t_1),\ldots ,(s_m,t_m)\}\subset F^n\times F^n$ if $f(s_j)=t_j$ for each $1\leq j\leq m$. The \emph{monomials} or \emph{terms} of a polynomial model represent interactions among the nodes in a network, whereas the coefficient of a monomial can be interpreted as the strength or weight of the associated interaction. The \emph{support} of a polynomial $f\in k[x_1,\ldots,x_n]$, denoted $supp(f)$, is the collection of variables that appear in $f$. \begin{definition}\label{wd} A \emph{wiring diagram} of a PDS $f=(f_1,\ldots,f_n)$ is a directed graph $W=(L,E)$ where $|L|=n$, the vertices are labeled as the $n$ variables, and there is a directed edge $x_i\rightarrow x_j$ in $E$ iff $x_i\in supp(f_j)$. \end{definition} Monomials in the polynomial ring $\mathbb{F}_p[x_1,\ldots ,x_n]$ are written as $x^\alpha=x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_n^{\alpha_n}$, with exponent vector $\alpha=(\alpha_1,\ldots ,\alpha_n)\in \mathbb Z_{\geq 0}^n$. A \emph{monomial ideal} $I\subseteq \mathbb{F}_p[x_1,\dots,x_n]$ is an ideal generated by monomials, written as $I=\langle x^{\alpha},x^{\beta},\dots\rangle$. A monomial $x^\alpha$ is \emph{square-free} if each $\alpha_i\in\{0,1\}$. A monomial ideal is a \emph{Stanley-Reisner ideal} if it can be generated by square-free monomials. A \emph{simplicial complex} over a finite set $X$ is a collection $\Delta$ of subsets of $X$ that are closed under the operation of taking subsets. That is, if $\beta\in\Delta$ and $\alpha\subseteq\beta$, then $\alpha\in\Delta$. The elements in $\Delta$ are called \emph{simplices} or \emph{faces}. Given an ideal $I$, we define the simplicial complex \[ \Delta_{I^c}=\{\alpha\mid x^\alpha\not\in I\}, \] and given a simplicial complex $\Delta$ on $X=[n]=\{1,\ldots,n\}$, we define the square-free monomial ideal \[ I_{\Delta^C}=\<x^\alpha\mid\alpha\not\in\Delta\>, \] which is the Stanley-Reisner ideal of $\Delta$.
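To make the correspondence concrete, the following sketch computes the minimal non-faces of a small simplicial complex on $[3]$, i.e. the minimal square-free generators of its Stanley-Reisner ideal $I_{\Delta^C}$. The complex used here is illustrative, not one from the text.

```python
# Sketch: minimal non-faces of a simplicial complex = minimal square-free
# generators of its Stanley-Reisner ideal.  Delta below has facets
# {1,2} and {3}, so I = <x1*x3, x2*x3>.

from itertools import combinations

n = 3
faces = {frozenset(), frozenset({1}), frozenset({2}), frozenset({3}),
         frozenset({1, 2})}            # closed under taking subsets

nonfaces = [frozenset(a) for k in range(1, n + 1)
            for a in combinations(range(1, n + 1), k)
            if frozenset(a) not in faces]
minimal = [a for a in nonfaces
           if not any(b < a for b in nonfaces)]

assert set(minimal) == {frozenset({1, 3}), frozenset({2, 3})}
```

The non-faces are $\{1,3\}$, $\{2,3\}$ and $\{1,2,3\}$; only the first two are minimal, giving the generators $x_1x_3$ and $x_2x_3$.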
Consider a set $V=\{\mathbf{s_1},\ldots,\mathbf{s_m}\}\subseteq \mathbb F_p^n$ of distinct input vectors, and a multiset $T=\{t_1,\ldots, t_m\}$ of output values from $\mathbb F_p$. We call \[ \mathcal{D}= \{(\mathbf{s_1},t_1),\dots,(\mathbf{s_m},t_m)\}\subseteq\mathbb F_p^n\times\mathbb F_p \] the \emph{input-output data set}, where inputs may be stimuli applied to the network and outputs are its responses. A function $f\colon\mathbb F_p^n\to\mathbb F_p$ is said to \emph{fit the data} if $f(\mathbf{s})=t$ for all $(\mathbf{s},t)\in\mathcal{D}$. The \emph{model space} of $\mathcal{D}$ is the set of all functions that fit the data, i.e. \[ \Mod(\mathcal{D})=\{f\colon\mathbb{F}_p^n\to\mathbb{F}_p\mid f(\mathbf{s})=t,\;\text{for all }(\mathbf{s},t)\in\mathcal{D}\}. \] For ease of presentation, we will focus on the wiring diagram of an individual node~$x_i$, that is, the edge set of the graph will be $E_{x_i}=\{(t,x_i)\mid t\in \supp(f_i)\}$. The union of the wiring diagrams of all nodes is, of course, the entire wiring diagram $W$. In \cite{JLSS}, the authors developed an algorithm for constructing all wiring diagrams based on sets of input-output data. The method encoded certain coordinate changes in input data as square-free monomials, generated a monomial ideal from these monomials, and used Stanley-Reisner theory to decompose the ideal into primary components. These primary components were named \emph{minimal sets} or \emph{minsets} for short. A minset is a set $S$ of variables so that there is a function in terms of those variables that fits the given data and there is no such function on any proper subset of $S$ (a formal definition will be presented as Definition~\ref{minset}). A wiring diagram for a specific node~$x$ can be constructed by drawing edges from the variables in the minset towards $x$. Details are provided in the following definitions and results from~\cite{JLSS}.
For every pair of distinct input vectors $\mathbf{s}=(s_1,\dots,s_n)$ and $\mathbf{s}'=(s'_1,\dots,s'_n)$ in $V$, we can encode the coordinates in which they differ by a square-free monomial \[ m(\mathbf{s},\mathbf{s}')=\prod_{s_i\neq s'_i}x_i. \] Let $\mathcal{M}(V)$ be the set of all such monomials from $V$, that is, \begin{equation}\label{sf-mon} \mathcal{M}(V)=\{m(\mathbf{s},\mathbf{s}')\mid\mathbf{s},\mathbf{s}'\in V,\;\mathbf{s}\neq\mathbf{s}'\}. \end{equation} If distinct input vectors $\mathbf{s},\mathbf{s}'\in V$ have different output values, $t\neq t'$, then any function $f\colon\mathbb F_p^n\to\mathbb F_p$ satisfying $f(\mathbf{s})=t$ and $f(\mathbf{s}')=t'$ must depend on at least one of the variables in $m(\mathbf{s},\mathbf{s}')$. In this case, we say that the support of $m(\mathbf{s},\mathbf{s}')$, i.e., the set of variables that appear in it, is a \emph{non-disposable set} of $\mathcal{D}$. For a fixed data set $\mathcal{D}$, the non-disposable sets in the power set $2^{[n]}$, where $[n]=\{1,\ldots, n\}$, are clearly closed under unions. We call all other sets \emph{disposable}, i.e. $\alpha\subseteq [n]$ is a disposable set of $\mathcal{D}$ if and only if there is some $f\in\Mod(\mathcal{D})$ that depends only on the variables \emph{not} in~$\alpha$. Equivalently, its support satisfies $supp(f)\subseteq\overline{\alpha}=[n]\setminus\alpha$. It is easy to see that disposable sets are closed under intersections. As such we can define the abstract \emph{simplicial complex of disposable sets} of~$\mathcal{D}$ to be \[ \Delta_\mathcal{D}=\{\alpha\subseteq[n]\mid \alpha\text{ is a disposable set of } \mathcal{D}\}. \] If we canonically identify square-free monomials with subsets of $[n]$, then the \emph{Alexander dual} of~$\Delta_\mathcal{D}$ is the Stanley-Reisner ideal \[ I_{\Delta^c_\mathcal{D}}=\<x^\alpha\mid \alpha\not\in\Delta_\mathcal{D}\>=\<m(\mathbf{s},\mathbf{s}')\mid t\neq t'\>, \] which is called the \emph{ideal of non-disposable sets}. 
By the Alexander duality, the simplicial complex of disposable sets is \[ \Delta_\mathcal{D}=\{\alpha\subseteq[n]\mid \alpha\not\in I_{\Delta^c_\mathcal{D}}\}. \] Since $I_{\Delta^c_\mathcal{D}}$ is square-free, it has a unique primary decomposition, where the primary components are prime ideals generated by the variables in the complements of the facets (maximal faces) of $\Delta_\mathcal{D}$ (i.e., complements of maximal disposable sets). For a facet $\alpha\subseteq[n]$, denote the corresponding primary component by $\mathfrak{p}^{\overline{\alpha}}$. For example, if $n=5$ and $\alpha=\{2,5\}$, then $\mathfrak{p}^{\overline{\alpha}}=\<x_1,x_3,x_4\>$. The primary decomposition is thus \[ I_{\Delta^c_\mathcal{D}}=\bigcap_{\alpha\in\Delta_\mathcal{D}}\mathfrak{p}^{\overline\alpha}=\bigcap_{\substack{\alpha\in\Delta_\mathcal{D} \\ \alpha\text{ maximal}}}\mathfrak{p}^{\overline\alpha}. \] Over a field, being prime and being primary are equivalent properties for square-free monomial ideals. The ideal $I_{\Delta^c_\mathcal{D}}$ is prime if and only if it has only one primary component, which means that there is a unique maximal disposable set (i.e., a facet) $\alpha$ in $\Delta_\mathcal{D}$, and so \[ I_{\Delta_\mathcal{D}^c}=\mathfrak{p}^{\overline{\alpha}}=\<x_i\mid i\not\in\alpha\>. \] Thus the set $\mathcal{G}=\{x_i\mid i\not\in\alpha\}$ is a Gr\"obner basis for $I_{\Delta_\mathcal{D}^c}$. The converse holds as well: if a reduced Gr\"obner basis for $I_{\Delta_\mathcal{D}^c}$ has only single-variable monomials, then the ideal must be prime. We summarize this next. \begin{theorem}\label{equiv} The simplicial complex of disposable sets $\Delta_\mathcal{D}$ has a unique facet if and only if the ideal of non-disposable sets $I_{\Delta_\mathcal{D}^c}$ is prime. \end{theorem} By the Alexander duality, the primary components of $I_{\Delta^c_\mathcal{D}}$ are in bijection with the complements of the maximal disposable sets.
Such a complement $\overline{\alpha}$ is precisely a minimal subset of~$[n]$ on which a function in the model space $\Mod(\mathcal{D})$ can depend. This motivates the following definition. \begin{definition}[\cite{JLSS}]\label{minset} The complement $\overline{\alpha}$ of a maximal disposable set $\alpha$ in $\Delta_\mathcal{D}$ is called a \emph{minimal set}, or \emph{minset} for short. \end{definition} Each minset is a set of variables on which a polynomial can depend based on the data, and one that is minimal with respect to inclusion. These variables also encode the wiring diagram of a minimal number of edges incident to the node under consideration. We call such wiring diagrams \emph{minimal} as well. We will use the following tumor-suppression network mediated by epidermal derived growth factor receptor (EGFR) \cite{Steinway2016} as a running example. We consider Boolean and non-Boolean data for the gene network of three parameters (EGFR, Rasgap, and miR221) and three variables (Rkip, Kras, and Raf1), and an outcome of this network is proliferation or suppression of a tumor. For illustration purposes, we focus on identifying the direct regulators of Raf1 from among the candidates Rasgap, Rkip, and Kras. % \begin{example}\label{unsigned-egfr} Suppose we want to determine which nodes Raf1 depends on -- Rasgap, Rkip, or Kras -- based solely on experimental data. Suppose experiments are performed to generate the following input-output data (parentheses and commas are suppressed for readability): $$\mathcal{D}=\{(\mathbf{s}_1,t_1),(\mathbf{s}_2,t_2),(\mathbf{s}_3,t_3),(\mathbf{s}_4,t_4)\} =\{(000,1),(101,1),(110,0),(011,1)\},$$ where $\mathbf{s}_i=(Rasgap, Rkip, Kras)=(x_1,x_2,x_3)$ and $t_i$ is the corresponding value of Raf1. 
That is, we want to determine the minimal sets of variables that appear in the unknown function $f\colon\mathbb F_2^3\to\mathbb F_2$ which determines the behavior of Raf1 based on input from the other three nodes and fits the experimental data, that is, $f(000)=1$, $f(101)=1$, $f(110)=0$, and $f(011)=1$. Since $t_1=t_2=t_4\neq t_3$, we compute $m(\mathbf{s}_1,\mathbf{s}_3)=x_1x_2$, $m(\mathbf{s}_2,\mathbf{s}_3)=x_2x_3$, and $m(\mathbf{s}_3,\mathbf{s}_4)=x_1x_3$. The ideal of non-disposable sets is thus $I_{\Delta_\mathcal{D}^c}=\<x_1x_2, x_2x_3, x_1x_3\>$ and has primary decomposition $\<x_1, x_2\>\cap\<x_1,x_3\>\cap\<x_2,x_3\>$, corresponding to these minimal wiring diagrams: \[ \tikzstyle{v} = [draw,inner sep=0pt, minimum size=3mm] \tikzstyle{activ} = [draw, -stealth] \begin{tikzpicture}[scale=1] \node (1) at (0,2) {\small Rasgap}; \node (2) at (1,2) {\small Rkip}; \node (3) at (2,2) {\small Kras}; \node (i) at (1,0) {\small Raf1}; \draw [activ] (1) to[bend right,shorten >= 2pt] (i); \draw [activ] (2) to[shorten >= 2pt] (i); \end{tikzpicture} \hspace{8mm} \begin{tikzpicture}[scale=1] \node (1) at (0,2) {\small Rasgap}; \node (2) at (1,2) {\small Rkip}; \node (3) at (2,2) {\small Kras}; \node (i) at (1,0) {\small Raf1}; \draw [activ] (1) to[bend right,shorten >= 2pt] (i); \draw [activ] (3) to[bend left,shorten >= 2pt] (i); \end{tikzpicture} \hspace{8mm} \begin{tikzpicture}[scale=1] \node (1) at (0,2) {\small Rasgap}; \node (2) at (1,2) {\small Rkip}; \node (3) at (2,2) {\small Kras}; \node (i) at (1,0) {\small Raf1}; \draw [activ] (2) to[shorten >= 2pt] (i); \draw [activ] (3) to[bend left,shorten >= 2pt] (i); \end{tikzpicture} \] \end{example} The limited information that these experimental data support is that any two of the three nodes can influence Raf1. If, in addition, we perform an experiment where the input nodes are all expressed and Raf1 happens to also be expressed as a result, this will add to $\mathcal{D}$ the data point $(\mathbf{s}_5,t_5)=(111,1)$.
As a result, the monomial $x_3$ (coming from $m(\mathbf{s}_3,\mathbf{s}_5)$, since $\mathbf{s}_3$ and $\mathbf{s}_5$ differ only in the third coordinate) will be added to $I_{\Delta_\mathcal{D}^c}$, whose primary decomposition now becomes $\<x_1, x_3\>\cap\<x_2,x_3\>$, eliminating the leftmost wiring diagram from the figure above. Since $x_3$ is in both primary ideals, we are now confident that Kras affects Raf1. While we still do not know if Rasgap and Rkip participate in the regulation of Raf1, this may be sufficient if the role of Kras is the focus of the experimental work. On the other hand, if instead of adding $(\mathbf{s}_5,t_5)=(111,1)$, we added $(\mathbf{s}'_5,t'_5)=(010,0)$, the new monomials added to $I_{\Delta_\mathcal{D}^c}$ would be $x_2$, $x_1x_2x_3$, and $x_3$. Now the primary decomposition becomes $\<x_2,x_3\>$, reducing the possible wiring diagrams to a unique one (rightmost above) and completely determining the regulation of Raf1. This example shows that some input-output data sets result in multiple models, whereas well-chosen data sets can reduce the number of possible wiring diagrams and even lead to a unique model. \section{Main results} \label{sec:main} \subsection{Theoretical Results} The one-to-one correspondence between the minsets of $\Delta_{\mathcal{D}}$ and the minimal wiring diagrams of $\Mod(\mathcal{D})$ implies that finding input sets which uniquely identify the minimal wiring diagram underlying a system is equivalent to finding data sets $\mathcal{D}$ whose corresponding simplicial complexes $\Delta_{\mathcal{D}}$ have a unique minset. The theory of minsets developed in \cite{JLSS, veliz-cuba-signed-ms} establishes methods for generating all minimal wiring diagrams for a given input-output data set $\mathcal{D}$. In practice, however, one does not know the experimental output~$T$ \textit{a priori}.
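Putting the two combinatorial steps together, the minsets of a data set can be computed directly. A self-contained Python sketch (our own illustration; 0-based indices, so coordinate $0$ is $x_1$) applied to the running example:

```python
from itertools import combinations

def minsets(data):
    """Minsets of a data set: minimal hitting sets of the supports of the
    monomials m(s, s') over all output-distinguishing pairs."""
    gens = {frozenset(i for i, (a, b) in enumerate(zip(s, s2)) if a != b)
            for (s, t), (s2, t2) in combinations(data, 2) if t != t2}
    n = len(data[0][0])
    found = []
    for r in range(n + 1):           # smallest candidates first
        for cand in combinations(range(n), r):
            c = set(cand)
            if all(c & g for g in gens) and not any(f <= c for f in found):
                found.append(c)
    return sorted(sorted(f) for f in found)

# running example: inputs (Rasgap, Rkip, Kras), output Raf1
D = [((0, 0, 0), 1), ((1, 0, 1), 1), ((1, 1, 0), 0), ((0, 1, 1), 1)]
print(minsets(D))                     # → [[0, 1], [0, 2], [1, 2]]
print(minsets(D + [((0, 1, 0), 0)]))  # → [[1, 2]]  (Rkip and Kras)
```

The extra data point $(010,0)$ collapses the three candidate wiring diagrams to the single minset $\{x_2,x_3\}$, as in the text.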
Therefore, it is desirable to develop theory and algorithms which allow us to design experiments whose output is guaranteed to reduce the size of the wiring-diagram space of the system without making assumptions about the unknown experimental outcome. In the next section, we provide necessary and sufficient conditions on the input data set, computationally feasible to check, which guarantee that the identified minset is unique regardless of the output. \subsubsection{Identifying input sets corresponding to a unique minset} Based on Theorem~\ref{equiv}, our goal is to efficiently identify input sets whose ideal of non-disposable sets is prime. Below we construct an algorithm for the identification of such input sets. Let $V=\{\mathbf{s}_1,\ldots,\mathbf{s}_r\}\subseteq \mathbb F_p^n$ be an input set of distinct vectors. We define the multiset \begin{equation}\label{pairs} M=\left\{m(\mathbf{s}_i,\mathbf{s}_j)\mid i,j\in [r],\;1\leq i<j\leq r\right\}, \end{equation} where ${\displaystyle m(\mathbf{s}_i,\mathbf{s}_j)=\prod_{\mathbf{s}_{ik}\neq \mathbf{s}_{jk}}x_k}$ are square-free monomials which record the coordinates where each pair of points in $V$ differ. The number of pairs in this set is $|M|=(r-1)+(r-2)+\cdots +1=\frac{(r-1)r}{2}$ since, unlike in (\ref{sf-mon}), monomials are repeated if they come from different input pairs. For example, if $m(s_1,s_2)=m(s_2,s_6)=x_2x_5$, then $x_2x_5$ will be listed twice and it will be recorded to which input pairs it corresponds. Let $M_{MV}$ be the list of multivariate monomials in $M$, again keeping track of which pairs of points in $V$ yielded each monomial. For each $m(s_a,s_b)\in M_{MV}$, let $m(s_{i_1},s_{j_1}),\ldots,m(s_{i_\ell},s_{j_\ell})$ be the single-variate monomials in $M$ that divide $m(s_a,s_b)$. \begin{theorem}\label{unique-minset} Let $V=\{\mathbf{s}_1,\ldots,\mathbf{s}_r\}\subseteq \mathbb F_p^n$ be an input set of distinct vectors, and $M$ and $M_{MV}$ be defined as above.
Let $t_k$ denote the unknown output of $s_k$. There exists an output assignment $T$ for which $I_{\Delta^c}$ is not prime (and so there are multiple minsets) if and only if there is a monomial in $M_{MV}$ for which the following system is consistent. \begin{eqnarray}\label{sys} t_a&\ne &t_b\nonumber \\ t_{i_1}&=&t_{j_1}\\ & \vdots & \nonumber\\ t_{i_\ell}&=&t_{j_{\ell}}\nonumber \end{eqnarray} \end{theorem} \begin{proof} The system is set up so that, if it is consistent, $\mathcal M(V)$ from (\ref{sf-mon}) will contain at least one multivariate monomial with no single-variate monomial dividing it. In that case, the primary decomposition of the ideal generated by the monomials in $\mathcal M(V)$ will have more than one primary component. \end{proof} Notice that the equations in (\ref{sys}) form a homogeneous linear system whose coefficient matrix is sparse, so checking its consistency is computationally easy. As soon as a consistent system is found for an element of $M_{MV}$, we can stop and conclude that there exists a $T$ for which $I_{\Delta^c}$ is not prime. If no such system is found, then for any $T$ the Gr\"obner basis of $I_{\Delta^c}$ consists entirely of single-variate monomials, and so $I_{\Delta^c}$ is prime for all output assignments. To illustrate the process above, consider the following examples. \begin{example} Consider the following non-Boolean input data for the EGFR network in~\cite{Steinway2016}, where $\mathbf{s}_i=(Rasgap, Rkip, Kras)=(x_1,x_2,x_3)$ and $t_i$ is the corresponding value of Raf1: $V=\{\mathbf{s}_1,\mathbf{s}_2,\mathbf{s}_3,\mathbf{s}_4\}=\{(010),(110),(210),(212)\}\subseteq \mathbb F_3^3$. The set $M$ contains the monomials $m(\mathbf{s}_1,\mathbf{s}_2)=x_1, m(\mathbf{s}_1,\mathbf{s}_3)=x_1, m(\mathbf{s}_1,\mathbf{s}_4)=x_1x_3, m(\mathbf{s}_2,\mathbf{s}_3)=x_1, m(\mathbf{s}_2,\mathbf{s}_4)=x_1x_3,$ $m(\mathbf{s}_3,\mathbf{s}_4)=x_3$. The multivariate monomials are $m(\mathbf{s}_1,\mathbf{s}_4)=x_1x_3$ and $m(\mathbf{s}_2,\mathbf{s}_4)=x_1x_3$.
The two corresponding systems below are both inconsistent and so $V$ has a unique minset for any~$T$. \begin{multicols}{2} \begin{eqnarray} t_1&\ne &t_4\nonumber\\ t_1&=&t_2\nonumber\\ t_1&=&t_3\nonumber\\ t_2&=&t_3\nonumber\\ t_3&=&t_4\nonumber \end{eqnarray} \begin{eqnarray} t_2&\ne &t_4\nonumber\\ t_1&=&t_2\nonumber\\ t_1&=&t_3\nonumber\\ t_2&=&t_3\nonumber\\ t_3&=&t_4\nonumber \end{eqnarray} \end{multicols} The algorithm determines that regardless of the experimental output, this input set $V$ is guaranteed to result in a unique minimal wiring diagram for Raf1. (Notice that while unique for any output, the wiring diagram will vary based on the output.) Now consider the input data set $U=\{\mathbf{s}_1,\mathbf{s}_2,\mathbf{s}_3,\mathbf{s}_4\}=\{(211),(002),(200),(201)\}\subseteq \mathbb F_3^3$. The monomials in $M$ are $m(\mathbf{s}_1,\mathbf{s}_2)=x_1x_2x_3, m(\mathbf{s}_1,\mathbf{s}_3)=x_2x_3, m(\mathbf{s}_1,\mathbf{s}_4)=x_2, m(\mathbf{s}_2,\mathbf{s}_3)=x_1x_3, m(\mathbf{s}_2,\mathbf{s}_4)=x_1x_3,$ and $m(\mathbf{s}_3,\mathbf{s}_4)=x_3$. Based on the multivariate monomial $m(\mathbf{s}_1,\mathbf{s}_2)=x_1x_2x_3$, we form the consistent system $$t_1\ne t_2, \ \ t_1=t_4, \ \ t_3=t_4.$$ The algorithm identifies that there exist output assignments for which $I_{\Delta^c}$ is not prime. For example, $T=\{0,2,0,0\}$, i.e., $t_1=t_3=t_4=0, t_2=2$, corresponds to two minsets: $\{x_1\}$ and $\{x_3\}$; that is, we can have experimental output that will result in two possible minimal wiring diagrams for Raf1: in one Raf1 depends on Rasgap only, and in the other Raf1 depends on Kras only. \end{example} Having built an algorithm for identifying whether an input data set $V$ corresponds to a unique minset, we next ask how a unique minset relates to the Gr\"obner basis of $I(V)$ and to the normal form of polynomials that take $V$ as input.
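The consistency check of Theorem~\ref{unique-minset} reduces to a partition question: the forced equalities group the indices into classes, and the system is consistent exactly when $a$ and $b$ land in different classes. A Python sketch using union-find (our own illustration) applied to the two input sets above:

```python
from itertools import combinations

def always_unique_minset(V):
    """Return True if every output assignment T yields a unique minset for the
    input set V (a list of tuples over F_p), per Theorem unique-minset."""
    diff = {(a, b): frozenset(i for i, (x, y) in enumerate(zip(V[a], V[b])) if x != y)
            for a, b in combinations(range(len(V)), 2)}
    for (a, b), mon in diff.items():
        if len(mon) < 2:                  # only multivariate monomials matter
            continue
        parent = list(range(len(V)))      # union-find over forced equalities
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for (i, j), m in diff.items():
            if len(m) == 1 and m <= mon:  # single-variate divisor forces t_i = t_j
                parent[find(i)] = find(j)
        if find(a) != find(b):            # system (t_a != t_b, equalities) consistent
            return False
    return True

V = [(0, 1, 0), (1, 1, 0), (2, 1, 0), (2, 1, 2)]   # unique minset for any T
U = [(2, 1, 1), (0, 0, 2), (2, 0, 0), (2, 0, 1)]   # some T gives multiple minsets
print(always_unique_minset(V), always_unique_minset(U))   # → True False
```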
\subsubsection{Polynomial normal forms and minsets} The main result in this section is Theorem~\ref{unique-nf-minset}, which establishes that a unique normal form (regardless of monomial order) of a polynomial that fits a set of input-output pairs $\mathcal D$ implies a unique minset for $\mathcal D$. \begin{lemma}[\cite{he-unique-gbs}]\label{lma-mon} Let $x^{\alpha},x^{\beta}$ be monomials with $x^{\alpha} \nmid x^{\beta}$. There exists a weight vector $\gamma$ and monomial order $\prec_{\gamma}$ such that $x^{\beta} \prec_{\gamma} x^{\alpha}$. \end{lemma} \begin{proof} Since $x^{\alpha} \nmid x^{\beta}$, we have $\alpha_j > \beta_j$ for some coordinate $j$. Take $\gamma$ to be a vector in $\mathbb{R}^n$ with a sufficiently large rational value in entry $j$ and square roots of distinct prime numbers elsewhere, such that $\gamma \cdot \alpha > \gamma \cdot \beta$. Then the entries of $\gamma$ are linearly independent over $\mathbb{Q}$ and so~$\gamma$ defines a weight order. Define $\prec_\gamma$ to be the monomial order weighted by $\gamma$. It follows that $x^\beta \prec_\gamma x^\alpha$. \end{proof} \begin{theorem}\label{unique-nf-minset} Let $\mathcal{D}\subseteq \mathbb{F}^n\times\mathbb{F}$ be a data set of input-output pairs with input set $V\subseteq\mathbb{F}^n$, and let $f:\mathbb{F}^n\to \mathbb{F}$ be any polynomial that fits $\mathcal{D}$. If $f$ has a unique normal form for all Gr\"obner bases of $I(V)$, then $\mathcal{D}$ has a unique minset. \end{theorem} \begin{proof} Let $\overline{f}$ be the unique normal form of $f$ with respect to $I(V)$. For contradiction, suppose that there exists a polynomial $h$ that fits $\mathcal{D}$ such that supp$(\overline{f})$ contains a variable~$x_i$ which is not in supp$(h)$. Notice that $\overline{f}-h\in I(V)$ and all monomials of $\overline{f}$ that contain~$x_i$ are in $\overline{f}-h$.
Since a monomial that contains~$x_i$ does not divide a monomial that does not contain~$x_i$, it follows by Lemma~\ref{lma-mon} that there is a monomial order $\prec$ under which some monomial~$x^{\alpha}$ of $\overline{f}-h$ that contains $x_i$ is the leading monomial of $\overline{f}-h$, and thus $x^{\alpha}\in in_{\prec}(I(V))$. This is a contradiction: since the normal form is unique, $\overline{f}$ is a linear combination of monomials that are standard with respect to every monomial order, and $x^{\alpha}$ is one of them. \end{proof} One consequence of the previous theorem is that the support of a unique normal form is a minset. Another is the following key condition on inputs. \begin{corollary}\label{ugb-minset} Let $V$ be a set of inputs. If $I(V)$ has a unique Gr\"obner basis, then for all output assignments there is a unique minset. \end{corollary} Notice that the converse of Corollary~\ref{ugb-minset} is false. For example, $V=\{00, \allowbreak 10, \allowbreak 01, \allowbreak 11, \allowbreak 02, \allowbreak 20, \allowbreak 22\}\subseteq \mathbb{Z}_3^2$ has an ideal $I(V)$ with two Gr\"obner bases, $\{x+y,y^2-1\}$ and $\{x^2-1,y+x\}$, but~$V$ has only one minset for any output $T$. Theorem~\ref{unique-nf-minset} and its corollaries raise the following question in algebraic design of experiments: What input-output data corresponds to a model with a unique normal form? We answer that in Theorem~\ref{unique-nf} below. \begin{definition} Let $\lambda=\{u^1, \ldots, u^r\}$ be an $r$-subset of $\mathbb{N}^n_p$ and let $V=\{v^1, \ldots, v^s\}$ be an $s$-subset of $\mathbb{N}^n_p$. The \emph{evaluation matrix} $\mathbb{X}(x^{\lambda},V)$ is the $s$ by $r$ matrix whose element in position $(i,j)$ is $x^{u^j}(v^{i})$, the evaluation of $x^{u^j}$ at $v^{i}$. \end{definition} \begin{example} Consider $V=\{(0,0,1), (0,1,0), (1,0,1)\}\subset \mathbb{F}_2^3$.
One of its sets of standard monomials is $x^{\lambda}=\{1,z,x\}$ which corresponds to the set of exponent vectors $\lambda=\{(0,0,0), (0,0,1), (1,0,0)\}$ and produces the following evaluation matrix on $V$: \begin{center} $\mathbb{X}(x^{\lambda},V)=\begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \\ \end{bmatrix}.$ \end{center} \end{example} \begin{theorem}\label{unique-nf} Let $\mathbb{F}$ be a field. Consider a set $V=\{s_1, \ldots, s_r\} \subseteq \mathbb{F}^n$ of distinct input vectors and an output vector $T=(t_1, \ldots, t_r )\in \mathbb{F}^r$. Let $f \in \mathbb{F}[x_1, \ldots, x_n]$ be such that $f(s_i)=t_i$ for all $i \in \{1, \ldots, r \}$. The normal form of $f$ is unique with respect to any Gr\"obner basis if and only if $T$ is a linear combination of the columns in $\mathbb{X}(x^{\lambda},V)$ that correspond to monomials which are standard with respect to every Gr\"obner basis. \end{theorem} \begin{proof} First suppose that $T$ is a linear combination of the columns of $\mathbb{X}(x^{\lambda},V)$ which correspond to the standard monomials in the intersection of all sets of standard monomials. Then the normal form of the interpolating polynomial $f$ is also a linear combination (with the same coefficients) of standard monomials that appear in every set of standard monomials, and so it will not change as we change the Gr\"obner basis. Conversely, suppose that the normal form of $f$ is unique with respect to any Gr\"obner basis. Then the normal form of $f$ is a linear combination of monomials that are standard with respect to any Gr\"obner basis, and so $T$ is (the same) linear combination of the columns in the evaluation matrix that correspond to the monomials that are standard for every Gr\"obner basis. \end{proof} \begin{example} Consider an input set $V = \{(0,0,1), (0,1,1), (1,0,1), (1,1,0)\} \subset \mathbb{F}_2^{3}$.
$I(V)$ has exactly two distinct sets of standard monomials, namely $SM_1 = \{1,z,y,x\}$ and $SM_2 = \{1,y,x,xy\}$, resulting from different monomial orderings, with $SM_1 \cap SM_2 = \{1,x,y\}$. The evaluation matrices for each of these sets of standard monomials are \vspace{5pt} \begin{center} $\begin{matrix} SM_1 & \vline & 1 & z & y & x \\ \hline (0,0,1) & \vline & 1 & 1 & 0 & 0 \\ (0,1,1) & \vline & 1 & 1 & 1 & 0 \\ (1,0,1) & \vline & 1 & 1 & 0 & 1 \\ (1,1,0) & \vline & 1 & 0 & 1 & 1 \\ \end{matrix}$ \\ \end{center} and \begin{center} $\begin{matrix} SM_2 & \vline & 1 & y & x & xy \\ \hline (0,0,1) & \vline & 1 & 0 & 0 & 0 \\ (0,1,1) & \vline & 1 & 1 & 0 & 0 \\ (1,0,1) & \vline & 1 & 0 & 1 & 0 \\ (1,1,0) & \vline & 1 & 1 & 1 & 1 \\ \end{matrix}$ \end{center} Take, for example, the sum of the matrix columns that are the evaluations of the monomials in $SM_1 \cap SM_2 = \{1,x,y\}$: $[1, 0, 0, 1]^{T}$, i.e. one linear combination. We find a polynomial function $f \in \mathbb{F}_2[x,y,z]$ that maps each input point to the corresponding output value as follows: \begin{center} $\begin{matrix} (0,0,1) & \mapsto & 1\\ (0,1,1) & \mapsto & 0\\ (1,0,1) & \mapsto & 0\\ (1,1,0) & \mapsto & 1\\ \end{matrix}$ \end{center} We find such $f$ via, say, Lagrange interpolation, to be $f=xy+xz+yz+z$. Now we compute the normal forms of $f$ reduced by $G_1$ and $G_2$, where $G_1$ and $G_2$ are the Gröbner bases for the ideal $I(V)$ corresponding to $SM_1$ and $SM_2$, arriving at \begin{center} $\overline{f}^{G_1} = \overline{f}^{G_2}= x+y+1$. \end{center} Since $G_1$ and $G_2$ are the only two reduced Gröbner bases for the ideal, this normal form is unique. If, instead, we take the same input set $V$ and corresponding standard monomials but choose a new output vector, one that is not a linear combination of the columns corresponding to monomials that are standard with respect to any monomial ordering, we expect to find more than one distinct normal form of $f$. 
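Before computing normal forms directly, the span test of Theorem~\ref{unique-nf} can be run mechanically. A Python sketch over $\mathbb F_2$ (exhaustive search; function name ours) applied to the columns of the common standard monomials $SM_1\cap SM_2=\{1,x,y\}$ above:

```python
from itertools import product

def in_span_F2(cols, T):
    """Is T an F_2-linear combination of the given columns (lists of 0/1)?"""
    for coeffs in product([0, 1], repeat=len(cols)):
        combo = [sum(c * col[i] for c, col in zip(coeffs, cols)) % 2
                 for i in range(len(T))]
        if combo == list(T):
            return True
    return False

# columns of the evaluation matrix for the common standard monomials {1, y, x}
one = [1, 1, 1, 1]; y = [0, 1, 0, 1]; x = [0, 0, 1, 1]
print(in_span_F2([one, y, x], (1, 0, 0, 1)))   # → True:  unique normal form
print(in_span_F2([one, y, x], (0, 1, 1, 1)))   # → False: multiple normal forms
```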
Consider, for example, the output vector $[0,1,1,1]^{T}$. That is, we are looking for a polynomial function $f \in \mathbb{F}_2[x,y,z]$ which maps \begin{center} $\begin{matrix} (0,0,1) & \mapsto & 0\\ (0,1,1) & \mapsto & 1\\ (1,0,1) & \mapsto & 1\\ (1,1,0) & \mapsto & 1\\ \end{matrix}$ \end{center} This time, $f$ has two distinct normal forms, $$\overline{f}^{G_1}=x+y+z+1 ~~ \textrm{and}~~ \overline{f}^{G_2}=xy+x+y.$$ So as expected, $\overline{f}^{G_1} \neq \overline{f}^{G_2}$. \end{example} \begin{corollary}\label{nf-ugb} The normal form of $f \in \mathbb{F}[x_1, \ldots, x_n]$ that fits a data set with input $V=\{s_1,\ldots, s_r\} \subseteq \mathbb{F}^n$ is unique for any output $T$ if and only if $I(V)$ has a unique reduced Gr\"obner basis. \end{corollary} Corollaries~\ref{ugb-minset} and~\ref{nf-ugb} point towards the importance of ideals $I(V)$ that have a unique reduced Gr\"obner basis. Such ideals were studied in~\cite{he-unique-gbs, robbiano}, where sufficient conditions for $I(V)$ to have a unique reduced Gr\"obner basis were introduced; in this paper, Corollary~\ref{ugb-diag-free} gives a necessary condition, which depends on a special relation between the points in $V$ that we define next. \begin{definition} A pair of points $p,q \in \mathbb{F}_p^n$ forms a \emph{diagonal} if $p$ and $q$ differ in at least two coordinates. We will also say that a set $V$ \emph{contains a diagonal} if there is a point $p\in V$ which forms a diagonal with all other points in $V$. \end{definition} \begin{theorem}\label{diagonal} If $V$ contains a diagonal, then there exists an output assignment that corresponds to multiple minsets. \end{theorem} \textbf{Proof:} Let $p\in V$ form a diagonal with all other points in $V$. Then there is a point $s\in V$ such that $m(p, s)$ is a multivariate monomial. Denote the corresponding outputs from $p$ and~$s$ by $t_p$ and $t_s$.
\begin{itemize} \item[Case 1:] There are no points $s_i, s_j\in V$ such that $m(s_i, s_j)$ is a single-variate monomial that divides $m(p, s)$. Then according to Theorem~\ref{unique-minset} there is an output assignment for which there are multiple minsets. \item[Case 2:] There are pairs of points $s_i, s_j\in V$ for which $m(s_i, s_j)$ is a single-variate monomial that divides $m(p, s)$. However, since $p$ forms a diagonal with every other point of $V$, each monomial $m(p, s_i)$ and $m(p, s_j)$ is multivariate, so $p$ does not appear in any such pair. Therefore, one can choose an output assignment $T$ where $t_i=t_j$ for all pairs of input points $s_i, s_j\in V$ such that $m(s_i, s_j)$ is a single-variate monomial that divides $m(p, s)$, while also choosing $t_p\ne t_s$. According to Theorem~\ref{unique-minset}, there are multiple minsets for this $T$. \end{itemize}\qed \begin{corollary}\label{ugb-diag-free} If $I(V)$ has a unique reduced Gr\"obner basis, then $V$ is diagonal-free. \end{corollary} \begin{proof} By Corollary~\ref{ugb-minset}, if $I(V)$ has a unique reduced Gr\"obner basis, then $V$ corresponds to a unique minset for any output assignment. The contrapositive of Theorem~\ref{diagonal} then implies that $V$ is diagonal-free. \end{proof} \section{Experimental results} \label{sec:experiments} Theorem~\ref{diagonal} suggests the following heuristic idea that we will test computationally: \textit{the smaller the Hamming distance between points in $V$, the smaller the number of minsets}. To quantify the Hamming distance between points in $V$, we use the following definition. \begin{definition} Given an input set $V$, we define $d(V)$ as the average value of the Hamming distance $H(p,q)$ between distinct points~$p$ and~$q$ of~$V$. We call $d(V)$ the \emph{internal distance}. \end{definition} \begin{example}\label{ex:small_experiments} Consider $f:\mathbb F_2^3\rightarrow \mathbb F_2$ given by $f(x_1,x_2,x_3)=\overline{x_2} \vee x_3$ or equivalently, $f(x_1,x_2,x_3)=1+x_2+x_2x_3$.
To illustrate the definition we consider two different input sets, $V_1=\{000,001,010,100\}$ and $V_2=\{000,101,110,011\}$. The distance between points in $V_1$ is given below. \begin{center} $\begin{matrix} (\mathbf{s}_1,\mathbf{s}_2) & H(\mathbf{s}_1,\mathbf{s}_2) \\ \hline (000,001) & 1 \\ (000,010) & 1 \\ (000,100) & 1 \\ (001,010) & 2 \\ (001,100) & 2 \\ (010,100) & 2 \\ \hline & d(V_1) = 1.5 \end{matrix}$ \\ \end{center} Similarly, $d(V_2)=2$. Now, we use $f$ to generate data sets for $V_1$ and $V_2$: $\mathcal{D}_1=\{(000,1),\allowbreak (001,1),\allowbreak (010,0),\allowbreak (100,1)\}$ and $\mathcal{D}_2=\{(000,1),\allowbreak (101,1),\allowbreak (110,0),\allowbreak (011,1)\}$. $\mathcal{D}_1$ has the unique minset $\{x_2\}$ and $\mathcal{D}_2$ has the minsets $\{x_1,x_2\}$, $\{x_1,x_3\}$, $\{x_2,x_3\}$. In summary, $V_1$ has an internal distance of $d(V_1)=1.5$ and resulted in $\#M(V_1)=1$ minset. $V_2$ has an internal distance of $d(V_2)=2$ and resulted in $\#M(V_2)=3$ minsets. The following table shows the statistics of all possible input sets with 4 points (there are $\binom{2^3}{4}=70$ of them). Some of them have the same internal distance and/or number of minsets. This is reported in the following table and a scatter plot is shown in Figure~\ref{fig:scatter_plot_example}. \begin{table}[h] \centering \begin{tabular}{c|c|c} $d(V)$ & $\#M(V)$ & \text{ number of such $V$'s } \\ \hline 1.3 & 1 & 6\\ 1.5 & 1 & 8\\ 1.7 & 1 & 24\\ 1.8 & 1 & 10\\ 1.8 & 2 & 14\\ 2 & 1 & 1\\ 2 & 2 & 5\\ 2 & 3 & 2\\ \hline \end{tabular} \caption{Statistics of all 70 possible $V$'s with 4 elements, grouped by internal distance and number of minsets.} \label{tab:distance_numminsets_example} \end{table} \begin{figure}[ht] \centering \includegraphics[width=4in]{plots_for_small_example.pdf} \caption{Scatter plot of $\#M(V)$ vs $d(V)$ and histogram of $\#M(V)$ for \textbf{all} input sets with 4 points. 
The area of each circle corresponds to the number of $V$'s that have the same values of $d(V)$ and $\#M(V)$. We can see that as the internal distance increases, the number of minsets can get larger.} \label{fig:scatter_plot_example} \end{figure} \end{example} The results from Figure~\ref{fig:scatter_plot_example} are consistent with the heuristic idea that the smaller the distance, the smaller the number of minsets. Now we would like to test two different strategies for generating data, one of which will tend to have small internal distance. Consider a Boolean function $f:\mathbb F_2^n\rightarrow \mathbb F_2$. A \textit{trial} consists of selecting an input set $V\subseteq \mathbb F_2^n$ with~$m$ elements. Then, we consider the data set $\mathcal{D}=\{(s,f(s)): s\in V\}$ and compute the minsets $M$. We are interested in the relationship between the internal distance $d(V)$ and the number of minsets $\#M(V)$. If we plot the points $(d(V),\#M(V))$ for several trials, we expect to see some type of relationship like in Figure~\ref{fig:scatter_plot_example}. We used two different sampling schemes to generate the $m$ points in $V$. \begin{itemize} \item Pick $m$ points randomly. We refer to this as the \textit{random scheme}. \item Generate $m/2$ points randomly. Then, for each of those points, select a random entry and flip it. We refer to this as the \textit{small-distance scheme}. \end{itemize} In both cases we get an input set $V$ with $m$ points, but the small-distance scheme favours a smaller internal distance. The Boolean functions we selected for our analysis were \textit{fanout-free} (that is, each variable appears only once in its Boolean representation). These functions cover the vast majority of functions used in modeling \cite{mendoza2006method, sridharan2012boolean, mbodj2013logical, orlando2008global, veliz2011boolean, giacomantonio2010boolean, helikar2015integrating, jenkins2017bistability}.
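The two sampling schemes and the internal distance $d(V)$ can be sketched as follows (our own illustration; we retry a bit-flip when it would duplicate an already-chosen point, a detail the description above leaves implicit and which is harmless for $n=10$, $m=20$):

```python
import random
from itertools import combinations

def internal_distance(V):
    """Average Hamming distance d(V) over unordered pairs of distinct points."""
    pairs = list(combinations(V, 2))
    return sum(sum(a != b for a, b in zip(p, q)) for p, q in pairs) / len(pairs)

def random_scheme(n, m, rng):
    """Random scheme: m distinct uniform points of F_2^n."""
    pts = set()
    while len(pts) < m:
        pts.add(tuple(rng.randint(0, 1) for _ in range(n)))
    return sorted(pts)

def small_distance_scheme(n, m, rng):
    """Small-distance scheme: m/2 random points plus one random bit-flip of each."""
    base = set()
    while len(base) < m // 2:
        base.add(tuple(rng.randint(0, 1) for _ in range(n)))
    pts = set(base)
    for p in base:
        while True:                      # retry if the flip collides with a chosen point
            q = list(p)
            q[rng.randrange(n)] ^= 1
            if tuple(q) not in pts:
                pts.add(tuple(q))
                break
    return sorted(pts)

V1 = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)]
print(internal_distance(V1))   # → 1.5, as in Example \ref{ex:small_experiments}
```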
To keep the simulations tractable, we used Boolean functions $f:\mathbb F_2^{10}\rightarrow\mathbb F_2$ such that $|supp(f)|\leq 4$. Up to a relabeling of variables and states, there are 9 such functions (not counting the constant functions), given in Table~\ref{tab:all_functions}. \begin{table}[h] \centering \begin{tabular}{l|l} \textbf{Function in polynomial form} & \textbf{Function in Boolean form}\\ \hline $x_1$ & $x_1$\\ $x_1 x_2$ & $x_1\wedge x_2$\\ $x_1 x_2 x_3$ & $x_1\wedge x_2\wedge x_3$\\ $x_1 (x_2 + x_3 + x_2 x_3)$ & $x_1\wedge (x_2 \vee x_3)$\\ $x_1 x_2 x_3 x_4$ & $x_1\wedge x_2\wedge x_3\wedge x_4$ \\ $x_1x_2x_3 + x_4 +x_1x_2x_3x_4$ & $(x_1\wedge x_2 \wedge x_3)\vee x_4$ \\ $x_1x_2x_3x_4 + x_1x_2x_3 + x_1x_2x_4 + x_1x_2 + x_3x_4 + x_3 + x_4$ & $(x_1\wedge x_2)\vee x_3 \vee x_4$ \\ $x_1x_2+x_3x_4+x_1x_2x_3x_4$ & $(x_1\wedge x_2)\vee (x_3 \wedge x_4)$ \\ $(x_1x_2+x_3+x_1x_2x_3 )x_4$ & $((x_1\wedge x_2)\vee x_3)\wedge x_4$ \end{tabular} \caption{Boolean functions used for the computational analysis in Figure~\ref{fig:scatter_plot_9fun}. These represent all fanout-free functions with up to four variables. } \label{tab:all_functions} \end{table} The results of the simulations are shown in Figure~\ref{fig:scatter_plot_9fun}. As expected, the internal distance $d(V)$ is smaller when points are generated using the small-distance scheme (blue). Importantly, under both schemes, the smaller the internal distance, the smaller the number of minsets. The histograms compare the number of minsets for both schemes and clearly show that the small-distance scheme results in a smaller number of minsets. These computational results provide a straightforward way to generate data with a small number of minsets: the small-distance scheme.
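The polynomial and Boolean forms in Table~\ref{tab:all_functions} can be cross-checked by exhaustive evaluation over $\mathbb F_2$; a Python sketch for one row (the helper `OR` is ours, implementing the identity $a\vee b = a+b+ab$):

```python
from itertools import product

def OR(*args):
    """Boolean OR via the F_2 identity a OR b = a + b + a*b."""
    acc = 0
    for b in args:
        acc = (acc + b + acc * b) % 2
    return acc

# row "(x1 AND x2) OR x3 OR x4" of the table
poly  = lambda x1, x2, x3, x4: (x1*x2*x3*x4 + x1*x2*x3 + x1*x2*x4
                                + x1*x2 + x3*x4 + x3 + x4) % 2
boolf = lambda x1, x2, x3, x4: OR(x1 * x2, x3, x4)
print(all(poly(*p) == boolf(*p) for p in product([0, 1], repeat=4)))   # → True
```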
\begin{figure}[ht] \centering \includegraphics[width=3in]{9fun_scatter_plots_10000_compressed.pdf} \includegraphics[width=3in]{hist_9fun.pdf} \caption{ Scatter plots of $\#M(V)$ vs $d(V)$ and histograms of $\#M(V)$ for the functions in Table~\ref{tab:all_functions}. The scatter plots show that as the internal distance increases, the number of minsets can get larger (blue: small-distance scheme, yellow: random scheme). The histograms show that the small-distance scheme results in an overall smaller number of minsets. For each of the Boolean functions we ran 10,000 trials with input sets with $m=20$ elements (about $2\%$ of the $2^{10}$ possible points). } \label{fig:scatter_plot_9fun} \end{figure} \section{Conclusions and future work} \label{sec:conclusions} One of the difficulties in data-driven approaches is that there is typically a large number of models that fit the collected data, and the known constraints of the system are not sufficient to reduce the pool of candidate models to a manageable size for testing and validation purposes. As each model contains a set of predictions about the network being studied, even small numbers of competing models result in a combinatorial growth in the validation experiments to be performed. Thus it is desirable to design experiments in a way that maximizes the chance that the outputs will increase our understanding of the system. We introduced a method which generates data sets that are guaranteed to result in a unique minimal wiring diagram regardless of what the experimental outputs are. A natural next step is to extend these results to signed minimal wiring diagrams and address the question of existence for this case. The somewhat surprising connection between uniqueness of interpolating polynomial normal forms and unique minsets (i.e., unique minimal wiring diagrams) elucidates the role of polynomial ideals with unique Gr\"obner bases.
While partial results are available in our prior work and in this manuscript, a complete geometric or combinatorial characterization of sets $V\subset \mathbb F_p^n$ such that $I(V)$ has a unique reduced Gr\"obner basis is still an open question whose importance has been emphasized in this work. \bibliographystyle{siamplain}
https://arxiv.org/abs/2111.10598
$F_\sigma$ ideals of perfectly bounded sets
Let ${\bf x}=(x_n)_n$ be a sequence in a Banach space. A set $A\subseteq \mathbb{N}$ is perfectly bounded if there is $M$ such that $\|\sum_{n\in F}x_n\|\leq M$ for every finite $F\subseteq A$. The collection $B({\bf x})$ of all perfectly bounded sets is an ideal of subsets of $\mathbb{N}$. We show that an ideal $\mathcal{I}$ is of the form $B({\bf x})$ iff there is a non-pathological lower semicontinuous submeasure $\varphi$ on $\mathbb{N}$ such that $\mathcal{I} =FIN(\varphi)=\{A\subseteq \mathbb{N}: \;\varphi(A)<\infty\}$. We address the questions of when $FIN(\varphi)$ is a tall ideal and has a Borel selector. We show that in $c_0$ the ideal $B({\bf x})$ is tall iff $(x_n)_n$ is weakly null, in which case it also has a Borel selector.
\section{Introduction} An ideal over a set $X$ is a collection of subsets of $X$ closed under taking subsets and finite unions of its elements. An interesting interplay between ideals on $\N$ and Banach spaces was established in \cite{Borodulinetal2015,Borodulin-Farkas2020,Drewnowski,Drewnowski2017}. Given a sequence ${\bf x}=(x_n)_n$ in a Banach space, Drewnowski and Labuda \cite{Drewnowski} defined an ideal on $\N$ as follows: $$ \mathcal{C}({\bf x})=\left \{ A\subseteq \mathbb{N}: \sum_{n\in A} x_n \text{ is unconditionally convergent}\right \}. $$ We recall that $\sum_{n\in A} x_n $ is unconditionally convergent when it converges under every permutation of the index set. As a consequence of a classical theorem of Bessaga and Pelczy\'nski, they showed that a Banach space does not contain an isomorphic copy of $c_0$ iff $\C({\bf x})$ is $F_\sigma$ (as a subset of $2^{\N}$ via characteristic functions) for every ${\bf x}$. Borodulin-Nadzieja, Farkas and Plebanek \cite{Borodulinetal2015} made a thorough analysis of the ideals $\C({\bf x})$ and established a tight connection with the well-known class of analytic P-ideals. They showed that many analytic P-ideals can be represented as ideals of the form $\C({\bf x})$. The ideal $\C({\bf x})$ is a generalization of summable ideals, a prototype of which is $\mathcal{I}_{1/n}$, consisting of all $A\subseteq\N$ such that $\sum_{n\in A} 1/n<\infty$. On the other hand, Drewnowski and Labuda \cite{Drewnowski} also used the following ideal: $$ \mathcal{B}({\bf x})=\left \{ A\subseteq \mathbb{N}: \sum_{n\in A} x_n \text{ is perfectly bounded }\right \}. $$ We recall that a sequence ${\bf x}= (x_n)_{n\in A}$ is perfectly bounded if there is $M>0$ such that $\| \sum_{n\in F}x_n\|\leq M$ for all finite $F\subseteq A$. In general, $ \C({\bf x})\subseteq \B({\bf x})$, and Drewnowski and Labuda showed that a Banach space does not contain an isomorphic copy of $c_0$ iff $\C({\bf x})=\B({\bf x})$ for every ${\bf x}$.
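To make the gap between $\C({\bf x})$ and $\B({\bf x})$ concrete, here is a small computational illustration of ours (not taken from \cite{Drewnowski}): for the unit vector basis $(e_n)$ of $c_0$, every finite partial sum has sup norm $1$, so every $A\subseteq\N$ is perfectly bounded with $M=1$, while consecutive partial sums of $\sum_{n\in A}e_n$ stay at distance $1$, so no infinite subseries converges; in this degenerate case $\C({\bf x})=\fin$ while every set is perfectly bounded.

```python
# Illustration (not from the paper): the unit vector basis (e_n) of c_0.
# Vectors are finitely supported and stored as {index: value} dictionaries.
from itertools import combinations

def sup_norm(vec):
    """Sup norm of a finitely supported vector."""
    return max((abs(v) for v in vec.values()), default=0.0)

def partial_sum(indices):
    """sum_{n in F} e_n represented as a finitely supported vector."""
    vec = {}
    for n in indices:
        vec[n] = vec.get(n, 0.0) + 1.0
    return vec

# Perfect boundedness: ||sum_{n in F} e_n|| = 1 for every nonempty finite F.
norms = {sup_norm(partial_sum(F))
         for k in range(1, 4) for F in combinations(range(40), k)}
assert norms == {1.0}

# Divergence: consecutive partial sums of sum_n e_n differ by e_n, of norm 1,
# so the partial sums are never Cauchy and no infinite subseries converges.
assert all(sup_norm({n: 1.0}) == 1.0 for n in range(40))
print("every A is perfectly bounded with M = 1, yet no infinite subseries converges")
```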
The ideal $\B({\bf x})$ has not been studied as extensively as $\C({\bf x})$. One of the objectives of this paper is to present some results about it. The ideal $\B({\bf x})$ is $F_\sigma$ and thus it can also be represented using lower semicontinuous submeasures (lscsm) on $\N$. Indeed, Mazur showed \cite{Mazur91} that every $F_\sigma$ ideal on $\N$ is of the form $\fin(\varphi)=\{A\subseteq \N:\varphi(A)<\infty\}$ for some lscsm $\varphi$ on $\N$. Following the work developed in \cite{Borodulinetal2015}, we characterize the ideals of the form $\B({\bf x})$ as those $F_\sigma$ ideals which can be represented as $\fin(\varphi)$ for $\varphi$ a non-pathological lscsm (i.e., a lscsm which is a supremum of measures). The proof of Mazur's theorem provides a lscsm that takes values in $\N\cup \{\infty\}$. We address the question of when such lscsm are pathological. However, we do not know if every $F_\sigma$ ideal can be $\B$-represented in a Banach space; that is to say, it is open whether there is an $F_\sigma$ ideal $\mathcal{I}$ such that whenever $\mathcal{I}=\fin(\varphi)$ for $\varphi$ a lscsm, then $\varphi$ is pathological. The ideal $\mathcal{C}({\bf x})$ can be defined in the more general setting of abelian topological groups. As a consequence of Solecki's theorem \cite{Solecki1999}, Borodulin-Nadzieja, Farkas and Plebanek \cite{Borodulinetal2015} showed that an analytic ideal $\mathcal{I}$ is a P-ideal iff there is a Polish group $G$ such that $\mathcal{I}=\mathcal{C}({\bf x})$, where the unconditional convergence refers to $G$. Following this line of investigation, we show that every $F_\sigma$ ideal is of the form $\B({\bf x})$ calculated in a Polish group. An interesting corollary of the proof is that such representations can always be made in the group $\fin$ of all finite subsets of $\N$ (as a subgroup of $2^{\N}$) with the discrete topology; thus, what matters for this type of representation is the translation-invariant metric used on $\fin$.
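The kind of object behind the last remark can be sketched quickly; the submeasure below is an illustrative choice of ours, not the one built in the paper. The group $\fin$ under symmetric difference, together with $d(A,B)=\varphi(A\triangle B)$ for a lscsm $\varphi$ positive on singletons, carries a translation-invariant metric:

```python
# Sketch (our illustrative choice, not the paper's construction): FIN, the
# group of finite subsets of N under symmetric difference, metrized by
# d(A, B) = phi(A △ B) for a non-pathological submeasure phi = sup of measures.
import random

def phi(A):
    """Illustrative submeasure: supremum of two (sigma-additive) measures."""
    mu1 = sum(1.0 / (n + 1) for n in A)
    mu2 = sum(1.0 if n % 2 == 0 else 0.5 for n in A)
    return max(mu1, mu2)

def d(A, B):
    return phi(A ^ B)          # A ^ B is the group operation on FIN

rng = random.Random(0)
sets = [frozenset(rng.sample(range(30), rng.randint(0, 8))) for _ in range(30)]
for A in sets:
    for B in sets:
        assert d(A, B) == d(B, A)                     # symmetry
        assert (d(A, B) == 0.0) == (A == B)           # separation
        for C in sets[:8]:
            assert d(A, B) <= d(A, C) + d(C, B) + 1e-9        # triangle
            assert abs(d(A ^ C, B ^ C) - d(A, B)) < 1e-12     # invariance
print("d(A, B) = phi(A △ B) is a translation-invariant metric on FIN")
```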
An ideal on a set $X$ is {\em tall} if every infinite subset of $X$ contains an infinite subset belonging to the ideal. Tall ideals have been extensively investigated (see for instance \cite{Hrusak2011,HMTU2017,uzcasurvey}). We show that in $c_0$, $\B({\bf x})$ is tall iff $(x_n)_n$ is weakly null. Our proof shows more: in fact, $\B({\bf x})$ has a Borel selector, that is to say, there is a Borel function $S:[\N]^\omega\to [\N]^\omega$ (where $[\N]^\omega$ is the collection of infinite subsets of $\N$) such that $S(A)\subseteq A$ and $(x_n)_{n\in S(A)}$ is perfectly bounded. The argument is based on the Bessaga-Pelczy\'nski selection principle. We recall that there are $F_\sigma$ tall ideals without Borel selectors (see \cite{GrebikUzca2018} and \cite{grebik2020tall}). The paper is organized as follows. Section \ref{lscsmFsigma} is dedicated to submeasures for $F_\sigma$ ideals. We study integer valued lscsm and present a simple condition for a lscsm to be pathological. We also introduce property A of a lscsm $\varphi$, which implies that $\fin(\varphi)$ is tall. We study Borel selectors for $\fin(\varphi)$ when $\varphi$ is non-pathological. In Sections \ref{B-representable} and \ref{tallnessB} we study the ideal $\B({\bf x})$. \section{Preliminaries} \label{preliminares} We say that a collection $\mathcal{A}$ of subsets of a countable set $X$ is {\em analytic} (resp. Borel) if $\mathcal{A}$ is analytic (resp. Borel) as a subset of the Cantor cube $2^X$ (identifying subsets of $X$ with characteristic functions). $[X]^\omega$ is the collection of infinite subsets of $X$ endowed with the subspace topology as a subset of $2^X$. We refer the reader to \cite{Kechris94} for all unexplained descriptive set theoretic notions and notation.
An ideal $\mathcal{I}$ on a set $X$ is a collection of subsets of $X$ such that (i) $\emptyset \in \mathcal{I}$ and $X\nin \mathcal{I}$, (ii) if $A, B\in \mathcal{I}$, then $A\cup B\in \mathcal{I}$, and (iii) if $A\subseteq B$ and $B\in \mathcal{I}$, then $A\in \mathcal{I}$. Given an ideal $\mathcal{I}$ on $X$, the {\em dual filter} of $\mathcal{I}$, denoted $\mathcal{I}^*$, is the collection of all sets $X\setminus A$ with $A\in \mathcal{I}$. We denote by $\mathcal{I}^+$ the collection of all subsets of $X$ which do not belong to $\mathcal{I}$. The ideal of all finite subsets of $\N$ is denoted by $\fin$. By $[X]^{<\omega}$ we denote the collection of finite subsets of $X$. We write $A\subseteq^*B$ if $A\setminus B$ is finite. There is a vast literature about ideals on countable sets (see for instance the surveys \cite{Hrusak2011} and \cite{uzcasurvey}). Since the collection of finite subsets of $X$ is dense in $2^X$, there are no ideals containing $[X]^{<\omega}$ which are closed as subsets of $2^X$. An ideal $\mathcal{I}$ on $\N$ is $F_\sigma$ if there is a countable collection of closed subsets $\mathcal{K}_n\subseteq 2^{\N}$ such that $\mathcal{I}=\bigcup_n \mathcal{K}_n$. On the other hand, there are no $G_\delta$ ideals containing all finite sets. Thus the simplest Borel ideals (containing all finite sets) have complexity $F_\sigma$. An ideal on $X$ is {\em analytic} if it is analytic as a subset of $2^X$. A family $\mathcal{A}$ (not necessarily an ideal) of subsets of $X$ is {\em tall} if every infinite subset of $X$ contains an infinite subset that belongs to $\mathcal{A}$. A tall family $\mathcal{A}$ admits a {\em Borel selector} if there is a Borel function $S: [X]^\omega\to [X]^\omega$ such that $S(E)\subseteq E$ and $S(E)\in \mathcal{A}$ for all $E$. A coloring is a function $c:[X]^2\to 2$, where $[X]^2$ is the collection of two element subsets of $X$. A set $H\subseteq X$ is {\em $c$-homogeneous} if $c$ is constant on $[H]^2$.
We denote by $\hom(c)$ the collection of homogeneous sets and by $Hom(c)$ the ideal generated by the $c$-homogeneous sets, that is, $A\in Hom(c)$ iff there are $c$-homogeneous sets $H_1, \cdots, H_n$ such that $ A\setminus (H_1\cup\cdots \cup H_n)$ is finite. It is easy to check that $Hom(c)$ is $F_\sigma$ and, by Ramsey's theorem, $Hom(c)$ is tall. In general, $Hom(c)$ could be trivial. For instance, suppose there are finitely many maximal 0-homogeneous sets, say $H_1, \cdots, H_n$ (no assumption about the size of each $H_i$). We claim that $Hom(c)$ is trivial. In fact, let $x\neq y$ be such that $x,y\nin H_1\cup\cdots\cup H_n$. Then necessarily $c\{x,y\}=1$ (otherwise there is $i$ such that $x,y\in H_i$). That is, $L=\N\setminus (H_1\cup\cdots\cup H_n)$ is 1-homogeneous. Hence $\N$ is the union of finitely many homogeneous sets. The collection of homogeneous sets is a typical example of a tall family that has a Borel selector (see \cite{GrebikUzca2018}). A function $\varphi:\mathcal{P}(\N)\to[0,\infty]$ is a {\em lower semicontinuous submeasure (lscsm)} if $\varphi(\emptyset)=0$, $\varphi(A)\leq\varphi(A\cup B)\leq\varphi(A)+\varphi(B)$, $\varphi(\{n\})<\infty$ for all $n$ and $\varphi(A)=\lim_{n\to\infty}\varphi(A\cap\{0,1,\cdots, n\})$. Three ideals associated to a lscsm are the following: \[ \begin{array}{lcl} \fin(\varphi)& =& \{A\subseteq\N:\varphi(A)<\infty\}. \\ \mbox{\sf Exh}(\varphi)& = &\{A\subseteq\N: \lim_{n\to\infty}\varphi(A\setminus\{0,1,\dots, n\})=0\}.\\ \mbox{\sf Sum}(\varphi) & = & \{A\subseteq \mathbb{N}:\sum_{n\in A}\varphi\{n\}<\infty\}. \end{array} \] Notice that $ \mbox{\sf Sum}(\varphi)\subseteq \mbox{\sf Exh}(\varphi)\subseteq \fin(\varphi)$. These ideals have been extensively investigated. The works of Farah \cite{Farah2000} and Solecki \cite{Solecki1999} are two of the most important early contributions to the study of the ideals associated to submeasures.
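On a finite window the submeasure axioms above can be checked by brute force; the two measures below are an arbitrary choice made for the illustration (on a finite ground set the lower semicontinuity clause is vacuous):

```python
# Brute-force check (illustrative, not from the paper) that a supremum of
# finitely many measures is monotone and subadditive, i.e. a submeasure.
from itertools import chain, combinations

GROUND = range(8)
WEIGHTS = [[1, 0, 2, 0, 1, 0, 3, 0],   # two arbitrary measures on {0,...,7}
           [0, 2, 0, 1, 0, 2, 0, 1]]

def phi(A):
    return max(sum(w[n] for n in A) for w in WEIGHTS)

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

assert phi(()) == 0
for A in subsets(GROUND):
    for B in subsets(GROUND):
        union = tuple(set(A) | set(B))
        assert phi(A) <= phi(union)              # monotonicity
        assert phi(union) <= phi(A) + phi(B)     # subadditivity
print("sup of measures satisfies the submeasure axioms on the window")
```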
An ideal $\mathcal{I}$ is a {\em $P$-ideal} if for every sequence $(A_n)_n$ of sets in $\mathcal{I}$ there is $A\in \mathcal{I}$ such that $A_n\setminus A$ is finite for all $n$. The following representation of analytic $P$-ideals is the most fundamental result about them. It says that any $P$-ideal is in a sense similar to a density ideal. \begin{teo} \label{solecki} (S. Solecki \cite{Solecki1999}) Let $\mathcal{I}$ be an analytic ideal on $\N$. The following are equivalent: \begin{itemize} \item[(i)] $\mathcal{I}$ is a $P$-ideal. \item[(ii)] There is a lscsm $\varphi$ such that $\mathcal{I}=\mbox{\sf Exh}(\varphi)$. Moreover, such a $\varphi$ can be taken bounded. \end{itemize} In particular, every analytic $P$-ideal is $F_{\sigma\delta}$. Moreover, $\mathcal{I}$ is an $F_\sigma$ $P$-ideal if, and only if, there is a lscsm $\varphi$ such that $\mathcal{I}=\mbox{\sf Exh}(\varphi)= \fin(\varphi)$. \end{teo} We say that $\mu $ is {\em dominated by $\varphi$} if $\mu(A)\leq \varphi(A)$ for all $A$; in this case, we write $\mu\leq \varphi$. A lscsm $\varphi$ is {\em non-pathological} if it is the supremum of all ($\sigma$-additive) measures dominated by $\varphi$. Farah's approach to pathology of submeasures on $\N$ \cite{Farah2000} includes a concept of pathological degree, which we recall below. Associated to each lscsm $\varphi$, Farah defined another lscsm $\widehat{\varphi}$, the maximal non-pathological submeasure dominated by $\varphi$, i.e., $$ \widehat{\varphi}(A)=\sup\{\mu(A):\mu \text{ is a measure dominated by }\varphi\}, $$ for all $A\subseteq \N$. The \emph{pathological degree}, which measures how far a submeasure is from being non-pathological, is defined by $$ P(\varphi)=\sup\left\{\frac{\varphi(A)}{\widehat{\varphi}(A)}: \widehat{\varphi}(A)\neq 0 \; \& \; A\in \fin \right\}. $$ Note that $P(\varphi)=1$ if and only if $\varphi$ is non-pathological. \medskip The following natural question is open.
It will be reformulated later in terms of the representation of an $F_\sigma$ ideal in a Banach space (see Question \ref{non-B-repre}). \begin{question} \label{absolut-patologico} Is there an $F_\sigma$ ideal $\mathcal{I}$ such that every lscsm $\varphi$ with $\mathcal{I}=\fin(\varphi)$ is necessarily pathological? \end{question} The pathological degree can be used to attack the previous question. If $P(\varphi)<\infty$, then there is a non-pathological lscsm $\psi$ such that $\fin(\varphi)=\fin(\psi)$ and $\mbox{\sf Exh}(\varphi)=\mbox{\sf Exh}(\psi)$. At this moment, we do not know any example of a submeasure $\varphi$ on a finite set such that $P(\varphi)\geq 3$ and, of course, we do not know if $P(\varphi)$ can be infinite. A non-pathological submeasure may be defined as the supremum of \emph{some} family of $\sigma$-additive measures: for a given family $\mathcal{S}$ of $\sigma$-additive measures on $\N$, the function $\psi$ given by $\psi(A)=\sup\{\mu(A): \mu\in\mathcal{S}\}$ is a non-pathological lscsm. In particular, the supremum of any sequence of finitely supported measures on $\N$ defines a non-pathological lscsm. The converse also holds, as the following proposition shows. We include its proof for the sake of completeness. \begin{prop} \label{med-sop-fin} Let $\varphi$ be a non-pathological lscsm on $\N$. Then there is a sequence $(\mu_k)_k$ of finitely supported measures on $\N$ such that \[ \varphi(A)=\sup_k\{\mu_k(A)\} \] for all $A\subseteq\N$. \end{prop} \proof Let $(A_k)_k$ be an enumeration of $\fin$. For each $k$ and $n$, pick a measure $\lambda_{(n,k)}\leq \varphi$ such that $\varphi(A_k)-1/(n+1)\leq \lambda_{(n,k)}(A_k)$. Let $\mu_{(n,k)}=\lambda_{(n,k)}|_{A_k}$; then the family $\{\mu_{(n,k)}: \; n,k\in \N\}$, re-enumerated as a single sequence, works.
\endproof Pathology of submeasures has been extensively studied, mainly for analytic P-ideals (for example, in \cite{Farah2000}, \cite{Hrusak2017}, and \cite{Borodulinetal2015}), but it has not been used for $F_{\sigma}$ ideals. In Section 4 we are going to show that the ideals $\fin(\varphi)$, for non-pathological $\varphi$, are exactly those having a representation as $\B({\bf x})$ for ${\bf x}$ a sequence in a Banach space. \section{Submeasures for $F_\sigma$ ideals} \label{lscsmFsigma} In this section we present some results about lscsm for $F_\sigma$ ideals. We are especially interested in non-pathological lscsm, as they are the link between $F_\sigma$ ideals and Banach spaces. On the other hand, we address the question of when $\fin(\varphi)$ is tall. \subsection{Integer valued submeasures and pathology} \label{integervalued} $F_\sigma$ ideals are precisely the ideals of the form $\fin(\varphi)$ for some lscsm $\varphi$. We recall this result to point out that such $\varphi$ can have an extra property, which we use later in our discussion of pathology of submeasures. \begin{teo} \label{mazur} (Mazur \cite{Mazur91}) For each $F_\sigma$ ideal $\mathcal{I}$ on $\N$, there is a lscsm $\varphi$ such that $\mathcal{I}=\fin(\varphi)$ and $\varphi$ takes values in $\N\cup\{\infty\}$. Moreover, there is such a $\varphi$ satisfying $\varphi\{n\}=1$ for all $n\in \N$. \end{teo} \proof We include a sketch in order to verify the last claim. Let $(\mathcal{K}_n)_n$ be a collection of closed hereditary subsets of $2^{\N}$ such that $\mathcal{K}_n\subseteq \mathcal{K}_{n+1}$, $A\cup B\in \mathcal{K}_{n+1}$ for all $A,B\in \mathcal{K}_n$ and $\mathcal{I}=\bigcup_n \mathcal{K}_n$.
We can assume that $\mathcal{K}_0=\{\emptyset\}$ and, since $\{\{n\}:\;n\in \N\}\cup\{\emptyset\}$ is a closed subset of $2^{\N}$, we can also assume that $\{n\}\in \mathcal{K}_1$ for all $n\in \N$. Then the submeasure associated to $(\mathcal{K}_n)_n$ is given by $\varphi(A)=\min\{n\geq 1:\; A\in \mathcal{K}_n\}$, if $\emptyset\neq A\in \mathcal{I}$, and $\varphi(A)=\infty$, otherwise. \qed We say that a lscsm $\varphi$ is {\em integer valued} if it takes values in $\N\cup\{\infty\}$ and $\varphi(\N)=\infty$. Clearly, Mazur's submeasures for $F_{\sigma}$ ideals are of this kind. The following two propositions aim to analyze the meaning of pathology in this class of submeasures. \begin{prop} Let $\varphi$ be an integer valued lscsm such that $\varphi\{x\}=1$ for all $x\in \N$. Then \begin{enumerate} \item[(i)] Every $n\in \N$ belongs to the range of $\varphi$. \item[(ii)] For every integer $k\geq 2$, there is a finite set $B$ such that $\varphi(B)=k$ and $\varphi(C)<k$ for all $C\subset B$ with $C\neq B$. \end{enumerate} \end{prop} \proof (i) Suppose not and let $n=\min(\N\setminus range(\varphi))$. Since $\varphi(\N)=\infty$, by lower semicontinuity there is a finite set $A$ such that $\varphi(A)=\min\{\varphi(B): \;n<\varphi(B)\}$. Let $m=\varphi(A)$. We can also assume that $A$ is $\subseteq$-minimal with that property; since $n\nin range(\varphi)$, this means that $\varphi(B)<n$ for all $B\subseteq A$ with $B\neq A$. Let $x\in A$; then \[ \varphi(A)\leq \varphi(A\setminus\{x\})+\varphi\{x\}<n+1\leq m, \] a contradiction. (ii) By (i) and monotonicity, there is a $\subseteq$-minimal finite set $B$ such that $\varphi(B)=k$. \qed \medskip The following proposition gives us a criterion for showing that an integer valued submeasure is pathological. \begin{prop} \label{pathological} Let $\varphi$ be an integer valued lscsm on a set $X$. Suppose there is a finite set $A\subseteq X$ with $|A|\geq 2$ such that $\varphi(A\setminus\{x\})<\varphi(A)$ for all $ x\in A$ and $\varphi(A)<|A|$. Then $\varphi$ is pathological.
\end{prop} \proof Suppose $\varphi$ is non-pathological. Let $m=|A|$. Let $x_0\in A$ be such that $\varphi(A\setminus\{x_0\})=\max\{\varphi(A\setminus\{x\}):\; x\in A\}$. Since $\varphi$ takes integer values and $\varphi(A)/m<1$, $\varphi(A\setminus\{x_0\})+\varphi(A)/m < \varphi(A)$. Pick a measure $\mu\leq \varphi$ such that $$ \varphi(A\setminus\{x_0\})+\varphi(A)/m\;<\mu(A)\leq \varphi(A). $$ There is $y\in A$ such that $\mu(\{y\})\leq \varphi(A)/m$. Then \[ \mu(A\setminus\{y\})=\mu(A)-\mu(\{y\})\geq \mu(A)-\varphi(A)/m>\varphi(A\setminus\{x_0\}) \geq \varphi(A\setminus\{y\}), \] which contradicts that $\mu\leq \varphi$. \endproof \medskip Now we present a very elementary example of a pathological lscsm. \begin{ex} \label{patologica-minima} {\em Let $\varphi$ be the lscsm defined on $\{0,1,2\}$ by $\varphi(\emptyset)=0$, $\varphi(A)=1$ if $0<\vert A \vert\leq 2$ and $\varphi(\{0,1,2\})=2$. Then $\varphi$ is the minimal example of a pathological integer valued submeasure on a finite set where singletons have submeasure 1. By doing some calculations, it can be proved that $P(\varphi)=\frac{4}{3}$. } \end{ex} As was mentioned in the previous section, if the pathological degree of a lscsm $\varphi$ is finite, then $\fin(\varphi)$ can also be represented by a non-pathological lscsm. We do not know if this holds for all pathological lscsm. On the other hand, for trivial reasons, every $F_{\sigma}$ ideal and every analytic P-ideal can be induced by a pathological lscsm, as the following proposition shows. \begin{prop} Let $\varphi$ be any lscsm on $\N$. There is a pathological lscsm $\psi$ such that $\fin(\varphi)=\fin(\psi)$ and $\mbox{\sf Exh}(\varphi)=\mbox{\sf Exh}(\psi)$. \end{prop} \proof Let $\varphi_0$ be the submeasure on $\{0,1,2\}$ defined in Example \ref{patologica-minima}. Let $$ \psi(A)=\varphi_0(A\cap\{0,1,2\})+\varphi(A\setminus\{0,1,2\}). $$ $\psi$ is pathological, as $\psi\restriction \{0,1,2\}=\varphi_0$. Clearly $\psi$ works.
\endproof We now provide non-trivial examples of pathological and non-pathological submeasures inducing the same $F_{\sigma}$ ideal. The next example deals with a non-tall ideal. \begin{ex} \label{finxvacio} {\em {\em Two non-pathological lscsm for $\fin \times \{\varnothing \}$}. We fix a partition $(B_n)_n$ of $\N$ into infinitely many infinite sets. We recall that the ideal $\fin\times\{\emptyset\}$ is defined by $A\in \fin \times \{\varnothing \}$ iff there is $k$ such that $A\subseteq B_0\cup\cdots\cup B_k$. Consider the following lscsm $\varphi$. For $\emptyset\neq A\subseteq \N$, let \[ \varphi(A)=\sup\{m: A\cap B_m\neq \emptyset\}+1. \] To see that $\varphi$ is non-pathological, consider the family of measures $\{\mu_k:k\in\N\}$, where for each $k$, $supp(\mu_k)=\{k\}$ and $\mu_k(\{k\})=m+1$ iff $k\in B_m$. Then $\varphi=\sup_k\mu_k$. Let us note that $\varphi$ is the submeasure coming from the proof of Mazur's Theorem \ref{mazur}. Let $$ \mathcal{K}_{n}=\mathcal{P}(B_0\cup\cdots\cup B_n). $$ Then $\fin \times \{\varnothing \}=\bigcup_n \mathcal{K}_{n}$ and $\varphi$ is the submeasure associated to $(\mathcal{K}_n)_n$. Another integer valued submeasure for $\fin\times\{\emptyset\}$ is defined as follows. Let \[ \psi(A)=\min\{ |F|: \; F\subseteq\N\; \&\; A\subseteq\bigcup_{n\in F}B_n\}. \] Then $\fin \times \{\varnothing \}=\fin(\psi)$. We claim that $\psi$ is non-pathological. We recall that for $A\subseteq \N$, \[ \widehat{\psi}(A)=\sup\{\mu(A): \; \mu\; \mbox{is a measure with $\mu\leq \psi$}\}. \] A partial selector for $(B_n)_n$ is a set $S\subseteq\N$ such that $|S\cap B_n|\leq 1$ for all $n$. For each finite partial selector $S$ for $(B_n)_n$ we have that $\psi(S)=|S|$. Let $\mu_S$ be the measure on $S$ given by $\mu_S(A)=|A\cap S|$. Then $\mu_S\leq \psi$ and $\mu_S(S)=\psi(S)$. Hence $\psi(S)=\widehat{\psi}(S)$ for every finite partial selector $S$.
Now, for every finite set $F$ there is a partial selector $S\subseteq F$ such that $\psi(F)=\psi(S)$, and thus $\psi(F)=\widehat{\psi}(S)\leq \widehat{\psi}(F)$. By lower semicontinuity, $\psi\leq \widehat{\psi}$ and thus $\psi=\widehat\psi$, i.e. $\psi$ is non-pathological. } \end{ex} \bigskip Now we present an example of a pathological lscsm $\varphi$ and a non-pathological lscsm $\psi$ such that $\fin(\varphi)=\fin(\psi)$ and $\fin(\varphi)$ is tall. \begin{ex} \label{ED} {\em Let $(B_n)_n$ be a partition of $\N$ into infinite sets. The ideal $\mathcal{ED}$ is defined as the ideal generated by the pieces and the partial selectors of the partition $(B_n)_n$. Let $\mathcal{K}_0$ be the set $\{\emptyset\}$ and \[ \mathcal{K}_1= \{H\subseteq\N: H\subseteq B_n\;\text{for some $n$}\}\cup\{H\subseteq \N: \text{$H$ is a partial selector for $(B_n)_n$} \}. \] Then $\mathcal{K}_1$ is closed and hereditary, and $\mathcal{ED}$ is the ideal generated by $\mathcal{K}_1$. Let \[ \mathcal{K}_{n+1}= \{H_1\cup\cdots\cup H_{n+1}: H_i\in \mathcal{K}_n\; \text{for $1\leq i\leq n+1$}\}. \] Let $\varphi$ be Mazur's submeasure for this family of closed hereditary sets. Clearly $\mathcal{ED}=\fin(\varphi)$. We use Proposition \ref{pathological} to show that $\varphi$ is pathological. Pick a set $A=\{x_1, x_2, x_3\}$ such that $x_1\in B_0$ and $x_2, x_3\in B_1$. Notice that $\varphi(A)=2$ and $\varphi(A\setminus\{y\})=1$ for all $y\in A$. One integer valued submeasure inducing $\mathcal{ED}$ is given by $$ \psi_0(A)=\min\{m\in\N :(\forall n> m)\vert A\cap B_n\vert\leq m\}; $$ it is easy to see that $\fin(\psi_0)=\mathcal{ED}$. To obtain a non-pathological submeasure, consider instead the family $$ \mathcal{S}=\{\mu_n^F: n\in\N, F\in[B_n]^{n+1}\} $$ where, for fixed $n$ and $F$, $\mu_n^F$ is the counting measure supported on $F$, and define $\psi(A)=\sup\{ \mu(A):\mu\in\mathcal{S}\}$.
Note that if $A\subseteq B_n$ then $\psi(A)\leq n+1$, and $\psi(A)=\infty$ if and only if for every $m$ there exists $n\geq m$ such that $\vert A\cap B_n\vert\geq \mu^F_n(A)\geq m$ for some $F\in[B_n]^{n+1}$, which is equivalent to saying that $A\notin \mathcal{ED}$. Then $\fin(\psi)=\mathcal{ED}$ and, obviously, $\psi$ is non-pathological. } \end{ex} For the sake of completeness, we mention that both submeasures $\varphi$ and $\psi$ from the previous example remain pathological and non-pathological, respectively, when they are restricted to $\Delta=\bigcup_n C_n$, where each $C_n$ is a fixed subset of $B_n$ with cardinality $n+1$. $\mathcal{ED}_{fin}$ denotes the restriction of $\mathcal{ED}$ to $\Delta$. It is immediate that, in general, every restriction of a non-pathological submeasure is non-pathological, while some restrictions of pathological submeasures are non-pathological. Other kinds of preservation of non-pathology are described in the following proposition. \begin{prop} If $\{\varphi_n:n\in\N\}$ is a family of non-pathological lscsm, then $\sup_n\varphi_n$ is a non-pathological lscsm. If $(B_n)_n$ is a point-finite covering of $\N$ (i.e., each $m$ belongs to finitely many sets $B_n$; in particular, this is the case for partitions) and $\varphi_n$ is a non-pathological lscsm on $B_n$, then $\sum_n \varphi_n$ is non-pathological on $\N$. \end{prop} \begin{proof} The first claim is evident. For the second, since $\varphi_{n}$ is non-pathological, there exists a sequence of measures $(\nu_{n}^{k})_k$ such that $\varphi_n=\sup_{k}\nu_{n}^{k}$. For each $F\in \fin$, there exists $n_{F}\in \mathbb{N}$ such that \begin{equation}\label{eqnsup} \varphi(F)=\sum_{n=0}^{n_{F}}\varphi_{n}(F)=\sum_{n=0}^{n_{F}}\sup_{k}\nu_{n}^{k}(F)=\sup_{k} \sum_{n=0}^{n_{F}}\nu_{n}^{k}(F). \end{equation} Define \begin{equation}\label{eqnmeas} \mu_{k}^{F}=\sum_{n=0}^{n_{F}}\nu_{n}^{k}.
\end{equation} Note that $\mu_k^{F}$ is a measure on $\N$ for every $k\in \mathbb{N}$ and $F\in \fin$. Let $(\mu_k)_k$ be an enumeration of $\{\mu_{k}^{F}:k\in \mathbb{N} \text{ and } F\in \fin \}$. From equations (\ref{eqnsup}) and (\ref{eqnmeas}), $\varphi(F)=\sup_{k}\mu_{k}(F)$ for every $F\in\fin$. In fact, given $A\subseteq \mathbb{N}$, we have $$ \varphi(A)=\sup_{F\in [A]^{<\omega}}\varphi(F)=\sup_{F\in [A]^{<\omega}}\sup_{k}\mu_{k}(F)=\sup_{k}\sup_{F\in [A]^{<\omega}}\mu_{k}(F)=\sup_{k}\mu_{k}(A). $$ Thus $\varphi$ is non-pathological. \end{proof} \subsection{Tallness, property A and ideals generated by homogeneous sets of colorings} \label{propiedadA} What conditions on an $F_\sigma$ ideal imply tallness? In this subsection we examine two very different such conditions. Greb\'{\i}k and Hru\v{s}\'{a}k \cite{GrebikHrusak2020} showed that there are no simple characterizations of the class of tall $F_\sigma$ ideals; in fact, they showed that the collection of closed subsets of $2^{\N}$ which generate an $F_\sigma$ tall ideal is not Borel as a subset of the hyperspace $K(2^{\N})$. As a first remark, note that $\mbox{\sf Exh}(\varphi)$ is tall iff $\mbox{\sf Sum}(\varphi)$ is tall iff $\varphi(\{n\})\rightarrow 0$. Indeed, if we let $C_{n}=\{x\in \N:\; 2^{-n}<\varphi(\{x\})\}$, then $\mbox{\sf Exh}(\varphi)$ is tall iff each $C_n$ is finite. Notice also that $\fin(\varphi)$ is tall whenever $\mbox{\sf Exh}(\varphi)$ is tall, as $\mbox{\sf Exh}(\varphi)\subseteq\fin(\varphi)$. We now introduce property A, which is weaker than requiring that $\lim_n \varphi (\{n\})=0$ but which suffices to get that $\fin(\varphi)$ is tall. We notice that it is possible that $\fin(\varphi)$ is tall while $\mbox{\sf Sum}(\varphi)$ and $\mbox{\sf Exh}(\varphi)$ are not (see Example \ref{ejemadecuada}). \begin{defi} A lscsm $\varphi$ on $\N$ has {\em property A} if $\varphi(\N)=\infty$ and $\varphi(\{n\in \N:\; \varphi\{n\}>\varepsilon\})<\infty$ for all $\varepsilon>0$.
\end{defi} Property A can be seen as a condition about the convergence of $(\varphi\{n\})_n$ to 0, but in a weak sense. In fact, let us recall that, for a given filter $\mathcal{F}$ on $\N$, a sequence $(r_n)_n$ of real numbers {\em $\mathcal{F}$-converges} to $0$ if $\{n\in \N: |r_n|<\varepsilon\}\in \mathcal{F}$ for all $\varepsilon>0$. Let $\mathcal{F}$ be the dual filter of $\fin(\varphi)$. Then $\varphi$ has property A iff $(\varphi\{n\})_n$ $\mathcal F$-converges to $0$. On the other hand, property A also has a different interpretation. We recall that an ideal $\mathcal{I}$ over $\N$ is {\em weakly selective} \cite{HMTU2017} if, given a positive set $A\in \mathcal{I}^+$ and a partition $(A_n)_n$ of $A$ into sets in $\mathcal{I}$, there is $S\in\mathcal{I}^+$ such that $S\cap A_n$ has at most one point for each $n$. If a submeasure $\varphi$ has property A, then $\fin(\varphi)$ fails to be weakly selective, as witnessed by the following partition of $\N$: $A_{n+1}=\{x\in \N: 1/2^{n+1} \leq\varphi \{x\} < 1/2^n\}$ and $A_0= \{x\in \N: 1\leq \varphi\{x\}\}$. In fact, any selector for $\{A_n:\; n\in \N\}$ belongs to $\mbox{\sf Exh}(\varphi)$ and thus to $\fin(\varphi)$. \begin{prop} \label{propAtall} $\fin(\varphi)$ is tall for every lscsm $\varphi$ with property A. \end{prop} \proof Let $A\subseteq \N$ be an infinite set. If there is $\varepsilon>0$ such that $A\subseteq \{n\in \N:\; \varphi\{n\}>\varepsilon\}$, then $A\in \fin(\varphi)$, as $\varphi$ has property A. Otherwise, pick $n_k\in A$ such that $\varphi(\{n_k\})\leq 2^{-k}$ for all $k\in \N$. Let $B=\{n_k:\; k\in \N\}$. Then $\varphi(B)\leq \sum_k \varphi (\{n_k\})<\infty$. \qed Note that any integer valued lscsm fails to have property A. Thus, every $F_\sigma$ tall ideal $\mathcal{I}$ is induced by some lscsm $\psi$ without property A (for example, the one given by the proof of Mazur's theorem applied to $\mathcal{I}$). \bigskip Now we present a natural construction of submeasures with property A.
In particular, it provides a lscsm $\varphi$ such that $\fin(\varphi)$ is tall but $ \varphi (\{n\})\not\to 0$. \begin{ex} \label{ejemadecuada} {\em Let $\{B_n:n\in\N\}$ be a partition of $\N$ into infinite sets and $ \{B_n^k:k\in\N\}$ be a partition of $B_n$ satisfying: \begin{itemize} \item $B_n^0$ consists of the first $2^n(n+1)$ elements of $B_n$. \item $\min B_n^{k+1}=\min\{x\in B_n:x>\max B_n^k\}$. \item $\vert B_n^{k+1}\vert \geq \vert B_n^k\vert$. \end{itemize} \medskip \noindent Let $\nu_n^k$ be the measure on $B_n^k$ given by $\nu_n^k (\{x\})=\frac{n+1}{\vert B_n^k\vert}$ for all $x\in B_n^k$. Let $$ \varphi_n=\sup_k \nu_n^k$$ and $$ \varphi=\sum_n \varphi_n. $$ Then $\varphi$ is a non-pathological lscsm. We list some useful facts about this construction. \medskip \begin{enumerate} \item $\varphi (B_n)=\varphi (B_n^k)=n+1$ for all $n$ and $k$. \item Let $(n_i)_i$ and $(k_i)_i$ be two sequences in $\N$. Suppose $(n_i)_i$ is increasing. Then $\varphi(\bigcup_i B_{n_i}^{k_i})=\infty$. \item $\varphi$ has property A. Let $\varepsilon>0$ and $M_{\varepsilon}=\{x\in\N: \varphi(\{x\})\geq\varepsilon\}$. Notice that $\nu_n^k (\{x\})\leq \frac{1}{2^n}$ for all $x\in B_n^k$. Let $N$ be such that $2^{-N}<\varepsilon$; then $M_{\varepsilon}$ is disjoint from $B_{m}$ for all $m>N$, and thus $M_{\varepsilon}\subseteq B_0\cup \cdots\cup B_N$ belongs to $\fin(\varphi)$. \item $\fin (\varphi)$ is not a P-ideal. In fact, the $P$-property fails at $(B_n)_n$. Indeed, suppose that $B_n\subseteq^*X$ for all $n$. Then for each $n$ there is $k_n$ such that $B_n^{k_n}\subseteq X$, and thus $\bigcup_nB_n^{k_n}\subseteq X$. By (2), $X\notin\fin(\varphi)$. \item Every selector of the $B_n$'s belongs to $\mbox{\sf Sum}(\varphi)$. \item $B_n\in \fin(\varphi)\setminus \mbox{\sf Exh}(\varphi)$ for all $n$. \end{enumerate} \bigskip Now we present two particular examples of the previous general construction.
\medskip \begin{itemize} \item[(a)] Suppose $\vert B_n^{k+1}\vert=\vert B_n^k\vert$ for all $n$ and $k$. Notice that $\nu_n^k (\{x\})=\frac{n+1}{\vert B_n^k\vert}=1/2^n$ for all $x\in B_n$. Thus $\varphi(\{x\})=1/2^n$ for all $x\in B_n$, and $\lim_m \varphi(\{m\})$ does not exist. Then $\mbox{\sf Sum}(\varphi)$ and $\mbox{\sf Exh}(\varphi)$ are not tall, but $\fin(\varphi)$ is tall since it has property A. \medskip \item[(b)] Suppose $\vert B_n^{k+1}\vert=\vert B_n^k\vert+n+1=(n+1)(2^n+k+1)$. Then $\nu_n^k (\{x\})=\frac{n+1}{\vert B_n^k\vert}=\frac{1}{2^n+k}$ for all $x\in B_n^k$. We show that $\varphi(\{m\})\to 0$ when $m\to \infty$. Given $\varepsilon>0$, we have seen that $M_{\varepsilon}=\{x\in\N: \varphi(\{x\})\geq\varepsilon\}$ is disjoint from $B_{m}$ for all $m>N$ when $2^{-N}<\varepsilon$, and it is also disjoint from $B_n^k$ when $k^{-1}<\varepsilon$. Hence $M_{\varepsilon}$ is finite. We claim that $\mbox{\sf Sum}(\varphi)\neq \mbox{\sf Exh}(\varphi)\neq \fin(\varphi)$. By (4) and (6), it is sufficient to prove that there is $X\in\mbox{\sf Exh}(\varphi)\setminus \mbox{\sf Sum}(\varphi)$. For a fixed $n$, let $X=\{x_k:k\in\N\}$ be such that $x_k\in B_n^k$. Since $\varphi(\{x_k\})=\frac{1}{2^n+k}$ for all $k$, $X\not\in \mbox{\sf Sum}(\varphi)$. On the other hand, $\varphi(\{x_k:\, k\geq m\})=\frac{1}{2^n+m}\to 0$ when $m\to \infty$. \end{itemize} \medskip In both examples (a) and (b), $\varphi$ is a non-pathological submeasure since it can be expressed as $\sup\{ \mu_s:s\in\mathbb{N}^{<\omega}\}$, where $\mu_s$ is defined by $$\mu_s(D)= \sum_{j<\vert s \vert} \nu_{j}^{s(j)}(D\cap B_{j}^{s(j)})$$ for $s\in\mathbb{N}^{<\omega}$ and $D\subseteq \mathbb{N}$. } \end{ex} Now we present some examples of tall ideals which do not contain $\fin(\varphi)$ for any $\varphi$ with property A. Our examples are motivated by Ramsey's theorem.
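Incidentally, the arithmetic behind case (a) of Example \ref{ejemadecuada} can be checked mechanically on a finite window; the concrete layout of the blocks below (consecutive integers) is our own choice, since the construction only fixes their sizes:

```python
# Finite check (our layout of the partition) of example (a): every block
# B_n^k has size 2^n (n+1), nu_n^k gives each of its points mass 2^{-n},
# phi_n = sup_k nu_n^k and phi = sum_n phi_n; hence phi(B_n^k) = n + 1
# and phi({x}) = 2^{-n} for x in B_n.
blocks = {}                       # (n, k) -> the set of points of B_n^k
cursor = 0
for n in range(4):
    for k in range(3):
        size = 2 ** n * (n + 1)   # case (a): |B_n^{k+1}| = |B_n^k|
        blocks[(n, k)] = set(range(cursor, cursor + size))
        cursor += size

def nu(n, k, A):
    return len(A & blocks[(n, k)]) * (n + 1) / len(blocks[(n, k)])

def phi(A):
    return sum(max(nu(n, k, A) for k in range(3)) for n in range(4))

for (n, k), B in blocks.items():
    assert abs(phi(B) - (n + 1)) < 1e-9            # fact (1): phi(B_n^k) = n+1
    for x in B:
        assert abs(phi({x}) - 2 ** (-n)) < 1e-9    # phi({x}) = 2^{-n} on B_n
print("facts of example (a) verified on the finite window")
```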
We refer the reader to section \ref{preliminares} where the notation is explained and to \cite{GrebikUzca2018} for more information about this type of ideals. We say that a coloring $c:[X]^2\to 2$ \emph{favors color} $i$, if there are no infinite $(1-i)$-homogeneous sets and in every set belonging to $Hom(c)^*$ there are $(1-i)$-homogeneous sets of any finite cardinality. \begin{prop} \label{favor0} Let $c:[X]^2\to 2$ be a coloring that favors a color. Then, $Hom(c)$ does not contain $\fin(\varphi)$ for any lscsm $\varphi$ with property A. \end{prop} \begin{proof} Suppose $c$ favors color 0. Let $\varphi$ be an arbitrary lscsm on $X$ with property A and suppose that $\fin(\varphi)\subseteq Hom(c)$. We will construct a set $A$ in $Hom(c)^+$ with $\varphi(A)<\infty$, which is a contradiction. Let $B_{n}=\{x\in \N:\; 2^{-n}<\varphi(\{x\})\}$ for each $n\in \N$. As $\varphi$ has property $A$, $B_n\in \fin(\varphi)$ and $\N\setminus B_n\in Hom(c)^*$. By hypothesis, for each $n\in \N$, there is a 1-homogeneous finite set $A_n$ with $n$ elements such that $A_n\cap B_n=\emptyset$. Since $\varphi(A_n)\leq \frac{n}{2^n}$ and $\sum_n \frac{n}{2^n}<\infty$, $A=\bigcup_n A_n\in \fin(\varphi)$. As $\fin(\varphi)\subseteq Hom(c)$, there is a finite union of 0-homogeneous sets $C=C_1\cup\cdots\cup C_k$ such that $A\subseteq^* C$. As $A$ is infinite, there is $l>k$ such that $A_{l} \subseteq C$. Since $A_l$ has $l$ elements, there are $i\leq k$ and $x\neq y\in A_{l} \cap C_i$, which is impossible as $A_{l}$ is 1-homogeneous and $C_i$ is 0-homogeneous; a contradiction. \end{proof} We present two examples of colorings satisfying the hypothesis of the previous proposition. \begin{ex} \label{edfin} Let $(P_n)_n$ be a partition of $\N$ such that $|P_n|=n$. Let $c$ be the coloring given by $c\{x,y\}=0$ iff $x,y\in P_n$ for some $n$. This coloring favors color 1. Notice that $\mathcal{ED}_{fin}$ is the ideal generated by the $c$-homogeneous sets.
\end{ex} \begin{ex} \label{SI} Let $\Q$ be the set of rational numbers. Let $\{r_n:\; n\in\N\}$ be an enumeration of $\Q$. The \emph{Sierpinski coloring} $c:[\Q]^{2}\to 2$ is defined, for $n<m$, by $c\{r_n, r_m\}=0$ iff $r_n<r_m$. Denote by $\mathcal{SI}$ the ideal generated by the $c$-homogeneous sets. Observe that the homogeneous sets are exactly the monotone subsequences of $\{r_n:\; n\in \N\}$. For each $n$, pick an infinite homogeneous set $X_n\subseteq (n,n+1)$ of color 0 and let $X=\bigcup_n X_n$. Then $X\in Hom(c)^+$. It is easy to check that $c\restriction X$ favors color $0$. \end{ex} \bigskip To see that Proposition \ref{favor0} is not an equivalence, we notice that $\mathcal{ED}$ is the ideal generated by the homogeneous sets of a coloring not favoring any color; nevertheless, by an argument similar to that used in the proof of Proposition \ref{favor0}, it can be shown that $\mathcal{ED}$ does not contain $\fin(\varphi)$ for any lscsm $\varphi$ with property A. \subsection{Borel selectors} \label{BS} A tall family $\mathcal{I}$ admits a {\em Borel selector}, if there is a Borel function $F: [\N]^\omega\to [\N]^\omega$ such that $F(A)\subseteq A$ and $F(A)\in \mathcal{I}$ for all $A$. This notion was studied in \cite{GrebikHrusak2020,GrebikUzca2018,grebik2020tall}. Typical examples of such families are the collections of homogeneous sets of colorings. This fact can be used to show that some tall ideals admit a Borel selector. For instance, suppose $\varphi$ is a lscsm with property A. We have seen that $\fin(\varphi)$ is tall. To see that it has a Borel selector we could proceed as follows. We assume that $\varphi(\{n\})\neq 0$ for all $n$. Let \[ B_{k+1}=\{n\in \N:\; 2^{-k-1}\;\leq \varphi(\{n\})<2^{-k}\} \] for $k\in \N$ and $B_0=\{n\in \N:\; \varphi(\{n\})\geq 1\}$. Let $c:[\N]^{2}\to \{0,1\}$ be the coloring associated to the partition $(B_k)_k$, that is to say, $c(\{n,m\})=1$ if, and only if, there is $k$ such that $n,m\in B_k$.
Let $\hom(c)$ be the collection of $c$-homogeneous sets. The argument used in the proof of Proposition \ref{propAtall} shows that $\hom(c)\subseteq \fin(\varphi)$. Since $\hom(c)$ has a Borel selector (see \cite[Corollary 3.8]{GrebikUzca2018}), so does $\fin(\varphi)$. \bigskip In this section we analyze Borel selectors for ideals $\fin(\varphi)$ for a non-pathological $\varphi$. Let $(\mu_k)_k$ be a sequence of measures on $\N$ and $\varphi=\sup_k\mu_k$. If $\fin(\varphi)$ is tall, then clearly $\sup_n\varphi(\{n\})\leq M$ for some $M$ and we can assume, after dividing every measure by $M$, that $\mu_k(\{n\})\leq 1$ for all $n$ and $k$. To each $n$ we associate a partition of $\N$ as follows. For $i\in \N$, let \[ A^n_i=\left\{k\in \N: \; \frac{1}{2^{i+1}}\leq \mu_k(\{n\}) <\frac{1}{2^{i}}\right\} \] and \[ A^n_\infty=\{k\in\N:\; \mu_k(\{n\})=0\}. \] Each $\mathcal{P}_n=\{A^n_i:\; i\in\N\cup\{\infty\}\}$ is a partition of $\N$. Let $L_n=\{i\in \N:\; A^n_i\neq \emptyset\}$. We will see later that, without loss of generality, we can assume that each $L_n$ is finite (see Remark \ref{equalB2}). We present some conditions on those partitions which imply that $\fin(\varphi)$ is tall and has a Borel selector. \bigskip \begin{prop} \label{c0like} Let $(\mu_k)_k$ be a sequence of measures on $\N$ such that $\mu_k(\{n\})\leq 1$ for all $n$ and $k$. Let $\varphi=\sup_k\mu_k$. Suppose \begin{equation} \label{coloring2b} (\forall n)(\exists m>n)( \forall i) \;(A^n_i\subseteq A^m_\infty \cup \bigcup_{j=i+1}^{\infty} A^m_j). \end{equation} Then $\fin(\varphi)$ is tall and has a Borel selector. \end{prop} \proof Define a coloring $c:[\N]^2\to 2$ as follows: for $n<m$ \begin{equation} \label{coloring2a} c\{n,m\} =1 \;\text{iff} \; ( \forall i\in\N) \;(A^n_i\subseteq A^m_\infty \cup \bigcup_{j=i+1}^{\infty} A^m_j). \end{equation} We will show that $\hom(c)\subseteq \fin(\varphi)$. Notice that from the hypothesis we have that every $c$-homogeneous set is of color 1.
Thus it suffices to show that every $c$-homogeneous set of color 1 belongs to $\fin(\varphi)$. Let $H=\{n_i:\;i\in\N\}$, enumerated increasingly, be a homogeneous set of color 1. We claim that for all $k$ and $m$ \[ \mu_k(\{n_0, \cdots, n_m\})=\sum_{i=0}^m \mu_k(\{n_i\})\leq 2, \] which implies that $\varphi(H)\leq 2$. In fact, fix $k$ and $m$. We can assume without loss of generality that $\mu_k(\{n_i\})\neq 0$ for all $i\leq m$. For each $i\leq m$, let $j_i$ be such that $k\in A^{n_i}_{j_i}$. Since $H$ is 1-homogeneous and $k\in A^{n_0}_{j_0}$ and $k\in A^{n_1}_{j_1}$, we have $j_0<j_1$. In general, we conclude that $j_0<j_1<\cdots <j_m$. Thus \[ \displaystyle\sum_{i=0}^m \mu_k(\{n_{i}\})\leq \sum_{i=0}^\infty\frac{1}{2^{i}}=2. \] \endproof \bigskip \begin{ex} Let $\{C_j: j\in \N\}$ be a partition of $\N$. For each $i, n\in \N$, let \[ A^n_i=\begin{cases} \emptyset, & \text{if $0\leq n< 2^i$;}\\ \\ C_j, & \text{if $2^i(j+1)\leq n <2^i(j+2)$.} \end{cases} \] For each $k$ we define a measure $\mu_k$ as follows: $\mu_k(\{n\})=2^{-i-1}$ if $k\in A^n_i$ and is equal to 0 otherwise. Then the associated partition $\mathcal{P}_n$ is $\{A^n_i: i\in \N\}\cup\{A^n_\infty\}$. The reader can easily verify that \eqref{coloring2b} holds. \end{ex} \medskip Let $(\mathcal{Q}_n)_n$ be a sequence of families of pairwise disjoint subsets of $\N$, say $\mathcal{Q}_n=\{B^n_i:\; i\in \N\}$. The sequence is {\em eventually disjoint}, if there is $p\in \N$ such that for all $n\neq m$: \begin{equation} \label{evenDis} \,B^n_i \cap B^m_i=\emptyset,\;\;\mbox{for all $i>p$}. \end{equation} \begin{prop} \label{evenDis2} Let $(\mathcal{Q}_n)_n$ be a sequence of families of pairwise disjoint subsets of $\N$, say $\mathcal{Q}_n=\{B^n_i:\;i\in\N\}$. Let $L_n=\{i\in \N:\: B^n_i\neq \emptyset\}$. Suppose there is $l\in \N$ such that $|L_n|\leq l$ for every $n$. Then, there is an infinite set $A\subseteq \N$ such that $(\mathcal{Q}_n)_{n\in A}$ is eventually disjoint. \end{prop} \proof By induction on $l$. For $l=1$.
Let $\mathcal{Q}_n=\{B^n_{i_n}\}$. We consider two cases: (a) There is $A\subseteq \N$ infinite such that $(i_n)_{n\in A}$ is constant. Then $(\mathcal{Q}_n)_{n\in A}$ is eventually disjoint. (b) There is $A\subseteq \N$ infinite such that $n\in A\mapsto i_n$ is 1-1; then $(\mathcal{Q}_n)_{n\in A}$ is eventually disjoint. Suppose now that the claim holds for sequences with $|L_n|\leq l$ for all $n$. Let $\mathcal{Q}_n=\{B^n_i:\;i\in\N\}$ be such that $|L_n|\leq l+1$. Let $i_n=\min(L_n)$. There are two cases to be considered. (a) Suppose $\sup_n i_n=\infty$. Pick $A=\{n_k:\; k\in \N\}$ such that $\max(L_{n_k})<\min(L_{n_{k+1}})$. Then $(\mathcal{Q}_{n_k})_{k\in \N}$ is eventually disjoint. (b) There is $B\subseteq \N$ infinite such that $(i_n)_{n\in B}$ is constant. Let $\mathcal{Q}_n'=\mathcal{Q}_n\setminus \{B^n_{i_n}\}$. Then, we can apply the inductive hypothesis to $ (\mathcal{Q}_n')_{n\in B}$ and find $A\subseteq B$ infinite such that $ (\mathcal{Q}_n')_{n\in A}$ is eventually disjoint. Then $ (\mathcal{Q}_n)_{n\in A}$ is also eventually disjoint. \endproof \bigskip \begin{prop} Let $\varphi=\sup_k\mu_k$ where each $\mu_k$ is a measure on $\N$ such that $\mu_k(\{n\})\leq 1$ for all $n$ and $k$. Suppose the sequence of partitions associated to $(\mu_k)_k$ is eventually disjoint. If $\fin(\varphi)$ is tall, then it has a Borel selector. \end{prop} \proof Let $p\in \N$ be as in \eqref{evenDis}. Consider the following coloring of the Schreier barrier $\mathcal{S}$: \[ c_3(\{q,n_1, \cdots, n_q\})=1\;\;\; \mbox{iff there is $k\in \N$ such that $\mu_k(\{n_j\})\geq 2^{-p-1}$ for all $1\leq j\leq q$.} \] We will show that $\hom(c_3)\subseteq \fin(\varphi)$. We first show that every infinite homogeneous set is of color 0. In fact, let $A$ be an infinite homogeneous set; it suffices to find $s\subseteq A$ with $s\in \mathcal{S}$ of color 0. Since $\fin(\varphi)$ is tall and every infinite subset of $A$ is homogeneous of the same color, we may assume that $\varphi (A)<\infty$. Let $q\in A$ be such that $q2^{-p-1}>\varphi(A)$.
Let $q<n_1<\cdots<n_q$ be elements of $A$; we claim that $c_3(\{q,n_1, \cdots, n_q\})=0$. In fact, suppose not, and let $k$ be such that $\mu_k(\{n_j\})\geq 2^{-p-1}$ for all $1\leq j\leq q$. Thus \[ \sum_{j=1}^q\mu_k(\{n_j\})\geq q 2^{-p-1}>\varphi(A), \] a contradiction. To finish the proof, it suffices to show that every infinite homogeneous set $H$ of color 0 belongs to $\fin(\varphi)$. Let $q=\min(H)$ and $F\subseteq H/q$ be a finite set. Fix $k\in \N$ and let \[ F_k=\{n\in F:\;\mu_k(\{n\})\geq 2^{-p-1}\}. \] Since $H$ is homogeneous of color $0$, $|F_k|<q$. Let $n,m\in F\setminus F_k$ with $n\neq m$. Then $\mu_k(\{n\})<2^{-p-1}$ and $\mu_k(\{m\})<2^{-p-1}$. Thus $k\in A^n_i\cap A^m_j$ for some $i,j> p$ and by \eqref{evenDis}, we have that $i\neq j$. Thus \[ \sum_{n\in F\setminus F_k}\mu_k(\{n\})\leq \sum_{i>p} 1/2^i. \] Then \[ \sum_{n\in F}\mu_k(\{n\})= \sum_{n\in F_k}\mu_k(\{n\})+\sum_{n\in F\setminus F_k}\mu_k(\{n\}) \leq q+2. \] Since $k$ and $F$ were arbitrary, $\varphi(H/q)\leq q+2$ and hence $H\in\fin(\varphi)$. \endproof From the previous results we obtain the following. \begin{prop} Let $\varphi=\sup_k\mu_k$ where each $\mu_k$ is a measure on $\N$ such that $\mu_k(\{n\})\leq 1$ for all $n$ and $k$. Let $\{A^n_i:\; i\in \N\}\cup \{A^n_\infty\}$ be the associated partitions and $L_n=\{i\in \N: A^n_i\neq \emptyset\}$. Suppose there is $l$ such that $|L_n|\leq l$ for all $n$. If $\fin(\varphi)$ is tall, then it has a Borel selector. \end{prop} The main open question we have left is the following: \begin{question} \label{nonpatho-selector} Let $\varphi$ be a non-pathological lscsm such that $\fin(\varphi)$ is tall. Does $\fin(\varphi)$ have a Borel selector? \end{question} \section{$\mathcal{B}$-representable ideals} \label{B-representable} Let $(G,+,d)$ be a Polish abelian group. Let ${\bf x}= (x_n)_n$ be a sequence in $G$. We say that $\sum_n x_n$ is {\em unconditionally convergent}, if there is $a\in G$ such that $x_{\pi(0)}+x_{\pi(1)}+\cdots+ x_{\pi(n)} \to a$ (as $n\to\infty$) for every permutation $\pi$ of $\N$.
We say that ${\bf x}$ is {\em perfectly bounded}, if there exists $k>0$ such that for every $F\in \fin$, $$ d\left (0,\sum_{n\in F}x_n \right )\leq k. $$ Following \cite{Borodulinetal2015,Drewnowski}, we introduce two ideals. Given ${\bf x}= (x_n)_n$ a sequence in $G$, let $$ \mathcal{C}({\bf x})=\left \{ A\subseteq \mathbb{N}: \sum_{n\in A} x_n \text{ is unconditionally convergent}\right \} $$ and $$ \mathcal{B}({\bf x})=\left \{ A\subseteq \mathbb{N}: \sum_{n\in A} x_n \text{ is perfectly bounded }\right \}. $$ We observe that the ideal $\B({\bf x})$ depends on the metric of the group and not just on its topology, in contrast with $\C({\bf x})$, which depends only on the topology. An ideal $\mathcal{I}$ is {\em Polish $\mathcal{B}$-representable} (resp. {\em Polish $\C$-representable}) if there exists a Polish abelian group $G$ and a sequence ${\bf x}=(x_n)_n$ in $G$ such that $\mathcal{I}=\mathcal{B}({\bf x})$ (resp. $\mathcal{I}=\C({\bf x})$). Polish $\C$-representable ideals were studied in \cite{Borodulinetal2015}. As a consequence of Solecki's Theorem \ref{solecki} they proved the following. \begin{teo} \label{Polish C-repre} (Borodulin et al \cite{Borodulinetal2015}) An ideal $\mathcal{I}$ is Polish $\C$-representable iff it is an analytic $P$-ideal. \end{teo} In this paper we focus on Polish $\B$-representable ideals. \subsection{$\B$-representability in Polish groups} Let $G$ be a Polish abelian group, $d$ a complete, translation-invariant metric on $G$ and ${\bf x}=(x_n)_n$ a sequence in $G$. We associate to ${\bf x}$ a lscsm $\varphi_{\bf x}$ as follows: $\varphi_{\bf x}(\varnothing)=0$ and if $A\neq \varnothing$ we let $$ \varphi_{\bf x}(A)=\sup \left \{ d\left (0,\sum_{n\in F}x_n \right ): \varnothing \neq F \in [A]^{< \omega} \right \}. $$ We show that $\varphi_{\bf x}$ is indeed a lscsm. Let $A$ and $B$ be finite disjoint subsets of $\N$.
Then, by the translation invariance of $d$, we have \[ d(0,\sum_{n\in A\cup B}x_n)=d(0, \sum_{n\in A}x_n+\sum_{n\in B}x_n)\leq d(0,\sum_{n\in A}x_n)+d(\sum_{n\in A}x_n,\sum_{n\in A}x_n+\sum_{n\in B}x_n)= d(0,\sum_{n\in A}x_n)+d(0,\sum_{n\in B}x_n). \] \medskip Let $A,B$ be arbitrary subsets of $\N$, and $\varepsilon>0$. Take a finite subset $F$ of $A\cup B$ such that $d(0,\sum_{n\in F}x_n)\geq \varphi_{\bf x}(A\cup B)-\varepsilon$. Since $d(0,\sum_{n\in F}x_n)\leq d(0,\sum_{n\in F\cap A}x_n)+d(0,\sum_{n\in F\setminus A}x_n)\leq \varphi_{\bf x}(A)+\varphi_{\bf x}(B)$, it follows that $ \varphi_{\bf x}$ is subadditive and thus is a lscsm. \begin{lema} \label{B-F-sigma} Let $G$ be a Polish abelian group and ${\bf x}=(x_n)_n$ a sequence in $G$. Then $\varphi_{\bf x}$ is a lscsm, $\mathcal{B}({\bf x })=\fin(\varphi_{\bf x})$ and $\B({\bf x})$ is $F_{\sigma}$. \end{lema} \begin{proof} Let $A\in \mathcal{B}({\bf x})$. Then there exists $k>0$ such that for every $\varnothing \neq F\in [A]^{<\omega}$, $$ d\left (0, \sum_{n\in F}x_n \right )\leq k. $$ By the definition of $\varphi_{\bf x}$, we have $\varphi_{\bf x}(A)\leq k$. Hence $A\in \fin(\varphi_{\bf x})$. Conversely, if $A\in \fin(\varphi_{\bf x})$, then there exists $k>0$ such that $\varphi_{\bf x}(A)\leq k$ and, by the definition of $\varphi_{\bf x}$, $A\in \mathcal{B}({\bf x })$. Finally, $\fin(\varphi_{\bf x})=\bigcup_{k\in\N}\{A\subseteq\N:\; \varphi_{\bf x}(A)\leq k\}$ and each of these sets is closed in $2^{\N}$, by the lower semicontinuity of $\varphi_{\bf x}$; hence $\B({\bf x})$ is $F_\sigma$. \end{proof} \bigskip \begin{teo} The following statements are equivalent for any ideal $\mathcal{I}$ on $\N$. \begin{itemize} \item[(i)] $\mathcal{I}$ is $F_\sigma$. \item[(ii)] $\mathcal{I}$ is $\B$-representable in $(\fin,d)$ for some compatible metric $d$ on $\fin$ (as discrete topological group). \item[(iii)] $\mathcal{I}$ is Polish $\B$-representable. \end{itemize} \end{teo} \proof By Lemma \ref{B-F-sigma}, (i) follows from (iii) and clearly (ii) implies (iii). To see that (i) implies (ii), let $\mathcal{I}$ be an $F_\sigma$ ideal on $\N$.
By Theorem \ref{mazur}, there is a lscsm $\varphi$ taking values on $\N\cup\{\infty\}$ such that $\mathcal{I}=\fin(\varphi)$. Then $\mbox{\sf Exh}(\varphi)=\fin$. From the proof of Solecki's Theorem \ref{solecki} we know that the complete metric on $\fin$ given by $d(A,B)=\varphi (A\triangle B)$ is compatible with the group structure of $\fin$. Let $x_n=\{n\}$ and ${\bf x}=(x_n)_n$. We claim that $\mathcal{I}=\B({\bf x})$ in the Polish group $(\fin, d)$. First note that for every $\emptyset\neq F\in \fin$, $ F=\sum_{n\in F}x_n$. Thus \[ d\left (\varnothing, \sum_{n\in F}x_n \right )=\varphi(F). \] By the lower semicontinuity of $\varphi$, we conclude that $\varphi=\varphi_{\bf x}$. Therefore, by Lemma \ref{B-F-sigma}, $\fin(\varphi)=\B({\bf x})$. \qed \subsection{$\B$-representability in Banach spaces} The motivating example of Polish representability is when the group is a Banach space. We rephrase the definitions of $\C({\bf x})$ and $\B({\bf x})$ for the context of a Banach space. Let ${\bf x}=(x_n)_n$ be a sequence in $X$. \begin{itemize} \item $\sum x_n$ converges unconditionally, if $\sum x_{\pi(n)}$ converges for every permutation $\pi:\N \rightarrow \N$. \item $\sum x_n$ is perfectly bounded, if there is $k>0$ such that for all $F\subset \N$ finite, $ \left \| \sum_{n\in F}x_n \right \|\leq k$. \item The lscsm associated to $\bf x$ is given by $\varphi_{\bf x}(\emptyset)=0$ and for $A\subseteq \N$ non empty, we put \begin{equation} \label{fisubx} \varphi_{\bf x}(A)=\sup\{ \|\sum_{n\in F}x_n\|:\; F\subseteq A\; \mbox{is finite non empty}\}. \end{equation} \end{itemize} A motivation for studying $\B({\bf x})$ comes from the next result (part (iii) was not explicitly included but follows from the proof of Theorem 1.3 of \cite{Drewnowski}). \begin{teo}\label{DreLabu} (Drewnowski-Labuda \cite{Drewnowski}) Let $X$ be a Banach space. The following statements are equivalent: \begin{itemize} \item[(i)] $X$ does not contain an isomorphic copy of $c_0$.
\item[(ii)] $\mathcal{C}({\bf{x}})$ is $F_\sigma$ in $2^{\N}$ for each sequence ${\bf{x}}$ in $X$. \item[(iii)] $\mathcal{C}({\bf{x}})=\mathcal{B}({\bf{x}})$ for each sequence ${\bf{x}}$ in $X$. \end{itemize} \end{teo} When working in Banach spaces, Theorem \ref{Polish C-repre} is strengthened as follows. \begin{teo}\label{C-repre} (Borodulin et al \cite{Borodulinetal2015}) Let $\mathcal{I}$ be an ideal on $\N$. The following are equivalent: \begin{itemize} \item[(i)] $\mathcal{I}=\mbox{\sf Exh}(\varphi)$ for a non-pathological lscsm $\varphi$. \item[(ii)] $\mathcal{I}=\C({\bf x})$ for some sequence ${\bf x}=(x_n)_n$ in a Banach space. \end{itemize} \end{teo} The proof of the previous result also provides a characterization of $\B$-representability on Banach spaces, as we show below. Since $l_\infty$ contains isometric copies of all separable Banach spaces, we have the following (already used in \cite{Borodulinetal2015} for $\C({\bf x})$). \begin{prop} \label{linfinito} Let $X$ be a Banach space and ${\bf x}=(x_n)_n$ be a sequence in $X$. There is ${\bf y}=(y_n)_n$ in $l_\infty$ such that $\B({\bf x})=\B({\bf y})$. \end{prop} From now on we work only with $l_\infty$ (or $c_0$); this assumption implies that $\varphi_{\bf x}$ has the important extra feature of being non-pathological. It is convenient to assume that the vectors $x_n\in l_\infty$ used in the representation of an ideal have non-negative terms. The following result was proved in \cite{Borodulinetal2015} for $\C({\bf x})$; a similar argument works for $\B({\bf x})$. \begin{lema} \label{Banach10} Let ${\bf x}=(x_n)_n$ be a sequence in $l_\infty$. If ${\bf x'}=(x'_n)$ is given by $x'_n(k)=|x_n(k)|$ for each $n,k\in \N$, then $\mathcal{B}({\bf x})=\mathcal{B}({\bf x'})$. \end{lema} \begin{proof} Since $\left \| \sum_{n\in F}x_n \right \|_\infty\leq \left \| \sum_{n\in F}x'_{n} \right \|_\infty$, for any finite set $F$, $\mathcal{B}({\bf x'})\subseteq \mathcal{B}({\bf x})$.
To check the other inclusion, let $A\notin \mathcal{B}({\bf x'})$. Thus for every $k>0$, there is $F_k\in [A]^{<\omega}$ such that $$ \left \| \sum_{n\in F_k}x'_n \right \|_\infty >k. $$ Hence there is $m\in \N$ such that $$ \sum_{n\in F_k}x'_n(m)>k. $$ Let $F_k^{+}=\{n\in F_k: x_n(m)>0\}$ and $F_k^{-}=\{n\in F_k: x_n(m)\leq 0\}$. Thus, $$ \sum_{n\in F_k^{+}}x'_n(m) >\frac{k}{2}\;\;\; \text{ or } \;\;\; \sum_{n\in F_k^{-}}x'_n(m) >\frac{k}{2}. $$ Hence, $$ \sum_{n\in F_k^{+}}x_n(m) > \frac{k}{2}\;\;\; \text{ or } \;\;\; -\sum_{n\in F_k^{-}}x_n(m) >\frac{k}{2}. $$ Therefore, $$ \left \| \sum_{n\in F_k^{+}}x_n \right \|_\infty >\frac{k}{2} \;\;\; \text{ or } \;\;\; \left \| \sum_{n\in F_k^{-}}x_n \right \|_\infty >\frac{k}{2}, $$ for all $k>0$. Hence $A\notin \mathcal{B}({\bf{x}})$. \end{proof} \bigskip Now we recall why the lscsm $\varphi_{\bf x}$ given by \eqref{fisubx} is non-pathological when working on $l_\infty$. Let ${\bf{x}}=(x_n)_n$ be a sequence in $l_\infty$ and assume that $x_n(k)\geq 0$ for all $n,k\in \N$. Define a sequence of measures as follows. For $A\subseteq \N$ and $k\in \N$, let $$ \mu_k(A)=\sum_{n\in A}x_n(k). $$ Let $\psi=\sup_k \mu_k$; thus $\psi$ is a non-pathological lscsm. Notice that $\psi(\{n\})= \|x_n\|_\infty$ for all $n$ and, more generally, for each $F\subseteq\N$ finite we have $$ \psi(F)=\left \| \sum_{n\in F}x_n \right \|_\infty. $$ Since $\psi$ is monotone, $\psi(F)=\varphi_{\bf{x}} (F)$ for every finite set $F$. Therefore $\psi=\varphi_{\bf{x}}$. \begin{teo} Let ${\bf{x}}=(x_n)_n$ be a sequence in $l_\infty$ with $x_n\geq 0$ for all $n$. Then $\varphi_{\bf{x}}$ is non-pathological and \begin{itemize} \item[(i)] $\mathcal{C}({\bf{x}})=\mbox{\sf Exh}(\varphi_{\bf{x}})$. \item[(ii)] $\mathcal{B}({\bf{x}})=\fin(\varphi_{\bf{x}})$. \end{itemize} \end{teo} \proof (i) follows from the proof of Theorem 4.4 of \cite{Borodulinetal2015}. (ii) follows from Lemma \ref{B-F-sigma}.
\endproof Conversely, given a non-pathological lscsm $\varphi$, say $\varphi=\sup_k \mu_k$, where $\mu_k$ is a measure for each $k$, we associate to it a sequence ${\bf{x}}_\varphi=(x_n)_n$ of elements of $l_\infty$ as follows: Given $n\in \N$, let $$ x_n=(\mu_0(\{n\}),\ldots ,\mu_k(\{n\}),\ldots ). $$ Notice that $\| x_n\|_\infty= \varphi(\{n\})$ for all $n$. For each $F\in \fin$, we have $$ \varphi(F)=\sup\{\mu_k(F): \;k\in \N\}=\sup\{\sum_{n\in F}\mu_k(\{n\}): \;k\in \N\}=\sup\{\sum_{n\in F}x_n(k): \;k\in \N\}=\left \| \sum_{n\in F}x_n \right \|_\infty. $$ In other words, $\varphi$ and $\varphi_{{\bf x}_\varphi}$ agree on finite sets and thus, by lower semicontinuity, $\varphi=\varphi_{{\bf x}_\varphi}$. The next result follows from \cite[Theorem 4.4]{Borodulinetal2015} and the above discussion. \begin{teo} Let $\varphi$ be a non-pathological lscsm. Then \begin{itemize} \item[(i)] $\mathcal{C}({\bf{x}}_\varphi)=\mbox{\sf Exh}(\varphi)$. \item[(ii)] $\mathcal{B}({\bf{x}}_\varphi)=\fin(\varphi)$. \end{itemize} \end{teo} The following theorem is analogous to Theorem \ref{C-repre} but for $\B$-representability. \begin{teo} \label{TeoremaB} An ideal $\mathcal{I}$ is $\mathcal{B}$-representable in a Banach space if, and only if, there is a non-pathological lscsm $\varphi$ such that $\mathcal{I}=\fin(\varphi)$. \end{teo} \begin{proof} Suppose $\mathcal{I}$ is a $\mathcal{B}$-representable ideal in a Banach space. By Lemma \ref{linfinito}, we can assume that $\mathcal{I}$ is $\mathcal{B}$-representable in $l_\infty$. Let ${\bf{x}}=(x_n)_n$ be a sequence in $l_\infty$ such that $\mathcal{I}=\mathcal{B}({\bf{x}})$. By Lemma \ref{Banach10} we assume that $x_n\geq 0$ for all $n$. Now the result follows from the previous considerations where we have shown that $\B({\bf x})=\fin(\varphi_{\bf x})$. Conversely, if $\varphi$ is non-pathological, we have shown above that $\fin(\varphi)=\B({\bf x}_\varphi)$.
\end{proof} Now we present an example of an ideal which is $\mathcal{B}$-representable in $c_0$ and is not a $P$-ideal; in particular, by Theorem \ref{Polish C-repre}, it is not $\mathcal C$-representable in any Polish group. \begin{ex} \label{ejem-B-no-C} {\em $\fin \times \{\varnothing \}$ is $\mathcal{B}$-representable in $c_0$. Recall that $\fin\times\{\emptyset\}$ is defined by letting $A\in \fin \times \{\varnothing \}$ iff there is $k$ such that $A\subseteq B_0\cup\cdots\cup B_k$, where $(B_n)_n$ is a partition of $\N$ into infinitely many infinite sets. Let ${\bf{x}}=(x_n)_n$ be given by $x_n=me_n$, for $n\in B_m$, where $(e_n)_n$ is the usual base for $c_0$. We will show that $\fin \times \{\varnothing \}=\mathcal{B}({\bf{x}})$. It is well known, and easy to verify, that $\fin \times \{\varnothing \}$ is not a $P$-ideal. To see that $\fin \times \{\varnothing \} \subseteq \mathcal{B}({\bf{x}})$ it suffices to show that $B_m\in \B({\bf{x}})$ for every $m$. In fact, let us observe that for any $F\subseteq B_m$ finite, we have $$ \left \| \sum_{i\in F} x_i \right \|_\infty=\left \| \sum_{i\in F} me_i \right \|_\infty =m. $$ Thus $\sum_{i\in B_m}x_i$ is perfectly bounded. Conversely, let $A\not \in \fin \times \{\varnothing \}$; we show that $\sum_{n\in A}x_n$ is not perfectly bounded. There is $S\subseteq \N$ infinite and $m_i\in \N$ such that $m_i\in A\cap B_{i}$ for all $i\in S$. Notice that $\|x_{m_i}\|_\infty=\|ie_{m_i}\|_\infty=i$ for every $i\in S$. We claim that $\{m_i:\; i\in S\}\not\in\B({\bf x})$. Indeed, for a finite $F\subseteq S$, we have $\|\sum_{i\in F}x_{m_i}\|_\infty=\max(F)$. Now we calculate $\varphi_{\bf x}$ and show that it is the same lscsm presented in Example \ref{finxvacio}. For any $F\subseteq \N$ finite, we have that $\|\sum_{n\in F}x_n\|_\infty=\max\{m: \; F\cap B_m\neq \emptyset\}$. Given $A\subseteq \N$, we have \[ \varphi_{\bf x}(A)=\sup\{\varphi_{\bf x}(F): F\in [A]^{<\infty}\}=\sup\{\|\sum_{n\in F}x_n\|_\infty: F\in [A]^{<\infty}\}=\sup\{m: A\cap B_m\neq \emptyset\}.
\] The ideal $\fin\times\{\emptyset\}$ also admits a representing sequence in $l_\infty$ which is bounded. In Example \ref{finxvacio} we defined a non-pathological lscsm $\psi$ such that $\fin\times\{\emptyset\}=\B({\bf x}_\psi)$ and an easy computation shows that ${\bf x}_\psi$ is bounded by $1$. \qed } \end{ex} The following is a reformulation of Question \ref{absolut-patologico}. \begin{question} \label{non-B-repre}Is there an $F_\sigma$ ideal which is not $\B$-representable in a Banach space? \end{question} \subsection{$\B$-representation on $c_0$} In this subsection we show a characterization of representability on $c_0$. We first show that, in general, we may assume that the representing sequence is $w^*$-null; however, we will see later that it cannot always be taken to be weakly null. \begin{prop} \label{w*null} Let ${\bf x}=(x_n)_n$ be a bounded sequence in $l_\infty$. Then there is a $w^*$-null sequence ${\bf y}=(y_n)_n$ in $l_\infty$ such that $\B({\bf x})=\B({\bf y})$ and $\|x_n\|_\infty=\|y_n\|_\infty$ for all $n$. \end{prop} \proof From Lemma \ref{Banach10} we can assume that $x_n(k)\geq 0$ for all $n$ and $k$ and hence $\varphi_{\bf x}$ is a non-pathological lscsm. Hence by Proposition \ref{med-sop-fin}, there is a sequence $(\mu_k)_k$ of finitely supported measures such that $\varphi_{\bf x}=\sup_k\mu_k$. Let $$ y_n=(\mu_0(\{n\}),\ldots ,\mu_k(\{n\}),\ldots ) $$ and ${\bf y}=(y_n)_n$. Notice that $\varphi_{\bf x}=\varphi_{\bf y}$ and we have seen before that $\B({\bf x})=\fin(\varphi_{\bf x})=\fin(\varphi_{\bf y})=\B({\bf y})$. We have $\|x_n\|_\infty=\varphi_{\bf x}(\{n\})=\sup_k|\mu_k(\{n\})|=\|y_n\|_\infty$ for all $n$. Finally, notice that $y_n(k)\neq 0$ iff $n\in supp (\mu_k)$, so $y_n(k)\to 0$ as $n\to \infty$ for each fixed $k$; since $(y_n)_n$ is bounded, it is $w^*$-null. \endproof Next we present a sufficient condition for having $\B({\bf x})=\B({\bf y})$. \begin{prop} \label{equalB} Let ${\bf x}=(x_n)_n$ and ${\bf y}=(y_n)_n$ be two sequences in $l_\infty$.
If $\sum_n\| y_n- x_n \|_\infty <\infty$, then $\B({\bf{x}})= \B({\bf{y}})$. In particular, for every ${\bf x}=(x_n)_n$ in $l_\infty$ there is ${\bf y}=(y_n)_n$ in $l_\infty$ such that each $y_n$ takes finitely many values and $\B({\bf x})=\B({\bf y})$. \end{prop} \begin{proof} Let $F\subseteq \N$ be finite. Then \[ \left \| \sum_{n\in F}y_n \right \|_\infty \leq \left \| \sum_{n\in F}(y_n -x_n) \right \|_\infty + \left \| \sum_{n\in F}x_n \right \|_\infty \leq \sum_{n}\left \| y_n -x_n \right\|_\infty + \left \| \sum_{n\in F}x_n \right \|_\infty. \] From this we have that $\B({\bf x})\subseteq \B({\bf y})$ and, by symmetry, $\B({\bf y})\subseteq \B({\bf x})$. For the last claim, by Lemma \ref{Banach10} we may assume that $x_n(k)\geq 0$ for all $n$ and $k$. Let $y_n(k)=\lfloor 2^n x_n(k)\rfloor/2^n$. Then each $y_n$ takes finitely many values, since every $y_n(k)$ is a multiple of $2^{-n}$ lying in $[0,\|x_n\|_\infty]$, and $\|x_n-y_n\|_\infty\leq 2^{-n}$ for all $n$. \end{proof} \begin{rem} \label{equalB2} {\em We have seen in section \ref{BS} that to each non-pathological submeasure $\varphi=\sup_k \mu_k$ we can associate to each $n$ a partition $\mathcal{P}_n=\{A^n_i:\; i\in \N\}\cup\{A^n_\infty\}$ of $\N$. For $\B$-represented ideals we have, for $n,i\in \N$ \[ A^n_i=\left\{k\in \N: \; \frac{1}{2^{i+1}}\leq x_n(k) <\frac{1}{2^{i}}\right\} \] and \[ A^n_\infty=\{k\in\N:\; x_n(k)=0\}. \] The previous result says that we can assume without loss of generality that for each $n$ there are only finitely many $i$ such that $A^n_i$ is not empty. } \end{rem} The following fact is motivated by an analogous result for $\C$-representability on $c_0$ (see \cite[Proposition 5.3]{Borodulinetal2015}). It says that for any ideal which is representable in $c_0$, the representing sequence can be found in $c_{00}$. Even though the proof is essentially the same as that in \cite[Proposition 5.3]{Borodulinetal2015} we include it for the sake of completeness. \begin{teo} \label{rep-in-c0} Let $\mathcal{I}$ be an ideal on $\N$.
The following statements are equivalent. \begin{itemize} \item[(i)] $\mathcal{I}$ is $\B$-representable in $c_0$. \item[(ii)] There is a sequence ${\bf y}=(y_n)_n$ in $c_{00}$ such that $\mathcal{I}=\B({\bf y})$. \item[(iii)] There is a sequence of measures $(\mu_k)_k$ such that $\mathcal{I}=\fin(\varphi)$ where $\varphi=\sup_k \mu_k$ and $\{k\in \N:\; m\in supp(\mu_k)\}$ is finite for every $m$. \end{itemize} \end{teo} \begin{proof} $(i)\Rightarrow (ii)$: Let ${\bf x}=(x_n)_n$ be a sequence in $c_0$ such that $\mathcal{I} =\B({\bf x})$. By Lemma \ref{Banach10} we may assume that $x_n(k)\geq 0$ for all $n$ and $k$. For each $n$ pick $k_n$ such that $x_n(k)<2^{-n}$ for all $k> k_n$. Let $y_n(k)=x_n(k)$ for $0\leq k\leq k_n$, $y_n(k)=0$ for $k>k_n$, and ${\bf y}=(y_n)_n$. As $\sum_n\| y_n- x_n \|_\infty <\infty$, by Proposition \ref{equalB}, we have $\B({\bf x})=\B({\bf y})$. $(ii)\Rightarrow (iii)$: Use the measures associated to $\varphi_{\bf y}$, namely, $\mu_k(A)=\sum_{n\in A}y_n(k)$. Notice that $m\in supp(\mu_k)$ iff $y_m(k)\neq 0$. $(iii)\Rightarrow (ii)$: Let $\varphi$ be as in the hypothesis of (iii); then ${\bf x}_\varphi$ satisfies (ii). Finally, $(ii)\Rightarrow (i)$ is immediate, since $c_{00}\subseteq c_0$. \end{proof} \bigskip \section{Tallness of $\B({\bf x})$} \label{tallnessB} In this section we address the tallness of $\B({\bf x})$. It is easy to check that $\C({\bf x})$ is tall iff $\|x_n\|\to 0$. We show that the tallness of $\B({\bf x})$ is related to the weak topology. A classical characterization of perfect boundedness is as follows (see, for instance, \cite[Lemma 2.4.6]{albiac-kalton}). \begin{prop} \label{wuc} A series $\sum_nx_n$ in a Banach space is perfectly bounded iff $\sum_n |x^*(x_n)|<\infty$ for all $x^*\in X^*$. \end{prop} From this we get the following. \begin{prop} \label{tall-wn} Let ${\bf{x}}=(x_n)_n$ be a sequence in a Banach space $X$. If $\mathcal{B}({\bf x})$ is tall, then ${\bf x}=(x_n)_n$ is weakly null. \end{prop} \begin{proof} Suppose $\B({\bf x})$ is tall and $(x_n)_n$ is not weakly null.
Then there is $A\subseteq \N$ infinite and $x^*\in X^*$ such that $\inf_{n\in A}x^*(x_n)>0$. By tallness, there is an infinite $B\subseteq A$ such that $\sum_{n\in B}x_n$ is perfectly bounded. Since $\sum_{n\in B}|x^*(x_n)|=\infty$, this contradicts Proposition \ref{wuc}. \end{proof} We can translate property A to the context of Banach spaces as follows. We say that ${\bf{x}}=(x_n)_n$ has {\em property A} if $\{n\in \N:\; \|x_n\|>\varepsilon\}\in \B({\bf x})$ for all $\varepsilon>0$. Notice that ${\bf x}$ has property A iff $\varphi_{\bf x}$ has property A. Thus from Proposition \ref{propAtall} we have the following. \begin{prop} If ${\bf{x}}=(x_n)_n$ is a sequence in $l_\infty$ with property A, then $\B({\bf{x}})$ is tall. In particular, if $\|x_n\|_\infty \to 0$, then $\B({\bf{x}})$ is tall. \end{prop} Below we provide an example of a sequence ${\bf x}=(x_n)_n$ in $l_\infty$ such that ${\bf x}$ has property $A$ and $\|x_n\|_\infty\not\to 0$, and another for which $\B({\bf{x}})$ is tall but ${\bf x}$ does not have property $A$. \begin{ex} \label{ex-propertyA} {\em Let $\varphi$ be the lscsm defined in part (a) of Example \ref{ejemadecuada}. We showed there that $\varphi$ is non-pathological, has property A, and $\varphi(\{n\})\not\to 0$. Therefore $\B({\bf x}_\varphi)$ is tall. Since $\|x_n\|_\infty = \varphi (\{n\})$, we have $\|x_n\|_\infty\not\to 0$. } \qed \end{ex} \begin{ex} \label{tall-no-A} {\em We have seen in Example \ref{ED} that $\mathcal{ED}_{fin}$ is a tall ideal which can be represented by a non-pathological lscsm. Thus ${\mathcal{ED}}_{fin}$ is $\B$-representable. However, it follows from Example \ref{edfin} and Proposition \ref{favor0} that there is no ${\bf x}=(x_n)_n$ in $l_\infty$ with property $A$ such that ${\mathcal{ED}}_{fin} =\B({\bf x})$. \qed } \end{ex} For a sequence ${\bf x}=(x_n)_n$, we have the following implications: \[ (x_n)_n \; \mbox{is $\|\cdot \|$-null} \;\Rightarrow (x_n)_n\;\mbox{has property $A$}\;\Rightarrow \B({\bf x})\; \mbox{is tall}\;\Rightarrow (x_n)_n\; \mbox{is weakly null}.
\] \medskip In the next section, we analyze the particular case of $c_0$, where the last implication is an equivalence. \subsection{Borel selectors for $\B({\bf x})$ in $c_0$} We use the classical Bessaga-Pelczynski selection principle to show that if ${\bf x}=(x_n)_n$ is a weakly null sequence in $c_0$, then $\B({\bf x})$ admits a Borel selector. We state this principle only for $c_0$ to avoid introducing the notion of a basic sequence. We recall its proof in order to verify that the process of selecting the subsequence can be done in a Borel way. We follow the presentation given in \cite{albiac-kalton}. From its proof we have isolated the following selection principle. \begin{lema} \label{selection} Let $X$ be a Banach space, let $S_n:X\to X$ be continuous functions for all $n$, let $(x_n)_n$ be a sequence in $X$ and let $\alpha>0$. Suppose \begin{itemize} \item[(i)] $x_m=\lim_n S_n(x_m)$ for all $m$. \item[(ii)] $\lim_m S_n(x_m)=0$ for all $n$. \end{itemize} Then, there is a continuous function $F:[\N]^{\omega} \to [\N]^{\omega} \times {\N}^{\N}$ such that, if $F(A)=(B,(n_k)_k)$, then (1) $B\subseteq A$. (2) Let $\{b_k:\; k\in \N\}$ be the increasing enumeration of $B$. Then \[ \|x_{b_k}-S_{n_k}(x_{b_k})\| < \alpha/2^{k+1} \] and \[ \|S_{n_k}(x_{b_{k+1}})\|< \alpha/2^{k+1} \] for all $k$. \end{lema} \proof Fix $A\subseteq\N$ infinite and let $b_1=\min(A)$. By (i), there is $n_1$ such that \[ \|x_{b_1} - S_{n_1}(x_{b_1})\|< \alpha /2^4. \] Since $\lim_k S_{n_1}(x_{k})=0$, let $b_2$ be the minimal element of $A$ with $b_1<b_2$ and such that \[ \|S_{n_1}(x_{b_2})\|< \alpha /2^5. \] By (i), let $n_2$ be such that \[ \|x_{b_2} - S_{n_2}(x_{b_2})\|< \alpha /2^5. \] Recursively, let $b_{i+1}$ be the least $b\in A$ such that $b_i< b$ and \[ \|S_{n_{i}}(x_{b})\|< \alpha /2^{i+4}. \] Now let $n_{i+1}$ be such that \[ \| x_{b_{i+1}} - S_{n_{i+1}}(x_{b_{i+1}})\|< \alpha /2^{i+4}. \] Let $F(A)=(\{b_k:\;k\in \N\}, (n_k)_k)$.
By the way the $b_{k}$'s were selected, $F$ is easily seen to be continuous. \endproof To state the Bessaga-Pelczynski selection principle we need to recall some notions. Let ${\bf e}=(e_n)_n$ be the standard basis of $c_0$, i.e. $e_n(n)=1$ and $e_n(i)=0$ for $i\neq n$. Clearly $\sum_n e_n$ is perfectly bounded and thus $\B({\bf e})=\mathcal{P}(\N)$. A sequence $(u_n)_n$ is called a {\em block sequence} of $(e_n)_n$, if there are a strictly increasing sequence $(p_k)_k$ in $\N$ with $p_0=0$ and a sequence of reals $(a_j)_j$ such that \[ u_n=\sum_{j=p_{n-1}+1}^{p_n}a_je_j. \] The closed linear span of a sequence $(x_n)_n$ is denoted by $[x_n]$. Two sequences $(x_n)_n$ and $(y_n)_n$ are called {\em congruent}, if there is an invertible continuous linear operator $T:[x_n]\to [y_n]$ such that $T(x_n)=y_n$ for all $n$. \begin{lema} \label{pb} Let $(x_n)_n$ and $(y_n)_n$ be sequences in $c_0$. (i) If $(x_n)_n$ is a bounded block sequence of $(e_n)_n$, then $(x_n)_n$ is perfectly bounded. (ii) If ${\bf x}=(x_n)_n$ and ${\bf y}=(y_n)_n$ are congruent, then $\B({\bf x})=\B({\bf y})$. \end{lema} \proof Straightforward. \endproof \bigskip \begin{teo} \label{bessaga-P} (Bessaga-Pelczynski selection principle) Let $(e_n)_n$ be the standard basis of $c_0$. Suppose $(x_n)_n$ is a weakly null sequence in $c_0$ such that $\inf_n\|x_n\|_\infty> 0$. For every infinite $A\subseteq\N$, there is an infinite $B\subseteq A$ such that $(x_{k})_{k\in B}$ is congruent with some block sequence $(y_k)_k$ of $(e_n)_n$. Moreover, the map $A\mapsto B$ can be chosen to be continuous as a map from $[\N]^\omega$ to $[\N]^\omega$. \end{teo} \proof Let $S_n:c_0\to c_0$ be the $n$th partial sum operator, that is, if $x=\sum_{j=1}^\infty a_je_j$, then \[ S_n(x)=\sum_{i=1}^n a_ie_i. \] It is clear that the sequence $(S_n)_n$ satisfies (i) of Lemma \ref{selection}, and it also satisfies (ii) as $(x_n)_n$ is weakly null. Let $\alpha=\inf_n \|x_n\|_\infty$ and $A\subseteq \N$ be an infinite set.
Let $F(A)=(B, (n_k)_k)$ be given by Lemma \ref{selection}. Let $B=\{b_k:\; k\in \N\}$; then \[ \|S_{n_{i}}(x_{b_{i+1}})\|< \alpha /2^{i+4} \] and \[ \| x_{b_{i+1}} - S_{n_{i+1}}(x_{b_{i+1}})\|< \alpha /2^{i+4}. \] Let $y_k= S_{n_{k}}(x_{b_{k}})-S_{n_{k-1}}(x_{b_{k}})$ for $k\geq 1$, where $n_0=0$ and $S_0=0$. Then $(y_k)_k$ is a block sequence of $(e_n)_n$. Now we show that $(x_{b_k})_k$ and $(y_k)_k$ are congruent. First \[ \|x_{b_k}-y_k\|=\|x_{b_k}-S_{n_{k}}(x_{b_{k}}) + S_{n_{k-1}}(x_{b_{k}})\|\leq \alpha /2^{k+3}+ \alpha /2^{k+3}=\alpha /2^{k+2} \] and \[ \|y_k\|\geq \alpha-\alpha/2^{k+2}. \] Thus \[ \sum_{k=1}^\infty \frac{\|x_{b_k}-y_k\|}{\|y_k\|}\leq \sum_{k=1}^\infty \frac{1}{2^{k+2}-1}<\frac{1}{2}. \] Therefore, by \cite[Theorem 1.3.9]{albiac-kalton}, $(x_{b_k})_k$ and $(y_k)_k$ are congruent. \endproof \begin{teo} \label{c0-tall} Let ${\bf x}=(x_n)_n$ be a weakly null sequence in $c_0$. Then $\B({\bf x} )$ is tall and has a Borel selector. \end{teo} \proof We consider two cases. (i) Suppose $\|x_n\|\to 0$ and let $A\subseteq\N$ be infinite. By an easy recursion pick an increasing sequence $b_n\in A$ such that $\| x_{b_n}\| \leq 2^{-n}$. Then $B=\{b_n: n\in \N\}\in \B({\bf x})$. Clearly $B$ is obtained continuously from $A$. (ii) Suppose $\|x_n\|\not\to 0$. We describe a continuous selector for $\B({\bf x})$. Let $A\subseteq \N$ be infinite. There are two cases to be considered. (a) There is an infinite $B=\{b_1<b_2<\cdots\}\subseteq A$ such that $\|x_{b_n}\|\leq 2^{-n}$ for all $n$. Then we proceed as in (i). (b) Otherwise, after discarding finitely many indices, we may assume that $\inf\{\|x_n\|_\infty : \;n\in A\}>0$. By Theorem \ref{bessaga-P}, there is a continuous function $F:[A]^{\omega} \to [A]^\omega$ such that for all infinite $D\subseteq A$, $F(D)\subseteq D$ and, letting $B=F(D)$, $(x_n)_{n\in B}$ is congruent to a block sequence $(y_n)_n$ of the standard basis $(e_n)_n$ of $c_0$. By Lemma \ref{pb}, $(y_n)_n$ is perfectly bounded and thus $(x_n)_{n\in B}$ is also perfectly bounded.
We leave it to the reader to verify that the proof of Theorem \ref{bessaga-P} guarantees that this selection process depends on $A$ in a Borel way. \endproof \begin{rem}{\em We can also use Proposition \ref{c0like} to give a different proof of Theorem \ref{c0-tall}. Let ${\bf x}=(x_n)_n$ be a weakly null sequence in $c_{00}$. Let $\varphi_{\bf x}$ be the non-pathological lscsm given by \eqref{fisubx}. It is not difficult to verify that the associated sequence of partitions $\{A^n_i:\; i\in\N\}$, for $n\in \N$, satisfies \eqref{coloring2b} in Proposition \ref{c0like}. Thus, $\B({\bf x})$ is tall and has a Borel selector. } \end{rem} It is not true in general that $\B({\bf x})$ is tall for every weakly null sequence ${\bf x}= (x_n)_n$ in a Banach space $X$. For instance, this fails in $l_2$, since $l_2$ does not contain copies of $c_0$: we have $\B({\bf x})=\fin$ for ${\bf x}=(x_n)_n$ the usual basis of $l_2$ (which is weakly null). Such examples also occur in $l_\infty$, as this space contains isomorphic copies of every separable Banach space. However, we have the following reformulation of Question \ref{nonpatho-selector}. \begin{question} Let ${\bf x}= (x_n)_n$ be a sequence in $l_\infty$ such that $\B({\bf x})$ is tall. Does it have a Borel selector? \end{question} No characterization of $\C$-representability in $c_0$ is known (\cite[Question 5.10]{Borodulinetal2015}). So, it is natural to also ask the following. \begin{question} Which ideals are $\B$-representable in $c_0$? \end{question} \noindent{\bf Acknowledgment.} We thank Michael Rinc\'on Villamizar for some comments about an early version of this paper and for bringing to our attention the article \cite{Drewnowski}.
https://arxiv.org/abs/1902.00441
A Nonlocal Functional Promoting Low-Discrepancy Point Sets
Let $X = \left\{x_1, \dots, x_N\right\} \subset \mathbb{T}^d \cong [0,1]^d$ be a set of $N$ points in the $d-$dimensional torus that we want to arrange as regularly as possible. The purpose of this paper is to introduce a curious energy functional $$ E(X) = \sum_{1 \leq m,n \leq N \atop m \neq n} \prod_{k=1}^{d}{ (1 - \log{\left(2 \sin{ \left( \pi |x_{m,k} - x_{n,k} |\right)} \right)})}$$ and to suggest that moving a set $X$ into the direction $-\nabla E(X)$ may have the effect of increasing regularity of the set in the sense of decreasing discrepancy. We numerically demonstrate the effect for Halton, Hammersley, Kronecker, Niederreiter and Sobol sets. Lattices in $d=2$ are critical points of the energy functional; some (possibly all) are strict local minima.
\section{Introduction} \subsection{Introduction.} This paper is partially motivated by earlier results about how to distribute points on a manifold in a regular way. One idea (from \cite{sachs, stein}) is to not construct these points a priori but instead use (local) minimizers of an energy functional. For example, suppose we want to distribute $N$ points on the two-dimensional torus $\mathbb{T}^2$ in a way that is good for numerical integration. One way of doing this is by trying to find local minimizers of the energy functional $$ F(X) = \sum_{1 \leq m,n \leq N \atop m \neq n}{ e^{- cN^{-1} \|x_m - x_n\|^2}},$$ where $c \sim 1$ is a constant. These point configurations are empirically shown \cite{sachs} to be better at integrating trigonometric polynomials than commonly used classical constructions, the reason for that being a connection between the Gaussian and the heat kernel (which, in itself, can be interpreted as a mollifier in Fourier space dampening high oscillation). This method is also geometry independent and works on general compact manifolds (with $\|x_m -x_n\|$ replaced by the geodesic distance). \subsection{The problem.} We were curious whether there is any way to proceed similarly in the problem of finding low-discrepancy sets of points. Suppose $X \subset [0,1]^2$ is a set $\left\{x_1, \dots, x_N\right\}$ of $N$ distinct points. A classical question is how to arrange them so as to minimize the star discrepancy $D_N^*(X)$ defined by $$ D^*_N(X) = \max_{0 \leq x,y \leq 1} \left| \frac{ \# \left\{1 \leq i \leq N: x_{i,1} \leq x \wedge x_{i,2} \leq y\right\}}{N} - xy \right|.$$ A seminal result of Schmidt \cite{sch} is $$D_N^* \gtrsim \frac{\log{N}}{N}.$$ Many constructions of sets are known that attain this growth; we refer to the classical textbooks by Dick \& Pillichshammer \cite{dick}, Drmota \& Tichy \cite{drmota} and Kuipers \& Niederreiter \cite{kuipers} for descriptions.
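The maximum in the definition of $D_N^*$ can be computed directly for small two-dimensional point sets: the counting function only changes at the coordinates of the points, so the supremum over anchored boxes is attained at corners taken from the coordinate grid, checking both closed boxes and one-sided limits of open boxes. The following brute-force $\mathcal{O}(N^3)$ sketch (illustrative code with a hypothetical helper name, not from the paper) implements this idea.

```python
def star_discrepancy_2d(pts):
    """Star discrepancy D_N^* of a finite point set in [0,1]^2.

    The supremum over boxes [0,x] x [0,y] is attained at corners (x, y)
    drawn from the coordinates of the points (together with 1.0); we
    compare the box area x*y with the normalized point count, both for
    closed boxes and for the one-sided limits of open boxes."""
    n = len(pts)
    xs = sorted({p[0] for p in pts} | {1.0})
    ys = sorted({p[1] for p in pts} | {1.0})
    best = 0.0
    for x in xs:
        for y in ys:
            closed = sum(a <= x and b <= y for a, b in pts) / n
            opened = sum(a < x and b < y for a, b in pts) / n
            best = max(best, abs(closed - x * y), abs(opened - x * y))
    return best
```

For the single-point set $\{(1/2,1/2)\}$ this returns $3/4$, the closed box $[0,1/2]^2$ being the worst offender.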
Some of the classical configurations are also used as examples in this paper. The problem is famously unsolved in higher dimensions, where the best known constructions \cite{halton, hammersley, nied, niederreiter, sobol} satisfy $D_N \lesssim (\log{N})^{d-1} N^{-1}$ but no matching lower bound is known (see \cite{bil1, bil3, bil4}). Indeed, there is not even consensus as to whether the best known constructions attain the optimal growth or whether there might be more effective constructions in $d \geq 3$. \begin{figure}[h!] \begin{minipage}[l]{.49\textwidth} \includegraphics[width = 5cm]{pic7.pdf} \end{minipage} \begin{minipage}[r]{.49\textwidth} \includegraphics[width = 5cm]{pic8.pdf} \end{minipage} \caption{Left: 50 points of a Niederreiter sequence with $D_N^* \sim 0.082$. Right: gradient flow produces a (similar) set, $D_N^* \sim 0.061$.} \end{figure} We were interested in whether it is possible to assign a notion of 'energy' to a set of points that vaguely corresponds to discrepancy, in the sense that perturbations of the points which decrease the energy also often decrease the discrepancy. What would be of interest is a notion of energy that is \begin{enumerate} \item fast to compute, \item often helpful in improving existing point sets, \item and may have the potential to lead to new constructions. \end{enumerate} We believe this question to be of some interest. The purpose of this paper is to derive one functional that seems to work very well in practice. Indeed, it works strikingly well: when applied to the classical low-discrepancy constructions, it always seems to further decrease discrepancy (though sometimes, when the sets are already well distributed, only by very little). We provide a heuristic explanation in \S 3.3. There might be many other such functionals (possibly related to different kinds of mathematics, e.g. \cite{ba, mo, os}) and we believe that constructing and understanding them could be quite interesting indeed.
\begin{quote} \textbf{Open Problem.} Construct other energy functionals whose gradient flow has a beneficial effect on discrepancy. What can be rigorously proven? Can they be used for numerical integration? How do they scale in the dimension? \end{quote} \subsection{Related results.} We emphasize that the open problem stated in \S 1.2 is \textit{wide} open. In particular, we do not claim that our energy functional is necessarily the most effective one. Our functional certainly seems natural in light of our derivation; moreover, the author recently used it \cite{steind} to define sequences $(x_n)_{n=1}^{\infty}$ whose discrepancy seems to be extremely good when compared to classical sequences (however, the only known bound for these sequences is currently $D_N \lesssim N^{-1/2} \log{N}$). Nonetheless, there may be other functionals that are as natural and even more efficient. As an example of another functional that could be of interest, we mention Warnock's formula \cite[Lemma 2.14]{mat} for the $L^2-$discrepancy $$ L^2(X)^2 = \frac{1}{3^d} - \frac{2}{N}\sum_{n=1}^{N}{ \prod_{i=1}^{d}{ \frac{1-x_{n,i}^2}{2}}} + \frac{1}{N^2} \sum_{n,m=1}^{N} \prod_{i=1}^{d} \min(1-x_{n,i}, 1-x_{m,i}).$$ This could be used to define a gradient flow (where one has to be a bit careful with the non-differentiability of the minimum). A similar construction is presumably possible at a much greater level of generality by using integration formulas in reproducing kernel Hilbert spaces (see e.g. \cite{fritz}). We recall that, if we sample at the points $(x_j)_{j=1}^{n}$ with weights $(w_j)_{j=1}^{n}$, then the worst case error in a reproducing kernel Hilbert space is given by the formula \begin{align*} \mbox{worst-case error} &= \sum_{i,j=1}^{n}{w_i w_j K(x_i, x_j)} - 2 \sum_{j=1}^{n}{w_j} \int_{\Omega} K(x_j, y) d\mu(y) \\ &+ \int_{\Omega} \int_{\Omega} K(x,y) d\mu(x) d\mu(y).
\end{align*} Functionals of this flavor might be amenable to a gradient flow approach at a great level of generality; however, this is outside the scope of this paper. \begin{figure}[h!] \begin{minipage}[l]{.49\textwidth} \includegraphics[width = 5cm]{pic17.pdf} \end{minipage} \begin{minipage}[r]{.49\textwidth} \includegraphics[width = 5cm]{pic18.pdf} \end{minipage} \caption{Left: the set $\left\{ (\left\{\sqrt{2}n\right\}, \left\{\sqrt{\pi} n \right\} ): 1 \leq n \leq 100\right\}$ with $D_N^* \sim 0.04$. Right: evolution of the gradient flow leads to a set with discrepancy $D_N^* \sim 0.03$.} \end{figure} To the best of our knowledge, the approach outlined in this paper as well as the associated functional is new. There is a broad literature around the underlying problem of constructing low-discrepancy sequences by various means. Traditional results were mainly concerned with asymptotic statements (see e.g. \cite{dick, drmota, kuipers}). These constructions often have implicit constants that grow very quickly in the dimension; the search for results that are effective for a small number of points initiated a fertile area of research \cite{aisti, doerr00, heinrich, hinrich}. Even the mere task of computing discrepancy in high dimensions is nontrivial \cite{doerr2, gnewuch, gnewuch2}. We are not aware of any optimization algorithms that take an explicit set of points and then induce a gradient flow to try to decrease the discrepancy. \subsection{Outline of the paper.} \S 2 introduces the energy functional and the main result. \S 3 explains how the energy functional was derived, describes the one-dimensional setting and relates it to Fourier analysis. A proof of the main result is given in \S 4. Numerical examples of how the energy functional acts on well-known constructions are given throughout the paper -- these examples are all two-dimensional (for simplicity of exposition). \begin{table}[h!]
\begin{tabular}{ l | c| c | c } Type of Sequences & $N$ & Discrepancy $D_N(X_N)$ & $D_N$ after Optimization \\[0.05cm] Niederreiter sequence & 50 & 0.082 & 0.061 \\[0.05cm] Hammersley (base 3) & 50 & 0.064 & 0.042 \\[0.05cm] Sobol & 50 & 0.063 & 0.057 \\[0.05cm] Halton (base 2 and 5) & 64 & 0.064 & 0.045 \\[0.05cm] random points & 100 & 0.12 & 0.05 \\[0.05cm] Halton (base 2 and 3) & 128 & 0.032 & 0.025 \\[0.05cm] Niederreiter in $[0,1]^3$ & 50 & 0.098 & 0.093 \\[0.05cm] $\mbox{vdc}_2 \times \mbox{vdc}_3 \times \left\{ \pi n\right\}$ & 100 & 0.074 & 0.066 \end{tabular} \vspace{3pt} \caption{Examples shown in this paper.} \end{table} We emphasize that the examples of point sets are all essentially picked at random, the functional does seem to work at an \textit{overwhelming} level of generality and we invite the reader to try it on their own favorite sets. \section{An energy functional} \subsection{The functional.} Given a set $X = \left\{x_1, \dots, x_N\right\} \subset \mathbb{T}^d \cong [0,1]^d$ of $N$ points in the $d-$dimensional torus where each point is given by $$ x_n = (x_{n,1}, \dots, x_{n,d}) \in \mathbb{T}^d,$$ we introduce the energy function $E:([0,1]^d)^N \rightarrow \mathbb{R}$ via $$ E(X) = \sum_{1 \leq m,n \leq N \atop m \neq n} \prod_{k=1}^{d}{ (1 - \log{\left(2 \sin{ \left( \pi |x_{m,k} - x_{n,k} |\right)} \right)})}.$$ We note that, for $0 \leq x,y \leq 1$ we have that $$1 - \log{(2 \sin{ \pi |x - y|})} \geq 1 - \log{2}$$ and so every term in the product is always positive. We also note that if two different points $x_i, x_j$ have the same $k-$th coordinate, then the functional is not defined and we set $E(X) = \infty$ in that case. In practice, we can always perturb points ever so slightly to avoid that scenario. We note that the functional has an interesting structure: it very much likes to avoid having too many points that have very similar coordinates. This makes sense since such points can be easily captured by a thin (hyper-)rectangle. 
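The definition of $E(X)$ translates directly into code; the following Python sketch (an illustration, not part of the paper) evaluates the double sum verbatim and returns infinity when two distinct points share a coordinate, matching the convention above.

```python
import math

def energy(points):
    """E(X) = sum over ordered pairs m != n of
    prod_k (1 - log(2 sin(pi |x_{m,k} - x_{n,k}|)))."""
    total = 0.0
    for m, xm in enumerate(points):
        for n, xn in enumerate(points):
            if m == n:
                continue
            prod = 1.0
            for a, b in zip(xm, xn):
                s = 2.0 * math.sin(math.pi * abs(a - b))
                if s == 0.0:
                    # two distinct points share this coordinate: E(X) = infinity
                    return math.inf
                prod *= 1.0 - math.log(s)
            total += prod
    return total
```

For two points at distance $1/2$ on the circle ($d=1$), each of the two ordered pairs contributes $1-\log 2$, so $E(X)=2(1-\log 2)$.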
We now first discuss how to actually minimize it in practice and then discuss our main result. \begin{figure}[h!] \begin{minipage}[l]{.49\textwidth} \includegraphics[width = 5.5cm]{pic3.pdf} \end{minipage} \begin{minipage}[r]{.49\textwidth} \includegraphics[width = 5.5cm]{pic4.pdf} \end{minipage} \caption{Left: 128 points of the Halton sequence in base 2 and 3 having $D_N^* \sim 0.032$. Right: evolution of the gradient flow changes the set a tiny bit to one with discrepancy $D_N^* \sim 0.025$.} \end{figure} \subsection{How to compute things.} We are using standard gradient descent: if $f:\mathbb{R}^d \rightarrow \mathbb{R}$ is a differentiable function, gradient descent tries to find a (local) minimum by defining an iterative sequence of points via $$ x_{n+1} = x_{n} - \alpha \nabla f(x_n),$$ where $\alpha > 0$ is the step-size. This is exactly how we proceed as well. The gradient $\nabla E$ can be computed explicitly and $$ \frac{\partial E}{\partial x_{n, i}} = \sum_{m=1 \atop m \neq n}^{N} \left( \prod_{k=1 \atop k \neq i}^{d}{ (1 - \log{\left(2 \sin{ \left( \pi |x_{m,k} - x_{n,k} |\right)} \right)})} \right) h(x_{n,i} - x_{m,i}),$$ where $$ h(x) = - \pi \cot{\left(\pi |x|\right)}\, \mbox{sign}(x).$$ This allows us to compute $$ \frac{\partial E}{\partial x_{n}} = \left( \frac{\partial E}{\partial x_{n, 1}}, \frac{\partial E}{\partial x_{n, 2}}, \dots, \frac{\partial E}{\partial x_{n, d}} \right),$$ which is the infinitesimal direction in which we have to move $x_n$ to get the largest increase in the energy functional. Since we are interested in decreasing it, we replace $$ x_n \leftarrow x_n - \alpha \frac{\partial E}{\partial x_{n}}.$$ The algorithm is somewhat sensitive to the choice of $\alpha$ (this is not surprising and a recurring theme for gradient methods): it has to be chosen small enough that the first-order approximation is still somewhat valid; however, if it is chosen too small, then convergence becomes very slow and one needs more iterations to converge.
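A single descent step can be sketched as follows (illustrative code, not from the paper); it transcribes the partial derivatives above, using that the derivative of $-\log(2\sin(\pi|x|))$ equals $-\pi\cot(\pi|x|)\,\mathrm{sign}(x)$, and wraps the updated coordinates back onto the torus.

```python
import math

def grad_step(points, alpha=1e-5):
    """One gradient-descent step x_n <- x_n - alpha * dE/dx_n on the torus."""
    big_n, d = len(points), len(points[0])

    def h(x):
        # derivative of -log(2 sin(pi |x|)): -pi * cot(pi |x|) * sign(x)
        return -math.pi / math.tan(math.pi * abs(x)) * math.copysign(1.0, x)

    new_pts = []
    for n in range(big_n):
        grad = [0.0] * d
        for m in range(big_n):
            if m == n:
                continue
            for i in range(d):
                # product over the remaining coordinates k != i
                prod = 1.0
                for k in range(d):
                    if k == i:
                        continue
                    s = 2.0 * math.sin(math.pi * abs(points[m][k] - points[n][k]))
                    prod *= 1.0 - math.log(s)
                grad[i] += prod * h(points[n][i] - points[m][i])
        new_pts.append(tuple((points[n][i] - alpha * grad[i]) % 1.0
                             for i in range(d)))
    return new_pts
```

For two nearby points on the circle, one step pushes them apart, as one would expect from an energy that penalizes similar coordinates.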
In practice, for point sets containing $\sim 100$ points, we worked with $\alpha \sim 10^{-5}$, which usually leads to a local minimum within less than a hundred iterations. The cost of computing a gradient step is of order $\mathcal{O}(N^2 d)$ when $N \geq d$ and thus not at all unreasonable. There are presumably ways of optimizing both the choice of $\alpha$ and the cost of computing the energy (say, by fast multipole techniques) but this is beyond the scope of this paper. \begin{figure}[h!] \begin{minipage}[l]{.49\textwidth} \includegraphics[width = 5.5cm]{pic9.pdf} \end{minipage} \begin{minipage}[r]{.49\textwidth} \includegraphics[width = 5.5cm]{pic10.pdf} \end{minipage} \caption{Left: 50 points of a Sobol sequence with $D_N^* \sim 0.063$. Right: evolution of the flow leads to a set with $D_N^* \sim 0.057$.} \end{figure} \subsection{Lattices.} We observe that if the initial point set is already very well distributed, then minimizing the energy tends to have very little effect on both the set and the discrepancy. There is one setting where this behavior is especially pronounced. We will consider lattice rules of the type $$ X_N = \left\{ \left( \frac{n}{N}, \left\{ \frac{a n}{N}\right\} \right): 0 \leq n \leq N-1\right\},$$ where $a,N \in \mathbb{Z}$ are coprime and $\left\{ x \right\} = x - \left\lfloor x \right\rfloor$ is the fractional part. Lattice rules are classical examples of sequences with small discrepancy; we refer to \cite{dick, drmota, kuipers} and to \cite{fritz2, fritz3} for examples of more recent results. \begin{thm} Every lattice rule $X_N$ is a critical point of the energy functional. Moreover, if $a^2 \equiv 1~(\mbox{mod}~N)$, then $X_N$ is a strict local minimum. \end{thm} We understand critical point in the following sense: if we fix all but one point and then move the remaining point a distance $\varepsilon$, then the energy changes only by an amount proportional to $\varepsilon^2$.
If $a^2 \equiv 1~(\mbox{mod}~N)$, then the energy changes like $\sim c \varepsilon^2$ for some $c>0$. Some restriction like this is clearly necessary since, if we move all the points by the same fixed vector, the energy remains unchanged. Nonetheless, we expect stronger statements to be true. We also do not know whether the condition $a^2 \equiv 1~(\mbox{mod}~N)$ is necessary; it seems like it should not be, and we comment on this at the end of the paper. Several of the classical point sets (e.g. Sobol sequences) barely move under the gradient flow -- is it maybe true that many classical sequences have a local minimum nearby? \begin{figure}[h!] \begin{minipage}[l]{.49\textwidth} \includegraphics[width = 5cm]{pic11.pdf} \end{minipage} \begin{minipage}[r]{.49\textwidth} \includegraphics[width = 5cm]{pic12.pdf} \end{minipage} \caption{Left: 64 Halton points (base 2, 5) with $D_N^* \sim 0.065$. Right: the gradient flow leads to a set with $D_N^* \sim 0.045$.} \end{figure} \subsection{Related functionals.} One question of obvious interest is whether there are related functionals. We point out that our functional is part of a natural 1-parameter family of functionals defined via certain fractional integral operators. This is \textit{not} how our functional was originally derived (that derivation can be found in \S 3 and is based on the Erd\H{o}s-Turan inequality) but may provide an interesting avenue for further research. We note that our approach to the Erd\H{o}s-Turan inequality involves an application of the Cauchy-Schwarz inequality that could also be done in a different fashion and this would, somewhat naturally, lead to the inverse fractional Laplacian. We quickly introduce this fascinating object here and then mention explicitly in the proof how one could deviate from the derivation. A full exploration of this case is outside the scope of this paper.
If $f:\mathbb{T}^d \rightarrow \mathbb{R}$ is sufficiently smooth, then we can differentiate the Fourier series $f = \sum_{k \in \mathbb{Z}^d}{ \widehat{f}(k) e^{2\pi i \left\langle k, x \right\rangle}}$ term by term and obtain, for any $s \in \mathbb{N}$, $$ (-\Delta)^s f = \sum_{k \in \mathbb{Z}^d \atop k \neq 0}{(2\pi \|k\|)^{2s} \widehat{f}(k) e^{2\pi i \left\langle k, x \right\rangle}}.$$ However, as is easily seen, this definition actually makes sense for $s \in \mathbb{R}$: if $s$ is positive, then we require that $\widehat{f}(k)$ decays sufficiently quickly for the sum to be defined. If $s$ is negative, then it suffices to assume that $f\in L^2(\mathbb{T}^d)$ since $\| (-\Delta)^s f \|_{L^2} \leq \|f\|_{L^2}$ for all $s<0$ (we refer to \cite{luz} for an introduction to the fractional Laplacian on the torus). We will now compute $(-\Delta)^{-1/2} \delta_0$, where $\delta_0$ is a Dirac measure at 0 in $\mathbb{T}$. We see that \begin{align*} (-\Delta)^{-\frac12} \delta_0 &= \sum_{k \in \mathbb{Z} \atop k \neq 0}{(2\pi |k|)^{-1} e^{2\pi i k x}} = \frac{1}{2\pi} \sum_{k \in \mathbb{Z} \atop k \neq 0}{\frac{e^{2\pi i k x}}{|k|}} \\ &= \frac{1}{\pi} \sum_{k=1}^{\infty}{ \frac{\cos{(2 \pi k x)}}{k}} = - \frac{1}{\pi} \log{(2\sin{(\pi |x|)})}. \end{align*} This is, up to a factor of $\pi$, exactly the factor arising in our computation. It is well understood that $s=-1/2$ is a special scale and that the fractional Laplacian has different behavior for $s<-1/2$ and $s>-1/2$, but this does suggest many other factors that can be computed in a similar way.
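The closed form behind this computation, $\sum_{k\geq 1}\cos(2\pi kx)/k = -\log(2\sin(\pi x))$ for $0<x<1$, is easy to test numerically: the partial sums converge at rate $\mathcal{O}(1/K)$ away from the endpoints. A quick sketch (illustrative code with a hypothetical helper name, not from the paper):

```python
import math

def cos_series_partial(x, big_k):
    """Partial sum of sum_{k=1}^K cos(2 pi k x)/k, which converges to
    -log(2 sin(pi x)) for 0 < x < 1."""
    return sum(math.cos(2.0 * math.pi * k * x) / k for k in range(1, big_k + 1))

# closed form at x = 0.3, for comparison against the partial sums
closed_form = -math.log(2.0 * math.sin(math.pi * 0.3))
```

With $K=10^5$ the partial sum at $x=0.3$ matches the closed form to well within $10^{-3}$.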
It also suggests that it might be potentially worthwhile to study functionals of the type $$ E(X) = \left\| \sum_{k=1}^{n} (-\Delta)^{s} \delta_{x_k} \right\|_{L^2(\mathbb{T}^d)}^2,$$ which can be simplified as \begin{align*} E(X) &= \left\langle \sum_{k=1}^{n} (-\Delta)^{s} \delta_{x_k}, \sum_{k=1}^{n} (-\Delta)^{s} \delta_{x_k} \right\rangle = \sum_{k, \ell=1}^{n}{ \left\langle (-\Delta)^{s} \delta_{x_k} , (-\Delta)^{s} \delta_{x_\ell} \right\rangle} \\ &= n \left\langle (-\Delta)^{s} \delta_{0}, (-\Delta)^{s} \delta_{0} \right\rangle + \sum_{k, \ell=1 \atop k \neq \ell}^{n}{ \left\langle (-\Delta)^{s} \delta_{x_k} , (-\Delta)^{s} \delta_{x_\ell} \right\rangle }. \end{align*} Using self-adjointness of the inverse fractional Laplacian, we can simplify the relevant term as \begin{align*} \sum_{k, \ell=1 \atop k \neq \ell}^{n}{ \left\langle (-\Delta)^{s} \delta_{x_k} , (-\Delta)^{s} \delta_{x_\ell} \right\rangle } &= \sum_{k, \ell=1 \atop k \neq \ell}^{n}{ \left\langle (-\Delta)^{2s} \delta_{x_k} , \delta_{x_\ell} \right\rangle } = \sum_{k, \ell=1 \atop k \neq \ell}^{n}{ \left( (-\Delta)^{2s} \delta_{0}\right)(x_k - x_{\ell}) }. \end{align*} This, in turn, can be rewritten as $$ \sum_{k, \ell=1 \atop k \neq \ell}^{n}{ \left( (-\Delta)^{2s} \delta_{0}\right)(x_k - x_{\ell}) } = (2\pi)^{2s} \sum_{k, \ell=1 \atop k \neq \ell}^{n}{ \sum_{m \in \mathbb{Z}^d \atop m \neq 0}{ \|m\|^{2s} e^{2\pi i \left\langle m, x_k -x_{\ell}\right\rangle}}},$$ which, obviously, admits a gradient formulation. One could also consider a possible truncation in frequency followed by a gradient formulation, as well as various mollification mechanisms. We want to strongly suggest the possibility that the optimal value of $s$ for these kinds of methods may depend on the dimension. \section{Heuristic Derivation of the Energy Functional} We first give a one-dimensional argument to avoid notational overload and then derive the analogous quantity for higher dimensions in \S 3.2.
\subsection{One dimension.} Our derivation is motivated by the Erd\H{o}s-Turan inequality bounding the discrepancy $D_N$ of a set $\left\{x_1, \dots, x_N\right\} \subset [0,1]$ by $$ D_N \lesssim \frac{1}{N} + \sum_{k=1}^{N}{ \frac{1}{k} \left| \frac{1}{N} \sum_{n=1}^{N}{ e^{2 \pi i k x_n} } \right|}.$$ We can bound this from above, using $x \leq (1+x^2)/2$ valid for all real $x$, by $$ \sum_{k=1}^{N}{ \frac{1}{k} \left| \frac{1}{N} \sum_{n=1}^{N}{ e^{2 \pi i k x_n} } \right|} \leq \frac{1}{2}\sum_{k=1}^{N}{\left( \frac{1}{k} \frac{1}{N} + \frac{1}{k} \frac{1}{N} \left| \sum_{n=1}^{N}{ e^{2 \pi i k x_n} } \right|^2 \right)}.$$ Using merely this upper bound, we want to make sure that the second term is small. This second term simplifies to $$ \frac{1}{N}\sum_{k=1}^{N}{\left(\frac{1}{k} \left| \sum_{n=1}^{N}{ e^{2 \pi i k x_n} } \right|^2 \right)} = \frac{1}{N}\sum_{k=1}^{N}{\frac{1}{k} \sum_{n, m=1}^{N}{ e^{2 \pi i k (x_n-x_m)} } }.$$ Ignoring the scaling factor $N^{-1}$, we decouple into diagonal and off-diagonal terms and obtain $$ \sum_{k=1}^{N}{\frac{1}{k} \sum_{n, m=1}^{N}{ e^{2 \pi i k (x_n-x_m)} } } = \sum_{k=1}^{N}{\frac{N}{k}} + \sum_{k=1}^{N}{\frac{1}{k}\sum_{m,n = 1 \atop m \neq n}^{N}{ \cos{(2 \pi k (x_m - x_n))}}}.$$ The first term is a fixed constant and thus independent of the actual points; the second sum can be written as $$ \sum_{k=1}^{N}{\frac{1}{k}\sum_{m,n = 1 \atop m \neq n}^{N}{ \cos{(2 \pi k (x_m - x_n))}}} = \sum_{m,n = 1 \atop m \neq n}^{N}{ \sum_{k=1}^{N}{ \frac{ \cos{(2 \pi k (x_m - x_n))}}{k} }}.$$ The inner sum can now be simplified by letting the upper limit of summation go to infinity, since \cite{trig} $$ \sum_{k=1}^{\infty}{ \frac{ \cos{(2 \pi k x)}}{k} } = - \log{(2 \sin{( \pi |x|)})}.$$ This suggests that we should really try to minimize the functional $$ E(X) = \sum_{m,n = 1 \atop m \neq n}^{N}{ - \log{(2 \sin{( \pi |x_m - x_n|)})}}.$$ \textbf{Remark.} There is one step in the derivation where we could have argued somewhat differently: we could have
written, for any $0 < \gamma < 1$, \begin{align*} \sum_{k=1}^{N}{ \frac{1}{k} \left| \frac{1}{N} \sum_{n=1}^{N}{ e^{2 \pi i k x_n} } \right|} &= \sum_{k=1}^{N}{ \frac{1}{k^{1-\gamma}} \frac{1}{k^{\gamma}}\left| \frac{1}{N} \sum_{n=1}^{N}{ e^{2 \pi i k x_n} } \right|} \\ &\leq \left(\sum_{k=1}^{N}{ \frac{1}{k^{2-2\gamma}}} \right)^{1/2} \frac{1}{N} \left( \sum_{k=1}^{N}{ \frac{1}{k^{2\gamma}} \left| \sum_{n=1}^{N}{ e^{2 \pi i k x_n} } \right|^2} \right)^{1/2}. \end{align*} The first factor is either $\sim N^{\gamma - 1/2}$ (for $\gamma > 1/2$), $\sim \sqrt{\log{N}}$ (for $\gamma =1/2$) or $\sim 1$ (for $\gamma < 1/2$). The second factor simplifies, after squaring the inner term and taking the limit $N \rightarrow \infty$ of the Fourier series, to the definition of the fractional Laplacian $(-\Delta)^{-\gamma}$ (see \S 2.4) applied to the measure $\sum_{k=1}^{N}{\delta_{x_k}}$. \subsection{Higher dimensions.} The general case follows from the Erd\H{o}s-Turan-Koksma \cite{erd1,erd2,koksma} inequality and the heuristic outlined above for the one-dimensional case.
We recall that the Erd\H{o}s-Turan-Koksma inequality allows us to bound the discrepancy of a set $\left\{x_1, \dots, x_N\right\} \subset [0,1]^d$ by $$ D_N \lesssim_d \frac{1}{M + 1} + \sum_{\|k\|_{\infty} \leq M}{ \frac{1}{r(k)} \frac{1}{N} \left| \sum_{\ell=1}^{N}{e^{2\pi i \left\langle k, x_{\ell} \right\rangle}} \right|},$$ where $r:\mathbb{Z}^d \rightarrow \mathbb{N}$ is given by $$ r(k) = \prod_{j=1}^{d}{\max\left\{1, |k_j| \right\}}.$$ We note that, since $r(k) \leq r(2k) \leq 2^d r(k)$, we can change $r(k)$ to $r(2k)$ at merely the cost of a constant depending only on the dimension and thus \begin{align*} \sum_{\|k\|_{\infty} \leq M}{ \frac{1}{r(k)} \frac{1}{N} \left| \sum_{\ell=1}^{N}{e^{2\pi i \left\langle k, x_{\ell} \right\rangle}} \right|} &\lesssim_d \sum_{\|k\|_{\infty} \leq M}{ \frac{1}{r(2k)} \frac{1}{N}} \\ &+\sum_{\|k\|_{\infty} \leq M}{ \frac{1}{r(2k)} \frac{1}{N} \left| \sum_{\ell=1}^{N}{e^{2\pi i \left\langle k, x_{\ell} \right\rangle}} \right|^2}. \end{align*} We can expand the second sum into $$ \sum_{\|k\|_{\infty} \leq M}{ \frac{1}{r(2k)} \frac{1}{N} \left| \sum_{\ell=1}^{N}{e^{2\pi i \left\langle k, x_{\ell} \right\rangle}} \right|^2} = \frac{1}{N}\sum_{m,n = 1}^{N} \prod_{j=1}^{d} \left( 1 + \sum_{k=-M \atop k \neq 0}^{M}{\frac{1}{2|k|} e^{2\pi i k (x_{m,j} - x_{n,j})}} \right).$$ Letting $M \rightarrow \infty$, we can simplify every one of these terms to $$\sum_{k \in \mathbb{Z} \atop k \neq 0}{\frac{e^{2\pi i k (x_{m,j} - x_{n,j})}}{2|k|} } = - \log{(2 \sin{(\pi |x_{m,j} - x_{n,j}|)})}$$ and we obtain the general form of the energy functional. \begin{figure}[h!] \begin{minipage}[l]{.49\textwidth} \includegraphics[width = 5cm]{pic5.pdf} \end{minipage} \begin{minipage}[r]{.49\textwidth} \includegraphics[width = 5cm]{pic6.pdf} \end{minipage} \caption{Left: 100 random points with $D_N^* \sim 0.12$.
Right: evolution of the gradient flow leads to a set with $D_N^* \sim 0.05$.} \end{figure} The Erd\H{o}s-Turan-Koksma inequality shows $$ D_N \lesssim_d \frac{1}{M + 1} + \sum_{\|k\|_{\infty} \leq M}{ \frac{1}{r(k)} \frac{1}{N} \left| \sum_{\ell=1}^{N}{e^{2\pi i \left\langle k, x_{\ell} \right\rangle}} \right|}.$$ We know that the best possible behavior is on the scale of $D_N \lesssim (\log{N})^{d-1} N^{-1}$ (or possibly even smaller). This suggests that the exponential sums cannot typically be that large; they should be roughly at scale $\sim 1$ most of the time. Understanding this better could lead to precise estimates comparing how much our energy exceeds the discrepancy. We conclude by establishing a rigorous bound. \begin{lem} We have, for $X=\left\{x_1, \dots, x_N \right\} \subset \mathbb{T}^d$, $$\sum_{\|k\|_{\infty} \leq N}{ \frac{1}{r(k)}\left| \sum_{\ell=1}^{N}{e^{2\pi i \left\langle k, x_{\ell} \right\rangle}} \right|^2} \lesssim_d E(X).$$ \end{lem} \begin{proof} The argument outlined above already establishes the result except for one missing ingredient: for all $0 < x < 1$, there is a uniform bound $$ \max_{n \in \mathbb{N}} \sum_{k=1}^{n}{\frac{\cos{(2\pi k x)}}{k}} \lesssim 1-\log{|\sin{(\pi x)}|}.$$ We can assume w.l.o.g. that $0 < x < 1/2$. We use Abel summation to write \begin{align*} \sum_{k=1}^{n}{\frac{\cos{(2\pi k x)}}{k}} &= (n+1)\frac{\cos{(2\pi n x)}}{n} \\ &+ \int_{1}^{n}{ \left\lfloor k+1 \right\rfloor \left( \frac{\cos{(2 \pi k x)}}{k^2} + \frac{ 2\pi x \sin{(2\pi k x)}}{k} \right) dk}. \end{align*} The first term is $\mathcal{O}(1)$; it remains to treat the integral. The first part of the integrand gives rise to an alternating Leibniz-type series, with the first sign change of the cosine at $kx = 1/4$.
Thus \begin{align*} \int_{1}^{n}{ \left\lfloor k+1 \right\rfloor \frac{\cos{(2 \pi k x)}}{k^2} dk} &\lesssim \int_{1}^{1/(4x)}{ \left\lfloor k+1 \right\rfloor \frac{\cos{(2 \pi k x)}}{k^2} dk} \\ &\lesssim \int_{1}^{1/(4x)}{ \frac{\cos{(2 \pi k x)}}{k} dk} \lesssim \log{(1/x)}. \end{align*} The second integral simplifies to $$ \int_{1}^{n}{ \left\lfloor k+1 \right\rfloor \frac{ 2\pi x \sin{(2\pi k x)}}{k} dk} = 2\pi x \int_{1}^{n}{ \left\lfloor k+1 \right\rfloor \frac{\sin{(2\pi k x)}}{k} dk} \lesssim 1.$$ \end{proof} \begin{figure}[h!] \begin{minipage}[l]{.49\textwidth} \includegraphics[width = 5.5cm]{pic13.pdf} \end{minipage} \begin{minipage}[r]{.49\textwidth} \includegraphics[width = 5.5cm]{pic14.pdf} \end{minipage} \caption{Left: 50 points of the Hammersley sequence in base 3 with $D_N^* \sim 0.064$. Right: evolution of the flow leads to a set with $D_N^* \sim 0.042$.} \end{figure} \subsection{The case $d=1$.} Things are usually simpler in one dimension (though also less interesting because the optimal constructions are trivial and given by equispaced points). We have the following basic result. \begin{proposition} Let $(x_n)$ be a sequence in $\mathbb{T} \cong [0,1]$. If $$ \limsup_{N \rightarrow \infty} \frac{1}{N^2} \sum_{1 \leq m \neq n \leq N}{ (1 - \log{(2 \sin{(\pi |x_m - x_n|)})})} = 1,$$ then the sequence is uniformly distributed. \end{proposition} \begin{proof} The proof is similar in spirit to the main argument in \cite{steini}; we refer to that paper for the definition of the Jacobi $\theta$-function and the main idea. We define a one-parameter family of functions via $$ f_N(t,x) = \sum_{k=1}^{N}{ \theta_t(x-x_k)}$$ where $\theta_t$ is the Jacobi $\theta$-function.
In particular $$ \lim_{t \rightarrow 0^+}{ f_N(t,x)} = \sum_{k=1}^{N}{\delta_{x_k}} \qquad \mbox{in the sense of weak convergence.}$$ Defining $$ g(x) = 1 - \log{(2 \sin{( \pi |x|)} )},$$ we observe that the function $$h(t) = \left\langle g* f_N(t,x), f_N(t,x) \right\rangle$$ is monotonically decaying in time. This is seen by applying the Plancherel theorem \begin{align*} h(t) &= \sum_{k \in \mathbb{Z}} \widehat{g}(k) |\widehat{ f_N(t,x)}(k)|^2 \\ &= \sum_{k \in \mathbb{Z}}{ \widehat{g}(k) e^{-4 \pi^2 k^2 t} \left|\sum_{\ell=1}^{N}{e^{-2 \pi i k x_{\ell}}} \right|^2} \end{align*} and using $ \widehat{g}(k) > 0$. Letting $t \rightarrow \infty$, only the $k=0$ term survives, and monotonicity yields $$ h(t) \geq \widehat{g}(0) N^2 = N^2 \qquad \mbox{for all}~t > 0.$$ As for the second part of the argument, suppose that $(x_n)$ is not uniformly distributed. Weyl's theorem implies that there exist $\varepsilon>0, k \in \mathbb{N}$ such that $$ \left| \sum_{\ell=1}^{N}{ e^{-2 \pi i k x_{\ell}}} \right|^2 \geq \varepsilon^2 N^2 \qquad \mbox{for infinitely many}~N.$$ Then, however, $$ h(1) \geq \widehat{g}(0) N^2 + e^{-4 \pi^2 k^2} \widehat{g}(k) \varepsilon^2 N^2 \geq (1 + \delta)N^2$$ for some $\delta > 0$ and infinitely many $N$; since $h$ is decaying, the same lower bound holds as $t \rightarrow 0^+$, contradicting the assumption. \end{proof} \section{Proof of the Theorem} \subsection{An Inequality.} We first prove an elementary inequality. \begin{lem} Let $0 < x,y < 1$. Then $$ 2 \left|\cot{(\pi x)} \cot{(\pi y)} \right|< (1-\log{(2\sin{(\pi x)})}) \csc^2{(\pi y)} + \csc^2{(\pi x)}(1-\log{(2\sin{(\pi y)})}).$$ \end{lem} \begin{proof} The right-hand side is always positive; we can thus assume w.l.o.g. that $0 < x,y < 1/2$. Multiplying with $\sin^2{(\pi x)}\sin^2{(\pi y)}$ on both sides leads to the equivalent statement $A < B$, where $$ A = 2\sin{(\pi x)}\cos{(\pi x)} \sin{(\pi y)}\cos{(\pi y)}$$ and $$B = (1-\log{(2\sin{(\pi x)})}) \sin^2{(\pi x)} + \sin^2{(\pi y)}(1-\log{(2\sin{(\pi y)})}).$$ We use $2ab \leq a^2 + b^2$ to argue that \begin{align*} A \leq \sin^2{(\pi x)}\cos^2{(\pi x)} + \sin^2{(\pi y)}\cos^2{(\pi y)}.
\end{align*} The result then follows from the inequality $$ \cos^2{(\pi x)} < 1-\log{(2\sin{(\pi x)})} \qquad \mbox{for all}~0 < x \leq \frac{1}{2}$$ which can be easily seen by elementary methods. \end{proof} \subsection{Proof of the Theorem} \begin{proof}[Proof of the Theorem.] The symmetries of the sequence and the energy functional imply that it is sufficient to show that the energy is locally convex around the point $(0,0)$. This means we want to show that $$f(\varepsilon, \delta) = \sum_{n=1}^{N-1}{ \left(1 - \log{\left(2 \sin{ \left( \pi \left|\frac{n}{N} - \varepsilon \right|\right)} \right)}\right) \left(1 - \log{\left(2 \sin{ \left( \pi \left| \left\{ \frac{an}{N}\right\} - \delta \right|\right)} \right)} \right) }$$ has a strict local minimum at $(\varepsilon, \delta) = (0,0)$. We can assume $|\varepsilon|, |\delta| < N^{-1}$, expand the first term in $\varepsilon$ up to second order and note that \begin{align*} 1 - \log{( 2 \sin{(\pi (x - \varepsilon))})}&= \left(1 - \log{( 2 \sin{(\pi x)})}\right) + \pi \cot{(\pi x)} \varepsilon \\ &+ \pi^2 \csc^2{(\pi x)} \frac{ \varepsilon^2}{2} + \mathcal{O}(\varepsilon^3). \end{align*} This shows that $$ \frac{\partial}{\partial \varepsilon} f(\varepsilon, 0) \big|_{\varepsilon = 0} = \pi \sum_{n=1}^{N-1}{ \cot{ \left(\frac{n \pi }{N} \right)} \left(1 - \log{\left(2 \sin{ \left( \pi \left\{ \frac{an}{N}\right\} \right)} \right)} \right) }.$$ We group the summands for $n$ and $N-n$ and observe that the cotangent is odd under $n \mapsto N-n$, while the second factor is even, since $\left\{ \frac{a(N-n)}{N}\right\} = 1 - \left\{ \frac{an}{N}\right\}$ (as $an \not\equiv 0 \pmod N$) and $\sin{(\pi(1-t))} = \sin{(\pi t)}$; therefore the sum evaluates to 0. The other derivative $$ \frac{\partial}{\partial \delta} f(0, \delta) \big|_{\delta = 0} = \pi \sum_{n=1}^{N-1}{ \cot{ \left( \pi \left\{ \frac{an}{N}\right\} \right)} \left(1 - \log{\left(2 \sin{ \left( \pi \frac{n }{N} \right)} \right)} \right) }$$ vanishes for exactly the same reason and therefore the lattice is a critical point. It remains to show that it is a local minimizer, which requires an expansion up to second order.
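In passing, the vanishing of the two first-order sums can be confirmed numerically. The sketch below is our own (the choice of $N$ prime and $a$ coprime to $N$ is arbitrary and purely illustrative):

```python
import math

def frac(t):
    # fractional part {t}
    return t - math.floor(t)

def gradient_sums(N, a):
    # the two first-order sums (the derivatives above, up to a factor pi);
    # 1/tan is the cotangent
    s_eps = sum(
        (1 - math.log(2 * math.sin(math.pi * frac(a * n / N))))
        / math.tan(math.pi * n / N)
        for n in range(1, N)
    )
    s_del = sum(
        (1 - math.log(2 * math.sin(math.pi * n / N)))
        / math.tan(math.pi * frac(a * n / N))
        for n in range(1, N)
    )
    return s_eps, s_del

# both sums cancel pairwise under n -> N - n
s_eps, s_del = gradient_sums(101, 7)
```

Both sums vanish up to rounding error, in agreement with the symmetry argument above.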
This expansion naturally decouples into three sums, where \begin{align*} (I) &= \frac{\pi^2 \varepsilon^2}{2} \sum_{n=1}^{N-1}{ \csc^2{ \left( \pi\frac{n}{N} \right)} \left(1 - \log{\left(2 \sin{ \left( \pi \left\{ \frac{an}{N}\right\} \right)} \right)} \right) }\\ (II) &= \pi^2 \varepsilon \delta \sum_{n=1}^{N-1}{ \cot{ \left( \pi\frac{n}{N} \right)}\cot{ \left( \pi \left\{ \frac{an}{N} \right\} \right)}} \\ (III) &=\frac{\pi^2 \delta^2}{2} \sum_{n=1}^{N-1}{ \csc^2{ \left( \pi \left\{ \frac{an}{N} \right\} \right)} \left(1 - \log{\left(2 \sin{ \left( \pi \frac{n}{N} \right)}\right)}\right) } \end{align*} \begin{figure}[h!] \begin{minipage}[l]{.49\textwidth} \includegraphics[width = 5cm]{pic15.pdf} \end{minipage} \begin{minipage}[r]{.49\textwidth} \includegraphics[width = 5cm]{pic16.pdf} \end{minipage} \caption{Left: 101 points combined from a Halton sequence $(x \leq 0.5)$ and a Sobol sequence $(x \geq 0.5)$ with $D_N^* \sim 0.042$. Right: gradient flow leads to a set with discrepancy $D_N^* \sim 0.034$.} \end{figure} We can now argue that $(II)$ is bounded by \begin{align*} \left| \sum_{n=1}^{N-1}{ \varepsilon \delta \cot{ \left( \pi\frac{n}{N} \right)}\cot{ \left( \pi \left\{ \frac{an}{N} \right\} \right)}} \right| &\leq \left(\frac{\varepsilon^2}{2} + \frac{\delta^2}{2}\right) \left|\sum_{n=1}^{N-1}{ \cot{ \left( \pi\frac{n}{N} \right)}\cot{ \left( \pi \left\{ \frac{an}{N} \right\} \right)}} \right|\\ &\leq \frac{ \varepsilon^2}{2} \left| \sum_{n=1}^{N-1} \cot{ \left( \pi\frac{n}{N} \right)}\cot{ \left( \pi \left\{ \frac{an}{N} \right\} \right) } \right| \\ &+ \frac{ \delta^2}{2} \left| \sum_{n=1}^{N-1} \cot{ \left( \pi\frac{n}{N} \right)}\cot{ \left( \pi \left\{ \frac{an}{N} \right\} \right) } \right|.
\end{align*} The Lemma implies that we can bound the first term by \begin{align*} \sum_{n=1}^{N-1} \left| \cot{ \left( \pi\frac{n}{N} \right)}\cot{ \left( \pi \left\{ \frac{an}{N} \right\} \right) } \right| &\leq \frac{1}{2}\sum_{n=1}^{N-1} \csc^2{ \left( \pi\frac{n}{N} \right)} \left(1 - \log{\left(2 \sin{ \left( \pi \left\{ \frac{an}{N}\right\} \right)} \right)} \right) \\ &+ \frac{1}{2}\sum_{n=1}^{N-1} \csc^2{ \left( \pi \left\{ \frac{an}{N}\right\} \right)} \left(1 - \log{\left(2 \sin{ \left( \pi\frac{n}{N} \right)} \right)} \right) \end{align*} We finally use the algebraic structure and argue that if $a^2 \equiv 1~(\mbox{mod}~N)$, then $$ n \rightarrow a \cdot n \qquad \mbox{is an involution mod}~N$$ and this implies that both sums are the same sum written in a different order; the resulting sum is precisely the one appearing in $(I)$. The argument for the third sum is identical and altogether we conclude that $$ (II) \leq (I) + (III)$$ which implies the desired result. \end{proof} It remains an open question whether the same result $ (II) \leq (I) + (III)$ remains true in general. Basic numerical experiments suggest that this should be the case. We can reformulate the problem by writing out the quadratic form and computing its determinant. The relevant question is then whether $ (1)(3) \geq (2)^2,$ where \begin{align*} (1) &= \sum_{n=1}^{N-1}{ \csc^2{ \left( \pi\frac{n}{N} \right)} \left(1 - \log{\left(2 \sin{ \left( \pi \left\{ \frac{an}{N}\right\} \right)} \right)} \right) }\\ (2) &= \sum_{n=1}^{N-1}{ \cot{ \left( \pi\frac{n}{N} \right)}\cot{ \left( \pi \left\{ \frac{an}{N} \right\} \right)}} \\ (3) &= \sum_{n=1}^{N-1}{ \csc^2{ \left( \pi \left\{ \frac{an}{N} \right\} \right)} \left(1 - \log{\left(2 \sin{ \left( \pi \frac{n}{N} \right)}\right)}\right) }. \end{align*}
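The determinant condition can be tested directly. The sketch below (Python; the function name and the sample pairs $(N,a)$ are our own choices) evaluates the three sums; we restrict the checked pairs to $a^2 \equiv 1 \pmod N$, where the involution argument above guarantees the inequality and forces the two diagonal sums to coincide.

```python
import math

def frac(t):
    return t - math.floor(t)

def quadratic_form_sums(N, a):
    # the sums (1), (2), (3) of the quadratic form, assuming gcd(a, N) = 1
    S1 = S2 = S3 = 0.0
    for n in range(1, N):
        u = math.pi * n / N
        v = math.pi * frac(a * n / N)
        S1 += (1 - math.log(2 * math.sin(v))) / math.sin(u) ** 2
        S2 += (math.cos(u) / math.sin(u)) * (math.cos(v) / math.sin(v))
        S3 += (1 - math.log(2 * math.sin(u))) / math.sin(v) ** 2
    return S1, S2, S3

for N, a in [(5, 4), (8, 3), (12, 5), (16, 7)]:  # a*a = 1 mod N in each case
    S1, S2, S3 = quadratic_form_sums(N, a)
    # here S1 = S3 up to rounding, and S1*S3 >= S2**2 follows from the Lemma
```

For $a$ with $a^2 \not\equiv 1 \pmod N$ the inequality is exactly the open question above, so we make no assertion in that case.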
https://arxiv.org/abs/1902.00441
A Nonlocal Functional Promoting Low-Discrepancy Point Sets
Let $X = \left\{x_1, \dots, x_N\right\} \subset \mathbb{T}^d \cong [0,1]^d$ be a set of $N$ points in the $d$-dimensional torus that we want to arrange as regularly as possible. The purpose of this paper is to introduce a curious energy functional $$ E(X) = \sum_{1 \leq m,n \leq N \atop m \neq n} \prod_{k=1}^{d}{ (1 - \log{\left(2 \sin{ \left( \pi |x_{m,k} - x_{n,k} |\right)} \right)})}$$ and to suggest that moving a set $X$ into the direction $-\nabla E(X)$ may have the effect of increasing regularity of the set in the sense of decreasing discrepancy. We numerically demonstrate the effect for Halton, Hammersley, Kronecker, Niederreiter and Sobol sets. Lattices in $d=2$ are critical points of the energy functional, some (possibly all) are strict local minima.
https://arxiv.org/abs/1310.1327
Games and Complexes II: Weight Games and Kruskal-Katona Type Bounds
A strong placement game $G$ played on a board $B$ is equivalent to a simplicial complex $\Delta_{G,B}$. We look at weight games, a subclass of strong placement games, and introduce upper bounds on the number of positions with $i$ pieces in $G$, or equivalently the number of faces with $i$ vertices in $\Delta_{G,B}$, which are reminiscent of the Kruskal-Katona bounds.
\section{Introduction} Our goal in this paper is to study complexes of placement games (Definition \ref{def:placement}). In \cite{GameCompI} we demonstrated that to a placement game $G$ played on a board $B$ one can associate a simplicial complex $\Delta_{G,B}$ where $G$ can be considered as a game played on $\Delta_{G,B}$. The main question that we address in this paper is: What complexes can be legal complexes of a placement game? We give partial answers to this question in specific cases: when the board is a path, a cycle, or a complete graph (also see \cite{MSc}). We begin by introducing some of the concepts needed. A complete introduction is given in \cite{GameCompI}. \begin{definition}\label{def:placement} A \textit{strong placement game} is a combinatorial game played on a graph which satisfies the following: \begin{itemize} \item[(i)] The starting position is the empty board. \item[(ii)] Players place pieces on empty spaces of the board according to the rules. \item[(iii)] Pieces are not moved or removed once placed. \item[(iv)] The rules are such that if it is possible to reach a position through a sequence of legal moves, then any sequence of moves leading to this position consists only of legal moves. \end{itemize} The \textsc{Trivial} placement game on a board is the placement game that has no additional rules. \end{definition} Throughout this paper `placement game' refers to a strong placement game. Since placement games are played on a graph, we use the terms board and graph, and space and vertex interchangeably. A \textit{basic position} is a board with only one piece placed. Any position, whether legal or illegal, in a placement game can be decomposed into basic positions. \begin{definition} A \textit{simplicial complex} $\Delta$ on a finite vertex set $V$ is a set of subsets (called \textit{faces}) of $V$ with the condition that if $A\in \Delta$ and $B\subseteq A$, then $B\in \Delta$.
The \textit{$f$-vector} $(f_0, f_1, \ldots, f_k)$ of a simplicial complex $\Delta$ lists the numbers $f_i$ of faces with $i$ vertices. Note that if $\Delta\neq\emptyset$, then $f_0=1$. \end{definition} The \textit{legal complex} \cite{GameCompI}, denoted by $\Delta_{G,B}$, is the simplicial complex whose faces correspond to the legal positions of the placement game $G$ played on the board $B$. \begin{question}\label{Q} Is every simplicial complex the legal complex of a placement game? \end{question} With respect to this question, we are interested in the possible $f$-vectors of legal complexes, and thus consider the following: The number of positions in $G$ on $B$ with $i$ pieces played, or equivalently the number of faces with $i$ vertices in the legal complex $\Delta_{G,B}$, is denoted by $f_i(G,B)$, or shortened to $f_i$ if the game and board are clear. In this work, we will be considering upper bounds on $f_i(G,B)$. Specifically, we will be considering Kruskal-Katona type bounds for weight games played on a path, on a cycle, or on a complete graph. \section{The Kruskal-Katona Theorem} Kruskal \cite{Kr63} and Katona \cite{Ka68} proved that for each pair of positive integers $f$ and $i$, $f$ can be written in the form \[f=\binom{n_i}{i}+\binom{n_{i-1}}{i-1}+\ldots+\binom{n_{i-s}}{i-s}\] where $n_i>n_{i-1}>\ldots>n_{i-s}\geq i-s\geq 1$ are unique. This sum is called the \textit{$i$-canonical representation of $f$}. We can then define the \textit{$j$th pseudopower of $f$} \[f^{(j)}_i=\binom{n_i}{j}+\binom{n_{i-1}}{j-1}+\ldots+\binom{n_{i-s}}{j-s}\] for $j\ge 1$. The Kruskal-Katona theorem gives necessary and sufficient conditions for a vector $(f_0, f_1,\ldots, f_k)$ with entries from the non-negative integers to be the $f$-vector of a simplicial complex.
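Canonical representations and pseudopowers are straightforward to compute greedily; the sketch below (Python; the helper names are ours) reproduces both definitions.

```python
from math import comb

def canonical(f, i):
    # greedy i-canonical representation of f >= 1: list of pairs (n_j, j)
    rep = []
    while f > 0 and i >= 1:
        n = i
        while comb(n + 1, i) <= f:  # largest n with comb(n, i) <= f
            n += 1
        rep.append((n, i))
        f -= comb(n, i)
        i -= 1
    return rep

def pseudopower(f, i, j):
    # the j-th pseudopower f^{(j)}_i; terms with negative lower index
    # are treated as 0, and comb(n, m) = 0 when m > n
    return sum(
        comb(n, j - (i - t)) for n, t in canonical(f, i) if j - (i - t) >= 0
    )
```

For instance, $5 = \binom{3}{2} + \binom{2}{1}$ is the $2$-canonical representation of $5$, which gives $5^{(3)}_2 = 2$ and $5^{(1)}_2 = 4$.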
The following is the version proven by Kruskal: \begin{theorem}[Kruskal \cite{Kr63}] For the sequence of non-negative integers $(f_0, f_1,\ldots, f_k)$ the following are equivalent: \begin{itemize} \item[\emph{(i)}] $(f_0, f_1,\ldots, f_k)$ is the $f$-vector of a non-empty simplicial complex; \item[\emph{(ii)}] $f_0=1$ and $f_{j}\le f_{i}^{(j)}$ for all $1\le i\le j$; \item[\emph{(iii)}] $f_0=1$ and $f_{j}\ge f_{i}^{(j)}$ for all $1\le j\le i$. \end{itemize} \end{theorem} To show that (ii) holds, it is sufficient to show that $f_0=1$ and $f_{i+1}\le f_i^{(i+1)}$ for all $i\ge 1$ since all other cases follow. Similarly, to show (iii), showing $f_0=1$ and $f_j\ge f_{j+1}^{(j)}$ for all $j\ge 1$ is sufficient. The Kruskal-Katona theorem is usually stated in terms of either one of these. If the answer to Question \ref{Q} is ``no'', then not every vector that is an $f$-vector of a simplicial complex is also an $f$-vector of a legal complex. Thus for the remainder, after introducing weight games, we will give improved upper bounds on the entries of an $f$-vector of a legal complex. \section{Games with Weight}\label{sec:weight} In the remainder, we will consider playing pieces of larger size. Specifically, the \textit{weight} of a piece is the number of vertices it covers; these vertices are required to be connected. Many placement games have pieces of weight greater than 1. For example, in \textsc{Domineering} \cite{BCG04} and \textsc{Crosscram} \cite{Ga74} Left and Right both play dominoes as their pieces, and so their pieces are of weight 2. Also, as we will mention in Remark \ref{remark:octal}, partizan octal games are equivalent to placement games with weight on a path. \begin{example} Consider the board given in Figure \ref{fig:weightex}. A piece that has weight 4 could for example be played on the vertex set $\{1,2,3,4\}$, but not on the vertex set $\{1,3,5,6\}$ since these vertices are not connected.
\begin{figure}[!ht] \begin{center} \begin{tikzpicture}[scale=1.5] \node[shape=circle, draw] (1) at (-0.5,0) {\footnotesize 1}; \node[shape=circle, draw] (2) at (1, -0.5) {\footnotesize 2}; \node[shape=circle, draw] (3) at (0,-1.5) {\footnotesize 3}; \node[shape=circle, draw] (4) at (2, -1.5) {\footnotesize 4}; \node[shape=circle, draw] (5) at (2,1) {\footnotesize 5}; \node[shape=circle, draw] (6) at (1.25, 1.75) {\footnotesize 6}; \node[shape=circle, draw] (7) at (3, 0.8) {\footnotesize 7}; \node[shape=circle, draw] (8) at (2.5,0) {\footnotesize 8}; \node[shape=circle, draw] (9) at (4, 1.25) {\footnotesize 9}; \draw (1) to (2) to (3) to (4) to (2) to (5) to (7) to (8); \draw (5) to (6); \draw (7) to (9); \end{tikzpicture} \end{center} \caption{An Example Board} \label{fig:weightex} \end{figure} \end{example} We usually assume that every piece of Left has the same weight $a$, and every piece of Right has the same weight $b$. \begin{definition}\label{def:weightgame} A placement game in which the players play pieces of fixed weights is called a \textit{game with weights}. If the game has no rules besides pieces having to be placed on connected sets of empty vertices, we call it a \textit{weight game}. A $2$-player weight game will be denoted by $\weight{a}{b}$ where $a$ is the weight that Left plays, while $b$ is the weight that Right plays. \end{definition} Essentially, the weight game is the \textsc{Trivial} placement game with weights. In \cite{MSc}, it is shown that the game $\weight{a}{a}$ played on a path or a cycle is equivalent to another placement game in which both Left and Right play pieces of weight $1$. This is not necessarily true though if we force every basic position to be legal, as the following discussion shows. Consider a placement game $G$ in which both Left and Right play pieces of weight $1$ and every basic position is legal. 
Since the basic positions in this case consist of Left or Right occupying a single vertex, we have $n$ Left and Right basic positions each, where $n$ is the number of vertices of the board. Thus the number of legal positions with one piece is the number of basic positions, namely $f_1=2n$. This also implies that a weight game $\weight{a}{b}$ where $f_1$ is odd is not equivalent to a placement game where both Left and Right play pieces of weight $1$ and basic positions are legal. Weight games with $f_1$ odd indeed exist, as seen in the following example. \begin{example} Consider $\weight{1}{2}$ played on $P_2$. The basic positions are \begin{center} \begin{tikzpicture}[scale=0.75] \draw[line width=1pt] (0,0)--(2,0)--(2,-1)--(0,-1)--cycle; \draw[line width=1pt] (1,0)--(1,-1); \filldraw[fill=gray!30, line width=1pt] (0.15,-0.2)--(0.85,-0.2)--(0.85,-0.8)--(0.15,-0.8)-- cycle; \draw (0.5,-0.5) node {$L$}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.75] \draw[line width=1pt] (0,0)--(2,0)--(2,-1)--(0,-1)--cycle; \draw[line width=1pt] (1,0)--(1,-1); \filldraw[fill=gray!30, line width=1pt] (1.15,-0.2)--(1.85,-0.2)--(1.85,-0.8)--(1.15,-0.8)-- cycle; \draw (1.5,-0.5) node {$L$}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.75] \draw[line width=1pt] (0,0) -- (2,0) -- (2, -1) -- (0, -1) -- cycle; \draw[line width=1pt] (1,0) -- (1, -1); \filldraw[fill=gray!30, line width=1pt] (0.15,-0.2)--(1.85,-0.2)--(1.85,-0.8)--(0.15,-0.8)-- cycle; \draw (0.5,-0.5) node {$R$}; \draw (1.5,-0.5) node {$R$}; \draw[draw=gray, line width=1pt] (1,-0.23)--(1,-0.77); \end{tikzpicture} \end{center} and thus if all basic positions are legal, then $f_1=3$. \end{example} Suppose the weight of the Left pieces is $a$, the weight of the Right pieces is $b$, and without loss of generality $a\le b$. Then Left would be able to place at most $\lfloor n/a\rfloor$ pieces on a board of $n$ vertices.
If we place a mix of Left and Right pieces or just Right pieces, the number of pieces we are able to place will be at most as large. Thus if the $f$-vector of the legal complex is $(f_0, f_1, \ldots, f_k)$, then \[k\le\max\{\left\lfloor n/a\right\rfloor, \left\lfloor n/b\right\rfloor\}.\] \begin{proposition}\label{thm:weight1bound} For legal complexes corresponding to games on any board of $n$ vertices with pieces of weight 1, we have \begin{equation*} f_i\leq \binom{n}{i}2^i \end{equation*} for $i\ge 0$. \end{proposition} \begin{proof} We will consider the number of positions with $i$ pieces of weight 1 in the placement game that has no additional rules, i.e.\ the \textsc{Trivial} placement game. As we add rules to this game to get other placement games with pieces of weight 1, the number of positions decreases, thus the number of such positions in \textsc{Trivial} gives the maximum. In \textsc{Trivial}, there are $\binom{n}{i}$ ways to choose $i$ spaces to place pieces, and for each there are 2 choices: either a Left piece or a Right piece. Our claim now follows. \end{proof} We will now look at how playing pieces of specified weight on different classes of boards influences the $f$-vector of the corresponding legal complex. The classes of boards we specifically look at are paths, cycles, and complete graphs. Note that the $f$-vector of a weight game gives an upper bound on the $f$-vector of a game with the same weights. Thus the formulae for the weight games in the following sections give bounds for games with weight. In \cite{MSc}, these formulae are also generalized to $t$-player weight games. \section{Playing on the Path $P_n$} In this section, we study placement games played on the path $P_n$, $n\ge 1$, in which Left plays pieces of weight $a$ and Right pieces of weight $b$.
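For small boards, the counts $f_i$ can also be obtained by brute-force enumeration, which is a convenient check on the closed formulas derived in this section. A sketch (Python; the representation of pieces as closed intervals of vertices and the helper names are ours):

```python
from itertools import combinations

def path_f_vector(n, a, b):
    # basic positions of the weight game W(a,b) on P_n: a Left piece covers
    # the a consecutive vertices [i, i+a-1], a Right piece covers b of them;
    # a legal position is any set of pairwise disjoint pieces
    pieces = [('L', i, i + a - 1) for i in range(1, n - a + 2)]
    pieces += [('R', i, i + b - 1) for i in range(1, n - b + 2)]

    def disjoint(p, q):
        return p[2] < q[1] or q[2] < p[1]

    f = [1]  # f_0 = 1: the empty position
    k = 1
    while True:
        count = sum(
            1
            for combo in combinations(pieces, k)
            if all(disjoint(p, q) for p, q in combinations(combo, 2))
        )
        if count == 0:
            break
        f.append(count)
        k += 1
    return f
```

For $\weight{2}{3}$ on $P_5$ this returns $(1, 7, 5)$, matching the worked example later in this section.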
\begin{proposition}\label{thm:path1} If a simplicial complex is the legal complex of a weight game $\weight{a}{b}$ played on $P_n$ then \begin{equation} f_1= \begin{cases} 0 & \text{if } a,b>n,\\ n-a+1 & \text{if } a\le n \text{ and } b>n,\\ n-b+1 & \text{if } a> n \text{ and } b\le n,\\ 2n-a-b+2 & \text{if } a,b\le n. \end{cases} \end{equation} \end{proposition} \begin{proof} We are measuring the number of legal basic positions. If $a,b>n$, then neither Left nor Right can place a piece, thus $f_1=0$. If $n\ge a$, then placing one piece of weight $a$ on a strip of length $n$ is equivalent to placing one piece of weight $1$ (think of the left-most end of the piece) on a strip of length $n-(a-1)=n-a+1$, so the second and third cases follow. Similarly, for the final case \begin{align*} f_1&= (n-a+1)+(n-b+1)\\ &=2n-a-b+2.\qedhere \end{align*} \end{proof} \begin{proposition}\label{thm:path2} In a weight game $\weight{a}{b}$ played on $P_n$, the number of positions with one Left and one Right piece is \begin{equation*} N_{LR}= \begin{cases} 0 & \text{if } a+b>n,\\ 2\binom{n-a-b+2}{2} & \text{if } a+b\le n. \end{cases} \end{equation*} The number of positions with two Left pieces or two Right pieces, respectively, is \begin{equation*} N_{LL}= \begin{cases} 0 & \text{if } 2a>n,\\ \binom{n-2a+2}{2} & \text{if } 2a\le n; \end{cases}\qquad N_{RR}= \begin{cases} 0 & \text{if } 2b>n,\\ \binom{n-2b+2}{2} & \text{if } 2b\le n. \end{cases} \end{equation*} For the legal complex of such a game we have \begin{equation}\label{eq:path2final} f_2= N_{LR}+N_{LL}+N_{RR}. \end{equation} \end{proposition} \begin{proof} To find $N_{LR}$ when $n\ge a+b$, we only consider the case in which the Left piece is the left-most piece. The other case is symmetric. We will first place the Left piece in position $i$. To be able to fit a Right piece to the right of this, we have $1\le i\le n-a-b+1$.
\begin{figure}[!ht] \begin{center} \begin{tikzpicture} \draw (0,1)--(9,1)--(9,0)--(0,0)--(0,1); \draw (3,1)--(3,0); \draw (3.5,1)--(3.5,0); \draw (5.5,1)--(5.5,0); \node at (3.25,0.5) {$i$}; \foreach \x in {0.5,1,1.5,2,2.5,4,4.5,5,6,6.5,7,7.5,8,8.5}{ \draw[gray!50] (\x,0.9)--(\x,0.1);} \draw [decorate,decoration={brace,amplitude=10pt,raise=4pt},yshift=0pt] (0,1) -- (3.5,1) node [black,midway,yshift=0.8cm] {$i$}; \draw [decorate,decoration={brace,amplitude=10pt,mirror,raise=4pt},yshift=0pt] (3.5,0) -- (5.5,0) node [black,midway,yshift=-0.8cm] {$a-1$}; \draw [decorate,decoration={brace,amplitude=10pt,raise=4pt},yshift=0pt] (3.5,1) -- (9,1) node [black,midway,yshift=0.8cm] {$n-i$}; \draw [decorate,decoration={brace,amplitude=10pt,mirror,raise=4pt},yshift=0pt] (5.5,0) -- (9,0) node [black,midway,yshift=-0.8cm] {$n-a+1-i$}; \end{tikzpicture} \end{center} \caption{Proof of Proposition \ref{thm:path2}: Placing a Piece of Weight $a$ on a Path} \label{fig:pathbreakup} \end{figure} The strip to the right then has length $n-(i+a-1)=n-a+1-i$ (see Figure \ref{fig:pathbreakup}). Thus we have $n-a+1-i-(b-1)=n-a-b+2-i$ choices to place the Right piece (see Proposition \ref{thm:path1}). Thus the number of positions with Left on the left and Right on the right is \begin{align*} &\sum_{i=1}^{n-a-b+1} (n-a-b+2-i)\\ =&(n-a-b+1)(n-a-b+2)-\sum_{i=1}^{n-a-b+1} i\\ =&(n-a-b+1)(n-a-b+2)-\frac{(n-a-b+1)(n-a-b+2)}{2}\\ =&\frac{(n-a-b+1)(n-a-b+2)}{2}\\ =&\binom{n-a-b+2}{2}. \end{align*} Then $N_{LR}=2\binom{n-a-b+2}{2}$. Similarly, the numbers of positions with two Left pieces (for $n\ge 2a$) and with two Right pieces (for $n\ge 2b$), respectively, are \[ N_{LL}=\binom{n-2a+2}{2}\qquad N_{RR}=\binom{n-2b+2}{2}.\] Since these are the only three possibilities for pairs of pieces, Equation \ref{eq:path2final} follows immediately. \end{proof} It is easy to see that if $a=b=1$, then the previous two bounds are \begin{align*} f_1&=2n;\\ f_2&=4\binom{n}{2}.
\end{align*} These are the bounds given in Proposition \ref{thm:weight1bound}. \begin{example} Consider $\weight{2}{3}$ on the path $P_5$. Let $x_i$ represent a Left piece occupying the spaces $i$ and $i+1$, and similarly for $y_i$. For example, the position in Figure \ref{fig:pathpos} is represented by $x_1y_3$. \begin{figure}[!ht] \begin{center} \begin{tikzpicture}[scale=1] \draw[line width=1pt] (0,0) -- (5,0) -- (5, -1) -- (0, -1) -- cycle; \draw[line width=1pt] (1,0) -- (1, -1); \draw[line width=1pt] (2,0) -- (2, -1); \draw[line width=1pt] (3,0) -- (3, -1); \draw[line width=1pt] (4,0) -- (4, -1); \filldraw[fill=gray!30, line width=1pt] (0.15,-0.2)--(1.85,-0.2)--(1.85,-0.8)--(0.15,-0.8)-- cycle; \filldraw[fill=gray!30, line width=1pt] (2.15,-0.2)--(4.85,-0.2)--(4.85,-0.8)--(2.15,-0.8)-- cycle; \draw (0.5,-0.5) node {$L$}; \draw (1.5,-0.5) node {$L$}; \draw (2.5,-0.5) node {$R$}; \draw (3.5,-0.5) node {$R$}; \draw (4.5,-0.5) node {$R$}; \draw[draw=gray, line width=1pt] (1,-0.23)--(1,-0.77); \draw[draw=gray, line width=1pt] (3,-0.23)--(3,-0.77); \draw[draw=gray, line width=1pt] (4,-0.23)--(4,-0.77); \end{tikzpicture} \end{center} \caption{An Example Position for $\weight{2}{3}$ on $P_5$} \label{fig:pathpos} \end{figure} The corresponding simplicial complex is given in Figure \ref{fig:pathsc}. \begin{figure}[!ht] \begin{center} \begin{tikzpicture}[scale=1] \draw[line width=2pt] (0,0) .. controls (2,2) and (4,2) .. (6,0); \draw[line width=2pt] (0,0) .. controls (2,1) .. (4,0); \draw[line width=2pt] (2,0) .. controls (4,1) .. 
(6,0); \draw[line width=2pt] (0,0)--(5,-2); \draw[line width=2pt] (6,0)--(1,-2); \filldraw (0,0) circle (0.1cm); \filldraw (2,0) circle (0.1cm); \filldraw (4,0) circle (0.1cm); \filldraw (6,0) circle (0.1cm); \filldraw (1,-2) circle (0.1cm); \filldraw (3,-2) circle (0.1cm); \filldraw (5,-2) circle (0.1cm); \draw (-0.5,0) node {$x_1$}; \draw (1.5,0) node {$x_2$}; \draw (4.5,0) node {$x_3$}; \draw (6.5,0) node {$x_4$}; \draw (1,-2.5) node {$y_1$}; \draw (3,-2.5) node {$y_2$}; \draw (5,-2.5) node {$y_3$}; \end{tikzpicture} \end{center} \caption{The Legal Complex $\Delta_{\weight{2}{3},P_5}$} \label{fig:pathsc} \end{figure} By Propositions \ref{thm:path1} and \ref{thm:path2} we have \begin{align*} f_0&=1,\\ f_1&=2n-a-b+2=7,\\ f_2&=\binom{n-2a+2}{2}+2\binom{n-a-b+2}{2}=5, \end{align*} and since $\max\{\left\lfloor n/a\right\rfloor, \left\lfloor n/b\right\rfloor\}=2$, we get the $f$-vector $(1, 7, 5)$, which can be verified from the simplicial complex. To compare this with the Kruskal-Katona bound, we first need to find the $i$-canonical representations and calculate the $j$th pseudopowers. \begin{center} \begin{tabular}{ll} $f_1=\binom{7}{1}$ & $f_1^{(2)}=\binom{7}{2}=21$\\ $f_2=\binom{3}{2}+\binom{2}{1}$ & $f_2^{(3)}=\binom{3}{3}+\binom{2}{2}=2$\\ & $f_2^{(1)}=\binom{3}{1}+\binom{2}{0}=4$ \end{tabular} \end{center} Then $f_2=5<f_1^{(2)}=21$, $f_3=0<f_2^{(3)}=2$, and $f_1=7>f_2^{(1)}=4$, showing that the formulae in Propositions \ref{thm:path1} and \ref{thm:path2} give, at least for this example, improved necessary conditions for a vector to be the $f$-vector of a legal complex of a placement game played on a path over the ones given in the Kruskal-Katona theorem. \end{example} We will now show that for fixed $a$ and $b$ and sufficiently large $n$, the bound in Proposition \ref{thm:path2} on $f_2$ is better than the Kruskal-Katona bound.
By the Kruskal-Katona theorem we have \begin{align*} f_2\le f_1^{(2)}&=\binom{2n-a-b+2}{2}\\ &=\frac{1}{2}\left[4n^2+n(6-4a-4b)+g(a,b)\right], \end{align*} where $g(a,b)$ is a function in $a$ and $b$, whereas Proposition \ref{thm:path2} gives \begin{align*} f_2=&\binom{n-2a+2}{2}+\binom{n-2b+2}{2}+2\binom{n-a-b+2}{2}\\ =&\frac{1}{2}\left[4n^2+2n(6-4a-4b)+h(a,b)\right], \end{align*} where $h(a,b)$ is a function in $a$ and $b$. Since $a,b\ge 1$, and thus $6-4a-4b<0$, we have $\frac{1}{2}\left[4n^2+2n(6-4a-4b)+h(a,b)\right]< \frac{1}{2}\left[4n^2+n(6-4a-4b)+g(a,b)\right]$ for sufficiently large $n$, showing that as $n$ grows larger our bound becomes increasingly better than the Kruskal-Katona bound. \begin{remark}\label{remark:octal} The game \textsc{O12} is the weight game $\weight{1}{2}$. It is mentioned by Brown et al.\ in \cite{GamePol} that this game played on a path is equivalent to the partizan octal game where Left removes one piece and Right two, and both have the possibility to split the heap. It is easy to see that weight games played on a path are all equivalent to a specific partizan octal game. \end{remark} \section{Playing on the Cycle $C_n$} Consider Left playing pieces of weight $a$ and Right pieces of weight $b$ on a cycle of length $n\ge 3$. For this board, the `left' end of a piece is the end in counter-clockwise direction. \begin{proposition}\label{thm:cycle1} If a simplicial complex is the legal complex of $\weight{a}{b}$ played on $C_n$ then \begin{equation} f_1= \begin{cases} 0 & \text{if } a,b>n,\\ n & \text{if either } a\le n \text{ or } b\le n \text{ but not both},\\ 2n & \text{if } a,b\le n. \end{cases} \end{equation} \end{proposition} \begin{proof} The left end of a piece can be placed on any of the $n$ spaces if the piece's weight is at most $n$, no matter whether it is a Right or Left piece.
\end{proof} \begin{proposition}\label{thm:cycle2} If a simplicial complex is the legal complex of $\weight{a}{b}$ played on $C_n$ then \begin{equation} f_2= N_{LL}+N_{LR}+N_{RR} \end{equation} where \begin{equation*} N_{LL}= \begin{cases} 0 & \text{if } 2a>n,\\ \frac{n(n-2a+1)}{2} & \text{if } 2a\le n, \end{cases}\qquad N_{RR}= \begin{cases} 0 & \text{if } 2b>n,\\ \frac{n(n-2b+1)}{2} & \text{if } 2b\le n, \end{cases} \end{equation*} are the number of positions with two Left pieces, respectively two Right pieces, and \begin{equation*} N_{LR}= \begin{cases} 0 & \text{if } a+b>n,\\ n(n-a-b+1) & \text{if } a+b\le n, \end{cases} \end{equation*} is the number of positions with one Left and one Right piece. \end{proposition} \begin{proof} We will first look at the number of positions with two Left pieces if $n\ge 2a$. There are $n$ choices for placing the first piece. Placing the second piece is equivalent to placing one piece on the path $P_{n-a}$, i.e. there are $(n-a)-(a-1)$ choices for placing the second piece. Due to symmetry, there are then $n(n-2a+1)/2$ positions of this form. Similarly, the number of positions with two Right pieces is $n(n-2b+1)/2$ if $n\ge 2b$. To count the number of positions with one Left and one Right piece when $n\ge a+b$, we first place the Left, then the Right piece. There are $n$ choices for placing the Left piece. Placing the Right piece is then equivalent to placing a piece of weight $b$ on the path $P_{n-a}$, i.e. there are $(n-a)-(b-1)$ choices for placing the second piece. Thus, there are $n(n-a-b+1)$ positions of this form. \end{proof} If $a=b=1$, then the previous two bounds are \begin{align*} f_1&=2n;\\ f_2&=4\binom{n}{2}, \end{align*} which are the bounds given in Proposition \ref{thm:weight1bound}. \begin{example} Consider $\weight{2}{3}$ on the cycle $C_5$. Let $x_i$ represent a Left piece whose left end is on space $i$, and similarly for $y_i$. \Eg the position in Figure \ref{fig:C5ex} is represented by $x_1y_3$.
\begin{figure}[!ht] \begin{center} \begin{tikzpicture}[scale=0.5] \node[shape=circle, draw, line width=1pt] (1) at (0,3) {$L$}; \node[shape=circle, draw, line width=1pt] (3) at (1.763,-2.427) {$R$}; \node[shape=circle, draw, line width=1pt] (4) at (-1.763,-2.427) {$R$}; \node[shape=circle, draw, line width=1pt] (2) at (2.853,0.927) {$L$}; \node[shape=circle, draw, line width=1pt] (5) at (-2.853,0.927) {$R$}; \draw[line width=3pt] (1)--(2); \draw (2)--(3); \draw[line width=3pt] (3)--(4); \draw[line width=3pt] (4)--(5); \draw (5)--(1); \node at (0, 1.5) {$1$}; \node at (1.6,0.5) {$2$}; \node at (1,-1.3) {$3$}; \node at (-1, -1.3) {$4$}; \node at (-1.6, 0.5) {$5$}; \end{tikzpicture} \end{center} \caption{An Example Position for $\weight{2}{3}$ on $C_5$} \label{fig:C5ex} \end{figure} \noindent The corresponding legal complex is given in Figure \ref{fig:cyclesc}. \begin{figure}[!h] \begin{center} \begin{tikzpicture}[scale=1] \draw[line width=2pt] (0,3) -- (0,2); \draw[line width=2pt] (2.853,0.927) -- (1.902,0.618); \draw[line width=2pt] (1.763,-2.427) -- (1.176,-1.618); \draw[line width=2pt] (-1.763,-2.427) -- (-1.176,-1.618); \draw[line width=2pt] (-2.853,0.927) -- (-1.902,0.618); \draw[line width=2pt] (0,2)--(1.176,-1.618); \draw[line width=2pt] (0,2)--(-1.176,-1.618); \draw[line width=2pt] (1.176,-1.618)--(-1.902,0.618); \draw[line width=2pt] (1.902,0.618)--(-1.902,0.618); \draw[line width=2pt] (-1.176,-1.618)--(1.902,0.618); \filldraw (0,2) circle (0.1cm); \filldraw (1.902,0.618) circle (0.1cm); \filldraw (-1.902,0.618) circle (0.1cm); \filldraw (1.176,-1.618) circle (0.1cm); \filldraw (-1.176,-1.618) circle (0.1cm); \filldraw (0,3) circle (0.1cm); \filldraw (2.853,0.927) circle (0.1cm); \filldraw (1.763,-2.427) circle (0.1cm); \filldraw (-2.853,0.927) circle (0.1cm); \filldraw (-1.763,-2.427) circle (0.1cm); \draw (0.75,3) node {$y_3$}; \draw (0.5,2) node {$x_1$}; \draw (1.676,-1.618) node {$x_3$}; \draw (-1.676,-1.618) node {$x_4$}; \draw (2.302,0.218) node 
{$x_2$}; \draw (-2.302,0.218) node {$x_5$}; \draw (3.253,0.527) node {$y_4$}; \draw (-3.253,0.527) node {$y_2$}; \draw (2.5,-2.427) node {$y_5$}; \draw (-2.5,-2.427) node {$y_1$}; \end{tikzpicture} \end{center} \caption{The Legal Complex $\Delta_{\weight{2}{3},C_5}$} \label{fig:cyclesc} \end{figure} By Propositions \ref{thm:cycle1} and \ref{thm:cycle2} we have \begin{align*} f_0&=1\\ f_1&=2n=10,\\ f_2&=\frac{n(n-2a+1)}{2}+n(n-a-b+1)=10, \end{align*} and since $\max\{\left\lfloor n/a\right\rfloor, \left\lfloor n/b\right\rfloor\}=2$, we get the $f$-vector $(1, 10, 10)$, which can be verified from the simplicial complex. We will compare these with the Kruskal-Katona bound. The $i$-canonical representations and $j$th pseudopowers are: \begin{center} \begin{tabular}{ll} $f_1=\binom{10}{1}$ & $f_1^{(2)}=\binom{10}{2}=45$\\ $f_2=\binom{5}{2}$ & $f_2^{(3)}=\binom{5}{3}=10$\\ & $f_2^{(1)}=\binom{5}{1}=5$ \end{tabular} \end{center} Then $f_2=10<f_1^{(2)}=45$, $f_3=0<f_2^{(3)}=10$, and $f_1=10>f_2^{(1)}=5$, showing that for this example Propositions \ref{thm:cycle1} and \ref{thm:cycle2} give improved necessary conditions for a vector to be the $f$-vector of a legal complex of a placement game played on a cycle over the ones given in the Kruskal-Katona theorem. \end{example} Similar to placement games on a path, we have that for fixed $a$ and $b$ and sufficiently large $n$ the bound in Proposition \ref{thm:cycle2} on $f_2$ is better than the Kruskal-Katona bound. By the Kruskal-Katona theorem we have \begin{align*} f_2\le f_1^{(2)}&=\binom{2n}{2}\\ &=\frac{1}{2}\left[4n^2+n(-2)\right], \end{align*} whereas Proposition \ref{thm:cycle2} gives \begin{align*} f_2= &\frac{n(n-2a+1)}{2}+\frac{n(n-2b+1)}{2}+n(n-a-b+1)\\ =&\frac{1}{2}\left[4n^2+n(4-4a-4b)\right]\\ <&\frac{1}{2}\left[4n^2+n(-2)\right], \end{align*} since $a,b\ge 1$ implies $4-4a-4b\le -4$, showing that as $n$ grows larger our bound becomes increasingly better than the Kruskal-Katona bound. 
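The cycle formulas admit the same kind of brute-force confirmation as those for the path. The following Python sketch (ours, purely illustrative) enumerates arcs on $C_n$ and checks Propositions \ref{thm:cycle1} and \ref{thm:cycle2} for $\weight{2}{3}$ on $C_5$, under the assumption $a,b<n$ so that distinct left ends give distinct positions.

```python
from itertools import combinations

def cycle_positions(n, w):
    # Arcs of w consecutive spaces on the cycle C_n (assumes w < n,
    # so distinct left ends give distinct occupied sets).
    return [frozenset((i + k) % n for k in range(w)) for i in range(n)]

def cycle_f1_f2(n, a, b):
    L, R = cycle_positions(n, a), cycle_positions(n, b)
    f1 = len(L) + len(R)
    f2 = sum(1 for p, q in combinations(L, 2) if not p & q) \
       + sum(1 for p, q in combinations(R, 2) if not p & q) \
       + sum(1 for p in L for q in R if not p & q)
    return f1, f2

n, a, b = 5, 2, 3
f1, f2 = cycle_f1_f2(n, a, b)
N_LL = n * (n - 2*a + 1) // 2 if 2*a <= n else 0
N_RR = n * (n - 2*b + 1) // 2 if 2*b <= n else 0
N_LR = n * (n - a - b + 1) if a + b <= n else 0
assert (f1, f2) == (2*n, N_LL + N_RR + N_LR)
print((f1, f2))  # (10, 10)
```

For $C_5$ with $a=2$, $b=3$ this recovers the $f$-vector entries $f_1=10$ and $f_2=10$ computed in the example, with $N_{RR}=0$ since $2b>n$.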
\section{Playing on the Complete Graph $K_n$} Finally, we will consider placement games played on a complete graph of $n$ vertices in which Left places pieces of weight $a$ and Right pieces of weight $b$. \begin{proposition}\label{thm:complete} If a simplicial complex is the legal complex of $\weight{a}{b}$ played on $K_n$ then \begin{equation}\label{eq:completeab} f_k=\sum_{l=0}^k\left(\frac{\displaystyle\prod_{i=0}^{k-l-1}\binom{n-ia}{a}}{(k-l)!}\right)\left(\frac{\displaystyle\prod_{j=0}^{l-1}\binom{n-(k-l)a-jb}{b}}{l!}\right) \end{equation} for $k\ge 0$. \end{proposition} \begin{proof} Playing a piece of weight $a$ on the complete graph with $n$ vertices is equivalent to deleting $a$ vertices from the graph. Thus placing a second piece on the graph is equivalent to placing a piece on the complete graph on $n-a$ vertices. Also, since every pair of vertices is connected, playing a piece of weight $a$ is equivalent to playing $a$ pieces of weight $1$, thus there are $\binom{n}{a}$ choices for placing the piece. Thus playing $s$ pieces of weight $a$ we have \[\frac{\prod_{i=0}^{s-1}\binom{n-ia}{a}}{s!}\] choices. Then playing $k-l$ pieces of weight $a$ and $l$ pieces of weight $b$ (assuming without loss of generality we place the pieces of weight $a$ first) we have \[\frac{\prod_{i=0}^{k-l-1}\binom{n-ia}{a}}{(k-l)!}\;\frac{\prod_{j=0}^{l-1}\binom{n-(k-l)a-jb}{b}}{l!}\] different positions. To get the total number of positions with $k$ pieces played, we let $l$ range from $0$ to $k$ and add the terms, giving Equation \ref{eq:completeab}. \end{proof} If $a=b$, then the previous bound becomes \begin{align*} f_k&=\sum_{l=0}^k\frac{n(n-1)\cdots(n-(k-l)a+1)(n-(k-l)a)\cdots(n-ka+1)}{(k-l)!l!(a!)^k}\\ &=\frac{n!}{(n-ka)!(a!)^k}\sum_{l=0}^k\frac{1}{k!}\binom{k}{l}\\ &=\frac{n!}{(n-ka)!k!(a!)^k}\sum_{l=0}^k\binom{k}{l}\\ &=\frac{n!}{(n-ka)!k!(a!)^k}2^k. 
\end{align*} If $a=b=1$, then this becomes \begin{align*} f_k&=\frac{n!}{(n-k)!k!}2^k\\ &=\binom{n}{k}2^k \end{align*} which is the bound given in Proposition \ref{thm:weight1bound}. If we assume without loss of generality that $a\le b$, then we have \begin{align*} f_k&=\sum_{l=0}^k\frac{n(n-1)\cdots(n-(k-l)a-lb+1)}{(k-l)!l!(a!)^{k-l}(b!)^l}\\ &=\frac{n!}{k!}\sum_{l=0}^k\frac{\binom{k}{l}}{(a!)^{k-l}(b!)^l(n-(k-l)a-lb)!}\\ &\le\frac{n!}{k!}\sum_{l=0}^k\frac{\binom{k}{l}}{(a!)^k(n-kb)!}\\ &=\frac{n!}{(n-kb)!k!(a!)^k}2^k. \end{align*} We can similarly find a lower bound. Thus \[\frac{n!}{(n-ka)!k!(b!)^k}2^k\le f_k\le \frac{n!}{(n-kb)!k!(a!)^k}2^k.\] For fixed $a, b,$ and $k$, we then have \[ n(n-1)\cdots(n-ka+1)\frac{2^k}{k!(b!)^k}\le f_k\le n(n-1)\cdots(n-kb+1)\frac{2^k}{k!(a!)^k}, \] and since \[n(n-1)\cdots(n-ka+1)\ge (n-ka+1)^{ka} \text{ and } n(n-1)\cdots(n-kb+1)\le n^{kb},\] this implies \[ C'(n-ka+1)^{ka}=C'n^{ka}+O(n^{ka-1})\le f_k\le Cn^{kb}, \] where $C$ and $C'$ are constants depending on $a$ and $k$, respectively $b$ and $k$. Also note that $\weight{a}{b}$ played on the complete graph $K_n$ is the least restrictive game on the most connected board. Thus the formula in Proposition \ref{thm:complete} gives upper bounds for any placement game with weights on any board. \begin{example}\label{ex:complete} Consider $\weight{2}{2}$ and let the board be the complete graph $K_4$. Let $x_{i,j}$ represent a Left piece occupying the vertices $i$ and $j$, and similarly for $y_{i,j}$. For example the position in Figure \ref{fig:completeex} is represented by $x_{1,4}y_{2,3}$. 
\begin{figure}[!ht] \begin{center} \begin{tikzpicture} \node[shape=circle, draw] (1) at (0,0) {$L$}; \node[shape=circle, draw] (2) at (2,0) {$L$}; \node[shape=circle, draw] (3) at (0,-2) {$R$}; \node[shape=circle, draw] (4) at (2,-2) {$R$}; \draw (1) to (2) to (3) to (4) to (1) to (3); \draw (2) to (4); \node at (-0.75,0) {$1$}; \node at (-0.75,-2) {$2$}; \node at (2.75,0) {$4$}; \node at (2.75,-2) {$3$}; \end{tikzpicture} \end{center} \caption{An Example Position for $\weight{2}{2}$ on $K_4$} \label{fig:completeex} \end{figure} The corresponding simplicial complex is given in Figure \ref{fig:completesc}. \begin{figure}[!ht] \begin{center} \begin{tikzpicture}[scale=1.5] \draw[line width=1.3] (0,0) -- (0,1) -- (1,1) -- (1,0)--cycle; \draw[line width=1.3] (2,0) -- (3,0) -- (3,1) -- (2,1) -- cycle; \draw[line width=1.3] (4,0) -- (5,0) -- (5,1) -- (4,1) -- cycle; \filldraw (0,0) circle (0.066cm); \filldraw (0,1) circle (0.066cm); \filldraw (1,0) circle (0.066cm); \filldraw (1,1) circle (0.066cm); \filldraw (2,0) circle (0.066cm); \filldraw (2,1) circle (0.066cm); \filldraw (3,0) circle (0.066cm); \filldraw (3,1) circle (0.066cm); \filldraw (4,0) circle (0.066cm); \filldraw (4,1) circle (0.066cm); \filldraw (5,0) circle (0.066cm); \filldraw (5,1) circle (0.066cm); \draw (0,1.3) node {$x_{1,2}$}; \draw (1,1.3) node {$x_{3,4}$}; \draw (2,1.3) node {$x_{1,3}$}; \draw (3,1.3) node {$x_{2,4}$}; \draw (4,1.3) node {$x_{1,4}$}; \draw (5,1.3) node {$x_{2,3}$}; \draw (1,-0.4) node {$y_{1,2}$}; \draw (0,-0.4) node {$y_{3,4}$}; \draw (3,-0.4) node {$y_{1,3}$}; \draw (2,-0.4) node {$y_{2,4}$}; \draw (5,-0.4) node {$y_{1,4}$}; \draw (4,-0.4) node {$y_{2,3}$}; \end{tikzpicture} \end{center} \caption{The Legal Complex $\Delta_{\weight{2}{2},K_4}$} \label{fig:completesc} \end{figure} By Proposition \ref{thm:complete} we have \begin{align*} f_0&=1\\ f_1&=\binom{n}{a}+\binom{n}{b}=12,\\ 
f_2&=\frac{\binom{n}{a}\binom{n-a}{a}}{2}+\binom{n}{a}\binom{n-a}{b}+\frac{\binom{n}{b}\binom{n-b}{b}}{2}=12, \end{align*} and since $\max\{\left\lfloor n/a\right\rfloor, \left\lfloor n/b\right\rfloor\}=2$, we get the $f$-vector $(1, 12, 12)$, which can be verified from the simplicial complex. The $i$-canonical representations and the $j$th pseudopowers are: \begin{center} \begin{tabular}{ll} $f_1=\binom{12}{1}$ & $f_1^{(2)}=\binom{12}{2}=66$\\ $f_2=\binom{5}{2}+\binom{2}{1}$ & $f_2^{(3)}=\binom{5}{3}+\binom{2}{2}=11$\\ & $f_2^{(1)}=\binom{5}{1}+\binom{2}{0}=6$ \end{tabular} \end{center} Then $f_2=12<f_1^{(2)}=66$, $f_3=0<f_2^{(3)}=11$, and $f_1=12>f_2^{(1)}=6$, showing that for this example the formula in Proposition \ref{thm:complete} gives improved necessary conditions for a vector to be the $f$-vector of a legal complex. \end{example} We will now show that for fixed $a$ and $b$ and sufficiently large $n$, the bound in Proposition \ref{thm:complete} for $f_2$ is better than the Kruskal-Katona bound. By the Kruskal-Katona theorem we have \begin{align*} f_2\le f_1^{(2)}&=\binom{\binom{n}{a}+\binom{n}{b}}{2}\\ &=\frac{1}{2}\left[\binom{n}{a}\left(\binom{n}{a}+2\binom{n}{b}-1\right)+\binom{n}{b}\left(\binom{n}{b}-1\right)\right], \end{align*} whereas Proposition \ref{thm:complete} gives \begin{align*} f_2= &\frac{1}{2}\binom{n}{a}\binom{n-a}{a}+\frac{1}{2}\binom{n}{b}\binom{n-b}{b}+\binom{n}{a}\binom{n-a}{b}\\ =&\frac{1}{2}\left[\binom{n}{a}\left(\binom{n-a}{a}+2\binom{n-a}{b}\right)+\binom{n}{b}\binom{n-b}{b}\right]. \end{align*} Recall that $f(n)= O(g(n))$ means that $f(n)\le Cg(n)$ for some positive constant $C$. Then $f(n)= O(n^k)$ means that $f(n)$ is bounded by a polynomial of degree at most $k$. Also recall that $f(n)=g(n)+O(n^k)$ means $f(n)-g(n)= O(n^k)$. 
Since \begin{align*} \binom{n}{i}&=\frac{1}{i!}\left(n^i-n^{i-1}\frac{i(i-1)}{2}+O(n^{i-2})\right)\text{ for } i\ge 2\\ \binom{n-i}{j}&=\frac{1}{j!}\left(n^j-n^{j-1}\frac{j(j+2i-1)}{2}+O(n^{j-2})\right)\text{ for } j\ge 2\\ \end{align*} it easily follows that $\binom{n-a}{a}+2\binom{n-a}{b}\le\binom{n}{a}+2\binom{n}{b}-1$ and $\binom{n-b}{b}\le\binom{n}{b}-1$. Thus \begin{align*} &\frac{1}{2}\left[\binom{n}{a}\left(\binom{n-a}{a}+2\binom{n-a}{b}\right)+\binom{n}{b}\binom{n-b}{b}\right]\\ &< \frac{1}{2}\left[\binom{n}{a}\left(\binom{n}{a}+2\binom{n}{b}-1\right)+\binom{n}{b}\left(\binom{n}{b}-1\right)\right], \end{align*} showing that the new bound is better than the Kruskal-Katona bound as $n$ grows larger. We have not compared the bounds for $f_k$ with $k>2$ since it is difficult to find the $i$-canonical representation of $f_{k-1}$ in this case. \section{Discussion} A general question is to find sufficient conditions for a simplicial complex to be a legal complex. Since it is already not easy to find necessary conditions for a vector to be the $f$-vector of a legal complex, this seems to be very hard and much further work is needed.
https://arxiv.org/abs/2108.02914
The minimal genus problem for right angled Artin groups
We investigate the minimal genus problem for the second homology of a right angled Artin group (RAAG). Firstly, we present a lower bound for the minimal genus of a second homology class, equal to half the rank of the corresponding cap product matrix. We show that for complete graphs, trees, and complete bipartite graphs, this bound is an equality, and furthermore in these cases the minimal genus can always be realised by a disjoint union of tori. Additionally, we give a full characterisation of classes that are representable by a single torus. However, the minimal genus of a second homology class of a RAAG is not always realised by a disjoint union of tori, as an example we construct in the pentagon shows.
\section{Introduction} In this paper, we investigate the minimal genus of a second homology class of a right angled Artin group. We always consider integral homology, and say that a continuous map~$f\colon \Sigma \to X$ from a compact (possibly disconnected) oriented surface \emph{represents a second homology class~$\alpha$} of a space $X$ if it maps the fundamental class of~$\Sigma$ to~$\alpha$. We define the \emph{minimal genus} of~$\alpha$, denoted by $\operatorname{gen}(\alpha)$, to be the minimal genus of a surface~$\Sigma$ representing~$\alpha$ in this way, where the genus of a disconnected surface is defined as the sum of the genera of the connected components. When we talk about the minimal genus of a second homology class of a group~$G$, we mean the minimal genus of a second homology class in~$BG$---the classifying space for the group. Note that homotopy equivalences preserve the minimal genus, hence any model for the classifying space yields the same minimal genus. We will restrict ourselves to the case where~$G$ is a right angled Artin group, also known as a RAAG. Recall that a right angled Artin group~$A_\Gamma$ is a group associated to a (finite) simple graph~$\Gamma$ whose generators are given by the vertices~$V(\Gamma)$ and commuting relations by the edges~$E(\Gamma)$, i.e.~the presentation is: \[ A_\Gamma = \langle\,V(\Gamma)~|~st = ts~\forall\,\{s,t\} \in E(\Gamma)\,\rangle. \] Extreme examples are the free abelian group~$\mathbb{Z}^n$, corresponding to $\Gamma$ being the complete graph on $n$ vertices, and the free group~$F_n$, corresponding to $\Gamma$ being $n$ disjoint vertices. The second integral homology $H_2(A_\Gamma)$ can be identified with $\mathbb{Z}^{E(\Gamma)}$, and we call the \emph{support} of the homology class~$\alpha \in H_2(A_\Gamma)$ the union of edges whose corresponding entries in $\alpha \in \mathbb{Z}^{E(\Gamma)}$ (as a vector) are not zero.
In the specific case of RAAGs, the minimal genus of~$\alpha \in H_2(A_\Gamma)$ is bounded above by the number of edges in the support of~$\alpha$. There is a very general lower bound for the minimal genus of a second homology class. Namely, given a second homology class $\alpha \in H_2(X)$ of a topological space $X$, consider the \emph{cap product map}~$\alpha \cap - \colon H^1(X) \to H_1(X)$. It follows from naturality of the cap product that the image of~$\alpha \cap -$ must lie in the image of the first homology of any representative of $\alpha$. Since the genus is half the rank of the first homology of a surface, this yields the \emph{cap product inequality}: \[ \frac{1}{2} \, \operatorname{rank}(\alpha \cap -) \leq \operatorname{gen}(\alpha). \] The anti-symmetry of the cup product implies that $\operatorname{rank}(\alpha \cap -)$ is always an even integer. This inequality is in general far from an equality (as an example take the classifying space for any perfect group with non-vanishing second homology), but we were able to show that it is indeed an equality for large families of RAAGs. \subsection{Results} It was shown in \cite{KastenholzPedron} that the minimal genus only depends on the fundamental group, in the sense that the minimal genus of~$\alpha \in H_2(X)$ is the same as the minimal genus of the image of~$\alpha$ in~$H_2(\pi_1(X))$. All of our results could therefore be phrased in terms of second homology classes of spaces with fundamental group the specified RAAG. However, since in practice we prove these statements for a classifying space, we state the results in terms of group homology. For any RAAG, we introduce a diagrammatic description of a class $\alpha \in H_2(A_\Gamma)$ and this provides a matrix description of $\alpha \cap -$, called the \emph{connection matrix} and denoted by~$M_\alpha$.
The case where $\Gamma$ is a complete graph, i.e.~$A_\Gamma \cong \mathbb{Z}^n$, serves as a guiding example for the minimal genus problem for all RAAGs. Since every separating curve in a surface is a product of commutators in~$\pi_1$, it follows that the minimal genus for any space with abelian fundamental group will be realised by a disjoint union of tori. Using this, we can translate the minimal genus problem for $\mathbb{Z}^n$ to an algebraic problem about skew-symmetric integer matrices. We obtain the following: \begin{restate}{Theorem}{abcthm - torus} Let $\Gamma$ be a complete graph---i.e.~$A_\Gamma\cong\mathbb{Z}^n$ and the~$n$-torus is a model for the classifying space---and $\alpha \in H_2(A_\Gamma)$. Then the minimal genus $\operatorname{gen}(\alpha)$ is equal to the cap bound $\frac{1}{2} \, \operatorname{rank}(M_\alpha)$. Furthermore, the minimal genus is always realised by a disjoint union of tori. \end{restate} This complete solution to the minimal genus problem for $\mathbb{Z}^n$ leads to the following questions, which we (partially) answer in this paper: \begin{question}\label{question - cap bound} Is the cap product inequality always an equality for a RAAG? \end{question} \begin{question}\label{question - realised by dj tori} Does every class in the second homology of a RAAG have a minimal genus representative that is a disjoint union of tori? \end{question} We were able to answer both questions for two large classes of RAAGs: \begin{restate}{Theorem}{abcthm - cap bound equality} Let $\Gamma$ be a complete bipartite graph or a tree and $\alpha \in H_2(A_\Gamma)$. Then the minimal genus $\operatorname{gen}(\alpha)$ is equal to the cap bound $\frac{1}{2} \operatorname{rank}(M_\alpha)$. Furthermore, the minimal genus can always be realised by a disjoint union of tori. \end{restate} In the 1980s, Droms~\cite{Droms} classified all RAAGs that appear as fundamental groups of 3-manifolds: $\Gamma$ must be a disjoint union of trees and triangles.
Special cases of Theorems~\ref{abcthm - torus} and \ref{abcthm - cap bound equality} come together with Corollary~3.6 in \cite{KastenholzPedron} to give us the following corollary. \setcounter{abcthm}{2} \begin{abccor}\label{abccor-droms} Let~$X$ be a 3-manifold such that~$\pi_1(X)$ is a RAAG. Then for any $\alpha \in H_2(X)$, the minimal genus $\operatorname{gen}(\alpha)$ is equal to the cap bound $\frac{1}{2} \operatorname{rank}(M_\alpha)$. Furthermore, the minimal genus can always be realised by a disjoint union of tori. \end{abccor} \setcounter{abcthm}{0} Although we were not able to answer Question~\ref{question - cap bound} in general, we showed that the cap bound completely determines which classes are representable by a single torus. \begin{restate}{Theorem}{abcthm - n partite torus} Let~$\Gamma$ be any simple graph. Then a nontrivial second homology class $\alpha \in H_2(A_\Gamma)$ is representable by a torus if and only if $\operatorname{rank}(M_\alpha) = 2$. Furthermore, the support of such a homology class is a complete $n$-partite graph. \end{restate} These results provide a possible, albeit cumbersome, way to compute the minimal number of disjoint tori one needs to represent a second homology class: cover the support by complete~$n$-partite graphs. In all of the above cases, the minimal genus was always realised by a disjoint union of tori. However, we give a negative answer to Question~\ref{question - realised by dj tori} in general: \begin{restate}{Theorem}{abcthm - counterexample to q2} There exists a RAAG~$A_\Gamma$ with a second homology class whose minimal genus cannot be realised by a disjoint union of tori. \end{restate} To prove this theorem, we find a second homology class, when~$\Gamma$ is the pentagon, that has minimal genus two, but is not representable by fewer than three disjoint tori.
\subsection{Future directions and context} One obvious future direction would be to answer Question~\ref{question - cap bound} in full generality. Although we were unable to do this using the tools we developed, we conjecture the following (partly because our search for a counterexample was fruitless): \begin{conj} The cap product inequality is an equality for any RAAG. \end{conj} In answering Question~\ref{question - realised by dj tori}, the representative we construct for Theorem~\ref{abcthm - counterexample to q2} is not $\pi_1$-injective. It is then natural to ask if the failure of $\pi_1$-injectivity is a necessary condition. \begin{question}\label{question - surface subgroups} Does there exist a second homology class of a RAAG with a minimal genus representative that is not a disjoint union of tori and injective on fundamental groups? \end{question} This question might be of interest to people studying surface subgroups in RAAGs. Furthermore, other examples of minimal genus representatives that fail to be $\pi_1$-injective, like the example in Theorem~\ref{abcthm - counterexample to q2}, could also be interesting in their own right. We remark that a minimal genus representative with the maximal number of connected components cannot map an essential simple closed curve of the surface to a null-homotopic loop: otherwise, performing surgery at this curve would either increase the number of components (if the curve is separating) or decrease the genus (if the curve is non-separating) while preserving the second homology class represented by the map. Thus for any minimal genus representative with the maximal number of components, the induced map on fundamental groups is a homomorphism from a surface group to the RAAG that has no simple closed curves in its kernel. Crisp, Sageev and Sapir asked whether any homomorphism of a surface group to a RAAG with no hyperbolic surface subgroups is necessarily injective if it has no simple closed curves in its kernel \cite[Problem~1.8]{Crisp}.
The authors call the question an analogue of the ``simple curve in the kernel'' problem for 3-manifolds. For instance, Stallings proved in~\cite{Stallings} that the restriction of the question to products of free groups is equivalent to the Poincar\'e Conjecture. To answer the general question, one might start by restricting Question~\ref{question - realised by dj tori} to RAAGs with no hyperbolic surface subgroups: \begin{q2prime} In a RAAG with no hyperbolic surface subgroups, does every class in the second homology have a minimal genus representative that is a disjoint union of tori? \end{q2prime} Ideally, a negative answer to this question would be given by a representative that has the maximal number of components but is not $\pi_1$-injective---this would answer Crisp, Sageev and Sapir's problem. Another future direction would be to investigate the minimal genus problem for other classes of Artin groups. By Proposition~3.5 in \cite{KastenholzPedron} and the fact that the second homotopy group of the Salvetti complex of any Artin group vanishes \cite[Proposition 1.13]{EliasWilliamson}, the minimal genus problem for an Artin group agrees with the minimal genus problem for the corresponding Salvetti complex. Unfortunately, there is no known formula for the second homology of a general Artin group. Akita and Liu \cite{AkitaLiu} give a general formula for the second homology with~$\mathbb{Z}/2\mathbb{Z}$ coefficients, but no integral results are known. Without such a result, a general investigation of the minimal genus problem for Artin groups seems to be out of reach. \subsection{Outline} In Section~\ref{section - preliminaries}, we provide some background on right angled Artin groups and the Salvetti complex. We also define the minimal genus and introduce the descriptors we use to study it---the \emph{support} of a class and the corresponding \emph{connection matrix}.
In Section~\ref{section - cap product inequality}, we construct our main tool---the \emph{cap bound inequality}---and prove some simple lemmas that we use throughout the paper, as well as Theorem~\ref{abcthm - torus}. Section~\ref{section - question 1} is devoted to answering Question~\ref{question - cap bound} for large classes of RAAGs---we prove Theorem~\ref{abcthm - cap bound equality} in this section. We then completely characterise which classes are representable by a single torus in Section~\ref{section - one torus}, proving Theorem~\ref{abcthm - n partite torus}. We finish in Section~\ref{section - question 2} with a negative answer to Question~\ref{question - realised by dj tori}, proving Theorem~\ref{abcthm - counterexample to q2}. \subsection{Acknowledgements} We would like to thank Mark Pedron for helpful conversations. The first and third authors also thank the Max Planck Institute for Mathematics in Bonn for its support and hospitality. \section{Preliminaries}\label{section - preliminaries} \subsection{Right angled Artin groups} We start by giving some background on right angled Artin groups. For a comprehensive introduction to RAAGs see the survey paper by Charney~\cite{Charney}. \begin{defn}\label{defn- RAAG} Every finite simple graph $\Gamma$ with vertex set~$V(\Gamma)$ and edge set $E(\Gamma)$ determines a \emph{right angled Artin group}, or RAAG, $A_\Gamma$, which is the group with presentation \[ A_\Gamma =\langle V(\Gamma) \mid st=ts \,\, \forall \{s,t\} \in E(\Gamma) \rangle. \] \end{defn} \begin{exam} Figure~\ref{fig - raag examples} shows a few examples of graphs $\Gamma$ and their corresponding RAAGs. 
\begin{figure}[ht] \begin{tikzpicture}[thick, scale=.84, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \path (0,0) -- node[below=.3, scale=1] {$A_\Gamma\cong F_4$} (2,0); \path (0,0) -- node[left=.3, scale=1] {$\Gamma = $} (0,2); \filldraw (0,0) circle [radius=0.1] (2,2) circle [radius=0.1] (2,0) circle [radius=0.1] (0,2) circle [radius=0.1]; \begin{scope}[shift={(5,0)}] \draw (0,0) -- node[below=.3, scale=1] {$A_\Gamma\cong F_2 \times F_2$} (2,0); \draw (0,0) -- node[left=.3, scale=1] {$\Gamma = $} (0,2); \draw (2,2) -- (2,0); \draw (2,2) -- (0,2); \filldraw (0,0) circle [radius=0.1] (2,2) circle [radius=0.1] (2,0) circle [radius=0.1] (0,2) circle [radius=0.1]; \end{scope} \begin{scope}[shift={(10,0)}] \draw (0,0) -- node[below=.3, scale=1] {$A_\Gamma\cong \mathbb{Z}^4$} (2,0); \draw (0,0) -- node[left=.3, scale=1] {$\Gamma = $} (0,2); \draw (2,2) -- (2,0); \draw (2,2) -- (0,2); \draw (0,0) -- (2,2); \filldraw[color=white] (1,1) circle [radius = 0.1]; \draw (0,2) -- (2,0); \filldraw (0,0) circle [radius=0.1] (2,2) circle [radius=0.1] (2,0) circle [radius=0.1] (0,2) circle [radius=0.1]; \end{scope} \begin{scope}[shift={(3,-2)}] \draw (0,0) node[left=.3, scale=1] {$\Gamma = $} -- (2,0) -- node[below=.3, scale=1] {$A_\Gamma = \langle \, v_1, v_2, v_3, v_4~|~v_1v_2=v_2v_1, v_2v_3 = v_3v_2, v_3v_4=v_4v_3 \, \rangle$} (4,0) -- (6,0); \filldraw (0,0) circle [radius=0.1] node[above, scale=1] {$v_1$} (2,0) circle [radius=0.1] node[above, scale=1] {$v_2$} (4,0) circle [radius=0.1] node[above, scale=1] {$v_3$} (6,0) circle [radius=0.1] node[above, scale=1] {$v_4$}; \end{scope} \end{tikzpicture} \caption{Four graphs on four vertices and their corresponding RAAGs.} \label{fig - raag examples} \end{figure} For an arbitrary simple graph~$\Gamma$, the corresponding RAAG~$A_\Gamma$ does not usually have a better description than the presentation given in Definition~\ref{defn- RAAG}. 
\end{exam} To study the minimal genus problem for RAAGs, we need two things: firstly a nice model for the classifying space of a RAAG, $BA_\Gamma$, and secondly a method to describe a second homology class~$\alpha \in H_2(A_\Gamma)$ since the minimal genus problem is concerned with such classes. For RAAGs, there exists a finite dimensional cube complex called the \emph{Salvetti complex} that is a model for the classifying space $BA_\Gamma$. This complex was defined for general Artin groups by Salvetti in the 80s~\cite{Salvetti} and is a cube complex only for RAAGs. In general, it is not known whether it is always a classifying space; this is the well-known \emph{$K(\pi,1)$ conjecture}. We therefore restrict to RAAGs for our definition. \begin{defn}\label{defn - salvetti} Given a simple graph~$\Gamma$ and corresponding RAAG~$A_\Gamma$, the \emph{Salvetti complex} $\operatorname{Sal}_\Gamma$ is the cube complex with: \begin{itemize} \item one vertex, or 0-cube,~$x_0$; \item one edge, or $1$-cube, for each generator $s\in V(\Gamma)$, attached to~$x_0$ at both ends; \item one square, or $2$-cube, for each edge $\{s_i,s_j\}\in E(\Gamma)$, attached to the $1$-skeleton along the boundary edges using the relation $s_is_js_i^{-1}s_j^{-1}$. The image of each square is a $2$-torus in the $2$-skeleton; \item one $3$-cube, for each triangle in $\Gamma$, attached to the $2$-skeleton by identifying opposite boundary squares with the $2$-tori corresponding to the three edges of the triangle; and \item generally, one $k$-cube, for each $k$-clique (complete graph on $k$ vertices) in $\Gamma$, attached to the $(k-1)$-skeleton by identifying opposite boundary $(k-1)$-cubes with the $(k-1)$-tori corresponding to the $(k-1)$-cliques in the $k$-clique. \end{itemize} \end{defn} \begin{exam} In the extreme case when~$\Gamma$ is a totally disconnected graph on $n$~vertices, the Salvetti complex~$\operatorname{Sal}_\Gamma$ is a~\emph{rose}~$R_n= \bigvee_{i=1}^n S^1$ with $n$~petals.
On the other hand, when~$\Gamma$ is a complete graph on $n$~vertices, the Salvetti complex~$\operatorname{Sal}_\Gamma$ is an $n$-torus $\mathbb{T}^n = (S^1)^n$. For an intermediate example, let~$\Gamma$ be a square; then~$\operatorname{Sal}_\Gamma$ is a product of roses~$R_2 \times R_2$. As a last example, suppose~$\Gamma$ is a line with $3$~vertices; then~$\operatorname{Sal}_\Gamma$ consists of two copies of a torus~$\mathbb{T}^2$ glued along a longitude in each copy. \end{exam} \begin{lem}[\cite{CharneyDavis}]\label{lem - salvetti is classifying space} Let~$\Gamma$ be a simple graph,~$A_\Gamma$ the associated RAAG, and~$\operatorname{Sal}_\Gamma$ the Salvetti complex. Then~$\operatorname{Sal}_\Gamma$ is a model for the classifying space~$BA_\Gamma$, or in other words, a~$K(A_\Gamma,1)$. \begin{proof} The fundamental group of~$\operatorname{Sal}_\Gamma$ is~$A_\Gamma$ by construction. The fact that the universal cover is contractible follows from $\operatorname{Sal}_\Gamma$ being a locally CAT(0) cube complex---a property that, among Salvetti complexes of Artin groups, holds only in the RAAG case. \end{proof} \end{lem} \begin{remark}[Orientation of 1-skeleton] In Lemma~\ref{lem - salvetti is classifying space}, we implicitly choose an identification of~$\pi_1(\operatorname{Sal}_\Gamma)$ with~$A_\Gamma$. This identification automatically endows the Salvetti complex with a preferred orientation on its 1-skeleton. We choose such an identification now and fix it for the remainder of the paper. \end{remark} \begin{prop} \label{prop - homology of salvetti} Given a simple graph~$\Gamma$, \begin{align*} H_1(A_\Gamma) &= H_1(\operatorname{Sal}_\Gamma) \cong \mathbb{Z}^{V(\Gamma)} \quad \text{and}\\ H_2(A_\Gamma) &= H_2(\operatorname{Sal}_\Gamma) \cong \mathbb{Z}^{E(\Gamma)}, \end{align*} where~$V(\Gamma)$ and $E(\Gamma)$ are the vertex set and edge set of~$\Gamma$ respectively. \end{prop} This proposition follows immediately from the cellular chain complex of the Salvetti complex.
But there is one small caveat, namely that these isomorphisms implicitly choose orientations on the $1$-cells and $2$-cells of the Salvetti complex. We already chose an orientation on the $1$-cells in the above remark, and we will now address how to choose an orientation of the $2$-cells. We first orient $\Gamma$: \begin{defn} Each edge $\{v,w\}\in E(\Gamma)$ has two orientations given by the ordered pairs $(v,w)$ and $(w,v)$. An \emph{orientation} of $\Gamma$ will be a choice of an oriented edge---either $(v,w)$ or $(w,v)$---for each edge $\{v, w\}$ in~$\Gamma$. A simple graph with an orientation will be called an \emph{oriented graph}, and we denote the set of oriented edges by $E^{or}(\Gamma)$. \end{defn} \begin{remark}[Orientation of 2-skeleton]\label{rem - orientation of 2-skeleton} Given an orientation of $\Gamma$, we orient the $2$-cell in $\operatorname{Sal}_\Gamma$ corresponding to an oriented edge $e=(v,w) \in E^{or}(\Gamma)$ by considering the dual of $v^* \cup w^*$, where~$v^*$ and~$w^*\in H^1(\operatorname{Sal}_\Gamma)$ denote the duals of the homology classes corresponding to $v$ and $w$ respectively (here we take the dual with respect to the basis given by the vertices in Proposition~\ref{prop - homology of salvetti}). Note that since~$v^* \cup w^*$ is a generator of~$H^2(\operatorname{Sal}_\Gamma)$, its dual is a fundamental class of the corresponding torus, and thus gives an orientation of the 2-cell. Anti-symmetry of the cup product means that doing the same construction with the oriented edge~$(w,v)$ will yield the opposite orientation.
\end{remark} Using an orientation on $\Gamma$ and the induced orientation on $\operatorname{Sal}_\Gamma$, we obtain a canonical generator for each~$\mathbb{Z}$ factor in~$H_2(A_\Gamma)\cong \mathbb{Z}^{E(\Gamma)}$---we let~$e_{(v,w)} \in \mathbb{Z}^{E(\Gamma)}$ be the basis element corresponding to the oriented~$2$-cell given by the oriented edge~$(v,w) \in E^{or}(\Gamma)$, and~$-e_{(v,w)}$ correspond to the same $2$-cell with the opposite orientation. This allows us to do the following. Let~$\Gamma$ be a simple graph and fix~$\alpha \in H_2(A_\Gamma)$. To describe the class with a pictorial approach, we choose an orientation of the graph; this is equivalent to decorating each edge with an arrow. As mentioned in the preceding paragraph, an orientation of~$\Gamma$ determines a basis for $H_2(A_\Gamma) \cong \mathbb{Z}^{E(\Gamma)}$. We can now label each oriented edge $(v,w)$ with the integer coefficient for the basis vector~$e_{(v,w)}$ in the vector $\alpha \in \mathbb{Z}^{E(\Gamma)}$. We denote this coefficient by~$l(v,w)$. This labelled oriented graph uniquely determines the class~$\alpha$ up to the following relation on labelled oriented graphs: \begin{center} \begin{tikzpicture}[thick, scale=.84, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate}] (0,0) -- node[above=2pt] {$n$} (2,0); \draw[postaction={decorate}] (6,0) -- node[above=2pt] {$-n$} (4,0); \draw (3,0) node {$=$}; \filldraw (0,0) circle [radius=0.1] node[above=1pt] {$v$} (2,0) circle [radius=0.1] node[above=1pt] {$w$} (4,0) circle [radius=0.1] node[above=1pt] {$v$} (6,0) circle [radius=0.1] node[above=1pt] {$w$}; \end{tikzpicture} \end{center} We omit the integer label on an edge when the label is zero. \begin{defn}\label{defn-support} The \emph{support} of a class~$\alpha\in H_2(A_\Gamma)$, denoted by~$\operatorname{supp}(\alpha)$, is the oriented subgraph of~$\Gamma$ spanned by oriented edges~$(v,w)$ where the label~$l(v,w)$ described above is non-zero. 
Additionally, we label the oriented edges~$(v,w)$ in the support with the non-zero integer label~$l(v,w)$. This defines~$\operatorname{supp}(\alpha)$ as a labelled oriented graph. Note that the underlying graph of~$\operatorname{supp}(\alpha)$ does not depend on the chosen orientation of~$\Gamma$ and we will sometimes also consider~$\operatorname{supp}(\alpha)$ as a subgraph of an unoriented graph~$\Gamma$. \end{defn} One of our main tools in this work is the following matrix, derived from the labelled support of a class $\alpha \in H_2(A_\Gamma)$. \begin{defn}\label{defn-connection matrix} Let~$\Gamma$ be a simple graph and~$\alpha \in H_2(A_\Gamma)$ a given class. Following the preceding discussion, the class~$\alpha$ can be described by choosing an orientation of the graph~$\Gamma$ and labelling each oriented edge $(v,w) \in E^{or}(\Gamma)$ with an appropriate integer $l(v,w)$. The \emph{connection matrix} of the class~$\alpha$ is a square matrix~$M_\alpha$ with rows and columns indexed by the vertices of~$\Gamma$, and whose matrix entries are given by: \[ (M_\alpha)_{v,w} = \begin{cases} 0 & \text{if } v=w \\ l(v,w) & \text{if } (v,w) \in E^{or}(\Gamma) \\ -l(w,v) & \text{if } (w,v) \in E^{or}(\Gamma). \end{cases} \] \end{defn} Note that the connection matrix is a skew-symmetric integer matrix and, due to the relation on labelled graphs preceding Definition~\ref{defn-support}, it is independent of the orientation of~$\Gamma$ we choose when depicting~$\alpha$ as a labelled oriented graph. We now give an example of a homology class, its labelled support, and its connection matrix. \begin{exam} Let $\Gamma$ be the square with vertices $\{ v_1, v_2, w_1, w_2 \}$ and an orientation given by $E^{or}(\Gamma)=\{ v_1, v_2 \} \times \{ w_1, w_2 \}$. Then $H_2(A_\Gamma)$ has basis given by $e_{(v_1, w_1)},\,e_{(v_1, w_2)},\,e_{(v_2, w_1)},$ and~$e_{(v_2, w_2)}$. 
Set $\alpha = e_{(v_1, w_1)} - e_{(v_2, w_2)}$ and $\beta = 2 e_{(v_1, w_1)} + 4e_{(v_1, w_2)} + 3e_{(v_2, w_1)} + 6e_{(v_2, w_2)}$ in $H_2(A_\Gamma)$. Then the connection matrices are \[ M_\alpha = \kbordermatrix{ & v_1 & v_2 & w_1 & w_2 \\ v_1 & 0 & 0 & 1 & 0 \\ v_2 & 0 & 0 & 0 & -1 \\ w_1 & -1 & 0 & 0 & 0 \\ w_2 & 0 & 1 & 0 & 0} \quad \text{and} \quad M_\beta = \kbordermatrix{ & v_1 & v_2 & w_1 & w_2 \\ v_1 & 0 & 0 & 2 & 4 \\ v_2 & 0 & 0 & 3 & 6 \\ w_1 & -2 & -3 & 0 & 0 \\ w_2 & -4 & -6 & 0 & 0}. \] Figure~\ref{fig - a class} is a visual representation of the classes with their supports highlighted. \begin{figure}[ht] \begin{tikzpicture}[thick, scale=.84, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate}] (0,0) -- node[below] {1} (2,0); \draw[color=gray, thin, postaction={decorate}] (0,0) -- (0,2); \draw[color=gray, thin, postaction={decorate}] (2,2) -- (2,0); \draw[postaction={decorate}] (2,2) -- node[above] {-1} (0,2); \draw[fill=white] (0,0) circle [radius=0.1] node[below left] {$v_1$} (2,2) circle [radius=0.1] node[above right] {$v_2$}; \filldraw (2,0) circle [radius=0.1] node[below right] {$w_1$} (0,2) circle [radius=0.1] node[above left] {$w_2$}; \begin{scope}[shift={(5,0)}] \draw[postaction={decorate}] (0,0) -- node[below] {2} (2,0); \draw[postaction={decorate}] (0,0) -- node[left] {4} (0,2); \draw[postaction={decorate}] (2,2) -- node[right] {3} (2,0); \draw[postaction={decorate}] (2,2) -- node[above] {6} (0,2); \draw[fill=white] (0,0) circle [radius=0.1] node[below left] {$v_1$} (2,2) circle [radius=0.1] node[above right] {$v_2$}; \filldraw (2,0) circle [radius=0.1] node[below right] {$w_1$} (0,2) circle [radius=0.1] node[above left] {$w_2$}; \end{scope} \end{tikzpicture} \caption{An illustration of $\alpha$ and $\beta$.} \label{fig - a class} \end{figure} \end{exam} \subsection{Minimal genus} We will now define the minimal genus and discuss some of its properties, in particular in the setting of group homology.
See \cite{KastenholzPedron} for an introduction to the minimal genus problem. \begin{defn}\label{defn-minimal genus} Given a space~$X$ and a class~$\alpha \in H_2(X;\mathbb{Z})$, we define the \emph{minimal genus} of~$\alpha$, denoted by $\operatorname{gen}(\alpha)$, to be the minimal genus of a surface~$\Sigma$ such that there exists a continuous map~$f\colon \Sigma \to X$ and $f_*([\Sigma])=\alpha$ in~$H_2(X)$. Here $\Sigma$ is a compact oriented surface, possibly disconnected, and the genus of a disconnected surface is the sum of the genera of its connected components. We say~$\alpha$ is \emph{representable by~$\Sigma$}. \end{defn} We start by making a straightforward observation that will be used later on when we split homology classes. \begin{lem}\label{lem-subadditive} The minimal genus is subadditive: \[ \operatorname{gen}(\alpha + \beta) \leq \operatorname{gen}(\alpha)+\operatorname{gen}(\beta) \quad \forall \alpha, \beta \in H_2(X). \] \end{lem} In this work, we are interested in the minimal genus of a second integral homology class of a group---recall that the homology of a group~$G$ is defined to be the homology of its \emph{classifying space}~$BG$. The classifying space is well-defined up to homotopy: it is a~$K(G,1)$ space. Homotopy equivalences preserve the minimal genus; hence any model for the classifying space yields the same minimal genus. \begin{defn} The \emph{minimal genus} of a second homology class of a group $G$ is the minimal genus of the corresponding class in~$H_2(BG)$, where~$BG$ is the classifying space of the group. \end{defn} In the case of RAAGs, we can restrict ourselves to studying the minimal genus of classes in~$H_2(\operatorname{Sal}_\Gamma)$. More precisely, we define our classes via labelled oriented graphs and this concretely refers to a class in~$H_2(\operatorname{Sal}_\Gamma)=\mathbb{Z}^{E(\Gamma)}$, which has a canonical choice of basis as described in Remark~\ref{rem - orientation of 2-skeleton}.
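As a concrete check on Definition~\ref{defn-connection matrix}, the connection matrices $M_\alpha$ and $M_\beta$ of the square example above can be assembled and their ranks computed mechanically. The following Python sketch is ours and purely illustrative; the encoding of a class as a dictionary of labelled oriented edges is an assumption of the sketch, not notation from the text. The ranks it computes reappear as the cap bound in Section~\ref{section - cap product inequality}.

```python
from fractions import Fraction

def connection_matrix(vertices, labels):
    """Assemble M_alpha from a dict mapping oriented edges (v, w)
    to their integer labels l(v, w); unlisted pairs get label 0."""
    idx = {v: i for i, v in enumerate(vertices)}
    M = [[0] * len(vertices) for _ in vertices]
    for (v, w), l in labels.items():
        M[idx[v]][idx[w]] = l    # (M_alpha)_{v,w} =  l(v, w)
        M[idx[w]][idx[v]] = -l   # (M_alpha)_{w,v} = -l(v, w)
    return M

def rank(M):
    """Rank over the rationals, by exact Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c]:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

V = ["v1", "v2", "w1", "w2"]
# alpha = e_(v1,w1) - e_(v2,w2);  beta = 2e_(v1,w1) + 4e_(v1,w2) + 3e_(v2,w1) + 6e_(v2,w2)
M_alpha = connection_matrix(V, {("v1", "w1"): 1, ("v2", "w2"): -1})
M_beta = connection_matrix(V, {("v1", "w1"): 2, ("v1", "w2"): 4,
                               ("v2", "w1"): 3, ("v2", "w2"): 6})
print(rank(M_alpha), rank(M_beta))  # 4 2
```

Both matrices are skew-symmetric by construction, and the half-ranks $2$ and $1$ are exactly the cap bounds for $\alpha$ and $\beta$; by Theorem~\ref{thrm - inequality is equality complete bipartite} below they are in fact the minimal genera of these classes.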
\section{The cap product inequality}\label{section - cap product inequality} In the first half of this section, we use the cap product to give a lower bound on the minimal genus. The second half uses this lower bound to compute the minimal genus of a second homology class in an $n$-torus~$\mathbb{T}^n$. Given a space~$X$ and~$\alpha\in H_2(X)$, recall that the \emph{cap product} map of~$\alpha$ is the map \[ \alpha \cap - \colon H^1(X)\to H_1(X). \] This map leads to a lower bound for the minimal genus: \begin{prop}\label{prop-cap product inequality} Let~$X$ be any space and~$\alpha\in H_2(X)$. Then the following inequality holds \[ 2 \, \operatorname{gen}(\alpha)\geq \operatorname{rank}(\alpha \cap -). \] \begin{proof} We first make a general observation. Suppose~$f:Y \to X$ is a map from some space $Y$ to~$X$ and let~$\beta \in H_m(Y)$ and~$\sigma \in H^n(X)$ be arbitrary classes. If~$f_*$ and $f^*$ are the induced maps on homology and cohomology respectively, then the cap product \[ f_*(\beta) \cap \sigma = f_*(\beta \cap f^*(\sigma)) \] lies in $f_*(H_{m-n}(Y))$. Now assume $\alpha \in H_2(X)$ and $f:\Sigma \to X$ represents $\alpha$, i.e.~$\Sigma$ is a possibly disconnected oriented surface of genus $g$ and~$f_*([\Sigma]) = \alpha$ where $[\Sigma] \in H_2(\Sigma)$ is the fundamental class. Then, by the previous observation, the cap product $\alpha \cap \sigma$ lies in $f_*(H_1(\Sigma))$ for all $\sigma \in H^1(X)$. Therefore, the image of the cap product map $\alpha \cap -$ is contained in the image of $f_*:H_1(\Sigma) \to H_1(X)$, which has rank at most $2g$. So we get $2g \geq \operatorname{rank}(\alpha \cap -)$. \end{proof} \end{prop} We now consider the case where~$\Gamma$ is a simple graph and~$\alpha\in H_2(A_\Gamma)$. In this case, it is easy to compute the bound~$\operatorname{rank}(\alpha \cap -)$ using the connection matrix~$M_\alpha$ (introduced in Definition~\ref{defn-connection matrix}). 
\begin{prop}\label{prop-cap bound matrix rank} Let $\Gamma$ be a simple graph,~$\alpha\in H_2(A_\Gamma)$ an arbitrary class, and~$M_\alpha$ its connection matrix. Then the connection matrix~$M_\alpha$ is also the matrix representation of the cap product map \[ \alpha \cap - \colon H^1(A_\Gamma) \to H_1(A_\Gamma) \] with respect to the basis given by the fixed orientation on the $1$-skeleton. \begin{proof} The $2$-skeleton of the Salvetti complex $\operatorname{Sal}_{\Gamma}$ is a quotient of the space \[X = \bigsqcup_{\{v,w\} \in E(\Gamma)} (S^1\times S^1)_{\{v,w\}},\] where $(S^1\times S^1)_{\{v,w\}}$ is (a copy of) a $2$-torus. The quotient map $\pi \colon X \to \operatorname{Sal}_{\Gamma}^{(2)}$ and the inclusion $\operatorname{Sal}_\Gamma^{(2)}\to \operatorname{Sal}_\Gamma$ both induce isomorphisms on second homology. Furthermore, the inclusion $\operatorname{Sal}_\Gamma^{(2)}\to \operatorname{Sal}_\Gamma$ induces an isomorphism on homology and cohomology in degrees $1$ and $2$. A distinguished basis for the first homology of $\operatorname{Sal}_\Gamma^{(2)}$ is given by~$V(\Gamma)$, and a distinguished basis for the first homology of $X$ is given by $(S^1\times \{\ast\})_{\{v,w\}}$ and \emph{dual curves} $(\{\ast\} \times S^1)_{\{v,w\}}$ for each $\{v,w\} \in E(\Gamma)$. Additionally, a basis for the first cohomology of $\operatorname{Sal}_\Gamma^{(2)}$ is given by the duals~$v^*$ of every element $v$ of~$V(\Gamma)$. The quotient map~$\pi$ induces the following map on cohomology: \begin{eqnarray*} \pi^* \colon H^1(\operatorname{Sal}_\Gamma^{(2)}) &\to& H^1(X)\\ v^* &\mapsto& \sum_{p\in P_v} p^* \end{eqnarray*} Here~$P_v$ is the set of circles in~$X$ which map to the circle in~$\operatorname{Sal}_{\Gamma}$ corresponding to~$v$ under~$\pi$; thus each~$p\in P_v$ represents a class in~$H_1(X)$ and~$p^*$ denotes its dual in~$H^1(X)$.
The cap product of a fundamental class of one torus $(S^1\times S^1)_{\{s,t\}}$ in $X$ with $\pi^*(v^*)$ is either the dual curve to the corresponding $p \in P_v$ or zero if there is no $p \in P_v$ that lies in that torus. In other words, if $p$ is $(S^1\times \{\ast\})_{\{s,t\}}$ then the cap product is given by \[ [(S^1\times S^1)_{\{s,t\}}]\cap p^*=\pm [(\{\ast\}\times S^1)_{\{s,t\}}], \]where the sign depends on the orientation of the surface. Since the cap product is natural and linear, this yields the desired matrix~$M_\alpha$. \end{proof} \end{prop} \begin{remark} For another proof of Proposition~\ref{prop-cap bound matrix rank}, consider the inclusion $\operatorname{Sal}_\Gamma \to (S^1)^{|V(\Gamma)|}$ coming from the abelianisation map. This map induces an isomorphism on first homology and an injection on second homology. We can deduce the result from the cap product structure of the torus. \end{remark} Putting Proposition~\ref{prop-cap product inequality} and Proposition~\ref{prop-cap bound matrix rank} together, we get the \emph{cap product inequality:} \begin{equation}\label{eq - cap bound} \operatorname{gen}(\alpha)\geq \frac{1}{2} \, \operatorname{rank}(M_\alpha) \quad \text{for all } \alpha \in H_2(A_\Gamma). \end{equation} The right-hand side,~$\frac{1}{2} \, \operatorname{rank}(M_\alpha)$, will be referred to as the \emph{cap bound}. To conclude the section, we compute the minimal genus when~$A_\Gamma\cong \mathbb{Z}^n$ using the cap bound. In this case, we have extra tools that we can use. Firstly, there is an isomorphism of graded rings \[H_*(\mathbb{Z}^n)\cong \bigwedge\nolimits^* \mathbb{Z}^n, \] where the ring structure on the left stems from the group multiplication in $\mathbb{T}^n$, i.e.~it is the Pontryagin ring structure. This means we can interpret $\alpha \in H_2(\mathbb{Z}^n)$ as a skew-symmetric bilinear form on~$\mathbb{Z}^n$.
Furthermore, taking the dual on the left, we have an isomorphism of graded rings \[H^*(\mathbb{Z}^n)\cong \bigwedge\nolimits^* \mathbb{Z}^n, \] where the ring structure on the left is given by the cup product. Under these two ring isomorphisms, the matrix representation~$M_\alpha$ of the cap product map \[\alpha \cap - \colon \mathbb{Z}^n \cong H^1(\mathbb{T}^n) \to H_1(\mathbb{T}^n) \cong \mathbb{Z}^n\] yields the same matrix as interpreting $\alpha$ as a skew-symmetric bilinear form. \begin{prop}\label{prop - n torus cap equality} Let $\Gamma$ be a complete graph on $n$ vertices and $\alpha \in H_2(A_\Gamma) \cong \bigwedge^2 \mathbb{Z}^n$. Then the minimal genus $\operatorname{gen}(\alpha)$ is always realized by a disjoint union of tori. Furthermore, it is equal to the minimal number of elementary wedges, i.e.~elements of the form~$a \wedge b$, needed to represent~$\alpha$. \begin{proof} A disjoint union of $2$-tori can serve as the minimal genus representative for any second homology class of the $n$-torus~$\mathbb{T}^n = (S^1)^n$ since the fundamental group $\pi_1(\mathbb{T}^n) \cong \mathbb{Z}^n$ is abelian. Now suppose that we have a class in~$H_2(\mathbb{T}^n)$ that is representable by a torus, i.e.~by a map~$\tau \colon \mathbb{T}^2 \to \mathbb{T}^n$. Since the $2$-torus and the $n$-torus are aspherical, the map $\tau$ is, up to homotopy, determined by the induced homomorphism on the fundamental groups. Thus we may assume it is given by $(s,t) \mapsto g(s)\cdot h(t)$ for some pair of based loops $g,h \colon S^1\to \mathbb{T}^n$, where multiplication is the group operation in~$\mathbb{T}^n$. Let $[g]$ and $[h]$ denote the images of~$g_*([S^1])$ and $h_*([S^1])$ respectively under the chain of isomorphisms $\pi_1(\mathbb{T}^n) \cong H_1(\mathbb{T}^n) \cong \mathbb{Z}^n$. 
Then under the identification \[ H_2(\mathbb{T}^n) \cong \bigwedge\nolimits^2 H_1(\mathbb{T}^n) \cong \bigwedge\nolimits^2 \mathbb{Z}^n,\] such a map $\tau$ sends the fundamental class of $\mathbb{T}^2$ to the elementary wedge $[g] \wedge [h]$. Conversely, let an elementary wedge $a \wedge b \in \bigwedge\nolimits^2 \mathbb{Z}^n \cong H_2(\mathbb{T}^n)$ be given. Choose explicit representatives $\hat{a},\hat{b} \colon S^1 \to \mathbb{T}^n$ for $a,b\in \mathbb{Z}^n \cong H_1(\mathbb{T}^n) \cong \pi_1(\mathbb{T}^n)$. Then the map $\mathbb{T}^2 \to \mathbb{T}^n$ defined by $(s,t) \mapsto \hat{a}(s)\cdot \hat{b}(t)$ maps the fundamental class of~$\mathbb{T}^2$ to~$a \wedge b$, i.e.~$a \wedge b$ is representable by a torus. All in all, this proves that the minimal genus of any~$\alpha \in H_2(\mathbb{Z}^n) \cong \bigwedge\nolimits^2 \mathbb{Z}^n$ is the minimal number of elementary wedges, i.e.~elements of the form $a\wedge b$, needed to represent~$\alpha$. \end{proof} \end{prop} Fortunately, we can use linear algebra (over the integers) to compute the minimal number of elementary wedges of an element in~$\bigwedge\nolimits^2 \mathbb{Z}^n$ using an appropriate skew-symmetric integer matrix. \begin{prop}\label{prop - minimal number of elementary wedges} Let $\Gamma$ be a complete graph on $n$ vertices and $\alpha \in H_2(A_\Gamma) \cong \bigwedge\nolimits^2 \mathbb{Z}^n$. Then the cap bound $\frac{1}{2} \, \operatorname{rank}(M_\alpha)$ equals the minimal number of elementary wedges needed to represent~$\alpha$. \begin{proof} The solution to this purely algebraic problem is classical. By silently identifying $\mathbb{Z}^n$ with its dual, we can think of $\alpha \in \bigwedge\nolimits^2 \mathbb{Z}^n$ as a skew-symmetric bilinear form on $\mathbb{Z}^n$ (see the discussion before Proposition~\ref{prop - n torus cap equality}).
Following this, by Section~14 (Skew-symmetric Matrices) in \cite{FranklinVeblenIntegerMatrices}, such a skew-symmetric bilinear form---represented as a matrix~$M$---has a normal form~$\widehat{M}$ consisting of a direct sum of integer multiples of the standard hyperbolic form and the zero form. This normal form is shown below, where~$0 \neq \lambda_i\in \mathbb{Z}$ for~$1\leq i\leq k$. \[ \widehat{M}= \begin{pNiceArray}{ccccccc:ccc}[first-row, first-col] &v_1 & w_1 & v_2 & w_2& \cdots&v_k&w_k& \Block{1-3}{\cdots}&\\ v_1 & 0 & \lambda_1 &&&\Block{4-3}<\huge>{0}&&&\Block{7-3}<\huge>{0}& \\ w_1& -\lambda_1 &0 &&&&&&& \\ v_2& && 0&\lambda_2&& & &&\\ w_2& && -\lambda_2&0 & & & &&\\ \vdots&\Block{3-4}<\huge>{0}& && &\ddots& &&&\\ v_k && && && 0 & \lambda_k &&\\ w_k && && && -\lambda_k&0 &&\\ \hdottedline \Block{3-1}{\vdots}& \Block{3-7}<\huge>{0}&& & &&&&\Block{3-3}<\huge>{0}&&\\ &&&&&&&&&\\ &&&&&&&&& \end{pNiceArray} \] Choose a basis of $\mathbb{Z}^n$ in which the matrix of~$\alpha$ is the above normal form (with~$k$ hyperbolic blocks), and let $v_1, w_1, \ldots, v_k, w_k$ denote the first~$2k$ basis vectors as shown in the matrix diagram, i.e.~paired according to the hyperbolic forms. Then it follows that $\alpha = \sum_{i=1}^k \lambda_i v_i \wedge w_i$, i.e.~it is a sum of~$k$ elementary wedges. By Proposition~\ref{prop - n torus cap equality}, we get $\operatorname{gen}(\alpha) \le k$. Evidently, the number of hyperbolic blocks $k$ equals half the rank of the matrix~$M$ representing the skew-symmetric form. Finally, the matrix representation~$M_\alpha$ of the cap product $\alpha \cap - \colon \mathbb{Z}^n \cong H^1(\mathbb{T}^n) \to H_1(\mathbb{T}^n) \cong \mathbb{Z}^n$ yields the same matrix~$M$ as interpreting $\alpha$ as a skew-symmetric bilinear form. So, combining with the cap product inequality (Equation~\ref{eq - cap bound}), we get~$\operatorname{gen}(\alpha) = \frac{1}{2} \, \operatorname{rank}(M_\alpha)$.
\end{proof} \end{prop} Together, the previous two propositions give us our first major result: \begin{abcthm}\label{abcthm - torus} Let $\Gamma$ be a complete graph---i.e.~$A_\Gamma\cong\mathbb{Z}^n$ and the~$n$-torus is a model for the classifying space---and $\alpha \in H_2(A_\Gamma)$. Then the minimal genus $\operatorname{gen}(\alpha)$ is equal to the cap bound $\frac{1}{2} \, \operatorname{rank}(M_\alpha)$. Furthermore, the minimal genus is always realised by a disjoint union of tori. \end{abcthm} \section{A partial answer to Question~\ref{question - cap bound}}\label{section - question 1} The cap product inequality (Equation~\ref{eq - cap bound}) and Theorem~\ref{abcthm - torus} naturally lead us to ask: \setcounter{question}{0} \begin{question} Is the cap product inequality always an equality for a RAAG? \end{question} This section answers the question in the affirmative for two large classes of RAAGs: \begin{abcthm}\label{abcthm - cap bound equality} Let $\Gamma$ be a complete bipartite graph or a tree and $\alpha \in H_2(A_\Gamma)$. Then the minimal genus $\operatorname{gen}(\alpha)$ is equal to the cap bound $\frac{1}{2} \operatorname{rank}(M_\alpha)$. Furthermore, the minimal genus can always be realised by a disjoint union of tori. \end{abcthm} Since the proofs are quite different for a complete bipartite graph and a tree, we handle them in two separate subsections, culminating in Theorems~\ref{thrm - inequality is equality complete bipartite} and~\ref{thrm - inequality is equality trees} respectively. \subsection{Bipartite graphs}\label{subsect - complete bipartite} Unless otherwise stated, we assume in this subsection that $\Gamma$ is a complete bipartite finite graph, i.e.~there is a partition of the vertices $V(\Gamma) = \{ v_1, \ldots, v_n \} \sqcup \{ w_1, \ldots, w_m \}$ such that each edge in~$\Gamma$ has endpoints $v_i$ and $w_j$ for some $i,j$ and, conversely, each pair $\{v_i, w_j\}$ corresponds to a (unique) edge in~$\Gamma$. 
Then $A_\Gamma\cong F_n \times F_m$ where $F_n$ and $F_m$ are free groups generated by $\{ v_1, \ldots, v_n \}$ and $\{ w_1, \ldots, w_m \}$ respectively. Elements of $A_\Gamma$ are considered as words $v w$ where $v \in F_n$ and $w \in F_m$ rather than as ordered pairs $(v, w)$. For an orientation on $\Gamma$, we shall choose the oriented edges $E^{or}(\Gamma) = \{v_1, \ldots, v_n \} \times \{w_1, \ldots, w_m \}$. We write $\overline{v_i}$ and $\overline{w_j}$ for the images of~$v_i$ and $w_j$ in $H_1(F_n)$ and $H_1(F_m)$ respectively. In this setting, $H_2(A_\Gamma) \cong H_1(F_n) \otimes H_1(F_m)$ is generated by $e_{(v_i, w_j)} = \overline{v_i} \otimes \overline{w_j}$ for $(v_i, w_j) \in E^{or}(\Gamma)$. We start by constructing examples of classes representable by a torus. \begin{exam}\label{Thorben} Let~$\mathbb{Z}^2$ be generated by~$a$ and~$b$. For any $v \in F_n$ and $w \in F_m$, we define a homomorphism \begin{eqnarray*} \tau:\mathbb{Z}^2 &\to& A_\Gamma\\ a &\mapsto& v \\ b &\mapsto& w. \end{eqnarray*} Using the identifications $H_2(\mathbb{Z}^2) \cong \@ifnextchar^\@extp{\@extp^{\,}}^2 \mathbb{Z}^2$ and $H_2(A_\Gamma) \cong H_1(F_n) \otimes H_1(F_m)$, direct computation shows that the induced homomorphism $\tau_* \colon H_2(\mathbb{Z}^2) \to H_2(A_\Gamma)$ maps the fundamental class $a \wedge b$ to $\overline v \otimes \overline w$. As~$\mathbb{T}^2$ and~$\operatorname{Sal}_\Gamma$ are $K(\pi,1)$-spaces, we get a map $\mathbb{T}^2 \to \operatorname{Sal}_\Gamma$ that induces $\tau$ on fundamental groups, and hence the class $\alpha=\overline v \otimes \overline w$ is representable by a torus. \end{exam} Recall that a class $\alpha \in H_2(A_\Gamma)$ is visually represented by integer labels on the edges of the graph~$\Gamma$ equipped with an orientation. 
The class $\alpha = \overline v \otimes \overline w$ has the special property that there is an integer labelling of the vertices in $\Gamma$ that induces the edge labelling in the following way: the edge label is given by multiplying the vertex labels on the edge's incident vertices; the vertex labels are precisely the coordinates of~$\overline v \in H_1(F_n) \cong \mathbb{Z}^n$ and~$\overline w \in H_1(F_m) \cong \mathbb{Z}^m$ with respect to the canonical bases. See Figure~\ref{figEx} for an illustration. \begin{figure}[ht] \begin{tikzpicture}[thick, scale=.84, decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw[postaction={decorate}] (0,0) -- node[below] {2} (2,0); \draw[postaction={decorate}] (0,0) -- node[left] {4} (0,2); \draw[postaction={decorate}] (2,2) -- node[right] {3} (2,0); \draw[postaction={decorate}] (2,2) -- node[above] {6} (0,2); \draw[fill=white] (0,0) circle [radius=0.1] (2,2) circle [radius=0.1]; \filldraw (2,0) circle [radius=0.1] (0,2) circle [radius=0.1]; \begin{scope}[shift={(5,0)}] \draw[postaction={decorate}] (0,0) -- node[below] {$2 \cdot 1$} (2,0); \draw[postaction={decorate}] (0,0) -- node[left] {$2 \cdot 2$} (0,2); \draw[postaction={decorate}] (2,2) -- node[right] {$3 \cdot 1$} (2,0); \draw[postaction={decorate}] (2,2) -- node[above] {$3 \cdot 2$} (0,2); \draw[fill=white] (0,0) circle [radius=0.1] node[above right] {2} (2,2) circle [radius=0.1] node[below left] {3}; \filldraw (2,0) circle [radius=0.1] node[above left] {1} (0,2) circle [radius=0.1] node[below right] {2}; \end{scope} \end{tikzpicture} \caption{A class whose edge labels are induced by vertex labels. The bipartition of the vertices is shown in black and white.} \label{figEx} \end{figure} Surprisingly, this simple example allows us to compute the minimal genus of any class.
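Detecting whether a class is of this special form is a rank-one factorisation problem for its matrix of edge labels. The following Python sketch is ours and purely illustrative (the matrix encoding of the labels $l(v_i,w_j)$ and the function name are assumptions of the sketch): it recovers vertex labels when they exist, using the fact that a rank-one integer matrix factors over the integers with a primitive second factor.

```python
from functools import reduce
from math import gcd

def vertex_labels(L):
    """Given the n x m integer matrix L with entries l(v_i, w_j), try to
    write L as an outer product of integer vertex labels, i.e.
    l(v_i, w_j) = c_i * z_j.  Returns (c, z) on success, else None."""
    nonzero = [row for row in L if any(row)]
    if not nonzero:
        return [0] * len(L), [0] * len(L[0])   # the zero class
    g = reduce(gcd, (abs(x) for x in nonzero[0]))
    z = [x // g for x in nonzero[0]]           # primitive candidate for the w-side
    j = next(j for j, x in enumerate(z) if x)  # a coordinate with z_j != 0
    c = [row[j] // z[j] for row in L]          # candidate v-side labels
    if all(c[i] * z[k] == L[i][k] for i in range(len(L)) for k in range(len(z))):
        return c, z
    return None

# beta = 2e_(v1,w1) + 4e_(v1,w2) + 3e_(v2,w1) + 6e_(v2,w2) from the earlier example:
print(vertex_labels([[2, 4], [3, 6]]))    # ([2, 3], [1, 2])
print(vertex_labels([[1, 0], [0, -1]]))   # None
```

For $\beta$ this recovers exactly the vertex labels of Figure~\ref{figEx}, so $\beta = (2\overline v_1 + 3\overline v_2) \otimes (\overline w_1 + 2\overline w_2)$ is representable by a torus as in Example~\ref{Thorben}, whereas $\alpha = e_{(v_1,w_1)} - e_{(v_2,w_2)}$ admits no such labelling.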
\begin{thrm}\label{thrm - inequality is equality complete bipartite} Let $\Gamma$ be a complete bipartite graph and $\alpha \in H_2(A_\Gamma) \cong H_1(F_n) \otimes H_1(F_m)$. Then the minimal genus $\operatorname{gen}(\alpha)$ is equal to the cap bound $ \frac{1}{2} \operatorname{rank}(M_\alpha)$ and the minimal number of pure tensors, i.e.~elements of the form~$\overline v \otimes \overline w$, needed to represent~$\alpha$. Furthermore, the minimal genus is always realised by a disjoint union of tori. \begin{proof} Let~$\Gamma$ be a complete bipartite graph with vertices $\{v_1, \ldots, v_n \} \sqcup \{ w_1, \ldots, w_m \}$ and an orientation given by $E^{or}(\Gamma)=\{v_1, \ldots, v_n \} \times \{ w_1, \ldots, w_m \}$. Suppose $\alpha \in H_2(A_\Gamma)$ is an arbitrary class. We want to show that $2 \, \operatorname{gen}(\alpha) = \operatorname{rank}(M_\alpha)$. This holds automatically if~$\alpha$ is trivial, so we may assume it is nontrivial. Recall that the connection matrix is \[ M_\alpha = \begin{pNiceArray}{ccc|ccc}[first-row, first-col] & v_1 & \Cdots & v_n & w_1 & \Cdots & w_m \\ v_1 & \Block{3-3}<\Large>{0} & & & l(v_1,w_1) & \cdots & l(v_1,w_m) \\ \Vdots & & & & & \ddots & \\ v_n & & & & l(v_n,w_1) & \cdots & l(v_n,w_m) \\ \hline w_1 & -l(v_1,w_1) & \cdots & -l(v_n,w_1) & \Block{3-3}<\Large>{0} & & \\ \Vdots & & \ddots & & & & \\ w_m & -l(v_1,w_m) & \cdots & -l(v_n,w_m) & & & \end{pNiceArray}. \] Note that $\operatorname{rank}(M_\alpha)$ is twice the rank of the submatrix given by the first $n$ rows and the last $m$ columns. We start with the case $\operatorname{rank}(M_\alpha) = 2$. In this case, the rows of the submatrix span a cyclic subgroup in $\mathbb{Z}^m$. This implies there is a nontrivial integral row vector $(z_j)_{j=1}^m$ and integral multipliers $(c_i)_{i=1}^n$ such that $c_i \cdot z_j = l(v_i, w_j)$ for $1 \le i \le n$ and $1 \le j \le m$. 
In other words, $\alpha = \overline x \otimes \overline y$ if we set~$\overline x = \sum_{i=1}^n c_i \overline v_i$ and~$\overline y = \sum_{j=1}^m z_j \overline w_j$ in~$H_1(F_n)$ and $H_1(F_m)$ respectively. By Example~\ref{Thorben}, this implies~$\alpha$ is representable by a torus and $2 \, \operatorname{gen}(\alpha) = 2 = \operatorname{rank}(M_\alpha)$. More generally, suppose $\operatorname{rank}(M_\alpha) = 2k$. Then there are $k$ linearly independent integral row $m$-vectors $\overline z_1 = (z_{1j})_{j=1}^m, \ldots,\,\overline z_k = (z_{kj})_{j=1}^m$ and $k$ integral $n$-vectors $\overline c_1 = (c_{1i})_{i=1}^n, \ldots,\,\overline c_k = (c_{ki})_{i=1}^n$ such that $ c_{1i} \cdot z_{1j} + \cdots + c_{ki} \cdot z_{kj} = l(v_i, w_j)$ for $1 \le i \le n$ and $1 \le j \le m$. As before, this means $\alpha = \sum_{l=1}^k \overline x_l \otimes \overline y_l$ where we set~$\overline x_l = \sum_{i=1}^n c_{li} \overline v_i$ and~$\overline y_l = \sum_{j=1}^m z_{lj} \overline w_j$ for~$l = 1, \ldots, k$. But each summand has a torus representative by Example~\ref{Thorben}, so $\operatorname{gen}(\alpha) \le k$ by subadditivity (Lemma~\ref{lem-subadditive}); combined with the cap product inequality, $2 \, \operatorname{gen}(\alpha) = 2k = \operatorname{rank}(M_\alpha)$. \end{proof} \end{thrm} In particular, this theorem characterises classes representable by a torus as those with the same form as in Example~\ref{Thorben}. \begin{cor}\label{cor - torusrep} Let~$\Gamma$ be a complete bipartite graph. A class $\alpha \in H_2(A_\Gamma)$ is representable by a torus if and only if $\alpha = \overline v \otimes \overline w$ for some $\overline v \in H_1(F_n)$ and $\overline w \in H_1(F_m)$. \begin{proof} The reverse direction is precisely the construction in Example~\ref{Thorben}. For the forward direction, suppose $\alpha \in H_2(A_\Gamma)$ is nontrivial and $\operatorname{gen}(\alpha) = 1$. Then, by Theorem~\ref{thrm - inequality is equality complete bipartite}, this implies $\alpha = \overline v \otimes \overline w$ for some $\overline v \in H_1(F_n)$ and $\overline w \in H_1(F_m)$.
\end{proof} \end{cor} \begin{defn} We call a simple graph a \emph{star} if it consists of one vertex of valence~$n$,~$n$ edges, and~$n$ leaves. In other words, it is a complete bipartite graph with partition of the vertices $V(\Gamma) = \{ v_1, \ldots, v_n \} \sqcup \{ w \}$. \end{defn} The following lemma is another immediate consequence of Example~\ref{Thorben}. \begin{lem}\label{lemma-star} Let~$\Gamma$ be any simple graph and~$\alpha \in H_2(A_{\Gamma})$. If~$\operatorname{supp}(\alpha)$ is a star, then~$\alpha$ is representable by a torus. \begin{proof} Fix an orientation of~$\Gamma$. If~$\Gamma'=\operatorname{supp}(\alpha)$ is a star with vertex set $\{ v_1, \ldots, v_n \} \sqcup \{ w \}$, then we can induce the appropriate edge labels of~$\alpha$ from vertex labels: label the vertex~$w$ with~$1$ and the vertex~$v_i$ with the same label as the edge~$\{w,v_i\}$. Let~$\alpha' \in H_2(A_{\Gamma'})$ be the class whose edge labels match those of $\alpha$. By the discussion after Example~\ref{Thorben}, the class~$\alpha'$ is representable by a torus. The inclusion $\Gamma' \subseteq \Gamma$ induces $\iota_*:H_2(A_{\Gamma'}) \to H_2(A_{\Gamma})$ with $\iota_*(\alpha') = \alpha$. Therefore, $\alpha$ is representable by a torus. \end{proof} \end{lem} \subsection{Trees}\label{subsect - tree} In this subsection, we compute the minimal genus in the case where~$\Gamma$ is a tree. Recall that a tree is a connected simple graph with no cycles. Some of our results hold for general graphs~$\Gamma$, and so we will always make explicit our assumptions on the graph. The cap bound allows us to constrain the options for~$\operatorname{supp}(\alpha)$ when~$\alpha$ is representable by a torus. \begin{lem}\label{lem - two edges form square in sup(alpha)} Let $\Gamma$ be any simple graph and~$\alpha\in H_2(A_\Gamma)$.
If~$\operatorname{rank}(M_\alpha)=2$, then any two edges with distinct vertices~$\{v_1,v_2\}$ and~$\{w_1,w_2\}$ in~$\operatorname{supp}(\alpha)$ form two sides of a square in~$\operatorname{supp}(\alpha)$. \begin{proof} Suppose~$\alpha \in H_2(A_\Gamma)$ satisfies~$\operatorname{rank}(M_\alpha)=2$, and suppose $\{v_1,v_2\}$ and~$\{w_1,w_2\}$ are edges in~$\operatorname{supp}(\alpha)$ with~$l(v_1,v_2)=\lambda\neq 0$ and~$l(w_1,w_2)=\beta\neq 0$. For brevity, set~$l^i_j=l(v_i,w_j)$. Then the matrix~$M_\alpha$ has the following submatrix: \[ M_\alpha |_{\langle v_1,v_2,w_1,w_2 \rangle}= \kbordermatrix{ & v_1 & v_2 & w_1 & w_2 \\ v_1 & 0 & \lambda & l^1_1 & l^1_2 \\ v_2 & -\lambda & 0 & l^2_1 & l^2_2 \\ w_1 & -l^1_1 & -l^2_1& 0& \beta \\ w_ 2 & -l^1_2 & -l^2_2 & -\beta&0}. \] A direct computation gives \[ \det \left( M_\alpha |_{\langle v_1,v_2,w_1,w_2 \rangle} \right) = \left( \lambda\beta - l^1_1 l^2_2 + l^1_2 l^2_1 \right)^2. \] The two edges form two sides of a square in~$\operatorname{supp}(\alpha)$ exactly when one of the two completing pairs of edges is present, i.e.~when $l^1_1 \neq 0 \neq l^2_2$ or $l^1_2 \neq 0 \neq l^2_1$. If they do not form two sides of a square, then $l^1_1 l^2_2 = 0$ and $l^1_2 l^2_1 = 0$, so the determinant above equals $(\lambda\beta)^2 \neq 0$; this covers, in particular, the case where the two edges are disjoint and all~$l^i_j$ vanish. Hence the submatrix has rank $4$, so~$\operatorname{rank}(M_\alpha)\geq 4$, and this contradicts $\operatorname{rank}(M_\alpha)=2$. \end{proof} \end{lem} As a corollary, any class representable by a torus has a connected support. \begin{cor}\label{cor - supp connected for torus rep} Let $\Gamma$ be a simple graph and~$\alpha\in H_2(A_\Gamma)$. If~$\operatorname{rank}(M_\alpha) = 2$, then the support~$\operatorname{supp}(\alpha)$ is connected. \begin{proof} We prove the contrapositive, assuming~$\alpha \in H_2(A_\Gamma)$ is nontrivial. Suppose the support is disconnected and consider two edges taken from two connected components. Then these edges do not form two sides of a square.
So~$\operatorname{rank}(M_\alpha) \ge 4$ by Lemma~\ref{lem - two edges form square in sup(alpha)}. \end{proof} \end{cor} \begin{prop}\label{prop- star in tree is torus} Let~$\Gamma$ be a tree. A class~$\alpha\in H_2(A_\Gamma)$ is representable by a torus if and only if $\operatorname{supp}(\alpha)$ is a complete bipartite graph (or, alternatively, a star). \begin{proof} The statement is vacuously true when~$\alpha$ is trivial. So we may assume that~$\alpha$ is nontrivial. If $\operatorname{supp}(\alpha)$ is a complete bipartite graph, then~$\operatorname{supp}(\alpha)$ is a star since~$\Gamma$ is a tree. From Lemma~\ref{lemma-star}, it follows that~$\alpha$ is representable by a torus. Suppose, conversely, that $\alpha$ is representable by a torus but $\operatorname{supp}(\alpha)$ is not a complete bipartite graph, i.e.~not a star. Then since~$\alpha$ is representable by a torus,~$\operatorname{supp}(\alpha)$ is connected by the cap product inequality (Equation~\ref{eq - cap bound}) and Corollary~\ref{cor - supp connected for torus rep}.
Moreover, as~$\Gamma$ is a tree, there is a subgraph of~$\operatorname{supp}(\alpha)$ of the following form: \begin{center} \begin{tikzpicture}[thick, scale=.7] \begin{scope}[scale=1.2,shift={(-5,3)},decoration={ markings, mark=at position 0.5 with {\arrow{>}}}] \draw[rotate =0] (0,0)--(1,0); \draw[rotate =0] (1,0)--(2,0); \draw[rotate =60] (0,0)--(1,0); \draw[rotate =120] (0,0)--(1,0); \draw[rotate =220] (0,0)--(1,0); \draw[rotate =180] (0,0)--(1,0); \draw[rotate =310] (.8,0) circle [radius=0.01]; \draw[rotate =300] (.805,0) circle [radius=0.01]; \draw[rotate =290] (.8,0) circle [radius=0.01]; \filldraw (0,0) circle [radius=0.1] node[below] {$w$}; \filldraw (1,0) circle [radius=0.1] node[above] {$v_k$}; \filldraw (2,0) circle [radius=0.1] node[above] {$z$}; \filldraw (-1,0) circle [radius=0.1] node[left] {$v_3$}; \filldraw[rotate =60] (1,0) circle [radius=0.1] node[above] {$v_1$}; \filldraw[rotate =120] (1,0) circle [radius=0.1] node[above] {$v_2$}; \filldraw[rotate =220] (1,0) circle [radius=0.1] node[below] {$v_4$}; \end{scope} \end{tikzpicture} \end{center} \noindent where~$k$ is at least~$2$. The edges~$\{w, v_1\}$ and~$\{v_k,z\}$ cannot form two sides of a square in $\operatorname{supp}(\alpha)$ since~$\Gamma$ is a tree. By Lemma~\ref{lem - two edges form square in sup(alpha)}, we have~$\operatorname{rank}(M_\alpha) \ge 4$. This contradicts the cap product inequality (Equation~\ref{eq - cap bound}) and the assumption~$\alpha$ was representable by a torus; therefore, the support~$\operatorname{supp}(\alpha)$ is a complete bipartite graph. 
\end{proof} \end{prop} \begin{defn} A \emph{star covering} of an integer labelled oriented simple graph~$\Gamma$ with labels~$l(v,w)$ is a finite collection of integer labelled oriented star graphs $\{S_1,\ldots, S_k\}$ such that: \begin{enumerate} \item for all~$i$, $S_i$ is a subgraph of $\Gamma$, and the orientation on~$S_i$ is induced by the orientation on~$\Gamma$; and \item writing~$l_i(v,w)$ for the label of~$(v,w)$ in~$S_i$ if~$(v,w)$ lies in~$S_i$, and~$l_i(v,w)=0$ otherwise, we require $\sum_{i=1}^k l_i(v,w)=l(v,w)$ for all oriented edges~$(v,w)\in E^{or}(\Gamma)$. \end{enumerate} We denote by~$\operatorname{sc}(\Gamma)$ the minimal~$k$ for which there exists such a star covering of~$\Gamma$. We also write $\operatorname{sc}(\alpha)$ for $\operatorname{sc}(\operatorname{supp}(\alpha))$, where we now consider~$\operatorname{supp}(\alpha)$ as a labelled oriented graph. Note that~$\operatorname{sc}(\alpha)$ is independent of the orientation chosen to depict~$\alpha$. \end{defn} \begin{remark} In the literature an unlabelled version of this cover is often called a \emph{vertex cover}. Since our emphasis lies on homology classes described by labelled oriented edges, we use the term star cover instead. \end{remark} \begin{lem} \label{lem - minimal star covering disjoint} Given a star covering of a labelled oriented graph~$\Gamma$, there is a covering~$\{S_1,\ldots, S_k\}$ of the same cardinality such that for all~$i\neq j$,~$S_i$ and~$S_j$ have disjoint edge sets (their vertex sets may intersect non-trivially). \begin{proof} Suppose several stars~$\{S_i\}_{i\in I}$ have a common oriented edge~$(v,w)$ for some $I\subseteq \{1,\ldots, k\}$ with~$|I| \geq 2$, where~$S_i$ carries the label~$l_i(v,w)$ for~$i\in I$. Choose one~$j \in I$ and remove the edge~$(v,w)$ from every star $S_i$ with $j \neq i \in I$; removing an edge from a star leaves a star. Following this, change the label on~$S_j$ to~$l_j(v,w) = l(v,w)$, so that the sum condition in the definition is preserved. Repeat this process for every edge of~$\Gamma$ that is common to more than one star.
\end{proof} \end{lem} \begin{prop}\label{prop - star covering bound} Let~$\Gamma$ be any oriented simple graph and~$\alpha \in H_2(A_\Gamma)$. The minimal cardinality of a star covering of~$\operatorname{supp}(\alpha)$ bounds the minimal genus from above, i.e. \[\operatorname{sc}(\alpha)\geq \operatorname{gen}(\alpha).\] \end{prop} Note that, a priori, an upper bound on the minimal genus for a second homology class of a RAAG is given by the number of edges in the support. This proposition provides a substantial improvement to this bound. \begin{proof} Given a minimal star covering $\{S_1,\ldots, S_k\}$ of~$\operatorname{supp}(\alpha)$ (so~$k=\operatorname{sc}(\alpha)$), by Lemma~\ref{lem - minimal star covering disjoint}, we can assume the edge sets of the stars are disjoint, i.e.~we can label the edges of the stars with the edge labels of~$\alpha$. For~$1\leq i \leq k$, we associate to the star~$S_i$ a class~$s_i \in H_2(A_\Gamma)$ with labelled support~$S_i$ such that \[ \alpha=\sum_{i=1}^k s_i. \] By Lemma~\ref{lemma-star}, we know that~$\operatorname{gen}(s_i)=1$ for~$1\leq i\leq k$, and so it follows from subadditivity (Lemma~\ref{lem-subadditive}) that \[\operatorname{gen}(\alpha)\leq k = \operatorname{sc}(\alpha).\qedhere \] \end{proof} \begin{prop}\label{prop - stc equals cap} Let $\Gamma$ be an oriented tree and $\alpha \in H_2(A_\Gamma)$. Then the cap bound $\frac{1}{2} \operatorname{rank}(M_\alpha)$ is equal to~$\operatorname{sc}(\alpha)$. \begin{proof} From Proposition~\ref{prop - star covering bound}, we know that $\operatorname{sc}(\alpha)\geq \operatorname{gen}(\alpha)$, and by Equation~\ref{eq - cap bound}, this implies that $\operatorname{sc}(\alpha)\geq \frac{1}{2} \operatorname{rank}(M_\alpha)$. We now show the reverse inequality. The result is obvious if~$\alpha$ is trivial, so we may assume it is nontrivial. First of all, we assume that $\operatorname{supp}(\alpha)$ is connected.
Let~$\{S_1,\ldots S_k\}$ be a minimal star covering on~$\operatorname{supp}(\alpha)$ such that all stars have disjoint edge sets (Lemma~\ref{lem - minimal star covering disjoint}). We proceed by induction on~$k = \operatorname{sc}(\alpha)$. If~$k=1$, then~$\operatorname{supp}(\alpha)$ is a star and, by Proposition~\ref{prop- star in tree is torus},~$\operatorname{gen}(\alpha)=1$. This implies $\frac{1}{2} \operatorname{rank}(M_\alpha)=1=\operatorname{sc}(\alpha)$. Assume that~$k>1$ and that the theorem holds for classes~$\alpha'\in H_2(A_\Gamma)$ with~$\operatorname{sc}(\alpha')\leq k-1$. Then, since~$\Gamma$ is a tree,~$\operatorname{supp}(\alpha)$ has the following form \begin{center} \begin{tikzpicture}[thick, scale=.7] \begin{scope}[scale=1.2,shift={(-5,3)},decoration={markings, mark=at position 0.5 with {\arrow{>}}}] \draw[rotate =0, postaction=decorate] (0,0)--(1,0); \draw[rotate =60, postaction=decorate] (0,0)--(1,0); \draw[rotate =120, postaction=decorate] (0,0)--(1,0); \draw[rotate =220, postaction=decorate] (0,0)--(1,0); \draw[rotate =180, postaction=decorate] (0,0)--(1,0); \draw (1,0)--(1.3,0.25) (1,0)--(1.3,-.25) (1,0)--(1.4,0); \draw[rotate =310] (.8,0) circle [radius=0.01]; \draw[rotate =300] (.805,0) circle [radius=0.01]; \draw[rotate =290] (.8,0) circle [radius=0.01]; \draw (1,-.85)--(1,.85)--(5,.85)--(5,-.85)--(1,-.85); \filldraw (0,0) circle [radius=0.1] node[below] {$w$}; \filldraw (1,0) circle [radius=0.1] node[above, xshift=-1.1ex] {$v_p$}; \filldraw (-1,0) circle [radius=0.1] node[left] {$v_3$}; \filldraw[rotate =60] (1,0) circle [radius=0.1] node[above] {$v_1$}; \filldraw[rotate =120] (1,0) circle [radius=0.1] node[above] {$v_2$}; \filldraw[rotate =220] (1,0) circle [radius=0.1] node[below] {$v_4$}; \draw (3,0) node {$\Gamma'$}; \end{scope} \end{tikzpicture} \end{center} where the star pictured is~$S_k$,~$v_1$ is a leaf of~$\operatorname{supp}(\alpha)$, and the stars~$\{S_1,\ldots, S_{k-1}\}$ form a minimal star cover 
for~$\Gamma'=\operatorname{supp}(\alpha)\backslash S_k$. Let~$\alpha' \in H_2(A_\Gamma)$ be the class whose edge labels agree with those of~$\alpha$ on~$\Gamma'$ and vanish elsewhere; then~$\operatorname{supp}(\alpha')=\Gamma'$ and~$\operatorname{sc}(\alpha')=k-1$. By the induction hypothesis,~$\frac{1}{2} \operatorname{rank}(M_{\alpha'})=k-1$. Let~$u$ be a leaf of~$\operatorname{supp}(\alpha)$ whose unique edge lies in~$S_k$. If~$S_k$ only has one edge~$\{u,v\}$, then there is another star~$S_j$ in the star cover containing an edge~$\{v,z\}$ for some~$z\neq u$ (because $\operatorname{supp}(\alpha)$ is connected and $k>1$). Remove the edge~$\{v,z\}$ from~$S_j$ and add it to~$S_k$ along with its orientation and label. This now gives a star covering of the same cardinality such that~$S_k$ is as pictured above with~$p\geq 2$. If~$S_k$ has two or more edges, we automatically have~$p \geq 2$ and no modifications are needed. Either way, we assume~$S_k$ was chosen such that~$p\geq 2$. Consider the connection matrix~$M_\alpha$ restricted to~$v_1, w$, and the vertices in~$\Gamma'$, and let~$l(v_1,w)=\lambda \neq 0$ and~$l(w,z_i)=\beta_i$ where~$\{z_1,\ldots ,z_r\}$ is the vertex set of~$\Gamma'$. \[ M_\alpha|_{\langle v_1,w, V(\Gamma') \rangle}= \begin{pNiceArray}{cc|ccc}[first-row, first-col] &v_1 & w & \Block{1-3}{V(\Gamma')}&&\\ v_1 & 0 & \lambda & 0 &\Cdots & 0\\ w& -\lambda &0 & \beta_1 & \Cdots & \beta_r\\ \hline \Block{3-1}{\rotate V(\Gamma')}&0& -\beta_1 &\Block{3-3}<\Large>{M_{\alpha'}}&&\\ & \Vdots & \Vdots & &&\\ &0 &-\beta_r & && \end{pNiceArray} \] Observe that~$\operatorname{rank}(M_\alpha)\geq 2 + \operatorname{rank}(M_{\alpha'})$: bordering a maximal non-singular minor of~$M_{\alpha'}$ by the rows and columns of~$v_1$ and~$w$ multiplies its determinant by~$-\lambda^2\neq 0$. Since~$\frac{1}{2} \operatorname{rank}(M_{\alpha'})=k-1$, it follows that $\frac{1}{2} \operatorname{rank}(M_\alpha)\geq k=\operatorname{sc}(\alpha)$ and this shows the reverse inequality we required. So~$\frac{1}{2} \operatorname{rank}(M_\alpha)=\operatorname{sc}(\alpha)$. Now suppose that~$\operatorname{supp}(\alpha)$ is disconnected.
Then \[ \alpha=\sum_{j=1}^l \beta_j \] for some~$\beta_j\in H_2(A_\Gamma)$ such that~$\operatorname{supp}(\beta_j)$ is connected, and~$l$ is the number of connected components of~$\operatorname{supp}(\alpha)$. Then since the~$\beta_j$ all have disjoint support, $M_\alpha$ is a block diagonal matrix with blocks~$M_{\beta_j}$ for~$1\leq j \leq l$. It follows from computing the rank that \[ \frac{1}{2}\operatorname{rank}(M_\alpha) =\frac{1}{2} \sum_{j=1}^l \operatorname{rank}(M_{\beta_j}) = \sum_{j=1}^l \operatorname{sc}(\beta_j) \] where the final equality comes from the first part of this proof applied to each~$\beta_j$. Finally, we note that since each~$\beta_j$ corresponds to a connected component of~$\operatorname{supp}(\alpha)$, any star covering of~$\alpha$ is a union of star coverings of the~$\beta_j$ and vice versa. So~$\frac{1}{2} \operatorname{rank}(M_\alpha) = \sum_{j=1}^l \operatorname{sc}(\beta_j)=\operatorname{sc}(\alpha)$ as required. \end{proof} \end{prop} \begin{thrm}\label{thrm - inequality is equality trees} Let $\Gamma$ be a tree and $\alpha \in H_2(A_\Gamma)$. Then the minimal genus $\operatorname{gen}(\alpha)$ is equal to the cap bound $\frac{1}{2} \operatorname{rank}(M_\alpha)$ and~$\operatorname{sc}(\alpha)$. Furthermore, the minimal genus is always realised by a disjoint union of tori. \begin{proof} Fix an orientation on the tree~$\Gamma$. From Proposition~\ref{prop - star covering bound}, we know that $\operatorname{sc}(\alpha)\geq \operatorname{gen}(\alpha)$. On the other hand, from a combination of Proposition~\ref{prop - stc equals cap} and Equation \ref{eq - cap bound}, we have \[\operatorname{sc}(\alpha)=\frac{1}{2} \operatorname{rank}(M_\alpha)\leq \operatorname{gen}(\alpha).\] Putting these two inequalities together gives $\operatorname{sc}(\alpha)=\operatorname{gen}(\alpha)$. 
Recall that each star in the star covering of minimal cardinality is representable by a torus by Lemma~\ref{lemma-star} and the disjoint union of these tori has genus $\operatorname{sc}(\alpha)=\operatorname{gen}(\alpha)$. Thus the minimal genus is always realised by a disjoint union of tori. \end{proof} \end{thrm} \section{Classes representable by a torus}\label{section - one torus} In this section, we investigate the relationship between the support of a class and its minimal genus. We restrict ourselves to the case where the minimal genus is one, i.e.~the class is representable by a torus. Recall that a \emph{complete~$n$-partite graph}~$\Gamma$ has a partition of the vertices $V(\Gamma)=\sqcup_{i=1}^n X_i$ such that for every $1\leq i \leq n$: \begin{itemize} \item all $x_i\in X_i$ are joined by an edge to every vertex in~$V(\Gamma)\backslash X_i$ \item there exist no edges between any pair of vertices in~$X_i$. \end{itemize} Given an arbitrary graph~$\Gamma$, we also use the notion of a \emph{maximal complete $m$-partite subgraph}. This is a full subgraph~$Y$ of~$\Gamma$ such that~$Y$ is complete $m$-partite for some~$m$, and~$V(Y)$ is maximal over all full, complete~$k$-partite subgraphs of~$\Gamma$ for any~$k$. Note that such a maximal complete $m$-partite subgraph is not unique. Recall that a full subgraph is one which inherits all edges that are present in~$\Gamma$, i.e.~if~$v, w \in V(Y)$ and~$\{v,w\} \in E(\Gamma)$, then~$\{v,w\}\in E(Y)$. \begin{prop} \label{prop - rank 2 implies n partite} Let~$\Gamma$ be a simple graph and~$\alpha \in H_2(A_\Gamma)$. If~$\operatorname{rank}(M_\alpha) = 2$, then~$\operatorname{supp}(\alpha)$ is a complete $n$-partite graph for some positive integer $n$. \begin{proof} Since~$\operatorname{rank}(M_\alpha)=2$, the support~$\operatorname{supp}(\alpha)$ is nonempty and it follows from Corollary~\ref{cor - supp connected for torus rep} that~$\operatorname{supp}(\alpha)$ is connected. 
We assume that~$\operatorname{supp}(\alpha)$ is not complete~$n$-partite for any integer~$n \ge 1$ and work towards a contradiction. Consider a maximal complete $m$-partite subgraph~$Y$ of~$\operatorname{supp}(\alpha)$ with~$V(Y)=\sqcup_{i=1}^m X_i$. Then~$Y\neq\operatorname{supp}(\alpha)$ by the assumption and, since~$\operatorname{supp}(\alpha)$ is connected, there exists a vertex~$w$ in~$\operatorname{supp}(\alpha)\backslash Y$ connected by an edge of~$\operatorname{supp}(\alpha)$ to some vertex in~$Y$. {\bf Claim:} If~$w$ is connected by an edge of~$\operatorname{supp}(\alpha)$ to some vertex in~$X_j$, then~$w$ is connected by an edge of~$\operatorname{supp}(\alpha)$ to every vertex in~$X_j$. \emph{Proof of claim.} Suppose that~$w$ is connected by an edge of~$\operatorname{supp}(\alpha)$ to~$x \in X_j$ but not to~$y\in X_j$. We consider the connection matrix for~$\alpha$ restricted to~$w,x,y,$ and~$V(Y)\backslash X_j$. Let~$l(w,x)=\lambda \neq 0$ and denote the labels between~$y$ and~$V(Y)\backslash X_j$ by~$\mu_i \neq 0$ for~$1\leq i \leq r$. The labels~$\mu_i$ are non-zero since~$Y$ is a complete~$m$-partite subgraph of~$\operatorname{supp}(\alpha)$. (We mark unimportant entries with stars.) \[ M_\alpha|_{\langle w,x,y, V(Y)\backslash X_j \rangle}= \begin{pNiceArray}{ccc|ccc}[first-row, first-col] &w & x & y & \Block{1-3}{V(Y)\backslash X_j }&&\\ w & 0 & \lambda &0& * & \Cdots& *\\ x& -\lambda &0 & 0 &* & \Cdots & * \\ y& 0 &0 & 0& \mu_1 & \Cdots & \mu_r\\ \hline \Block{3-1}{\rotate V(Y)\backslash X_j }& *& *&-\mu_1 &\Block{3-3}<\Large>{M_{\alpha}|_{\langle V(Y)\backslash X_j \rangle}}&&\\ & \Vdots &\Vdots & \Vdots &&&\\ & *&*&-\mu_r & && \end{pNiceArray} \] Since this is a submatrix of~$M_\alpha$, and the~$\mu_i$ are all non-zero, it follows that~$\operatorname{rank}(M_\alpha)\geq 3$, which contradicts the hypothesis that~$\operatorname{rank}(M_\alpha) = 2$. This proves the claim. 
Thus for each~$X_j\subset V(Y)$,~$w$ is connected (in~$\operatorname{supp}(\alpha)$) either to all or none of the vertices in~$X_j$. To reach a contradiction, we now rule out all possible ways~$w$ may be connected to~$Y$ by an edge of~$\operatorname{supp}(\alpha)$. First, we note that $w$ is not connected by an edge of~$\operatorname{supp}(\alpha)$ to all~$v\in V(Y)$ because then the full subgraph of~$\operatorname{supp}(\alpha)$ spanned by~$w$ and $V(Y)$ would be a complete $(m+1)$-partite subgraph of~$\operatorname{supp}(\alpha)$ with one more vertex than~$Y$, and this contradicts the maximality of~$Y$. For the same reason,~$w$ cannot be connected by an edge of~$\operatorname{supp}(\alpha)$ to all vertices in~$V(Y)\backslash X_j$ for some~$j$, and disconnected from~$X_j$: replacing~$X_j$ with $\{ w \} \cup X_j$ would result in the full subgraph of~$\operatorname{supp}(\alpha)$ spanned by~$w$ and $V(Y)$ being a complete $m$-partite subgraph of~$\operatorname{supp}(\alpha)$ with one more vertex than~$Y$. The final case to check is when~$w$ is not connected to~$l$ of the~$X_j$, but is connected to~$(m-l)$ of the~$X_j$, for~$1<l<m$. Without loss of generality, suppose~$w$ is disconnected from all vertices in~$\sqcup_{j=1}^l X_j$ and connected by an edge to all vertices in~$\sqcup_{j=l+1}^m X_j$.
Then the submatrix~$M_\alpha|_{\langle w, V(Y) \rangle}$ is given by: \[ M_\alpha|_{\langle w, V(Y) \rangle}= \begin{pNiceArray}{c|ccc|ccc}[first-row, first-col] &w & X_1 & \Cdots & X_l& X_{l+1}&\Cdots&X_m\\ w & 0 & 0&\cdots& 0 & \mu_{l+1} &\cdots& \mu_m \\ \hline X_1& 0 &\Block{3-3}<\Large>{M_{\alpha}|_{\langle X_{1},\ldots , X_l \rangle}} & & & \Block{3-3}<\Large>{N} & \\ \Vdots& \vdots & & &&& & \\ X_l& 0 & & & & & & \\ \hline X_{l+1}& -\mu_{l+1}& \Block{3-3}<\Large>{-N}& &&\Block{3-3}<\Large>{M_{\alpha}|_{\langle X_{l+1},\ldots , X_m \rangle}}& &\\ \vdots & \Vdots & & &&&\\ X_m& -\mu_m&& & && \end{pNiceArray} \] where the matrix~$N$ has no zero entries and~$\mu_j \neq 0$ for~$l+1\leq j \leq m$. Then, since~$l$ is greater than~$1$, $\operatorname{rank}(M_{\alpha}|_{\langle X_{1},\ldots , X_l \rangle})\geq 2$ and thus~$\operatorname{rank}(M_\alpha|_{\langle w, V(Y) \rangle})\geq 3$ (since the~$\mu_j$ are non-zero), which contradicts~$\operatorname{rank}(M_\alpha)=2$. \end{proof} \end{prop} \begin{prop}\label{prop - n-partitite + rank 2 implies torus} Let~$\Gamma$ be a complete n-partite graph and~$\alpha \in H_2(A_\Gamma)$. If $\operatorname{rank}(M_\alpha) = 2$, then~$\alpha$ is representable by a torus. \begin{proof} The abelianisation of~$A_\Gamma$ is~$\mathbb{Z}^{V(\Gamma)}$, and the map from~$A_\Gamma$ to its abelianisation corresponds to embedding~$\Gamma$ into the complete graph on the vertex set~$V(\Gamma)$. Let $\alpha^{ab} \in H_2(\mathbb{Z}^{V(\Gamma)})$ denote the image of~$\alpha$ under the homomorphism on second homology induced by the abelianisation. Then the connection matrices are equal, i.e.~$M_{\alpha^{ab}} = M_\alpha$. Recall that~$H_2(\mathbb{Z}^{V(\Gamma)})$ is identified with $\bigwedge^2 \mathbb{Z}^{V(\Gamma)}$. 
Since~$\operatorname{rank}(M_\alpha) = 2$, it follows from Proposition~\ref{prop - minimal number of elementary wedges} that the class~$\alpha^{ab}$ is an elementary wedge $a \wedge b$ for some vectors~$a,b \in \mathbb{Z}^{V(\Gamma)}$. Using the partition $V(\Gamma)=\sqcup_{i=1}^n X_i$, write $a=\sum_{i=1}^n a_i,~b=\sum_{i=1}^n b_i$ where each~$a_i, b_i$ lie in the subgroup generated by $X_i$. The partition of~$V(\Gamma)$ induces a block decomposition of~$M_\alpha$ with diagonal blocks~$M_\alpha|_{\langle X_i\rangle}$; these diagonal blocks are necessarily zero matrices since there are no edges in~$\Gamma$ between vertices in~$X_i$. Since~$M_\alpha=M_{\alpha^{ab}}$, the blocks~$M_\alpha|_{\langle X_i\rangle}$ correspond to the elementary wedges~$a_i \wedge b_i$ under the identification $\alpha^{ab}=a \wedge b$, and it follows that~$a_i \wedge b_i$ is trivial. This implies~$a_i$ and~$b_i$ are linearly dependent, i.e.~there are vectors~$v_i$ and multipliers~$r_i, s_i \in \mathbb{Z}$ such that~$a_i = r_i v_i,~b_i = s_i v_i$ for~$i= 1, \ldots, n$. Choose lifts~$\nu_i$ of the vectors~$v_i \in \mathbb{Z}^{V(\Gamma)}$ inside the subgroup of~$A_\Gamma$ generated by~$X_i$. Since~$\Gamma$ is complete $n$-partite, every vertex of~$X_i$ is joined by an edge to every vertex of~$X_j$ for~$i\neq j$, so the elements~$\nu_i$ pairwise commute and thus determine a homomorphism $\tau: \mathbb{Z}^n \to A_\Gamma$ that maps the standard basis $e_i$ to $\nu_i$. Set $c = \sum_{i=1}^n r_i e_i$ and $d = \sum_{i=1}^n s_i e_i$; using the Pontryagin ring structures on homology groups of $\mathbb{Z}^n$ and $\mathbb{Z}^{V(\Gamma)}$, we see that $\tau_*(c \wedge d)^{ab} = a \wedge b$, where $\tau_*\colon H_2(\mathbb{Z}^n) \to H_2(A_\Gamma)$ is the homomorphism induced by $\tau$. So $\alpha = \tau_*\left(c \wedge d\right)$ since the abelianisation map is injective on second homology. By Proposition~\ref{prop - n torus cap equality}, the elementary wedge~$c \wedge d$, and hence its image $\alpha$, is representable by a torus.
\end{proof} \end{prop} Putting the results of this section together gives: \setcounter{abcthm}{3} \begin{abcthm}\label{abcthm - n partite torus} Let~$\Gamma$ be any simple graph. Then a nontrivial second homology class $\alpha \in H_2(A_\Gamma)$ is representable by a torus if and only if $\operatorname{rank}(M_\alpha) = 2$. Furthermore, the support of such a homology class is a complete n-partite graph. \begin{proof} If~$\alpha \in H_2(A_\Gamma)$ is nontrivial, then automatically~$\operatorname{rank}(M_\alpha) \ge 2$. If we also assume~$\alpha$ is representable by a torus, then $\operatorname{rank}(M_\alpha) = 2$ by the cap product inequality (Equation~\ref{eq - cap bound}). Conversely, if~$\operatorname{rank}(M_\alpha) = 2$, then~$\alpha$ is nontrivial and, by Proposition~\ref{prop - rank 2 implies n partite}, its support~$\operatorname{supp}(\alpha)$ is a complete $n$-partite graph for some positive integer~$n$. Restrict the graph~$\Gamma$ if necessary and assume~$\operatorname{supp}(\alpha) = \Gamma$. By Proposition~\ref{prop - n-partitite + rank 2 implies torus}, the class~$\alpha$ is representable by a torus. \end{proof} \end{abcthm} \section{An example answering Question~\ref{question - realised by dj tori}}\label{section - question 2} In this final section, we answer the following question: \begin{question} Does every class in the second homology of a RAAG have a minimal genus representative that is a disjoint union of tori? \end{question} To describe a second homology class representative in general, we use \emph{Van Kampen diagrams} (see~\cite{VKDiagrams} for a more thorough introduction). These diagrams encode a cellular structure of a compact surface~$\Sigma$ and information on how the cells are mapped to a CW space~$X$: overall this gives a cellular map~$f\colon \Sigma \to X$. 
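The simplest instance, recorded here as an illustration (this sketch is ours and is not taken from~\cite{VKDiagrams}), is a single square whose opposite sides are identified: it encodes a map of the torus.

```latex
% A one-square Van Kampen diagram for the torus: opposite sides are
% identified, both horizontal sides carry the label v and both vertical
% sides the label w; the boundary spells the commutator vwv^{-1}w^{-1}.
\begin{center}
\begin{tikzpicture}[thick, scale=.9,
    decoration={markings, mark=at position 0.5 with {\arrow{>}}}]
  \draw[postaction=decorate] (0,0) -- node[below] {$v$} (2,0);
  \draw[postaction=decorate] (0,2) -- node[above] {$v$} (2,2);
  \draw[postaction=decorate] (0,0) -- node[left]  {$w$} (0,2);
  \draw[postaction=decorate] (2,0) -- node[right] {$w$} (2,2);
  \filldraw (0,0) circle [radius=0.07] (2,0) circle [radius=0.07]
            (0,2) circle [radius=0.07] (2,2) circle [radius=0.07];
\end{tikzpicture}
\end{center}
```

Reading along the boundary of the square gives the word $vwv^{-1}w^{-1}$, so the square can be mapped over a $2$-cell of the target exactly when the images of $v$ and $w$ commute; the diagrams used below are tessellations by such squares.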
In our case, we want a map from a surface $\Sigma$ to $\operatorname{Sal}_\Gamma$, so the Van Kampen diagram is a tessellation of $\Sigma$ by squares with oriented edges labelled by vertices of $\Gamma$ such that opposite edges of the squares have the same labelling and orientation, and for each square either: \begin{enumerate} \item all edges are labelled by the same generator, or \item edges are labelled by two generators $v$ and $w$ such that $\{v,w\}\in E(\Gamma)$. \end{enumerate} On the $1$-skeleton, the corresponding map $f\colon \Sigma \to \operatorname{Sal}_\Gamma$ is given by mapping each vertex to the single vertex $x_0$ of $\operatorname{Sal}_\Gamma$ and every edge to the 1-cube (or circle) in the Salvetti complex corresponding to its label. Since the boundary of a square is mapped to the commutator of its labels, the map on the $1$-skeleton can be extended to the 2-skeleton if and only if these commutators are trivial. In our case this holds since either both labels agree, so the commutator is trivial for formal reasons (case~1 above), or the labels span an edge of~$\Gamma$, so the commutator is a defining relator of~$A_\Gamma$ (case~2 above). We note that such an extension is unique up to homotopy as $\pi_2(\operatorname{Sal}_\Gamma)$ is trivial. Hence a Van Kampen diagram encodes a map $f\colon \Sigma\to \operatorname{Sal}_\Gamma$, and it follows from Lemma~1.11 in~\cite{VKDiagrams} that, up to homotopy, every map arises in such a way. \begin{exam}\label{example - pentagon} Let $\Gamma$ be the oriented pentagon, as shown in Figure~\ref{fig - pentagon VK diagram} below. By Proposition~\ref{prop - rank 2 implies n partite}, the support of any class representable by a torus in the pentagon is a path consisting of at most two edges. So if a second homology class with full support is represented by a disjoint union of tori, at least three tori are needed. Let $\alpha \in H_2(\operatorname{Sal}_\Gamma)$ be the class shown in Figure~\ref{fig - pentagon VK diagram}, so~$\operatorname{supp}(\alpha)$ is the full pentagon.
We exhibit a genus two representative for this class using the Van Kampen diagram on the right of the figure. A quick Euler characteristic computation verifies that this diagram indeed represents a connected closed surface of genus two. This gives a map~$f\colon \Sigma_2 \to \operatorname{Sal}_\Gamma$ and the image of a fundamental class coincides with~$\alpha \in H_2(\operatorname{Sal}_\Gamma)$---to see this, note the number and orientation of each of the $2$-cells in the Van Kampen diagram. \begin{figure}[t] \def\svgwidth{0.9\textwidth} \input{vk.pdf_tex} \caption{On the left is the oriented graph~$\Gamma$ with labels corresponding to the class $\alpha \in H_2(\operatorname{Sal}_\Gamma)$. On the right is the Van Kampen diagram corresponding to the map~$f\colon \Sigma_2\to \operatorname{Sal}_\Gamma$, where non-glued opposite edges are identified, except for those at the endpoints of the dotted lines, which are glued together. Vertices with the same decoration are identified.} \label{fig - pentagon VK diagram} \end{figure} \end{exam} This example provides a negative answer to Question~\ref{question - realised by dj tori}, and thus proves our final theorem. \begin{abcthm}\label{abcthm - counterexample to q2} There exists a RAAG~$A_\Gamma$ with a second homology class whose minimal genus cannot be realised by a disjoint union of tori. \end{abcthm}
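For the reader's convenience, the genus count behind this theorem can be assembled from the earlier results; the following display is our summary sketch of the argument for the pentagon class~$\alpha$ of Example~\ref{example - pentagon}, whose five edge labels are all non-zero.

```latex
% Lower bound for the pentagon class: recall that a skew-symmetric
% matrix has even rank, so a 5x5 connection matrix has rank 0, 2 or 4.
\begin{align*}
  M_\alpha \ 5\times 5 \text{ skew-symmetric}
    &\implies \operatorname{rank}(M_\alpha)\in\{0,2,4\},\\
  \operatorname{supp}(\alpha) \text{ not complete multipartite}
    &\implies \operatorname{rank}(M_\alpha)\neq 2
      && \text{(Proposition~\ref{prop - rank 2 implies n partite})},\\
  \alpha \neq 0
    &\implies \operatorname{rank}(M_\alpha)=4
    \implies \operatorname{gen}(\alpha)\geq 2
      && \text{(Equation~\ref{eq - cap bound})}.
\end{align*}
```

Since the Van Kampen diagram of Example~\ref{example - pentagon} realises genus two, $\operatorname{gen}(\alpha)=2$; on the other hand, each torus summand has support a path containing at most two of the five edges, so any representative of~$\alpha$ by a disjoint union of tori has at least three components and hence genus at least three.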
https://arxiv.org/abs/1809.00476
A note on non-commutative polytopes and polyhedra
It is well-known that every polyhedral cone is finitely generated (i.e. polytopal), and vice versa. Surprisingly, the two notions differ almost always for non-commutative versions of such cones. This was obtained as a byproduct in an earlier paper. In this note we give a direct and constructive proof of the statement. Our proof also yields a surprising quantitative result: the difference of the two notions can always be seen at the first level of non-commutativity, i.e. for matrices of size $2$, independent of dimension and complexity of the initial convex cone.
\section{Introduction and Preliminaries} A convex cone $C\subseteq\R^d$ is called {\it polyhedral}, if there exist linear functionals $\ell_1,\ldots,\ell_m\colon\R^d\to\R$ with $$C=\left\{ a\in\R^d\mid \ell_1(a)\geq 0, \ldots, \ell_m(a)\geq 0\right\}.$$ A convex cone $C$ is called {\it finitely generated} (or {\it polytopal}) if there are $v_1,\ldots, v_n\in\R^d$ with $$C={\rm cc}\left\{ v_1,\ldots, v_n\right\}:=\left\{ \sum_{i=1}^n \lambda_iv_i\mid \lambda_1,\ldots, \lambda_n \geq 0 \right\}.$$ The Minkowski-Weyl-Theorem (see for example \cite{sch}) states that each polyhedral cone is finitely generated, and each finitely generated cone is polyhedral. A recent development in real algebraic geometry and convexity theory is to consider {\it non-commutative} sets and cones. They arise by replacing points from $\R^d$ with $d$-tuples of Hermitian matrices (of arbitrary size). A lot of meaningful information about polynomials and semialgebraic sets comes to light when these non-commutative levels are added to the classical setup. Examples are Helton's Positiv-stellensatz \cite{h} and the analysis of Ben-Tal and Nemirovski's algorithm for checking inclusion of spectrahedra \cite{bn,fnt,hkm}, among others (see also \cite{hkm2} for an overview). For a polyhedral/polytopal cone $$C=\left\{ a\in\R^d\mid \ell_1(a)\geq 0, \ldots, \ell_m(a)\geq 0\right\}={\rm cc}\left\{ v_1,\ldots, v_n\right\}$$ there are two natural ways to extend the cone to matrix levels. The first one uses the polyhedral description, and is the standard way of defining non-commutative semialgebraic sets by polynomial inequalities. For each $s\in\N$ we define $$C_s^{\rm ph} :=\left\{ (A_1,\ldots,A_d)\in {\rm Her}_s^d\mid \ell_i(A_1,\ldots, A_d)\geqslant 0, i=1,\ldots, m\right\},$$ where ${\rm Her}_s$ is the real vector space of complex Hermitian $s\times s$-matrices, and $\geqslant 0$ means that a matrix is positive semidefinite. 
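The evaluation used in the definition of $C_s^{\rm ph}$ can be sketched numerically as follows (a hypothetical illustration with numpy; the functional and matrices are made up for the example, not taken from the text):

```python
import numpy as np

# l(a) = 2*a_1 - a_2 + a_3, evaluated at a tuple of Hermitian 2x2 matrices.
c = np.array([2.0, -1.0, 1.0])
A = [np.array([[1.0, 1j], [-1j, 0.0]]),
     np.array([[0.0, 1.0], [1.0, 2.0]]),
     np.array([[3.0, 0.0], [0.0, 3.0]])]

L = sum(ci * Ai for ci, Ai in zip(c, A))
assert np.allclose(L, L.conj().T)            # the result is again Hermitian

# "l(A) >= 0" then means that L is positive semidefinite:
is_psd = np.linalg.eigvalsh(L).min() >= -1e-12
```

For $s=1$ the matrices are scalars and the PSD condition reduces to the ordinary inequality $\ell(a)\geq 0$, recovering $C_1^{\rm ph}=C$.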
Note that the above definition makes sense, since a real linear polynomial can be evaluated at a tuple of Hermitian matrices, and the result is again Hermitian. Also note that $C_1^{\rm ph}$ coincides with $C$. We now consider the collection over all matrix-sizes as our {\it non-commutative polyhedral extension of $C$}: $$C^{\rm ph} := \left( C_s^{\rm ph}\right)_{s\in\N}.$$ The second non-commutative extension of $C$ uses the generators of $C$ and looks a little less natural at first sight. However, there are good reasons for the following definition, as we will argue below. We just replace nonnegative numbers by positive semidefinite matrices and define for any $s\in \N:$ $$C_s^{\rm pt}:=\left\{ \sum_{i=1}^n P_i\otimes v_i\mid P_i\in {\rm Her}_s, P_i\geqslant 0, i=1,\ldots, n \right\}.$$ Here, $\otimes$ denotes the Kronecker (=tensor) product of matrices. In our case it just means we put $P_i$ into each component of the vector $v_i$ and multiply it with the real number in there. The result is a $d$-tuple of Hermitian matrices of size $s$, and so is the sum over all $i$. Also note that $C_1^{\rm pt}$ again coincides with $C$, since positive semidefinite matrices of size $1$ are just nonnegative real numbers. Now the collection $$C^{\rm pt}:=\left( C_s^{\rm pt}\right)_{s\in\N}$$ is the {\it non-commutative polytopal extension of $C$}. We will restrict to {\it proper convex cones} from now on, i.e. closed convex cones $C$ with nonempty interior and $C\cap -C=\{0\}.$ Then all $C_s^{\rm ph}$ and $C_s^{\rm pt}$ have the same property, and they fit well into the context of {\it operator systems} (see \cite{fnt} and the references therein for details). In fact both $C^{\rm ph}$ and $C^{\rm pt}$ are abstract operator systems with $C$ at scalar level, and in particular convex in the non-commutative sense.
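The polytopal construction, and the containment $C_s^{\rm pt}\subseteq C_s^{\rm ph}$, can be illustrated numerically; this sketch uses the square cone in $\R^3$ (the running example of the final section) with random PSD coefficients of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generators of the square cone C = cc{(1,-1,1), (-1,-1,1), (-1,1,1), (1,1,1)}.
V = np.array([[1., -1., 1.], [-1., -1., 1.], [-1., 1., 1.], [1., 1., 1.]])

def random_psd(s):
    B = rng.standard_normal((s, s))
    return B @ B.T                       # real symmetric positive semidefinite

def pt_element(V, Ps):
    """sum_i P_i (x) v_i, returned as the d-tuple (A_1, ..., A_d)."""
    return [sum(V[i, j] * P for i, P in enumerate(Ps)) for j in range(V.shape[1])]

# Every element of C_2^pt satisfies the linear inequalities defining C
# (a_3 +- a_1 >= 0, a_3 +- a_2 >= 0) as PSD conditions, i.e. lies in C_2^ph.
for _ in range(100):
    A1, A2, A3 = pt_element(V, [random_psd(2) for _ in range(4)])
    for L in (A3 + A1, A3 - A1, A3 + A2, A3 - A2):
        assert np.linalg.eigvalsh(L).min() >= -1e-9
```

This reflects the general reason for the containment: $\ell(A)=\sum_i \ell(v_i)\,P_i$ is a nonnegative combination of PSD matrices whenever $\ell(v_i)\geq 0$ for all $i$.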
It is easily seen (and shown in \cite{fnt}) that $C^{\rm ph}$ is the largest operator system with $C$ at scalar level, and $C^{\rm pt}$ is the smallest such operator system. In particular we have $C_s^{\rm pt}\subseteq C_s^{\rm ph}$ for all $s$ (which can also be easily checked directly). \section{Main Result} Theorem \ref{thm:main} below is the main result of these notes. Without the information about the matrix size $2$, the result is a byproduct of the main results of \cite{fnt} (see Remark 4.9 from that work). However, the proof there is quite involved and non-constructive, in particular since the focus is on a different property of operator systems. See also Remark \ref{rem:just} below for more comments on the difference of the two proofs. The main result was later generalized in Theorem 4.1\ from \cite{PSS}, to include the case that $C$ is not polyhedral. It is also shown there that the difference of the cones can always be seen at level $2^{d-1}$. In Problem 4.3\ the authors then ask whether this bound can be improved. We now give a direct, simple and completely constructive proof of the main result. It also answers Problem 4.3 by proving the somewhat surprising result about the matrix size $2$. \begin{theorem}\label{thm:main} Let $C\subseteq\R^d$ be a proper polyhedral cone. (i) If $C$ is a simplex cone, then $C^{\rm pt}=C^{\rm ph}$. (ii) If $C$ is not a simplex cone, then $C_2^{\rm pt}\subsetneq C_2^{\rm ph}.$ \end{theorem} \begin{proof} Statement ($i$) is easy. The argument is the same as in \cite{fnt}; we repeat it for completeness. If $C$ is a simplex cone, then up to a linear isomorphism of the underlying space $\R^d$ we can assume $C=\R_{\geq 0}^d,$ the positive orthant. In that case one readily checks $$C_s^{\rm ph}=\left\{(A_1,\ldots, A_d)\in{\rm Her}_s^d\mid A_i\geqslant 0, i=1,\ldots, d\right\}=C_s^{\rm pt}$$ for all $s\in\N$. For ($ii$) assume that $C$ is not a simplex cone. We first settle the case of smallest possible dimension, namely $d=3$.
Since $C$ is proper and has at least $4$ extremal rays, after applying a linear isomorphism we can assume that $C$ is generated by $$v_1=(1,-1,1), v_2=(-1,-1,1), v_3=(-1,1,1), v_4=(1,1,1)$$ and some $v_5,\ldots, v_n\in (1,\infty) \times (-1,1) \times\{1\}$ (see for example \cite{go} Section 2.8.1.\ for an explicit construction of such an isomorphism and Figure \ref{fig:poly} for the intersection of the cone $C$ with the plane defined by $x_3=1$). \begin{center} \begin{figure}[h!] \begin{tikzpicture}[scale=1.5] \draw[->](-1.3,0)--(3.5,0);\put (155,0) {$x_1$}; \draw[->](0,-1.3)--(0,1.3);\put (0,60) {$x_2$}; \draw[red,thick] (1,1)--(4.8,-1); \draw[red,thick] (-1,-1)--(4.8,-1); \draw[red,thick] (1,1)--(-1,1)--(-1,-1)--(1,-1); \filldraw[red, opacity=0.3] (1,1) -- (-1,1) -- (-1,-1) -- (1,-1) -- (4.8,-1)-- cycle; \draw[thick,blue,opacity=0.6] (1,1) -- (-1,1) -- (-1,-1) -- (1,-1) -- (1.3,-0.8) -- (1.8,-0.1) -- (1.7,0.4) -- (1.45,0.7)-- cycle; \filldraw[blue, opacity=0.4] (1,1) -- (-1,1) -- (-1,-1) -- (1,-1) -- (1.3,-0.8) -- (1.8,-0.1) -- (1.7,0.4) -- (1.45,0.7)-- cycle; \filldraw (4.8,-1) circle (0.8pt);\put (200,-50) {$w$}; \filldraw (1,-1) circle (0.8pt); \put (40,-50) {$v_1$}; \filldraw (-1,-1) circle (0.8pt); \put (-45,-50) {$v_2$}; \filldraw (-1,1) circle (0.8pt); \put (-45,47) {$v_3$}; \filldraw (1,1) circle (0.8pt); \put (40,47) {$v_4$}; \filldraw (1.45,0.7) circle (0.8pt); \put (65,33) {$v_5$}; \filldraw (1.7,0.4) circle (0.8pt); \put (76,15) {$v_6$}; \filldraw (1.8,-0.1) circle (0.8pt); \put (83,-5) {$\hdots$}; \filldraw (1.3,-0.8) circle (0.8pt); \put (60,-35) {$v_n$}; \end{tikzpicture} \caption{section with plane $x_3=1$ of $C$ (blue) and $D$ (red)}\label{fig:poly} \end{figure} \end{center} \vspace{-0.5cm} We now consider the matrix tuple $$\underline A:=(A_1, A_2,A_3):=\left(\left(\begin{array}{cc}1 & 0 \\0 & -1\end{array}\right), \left(\begin{array}{cc}0 & 1 \\1 & 0\end{array}\right),\left(\begin{array}{cc}1 & 0 \\0 & 1\end{array}\right)\right)\in{\rm 
Her}_2^3$$ and claim that $\underline A\in C_2^{\rm ph}.$ It is easily checked that $\underline A$ even fulfills $$A_3\pm A_1\geqslant 0, \quad A_3\pm A_2\geqslant 0,$$ and by Farkas' Lemma \cite{far} in particular the inequalities defining $C$. Let us prove $\underline A\notin C_2^{\rm pt}.$ First choose another point $w=(\lambda,-1,1)$ with $\lambda$ so large that $$C\subseteq {\rm cc}\{w,v_2,v_3,v_4\}=:D$$ (see Figure \ref{fig:poly}). We now even prove $\underline A\notin D_2^{\rm pt}.$ Assume to the contrary that there exist positive semidefinite matrices $P_1,P_2,P_3,P_4\in {\rm Her}_2$ with \begin{align*}\underline A&=(A_1,A_2,A_3)\\ &=P_1\otimes w+P_2\otimes v_2+P_3\otimes v_3+P_4\otimes v_4\\ &=(\lambda P_1-P_2-P_3+P_4,-P_1-P_2+P_3+P_4,P_1+P_2+P_3+P_4).\end{align*} Adding the first and third entry we obtain \begin{equation}\label{first}\left(\begin{array}{cc}2 & 0 \\0 & 0\end{array}\right)=A_1+A_3=(1+\lambda)P_1+2P_4,\end{equation} which implies $$P_1=\left(\begin{array}{cc}\alpha_1 & 0 \\0 & 0\end{array}\right), \quad P_4=\left(\begin{array}{cc}\alpha_4 & 0 \\0 & 0\end{array}\right)$$ for some $\alpha_1,\alpha_4\geq 0$, since $P_1,P_4\geqslant 0.$ Similarly we get $$\left(\begin{array}{cc}1 & -1 \\-1 & 1\end{array}\right)=A_3-A_2=2(P_1+P_2),$$ implying $$P_2=\frac{1}{2}\left(\begin{array}{cc}\alpha_2 & -1 \\-1 & 1\end{array}\right).$$ Plugging all of this into the equation for $A_1$ we get $$\left(\begin{array}{cc}1 & 0 \\0 & -1\end{array}\right)=\left(\begin{array}{cc}\lambda\alpha_1-\alpha_2/2+\alpha_4 & 1/2 \\1/2 & -1/2\end{array}\right) -P_3,$$ implying $$P_3=\left(\begin{array}{cc}\alpha_3 & 1/2 \\1/2 & 1/2\end{array}\right).$$ From $P_2,P_3\geqslant 0$ we obtain $\alpha_2\geq 1$ and $\alpha_3\geq 1/2.$ From $I_2=P_1+P_2+P_3+P_4$ we thus find $\alpha_1=\alpha_4=0$, so $P_1=P_4=0$, which contradicts ($\ref{first}$). This proves $\underline A\notin D_2^{\rm pt}\supseteq C_2^{\rm pt}$, and thus settles the case $d=3$. We now proceed by induction on $d$.
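(As an aside, the explicit $d=3$ tuple $\underline A$ above can be sanity-checked numerically; a sketch with numpy, not part of the proof:)

```python
import numpy as np

A1 = np.array([[1., 0.], [0., -1.]])
A2 = np.array([[0., 1.], [1., 0.]])
A3 = np.eye(2)

# Membership in C_2^ph: the defining inequalities of the square cone
# hold as PSD conditions.
for L in (A3 + A1, A3 - A1, A3 + A2, A3 - A2):
    assert np.linalg.eigvalsh(L).min() >= -1e-12

# Two identities that drive the non-membership argument for D_2^pt:
assert np.allclose(A1 + A3, np.diag([2., 0.]))
assert np.allclose(A3 - A2, np.array([[1., -1.], [-1., 1.]]))
```

The exclusion $\underline A\notin C_2^{\rm pt}$ itself is the algebraic contradiction derived in the proof and is not certified by this check.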
Let $C\subseteq\R^d$ be a proper polyhedral cone which is not a simplex cone. Then either $C$ has a facet which is not a simplex cone, or a vertex figure which is not a simplex cone \cite{zi}. In the first case we can assume that $C$ is contained in the halfspace defined by $x_1\leq 0$ and the non-simplex facet $F$ lies in the hyperplane defined by $x_1=0.$ We can apply the induction hypothesis to $F\subseteq \R^{d-1}$ and find $(A_2,\ldots,A_d)\in F_2^{\rm ph}\setminus F_2^{\rm pt}.$ Then for $\underline A:=(0,A_2,\ldots, A_d)$ we obviously have $\underline A\in C_2^{\rm ph}.$ Now assume $\underline A\in C_2^{\rm pt}$. By looking at the first component in a representation $$(0,A_2,\ldots, A_d)=\sum_i P_i \otimes v_i$$ with $v_i\in C$ we see that $P_i\neq 0$ can only occur for $v_i\in F$. Indeed any $v_i\in C\setminus F$ has a negative first entry, and such terms cannot cancel to yield $0$. So the representation is a representation of $(A_2,\ldots, A_d)$ in $F_2^{\rm pt}$, which does not exist. So we have shown $\underline A\notin C_2^{\rm pt}.$ In the second case we can assume that the non-simplex vertex-figure $F$ of $C$ is cut out by the hyperplane defined by $ x_1=0$, and further that $v_1$ spans the only extreme ray of $C$ with negative $x_1$-entry, whereas all other generators have a positive first entry (see Figure \ref{fig:verfig} for an illustration). \begin{center} \begin{figure}[h!!] 
\begin{tikzpicture}[xscale=3,yscale=1.8] \draw[->] (0.5,0.5) -- (1.2,0.5); \draw[->] (0.5,0.5) -- (0.5,2.4); \draw[->] (0.5,0.5) -- (0.9,0.9) node[above right] {$x_1$}; \filldraw[fill=blue] (0.5,0.5)--(0.3,1.9)--(0.8,1.9)--(0.5,0.5); \draw[] (0.3,2.2) -- (0.5,0.5); \draw[] (0.6,2.3)-- (0.5,0.5); \draw[] (1,2.3)-- (0.5,0.5); \draw[thick, fill=red, opacity=0.6] (0.5,0.5) -- (0.2,2)--(0.4,1.8)--(0.5,0.5) --(1.2,2)--(0.4,1.8) ; \draw[thick, fill opacity=0.3, fill=red] (0.2,2) -- (0.3,2.2) -- (0.6,2.3) --(1,2.3)--(1.2,2)-- (0.4,1.8)-- cycle; \put (37,85) {$v_1$} \put (5,100) {$v_2$} \put (13,115) {$v_3$} \put (48,122) {$\cdots$} \put (80,123) {$v_{n-1}$} \put (108,100) {$v_{n}$} \end{tikzpicture}\caption{vertex figure $F$ (blue) of $C$ (red)}\label{fig:verfig} \end{figure} \end{center} After scaling the generators $v_i$ we can even assume that the $x_1$-component of $v_1$ is $-1$, and the $x_1$-component of all other $v_i$ is $1$. Then the cone $F$ is generated by vectors $w_2,\ldots, w_n,$ where each $w_i$ is of the form $$w_i= \frac12v_1+\frac12v_i.$$ Since $F$ is not a simplex cone we can apply the induction hypothesis to $F\subseteq \R^{d-1}$ and again find $(A_2,\ldots, A_d)\in F_2^{\rm ph}\setminus F_2^{\rm pt}.$ As before we now argue that $$\underline A:=(0,A_2,\ldots,A_d)\in C_2^{\rm ph}\setminus C_2^{\rm pt},$$ where $\underline A \in C_2^{\rm ph}$ is again clear. 
So assume for contradiction that $\underline A\in C_2^{\rm pt}$; then there exist positive semidefinite $P_i\in {\rm Her}_2$ with $$(0,A_2,\ldots, A_d)=P_1\otimes v_1+ P_2\otimes v_2+\cdots +P_n\otimes v_n.$$ Since the first entry of this matrix tuple is zero, we get $P_1=P_2+\cdots +P_n$, which implies \begin{align*}\underline A&= \left(P_2+\cdots +P_n\right)\otimes v_1+ P_2\otimes v_2+\cdots +P_n\otimes v_n\\ &= P_2\otimes (v_1+v_2) +\cdots +P_n\otimes (v_1+v_n) \\ &= 2P_2\otimes w_2+\cdots +2P_n\otimes w_n.\end{align*} This contradicts $(A_2,\ldots, A_d)\notin F_2^{\rm pt},$ and finishes the proof. \end{proof} \begin{remark}\label{rem:just} (i) Let us comment on the difference of the above proof and the proof from \cite{fnt}. First, the main result from \cite{fnt} states that the abstract operator system $C^{\rm ph}$ admits a finite-dimensional realization, whereas $C^{\rm pt}$ does not. This of course implies that they cannot coincide, but gives no result on the level at which they differ. The proof starts in a similar fashion as the above, first settling the case $d=3$. But already here our construction of $\underline A$ is much more explicit and simpler than what was done in \cite{fnt}. The induction step in \cite{fnt} is completely non-constructive and cannot be transformed into an explicit argument. Our argument above is completely constructive. After applying the necessary isomorphisms and induction steps one obtains some explicit $\underline A\in C_2^{\rm ph}\setminus C_2^{\rm pt}.$ (ii) Note that all appearing matrices above are real symmetric. So the difference between the cones appears not only in the Hermitian case, but already when we restrict ourselves to real symmetric matrices.
\end{remark} \begin{example} We consider the $3$-dimensional square-cone \begin{align*}C&=\left\{ a\in\R^3\mid a_3\pm a_1\geq 0, a_3\pm a_2\geq 0\right\}\\& ={\rm cc}\left\{(1,-1,1),(-1,-1,1),(-1,1,1),(1,1,1) \right\}.\end{align*} We have seen in the proof of Theorem \ref{thm:main} that $$\underline A=\left(\left(\begin{array}{cc}1 & 0 \\0 & -1\end{array}\right), \left(\begin{array}{cc}0 & 1 \\1 & 0\end{array}\right),\left(\begin{array}{cc}1 & 0 \\0 & 1\end{array}\right)\right)\in C_2^{\rm ph}\setminus C_2^{\rm pt}.$$ So we can see the difference of the two cones for example in the affine subspace $$V:=\left\{ \left( \left(\begin{array}{cc}x & 0 \\0 & -1\end{array}\right),\left(\begin{array}{cc}0 & y \\y & 0\end{array}\right),\left(\begin{array}{cc}1 & 0 \\0 & 1\end{array}\right)\right)\mid x,y\in \R\right\}\subseteq {\rm Her}_2^3.$$ After identifying $V$ with $\R^2$ it is a straightforward computation to see that $$C_2^{\rm ph}\cap V=[-1,1]\times[-1,1].$$ Determining $C_2^{\rm pt}\cap V$ needs some more computation. After imposing all necessary linear constraints on $P_1,P_2,P_3,P_4\in {\rm Her}_2$ to ensure $$P_1\otimes v_1 +P_2\otimes v_2+ P_3\otimes v_3+ P_4\otimes v_4\in V,$$ then using the conditions that all $P_i$ must be positive semidefinite, and then solving for $x$ and $y$, one gets $$C_2^{\rm pt}\cap V= \left\{ (x,y)\in [-1,1]^2\mid x+2y^2\leq 1\right\}.$$ Figure \ref{fig:sec} shows the two affine sections. The black dot corresponds to the point $\underline A\in C_2^{\rm ph}\setminus C_2^{\rm pt}$ from above. \begin{center} \begin{figure}[h!]
\begin{tikzpicture}[scale=1.5] \draw[->](-1.3,0)--(1.3,0);\put (60,-2) {$x$}; \draw[->](0,-1.3)--(0,1.3);\put (-2,60) {$y$}; \filldraw[red, opacity=0.5] (1,1) -- (-1,1) -- (-1,-1) -- (1,-1) --cycle; \draw[red] (1,1) -- (-1,1) -- (-1,-1) -- (1,-1) --cycle; \draw [ domain=-1:1,samples=100,blue] plot (\x, {sqrt((1-\x)/2)}); \draw [ domain=-1:1,samples=100,blue] plot (\x, {-sqrt((1-\x)/2)}); \filldraw [domain=-1:1,samples=100,blue, opacity=0.4] plot (\x, {sqrt((1-\x)/2)}); \filldraw [domain=-1:1,samples=100,blue, opacity=0.4] plot (\x, {-sqrt((1-\x)/2)}); \draw[blue] (-1,-1)--(-1,1); \fill [blue, opacity=0.4] (-1,-1)--(1,-0.0264)--(1,0.0264)--(-1,1); \filldraw (1,1) circle (0.8pt); \put (48,42) {$\underline A$}; \end{tikzpicture} \caption{affine section of $C_2^{\rm ph}$ (red) and $C_2^{\rm pt}$ (blue)}\label{fig:sec} \end{figure} \end{center} \end{example} \bibliographystyle{plain}
https://arxiv.org/abs/1705.10126
Tensor tomography on Cartan-Hadamard manifolds
We study the geodesic X-ray transform on Cartan-Hadamard manifolds, and prove solenoidal injectivity of this transform acting on functions and tensor fields of any order. The functions are assumed to be exponentially decaying if the sectional curvature is bounded, and polynomially decaying if the sectional curvature decays at infinity. This work extends the results of Lehtonen (2016) to dimensions $n \geq 3$ and to the case of tensor fields of any order.
\section{Introduction}\label{sec:introduction} This article considers the geodesic X-ray transform on noncompact Riemannian manifolds. This transform encodes the integrals of a function $f$, where $f$ satisfies suitable decay conditions at infinity, over all geodesics. In the case of Euclidean space the geodesic X-ray transform is just the usual X-ray transform involving integrals over all lines, and in two dimensions it coincides with the Radon transform introduced in the seminal work of Radon in 1917~\cite{Rad17}. For Euclidean or hyperbolic space in dimensions $n \geq 2$, one has the following basic theorems on the injectivity of this transform (see~\cite{Hel99},~\cite{Jen04},~\cite{Hel94}): \begin{theoremA} If $f$ is a continuous function in $\mR^n$ satisfying $|f(x)| \leq {C(1+|x|)}^{-\eta}$ for some $\eta > 1$, and if $f$ integrates to zero over all lines in $\mR^n$, then $f \equiv 0$. \end{theoremA} \begin{theoremB} If $f$ is a continuous function in the hyperbolic space $\mathbb{H}^n$ satisfying $|f(x)| \leq C \e^{-d(x,o)}$, where $o \in \mathbb{H}^n$ is some fixed point, and if $f$ integrates to zero over all geodesics in $\mathbb{H}^n$, then $f \equiv 0$. \end{theoremB} We remark that some decay conditions for the function $f$ are required, since there are examples of nontrivial functions in $\mathbb{R}^2$ which decay like $|x|^{-2}$ on every line and whose X-ray transform vanishes~\cite{Zal82},~\cite{Arm94}. Related results on the invertibility of Radon type transforms on constant curvature spaces or noncompact homogeneous spaces may be found in~\cite{Hel99},~\cite{Hel13}. The purpose of this article is to give analogues of the above theorems on more general, not necessarily symmetric Riemannian manifolds. We will work in the setting of Cartan-Hadamard manifolds, i.e.\ complete simply connected Riemannian manifolds with nonpositive sectional curvature. 
Euclidean and hyperbolic spaces are special cases of Cartan-Hadamard manifolds, and further explicit examples are recalled in Section~\ref{sec:examples}. It is well known that any Cartan-Hadamard manifold is diffeomorphic to $\mathbb{R}^n$, the exponential map at any point is a diffeomorphism, and the map $x \mapsto {d(x,p)}^2$ is strictly convex for any $p \in M$ (see e.g.~\cite{Pet06}). \begin{definitionnonum} Let $(M,g)$ be a Cartan-Hadamard manifold, and fix a point $o \in M$. If $\eta > 0$, define the spaces of exponentially and polynomially decaying continuous functions by \begin{align*} E_{\eta}(M) &= \{ f \in C(M) \,;\, |f(x)| \leq C \e^{-\eta d(x,o)} \text{ for some $C > 0$} \}, \\ P_{\eta}(M) &= \{ f \in C(M) \,;\, |f(x)| \leq {C(1+d(x,o))}^{-\eta} \text{ for some $C > 0$} \}. \end{align*} Also define the spaces \begin{align*} E_{\eta}^1(M) &= \{ f \in C^1(M) \,;\, |f(x)| + |\nabla f(x)| \leq C \e^{-\eta d(x,o)} \text{ for some $C > 0$} \}, \\ P_{\eta}^1(M) &= \{ f \in C^1(M) \,;\, \text{$|f(x)| \leq {C(1+d(x,o))}^{-\eta}$ and } \\ & \hspace{100pt} \text{$|\nabla f(x)| \leq {C(1+d(x,o))}^{-\eta-1}$ for some $C > 0$} \}. \end{align*} Here $\nabla = \nabla_g$ is the total covariant derivative in $(M,g)$ and $|\,\cdot\,| = |\,\cdot \,|_g$ is the $g$-norm on tensors. \end{definitionnonum} It follows from Lemma~\ref{lemma_u_finite} that if $f \in P_{\eta}(M)$ for some $\eta > 1$, then the integral of $f$ over any maximal geodesic in $M$ is finite. For such functions $f$ we may define the geodesic X-ray transform $I_0 f$ of $f$ by \[ I_0 f(\gamma) = \int_{-\infty}^{\infty} f(\gamma(t)) \,dt, \qquad \text{$\gamma$ is a geodesic}. \] The inverse problem for the geodesic X-ray transform is to determine $f$ from the knowledge of $I_0 f$. By linearity, uniqueness for this inverse problem reduces to showing that $I_0 f = 0$ implies $f = 0$. 
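As a concrete Euclidean illustration of $I_0$: the transform of the Gaussian $f(x)=e^{-|x|^2}$ in $\mR^2$ over a line at distance $s$ from the origin equals $\sqrt{\pi}\,e^{-s^2}$, which can be checked numerically (a sketch; the quadrature parameters are our own choices):

```python
import numpy as np

def I0(f, x0, v, T=10.0, N=20001):
    """Approximate the X-ray transform of f along gamma(t) = x0 + t*v
    by the trapezoid rule on [-T, T]."""
    t = np.linspace(-T, T, N)
    pts = x0[None, :] + t[:, None] * v[None, :]
    vals = f(pts)
    dt = t[1] - t[0]
    return dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

gaussian = lambda p: np.exp(-np.sum(p**2, axis=1))

n = np.array([1.0, 0.0])      # unit normal of the line
v = np.array([0.0, 1.0])      # unit direction of the line
for s in (0.0, 0.5, 1.0):
    assert abs(I0(gaussian, s * n, v) - np.sqrt(np.pi) * np.exp(-s**2)) < 1e-7
```

In particular $I_0 f$ is nonzero for this nonzero $f$, consistent with Theorem A (the Gaussian decays faster than any polynomial).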
More generally, suppose that $f$ is a $C^1$-smooth symmetric covariant $m$-tensor field on $M$, written in local coordinates (using the Einstein summation convention) as \begin{equation*} f = f_{j_1\dots j_m}(x) \,dx^{j_1} \otimes \dots \otimes dx^{j_m}. \end{equation*} We say that $f \in P_{\eta}(M)$ if $|f|_g \in P_{\eta}(M)$, and $f \in P_{\eta}^1(M)$ if $|f|_g \in P_{\eta}(M)$ and $|\nabla f|_g \in P_{\eta+1}(M)$, etc. Now if $f \in P_{\eta}(M)$ for some $\eta > 1$, then the geodesic X-ray transform $I_m f$ of $f$ is well defined by the formula \[ I_m f(\gamma) = \int_{-\infty}^{\infty} f_{\gamma(t)}(\dot{\gamma}(t), \ldots, \dot{\gamma}(t)) \,dt, \qquad \text{$\gamma$ is a geodesic}. \] This transform always has a kernel when $m \geq 1$: if $h$ is a symmetric $(m-1)$-tensor field satisfying $h \in P_{\eta}^1(M)$ for some $\eta > 0$, then $I_m (\sigma \nabla h) = 0$ where $\sigma$ denotes symmetrization of a tensor field (see Section~\ref{subsec:tensors}). We say that $I_m$ is solenoidal injective if $I_m f = 0$ implies $f = \sigma \nabla h$ for some $(m-1)$-tensor field $h$. Our first theorem proves solenoidal injectivity of $I_m$ for any $m \geq 0$ on Cartan-Hadamard manifolds with bounded sectional curvature, assuming exponential decay of the tensor field and its first derivatives. We will denote the sectional curvature of a two-plane $\Pi \subset T_x M$ by $K_x(\Pi)$, and we write $-K_0 \leq K \leq 0$ if $-K_0 \leq K_x(\Pi) \leq 0$ for all $x \in M$ and for all two-planes $\Pi \subset T_x M$. \begin{theorem}\label{thm_main1} Let $(M,g)$ be a Cartan-Hadamard manifold of dimension $n \geq 2$, and assume that \[ -K_0 \leq K \leq 0, \qquad \text{for some $K_0 > 0$.} \] If $f$ is a symmetric $m$-tensor field in $E_{\eta}^1(M)$ for some $\eta > \frac{n+1}{2}\sqrt{K_0}$, and if $I_m f = 0$, then $f = \sigma \nabla h$ for some symmetric $(m-1)$-tensor field $h$. (If $m=0$, then $f \equiv 0$.) 
\end{theorem} The second theorem considers the case where the sectional curvature decays polynomially at infinity, and proves solenoidal injectivity if the tensor field and its first derivatives also decay polynomially. \begin{theorem}~\label{thm_main2} Let $(M,g)$ be a Cartan-Hadamard manifold of dimension $n \geq 2$, and assume that the function \[ \mathcal{K}(x) = \sup \, \{ |K_x(\Pi)| \,;\, \Pi \subset T_x M \ \mathrm{is\ a\ two\text{-}plane} \} \] satisfies $\mathcal{K} \in P_{\kappa}(M)$ for some $\kappa > 2$. If $f$ is a symmetric $m$-tensor field in $P_{\eta}^1(M)$ for some $\eta > \frac{n+2}{2}$, and if $I_m f = 0$, then $f = \sigma \nabla h$ for some symmetric $(m-1)$-tensor field $h$. (If $m=0$, then $f \equiv 0$.) \end{theorem} The second theorem is mostly of interest in two dimensions because of the following rigidity phenomenon: any manifold of dimension $\geq 3$ that satisfies the conditions of the theorem is isometric to Euclidean space~\cite{GW82}. See Section \ref{sec:examples} for a discussion. We will give the proof in any dimension since this may be useful in subsequent work. We remark that Theorems~\ref{thm_main1}--\ref{thm_main2} correspond to Theorems A and B above, but the manifolds considered in Theorems~\ref{thm_main1}--\ref{thm_main2} can be much more general and include many examples with nonconstant curvature (see Section~\ref{sec:examples}). The results will be proved by using energy methods based on Pestov identities, which have been studied extensively in the case of compact manifolds with strictly convex boundary. We refer to~\cite{Muk77},~\cite{PS88},~\cite{Sha94},~\cite{Kn02},~\cite{PSU14} for some earlier results. In fact, Theorems~\ref{thm_main1}--\ref{thm_main2} can be viewed as an extension of the tensor tomography results in~\cite{PS88} from the case of compact nonpositively curved manifolds with boundary to the case of certain noncompact manifolds. 
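The kernel of $I_m$ described above can be checked directly in the Euclidean plane for $m=1$: for $f=\nabla h$ with decaying $h$, the fundamental theorem of calculus gives $I_1 f(\gamma)=\int \langle \nabla h(\gamma(t)),\dot\gamma(t)\rangle\,dt = 0$ along every line. A numerical sketch with the (assumed) potential $h(x)=e^{-|x|^2}$:

```python
import numpy as np

def I1_grad_h(x0, v, T=12.0, N=240001):
    """Integrate <grad h, gamma'> along gamma(t) = x0 + t*v for
    h(x) = exp(-|x|^2), so grad h(x) = -2*x*h(x)."""
    t = np.linspace(-T, T, N)
    pts = x0[None, :] + t[:, None] * v[None, :]
    h = np.exp(-np.sum(pts**2, axis=1))
    integrand = (-2.0 * pts * h[:, None]) @ v   # d/dt of h(gamma(t))
    dt = t[1] - t[0]
    return dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

x0 = np.array([0.3, -0.7])
v = np.array([0.6, 0.8])                  # unit direction
assert abs(I1_grad_h(x0, v)) < 1e-10      # potential fields integrate to zero
```

The integrand is the $t$-derivative of $h(\gamma(t))$, so the integral telescopes to $h(\gamma(+\infty))-h(\gamma(-\infty))=0$; solenoidal injectivity says this is the only source of vanishing transforms.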
We remark that one of the main points in our theorems is that the functions and tensor fields are not compactly supported (indeed, the compactly supported case would reduce to known results on compact manifolds with boundary). More recently, the work~\cite{PSU13} gave a particularly simple derivation of the basic Pestov identity for X-ray transforms and proved solenoidal injectivity of $I_m$ on simple two-dimensional manifolds. Some of these methods were extended to all dimensions in~\cite{PSU15} and to the case of attenuated X-ray transforms in~\cite{GPSU16}. Following some ideas in~\cite{PSU13}, the work~\cite{Leh16} proved versions of Theorems~\ref{thm_main1}--\ref{thm_main2} for the case of two-dimensional Cartan-Hadamard manifolds. In this paper we combine the main ideas in~\cite{Leh16} with the methods of~\cite{PSU15} and prove solenoidal injectivity results on Cartan-Hadamard manifolds in any dimension $n \geq 2$. However, instead of using the Pestov identity in its standard form (which requires two derivatives of the functions involved), we will use a different argument from~\cite{PSU15} related to the $L^2$ contraction property of a Beurling transform on nonpositively curved manifolds. This argument dates back to~\cite{GK80a,GK80}; it only involves first order derivatives and immediately applies to tensor fields of arbitrary order. The $C^1$ assumption in Theorems~\ref{thm_main1}--\ref{thm_main2} is due to this method of proof, and the decay assumptions are related to the growth of Jacobi fields. We mention that Theorems~\ref{thm_main1}--\ref{thm_main2} also extend the two-dimensional results of~\cite{Leh16} by assuming slightly weaker conditions. This article is organized as follows. Section~\ref{sec:introduction} is the introduction, and Section~\ref{sec:examples} contains examples of Cartan-Hadamard manifolds.
In Section~\ref{sec:preliminaries} we review basic facts related to geodesics on Cartan-Hadamard manifolds, geometry of the sphere bundle and symmetric covariant tensors fields, following~\cite{Leh16}, \cite{PSU15}, \cite{DS11}. Section~\ref{sec:estimates} collects some estimates concerning the growth of Jacobi fields and related decay properties for solutions of transport equations. Finally, Section~\ref{sec:proofs} includes the proofs of the main theorems based on $L^2$ inequalities for Fourier coefficients. \subsection*{Acknowledgements} All authors were supported by the Academy of Finland (Centre of Excellence in Inverse Problems Research), and M.S.\ was also partly supported by an ERC Starting Grant (grant agreement no 307023). \section{Examples of Cartan-Hadamard manifolds}\label{sec:examples} In this section we recall some facts and examples related to Cartan-Hadamard manifolds. Most of the details can be found in~\cite{BO69},~\cite{KW74},~\cite{GW79},~\cite{GW82},~\cite{Pet06}. We first discuss the case of two-dimensional manifolds, which is quite different compared to manifolds of higher dimensions. \subsection{Dimension two} Let $K \in C^\infty(\mathbb{R}^2)$. A theorem of Kazdan and Warner~\cite{KW74} states that a necessary and sufficient condition for existence of a complete Riemannian metric on $\mathbb{R}^2$ with Gaussian curvature $K$ is \begin{equation}\label{eq:KW74cond} \lim_{r\to \infty} \inf_{\abs{x}\geq r} K(x) \leq 0. \end{equation} This provides a wide class of Riemannian metrics satisfying the assumptions of Theorem~\ref{thm_main1} in dimension two. However, this does not directly give an example of a manifold satisfying the assumptions of Theorem~\ref{thm_main2} since the condition~\eqref{eq:KW74cond} is given with respect to the Euclidean metric of $\mathbb{R}^2$. Examples of manifolds satisfying the assumptions of Theorem~\ref{thm_main2} can be constructed using warped products. 
Let $(r,\theta)$ be the polar coordinates in $\mathbb{R}^2$ and consider a warped product \begin{equation}\label{eq:warped-metric} ds^2 = dr^2 + f^2(r)d\theta^2, \end{equation} where $f$ is a smooth function that is positive for $r > 0$ and satisfies $f(0) = 0$ and $f'(0) = 1$. This is a Riemannian metric on $\mR^2$ having Gaussian curvature \begin{equation} K(x) = -\frac{f''(\abs{x})}{f(\abs{x})}, \end{equation} which depends only on the Euclidean distance $|x| \mdef r(x)$ to the origin. We remark that distances to the origin in the Euclidean metric and in the warped metric coincide. It is shown in~\cite[Proposition 4.2]{GW79} that for every $k \in C^\infty([0,\infty))$ with $k \leq 0$ there exists a unique warped metric of the form~\eqref{eq:warped-metric} such that $k(|x|) = K(x)$. Hence warped products provide many examples of two-dimensional manifolds for which $\mathcal{K}(x) \leq C(1+\abs{x})^{-\kappa}$ with $\kappa > 0$, i.e.~$\mathcal{K} \in P_\kappa(M)$. \subsection{Higher dimensions} Warped products can also be used to construct examples of higher dimensional Cartan-Hadamard manifolds satisfying the assumptions of Theorem~\ref{thm_main1}, see e.g.~\cite{BO69}. In the case of Theorem~\ref{thm_main2} it turns out that the decay condition for curvature is very restrictive in higher dimensions: the only possible geometry is the Euclidean one. This follows directly from a theorem by Greene and Wu in~\cite{GW82}. If $M$ is a Cartan-Hadamard manifold with $n = \dim(M) \geq 3$, $k(s) = \sup \{\,\mathcal{K}(x)\,; x\in M, d(x,o) = s\,\}$, where $o$ is a fixed point, and one of the following holds: \begin{enumerate} \item $n$ is odd and $\liminf_{s \to \infty} s^2k(s) = 0$ or \item $n$ is even and $\int_0^\infty sk(s) \,ds$ is finite, \end{enumerate} then $M$ is isometric to $\mathbb{R}^n$.
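A short symbolic check of the warped-product curvature formula $K=-f''/f$ recovers the model cases (a sketch using sympy; the third warping function is our own illustration at the level of the formula, leaving aside the smoothness condition at the origin):

```python
import sympy as sp

r = sp.symbols('r', positive=True)

def curvature(f):
    # Gaussian curvature of dr^2 + f(r)^2 dtheta^2 is K = -f''(r)/f(r)
    return sp.simplify(-sp.diff(f, r, 2) / f)

assert curvature(r) == 0                             # Euclidean plane
assert sp.simplify(curvature(sp.sinh(r)) + 1) == 0   # hyperbolic plane, K = -1

# At the level of the formula, f(r) = 2r + exp(-r) - 1 (with f(0)=0,
# f'(0)=1, f'' > 0) gives K = -exp(-r)/f, an exponentially decaying curvature.
f = 2*r + sp.exp(-r) - 1
assert sp.simplify(curvature(f) * f + sp.exp(-r)) == 0
```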
\section{Geometric facts}\label{sec:preliminaries} Throughout this work we will assume $(M,g)$ to be an $n$-dimensional Cartan-Hadamard manifold with $n \geq 2$ unless otherwise stated. We also assume unit speed parametrization for geodesics. \subsection{Behaviour of geodesics}\label{subsec:geodesics} By the Cartan-Hadamard theorem the exponential map $\exp_x$ is defined on all of $T_x M$ and is a diffeomorphism for every $x \in M$. Hence every pair of points can be joined by a unique geodesic. Let $SM = \{ (x,v) \in TM \,;\, \abs{v}=1 \}$ be the unit sphere bundle, and for $(x,v) \in SM$ denote by $\gamma_{x,v}$ the unique geodesic with $\gamma_{x,v}(0) = x$ and $\dot{\gamma}_{x,v}(0) = v$. The triangle inequality implies that \begin{equation}\label{eq:geodesic-distance} d_g(\gamma_{x,v}(t),o) \geq |t| - d_g(x,o) \end{equation} for all $t \in \mR$ and $o \in M$. We say that a geodesic $\gamma$ is \emph{escaping} with respect to the point $o$ if the function $t \mapsto d_g(\gamma(t),o)$ is strictly increasing on the interval $[0,\infty)$. The set of all such geodesics is denoted by $\esc{o}$. For $\gamma_{x,v} \in \esc{o}$ the triangle inequality gives \begin{equation}\label{eq:espacing-geodesic-distance} d_g(\gamma_{x,v}(t),o) \geq \begin{cases} d_g(x,o), &\text{if } 0 \leq t \leq 2 d_g(x,o),\\ t - d_g(x,o), &\text{if } 2 d_g(x,o) < t. \end{cases} \end{equation} However, since $(M,g)$ is a Cartan-Hadamard manifold, Jacobi field estimates give a stronger bound. For $\gamma_{x,v} \in \esc{o}$ one has (see \cite[Corollary 4.8.5]{Jos08} or \cite[Section 6.3]{Pet06}) \begin{equation}\label{eq:espacing-geodesic-distance-stronger} d_g(\gamma_{x,v}(t),o) \geq \sqrt{d_g(x,o)^2 + t^2}, \qquad t \geq 0. \end{equation} The following lemma is proved in~\cite{Leh16} in two dimensions. The proof in higher dimensions is identical, but we include a short argument for completeness. \begin{lemma}\label{lma:escaping-direction} Suppose $o \in M$.
At least one of the geodesics $\gamma_{x,v}$ and $\gamma_{x,-v}$ is in $\esc{o}$. \end{lemma} \begin{proof} Since $(M,g)$ is a Cartan-Hadamard manifold, the function $h(t) = d_g(\gamma_{x,v}(t),o)^2$ is strictly convex, $h'' > 0$, on $\mR$. If $h'(0) \geq 0$ then $\gamma_{x,v}$ is escaping, and if $h'(0) \leq 0$ then $\gamma_{x,-v}$ is escaping. \end{proof} \subsection{On the geometry of the unit tangent bundle}\label{subsec:tm} We first briefly explain the splitting of the tangent bundle into horizontal and vertical bundles. Then we give a short discussion on geodesics of $SM$. Finally, we include a proof that $SM$ is complete when $M$ is. \subsubsection{The structure of the tangent bundle} The following discussion is based on~\cite{Pat99},~\cite{PSU15}, where these topics are considered in more detail. We denote by $\pi \colon TM \to M$ the usual base point map $\pi(x,v) = x$. The connection map $\cm \colon T(TM) \to TM$ of the Levi-Civita connection $\nabla$ of $M$ is defined as follows. Let $\xi \in T_{x,v} TM$ and $c \colon (-\eps,\eps) \to TM$ be a curve such that $\dot{c}(0) = \xi$. Write $ c(t) = (\gamma(t),Z(t)), $ where $Z(t)$ is a vector field along the curve $\gamma$, and define \begin{equation*} \cm(\xi) \mdef D_t Z(0) \in T_x M. \end{equation*} The maps $\cm$ and $d\pi$ yield a splitting \begin{equation}\label{eq:TTM-splitting} T_{x,v} TM = \tilde\mathcal{H}(x,v) \oplus \tilde\mathcal{V}(x,v) \end{equation} where $\tilde\mathcal{H}(x,v) = \ker \cm$ is the horizontal bundle and $\tilde\mathcal{V}(x,v) = \ker d_{x,v} \pi$ is the vertical bundle. Both are $n$-dimensional subspaces of $T_{x,v} TM$. On $TM$ we define the Sasaki metric $\sasaki{g}$ by \begin{equation*} \br{v,w}_{\sasaki{g}} = \br{\cm (v),\cm (w)}_g + \br{d\pi (v), d\pi (w)}_g, \end{equation*} which makes $(TM,\sasaki{g})$ a Riemannian manifold of dimension $2n$. The maps $\cm \colon \tilde\mathcal{V}(x,v) \to T_x M$ and $d\pi \colon \tilde\mathcal{H}(x,v) \to T_x M$ are linear isomorphisms. 
Furthermore, the splitting~\eqref{eq:TTM-splitting} is orthogonal with respect to $\sasaki{g}$. Using the maps $\cm$ and $d\pi$, we will identify vectors in the horizontal and vertical bundles with corresponding vectors on $T_x M$. The unit sphere bundle $SM$ was defined as \begin{equation*} SM \mdef \bigcup_{x \in M} S_x M, \qquad S_x M \mdef \set{(x,v) \in T_xM}{|v|_g = 1}. \end{equation*} We will equip $SM$ with the metric induced by the Sasaki metric on $TM$. The geodesic flow $\phi \colon \mR \times SM \to SM$ is defined as \begin{equation*} \phi_t(x,v) \mdef (\gamma_{x,v}(t),\dot\gamma_{x,v}(t)). \end{equation*} The associated vector field is called the geodesic vector field and denoted by $X$. For $SM$ we obtain an orthogonal splitting \begin{equation}\label{eq:TSM-splitting} T_{x,v} SM = \mR X(x,v) \oplus \mathcal{H}(x,v) \oplus \mathcal{V}(x,v) \end{equation} where $\mR X \oplus \mathcal{H}(x,v) = \tilde\mathcal{H}(x,v)$ and $\mathcal{V}(x,v) = \ker d_{x,v}(\pi|_{SM})$. Both $\mathcal{H}(x,v)$ and $\mathcal{V}(x,v)$ have dimension $n-1$ and can be canonically identified with elements in the codimension one subspace ${\{ v \}}^\perp \subset T_x M$ via $d\pi$ and $\cm$, respectively. We will freely use this identification. Following~\cite{PSU15}, if $u \in C^1(SM)$, then the gradient ${\nabla_{SM}} u$ has the decomposition \begin{equation*} {\nabla_{SM}} u = (Xu)X + \overset{h}{\nabla} u + \overset{v}{\nabla} u, \end{equation*} according to~\eqref{eq:TSM-splitting}. The quantities $\overset{h}{\nabla} u$ and $\overset{v}{\nabla} u$ are called the horizontal and the vertical gradients, respectively. It holds that $\br{\overset{v}{\nabla} u(x,v),v} = 0$ and $\br{\overset{h}{\nabla} u(x,v),v} = 0$ for all $(x,v) \in SM$.
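For orientation, consider the simplest case $M = \mR^n$ with the Euclidean metric, so that $TM = \mR^n \times \mR^n$ with global coordinates $(x,v)$. Then $d\pi$ is the projection onto the first factor, the connection map is the projection onto the second factor, $\cm(\xi_x,\xi_v) = \xi_v$, and the Sasaki metric is the product metric. Moreover $SM = \mR^n \times S^{n-1}$, the geodesic vector field is $X = v^j \partial_{x_j}$, the vertical gradient $\overset{v}{\nabla} u(x,v)$ is the gradient of $u(x,\cdot)$ on the sphere $S^{n-1}$, and the horizontal gradient $\overset{h}{\nabla} u(x,v)$ is the component of $\partial_x u$ orthogonal to $v$.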
As discussed in~\cite{PSU15}, on two-dimensional manifolds the horizontal and vertical gradients reduce to the horizontal and vertical vector fields $X_\perp$ and $V$ via \begin{equation*} \overset{h}{\nabla} u(x,v) = -(X_\perp u(x,v)) v^\perp \middletext{and} \overset{v}{\nabla} u(x,v) = (Vu(x,v)) v^\perp \end{equation*} where $v^\perp$ is such that $\{ v, v^\perp \}$ is a positive orthonormal basis of $T_x M$. In~\cite{Leh16} the flows associated with $X_\perp$ and $V$ were used to derive estimates for $X_\perp u$ and $Vu$. We will proceed in a similar manner in the higher dimensional case. Let $(x,v) \in SM$ and $w \in S_x M, \,w \perp v$. We define $\hflow{w}{t} \colon \mR \to SM$ by $\hflow{w}{t}(x,v) = (\gamma_{x,w}(t),V(t))$, where $V(t)$ is the parallel transport of $v$ along $\gamma_{x,w}$. It holds that \begin{equation}\label{eq:hflow-K-dpi} \cm \left(\frac{\text{d}}{\text{d} t} \hflow{w}{t}(x,v)\valueat{t=0}\right) = 0 \middletext{and} d\pi \left(\frac{\text{d}}{\text{d} t} \hflow{w}{t}(x,v)\valueat{t=0}\right) = w. \end{equation} We define $\vflow{w}{t} \colon \mR \to SM$ by $\vflow{w}{t}(x,v) = (x,(\cos t)v + (\sin t)w)$. It holds that \begin{equation}\label{eq:vflow-K-dpi} \cm \left(\frac{\text{d}}{\text{d} t} \vflow{w}{t}(x,v)\valueat{t=0}\right) = w \middletext{and} d\pi \left(\frac{\text{d}}{\text{d} t} \vflow{w}{t}(x,v)\valueat{t=0}\right) = 0. \end{equation} The following lemma states the relation between $\hflow{w}{t}$ and $\vflow{w}{t}$ and the horizontal and the vertical gradients of a function. \begin{lemma}\label{lma:sm-gradients-flows} Suppose $u$ is differentiable at $(x,v) \in SM$. Fix $w \in S_x M, w \perp v$. Then it holds that \begin{equation*} \br{\overset{h}{\nabla} u(x,v),w} = \frac{\text{d}}{\text{d} t}u(\hflow{w}{t}(x,v)) \valueat{t=0} \end{equation*} and \begin{equation*} \br{\overset{v}{\nabla} u(x,v),w} = \frac{\text{d}}{\text{d} t}u(\vflow{w}{t}(x,v)) \valueat{t=0}. 
\end{equation*} \end{lemma} \begin{proof} Using the chain rule and the equations~\eqref{eq:hflow-K-dpi} we get \begin{equation*} \frac{\text{d}}{\text{d} t}u(\hflow{w}{t}(x,v)) \valueat{t=0} = \br{\nabla_{SM} u(\hflow{w}{t}(x,v)), \frac{\text{d}}{\text{d} t}\hflow{w}{t}(x,v)}_{\sasaki{g}}\valueat{t=0} = \br{\overset{h}{\nabla} u(x,v), w}. \end{equation*} For $\overset{v}{\nabla}$ we use the equations~\eqref{eq:vflow-K-dpi} in a similar fashion. \end{proof} The maps $\hflow{w}{t}$ and $\vflow{w}{t}$ are related to normal Jacobi fields along geodesics. We can define \begin{equation*} J^{h}_w(t) \mdef \frac{\text{d}}{\text{d} s}\pi\left(\phi_t(\hflow{w}{s}(x,v))\right) \valueat{s=0} = d_{\phi_t(x,v)}\pi\left(\frac{\text{d}}{\text{d} s} \phi_t(\hflow{w}{s}(x,v))\valueat{s=0} \right). \end{equation*} Since $\Gamma(s,t) = \pi\left(\phi_t(\hflow{w}{s}(x,v))\right)$ is a variation of $\gamma_{x,v}$ through geodesics, $J^{h}_w(t)$ is a Jacobi field along $\gamma_{x,v}$. It has the initial conditions $J^{h}_w(0) = w$ and $D_t J^{h}_w(0) = 0$ by the symmetry lemma (see e.g.~\cite{Lee97}). Replacing $\hflow{w}{s}$ with $\vflow{w}{s}$ gives a Jacobi field $J^{v}_w(t)$ with the initial conditions $J^{v}_w(0) = 0$ and $D_t J^{v}_w(0) = w$. In both cases the Jacobi field is normal because $\br{v,w}_g = 0$. By the symmetry lemma \begin{equation*} \cm \left(\frac{\text{d}}{\text{d} s} \phi_t(\hflow{w}{s}(x,v))\valueat{s=0} \right) = D_s \partial_t \gamma_{\hflow{w}{s}(x,v)}(t)\valueat{s=0} = D_t \partial_s \gamma_{\hflow{w}{s}(x,v)}(t)\valueat{s=0} = D_t J^{h}_w(t). \end{equation*} From the definition of the Sasaki metric we then see that \begin{equation*} \br{{\nabla_{SM}} u(\phi_t(x,v)) , \frac{\text{d}}{\text{d} s} \phi_t(\hflow{w}{s}(x,v))\valueat{s=0}}_{\sasaki{g}} = \br{\overset{h}{\nabla} u(\phi_t(x,v)), J^{h}_w(t)}_g + \br{\overset{v}{\nabla} u(\phi_t(x,v)), D_t J^{h}_w(t)}_g
\end{equation*} and \begin{equation*} \br{{\nabla_{SM}} u(\phi_t(x,v)) , \frac{\text{d}}{\text{d} s} \phi_t(\vflow{w}{s}(x,v))\valueat{s=0}}_{\sasaki{g}} = \br{\overset{h}{\nabla} u(\phi_t(x,v)), J^{v}_w(t)}_g + \br{\overset{v}{\nabla} u(\phi_t(x,v)), D_t J^{v}_w(t)}_g. \end{equation*} \begin{remark}\label{rmk:diff_ae} The constructions in this subsection remain valid at a.e.\ $(x,v) \in SM$ if one assumes that $u$ is in the space $W_\text{loc}^{1,\infty}(SM)$. Functions in $W_\text{loc}^{1,\infty}(SM)$ are precisely the locally Lipschitz functions; by Rademacher's theorem they are differentiable almost everywhere, and their weak gradients agree with their pointwise gradients almost everywhere (see e.g.\ \cite[Chapters 5.8.2--5.8.3]{Eva98}). \end{remark} \subsubsection{Geodesics on the unit tangent bundle} Next we describe some facts related to geodesics on $SM$ (see e.g.~\cite{BBNV03} and references therein). Let $R(U,V)$ denote the Riemannian curvature tensor. A curve $\Gamma(t) = (x(t),V(t))$ on $SM$ is a geodesic if and only if \begin{equation} \label{eq:smgeod} \left\{ \begin{aligned} \nabla_{\dot{x}}\dot{x} &= -R(V,\nabla_{\dot{x}}V)\dot{x} \\ \nabla_{\dot{x}}\nabla_{\dot{x}}V &= -\abs{\nabla_{\dot{x}}V}_g^2 V, \quad \abs{\nabla_{\dot{x}}V}_g^2 \text{ is a constant along $x(t)$} \end{aligned} \right. \end{equation} holds for every $t$ in the domain of $\Gamma$ (see \cite[Equations 5.2]{Sas62}). Given $(x,v) \in SM$, the horizontal lift of $w \in T_xM$ is denoted by $w^h$, i.e.\ $w^h$ is the unique vector in $T_{x,v}(SM)$ such that $d(\pi|_{SM})(w^h) = w$ and $\cm(w^h) = 0$; the vertical lift $w^v$ is defined similarly.
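For example, if $M = \mR^n$ is Euclidean, then $R = 0$, covariant derivatives along $x(t)$ are ordinary derivatives, and the system~\eqref{eq:smgeod} reduces to $\ddot{x} = 0$ and $\ddot{V} = -\abs{\dot{V}}^2 V$ with $\abs{\dot{V}}$ constant. Thus $x(t)$ is a straight line, and since $\abs{V} \equiv 1$ forces $V \perp \dot{V}$, one has $V(t) = \cos(\omega t) V(0) + \omega^{-1} \sin(\omega t) \dot{V}(0)$ where $\omega = \abs{\dot{V}(0)}$, provided that $\omega > 0$; that is, $V$ rotates with constant angular speed in the plane spanned by $V(0)$ and $\dot{V}(0)$. The cases $\dot{V}(0) = 0$, $\dot{x}(0) = 0$ and the remaining ones correspond to the horizontal, vertical and oblique geodesics discussed below.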
Initial conditions for $x, \dot{x}, V$ and $\nabla_{\dot{x}}V$ at $t = 0$ with $g(V(0),\nabla_{\dot{x}(0)}V(0)) = 0$ and $\abs{V(0)}_g = 1$ determine a unique geodesic $\Gamma = (x,V)$, by (\ref{eq:smgeod}), which satisfies the initial conditions $\Gamma(0) = (x(0),V(0))$ and $\dot{\Gamma}(0) = \dot{x}(0)^h+(\nabla_{\dot{x}(0)}V(0))^v$ where the lifts are done with respect to $(x(0),V(0)) \in SM$. The geodesics of $SM$ are of the following three types: \begin{enumerate} \item If $\nabla_{\dot{x}(0)}V(0) = 0$, then $\Gamma$ is a parallel transport of $V(0)$ along the geodesic $x$ on $M$ (horizontal geodesics). \item If $\dot{x}(0) = 0$, then $\Gamma$ is a great circle on the fibre $\pi^{-1}(x(0))$ and $x(t) = x(0)$ (vertical geodesics, in this case one interprets the system (\ref{eq:smgeod}) via $\nabla_{\dot{x}} = D_t$). \item All the rest, i.e. solutions of (\ref{eq:smgeod}) with initial conditions $\dot{x}(0) \neq 0$ and $\nabla_{\dot{x}(0)}V(0) \neq 0$ (oblique geodesics). \end{enumerate} We state the following lemma for the sake of clarity. \begin{lemma}\label{lem:flowsAreGeod} Fix $(x,v) \in SM$ and $w \in S_xM$, $w \,\bot\, v$. Then $\phi_t(x,v)$ and $\hflow{w}{t}(x,v)$ are horizontal unit speed geodesics and $\vflow{w}{t}(x,v)$ is a vertical unit speed geodesic with respect to $t$. \end{lemma} \begin{proof} The fact that $\phi_t(x,v)$ and $\hflow{w}{t}(x,v)$ are horizontal geodesics and $\vflow{w}{t}(x,v)$ is a vertical geodesic follows immediately from their definitions and the above discussion based on the system of differential equations (\ref{eq:smgeod}). The fact that $\phi_t(x,v)$, $\hflow{w}{t}(x,v)$ and $\vflow{w}{t}(x,v)$ are unit speed follows from the equations (\ref{eq:hflow-K-dpi}) and (\ref{eq:vflow-K-dpi}) and the definition of the Sasaki metric. \end{proof} Lemma \ref{lem:flowsAreGeod} allows us to derive the following formulas which are used in the proof of Lemma \ref{lma:uf-gradient-estimates}. 
\begin{corollary}\label{cor:geodFlowDiff} Let $(x,v) \in SM$. Assume that $Y \in T_{x,v}(SM)$ has the decomposition \[Y = aX|_{x,v} + H + V, \quad H \in \mathcal{H}(x,v), V \in \mathcal{V}(x,v), a \in \mR.\] Then \[\begin{split} (D\phi_t)_{x,v}(aX|_{x,v}) &= aX|_{\phi_t(x,v)}, \\ (D\phi_t)_{x,v}(H) &= |H|_{\sasaki{g}}\Big[(J_{w_h}^h(t))^h + (D_t J_{w_h}^h(t))^v\Big], \\ (D\phi_t)_{x,v}(V) &= |V|_{\sasaki{g}}\Big[(J_{w_v}^v(t))^h + (D_t J_{w_v}^v(t))^v\Big], \end{split}\] where $D\phi_t$ is the differential of $\phi_t$, $w_h = d\pi(H)/\abs{d\pi(H)}$ and $w_v = \cm(V)/\abs{\cm(V)}$. Moreover, $(D\phi_t)_{x,v}(X|_{x,v})$ is orthogonal to $(D\phi_t)_{x,v}(H)$ and $(D\phi_t)_{x,v}(V)$. \end{corollary} \begin{proof} Lemma~\ref{lem:flowsAreGeod} gives that $\phi_s(x,v)$, $\hflow{w_h}{s}(x,v)$ and $\vflow{w_v}{s}(x,v)$ are unit speed geodesics on $SM$. If $\Gamma(s) = \phi_s(x,v)$, then $\Gamma(s)$ is a unit speed geodesic on $SM$, $\dot{\Gamma}(0) = X|_{x,v}$, and \[ (D\phi_t)_{x,v}(X|_{x,v}) = D\phi_t(\dot{\Gamma}(0)) = (\phi_t \circ \Gamma)'(0) = X|_{\phi_t(x,v)}. \] Moreover, using the unit speed geodesic $\Gamma(s) = \hflow{w_h}{s}(x,v)$ on $SM$ and the formulas after Lemma~\ref{lma:sm-gradients-flows}, we get \[\begin{split} (D\phi_t)_{x,v}(H) &= D\phi_t(|H|_{\sasaki{g}}\dot{\Gamma}(0)) = |H|_{\sasaki{g}}(\phi_t \circ \Gamma)'(0) \\ &= |H|_{\sasaki{g}}\Big[(J_{w_h}^h(t))^h + (D_t J_{w_h}^h(t))^v\Big] \end{split} \] which is orthogonal to $X|_{\phi_t(x,v)}$. Finally, the unit speed geodesic $\Gamma(s) = \vflow{w_v}{s}(x,v)$ on $SM$ gives \[\begin{split} (D\phi_t)_{x,v}(V) &= D\phi_t(|V|_{\sasaki{g}}\dot{\Gamma}(0)) = |V|_{\sasaki{g}}(\phi_t \circ \Gamma)'(0) \\ &= |V|_{\sasaki{g}}\Big[(J_{w_v}^v(t))^h + (D_t J_{w_v}^v(t))^v\Big] \end{split} \] which is also orthogonal to $X|_{\phi_t(x,v)}$. \end{proof} \subsubsection{Completeness of the unit tangent bundle} We will need the fact that $SM$ is complete when $M$ is complete.
This need arises from the theory of Sobolev spaces on manifolds (see Section \ref{sec:proofs}). We could not find a reference, so a proof is included. \begin{lemma}\label{lem:SMcomplete} Let $M$ be a complete Riemannian manifold with or without boundary. Then $SM$ is complete. \end{lemma} \begin{proof} Let $(y^{(j)})$ be a Cauchy sequence in $(SM,d_{\sasaki{g}})$. We show that it converges in the topology induced by $\sasaki{g}$. The definition of the Sasaki metric implies that \[ L_{\sasaki{g}}(\Gamma) \geq \int_0^\tau \abs{d\pi_{\Gamma(t)}(\dot{\Gamma}(t))}_g \,\text{d} t = L_g(\pi \circ \Gamma) \geq d_g(\pi(\Gamma(0)),\pi(\Gamma(\tau))) \] where $\Gamma \colon [0,\tau] \to SM$ is any piecewise $C^1$-smooth curve. Hence \begin{equation}\label{eq:sasaki-comparison} d_{\sasaki{g}}(a,b) \geq d_g(\pi(a), \pi(b)) \end{equation} for all $a, b \in SM$. The above inequality implies that $(\pi(y^{(j)}))$ is a Cauchy sequence in $(M,g)$, and by completeness of $M$ it converges, say to $p \in M$. Consider a coordinate neighborhood $U$ of $p$ in $M$, so that $\pi^{-1}(U)$ is diffeomorphic to $U \times S^{n-1}$. Choose an open set $V$ and a compact set $K$ so that $p \in V \subset K \subset U$. Now $\pi^{-1}(K)$ is homeomorphic to $K \times S^{n-1}$, which is compact as a product of two compact sets. Since $\pi(y^{(j)}) \to p$, there exists $N$ such that $\pi(y^{(j)}) \in V$ for all $j \geq N$, and this implies $y^{(j)} \in \pi^{-1}(K)$ for all $j \geq N$. Hence $(y^{(j)})$, being a Cauchy sequence in the compact set $\pi^{-1}(K)$, has a limit in $(\pi^{-1}(K),d_{\sasaki{g}}|_{\pi^{-1}(K)})$, and thus $(y^{(j)})$ converges also in $(SM,d_{\sasaki{g}})$. \end{proof} \subsection{Symmetric covariant tensor fields}\label{subsec:tensors} We denote by $\tf{m}(M)$ the set of $C^1$-smooth symmetric covariant $m$-tensor fields and by $\tf{m}_x(M)$ the symmetric covariant $m$-tensors at the point $x$.
Following \cite{DS11} (where more details are also given), we define the map $\lambda_x \colon \tf{m}_x(M) \to C^\infty(S_x M)$, \begin{equation*} \lambda_x(f)(v) = f_x(v,\dots,v), \end{equation*} which is given in local coordinates by \begin{equation*} \lambda_x (f_{i_1 \dots i_m} dx^{i_1} \otimes \dots \otimes dx^{i_m})(v) = f_{i_1 \dots i_m}(x)v^{i_1}\dots v^{i_m}. \end{equation*} If $\tf{m}_x(M)$ and $C^\infty(S_x M)$ are endowed with their usual $L^2$-inner products, then $\lambda_x$ is an isomorphism onto its image, and even an isometry up to a constant factor. It depends smoothly on $x$, and hence we get an embedding $\lambda \colon \tf{m}(M) \to C^1(SM)$. The mapping $\lambda$ identifies symmetric covariant $m$-tensor fields with homogeneous polynomials (with respect to $v$) of degree $m$ on $SM$. We will use this identification and do not always write $\lambda$ explicitly. The symmetrization of a tensor is defined by \begin{equation*} \sigma(\omega_1 \otimes \cdots \otimes \omega_m) = \frac{1}{m!} \sum_{\pi \in \Pi_m} \omega_{\pi(1)} \otimes \cdots \otimes \omega_{\pi(m)}, \end{equation*} where $\Pi_m$ is the permutation group of $\{1,\dots,m\}$. From the above expression we see that if a covariant $m$-tensor field $f$ is in $E_\eta^1(M)$ or $P_\eta^1(M)$ for some $\eta > 0$, then so is $\sigma f$. Furthermore, for $f \in \tf{m}(M)$ one has \begin{equation}\label{eq:sn=X} \lambda(\sigma \nabla f) = X\lambda(f). \end{equation} It follows from the last identity and the fundamental theorem of calculus that if $f \in P^1_{\eta}(M)$ for some $\eta > 0$, then $I_m(\sigma \nabla f) = 0$. This shows that $I_m$ always has a nontrivial kernel for $m \geq 1$, as described in the introduction. The next lemma states how the decay properties of a tensor field carry over to functions on $SM$. \begin{lemma}\label{lma:tensor-sm-gradients} Suppose $f \in \tf{m}(M)$ and $\eta > 0$.
\begin{enumerate} \item[(a)] If $f \in E_\eta^1(M)$, then \begin{equation*} \sup_{v \in S_x M}|Xf(x,v)| \in E_\eta(M),\quad \sup_{v \in S_x M}|\overset{h}{\nabla} f(x,v)| \in E_\eta(M) \middletext{and} \sup_{v \in S_x M}|\overset{v}{\nabla} f(x,v)| \in E_\eta(M). \end{equation*} \item[(b)] If $f \in P_\eta^1(M)$, then \begin{equation*} \sup_{v \in S_x M}|Xf(x,v)| \in P_{\eta+1}(M),\quad \sup_{v \in S_x M}|\overset{h}{\nabla} f(x,v)| \in P_{\eta+1}(M) \middletext{and} \sup_{v \in S_x M}|\overset{v}{\nabla} f(x,v)| \in P_\eta(M). \end{equation*} \end{enumerate} \end{lemma} \begin{proof} (a) The result for $Xf$ follows from~\eqref{eq:sn=X}. To prove the other statements we take $x \in M$ and use local normal coordinates $(x^1,\dots,x^n)$ centered at $x$ and the associated coordinates $(v^1,\dots,v^n)$ for $T_x M$. In these coordinates $f(x) = f_{i_1 \dots i_m}(x) \,dx^{i_1} \otimes \dots \otimes dx^{i_m}$ and $\nabla f(x) = \partial_{x_j} f_{i_1 \dots i_m}(x) \,dx^{j} \otimes dx^{i_1} \otimes \dots \otimes dx^{i_m}$. We see that \begin{equation*} \abs{f(x)}_g = {\left( \sum_{{i_1,\dots,i_m}} \abs{f_{i_1 \dots i_m}(x)}^2 \right)}^{1/2} \middletext{and} \abs{\nabla f(x)}_g = {\left(\sum_{j,i_1,\dots,i_m} \abs{\partial_{x_j} f_{i_1 \dots i_m}(x)}^2 \right)}^{1/2}. \end{equation*} For $Xf, \overset{h}{\nabla} f$ and $\overset{v}{\nabla} f$ at $x$ we have coordinate representations (see~\cite[Appendix A]{PSU15}) \begin{equation*} \begin{split} &Xf(x,v) = v^j \partial_{x_j} f,\\ &\overset{h}{\nabla} f(x,v) = \left(\partial^{x_j} f - (v^k \partial_{x_k}f)v^j \right)\partial_{x_j},\\ &\overset{v}{\nabla} f(x,v) = \partial^{v_j}f\partial_{x_j}. 
\end{split} \end{equation*} We get that \begin{equation*} Xf(x,v)X(x,v) + \overset{h}{\nabla} f(x,v) = \partial^{x_j} f \partial_{x_j} = \partial^{x_j} f_{i_1 \dots i_m}(x) v^{i_1} \dots v^{i_m}\partial_{x_j} \end{equation*} and, using the orthogonality of $Xf(x,v)X(x,v)$ and $\overset{h}{\nabla} f(x,v)$ and the Cauchy-Schwarz inequality, \begin{equation*} \sup_{v \in S_x M} |\overset{h}{\nabla} f(x,v)| \leq {\left(\sum_{j,i_1,\dots,i_m} \abs{\partial_{x_j} f_{i_1 \dots i_m}(x)}^2 \right)}^{1/2} = \abs{\nabla f(x)}_g. \end{equation*} This implies that $ \sup_{v \in S_x M}|\overset{h}{\nabla} f(x,v)| \in E_\eta(M)$. For $\overset{v}{\nabla} f$, the identity $\partial_{v_j} v^k = \delta_j^k - v_j v^k$ (see \cite{PSU15}) implies that \begin{align*} \overset{v}{\nabla} f(x,v) &= \sum_{j=1}^n (f_{j i_2 \dots i_m} v^{i_2} \dots v^{i_m} - f(x,v) v_j) \partial_{x_j} + \ldots + \sum_{j=1}^n (f_{i_1 \dots i_{m-1} j} v^{i_1} \dots v^{i_{m-1}} - f(x,v) v_j) \partial_{x_j} \\ &= m \sum_{j=1}^n (f_{j i_2 \dots i_m} v^{i_2} \dots v^{i_m} - f(x,v) v_j) \partial_{x_j}. \end{align*} Since $f(x,v) v_j$ is the $j$th component of the orthogonal projection onto $v$ of the vector with components $f_{j i_2 \dots i_m}(x) v^{i_2} \dots v^{i_m}$, expanding the squares and using the Cauchy-Schwarz inequality gives \begin{equation*} \abs{\overset{v}{\nabla} f(x,v)}^2 \leq m^2 \sum_{j=1}^n \abs{f_{j i_2 \dots i_m}(x) v^{i_2} \dots v^{i_m}}^2 \leq m^2 \sum_{i_1,\dots,i_m} \abs{f_{i_1 \dots i_m}(x)}^2 = m^2 \abs{f(x)}_g^2, \end{equation*} which in turn implies that $\sup_{v \in S_x M}|\overset{v}{\nabla} f(x,v)| \in E_\eta(M)$. The proof for (b) is the same. \end{proof} \section{Growth estimates}\label{sec:estimates} Throughout this section we assume that $f$ is a symmetric covariant $m$-tensor field in $P_\eta(M)$ for some $\eta > 1$. We begin by observing that the geodesic X-ray transform is well defined for such $f$. \begin{lemma} \label{lemma_u_finite} Let $f \in P_\eta(M)$ for some $\eta > 1$. For any $(x,v) \in SM$ one has \[ \int_{-\infty}^\infty \lvert f_{\gamma_{x,v}(t)} (\dot{\gamma}_{x,v}(t),\dots,\dot{\gamma}_{x,v}(t)) \rvert \, \text{d} t < \infty.
\] \end{lemma} \begin{proof} The assumption implies that $\lvert f_{\gamma_{x,v}(t)} (\dot{\gamma}_{x,v}(t),\dots,\dot{\gamma}_{x,v}(t)) \rvert \leq C(1+d_g(\gamma_{x,v}(t),o))^{-\eta}$. One can then change variables so that $t=0$ corresponds to the point on the geodesic that is closest to $o$, split the integral over $t \geq 0$ and $t \leq 0$, and use the fact that the integrands are $\leq C (1+\abs{t})^{-\eta}$ by the estimate \eqref{eq:espacing-geodesic-distance-stronger}. \end{proof} If $f \in P_{\eta}(M)$ for some $\eta > 1$, we may now define \begin{equation*} u^f(x,v) \mdef \int_0^\infty f_{\gamma_{x,v}(t)} (\dot{\gamma}_{x,v}(t),\dots,\dot{\gamma}_{x,v}(t)) \, \text{d} t. \end{equation*} It is easy to see that \begin{equation*} u^f(x,v) + (-1)^m u^f(x,-v) = If(x,v) \end{equation*} for all $(x,v) \in SM$. We have the usual reduction to the transport equation. \begin{lemma}\label{lma:Xuf} Let $f \in P_\eta(M)$ for some $\eta > 1$. Then $Xu^f = -f$. \end{lemma} \begin{proof} By definition \begin{equation*} Xu^f(x,v) = \lim_{s \to 0} -\frac{1}{s}\int_0^s f_{\gamma_{x,v}(t)}(\dot\gamma_{x,v}(t),\dots,\dot\gamma_{x,v}(t)) \, \text{d} t = -f_x(v,\dots,v).\qedhere \end{equation*} \end{proof} Next we derive decay estimates for $u^f$ under the assumption that $If = 0$. \begin{lemma}\label{lma:uf-estimate} Suppose that $If = 0$. \begin{enumerate} \item[(a)] If $f \in E_\eta(M)$ for $\eta > 0$, then \begin{equation*} \abs{u^f(x,v)} \leq C(1+d_g(x,o))\e^{-\eta d_g(x,o)} \end{equation*} for all $(x,v) \in SM$. \item[(b)] If $f \in P_\eta(M)$ for $\eta > 1$, then \begin{equation*} \abs{u^f(x,v)} \leq \frac{C}{{(1+d_g(x,o))}^{\eta-1}} \end{equation*} for all $(x,v) \in SM$. \end{enumerate} \end{lemma} \begin{proof} Since $If = 0$, one has $\abs{u^f(x,v)} = \abs{u^f(x,-v)}$. By Lemma \ref{lma:escaping-direction}, possibly after replacing $(x,v)$ by $(x,-v)$, we may assume that $\gamma_{x,v}$ is escaping.
We have \begin{equation*} \abs{u^f(x,v)} = \aabs{\int_0^\infty f(\gamma(t))(\dot\gamma(t),\dots,\dot\gamma(t)) \, \text{d} t} \leq \int_0^\infty \abs{f(\gamma(t))}_g \, \text{d} t. \end{equation*} The rest of the proof is as in~\cite[Lemma 3.2]{Leh16}. \end{proof} \begin{lemma}\label{lma:sm-gradients-symmetry} Let $f \in P_\eta(M)$ for some $\eta > 1$. If $If = 0$ and $u^f$ is differentiable at $(x,v) \in SM$, then \begin{equation*} \overset{h}{\nabla} u^f(x,-v) = (-1)^{m-1} \overset{h}{\nabla} u^f(x,v) \middletext{and} \overset{v}{\nabla} u^f(x,-v) = (-1)^m \overset{v}{\nabla} u^f(x,v). \end{equation*} \end{lemma} \begin{proof} From $If = 0$ it follows that \begin{equation*} u^f(x,v) + (-1)^m u^f(x,-v) = 0. \end{equation*} Fix $w \in S_x M, \,w \perp v$. We note that \begin{equation*} u^f (\hflow{w}{s}(x,-v)) + (-1)^m u^f(\hflow{-w}{-s}(x,v)) = 0 \end{equation*} and hence \begin{equation*} \frac{\text{d}}{\text{d} s} u^f(\hflow{w}{s}(x,-v))\valueat{s=0} = -(-1)^m \frac{\text{d}}{\text{d} s} (u^f(\hflow{-w}{-s}(x,v)))\valueat{s=0} = (-1)^m \frac{\text{d}}{\text{d} s} (u^f(\hflow{-w}{s}(x,v)))\valueat{s=0}. \end{equation*} By Lemma \ref{lma:sm-gradients-flows} \begin{equation*} \br{\overset{h}{\nabla} u^f(x,-v),w} = (-1)^m\br{\overset{h}{\nabla} u^f(x,v),-w} = -(-1)^m \br{\overset{h}{\nabla} u^f(x,v),w}. \end{equation*} For $\overset{v}{\nabla} u^f$ we use that \begin{equation*} u^f (\vflow{w}{s}(x,-v)) + (-1)^m u^f(\vflow{-w}{s}(x,v)) = 0 \end{equation*} and by Lemma \ref{lma:sm-gradients-flows} we get that \begin{equation*} \br{\overset{v}{\nabla} u^f(x,-v),w} = (-1)^{m-1} \br{\overset{v}{\nabla} u^f(x,v),-w} = (-1)^m \br{\overset{v}{\nabla} u^f(x,v), w}.\qedhere \end{equation*} \end{proof} We move on to prove growth estimates for Jacobi fields. These estimates will be used to derive estimates for $\overset{h}{\nabla} u^f$ and $\overset{v}{\nabla} u^f$. \begin{lemma}\label{lma:jacobi-estimates} Suppose $J(t)$ is a normal Jacobi field along a geodesic $\gamma$. 
\begin{enumerate} \item[(a)] If all sectional curvatures along $\gamma([0,\tau])$ are $\geq -K_0$ for some constant $K_0 > 0$, and if $J(0) = 0$ or $D_t J(0) = 0$, then \begin{equation*} \abs{J(t)} \leq \abs{J(0)} \,\cosh(\sqrt{K_0} t) + \abs{D_t J(0)} \,\frac{\sinh(\sqrt{K_0} t)}{\sqrt{K_0}} \end{equation*} for $t \in [0,\tau]$. \item[(b)] If $t_0 \in (0,\tau)$, then \begin{equation*} \abs{D_t J(t)} + \abs{\frac{J(t)}{t} - D_t J(t)} \leq \left[ \abs{D_t J(t_0)} + \abs{\frac{J(t_0)}{t_0} - D_t J(t_0)} \right] \,\e^{2 \int_{t_0}^t s \mathcal{K}(\gamma(s)) \,\text{d} s} \end{equation*} for $t \in [t_0,\tau]$ \end{enumerate} \end{lemma} \begin{proof} (a) follows from the Rauch comparison theorem \cite[Theorem 4.5.2]{Jos08}. For (b), we follow the argument in~\cite{Leh16}. Consider an orthonormal frame $\{ \dot{\gamma}(t), E_1(t), \ldots, E_{n-1}(t) \}$ obtained by parallel transporting an orthonormal basis of $T_{\gamma(0)} M$ along $\gamma$. Write $J(t) = u^j(t) E_j(t)$, so that the Jacobi equation becomes \begin{equation} \label{eq:jacobi-system} \ddot{u}(t) + R(t) u(t) = 0 \end{equation} where $u(t) = (u^1(t), \ldots, u^{n-1}(t))$ and $R_{jk} = R(E_j, \dot{\gamma}, \dot{\gamma}, E_k)$. We wish to estimate $v(t) = \frac{u(t)}{t}$, and we do this by writing $v(t) = A(t) + \frac{B(t)}{t}$ where \begin{equation*} A(t) = \dot{u}(t), \qquad B(t) = u(t) - t \dot{u}(t). \end{equation*} By using the equation, we see that \begin{align*} A(t) - A(t_0) &= - \int_{t_0}^t s R(s) v(s) \,\text{d} s, \\ B(t) - B(t_0) &= \int_{t_0}^t s^2 R(s) v(s) \,\text{d} s. \end{align*} Write $g(t) = \abs{A(t)} + \abs{\frac{B(t)}{t}}$. If $t \geq t_0$ one has \begin{equation*} g(t) = \abs{A(t_0) - \int_{t_0}^t s R(s) v(s) \,\text{d} s} + \frac{1}{t} \abs{B(t_0) + \int_{t_0}^t s^2 R(s) v(s) \,\text{d} s} \leq g(t_0) + 2 \int_{t_0}^t s \norm{R(s)} g(s) \,\text{d} s. 
\end{equation*} The Gronwall inequality implies that \begin{equation*} g(t) \leq g(t_0) \e^{2 \int_{t_0}^t s \norm{R(s)} \,\text{d} s}. \end{equation*} The result follows from this, since $\norm{R(s)} = \sup_{\abs{\xi}=1} \abs{R(s) \xi \cdot \xi} = \sup_{\dot{\gamma}(s) \in \Pi} \abs{K(\Pi)} \leq \mathcal{K}(\gamma(s))$. \end{proof} \begin{corollary}\label{cor:jacobi-estimates} Suppose that $(M,g)$ is a Cartan-Hadamard manifold. Let $\gamma$ be a geodesic and $J$ a normal Jacobi field along it, satisfying either $J(0) = 0$ and $\abs{D_t J(0)} \leq 1$ or $\abs{J(0)} \leq 1$ and $D_t J(0) = 0$. \begin{enumerate} \item[(a)] If $-K_0 \leq K \leq 0$ and $K_0 > 0$, then \begin{equation*} |J(t)| \leq C \e^{\sqrt{K_0} t} \middletext{and} |D_t J(t)| \leq C \e^{\sqrt{K_0} t} \end{equation*} for $t \geq 0$, where the constants do not depend on the geodesic $\gamma$. \item[(b)] If $\mathcal{K} \in P_\kappa(M)$ for some $\kappa > 2$, then \begin{equation*} |J(t)| \leq C(t+1) \middletext{and} |D_t J(t)| \leq C \end{equation*} for $t \geq 0$. If in addition $\gamma \in \esc{o}$, then the constants do not depend on the geodesic $\gamma$. \end{enumerate} \end{corollary} \begin{proof} (a) The estimate for $|J(t)|$ follows directly from Lemma~\ref{lma:jacobi-estimates}. Using the same notation as in the proof of that lemma, we have $|D_t J(t)| = |\dot{u}(t)|$, and by integrating~\eqref{eq:jacobi-system} from $0$ to $t$ we get \begin{equation*} \begin{split} |\dot{u}(t)| &\leq |\dot{u}(0)| + \int_0^t \norm{R(s)}|u(s)| \, \text{d} s \\ &\leq |D_t J(0)| + \int_0^t K_0 |J(s)| \, \text{d} s\\ &\leq C\e^{\sqrt{K_0} t}. \end{split} \end{equation*} (b) For a fixed geodesic, the estimates follow from Lemma~\ref{lma:jacobi-estimates}.
If $\mathcal{K} \in P_\kappa(M)$ for $\kappa > 2$, then \begin{equation*} A \mdef \sup_{\gamma \in \esc{o}} \int_0^\infty s \mathcal{K}(\gamma(s)) \, \text{d} s \leq C \sup_{\gamma \in \esc{o}} \int_0^\infty s (1+d_g(\gamma(s),o))^{-\kappa} \, \text{d} s < \infty \end{equation*} by using \eqref{eq:espacing-geodesic-distance-stronger}. Let us fix $t_0 = 1$ and suppose that $J$ is a Jacobi field along a geodesic in $\esc{o}$ whose initial values satisfy the given assumptions. From Lemma~\ref{lma:jacobi-estimates} and (a) we then get that \begin{equation*} \begin{split} |J(t)| &\leq \e^{2A} \left( 2\abs{D_t J(1)} + \abs{J(1)} \right) t\\ &\leq \e^{2A}C\e^{\sqrt{K_0}} t \end{split} \end{equation*} for $t \geq 1$, where $K_0 = \sup_{x\in M} \abs{\mathcal{K}(x)}$. For $t \in [0,1]$ we can estimate $|J(t)| \leq C\e^{\sqrt{K_0}}$. By combining these two estimates we get \begin{equation*} |J(t)| \leq C(1+\e^{2A})(1+t) \end{equation*} for $t \geq 0$, and the constants do not depend on $\gamma \in \esc{o}$. For $|D_t J(t)|$, Lemma~\ref{lma:jacobi-estimates} gives the estimate \begin{equation*} |D_t J(t)| \leq \e^{2A} \left( 2\abs{D_t J(1)} + \abs{J(1)} \right) \end{equation*} for $t \geq 1$, and for $t \in [0,1]$ we get a bound from (a). Neither of these bounds depends on $\gamma \in \esc{o}$. \end{proof} \begin{lemma}\label{lma:uf-gradient-estimates} Suppose that $If = 0$. \begin{enumerate} \item[(a)] If $-K_0 \leq K \leq 0$, $K_0 >0$ and $f \in E_\eta^1(M)$ for some $\eta > \sqrt{K_0}$, then $u^f$ is differentiable along every geodesic on $SM$, $u^f \in W^{1,\infty}(SM)$ and \begin{equation*} \abs{\overset{h}{\nabla} u^f(x,v)} \leq C\e^{-(\eta-\sqrt{K_0})d_g(x,o)} \end{equation*} for a.e. $(x,v) \in SM$.
\item[(b)] If $\mathcal{K} \in P_\kappa(M)$ for some $\kappa > 2$ and $f \in P_\eta^1(M)$ for some $\eta > 1$, then $u^f$ is differentiable along every geodesic on $SM$, $u^f \in W^{1,\infty}(SM)$ and \begin{equation*} \abs{\overset{h}{\nabla} u^f(x,v)} \leq \frac{C}{{(1 + d_g(x,o))}^{\eta-1}} \end{equation*} for a.e. $(x,v) \in SM$. \end{enumerate} The same estimates hold for $\overset{v}{\nabla} u^f$ with the same assumptions. \end{lemma} \begin{proof}[Proof of $u^f \in W^{1,\infty}_\text{loc}(SM)$] We show that $u^f$ is locally Lipschitz continuous. Fix $(x_0, v_0) \in SM$, and suppose that $\Gamma(s)$ is a unit speed geodesic on $SM$ through $(x_0,v_0)$. We have \begin{align} \frac{u^f(\Gamma(r)) - u^f(\Gamma(0))}{r} &= \int_0^{\infty} \frac{f(\phi_t(\Gamma(r))) - f(\phi_t(\Gamma(0)))}{r} \,\text{d} t \notag \\ & = \int_0^{\infty} \frac{1}{r} \int_0^r \frac{\partial}{\partial s} \left[ f(\phi_t(\Gamma(s))) \right] \,\text{d} s \,\text{d} t \label{u_difference_quotient}\\ &= \int_0^{\infty} \frac{1}{r} \int_0^r \langle \nabla_{SM} f(\phi_t(\Gamma(s))), D\phi_t(\Gamma(s)) \dot{\Gamma}(s) \rangle \,\text{d} s \,\text{d} t. \notag \end{align} When we apply Corollary \ref{cor:geodFlowDiff} to the right hand side of \eqref{u_difference_quotient} (and omit the identifications), we find that \begin{equation}\label{eq:integral-on-rhs} \begin{split} &\frac{u^f(\Gamma(r)) - u^f(\Gamma(0))}{r} = \int_0^{\infty} \frac{1}{r} \int_0^r \Bigg[ Xf(\phi_t(\Gamma(s))) \langle \dot{\Gamma}(s), X \rangle \\ &\quad+ \langle \overset{h}{\nabla} f(\phi_t(\Gamma(s))), |\dot{\Gamma}(s)^h| J_{w_h(s)}^h(t) + |\dot{\Gamma}(s)^v| J_{w_v(s)}^v(t) \rangle \\ &\quad+ \langle \overset{v}{\nabla} f(\phi_t(\Gamma (s))), |\dot{\Gamma}(s)^h| D_t J_{w_h(s)}^h(t) + |\dot{\Gamma}(s)^v| D_t J_{w_v(s)}^v(t) \rangle \Bigg] \,\text{d} s \,\text{d} t \end{split} \end{equation} where $w_h(s) = \dot{\Gamma}(s)^h/|\dot{\Gamma}(s)^h|$ and $w_v(s) = \dot{\Gamma}(s)^v/|\dot{\Gamma}(s)^v|$. 
Here the Jacobi fields are along the geodesic $\gamma_{\Gamma(s)}(t) := \pi(\phi_t(\Gamma(s)))$. By definition their initial values fulfill the assumptions of Corollary~\ref{cor:jacobi-estimates}. From this point on we will work under the assumptions of (b). The proof under the assumptions of (a) is similar but simpler. We fix a small $\eps > 0$. We show that the integral \eqref{eq:integral-on-rhs} has a uniform upper bound for every $r \in (0,1]$ and every geodesic $\Gamma$ through a point in $\ball{(x_0,v_0)}{\eps} \subset SM$. For $(x,v) \in SM$ we denote by $\mathcal{G}(x,v)$ the set of unit speed geodesics on $SM$ through $(x,v)$, and define \[J(x_0,v_0,\eps) \mdef \set{\Gamma \in \mathcal{G}(x,v)}{(x,v) \in \ball{(x_0,v_0)}{\eps}}.\] For all $\Gamma \in J(x_0,v_0,\eps)$ with $\Gamma(0) = (x,v)$ and $s \in (0,r]$, the estimate~\eqref{eq:sasaki-comparison} gives that $d_g(x,x_0) \leq \eps$ and \[ d_g(\gamma_{\Gamma(s)}(0),x) = d_g(\pi(\Gamma(s)),x) \leq d_{\sasaki{g}}(\Gamma(s),(x,v)) \leq s. \] The estimate~\eqref{eq:geodesic-distance} implies that \begin{equation}\label{eq:dist-gamma-ineq} \begin{split} d_g (\pi(\phi_t(\Gamma(s))),o) &= d_g(\gamma_{\Gamma(s)}(t),o) \geq t - d_g(\gamma_{\Gamma(s)}(0),o) \\ &\geq t - \sup_{s\in(0,r]}d_g(\gamma_{\Gamma(s)}(0),o) \geq t - d_g(x,o) - r \\ &\geq t - d_g(x_0,o) - \eps -r \end{split} \end{equation} for all $t \geq t_0$, where $t_0 \mdef d_g(x_0,o) + r + \eps$. On the interval $[0,t_0]$ we can use the trivial estimate $d_g (\pi(\phi_t(\Gamma(s))),o) \geq 0$. 
Further, the estimate~\eqref{eq:dist-gamma-ineq} gives \begin{equation}\label{eq:K-decays-nicely} \mathcal{K}(\gamma_{\Gamma(s)}(t)) \leq \frac{C}{(1+ d_g(\gamma_{\Gamma(s)}(t),o))^\kappa} \leq \frac{C}{(1+ t - d_g(x_0,o)-\eps-r)^\kappa} \end{equation} for all $t \geq t_0$, where the constant $C$ does not depend on $s \in (0,r]$ or the geodesic $\Gamma \in J(x_0,v_0,\eps)$, and hence \begin{equation}\label{eq:Gamma-sup-K} \sup_{\substack{\Gamma \in J(x_0,v_0,\eps),\\s \in (0,r]}} \int_0^\infty t\mathcal{K}(\gamma_{\Gamma(s)}(t)) \, \text{d} t < \infty. \end{equation} Using the proof of Corollary~\ref{cor:jacobi-estimates} together with~\eqref{eq:Gamma-sup-K}, we can find a constant $C$ which does not depend on $s \in (0,r]$ so that one has \begin{equation*} |J_{w_h(s)}^h (t)| \leq Ct,\quad |D_t J_{w_h(s)}^h (t)| \leq C \end{equation*} for all $t \geq 0$ and $\Gamma \in J(x_0,v_0,\eps)$. Similar estimates also hold uniformly for $J_{w_v(s)}^v (t)$ and $D_t J_{w_v(s)}^v (t)$. Recall that $|\dot{\Gamma}(s)^h|, |\dot{\Gamma}(s)^v| \leq |\dot{\Gamma}(s)| = 1$, and that $w_h(s), w_v(s)$ depend on $\Gamma$. 
By combining the above estimates for Jacobi fields with estimate~\eqref{eq:dist-gamma-ineq} and Lemma~\ref{lma:tensor-sm-gradients} we get for the integrand in~\eqref{eq:integral-on-rhs} that \begin{equation} \begin{split} & \big|Xf(\phi_t(\Gamma(s))) \langle \dot{\Gamma}(s), X \rangle \\ &\quad+ \langle \overset{h}{\nabla} f(\phi_t(\Gamma(s))), |\dot{\Gamma}(s)^h| J_{w_h(s)}^h(t) + |\dot{\Gamma}(s)^v| J_{w_v(s)}^v(t) \rangle \\ &\quad+ \langle \overset{v}{\nabla} f(\phi_t(\Gamma (s))), |\dot{\Gamma}(s)^h| D_t J_{w_h(s)}^h(t) + |\dot{\Gamma}(s)^v| D_t J_{w_v(s)}^v(t) \rangle\big| \\ &\leq \abs{Xf(\gamma_{\Gamma(s)}(t))} + \abs{\overset{h}{\nabla} f(\gamma_{\Gamma(s)}(t))}\abs{|\dot{\Gamma}(s)^h|J_{w_h(s)}^h(t) + |\dot{\Gamma}(s)^v|J_{w_v(s)}^v(t)}\\ &\quad+ \abs{\overset{v}{\nabla} f(\gamma_{\Gamma(s)}(t))}\abs{|\dot{\Gamma}(s)^h|D_t J_{w_h(s)}^h(t) + |\dot{\Gamma}(s)^v|D_t J_{w_v(s)}^v(t)}\\ &\leq \abs{Xf(\gamma_{\Gamma(s)}(t))} + \abs{\overset{h}{\nabla} f(\gamma_{\Gamma(s)}(t))}\left(|J_{w_h(s)}^h(t)| + |J_{w_v(s)}^v(t)|\right) \\ &\quad+ \abs{\overset{v}{\nabla} f(\gamma_{\Gamma(s)}(t))}\left(|D_t J_{w_h(s)}^h(t)| + |D_t J_{w_v(s)}^v(t)|\right)\\ &\leq \frac{Ct}{(1+t-d_g(x_0,o)- \eps - r)^{\eta + 1}} + \frac{C}{(1+t-d_g(x_0,o)- \eps - r)^{\eta}} \end{split} \end{equation} for all $t \in [t_0,\infty)$, $s \in (0,r]$ and $\Gamma \in J(x_0,v_0,\eps)$. On the interval $[0,t_0]$ we also get a uniform upper bound since $f$, its covariant derivative and the sectional curvatures are all bounded. We can conclude that the integral on the right hand side of \eqref{eq:integral-on-rhs} converges absolutely, with a uniform bound $C < \infty$ over $r \in (0,1]$ and the set $J(x_0,v_0,\eps)$. This shows that $u^f$ is locally Lipschitz, i.e.~$u^f \in W_\text{loc}^{1,\infty}(SM)$ (cf. Remark \ref{rmk:diff_ae}). 
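It is worth noting where the hypothesis $\eta > 1$ of part (b) enters; the following routine check (included here only as a remark) verifies that the last bound is integrable in $t$. Writing $c \mdef d_g(x_0,o) + \eps + r$, one has \begin{equation*} \int_{t_0}^\infty \left[ \frac{Ct}{(1+t-c)^{\eta+1}} + \frac{C}{(1+t-c)^{\eta}} \right] \text{d} t < \infty, \end{equation*} since both terms of the integrand decay like $t^{-\eta}$ as $t \to \infty$ and $\eta > 1$.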
Moreover, the uniform estimate together with the dominated convergence theorem guarantees that the limit as $r\to 0$ of~\eqref{u_difference_quotient} exists for all geodesics $\Gamma$ on $SM$. This finishes the first part of the proof. \end{proof} \begin{proof}[Proof of the gradient estimates] By Rademacher's theorem $u^f$ is differentiable almost everywhere, and thus we can assume that $u^f$ is differentiable at $(x,v) \in SM$. By Lemmas~\ref{lma:escaping-direction} and~\ref{lma:sm-gradients-symmetry} we can assume that $(x,v)$ satisfies $\gamma = \gamma_{x,v} \in \esc{o}$. We may also assume that $\overset{h}{\nabla} u^f(x,v) \neq 0$. Since $\br{\overset{h}{\nabla} u^f(x,v),v} = 0$, we can take $w = \overset{h}{\nabla} u^f(x,v)/\absn{\overset{h}{\nabla} u^f(x,v)}$ in Lemma~\ref{lma:sm-gradients-flows} and get that \begin{equation} \begin{split} \abs{\overset{h}{\nabla} u^f(x,v)} &= \frac{\text{d}}{\text{d} s} u^f(\hflow{w}{s}(x,v))\valueat{s=0}\\ &= \int_0^\infty \br{\overset{h}{\nabla} f(\phi_t(x,v)), J^h(t)} + \br{\overset{v}{\nabla} f(\phi_t(x,v)), D_t J^h(t)} \, \text{d} t \end{split} \end{equation} where $J^h$ is again a Jacobi field along $\gamma$ fulfilling the assumptions of Corollary~\ref{cor:jacobi-estimates}. Under the conditions in part (a), the estimate~\eqref{eq:espacing-geodesic-distance-stronger} implies \[ \abs{\overset{h}{\nabla} u^f(x,v)} \leq C \int_0^\infty \e^{-\eta d_g(\gamma(t),o)} \e^{\sqrt{K_0} \,t} \, \text{d} t \leq C\int_0^\infty \e^{-\eta \sqrt{d_g(x,o)^2 + t^2}} \e^{\sqrt{K_0} \,t} \, \text{d} t. \] Writing $r = d_g(x,o)$ and splitting the integral over $[0,r)$ and $[r, \infty)$ gives \[ \abs{\overset{h}{\nabla} u^f(x,v)} \leq C \left[ \int_0^r \e^{-\eta r} \e^{\sqrt{K_0} \,t} \,\text{d} t+ \int_r^{\infty} \e^{-\eta t} \e^{\sqrt{K_0}\,t} \,\text{d} t \right] \leq C \e^{-(\eta-\sqrt{K_0}) d_g(x,o)}. \] The above estimate also shows that $\absn{\overset{h}{\nabla} u^f}$ is bounded. 
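The two integrals in the splitting above can be computed explicitly; this is a routine calculation using only $\eta > \sqrt{K_0}$, recorded here for concreteness: \begin{equation*} \int_0^r \e^{-\eta r} \e^{\sqrt{K_0} \,t} \,\text{d} t \leq \frac{\e^{-(\eta-\sqrt{K_0}) r}}{\sqrt{K_0}}, \qquad \int_r^{\infty} \e^{-(\eta - \sqrt{K_0})t} \,\text{d} t = \frac{\e^{-(\eta-\sqrt{K_0}) r}}{\eta - \sqrt{K_0}}, \end{equation*} and both right hand sides are bounded by $C\e^{-(\eta-\sqrt{K_0})r}$ with $r = d_g(x,o)$.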
Similarly, under the conditions in part (b), Lemma \ref{lma:tensor-sm-gradients}, Corollary \ref{cor:jacobi-estimates} and~\eqref{eq:espacing-geodesic-distance-stronger} imply \begin{align*} \abs{\overset{h}{\nabla} u^f(x,v)} &\leq C \int_0^{\infty} \frac{1+t}{(1+d_g(\gamma(t),o))^{\eta+1}} \,\text{d} t + C \int_0^{\infty} \frac{1}{(1+d_g(\gamma(t),o))^{\eta}} \,\text{d} t \\ &\leq C \left[ \int_0^r \frac{1+t}{(1+r)^{\eta+1}} \,\text{d} t + \int_r^{\infty} \frac{1+t}{(1+t)^{\eta+1}} \,\text{d} t \right] \leq C(1+r)^{-(\eta-1)} \end{align*} where $r = d_g(x,o)$. The same arguments apply to $\overset{v}{\nabla} u^f$. Hence $u^f \in W^{1,\infty}(SM)$ in both cases (a) and (b). \end{proof} \begin{lemma}\label{lma:sphere-volume} \begin{enumerate} \item[(a)] If $-K_0 \leq K \leq 0$ and $K_0 > 0$, then \begin{equation*} \Vol \sphere{o}{r} \leq C \e^{(n-1)\sqrt{K_0} r} \end{equation*} for all $r \geq 0$. \item[(b)] If $\mathcal{K} \in P_\kappa(M)$ for $\kappa >2$, then \begin{equation*} \Vol \sphere{o}{r} \leq Cr^{n-1} \end{equation*} for all $r \geq 0$. \end{enumerate} \end{lemma} \begin{proof} We define the mapping $f \colon S_o M \to \sphere{o}{r}$, \begin{equation*} f(v) = (\pi \comp \phi_r)(o,v) = \exp_o(rv). \end{equation*} We denote by $\text{d} \Sigma$ the volume form on $\sphere{o}{r}$ and have that \begin{equation*} \Vol \sphere{o}{r} = \int_{\sphere{o}{r}} \text{d} \Sigma = \int_{S_o M} f^*(\text{d} \Sigma) = \int_{S_o M} \mu \, \text{d} S, \end{equation*} where $\text{d} S$ denotes the volume form on $S_o M$ (induced by the Sasaki metric) and $\mu \colon S_o M \to \mR$. Let $v \in S_o M$ and ${\{ w_i \}}_{i=1}^{n-1}$ be an orthonormal basis for $T_v S_o M$ with respect to the Sasaki metric. By the Gauss lemma ${\{d_v f(w_i) \}}_{i=1}^{n-1}$ is a basis for $T_{f(v)} \sphere{o}{r}$ and \begin{equation*} {f^*(\text{d}\Sigma)}_v (w_1,\dots,w_{n-1}) = \text{d}\Sigma_{f(v)}(d_v f(w_1),\dots,d_v f(w_{n-1})). 
\end{equation*} It holds that $d_v f(w_i) = J_i(r)$ where $J_i$ is a Jacobi field along the geodesic $\gamma_{o,v}$ with initial values $J_i(0) = d_v \pi(w_i)$ and $D_t J_i(0) = \cm(w_i)$. We get that \begin{equation*} \abs{\mu(v)} \leq \prod_{i=1}^{n-1} \abs{d_v f(w_i)} = \prod_{i=1}^{n-1} \abs{J_i(r)}. \end{equation*} Since the tangent vectors $w_i$ lie in $\mathcal{V}(o,v)$ we have $\abs{J_i(0)}_g = 0$ and $\abs{D_t J_i(0)}_g = |w_i|_{\sasaki{g}} = 1$, and the estimates for the volume of $\sphere{o}{r}$ then follow from Corollary~\ref{cor:jacobi-estimates}. \end{proof} \section{Proof of the main theorems}\label{sec:proofs} We begin by introducing some useful notation related to operators on the sphere bundle and spherical harmonics. One can find more details in~\cite{GK80},~\cite{DS11} and~\cite{PSU15}. We prove the main theorems of this work at the end of this section. The norm $\norm{\,\cdot\,}$ in this section will always be the $L^2(SM)$-norm. We define the Sobolev space $H^1(SM)$ as the set of all $u \in L^2(SM)$ for which $\norm{u}_{H^1(SM)} < \infty$, where \begin{equation*} \begin{split} \norm{u}_{H^1(SM)} &= {\left(\norm{u}^2 + \norm{{\nabla_{SM}} u}^2\right)}^{1/2}\\ &= {\left(\norm{u}^2 + \norm{Xu}^2 + \norm{\overset{h}{\nabla} u}^2 + \norm{\overset{v}{\nabla} u}^2 \right)}^{1/2}. \end{split} \end{equation*} Let $C_c^\infty(S M)$ denote the smooth compactly supported functions on $S M$. It is well known that if $N$ is a complete Riemannian manifold, then $C_c^\infty(N)$ is dense in $H^1(N)$ (see \cite[Satz 2.3]{Eic88}). By Lemma~\ref{lem:SMcomplete} $SM$ is complete when $M$ is complete. Hence $C^{\infty}_c(SM)$ is dense in $H^1(SM)$. For the following facts see~\cite{PSU15}. The vertical Laplacian $\Delta : C^\infty(SM) \to C^\infty(SM)$ is defined as the operator \begin{equation*} \quad \Delta \mdef -\overset{v}{\diver}\overset{v}{\nabla}. 
\end{equation*} The Laplacian $\Delta$ has eigenvalues $\lambda_k = k(k+n-2)$ for $k =0,1,2,\dots$, and its eigenfunctions are homogeneous polynomials in $v$. One has an orthogonal eigenspace decomposition \begin{equation*} L^2(SM) = \bigoplus_{k \geq 0} H_k(SM), \end{equation*} where $H_k(SM) \mdef \{ f \in L^2(SM) \,; \Delta f = \lambda_k f \}$. We define $\Omega_k = H_k(SM) \cap H^1(SM)$. In particular, by Lemma \ref{lma:norm_sum} below any $u \in H^1(SM)$ can be written as \begin{equation*} u = \sum_{k=0}^\infty u_k, \quad u_k \in \Omega_k, \end{equation*} where the series converges in $L^2(SM)$. One can split the geodesic vector field in two parts, $X = X_+ + X_-$, so that (by Lemma \ref{lma:norm_sum}) $X_+: \Omega_k \to H_{k+1}(SM)$ and $X_-: \Omega_k \to H_{k-1}(SM)$. The next lemma gives an estimate for $X_{\pm} u$ in terms of $Xu$ and $\overset{h}{\nabla} u$. \begin{lemma}\label{lma:norm_sum} Suppose $u \in H^1(SM)$. Then $X_{\pm} u \in L^2(SM)$ and \begin{equation*} \norm{X_+ u}^2 + \norm{X_- u}^2 \leq \norm{Xu}^2 + \norm{\overset{h}{\nabla} u}^2. \end{equation*} Moreover, for each $k \geq 0$ one has $u_k \in H^1(SM)$, and there is a sequence $(u_k^{(j)})_{j=1}^{\infty} \subset C^{\infty}_c(SM) \cap H_k(SM)$ with $u_k^{(j)} \to u_k$ in $H^1(SM)$ as $j \to \infty$. \end{lemma} \begin{proof} Let $u \in C_c^\infty(SM)$. One has the decomposition \begin{equation*} \overset{h}{\nabla} u = \overset{v}{\nabla} \left[\sum_{l=1}^\infty \left( \frac{1}{l}X_+u_{l-1} - \frac{1}{l+n-2}X_-u_{l+1}\right) \right] + Z(u) \end{equation*} where $Z(u)$ is such that $\overset{v}{\diver} \,Z(u) = 0$ (see~\cite[Lemma 4.4]{PSU15}). Hence \begin{align*} \norm{\overset{h}{\nabla} u}^2 &= \sum_{l=1}^\infty \left(l(l+n-2) \norm{\frac{1}{l}X_+ u_{l-1} - \frac{1}{l+n-2} X_- u_{l+1}}^2\right) + \norm{Z(u)}^2\\ &= \sum_{l=1}^\infty \left(\frac{l+n-2}{l}\norm{X_+ u_{l-1}}^2 - 2 \br{X_+ u_{l-1},X_- u_{l+1}} + \frac{l}{l+n-2}\norm{X_- u_{l+1}}^2\right) + \norm{Z(u)}^2. 
\end{align*} We also have \begin{align*} \norm{Xu}^2 &= \norm{X_- u_1}^2 + \sum_{l=1}^\infty \left(\norm{X_+ u_{l-1} + X_- u_{l+1}}^2\right) \\ &= \norm{X_- u_1}^2 + \sum_{l=1}^\infty \left(\norm{X_+ u_{l-1}}^2 + 2 \br{X_+ u_{l-1},X_- u_{l+1}} +\norm{X_- u_{l+1}}^2\right) \end{align*} by the definition of $X_+$ and $X_-$. Adding up these estimates gives that \begin{equation*} \norm{Xu}^2 + \norm{\overset{h}{\nabla} u}^2 = \norm{Z(u)}^2 + \norm{X_- u_1}^2 + \sum_{l=1}^\infty \left(A(n,l) \norm{X_+ u_{l-1}}^2 + B(n,l) \norm{X_- u_{l+1}}^2 \right) \end{equation*} where $A(n,l) = 2+\frac{n-2}{l}$ and $B(n,l) = 1+\frac{l}{l+n-2}$. Since $A(n,l) \geq 1$ and $B(n,l) \geq 1$ for all $l = 1,2,\dots$ and $n \geq 2$, the estimate for $\norm{X_+ u}^2 + \norm{X_- u}^2$ follows when $u \in C^{\infty}_c(SM)$, and it extends to $H^1(SM)$ by density and completeness. Moreover, if $u \in C^{\infty}_c(SM)$ and if $k \geq 0$, then the triangle inequality $\norm{X u_k} \leq \norm{X_+ u_k} + \norm{X_- u_k}$ and orthogonality imply that \[ \norm{u_k} + \norm{X u_k} + \norm{\overset{v}{\nabla} u_k} \leq \norm{u} + \norm{X_+ u} + \norm{X_- u} + \norm{\overset{v}{\nabla} u}. \] We may also estimate $\overset{h}{\nabla} u_k$ by \cite[Proposition 3.4]{PSU15} and orthogonality to obtain \[ \norm{\overset{h}{\nabla} u_k}^2 \leq (2k+n-1) \norm{X_+ u_k}^2 + (\sup_M K) \norm{\overset{v}{\nabla} u_k}^2 \leq C_k (\norm{X_+ u}^2 + \norm{\overset{v}{\nabla} u}^2). \] It follows from the first part of this lemma that \[ \norm{u_k}_{H^1(SM)} \leq C_k \norm{u}_{H^1(SM)}, \qquad u \in C^{\infty}_c(SM). \] This extends to $u \in H^1(SM)$ by density and completeness. Finally, if $u \in H^1(SM)$ and the sequence $(u^{(j)}) \subset C^{\infty}_c(SM)$ satisfies $u^{(j)} \to u$ in $H^1(SM)$, then also $u^{(j)}_k \to u_k$ in $H^1(SM)$ by the above inequality. \end{proof} \begin{corollary} \label{cor:chap4cor2} Suppose $u \in H^1(SM)$. Then \begin{equation*} \lim_{k\to\infty} \norm{X_+ u_k}_{L^2(SM)} = 0. 
\end{equation*} \end{corollary} \begin{proof} By Lemma~\ref{lma:norm_sum} one has \[ \norm{X_+ u}^2 = \sum_{k=0}^{\infty} \,\norm{X_+ u_k}^2 < \infty \] which implies the claim. \end{proof} \begin{lemma}\label{lma:iteration} Let $u \in H^1(SM)$ and $k \geq 1$. Then one has that \begin{equation*} \norm{X_- u_k} \leq D_n(k) \norm{X_+ u_k} \end{equation*} where \begin{equation*}\begin{split} D_2(k) &= \begin{cases} \sqrt{2}, & k = 1 \\ 1, & k \geq 2,\end{cases} \\ D_3(k) &= {\left[1+\frac{1}{{(k+1)}^2(2k-1)}\right]}^{1/2} \\ D_n(k) &\leq 1 \quad \text{for $n \geq 4$.} \end{split} \end{equation*} \end{lemma} \begin{proof} This result was shown for smooth compactly supported functions in~\cite[Lemma 5.1]{PSU15}. The result follows for $u\in H^1(SM)$ by an approximation argument using Lemma~\ref{lma:norm_sum}. \end{proof} The estimates from Section~\ref{sec:estimates} allow us to prove the following result: \begin{lemma}\label{lma:u-H1} Suppose that $f$ is a symmetric $m$-tensor field and either of the following holds: \begin{enumerate} \item[(a)] $-K_0 \leq K \leq 0$, $K_0 > 0$ and $f \in E_\eta^1(M)$ for $\eta > \frac{(n+1)\sqrt{K_0}}{2}$ \item[(b)] $\mathcal{K} \in P_\kappa(M)$ for $\kappa > 2$ and $f \in P_\eta^1(M)$ for $\eta > \frac{n+2}{2}$. \end{enumerate} Then $u^f \in H^1(SM)$. \end{lemma} \begin{proof} We prove only (a), the proof for (b) is similar. By Lemma~\ref{lma:uf-gradient-estimates} we have that $u^f \in W^{1,\infty}(SM)$. Lemma~\ref{lma:uf-estimate} gives that \begin{equation*} |u^f(x,v)| \leq C(1+d_g(x,o))\e^{-\eta d_g(x,o)} \end{equation*} on $SM$. 
By using the coarea formula with Lemma~\ref{lma:sphere-volume} we get \begin{equation*} \begin{split} \int_{SM} |u^f(x,v)|^2 \, \text{d} V_{\sasaki{g}} &\leq C \int_{M} (1+d_g(x,o))^2 \e^{-2\eta d_g(x,o)} \, \text{d} V_g \\ &= C\int_0^\infty (1+r)^2 \e^{-2\eta r} \left(\int_{\sphere{o}{r}} \, \text{d} S\right) \, \text{d} r \\ & \leq C\int_0^\infty (1+r)^2 \e^{-2\eta r}\e^{(n-1)\sqrt{K_0}r} \text{d} r.\\ \end{split} \end{equation*} The last integral above is finite and hence $u^f \in L^2(SM)$. Similar calculations using Lemmas \ref{lma:Xuf} and \ref{lma:uf-gradient-estimates} show that $Xu^f, \overset{h}{\nabla} u^f$ and $\overset{v}{\nabla} u^f$ all have finite $L^2$-norms under the assumption $\eta > \frac{(n+1)\sqrt{K_0}}{2}$, and therefore the $H^1$-norm of $u^f$ is finite. \end{proof} We are ready to prove our main theorems. \begin{proof}[Proof of Theorems~\ref{thm_main1} and~\ref{thm_main2}] Suppose that the $m$-tensor field $f$ and the sectional curvature $K$ satisfy the assumptions of Theorem~\ref{thm_main1} or~\ref{thm_main2}. Recall that we identify $f$ with a function on $SM$ as described in Section \ref{subsec:tensors}. Then $u=u^f$ is in $H^1(SM)$ by Lemma~\ref{lma:u-H1}, and Lemma~\ref{lma:Xuf} states that $Xu = -f$ on $SM$. Note also that $f \in H^1(SM)$, which follows as in the proof of Lemma \ref{lma:u-H1}. Since $f$ is of degree $m$ it has a decomposition \begin{equation*} f = \sum_{k=0}^m f_k,\quad f_k \in \Omega_k, \end{equation*} and $u$ has a decomposition \begin{equation*} u = \sum_{k=0}^\infty u_k, \quad u_k \in \Omega_k. \end{equation*} We first show that $u_k = 0$ for $k \geq m$. From $Xu = -f$ it follows that for $k \geq m$ we have \begin{equation*} X_+ u_k + X_- u_{k+2} = 0. \end{equation*} This implies that \begin{equation} \label{xplusminus_solution_inequality} \norm{X_+ u_k} \leq \norm{X_- u_{k+2}}, \qquad k \geq m. \end{equation} Fix $k \geq m$. 
We apply Lemma~\ref{lma:iteration} and the inequality \eqref{xplusminus_solution_inequality} iteratively to get \begin{equation*} \begin{split} \norm{X_- u_k} &\leq D_n(k) \norm{X_+ u_k} \\ &\leq D_n(k) \norm{X_- u_{k+2}}\\ &\leq D_n(k) D_n(k+2) \norm{X_+ u_{k+2}} \\ &\leq \left[ \prod_{l=0}^N D_n(k+2l) \right] \norm{X_+ u_{k+2N}}. \end{split} \end{equation*} By Corollary~\ref{cor:chap4cor2} \begin{equation*} \lim_{l \to \infty }\norm{X_+ u_{k+2l}} = 0. \end{equation*} Moreover, as stated in \cite[Theorem 1.1]{PSU15}, one has \begin{equation*} \prod_{l=0}^\infty D_n(k+2l) < \infty. \end{equation*} Thus we obtain that \begin{equation*} \norm{X_- u_k} = \norm{X_+ u_k} = 0. \end{equation*} This gives $Xu_k = 0$, which implies that $t \mapsto u_k(\phi_t(x,v))$ is a constant function on $\mR$ for any $(x,v) \in SM$. Since $u$ decays to zero along any geodesic we must have $u_k = 0$, and this holds for all $k \geq m$. It remains to verify that the equation $Xu = -f$ on $SM$ together with the fact that $u = \sum_{k=0}^{m-1} u_k$ imply the conclusions of Theorems \ref{thm_main1} and \ref{thm_main2}. This is done as in \cite[end of Section 2]{PSU13}. Suppose that $m$ is odd (the case where $m$ is even is similar). The function $f$ is a homogeneous polynomial of order $m$ in $v$ and hence its Fourier decomposition has only odd terms, i.e. \begin{equation*} f = f_m + f_{m-2} + \cdots + f_1. \end{equation*} It follows that the decomposition of $u$ has only even terms, \begin{equation*} u = u_{m-1} + u_{m-3} + \cdots + u_{0}. \end{equation*} By taking tensor products with the metric $g$ and symmetrizing it is possible to raise the degree of a symmetric tensor: if $F \in \tf{m}(M)$, then $\alpha F \mdef \sigma(F \otimes g) \in \tf{m+2}(M)$. One has $\lambda(\alpha F) = \lambda(F)$, since $\lambda(g)$ has the constant value $1$ on $SM$. We define $h \in \tf{m-1}(M)$ by \begin{equation*} h \mdef -\sum_{j=0}^{(m-1)/2} \alpha^j (\lambda^{-1}(u_{m-1-2j})). 
\end{equation*} Then $\lambda(h) = -u$, so equation \eqref{eq:sn=X} gives $\lambda(\sigma \nabla h) = \lambda(f)$, which implies $f = \sigma \nabla h$. \end{proof} \bibliographystyle{alpha}
https://arxiv.org/abs/dg-ga/9412005
Symplectic toric orbifolds
A symplectic toric orbifold is a compact connected orbifold $M$, a symplectic form $\omega$ on $M$, and an effective Hamiltonian action of a torus $T$ on $M$, where the dimension of $T$ is half the dimension of $M$. We prove that there is a one-to-one correspondence between symplectic toric orbifolds and convex rational simple polytopes with positive integers attached to each facet.
\section{Introduction} The main purpose of this paper is to demonstrate the following theorem.\\ \noindent {\bf Theorem } {\em Symplectic toric orbifolds are classified by convex rational simple polytopes with a positive integer attached to each facet.}\\ At the same time, however, we would like to build a foundation for further work on symplectic orbifolds. For this reason, we will try to state lemmas in their most natural generality, rather than restricting to our special case. The above theorem generalizes a theorem of Delzant \cite{Del} to the case of orbifolds. He proved that symplectic toric {\em manifolds} are classified by the image of their moment maps, that is, by a certain class of rational polytopes. It is easy to see that additional information is necessary in our case: \begin{example}{\em Given positive integers $r$ and $s$, let $l$ be their greatest common divisor. Let $K = \Z/(l\Z) \times S^1$ act on $\C^2$ by $(\xi,\lambda) \cdot (x,y) = (\xi \lambda^n x, \lambda^m y)$, where $n = r/l$ and $m = s/l$. Let $(M,\omega)$ be the symplectic reduction of $\C^2$ with its standard symplectic form at a positive number. Then $T = (S^1)^2/K$ has an effective Hamiltonian action on $(M,\omega)$. The image of the moment map is always a line interval, but these spaces are not isomorphic. Although $M$ is (topologically) a two-sphere, it has two orbifold singularities, which look locally like $\C/(\Z/r\Z)$ and $\C/(\Z/s\Z)$.} \end{example} We begin by defining a few terms. A {\em symplectic toric orbifold} is a compact connected orbifold $M$, a symplectic form $\omega$ on $M$, and an effective Hamiltonian action of a torus $T$ on $M$, where the dimension of $T$ is half the dimension of $M$. This is not the definition used in algebraic geometry. We will see that every symplectic toric orbifold can be given the structure of a projective toric orbifold. Two symplectic toric orbifolds are {\em isomorphic} if they are equivariantly symplectomorphic. 
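To make the example above concrete in the simplest non-trivial case (a routine specialization; the normalization of the moment map below is one standard choice): take $r = 1$ and $s = n > 1$, so that $l = 1$ and $K \cong S^1$ acts on $\C^2$ by $\lambda \cdot (x,y) = (\lambda x, \lambda^n y)$, with moment map \begin{equation*} \mu_K(x,y) = \tfrac{1}{2}\left(|x|^2 + n|y|^2\right). \end{equation*} The reduced space at a positive value is then a ``teardrop'' orbifold: it is smooth except at the single point coming from $x = 0$, whose structure group is $\Z/(n\Z)$. Accordingly, the image of the moment map for the residual torus $T$ is an interval whose two facets carry the integers $1$ and $n$.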
Let $\ft$ be a vector space with a lattice $\ell$; let $\ft^*$ denote the dual vector space. A convex polytope $\Delta \subset \ft^*$ is {\em rational} if the hyperplanes supporting its facets are defined by the elements of the lattice $\ell$, that is, $$\Delta = \cap_{i=1}^N \{\alpha \in \ft^* \mid \left< \alpha, y_i \right> \geq \eta_i \} $$ for some $y_i \in \ell$ and $\eta_i \in \R$. Recall that a {\em facet} is a face of codimension one. An $n$-dimensional polytope is {\em simple} if exactly $n$ facets meet at every vertex. For this paper, we shall adopt the convenient but non-standard abbreviation that a {\em weighted polytope} is a convex rational simple polytope plus a positive integer attached to each facet. Two weighted polytopes are {\em isomorphic} if they differ by the composition of a translation with an element of $\Sl(\ell) \equiv \Sl(n,\Z)$ such that the corresponding facets have the same integers attached to them. Finally, let $x$ be a point in an orbifold $M$, and let $({ \tilde{U} },\Gamma, \phi)$ be a local uniformizing system for a neighborhood $U$ of $x$ (see \cite{Sa}); then the {\em (orbifold) structure group} of $x$ is the isotropy group of $\tilde{x} \in { \tilde{U} }$, where $\phi(\tilde{x}) = x$. This group is well defined. We are now ready to give a precise statement of our main theorem: \begin{theorem} \label{classification} For a symplectic toric orbifold $(M,\omega,T)$ the image of the moment map is a rational simple polytope. A positive integer $n$ is attached to every open facet of this polytope as follows: for any $x$ in the preimage of the facet the structure group of $x$ is $\Z/n\Z$. Two symplectic toric orbifolds are isomorphic if and only if their associated weighted polytopes are isomorphic. Every weighted polytope occurs as the image of a symplectic toric orbifold. 
\end{theorem} \begin{remark}{\em We will see in section~\ref{surj} that all symplectic toric orbifolds have K\"{a}hler structures.} \end{remark} \noindent {\sc Acknowledgments}\\ \vspace{-12pt} At the conference {\em Applications of Symplectic Geometry} at the Newton Institute, 10/31/94 - 11/11/94, we learned that R. de Souza and E. Prato have been working independently on the same problem. It is a pleasure to thank Chris Woodward for many useful discussions. In particular, section~\ref{local to global} is joint work with Chris Woodward. \\ \section{Group actions on orbifolds} In this section, we recall facts about the action of groups on orbifolds, following Haefliger and Salem \cite{HS}. Although some theorems about group actions on manifolds hold, there are a few differences. We begin with a few basic definitions. An {\em orbifold} $M$ is a topological space $|M|$, plus an {\em atlas} of {\em uniformizing charts} $({ \tilde{U} },\Gamma,\varphi)$, where ${ \tilde{U} }$ is an open subset of $\R^n$, $\Gamma$ is a finite group which acts linearly on ${ \tilde{U} }$ and fixes a set of codimension at least two, and $\varphi: { \tilde{U} } \to |M|$ induces a homeomorphism from ${ \tilde{U} }/\Gamma$ to $U \subset |M|$. Just as for manifolds, these charts must cover $|M|$; they are also subject to certain compatibility conditions; and there is a notion of when two atlases of charts are equivalent. For more details, see Satake \cite{Sa}. Given $x$, we may choose a uniformizing chart $({ \tilde{U} },\Gamma,\varphi)$ such that $\varphi^{-1}(x)$ is a single point which is fixed by $\Gamma$. Given $x \in M$, denote the {\em uniformizing tangent space at $x$} by ${ \tilde{T} }_xM$; it is $T_{\varphi^{-1}(x)}{ \tilde{U} }$: the tangent space to ${ \tilde{U} }$ at $\varphi^{-1}(x)$. Then $T_x M$, the fiber of the tangent bundle of $M$ at $x$ is ${ \tilde{T} }_x M/\Gamma$. 
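As a simple illustration of these definitions (a standard example, not drawn from the classification above): let $\Gamma = \Z/(n\Z)$ act on ${ \tilde{U} } = \C$ by multiplication by $n$-th roots of unity, and let $\varphi: { \tilde{U} } \to \C/\Gamma$ be the quotient map. Then $({ \tilde{U} },\Gamma,\varphi)$ is a uniformizing chart for the orbifold $M = \C/\Gamma$; the set fixed by $\Gamma$ is the origin, which has codimension two. At $x = \varphi(0)$ the structure group is all of $\Gamma$, the uniformizing tangent space is ${ \tilde{T} }_x M = \C$, and $T_x M = \C/\Gamma$, while every other point has trivial structure group.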
A vector field $\xi$ on $M$ is a $\Gamma$ invariant vector field $\tilde{\xi}$ on each uniformizing chart $({ \tilde{U} },\Gamma,\phi)$; of course, these must agree on overlaps. Similar definitions apply to differential forms, etc. Let $M$ and $N$ be orbifolds with atlases $\tcU$ and $\tcV$. A {\em map of orbifolds} $f: M \to N$ is a map $f : \tcU \to \tcV$, and an equivariant map $\tilde{f}_{ \tilde{U} }: ({ \tilde{U} },\Gamma) \to ({ \tilde{V} },\Upsilon)$ for each $({ \tilde{U} },\Gamma) \in \tcU$, where $({ \tilde{V} },\Upsilon) = f({ \tilde{U} },\Gamma)$. These $\tilde{f}_{ \tilde{U} }$ are subject to a compatibility condition which insures, for instance, that $f$ induces a continuous map of the underlying spaces. Additionally, there is a notion of when two such maps are equivalent. Again, see \cite{Sa} for details. We are now ready to recall the definition of a group action on an orbifold. \begin{definition}{\rm Let $G$ be a Lie group. A {\em smooth action} $a$ of $G$ on an orbifold $M$ is a smooth orbifold map $a: G\times M \to M$ satisfying the usual group laws, that is, for all $g_1, g_2 \in G$ and $x \in M$ $$ a(g_1, a(g_2, x )) = a(g_1 g_2, x ) \quad \hbox{and} \quad a(1_G, x) = x, $$ where $1_G$ is the identity element of $G$. } \end{definition} Technically, by ``='', we mean ``are equivalent as maps of orbifolds.'' The action $a$ induces a continuous action $|a|$ of $G$ on the underlying topological space $|M|$, $$ |a|: G\times |M| \to |M|. $$ In particular, this definition states that for every $g_0 \in G$ and $x_0 \in M$ there are neighborhoods $W$ of $g_0$ in $G$, $U$ of $x_0$ in $M$ and $U'$ of $a(g_0,x_0) $ in $M$, charts $(\tilde{U}, \Gamma, \varphi) $ and $(\tilde{U}', \Gamma ', \varphi') $ and a smooth map $\tilde{a}: W\times \tilde{U} \to \tilde{U}'$ such that $\varphi' (\tilde{a}(g, \tilde{x})) =|a|(g, \varphi (\tilde{x}))$ for all $(g, \tilde{x}) \in W\times \tilde{U}$. 
Note that $\tilde{a}$ is not unique; it is defined up to composition with elements of the orbifold structure groups $\Gamma $ of $x_0$ and $\Gamma' $ of $g_0\cdot x_0$. If $g_0 = 1_G$, the identity of $G$, then we may assume ${ \tilde{U} } \subset { \tilde{U} }'$, and we can choose $\tilde{a}$ such that $\tilde{a}(1_G, \tilde{x}) = \tilde{x}$. Then $\tilde{a}$ induces a local action of $G$ on ${ \tilde{U} }$. If in addition $x_0$ is {\em fixed} by the action of $G$ and $G$ is compact, then the local action of $G$ on ${ \tilde{U} }$ generates an action of ${ \tilde{G} }$ on ${ \tilde{V} } \subset { \tilde{U} }$, where $\tilde{G}$ is a cover of the identity component of $G$. Note that the actions of $\tilde{G}$ and $\Gamma $ on ${ \tilde{V} }$ {\em commute}. More generally one can show that for a fixed point $x$ with structure group $\Gamma$ there exists a uniformizing chart $({ \tilde{U} },\Gamma,\phi)$ for a neighborhood $U$ of $x$, an exact sequence of groups $$ 1\to \Gamma \to \hat{G} \stackrel{\pi}{\to} G \to 1, $$ and an action of $\hat{G}$ on ${ \tilde{U} }$ such that the following diagram commutes: $$ \begin{CD} \hat{G} \times { \tilde{U} } @>>> { \tilde{U} }\\ @VVV @VVV\\ G \times U @>>> U \end{CD} $$ The extension $\hat{G}$ of $G$ depends on $x$ and, in particular, is not globally defined. For any $\xi \in \fg$, there is an associated vector field $\xi_M$ on $M$. On each ${ \tilde{U} }$, it is defined via the local action of $G$ on ${ \tilde{U} }$. The map $\pi: \hat{G} \to G$ induces an isomorphism of Lie algebras $\pi: \hat{\fg} \to \fg$. Locally, for any $\hat{\xi} \in \hat{\fg}$, $\hat{\xi}_{ \tilde{U} } = \pi(\hat{\xi})_M |_{ \tilde{U} }$. \begin{remark}{\em If $\hat\ft$ is a Cartan subalgebra of $\hat{\fg}$, then $\ft = \pi(\hat{\ft})$ is also a Cartan subalgebra. Define a lattice $\hat{\ell}$ by $\hat{\ell} = \{ \hat{\xi} \in \hat{\ft} \mid \exp(\hat{\xi}) = 1\}$, and define $\ell \subset \ft$ analogously. 
Then $\pi: \hat{\ell} \to \ell$ is not an isomorphism; $\pi(\hat{\xi})$ is in $\ell$ exactly if $\exp(\hat{\xi})$ is in $\Gamma$. Let $\hat{\alpha}_i \in \hat{\ell}^*$ for $i \in I$ be the weights for the action of $\hat{G}$ on $\tilde{U}$. Although $\alpha_i = \pi(\hat{\alpha}_i) \in \ft^*$ may not lie in the weight lattice $\ell^*$, $|\Gamma| \alpha_i$ does lie in $\ell^*$, where $|\Gamma|$ is the order of $\Gamma$. We will call $\alpha_i$ the {\em orbi-weights} for the action of $G$ on $U$. }\end{remark} If $G$ is a compact Lie group acting on an orbifold $M$, we can define local slices to orbits in the differentiable category. Since the orbit $G\cdot x$ is a {\em submanifold} of $M$, ${ \tilde{T} }_x (G\cdot x)$ is a $\Gamma$ invariant subspace of ${ \tilde{T} }_x M$. We define the {\em slice} at $x$ for the action of $G$ on $M$ to be the orbifold $W/\Gamma$, where $W= { \tilde{T} }_x M / { \tilde{T} }_x (G\cdot x)$. We may also identify $W$ with the orthogonal complement to ${ \tilde{T} }_x (G\cdot x)$ in ${ \tilde{T} }_x M$ with respect to some invariant metric. It is then not hard to see that the slice theorem takes the following form. \begin{proposition}{\rm ({\bf Slice theorem})} Suppose a compact Lie group $G$ acts on an orbifold $M$ and $G\cdot x$ is an orbit of $G$. Then a $G$ invariant neighborhood of the orbit is equivariantly diffeomorphic to a neighborhood of the zero section in the associated orbi-bundle $G\times _{G_x} W/\Gamma$, where $G_x$ is the isotropy group of $x$ with respect to the action of $G$, $\Gamma$ is the orbifold structure group of $x$ and $W = { \tilde{T} }_x M/ { \tilde{T} }_x (G\cdot x)$. \end{proposition} \begin{proof} This is completely analogous to the slice theorem for actions on manifolds, and follows immediately from the fact that metrics can be averaged over compact Lie groups. \end{proof}\vspace{-5mm} \begin{remark}{\em As in the smooth case, the compactness of the group $G$ is not necessary for the existence of slices. 
It is enough to require that the induced action on the underlying topological space is proper.} \end{remark} For connected $G$, it follows from the existence of slices that the fixed point set is a suborbifold. Therefore, the decomposition of $M$ into infinitesimal orbit types is a stratification into suborbifolds. On the other hand, $M^G$ need not be a suborbifold in general. \begin{example}{\em Let $\Gamma = \Z/(2\Z)$ act on $\C^2$ by sending $(x,y)$ to $(-x,-y)$. Let $G = \Z/(2\Z)$ act on $\C^2/\Gamma$ by sending $[x,y]$ to $[x,-y]$. Then $M^G = \{[x,0]\} \cup \{[0,y]\} = \C/\Gamma \cup \C/\Gamma$.} \end{example} Consequently the decomposition of an orbifold according to orbit type need not be a stratification. Fortunately the following lemma still holds. \begin{lemma}\label{lem.princ.orbittype} If $G$ is a compact Lie group acting on a connected orbifold $M$ then there exists an open dense subset of $M$ consisting of points with the same orbit type. \end{lemma} \begin{proof} We first decompose the orbifold into the open dense set of smooth points $M_{\text{smooth}}$ and the set of singular points. Since we assume that all the singularities have codimension 2 or greater, $M_{\text{smooth}}$ is connected. A smooth group action preserves this decomposition. Since $G$ is compact, the action of $G$ on $M_{\text{smooth}}$ has a principal orbit type (see for example Theorem~4.27 in \cite{Kaw}). The set of points of this orbit type is open and dense in $M_{\text{smooth}}$, hence open and dense in $M$. \end{proof} \begin{corollary}\label{cor.loc_free} If a torus $T$ acts effectively on a connected orbifold $M$ then the action of $T$ is free on a dense open subset of $M$. \end{corollary} \section{Symplectic local normal forms} In this section, we write down normal forms for the neighborhoods of orbits of compact Lie groups $G$ acting symplectically on $(M,\omega)$; that is, we classify such neighborhoods up to equivariant symplectomorphisms.
We also point out consequences of this form, both generally and in the special case of symplectic toric orbifolds. A {\em symplectic orbifold} is an orbifold $M$ with a closed nondegenerate $2$-form $\omega$. A group $G$ acts {\em symplectically} on $M$ if $G$ preserves $\omega$. A moment map $\phi: M \to \fg^*$ is a map such that for each ${ \tilde{U} }$, the map $\tilde{\phi}$ is a moment map for the local group action of $G$ on ${ \tilde{U} }$. If there is a moment map for $G$, we say that $M$ is a {\em Hamiltonian $G$ space}. If $G$ is a compact Lie group which acts symplectically on $(M,\omega)$, we can define the {\em symplectic slice} at a point $x$. The $2$-form $\omega$ induces a non-degenerate antisymmetric bilinear form, which we again denote by $\omega$, on ${ \tilde{T} }_xM$. Let ${ \tilde{T} }_x (G\cdot x)^\omega $ be the symplectic perpendicular to the tangent space of $G \cdot x$ with respect to $\omega$. The quotient $$ V = { \tilde{T} }_x (G\cdot x)^{\omega }/( { \tilde{T} }_x (G\cdot x)\cap { \tilde{T} }_x (G\cdot x)^{\omega }) $$ is naturally a symplectic vector space. The {\em symplectic slice} at $x$ is the symplectic orbifold $V/\Gamma,$ where $\Gamma$ is the structure group of $x$. Notice that ${ \tilde{T} }_x(G \cdot x)$ and ${ \tilde{T} }_x(G \cdot x)^\omega$ are both $\hat{G}_x$ invariant, where $\pi: \hat{G}_x \to G_x$ is an extension of $G_x$ by $\Gamma$. Therefore there is a {\em symplectic linear action} of $G_x$ on $V/\Gamma$, that is, a symplectic linear action of $\hat{G}_x$ on $V$. \begin{remark} {\em \label{rem_moment} Let $\pi: \hat{\fg}_x \to \fg_x$ be the induced map of Lie algebras. Let $\hat{\phi}_V: V \to \hat{\fg}_x^*$ be the moment map for the action of $\hat{G}_x$ on $V$.
Then $\phi_{V/\Gamma}: V/\Gamma \to \fg_x^*$, the moment map for the action of $G_x$ on $V/\Gamma$, is given by the following diagram: $$ \begin{CD} V @>>> V/\Gamma \\ @V{\hat{\phi}_V}VV @VV{\phi_{V/\Gamma}}V\\ \hat{\fg}_x^* @<\pi^*<< \fg_x^* \end{CD} $$ Since $\pi^*$ is a vector space isomorphism such that $\pi^*(\ell^*) \subset \hat{\ell}^*$, $\hat{\phi}_V(V)$ and $\phi_{V/\Gamma}(V/\Gamma)$ are isomorphic as subsets of vector spaces. In particular, if $G$ is abelian, then $\phi_{V/\Gamma}(V/\Gamma)$ is the rational convex polyhedral cone spanned by the orbi-weights for the action of $G_x$ on $V/\Gamma$. However, since $\pi^*$ need not be an isomorphism of lattices, $\hat{\phi}_V(V) \cap \hat{\ell}^*$ and $\phi_{V/\Gamma}(V/\Gamma) \cap \ell^*$ may not be isomorphic as semigroups. } \end{remark} As in the case of manifolds, the {\em differential} slice at $x$ is isomorphic, as a $G_x$ space, to the product $$ (\fg/\fg_x)^* \times V/\Gamma . $$ Thus, by the previous section, a neighborhood of the $G$ orbit of $x$ in $(M, \omega)$ is equivariantly diffeomorphic to a neighborhood of the zero section in the associated orbi-bundle $$ Y = G\times_{G_x} \left((\fg/\fg_x)^* \times V/\Gamma \right). $$ \begin{Lemma} \label{locsymp} Let $G\cdot x$ be an {\em isotropic} orbit in $(M, \omega)$. For every $G_x$ equivariant projection $A: \fg \to \fg_x$, there is a symplectic form on $Y$ such that \begin{enumerate} \item a neighborhood of the zero section in $Y$ is equivariantly symplectomorphic to a neighborhood of $G\cdot x$ in $M$, and \item the moment map $\Phi_Y : Y \to \fg^*$ is given by $$ \Phi _Y ([g, \eta, [v]])= Ad^\dagger (g) (\eta + A^* \phi _{V/\Gamma} ([v])),$$ where $(\fg/\fg_x)^*$ is identified with the annihilator of $\fg_x$ in $\fg^*$, $A^*: \fg^*_x \to \fg^* $ is dual to $A$, and $\phi_{V/\Gamma} : V/\Gamma \to \fg_x^*$ is the moment map for the action of $G_x$ on $V/\Gamma$, as in remark \ref{rem_moment}.
\end{enumerate} \end{Lemma} In particular, if $G$ is abelian then $G \cdot x$ is always isotropic and $Ad^\dagger(g)$ is trivial; the moment map takes the form $$ \Phi _Y ([g, \eta, [v]])= \eta + A^* \phi _{V/\Gamma} ([v]). $$ \begin{Proof} The construction is standard in the smooth case (cf.\ \cite{GS}); we simply adapt it for orbifolds. The group $G_x$ acts on $G$ by $g_x \cdot g = g g_x^{-1}$; this lifts to a symplectic action on $T^*G$. The corresponding diagonal action of $G_x$ on $T^* G \times V/\Gamma$ is Hamiltonian. An equivariant projection $A: \fg \to \fg_x$ defines a left $G$-invariant connection 1-form on the principal $G_x$ bundle $G \to G/G_x$, and thereby identifies $Y$ with the reduced space $(T^* G \times V/\Gamma)_0$, thus giving $Y$ a symplectic structure. The $G$ moment map on $T^* G \times V/\Gamma$ descends to a moment map for $Y$, giving the formula in $(2)$. The proof that the neighborhoods are equivariantly symplectomorphic reduces to a form of the equivariant relative Darboux theorem; it is identical to the proof in the smooth case. \end{Proof} \vspace{-5mm} \begin{remark}\label{uniqueness remark}{\em The model embedding $i:G\cdot x \hookrightarrow Y$ is {\em unique} in the following sense. If $i': G\cdot x \hookrightarrow (N, \sigma)$ is any equivariant isotropic embedding into a symplectic $G$ orbifold $(N, \sigma)$ such that the symplectic slice at $i'(x)$ is the same as the symplectic slice $V/\Gamma$ of the model $Y$, then there exist a neighborhood $U$ of $i(G\cdot x)$ in $Y$, a neighborhood $U'$ of $i'(G\cdot x)$ in $N$ and a symplectic equivariant diffeomorphism $\psi :U \to U'$ such that $i' = \psi \circ i$. The proof of the existence of the map $\psi$ is again, essentially, a form of the equivariant relative Darboux theorem.} \end{remark} The following two lemmas are consequences of lemma \ref{locsymp} above.
\begin{lemma}\label{lem.loc_convex} If a torus $G$ acts on a symplectic orbifold in a Hamiltonian fashion, then the image under the moment map of a neighborhood of an orbit is a neighborhood of a point in a rational polyhedral cone. \end{lemma} \begin{lemma} If $G$ is connected, then $M^G$, the set of points which are fixed by $G$, is a symplectic suborbifold. \end{lemma} We are now ready to specialize to the case of symplectic toric orbifolds. \begin{lemma} \label{orbit} Let $(M,\omega,G)$ be a symplectic toric orbifold with moment map $\Phi_M: M \to \fg^*$. Then for any $x \in M$, the stabilizer of $x$ is connected. Moreover, there is a $G$ invariant neighborhood $U$ of $G \cdot x$ on which \begin{enumerate} \item $\Phi_M$ induces a homeomorphism from $U/G$ to $\Phi_M(U)$. \item the image $\Phi_M(U)$ is a neighborhood of a point in a simple rational polyhedral cone. \item a neighborhood of $G \cdot x$ is classified by $\Phi_M(U)$, plus a positive integer attached to each facet. \end{enumerate} \end{lemma} \begin{proof} Let $H$ be the stabilizer of $x$; choose a projection $A: \fg \to \fh $. By lemma~\ref{locsymp}, there is a neighborhood of $G \cdot x$ which is equivariantly symplectomorphic to a neighborhood of the zero section of the model space $Y= G\times _H ( (\fg/\fh)^* \times V/\Gamma)$, where $\Gamma$ is the orbifold structure group at $x$, and $V/\Gamma$ is the symplectic slice at $x$. Therefore, it suffices to prove the above claims for the model $Y$. Since $G$ is abelian, $H$ does not act on $(\fg/\fh)^*$. By assumption the action of $G$ on $M$ is effective and therefore, by corollary~\ref{cor.loc_free}, generically free. Therefore the action of $H$ on $V/\Gamma$ is generically free as well. Since the action of $\Gamma$ on $V$ is generically free, it follows that the action of $\hat{H}$, the extension of $H$ by $\Gamma$, on $V$ is generically free. Hence the symplectic representation $\hat{H} \to Sp (V, \omega _V)$ is faithful.
That is to say, we may think of $\hat{H}$ as a compact subgroup of $Sp (2h, \R)$, where $2h= \dim V$, hence as a subgroup of the unitary group $U(h)$, the maximal compact subgroup of the symplectic group. Also by assumption $\dim G = \frac{1}{2} \dim Y$. Consequently $\dim H = \frac{1}{2} \dim V = h$. The group $\tilde{H}$, the connected component of $1$ in $\hat{H}$, is a cover of the component of $1$ in $H$, hence is a torus of dimension $h$. Therefore, $\tilde{H}$ is a maximal torus of $U(h)$. Consequently we may identify $V$ with $\C ^h$ and $\tilde{H}$ with the standard torus $T^h$ of diagonal unitary matrices. Since $\tilde{H}$ commutes with the action of $\Gamma$ and since the centralizer of the maximal torus in $U(h)$ is the torus itself, $\Gamma$ must be a subgroup of $\tilde{H}$. In particular {\em $\Gamma$ is abelian}. We next argue that $H$ has to be connected. Since $H$ is abelian, all the elements of $H$ commute with the elements of the identity component of $H$. By continuity they lift to elements of $U(h)$ that commute with the elements of $\tilde{H}$, hence by the same argument as before, have to lie in $\tilde{H}$. Therefore $\hat{H}$ is connected and consequently $H$ itself is connected. The moment maps $\hat{\phi}_V : V \to \hat\fh^*$ and $\phi_{V/\Gamma} : V/\Gamma \to \fh^*$ are orbit maps. The moment map $\Phi_Y : Y \to \fg^*$ is given by $\Phi_Y ([g, \eta, [v]])= \eta + A^* \phi _{V/\Gamma} ([v]).$ Therefore $\Phi_Y$ is also an orbit map. Hence the original moment map $\Phi _M$ is an orbit map. Because the pair $(\hat{H},V)$ can be identified with $(T^h,\C^h)$, the image $\hat{\phi}_V(V) \subset \hat{\fh}^*$ is the positive orthant. Let $x_i = \pi(e_i) \in \ell$, where $e_i$ is a standard basis vector in $\hat\fh = \R^h$ and $\pi : \hat{\fh} \to \fh$ is the obvious projection (it is actually an isomorphism).
Since $(\pi^*)^{-1}(\hat{\phi}_V(V)) = \phi_{V/\Gamma}(V/\Gamma)$, $\phi_{V/\Gamma}(V/\Gamma)$ is the image of the positive orthant under $(\pi^*)^{-1}$. An easy computation shows that $\Phi_{Y}(Y) = \cap_{i=1}^h \{ \xi \in \fg^* \mid \left<\xi, x_i \right> \geq 0 \}$. Therefore, $\Phi_Y(Y)$ is a simple rational convex polyhedral cone. For $y = [g,\eta,[v]] \in Y$, notice that $\Phi_Y(y)$ lies in the interior of the $i$th facet exactly if $v_i = 0$ but no other component vanishes. Here we think of $v\in V$ as being the $h$ tuple $v= (v_1, \ldots, v_h)\in \C^h$. In this case, it is easy to check that the structure group of $y$ is $\Z/(m_i\Z)$, where $m_i$ is the length of $x_i$; we attach the integer $m_i$ to the $i$th facet. Finally, we must show that $Y$ is determined, up to equivariant symplectomorphism, by $\Phi_Y(Y)$ plus the positive integers attached to the facets. But $\fh$ is determined by the cone $\Phi_Y(Y)$; since $H$ is connected, $H$ is also determined by $\Phi_Y(Y)$. Moreover, the projection $\pi: \R^h \to \fh$ can be read off from the data, because $\pi(e_i) = m_i y_i$, where $m_i$ is the integer attached to the $i$th facet and $y_i$ is the unique primitive inward normal to the $i$th facet. This allows us to recover the structure group $\Gamma$ and thereby the symplectic slice. Since symplectically and equivariantly a neighborhood of an orbit $G\cdot x$ in $(M, \omega, G)$ is uniquely determined by the representation of the isotropy group of $x$ on the symplectic slice at $x$ (cf. remark~\ref{uniqueness remark}), we are done. \end{proof} \section{Morse Theory}\label{section.Morse} In this section, we extend Morse theory to orbifolds. Since orbifolds are stratified spaces, this is simply a special case of Morse theory on stratified spaces \cite{MTSS}. Nevertheless, it is an important special case which does not seem to be readily available in the literature. We need Morse theory for the following result.
\begin{Lemma} \label{onedim} Let $M$ be a connected compact $n$ dimensional orbifold, and $f: M \to \R$ be a Bott-Morse function with no critical suborbifold of index $1$ or $n-1$. Then $M_{(a,b)} = f^{-1}(a,b)$ is connected for all $a, b \in \R$. \end{Lemma} We will use this result in the next section to prove that the fibers of a torus moment map are connected, and that the image of a compact symplectic orbifold under a torus moment map is a convex polytope. There are two main theorems in Morse theory; both relate the Morse polynomial to the Poincar\'{e} polynomial. The first states that $\cm_i \geq \cp_i$, where $\cm_i$ is the number of critical points of index $i$, and $\cp_i$ is the $i^{\text{th}}$ Betti number. The second, stronger theorem states that $\cm(x) - \cp(x) = (1+x) Q(x)$, where $Q$ is a polynomial with nonnegative coefficients, $\cm (x) = \sum \cm _i x^i$ is the Morse polynomial, and $\cp(x)=\sum \cp _i x^i$ is the Poincar\'{e} polynomial. Although the first theorem holds for orbifolds, the second no longer does; see the following counterexample. Luckily, most applications of Morse theory to symplectic geometry rely only on the first theorem. \begin{example} {\em Let $M$ be a torus, stood on end (see Figure~1 below). Let $f: M \to \R$ be the height function. Let $\Gamma = \Z/(2\Z)$ act on $M$ by rotating it $180$ degrees. Then $H_0(M/\Gamma) = H_2(M/\Gamma) = \R$, but $H_1(M/\Gamma) = 0$.
On the other hand, $\cm(x) = 1 + 2x + x^2$.} \end{example} \setlength{\unitlength}{0.008in} \begin{picture}(441,400)(10,-50) \put(81,185){\ellipse{80}{156}} \put(80,184){\ellipse{160}{240}} \path(440,44)(439,329) \path(441.028,321.007)(439.000,329.000)(437.028,320.993) \path(182,183)(216,183) \path(208.000,181.000)(216.000,183.000)(208.000,185.000) \path(342,185)(386,185) \path(378.000,183.000)(386.000,185.000)(378.000,187.000) \path(322,63) (317.538,63.540) (313.694,64.045) (310.404,64.527) (307.603,64.998) (303.210,65.951) (300.000,67.000) \path(300,67) (295.689,69.213) (293.183,70.748) (290.604,72.447) (285.716,75.957) (282.000,79.000) \path(282,79) (279.609,81.365) (276.868,84.374) (273.917,87.832) (270.899,91.542) (267.956,95.310) (265.230,98.939) (262.864,102.235) (261.000,105.000) \path(261,105) (258.511,109.277) (257.081,111.960) (255.625,114.812) (254.215,117.688) (252.926,120.442) (251.000,125.000) \path(251,125) (249.666,129.365) (248.963,132.083) (248.267,134.956) (247.599,137.835) (246.984,140.569) (246.000,145.000) \path(246,145) (245.010,149.381) (244.408,152.088) (243.791,154.947) (243.200,157.812) (242.677,160.539) (242.000,165.000) \path(242,165) (241.710,168.978) (241.516,173.834) (241.453,176.500) (241.412,179.275) (241.391,182.124) (241.390,185.009) (241.409,187.893) (241.445,190.741) (241.499,193.515) (241.569,196.179) (241.757,201.030) (242.000,205.000) \path(242,205) (242.481,209.221) (242.850,211.812) (243.273,214.539) (243.724,217.260) (244.179,219.837) (245.000,224.000) \path(245,224) (246.144,228.680) (246.906,231.551) (247.741,234.570) (248.608,237.579) (249.468,240.425) (251.000,245.000) \path(251,245) (252.753,249.297) (253.899,251.913) (255.148,254.653) (256.441,257.373) (257.719,259.933) (260.000,264.000) \path(260,264) (263.219,268.468) (265.314,271.116) (267.578,273.857) (269.894,276.553) (272.147,279.067) (276.000,283.000) \path(276,283) (280.956,287.218) (284.088,289.681) (287.446,292.196) (290.873,294.624) (294.213,296.826) 
(297.307,298.664) (300.000,300.000) \path(300,300) (303.061,301.052) (307.253,302.007) (309.927,302.477) (313.069,302.958) (316.739,303.462) (321.000,304.000) \path(321,263) (316.651,261.327) (312.917,259.833) (309.738,258.489) (307.051,257.267) (302.909,255.067) (300.000,253.000) \path(300,253) (296.152,248.198) (294.346,245.330) (293.000,243.000) \path(293,243) (290.975,238.912) (289.825,236.355) (288.660,233.641) (287.538,230.910) (286.515,228.300) (285.000,224.000) \path(285,224) (283.891,219.626) (283.312,216.910) (282.745,214.041) (282.210,211.166) (281.727,208.435) (281.000,204.000) \path(281,204) (280.412,199.831) (280.081,197.254) (279.758,194.535) (279.464,191.811) (279.225,189.221) (279.000,185.000) \path(279,185) (279.215,180.557) (279.454,177.832) (279.749,174.964) (280.075,172.101) (280.408,169.389) (281.000,165.000) \path(281,165) (281.688,160.344) (282.138,157.477) (282.644,154.458) (283.194,151.445) (283.777,148.593) (285.000,144.000) \path(285,144) (286.902,139.622) (288.217,137.012) (289.662,134.296) (291.148,131.606) (292.591,129.073) (295.000,125.000) \path(295,125) (296.375,122.700) (298.158,119.822) (302.000,115.000) \path(302,115) (304.652,113.231) (308.406,111.433) (310.835,110.465) (313.707,109.419) (317.077,108.272) (321.000,107.000) \path(321,303) (319.271,298.589) (318.090,295.306) (317.000,291.000) \path(317,291) (316.893,287.638) (317.140,283.449) (317.567,279.285) (318.000,276.000) \path(318,276) (318.908,271.452) (319.771,267.866) (321.000,263.000) \path(321,106) (319.271,101.589) (318.090,98.306) (317.000,94.000) \path(317,94) (316.893,90.638) (317.140,86.449) (317.567,82.285) (318.000,79.000) \path(318,79) (318.908,74.451) (319.771,70.866) (321.000,66.000) \put(193,196){\makebox(0,0)[lb]{$f$}} \put(357,196){\makebox(0,0)[lb]{$f$}} \put(51,30){\makebox(0,0)[lb]{$\text{torus}$}} \put(279,36){\makebox(0,0)[lb]{$\text{torus}/\Z_2$}} \put(450,312){\makebox(0,0)[lb]{$\R$}} \put(200,0){\makebox(0,0)[lb]{{\rm Figure 1}}} \end{picture} The 
basic definitions for Morse theory on orbifolds are identical to their smooth counterparts. Let $M$ be a compact orbifold. Let $f: M \to \R$ be a smooth function. We say a critical point $x$ of $f$ is {\em non-degenerate} if the Hessian $H(f)_x$ of $f$ is non-degenerate. In this case, the {\em index} of $f$ at $x$ is the dimension of the negative eigenspace of the Hessian $H(f)_x$. We say $f$ is {\em Morse} if every critical point $x$ is non-degenerate.\\[4pt] {\bf Notation } Denote $f^{-1}(-\infty,a)$ by $M^-_a$ for all $a \in \R$. \begin{Lemma} Choose $a < b \in \R $ such that $[a,b]$ contains no critical values. Then $M^-_a$ is diffeomorphic to $M^-_b$, and $f^{-1}(a)$ is diffeomorphic to $f^{-1}(b)$. \end{Lemma} \begin{proof} The usual proof still applies, i.e., given a Riemannian metric on $M$, one simply flows along the (renormalized) gradient of $f$. \end{proof} For critical points, the situation is only slightly more complicated. \begin{Lemma} {\em ({\bf Morse Lemma})} Let $p$ be a non-degenerate critical point for $f: M \to \R$ on an $n$ dimensional orbifold $M$. There exists a neighborhood $U$ of $p$ in $M$ and a uniformizing chart $({ \tilde{U} },\Gamma,\phi)$ for $U$ such that ${ \tilde{U} } \subset \R^n$, $\Gamma$ acts linearly, $\phi^{-1} (p)$ is a single point $\tilde{p}$, and $f\circ\phi$ equals the quadratic form given by its Hessian at $\tilde{p}$, up to the additive constant $f(p)$: $$ f \circ \phi = f(p) + H(f\circ \phi)_{\tilde{p}} . $$ Note that the action of $\Gamma$ on ${ \tilde{U} }$ preserves the positive and negative eigenspaces of the Hessian of $f\circ\phi$. \end{Lemma} \begin{proof} This is simply an equivariant version of the Morse lemma for manifolds. \end{proof} Given a critical point $p$ of a function $f$, we can choose $\epsilon> 0$ such that $a=f(p)$ is the only critical value in $[f(p)-\epsilon,f(p) + \epsilon]$. Suppose further that $p$ is the only critical point of $f$ in the level set $f^{-1}(a)$ and that the index of $p$ is $\lambda$.
Then the space $M^-_{a+\epsilon}$ has the homotopy type of the space obtained by attaching the ``cell'' $D^\lambda/\Gamma$ to $M^-_{a -\epsilon}$ by a map from $S^{\lambda-1}/\Gamma$ to $f^{-1}(a-\epsilon)$. Here, $D^\lambda$ is the standard closed disk in the negative eigenspace of $H(f)_p$ with the action of $\Gamma$ given as above; $S^{\lambda-1}$ is its boundary. As before, it suffices to check that the usual proof is equivariant. Therefore, $H_*(M^-_{a+\epsilon},M^-_{a-\epsilon}) = H_*(D^\lambda/\Gamma, S^{\lambda-1}/\Gamma)$. When we consider coefficients in $\Z$, this can be complicated. Over $\R$, however, the following lemma is immediate: \begin{lemma} Let $f$ be a Morse function on an orbifold $M$. Suppose that $p$ is the only critical point of $f$ in $f^{-1}(a-\epsilon ,a+ \epsilon )$ and that it has index $\lambda$. Then $H_i(M^-_{a+\epsilon},M^-_{a-\epsilon};\R) = 0$ for $i \neq \lambda$; whereas $H_\lambda(M^-_{a+\epsilon},M^-_{a-\epsilon};\R) = \R$ if $\Gamma$ preserves the orientation of $D^\lambda$, and is trivial otherwise. \end{lemma} \begin{corollary} The number of critical points with index $i$ is greater than or equal to the dimension of $H_i(M;\R)$. \end{corollary} \begin{remark} {\em We will see in the next section that every moment map corresponding to a circle action with isolated fixed points is a Morse function with even indices. Moreover, it is clear that the orbifold structure group $\Gamma$ preserves the symplectic form, and hence the orientation, on $D^\lambda$. Therefore, these moment maps are perfect Morse functions, i.e., the $i^{\text{th}}$ coefficient of the Morse polynomial equals the $i^{\text{th}}$ coefficient of the Poincar\'{e} polynomial: $\cm_i = \dim(H_i(M;\R))$.} \end{remark} We must also consider moment maps which correspond to circle actions with non-isolated fixed points, that is, consider Bott-Morse theory.
\begin{definition} A smooth function $f: M \to \R$ is Bott-Morse if the set of critical points is the disjoint union of suborbifolds, and if for every point $x$ of such a suborbifold $F \subset M$, the null space of the Hessian $H(f)_x$ is precisely the tangent space to $F$. \end{definition} If $f:M \to \R$ is Bott-Morse and $F$ is a critical orbifold of $f$, the normal orbi-bundle of $F$ splits as a direct sum of vector orbi-bundles $E^-$ and $E^+$ corresponding to the negative and positive spectrum of the Hessian of $f$ along $F$.\footnote{Of course one has to be careful when talking about direct sums of vector orbi-bundles.} Given any metric on $M$, let $D=D_F$ denote a disc bundle of $E^-$ and $S= S_F$ denote the corresponding sphere bundle. The index of $f$ at $F$ is the rank of $E^-$, that is, the fiber dimension of $D_F$. In this case, we have the following result: \begin{lemma} \label{morsebott} For small $\epsilon$, $H_*(M^-_{f(F)+\epsilon},M^-_{f(F)-\epsilon}) = H_*(D_F,S_F)$. Moreover, the boundary map from $H_q(M^-_{f(F)+\epsilon},M^-_{f(F)-\epsilon})$ to $H_{q-1}(M^-_{f(F) - \epsilon})$ in the long exact sequence of relative homology is the composition of the boundary map from $H_q(D_F,S_F)$ to $H_{q-1}(S_F)$ and the map on homology induced by the ``inclusion'' map from $S_F$ to $M^-_{f(F) - \epsilon}$. \end{lemma} \begin{proof}Again, the manifold proof (see for example \cite{Chang}) can be adapted to the case of orbifolds. \end{proof} We now prove lemma~\ref{onedim}. We do it in a sequence of lemmas. \begin{Lemma}\label{lemma2.1} Let $F$ be an orbifold, $\pi: E \to F$ be a $\lambda$ dimensional real vector orbi-bundle and $D(E)$ and $S(E)$ the corresponding disk and sphere orbi-bundles with respect to some metric. If $\lambda > 1$, then $H_1(D(E),S(E)) = 0$. \end{Lemma} \begin{proof} By the long exact sequence in relative homology it suffices to show that the maps $H_0 (S(E)) \to H_0 (D(E))$ and $H_1 (S(E)) \to H_1 (D(E))$ induced by inclusion are surjective.
But this follows from two facts: the fibers of $\pi: E \to F$ are path connected and any path in the base $F$ can be lifted to a path in the sphere bundle $S(E)$. \end{proof} \begin{Lemma} \label{Morsesurj} Let $M$ be a connected compact orbifold, and $f: M \to \R$ be a Bott-Morse function with no critical suborbifold of index $1$. Then \begin{enumerate} \item $M^-_a = \{m \in M: f(m) <a\}$ is connected for all $a \in \R$, and \item if $M^-_a \neq \emptyset$, then $H_1(M^-_a) \to H_1(M)$ is a surjection. \end{enumerate} \end{Lemma} \begin{Proof} Let $F \subset M$ be a critical suborbifold of $f$ of index $\lambda$. Let $D_F$ and $S_F$ be the disk and sphere bundles of the negative orbi-bundle of $f$ along $F$. Let $a = f(F)$, and let $\epsilon > 0$ be small. We assume, for simplicity, that no other critical suborbifold maps to $a$. If $\lambda = 0$ then $S_F$ is empty. Otherwise, since $\lambda \neq 1$, $\lambda$ is greater than one and $H_1(D_F,S_F)$ is trivial by lemma~\ref{lemma2.1}. In either case the map $H_1(D_F,S_F) \to H_0(S_F)$, and hence also the map $ H_1(D_F,S_F) \to H_0(M^-_{a-\epsilon})$, is trivial. Therefore, the following sequence is exact: $$ 0 \to H_0(M^-_{a-\epsilon}) \to H_0(M^-_{a+\epsilon}) \to H_0(D_F,S_F) \to 0 $$ Notice that $\dim(H_0(M^-_{a + \epsilon})) \geq \dim(H_0(M^-_{a - \epsilon}))$. Since $M$ is connected, this completes part (1). If $\lambda = 0$, then $\dim(H_0(M^-_{a + \epsilon})) > \dim(H_0(M^-_{a - \epsilon}))$. Therefore, since $M$ is connected, the minimum is the unique critical value of index $0$. For any other critical value $a$, $H_1(M^-_{a+\epsilon},M^-_{a-\epsilon}) = 0$, and the map $H_1(M^-_{a-\epsilon}) \to H_1(M^-_{a+\epsilon})$ is a surjection. \end{Proof} \begin{proof}{\bf of lemma~\ref{onedim}} We may assume that $a$ and $b$ are regular values and that $M_{(a,b)} = \{ m \in M \mid a < f(m) < b \}$ is not empty.
By lemma \ref{Morsesurj}, applied to both $f$ and $-f$, $H_1(M^-_b) \oplus H_1(M^+_a) \to H_1(M)$ is a surjection, where $M^+_a = f^{-1}(a,\infty)$. Therefore, by Mayer-Vietoris, the following sequence is exact: $$ 0 \to H_0(M_{(a,b)}) \to H_0(M^-_b) \oplus H_0(M^+_a) \to H_0(M) \to 0.$$ Finally, by lemma~\ref{Morsesurj}, $M^-_b$ and $M^+_a$ are connected. \end{proof} \begin{remark}{\em Since $f:M \to \R$ is proper, it follows from lemma~\ref{onedim} and a simple point set topology argument that for any $a\in \R$ the fiber $f^{-1}(a)$ is connected. } \end{remark} \section{Connectedness and Convexity} Let $(M,\omega,T)$ be a symplectic toric orbifold with moment map $\phi : M \to \ft^*$. In this section we show that $\phi(M) \subset \ft^*$ is a convex rational simple polytope. We will prove this as a corollary of the Atiyah-Guillemin-Sternberg convexity theorem \cite{A} \cite{GS} for orbifolds; our proof is similar to Atiyah's. We also prove that the fibers of the moment map $\phi$ are connected. In fact, as was observed by Atiyah (op.\ cit.) in the manifold case, convexity is an easy consequence of connectedness. In turn, the fibers of toral moment maps are connected because the components of these moment maps are Bott-Morse functions with even indices. We now give precise statements of the main results of the section. \begin{Theorem} \label{connected} Let $M$ be a Hamiltonian $T$ orbifold, $T$ a torus, with a moment map $\phi:M \to \ft^*$. The fibers of $\phi$ are connected. \end{Theorem} \begin{Theorem} \label{convex} Let $M$, $T$ and $\phi:M \to \ft^*$ be as in theorem~\ref{connected} above. Then $\Delta = \phi(M) \subset \ft^*$ is a rational convex polytope. In particular, it is the convex hull of the image of the points in $M$ fixed by $T$, $$ \phi (M) = \text{convex hull } (\phi (M^T)). $$ \end{Theorem} \begin{corollary}\label{cor simple} Let $(M,\omega,T)$ be a symplectic toric orbifold with moment map $\phi: M \to \ft^*$. Then $\Delta = \phi(M)$ is a rational simple polytope.
\end{corollary} We begin the proof of the above statements with the following lemma. \begin{lemma}\label{lemma Bott-Morse} Let $G\times (M, \omega) \to (M, \omega )$ be a Hamiltonian group action of a compact Lie group on a symplectic orbifold with moment map $\phi : M \to \fg^*$. Then for any $\xi \in \fg$ the $\xi ^{\text{th}}$ component of the moment map $\phi ^\xi := \xi \circ \phi$ is Bott-Morse and the indices of its critical orbifolds are all even. \end{lemma} \begin{proof} This is a generalization of Theorem~5.3 of \cite{GS} and of Lemma~(2.2) of \cite{A} to the case of orbifolds. The proof is the same, except one has to use the orbifold version of the equivariant Darboux theorem (cf. lemma~\ref{locsymp}, which specializes to the equivariant Darboux theorem when the orbit is a point). \end{proof} \begin{Proof} {\bf \!\!of Theorem \ref{connected}} Because the moment map $\phi$ is continuous and proper, the connectedness of fibers is implied by the connectedness of the preimages of balls. We will prove this stronger statement by induction on the dimension of $T$. By lemma~\ref{lemma Bott-Morse}, moment maps for circle actions are Bott-Morse functions with even indices. Hence the preimages of balls are connected by lemma~\ref{onedim}. Suppose now that $T$ is a $k$ dimensional torus and let $B$ be a ball in $\ft^*$. As before, let $\ell \subset \ft$ denote the lattice of circle subgroups. Then for every $0\not =\xi \in \ell$ the map $\phi ^\xi \equiv \xi \circ \phi$ is a moment map for the action of the circle $S_\xi := \{ \exp t\xi : t\in \R\}$. Let $\R_\xi $ denote the set of regular values of $\phi ^\xi$. For every $a\in \R_\xi$ the reduced space $M_{a, \xi} := (\xi \circ \phi)^{-1} (a)/S_\xi$ is a symplectic orbifold. The $k-1$ dimensional torus $H:= T/S_\xi$ acts on $M_{a, \xi}$ and the action is Hamiltonian. By the inductive assumption the preimages of balls under the $H$ moment maps $\phi ^H : M_{a, \xi} \to \fh^*$ are connected.
The affine hyperplane $\{\eta \in \ft^*: \xi (\eta ) = a\}$ is naturally isomorphic to $\fh^*$, the dual of the Lie algebra of $H$, and we can identify the restriction of $\phi$ to $(\phi ^\xi)^{-1} (a)$ with the pull-back of $\phi ^H$ by the orbit map $\pi :(\phi ^\xi)^{-1} (a) \to M_{a, \xi}$. It follows that $\phi ^{-1} (B \cap \{\eta : \xi (\eta ) = a\}) = \pi ^{-1} ( (\phi ^H)^{-1} (B))$ is connected. Now the set $$ U = \bigcup _{\xi \in \ell} \bigcup _{a\in \R_\xi } B \cap \{\eta : \xi (\eta ) = a\} $$ is connected and dense in the ball $B$, and its preimage $\phi^{-1}(U)$ is connected. Therefore the closure $\overline{\phi^{-1} (U)}$ in $M$ is connected. Since $\phi $ is proper, $\overline{\phi^{-1} (U)}$ is the preimage of the closure of $U$ in $\ft^*$, which is $\phi ^{-1}(B)$. Hence $\phi ^{-1}(B)$ is connected. \end{Proof} \begin{Proof} {\bf \!\!of Theorem~\ref{convex} } It is no loss of generality to assume that the action of $T$ is effective, hence, by corollary~\ref{cor.loc_free}, free on a dense subset. Consequently the interior of the image $\phi (M)$ is nonempty. To prove that $\phi (M) $ is convex it suffices to show that for any affine line ${\cal L}\subset \ft^*$, the intersection ${\cal L}\cap \phi (M)$ is connected. In fact it is enough to prove this for rational lines only, i.e., lines of the form $\R \upsilon +a$ where $a\in \ft^* $ and $\upsilon \in \ell ^*$, the weight lattice of $T$. If $\upsilon \in \ell^*$ then $\ker \upsilon \subset \ft$ is the Lie algebra of a subtorus $H= H_\upsilon$ of $T$. A moment map $\phi ^H$ for the action of $H$ on $M$ is given by the formula $\phi ^H =i^* \circ \phi$ where $i^*$ is the dual of the inclusion $i:\fh \to \ft$. The fibers $(\phi ^H)^{-1} (\alpha )$ are connected by theorem~\ref{connected}. On the other hand $$ (\phi ^H)^{-1} (\alpha ) = \phi ^{-1} ((i^*)^{-1}(\alpha )) = \phi ^{-1} (\phi (M) \cap (a + \R \upsilon)) $$ for any $a\in (i^*)^{-1}(\alpha )$. This proves that the image $\phi (M)$ is convex.
Since $\phi (M)$ is compact it is, by Minkowski's theorem, the convex hull of its extreme points. Recall that a point $\alpha $ in the convex set $A$ is extreme for $A$ if it {\em cannot} be written in the form $\alpha = \lambda \beta +(1-\lambda)\gamma$ for any $\beta, \gamma \in A$ with $\beta \not = \gamma$ and $\lambda \in (0,1)$. Lemma~\ref{locsymp} shows that for any point $x$ in a Hamiltonian $T$ orbifold $M$ the image $\phi (M)$ contains an open ball about $\phi (x)$ in the affine plane $\phi (x) + \ft_x^\circ$, where $\ft_x^\circ$ is the annihilator of the isotropy Lie algebra of $x$ in $\ft^*$. In particular, if $x$ is not fixed by $T$ then $\ft _x ^\circ \not = 0$, so $\phi (x)$ lies in the interior of a segment contained in $\phi (M)$. Therefore the preimage of the set of extreme points of $\phi (M)$ consists entirely of fixed points. Since the set of fixed points $M^T$ is closed and $M$ is compact, $H_0 (M^T)$ is finite. Since $\phi$ is locally constant on $M^T$, $\phi (M^T)$ is finite. Therefore $\phi (M) =\text{convex hull }(\phi (M^T))$ is a convex polytope. To prove that $\phi (M)$ is rational we need to show that the hyperplanes supporting its facets are defined by elements of the circle subgroup lattice $\ell \subset \ft$ (cf. the introduction). Suppose $\xi \in \ft$ defines a hyperplane supporting a facet $F$ of $\phi (M)$. Then the function $\phi ^\xi$ attains its global minimum on the preimage of the facet $\phi ^{-1}(F)$. Therefore the points in $\phi ^{-1}(F)$ are fixed by the closure $H$ of $\{\exp t\xi: t\in \R\}$ in $T$. It follows from the local normal form (lemma~\ref{locsymp}) that the images of the connected components of $M^H$ lie in the affine translates of the annihilator of $\fh$ in $\ft^*$. Since the affine hull of the facet $F$ has codimension 1, $\fh$ is one-dimensional, i.e. $H$ is a circle. Therefore $\xi $ is in $\ell$. \end{Proof} \begin{proof}{\bf of corollary~\ref{cor simple} } We saw in the proof of theorem~\ref{convex} that facets of the image $\phi (M)$ correspond to one dimensional isotropy subgroups.
Therefore in order to prove that $\phi (M)$ is simple it is enough to show that in a neighborhood of the preimage of a vertex the number of circle isotropy groups that can occur is the same as the dimension of the torus. But this is precisely the content of the second assertion of lemma~\ref{orbit}. \end{proof} \section{From Local to Global}\label{local to global} Let $(M,\omega,T)$ and $(M',\omega',T')$ be symplectic toric orbifolds with isomorphic associated weighted polytopes. In this section, we show that the orbifolds are isomorphic, that is, equivariantly symplectomorphic. The first step is to show that $M$ and $M'$ are {\em locally} isomorphic. Then, by extending proposition~2.4 in \cite{HS} to the symplectic category, we show that we can ``glue'' these local isomorphisms together to construct a global isomorphism. Note that we may assume that $T = T'$, and that the associated weighted polytopes are equal. Let $T$ be a torus, and let $(M,\omega)$ be a Hamiltonian $T$ orbifold of any dimension. Let $\pi : M \to M/T $ be the orbit map. Since the moment map $\phi: M \to \ft^* $ is $T$ invariant, it descends to a map $\underline{\phi}:M/T \to \ft^*$. Denote $\im(\phi) \subset \ft^*$ by $\Delta$. Suppose $M'$ is another Hamiltonian $T$ orbifold. We say $M'$ is an orbifold {\em over $M/T$ } if there is a continuous map $\pi' :M' \to M/T$ which induces a homeomorphism from $M'/T$ to $M/T$. In this case, define $\phi' : M' \to \ft^*$ by $\phi' = \underline{\phi} \circ \pi'$. For the purposes of this section two Hamiltonian $T$ orbifolds $(M,\omega,\pi)$ and $(M', \omega',\pi')$ over $M/T$ are {\em isomorphic} if there exists a $T$ equivariant symplectomorphism $f: M \to M'$ such that $\pi \circ f = \pi'$.
Two Hamiltonian $T$ orbifolds $(M,\omega,\pi)$ and $(M', \omega',\pi')$ over $M/T$ are {\em locally isomorphic over $\Delta$} if every point in $\Delta$ has an open neighborhood $U$ and a $T$ equivariant symplectomorphism $f: {\phi'}^{-1}(U) \to \phi^{-1} (U)$ such that $\pi \circ f = \pi'$. In this case, $\phi' :M' \to \ft^*$ is a moment map for the action of $T$ on $(M',\omega')$. \begin{remark}{\em If $\dim T = \frac{1}{2}\dim M$, then $(M,\omega,\pi)$ and $(M',\omega',\pi')$ are isomorphic exactly if they are equivariantly symplectomorphic. Furthermore, $\Delta = M/T$, so there is no need to distinguish the two. In contrast, if $\dim T < \frac{1}{2} \dim M$, then $(M,\omega,\pi)$ and $(M',\omega',\pi')$ may be equivariantly symplectomorphic but not isomorphic. Furthermore, $\Delta \neq M/T$, so it is important to notice that although we consider isomorphisms of neighborhoods of fibers of the moment map, we demand that these isomorphisms fix the orbits of $M/T$. Since we only need the former case, the reader may wish to assume that $\dim T = \frac{1}{2}\dim M$. }\end{remark} \begin{lemma} \label{locequiv} Let $(M,\omega,T)$ and $(M',\omega',T)$ be symplectic toric orbifolds. Let $\Phi: M \to \ft^*$ and $\Phi': M' \to \ft^* $ be the associated moment maps. Assume that $\Phi(M) = \Phi'(M')$ and that the integers associated to each facet are the same. Then $M$ and $M'$ are locally isomorphic. \end{lemma} \begin{proof} The proof is a direct consequence of the local normal form for symplectic toric orbifolds (lemma \ref{orbit}), the connectedness of fibers of the moment map (theorem \ref{connected}) and the properness of the moment map. \end{proof} \begin{lemma} \label{lem.sheaf} Let $(M,\omega)$ be a Hamiltonian $T$ orbifold; let $\Delta$ denote the image of its moment map.
Let $\cS$ be the sheaf over $\Delta$ defined as follows: for each open $U \subset \Delta$, $\cS(U)$ is the set of isomorphisms of $\phi^{-1}(U)$, that is, the set of $T$-equivariant symplectomorphisms of $\phi^{-1}(U)$ which preserve the orbits of $T$. Then isomorphism classes of Hamiltonian $T$ orbifolds over $M/T$ which are locally isomorphic to $M$ are classified by $H^1(\Delta,\cS)$. \end{lemma} \begin{proof} Let $\U = \{ U_i \}_{i \in I}$ be a covering of $\Delta$ such that there is an isomorphism $h_i : \phi^{-1}(U_i) \to {\phi'}^{-1}(U_i)$ for each $i \in I$. Define $f_{ij}: \phi^{-1}(U_i \cap U_j) \to \phi^{-1}(U_i \cap U_j)$ by $f_{ij} = h_i^{-1} \circ h_j$. These $f_{ij}$'s give a closed element of $C^1(\U,\cS)$. Moreover, the cohomology class of this element is independent of the choices of the isomorphisms $h_i$. Conversely, if $\{f_{ij}\} \in C^1(\U,\cS)$ is closed, we can construct a Hamiltonian $T$ orbifold over $M/T$ by taking the disjoint union of the $\phi^{-1}(U_i)$'s and gluing $\phi^{-1}(U_i)$ and $\phi^{-1}(U_j)$ together using $f_{ij}$. \end{proof} \begin{proposition} \label{loctoglob} Let $M$ and $M'$ be Hamiltonian $T$ orbifolds over $M/T$ which are locally isomorphic. Then $M$ and $M'$ are isomorphic. \end{proposition} \begin{proof} Let $\cC^\infty$ denote the sheaf of germs of smooth functions on $\Delta$. Let $\underline{\ell \times \R}$ denote the sheaf of locally constant functions with values in $\ell \times \R$. Since $\cC^{\infty}$ is a fine sheaf, $H^i(\Delta,\cC^{\infty}) = 0$ for all $i > 0$. Since $\Delta$ is contractible, $H^i(\Delta,\underline{\ell \times \R}) = 0$ for all $i > 0$. By lemma \ref{lem.sheaf} above, it suffices to show that $H^1(\Delta,\cS) = 0$. By the above comments, it is therefore sufficient to show that the following sequence of sheaves is exact: $$0 \to \underline{\ell \times \R} \stackrel{j}{\to} \cC^\infty \stackrel{\Lambda}{\to} \cS \to 0.$$ First we construct the map $\Lambda: \cC^\infty \to \cS$.
For $U \subset \Delta$, let $f :U \to \R$ be a smooth function. Then the Hamiltonian vector field $X$ on $\phi^{-1}(U)$ of the function $f\circ \phi$ is a $T$ invariant symplectic vector field. Moreover, $X$ preserves $T$ orbits. Therefore the time-one flow $\exp(X)$ is an isomorphism of $\phi^{-1}(U)$, i.e., $\exp(X) \in \cS(U)$. To define $j : \underline{\ell \times \R} \to \cC^\infty$, for any $(\xi, c) \in \ell \times \R$ and $\eta \in \ft^*$, let $j(\xi, c)(\eta) = c + \left<\xi,\eta\right>.$ It is clear that $j$ is injective, and that $\im(j) = \ker(\Lambda)$. The final step is to show that $\Lambda$ is surjective. Let $\psi$ be an isomorphism of $\phi^{-1}(U)$, that is, a $T$-equivariant symplectic diffeomorphism which preserves orbits. In particular, $\psi$ is a $T$-equivariant diffeomorphism of $\phi^{-1}(U)$ which preserves orbits. Therefore, by Theorem~3.1 in \cite{HS}, there exists a smooth $T$ invariant map $h: \phi^{-1}(U) \to T$ such that $\psi(x) = h(x) \cdot x$. For sufficiently small $U$, there exists a smooth $T$ invariant map $\theta :\phi^{-1}(U) \to \ft$ such that $\exp \circ \theta = h$. Define a vector field $X_\theta$ on $\phi^{-1}(U)$ by $X_\theta (x) = \left. \frac{d}{ds}\right |_{s= 0} \exp (s\theta (x))\cdot x$. A computation using a local normal form at the points where the action of $T$ is free shows that $X_\theta $ is symplectic. Hence locally $X_\theta $ is Hamiltonian. Choose $f$ such that $df = i_{X_\theta} \omega$. Since $X_\theta$ is tangent to orbits of $T$, the function $f$ Poisson commutes with all $T$ invariant functions on $M$. Since the arguments in \cite{L} work in the case of orbifolds, it follows that there is a smooth function $\underline{f}: \Delta \to \R$ such that $\phi^* \underline{f} = f$. Finally, it is not hard to see that $\Lambda(\underline{f}) = \psi$. \end{proof} \begin{Theorem} Let $(M,\omega,T)$ and $(M',\omega',T')$ be symplectic toric orbifolds with isomorphic weighted polytopes. Then $(M,\omega, T)$ and $(M',\omega',T')$ are isomorphic.
\end{Theorem} \begin{proof} By lemma \ref{locequiv}, $M$ and $M'$ are locally isomorphic. By proposition \ref{loctoglob}, locally isomorphic implies isomorphic, so we are done. \end{proof} \vspace{-4mm} \begin{remark}{\em Given any weighted polytope $\Delta$, one can construct local models for the symplectic toric orbifold associated to $\Delta$. Since the exact sequence in the proof of proposition \ref{loctoglob} also shows that $H^2(\Delta,\cS) = 0$, the arguments in \cite{HS} allow one to show that there exists a symplectic toric orbifold which corresponds to the given weighted polytope. However, in section \ref{surj}, we give a more explicit construction. }\end{remark} \section{Surjectivity} \label{surj} Finally, for every weighted polytope we construct a corresponding K\"{a}hler toric orbifold. Our construction is a slight variation of Delzant's construction. Let $\ft$ be a vector space with a lattice $\ell$. Let $\Delta \subset \ft^*$ be a rational simplicial polytope with $N$ facets, and a positive integer $m_i$ associated to each facet. Then $\Delta$ can be written uniquely as $$\Delta = \cap_{i = 1}^N \{ \alpha \in \ft^* \mid \left< \alpha, y_i \right> \geq \eta_i \},$$ where $y_i \in \ell$ is primitive. Let $x_i = m_i y_i$. Define a linear projection $\pi : \R^N \to \ft$ by $\pi(e_i) = x_i$. Let $\fk$ be the kernel of $\pi$; let $j:\fk \to \R^N$ denote the inclusion map. Let $K$ be the kernel of the map from $\R^N/\Z^N $ to $\ft/\ell$ induced by $\pi$. Let $\omega$ be the standard symplectic form on $\C^N$. The standard action of $({S^1})^N$ on $\C^N$ has moment map $\phi_{({S^1})^N}(z_1,\ldots,z_N) = \left( |z_1|^2, \ldots, |z_N|^2 \right).$ Since $K$ is a subset of $\R^N/\Z^N$, the identification of $\R^N/\Z^N $ with $({S^1})^N$ induces an action of $K$ on $\C^N$; its moment map is given by $\phi_K = j^* \circ \phi_{({S^1})^N}$. Let $(M,\sigma)$ be the symplectic reduction of $\C^N$ by $K$ at $j^*(m_1 \eta_1,\ldots, m_N \eta_N).$ $T = \ft/\ell = ({S^1})^N/K$ acts symplectically on $(M,\sigma)$.
Since the action of $K$ on $\C^N$ preserves the K\"{a}hler structure on $\C^N$, $(M,\sigma,T)$ has an induced K\"{a}hler structure. It is easy to check that $$ \im(\phi_T) = \cap_{i = 1}^N \left\{ \alpha \in \ft^* \mid \left< \alpha, m_i y_i \right> \geq m_i \eta_i \right\} = \Delta, $$ where $\phi_T:M \to \ft^*$ is the moment map for $T$. Consider $[z] \in M$, where $z = (z_1,\ldots,z_N) \in \C^N$. It is clear that $\phi_T([z])$ lies in the interior of the $i$th facet exactly if $\phi_{({S^1})^N}(z)$ lies in the interior of the $i$th coordinate hyperplane, that is, exactly if $z_i = 0$, but $z_j \neq 0$ for $j \neq i$. The structure group of such points is just the intersection of $K$ with the $i$th $S^1$, that is, $\Z/(m_i\Z)$. Therefore, $(M,\sigma,T)$ is a K\"{a}hler toric orbifold, and $(\Delta,\{m_i\})$ is the associated weighted polytope.
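The lattice data entering the construction above is easy to compute in examples. The following sketch is our own illustration, not part of the paper: it takes $\ft = \R^2$ and the standard simplex with primitive inward normals $y_1 = (1,0)$, $y_2 = (0,1)$, $y_3 = (-1,-1)$ and facet labels $m = (1,1,2)$, and finds an integer generator of $\ker \pi$, hence of the circle $K \subset (S^1)^3$, as a cross product of the two rows of the $2 \times 3$ matrix $[x_1\ x_2\ x_3]$.

```python
def cross(u, v):
    """Cross product of two integer 3-vectors; for a 2x3 integer matrix
    with rows u, v of full rank, it spans the kernel of the matrix."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Standard simplex in R^2: primitive inward normals y_i, labels m_i.
m = (1, 1, 2)
y = [(1, 0), (0, 1), (-1, -1)]
x = [(mi * yi[0], mi * yi[1]) for mi, yi in zip(m, y)]   # x_i = m_i y_i

rows = tuple(zip(*x))        # rows of the 2x3 matrix pi = [x_1 x_2 x_3]
k = cross(rows[0], rows[1])  # integer generator of ker(pi)

# sanity checks: k lies in the kernel; for m = (1,1,1) one gets the
# diagonal circle (1,1,1), recovering the usual Delzant picture.
assert all(sum(r[i] * k[i] for i in range(3)) == 0 for r in rows)
assert k == (2, 2, 1)
assert cross((1, 0, -1), (0, 1, -1)) == (1, 1, 1)
```

Here $k = (2,2,1)$ means $K$ is the circle $\{(e^{2\pi i \cdot 2t}, e^{2\pi i \cdot 2t}, e^{2\pi i t})\} \subset (S^1)^3$; whether the resulting kernel vector is primitive must be checked for each choice of data.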
https://arxiv.org/abs/1506.05513
When is a subgroup of a ring an ideal?
Let $R$ be a commutative ring. When is a subgroup of $(R, +)$ an ideal of $R$? We investigate this problem for the rings $\mathbb{Z}^{d}$ and $\prod_{i=1}^{d} \mathbb{Z}_{n_{i}}$. For various subgroups of these rings we obtain necessary and sufficient conditions under which the above question has an affirmative answer. In the case of $\mathbb{Z} \times \mathbb{Z}$ and $\mathbb{Z}_n \times \mathbb{Z}_m$, our results give, for any given subgroup of these rings, a computable criterion for the problem under consideration. We also compute the probability that a randomly chosen subgroup from $\mathbb{Z}_n \times \mathbb{Z}_m$ is an ideal.
\section{Introduction} Let $R$ be a commutative ring. The object of this paper is to determine necessary and sufficient conditions for a given subgroup of $(R, +)$ to be an ideal of $R$. Our motivation for asking this question arose from some problems on Mathieu subspaces (more is explained in the next paragraph). To begin, consider the ring $\mathbb{Z}$ of integers. Every subgroup of $\mathbb{Z}$ is of the form $k \mathbb{Z}$ for some integer $k$, and each of these subgroups is clearly also an ideal. In fact, the same is true also for the rings $\mathbb{Z}_n$ (the ring of integers modulo $n$). It turns out that these are the only rings $R$ in which every subgroup of $(R, +)$ is also an ideal of $R$; see Proposition \ref{zzn}. In particular, when we consider product rings we get some subgroups that are not ideals. For instance the diagonal $\{(x, x) \, | \, x \in \mathbb{Z}\}$ in $\mathbb{Z} \times \mathbb{Z}$ is clearly a subgroup of $(\mathbb{Z} \times \mathbb{Z}, +)$ but not an ideal in the ring $\mathbb{Z} \times \mathbb{Z}$. In this paper we consider the product rings $\mathbb{Z}^{d}$ (in Section \ref{infinite}) and $\prod_{i=1}^{d} \mathbb{Z}_{n_{i}}$ (in Section \ref{finite}), and for various subgroups of these rings we give necessary and sufficient conditions for a given subgroup to be an ideal. In the case of $\mathbb{Z} \times \mathbb{Z}$ and $\mathbb{Z}_n \times \mathbb{Z}_m$, our necessary and sufficient conditions are also computable for any given subgroup of these rings. As one would expect, our results show that in general an arbitrary subgroup of a ring is seldom an ideal. In fact, we make this statement precise in Theorem \ref{counting} where we compute explicitly the probability that a randomly chosen subgroup from $\mathbb{Z}_n \times \mathbb{Z}_m$ is an ideal. For instance, when $p$ is a prime and the ring is $\mathbb{Z}_p \times \mathbb{Z}_p$, this probability is only $\frac{4}{p+3}$. 
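For a small prime $p$ this count can be confirmed by brute force. The sketch below is our own code, not part of the paper (all helper names are ours): it enumerates the subgroups of $\mathbb{Z}_p \times \mathbb{Z}_p$ as spans of pairs of elements, which suffices since every subgroup of this ring is generated by at most two elements, and tests the ideal condition directly under the uniform model on subgroups used in Theorem \ref{counting}.

```python
from fractions import Fraction
from itertools import product

def all_subgroups(p):
    """All subgroups of Z_p x Z_p, each returned as a frozenset of pairs."""
    elems = list(product(range(p), repeat=2))
    subs = set()
    for g1 in elems:
        for g2 in elems:
            # span of two generators; coefficients 0..p-1 suffice
            subs.add(frozenset(((s * g1[0] + t * g2[0]) % p,
                                (s * g1[1] + t * g2[1]) % p)
                               for s in range(p) for t in range(p)))
    return subs

def is_ideal(H, p):
    """Ideal test: multiplication in Z_p x Z_p is componentwise."""
    return all(((r1 * a) % p, (r2 * b) % p) in H
               for (r1, r2) in product(range(p), repeat=2)
               for (a, b) in H)

p = 5
subs = all_subgroups(p)
ideals = [H for H in subs if is_ideal(H, p)]
assert len(subs) == p + 3  # trivial subgroup, whole group, and p+1 lines
assert Fraction(len(ideals), len(subs)) == Fraction(4, p + 3)
```

For $p = 5$ the script finds $p + 3 = 8$ subgroups, of which exactly four ($0$, $\mathbb{Z}_5 \times \mathbb{Z}_5$, $\mathbb{Z}_5 \times 0$ and $0 \times \mathbb{Z}_5$) are ideals, matching the probability $\frac{4}{p+3} = \frac{1}{2}$.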
We will use several basic facts and tools from abstract algebra which can be found in \cite{DummitFoote}. We also use a theorem in group theory due to Goursat; a good exposition of this theorem can be found in \cite{jp}, and we review it in Theorem \ref{Goursat}. Although we focus mainly on the rings $\mathbb{Z} \times \mathbb{Z}$ and $\mathbb{Z}_n \times \mathbb{Z}_m$, where possible we offer some generalizations. By a subgroup of a ring $R$, we always mean a subgroup of the additive group $(R, +)$. This problem came up naturally when the first author and his collaborators (Yamskulna and Zhao) were recently working on some problems involving Mathieu subspaces in some rings. A Mathieu subspace is a generalization of an ideal: For a commutative ring $R$, a $\mathbb{Z}$-submodule $M$ of $R$ is said to be a Mathieu subspace of $R$ if whenever $a \in R$ satisfies $a^n \in M$ for all $n \ge 1$, then for every $r \in R$ we have $ra^n \in M$ for all sufficiently large $n$. Every ideal is a Mathieu subspace, but the converse is not necessarily true. The notion of a Mathieu subspace was introduced by Wenhua Zhao in \cite{wz}, and it proved to be a central idea in the research on several landmark conjectures in algebra and geometry including the Jacobian conjecture. As a result, Mathieu subspaces received serious attention and extensive study; see \cite{wz12} and references in it. Recently when the first author and his collaborators were working on some problems on Mathieu subspaces, they were led to the problem of determining when a subgroup of a ring is a Mathieu subspace. Since ideals are important and relatively well-understood classes of Mathieu subspaces, it was natural to investigate the same question for ideals. Thus the problem we study in this paper is an interesting offshoot of our Mathieu subspaces project. \vskip 3mm\noindent \textbf{Acknowledgements:} We would like to thank the referee for his/her comments and suggestions which we used to improve the exposition of this paper.
\section{Generators} \label{generators} In the introduction we noted that the rings $\mathbb{Z}$ and $\mathbb{Z}_n$ have the property that every subgroup in them is also an ideal. It is not hard to show that these are the only rings with this property. \begin{prop} \label{zzn} Let $R$ be a unital commutative ring, i.e., a commutative ring with a multiplicative identity. If every subgroup of $(R, +)$ is also an ideal, then $R$ is isomorphic to either $\mathbb{Z}$ or $\mathbb{Z}_n$ for some positive integer $n$. \end{prop} \begin{proof} Since $R$ is a unital ring, there is a natural map $\phi \colon \mathbb{Z} \ensuremath{\longrightarrow} R$ which sends $1$ to $1_R$, the multiplicative identity of $R$. The image of this homomorphism is exactly the subgroup of $(R, +)$ that is generated by $1_R$. If every subgroup of $(R, +)$ is an ideal, then, in particular, the subgroup generated by $1_R$ is also an ideal. However, the only ideal which contains $1_R$ is the entire ring $R$. This means $\phi$ is surjective. From the first isomorphism theorem, we have $\mathbb{Z}/\ker \phi \cong R$. It follows that $R$ is isomorphic to $\mathbb{Z}$ or $\mathbb{Z}_n$ for some integer $n$. (In the former case $R$ has characteristic $0$, and in the latter $R$ has characteristic $n$.) \end{proof} We will now show that every subgroup of $\mathbb{Z}^{d}$ and $\prod_{i=1}^{d} \mathbb{Z}_{n_{i}}$ is generated by at most $d$ elements. We will recall some standard results from abstract algebra which can be found in \cite{DummitFoote}. \begin{thm} Let $R$ be a PID and let $M$ be a free $R$-module of rank $r$. Then every submodule of $M$ is also free and has rank at most $r$. \end{thm} This theorem takes care of $\mathbb{Z}^{d}$. For $\prod_{i=1}^{d} \mathbb{Z}_{n_{i}}$, we need the following corollary which can be derived easily from the above theorem. \begin{cor} Let $R$ be a PID and let $M$ be a finitely generated $R$-module.
If $M$ is generated by $r$ elements, then every submodule of $M$ is generated by at most $r$ elements. \end{cor} \begin{comment} \begin{proof} Since $M$ is generated by $r$ elements, we have a surjective $R$-module map \[ \phi \colon R^r \ensuremath{\rightarrow} M\] where the standard basis elements of $R^r$ are mapped onto a chosen set of $r$ generators of $M$. Let $N$ be an arbitrary submodule of $M$. Now consider $\phi^{-1}(N)$, which is an $R$-submodule of $R^r$. By the above theorem, it is free and has rank $s$ where $s \le r$. The restriction of $\phi$ to $\phi^{-1}(N)$ surjects onto $N$. In other words, we have \[ \phi |_{\phi^{-1}(N)} \colon\ \ \phi^{-1}(N) \cong R^s \ensuremath{\rightarrow} N.\] This shows that $N$ is generated by at most $r$ elements, as desired. \end{proof} \end{comment} \begin{cor} Every subgroup of $(\prod_{i=1}^{d} \mathbb{Z}_{n_{i}}, +)$ and that of $(\mathbb{Z}^{d}, +)$ is generated by at most $d$ elements. \end{cor} \begin{proof} The ring $\prod_{i=1}^{d} \mathbb{Z}_{n_{i}}$ is a $\mathbb{Z}$-module that is clearly generated by $d$ elements; the standard basis forms a generating set. Therefore by the above corollary every subgroup of $\prod_{i=1}^{d} \mathbb{Z}_{n_{i}}$ is generated by at most $d$ elements. The corresponding statement for $\mathbb{Z}^{d}$ is a special case of the above theorem. \end{proof} This corollary gives a natural stratification of the class of all non-zero subgroups of these rings, based on the minimal number of generators of a given subgroup. This stratification will be helpful in our analysis. \section{The ring $\mathbb{Z} \times \mathbb{Z}$} \label{infinite} In this section we determine when a given additive subgroup of the ring $\mathbb{Z}^{d}$ is an ideal. The trivial subgroup which consists of the single element $(0, 0, \cdots, 0)$ is also trivially an ideal, so we will consider non-zero subgroups.
As explained in the previous section, a non-zero subgroup of $\mathbb{Z}^{d}$ is free of rank at most $d$. We will begin with rank 1 subgroups, where the problem is straightforward. \begin{prop} Let $L$ be a subgroup of $\mathbb{Z}^{d}$ generated by $(a_{1}, \cdots, a_{d})$. $L$ is an ideal if and only if all but one of the $a_{i}$'s are zero. \end{prop} \begin{proof} If all but one of the $a_{i}$'s are zero, then $L$ is clearly an ideal in one of the factors of $\mathbb{Z}^{d}$. On the other hand, if we have more than one non-zero $a_{i}$'s, say $a_{i}$ and $a_{j}$, then consider $e_{i} = (0, \cdots, 0, 1, 0, \cdots, 0)$, which has $1$ in the $i$th spot. If $L$ is an ideal, then $e_{i}\cdot(a_{1}, \cdots, a_{d}) = (0, \cdots, 0, a_{i}, 0, \cdots, 0)$ should belong to $L$. But every element of $L$ is an integer multiple of $(a_{1}, \cdots, a_{d})$, and since $a_{j} \neq 0$, the only such multiple whose $j$th coordinate is $0$ is the zero vector, whereas $a_{i} \neq 0$. This is a contradiction, so we are done. \end{proof} More generally, the following is true. \begin{lemma} Let $R$ be an integral domain. A subgroup of $(R, +)$ generated by a non-zero element $a$ is an ideal of $R$ if and only if $R$ is isomorphic to $\mathbb{Z}$ or $\mathbb{Z}_p$ for some prime $p$. \end{lemma} \begin{proof} Let $\langle a \rangle$ be the additive subgroup of $(R, +)$ generated by $a ( \ne 0)$. Let $r$ be an arbitrary element of $R$. If $\langle a \rangle$ is an ideal, then we should have $ra = na$ for some integer $n$. This equation implies that $ (r- n1_R)a = 0$. Since we are working in an integral domain and $a$ is non-zero, we get $r - n1_R = 0$, or $r = n1_R$. Since $r$ was arbitrary, this implies that $(R, +)$ is a cyclic group generated by $1_R$. This means $R$ is isomorphic to $\mathbb{Z}$ or $\mathbb{Z}_n$ for some $n$. But since $R$ is an integral domain, $n$ has to be a prime. \end{proof} Now we move on to subgroups of rank at least $2$ in $\mathbb{Z}^{d}$ where the problem is more interesting. We begin with an example to show the subtlety in the problem.
\begin{example} Consider the ring $\mathbb{Z} \times \mathbb{Z}$ and let $S$ and $T$ denote the following rank two subgroups of $(\mathbb{Z} \times \mathbb{Z}, +)$. \begin{eqnarray*} S & = & \langle (2, 0), (3, 1) \rangle \\ T & = & \langle (2, 0) , (2, 1) \rangle \end{eqnarray*} We claim that $S$ is not an ideal but $T$ is. If $S$ is an ideal, then the element $(0, 1)$ ($= (0, 1) (3, 1)$) should belong to it. That means the pair of equations $2x+ 3y = 0$ and $y = 1$ has to be consistent over $\mathbb{Z}$. However, substituting $y = 1$ forces $2x = -3$, which has no integer solution. On the other hand, $T$ is an ideal in $\mathbb{Z} \times \mathbb{Z}$. In fact, $T = 2 \mathbb{Z} \times \mathbb{Z}$. See Theorem \ref{2x2} for the general result. \end{example} We begin by classifying ideals of $\mathbb{Z}^{d}$ whose additive groups are free of rank $k$. \begin{prop} \label{prop:free} Let $I$ be an ideal in $\mathbb{Z}^{d}$. $I$ is free of rank $k$ $(1 \le k \le d)$ if and only if $I$ is of the form $\prod_{i=1}^{d} d_{i} \mathbb{Z}$ where exactly $k$ of the numbers $d_{i}$ are non-zero. \end{prop} \begin{proof} Recall that every ideal in $\mathbb{Z}^{d}$ is of the form $\prod_{i=1}^{d} d_{i} \mathbb{Z}$, where the $d_{i}$ are integers. The rank of $\prod_{i=1}^{d} d_{i} \mathbb{Z}$ is exactly the number of $d_{i}$s that are non-zero, so we are done. \end{proof} In view of this proposition, to determine when a subgroup of rank $k$ in $\mathbb{Z}^{d}$ is an ideal, it is enough (after deleting the zero coordinates) to consider the problem when $d=k$. The latter is addressed in the next two theorems. We begin with a lemma which we will need in these theorems. Recall that an integer matrix $A$ is said to be unimodular if it is invertible over the ring of integers. This statement is equivalent (as can be seen by Cramer's formula for the inverse) to saying that the determinant of $A$ is either $1$ or $-1$.
In the following lemma, a subgroup of $\mathbb{Z}^{n}$ of rank $n$ will be called a lattice of $\mathbb{Z}^{n}$. \begin{lemma} Let $A$ and $B$ be two $n \times n$ matrices over the integers that are invertible over the rationals. The columns of $A$ and those of $B$ form two bases for a lattice $L$ if and only if there exists a unimodular matrix $X$ such that $AX= B$. \end{lemma} \begin{proof} Since the columns of $A$ and $B$ form a basis for $L$, there exist integer square matrices $X$ and $Y$ such that $AX = B$ and $B Y = A$. Multiplying the first equation on the right hand side by $Y$, we get $AXY = BY$. But $BY = A$, so we get $AXY = A$. Since $A$ is invertible over the rationals, we multiply the inverse (over the rationals) of $A$ on both sides to conclude that $XY = I$. This means $X$ is invertible over $\mathbb{Z}$ (i.e., it is unimodular) and $AX = B$. For the other direction, let $Y$ be the inverse of $X$ over $\mathbb{Z}$, so we have $AX = B$ and $BY = A$. The first equation tells us that the column space of $B$ is contained in that of $A$, and the second equation says that the column space of $A$ is contained in that of $B$. This completes the proof of the lemma. \end{proof} \begin{thm} \label{thmpart1} Let $H$ be a subgroup of rank $k$ in $\mathbb{Z}^{k}$. Let the columns of a $k \times k $ matrix $A$ be a $\mathbb{Z}$-basis for $H$. Then the following are equivalent. \begin{enumerate} \item $H$ is an ideal in $\mathbb{Z}^{k}$ \item There exists a unimodular matrix $U$ such that $AU$ is a diagonal matrix. \item There is a sequence of elementary column operations (over $\mathbb{Z}$) that converts $A$ into a diagonal matrix. \end{enumerate} \end{thm} \begin{proof} Let $H$ (as in the statement of the theorem) be an ideal in $\mathbb{Z}^{k}$. Then by Proposition \ref{prop:free}, $H$ is of the form $\prod_{i=1}^{k} d_{i} \mathbb{Z}$ for some integers $d_{i}$. Since $H$ has rank $k$, all these integers have to be non-zero.
$H$ can be written in this form if and only if the columns of $A$ and those of the diagonal matrix $D = \text{Diagonal}(d_{1}, \cdots, d_{k})$ form a basis for $H$. By the above lemma, this happens if and only if there is a unimodular matrix $U$ such that $AU = D$. Hence we have the equivalence of statements (1) and (2). For the equivalence of (2) and (3), note that multiplying on the right by a unimodular matrix amounts to performing a sequence of elementary column operations over $\mathbb{Z}$; this is the column analogue of the well-known fact over the field of real numbers (the famous reduced row echelon form of an invertible matrix). The reader can verify that the proof works over $\mathbb{Z}$ when properly interpreted. For instance, the role played by the non-zero real numbers is played in $\mathbb{Z}$ by the units $\pm{1}$. That gives the equivalence of statements (2) and (3). \end{proof} Since $\mathbb{Z}$ is a Euclidean domain where we can talk about gcds, we can take the above theorem one step further. Let $A^{*}$ denote the adjoint matrix of $A$. Recall that the formula for the inverse of $A$ (an invertible matrix) is given by $A^{-1} = \frac{1}{\det(A)} A^{*} = \frac{1}{\det(A)} ((a^{*}_{ij}))$. \begin{thm} \label{thmpart2} Let $H$ be a subgroup of rank $k$ in $\mathbb{Z}^{k}$. Let the columns of a $k \times k $ matrix $A$ be a $\mathbb{Z}$-basis for $H$. Then the following are equivalent. \begin{enumerate} \item $H$ is an ideal in $\mathbb{Z}^{k}$ \item There exists a unimodular matrix $U$ such that $AU$ is a diagonal matrix. \item There exist $k$ non-zero integers $d_{1}, d_{2}, \cdots, d_{k}$ such that \begin{enumerate} \item $\det(A) = \pm{d_{1}d_{2}\cdots d_{k}}$ \item $\det(A)/d_{i}$ divides $\gcd(a^{*}_{1i}, \cdots, a^{*}_{ki})$ for all $i$. \end{enumerate} \end{enumerate} \end{thm} \begin{proof} We already saw the equivalence of (1) and (2) in Theorem \ref{thmpart1}. Now we will show that (2) and (3) are equivalent. Let $H$ and $A$ be as in the statement of the theorem.
There exists a unimodular matrix $U$ such that $AU$ is a diagonal matrix if and only if for some diagonal matrix $D = \text{Diagonal}(d_{1}, \cdots, d_{k})$, the matrix $A^{-1}D$ is unimodular. Using Cramer's formula for the inverse, we can equivalently say that \[ X = \frac{1}{\det(A)} A^{*} D\] is unimodular. Since $X$ is unimodular, its determinant is $\pm{1}$. Taking determinants of both sides of the above matrix equation gives (a). Moreover, the entries of $X$ should all be integers. For that to happen, $\det(A)$ should divide all the entries in each of the columns $d_{i}(a^{*}_{1i}, \cdots, a^{*}_{ki})^{T}$, or equivalently (note that by (a) each $d_{i}$ divides $\det(A)$) $\det(A)/d_{i}$ should divide all the entries in each of the columns $(a^{*}_{1i}, \cdots, a^{*}_{ki})^{T}$. Since $\mathbb{Z}$ is a Euclidean domain, the last statement is equivalent to (b). \end{proof} We can tell exactly when the condition (2) of Theorem \ref{thmpart2} holds in the case of $\mathbb{Z} \times \mathbb{Z}$. That gives the following result, which along with the rank $1$ result proved earlier gives a full answer to our problem for the ring $\mathbb{Z} \times \mathbb{Z}$. \begin{thm} \label{2x2} Let $L$ be a rank $2$ subgroup of $\mathbb{Z} \times \mathbb{Z}$ that is generated by vectors $(a, b)$ and $(c, d)$. $L$ is an ideal in $\mathbb{Z} \times \mathbb{Z}$ if and only if $ad-bc$ divides $\gcd(a, c) \cdot \gcd(b, d)$. \end{thm} \begin{proof} Let $L$ be a rank $2$ subgroup of $\mathbb{Z} \times \mathbb{Z}$ that is generated by vectors $(a, b)$ and $(c, d)$, and let $A$ be the $2 \times 2$ matrix with these two columns. From the above theorems, and using the formula for the inverse of a $2 \times 2$ matrix, we conclude that $L$ is an ideal if and only if there exist non-zero integers $d_{1}$ and $d_{2}$ such that \begin{enumerate} \item $ad-bc= \pm{d_{1}d_{2}}$ \item $(ad-bc)/d_{1}$ divides $\gcd(b, d)$ and $(ad-bc)/d_{2}$ divides $\gcd(a, c)$.
\end{enumerate} We claim that non-zero integers $d_{1}$ and $d_{2}$ exist with these properties if and only if $ad-bc$ divides $\gcd(a, c) \cdot \gcd(b, d)$. If $d_{1}$ and $d_{2}$ exist such that (1) and (2) hold, then from (2) we get $(ad-bc)^{2}/ (d_{1}d_{2})$ divides $\gcd(a, c) \cdot \gcd(b, d)$, but $(ad-bc)^{2}/ (d_{1}d_{2}) = ad-bc$. This proves one direction. For the other direction, suppose $ad-bc$ divides $\gcd(a, c) \cdot \gcd(b, d)$. Then an elementary number theory fact tells us we can write $ad-bc$ as $d_{1}d_{2}$ where $d_{1}$ divides $\gcd(a, c)$ and $d_{2}$ divides $\gcd(b, d)$. \end{proof} We now explain how one can arrive at Theorem \ref{2x2} more directly by solving linear equations over $\mathbb{Z}$. Recall that our problem boils down to the following question. \emph{Given an integer matrix $A$ with non-zero determinant, when does there exist a unimodular matrix $X$ such that $AX$ is a diagonal matrix?} To address this, we let $X = (x_{ij})$ and consider the matrix equation \[ \begin{bmatrix} a & c \\ b & d\end{bmatrix} \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22}\end{bmatrix}= \begin{bmatrix} u & 0 \\ 0 & v \end{bmatrix} .\] This gives us the following set of equations: \begin{eqnarray} a x_{12} + c x_{22} & = & 0 \\ b x_{11} + d x_{21} & = & 0 \\ x_{11}x_{22} - x_{12}x_{21} &=& 1 \end{eqnarray} ($X$ is unimodular, so its determinant is either $1$ or $-1$. However, by swapping the columns of $A$ if necessary, we may assume that the determinant of $X$ is $1$ which gives us the third equation.) $L$ is an ideal if and only if the above system of equations has a solution in integers $x_{ij}$. Let us begin with equation 1: $a x_{12} + c x_{22} = 0$ if and only if $a x_{12} = - c x_{22}$. 
Then \[ x_{12} = \frac{-c}{\gcd(a, c)} \alpha \ \ \text{ and } \ \ x_{22} = \frac{a}{\gcd(a, c)} \alpha \ \ \text{for some integer } \alpha.\] Similarly, using equation 2, we get \[ x_{11} = \frac{-d}{\gcd(b, d)} \beta \ \ \text{ and } \ \ x_{21} = \frac{b}{\gcd(b, d)} \beta, \ \ \text{for some integer } \beta.\] Substituting these values in the determinant condition (equation 3), we get \begin{eqnarray*} x_{11}x_{22} - x_{12}x_{21} &=& 1\\ \frac{-d}{\gcd(b, d)} \beta \frac{a}{\gcd(a, c)} \alpha - \frac{-c}{\gcd(a, c)} \alpha \frac{b}{\gcd(b, d)} \beta & = & 1\\ \alpha \beta \left(\frac{-ad}{ \gcd(a, c)\gcd(b, d)} - \frac{-bc}{\gcd(a, c) \gcd(b, d)} \right) & = & 1 \\ - \alpha \beta (ad - bc) & = & \gcd(a, c) \gcd(b, d) \end{eqnarray*} Thus we see from the last equation that the above system of equations is consistent over $\mathbb{Z}$ if and only if $\det(A) = ad-bc$ divides $\gcd(a, c) \gcd(b, d)$ in $\mathbb{Z}$. (In that case, we can take $\alpha = -1$ and $\beta = \frac{\gcd(a, c) \gcd(b, d) }{ad-bc}.)$ This completes the alternative proof of Theorem \ref{2x2}. The following corollary follows immediately from Theorem \ref{2x2}. \begin{cor} Let $(a, b)$ and $(c, d)$ be two vectors in $\mathbb{Z} \times \mathbb{Z}$ and $L$ be the lattice generated by these two vectors. \begin{enumerate} \item If $ad-bc= \pm 1$, then $L$ is an ideal in $\mathbb{Z} \times \mathbb{Z}$. \item If $ad-bc$ is a prime, then $L$ is an ideal if and only if $ad-bc$ divides either $\gcd(a, c)$ or $\gcd(b, d)$. \end{enumerate} \end{cor} \section{The ring $\mathbb{Z}_n \times \mathbb{Z}_m$} \label{finite} Let $n$ and $m$ be positive integers and consider the ring $\mathbb{Z}_n \times \mathbb{Z}_m$. Our problem is to determine when a subgroup of $(\mathbb{Z}_n \times \mathbb{Z}_m, +)$ is an ideal. We have seen that a non-zero subgroup of $\mathbb{Z}_n \times \mathbb{Z}_m$ is generated by either one or two elements, so we have two cases to consider. 
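Before turning to the finite rings, the $\mathbb{Z} \times \mathbb{Z}$ criterion of Theorem \ref{2x2} lends itself to a quick computational sanity check. The sketch below is our own illustration, not part of the paper: it tests membership in $L$ by solving $m(a,b) + n(c,d) = (x,y)$ with Cramer's rule, uses the fact that $L$ is an ideal exactly when it is closed under multiplication by $(1,0)$ and $(0,1)$, and compares this direct definition with the divisibility criterion over a small box of generator pairs.

```python
from math import gcd

def in_lattice(x, y, a, b, c, d):
    """Test whether (x, y) lies in the subgroup of Z x Z generated by
    (a, b) and (c, d), assuming det = a*d - b*c is non-zero.  Solving
    m*(a, b) + n*(c, d) = (x, y) by Cramer's rule, the solution is
    integral iff det divides both x*d - y*c and a*y - b*x."""
    det = a * d - b * c
    return (x * d - y * c) % det == 0 and (a * y - b * x) % det == 0

def is_ideal(a, b, c, d):
    """L is an ideal of Z x Z iff it is closed under multiplication by
    (1, 0) and (0, 1), i.e. iff (a,0), (0,b), (c,0), (0,d) all lie in L;
    additive closure then handles an arbitrary multiplier (r, s)."""
    return all(in_lattice(x, y, a, b, c, d)
               for (x, y) in [(a, 0), (0, b), (c, 0), (0, d)])

def criterion(a, b, c, d):
    """The divisibility criterion of Theorem [2x2]."""
    det = a * d - b * c
    return (gcd(a, c) * gcd(b, d)) % det == 0

# Compare the two notions over a small box of generator pairs.
for a in range(-4, 5):
    for b in range(-4, 5):
        for c in range(-4, 5):
            for d in range(-4, 5):
                if a * d - b * c != 0:
                    assert is_ideal(a, b, c, d) == criterion(a, b, c, d)
```

For instance, the lattice generated by $(2,0)$ and $(0,3)$ (determinant $6$, which divides $\gcd(2,0)\cdot\gcd(0,3) = 6$) is the ideal $2\mathbb{Z} \times 3\mathbb{Z}$, while the lattice generated by $(1,1)$ and $(0,2)$ fails the criterion and indeed is not closed under multiplication by $(1,0)$.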
First, consider a subgroup $L$ in the ring $\mathbb{Z}_n \times \mathbb{Z}_m$ that is generated by $(a, b)$. If either $a = 0$ in $\mathbb{Z}_n$ or $b = 0$ in $\mathbb{Z}_m$, the problem is trivial because $L$ is simply an ideal in one of the components of $\mathbb{Z}_n \times \mathbb{Z}_m$. So let us assume that both $a$ and $b$ are non-zero in their respective component rings. Then we have the following theorem. \begin{thm} \label{gcd} Let $1 \le a < n$ and $1 \le b < m$. The subgroup generated by $(a, b)$ in the ring $\mathbb{Z}_n \times \mathbb{Z}_m$ is an ideal if and only if \[ \gcd\left( \frac{n}{\gcd(a, n)}, \frac{m}{\gcd(b,m)}\right) = 1.\] \end{thm} \begin{proof} Since our rings are principal ideal rings, every ideal in $\mathbb{Z}_n \times \mathbb{Z}_m$ is of the form $d_1\mathbb{Z}_n \times d_2\mathbb{Z}_m$, where $d_1$ and $d_2$ are some integers. For brevity we will denote this ideal by $ \langle d_1 \rangle \times \langle d_2 \rangle $. Returning to our problem, let us assume that the line $L$ generated by $(a, b)$ is an ideal of $\mathbb{Z}_n \times \mathbb{Z}_m$. From above, we have \[ L =\langle d_1 \rangle \times \langle d_2 \rangle. \] Consider the restrictions to $L$ of the natural projection maps: $\pi_1 \colon \mathbb{Z}_n \times \mathbb{Z}_m \ensuremath{\rightarrow} \mathbb{Z}_n$ and $\pi_2 \colon \mathbb{Z}_n \times \mathbb{Z}_m \ensuremath{\rightarrow} \mathbb{Z}_m$. We will compute $\pi_1(L)$ in two different ways. On the one hand, since $L = \langle d_1 \rangle \times \langle d_2 \rangle$, we have $\pi_1(L) = \langle d_1\rangle$. On the other hand, $L$ is generated by $(a,b)$, so the first components of the elements of $L$ are precisely the multiples of $a$. Therefore $\pi_1(L) = \langle a \rangle$. This shows that $\langle a \rangle= \langle d_1 \rangle $. Similarly, working with the second projection map, we conclude that $\langle b \rangle = \langle d_2 \rangle$.
To summarize, $L$ spanned by $(a, b)$ is an ideal if and only if \[ \langle (a, b) \rangle = \langle a \rangle \times \langle b \rangle.\] The inclusion $ \langle (a, b) \rangle \subseteq \langle a \rangle \times \langle b \rangle$ is obvious. Therefore, equality holds if and only if both sides have the same cardinality. These cardinalities are given by the following formulas ($\text{ord}(x)$ denotes the additive order of $x$). \begin{eqnarray*} |\langle (a, b) \rangle | & = & \text{lcm}(\text{ord}(a), \text{ord}(b)) = \frac{\text{ord}(a) \, \text{ord}(b)}{\gcd(\text{ord}(a), \text{ord}(b)) }\\ | \langle a \rangle \times \langle b \rangle | & = & \text{ord}(a) \, \text{ord}(b) \end{eqnarray*} Comparing these two expressions, we see that the subgroup $L$ spanned by $(a, b)$ in $\mathbb{Z}_n \times \mathbb{Z}_m$ is an ideal if and only if $\gcd(\text{ord}(a), \text{ord}(b)) = 1$. The theorem now follows from the fact that the order of an element $c$ in $(\mathbb{Z}_s, +)$ is given by $ \frac{s}{\gcd(c, s)}$. \end{proof} \begin{rem} When $m$ and $n$ are relatively prime, Theorem \ref{gcd} implies that every line in $\mathbb{Z}_n \times \mathbb{Z}_m$ is an ideal. This is indeed the case because for relatively prime integers $m$ and $n$ we have $\mathbb{Z}_n \times \mathbb{Z}_m \cong \mathbb{Z}_{nm}$. \end{rem} More generally, the following theorem is true: \begin{thm} The subgroup generated by the element $(a_{1}, a_{2}, \cdots, a_{k})$ in $\mathbb{Z}_{n_{1}} \times \mathbb{Z}_{n_{2}} \times \cdots \times \mathbb{Z}_{n_{k}}$ is an ideal if and only if \[ \prod_{1 \le i < j \le k} \gcd\left( \frac{n_{i}}{\gcd(a_{i}, n_{i})}, \frac{n_{j}}{\gcd(a_{j}, n_{j})} \right) = 1.
\] \end{thm} \begin{proof} From the proof of Theorem \ref{gcd}, it follows that the subgroup generated by the element $(a_{1}, a_{2}, \cdots, a_{k})$ in $\mathbb{Z}_{n_{1}} \times \mathbb{Z}_{n_{2}} \times \cdots \times \mathbb{Z}_{n_{k}}$ is an ideal if and only if \[ \prod_{i} \text{ord}(a_{i}) = \underset{i}{\text{lcm}} \; \text{ord}(a_{i}).\] Showing that this last equation holds if and only if \[ \prod_{1 \le i < j \le k} \gcd(\text{ord}(a_{i}), \text{ord}(a_{j})) = 1 \] is a routine exercise. Using the formula mentioned above for the order of an element in $\mathbb{Z}_{s}$, we then obtain the condition given in the statement of the theorem. \end{proof} We now investigate when a subgroup of $\mathbb{Z}_n \times \mathbb{Z}_m$ generated by two elements is an ideal. To this end, the following theorem from group theory due to Goursat will be useful. We will also use this theorem in the next section where we compute some probabilities. \begin{thm} (Goursat) \cite{jp} \label{Goursat} Let $G_1$ and $G_2$ be any two groups. There exists a bijection between the set $S$ of all subgroups of $G_1 \times G_2$ and the set $T$ of all $5$-tuples $(A_1, B_1, A_2, B_2, \phi)$ where $A_i$ is a subgroup of $G_i$, $B_i$ is a normal subgroup of $A_i$, and $\phi$ is a group isomorphism from $A_1/B_1$ to $A_2/B_2$. \end{thm} Let $\pi_i \colon G_1 \times G_2 \ensuremath{\rightarrow} G_i$ denote the projection homomorphisms. The desired bijection in this theorem is given as follows. For a subgroup $U$ of $G_1 \times G_2$, we define a $5$-tuple $(A_{U_1}, B_{U_1}, A_{U_2}, B_{U_2}, \phi_U)$ where \begin{eqnarray*} A_{U_1} & = & \text{Im}(\pi_1|_U)\\ B_{U_1} & = & \pi_1(\ker(\pi_2|_U))\\ A_{U_2} & = & \text{Im}(\pi_2|_U)\\ B_{U_2} & = & \pi_2(\ker(\pi_1|_U)) \text{ and }\\ \phi_U(a_1B_{U_1}) & = & a_2B_{U_2} \text{ when } (a_1, a_2) \in U.
\end{eqnarray*} Conversely, given a $5$-tuple $(A_1, B_1, A_2, B_2, \phi)$, the corresponding subgroup $U$ of $G_1 \times G_2$ is given by \[U_{\phi} = \{ (a_1, a_2) \in A_1 \times A_2 \, | \, \phi(a_1 B_1) = a_2 B_2\}.\] \begin{cor} \label{cor:Goursat} Let $G_1 \times G_2$ be a finite group and let $(A_{U_1}, B_{U_1}, A_{U_2}, B_{U_2}, \phi_U)$ correspond to the subgroup $U$ of $G_1 \times G_2$. Then we have \[ | U | = |A_{U_1}||B_{U_2}|. \] \end{cor} \begin{proof} It is clear from the correspondence in Goursat's theorem that \[|U| = |A_{U_1}/B_{U_1}| |B_{U_1}||B_{U_2}| = |A_{U_1}||B_{U_2}|.\] \end{proof} Given elements $\alpha$ and $\beta$ in $\mathbb{Z}_n$, consider the linear map $\phi_{\alpha, \beta} \colon \mathbb{Z} \times \mathbb{Z} \ensuremath{\rightarrow} \mathbb{Z}_n$ defined by $\phi_{\alpha, \beta}(x, y) = \alpha x + \beta y$. Then we have the following theorem. \begin{thm} \label{thm:Goursat} The subgroup of $\mathbb{Z}_n \times \mathbb{Z}_m$ generated by $(a, b)$ and $(c, d)$ is an ideal of $\mathbb{Z}_n \times \mathbb{Z}_m$ if and only if \[(\ker \phi_{a, c})(\ker \phi_{b, d}) = \mathbb{Z} \times \mathbb{Z}. \] \end{thm} \begin{proof} Let $H$ denote the subgroup generated by $(a, b)$ and $(c, d)$ in $\mathbb{Z}_n \times \mathbb{Z}_m$. Suppose $H$ is an ideal in $\mathbb{Z}_n \times \mathbb{Z}_m$. Then there exist $\alpha$ in $\mathbb{Z}_n$ and $\beta$ in $\mathbb{Z}_m$ such that $H = \langle \alpha \rangle \times \langle \beta \rangle$. Taking projection maps, we can see that $\alpha = \gcd(a, c) \mod n$ and $\beta = \gcd(b, d) \mod m$. Thus $H$ is an ideal if and only if $\langle (a, b), (c, d) \rangle = \langle \gcd(a, c) \rangle \times \langle \gcd(b, d) \rangle$. As in the proof of Theorem \ref{gcd}, the left hand side is easily seen to be contained in the right hand side, and we have equality if and only if both sides have the same cardinality. The cardinality of the right hand side is $\text{ord}( \gcd(a, c)) \text{ord}( \gcd(b, d))$.
The cardinality of the left hand side can be computed using Corollary \ref{cor:Goursat}: it is given by $\text{ord}( \gcd(a, c)) |\pi_2(\ker \pi_1|_H)|$. Equating these two expressions, we conclude that $H$ is an ideal if and only if $\text{ord}( \gcd(b, d)) = |\pi_2(\ker \pi_1|_H)|$. The left hand side of this equation is the cardinality of the set \[ S = \{ bx + dy \, | \, x, y \in \mathbb{Z} \} \subseteq \mathbb{Z}_{m},\] and the right hand side is the cardinality of the set \[ T = \{ bx + dy\, | \, x, y \in \mathbb{Z} \text{ such that } ax+cy = 0 \in \mathbb{Z}_n\} \subseteq \mathbb{Z}_{m}.\] $S$ and $T$ have the same cardinality precisely when the image of $\phi_{b,d} \colon \mathbb{Z} \times \mathbb{Z} \ensuremath{\rightarrow} \mathbb{Z}_m$ is the same as the image of $\phi_{b, d}$ restricted to the kernel of $\phi_{a, c} \colon \mathbb{Z} \times \mathbb{Z} \ensuremath{\rightarrow} \mathbb{Z}_n$. That happens exactly when $\ker(\phi_{a,c})$ intersects every coset in $(\mathbb{Z} \times \mathbb{Z})/\ker(\phi_{b,d})$, which is true if and only if $(\ker \phi_{a, c})(\ker \phi_{b, d}) = \mathbb{Z} \times \mathbb{Z}.$ \end{proof} We can get a finite-type condition that is equivalent to the one given in Theorem \ref{thm:Goursat}. To get this, set $l = \text{lcm}(m, n)$. Then given elements $\alpha$ and $\beta$ in $\mathbb{Z}_n$, define the linear map $\psi_{\alpha, \beta} \colon \mathbb{Z}_{l} \times \mathbb{Z}_{l} \ensuremath{\rightarrow} \mathbb{Z}_n$ as $\psi_{\alpha, \beta}(x, y) = \alpha x + \beta y$. We now have the following corollary. \begin{cor} The subgroup of $\mathbb{Z}_n \times \mathbb{Z}_m$ generated by $(a, b)$ and $(c, d)$ is an ideal of $\mathbb{Z}_n \times \mathbb{Z}_m$ if and only if \[|(\ker \psi_{a, c})(\ker \psi_{b, d})| = nm. \] \end{cor} \begin{proof} This follows from the proof of the previous theorem. Note that the maps $\phi_{a,c}$ and $\phi_{b, d}$ factor through $\psi_{a,c}$ and $\psi_{b, d}$ respectively.
\end{proof} Goursat's theorem for more than two components \cite{genGoursat} has a very complicated structure and, in particular, is not helpful for our problem. \section{Probability for a subgroup to be an ideal} As one would expect, the above results suggest that a subgroup of a ring is rarely an ideal. Now we will make this precise by computing explicitly the probability that a randomly chosen subgroup of $\mathbb{Z}_n \times \mathbb{Z}_m$ is an ideal, using the approach and results from \cite{jp}. Let $P_R$ denote the probability that a randomly chosen subgroup of a finite ring $R$ is an ideal. This probability is given by \[ P_R = \frac{\text{ total number of ideals in } R} { \text{total number of subgroups in }(R, +)}.\] Our interest is in the ring $\mathbb{Z}_n \times \mathbb{Z}_m$. If either $n$ or $m$ is one, then clearly $P_R =1$. So we will assume that $n > 1$ and $m >1$. Let $S = \{ p_1, \cdots, p_k\}$ denote the set of all distinct primes which divide $mn$. Then the prime factorizations of $m$ and $n$ are given by \[ m = p_1^{r_1} \cdots p_k^{r_k} \text{ and } n = p_1^{s_1} \cdots p_k^{s_k},\] where the exponents are non-negative integers, and the Chinese remainder theorem gives the decomposition \[\mathbb{Z}_n \times \mathbb{Z}_m = \left( \mathbb{Z}_{p_1^{r_1}} \times \mathbb{Z}_{p_1^{s_1}} \right) \times \cdots \times \left( \mathbb{Z}_{p_k^{r_k}} \times \mathbb{Z}_{p_k^{s_k}} \right). \] \begin{lemma} \label{reduction} \[P_{\mathbb{Z}_n \times \mathbb{Z}_m} = \prod_{i=1}^k P_{\mathbb{Z}_{p_i^{r_i}} \times \mathbb{Z}_{p_i^{s_i}}}\] \end{lemma} \begin{proof} This follows from two facts. First, note that every ideal $I$ in $\mathbb{Z}_n \times \mathbb{Z}_m$ is of the form $I = \prod_{i=1}^k I_i$ where $I_i$ is an ideal of $\mathbb{Z}_{p_i^{r_i}} \times \mathbb{Z}_{p_i^{s_i}}$.
Next we use a theorem of Suzuki \cite{suz} which says that if $G_{1}$ and $G_{2}$ are two finite groups with relatively prime orders, then every subgroup of $G_{1} \times G_{2}$ is of the form $H_{1} \times H_{2}$, where $H_{i}$ is a subgroup of $G_{i}$. In particular, every subgroup $H$ of $(\mathbb{Z}_n \times \mathbb{Z}_m, +)$ is of the form $\prod_{i=1}^k H_i$ where $H_i$ is a subgroup of $\mathbb{Z}_{p_i^{r_i}} \times \mathbb{Z}_{p_i^{s_i}}$. Then we have the following equations, which complete the proof of the lemma. \begin{eqnarray*} P_{\mathbb{Z}_n \times \mathbb{Z}_m} & = & \frac{\text{ total number of ideals in } \mathbb{Z}_n \times \mathbb{Z}_m} { \text{total number of subgroups in }(\mathbb{Z}_n \times \mathbb{Z}_m, +)} \\ & = & \prod_{i=1}^k \frac{\text{ total number of ideals in } \mathbb{Z}_{p_i^{r_i}} \times \mathbb{Z}_{p_i^{s_i}}} { \text{total number of subgroups in }(\mathbb{Z}_{p_i^{r_i}} \times \mathbb{Z}_{p_i^{s_i}}, +)}\\ & = & \prod_{i=1}^k P_{\mathbb{Z}_{p_i^{r_i}} \times \mathbb{Z}_{p_i^{s_i}}}. \end{eqnarray*} \end{proof} In view of this lemma, it is enough to compute $P_{\mathbb{Z}_{p_i^{r_i}} \times \mathbb{Z}_{p_i^{s_i}}}$. We will do this in the next two lemmas, beginning by computing the number of ideals. \begin{lemma} The number of ideals in $\mathbb{Z}_{p^{r}} \times \mathbb{Z}_{p^{s}}$ is equal to $(r+1)(s+1)$. \end{lemma} \begin{proof} Every ideal in $\mathbb{Z}_{p^{r}} \times \mathbb{Z}_{p^{s}}$ is of the form $ a\mathbb{Z}_{p^{r}} \times b \mathbb{Z}_{p^{s}}$, where $a$ is a divisor of $p^{r}$ and $b$ is a divisor of $p^{s}$. This gives $(r+1)(s+1)$ for the total number of ideals. \end{proof} Next we have to compute the number of subgroups in $\mathbb{Z}_{p^{r}} \times \mathbb{Z}_{p^{s}}$. This number can be obtained using the above-mentioned Goursat's theorem and can be found in \cite{jp}.
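Both counts in this section are small enough to confirm by exhaustive search. Since every subgroup of $\mathbb{Z}_n \times \mathbb{Z}_m$ is generated by at most two elements, one can enumerate all subgroups as additive closures of pairs of generators and then test which of them are ideals. The sketch below is our own illustration (not from the paper); it verifies the ideal count $(r+1)(s+1)$ for $\mathbb{Z}_2 \times \mathbb{Z}_4$, i.e. $p = 2$, $r = 1$, $s = 2$.

```python
from itertools import product

def closure(gens, n, m):
    """Subgroup of Z_n x Z_m generated by gens (closure under addition)."""
    H = {(0, 0)}
    changed = True
    while changed:
        changed = False
        for g in gens:
            for h in list(H):
                s = ((g[0] + h[0]) % n, (g[1] + h[1]) % m)
                if s not in H:
                    H.add(s)
                    changed = True
    return frozenset(H)

def subgroups(n, m):
    """All subgroups of Z_n x Z_m; each is generated by at most two elements."""
    elems = list(product(range(n), range(m)))
    return {closure([g1, g2], n, m) for g1 in elems for g2 in elems}

def is_ideal(H, n, m):
    """H is an ideal iff (x, 0) and (0, y) lie in H for every (x, y) in H,
    i.e. H is closed under multiplication by (1, 0) and (0, 1)."""
    return all((x, 0) in H and (0, y) in H for (x, y) in H)

# Z_2 x Z_4: the lemma predicts (1+1)(2+1) = 6 ideals.
subs = subgroups(2, 4)
assert sum(1 for H in subs if is_ideal(H, 2, 4)) == 6
assert len(subs) == 8  # total number of subgroups of Z_2 x Z_4
```

The two subgroups of $\mathbb{Z}_2 \times \mathbb{Z}_4$ that are not ideals are $\langle (1,1) \rangle$ and $\langle (1,2) \rangle$; the same enumeration reproduces, for example, the $6$ subgroups and $4$ ideals of $\mathbb{Z}_3 \times \mathbb{Z}_3$.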
\begin{lemma} \cite{jp} The total number of subgroups of $\mathbb{Z}_{p^{r}} \times \mathbb{Z}_{p^{s}}$ ($r \le s $) is given by \[\frac{p^{r+1}[(s-r+1)(p-1)+2]- [(s+r+3)(p-1)+2]}{(p-1)^2}\] \end{lemma} \noindent \emph{Proof Sketch:} Goursat's theorem can be greatly simplified in the case under consideration. There is a unique subgroup of order $p^k$ in $\mathbb{Z}_{p^{r}}$ for any $0 \le k \le r$ and these subgroups form a linear chain. Moreover, the group of automorphisms of $\mathbb{Z}_{p^k}$ corresponds to the units in this ring, and we have $p^k - p^{k-1}$ of them. We now have to count the 5-tuples $(A_{1}, B_{1}, A_{2}, B_{2}, \phi)$ which correspond to subgroups in Goursat's theorem. If $|A_i/B_i| = 1$, the number of subgroups is $(r+1)(s+1)$ because we have $r+1$ choices for $A_1/B_1$ and $s+1$ choices for $A_2/B_2$ (clearly $\phi$ is trivial). If $|A_i/B_i| = p^k$ for $1 \le k \le r$, we have $r-k+1$ choices for $A_1/B_1$ and $s-k+1$ choices for $A_2/B_2$, and finally $p^k - p^{k-1}$ choices for $\phi$, so in this case we have $(r-k+1)(s-k+1)(p^k-p^{k-1})$ subgroups. In total we have \[ (r+1)(s+1) + \sum_{k=1}^r (r-k+1)(s-k+1) (p^k-p^{k-1})\] subgroups. The rest is straightforward algebra; see \cite{jp}. \vskip 3mm Combining the above lemmas, we get our formulas for $P_{\mathbb{Z}_{p^r} \times \mathbb{Z}_{p^s}}$ and $P_{\mathbb{Z}_n \times \mathbb{Z}_m}$. \begin{thm} \label{counting} Let $p$ be a prime and let $r, s \, (r \le s), n$ and $m$ be positive integers. \[P_{\mathbb{Z}_{p^{r}} \times \mathbb{Z}_{p^{s}}} = \frac{(r+1)(s+1)(p-1)^2}{p^{r+1}[(s-r+1)(p-1)+2]- [(s+r+3)(p-1)+2]} \] \[ P_{\mathbb{Z}_n \times \mathbb{Z}_m} = \prod_{i=1}^k \frac{(r_i+1)(s_i+1)(p_i-1)^2}{p_i^{r_i+1}[(|s_i-r_i|+1)(p_i-1)+2]- [(s_i+r_i+3)(p_i-1)+2]}\] \end{thm} We now record two special cases which can be derived from Theorem \ref{counting} using routine algebra. \begin{cor} Let $p$ be a prime and let $r$ be a positive integer.
\[P_{\mathbb{Z}_{p^{r}} \times \mathbb{Z}_{p^{r}}} = \frac{(r+1)^2(p-1)^2}{p^{r+1}(p+1) -2r(p-1)-3p+1} \] \[P_{\mathbb{Z}_p \times \mathbb{Z}_p} = \frac{4}{p+3}\] \end{cor} It is clear from the above expressions that these probabilities are small, as expected. For instance, by choosing a large prime the value of $P_{\mathbb{Z}_p \times \mathbb{Z}_p}$ can be made arbitrarily small. Similarly, for a fixed prime $p$, the numerator of $P_{\mathbb{Z}_{p^{r}} \times \mathbb{Z}_{p^{r}}}$ is a polynomial function in $r$ whereas the denominator is an exponential function in $r$. Thus $\lim_{r \ensuremath{\rightarrow} \infty} P_{\mathbb{Z}_{p^{r}} \times \mathbb{Z}_{p^{r}}} = 0$. The main obstruction in generalizing these formulas to the rings $R = \prod_{i=1}^k \mathbb{Z}_{n_i}$ is the lack of a closed formula for the number of subgroups in $(\prod_{i=1}^k \mathbb{Z}_{p^{r_i}}, +)$ when $k \ge 3$. However, when the integers $n_i$ are all square-free, one can compute $P_R$ easily. This is because Lemma \ref{reduction} helps us to reduce the problem of computing $P_R$ to the problem of computing $P_{S}$ where $S = \prod_{i=1}^r \mathbb{Z}_p$ for some prime $p$ and positive integer $r$ ($\le k$). The latter is a vector space over $\mathbb{F}_p$ where subgroups are the same as vector subspaces. The number of subspaces in $(S, +)$ is given by the well-known formula \[ \sum_{i=0}^{r} {r \choose i}_{p}\] where ${r \choose i}_{p}$ is the Gaussian binomial coefficient which counts the number of $i$-dimensional subspaces of $\mathbb{F}_{p}^{r}$. Explicitly its value is given by \[{r \choose i}_{p} = \frac{(p^{r}-1)(p^{r}-p)\cdots(p^{r}-p^{i-1})}{(p^{i}-1)(p^{i}-p)\cdots(p^{i}-p^{i-1})}.\] Since the number of ideals in $S$ is $2^r$, we get \begin{prop} \[P_{\mathbb{Z}_{p}^{r}} = \frac{2^r}{\sum_{i=0}^{r} {r \choose i}_{p}}.\] \end{prop}
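As a closing illustration (ours, not the paper's), the product formula of Theorem \ref{counting} can be checked against a brute-force enumeration of subgroups and ideals for small moduli. Every subgroup of $\mathbb{Z}_n \times \mathbb{Z}_m$ is generated by at most two elements, which keeps the search small.

```python
from itertools import product
from fractions import Fraction

def closure(gens, n, m):
    """Subgroup of Z_n x Z_m generated by gens (closure under addition)."""
    H = {(0, 0)}
    changed = True
    while changed:
        changed = False
        for g in gens:
            for h in list(H):
                s = ((g[0] + h[0]) % n, (g[1] + h[1]) % m)
                if s not in H:
                    H.add(s)
                    changed = True
    return frozenset(H)

def counts(n, m):
    """(number of subgroups, number of ideals) of Z_n x Z_m by brute force."""
    elems = list(product(range(n), range(m)))
    subs = {closure([g1, g2], n, m) for g1 in elems for g2 in elems}
    ideals = [H for H in subs
              if all((x, 0) in H and (0, y) in H for (x, y) in H)]
    return len(subs), len(ideals)

def factor(n):
    """Prime factorization of n as a dict prime -> exponent."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def P_formula(n, m):
    """The product formula of Theorem [counting], as an exact fraction."""
    fn, fm = factor(n), factor(m)
    result = Fraction(1)
    for p in set(fn) | set(fm):
        r, s = sorted((fn.get(p, 0), fm.get(p, 0)))
        num = (r + 1) * (s + 1) * (p - 1) ** 2
        den = (p ** (r + 1) * ((s - r + 1) * (p - 1) + 2)
               - ((s + r + 3) * (p - 1) + 2))
        result *= Fraction(num, den)
    return result

# Compare the exact formula with exhaustive enumeration.
for n, m in [(2, 2), (2, 4), (3, 3), (4, 4), (6, 6)]:
    subs, ideals = counts(n, m)
    assert Fraction(ideals, subs) == P_formula(n, m)
```

For example, $\mathbb{Z}_2 \times \mathbb{Z}_2$ has $5$ subgroups, of which $4$ are ideals, matching $P_{\mathbb{Z}_p \times \mathbb{Z}_p} = 4/(p+3) = 4/5$.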
https://arxiv.org/abs/1408.5849
Distinguishing extension numbers for $\mathbf R^n$ and $S^n$
In the setting of a group $\Gamma$ acting faithfully on a set $X$, a $k$-coloring $c: X\rightarrow \{1, 2, ..., k\}$ is called $\Gamma$-distinguishing if the only element of $\Gamma$ that fixes $c$ is the identity element. The distinguishing number $D_\Gamma(X)$ is the minimum value of $k$ such that a $\Gamma$-distinguishing $k$-coloring of $X$ exists. Now, fixing $k= D_\Gamma(X)$, a subset $W\subset X$ with trivial pointwise stabilizer satisfies the precoloring extension property $P(W)$ if every precoloring $c: X-W\rightarrow \{1, ..., k\}$ can be extended to a $\Gamma$-distinguishing $k$-coloring of $X$. The distinguishing extension number $\text{ext}_D(X, \Gamma)$ is then defined to be the minimum $n$ such that for all applicable $W\subset X$, $|W|\geq n$ implies that $P(W)$ holds. In this paper, we compute $\text{ext}_D(X, \Gamma)$ in two particular instances: when $X = S^1$ is the unit circle and $\Gamma = \text{Isom}(S^1) = O(2)$ is its isometry group, and when $X = V(C_n)$ is the set of vertices of the cycle of order $n$ and $\Gamma = \text{Aut}(C_n) = D_n$, the dihedral group of a regular $n$-gon. This resolves two conjectures of Ferrara, Gethner, Hartke, Stolee, and Wenger. In the case of $X=\mathbf R^2$, we prove that $\text{ext}_D(\mathbf R^2, SE(2))<\infty$, which is consistent with (but does not resolve) another conjecture of Ferrara et al. On the other hand, we also prove that for all $n\geq 3$, $\text{ext}_D(S^{n-1}, O(n)) = \infty$, and for all $n\geq 3$, $\text{ext}_D(\mathbf R^n, E(n))=\infty$, disproving two other conjectures from the same authors.
\section{Introduction} Let $\Gamma$ be a group which acts faithfully on a set $X$. As defined by Tymoczko in [7], a $k$-coloring $c: X\rightarrow \{1, ..., k\}$ is distinguishing with respect to $\Gamma$ if the only $\gamma\in \Gamma$ for which $c\circ \gamma = c$ is the identity element (that is, no nontrivial $\gamma\in \Gamma$ preserves the coloring). The distinguishing number of $(X, \Gamma)$, denoted $D_\Gamma(X)$, is defined to be the smallest $k$ such that $X$ has a $\Gamma$-distinguishing $k$-coloring. A special case of this, introduced by Albertson and Collins in [1], takes $X = V(G)$ to be the vertex set of a graph, and $\Gamma = \Aut(G)$ to be the automorphism group of the graph. Of particular interest is the case $G = C_n$, the cycle of order $n$. In $[1]$, it is proved that $D(C_n) = 2$ for all $n\geq 6$, while $D(C_3) = D(C_4)=D(C_5)=3$. \\\\ In [2], Ferrara, Gethner, Hartke, Stolee, and Wenger introduce a refinement of the distinguishing number problem, in the form of extending precolorings. For the rest of the paper, we fix $k= D_\Gamma(X)$. Then, given a subset $W\subset X$ and any precoloring $c: X-W\rightarrow \{1, ..., k\}$, we can ask if it is possible to extend $c$ to a $\Gamma$-distinguishing coloring $c^*: X\rightarrow \{1, ..., k\}$. For convenience, we introduce the following notation. \\\\ \textbf{Definition.} For $W\subset X$ such that the pointwise stabilizer $\stab_\Gamma(W)$ is trivial, we define the \textit{precoloring extension property} $P(W)$ as follows: $P(W)$ holds if and only if every precoloring $c: X-W\rightarrow \{1, 2, ..., k\}$ can be extended to a distinguishing $k$-coloring of $X$. \\\\ Based on this notion, the \textit{distinguishing extension number} is introduced in [2].
\\\\ \textbf{Definition.} The \textit{distinguishing extension number} $\text{ext}_D(X, \Gamma)$ is equal to the smallest value of $n$ such that for all $W\subset X$, if $|W|\geq n$ and $W$ is not pointwise stabilized by any nontrivial $\gamma\in \Gamma$, then $P(W)$ holds. \\\\ In this paper, we investigate $\ext_D(\R^n, \text{Isom}(\R^n))$ and $\ext_D(S^n, \text{Isom}(S^n))$, where $S^n$ denotes the unit $n$-sphere; for the rest of the paper, we use $O(n)$ to denote $\text{Isom}(S^{n-1})$ and $E(n)$ to denote $\text{Isom}(\R^n)$. \\\\ In the first half of this paper, we compute $\text{ext}_D(X, \Gamma)$ in two particular cases. One case consists of the graph setting mentioned above; in particular, for $X = V(C_n)$ and $\Gamma = \Aut(C_n) = D_n$, where $C_n$ is the cycle of order $n$ and $D_n$ is the dihedral group of a regular $n$-gon. As mentioned earlier, we already know that for $n\geq 6$, $C_n$ has distinguishing number equal to $2$. The other case consists of $X = S^1$ and $\Gamma = O(2)$; it is easy to see that $D_{O(2)}(S^1) = 2$ as well. In $[2]$, it was proved that $\text{ext}_D(\R, E(1)) = 4$, and some partial results on $C_n$ and $S^1$ were given. \\\\ \textbf{Theorem} [2]. \textit{If $n\geq 6$ is not divisible by $2, 3$, or $5$, then $\ext_D(C_n) = 4$. Furthermore, $\ext_D(S^1, O(2))\leq 16$.} \\\\ The authors of [2] also conjectured the exact values of the extension numbers for $S^1$ and the remaining $C_n$, which I prove correct here. \\\\ \textbf{Theorem 1.} \textit{Let $n\geq 6$. If $4$ and $5$ do not divide $n$, then $\ext_D(C_n) = 4$. If $4\mid n$ but $5\nmid n$, then $\ext_D(C_n) = 5$. If $5\mid n$, then $\ext_D(C_n) = 6$. Finally, $\ext_D(S^1, O(2)) = 6$.} \\\\ The proof of this theorem involves considering general subsets $W\subset S^1$ of cardinality $4$ or $5$ and investigating when $P(W)$ holds. In order to prove Theorem 1, we prove a somewhat stronger characterization of when $P(W)$ holds in this situation.
\\\\ \textbf{Proposition.} \\ $i)$ \textit{Let $W\subset S^1$ and $|W| = 5$. Then, $P(W)$ holds unless $W$ is the set of vertices of a regular pentagon.} \\ $ii)$ \textit{Let $W\subset S^1$, $|W|=4$, and suppose that the four orbits of elements of $W$ under the translation of order $5$ are distinct. Then, $P(W)$ holds unless $W$ is the set of vertices of a square.} \\\\ The proposition tells us that the only obstructions to extending all precolorings of $S^1-\{\text{four points}\}$ are the obstructions due to symmetries of $C_4$ and $C_5$. \\\\ In the second half of the paper, we consider what happens in higher dimensions; we present fairly concrete examples to demonstrate that for all $n\geq 3$, $\ext_D(\R^n, E(n))=\infty$ and $\ext_D(S^n, O(n+1))=\infty$. In fact, we prove a stronger result. \\\\ \textbf{Theorem 2.} \textit{Let $n\geq 3$, $X = \R^n$ or $S^n$, and $\Gamma = \text{Isom}(X)$. Then, there exist uncountable sets $W\subset X$ which have trivial pointwise stabilizer inside $\Gamma$ but do not satisfy $P(W, \Gamma)$.} \\\\ The extension number $\ext_D(\R^3, E(3))$ was previously conjectured to be finite; in fact, Theorem 2 provides the first known instances in which $\ext_D(X, \Gamma)$ is infinite. The question in two dimensions is harder to resolve; in the case of $\R^2$, the authors of $[2]$ conjectured the following. \\\\ \textbf{Conjecture} [2]. $\ext_D(\R^2, E(2))=7$. \\\\ We obtain some partial results by considering subgroups of $E(2)$. Let $SE(2) = \{\vec v\mapsto A \vec v + \vec b,\spa A\in SO(2),\spa \vec b\in \R^2\}$ denote the group of orientation-preserving isometries of $\R^2$. We prove the following theorem. \\\\ \textbf{Theorem 3.} \textit{$\ext_D(\R^2, O(2))=7$, and $\ext_D(\R^2, SE(2))<\infty$.} \\\\ In the case of $S^2$, we are able to show that $\ext_D(S^2, O(3))=\infty$ using an entirely different argument from the arguments in higher dimensions. We prove the following theorem. \\\\ \textbf{Theorem 4.} \textit{$P(W, SO(3))$ does not hold for any finite subset of $S^2$.
Furthermore, assuming the axiom of choice, $P(W, SO(3))$ does not hold for any countable subset of $S^2$.} \\\\ Finally, after proving Theorem 4, we discuss a few unanswered questions regarding extending precolorings on $\R^n$ and $S^n$. \section{Theorem 1: Extending precolorings on $S^1$} \subsection{Preliminaries} Theorem 1 is concerned with computing $\ext_D(S^1, O(2))$ and $\ext_D(C_n, \Aut(C_n))$. We note that the lower bound $\text{ext}_D(S^1)\geq 6$ was already proven in [2] (and the appropriate lower bounds for all of the $C_n$ were also proven). This was done by embedding $C_4$ or $C_5$ (as appropriate) into $S^1$ and $C_n$, and observing that two colors are insufficient to distinguish $C_4$ and $C_5$ (so $P(W)$ fails when $W$ is the embedded vertex set of $C_4$ or $C_5$). Therefore, we only need to show that the appropriate values also serve as upper bounds for the extension numbers. \\\\ First of all, we can quickly eliminate all dependence on $C_n$ and work entirely over $S^1$ (see [2] for the complete framework). The vertices of $C_n$ can be embedded into $S^1$ by a map $\phi$ which sends $\{1, 2, ..., n\}$ to the $n$th roots of unity. Under this embedding, for any $W\subset C_n$, we have the following fact. \\\\ \textbf{Fact.} $P(\phi(W), S^1)$ implies $P(W, C_n)$. \\\\ This holds because the setwise stabilizer of $\phi(V(C_n))$ inside $O(2)$ is canonically isomorphic to $\Aut(C_n)$. As a result, for the rest of this section, we will consider subsets $W\subset S^1$, and $P(W)$ will always be taken to be over $S^1$. Furthermore, we have the following reduction. \\\\ \textbf{Observation.} For all $\gamma\in O(2)$, $P(\gamma W)$ holds if and only if $P(W)$ holds. \\\\ This is true because if a coloring $c$ of $S^1$ is preserved by $\gamma^\prime \in O(2)$, then the coloring $c\circ \gamma$ is preserved by $\gamma^{-1}\gamma^\prime \gamma$.
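The lower-bound obstruction quoted above, that two colors cannot distinguish $C_4$ or $C_5$ while they do distinguish $C_n$ for $n\geq 6$, is easy to confirm by brute force. The sketch below is our own illustration (the vertices of $C_n$ are taken as $\mathbb{Z}_n$); it checks every $2$-coloring against every element of the dihedral group $D_n$.

```python
from itertools import product

def dihedral(n):
    """Automorphisms of C_n as maps on {0, ..., n-1}:
    n rotations x -> x + k and n reflections x -> k - x."""
    maps = []
    for k in range(n):
        maps.append(lambda x, k=k: (x + k) % n)   # rotation by k
        maps.append(lambda x, k=k: (k - x) % n)   # reflection
    return maps

def has_distinguishing_2_coloring(n):
    """Search all 2-colorings of V(C_n) for one preserved only by the identity."""
    autos = dihedral(n)
    for c in product((0, 1), repeat=n):
        preserved = [f for f in autos
                     if all(c[f(x)] == c[x] for x in range(n))]
        if len(preserved) == 1:   # only the identity survives
            return True
    return False

# Two colors fail for C_4 and C_5 but suffice once n >= 6.
assert not has_distinguishing_2_coloring(4)
assert not has_distinguishing_2_coloring(5)
assert all(has_distinguishing_2_coloring(n) for n in range(6, 10))
```

For $C_6$, for example, the search finds colorings in which the two color classes have no common symmetry, while every $2$-coloring of $C_4$ or $C_5$ is preserved by some nontrivial rotation or reflection.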
\\\\ This reduction will be used extensively throughout the rest of the paper in the following way: we say that two subsets $W$ and $W^\prime$ of $S^1$ are $O(2)$-equivalent (written $W\cong W^\prime$) if $W^\prime = \gamma W$ for some $\gamma\in O(2)$. This observation tells us that if $W\cong W^\prime$, then we can always replace $W$ with $W^\prime$ without loss of generality, in order to determine whether $P(W)$ holds or not. \\\\ Finally, to establish notation for the rest of the proof, we identify $S^1 \cong \R/\Z$. We use $\sigma$ to denote a translation, with $\sigma_a$ denoting the map $x\mapsto x+a$, and use $\tau$ to denote a reflection, with $\tau_a$ denoting the map $x\mapsto -x + 2a$. If $c$ is a $2$-coloring, then we will use $c_-$ to denote the opposite coloring to $c$ (i.e., the unique coloring such that $c(x)\neq c_-(x)$ wherever $c$ is defined). \subsection{An extension of [2] Theorem 7} In [2], the following theorem was proved. \\\\ \textbf{Theorem} \textit{[2, Theorem 7]. Suppose $W\subset S^1$ of cardinality $4$ satisfies the following condition, denoted $T(W)$: the intersection $(W+\f i k)\cap W = \emptyset$ for $2\leq k\leq 5$, $1\leq i\leq k-1$. Then, $P(W)$ holds.} \\\\ To prove this theorem, the authors prove as a lemma that $T(W)$ implies $R(W)$, where $R(W)$ is the following property: there exists $w_0\in W$ such that $\tau_{w_0}(W-\{w_0\})\cap W=\emptyset$. It is then proved that $T(W)$ and $R(W)$ together imply $P(W)$. The goal of this subsection is to prove that $R(W)$ alone implies $P(W)$. Later, we will replace condition $T(W)$ with successively weaker translation conditions until we have proven Theorem 1. \\\\ The proof that $R(W)$ implies $P(W)$ is almost exactly the same as the proof of Theorem 7 in [2]; however, we need to substitute the following lemma for Lemma 4 in [2]. \\\\ \textbf{Lemma 2.2.1.} \textit{Suppose that $W\subset S^1$ of cardinality 4 satisfies $R(W)$, and we have a precoloring $c: S^1-W\rightarrow \{R, B\}$.
Then, there are at most six extensions of $c$ to $S^1-\{w_0\}$ which are preserved by either $\tau_{w_0}$ or a translation (called ``forbidden'' in [2]).} \\\\ \textbf{Proof.} Since $\tau_{w_0}(W-\{w_0\})\cap W =\emptyset$, there is at most one extension of $c$ to $S^1-\{w_0\}$ which permits $\tau_{w_0}$. Let $\sigma_{\f 1 2}$ be the translation of order $2$. Then, $\sigma_{\f 1 2}(W)\neq W$, because if equality were to hold, property $R(W)$ would not be satisfied. Therefore, there are at most two extensions of $c$ which are preserved by $\sigma_{\f 1 2}$ (there may be two if $W = \{w_0, a, a+\f 1 2, b\}$ where $b\neq \f 1 2+w_0$). Furthermore, if $c^*$ is an extension of $c$ preserved by $\sigma$ of even order, then it is also preserved by $\sigma_{\f 1 2}$ [either clockwise or counterclockwise iteration of $\sigma$ will avoid crossing $w_0$, and shows us that $\sigma_{\f 1 2}$ will in fact always preserve $c^*$]. \\\\ Suppose $c_1$ and $c_2$ are extensions of $c$ which permit translations $\sigma_1$ and $\sigma_2$ of odd or infinite order, and let $w\in W$ be such that $c_1(w)\neq c_2(w)$. We claim that at least one of $\sigma_1$ and $\sigma_2$ has order $3$. On the contrary, suppose that neither $\sigma_1$ nor $\sigma_2$ had order $3$. If $|\sigma_1| = |\sigma_2|<\infty$, then as in the Lemma 4 argument in [2], we let $\mc O_w$ denote the $\sigma_1$-orbit of $w$. In this situation, we may suppose that $\sigma_1 = \sigma_2$ (as $\sigma_1$ will always be some power of $\sigma_2$). Since $c_1(\mc O_w)\cap c_2(\mc O_w) = \emptyset$, we can conclude that $\mc O_w\subset W$. But $|\sigma_1|>4$ (because $|\sigma_1|$ is odd and greater than $3$); since $|W|=4$, we have a contradiction. Therefore, we may assume that $|\sigma_2 | > |\sigma_1|>3$, so $|\sigma_2|\geq 7$. From here, the argument from [2] applies (it is possible to find an element $x_{\mc O}$ of any $\sigma_1$-orbit $\mc O$ such that $x_{\mc O}$ and $\sigma_2(x_{\mc O})$ are not in $W$), and we obtain a contradiction.
Thus, either $\sigma_1$ or $\sigma_2$ has order $3$. \\\\ Now, suppose we have extensions $c_1, c_2$, and $c_3$ which permit translations $\sigma_1, \sigma_2$, and $\sigma_3$ of odd or infinite order. By the previous paragraph, we obtain that without loss of generality, $|\sigma_1|=|\sigma_2|=3$, which also means that without loss of generality, $\sigma_1=\sigma_2$. Therefore, $|\sigma_1 (W)\cap W| = 3$, and assuming $c_1\neq c_2$, we conclude that $c_1 = c_2$ outside of $\sigma_1(W)\cap W$, while $c_1 = {c_2}_{-}$ on $\sigma_1(W)\cap W$. If we had a fourth extension $c_4$, we would also have $|\sigma_4|=3$, but then we would obtain that $c_4=c_1$ or $c_4=c_2$, a contradiction. Therefore, there are at most three extensions of $c$ which permit a translation of odd or infinite order. In total, then, we have at most $3+2+1 = 6$ forbidden extensions, which proves Lemma 2.2.1. $\hfill\blacksquare$ \\\\ \textbf{Theorem 2.2.2.} \textit{If $W\subset S^1$ of cardinality $4$ satisfies condition $R(W)$, then $P(W)$ holds.} \\\\ \textbf{Proof.} Suppose that $W\subset S^1$, $|W|=4$, and $R(W)$ holds. Given any precoloring $c: S^1-W\rightarrow \{R, B\}$, Lemma 2.2.1 tells us that there are at least two non-forbidden extensions of $c$ to $S^1-\{w_0\}$. Let $c^*$ be one such extension, which we may further extend to $S^1$ by choosing a color for $w_0$. Assuming for the sake of contradiction that $c$ cannot be extended to distinguish $O(2)$, Lemma 2.2.1 tells us that the two colorings $c_R$ (obtained from coloring $w_0$ red) and $c_B$ (obtained from coloring $w_0$ blue) are preserved by reflections $\tau_R, \tau_B$ which do not fix $w_0$. The rest of the proof can be taken almost word for word from [2], with the following caveats: \\\\ 1) In the proof of Lemma 5 in [2], we know that $w_0+\f 1 2\not\in W$ by $R(W)$. \\ 2) In the proof of Lemma 12 in [2], the fact that $|\mc O_0|\geq 6$ (which depends on $T(W)$) is irrelevant to the proof of the lemma, and therefore can be omitted.
\\\\ Otherwise, all arguments carry over exactly as written. $\hfill \blacksquare$ \subsection{A weakening of condition $T(W)$} In order to prove Theorem 1, we will introduce another translational condition $T^\prime(W)$, which is strictly weaker than $T(W)$, and show that $T^\prime(W)$ implies $P(W)$. \\\\ \textbf{Condition} $T^\prime(W)$: $(W+\f i k)\cap W=\emptyset$ for $k=4, 5$ and $\gcd(i, k) = 1$. \\\\ \textbf{Theorem 2.3.1.} \textit{If $W\subset S^1$ of cardinality $4$ satisfies $T^\prime(W)$, then it also satisfies $P(W)$.} \\\\ \textbf{Corollary 2.3.2.} \textit{$\text{ext}_D(C_n)=4$ for all $n\geq 6$ such that $4\nmid n$, $5\nmid n$.} \\\\ Most of the necessary work for Theorem 2.3.1 involves checking that $P(W)$ holds in a few specific cases, which occurs in subsequent lemmas. We will first present a short argument that proves Theorem 2.3.1 assuming those lemmas, and prove the lemmas afterwards. \\\\ \textbf{Proof of Theorem 2.3.1}. Suppose $W\subset S^1$ of size $4$ satisfies $T^\prime(W)$. By Lemma $2.3.3$, there are four possibilities: either $R(W)$ holds (in which case $P(W)$ holds by Theorem $2.2.2$), $W$ is $O(2)$-equivalent to $\{0, \f 1 2, a, a + \f 1 2\}$ ($a\neq \pm \f 1 4$), or $W$ falls into one of two sporadic cases. Lemmas 2.3.4, 2.3.8, and 2.3.10 show that in each of the latter three cases, $P(W)$ holds, so we are done. $\hfill \blacksquare$ \\\\ \textbf{Lemma 2.3.3.} \textit{Suppose $W\subset S^1$ of cardinality $4$ satisfies condition $T^\prime(W)$. Then, either $W$ satisfies $R(W)$, $W \cong \{0, \f 1 2, a, a + \f 1 2\}$ for some $a\in S^1$ with $a\neq \pm \f 1 4$, $W\cong \{0, \f 1 3, \f 1 2, \f 2 3\}$, or $W\cong \{0, \f 1 6, \f 1 3, \f 1 2\}$.} \\\\ \textbf{Proof.} Let $W\subset S^1$ be such that $|W|=4$, $T^\prime(W)$ holds, and $R(W)$ does not hold. Without loss of generality (by application of an automorphism of $S^1$), we may assume that $0\in W$.
Since $R(W)$ does not hold, we know that $\tau_0(W-\{0\})\cap W\neq \emptyset$, which means that one of the following two statements is true. \\ (1) $\f 1 2\in W$ \\ (2) $\exists a\not\in\{ 0, \f 1 2\}$ such that $\{a, -a\}\subset W$ \\\\ Suppose that $\f 1 2\in W$. In this case, we have $W = \{0, \f 1 2, a, b\}$ for some $a$ and $b$. Then, $\tau_a$ does the following: $$0\mapsto 2a,$$ $$\f 1 2 \mapsto 2a + \f 1 2,$$ $$a\mapsto a,$$ $$b\mapsto 2a-b.$$ Since $R(W)$ does not hold, we know that $\tau_a(W-\{a\})\cap W\neq\emptyset$. However, we know that $\tau_a$ cannot fix $0$ or $\f 1 2$. If $\tau_a$ fixes $b$, then we have $ b = a+\f 1 2$, as claimed. Otherwise, $\tau_a$ must swap two of $\{0, \f 1 2, b\}$; furthermore, possibly shifting $W$ by $\f 1 2$, we may assume that $\tau_a(0)=\f 1 2$ or $\tau_a(0)=b$. \\\\ If $\tau_a(0) = \f 1 2$, then we have $a = \f 1 4$ or $\f 3 4$, contradicting $T^\prime(W)$. If $\tau_a(0) = b$, then $b=2a$, and we consider $\tau_b$. Again, we know that $|\tau_b(W)\cap W|\geq 2$, and by the same arguments as for $\tau_a$, we may conclude that $\tau_b$ must swap two of $\{0, \f 1 2, a\}$. Furthermore, $\tau_b$ cannot send $0$ to $\f 1 2$, since $\tau_b(0)=4a=\f 1 2$ would force $b = 2a \in \{\f 1 4, \f 3 4\}$, contradicting $T^\prime(W)$. If $\tau_b(0) = a$, then $4a = 2b = a\rightarrow 3a = 0$, which is one of the exceptions covered by the claim. Finally, if $\tau_b(\f 1 2) = a$, then $4a + \f 1 2 = a\rightarrow 3a = \f 1 2$, which is the last exception covered by the claim. Therefore, if $\f 1 2\in W$, $W$ does fall into one of the listed exceptions. \\\\ On the other hand, suppose that statement $(1)$ is false; by translational symmetry, we may now assume that $(W+\f 1 2)\cap W = \emptyset$. Furthermore, we know that $W = \{0, a, -a, b\}$ for some $a, b\not\in \{0, \f 1 2\}$. By the assumption that $R(W)$ does not hold, we know that $\tau_a(W-\{a\})\cap W\neq \emptyset$; since we also assumed that $a+\f 1 2\not\in W$, there remain three possibilities: $\tau_a(0) = -a$, $\tau_a(0) = b$, or $\tau_a(-a) = b$.
\\\\ If $2a=\tau_a(0)=-a$, then $3a = 0$ and by symmetry, we may assume that $a = \f 1 3$. Then, $\tau_b$ cannot send one cube root of unity to another unless $b$ is some $6$th root of unity, as included in the list of exceptions. \\\\ If $2a = \tau_a(0)=b$, then we consider $\tau_{-a}(W) = \{-2a, -3a, -a, -4a\}$. Since $|\tau_{-a}(W)\cap W|\geq 2$, either $3a, 4a, 5a$, or $6a$ is equal to $0$. But $3a=0\rightarrow b=2a = -a$, a contradiction, while the second two subcases are impossible by $T^\prime(W)$ and the fact that $\f 1 2\not\in W$. The final subcase is one of the listed exceptions. \\\\ If $3a = \tau_a(-a) = b$, then we consider $\tau_{-a}$; if it does not fall into either of the first two categories, then we obtain the opposite result: $-3a = b$ as well. But then $2b = 0$, a contradiction of Case 2 ($(W+\f 1 2)\cap W = \emptyset$). Thus, the listed exceptions are in fact the only exceptions, as desired. $\hfill \blacksquare$ \\\\ \textbf{Lemma 2.3.4.} \textit{If $W = \{0, a, \f 1 2, a + \f 1 2\}$ for some $a\neq \pm \f 1 4$, then $P(W)$ holds.} \\\\ \textbf{Proof.} Since $a\neq \pm \f 1 4$, the collection $\{0, a, \f 1 2, a+\f 1 2, -a, -a+\f 1 2\}$ consists of six distinct points. Let $c$ be any precoloring of $S^1-W$ such that $c(-a)=c(-a+\f 1 2) = R$, and let $d$ be any precoloring such that $d(-a)=R$, $d(-a+\f 1 2) = B$ (by negating colorings, proving Lemma 2.3.4 in these two cases suffices to prove the $(B, R)$ and $(B, B)$ cases as well). \\\\ Extend $c$ (respectively, $d$) to $c_1$ and $c_2$ (respectively, $d_1$ and $d_2$) in the following way: define $c_2(0)=d_1(0)=R$, $c_1(0)=d_2(0)=B$, $c_i(a)=d_i(a)=B$ for $i=1, 2$, $c_i(\f 1 2)=d_i(\f 1 2) = R$, $c_1(a+\f 1 2)=d_1(a+\f 1 2)=B$, $c_2(a+\f 1 2)=d_2(a+\f 1 2) = R$. \\\\ \textbf{Note 2.3.5.} For $k$ equal to $c$ or $d$, $k_1$ and $k_2$ differ only on $W^\prime := \{0, a+\f 1 2\}$.
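\\\\ The computations in the rest of this section lean on three elementary identities in $O(2)$, viewed as acting on $\R/\Z$: translations commute, a product of two reflections is a translation ($\tau_b\tau_a=\sigma_{2(b-a)}$), and conjugating a translation by a reflection inverts it ($\tau_a\sigma_t=\sigma_{-t}\tau_a$). As a sanity check (this snippet is our own verification aid, not part of the argument), the identities can be tested with exact rational arithmetic:

```python
from fractions import Fraction as F

def norm(x):
    """Reduce a rational point of R/Z into [0, 1)."""
    return x - (x // 1)

def sigma(t):
    """Translation sigma_t : x -> x + t."""
    return lambda x: norm(x + t)

def tau(a):
    """Reflection tau_a : x -> -x + 2a (fixes a and a + 1/2)."""
    return lambda x: norm(-x + 2 * a)

# arbitrary sample parameters and sample points of R/Z
a1, a2, t, s = F(1, 7), F(2, 5), F(3, 11), F(5, 13)
for x in [F(i, 12) for i in range(12)]:
    # translations commute
    assert sigma(t)(sigma(s)(x)) == sigma(s)(sigma(t)(x))
    # tau_{a2} tau_{a1} = sigma_{2(a2 - a1)}
    assert tau(a2)(tau(a1)(x)) == sigma(2 * (a2 - a1))(x)
    # tau_{a1} sigma_t = sigma_{-t} tau_{a1}
    assert tau(a1)(sigma(t)(x)) == sigma(-t)(tau(a1)(x))
```

In particular, $\tau\sigma(0)=\sigma^{-1}\tau(0)$ and $\sigma_1\sigma_2(0)=\sigma_2\sigma_1(0)$, the two relations invoked repeatedly below.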
\\\\ \textbf{Note 2.3.6.} $\tau_0, \tau_{\f 1 4}$, $\tau_{\f {a+\f 1 2}2}$, $\sigma_{\f 1 2}$, $\sigma_a$, and $\sigma_{a+\f 1 2}$ do not preserve any of $c_1, c_2, d_1, d_2$. Furthermore, $\tau_{\f a 2}$ does not preserve $c_1, c_2$, or $d_1$, and $\tau_{-\f a 2}$ does not preserve $c_1, d_1$, or $d_2$. In particular, if $\gamma\in O(2)$ preserves $c_1, c_2, d_1$, or $d_2$, then $\gamma(0)\not\in W^\prime$. \\\\ Finally, for $k=c$ or $k=d$, let the ``intermediate coloring'' $k_3$ be such that $k_3=k_1=k_2$ on $S^1-\{0, a+\f 1 2\}$, $k_3(0)=k_-(2a)$ (which is well-defined because $2a\not\in \{0, a+\f 1 2\}$), and $d_3(a+\f 1 2)= d_3(0)$ while $c_3(a+\f 1 2)= {c_3}_-(0)$. \\\\ We prove Lemma 2.3.4 by showing that one of $\{c_1, c_2, c_3\}$ is distinguishing, and one of $\{d_1, d_2, d_3\}$ is distinguishing. The arguments for $c$ and $d$ are extremely similar; we will have $k$ denote either $c$ or $d$ and specify at which points the two arguments differ. \\\\ Assume that neither $k_1$ nor $k_2$ is distinguishing. Then, $k_1$ and $k_2$ must each be invariant under some nontrivial reflection or translation. \\\\ Suppose $k_1$ and $k_2$ are invariant under translations $\sigma_1$ and $\sigma_2$. Then, we will use the fact that $\sigma_1\sigma_2(0)=\sigma_2\sigma_1(0)$ to derive a contradiction. By definition of $\sigma_1$, $k_1(\sigma_1(0))=k_1(0)$, which implies that $k_2(\sigma_1(0))=k_1(0)$ unless $\sigma_1(0)\in W^\prime$; this is not the case by Note 2.3.6. \\\\ Therefore, $k_2(\sigma_1(0)) = k_1(0)$. This then implies that $k_2(\sigma_2\sigma_1(0))=k_1(0)$, which implies that $k_1(\sigma_2\sigma_1(0))=k_1(0)$ except in the following two situations. \\\\ $i)$ $\sigma_2\sigma_1(0)=0$: This would mean that $\sigma_1(0)=\sigma_2^{-1}(0)$, which cannot happen because we know that $k_2(\sigma_1(0))=k_1(0)$, but $k_2(\sigma_2^{-1}(0))=k_2(0)\neq k_1(0)$. 
\\\\ $ii)$ $\sigma_2\sigma_1(0)=a+\f 1 2$: since $\sigma_1\neq\sigma_{\f 1 2}$ (Note 2.3.6) and hence $\sigma_1^2\neq \id$, we may suppose that $\sigma_2\sigma_1(0)\neq a+\f 1 2$ by replacing $\sigma_1$ with $\sigma_1^2$ if necessary (it cannot be the case that $\sigma_2\sigma_1(0)=a+\f 1 2 = \sigma_2\sigma_1^2(0)$). \\\\ Since we can arrange that $\sigma_2\sigma_1(0)\neq a+\f 1 2$, we can conclude that $k_1(\sigma_2\sigma_1(0))=k_2(\sigma_2\sigma_1(0))=k_1(0)$. Because this is entirely symmetric in $\sigma_1, \sigma_2$, the same argument (except applying $\sigma_2$ first) proves that $k_1(\sigma_2\sigma_1(0))=k_2(0)$, which is a contradiction. \\\\ Now suppose that $k_1$ is invariant under a translation $\sigma$ and $k_2$ is invariant under a reflection $\tau$. Again, by replacing $\sigma$ with $\sigma^2$ if necessary, we can arrange that $\tau\sigma(0)\neq a+\f 1 2$. We have the relation $\tau\sigma(0) = \sigma^{-1}\tau(0)$, which we will use to derive a contradiction. In fact, the previous argument fully carries over to allow us to conclude that $k_1(\tau\sigma(0))=k_2(\tau\sigma(0)) = k_1(0)$, while $k_1(\sigma^{-1}\tau(0))=k_2(\sigma^{-1}\tau(0))=k_2(0)$. This argument also applies when $k_1$ is invariant under a reflection and $k_2$ is invariant under a translation (this is essentially a Red-Blue color swap). \\\\ Therefore, if neither $k_1$ nor $k_2$ is distinguishing, then they must be invariant under reflections $\tau_1$ and $\tau_2$, respectively. For the remainder of the proof, let the translation $\sigma = \tau_2\tau_1$, and let $k$ be defined on $S^1-W^\prime$ (i.e., extend it to $a$ and $\f 1 2$ because $k_1$ and $k_2$ match there). This step is inspired by the argument in $[2]$ but takes it further -- in $[2]$, two reflections are composed in this way in a situation where their corresponding colorings differ at only one point. As a result, the orbits here are more complicated.
\\\\ \textbf{Observation 2.3.7.} $\sigma$ preserves $k$ on $S^1-\{0, \sigma^{-1}(0), \tau_1(0), a+\f 1 2, \sigma^{-1}(a+\f 1 2), \tau_1(a+\f 1 2)\}$. Additionally, $k$ takes specific values as dictated by the following chart. \begin{center} \begin{tabular}{ | l | r | } \hline $\sigma^{-1}(0)$ & $k_2(0)$ \\ \hline $\sigma(0)$ & $k_1(0)$ \\ \hline $\tau_1(0)$ & $k_1(0)$ \\ \hline $\tau_2(0)$ & $k_2(0)$ \\ \hline $\tau_1(a+\f 1 2)$ & Blue \\ \hline $\tau_2(a+\f 1 2)$ & Red \\ \hline \end{tabular} \end{center} \noindent \textbf{Justifications.} 1. For $\theta\in S^1$, if $\theta\not\in W^\prime$ and $\sigma(\theta)\not\in W^\prime$, then $k(\theta)=k(\sigma(\theta))$ (which is $k(\tau_2\tau_1(\theta))$) unless $\tau_1(\theta)\in W^\prime$. The colors of the $\tau_i(0), \tau_i(a+\f 1 2)$ are dictated by the reflections' color-preserving properties. \\\\ 2 (to show $k(\sigma(0))=k_1(0)$). We know that $k(\sigma(0))=k_1(0)$ unless $\tau_1(0)\in W^\prime$ or $\sigma(0)\in W^\prime$. But $\tau_1(0)\not\in W^\prime$ by Note 2.3.6, and $\sigma(0)\neq 0$ because if the opposite were true, then $\tau_1(0) = \tau_2(0)$, despite the fact that $\tau_1(0)$ and $\tau_2(0)$ must have opposite colors. Finally, if $\sigma(0)=a+\f 1 2$, then $\sigma(\f 1 2) = a$. We know that $k(a)=B$ and $k(\f 1 2)=R$ for both $k=c$ and $k=d$. Therefore, there is a violation of $\sigma$ color-preservation in either case ($\sigma$ taking a red point to a blue point). We already have enough information to know that this happens at most at $\tau_1(0)\mapsto \tau_2(0)$ [for $k=c$, $\text{Red}\mapsto \text{Blue}$ in fact never happens], so we would need that $\tau_1(0)=\f 1 2$, which is impossible by Note $2.3.6$. \\\\ 3 (to show $k(\sigma^{-1}(0))=k_2(0)$). The argument is similar here. As before, $\tau_2(0)\not\in W^\prime$ by Note 2.3.6, and we have already shown that $\sigma^{-1}(0)\neq 0$. If $\sigma^{-1}(0)=a+\f 1 2$, then we use the fact that $\sigma(0)=-a+\f 1 2$.
We proved already that $k(\sigma(0))=k_1(0)$, but in both the $k=c$ and $k=d$ cases, $k(-a+\f 1 2)\neq k_1(0)$, a contradiction. Therefore, $k(\sigma^{-1}(0))=k_2(0)$. \\\\ To complete the proof of Lemma 2.3.4, we now assume for the sake of contradiction that $k_3$ is also not distinguishing. \\\\ Again, we can see easily that $k_3$ cannot permit a nontrivial translation $\sigma_3$, using the relation $\tau\sigma_3=\sigma^{-1}_3\tau$ for $\tau \in \{\tau_1, \tau_2\}$. Pick $i\in \{1, 2\}$ such that $k_i(0)\neq k_3(0)$ (there is exactly one such $i$). On one hand, $\sigma_3(0)\not \in W^\prime$ (this can be easily checked), so $k(\sigma_3(0))=k_3(\sigma_3(0))=k_3(0)$ and therefore $k_i(\tau_i\sigma_3(0))=k_3(0)$. On the other hand, we already know that $\tau_i(0)\not\in W^\prime$ (by Note 2.3.6) and so $k_3(\tau_i(0))=k_i(\tau_i(0))=k_i(0)$, and hence $k_3(\sigma_3^{-1}\tau_i(0))=k_i(0)$. Since $k_3$ and $k_i$ differ at only $0$, this means that $\sigma_3^{-1}\tau_i(0)=0$, i.e., $\tau_i(0)=\sigma_3(0)$; this contradicts the fact that $\tau_i(0)$ and $\sigma_3(0)$ have opposite colors. \\\\ Therefore, we conclude that $k_3$ permits another reflection, $\tau_3$, and because $k_3$ permits neither $\tau_0$ nor $\tau_{a+\f 1 2}$ (we chose $k_3$ specifically so this is the case), we have that $\tau_3\neq \tau_1, \tau_3\neq \tau_2$. Thus, we have two nontrivial translations $\sigma_{31}:= \tau_3\tau_1$ and $\sigma_{23}:=\tau_2\tau_3$, satisfying the relation $\sigma_{23}\sigma_{31}=\sigma =: \sigma_{21}$. \\\\ We will derive a contradiction using the fact that $\sigma_{21}$ is also equal to $\sigma_{31}\sigma_{23}$ (i.e., the translations commute). Let $i\in\{1, 2\}$ be such that $k_i(0)\neq k_3(0)$, and let $j \in \{1, 2\}$ be such that $j\neq i$. Observation $2.3.7$ tells us that $k(\sigma_{ji}(0))=k_i(0)$. On the other hand, $\sigma_{ji}(0)=\sigma_{3i}\sigma_{j3}(0)$, and $k_j(0)=k_3(0)$.
Since $k_j$ and $k_3$ differ only at $a+\f 1 2$, this means that $k_3(\sigma_{j3}(0))=k_j(0)$ unless: \\\\ $1)$ $\tau_3(0)=a+\f 1 2$, i.e., $\tau_3 = \tau_{\f{a+\f 1 2}2}$. This does not hold, as we can easily note that $\tau_{\f{a+\f 1 2}2}(a)=\f 1 2$ implies that this particular reflection does not preserve $k_3$. \\\\ $2)$ $\tau_j\tau_3(0)=a+\f 1 2$. For $k=d$, this does not hold, because $\tau_3(0)$ has the same color as $0$ under both $k_j$ and $k_3$, while $a+\f 1 2$ has the opposite color under $k_j$. For $k=c$, this does not hold, because then we would have $\tau_j\tau_3(\f 1 2) = a $. Since $k(\f 1 2) = R$ and $k(a) = B$, this means that $\tau_3(\f 1 2) = \tau_j(a) \in W^\prime$. In particular, $c_j(\tau_j(a))=B$ implies that $j=1$, while $c_3(\tau_j(a))=R$ implies that $\tau_3(\f 1 2)=\tau_1(a)=a+\f 1 2$. Then, we get a contradiction from the fact that $\tau_1\tau_3(-a+\f 1 2)=0$, while $\tau_3(-a+\f 1 2)=2a+\f 1 2\not\in W^\prime$ [$\tau_1\tau_3$ sends the red $-a+\f 1 2$ to the $c_1$-blue $0$]. \\\\ Therefore, $k_3(\sigma_{j3}(0))=k_j(\sigma_{j3}(0))=k_j(0)$. Since $\sigma_{j3}(0)\neq 0$, this means that $k_i(\sigma_{j3}(0))=k_j(0)$ as well. But this implies that $k_i(\tau_i\sigma_{j3}(0))=k_j(0)$, and then we conclude that $k_i(\sigma_{3i}\sigma_{j3}(0))=k_j(0)$ unless $\sigma_{3i}\sigma_{j3}(0)=0$ (but $\sigma_{3i}\sigma_{j3}=\sigma_{ji}$, which we know is nontrivial) or $\tau_i\sigma_{j3}(0)=0$ (which contradicts the fact that $k_i(\tau_i\sigma_{j3}(0))=k_j(0)\neq k_i(0)$). Thus, $k_i(\sigma_{ji}(0))=k_i(\sigma_{3i}\sigma_{j3}(0))=k_j(0)$, contradicting Observation 2.3.7 (which says that $k(\sigma_{ji}(0))=k_i(0)$). Hence, one of $k_1, k_2$, and $k_3$ is distinguishing. This proves Lemma 2.3.4. $\hfill \blacksquare$ \\\\ \textbf{Lemma 2.3.8.} \textit{If $W=\{0, \f 1 3, \f 1 2, \f 2 3\}$, then $P(W)$ holds.} \\\\ \textbf{Proof.} The principles behind this proof are the same as those behind Lemma 2.3.4, and many of the same reductions are made. 
Let $c$ be any precoloring of $S^1-W$ such that $c(\f 5 6)=c(\f 1 6)=R$, and let $d$ be any precoloring of $S^1-W$ such that $d(\f 5 6)=B$ and $d(\f 1 6)=R$. Let $W^\prime = \{0, \f 2 3\}$, and extend $c$ and $d$ to $c_1, d_1, c_2, d_2, c_3$, and $d_3$ in the following way: \begin{center} \begin{tabular}{ | l || c | c | c || c | c | c |} \hline & $c_1$ & $c_3$ & $c_2$ & $d_1$ & $d_3$ & $d_2$ \\ \hline $0$ & B & R & R & B & B & R \\ \hline $\f 1 3$ & B & B & B & R & R & R \\ \hline $\f 1 2$ & B & B & B & B & B & B \\ \hline $\f 2 3$ & R & R & B & R & B & B \\ \hline \end{tabular} \end{center} In this proof, $c_1, c_2$, and $c_3$ have the same purpose as they did in the proof of Lemma 2.3.4; that is, we first assume that none of $c_1, c_2$, or $c_3$ is distinguishing, and show that $c_1$ and $c_2$ both permit reflections $\tau_1, \tau_2$. Then, we will show that the ``intermediate'' coloring $c_3$ also permits a reflection $\tau_3$, and derive a contradiction from the (commutative) relation $$\tau_2\tau_1 = (\tau_2\tau_3)(\tau_3\tau_1) = (\tau_3\tau_1)(\tau_2\tau_3).$$ First, note the following things about the six colorings defined above. \\\\ \textbf{Note 2.3.9.} $c_1$ and $d_1$ are $\Aut(C_6)$-distinguishing. Consider $\{0, \pm \f 1 6, \pm \f 1 3, \f 1 2\}$ as a copy of $C_6$ sitting inside $S^1$: the only nontrivial element of $\Aut(C_6)$ which fixes $c_2 :C_6\rightarrow \{R, B\}$ is the reflection about $0$; the only nontrivial element of $\Aut(C_6)$ which fixes $d_2$ is the reflection about $\f 1 6$; the only nontrivial element of $\Aut(C_6)$ which fixes $c_3$ is the reflection about $-\f 1 {12}$; the only nontrivial element of $\Aut(C_6)$ which fixes $d_3$ is the reflection about $\f 1 4$. This means that no two of $\{c_1, c_2, c_3\}$ (and no two of $\{d_1, d_2, d_3\}$) can be preserved by the same reflection, because any such reflection would have to stabilize one element of $W^\prime$, and no $C_6$-reflection can preserve more than one of the listed colorings. \\\\ Let $k$ be equal to $c$ or $d$.
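\\\\ The finite claims in Note 2.3.9 can also be checked mechanically. In the sketch below (our own verification aid, not part of the argument), the six points of $C_6$ are indexed by $\Z/6$, so that index $j$ stands for the point $\f j 6$, and each coloring is recorded as a string of six colors; the code then enumerates the twelve elements of the dihedral group $\Aut(C_6)$.

```python
# coloring strings: character j is the color of the point j/6, read off the
# chart above together with the precolorings c(1/6)=c(5/6)=R, d(1/6)=R, d(5/6)=B
cols = {
    "c1": "BRBBRR", "c2": "RRBBBR", "c3": "RRBBRR",
    "d1": "BRRBRB", "d2": "RRRBBB", "d3": "BRRBBB",
}

def stabilizer(col):
    """Nontrivial elements of Aut(C_6) preserving the coloring:
    ('r', k) is the rotation j -> j + k, ('s', m) the reflection j -> m - j."""
    fixes = []
    for k in range(1, 6):
        if all(col[(j + k) % 6] == col[j] for j in range(6)):
            fixes.append(("r", k))
    for m in range(6):
        if all(col[(m - j) % 6] == col[j] for j in range(6)):
            fixes.append(("s", m))
    return fixes

assert stabilizer(cols["c1"]) == []          # c1 is Aut(C6)-distinguishing
assert stabilizer(cols["d1"]) == []          # so is d1
assert stabilizer(cols["c2"]) == [("s", 0)]  # only the reflection about 0
assert stabilizer(cols["d2"]) == [("s", 2)]  # only the reflection about 1/6
assert stabilizer(cols["c3"]) == [("s", 5)]  # only the reflection about -1/12
assert stabilizer(cols["d3"]) == [("s", 3)]  # only the reflection about 1/4
```

Note that no rotation appears in any stabilizer; this is the observation used below when we argue that $\sigma_i(W)\cap W=\emptyset$.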
Assume for the sake of contradiction that none of $k_1, k_2$, or $k_3$ is $O(2)$-distinguishing. \\\\ First, suppose that $k_1$ and $k_2$ are invariant under translations $\sigma_1, \sigma_2$. We know that $k_1(\sigma_1(\f 2 3))=R$ while $k_2(\sigma_2(\f 2 3))=B$. This means that $k_2(\sigma_1(\f 2 3))=R$ and $k_1(\sigma_2(\f 2 3))=B$, because Note 2.3.9 tells us that $\sigma_i(W)\cap W = \emptyset$ (as any non-trivial translation which sends elements of $C_6$ to elements of $C_6$ does not even preserve color on $C_6\subset S^1$). Applying $\sigma_2$ and $\sigma_1$, respectively, we see that $k_2(\sigma_2\sigma_1(\f 2 3))=R$ while $k_1(\sigma_1\sigma_2(\f 2 3))=B$. Since $\sigma_1\sigma_2=\sigma_2\sigma_1$, this is only possible if $\sigma_2\sigma_1(\f 2 3)\in W^\prime$. On one hand, $\sigma_2\sigma_1(\f 2 3)\neq \f 2 3$, because then $\sigma_1(\f 2 3)=\sigma_2^{-1}(\f 2 3)$, contradicting the fact that $\sigma_1(\f 2 3)$ and $\sigma_2^{-1}(\f 2 3)$ (which are not elements of $W^\prime$) have different colors under $k$. On the other hand, $\sigma_2\sigma_1(\f 2 3)$ cannot be equal to $0$, because then $\sigma_2\sigma_1(\f 1 6) = \f 1 2$; since $\sigma_1(\f 1 6)\not\in W$, this means that $$R = k(\f 1 6) = k(\sigma_1(\f 1 6))=k(\sigma_2\sigma_1(\f 1 6))=k(\f 1 2) = B,$$ a contradiction. \\\\ Next, suppose that $k_1$ is invariant under $\sigma_1$ and $k_2$ is invariant under a reflection, $\tau_2$. Then, we will use the fact that $\sigma_1\tau_2(\f 2 3)= \tau_2\sigma_1^{-1}(\f 2 3)$ to derive a contradiction. Note $2.3.9$ tells us that $\sigma_1(W)\cap W=\emptyset$, and either $\tau_2(0)\not\in W^\prime$ or $\tau_2(\f 2 3)\not\in W^\prime$ (which one depends on whether $k=c$ or $k=d$).
Therefore, we know that for some $w\in W^\prime$, $$k_2(w)= k_2(\tau_2(w))=k_1(\tau_2(w)) = k_1(\sigma_1\tau_2(w))$$ while $$k_1(w) = k_1(\sigma_1^{-1}(w))=k_2(\sigma_1^{-1}(w)) = k_2(\tau_2\sigma_1^{-1}(w))$$ implying that $\sigma_1\tau_2(w) \in W^\prime$ (because $k_1(w)\neq k_2(w)$). But $\sigma_1\tau_2(w)\neq w$, because then $\tau_2(w) = \sigma_1^{-1}(w)$, contradicting the fact that these points (which cannot be in $W^\prime$) have different colors under $k$. \\\\ Therefore, the only option is that $\sigma_1\tau_2(w)$ is equal to the other element of $W^\prime$. But by replacing $\sigma_1$ with $\sigma_1^2$ (which is guaranteed to be nontrivial, since $\sigma_1\neq \sigma_{\f 1 2}$ by Note 2.3.9), we can ensure that this does not happen, giving us our contradiction. \\\\ Since an argument analogous to the above will also work if $k_1$ were invariant under a reflection and $k_2$ were invariant under a translation, we conclude that $k_1$ is preserved by some reflection $\tau_1$ and $k_2$ is preserved by $\tau_2$. Note 2.3.9 tells us that $\tau_1\neq \tau_2$, and thus $\sigma_{21}:= \tau_2\tau_1$ is a nontrivial translation. \\\\ \textbf{Case 1.} $k=d$. In this situation, we claim that $\sigma_{21}(\f 2 3)\not\in W^\prime$ and $k(\sigma_{21}(\f 2 3))=k_1(\f 2 3) = R$. \\\\ To prove the claim, note that we know $R = k_1(\f 2 3) = k_1(\tau_1(\f 2 3))=k_2(\tau_1(\f 2 3)) = k_2(\tau_2\tau_1(\f 2 3))$, because $\tau_1(\f 2 3)\not\in W^\prime$ by Note 2.3.9. Therefore, the claim holds provided that $\sigma_{21}(\f 2 3)\not\in W^\prime$. Since $\sigma_{21}$ is nontrivial, we know that $\sigma_{21}(\f 2 3)\neq \f 2 3$. If $\sigma_{21}(\f 2 3) = 0$, then we derive a contradiction from the fact that $\sigma_{21}$ sends the $\f 1 2, \f 5 6, \f 1 6$ triangle to itself. Note 2.3.9 tells us that $\sigma_{21}$ would have to preserve $k$ on this triangle (as no intermediate image under $\tau_1$ lies in $W^\prime$), but this triangle is not monochromatic under $k$.
Thus, $\sigma_{21}(\f 2 3)\not \in W^\prime$ and $k(\sigma_{21}(\f 2 3))=R$, as desired. \\\\ Finally, we consider $k_3$. If $k_3$ is preserved by some translation $\sigma_3$, then note that $\sigma^{-1}_3(\f 2 3)\not\in W^\prime$ (no $C_6$-translation preserves $k_3$), so $$B=k_3(\f 2 3) = k_3(\sigma^{-1}_3(\f 2 3)) = k_1(\sigma^{-1}_3(\f 2 3))=k_1(\tau_1\sigma^{-1}_3(\f 2 3))$$ which is equal to $k_2(\tau_1\sigma^{-1}_3(\f 2 3))$ provided that $\tau_1\sigma_3^{-1}(\f 2 3)\not \in W^\prime$. Since $\tau_1(\f 2 3)$ and $\sigma^{-1}_3(\f 2 3)$ have different colors under $k$, we know that $\tau_1\sigma_3^{-1}(\f 2 3)\neq \f 2 3$. By replacing $\sigma_3$ with $\sigma_3^2$ if necessary, we can ensure that $\tau_1\sigma_3^{-1}(\f 2 3)$ is not equal to $0$, which allows us to conclude that $B=k_2(\tau_1\sigma_3^{-1}(\f 2 3))=k_2(\sigma_{21}\sigma_3^{-1}(\f 2 3))$. On the other hand, we know that $\sigma_{21}(\f 2 3)\not\in W^\prime$ and $k(\sigma_{21}(\f 2 3))=R$. This implies that $k_3(\sigma_3^{-1}\sigma_{21}(\f 2 3))=R$, which is a contradiction (as $k_3(\f 2 3)=k_3(0)=B$). \\\\ If $k_3$ is preserved by a reflection $\tau_3$, then Note 2.3.9 tells us that $\tau_3\neq \tau_1$ and $\tau_3\neq \tau_2$, meaning that $\sigma_{23} := \tau_2\tau_3$ and $\sigma_{31}:= \tau_3\tau_1$ are nontrivial translations, and satisfy the relation $$\sigma_{21} = \sigma_{23}\sigma_{31}=\sigma_{31}\sigma_{23}.$$ We already know that $\sigma_{21}(\f 2 3)$ is red, and not an element of $W^\prime$. However, $\sigma_{23}(\f 2 3)$, which is distinct from $\f 2 3$, is certainly blue under $k_2$ because $\tau_3(\f 2 3)\not\in W^\prime$ [this can be verified using Note 2.3.9]. Therefore, $\sigma_{23}(\f 2 3)\neq 0$ (which is red under $k_2$), and hence $\sigma_{31}$ sends a blue element of $S^1-W^\prime$ ($\sigma_{23}(\f 2 3)$) to a red element of $S^1-W^\prime$ ($\sigma_{21}(\f 2 3)$); this never happens (it sends the red $\tau_1(\f 2 3)$ to the blue $\tau_3(\f 2 3)$, but never the other way around).
Hence, Case 1 leads to a contradiction. \\\\ \textbf{Case 2.} $k=c$. Here, we claim that $\sigma_{21}(0)\not\in W^\prime$ and $k(\sigma_{21}(0)) = B$. To see this, note that we know $B = k_1(0) = k_1(\tau_1(0))= k_2(\tau_1(0))=k_2(\tau_2\tau_1(0))$ [$\tau_1(0)\not\in W^\prime$ by Note 2.3.9], so the claim follows if $\sigma_{21}(0)\not\in W^\prime$. We already know that $\sigma_{21}(0)\neq 0$, and if $\sigma_{21}(0) = \f 2 3$, then $\sigma_{21}$ sends the $\f 1 2, \f 5 6, \f 1 6$ triangle to itself, giving us the same contradiction as in the $k=d$ case. \\\\ Now, just as in the $k=d$ case, $k_3$ cannot be preserved by a translation $\sigma_3$, because then, after arranging for $\sigma_3\sigma_{21}(0)\not\in W^\prime$, we obtain a contradiction. \\\\ If $k_3$ is preserved by a reflection $\tau_3$, then we define $\sigma_{23}$ and $\sigma_{31}$ as before, and use the fact that $\sigma_{21}(0)$, which is blue under all three extensions of $k$, is also equal to $\sigma_{31}\sigma_{23}(0)$. Furthermore, $\sigma_{23}(0)$ is red under $k_2$, because $\tau_3(0)\not \in W^\prime$ [this can be checked using Note 2.3.9]. Therefore, $\sigma_{23}(0)\neq \f 2 3$ (which is blue under $k_2$), and we conclude that $\sigma_{31}$ sends a red element of $S^1-W^\prime$ to a blue element of $S^1-W^\prime$; this never happens, a contradiction. \\\\ Thus, one of $k_1, k_2$, and $k_3$ is distinguishing with respect to $O(2)$, which proves Lemma 2.3.8. $\hfill \blacksquare$ \\\\ \textbf{Lemma 2.3.10.} \textit{If $W=\{0, \f 1 6, \f 1 3, \f 1 2\}$, then $P(W)$ holds.} \\\\ \textbf{Proof.} This actually follows from the proof of Lemma 2.3.8 without any extra work. Instead of $W= \{0, \f 1 6, \f 1 3, \f 1 2\}$, we may equivalently consider $W = \{\f 2 3, \f 5 6, 0, \f 1 6\}$. Let $c$ be any precoloring of $S^1-W$ such that $c(\f 1 3)=c(\f 1 2)=B$, and let $d$ be any precoloring of $S^1-W$ such that $d(\f 1 3)=R$ and $d(\f 1 2)=B$.
Then, we may extend $c$ and $d$ to $c_1, d_1, c_2, d_2, c_3$, and $d_3$ exactly as in the chart from Lemma 2.3.8. Since we already proved that one of $\{c_1, c_2, c_3\}$ and one of $\{d_1, d_2, d_3\}$ distinguishes $O(2)$, we are done. $\hfill \blacksquare$ \\\\ This completes the proofs of the collection of lemmas necessary for Theorem 2.3.1. \subsection{Completing the proof of Theorem 1} Having proved Theorem 2.3.1, we can loosen the translation constraint further and obtain an even better result for $|W|=4$. \\\\ \textbf{Condition} $T^{\prime\prime}(W)$: $(W+\f i 5)\cap W = \emptyset$ for $\gcd(i, 5)=1$. \\\\ \textbf{Theorem 2.4.1.} \textit{If $W\subset S^1$, $|W|=4$, and $T^{\prime\prime}(W)$ holds, then either $P(W)$ holds, or $W$ is $O(2)$-equivalent to $\{0, \f 1 4, \f 2 4, \f 3 4\}$.} \\\\ \textbf{Proof.} Suppose $|W|=4$, $T^{\prime\prime}(W)$ holds, and $W$ is not equivalent to $\{0, \f 1 4, \f 2 4, \f 3 4\}$. If $T^\prime(W)$ holds, then by Theorem $2.3.1$, $P(W)$ holds. If $T^\prime(W)$ does not hold, then there are two possibilities. \\\\ \textbf{Case 1.} $W$ is equivalent to $\{0, \f 1 4, a, b\}$ for $a, b\not\in \{\f 2 4, \f 3 4\}$. In this case, we claim that $R(W)$ holds. In particular, suppose that $R(W)$ does not hold. Then, since $\tau_0(\f 1 4) = -\f 1 4\not\in W$ and $\f 1 2\not\in \{a, b\}$ (i.e., $a$ and $b$ are not fixed by $\tau_0$), we must have that $b = -a$ (or else $\tau_0$ would satisfy the desired property). But since $\tau_{\f 1 4}(0)\not\in W$ and $\f 3 4\not\in \{a, b\}$, we also must have that $b = -a+\f 1 2$; a contradiction. Hence, $R(W)$ holds, and we conclude that $P(W)$ holds by Theorem 2.2.2. \\\\ \textbf{Case 2.} $W$ is equivalent to $\{0, \f 1 4, \f 3 4, a\}$ for some $a\neq \f 1 2$. Lemma 2.4.2 tells us that $P(W)$ holds in this situation (again, we will present the proof of the lemma below). Assuming this lemma, the proof of Theorem 2.4.1 is complete.
$\hfill \blacksquare$ \\\\ \textbf{Lemma 2.4.2.} \textit{If $W \cong \{0, \f 1 4, \f 3 4, a\}$ for $a\neq \f 1 2$, then $P(W)$ holds.} \\\\ \textbf{Proof.} The principles behind the argument are again similar to those in Lemma 2.3.4, Lemma 2.3.8, and Lemma 2.3.10. Let $c$ be a precoloring of $S^1-W$; by negating $c$ if necessary, we may assume without loss of generality that $c(\f 1 2) = B$. Then, extend $c$ to $c_1, c_2, c_3$ in the following way: pick $c_1(a)=c_2(a)=c_3(a)$ such that exactly three of $\{a, -a, a+\f 1 2, -a+\f 1 2\}$ have the same color (this is always possible because either $-a, a+\f 1 2$, and $-a+\f 1 2$ all have the same color to begin with, or exactly two of the three have the same color). This choice guarantees that $c_1, c_2$, and $c_3$ are not preserved by $\sigma_{\f 1 2}$, $\tau_0$, or $\tau_{\f 1 4}$. The rest of $W$ is colored according to this chart. \begin{center} \begin{tabular}{ | l || c | c | c |} \hline & $c_1$ & $c_3$ & $c_2$ \\ \hline $0$ & R & B & B \\ \hline $\f 1 4$ & B & B & R \\ \hline $\f 3 4$ & R & R & R \\ \hline \end{tabular} \end{center} Note that $c_1$ and $c_2$ only differ on $W^\prime := \{0, \f 1 4\}$. Also note that $\sigma_{\f 1 2}$, $\sigma_{\f 1 4}$, $\tau_0$, $\tau_{\f 1 4}$, and $\tau_{\f 1 8}$ do not preserve any of the $c_i$. We will now run through the same argument as before -- assuming that $c_1, c_2$, and $c_3$ are all preserved by some symmetry, showing that all of the symmetries must be reflections $\tau_1, \tau_2, \tau_3$, and deriving a contradiction from the fact that $\tau_2\tau_1 = (\tau_2\tau_3)(\tau_3\tau_1)=(\tau_3\tau_1)(\tau_2\tau_3)$. \\\\ Assume for the sake of contradiction that $P(W)$ does not hold; in particular, we assume that none of $c_1$, $c_2$, and $c_3$ are distinguishing. Suppose that $c_1$ and $c_2$ are both preserved by nontrivial translations $\sigma_1, \sigma_2$. Then, $c_1(\sigma_1(0)) = c_1(0)=R$, which implies that $\sigma_1(0)\neq \f 1 4$.
Thus, $\sigma_1(0)\not\in W^\prime$, and hence $c_2(\sigma_2\sigma_1(0))=c_2(\sigma_1(0))=c_1(\sigma_1(0))=R$. This argument is symmetric in $c_1, c_2$, so we also conclude that $c_1(\sigma_2\sigma_1(0))=B$. This can only happen if $\sigma_2\sigma_1(0)=\f 1 4$, but this implies that $\sigma_2\sigma_1(\f 1 2) = \f 3 4$. Since $\sigma_1(\f 1 2)\not\in W^\prime$ (as $\sigma_{\f 1 2}$ and $\sigma_{\f 1 4}$ do not preserve $c_1$), we know that $c(\sigma_1(\f 1 2))=c_1(\f 1 2)=B$. Then, we have $\sigma_2$ taking a blue element of $S^1-W^\prime$ to a red element of $S^1-W^\prime$; a contradiction. Therefore, it is not the case that $c_1$ and $c_2$ are both preserved by translations. \\\\ Now suppose that $c_1$ is preserved by a translation $\sigma_1$ and $c_2$ is preserved by a reflection $\tau_2$. Since $\tau_2(0)\not \in W^\prime$ ($\tau_0$ and $\tau_{\f 1 8}$ do not preserve $c_1$ or $c_2$), we know that $c(\tau_2(0))=c_2(0)=B$, and hence $c_1(\sigma_1\tau_2(0))=B$. Similarly, since $\sigma^{-1}_1(0)\not\in W^\prime$, we have that $c_2(\tau_2\sigma_1^{-1}(0))=c(\sigma_1^{-1}(0))=c_1(0)=R$. Since $\tau_2\sigma^{-1}_1=\sigma_1\tau_2$, this is only possible if $\tau_2\sigma_1^{-1}(0)=\f 1 4$. But then we would have that $\tau_2\sigma_1^{-1}(\f 1 2)=\f 3 4$, which is a contradiction (lack of color preservation) because $\sigma_1^{-1}(\f 1 2)\not\in W^\prime$. The same argument applies when $c_1$ is preserved by a reflection and $c_2$ is preserved by a translation. \\\\ We conclude that $c_1$ and $c_2$ are both preserved by reflections $\tau_1$ and $\tau_2$. Since $\sigma_{\f 1 2}$ and $\sigma_{\f 1 4}$ also do not preserve $c_3$, the argument from the previous paragraph proves that $c_3$ must also be preserved by a reflection, $\tau_3$. Because none of the colorings are preserved by $\tau_0$ or $\tau_{\f 1 4}$, we know that $\tau_1, \tau_2$, and $\tau_3$ are pairwise distinct, allowing us to define nontrivial translations $\sigma_{ij} := \tau_i\tau_j$.
\\\\ We already know that $\tau_i(W^\prime)\cap W^\prime = \emptyset$ for all $i$, so we have that $c_i(\sigma_{ij}(0))=c(\tau_j(0))=c_j(0)$. Furthermore, we also know that $\sigma_{ij}(0)\not\in W^\prime$, because $\sigma_{ij}(0)=\f 1 4\rightarrow \sigma_{ij}(\f 1 2) = \f 3 4$, which is only possible if $\tau_j(\f 1 2)\in W^\prime$. But $\tau_j(\f 1 2)\neq 0$ (because $\tau_{\f 1 4}$ never preserves a coloring) and $\tau_j(\f 1 2)=\f 1 4\rightarrow \tau_i(\f 1 4) = \f 3 4$, a contradiction ($\tau_0$ never preserves a coloring). Thus, $c(\sigma_{ij}(0))=c_j(0)$ for all $i\neq j$. In particular, $c(\sigma_{23}(0))=B$ and $c(\sigma_{21}(0))=R$. But $\sigma_{31}\sigma_{23}(0)=\sigma_{21}(0)$, so $\sigma_{31}$ sends a blue element of $S^1-W^\prime$ to a red element of $S^1-W^\prime$. Since $c_3$ and $c_1$ differ only at $0$ (where $c_1(0)=R$ and $c_3(0)=B$), it is easy to see that this is impossible. Thus, one of $c_1, c_2$, and $c_3$ is distinguishing, which proves Lemma 2.4.2. $\hfill \blacksquare$ \\\\ \textbf{Corollary 2.4.3.} \textit{If $W\subset S^1$, $|W|=5$, and $T^{\prime\prime}(W)$ holds, then $P(W)$ holds.} \\\\ \textbf{Proof.} If $W\subset S^1$, $|W|=5$, and $T^{\prime\prime}(W)$ holds, then there is some subset $W_4$ of $W$ of size $4$ which is not equivalent to $\{0, \f 1 4, \f 1 2, \f 3 4\}$ (at worst, $W$ can be equivalent to $\{0, \f 1 4, \f 1 2, \f 3 4, a\}$, and we can remove $0$, for instance). Then, by Theorem 2.4.1, $P(W_4)$ holds, and hence $P(W)$ holds. $\hfill \blacksquare$ \\\\ Finally, we will remove all constraints on $W$ to prove Theorem 1. 
\\\\ \textbf{Theorem 2.4.4.} \textit{If $W\subset S^1$ and $|W|=5$, then $P(W)$ holds unless $W\cong \{0, \f 1 5, \f 2 5, \f 3 5, \f 4 5\}$.} \\\\ \textbf{Corollary 2.4.5.} \textit{If $W\subset S^1$ and $|W|=6$, then $P(W)$ holds unconditionally.} \\\\ \textbf{Proof.} Any such $W$ contains a subset of size $5$ which is not equivalent to $\{0, \f 1 5, \f 2 5, \f 3 5, \f 4 5\}$ (four points of a regular pentagon determine the fifth), so the claim follows from Theorem 2.4.4. $\hfill \blacksquare$ \\\\ \textbf{Proof of Theorem 2.4.4.} Let $W\subset S^1$ be such that $|W|=5$ and $W$ is not equivalent to the one exceptional set, and let $W_4$ be equal to \textit{any} subset of $W$ of size $4$. Then, if $T^{\prime\prime}(W_4)$ holds, either $P(W_4)$ holds (in which case $P(W)$ holds), or $W_4\cong \{0, \f 1 4, \f 2 4, \f 3 4\}$. In the latter case, let $a\in W-W_4$; then, $P(\{a\}\cup W_4- \{\f 2 4\})$ holds by Lemma 2.4.2, and hence $P(W)$ holds as well. Therefore, we may assume that $T^{\prime\prime}(W_4)$ fails for every such $W_4$. Then, there are two possibilities. \\\\ \textbf{Case 1.} $W\cong \{0, \f 1 5, \pm \f 2 5, a, b\}$ for some $a, b\in S^1$ [the $\pm $ notation here means that \textit{either} $\f 2 5\in W$ or $-\f 2 5\in W$, but not both]. In this case, without loss of generality, let $a$ be such that $a\not\in \{\pm \f 3 5, \f 4 5\}$ (either $a$ or $b$ must satisfy this property). Let $W_4 = \{0, \f 1 5, \pm \f 2 5, a\}$. We claim that $R(W_4)$ holds. To verify this, suppose that it were not the case. Since $\tau_0(\f 1 5)=-\f 1 5$ and $\tau_0(\pm\f 2 5) = \mp \f 2 5$ are not in $W_4$, if $R(W_4)$ does not hold, then $-a\in W_4$. But we know that $-a\not\in \{0, \f 1 5, \pm \f 2 5\}$, so we conclude that $-a = a$ and hence $a=\f 1 2$. But then we can consider $\tau_{\f 1 5}$ and $\tau_{\pm \f 2 5}$; one of these reflections ($\tau_{\f 2 5}$ in the $\f 2 5$ case, and $\tau_{\f 1 5}$ in the $-\f 2 5$ case) satisfies $\tau(W_4-\{a\})\cap W_4=\emptyset$. Therefore, if $R(W_4)$ does not hold, then we would need $\tau(a)=a$ for this second reflection as well; since $\f 1 2$ is not invariant under either $\tau_{\f 1 5}$ or $\tau_{\f 2 5}$, we conclude that $R(W_4)$ holds. As a result, Theorem 2.2.2 tells us that $P(W_4)$ holds, and thus $P(W)$ holds.
\\\\ \textbf{Case 2.} $W\cong \{0, \f {i_0}5, a, a+\f {i_a}5, b\}$ for some $a, b\in S^1$ such that the $\sigma_{\f 1 5}$-orbits of $0, a$, and $b$ are all distinct. In this case, let $W_3 = \{0, a, b\}$; if $\tau_0(W_3-\{0\})\cap W_3 = \emptyset$, then we define $W_4 = W_3 \cup \{\f {i_0}5\}$. Since $\tau_0(\f{i_0}5) = - \f{i_0}5$ (which is not an element of $W$ by the $\sigma_{\f 1 5}$-orbit condition), we conclude that $R(W_4)$ holds, and hence $P(W_4)$ holds by Theorem 2.2.2. The same reasoning applies if $\tau_a(W_3 - \{a\})\cap W_3=\emptyset$. If neither the $\tau_0$ condition nor the $\tau_a$ condition holds, it is easily verified that $a = \f 1 2$ or $W_3$ is equivalent to either $\{0, \f 1 4, \f 3 4\}$ or $\{0, \f 1 3, \f 2 3\}$. If $a=\f 1 2$, then we let $W_4 = \{0, \f {i_0}5, \f 1 2 + \f{i_{\f 1 2}}5, b\}$ and note that $R(W_4)$ holds (in particular, $\tau_0(W_4-\{0\})\cap W_4 =\emptyset$ by the $\sigma_{\f 1 5}$-condition). Hence, $P(W_4)$ holds by Theorem 2.2.2, and so $P(W)$ holds. If $W_3 \cong \{0, \f 1 4, \f 3 4\}$, then $P(W_3\cup \{\f {i_0}5\})$ holds by Lemma 2.4.2, and hence $P(W)$ holds. Finally, we can ensure that $W_3\neq \{0, \f 1 3, \f 2 3\}$ by applying a translation by $-\f {i_0}5$ if necessary (i.e., considering $\{\f {i_0}5, a, b\}$ if $\{0, a, b\} = \{0, \f 1 3, \f 2 3\}$). Therefore, we may assume without loss of generality that $R(W_3)$ holds (in all cases in which we are not already done). Let $w_0\in W_3$ be as described in condition $R(W_3)$, and let $W_4 = W_3 \cup \{w_0 + \f{i_{w_0}}5\}\subset W$. Thus, since $R(W_4)$ holds, $P(W_4)$ holds by Theorem 2.2.2, and hence $P(W)$ holds in all cases. $\hfill \blacksquare$ \subsection{An interesting corollary} As a result of our work above (in particular, from our proof of Lemma 2.3.4), we also get a result of a slightly different flavor. \\\\ \textbf{Corollary 2.5.1.} Let $c$ be any $2$-coloring of $S^1$.
Then, there exists a distinguishing coloring $c^*: S^1\rightarrow \{R, B\}$ such that $c(x)\neq c^*(x)$ for at most three values of $x$. \\\\ \textbf{Proof.} Suppose we have a $2$-coloring of $S^1$, denoted $c$. If $c$ is identically one color, then changing the colors of $0, \f 1 3$, and $\f 1 2$ suffices to produce a distinguishing $2$-coloring. \\\\ If $c$ is not identically one color, then we claim that there exists a reflection (about some point $w_0$) $\tau_{w_0}$ and a point $a\neq w_0 \pm \f 1 4$ such that $c(\tau_{w_0}(a))\neq c(a)$. If this were not the case, then $c(a)=c(b)$ whenever $b\neq a+\f 1 2$: taking $w_0 = \f{a+b}2$, the reflection $\tau_{w_0}$ sends $a$ to $b$, and $a = w_0\pm \f 1 4$ exactly when $b = a\mp \f 1 2$. In particular, $c(0)=c(\theta)$ for all $\theta\neq \f 1 2$, and $c(\f 1 3)=c(\theta)$ for all $\theta\neq \f 5 6$. But then by transitivity we find that $c(0)=c(\theta)$ for all $\theta\in S^1$, contradicting the fact that $c$ is not uniformly one color. This proves the claim. \\\\ Let $\tau_{w_0}$ be a reflection as described in the claim. By translational symmetry, we may assume that $w_0=0$, so there exists some $a\neq \pm \f 1 4$ such that $c(a)\neq c(-a)$. Let $W = \{0, a, \f 1 2, a+\f 1 2\}$, and let $c^\prime$ be the restriction of $c$ to $S^1-W$. Lemma 2.3.4 tells us that there exists an extension $c^*$ of $c^\prime$ which is distinguishing; furthermore, the proof of Lemma 2.3.4 specifies that there exists a distinguishing $c^*$ such that $c^*(a)\neq c^\prime(-a)= c(-a)$. This means that $c^*(a)=c(a)$, so $c^*$ and $c$ differ at most on $\{0, \f 1 2, a+\f 1 2\}$. $\hfill \blacksquare$ \section{Extending precolorings on $\R^2$: a proof of Theorem 3} The complexity of extending precolorings on $\R^2$ is highly dependent on the choice of symmetry group $\Gamma$. First, we will show that the case of $\Gamma = O(2)$ has already been resolved by Theorem 1. \\\\ \textbf{Theorem 3.1.} \textit{$\ext_D(\R^2, O(2))=7$.} \\\\ \textbf{Proof.} The fact that $7$ is a lower bound to the extension number was proved in [2].
Let $W\subset \R^2$ be such that $|W|=7$ and the pointwise stabilizer $\stab_{O(2)}(W)$ is trivial, and let $c$ be a precoloring of $\R^2-W$. Assume for the sake of contradiction that this precoloring cannot be extended to a distinguishing coloring of $\R^2$. Note that the action of $O(2)$ on $\R^2 = \bigcup_{r\in \R_{\geq 0}} r\cdot S^1$ can be decomposed into separate actions of $O(2)$ on each individual $r\cdot S^1$. If there is any $r\in \R$ such that $|W\cap r\cdot S^1|\geq 6$, then $P(W, O(2))$ holds by Theorem 1. If not, then there are at least two nondegenerate circles in $\R^2$ which intersect $W$. \\\\ We now claim that there exist two points $x_1, x_2\in W-\{0\}$ such that $x_1$ and $x_2$ are not on the same $r\cdot S^1$ and the line connecting $x_1$ to $x_2$ does not intersect $0\in \R^2$. Suppose this were not the case. Let $C=r\cdot S^1$ be a nondegenerate circle such that $|C\cap W|>0$ is minimal among circles that intersect $W$. If $C\cap W = \{x_1\}$, then if the claim is false, all of the other points in $W$ lie on the line connecting $0$ to $x_1$; this contradicts the stabilizer condition, because reflection across this line pointwise stabilizes $W$. On the other hand, if $C\cap W\supset \{x_1, x_2\}$ and the claim is false, we get that all elements of $W-C\cap W$ lie on the line connecting $0$ to $x_1$ as well as the line connecting $0$ to $x_2$, showing that $x_2$ also lies on the line connecting $0$ to $x_1$. Applying this reasoning to all pairs of points inside $C\cap W$, we again obtain that $W$ lies on a line, a contradiction. Thus, we may find $x_1, x_2$ as stated. \\\\ Let $x_1\in r_1\cdot S^1$ and $x_2\in r_2\cdot S^1$ be two points which satisfy the claim. Color the rest of $W$ red (or any other combination of colors). Then, let $c_1$ be the coloring of $r_1\cdot S^1$ where $x_1$ is red, and $c_2$ be the coloring where $x_1$ is blue. 
If $c_1$ satisfies an $SO(2)$ symmetry $\sigma_1$ and $c_2$ satisfies any $O(2)$ symmetry which does not fix $x_1$ ($\sigma_2$ or $\tau_2$), we obtain the usual contradiction from the relation $\sigma_1\sigma_2=\sigma_2\sigma_1$ or $\sigma_1\tau_2 = \tau_2\sigma_1^{-1}$. The same holds when $c_1$ and $c_2$ are exchanged. Therefore, we may assume that $c_1$ satisfies a reflection $\tau_1$ which fixes $x_1$ and no other symmetries, or both $c_1$ and $c_2$ satisfy reflections $\tau_1, \tau_2$ and no other symmetries. \\\\ In either case, $c_1$ satisfies only $\tau_1$, a reflection. If $\tau_1(x_2)\neq x_2$, we may color $x_2$ so that the final coloring $c^*$ distinguishes $O(2)$, a contradiction. If $\tau_1(x_1)=x_1$, then by the claim, $\tau_1(x_2)\neq x_2$, so we are done. On the other hand, if $\tau_1(x_1)\neq x_1$, then $c_2$ satisfies only $\tau_2\neq \tau_1$, so one of $\tau_1$ or $\tau_2$ must satisfy $\tau_i(x_2)\neq x_2$. Thus, we have proved Theorem 3.1. $\hfill \blacksquare$ \\\\ The full isometry group $E(2)\supset O(2)$ is much more difficult to deal with. First, we'll classify the elements of $E(2)$, based on the identification $\R^2\cong \C$: every $\gamma\in E(2)$ is either a translation ($z\in \C\mapsto z+a$), a rotation about some point ($z\mapsto \omega z + a$, $\omega\in S^1, a\in \C$), a reflection over some line ($z\mapsto \f{-a}{\overline a} \overline z+a, a\in \C$), or a ``glide reflection'', which is a reflection over a line combined with a translation parallel to that line. The subgroup of $E(2)$ composed of translations and rotations is called $SE(2)$. \\\\ So far, we have used the work done on $S^1$ to prove $\ext_D(\R^2, O(2))=7$. We can also apply the work from [2] done on $(\R, E(1))$ by considering the following subgroup $\Gamma$ of $E(2)$: the group of translations $z\mapsto z+a$ and $180^\circ$ rotations $z\mapsto -z+2a$. By using the techniques from [2], we can prove the following lemma.
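The rotation and translation cases above compose by the rule $(\omega_1,a_1)\circ(\omega_2,a_2)=(\omega_1\omega_2,\,\omega_1 a_2+a_1)$ for maps $z\mapsto\omega z+a$, a fact used constantly in what follows. A small numerical sketch (the helper names are ours, not notation from the text) checking this rule and checking that the displayed reflection formula is an involution:

```python
import cmath

def compose(g1, g2):
    """Compose direct isometries z -> w*z + a of C, stored as pairs (w, a)."""
    w1, a1 = g1
    w2, a2 = g2
    return (w1 * w2, w1 * a2 + a1)

def inverse(g):
    w, a = g
    return (1 / w, -a / w)

def apply_iso(g, z):
    w, a = g
    return w * z + a

# The composition rule agrees with applying the maps one after another.
g1 = (cmath.exp(2j * cmath.pi * 0.3), 1.5 - 0.2j)
g2 = (cmath.exp(2j * cmath.pi * 0.7), -0.4 + 1.1j)
z = 0.8 + 0.6j
assert abs(apply_iso(compose(g1, g2), z) - apply_iso(g1, apply_iso(g2, z))) < 1e-12
assert abs(apply_iso(compose(g1, inverse(g1)), z) - z) < 1e-12

# The displayed reflection z -> (-a / conj(a)) * conj(z) + a is an involution.
a = 2.0 + 1.0j
refl = lambda z: (-a / a.conjugate()) * z.conjugate() + a
assert abs(refl(refl(z)) - z) < 1e-12
```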
\\\\ \textbf{Lemma 3.2.} \textit{Let $\Gamma\subset E(2)$ be as described above. Then, $\ext_D(\R^2, \Gamma)=4$}. \\\\ \textbf{Proof.} The fact that $4$ is a lower bound to the extension number follows from the fact that $\ext_D(\R, E(1))=4$; whenever $W$ is contained in a line $\ell$, a precoloring of $\R^2 - W$ which colors $\R^2-\ell$ red must be extended to distinguish the action of $E(1)$ on that line. \\\\ Let $W\subset \R^2$ be of size 4, and let $c$ be a precoloring of $\R^2-W$. Assume that $c$ cannot be extended to a distinguishing coloring of $\Gamma$. Recall the definition of the property $R(W)$ as it pertains to $\Gamma$: $R(W)$ holds if there exists some $w_0\in W$ such that $\tau_{w_0}$ (which is now the $180^\circ$ rotation about $w_0$) sends $W-\{w_0\}$ outside of $W$. \\\\ \textbf{Claim.} $R(W)$ always holds. \\\\ \textbf{Proof.} Among all $w\in W$ with minimal $x$-coordinate, pick the unique $w_0$ with minimal $y$-coordinate. Drawing the horizontal and vertical lines through $w_0$, it is clear that all of $W-\{w_0\}$ sits inside the union of the open right half-plane and the open upward vertical ray determined by those lines. Therefore, $\tau_{w_0}(W-\{w_0\})$ sits inside the open left half-plane and the open downward vertical ray; this means that $\tau_{w_0}(W)\cap W \subset \{w_0\}$, proving the claim. \\\\ The rest of the proof of Lemma 3.2 follows exactly as the proof of Theorem 2.2.2, with no modifications. $\hfill \blacksquare$ \\\\ We will now use Lemma 3.2 to prove the second half of Theorem 3. Let $SE(2)= \{\vec v \mapsto A \vec v + \vec b, A\in SO(2), \vec b\in \R^2\}$. \\\\ \textbf{Theorem 3.3.} \textit{$\ext_D(\R^2, SE(2))<\infty$.} \\\\ Before proving the theorem, we first analyze the case of $|W|=4$ in detail. \\\\ \textbf{Lemma 3.4.} Let $W\subset \R^2$ be such that $|W|=4$, and let $c$ be a precoloring of $\R^2-W$. Suppose that $c$ cannot be extended to a $SE(2)$-distinguishing coloring of $\R^2$.
Then, there exists an extension $c_0$ of $c$ which is $\Gamma$-distinguishing, and satisfies a $120^\circ$ rotational symmetry. \\\\ \textbf{Proof.} Let $W$ and $c$ be as described in the statement of the lemma. The following technical lemma will be used to derive a contradiction. \\\\ \textbf{Lemma 3.5.} There exists an extension $c_0$ of $c$ which is $\Gamma$-distinguishing, and preserved by some rotation $\gamma_1$ of either odd or infinite order (without loss of generality, about the origin $0\in \R^2$). Furthermore, for at least one such $c_0$, there exists an extension $c_1$ of $c$ which is preserved under $\gamma_1$ and a point $x_0\in W$ which is not fixed by $\gamma_1$, such that the extension $c_2$ obtained from switching $c_1(x_0)$ to the opposite color is not preserved by either a rotation about $x_0$ or the map $z\mapsto -z+x_0$. \\\\ \textbf{Proof.} First, note that if we choose $c_1$ so that it is $\Gamma$-distinguishing and rotationally symmetric about $0$, then changing $c_1(x_0)$ to the opposite color for any $x_0\in W-\{0\}$ will never result in a rotational symmetry about $x_0$. This is because if such a symmetry did result, then $c_1$ would itself be symmetric under some rotation about $x_0$ as well as another rotation about $0$. The commutator of these two rotations is a nontrivial translation, so $c_1$ would not be $\Gamma$-distinguishing. Therefore, we only have to deal with $x_0$-rotational symmetry if $c_1$ is not chosen to be $\Gamma$-distinguishing. \\\\ Let $c_0$ be an extension of $c$ to $\R^2$ which is $\Gamma$-distinguishing (Lemma 3.2 guarantees that $c_0$ exists). Since $c_0$ cannot be $SE(2)$-distinguishing, $c_0$ must be preserved by some rotation which is not $180^\circ$. Without loss of generality, we may assume that the rotation is about the point $0\in \R^2$, so $c_0$ is preserved by $\gamma_0: z\mapsto \omega z$.
Furthermore, $\gamma_0$ cannot have even order, for otherwise some power of $\gamma_0$ is the map $z\mapsto -z$, which we know does not preserve $c_0$. \\\\ Now, suppose that for every $x\in W-\{0\}$, the coloring $c_x$ obtained from changing $c_0(x)$ to the opposite color \textit{is} preserved by $z\mapsto -z+x$. \\\\ \textbf{Case 1.} $0\not\in W$. Then for every $x\in W$, we have that $c_x(x) = c_x(0)=c_0(0)$, and so $c_0(x)\neq c_0(0)$. In other words, all of the points in $W$ have the same color under $c_0$. Now, let $x_1, x_2, x_3\in W$ be such that $x_1\neq 2^{\pm 1} x_2$ and $x_1\neq 2^{\pm 1} x_3$ (three such points in $W$ exist). Let $c_1$ be the coloring which matches $c_0$ except at $x_1$ and $x_2$. Furthermore, by switching $x_2$ and $x_3$ if necessary (in the definition of $c_1$), we may assume that the $180^\circ$ rotation about $0$ does not preserve $c_1$. \\\\ We want to show that $c_1$ satisfies the properties demanded by Lemma 3.5. In particular, we claim that $c_1$ is $\Gamma$-distinguishing. \\\\ To show this, suppose that $c_1$ is preserved by a translation $\sigma_1: z\mapsto z+a$. By replacing $a$ with a sufficiently large multiple of $a$, we can ensure that $a+x_1-x_2\not\in \{x_1, x_2\}$ and $x_2-a\neq x_1$ (so it makes sense to talk about $c(a+x_1-x_2)$ and $c(x_2-a)$). Furthermore, we know already that $x_1-x_2\not\in \{x_1, x_2\}$ by our choice of $x_1$ and $x_2$. Therefore, we see that $c(a+x_1-x_2) = c(\sigma_1(x_1-x_2))=c(x_1-x_2)=c_{x_1}(x_2)=c_0(x_2)$, but also that $c(a+x_1-x_2) = c(-(x_2-a)+x_1) = c(x_2-a) = c_1(x_2)\neq c_0(x_2)$, a contradiction. \\\\ On the other hand, if $c_1$ is preserved by a $180^\circ$ rotation $\gamma_1: z\mapsto -z+a$, by the construction of $c_1$ we know that $a\not\in \{0, x_1, x_2\}$. Furthermore, we know that $x_1-x_2\not\in \{x_1, x_2\}$. Therefore, we see that $c_{x_1}(x_1-a)=c(a) = c(0)=c_1(x_2)\neq c_{x_1}(x_2)=c(x_1-x_2)=c_1(x_2-x_1+a)$.
Now, $c_1(x_2-x_1+a)=c_{x_2}(x_2-x_1+a)$ as long as $x_2-x_1+a\neq x_1$, and we already know that $c_1(x_1)=c_1(x_2)\neq c_{x_1}(x_2)$, so this cannot happen. Therefore, we have that $c_{x_1}(x_1-a)=c(0)\neq c_1(x_2-x_1+a)= c_{x_2}(x_2-x_1+a)=c_{x_2}(x_1-a)$, which can only happen if $x_1-a=x_1$, i.e., $a=0$; a contradiction. This proves the claim that $c_1$ is $\Gamma$-distinguishing. \\\\ Since $c_1$ is $\Gamma$-distinguishing, we obtain $\gamma_1$ analogously to $\gamma_0$ as described before. Since two elements of $W$ are colored red under $c_1$ and two elements are colored blue under $c_1$, it is now certain that we can pick $x_0\in W$ not fixed by $\gamma_1$, as the lemma describes. \\\\ \textbf{Case 2.} $0\in W$. First of all, the argument from Case 1 still works almost all of the time. In this situation, we have that the three points in $W-\{0\}$ are all the same color (and opposite the color of $0$) under $c_0$. We can still flip the colors of $x_1, x_2\in W-\{0\}$ to obtain another $\Gamma$-distinguishing coloring unless the following things both happen. \\\\ 1) $W = \{0, \f 1 2 x, x, 2x\}$ for some $x$ (meaning we can only find two points $x_1, x_2$, and not a third). \\ 2) $c_0(-\f 1 2 x) = c_0(-2 x)=c_0(0)$ and $c_0(-x)=c_0(x)$ (when we change the colors of $\f 1 2 x$ and $2x$, a $z\mapsto -z$ symmetry results). \\\\ However, if (1) and (2) both happen, we note that $c_0(-\f 1 2 x)\neq c_0(x)$, contradicting the fact that flipping the color of $\f 1 2 x$ is supposed to result in a $z\mapsto -z+\f 1 2 x$ symmetry. Therefore, we can still find $x_1, x_2\in W-\{0\}$ such that flipping the $c_0$-colors of $x_1$ and $x_2$ results in a $\Gamma$-distinguishing coloring. Under this new coloring, which we call $c^*$, $0, x_1$, and $x_2$ all have the same color while $x_3$ (the last element of $W$) has the opposite color. As long as $c^*$ is not rotationally symmetric about $x_3$, we are again done.
\\\\ Therefore, we may assume that $c^*$ is symmetric under some rotation about $x_3$. Then, pick $c_1$ to be the coloring which matches $c_0$ except at $0$ (this is still symmetric under $\gamma_0$) and pick $x_0 = x_3$. Since $c_1$ itself is symmetric under $z\mapsto -z+x_3$, we know that $c_2$ (which is equal to $c_1$ except at $x_3$) is not symmetric under this rotation. Then, we are done unless $c_2$ is symmetric under some rotation about $x_3$. This, combined with the fact that $c^*$ is also symmetric under some rotation about $x_3$, gives us a contradiction (from the usual commutation relation) unless $0, x_1$, and $x_2$ lie on a circle centered at $x_3$ such that $0, x_1$, and $x_2$ form an equilateral triangle. \\\\ Finally, this condition is actually symmetric under exchange of $c_0$ and $c^*$ as the initial coloring (because both are $\Gamma$-distinguishing), so $x_3$ would also have to form an equilateral triangle with two out of $\{0, x_1, x_2\}$. However, the first triangle being equilateral and centered around $x_3$ rules out the possibility that any such second triangle could be equilateral; contradiction. This proves Lemma 3.5. $\hfill \blacksquare$ \\\\ To finish the proof of Lemma 3.4, let $c_1$, $\gamma_1: z\mapsto \omega_1 z$, $x_0$, and $c_2$ be as asserted by Lemma 3.5. Since we assumed that $c$ is not $SE(2)$-distinguishing, $c_2$ must be preserved by some $\gamma_2: z\mapsto \omega_2 z + a$, where $\gamma_2(x_0)\neq x_0$ and $(\omega_2, a)\neq (-1, x_0)$ by the lemma. If $a=0$, since $c_1$ and $c_2$ differ only at $x_0$, we can derive the usual contradiction from the fact that $\gamma_1\gamma_2=\gamma_2\gamma_1$; therefore, we may suppose that $a\neq 0$. Now, note that the commutator $\sigma_1 = \gamma_2^{-1}\gamma_1^{-1}\gamma_2\gamma_1$ is a translation, $z\mapsto z+\f{a(1-\om_1)}{\om_1\om_2}$. Since we know that $a\neq 0$ and $\om_1\neq 1$, we in fact know that this is a nontrivial translation.
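The claimed translation part of the commutator can be confirmed numerically. The sketch below takes a generic $\gamma_1: z\mapsto \om_1 z$ of odd order and $\gamma_2: z\mapsto \om_2 z + a$, and checks that $\gamma_2^{-1}\gamma_1^{-1}\gamma_2\gamma_1$ translates every point by $\f{a(1-\om_1)}{\om_1\om_2}$ (the parameter values are arbitrary; this is a sanity check, not part of the proof):

```python
import cmath

w1 = cmath.exp(2j * cmath.pi / 7)     # gamma_1: z -> w1 * z (odd order, as in the proof)
w2 = cmath.exp(2j * cmath.pi * 0.23)  # gamma_2: z -> w2 * z + a
a = 1.3 - 0.7j

g1 = lambda z: w1 * z
g1_inv = lambda z: z / w1
g2 = lambda z: w2 * z + a
g2_inv = lambda z: (z - a) / w2

sigma1 = lambda z: g2_inv(g1_inv(g2(g1(z))))
shift1 = a * (1 - w1) / (w1 * w2)  # claimed translation amount of sigma_1

for z in (0j, 1 + 1j, -2.5 + 0.3j):
    assert abs(sigma1(z) - (z + shift1)) < 1e-12
assert abs(shift1) > 1e-12  # nontrivial, since a != 0 and w1 != 1
```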
Similarly, we have that $\sigma_2 := \gamma_1^{-2}\gamma_2^{-1}\gamma_1^2\gamma_2: z\mapsto z - \f{a(1-\om_1^2)}{\om_1^2\om_2}$ is a nontrivial translation, as $a\neq 0$ and $\om_1^2\neq 1$. Therefore, we have the relation $\sigma_1\sigma_2=\sigma_2\sigma_1$, which in terms of $\gamma_1$ and $\gamma_2$ becomes $$\gamma_2^{-1}\gamma_1^{-1}\gamma_2\gamma_1^{-1}\gamma_2^{-1}\gamma_1^2\gamma_2 =\gamma_1^{-2}\gamma_2^{-1}\gamma_1\gamma_2\gamma_1.$$ Without loss of generality, suppose that $c_1(x_0)=R$. Noting that $\sigma_1\sigma_2(x_0)=x_0+\f{a(\om_1-1)}{\om_1^2\om_2}\neq x_0$, we will show that, under the caveat that $\gamma_1$ may be replaced by $\gamma_1^{n}$ for some $n$ and $\gamma_2$ may be replaced with $\gamma_2^2$, $c(\sigma_1\sigma_2(x_0))=B$ while $c(\sigma_2\sigma_1(x_0))=R$. \\\\ We start by looking at the right side of the equation (and applying each letter of the word one at a time). We know that $\gamma_1(x_0)=\om_1x_0\neq x_0$, and hence $c(\gamma_1(x_0))=R$. Therefore, we have that $c(\sigma_2\sigma_1(x_0))=R$ unless one of the following two things happens: \\\\ 1) $\gamma_1\gamma_2\gamma_1(x_0)=x_0$. \\ 2) $\sigma_2\sigma_1(x_0)=x_0$ (which we have already ruled out). \\\\ It does not matter if, say, $\gamma_2\gamma_1(x_0)=x_0$, because then $\gamma_2\gamma_1(x_0)$ will still be red under $c_1$, and we can continue applying the next letter. Writing out an explicit formula for $\gamma_1\gamma_2\gamma_1$ then tells us that $c(\sigma_2\sigma_1(x_0))=R$ unless $\om_1^2\om_2x_0+a\om_1 = x_0$, i.e., $x_0 = \f{a\om_1}{1-\om_1^2\om_2}:= A_1(\gamma_1, \gamma_2)$. \\\\ Now, analyzing the left hand side of the equation, we know that $\gamma_2(x_0)\neq x_0$ by Lemma 3.5, so $c(\gamma_2(x_0))=B$. Therefore, we have that $c(\sigma_1\sigma_2(x_0))=B$ unless one of the following two things happens: \\\\ 1) $\gamma_2^{-1}\gamma_1^2\gamma_2(x_0)=x_0$, which reduces to the equation $x_0 = \f{-a}{\om_2} := A_3(\gamma_1, \gamma_2)$. 
\\ 2) $\gamma_2\gamma_1^{-1}\gamma_2^{-1}\gamma_1^2\gamma_2(x_0)=x_0$, which reduces to the equation $x_0 = \f{a(\om_1^2+\om_1-1)}{\om_1(1-\om_1\om_2)}:= A_2(\gamma_1, \gamma_2)$. \\\\ Thus, we obtain our contradiction unless $x_0 = A_i(\gamma_1, \gamma_2)$ for some $1\leq i\leq 3$. Now, we show that with the proper modifications, we can ensure that this does not happen. \\\\ \textbf{Step 1.} Possibly replacing $\gamma_2$ with $\gamma_2^2$, we can ensure that $x_0\neq A_3(\gamma_1, \gamma_2)$. \\\\ Suppose that $x_0 = \f{-a}{\om_2}$. Then, as $\gamma_2^2(z) = \om_2^2 z + \om_2 a + a$, we see that $A_3(\gamma_1, \gamma_2^2) = \f{-a(\om_2+1)}{\om_2^2}$, and if $x_0 = A_3(\gamma_1, \gamma_2^2)$ as well, we obtain that either $a=0$ or $\om_2 = \om_2+1$; a contradiction in either case. Therefore, as long as $\gamma_2^2$ does not fix $x_0$, we can safely replace $\gamma_2$ with $\gamma_2^2$. Furthermore, since $\gamma_2^2$ is a rotation about some $y\neq x_0$, $\gamma_2^2$ fixes $x_0$ if and only if either $\om_2=-1$ or $\gamma_2^2$ is the identity (which also implies that $\om_2 = -1$). In this situation, since $x_0 = \f{-a}{\om_2}$, we get that $x_0 = a$, which (combined with $\om_2 = -1$) cannot be the case by Lemma 3.5. Thus, we may assume without loss of generality that $x_0\neq A_3(\gamma_1, \gamma_2)$. Furthermore, since $A_3(\gamma_1, \gamma_2)$ is actually independent of $\gamma_1$, any future modifications to $\gamma_1$ will not change this fact. \\\\ \textbf{Step 2.} Attempt to replace $\gamma_1$ with $\gamma_1^2$. \\\\ Since $\gamma_1$ does not have even order, $\gamma_1^2$ also does not have even order, so we may repeat the above process using $\gamma_1^2$ instead of $\gamma_1$. Therefore, we obtain our desired contradiction unless $x_0 = A_i(\gamma_1, \gamma_2)$ for some $1\leq i\leq 2$ and $x_0 = A_j(\gamma_1^2, \gamma_2)$ for some $1\leq j\leq 2$. 
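Each of the reductions to $x_0 = A_i(\gamma_1, \gamma_2)$ can be sanity-checked numerically: substituting $A_i$ back into the corresponding word should produce a fixed point. A sketch with arbitrary generic parameters (a consistency check of the three formulas, not a proof):

```python
import cmath

w1 = cmath.exp(2j * cmath.pi / 5)
w2 = cmath.exp(2j * cmath.pi * 0.37)
a = 0.9 + 0.4j

g1 = lambda z: w1 * z
g1_inv = lambda z: z / w1
g2 = lambda z: w2 * z + a
g2_inv = lambda z: (z - a) / w2

A1 = a * w1 / (1 - w1**2 * w2)
A2 = a * (w1**2 + w1 - 1) / (w1 * (1 - w1 * w2))
A3 = -a / w2

# Each A_i is a fixed point of the corresponding word from the proof.
assert abs(g1(g2(g1(A1))) - A1) < 1e-12                      # gamma_1 gamma_2 gamma_1
assert abs(g2(g1_inv(g2_inv(g1(g1(g2(A2)))))) - A2) < 1e-12  # gamma_2 gamma_1^{-1} gamma_2^{-1} gamma_1^2 gamma_2
assert abs(g2_inv(g1(g1(g2(A3)))) - A3) < 1e-12              # gamma_2^{-1} gamma_1^2 gamma_2
```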
This gives us four pairs of simultaneous equations in the three variables $\om_1, \om_2, a$, whose solutions (simplified using $a\neq 0$ and $\om_1 \not\in\{0, 1\}$) are as follows: $$B_{1, 1}(\om_1, \om_2, a) := \om_2 + \f 1 {\om_1^3} = 0,$$ $$B_{1, 2} (\om_1, \om_2, a) := \om_1^3 + \om_1 + 1 = 0,$$ $$ B_{2, 1}(\om_1, \om_2, a) := \om_2 + \f{\om_1^2 - 1}{\om_1^4 (\om_1+2)} = 0,$$ $$B_{2, 2}(\om_1, \om_2, a) := \om_2 + \f{\om_1^3 + 1}{\om_1(\om_1^2 - \om_1 - 1)} = 0.$$ First, note that for a fixed $\om_2$, there are at most $3+3+5+3 = 14$ values of $\om_1$ which satisfy any of the four above equations. Leaving $\om_2$ fixed, we may now replace $\om_1$ (respectively, $\gamma_1$) with $\om_1^n$ ($\gamma_1^n$) for any $n\in \Z$ unless $\om_1^{2n} = 1$ (as then $\gamma_1^{2n}$ is the identity). If $\om_1$ has infinite order in $S^1$, then there are infinitely many elements in $\{\om_1^n, n\in \Z-\{0\}\}$, and so there exists some value of $n$ for which $B_{i,j}(\om_1^n, \om_2, a)\neq 0$ for all $i, j\in \{1, 2\}$; this in turn implies that we get our contradiction. \\\\ Thus, $\om_1$ has order $N$ for some $N\in \Z_{>0}$, and Lemma 3.5 tells us that $N$ is odd. It can be easily checked (by computer, for example) that $B_{1, 2}(\om_1, \om_2, a)= 0$ has no solutions over the $N$th roots of unity, so we are further restricted to only considering $B_{1, 1}, B_{2, 1}$, and $B_{2, 2}$. Now, replacing $\om_1$ with $\om_1^{-1}$ must still leave one of $B_{1, 1}$, $B_{2, 1}$ or $B_{2, 2}$ satisfied (else we are done); this gives us nine pairs of simultaneous equations which all have explicit solutions for $\om_1\in \C$. Checking by computer, we find that the only solutions for $\om_1$ within the odd roots of unity are when $\om_1$ is a cube root of unity. Lemma 3.5 tells us that there exists an extension $c_0$ of $c$ which is $\Gamma$-distinguishing and preserved by $\gamma_1$, so we have now proved Lemma 3.4.
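The computer checks invoked here are easy to reproduce. The sketch below verifies two facts about $B_{1,2}$, whose vanishing depends only on $\om_1$: it does not vanish at any $N$th root of unity for small $N$, and in fact none of its roots lies on the unit circle, since its one real root lies strictly inside the unit disc while the conjugate pair lies strictly outside (a numerical sketch, not part of the proof):

```python
import cmath

# B_{1,2} depends only on w1: p(w1) = w1^3 + w1 + 1.
p = lambda z: z**3 + z + 1

# Finite check: p does not vanish at any N-th root of unity for N <= 200.
for N in range(1, 201):
    for k in range(N):
        assert abs(p(cmath.exp(2j * cmath.pi * k / N))) > 1e-9

# In fact no root of p lies on the unit circle: bisect for the real root
# (p(-1) = -1 < 0 < 1 = p(0)), which lies strictly inside the unit disc.
lo, hi = -1.0, 0.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
r = (lo + hi) / 2
# The product of the three roots is -1, so the conjugate pair has modulus
# (1/|r|)^(1/2), which is strictly greater than 1.
assert -1 < r < 0 and (1 / abs(r)) ** 0.5 > 1.2
```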
$\hfill \blacksquare$ \\\\ Lemma 3.4 gives us very specific information about when precolorings of $\R^2-\{\text{four points}\}$ cannot be extended to distinguish $SE(2)$. We can now use this to prove Theorem 3.3. \\\\ \textbf{Proof of Theorem 3.3.} Let $N\gg 0$, let $W\subset \R^2$ be such that $|W|=N$, and let $c$ be a precoloring of $\R^2-W$. Assume for the sake of contradiction that $c$ cannot be extended to distinguish $SE(2)$. Then, Lemma 3.4 tells us that there exists an extension $c_0$ of $c$ which is $\Gamma$-distinguishing, and invariant under some order 3 rotation $\gamma_0$. Identify a set $Y = \{y_1, y_2, y_3, y_4\}$ of four points of $W$ such that $Y$ contains no equilateral triangle. Let $W^\prime = W-Y$. Now, from $W^\prime$ pick a set of four points $S_4 = \{x_4^{(i)}, 1\leq i\leq 4\}$, and three additional points $x_1, x_2, x_3$. There are $F(N)={N-4 \choose 4}(N-8)(N-9)(N-10)$ such ordered collections $(S_4, x_1, x_2, x_3)$. On the other hand, there are only $O(\f{F(N)}N)$ such ordered collections such that $S_4\cup\{x_1, x_2, x_3\} \cup Y$ contains any equilateral triangle (because for any pair of points $(x_1, x_2)$, there are only two points in $\R^2$ which form an equilateral triangle with them). \\\\ For any collection $(S_4, x_1, x_2, x_3)$ such that $S_4\cup\{x_1, x_2, x_3\}\cup Y$ does not contain any equilateral triangle, for each $1\leq j\leq 3$, let $c_j^*$ be the precoloring of $\R^2 -\{y_1, y_2, y_3, y_4\}$ which differs from $c_0$ at exactly $x_j$, and let $c_4^*$ be the precoloring which differs from $c_0$ on exactly $S_4$. By Lemma 3.4, there exists an extension $c_j$ of $c_j^*$ which is $\Gamma$-distinguishing and symmetric under some rotation $\gamma_j$ of order $3$; we can arrange that the $\gamma_j$ are all of the form $z\mapsto e^{\f{2\pi i}3}z + a_j$. Let $(S_4, x_1^{(1)}, x_2, x_3)$ and $(S_4, x_1^{(2)}, x_2, x_3)$ be two such collections.
If $\gamma_1^{(1)} = \gamma_1^{(2)}$, then $\gamma_1^{(1)}$ fixes two colorings $c_1^{(1)}$ and $c_1^{(2)}$ which differ for at least two points ($x_1^{(1)}$ and $x_1^{(2)}$); thus, since $\gamma_1$ has order $3$, $Y\cup \{x_1^{(1)}, x_1^{(2)}\}$ must contain an equilateral triangle. Since $Y\cup \{x_1^{(1)}\}$ does not contain an equilateral triangle, for any fixed $x_1^{(1)}$, there are at most $k$ possible values of $x_1^{(2)}$ for which $\gamma_1^{(1)} = \gamma_1^{(2)}$ could be the case, where $k$ is small and independent of $N$. The same holds true for varying $x_2$ and $x_3$. \\\\ We know that $\gamma_4$ has exactly one fixed point; therefore, at least three elements of $S_4$ are not fixed by $\gamma_4$. Furthermore, for each of these three $x_4^{(j)}$, either $\gamma_4(x_4^{(j)})\not\in S_4\cup Y\cup \{x_3\}$ or $\gamma_4^{-1}(x_4^{(j)})\not\in S_4\cup Y\cup \{x_3\}$ (this is because $S_4\cup Y\cup \{x_3\}$ contains no equilateral triangle). Without loss of generality, this means that $\gamma_4^{-1}(x_4^{(j)})\not\in S_4\cup Y\cup \{x_3\}$ for $1\leq j\leq 2$ (two of the three have to satisfy the property for the same power of $\gamma_4$, which can be $\gamma_4^{-1}$ without loss of generality). Then, among these two $x_4^{(j)}$, pick $x_4$ such that $\gamma_4^{-1}(x_4)\neq \gamma_3^{-1}(x_3)$. Thus far, we have simply picked one element of $S_4$, given a fixed collection $(S_4, x_1, x_2, x_3)$. \\\\ Again because $Y\cup S_4\cup \{x_1, x_2, x_3\}$ contains no equilateral triangle, we may suppose that $\gamma_2^{-1}(x_4)\not\in Y\cup S_4\cup\{x_1, x_2, x_3\}$ (the power of $\gamma_2$ does not matter). We will now find a collection $(S_4, x_1, x_2, x_3)$ which gives us a contradiction from the relation $\gamma_1\gamma_2^{-1}\gamma_3\gamma_4^{-1}=\gamma_3\gamma_4^{-1}\gamma_1\gamma_2^{-1}$ (by construction, $\gamma_1\gamma_2^{-1}$ and $\gamma_3\gamma_4^{-1}$ are translations).
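The parenthetical claim that $\gamma_1\gamma_2^{-1}$ and $\gamma_3\gamma_4^{-1}$ are translations reduces to the fact that for two rotations by the same angle $\om$, say $z\mapsto\om z+a_1$ and $z\mapsto\om z+a_2$, the first composed with the inverse of the second is the translation $z\mapsto z+(a_1-a_2)$. A quick numerical check (a sketch; the names are ours):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # the common rotation angle e^{2 pi i / 3}
a1, a2 = 0.7 + 0.1j, -1.2 + 0.9j

rot = lambda a: (lambda z: w * z + a)       # rotation by w about some center
rot_inv = lambda a: (lambda z: (z - a) / w)

comp = lambda z: rot(a1)(rot_inv(a2)(z))    # gamma_1 gamma_2^{-1}
for z in (0j, 1 - 1j, 3.14 + 2.72j):
    assert abs(comp(z) - (z + (a1 - a2))) < 1e-12  # translation by a1 - a2
```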
\\\\ First, note that we have already shown that $\gamma_4^{-1}(x_4)\not\in S_4\cup Y\cup \{x_3\}$, so $c_3(\gamma_3\gamma_4^{-1}(x_4))=c_3(\gamma_4^{-1}(x_4))=c_4(\gamma_4^{-1}(x_4))=c_4(x_4)$. Now $c_3(\gamma_3\gamma_4^{-1}(x_4))=c_2(\gamma_3\gamma_4^{-1}(x_4))$ as long as $\gamma_3\gamma_4^{-1}(x_4)\not\in Y\cup \{x_2, x_3\}$. We already know that $\gamma_3\gamma_4^{-1}(x_4)\neq x_3$. Let $S_4, x_1$, and $x_2$ be fixed. Then, the equations $\gamma_3\gamma_4^{-1}(x_4) = x^*$ for $x^*\in Y\cup \{x_2\}$ each have exactly one solution for $\gamma_3: z\mapsto \om z + a_3$ (i.e., we can solve the linear equation in $a_3$), so there are at most five such ``bad'' rotations. We also showed that there are at most $k$ choices for $x_3$ yielding any given rotational symmetry, so there are at most $5k$ choices of $x_3$ that are potentially problematic. In total, this means that there are at most $5k{N-4 \choose 4}(N-8)(N-9) = O(\f{F(N)}N)$ ordered collections which are problematic at this step. \\\\ Suppose that our collection is not one of those problematic collections. Then, our work so far has shown that $c_2(\gamma_2^{-1}\gamma_3\gamma_4^{-1}(x_4))=c_2(\gamma_3\gamma_4^{-1}(x_4))=c_4(x_4)$. We may change the $c_2$ on the left hand side of this equation to a $c_1$ as long as $\gamma_2^{-1}\gamma_3\gamma_4^{-1}(x_4)\not\in Y\cup\{x_1, x_2\}$. For any $x^*\in Y\cup\{x_1, x_2\}$, the equation $\gamma_3\gamma_4^{-1}(x_4) = \gamma_2(x^*)$ yields at most $6k$ more ``bad'' choices of $x_3$ given a fixed $(S_4, x_1, x_2)$, and again at most $O(\f{F(N)}N)$ problematic collections. \\\\ Supposing that our collection is still not problematic, our work so far shows that $c_1(\gamma_1\gamma_2^{-1}\gamma_3\gamma_4^{-1}(x_4))=c_4(x_4)$.
To ensure that $\gamma_1\gamma_2^{-1}\gamma_3\gamma_4^{-1}(x_4)\not\in Y\cup\{x_1, x_3\}$, by the same argument as in the previous paragraphs, we have to remove from consideration $O(\f{F(N)}N)$ problematic collections -- the only difference here is that we impose a constraint on the choice of $x_2$ given any fixed $S_4, x_1, x_3$. Thus, for all but $O(\f{F(N)}N)$ of the $F(N)$ possible collections, we find that $c_3(\gamma_1\gamma_2^{-1}\gamma_3\gamma_4^{-1}(x_4))=c_4(x_4)$. \\\\ We can use the same argument to show that for all but $O(\f{F(N)}N)$ of the $F(N)$ collections, $c_3(\gamma_3\gamma_4^{-1}\gamma_1\gamma_2^{-1}(x_4))=c_2(x_4)$; the only point at which there could be a problem is if $\gamma_2^{-1}(x_4) = x_2$, but we have already arranged for this not to be the case. Therefore, we obtain a contradiction by using any of the $F(N)-O(\f{F(N)}N)$ remaining collections $(S_4, x_1, x_2, x_3)$; thus, for sufficiently large $N$, all sets $W$ such that $|W|\geq N$ satisfy $P(W, SE(2))$. This completes the proof of Theorem 3.3. $\hfill \blacksquare$ \section{Infinite extension numbers in higher dimensions} In [2], it was conjectured that $\ext_D(\R^2, E(2))=7$, $\ext_D(S^2, O(3))= 9$, and $\ext_D(\R^3, E(3)) = 10$. The authors of [2] also posed the question of computing these extension numbers in higher dimensions. In Section 3, we focused on the $\R^2$ case, where some progress was made; however, the conjecture from [2] remains open. On the other hand, once we go beyond $\R^2$ to $X = \R^n$ ($n\geq 3$) or $X = S^n$ ($n\geq 2$), we now show that the extension number $\ext_D(X, \text{Isom}(X))$ is always infinite (indeed, we will see in Section 4.2 that this is even the case for small subgroups of $\text{Isom}(X)$). However, we separate the case of $S^2$ from the others, because for $X=\R^n$ and $X=S^n$ when $n\geq 3$, we can give very explicit uncountable sets $W$ and precolorings of $X-W$ which cannot be extended.
\subsection{A proof of Theorem 2} \textbf{Example 4.1.1}. Let $X = \R^n$ for $n\geq 4$, and let $\{e_1, ... , e_n\}$ be the standard basis for $X$. Let $W = \R e_1 \cup \{e_2, e_3, ..., e_n\}$, and note that the pointwise stabilizer of $W$ inside $E(n)$ is trivial, because any isometry which fixes the origin (a point of $\R e_1$) together with the standard basis of $\R^n$ fixes all of $\R^n$. \\\\ \textbf{Claim.} $P(W, E(n))$ does not hold. \\\\ \textbf{Proof.} Let $c$ be the precoloring of $X-W$ which is uniformly red, and let $c^*$ be any extension of $c$ to $X$. Then, since $|\{e_2, ..., e_n\}|\geq 3$, there exist $e_i$ and $e_j$ with $1\not\in\{i, j\}$ such that $c^*(e_i)=c^*(e_j)$. Furthermore, there exists $\gamma\in O(n)\subset E(n)$ such that $\gamma(e_i)=e_j$, $\gamma(e_j)=e_i$, and $\gamma$ pointwise stabilizes the orthogonal complement of $\R e_i + \R e_j$. Since $e_i$ and $e_j$ are the only possible elements of $\R e_i + \R e_j$ not colored red, we immediately obtain that $\gamma$ fixes $c^*$. $\hfill \blacksquare$ \\\\ \textbf{Example 4.1.2.} Let $X = \R^3$, and let $\{e_1, e_2, e_3\}$ be the standard basis. Considering $\R e_1 + \R e_2 \cong \R^2$, let $T$ be the vertices of an equilateral triangle in $\R e_1 + \R e_2$ centered at the origin. Then, let $W = T \cup \R e_3$. Since $W$ linearly spans all of $\R^3$, it has trivial pointwise stabilizer. \\\\ \textbf{Claim.} $P(W, O(3))$ does not hold. \\\\ \textbf{Proof.} Let $c$ be the precoloring of $\R^3 - W$ which is uniformly red, and let $c^*$ be any extension of $c$ to $\R^3$. Let $S^1$ be the copy of the unit circle inside $\R^3$ which contains $T$. Since no $2$-coloring of $S^1$ distinguishes $\text{Isom}(S^1)$, there exists some planar reflection or rotation which fixes $c^*|_{S^1}$. Let $\gamma\in O(3)$ act by this reflection or rotation on $\R e_1 + \R e_2$ and stabilize $e_3$ (such an element of $O(3)$ certainly exists).
Then, because $\gamma$ pointwise stabilizes $\R e_3$ and preserves $c^*|_{S^1}$, while $c^*(x) = R$ for every $x\in \R^3 - (\R e_3 \cup S^1)$, we conclude that $\gamma$ preserves $c^*$. $\hfill \blacksquare$ \\\\ \textbf{Example 4.1.3.} Let $X = S^n\subset \R^{n+1}$ with $n\geq 3$, and let $e_1, ..., e_{n+1}$ be the standard basis of $\R^{n+1}$. Let $S^1$ be the copy of the unit circle sitting inside $S^n$ with coordinates $x_3 = x_4 = \cdots = x_{n+1} = 0$, and let $T\subset S^1$ be the vertices of an equilateral triangle. Finally, let $S^{n-2}$ be the orthogonal complement of $S^1$ sitting inside $S^n$, and let $W = S^{n-2}\cup T$. Since $W$ linearly spans $\R^{n+1}$, it has trivial pointwise stabilizer within $O(n+1)$. \\\\ \textbf{Claim.} $P(W, O(n+1))$ does not hold. \\\\ \textbf{Proof.} This is essentially the same as Example 4.1.2. Letting $c$ be the precoloring of $S^n-W$ which is uniformly red, any extension $c^*$ of $c$ to $S^n$, when restricted to $S^1$, is preserved by some planar rotation or reflection. Then, we may let $\gamma\in O(n+1)$ be the isometry which is equal to this rotation or reflection when restricted to $S^1$ and pointwise stabilizes $S^{n-2}$. By construction, $\gamma$ preserves $c^*$. $\hfill \blacksquare$ \subsection{Extending precolorings on $S^2$: a proof of Theorem 4} Since all of the counterexamples from Section 4.1 involved invariance under some reflectional symmetry, it is reasonable to ask if removing reflections from the isometry groups would give us finite extension numbers. While one can create counterexamples in $\R^6$ very similar to Example 4.1.1 which do not satisfy $P(W, SO(6))$ (and similarly in higher dimensions), we can also employ another method to create huge numbers of counterexamples in lower dimensions -- even on $S^2$, where Section 4.1 failed to produce any results. \\\\ Let us recall the statement of Theorem 4. \\\\ \textbf{Theorem 4.} \textit{If $W\subset S^2$ is finite, then $P(W, SO(3))$ does not hold.
Assuming the axiom of choice, we may replace ``finite'' with ``countable.''} \\\\ In particular, if we let $SO(3)$ act on $\R^n$ by acting on a particular copy of $\R^3\subset \R^n$, Theorem 4 implies that $\ext_D(\R^n, SO(3))=\infty$. In some sense, this is a much stronger result than Theorem 2, because it produces a huge class of counterexamples. On the other hand, it does not produce any uncountable sets $W$ such that $P(W)$ does not hold. \\\\ \textbf{Proof of Theorem 4.} Let $W\subset S^2$ be any finite set, with $|W| = n$. We will construct a precoloring of $S^2-W$ which cannot be extended to distinguish $SO(3)$. First, we'll establish a framework to make the problem easier to think about. \\\\ Let $\Gamma\subset SO(3)$ be a subgroup with generating set $S$, by which we mean that the elements of $S$ and the elements of $S^{-1}$ together generate $\Gamma$ as a group. Then, we can produce a graph $G(W, \Gamma, S)$ as follows: let $V(G) = \Gamma \cdot W$ and $E(G) = \{(x, y, s)\in V(G)\times V(G)\times S: y = s^{\pm 1} x\}$. In other words, each edge is labelled by some element of $S$, and there may be more than one edge connecting two vertices. We say that a $2$-coloring $c$ of $G(W, \Gamma, S)$ is \textit{invariant} under a particular $s\in S$ if all $s$-adjacent vertices (that is, pairs $(x, y)$ such that $(x, y, s)\in E(G)$) have the same color under $c$. \\\\ We will construct a ``bad'' precoloring $c$ of $S^2-W$ by picking a group $\Gamma$ (with generating set $S$) such that the graph $G(W, \Gamma, S)$ has a particularly nice structure. In particular, we will use the result, attributed to Hausdorff in [4] (1914), that $SO(3)$ contains a copy of $F_2$, the free group on two letters.
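For concreteness, here is a quick numerical sanity check (ours, not part of the paper): taking the rotation angle $\phi$ with $\cos\phi = 1/3$, one can verify that no short reduced word in the two rotations acts as the identity. This does not prove freeness, but it illustrates a free pair of rotations of the kind used below.

```python
import math

c, s = 1/3, math.sqrt(8)/3  # cos(phi) = 1/3, an allowed rational value

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(X):  # inverse of an orthogonal matrix
    return [list(r) for r in zip(*X)]

A = [[c, -s, 0], [s, c, 0], [0, 0, 1]]   # rotation by phi in the x-y plane
B = [[c, 0, -s], [0, 1, 0], [s, 0, c]]   # rotation by phi in the x-z plane
gens = {'a': A, 'A': transpose(A), 'b': B, 'B': transpose(B)}  # capitals = inverses
inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def dist_from_identity(M):
    return max(abs(M[i][j] - I[i][j]) for i in range(3) for j in range(3))

# enumerate all reduced words (no letter followed by its inverse) of length <= 6
bad = []
frontier = [('', I)]
for _ in range(6):
    nxt = []
    for word, M in frontier:
        for g, G in gens.items():
            if word and inverse[word[-1]] == g:
                continue  # skip non-reduced words
            w2, M2 = word + g, matmul(M, G)
            nxt.append((w2, M2))
            if dist_from_identity(M2) < 1e-9:
                bad.append(w2)
    frontier = nxt

print(len(bad))  # 0: no short reduced word acts as the identity
```

The matrix entries are rationals with denominator $3^k$, so a nontrivial reduced word of length at most $6$ differs from the identity by at least $3^{-6}$, far above floating-point error.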
\\\\ There are explicit constructions of free subgroups of $SO(3)$; for example, in [6], it is shown that rotations by the angle $\phi$ in the $x$-$y$ plane and in the $x$-$z$ plane generate a free group provided that $\cos(\phi)\in \Q-\{0, \pm 1, \pm \f 1 2\}$. Furthermore, it is well known that $F_2$ contains as a subgroup $F_{\N}$, the free group on countably infinitely many letters (for example, see [5]). Let $\Gamma^\prime = F_\N\subset SO(3)$. \\\\ \textbf{Claim.} There exists a free subgroup $\Gamma\subset \Gamma^\prime$ with infinite generating set $S$ such that the following two statements are true: (1) the connected components of the elements $w_i\in W$ inside $G(W, \Gamma, S)$ are pairwise disjoint (i.e., there are no paths between elements of $W$), and (2) $G(W, \Gamma, S)$ contains no cycles which contain any element of $W$. \\\\ \textbf{Proof.} We first find a subgroup satisfying property (2). Let $S^\prime = \{s_1, s_2, ...\}$ be the free generating set of $\Gamma^\prime$. Let $w_1\in W$. Then, $\stab_{\Gamma^\prime}(w_1)$ is a subgroup of $\Gamma^\prime$ which is also abelian, because $\stab_{SO(3)}(\{w_1\})$ is abelian. We know that nontrivial abelian subgroups of a free group are isomorphic to $\Z$; this is a special case of the Nielsen-Schreier theorem (which relies on the axiom of choice), but this special case does not rely on the axiom of choice (for example, see [5]). Therefore, $\stab_{\Gamma^\prime}(\{w_1\})$ is generated by a single $\gamma\in \Gamma^\prime$ that can be written as a word in finitely many letters $s_{i_1}, s_{i_2}, ..., s_{i_k}$. Letting $S^\prime_1 = S^\prime - \{s_{i_1}, s_{i_2}, ..., s_{i_k}\}$ and $\Gamma^\prime_1 = \langle S^\prime_1 \rangle$, we see that $\stab_{\Gamma^\prime_1}(\{w_1\})$ is trivial.
Repeating this process for each of the elements of $W$, we obtain the subgroup $\Gamma^\prime_n\subset \Gamma^\prime$ with infinite generating set $S^\prime_n$ such that $\stab_{\Gamma^\prime_n}(\{w\})$ is trivial for any $w\in W$ -- in other words, $(\Gamma^\prime_n, S^\prime_n)$ satisfies property (2). \\\\ To prove the full claim, for any pair $w_i, w_j\in W$, we note that there is at most one element $\gamma\in\Gamma^\prime_n$ such that $\gamma w_i = w_j$; this is because if there were two such elements, $\gamma_1, \gamma_2$, then $\gamma_1\gamma_2^{-1}\in \stab_{\Gamma^\prime_n}(w_j)$, which is trivial. Since there are finitely many elements of $W$, there are finitely many $\gamma$'s in total, each of which is a word in finitely many generators $s_{j_1}, s_{j_2}, ..., s_{j_l}$. Letting $S = S^\prime_n - \{s_{j_1}, ..., s_{j_l}\}$ and $\Gamma = \langle S \rangle$, we see that $(\Gamma, S)$ satisfies both (1) and (2), as desired in the claim. $\hfill \blacksquare$ \\\\ Let $(\Gamma, S)$ be as asserted in the claim. Since $S$ is infinite, let $s_1, ..., s_{2^n}$ be $2^n$ elements of $S$. We will construct a precoloring $c$ of $S^2-W$ such that, enumerating the extensions of $c$ by $c_1, ..., c_{2^n}$, $c_i$ is fixed by $s_i$ for each $1\leq i\leq 2^n$. We will construct $c$ as follows: color $S^2 - V(G(W, \Gamma, S))$ red. Thus, it will suffice to show in the end that each of the extensions, when restricted to $V(G(W, \Gamma, S))$, is invariant under some $s_i$. By property (1), we may consider (and color) the connected components of each $w\in W$ separately. \\\\ Enumerate the colorings of $W$ by $c_1^*, c_2^*, ..., c_{2^n}^*$, and let $w\in W$. First, we consider the vertices $x\in V(G(W, \Gamma, S))$ which are adjacent to $w$ -- in other words, $x = s_i^{\pm 1} w$ for some $i\in \N$. If $i>2^n$, then define $c(x) = R$. If $i\leq 2^n$, define $c(x) = c_i^*(w)$.
Now, since $G(W, \Gamma, S)$ contains no cycles that contain $w$, for every $y\in \Gamma\cdot \{w\}$, there exists a unique ``branch'' $x\in V(G(W, \Gamma, S))$ which is adjacent to $w$ such that every path from $y$ to $w$ has $x$ as its second-to-last vertex. Then, define $c(y) = c(x)$. Under this construction of $c$, it is clear that $c_i(x) = c_i(w)$ if $x$ is $s_i$-adjacent to $w$. Furthermore, if $y_1$ and $y_2$ are $s_i$-adjacent, then it is clear that $y_1$ and $y_2$ have the same branch $x$, so $c(y_1)=c(x)=c(y_2)$. Thus, the coloring $c_i$ is invariant under $s_i$ for each $1\leq i\leq 2^n$, as desired. \\\\ For $W$ finite, we managed the above construction without invoking the axiom of choice. For $W$ countably infinite, we can carry out exactly the same construction, but we must use the following fact proved by de Groot and Dekker in [3]: assuming the axiom of choice, $SO(3)$ contains a free group $F$ on uncountably many letters. \\\\ We need to use this fact because if $W$ is countably infinite, we may need to remove countably many generators from the generating set of $F$ to remove all of the cycles that contain elements of $W$, and more importantly, there are uncountably many colorings of $W$ which need to satisfy some symmetry. However, replacing ``finite'' with ``countable'' and ``countable'' with ``uncountable'' as necessary, the above argument will construct a precoloring of $S^2-W$ which cannot be extended to distinguish $F$, or $SO(3)$. $\hfill \blacksquare$ \\\\ Finally, we note that Theorem 4 remains true even if we consider $k$-colorings for $k>2$. In [2], the distinguishing extension number is defined for any $k\geq D_\Gamma(X)$ (rather than just $k = D_\Gamma(X)$), but $\ext_D(S^2, O(3), k) = \infty$ for every $k\geq 2$. \section{Open questions} Of the conjectures and questions posed in [2], the conjecture that $\ext_D(\R^2, E(2))=7$ remains open. We pose a weakened version of this conjecture, as well as another related conjecture.
\\\\ \textbf{Conjecture 5.1.} $\ext_D(\R^2, E(2))<\infty$. \\\\ \textbf{Conjecture 5.2.} $\ext_D(\R^2, SE(2))=4$. \\\\ We can also ask if Theorem 2 (which holds for $S^n$ and $\R^n$ for $n\geq 3$) applies to $S^2$. \\\\ \textbf{Question 5.3.} Do there exist uncountable subsets $W\subset S^2$ such that $P(W, O(3))$ does not hold? Such that $P(W, SO(3))$ does not hold? At least in the case of $SO(3)$, we are inclined to believe that this is not the case. \\\\ Finally, the motivation for introducing the distinguishing extension number was to better differentiate group actions on sets -- the distinguishing number $D_\Gamma(X)$ is very often 1, 2, or 3, for example. However, $\ext_D(X, \Gamma)$ cannot differentiate between $O(3)$ acting on $\R^3$ and $O(4)$ acting on $\R^4$, among other things. One possible alternative to the extension number is the following. \\\\ \textbf{Definition.} The \textit{replacement number} $R(X, \Gamma)$ is the smallest $n\in \N$ such that for every $D_\Gamma(X)$-coloring of $X$, we may replace the colors of at most $n$ points in $X$ to obtain a distinguishing coloring of $X$. \\\\ For example, Corollary 17.7 states that $R(S^1, O(2))=3$. It is easy to further establish that $R(\R, E(1))=3$ as well. To make sure that $R(X, \Gamma)$ is not bounded in terms of the distinguishing number (in an obvious way, at least), we note that $R(\R^n, E(n))\geq n$, because any $n-1$ points lie on some hyperplane, so after replacing the colors of only $n-1$ points, the all-red coloring is still fixed by the reflection across that hyperplane and hence is not distinguishing. Since the ``replacement'' constraint is considerably weaker than the ``extension'' constraint, we are led to the following conjecture. \\\\ \textbf{Conjecture 5.4.} $R(\R^n, E(n))<\infty$. \section{Acknowledgements} This research was conducted under the supervision of Joe Gallian at the University of Minnesota Duluth REU, supported by NSF grant 1358659 and NSA grant H98230-13-1-0273.
I would like to thank Joe Gallian, as well as program advisors Noah Arbesfeld, Daniel Kriz, and Adam Hesterberg, for their support throughout the research process. I would also like to thank Aaron Abrams, Xiaoyu He, Brian Lawrence, and David Moulton for their insights and suggestions.
https://arxiv.org/abs/2209.09728
Rotation inside convex Kakeya sets
Let $K$ be a convex body (a compact convex set) in $\mathbb{R}^d$ that contains a copy of another body $S$ in every possible orientation. Is it always possible to continuously move any one copy of $S$ into another, inside $K$? As a stronger question, is it always possible to continuously select, for each orientation, one copy of $S$ in that orientation? These questions were asked by Croft. We show that, in two dimensions, the stronger question always has an affirmative answer. We also show that in three dimensions the answer is negative, even for the case when $S$ is a line segment -- but that in any dimension the first question has a positive answer when $S$ is a line segment. And we prove that, surprisingly, the answer to the first question is negative in dimension four for general $S$.
\section{Introduction} A subset $K$ of $\RR^d$ is called a \emph{Kakeya set} (or \emph{Besicovitch set}) if it contains a unit segment in all directions, i.e., whenever $v\in \Sph^{d-1}$ then there is some $w\in K$ such that $w+tv\in K$ for all $t\in[0,1]$. Some of the earliest results about Kakeya sets are due to Besicovitch \cite{besicovitch1,besicovitch1928kakeya}, who proved that there exist Kakeya sets of measure zero, and also showed that there are (Kakeya) sets in $\RR^2$ of arbitrarily small measure in which a unit segment can be continuously moved and rotated around by $360^\circ$. Since then there has been a lot of interest in Kakeya sets and related problems, see, e.g., \cite{davies_1971,bourgain1999dimension,kolasa1999some,katz2002new}. The study of Kakeya sets is connected to surprisingly many different areas of mathematics, including harmonic analysis, arithmetic combinatorics and PDEs (see, e.g.,~\cite{fefferman1971multiplier,bourgain1999dimension}). One of the most interesting open problems about Kakeya sets is the Kakeya conjecture, which claims that if $K$ is a compact Kakeya set in $\RR^d$, then $K$ has (Hausdorff) dimension $d$ (see, e.g., \cite{bourgain1999dimension}). While the conjecture above is probably the most important open problem in the area, there has also been much interest in questions about Kakeya sets that are more similar to the original problem, studying when we can rotate a unit segment around inside another body. For example, van~Alphen~\cite{vanalphen} showed that it is possible to construct sets of arbitrarily small area and bounded diameter in $\RR^2$ in which a segment can be rotated around. Cunningham~\cite{cunningham1971kakeya} showed that such a set can even be made simply connected.
Csörnyei, Héra and Laczkovich~\cite{csornyei2017closed} showed that if $S$ is a closed and connected set in $\RR^2$ such that any two copies of $S$ can be moved into each other within a set of arbitrarily small measure, then $S$ must be a segment, a circular arc, or a singleton. Järvenpää, Järvenpää, Keleti and Máthé~\cite{jarvenpaa2011continuously} proved that for $n\geq 3$ it is possible to move a line around within a set of measure zero in $\RR^n$ such that all directions are traversed; however, if $K\subseteq \RR^n$ is such that we can choose a copy of a line in each direction simultaneously in a continuous way (parametrized by $\Sph^{n-1}$), then the complement of $K$ must be bounded. There is a very large literature on Kakeya sets, and many other interesting problems have been studied, see, e.g.,~\cite{dvir2009size,hera2016kakeya,falconer1980continuity}. As hinted above, several results about Kakeya sets concern the stronger property of being able to continuously move and rotate around a segment (or some other set), as opposed to simply containing a segment in each direction (i.e., being Kakeya). It is then interesting to ask how strong the former property is compared to the latter: can we make some additional, natural assumption on our set such that the second property implies the first one? Without any such assumptions, being Kakeya does not imply the first property -- for example, our set could consist of two, disconnected components that together cover all possible orientations of segments. It is also easy to see that being connected is not enough -- but what happens if our set is \emph{convex}? This question, in the following more general form, was asked by Croft~\cite{hallardcroft}. 
\begin{question}[Croft \cite{hallardcroft}]\label{question_kakeya} If $K$ is a convex and compact set in $\RR^d$ that contains a copy of $S\subseteq \RR^d$ in every possible orientation, is it necessarily possible to continuously transform any given copy of $S$ into any other one within $K$? \end{question} While it is very natural to study \emph{convex} Kakeya sets, and they were already considered over a hundred years ago by Pál~\cite{pal1921minimumproblem} (who proved that the minimal possible area of a convex Kakeya set in $\RR^2$ is $1/\sqrt{3}$), it is important to point out that the question above is of a different flavour. Indeed, apart from focusing on convex sets, a significant difference between Question~\ref{question_kakeya} and most of the known results about Kakeya sets is that here we are not interested in the measure of our Kakeya set (unlike in the papers mentioned earlier). To formalise Question~\ref{question_kakeya}, we first need some definitions. For any set $S\subseteq \RR^d$, let us say that $K\subseteq \RR^d$ is \emph{$S$-Kakeya} if $K$ contains a translate of any rotated copy of $S$, i.e., whenever $\rho\in \SO(d)$ then there is some $w\in \RR^d$ such that $\rho(S)+w\subseteq K$. In particular, when $S$ is a segment of length $1$ then this is just the usual notion of being a Kakeya set. Let us also say that \emph{any two $S$-copies can be rotated into each other within $K$} if whenever $\rho_0,\rho_1\in \SO(d)$ and $w_0,w_1\in \RR^d$ are such that $\rho_i(S)+w_i\subseteq K$ ($i=0,1$), then there are some $\gamma:[0,1]\to \SO(d)$ and $\delta: [0,1]\to \RR^d$ continuous functions such that $\gamma(i)=\rho_i, \delta(i)=w_i$ for $i=0,1$ and $\gamma(t)(S)+\delta(t)\subseteq K$ for all $t$. 
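As a concrete illustration of these definitions (this numerical check is ours and is not part of the paper), one can verify that Pál's extremal example mentioned above -- the equilateral triangle of height $1$, of area $1/\sqrt{3}$ -- is indeed a Kakeya set, using the elementary fact that a convex body contains a unit segment in direction $u$ if and only if its orthogonal projection onto the line spanned by $u$ has length at least $1$:

```python
import math

# Pál's triangle: equilateral triangle of height 1 (side length 2/sqrt(3))
a = 2 / math.sqrt(3)
K = [(-a/2, 0.0), (a/2, 0.0), (0.0, 1.0)]

def proj_length(P, theta):
    """Length of the projection of the polygon P onto the direction theta."""
    u = (math.cos(theta), math.sin(theta))
    dots = [x*u[0] + y*u[1] for (x, y) in P]
    return max(dots) - min(dots)

# for a convex body, a unit segment in direction u fits inside
# iff the projection onto the line through u has length >= 1
min_width = min(proj_length(K, k*math.pi/1000) for k in range(1000))

def area(P):  # shoelace formula
    n = len(P)
    return abs(sum(P[i][0]*P[(i+1) % n][1] - P[(i+1) % n][0]*P[i][1]
                   for i in range(n))) / 2

print(min_width >= 1 - 1e-9, abs(area(K) - 1/math.sqrt(3)) < 1e-9)  # True True
```

The minimum projection length is attained in the direction of an altitude, reflecting that the triangle has width exactly $1$.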
We mention that instead of having continuous $\gamma, \delta$ as above, we could define this notion in terms of a single function $\psi$ mapping each $t\in [0,1]$ to a (rotated and translated) copy of $S$, continuously with respect to the Hausdorff metric; our results below still hold in this alternative characterisation. Furthermore, in the case of usual Kakeya sets (i.e., when $S$ is a unit segment), we can also parametrize the possible orientations of segments by the sphere $\Sph^{d-1}$ or by the projective space $\mathbb{P}\RR^d$ instead of $\SO(d)$, but these changes would make no difference.\medskip Our first result shows that in the case of usual Kakeya sets, any two unit segments can be rotated into each other within $K$ (if $K$ is convex and compact). \begin{theorem}\label{theorem_Kakeyasegments} Let $d\geq 2$ be a positive integer and let $K$ be a convex Kakeya body in $\RR^d$. Then any two unit segments can be rotated into each other within $K$. \end{theorem} Given Theorem~\ref{theorem_Kakeyasegments}, one might expect that the corresponding statement is in fact true for any set $S$. Surprisingly, this is not the case. \begin{restatable}{theorem}{fourdimcounterexample}\label{theorem_counterexamplereachable_introduction} There exist convex bodies $S$ and $K$ in $\RR^4$ such that $K$ is $S$-Kakeya but there are two $S$-copies which cannot be rotated into each other within $K$. \end{restatable} While the result above is stated for $d=4$, it is in fact easy to modify our construction to get a counterexample for any $d\geq 4$.
We mention that, in contrast with Theorem~\ref{theorem_counterexamplereachable_introduction}, if we replace the assumption `$K$ compact' by `$K$ open', then an easy connectedness argument shows that any two copies can be rotated into each other.\medskip An alternative way to interpret Question~\ref{question_kakeya} is to ask for a way to select a copy of $S$ (in an $S$-Kakeya set) in each direction simultaneously in a continuous way. That is, we want the stronger property that there exists a continuous map $f: \SO(d)\to \RR^d$ such that $\rho(S)+f(\rho)\subseteq K$ for all $\rho$. We show that this can be achieved for any shape in $2$ dimensions. \begin{theorem}\label{theorem_continuousin2d} Let $K$ be a convex body in $\RR^2$ and let $S\subseteq \RR^2$, $S\not =\emptyset$. Assume that $K$ is $S$-Kakeya. Then there is a continuous map $f: \SO(2)\to \RR^2$ such that $\rho(S)+f(\rho)\subseteq K$ for all $\rho\in \SO(2)$. \end{theorem} Again, in light of Theorem~\ref{theorem_continuousin2d}, one might expect that the corresponding statement is true in higher dimensions too, at least when $S$ is a line segment. However, this strong property fails already when $d=3$, even when $S$ is a unit segment. \begin{theorem}\label{theorem_continuouscounterexample} There exists a convex Kakeya body $K\subseteq \RR^3$ such that there is no continuous function $\psi: \Sph^2\to \RR^3$ satisfying $\psi(v)+tv\in K$ for all $v\in \Sph^2, t\in [0,1]$. \end{theorem} As before, the fact that we chose to parametrize orientations of segments by the sphere $\Sph^{d-1}$ instead of $\SO(d)$ or the projective space $\mathbb{P}\RR^d$ does not change anything: Theorem~\ref{theorem_continuouscounterexample} would remain true for these parametrizations as well, and the reason why the counterexample works is not topological. The rest of the paper is organised as follows.
In Section~\ref{section_2d}, we prove Theorem~\ref{theorem_continuousin2d} and Theorem~\ref{theorem_continuouscounterexample} concerning the stronger property of being able to continuously select in all directions. In Section~\ref{section_Kakeyareachable}, we prove Theorem~\ref{theorem_Kakeyasegments} about rotating in convex Kakeya sets in $\RR^d$ for any $d$, and in Section~\ref{section_reachablecounterexample} we give a counterexample for the corresponding statement for general bodies. We finish with some concluding remarks and open questions in Section~\ref{section_Kakeyaconcluding}. The proofs in Section~\ref{section_2d} are simpler than the ones in the later sections, but several elements of those proofs reappear or motivate our later approach. In particular, one of the main methods we will have for analysing different cases is to consider the dimensions of the sets $I_\rho=\{w\in \RR^d: \rho(S)+w\subseteq K\}$. It is easy to deal with $\rho$ (and its neighbourhood) if $I_\rho$ has dimension $d$ (i.e., has non-empty interior). One might initially expect that the larger the dimension of $I_\rho$ is, the more room we have to move the copies around and hence the easier it is to deal with $\rho$. However, this is not entirely true, and the $0$-dimensional case (when $I_\rho$ is a single point) will be quite easy to deal with. For example, it is not difficult to prove that if $I_\rho=\{w_\rho\}$ is a single point for all $\rho$, then $\rho\mapsto w_\rho$ must be continuous. So the most difficult cases in Theorem~\ref{theorem_Kakeyasegments} will come from the situation when some $I_\rho$ has dimension between $1$ and $d-1$, and these will also be the cases we use to obtain counterexamples in Theorems~\ref{theorem_continuouscounterexample} and \ref{theorem_counterexamplereachable_introduction}.
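To make the sets $I_\rho$ concrete in the simplest case: if $S$ is the segment from the origin to a unit vector $v$, then $I_\rho$ is just $K\cap (K-v)$, the set of translations placing both endpoints of the segment in $K$. The following sketch (our own illustration; the choice of Pál's triangle and the clipping routine are not from the paper) computes this intersection and exhibits both a $2$-dimensional and a $0$-dimensional $I_\rho$:

```python
import math

# Pál's triangle (equilateral, height 1), listed counter-clockwise
a = 2 / math.sqrt(3)
K = [(-a/2, 0.0), (a/2, 0.0), (0.0, 1.0)]

def intersect(p, q, A, B):
    # intersection of segment pq with the infinite line through A and B
    d1 = (B[0]-A[0])*(p[1]-A[1]) - (B[1]-A[1])*(p[0]-A[0])
    d2 = (B[0]-A[0])*(q[1]-A[1]) - (B[1]-A[1])*(q[0]-A[0])
    t = d1 / (d1 - d2)
    return (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))

def clip(subject, clipper):
    """Sutherland-Hodgman: intersection of two convex CCW polygons."""
    out = subject[:]
    m = len(clipper)
    for i in range(m):
        A, B = clipper[i], clipper[(i+1) % m]
        def inside(p):  # left of (or on) the directed edge A -> B
            return (B[0]-A[0])*(p[1]-A[1]) - (B[1]-A[1])*(p[0]-A[0]) >= -1e-12
        prev, out = out, []
        for j in range(len(prev)):
            cur, pre = prev[j], prev[j-1]
            if inside(cur):
                if not inside(pre):
                    out.append(intersect(pre, cur, A, B))
                out.append(cur)
            elif inside(pre):
                out.append(intersect(pre, cur, A, B))
        if not out:
            return []
    return out

def area(P):  # shoelace formula; degenerate polygons get area 0
    return abs(sum(P[i][0]*P[(i+1) % len(P)][1] - P[(i+1) % len(P)][0]*P[i][1]
                   for i in range(len(P)))) / 2 if P else 0.0

def I(v):  # I_v = intersection of K and K - v: placements of the segment [0, v]
    return clip(K, [(x - v[0], y - v[1]) for (x, y) in K])

print(area(I((1.0, 0.0))) > 1e-4)   # direction of the base: 2-dimensional I_v
print(area(I((0.0, 1.0))) < 1e-9)   # altitude direction: I_v is a single point
```

In the direction of the base the segment has slack (the side is longer than $1$), while in the altitude direction the only placement is the altitude itself, so $I_\rho$ degenerates to a point.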
\section{Continuous choice in each direction}\label{section_2d} In this section we prove Theorem~\ref{theorem_continuousin2d} and Theorem~\ref{theorem_continuouscounterexample} about selecting a copy in each direction in a continuous way. We begin with Theorem~\ref{theorem_continuousin2d}. First we recall the definition of the Hausdorff metric. Given a point $p\in \RR^d$ and a non-empty compact set $A\subseteq \RR^d$, write $$d(p,A)=\min_{a\in A}|p-a|.$$ Given two non-empty compact sets $X,Y\subseteq \RR^d$, their distance in the Hausdorff metric $d$ is defined as $$d(X,Y)=\max\{\max_{x\in X}d(x,Y),\max_{y\in Y}d(y,X)\}.$$ It is well-known that this makes the set $\CC_d$ of non-empty compact subsets of $\RR^d$ a metric space. Let $\mathcal{K}_d$ denote the set of non-empty compact convex sets in $\RR^d$ (so $\mathcal{K}_d\subseteq\mathcal{C}_d$). We will prove the following result. \begin{lemma}\label{lemma_Hausdorffcontinuous} Let $S$ be a non-empty compact subset of $\RR^2$ and let $K$ be convex, compact and $S$-Kakeya. For all $\rho\in \SO(2)$, let $I_{\rho}=\{v\in\RR^2:\rho(S)+v\subseteq K\}$. Then the map $\SO(2)\to \mathcal{K}_2$ given by $\rho\mapsto I_{\rho}$ is continuous. \end{lemma} Given $I\in \mathcal{K}_d$, we say that $I$ has Chebyshev centre $c$ if $x=c$ minimises $\max_{p\in I} |x-p|$ among all points $x\in\RR^d$. We will use the following properties of Chebyshev centres. (Much more general statements are known about Chebyshev centres in Banach spaces, but the next result is enough for our purposes.) \begin{lemma}\label{lemma_chebyshevproperties}(See, e.g., \cite[Theorem~5]{amir1978chebyshev} and \cite[subsection 7.1]{amir1984best}) If $I\in \mathcal{K}_d$ then $I$ has a unique Chebyshev centre $c_I$. Moreover, $c_I\in I$ for all $I$, and the map $\mathcal{K}_d\to\RR^d$ given by $I\mapsto c_I$ is continuous. 
\end{lemma} It is easy to see that Theorem~\ref{theorem_continuousin2d} follows from Lemmas~\ref{lemma_Hausdorffcontinuous} and \ref{lemma_chebyshevproperties}. So we now need to prove Lemma~\ref{lemma_Hausdorffcontinuous}. In fact, we will prove the following stronger statement. \begin{lemma}\label{lemma_2dallhausdorff} Let $K$ be a compact convex set in $\RR^2$. For any non-empty compact set $S$ in $\RR^2$, let $I_S=\{w\in \RR^2: S+w\subseteq K\}$. Let $\mathcal{A}_K$ be the set of all $S$ with $I_S$ non-empty. Then the map $\psi:\mathcal{A}_K\to \mathcal{K}_2$ given by $S\mapsto I_S$ is continuous (with respect to the Hausdorff metric on both sides). \end{lemma} Lemma~\ref{lemma_2dallhausdorff} certainly implies Lemma~\ref{lemma_Hausdorffcontinuous}, as $\rho\mapsto \rho(S)$ is easily seen to be continuous for any fixed $S$. Also, note that Lemma~\ref{lemma_Hausdorffcontinuous} and Lemma~\ref{lemma_2dallhausdorff} are not true in dimensions greater than $2$, by the construction in Theorem~\ref{theorem_continuouscounterexample}. Let us start the proof of Lemma~\ref{lemma_2dallhausdorff}. The first lemma towards the proof essentially says that if $I_S$ is a segment on the $x$ axis (so $I_S$ is one-dimensional), then the projections of $K$ and $S$ to the $y$ axis have the same maximum values (and similarly minimum values). This is rather easy to see when $S$ is a segment, and only slightly more complicated in general. \begin{lemma}\label{lemma_2dsameprojections} Suppose that $K\subseteq\RR^2$ is compact and convex, $S\subseteq \RR^2$ is non-empty and compact, and $\delta>0$ is such that $\{v\in \RR^2: S+v\subseteq K\}\supseteq\{(a,0):|a|\leq \delta\}$. Let $p=(x_0,y_0)$ and $p'=(x_0',y_0')$ be points of $S$ and $K$ (respectively) with maximal second coordinates. Then either $y_0=y_0'$, or there is some $\epsilon>0$ such that $S+(0,\epsilon)\subseteq K$.
Similarly, if $p''=(x_0'',y_0'')$ and $p'''=(x_0''',y_0''')$ are points of $S$ and $K$ (respectively) with minimal second coordinates, then either $y_0''=y_0'''$, or there is some $\epsilon>0$ such that $S-(0,\epsilon)\subseteq K$. \end{lemma} \begin{proof} We only prove the first claim, as the second one is similar. Certainly $y_0'\geq y_0$ as $S\subseteq K$. Let us assume that $y_0'>y_0$; we show that if $\epsilon>0$ is sufficiently small then for any $q=(x_1,y_1)\in S$ we have $q+(0,\epsilon)\in K$. It is enough to consider the case $x_1\geq x_0'$. Let $L>0$ be such that $S\subseteq[-L,L]^2$. We know that $q'=(x_1+\delta,y_1)$ is in $K$. By convexity, the line segment between $q'$ and $p'$ also lies in $K$ and hence $\left(x_1, \frac{y_1-y_0'}{x_1+\delta-x_0'}(x_1-x_0')+y_0'\right)\in K$. But we have \begin{align*} \left(\frac{y_1-y_0'}{x_1+\delta-x_0'}(x_1-x_0')+y_0'\right)-y_1=\frac{\delta}{x_1+\delta-x_0'}(y_0'-y_1)\geq \frac{\delta}{2L+\delta}(y_0'-y_0). \end{align*} It follows that $\epsilon=\frac{\delta}{2L+\delta}(y_0'-y_0)$ satisfies the conditions. \end{proof} The next lemma will be used to prove Hausdorff-continuity in the difficult case, i.e., when $I_S$ is one-dimensional. \begin{lemma}\label{lemma_hausdorffclosecopy} Suppose that $K\subseteq\RR^2$ is compact and convex, and define the sets $I_S$ and $\AA_K$ as in Lemma~\ref{lemma_2dallhausdorff}. Assume that $u\in \RR^2$, $\delta>0$ and $S\in \AA_K$ are such that $I_S$ has empty interior but $I_S\supseteq\{u+(a,0):|a|\leq \delta\}$. Then for all $\epsilon>0$ there is some $\eta>0$ such that whenever $S'\in \AA_K$ satisfies $d(S,S')<\eta$, there is some $w\in I_{S'}$ with $|w-u|<\epsilon$. \end{lemma} \begin{proof} We may assume $u=0$ (by replacing $K$ by $K-u$). Since $I_S$ has empty interior, by Lemma~\ref{lemma_2dsameprojections} we have $y_0=y_0'$ and $y_0''=y_0'''$ (using the notation in the statement of that lemma).
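As a sanity check, the bound $\epsilon=\frac{\delta}{2L+\delta}(y_0'-y_0)$ obtained in the proof above can be evaluated on concrete data. The following Python sketch uses illustrative values (not taken from the text) for $p'$, $q$, $\delta$ and $L$, computes the interpolated point on the segment from $q'=(x_1+\delta,y_1)$ to $p'$, and checks the inequality.

```python
# Spot-check of the convexity bound from the proof above, with
# illustrative values: p' = (x0p, y0p) in K, q = (x1, y1) in S.
x0p, y0p = 0.0, 1.0      # point of K with maximal second coordinate
x1, y1 = 0.5, 0.0        # a point q = (x1, y1) of S with x1 >= x0p
delta, L = 0.25, 1.0     # horizontal slack, bounding box S subset [-L, L]^2

# Second coordinate of the point of the segment from q' = (x1 + delta, y1)
# to p' whose first coordinate is x1:
y_star = (y1 - y0p) / (x1 + delta - x0p) * (x1 - x0p) + y0p

gap = y_star - y1
# The proof's lower bound (here y0 = y1, i.e. q attains the maximum in S):
eps = delta / (2 * L + delta) * (y0p - y1)
assert gap >= eps > 0
```

With these numbers the interpolated value is $y^*=1/3$, comfortably above the guaranteed gap $\epsilon=1/9$.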
If $S'\in \AA_K$ and $d(S,S')<\eta$, we know $S'+(0,z)\subseteq \RR\times [y_0'',y_0]$ for some $z\in \RR$ with $|z|<\eta$. (Indeed, we can pick $z$ such that the largest second coordinate of a point in $S'$ is $y_0+z$.) We show that if $\eta$ is small enough, then we must have $(0,z)\in I_{S'}$. (Then we are done, as we can choose $\eta<\epsilon$.) By replacing $S'$ with $S'+(0,z)$ (and $\eta$ by $2\eta$), we may assume that $z=0$. So we need to show that for any point $q=(x_1,y_1)\in S'$, we have $q\in K$ (if $\eta$ is small). We know there is some $q'=(x_2,y_2)\in S$ with $|x_1-x_2|<\eta$, $|y_1-y_2|<\eta$. We may assume that $y_1\geq y_2$. We wish to show that for some $s\in[-\delta,\delta]$, $q-(s,0)$ must lie on the line segment between $p=(x_0,y_0)$ and $q'=(x_2,y_2)$. (Then we are done, since $p,q'\in S$, $S+(s,0)\subseteq K$, and $K$ is convex.) \begin{figure}[h!] \includegraphics[clip,trim=0cm 0cm 0cm 0cm, width=0.6\linewidth]{2dcalc_2} \centering \captionsetup{justification=centering} \caption{The points used in the proof of Lemma~\ref{lemma_hausdorffclosecopy}.} \label{figure_2dcalc} \end{figure} First assume that $y_2=y_0$. So $y_0=y_1=y_2$. But then $q=q'+(s,0)$ for some $s\in (-\eta,\eta)$, so our claim follows easily by picking $\eta<\delta$. So let us now assume $y_2\not =y_0$ (so $y_2<y_0$). Observe that points $(x,y)$ on the segment between $p$ and $q'$ are the ones satisfying the equation $x-x_0=\frac{x_0-x_2}{y_0-y_2}(y-y_0)$ and have $y_2\leq y\leq y_0$. It follows that $(x^*,y_1)$ is on this segment, where $x^*=x_0+\frac{x_0-x_2}{y_0-y_2}(y_1-y_0)$. We have \begin{align*} |x^*-x_2|&=\left|x_0+\frac{x_0-x_2}{y_0-y_2}(y_1-y_0)-x_2\right|\\ &=\left|\frac{x_0-x_2}{y_0-y_2}(y_1-y_2)\right|. \end{align*} We will use the following claim to bound this quantity. 
\textbf{Claim.} There is some $\mu>0$ depending on $S, \delta$ only such that whenever $(x,y)\in S$ and $y>y_0-\mu$, then there is some $\bar{x}_0$ such that $(\bar{x}_0,y_0)\in S$ and $|\bar{x}_0-x|<\delta/2$. \textbf{Proof of Claim.} If this is not true, then for all $n$ we can find $(x(n),y(n))\in S$ such that $y(n)>y_0-1/n$ and whenever $(\bar{x}_0,y_0)\in S$ then $|\bar{x}_0-x(n)|\geq \delta/2$. By taking a subsequence, we may assume that $(x(n),y(n))$ converges to some $(\tilde{x},\tilde{y})\in S$. But then $\tilde{y}=y_0$ and $\tilde{x}-x(n)\to 0$, giving a contradiction and proving the claim.\qed By the claim above, we can modify $x_0$ if necessary so that either $y_0-y_2\geq \mu$ or $|x_0-x_2|<\delta/2$. In the first case we get $|x^*-x_2|\leq \frac{|x_0-x_2|}{\mu}|y_1-y_2|$. Let $L>0$ be such that $S\subseteq [-L,L]^2$, then we get $|x^*-x_2|\leq \frac{2L}{\mu}\eta$ and hence $|x^*-x_1|\leq \eta+\frac{2L}{\mu}\eta$. This converges to $0$ (independently of $q,q'$) as $\eta\to 0^+$, as required. On the other hand, if $|x_0-x_2|<\delta/2$ then, using $y_0-y_2\geq y_0-y_1$, we get $|x^*-x_2|\leq \delta/2$ and hence $|x^*-x_1|\leq \delta/2+\eta$, which is less than $\delta$ for $\eta<\delta/2$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma_2dallhausdorff}] First note that all sets of the form $I_S$ are convex and compact. Let $S\in\AA_K$ be arbitrary, we show $\psi$ is continuous at $S$, i.e., whenever $S_n\to S$ with $S_n\in\AA_K$, then $d(I_{S_n},I_S)\to 0$. First we show $\max_{x\in I_{S_n}}d(x,I_S)\to 0$. Indeed, if this is not true, then by taking an appropriate subsequence $(S_{k(n)})$ we get that there is a sequence $(x_{n})$ with $x_n\in I_{S_{k(n)}}$ such that $d(x_n,I_S)\not\to 0$ and $x_n\to x$ for some $x$. But we have $S_{k(n)}+x_n\subseteq K$ for all $n$. Hence $S+x\subseteq K$, i.e., $x\in I_S$. (Indeed, for any $s\in S$ we can take a sequence $(s_n)$ with $s_n\in S_{k(n)}$ and $(s_n)\to s$. 
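Since the argument above is driven by Hausdorff-closeness of $S$ and $S'$, it may help to note that for finite point sets the Hausdorff distance recalled at the start of this section can be computed directly; a minimal Python sketch (the function name is ours):

```python
def hausdorff(X, Y):
    """Hausdorff distance between finite non-empty point sets in R^d:
    d(X, Y) = max( max_{x in X} d(x, Y), max_{y in Y} d(y, X) )."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return max(
        max(min(dist(x, y) for y in Y) for x in X),
        max(min(dist(y, x) for x in X) for y in Y),
    )

# The metric takes the larger of the two one-sided distances:
assert hausdorff([(0, 0)], [(3, 4)]) == 5.0
assert hausdorff([(0, 0), (10, 0)], [(0, 0)]) == 10.0
```

For large or continuous sets one would of course use a more careful discretisation, but this is enough to experiment with the lemmas of this section.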
Then $s_n+x_n\in K$ for all $n$, so, by taking limits, $s+x\in K$.) But then $d(x_n,I_S)\to 0$, giving a contradiction. So $\max_{x\in I_{S_n}}d(x,I_S)\to 0$. It remains to show that $\max_{x\in I_{S}}d(x,I_{S_n})\to 0$. Observe that it suffices to show that $d(x,I_{S_n})\to 0$ for any point $x\in I_S$. Indeed, the functions $x\mapsto d(x,I_{S_n})$ are $1$-Lipschitz on the compact domain $I_S$, so pointwise convergence implies uniform convergence. We consider three cases: when $I_S$ is a single point, when $I_S$ is one-dimensional, i.e., $I_S=\{(1-t)a+tb: t\in [0,1]\}$ for some distinct $a,b\in \RR^2$, and when $I_S$ is two-dimensional, i.e., has non-empty interior. First assume that $I_S=\{p\}$ is a single point. Then trivially $$d(p,I_{S_n})=\min_{x\in I_{S_n}}d(p,x)\leq \max_{x\in I_{S_n}}d(p,x)=\max_{x\in I_{S_n}}d(x,I_S)\to 0,$$ giving the claim. Next, assume that $I_S$ is one-dimensional (i.e., a segment). By taking an appropriate rotation and translation, we may assume that $I_S=[-\delta,\delta]\times\{0\}$ for some $\delta>0$. Let $x\in I_S$ and $\epsilon>0$ be arbitrary; we show $d(x,I_{S_n})<\epsilon$ for $n$ large enough. We may assume that $\epsilon<\delta$. Let $x'\in [-\delta+\epsilon/2,\delta-\epsilon/2]\times\{0\}$ be such that $|x-x'|\leq\epsilon/2$. Since $I_S\supseteq \{x'+(a,0):|a|\leq \epsilon/2\}$, Lemma~\ref{lemma_hausdorffclosecopy} shows that for all $n$ large enough there is some $w\in I_{S_n}$ with $|w-x'|\leq \epsilon/4$. But then we also have $|w-x|<\epsilon$, as required. Finally, assume that $I_S$ is two-dimensional, i.e., has non-empty interior. Let $x\in I_S$ and $\epsilon>0$ be arbitrary; we show $d(x,I_{S_n})<\epsilon$ for $n$ large enough. We can find $x'\in I_{S}$ with $|x'-x|<\epsilon$ such that $x'$ is in the interior of $I_{S}$, i.e., $I_S$ contains a ball of radius $r>0$ around $x'$. Then whenever $d(S',S)<r$, we have $x'\in I_{S'}$.
(Indeed, $x'+S'\subseteq x'+B_r(0)+S\subseteq I_S+S\subseteq K$, where $B_r(0)$ denotes the ball of radius $r$ centred at $0$.) Hence $x'\in I_{S_n}$ for $n$ large enough, giving the claim. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem_continuousin2d}] Let $S'$ be the closure of $S$. Then $\{v\in \RR^2: \rho(S)+v\subseteq K\}=\{v\in \RR^2: \rho(S')+v\subseteq K\}$ for all $\rho\in \SO(2)$. By replacing $S$ by $S'$, we may assume that $S$ is compact. Then the result follows easily from Lemma~\ref{lemma_2dallhausdorff} and Lemma~\ref{lemma_chebyshevproperties} by letting $f(\rho)$ be the Chebyshev centre of $I_{\rho(S)}$. \end{proof} We finish this section by proving Theorem~\ref{theorem_continuouscounterexample}. Informally, the construction can be described as follows. Take a circle of diameter $1$ in the $xy$ plane, and start moving it in the $x$ direction while simultaneously rotating it around the $x$ axis. Stop when the rotated circle gets back to the $xy$ plane, and take the convex hull of the points traversed. See Figure~\ref{figure_continuouscounterexample}. The discontinuity will come at the direction $(0,1,0)$ by considering directions of the form $(0,y,\pm\sqrt{1-y^2})$, $y\to 1^-$. The formal proof is given below. \begin{figure}[h!] \centering \begin{subfigure}[h!]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{3dcounterexample_2} \captionsetup{justification=centering} \caption{Some phases of the circle being rotated and translated.} \label{figure_circlephases} \end{subfigure} \hfill \begin{subfigure}[h!]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{3dcounterexample} \captionsetup{justification=centering} \caption{The set of points traversed during the motion.
The final construction is obtained by taking the convex hull of this set.} \label{figure_finalpicture} \end{subfigure} \captionsetup{justification=centering} \caption{The counterexample in Theorem~\ref{theorem_continuouscounterexample} is obtained by simultaneously translating and rotating a circle, and then taking the convex hull of the points traversed.} \label{figure_continuouscounterexample} \end{figure} \begin{proof}[Proof of Theorem~\ref{theorem_continuouscounterexample}] Define the function $f: [0,\pi]\times \Sph^1\to \RR^3$ by letting $$f(t,x,y)=\frac{1}{2}(t+x,y\cos t, y\sin t).$$ Let $K_0$ be the image of $f$ and let $K$ be the convex hull of $K_0$. Observe that $f$ is continuous and the domain of $f$ is compact, hence $K_0$ is compact. It follows that $K$ is convex and compact. Also, note that if $v\in \Sph^2$, then $v$ can be written as $v=(r_1,r_2\cos \varphi, r_2\sin\varphi)$ for some $r_1,r_2\in\RR$ with $r_1^2+r_2^2=1$ and $\varphi\in [0,\pi]$. Then $f(\varphi, r_1,r_2)-f(\varphi, -r_1,-r_2)=(r_1,r_2\cos \varphi,r_2\sin \varphi)=v$, so $I_v$ is non-empty, where $I_v=\{u\in\RR^3:u,u+v\in K\}$. It remains to show that there is no continuous function $\psi: \Sph^2\to K$ such that $\psi(v)\in I_v$ for all $v$. Let $C=\{(a,b,c)\in\RR^3:b^2+c^2=1/4\}$ and $C'=\{(a,b,c)\in\RR^3:b^2+c^2\leq 1/4\}$.
Observe that $K_0\subseteq C'$ and $$K_0\cap C=\left\{\frac{1}{2}(t,s\cos t,s\sin t):s=\pm 1, t\in[0,\pi]\right\}.$$ It is easy to deduce that $K\subseteq C'$ and $$K\cap C=\left\{\frac{1}{2}(t,s\cos t,s\sin t):s=\pm 1, t\in[0,\pi]\right\}\cup\left\{\frac{1}{2}(a,\pm 1,0):a\in [0,\pi]\right\}.$$ It is easy to deduce that if $v=(0,\cos \varphi, \sin\varphi)$ for some $\varphi\in(0,\pi)$, then $I_v$ consists of the single point $\frac{1}{2}(\varphi,-\cos\varphi,-\sin\varphi)$, and if $v=(0,\cos \varphi, \sin\varphi)$ for some $\varphi\in(-\pi,0)$, then $I_v$ consists of the single point $\frac{1}{2}(\pi+\varphi,\cos(\pi+\varphi),\sin(\pi+\varphi))=\frac{1}{2}(\pi+\varphi,-\cos\varphi,-\sin\varphi)$. It follows that if $\psi: \Sph^2\to K$ such that $\psi(v)\in I_v$ for all $v$, then $\psi$ cannot be continuous at $(0,1,0)$. \end{proof} \section{Segments in \texorpdfstring{$\RR^d$}{R^d}}\label{section_Kakeyareachable} \subsection{Proof outline and some simple results} Our goal in this section is to prove Theorem~\ref{theorem_Kakeyasegments} about Kakeya sets in $\RR^d$. Throughout this section, we assume that $d\geq 3$ and $K$ is a compact convex set in $\RR^d$ such that for all $v\in \Sph^{d-1}$, the set $I_v=\{u\in\RR^d: u, u+v\in K\}$ is non-empty. Note that $I_v$ is a compact convex set for all $v$. Given $v, v'\in \Sph^{d-1}$, $u\in I_v, u'\in I_{v'}$ and $\gamma: [0,1]\to \Sph^{d-1}$ continuous with $\gamma(0)=v, \gamma(1)=v'$, say that $(v',u')$ is reachable from $(v,u)$ along $\gamma$ if there is a continuous $\delta:[0,1]\to K$ such that $\delta(t)\in I_{\gamma(t)}$ for all $t$, $\delta(0)=u$ and $\delta(1)=u'$. We say that $v'$ is reachable from $v$ along $\gamma$ if there exist $u,u'$ such that $(v',u')$ is reachable from $(v,u)$ along $\gamma$, and we say $v'$ (or $(v',u')$) is reachable from $v$ (respectively, $(v,u)$) if there exists a $\gamma$ along which it is reachable. 
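Returning briefly to the proof of Theorem~\ref{theorem_continuouscounterexample} above: the algebraic identity at its heart, $f(\varphi,r_1,r_2)-f(\varphi,-r_1,-r_2)=(r_1,r_2\cos\varphi,r_2\sin\varphi)$, is easily verified numerically. The following Python sketch checks it on random unit directions.

```python
import math
import random

def f(t, x, y):
    # f(t, x, y) = (1/2) (t + x, y cos t, y sin t), as in the proof above
    return (0.5 * (t + x), 0.5 * y * math.cos(t), 0.5 * y * math.sin(t))

random.seed(0)
for _ in range(1000):
    phi = random.uniform(0.0, math.pi)
    r1 = random.uniform(-1.0, 1.0)
    r2 = random.choice([-1.0, 1.0]) * math.sqrt(1.0 - r1 * r1)  # r1^2 + r2^2 = 1
    v = (r1, r2 * math.cos(phi), r2 * math.sin(phi))
    a = f(phi, r1, r2)
    b = f(phi, -r1, -r2)
    # The difference of the two antipodal parameter choices is exactly v,
    # so K contains a unit segment in direction v, i.e. I_v is non-empty.
    assert all(abs((ai - bi) - vi) < 1e-12 for ai, bi, vi in zip(a, b, v))
```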
Given a subset $X\subseteq \Sph^{d-1}$, $\epsilon\geq 0$ and $\gamma: [0,1]\to \Sph^{d-1}$ we say that $\gamma$ is $\epsilon$-close to $X$ if for all $t\in [0,1]$ there is some $p\in X$ such that $|p-\gamma(t)|\leq\epsilon$. Given $\epsilon\geq 0$ and $\gamma,\gamma': [0,1]\to \Sph^{d-1}$ we say that $\gamma$ is $\epsilon$-close to $\gamma'$ if it is $\epsilon$-close to the image of $\gamma'$. (Note that this relation is not symmetric.) So, using this terminology, our goal is to prove the following result. \begin{theorem}\label{theorem_reachable} Let $v,v'\in \Sph^{d-1}$ and $u\in I_v, u'\in I_{v'}$, and let $\gamma:[0,1]\to \Sph^{d-1}$ be continuous such that $\gamma(0)=v, \gamma(1)=v'$. Then for any $\epsilon>0$, $(v',u')$ is reachable from $(v,u)$ along a path which is $\epsilon$-close to $\gamma$. \end{theorem} Note that the counterexample in Theorem~\ref{theorem_continuouscounterexample} shows that it is not necessarily true that $v'$ is reachable from $v$ along $\gamma$ (or along a path $0$-close to $\gamma$).\medskip We now briefly discuss our approach to proving Theorem~\ref{theorem_reachable}. It is easy to see that if $p\in \Sph^{d-1}$ is such that $I_p$ has non-empty interior, then every $p'$ in some neighbourhood of $p$ is reachable from $p$. Furthermore, it is not difficult to deal with points $p$ such that $I_p$ is a single point. This means that the complicated case is when $I_p$ is not a single point, but has empty interior (i.e., its dimension is between $1$ and $d-1$). We will prove (Lemma~\ref{lemma_spheresaroundtypeline}) that in the neighbourhood of such points $p$, there are `many' points $q$ with $I_q$ having non-empty interior. Moreover, we will show that if for such a $p$ we start moving on the sphere $\Sph^{d-1}$ from $p$ in some direction, then for `most' directions we initially only encounter points $q$ such that $I_q$ has non-empty interior, and that these $q$ are reachable from $p$. 
We will deduce (Lemma~\ref{lemma_pathsavoidingbadcase}) that Theorem~\ref{theorem_reachable} holds for $\gamma$ if for all points $v$ on $\gamma$ such that $I_v$ has empty interior and is not a single point, the tangent to $\gamma$ at $v$ is not in some special set of `forbidden' directions. Finally, we will show that we can perturb $\gamma$ slightly to make sure that we avoid such cases. We note that in some sense we can have `many' points $p\in \Sph^{d-1}$ such that $I_p$ is not a single point but has empty interior. For example, if $K=\{(x,y,z)\in \RR^3: x\in[-1,1],y^2+z^2\leq 1/4\}$, then all $p$ along a great circle have this property. We believe the reader will not lose much by focusing on the case $d=3$: some of the lemmas are easier to visualise and prove in that case, but the main ideas of the proof are the same.\medskip Let us start with some simple observations. \begin{lemma}\label{lemma_canchooseendpoints} Suppose that $v,v'\in \Sph^{d-1}$ and $\gamma:[0,1]\to \Sph^{d-1}$ are such that $v'$ is reachable from $v$ along $\gamma$. Let $u\in I_v$, $u'\in I_{v'}$ be arbitrary. Then $(v',u')$ is reachable from $(v,u)$ along a path which has the same image as $\gamma$ (and hence is $0$-close to $\gamma$). \end{lemma} \begin{proof} Let $w\in I_v$, $w'\in I_{v'}$ and $\delta: [0,1]\to K$ be such that $\delta$ is continuous, $\delta(t)\in I_{\gamma(t)}$ for all $t$, $\delta(0)=w$ and $\delta(1)=w'$. Define $\gamma'$ and $\delta'$ by setting \begin{equation*} \gamma'(t)= \begin{cases*} v & if $t\in[0,1/3]$\\ \gamma(3(t-1/3)) & if $t\in[1/3,2/3]$\\ v' & if $t\in [2/3,1]$ \end{cases*} \end{equation*} and \begin{equation*} \delta'(t)= \begin{cases*} (1-3t)u+3tw & if $t\in[0,1/3]$\\ \delta(3(t-1/3)) & if $t\in[1/3,2/3]$\\ (1-3(t-2/3))w'+3(t-2/3)u' & if $t\in [2/3,1].$ \end{cases*} \end{equation*} The statement of the lemma follows easily, using that $I_v, I_{v'}$ are convex. 
\end{proof} Note that Lemma~\ref{lemma_canchooseendpoints} implies that if $v'$ is reachable from $v$ (along some path which is $\epsilon$-close to $X$) and $v''$ is reachable from $v'$ (along some path which is $\epsilon$-close to $Y$) then $v''$ is reachable from $v$ (along some path which is $\epsilon$-close to $X\cup Y$). \begin{lemma}\label{lemma_nicetype} Assume that $V\subseteq \Sph^{d-1}$ is such that for all $v\in V$, $I_v$ has non-empty interior. Assume furthermore that $\gamma: [0,1]\to V$ is continuous. Then $\gamma(1)$ is reachable from $\gamma(0)$ along a path which is $0$-close to $\gamma$. \end{lemma} \begin{proof} For all $t\in [0,1]$ we can find some $r_t>0$, $p_t\in K$ such that $I_{\gamma(t)}$ contains an open ball of radius $r_t$ around $p_t$, i.e., whenever $|z|<r_t$ then $p_t+z, p_t+z+\gamma(t)\in K$. It follows that whenever $|\gamma(s)-\gamma(t)|<r_t$ then $p_t, p_t+\gamma(s)\in K$, i.e., $p_t\in I_{\gamma(s)}$. Let $\eta_t>0$ be such that $|\gamma(s)-\gamma(t)|<r_t$ whenever $|s-t|<\eta_t$. By compactness of $[0,1]$, we can find some $r>0$ such that for every $s\in[0,1]$ there is some $t_s\in [0,1]$ with $|s-t_s|\leq \eta_{t_s}-r$. Pick some integer $N>1/r$, and let $x(i)=i/N$ ($i=0,\dots,N$). Then $\gamma(x({i+1}))$ is reachable from $\gamma(x(i))$ along a path which has the same image as $\gamma|_{[x(i),x({i+1})]}$ (the corresponding function $\delta$ is the constant function $p_{t_{x(i)}}$). Using Lemma~\ref{lemma_canchooseendpoints} several times, and concatenating the appropriate paths, we get that $\gamma(1)$ is reachable from $\gamma(0)$ along a path with the same image as $\gamma$. \end{proof} In light of Lemma~\ref{lemma_nicetype}, finding points $v$ such that $I_v$ has non-empty interior is useful for proving reachability. The next lemma gives a convenient condition for checking that $I_v$ has non-empty interior.
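Before stating it, here is a concrete illustration of the sets involved. Taking $K$ to be the closed unit disc in $\RR^2$ (an illustrative choice, not from the text), the set $I_v=\{u: u, u+v\in K\}$ is just $K\cap (K-v)$, and one can check numerically that it has non-empty interior for a unit vector $v$, since $K$ contains a segment of length $2>1$ in direction $v$:

```python
def in_K(p):
    # Illustrative choice: K is the closed unit disc in R^2 (not from the text).
    return p[0] ** 2 + p[1] ** 2 <= 1.0

def in_I_v(u, v):
    # I_v = {u : u and u + v both lie in K} = K intersected with K - v
    return in_K(u) and in_K((u[0] + v[0], u[1] + v[1]))

v = (1.0, 0.0)  # unit vector; K contains u = (-1, 0) and u + 2v = (1, 0)
# Since K contains a segment of length 2 > 1 in direction v, I_v should
# have non-empty interior: check a small square around (-0.5, 0).
r = 0.1
assert all(in_I_v((-0.5 + dx, dy), v)
           for dx in (-r, 0.0, r) for dy in (-r, 0.0, r))
```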
\begin{lemma}\label{lemma_longsegment} If $v\in \Sph^{d-1}$ and there is some $\lambda>1$ and $u\in K$ such that $u+\lambda v\in K$, then $I_v$ has non-empty interior. \end{lemma} \begin{proof} We may assume that $u=0$. If $p\in K$, then $(1-1/\lambda)p\in K$ and $(1-1/\lambda)p+(1/\lambda) \lambda v\in K$ by convexity, so $(1-1/\lambda)p\in I_{v}$. Given some $w\in \Sph^{d-1}$, there are points $p_1,p_2\in K$ such that $p_2-p_1=w$. Then $(1-1/\lambda)p_i\in I_{v}$ for $i=1,2$, and therefore $I_v$ contains two points $q_1, q_2$ with $q_2-q_1=(1-1/\lambda)w$. So we can pick $e_1, f_1, \dots, e_d, f_d\in I_v$ such that $f_i-e_i$ is the vector with all coordinates zero, except the $i$th coordinate, which is $1-1/\lambda$. Let $c=\frac{1}{2d}\sum(e_i+f_i)$. By convexity of $I_v$, it is easy to see that $c\in I_v$, and whenever $|x_i|\leq \frac{1}{2d}(1-1/\lambda)$ for all $i$ then $c+(x_1,\dots,x_d)\in I_v$. So $I_v$ contains a ball of radius $\frac{1}{2d}(1-1/\lambda)$ around $c$. \end{proof} The following useful lemma gives another condition for finding $v$ such that $I_v$ has non-empty interior, and it also gives some restrictions on what $I_v$ can look like when $I_v$ has empty interior: $I_v-I_v$ must be perpendicular to $v$. \begin{lemma}\label{lemma_largeinnerproducts} Suppose that $p\in \Sph^{d-1}$, $x,q\in \RR^d$ such that $\langle p,q\rangle>1$ and $x, x+q\in K$. Then $I_p$ has non-empty interior. In particular, if $v\in \Sph^{d-1}$ and $u,w\in \RR^d$ such that $\langle v,w\rangle \not =0$ and $u,u+w\in I_v$, then $I_v$ has non-empty interior. \end{lemma} \begin{proof} Let $\epsilon\in (0,1)$ be small enough so that $0\not =|p-\epsilon q|<1-\epsilon$. Note that such an $\epsilon$ exists, since $|p-\epsilon q|^2=1-2\langle p,q\rangle\epsilon+|q|^2\epsilon^2$ is less than $1-2\epsilon+\epsilon^2$ for $\epsilon$ small enough, as $\langle p,q\rangle>1$. Let $p'=\frac{p-\epsilon q}{|p-\epsilon q|}$. Note that $|p'|=1$. 
We know that there is some $y\in \RR^d$ such that $y,y+p'\in K$. Let \begin{align*} z&=\frac{\epsilon}{\epsilon+|p-\epsilon q|} x+\frac{|p-\epsilon q|}{\epsilon+|p-\epsilon q|}y,\\ z'&=\frac{\epsilon}{\epsilon+|p-\epsilon q|} (x+q)+\frac{|p-\epsilon q|}{\epsilon+|p-\epsilon q|}(y+p'). \end{align*} Then $z,z'\in K$ by convexity. But \begin{align*}z'-z&=\frac{\epsilon}{\epsilon+|p-\epsilon q|} q+\frac{|p-\epsilon q|}{\epsilon+|p-\epsilon q|}p'\\ &=\frac{1}{\epsilon+|p-\epsilon q|}p. \end{align*} Since $\frac{1}{\epsilon+|p-\epsilon q|}>1$, the set $I_p$ has non-empty interior by Lemma~\ref{lemma_longsegment}. For the final part of the lemma, we may assume $\langle v,w\rangle >0$ (otherwise replace $u$ by $u+w$ and $w$ by $-w$). But then $u, u+v+w\in K$, so we can apply the first part of the lemma with $p=v$, $q=v+w$, $x=u$. \end{proof} \subsection{The main lemmas} The following lemma is one of the key observations. Essentially, it says that if $I_v$ has empty interior but is not a single point, then for `most' points $p$ around $v$ the set $I_p$ has non-empty interior, and those $p$ are reachable from $v$. \begin{lemma}\label{lemma_spheresaroundtypeline} Suppose that $v\in \Sph^{d-1}$, and $u,w\in \RR^d$ are such that $u,u+w\in I_v$, $0<|w|<1$ and $\langle v,w\rangle=0$. Let $P=\{p\in \Sph^{d-1}: \langle p, v+w \rangle > 1 \}\cup \{p\in \Sph^{d-1}: \langle p, v-w \rangle > 1 \}$. Then $I_p$ has non-empty interior for all $p\in P$. Moreover, whenever $p\in P$, then $p$ is reachable from $v$ along a path which is $2|p-v|$-close to $\{v\}$. \end{lemma} Note that the condition $\langle v,w\rangle=0$ holds automatically when $I_v$ has empty interior by the final part of Lemma~\ref{lemma_largeinnerproducts}. Figure~\ref{figure_Kakeyaspheres} shows the set $P$ in the case $d=3$. \begin{figure}[h!]
\includegraphics[clip,trim=0cm 0cm 0cm 0cm, width=0.3\linewidth]{Picture} \centering \captionsetup{justification=centering} \caption{The set $P$ in Lemma~\ref{lemma_spheresaroundtypeline} is the region enclosed by the two blue circles ($d=3$). The point $v$ is the intersection of the two circles, and $w$ is parallel to the line connecting the centres of the blue circles. The yellow dotted great circle gives the only direction (for $d=3$) not pointing to the inside of the two circles.} \label{figure_Kakeyaspheres} \end{figure} \begin{proof} The claim that $I_p$ has non-empty interior for all $p\in P$ follows directly from Lemma~\ref{lemma_largeinnerproducts}. For the second claim, let $p\in P$ be arbitrary. We may assume $\langle p,v+w\rangle>1$ (otherwise replace $u$ by $u+w$ and $w$ by $-w$). Note that $\langle v,p\rangle=\langle v+w,p\rangle-\langle w,p\rangle>1-1=0$, and similarly $\langle w,p\rangle>0$. Pick some small $\lambda\in (0,1)$ (to be specified later). It is easy to see that if we write $\gamma(t)=\frac{v+2t\lambda w}{|v+2t\lambda w|}$ and $\delta(t)=u$ for all $t\in [0,1/2]$, then $\delta(t)\in I_{\gamma(t)}$ for all $t\in [0,1/2]$. Furthermore, if we write $q(s)=\frac{(1-s)(v+\lambda w)+sp}{|(1-s)(v+\lambda w)+sp|}$ for all $s\in [0,1]$, then $\langle q(s), v+w\rangle>1$ for all $s\in [0,1]$. 
Indeed, it is easy to check that $\langle (1-s)(v+\lambda w)+sp, v+w\rangle> 0$, and we have \begin{align*} |(1-s)(v+\lambda w)+sp|^2 &=(1-s)^2(1+\lambda^2|w|^2)+s^2+2s(1-s)\langle p,v+\lambda w\rangle\\ &\leq (1-s)^2(1+\lambda^2|w|^2)+s^2+2s(1-s)\langle p, v+w\rangle \end{align*} and \begin{align*} \langle (1-s)(v+\lambda w)&+sp,v+w\rangle^2 =\left((1-s)(1+\lambda|w|^2)+s\langle p,v+w\rangle\right)^2\\ &=(1-s)^2(1+\lambda |w|^2)^2+s^2\langle p,v+w\rangle^2+2s(1-s)(1+\lambda |w|^2)\langle p,v+w\rangle\\ &> (1-s)^2(1+\lambda|w|^2)+s^2+2s(1-s)\langle p,v+w\rangle\\ &\geq (1-s)^2(1+\lambda^2|w|^2)+s^2+2s(1-s)\langle p, v+w\rangle, \end{align*} giving $\langle q(s), v+w\rangle^2>1$. So $I_{q(s)}$ has non-empty interior for all $s$. Using Lemma~\ref{lemma_nicetype} (and Lemma~\ref{lemma_canchooseendpoints}), it is easy to deduce that we can extend $\gamma, \delta$ to $[0,1]$ such that $\delta(t)\in I_{\gamma(t)}$ for all $t$ and for all $t\geq 1/2$ there is some $s\in [0,1]$ such that $\gamma(t)=q(s)$. Now, if $t\leq 1/2$, then \begin{align*} |v-\gamma(t)|^2&=\left|v-\frac{v+2t\lambda w}{|v+2t\lambda w|}\right|^2\\ &=2-2\left\langle v,\frac{v+2t\lambda w}{|v+2t\lambda w|}\right\rangle\\ &=2-\frac{2}{|v+2t\lambda w|}\\ &=2-\frac{2}{(1+(2t\lambda|w|)^2)^{1/2}}\\ &\leq 2-\frac{2}{(1+\lambda^2|w|^2)^{1/2}}. \end{align*} Furthermore, if $t\geq 1/2$ and $\gamma(t)=q(s)$, then \begin{align*} |v-\gamma(t)|^2&=\left|v- \frac{(1-s)(v+\lambda w)+sp}{|(1-s)(v+\lambda w)+sp|}\right|^2\\ &=2-2\left\langle v, \frac{(1-s)(v+\lambda w)+sp}{|(1-s)(v+\lambda w)+sp|}\right\rangle\\ &=2-2\frac{(1-s)+s\langle v,p\rangle}{|(1-s)(v+\lambda w)+sp|}\\ &\leq 2-2\frac{\langle v,p\rangle}{|(1-s)(v+\lambda w)+sp|}\\ &\leq 2-2\frac{\langle v,p\rangle}{\max\{|v+\lambda w|,|p|\}}\\ &=2-\frac{2\langle v,p\rangle}{(1+\lambda^2|w|^2)^{1/2}}. \end{align*} It follows that $|v-\gamma(t)|\leq \left(2-\frac{2\langle v,p\rangle}{(1+\lambda^2|w|^2)^{1/2}}\right)^{1/2}$ for all $t$. 
But $\lambda\in (0,1)$ was arbitrary, and taking $\lambda\to 0^+$ we have $\left(2-\frac{2\langle v,p\rangle}{(1+\lambda^2|w|^2)^{1/2}}\right)^{1/2}\to (2-2\langle v,p\rangle )^{1/2}=|v-p|$. It follows that we can choose $\lambda$ such that $|v-\gamma(t)|\leq 2|v-p|$ for all $t$. \end{proof} The next lemma is one of the main corollaries of Lemma~\ref{lemma_spheresaroundtypeline}. \begin{lemma}\label{lemma_pathsavoidingbadcase} Let $\epsilon>0$, and suppose that $\gamma:[0,1]\to \Sph^{d-1}$ is continuously differentiable such that for all $t\in [0,1]$, one of the following holds. \begin{enumerate} \item $I_{\gamma(t)}$ has non-empty interior; \item $I_{\gamma(t)}$ has empty interior, but there exist $u,w\in \RR^d$ such that $u,u+w\in I_{\gamma(t)}$ and $\langle w,\gamma'(t)\rangle\not =0$; \item $I_{\gamma(t)}$ is a single point. \end{enumerate} Then $\gamma(1)$ is reachable from $\gamma(0)$ along a path which is $\epsilon$-close to $\gamma$. \end{lemma} Note that in the second case we must have $\langle \gamma(t),w\rangle=0$ by the final part of Lemma~\ref{lemma_largeinnerproducts}. \begin{proof} Let $T_i$ be the set of $t\in [0,1]$ belonging to the $i$th case above $(i=1,2,3)$. We first claim that $T_3$ is closed. Indeed, it is easy to see that $T_1$ is open, and if $t\in T_2$ then by Lemma~\ref{lemma_spheresaroundtypeline} there is some $\mu>0$ such that $((t-\mu,t+\mu)\setminus\{t\})\cap [0,1]\subseteq T_1$. Since $T_3$ is closed, $[0,1]\setminus T_3$ is a union of disjoint (open) intervals: $[0,1]\setminus T_3=\bigcup_{J\in \JJ}J$, where for all $J$ either $J=(a_J, a_J+b_J)$ (with $0\leq a_J<a_J+b_J\leq 1$), or $J=[0,b_J)$, or $J=(a_J,1]$, or $J=[0,1]$ (and $J\cap J'=\emptyset$ if $J\not =J'$).
For each $J\in \JJ$ (and positive integer $m$), define $J_m$ as follows: \begin{itemize} \item If $J=(a_J, a_J+b_J)$, let $J_m=[a_J+\frac{1}{3m}b_J,a_J+(1-\frac{1}{3m})b_J]$; \item If $J=[0,b_J)$, let $J_m=[0,(1-\frac{1}{3m})b_J]$; \item If $J=(a_J,1]$, let $J_m=[a_J+\frac{1}{3m}(1-a_J),1]$; \item If $J=[0,1]$, let $J_m=[0,1]$. \end{itemize} Note that $\bigcup_{m\geq 1} J_m=J$ and $J_1\subseteq J_2\subseteq J_3\subseteq\dots$. Let us also write $J_0=\emptyset$ for all $J$. Observe that if $t\in T_1\cup T_2$, then for any $\eta>0$ there exists $\mu>0$ such that if $t'\in (t-\mu,t+\mu)\cap [0,1]$ then $\gamma(t')$ is reachable from $\gamma(t)$ along a path which is $\eta$-close to $\{\gamma(t)\}$. Indeed, this is easy to see (and follows from Lemma~\ref{lemma_nicetype}) when $t\in T_1$, and follows from Lemma~\ref{lemma_spheresaroundtypeline} when $t\in T_2$. \textbf{Claim.} We can recursively construct $\alpha_m:\bigcup_{J\in \JJ}J_m\to \Sph^{d-1}$ and $\beta_m:\bigcup_{J\in \JJ}J_m\to \RR^d$ continuous functions such that \begin{enumerate} \item $\beta_m(t)\in I_{\alpha_m(t)}$ for all $t$ (when defined); \item If $m'>m$ then $\alpha_{m'}$ and $\beta_{m'}$ extend $\alpha_{m}$ and $\beta_m$, respectively; \item For all $J\in\JJ$ and $m>0$ we have $\alpha_m(\min J_m)=\gamma(\min J_m)$ and $\alpha_m(\max J_m)=\gamma(\max J_m)$; \item If $t\in J_{m}\setminus J_{m-1}$, then there is some $t'\in J_{m}$ such that $|t-t'|< \operatorname{length}(J)/m$ and $|\alpha_{m}(t)-\gamma(t')|<\min\{\epsilon,\operatorname{length}(J)/m\}$.\label{item_close} \end{enumerate} \textbf{Proof of Claim. }It is enough to show that whenever $a<b$, $[a,b]\subseteq J$, $u\in I_{\gamma(a)}, v\in I_{\gamma(b)}$ and $\eta>0$, then there exist $f:[a,b]\to \Sph^{d-1}$ and $g:[a,b]\to K$ continuous functions such that $g(t)\in I_{f(t)}$ for all $t$, $g(a)=u, g(b)=v, f(a)=\gamma(a),f(b)=\gamma(b)$, and for all $t$ there is some $t'\in [a,b]$ with $|t-t'|<\eta$ such that $|f(t)-\gamma(t')|\leq\eta$. 
For each $t$, pick $\mu_t$ as in the observation above; we may assume $\mu_t<\eta$ for all $t$. Using the compactness of $[a,b]$, if $N$ is large enough and we write $x(j)=a+j(b-a)/N$, then for all $j$ we have $x(j+1)-x(j)<\eta$ and there is some $t_j$ such that $|x(j)-t_j|,|x(j+1)-t_j|<\mu_{t_j}$. But then $\gamma(x(j))$ and $\gamma(x(j+1))$ are both reachable from $\gamma(t_j)$ along a path which is $\eta$-close to $\{\gamma(t_j)\}$, and hence $\gamma(x(j+1))$ is reachable from $\gamma(x(j))$ along a path which is $\eta$-close to $\{\gamma(t_j)\}$. It follows that for any choice of $u_j\in I_{\gamma(x(j))}$ ($j=0,\dots,N$) there exist $f_j:[x(j),x(j+1)]\to \Sph^{d-1}$ and $g_j:[x(j),x(j+1)]\to K$ such that $g_j(t)\in I_{f_j(t)}$ for all $t$, $g_j(x(j))=u_j$, $g_j(x(j+1))=u_{j+1}$, $f_j(x(j))=\gamma(x(j))$, $f_j(x(j+1))=\gamma(x(j+1))$, and for all $t$ we have $|f_j(t)-\gamma(t_j)|\leq\eta$. Picking $u_0=u$ and $u_N=v$ and then putting together these $f_j, g_j$ gives the required functions $f,g$ and finishes the proof of the claim. \qed\medskip Define $\alpha:[0,1]\to \Sph^{d-1}$ and $\beta:[0,1]\to \RR^d$ by setting $\alpha(t)$ to be $\alpha_m(t)$ and $\beta(t)$ to be $\beta_m(t)$ when $t\in T_1\cup T_2$ (and $m$ is large enough so that these exist), and when $t\in T_3$ then setting $\alpha(t)=\gamma(t)$ and $\beta(t)$ to be the unique point in $I_{\gamma(t)}$. It is clear that $\alpha(0)=\gamma(0)$, $\alpha(1)=\gamma(1)$, $\beta(t)\in I_{\alpha(t)}$ for all $t$, and $\alpha, \beta$ are continuous at all points in $T_1\cup T_2$. Also, $\alpha$ is $\epsilon$-close to $\gamma$. We show that $\alpha, \beta$ are continuous at all points in $T_3$ as well. We first prove that if $t\in T_3$ then $\alpha$ is continuous at $t$. Take any sequence $(t_n)\to t$ in $[0,1]$; we show $(\alpha(t_n))\to \alpha(t)=\gamma(t)$.
If this is not true, then we can take a subsequence of $(\alpha(t_n))$ that converges to some $p\in \Sph^{d-1}$, $p\not =\gamma(t)$, so we may assume that $(\alpha(t_n))$ is convergent. Also, we may assume that $t_n\in T_1\cup T_2$ for all $n$ (since $\gamma$ is continuous, and $\alpha(t')=\gamma(t')$ if $t'\in T_3$). We may also assume that $(t_n)$ is either decreasing or increasing. Let $J(n)\in \JJ$ be such that $t_n\in J(n)$, and let $m(n)$ be the positive integer such that $t_n\in J(n)_{m(n)}\setminus J(n)_{m(n)-1}$. Furthermore, let $t'_n$ be as given by point \ref{item_close} above for $t_n\in J(n)_{m(n)}\setminus J(n)_{m(n)-1}$. Since $(t_n)$ is either increasing or decreasing, either $J(n)$ is eventually constant and $m(n)\to \infty$, or $J(n)$ takes infinitely many different values and $\operatorname{length}(J(n))\to 0$. In either case, we have $\operatorname{length}(J(n))/m(n)\to 0$. Hence $\alpha(t_n)-\gamma(t'_n)\to 0$ and $t_n-t_n'\to 0$. But then $t_n'\to t$ and hence $\gamma(t_n')\to \gamma(t)$, which implies $\alpha(t_n)\to \gamma(t)$, as claimed. We now show that $\beta$ is also continuous at all $t\in T_3$. Assume that $(t_n)$ is a sequence in $[0,1]$ converging to $t\in T_3$; we show $\beta(t_n)\to \beta(t)$. As before, by taking a subsequence we may assume that $\beta(t_{n})$ converges to some $p\in K$. But $\beta(t_{n})\in I_{\alpha(t_{n})}$ for all $n$, i.e., $\beta(t_{n}), \beta(t_{n})+\alpha(t_n)\in K$ for all $n$. Since $K$ is closed and $\alpha$ is continuous, by taking limits we get $p, p+\alpha(t)\in K$, i.e., $p\in I_{\alpha(t)}=I_{\gamma(t)}$. But $I_{\gamma(t)}=\{\beta(t)\}$, hence $p=\beta(t)$, as claimed. \end{proof} We will attempt to find a `good' path, i.e., one satisfying the conditions of Lemma~\ref{lemma_pathsavoidingbadcase}.
Note that the only case we need to avoid is having a point $v$ on $\gamma$ such that $I_v$ has empty interior, is not a single point, and the tangent to $\gamma$ at $v$ is perpendicular to any $u-u'$ ($u,u'\in I_v$). To find such paths, it will be easier to work in $\RR^{d-1}$ instead of on $\Sph^{d-1}$, using that locally they have the same structure. The next lemma captures the key property coming from Lemma~\ref{lemma_spheresaroundtypeline} in terms of parametrizations. While the formal statement is rather complicated, the lemma is intuitively quite simple, as we now explain. Let us focus on the case $d=3$. Using Figure~\ref{figure_Kakeyaspheres}, we know that if $\gamma$ is a path such that $\gamma(t)$ is a `bad point', i.e., the conditions of Lemma~\ref{lemma_pathsavoidingbadcase} are not satisfied there, then we get the two blue circles touching at $v=\gamma(t)$ such that no point in the regions enclosed by the circles can be a bad point for any path. Moreover, we also know that $\gamma$ must have tangent in the direction of the yellow dotted line at $t$. Our next lemma essentially states that if we take charts then we still get the blue circles whose interiors cannot contain bad points. \begin{lemma}\label{lemma_takingchartsnew} Let $\varphi: \RR^{d-1}\to V$ be a smooth parametrization of some open set $V\subseteq \Sph^{d-1}$, and let $X\subseteq \RR^{d-2}$ be an open neighbourhood of $0$. Write $\gamma_x(t)=(t,x)$ for $x\in X, t\in [0,1]$ (so $\gamma_x:[0,1]\to \RR^{d-1}$). Let $Z$ be the set of all $(t,x)\in\RR^{d-1}$ ($t\in[0,1],x\in X$) such that for $v=(\varphi\circ\gamma_x)(t)=\varphi(t,x)$ the set $I_v$ has empty interior, but there exist $u,w\in \RR^d$ such that $u,u+w\in I_v$, $w\not =0$, $\langle v,w\rangle=0$ and $\langle w, (\varphi\circ\gamma_x)'(t)\rangle =0$. Let $X_Z=\{x\in X: (t,x)\in Z\textnormal{ for some }t\in [0,1] \}$, and assume that $x\in X_Z$ and $t_x\in [0,1]$ are such that $(t_x,x)\in Z$. 
Then there is some $w_x\in\RR^{d-2}$, $w_x\not =0$ such that the open ball of radius $|w_x|$ centred at $(t_x,x+w_x)$ is disjoint from $Z$. \end{lemma} The following lemma tells us that the conclusion of Lemma~\ref{lemma_takingchartsnew} guarantees that there are `few' points we need to avoid. \begin{lemma}\label{lemma_Baire} Let $Z\subseteq \RR^{d-1}$ and let $X$ be an open neighbourhood of $0$ in $\RR^{d-2}$. Let $X_Z=\{x\in X: (t,x)\in Z\textnormal{ for some }t\in [0,1] \}$, and for each $x\in X_Z$ let $t_x\in [0,1]$ be arbitrary such that $(t_x, x)\in Z$. Assume that for each $x\in X_Z$ there is some $w_x\in\RR^{d-2}$, $w_x\not =0$ such that the open ball of radius $|w_x|$ centred at $(t_x,x+w_x)$ is disjoint from $Z$. Then $X_Z\not =X$. \end{lemma} Before we prove Lemma~\ref{lemma_takingchartsnew} and Lemma~\ref{lemma_Baire}, let us first put them together to obtain the lemmas we will use later. \begin{lemma}\label{lemma_takingcharts} Let $\varphi: \RR^{d-1}\to V$ be a smooth parametrization of some open set $V\subseteq \Sph^{d-1}$, and let $X\subseteq \RR^{d-2}$ be an open neighbourhood of $0$. Write $\gamma_x(t)=(t,x)$ for $x\in X, t\in [0,1]$ (so $\gamma_x:[0,1]\to \RR^{d-1}$). Then there exists some $x\in X$ such that for all $\epsilon>0$, $\varphi(\gamma_x(1))$ is reachable from $\varphi(\gamma_x(0))$ along a path which is $\epsilon$-close to $\varphi\circ\gamma_x$. \end{lemma} \begin{proof} Define $Z$, $X_Z$ and $t_x$ (for $x\in X_Z$) as in Lemma~\ref{lemma_takingchartsnew}. By Lemma~\ref{lemma_takingchartsnew}, for each $x\in X_Z$ there is some $w_x\in\RR^{d-2}$, $w_x\not =0$ such that the open ball of radius $|w_x|$ centred at $(t_x,x+w_x)$ is disjoint from $Z$. So we can apply Lemma~\ref{lemma_Baire} to find some $x\in X$ such that $x\not\in X_Z$. 
Then (using the final part of Lemma~\ref{lemma_largeinnerproducts}) we get that Lemma~\ref{lemma_pathsavoidingbadcase} applies for the path $\varphi\circ\gamma_x$ and hence $\varphi(\gamma_x(1))$ is reachable from $\varphi(\gamma_x(0))$ along a path which is $\epsilon$-close to $\varphi\circ\gamma_x$. \end{proof} For two points $x$ and $y$ in $\RR^{d-1}$, let $\gamma_{x,y}$ denote the straight line segment from $x$ to $y$ (i.e., $\gamma_{x,y}(t)=(1-t)x+ty$ for $t\in [0,1]$). The following lemma is a more convenient version of Lemma~\ref{lemma_takingcharts}. \begin{lemma}\label{lemma_chartsconvenient} Let $\varphi: \RR^{d-1}\to V$ be a smooth parametrization of some open set $V\subseteq \Sph^{d-1}$. Let $U_1, U_2$ be non-empty open subsets of $\RR^{d-1}$. Then there are some $x\in U_1, y\in U_2$ such that, for all $\epsilon>0$, $\varphi(y)$ is reachable from $\varphi(x)$ along a path which is $\epsilon$-close to $\varphi\circ \gamma_{x,y}$. \end{lemma} \begin{proof} We can take a bijective affine map $\psi: \RR^{d-1}\to\RR^{d-1}$ which maps $U_1$ to an open neighbourhood of $0$ and $U_2$ to an open neighbourhood of $(1,0,\dots,0)$. Then the statement follows easily from Lemma~\ref{lemma_takingcharts} applied to the parametrization $\varphi\circ \psi^{-1}$. \end{proof} We finish this subsection by giving the proofs of Lemmas~\ref{lemma_takingchartsnew} and \ref{lemma_Baire}. \begin{proof}[Proof of Lemma~\ref{lemma_takingchartsnew}] Let $v=\varphi(t_x,x)$. By the definition of $Z$, we may find $u,w\in \RR^d$ such that $0<|w|<1$, $u,u+w\in I_v$, $\langle v,w\rangle=0$ and $\langle w,(\varphi\circ\gamma_x)'(t)\rangle =0$. By Lemma~\ref{lemma_spheresaroundtypeline}, the set $P=\{p\in \Sph^{d-1}: \langle p, v+w \rangle > 1 \}\cup \{p\in \Sph^{d-1}: \langle p, v-w \rangle > 1 \}$ has the property that for each $p\in P$, $I_p$ has non-empty interior. In particular, $\varphi(Z)$ is disjoint from $P$. By decreasing $|w|$ if necessary, we may assume that $P\subseteq V$. 
We want to show that for some $w'\in\RR^{d-2}$ ($w'\not =0$) the set $\varphi^{-1}(P)$ contains an open ball of radius $|w'|$ around $(t_x,x+w')$. Let $D$ be the derivative $D\varphi|_{(t_x,x)}$ of $\varphi$ at $(t_x,x)$, so $D$ is a bijective linear map $\RR^{d-1}\to\{v'\in\RR^d:\langle v,v'\rangle=0\}$. We can find an orthonormal basis $f_1,\dots,f_{d-1}$ of $\RR^{d-1}$ such that $f_1=(1,0,\dots,0)$, $\langle D(f_2), w\rangle>0$ and $\langle D(f_i), w\rangle=0$ for all $i\not =2$. (Such a basis exists because $\langle D(f_1),w\rangle=\langle (\varphi\circ\gamma_x)'(t_x),w\rangle=0$, while $D$ is bijective and $w\not =0$, so the linear functional $u\mapsto \langle D(u),w\rangle$ is non-zero but vanishes at $f_1$.) Consider the ball of radius $\rho$ centred at $(t_x,x)+\rho f_2$. Any point of this open ball is of the form $q=(t_x,x)+\sum_{i=1}^{d-1}\lambda_if_i$ with $(\lambda_2-\rho)^2+\sum_{i\not =2}\lambda_i^2<\rho^2$. But we have \begin{align*} \varphi(q)&=v+\sum_{i=1}^{d-1}\lambda_i D(f_i)+O(\sum_{i=1}^{d-1}\lambda_i^2) \end{align*} and hence $$ \varphi(q)=v+\sum_{i=1}^{d-1}\lambda_i D(f_i)+O(2\rho \lambda_2).$$ Using that $\langle v,D(f_i)\rangle=0$ for all $i$ and $\langle w,D(f_j)\rangle =0$ for all $j \not =2$, \begin{align*} \langle v+w,\varphi(q)\rangle&=\langle v+w, v+\sum_{i=1}^{d-1}\lambda_i D(f_i)+O(2\rho \lambda_2)\rangle\\ &=1+\lambda_2 \langle w,D(f_2)\rangle+O(2\rho\lambda_2). \end{align*} Since $\langle w, D(f_2)\rangle>0$, we get that there is some $\rho_0>0$ such that if $\rho\leq \rho_0$ then $\langle v+w,\varphi(q)\rangle>1$ (and hence $q\in\varphi^{-1}(P)$ and thus $q\not \in Z$) for all such points $q$. Since $f_1$ is orthogonal to $f_2$, we have $f_2=(0,y)$ for some $y\in \RR^{d-2}$, $|y| =1$. Then $w_x=\rho_0y$ satisfies the conditions. \end{proof} Before we formally prove Lemma~\ref{lemma_Baire}, let us give a sketch proof in the case when $d=3$, $X=(-1,1)$ and $w_x\in\RR$ is the same for all $x$: $w_x=r\in (0,1)$ for all $x\in X$. Assume that $X_Z=X$. Using that the circle of radius $r$ centred at $(t_x,x+r)$ does not contain $(t_y,y)$, it is easy to see that we must have $|t_y-t_x|\geq \Omega_r(\sqrt{y-x})$ whenever $0<y-x<r$ (see Figure~\ref{figure_Baire}).
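In this simplified setting the stated bound follows from a short computation, which we record here for convenience (it is only an expansion of the inequality behind the picture):

```latex
% (t_y, y) lies outside the open disc of radius r centred at (t_x, x+r):
\begin{align*}
r^2 &\leq (t_y-t_x)^2 + \bigl(y-(x+r)\bigr)^2
     = (t_y-t_x)^2 + r^2 - 2r(y-x) + (y-x)^2,
\end{align*}
% so, using 0 < y - x < r (and hence 2r - (y-x) > r),
\begin{align*}
(t_y-t_x)^2 \geq (y-x)\bigl(2r-(y-x)\bigr) > r(y-x),
\quad\text{i.e.}\quad |t_y-t_x| > \sqrt{r}\,\sqrt{y-x}.
\end{align*}
```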
So if we take $N+1$ equally spaced points $x_0,\dots,x_N$ between $0$ and $r$ ($x_j=jr/N$), then $|t_{x_i}-t_{x_j}|\geq \Omega_r(1/\sqrt{N})$ for all $i,j$. It is easy to see that this gives a contradiction as $N\to\infty$. We will use the Baire category theorem to reduce the general case to a case similar enough to the one discussed above. \begin{figure}[h!] \includegraphics[clip,trim=0cm 0cm 0cm 0cm, width=0.5\linewidth]{Baire} \centering \captionsetup{justification=centering} \caption{Since $(t_y,y)$ is not contained in the ball of radius $r$ centred at $(t_x,x+r)$, we have $|t_y-t_x|=\Omega_r(\sqrt{y-x})$.} \label{figure_Baire} \end{figure} \begin{proof}[Proof of Lemma~\ref{lemma_Baire}] For each positive integer $n$, let $X_Z^{n}=\{x\in X_Z: |w_x|\geq 1/n\}$. Clearly, $X_Z=\bigcup_n X_Z^n$. By the Baire category theorem, it is enough to show that each $X_Z^n$ is a finite union of nowhere dense sets. Assume, for contradiction, that $X_Z^n$ cannot be written as such a finite union. Let $\eta=1/4$, and for all $v\in \Sph^{d-3}$ let $U_v=\{u\in \Sph^{d-3}: \langle u,v\rangle>1-\eta\}$. Since $\Sph^{d-3}$ is compact, it is covered by finitely many such sets $U_v$. Write $Y_v=\{x\in X_Z^n: w_x/|w_x|\in U_v\}$. It follows that not every $Y_v$ is nowhere dense, i.e., there exist $v\in \Sph^{d-3}$, $y\in X$ and $\epsilon>0$ such that the closure of $Y_v$ contains all $x\in X$ with $|x-y|\leq \epsilon$. We may assume $\epsilon<1/n$. Write $x(j)=y+\frac{j}{N}\epsilon v$ for $j=0,\dots, N$, where $N$ is some large positive integer (specified later). Note that $|x(j)-y|\leq \epsilon$ for all $j$, so there are some $y(j)\in Y_v$ such that $|x(j)-y(j)|<\eta/N^2$. \textbf{Claim.} If $0\leq i<j\leq N$ then $|t_{y(i)}-t_{y(j)}|=\Omega_{n,\epsilon}(1/N^{1/2})$. Note that if the claim holds, then $\max_i t_{y(i)}-\min_i t_{y(i)}=\Omega_{n,\epsilon}(N^{1/2})$. Then taking $N$ large enough gives a contradiction. So the lemma follows from the claim above. 
\textbf{Proof of Claim.} We will use that the open ball centred at $(t_{y(i)},y(i)+w_{y(i)})$ of radius $|w_{y(i)}|$ does not contain $(t_{y(j)},y(j))$. For simplicity, let us write $t_i$ for $t_{y(i)}$, $t_j$ for $t_{y(j)}$ and $w$ for $w_{y(i)}$. We may assume $|w|=1/n$. We have \begin{align*} |(t_i,y(i)+w)-(t_j,y(j))|^2&=|t_i-t_j|^2+|y(i)+w-y(j)|^2. \end{align*} But \begin{align*} |y(i)+w-y(j)|^2&=|w|^2+|y(i)-y(j)|^2-2\langle w, y(j)-y(i)\rangle\\ &=|w|^2+|y(i)-y(j)|^2-2\langle w, x(j)-x(i)\rangle-2\langle w, y(j)-x(j)\rangle+2\langle w, y(i)-x(i)\rangle\\ &\leq |w|^2+|y(i)-y(j)|^2-2\langle w, \frac{j-i}{N}\epsilon v\rangle +4\frac{\eta}{nN^2}\\ &\leq |w|^2+ (|x(i)-x(j)|+2\eta/N^2)^2 -2\frac{j-i}{N}\epsilon(1-\eta)/n +4\frac{\eta}{nN^2}\\ &=|w|^2+ \left(\frac{j-i}{N}\epsilon+2\eta/N^2\right)^2 -2\frac{j-i}{N}\epsilon(1-\eta)/n +4\frac{\eta}{nN^2}. \end{align*} But we know $|w|^2\leq |(t_i,y(i)+w)-(t_j,y(j))|^2$, thus \begin{align*} |t_i-t_j|^2&\geq 2\frac{j-i}{N}\epsilon(1-\eta)/n- \left(\frac{j-i}{N}\epsilon+2\eta/N^2\right)^2 -4\frac{\eta}{nN^2}\\ &=\frac{2(j-i)\epsilon(1-\eta)}{n}\frac{1}{N}-\frac{(j-i)^2\epsilon^2}{N^2}-O_{n,\epsilon}(1/N^2) \end{align*} Using $\frac{(j-i)^2\epsilon^2}{N^2}\leq \frac{(j-i)\epsilon}{n}\frac{1}{N}$ (as $j-i\leq N$ and $\epsilon\leq 1/n$), we get \begin{align*} |t_i-t_j|^2&\geq \left(\frac{2(j-i)\epsilon(1-\eta)}{n}-\frac{(j-i)\epsilon}{n}\right)\frac{1}{N}-O_{n,\epsilon}(1/N^2). \end{align*} As we picked $\eta=1/4$, we get $$ |t_i-t_j|^2\geq \frac{(j-i)\epsilon}{2n}\frac{1}{N}(1-O_{n,\epsilon}(1/N)),$$ and hence $$|t_i-t_j|\geq \left(\frac{\epsilon}{2n}\right)^{1/2}\frac{1}{N^{1/2}}(1-O_{n,\epsilon}(1/N)),$$ proving the claim and hence the lemma. \end{proof} \subsection{Finishing the proof} We now use our earlier lemmas (especially Lemma~\ref{lemma_chartsconvenient} and Lemma~\ref{lemma_spheresaroundtypeline}) to finish the proof of Theorem~\ref{theorem_reachable}. 
\begin{lemma}\label{lemma_closedirectionseasycase} Let $\varphi: \RR^{d-1}\to V$ be a smooth parametrization of some open set $V\subseteq \Sph^{d-1}$, and let $\epsilon>0$. Assume that $v,v'\in V$ are such that $I_v, I_{v'}$ have non-empty interior. Then $v'$ is reachable from $v$ along a path which is $\epsilon$-close to $\varphi\circ\gamma_{\varphi^{-1}(v),\varphi^{-1}(v')}$. \end{lemma} \begin{proof} Write $u$ for $\varphi^{-1}(v)$ and $u'$ for $\varphi^{-1}(v')$. As $I_v, I_{v'}$ have non-empty interior, there is an open set containing $v$ and $v'$ such that whenever $p$ belongs to this set then $I_p$ has non-empty interior. By Lemma~\ref{lemma_nicetype}, there are open balls $U_1, U_2\subseteq \RR^{d-1}$ around $u$ and $u'$ (respectively) such that for any $x\in U_1$, $\varphi(x)$ is reachable from $v$ along a path which is $0$-close to $\varphi\circ\gamma_{u,x}$, and similarly for any $y\in U_2$, $\varphi(y)$ is reachable from $v'$ along a path which is $0$-close to $\varphi\circ\gamma_{u',y}$. Pick $\eta>0$ small (to be specified later). By Lemma~\ref{lemma_chartsconvenient}, we can find $x\in U_1$, $|x-u|<\eta$ and $y\in U_2$, $|y-u'|<\eta$ such that $\varphi(y)$ is reachable from $\varphi(x)$ along a path which is $(\epsilon/2)$-close to $\varphi\circ\gamma_{x,y}$. It follows that $v'$ is reachable from $v$ along a path which is $(\epsilon/2)$-close to the union of the images of $\varphi\circ\gamma_{u,x}$, $\varphi\circ\gamma_{x,y}$, $\varphi\circ\gamma_{y,u'}$. However, by taking $\eta$ small enough, we can guarantee that all points in these images are at most $\epsilon/2$ away from a point in the image of $\varphi\circ\gamma_{u,u'}$, proving the lemma. \end{proof} To extend Lemma~\ref{lemma_closedirectionseasycase} to all $v,v'$, including when $I_v$ or $I_{v'}$ is a single point, we will use the following lemma. \begin{lemma}\label{lemma_typepoint} Let $\varphi: \RR^{d-1}\to V$ be a smooth parametrization of some open set $V\subseteq \Sph^{d-1}$.
Assume that $v\in V$ and $I_v$ is a single point. Then one of the following statements holds. \begin{enumerate} \item For all $\eta>0$ there is some $p\in V$ such that $I_p$ has non-empty interior and $p$ is reachable from $v$ along a path which is $\eta$-close to $\{v\}$. \item There is some open neighbourhood $N$ of $v$ such that whenever $p\in N$ then $p$ is reachable from $v$ along $\varphi\circ\gamma_{\varphi^{-1}(v),\varphi^{-1}(p)}$. \end{enumerate} \end{lemma} \begin{proof} First, assume that there is a sequence of points $(p_n)$ in $V$ converging to $v$ such that for all $n$, $I_{p_n}$ is not a single point. We will show that the first conclusion holds. By Lemma~\ref{lemma_spheresaroundtypeline} (and the final part of Lemma~\ref{lemma_largeinnerproducts}), we may modify $p_n$ slightly so that $I_{p_n}$ has non-empty interior for all $n$. Let $\eta>0$ be given. By Lemma~\ref{lemma_closedirectionseasycase}, we can take $\gamma_n: [0,1]\to V$ and $\delta_n:[0,1]\to K$ continuous functions such that $\delta_n(t)\in I_{\gamma_n(t)}$ for all $(n,t)$, $\gamma_n(0)=p_n$ for all $n$, $\gamma_n(1)=p_{n+1}$ for all $n$, and $\gamma_n(t)$ is at most $\eta/2^n$ away from some point on the image of $\varphi\circ\gamma_{\varphi^{-1}(p_n),\varphi^{-1}(p_{n+1})}$ for all $(n,t)$. By taking a subsequence of the form $(p_n)_{n>N_0}$, we may assume that for all $n$, all points on the image of $\varphi\circ\gamma_{\varphi^{-1}(p_n),\varphi^{-1}(p_{n+1})}$ are at most $\eta/2$ away from $v$. So $|\gamma_n(t)-v|\leq \eta$ for all $(n,t)$. Using Lemma~\ref{lemma_canchooseendpoints}, we may also assume that $\delta_n(1)=\delta_{n+1}(0)$ for all $n$. Now define $\gamma: [0,1]\to V$ and $\delta: [0,1]\to K$ as follows. Let $\gamma(0)=v$ and let $\delta(0)$ be the unique point in $I_v$. For $t\in (0,1]$, let $n$ be such that $\frac{1}{n+1}\leq t\leq \frac{1}{n}$, and set $\gamma(t)=\gamma_n(n(n+1)(\frac{1}{n}-t))$ and $\delta(t)=\delta_n(n(n+1)(\frac{1}{n}-t))$.
It is easy to check that $\gamma, \delta$ are well-defined and continuous on $(0,1]$, and $|\gamma(t)-v|\leq \eta$ for all $t$. Moreover, using that $(p_n)\to v$ and $\gamma_n(t)$ is at most $\eta/2^n$ away from some point on the image of $\varphi\circ\gamma_{\varphi^{-1}(p_n),\varphi^{-1}(p_{n+1})}$ for all $(n,t)$, we also get that $\gamma$ is continuous at $0$. To show continuity of $\delta$ at $0$, assume that $(t_n)\to 0$ and $(\delta(t_n))\to z$; we prove $z=\delta(0)$. We know $\delta(t_n),\delta(t_n)+\gamma(t_n)\in K$. Using that $K$ is closed and $\gamma$ is continuous, taking limits gives $z, z+\gamma(0)\in K$, i.e., $z, z+v\in K$, i.e., $z\in I_v$. Hence $z=\delta(0)$, as claimed. This proves the lemma in the first case. Now assume that such a sequence $(p_n)$ does not exist. This means that there is an open neighbourhood of $v$ consisting only of points $p$ such that $I_p$ is a single point. It follows that there is an open ball $B$ around $u=\varphi^{-1}(v)$ such that whenever $x\in B$ then $I_{\varphi(x)}$ is a single point. Let $N=\varphi(B)$, so $N$ is an open neighbourhood of $v$. Given $p\in N$, let $\varphi^{-1}(p)=q$. We show $p$ is reachable from $v$ along $\varphi\circ\gamma_{u,q}$. Indeed, let $\gamma(t)=\varphi((1-t)u+tq)$ and let $\delta(t)$ be the unique point in $I_{\gamma(t)}$. Then $\delta$ is continuous by an argument almost identical to the one above. Indeed, if $(t_n)\to t$ and $(\delta(t_n))\to z$, then $\delta(t_n), \delta(t_n)+\gamma(t_n)\in K$. Taking limits gives $z,z+\gamma(t)\in K$, i.e., $z\in I_{\gamma(t)}$, i.e., $z=\delta(t)$, as required. This finishes the proof of the lemma. \end{proof} \begin{lemma}\label{lemma_closedirectionswork} Let $\varphi: \RR^{d-1}\to V$ be a smooth parametrization of some open set $V\subseteq \Sph^{d-1}$, and let $\epsilon>0$. Then for any $v,v'\in V$, $v'$ is reachable from $v$ along a path which is $\epsilon$-close to $\varphi\circ\gamma_{\varphi^{-1}(v),\varphi^{-1}(v')}$.
\end{lemma} \begin{proof} Write $u$ for $\varphi^{-1}(v)$ and $u'$ for $\varphi^{-1}(v')$. Let $\eta>0$ be small (specified later). There is some open set $V_1\subseteq V$ (not necessarily containing $v$) such that any $p\in V_1$ is reachable from $v$ along a path which is $\eta$-close to $\{v\}$. Indeed, this follows from Lemma~\ref{lemma_spheresaroundtypeline} if $I_v$ is not a single point, and from Lemma~\ref{lemma_typepoint} (together with Lemma~\ref{lemma_nicetype}) when $I_v$ is a single point. Similarly, there is some $V_2\subseteq V$ such that any $q\in V_2$ is reachable from $v'$ along a path which is $\eta$-close to $\{v'\}$. In particular, $|v-p|\leq \eta$ and $|v'-q|\leq \eta$ for any such $p,q$. But, by Lemma~\ref{lemma_chartsconvenient}, there are some $p\in V_1, q\in V_2$ such that $q$ is reachable from $p$ along a path which is $\eta$-close to $\varphi\circ\gamma_{\varphi^{-1}(p),\varphi^{-1}(q)}$. Hence $v'$ is reachable from $v$ along a path which is $\eta$-close to $\{v,v'\}\cup\operatorname{Im}(\varphi\circ\gamma_{\varphi^{-1}(p),\varphi^{-1}(q)})$. By taking $\eta$ small enough, we can guarantee that any point in $\operatorname{Im}(\varphi\circ\gamma_{\varphi^{-1}(p),\varphi^{-1}(q)})$ is at most $\epsilon/2$ away from some point in $\operatorname{Im}(\varphi\circ\gamma_{\varphi^{-1}(v),\varphi^{-1}(v')})$. The result follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem_reachable}] Using Lemma~\ref{lemma_closedirectionswork}, it is easy to see that for any $t\in [0,1]$ there is some $\delta_t>0$ such that whenever $t'\in [0,1]$ and $|t-t'|<\delta_t$, then $\gamma(t')$ is reachable from $\gamma(t)$ along a path which is $\epsilon$-close to $\{\gamma(t)\}$. The result follows easily (using the compactness of $[0,1]$ and Lemma~\ref{lemma_canchooseendpoints}). 
\end{proof} \begin{proof}[Proof of Theorem~\ref{theorem_Kakeyasegments}] The result follows immediately from Theorem~\ref{theorem_reachable} when $d\geq 3$, and from Theorem~\ref{theorem_continuousin2d} when $d=2$ (using Lemma~\ref{lemma_canchooseendpoints}, which also holds for $d=2$). \end{proof} \section{Counterexample for general bodies}\label{section_reachablecounterexample} In this section, our goal is to prove Theorem~\ref{theorem_counterexamplereachable_introduction}, restated below for convenience. \fourdimcounterexample* We will use ideas similar to those in the proof of Theorem~\ref{theorem_continuouscounterexample} (but this proof will be significantly more complicated). Note that it is sufficient to find a construction where $S$ is compact but not necessarily convex, as the same set $K$ will still provide a counterexample when $S$ is replaced by its convex hull. The set $S$ in our construction will be given by $$S=\{(x,y,z,w)\in\RR^4: x^2+y^2+z^2+w^2=1, x=\pm 1/2\},$$ see Figure~\ref{figure_counterexample}. \begin{figure}[h!] \includegraphics[clip,trim=0cm 0cm 0cm 0cm, width=0.3\linewidth]{counterexample} \centering \captionsetup{justification=centering} \caption{The set $S$ in our construction is a $4$-dimensional analogue of the blue set (or the convex hull of the blue set), which is a subset of the (red) unit sphere}\label{figure_counterexample} \end{figure} In the proof of Theorem~\ref{theorem_continuouscounterexample} we made sure that our set $K$ lay inside the cylinder $\{(x,y,z):y^2+z^2\leq 1/4\}$, and we controlled the intersection with the boundary of the cylinder. This control enabled us to prove discontinuity by observing that any segment in a direction of the $yz$-plane had to intersect the boundary of the cylinder in a pair of points $(x,y,z), (x,-y,-z)$. We will attempt to do something similar here.
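Before giving the details, let us sanity-check the key geometric fact behind this plan numerically. Writing $C=\{(x,y,z,w)\in\RR^4:x^2+y^2=1\}$ and $S_v=\{v'\in\RR^4: |v'|=1, \langle v,v'\rangle=\pm 1/2\}$ (both introduced formally below), a unit vector on $C$ has $z=w=0$, so $S_v\cap C$ is found by solving $\langle (v_1,v_2),p\rangle=\pm 1/2$ on $\Sph^1$. The following short computation (an illustrative sketch only; the helper name is ours, not part of the construction) lists these points and confirms that a slightly rotated copy of $S$ meets $C$ in two antipodal pairs:

```python
import math

def S_v_cap_C(v, tol=1e-12):
    """Points of S_v = {v' : |v'| = 1, <v, v'> = +-1/2} lying on
    C = {(x, y, z, w) : x^2 + y^2 = 1}.  A unit vector on C has
    z = w = 0, so we solve <(v1, v2), p> = +-1/2 for p on S^1."""
    v1, v2 = v[0], v[1]
    r = math.hypot(v1, v2)
    if r < 0.5 - tol:
        return []                      # v1^2 + v2^2 < 1/4: no intersection
    pts = []
    for s in (0.5, -0.5):
        c = s / r                      # component of p along (v1, v2)/r
        h = math.sqrt(max(0.0, 1.0 - c * c))
        for sgn in ((1,) if h < tol else (1, -1)):
            px = (c * v1 - sgn * h * v2) / r
            py = (c * v2 + sgn * h * v1) / r
            pts.append((px, py, 0.0, 0.0))
    return pts

# a slight rotation of (1, 0, 0, 0) in the xz-plane:
theta = 0.1
v = (math.cos(theta), 0.0, math.sin(theta), 0.0)
pts = S_v_cap_C(v)   # four points, forming two antipodal pairs
```

For $v_1^2+v_2^2=1/4$ the two values of $s$ give a single antipodal pair, and for $v_1^2+v_2^2<1/4$ the intersection is empty, matching the trichotomy used in the proof of Lemma~\ref{lemma_constructionexists} below.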
Our construction will be contained inside the set $\{(x,y,z,w):x^2+y^2\leq 1\}$, and we will control the intersection with the boundary $C=\{(x,y,z,w)\in\RR^4:x^2+y^2=1\}$ of that set. Observe that any rotated copy of $S$ is of the form $$S_v=\{v'\in\RR^4: |v'|=1, \langle v,v'\rangle=\pm 1/2\}$$ for some $v\in \RR^4$ with $|v|=1$. It is not difficult to deduce that if we only rotate $S$ slightly, then the rotated copy intersects $C$ in two pairs of antipodal points. (See Figure~\ref{figure_counterexample}: great circles close to the one given by $x^2+y^2=1, z=0$ intersect the blue set in two pairs of antipodal points). We will have to make sure that $K$ contains translated copies of any two such pairs of antipodal points (so that a translate of $\rho(S)$ is contained in $K$ for all $\rho$), so for all $(x_1,y_1),(x_2,y_2)\in \Sph^1$ we will have some $(z,w)$ such that $(\pm(x_1,y_1),z,w),(\pm(x_2,y_2),z,w)\in C\cap K$. Meanwhile, we will restrict $C\cap K$ in a way that guarantees discontinuity.\medskip Let us now turn to the formal proof of Theorem~\ref{theorem_counterexamplereachable_introduction}. As mentioned above, we will control the intersection of $K$ with $C$, i.e., for all $(x,y)\in \Sph^1$ we will control the set $A_{x,y}=\{(z,w):(x,y,z,w)\in K\}$. The following lemma lists all the properties that we will need -- for now, we only show that such sets $A_{x,y}$ exist in $\RR^2$; at this point they do not necessarily come from a body $K$ in $\RR^4$. \begin{lemma}\label{lemma_cylinderproperties} There exist compact convex sets $(A_p)_{p\in \Sph^1}$ in $\RR^2$ such that the following properties hold. \begin{enumerate} \item For all $p,q\in \Sph^1$, $A_p\cap A_q\not =\emptyset$. \item For all $p\in \Sph^1$, $A_p=A_{-p}$. \item For all $p\in \Sph^1$ and all $t\in A_p$ we have $|t|\leq 1$.
\item The set $\{(p,t):p\in \Sph^1, t\in A_p\}$ is closed, i.e., whenever $(p_n)\to p$ in $\Sph^1$ and $(t_n)\to t$ in $\RR^2$ with $t_n\in A_{p_n}$ for all $n$, then $t\in A_p$. \item For all $\epsilon>0$ and $(z,w)\in \RR^2$ there is some $r\in (0,\epsilon)$ such that whenever $p=(x,y)\in \Sph^1$ with $|x-1/2|=r$ then all points of $A_p$ are at least distance $1/100$ away from $(z,w)$. \end{enumerate} \end{lemma} Note that such sets $A_p$ cannot exist in $\RR$ instead of $\RR^2$: each $A_p$ would have to be a non-empty closed bounded interval, and then $\bigcap_{p\in \Sph^1}A_p$ would be non-empty by the first condition, so the last property could not be satisfied. This is the reason we need $\RR^4$ for our construction instead of $\RR^3$. Before we prove Lemma~\ref{lemma_cylinderproperties}, we state two lemmas which show why it is useful: Theorem~\ref{theorem_counterexamplereachable_introduction} will follow immediately from Lemma~\ref{lemma_cylinderproperties} and these lemmas. Recall that $C=\Sph^1\times \RR^2$ and $S_v=\{v'\in\RR^4: |v'|=1, \langle v,v'\rangle=\pm 1/2\}$. \begin{lemma}\label{lemma_constructionexists} Assume that we have compact convex sets $(A_p)_{p\in \Sph^1}$ in $\RR^2$ such that the following properties hold. \begin{enumerate} \item For all $p,q\in \Sph^1$, $A_p\cap A_q\not =\emptyset$. \item For all $p\in \Sph^1$, $A_p=A_{-p}$. \item For all $p\in \Sph^1$ and all $t\in A_p$ we have $|t|\leq 1$. \item The set $\{(p,t):p\in \Sph^1, t\in A_p\}$ is closed, i.e., whenever $(p_n)\to p$ in $\Sph^1$ and $(t_n)\to t$ in $\RR^2$ with $t_n\in A_{p_n}$ for all $n$, then $t\in A_p$. \end{enumerate} Then there exists a compact convex $S$-Kakeya set $K\subseteq \RR^4$ such that $K\subseteq \{(x,y,z,w)\in \RR^4:x^2+y^2\leq 1\}$ and $K\cap C\subseteq \{(p,t):p\in \Sph^1, t\in A_p\}$.
\end{lemma} \begin{lemma}\label{lemma_constructionworks} Assume that $(A_p)_{p\in \Sph^1}$ in $\RR^2$ are compact convex sets such that the following property holds: for all $\epsilon>0$ and $(z,w)\in \RR^2$ there is some $r\in (0,\epsilon)$ such that whenever $p=(x,y)\in \Sph^1$ with $|x-1/2|=r$ then all points of $A_p$ are at least distance $1/100$ away from $(z,w)$. Assume furthermore that $K$ is a compact convex set such that $K\subseteq \{(x,y,z,w)\in \RR^4:x^2+y^2\leq 1\}$ and $K\cap C\subseteq \{(p,t):p\in \Sph^1, t\in A_p\}$. Then whenever $\gamma: [0,1]\to \Sph^3$ and $\delta: [0,1]\to \RR^4$ are continuous such that $\gamma(0)=(1,0,0,0)$ and $S_{\gamma(t)}+\delta(t)\subseteq K$ for all $t$, then $\gamma(t)=(1,0,0,0)$ for all $t$. \end{lemma} We now prove Lemmas~\ref{lemma_cylinderproperties}, \ref{lemma_constructionexists} and \ref{lemma_constructionworks}. \begin{proof}[Proof of Lemma~\ref{lemma_cylinderproperties}] Consider the following four sets in $\RR^2$: \begin{align*} T_1&=\{0\}\times [0,1],\\ T_2&=[0,1]\times\{0\},\\ T_3&=\{(z,w)\in\RR^2: z+w=1, 0\leq z,w\leq 1\},\\ T&=\{(z,w)\in\RR^2: 0\leq z,w\leq 1, 0\leq z+w\leq 1\}. \end{align*} Given $(x,y)$ with $x^2+y^2=1$, we define $A_{x,y}$ as follows. Let $s=\min(|x-1/2|,|x+1/2|)$. If $s=0$, then $A_{x,y}=T$. Otherwise, let $k$ be the positive integer such that $1/2^k\geq s>1/2^{k+1}$. If $s=1/2^k$, then let $A_{x,y}=T$. Otherwise let $A_{x,y}=T_{k\textnormal{ mod }3}$, where we interpret $k\textnormal{ mod }3$ as an element of $\{1,2,3\}$. It is straightforward to check that each $A_p$ is convex and compact, and that properties 1, 2 and 3 are satisfied. To see that property 4 holds, observe that if $(p_n)\to p$ and $(t_n)\to t$ as above, then either $A_p=T$, or $A_{p_n}$ is eventually constant and equal to $A_p$. In either case, it is easy to deduce that $t\in A_p$. Finally, we show that property 5 holds. Given such $(z,w)$, we can find some $i\in\{1,2,3\}$ such that any point in $T_i$ has distance at least $1/100$ from $(z,w)$.
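This is the step where the particular shape of $T_1,T_2,T_3$ matters. As a sanity check (not part of the proof; helper names are ours), a direct grid search suggests that every point of $\RR^2$ is in fact at distance at least $1/(2+\sqrt{2})\approx 0.29$ from one of the three segments, comfortably above $1/100$:

```python
import math

def dist_to_segment(p, a, b):
    """Euclidean distance from point p to the segment [a, b]."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))          # clamp to the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

# the three segments T_1, T_2, T_3 from the proof
SEGMENTS = [((0.0, 0.0), (0.0, 1.0)),   # T_1
            ((0.0, 0.0), (1.0, 0.0)),   # T_2
            ((1.0, 0.0), (0.0, 1.0))]   # T_3

def worst_T_distance(p):
    """max_i dist(p, T_i): distance to the T_i furthest from p."""
    return max(dist_to_segment(p, a, b) for a, b in SEGMENTS)

# grid search over [-1, 2]^2; outside this square every T_i is
# already at distance greater than 1 from the point
m = min(worst_T_distance((i / 100, j / 100))
        for i in range(-100, 201) for j in range(-100, 201))
```

The minimum is attained near the diagonal point $(a,a)$ with $a=1/(2+\sqrt 2)$, where the distances to $T_1$, $T_2$ and $T_3$ coincide.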
Then we can find some $r\in (0,\epsilon)$ such that $1/2^k> r>1/2^{k+1}$ for some positive integer $k$ with $k\equiv i\textnormal{ mod $3$}$. It is easy to see that this $r$ satisfies the conditions. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma_constructionexists}] Observe that if $v\in \Sph^3$, the set $S_v$ intersects $C$ in $0$, $2$ or $4$ points: \begin{itemize} \item $S_v$ intersects $C$ in $0$ points if and only if $v_1^2+v_2^2<1/4$; \item $S_v$ intersects $C$ in a pair of points $v', -v'$ if and only if $v_1^2+v_2^2=1/4$; \item $S_v$ intersects $C$ in two pairs of (distinct) points $v', -v', v'', -v''$ if and only if $v_1^2+v_2^2>1/4$. \end{itemize} Let \begin{align*} V_1&=\{v\in \Sph^3: v_1^2+v_2^2\geq 1/4\},\\ V_2&=\{v\in \Sph^3: 1/100\leq v_1^2+v_2^2\leq 1/4\},\\ V_3&=\{v\in \Sph^3: v_1^2+v_2^2\leq 1/100\}. \end{align*} For all $v\in V_1$, let $T_v=\bigcap_{w\in C\cap S_v}A_{w_1,w_2}=\bigcap_{p\in \Sph^1: (p,0,0)\in S_v}A_{p}$. Note that $C\cap S_v=\{v',-v',v'',-v''\}$ for some $v',v''\in \Sph^3$ (not necessarily distinct), so (using $A_{-p}=A_{p}$) we have $T_v=A_{v'_1,v'_2}\cap A_{v''_1,v''_2}$. In particular, $T_v\not =\emptyset$. Let $K_{1,v}=S_v+\{(0,0,t):t\in T_{v}\}$ and $$K_1=\bigcup_{v\in V_1}K_{1,v}.$$ For all $v\in V_2$, let $p_v=\left(\frac{1/2}{\sqrt{v_1^2+v_2^2}}v_1,\frac{1/2}{\sqrt{v_1^2+v_2^2}}v_2, \frac{\sqrt{3}/2}{\sqrt{v_3^2+v_4^2}}v_3,\frac{\sqrt{3}/2}{\sqrt{v_3^2+v_4^2}}v_4\right)$. So $(p_v)_1^2+(p_v)_2^2=1/4$, $|p_v|=1$, and we have $p_v=v$ if $v_1^2+v_2^2=1/4$. Let $K_{2,v}=S_v+\{(0,0,t):t\in T_{p_v}\}$. (Note that $C\cap S_{p_v}=\{v',-v'\}$, where $v'=2((p_v)_1,(p_v)_2,0,0)$ and hence $T_{p_v}= A_{2(p_v)_1,2(p_v)_2}$.) Let $$K_2=\bigcup_{v\in V_2}K_{2,v}.$$ For all $v\in V_3$, let $K_{3,v}=S_v$, and let $$K_3=\bigcup_{v\in V_3}K_{3,v}.$$ Finally, let $K_0=K_1\cup K_2\cup K_3$, and let $K$ be the convex hull of $K_0$.\medskip \textbf{Claim.} The set $K_0$ has the following properties.
\begin{enumerate} \item For each $v\in \Sph^3$ there is some $w\in \RR^4$ such that $S_v+w\subseteq K_0$. \item We have $K_0\cap C\subseteq \{(p,t):p\in \Sph^1, t\in A_p\}$, and $K_0$ has no point $(x,y,z,w)$ with $x^2+y^2>1$. \item The set $K_0$ is compact. \end{enumerate} Note that these properties are preserved when taking convex hull. So the claim above implies the statement of the lemma.\medskip \textbf{Proof of Claim.} The first property holds because $K_{i,v}$ contains a translate of $S_v$ if $v\in V_i$. To see that the second property holds, observe that $K_0$ is a union of sets of the form $S_v+(0,0,t)$ for some $t\in \RR^2$. It follows that $K_0$ has no point $(x,y,z,w)$ with $x^2+y^2>1$. Also, if $(p,t)\in K_0\cap C$ ($p\in \Sph^1, t\in \RR^2)$, then $(p,t)\in S_v+(0,0,t)$ for some $v\in \Sph^3$ having $v_1^2+v_2^2\geq 1/4$, and $t\in T_v$ and $(p,0,0)\in C\cap S_v$. But $(p,0,0)\in C\cap S_v$ implies $T_v\subseteq A_p$, so $t\in A_p$, as claimed. It is easy to see that $K_0$ is bounded, so the only property left to check is that $K_0$ is closed. It is enough to show that $K_1, K_2, K_3$ are all closed. We first show that $K_3$ is closed. Assume that $(q_n)$ is a sequence of points in $K_3$ with $(q_n)\to q$, we show that $q\in K_3$. We know $q_n\in S_{v(n)}$ for some $v(n)\in V_3$. By taking an appropriate subsequence, we may assume that $v(n)$ converges to some $v\in V_3$. It is easy to see that $q\in S_v$ must hold, so then $q\in K_3$. Next, we show that $K_2$ is closed. As before, assume that $(q_n)$ is a sequence of points in $K_2$ with $(q_n)\to q$. We have $q_n\in S_{v(n)}+(0,0,t_n)$ for some $v(n)\in V_2$ and $t_n\in T_{p_{v(n)}}=A_{2(p_{v(n)})_1,2(p_{v(n)})_2}$. By taking a subsequence, we may assume that $v(n)$ converges to some $v\in V_2$, and $(t_n)$ converges to some $t\in \RR^2$. Observe that $(p_{v(n)})\to p_v$. 
But then $t\in A_{2(p_v)_1,2(p_v)_2}=T_{p_v}$ and hence $q\in S_v+(0,0,t)\subseteq S_v+\{(0,0,t'):t'\in T_{p_v}\}$, so $q\in K_{2}$, as required. Finally, we show that $K_1$ is also closed. Again, assume that $(q_n)$ is a sequence of points in $K_1$ with $(q_n)\to q$. We have $q_n\in S_{v(n)}+(0,0,t_n)$ for some $v(n)\in V_1$ and $t_n\in T_{v(n)}$. As before, by taking a subsequence we may assume that $v(n)$ converges to some $v\in V_1$ and $t_n$ converges to some $t\in \RR^2$. We claim that this implies $t\in T_v$. Observe that $C\cap S_{v(n)}$ is of the form $\{v'(n),-v'(n),v''(n),-v''(n)\}$, where $v'(n)=\pm v''(n)$ if and only if $v(n)_1^2+v(n)_2^2=1/4$. So we have $$T_{v(n)}=A_{v'(n)_1,v'(n)_2}\cap A_{v''(n)_1,v''(n)_2}.$$ By taking an appropriate subsequence, we may assume that $v'(n)$ converges to $v'$ and $v''(n)$ converges to $v''$, where $C\cap S_v=\{v',-v',v'',-v''\}$. But we have $t_n\in A_{v'(n)_1,v'(n)_2}$ for all $n$, and hence $t\in A_{v'_1,v'_2}$. Similarly, $t\in A_{v''_1,v''_2}$. Hence $t\in T_v$, as claimed. But then $$q\in S_v+(0,0,t)\subseteq S_v+\{(0,0,t'):t'\in T_v\}=K_{1,v}\subseteq K_1,$$ as claimed. This finishes the proof of the claim and hence the lemma. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma_constructionworks}] Assume, for contradiction, that $\gamma(t)\not=(1,0,0,0)$ for some $t$. We may assume that $\gamma(t)_1>9/10$ for all $t$, and that for all $t>0$ we have $\gamma(t)\not =(1,0,0,0)$. There are some continuous functions $v', v'': [0,1]\to C$ such that $S_{\gamma(t)}\cap C=\{v'(t),-v'(t),v''(t),-v''(t)\}$, $\langle \gamma(t), v'(t)\rangle=\langle \gamma(t),v''(t)\rangle=1/2$ and $v'(0),v''(0)=(1/2,\pm\sqrt{3}/2,0,0)$. Observe that if $\gamma(t)\not =(1,0,0,0)$ then $v'(t)_1\not=1/2$ or $v''(t)_1\not =1/2$. Indeed, we would have $v'(t),v''(t)=(1/2,\pm\sqrt{3}/2,0,0)$ and $1=\langle \gamma(t),v'(t)+v''(t)\rangle=\langle \gamma(t),(1,0,0,0)\rangle$, giving $\gamma(t)=(1,0,0,0)$. 
It follows that for all $t>0$, either $v'(t)_1\not=1/2$ or $v''(t)_1\not =1/2$. By continuity, there is some $\epsilon>0$ such that for all $t\leq\epsilon$ we have $|\delta(t)-\delta(0)|<1/100$. We know that $v'(\epsilon)_1\not=1/2$ or $v''(\epsilon)_1\not =1/2$; by symmetry, we may assume that $v'(\epsilon)_1\not=1/2$. By assumption, there is an $x_0$ lying between $1/2$ and $v'(\epsilon)_1$ such that whenever $p\in \Sph^1$ is of the form $p=(x_0,y_0)$ (for some $y_0$) then any point of $A_p$ is at least distance $1/100$ away from $(\delta(0)_3,\delta(0)_4)$. But, by continuity of $v'$, there is some $t_0\in [0,\epsilon]$ such that $v'(t_0)_1=x_0$. Observe that $$K\supseteq S_{\gamma(t_0)}+\delta(t_0)\supseteq \{v'(t_0),-v'(t_0)\}+\delta(t_0).$$ But if $u,u'\in K$ with $u-u'=2(x,y,0,0)$ for some $x,y$ with $x^2+y^2=1$, then we must have $u,u'\in K\cap C$ and $u=(x,y,z,w)$, $u'=(-x,-y,z,w)$ for some $(z,w)\in A_{x,y}$. Hence $\delta(t_0)=(0,0,z,w)$ for some $(z,w)\in A_{v'(t_0)_1,v'(t_0)_2}$. But then $|\delta(t_0)-\delta(0)|\geq 1/100$, giving a contradiction. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem_counterexamplereachable_introduction}] The result follows easily from Lemmas~\ref{lemma_cylinderproperties}, \ref{lemma_constructionexists} and \ref{lemma_constructionworks}. \end{proof} \section{Concluding remarks}\label{section_Kakeyaconcluding} In this paper we answered Question~\ref{question_kakeya} and some related problems. However, there are still some open questions on this topic. For example, our counterexample in Theorem~\ref{theorem_counterexamplereachable_introduction} requires $d\geq 4$, whereas we know that there can be no $2$-dimensional counterexample (by Theorem~\ref{theorem_continuousin2d}). It would be interesting to see a counterexample in $3$ dimensions (we believe that such a construction should exist).
\begin{question} Can we find convex bodies $S$ and $K$ in $\RR^3$ such that $S$ is $K$-Kakeya, but there are two $S$-copies in $K$ which cannot be rotated into each other within $K$? \end{question} Furthermore, we showed that if $S$ is a unit segment, then any two $S$-copies can be rotated into each other within a compact convex ($S$-)Kakeya set, but this fails for general bodies $S$. It would be interesting to determine if there are other sets $S$ (or families of such) for which this property holds. (A trivial example is given by closed balls.) \begin{question} Can we find (compact, convex) sets $S$ in $\RR^d$ with $d\geq 3$ such that $S$ is not a segment or a ball, and whenever some convex body $K$ is $S$-Kakeya then any two $S$-copies can be rotated into each other within $K$? \end{question}
arXiv:2209.09728, "Rotation inside convex Kakeya sets" (math.MG; math.CO), 2022-11-03. https://arxiv.org/abs/2209.09728
Abstract: Let $K$ be a convex body (a compact convex set) in $\mathbb{R}^d$ that contains a copy of another body $S$ in every possible orientation. Is it always possible to continuously move any one copy of $S$ into another, inside $K$? As a stronger question, is it always possible to continuously select, for each orientation, one copy of $S$ in that orientation? These questions were asked by Croft. We show that, in two dimensions, the stronger question always has an affirmative answer. We also show that in three dimensions the answer is negative, even for the case when $S$ is a line segment -- but that in any dimension the first question has a positive answer when $S$ is a line segment. And we prove that, surprisingly, the answer to the first question is negative in dimension four for general $S$.
https://arxiv.org/abs/2003.02964
Stability of Normal Bundles of Space Curves
In this paper, we prove that the normal bundle of a general Brill-Noether space curve of degree $d$ and genus $g \geq 2$ is stable if and only if $(d,g) \not\in \{ (5,2), (6,4) \}$. When $g\leq1$ and the characteristic of the ground field is zero, it is classical that the normal bundle is strictly semistable. We show that this fails in characteristic $2$ for all rational curves of even degree.
\section{Introduction} Let $C$ be a smooth connected curve defined over an algebraically closed field $k$ (of arbitrary characteristic). The normal bundle $N_{C/\mathbb{P}^r}$ of a smooth curve controls the deformations of the curve in $\mathbb{P}^r$ and plays a crucial role in many problems of geometry, arithmetic and commutative algebra. In this paper, we show that the normal bundle of a general Brill--Noether space curve of degree $d$ and genus $g$ is stable if and only if $g \geq 2$ and $(d,g)\not\in \{ (5,2), (6,4) \}$. Let $E$ be a vector bundle on a smooth curve $C$. Let the \defi{slope $\mu(E)$} be \[\mu(E) \colonequals \frac{\deg(E)}{\operatorname{rk}(E)}.\] Then $E$ is called \defi{(semi)stable} if every proper subbundle $F$ of smaller rank satisfies \[\mu(F) \underset{{\scriptscriptstyle (}-{\scriptscriptstyle )}}{<} \mu(E).\] The bundle is called \defi{unstable} if it is not semistable and \defi{strictly semistable} if it is semistable but not stable. By the Brill--Noether Theorem (see \cite{kleimanlaksov, griffithsharris, gieseker, acgh, oss, jp, clt}), a general curve of genus $g$ admits a nondegenerate, degree $d$ map to $\mathbb{P}^r$ if and only if the Brill--Noether number $\rho(g,r,d)$ satisfies $$\rho(g,r,d) \colonequals g - (r+1)(g-d+r) \geq 0.$$ When $r \geq 3$, there is a unique component of the Hilbert scheme that dominates the moduli space $\overline{M}_g$ and whose general member parameterizes a smooth, nondegenerate curve of degree $d$ and genus $g$ in $\mathbb{P}^r$. We call a member of this component a \defi{Brill--Noether curve}. When $r=3$, we call such a curve a \defi{Brill--Noether space curve}. With this terminology, our main theorem is the following. \begin{ithm}\label{thm:main} Let $C \subseteq \mathbb{P}^3$ be a general Brill--Noether space curve of degree $d$ and genus $g$ over an algebraically closed field $k$. \begin{enumerate} \item $N_C$ is stable if and only if $g\geq 2$ and $(d,g) \not\in \{ (5,2), (6,4) \}$.
\item $N_C$ is strictly semistable if and only if $g < 2$ and one of the following holds: $\operatorname{char}(k) \neq 2$, $g = 1$, or $d$~is odd. \item $N_C$ is unstable if and only if $(d, g) \in \{(5, 2), (6, 4)\}$, or all of the following hold: $\operatorname{char}(k)=2$, $g = 0$, and $d$ is even. \end{enumerate} \end{ithm} The normal bundles of curves in projective space have been studied by many authors (for example, see \cite{aly, ballicoellia, coskunriedl, einlazarsfeld, ellia, ellingsrudhirschowitz, ellingsrudlaksov, newstead, ran, sacchiero, sacchiero2, sacchiero3}). Our results complete and unify these results for Brill--Noether space curves. If $(d, g) \in \{(5, 2), (6, 4)\}$, then $C$ lies on a unique quadric $Q$ and $N_{C/Q} \subset N_C$ gives a destabilizing subbundle. We will describe the geometry in these two cases more explicitly in \S \ref{sec-unstable}. Every bundle on $\mathbb{P}^1$ splits as a direct sum of line bundles. Hence, the normal bundle of a smooth rational curve can be written as $N_{C} = \bigoplus_{i=1}^{r-1}\O(a_i)$ for some integers $a_1, \dots, a_{r-1}$ with $$\sum_{i=1}^{r-1} a_i = (r+1)d -2.$$ If $C$ is a general rational curve of degree $d \geq r$ in $\mathbb{P}^r$, and the characteristic of the ground field is not $2$, then $N_{C/ \mathbb{P}^r}$ splits as equally as possible, i.e.\ $|a_i - a_j| \leq 1$ (see \cite{sacchiero, ran, coskunriedl, aly}). Hence, $N_{C/ \mathbb{P}^r}$ is strictly semistable when $r-1$ divides $2d -2$ and is unstable otherwise. When $r=3$ and $\operatorname{char}(k)\neq 2$, since the quantity $2d-2$ is always even, the normal bundle of a general rational curve of degree $d \geq 3$ is strictly semistable. If the characteristic is $2$, we show in Lemma \ref{lem:char2_obs} that $a_i \equiv d \pmod{2}$ for all $i$; this obstructs semistability for rational curves with $d$ even.
Similarly, normal bundles of genus one curves have been studied extensively (see \cite{einlazarsfeld, ellingsrudhirschowitz, ellingsrudlaksov}). By \cite{ellingsrudhirschowitz}, the normal bundle of a general nondegenerate genus one space curve is semistable. On the other hand, on a genus one curve, there are no stable rank $2$ bundles of degree $4d$. Hence, the normal bundle of a general genus one space curve of degree $d\geq 4$ is strictly semistable. Our techniques will provide short arguments reproving the $g=0$ and $1$ cases. In higher genus, the previously known results were more sporadic. The stability of the normal bundle was proved for $(d, g)=(6, 2)$ by Sacchiero \cite{sacchiero3}, for $(d, g)=(9, 9)$ by Newstead \cite{newstead}, for $(d, g)=(6, 3)$ by Ellia \cite{ellia}, and for $(d, g)=(7, 5)$ by Ballico and Ellia \cite{ballicoellia}. Many of these cases will be important for our inductive arguments. For completeness, we will reprove these cases using our techniques or briefly recall the arguments. More generally, in \cite{ellingsrudhirschowitz}, Ellingsrud and Hirschowitz announced a proof of stability of normal bundles in an asymptotic range of degrees and genera; however, their results do not cover many of the most challenging cases of small degree. We prove Theorem \ref{thm:main} by specialization. We use three basic specializations: (1) we specialize to the union of a curve of degree and genus $(d-1, g)$ and a $1$-secant line; (2) we specialize to the union of a curve of degree and genus $(d-1, g-1)$ and a $2$-secant line; and finally (3) we specialize to the union of a curve of degree and genus $(d-2, g-3)$ and a $4$-secant conic. These degenerations reduce Theorem \ref{thm:main} to a finite set of base cases. The most challenging part of the paper is to verify these base cases. We expect our techniques and results to generalize to $\mathbb{P}^r$ for $r\geq 3$ and hopefully settle the following conjecture.
\begin{conj} \label{conj:larger} The normal bundle of a general Brill--Noether curve of genus at least $2$ in $\mathbb{P}^r$ is stable except for finitely many triples $(d,g, r)$. \end{conj} Conjecture \ref{conj:larger} is closely related to several conjectures in the literature. For example, Aprodu, Farkas and Ortega have conjectured that the normal bundle of a general canonical curve of genus $g \geq 7$ is stable \cite[Conjecture 0.4]{AFO} (see also \cite{Bruns}). \subsection*{Organization of the paper} In \S \ref{sec-prelim}, we will recall basic facts about normal bundles on nodal curves and elementary modifications. In \S \ref{sec-unstable}, we will elaborate on the two cases $(d,g) \in \{(5, 2), (6, 4)\}$ as well as the obstruction to stability for rational curves in characteristic $2$. In \S \ref{sec:stab_degen1}, we will introduce several basic degenerations to reduce the theorem to a small set of initial cases. For the rest of the paper, we will analyze these initial cases. \subsection*{Acknowledgments} We would like to thank Atanas Atanasov, Lawrence Ein, Gavril Farkas, Joe Harris, Eric Riedl, Ravi Vakil, and David Yang for invaluable conversations on normal bundles of curves. \section{Preliminaries}\label{sec-prelim} In this section, we collect basic facts on normal bundles of curves, stability of vector bundles, elementary modifications, and on certain reducible Brill--Noether curves. For more details, we refer the reader to \cite{aly, rbn, rbn2}; when necessary, we provide a characteristic-independent proof here. \subsection*{The normal bundle of a space curve} Let $C \subset \mathbb{P}^r$ be a smooth Brill--Noether curve of degree $d$ and genus $g$. The normal bundle $N_C$ is a rank $r-1$ vector bundle that is presented as a quotient \[0 \to T_C \to T_{\mathbb{P}^r}|_C \to N_C \to 0,\] of the restricted tangent bundle of $\mathbb{P}^r$ by the tangent bundle of $C$.
The restricted tangent bundle is itself naturally a quotient in the Euler exact sequence \begin{equation}\label{euler_seq} 0 \to \O_C \to \O_C(1)^{\oplus (r+1)} \to T_{\mathbb{P}^r}|_C \to 0.\end{equation} From this we see that $\deg(N_C) = (r+1)d + 2g - 2$. Specializing to $r=3$, we have that \[\mu(N_C) = 2d + g -1,\] and therefore $N_C$ is stable if and only if all line subbundles $L \subseteq N_C$ have slope at most $2d + g -2$. \subsection*{Stability of vector bundles on nodal curves} In the course of our inductive argument, we will specialize a smooth Brill--Noether curve to a reducible nodal curve. In this section, we generalize the definition of stability of vector bundles to allow $C$ to be a connected nodal curve. We will write \[\nu \colon \tilde{C} \to C\] for the normalization of $C$. For any node $p$ of $C$, write $\tilde{p}_1$ and $\tilde{p}_2$ for the two points of $\tilde{C}$ over $p$. Given a vector bundle $E$ on $C$, the fibers of the pullback $\nu^*E$ to $\tilde{C}$ over $\tilde{p}_1$ and $\tilde{p}_2$ are naturally identified. Given a subbundle $F \subseteq \nu^*E$, it therefore makes sense to compare $F|_{\tilde{p}_1}$ and $F|_{\tilde{p}_2}$ inside $\nu^*E|_{\tilde{p}_1} \simeq \nu^*E|_{\tilde{p}_2}$. \begin{defin} Let $E$ be a vector bundle on a connected nodal curve $C$. For a subbundle $F \subset \nu^*E$, define the \defi{adjusted slope $\mu^{\text{adj}}_C$} by \[\mu^{\text{adj}}_C(F) \colonequals \mu(F) - \frac{1}{\operatorname{rk}{F}} \sum_{p \in C_{\text{sing}}} \operatorname{codim}_{F} \left(F|_{\tilde{p}_1}\cap F|_{\tilde{p}_2} \right),\] where $\operatorname{codim}_F \left(F|_{\tilde{p}_1}\cap F|_{\tilde{p}_2} \right)$ refers to the codimension of the intersection in either $F|_{\tilde{p}_1}$ or $F|_{\tilde{p}_2}$ (which are equal since $\dim F|_{\tilde{p}_1} = \dim F|_{\tilde{p}_2}$). When the curve $C$ is unambiguous, we will omit it from our notation and write simply $\mu^{\text{adj}}(F)$. 
Note that if $F$ is pulled back from $C$, then $\mu^{\text{adj}}_C(F) = \mu(F)$. We say that $E$ is \defi{(semi)stable} if for all subbundles $F \subset \nu^*E$, \[\mu^{\text{adj}}(F) \underset{{\scriptscriptstyle (}-{\scriptscriptstyle )}}{<} \mu(\nu^*E) = \mu(E). \] \end{defin} With this definition, stability is an open condition in families of connected nodal curves. To show this, we will need the following lemma. \begin{lem} \label{lem:contract} Let $\beta \colon C' \to C$ be a map obtained by contracting a $1$- or $2$-secant $\mathbb{P}^1$: \begin{center} \begin{tikzpicture}[scale=1.3] \draw (0, 0) .. controls (0, 1) and (1, 1) .. (1, 0); \draw (0.5, 0) -- (1.1, 0.3); \draw (2, 0) .. controls (2, 1) and (3, 1) .. (3, 0); \draw[->] (1.3, 0.3) -- (1.9, 0.3); \draw (4, 0.3) node{or}; \draw (5, 0) .. controls (5, 1) and (6, 1) .. (6, 0); \draw (4.9, 0.3) -- (6.1, 0.3); \draw (7, 0) .. controls (9, 1) and (6, 1) .. (8, 0); \draw[->] (6.3, 0.3) -- (6.9, 0.3); \end{tikzpicture} \end{center} If $E$ is a (semi)stable vector bundle on $C$, then $\beta^* E$ is also (semi)stable. \end{lem} \begin{proof} Write $\nu \colon \tilde{C} \to C$ and $\nu' \colon \tilde{C'} \to C'$ for the normalization maps. First consider the $1$-secant case. Write $x$ for the point of attachment (so that $C' = C \cup_x \mathbb{P}^1$). Let $E$ be a (semi)stable vector bundle on $C$, and let $F \subset \nu'^* \beta^* E$ be any subbundle. Since $\nu'^*\beta^*E|_{\mathbb{P}^1}$ is trivial, we have \begin{equation} \label{muf} \mu(F|_{\mathbb{P}^1}) \leq 0. \end{equation} Thus, \[\mu^{\text{adj}}_{C'}(F) = \mu^{\text{adj}}_C (F|_{\tilde{C}}) + \mu(F|_{\mathbb{P}^1}) - \frac{\operatorname{codim}_F \left(F|_{\tilde{x}_1}\cap F|_{\tilde{x}_2} \right)}{\operatorname{rk} F} \leq \mu^{\text{adj}}_C (F|_{\tilde{C}}) \underset{{\scriptscriptstyle (}-{\scriptscriptstyle )}}{<} \mu(E),\] hence $\beta^*E$ is (semi)stable. Similarly in the $2$-secant case, write $C' = C'' \cup_{\{x, y\}} \mathbb{P}^1$.
Denote by $\tilde{x}_1$ and $\tilde{y}_1$ (respectively $\tilde{x}_2$ and~$\tilde{y}_2$) the corresponding points on $\mathbb{P}^1$ (respectively $C''$). Let $F \subset \nu'^*\beta^*E$. Since $\nu'^*\beta^*E|_{\mathbb{P}^1}$ is trivial, we can identify the fiber of $E$ at $\tilde{x}_1$ with the fiber of $E$ at $\tilde{y}_1$, and we have \[\mu(F|_{\mathbb{P}^1}) \leq -\frac{1}{\operatorname{rk} F} \cdot \operatorname{codim}_F \left(F|_{\tilde{x}_1}\cap F|_{\tilde{y}_1} \right).\] Thus, for any subbundle $F \subset \nu'^* \beta^* E$, \begin{align*} \mu^{\text{adj}}_{C'}(F) &= \mu^{\text{adj}}_{C''}(F|_{\tilde{C}}) + \mu(F|_{\mathbb{P}^1}) - \frac{1}{\operatorname{rk} F} \cdot \bigg(\operatorname{codim}_F \left(F|_{\tilde{x}_1}\cap F|_{\tilde{x}_2} \right) + \operatorname{codim}_F \left(F|_{\tilde{y}_1}\cap F|_{\tilde{y}_2} \right) \bigg) \\ &\leq \mu^{\text{adj}}_{C''}(F|_{\tilde{C}}) - \frac{1}{\operatorname{rk} F} \cdot \bigg(\operatorname{codim}_F \left(F|_{\tilde{x}_1}\cap F|_{\tilde{y}_1}\right) + \operatorname{codim}_F \left(F|_{\tilde{x}_1}\cap F|_{\tilde{x}_2} \right) + \operatorname{codim}_F \left(F|_{\tilde{y}_1}\cap F|_{\tilde{y}_2} \right) \bigg) \\ \intertext{Twice applying the ``triangle inequality'' $\operatorname{codim} (X \cap Y) + \operatorname{codim} (Y \cap Z) \geq \operatorname{codim} (X \cap Z)$,} &\leq \mu^{\text{adj}}_{C''}(F|_{\tilde{C}}) - \frac{1}{\operatorname{rk} F} \cdot \operatorname{codim}_F \left(F|_{\tilde{x}_2}\cap F|_{\tilde{y}_2} \right) \\ &= \mu^{\text{adj}}_C(F|_{\tilde{C}}) \\ &\underset{{\scriptscriptstyle (}-{\scriptscriptstyle )}}{<} \mu(E). \qedhere \end{align*} \end{proof} \begin{prop} \label{prop:stab-open} Let $\mathscr{C} \to \Delta$ be a family of connected nodal curves over the spectrum of a discrete valuation ring, and $\mathscr{E}$ be a vector bundle on $\mathscr{C}$. 
\begin{enumerate} \item \label{stab-open} If the special fiber $\mathscr{E}_0 = \mathscr{E}|_0$ is (semi)stable, then the general fiber $\mathscr{E}^* = \mathscr{E}|_{\Delta^*}$ is also (semi)stable. \item \label{ssemi-glob} If $\mathscr{C} \to \Delta$ is smooth, and $\mathscr{E}_0$ is semistable, then any subbundle $\mathscr{F}^* \subset \mathscr{E}^*$ with $\mu(\mathscr{F}^*) = \mu(\mathscr{E}^*)$ extends to a subbundle $\mathscr{F} \subset \mathscr{E}$. \end{enumerate} \end{prop} \begin{proof} Write $\nu \colon \tilde{\mathscr{C}} \to \mathscr{C}$ for the normalization. For part~(\ref{stab-open}), after possibly making a base change, let $\mathscr{F}^* \subset \nu^* \mathscr{E}^*$ be a subbundle with $\mu(\mathscr{F}^*)$ maximal. Since $\mu$ is constant in flat families and $\operatorname{codim}(X \cap Y)$ is lower semicontinuous, $\mu^{\text{adj}}$ is upper semicontinuous in flat families. Therefore, if $\mathscr{F}^*$ extends to a subbundle $\mathscr{F} \subset \nu^* \mathscr{E}$, then \begin{equation} \label{stab-open-easy} \mu^{\text{adj}}(\mathscr{F}^*) \leq \mu^{\text{adj}}(\mathscr{F}_0) \underset{{\scriptscriptstyle (}-{\scriptscriptstyle )}}{<} \mu(\mathscr{E}_0) = \mu(\mathscr{E}^*). \end{equation} Otherwise, we make a blowup $\tilde{\beta} \colon \tilde{\mathscr{C}'} \to \tilde{\mathscr{C}}$ in order to extend $\mathscr{F}^* \subset \nu^* \mathscr{E}^*$ to a subbundle $\mathscr{F} \subset \tilde{\beta}^* \nu^* \mathscr{E}$. By semistable reduction, we may ensure that the central fiber remains reduced. By gluing along sections identified under $\nu$, the blowup $\tilde{\beta}$ induces a map $\beta \colon \mathscr{C}' \to \mathscr{C}$, which is an isomorphism away from the central fiber, and on the central fiber consists of replacing nodes by $1$- and $2$-secant $\mathbb{P}^1$'s. Applying Lemma~\ref{lem:contract}, $\beta^* \mathscr{E}_0$ is (semi)stable. Therefore \eqref{stab-open-easy} holds for $\beta^* \mathscr{E}$. 
For part~(\ref{ssemi-glob}), we imitate the above argument to extend $\mathscr{F}^*$ to a subbundle of $\beta^* \mathscr{E}$. Since $\mathscr{C} \to \Delta$ is smooth, $\beta$ can be obtained by iteratively contracting $1$-secant $\mathbb{P}^1$s. Since $\mu(\mathscr{F}^*) = \mu(\mathscr{E}^*)$ and $\beta^* \mathscr{E}_0$ is semistable, we must in particular have equality in equation \eqref{muf} from the proof of Lemma \ref{lem:contract} for every such contraction; thus, $\mathscr{F}$ is trivial along every exceptional divisor of $\beta$. In particular, $\mathscr{F}^*$ already extends to a subbundle $\mathscr{F} \subset \mathscr{E}$ without blowing up. \end{proof} \subsection*{Elementary modifications of vector bundles} Let $E$ be a vector bundle on a scheme $X$ and let $F \subset E$ be a subbundle. For any effective Cartier divisor $D \subset X$, we define the \defi{elementary modification of $E$ at $D$ towards $F$} to be the kernel of the natural evaluation map \[ E[D \to F] \colonequals \ker \left( E \to (E/F)|_D \right). \] For an exposition on this construction, see \cite[\S2--3]{aly}. In particular, by \cite[Proposition~2.6]{aly}, $E[D \to F]$ is a vector bundle. Let $q \in \mathbb{P}^r$ be a point. In this paper we will be primarily concerned with modifications of the normal bundle $N_{C/\mathbb{P}^r}$ towards pointing bundles $N_{C \to q}$, which we now recall. For a more detailed exposition see \cite[\S5--6]{aly}. Write \[U_{C, q} = \{p \in C : T_pC \cap q = \emptyset\},\] and let $\pi_q \colon U_{C, q} \to \mathbb{P}^{r-1}$ denote the projection map from $q$; note that $\pi_q$ is unramified by construction. If $U_{C, q}$ is dense in $C$ and contains the singular locus of $C$, then we may define $N_{C \to q}$ to be the unique extension to all of $C$ of the bundle \[N_{C \to q}|_{U_{C, q}} \colonequals \ker \left( N_C \to N_{\pi_q} \right),\] where $N_{\pi_q}$ denotes the normal sheaf of $\pi_q$. 
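For the degree bookkeeping used repeatedly in our degenerations, recall (cf.\ \cite{aly}) that an elementary modification at a reduced point drops the degree by the corank of $F$: $\deg E[D \to F] = \deg E - \deg(D)\cdot(\operatorname{rk} E - \operatorname{rk} F)$. The following snippet (an illustrative check, not from the paper) records this arithmetic in the rank-$2$ case relevant to space curves:

```python
def modified_degree(deg_E, rk_E, rk_F, n_points):
    """Degree of E[p_1 -> F] ... [p_n -> F]: each elementary modification at a
    reduced point drops deg(E) by the corank rk(E) - rk(F) of F."""
    return deg_E - n_points * (rk_E - rk_F)

# Example: for a 2-secant line L attached at p, q, Lemma lem:hh below gives
# N_{C ∪ L}|_L = N_L(p + q)[p -> q_1][q -> q_2]. With N_L = O(1)^2 of degree 2,
# twisting by p + q gives degree 6, and the two modifications towards the
# line subbundles N_{L -> q_i} (rank 1) each drop the degree by 1:
assert modified_degree(2 + 2 * 2, 2, 1, 2) == 4
print("deg N_{C∪L}|_L =", modified_degree(6, 2, 1, 2))
```

This degree-$4$ answer is what makes $N_{E \cup L}(-2)|_L$ a twist of degree $0$ in the proof of Lemma \ref{lem:52_sections_from_quadric} below.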
Our notation $N_{C \to q}$ is intended to suggest the geometry of sections: they point towards $q$ in $\mathbb{P}^r$. By convention, we will write \[N_C[p \to q] \colonequals N_C[p \to N_{C \to q}].\] The following foundational result of Hartshorne--Hirschowitz underpins our degenerative approach. \begin{lem}[{\cite[Corollary~3.2]{HH}}]\label{lem:hh} Let $X \cup Y$ be a connected nodal curve in $\mathbb{P}^r$. Write $\{p_1, \dots, p_n\} = X \cap Y$ and let $q_i \in T_{p_i}Y$ be a choice of point. Then \[N_{X \cup Y}|_X \simeq N_X(p_1 + \cdots + p_n)[p_1 \to q_1] \cdots [p_n \to q_n].\] \end{lem} \noindent In the course of our degenerations, we will make use of the following lemma. \begin{lem}\label{lem:3pts} Let $D$ be a (smooth) curve of type $(a,b)$ on a smooth quadric surface $Q$. If $q$ is a general point of $D$, then inside $\mathbb{P} N_D$, the two sections coming from the line subbundles $N_{D \to q}$ and $N_{D / Q}$ meet transversely at $a+b-2$ points. \end{lem} \begin{proof} The fibers of $N_{D \to q}$ and $N_{D/Q}$ agree at $p$ if and only if $q$ is contained in $T_pQ$. This occurs exactly at the points $p$ where the two lines through $q$ in $Q$ meet $D$. Since $D$ is of type $(a,b)$ on $Q$, for $q$ general this happens at $a + b - 2$ points of $D$. On the other hand, with multiplicity, the intersection number of these two sections is \[c_1(N_D) - c_1(N_{D \to q}) - c_1(N_{D/Q}) = (2ab + 2a + 2b) - (a+ b +2) - (2ab) = a+b -2.\] Therefore, when $q$ is general, these sections intersect transversely at exactly $a+ b -2$ points. \end{proof} It is a classical fact that the normal bundle of a rational normal (i.e.\ $(d, g) = (3, 0)$) or elliptic normal (i.e.\ $(d, g) = (4, 1)$) curve is semistable, which we record in the following lemma: \begin{lem} \label{lem:01} Let $C$ be a general Brill--Noether curve of degree $d$ and genus $g$, where $(d, g) = (3, 0)$ or $(4, 1)$. Then $N_C$ is semistable.
\end{lem} \begin{proof} For $(d, g) = (3, 0)$, let $p$ be a point on $C$, and write $\bar{C} \subset \mathbb{P}^2$ for the image of $C$ under projection from $p$ (which is a conic). Then the semistability of $N_C$ follows from the exact sequence \[0 \to [N_{C \to p} \simeq \O_{\mathbb{P}^1}(5)] \to N_C \to [N_{\bar{C}}(p) \simeq \O_{\mathbb{P}^1}(5)] \to 0.\] For $(d, g) = (4, 1)$, we note that $C$ is the complete intersection of two quadrics; hence $N_C \simeq \O_C(2) \oplus \O_C(2)$ is semistable. \end{proof} \subsection*{Reducible Brill--Noether curves} In this section we show that the basic degenerations we will employ in the proof of Theorem \ref{thm:main} are in the Brill--Noether component of the Hilbert scheme. We say that two curves $X$ and $Y$ meet \defi{quasi-transversely} at a set of points $\Gamma \subset \mathbb{P}^r$ if for each $p \in \Gamma$, the tangent lines $T_pX$ and $T_p Y$ meet only in the isolated point $p$. (If $r \geq 3$, two curves never meet transversely!) The following Lemma is a special case of results of \cite{rbn}, but we include a characteristic-independent proof of this special case. \begin{lem}\label{lem:rightcomp} Let $C$ be a general Brill--Noether curve of degree $d$ and genus $g$ and let $R$ be one of the following \begin{enumerate}[(i)] \item\label{lem:rightcomp_1sec} a $1$-secant line meeting $C$ quasi-transversely at $p$, \item\label{lem:rightcomp_2sec} a $2$-secant line meeting $C$ quasi-transversely at $p$ and $q$, \item\label{lem:rightcomp_4sec} a $4$-secant conic meeting $C$ quasi-transversely at four coplanar points $p_1, \dots, p_4$. \end{enumerate} Then $C \cup R$ is a Brill--Noether curve of degree and genus \eqref{lem:rightcomp_1sec} $(d+1,g)$, \eqref{lem:rightcomp_2sec} $(d+1,g+1)$, \eqref{lem:rightcomp_4sec} $(d+2,g+3)$. 
\end{lem} \begin{proof} By deformation theory, it suffices to show that $H^1(T_{\mathbb{P}^3}|_{C \cup R})=0$, so that the map $C \cup R \to \mathbb{P}^3$ may be lifted as $C \cup R$ is deformed to a general curve. Moreover, if $C$ is general, then $H^1(T_{\mathbb{P}^3}|_C) = 0$ by the Gieseker-Petri Theorem. We have an exact sequence \begin{equation}\label{rest_tang_es}0 \to T_{\mathbb{P}^3}|_R(-R \cap C) \to T_{\mathbb{P}^3}|_{C \cup R} \to T_{\mathbb{P}^3}|_C \to 0.\end{equation} In cases \eqref{lem:rightcomp_1sec} and \eqref{lem:rightcomp_2sec}, $T_{\mathbb{P}^3}|_R \simeq \O(2) \oplus \O(1)^{\oplus 2}$. Hence $H^1(T_{\mathbb{P}^3}|_R(-p)) = 0$, respectively $H^1(T_{\mathbb{P}^3}|_R(-p-q)) = 0$, and therefore, by \eqref{rest_tang_es} and the Gieseker-Petri Theorem for $C$, we have that $H^1(T_{\mathbb{P}^3}|_{C\cup R}) = 0$. For part \eqref{lem:rightcomp_4sec}, by part \eqref{lem:rightcomp_2sec} we may specialize $C$ to the union of a Brill--Noether curve $C'$ of degree $d-1$ and genus $g-1$ and a $2$-secant line $L$, such that $R$ meets $C'$ at three points and meets $L$ at one point $p$. Let $\Gamma \colonequals (L \cup R) \cap C'$, denoted by solid dots below. \vspace{-0.15in} \begin{center} \begin{tikzpicture}[scale=1.5] \draw (1, 2) .. controls (0.5, 2) and (-0.5, 1.5) .. (0, 1); \draw (0, 1) .. controls (1, 0) and (1, 2) .. (0.1, 1.1); \draw (-0.1, 0.9) .. controls (-0.5, 0.5) and (0.5, -0.3) .. (1, -0.3); \draw (0.145, 1.53) -- (0.845, 0.53); \draw (0.4838, 0.22) .. controls (-0.0362, 1.3328) and (0.67, 1.6628) .. (1.19, 0.55); \draw (0.4838, 0.22) .. controls (1.0038, -0.8928) and (1.71, -0.5628) .. 
(1.19, 0.55); \draw (0.39, 1.18) circle[radius=0.03]; \draw (0.9, 0.5) node{$L$}; \draw (1.38, 0.43) node{$R$}; \draw (0.39, 1.01) node{$p$}; \draw (1.14, 2) node{$C'$}; \filldraw (0.793, -0.263) circle[radius=0.02]; \filldraw (0.67, 0.78) circle[radius=0.02]; \filldraw (0.31, 0.767) circle[radius=0.02]; \filldraw (0.32, 1.277) circle[radius=0.02]; \filldraw (0.745, 1.155) circle[radius=0.02]; \end{tikzpicture} \end{center} \vspace{-0.3in} First, we show that (a) $C' \cup L \cup R$ is a smooth point of the Hilbert scheme and (b) we can smooth $L \cup R$ to a twisted cubic $R'$ that continues to pass through the five points of $\Gamma$. Let $N$ be the subsheaf of $N_{L \cup R}(-\Gamma)$ whose sections fail to smooth the node at $p$. Restriction to $L$ gives an exact sequence \begin{equation}\label{eq:rest_to_L} 0 \to [N_{L \cup R}|_R(-p - \Gamma) \simeq \O\oplus \O(-1)] \to N \to [N|_L \simeq \O(-1)^{\oplus 2}] \to 0; \end{equation} hence, by the long exact sequence associated to \eqref{eq:rest_to_L}, we have $H^1(N) = 0$. By deformation theory, statement (b) follows directly from $H^1(N) = 0$; this vanishing also implies $H^1(N_{C' \cup L \cup R})=0$ (and hence, by deformation theory, statement (a)). To complete the proof, $T_{\mathbb{P}^3}|_{R'}(-R'\cap C') \simeq \O(-1)^{\oplus 3}$ has no higher cohomology and so \eqref{rest_tang_es} and the Gieseker-Petri Theorem for $C'$ show that $H^1(T_{\mathbb{P}^3}|_{C'\cup R'})=0$. Therefore $C' \cup R'$ is in the Brill--Noether component. Since $C' \cup L \cup R$ is a smooth point of the Hilbert scheme and both $C' \cup R'$ and $C \cup R$ are deformations of this, they are in the same component; in particular, $C \cup R$ is in the Brill--Noether component. \end{proof} \section{The Unstable Cases}\label{sec-unstable} \subsection*{Arbitrary characteristic} In two cases --- $(d, g) \in \{(5, 2), (6, 4)\}$ --- Theorem~\ref{thm:main} asserts that, over a field of any characteristic, $N_C$ is unstable.
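The slope arithmetic driving both exceptional cases is elementary and can be sanity-checked mechanically (an illustrative script, not part of the paper): the destabilizing line bundle $K_C(2)$ appearing below has degree $2d+2g-2$, which exceeds $\mu(N_C) = 2d+g-1$ exactly when $g > 1$:

```python
def mu_NC(d, g):
    # slope of N_C for a space curve: deg N_C = 4d + 2g - 2, rank 2
    return (4 * d + 2 * g - 2) // 2          # = 2d + g - 1

def deg_KC2(d, g):
    # degree (= slope) of the line bundle K_C(2): (2g - 2) + 2d
    return 2 * g - 2 + 2 * d

# the comparison deg K_C(2) > mu(N_C) reduces to g > 1
for d, g in [(5, 2), (6, 4)]:
    assert deg_KC2(d, g) > mu_NC(d, g)       # destabilizing subbundle

assert (deg_KC2(5, 2), mu_NC(5, 2)) == (12, 11)
assert (deg_KC2(6, 4), mu_NC(6, 4)) == (18, 15)
```

The point, of course, is that $(5,2)$ and $(6,4)$ are the only pairs with $g \geq 2$ for which the general Brill--Noether curve lies on a quadric, so this comparison only destabilizes these two cases.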
In both of these cases, $C$ lies on a quadric $Q$, and from the normal bundle exact sequence, \begin{equation} \label{inQ} 0 \to [N_{C/Q} \simeq K_C(2)] \to N_C \to [N_Q|_C \simeq \O_C(2)] \to 0, \end{equation} we have that $N_C$ has a subbundle $N_{C/Q}$ of slope $2d + 2g - 2$. If $(d,g) = (5,2)$ (respectively $(6,4)$) then $\mu(N_{C/Q}) = 12$ (respectively $18$), which is strictly more than $\mu(N_C) = 11$ (respectively $15$). In fact, we can say more. Note that $\operatorname{Ext}^1(\O_C(2), K_C(2)) \simeq H^1(K_C)$ is $1$-dimensional; therefore there are only two such extensions up to isomorphism (the split extension, and a unique nontrivial extension). When $(d, g) = (6, 4)$, such curves $C$ are the complete intersection of a quadric and cubic surface, and so \eqref{inQ} is split. When $(d, g) = (5, 2)$, the following lemma is equivalent to the assertion that \eqref{inQ} is nonsplit: \begin{lem}\label{lem:52_sections_from_quadric} Let $D$ be a Brill--Noether curve of degree $5$ and genus $2$ and let $Q$ be the unique quadric containing it. The inclusion $K_D \simeq N_{D/Q}(-2) \subseteq N_D(-2)$ induces an isomorphism on global sections \[H^0(K_D) \simeq H^0(N_D(-2)). \] \end{lem} \begin{proof} As $H^0(K_D) \hookrightarrow H^0(N_D(-2))$, it suffices to show that $h^0(N_D(-2)) = 2$. We will prove this by degenerating the curve $D$ to the union of an elliptic normal curve $E$ of degree $4$ and genus $1$ and a general $2$-secant line $L$ meeting $E$ quasi-transversely at $p$ and $q$, which is a Brill--Noether curve by Lemma \ref{lem:rightcomp}\eqref{lem:rightcomp_2sec}. Recall that $N_L \simeq \O_L(1)^{\oplus 2}$. Therefore by Lemma \ref{lem:hh}, $N_{E \cup L}(-2)|_L \simeq \O_L \oplus \O_L$ has $2$ global sections. Furthermore, since $H^0(N_{E \cup L}(-2)|_L(-p-q)) = 0$, we have that \begin{equation}\label{union_inclusion} H^0(N_{E\cup L}(-2)) \hookrightarrow H^0(N_{E \cup L}(-2)|_{E}). 
\end{equation} The curve $E$ is the complete intersection of $2$ quadrics $Q_1$ and $Q_2$ in $\mathbb{P}^3$. Since it is one condition for a quadric containing $E$ to contain $L$ as well, we may assume that $Q_1$ contains $L$. Then the normal bundle restricted to $E$ \[N_{E\cup L}(-2)|_E \simeq N_{E/Q_1}(-2)(p + q) \oplus N_{E/Q_2}(-2) \simeq \O_E(p+q) \oplus \O_E, \] has $3$ global sections. It remains to show that one of these sections is not in the image of \eqref{union_inclusion}. We claim that the unique (up to scaling) section of $\O_E$ is not in the image of \eqref{union_inclusion}. Indeed, since $L$ is transverse to $Q_2$, this section fails to smooth both nodes; if it extended across $L$, it would have to extend to a section in $H^0(N_L(-2)) \subset H^0(N_{E \cup L}|_L(-2))$. But $N_L(-2) \simeq \O_L(-1) \oplus \O_L(-1)$ has no global sections, so any such extension would have to vanish identically along $L$, and in particular at $p$ and $q$ (which this section does not). \end{proof} \subsection*{Characteristic 2} Theorem \ref{thm:main} asserts that, in characteristic $2$, there are infinitely many pairs $(d,g) = (2k,0)$ for which the normal bundle of a general Brill--Noether curve is unstable. This is the first case of a more general phenomenon occurring only in characteristic $2$. Let $C \subset \mathbb{P}^r$ be a Brill--Noether curve. In any characteristic, the Euler sequence \eqref{euler_seq} shows that the bundle $N_C^\vee(1)$ sits in an exact sequence \begin{equation}\label{prin_parts} 0 \to N_C^\vee(1) \to \O_C^{\oplus r+1} \to \mathscr{P}^1(\O_C(1)) \to 0,\end{equation} where $\mathscr{P}^1(\O_C(1))$ is the first bundle of principal parts of the line bundle $\O_C(1)$. Now assume that $\operatorname{char}(k) =2$ and let $\pi \colon C \to C^{(2)}$ denote the (relative) Frobenius morphism. Given a reduced point $c \in C$, the fiber of $\pi$ containing $c$ is the nonreduced point $2c$.
Therefore \[\mathscr{P}^1(\O_C(1)) \simeq \pi^*\pi_* \O_C(1).\] Thus $N_C^\vee(1) \simeq \pi^* K$ is isomorphic to the pullback of a vector bundle $K$ under Frobenius. Using this, we have the following. \begin{lem}\label{lem:char2_obs} Assume that $\operatorname{char}(k)=2$ and let $C \simeq \mathbb{P}^1$ be a rational curve of degree $d$ in $\mathbb{P}^r$ over $k$. Then the normal bundle splits as \[N_C \simeq \bigoplus_i \O_{\mathbb{P}^1}(a_i), \] for integers $a_i \equiv d \pmod{2}$. \end{lem} \begin{proof} If $\operatorname{char}(k)=2$, then $N_C^\vee(1) \simeq \pi^*K$ for some vector bundle $K$ on $\mathbb{P}^1$. Write $K \simeq \bigoplus \O_{\mathbb{P}^1}(k_i)$. Since $\pi^* \O_{\mathbb{P}^1}(a) \simeq \O_{\mathbb{P}^1}(2a)$, we have $N_C \simeq \bigoplus \O_{\mathbb{P}^1}(d - 2k_i)$ as desired. \end{proof} \begin{cor}\label{cor:char2_stab} Let $C$ be a general rational curve in $\mathbb{P}^r$ of degree $d \geq r$. Then $N_C$ is semistable only if $2d \equiv 2 \pmod{r-1}$; in characteristic~$2$, this can be strengthened to $d \equiv 1 \pmod{r - 1}$. \end{cor} \begin{proof} In any characteristic, $N_C$ can only be semistable if $\mu(N_C) = d + \frac{2d-2}{r-1}$ is an integer. In characteristic~$2$, Lemma~\ref{lem:char2_obs} implies that furthermore $\mu(N_C) - d$ must be an \emph{even} integer. \end{proof} \begin{rem} When $r=3$, we prove in Section \ref{sec:reduction_finite} that Corollary \ref{cor:char2_stab} gives the only obstruction to semistability for the normal bundle of a rational curve in characteristic $2$. With a little more work, one can show the same in any projective space. \end{rem} \section{Stability and degeneration I}\label{sec:stab_degen1} In this section, by specializing to the union of a general Brill--Noether curve and a $4$-secant conic, we reduce Theorem~\ref{thm:main} to the cases $g \leq 8$. Our main tool will be the following basic lemma, which proves stability by degeneration.
\begin{lem} \label{lem:naive} Suppose that $C = X \cup Y$ is a reducible curve and $E$ is a vector bundle on $C$ such that $E|_X$ and $E|_Y$ are semistable. Then $E$ is semistable. Furthermore, if one of $E|_X$ or $E|_Y$ is stable, then $E$ is stable. \end{lem} \begin{proof} Write $\nu \colon X \sqcup Y \to C$ for the normalization map. For any subbundle $F \subseteq \nu^* E$ we have \[\mu^{\text{adj}} \left( F \right) \leq \mu\left( F|_X \right) + \mu\left( F|_Y \right) \underset{{\scriptscriptstyle (}-{\scriptscriptstyle )}}{<} \mu\left( E|_X \right) + \mu\left( E|_Y \right) = \mu\left( E \right). \qedhere\] \end{proof} \subsection*{\boldmath $4$-secant conic degenerations} Let $C$ be a Brill--Noether curve of degree $d \geq 4$ and genus $g$ in~$\mathbb{P}^3$. Let $H \subset \mathbb{P}^3$ be a $2$-plane meeting $C$ transversely; let $p_1, \dots, p_4$ be four points in $C\cap H$. For $R \subset H$ a conic through $p_1, \dots, p_4$, the union $C\cup R$ is a Brill--Noether curve of degree $d+2$ and genus $g + 3$ by Lemma \ref{lem:rightcomp}\eqref{lem:rightcomp_4sec}. \begin{center} \begin{tikzpicture}[scale=1] \draw (1, 3.5) .. controls (0.5, 3.5) and (-1, 3) .. (0, 2); \draw (0, 2) .. controls (0.5, 1.5) and (1, 1) .. (2, 1); \draw (0.1, 2.1) .. controls (0.5, 2.5) and (1, 3) .. (2, 3); \draw (1, 0.5) .. controls (0.5, 0.5) and (-1, 1) .. (-0.1, 1.9); \draw (2, 1) .. controls (4, 1) and (4, 3) .. 
(2, 3); \draw (2.25, 2) ellipse (0.5 and 1.25); \filldraw (1.95,2.99) circle[radius=0.03]; \filldraw (1.95,1) circle[radius=0.03]; \filldraw (2.58,2.93) circle[radius=0.03]; \filldraw (2.58,1.06) circle[radius=0.03]; \draw (1.8,3.15) node{$p_1$}; \draw (1.8,0.85) node{$p_4$}; \draw (2.77,3.1) node{$p_2$}; \draw (2.75,0.9) node{$p_3$}; \draw (1.25,3.6) node{$C$}; \draw (2.25,.55) node{$R$}; \end{tikzpicture} \end{center} \begin{lem}\label{lem:4_sec_conic} In the above setup, if $C$ is a general Brill--Noether curve with $(d,g) \neq (3, 0)$ or $(4,1)$, then \[N_{C \cup R}|_R \simeq \O_{\mathbb{P}^1}(5) \oplus \O_{\mathbb{P}^1}(5) \] is semistable. \end{lem} \begin{proof} We will prove this lemma by degeneration of $C$. If $C$ admits a degeneration to $X \cup Y$, where $\deg X \geq 4$, then we may consider degenerations $X \cup Y \cup R$ of $C \cup R$ where the conic $R$ meets $X$ alone; this reduces the case of $C$ to the case of $X$. By repeatedly applying Lemma~\ref{lem:rightcomp} to pull off $1$-secant lines, $2$-secant lines, or $4$-secant conics, we thus reduce to the case where $(d, g)$ satisfies \begin{equation} \label{foo} \rho(g,3,d) \geq 0, \quad g \geq 0, \quad \text{and} \quad (d, g) \neq (3, 0), (4, 1), \end{equation} but $(d', g')$ fails to satisfy \eqref{foo} for each of $(d', g') = (d - 1, g), (d - 1, g - 1)$, and $(d - 2, g - 3)$. By inspection, this is only possible if $(d, g) = (4, 0), (5, 2),$ or $(6, 4)$. (Indeed, if $g \geq 5$, then $(d', g') = (d - 2, g - 3)$ satisfies \eqref{foo}; if $g \leq 4$ and $d \geq 7$, then $(d', g') = (d - 1, g)$ satisfies \eqref{foo}; the finitely many cases with $g \leq 4$ and $d \leq 6$ are easily verified.) In these cases, $C$ is of type $(3, d - 3)$ on a quadric. Specializing $C$ to the union of a curve of type $(3, 1)$ with $d - 4$ lines of type $(0, 1)$, it thus remains only to consider the case $(d, g) = (4, 0)$. 
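As a consistency check on the claim that $C$ is of type $(3, d-3)$: a curve of type $(a, b)$ on a smooth quadric has degree $a + b$ and genus $(a-1)(b-1)$, so a curve of type $(3, d-3)$ has degree $d$ and genus \[(3-1)\bigl((d-3)-1\bigr) = 2(d-4) = 0, 2, 4 \quad \text{for} \quad d = 4, 5, 6,\] in agreement with the three cases above.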
When $C$ is a rational quartic curve, we specialize $C$ to $C' \cup L$ where $C'$ is a rational normal curve and $L$ is a $1$-secant line meeting $C'$ at a point $x$. Since $C$ has degree $4$, we must specialize $R$ to meet $L$ in one point $y$ and $C'$ in a set $\{z_1, z_2, z_3\}$ of three points: \begin{center} \begin{tikzpicture}[scale=2] \draw (1, 2) .. controls (0.5, 2) and (-0.5, 1.5) .. (0, 1); \draw (0, 1) .. controls (1, 0) and (1, 2) .. (0.1, 1.1); \draw (-0.1, 0.9) .. controls (-0.5, 0.5) and (0.5, -0.3) .. (1, -0.3); \draw (0.4838, 0.22) .. controls (-0.0362, 1.3328) and (0.67, 1.6628) .. (1.19, 0.55); \draw (0.4838, 0.22) .. controls (1.0038, -0.8928) and (1.71, -0.5628) .. (1.19, 0.55); \draw (0.85, 0.5) node{$L$}; \draw (1.35, 0.43) node{$R$}; \draw (1.12, 2) node{$C'$}; \filldraw (0.793, -0.263) circle[radius=0.02]; \filldraw (0.31, 0.767) circle[radius=0.02]; \filldraw (0.745, 1.155) circle[radius=0.02]; \draw (0.8, 0.5) -- (-0.3, 0.61); \filldraw (0.36, 0.544) circle[radius=0.02]; \filldraw (-0.18, 0.598) circle[radius=0.02]; \draw (0.72, -0.33) node{$z_3$}; \draw (0.3, 0.45) node{$y$}; \draw (-0.25, 0.52) node{$x$}; \draw (0.42, 0.81) node{$z_1$}; \draw (0.84, 1.22) node{$z_2$}; \end{tikzpicture} \end{center} Since $N_{C'} \simeq \O_{\mathbb{P}^1}(5) \oplus \O_{\mathbb{P}^1}(5)$, we may arrange for $C'$ to have general tangent directions at the points $z_i$. Thus, $N_{C' \cup R}|_R \simeq \O_{\mathbb{P}^1}(5) \oplus \O_{\mathbb{P}^1}(4)$. In particular, we have a distinguished subspace of $N_R|_y$ given by the positive subbundle $\O_{\mathbb{P}^1}(5)|_y \subset N_{C' \cup R}|_y \simeq N_R|_y$ --- or equivalently, a distinguished plane $\Lambda \supset T_y R$. Since $x \in C'$ is general, we have $x \notin \Lambda$. Thus \[N_{C' \cup L \cup R}|_R \simeq N_{C' \cup R}|_R(y)[y \to x] \simeq \O_{\mathbb{P}^1}(5) \oplus \O_{\mathbb{P}^1}(5). 
\qedhere\] \end{proof} \begin{rem} For $(d, g) = (4, 1)$, the conclusion of Lemma~\ref{lem:4_sec_conic} is false: For any $R$, the curve $C$ lies on a quadric $Q$ containing $R$, and $N_{(C \cup R)/Q}|_R$ is destabilizing. \end{rem} \noindent Let $p_i'$ be a point on $T_{p_i}R \smallsetminus p_i$. Then by Lemma \ref{lem:4_sec_conic} combined with Lemma~\ref{lem:naive}, stability for \[N_C[p_1 \to p_1'][p_2 \to p_2'][p_3 \to p_3'][p_4 \to p_4']\] implies stability for $N_{C\cup R}$, and hence for the normal bundle of a general Brill--Noether space curve of degree $d+2$ and genus $g+3$. \subsection*{\boldmath Deformations of $r$-secant rational curves} To use the above degeneration to reduce to a finite list of genera, we must know that such a conic can be suitably deformed while preserving the incidence conditions with $D$. In greater generality, let $D$ be a Brill--Noether curve, and $R$ be a rational curve meeting $D$ at distinct points $p_1, p_2, \ldots, p_r$. The following key assumption generalizes the conclusion of Lemma~\ref{lem:4_sec_conic}: \begin{assumption}\label{assump:balanced_slope} The restricted normal bundle $N_{D\cup R}|_R$ is perfectly balanced with slope \[\mu(N_{D \cup R}|_R) \geq r + 1.\] \end{assumption} \begin{lem}\label{lem:deform_delta} Under assumption \ref{assump:balanced_slope}, there exists a deformation $R(t)$ of $R$, and $p_i(t)$ of $p_i$, such that the rational curve $R(t)$ meets $D$ quasi-transversely in $p_1(t), p_2(t), \ldots, p_r(t)$, and $p_i(t)$ has nonzero derivative at $t = 0$ for all $i$. \end{lem} \begin{proof} For any $i$, let $N_i$ denote the vector bundle on $R$ obtained by gluing the vector bundles $N_{R \cup D}|_{R \smallsetminus p_i}$ and $N_R|_{R \smallsetminus \{p_1, \ldots, \hat{p_i}, \ldots, p_r\}}$ along the natural isomorphism $N_{R \cup D}|_{R \smallsetminus \{p_1, \ldots, p_r\}} \simeq N_R|_{R \smallsetminus \{p_1, \ldots, p_r\}}$. 
Then obstructions to lifting deformations of $p_i$ to deformations of $R$ that preserve the incidence conditions with $D$ at the $p_j$ lie in $H^1(N_i(-p_1-\cdots-p_r))$; it thus suffices to show \[H^1(N_i(-p_1-\cdots - p_r)) = 0.\] But $N_i(-p_1-\cdots - p_r)$ fits in an exact sequence \[0 \to N_{R \cup D}|_R(-p_1- \cdots - p_{i - 1} - 2 p_i - p_{i+1} - \cdots - p_r) \to N_i(-p_1-\cdots - p_r) \to \O_{p_i} \to 0,\] and so it remains to note that our assumption that $N_{D \cup R}|_R$ is perfectly balanced with slope $\mu(N_{D \cup R}|_R) \geq r + 1$ implies \[H^1(N_{R \cup D}|_R(-p_1- \cdots - p_{i - 1} - 2 p_i - p_{i+1} - \cdots - p_r)) = 0. \qedhere\] \end{proof} \subsection*{Reduction to a finite list of genera} \begin{lem}\label{lem:pancake} Suppose that Theorem \ref{thm:main} is true for all $g\leq 8$. Then it is true for all $g$. \end{lem} \begin{proof} If $\rho(g,3,d) \geq 0$ and $g \geq 9$, then \begin{equation}\label{rho}\rho(g-6, 3, d-4) = \rho(g,3, d) + 2 \geq 0 \quad \text{and} \quad \ g - 6 \geq 2 \quad \text{and} \quad (d - 4, g - 6) \notin \{(5, 2), (6, 4)\},\end{equation} \begin{equation}\label{dminus4} \text{and}\qquad d-4 \geq 4. \end{equation} By \eqref{rho}, a general Brill--Noether curve $D$ of degree $d-4$ and genus $g-6$ has $N_D$ stable by induction. Let $H$ be a general hyperplane; by \eqref{dminus4}, we may let $R_1 \subseteq H$ and $R_2 \subseteq H$ be general $4$-secant conics, both of which meet $D$ at $p_1, \ldots, p_4$: \begin{center} \begin{tikzpicture}[scale=.8] {\small \draw (0, 0) ellipse (2 and 1); \draw (0, 0) ellipse (1 and 2); \filldraw (0.892, 0.892) circle[radius=0.03]; \filldraw (0.892, -0.892) circle[radius=0.03]; \filldraw (-0.892, 0.892) circle[radius=0.03]; \filldraw (-0.892, -0.892) circle[radius=0.03]; \draw (0.892, 0.892) .. controls (0, 0) .. (-0.892, 0.892); \draw (0.892, -0.892) .. controls (0, 0) .. (-0.892, -0.892); \draw (-0.892, 0.892) .. controls (-3, 3) and (-3, -3) .. 
(-0.892, -0.892); \draw (0.892, 0.892) .. controls (1.5, 1.5) .. (1.75, 2); \draw (0.892, -0.892) .. controls (1.5, -1.5) .. (1.75, -2); \draw (0, -2.25) node{$R_2$}; \draw (2.25, 0) node{$R_1$}; \draw (1.9, 2) node{$D$}; \draw (1.03, 1.3) node{$p_1$}; \draw (1.03, -1.3) node{$p_4$}; \draw (-1, 1.3) node{$p_2$}; \draw (-1, -1.3) node{$p_3$}; } \end{tikzpicture} \end{center} By Lemma~\ref{lem:deform_delta}, we may deform $R_i$ to $4$-secant conics $R_i(t)$ meeting $D$ at $p_{i1}(t)$, $p_{i2}(t)$, $p_{i3}(t)$, and $p_{i4}(t)$, such that $p_{1j}(t)$ and $p_{2j}(t)$ have distinct derivatives: \begin{center} \begin{tikzpicture}[scale=0.5] \draw (0, 0) ellipse (2 and 1); \draw[densely dotted] (0, 0) ellipse (2.2 and 1.1); \draw (0, 0) ellipse (1 and 2); \draw[densely dotted] (0, 0) ellipse (0.9 and 1.8); \draw (0.892, 0.892) .. controls (0, 0) .. (-0.892, 0.892); \draw (0.892, -0.892) .. controls (0, 0) .. (-0.892, -0.892); \draw (-0.892, 0.892) .. controls (-3, 3) and (-3, -3) .. (-0.892, -0.892); \draw (0.892, 0.892) .. controls (1.5, 1.5) .. (1.75, 2); \draw (0.892, -0.892) .. controls (1.5, -1.5) .. (1.75, -2); \end{tikzpicture} \end{center} Combining Lemmas~\ref{lem:naive} and~\ref{lem:4_sec_conic}, it remains to show the stability of $N_D[p_{ij}(t) \to p_{ij}'(t)]$ for $t \in \Delta$ general, where $p_{ij}'(t)$ denotes a point on $T_{p_{ij}(t)} R_i(t) \smallsetminus p_{ij}(t)$. By the discussion in Remark~3.4 of \cite{aly}, these vector bundles fit together to form a vector bundle over $D \times \Delta$ whose fiber over $0 \in \Delta$ is the bundle $N_D(-p_1-p_2-p_3-p_4)$ --- which is stable since we have already seen that $N_D$ is stable by induction. \end{proof} \section{Stability and Degeneration II: Gluing Data}\label{sec-gluing} In order to settle the base cases $g \leq 8$, we will need to use degenerations of $C$ to reducible curves $X \cup Y$ where neither $N_{X \cup Y}|_X$ nor $N_{X \cup Y}|_Y$ is necessarily stable.
The basic idea is to compare destabilizing subbundles of $N_{X \cup Y}|_X$ and $N_{X \cup Y}|_Y$, and show that they cannot agree sufficiently over $X \cap Y$. \subsection*{\boldmath $1$-secant degenerations} In some cases, we can construct a \emph{modification} of the restriction $N_{X \cup Y}|_X$ whose stability rules out a destabilizing subbundle of $N_{X \cup Y}|_X$ that could agree sufficiently with a destabilizing subbundle of $N_{X \cup Y}|_Y$. This technique works well when we can understand the geometry of $Y$ explicitly. Here we apply this technique when $Y = L$ is a $1$-secant line. Let $D$ be a smooth Brill--Noether curve and $L$ a quasi-transverse $1$-secant line meeting $D$ at $p$. The restriction $N_{D \cup L}|_L$ is not semistable, so we cannot apply Lemma~\ref{lem:naive}; instead, we identify the unique destabilizing subbundle of $N_{D \cup L}|_L$, and construct a modification of $N_{D \cup L}|_D$ as described above. For inductive arguments it will be more useful to consider a slightly more general setup: Let $N_{D \cup L}'$ be any vector bundle equipped with an isomorphism with $N_{D \cup L}$ over an open set $U$ of $D \cup L$ containing $L$, and write $N_D'$ for the bundle obtained by gluing $N_D|_U$ to $N_{D \cup L}'|_{D \smallsetminus p}$ along the isomorphism $N_D|_{U \smallsetminus p} \simeq N_{D \cup L}|_{U \smallsetminus p} \simeq N_{D \cup L}'|_{U \smallsetminus p}$. To state the lemma, let $q \in L \smallsetminus p$. \begin{center} \begin{tikzpicture}[scale=1] \draw (1, 3.5) .. controls (0.5, 3.5) and (-1, 3) .. (0, 2); \draw (0, 2) .. controls (0.5, 1.5) and (1, 1) .. (2, 1); \draw (0.1, 2.1) .. controls (0.5, 2.5) and (1, 3) .. (2, 3); \draw (1, 0.5) .. controls (0.5, 0.5) and (-1, 1) .. (-0.1, 1.9); \draw (2, 1) .. controls (4, 1) and (4, 3) ..
(2, 3); \draw (1.25,3.6) node{$D$}; \draw (3.3,3.3) -- (1.5,1.5); \draw (3.3,3.3) node[right]{$L$}; \filldraw (2.84,2.84) circle[radius=0.03]; \draw (2.9,3.05) node{$p$}; \filldraw (2,2) circle[radius=0.03]; \draw (2.15,1.85) node{$q$}; \end{tikzpicture} \end{center} \begin{lem}\label{lem:1_sec_balanced} In the above setup, if $N_D'[p \to q][p \to q] \simeq N'_D[2p \to q]$ is (semi)stable, then $N'_{D \cup L}$ is also (semi)stable. \end{lem} \begin{proof} Write $\nu \colon D \sqcup L \to D \cup L$ for the normalization map, and $\tilde{p}_1$ and $\tilde{p}_2$ for the points above $p$ on $L$ and $D$ respectively. Suppose that $F \subseteq \nu^* N'_{D\cup L}$ is a line subbundle. First, we consider the restriction of $F$ to $L$. Let $x$ be a point on $T_pD$ and let $\Lambda$ be the plane spanned by $x$ and $L$. Let $H$ be another plane such that $L = \Lambda \cap H$. Then by Lemma \ref{lem:hh}, \[N_{D\cup L}'|_L \simeq N_L(p)[p \to x] \simeq N_{L/H} \oplus N_{L/\Lambda}(p) \simeq \O_{\mathbb{P}^1}(1) \oplus \O_{\mathbb{P}^1}(2).\] Consequently, \begin{equation} \label{fl} \mu(F|_L) \leq \begin{cases} 2 & \text{if $F|_{\tilde{p}_1} = N_{L/\Lambda}(p)|_{\tilde{p}_1}$;} \\ 1 & \text{otherwise.} \end{cases} \end{equation} Second, we consider the restriction of $F$ to $D$. If $F|_{\tilde{p}_2} = N_{D \to q}(p)|_{\tilde{p}_2}$, then $F|_D$ is a subbundle of $N'_{D \cup L}|_D[p \to q] \simeq N'_D(p)[2p \to q]$; otherwise $F|_D(-\tilde{p}_2)$ is a subbundle of $N'_D(p)[2p \to q]$. Because $N'_D[2p \to q]$ is (semi)stable by assumption and of slope $\mu(N_D') - 1$, it follows that $N'_D(p)[2p \to q]$ is (semi)stable of slope $\mu(N_D')$. 
Consequently, \begin{equation} \label{fd} \mu(F|_D) \underset{{\scriptscriptstyle (}-{\scriptscriptstyle )}}{<} \begin{cases} \mu(N'_D) + 1 & \text{if $F|_{\tilde{p}_2} \neq N_{D \to q}(p)|_{\tilde{p}_2}$;} \\ \mu(N'_D) & \text{otherwise.} \end{cases} \end{equation} Finally, by \cite[Lemma 8.5]{aly}, the subspace $N_{L/\Lambda}(p)|_{\tilde{p}_1}$ glues to the subspace $N_{D \to q}(p)|_{\tilde{p}_2}$. Consequently, \begin{equation} \label{fp} \operatorname{codim}_{F} (F|_{\tilde{p}_1} \cap F|_{\tilde{p}_2}) \geq \begin{cases} 1 & \text{if $F|_{\tilde{p}_1} = N_{L/\Lambda}(p)|_{\tilde{p}_1}$ and $F|_{\tilde{p}_2} \neq N_{D \to q}(p)|_{\tilde{p}_2}$;} \\ 0 & \text{otherwise.} \end{cases} \end{equation} To finish the proof, we simply combine \eqref{fl}, \eqref{fd}, and \eqref{fp}, to obtain \[\mu^{\text{adj}}(F) = \mu(F|_L) + \mu(F|_D) - \operatorname{codim}_{F} (F|_{\tilde{p}_1} \cap F|_{\tilde{p}_2}) \underset{{\scriptscriptstyle (}-{\scriptscriptstyle )}}{<} \mu(N'_D) + 2 = \mu(N'_{D \cup L}). \qedhere\] \end{proof} \begin{lem}\label{lem:1_sec} Assume that the characteristic of the ground field is not $2$. Suppose that $N_D$ is (semi)stable. If $q \in \mathbb{P}^3$ is a general point and $p \in D$ has ordinary ramification, then the elementary modification $N_D[2p \to q]$ is (semi)stable. \end{lem} \begin{proof} Let $\Lambda\subset \mathbb{P}^3$ be a $2$-plane containing $T_pD$ that is not the osculating $2$-plane to $D$ at $p$. For parameter $s \in \mathbb{P}^1$, let $L_s$ be the pencil of lines through $p$ in $\Lambda$ specializing to $T_pD$ when $s=0$ and let $q(s)$ be a choice of point on $L_s \smallsetminus p$. \begin{center} \begin{tikzpicture}[scale=1] \draw (1, 3.5) .. controls (0.5, 3.5) and (-1, 3) .. (0, 2); \draw (0, 2) .. controls (0.5, 1.5) and (1, 1) .. (2, 1); \draw (0.1, 2.1) .. controls (0.5, 2.5) and (1, 3) .. (2, 3); \draw (1, 0.5) .. controls (0.5, 0.5) and (-1, 1) .. (-0.1, 1.9); \draw (2, 1) .. controls (4, 1) and (4, 3) .. 
(2, 3); \draw (1.25,.5) node{$D$}; \draw (1.5,3.7) -- (4.5, 1.855); \draw (4.5, 1.855) node[below]{$T_pD$}; \filldraw (3.05,2.74675) circle[radius=0.03]; \draw (3.05,2.74675) node[above]{$p$}; \draw[densely dotted] (3.05 +1.5, 1.99675) -- (3.05-1.5, 3.49675); \draw[densely dotted] (3.05 +1.5, 2.14675) -- (3.05-1.5, 3.34675); \draw[densely dotted] (3.05 +1.5, 2.29675) -- (3.05-1.5, 3.19675); \draw[densely dotted] (3.05 +1.5, 2.44675) -- (3.05-1.5, 3.04675); \end{tikzpicture} \end{center} As (semi)stability is open, and $N_D(-p)$ is (semi)stable by assumption, it suffices to show that the modifications $N_D[2p \to q(s)]$ for $s\neq 0$ fit together into a flat family specializing to $N_D(-p)$ when $L_s = T_pD$. To do this, we first observe that, for $s\neq 0$, \[N_D[2p \to q(s)] \colonequals \ker \left( N_D \to \frac{N_D|_{2p}}{N_{D \to q(s)}|_{2p}} \right) \] is determined by the $2$-dimensional subspace $N_{D \to q(s)}|_{2p}$ of the $4$-dimensional space $N_D|_{2p}$. As the Grassmannian $\operatorname{Gr}(2,4)$ is separated and proper, there is a unique limit of these spaces as $s \to 0$. It suffices to prove, by a calculation in local coordinates, that this subspace is $N_D(-p)|_{2p} \subseteq N_D|_{2p}$. Choose an affine neighborhood $ \aa_{xyz}^3 \subseteq \mathbb{P}^3$ and coordinates such that $p = (0,0,0)$, the tangent line $T_pD$ is $y = z = 0$, the osculating $2$-plane is $z = 0$, and $\Lambda$ is $y = 0$. Let $q(s) = (1, 0, s)$ so that $L_s$ is the line through $(1,0,s)$ and $(0,0,0)$. Let $t$ be an \'etale local coordinate at $p$ for $D$. Then in an \'etale neighborhood of $p$, the curve $D$ is given parametrically by \[D(t) = \begin{pmatrix} t \\ t^2 + a_3 t^3 + \cdots \\ b_3 t^3 + \cdots \end{pmatrix}.\] We trivialize $N_D$ in a neighborhood of $p$ by $\partial/\partial y$ and $\partial/\partial z$.
A section of $N_D$ is then given by \begin{equation}\label{section_mn} (m_0 + m_1 t + m_2 t^2 + \cdots ) \frac{\partial}{\partial y} + (n_0 + n_1 t + n_2 t^2 + \cdots ) \frac{\partial}{\partial z}. \end{equation} We must determine the conditions on the $m_i$ and $n_i$ such that this section points towards $q(s)$ to second order in $t$. The vector from $D(t)$ on $D$ to $q(s)$ \[ D(t) - q(s)= \begin{pmatrix} t-1 \\ t^2 + a_3t^3 + \cdots \\ b_3t^3 + \cdots - s \end{pmatrix}\] is equivalent as a section of $N_D$ to its translate by a tangent vector \[ D(t) - q(s) - (t-1)D'(t) = \begin{pmatrix} t-1 \\ t^2 + a_3t^3 + \cdots \\ b_3t^3 + \cdots - s \end{pmatrix} - \begin{pmatrix} t-1 \\ (t-1)(2t + 3a_3t^2 + \cdots) \\ (t-1)(3b_3t^2 + \cdots) \end{pmatrix} = \begin{pmatrix} 0 \\ 2t + (3a_3 - 1)t^2 + \cdots \\ -s + 3b_3t^2 + \cdots \end{pmatrix}. \] This normal vector now corresponds to the section \[(2t + (3a_3 - 1)t^2 + \cdots) \frac{\partial}{\partial y} + (-s + 3b_3t^2 + \cdots) \frac{\partial}{\partial z} \] under our chosen trivialization. The condition on the $m_i$ and $n_i$ for a section as in \eqref{section_mn} to point towards $q(s)$ at $2p$ is that \[\det \begin{pmatrix} 2t + \cdots & m_0 + m_1t + \cdots \\ -s + \cdots & n_0 + n_1 t + \cdots \end{pmatrix} = sm_0 +(2n_0 + sm_1)t + \cdots\] vanish to second order in $t$. When $s\neq 0$, this cuts out the $2$-dimensional subspace $m_0= 2n_0 + sm_1 = 0$ in the $4$-dimensional vector space with coordinates $m_0, m_1, n_0, n_1$. \emph{In characteristic distinct from~$2$}, the limit as $s \to 0$ of this subspace is simply $m_0 = n_0 = 0$, i.e.\ the subspace $N_D(-p)|_{2p} \subset N_D|_{2p}$ as claimed. \end{proof} \begin{cor} \label{cor-dplus} Suppose that $N_D$ is (semi)stable for $D$ a general Brill--Noether curve of degree $d$ and genus $g$ in $\mathbb{P}^3$.
Then $N_C$ is (semi)stable for $C$ a general Brill--Noether curve of degree $d + \epsilon$ and genus $g$ in $\mathbb{P}^3$, where \[\epsilon = \begin{cases} 1 & \text{if $\operatorname{char}(k) \neq 2$;} \\ 2 & \text{if $\operatorname{char}(k) = 2$.} \end{cases}\] \end{cor} \begin{proof} We specialize $C$ to the union of a general Brill--Noether curve $D$ with $\epsilon$ $1$-secant lines. Applying Lemma~\ref{lem:1_sec_balanced}, it suffices to show that $N_D[2p \to q]$ (respectively $N_D[2p_1 \to q_1][2p_2 \to q_2]$) is (semi)stable, where the $p_i$ denote general points on $D$, and the $q_i$ denote general points in $\mathbb{P}^3$. As we limit $p_1$ and $p_2$ together to a common point $p$, the vector bundles $N_D[2p_1 \to q_1][2p_2 \to q_2]$ fit together to form a vector bundle with central fiber $N_D(-2p)$ (cf.\ the discussion in Remark~3.4 of \cite{aly}) --- which is (semi)stable by assumption. In characteristic distinct from $2$, we apply Lemma~\ref{lem:1_sec} to conclude that $N_D[2p \to q]$ is (semi)stable as desired. \end{proof} \section{Reduction to a finite list of $(d,g)$}\label{sec:reduction_finite} In this section we combine the results of the previous section to reduce the proof of Theorem~\ref{thm:main} to a finite list of base cases. \begin{prop}\label{prop:reduction_finite} Suppose that Theorem~\ref{thm:main} holds for curves of degree $d$ and genus $g$ with \begin{multline}\label{finite_list} (d,g) \in \{(3, 0), (4, 1), (5, 1), (6,2), (7,2), (6,3), (7,3), \\ (7,4), (8,4), (7,5), (8,5), (8,6), (9,6), (9,7), (10,7), (9,8), (10,8) \}. \end{multline} Then Theorem~\ref{thm:main} holds in all cases. If the characteristic of the ground field is not $2$, then it suffices to replace the list \eqref{finite_list} with \begin{equation}\label{finite_list_not2} (d,g) \in \{(3, 0), (4, 1), (6,2), (6,3), (7,4), (7,5), (8,6), (9,7), (9,8) \}. \end{equation} \end{prop} \begin{proof} We will prove this by induction on $d$ and $g$.
By Lemma \ref{lem:pancake}, it suffices to prove this when $g \leq 8$. If the characteristic is not equal to $2$, then by Corollary~\ref{cor-dplus}, it suffices to check (semi)stability for the smallest degree in each genus for which Theorem~\ref{thm:main} asserts that the normal bundle is (semi)stable. Similarly, if the characteristic is $2$, it suffices to check (semi)stability for the two smallest degrees. Note that, for rational curves of even degree in characteristic $2$, we have already established that the normal bundles are unstable. Thus we do not need to include $(4, 0)$ in our list \eqref{finite_list}. \end{proof} \begin{rem} By Lemma~\ref{lem:01}, we already know semistability for $(d, g) = (3, 0)$ and $(4, 1)$. This establishes Theorem~\ref{thm:main} for curves of genus $0$ in any characteristic, and for curves of genus $1$ in characteristic distinct from $2$. \end{rem} \begin{rem} The reason that the cases $(6, 2)$ and $(7, 4)$ appeared in our list \eqref{finite_list} of remaining cases is that the cases $(5, 2)$ and $(6, 4)$ were exceptions to Theorem~\ref{thm:main}, and so our induction on the degree broke down. In fact, one cannot degenerate such curves to the union of a Brill--Noether curve $D$ of degree $d - 1$ and genus $g$ with a $1$-secant line and apply Lemma~\ref{lem:1_sec_balanced} (even without applying Lemma~\ref{lem:1_sec}); in both cases, $N_D[2p \to q]$ is unstable (if $Q$ denotes the unique quadric containing $D$ then $N_{D/Q}(-2p) \subset N_D[2p \to q]$ is destabilizing). \end{rem} \section{Base Cases: Applications of Gluing Data} In this section, we establish those base cases appearing in Proposition~\ref{prop:reduction_finite} which can be studied using the techniques of Section~\ref{sec-gluing}. \subsection*{\boldmath The case $(d, g) = (5, 1)$} We degenerate to the union of an elliptic normal curve $C$ with a $1$-secant line.
By Lemma~\ref{lem:1_sec_balanced}, it suffices to show $N_C[2u \to v]$ is semistable, where $u \in C$ and $v \in \mathbb{P}^3$ are general. Fix a quadric $Q$ containing $C$, and specialize $v$ to a general point on $C$. By Lemma~\ref{lem:3pts}, there are exactly two points on $C$ at which the fibers of $N_{C \to v}$ and $N_{C/Q}$ coincide; specialize $u$ to one of them. Then $N_C[2u \to v]$ fits in an exact sequence \[0 \to [N_{C/Q}(-u) \simeq \O_C(2)(-u)] \to N_C[2u \to v] \to [N_Q|_C(-u) \simeq \O_C(2)(-u)] \to 0,\] so is semistable as desired. \subsection*{\boldmath The cases $(d, g) = (9, 7), (10, 7), (9, 8)$, and $(10, 8)$} When $(d,g) = (10,7)$ (respectively $(10,8)$), we first degenerate the curve to the union of a general Brill--Noether curve $C$ of degree $9$ and genus $7$ (respectively $8$) and a general $1$-secant line $M$ meeting $C$ at $u$. Choose a point $v \in M \smallsetminus u$, so that $M = \overline{uv}$. By Lemma \ref{lem:1_sec_balanced}, it suffices to show that $N_C(u)[2u \to v]$ is stable. Therefore, in order to deal with all of our cases $(d, g) \in \{(9, 7), (10,7), (9, 8), (10,8)\}$, we begin with a curve $C$ of degree $9$ and genus $7$ or $8$. We will degenerate $C$ to the union of a general canonical curve $D$ (of degree $6$ and genus $4$) and a union $R$ of rational curves meeting $D$ quasi-transversely at a set $\Gamma$ of $6$ points ($R$ consists of three general $2$-secant lines when $g=7$, and of a general $2$-secant line and a general $4$-secant conic when $g=8$). \begin{center} \begin{tikzpicture} \draw (1, 3.5) .. controls (0.5, 3.5) and (-1, 3) .. (0, 2); \draw (0, 2) .. controls (0.5, 1.5) and (1, 1) .. (2, 1); \draw (0.1, 2.1) .. controls (0.5, 2.5) and (1, 3) .. (2, 3); \draw (1, 0.5) .. controls (0.5, 0.5) and (-1, 1) .. (-0.1, 1.9); \draw (2, 1) .. controls (4, 1) and (4, 3) ..
(2, 3); \draw (1,3.1)--(1.5,.75); \draw (2,3.2)--(2,.75); \draw (3,3.1)--(2.5,.75); \draw (1, 0.25) node{$D$}; \draw [decorate, decoration={brace, mirror, amplitude=0.75ex}] (1.3, 0.7) -- (2.7, 0.7); \draw (2, 0.35) node{$R$}; \draw (1.5, -0.7) node{$(d, g) = (9, 7)$}; \end{tikzpicture} \begin{tikzpicture} \draw (1, 3.5) .. controls (0.5, 3.5) and (-1, 3) .. (0, 2); \draw (0, 2) .. controls (0.5, 1.5) and (1, 1) .. (2, 1); \draw (0.1, 2.1) .. controls (0.5, 2.5) and (1, 3) .. (2, 3); \draw (1, 0.5) .. controls (0.5, 0.5) and (-1, 1) .. (-0.1, 1.9); \draw (2, 1) .. controls (4, 1) and (4, 3) .. (2, 3); \draw (2.25, 2) ellipse (0.5 and 1.25); \draw (0,2.75)--(1.7,.75); \draw (1, 0.25) node{$D$}; \draw [decorate, decoration={brace, mirror, amplitude=0.75ex}] (1.3, 0.7) -- (2.7, 0.7); \draw (2, 0.35) node{$R$}; \draw (1.5, -0.7) node{$(d, g) = (9, 8)$}; \end{tikzpicture} \end{center} Write $Q$ for the unique quadric containing $D$. In both cases, the tangent lines to $R$ at $\Gamma$ are transverse to $Q$, and so the restricted normal bundle $N_{D \cup R}|_D$ fits into a balanced exact sequence: \begin{equation} \label{dseq} 0 \to [N_{D/Q} \simeq \O_D(3)] \to N_{D \cup R}|_D \to [N_Q|_D(\Gamma) \simeq \O_D(2)(\Gamma)] \to 0. \end{equation} In particular, $N_{D \cup R}|_D$ is strictly semistable, and $N_{D/Q}$ gives a destabilizing line bundle. Similarly, after specializing $v$ to a point on $D$, Lemma \ref{lem:3pts} asserts that there are $4$ points $u$ on $D$ where the fibers $N_{D \to v}|_u$ and $N_{D/Q}|_u$ coincide to first order. Specializing $u$ to one of these points, we again have a balanced exact sequence \begin{equation} \label{dseq_2} 0 \to N_{D/Q} \to N_{D \cup R}|_D(u)[2u \to v] \to N_Q|_D(\Gamma) \to 0. \end{equation} In particular, $N_{D \cup R}|_D(u)[2u \to v]$ is strictly semistable, and $N_{D/Q}$ gives a destabilizing line bundle. 
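As a quick degree check (using only that $\deg D = 6$ and that $\Gamma$ consists of $6$ points), both line bundles appearing in \eqref{dseq} and \eqref{dseq_2} have the same degree: \[\deg N_{D/Q} = \deg \O_D(3) = 18 \qquad \text{and} \qquad \deg N_Q|_D(\Gamma) = \deg \O_D(2)(\Gamma) = 12 + 6 = 18.\] A line subbundle of the rank-$2$ middle term of either sequence maps either into the sub or injectively into the quotient, so has degree at most $18$; this is the strict semistability asserted above.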
Let $L$ be a line component of $R$, meeting $D$ at $p_1$ and $p_2$ with $p_i' \in T_{p_i}D \smallsetminus p_i$, and denote by $\Lambda_i$ the plane spanned by $p_i'$ and $L$. Then \[N_{D \cup R}|_L \simeq N_{L / \Lambda_1}(p_1) \oplus N_{L / \Lambda_2}(p_2) \simeq \O_{\mathbb{P}^1}(2) \oplus \O_{\mathbb{P}^1}(2). \] Combining this with Lemma~\ref{lem:4_sec_conic}, the restriction of $N_{D \cup R}$ (resp.~$N_{D \cup R}(u)[2u \to v]$) to each of the components of $R$ is also strictly semistable. In particular, writing $\nu \colon D \sqcup R \to D \cup R$ for the normalization, any destabilizing subbundle $F \subset \nu^* N_{D \cup R}$ (resp.~$F \subset \nu^* N_{D \cup R}(u)[2u \to v]$) must be destabilizing on every component and agree at the points lying over the nodes $D \cap R$. The key observation is that, because $N_{D/Q}$ is a subbundle of $N_D$ as well, its fiber at each of the points of $\Gamma$ is exactly the subspace that does not smooth that node. On the other hand, if $L$ denotes a component of $R$ which is a line, then any destabilizing $\O(2)$ has a fiber at one or more of the nodes that fails to smooth it (otherwise it would be a subbundle of $N_L \simeq \O(1) \oplus \O(1)$). It thus remains to show that $N_{D/Q}$ is the \emph{unique} destabilizing subbundle of $N_{D \cup R}|_D$ (resp.~$N_{D \cup R}|_D(u)[2u \to v]$), or equivalently: \begin{lem} The sequences \eqref{dseq} and \eqref{dseq_2} are nonsplit, i.e. \[H^0(N_{D \cup R}|_D(-2)(-\Gamma)) = 0 \qquad\text{and} \qquad H^0(N_{D \cup R}|_D(-2)(-\Gamma)(u)[2u \to v]) = 0.\] \end{lem} \begin{proof} To show the desired vanishing, we degenerate two points of $\Gamma$ together to a common point $p$ on $D$: \begin{center} \begin{tikzpicture} \draw (1, 3.5) .. controls (0.5, 3.5) and (-1, 3) .. (0, 2); \draw (0, 2) .. controls (0.5, 1.5) and (1, 1) .. (2, 1); \draw (0.1, 2.1) .. controls (0.5, 2.5) and (1, 3) .. (2, 3); \draw (1, 0.5) .. controls (0.5, 0.5) and (-1, 1) .. (-0.1, 1.9); \draw (2, 1) .. 
controls (4, 1) and (4, 3) .. (2, 3); \draw[densely dotted] (1,3.25)--(1.5,.75); \draw (0.87,3.25)--(2.125,.75); \draw (2,3.25)--(2,.75); \draw (3,3.25)--(2.5,.75); \filldraw (2,.995) circle[radius=0.03]; \draw (1.8,.98) node[below]{$p$}; \end{tikzpicture} \begin{tikzpicture} \draw (1, 3.5) .. controls (0.5, 3.5) and (-1, 3) .. (0, 2); \draw (0, 2) .. controls (0.5, 1.5) and (1, 1) .. (2, 1); \draw (0.1, 2.1) .. controls (0.5, 2.5) and (1, 3) .. (2, 3); \draw (1, 0.5) .. controls (0.5, 0.5) and (-1, 1) .. (-0.1, 1.9); \draw (2, 1) .. controls (4, 1) and (4, 3) .. (2, 3); \draw (2.25, 2) ellipse (0.5 and 1.25); \draw[densely dotted] (0,2.75)--(1.7,.75); \draw (0,2.63)--(2.19,.8); \filldraw (1.95,.995) circle[radius=0.03]; \draw (1.85,.98) node[below]{$p$}; \end{tikzpicture} \end{center} Let $N$ denote the bundle obtained by gluing $N_{D \cup R}|_{D \smallsetminus p}$ to $N_D(p)|_{D \smallsetminus (\Gamma \smallsetminus p)}$ along the natural isomorphism $N_{D \cup R}|_{D \smallsetminus \Gamma} \simeq N_D(p)|_{D \smallsetminus \Gamma}$. By the discussion in Remark~3.4 of \cite{aly}, the bundles $N_{D \cup R}|_D$ (resp.~$N_{D \cup R}|_D(u)[2u \to v]$) fit together to form a bundle whose central fiber is the bundle $N$ (resp.~$N(u)[2u \to v]$). It thus remains to show \[H^0(N(-2)(-\Gamma)) = 0 \qquad\text{and} \qquad H^0(N(u)[2u \to v](-2)(-\Gamma)) = 0.\] To do this, we use the exact sequence \[0 \to [N_{D/Q}(p) \simeq \O_D(3)(p)] \to [N \text{ or } N(u)[2u \to v]] \to [N_Q|_D(\Gamma - p) \simeq \O_D(2)(\Gamma - p)] \to 0;\] twisting this sequence by $\O_D(-2)(-\Gamma)$ and taking global sections, it remains to check that \[H^0(\O_D(1)(-(\Gamma - p))) = H^0(\O_D(-p)) = 0.\] But this is clear since the five points of $\Gamma - p = \Gamma^{\text{red}}$ are in linear general position. 
\end{proof} \section{Stability and degeneration III: Limits of Gluing Data}\label{sec:limit_gluing} As in the previous section, we again want to degenerate to reducible curves $X \cup Y$ where neither $N_{X \cup Y}|_X$ nor $N_{X\cup Y}|_Y$ is necessarily stable, but the destabilizing subbundles on each component do not agree at $X\cap Y$. The fundamental difficulty, addressed in this section, is that the destabilizing subbundles on each component are often hard to compute without further degeneration. We therefore study the agreement conditions at $X \cap Y$ as the points of $X \cap Y$ come together. Let $D$ be a Brill--Noether curve. Fix distinct points $q, p_{11}, \ldots, p_{1r_1}, p_{21}, \ldots, p_{2r_2} \in D$. Let $R_i$ be a rational curve meeting $D$ quasi-transversely exactly at $q, p_{i1}, \ldots, p_{ir_i}$, such that the tangent directions at $q$ to $D$, $R_1$, and $R_2$ span $\mathbb{P}^3$. \begin{center} \begin{tikzpicture} \draw (1, 3.5) .. controls (0.5, 3.5) and (-1, 3) .. (0, 2); \draw (0, 2) .. controls (0.5, 1.5) and (1, 1) .. (2, 1); \draw (0.1, 2.1) .. controls (0.5, 2.5) and (1, 3) .. (2, 3); \draw (1, 0.5) .. controls (0.5, 0.5) and (-1, 1) .. (-0.1, 1.9); \draw (2, 1) .. controls (4, 1) and (4, 3) .. (2, 3); \draw (2, 3) .. controls (1.5, 2) and (2.2, 0) .. (2.2, 1.2); \draw (2, 3) .. controls (3, 2.5) and (4, 1) ..
(3.2, 1.5); \draw (2.05, 2) node{$R_1$}; \draw (2.95, 2) node{$R_2$}; \draw (1.2, 3.5) node{$D$}; \draw (2, 3.2) node{$q$}; \filldraw (2, 3) circle [radius=0.03]; \filldraw (2, 1) circle [radius=0.03]; \filldraw (2.19, 1.005) circle [radius=0.03]; \filldraw (3.28, 1.45) circle [radius=0.03]; \filldraw (3.42, 1.66) circle [radius=0.03]; \draw [decorate, decoration={brace, mirror, amplitude=0.8ex}] (1.9, 0.8) -- (2.3, 0.8); \draw (2.1, 0.5) node[below]{$p_{1j}$}; \draw [decorate, decoration={brace, mirror, amplitude=0.8ex}] (3.3, 1.2) -- (3.65, 1.55); \draw (3.8, 1.2) node[below]{$p_{2j}$}; \end{tikzpicture} \end{center} Assume that both $R_i$ satisfy Assumption \ref{assump:balanced_slope}. Using this assumption we may apply Lemma \ref{lem:deform_delta} to show that there exist an \'etale neighborhood $\Delta$ of $0$, points $q_i(t) \in D$ with $q_i(0) = q$ (which we normalize so that $q_1(t)$ and $q_2(t)$ have distinct derivatives at $t = 0$), deformations $R_i(t)$ of $R_i$, and $p_{ij}(t)$ of $p_{ij}$, such that for $t \in \Delta$, the rational curve $R_i(t)$ meets $D$ quasi-transversely in $q_i(t), p_{i1}(t), \dots, p_{ir_i}(t)$. \begin{center} \begin{tikzpicture}[scale=0.6] \draw (1, 3.5) .. controls (0.5, 3.5) and (-1, 3) .. (0, 2); \draw (0, 2) .. controls (0.5, 1.5) and (1, 1) .. (2, 1); \draw (0.1, 2.1) .. controls (0.5, 2.5) and (1, 3) .. (2, 3); \draw (1, 0.5) .. controls (0.5, 0.5) and (-1, 1) .. (-0.1, 1.9); \draw (2, 1) .. controls (4, 1) and (4, 3) .. (2, 3); \draw (2, 3) .. controls (1.5, 2) and (2.2, 0) .. (2.2, 1.2); \draw (2, 3) -- (2.03, 3.06); \draw (2, 3) -- (1.94, 3.03); \draw (2, 3) .. controls (3, 2.5) and (4, 1) .. (3.2, 1.5); \draw[densely dotted] (1.9, 3.1) .. controls (1.4, 2) and (2.2, -0.3) .. (2.3, 1.2); \draw[densely dotted] (2.1, 3.1) .. controls (3, 2.7) and (4.3, 0.9) .. (3.1, 1.45); \end{tikzpicture} \end{center} Suppose that, for $t \in \Delta^* \colonequals \Delta\smallsetminus 0$, the normal bundle $N_{D \cup R_1(t) \cup R_2(t)}$ is not stable.
These bundles fit together to form a vector bundle $\hat{\mathcal{N}}$ over $\Delta^*$. However, since $D \cup R_1 \cup R_2$ is not lci, its normal sheaf is not a vector bundle; there is therefore no obvious way to extend $\hat{\mathcal{N}}$ over $\Delta$. Thus, extracting information at the central fiber is subtle. By the discussion in Remark~3.4 of~\cite{aly}, we may nevertheless extend the \emph{restriction} $\hat{\mathcal{N}}|_D$ to a bundle $\mathcal{N}$ on $D \times \Delta$ whose fiber $N \colonequals \mathcal{N}|_0$ over $0 \in \Delta$ is obtained by gluing $N_{D \cup R_1 \cup R_2}|_{D \smallsetminus q}$ to $N_D(q)|_{D \smallsetminus \{p_{11}, \ldots, p_{1r_1}, p_{21}, \ldots, p_{2r_2}\}}$ along the natural isomorphism \[N_{D \cup R_1 \cup R_2}|_{D \smallsetminus \{q, p_{11}, \ldots, p_{1r_1}, p_{21}, \ldots, p_{2r_2}\}} \simeq N_D|_{D \smallsetminus \{q, p_{11}, \ldots, p_{1r_1}, p_{21}, \ldots, p_{2r_2}\}} \simeq N_D(q)|_{D \smallsetminus \{q, p_{11}, \ldots, p_{1r_1}, p_{21}, \ldots, p_{2r_2}\}}.\] Write $\nu \colon D \sqcup R_1(t) \sqcup R_2(t) \to D \cup R_1(t) \cup R_2(t)$ for the normalization map. Let $\hat{\L} \subset \nu^* \hat{\mathcal{N}}$ be a destabilizing line bundle, i.e.\ one satisfying $\mu^{\text{adj}}(\hat{\L}) \geq \mu(\hat{\mathcal{N}})$. Let $\ell_D$, $\ell_1$, and $\ell_2$ denote the slopes of the restriction of $\hat{\L}$ to $D$, $R_1(t)$, and $R_2(t)$, and $c$ denote the number of nodes of $D \cup R_1(t) \cup R_2(t)$ above which the fibers of $\hat{\L}$ do not coincide (for $t \in \Delta^*$). Since being perfectly balanced is open, Assumption~\ref{assump:balanced_slope} implies that the $\hat{\mathcal{N}}|_{R_i(t)}$ are perfectly balanced.
We therefore have \begin{equation} \label{ell12} \ell_i \leq \mu(\hat{\mathcal{N}}|_{R_i(t)}) \quad \text{and} \quad c \geq 0, \end{equation} but \[\mu^{\text{adj}}(\hat{\L}) = \ell_1 + \ell_2 + \ell_D - c \geq \mu(\hat{\mathcal{N}}|_{R_1(t)}) + \mu(\hat{\mathcal{N}}|_{R_2(t)}) + \mu(\hat{\mathcal{N}}|_D).\] If $\ell_D > \mu(\hat{\mathcal{N}}|_D)$, i.e.\ $\mathcal{N}^* = \hat{\mathcal{N}}|_D$ is unstable, then $N$ is unstable by Proposition~\ref{prop:stab-open}. Thus either: \begin{enumerate}[(i)] \item \label{unstable} $N$ is unstable, or \item \label{semistable} \eqref{ell12} is an equality --- i.e.\ $\ell_i = \mu(\hat{\mathcal{N}}|_{R_i(t)})$ and $c = 0$ --- and $\ell_D = \mu(N)$. \end{enumerate} In case~(\ref{semistable}), our first task is to translate the condition that \eqref{ell12} is an equality to information about the restriction $\L^* = \hat{\L}|_D$. (The condition that $\ell_D = \mu(N)$ already concerns $\L^*$.) To do this, observe that since the $\hat{\mathcal{N}}|_{R_i(t)}$ are perfectly balanced, we have a canonical isomorphism \[\varphi_{ij}^* \colon\ \mathbb{P} \mathcal{N}^*|_{q_i(t)} \xlongrightarrow{\sim} \mathbb{P} \mathcal{N}^*|_{p_{ij}(t)} \quad \text{for $t \in \Delta^*$}.\] Writing $\L^* = \hat{\L}|_D$, the condition that \eqref{ell12} is an equality then implies that \begin{equation} \label{translate} \L^*|_{p_{ij}(t)} = \varphi_{ij}^*(\L^*|_{q_i(t)}) \quad \text{for $t \in \Delta^*$}. \end{equation} By Proposition~\ref{prop:stab-open}, we can extend $\L^*$ across the central fiber to a subbundle $\L \subset \mathcal{N}$, and consider the restriction $L \colonequals \L|_0 \subset N$ to the central fiber. Our second task is to figure out what \eqref{translate} implies for $L$. (Figuring out what $\ell_D = \mu(N)$ implies for $L$ is easy: Since $\mu$ is constant in flat families, it implies $\mu(L) = \mu(N)$.) 
To do this, we observe that the bundles $N_{D \cup R_i(t)}$ fit together to form bundles $\hat{\mathcal{N}}_i$ over $\Delta$ (including over $t = 0$). Writing $\mathcal{N}_i = \hat{\mathcal{N}}_i|_D$, there are natural inclusions $\mathcal{N}_i \subset \mathcal{N}$, which are isomorphisms away from $R_{\bar{i}}(t) \cap D$ (here $\bar{i} = 3 - i$ denotes the other index) --- so in particular at $q_i(t)$ for $t \neq 0$, and at $p_{ij}(t)$ for all $t$. Each such inclusion induces a birational isomorphism on projectivizations $\mathbb{P} \mathcal{N}_i \dashrightarrow \mathbb{P} \mathcal{N}$. The advantage of working with $\mathcal{N}_i$ is that $\hat{\mathcal{N}}_i|_{R_i(t)}$ is perfectly balanced, so we obtain regular maps \emph{defined over $\Delta$ (in particular for $t = 0$)}: \[\varphi_{ij} \colon\ \mathbb{P} \mathcal{N}_i|_{q_i(t)} \xlongrightarrow{\sim} \mathbb{P} \mathcal{N}_i|_{p_{ij}(t)} \quad \text{for $t \in \Delta$},\] that are compatible with the $\varphi_{ij}^*$ in the sense that the following diagram commutes: \[\begin{tikzcd} \mathbb{P} \mathcal{N}_i|_{q_i(t)} \arrow[r, "\varphi_{ij}"] \arrow[d, dashed] & \mathbb{P} \mathcal{N}_i|_{p_{ij}(t)} \arrow[d, equal]\\ \mathbb{P} \mathcal{N}|_{q_i(t)} \arrow[r, "\varphi_{ij}^*", dashed] & \mathbb{P} \mathcal{N}|_{p_{ij}(t)} \end{tikzcd}\] We now restrict to the graph of $q_i(t)$. Then the map $\mathcal{N}_i \subset \mathcal{N}$ drops rank exactly over $t = 0$. Its kernel at $t = 0$ is the one-dimensional subspace $D_i \subset N_{D \cup R_i}|_q$ corresponding to sections that fail to smooth the node at $q$, and its image is given by the one-dimensional subspace $F_i \subset N|_q$ corresponding to the tangent direction of $R_i$ at $q$.
The rational map $\mathbb{P}\mathcal{N}_i \dashrightarrow \mathbb{P}\mathcal{N}$ is thus obtained by blowing up at $D_i$, and contracting the proper transform of the fiber over $q$ to $F_i$: \begin{center} \begin{tikzpicture}[scale=0.8] \draw (5, 4) -- (5, 6); \draw (6, 4) -- (6, 6); \draw (6.5, 4) -- (7.5, 5.5); \draw (6.5, 6) -- (7.5, 4.5); \draw (8, 4) -- (8, 6); \draw (9, 4) -- (9, 6); \draw (0, 0) -- (0, 2); \draw (1, 0) -- (1, 2); \draw (2, 0) -- (2, 2); \draw (3, 0) -- (3, 2); \draw (4, 0) -- (4, 2); \draw (10, 0) -- (10, 2); \draw (11, 0) -- (11, 2); \draw (12, 0) -- (12, 2); \draw (13, 0) -- (13, 2); \draw (14, 0) -- (14, 2); \draw[dashed, ->] (5, 1) -- (9, 1); \draw[->] (6, 3.75) -- (3, 2.25); \draw[->] (8, 3.75) -- (11, 2.25); \filldraw (2, 1) circle[radius=0.03]; \filldraw (12, 1) circle[radius=0.03]; \draw(-0.3,1) node[left]{$\mathbb{P} \mathcal{N}_i|_{q_i(t)}$}; \draw (2.3, 1) node{$D_i$}; \draw (12.3, 1) node{$F_i$}; \draw(14.3,1) node[right]{$\mathbb{P} \mathcal{N}|_{q_i(t)}$}; \end{tikzpicture} \end{center} The line subbundle $\L|_{q_i(t)} \subset \mathcal{N}|_{q_i(t)}$ defines a section of $\mathbb{P} \mathcal{N}|_{q_i(t)}$ and (by curve-to-projective extension) of $\mathbb{P} \mathcal{N}_i|_{q_i(t)}$; if the first of these sections does not pass through $F_i$, then the second \emph{must} pass through $D_i$. Combining this with \eqref{translate}, when we pass to the central fiber, the fibers of $L$ at the $p_{ij}$ can sometimes be described in terms of \[D_{ij} \colonequals \varphi_{ij}(D_i).\] Namely, by our assumption that the tangent directions to $D$, $R_1$, and $R_2$ span $\mathbb{P}^3$, the subspaces $F_1$ and $F_2$ are disjoint. The fiber $L|_q \subset N|_q$ thus either: \begin{enumerate}[(a)] \item\label{fibera} Coincides with neither $F_1$ nor $F_2$: In this case, $L|_{p_{ij}} = D_{ij}$. \item\label{fiberb} Coincides with $F_1$ but not $F_2$: In this case, $L|_{p_{2j}} = D_{2j}$ and $L|_q = F_1$. 
\item\label{fiberc} Coincides with $F_2$ but not $F_1$: In this case, $L|_{p_{1j}} = D_{1j}$ and $L|_q = F_2$. \end{enumerate} The upshot of this is the following lemma. \begin{lem}\label{lem:limit_gluing} With the above notation, if \[\begin{array}{ll} \text{every sub-line-bundle of\ldots} & \text{has slope\ldots} \\ N & \leq \mu(N) \\ N[p_{ij} \to D_{ij}] & < \mu(N) \\ N[q \to F_1][p_{2j} \to D_{2j}] & < \mu(N) \\ N[q \to F_2][p_{1j} \to D_{1j}] & < \mu(N), \end{array}\] then $N_{D \cup R_1(t) \cup R_2(t)}$ is stable, for $t \in \Delta$ generic. In particular, if these four vector bundles are merely \emph{semistable}, then $N_{D \cup R_1(t) \cup R_2(t)}$ is \emph{stable} for $t \in \Delta$ generic. \end{lem} Now suppose that $R_i$ is a $2$-secant line (meeting $D$ at $q$ and $p_{i1}$), and write $q' \in T_{q}D \smallsetminus q$ and $p_{i1}' \in T_{p_{i1}}D \smallsetminus p_{i1}$ for points on the tangent lines to $D$ at $q$ and $p_{i1}$ respectively. Then we have the explicit decomposition \begin{equation}\label{eq:2sec_decomp}N_{D \cup R_i}|_{R_i} \simeq N_{R_i \to q'}(q) \oplus N_{R_i \to p_{i1}'}(p_{i1}) \simeq \O_{\mathbb{P}^1}(2)^{\oplus 2}.\end{equation} In particular, we see that Assumption~\ref{assump:balanced_slope} is satisfied. Moreover, we may use this decomposition to compute the subspace $D_{i1}$: In terms of \eqref{eq:2sec_decomp}, \[D_i = N_{R_i \to p_{i1}'}(p_{i1})|_q \quad \Rightarrow \quad D_{i1} = N_{R_i \to p_{i1}'}(p_{i1})|_{p_{i1}}. 
\] To describe this in a way that is compatible with the isomorphism \[N_{D \cup R_i}|_D \simeq N_D(q + p_{i1})[q \to p_{i1}][p_{i1} \to q],\] we apply Lemma~8.4 of \cite{aly}, which states that under this isomorphism we have \[D_{i1} = N_{D \to q}(p_{i1})|_{p_{i1}} \subset N_D(q + p_{i1})[q \to p_{i1}][p_{i1} \to q]|_{p_{i1}}.\] When both $R_1$ and $R_2$ are $2$-secant lines, Lemma~\ref{lem:limit_gluing} thus gives: \begin{cor}\label{cor:limit_gluing_22sec} If $R_1$ and $R_2$ are $2$-secant lines, and the bundles \begin{enumerate}[(a)] \item\label{22sec_a} $N_D[p_{11} \to q][p_{21} \to q]$, \item\label{22sec_b} $N_D[2p_{11} \to q][2p_{21} \to q]$, \item\label{22sec_d} $N_D[p_{11} \to q][q \to p_{11}][2p_{21} \to q]$, and \item\label{22sec_c} $N_D[2p_{11} \to q][p_{21} \to q][q \to p_{21}]$ \end{enumerate} are all \emph{semistable}, then $N_{D \cup R_1(t) \cup R_2(t)}$ is \emph{stable} for $t \in \Delta$ generic. \end{cor} \begin{rem} Since \eqref{22sec_c} is obtained from \eqref{22sec_d} by permuting $p_{21}$ and $p_{11}$, it suffices to prove semistability of \eqref{22sec_a}--\eqref{22sec_d}. \end{rem} Now suppose only that $R_1$ is a $2$-secant line. Applying Lemma~\ref{lem:limit_gluing}, the stability of $N_{D \cup R_1(t) \cup R_2(t)}$ for $t \in \Delta$ generic follows from the assertions that: \[\begin{array}{ll} \text{every sub-line-bundle of\ldots} & \text{has slope\ldots} \\ N & \leq \mu(N) \\ N[p_{11} \to q][p_{2j} \to D_{2j}] & < \mu(N) \\ N[q \to p_{11}][p_{2j} \to D_{2j}] & < \mu(N) \\ N[q \to F_2][p_{11} \to q] & < \mu(N). \end{array}\] This follows in turn from the assertion that \[N[p_{11} \to q] \quad \text{and} \quad N[q \to p_{11}]\] are stable. We therefore have: \begin{cor}\label{cor:limit_gluing_4sec2sec} Suppose that $R_1$ is a $2$-secant line, and write $p_{2j}' \in T_{p_{2j}} R_2 \smallsetminus p_{2j}$ for points on the tangent lines to $R_2$ at the $p_{2j}$.
If the bundles \begin{enumerate}[(a)] \item $N_D[p_{2j} \to p_{2j}'][2p_{11} \to q]$ and \item $N_D[p_{2j} \to p_{2j}'][p_{11} \to q][q \to p_{11}]$ \end{enumerate} are both \emph{stable/semistable}, then $N_{D \cup R_1(t) \cup R_2(t)}$ is \emph{stable} for $t \in \Delta$ generic. \end{cor} The bundles $N_D[p_{2j} \to p_{2j}'][2p_{11} \to q]$ and $N_D[p_{2j} \to p_{2j}'][p_{11} \to q][q \to p_{11}]$ appearing in Corollary~\ref{cor:limit_gluing_4sec2sec} are rank~$2$ vector bundles of odd degree, and hence stability is equivalent to semistability. \section{Base Cases: Applications of Limits of Gluing Data} \subsection*{\boldmath The cases $(d, g) = (7, 2), (6, 3), (7, 3), (7, 4), (8, 4)$, and $(8, 5)$} In these cases, we degenerate to the union of a general Brill--Noether curve $D$ of degree $d - 2$ and genus $g - 2$, a $2$-secant line $R_1$ through general points $q$ and $p_{11}$, and a $2$-secant line $R_2$ through $q$ and another general point $p_{21}$. \begin{center} \begin{tikzpicture} \draw (1, 3.5) .. controls (0.5, 3.5) and (-1, 3) .. (0, 2); \draw (0, 2) .. controls (0.5, 1.5) and (1, 1) .. (2, 1); \draw (0.1, 2.1) .. controls (0.5, 2.5) and (1, 3) .. (2, 3); \draw (1, 0.5) .. controls (0.5, 0.5) and (-1, 1) .. (-0.1, 1.9); \draw (2, 1) .. controls (4, 1) and (4, 3) ..
(2, 3); \draw (0.87,3.25)--(2.125,.75); \draw (2,3.25)--(2,.75); \draw (2,.75) node[below right]{$R_1$}; \draw (2.125,.75)node[below left]{$R_2$}; \filldraw (2,1) circle[radius=0.03]; \draw (2,1) node[above right ]{$q$}; \filldraw (2,3) circle[radius=0.03]; \draw (2,2.9) node[above right]{$p_{21}$}; \filldraw (1.083,2.82) circle[radius=0.03]; \draw (1.08,2.9) node[left]{$p_{11}$}; \draw (1.25,3.5) node[above left]{$D$}; \end{tikzpicture} \end{center} Then $R_1$ and $R_2$ satisfy Assumption \ref{assump:balanced_slope}, and so by Lemma \ref{lem:deform_delta}, the union $D \cup R_1 \cup R_2$ deforms to the union of $D$ and two general $2$-secant lines, which by Lemma \ref{lem:rightcomp}\eqref{lem:rightcomp_2sec} is a Brill--Noether curve of degree $d$ and genus $g$. By Corollary \ref{cor:limit_gluing_22sec}, it suffices to check that the three bundles \ref{cor:limit_gluing_22sec}\eqref{22sec_a}-\eqref{22sec_d} are semistable when $D$ is a general curve of degree $d - 2$ and genus $g - 2$. \subsection*{\boldmath $(d, g) = (7, 2)$} Here $D$ is of degree $5$ and genus $0$. We further degenerate $D$ to the union of a general rational normal curve $C$ (i.e.,~degree $3$ and genus $0$) and two general $1$-secant lines $\overline{u_1 v_1}$ and $\overline{u_2 v_2}$ meeting $C$ at $u_1$ and $u_2$, respectively. By Lemma \ref{lem:1_sec_balanced}, it therefore suffices to show that the bundles \begin{enumerate}[(a)] \item $N_C[p_{11} \to q][p_{21} \to q][2u_1 \to v_1][2u_2 \to v_2]$, \item $N_C[2p_{11} \to q][2p_{21} \to q][2u_1 \to v_1][2u_2 \to v_2]$, and \item $N_C[p_{11} \to q][q \to p_{11}][2p_{21} \to q][2u_1 \to v_1][2u_2 \to v_2]$ \end{enumerate} are semistable.
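In checking this, recall the classical fact that the normal bundle of the rational normal curve $C \subset \mathbb{P}^3$ is perfectly balanced:

```latex
\[N_C \simeq \O_{\mathbb{P}^1}(5) \oplus \O_{\mathbb{P}^1}(5).\]
```

In particular, every twist of $N_C$ by a line bundle is automatically semistable, which is what the specializations below will reduce to.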
Limiting $u_1$ to $p_{11}$ and $u_2$ to $p_{21}$ (c.f.\ the discussion in Remark~3.4 of \cite{aly}), we obtain \begin{enumerate}[(a)] \item\label{case_a} $N_C(-p_{11}-p_{21})[p_{11} \to v_1][p_{21} \to v_2]$ \item\label{case_b} $N_C(-2p_{11}-2p_{21})$ \item\label{case_c} $N_C(-p_{11} - 2p_{21})[p_{11} \to v_1][q \to p_{11}]$ \end{enumerate} After further limiting $p_{11}$ to $p_{21}$ in \eqref{case_a} (resp.~$q$ to $p_{11}$ in \eqref{case_c}), and using the fact that $N_{C \to v_1}|_{p_{11}}$ is a general subspace, these bundles all specialize to twists of $N_C$, and are therefore semistable. \subsection*{\boldmath $(d, g) = (6, 3)$ and $(7, 3)$} When $(d, g) = (6, 3)$, then $D$ is of degree $4$ and genus $1$. For uniformity of notation, we write $C = D$. When $(d, g) = (7, 3)$, then $D$ is of degree $5$ and genus $1$. We further degenerate $D$ to the union of a general Brill--Noether curve $C$ of degree $4$ and genus $1$, with a general $1$-secant line $M$ meeting $C$ at $u$. Write $v \in M \smallsetminus u$ for another point on $M$. By Lemma \ref{lem:1_sec_balanced}, in these cases it suffices to prove semistability of the bundles \ref{cor:limit_gluing_22sec}\eqref{22sec_a}-\eqref{22sec_d} with the extra modification $[2u \to v]$. Combining these cases, it suffices to show that the following $6$ bundles on $C$ are semistable: \begin{enumerate}[(a)] \item \label{g3a} $N_C[p_{11} \to q][p_{21} \to q]$ and $N_C[2u \to v][p_{11} \to q][p_{21} \to q]$, \item \label{g3b} $N_C[2p_{11} \to q][2p_{21} \to q]$ and $N_C[2u \to v][2p_{11} \to q][2p_{21} \to q]$, \item \label{g3c} $N_C[p_{11} \to q][q \to p_{11}][2p_{21} \to q]$ and $N_C[2u \to v][p_{11} \to q][q \to p_{11}][2p_{21} \to q]$. \end{enumerate} \begin{lem} \label{lem:disaster} Let $C$ be an irreducible curve, and $u, v, p_{11}, p_{21}, q$ be general points on $C$. 
Suppose that the bundles \eqref{one} and \eqref{three} below are semistable: \begin{enumerate} \item \label{one} $N_C[2p_{11} \to q]$ \item \label{two} $N_C[2p_{11} \to q][2p_{21} \to q]$ \item \label{three} $N_C[p_{11} \to q][q \to p_{11}][2p_{21} \to q]$. \end{enumerate} Then all of the following bundles are also semistable: \begin{enumerate}[(a)] \item \label{aa} $N_C[p_{11} \to q][p_{21} \to q]$ \item \label{bb} $N_C[p_{11} \to q][p_{21} \to v]$ \item \label{cc} $N_C[2u \to v][p_{11} \to q][p_{21} \to q]$ \item \label{dd} $N_C[2u \to v][2p_{11} \to q][2p_{21} \to q]$ \item \label{ee} $N_C[2u \to v][p_{11} \to q][q \to p_{11}][2p_{21} \to q]$ \item \label{ff} $N_C[u \to v][v \to u][p_{11} \to q][p_{21} \to q]$ \item \label{gg} $N_C[u \to v][v \to u][2p_{11} \to q][2p_{21} \to q]$ \item \label{hh} $N_C[u \to v][v \to u][2p_{11} \to q][p_{21} \to q][q \to p_{21}]$. \end{enumerate} \end{lem} \begin{proof} We argue by specializing the various points on $C$, to reduce to twists of bundles that we already assumed or proved were semistable. \begin{enumerate}[(a)] \item Specialize $p_{21}$ to $p_{11}$; the resulting bundle is $N_C[2p_{11} \to q]$, i.e.\ \eqref{one}. \item Specialize $v$ to $q$; the resulting bundle is $N_C[p_{11} \to q][p_{21} \to q]$, i.e.\ \eqref{aa}. \item Specialize $u$ to $p_{21}$; the resulting bundle is $N_C[p_{11} \to q][p_{21} \to v](-p_{21})$, c.f.\ \eqref{bb}. \item Specialize $u$ to $p_{21}$; the resulting bundle is $N_C[2p_{11} \to q](-2p_{21})$, c.f.\ \eqref{one}. \item Specialize $u$ to $q$; the resulting bundle is $N_C[p_{11} \to q][q \to v][2p_{21} \to q](-q)$. \\ Then specialize $v$ to $p_{11}$; the resulting bundle is $N_C[p_{11} \to q][q \to p_{11}][2p_{21} \to q](-q)$, c.f.\ \eqref{three}. \item Specialize $u$ to $p_{21}$; the resulting bundle is $N_C[p_{11} \to q][v \to p_{21}](-p_{21})$. \\ Exchanging $v$ and $p_{21}$, this is $N_C[p_{11} \to q][p_{21} \to v](-v)$, c.f.\ \eqref{bb}. 
\item Specialize $v$ to $p_{21}$; the resulting bundle is $N_C[u \to p_{21}][2p_{11} \to q][p_{21} \to q](-p_{21})$. \\ Then specialize $u$ to $p_{11}$; the resulting bundle is $N_C[p_{11} \to q][p_{21} \to q](-p_{11} - p_{21})$, c.f.\ \eqref{aa}. \item Specialize $v$ to $p_{11}$; the resulting bundle is $N_C[u \to p_{11}][p_{11} \to q][p_{21} \to q][q \to p_{21}](-p_{11})$. \\ Then specialize $u$ to $q$; the resulting bundle is $N_C[p_{11} \to q][p_{21} \to q](-p_{11} - q)$, c.f.\ \eqref{aa}. \qedhere \end{enumerate} \end{proof} Applying Lemma~\ref{lem:disaster}\eqref{aa}\eqref{cc}\eqref{dd}\eqref{ee}, and using \eqref{two} and \eqref{three} directly, it remains only to show that the three bundles \eqref{one}--\eqref{three} are semistable. Let $Q$ be a quadric containing $C$. In cases \eqref{one} and \eqref{three}, specialize $p_{11}$ to one of the two points guaranteed by Lemma \ref{lem:3pts} for the point $q \in C$; in case \eqref{two}, specialize both $p_{11}$ and $p_{21}$ to the two points guaranteed by Lemma \ref{lem:3pts} for the point $q \in C$. After these specializations, the inclusion $C \subset Q$ induces normal bundle exact sequences for the modified bundles \eqref{one}, \eqref{two}, and \eqref{three}: \begin{gather*} 0 \to N_{C/Q}(-p_{11}) \to N_C[2p_{11} \to q] \to N_Q|_C(-p_{11}) \to 0 \\ 0 \to N_{C/Q}(-p_{11} - p_{21}) \to N_C[2p_{11} \to q][2p_{21} \to q] \to N_Q|_C(-p_{11} - p_{21}) \to 0 \\ 0 \to N_{C/Q}(-2p_{21}) \to N_C[p_{11} \to q][q \to p_{11}][2p_{21} \to q] \to N_Q|_C(-p_{11} - q) \to 0. \end{gather*} These sequences are balanced because $\mu(N_{C/Q}) = 8 = \mu(N_Q|_C)$, so this establishes the semistability of the modified bundles in \eqref{one}, \eqref{two}, and \eqref{three} as desired. \subsection*{\boldmath $(d, g) = (7, 4), (8, 4)$, and $(8, 5)$} When $(d, g) = (7, 4)$, then $D$ is of degree $5$ and genus $2$. For uniformity of notation, we write $C = D$. When $(d, g) = (8, 4)$, then $D$ is of degree $6$ and genus $2$. 
We further degenerate $D$ to the union of a general Brill--Noether curve $C$ of degree $5$ and genus $2$, with a general $1$-secant line $M$ meeting $C$ at $u$. Write $v \in M \smallsetminus u$ for another point on $M$. By Lemma \ref{lem:1_sec_balanced}, in these cases it suffices to prove semistability of the bundles \ref{cor:limit_gluing_22sec}\eqref{22sec_a}-\eqref{22sec_d} with the extra modification $[2u \to v]$. When $(d, g) = (8, 5)$, then $D$ is of degree $6$ and genus $3$. We further degenerate $D$ to the union of a general Brill--Noether curve $C$ of degree $5$ and genus $2$, with a general $2$-secant line $M$ meeting $C$ at $u$ and $v$. Since $N_{C \cup M}|_M \simeq \O_M(2) \oplus \O_M(2)$ is semistable, it suffices to show that each of the bundles \eqref{22sec_a}-\eqref{22sec_d} is semistable when restricted to $C$, i.e.\ it suffices to prove semistability of the bundles \ref{cor:limit_gluing_22sec}\eqref{22sec_a}-\eqref{22sec_d} with the extra modification $[u \to v][v \to u]$. Combining these cases, we have to check the semistability of $9$ modifications of $N_C$. Applying Lemma~\ref{lem:disaster}\eqref{aa}\eqref{cc}\eqref{dd}\eqref{ee}\eqref{ff}\eqref{gg}\eqref{hh}, and using \eqref{two} and \eqref{three} directly, it suffices to check that the three modifications \eqref{one}, \eqref{two}, and \eqref{three} are semistable for $C$ a general curve of degree $5$ and genus $2$. Let $Q$ be the unique quadric containing $C$. In all cases, specialize $p_{21}$ to one of the three points on $C$ guaranteed by Lemma \ref{lem:3pts} for which $N_{C \to q}|_{p_{21}}$ and $N_{C/Q}|_{p_{21}}$ coincide to first order.
Then after these specializations, the inclusion $C \subset Q$ induces the following normal bundle exact sequences for the modified bundles in \eqref{one}, \eqref{two}, and \eqref{three}: \begin{gather*} 0 \to N_{C/Q}(-2p_{11}) \to N_C[2p_{11} \to q] \to N_Q|_C \to 0 \\ 0 \to N_{C/Q}(-2p_{11} - p_{21}) \to N_C[2p_{11} \to q][2p_{21} \to q] \to N_Q|_C(- p_{21}) \to 0 \\ 0 \to N_{C/Q}(-p_{11} - p_{21} - q) \to N_C[p_{11} \to q][q \to p_{11}][2p_{21} \to q] \to N_Q|_C(-p_{21}) \to 0. \end{gather*} These sequences are balanced because $\mu(N_{C/Q}) = 12$ and $\mu(N_Q|_C) = 10$, so this establishes the semistability of the modified bundles in \eqref{one}, \eqref{two}, and \eqref{three} as desired. \subsection*{\boldmath The cases $(d, g) = (8, 6)$ and $(9, 6)$} In these cases, we degenerate to the union of a general Brill--Noether curve $D$ of degree $d - 3$ and genus $g - 4 = 2$, a general $2$-secant line $R_1$ meeting $D$ quasi-transversely precisely at $q$ and $p_{11}$, and a general $4$-secant conic $R_2$ meeting $D$ quasi-transversely precisely at $q$, $p_{21}$, $p_{22}$, and $p_{23}$. \begin{center} \begin{tikzpicture}[scale=1] \draw (1, 3.5) .. controls (0.5, 3.5) and (-1, 3) .. (0, 2); \draw (0, 2) .. controls (0.5, 1.5) and (1, 1) .. (2, 1); \draw (0.1, 2.1) .. controls (0.5, 2.5) and (1, 3) .. (2, 3); \draw (1, 0.5) .. controls (0.5, 0.5) and (-1, 1) .. (-0.1, 1.9); \draw (2, 1) .. controls (4, 1) and (4, 3) ..
(2, 3); \draw (2.25, 2) ellipse (0.5 and 1.25); \draw (0,2.63)--(2.19,.8); \draw (0.3,2.63) node[above]{$R_1$}; \filldraw (0.355,2.335) circle[radius=0.03]; \draw (0.42,2.28) node[below]{$p_{11}$}; \filldraw (1.95,2.99) circle[radius=0.03]; \filldraw (1.95,1) circle[radius=0.03]; \filldraw (2.58,2.93) circle[radius=0.03]; \filldraw (2.58,1.06) circle[radius=0.03]; \draw (1.965,2.93) node[above left]{$p_{21}$}; \draw (1.965,.98) node[below left]{$q$}; \draw (2.575,2.89) node[above right]{$p_{22}$}; \draw (2.573,1.05) node[below right]{$p_{23}$}; \draw (1.25,3.5) node[above left]{$D$}; \draw (2.25,.4) node{$R_2$}; \end{tikzpicture} \end{center} Then $R_1$ and $R_2$ satisfy Assumption \ref{assump:balanced_slope}, and so by Lemma \ref{lem:deform_delta}, the union $D \cup R_1 \cup R_2$ deforms to the union of $D$, a $2$-secant line, and a $4$-secant conic, which by Lemma \ref{lem:rightcomp}\eqref{lem:rightcomp_2sec} and \eqref{lem:rightcomp_4sec} is a Brill--Noether curve of degree $d$ and genus $g$. By Corollary \ref{cor:limit_gluing_4sec2sec}, it suffices to check that the two bundles \begin{enumerate}[(a)] \item $N_D[p_{21} \to p_{21}'][p_{22} \to p_{22}'][p_{23} \to p_{23}'][2p_{11} \to q]$ and \item $N_D[p_{21} \to p_{21}'][p_{22} \to p_{22}'][p_{23} \to p_{23}'][p_{11} \to q][q \to p_{11}]$ \end{enumerate} are stable when $D$ is a general curve of degree $d - 3$ and genus $2$. Limiting $p_{11}$ to $p_{21}$, these bundles fit into families whose central fibers are \begin{enumerate}[(a)] \item $N_D[p_{22} \to p_{22}'][p_{23} \to p_{23}'][p_{21} \to q]$ \item $N_D[p_{22} \to p_{22}'][p_{23} \to p_{23}'][q \to p_{21}]$ \end{enumerate} These bundles are symmetric under exchanging $p_{21}$ and $q$, so it suffices to show the stability of the first bundle. 
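Concretely, this symmetry is the exchange of the (general) points $p_{21}$ and $q$, which interchanges the two modifications

```latex
\[[p_{21} \to q] \longleftrightarrow [q \to p_{21}]\]
```

while fixing $[p_{22} \to p_{22}']$ and $[p_{23} \to p_{23}']$; stability of the two bundles is therefore equivalent.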
When $(d, g) = (8, 6)$, then $D$ is of degree $5$ and genus $2$; in this case, for uniformity of notation, we write $C = D$, so our problem is simply to show the stability of the bundle \begin{equation} \label{g6} N_C[p_{22} \to p_{22}'][p_{23} \to p_{23}'][p_{21} \to q]. \end{equation} When $(d, g) = (9, 6)$, then $D$ is of degree $6$ and genus $2$. We further degenerate $D$ to the union of a general Brill--Noether curve $C$ of degree $5$ and genus $2$, with a general $1$-secant line $M$ meeting $C$ at $u$. Write $v \in M \smallsetminus u$ for another point on $M$. By Lemma \ref{lem:1_sec_balanced}, in this case it suffices to prove stability for the bundle \[N_C[p_{22} \to p_{22}'][p_{23} \to p_{23}'][p_{21} \to q][2u \to v].\] Limiting $u$ to $p_{21}$ reduces the stability of this bundle to the stability of \[N_C[p_{22} \to p_{22}'][p_{23} \to p_{23}'][p_{21} \to v],\] and subsequently limiting $v$ to $q$ reduces its stability to the stability of \eqref{g6}. All that remains is thus to show that \eqref{g6} is stable. The normal bundle exact sequence for the inclusion of $C$ in the unique quadric $Q$ containing it gives rise to the exact sequence \begin{equation}\label{86_quadric} 0 \to N_{C/Q}(-p_{21}-p_{22} - p_{23}) \to {N_C[p_{22} \to p_{22}'][p_{23} \to p_{23}'][p_{21} \to q] } \to \O_C(2) \to 0. \end{equation} These bundles have slopes $9$, $9.5$, and $10$, respectively; hence it suffices to show that this sequence is nonsplit, i.e.\ that \[H^0(N_C(-2)[p_{22} \to p_{22}'][p_{23} \to p_{23}'][p_{21} \to q]) = 0.\] By Lemma \ref{lem:52_sections_from_quadric}, all sections of $N_C(-2)$ come from $H^0(N_{C/Q}(-2))$, which has dimension $2$. After imposing three negative modifications out of the quadric at general points, we therefore have no global sections as desired. \section{Curves of degree $6$ and genus $2$} This case was done by Sacchiero in \cite{sacchiero3}. For completeness, we provide a characteristic-independent proof here.
We shall need the following lemma: \begin{lem}\label{lem:exact_seq} Let $E$ be a vector bundle on a smooth curve $C$ sitting in an exact sequence \[0 \to L_1 \to E \to L_2 \to 0,\] where $L_1$ and $L_2$ are line bundles. If $\mu(L_2) = \mu(L_1) + 2$, and \[\operatorname{Hom}(L_2(-p), E ) \simeq H^0(E \otimes L_2^\vee(p)) = 0\] for all $p \in C$, then $E$ is stable. \end{lem} \begin{proof} Let $\phi \colon F \hookrightarrow E$ be the inclusion of a line subbundle. Then either $\phi$ factors through $L_1 \hookrightarrow E$, in which case $F \simeq L_1$ is not destabilizing, or projection from $E$ to $L_2$ gives a nonzero map $F \to L_2$. In the second case, $F \simeq L_2 (-p_1 - \cdots - p_n)$. Since $\operatorname{Hom}(L_2(-p), E) = 0$ for all $p \in C$ by assumption, but $\operatorname{Hom}(L_2 (-p_1 - \cdots - p_n), E) \neq 0$, we must have $n \geq 2$. Therefore \[\mu(F) = \mu(L_2) - n = \mu(E) - n + 1 < \mu(E). \qedhere\] \end{proof} Now let $C$ be a general Brill--Noether curve of degree $d = 6$ and genus $g = 2$. Since $d > g + r$, our curve $C$ is a projection of a general Brill--Noether curve $\tilde{C} \subset \mathbb{P}^4$; by Lemma 13.2 and the proof of Proposition 13.5 of \cite{aly}, $\tilde{C}$ is a quadric section of a cubic scroll. Thus, $C$ lies on a cubic surface $S$ singular along a line (the projection of the cubic scroll), and the normal bundle exact sequence for $C$ in $S$ gives \begin{equation} \label{inS} 0 \to \O_C(2) \to N_{C/\mathbb{P}^3} \to K_C(2) \to 0. \end{equation} We have $\mu(\O_C(2)) = 12$ and $\mu(K_C(2)) = 14$, so by Lemma \ref{lem:exact_seq}, it suffices to show that for any $p \in C$, \[H^0(N_C(-2) \otimes K_C^\vee(p)) = 0.\] Let $q \in C$ be conjugate to $p$ under the hyperelliptic involution on $C$, so $K_C^\vee(p) \simeq \O_C(-q)$ and we must show $H^0(N_C(-2)(-q)) = 0$.
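The identification $K_C^\vee(p) \simeq \O_C(-q)$ is the standard description of the canonical series in genus $2$: the canonical map is the hyperelliptic double cover, so for conjugate points $p$ and $q$,

```latex
\[K_C \simeq \O_C(p + q), \qquad\text{and hence}\qquad K_C^\vee(p) \simeq \O_C(-q).\]
```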
As $N_{C/S}(-2) \simeq \O_C$ has one nowhere-vanishing section, it suffices to show $N_{C/S}(-2) \hookrightarrow N_C(-2)$ is surjective on global sections; i.e., that $h^0(N_C(-2)) = 1$. We now prove this by degeneration. (We could not degenerate first, since our desired degeneration would break the exact sequence \eqref{inS}.) Namely, we degenerate $C$ to the union $D \cup_u L$ of a general curve $D$ of degree $5$ and genus $2$, and a general $1$-secant line $L$ meeting at the point $u$. Let $v$ be a point on $L$ away from $u$. By \cite[Lemma 8.5]{aly}, it suffices to show $h^0(N_D(-2)(u)[2u \to v]) = 1$. Let $Q$ be the unique quadric containing $D$. By Lemma \ref{lem:52_sections_from_quadric}, $H^0(N_D(-2))$ is $2$-dimensional. When we twist up by $u$, we have an exact sequence \[0 \to N_{D/Q}(-2)(u) \to N_D(-2)(u) \to \O_D(u) \to 0.\] As $N_{D/Q}(-2)(u) \simeq K_D(u)$ has exactly $2$ global sections and vanishing $H^1$, the associated long exact sequence in cohomology gives $h^0(N_D(-2)(u)) = 3$. Consequently, the image of the evaluation map \[H^0(N_D(-2)(u)) \to N_D(-2)(u)|_u\] is a $1$-dimensional subspace of the fiber at $u$. Since the line $L$ is general, the fiber $N_{D \to v}|_u$ will not coincide with this $1$-dimensional subspace. Therefore, the inclusion $N_D(-2) \subset N_D(-2)(u)[u \to v]$ induces an isomorphism on global sections. Combining this with Lemma~\ref{lem:52_sections_from_quadric}, the inclusion \[N_{D/Q}(-2) \subset N_D(-2)(u)[u \to v]\] also induces an isomorphism on global sections. Modifying once more towards $v$, and noting that the generality of $v$ guarantees that $N_{D \to v}$ and $N_{D/Q}$ are transverse at $u$, we conclude that $N_{D/Q}(-2)(-u) \subset N_D(-2)(u)[2u \to v]$ induces an isomorphism on global sections. 
Thus \[h^0(N_D(-2)(u)[2u \to v]) = h^0(N_{D/Q}(-2)(-u)) = h^0(K_D(-u)) = 1.\] \section{Curves of degree $7$ and genus $5$} In this section, for completeness, we recall Ballico and Ellia's argument \cite{ballicoellia} showing that if $C$ is a non-hyperelliptic and non-trigonal space curve of degree 7 and genus 5, then $N_C$ is stable. Equivalently, they show that $N_C^\vee(3)$ is stable. The bundle $N_C^\vee(3)$ has degree 6, hence it suffices to show that it admits no line subbundle of degree 3 or more. Let \[0 \to L \to N_C^\vee(3) \to M \to 0\] be a destabilizing sequence. An elementary Riemann--Roch calculation shows that $h^0(\mathcal{I}_C (3)) \geq 3$, where $\mathcal{I}_C$ denotes the ideal sheaf of $C$ in $\mathbb{P}^3$: indeed, $h^0(\O_{\mathbb{P}^3}(3)) = 20$, while $\deg \O_C(3) = 21 > 2g - 2$ gives $h^0(\O_C(3)) = 21 + 1 - g = 17$. Since there cannot be a cubic surface double along a curve of degree $7$, the long exact sequence associated to the exact sequence \[0 \to \mathcal{I}_C^2 (3) \to \mathcal{I}_C (3) \to N_C^\vee(3) \to 0\] shows that the image of \[h\colon H^0(\mathcal{I}_C(3)) \to H^0(N_C^\vee(3))\] has dimension at least $3$. Consequently, \[\dim(H^0(L) \cap \mathrm{im}(h)) + \dim(H^0(M)) \geq 3.\] If the degree of $L$ is at least $3$, then the degree of $M$ is at most $3$. Since the curve is not trigonal or hyperelliptic, we conclude that $h^0(M) \leq 1$. Hence, $\dim(H^0(L) \cap \mathrm{im}(h)) \geq 2$. Thus, there are two cubics in the ideal of $C$ whose images in $N_C^\vee(3)$ lie in the same line subbundle $L$. Hence, these cubics are everywhere tangent along $C$. By Bezout's Theorem, these cubic surfaces intersect in a curve of degree $9$; tangency along $C$ would force the intersection to contain $C$ with multiplicity two, contributing degree $14 > 9$, a contradiction. Consequently, $N_C^\vee(3)$ cannot have a line subbundle of degree $3$ or more, and is stable. This completes the proof of Theorem~\ref{thm:main}. \bibliographystyle{plain}
https://arxiv.org/abs/2003.02964
Stability of Normal Bundles of Space Curves
In this paper, we prove that the normal bundle of a general Brill-Noether space curve of degree $d$ and genus $g \geq 2$ is stable if and only if $(d,g) \not\in \{ (5,2), (6,4) \}$. When $g\leq1$ and the characteristic of the ground field is zero, it is classical that the normal bundle is strictly semistable. We show that this fails in characteristic $2$ for all rational curves of even degree.
https://arxiv.org/abs/1803.02550
Broadcast domination and multipacking: bounds and the integrality gap
The dual concepts of coverings and packings are well studied in graph theory. Coverings of graphs with balls of radius one and packings of vertices with pairwise distances at least two are the well-known concepts of domination and independence, respectively. In 2001, Erwin introduced \emph{broadcast domination} in graphs, a covering problem using balls of various radii, where the cost of a ball is its radius. The minimum cost of a dominating broadcast in a graph $G$ is denoted by $\gamma_b(G)$. The dual (in the sense of linear programming) of broadcast domination is \emph{multipacking}: a multipacking is a set $P \subseteq V(G)$ such that for any vertex $v$ and any positive integer $r$, the ball of radius $r$ around $v$ contains at most $r$ vertices of $P$. The maximum size of a multipacking in a graph $G$ is denoted by $mp(G)$. Naturally, $mp(G) \leq \gamma_b(G)$. Hartnell and Mynhardt proved that $\gamma_b(G) \leq 3 mp(G) - 2$ (whenever $mp(G)\geq 2$). In this paper, we show that $\gamma_b(G) \leq 2mp(G) + 3$. Moreover, we conjecture that this can be improved to $\gamma_b(G) \leq 2mp(G)$ (which would be sharp).
\section*{Introduction} The dual concepts of coverings and packings are well studied in graph theory. Coverings of graphs with balls of radius one and packings of vertices with pairwise distances at least two are the well-known concepts of domination and independence, respectively. Typically we are interested in minimum (cost) coverings and maximum packings. Natural questions to ask are: for which graphs do these dual problems have equal (integer) values, and, when they are not equal, can we bound the difference between the two values? The second question is the focus of this paper. The particular covering problem we study is broadcast domination. Let $G=(V,E)$ be a graph. Define the \emph{ball of radius $r$ around $v$} by $N_r(v) = \{ u : d(u,v) \leq r \}$. A \emph{dominating broadcast} of $G$ is a collection of balls $N_{r_1}(v_1), N_{r_2}(v_2), \dots, N_{r_t}(v_t)$ (each $r_i > 0$) such that $\bigcup_{i=1}^t N_{r_i}(v_i) = V$. Alternatively, a dominating broadcast is a function $f: V \to \mathbb{N}$ such that for any vertex $u \in V$, there is a vertex $v \in V$ with $f(v)$ positive and $d(u,v) \leq f(v)$. (The ball around $v$ with radius $f(v)$ belongs to the covering.) The \emph{cost} of a dominating broadcast $f$ is $\sum_{v \in V} f(v)$ and the minimum cost of a dominating broadcast in $G$, its \emph{broadcast number}, is denoted by $\ensuremath{\gamma_b}(G)$.\footnote{ One may consider the cost to be any function of the powers (for example the sum of the squares), see e.g.~\cite{HeggernesLokshtanov2006}. We shall stick to the classical convention of linear cost. } When broadcast domination is formulated as an integer linear program, its dual problem is \emph{multipacking}~\cite{Brewster2013,Teshima2012}. A multipacking in a graph $G$ is a subset $P$ of its vertices such that for any positive integer $r$ and any vertex $v$ in $V$, the ball of radius $r$ centered at $v$ contains at most $r$ vertices of $P$. 
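Both parameters can be computed by exhaustive search on very small graphs, directly from the two definitions above. The following sketch (in Python, with helper names of our own choosing; it is only feasible for a handful of vertices) computes the broadcast number and the multipacking number by brute force.

```python
from itertools import combinations, product

def bfs_dist(adj, s):
    # single-source shortest-path distances by breadth-first search
    dist, frontier = {s: 0}, [s]
    while frontier:
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    nxt.append(w)
        frontier = nxt
    return dist

def is_multipacking(adj, P, dist):
    # every ball of radius r must contain at most r vertices of P
    n = len(adj)
    return all(sum(1 for p in P if dist[v][p] <= r) <= r
               for v in adj for r in range(1, n))

def mp(adj):
    # maximum multipacking size, by trying all vertex subsets
    dist = {v: bfs_dist(adj, v) for v in adj}
    V = list(adj)
    return max(s for s in range(len(V) + 1)
               if s == 0 or any(is_multipacking(adj, set(P), dist)
                                for P in combinations(V, s)))

def gamma_b(adj):
    # minimum broadcast cost, by trying all power assignments f : V -> {0..rad}
    dist = {v: bfs_dist(adj, v) for v in adj}
    V = list(adj)
    rad = min(max(d.values()) for d in dist.values())
    best = rad  # broadcasting with power rad from a central vertex dominates G
    for f in product(range(rad + 1), repeat=len(V)):
        cost = sum(f)
        if 0 < cost < best and all(
                any(f[i] > 0 and dist[V[i]][u] <= f[i] for i in range(len(V)))
                for u in adj):
            best = cost
    return best

cycle = lambda n: {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
```

For instance, it confirms that the cycles $C_4$ and $C_5$ have multipacking number $1$ and broadcast number $2$, two of the extremal examples mentioned in the concluding section.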
The maximum size of a multipacking of $G$, its \emph{multipacking number}, is denoted by $\ensuremath{\mathrm{mp}}(G)$. Broadcast domination was introduced by Erwin~\cite{Erwin2001, Erwin2004} in his doctoral thesis in 2001. Multipacking was then defined in Teshima's Master's Thesis~\cite{Teshima2012} in 2012, see also~\cite{Brewster2013} (and~\cite{Brewster2017,hartnell_2014,Yang2015} for subsequent studies). As we have already mentioned, this work fits into the general study of coverings and packings, which has a rich history in Graph Theory: Cornu\'ejols wrote a monograph on the topic~\cite{Cornuejols2001}. In early work, Meir and Moon~\cite{MeirMoon1975} studied various coverings and packings in trees, providing several inequalities relating the size of a minimum covering and a maximum packing. Giving such inequalities connecting the parameters $\ensuremath{\gamma_b}$ and $\ensuremath{\mathrm{mp}}$ is the focus of our work. Since broadcast domination and multipacking are dual problems, we know that for any graph $G$, \begin{equation*} \ensuremath{\mathrm{mp}}(G) \leq \ensuremath{\gamma_b}(G). \end{equation*} This bound is tight, in particular for strongly chordal graphs, see~\cite{Farber84,Lubiw87,Teshima2012}. (In a recent companion work we prove equality for grids~\cite{Beaudou2018}.) A natural question comes to mind. How far apart can these two parameters be? Hartnell and Mynhardt~\cite{hartnell_2014} gave a family of graphs $(G_k)_{k \in \mathbb{N}}$ for which the difference between both parameters is $k$. In other words, the difference can be arbitrarily large. Nonetheless, they proved that for any graph $G$ with $\ensuremath{\mathrm{mp}}(G)\geq 2$, \begin{equation*} \ensuremath{\gamma_b}(G) \leq 3 \ensuremath{\mathrm{mp}}(G) - 2 \end{equation*} and asked~\cite[Section~5]{hartnell_2014} whether the factor~$3$ can be improved. Answering their question in the affirmative, our main result is the following. 
\begin{theorem} \label{thm:bounding} Let $G$ be a graph. Then, \begin{equation*} \ensuremath{\gamma_b}(G) \leq 2 \ensuremath{\mathrm{mp}}(G) + 3. \end{equation*} \end{theorem} Moreover, we conjecture that the additive constant in the bound of Theorem~\ref{thm:bounding} can be removed. \begin{conjecture} \label{conj:fac2} For any graph $G$, $\ensuremath{\gamma_b}(G) \leq 2 \ensuremath{\mathrm{mp}}(G)$. \end{conjecture} In Section~\ref{sec:bound}, we prove Theorem~\ref{thm:bounding}. In Section~\ref{sec:discussion}, we show that Conjecture~\ref{conj:fac2} holds for all graphs with multipacking number at most~$4$. We conclude the paper with some discussions in Section~\ref{sec:remarks}. \section{Proof of Theorem~\ref{thm:bounding}} \label{sec:bound} We want to bound the broadcast number of a graph by a function of its multipacking number. We first state a key counting result which is used throughout the remainder of this paper. For any two integers $a$ and $b$ such that $a \leq b$, $\llbracket a, b\rrbracket$ denotes the set $\mathbb{Z} \cap [a,b]$. \begin{lemma} \label{lem:path} Let $G$ be a graph, $k$ be a positive integer and $(u_0,\ldots,u_{3k})$ be an isometric path in $G$. Let \mbox{$P=\{u_{3i} | i \in \llbracket 0,k \rrbracket \}$} be the set of every third vertex on this path. Then, for any positive integer $r$ and any ball $B$ of radius $r$ in~$G$, \begin{equation*} |B \cap P| \leq \left\lceil \frac{2r+1}{3} \right\rceil. \end{equation*} \end{lemma} \begin{proof} Let $B$ be a ball of radius $r$ in $G$; then any two vertices in $B$ are at distance at most $2r$. Since the path $(u_0,\ldots,u_{3k})$ is isometric, the intersection of the path and $B$ is contained in a subpath of length $2r$. This subpath contains at most $2r+1$ vertices, and at most every third of them belongs to $P$, which gives the claimed bound. \end{proof} Note that any positive integer $r$ is greater than or equal to $\left\lceil \frac{2r+1}{3} \right\rceil$. 
Thus, Lemma~\ref{lem:path} ensures that $P$ is a valid multipacking of size $k+1$. We have the following (see also~\cite{dun_al_2006}): \begin{proposition} For any graph $G$, \begin{equation*} \ensuremath{\mathrm{mp}}(G) \geq \left\lceil\frac{\text{diam}(G)+1}{3}\right\rceil. \end{equation*} \end{proposition} Building on this idea, we have the following result. \begin{theorem} \label{thm:main} Given a graph $G$ and two positive integers $k$ and $k'$ such that \mbox{$k' \leq k$}, if there are four vertices $x,y,u$ and $v$ in $G$ such that \begin{equation*} d_G(x,u) = d_G(x,v) = 3k \text{, } d_G(u,v) = 6k \text{ and }d_G(x,y) = 3k + 3k', \end{equation*} then \begin{equation*} \ensuremath{\mathrm{mp}}(G) \geq 2k + k'. \end{equation*} \end{theorem} \begin{proof} Let $(u_{-3k},\ldots,u_0,\ldots,u_{3k})$ be the vertices of an isometric path from $u$ to $v$ going through $x$. Note that $u = u_{-3k}$, $x = u_0$ and $v = u_{3k}$. We shall select every third vertex of this isometric path and let $P_1$ be the set $\{u_{3i} | i \in \llbracket -k, k \rrbracket\}$. We thus have already selected $2k+1$ vertices. In order to complete our goal, we need $k'-1$ additional vertices. Let $(x_0,\ldots,x_{3k + 3k'})$ be the vertices of an isometric path from $x$ to $y$. Note that $x = x_0$ and $y = x_{3k+3k'}$. We shall select every third vertex on this isometric path starting at $x_{3k+6}$. Formally, we let $P_2$ be the set $\{ x_{3k+3(i+2)} | i \in \llbracket 0,k'-2 \rrbracket\}$. Finally, we let $P$ be the union of $P_1$ and $P_2$. An illustration of this is displayed in Figure~\ref{fig:firstscheme}. 
\begin{figure}[ht] \scriptsize \begin{center} \begin{tikzpicture} \node[vertex] (u) at (-3,0) {}; \node[vertex] (v) at (3,0) {}; \node[vertex] (x) at (0,0) {}; \node[vertex] (y) at (0,-5) {}; \node[vertex] (x3k6) at (0,-3.4) {}; \node[unselected] (x3k) at (0,-2.6) {}; \node[unselected] (x3k3) at (0,-3) {}; \node[vertex] (um3) at (-.4,0) {}; \node[vertex] (u3) at (.4,0) {}; \node[unselected] (x3) at (0,-.4) {}; \draw (u) -- (v); \draw (x) -- (x3) -- (x3k) -- (x3k3) -- (y); \node[above] at ($(x) + (0,.2)$) {$x = x_0 = u_0$}; \node[above] at ($(u) + (0,.2)$) {$u = u_{-3k}$}; \node[above] at ($(v) + (0,.2)$) {$v = u_{3k}$}; \node[below] at ($(y) + (0,-.2)$) {$y = x_{3k+3k'}$}; \node[below right] at ($(x3) + (0,0)$) {$x_3$}; \node[right] at (x3k) {$x_{3k}$}; \node[right] at ($(x3k6)+(.1,0)$) {$x_{3k+6}$}; \node[right] at (x3k3) {$x_{3k+3}$}; \node[left] at (-.1,-4.2) {$P_2$}; \node[below] at (-1.5,-0.2) {$P_1$}; \draw (-3.1,.1) rectangle (3.1,-.1); \draw (-.1,-3.3) rectangle (.1,-5.1); \end{tikzpicture} \end{center} \caption{Building of $P$.} \label{fig:firstscheme} \end{figure} Since every vertex of $P_2$ is at distance at least $3k + 6$ from $x$, while every vertex of $P_1$ is at distance at most $3k$ from $x$, we infer that $P_1$ and $P_2$ are disjoint. Thus $|P| = 2k+k'$. We shall now prove that $P$ is a valid multipacking. Let $r$ be an integer between 1 and $|P| - 1$, and let $B$ be a ball of radius $r$ in $G$ (we do not care about the center of the ball). If this ball $B$ intersects only $P_1$ or only $P_2$, then we know by Lemma~\ref{lem:path} that it cannot contain more than $r$ vertices of $P$. We may then consider that the ball $B$ intersects both $P_1$ and $P_2$. Let $l$ denote the greatest integer $i$ such that $x_{3k+3(i+2)}$ is in $B$ and in $P_2$. Let us name this vertex $z$. From this, we may say that \begin{equation} \label{eq:P2} |B \cap P_2| \leq l + 1 \end{equation} Before ending this preamble, we state an easy inequality. 
For every integer $n$, \begin{equation} \label{eq:mod3} \left\lceil\frac{n}{3}\right\rceil \leq \frac{n}{3} + \frac{2}{3}. \end{equation} We now split the remainder of the proof into two cases. \paragraph{Case 1: $3(l+2) \leq r$.} In this case, we just use Lemma~\ref{lem:path} for $P_1$. We have \begin{equation*} |B \cap P_1 | \leq \left\lceil \frac{2r + 1}{3} \right\rceil, \end{equation*} and by Inequality~\eqref{eq:mod3}, this quantity is bounded above by $\frac{2r+1}{3} + \frac{2}{3}$. We obtain with Inequality~\eqref{eq:P2}, \begin{flalign*} &&|B \cap P| & \leq l+1 + \frac{2r+1}{3} + \frac{2}{3}&&\\ && & \leq l+2 + \frac{2r}{3}&&\\ && & \leq \frac{r}{3} + \frac{2r}{3} &\text{(by our case hypothesis)}&\\ && & \leq r.&& \end{flalign*} Therefore, the ball $B$ contains at most $r$ vertices of $P$, as required. \paragraph{Case 2: $3(l+2) > r$.} Here we need some more insight. Recall that $l + 2$ cannot exceed $k'$ and that $k' \leq k$. Thus \begin{align*} r & < 3(l+2) \\ & \leq 2k' + l +2\\ & \leq 2k + l + 2, \end{align*} and since $r$ is an integer, we get \begin{equation} \label{eq:nice} r \leq 2k + l + 1. \end{equation} We also note that any vertex $u_i$ with $|i| \leq 3k + 3(l+2) - (2r+1)$ is at distance at least $2r+1$ from $z$: by the triangle inequality $d(z,u_i) \geq d(z,x)-d(u_i,x)$, where $d(z,x)=3k + 3(l+2)$ and $d(u_i,x) = |i|$. Since the ball $B$ has radius $r$, no such vertex can be in $B$. Since we assumed that $B$ intersects $P_1$, not all the vertices of the $uv$-path are excluded from $B$. This means that \begin{equation} \label{eq:nonzero} 3k > 3k + 3(l+2) - (2r+1). \end{equation} We partition the vertices of the path $(u_{-3k},\ldots,u_{3k})$ into three sets: $U_L, U_M, U_R$. The vertex $u_i$ belongs to: $U_L$ if $i < -(3k + 3(l+2) - (2r+1))$; $U_M$ if $|i| \leq 3k + 3(l+2) - (2r+1)$; and $U_R$ if $i > 3k + 3(l+2) - (2r+1)$. See Figure~\ref{fig:case21}. The distance from $u = u_{-3k}$ to the first vertex (smallest positive index) in $U_R$ is then $6k + 3(l+2) - (2r+1) + 1$. 
We compare this distance with $2r+1$. \subparagraph{Case 2.1: $6k + 3(l+2) - (2r+1) + 1 \geq 2r+1$.} We match $U_L$ with $U_R$ so that each pair is at distance at least $2r+1$ (match $u_{-3k}$ with the first vertex in $U_R$ and so on, as pictured in Figure~\ref{fig:case21}). Therefore the ball $B$ contains at most one vertex of each matched pair. In other words, $B$ contains at most $\lceil |U_R|/3 \rceil$ vertices from $U_L \cup U_R$, and so \begin{equation*} |B \cap P_1| \leq \left\lceil \frac{3k - (3k + 3(l+2) - 2r) + 1}{3} \right\rceil. \end{equation*} By using Inequality~\eqref{eq:P2} again, \begin{align*} |B \cap P| & \leq l+1 + \left\lceil \frac{2r+1}{3} \right\rceil - (l+2)\\ & \leq r. \end{align*} Therefore, the ball $B$ contains at most $r$ vertices of $P$, as required. \begin{figure}[ht] \begin{center} \scriptsize \subfigure[Case 2.1.]{ \label{fig:case21} \begin{tikzpicture}[xscale=1.6] \node[vertex] (u) at (-3,0) {}; \node[vertex] (v) at (3,0) {}; \node[vertex] (x) at (0,0) {}; \node[vertex] (um31) at (-2.4,0) {}; \node[vertex] (u31) at (2.4,0) {}; \draw (u) -- (v); \node[above] at ($(x) + (0,.2)$) {$u_0$}; \node[left] at ($(u) + (-.2,0)$) {$u_{-3k}$}; \node[right] at ($(v) + (.2,0)$) {$u_{3k}$}; \node[below] at ($(-.5,-1.0) + (0,-.2)$) {$6k+3(l+2)-(2r+1)$}; \node[below] at ($(-.95,-.5) + (0,-.2)$) {$2r+1$}; \node[below] at ($(-2.65,0) + (0,-.2)$) {$U_L$}; \node[below] at ($(2.65,0) + (0,-.2)$) {$U_R$}; \node[below] at ($(0,0) + (0,-.2)$) {$U_M$}; \draw (-2.1,.1) rectangle (2.1,-.1); \draw (-3.1,.1) rectangle (-2.2,-.1); \draw (2.2,.1) rectangle (3.1,-.1); \draw[dashed] (u) to[bend left] (u31); \draw[dashed] (um31) to[bend left] (v); \draw[arrows=<->] (-3,-.7) -- (1.1,-.7); \draw[arrows=<->] (-3,-1.2) -- (2.1,-1.2); \end{tikzpicture} } \quad \subfigure[Case 2.2.]{ \label{fig:case22} \begin{tikzpicture}[xscale=1.6] \node[vertex] (u) at (-3,0) {}; \node[vertex] (v) at (3,0) {}; \node[vertex] (x) at (0,0) {}; \node[vertex] (um31) at (-2.4,0) {}; 
\node[vertex] (u31) at (2.4,0) {}; \draw (u) -- (v); \node[above] at ($(x) + (0,.2)$) {$u_0$}; \node[left] at ($(u) + (-.2,0)$) {$u_{-3k}$}; \node[right] at ($(v) + (.2,0)$) {$u_{3k}$}; \node[below] at ($(-1,-.5) + (0,-.2)$) {$6k+3(l+2)-(2r+1)$}; \node[below] at ($(-.4,-1) + (0,-.2)$) {$2r+1$}; \node[below] at ($(-2.65,0) + (0,-.2)$) {$U'_L$}; \node[below] at ($(2.65,0) + (0,-.2)$) {$U'_R$}; \node[below] at ($(-1.6,0) + (0,-.2)$) {$U''_L$}; \node[below] at ($(1.6,0) + (0,-.2)$) {$U''_R$}; \node[below] at ($(0,0) + (0,-.2)$) {$U_M$}; \draw (-1.1,.1) rectangle (1.1,-.1); \draw (-2.1,.1) rectangle (-1.2,-.1); \draw (1.2,.1) rectangle (2.1,-.1); \draw (-3.1,.1) rectangle (-2.2,-.1); \draw (2.2,.1) rectangle (3.1,-.1); \draw[dashed] (u) to[bend left] (u31); \draw[dashed] (um31) to[bend left] (v); \draw[arrows=<->] (-3,-.7) -- (1.1,-.7); \draw[arrows=<->] (-3,-1.2) -- (2.2,-1.2); \end{tikzpicture} } \end{center} \caption{Illustrations for Case 2.} \end{figure} \subparagraph{Case 2.2: $6k + 3(l+2) - (2r+1) + 1 < 2r+1$.} We partition each of $U_L$ and $U_R$ as shown in Figure~\ref{fig:case22}. The vertices that are distance at least $2r+1$ from a vertex in $U_L \cup U_R$ are the sets $U'_L$ and $U'_R$, and those that are close to all other vertices are $U''_L$ and $U''_R$. We can match pairs of vertices $U'_L \cup U'_R$. This allows us to say that the extremities of $P_1$ will contribute at most $\left\lceil \frac{6k - (2r+1) + 1}{3} \right\rceil$ which equals $2k + \lceil\frac{-2r}{3}\rceil$. Using again Inequality~\eqref{eq:mod3}, this is bounded above by $2k - \frac{2r}{3} + \frac{2}{3}$. For any integer $i$ between $6k + 3(l+2) - (2r+1) + 1$ and $2r$, vertices $u_{-i}$ and $u_{i}$ belong to $U''_L$ and $U''_R$ respectively. Such vertices may be in $B$. Since $P_1$ contains every third vertex on these two subpaths, this amounts to at most \begin{equation*} 2 \left\lceil\frac{2r - 6k - 3(l+2) + (2r+1)}{3}\right\rceil \end{equation*} such vertices. 
This quantity is equal to \begin{equation*} 2\left\lceil \frac{4r+1}{3} \right\rceil -4k - 2(l+2), \end{equation*} which in turn, using Inequality~\eqref{eq:mod3} is bounded above by \begin{equation*} \frac{8r}{3} + 2 -4k -2(l+2). \end{equation*} By putting everything together, we derive that \begin{flalign*} && |B \cap P| & \leq (l+1) + \left(2k - \frac{2r}{3} + \frac{2}{3}\right) + \left(\frac{8r}{3} +2 -4k - 2(l+2)\right) &&\\ &&& \leq 2r - 2k - l - \frac{1}{3}.&& \end{flalign*} But since $|B \cap P|$ is an integer, we may rewrite this last inequality as \begin{flalign*}&& |B \cap P| & \leq r + (r - 2k - l - 1) &&\\ &&& \leq r. &\text{(by Inequality~\eqref{eq:nice})}& \end{flalign*} Thus, $|B \cap P|$ cannot exceed $r$ and the ball $B$ contains at most $r$ vertices of $P$, as required. This concludes the proof of Theorem~\ref{thm:main}. \end{proof} Theorem~\ref{thm:main} allows us to give a lower bound on the size of a maximum multipacking in a graph in terms of its diameter and radius. \begin{corollary}\label{coro:diam-rad} For any graph $G$ of diameter $d$ and radius r, \begin{equation*} \ensuremath{\mathrm{mp}}(G) \geq \frac{d}{6} + \frac{r}{3} - \frac{3}{2}. \end{equation*} \end{corollary} \begin{proof} We just pick the integer $k$ such that $d$ can be expressed as $6k + \alpha$ where $\alpha$ is in $\llbracket 0,5 \rrbracket$ and the integer $k'$ such that $r$ can be expressed as $3k + 3k'+\beta$ where $\beta$ is in $\llbracket 0,2\rrbracket$. We must have two vertices at distance $6k$ in $G$. On a shortest path of length $6k$, the middle vertex has some vertex at distance $3k+3k'$. We can then apply Theorem~\ref{thm:main}. \begin{align*} \ensuremath{\mathrm{mp}}(G) &\geq 2k + k'\\ & \geq \frac{1}{3}(d - \alpha) + \frac{1}{3} \left(r - \beta - \frac{1}{2}(d - \alpha)\right)\\ & \geq \frac{d}{6} + \frac{r}{3} - \frac{9}{6}.\qedhere \end{align*} \end{proof} We can now finalize the proof of our main theorem. 
\begin{proof}[Proof of Theorem~\ref{thm:bounding}] Since the diameter of a graph is always greater than or equal to its radius, we conclude from Corollary~\ref{coro:diam-rad} that $$ \frac{\text{rad}(G)-3}{2} \leq \ensuremath{\mathrm{mp}}(G) \leq \ensuremath{\gamma_b}(G) \leq \text{rad}(G). $$ Hence, for any graph $G$, \begin{equation*} \ensuremath{\gamma_b}(G) \leq 2 \ensuremath{\mathrm{mp}}(G) + 3, \end{equation*} proving Theorem~\ref{thm:bounding}. \end{proof} Note that in our proof, we chose the length of the long path to be a multiple of~$6$ to keep the presentation smooth. We think that the same ideas, implemented with more care, would work for multiples of~$3$. This might slightly improve the additive constant in our bound, but we believe that it would not be enough to prove Conjecture~\ref{conj:fac2} (while adding too much complexity to the proof). \section{Proving Conjecture~\ref{conj:fac2} when $\ensuremath{\mathrm{mp}}(G)\leq 4$}\label{sec:discussion} The following collection of results shows that Conjecture~\ref{conj:fac2} holds for graphs whose multipacking number is at most~$4$. \begin{lemma}\label{lemma-distances} Let $G$ be a graph and $P$ a subset of vertices of $G$. If, for every subset $U$ of at least two vertices of $P$, there exist two vertices of $U$ that are at distance at least $2|U|-1$, then $P$ is a multipacking of $G$. \end{lemma} \begin{proof} We prove the contrapositive. Let $G$ be a graph and $P$ a subset of its vertices which is not a multipacking. Then there is a ball $B$ of radius $r$ which contains $r+1$ vertices of $P$. Let $U$ be the set $B \cap P$; then $U$ has size at least $r+1$. Moreover, any two vertices in $U$ are at distance at most $2r$, which is strictly smaller than $2|U|-1$. \end{proof} \begin{proposition}\label{prop:mp=3} Let $G$ be a graph. If $\ensuremath{\mathrm{mp}}(G)=3$, then $\ensuremath{\gamma_b}(G)\leq 6$. \end{proposition} \begin{proof} We prove the contrapositive again. 
Let $G$ be a graph with broadcast number at least 7. Then, the eccentricity of any vertex is at least 7 (otherwise we could cover the whole graph by broadcasting with power 6 from a single vertex). Let $x$ be any vertex of $G$. There must be a vertex $y$ at distance 7 from $x$. Let $u$ be any vertex at distance 3 from $x$ and on a shortest path from $x$ to $y$. Then $u$ is at distance 4 from $y$. But $u$ also has eccentricity at least 7. So there is a vertex $v$ at distance 7 from $u$. By the triangle inequality, $v$ is at distance at least 4 from $x$ and at least 3 from $y$. Therefore the set $\{u,v,x,y\}$ satisfies the condition of Lemma~\ref{lemma-distances} and the multipacking number of $G$ is at least 4 (and so it is not equal to 3). \end{proof} The following proposition improves Theorem~\ref{thm:bounding} for graphs $G$ with $\ensuremath{\mathrm{mp}}(G) \leq 6$ and shows that Conjecture~\ref{conj:fac2} holds when $\ensuremath{\mathrm{mp}}(G) = 4$. \begin{proposition}\label{prop:mp=4} Let $G$ be a graph. If $\ensuremath{\mathrm{mp}}(G)\geq 4$, then $\ensuremath{\gamma_b}(G)\leq 3\ensuremath{\mathrm{mp}}(G)-4$. \end{proposition} \begin{proof} For a contradiction, let $G$ be a counterexample, that is, a graph with multipacking number $p \geq 4$ such that $\ensuremath{\gamma_b}(G)\geq 3p-3$. Then, the eccentricity of any vertex of $G$ is at least $3p-3$ (otherwise we could broadcast with power $3p-4$ from a single vertex). Let $x$ be a vertex of $G$ and let $V_i$ denote the set of vertices at distance exactly~$i$ from $x$. By our previous remark, $V_{3p-3}$ is non-empty. Let $y$ be a vertex in $V_{3p-3}$ and consider a shortest path $P_{xy}$ from $x$ to $y$ in $G$. Let $v_0=x$, and for $1\leq i\leq p-1$, let $v_i$ be the vertex on $P_{xy}$ belonging to $V_{3i}$ (thus $v_{p-1}=y$). Now, since $\ensuremath{\gamma_b}(G)\geq 3p-3$, there must be a vertex $u$ at distance at least $3p-3$ from $v_{p-2}$ (otherwise we could broadcast from that single vertex). 
Note that the triangle inequality ensures that the distance between $u$ and $v_i$ is at least $3+3i$ for $i$ between $0$ and $p-2$. The distance from $u$ to $v_{p-1}$ is at least $3p-6$, which is at least 6 since $p$ is at least 4. Consider the set $P=\{u,v_0,\ldots, v_{p-1}\}$. We claim that $P$ is a multipacking of $G$ of size $p+1$, which is a contradiction. Let $B$ be a ball of radius $r$. Since $P_{xy}$ is an isometric path, Lemma~\ref{lem:path} ensures that $B$ contains at most \begin{equation*} \left\lceil \frac{2r+1}{3} \right\rceil \end{equation*} vertices from $P \cap P_{xy}$, which is at most $r$. Hence, if $B$ does not contain $u$, the multipacking condition holds for $B$. For balls that contain the vertex $u$, the maximum size of $P \cap B$ is \begin{equation*} \left\lceil \frac{2r+1}{3} \right\rceil + 1. \end{equation*} Whenever $r$ is 4 or more, this quantity does not exceed $r$, so every ball of radius $4$ or more satisfies the multipacking condition. We still need to check balls of radius 1, 2, and 3 which contain $u$. \begin{itemize} \item Balls of radius 1 are easy to check, since every vertex of $P_{xy}$ is at distance at least 3 from $u$. \item For balls of radius 2, it is enough to check that at most one vertex of $P \cap P_{xy}$ is at distance 4 or less from $u$; only $v_0$ can be, since $d(u,v_i)\geq 3+3i \geq 6$ for $1 \leq i \leq p-2$ and $d(u,v_{p-1}) \geq 3p-6 \geq 6$. \item For balls of radius 3, there is only one way to select $u$ and three vertices of $P \cap P_{xy}$ within distance 6 from $u$: we would have to take $v_0$, $v_1$ and $v_{p-1}$. But since $v_0$ and $v_{p-1}$ are at distance $3p-3$ from each other, they cannot appear simultaneously in a ball of radius 3 (since $p$ is at least 4, $3p-3$ is at least 9). \end{itemize} Therefore $P$ is a multipacking of size $p+1$, which is a contradiction. \end{proof} \begin{corollary} Let $G$ be a graph. If $\ensuremath{\mathrm{mp}}(G)\leq 4$, then $\ensuremath{\gamma_b}(G)\leq 2\ensuremath{\mathrm{mp}}(G)$. \end{corollary} \begin{proof} When $\ensuremath{\mathrm{mp}}(G)\leq 2$, this is shown in~\cite{hartnell_2014}. 
The case $\ensuremath{\mathrm{mp}}(G)=3$ is implied by Proposition~\ref{prop:mp=3}, and the case $\ensuremath{\mathrm{mp}}(G)=4$ follows from Proposition~\ref{prop:mp=4}. \end{proof} \section{Concluding remarks}\label{sec:remarks} We conclude the paper with some remarks. \subsection{The optimality of Conjecture~\ref{conj:fac2}} We know a few examples of connected graphs $G$ which achieve the conjectured bound, that is, $\ensuremath{\gamma_b}(G)=2\ensuremath{\mathrm{mp}}(G)$. For example, one can easily check that $C_4$ and $C_5$ have multipacking number~$1$ and broadcast number~$2$. In Figure~\ref{fig:twoFour}, we depict three examples having multipacking number~$2$ and broadcast number~$4$. By making disjoint unions of these graphs, we can build further extremal graphs with arbitrary multipacking number. However, if we only consider connected graphs, we do not even know an example with multipacking number~$3$ and broadcast number~$6$. Hartnell and Mynhardt~\cite{hartnell_2014} constructed an infinite family of connected graphs $G$ with $\ensuremath{\gamma_b}(G)=\tfrac{4}{3}\ensuremath{\mathrm{mp}}(G)$, but we do not know any construction with a higher ratio. Are there arbitrarily large connected graphs that reach the bound of Conjecture~\ref{conj:fac2}? 
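The first of these three examples can be verified mechanically. The sketch below (Python, brute force; encoding graph (a) of Figure~\ref{fig:twoFour} as two $6$-cycles joined by three spokes is our reading of the drawing) checks that some pair of vertices is a multipacking while no triple is, and that no broadcast of cost at most $3$ dominates the graph while the radius is $4$, so that $\ensuremath{\mathrm{mp}}=2$ and $\ensuremath{\gamma_b}=4$.

```python
from itertools import combinations

def bfs_dist(adj, s):
    # single-source shortest-path distances by breadth-first search
    dist, frontier = {s: 0}, [s]
    while frontier:
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    nxt.append(w)
        frontier = nxt
    return dist

def is_multipacking(adj, P, dist):
    # every ball of radius r must contain at most r vertices of P
    n = len(adj)
    return all(sum(1 for p in P if dist[v][p] <= r) <= r
               for v in adj for r in range(1, n))

def graph_a():
    # two 6-cycles 0..5 and a..f joined by the spokes 0a, 2c, 4e
    inner, outer = list(range(6)), list("abcdef")
    edges = [(inner[i], inner[(i + 1) % 6]) for i in range(6)]
    edges += [(outer[i], outer[(i + 1) % 6]) for i in range(6)]
    edges += [(0, "a"), (2, "c"), (4, "e")]
    adj = {v: [] for v in inner + outer}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    return adj

def broadcast_of_cost_exists(adj, dist, cost):
    # is there a set of balls, with positive radii summing to `cost`,
    # whose union is the whole vertex set?
    V = list(adj)
    def covers(balls):
        return all(any(dist[c][u] <= r for c, r in balls) for u in adj)
    def rec(remaining, start, balls):
        if remaining == 0:
            return covers(balls)
        return any(rec(remaining - r, i + 1, balls + [(V[i], r)])
                   for i in range(start, len(V))
                   for r in range(1, remaining + 1))
    return rec(cost, 0, [])

adj = graph_a()
dist = {v: bfs_dist(adj, v) for v in adj}
rad = min(max(d.values()) for d in dist.values())

# multipacking number is 2: some pair works, no triple does
assert any(is_multipacking(adj, set(P), dist) for P in combinations(adj, 2))
assert not any(is_multipacking(adj, set(P), dist) for P in combinations(adj, 3))

# broadcast number is 4: no broadcast of cost <= 3, and cost rad = 4 suffices
assert rad == 4
assert not any(broadcast_of_cost_exists(adj, dist, c) for c in (1, 2, 3))
```

The same exhaustive check applies verbatim to graphs (b) and (c), at a slightly higher (but still negligible) cost.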
\begin{figure}[!ht] \begin{center} \scalebox{1.0}{\begin{tikzpicture}[join=bevel,inner sep=0.5mm,scale=0.7] \node[vertex](0) at (0:1) {}; \node[vertex](1) at (60:1) {}; \node[vertex](2) at (120:1) {}; \node[vertex](3) at (180:1) {}; \node[vertex](4) at (240:1) {}; \node[vertex](5) at (300:1) {}; \node[vertex](a) at (0:2) {}; \node[vertex](b) at (60:2) {}; \node[vertex](c) at (120:2) {}; \node[vertex](d) at (180:2) {}; \node[vertex](e) at (240:2) {}; \node[vertex](f) at (300:2) {}; \draw[-] (0)--(1)--(2)--(3)--(4)--(5)--(0) (a)--(b)--(c)--(d)--(e)--(f)--(a) (0)--(a) (2)--(c) (4)--(e); \node at (270:2.5) {(a)}; \begin{scope}[xshift=6cm] \node[vertex](0) at (0:1) {}; \node[vertex](1) at (60:1) {}; \node[vertex](2) at (120:1) {}; \node[vertex](3) at (180:1) {}; \node[vertex](4) at (240:1) {}; \node[vertex](5) at (300:1) {}; \node[vertex](a) at (0:2) {}; \node[vertex](b) at (60:2) {}; \node[vertex](c) at (120:2) {}; \node[vertex](d) at (180:2) {}; \node[vertex](e) at (240:2) {}; \node[vertex](f) at (300:2) {}; \node[vertex](x) at (0:1.5) {}; \node[vertex](y) at (180:1.5) {}; \draw[-] (0)--(1)--(2)--(3)--(4)--(5)--(0) (a)--(b)--(c)--(d)--(e)--(f)--(a) (1)--(c) (2)--(b) (4)--(f) (5)--(e) (0)--(x)--(a) (3)--(y)--(d); \node at (270:2.5) {(b)}; \end{scope} \begin{scope}[xshift=12cm,rotate=-22.5] \foreach \i in {0,1,2,3,4,5,6,7} { \node[vertex](x\i) at (45*\i:1) {}; \node[vertex](y\i) at (45*\i:2) {}; } \draw[-] (x0)--(x1)--(x2)--(x3)--(x4)--(x5)--(x6)--(x7)--(x0) (y0)--(y1)--(y2)--(y3)--(y4)--(y5)--(y6)--(y7)--(y0) (x0)--(y1) (y0)--(x1) (x2)--(y3) (y2)--(x3) (x4)--(y5) (y4)--(x5) (x6)--(y7) (x7)--(y6); \node at (292.6:2.5) {(c)}; \end{scope} \end{tikzpicture}} \end{center} \caption{\label{fig:twoFour} Graphs with multipacking number $2$ and broadcast number $4$. 
Graph (b) comes from L.~Teshima's Master's Thesis~\cite{Teshima2012} and (c) was found by C.~R.~Dougherty (private communication).} \end{figure} \subsection{An approximation algorithm} The computational complexity of broadcast domination has been extensively studied, see for example~\cite{Dabney2009,HeggernesLokshtanov2006} and the references of~\cite{Brewster2013,Teshima2012,Yang2015}. It is particularly interesting to note that, unlike most other natural covering problems, broadcast domination is solvable in polynomial (sextic) time~\cite{HeggernesLokshtanov2006}. It is not known whether this is also the case for multipacking, but a cubic-time algorithm exists for strongly chordal graphs~\cite{Brewster2017,Yang2015}, as well as a linear-time algorithm for trees~\cite{Brewster2013,Brewster2017,Yang2015}. We note that our proof of Theorem~\ref{thm:bounding}, being constructive, implies the existence of a $(2+o(1))$-factor approximation algorithm for the multipacking problem. \begin{corollary} There is a polynomial-time algorithm that, given a graph $G$, constructs a multipacking of $G$ of size at least $\frac{\ensuremath{\mathrm{mp}}(G)-3}{2}$. \end{corollary} \begin{proof} To construct the multipacking, one first needs to compute the radius $r$ and diameter $d$ of the graph $G$. Then, as described in the proof of Corollary~\ref{coro:diam-rad}, we compute $\alpha$ and $k$, and find the four vertices $x$, $y$, $u$, $v$ and the two isometric paths described in Theorem~\ref{thm:main}. Finally, we proceed as in the proof of Theorem~\ref{thm:main}, that is, we essentially select every third vertex of these two paths to obtain the multipacking $P$. All distances and paths can be computed in polynomial time using classic methods. By Corollary~\ref{coro:diam-rad}, $P$ has size at least $\frac{\text{rad}(G)-3}{2}$. Since $\ensuremath{\mathrm{mp}}(G)\leq \text{rad}(G)$, the approximation factor follows. \end{proof}
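A simplified version of this procedure, which uses only a diametral isometric path and therefore only achieves the weaker guarantee $\lceil(\mathrm{diam}(G)+1)/3\rceil$ of the Proposition in Section~\ref{sec:bound}, already illustrates the "every third vertex" selection. A Python sketch (helper names are ours):

```python
def bfs_tree(adj, s):
    # breadth-first search from s: distances and parent pointers
    dist, parent, frontier = {s: 0}, {s: None}, [s]
    while frontier:
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    nxt.append(w)
        frontier = nxt
    return dist, parent

def every_third_multipacking(adj):
    # pick a diametral pair, recover a shortest (hence isometric) path
    # between them, and keep every third vertex of it
    dists = {v: bfs_tree(adj, v)[0] for v in adj}
    s, t = max(((u, v) for u in adj for v in adj),
               key=lambda p: dists[p[0]][p[1]])
    parent = bfs_tree(adj, s)[1]
    path = [t]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    path.reverse()
    k = (len(path) - 1) // 3
    return [path[3 * i] for i in range(k + 1)]

def is_multipacking(adj, P, dists):
    # every ball of radius r must contain at most r vertices of P
    n = len(adj)
    return all(sum(1 for p in P if dists[v][p] <= r) <= r
               for v in adj for r in range(1, n))

cycle = lambda n: {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
adj = cycle(12)                      # diameter 6
P = every_third_multipacking(adj)
dists = {v: bfs_tree(adj, v)[0] for v in adj}
assert len(P) == 3                   # ceil((6 + 1) / 3)
assert is_multipacking(adj, P, dists)
```

On $C_{12}$ (diameter $6$) this returns a valid multipacking of size $3$; the full procedure of Corollary~\ref{coro:diam-rad} additionally selects every third vertex of a second path hanging from the middle of the first.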
https://arxiv.org/abs/1702.03021
Elementary $L^\infty$ error estimates for super-resolution de-noising
This paper studies the problem of recovering a discrete complex measure on the torus from a finite number of corrupted Fourier samples. We assume the support of the unknown discrete measure satisfies a minimum separation condition and we use convex regularization methods to recover approximations of the original measure. We focus on two well-known convex regularization methods, and for both, we establish an error estimate that bounds the smoothed-out error in terms of the target resolution and noise level. Our $L^\infty$ approximation rate is entirely new for one of the methods, and improves upon a previously established $L^1$ estimate for the other. We provide a unified analysis and an elementary proof of the theorem.
\section{Introduction} \subsection{Overview and Contributions} Super-resolution techniques are concerned with the recovery of high-resolution features from coarse observations, and can be employed to capture information beyond the inherent resolution limit of the measurement system. Applications of super-resolution include microscopy \cite{lindberg2012mathematical}, astronomy \cite{puschmann2005super}, neuroscience \cite{rieke1999spikes}, medical imaging \cite{greenspan2009super}, and geophysics \cite{khaidukov2004diffraction}. While there are numerous empirical results on super-resolution, the theory is still in its infancy. Cand\`{e}s and Fernandez-Granda \cite{candes2014towards} introduced a super-resolution model where the unknown information is a discrete and periodic measure whose support set satisfies a minimum separation condition. They proved that such a measure can be uniquely recovered from a finite number of consecutive Fourier samples by solving a convex minimization problem. Several other papers \cite{de2012exact, tang2013compressed, aubel2015theory, duval2015exact} have addressed variations of this model, but it is also possible to study the super-resolution of non-discrete measures \cite{benedetto2016super}. The literature has also focused on the closely related model where the low-resolution data is corrupted by additive noise. This situation is important because in applications of the theory, there might be noise due to measurement error, data corruption, or quantization. Under the same minimum separation framework, several papers \cite{candes2013super, fernandez2013support, bhaskar2013atomic, tang2015near, azais2015spike, duval2015exact} used one of two important convex regularization methods, which we shall call Problems (\ref{SR1}) and (\ref{SR2}), to obtain approximations of the original measure. 
We also adopt the minimum separation model, but unlike the aforementioned papers, we address both convex regularization methods in a \emph{unified} manner. To do this, we show that both methods produce measures that satisfy two (fairly weak) inequalities. We prove that any measure enjoying these properties approximates the unknown measure in a natural sense, and in particular, we can bound the error in terms of the target resolution and noise level. We prove this result using well-known techniques, but we combine the pieces in a different and more efficient manner, resulting in a \emph{significantly simpler} argument. Our $L^\infty$ error estimate is entirely new for Problem (\ref{SR2}) and improves upon a previously established $L^1$ result for Problem (\ref{SR1}), derived by Cand\`{e}s and Fernandez-Granda \cite{candes2013super}. \subsection{Background} We first introduce some notation and discuss prior work on the noiseless case. While our results generalize to higher dimensions, for simplicity, we focus on the one-dimensional case. Let $M(\T)$ be the space of complex Radon measures on the torus group $\T=\R/\Z$. For $\mu\in M(\T)$, let $|\mu|$ be its variation, $\hat\mu$ be its Fourier transform, and $\|\mu\|$ be its total variation. For any integer $M>0$, let $\Lambda_M=\{-M,-M+1,\dots,M\}$ and define the projection $P_M\colon M(\T)\to C(\T;\Lambda_M)$ by \[ P_M\mu(x) =\sum_{m=-M}^M \hat\mu(m) e^{2\pi imx}, \] where $C(\T;\Lambda_M)$ is the space of trigonometric polynomials of degree $M$, i.e., \[ C(\T;\Lambda_M) =\Big\{f\in C^\infty(\T)\colon f(x)=\sum_{m=-M}^M a_m e^{2\pi imx}, \ a_m\in\C\Big\}. \] We say the discrete set $S=\{s_j\}_{j=1}^J$ satisfies the $\Lambda_M$-\emph{minimum separation condition} if \begin{equation} \label{min} \min_{1\leq j<k\leq J} |s_j-s_k|\geq \frac{2}{M}, \end{equation} where $|\cdot|$ is the distance on $\T$. 
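The minimum separation condition (\ref{min}) is straightforward to check numerically. The following Python sketch (purely illustrative; the function names are ours) uses the wrap-around distance on $\T=\R/\Z$:

```python
def torus_distance(s, t):
    """Distance on T = R/Z: the length of the shorter of the two arcs."""
    d = abs(s - t) % 1.0
    return min(d, 1.0 - d)

def satisfies_min_separation(support, M):
    """Check the Lambda_M-minimum separation condition:
    min_{j < k} |s_j - s_k| >= 2/M, with |.| the torus distance."""
    return all(torus_distance(s, t) >= 2.0 / M
               for i, s in enumerate(support)
               for t in support[i + 1:])
```

For instance, four equally spaced points at mutual distance $1/4$ satisfy the condition exactly when $2/M \leq 1/4$, i.e. $M \geq 8$.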
Let $M(\T;\Lambda_M)$ be the set of discrete measures on $\T$ whose support satisfies the $\Lambda_M$-minimum separation condition. Cand\`{e}s and Fernandez-Granda \cite[Theorem 1.2]{candes2014towards} proved that if $\mu_0\in M(\T;\Lambda_M)$ for $M\geq 128$, then $\mu_0$ is the unique solution to the \emph{super-resolution problem}, \begin{equation} \label{SR} \tag{SR} \inf \|\mu\| \quad \text{subject to}\quad \mu\in M(\T) \quad\text{and}\quad P_M\mu=P_M \mu_0. \end{equation} Their proof requires the assumption that $M\geq 128$, and it is unknown whether the theorem holds for all values of $M>0$. Further, the conclusion of their theorem still holds if we replace the numerical constant 2 in (\ref{min}) with a smaller constant and impose a stronger condition on $M$. For example, the conclusion holds if the 2 is replaced with 1.26 provided that $M\geq 10^3$ \cite[Theorem 2.2]{fernandez2016super}. As previously mentioned, we are concerned with the noisy case. For this model, instead of observing the noiseless data $P_M\mu_0$, suppose we are given the corrupted data, $P_M (\mu_0+\eta)$. The papers \cite{candes2013super,fernandez2013support} obtained an approximation of $\mu_0$ by solving the constrained minimization problem, \begin{equation} \label{SR1} \tag{SR$_{\delta}$} \inf \|\mu\| \quad \text{subject to}\quad \mu\in M(\T) \quad\text{and}\quad \|P_M(\mu-\mu_0-\eta)\|_{L^2}\leq \delta, \end{equation} where $\delta>0$ can be freely chosen. On the other hand, the papers \cite{bhaskar2013atomic, tang2015near, azais2015spike, duval2015exact} studied the closely related unconstrained minimization problem, \begin{equation} \label{SR2} \tag{SR$_{\tau}$} \inf \(\frac{1}{2}\|P_M(\mu-\mu_0-\eta)\|_{L^2}^2+\tau\|\mu\|\) \quad\text{subject to}\quad \mu\in M(\T), \end{equation} where $\tau>0$ can also be freely chosen. This problem is a special case of Tikhonov regularization. 
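To make the unconstrained problem concrete, here is a hedged numerical sketch (ours, with illustrative parameter values): restricting the candidate measure to a uniform grid on $\T$ turns Problem (SR$_\tau$) into $\ell^1$-regularized least squares, which we solve by proximal gradient descent (ISTA). The papers cited above do \emph{not} proceed this way; they work with the continuous problem via dual and semidefinite formulations.

```python
import numpy as np

# Discretized sketch of (SR_tau): a measure supported on the grid x_k = k/n
# is a coefficient vector c, and P_M maps it to Fourier samples F @ c.
n, M, tau = 64, 8, 0.01
x = np.arange(n) / n
m = np.arange(-M, M + 1)
F = np.exp(-2j * np.pi * np.outer(m, x))      # c -> (mu-hat(m))_{|m| <= M}

c_true = np.zeros(n, dtype=complex)
c_true[int(0.2 * n)] = 1.0                    # two spikes, separated by 0.4 > 2/M
c_true[int(0.6 * n)] = 1.0
y = F @ c_true                                # noiseless data, for simplicity

def objective(c):
    return 0.5 * np.linalg.norm(F @ c - y) ** 2 + tau * np.abs(c).sum()

# On a uniform grid with n > 2M, F F^* = n I, so ||F||^2 = n; step 1/n
# guarantees monotone descent for ISTA.
c, step = np.zeros(n, dtype=complex), 1.0 / n
history = [objective(c)]
for _ in range(500):
    g = c - step * (F.conj().T @ (F @ c - y))             # gradient step
    c = g * np.maximum(1 - tau * step / np.maximum(np.abs(g), 1e-15), 0)  # soft threshold
    history.append(objective(c))
```

After a few hundred iterations the objective is driven close to its minimum, which is at most $\tau\|c_{\mathrm{true}}\|_1$ here since the data are noiseless.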
Using standard weak-$\ast$ compactness arguments, it is not difficult to show that Problems (\ref{SR}), (\ref{SR1}), and (\ref{SR2}) are well-posed, i.e., the infimum in each of the three minimization problems is attained. Further, appropriate dual formulations of all three problems can be recast as semi-definite programs, see \cite{candes2014towards,candes2013super,tang2015near}. The most important question in the study of regularization methods is to determine if the regularized solutions approximate the noiseless solution in some suitable sense. Suppose $\mu_\delta$ and $\mu_\tau$ are solutions to Problems (\ref{SR1}) and (\ref{SR2}), respectively. Intuitively speaking, we expect that $\mu_\delta$ and $\mu_\tau$ converge to $\mu_0$ if the parameters $\delta$ and $\tau$ are chosen appropriately, depending on the noise level, as the noise level tends to zero. This intuition is somewhat correct, since it is possible to show convergence for a subsequence and in the weak-$\ast$ sense. Such convergence statements are qualitative, whereas we want a quantitative bound. This leads us to the question: What is a natural way of quantifying the errors, $\mu_\delta-\mu_0$ and $\mu_\tau-\mu_0$? Burger and Osher \cite{burger2004convergence} argued that, since Tikhonov regularization is achieved in the weak-$\ast$ topology, it would be surprising if it were possible to bound the error in the total variation norm. Since Problem (\ref{SR2}) is a special case of Tikhonov regularization and is similar to Problem (\ref{SR1}), it is reasonable to expect that the same principle applies. Numerical results have shown that the supports of $\mu_0$, $\mu_\tau$, and $\mu_\delta$ can be different \cite{candes2013super,duval2015exact}, which further supports this heuristic. Thus, it appears impossible to bound $\|\mu_\delta-\mu_0\|$ and $\|\mu_\tau-\mu_0\|$ in terms of the noise level. 
Since super-resolution is concerned with the recovery of fine details from coarse data, it is reasonable to bound $\mu_\delta-\mu_0$ and $\mu_\tau-\mu_0$ at small scales. Cand\`{e}s and Fernandez-Granda \cite{candes2013super} argued that it suffices to control smoothed-out errors at a certain resolution. For a kernel $K$, the smoothed-out errors are $K*(\mu_\delta-\mu_0)$ and $K*(\mu_\tau-\mu_0)$. \subsection{Results} We are primarily concerned with the solutions to Problems (\ref{SR1}) and (\ref{SR2}). In order to avoid addressing each method separately, we introduce the following definition. We say $\mu\in M(\T)$ is an \emph{$(\epsilon,\Lambda_M)$-approximation} of $\mu_0\in M(\T)$ if \begin{equation} \|\mu\|\leq \|\mu_0\|+2\epsilon \quad\text{and}\quad \|P_M(\mu-\mu_0)\|_{L^2}\leq 2\epsilon. \end{equation} The numerical constant 2 that appears in both inequalities is unimportant; our theorem still holds for any other sufficiently large constant. Propositions \ref{prop0} and \ref{prop1} show that solutions to either of the convex problems are $(\epsilon,\Lambda_M)$-approximations of $\mu_0\in M(\T;\Lambda_M)$, where $\epsilon$ depends on the noise. \begin{theorem} \label{thm1} There exists a constant $C>0$ such that the following holds. Suppose $\mu_0\in M(\T;\Lambda_M)$ for an integer $M\geq 128$ and $\mu$ is an $(\epsilon,\Lambda_M)$-approximation of $\mu_0$. For any twice differentiable $K$ with $K''\in L^\infty(\T)$, we have \begin{equation} \label{eq0} \|K*(\mu-\mu_0)\|_{L^\infty} \leq C\epsilon \big(\|K\|_{L^\infty} + M^{-1}\|K'\|_{L^\infty} + M^{-2}\|K''\|_{L^\infty}\big). \end{equation} \end{theorem} \begin{remark} \label{remark1} Since we are given noisy observations of $\mu_0$ up to frequency $M$ (equivalently, at scale $1/M$) and super-resolution is concerned with the recovery of fine details, we are particularly interested in quantifying the error $\mu-\mu_0$ at scale $1/N$, for integers $N>M$. 
There are two natural avenues for defining a kernel $K_N$ that corresponds to a function of scale $1/N$. \begin{enumerate}[(a)] \item The first is in the Fourier domain. Let $K_N\in C(\T;\Lambda_N)$. Important examples include the Dirichlet and Fej\'{e}r kernels. By Bernstein's inequality for trigonometric polynomials, we have \[ \|K_N''\|_{L^\infty} \leq N\|K_N'\|_{L^\infty} \leq N^2\|K_N\|_{L^\infty}. \] Inserting this into (\ref{eq0}), we obtain \[ \|K_N*(\mu-\mu_0)\|_{L^\infty} \leq C\|K_N\|_{L^\infty} \(\frac{N}{M}\)^2 \epsilon. \] \item The second is in the spatial domain. Suppose $k$ is twice differentiable, $k''$ is bounded, and $k$ is compactly supported in $[-\frac{1}{L},\frac{1}{L}]$ for some $L>2$. For an integer $N>M$, the function $k_N(x)=k(Nx)$ is compactly supported in $[-\frac{1}{LN},\frac{1}{LN}]$. Let $K_N\in C^2(\T)$ be the 1-periodization of $k_N$. We have \[ \|K_N\|_{L^\infty} =\|k\|_{L^\infty}, \quad \|K_N'\|_{L^\infty} \leq N\|k'\|_{L^\infty} \quad\text{and}\quad \|K_N''\|_{L^\infty} \leq N^2\|k''\|_{L^\infty}. \] Inserting this into (\ref{eq0}), we obtain \[ \|K_N*(\mu-\mu_0)\|_{L^\infty} \leq C \max\big(\|k\|_{L^\infty},\|k'\|_{L^\infty},\|k''\|_{L^\infty}\big) \(\frac{N}{M}\)^2 \epsilon. \] \end{enumerate} \end{remark} \subsection{Related work} The papers \cite{candes2013super, fernandez2013support, bhaskar2013atomic, tang2015near, azais2015spike, duval2015exact} assume $\mu_0$ is a discrete measure whose support satisfies the $\Lambda_M$-minimum separation condition and analyze either Problem (\ref{SR1}) or (\ref{SR2}). Our result is completely different from the results contained in the aforementioned papers, with the exception of Cand\`{e}s and Fernandez-Granda \cite[Theorem 1.2]{candes2013super}. There are also some important differences between our Theorem \ref{thm1} and their theorem. 
\begin{enumerate}[(a)] \item An important difference is that our result applies to both Problems (\ref{SR1}) and (\ref{SR2}), whereas their theorem only applies to the former de-noising method. To the best of our knowledge, we are the first to establish estimate (\ref{eq0}) for the latter method. \item Further, their theorem requires weaker assumptions on the kernel and they obtain $L^1(\T)$ estimates. We require slightly stronger assumptions on the kernel, but in return, we obtain $L^\infty(\T)$ estimates and a greatly simplified proof. In fact, they use a complicated comparison-of-scales argument to derive their inequality, whereas we shall not require this type of argument. Importantly, our stronger assumptions on the kernel do not preclude any important cases, see Remark \ref{remark1}, and from this perspective, these assumptions come for free. \end{enumerate} \section{Proofs} \label{sec proofs} \subsection{Notation} Before we prove the theorem, we need to introduce some notation. For a discrete set $S=\{s_j\}_{j=1}^J\subset\T$ and integer $M>0$, let \[ S_M(j) =\{x\in\T\colon |x-s_j|\leq 0.16 M^{-1}\}. \] If $S$ satisfies the $\Lambda_M$-minimum separation condition and $j\not=k$, then $S_M(j)$ and $S_M(k)$ are disjoint. The constant $0.16$ was originally chosen by Cand\`{e}s and Fernandez-Granda \cite{candes2014towards,candes2013super} in order to somewhat minimize the constants that appeared in their arguments. The following results still hold if 0.16 is replaced with a smaller positive constant, but the constants that appear in Propositions \ref{prop2} and \ref{prop3} and Theorem \ref{thm1} would also change. For convenience, let \[ S_M=\bigcup_{j=1}^J S_M(j). \] For a vector $v\in\C^K$, let $v_k$ denote its $k$-th entry, and let $\|v\|_\infty=\max_{1\leq k\leq K}|v_k|$. For a $K\times K$ matrix $D$, let $\|D\|_\infty$ be its operator norm. Note that we reserve $\|\cdot\|_{L^\infty}$ for functions and $\|\cdot\|_\infty$ for vectors and matrices. 
Throughout the remainder of this paper, we shall write $A \lesssim B$ if there exists a universal constant $C>0$ such that $A\leq CB$. In particular, the constant $C$ is independent of $\mu,\mu_0,K,M,J,\delta,\tau,\epsilon$. \subsection{Preliminary results} The following proposition establishes the connection between $(\epsilon,\Lambda_M)$-approximations of $\mu_0$ and the solutions to Problems (\ref{SR1}) and (\ref{SR2}) under a certain noise model. The following result holds without assuming $\mu_0\in M(\T;\Lambda_M)$ or $M\geq 128$, and clearly generalizes to higher dimensions. \begin{proposition} \label{prop0} Let $\mu_0\in M(\T)$ and $\eta\in L^2(\T)$ be unknown. Suppose we are given $P_M(\mu_0+\eta)$ for some integer $M>0$ and given $\epsilon>0$ such that $\|P_M\eta\|_{L^2}\leq\epsilon$. Set $\delta=\tau=\epsilon$. Then, any solution to Problem (\ref{SR1}) or (\ref{SR2}) is an $(\epsilon,\Lambda_M)$-approximation of $\mu_0$. \end{proposition} \begin{proof} \indent \begin{enumerate}[(a)] \item Let $\mu_\delta$ be a solution to Problem (\ref{SR1}). Observe that $\mu_0$ satisfies the constraint in Problem (\ref{SR1}) since \[ \|P_M(\mu_0-\mu_0-\eta)\|_{L^2} =\|P_M\eta\|_{L^2} \leq\epsilon. \] By definition of $\mu_\delta$ being a solution, we have $\|\mu_\delta\|\leq \|\mu_0\|$. We also have \[ \|P_M(\mu_\delta-\mu_0)\|_{L^2} \leq \|P_M(\mu_\delta-\mu_0-\eta)\|_{L^2}+\|P_M\eta\|_{L^2} \leq 2\epsilon. \] \item Let $\mu_\tau$ be a solution to Problem (\ref{SR2}). By definition of $\mu_\tau$ being a solution, we have \[ \epsilon \|\mu_\tau\| \leq \frac{1}{2} \|P_M(\mu_0-\mu_0-\eta)\|_{L^2}^2 + \epsilon \|\mu_0\|. \] Rearranging, we obtain $\|\mu_\tau\|\leq \|\mu_0\|+\epsilon/2$. The inequality, \[ \|P_M(\mu_\tau-\mu_0)\|_{L^\infty}\leq \tau, \] requires more work and we refer to \cite[Lemma 1]{bhaskar2013atomic} for a proof. Since $\|f\|_{L^2}\leq \|f\|_{L^\infty}$ on $\T$, this implies $\|P_M(\mu_\tau-\mu_0)\|_{L^2}\leq \tau \leq 2\epsilon$, as required. 
\end{enumerate} \end{proof} The previous proposition assumed that the noise satisfies $\|P_M\eta\|_{L^2}\leq\epsilon$, and in particular, this implies $|\hat\eta(m)|\leq\epsilon$ for all $m\in\Lambda_M$. If we do not want to assume that $\hat\eta(m)$ is bounded, an alternative noise model is to assume that $\hat\eta(m)$ is a Gaussian random variable. The following proposition shows that, with high probability, solutions to both convex problems are still $(\epsilon,\Lambda_M)$-approximations. \begin{proposition} \label{prop1} Let $\mu_0\in M(\T)$ and $\eta\in L^2(\T)$ be unknown. Suppose we are given $P_M(\mu_0+\eta)$ for some integer $M>0$ and the real and imaginary parts of $\hat\eta(m)$ are i.i.d. Gaussian random variables with mean zero and variance $\sigma^2$. For a parameter $\gamma>0$, set \begin{equation} \label{eq9} \epsilon =\delta =\tau =\sigma(1+\gamma)\sqrt{2(2M+1)}. \end{equation} With probability at least $1-e^{-(2M+1)\gamma^2}$, any solution to Problem (\ref{SR1}) or (\ref{SR2}) is an $(\epsilon,\Lambda_M)$-approximation of $\mu_0$. \end{proposition} \begin{proof} By Parseval's equality, note that \[ \frac{1}{\sigma^2}\|P_M\eta\|_{L^2}^2 =\sum_{m=-M}^M \frac{|\hat\eta(m)|^2}{\sigma^2}, \] is a $\chi^2$ random variable with $2(2M+1)$ degrees of freedom. By inequality (4.3) in \cite[Section 4]{laurent2000adaptive}, for all $x>0$, \[ \mathbb{P}\(\ \frac{1}{\sigma^2}\|P_M\eta\|_{L^2}^2 \geq 2(2M+1)+2\sqrt{2(2M+1)x}+2x\) \leq e^{-x}. \] Set $x=(2M+1)\gamma^2$. Since \[ 2(2M+1)+2\sqrt{2(2M+1)x}+2x = 2(2M+1)\big(1+\sqrt{2}\gamma+\gamma^2\big) \leq 2(2M+1)(1+\gamma)^2, \] we obtain \[ \mathbb{P}\(\|P_M\eta\|_{L^2} \geq \sigma(1+\gamma)\sqrt{2(2M+1)}\) \leq e^{-(2M+1)\gamma^2}. \] With probability at least $1-e^{-(2M+1)\gamma^2}$, we have \[ \|P_M\eta\|_{L^2} \leq \sigma(1+\gamma)\sqrt{2(2M+1)} =\epsilon. \] The conclusion follows from Proposition \ref{prop0}. \end{proof} The following proposition shows that a weighted integral of $|\mu-\mu_0|$ on $S_M^c$ can be controlled in terms of $\epsilon$, provided that the assumptions of Theorem \ref{thm1} hold. 
This result first appeared in \cite[Lemma 2.1]{candes2013super}, but only for the difference $|\mu_\delta-\mu_0|$. A similar, but not identical, result for $|\mu_\tau-\mu_0|$ was proved in \cite[Lemma 2]{tang2015near}. \begin{proposition} \label{prop2} There exists a constant $C>0$ such that the following hold. Suppose $\mu_0\in M(\T;\Lambda_M)$ for an integer $M\geq 128$, $S=\{s_j\}_{j=1}^J$ is the support of $\mu_0$, and $\mu$ is an $(\epsilon,\Lambda_M)$-approximation of $\mu_0$. Then, \begin{align*} \int_{S_M^c} \ d|\mu-\mu_0| &\leq C\epsilon, \\ \sum_j\int_{S_M(j)} |x-s_j|^2\ d|\mu-\mu_0|(x) &\leq C M^{-2}\epsilon. \end{align*} \end{proposition} \begin{proof} Let $\nu=\mu-\mu_0$. It was shown in \cite[Lemma 2.4]{candes2013super} that there exists $f\in C(\T;\Lambda_M)$ with $\|f\|_{L^\infty}\leq 1$ and universal constants $C_1,C_2>0$ such that \begin{align*} \int_S\ d|\nu| &=\Big|\int_S f\ d\nu \Big| \\ &\leq \Big|\int_{\T} f\ d\nu \Big| + \Big|\int_{S_M^c} f\ d\nu\Big| + \Big|\sum_j \int_{S_M(j)\setminus\{s_j\}} f\ d\nu \Big| \\ &\leq \Big|\int_{\T} f\ d\nu \Big| +\int_{S^c}\ d|\nu| - C_1\int_{S_M^c}\ d|\nu| - C_2M^2 \sum_j\int_{S_M(j)} |x-s_j|^2\ d|\nu|(x). \end{align*} Rearranging, we obtain \begin{equation} \label{eq6} C_1\int_{S_M^c}\ d|\nu| + C_2M^2 \sum_j\int_{S_M(j)} |x-s_j|^2\ d|\nu|(x) \leq \Big|\int_{\T} f\ d\nu \Big| + \int_{S^c}\ d|\nu| - \int_S\ d|\nu|. \end{equation} By definition of $(\epsilon,\Lambda_M)$-approximation, $f\in C(\T;\Lambda_M)$, and that $\|f\|_{L^2}\leq \|f\|_{L^\infty}\leq 1$, we see that \begin{equation} \label{eq7} \Big|\int_{\T} f\ d\nu \Big| \leq \|f\|_{L^2}\|P_M\nu\|_{L^2} \leq 2\epsilon. \end{equation} By definition of $(\epsilon,\Lambda_M)$-approximation and that $\mu_0$ is supported in $S$, we have \[ 2 \epsilon +\|\mu_0\| \geq \|\mu\| =\|\mu_0+\nu\| \geq \int_S\ d|\mu_0|-\int_S\ d|\nu| + \int_{S^c}\ d|\nu|. \] Rearranging this inequality, we obtain \begin{equation} \label{eq8} \int_{S^c}\ d|\nu| - \int_S\ d|\nu| \leq 2\epsilon. 
\end{equation} Combining inequalities (\ref{eq6}), (\ref{eq7}) and (\ref{eq8}) completes the proof. \end{proof} The following proposition is a generalization of \cite[Lemmas 2.5 and 2.7]{candes2013super}, and shows that there exists $f\in C(\T;\Lambda_M)$ that behaves like an affine function on each $S_M(j)$. \begin{proposition} \label{prop3} There exists a constant $C>0$ such that the following hold. Suppose $M\geq 128$ and the set $S=\{s_j\}_{j=1}^J\subset\T$ satisfies the $\Lambda_M$-minimum separation condition. For any $a,b\in \C^J$, there exists $f\in C(\T;\Lambda_M)$ such that \begin{align*} \|f\|_{L^\infty} &\leq C (\|a\|_\infty + M^{-1} \|b\|_\infty), \\ |f(x)-a_j-b_j(x-s_j)| &\leq C(M^2\|a\|_\infty + M \|b\|_\infty) |x-s_j|^2, \quad x\in S_M(j). \end{align*} \end{proposition} \begin{proof} Following the recipe given in \cite[Section 2]{candes2014towards}, it is possible to explicitly construct the desired $f$. Let \[ G(x)=\(\frac{\sin ((\frac{M}{2}+1)\pi x)}{(\frac{M}{2}+1)\sin(\pi x)}\)^4, \] and note that $G\in C(\T;\Lambda_M)$. We claim that there exist $\alpha,\beta\in\C^J$ such that if we define $f$ by \[ f(x)=\sum_j \alpha_j G(x-s_j) + \sum_j \beta_j G'(x-s_j), \] then \begin{equation} \label{f3} f(s_j)=a_j, \quad\text{and}\quad f'(s_j)=b_j. \end{equation} To see why, we define the matrices $D_0,D_1,D_2\in \C^{J\times J}$, where \[ (D_0)_{j,k}=G(s_j-s_k),\quad (D_1)_{j,k}=G'(s_j-s_k), \quad\text{and}\quad (D_2)_{j,k}=G''(s_j-s_k). \] To prove the existence of the desired $f$, it suffices to show that there exists a solution to the system of equations, \[ \begin{pmatrix} D_0 &D_1 \\ D_1 &D_2 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}. 
\] It was shown in \cite[Section 2]{candes2014towards} that the $\Lambda_M$-minimum separation condition on $S$ and the assumption $M\geq 128$ imply that the system is invertible and that the unique solution is given by \begin{align*} \alpha &= D_0^{-1}(a-D_1\beta), \\ \beta &= (D_2-D_1D_0^{-1}D_1)^{-1} (b-D_1D_0^{-1}a). \end{align*} This proves the existence of $f$ satisfying conditions (\ref{f3}). Next, we obtain estimates on $\alpha,\beta$. It was also shown in \cite[Section 2]{candes2014towards} that \begin{align*} \|D_0^{-1}\|_\infty &\lesssim 1,\\ \|D_1\|_\infty &\lesssim M, \\ \|(D_2-D_1D_0^{-1}D_1)^{-1}\|_\infty &\lesssim M^{-2}. \end{align*} These inequalities imply \begin{align*} \|\beta\|_\infty &\lesssim M^{-1}\|a\|_\infty + M^{-2} \|b\|_\infty, \\ \|\alpha\|_\infty &\lesssim \|a\|_\infty + M^{-1}\|b\|_\infty. \end{align*} It was shown in \cite[Section 2]{candes2014towards} that \[ \sum_{k\not=j} |G^{(\ell)}(x-s_k)|\lesssim M^\ell, \quad x\in S_M(j) \quad \text{and}\quad \ell=0,1,2,3. \] Since $G^{(\ell)}$ decays rapidly away from the origin, the above inequalities imply, for all $x\in\T$, \begin{align*} |f(x)| &\leq \|\alpha\|_\infty \sum_j |G(x-s_j)| + \|\beta\|_\infty \sum_j |G'(x-s_j)| \\ &\lesssim \|a\|_\infty + M^{-1}\|b\|_\infty. \end{align*} This proves the first inequality of the proposition. On each $S_M(j)$, define the function $h_j(x)=f(x)-a_j-b_j(x-s_j)$. It follows from (\ref{f3}) that $h_j(s_j)=h_j'(s_j)=0$. For all $x\in S_M(j)$, we have \begin{align*} |h_j''(x)| =|f''(x)| &\leq \|\alpha\|_\infty \sum_k |G''(x-s_k)| +\|\beta\|_\infty \sum_k |G'''(x-s_k)| \\ &\lesssim M^2\|a\|_\infty + M \|b\|_\infty. \end{align*} Using Taylor expansions of $h_j$ around $s_j$, we obtain \[ |f(x)-a_j-b_j(x-s_j)| \lesssim (M^2\|a\|_\infty + M \|b\|_\infty) |x-s_j|^2, \quad x\in S_M(j). \] \end{proof} \subsection{Proof of Theorem \ref{thm1}} Let $\nu=\mu-\mu_0$ and fix $x_0\in\T$. 
Since $\mu_0\in M(\T;\Lambda_M)$, we know that $\mu_0$ is supported in some discrete set $S=\{s_j\}_{j=1}^J$ satisfying the $\Lambda_M$-separation condition. We have \begin{align} \label{eq1} \begin{split} |(K*\nu)(x_0)| &=\Big|\int_{\T} K(x_0-x)\ d\nu(x)\Big| \\ &\leq \Big|\sum_j \int_{S_M(j)} K(x_0-x) \ d\nu(x)\Big| + \|K\|_{L^\infty}\int_{S_M^c} \ d|\nu|. \end{split} \end{align} The first-order Taylor expansion of $K(x_0-x)$ around the point $x_0-s_j$ on the interval $S_M(j)$ is \[ K(x_0-x) =K(x_0-s_j)+K'(x_0-s_j)(s_j-x)+ \frac{1}{2} K''(z_j)|x-s_j|^2, \quad x\in S_M(j), \] for some $z_j\in\T$ depending on $x_0,x,s_j$. Inserting this into (\ref{eq1}), we obtain \begin{align} \label{eq2} \begin{split} |(K*\nu)(x_0)| &\leq \Big|\sum_j \int_{S_M(j)} (K(x_0-s_j) - K'(x_0-s_j)(x-s_j)) \ d\nu(x)\Big| \\ &\quad + \|K''\|_{L^\infty} \sum_j \int_{S_M(j)} |x-s_j|^2\ d|\nu|(x) + \|K\|_{L^\infty}\int_{S_M^c} \ d|\nu|. \end{split} \end{align} To bound the first term on the right hand side, we use an interpolation argument. Let $a,b\in\C^J$ such that $a_j = K(x_0-s_j)$ and $b_j = -K'(x_0-s_j)$. Let $f\in C(\T;\Lambda_M)$ be a function satisfying the properties in Proposition \ref{prop3}. We have \begin{align} \label{f1} \|f\|_{L^\infty} &\lesssim \|K\|_{L^\infty} + M^{-1}\|K'\|_{L^\infty}, \\ \label{f2} |f(x)-a_j-b_j(x-s_j)| &\lesssim (M^2\|K\|_{L^\infty} + M\|K'\|_{L^\infty}) |x-s_j|^2, \quad x\in S_M(j). \end{align} Inequality (\ref{f2}) implies \begin{align} \label{eq3} \begin{split} &\Big|\sum_j \int_{S_M(j)} (K(x_0-s_j) - K'(x_0-s_j)(x-s_j)) \ d\nu(x)\Big| \\ &\quad \leq \Big|\sum_j \int_{S_M(j)} (f(x)-K(x_0-s_j)+K'(x_0-s_j)(x-s_j)) \ d\nu(x)\Big| +\Big| \int_{S_M} f \ d\nu\Big| \\ &\quad \lesssim \big(M^2\|K\|_{L^\infty} + M\|K'\|_{L^\infty}\big) \sum_j \int_{S_M(j)} |x-s_j|^2 \ d|\nu|(x) + \Big|\int_{\T} f\ d\nu\Big| + \Big|\int_{S_M^c} f\ d\nu\Big|. 
\end{split} \end{align} Using inequality (\ref{f1}), we obtain \begin{equation} \label{eq4} \Big|\int_{S_M^c} f\ d\nu\Big| \lesssim \big(\|K\|_{L^\infty} + M^{-1}\|K'\|_{L^\infty}\big) \int_{S_M^c} \ d|\nu|. \end{equation} Using inequality (\ref{f1}) and the definition of an $(\epsilon,\Lambda_M)$-approximation, we see that \begin{equation} \label{eq5} \Big|\int_{\T} f\ d\nu\Big| \leq \|f\|_{L^2}\|P_M\nu\|_{L^2} \lesssim \big(\|K\|_{L^\infty} + M^{-1}\|K'\|_{L^\infty}\big)\epsilon. \end{equation} Combining inequalities (\ref{eq2}), (\ref{eq3}), (\ref{eq4}) and (\ref{eq5}), we obtain \begin{align*} |(K*\nu)(x_0)| &\lesssim \big(\|K\|_{L^\infty} + M^{-1}\|K'\|_{L^\infty} \big)\epsilon \\ &\quad + \big(\|K\|_{L^\infty} + M^{-1}\|K'\|_{L^\infty} \big)\int_{S_M^c} \ d|\nu| \\ &\quad + \big(M^2\|K\|_{L^\infty}+M\|K'\|_{L^\infty}+\|K''\|_{L^\infty} \big) \sum_j \int_{S_M(j)} |x-s_j|^2\ d|\nu|(x). \end{align*} Finally, we apply Proposition \ref{prop2} to complete the proof. \section{Acknowledgements} The author thanks Professors John J. Benedetto and Hans G. Feichtinger for their helpful feedback and interest in the manuscript. This work was partially supported by DTRA Grant 1-13-1-0015. \nocite{benedetto2010integration}
https://arxiv.org/abs/2206.12345
Some dynamics in real quadratic fields with applications to inhomogeneous minima
Let $K$ be a real quadratic field. We use a symbolic coding of the action of a fundamental unit on the real $2$-torus associated to $K$ to study the family of subsets $X_t$ of norm distance $\geq t$ from the origin. As an application, we prove that the inhomogeneous spectrum of $K$ contains a dense set of elements of $K$, and conclude that all isolated inhomogeneous minima lie in $K$.
\section{Introduction} Let $D>1$ be a square-free positive integer and let $K = \mathbb{Q}(\sqrt{D})$ be the associated real quadratic field with ring of integers $\scr{O}_K$. Let $\mathbf{N}:K\longrightarrow \mathbb{Q}$ denote the absolute norm $\mathbf{N}(a) = |\mathrm{Nm}_{K/\mathbb{Q}}(a)| = |a\overline{a}|$, where $a\mapsto\overline{a}$ is Galois conjugation, and recall that the ring $\scr{O}_K$ is called \emph{norm-Euclidean} if for all $a\in K$ there exists $q\in\scr{O}_K$ such that $\mathbf{N}(a-q)<1.$ The ring of integers $\scr{O}_K$ embeds as a lattice in the two-dimensional real vector space $V_K = K\otimes_\mathbb{Q}\mathbb{R}$, and we denote the quotient torus by $\mathbb{T}_K = V_K/\scr{O}_K$. Galois conjugation extends linearly to $V_K$, and the absolute norm extends accordingly to an indefinite quadratic form on $V_K$ that we also denote by $\mathbf{N}$. The norm is not $\scr{O}_K$-invariant, but the function defined by $$M(P) = \inf_{Q\in \scr{O}_K}\mathbf{N}(P-Q)$$ is, and descends to a function on the torus $\mathbb{T}_K$ which we also denote by $M$. The function $M$ is upper-semicontinuous (\cite{bsd1}, Theorem F). The \emph{Euclidean minimum} of $K$ is defined by $M_1(K) = \sup_{P\in K}M(P)$. In particular, $M_1(K)<1$ implies that $\scr{O}_K$ is norm-Euclidean, while $M_1(K)>1$ implies that it is not. The second Euclidean minimum is defined by $$M_2(K) = \sup_{M(P)<M_1(K)}M(P)$$ and $M_1(K)$ is said to be \emph{isolated} if $M_2(K)<M_1(K)$. We may proceed in this fashion, producing Euclidean minima $M_i(K)$ until we find a non-isolated one. Note that upper-semicontinuity ensures that each of these suprema is actually achieved by some collection of points on the torus. These Euclidean minima demonstrate a variety of behavior, in some cases producing an infinite sequence of isolated minima while in others we find that $M_2(K)$ already fails to be isolated; see \cite{lemmermeyer} for an overview of results. 
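As a concrete illustration (ours, not part of the paper's arguments), the function $M$ can be approximated numerically by truncating the infimum over $\scr{O}_K$ to a box of integer translates. We do this for $K=\mathbb{Q}(\sqrt5)$, where writing elements as $x+y\alpha$ with $\alpha=(1+\sqrt5)/2$ gives $\mathrm{Nm}(x+y\alpha)=x^2+xy-y^2$; a parity argument shows the truncated value at $P=(1+\alpha)/2$ is exact, so $M(P)=1/4$, which matches Davenport's value $M_1=1/4$ for this field.

```python
import itertools

def norm_form(x, y):
    """|Nm(x + y*alpha)| for K = Q(sqrt 5) in the basis {1, alpha},
    alpha = (1 + sqrt 5)/2: Nm(x + y*alpha) = x^2 + xy - y^2."""
    return abs(x * x + x * y - y * y)

def M_truncated(x, y, box=8):
    """Upper bound for M(P) = inf_{Q in O_K} N(P - Q), truncating the infimum
    to translates Q = q1 + q2*alpha with |q1|, |q2| <= box."""
    return min(norm_form(x - q1, y - q2)
               for q1, q2 in itertools.product(range(-box, box + 1), repeat=2))

# At P = (1 + alpha)/2, i.e. (x, y) = (1/2, 1/2), every integer translate has
# coordinates (a/2, b/2) with a, b odd, and a^2 + ab - b^2 is odd, hence
# |a^2 + ab - b^2| >= 1: so N(P - Q) >= 1/4 for ALL Q, and the truncated
# value 1/4 (attained at Q = 0) is exact.
assert M_truncated(0.5, 0.5) == 0.25
```

All quantities involved are dyadic rationals, so the floating-point computation above is exact.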
Barnes and Swinnerton-Dyer conjectured in \cite{bsd2} that $M_1(K)$ is always isolated and rational, and that $M_2(K)$ is taken at a point with coordinates in $K$. Numerous computations by other authors (\emph{e.g.} \cite{davenport2}, \cite{davenport4}, \cite{inkeri}, \cite{godwin55}, \cite{godwin63}, \cite{varnavides48}) suggest further that all Euclidean minima lie in $K$. \begin{theorem} All isolated Euclidean minima lie in $K$. If $M_1(K)$ is isolated, then it lies in $\mathbb{Q}$. \end{theorem} \noindent The first part of this statement follows from the next theorem, which implies more broadly that all isolated points of the \emph{Euclidean spectrum} $\mathrm{ES}(K) = M(\mathbb{T}_K)$ lie in $K$. The method of proof establishes that any such isolated point is taken at a point $P$ with coordinates in $K$, and we prove that $M(P)\in K$ for such points. The second part is here for completeness, but was known already to Barnes and Swinnerton-Dyer (\cite{bsd2}, Theorem M). The following theorem is our main result, and is proven in Section \ref{sec:apps}. \begin{theorem} The set $\mathrm{ES}(K)\cap K$ is dense in $\mathrm{ES}(K)$. \end{theorem} \section{The dynamical systems $X_t$} By Dirichlet's unit theorem, we have $\scr{O}_K^\times = \pm \varepsilon^\mathbb{Z}$ for some fundamental unit $\varepsilon$ of infinite order. We will later fix an embedding of $K$ into $\mathbb{R}$ and assume that $\varepsilon$ is chosen so that $\varepsilon>1$. Multiplication by $\varepsilon$ is absolute norm-preserving and extends by linearity to an endomorphism $\phi$ of $V_K$ that is also absolute norm-preserving. Since $\phi$ preserves the lattice $\scr{O}_K$, it descends to an endomorphism of the torus $\mathbb{T}_K$ with the property that $M(\phi(P)) = M(P)$ for all $P\in \mathbb{T}_K$. The eigenvalues of $\phi$ are the embeddings of $\varepsilon$ into $\mathbb{R}$ and hence not roots of unity, so $\phi$ is an ergodic transformation of $\mathbb{T}_K$. 
This dynamical system, and a symbolic coding of it obtained from a Markov partition of the torus, is our main resource. We note that the subset $K/\scr{O}_K$ coincides with the set of periodic points for $\phi$. For $t>0$, the $\phi$-invariant set $X_t = \{P\in \mathbb{T}_K\ |\ M(P)\geq t\}$ is closed by upper semicontinuity. We can describe $X_t$ alternatively by first noting that the open set $$\scr{U}(t) = \bigcup_{Q\in\scr{O}_K}\{P\in V_K\ |\ \mathbf{N}(P-Q)<t\}$$ is translation-invariant and descends to an open subset of $\mathbb{T}_K$, and then observing that $X_t$ is its complement. The sets $X_t$ have Lebesgue measure zero for $t>0$ since they are proper, closed, and $\phi$-invariant. \begin{question} How does the Hausdorff dimension $\dim(X_t)$ vary with $t$? \end{question} \noindent That $\dim(X_t)\to 2$ as $t\to 0$ is a simple consequence of Theorem 2.3 of \cite{bm}. We prove in Corollary \ref{leftcontinuity} that $\dim(X_t)$ is left-continuous everywhere. Right-continuity remains an open question. We illustrate this dimension in the case $K=\mathbb{Q}(\sqrt{5})$. Davenport computed the Euclidean minima for this field in \cite{davenport2} and \cite{davenport4}, finding the infinite decreasing sequence of minima $M_1 = 1/4$, and for $i\geq 1$, $$M_{i+1} = \frac{f_{6i-2} + f_{6i-4}}{4(f_{6i-1}+f_{6i-3}-2)}$$ where $f_k$ denotes the $k$th Fibonacci number. Each of these minima is obtained at a finite collection of elements of $K/\scr{O}_K$, and we have $M_i\longrightarrow t_\infty = (-1+\sqrt{5})/8\approx .1545$. A plot of $\dim(X_t)$ in this case is given below. The zero-dimensional region necessarily covers $t>t_\infty$, since the collection of points giving rise to the Euclidean minima is countable. We prove in \cite{ramsey} that $\dim(X_t)>0$ for all $t<t_\infty$ and that $\dim(X_t)$ is continuous at $t_\infty$. 
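Davenport's formula is easy to evaluate exactly. The following sketch (illustrative only; the Fibonacci indexing convention $f_1=f_2=1$ is assumed) computes the minima as exact rationals and checks the convergence $M_i\to(-1+\sqrt5)/8$.

```python
from fractions import Fraction

def fib(k):
    """Fibonacci numbers with f_1 = f_2 = 1."""
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def davenport_minimum(i):
    """M_{i+1} for Q(sqrt 5) from Davenport's formula, valid for i >= 1."""
    num = fib(6 * i - 2) + fib(6 * i - 4)
    den = 4 * (fib(6 * i - 1) + fib(6 * i - 3) - 2)
    return Fraction(num, den)

# M_1 = 1/4, then the formula; the sequence decreases to t_inf.
minima = [Fraction(1, 4)] + [davenport_minimum(i) for i in range(1, 11)]
t_inf = (5 ** 0.5 - 1) / 8   # (-1 + sqrt 5)/8, approximately 0.1545
```

For instance $M_2 = (f_4+f_2)/(4(f_5+f_3-2)) = 4/20 = 1/5$, and the tail of the sequence agrees with $t_\infty$ to many digits.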
\begin{center} \includegraphics[scale=.4]{ubs611.png} \end{center} The evident plateaus on this graph and its detail in Figure \ref{charondetail} have dynamical significance. The dimensions plotted here are actually upper bounds obtained by symbolically coding the torus dynamical system with a Markov partition and finding subshifts of finite type (SFTs) that contain the coding of $X_t$, as in Section \ref{sec:ub}. As we will see in Proposition \ref{isosft}, a plateau will occur wherever it is possible to make such a bound tight and $X_t$ can be described directly by an SFT. The longest such plateau occurs around $t=.15$ (see Figure \ref{charondetail} for a detail), and we give an explicit symbolic coding of the $X_t$ on this plateau in \cite{ramsey}. \begin{figure} \centering \includegraphics[scale=.25]{charon.png} \caption{Detail near $t=.15$} \label{charondetail} \end{figure} \section{Coordinates and $K$-points}\label{coords} Let us now take $K$ to be a subset of $\mathbb{R}$ by fixing an embedding, and take $\varepsilon$ to be a fundamental unit with $\varepsilon > 1$. Recall that $\{1, \alpha_K\}$ is a $\mathbb{Z}$-basis of $\scr{O}_K$, where $$\alpha_K = \left\{\begin{array}{cc} \sqrt{D} & D\equiv 2,3\!\pmod{4} \\ \frac{1+\sqrt{D}}{2} & D\equiv 1\!\pmod{4}\end{array}\right.$$ Coordinates with respect to this basis will be denoted $(x,y)$. The choice of embedding gives an isomorphism \begin{eqnarray*} V_K = K\otimes_\mathbb{Q}\mathbb{R} &\stackrel{\sim}{\longrightarrow}& \mathbb{R}\times\mathbb{R} \\ a\otimes 1 & \longmapsto & (\overline{a},a) \end{eqnarray*} of $\mathbb{R}$-algebras, and thus another coordinate system. Multiplication by $\varepsilon$ has the effect of multiplying by $\overline{\varepsilon} = \pm \varepsilon^{-1}$ in the first coordinate and $\varepsilon$ in the second coordinate. Accordingly, these are known as the \emph{stable} and \emph{unstable} coordinates and denoted $(s,u)$.
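The two coordinate systems are related by an explicit $K$-linear change of variables. A minimal sketch for $D=5$ (illustrative only; the function names are ours):

```python
SQRT5 = 5 ** 0.5
ALPHA = (1 + SQRT5) / 2      # alpha_K for D = 5
ALPHA_BAR = (1 - SQRT5) / 2  # Galois conjugate of alpha_K

def xy_to_su(x, y):
    """Basis coordinates (x, y) w.r.t. {1, alpha_K} -> embedding coordinates (s, u)."""
    return (x + y * ALPHA_BAR, x + y * ALPHA)

def su_to_xy(s, u):
    """Inverse change of coordinates; both maps are K-linear."""
    y = (u - s) / (ALPHA - ALPHA_BAR)   # ALPHA - ALPHA_BAR = sqrt(D)
    return (u - y * ALPHA, y)

# A lattice point a = 2 - 3*alpha_K maps to the pair (conj(a), a).
s, u = xy_to_su(2, -3)
```

Multiplying $(s,u)$ coordinatewise by $(\overline{\varepsilon},\varepsilon)$ scales the product $su$ by $\varepsilon\overline{\varepsilon}=\pm1$, so $\mathbf{N}(s,u)=|su|$ is preserved, as asserted above.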
Note that the absolute norm is simply $\mathbf{N}(s,u) = |su|$ in these coordinates, and that the coordinate transformations between $(x,y)$ and $(s,u)$ coordinates are $K$-linear. A point $P\in \mathbb{T}_K$ is called \emph{determinate} if it has a representative $Q\in V_K$ with $\mathbf{N}(Q) = M(P)$. It is shown in \cite{bm} (Theorem 2.6) that the set of determinate points is a meagre $F_\sigma$ set of measure zero and Hausdorff dimension 2. For a general point $P\in\mathbb{T}_K$, the following two lemmas help relate the value $M(P)$ to the more concrete values $\mathbf{N}(Q)$ for $Q\in V_K$. \begin{lemma}[\cite{bm}, Lemma 4.2]\label{bmlemma} Suppose that $P\in\mathbb{T}_K$ satisfies $M(P)<t$. There exists a point $Q=(s,u)\in V_K$ representing an element of the orbit of $P$ satisfying $$|s|, |u| < \sqrt{\varepsilon t}$$ such that $\mathbf{N}(Q) = |su| <t$. \end{lemma} \begin{lemma}\label{orbclosure} Let $P\in \mathbb{T}_K$. There exists $Q\in V_K$ representing an element of the orbit closure of $P$ satisfying $$\mathbf{N}(Q) = M(Q) = M(P)$$ \end{lemma} \begin{proof} Let $R_{\mathrm{big}}$ denote the rectangle in $V_K$ given by $|s|,|u|< \sqrt{\varepsilon(M_1(K)+1)}$. By Lemma \ref{bmlemma}, there is for each $n\in\mathbb{N}$ a point $Q_n\in R_{\mathrm{big}}$ representing an element of the orbit $\phi^\mathbb{Z}(P)$ with \begin{equation}\label{qnbd} \mathbf{N}(Q_n)<M(P)+\frac{1}{n} \end{equation} Since $R_{\mathrm{big}}$ is bounded, there exists a subsequence $Q_{k_n}$ converging to some point $Q$. Observe that $$M(Q) \leq \mathbf{N}(Q) = \lim_{n\to\infty}\mathbf{N}(Q_{k_n})\leq M(P)$$ where the last inequality follows from (\ref{qnbd}). The definition of $Q$ ensures that it represents an element of the closure of $\phi^\mathbb{Z}(P)$. But this implies that $M(Q)\geq M(P)$ by upper-semicontinuity, since the value $M(P)$ is common to the entire orbit of $P$. These inequalities combine to give the desired result.
\end{proof} Let us call a point in $V_K$ with rational $(x,y)$ coordinates a \emph{$\mathbb{Q}$-point}. Similarly a \emph{$K$-point} is one whose $(x,y)$ coordinates lie in $K$, or equivalently whose $(s,u)$ coordinates lie in $K$. The set of $\mathbb{Q}$-points coincides with $K/\scr{O}_K$, which is also the set of periodic points for $\phi$. In particular, if $P$ is a $\mathbb{Q}$-point then the previous lemma immediately implies that $P$ is determinate and $M(P)\in\mathbb{Q}$. \begin{proposition}\label{KptMK} Let $P\in\mathbb{T}_K$ be a $K$-point. \begin{enumerate} \item There exists $N\in\mathbb{N}$ such that $$\phi^k(NP)\longrightarrow 0\ \ \ \ \mbox{as}\ \ \ \ |k|\to\infty$$ \item $M(P)\in K$ \end{enumerate} \end{proposition} \begin{proof}\ \begin{enumerate} \item Since $P$ has $(s,u)$ coordinates in $K$, there exists $N\in\mathbb{N}$ such that $NP$ has $(s,u)$ coordinates in $\scr{O}_K$. In $(s,u)$ coordinates, the lattice $\scr{O}_K\subseteq V_K$ is given by the set of pairs $(\overline{a},a)$ for $a\in\scr{O}_K$. It follows by subtracting such elements that $NP$ has a representative whose stable coordinate vanishes, as well as a representative whose unstable coordinate vanishes. Now $\phi^{k}(NP)\to 0$ as $|k|\to \infty$ follows immediately. \item By the previous part, the orbit closure of the $K$-point $P$ consists of the orbit of $P$ together with a finite collection of $N$-torsion points on the torus. By Lemma \ref{orbclosure}, there exists $Q\in V_K$ representing an element of this orbit closure with $\mathbf{N}(Q) = M(P)$. Should $Q$ represent an $N$-torsion point, then $M(P)\in\mathbb{Q}$ since torsion points are $\mathbb{Q}$-points. On the other hand, if $Q=(s,u)$ represents an element of the orbit of $P$ then $Q$ is also a $K$-point, so we have $M(P)=\mathbf{N}(Q)=|su|\in K$. \end{enumerate} \end{proof} \section{Markov Partitions} For each $K$, the dynamical system $(\mathbb{T}_K,\phi)$ admits a Markov partition consisting of two open rectangles. 
Such a partition $\{R_0, R_1\}$ for $K=\mathbb{Q}(\sqrt{5})$ is pictured in Figure \ref{rq5morig} in $(x,y)$ coordinates. Figure \ref{morigsu} furnishes a uniform description in $(s,u)$ coordinates of a two-rectangle Markov partition for any $K$. This description is simply the one provided by Adler in \cite{adler} translated into $(s,u)$ coordinates. See also \cite{snavely}, where the construction may originate. \begin{figure}[h] \centering \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.5\textwidth]{rq5morig1.png} \caption{$\{R_0, R_1\}$ for $\mathbb{Q}(\sqrt{5})$} \label{rq5morig} \end{minipage}\hfill \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.5\textwidth]{rq5morigsu1.png} \put(-6,6){$\bullet$} \put(3,7){\small $(-\overline{\alpha_K},0)$} \put(-43,100){$\bullet$} \put(-37,100){\small $(0,\alpha_K)$} \put(-43,65){$\bullet$} \put(-37,73){\small $(0,1)$} \put(-102,6){$\bullet$} \put(-135,7){\small $(-1,0)$} \caption{\centering Original partition in $(s,u)$ coordinates (scale shown for $\mathbb{Q}(\sqrt{5})$)} \label{morigsu} \end{minipage} \end{figure} These two-rectangle partitions are typically not \emph{generators} essentially because the intersections $R\cap \phi(S)$ are generally disconnected. In the case of $\mathbb{Q}(\sqrt{5})$ however, the original partition $\scr{P}_0=\{R_0, R_1\}$ is a generator. Moreover, $R_0\cap\phi(R_0) = \emptyset$, while the remaining intersections consist of a single nonempty rectangle each. Let $\Sigma$ denote the subset of $\{0,1\}^\mathbb{Z}$ that avoids the string $00$ and let $\sigma:\Sigma\to\Sigma$ be the shift operator $\sigma(s)_i = s_{i+1}$. 
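In other words, for $K=\mathbb{Q}(\sqrt5)$ the coding space $\Sigma$ is the golden-mean shift. As a small illustration (not from the paper), admissible words can be counted either by brute force or with the transfer matrix whose $(i,j)$ entry records whether the block $ij$ is allowed; the counts are Fibonacci numbers, and the topological entropy of $(\Sigma,\sigma)$ is the logarithm of the Perron eigenvalue $(1+\sqrt5)/2$.

```python
import itertools, math

TRANSFER = [[0, 1], [1, 1]]   # entry (i, j) = 1 iff the block "ij" is allowed ("00" forbidden)

def count_words(n):
    """Number of admissible words of length n, by brute-force enumeration."""
    return sum(1 for w in itertools.product("01", repeat=n) if "00" not in "".join(w))

def count_words_matrix(n):
    """Same count via the transfer matrix: sum of the entries of T^(n-1)."""
    row = [1, 1]   # row vector of ones, pushed through n-1 transitions
    for _ in range(n - 1):
        row = [row[0] * TRANSFER[0][0] + row[1] * TRANSFER[1][0],
               row[0] * TRANSFER[0][1] + row[1] * TRANSFER[1][1]]
    return sum(row)

entropy = math.log((1 + 5 ** 0.5) / 2)   # log of the Perron eigenvalue of TRANSFER
```

The counts grow like powers of the golden ratio, so successive ratios of word counts approximate the Perron eigenvalue.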
The Markov generator property furnishes a map $$\pi: \Sigma \longrightarrow \mathbb{T}_K$$ intertwining $\phi$ and the shift operator on $\Sigma$ that sends each string of coordinates to the unique point in $\mathbb{T}_K$ whose orbit has these coordinates: $$\pi(s) = \bigcap_{n\in\mathbb{N}}\overline{\bigcap_{i=-n}^n\phi^{-i}(s_i)}= \bigcap_{i\in\mathbb{Z}} \phi^{-i}(\overline{s_i})$$ \begin{remark}\label{closure} This construction ensures that $\phi^k\pi(s)\in\overline{s(k)}$ for all $k$. It follows that if the coordinate word of $A\in\scr{P}_n$ occurs in $s\in\Sigma$, then $\phi^k\pi(s)\in \overline{A}$ for a suitable $k\in\mathbb{Z}$. \end{remark} The map $\pi$ is continuous, surjective, bounded-to-one, and essentially one-to-one. Moreover, if $X\subseteq \mathbb{T}_K$ is a closed, invariant subset then $\pi$ restricts to a map $$\pi^{-1}(X)\longrightarrow X$$ with the same properties, from which it follows that the entropy of $\phi|_X$ coincides with the entropy of the shift restricted to $\pi^{-1}(X)$. In the case of $X=X_t$, this entropy can be approximated by approximating the \emph{set} $\pi^{-1}(X_t)$ by subshifts obtained by refining the partition $\scr{P}_0$ and omitting some rectangles. The refinements are defined by taking $\scr{P}_n$ to consist of all nonempty intersections of the form \begin{equation}\label{tots} \phi^{n}(A_{-n})\cap \cdots \cap \phi(A_{-1})\cap A_0\cap\phi^{-1}(A_1)\cap\cdots \cap \phi^{-n}(A_n),\ \ \ A_i\in \scr{P}_0, \end{equation} and we say that this particular rectangle has \emph{coordinate word} $A_{-n}\cdots A_0\cdots A_n$. When a representative rectangle in the plane $V_K$ is needed for a member of $\scr{P}_n$, we take the one contained in the original footprint $R_0\cup R_1$. The refinement $\scr{P}_n$ is also a Markov generator, and we have a refined coding $\pi_n:\Sigma_n\longrightarrow \mathbb{T}_K$ by the set of admissible strings in the alphabet $\scr{P}_n$.
Note that $\Sigma_n$ is simply a ``block form'' of $\Sigma$ and there is a canonical bijection $\Sigma\cong\Sigma_n$ compatible with the shift operator and the two codings of $\mathbb{T}_K$. While for general $K$, the partition $\{R_0, R_1\}$ is not a generator, in all cases the connected components of $A_0\cap \phi^{-1}(A_1)$ for $A_i\in \{R_0, R_1\}$ do comprise a Markov generator (see the proof of Theorem 8.4 of \cite{adler}). Thus for any $K$ other than $\mathbb{Q}(\sqrt{5})$ we may let $\scr{P}_0$ denote this generator and then proceed as in the previous paragraph to produce refinements $\scr{P}_n$. In all cases, the diameter of $\scr{P}_n$ tends to zero as $n\to\infty$. The following explicit construction of $\pi$ will be useful below. Here, $\scr{P}$ can be any Markov generator on $\mathbb{T}_K$ arising from a collection of rectangles in the plane $V_K$ with sides parallel to the stable and unstable axes. In particular we suppose we have a chosen representative in the plane for each member of $\scr{P}$, or equivalently a choice of stable and unstable interval of which this member is the product. Let $s\in \Sigma$, the set of all admissible bi-infinite strings in the alphabet $\scr{P}$. First we show how to compute the unstable coordinate of $\pi(s)$. The intersections \begin{eqnarray*} r_0 &=& s_0\\ r_1 &=& s_0\cap \phi^{-1}(s_1)\\ r_2 &=& s_0\cap \phi^{-1}(s_1)\cap \phi^{-2}(s_2) \\ &\vdots& \end{eqnarray*} on the torus can be viewed in the plane as a sequence of rectangles within $s_0$ whose stable interval is constant (and equal to that of $s_0$) and whose unstable interval is shrinking. Up to similarity, the footprint of the unstable interval of $r_{i+1}$ inside that of $r_i$ depends only on the rectangles $s_i$ and $s_{i+1}$ and is independent of $i$. This is because $\phi$ simply scales by the positive number $\varepsilon$ in the unstable direction, preserving similarity.
Given a rectangle in the plane with sides parallel to the stable and unstable axes, let us denote its stable and unstable intervals by $[\alpha_s(A),\beta_s(A)]$ and $[\alpha_u(A), \beta_u(A)]$, and let $\ell_*(A) = \beta_*(A)-\alpha_*(A)$ denote the corresponding lengths. For each pair $A,B\in \scr{P}$ with $AB$ admissible, we define $$\rho_u(A,B) = \frac{\alpha_u(A\cap\phi^{-1}(B))-\alpha_u(A)}{\ell_u(A)}$$ Pictured in the $(s,u)$ plane, this is the height of the bottom of the subrectangle $A\cap \phi^{-1}(B)$ inside $A$, expressed as a fraction of the total height of $A$, and is a measure of the footprint of this subrectangle in $A$ alluded to above. The left endpoint of the unstable interval of $r_i$ is then equal to $$\alpha_u(s_0)+\rho_u(s_0,s_1)\ell_u(s_0)+\rho_u(s_1,s_2)\frac{\ell_u(s_1)}{\varepsilon} + \cdots + \rho_u(s_{i-1},s_i)\frac{\ell_u(s_{i-1})}{\varepsilon^{i-1}},$$ so the unstable coordinate of $\pi(s)$ is given by the series \begin{equation}\label{unstableseries} \alpha_u(s_0)+\rho_u(s_0,s_1)\ell_u(s_0)+\rho_u(s_1,s_2)\frac{\ell_u(s_1)}{\varepsilon} + \rho_u(s_2,s_3)\frac{\ell_u(s_2)}{\varepsilon^2} +\cdots \end{equation} The stable coordinate works the same way if $\overline{\varepsilon}>0$. Some additional care must be taken if $\overline{\varepsilon}<0$, since then $\phi$ is orientation-reversing in the stable direction and the footprints alternate with their mirror images up to similarity instead of being independent of $i$. 
In that case we define coefficients $$\rho_s^{+}(A,B) = \frac{\alpha_s(A\cap\phi(B))-\alpha_s(A)}{\ell_s(A)}$$ and $$\rho_s^-(A,B) = \frac{\beta_s(A)-\beta_s(A\cap\phi(B))}{\ell_s(A)},$$ and the stable coordinate alternates between these: \begin{equation}\label{stableseries} \alpha_s(s_0)+\rho^+_s(s_0,s_{-1})\ell_s(s_0)+\rho_s^-(s_{-1},s_{-2})\frac{\ell_s(s_{-1})}{\varepsilon} + \rho_s^+(s_{-2},s_{-3})\frac{\ell_s(s_{-2})}{\varepsilon^2}+\cdots \end{equation} If $s\in\Sigma$ is periodic, then the image $\pi(s)\in\mathbb{T}_K$ has periodic orbit, and hence is a $\mathbb{Q}$-point. The following lemma furnishes a similar description of some $K$-points. \begin{lemma}\label{periodK} Suppose that $s$ is eventually periodic in both directions. Then $\pi(s)$ is a $K$-point. \end{lemma} \begin{proof} First observe that all members of our partitions $\scr{P}_n$ have coordinates in the field $K$. If $s$ is eventually periodic in both directions, then the series (\ref{unstableseries}) and (\ref{stableseries}) (and its analog in case $\overline{\varepsilon}>0$) decompose into finitely many geometric series with all terms and coefficients expressible in terms of these coordinates, and the result follows. \end{proof} \begin{lemma}\label{newword} If $t'<t$ and $X_t\subsetneq X_{t'}$, then there exists a finite word occurring in $\pi^{-1}(X_{t'})$ that does not occur in $\pi^{-1}(X_t)$. \end{lemma} \begin{proof} Suppose to the contrary that every word appearing in $\pi^{-1}(X_{t'})$ also occurs in $\pi^{-1}(X_t)$. We claim this forces $\pi^{-1}(X_{t'})$ to be contained in the closure of $\pi^{-1}(X_t)$, which is a contradiction since the latter is closed and is a proper subset of the former. Let $s\in \pi^{-1}(X_{t'})$, and for $k\in\mathbb{N}$ let $w_k$ be the word $s(-k)\cdots s(0)\cdots s(k)$. By hypothesis, this word occurs in $\pi^{-1}(X_t)$, and by applying a power of the shift we may assume that it occurs centrally in some element $x_k\in \pi^{-1}(X_t)$.
In particular, $x_k$ and $s$ agree on the index interval $[-k,k]$, and it follows that $x_k\to s$ as $k\to \infty$, so $s$ lies in the closure of $\pi^{-1}(X_t)$. \end{proof} \section{Upper bounds via trapping rectangles}\label{sec:ub} Given a collection of rectangles $\scr{C}\subseteq \bigcup_n \scr{P}_n$, we denote by $\Sigma\langle \scr{C}\rangle$ the subshift of $\Sigma$ that avoids the coordinate words of elements of $\scr{C}$. If $\scr{C}$ is finite, then there is a largest $n$ for which $\scr{P}_n$ contains an element of $\scr{C}$. Now every element of $\scr{C}$ breaks up into rectangles in $\scr{P}_n$, and we let $\scr{C}'\subseteq \scr{P}_n$ denote the collection of rectangles occurring in this fashion. Under the identification $\Sigma\cong\Sigma_n$, the subshift $\Sigma\langle\scr{C}\rangle$ can alternately be described as the collection of $s\in \Sigma_n$ for which $s(k)\notin\scr{C}'$ for all $k\in\mathbb{Z}$. Let $I\subseteq\scr{O}_K$ be a finite set of lattice points and let $$\scr{U}(t,I) = \bigcup_{Q\in I} \{P\in V_K\ |\ \mathbf{N}(P-Q)<t\}$$ and let $$\scr{T}_n(t,I) = \left\{ A \in \scr{P}_n\ \left|\ \overline{A}\subseteq \scr{U}(t,I) \right.\right\}$$ be the collection of rectangles in $\scr{P}_n$ whose closures are trapped within the norm-distance $t$ ``neighborhood'' of some lattice point in $I$. The following lemma says that $\Sigma\langle\scr{T}_n(t,I)\rangle$ contains the coding not only of $X_t$ but of $X_{t-\eta}$ for some $\eta>0$. \begin{lemma}\label{upperbound} There exists $\eta>0$ such that $\pi^{-1}(X_{t-\eta}) \subseteq \Sigma\langle\scr{T}_n(t,I)\rangle$. \end{lemma} \begin{proof} The elements of $\scr{T}_n(t,I)$ have closures contained in $\scr{U}(t,I)$ and thus in $\scr{U}(t-\eta,I)$ for some $\eta>0$ since $I$ is finite. If $s\in \Sigma$ contains the coordinates of $A\in\scr{T}_n(t,I)$, then $\phi^k\pi(s)$ lies in $\overline{A}$ for some $k$, by Remark \ref{closure}. But then $M(\pi(s)) = M(\phi^k\pi(s)) < t-\eta$.
Thus $s\notin \pi_n^{-1}(X_{t-\eta})$. \end{proof} The entropy of $\phi$ on $X_t$ is thus bounded above by the shift entropy of $\Sigma\langle\scr{T}_n(t,I)\rangle$, which is computable by Perron-Frobenius theory. These upper bounds depend on the set $I\subseteq \scr{O}_K$ and improve as $I$ grows. The following proposition and its corollary ensure that it is possible to choose $I$ so that the bounds are tight in the limit as $n\to \infty$. \begin{proposition}\label{uppertight} There exists a finite set $I_K$ such that if $I_K\subseteq I$ and $t'<t$, then for $n$ sufficiently large we have \begin{equation}\label{tighteq} \pi^{-1}(X_t)\subseteq \Sigma\langle \scr{T}_n(t,I)\rangle\subseteq \pi^{-1}(X_{t'}) \end{equation} In particular, for such $I$ we have $$\pi^{-1}(X_t) = \bigcap_{n\geq 0}\Sigma\langle \scr{T}_n(t,I)\rangle$$ \end{proposition} \begin{proof} The second assertion here follows immediately from the first. Let $R_{\mathrm{big}}$ denote the rectangle in $V_K$ given by $|s|,|u|<\sqrt{\varepsilon(M_1(K)+1)}$ and let $I_K$ be the set of all $q\in\scr{O}_K$ such that $R-q$ meets $R_{\mathrm{big}}$ for some $R\in\scr{P}_0$. The set $I_K$ is finite and necessarily contains any $q$ for which there exists some $A\in \scr{P}_n$ such that $A-q$ meets $R_{\mathrm{big}}$. Since the diameter of $\scr{P}_n$ tends to zero, there exists $N\in\mathbb{N}$ such that $n\geq N$ implies that every translate of a member $A\in\scr{P}_n$ that meets the region defined by $\mathbf{N}<t'$ in $R_{\mathrm{big}}$ must have closure entirely contained within the region $\mathbf{N}<t$. The first containment in (\ref{tighteq}) is clear from the preceding lemma, and we prove the second by contrapositive. Suppose that $s\in\Sigma$ is not in $\pi^{-1}(X_{t'})$.
Then with $P=\pi(s)$ we have $M(P) < t'$, so $$M(P)< t'' = \min(t',M_1(K)+1)$$ Thus we may take $Q=(s,u)$ as in Lemma \ref{bmlemma} representing an element of the orbit of $P$ with $\mathbf{N}(Q)<t''$ and $$|s|,|u| < \sqrt{\varepsilon t''} $$ In particular, $Q\in R_{\mathrm{big}}$. For each $n$, the point $Q$ lies in the $\scr{O}_K$-translates of the closures of one or more members of the partition $\scr{P}_n$. Let $A\in \scr{P}_n$ and $q\in \scr{O}_K$ such that $Q\in \overline{A}-q$. Thus $\overline{A}-q$ meets the region defined by $\mathbf{N}<t'$ in $R_{\mathrm{big}}$, which requires that $A-q$ meet this region since $A$ is open, and hence $q\in I_K\subseteq I$. Now if $n\geq N$, it follows that $A\in \scr{T}_n(t,I)$. By Remark \ref{closure}, the $0$th symbolic coordinate of any element of $\pi_n^{-1}(Q)$ must be a member of $\scr{T}_n(t,I)$, which implies that each element of $\pi_n^{-1}(P)$ has some symbolic coordinate in $\scr{T}_n(t,I)$. This is to say that each element of $\pi^{-1}(P)$, including $s$, contains the coordinates of some element of $\scr{T}_n(t,I)$, and thus $s\notin\Sigma\langle \scr{T}_n(t,I)\rangle$. \end{proof} \begin{corollary} If $I_K\subseteq I$, then $$h(\phi|X_t) = \lim_{n\to \infty} h(\sigma|\Sigma\langle \scr{T}_n(t,I)\rangle)$$ \end{corollary} \begin{proof} Let $\mu_n$ be a measure of maximal entropy for $\Sigma\langle\scr{T}_n(t,I)\rangle$. Extended to $\Sigma$, this sequence of measures has some weak-$*$ limit point $\mu$ in the convex, compact space of invariant probability measures on $\Sigma$.
The measure $\mu$ is supported on the intersection $\pi^{-1}(X_t)$, and by upper semi-continuity of entropy in subshifts we have \begin{eqnarray*} h(\sigma|\pi^{-1}(X_t))\geq h_\mu(\sigma) &\geq& \limsup h_{\mu_n}(\sigma) \\ & =& \limsup h(\sigma|\Sigma\langle\scr{T}_n(t,I)\rangle) \geq \liminf h(\sigma|\Sigma\langle\scr{T}_n(t,I)\rangle) \geq h(\sigma|\pi^{-1}(X_t)) \end{eqnarray*} This implies that $\mu$ is a measure of maximal entropy for $\pi^{-1}(X_t)$, as well as the claim. \end{proof} \begin{corollary}\label{leftcontinuity} The function $t\longmapsto \dim(X_t)$ is left-continuous at each point. \end{corollary} \begin{proof} The dimension of a closed, invariant subset $X\subseteq \mathbb{T}_K$ is related to the entropy of $\phi$ on $X$ via $$\dim(X) = \frac{2h(\phi|X)}{\log(\varepsilon)},$$ so it suffices to prove that $t\longmapsto h(\phi|X_t)$ is left-continuous. Since this function is decreasing, left-discontinuity at $t$ would imply there exists $B>0$ such that $$h(\phi|X_{t-\eta})-h(\phi|X_{t})\geq B\ \ \mbox{for all}\ \ \eta>0$$ By the previous corollary we know there exists $n\in \mathbb{N}$ with $$h(\sigma|\Sigma\langle \scr{T}_n(t,I_K)\rangle) - h(\phi|X_{t}) < B$$ Now Lemma \ref{upperbound} ensures that $\Sigma\langle \scr{T}_n(t,I_K)\rangle$ contains $\pi^{-1}(X_{t-\eta})$ for some $\eta>0$, which implies $$h(\sigma|\Sigma\langle \scr{T}_n(t,I_K)\rangle)\geq h(\sigma|\pi^{-1}(X_{t-\eta})) = h(\phi|X_{t-\eta}),$$ contradicting the inequalities above. \end{proof} \section{Applications to the Euclidean Spectrum}\label{sec:apps} The plot of $\dim(X_t)$ contains a number of plateaus as illustrated in the case $K=\mathbb{Q}(\sqrt{5})$ above. Sometimes these are actually set-theoretic plateaus, and the following proposition demonstrates that $\pi^{-1}(X_t)$ is particularly simple in such cases. \begin{proposition}\label{isosft} Suppose that $X_t = X_{t-\eta}$ for some $\eta>0$. Then $\pi^{-1}(X_t)$ is a subshift of finite type.
\end{proposition} \begin{proof} By Proposition \ref{uppertight}, we may choose $n\in\mathbb{N}$ so that $$\pi^{-1}(X_t)\subseteq \Sigma\langle\scr{T}_n(t,I)\rangle\subseteq \pi^{-1}(X_{t-\eta})=\pi^{-1}(X_t)$$ Thus $\pi^{-1}(X_t) = \Sigma\langle\scr{T}_n(t,I)\rangle$, which is expressible directly as an SFT via a 0-1 matrix when viewed in block form in $\Sigma_{m}$ for some $m$ (namely, any $m\geq n-1$). \end{proof} \begin{theorem} $\mathrm{ES}(K)\cap K$ is dense in $\mathrm{ES}(K)$. \end{theorem} \begin{proof} First suppose that $t\in\mathrm{ES}(K)$ is an isolated point. By the previous proposition, $\pi^{-1}(X_t)$ is a subshift of finite type, which is to say that it can be described by a 0-1 transition matrix when viewed in block form $\Sigma_m$ for some $m$. Since $t$ is isolated, we know by Lemma \ref{newword} that $\pi^{-1}(X_t)$ contains a finite word $w$ that does not occur in $\pi^{-1}(X_{>t})$. Let $s=uwv\in\Sigma$ with $M(\pi(s))=t$. Viewed in $\Sigma_m$, there is by the Pigeonhole Principle a repeated block in both $u$ and $v$. We can then truncate $u$ and $v$ and loop the segment between these blocks indefinitely to produce an element $s'\in \pi^{-1}(X_t)$ that contains $w$ and is eventually periodic in both directions. Then $\pi(s')$ is a $K$-point by Lemma \ref{periodK}, and $M(\pi(s'))=t$ since $s'$ contains $w$, so $t\in K$ by Proposition \ref{KptMK}. Now suppose that $t\in\mathrm{ES}(K)$ is not isolated, so there is a strictly monotone sequence $(t_k)$ in $\mathrm{ES}(K)$ with $t_k\to t$. Fixing $k\in\mathbb{N}$, we will show that there is a $K$-point $P$ such that $M(P)$ lies between $t$ and $t_k$, which will finish the density claim. First suppose that $(t_k)$ increases to $t$. Since $t_{k+1}\in\mathrm{ES}(K)$, Lemma \ref{newword} ensures that there exists $s\in\pi^{-1}(X_{t_{k+1}})$ containing a word $w$ that does not occur in $\pi^{-1}(X_t)$.
Now take $n$ large enough so that $$\pi^{-1}(X_{t_{k+1}})\subseteq \Sigma\langle\scr{T}_n(t_{k+1},I)\rangle\subseteq \pi^{-1}(X_{t_k})$$ as in Proposition \ref{uppertight}. Since $s$ belongs to the SFT $\Sigma\langle\scr{T}_n(t_{k+1},I)\rangle$, we can modify it by looping its ends as in the previous paragraph to obtain another element $s'$ of this SFT that also contains $w$. But then we have $t_k\leq M(\pi(s'))<t$, so $P=\pi(s')$ is the desired $K$-point. Now suppose that $(t_k)$ is decreasing. Since $t_{k+1}\in \mathrm{ES}(K)$, Lemma \ref{newword} ensures there is a word $w$ occurring in $\pi^{-1}(X_{t_{k+1}})$ that does not occur in $\pi^{-1}(X_{t_{k}})$. Now take $n$ large enough so that $$\pi^{-1}(X_{t_{k+1}})\subseteq \Sigma\langle\scr{T}_n(t_{k+1},I)\rangle\subseteq \pi^{-1}(X_{t})$$ and proceed as before to produce $s'\in\Sigma\langle\scr{T}_n(t_{k+1},I)\rangle$ that contains $w$ and is eventually periodic in both directions. We have $t\leq M(\pi(s'))<t_k$, and again $P=\pi(s')$ is the desired $K$-point. \end{proof} \begin{corollary} The isolated Euclidean minima all lie in $K$. If $M_1(K)$ is isolated, then $M_1(K)\in\mathbb{Q}$. \end{corollary} \begin{proof} The first statement is immediate from the preceding theorem. If $M_1(K)$ is isolated, then $\pi^{-1}(X_{M_1(K)})$ is a nonempty SFT by Proposition \ref{isosft} and hence contains a periodic point $s$; its image $P=\pi(s)$ is a $\mathbb{Q}$-point with $M(P)=M_1(K)$. But then $M_1(K)=M(P)\in\mathbb{Q}$ is forced. \end{proof}
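The Perron-Frobenius computation behind the upper bounds of Section \ref{sec:ub} can be sketched in a few lines (an illustration under simplifying assumptions, not the paper's actual computation): the shift entropy of a subshift of finite type is the logarithm of the Perron root of its 0-1 transition matrix, and the dimension bound is then $2h/\log\varepsilon$. For $K=\mathbb{Q}(\sqrt5)$ with no rectangles pruned, the transition matrix is that of the golden-mean shift, the Perron root equals $\varepsilon$, and the bound recovers $\dim(\mathbb{T}_K)=2$; removing the rows and columns of trapped rectangles lowers the root and hence the bound.

```python
import math

EPS = (1 + 5 ** 0.5) / 2   # fundamental unit of Q(sqrt 5)

def spectral_radius(T, iters=200):
    """Perron root of a nonnegative transition matrix, by power iteration
    (assumes the matrix is primitive, as transition matrices of mixing SFTs are)."""
    n = len(T)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(T[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

# Full golden-mean shift (nothing pruned): Perron root eps, dimension bound 2.
FULL = [[0, 1], [1, 1]]
dim_full = 2 * math.log(spectral_radius(FULL)) / math.log(EPS)
```

Pruning symbols corresponds to deleting rows and columns of the matrix before computing the root, exactly as for $\Sigma\langle\scr{T}_n(t,I)\rangle$ viewed in block form.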
https://arxiv.org/abs/2206.12345
Some dynamics in real quadratic fields with applications to inhomogeneous minima
Let $K$ be a real quadratic field. We use a symbolic coding of the action of a fundamental unit on the real $2$-torus associated to $K$ to study the family of subsets $X_t$ of norm distance $\geq t$ from the origin. As an application, we prove that the inhomogeneous spectrum of $K$ contains a dense set of elements of $K$, and conclude that all isolated inhomogeneous minima lie in $K$.
https://arxiv.org/abs/2104.07893
On Kippenhahn curves and higher-rank numerical ranges of some matrices
The higher rank numerical ranges of generic matrices are described in terms of the components of their Kippenhahn curves. Cases of tridiagonal (in particular, reciprocal) 2-periodic matrices are treated in more detail.
\section{Introduction} Let ${\bf M}_n$ stand for the algebra of all $n$-by-$n$ matrices with the entries $a_{ij}\in\mathbb C$, $i,j=1,\ldots,n$. We will identify $A\in{\bf M}_n$ with a linear operator acting on $\mathbb C^n$, the latter being equipped with the standard scalar product $\scal{.,.}$ and the associated norm $\norm{x}:=\scal{x,x}^{1/2}$. The {\em numerical range} of $A$ is defined as \eq{nr} W(A)=\{\scal{Ax,x}\colon \norm{x}=1 \}, \en see e.g. \cite[Chapter 1]{HJ2} or the more recent \cite[Chapter~6]{DGSV} for the basic properties of $W(A)$, in particular its convexity and invariance under unitary similarities. In \cite{ChoKriZy06}, this notion was generalized as follows: the {\em rank-$k$ numerical range} of $A$ is \eq{knr} \Lambda_k(A)= \{ \lambda\in\mathbb C\colon PAP=\lambda P \text{ for some rank-}k \text{ orthogonal projection } P\}. \en Of course, \eq{chain} W(A)=\Lambda_1(A)\supseteq\Lambda_2(A)\supseteq \cdots\supseteq \Lambda_n(A). \en For $k>n/2$ the set $\Lambda_k(A)$ is empty or a singleton $\{\lambda_0\}$; in the latter case $\lambda_0$ is an eigenvalue of $A$ having geometric multiplicity at least $2k-n$ \cite[Proposition 2.2]{ChoKriZy06}. In particular, $\Lambda_n(A)\neq\emptyset$ if and only if $A$ is a scalar multiple of the identity, and then all the sets in \eqref{chain} coincide. So, for $k=1$ and $k>n/2$ the sets $\Lambda_k(A)$ are convex. Their convexity for intermediate values of $k$ was established in \cite{Woe08}. Shortly thereafter, in \cite{LiSze08} it was shown that, moreover, \eq{knrint}\Lambda_k(A)=\bigcap_{\theta\in [0,2\pi)} \{ \mu\in\mathbb C\colon \Re (e^{i\theta}\mu)\leq\lambda_k(\theta)\}, \en where $\lambda_k(\theta)$ stands for the $k$-th largest (counting the multiplicities) eigenvalue of the matrix $\Re(e^{i\theta}A)$.
As usual, for any $X\in{\bf M}_n$ \[ \operatorname{Re} X=\frac{X+X^*}{2},\quad \operatorname{Im} X=\frac{X-X^*}{2i}.\] When applied to normal matrices, \eqref{knrint} yields \eq{norm} \Lambda_k(N)=\bigcap\operatorname{conv}\{\lambda_{j_1},\ldots,\lambda_{j_{n-k+1}}\}, \en with the intersection taken over all $(n-k+1)$-tuples from the spectrum $\sigma(N)$ of a normal matrix $N$. This result is also from \cite{LiSze08}, confirming a conjecture from \cite{CHKZ}. Our next observation is that the boundary lines \eq{ell} \ell_{\theta,k}= \{ \mu\in\mathbb C\colon \Re (e^{i\theta}\mu)=\lambda_k(\theta)\} \en of the half-planes in the right hand side of \eqref{knrint}, when taken for all $k=1,\ldots,n$, form a family the envelope of which is the so-called {\em Kippenhahn curve} $C(A)$ of the matrix $A$. It was shown in \cite{Ki} (see also the English translation \cite{Ki08}) that $W(A)=\operatorname{conv} C(A)$. From the discussion above it is clear that, at least in principle, not only $W(A)$ but all the rank-$k$ numerical ranges of $A$ can be described in terms of $C(A)$. Section~\ref{s:gen} is devoted to generic matrices, for which $C(A)$ splits into $\ceil{n/2}$ components, each solely responsible for the respective higher rank numerical range. These results are specified further in Section~\ref{s:tri} for the case of tridiagonal 2-periodic matrices, when explicit formulas for $\lambda_k(\theta)$ are known. Finally, a particular case of reciprocal 2-periodic matrices is treated in Section~\ref{s:rec}. \section{Generic matrices}\label{s:gen} For $n=2$, there are only two sets in the chain \eqref{chain}, both easily identifiable. If $n=3$, the middle term is either a singleton or the empty set (since $2>3/2$). The next proposition allows us to distinguish between the two possibilities. \begin{prop}Let $A\in{\bf M}_3$. Then $\Lambda_2(A)\neq\emptyset$ if and only if $W(A)$ is an elliptical disk, possibly degenerating into a line segment.
\label{th:k2n3} \end{prop} \begin{proof}Directly from the definition it follows that $\Lambda_2(A)$ is a singleton $\{\lambda\}$ if and only if $A$ is unitarily similar to $\begin{bmatrix}\lambda & 0 & x \\ 0 & \lambda & y \\ u & v & z\end{bmatrix}$. Applying another unitary similarity if needed, we may without loss of generality suppose that $u=0$. {\sl Case 1.} $x=0$. Then $A=(\lambda)\oplus B$, where $B=\begin{bmatrix}\lambda & y\\ v & z\end{bmatrix}$, and $W(A)=W(B)$ is either an elliptical disk or a line segment, depending on whether or not $B$ is normal. {\sl Case 2.} $x\neq 0$. Then $A$ is unitarily similar to the tridiagonal matrix $\begin{bmatrix}\lambda & x & 0 \\ 0 & z & v \\ 0 & y & \lambda\end{bmatrix}$ with $(1,2)$- and $(2,1)$-entries having distinct absolute values. According to \cite[Lemma 8]{BS041}, $A$ is unitarily irreducible. On the other hand, its $(1,1)$- and $(3,3)$-entries coincide, which implies the ellipticity of $W(A)$ \cite[Theorem 4.2]{BS04}. \end{proof} Recall that a matrix $A\in{\bf M}_n$ is {\em generic} if $\lambda_1(\theta),\ldots,\lambda_n(\theta)$ are distinct for all $\theta$. Normal matrices are not generic; for $n=2$ the converse is also true. Hence, there is a direct relation with the shape of the numerical range: $A\in{\bf M}_2$ is generic if and only if $W(A)$ is a non-degenerate elliptical disc. Already for $n=3$, things get more subtle. \begin{prop}\label{th:gen3}Let $A\in{\bf M}_3$. Then $A$ is generic if and only if $W(A)$:\\ {\rm (i)} has an ovular shape, or\\ {\rm (ii)} is an ellipse with no eigenvalues of $A$ lying on its boundary. \end{prop} Note that $A$ is unitarily irreducible in case (i) while it may or may not be unitarily reducible (though not normal) in case (ii). \begin{proof} If $A$ is unitarily irreducible, according to \cite[Proposition 3.2]{KRS} it is generic if and only if $W(A)$ has no flat portions on the boundary. These are exactly ovular and elliptical shapes, as per Kippenhahn's classification. 
Moreover, unitary irreducibility of $A$ implies that its eigenvalues are not on the boundary. Normal matrices are not generic, as was mentioned earlier. In the remaining case, $W(A)$ is the convex hull of an ellipse $E$ and a normal eigenvalue $\lambda$ of $A$. The matrix is generic if $\lambda$ lies in the interior of $E$, which falls under (ii), and non-generic otherwise. \end{proof} Comparing Propositions~\ref{th:k2n3} and \ref{th:gen3}, we see that for $A\in{\bf M}_3$ both non-empty and empty $\Lambda_2(A)$ materialize for generic as well as for non-generic matrices. \begin{Ex}\label{ex1} Let $$ M_1= \left[\begin{matrix} 0&-1/2& 0\\ 2&0&-1/2\\ 0&1/2& \sqrt2\\ \end{matrix}\right],~~ M_2= \left[\begin{matrix} 0&1/2& 0\\ 1/2&0&2\\ 0&1& 0\\ \end{matrix}\right].~~ $$ Figure~\ref{fig2} refers to the matrix $M_1$ and Figure~\ref{fig3} refers to the matrix $M_2$. Observe that $W(M_1)$ is ovular, $\Lambda_2(M_1)=\emptyset$, while $W(M_2)$ is elliptical and $\Lambda_2(M_2)=\{0\}$ is the eigenvalue of $M_2$ different from the foci $\pm 3/2$ of $W(M_2)$. \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth,angle=0]{kippenhann2.eps} \caption{Kippenhahn curve of $M_1$} \label{fig2} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth,angle=0]{kippenhann3.eps} \caption{Kippenhahn curve of $M_2$} \label{fig3} \end{figure} \end{Ex} Returning to generic matrices of arbitrary dimension $n$, note that from their definition it immediately follows that \eq{op} \lambda_k(\theta)=-\lambda_{n-k+1}(\theta+\pi),\quad k=1,\ldots,n. \en Since $\lambda_{n-k+1}(\theta)>\lambda_k(\theta)$ for $k>\ceil{n/2}$, the half-planes corresponding to $\theta$ and $\theta+\pi$ in \eqref{knrint} are disjoint. Therefore, the rank-$k$ numerical ranges of generic matrices $A$ are empty for $k>\ceil{n/2}$.
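As a quick sanity check, the relation \eqref{op} can be confirmed numerically; the sketch below is ours and purely illustrative (the helper names are hypothetical).

```python
import numpy as np

def eigs_re(A, theta):
    """Eigenvalues of Re(e^{i*theta} A), sorted in decreasing order."""
    B = np.exp(1j * theta) * np.asarray(A, dtype=complex)
    return np.linalg.eigvalsh((B + B.conj().T) / 2)[::-1]

def check_op_symmetry(A, thetas):
    """Verify lambda_k(theta) = -lambda_{n-k+1}(theta + pi) on a sample."""
    return all(np.allclose(eigs_re(A, t), -eigs_re(A, t + np.pi)[::-1])
               for t in thetas)
```

The check passes for any matrix, generic or not, since $\Re(e^{i(\theta+\pi)}A)=-\Re(e^{i\theta}A)$.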
On the other hand, directly from \eqref{knrint} we see that for generic matrices $A$ the inclusions in \eqref{chain} are proper for $k=1,\ldots,\ceil{n/2}$; moreover, $\Lambda_{k+1}(A)$ lies in the interior of $\Lambda_k(A)$. The structure of $C(A)$ and the related description of $\Lambda_k(A)$ for $k\leq\ceil{n/2}$ are as follows. \begin{thm}\label{th:Agen}For a generic matrix $A\in{\bf M}_n$ its Kippenhahn curve $C(A)$ consists of the closed components \[ \gamma_k(A)=\{\scal{Az_k(\theta),z_k(\theta)}\colon \theta\in [0,2\pi]\}, \quad k=1,\ldots,\ceil{n/2}, \] where $z_k(\theta)$ is a unit eigenvector associated with the eigenvalue $\lambda_k(\theta)$ of $\operatorname{Re}(e^{i\theta}A)$. Respectively, the half-planes in the representation \eqref{knrint} of $\Lambda_k(A)$ are bounded by the family \eqref{ell} of the tangent lines of $\gamma_k(A)$. \end{thm} The first statement is a rewording (in different terms) of \cite[Theorem 13]{JAG98}, based in particular on \eqref{op}; the second immediately follows from the first. For $n$ odd and $k=\ceil{n/2}$, it can be seen from \eqref{knrint} and \eqref{op} that in fact $\Lambda_k(A)$ is the intersection of the tangent lines $\ell_{\theta,k}$ to $\gamma_k(A)$ defined by \eqref{ell}. This yields the following test for distinguishing between $\Lambda_{\ceil{n/2}}$ being a singleton and being empty. \begin{cor}\label{th:midodd}Let $A\in{\bf M}_n$ be generic. If $n$ is odd, then $\Lambda_{\ceil{n/2}}(A)=\gamma_{\ceil{n/2}}(A)$ if $\gamma_{\ceil{n/2}}(A)$ is a point, and $\Lambda_{\ceil{n/2}}(A)=\emptyset$ otherwise. \end{cor} \noindent Both cases are illustrated by Example~\ref{ex1}. Corollary~\ref{th:midodd} implies that for odd $n$ the curve $\gamma_{\ceil{n/2}}(A)$ cannot be convex unless it collapses to a single point. On the other hand, the outermost curve $\gamma_1(A)$ of $C(A)$ for a generic matrix $A$ is always convex, and thus coincides with the boundary $\partial W(A)$ of its numerical range.
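In computational terms, Theorem~\ref{th:Agen} says that $\gamma_k(A)$ can be traced by sweeping $\theta$ and evaluating $\scal{Az_k(\theta),z_k(\theta)}$; by construction $\Re\bigl(e^{i\theta}\scal{Az_k(\theta),z_k(\theta)}\bigr)=\lambda_k(\theta)$, so the line $\ell_{\theta,k}$ indeed passes through the computed point. A minimal Python sketch, ours and not from the original text:

```python
import numpy as np

def gamma_point(A, theta, k):
    """Point <A z, z> of the component gamma_k, where z is a unit
    eigenvector for the k-th largest eigenvalue of Re(e^{i*theta} A)."""
    A = np.asarray(A, dtype=complex)
    B = np.exp(1j * theta) * A
    w, V = np.linalg.eigh((B + B.conj().T) / 2)  # ascending order
    z = V[:, -k]
    return complex(z.conj() @ A @ z)
```

Sweeping $\theta$ over $[0,2\pi]$ and plotting the points for $k=1,\ldots,\ceil{n/2}$ reproduces pictures such as those in Example~\ref{ex1}.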
This means in particular that $\partial W(A)$ does not have corners or flat portions. Other components of $C(A)$ may exhibit cusps and swallowtails but no inflection points. As can be seen from Fig.~\ref{fig2}, cusps (but not swallowtails) materialize already when $n=3$. The emergence of swallowtails will be demonstrated in Section~\ref{s:rec}, see Fig.~\ref{fig4by4}--\ref{fig7N}. Convexity of $\gamma_1(A)$ implies that the subsequent components lie strictly inside it. This, however, does not preclude $\gamma_j(A)$ with $j>1$ from intersecting, as soon as there are at least two of them (i.e., when $n\geq 5$ -- see Fig.~\ref{fig00} in Section~\ref{s:tri} for an example corresponding to $n=5$). Note that this happens in spite of the strict inclusions in \eqref{chain}. \section{Tridiagonal 2-periodic matrices}\label{s:tri} A matrix $A\in{\bf M}_n$ is {\em tridiagonal} if $a_{ij}=0$ whenever $\abs{i-j}>1$. We will make use of the well-known (and easy-to-prove) recursive relation for the determinants $\Delta_n$ of such matrices, \eq{rec} \Delta_n=a_{nn}\Delta_{n-1}-a_{n-1,n}a_{n,n-1}\Delta_{n-2}, \en implying in particular that $\Delta_n$ is invariant under transpositions $a_{i+1,i}\leftrightarrow a_{i,i+1}$ of its off-diagonal pairs. Suppose now that these pairs are {\em unbalanced}, i.e., \eq{abs} \abs{a_{i+1,i}}\neq\abs{a_{i,i+1}} \text{ for } i=1,\ldots,n-1.\en Then the hermitian matrices $\operatorname{Re}(e^{i\theta}A)$ will be {\em proper} tridiagonal, i.e., their entries directly above and below the main diagonal will be non-zero. According to \cite[Corollary 7]{BS041}, the eigenvalues of $\operatorname{Re}(e^{i\theta}A)$ are simple for all $\theta$, thus implying the genericity of $A$. \begin{Ex}\label{5x5} Let $$ M_3= \left[\begin{matrix} 1&1& 0&0&0\\ 1/4&2&1/2&0&0\\ 0&1/4&0&3/4&0\\ 0&0&1/4&-2&1\\ 0&0&0&1/4&-1 \end{matrix}\right]. $$ \end{Ex} This matrix is generic, since \eqref{abs} holds. According to Corollary~\ref{th:midodd}, $\Lambda_3=\emptyset$.
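The recursion \eqref{rec}, and the invariance under transposing off-diagonal pairs that it implies, are easy to check in code. The following sketch is illustrative only (names are ours) and evaluates the determinant this way.

```python
import numpy as np

def tridiag_det(d, sup, sub):
    """Determinant of a tridiagonal matrix with diagonal d, superdiagonal
    sup and subdiagonal sub, via the recursion
    Delta_n = a_nn * Delta_{n-1} - a_{n-1,n} * a_{n,n-1} * Delta_{n-2}."""
    prev2, prev1 = 1.0, d[0]  # Delta_0 = 1, Delta_1 = a_11
    for i in range(1, len(d)):
        prev2, prev1 = prev1, d[i] * prev1 - sup[i - 1] * sub[i - 1] * prev2
    return prev1
```

For the matrix $M_3$ above the recursion agrees with a direct determinant computation, and swapping the super- and subdiagonals leaves the value unchanged, since only the products of the off-diagonal pairs enter.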
\begin{figure}[h] \centering \includegraphics[width=0.5\linewidth,angle=0]{CurDT5b5Z.eps} \caption{Kippenhahn curve of $M_3$. Notice that $\gamma_2$ intersects $\gamma_3$.} \label{fig00} \end{figure} We will say that a tridiagonal matrix $A$ is {\em 2-periodic} if so are the sequences of its diagonal entries and of its (unordered) off-diagonal pairs. For such matrices we will use the notation $a_1,a_2$ for the first two diagonal entries, and $\{b_1,c_1\}, \{b_2,c_2\}$ for the first two (again, unordered) pairs of the off-diagonal entries. Along with $A$, for any $\theta$ the hermitian matrix $\operatorname{Re}(e^{i\theta}A)$ will be 2-periodic as well, with $\alpha_j(\theta):=\operatorname{Re}(e^{i\theta}a_j)$ ($j=1,2$) as the period of its main diagonal. Transposing their off-diagonal pairs as needed, we may arrange for the superdiagonal to also be 2-periodic, with \eq{beta}\beta_j(\theta):=(e^{i\theta}b_j+e^{-i\theta}\overline{c_j})/2, \quad j=1,2\en as the first two entries. According to \eqref{rec}, this rearrangement preserves the characteristic polynomial of $\operatorname{Re}(e^{i\theta}A)$. Therefore, explicit formulas from \cite{Gov94} can be used to compute $\lambda_k(\theta)$ in our setting. The respective straightforward computation shows that \eq{lt}\lambda_{k,n-k+1}=\frac{\alpha_1+\alpha_2}{2}\pm\sqrt{\left(\frac{\alpha_1-\alpha_2}{2}\right)^2+\abs{\beta_1}^2+\abs{\beta_2}^2 +2\abs{\beta_1\beta_2}Q_k}\en for $k=1,\ldots,m:=\floor{n/2}$, while $\lambda_{m+1}=\alpha_1$ if $n$ is odd.
Here $Q_k=\cos\frac{k\pi}{m+1}$ if $n$ is odd, while for even $n$ it is the $k$-th (in decreasing order) root of the $m$-th degree polynomial $q_m$ defined recursively via \eq{qrec} q_0=1,\quad q_1(\mu)=\mu+\abs{\beta_2/\beta_1},\quad q_{k+1}(\mu)=\mu q_k(\mu)-q_{k-1}(\mu)\text{ for } k\geq 1. \en For odd $n$, directly from the formula for $\lambda_{m+1}$ we obtain \begin{prop}\label{th:lastodd}Let $A\in{\bf M}_n$ be tridiagonal and 2-periodic. If $n$ is odd, then $\gamma_{\ceil{n/2}}(A)=\{a_1\}$, the {\rm (1,1)}-entry of $A$. \end{prop} According to Corollary~\ref{th:midodd}, for such matrices $\Lambda_{\ceil{n/2}}(A)=\{a_1\}$. Also, by Proposition~\ref{th:lastodd} a 2-periodic tridiagonal matrix $A\in{\bf M}_5$ cannot have intersecting $\gamma_2$ and $\gamma_3$. For $n=6$, however, this becomes a possibility; see Fig.~\ref{fig6N} in Section~\ref{s:rec}. The parameters $Q_k$ are explicit and constant when $n$ is odd, and implicit (and in general depending on $\theta$) if $n$ is even. This makes consideration of even-sized matrices much harder. However, in the case \eq{AAS} \overline{b_1}c_2=c_1\overline{b_2}\en treated in \cite{AAS}, the ratio $\abs{\beta_2/\beta_1}$ is the same as $\abs{b_2/b_1}$ and thus $\theta$-independent. According to \eqref{qrec}, the $Q_k$ then do not depend on $\theta$ for even $n$ as well. Formulas \eqref{lt}, with some additional nontrivial computations, provide an alternative approach to the complete description of rank-$k$ numerical ranges of 2-periodic tridiagonal matrices satisfying \eqref{AAS}. In agreement with \cite{AAS}, they all happen to be elliptical disks. Condition \eqref{AAS} holds in particular for tridiagonal Toeplitz matrices. If in addition either the super- or the subdiagonal vanishes, then the dependence on $\theta$ disappears in \eqref{lt} altogether. In other words, $\gamma_k$ are then concentric circles, and $\Lambda_k(A)$ the respective circular disks. This covers the result on shift operators from \cite{Gaa}.
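For odd $n$ the formula \eqref{lt} with $Q_k=\cos\frac{k\pi}{m+1}$ is fully explicit and can be tested against a direct eigenvalue computation. A Python sketch follows; it is ours, and the function name is hypothetical.

```python
import numpy as np

def periodic_tridiag_eigs_odd(n, a1, a2, b1, b2):
    """Eigenvalues of a hermitian 2-periodic tridiagonal matrix of odd
    size n (diagonal a1, a2, a1, ..., off-diagonal moduli |b1|, |b2|, ...)
    via the closed formula with Q_k = cos(k*pi/(m+1)), m = n // 2."""
    m = n // 2
    mean, half = (a1 + a2) / 2, (a1 - a2) / 2
    lams = [a1]  # the extra eigenvalue lambda_{m+1} = alpha_1
    for k in range(1, m + 1):
        Q = np.cos(k * np.pi / (m + 1))
        r = np.sqrt(half**2 + abs(b1)**2 + abs(b2)**2 + 2 * abs(b1 * b2) * Q)
        lams += [mean + r, mean - r]
    return np.sort(np.array(lams))
```

Since a diagonal unitary similarity reduces any hermitian tridiagonal matrix to one with non-negative off-diagonal entries, only the moduli $\abs{b_1},\abs{b_2}$ matter here.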
\begin{Ex} To illustrate other possible shapes of Kippenhahn curves for 2-periodic tridiagonal matrices, let $M_4\in{\bf M}_7$ have the zero main diagonal and $b_1=3, b_2=6, c_1=c_2=2$. \end{Ex} \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth,angle=0]{figure7NONREP.eps} \caption{Kippenhahn curve of $M_4$} \label{figure7NR} \end{figure} See the next section for more specific examples. \section{Reciprocal matrices} \label{s:rec} Recall the notion of reciprocal matrices introduced in \cite{BPSV}. These are tridiagonal matrices with constant (without loss of generality, zero) main diagonal and the off-diagonal pairs satisfying $a_{i+1,i}a_{i,i+1}=1$. Reciprocal matrices are of course proper tridiagonal. Denoting $\abs{a_{j+1,j}}^2+\abs{a_{j,j+1}}^2=:2A_j$ we see that $A_j\geq 1$. Condition \eqref{abs} for such matrices takes the form $A_j>1$, $j=1,\ldots,n-1$. A 2-periodic reciprocal matrix $A$ is completely characterized by its size $n$ and the values $a_1:=\abs{a_{12}}, a_2:=\abs{a_{23}}$ (alternatively, by $A_1$ and $A_2$). For $n\geq 4$ (the only interesting setting), $\operatorname{Im} A$ has multiple eigenvalues if $A_1$ or $A_2$ is equal to one, and so conditions $A_1,A_2>1$ are not only sufficient but also necessary for $A$ to be generic. Moreover, for reciprocal matrices \eqref{beta} yields $\abs{\beta_j}=\sqrt{(A_j+\tau)/2}$, where $\tau=\cos(2\theta)$. So, according to \eqref{lt} $\lambda_{k,n-k+1}$ in this case are the two square roots of \eq{zeta} \zeta_k=\frac{1}{2}(A_1+A_2+2\tau)+\sqrt{(A_1+\tau)(A_2+\tau)}Q_k, \quad k=1,\ldots,m. \en Observe that the right-hand side of \eqref{zeta} is invariant under the substitutions $\theta\mapsto -\theta$ and $\theta\mapsto\theta+\pi$. Thus, we arrive at the following \begin{cor}\label{co:sym}Let $A\in{\bf M}_n$ be a 2-periodic reciprocal matrix.
Then each component $\gamma_1,\ldots,\gamma_m$ of its Kippenhahn curve $C(A)$, and consequently its rank-$k$ numerical ranges $\Lambda_k(A)$ for $k=1,\ldots,m$, are symmetric with respect to both the horizontal and vertical coordinate axes. Also, $\gamma_{m+1}=\Lambda_{m+1}=\{0\}$ if $n$ is odd. \end{cor} Furthermore, $\gamma_k$ is an ellipse if and only if $\zeta_k=x\tau+y$ for some constants $y>x>0$. If $A_1=A_2=:A$, this happens to be the case for all $k$, since then \[ \zeta_k= (A+\tau)(1+Q_k),\] with $Q_k$ constant (note that \eqref{AAS} holds in a trivial way). So, the rank-$k$ numerical ranges of such matrices are elliptical disks with the boundaries $\{\gamma_k\}_{k=1}^m$ forming a family of nested ellipses whose axes are coincident with the coordinate axes. On the contrary, when $A_1\neq A_2$ we have \begin{thm}\label{th:onel}Let $A$ be a 2-periodic reciprocal matrix of odd size $n$ and $A_1\neq A_2$. Then none of its rank-$k$ numerical ranges has an elliptical shape if $n\equiv1\pmod 4$. Otherwise, exactly one of them, namely $\Lambda_{(n+1)/4}(A)$, is an elliptical disk. \end{thm} \begin{proof}The first summand in the right-hand side of \eqref{zeta} is of the desired form. The second term, however, is such only if $Q_k=0$. Since $Q_k=\cos\frac{k\pi}{m+1}$ for odd $n$, the result follows. \end{proof} Observe that for generic 4-by-4 matrices $\gamma_1$ and $\gamma_2$ (consequently, $\Lambda_1$ and $\Lambda_2$) are elliptical only simultaneously. Recall also that the numerical range of a reciprocal matrix $A\in{\bf M}_4$ is elliptical if and only if \eq{gold} A_2=\phi A_1-\phi^{-1}A_3 \text{ or } A_2=\phi A_3-\phi^{-1}A_1, \en where $\phi$ is the golden ratio, and at least one of the inequalities $A_j\geq 1$ is strict \cite[Theorem 7]{BPSV}. If $A$ in addition is 2-periodic, i.e. $A_1=A_3$, then \eqref{gold} implies $A_2=A_1$. In other words, none of the rank-$k$ numerical ranges of such an $A$ is elliptical, unless $A_1=A_2$.
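Formula \eqref{zeta} can likewise be verified numerically for odd $n$, where $Q_k=\cos\frac{k\pi}{m+1}$: the squares of the positive eigenvalues of $\operatorname{Re}(e^{i\theta}A)$ should coincide with $\zeta_1,\ldots,\zeta_m$. A sketch, ours and illustrative only:

```python
import numpy as np

def zeta(A1, A2, tau, Q):
    """Right-hand side of the formula for the squared eigenvalues."""
    return 0.5 * (A1 + A2 + 2 * tau) + np.sqrt((A1 + tau) * (A2 + tau)) * Q

def check_reciprocal(n, a1, a2, theta):
    """For a 2-periodic reciprocal matrix with |a_{12}| = a1, |a_{23}| = a2
    and odd size n, compare lambda_k(theta)^2 with zeta_k."""
    m = n // 2
    sup = [(a1 if i % 2 == 0 else a2) for i in range(n - 1)]
    sub = [(1 / a1 if i % 2 == 0 else 1 / a2) for i in range(n - 1)]
    A = np.diag(sup, 1) + np.diag(sub, -1)       # reciprocal: products = 1
    B = np.exp(1j * theta) * A
    lams = np.linalg.eigvalsh((B + B.conj().T) / 2)[::-1]  # decreasing
    A1, A2 = (a1**2 + 1 / a1**2) / 2, (a2**2 + 1 / a2**2) / 2
    tau = np.cos(2 * theta)
    return all(np.isclose(lams[k - 1]**2,
                          zeta(A1, A2, tau, np.cos(k * np.pi / (m + 1))))
               for k in range(1, m + 1))
```

Here the off-diagonal entries are taken real and positive; this loses no generality for the spectrum, which depends only on the moduli.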
We suspect that this is the case for generic 2-periodic reciprocal matrices $A\in{\bf M}_n$ for all even $n>2$, not just $n=4$. Formulas \eqref{zeta} should be instrumental in proving this conjecture; the difficulty lies in the implicit nature of $Q_k$ for even values of $n$. Kippenhahn curves of several reciprocal matrices are pictured below. The matrices are described by the triples $\{n,\abs{a_1},\abs{a_2}\}$, or $\{n,A_1,A_2\}$. In Fig.~\ref{fig6x6}, \ref{fig6N} and \ref{fig7NC}, the dotted curves are the best-fitting ellipses to the components of $C(A)$ which look elliptical but in fact are not. \begin{figure}[p] \centering \includegraphics[width=0.75\linewidth,angle=0]{CurDT4b4Y.eps} \caption{$n=4,a_1=2,a_2=21/20$. The numerical range $\Lambda_1$ is bounded by the exterior component, while $\Lambda_2$ is bounded by the interior component with its swallowtails removed; $\Lambda_3=\emptyset$.} \label{fig4by4} \end{figure} \begin{figure}[p] \centering \includegraphics[width=0.75\linewidth,angle=0]{CurDT4b4YZ.eps} \caption{$n=5,a_1=2,a_2=21/20$. The picture is similar to Fig.~\ref{fig4by4}, except that now $\Lambda_3=\{0\}$.} \label{fig5by5} \end{figure} \begin{figure}[p] \centering \includegraphics[width=0.9\linewidth,angle=0]{twotoep6BN.eps} \caption{$n=6,A_1=1.25, A_2=1.5$. The components of $C(A)$ are nested, with $\gamma_1$ and $\gamma_2$ being convex and so coinciding with the boundaries of $\Lambda_1,\Lambda_2$, respectively. On the other hand, $\Lambda_3$ is bounded by the ``middle portion'' of $\gamma_3$.} \label{fig6x6} \end{figure} \begin{figure}[p] \centering \includegraphics[width=0.9\linewidth,angle=0]{figure6N.eps} \caption{$n=6,A_1=1.05, A_2=1.62$. The components $\gamma_1$ and $\gamma_2$ are still convex. As opposed to Fig.~\ref{fig6x6}, $\gamma_3$ intersects $\gamma_2$. } \label{fig6N} \end{figure} \begin{figure}[p] \centering \includegraphics[width=0.9\linewidth,angle=0]{figure7N.eps} \caption{$n=7,A_1=1.05,A_2=1.62$.
The picture is similar to Fig.~\ref{fig6N}, except that $\gamma_2$ is an exact ellipse, and there emerges $\gamma_4=\{0\}$. } \label{fig7N} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth,angle=0]{figure7NC.eps} \caption{$n=7, A_1=2, A_2=1.5$. The components $\gamma_j$ are convex for $j=1,2,3$ and visually indistinguishable from ellipses, though only the middle one is a genuine ellipse.} \label{fig7NC} \end{figure} \newpage \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
https://arxiv.org/abs/1109.0813
On Universal Tilers
A famous problem in discrete geometry, still open to the best of our knowledge, is to find all monohedral plane tilers. This paper concerns one of its variants: determining all convex polyhedra every cross-section of which tiles the plane. We call such polyhedra universal tilers. We show that a convex polyhedron is a universal tiler only if it is a tetrahedron or a pentahedron.
\section{Introduction} A monohedral tiler is a polygon that can cover the plane by congruent repetitions without gaps or overlaps. The problem of determining all monohedral tilers, also called the problem of tessellation, was brought anew into mathematical prominence by Hilbert when he posed it as one of his ``Mathematische Probleme'', see Kershner~\cite{Ker68}. It is well-known that all triangles and all quadrangles are tilers. Reinhardt~\cite{Rei18} determined all hexagonal tilers, and obtained some special kinds of pentagonal tilers. It was later shown, by using Euler's formula, that no polygon with at least $7$ edges is a tiler; see Dress and Huson~\cite{DH87}. The problem of plane tiling, however, is still open to the best of our knowledge. In fact, $14$ classes of pentagonal tilers have been found; see Hirschhorn and Hunt~\cite{HH85}, Sugimoto and Ogawa~\cite{SO06}, and Wells~\cite{Wel91}. For a whole theory of tessellation patterns, see Gr\"unbaum and Shephard's book~\cite{GS87} as a survey up to 1987. Considering a variant of the problem of plane tiling, Akiyama \cite{Aki07} found all convex polyhedra whose every development tiles the plane. He calls them tile-makers. The main idea in his proof is to investigate the polyhedra whose facets tile the plane by stamping. Notice that facets are special cross-sections. This motivates us to consider a more general class of polyhedron tilers. Let $\p$ be a convex polyhedron, and $\pi$ a plane. Denote by $C(\pi)$ the intersection of $\pi$ and $\p$. We say that $\pi$ intersects $\p$ trivially if $C(\pi)$ is empty, or a point, or a line segment. Otherwise we say $\pi$ intersects $\p$ nontrivially. In this case, $C(\pi)$ is a polygon with at least $3$ edges. We call $C(\pi)$ a cross-section if $\pi$ crosses $\p$ nontrivially. We say that $\p$ is a universal tiler if every cross-section of $\p$ tiles the plane. In this paper, we study the shape of universal tilers.
It is a variant of the problem of plane tiling. It is easy to see that every tetrahedron is a universal tiler since every cross-section of a tetrahedron is either a triangle or a quadrangle. The main goal of this paper is to show that any universal tiler has at most $5$ facets. This paper is organized as follows. In Section~\ref{sec2}, we derive a necessary condition that any hexagonal cross-section (if one exists) of a universal tiler satisfies. It will be used to exclude many polyhedra from the class of universal tilers. In Section~\ref{sec3}, we prove that any facet of a universal tiler is either a triangle or a quadrangle. In Section~\ref{sec4}, by using Euler's formula we obtain that any universal tiler has at most $5$ facets. \section{Hexagonal cross-sections of universal tilers} \label{sec2} Note that no polygonal tiler has more than $6$ edges. It follows that any cross-section of a universal tiler has at most $6$ edges. In particular, any facet of a universal tiler has no more than $6$ edges. In this section, we shall obtain a necessary condition for hexagonal cross-sections of a universal tiler. Let~$\p$ be a polyhedron, and let~$\pi$ be a plane which crosses~$\p$ nontrivially. Let~$l$~be a line belonging to~$\pi$ and let $\e>0$. Denote by~$\pi_+$ (resp.~$\pi_-$) the plane obtained by rotating~$\pi$ around~$l$ by the angle~$\e$ (resp.~$-\e$). For sufficiently small~$\e$, it is clear that either~$\pi_+$ or~$\pi_-$ crosses~$\p$ nontrivially. Of course it is possible that both~$\pi_+$ and~$\pi_-$ cross~$\p$ nontrivially. Write \[ p(\pi;\,l;\,\e)= \begin{cases} \pi_+,&\textrm{if $\pi_+$ crosses $\p$ nontrivially};\\[5pt] \pi_-,&\textrm{otherwise}. \end{cases} \] Then $p(\pi;\,l;\,\e)$ is a plane crossing~$\p$ nontrivially. Intuitively, for small~$\e$, the plane $p(\pi;\,l;\,\e)$ is obtained by rotating the plane~$\pi$ a little along~$l$. To simplify notation, we write \[ C(\pi;\,l;\,\e)=C(\,p(\pi;\,l;\,\e)\,).
\] By the continuity of a polyhedron, we see that the cross-section $C(\pi;\,l;\,\e')$ is nontrivial for any $0<\e'<\e$. Let $C$ be a cross-section of~$\p$. We say that $C$ is proper if none of its vertices is a vertex of~$\p$, that is, any vertex of a proper cross-section lies in the interior of an edge of~$\p$. \begin{lem}\label{lem_proper} If $\p$ has a cross-section with $n$ vertices, then $\p$ has a proper cross-section with at least $n$ vertices. \end{lem} \pf Set up an $xyz$-coordinate system. For any real number~$a$, denote by $\pi^a$ the plane determined by the equation $z\!=\!a$. Suppose that the cross-section $C(\pi^0)$ has~$n$ vertices. Without loss of generality, we can suppose that the half-space~$z>0$ has non-empty intersection with~$\p$. By the continuity of~$\p$, there exists $\delta>0$ such that for any $0<\e<\delta$, the cross-section $C(\pi^\e)$ has at least~$n$ vertices. Consider the $z$-coordinates of all vertices of~$\p$. Let~$\eta$ be the minimum positive $z$-coordinate among them. Then the cross-section $C(\pi^{\eta/2})$ is a proper cross-section with at least~$n$ vertices. This completes the proof. \qed As will be seen, with the aid of the above lemma we may leave improper cross-sections out of our consideration. Let $C(\pi)=V_1V_2\cdots V_n$ be a proper cross-section of~$\p$. It is clear that for any $1\le i\le n$, there is a unique edge of~$\p$ which contains~$V_i$, denoted $e_i$. \begin{lem}\label{lem_angle} Let $C(\pi)=V_1V_2\cdots V_n$ be a proper cross-section with $V_i\in e_i$. Then there exists $\delta>0$ such that for any $0<\e<\delta$, \begin{itemize} \item[(i)] $C(\pi;\,V_1V_3;\,\e)$ is a proper cross-section with exactly $n$ vertices which belong to the edges $e_1$, $e_2$, $\ldots$, $e_n$ respectively; \item[(ii)] if $C(\pi;\,V_1V_3;\,\e)=U_1^\e U_2^\e\cdots U_n^\e$, where $U_i^\e\in e_i$, $U_1^\e=V_1$, and $U_3^\e=V_3$, then \[ \angle V_1U_2^\e V_3\ne\angle V_1V_2V_3.
\] \end{itemize} \end{lem} \pf Since $C(\pi)$ is proper, by continuity, there exists $\delta_1>0$ such that Condition (i) holds for any $0<\e<\delta_1$. Suppose that \[ C(\pi;\,V_1V_3;\,\e)=V_1\,U_2^\e \,V_3\,U_4^\e\, U_5^\e\,\cdots\, U_n^\e, \] where $U_i^\e\in e_i$. Let $T$ be the trace of the point~$U_2^\e$ as~$\e$ varies such that \begin{equation}\label{eq1} \angle V_1U_2^\e V_3=\angle V_1V_2V_3. \end{equation} Then $T$ is a sphere if $\angle V_1V_2V_3=\pi/2$, while $T$ is an ellipsoid otherwise. On the other hand, the point $U_2^\e$ moves along $e_2$ by Condition (i). So~$U_2^\e$~belongs to the intersection of a sphere (or ellipsoid) and a line. Such an intersection contains at most two points, corresponding to at most two values $\e_1$ and $\e_2$ of the parameter~$\e$. Taking $\delta<\min\{\delta_1,\e_1,\e_2\}$, we complete the proof. \qed We need Reinhardt's theorem~\cite{Rei18} on the classification of hexagonal tilers. As is traditional, we use the juxtaposition of two points, say $AB$, to denote both the line segment connecting~$A$ and~$B$ and its length. \begin{thm}[Reinhardt]\label{thm_Reinhardt} Let $V_1V_2\cdots V_6$ be a hexagonal tiler. Then one of the following three properties holds: \begin{itemize} \item[(i)] $V_1+V_2+V_3=2\pi$ and $V_3V_4=V_6V_1$; \item[(ii)] $V_1+V_2+V_4=2\pi$, $V_2V_3=V_4V_5$ and $V_3V_4=V_6V_1$; \item[(iii)] $V_1=V_3=V_5=2\pi/3$, $V_2V_3=V_3V_4$, $V_4V_5=V_5V_6$ and $V_6V_1=V_1V_2$. \end{itemize} \end{thm} Figure 1 illustrates the $3$ classes of hexagonal tilers. See also Bollob\'{a}s~\cite{Bol63} and Gardner~\cite{Gar75} for its proof.
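The three conditions of Theorem~\ref{thm_Reinhardt} are straightforward to test mechanically for a hexagon with a fixed labeling of vertices. The sketch below is ours and purely illustrative; it tests the conditions for the given labeling only, whereas the theorem of course allows relabeling.

```python
import numpy as np

def hexagon_data(P):
    """Interior angles and edge lengths of a convex hexagon whose
    vertices P[0..5] are complex numbers listed counterclockwise."""
    n = len(P)
    edge = [abs(P[(i + 1) % n] - P[i]) for i in range(n)]
    ang = []
    for i in range(n):
        u = P[i - 1] - P[i]        # towards the previous vertex
        v = P[(i + 1) % n] - P[i]  # towards the next vertex
        ang.append(np.arccos((u * v.conjugate()).real / (abs(u) * abs(v))))
    return ang, edge

def reinhardt_conditions(P, tol=1e-9):
    """Conditions (i)-(iii) of Reinhardt's theorem for the labeling
    V1 = P[0], ..., V6 = P[5]; edge[i] is the side P[i]P[i+1]."""
    A, e = hexagon_data(P)
    c1 = bool(abs(A[0] + A[1] + A[2] - 2 * np.pi) < tol
              and abs(e[2] - e[5]) < tol)
    c2 = bool(abs(A[0] + A[1] + A[3] - 2 * np.pi) < tol
              and abs(e[1] - e[3]) < tol and abs(e[2] - e[5]) < tol)
    c3 = bool(all(abs(A[i] - 2 * np.pi / 3) < tol for i in (0, 2, 4))
              and abs(e[1] - e[2]) < tol and abs(e[3] - e[4]) < tol
              and abs(e[5] - e[0]) < tol)
    return c1, c2, c3
```

A regular hexagon satisfies all three conditions simultaneously.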
\begin{tikzpicture} \begin{scope} \draw (0,0) --node[below]{$a$}(1.5,0)node[above left]{$A$}-- ++(60:1.3)node[left=2pt]{$B$}-- ++(140:1)node[below]{$C$}--node[above]{$d$} ++(180:1.5) --++(250:.8) --cycle; \node[below=.5cm,xshift=1cm,text width=3.5cm,text badly ragged] {$A+B+C=2\pi$, $a=d$.}; \end{scope} \begin{scope}[xshift=5cm] \draw (0,0) --node[below]{$a$} (1.5,0)node[above left]{$A$}-- ++(60:1.2)node[left=3pt]{$B$}--node[above right]{$c$} ++(130:1) --node[above right]{$d$} ++(170:1.5)node[below]{$D$}--node[above left]{$e$} ++(220:1) --cycle; \node[below=.5cm,xshift=.85cm,text width=3.5cm,text badly ragged] {$A+B+D=2\pi$, $a=d$, $c=e$.}; \end{scope} \begin{scope}[xshift=10cm] \draw (0,0)coordinate(f) --node[below]{$a$} (1.5,0)coordinate(a)node[above left]{$A$}--node[right]{$b$} ++(60:1.5)coordinate(b) --node[above]{$c$} ++(150:1.7)coordinate(c) node[below]{$C$}--node[above]{$d$} ++(210:1.7)coordinate(d); \coordinate (x) at ($(0,0)!.5!(d)$); \coordinate (e) at ($(x)!{.5/sin(60)}!90:(d)$); \draw (d)--node[left]{$e$}(e)node[above right]{$E$} --node[below]{$f$}(f); \node[below=.5cm,xshift=.8cm,text width=3.5cm,text badly ragged] {$A=C=E={2\over3}\pi$, $a=b$, $c=d$, $e=f$.}; \end{scope} \begin{scope}[yshift=-2cm] \node[xshift=6cm,text centered] {Figure 1. The $3$ classes of hexagonal tilers.}; \end{scope} \end{tikzpicture} Denote by $\mathcal{H}_\p$ the set of proper hexagonal cross-sections of $\p$. \begin{thm}\label{thm_hexcs} Any proper hexagonal cross-section of a universal tiler, if it exists, has a pair of opposite edges of the same length. \end{thm} \pf Let $\p$ be a universal tiler with $\mathcal{H}_\p\ne\emptyset$. For any $C\in\mathcal{H}_\p$, denote by $a(C)$ the number of angles of size $2\pi/3$ in $C$. Let \[ S=\{H\in\mathcal{H}_\p\,\colon\, \mbox{any pair of opposite edges of $H$ has distinct lengths}\}. \] Suppose to the contrary that $S\ne\emptyset$. Let $H=V_1V_2\cdots V_6\in S$ such that \[ a(H)=\min\{a(C)\,\colon\,C\in S\}.
\] Since all pairs of opposite edges of $H$ have distinct lengths, cases (i) and (ii) of Theorem~\ref{thm_Reinhardt} are excluded; hence $H$ falls under case (iii), and we have $a(H)\ge3$. Without loss of generality, we can suppose that \begin{equation}\label{eq2pi/3} \angle V_1V_2V_3={2\pi\over3}. \end{equation} By Lemma~\ref{lem_angle}, there exists $\delta>0$ such that for any $0<\e<\delta$, \[ C(H;\,V_1V_3;\,\e) =U_1^\e\, U_2^\e\, U_3^\e\, U_4^\e\, U_5^\e\, U_6^\e\in\mathcal{H}_\p, \] where $U_1^\e=V_1$, $U_3^\e=V_3$, $U_i^\e\in e_i$ and \begin{equation}\label{ineq4} \angle\,{V_1\,U_2^\e\, V_3}\ne {2\pi\over3}. \end{equation} On the other hand, by continuity, there exists $0<\eta<\delta$ such that for any $i$ mod $6$, \begin{align} \Bigl|\,U_i^\eta\, U_{i+1}^\eta-U_{i+3}^\eta\, U_{i+4}^\eta\,\Bigr| &\ge{1\over2}\,\Bigl|\,V_i\,V_{i+1}-V_{i+3}\,V_{i+4}\,\Bigr|, \label{ineq-edge}\\[5pt] \Bigl|\,\angle U_i^\eta\, U_{i+1}^\eta\, U_{i+2}^\eta -{2\pi\over3}\,\Bigr| &\ge{1\over2}\,\Bigl|\,\angle V_i\,V_{i+1}\,V_{i+2}-{2\pi\over3}\,\Bigr|.\label{ineq-angle} \end{align} Write $H^\eta=C(H;\,V_1V_3;\,\eta)$. Then $H^\eta\in S$ by~\eqref{ineq-edge}. In view of~\eqref{eq2pi/3}, \eqref{ineq4} and~\eqref{ineq-angle}, we deduce that $a(H^\eta)\le a(H)-1$, contradicting the choice of~$H$. This completes the proof. \qed As will be seen, a universal tiler in fact has no hexagonal cross-sections at all, but Theorem~\ref{thm_hexcs} is needed to derive this result. \section{The valence-sets of universal tilers}\label{sec3} In this section, we show that any facet of a universal tiler is either a triangle or a quadrangle. Let $F=V_1V_2\cdots V_n$ be a facet of $\p$. Let $d_i$ be the valence of~$V_i$. We say that the multiset $\{d_1,d_2,\ldots,d_n\}$ is the valence-set of $F$. For example, the valence-set of any facet of a~tetrahedron is $\{3,3,3\}$. \begin{lem}\label{lem_cutQ} Let $\p$ be a universal tiler. Let $\{d_1,d_2,\ldots,d_n\}$ be a valence-set of a facet of~$\p$. Then for any $1\le h\le n$, there is a cross-section of~$\p$ with ${\sum_{i=1}^nd_i}-d_h-2n+4$ edges.
Consequently, we have \begin{equation}\label{2n+2} {\sum_{i=1}^nd_i}-d_h\le 2n+2. \end{equation} \end{lem} \pf Let $F=V_1V_2\cdots V_n$ be a facet of $\p$, where~$V_i$ has valence~$d_i$. It suffices to consider the case $h=1$. We shall prove this by construction. For convenience, we set up an $xyz$-coordinate system as follows. First, choose a point~$U_1$ from the interior of the edge $V_nV_1$. Set~$U_1$ to be the origin. Next, choose~$U_2$ from the interior of $V_1V_2$, and build the $x$-axis by putting $U_2$ on the positive $x$-axis. Then, build the $y$-axis such that $F$ lies on the $xy$-plane and the $y$-coordinate of $V_1$ is negative. Consequently, all the other vertices $V_2,\ldots,V_n$ have positive $y$-coordinates. Since $F$ is a facet, the convex polyhedron $\p$ must lie entirely in one of the two half-spaces determined by the $xy$-plane. Build the $z$-axis such that all points in~$\p$ have nonnegative $z$-coordinates. Now we have an $xyz$-coordinate system. Let $S=\{F'\ |\ \mbox{$F'$ is a facet of $\p$},\ F'\cap F\ne\emptyset,\ F'\ne F\}$ with $|S|=s$. It is easy to see that \begin{equation}\label{s} s=\sum_{i=1}^nd_i-2n. \end{equation} By continuity, there exists $\delta>0$ such that for any $0<\e<\delta$, the cross-section $C(z\!=\!\e)$ has exactly $s$~vertices. Here, as usual, the equation $z\!=\!\e$ represents the plane parallel to the $xy$-plane at distance~$\e$. Write \[ C(z\!=\!\e)=C_1^\e\, C_2^\e\,\cdots\, C_s^\e\,. \] Then for any vertex $C_j^\e$, there is a unique vertex $V_i$ such that $V_i$ and $C_j^\e$ lie in the same edge of~$\p$. Denote this~$V_i$ by~$R_j^\e$. Clearly $R_j^\e$ is independent of~$\e$. So we can omit the superscript~$\e$ and simply write~$R_j$. Without loss of generality, we can suppose that \[ R_1=R_2=\cdots=R_{\,t}=V_1,\quad R_{\,t+1}=V_2,\quad R_s=V_n, \] where \begin{equation}\label{t} t=d_1-2. \end{equation} Let $t+1\le k\le s$, and let $y_k^\e$ be the $y$-coordinate of $C_k^\e$.
Since $V_2,\ldots,V_n$ have positive $y$-coordinates, there exists $0<z_0<\delta$ such that $y_k^\e>0$ for any $0<\e\le z_0$. To simplify notation, we rewrite \[ C(z\!=\!z_0)=C_1C_2\cdots C_s. \] Let $y_k$ be the $y$-coordinate of $C_k$. Set \begin{equation}\label{epsilon0} \e_0={1\over2}\min\Bigl\{\,z_0,\,{ z_0\over y_{t+1}},\,{ z_0\over y_{t+2}},\, \ldots,\,{ z_0\over y_s}\,\Bigr\}. \end{equation} Consider the function $f$ defined by \[ f(V)=\e_0y-z, \] where $V=(x,y,z)$ is a point, and denote by $\pi_0$ the plane determined by the equation $f(V)=0$. We shall show that the cross-section $C(\pi_0)$ has ${\sum_{i=2}^nd_i}-2n+4$ edges. On the one hand, we have $f(R_k)>0$ since the vertex~$R_k$ has positive $y$-coordinate and zero $z$-coordinate. On the other hand, by~\eqref{epsilon0} we have \[ f(C_k)=\e_0y_k-z_0\le{1\over2}\cdot{z_0\over y_k}\cdot y_k-z_0<0. \] Therefore, the points~$R_k$ and~$C_k$ lie on distinct sides of~$\pi_0$. Consequently, the plane $\pi_0$ intersects the line segment~$C_kR_k$. Let $I_k$ be the intersection point. Recall that $U_1$ is the origin and $U_2$ lies on the positive $x$-axis. So these two points belong to the plane~$\pi_0$. Hence \[ C(\pi_0)=U_1U_2I_{t+1}I_{t+2}\cdots I_s. \] By~\eqref{s} and~\eqref{t}, the number of edges of $C(\pi_0)$ is \[ s-t+2=\sum_{i=1}^nd_i-2n-(d_1-2)+2={\sum_{i=2}^nd_i}-2n+4. \] Since any cross-section of a universal tiler has at most $6$ edges, the inequality~\eqref{2n+2} follows immediately. This completes the proof. \qed \begin{lem}\label{lem_33333} The valence-set of any facet of a universal tiler is not $\{3,3,3,3,3\}$. \end{lem} \pf Let $\p$ be a universal tiler. Suppose to the contrary that $\p$ has a pentagonal facet $F$ each of whose vertices has valence~$3$. For convenience, write $F=U_1'U_2U_3U_4U_5$. Pick a point $U_1$ from the interior of the line segment~$U_1'U_2$ such that \begin{equation}\label{ineq2} U_1U_2\ne U_4U_5.
\end{equation} Pick a point $U_6$ from the interior of the line segment~$U_5U_1'$ such that \begin{equation}\label{ineq3} U_2U_3\ne U_5U_6\quad\mbox{and}\quad U_3U_4\ne U_6U_1. \end{equation} The existence of such points~$U_1$ and~$U_6$ is clear. Since the valence of each vertex of~$F$ is~$3$, there exists $\delta>0$ such that for any $0<\e<\delta$, \[ C(F;\,U_6U_1;\,\e) =U_1^\e\, U_2^\e\, U_3^\e\, U_4^\e\, U_5^\e\, U_6^\e\in\mathcal{H}_\p, \] where $U_1^\e=U_1$, $U_6^\e=U_6$, and $U_i^\e$ and $U_i$ lie on the same edge of $\p$ for each $2\le i\le 5$. On the other hand, by continuity, there exists $0<\eta<\delta$ such that for any $i$ mod $6$, \begin{equation}\label{u} \bigl|\,U_i^\eta\, U_{i+1}^\eta -U_{i+3}^\eta\, U_{i+4}^\eta\,\bigr| \ge{1\over 2}\,\bigl|\,U_i\,U_{i+1}-U_{i+3}\,U_{i+4}\,\bigr|. \end{equation} In light of~\eqref{ineq2}, \eqref{ineq3} and~\eqref{u}, we see that the cross-section $C(F;\,U_1U_6;\,\eta)$ has no pair of opposite edges of the same length, contradicting Theorem~\ref{thm_hexcs}. This completes the proof. \qed Using combinatorial arguments similar to those in the above proof, we can determine the shape of a facet of a universal tiler. \begin{thm}\label{thm_facet} Let $\p$ be a universal tiler. Then every facet of $\p$ is either a triangle or a quadrangle. Moreover, the valence-set of any triangular facet (if any) of~$\p$ is either $\{4,3,3\}$ or $\{3,3,3\}$, while the valence-set of any quadrilateral facet (if any) of~$\p$ is $\{3,3,3,3\}$. \end{thm} \pf Let $F_n=V_1V_2\cdots V_n$ be a facet of $\p$. Let $S_n=\{d_1,d_2,\ldots,d_n\}$ be the valence-set of~$F_n$. By Lemma~\ref{lem_cutQ}, we see that for any $1\le h\le n$, \[ 2n+2\ge{\sum_{i=1}^nd_i}-d_h\ge3(n-1). \] Namely $n\le5$. If $n=5$, then~\eqref{2n+2} reads \[ \sum_{i=1}^5d_i-d_h\le 12. \] Since each $d_i\ge3$, we deduce that all $d_i=3$, contradicting Lemma~\ref{lem_33333}. Hence $n\le 4$. If $n=4$, then the valence-set~$S_4$ is either $\{3,3,3,3\}$ or $\{4,3,3,3\}$ by~\eqref{2n+2}.
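Before continuing the case analysis, note that the purely arithmetic part of it, filtering valence multisets through inequality~\eqref{2n+2}, can be reproduced mechanically. The following Python sketch (an illustration only, not part of the proof; the helper name is ours) enumerates the multisets with all $d_i\ge3$ that survive the bound:

```python
from itertools import combinations_with_replacement

def feasible_valence_sets(n, dmax=8):
    """Valence multisets (each d_i >= 3) of an n-gonal facet surviving the
    necessary condition sum(d) - d_h <= 2n + 2 for every h, which is
    equivalent to sum(d) - min(d) <= 2n + 2.  The cap dmax is safe: since
    the other n-1 valences are at least 3, any single valence above
    2n + 2 - 3(n - 2) <= 8 already violates the bound."""
    return [tuple(sorted(ds, reverse=True))
            for ds in combinations_with_replacement(range(3, dmax + 1), n)
            if sum(ds) - min(ds) <= 2 * n + 2]

print(feasible_valence_sets(6))  # [] -- no facet with six or more edges
print(feasible_valence_sets(5))  # [(3, 3, 3, 3, 3)]
print(feasible_valence_sets(4))  # [(3, 3, 3, 3), (4, 3, 3, 3)]
print(sorted(feasible_valence_sets(3)))
```

The surviving multisets agree with the case analysis in this proof; the geometric arguments are still needed to discard those candidates that pass the counting bound but admit a bad hexagonal cross-section.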
Assume that $S_4=\{4,3,3,3\}$, where~$V_1$ has valence~$4$. Pick a point~$A$ from the interior of the line segment~$V_1V_2$ such that $V_1A\ne V_3V_4$, and a point~$B$ from the interior of~$V_2V_3$ such that $AB\ne V_4V_1$. Arguing as in the proof of Lemma~\ref{lem_33333}, we can deduce that there exists~$\eta$ such that the cross-section $C(F_4;\,AB;\,\eta)$ belongs to~$\mathcal{H}_\p$, and it has no pair of opposite edges of the same length, contradicting Theorem~\ref{thm_hexcs}. Hence $S_4=\{3,3,3,3\}$. Consider the case $n=3$. By Lemma~\ref{lem_cutQ}, the valence-set~$S_3$ has five possibilities: \[ \{3,3,3\},\quad\{4,3,3\},\quad\{4,4,3\},\quad\{4,4,4\},\quad\{5,3,3\}. \] If $S_3=\{4,4,3\}$ or $S_3=\{4,4,4\}$, we can suppose that both~$V_1$ and~$V_2$ have valence~$4$. Pick a point~$A$ from the interior of~$V_1V_2$, and~$B$ from~$V_2V_3$ such that $AB\ne V_3V_1$. Again, there exists~$\eta$ such that $C(F_3;\,AB;\,\eta)$ has no pair of opposite edges of the same length, contradicting Theorem~\ref{thm_hexcs}. If $S_3=\{5,3,3\}$, we can suppose that~$V_1$ has valence~$5$. Pick~$A$ from the interior of~$V_1V_2$ such that $V_1A\ne V_3V_1$, and~$B$ from~$V_2V_3$. By similar arguments, we get a contradiction to Theorem~\ref{thm_hexcs}. Hence the valence-set~$S_3$ is either $\{3,3,3\}$ or $\{4,3,3\}$. This completes the proof. \qed \section{The shapes of universal tilers}\label{sec4} In this section, we show that every universal tiler has at most $5$ facets. Let~$\p$~be a universal tiler. Let~$f$ (resp.~$v$,~$e$) be the total number of facets (resp. vertices, edges) of~$\p$. Euler's formula reads \begin{equation}\label{Euler} f+v=e+2. \end{equation} It is well-known that there are two distinct topological types of pentahedra. One is the quadrilateral-based pyramid, which has the parameters \[ (v,e,f)=(5,8,5); \] the other is the pentahedron composed of two triangular bases and three quadrilateral sides, which has \[ (v,e,f)=(6,9,5).
\] Let~$f_i$ be the number of facets of~$i$~edges in~$\p$. Let~$v_i$ be the number of vertices of valence~$i$ in~$\p$. By Theorem~\ref{thm_facet}, we have \begin{equation} f=f_3+f_4\quad\mbox{and}\quad v=v_3+v_4. \end{equation} Here is the main result of this paper. \begin{thm}\label{thm_main} A convex polyhedron is a universal tiler only if it is a tetrahedron or a pentahedron. \end{thm} \pf Let $\p$ be a universal tiler. By Theorem~\ref{thm_facet}, every facet of $\p$ has at most $4$ edges and every vertex of $\p$ has valence at most $4$. First, we deduce some relations by double-counting. Counting the pairs $(e',f')$ where $f'$ is a facet of $\p$ and $e'$ is an edge of $f'$, we find that \begin{equation}\label{f34} 3f_3+4f_4=2e. \end{equation} Counting the pairs $(v',e')$ where $e'$ is an edge of $\p$ and $v'$ is a vertex of $e'$, we obtain \begin{equation}\label{v34} 3v_3+4v_4=2e. \end{equation} Combining the relations from~\eqref{Euler} to~\eqref{v34}, we deduce that \[ (3f_3+4f_4)+(3v_3+4v_4)=4e=4(v+f-2)=4(v_3+v_4+f_3+f_4-2), \] namely \begin{equation}\label{eq_f+v=8} f_3+v_3=8. \end{equation} On the other hand, taking the difference of~\eqref{f34} and~\eqref{v34} yields \begin{equation}\label{Euler_diff} 4(f_4-v_4)=3(v_3-f_3). \end{equation} Now we count the pairs $(v',T)$, where $T$~is a triangular facet of~$\p$ and~$v'$~is a vertex of~$T$ having valence~$4$. By Theorem~\ref{thm_facet}, every triangular facet has at most one vertex of valence~$4$, and every facet containing a vertex of valence~$4$ must be a triangle. Therefore \begin{equation}\label{ineq_f3v4} 4v_4\le f_3. \end{equation} By~\eqref{eq_f+v=8}, \eqref{Euler_diff} and~\eqref{ineq_f3v4}, we deduce that $f_3\le4$. Note that~$f_3$ is an even number by~\eqref{f34}. So $f_3\in\{0,2,4\}$. If $f_3=4$, then $v_3=4$ by~\eqref{eq_f+v=8}, and $f_4=v_4\le1$ by~\eqref{Euler_diff} and~\eqref{ineq_f3v4}. In this case, $\p$~is a tetrahedron if $f_4=0$, and~$\p$ is a quadrilateral-based pyramid if $f_4=1$. 
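Before treating the remaining cases, it is worth noting that the counting relations alone already pin down all candidates. A short Python enumeration (an illustrative sketch, independent of the proof) of the nonnegative solutions of \eqref{eq_f+v=8}, \eqref{Euler_diff} and \eqref{ineq_f3v4} gives:

```python
# Enumerate nonnegative (f3, f4, v3, v4) with
#   f3 + v3 = 8,            # relation (eq_f+v=8)
#   4*(f4 - v4) = 3*(v3 - f3),  # relation (Euler_diff)
#   4*v4 <= f3,             # relation (ineq_f3v4)
# recording e via 2e = 3*f3 + 4*f4.
sols = []
for f3 in range(9):
    v3 = 8 - f3
    for v4 in range(f3 // 4 + 1):
        if (3 * (v3 - f3)) % 4:
            continue                      # f4 would not be an integer
        f4 = v4 + 3 * (v3 - f3) // 4
        if f4 < 0:
            continue
        e = (3 * f3 + 4 * f4) // 2
        sols.append((f3, f4, v3, v4, e))

print(sols)
# [(0, 6, 8, 0, 12), (2, 3, 6, 0, 9), (4, 0, 4, 0, 6), (4, 1, 4, 1, 8)]
```

The four solutions $(f_3,f_4,v_3,v_4,e)$ are the parameter vectors of the cube, the pentahedron with two triangular bases, the tetrahedron, and the quadrilateral-based pyramid; the geometric argument below is still needed to eliminate the cube.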
If $f_3=2$, then $v_3=6$ by~\eqref{eq_f+v=8}, $v_4=0$ by~\eqref{ineq_f3v4}, and consequently $f_4=3$ by~\eqref{Euler_diff}. In this case, $\p$ is a pentahedron composed of two triangular bases and three quadrilateral sides. If $f_3=0$, then $v_3=8$, $v_4=0$, and $f_4=6$. Thus $\p$ is a (combinatorial) cube. We shall show that this is impossible. Denote \[ \p=ABCD\mbox{-}EFGH. \] For convenience, we set up an $xyz$-coordinate system such that the plane $z\!=\!0$ coincides with the plane $ACH$, and the vertex~$D$ has negative $z$-coordinate. Let~$z_B$ (resp.~$z_E$, $z_F$, $z_G$) be the $z$-coordinate of~$B$ (resp.~$E$, $F$, $G$). Since $\p$ is convex, all these $z$-coordinates are positive. Write \[ \delta={1\over2}\min\{z_B,\,z_E,\,z_F,\,z_G\}. \] Then, for any $0<\e<\delta$, the line segment~$AB$ intersects the plane $z\!=\!\e$. Let~$A_1^\e$~be the intersection point. Similarly, let~$A_2^\e$ (resp.~$C_1^\e$, $C_2^\e$, $H_1^\e$, $H_2^\e$) be the intersection of the plane $z\!=\!\e$ and the line segment~$AE$ (resp.~$BC$, $CG$, $GH$, $HE$). So \[ C(z\!=\!\e) =A_2^\e\, A_1^\e\, C_1^\e\, C_2^\e\, H_1^\e\, H_2^\e\in\mathcal{H}_\p.
\]
\begin{center}
\begin{tikzpicture}
\begin{scope}
\path coordinate (O) at (0,0) coordinate (x) at (5.5,0) coordinate (y) at (80:3.7) coordinate (z) at (240:2.5);
\draw coordinate (A) at (240:2) let \p1=($(A)$) in (A)node[below=.5mm]{$A$}coordinate(A) --(3,\y1) node[below=.5mm]{$B$}coordinate (B) --(5,0)node[below right]{$C$}coordinate(C);
\draw (110:1.5)node[left]{$E$}coordinate (E) --(25:3)node[right]{$F$}coordinate (F) --(35:4.5)node[above=.5mm]{$G$}coordinate (G) --(80:2.5)node[above=1mm]{$H$}coordinate (H) --cycle;
\draw[dotted] (O)node[below right]{$D$}--(A) (O)--(C) (O)--(H);
\draw (A)--(E) (B)--(F) (C)--(G);
\coordinate (A1) at (intersection of A--B and x--z);
\coordinate (A2) at (intersection of A--E and y--z);
\node[below of=A1,yshift=0.6cm]{$A_1^\e$};
\node[left of=A2,xshift=0.6cm]{$A_2^\e$};
\coordinate (C1) at (intersection of B--C and x--z);
\coordinate (C2) at (intersection of C--G and x--y);
\node[below of=C1,yshift=0.6cm]{$C_1^\e$};
\node[right of=C2,xshift=-0.5cm]{$C_2^\e$};
\coordinate (H1) at (intersection of G--H and x--y);
\coordinate (H2) at (intersection of H--E and y--z);
\node[above of=H1,yshift=-0.7cm]{$H_1^\e$};
\node[left of=H2,xshift=0.5cm]{$H_2^\e$};
\draw[thick] (A1)--(A2)--(H2)--(H1)--(C2)--(C1)--cycle;
\foreach \point in {A1,A2,C1,C2,H1,H2} \fill[black,opacity=1] (\point) circle (1.5pt);
\end{scope}
\begin{scope}[yshift=-3cm]
\node[xshift=2cm,text centered] {Figure 2. The hexagonal cross-section $C(z\!=\!\e) =A_2^\e\, A_1^\e\, C_1^\e\, C_2^\e\, H_1^\e\, H_2^\e$.};
\end{scope}
\end{tikzpicture}
\end{center}
By continuity, we have \[ A_1^\e\, A_2^\e\,\to0,\quad C_1^\e\, C_2^\e\,\to0,\quad H_1^\e\, H_2^\e\,\to0, \] as $\e\to0$. So there is $0<\eta<\delta$ such that the cross-section $C(z\!=\!\eta)$ has no pair of opposite edges of the same length, contradicting Theorem~\ref{thm_hexcs}. This completes the proof. \qed Recall that any tetrahedron is a universal tiler. We now show that pentahedral universal tilers also exist.
\begin{thm}\label{thm1} Any pentahedron having a pair of parallel facets is a universal tiler. \end{thm} \pf Suppose that $\p$ is a pentahedron with a pair of parallel facets. Note that any cross-section of a pentahedron has at most $5$ edges. It suffices to show that any pentagonal cross-section of $\p$ tiles the plane. Let $C$ be a pentagonal cross-section of $\p$. Since $C$ has five edges, it meets all five facets of~$\p$; in particular, it meets both of the parallel facets, so $C$ has a pair of parallel edges. As pointed out by Reinhardt in~\cite{Rei18}, any pentagon with a pair of parallel edges tiles the plane. This completes the proof. \qed
{ "timestamp": "2011-09-06T02:02:32", "yymm": "1109", "arxiv_id": "1109.0813", "language": "en", "url": "https://arxiv.org/abs/1109.0813", "abstract": "A famous problem in discrete geometry is to find all monohedral plane tilers, which is still open to the best of our knowledge. This paper concerns one of its variants: to determine all convex polyhedra whose every cross-section tiles the plane. We call such polyhedra universal tilers. We obtain that a convex polyhedron is a universal tiler only if it is a tetrahedron or a pentahedron.", "subjects": "Combinatorics (math.CO)", "title": "On Universal Tilers" }
https://arxiv.org/abs/2204.07873
The minimal sum of squares over partitions with a nonnegative rank
Motivated by a question of Defant and Propp (2020) regarding the connection between the degrees of noninvertibility of functions and those of their iterates, we address the combinatorial optimization problem of minimizing the sum of squares over partitions of $n$ with a nonnegative rank. Denoting the sequence of the minima by $(m_n)_{n\in\mathbb{N}}$, we prove that $m_n=\Theta\left(n^{4/3}\right)$. Consequently, we improve by a factor of $2$ the lower bound provided by Defant and Propp for iterates of order two.
\section{Introduction} Recently, \cite{DP} defined the \emph{degree of noninvertibility} of a function $f\colon X\to Y$ between two finite nonempty sets $X$ and $Y$ by $$\deg(f)=\frac{1}{|X|}\sum_{x\in X}\left|f^{-1}(f(x))\right|,$$ as a measure of how far $f$ is from being injective. Interested mainly in endofunctions (also called dynamical systems within the field of dynamical algebraic combinatorics), that is, functions $f\colon X\to X$, they then computed the degrees of noninvertibility of several specific functions and studied, from an extremal point of view, the connection between the degrees of noninvertibility of functions and those of their iterates. They concluded their work with the following question: Let $2\leq k\in\mathbb{N}$. Does the limit \begin{equation}\label{eq;44} \lim_{n\to\infty}\max_{\substack{f\colon X\to X\\ |X|=n}}\frac{\deg\left(f^k\right)}{\deg(f)^{2-1/2^{k-1}}}\frac{1}{n^{1-1/2^{k-1}}}\end{equation} exist? If so, what is its value? They remarked that even answering the question for $k=2$ would be interesting and stated that it follows from their results that, if the limit exists, then it lies in the interval between $3^{-3/2}\approx0.19245$ and $1$. Our attempts to answer their question in the case $k=2$ have led us to a combinatorial optimization problem that seems not to have been addressed before, namely, the problem of finding the minimal sum of squares over partitions with a nonnegative rank. In this work, we address this problem and, consequently, improve the lower bound of the interval $\left[3^{-3/2}, 1\right]$ by a factor of $2$. We begin by stating our main results. The definitions of the terms that we use and the proofs of all the statements are given in Section \ref{43}. \section{Main results} Let $X$ be a set of size $n\in\mathbb{N}$ to be used throughout this work. We denote by $\mathbb{N}_0$ the set of all nonnegative integers. 
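To make the definition of the degree of noninvertibility concrete, here is a minimal Python sketch (ours, purely illustrative; endofunctions are represented as dictionaries mapping each element of $X$ to its image):

```python
from collections import Counter

def deg(f):
    """Degree of noninvertibility of an endofunction f given as a dict:
    (1/|X|) * sum over x in X of |f^{-1}(f(x))|."""
    fiber = Counter(f.values())            # fiber[y] = |f^{-1}(y)|
    return sum(fiber[f[x]] for x in f) / len(f)

X = range(1, 6)
constant = {x: 1 for x in X}               # constant function on 5 elements
identity = {x: x for x in X}               # injective function
print(deg(constant), deg(identity))        # 5.0 1.0
```

As the example shows, a constant function on an $n$-element set attains the maximal degree $n$, while injective functions have degree $1$.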
Taking $k=2$ in (\ref{eq;44}), we wish to lower bound \begin{equation}\label{eq;001} \max_{f\colon X\to X}\frac{\deg\left(f^2\right)}{\deg(f)^{3/2}}\frac{1}{n^{1/2}}, \end{equation} where $f^2$ stands for the composition $f\circ f$. Our approach is based on the fact that the functions with the largest possible degree of noninvertibility, namely $n$, are the constant functions (cf.\ \cite[p.\ 2]{DP}). Thus, we wish to solve the following combinatorial optimization problem: \begin{align} \text{minimize } & \deg(f)\label{eq;6522}\\ \text{where } & f\colon X\to X \textnormal{ is such that } f^2 \textnormal{ is constant}.\nonumber \end{align} The notion of the degree of noninvertibility of a function $f\colon X\to X$ is directly related to the sum of squares over a certain partition of $n$ via the observation that $$ \deg(f)=\frac{1}{n}\sum_{x\in X}\left|f^{-1}(x)\right|^2 $$ (cf.\ \cite[p.\ 2]{DP}). Indeed, if $X=\{1,\ldots,n\}$ then $\left|f^{-1}(1)\right|,\ldots,\left|f^{-1}(n)\right|$ yield, upon reordering and omitting zeros, a partition of $n$ that we denote by $\textnormal{Partition}(f)$. Conversely, it is clear that every partition $\lambda$ of $n$ induces a function $f\colon X\to X$ such that $\textnormal{Partition}(f)=\lambda$. It turns out (cf.\ Lemma \ref{lem;732}) that if $f\colon X\to X$ is such that $f^2$ is constant, then $\textnormal{Partition}(f)$ has a nonnegative rank (cf.\ Definition \ref{d;01}). Denoting the set of all partitions of $n$ by $\mathcal{P}(n)$ and the Euclidean norm of a vector $x$ by $||x||_2$, we may rewrite problem (\ref{eq;6522}) equivalently as \begin{align} \text{minimize } & ||\lambda||_2^2 \label{eq;364}\\ \text{where } & \lambda\in\mathcal{P}(n) \textnormal{ is such that } \textnormal{rank}(\lambda)\geq 0. \nonumber \end{align} \begin{remark} Notice that, in general, a minimizer of (\ref{eq;364}) is not unique. For example, both $(5,3, 3, 3, 3)$ and $(6,3, 2, 2, 2, 2)$ minimize (\ref{eq;364}) for $n=17$.
\end{remark} Our first main result is the observation that, for $n\neq 2$, the partitions of $n$ that minimize (\ref{eq;364}) must have a certain structure, namely, their largest part $\lambda_1$ is equal to their number of parts (i.e., their rank is $0$) and $n-\lambda_1$ is divided as evenly as possible among the remaining $\lambda_1 -1$ parts (see Figure \ref{fig:M1} for a visualization): \begin{theorem}\label{thm;2} For $n\geq 2$, problem (\ref{eq;364}) is equivalent to the following problem: \begin{align} \textnormal{minimize } & x^{2}+r(a+1)^{2}+(x-1-r)a^{2}\label{eq;65}\\ \textnormal{such that } & x\in\{2,\ldots, n\}, \nonumber\\& n-x = a(x-1) + r \textnormal{ where } a\in\mathbb{N}_0 \textnormal{ and } 0\leq r<x-1\textnormal{ and}\nonumber\\ &x\geq \begin{cases}a,&r = 0;\\a+1,&\textnormal{otherwise}.\end{cases}\nonumber \end{align} \end{theorem} \begin{figure} \centering \ydiagram{17, 6, 6, 6, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5} \caption{The Young diagram of one of the two minimizers of (\ref{eq;364}), for $n=100$.}\label{fig:M1} \end{figure} Let $m_n$ denote the minimum of (\ref{eq;65}). Then $$(m_n)_{n\in\mathbb{N}}=1, 4, 5, 8, 11, 14, 17, 22, 25,\;\ldots$$ (the sequence is registered as \seqnum{A353044} in the OEIS; see Table \ref{table:22} for its first $210$ values). Lemmas \ref{lem;660} and \ref{lem;661}, respectively, show that $(m_n)_{n\in\mathbb{N}}$ is strictly increasing and that its elements have alternating parity.
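The initial values above, and the equivalence asserted by Theorem \ref{thm;2}, can be verified by brute force for small $n$. The following Python sketch (ours, for illustration) computes $m_n$ both directly, over partitions of nonnegative rank, and via the objective in (\ref{eq;65}):

```python
def partitions(n, max_part=None):
    """Generate all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def m_direct(n):
    # minimum sum of squares over partitions of n with nonnegative rank
    return min(sum(p * p for p in lam)
               for lam in partitions(n) if lam[0] >= len(lam))

def m_formula(n):
    # minimum of the objective in (65) over feasible x
    best = None
    for x in range(2, n + 1):
        a, r = divmod(n - x, x - 1)
        if x < (a if r == 0 else a + 1):
            continue
        val = x * x + r * (a + 1) ** 2 + (x - 1 - r) * a ** 2
        best = val if best is None else min(best, val)
    return best

print([m_direct(n) for n in range(1, 10)])  # [1, 4, 5, 8, 11, 14, 17, 22, 25]
assert all(m_direct(n) == m_formula(n) for n in range(2, 41))
```

The two computations agree for all $2\le n\le 40$, in accordance with Theorem \ref{thm;2}, and the direct minimum reproduces the initial values of \seqnum{A353044} listed above.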
While we do not have an exact formula for $(m_n)_{n\in\mathbb{N}}$, we obtain lower and upper bounds by applying continuous relaxation: \begin{theorem}\label{th;211} We have $m_n=\Theta(n^{4/3})$. More precisely, $$\frac{n^{4/3}}{4}\leq m_n\leq(2^{-2/3}+2^{1/3})n^{4/3},$$ for $n\geq 28$. \end{theorem} Theorem \ref{th;211} allows us to improve by a factor of $2$ the lower bound given by \cite[p.\ 17]{DP}: \begin{corollary}\label{cor;222} Taking $k=2$, the limit in (\ref{eq;44}), if it exists, is lower bounded by $2\cdot3^{-3/2}$. \end{corollary} We proceed by showing that, if $t_n$ is a minimizer of (\ref{eq;65}), then it must lie in a certain interval: \begin{theorem}\label{T;6} Suppose that $n\geq 6$. There exists a function $u_n\colon(1,\infty)\to\mathbb{R}$ such that if $x_0^{(n)}$ is the point at which $u_n$ attains its global minimum and $x_1^{(n)} < x_2^{(n)}$ are the two real roots in $(1,\infty)$ of the cubic polynomial $$ x^{3}+\left(-2n-u_n\left(\left\lfloor x_0^{(n)}\right\rfloor\right)\right)x+n^{2}+u_n\left(\left\lfloor x_0^{(n)}\right\rfloor\right),$$ then $t_n\in \left\{\left\lceil x_1^{(n)} \right\rceil, \ldots, \left\lfloor x_2^{(n)} \right\rfloor\right\}$, for every $t_n$ that minimizes (\ref{eq;65}). \end{theorem} \begin{example} In the notation of Theorem \ref{T;6}, we have $\left\lceil x_1^{(1000)}\right\rceil = 78, \left\lfloor x_2^{(1000)}\right\rfloor = 82$ and $t_{1000} = 78$ is the unique minimizer of (\ref{eq;65}). See Figure \ref{fig;eg} for a visualization of Theorem \ref{T;6}. \end{example} \begin{figure}[H] \includegraphics[width=10cm, height=8cm]{cap_2.PNG} \centering \caption{The bounds on $t_n$ of Theorem \ref{T;6}, together with the smallest $t_n$ that minimizes (\ref{eq;65}), for $6\leq n\leq 5000$.} \label{fig;eg} \end{figure} Finally, we establish the asymptotic behaviour of the minimizers of (\ref{eq;65}): \begin{theorem}\label{thm;456} Let $(t_n)_{n\in\mathbb{N}}$ be any sequence such that $t_n$ minimizes (\ref{eq;65}).
Then $t_n=\Theta\left(n^{2/3}\right)$. \end{theorem} \section{Definitions and proofs}\label{43} We begin with the definition of the rank of a partition, a notion that goes back to \cite{D} (see also \seqnum{A064174} in the OEIS). The reader is referred to \cite{A} for the general theory of partitions. \begin{definition}\label{d;01} Let $\lambda\in\mathcal{P}(n)$. The \emph{rank of $\lambda$}, denoted by $\textnormal{rank}(\lambda)$, is defined as $\lambda$'s largest part minus its number of parts. \end{definition} Functions $f\colon X\to X$ such that $f^2$ is constant are characterized by induced partitions of $n$ with a nonnegative rank: \begin{lemma}\label{lem;732} Let $f\colon X\to X$ be a function such that $f^2$ is constant. Then $\textnormal{rank}(\textnormal{Partition}(f))\geq 0$. Conversely, for every $\lambda\in\mathcal{P}(n)$ with $\textnormal{rank}(\lambda)\geq 0$, there is a function $f\colon X\to X$ such that $f^2$ is constant and $\textnormal{Partition}(f)=\lambda$. \end{lemma} \begin{proof} Assume that $f^2$ is constant. Then there is $y\in X$ such that $f(f(x))=y$ for every $x\in X$. Thus, $f(x) \in f^{-1}(y)$ for every $x\in X$. Notice that the number of parts of $\textnormal{Partition}(f)$ is equal to $\left|\text{Im}(f)\right|$. Now, $$|\text{Im}(f)|\leq \left|f^{-1}(y)\right|\leq \max_{x\in X}\left\{\left|f^{-1}(x)\right|\right\}.$$ It follows that $\textnormal{rank}(\textnormal{Partition}(f))\geq 0$. Conversely, suppose $X=\{1,\ldots,n\}$ and let $\lambda=(\lambda_1,\ldots,\lambda_r)\in\mathcal{P}(n)$ such that $\textnormal{rank}(\lambda)\geq 0$. We define $f\colon X\to X$ as follows: For $1\leq i\leq n$ let $$f(i) = \begin{cases} 1, & 1\leq i\leq \lambda_1; \\ k, & \sum_{j=1}^{k-1} \lambda_j <i\leq \sum_{j=1}^k\lambda_j \textnormal{ where } 2\leq k\leq r.\end{cases}$$ Since $\lambda_1\geq r$, we have that $2,\ldots, r\in f^{-1}(1)$. It follows that $f(f(i))=1$ for every $i\in X$, i.e., $f^2$ is constant. 
Furthermore, $\textnormal{Partition}(f)=\lambda$. \end{proof} The proof of Theorem \ref{thm;2} relies on the following two lemmas. For the first, we shall need the notion of a balanced partition. We could not find any mention of this notion other than in \seqnum{A047993} in the OEIS. \begin{definition} A partition whose rank is zero is called a \emph{balanced} partition. \end{definition} \begin{lemma}\label{lem;15} If $n\neq 2$, then the minimum of (\ref{eq;65}) is obtained at a balanced partition. \end{lemma} \begin{proof} Consider first the cases $n=1,3,4$: If $n=1$, then there is only one partition $(1)$ which is balanced. If $n=3$, then there are only two partitions with a nonnegative rank, namely $(3)$ and $(2,1)$, of which the latter, that is balanced, has the smallest sum of squares. Similarly, if $n=4$, then there are only three partitions with a nonnegative rank, namely $(4), (3,1)$ and $(2,2)$, of which the latter, that is balanced, has the smallest sum of squares. Assume now that $n\geq 5$ and let $\lambda=(\lambda_1,\ldots,\lambda_r)\in\mathcal{P}(n)$ such that $\textnormal{rank}(\lambda)>0$. We shall construct a partition $\lambda'\in\mathcal{P}(n)$ such that $||\lambda||_2^2> ||\lambda'||_2^2$ and $0 \leq \textnormal{rank}(\lambda')<\textnormal{rank}(\lambda)$. To this end, let $k=\max\{1\leq i\leq r\;|\; \lambda_i>1\}$. We distinguish between two cases: \begin{enumerate} \item $k>1$. We have \begin{align} ||\lambda||_2^2&=\sum_{1\leq i\leq r, i\neq k}\lambda_{i}^{2}+(\lambda_k-1+1)^2\nonumber\\&=\sum_{1\leq i\leq r, i\neq k}\lambda_{i}^{2}+(\lambda_k-1)^2+\overbrace{2\lambda_k-1}^{>1}\nonumber\\&>\sum_{1\leq i\leq r, i\neq k}\lambda_{i}^{2}+(\lambda_k-1)^2+1\nonumber\\&= ||\lambda'||_2^2,\nonumber \end{align} where $\lambda'=(\lambda_1,\ldots,\lambda_{k-1},\lambda_k-1,\lambda_{k+1},\ldots,\lambda_r,1)$. Since $\lambda_1\geq r+1$, we have $\textnormal{rank}(\lambda')\geq 0$. Furthermore, $\textnormal{rank}(\lambda')<\textnormal{rank}(\lambda)$. 
\item $k=1$. In this case, $\lambda = (n-r+1,\overbrace{1,\ldots,1}^{r-1 \textnormal{ times }})$. First, assume that $r\geq 3$. It is easy to see that $$(n-r+1)^{2}+r-1>(n-r)^{2}+4+r-2\iff n-r> 1.$$ Now, by assumption, $n-r+1>r$. Thus, $n-r>r-1\geq 2$ and we take $\lambda'=(n-r,2,\overbrace{1,\ldots,1}^{r-2 \textnormal{ times }})$. Assume now that $r=2$. Then $\lambda=(n-1,1)$ and it is easy to see that $$(n-1)^{2}+1>(n-2)^{2}+2\iff n\geq3.$$ Thus, we take $\lambda'=(n-2,1,1)$. Finally, assume that $r=1$. Then $\lambda=(n)$ and we have $$n^2 > (n-1)^2 +1 \iff n\geq 2.$$ Then we take $\lambda'=(n-1,1)$. In each of these cases, $||\lambda||_2^2> ||\lambda'||_2^2$ and $0\leq \textnormal{rank}(\lambda')<\textnormal{rank}(\lambda)$. \end{enumerate} \end{proof} \begin{lemma}\label{lem;16} Let $\lambda=(\lambda_1,\ldots,\lambda_r)\in\mathcal{P}(n)$ such that $\lambda_j > \lambda_k + 1$ for some $1\leq j<k\leq r$. Let $\lambda'\in\mathcal{P}(n)$ correspond to the parts $\lambda_1,\ldots,\lambda_j-1,\ldots,\lambda_k+1,\ldots,\lambda_r$. Then $||\lambda||_2^2>||\lambda'||_2^2$. \end{lemma} \begin{proof} It suffices to prove that $$\lambda_j^{2}+\lambda_k^{2}>(\lambda_{j}-1)^{2}+(\lambda_{k}+1)^{2},$$ which is easily seen to be equivalent to $\lambda_j>\lambda_k+1$. \end{proof} \paragraph{Proof of Theorem \ref{thm;2}} The assertion follows immediately from the combination of Lemma \ref{lem;15} together with Lemma \ref{lem;16}. \qed In our work we shall make extensive use of two functions $l_n,u_n\colon \mathbb{R}\setminus\{1\}\to\mathbb{R}$, given by \begin{align} l_n(x) &=x^2+\frac{(n-x)^2}{x-1} \textnormal{ and}\nonumber\\ u_n(x)&=x^2+\frac{(n-x)^2}{x-1}+\frac{x-1}{4}. \nonumber \end{align} The bounds in the following lemma are visualized in Figure \ref{fig;e} for $n=100$. \begin{lemma}\label{lem;52} Let $2\leq x,n\in\mathbb{N}$. 
Then \begin{equation}\label{eq;dfa} l_n(x)\leq x^{2}+r(a+1)^{2}+(x-1-r)a^{2} \leq u_n(x), \end{equation} where $a\in\mathbb{N}_0$ and $0\leq r<x-1$ are such that $n-x = a(x-1) + r$. \end{lemma} \begin{proof} Let $m\in\mathbb{N}_0, q\in\mathbb{N}$ and write $m = bq + s$ where $b\in\mathbb{N}_0$ and $0\leq s<q$. It is immediately verified that $$s(b+1)^{2}+(q-s)b^{2}-\frac{m^2}{q}=\frac{s(q-s)}{q}.$$ Clearly, $\frac{s(q-s)}{q}\geq 0$. On the other hand, $\frac{s(q-s)}{q}\leq \frac{q}{4}$ due to the well known fact that the maximal product of two integers whose sum is $q$ is $\left\lfloor \frac{q^2}{4}\right\rfloor\leq \frac{q^2}{4}$ (cf.\ \seqnum{A002620} in the OEIS). It follows that \begin{equation}\label{eq;dfb}\frac{m^2}{q}\leq s(b+1)^{2}+ (q-s)b^{2}\leq \frac{m^2}{q}+\frac{q}{4}.\end{equation} The two inequalities in (\ref{eq;dfa}) are now proved by setting $m=n-x, q=x-1$ and adding $x^2$ to (\ref{eq;dfb}). \end{proof} \begin{figure} \includegraphics[width=10cm, height=8cm]{cap_1.PNG} \centering \caption{The lower and upper bounds of Lemma \ref{lem;52}, visualized for $n=100$.} \label{fig;e} \end{figure} We may now prove Theorem \ref{th;211}. \paragraph{Proof of Theorem \ref{th;211}} The function $u_n(x)$ is continuous in $(1,\infty)$ and $$\lim_{x\to 1^+}u_n(x) = \lim_{x\to \infty}u_n(x)=\infty.$$ Furthermore, $$u'_n(x)=\frac{8x^{3}-11x^{2}-2x+8n-4n^{2}+1}{4(x-1)^{2}}.$$ Since the discriminant of the numerator of $u'_n(x)$ is negative for $n\geq 3$, the equation $u'_n(x)=0$ has a unique real solution $x_0^{(n)}$, given by $x_0^{(n)}=\frac{11+C_n+169/C_n}{24}$, where $$C_n=\sqrt[3]{3456n^{2}-6912n+1259-\sqrt{(3456n^{2}-6912n+1259)^{2}-169^3}}.$$ It follows that, restricted to $(1,\infty)$, the function $u_n(x)$ obtains its global minimum at $x_0^{(n)}$. 
Now, for every $0<y<z\in\mathbb{R}$, we have $$\frac{y^{2}}{2z}\leq z-\sqrt{z^{2}-y^{2}}\leq\frac{y^{2}}{z}.$$ Thus, $C_n\leq 1$ for $n\geq 28$ and therefore \begin{align} x_0^{(n)}&=\frac{11+C_n+169/C_n}{24}\nonumber\\&\leq\frac{1}{2}+\frac{169}{24} \sqrt[3]{\frac{2(3456n^{2}-6912n+1259)}{169^3}}\nonumber\\&\leq\frac{1}{2}+2^{-1/3}n^{2/3}\nonumber \end{align} (notice, for later use, that $\lim_{n\to\infty}\frac{x_0^{(n)}}{n^{2/3}}=2^{-1/3}$). Since $u_n(x)$ is increasing in $\left[x_0^{(n)}, \infty\right)$, we have \begin{align} u_n\left(\left\lceil x_0^{(n)} \right\rceil\right)&\leq u_n\left(x_0^{(n)}+1\right)\nonumber\\&\leq u_n\left(\frac{3}{2}+2^{-1/3}n^{2/3}\right)\nonumber\\ &=\left(\frac{3}{2}+2^{-1/3}n^{2/3}\right)^2+\frac{\left(n-\left(\frac{3}{2}+2^{-1/3}n^{2/3}\right)\right)^2}{\frac{1}{2}+2^{-1/3}n^{2/3}}+\frac{\frac{1}{2}+2^{-1/3}n^{2/3}}{4}\nonumber\\&\leq \left(2^{-2/3}+2^{1/3}\right)n^{4/3},\nonumber \end{align} where the last inequality holds for $n\geq 5$. Now, it follows from Lemma \ref{lem;52} that $m_n\leq u_n\left(\left\lceil x_0^{(n)} \right\rceil\right)$, which concludes the proof of the upper bound. To prove the lower bound, we notice that, for $n\geq 3$ and restricted to $(1,\infty)$, the function $l_n(x)$ obtains its global minimum at $y_0^{(n)}$, given by $y_0^{(n)}=\frac{D_n+1+1/D_n}{2}$, where $$D_n=\sqrt[3]{(2n^{2}-4n+1)-\sqrt{(2n^{2}-4n+1)^{2}-1}}.$$ We have \begin{align} l_n\left(y^{(n)}_0\right)&=\left(y^{(n)}_0\right)^{2}+\frac{\left(y_0^{(n)}-n\right)^{2}}{y_0^{(n)}-1}\nonumber\\&\geq\frac{1}{4D_n^{2}}\nonumber\\&\geq\frac{\sqrt[3]{(2n^{2}-4n+1)^{2}}}{4}\nonumber\\&\geq\frac{n^{4/3}}{4},\nonumber \end{align} where the last inequality holds for $n\geq 4$. By Lemma \ref{lem;52}, $m_n\geq l_n\left(y_0^{(n)}\right)$. \qed \paragraph{Proof of Corollary \ref{cor;222}} By Theorem \ref{th;211}, if $n\geq 28$, then $m_n \leq(2^{-2/3}+2^{1/3})n^{4/3}$. Let $f\colon X\to X$ be a function such that $||\textnormal{Partition}(f)||^2_2=m_n$. 
Notice that $\deg(f)=\frac{m_n}{n}$. It follows that, if the limit in (\ref{eq;44}) exists for $k=2$, then \begin{align} \lim_{n\to\infty}\max_{f\colon X\to X}\frac{\deg(f^2)}{\deg(f)^{3/2}}\frac{1}{n^{1/2}}&\geq\lim_{n\to\infty} \frac{n}{\left(\left(2^{-2/3}+2^{1/3}\right)n^{1/3}\right)^{3/2}}\frac{1}{n^{1/2}}\nonumber\\&=2\cdot3^{-3/2}.\nonumber\end{align} \qed \paragraph{Proof of Theorem \ref{T;6}} Let $x_0^{(n)}$ and $y_0^{(n)}$ be the points, calculated in the proof of Theorem \ref{th;211}, at which $u_n$ and $l_n$, respectively, obtain their global minimum. We wish to solve the equation \begin{equation}\label{eq;981} l_n(x) = u_n\left(\left\lfloor x_0^{(n)}\right\rfloor\right) \end{equation} (see Figure \ref{fig;egb} for a visualization for $n=100$). The function $l_n(x)$ is continuous in $(1,\infty)$ and $$\lim_{x\to 1^+}l_n(x) = \lim_{x\to \infty}l_n(x)=\infty.$$ Since $l_n\left(y_0^{(n)}\right)< u_n\left(x_0^{(n)}\right)\leq u_n\left(\left\lfloor x_0^{(n)}\right\rfloor\right)$, by the intermediate value theorem, equation (\ref{eq;981}) has at least two real solutions in $(1,\infty)$. Similarly, $l_n(x)$ is continuous in $(-\infty, 1)$ and $$\lim_{x\to 1^-}l_n(x) = -\infty, \quad \lim_{x\to -\infty}l_n(x)=\infty.$$ Thus, equation (\ref{eq;981}) has at least one real solution in $(-\infty, 1)$ and, since solving it is equivalent to finding the roots of the cubic polynomial $$x^{3}+\left(-2n-u_n\left(\left\lfloor x_0^{(n)}\right\rfloor\right)\right)x+n^{2}+u_n\left(\left\lfloor x_0^{(n)}\right\rfloor\right),$$ we conclude that equation (\ref{eq;981}) has exactly two real solutions $1<x_1^{(n)}<x_2^{(n)}$. Necessarily, $x_1^{(n)}< \left\lfloor x_0^{(n)}\right\rfloor$ and $y_0^{(n)}< x_2^{(n)}$. Thus, if $t_n$ minimizes (\ref{eq;65}), then $t_n\in \left\{\left\lceil x_1^{(n)} \right\rceil, \ldots, \left\lfloor x_2^{(n)} \right\rfloor\right\}$.
\qed \begin{figure}[H] \includegraphics[width=10cm, height=8cm]{cap_3.PNG} \centering \caption{We have $\left\lceil x_1^{(100)}\right\rceil=17$ and $\left\lfloor x_2^{(100)}\right\rfloor=18$. Thus, $t_{100}\in\{17,18\}$.} \label{fig;egb} \end{figure} In the proof of Theorem \ref{thm;456} we shall make use of the following notation (cf. \cite[(9.3)]{G}): Let $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$ be two sequences. By $a_n \prec b_n$ we mean that $\lim_{n\to\infty}\frac{a_n}{b_n}=0$. \paragraph{Proof of Theorem \ref{thm;456}} Let $x_0^{(n)}, x_1^{(n)}$ and $x_2^{(n)}$ be as in Theorem \ref{T;6} and consider the cubic polynomial \begin{equation}\label{eq;hse} x^{3}+\left(-2n-u_n\left(\left\lfloor x_0^{(n)}\right\rfloor\right)\right)x+n^{2}+u_n\left(\left\lfloor x_0^{(n)}\right\rfloor\right).\end{equation} Since $x_0^{(n)},t_n\in \left\{\left\lceil x_1^{(n)} \right\rceil, \ldots, \left\lfloor x_2^{(n)} \right\rfloor\right\}$ and $x_0^{(n)}=\Theta\left(n^{2/3}\right)$, it suffices to show that $x_2^{(n)}-x_1^{(n)}\prec n^{2/3}$. To this end, denote $p_n=-2n-u_n\left(\left\lfloor x_0^{(n)}\right\rfloor\right)$ and $q_n=n^{2}+u_n\left(\left\lfloor x_0^{(n)}\right\rfloor\right)$. By Cardano's formula (e.g., \cite[p. 128]{V}), the three roots of (\ref{eq;hse}) are given by $$\sqrt[3]{-\frac{q_n}{2}+\sqrt{\frac{p_n^3}{27}+\frac{q_n^2}{4}}}+\sqrt[3]{-\frac{q_n}{2}-\sqrt{\frac{p_n^3}{27}+\frac{q_n^2}{4}}}.$$ In the appendix we show that $\frac{p_n^3}{27}+\frac{q_n^2}{4}\prec n^4$. Since $u_n\left(\left\lfloor x_0^{(n)}\right\rfloor\right)=\Theta\left(n^{4/3}\right)$, we have $q_n=\Theta\left(n^2\right)$. Hence, $$-\frac{q_n}{2}+\sqrt{\frac{p_n^3}{27}+\frac{q_n^2}{4}}= r_n(\cos(\pi-\theta_n) + i\sin(\pi-\theta_n)),$$ where $r_n=\Theta\left(n^2\right)$ and $\theta_n = \arctan\left(\frac{2\left|\sqrt{\frac{p_n^3}{27}+\frac{q_n^2}{4}}\right|}{q_n}\right)$. Notice that $\lim_{n\to\infty}\theta_n=0$. 
Proceeding as in \cite[Example 3.106]{V}, we conclude that $$ x_1^{(n)} =r_n^{1/3}\cos\left(\frac{\pi + \theta_n}{3}\right) \;\textnormal{ and }\; x_2^{(n)}=r_n^{1/3}\cos\left(\frac{\pi -\theta_n}{3}\right).$$ Thus, applying the trigonometric identity $$\cos\alpha-\cos\beta = 2\sin\left(\frac{\beta+\alpha}{2}\right)\sin\left(\frac{\beta-\alpha}{2}\right),$$ which holds for every $\alpha,\beta\in\mathbb{R}$, we see that $$\lim_{n\to\infty}\frac{x_2^{(n)} - x_1^{(n)}}{n^{2/3}} =\lim_{n\to\infty}2\left(\frac{r_n}{n^2}\right)^{1/3}\sin\left(\frac{\pi}{3}\right)\sin\left(\frac{\theta_n}{3}\right)=0.$$ \qed \begin{remark} It should be emphasized that our approach does not, in general, provide the true maximum of (\ref{eq;001}). For example, let $n=8$ and assume that $X=\{1,\ldots,8\}$. Consider the function $h\colon X\to X$ given by $$h(i) = \begin{cases} 1, & i\in\{1,2,3\};\\ 2, & i\in\{4,5\};\\ 3, & i\in\{6,7\};\\ 4, & i=8. \end{cases}$$ Then, $h^2\colon X\to X$ is given by $$h^2(i) = \begin{cases} 1, & i\in\{1,\ldots,7\};\\ 2, & i=8. \end{cases}$$ It follows that $$\frac{\deg\left(h^2\right)}{\deg(h)^{3/2}} = \frac{\frac{50}{8}}{\left(\frac{18}{8}\right)^{3/2}}=\frac{50}{27}\approx 1.85185.$$ In contrast, our approach provides the partition $(3,3,2)$ that corresponds to a function $f\colon X\to X$ such that $$\frac{\deg\left(f^2\right)}{\deg(f)^{3/2}}=\frac{8}{\left(\frac{22}{8}\right)^{3/2}}\approx 1.75424.$$ \end{remark} \begin{lemma}\label{lem;660} The sequence $(m_n)_{n\in\mathbb{N}}$ is strictly increasing. \end{lemma} \begin{proof} Assume that $m_{n+1}\leq m_n$ for some $n\in\mathbb{N}$ and let $\lambda=(\lambda_{1},\ldots,\lambda_{r})\in\mathcal{P}(n+1)$ be such that $||\lambda||_2^2=m_{n+1}$. Then $\lambda'=(\lambda_{1},\ldots,\lambda_{r-1}, \lambda_{r}-1)\in\mathcal{P}(n)$ (omitting the last part, if necessary) satisfies $\textnormal{rank}(\lambda')\geq 0$. Now, $$||\lambda'||_{2}^{2}<||\lambda||_{2}^{2}=m_{n+1}\leq m_n,$$ contradicting the minimality of $m_n$.
\end{proof} \begin{lemma}\label{lem;661} Let $\lambda=(\lambda_{1},\ldots,\lambda_{r})\in\mathcal{P}(n)$. Then $n$ and $||\lambda||_2^2$ have the same parity. In particular, $n$ and $m_n$ have the same parity. \end{lemma} \begin{proof} We proceed by induction on $n$. For $n=1$, the assertion holds trivially. Assume that the assertion holds for $n\in\mathbb{N}$ and let $\lambda=(\lambda_{1},\ldots,\lambda_{r})\in\mathcal{P}(n+1)$. Then $\lambda'=(\lambda_{1},\ldots,\lambda_{r-1}, \lambda_{r}-1)\in\mathcal{P}(n)$ (omitting the last part, if necessary). Now, \begin{align} ||\lambda||_2^2&=\lambda_1^2+\cdots+\lambda_r^2\nonumber\\&=\lambda_1^2+\cdots+\lambda_{r-1}^2+(\lambda_r-1)^2 +2\lambda_r -1\nonumber\\&=\overbrace{||\lambda'||_2^2}^{=\textnormal{ parity of }n}+\overbrace{2\lambda_r -1}^{\textnormal{odd}}\nonumber\\&=\textnormal{ parity of }(n+1).\nonumber \end{align} \end{proof} \iffalse \paragraph{Proof of Theorem \ref{t;t1}} Denote by $w <x_1^{(n)}<x_2^{(n)}$ the roots of (\ref{eq;kf}). Since $w=-x_1^{(n)}-x_2^{(n)}$, we must have $x_1^{(n)}-w>x_2^{(n)}-x_1^{(n)}$, for, otherwise, $x_1^{(n)}-w\leq x_2^{(n)}-x_1^{(n)}$, which leads to $x_1^{(n)}\leq 0$, contradicting the positivity of $x_1^{(n)}$. Now, using Vi\`ete's formulas (e.g., \cite[p. 89]{V}), \begin{align} 3\left(2n+g(\lfloor x_0^{(n)}\rfloor)\right)&=(x_2^{(n)}-x_1^{(n)})^{2}+(x_2^{(n)}-w)^{2}+(x_1^{(n)}-w)^{2} \nonumber\\&>2(x_2^{(n)}-x_1^{(n)})^{2}+(x_2^{(n)}-w)^{2}\nonumber\\&>3(x_2^{(n)}-x_1^{(n)})^{2}.\nonumber\end{align} We conclude that \begin{align} x_2^{(n)}-x_1^{(n)} &< \sqrt{2n+g(\lfloor x_0^{(n)}\rfloor)}\leq 2n^{2/3},\nonumber \end{align} where in the last inequality we applied similar arguments as in the proof of \ref{th;211}. Thus, if $n_0$ minimizes (\ref{eq;65}), then $n_0 \leq x_0^{(n)} + x_2^{(n)} - x_1^{(n)} \leq 3n^{2/3}$. \qed \fi \section{Appendix} Denote $z_n=\left\lfloor x_0^{(n)}\right\rfloor$ and $X_i=\frac{1}{(z_n-1)^i}$ for $i=1,2,3$.
We have \begin{align} \frac{p_n^3}{27}+\frac{q_n^2}{4}&=\boxed{-\frac{n^{6}}{27}X_{3}}+\frac{2n^{5}z_{n}}{9}X_{3}-\frac{2n^{5}}{9}X_{2}-\frac{5n^{4}z_{n}^{2}}{9}X_{3}\boxed{-\frac{n^{4}z_{n}^{2}}{9}X_{2}}+\frac{31n^{4}z_{n}}{36}X_{2}\boxed{+\frac{n^{4}}{4}}\nonumber \\&+\frac{5n^{4}}{18}X_{2}+\frac{n^{4}}{18}X_{1}+\frac{20n^{3}z_{n}^{3}}{27}X_{3}+\frac{4n^{3}z_{n}^{3}}{9}X_{2}-\frac{11n^{3}z_{n}^{2}}{9}X_{2}-\frac{4n^{3}z_{n}^{2}}{9}X_{1}\nonumber \\&-\frac{10n^{3}z_{n}}{9}X_{2}-\frac{2n^{3}z_{n}}{9}X_{1} -\frac{8n^{3}}{27}+\frac{n^{3}}{9}X_{1}-\frac{5n^{2}z_{n}^{4}}{9}X_{3}-\frac{2n^{2}z_{n}^{4}}{3}X_{2}\boxed{-\frac{n^{2}z_{n}^{4}}{9}X_{1}} \nonumber \\&+\frac{13n^{2}z_{n}^{3}}{18}X_{2}+\frac{5n^{2}z_{n}^{3}}{6}X_{1}+\frac{n^{2}z_{n}^{2}}{18}+\frac{5n^{2}z_{n}^{2}}{3}X_{2}+\frac{119n^{2}z_{n}^{2}}{144}X_{1}+\frac{n^{2}z_{n}}{72}-\frac{n^{2}z_{n}}{12}X_{1}\nonumber \\& -\frac{n^{2}}{72}-\frac{19n^{2}}{144}X_{1}+\frac{2nz_{n}^{5}}{9}X_{3}+\frac{4nz_{n}^{5}}{9}X_{2}+\frac{2nz_{n}^{5}}{9}X_{1}-\frac{2nz_{n}^{4}}{9}-\frac{nz_{n}^{4}}{9}X_{2}-\frac{nz_{n}^{4}}{3}X_{1} \nonumber \\&-\frac{nz_{n}^{3}}{9}-\frac{10nz_{n}^{3}}{9}X_{2}-\frac{29nz_{n}^{3}}{24}X_{1}+\frac{7nz_{n}^{2}}{72}-\frac{nz_{n}^{2}}{6}X_{1}+\frac{nz_{n}}{36}+\frac{19nz_{n}}{72}X_{1}-\frac{n}{72}\nonumber \\&\boxed{-\frac{z_{n}^{6}}{27}}-\frac{z_{n}^{6}}{27}X_{3}-\frac{z_{n}^{6}}{9}X_{2}-\frac{z_{n}^{6}}{9}X_{1}-\frac{z_{n}^{5}}{36}-\frac{z_{n}^{5}}{36}X_{2}-\frac{z_{n}^{5}}{18}X_{1}+\frac{13z_{n}^{4}}{48}+\frac{5z_{n}^{4}}{18}X_{2}\nonumber\\ &+\frac{79z_{n}^{4}}{144}X_{1}+\frac{239z_{n}^{3}}{1728}+\frac{5z_{n}^{3}}{36}X_{1}-\frac{11z_{n}^{2}}{96}-\frac{19z_{n}^{2}}{144}X_{1}-\frac{19z_{n}}{576}+\frac{7}{432}.\nonumber \end{align} Recall (cf.\ the proof of Theorem \ref{th;211}) that $\lim_{n\to\infty} \frac{z_n}{n^{2/3}} = 2^{-1/3}$. Thus, the expansion of $\frac{p_n^3}{27}+\frac{q_n^2}{4}$ contains terms of order $n^4$ (the boxed terms).
Nevertheless, the overall order is strictly less than $n^4$, as the following calculation shows: {\footnotesize\begin{align} &\lim_{n\to\infty}\frac{1}{n^4}\left(-\frac{n^{6}}{27}X_{3}-\frac{n^{4}z_{n}^{2}}{9}X_{2}+\frac{n^{4}}{4}-\frac{n^{2}z_{n}^{4}}{9}X_{1}-\frac{z_{n}^{6}}{27}\right)=\nonumber \\ &\lim_{n\to\infty}\frac{1}{108}\left(-4\left(\frac{n^{2/3}}{z_n}\right)^3\frac{z_n^3}{(z_n-1)^3}-12\frac{z_n^2}{(z_n-1)^2}+27-12\left(\frac{z_{n}}{n^{2/3}}\right)^3\frac{z_n}{z_n-1}-4\left(\frac{z_{n}}{n^{2/3}}\right)^6\right)=\nonumber\\&\lim_{n\to\infty}\frac{1}{108}\left(-4\cdot 2-12+27-12\cdot 2^{-1}-4\cdot 2^{-2}\right)=0.\nonumber \end{align}}
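As a quick numerical sanity check of this cancellation (our own sketch, not part of the proof), one can evaluate the five boxed terms with $z_n$ replaced by its asymptotic value $2^{-1/3}n^{2/3}$ and observe that their sum, divided by $n^4$, tends to $0$; the function name below is ours.

```python
import math

# Sanity check (ours, not part of the proof): the five boxed n^4-order
# terms cancel asymptotically. Here z_n is replaced by its asymptotic
# value 2^{-1/3} n^{2/3} from the proof of Theorem th;211.
def boxed_sum_over_n4(n):
    z = 2 ** (-1 / 3) * n ** (2 / 3)
    X1, X2, X3 = 1 / (z - 1), 1 / (z - 1) ** 2, 1 / (z - 1) ** 3
    total = (-n ** 6 / 27 * X3
             - n ** 4 * z ** 2 / 9 * X2
             + n ** 4 / 4
             - n ** 2 * z ** 4 / 9 * X1
             - z ** 6 / 27)
    return total / n ** 4

for n in (10 ** 6, 10 ** 9, 10 ** 12):
    print(n, boxed_sum_over_n4(n))  # tends to 0 as n grows
```

The residual is of size about $1/(2z_n) \asymp n^{-2/3}$, consistent with the overall order being strictly below $n^4$.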
https://arxiv.org/abs/2204.07873
The minimal sum of squares over partitions with a nonnegative rank
Motivated by a question of Defant and Propp (2020) regarding the connection between the degrees of noninvertibility of functions and those of their iterates, we address the combinatorial optimization problem of minimizing the sum of squares over partitions of $n$ with a nonnegative rank. Denoting the sequence of the minima by $(m_n)_{n\in\mathbb{N}}$, we prove that $m_n=\Theta\left(n^{4/3}\right)$. Consequently, we improve by a factor of $2$ the lower bound provided by Defant and Propp for iterates of order two.
https://arxiv.org/abs/1908.08816
On the largest prime factor of $n^2+1$
We show that the largest prime factor of $n^2+1$ is infinitely often greater than $n^{1.279}$. This improves the result of de la Bretèche and Drappeau (2019) who obtained this with $1.2182$ in place of $1.279.$ The main new ingredients in the proof are a new Type II estimate and using this estimate by applying Harman's sieve method. To prove the Type II estimate we use the bounds of Deshouillers and Iwaniec on linear forms of Kloosterman sums. We also show that conditionally on Selberg's eigenvalue conjecture the exponent $1.279$ may be increased to $1.312.$
\section{Introduction} An outstanding open problem in number theory is to prove that there are infinitely many primes of the form $n^2+1$. To approximate this we may consider the largest prime factor of integers of the form $n^2+1,$ as was done by Chebyshev already in the 19th century (cf. the introduction in \cite{hooley} for the prehistory of this problem). In 1967 Hooley \cite{hooley} proved that the largest prime factor of $n^2+1$ is infinitely often at least $n^{1.10014\dots}$ by applying the Weil bound for Kloosterman sums. Deshouillers and Iwaniec \cite{DI} showed in 1982 that the largest prime factor of $n^2+1$ is at least $n^{1.202468\dots}$ infinitely often. Their improvement came as an application of their bounds for linear forms of Kloosterman sums \cite{DI2}. In 2017 de la Bret\`eche and Drappeau \cite{BD} improved the exponent to 1.2182 by making use of the result of Kim and Sarnak \cite[Appendix 2]{KS} towards Selberg's eigenvalue conjecture. We will show a new Type II estimate (Proposition \ref{typeiiprop} below) and use this by applying Harman's sieve method to improve the previous results: \begin{theorem} \label{maint} The largest prime factor of $n^2+1$ is greater than $n^{1.279}$ for infinitely many integers $n$. \end{theorem} \begin{remark} The proof of Theorem \ref{maint} uses the deep bound of Kim and Sarnak \cite[Appendix 2]{KS}. Using just the classical Selberg's $3/16$-Theorem our argument gives a result with the exponent $1.279$ replaced by $1.23$. \end{remark} We also obtain a new conditional result (improving the exponent $\sqrt{3/2}-\epsilon \geq 1.2247$ of Deshouillers and Iwaniec \cite[Section 8]{DI}): \begin{theorem} \label{selbergt} Assuming Selberg's eigenvalue conjecture the exponent $1.279$ in Theorem \ref{maint} may be increased to $1.312$. \end{theorem} \begin{remark} As is usual with Harman's sieve, the exact limit of the method is hard to determine and would require extensive numerical computations. 
The exponents in both of the above theorems could still be slightly improved by optimizing the sieve more carefully but we do not pursue this issue here for the sake of a simpler presentation. \end{remark} \begin{remark} By using arguments similar to those in \cite{BD}, \cite{DFI}, and \cite{hooley} it should be possible to generalise our result from $n^2+1$ to polynomials $n^2-d$ where $d$ is not a perfect square. \end{remark} \subsection{Sketch of the proof} Similarly as in \cite{BD}, \cite{DI} and \cite{hooley}, we will use Chebyshev's device to detect large prime factors, that is, we use the elementary fact that \begin{align*} \sum_{m} \Lambda(m) \sum_{\substack{\ell \sim x \\ \ell^2+1 \equiv 0 \, \, (m)}} 1 = \sum_{\substack{\ell \sim x }} \sum_{m | \ell^2+1} \Lambda(m) = \sum_{\substack{\ell \sim x }} \log(\ell^2+1) = 2x\log x + O(x) \end{align*} so that if $P_x$ denotes the largest prime factor of $\prod_{\ell \sim x}(\ell^2+1)$, then \begin{align*} \sum_{p \leq P_x} \log p \sum_{\substack{\ell \sim x \\ \ell^2+1 \equiv 0 \, \, (p)}} 1 \geq (2+o(1)) x\log x. \end{align*} Hence, to get a lower bound for $P_x$ we require upper bounds for sums of the type \begin{align} \label{heur} \sum_{p \sim P} \sum_{\substack{\ell \sim x \\ \ell^2+1 \equiv 0 \, \, (p)}} 1, \end{align} where $P \leq x^\varpi $ with $\varpi$ corresponding to the exponent in Theorem \ref{maint}. Deshouillers and Iwaniec \cite{DI} use the linear sieve upper bound for the sum (\ref{heur}), and the main point in their work is to obtain strong Type I information, that is, asymptotic formulas for sums of the form \begin{align*} \sum_{d \leq D} \lambda_d \sum_{\substack{m\sim P \\ m \equiv 0 \,\, (d)}} \sum_{\substack{\ell \sim x \\ \ell^2+1 \equiv 0 \, \, (m)}} 1, \end{align*} where $\lambda_d$ are divisor bounded coefficients.
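As a small numerical illustration of the counting problem behind (\ref{heur}) (our own sketch, playing no role in the proofs): for a prime $p \equiv 1 \pmod 4$ the congruence $\ell^2+1 \equiv 0 \pmod p$ has exactly two roots modulo $p$, so roughly $2x/p$ values of $\ell \sim x$ contribute, while primes $p \equiv 3 \pmod 4$ contribute nothing.

```python
def count_solutions(x, p):
    """#{x < l <= 2x : l^2 + 1 = 0 (mod p)}, computed by brute force."""
    return sum(1 for l in range(x + 1, 2 * x + 1) if (l * l + 1) % p == 0)

x = 10 ** 5
# 13 = 1 (mod 4): the roots of l^2 + 1 mod 13 are l = 5 and l = 8,
# so the count should be close to 2x/13.
print(count_solutions(x, 13), 2 * x / 13)
# 7 = 3 (mod 4): -1 is not a square mod 7, so there are no solutions.
print(count_solutions(x, 7))
```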
The level of distribution obtained in \cite[Section 7]{DI} is $D=x^{1-\epsilon} P^{-1/2}$, which improved the level $D=x^{1-\epsilon}P^{-3/4}$ in Hooley's work \cite{hooley} (the conditions $m\sim P$ and $\ell \sim x$ need to be replaced by smooth coefficients but let us ignore this detail for the moment). De la Bret\`eche and Drappeau \cite{BD} improve the level of distribution to $D=x^{1/(2-4\theta)-\epsilon}P^{-\theta/(1-2\theta)}$, where $\theta \geq 0$ is any admissible exponent in the Ramanujan-Selberg conjecture. Note that from Selberg's $3/16$-Theorem we know $\theta =1/4$ is admissible, which gives the same level of distribution as in the work of Deshouillers and Iwaniec \cite{DI}. The exponent 1.2182 in \cite{BD} follows from using the result of Kim and Sarnak \cite{KS} that $\theta=7/64$ is admissible. We will use a combination of Harman's sieve method \cite{harman} and the linear sieve to give an improved upper bound for (\ref{heur}) for some ranges of $P$ (see the beginning of Section \ref{buchsec} for a heuristic explanation of Harman's sieve). Our sieve also has similarities to the sieve used by Duke, Friedlander, and Iwaniec in \cite{DFI}. For the sieve we need to obtain Type II information, that is, an asymptotic formula for sums of the form \begin{align} \label{typeiiheur} \sum_{\substack{m \sim M \\ n \sim N}} a_m b_n \sum_{\substack{\ell \sim x \\ \ell^2+1 \equiv 0 \, \, (mn)}} 1, \end{align} where $MN=P$ and $a_m$ and $b_n$ are divisor bounded coefficients. Type II sums of this form are also considered in the works of Iwaniec \cite{iwaniec} and Lemke Oliver \cite{lemke}, and more recently in \cite[Th\'eor\`eme 5.2]{BD}, but they are not applied to the problem of the largest prime factor of $n^2+1$. Our Proposition \ref{typeiiprop} gives an improvement on \cite[Th\'eor\`eme 5.2]{BD}. The proof of our Type II estimate is given in Section \ref{typeiisection}.
The sieve argument is carried out in Section \ref{sievesection}, using the Type I information proved in \cite{BD}. Our proof of the Type II information is inspired by the arguments in \cite{DI} and \cite{DFI}. The key ingredient in the proof is an estimate for linear forms of Kloosterman sums of the form \begin{align} \label{sumklooster} \sum_{r} \sum_{\substack{m \sim \bm{M}\\ n \sim \bm{N}}}A_{m,r} B_{n,r} \sum_{(c,r)=1} g(m,n,c,r) S(m \overline{r},\pm n;c), \end{align} for some nice smooth function $g$. Unfortunately both of the coefficients $A_{m,r}$ and $B_{n,r}$ depend on $r$, so that we are unable to make use of the average over the `level variable' $r$ (cf. \cite[Theorem 10]{DI2} for such a result). Similarly to the results in \cite{BD}, our Type II information will depend on the smallest eigenvalue $\lambda_1(r)=1/4-\theta_r^2$ for the Hecke congruence subgroups $\Gamma_0(r)$ (cf. \cite[Section 1]{DI2} for precise definitions). Selberg's eigenvalue conjecture famously states that $\lambda_1(\Gamma) \geq 1/4$ for any congruence subgroup $\Gamma$. The current best lower bound is the result of Kim and Sarnak \cite[Appendix 2]{KS} that $\lambda_1(\Gamma) \geq 1/4-(7/64)^2$, which we will apply with the estimate of Deshouillers and Iwaniec \cite[Theorem 9]{DI2} to obtain a bound for the sum (\ref{sumklooster}) individually for each $r$. For a more detailed sketch of the proof of the Type II estimate we refer to the beginning of Section \ref{typeiisection}. Unfortunately we can handle Type II sums only in the range $P < x^{153/128}$, so that for $x^{153/128} < P < x^\varpi$ we cannot improve on the upper bound of \cite{BD}. Note that even for $P=x^{1+\epsilon}$ a good upper bound for (\ref{heur}) is highly nontrivial; in fact, for $P=x^{1+\epsilon}$ the linear sieve upper bound is off by a factor of $4+O(\epsilon)$. In the last section we outline some open problems whose resolution would lead to further progress on the largest prime factor of $n^2+1$.
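The rational exponents quoted in the propositions of Section \ref{sievesection} all come from substituting $\theta=7/64$ into expressions that are affine in $\alpha$, so agreement at two values of $\alpha$ verifies them identically. A small exact-arithmetic sanity script of ours:

```python
from fractions import Fraction as Fr

theta = Fr(7, 64)  # the Kim-Sarnak exponent

def agree(lhs, rhs):
    # Both sides are affine functions of alpha, so equality at two
    # points implies equality identically.
    return all(lhs(a) == rhs(a) for a in (Fr(0), Fr(1)))

# Type I level of distribution:
assert agree(lambda a: (1 - 2 * theta * a) / (2 - 4 * theta),
             lambda a: (32 - 7 * a) / 50)
# Type II, part (i) upper limit:
assert agree(lambda a: (2 - 2 * theta - a) / 3,
             lambda a: (57 - 32 * a) / 96)
# Type II, part (ii) upper limit:
assert agree(lambda a: (4 - (3 + 2 * theta) * a) / (3 - 6 * theta),
             lambda a: (128 - 103 * a) / 75)
# Threshold below which part (i) is non-trivial:
assert Fr(5, 4) - theta / 2 == Fr(153, 128)
# Threshold appearing in Fundamental Proposition I:
assert (8 - 2 * theta) / 7 == Fr(249, 224)
print("all exponent identities check out")
```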
\subsection{Notations} We use the following asymptotic notations: for functions $f$ and $g$ with $g$ positive, we write $f \ll g$ or $f= \mathcal{O}(g)$ if there is a constant $C$ such that $|f| \leq C g.$ The notation $f \asymp g$ means $g \ll f \ll g.$ The constant may depend on some parameter, which is indicated in the subscript (e.g. $\ll_{\epsilon}$). We write $f=o(g)$ if $f/g \to 0$ for large values of the variable. For variables we write $n \sim N$ meaning $N<n \leq 2N$. It is convenient for us to define \begin{align*} A \pprec B \end{align*} to mean $A \, \ll_\epsilon x^{\epsilon} B.$ A typical bound we use is $\tau_k(n) \pprec 1$ for $n \ll x$, where $\tau_k$ is the $k$-fold divisor function. We say that an arithmetic function $f$ is divisor bounded if $|f(n)| \ll \tau_k(n)$ for some $k$. We let $\eta >0$ denote a sufficiently small constant, which may be different from place to place. For example, $A \ll x^{-\eta}B$ means that the bound holds for some $\eta >0.$ For a statement $E$ we denote by $1_E$ the characteristic function of that statement. For a set $A$ we use $1_A$ to denote the characteristic function of $A.$ We also define $P(w):= \prod_{p\leq w} p,$ where the product is over primes. We let $e(x):= e^{2 \pi i x}$ and $e_q(x):= e(x/q)$ for any integer $q \geq 1$. For integers $a,$ $b$, and $q \geq 1$ with $(b,q)=1$ we define $e_{q}(a/b) := e(a\overline{b}/q)$. For Kloosterman sums we use the standard notation \begin{align*} S(a,b;c):= \sum_{\substack{n \,\, (c)\\ (n,c)=1}} e_c(an+b/n). \end{align*} \subsection{Acknowledgements} I am grateful to my supervisor Kaisa Matom\"aki for useful discussions, comments, and support. I also express my gratitude to Emmanuel Kowalski for helpful discussions. I also wish to thank Philippe Michel for bringing the article \cite{BD} to my attention. During the work the author was funded by UTUGS Graduate School. 
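For concreteness, the Kloosterman sum defined in the notation above can be computed directly from its definition; the following sketch (our own, used nowhere in the proofs) also checks the classical Weil bound $|S(a,b;p)|\leq 2\sqrt{p}$ for primes $p \nmid ab$.

```python
import cmath
import math

def kloosterman(a, b, c):
    """S(a, b; c) = sum over units n mod c of e((a*n + b*nbar)/c)."""
    total = 0 + 0j
    for n in range(1, c):
        if math.gcd(n, c) == 1:
            nbar = pow(n, -1, c)  # modular inverse (Python >= 3.8)
            total += cmath.exp(2j * cmath.pi * (a * n + b * nbar) / c)
    # The substitution n -> -n conjugates each term, so S(a,b;c) is real.
    return total.real

for p in (101, 997):
    s = kloosterman(1, 1, p)
    print(p, round(s, 4), "Weil bound:", round(2 * math.sqrt(p), 4))
```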
\section{The sieve} \label{sievesection} In this section we will state the arithmetical information (Propositions \ref{typeiprop} and \ref{typeiiprop} below) and apply them with Harman's sieve method \cite{harman} and the linear sieve to give a proof of Theorem \ref{maint}. We also sketch the proof of Theorem \ref{selbergt} by indicating how the proof of Theorem \ref{maint} needs to be modified. \subsection{Set up} Our notations will be mostly similar to those of \cite{DI}. For $x \geq 1$, let $b$ denote a non-negative $C^{\infty}$-smooth function, supported on $[x,2x]$, whose derivatives satisfy for all $j \geq 0$ \begin{align*} b^{(j)}(x) \ll_j x^{-j}. \end{align*} For any integer $d \geq 1,$ define \begin{align*} |\mathcal{A}_d| \,\, := \sum_{n^2+1 \equiv 0 \,\, (d)} b(n) \quad \quad \text{and} \quad\quad X := \int b(\xi) \, d \xi. \end{align*} If $P_x$ denotes the greatest prime factor of $\prod_{x \leq n \leq 2x}(n^2+1)$, then by using the Chebyshev-Hooley method similarly as in \cite[Section 2]{DI} we find \begin{align} \label{cheby} S(x):= \sum_{x < p \leq P_x } |\mathcal{A}_p| \log p = X \log x + O(x). \end{align} Therefore, we require an upper bound for $S(x)$ to get a lower bound for $P_x$. We first split the sum using a smooth dyadic partition of unity similarly as in \cite[Section 3]{DI} \begin{align*} S(x) = \sum_{\substack{x \leq P \leq P_x \\ P= 2^j x}} S(x,P) + O(x), \end{align*} where \begin{align*} S(x,P) = \sum_{P \leq p \leq 4P} \psi_P(p) |\mathcal{A}_p| \log p \end{align*} for some $C^{\infty}$-smooth functions $\psi_P$ supported on $[P,4P]$ satisfying $\psi_P^{(\ell)}(\xi) \ll_\ell P^{-\ell}$ for all $\ell\geq 0.$ Compared to \cite{BD} and \cite{DI}, we will improve on their upper bound for $S(x,P)$ but only for $x \leq P < x^{153/128}$. This is because only in this range are we able to prove a new bilinear estimate (Proposition \ref{typeiiprop}).
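A concrete choice of such a bump function $b$ (for illustration only; the argument never needs an explicit formula, and any function with this support and these derivative bounds works) is the standard $C^{\infty}$ mollifier rescaled to $[x,2x]$:

```python
import math

def bump(u, x):
    """A C-infinity function supported on [x, 2x], built from exp(-1/(t(1-t)))."""
    t = (u - x) / x  # rescale [x, 2x] onto [0, 1]
    if t <= 0 or t >= 1:
        return 0.0
    return math.exp(-1.0 / (t * (1.0 - t)))

x = 100.0
# Vanishes at both endpoints, is positive inside, and peaks at the midpoint.
print(bump(x, x), bump(1.5 * x, x), bump(2 * x, x))
```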
To see how to use this new arithmetic information, we first note that in \cite{BD} and \cite{DI} the upper bound for $S(x)$ is obtained by using the linear sieve. Since the linear sieve is neutral with respect to applications of Buchstab's identity, we may apply Buchstab's identity as we please to obtain Type II sums for which we now have an asymptotic formula instead of just the upper and lower bounds of the linear sieve, thus improving on the linear sieve bound. A similar principle also appears in the sieve of Duke, Friedlander, and Iwaniec in \cite{DFI}. By applying Harman's sieve method the use of the linear sieve can be completely avoided in some ranges (cf. \cite[Sections 3.5 and 3.8]{harman} for further discussion on the relation between Harman's sieve and the linear sieve). For $P \geq x^{153/128}$ we are unable to obtain new information and we just apply the same argument as in \cite[Section 8]{DI} to get an upper bound for $S(x,P)$. In the end we sum over the dyadic ranges $x \leq P \leq x^\varpi$ to determine the largest $\varpi$ for which we can show that \begin{align*} \sum_{x < p \leq x^\varpi} |\mathcal{A}_p| \log p \leq (1-\epsilon) X \log x. \end{align*} As usual with Harman's sieve method, we have to calculate numerical upper bounds for multi-dimensional integrals. These integrals are computed using Python 3.7, and the links to the codes can be found at the end of this section. \subsection{Arithmetic information} Let us define \begin{align*} \rho(m):= | \{ \nu \in \mathbb{Z}/m\mathbb{Z}: \,\, \nu^2+1 \equiv 0 \,\, (m) \} |. \end{align*} \begin{remark} In \cite{DI} this is denoted by $\omega(m)$ but we reserve the symbol $\omega$ for the Buchstab function. \end{remark} From the work of de la Bret\`eche and Drappeau we know the following linear estimate (cf. \cite[Section 8.4]{BD}). \begin{prop} \emph{(Type I information, de la Bret\`eche-Drappeau).} \label{typeiprop} Let $\theta=7/64$.
Let $x \leq P=x^\alpha \leq x^{2-\eta}$ and \begin{align*} D:=x^{1/(2-4\theta)-\eta}P^{-\theta/(1-2\theta)}= x^{(1-2\theta\alpha)/(2-4\theta)-\eta}=x^{(32-7\alpha)/50-\eta}. \end{align*} Suppose that $D \ll x^{2-\eta}/P.$ Let $\lambda_d$ be any divisor bounded coefficients. Then \begin{align*} \sum_{d \leq D} \lambda_d \sum_{m \equiv 0 \,\, (d)} |\mathcal{A}_m| \psi_P(m) \log m = X \sum_{d \leq D} \lambda_d \sum_{m \equiv 0 \,\, (d)} \frac{\rho(m)}{m} \psi_P(m) \log m + O (x^{1-\eta}). \end{align*} \end{prop} In Section \ref{typeiisection} we will show the following bilinear estimate which improves on \cite[Th\'eor\`eme 5.2]{BD}: \begin{prop} \emph{(Type II information).} \label{typeiiprop} Let $\theta= 7/64.$ Let $P=x^\alpha$ for some $\alpha \geq 1$, and let $MN=P$ for $M,N \geq 1$. Let $a_m$ and $b_n$ be divisor bounded coefficients such that $b_n$ is supported on square-free integers. Then \begin{align*} \sum_{\substack{m \sim M \\ n \sim N}} a_m b_n |\mathcal{A}_{mn}| \psi_P(mn) \log mn = X \sum_{\substack{m \sim M \\ n \sim N}} \frac{a_m b_n \rho(mn)}{mn} \psi_P(mn) \log mn + O (x^{1-\eta}) \end{align*} if one of the following holds: \\ \textbf{\emph{(i)}} \begin{align*} x^{\alpha-1+\eta} \ll N \ll x^{(2-2\theta-\alpha)/3-\eta}=x^{(57-32\alpha)/96 - \eta}. \end{align*} \\ \textbf{\emph{(ii)}} \emph{(Duke-Friedlander-Iwaniec+de la Bret\`eche-Drappeau)} $b_n$ is supported on primes and \begin{align*} x^{2(\alpha-1)+\eta} \ll N \ll x^{(4-(3+2\theta)\alpha)/(3-6\theta)-\eta}=x^{(128-103\alpha)/75-\eta}. \end{align*} \end{prop} \begin{remark} Part (i) gives a non-trivial range for $N$ if $\alpha < 5/4-\theta/2 =153/128=1.195\dots$ \end{remark} \begin{remark} The exponent $\theta=7/64$ corresponds to the smallest eigenvalues $\lambda_1(q)$ on the Hecke congruence subgroups $\Gamma_0(q), \, q\geq 1,$ by $\lambda_1(q)= 1/4-\theta_q^2$ (cf. \cite[Section 1]{DI2} for precise definitions).
Under Selberg's eigenvalue conjecture we could set $\theta=0.$ That $\theta_q \leq 7/64$ follows from a deep result of Kim and Sarnak \cite[Appendix 2]{KS}. \end{remark} \begin{remark} Part (ii) is almost a direct consequence of combining the argument in \cite[Section 5]{DFI} with \cite[Lemme 8.3, part 1]{BD}. Its upper limit is better than that of part (i) only in the range $\alpha < 2671/2496 = 1.070\dots$. Notice that for $\theta=0$ part (i) gives a better result in the full range. \end{remark} \begin{remark} By arguments similar to those in \cite{iwaniec} and \cite{lemke}, in \cite[Th\'eor\`eme 5.2]{BD} de la Bret\`eche and Drappeau use the dispersion method to handle Type II sums for \begin{align*} x^{\alpha-1+\eta} \ll N \ll x^{\alpha(1-2\theta)/(7-6\theta)-\eta} \end{align*} but this is weaker than Proposition \ref{typeiiprop}(i). \end{remark} \subsection{Fundamental Proposition} For integer $d \geq 1$ denote \begin{align} \label{asieve} S(\mathcal{A}(P)_d,z) := \sum_{\substack{(n,P(z))=1}} |\mathcal{A}_{dn}| \psi_P(dn) \log (dn), \end{align} so that \begin{align*} S(x,P) = S(\mathcal{A}(P), 2 \sqrt{P}). \end{align*} Let us also define the expected value of $S(\mathcal{A}(P)_d,z)$ \begin{align} \label{bsieve} S(\mathcal{B}(P)_d,z) := X \sum_{(n,P(z))=1} \frac{\rho(dn)}{dn} \psi_P(dn) \log(dn). \end{align} For the next proposition we note that $(2-2\theta-\alpha)/3 > 2(\alpha-1)$ exactly if $\alpha < 249/224 =1.11\dots$ We can combine Propositions \ref{typeiprop} and \ref{typeiiprop} by using a variant of the argument in \cite[Chapter 3]{harman} to get \begin{prop} \emph{(Fundamental Proposition I).} \label{funprop} Let $P=x^\alpha$ for $1\leq \alpha < 249/224-2\eta.$ Let $D$ be as in Proposition \ref{typeiprop} and set \begin{align*} U:=Dx^{1-\alpha-\eta}=x^{(1-2\theta\alpha)/(2-4\theta)-\alpha+1-2\eta}, \end{align*} and \begin{align*} \sigma := \max \bigg\{\frac{2-2\theta-\alpha}{3}-\eta, \frac{4-(3+2\theta)\alpha}{3-6\theta}-\eta\bigg \}.
\end{align*} Let $\lambda_u$ be divisor-bounded coefficients. Then \begin{align*} \sum_{u \leq U } \lambda_u S(\mathcal{A}(P)_u,x^\sigma) = \sum_{u \leq U} \lambda_u S(\mathcal{B}(P)_u,x^\sigma) + O(x^{1-\eta}). \end{align*} \end{prop} \begin{proof} Using the M\"obius function to detect $(n,P(x^\sigma))=1$, we have \begin{align*} \sum_{u \leq U } \lambda_u S(\mathcal{A}(P)_u,x^\sigma) &= \sum_{u \leq U } \sum_{d | P(x^\sigma)} \sum_n \lambda_u \mu(d) |\mathcal{A}_{udn}| \psi_P(udn)\log(udn) \\ &= \Sigma_I(\mathcal{A}(P)) + \Sigma_{II}(\mathcal{A}(P)), \end{align*} where \begin{align*} \Sigma_I(\mathcal{A}) &:= \sum_{u \leq U } \sum_{\substack{d | P(x^\sigma)\\ d \leq x^{\alpha-1+\eta}} } \sum_n \lambda_u \mu(d) |\mathcal{A}_{udn}| \psi_P(udn)\log(udn) \quad \quad \text{and} \\ \Sigma_{II}(\mathcal{A}) & := \sum_{u \leq U } \sum_{\substack{d | P(x^\sigma)\\ d > x^{\alpha-1+\eta}} } \sum_n \lambda_u \mu(d) |\mathcal{A}_{udn}| \psi_P(udn)\log(udn). \end{align*} Similarly, we can write \begin{align*} \sum_{u \leq U } \lambda_u S(\mathcal{B}(P)_u,x^\sigma) =\Sigma_I(\mathcal{B}(P)) + \Sigma_{II}(\mathcal{B}(P)). \end{align*} For the first pair of sums, since $du \leq x^{\alpha-1+\eta}U = D,$ we have by Proposition \ref{typeiprop} \begin{align*} \Sigma_I(\mathcal{A}(P)) = \Sigma_I(\mathcal{B}(P)) + O(x^{1-\eta}). \end{align*} In the second pair of sums we have (writing $d=q_1q_2\cdots q_k$) \begin{align*} \Sigma_{II}(\mathcal{A}(P)) = \sum_{k \ll \log x}(-1)^k \sum_{u \leq U} \sum_{\substack{q_k < \cdots < q_1 \leq x^{\sigma} \\ q_1 \cdots q_k > x^{\alpha-1+\eta} }} \lambda_u |\mathcal{A}_{uq_1 \cdots q_k n}| \psi_P(uq_1 \cdots q_k n)\log(u q_1 \cdots q_kn). \end{align*} For every $q_1\cdots q_k$ there exists a unique $\ell \leq k$ such that \begin{align*} q_1 \cdots q_\ell \geq x^{\alpha-1+\eta} \quad \quad \text{and} \quad \quad q_1 \cdots q_{\ell-1} < x^{\alpha-1+\eta}. 
\end{align*} Hence, writing $n':=q_1 \cdots q_\ell$ and $m:=un q_{\ell+1} \cdots q_k$, and using Perron's formula to remove the cross-condition $q_{\ell+1} < q_\ell$ (cf. \cite[Chapter 3.2]{harman}), we can decompose $\Sigma_{II}(\mathcal{A}(P))$ as \begin{align*} \sum_{k \ll \log x}(-1)^k \sum_{\ell \leq k} \sum_m \sum_{\substack{n'=q_1 \cdots q_\ell \geq x^{\alpha-1+\eta} \\ q_1 \cdots q_{\ell-1} < x^{\alpha-1+\eta} \\ q_\ell < \cdots < q_1 \leq x^\sigma}} a_m b_{n'} |\mathcal{A}_{mn'}| \psi_P(mn') \log mn' \end{align*} with $b_{n'}$ supported on square-free integers. A similar decomposition applies to $\Sigma_{II}(\mathcal{B}(P))$. If $\ell =1$, then $x^{\alpha-1+\eta} \leq q_1 \leq x^\sigma$, so that we have an asymptotic formula by combining parts (i) and (ii) of Proposition \ref{typeiiprop} if $\alpha < 2671/2496$, and for $\alpha \geq 2671/2496$ by simply using part (i). If $\ell > 1$, then we have $q_1 \cdots q_\ell \leq x^{\alpha-1+\eta} q_\ell \leq x^{(2-2\theta-\alpha)/3-\eta}$ (since $q_\ell < q_1< x^{\alpha-1+\eta}$ and $2(\alpha-1) < (2-2\theta-\alpha)/3-3\eta$ for $\alpha < 249/224-2\eta$), so that we may apply Proposition \ref{typeiiprop}(i) to get an asymptotic formula. Summing over $\ell$ and $k$ we obtain \begin{align*} \Sigma_{II}(\mathcal{A}(P)) = \Sigma_{II}(\mathcal{B}(P)) + O(x^{1-\eta}). \end{align*} \end{proof} We note that $(2-2\theta-\alpha)/3>\alpha-1$ precisely if $\alpha < 153/128.$ By a similar argument we obtain the following variant of the previous proposition: \begin{prop} \emph{(Fundamental Proposition II).} \label{funprop2} Let $P=x^\alpha$ for $1\leq \alpha < 153/128-2\eta.$ Let $D$ be as in Proposition \ref{typeiprop} and set \begin{align*} U:=Dx^{1-\alpha-\eta}=x^{(1-2\theta\alpha)/(2-4\theta)-\alpha+1-2\eta}, \end{align*} and \begin{align*} \gamma := \frac{2-2\theta-\alpha}{3} - \alpha+1 - 2\eta. \end{align*} Let $\lambda_u$ be divisor-bounded coefficients.
Then \begin{align*} \sum_{u \leq U } \lambda_u S(\mathcal{A}(P)_u,x^\gamma) = \sum_{u \leq U} \lambda_u S(\mathcal{B}(P)_u,x^\gamma) + O(x^{1-\eta}). \end{align*} \end{prop} \begin{proof} The only difference from the proof of Proposition \ref{funprop} is that this time in $\Sigma_{II}(\mathcal{A}(P))$ combining \begin{align*} q_1 \cdots q_\ell \geq x^{\alpha-1+\eta} \quad \quad \text{and} \quad \quad q_1 \cdots q_{\ell-1} < x^{\alpha-1+\eta} \end{align*} with $q_\ell < x^\gamma$ we get $q_1\cdots q_\ell < x^{\alpha-1+\eta+\gamma} = x^{(2-2\theta-\alpha)/3 -\eta}$, so that we may use Proposition \ref{typeiiprop}(i) to get an asymptotic formula. \end{proof} We also need a lemma for transforming sums over almost-primes into integrals which can be evaluated numerically. Let $\omega(u)$ denote the Buchstab function (cf. \cite[Chapter 1]{harman} for the properties below, for instance), so that by the Prime Number Theorem for $y^{\epsilon} < z < y$ \begin{align} \label{buchasymp} \sum_{y < n \leq 2y} 1_{(n,P(z))=1} = (1+o(1)) \omega \left(\frac{\log y}{\log z} \right) \frac{y}{\log z}. \end{align} Note that for $1< u \leq 2$ we have $\omega(u)=1/u.$ In the numerical computations we will use the following bounds for the Buchstab function (cf. \cite[Lemma 20]{jia}) \begin{align} \label{buchbound} \omega(u) \, \begin{cases} = 0, &u < 1 \\ = 1/u, & 1 \leq u < 2 \\ = (1+\log(u-1))/u, &2 \leq u < 3 \\ \leq 0.5644, & 3 \leq u < 4 \\ \leq 0.5617, & u \geq 4 \\ \geq 0.5607, & u \geq 2.47. \end{cases} \end{align} In the lemma below we assume that the range $\mathcal{U}\subset [x^{\eta},Px^{-\eta}]^{k}$ is sufficiently well-behaved, e.g. an intersection of sets of the type $\{ \boldsymbol{u}: u_i < u_j \}$ or $\{\boldsymbol{u}: V < f(u_1, \dots,u_k) < W\}$ for some polynomial $f$ and some fixed $V,W.$ \begin{lemma} \label{bilemma} Let $\mathcal{U} \subset [x^{\eta},Px^{-\eta}]^{k} $ and $P=x^\alpha$.
Then \begin{align*} \sum_{(q_1, \dots , q_k) \in \mathcal{U}} S(\mathcal{B}(P)_{q_1, \dots, q_k},q_k) = (1+o(1))X \int \psi_P(u) \frac{du}{u} \alpha \int \omega (\alpha,\boldsymbol{\beta }) \frac{d\beta_1 \cdots d\beta_k}{\beta_1\cdots\beta_{k-1}\beta_k^2}, \end{align*} where the integral is over the range $\{\boldsymbol{\beta}: \, (x^{\beta_1}, \dots, x^{\beta_k}) \in \mathcal{U}\}$, and \begin{align*} \omega(\alpha,\boldsymbol{\beta}):= \omega\bigg(\frac{\alpha-\beta_1-\cdots -\beta_k}{\beta_k}\bigg). \end{align*} \end{lemma} \begin{proof} By definition the left-hand side in the lemma is equal to \begin{align*} \sum_{(q_1, \dots , q_k) \in \mathcal{U}} X \sum_m 1_{(m,P(q_k))=1} \frac{\rho(q_1\cdots q_k m)}{q_1 \cdots q_k m} \psi_P(q_1 \cdots q_k m) \log (q_1 \cdots q_k m). \end{align*} Note that the function $\rho(m)$ is multiplicative and $\rho(p) = 2\cdot 1_{p \equiv 1 \, (4)}$ for primes $p > 2.$ Hence, for $(m,P(x^\eta))=1$ we can replace $\rho(m)$ by 1 with negligible error by equidistribution of primes in arithmetic progressions. 
Therefore, by (\ref{buchasymp}) and by the Prime Number Theorem we have \begin{align*} & \sum_{(q_1, \dots , q_k) \in \mathcal{U}} S(\mathcal{B}(P)_{q_1, \dots, q_k},q_k) \\ &= \sum_{(q_1, \dots , q_k) \in \mathcal{U}} X \sum_m 1_{(m,P(q_k))=1} \frac{1}{q_1 \cdots q_k m} \psi_P(q_1 \cdots q_k m) \log (q_1 \cdots q_k m) \\ &= (1+o(1))X \int \psi_P(u) \log u \frac{du}{u} \sum_{(q_1, \dots , q_k) \in \mathcal{U}} \frac{1}{q_1\cdots q_k \log q_k} \omega \left( \frac{\log(P/(q_1\cdots q_k))}{\log q_k} \right) \\ &= (1+o(1))X \int \psi_P(u) \log u \frac{du}{u} \\ & \hspace{60pt} \sum_{(n_1,\dots,n_k ) \in \mathcal{U}} \frac{1}{n_1\cdots n_k (\log n_1) \dots (\log n_{k-1} )\log^2 n_k} \omega \left( \frac{\log(P /(n_1\cdots n_k))}{\log n_k} \right) \\ &= (1+o(1))X \int \psi_P(u) \log u \frac{du}{u} \\ & \hspace{60pt} \int_{\mathcal{U}} \omega \left( \frac{\log(P/(u_1\cdots u_k))}{\log u_k} \right) \frac{du_1\cdots du_k}{u_1\cdots u_k (\log u_1) \dots (\log u_{k-1} )\log^2 u_k}\\ &= (1+o(1))X \int \psi_P(u) \frac{du}{u} \alpha \int \omega (\alpha,\boldsymbol{\beta }) \frac{d\beta_1 \cdots d\beta_k}{\beta_1\cdots\beta_{k-1}\beta_k^2} \end{align*} by the change of variables $u_j=x^{\beta_j}$ and by inserting $\log u = (1+o(1))\alpha \log x$. \end{proof} \begin{remark} We refer to the factor $\alpha \int \omega(\alpha,\boldsymbol{\beta}) \frac{ d \beta_1\cdots d\beta_k }{\beta_1\cdots\beta_{k-1}\beta_k^2}$ as the deficiency of the corresponding sum. \end{remark} For the linear sieve (cf. \cite[Chapter 11]{OdC}) we let $F(s),f(s)$ denote the continuous solution to the system of delay-differential equations \begin{align*} \begin{cases} (sF(s))' = f(s-1) \\ (sf(s))' = F(s-1) \end{cases} \end{align*} with the initial condition \begin{align*} \begin{cases} sF(s) = 2e^{\gamma}, & \text{if} \, \, 1 \leq s \leq 3 \\ sf(s) = 0, & \text{if} \, \, s\leq 2. \end{cases}. \end{align*} Here $\gamma$ is the Euler-Mascheroni constant. 
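For the numerical computations referred to below, the Buchstab-function bounds (\ref{buchbound}) can be encoded directly; the following is a minimal sketch in the spirit of the linked computation codes (the name \texttt{omega\_upper} is ours, and for $u \geq 3$ it returns the stated constant upper bounds rather than $\omega(u)$ itself):

```python
import math

def omega_upper(u):
    """Pointwise upper bound for the Buchstab function omega(u):
    exact values on [1, 3), and the constant bounds 0.5644 / 0.5617
    on [3, 4) and [4, infinity) used in the numerical computations."""
    if u < 1:
        return 0.0
    if u < 2:
        return 1.0 / u
    if u < 3:
        return (1.0 + math.log(u - 1.0)) / u
    if u < 4:
        return 0.5644
    return 0.5617
```

A deficiency of the form $\alpha\int\omega(\alpha/\beta-1)\,d\beta/\beta^2$ can then be evaluated with any standard quadrature rule over $\beta$.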
We require the following \begin{lemma}\textbf{\emph{(Linear sieve upper bound).}} \label{linearlemma} Let $D$ be as in Proposition \ref{typeiprop}. For $P=x^\alpha$ and for any $x^\eta < z < D$ we have \begin{align*} S(\mathcal{A}(P),z) \leq (1+o(1)) X \int \psi_P(u)\frac{du}{u} \frac{\alpha \log x}{e^\gamma \log z} F\bigg( \frac{\log D}{\log z}\bigg). \end{align*} \end{lemma} \begin{proof} Let $\lambda_d$ denote the sieve weights of the upper bound linear sieve (cf. \cite[Chapter 11]{OdC}) with level of distribution $D$. Then \begin{align*} S(\mathcal{A}(P),z) \leq \sum_{\substack{d\leq D \\ d| P(z)}}\lambda_d \sum_{m \equiv 0 \,\, (d)}|\mathcal{A}_m| \psi_P(m) \log m = X \sum_{\substack{d\leq D \\ d| P(z)}} \lambda_d \sum_{m \equiv 0 \,\, (d)} \frac{\rho(m)}{m} \psi_P(m) \log m + O (x^{1-\eta}) \end{align*} by Proposition \ref{typeiprop}. The sum on the right-hand side can now be evaluated by using \cite[Theorem 11.12]{OdC} and the same argument as in \cite[Section 8]{DI}, which leads to the result. \end{proof} \subsection{Buchstab decompositions} \label{buchsec} The general idea of Harman's sieve is to use Buchstab's identity to decompose the sum $S(\mathcal{C}(P),2\sqrt{P})$ (in parallel for $\mathcal{C}(P)=\mathcal{A}(P)$ and $\mathcal{C}(P)=\mathcal{B}(P)$) into a sum of the form $\sum_k \epsilon_k S_k(\mathcal{C}(P)),$ where $\epsilon_k \in \{-1,1\},$ and $S_k(\mathcal{C}(P)) \geq 0$ are sums over almost-primes. Since we are interested in an upper bound, for $\mathcal{C}(P)=\mathcal{A}(P)$ we can insert the trivial estimate $S_k(\mathcal{A}(P)) \geq 0$ for any $k$ such that the sign $\epsilon_k =-1;$ these sums are said to be discarded. For the remaining $k$ we will obtain an asymptotic formula by using Propositions \ref{typeiiprop} and \ref{funprop} (in some cases with $\epsilon_k=1$ we will use the linear sieve upper bound (Lemma \ref{linearlemma}), but let us ignore this for now).
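The identity being iterated here, $S(\mathcal{C},z)=S(\mathcal{C},w)-\sum_{w\leq p<z}S(\mathcal{C}_p,p)$, is an exact combinatorial statement and can be checked by direct counting on a toy sequence; a small self-contained sketch (all function names are ours):

```python
def least_prime_factor(n):
    """Least prime factor of n; infinity for n = 1 (so 1 passes every sieve)."""
    if n == 1:
        return float('inf')
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def sieve_count(N, z):
    """S([1,N], z): integers 1 <= n <= N with no prime factor < z."""
    return sum(1 for n in range(1, N + 1) if least_prime_factor(n) >= z)

def buchstab_rhs(N, w, z):
    """Right-hand side of Buchstab's identity: S([1,N], w) minus the
    correction sum over primes w <= p < z, where S(A_p, p) counts
    multiples p*m <= N whose cofactor m has no prime factor < p."""
    primes = [p for p in range(w, z) if least_prime_factor(p) == p]
    correction = sum(
        sum(1 for m in range(1, N // p + 1) if least_prime_factor(m) >= p)
        for p in primes)
    return sieve_count(N, w) - correction
```

Each $n$ removed between the two sieving levels is counted exactly once, at its least prime factor, which is why the identity holds with equality.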
That is, if $\mathcal{K}$ is the set of indices that are discarded, then \begin{align*} S(\mathcal{A}(P), 2\sqrt{P})&= \sum_k \epsilon_k S_k(\mathcal{A}(P)) \leq \sum_{k \notin \mathcal{K}} \epsilon_k S_k(\mathcal{A}(P)) \\ &=(1+o(1)) \sum_{k \notin \mathcal{K}} \epsilon_k S_k(\mathcal{B}(P)) = (1+o(1))S(\mathcal{B}(P),2\sqrt{P}) + \sum_{k \in \mathcal{K}} S_k(\mathcal{B}(P)). \end{align*} By the Prime Number Theorem we have \begin{align*} S(\mathcal{B}(P), 2 \sqrt{P}) = (1+o(1)) X \int \psi_P(u)\frac{du}{u}. \end{align*} The remaining sum $\sum_{k \in \mathcal{K}} S_k(\mathcal{B}(P))$ we can estimate using Lemma \ref{bilemma}. Thus, we will obtain an upper bound of the form \begin{align} \label{Gclaim} S(\mathcal{A}(P),2\sqrt{P}) \leq (1+ G(\alpha))X \int \psi_P(u)\frac{du}{u} \end{align} for some non-negative function $G$ measuring the deficiency at range $P=x^\alpha.$ To relax the notation we will ignore factors of $x^{\eta}$ in the ranges of variables in this section, since their contribution to $G(\alpha)$ will be $O(\eta)$, which can be made arbitrarily small. We separate into five cases: $1\leq \alpha < 758/733,$ $758/733 \leq \alpha < 249/224$, $249/224 \leq \alpha <182/157 $, $182/157 \leq \alpha < 153/128,$ and $\alpha > 153/128$. \begin{remark} The range $\alpha < 249/224$ is where we can apply Proposition \ref{funprop}. For $\alpha < 182/157$ we will use Proposition \ref{funprop2}. For $182/157 \leq \alpha < 153/128$ we will use a combination of Proposition \ref{typeiiprop}(i) and the linear sieve upper bound. For $\alpha > 153/128$ we do not have any new information, so we just use the linear sieve similarly as in \cite{BD} and \cite{DI} to get an upper bound. \end{remark} \subsubsection{Case $1\leq \alpha < 758/733$} \label{alphasmallsection} Let \begin{align*} \sigma := \frac{4-(3+2\theta)\alpha}{3-6\theta}-\eta \end{align*} (for $\alpha< 758/733$ part (ii) of Proposition \ref{typeiiprop} is stronger than (i)).
Define $\xi$ by setting (recall Proposition \ref{funprop}) \begin{align*} U=Dx^{1-\alpha-\eta}=x^{(1-2\theta\alpha)/(2-4\theta)-\alpha+1-2\eta}=:x^{\xi}. \end{align*} Let $\mathcal{C} \in \{\mathcal{A},\mathcal{B}\}$. By Buchstab's identity we have \begin{align*} S(\mathcal{C}(P), 2 \sqrt{P})= S(\mathcal{C}(P), x^\sigma) - \sum_{x^\sigma < q \leq 2\sqrt{P}} S(\mathcal{C}(P)_q, q). \end{align*} By Proposition \ref{funprop} we have an asymptotic formula for the first term. In the second sum we note that the implicit variable in $S(\mathcal{C}(P)_q, q)$ (cf. $n$ in (\ref{asieve}) and (\ref{bsieve})) is of size $ x^{\alpha}/q$, so that for $q \gg x^{\alpha-2\sigma}$ the implicit variable runs over primes of size $<x^{2\sigma}.$ Hence \begin{align*} \sum_{x^{\alpha-2\sigma} \ll q \leq U} S(\mathcal{C}(P)_q, q) = \sum_{x^{\alpha-2\sigma} \ll q \leq U} S(\mathcal{C}(P)_q, x^\sigma), \end{align*} so that we have an asymptotic formula by Proposition \ref{funprop} in this range. We note that this range is non-trivial precisely if \begin{align*} \alpha < 758/733 = 1.034\dots. \end{align*} The remaining part we just discard, which by Lemma \ref{bilemma} gives us a deficiency \begin{align} \label{g1alpha} \alpha \int_{\sigma}^{\alpha-2\sigma} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2} + \alpha \int_{\xi}^{\alpha/2} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2}. \end{align} \subsubsection{Case $758/733 \leq \alpha < 249/224$} Let \begin{align*} \sigma := \max \bigg\{\frac{2-2\theta-\alpha}{3}-\eta, \frac{4-(3+2\theta)\alpha}{3-6\theta}-\eta\bigg \}. \end{align*} By Buchstab's identity we have \begin{align*} S(\mathcal{C}(P), 2 \sqrt{P})= S(\mathcal{C}(P), x^\sigma) - \sum_{x^\sigma < q \leq 2\sqrt{P}} S(\mathcal{C}(P)_q, q). \end{align*} By Proposition \ref{funprop} we have an asymptotic formula for the first term.
The second sum we just discard, which by Lemma \ref{bilemma} gives us a deficiency \begin{align*} \alpha \int_{\sigma}^{\alpha/2} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2}. \end{align*} Summing over dyadic ranges $x < P=2^j x < x^{249/224}$ we obtain \begin{align*} \sum_{\substack{x \leq P \leq x^{249/224} \\ P= 2^j x}} S(x,P) \leq (25/224 + G_1+G_2+o(1)) X \log x , \end{align*} where by (\ref{g1alpha}) \begin{align*} G_1 := \int_1^{758/733} \alpha\bigg( \int_{\sigma}^{\alpha-2\sigma} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2} + \int_{\xi}^{\alpha/2} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2} \bigg) d\alpha < 0.01745 \end{align*} and \begin{align*} G_2 := \int_{758/733}^{249/224} \alpha \int_{\sigma}^{\alpha/2} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2} d\alpha < 0.11478. \end{align*} \subsubsection{Case $249/224 \leq \alpha < 182/157$} From here on we let $\sigma := (2-2\theta-\alpha)/3$ (for $\alpha \geq 249/224$ part (i) of Proposition \ref{typeiiprop} is stronger than (ii)). Recall that in Proposition \ref{funprop2} \begin{align*} \gamma := \frac{2-2\theta-\alpha}{3} - \alpha+1 - 2\eta. \end{align*} By applying Buchstab's identity we get \begin{align*} S(\mathcal{A}(P), 2 \sqrt{P})&= S(\mathcal{A}(P), x^{\gamma}) - \sum_{x^{\gamma} < q \leq 2\sqrt{P}} S(\mathcal{A}(P)_q, q). \end{align*} For the first term we have an asymptotic formula by Proposition \ref{funprop2}. In the second sum we get an asymptotic formula by Proposition \ref{typeiiprop}(i) in the part $x^{\alpha-1}< q <x^\sigma$. We discard the part with $x^\sigma < q < x^{\alpha/2}$, which gives us a deficiency \begin{align*} \alpha \int_{\sigma}^{\alpha/2} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2}.
\end{align*} For the remaining part $x^\gamma < q \leq x^{\alpha-1}$ we apply Buchstab's identity twice to get \begin{align*} -\sum_{x^{\gamma} < q\leq x^{\alpha-1}} S(\mathcal{A}(P)_q, q) = -\sum_{x^{\gamma} < q\leq x^{\alpha-1}}& S(\mathcal{A}(P)_q, x^\gamma) + \sum_{x^{\gamma} < q_2<q_1\leq x^{\alpha-1}} S(\mathcal{A}(P)_{q_1q_2}, x^\gamma) \\ &- \sum_{x^{\gamma} < q_3<q_2<q_1\leq x^{\alpha-1}} S(\mathcal{A}(P)_{q_1q_2q_3}, q_3). \end{align*} Since $\alpha < 182/157$, we have $x^{2(\alpha-1)} < U$ so that for the first two sums we have an asymptotic formula by Proposition \ref{funprop2}. In the last sum we use Proposition \ref{typeiiprop}(i) to get an asymptotic formula whenever any combination of $q_1,q_2,q_3$ is in the Type II range $[x^{\alpha-1},x^\sigma]$ and we discard the rest. Thus, \begin{align*} \sum_{\substack{x^{249/224} \leq P \leq x^{182/157} \\ P= 2^j x}} S(x,P) \leq \bigg(\frac{182}{157}-\frac{249}{224} + G_3+G_4+o(1)\bigg) X \log x , \end{align*} where \begin{align*} G_3 := \int_{249/224}^{182/157} \alpha \int_{\sigma}^{\alpha/2} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2} d\alpha < 0.093754. 
\end{align*} and \begin{align*} G_4:= \int f_4(\alpha, \bm{\beta})\alpha\omega \bigg( \frac{\alpha-\beta_1-\beta_2-\beta_3}{\beta_3}\bigg) \frac{d \beta_1 d\beta_2 d\beta_3}{\beta_1 \beta_2 \beta_3^2} d\alpha < 0.0057 \end{align*} with $f_4$ the characteristic function of the four dimensional set \begin{align*} \bigg\{\frac{249}{224}< \alpha & <\frac{182}{157}, \, \gamma < \beta_3< \beta_2< \beta_1 < \alpha-1 \\ & \beta_1+\beta_2, \, \beta_1+\beta_3, \, \beta_2+\beta_3, \, \beta_1+\beta_2+\beta_3 \notin [\alpha-1, \sigma] \bigg \} \end{align*} \subsubsection{Case $182/157 \leq \alpha < 153/128$} By applying Buchstab's identity we get \begin{align*} S(\mathcal{A}(P), 2 \sqrt{P})&= S(\mathcal{A}(P), x^{\alpha-1}) - \sum_{x^{\alpha-1} < q \leq 2\sqrt{P}} S(\mathcal{A}(P)_q, q) \\ &\leq S(\mathcal{A}(P), x^{\alpha-1}) - \sum_{x^{\alpha-1} <q \leq x^\sigma} S(\mathcal{A}(P)_q, q). \end{align*} For the first term we use the linear sieve upper bound (Lemma \ref{linearlemma}), while for the second term we have an asymptotic formula by Proposition \ref{typeiiprop}. Hence, by Lemmata \ref{bilemma} and \ref{linearlemma} we get an upper bound \begin{align*} S(\mathcal{A}(P), 2 \sqrt{P}) \leq (G_5(\alpha)-G_6(\alpha)+o(1)) X \int \psi_P(u)\frac{du}{u}, \end{align*} so that \begin{align*} \sum_{\substack{x^{182/157} \leq P \leq x^{153/128} \\ P= 2^j x}} S(x,P) \leq ( G_5-G_6+o(1)) X \log x , \end{align*} where \begin{align*} G_5 := e^{-\gamma}\int_{182/157}^{153/128}\frac{\alpha}{\alpha-1} F\bigg( \frac{1-2\theta\alpha}{(2-4\theta)(\alpha-1)}\bigg) d\alpha = 4(1-2\theta)\int_{182/157}^{153/128}\frac{\alpha}{1-2\theta\alpha} d\alpha < 0.17877 \end{align*} and \begin{align*} G_6 := \int_{182/157}^{153/128} \alpha \int_{\alpha-1}^{\sigma} \omega(\alpha/\beta-1) \frac{d\beta}{\beta^2} d\alpha > 0.016329. \end{align*} \begin{remark} Here also we could apply Buchstab's identity multiple times to generate more Type II sums, similarly as we did for $\alpha < 182/157$. 
However, for $\alpha > 182/157$ the width of our Type II information is $\gamma <0.048$, so that the gain from this would be fairly small (certainly less than $G_6$), and we ignore it to simplify the argument. \end{remark} \subsubsection{Case $\alpha > 153/128$} In the range $P \geq x^{153/128}$ we do not have any new information, so, using just the linear sieve upper bound (Lemma \ref{linearlemma}), we obtain \begin{align} \label{linearremainder} \sum_{\substack{x^{153/128} \leq P \leq x^\varpi \\ P= 2^j x}} S(x,P) \leq \bigg(4(1-2\theta) \int_{153/128}^\varpi \frac{\alpha}{1-2\theta\alpha} d \alpha+o(1)\bigg) X \log x . \end{align} \subsection{Conclusion of the proof of Theorem \ref{maint}} Summing over the estimates we get \begin{align*} \sum_{\substack{x\leq P \leq x^{153/128} \\ P= 2^j x}} S(x,P) \leq (25/157+G+o(1)) X \log x, \end{align*} where \begin{align*} 25/157 + G = 25/157+G_1+G_2+G_3+G_4+G_5-G_6 < 0.553361. \end{align*} Combining this with (\ref{linearremainder}), we have \begin{align*} \frac{1}{X \log x}\sum_{\substack{x\leq P \leq x^{1.279} \\ P= 2^j x}} S(x,P) < 0.553361 +4(1-2\theta) \int_{153/128}^{1.279} \frac{\alpha}{1-2\theta\alpha} d \alpha = 0.997\dots < 1, \end{align*} which proves Theorem \ref{maint} since otherwise we reach a contradiction with the asymptotic (\ref{cheby}). \qed \begin{remark} In comparison, just using the linear sieve upper bound gives \begin{align*} \sum_{\substack{x \leq P \leq x^{153/128} \\ P= 2^j x}} S(x,P) \leq \bigg(4(1-2\theta) \int_{1}^{153/128} \frac{\alpha}{1-2\theta\alpha} d \alpha+o(1)\bigg) X \log x < 0.8213 \cdot X \log x. \end{align*} \end{remark} \begin{remark} The method in \cite{BD} and \cite{DI} gives an asymptotic formula for $S(x,P)$ for $P \leq x$, but for $P=x^{1+\epsilon}$ the upper bound is off by a factor of $4+O(\epsilon)$. In contrast, we get the correct upper bound for $P=x^{1+\epsilon}$.
As $P=x^\alpha$ varies from $x$ to $x^{153/128}$ our method can be enhanced to give an upper bound which continuously increases from an asymptotic formula to the linear sieve upper bound (this would require a more careful handling of the part $182/157 \leq \alpha < 153/128$). This is in accordance with the general principle of Harman's sieve method that our sieve bounds should depend continuously on the quality of the arithmetic information. \end{remark} The Python 3.7 codes for computations of the Buchstab integrals are available at: \begin{tabular}{ c c c } &\hspace{50pt} $G_1$ & \quad \quad \url{http://codepad.org/e2RiL3TM} \\ &\hspace{50pt} $G_2$ &\quad \quad \url{http://codepad.org/i2BOT07g} \\ &\hspace{50pt} $G_3$ & \quad \quad\url{http://codepad.org/vMlImNKm} \\ &\hspace{50pt} $G_4$ &\quad \quad \url{http://codepad.org/DOxewic3} \\ &\hspace{50pt} $G_6$ &\quad \quad \url{http://codepad.org/IKZNttfN} \\ \end{tabular} \subsection{Proof of Theorem \ref{selbergt}} The sieve follows the same recipe as the proof of Theorem \ref{maint}. Assuming Selberg's conjecture we may set $\theta=0$, so that $D=x^{1/2}$, $U=x^{3/2-\alpha}=x^\xi$, and $\sigma = (2-\alpha)/3$. 
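As a bookkeeping check, the case endpoints here and in the proof of Theorem \ref{maint} solve the defining equations of the argument exactly: $\alpha-2\sigma=\xi$, $2(\alpha-1)=\sigma$, $2(\alpha-1)=\xi$ and $\sigma=\alpha-1$ (always ignoring $\eta$, and taking for $\sigma$ whichever of the two Type II exponents is larger in the relevant range). A quick sketch in exact rational arithmetic (function names are ours):

```python
from fractions import Fraction as F

def sigma_i(theta, alpha):   # Type II exponent from part (i)
    return (2 - 2 * theta - alpha) / 3

def sigma_ii(theta, alpha):  # Type II exponent from part (ii)
    return (4 - (3 + 2 * theta) * alpha) / (3 - 6 * theta)

def xi(theta, alpha):        # exponent of U = D x^{1 - alpha}
    return (1 - 2 * theta * alpha) / (2 - 4 * theta) - alpha + 1

theta = F(7, 64)  # Kim-Sarnak bound
a = F(758, 733); assert a - 2 * sigma_ii(theta, a) == xi(theta, a)  # case 1 endpoint
a = F(249, 224); assert 2 * (a - 1) == sigma_i(theta, a)            # case 2 endpoint
a = F(182, 157); assert 2 * (a - 1) == xi(theta, a)                 # case 3 endpoint
a = F(153, 128); assert sigma_i(theta, a) == a - 1                  # case 4 endpoint

theta = F(0)      # Selberg's conjecture; now part (i) is the stronger branch
a = F(17, 16); assert a - 2 * sigma_i(theta, a) == xi(theta, a)
a = F(8, 7);   assert 2 * (a - 1) == sigma_i(theta, a)
a = F(7, 6);   assert 2 * (a - 1) == xi(theta, a)
a = F(5, 4);   assert sigma_i(theta, a) == a - 1
```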
The reader will verify that now the ranges corresponding to the five ranges in the proof of Theorem 1 are $1\leq \alpha < 17/16$, $17/16 \leq \alpha < 8/7$, $8/7 \leq \alpha < 7/6$, $7/6 < \alpha < 5/4$ and $\alpha \geq 5/4.$ By a similar application of Buchstab's identities we get \begin{align*} \sum_{\substack{x\leq P \leq x^{5/4} \\ P= 2^j x}} S(x,P) \leq (1/6+F+o(1)) X \log x, \end{align*} where \begin{align*} 1/6 + F = 1/6+ F_1+F_2+F_3+F_4+F_5-F_6 < 0.679914 \end{align*} with \begin{align*} F_1 &:= \int_1^{17/16} \alpha\bigg( \int_{\sigma}^{\alpha-2 \sigma} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2} + \int_{\xi}^{\alpha/2} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2} \bigg) d\alpha < 0.0287 \\ F_2 &:= \int_{17/16}^{8/7} \alpha \int_{\sigma}^{\alpha/2} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2} d\alpha < 0.08622 \\ F_3 &:= \int_{8/7}^{7/6} \alpha \int_{\sigma}^{\alpha/2} \omega(\alpha/\beta-1) \frac{d \beta}{\beta^2} d\alpha < 0.03107 \\ F_4 &:= \int f_4(\alpha, \bm{\beta})\alpha\omega \bigg( \frac{\alpha-\beta_1-\beta_2-\beta_3}{\beta_3}\bigg) \frac{d \beta_1 d\beta_2 d\beta_3}{\beta_1 \beta_2 \beta_3^2} d\alpha < 0.00011 \\ F_5 &:= 4 \int_{7/6}^{5/4}\alpha d\alpha =29/72 \\ F_6 &:= \int_{7/6}^{5/4} \alpha \int_{\alpha-1}^{\sigma} \omega(\alpha/\beta-1) \frac{d\beta}{\beta^2} d\alpha > 0.035631 \end{align*} with $f_4$ the characteristic function of the four dimensional set \begin{align*} \bigg\{8/7< \alpha & <7/6, \, \gamma < \beta_3< \beta_2< \beta_1 < \alpha-1 \\ & \beta_1+\beta_2, \, \beta_1+\beta_3, \, \beta_2+\beta_3, \, \beta_1+\beta_2+\beta_3 \notin [\alpha-1, \sigma] \bigg \}. \end{align*} We also have by the linear sieve (Lemma \ref{linearlemma}) \begin{align*} \sum_{\substack{ x^{5/4} \leq P \leq x^{\varpi} \\ P= 2^j x}} S(x,P) \leq \bigg(4\int_{5/4}^\varpi \alpha d \alpha +o(1)\bigg) X \log x. 
\end{align*} Combining the two estimates we have \begin{align*} \frac{1}{X \log x}\sum_{\substack{x\leq P \leq x^{1.312} \\ P= 2^j x}} S(x,P) < 0.679914 + 4\int_{5/4}^{1.312} \alpha d \alpha = 0.997\dots < 1, \end{align*} which implies Theorem \ref{selbergt}. \qed \section{Type II information} \label{typeiisection} In this section we give a proof of Proposition \ref{typeiiprop}. Let us first give a non-rigorous sketch of the argument. \subsection{Sketch of the argument} Similarly as in \cite{iwaniec} and \cite{lemke}, in \cite[Th\'eor\`eme 5.2]{BD} de la Bret\`eche and Drappeau obtain asymptotic formulas for Type II sums by using the dispersion method of Linnik (cf. \cite[Section 8.3.3]{BD}). Our argument is more direct. We begin by applying the Poisson summation formula to evaluate $|\mathcal{A}_{mn}|$. For simplicity, let us assume that $(m,n)=1$ in the Type II sum in Proposition \ref{typeiiprop}. Then by the Poisson summation formula (Lemma \ref{poisson}) we can reduce the claim to showing that for $H=x^\epsilon P/x$ and for any bounded coefficients $c_h$ we have \begin{align*} \frac{1}{H} \sum_{1 \leq |h| \leq H} c_h \sum_{\substack{m \sim M \\ n \sim N \\ (m,n)=1}} a_m b_n \sum_{\substack{\nu \,\, (mn) \\ \nu^2+1 \equiv 0 \,\, (mn)}} e_{mn}(-h\nu) \, \ll x^{1-\eta}. \end{align*} \begin{remark} Note that the length of the exponential sum is $MN=P,$ while we need a bound that is a bit less than $x$. Thus, we need to save a power of $x$, and the larger $P$ is, the more we need to save. Since we need to apply the Cauchy-Schwarz inequality in the proof, all savings are essentially halved. For this reason we are unable to get an estimate for large $P$. \end{remark} \begin{remark} For a fixed $h$ this sum is the same as the bilinear sum in the work of Duke, Friedlander and Iwaniec \cite[Proposition 2]{DFI}. Note that in their work only a small saving over the trivial bound is required, that is, a bound $\ll P^{1-\eta}$.
In this case their method gives unconditionally the same range as one gets assuming Selberg's conjecture (ie. $x^{\eta} \ll N \ll x^{1/3-\eta}$). Our argument has a similar flavour to their proof, but in contrast we also make use of the average over the frequencies $h$. \end{remark} When we apply Cauchy-Schwarz we would like to simplify matters by keeping the sum over $\nu^2+1\equiv 0 \, (mn)$ `outside' while keeping the sum over $n$ `inside'. To facilitate this, recall that $b_n$ is supported on square-free integers. Hence, if we denote \begin{align*} Q:=Q(m):= \prod_{\substack{2 \leq p\leq 2N \\ p \equiv 1,2 \, \, (4) \\ p\, \nmid\, m}} p, \end{align*} then by the Chinese Remainder theorem we have (for $(m,n)=1$) \begin{align*} \sum_{\substack{\nu^2+1 \equiv 0 \,\, (mn)}} e_{mn}(-h\nu) =\frac{\rho(n)}{\rho(Q)} \sum_{\substack{\nu^2+1 \equiv 0 \,\, (mQ)}} e_{mn}(-h\nu). \end{align*} Let $\psi_M(m)$ denote a $C^\infty$-smooth majorant of $1_{m \sim M}.$ By the Cauchy-Schwarz inequality and by expanding the square afterwards we obtain \begin{align*} \sum_{m \sim M} &a_m \frac{1}{\rho(Q)} \sum_{\substack{\nu^2+1 \equiv 0 \,\, (m Q)}} \frac{1}{H} \sum_{1 \leq |h| \leq H} c_h \sum_{\substack{ n \sim N \\ (m,n)=1}} b_n \rho(n) e_{mn}(-h\nu) \\ & \pprec M^{1/2} \bigg( \sum_{m} \psi_M(m)\frac{1}{H^2} \sum_{1\leq |h_1|,|h_2| \leq H} c_{h_1} \overline{c_{h_2}} \sum_{\substack{n_1,n_2 \sim N \\ (m,n_1n_2)=1}} b_{n_1} \overline{b_{n_2}} \\ & \hspace{100pt} \frac{\rho(n_1)\rho(n_2)}{\rho(Q)} \sum_{\substack{\nu^2+1 \equiv 0 \,\, (mQ)}} e_{mn_1}(-h_1\nu) e_{mn_2}(h_2\nu)\bigg)^{1/2} \\ &\pprec M^{1/2} \bigg( \frac{1}{H^2} \sum_{1\leq |h_1|,|h_2| \leq H} c_{h_1} \overline{c_{h_2}} \sum_{n_0\ll N} \rho(n_0) \sum_{\substack{n_1,n_2 \sim N/n_0 \\ (n_1,n_2)=1}} b_{n_0n_1} \overline{b_{n_0 n_2}} \\ & \hspace{70pt} \sum_{(m,n_0n_1n_2)=1} \psi_M(m) \sum_{\substack{\nu^2+1 \equiv 0 \,\, (m n_0n_1n_2 )}} e_{mn_0n_1n_2}((h_2n_1-h_1n_2)\nu)\bigg)^{1/2} \end{align*} by denoting 
$n_0=(n_1,n_2)$ and by using the Chinese Remainder Theorem to collapse the sum over $\nu^2+1 \equiv 0 \,\, (mQ)$ back to a sum over $\nu^2+1\equiv 0 \,\, (mn_0n_1n_2)$. In the diagonal part $h_1n_2-h_2n_1=0$ we use a trivial estimate to get a bound \begin{align*} \pprec M^{1/2} \bigg( \frac{1}{H^2} HNM \bigg)^{1/2} \ll MN^{1/2}H^{-1/2} \ll x^{1/2}P^{1/2} N^{-1/2} < x^{1-\eta}, \end{align*} since $H>P/x$ and $N\gg x^{\alpha-1+\eta}$. For the off-diagonal $h_1n_2-h_2n_1\neq 0$ we can introduce Kloosterman sums by a similar argument as in \cite[Section 5]{DI} to get a sum of the type \begin{align*} \sum_{r} \sum_{\substack{m \sim \bm{M}\\ n \sim \bm{N}}}A_{m,r} B_{n,r} \sum_{(c,r)=1} g(m,n,c,r) S(m \overline{r},\pm n;c) \end{align*} where $g(m,n,c,r)$ is a $C^{\infty}$-smooth function. Here $r$ corresponds to $n_0n_1n_2$, $n$ corresponds to $h_1n_2-h_2n_1$, and $m$ is the frequency parameter that arises from completing an incomplete Kloosterman sum by using Lemma \ref{complete}. Unfortunately both of the coefficients $A_{m,r}$ and $B_{n,r}$ depend on $r$, so that we are unable to make use of the average over the `level variable' $r$ (as in \cite[Theorem 10]{DI2}). By combining the bound $\theta \leq 7/64$ of Kim and Sarnak \cite[Appendix 2]{KS} with the estimate of Deshouillers and Iwaniec \cite[Theorem 9]{DI2} we can bound \begin{align*} \sum_{\substack{m \sim \bm{M}\\ n \sim \bm{N}}}A_{m,r} B_{n,r} \sum_{(c,r)=1} g(m,n,c,r) S(m \overline{r},\pm n;c) \end{align*} for each $r$ individually, which gives a sufficient bound as long as $N \ll x^{(2-2\theta-\alpha)/3}$ for $\theta=7/64$. \subsection{Sizes of various quantities in the proof} In the proof of Proposition \ref{typeiiprop}(i) below there will appear numerous quantities.
Here we have collected their sizes and relations to one another: \begin{align*} &P= x^{\alpha}, \quad \quad MN=P, \quad \quad x^{\alpha-1+\eta} \ll N \ll x^{(2-2\theta-\alpha)/3-\eta}=x^{(57-32\alpha)/96 - \eta}, \\ &H= x^\epsilon P/x, \quad \quad k \ll M, \quad \quad 1 \ll R, S \ll \frac{P^{1/2}N^{1/2}}{k^{1/2}n_0^{1/2}}, \\ &T = x^\epsilon \frac{S \delta N^2 }{R n_0}, \quad \quad H_1, H_2 \ll H, \quad \quad \varrho = \delta k^2 n_0 n_1n_2 \asymp \delta N^2/n_0, \\ &\bm{M} \ll T, \quad \quad \bm{N} \ll \frac{HN}{kn_0}, \quad \quad \text{and} \quad \quad C \ll S. \end{align*} \subsection{Preliminaries} We have collected here some basic estimates which will be needed in the proof. \begin{lemma} \label{gcdsum} Let $L \geq 1.$ For any integer $q \neq 0$ we have \begin{align*} \sum_{1 \leq \ell \leq L} (\ell, q) \leq \tau(q) L. \end{align*} \end{lemma} \begin{proof} We have \begin{align*} \sum_{1 \leq \ell \leq L} (\ell, q) \leq \sum_{d| q} d \sum_{\substack{1\leq \ell \leq L \\ d \,|\, \ell}} 1 \leq \sum_{d| q} d \cdot \frac{L}{d} = \tau(q)L. \end{align*} \end{proof} The following lemma is easily proved from \cite[Lemma 1]{DI} by using integration by parts multiple times. \begin{lemma} \emph{\textbf{(Truncated Poisson summation formula).}} \label{poisson} Let $\psi$ be a fixed $C^\infty$-smooth compactly supported function and let $x \gg 1$. Let $q \geq 1$ be an integer. Then for any $A, \epsilon > 0$ \begin{align*} \sum_{n \equiv a \, (q)} \psi\bigg(\frac{n}{x}\bigg) = \frac{1}{q} \int \psi \bigg(\frac{\xi}{x}\bigg) d\xi + \frac{x}{q} \sum_{1 \leq |h| \leq x^\epsilon q/x} \widehat{\psi} \bigg( \frac{h x}{q}\bigg) e \bigg(-\frac{ah}{q} \bigg) + O_{A,\epsilon,\psi}(x^{-A}), \end{align*} where $\hat{f}(h):= \int f(\xi)e(h\xi) d\xi$ is the Fourier transform. \end{lemma} Applying the above lemma we immediately infer \begin{lemma} \emph{\textbf{(Completion of sums).}} \label{complete} Let $\psi$ be a fixed $C^\infty$-smooth compactly supported function and let $x \gg 1$. Let $q \geq 1$ be an integer.
Suppose that $F:\mathbb{N} \to \mathbb{C}$ is a $q$-periodic function. Then for any $A, \epsilon > 0$ \begin{align*} \sum_{n} \psi\bigg(\frac{n}{x}\bigg) F(n) = \frac{x}{q} \sum_{0 \leq |h| \leq x^\epsilon q/x} \widehat{\psi} \bigg( \frac{h x}{q}\bigg) \sum_{a \in \mathbb{Z}/q\mathbb{Z} } F(a)e_q (-ah) + O_{A,\epsilon,\psi}\bigg(x^{-A} \sum_{a \in \mathbb{Z}/q\mathbb{Z}} |F(a)| \bigg). \end{align*} \end{lemma} To state the next lemma, for any sequence $a_m$ and any $M > 0$ define the $\ell^2$-norm \begin{align*} \|a_M \|_2 := \bigg( \sum_{m \sim M} |a_m|^2 \bigg)^{1/2}. \end{align*} Let $\lambda_1(q)$ denote the smallest eigenvalue of the Laplacian on $\Gamma_0(q)\backslash \mathbb{H}$ (cf. \cite[Section 1]{DI2} for precise definitions). The Selberg eigenvalue conjecture famously states that for every congruence subgroup $\Gamma$ the smallest eigenvalue $\lambda_1(\Gamma)$ is at least 1/4. The current best result towards this is the result of Kim and Sarnak \cite[Proposition 2 in Appendix 2]{KS} which gives the lower bound $\lambda_1(\Gamma)\geq 1/4-(7/64)^2$. By combining this with \cite[Theorem 9]{DI2} of Deshouillers and Iwaniec, we get \begin{lemma}\emph{\textbf{(Deshouillers-Iwaniec $+$ Kim-Sarnak).}} \label{dilemma} Let $\theta=7/64,$ and let $r$ be a positive integer. Let $C,M,N > 0$ and let $g(m,n,c)$ be a $C^\infty$-smooth function, supported in \begin{align*} [M,2M]\times [N,2N] \times [C, 2C] \end{align*} and satisfying \begin{align*} \bigg| \frac{\partial^{j+k+\ell}}{\partial m^j \partial n^k \partial c^\ell} g(m,n,c)\bigg| \ll M^{-j} N^{-k} C^{-\ell} \quad \text{for} \,\, 0 \leq j,k,\ell \leq 2. 
\end{align*} Then for any coefficients $a_m$ and $b_n$ we have \begin{align*} \sum_{\substack{m,n,c \\ (c,r)=1}} a_m b_n g(m,n,c) S(m \overline{r},\pm n;c) \pprec& \bigg(1+ \frac{\sqrt{r}C }{\sqrt{MN}} \bigg)^{2 \theta} \, \mathcal{L} \, \|a_M\|_2 \|b_N\|_2, \end{align*} where \begin{align*} \mathcal{L} = \frac{(\sqrt{r} C + \sqrt{MN} +\sqrt{M} C )(\sqrt{r} C + \sqrt{MN} +\sqrt{N} C )}{\sqrt{r} C + \sqrt{MN} } . \end{align*} \end{lemma} \begin{remark} In the statement in \cite[Theorem 9]{DI2} there is a typographical error: the factor $(1+ \frac{\sqrt{rC}}{\sqrt{MN}} )$ should be $(1+ \frac{\sqrt{r}C }{\sqrt{MN}})$. \end{remark} To apply the above lemma we need an upper bound for the average value of $\|b_N\|_2$: \begin{lemma} \label{bNaveragelemma} Let $H_1,H_2,N,K \gg 1$ and $H_1\geq H_2$. Then \begin{align*} S:= \sum_{k_1,k_2 \sim K} \bigg( \sum_{n \sim N} \bigg| \sum_{\substack{h_1\sim H_1 \\ h_2 \sim H_2}} 1_{h_1k_2-h_2k_1=n} \bigg|^2 \bigg)^{1/2} \, \ll N^{1/2} \max\{ KH_1, K^{3/2} H_1^{1/2} \}. \end{align*} \end{lemma} \begin{proof} If $H_1 \geq K$, then trivially $S \ll N^{1/2} K H_1$, since the number of solutions $(h_1,h_2)$ to $h_1k_2-h_2k_1=n$ is bounded by $\ll H_1/k_1 +1 \ll H_1/K$. If $H_1 < K$, then by the Cauchy-Schwarz inequality \begin{align*} S &\ll K \bigg( \sum_{\substack{h_1,h_1' \sim H_1 \\ h_2,h_2' \sim H_2}} \sum_{\substack{k_1,k_2 \sim K \\ h_1k_2-h_2 k_1 \sim N}} 1_{k_2(h_1-h_1')=k_1(h_2-h_2')} \bigg)^{1/2} \\ & \ll K H_1 \bigg( \sum_{\substack{ |\ell_1| \ll H_1 \\ |\ell_2| \ll H_2}}\max_{\substack{h_1 \sim H_1\\ h_2 \sim H_2}} \sum_{\substack{k_1,k_2 \sim K \\ h_2 k_1-h_1k_2 \sim N}} 1_{k_2\ell_1 =k_1\ell_2} \bigg)^{1/2} \\ & \ll K H_1 \bigg( \max_{\substack{h_1 \sim H_1\\ h_2 \sim H_2}} \sum_{n\sim N} \sum_{\substack{k_1,k_2 \sim K }} 1_{h_1k_2-h_2k_1=n} \bigg)^{1/2} \ll K H_1 \bigg( N \frac{K}{H_1}\bigg)^{1/2} = N^{1/2}K^{3/2} H_1^{1/2}.
\end{align*} \end{proof} For the proof of Proposition \ref{typeiiprop}(ii) we require the following lemma of de la Bret\`eche and Drappeau \cite[Lemme 8.3, part 1.]{BD} (applied with $r=d=1$ and $D=-1$), which makes explicit the dependence on $\theta$ of the result of Duke, Friedlander and Iwaniec \cite[Proposition 4]{DFI} (for $\theta=1/4$ they give essentially the same result). \begin{lemma} \label{bdlemma} Let $\theta=7/64$ and fix an integer $q \geq 1$. Suppose that $|h| \leq q$, $M \gg1$, and let $\psi$ be a fixed $C^\infty$-smooth compactly supported function. Then \begin{align*} \sum_{(m,q)=1} \psi(m/M) \sum_{\nu^2+1 \equiv 0 \,\, (mq)} e_{mq}(h\nu) \, \pprec |h| \, + \, (q,h)^\theta q^{1/2-\theta}M^{1/2+\theta}. \end{align*} \end{lemma} \subsection{Evaluation of $|\mathcal{A}_{mn}|$ by Poisson Summation} \label{poissonsection} We are now in a position to begin the proof of Proposition \ref{typeiiprop}. We will first prove part (i) and then part (ii). By the Truncated Poisson summation formula (Lemma \ref{poisson}) we have for any $\epsilon > 0$ \begin{align*} |\mathcal{A}_{mn}| &= \sum_{\ell^2+1 \equiv 0 \,\, (mn)} b(\ell) = \sum_{\substack{\nu \,\, (mn) \\ \nu^2+1 \equiv 0 \,\, (mn)}} \sum_{\ell \equiv \nu \,\, (mn)} b(\ell) \\ & = \frac{\rho(mn)}{mn} X + r(\mathcal{A},mn) + O_{A,\epsilon} (x^{-A} ), \end{align*} where, for $\psi(z):=b(xz)$ and $H:= x^\epsilon P/x$, we have \begin{align*} r(\mathcal{A},mn) = \frac{x}{mn} \sum_{1 \leq |h| \leq H} \widehat{\psi}(hx/mn)\sum_{\substack{\nu \,\, (mn) \\ \nu^2+1 \equiv 0 \,\, (mn)}} e_{mn}(-h\nu). \end{align*} The smooth `cross-conditions' $\widehat{\psi}(hx/mn)$ and $\psi_P(mn)\log mn$ may be removed by applying the Mellin transform (similarly as one can use Perron's formula to remove cross-conditions as in \cite[Chapter 3.2]{harman}). Hence, Proposition \ref{typeiiprop} follows once we show \begin{prop} \label{expprop} Let $c_h$ be any bounded coefficients.
Adopting the assumptions of Proposition \ref{typeiiprop}, for $H:=x^\epsilon P/x$ we have \begin{align} \label{expsum} \Sigma(M,N):=\frac{1}{H} \sum_{1 \leq |h| \leq H} c_h \sum_{\substack{m \sim M \\ n \sim N}} a_m b_n \sum_{\substack{\nu \,\, (mn) \\ \nu^2+1 \equiv 0 \,\, (mn)}} e_{mn}(-h\nu) \, \ll x^{1-\eta}. \end{align} \end{prop} Our proof of Proposition \ref{typeiiprop}(i) actually gives the following general bound, which we state only in the case $H \ll N $ for simplicity. \begin{prop} Let $M,N,H \geq 1$ with $H\ll N $ and let $a_m$, $b_n$ and $c_h$ be divisor-bounded coefficients. Assume that $b_n$ is supported on square-free integers. Then \begin{align*} \frac{1}{H} \sum_{1 \leq |h| \leq H} c_h & \sum_{\substack{m \sim M \\ n \sim N}} a_m b_n \sum_{\substack{\nu \,\, (mn) \\ \nu^2+1 \equiv 0 \,\, (mn)}} e_{mn}(-h\nu) \\ &\pprec \frac{M N^{1/2}}{H^{1/2}} + \sqrt{HMN} + H^{1/2}M^{1/4}N + M^{3/4}N^{1/2} + \frac{M^{3/4-\theta/2}N^{3/2+\theta/2}}{H^{1/2+\theta/2}} . \end{align*} \end{prop} \subsection{Application of the Cauchy-Schwarz inequality} \label{cssection} Let us write $k=(m,n)$ and make the change of variables $m \mapsto km$ and $n \mapsto kn$ to get \begin{align*} \Sigma(M,N)= \sum_{k\ll N} \Sigma_{k}(M,N) \end{align*} for \begin{align*} \Sigma_k(M,N):= \sum_{\substack{m \sim M/k}} a_{km} \frac{1}{H} \sum_{1 \leq |h| \leq H} c_h \sum_{\substack{n \sim N/k\\ (n,km)=1}}b_{kn} \sum_{\substack{\nu^2+1 \equiv 0 \,\, (k^2mn)}} e_{k^2mn}(-h\nu). \end{align*} We will show that $\Sigma_k(M,N) \pprec x^{1-\eta}/k$ (in the first pass the reader may wish to restrict to the case $k=1$). Before applying the Cauchy-Schwarz inequality we note that by the Chinese Remainder Theorem for any coprime integers $a,b$ the solutions to $\nu^2+1 \equiv 0 \,(ab)$ are in one-to-one correspondence to the solutions to the pair of equations $\alpha^2+1 \equiv 0 \,(a), \,\beta^2+1 \equiv 0 \,(b)$. 
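Concretely, this correspondence (and with it the multiplicativity of $\rho$) can be seen on a small example; a quick brute-force sketch (the function name is ours):

```python
def roots_minus_one(q):
    """All residues nu (mod q) with nu^2 + 1 == 0 (mod q), by brute force."""
    return [nu for nu in range(q) if (nu * nu + 1) % q == 0]

# The roots mod 65 = 5 * 13 are exactly the CRT combinations of the
# roots mod 5 and mod 13, so rho(65) = rho(5) * rho(13) = 4.
assert roots_minus_one(5) == [2, 3]
assert roots_minus_one(13) == [5, 8]
assert roots_minus_one(65) == [8, 18, 47, 57]
assert len(roots_minus_one(65)) == len(roots_minus_one(5)) * len(roots_minus_one(13))
```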
Thus, denoting \begin{align*} Q=Q(km):= \prod_{\substack{2 \leq p\leq 2N \\ p \equiv 1,2 \, \, (4) \\ p\, \nmid\, km}} p, \end{align*} we have \begin{align*} \sum_{\substack{\nu^2+1 \equiv 0 \,\, (k^2mn)}} e_{k^2mn}(-h\nu) =\frac{\rho(n)}{\rho(Q)} \sum_{\substack{\nu^2+1 \equiv 0 \,\, (k^2 m Q)}} e_{k^2mn}(-h\nu) \end{align*} by using the fact that $b_n$ is supported on square-free integers. Inserting this and applying the Cauchy-Schwarz inequality we get \begin{align} \nonumber \Sigma_k(M,N) \pprec & \frac{\sqrt{M}}{\sqrt{k}} \bigg( \sum_m \psi_{M}(km) \frac{1}{\rho(Q)} \sum_{\substack{\nu^2+1 \equiv 0 \,\, (k^2mQ)}} \bigg| \frac{1}{H} \sum_{h} c_h \sum_{\substack{n \sim N/k\\(n,m)=1}} b_{kn} \rho(n) e_{k^2mn}(-h\nu) \bigg|^2 \bigg)^{1/2} \\ \label{cs} =& \frac{\sqrt{M}}{\sqrt{k}} \bigg( \frac{1}{H^2} \sum_{1\leq |h_1|,|h_2| \leq H} c_{h_1} \overline{c_{h_2}} \sum_m \psi_{M}(km) \sum_{\substack{n_1,n_2 \sim N/k\\ (n_1n_2,m)=1}} b_{kn_1} \overline{b_{kn_2}} \\ \nonumber & \hspace{100pt} \frac{\rho(n_1)\rho(n_2)}{\rho(Q)} \sum_{\substack{\nu^2+1 \equiv 0 \,\, (k^2mQ)}} e_{k^2mn_1}(-h_1\nu) e_{k^2mn_2}(h_2\nu) \bigg)^{1/2}. \end{align} Denote $n_0:= (n_1,n_2),$ and make the change of variables $n_j \mapsto n_0n_j$ in the above sum. Since $n_0n_1n_2$ is square-free and coprime to $km$, by the Chinese Remainder Theorem we obtain \begin{align*} \frac{\rho(n_0 n_1)\rho(n_0 n_2)}{\rho(Q)} \sum_{\substack{\nu^2+1 \equiv 0 \,\, (k^2 m Q)}} & e_{k^2mn_0n_2}(h_2\nu)e_{k^2mn_0 n_1}(-h_1\nu) \\ &= \frac{\rho(n_0 n_1)\rho(n_0 n_2)}{\rho(Q )} \sum_{\substack{\nu^2+1 \equiv 0 \,\, (k^2m Q )}} e_{k^2mn_0n_1n_2}((h_2n_1-h_1n_2)\nu) \\ & = \rho(n_0) \sum_{\substack{\nu^2+1 \equiv 0 \,\, (k^2 m n_0n_1n_2 )}} e_{k^2mn_0n_1n_2}((h_2n_1-h_1n_2)\nu). 
\end{align*} Hence, we obtain $\Sigma_k(M,N)^2 \pprec (M/k) \cdot \Xi_k(M,N)$, where \begin{align*} \Xi_k(M,N):= \frac{1}{H^2}& \sum_{1\leq |h_1|,|h_2| \leq H} c_{h_1} \overline{c_{h_2}} \sum_{n_0\ll N} \rho(n_0) \sum_{\substack{n_1,n_2 \sim N/kn_0 \\ (n_1,n_2)=1}} b_{kn_0n_1} \overline{b_{kn_0 n_2}} \\ & \sum_{(m,n_0n_1n_2)=1} \psi_{M}(km) \sum_{\substack{\nu^2+1 \equiv 0 \,\, (k^2 m n_0n_1n_2 )}} e_{k^2mn_0n_1n_2}((h_2n_1-h_1n_2)\nu) . \end{align*} We immediately note that the contribution from the diagonal $h_1n_2-h_2n_1 =0$ to $\Xi_k(M,N)$ is trivially bounded by \begin{align*} \pprec \frac{M}{kH^2} \sum_{n_0 \ll N} \sum_{1 \leq |h_1|,|h_2| \leq 2 H} \sum_{n_1,n_2 \ll N/kn_0} 1_{h_1n_2=h_2n_1} \pprec \frac{MN}{k H}, \end{align*} which contributes to $\Sigma_k(M,N)$ at most \begin{align} \label{diagtrue} \pprec \frac{1}{k} M^{1/2} \bigg(\frac{MN}{H} \bigg)^{1/2} = \frac{MN^{1/2}}{kH^{1/2}} \ll \frac{1}{k} x^{1/2} P^{1/2} N^{-1/2} \ll x^{1-\eta}/k \end{align} by using $H=x^\epsilon P/x$ and the assumption $N \gg x^{\alpha-1+\eta}$. Therefore, we may assume below that $h_1n_2-h_2n_1 \neq 0$. \subsection{Introducing Kloosterman sums} We expand the condition $(m, n_0 n_1n_2)=1$ by using the M\"obius function to get \begin{align*} \sum_{(m, n_0n_1n_2) = 1} = \sum_{\delta | n_0 n_1n_2} \mu (\delta) \sum_{\substack{m \\ \delta | m}}. \end{align*} In the first pass the reader may wish to pretend that $\delta=1$ below. 
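The diagonal bound (\ref{diagtrue}) rests on counting quadruples with $h_1n_2 = h_2n_1$: grouping them by the reduced fraction $h_1/h_2 = n_1/n_2$ shows that there are at most $2HN(1+\log\min\{H,N\})$ of them, which after dividing by $H^2$ gives the stated estimate. A brute-force Python check of this elementary count (an illustration only):

```python
from math import log

def diagonal_count(H, N):
    """#{(h1, h2, n1, n2) in [1,H]^2 x [1,N]^2 with h1*n2 == h2*n1}."""
    return sum(1 for h1 in range(1, H + 1) for h2 in range(1, H + 1)
               for n1 in range(1, N + 1) for n2 in range(1, N + 1)
               if h1 * n2 == h2 * n1)

# Writing h1/h2 = n1/n2 = a/b in lowest terms and setting m = max(a, b)
# bounds the count by sum over m of (2m)(H/m)(N/m) <= 2HN(1 + log min(H, N)).
for H, N in [(10, 25), (20, 20), (25, 10)]:
    assert diagonal_count(H, N) <= 2 * H * N * (1 + log(min(H, N)))
```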
Let us denote $\ell := m k^2 n_0 n_1n_2,$ so that the condition $\delta | m$ can be written as $\delta k^2 n_0 n_1 n_2 | \ell$ and \begin{align*} \Xi_k(M,N)=& \frac{1}{H^2} \sum_{1\leq |h_1|,|h_2| \leq H} c_{h_1} \overline{c_{h_2}} \sum_{n_0\ll N} \rho(n_0) \sum_{\substack{n_1,n_2 \sim N/kn_0 \\ (n_1,n_2)=1 \\ h_1n_2-h_2n_1 \neq 0}} b_{kn_0n_1} \overline{b_{kn_0n_2}} \sum_{\delta| n_0n_1n_2}\mu(\delta) \\ & \sum_{\ell \equiv 0 \, \, (\delta k^2 n_0 n_1n_2)} \psi_{M}\bigg(\frac{\ell}{k n_0n_1n_2}\bigg) \sum_{\substack{\nu^2+1\equiv 0 \, (\ell)}} e_{\ell} ((h_2n_1 -h_1 n_2)\nu ) + O(MN/kH). \end{align*} The variable $\ell$ is of size $ P N / k n_0.$ To proceed we require the following Lemma of Gauss (cf. \cite[Lemma 2]{DI}): \begin{lemma} \label{gauss} If the equation $\nu^2+1 \equiv 0 \,\, (\ell)$ has a solution, then $\ell$ has a representation as a sum of two squares \begin{align*} \ell=r^2+s^2, \quad (r,s)=1, \quad r,s >0. \end{align*} Furthermore, there is a one-to-one correspondence between such representations and the solutions to $\nu^2+1 \equiv 0 \,\, (\ell),$ and we have \begin{align*} \frac{\nu }{\ell} \equiv \frac{r}{s(r^2+s^2)} - \frac{\overline{r}}{s} \mod 1. \end{align*} \end{lemma} Applying this lemma we get \begin{align*} e_{\ell} \bigg((h_2n_1 -h_1 n_2)\nu \bigg) = e_{s} \bigg(\frac{h_1n_2 -h_2 n_1}{r} \bigg)\bigg(1+ O \bigg( \frac{Hr}{Ps}\bigg) \bigg). 
\end{align*} The contribution from the $O$-term to $\Sigma_k(M,N)$ is trivially bounded by \begin{align} \label{nonarithm} & \frac{\sqrt{M}}{\sqrt{k}} \bigg( \frac{H}{P} \sum_{n_0 \ll N} \sum_{\substack{n_1,n_2 \sim N/kn_0}} \sum_{\delta| n_0n_1n_2} \max_t \sum_{s \ll(PN/kn_0)^{1/2}} \frac{1}{s} \sum_{\substack{r \ll (PN/kn_0)^{1/2} \\ r \equiv t \, (\delta k^2 n_0 n_1n_2)}} r \bigg)^{1/2} \\ \nonumber & \pprec \frac{\sqrt{M}}{\sqrt{k}} \bigg( \frac{H}{P} \sum_{n_0 \ll N} \sum_{\substack{n_1,n_2 \sim N/kn_0}} \sum_{\delta| n_0 n_1n_2} \frac{P^{1/2} N^{1/2}}{(n_0k)^{1/2}}\bigg( \frac{P^{1/2}N^{1/2}}{k^{5/2}\delta n_0^{3/2} n_1n_2 } + 1\bigg) \bigg)^{1/2} \\ \nonumber &\pprec \frac{1}{k} ( M^{1/2} H^{1/2} N^{1/2} + M^{1/2} H^{1/2} N^{5/4}P^{-1/4}) \\ \label{nonarithmcontribution} & = \frac{x^{\epsilon/2}}{k} (Px^{-1/2} + P^{3/4}N^{3/4}x^{-1/2})\ll x^{1-\eta}/k \end{align} since from the assumptions it follows that $\alpha < 3/2-\eta$ and $N < x^{2-\alpha - \eta}.$ Hence, we have $\Xi_k(M,N)= \widetilde{\Xi_k}(M,N) +O(\mathcal{E}),$ where $(M/k)^{1/2} \mathcal{E}^{1/2} < x^{1-\eta}/k$ and \begin{align*} \widetilde{\Xi_k}(M,N):= \frac{1}{H^2} \sum_{1\leq |h_1|,|h_2| \leq H} c_{h_1} \overline{c_{h_2}} & \sum_{n_0\ll N} \rho(n_0) \sum_{\substack{n_1,n_2 \sim N/kn_0 \\ (n_1,n_2)=1 \\ h_1n_2-h_2n_1 \neq 0}} b_{kn_0n_1} \overline{b_{kn_0n_2}} \sum_{\delta| n_0n_1n_2}\mu (\delta) \\ &\sum_{\substack{r,s > 0\\ (r,s)=1\\r^2 \equiv -s^2 \, (\delta k^2 n_0n_1n_2)}} \psi_{M}\bigg(\frac{r^2+s^2}{k n_0n_1n_2}\bigg) e_{s} \bigg(\frac{h_1n_2 -h_2 n_1}{r} \bigg). 
\end{align*} \subsection{Completing the sum} By a smooth dyadic partition of unity for the variables $r$ and $s$, we can split $\widetilde{\Xi_k}(M,N)$ into $\ll \log^2 x$ sums of the form \begin{align*} \Psi_k(R,S):= \frac{1}{H^2} \sum_{1\leq |h_1|,|h_2| \leq H} c_{h_1} \overline{c_{h_2}} & \sum_{n_0\ll N} \rho(n_0) \sum_{\substack{n_1,n_2 \sim N/kn_0 \\ (n_1,n_2)=1 \\ h_1n_2-h_2n_1 \neq 0}} b_{kn_0n_1} \overline{b_{kn_0n_2}} \sum_{\delta| n_0n_1n_2}\mu (\delta) \\ &\sum_{\substack{(r,s)=1\\r^2 \equiv -s^2 \, (\delta k^2 n_0n_1n_2)}} g(r,s,n_0n_1n_2) e_{s} \bigg(\frac{h_1n_2 -h_2 n_1}{r} \bigg), \end{align*} where \begin{align*} g(r,s,n_0n_1n_2) := \psi_R(r) \psi_S(s) \psi_{M}\bigg(\frac{r^2+s^2}{k n_0n_1n_2}\bigg) \end{align*} with $\psi_R(r)$ (similarly for $\psi_S(s)$) a $C^\infty$-smooth function supported on $[R,2R]$ and satisfying $\psi_R^{(i)}(r) \ll_i R^{-i}$ for all $i\geq 0$, where \begin{align*} 1 \ll R, S \ll \frac{P^{1/2}N^{1/2}}{k^{1/2} n_0^{1/2}} \quad \text{and} \quad \max\{R,S\} \gg \frac{P^{1/2}N^{1/2}}{k^{1/2} n_0^{1/2}}. \end{align*} For each $R$ and $S$ we can now complete the sum over $r$ by using the Poisson summation formula (Lemma \ref{complete}), similarly as in \cite[Section 5]{DI}. 
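Before carrying out the completion, we note that the correspondence of Lemma \ref{gauss} is easy to test numerically for small moduli. In the Python sketch below (an illustration only; attaching the root $\nu \equiv r\overline{s} \,\, (\ell)$ to the representation $\ell = r^2+s^2$ is our normalization of the correspondence), the congruence for $\nu/\ell$ is checked after clearing denominators:

```python
from math import gcd, isqrt

def roots(q):
    """Solutions of v^2 + 1 = 0 (mod q)."""
    return [v for v in range(q) if (v * v + 1) % q == 0]

def reps(q):
    """Representations q = r^2 + s^2 with r, s > 0 and (r, s) = 1."""
    out = []
    for r in range(1, isqrt(q - 1) + 1):
        s = isqrt(q - r * r)
        if s > 0 and s * s == q - r * r and gcd(r, s) == 1:
            out.append((r, s))
    return out

for q in [2, 5, 13, 25, 65, 85]:
    # (r, s) -> r * s^{-1} (mod q) is a bijection onto the roots of v^2 = -1
    assert sorted(r * pow(s, -1, q) % q for r, s in reps(q)) == roots(q)
    for r, s in reps(q):
        nu = r * pow(s, -1, q) % q
        # nu/q = r/(s q) - rbar/s (mod 1), multiplied through by s*q:
        assert (nu * s + pow(r, -1, s) * q - r) % (s * q) == 0
```

(The three-argument `pow` with exponent $-1$ computes modular inverses; for $s=1$ it returns $0$, which is the correct inverse modulo $1$.)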
The modulus of the sum is of size $S\delta k^2 n_0n_1n_2 \asymp S \delta N^2/n_0$, and the length of the sum is $R,$ so that for \begin{align*} T:= x^{\epsilon} \frac{S \delta N^2}{R n_0 } \end{align*} we get by Lemma \ref{complete} \begin{align} \nonumber & \sum_{\substack{r \\(r,s)=1\\ r^2 \equiv -s^2 \, (\delta k^2 n_0 n_1n_2)}} g(r,s,n_0 n_1n_2) e_{s} \bigg(\frac{h_1n_2 -h_2 n_1}{r} \bigg) +O_{A,\epsilon}(x^{-A}) \\ \label{precomplete} & \hspace{25pt}= \frac{x^\epsilon}{T} \sum_{|t| \leq T} G(t,s,n_0 n_1n_2)\sum_{\substack{u \, (s\delta k^2 n_0n_1n_2)\\ (u,s)=1 \\ u^2\equiv -s^2 \, (\delta k^2 n_0 n_1n_2)}} e_{s} \bigg(\frac{h_1n_2 -h_2 n_1}{u} \bigg) e_{s\delta k^2 n_0 n_1n_2}(-tu), \end{align} where \begin{align*} G(t,s,n_0 n_1n_2) = \frac{R T}{ x^\epsilon s\delta k^2 n_0n_1n_2} \widehat{f }_{s,n_0,n_1,n_2}(tR/ s\delta k^2 n_0n_1n_2) \end{align*} for \begin{align*} f_{s,n_0,n_1,n_2}(x) := g(Rx,s,n_0n_1n_2) \end{align*} (so that the function $G$ is bounded). By writing $u=\alpha s + \beta \delta k^2 n_0n_1n_2$ (note that $(u,s)=1$ implies $(s,\delta k^2 n_0n_1n_2)=1$) the right-hand side in (\ref{precomplete}) is equal to \begin{align*} \frac{x^\epsilon}{T} \sum_{|t| \leq T} G(t,s,n_0 n_1n_2) \sum_{\alpha^2+1 \equiv 0 \, (\delta k^2 n_0 n_1n_2)} e_{\delta k^2 n_0 n_1n_2}(-t \alpha) \sum_{\substack{\beta \, (s) \\ (\beta,s)=1}} e_{s} \bigg(\frac{h_1n_2 -h_2 n_1}{\delta k^2 n_0n_1n_2\beta} - t \beta \bigg). \end{align*} The contribution from $t=0$ to $\Psi_k(R,S)$ is by a standard bound for Ramanujan's sums bounded by (using Lemma \ref{gcdsum}) \begin{align*} \pprec \frac{1}{T H^2}\sum_{h_1,h_2} \sum_{n_0 \ll N} \sum_{\substack{n_1,n_2 \sim N/kn_0 \\ (n_1,n_2)=1\\ h_1n_2-h_2n_1 \neq 0}} \sum_{s} (h_1n_2-h_2 n_1, s) \pprec \sum_{n_0 \ll N} \frac{S N^2}{Tk^2n_0^2} \ll P^{1/2} N^{1/2} k^{-2}. 
\end{align*} The contribution from this to $\Sigma_k(M,N)$ is \begin{align} \label{t0contribution} \pprec M^{1/2} P^{1/4} N^{1/4}/k = P^{3/4} N^{-1/4}/k \ll x^{1-\eta}/k \end{align} since $N \gg x^{\alpha -1+ \eta} \gg x^{3\alpha-4 + \eta}$ for $\alpha < 3/2$. Therefore, the sum $\Psi_k(R,S)$ is up to a negligible error term equal to a sum of Kloosterman sums of the form \begin{align*} &\widetilde{\Psi_k}(R,S):= \frac{x^\epsilon}{T H^2}\sum_{1 \leq |h_1|,|h_2|\leq H} c_{h_1} \overline{c_{h_2}} \sum_{n_0\ll N} \rho(n_0) \sum_{\substack{n_1,n_2 \sim N/kn_0 \\ (n_1,n_2)=1 \\ h_1n_2-h_2n_1 \neq 0}} b_{kn_0n_1} \overline{b_{kn_0n_2}} \sum_{\delta| n_0n_1n_2} \mu (\delta) \\ &\sum_{\alpha^2+1 \equiv 0 \, (\delta k^2 n_0 n_1n_2)} \sum_{1\leq |t| \leq T} e_{\delta k^2 n_0 n_1n_2}(-t \alpha) \sum_{(s,\delta k^2 n_0n_1n_2)=1} G(t,s,n_0 n_1n_2)S(- t\overline{\delta k^2 n_0 n_1n_2}, h_1n_2-h_2n_1; s). \end{align*} \subsection{Application of the Deshouillers-Iwaniec bound} \label{disection} We split the sum over $h_1$ and $h_2$ dyadically into parts with $h_1 \sim H_1$ and $ h_2 \sim H_2$. By symmetry we may assume $H_1 \geq H_2.$ We now fix $n_0,n_1,n_2, \delta,\alpha$, and write \begin{align*} \varrho:= \delta k^2 n_0 n_1n_2 \asymp \delta N^2 /n_0 \end{align*} and (denoting $m:=t$ and $n:= h_1n_2-h_2n_1$) \begin{align*} A_m = A_m(\varrho,\alpha):= e_{\varrho}(-m\alpha) \quad \text{and} \quad B_n= B_n(n_1,n_2) := \sum_{\substack{h_1 \sim H_1 \\ h_2 \sim H_2 \\ n= h_1 n_2-h_2n_1}}c_{h_1} \overline{c_{h_2}}. \end{align*} \begin{remark} Since both of the coefficients $A_m$ and $B_n$ depend on the level $r$, we are unable to make use of the average over $r$ as in \cite[Theorem 10]{DI2}. 
\end{remark} Since $t \neq 0 \neq h_1n_2-h_2n_1$, by a smooth dyadic decomposition in the variables $m$ and $n$ we can partition $\widetilde{\Psi_k}(R,S)$ into $\ll \log^2 x$ sums of the form \begin{align*} \Upsilon_k := \frac{1}{TH^2} \sum_{n_0\ll N} \rho(n_0) \sum_{n_1,n_2 \sim N/kn_0} \sum_{ \substack{\delta | n_0n_1n_2}} \max_{\alpha \, (\varrho)} \bigg| \sum_{\substack{m,n,c \\ (c,\varrho)=1}} A_m B_n F(m,n,c) S(m \overline{\varrho}, \pm n;c) \bigg|, \end{align*} where it is easily verified that $F$ satisfies the assumptions of Lemma \ref{dilemma} in the range \begin{align*} (m,n,c) \in [\bm{M},2\bm{M}] \times [\bm{N},2\bm{N}] \times [C,2C] \end{align*} for \begin{align*} \bm{M} \ll T = x^{\epsilon} \frac{S \delta N^2}{R n_0 }, \quad \bm{N} \ll HN/kn_0, \quad \text{and} \quad C \ll S \ll \frac{P^{1/2}N^{1/2}}{k^{1/2} n_0^{1/2}}. \end{align*} By Lemma \ref{dilemma} we get for $\theta:=7/64$ \begin{align*} \bigg| \sum_{\substack{m,n,c\\(c,\varrho)=1}} A_m B_n F(m,n,c) S(m \overline{\varrho}, \pm n;c) \bigg| \pprec \bigg(1+ \frac{\sqrt{\varrho}C}{\sqrt{\bm{M}\bm{N}}} \bigg)^{2\theta} \mathcal{L} \, \| A_{\bm{M}}\|_2 \| B_{\bm{N}}\|_2 \end{align*} for \begin{align*} \mathcal{L} &= \frac{(\sqrt{\varrho}C +\sqrt{\bm{M}\bm{N}} + \sqrt{\bm{M}} C) (\sqrt{\varrho}C +\sqrt{\bm{M}\bm{N}} + \sqrt{\bm{N}} C) }{\sqrt{\varrho}C +\sqrt{\bm{M}\bm{N}} } \\ & = \sqrt{\varrho}C +\sqrt{\bm{M}\bm{N}} + \sqrt{\bm{M}} C+ \sqrt{\bm{N}} C+ \frac{\sqrt{\bm{M} \bm{N}} C^2}{\sqrt{\varrho}C +\sqrt{\bm{M}\bm{N}}} \\ & \ll \sqrt{\varrho} C + \sqrt{\bm{M}} C + \sqrt{\bm{M} \bm{N}} \ll \sqrt{\delta/n_0 } N S + \sqrt{T} S + \sqrt{T HN/kn_0}, \end{align*} where the last bound follows from $ \bm{N} \ll \varrho $. We have $\|A_{\bm{M}}\|_2 \, \ll \sqrt{\bm{M}}$, and by Lemma \ref{bNaveragelemma} \begin{align*} \sum_{n_1,n_2 \sim N/kn_0} \| B_{\bm{N}}\|_2 \, \ll \sqrt{\bm{N}} \max \bigg\{ \frac{N H_1}{k n_0}, \frac{N^{3/2} H_1^{1/2}}{k^{3/2}n_0^{3/2}}\bigg\}. 
\end{align*} Hence, by using $H \ll N$ we have \begin{align*} \Upsilon_k &\pprec \, \max_{\delta } \sum_{n_0 \ll N} \frac{1}{TH^2}\bigg(1+ \frac{\sqrt{\varrho }C}{\sqrt{\bm{M}\bm{N}}} \bigg)^{2\theta}(\sqrt{\delta/n_0 } N S + \sqrt{T} S + \sqrt{T HN/kn_0}) \\ & \hspace{200pt} \cdot \sqrt{\bm{M}\bm{N}}\max \bigg\{ \frac{N H_1}{kn_0}, \frac{N^{3/2} H_1^{1/2}}{k^{3/2}n_0^{3/2}}\bigg\} \\ &\pprec \, \max_{\delta } \sum_{n_0 \ll N} \frac{1}{TH^2}\bigg(1+ \frac{\sqrt{k \delta }N S}{\sqrt{T HN}} \bigg)^{2\theta}(\sqrt{\delta/n_0 } N S + \sqrt{T} S + \sqrt{T HN/kn_0}) \sqrt{T} \frac{H N^2}{k^{3/2}n_0^{3/2}} \\ &\ll \max_{\delta }\sum_{n_0 \ll N} \frac{1}{kn_0^{3/2}} \bigg(1+ \frac{\sqrt{\delta }N S}{\sqrt{T HN}} \bigg)^{2\theta} \bigg( \frac{\sqrt{\delta/n_0 } SN^3}{ \sqrt{T} H} + \frac{SN^2}{ H } + \frac{N^{5/2}}{\sqrt{Hn_0}} \bigg), \end{align*} since the first bound is increasing as a function of $\bm{M}$ and $\bm{N}.$ Inserting $T =x^\epsilon S \delta N^2 / R n_0 $ we obtain \begin{align*} \Upsilon_k & \pprec \sum_{n_0 \ll N} \frac{1}{kn_0^{3/2}} \bigg(1+ \frac{ \sqrt{n_0 RS}}{\sqrt{H N }} \bigg)^{2\theta} \bigg( \frac{ \sqrt{ RS} N^2}{ H} + \frac{SN^2}{ H } + \frac{N^{5/2}}{\sqrt{H}} \bigg) \\ &\pprec \frac{1}{k}\bigg(1+ \frac{\sqrt{P}}{\sqrt{H}} \bigg)^{2\theta}\bigg( \frac{P^{1/2} N^{5/2}}{ H } + \frac{N^{5/2}}{\sqrt{H}} \bigg), \end{align*} since $R,S \ll P^{1/2}N^{1/2}n_0^{-1/2}.$ By using $H= x^\epsilon P/x$ this yields \begin{align} \label{upsilonbound} \Upsilon_k \pprec \frac{x^{1+\theta} N^{5/2}}{kP^{1/2}}, \end{align} so that the contribution to $\Sigma_k (M,N)$ is \begin{align} \label{dicontribute} \pprec M^{1/2} x^{1/2+\theta/2} N^{5/4}P^{-1/4}/k = x^{1/2+\theta/2}P^{1/4}N^{3/4}/k \ll x^{1-\eta}/k \end{align} by using the assumption $N \ll x^{(2-2\theta-\alpha)/3-\eta}$. 
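Two classical ingredients behind the estimates above can be tested numerically: the standard bound $|c_s(n)| \leq (n,s)$ for Ramanujan sums used in the $t=0$ term, and Weil's bound $|S(m,n;p)| \leq 2\sqrt{p}$ for Kloosterman sums to prime moduli, which underlies bounds of Deshouillers--Iwaniec type. A short Python sketch (an illustration only):

```python
from cmath import exp, pi
from math import gcd, sqrt

def ramanujan(s, n):
    """c_s(n) = sum over b mod s with (b, s) = 1 of e(n b / s); an integer."""
    total = sum(exp(2j * pi * n * b / s) for b in range(s) if gcd(b, s) == 1)
    return round(total.real)

def kloosterman(m, n, c):
    """S(m, n; c) = sum over (x, c) = 1 of e((m x + n x^{-1}) / c); real."""
    return sum(exp(2j * pi * (m * x + n * pow(x, -1, c)) / c)
               for x in range(1, c) if gcd(x, c) == 1).real

# standard bound |c_s(n)| <= (n, s), from Hoelder's evaluation of c_s(n)
for s in range(1, 50):
    for n in range(1, 50):
        assert abs(ramanujan(s, n)) <= gcd(n, s)

# Weil's bound for prime moduli p not dividing m n
for p in [3, 5, 7, 11, 13, 17]:
    for m in range(1, p):
        for n in range(1, p):
            assert abs(kloosterman(m, n, p)) <= 2 * sqrt(p) + 1e-9
```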
\subsection{Proof of Proposition \ref{typeiiprop}(i)} By combining the bounds (\ref{diagtrue}), (\ref{nonarithmcontribution}), (\ref{t0contribution}), and (\ref{dicontribute}) we obtain $\Sigma_k(M,N) \ll x^{1-\eta}/k$. Summing over $k \ll N$ we get $\Sigma(M,N) \ll x^{1-\eta}$, which by Section \ref{poissonsection} proves Proposition \ref{typeiiprop}(i). \qed \subsection{Proof of Proposition \ref{typeiiprop}(ii)} We need to prove (\ref{expsum}) under the assumptions in Proposition \ref{typeiiprop}(ii). We use an argument similar to that in \cite[Section 5]{DFI}, together with Lemma \ref{bdlemma}. Inserting the condition $(m,n)=1$ into $\Sigma(M,N)$ gives an error term (since $b_n$ are supported on primes) \begin{align*} \pprec \sum_{n \sim N} \sum_{m \sim M} 1_{n|m} \pprec M \ll x^{1-\eta}, \end{align*} so that we may restrict to the part $(m,n)=1$. Applying the Cauchy-Schwarz inequality similarly as in Section \ref{cssection} but with the sum over $h$ `outside' we get $\Sigma(M,N) \pprec M^{1/2}\cdot \Xi(M,N)^{1/2} $ for \begin{align*} \Xi(M,N):= \frac{1}{H} \sum_{1\leq |h| \leq H} & \sum_{n_0\ll N} \rho(n_0) \sum_{\substack{n_1,n_2 \sim N/n_0 \\ (n_1,n_2)=1}} b_{n_0n_1} \overline{b_{n_0 n_2}} \\ & \sum_{(m,n_0n_1n_2)=1} \psi_M(m) \sum_{\substack{\nu^2+1 \equiv 0 \,\, (m n_0n_1n_2 )}} e_{mn_0n_1n_2}(h(n_1-n_2)\nu). \end{align*} The diagonal part $n_1=n_2$ is bounded by $\pprec M N$, whose contribution to $\Sigma(M,N)$ is at most $\pprec MN^{1/2} < x^{1-\eta}$ by using $N \gg x^{2(\alpha-1)+\eta}$. 
For the remaining part $\Xi_0(M,N)$ with $n_0=1$ we use Lemma \ref{bdlemma} with $q=n_1n_2$ to get \begin{align*} \Xi_0(M,N) := & \frac{1}{H} \sum_{1\leq |h| \leq H} \sum_{\substack{n_1,n_2 \sim N \\ (n_1,n_2)=1}} b_{n_1} \overline{b_{n_2}} \sum_{(m,n_1n_2)=1} \psi_M(m) \sum_{\substack{\nu^2+1 \equiv 0 \,\, (m n_1n_2 )}} e_{mn_1n_2}(h(n_1-n_2)\nu) \\ \ll & \sum_{\substack{n_1,n_2 \sim N \\ (n_1,n_2)=1}} \frac{1}{H} \sum_{1\leq |h| \leq H} \bigg(HN + (n_1n_2,h(n_1-n_2))^\theta N^{1-2\theta}M^{1/2+\theta}\bigg) \\ \pprec & \, HN^3 + M^{1/2+\theta} N^{3-2\theta} \end{align*} by computing the sum over $h$ with Lemma \ref{gcdsum}. The contribution from this to $\Sigma(M,N)$ is bounded by \begin{align*} \pprec M^{1/2}H^{1/2} N^{3/2} + M^{3/4+\theta/2} N^{3/2-\theta} = x^{\epsilon/2} P N x^{-1/2} +P^{3/4+\theta/2} N^{3/4-3\theta/2} \ll x^{1-\eta}, \end{align*} since $N \ll x^{(4-(3+2\theta)\alpha)/(3-6\theta)-\eta} < x^{3/2-\alpha-\eta}$. Hence, $\Sigma(M,N) \ll x^{1-\eta}$. \qed \section{Remarks on the arithmetic information} For $\alpha=1+o(1)$ Proposition \ref{typeiiprop}(i) gives Type II information for $N \ll x^{1/3-2\theta/3-\eta}$, while part (ii) works for $N \ll x^{1/3-\eta}$. The reason for this discrepancy is that we were unable to use the average over the level variable $r$ in Section \ref{disection}. If we could use the average over $r$, we expect that the dependency on the parameter $\theta$ would be the same as in \cite[Lemme 8.3, part 3.]{BD}, that is, $M^{\theta}Q^{-\theta},$ where $Q$ corresponds to $N^2$ (note that by a more careful argument we know that the coefficient $c_h$ is a nice smooth function of $h$). Therefore, instead of (\ref{upsilonbound}), our bound for $\Upsilon_k$ would read $xM^\theta N^{5/2-2\theta}P^{-1/2}/k$, which yields \begin{conj} Suppose that $\alpha < 3/2-\eta$. Let $H=x^\epsilon P/x$ and let $c_h=\psi(h/H)$ for some fixed compactly supported $C^\infty$-smooth function $\psi$. 
Then for $b_n$ supported on square-free integers we have \begin{align*} \Sigma(M,N) \pprec x^{1/2} M^{1/2} + x^{1/2}M^{1/4+\theta/2}N^{1-\theta}+x^{1-\eta}. \end{align*} \end{conj} This gives a bound $\Sigma(M,N) \ll x^{1-\eta}$ as soon as \begin{align*} x^{\alpha-1+\eta}\ll N\ll x^{(2-(1+2\theta)\alpha)/(3-6\theta) - \eta}. \end{align*} Note that this is better than the combined bound of Proposition \ref{typeiiprop} parts (i) and (ii), and for $\alpha=1+o(1)$ the upper limit is $x^{1/3-\eta}$. Assuming the above bound with $\theta=7/64$ we can improve the exponent in Theorem \ref{maint} from $1.279$ to $1.286$. The main reason why the Type II estimate is restricted to small values of $P$ is that we have to use the Cauchy-Schwarz inequality, which means that all savings are essentially halved. Therefore, for large $P$ one should attempt to obtain some other type of arithmetical information where the Cauchy-Schwarz inequality is not necessary, e.g.\ an asymptotic for Type I$_2$ sums \begin{align*} \sum_{d \leq D_2} \lambda_d \sum_{\substack{m\sim M, \, \, n \sim N \\ mn \equiv 0 \, (d)}} |\mathcal{A}_{mn}| \psi_P(mn) \log mn \end{align*} where the most important range would be $M=N=\sqrt{P}$. Even for $D_2=1$ this is an open problem. Currently we have an asymptotic formula for $S(x,P)$ only in the range $P=x^{1+o(1)}$ (this follows already from the work of Duke, Friedlander, and Iwaniec \cite{DFI}). To get an asymptotic formula for $S(x,P)$ with $P$ up to $x^{1+\beta}$ for some fixed $\beta>0$ it seems that we would need to handle also Type I$_3$ sums of the form \begin{align*} \sum_{d \leq D_3} \lambda_d \sum_{\substack{\ell \sim L, \,\, m\sim M, \, \, n \sim N \\ \ell mn \equiv 0 \, (d)}} |\mathcal{A}_{\ell mn}| \psi_P(\ell mn) \log \ell mn. 
\end{align*} This is because in Section \ref{alphasmallsection} the sums that we cannot handle are \begin{align*} \sum_{x^\sigma < q \leq x^{\alpha-2\sigma}} S(\mathcal{A}(P)_q, q) \quad \quad \text{and} \quad \quad \sum_{U < q \leq x^{\alpha/2}} S(\mathcal{A}(P)_q, q), \end{align*} where the first sum corresponds to a sum of three primes all of size $x^{\alpha/3+O(\beta)}$, and the second sum is a sum over two primes of size $x^{\alpha/2+O(\beta)}$.
{ "timestamp": "2019-08-28T02:13:33", "yymm": "1908", "arxiv_id": "1908.08816", "language": "en", "url": "https://arxiv.org/abs/1908.08816", "abstract": "We show that the largest prime factor of $n^2+1$ is infinitely often greater than $n^{1.279}$. This improves the result of de la Bretèche and Drappeau (2019) who obtained this with $1.2182$ in place of $1.279.$ The main new ingredients in the proof are a new Type II estimate and using this estimate by applying Harman's sieve method. To prove the Type II estimate we use the bounds of Deshouillers and Iwaniec on linear forms of Kloosterman sums. We also show that conditionally on Selberg's eigenvalue conjecture the exponent $1.279$ may be increased to $1.312.$", "subjects": "Number Theory (math.NT)", "title": "On the largest prime factor of $n^2+1$" }
https://arxiv.org/abs/2009.08622
Conjecture: 100% of elliptic surfaces over $\mathbb{Q}$ have rank zero
Based on an equation for the rank of an elliptic surface over $\mathbb{Q}$ which appears in the work of Nagao, Rosen, and Silverman, we conjecture that 100% of elliptic surfaces have rank $0$ when ordered by the size of the coefficients of their Weierstrass equations, and present a probabilistic heuristic to justify this conjecture. We then discuss how it would follow from either understanding of certain $L$-functions, or from understanding of the local behaviour of the surfaces. Finally, we make a conjecture about ranks of elliptic surfaces over finite fields, and highlight some experimental evidence supporting it.
\section{Introduction}\label{introduction} Let $\mathcal{E}$ be an elliptic surface over $\mathbb{Q}$. Fix a Weierstrass equation $$\mathcal{E}: y^2 = x^3 + A(T)x + B(T)$$ with $A(T), B(T) \in \mathbb{Z}[T]$ and $4A(T)^3 + 27B(T)^2 \not\equiv 0$. Given an integer $t$ such that $4A(t)^3 + 27B(t)^2\neq 0$, let $\mathcal{E}_t$ denote the elliptic curve over $\mathbb{Q}$ with Weierstrass equation $$\mathcal{E}_t: y^2 = x^3 + A(t)x + B(t).$$ Given an elliptic curve $E/\mathbb{Q}$ and a prime $p$, define $a_p(E)$ to be the $p^{\text{th}}$ coefficient of the $L$-function attached to $E$.\\ \\ It was conjectured by Nagao \cite{nagao} that \begin{equation}\label{nagaoeqn} \text{rank }\mathcal{E}(\mathbb{Q}(T)) = -\lim_{X\to\infty}\frac{1}{X}\sum_{p<X}\sum_{t=1}^pa_p(\mathcal{E}_t)\frac{\log p}{p}. \end{equation} Rosen and Silverman \cite{rosen_silverman} proved that (\ref{nagaoeqn}) held if one assumed Tate's conjecture and that a certain $L$-function didn't vanish on the right edge of its critical strip. In particular, (\ref{nagaoeqn}) holds unconditionally for rational elliptic surfaces.\\ \\ In light of (\ref{nagaoeqn}), we think it's likely that the average rank of elliptic surfaces over $\mathbb{Q}$ is $0$, essentially because the average value of $a_p(\mathcal{E}_t)$ should be $0$. We give a more thorough justification for this belief in section \ref{justification}.\\ \\ There are several ways the belief that the average rank of elliptic surfaces is $0$ can be formulated as a precise conjecture. For convenience, we introduce the following framework for discussing possible formulations of such a conjecture.\\ \\ Let $\mathcal{S}(M)$ with $M \in \mathbb{Z}_{>0}$ be a sequence of subsets of the set of all elliptic surfaces over $\mathbb{Q}$, with the properties that $\mathcal{S}(M)$ is finite for every $M$, and $\mathcal{S}(M) \subseteq \mathcal{S}(M')$ whenever $M \leq M'$. 
We'll say that $\mathcal{S} = \cup_{M=1}^\infty \mathcal{S}(M)$ is a \textit{family} of elliptic surfaces, and the filtration given by $M$ is an \textit{ordering} of that family. This parallels the language used when discussing statistics of ranks of elliptic curves, where, for example, people might discuss ``the family of quadratic twists of a fixed elliptic curve, ordered by conductor''.\\ \\ Define the \textit{average rank of the family $\mathcal{S}$} to be the quantity $$\lim_{M\to\infty}\frac{1}{\#\mathcal{S}(M)}\sum_{\mathcal{E} \in \mathcal{S}(M)} \text{rank }\mathcal{E}(\mathbb{Q}(T)).$$ We believe that the average rank of many families $\mathcal{S}$ will be $0$. In formulating conjecture \ref{avgrank0} below, we pick a specific family for the sake of concreteness, but there are many other reasonable choices.\\% For example, below we'll propose ordering the polynomials $A(T)$ and $B(T)$ by their Mahler measure, but ordering by them by the maximum of the absolute values of their coefficients would also be reasonable, and we'll take $A(T)$ and $B(T)$ to be roughly the same size, but it would also be reasonable to pick a formulation that causes $A(T)^3$ and $B(T)^2$ to be of similar sizes instead.\\ \\ Conversely, there are several examples in the literature of families of elliptic surfaces over $\mathbb{Q}$ which were constructed to have strictly positive rank \cite{elkies} \cite{fermigier}. 
Average ranks of the elliptic curves coming from specializations of elliptic surfaces have been studied as well \cite{bdd} \cite{fermigier} \cite{silverman}, and in some situations the average ranks of the specializations can exhibit rich and possibly unexpected behaviour.\\ \\ To formulate conjecture \ref{avgrank0} below, we first define, for positive integers $d$ and $M$, the set $$\mathcal{P}_d(M) = \left\{p \in \mathbb{Z}[T]\,:\, \text{deg}(p) = d,\, \mu(p) < M\right\},$$ where $\mu(p)$ is the Mahler measure of $p$ (using the height of $p$ would also be reasonable here). Set $$\mathcal{S}_{m,n}(M) := \{\mathcal{E}: y^2 = x^3 + A(T)x + B(T)\,:\, A\in \mathcal{P}_m(M^2),\, B\in\mathcal{P}_n(M^3),\, 4A(T)^3 + 27B(T)^2 \not\equiv 0\}.$$ Then we believe the average rank of the family $\mathcal{S} = \mathcal{S}_{m,n}$ will be $0$ whenever $m$ and $n$ are both positive, i.e. \begin{conjecture}\label{avgrank0} For any fixed positive integers $m$ and $n$, we have $$\lim_{M\to\infty}\frac{1}{\#\mathcal{S}_{m,n}(M)}\sum_{\mathcal{E} \in \mathcal{S}_{m,n}(M)}\text{\emph{rank }}\mathcal{E}(\mathbb{Q}(T)) = 0.$$ \end{conjecture} \noindent In section \ref{analyticproofsection} we discuss the main obstacles for proving conjecture \ref{avgrank0} using the analytic framework in \cite{rosen_silverman}. In section \ref{finitefieldsection} we outline an approach to conjecture \ref{avgrank0} based on investigating statistics of ranks of elliptic surfaces over finite fields. \section{Acknowledgments} We thank Noam Elkies, Bjorn Poonen, and Michael Snarski for helpful discussions. This work was supported by grant 550031 from the Simons Foundation. \section{Probabilistic heuristics}\label{justification} For a fixed prime $p$, Birch \cite{birch} gave all moments of the distribution of $a_p(E)$ when $E$ is chosen by selecting a (possibly singular) Weierstrass equation with coefficients in $\mathbb{F}_p$ uniformly at random. 
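The vanishing of the first moment in Birch's setting can be seen in an elementary way: with the convention $a_p = p - \#\{\text{affine points}\}$ extended to all pairs $(A,B)$ modulo $p$ (singular equations included), each triple $(x,y,A)$ determines exactly one $B$ placing $(x,y)$ on the curve, so the $p^2$ curves carry $p^3$ affine points in total and the average of $a_p$ is exactly $0$. A brute-force Python check (an illustration only):

```python
def a_p(A, B, p):
    """p minus the number of affine points on y^2 = x^3 + A x + B over F_p;
    for a nonsingular curve this is the usual a_p = p + 1 - #E(F_p)."""
    affine = sum(1 for x in range(p) for y in range(p)
                 if (y * y - x * x * x - A * x - B) % p == 0)
    return p - affine

# over all p^2 Weierstrass pairs, the p^3 affine points distribute so that
# the first moment of a_p vanishes exactly
for p in [3, 5, 7, 11]:
    assert sum(a_p(A, B, p) for A in range(p) for B in range(p)) == 0
```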
Let $(A_{p,t})$ be a sequence of independent random variables indexed by a prime $p$ and a positive integer $t$, with the property that, for every $t$, the random variable $A_{p,t}$ has the same distribution as the values $a_p(E)$ for $E$ a Weierstrass equation in $\mathbb{F}_p$ chosen uniformly at random, as in Birch's work. We highlight the following property of the sequence $(A_{p,t})$: \begin{proposition}\label{threeseries} For any $\varepsilon > 0$, the series $$\frac{1}{X^{\frac{1}{2}+\varepsilon}}\sum_{p < X}\sum_{t = 1}^p A_{p,t}\frac{\log p}{p}$$ converges to $0$ with probability $1$ as $X$ goes to infinity. \end{proposition} \begin{proof} By construction we have $$\text{Var}\!\left(\frac{1}{X^{\frac{1}{2}+\frac{\varepsilon}{2}}}A_{p,t}\frac{\log p}{p}\right) \ll \frac{(\log p)^2}{X^{1+\varepsilon}p}.$$ Thus $$\text{Var}\!\left(\frac{1}{X^{\frac{1}{2}+\frac{\varepsilon}{2}}}\sum_{p < X}\sum_{t = 1}^p A_{p,t}\frac{\log p}{p}\right) \ll \frac{1}{X^{1+\varepsilon}}\sum_{p<X}(\log p)^2 \ll 1.$$ By Kolmogorov's three-series theorem, as stated in \cite[theorem 2.5.8]{durrett} for example, it follows that the series $$\frac{1}{X^{\frac{1}{2}+\frac{\varepsilon}{2}}}\sum_{p < X}\sum_{t = 1}^p A_{p,t}\frac{\log p}{p}$$ converges almost surely. Hence the series $$\frac{1}{X^{\frac{1}{2}+\varepsilon}}\sum_{p < X}\sum_{t = 1}^p A_{p,t}\frac{\log p}{p} = \frac{1}{X^{\frac{\varepsilon}{2}}} \left( \frac{1}{X^{\frac{1}{2}+\frac{\varepsilon}{2}}}\sum_{p < X}\sum_{t = 1}^p A_{p,t}\frac{\log p}{p}\right)$$ converges to $0$ almost surely. 
\end{proof} \noindent Proposition \ref{threeseries} is relevant because it suggests that, if we believe that the values $a_p(\mathcal{E}_t)$ are distributed in the same way $a_p(E)$ is for elliptic curves $E/\mathbb{F}_p$ chosen uniformly at random, then we should expect that $100\%$ of the time the quantity in (\ref{nagaoeqn}) is $0$.\\ \\ There is evidence to support the belief that the values $a_p(\mathcal{E}_t)$ will be distributed in this way. If, instead of fixing $p$ and letting $t$ vary initially, we fix $t$ and let $p$ vary, then, for all $\varepsilon > 0$ the series \begin{equation}\label{asconvofap} \frac{1}{X^\varepsilon}\sum_{p < X}A_{p,t}\frac{\log p}{p} \end{equation} converges to $0$ as $X$ goes to infinity with probability $1$. This suggests that, for a fixed elliptic curve $E_t/\mathbb{Q}$, we should expect that \begin{equation}\label{heathbrowneqn} \sum_{p < X} a_p(E_t)\frac{\log p}{p} \ll X^\varepsilon \end{equation} for all $\varepsilon > 0$, because we should expect that the reductions $E_t/\mathbb{F}_p$ will be distributed uniformly at random. The bound (\ref{heathbrowneqn}) was proven by Heath-Brown \cite{heath-brown}, so the reductions $E_t/\mathbb{F}_p$ do indeed behave uniformly randomly in this case.\\ \\ The popular belief that the average rank of elliptic curves over $\mathbb{Q}$ is $\frac{1}{2}$ can also support the idea that these sums of $a_p$ values behave randomly in the way described. The Birch and Swinnerton-Dyer conjecture (BSD) connects the rank of an elliptic curve $E/\mathbb{Q}$ to the coefficients $a_p(E)$ of its $L$-function. The original formulation of BSD was $$\prod_{p < X}\frac{p+1 - a_p(E)}{p} \sim C_E(\log X)^{\text{rank } E(\mathbb{Q})}$$ for some constant $C_E$ which depends on $E$. See work of Goldfeld \cite{goldfeld}, K. Conrad \cite{conrad}, and Kuo-R. Murty \cite{kuo_murty} for some treatment of this specific formulation of BSD. 
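As a concrete illustration of (\ref{heathbrowneqn}) (not used elsewhere), one can compute the partial sums by brute force for a fixed curve, say $y^2 = x^3 - x$, which has CM by $\mathbb{Z}[i]$ and hence $a_p = 0$ for $p \equiv 3 \,\,(4)$. The Python sketch below asserts only facts that are certain: the CM vanishing, the Hasse bound $|a_p| \leq 2\sqrt{p}$, and the trivial estimate for the sum that follows from it:

```python
from math import log, sqrt

def a_p(A, B, p):
    """a_p of y^2 = x^3 + A x + B over F_p, for odd p of good reduction."""
    return p - sum(1 for x in range(p) for y in range(p)
                   if (y * y - x * x * x - A * x - B) % p == 0)

def primes_below(X):
    sieve = [True] * X
    sieve[:2] = [False, False]
    for p in range(2, int(X ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, X, p):
                sieve[q] = False
    return [p for p in range(X) if sieve[p]]

S = 0.0
for p in primes_below(200):
    if p == 2:  # y^2 = x^3 - x has bad reduction only at 2
        continue
    ap = a_p(-1, 0, p)
    assert abs(ap) <= 2 * sqrt(p)        # Hasse bound
    if p % 4 == 3:
        assert ap == 0                   # supersingular primes of the CM curve
    S += ap * log(p) / p
assert abs(S) < 50  # trivial consequence of Hasse; in practice S is much smaller
```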
One consequence of this form of BSD is \cite{rubinstein} \begin{equation}\label{rubeqn} \text{rank }E(\mathbb{Q}) = \frac{1}{2} - \frac{1}{\log X}\int_1^X\frac{1}{x}\sum_{p<x}a_p(E)\log p\,\frac{dx}{x} + \mathcal{O}\Big(\frac{1}{\log X}\Big). \end{equation} It is widely believed that many families of elliptic curves over $\mathbb{Q}$ have average rank $\frac{1}{2}$. This belief suggests that the quantity \begin{equation}\label{rubquantity} \frac{1}{x}\sum_{p<x}a_p(E)\log p \end{equation} appearing in equation (\ref{rubeqn}) should average to $0$ in those families. However, because the error term in (\ref{rubeqn}) depends on $E$, even knowing that the average rank of elliptic curves was $\frac{1}{2}$ wouldn't be enough to conclude that the quantity (\ref{rubquantity}) averages to $0$. While it would be surprising if the error terms in (\ref{rubeqn}) did not average to $0$ as well, controlling these error terms is difficult, and is the main obstacle in proving anything about average ranks from this perspective. \section{Obstacles for analytic proofs}\label{analyticproofsection} In this section we discuss what obstacles exist which make proving conjecture \ref{avgrank0} difficult within the framework established in \cite{rosen_silverman}. In proving Nagao's conjecture, Rosen and Silverman first prove an analytic version of his conjecture: \begin{equation}\label{annagao} \underset{s = 2}{\text{Res}}\,\sum_p\sum_{t=1}^p a_p(\mathcal{E}_t)\frac{\log p}{p^s} = -\text{rank }E(\mathbb{Q}(T)). \end{equation} To do this, they introduce the following notation: \begin{itemize} \item $\displaystyle{L_2(\mathcal{E}/\mathbb{Q},s)}$ is the Hasse-Weil $L$-function of $\mathcal{E}/\mathbb{Q}$ attached to $\displaystyle{H^2_{\acute{e}t}(\mathcal{E}/\bar{\mathbb{Q}},\mathbb{Q}_\ell)}$. \item $\displaystyle{\text{NS}(\mathcal{E}/\bar{\mathbb{Q}})}$ is the N\'{e}ron-Severi group of $\mathcal{E}/\bar{\mathbb{Q}}$. 
\item $\displaystyle{\mathscr{S}}$ is the trivial part of $\text{NS}(\mathcal{E}/\bar{\mathbb{Q}}) \otimes \mathbb{Q}$, generated by the image of the zero section and by all components of all fibers. \item $\displaystyle{\mathscr{S}_\ell(1)}$ is the Tate twist $\mathscr{S}\otimes T_\ell(\bar{\mathbb{Q}}^*)$ of the $\text{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$-module $\mathscr{S}$. \item $\displaystyle{L(\mathscr{S}_\ell(1),s)}$ is the Artin $L$-function attached to the representation $\mathscr{S}_\ell(1)$. \item $\displaystyle{N(\mathcal{E}/\mathbb{Q},s) = \frac{L_2(\mathcal{E}/\mathbb{Q},s)}{L(\mathscr{S}_\ell(1),s)}}.$ \end{itemize} Then, assuming the Tate conjecture, Silverman and Rosen prove that both sides of (\ref{annagao}) are equal to $$\underset{s = 2}{\text{Res}}\,\frac{d}{ds}\log N(\mathcal{E}/\mathbb{Q},s).$$ The conjecture (\ref{nagaoeqn}) then follows from standard analytic techniques applied to the series $\frac{d}{ds}\log N(\mathcal{E}/\mathbb{Q},s)$.\\ \\ We now discuss what might be involved in a proof of conjecture \ref{avgrank0} using these analytic techniques. Suppose that there exist constants $\delta_\mathcal{E} > 0$ and $C_\mathcal{E}$ such that $$\left|\frac{1}{X}\sum_{p < X}\sum_{t = 1}^p a_p(\mathcal{E}_t)\frac{\log p}{p} + \text{rank }\mathcal{E}(\mathbb{Q}(T))\right| < C_\mathcal{E} X^{-\delta_\mathcal{E}},$$ as is typical in this kind of setting. 
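The final step rests on a standard fact worth recording explicitly (our addition, not quoted from \cite{rosen_silverman}): the residue at $s=2$ of the logarithmic derivative of $N(\mathcal{E}/\mathbb{Q},s)$ is simply the order of vanishing of $N$ at $s=2$.

```latex
% If N(\mathcal{E}/\mathbb{Q},s) = c\,(s-2)^{r}\bigl(1 + O(s-2)\bigr) near s = 2,
% with c \neq 0 and r = \operatorname{ord}_{s=2} N(\mathcal{E}/\mathbb{Q},s), then
\frac{d}{ds}\log N(\mathcal{E}/\mathbb{Q},s) = \frac{r}{s-2} + O(1)
\qquad (s \to 2),
\qquad\text{so}\qquad
\underset{s = 2}{\text{Res}}\,\frac{d}{ds}\log N(\mathcal{E}/\mathbb{Q},s) = r.
```

In particular, granting the Tate conjecture, the common value of the two sides of (\ref{annagao}) is the integer $\operatorname{ord}_{s=2} N(\mathcal{E}/\mathbb{Q},s)$.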
To prove conjecture \ref{avgrank0}, it would be sufficient to find, for every elliptic surface $\mathcal{E} \in \mathcal{S}$, a real number $X_\mathcal{E} > 0$ depending only on $\mathcal{E}$, such that both \begin{equation}\label{apcond} \lim_{M\to\infty}\frac{1}{\#\mathcal{S}(M)}\sum_{\mathcal{E} \in \mathcal{S}(M)}\frac{1}{X_{\mathcal{E}}}\sum_{p < X_{\mathcal{E}}}\sum_{t = 1}^p a_p(\mathcal{E}_t)\frac{\log p}{p} = 0 \end{equation} and \begin{equation}\label{errorcond} \lim_{M\to\infty}\frac{1}{\#\mathcal{S}(M)}\sum_{\mathcal{E} \in \mathcal{S}(M)}C_{\mathcal{E}}X_{\mathcal{E}}^{-\delta_{\mathcal{E}}} = 0 \end{equation} hold simultaneously. Condition (\ref{apcond}) requires that $M$ be ``large'' relative to the values $X_{\mathcal{E}}$ so that the averages $$\frac{1}{\#\mathcal{S}(M)}\sum_{\mathcal{E} \in \mathcal{S}(M)}a_p(\mathcal{E}_t)$$ for fixed $p$ and $t$ are small. Condition (\ref{errorcond}), on the other hand, requires that $M$ be ``small'' relative to the values $X_{\mathcal{E}}$. The difficulty, then, is to find some choice of $X_{\mathcal{E}}$'s such that both of these conditions hold at once. We expect that showing that there is such a choice will be difficult. Condition (\ref{apcond}) might require a P\'{o}lya-Vinogradov style result, where one shows that there are no ``conspiracies'' among the values $a_p(\mathcal{E}_t)$ that might cause them to behave differently from random variables. Condition (\ref{errorcond}) might require subconvexity bounds, and the number $\delta_{\mathcal{E}}$ will depend on the locations of the zeroes of $N(\mathcal{E}/\mathbb{Q},s)$. \section{An approach via finite fields}\label{finitefieldsection} Let $\mathcal{S}_{\ell;m,n}$ denote the set of elliptic surfaces $\mathcal{E}/\mathbb{F}_\ell: y^2 = x^3 + A(T)x + B(T)$ with $\deg(A) = m$ and $\deg(B) = n$. This set is finite. Let $\rho_\ell(m,n)$ denote the proportion of elliptic surfaces in $\mathcal{S}_{\ell;m,n}$ which have positive rank.
\begin{conjecture}\label{finitefieldconjecture} For every prime $\ell_0$, and every pair of integers $m_0, n_0 > 0$, $$\lim_{\ell\to\infty}\rho_{\ell}(m_0,n_0) = \lim_{m\to\infty}\rho_{\ell_0}(m,n_0) = \lim_{n\to\infty}\rho_{\ell_0}(m_0,n) = \frac{1}{2}.$$ \end{conjecture} \noindent This conjecture, beyond being interesting in its own right, provides an approach for proving conjecture \ref{avgrank0}. See \cite{lauder} for experimental evidence towards conjecture \ref{finitefieldconjecture}, where $\rho_\ell(m,n)$ is estimated computationally for $\ell = 7$, $n = 6, 12, 18, 24, 30$, and $m \leq n/2$.\\ \\ Let $N$ be a squarefree positive integer. Let $\mathcal{S}^{(N)}(M)$ denote the subset of $\mathcal{S}(M)$ for which $\text{gcd}(N, 4A(T)^3 + 27B(T)^2) = 1$. Then, for any $m$, $n$, and $M$, \begin{equation}\label{crt} \frac{\#\{\mathcal{E} \in \mathcal{S}^{(N)}(M)\,:\,\mathcal{E}/\mathbb{F}_\ell \text{ has positive rank for all } \ell|N\}}{\#\mathcal{S}^{(N)}(M)} = \prod_{\ell|N}\rho_\ell(m,n) + \mathcal{O}(M^{-1}) \end{equation} by the Chinese remainder theorem, where $\mathcal{E}/\mathbb{F}_\ell$ denotes the reduction mod $\ell$ of $\mathcal{E}/\mathbb{Q}$.\\ \\ If $\mathcal{E}/\mathbb{Q}$ has positive rank, then either the reduction $\mathcal{E}/\mathbb{F}_\ell$ has positive rank, or the kernel of the reduction $\mathcal{E}(\mathbb{Q}(T)) \to \mathcal{E}(\mathbb{F}_\ell(T))$ is of finite index in $\mathcal{E}(\mathbb{Q}(T))$. If this kernel were never of finite index, then conjecture \ref{avgrank0} would follow from conjecture \ref{finitefieldconjecture} (as well as much weaker versions of this conjecture), via observation (\ref{crt}). The kernel of the reduction $\mathcal{E}(\mathbb{Q}(T)) \to \mathcal{E}(\mathbb{F}_\ell(T))$ is of finite index in $\mathcal{E}(\mathbb{Q}(T))$ occasionally, but presumably not nearly often enough for this approach to fail. However, proving as much seems difficult.
The generators of $\mathcal{E}(\mathbb{Q}(T))$ will map to the identity of $\mathcal{E}(\mathbb{F}_\ell(T))$ if their denominators are divisible by $\ell$, so one is naturally led to investigate the dependence of the height of the generators of $\mathcal{E}(\mathbb{Q}(T))$ on the size of the coefficients of the Weierstrass model of $\mathcal{E}/\mathbb{Q}$. \bibliographystyle{plain}
https://arxiv.org/abs/2212.12528
Some recent developments on the Steklov eigenvalue problem
The Steklov eigenvalue problem, first introduced over 125 years ago, has seen a surge of interest in the past few decades. This article is a tour of some of the recent developments linking the Steklov eigenvalues and eigenfunctions of compact Riemannian manifolds to the geometry of the manifolds. Topics include isoperimetric-type upper and lower bounds on Steklov eigenvalues (first in the case of surfaces and then in higher dimensions), stability and instability of eigenvalues under deformations of the Riemannian metric, optimisation of eigenvalues and connections to free boundary minimal surfaces in balls, inverse problems and isospectrality, discretisation, and the geometry of eigenfunctions. We begin with background material and motivating examples for readers who are new to the subject. Throughout the tour, we frequently compare and contrast the behavior of the Steklov spectrum with that of the Laplace spectrum. We include many open problems in this rapidly expanding area.
\section{Introduction} In this paper, we give a survey of some recent developments concerning the Steklov eigenvalues and eigenfunctions of compact manifolds with boundary. The last few years have seen intense interest in these topics, with significant progress in many directions since the publication of the survey paper of Girouard and Polterovich~\cite{GiPo2017}. We focus on the time period since the earlier survey but try to include enough background to make the current survey somewhat self-contained. Even so, we are only able to touch on some of the topics, as a quick search on MathSciNet for titles containing \emph{Steklov eigenvalue} over this time period reveals over 100 papers. The selection of topics which are covered here is naturally influenced by the tastes and knowledge of the authors. Let $\Omega$ be a smooth compact Riemannian manifold of dimension $d+1\geq 2$ with boundary $\Sigma=\partial\Omega$. The Dirichlet-to-Neumann operator $\mathcal{D}:C^\infty(\Sigma)\to C^\infty(\Sigma)$ is defined by $\mathcal{D} f=\partial_\nu{\hat{f}}$, where $\nu$ is the outward normal along the boundary $\Sigma$ and where the function $\hat{f}\in C^\infty(\Omega)$ is the unique harmonic extension of $f$ to the interior of $\Omega$. The eigenvalues of $\mathcal{D}$ are known as \emph{Steklov eigenvalues} of $\Omega$. They form an unbounded sequence $0=\sigma_0\leq\sigma_1\leq\sigma_2\leq\cdots\to\infty,$ where as usual each eigenvalue is repeated according to its multiplicity. There exists a corresponding sequence of eigenfunctions $f_k\in C^\infty(\Sigma)$ that forms an orthonormal basis of $L^2(\Sigma)$. Their harmonic extensions $u_k=\hat{f_k}$ are solutions of the Steklov spectral problem: $$\begin{cases} \Delta u_k=0&\text{in }\Omega,\\ \partial_{\nu}u_k=\sigma_k u_k&\text{on }\Sigma. 
\end{cases}$$ The functions $u_k$ are referred to as the \emph{Steklov eigenfunctions} of $\Omega$, while the sequence of eigenvalues $(\sigma_k)_{k\geq 0}$ is the \emph{Steklov spectrum} of the manifold $\Omega$. We are interested in the rich interplay between the Steklov spectral data and various geometric features of the manifold $\Omega$. For basic motivation we refer to the introduction of the aforementioned paper~\cite{GiPo2017}, while for historical background and physical motivations readers are invited to look at the paper~\cite{KuKu2014} by Kuznetsov, Kulczycki, Kwaśnicki, Nazarov, Poborchi, Polterovich and Siudeja. There are only a few manifolds for which the Steklov eigenvalues can be computed explicitly. In general, one must instead use methods other than direct computation to study Steklov eigenvalues. One critical tool is the use of variational characterisations. The \emph{Rayleigh--Steklov quotient} of a function $u$ in the Sobolev space $H^1(\Omega)$ is given by \begin{equation*} R(u)= \frac{\int_{\Omega}\vert \nabla u\vert^2 dV_{\Omega}}{\int_{\Sigma}u^2dV_{\Sigma}}. \end{equation*} Denote by $\mathcal E(k)$ the set of all $k$-dimensional subspaces of $H^1(\Omega)$, and let $\mathcal H(k)\subset \mathcal E(k)$ consist of those $k$-dimensional subspaces of $H^1(\Omega)$ that are orthogonal to constant functions on $\Sigma$. The following equation gives two convenient formulations of the variational characterisation of the Steklov eigenvalues $\sigma_k({\Omega})$ for all $k\in \mathbf Z^+$: \begin{equation*} \sigma_k(\Omega)= \min_{E\in \mathcal E(k+1)}\max_{0\not=u \in E}R(u)=\min_{V\in \mathcal H(k)}\max_{0\not=u \in V}R(u). \end{equation*} In particular, $$\sigma_1(\Omega)=\min\Big\{R(u)\,:\,u\in H^1(\Omega)\ \text{such that}\ \int_{\Sigma}u\,dV_\Sigma=0\Big\}.$$ Similar variational characterisations for mixed Steklov--Neumann and Steklov--Dirichlet eigenvalues are also presented in Section~\ref{section:Prelim}.
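These variational formulas can be seen in action on the unit disk, whose Steklov spectrum is $0,1,1,2,2,\dots$. The following numerical sketch is our own (the quadrature scheme and trial functions are our choices): plugging the harmonic functions $u = x$ and $u = x^2 - y^2$, which happen to be genuine Steklov eigenfunctions, into $R(u)$ recovers the eigenvalues $1$ and $2$.

```python
# Sketch (our own illustration): evaluate the Rayleigh--Steklov quotient
# R(u) on the unit disk, whose Steklov spectrum is 0, 1, 1, 2, 2, ...
# The harmonic trial functions u = x and u = x^2 - y^2 are genuine
# Steklov eigenfunctions, so R(u) returns 1 and 2 up to quadrature error.
import math

def rayleigh_steklov_disk(grad_sq, trace, nr=400, nth=256):
    """R(u) = (int_D |grad u|^2 dA) / (int_{S^1} u^2 ds), polar midpoint rule."""
    dr, dth = 1.0 / nr, 2 * math.pi / nth
    num = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        for j in range(nth):
            num += grad_sq(r, (j + 0.5) * dth) * r * dr * dth
    den = sum(trace((j + 0.5) * dth) ** 2 * dth for j in range(nth))
    return num / den

# u = x = r cos(theta):            |grad u|^2 = 1,     trace cos(theta)
R1 = rayleigh_steklov_disk(lambda r, th: 1.0, math.cos)
# u = x^2 - y^2 = r^2 cos(2theta): |grad u|^2 = 4 r^2, trace cos(2theta)
R2 = rayleigh_steklov_disk(lambda r, th: 4 * r * r, lambda th: math.cos(2 * th))
print(R1, R2)  # approximately 1 and 2
```

Note that $u=x$ is admissible for the characterisation of $\sigma_1$ since $\int_{\mathbb{S}^1} x\,ds = 0$, and here the minimum is attained because the trial function is an eigenfunction.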
Observe that the numerator of the Rayleigh--Steklov quotient is the Dirichlet energy, which is a conformal invariant in dimension two, while the denominator depends only on the metric on the boundary $\Sigma$. Thus, in the case of surfaces, the Steklov spectrum is invariant under conformal changes in the metric away from the boundary. \subsection{Organisation of the paper} In Section~\ref{section:Prelim}, after first introducing notational conventions that will be used throughout the paper, we provide basic background on the Steklov eigenvalue problem and examples. Two examples presented in detail are metric balls in Euclidean space and the particularly useful example of cylinders $\Omega=[0,L]\times\Sigma$ over a closed Riemannian manifold $\Sigma$. Even these simple examples are sufficient to motivate many of the questions that will be studied in this survey. In particular, both examples illustrate the interplay between the Steklov eigenvalues of $\Omega$ and the Laplace eigenvalues of the tangential Laplacian on $\partial\Omega$. The behaviour of the Steklov spectrum of cylinders as $L$ tends to 0 or to $+\infty$ is also a meaningful preview of results to come. Each of the remaining sections is devoted to a particular topic or set of topics. We discuss each briefly here. One of the oldest and most active lines of investigation regarding Steklov eigenvalues is the search for isoperimetric-type geometric inequalities. For simply-connected planar domains $\Omega\subset\R^2$ this goes back to Weinstock~\cite{We1954}. In the last few years the variational characterisations of Steklov eigenvalues were used together with a combination of tools, in particular from complex analysis, to obtain upper bounds for the perimeter-normalised Steklov eigenvalues $\sigma_k({\Omega})L(\Sigma)$ of compact Riemannian surfaces ${\Omega}$ with boundary in terms of the genus and number of boundary components of ${\Omega}$. This is discussed in Section~\ref{section:surfaces}. 
One of the major developments that took place in the last few years is the full solution of the isoperimetric problem for Steklov eigenvalues of planar domains, without constraint on the number of boundary components. This was obtained by Girouard, Karpukhin and Lagacé in~\cite{GiKaLa2021} through the use of homogenisation theory by perforation, which provides examples saturating previous bounds by Kokarev~\cite{Ko2014}. This technique also reveals an interesting connection between area-normalised eigenvalues of the Laplace operator on a closed surface and perimeter-normalised Steklov eigenvalues for domains in that surface. For manifolds $\Omega$ of arbitrary dimension $d+1$, sharp upper bounds and existence results are out of reach at the moment. One of the first difficulties that is encountered is that the Dirichlet energy is no longer conformally invariant in dimension larger than two. This is often circumvented by using H\"older's inequality to replace the Dirichlet energy $\int_{\Omega}|\nabla u|^2\,dV$ by an expression containing $\int_{\Omega}|\nabla u|^{d+1}\,dV$, which is conformally invariant. However, by doing so one loses precision. Section \ref{section:boundshigher} presents many upper bounds for the eigenvalues $\sigma_k$ in terms of various geometric features of the manifold. An important recent innovation was the realisation by Karpukhin and M\'etras that apart from the perimeter and volume normalisation, there is another normalisation that appears to be particularly well suited to the study of upper bounds for Steklov eigenvalues. This normalisation will be introduced in Section~\ref{section:boundshigher} and will play an important role both there and later in the survey. Thus far we have discussed bounds for the normalised Steklov eigenvalues. Do there exist Riemannian metrics on a given underlying surface that realise these bounds? This is the subject of Section~\ref{sec:existence}.
Fraser and Schoen~\cite{FrSc2016} first discovered and studied a deep connection between such extremal metrics on surfaces and free boundary minimal surfaces in Euclidean balls. This connection has not only strongly influenced advances on the Steklov eigenvalue problem, including many of the results in Section~\ref{section:surfaces}, but also yields important applications to the study of minimal surfaces. Fraser and Schoen went on to introduce innovative techniques to address the existence of Riemannian metrics maximising the first non-zero normalised Steklov eigenvalue, eventually leading to a proof by Matthiesen and Petrides of existence of such a maximising metric on \emph{every} surface. An interesting way to get bounds on Steklov eigenvalues is to discretise the manifold and compare its spectrum with the spectrum of a Steklov problem on a graph. While obtaining a meaningful discretisation requires significant geometric constraints on the manifold, a number of interesting results have been obtained. Discretisation motivates the study of the spectrum of the Steklov problem on a finite graph in and of itself (see Section \ref{Section: discretisation}). For ${\Omega}$ a compact $(d+1)$-dimensional Riemannian manifold with boundary $\Sigma$, Section~\ref{stek.forms} addresses two ways of defining an analogue of the Dirichlet-to-Neumann operator on the space $\mathcal{A}^p(\Sigma)$ of smooth $p$-forms on $\Sigma$ for each $p=1,\dots, d$. Both operators have discrete spectra, allowing one to define notions of the Steklov spectrum for $p$-forms and to study their properties. In Section~\ref{sec:inv probs pos} we consider the inverse spectral problem for the Steklov problem, with an emphasis on ``positive" results, that is, finding geometric information that can be recovered from the Steklov spectrum. Along the way, we explain recent developments in the theory of Steklov spectral asymptotics. 
Next, in Section~\ref{sec: isospectrality}, we address ``negative'' inverse spectral results. We discuss general techniques for constructing pairs or continuous families of compact Riemannian manifolds with boundary that have the same Steklov spectrum. By comparing their geometry, we identify geometric invariants that are \emph{not} spectrally determined. Section~\ref{sec:eigenfunctions} is devoted to the geometry of Steklov eigenfunctions. We discuss the interior decay of Steklov eigenfunctions. We also present the best known bounds on the volumes of the nodal sets of both Steklov eigenfunctions and their restrictions to the boundary (the Dirichlet-to-Neumann eigenfunctions). Finally, we explain some recent results on nodal counts and the density of nodal sets. In Appendix~\ref{Section:Radon}, we present some material on variational eigenvalues of Radon measures. One could start with this section, or simply refer to it when needed. Indeed, the setting presented here allows the unification of many well known eigenvalue problems. In particular, homogenisation procedures that relate isoperimetric problems for Steklov, Laplace and various other eigenvalue problems are natural in this setting. There are numerous open problems scattered throughout the survey. For the convenience of the reader, all of these problems are gathered together in Appendix~\ref{app:open ques} along with references to their locations in the survey. As we hope this survey conveys, new techniques and results are being introduced into the study of the Steklov spectrum at a very rapid pace. In particular, it is of course possible that some of the open problems may be resolved quickly. Among the topics not covered in this survey is the significant progress in the development of numerical methods for computing Steklov eigenvalues and eigenfunctions.
Interested readers are invited to begin with the paper~\cite{BrGa2020} by Bruno and Galkowski where these computations are performed with a view towards nodal geometry. This is based on methods developed by Akhmetgaliyev, Kao, and Osting in \cite{AkElOs2017}. Another relevant paper is~\cite{OuKaOs2021} by Oudet, Kao, and Osting, where numerical isoperimetric shape optimisation is used to compute free boundary minimal surfaces. For a survey of recent results and interesting discussions we also recommend \cite{MoZh2022} by Monk and Zhang as well as \cite{YoXiLi2019} by Liu, Xie, and Liu. \subsection{Acknowledgements} The authors would like to thank the following people who have read preliminary versions of this paper and helped the authors improve the presentation in various ways: Iosif Polterovich, Alessandro Savo, Asma Hassanezhad, Katie Gittins, Chakradhar, Mikhail Karpukhin, Antoine M\'etras, Nilima Nigam, and Romain Petrides. We thank David Webb for providing several drawings and for helpful discussions. Bruno Colbois and Alexandre Girouard would also like to thank the participants in the Qu\'ebec-Neuch\^atel-Montr\'eal doctoral seminar, who have patiently attended lectures on many parts of this paper during the year 2021-2022. \section{Motivating examples and preliminary material}\label{section:Prelim} The background and basic examples in this section will serve as motivation for many of the questions that we will study. Readers who are well acquainted with the Steklov eigenvalue problem may want to skip ahead to Section~\ref{section:surfaces} after going over our notational conventions below and then come back to this section when needed. \subsection*{Notational conventions}\label{nota.convention} \begin{enumerate} \item $(M,g)$ and $(N,g)$ will usually denote complete Riemannian manifolds (whether or not compact). We will often suppress the name of the metric $g$.
\item Throughout the paper, compact manifolds with boundary will systematically be denoted by $\Omega$ and the boundary of ${\Omega}$ will be denoted by $\Sigma$. We also use $\Omega$ for bounded domains $\Omega\subset M$ with nonempty boundary. \item The dimension of ${\Omega}$ will usually be denoted by $d+1$, so that $d$ is the dimension of its boundary $\Sigma$. \item The volume of $({\Omega},g)$ will usually be written $|{\Omega}|_g$ or simply $|{\Omega}|$ if the metric $g$ is understood. We will continue to denote by $g$ the Riemannian metric induced by $g$ on $\Sigma=\partial\om$. In particular, $|\Sigma|_g$ will denote the $d$-dimensional volume of $\Sigma$. Sometimes we will also use notation such as $A$ and $L$ for the area of a surface and the length of a curve. \item We use the positive definite Laplacian defined by $\Delta_g=-\text{div}_g\circ\nabla_g$. \item The Riemannian volume form on ${\Omega}$ will usually be denoted $dV_{({\Omega},g)}$ (with the subscripts ${\Omega}$ and/or $g$ suppressed if they are clear from the context) but we will also sometimes replace $dV$ with $dA$ in the case of surfaces and sometimes use $ds$ for arclength measure. \item The standard Sobolev space consisting of functions in $L^2({\Omega})$ with weak gradient also in $L^2({\Omega})$ will be denoted $H^1(\Omega)$. We use $H^1_0({\Omega})$ to denote the closure in $H^1({\Omega})$ of $C_0^\infty({\Omega})$, where $C_0^\infty({\Omega})$ denotes the space of smooth functions with compact support in the interior of ${\Omega}$. \item Eigenvalues will be indexed starting with the index zero.
The Steklov spectrum of $({\Omega},g)$ will thus be written as: $$\stek({\Omega},g): 0=\sigma_0({\Omega},g)\leq \sigma_1({\Omega},g)\leq \sigma_2({\Omega},g)\leq \dots$$ and the Laplace spectrum of a closed Riemannian manifold $(M,g)$ will be expressed as $$0=\lambda_0(M,g)\leq \lambda_1(M,g)\leq \lambda_2(M,g)\leq \dots$$ \end{enumerate} We will write simply $\sigma_k$ rather than $\sigma_k({\Omega},g)$ if $({\Omega},g)$ is fixed. We caution that there are two frequently used indexing conventions in the literature: indexing the lowest eigenvalue by zero as we are doing here or by one. The convention we are using is convenient for the Steklov spectrum of connected manifolds. In that case $\sigma_0=0$ and $\sigma_1$ is the lowest non-zero eigenvalue. For consistency, we are maintaining the same indexing convention for all spectra even, for example, in the setting of the Dirichlet spectrum of a compact manifold with boundary, in which case the lowest eigenvalue $\lambda_0^D$ is non-zero. While the Steklov spectrum $\stek({\Omega},g)$ is the spectrum of an operator on $C^\infty(\Sigma)$, it is common to refer to the harmonic extensions to ${\Omega}$ of the eigenfunctions of $\mathcal{D}$ as \emph{Steklov eigenfunctions}. We will follow that practice here. \subsection{Examples}\label{subsec: examples} There are very few Riemannian manifolds for which the Steklov eigenvalues can be computed explicitly. Here we give two simple but illustrative examples. \begin{ex}\label{ex: ball} Consider the unit ball $\mathbb{B}^{d+1}$ in $\R^{d+1}$. Let $P_k$ denote the space of homogeneous harmonic polynomials of degree $k$ on $\R^{d+1}$. For $p\in P_k$ expressed in spherical coordinates as $p(r,\theta)=r^kh(\theta)$, observe that $\partial_\nu p =\frac{\partial p}{\partial r} =kp$ on the boundary sphere $\mathbb{S}^d$. Thus $P_k$ consists of Steklov eigenfunctions with eigenvalue $k$. 
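This boundary identity is easy to sanity-check numerically. The following sketch is our own addition, not part of the original example: it verifies $\partial_\nu p = \nabla p\cdot(x,y) = kp$ on the unit circle for the degree-two harmonic polynomial $p = x^2 - y^2$, using central finite differences.

```python
# Sketch (our own check, not from the text): for the homogeneous harmonic
# polynomial p(x, y) = x^2 - y^2 of degree k = 2, verify numerically that
# the outward normal derivative on the unit circle equals k*p, and that p
# is harmonic, using central finite differences.
import math

def p(x, y):
    return x * x - y * y

h = 1e-4

def normal_derivative(th):
    """grad p . (x, y) at the boundary point (cos th, sin th)."""
    x, y = math.cos(th), math.sin(th)
    px = (p(x + h, y) - p(x - h, y)) / (2 * h)
    py = (p(x, y + h) - p(x, y - h)) / (2 * h)
    return px * x + py * y

err_euler = max(abs(normal_derivative(th) - 2 * p(math.cos(th), math.sin(th)))
                for th in [0.1 + 2 * math.pi * j / 8 for j in range(8)])

def laplacian(x, y):
    """Five-point stencil approximation of (d^2/dx^2 + d^2/dy^2) p."""
    return (p(x + h, y) + p(x - h, y) + p(x, y + h) + p(x, y - h)
            - 4 * p(x, y)) / h ** 2

err_harmonic = max(abs(laplacian(0.3 * j, 0.1 * j)) for j in range(5))
print(err_euler, err_harmonic)  # both at roundoff level
```

The first quantity checks Euler's identity for homogeneous functions, which is exactly why $\partial_\nu p = kp$ on the unit sphere in any dimension.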
Since the spherical harmonics, i.e., the restrictions to $\mathbb{S}^d$ of all the homogeneous harmonic polynomials, span $L^2(\mathbb{S}^d)$, we conclude that the Steklov spectrum of $\mathbb{B}^{d+1}$ consists precisely of the non-negative integers, and the eigenspace associated with $k$ is given by $P_k$. Compare this with the spectrum of the Laplacian $\Delta$ on $\mathbb{S}^d$. The spectrum consists of all $k(k+d-1)$ for $k=0,1,2,\dots$ with corresponding eigenspaces the $k$th degree spherical harmonics. In particular, the Dirichlet-to-Neumann operator and the Laplacian have the same eigenspaces. In fact, the relationship between the Laplacian and the Dirichlet-to-Neumann operator is completely explicit in this case: $$\Delta_{\mathbb{S}^d}=\mathcal{D}_{\mathbb{B}^{d+1}}^2+(d-1)\mathcal{D}_{\mathbb{B}^{d+1}}.$$ In particular, in the case of $\mathbb{B}^2$, we have $\mathcal{D}_{\mathbb{B}^{2}}=\sqrt{\Delta_{\mathbb{S}^1}}$. \end{ex} We identify a few features of the example above, some unique to balls, others completely general. \begin{itemize} \item For $k$ large, the $\sigma_k$-eigenfunctions decay towards zero rapidly on compact subsets of the interior. This property is completely general and will be discussed in Section~\ref{sec:eigenfunctions}. \item Comparison with the Laplace eigenvalues of the sphere $\mathbb{S}^d$ yields $$\sigma_k(\mathbb{B}^{d+1})=\sqrt{\lambda_k(\mathbb{S}^d)}+ O(1).$$ As we will discuss in the next subsection, this relationship between the Steklov eigenvalues of a manifold and the Laplace eigenvalues of its boundary also holds more generally. \item Special to this example: Girouard, Karpukhin, Levitin and Polterovich \cite{GKLP2021} showed that Euclidean balls are the \emph{only} compact Riemannian manifolds for which the Dirichlet-to-Neumann operator commutes with the boundary Laplacian, and disks are the only surfaces for which $\mathcal{D}=\sqrt{\Delta_\Sigma}$.
\end{itemize} \begin{ex}\label{example: cylinder} Cylinders over compact manifolds are among the simplest and at the same time most useful examples. Let ${\Omega}$ be a cylinder ${\Omega}=C_L= [0,L]\times M$ where $M$ is a connected, closed $d$-dimensional Riemannian manifold and $L\in\R^+$. Denote the eigenvalues of the Laplace--Beltrami operator $\Delta_M$ by $0=\lambda_0(M)<\lambda_1(M)\le \lambda_2(M)\le\cdots\nearrow\infty$, and let $(\varphi_k)_{k=0}^{\infty}\subset C^\infty(M)$ be a corresponding orthonormal basis of eigenfunctions. Since $\Sigma:=\partial\om$ consists of two copies of $M$, we have \begin{equation}\label{eq: sig vs N}\lambda_{2k}(\Sigma)=\lambda_{2k+1}(\Sigma)=\lambda_k(M)\end{equation} for all $k=0,1,2,\dots$. The Steklov eigenvalues of the cylinder are given by $0$, by $\frac{2}{L}$, and, for each $k\geq 1$, by $$ \sqrt{\lambda_k(M)}\tanh(\sqrt{\lambda_k(M)} \frac{L}{2})\qquad\text{and}\qquad \sqrt{\lambda_k(M)}\coth (\sqrt{\lambda_k(M)} \frac{L}{2}). $$ Using the variables $t\in [0,L]$ and $x\in M$, the corresponding eigenfunctions are $$ 1;\ t-\frac{L}{2};\ \cosh\left(\sqrt{\lambda_k(M)}(t-L/2)\right)\varphi_k(x);\ \sinh\left(\sqrt{\lambda_k(M)}(t-L/2)\right)\varphi_k(x). $$ (i) We first consider the asymptotics of the Steklov eigenvalues as $k\to\infty$. Because $\tanh(t)=1+O(t^{-\infty})$ and $\coth(t)=1+O(t^{-\infty})$, it follows from the asymptotic growth rate of $\lambda_k$ given by the Weyl law that $$\sigma_k(C_L)=\sqrt{\lambda_k(\Sigma)}+O(k^{-\infty}).$$ (ii) Next we let the length $L$ of the cylinder vary and consider the limiting behaviour as $L\to 0$: the eigenvalue $2/L\to+\infty$, while $$\sqrt{\lambda_k(M)}\tanh(\sqrt{\lambda_k(M)} \frac{L}{2})\to 0\qquad\text{and}\qquad\sqrt{\lambda_k(M)}\coth (\sqrt{\lambda_k(M)} \frac{L}{2})\to+\infty.$$ In particular, the number of very small eigenvalues increases and for each index $k\in\N$, $$\sigma_k(C_L)\xrightarrow{\,L\to 0\,}0.$$ (iii) Finally let $L\to\infty$.
Then $2/L\to 0$ and $$ \sqrt{\lambda_k(M)}\tanh(\sqrt{\lambda_k(M)} \frac{L}{2})\to\sqrt{\lambda_k(M)}\qquad\text{and}\qquad \sqrt{\lambda_k(M)}\coth (\sqrt{\lambda_k(M)} \frac{L}{2})\to\sqrt{\lambda_k(M)}.$$ Taking into account Equation~\eqref{eq: sig vs N}, it follows that $$\sigma_k(C_L)\xrightarrow{\,L\to \infty\,}\sqrt{\lambda_k(\Sigma)}.$$ \end{ex} \begin{itemize} \item This example illustrates that we can find metrics on the underlying manifold for which the $k$th Steklov eigenvalue is arbitrarily small while keeping the volume of the boundary fixed. As we will discuss in Subsection~\ref{subsec: prelim bounds}, one can construct metrics with similar behaviour on every compact manifold. \item The fact that $\sigma_k(C_L)=\sqrt{\lambda_k(\Sigma)}+O(k^{-\infty})$ is a feature of this example that is \emph{not} common to all Riemannian manifolds. In fact, it is enough to look at the ball $\Omega=B(0,1)\subset\R^n$, with $n\geq 3$ to see that this is not true. However, we will see in the next subsection that a weaker relationship does hold in general. \end{itemize} This simple example has many applications. A sampling: \begin{itemize} \item It will be used in Section~\ref{section:boundshigher} to show that the exponent on $k$ in some bounds for $\sigma_k$ is optimal (see Theorem~\ref{globalestimate}). \item It can be used to deduce results on $\lambda_k$ from results on $\sigma_k$. See also Examples 5.1 and 5.2 in~\cite{CoGi2021}. \item The earliest known examples of non-isometric Steklov isospectral manifolds arose from the observation (see the earlier survey \cite{GiPo2017}) that the Steklov spectrum of the cylinder $[0,L]\times M$ depends only on $L$ and the Laplace spectrum of $M$; thus any pair of Laplace isospectral closed Riemannian manifolds yields a pair of cylindrical Steklov isospectral manifolds. \end{itemize} \begin{remark} The example of a dumbbell is classical for the Laplacian. 
The Steklov spectrum of a dumbbell is addressed by Bucur, Henrot and Michetti \cite{BuHeMi2021} in all dimensions. See also the Ph.D. thesis of Michetti \cite{Mi20222}. \end{remark} \subsection{Asymptotic behaviour of eigenvalues}\label{subsec: prelim asympt} For compact Riemannian manifolds $({\Omega},g)$ with smooth boundary, the Dirichlet-to-Neumann operator $\mathcal{D}=\mathcal{D}_{({\Omega},g)}:C^\infty(\partial\om)\to C^\infty(\partial\om)$ is an elliptic pseudodifferential operator of order one. As shown in \cite{LeUh1989}, the symbol of $\mathcal{D}$ is completely determined by the Riemannian metric in an arbitrarily small neighborhood of the boundary. Since the asymptotics of the spectrum of a pseudodifferential operator depend only on the symbol, this yields: \begin{thm}\cite{HiLu2001}, \cite[Theorem 2.5]{GPPS2014}\label{thm: hislop lutzer} Suppose $({\Omega},g)$ and $({\Omega}',g')$ are compact Riemannian manifolds with boundary. If some neighborhood of $\partial\om$ is isometric to a neighborhood of $\partial\om'$, then $$\sigma_k({\Omega},g)-\sigma_k({\Omega}',g') = O(k^{-\infty}).$$ \end{thm} (As discussed in Section~\ref{sec:inv probs pos}, a much stronger statement holds in the case of surfaces.) \smallskip The principal symbol of $\mathcal{D}_{({\Omega},g)}$ depends only on the boundary $(\Sigma, g)$ and in fact coincides with the principal symbol of $\sqrt{\Delta_\Sigma}$, where $\Delta_\Sigma$ is the Laplace--Beltrami operator of the boundary $\Sigma$ of ${\Omega}$ with the induced Riemannian metric. However, the subprincipal symbols of these two operators are different in general. The Weyl law for the Steklov eigenvalues (see Section~\ref{sec:inv probs pos}) yields \begin{equation}\label{eq: prelim Weyl}\sigma_k({\Omega},g)=2\pi\left(\frac{k}{|\mathbb{B}^d|\,|\Sigma|_g}\right)^{1/d} \,+\,O(1).
\end{equation} As noted in \cite{GKLP2021}, a comparison with the Weyl law for the Laplacian yields: \begin{equation}\label{eq: sigma = lambda +O(1)}\sigma_k({\Omega},g)=\sqrt{\lambda_k(\Sigma,g)} + O(1)\end{equation} where the $\lambda_j$'s are the eigenvalues of $\Delta_\Sigma$, the Laplacian on $\Sigma=\partial\om$. Further relationships between Steklov eigenvalues and Laplace eigenvalues of the boundary will be manifest in various parts of this paper (see, for example, Theorem~\ref{comparison}). \subsection{Variational characterisation of eigenvalues} In most cases, it is impossible to compute the Steklov spectrum of a manifold explicitly. Instead, one resorts to variational characterisations of eigenvalues in order to obtain lower and upper bounds. Let $\Omega$ be a compact Riemannian manifold with boundary $\Sigma=\partial \Omega$. In contrast to the previous subsection, we do not require the boundary to be smooth. For example, we allow Lipschitz boundary. The Rayleigh--Steklov quotient of a function $u \in H^1(\Omega)$ is given by \begin{equation}\label{eq:Rayleigh S} R(u)= \frac{\int_{\Omega}\vert \nabla u\vert^2 dV_{(\Omega,g)}}{\int_{\Sigma}u^2\,dV_{(\Sigma,g)}}. \end{equation} Denote by $\mathcal E(k)$ the set of all $k$-dimensional subspaces of $H^1(\Omega)$. Let $\mathcal H(k)\subset \mathcal E(k)$ consist of those $k$-dimensional subspaces of $H^1(\Omega)$ whose elements are orthogonal to the constant functions on $\Sigma$. The following equation gives two convenient formulations of the variational characterisation of the Steklov eigenvalues $\sigma_k({\Omega})$ for all $k\in\N$: \begin{equation}\label{eq:Stek Rayleigh min max} \sigma_k(\Omega)= \min_{E\in \mathcal E(k+1)}\max_{0\not=u \in E}R(u)=\min_{V\in \mathcal H(k)}\max_{0\not=u \in V}R(u).
\end{equation} If $u_0, u_1, u_2,\dots$ are Steklov eigenfunctions for $\sigma_0,\sigma_1, \sigma_2,\dots$, then the minimum in the first formulation is attained by $E=\operatorname{span}(u_0,\dots, u_k)$ and in the second formulation by $V=\operatorname{span}(u_1,\dots, u_k)$. \subsection{Variational eigenvalues for Radon measures} \label{subsection:introvareigenradon} The reader may notice the similarity between the Rayleigh--Steklov quotient $R(u)$ above and the standard Rayleigh quotient for the Neumann eigenvalues on ${\Omega}$, i.e., the eigenvalues of the Laplace--Beltrami operator with Neumann boundary conditions. The latter is given by $$\frac{\int_{\Omega}\vert \nabla u\vert^2 dV_{{\Omega}}}{\int_{{\Omega}}u^2\,dV_{{\Omega}}}.$$ Comparing the two Rayleigh quotients, the only difference is in the denominators; in one case the integral is with respect to the volume measure on the boundary $\Sigma$, in the other it is with respect to the volume measure on ${\Omega}$. This observation led Kokarev~\cite{Ko2014} to introduce variational eigenvalues associated to any nonzero Radon measure $\mu$ on $\Omega$. Let $u\in C^\infty(\Omega)$ with $\int_{\Omega}u^2\,d\mu\neq 0$. The \emph{Rayleigh--Radon quotient} of $u$ is defined to be $$R_{\mu}(u):=\frac{\int_\Omega|\nabla u|^2\,dV_{{\Omega}}}{\int_{\Omega}u^2\,d\mu}.$$ It is then natural to define the \emph{variational eigenvalues} by \begin{equation}\label{def:VarEigenRadon} \lambda_k(\Omega,g,\mu) := \inf_{F_{k+1}}\sup_{f\in F_{k+1}\setminus\{0\}} R_\mu(f), \end{equation} where the infimum is taken over all $(k+1)$-dimensional subspaces $F_{k+1}\subset C^\infty(\Omega)$ such that the image of $F_{k+1}$ in $L^2(\Omega,\mu)$ is also $(k+1)$-dimensional. We will sometimes write $\lambda_k(\mu)$ for $\lambda_k(\Omega,g,\mu)$ when there is no risk of confusion. This setting encompasses many well-known eigenvalue problems. \begin{ex}\label{ex: vareigen weighted lap} Let $\beta:\Omega\longrightarrow\R$ be a continuous positive function.
For $\mu=\beta\,dV_{\Omega}$ the variational eigenvalues $\lambda_k(\mu)$ are the eigenvalues of the non-homogeneous weighted Laplace problem with Neumann boundary conditions: $$\begin{cases} \Delta u=\lambda \beta u&\text{in }\Omega,\\ \partial_{\nu}u=0&\text{on }\partial\Omega. \end{cases}$$ If the boundary $\partial\Omega$ is empty, these are simply the eigenvalues of the weighted Laplace operator $\beta^{-1}\Delta$, without any boundary condition. Note in particular that for $\beta>0$ constant, $\lambda_k(\Omega,\beta)=\beta^{-1}\lambda_k(\Omega)$ are constant multiples of the Neumann eigenvalues of $\Omega$. In the particular case where $\Omega$ is a surface, the Laplace operator induced by the conformal metric $\beta g$ is $\beta^{-1}\Delta$, so that $\lambda_k(\beta\,dV_{\Omega})=\lambda_k(\Delta_{\beta g})$. \end{ex} \begin{ex}\label{ex:vareigenSteklov} Let $\Omega$ be a compact Riemannian manifold with non-empty boundary $\Sigma$ and let $\iota:\Sigma\to\Omega$ be the inclusion. Let $\mu=\iota_\star dV_\Sigma$ be the push-forward of the boundary measure. That is, for each open set $A\subset\Omega$, $$\mu(A)=|A\cap\Sigma|_{\Sigma,g}.$$ Then the variational eigenvalues of the measure $\mu=\iota_\star dV_\Sigma$ are the Steklov eigenvalues of $\Omega$, $\lambda_k(\mu)=\sigma_k(\Omega).$ \end{ex} \begin{ex}\label{ex:varweightedSteklov} Let $\Omega$ be a compact Riemannian manifold with non-empty boundary, let $\iota:\Sigma\to\Omega$ be the inclusion, and let $0\leq \rho\in L^{\infty}(\Sigma, dV_\Sigma)$. Then the variational eigenvalues of the measure $\mu=\iota_\star (\rho\,dV_\Sigma)$ are the eigenvalues $\lambda_k(\mu)=\sigma_k({\Omega},g,\rho)$ of the so-called \emph{weighted Steklov problem} with density $\rho$ given by: $$\begin{cases} \Delta u= 0\,\,&\mbox{in}\,\,{\Omega},\\ \partial_\nu u =\sigma_k\,\rho u \,\,\,&\mbox{on}\,\,\partial{\Omega}. 
\end{cases}$$ Note that if $\rho$ is a strictly positive density, then the weighted Steklov eigenvalues are the eigenvalues of $\frac{1}{\rho}\mathcal{D}$ where $\mathcal{D}$ is the Dirichlet-to-Neumann operator of ${\Omega}$. \end{ex} \begin{ex} Let $\Omega$ be a compact Riemannian manifold with non-empty boundary $\Sigma$ and let $\beta:\Omega\longrightarrow\R$ be a continuous positive function. Consider the measure $\mu=\beta\,dV_\Omega+\iota_\star dV_\Sigma$. Then the variational eigenvalues $\lambda_k(\mu)$ are the eigenvalues of the \emph{dynamical spectral problem}: $$\begin{cases} \Delta u=\lambda \beta u&\text{in }\Omega,\\ \partial_{\nu}u=\lambda u&\text{on }\partial\Omega. \end{cases}$$ Similar problems were studied in~\cite{BeFr2005} and used in~\cite{GiHeLa2021} for the study of isoperimetric-type inequalities for Steklov eigenvalues, as we will see in Section~\ref{section:surfaces}. The eigenvalues of this problem form an unbounded sequence $0=\sigma_{0,\beta}\leq\sigma_{1,\beta}\leq\sigma_{2,\beta}\leq\cdots\to+\infty$. \end{ex} \begin{ex}\label{example:transmission} Let $M$ be a compact $(d+1)$-dimensional manifold without boundary and let $\Omega\subset M$ be a domain with smooth boundary $\Sigma$. Let $\iota:\Sigma\to M$ be the inclusion in the ambient manifold $M$ and $\mu=\iota_{\star}dV_{\Sigma}$ be the boundary measure of $\Omega$. Then the variational eigenvalues of $\mu$ are the eigenvalues of the following transmission eigenvalue problem: $$\begin{cases} \Delta u=0&\text{in }M\setminus\Sigma,\\ (\partial_{\nu^+}+\partial_{\nu^-})u=\tau u&\text{on }\Sigma. \end{cases}$$ They form an unbounded sequence $0=\tau_0\leq\tau_1\leq\tau_2\leq\cdots\to+\infty$. It follows directly from the variational definition that $\sigma_k(\Omega)\leq\tau_k$ for each index $k$. \end{ex} At this point, variational eigenvalues of Radon measures are merely a convenient tool to keep track of various eigenvalue problems.
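As a toy illustration that these variational eigenvalues are genuinely computable, the sketch below (hypothetical code, not from the source) discretises the weighted Neumann problem of Example~\ref{ex: vareigen weighted lap} on the interval $[0,1]$ with constant weight $\beta$; in that case the exact values are $\lambda_k(\beta\,dx)=(k\pi)^2/\beta$.

```python
import numpy as np

# Finite-difference sketch of the weighted Neumann problem on Omega = [0,1]:
# for mu = beta*dx with constant beta > 0, the variational eigenvalues are
# lambda_k(mu) = (k*pi)^2 / beta.
n, beta = 800, 3.0
h = 1.0 / (n - 1)

A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0
A[0, 0], A[0, 1] = 1.0, -1.0        # Neumann condition at x = 0
A[-1, -1], A[-1, -2] = 1.0, -1.0    # Neumann condition at x = 1
A /= h**2

# Generalized problem A u = lambda * beta * u; beta is constant, so divide.
evals = np.sort(np.linalg.eigvalsh(A / beta))

assert abs(evals[0]) < 1e-6                       # lambda_0(mu) = 0
for k in (1, 2, 3):
    exact = (k * np.pi)**2 / beta
    assert abs(evals[k] - exact) / exact < 1e-2
```

With $n=800$ grid points the first three nonzero eigenvalues match $(k\pi)^2/\beta$ to a fraction of a percent, as expected for a second-order scheme.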
However, by restricting to classes of Radon measures that have good functional properties, Girouard, Karpukhin and Lagac\'e~\cite{GiKaLa2021} were able to formulate conditions such that the variational eigenvalues $\lambda_k(\mu)$ form a non-negative, discrete and unbounded sequence and depend continuously on $\mu$. See Appendix~\ref{Section:Radon} for some details. This will be useful when considering the saturation of isoperimetric-type inequalities in Subsection~\ref{subsec:homolarge}. \subsection{Conformal invariance in dimension two}\label{subsec: conf invar} The study of the Steklov spectrum of surfaces often employs different techniques than in higher dimensions due to the following conformal invariance property: \begin{prop}\label{prop: conf invar} Let ${\Omega}$ be a compact surface with boundary $\Sigma$. Suppose $g$ and $g'$ are Riemannian metrics on ${\Omega}$ satisfying both of the following conditions: \begin{enumerate} \item $g'$ is conformally equivalent to $g$, i.e., $g'=\tau g$ for some positive function $\tau\in C^\infty({\Omega})$; \item $\tau\equiv 1$ on $\Sigma$. \end{enumerate} Then the Dirichlet-to-Neumann operators of $({\Omega},g)$ and $({\Omega},g')$ coincide, and $$\stek({\Omega},g)=\stek({\Omega},g').$$ \end{prop} \begin{proof} Since we are in dimension two, the Laplace-Beltrami operators associated with $g$ and $g'$ satisfy $\Delta_{g'}=\frac{1}{\tau}\Delta_g$; thus the condition that a function be harmonic depends only on the conformal class of the metric. Moreover, since the metrics agree on $\Sigma$, the unit normals to the boundary agree and thus the Dirichlet-to-Neumann operators are identical.
\end{proof} We will use the following convenient language introduced by Fraser and Schoen: \begin{defn}\label{sigmaisom} We say two compact Riemannian surfaces $({\Omega}_1,g_1)$ and $({\Omega}_2,g_2)$ are $\sigma$-\emph{isometric} if there exists a diffeomorphism $\Phi:{\Omega}_1\to {\Omega}_2$ such that $\Phi^*g_2=\tau g_1$ where $\tau\in C^\infty({\Omega}_1)$ satisfies $\tau_{|\partial{\Omega}_1}\equiv 1$. \end{defn} \begin{cor}\label{cor:sigmaisom} Suppose $({\Omega}_1,g_1)$ and $({\Omega}_2,g_2)$ are $\sigma$-isometric. Then they have the same Steklov spectrum. Moreover, for $\Phi$ as in Definition~\ref{sigmaisom}, the restriction $\Phi |_{\partial\om_1}:\partial\om_1\to \partial\om_2$ intertwines the Dirichlet-to-Neumann maps of $({\Omega}_1,g_1)$ and $({\Omega}_2,g_2)$. \end{cor} One can give another proof that $\stek({\Omega}_1,g_1)=\stek({\Omega}_2,g_2)$ when $({\Omega}_1,g_1)$ and $({\Omega}_2,g_2)$ are $\sigma$-isometric by appealing to the variational characterisation~\eqref{eq:Stek Rayleigh min max} of the eigenvalues. The numerator of the Rayleigh quotient $R(u)$ in Equation~\eqref{eq:Rayleigh S} is the Dirichlet energy. In the case of surfaces, the Dirichlet energy depends only on the conformal class of the metric. Since the denominator of the Rayleigh quotient depends only on the metric restricted to the boundary, the Steklov spectra agree. A particularly simple and interesting class of $\sigma$-isometric metrics is obtained for surfaces of revolution. \begin{prop}\cite{Br2019} Let $\Omega\subset\R^3$ be a surface of revolution with connected boundary $\partial\Omega=\partial\mathbb D\times\{0\}$. Then $\sigma_k(\Omega)=\sigma_k(\mathbb D)$ where $\mathbb D$ is a disk of the same boundary length. Moreover, the Dirichlet-to-Neumann maps of ${\Omega}$ and $\mathbb D$ coincide (when one identifies the boundary circles of ${\Omega}$ and $\mathbb D$).
\end{prop} The proof of this proposition is surprisingly direct: conformal parametrisations of surfaces of revolution have been known since Liouville, and it follows that any surface of revolution is $\sigma$-isometric to the unit disk. This leads to a gigantic family of surfaces in $\R^3$ that have exactly the same DtN map. \begin{ques}\label{ques:conformal} Describe the class of all smooth compact surfaces $\Omega\subset\R^3$ with boundary $\partial\Omega=\partial\mathbb D$ that admit a conformal parametrisation $\Phi:\mathbb D\to\Omega$ such that $|\Phi'|\equiv 1$ on $\partial\mathbb D$. \end{ques} \begin{remark}\label{rem:con sing} A Riemannian metric $g$ on a surface ${\Omega}$ is said to have an isolated conical singularity at an interior point $p\in{\Omega}$ if in some geodesic disk centered at $p$ with complex coordinate $z$, the metric is expressed in the form $$g=|z|^{2(\alpha-1)}\varphi(z) |dz|^2$$ for some real number $\alpha>0$ (with $\alpha\neq 1$) and some positive smooth function $\varphi$. The cone angle at $p$ is $2\pi\alpha$. If $\varphi\equiv 1$, then the metric in this geodesic disk is isometric to the standard flat cone with cone angle $2\pi\alpha$. A Riemannian metric on ${\Omega}$ that is smooth except for isolated interior conical singularities is conformally equivalent to a smooth metric, although the conformal factor takes the value $0$ or $\infty$ (depending on the cone angle) at the conical singularities. Moreover, under our assumption that all the singularities are at interior points, one can choose the conformal factor to be identically one on the boundary. We extend the notion of $\sigma$-isometry to allow conformal equivalences of this type; i.e., we allow the conformal factor $\tau$ in Definition~\ref{sigmaisom} to take on the values $0$ and $\infty$ at finitely many interior points.
The Steklov spectrum is still well-defined via the variational characterisation of eigenvalues for compact Riemannian surfaces with isolated interior conical singularities. Moreover, Corollary~\ref{cor:sigmaisom} continues to hold with the extended definition of $\sigma$-isometry. The ability to conformally remove conical singularities without affecting the Steklov spectrum plays a role in the construction of smooth metrics that maximise normalised Steklov eigenvalues on surfaces; see Section~\ref{sec:existence}. \end{remark} \subsection{Steklov eigenvalue bounds}\label{subsec: prelim bounds} Since rescaling a Riemannian metric has the effect of rescaling all the Steklov eigenvalues, one must choose a scale-invariant normalisation in order to address eigenvalue bounds. The most commonly used normalisation is via the boundary volume: $\sigma_k({\Omega},g)|\Sigma|_g^{\frac{1}{d}}$. A second normalisation is via the volume of the manifold ${\Omega}$ itself: $\sigma_k({\Omega},g)|{\Omega}|_g^{\frac{1}{d+1}}$. Motivated by the isoperimetric ratio, Karpukhin and M\'etras recently introduced a normalisation involving both the boundary volume and the volume of ${\Omega}$; see Section~\ref{section:boundshigher}. Note that in the case of surfaces ${\Omega}$, $\sigma$-isometries (see Definition~\ref{sigmaisom}) allow one to adjust the area arbitrarily without affecting the Steklov eigenvalues. Thus the boundary length normalisation is the most natural one in this case. The area normalisation has occasionally been used in dimension two when restricting, say, to the case of plane domains. Due to the conformal invariance of the Steklov spectrum in dimension two, the techniques used for addressing eigenvalue bounds for surfaces typically differ substantially from higher dimensions. We first address here the non-existence of lower eigenvalue bounds, other than the trivial bound of zero, for manifolds of arbitrary dimension.
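Before doing so, a toy computation makes the scale-invariance of these normalisations concrete. The sketch below (hypothetical code; it assumes only the classical Steklov spectrum of the Euclidean disk, $\sigma_k(D_R)=\lceil k/2\rceil/R$ for $k\geq 1$) checks that rescaling changes $\sigma_k$ but leaves the perimeter-normalised eigenvalue $\sigma_k|\partial\Omega|_g$ unchanged (here $d=1$).

```python
import math

def sigma_disk(k: int, R: float) -> float:
    # Classical Steklov spectrum of the Euclidean disk of radius R:
    # 0, 1/R, 1/R, 2/R, 2/R, ... with eigenfunctions r^m cos(m t), r^m sin(m t).
    return math.ceil(k / 2) / R

for k in range(1, 6):
    normalised = [sigma_disk(k, R) * (2 * math.pi * R) for R in (0.5, 1.0, 7.3)]
    assert sigma_disk(k, 0.5) != sigma_disk(k, 1.0)   # sigma_k itself is not scale-invariant
    assert max(normalised) - min(normalised) < 1e-12  # sigma_k * |boundary| is
```

For $k=1$ the normalised value is $2\pi$, the constant that reappears in Weinstock's inequality later in this survey.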
\begin{prop}\label{prop: no lower bound} Let ${\Omega}$ be a compact $(d+1)$-dimensional manifold with boundary, and let $k\in \N$. Then: \begin{enumerate} \item[1.] \cite[Subsection 2.2]{GiPo2010}; \cite[Section 4]{GiPo2017} For every $\epsilon >0$, there exists a Riemannian metric $g_\epsilon$ on ${\Omega}$ such that $\sigma_k({\Omega},g_\epsilon)<\epsilon$. The metrics can be chosen so that $|{\Omega}|_{g_\epsilon}=|{\Omega}|$ and $|\partial\om|_{g_\epsilon}=|\partial\om|$, independently of $\epsilon$. \item[2.] \cite[Proposition 2.1]{CoElGi2019}. Moreover, if $d+1\geq 3$, then given any Riemannian metric $g$ on ${\Omega}$, the metrics $g_\epsilon$ as above can be chosen so that they are conformally equivalent to $g$ and coincide with $g$ on the boundary $\Sigma$. \end{enumerate} \end{prop} \begin{proof} (1) Fix $L>0$ and let $\eta>0$. Choose a metric $h_\eta$ such that $({\Omega}, h_\eta)$ contains a cylinder $C_{L,\eta}$ isometric to $[0,L]\times \mathbb{B}^{d}_\eta$ with lateral boundary $[0,L]\times \mathbb{S}^{d-1}_\eta$ contained in $\Sigma$. See Figure~\ref{fig:cylinder}. \begin{figure} \includegraphics[width=3cm]{Figures/cylinder.pdf} \caption{A thin cylinder $C_{L,\eta}$ with lateral boundary contained in $\Sigma$.} \label{fig:cylinder} \end{figure} Let $E\in\mathcal{E}(k+1)$ be the subspace spanned by the constant function 1 together with $u_1, \dots, u_k$, where $u_j\equiv 0$ off $C_{L,\eta}$ and $u_j(t,x) = \sin(\frac{j\pi t}{L})$ for $(t,x)\in C_{L,\eta}$. Then an easy computation shows that there exists a constant $A$ depending only on $L$, $k$ and $d$, not on $\eta$, such that the Rayleigh--Steklov quotient satisfies $R(u)\leq A\eta$ for all $u\in E$. Thus by Equation~\eqref{eq:Stek Rayleigh min max}, $\sigma_k({\Omega}, h_\eta)\leq A\eta$, and the first statement in (1) follows by choosing $g_\epsilon=h_\eta$ with $\eta$ sufficiently small. To guarantee that the measure of $\Omega$ and of its boundary are independent of $\epsilon$, one then needs to perform some scaling and local perturbations.
(2) Fix $g$ and observe that if $\beta$ is a positive smooth function on ${\Omega}$, then the Dirichlet energy of a function $u$ with respect to the conformal metric $g'=\beta^2 g$ is $$\int_{\Omega}\vert \nabla u\vert^2_g \,\beta^{d-1}\,dV_{(\Omega,g)}.$$ The desired result is achieved by fixing a point $p\in \Sigma$ and a small neighborhood $U$ of $p$ in ${\Omega}$ and choosing $\beta$ so that: (i) $\beta\equiv 1$ both on $\Sigma$ and on the complement of $U$ in ${\Omega}$; (ii) $0<\beta<1$ in $\operatorname{int}(U)$; and (iii) $\beta$ is very close to zero in $\operatorname{int}(U)$ away from a small neighborhood of $\partial{U}$. One then applies Equation~\eqref{eq:Stek Rayleigh min max} choosing $E\in\mathcal{E}(k+1)$ to consist of functions supported in $U$. See \cite{CoElGi2019} for details. \end{proof} As discussed at the beginning of this subsection, the commonly used eigenvalue normalisations in the literature involve only the volume of the manifold and the volume of its boundary. With respect to any such normalisation, Proposition~\ref{prop: no lower bound} implies: \begin{cor}\label{cor: no lower bound} Let ${\Omega}$ be a compact manifold with boundary. Then for each $k\in \N$ there exist Riemannian metrics on ${\Omega}$ for which the $k$th normalised Steklov eigenvalue is arbitrarily small. \end{cor} Section~\ref{section:surfaces} will focus on upper bounds for eigenvalues on surfaces, normalised by boundary length. Such bounds always exist and depend only on the topology of the surface. However, for manifolds ${\Omega}$ of higher dimension, Colbois, El Soufi and Girouard \cite{CoElGi2019} showed that within every conformal class of metrics, the normalised eigenvalue $\sigma_1({\Omega},g)|\Sigma|_g^{1/d}$ can be made arbitrarily large. Very interesting questions arise concerning eigenvalue bounds when one imposes geometric constraints in these higher dimensional settings, which is the subject of Section~\ref{section:boundshigher}.
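Returning to the proof of Proposition~\ref{prop: no lower bound}, the thin-cylinder mechanism of part (1) can be checked quantitatively. The following sketch (hypothetical code, not from the source) evaluates the Rayleigh--Steklov quotient of the individual test functions $u_j(t,x)=\sin(j\pi t/L)$ by quadrature, under the assumption that $u_j$ vanishes off the cylinder so the boundary integral reduces to the lateral part; the quotient is linear in $\eta$, with the closed form $\eta(j\pi/L)^2/d$.

```python
import numpy as np

def trap(y, t):
    # simple trapezoidal rule (avoids depending on np.trapz / np.trapezoid)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

# Rayleigh--Steklov quotient of u_j(t,x) = sin(j*pi*t/L) on the thin
# cylinder [0,L] x B^d_eta.  Assumption: u_j vanishes off the cylinder,
# so the boundary integral is over the lateral part [0,L] x S^{d-1}_eta.
def rayleigh(j, L, eta, d, n=20001):
    t = np.linspace(0.0, L, n)
    # |B^d_eta| = |B^d| eta^d and |S^{d-1}_eta| = d |B^d| eta^(d-1);
    # the common factor |B^d| cancels in the quotient.
    num = eta**d * trap((j * np.pi / L)**2 * np.cos(j * np.pi * t / L)**2, t)
    den = d * eta**(d - 1) * trap(np.sin(j * np.pi * t / L)**2, t)
    return num / den

L, d = 2.0, 2
for j in (1, 2, 3):
    r1, r2 = rayleigh(j, L, 0.1, d), rayleigh(j, L, 0.05, d)
    assert abs(r1 / r2 - 2.0) < 1e-9        # R(u_j) is linear in eta
    exact = 0.1 * (j * np.pi / L)**2 / d    # closed form: eta * (j*pi/L)^2 / d
    assert abs(r1 - exact) / exact < 1e-6
```

This makes explicit why a constant $A$ independent of $\eta$ exists: shrinking the cylinder's radius shrinks the Dirichlet energy faster than the boundary mass.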
\subsection{Mixed eigenvalue problems}\label{subsec: mixed} While our focus in this survey will be on the Steklov problem, it will sometimes be useful to consider mixed Steklov-Neumann or mixed Steklov-Dirichlet problems. In particular, as we will discuss in the next subsection, one can sometimes obtain bounds on Steklov eigenvalues of a Riemannian manifold ${\Omega}$ by comparing the Steklov eigenvalues of ${\Omega}$ with the eigenvalues of mixed problems on well-chosen domains in ${\Omega}$. Given a decomposition $\partial\om=\partial_S({\Omega})\,\sqcup\,\partial_N({\Omega})$, one defines the mixed Steklov-Neumann problem \begin{gather*} \Delta f=0\mbox{ in } {{\Omega}},\\ \partial_\nu f=\sigma f\mbox{ on } \partial_S({\Omega}),\qquad \partial_\nu f=0 \mbox{ on }\ \partial_N({\Omega}). \end{gather*} The eigenvalues of this mixed problem form a discrete sequence $$0=\sigma_0^N( {\Omega}) \le \sigma_1^N( {\Omega})\leq\sigma_2^N({\Omega})\leq\cdots\nearrow\infty,$$ and for each $k\geq 1$ the $k$-th eigenvalue is given by \begin{equation}\label{eq:Rayleigh S-N} \sigma_k^{N}({\Omega})=\min_{E\in\mathcal {E}(k+1)}\max_{0\neq u\in E}\frac{\int_{{\Omega}}|\nabla u|^2\,dV_{\Omega}}{\int_{\partial_S({\Omega})}|u|^2\,dV}.\end{equation} Similarly, the mixed Steklov-Dirichlet problem is given by \begin{gather*} \Delta f=0\mbox{ in } {{\Omega}},\\ \partial_\nu f=\sigma f\mbox{ on } \partial_S({\Omega}),\qquad f=0 \mbox{ on }\ \partial_D({\Omega}) \end{gather*} relative to a decomposition $\partial\om=\partial_S({\Omega})\,\sqcup\,\partial_D({\Omega})$, and the eigenvalues form a discrete sequence $$0<\sigma_0^D( {\Omega}) \le \sigma_1^D( {\Omega})\leq\sigma_2^D({\Omega})\leq\cdots\nearrow\infty.$$ Their variational characterisation is given by \begin{equation}\label{eq:Rayleigh S-D} \sigma_k^{D}({\Omega})=\min_{E\in\mathcal {E}_0(k+1)}\max_{0\neq u\in E}\frac{\int_{{\Omega}}|\nabla u|^2\,dV_{\Omega}}{\int_{\partial_S({\Omega})}|u|^2\,dV}, \end{equation}
where $\mathcal {E}_0(k+1)$ consists of all $(k+1)$-dimensional subspaces of $\{f\in H^1({\Omega})\,:\,f=0\,\text{ on }\partial_D({\Omega})\}$. \begin{ex}[Mixed Steklov--Neumann and Steklov--Dirichlet eigenvalues of cylinders]\label{example:mixedcylinders}~ (i) Let $C_L$ be the cylinder in Example~\ref{example: cylinder}. Consider the mixed problem on $C_L$ with Steklov condition at $t=0$ and Neumann condition at $t=L$: \begin{gather*} \Delta f=0\mbox{ in } {C_L},\\ \partial_\nu f=\sigma f\mbox{ on } \{0\}\times M,\qquad \partial_\nu f=0 \mbox{ on } \{L\}\times M. \end{gather*} The eigenvalues are $$\sigma_k^N(C_L)=\sqrt{\lambda_k(M)}\tanh(\sqrt{\lambda_k(M)}L),\quad k\geq 0,$$ with corresponding eigenfunctions $$\varphi_k(x)\cosh(\sqrt{\lambda_k(M)}(L-t)).$$ In particular $\sigma_0^N(C_L)=0$, with constant eigenfunction. Notice that for each index $k$, $$\lim_{L\to 0}\sigma_k^N(C_L)=0\qquad\text{and}\qquad\lim_{L\to+\infty}\sigma_k^N(C_L)=\sqrt{\lambda_k(M)}.$$ (ii) Next consider the Steklov--Dirichlet problem on $C_L$ with Steklov condition at $t=0$ and Dirichlet condition at $t=L$. The eigenvalues are $\sigma_0^D(C_L)=1/L$, with corresponding eigenfunction $1-t/L$, and for each $k\geq 1$, $$\sigma_k^D(C_L)=\sqrt{\lambda_k(M)}\coth(\sqrt{\lambda_k(M)}L),$$ with corresponding eigenfunctions $$\varphi_k(x)\sinh(\sqrt{\lambda_k(M)}(L-t)).$$ \medskip For each index $k$, we have $$\lim_{L\to 0}\sigma_k^D(C_L)=+\infty\qquad\text{and}\qquad\lim_{L\to+\infty}\sigma_k^D(C_L)=\sqrt{\lambda_k(M)}.$$ Observe that $$\sigma_k^D=\sqrt{\lambda_k(M)}+O(k^{-\infty})\qquad\text{and}\qquad\sigma_k^N=\sqrt{\lambda_k(M)}+O(k^{-\infty}).$$ \end{ex} \begin{ex} \label{annuli} The mixed Steklov-Dirichlet and Steklov-Neumann eigenvalues on annular domains. In $\R^{d+1}$ ($d\ge 2$), let $B_1$ and $B_L$ be the balls centered at the origin of radius $1$ and $L$, respectively, with $L>1$.
Consider the annulus ${\Omega}_L:=B_L \setminus B_1$ with Steklov condition on $\partial B_1$ and Dirichlet (resp. Neumann) condition on $\partial B_L$. Because of the symmetries of the problem, the eigenvalues of both mixed problems occur with multiplicities. We denote the distinct eigenvalues respectively by $$ \sigma_{(0)}^D(\Omega_L)< \sigma_{(1)}^D(\Omega_L)<\sigma_{(2)}^D(\Omega_L)<... $$ and $$ \sigma_{(0)}^N(\Omega_L)< \sigma_{(1)}^N(\Omega_L)<\sigma_{(2)}^N(\Omega_L)<... $$ where $\sigma_{(k)}^D(\Omega_L)$ and $\sigma_{(k)}^N(\Omega_L)$ have the multiplicity of the $k$th eigenvalue of the Laplacian on the sphere $\mathbb S^d$. Then, we have (\cite[Propositions 4 and 5]{CoVe2021}): $$ \sigma_{(k)}^D(\Omega_L)= \frac{k}{L^{2k+d-1}-1} +\frac{(k+d-1)L^{2k+d-1}}{L^{2k+d-1}-1} $$ and $$ \sigma_{(k)}^N(\Omega_L)=k \frac{(d+k-1)(L^{2k+d-1}-1)}{kL^{2k+d-1}+(k+d-1)}. $$ In particular: \medskip For $k\ge 0$, \begin{equation} \label{Dirichlet} \lim_{L\to \infty} \sigma^D_{(k)}(\Omega_L)=k+d-1, \end{equation} and for $k>0$, \begin{equation} \label{Neumann} \lim_{L\to \infty} \sigma^N_{(k)}(\Omega_L)=k+d-1. \end{equation} It may appear surprising that for every $k>0$, $$\lim_{L\to\infty}\,\left(\sigma_{(k)}^N(\Omega_L)-\sigma_{(k)}^D(\Omega_L)\right)=0.$$ Intuitively, the reason is that each Steklov-Neumann eigenfunction of $\Omega_L$ is obtained by separation of variables as a product of a radial function with the corresponding eigenfunction of the sphere. Observe that the denominator in the Rayleigh quotient is an integral over the inner boundary sphere only. When the annulus is very large, the radial function must eventually decay towards zero as the radius grows. This example is used in the proof of Theorem \ref{upperrev}. \end{ex} The calculation of the (non-mixed) Steklov spectrum of an annulus $\Omega_L$ is much more complicated.
The asymptotics are given in \cite[Proposition 3.1]{FrSc2019} by Fraser and Schoen, and $\sigma_1$ is given in \cite[Theorem 4.1]{Ft2022} by Ftouhi. In dimension two, the computation was already carried out by Dittmar in \cite{Di2004} and presented in \cite{GiPo2017}. \subsection{Dirichlet-Neumann bracketing}\label{subsec:Dir Neum bracketing} Let ${\Omega}$ be a compact Riemannian manifold, let $\Sigma=\partial\om$, and let $A\subset \Omega$ be a domain satisfying $\Sigma \subset \partial A$. We denote by $\partial_IA$ the intersection of the boundary of $A$ with the interior of $\Omega$, and we suppose that it is smooth. Thus we have $\partial A=\Sigma\sqcup \partial_IA$. Consider the mixed Steklov-Neumann and mixed Steklov-Dirichlet eigenvalue problems on $A$, where we impose the Steklov condition on $\Sigma$ and Neumann or Dirichlet conditions on $\partial_IA$. Comparing the variational formulae~\eqref{eq:Rayleigh S-N} and \eqref{eq:Rayleigh S-D} (with $A$ playing the role of ${\Omega}$) with the variational formula~\eqref{eq:Stek Rayleigh min max}, we obtain the following bracketing for each $k$: \begin{gather}\label{ineq:compmixed} \sigma_k^N(A) \le \sigma_k(\Omega) \le \sigma_{k}^D(A). \end{gather} As a consequence, we have the following: \begin{prop}\label{prop: dir neum bracket} Let $({\Omega},g)$ and $({\Omega}',g')$ be compact Riemannian manifolds with connected boundary. Suppose that some neighborhood of $\partial\om$ in ${\Omega}$ is isometric to a neighborhood of $\partial\om'$ in ${\Omega}'$. Identify these neighborhoods and call them $A$. Then with boundary conditions chosen as above, we have $$|\sigma_k({\Omega})-\sigma_k({\Omega}')|\leq \sigma_k^D(A)-\sigma_k^N(A).$$ \end{prop} This simple observation illustrates again that the geometry far away from the boundary has limited effect on the Steklov eigenvalues. (Compare with Theorem~\ref{thm: hislop lutzer}.)
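The closed-form mixed eigenvalues of Examples~\ref{example:mixedcylinders} and~\ref{annuli} make the bracketing~\eqref{ineq:compmixed} easy to sanity-check numerically. The sketch below (hypothetical code, not from the source) verifies that for the cylinder $\sigma_k^N(C_L)\le\sqrt{\lambda_k(M)}\le\sigma_k^D(C_L)$ with both sides converging as $L\to\infty$, and that for the annuli both families of mixed eigenvalues converge to $k+d-1$.

```python
import math

# Mixed eigenvalues of the cylinder [0,L] x M (Example "mixedcylinders"):
def cyl_N(lam, L):  # Steklov at t=0, Neumann at t=L
    return math.sqrt(lam) * math.tanh(math.sqrt(lam) * L)

def cyl_D(lam, L):  # Steklov at t=0, Dirichlet at t=L
    return math.sqrt(lam) / math.tanh(math.sqrt(lam) * L)

for lam in (1.0, 4.0, 10.0):
    for L in (0.5, 1.0, 5.0):
        # tanh < 1 < coth gives sigma^N <= sqrt(lam) <= sigma^D
        assert cyl_N(lam, L) <= math.sqrt(lam) <= cyl_D(lam, L)
    # both mixed eigenvalues converge to sqrt(lam) as L -> infinity
    assert abs(cyl_N(lam, 20.0) - math.sqrt(lam)) < 1e-8
    assert abs(cyl_D(lam, 20.0) - math.sqrt(lam)) < 1e-8

# Mixed eigenvalues of the annulus Omega_L = B_L \ B_1 (Example "annuli"):
def ann_D(k, d, L):
    a = L**(2 * k + d - 1)
    return k / (a - 1) + (k + d - 1) * a / (a - 1)

def ann_N(k, d, L):
    a = L**(2 * k + d - 1)
    return k * (d + k - 1) * (a - 1) / (k * a + (k + d - 1))

d = 2
for k in (1, 2, 3):
    assert abs(ann_D(k, d, 1e4) - (k + d - 1)) < 1e-3
    assert abs(ann_N(k, d, 1e4) - (k + d - 1)) < 1e-3
```

In particular the common limit $k+d-1$ of the two annulus families, which may appear surprising in the text, is visible numerically already for moderate $L$.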
\begin{ex}[Manifolds with cylindrical boundary neighborhood]\label{cylindricalend} Given $L> 0$, let $\Omega_L$ be a compact manifold with connected boundary $\Sigma$ such that a neighborhood of the boundary is isometric to $\Sigma\times [0,L]$. It follows from Example~\ref{example:mixedcylinders} and the bracketing inequality~\eqref{ineq:compmixed} that \begin{gather}\label{ineq:DNbracketCylinder} \sqrt{\lambda_k(\Sigma)}\tanh(\sqrt{\lambda_k(\Sigma)}L)\le \sigma_k(\Omega_L) \le\sqrt{\lambda_k(\Sigma)}\coth(\sqrt{\lambda_k(\Sigma)}L). \end{gather} This inequality has many interesting consequences and leads to natural questions. For instance, $\tanh(x)<1<\coth(x)$ for all $x>0$, and one checks that as $x\to\infty$, $$\tanh(x),\coth(x)=1+O(x^{-\infty}).$$ Thus for each $N\in\N$, $\lim_{x\to\infty}(1-\tanh(x))x^N=0= \lim_{x\to\infty}(1-\coth(x))x^N$. Now the Weyl law for the eigenvalues of the Laplace operator implies that $\lambda_k(\Sigma)\sim c(d)k^{2/d}$ as $k\to\infty$, where $d=\dim\Sigma$. Since inequality~\eqref{ineq:DNbracketCylinder} says $$\sqrt{\lambda_k(\Sigma)}\left(\tanh(\sqrt{\lambda_k(\Sigma)}L)-1\right)\le \sigma_k(\Omega_L)-\sqrt{\lambda_k(\Sigma)} \le\sqrt{\lambda_k(\Sigma)}\left(\coth(\sqrt{\lambda_k(\Sigma)}L)-1\right),$$ it follows that $|\sigma_k-\sqrt{\lambda_k(\Sigma)}|=O(k^{-\infty}).$ In other words, for manifolds $\Omega_L$ with a cylindrical boundary neighborhood, the Steklov eigenvalues are intimately linked to the Laplace eigenvalues of the boundary: $$\sigma_k=\sqrt{\lambda_k(\Sigma)}+O(k^{-\infty}),$$ a much stronger link than in the general case of Equation~\eqref{eq: sigma = lambda +O(1)}. Another interesting consequence of inequality~\eqref{ineq:DNbracketCylinder} is that manifolds $\Omega_L$ containing a cylindrical neighborhood of the boundary of length $L$ have precisely controlled Steklov eigenvalues.
For each fixed $k$, as $L$ goes to infinity, we have $$\sigma_k=\sqrt{\lambda_k(\Sigma)}+O(L^{-\infty}).$$ In particular, when $L$ is very large, not only are the Steklov spectrum and the spectrum of $\sqrt{\Delta_{\Sigma}}$ asymptotically close, but in fact \emph{all} their eigenvalues are very close. \end{ex} \begin{remark} In the example above, we assumed that the boundary $\Sigma=\partial\Omega$ is connected only for simplicity. In case the number of connected components of $\Sigma$ is $b>1$, the exact same Dirichlet--Neumann bracketing inequality~\eqref{ineq:DNbracketCylinder} is obtained, but the eigenvalue $0$ of $\Sigma$ has multiplicity $b$, so that $\lambda_1(\Sigma)=\dots=\lambda_{b-1}(\Sigma)=0$. This implies that for the eigenvalues $\sigma_1(\Omega),\dots,\sigma_{b-1}(\Omega)$, the lower bound becomes trivial. This shows that the global geometry of $\Omega$ has a greater impact on the first $b-1$ non-zero Steklov eigenvalues of ${\Omega}$ and leads to interesting questions (see Open Question~\ref{question lower}). \end{remark} \medskip \subsection{Upper bounds, regularity and uniform approximation of domains} \label{sec:upperboundsunifapprox} Over the years, many people have proved various upper bounds for Steklov eigenvalues in terms of geometric and topological features of a manifold. These results are usually stated in the class of smooth compact manifolds with boundary. In particular, the boundaries of these manifolds are themselves smooth. However, it is interesting to know which of these results can be extended to manifolds with boundary that are not smooth. A particularly interesting case is that of bounded domains with Lipschitz boundary in a complete Riemannian manifold $M$. Any bounded Lipschitz domain ${\Omega}$ in $\R^n$ can be nicely approximated by a sequence of domains ${\Omega}_j$ with smooth boundary; see, e.g., Verchota \cite{Ve1984} for details.
Mitrea and Taylor \cite{MiTa1999} observed that the analogous statement holds for bounded Lipschitz domains in complete Riemannian manifolds. Several recent results address stability of Steklov eigenvalues and of weighted Steklov eigenvalues under suitable domain perturbations. Bucur and Nahon gave a sufficient condition for stability in the case of plane domains. Shortly after, Bucur, Giacomini and Trebeschi \cite[Theorem 4.1]{BuGiTr2020} obtained a stability result for Steklov eigenvalues of bounded domains in $\R^n$. Karpukhin and Lagac\'e, using \cite[Lemma 3.1]{GiLa2021}, extended these results to domains in arbitrary complete Riemannian manifolds. In the case of connected surfaces ${\Omega}$ with smooth boundary, there exist upper bounds, depending only on the topology of the surface, on the perimeter-normalised Steklov eigenvalues. These will be discussed in the next section. The stability results allow these bounds to be extended to domains with Lipschitz boundary and to weighted Steklov eigenvalues: \begin{cor}\label{rem: Lipschitz bound}\cite{KaLa2022} (See also \cite{ADGHRS2022}.) Let ${\Omega}={\Omega}_{\gamma,b}$ be the orientable surface of genus $\gamma$ with $b$ boundary components and let $$\sigma^*_k(\gamma,b)=\sup_g\,\sigma_k({\Omega},g)|\partial{\Omega}|_g$$ where the supremum is over all smooth Riemannian metrics on ${\Omega}$. Let $D$ be a bounded domain with Lipschitz boundary in a complete Riemannian surface $(M,g)$. If $D$ is orientable of genus $\gamma$ with $b$ boundary components, then $\sigma_k(D,g)|\partial D|_g\leq \sigma^*_k(\gamma,b).$ Moreover, the same eigenvalue bounds hold for all weighted Steklov eigenvalue problems on $D$; i.e., $\sigma_k(D,g,\rho)\|\rho\|_{L^1(\partial D)}\leq \sigma^*_k(\gamma,b)$ for all non-negative densities $\rho\in L^1(\partial D)$ that are not identically zero.
\end{cor} In the higher-dimensional setting, there do not exist upper bounds on the normalised Steklov eigenvalues that depend only on the topology, but, as will be discussed later in this survey, bounds are known for domains in a complete Riemannian manifold $M$ subject to geometric constraints on $M$. As a consequence of the stability results, using any of the normalisations of eigenvalues discussed in Subsection~\ref{subsec: prelim bounds}, one has: \begin{prop}\cite[Theorem 3.5]{KaLa2022} Let $(M,g)$ be a complete Riemannian manifold. Then any normalised Steklov eigenvalue bound that is valid for all bounded domains in $(M,g)$ with smooth boundary is also valid for all domains in $(M,g)$ with Lipschitz boundary. \end{prop} Karpukhin and Lagac\'e also prove \cite[Theorem 1.5]{KaLa2022} under the same hypotheses that such eigenvalue bounds for normalised Steklov eigenvalues of domains in $M$ extend to weighted normalised Steklov eigenvalues, provided that one uses the normalisation introduced in \cite{KaMe2021}. (See Definition~\ref{def: conf ext pair} later in this survey for the definition of this normalisation for weighted Steklov eigenvalues.) \section{Upper bounds for Steklov eigenvalues on surfaces and homogenisation} \label{section:surfaces} The initial impetus for studying Steklov eigenvalues from a geometric point of view came from Weinstock's 1954 paper~\cite{We1954}. He proved that among all simply-connected planar domains $\Omega\subset\R^2$ of prescribed perimeter $L=|\partial\Omega|$, the first nonzero eigenvalue $\sigma_1$ is maximal if and only if $\Omega$ is a disk. \begin{thm}[Weinstock, 1954]\label{thm:Weinstock} Let $\Omega\subset\R^2$ be a bounded simply-connected domain with smooth boundary. Then \begin{equation}\label{ineq:weinstock} \sigma_1L\leq 2\pi, \end{equation} with equality if and only if $\Omega$ is a disk. \end{thm} The proof of this theorem is prototypical and it will be useful to have it fresh in our minds.
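Before turning to the proof, it is worth recording the standard computation behind the fact, used below, that $\sigma_1(\mathbb D)=\sigma_2(\mathbb D)=1$:

```latex
% Harmonic functions on the unit disk decompose into the harmonic polynomials
% r^k cos(k theta) and r^k sin(k theta). On {r=1} the outward normal derivative
% is the radial derivative, and
\[
  \partial_r\bigl(r^k\cos k\theta\bigr)\Big|_{r=1}
  = k\cos k\theta = k\,u\big|_{r=1},
\]
% so each k >= 1 contributes the eigenvalue k with multiplicity two, while the
% constants give the eigenvalue 0:
\[
  \sigma_0(\mathbb D)=0,\qquad
  \sigma_{2k-1}(\mathbb D)=\sigma_{2k}(\mathbb D)=k\quad(k\geq 1).
\]
% In particular sigma_1(D) L(\partial D) = 1 * 2 pi = 2 pi, the equality case
% in Weinstock's inequality.
```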
The first step is to use the Riemann mapping theorem to obtain a conformal diffeomorphism $\Phi:\Omega\to\mathbb D$. The regularity of the boundary $\partial\Omega$ implies that $\Phi$ extends to a diffeomorphism $\overline{\Phi}:\overline{\Omega}\to\overline{\mathbb D}$. Now the first nonzero Steklov eigenvalue of the unit disk $\mathbb D$ is 1, and it has multiplicity two. That is, $\sigma_1(\mathbb D)=\sigma_2(\mathbb D)=1<\sigma_3(\mathbb D)$. The corresponding eigenspace is the span of the coordinate functions $\pi_1,\pi_2:\mathbb D\to\R$ defined by $\pi_i(x)=x_i$. Precomposing these functions with $\Phi$ leads to functions $u_i:=\pi_i\circ\Phi:\Omega\to\R$ that Weinstock wants to use in the variational characterisation of $\sigma_1(\Omega)$. In order to do so, one must ensure that these functions are admissible: $\int_{\partial\Omega}u_i\,ds=0$ for both $i=1,2$. This is not true for an arbitrary conformal diffeomorphism $\Phi$, but the group of conformal automorphisms of $\mathbb D$ is rich enough to ensure the existence of a $\Phi$ for which this holds. This is in fact the easiest occurrence of the now classical \emph{center of mass method} which was put forward by Hersch in~\cite{He1970} following earlier work by Szeg\H{o}. Using this well-chosen conformal diffeomorphism $\Phi$, we see that \begin{equation*} \sigma_1(\Omega)\int_{\partial\Omega}u_i^2\,ds\leq\int_{\Omega}|\nabla u_i|^2\,dA. \end{equation*} Summing over $i=1,2$ and using the conformal invariance of the Dirichlet energy (which holds because $\Omega$ is 2-dimensional), this leads to the sought inequality: \begin{equation*} \sigma_1(\Omega)\int_{\partial\Omega}1\,ds\leq\int_{\mathbb D}|\nabla\pi_1|^2+|\nabla\pi_2|^2\,dA=2\pi. \end{equation*} \begin{remark} The Weinstock inequality also holds for simply-connected domains with Lipschitz boundary. See the discussion in section~\ref{sec:upperboundsunifapprox}. Moreover, it also holds for compact simply-connected Riemannian surfaces with boundary. 
\end{remark} \subsection{Upper bounds for multiply-connected domains and surfaces} Several results for higher-ranked eigenvalues $\sigma_k$ of multiply-connected domains and compact surfaces with boundary were also obtained by various authors, who replaced the conformal equivalence obtained from the Riemann mapping theorem by proper holomorphic covers $\Phi:\Omega\to\mathbb D$ that are known as Ahlfors maps. The bounds that are obtained in this way are not sharp in general. They depend on the degree of the cover $\Phi$, which in turn depends on the genus $\gamma$ of the surface and on the number $b$ of connected components of its boundary $\partial\Omega$. See in particular the results of Girouard and Polterovich~\cite{GiPo2012}. More recently, this was improved by Karpukhin, who obtained the following in~\cite[Theorem 1.4]{Ka2017}. \begin{thm}\label{thm:karp hersch} Let $\Omega$ be a compact oriented Riemannian surface with boundary. Then for each $p,q\in\N$ the following holds: \begin{gather*} \sigma_p\sigma_qL^2\leq \pi^2 \begin{cases} (p+q+2\gamma+2b-3)^2&\text{ if }p+q\text{ is odd},\\ (p+q+2\gamma+2b-2)^2&\text{ if }p+q\text{ is even}. \end{cases} \end{gather*} In particular, setting $p=q=k$ leads to \begin{gather}\label{ineq:karpukhinuppersurface} \sigma_kL\leq 2\pi(\gamma+b+k-1). \end{gather} \end{thm} The proof of this result is based on results of Yang and Yu~\cite{YaYu2017} that allow comparison of Steklov problems on differential $p$-forms with eigenvalues of the Laplace--Beltrami operator on the boundary (see section~\ref{stek.forms}). In the case of surfaces, the boundary is a union of circles and the eigenvalues of the tangential Laplacian are known explicitly. \begin{remark} It is common knowledge that using trial functions in variational characterisations of eigenvalues rarely leads to sharp upper bounds.
\begin{enumerate} \item For $\gamma=0$, $b=1$, and $k=1$, one recovers Weinstock's result, in which case the bound \eqref{ineq:karpukhinuppersurface} is sharp. \item For $\gamma=0$, $b=1$, and arbitrary $k$, one recovers from~\eqref{ineq:karpukhinuppersurface} a result of Hersch, Payne and Schiffer~\cite{HePaSc1975}, which is known to be sharp thanks to a construction of Girouard and Polterovich~\cite{GiPo2010}. \item For $\gamma=0$, $b=2$ and $k=1$, the best upper bound is known thanks to the work of Fraser and Schoen~\cite{FrSc2016}: it is attained by the so-called \emph{critical catenoid}, for which $\sigma_1L\approx 4\pi/1.2$. See Example~\ref{ex.cat}. \end{enumerate} \end{remark} \begin{remark}\label{rem:has cgr}~ \begin{enumerate} \item In early upper bounds on $\sigma_k$, the index $k$ appeared as a multiplicative rather than an additive factor. The first upper bound to feature additivity is due to Hassannezhad~\cite{Ha2011}, who proved the existence of $A,B>0$ such that $\sigma_kL\leq A\gamma + Bk$. Note in particular that the number of connected components of the boundary does not appear in this inequality. \item The dependence on the genus is essential, since Colbois, Girouard and Raveendran~\cite{CoGiRa2018} have constructed surfaces with arbitrarily large normalised first eigenvalue $\sigma_1L$ and with connected boundary. \end{enumerate} \end{remark} \subsection{Upper bound for surfaces of genus 0}\label{section:uppergenus0} For $\varepsilon\in(0,1)$, let $A_\varepsilon:=B(0,1)\setminus B(0,\varepsilon)$ be a planar annulus. It was observed in~\cite{GiPo2017} that for $\varepsilon>0$ small enough, $\sigma_1(A_\varepsilon)L(\partial A_\varepsilon)>2\pi$.
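This observation is easy to reproduce numerically: on $A_\varepsilon$ the Steklov problem separates into a logarithmic mode for $n=0$ and, for each Fourier mode $n\geq 1$, a $2\times 2$ generalized eigenvalue problem for the coefficients of $(ar^n+br^{-n})\cos n\theta$. The following sketch is our own model computation based on these standard separated solutions:

```python
import numpy as np
from scipy.linalg import eigvals

def steklov_annulus(eps, n_max=5):
    """Steklov eigenvalues of the annulus B(0,1) minus B(0,eps)."""
    sigmas = [0.0]  # constants give the eigenvalue 0
    # n = 0 mode: u = a + b*log(r) gives the single eigenvalue -(1+eps)/(eps*log(eps)).
    sigmas.append(-(1 + eps) / (eps * np.log(eps)))
    for n in range(1, n_max + 1):
        # u = (a r^n + b r^{-n}) cos(n theta); the Steklov condition on both circles
        # (outward normal is +d/dr at r=1 and -d/dr at r=eps) gives M v = sigma N v.
        M = np.array([[n, -n],
                      [-n * eps ** (n - 1), n * eps ** (-n - 1)]])
        N = np.array([[1.0, 1.0],
                      [eps ** n, eps ** (-n)]])
        vals = eigvals(M, N).real
        sigmas.extend(list(vals) * 2)  # cos and sin modes: multiplicity two
    return np.sort(np.array(sigmas))

eps = 0.1
sig = steklov_annulus(eps)
L = 2 * np.pi * (1 + eps)           # total boundary length of A_eps
print(sig[1] * L / (2 * np.pi))     # ratio > 1, i.e. sigma_1 L > 2*pi
```

For $\varepsilon=0.1$ the first nonzero eigenvalue comes from the $n=1$ mode and the normalised eigenvalue already exceeds $2\pi$ by roughly $7\%$.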
This shows that the simple-connectedness assumption in Weinstock's result is genuinely necessary, which raises the question of how large the perimeter-normalised eigenvalue $\sigma_1(\Omega)L(\partial\Omega)$ can be among all bounded planar domains $\Omega\subset\R^2$ with smooth boundary, without assuming simple-connectivity. The Riemann mapping theorem is not available in this case, and we have seen above that one could use the Ahlfors map instead, in which case the resulting upper bounds depend on the number of connected components of its boundary. In~\cite{Ko2014} Kokarev proposed instead to use the stereographic parametrisation $\Phi:\R^2\to \Sp^2$. The group of automorphisms of $\Sp^2$ is rich enough to ensure the existence of a conformal diffeomorphism $F:\Sp^2\to\Sp^2$ for which the functions $u_i:=\pi_i\circ F\circ\Phi:\Omega\to\R$ satisfy $\int_{\partial\Omega}u_i\,ds=0$ for $i\in\{1,2,3\}$ as well as $u_1^2+u_2^2+u_3^2=1$ identically on $\partial\Omega$. This is the Hersch renormalisation trick again, as in the proof above of the Weinstock inequality (Theorem~\ref{thm:Weinstock}). Because the stereographic projection is a conformal map, it follows as above that $$\sigma_1L\leq \int_{F(\Phi(\Omega))}\sum_{i=1}^3|\nabla\pi_i|^2\,dA.$$ An easy computation shows that $\sum_{i=1}^3|\nabla\pi_i|^2=2$ pointwise, so that $$\sigma_1L\leq 2\text{Area}(F(\Phi(\Omega)) )<2\text{Area}(\Sp^2)=8\pi.$$ This led Kokarev \cite[Theorem A1]{Ko2014} to the following result. \begin{thm}\label{thm:KokarevGenus0} Let $\Omega$ be a compact surface with boundary of genus $\gamma=0$. Then \begin{gather}\label{ineq:Kokarev} \sigma_1L< 8\pi. \end{gather} \end{thm} It is then natural to investigate the sharpness of inequality~\eqref{ineq:Kokarev}, which amounts to the construction of surfaces of genus 0 with large enough perimeter-normalised $\sigma_1L$. 
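The pointwise identity $\sum_{i=1}^3|\nabla\pi_i|^2=2$ used in Kokarev's argument above follows from a one-line computation on the sphere:

```latex
% The tangential gradient on S^2 of the coordinate function pi_i(x) = x_i is the
% projection of the constant vector e_i onto the tangent plane at x:
%   \nabla^{S^2}\pi_i = e_i - \langle e_i, x\rangle x,
% so |\nabla^{S^2}\pi_i|^2 = 1 - x_i^2. Summing over i and using |x|^2 = 1:
\[
  \sum_{i=1}^{3}\bigl|\nabla^{\Sp^2}\pi_i\bigr|^2
  = \sum_{i=1}^{3}\bigl(1-x_i^2\bigr) = 3-|x|^2 = 2 .
\]
```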
\subsection{Interlude: using homogenisation theory to obtain large Steklov eigenvalues}\label{subsec:homolarge} In~\cite{GiHeLa2021}, Girouard, Henrot and Lagac\'e studied homogenisation of the Steklov problem by periodic perforation. The results are expressed in terms of the Neumann and dynamical eigenvalue problems, which we now recall. Let $\Omega\subset\R^{d+1}$ be a bounded domain with smooth boundary $\partial\Omega$. Recall that the Neumann spectral problem is $$\begin{cases} \Delta f=\lambda f&\text{in }\Omega,\\ \partial_{\nu}f=0&\text{on }\partial\Omega. \end{cases}$$ Its spectrum consists of an unbounded sequence of real numbers $$0=\lambda_0<\lambda_1\leq\lambda_2\leq\cdots\to+\infty.$$ The \emph{dynamical spectral problem with parameter $\beta\in [0,\infty)$} is \begin{gather}\label{eq:dynamiceigen} \begin{cases} \Delta f=\beta \sigma f&\text{in }\Omega,\\ \partial_{\nu}f=\sigma f&\text{on }\partial\Omega. \end{cases} \end{gather} Its spectrum consists of an unbounded sequence of real numbers $$0=\sigma_0<\sigma_{1,\beta}\leq\sigma_{2,\beta}\leq\cdots\to+\infty.$$ Readers are invited to look at the work of von Below and Fran\c{c}ois~\cite{BeFr2005} for details. For $\beta=0$, the dynamical eigenvalues of $\Omega$ coincide with Steklov eigenvalues: $\sigma_{k,0}(\Omega)=\sigma_k(\Omega)$. As $\beta\to+\infty$, the relative importance of the spectral parameter in the boundary condition seems to disappear. This is captured in the following result~\cite{GiHeLa2021}. \begin{thm}\label{thm:dynamiclargebeta} For each $k\in\N$, the eigenvalue $\sigma_{k,\beta}$ depends continuously on $\beta$ and satisfies $$\lim_{\beta\to+\infty}\beta \sigma_{k,\beta}=\lambda_k.$$ \end{thm} For the purpose of studying Steklov eigenvalues, the importance of the dynamical eigenvalue problem~\eqref{eq:dynamiceigen} is that it appears as the limit problem for periodic homogenisation by perforation. 
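Theorem~\ref{thm:dynamiclargebeta} can be checked by hand in the simplest model. On the interval $\Omega=(0,1)$, with the geometers' convention $\Delta=-\tfrac{d^2}{dx^2}$, the first nonzero dynamical eigenfunction is $\sin(\omega(x-\tfrac12))$ with $\beta\sigma_{1,\beta}=\omega^2$, where $\omega\in(0,\pi)$ solves $\omega\tan(\omega/2)=\beta$; as $\beta\to\infty$ one gets $\omega\to\pi$, so $\beta\sigma_{1,\beta}\to\pi^2=\lambda_1$. A quick numerical check of this model computation (a sketch under the stated conventions, not the general proof):

```python
import math
from scipy.optimize import brentq

def beta_sigma_1(beta):
    """Return beta * sigma_{1,beta} for the model dynamical problem on (0,1).

    With Delta = -d^2/dx^2, the first nonzero dynamical eigenfunction is
    sin(w(x - 1/2)) with beta*sigma = w^2, where w in (0, pi) solves
    w * tan(w/2) = beta (the boundary condition at both endpoints).
    """
    w = brentq(lambda w: w * math.tan(w / 2) - beta, 1e-9, math.pi - 1e-12)
    return w * w

for b in (1, 10, 100, 1000):
    print(b, beta_sigma_1(b))  # increases towards pi^2 = lambda_1 of (0,1)
```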
Given $\varepsilon>0$, let $\Omega^\varepsilon\subset\Omega$ be the domain obtained by removing balls $B(p,r_\varepsilon)\subset\Omega$ of radius $r_\varepsilon>0$ centered at the points $p$ of the periodic lattice $\varepsilon\mathbf Z^{d+1}$. See Figure~\ref{figure:Omegaeps}. \begin{figure} \includegraphics[width=9cm]{Figures/OmegaN.pdf} \caption{The perforated domain $\Omega^\varepsilon\subset\R^2$ in the planar case.} \label{figure:Omegaeps} \end{figure} The behaviour of the Steklov eigenvalues $\sigma_k(\Omega^\varepsilon)$ as $\varepsilon\to 0$ then depends on the choice of radius $r_\varepsilon\in (0,\varepsilon)$. The following result should be compared with the classical \emph{crushed ice problem}. See for instance the work of Rauch and Taylor~\cite{RaTa1975}, and of Cioranescu and Murat~\cite{CiMu1982}. \begin{thm}[\cite{GiHeLa2021}]\label{thm:PeriodicHomoeConstantRadius} Let $\Omega\subset\R^{d+1}$ be a bounded domain with smooth boundary $\partial\Omega$. Let $T^\varepsilon\subset\Omega$ be the union of all balls $B(p,r_\varepsilon)$ with $p\in\varepsilon\mathbf Z^{d+1}$ that are included in $\Omega$, and consider the perforated domain $\Omega^\varepsilon:=\Omega\setminus\overline{T^\varepsilon}$. The asymptotic behaviour of $\sigma_k(\Omega^\varepsilon)$ as $\varepsilon\to 0$ depends on the parameter\footnote{It is implicitly understood that $r_\varepsilon$ is chosen so that this limit exists.} \begin{gather}\label{eq:betaregimehomo} \beta:=\frac{1}{|\Sp^d|}\lim_{\varepsilon\to 0}r_\varepsilon^{d}/\varepsilon^{d+1}\in[0,+\infty].
\end{gather} \begin{description} \item[Small-holes regime] If $\beta=0$, then $$|\partial T^\varepsilon|\to 0\quad\text{ and }\quad \sigma_k(\Omega^\varepsilon)\to\sigma_k(\Omega).$$ \item[Large-holes regime] If $\beta=+\infty$, then $$|\partial T^\varepsilon|\to \infty\quad\text{ and }\quad\sigma_k(\Omega^\varepsilon)\to 0.$$ \item[Critical regime] If $\beta\in (0,\infty)$, then $$|\partial T^\varepsilon|\to \beta |\Omega|\quad\text{ and }\quad \sigma_k(\Omega^\varepsilon)\to\sigma_{k,\beta}.$$ \end{description} \end{thm} \begin{remark} The convention that we use for the dynamical eigenvalue problem is slightly different from that of~\cite{GiHeLa2021}, where the constant $A_d:=|\Sp^d|$ was built into the problem itself, while here we have simply introduced it in the definition of the constant~\eqref{eq:betaregimehomo} controlling the homogenisation regime. This is clearer for our purpose, and in particular will be compatible with the situation where $\beta$ is a density function rather than a constant. \end{remark} \begin{remark} The large-holes regime could be deduced from known upper bounds for $\sigma_k$. For instance, it follows from the earlier work of Colbois, Girouard and El Soufi~\cite{CoElGi2011} that any domain $\Omega\subset\R^{d+1}$ with boundary of large measure has small Steklov eigenvalues. See the discussion below Proposition~\ref{prop:bestupperboundDomainSphere}. \end{remark} The critical regime is particularly interesting for planar domains. Indeed, in this case $d=1$ and it follows that \begin{gather}\label{eq:limithomoplanar} \sigma_k(\Omega^\varepsilon)L(\partial\Omega^\varepsilon)\xrightarrow{\varepsilon\to0} \sigma_{k,\beta}(\Omega)(L(\partial\Omega)+\beta |\Omega|)\xrightarrow{\beta\to+\infty}\lambda_k(\Omega)|\Omega|, \end{gather} where the second limit is a direct consequence of Theorem~\ref{thm:dynamiclargebeta}. 
This suggests a link between maximisation of perimeter-normalised Steklov eigenvalues and maximisation of area-normalised Neumann eigenvalues. In particular, the Szeg\H{o}--Weinberger inequality states that for each bounded planar domain $\Omega\subset\R^2$, $$\lambda_1(\Omega)|\Omega|\leq\lambda_1(\mathbb D)\pi\approx 3.39\pi,$$ hence the largest possible value for the RHS of~\eqref{eq:limithomoplanar} is $\lambda_1(\mathbb D)\pi$. A simple diagonal argument then leads to the existence of a family $\Omega^\varepsilon\subset\R^2$ with $\sigma_1(\Omega^\varepsilon)L(\partial\Omega^\varepsilon)\xrightarrow{\varepsilon\to 0}\lambda_1(\mathbb D)\pi$. In combination with the above Theorem~\ref{thm:KokarevGenus0} of Kokarev, this shows that $$\lambda_1(\mathbb D)\pi\leq \sup\{\sigma_1(\Omega)L(\partial\Omega)\,:\,\Omega\subset\R^2\}\leq 8\pi.$$ In this perforation procedure, the balls $B(p,r_\varepsilon)$ all have the same radius $r_\varepsilon$. Could one obtain larger Steklov eigenvalues by relaxing this constraint? To explore this question, given a constant $\alpha>d$ and a positive continuous function $\beta:\Omega\to\R$, Girouard, Karpukhin and Lagac\'e~\cite{GiKaLa2021} introduced the family of functions $r_{\varepsilon,\alpha}:\Omega\to\R$ defined by $$r_{\varepsilon,\alpha}(p)=\left(\frac{\varepsilon^\alpha}{|\Sp^d|}\beta(p)\right)^{1/d}.$$ This function now determines the radii of the various balls to be removed from $\Omega$, with the situation where $\beta$ is constant and $\alpha=d+1$ corresponding to the critical regime in Theorem~\ref{thm:PeriodicHomoeConstantRadius}. Given $\varepsilon>0$, let $\Omega^\varepsilon\subset\Omega$ be the domain obtained by removing balls $B(p,r_{\varepsilon,\alpha}(p))\subset\Omega$ of radius $r_{\varepsilon,\alpha}(p)>0$ centered at the points $p$ of the periodic lattice $\varepsilon\mathbf Z^{d+1}$.
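Returning briefly to the constant in the Szeg\H{o}--Weinberger inequality quoted above: $\lambda_1(\mathbb D)=(j'_{1,1})^2$, the square of the first positive zero of the Bessel derivative $J_1'$, and the numerical value can be checked with \texttt{scipy}:

```python
from scipy.special import jnp_zeros

# lambda_1 of the Neumann Laplacian on the unit disk equals (j'_{1,1})^2,
# the square of the first positive zero of J_1'.
p = jnp_zeros(1, 1)[0]   # j'_{1,1} ~ 1.8412
lam1 = p ** 2
print(lam1)              # ~ 3.39, so lambda_1(D)|D| = lam1 * pi ~ 3.39 pi
```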
The behaviour of the Steklov eigenvalues $\sigma_k(\Omega^\varepsilon)$ as $\varepsilon\to 0$ depends on the choice of density function $\beta$ and on the parameter $\alpha$. It is convenient to discuss the results in terms of variational eigenvalues of Radon measures. See Section~\ref{subsection:introvareigenradon} for a quick overview and Appendix~\ref{Section:Radon} for more details. The inclusion $\iota:\partial\Omega^\varepsilon\to\overline{\Omega^\varepsilon}$ allows the definition of the push-forward measure $\mu_\alpha^\varepsilon=\iota_*dV_{\partial\Omega^\varepsilon}$ on $\overline{\Omega^\varepsilon}$, which we call the boundary measure of the perforated domain $\Omega^\varepsilon$. The corresponding renormalised probability measures are $$\overline{\mu_\alpha^\varepsilon}=\frac{\mu_\alpha^\varepsilon}{|\partial\Omega^\varepsilon|}.$$ It follows from the definition~\ref{def:VarEigenRadon} of variational eigenvalues and from~Example~\ref{ex:vareigenSteklov} that $$\lambda_k(\Omega^\varepsilon,\overline{\mu_\alpha^\varepsilon})=\sigma_k(\Omega^\varepsilon)|\partial\Omega^\varepsilon|. $$ The behaviour of these measures as $\varepsilon\to 0$ depends on the parameter $\alpha>d$. This is expressed in terms of weak-$\star$ convergence to limit measures. Girouard, Karpukhin and Lagac\'e~\cite[Theorem 6.2]{GiKaLa2021} proved the convergence of the corresponding variational eigenvalues. \begin{thm}\label{thm:HomoBdryGKL} Let $\Omega\subset\R^{d+1}$ be a bounded domain with smooth boundary $\Sigma$ and let $\beta:\Omega\to\R$ be a positive continuous function. For each $\varepsilon>0$ small enough, let $T^\varepsilon\subset\Omega$ be the union of all balls $B(p,r_{\varepsilon,\alpha}(p))$ with $p\in\varepsilon\mathbf Z^{d+1}$ that are included in $\Omega$, and consider the perforated domain $\Omega^\varepsilon:=\Omega\setminus\overline{T^\varepsilon}$. 
\begin{description} \item[Small-holes regime] If $\alpha>d+1$, then the measures $\overline{\mu_\alpha^\varepsilon}$ concentrate on the boundary $\Sigma$: $$\overline{\mu_\alpha^\varepsilon}\xrightarrow{\varepsilon\to0}\overline{dV_{\Sigma}}.$$ Moreover, $$\sigma_k(\Omega^\varepsilon)|\partial\Omega^\varepsilon|=\lambda_k(\Omega^\varepsilon,\overline{\mu_\alpha^\varepsilon})\xrightarrow{\varepsilon\to0}\lambda_k(\Omega,\overline{dV_{\Sigma}})=\sigma_k(\Omega)|\Sigma|.$$ \item[Large-holes regime] If $\alpha\in (d,d+1)$, then the boundary becomes negligible in the limit and $\beta dV_\Omega$ dominates: $$\overline{\mu_\alpha^\varepsilon}\xrightarrow{\varepsilon\to0}\overline{\beta dV_{\Omega}}.$$ Moreover, $$\sigma_k(\Omega^\varepsilon)|\partial\Omega^\varepsilon| = \lambda_k(\Omega^\varepsilon,\overline{\mu_\alpha^\varepsilon})\xrightarrow{\varepsilon\to0}\lambda_k(\Omega,\overline{\beta dV_{\Omega}}) = \lambda_k(\Omega,\beta)\int_{\Omega}\beta\,dV_\Omega.$$ \item[Critical regime] If $\alpha=d+1$, then both the boundary and interior measures persist in the limit: $$\overline{\mu_\alpha^\varepsilon}\xrightarrow{\varepsilon\to0}\overline{dV_{\Sigma}+\beta dV_{\Omega}}.$$ Moreover, $$\sigma_k(\Omega^\varepsilon)|\partial\Omega^\varepsilon|=\lambda_k(\Omega^\varepsilon,\overline{\mu_\alpha^\varepsilon})\xrightarrow{\varepsilon\to0}\lambda_k(\Omega,\overline{dV_{\Sigma}+\beta dV_{\Omega}})=\sigma_{k,\beta}(\Omega)(|\Sigma|+\int_{\Omega}\beta\,dV_\Omega).$$ \end{description} \end{thm} The proof of Theorem~\ref{thm:HomoBdryGKL} is based on continuity properties of variational eigenvalues associated to the Radon measures $\overline{\mu_\alpha^\varepsilon}$. See Theorem~\ref{thm:ContinuityRadon}. For a simply connected domain $\Omega\subset\R^2$, particularly interesting density functions $\beta\in C^\infty(\Omega)$ can be obtained by considering conformal maps $\Phi_\delta$ from the disk $\mathbb D$ to punctured spheres $C_\delta=\Sp^2\setminus B(p,\delta)$. 
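These maps are built from the stereographic parametrisation of the plane, whose conformal factor is the classical $4/(1+|x|^2)^2$ (the maps $\Phi_\delta$ below differ from it by homotheties). As a sanity check on the area bookkeeping in this construction, one can verify numerically that this factor integrates to the area of $\Sp^2$:

```python
import math
from scipy.integrate import quad

# Conformal factor of the standard stereographic parametrisation R^2 -> S^2:
# Phi* g_{S^2} = beta * g_eucl with beta(x) = 4/(1+|x|^2)^2. Integrating it
# over the plane (in polar coordinates) must recover Area(S^2) = 4*pi.
area, _ = quad(lambda r: 4.0 / (1.0 + r * r) ** 2 * 2.0 * math.pi * r,
               0.0, math.inf)
print(area / math.pi)  # ~ 4
```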
In particular, if $\Omega=\mathbb D$ these maps can be constructed explicitly as a composition of the stereographic parametrisation with homotheties of the plane. The pullback of the round metric $g_{\Sp^2}$ is of the form $\Phi_\delta^{\star}g_{\Sp^2}=\beta_{\delta}g_{\text{eucl}}$, for some positive $\beta_\delta\in C^\infty(\overline{\mathbb D})$. It follows from conformal invariance of the Dirichlet energy that the Neumann eigenvalues of $C_\delta$ can be expressed as weighted Neumann eigenvalues of $\Omega$: $$\lambda_k(C_\delta,g_{\Sp^2})=\lambda_k(\Omega,\beta_{\delta}dV_\Omega).$$ Now, it is well known that the Neumann eigenvalues of a punctured closed Riemannian manifold converge to the eigenvalues of the manifold as the radius of the puncture goes to zero (see for instance the work of Ann\'e~\cite{An1986}). In particular, $\lambda_k(C_\delta)\xrightarrow{\delta\to 0}\lambda_k(\Sp^2)$. Combining this observation with the large-holes regime of~Theorem~\ref{thm:HomoBdryGKL} leads to a family of perforated domains $\Omega^{\varepsilon}\subset\Omega$ such that $$\sigma_k(\Omega^\varepsilon)L(\partial\Omega^\varepsilon)\xrightarrow{\varepsilon\to0} \lambda_k(\Omega,\beta_\delta)\int_{\Omega}\beta_\delta\,dV_\Omega=\lambda_k(C_\delta)|C_\delta|\xrightarrow{\delta\to 0}\lambda_k(\Sp^2)\times 4\pi.$$ Since $\lambda_1(\Sp^2)=2$, this shows that Kokarev's inequality is sharp and provides a complete solution for the isoperimetric problem for $\sigma_1$ of planar domains. \begin{thm}[\cite{Ko2014,GiKaLa2021}] \label{thm:IsopStekOnePlanar} Let $\Omega\subset\R^2$ be a bounded domain with sufficiently regular boundary.
Then, $\sigma_1(\Omega)L(\partial\Omega)<8\pi.$ Moreover, there exists a family $\Omega^\varepsilon\subset\R^2$ such that $$\sigma_1(\Omega^\varepsilon)L(\partial\Omega^\varepsilon)\xrightarrow{\varepsilon\to 0}8\pi.$$ \end{thm} \begin{figure} \includegraphics[width=9cm]{Figures/PerforatedDisk.png} \caption{By using appropriate density functions $\beta$, we obtain perforated disks saturating Kokarev's bound.} \label{figure:PerforatedDisk} \end{figure} Instead of using the round metric $g_{\Sp^2}$ on the sphere, one can use an arbitrary metric $g$ and proceed exactly as above to obtain planar domains $\Omega^\varepsilon$ such that $\sigma_k(\Omega^\varepsilon)L(\partial\Omega^\varepsilon)\xrightarrow{\varepsilon\to 0}\lambda_k(\Sp^2,g)\text{Area}(\Sp^2,g)$. The best upper bound for these area-normalised eigenvalues was obtained by Karpukhin, Nadirashvili, Penskoi and Polterovich in~\cite{KNPP2020}: $$\lambda_k(\Sp^2,g)\text{Area}(\Sp^2,g)\leq 8\pi k.$$ This shows that $\sup\{\sigma_k(\Omega)L(\partial\Omega)\,:\Omega\subset\R^2\}\geq 8\pi k$. However, $8\pi k$ is also an upper bound, as we will see shortly (see Theorem~\ref{thm: GiKaLa 8pi}). \subsection{Best upper bounds for Steklov eigenvalues and conformal eigenvalues} Thus far we have been discussing homogenisation in the Euclidean setting where we have a periodic procedure for perforating a domain. In order to address domains in compact Riemannian manifolds, Girouard and Lagac\'e~\cite{GiLa2021} extended the homogenisation procedure of~\cite{GiHeLa2021} to the non-periodic setting by using Vorono\u{\i} tessellations associated to maximal $\varepsilon$-separated subsets in a closed Riemannian manifold $(M,g)$. \begin{thm}[\cite{GiLa2021}]\label{thm:homoclosed} Let $(M,g)$ be a closed Riemannian manifold and let $\beta:M\to\R_{>0}$ be a continuous function.
There exists a family $\Omega^\varepsilon\subset M$ such that $|\partial\Omega^\varepsilon|\xrightarrow{\varepsilon\to 0}\int_{M}\beta\,dV_g$, $|\Omega^\varepsilon|\xrightarrow{\varepsilon\to 0}|M|_g$ and for each $k\in\N$, \begin{gather}\label{eq:limithomomanifold} \sigma_k(\Omega^\varepsilon)\xrightarrow{\varepsilon\to 0}\lambda_k(\beta^{-1}\Delta_g). \end{gather} \end{thm} For surfaces, this suggests a link between maximisation of perimeter-normalised Steklov eigenvalues for domains $\Omega\subset M$ and maximisation of area-normalised eigenvalues of the Laplace operator. This is best expressed by introducing the \emph{conformal eigenvalues} of a closed Riemannian manifold $(M,g)$ of dimension $d+1$. They are defined by \begin{gather}\label{eq:defConformalEigenvalues} \lambda_k^*(M,[g]):=\sup_{h\in[g]}\lambda_k(M,h)\text{Vol}(M,dV_h)^{2/(d+1)}. \end{gather} They were introduced and studied in~\cite{CoEl2003}, following work of Korevaar~\cite{Ko1993} who showed that they are finite. In their paper~\cite{KaSt2020}, Karpukhin and Stern discovered a link between the Steklov eigenvalues of domains in a closed surface and the eigenvalues of the Laplacian on that surface. In particular they proved that for any domain $\Omega\subset M$, the following strict inequalities hold: \begin{gather}\label{eq:karp stern} \sigma_1(\Omega,g)L(\partial\Omega)<\lambda_1^*(M,[g])\qquad \text{ and }\qquad\sigma_2(\Omega,g)L(\partial\Omega)<\lambda_2^*(M,[g]). \end{gather} It follows from Theorem~\ref{thm:homoclosed} that these inequalities are sharp, which raises the question of whether similar inequalities hold for arbitrary index $k$. Girouard, Karpukhin and Lagac\'e answered this question in the affirmative: \begin{thm}[\cite{GiKaLa2021}]\label{thm: GKL sharp} Let $M$ be a closed surface. Then for each $k\in\N$ and for each domain $\Omega\subset M$, \begin{gather}\label{ineq:StekConfEigen} \sigma_k(\Omega)L(\partial\Omega)\leq\lambda_k^*(M,[g]).
\end{gather} Moreover, for each $k\in\N$, there exists a family of domains $\Omega^\varepsilon\subset M$ such that $$\sigma_k(\Omega^\varepsilon)L(\partial\Omega^\varepsilon)\xrightarrow{\varepsilon\to0}\lambda_k^*(M,[g]).$$ \end{thm} The family $\Omega^\varepsilon$ is obtained as a direct consequence of Theorem~\ref{thm:homoclosed} while the upper bound is proved using continuity properties of the variational eigenvalues associated to Radon measures. Indeed, given a domain $\Omega\subset M$ with boundary $\Sigma$, let $\mu=\iota_{\star}dA$ be the boundary measure of $\Omega$ in $M$. Recall from Example~\ref{example:transmission} that the variational eigenvalues of $\mu$ are the transmission eigenvalues $\tau_k(\Omega)$ of $\Omega\subset M$. One can construct a family $g_\varepsilon=\beta_{\varepsilon}g$ of conformal Riemannian metrics that concentrates on $\partial\Omega$ in the weak-$\star$ sense: $dV_{g_\varepsilon}\xrightarrow{\varepsilon\to 0}\mu.$ It was proved in~\cite{GiKaLa2021} that the corresponding variational eigenvalues also converge: $$\lambda_k(M,g_\varepsilon)=\lambda_k(M,g,dV_{g_\varepsilon})\xrightarrow{\varepsilon\to 0}\lambda_k(M,g,\mu)=\tau_k(\Omega)\geq\sigma_k(\Omega).$$ See Theorem~\ref{thm:ContinuityRadon}. Inequality~\eqref{ineq:StekConfEigen} now follows from the definition of the conformal eigenvalue $\lambda_k^*(M,[g])$. \begin{ques}\label{ques:karpstern} Can inequality~\eqref{ineq:StekConfEigen} be improved to a strict inequality for $k\geq 3$? \end{ques} The conformal eigenvalues of many surfaces are known explicitly. For instance, it was proved in~\cite{KNPP2020} that $\lambda_k^*(\Sp^2,g_{\Sp^2})=8\pi k$. Because planar domains are conformally equivalent to spherical domains, this implies a complete solution of the isoperimetric problems for $\sigma_k$. \begin{thm}[\cite{GiKaLa2021}]\label{thm: GiKaLa 8pi} Let $\Omega\subset\R^2$ be a bounded domain with sufficiently regular boundary.
Then, $\sigma_k(\Omega)L(\partial\Omega)\leq8\pi k.$ Moreover there exists a family $\Omega^\varepsilon\subset\R^2$ such that $$\sigma_k(\Omega^\varepsilon)L(\partial\Omega^\varepsilon)\xrightarrow{\varepsilon\to 0}8\pi k.$$ \end{thm} \begin{remark}\label{rem:b to infty} For $M$ a closed surface, define \begin{equation}\label{eq:lambdak*M} \lambda_k^*(M)=\sup_g\,\lambda_k(M,g)\operatorname{Area}(M,g) \end{equation} where the supremum is over all Riemannian metrics on $M$. Similarly, for ${\Omega}$ a compact surface with boundary, define \begin{equation}\label{eq:sigmak*M} \sigma_k^*({\Omega})=\sup_g\,\sigma_k({\Omega},g)L(\partial{\Omega},g) \end{equation} where again the supremum is over all Riemannian metrics on ${\Omega}$. Now let $M_b$ be the compact surface with boundary obtained by removing $b$ disjoint disks from $M$. Then as a consequence of Theorem~\ref{thm: GKL sharp} and its proof by homogenisation, we have $\sigma_k^*(M_b)\leq \lambda_k^*(M)$ for all $b$ and $$\lim_{b\to\infty}\,\sigma_k^*(M_b)=\lambda_k^*(M).$$ In particular, letting $M$ be the 2-sphere, we have in the notation of Remark~\ref{rem: Lipschitz bound} that $$\lim_{b\to\infty}\sigma^*_1(0,b)=8\pi.$$ \end{remark} \subsection{Stability and quantitative isoperimetry} Whenever a sharp inequality is known, it becomes interesting to investigate the case of almost equality. For instance, the case of equality in Weinstock's theorem (Theorem~\ref{thm:Weinstock}) states that for simply-connected domains, $\sigma_1(\Omega)L(\partial\Omega)=2\pi$ if and only if $\Omega$ is a disk. This raises the question of whether a domain $\Omega$ having $\sigma_1(\Omega)L(\partial\Omega)$ near $2\pi$ implies that $\Omega$ is near a disk, and if so, in which sense. In~\cite{BuNa2021}, Bucur and Nahon gave a negative answer to that question. Let us state a particularly striking case of their result here. \begin{thm}\label{thm:BucurNahon} Let $\Omega\subset\R^2$ be a simply-connected planar domain. 
Then there exists a family $\Omega^\varepsilon$ of simply-connected planar domains such that $\Omega^\varepsilon\xrightarrow{\text{Hausdorff}}\Omega$ while for each $k\in\N$, $$\sigma_k(\Omega^\varepsilon)L(\partial\Omega^\varepsilon)\xrightarrow{\varepsilon\to0} 2\pi\sigma_k(\mathbb D).$$ \end{thm} In particular, $\sigma_1(\Omega^\varepsilon)L(\partial\Omega^\varepsilon)\to 2\pi$, while the domains $\Omega^{\varepsilon}$ approach a domain $\Omega$ which could be very different from a disk. \begin{remark}\label{rem:BucurNahon} The proof of Theorem~\ref{thm:BucurNahon} is based on a boundary-homogenisation method. Let $\Phi:\Omega\to\mathbb D$ be a conformal diffeomorphism. Then, because $\partial\Omega$ is smooth, $\Phi$ extends to a diffeomorphism up to the boundary, and one can define the pullback measure $\mu:=\Phi^\star(ds)=|\Phi'(s)|\,ds$ on $\partial\Omega$. Because the Dirichlet energy is conformally invariant, the Steklov problem on $\mathbb D$ is isospectral to a weighted Steklov problem on $\Omega$, which we express using the variational eigenvalue associated to this Radon measure: $\lambda_k(\Omega,\mu)=\sigma_k(\mathbb D)$. See section~\ref{subsection:introvareigenradon} for a quick overview of variational eigenvalues of Radon measures. The proof is then based on perturbations $\Omega^\varepsilon$ of $\Omega$ by small oscillations of its boundary that lead to smooth approximations of the measure $\mu$ by the boundary measures of $\Omega^\varepsilon$, so that $\sigma_k(\Omega^\varepsilon)\to\lambda_k(\Omega,\mu)$. \end{remark} The above result shows that the first perimeter-normalised Steklov eigenvalue is not stable under Hausdorff perturbations of the domain. However, the story is completely different if we change the notion of proximity between $\Omega^\varepsilon$ and $\Omega$ that we use.
Indeed, let $\Omega^\varepsilon$ be a family of bounded simply-connected domains such that $L(\partial\Omega^\varepsilon)=2\pi$ and $\sigma_1(\Omega^\varepsilon)\xrightarrow{\varepsilon\to 0} 1$. Let $\Phi_\varepsilon:\mathbb D\to\Omega^\varepsilon$ be conformal diffeomorphisms and let $\mu_\varepsilon:=\Phi_\varepsilon^{\star}ds=\Theta_\varepsilon\,ds$, where $\Theta_\varepsilon(z)=|\Phi_\varepsilon'(z)|\in C^\infty(\partial\mathbb D)$. Because the group of conformal automorphisms of $\mathbb D$ is rich enough, it is possible to choose $\Phi_\varepsilon$ so that these measures have their center of mass at the origin: $\int_{\partial\mathbb D}\pi_i\,d\mu_\varepsilon=0$ for all $\varepsilon$ and $i\in\{1,2\}$. This follows from a topological argument that was introduced by Hersch~\cite{He1970} following work of Szeg\H{o}. Observe that $$\lambda_1(\mathbb D,\mu_\varepsilon)=\sigma_1(\Omega^\varepsilon)\xrightarrow{\varepsilon\to 0}1.$$ The following is a reformulation of \cite[Proposition 3.1]{BuNa2021}. \begin{thm} There are constants $\delta_0\in (0,1)$ and $C>0$ with the following property. Let $\Theta\in L^\infty(\partial\mathbb D)$ be a positive density such that $\int_{\partial\mathbb D}\pi_i\,\Theta ds=0$ for $i\in\{1,2\}$ and such that $\int_{\partial\mathbb D}\Theta\,ds=2\pi$. If $\lambda_1(\mathbb D,\Theta\,ds)>\delta_0$, then $$\lambda_1(\mathbb D,\Theta\,ds)\leq \frac{1}{1+C\|\Theta-1\|_{H^{-1/2}}^2}.$$ \end{thm} This beautiful result shows that in this particular norm, stability is restored for the Weinstock inequality after transplantation to the disk by an appropriate conformal map. This should be compared with the recent paper~\cite{KNPS2021} by Karpukhin, Nahon, Polterovich and Stern, where the stability of isoperimetric inequalities for eigenvalues of the Laplace operator on surfaces is studied. The situation for arbitrary bounded planar domains $\Omega\subset\R^2$ is quite different. 
Indeed, Theorem~\ref{thm:IsopStekOnePlanar} states that $\sigma_1L<8\pi$ is a sharp upper bound, but since the inequality is strict, there does not exist a planar domain realising this bound. However, we can still obtain interesting information regarding planar domains $\Omega\subset\R^2$ such that $\sigma_1(\Omega)L(\partial\Omega)$ is close to $8\pi$. In this case again, there is flexibility in the geometry of the maximising sequence. Indeed, it was proved in~\cite{GiKaLa2021} that one can start with any simply-connected domain $\Omega\subset\R^2$ with smooth boundary, and construct a family of domains $\Omega^\varepsilon\subset\Omega$, obtained by perforation, such that $$\sigma_1(\Omega^\varepsilon)L(\partial\Omega^\varepsilon)\xrightarrow{\varepsilon\to0}8\pi.$$ However, one may still obtain geometric information on maximising sequences in this situation. The following result is a corollary of~\cite[Theorem 2.1]{GiKaLa2021}. \begin{thm} Let $\Omega\subset\R^2$ be a bounded planar domain with $\sigma_1(\Omega)L(\partial\Omega)\geq 6\pi$ and such that the number of connected components of its boundary is $b$. Then the following two inequalities hold: \begin{gather*} \sigma_1(\Omega)L(\partial\Omega)\leq 8\pi-6\pi\exp(-2b),\\ \sigma_1(\Omega)L(\partial\Omega)\leq 8\pi-6\pi\exp\left(-\frac{L(\partial\Omega)}{\text{diam}(\Omega)}\right). \end{gather*} \end{thm} The proof is based on a careful quantitative adaptation of Kokarev's result (Theorem~\ref{thm:KokarevGenus0}). This proves in particular that any maximising sequence $\Omega^\varepsilon\subset\R^2$ for $\sigma_1L$ has a boundary whose number of connected components is unbounded in the limit. Moreover, if the maximising sequence is normalised by requiring that $L(\partial\Omega^\varepsilon)=1$, then its diameter must tend to $0$ in the limit.
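The first inequality of the theorem makes this unboundedness quantitative; the following elementary rearrangement (a sketch, not taken from~\cite{GiKaLa2021}) illustrates this. If $\sigma_1(\Omega)L(\partial\Omega)\geq 8\pi-\delta$ for some $0<\delta\leq 2\pi$, then the hypothesis $\sigma_1(\Omega)L(\partial\Omega)\geq 6\pi$ is satisfied and $$6\pi\exp(-2b)\leq 8\pi-\sigma_1(\Omega)L(\partial\Omega)\leq \delta, \qquad\text{hence}\qquad b\geq \frac{1}{2}\log\frac{6\pi}{\delta},$$ so that $b\to\infty$ along any maximising sequence.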
Let us conclude this section by mentioning a recent preprint of Karpukhin and Stern~\cite{KaSt2021} where a quantitative improvement of Theorem~\ref{thm: GKL sharp} is presented for some surfaces: the sphere $\Sp^2$, the projective plane $\R\mathbb{P}^2$, the torus $\mathbb{T}$ and the Klein bottle $\mathbb{K}$. \section{Geometric bounds in higher dimensions} \label{section:boundshigher} Because of scaling properties, meaningful bounds for eigenvalues require some type of constraint on the size of the underlying manifold. In the previous sections we have seen that prescribing the area of a surface or the length of its boundary is often sufficient to obtain interesting upper bounds. For manifolds of dimension at least 3, the situation is more complicated. For instance, it was proved by Colbois, El Soufi and Girouard~\cite{CoElGi2019} that any compact connected manifold $(\Omega,g_0)$ admits conformal perturbations $g=\delta g_0$ with $\delta\equiv 1$ on the boundary and with $\sigma_1(\Omega,g)$ arbitrarily large. In other words, prescribing the geometry of the boundary is not enough to bound $\sigma_1$, even while staying in a fixed conformal class. This shows that, in order to get upper bounds for the Steklov spectrum in dimension higher than $2$, we need additional geometric information. \smallskip Much less is known regarding lower bounds; we will summarise the state of the art in Subsection~\ref{Lower}. \smallskip Another indication of the flexibility of the Steklov spectrum is given in the paper~\cite{Ja2014} by Jammes, where it is shown that any finite part of the Steklov spectrum can be prescribed within a given conformal class $(\Omega,[g_0])$ provided that $\Omega$ has dimension at least three.
In view of the work of Lohkamp~\cite{Lo1996}, it is therefore natural to ask, in the case of manifolds of dimension at least 3, whether it is possible in a fixed conformal class to simultaneously prescribe a finite part of the Steklov spectrum, the volume of $(\Omega,g)$, and the volume of $\partial \Omega$. We will explain in Remark \ref{rem:prescription} below that the answer is no. \medskip The outline of this section is as follows. \begin{itemize} \item Subsection~\ref{Lower}: Lower bounds for eigenvalues. \item Subsection~\ref{Upperexamples}: Upper bounds for eigenvalues: basic results and examples. \item Subsection~\ref{UpperDomains}: Upper bounds: the case of domains in a Riemannian manifold. \item Subsection~\ref{UpperRiemannian}: Metric upper bounds for Riemannian manifolds. \item Subsection~\ref{UpperRev}: Upper and lower bounds: the case of manifolds of revolution. \item Subsection~\ref{UpperSub}: Upper and lower bounds: the case of submanifolds of the Euclidean space. \end{itemize} \subsection{Lower bounds for eigenvalues} \label{Lower} Regarding lower bounds for eigenvalues, and in particular for the first nonzero eigenvalue $\sigma_1$, it is useful to recall briefly a couple of facts about the spectrum of the Laplacian. If $(M,g)$ is a closed connected Riemannian manifold of dimension $d$ (resp. if $(\Omega,g)$ is a compact connected Riemannian manifold of dimension $d$ with boundary), there are two main ways to find a lower bound for the first nonzero eigenvalue $\lambda_1(M,g)$ (resp. the first nonzero eigenvalue $\mu_1(\Omega,g)$ of the Laplacian with the Neumann boundary condition): \smallskip - One way is to compare $\lambda_1(M,g)$, respectively $\mu_1(\Omega,g)$, with an isoperimetric constant by applying the celebrated Cheeger inequality \cite{Ch1969} \begin{equation} \lambda_1(M,g) \ge \frac{h_c^2}{4}, \end{equation} respectively \begin{equation} \mu_1(\Omega,g) \ge \frac{h_c^2}{4}.
\end{equation} Here, $h_c$ is the classical Cheeger constant associated to a compact Riemannian manifold $M$, defined by \begin{equation} \label{ctcheeger} h_c(M)=\inf_{\vert A\vert \le \frac{\vert M\vert}{2}}\frac{\vert \partial_IA \vert}{\vert A\vert} \end{equation} where the infimum is over subsets $A$ of $M$ with smooth boundary such that $\vert A\vert \le \frac{\vert M\vert}{2}$ (the definition of $h_c$ is word for word the same for $(\Omega,g)$). Here, for a compact Riemannian manifold $M$ and a domain $A\subset M$, we write $\partial_IA$ for the interior boundary of $A$, that is, the intersection of the boundary of $A$ with the interior of $M$. \medskip This gives a relation between the spectrum of the Laplacian and the geometry of $(M,g)$ through the Cheeger constant $h_c$. In general, this quantity is difficult to estimate, let alone compute. However, the Cheeger constant is a very good geometric measure of the spectral gap $\lambda_1(M,g)$. Indeed, Buser~\cite{Bu1982} proved that for a closed Riemannian manifold, with only the additional hypothesis of a lower bound on the Ricci curvature of $(M,g)$, one also gets an upper bound on $\lambda_1(M,g)$ in terms of $h_c$. The Buser inequality states that if $Ric (M,g)\ge -(d-1)a^2$ (where $a\ge 0$), then \begin{equation} \label{Thm:Buser} \lambda_1(M,g)\le 2a(d-1)h_c+10h_c^2. \end{equation} Note that a similar upper bound does not exist for $\mu_1(\Omega,g)$: see \cite[Example 1.4]{Bu1982}. \smallskip - A second way is to give a lower bound for the lowest non-zero eigenvalue directly in terms of geometric invariants of $(M,g)$. The foundational work of Li and Yau establishes lower bounds in terms of the Ricci curvature and the diameter both for the eigenvalue $\lambda_1(M,g)$ of any connected closed Riemannian manifold~\cite[Theorem 7]{LiYa1979} and for the Neumann eigenvalue $\mu_1(\Omega,g)$ in the case of a compact manifold $\Omega$ with boundary~\cite[Theorem 9]{LiYa1979}.
In the latter case, they impose the additional hypothesis that the boundary is convex in the sense that the principal curvatures of $\partial \Omega$ are non-negative. This was generalised by Chen~\cite[Theorem 1.1]{Ch1990}, where the convexity condition is replaced by the hypothesis of an interior $\delta$-rolling condition (which means that every point on the boundary is on the boundary of a ball of radius $\frac{\delta}{2}$ whose interior lies entirely inside $\Omega$ and whose closure meets $\partial \Omega$ only at the given point). The important point is that, in addition to conditions on the geometry inside $\Omega$, we need some control of the geometry of the boundary. \subsubsection{Lower bounds of the Steklov spectrum via geometric constants.}\label{subsec.lowerboundgeom} Proposition \ref{prop: no lower bound} shows that one can easily construct Riemannian metrics with small eigenvalues under local deformation. In order to find a lower bound for the Steklov spectrum of a compact manifold $\Omega$ with boundary, it is thus natural to impose a geometric condition on the boundary $\Sigma=\partial \Omega$ comparable to the convexity assumption of \cite[Theorem 9]{LiYa1979}. This is precisely the celebrated conjecture proposed by Escobar in \cite{Es1999}. \begin{conj}[Escobar]\label{conj:escobar} Let $\Omega$ be a smooth compact connected Riemannian manifold of dimension $\ge 3$ with boundary $\Sigma=\partial \Omega$. Suppose that the Ricci curvature of $\Omega$ is non-negative and that the second fundamental form $\rho$ of $\Sigma$ is bounded below by $c>0$. Then $\sigma_1(\Omega) \ge c$, with equality if and only if $\Omega$ is the Euclidean ball of radius $\frac{1}{c}$. \end{conj} Note that Example \ref{example: cylinder}, with $L\to 0$, shows that convexity of the boundary does not imply any lower bound. One really needs the hypothesis $c>0$.
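As an elementary consistency check (not part of Escobar's argument), one can verify the equality case of the conjecture on the Euclidean ball $B_R\subset\R^{d+1}$ of radius $R=\frac{1}{c}$: each coordinate function $x_i$ satisfies $$\Delta x_i=0 \ \text{ in } B_R,\qquad \partial_\nu x_i=\frac{x_i}{R}=c\,x_i \ \text{ on } \partial B_R,$$ so the $x_i$ are Steklov eigenfunctions with eigenvalue $c$, and in fact $\sigma_1(B_R)=c$, while the principal curvatures of $\partial B_R$ are all equal to $\frac{1}{R}=c$.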
\medskip In~\cite[Theorem 1]{Es1997}, Escobar proved the conjecture in dimension 2, and moreover proved that in higher dimensions, $\sigma_1(\Omega) > \frac{c}{2}$. However, the conjecture itself remains open when the dimension is at least 3. \medskip Important progress was made recently by Xia and Xiong in \cite[Theorem 1]{XiXi2019}. The authors show that the conclusion of Escobar's conjecture is true if we impose non-negative sectional curvature $K_g$ instead of Ricci curvature. Specifically, assume that \begin{equation} K_g\ge 0;\ \rho\ge cg_{\Sigma}>0, \end{equation} where $\rho$ denotes the second fundamental form of the boundary $\Sigma$ and $g_{\Sigma}$ the restriction of $g$ to $\Sigma$. Then $\sigma_1(\Omega) \ge c$, with equality if and only if $\Omega$ is isometric to the Euclidean ball of radius $\frac{1}{c}$. \medskip These types of results lead naturally to the question of a possible generalisation when the curvature is not necessarily non-negative, while keeping some restriction on the geometry of the boundary $\Sigma =\partial \Omega$ and on the geometry of $\Omega$ near the boundary. It turns out that we can get partial results by first addressing another natural question: \emph{Is it possible to relate the Steklov spectrum of $\Omega$ to the Laplace spectrum of its boundary $\Sigma= \partial \Omega$?} This question was considered by Wang and Xia in \cite{WaXi2009} and Karpukhin in \cite{Ka2017}. Further progress was made in the paper by Provenzano and Stubbe~\cite{PrSt2019}, who consider the problem only for domains in $\R^{d+1}$. The ideas may be generalised to the Riemannian context: this was done by Xiong \cite{Xi2018} and by Colbois, Girouard and Hassannezhad in \cite{CoGiHa2020}, as we describe next. \medskip Let $d\in\mathbb N$ and let $\alpha,\beta,\kappa_-,\kappa_+\in\mathbb R$ and $\delta_0>0$ be such that $\alpha\leq\beta$ and $\kappa_-\leq\kappa_+$.
Consider the class $\mathcal{C}=\mathcal{C}(d,\alpha,\beta,\kappa_-,\kappa_+,\delta_0)$ of smooth compact Riemannian manifolds $\Omega$ of dimension $d+1$ with nonempty boundary $\Sigma=\partial \Omega$ satisfying the following hypotheses: \begin{itemize} \item[(H1)] The rolling radius of $\Omega$ satisfies $\delta(\Omega)\geq \delta_0$. \item[(H2)] The sectional curvature $K$ satisfies $\alpha\leq K\leq \beta$ on the tubular neighbourhood $$\Omega_{\delta}=\{x\in \Omega\,:\,d(x,\Sigma)<\delta\}.$$ \item[(H3)] The principal curvatures of the boundary $\Sigma$ satisfy $\kappa_-\leq\kappa_i\leq\kappa_+.$ \end{itemize} Let $b$ be the number of connected components of $\Sigma$. The spectrum of the Laplacian on $\Sigma$ is denoted by $0=\lambda_0(\Sigma)=\lambda_1(\Sigma)=...=\lambda_{b-1}(\Sigma)<\lambda_{b}(\Sigma)\le ...$ \begin{thm}\label{comparison} \cite[Theorem 3]{CoGiHa2020} There exist explicit constants $D= D(d,\alpha,\beta,\kappa_-,\kappa_+,\delta_0)$ and $B= B(d,\alpha,\kappa_-,\delta_0)$ such that each manifold $\Omega$ in the class $\mathcal{C}$ satisfies the following inequalities for each~$k\in\mathbb N$: \begin{gather} \lambda_k\leq \sigma_k^2+D\sigma_k,\label{ineq:main1}\\ \sigma_k\leq B+\sqrt{B^2+\lambda_k}.\label{ineq:thm:mainstek} \end{gather} In particular, for each $k\in\mathbb N$, $|\sigma_k-\sqrt{\lambda_k}|<\max\{D,2B\}.$ \end{thm} The hypotheses (H1)--(H3) of Theorem \ref{comparison} may seem quite strong. In \cite[Examples 37, 38, 39]{CoGiHa2020}, it is explained why they are necessary. \medskip Under the hypotheses of Theorem \ref{comparison} one can take $$D=\frac{d+1}{\tilde \delta}+\sqrt{|\alpha|+\kappa_-^2} \qquad\mbox{ and }\qquad B=\frac{1}{2\tilde \delta}+\frac{d+1}{2}\sqrt{|\alpha|+\kappa_-^2},$$ where $\tilde \delta\leq \delta_0$ is a positive constant depending on $\delta_0$, $\beta$, and $\kappa_+$.
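The combined estimate $|\sigma_k-\sqrt{\lambda_k}|<\max\{D,2B\}$ follows from inequalities \eqref{ineq:main1} and \eqref{ineq:thm:mainstek} by elementary manipulations. As a sketch: $$\sqrt{\lambda_k}\leq \sqrt{\sigma_k^2+D\sigma_k}\leq \sigma_k+D \qquad\text{and}\qquad \sigma_k\leq B+\sqrt{B^2+\lambda_k}\leq 2B+\sqrt{\lambda_k},$$ where the first chain uses $\sigma_k^2+D\sigma_k\leq(\sigma_k+D)^2$ and the second uses $\sqrt{B^2+\lambda_k}\leq B+\sqrt{\lambda_k}$; the strict inequality stated in the theorem requires a slightly more careful version of the same computation.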
\medskip Inequality (\ref{ineq:thm:mainstek}) gives an upper bound for $\sigma_k(\Omega)$ in terms of $\lambda_k(\Sigma)$ and of the geometry of $\Omega$. We will see in the sequel many other ways to obtain upper bounds for the eigenvalues in terms of the geometry of $\Omega$. \medskip Inequality (\ref{ineq:main1}) implies (by solving the quadratic inequality for $\sigma_k$) that for all $k$, \begin{equation} \label{In: geometry} \sigma_k(\Omega)\ge \frac{2\lambda_k(\Sigma)}{\sqrt{4\lambda_k(\Sigma)+D^2}+D}. \end{equation} This inequality is interesting only for $k$ larger than or equal to the number of connected components of the boundary $\Sigma=\partial \Omega$. In particular, if the boundary is connected, this gives a lower bound on $\sigma_1(\Omega)$ in terms of $\lambda_1(\Sigma)$ and of the geometry of $\Omega$. Since by \cite{LiYa1979} $\lambda_1(\Sigma)$ is bounded explicitly in terms of the geometry of $\Sigma$, one thus obtains a lower bound for $\sigma_1({\Omega})$ depending only on the geometry of ${\Omega}$ and of its boundary $\Sigma$. To our knowledge, this is the only general lower bound in terms of the classical geometric invariants of $\Omega$ and $\Sigma$, such as the curvature. However, it involves the constant $D$ from Theorem~\ref{comparison}, which is not very explicit. Note also that Example \ref{example: cylinder} shows that, in general, $\sigma_1$ depends on the inner geometry of the manifold $\Omega$, and not only on the geometry near the boundary. This leads to the following question. \begin{ques}\label{question lower} Under the hypotheses (H1)-(H3) of Theorem \ref{comparison} and the assumption of connected boundary $\Sigma$, is it possible to find an explicit lower bound for $\sigma_1(\Omega)$ in terms of geometric invariants of $\Omega$ and of its boundary $\Sigma$? When the boundary $\Sigma$ is not assumed to be connected, is it also possible to find an explicit lower bound of this type?
\end{ques} Note that Inequality (\ref{In: geometry}) will be used in (\ref{partiallower}), in the case where the boundary is connected. \subsubsection{Lower bounds for the Steklov spectrum via isoperimetric constants.} \medskip \noindent \textbf{A Cheeger inequality}. Jammes \cite[Theorem 1]{Ja2015} proves a Cheeger-type inequality. Besides the classical Cheeger constant denoted by $h_c(\Omega)$, Jammes introduces another isoperimetric constant, denoted by $h_j(\Omega)$ and defined by \begin{equation} \label{ctjammes} h_j(\Omega)=\inf_{\vert A\vert \le \frac{\vert \Omega\vert}{2}}\frac{\vert \partial_IA \vert}{\vert A \cap \partial \Omega\vert}. \end{equation} (Recall that $\partial_IA$ is defined following Equation~\eqref{ctcheeger}.) Then, Jammes shows that \begin{equation}\label{ineq:stek cheeger} \sigma_1(\Omega) \ge \frac{1}{4}h_c(\Omega)h_j(\Omega). \end{equation} This estimate is optimal in the sense that the two isoperimetric constants $h_c(\Omega)$ and $h_j(\Omega)$ both have to appear. Intuitively, the classical Cheeger constant $h_c(\Omega)$ allows one to measure how large a hypersurface is needed to disconnect ${\Omega}$ into two substantial pieces (consider the celebrated example of the Cheeger dumbbell, see for example \cite[Sections 2, 3]{Co2017}). As shown in Inequality (\ref{Thm:Buser}), having small $h_c$ has a strong influence on the spectrum of the Laplacian. The new constant $h_j(\Omega)$ will be small if there are two parts of the boundary $\Sigma$ of ${\Omega}$ that are close together in ${\Omega}$ but far apart with respect to the intrinsic distance in $\Sigma$. As in the proof of Part 1 of Proposition \ref{prop: no lower bound}, this tends to create small eigenvalues for the Steklov spectrum. In \cite{Ja2015}, the author also discusses the optimality of Inequality~\eqref{ineq:stek cheeger}, and we present two examples that are related to Example \ref{example: cylinder}.
\medskip First, in \cite[Example 4]{Ja2015}, to see the necessity of the presence of $h_j$, Jammes considers the family of cylinders $\Omega_n= M\times[0,\frac{1}{n}]$. Example \ref{example: cylinder} shows that $\sigma_1(\Omega_n)\sim \frac{1}{n}$ as $n\to \infty$. It is also easy to see that $h_j(\Omega_n) \to 0$ as $n\to \infty$. However, we will see that $h_c(\Omega_n)$ is uniformly bounded from below, which shows the necessity of the presence of $h_j$. To bound $h_c(\Omega_n)$ from below, we have to bound the ratio $\frac{\vert \partial_IA\vert}{\vert A \vert}$ from below for every domain $A \subset \Omega_n$. To this end, the author glues $2n$ copies of $\Omega_n$ along their boundaries in order to get a closed manifold $M \times S^1$, and associates to $A$ a domain $A'\subset M \times S^1$ obtained by reflecting $A$ along the boundary. We have $\vert A'\vert=2n \vert A\vert$ and $\vert \partial A'\vert =2n \vert \partial_IA\vert$. This implies $\frac{\vert \partial_IA\vert }{\vert A \vert}= \frac{\vert \partial A'\vert}{\vert A'\vert}$, but $A'$ is a domain in a fixed manifold $M \times S^1$, so $\frac{\vert \partial A'\vert}{\vert A'\vert}\ge h_c(M \times S^1)$, which gives a lower bound on $h_c(\Omega_n)$. \medskip Next, \cite[Example 5]{Ja2015} shows the necessity of the presence of $h_c$. The author considers the product $M \times [0,L]$ as in Example \ref{example: cylinder}, but with $L\to \infty$. For $L$ large enough, we have $\sigma_1(\Omega)=\frac{2}{L}\to 0$ and $h_c\to 0$ as $L\to \infty$. However, one can see by projection on the boundary that for $A\subset \Omega$, the ratio $\frac{\vert \partial_IA \vert}{\vert A \cap \partial \Omega\vert}$ is bounded from below by $1$, so that $h_j \ge 1$. \medskip However, although very interesting, Inequality~\eqref{ineq:stek cheeger} (\cite[Theorem 1]{Ja2015}) is not optimal in the sense that there does not exist a Buser-type inequality for the constant $h_c(\Omega)h_j(\Omega)$.
As the next examples show, it is easy to deform a Riemannian manifold $(\Omega,g)$ to make the Cheeger constant arbitrarily small without affecting the first eigenvalue $\sigma_1$ too much. In Example \ref{cylindricalend}, the Cheeger-Jammes constant $h_j$ stays bounded, while in Example \ref{Ex: hyperbolic} it becomes small as well. \begin{ex} Example \ref{cylindricalend} above shows that the Steklov eigenvalues of a Riemannian manifold $(\Omega,g)$ with connected boundary admitting a neighbourhood that is isometric to $\Sigma \times [0,L]$ are bounded in terms of the length $L$ and of the eigenvalues of the Laplacian on $\Sigma$. Let us consider a one-parameter family of metrics $h_{\epsilon}^2g$ conformal to $g$ with $h_{\epsilon}=1$ on $\Sigma \times [0,L/2]$ and $h_{\epsilon}=\frac{1}{\epsilon}$ outside $\Sigma \times [0,L]$. Because the metric is fixed on $\Sigma \times [0,L/2]$, the Steklov spectrum is not affected too much by the conformal factor. It is even possible to keep the curvature uniformly bounded by a careful smoothing. On the other hand, the Cheeger constant $h_c$ of $(\Omega,h_{\epsilon}^2g)$ becomes small as $\epsilon \to 0$: to see this, it suffices to consider a domain $A$ outside $\Sigma \times [0,L]$. The ratio $\frac{\vert \partial_IA \vert_{h_{\epsilon}^2g}}{\vert A\vert_{h_{\epsilon}^2g}}$ is proportional to $\epsilon$, so $h_c(\Omega,h_{\epsilon}^2g) \to 0$ as $\epsilon \to 0$. The Cheeger-Jammes constant $h_j(\Omega,h_{\epsilon}^2g)$ is bounded from above: it suffices to consider domains $A \subset \Sigma \times [0,L/2]$, where the metric is unchanged. \end{ex} \begin{ex} \label{Ex: hyperbolic} In this example, we construct a family $\Omega_{\epsilon},\ 0<\epsilon<\frac{1}{2}$, of hyperbolic surfaces (curvature $-1$) of genus one, with one boundary component $\Sigma_{\epsilon}$ of length $\ge 1$. For convenience we call this a hyperbolic torus.
The first nonzero eigenvalue $\sigma_1(\Omega_{\epsilon})$ is uniformly bounded from below as $\epsilon \to 0$, but the two Cheeger constants satisfy $h_c(\Omega_{\epsilon})\to 0$ and $h_j(\Omega_{\epsilon})\to 0$ as $\epsilon \to 0$. This shows that, even when the curvature is bounded, small $h_c(\Omega)$ and $h_j(\Omega)$ do not imply small $\sigma_1$. That is, there is no Cheeger-Buser type upper bound for $\sigma_1$. \begin{figure}[h] \includegraphics[width=8cm]{Figures/NoBuser.pdf} \caption{There is no Buser-type inequality for $\sigma_1$.} \label{fig:nobuser} \end{figure} The surface $\Omega_{\epsilon}$ is built by gluing a hyperbolic cylinder $C_{\epsilon}$ and a hyperbolic torus $P_{\epsilon}$ along their geodesic boundaries of length $2\pi \epsilon$, as explained, for example, in \cite[Chapter 3]{Bu1992} (see Figure~\ref{fig:nobuser}). The cylinder $C_{\epsilon}=[0,L_{\epsilon}]\times S^1$ carries the hyperbolic metric given in Fermi coordinates by $dr^2+\epsilon^2 \cosh^2 r \, d\theta^2$, with $0\le r \le L_\epsilon:=1+\arcosh \frac{1}{\epsilon}$ and $0\le \theta \le 2\pi$. It has a geodesic boundary of length $2\pi \epsilon$ corresponding to $r=0$. We glue $C_{\epsilon}$ to $P_{\epsilon}$ along their geodesic boundaries. The other boundary component $\Sigma_{\epsilon}$ of $C_{\epsilon}$, corresponding to $r=1+\arcosh \frac{1}{\epsilon}$, is of length $\vert \Sigma_{\epsilon} \vert >1$, and becomes the boundary of $\Omega_{\epsilon}$, which is connected. One can show that the neighbourhood $\Sigma_1$ of $\Sigma_{\epsilon}$ given by $\{r:\arcosh \frac{1}{\epsilon}\le r \le 1+ \arcosh \frac{1}{\epsilon}\}$ is uniformly quasi-isometric to $[0,1] \times S^1$ as $\epsilon\to0$. Because $\Omega_{\epsilon}$ has one boundary component, $\sigma_1(\Omega_{\epsilon})$ is uniformly bounded from below and above using Dirichlet-to-Neumann bracketing (Proposition \ref{prop: dir neum bracket} and Example \ref{cylindricalend}).
However, consider the domain $A=C_{\epsilon} \subset \Omega_{\epsilon}$. One can see that $|\partial_IA|=2\pi \epsilon$, while $\vert A\cap \partial \Omega_{\epsilon}\vert=\vert \Sigma_{\epsilon}\vert \ge 1$ and $\vert A\vert \ge 1$, so that both $h_c$ and $h_j$ tend to $0$ as $\epsilon \to 0$. \end{ex} In Remark 2 of \cite{Ja2015}, the author observes that one can slightly change the definition of the constant $h_j$ of (\ref{ctjammes}) and consider \begin{equation} \label{ctescobar} h_e(\Omega)=\inf_{\vert A \cap \partial \Omega\vert \le \frac{\vert \partial \Omega\vert}{2}}\frac{\vert \partial_IA\vert}{\vert A \cap \partial \Omega\vert}. \end{equation} This corresponds to the Cheeger--Escobar constant defined in \cite{Es1999}, and Jammes observes that Inequality~\eqref{ineq:stek cheeger} remains true with the same proof. \medskip For the case where $\Sigma=\partial \Omega$ is connected, Theorem \ref{comparison} allows one to get another Cheeger-type inequality. If $h_c(\Sigma)$ denotes the classical Cheeger constant of $\Sigma$, we have $\lambda_1(\Sigma) \ge \frac{h^2_c(\Sigma)}{4}$, and estimate (\ref{ineq:main1}) leads to \begin{equation}\label{partiallower} \sigma_1(\Omega)\ge \frac{h^2_c(\Sigma)}{2\left(\sqrt{h^2_c(\Sigma)+D^2}+D\right)}. \end{equation} However, the geometry of $\Omega$ appears strongly through the term $D$ of Theorem \ref{comparison}. Regarding upper bounds, it was mentioned after Theorem \ref{comparison} that Inequality~\eqref{ineq:thm:mainstek} gives an upper bound for $\sigma_k(\Omega)$ in terms of $\lambda_k(\Sigma)$ and of the geometry of $\Omega$. Combining this with Inequality~\eqref{Thm:Buser} shows that $\sigma_1(\Omega)$ is bounded from above by a term involving the Cheeger constant and the Ricci curvature of $\Sigma$, along with the geometry of $\Omega$ through the term $B$ that appears in~\eqref{ineq:thm:mainstek}.
But this upper bound depends on a lot of geometric invariants, leading to the following question. \begin{ques}\label{question cheeger} Can one define a different Cheeger-type isoperimetric constant $h'$ for which $\sigma_1$ satisfies a Buser-type inequality as in (\ref{Thm:Buser}), that is, an upper bound in terms of this new isoperimetric constant $h'$ and of a lower bound on the Ricci curvature of the manifold $\Omega$? \end{ques} \medskip \noindent \textbf{Higher order Cheeger inequalities}. In \cite{HaMi2020}, Hassannezhad and Miclo proved a lower bound for the $k$th Steklov eigenvalue in terms of what they call a $k$th Cheeger-Steklov constant in three different situations. In the context of Riemannian manifolds, this inequality extends the Cheeger inequality by Jammes, and we will describe it below. In section \ref{cheegerdiscrete}, we will briefly describe another aspect of this work regarding the Steklov problem on graphs. The third aspect, concerning measurable state spaces, is beyond the scope of this paper and will not be described. \medskip Intuitively, the idea is the following: for the Cheeger-Steklov inequality on a manifold $\Omega$, one considers decompositions of $\Omega$ into a domain $A$ and its complement $\Omega \setminus A$, and defines the isoperimetric constants $h_c$ and $h_j$ by minimising an isoperimetric ratio over the family of domains $A$. The authors use the same strategy, but in order to estimate the eigenvalue $\sigma_k$, they consider decompositions of $\Omega$ into $k+1$ disjoint domains, and they again define two isoperimetric constants by minimisation. \medskip More precisely, let $\Omega$ be a compact Riemannian manifold with boundary $\partial \Omega=\Sigma$ and consider the family $\mathcal A$ of non-empty open domains of $\Omega$ with piecewise smooth boundary. The authors introduce for each $k \ge 1$ the set $\mathcal A_k$ of all $(k+1)$-tuples $(A_1,...,A_{k+1})$ of mutually disjoint elements of $\mathcal A$.
The $k$th-order Cheeger constant will be defined by minimisation over $\mathcal A_k$. First, for each domain $B\in \mathcal A$, the authors introduce the isoperimetric constants $\eta(B)$ and $\eta'(B)$ defined by $$ \eta(B)= \frac{\vert \partial_I B\vert}{\vert B\vert};\ \ \eta'(B)= \frac{\vert \partial_I B\vert}{\vert B \cap \partial \Omega\vert}. $$ \medskip Consider now one of the $(k+1)$-tuples $(A_1,...,A_{k+1})\in\mathcal{A}_k$. For $A\in\{A_1,...,A_{k+1}\}$ the authors define $$ \rho(A)= \inf \{\eta(B):B\in \mathcal A;\ B\subset A;\ B\cap \partial_I A=\emptyset\} $$ and $$ \rho'(A)= \inf \{\eta'(B):B\in \mathcal A;\ B\subset A;\ B\cap \partial_I A=\emptyset\}. $$ Note that the condition $B\cap \partial_IA=\emptyset$ means that $\rho(A)$ is exactly the usual Cheeger constant of $A$ associated to the Dirichlet Laplacian for functions equal to $0$ on $\partial_I A$ (see \cite[(1.5), p.~30]{Bu1980} and \cite[Theorem 3, p.~95]{Ch1984}). \medskip The Cheeger-Steklov constant of order $k$ is defined as $$ i_k(\Omega)=\min_{(A_1,...,A_{k+1})\in \mathcal A_k}\max_{l\in \{1,...,k+1\}}\rho(A_l)\rho'(A_l). $$ Hassannezhad and Miclo prove a Cheeger-Steklov inequality of order $k$ \cite[Theorem C]{HaMi2020}, which states that there exists a positive universal constant $C$ such that for each $k\ge 1$ one has \begin{equation} \label{highercheegermanifold} \sigma_k(\Omega)\ge \frac{C}{k^6}i_k(\Omega). \end{equation} The proof of the inequality is quite involved. The authors do not work directly on the Steklov problem; instead, they rely on the fact that a similar inequality exists for the Laplacian with Neumann boundary condition (see \cite[Theorem 26]{HaMi2020}, referring to a previous result by Miclo), together with the fact that one can approximate the Steklov eigenvalues by the eigenvalues of the Laplacian with density with Neumann boundary condition, where the density accumulates near the boundary \cite[Theorem 24]{HaMi2020}.
The authors also give examples in the spirit of the already mentioned examples by Jammes to see the necessity of the presence of the two constants $\rho$ and $\rho'$ in the definition of $i_k(\Omega)$. \medskip A natural question is whether the term $\frac{1}{k^6}$ in the inequality (\ref{highercheegermanifold}) is optimal. The authors also show the following estimate \cite[Proposition 1]{HaMi2020}: \begin{equation} \label{better} \sigma_{2k+1}(\Omega)\ge \frac{C'}{\log^2(k+2)} i_{k+1}(\Omega). \end{equation} This means that the dependence on $k$ is only of order $\frac{1}{\log^2(k+2)}$ if the eigenvalue $\sigma_{2k+1}$ is estimated in terms of the Cheeger constant of order $k+1$. In \cite[Remark 1]{HaMi2020}, the authors ask the following question. \begin{ques}\label{ques:logbase2} Is the coefficient $\frac{C'}{\log^2(k+2)}$ of $i_{k+1}(\Omega)$ in Estimate (\ref{better}) sharp? \end{ques} In comparable situations (the combinatorial Laplacian, by Lee, Oveis Gharan and Trevisan~\cite{LGOT2014}, and Markov operators, by Miclo~\cite[pp. 336-337]{Mi2015}), one can show that it is sharp. \medskip More generally, this leads to the following question. \begin{ques}\label{ques:higherordercheeger} Is it possible to have a higher order Cheeger inequality for which the coefficient of $i_k({\Omega})$ does not depend on $k$? \end{ques} \medskip \noindent \textbf{Other lower bounds}. For other lower bounds in some specific situations, see also \cite[Theorem 2.1]{Ve2018} by Verma and \cite[Theorem 1.3]{HaSi2020} by Hassannezhad and Siffert. \subsection{Upper bounds for eigenvalues: basic results and examples} \label{Upperexamples} This subject was introduced in Section 4 of the survey \cite{GiPo2017}, and we invite the reader to look there for the state of the art until 2014.
The question of finding upper bounds for eigenvalues in terms of geometric invariants is closely related to the question of constructing large eigenvalues under geometric restrictions, and we will often present both stories in parallel. These questions also depend on the choice of normalisation. For a Riemannian manifold $\Omega$ of dimension $d+1$ with boundary $\Sigma =\partial \Omega$, the most common normalisation is with respect to the volume of the boundary: we consider the normalised eigenvalues $\sigma_k(\Omega)\vert \partial \Omega \vert^{1/d}$. Another normalisation, used in particular when $\Omega$ is a domain in a complete Riemannian manifold $M$, is with respect to the volume of $\Omega$: we consider the normalised eigenvalues $\sigma_k(\Omega)\vert \Omega\vert^{1/(d+1)}$. Recently, in \cite{KaMe2021}, Karpukhin and M\'etras proposed a normalisation involving both the volumes of $\Omega$ and of $\Sigma$, with the normalised eigenvalues given by $\sigma_k(\Omega) \vert \Sigma\vert\vert\Omega\vert^{\frac{1-d}{d+1}}$. See Example~\ref{example:KarpukhinMetrasNorm} for a discussion of why this specific normalisation is natural. One of the interests of this last normalisation is that it was shown in the paper~\cite{CoElGi2011} by Colbois, El Soufi and Girouard and in~\cite{Ha2011} by Hassannezhad that these normalised eigenvalues are bounded from above within the conformal class of any Riemannian metric (see inequality (\ref{secondformulation}) in Theorem \ref{asma2011}), which allows one to investigate maximal Riemannian metrics in this context. Specifically, for a compact Riemannian manifold $(\Omega,g)$ of dimension $d+1$ with boundary, let $I_g(\Omega)$ denote the isoperimetric ratio $$ I_g(\Omega)=\frac{\vert \partial \Omega \vert_g}{\vert \Omega\vert_g^{\frac{d}{d+1}}}.
$$ Then we have \cite[Theorem 4.1]{Ha2011}: \begin{thm}\label{asma2011} Let $(M,g_0)$ be a complete Riemannian manifold of dimension $d+1$ with $Ric_{g_0}(M) \ge -a^2d$ for a constant $a\ge 0$. Let $\Omega \subset M$ be a relatively compact domain with $C^1$ boundary and $g$ be any metric conformal to $g_0$. Then we have \begin{equation} \sigma_k(\Omega,g) \vert \partial \Omega\vert_g^{\frac{1}{d}}\le \frac{A_d \vert \Omega\vert_{g_0}^{\frac{2}{d+1}} a^2+B_dk^{\frac{2}{d+1}}}{I_g(\Omega)^{\frac{d-1}{d}}} \end{equation} where $A_d$ and $B_d$ are constants depending only on $d$. An equivalent formulation is \begin{equation} \label{secondformulation} \sigma_k(\Omega,g)\vert \partial \Omega \vert_g\vert \Omega\vert_g^{\frac{1-d}{d+1}} \le A_d \vert \Omega\vert_{g_0}^{\frac{2}{d+1}}a^2+B_dk^{\frac{2}{d+1}}. \end{equation} \end{thm} In the situation where $M$ is a closed Riemannian manifold, inequality~\eqref{secondformulation} provides a uniform upper bound, since $|\Omega|_{g_0}<|M|_{g_0}$. See Section~\ref{UpperDomains} for further discussion. \begin{remark} \label{rem: domain} Theorem \ref{asma2011} is established for domains in complete manifolds and not directly for manifolds with boundary. The reason is that the constructions from \cite{GrNeYa2004} and \cite{CoMa2008} used in the proof of the theorem are not available for manifolds with boundary without additional conditions on the geometry of the boundary. A compact Riemannian manifold with smooth boundary $(\Omega,g_0)$ with $Ric_{g_0}(\Omega)\ge -a^2d$ can always be seen as a domain of a complete manifold $M$ (without boundary) by extension of the manifold $(\Omega,g_0)$. However, in general, we cannot keep the same lower bound $-a^2d$ for the Ricci curvature of the complete manifold $M$.
Despite this, Inequality (\ref{secondformulation}) shows that $\sigma_k(\Omega,g)\vert \partial \Omega \vert_g\vert \Omega\vert_g^{\frac{1-d}{d+1}}$ is bounded from above for $g\in [g_0]$, although the bound is not explicitly controlled. Indeed, the proof of Theorem \ref{asma2011} only requires a lower bound on the Ricci curvature of the ambient manifold $M$ on a neighbourhood of $(\Omega,g_0) \subset M$ whose radius is at most the diameter of $(\Omega,g_0)$; on such a neighbourhood the Ricci curvature is bounded from below, but in general with a different bound than $-a^2d$. This question of controlling the curvature when extending a manifold with boundary to a complete manifold is discussed in detail by Pigola and Veronelli in \cite{PiVe2020}. \end{remark} \begin{remark}\label{rem:prescription} Let $(\Omega,g_0)$ be a compact Riemannian manifold with boundary. It follows from Theorem~\ref{asma2011} and from Remark \ref{rem: domain} that one cannot use a conformal perturbation $g\in[g_0]$ to prescribe simultaneously the $k$th eigenvalue ($k>0$) and the volumes of $(\Omega,g)$ and of $(\partial \Omega,g)$. \end{remark} Let us now give a series of examples to provide intuition. They will allow us to show that certain upper bounds presented below are sharp, in the sense that all ingredients entering the inequalities are necessary. \smallskip When we obtain upper bounds for $\sigma_k$ (in particular for $k=1$), the question arises of finding a maximising metric. These examples show that, sometimes, the expected metric (for example the ball, in the case of domains) is not a maximiser. \begin{ex} \label{confdef}A general question is to understand and distinguish the spectral effects of geometric perturbations of a manifold near its boundary from the spectral effects of perturbations occurring deep inside the manifold.
In dimension $d+1\ge 3$, one can construct examples with fixed boundary and arbitrarily large $\sigma_1$ while staying within the conformal class \cite[Theorem 1.1 (ii)]{CoElGi2019}. Let $(\Omega,g)$ be a compact, connected Riemannian manifold of dimension $d+1\geq 3$ with boundary $\Sigma$. There exists a one-parameter family of Riemannian metrics $g_\varepsilon$ conformal to $g$ that coincide with $g$ on $\Sigma$ such that $$\sigma_{1}(\Omega,g_\varepsilon) \to\infty \quad \text{as} \quad \varepsilon\to 0.$$ \smallskip In this example, $\sigma_1(\Omega) \vert \Sigma \vert^{1/d}$ and $\sigma_1(\Omega) \vert \Omega \vert^{1/(d+1)}$ tend to $\infty$ as $\varepsilon \to 0$, but this is not the case for $\sigma_1(\Omega) \vert \Sigma\vert\vert\Omega\vert^{\frac{1-d}{d+1}}$, because the conformal class is fixed. \end{ex} Example~\ref{confdef} leads to the question of whether there are similar examples for which $\sigma_1(\Omega) \vert \Sigma\vert\vert\Omega\vert^{\frac{1-d}{d+1}} \to \infty$, still keeping the boundary fixed (but not the conformal class). In full generality, this is unknown. However, it is possible to obtain examples in specific situations. \begin{ex}\label{CianciGirouard} A family of examples is constructed by Cianci and Girouard~\cite[Theorem 1.1]{CiGi2018}. The authors consider a compact manifold $\Omega$ of dimension $d+1\ge 4$ with connected boundary $\Sigma$ having a very particular property: there exists a Riemannian metric $g_{\Sigma}$ on $\Sigma$ which admits a unit Killing vector field $\xi$ with dual $1$-form $\eta$ whose exterior derivative is nowhere $0$. The odd-dimensional spheres have this property. But a Riemannian manifold does not necessarily support a Killing vector field: a vector field $X$ on a Riemannian manifold is a Killing field if and only if the $1$-parameter group generated by $X$ consists of local isometries.
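For instance, on the round sphere $\Sp^{2m-1}\subset \C^m$ the Hopf vector field provides such a Killing field; we sketch this classical computation for concreteness (it is not taken from \cite{CiGi2018}): $$\xi(x)=ix, \qquad \phi_t(x)=e^{it}x.$$ Each $\phi_t$ is the restriction of a unitary transformation of $\C^m$, hence an isometry of $\Sp^{2m-1}$, so $\xi$ is a unit Killing field. Its dual $1$-form $\eta$ is the standard contact form of the sphere, whose exterior derivative $d\eta$ restricts to (twice) the ambient K\"ahler form on the contact distribution and is therefore nowhere zero when $m\ge 2$.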
Under this condition, the authors show that there exists a family $(g_{\epsilon})_{\epsilon>0}$ of Riemannian metrics on $\Omega$ which coincide with $g_{\Sigma}$ on $\Sigma$ such that $\vert (\Omega, g_{\epsilon})\vert =1$ and $\sigma_1(\Omega, g_{\epsilon}) \to \infty$ as $\epsilon \to 0$. As $\Sigma$ is fixed and $\vert (\Omega, g_{\epsilon})\vert =1$, the normalised eigenvalue $\sigma_1(\Omega,g_{\epsilon}) \vert \Sigma\vert\vert(\Omega, g_{\epsilon})\vert^{\frac{1-d}{d+1}} \to \infty$ as $\epsilon \to 0$. \end{ex} \smallskip Let us observe that the geometry of this example is very special: as the boundary $\Sigma$ is fixed, its intrinsic diameter is fixed; however, its extrinsic diameter, that is, its diameter in $\Omega$, goes to $0$ as $\epsilon \to 0$. It would be very surprising if the existence of a Killing field were an essential condition, but it is not clear how to avoid this hypothesis. \begin{ques}\label{ques:intrinsicextrinsic} Let $\Omega$ be a compact Riemannian manifold of dimension $d+1\ge 3$ with boundary $\Sigma$. Is it possible to construct a family $(g_{\epsilon})_{\epsilon>0}$ of Riemannian metrics on $\Omega$ that stays constant on $\Sigma$ and satisfies $\sigma_1(\Omega,g_{\epsilon}) \vert \Sigma\vert\vert(\Omega, g_{\epsilon})\vert^{\frac{1-d}{d+1}} \to \infty$ as $\epsilon \to 0$? \end{ques} Note that fixing the Riemannian metric on the boundary is a strong constraint. If we relax it by requiring instead that the volume $\vert\Sigma\vert$ be prescribed, and if the dimension $d+1$ is $\ge 4$, then Example \ref{cylindricalend} allows us to construct such examples, at least when the boundary is connected. It suffices to introduce a cylindrical metric $\Sigma_{\epsilon} \times [0,1]$ near the boundary, with $\vert \Sigma_{\epsilon}\vert=1$ and $\lambda_1(\Sigma_{\epsilon})\to \infty$, which is possible by the work of Colbois and Dodziuk~\cite{CoDo1994} because the dimension of $\Sigma_{\epsilon}$ is $\ge 3$.
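The role of $\lambda_1(\Sigma_{\epsilon})$ here is transparent on a model cylinder. For the product metric on $\Sigma\times[-L/2,L/2]$, with the Steklov condition imposed on both ends (a classical computation, recorded here as an illustration rather than as part of the construction above), separation of variables $u=\varphi_k\otimes f$, with $\Delta_\Sigma\varphi_k=\lambda_k\varphi_k$ and $f''=\lambda_k f$, gives the full Steklov spectrum $$\Bigl\{0,\ \tfrac{2}{L}\Bigr\}\cup\Bigl\{\sqrt{\lambda_k}\tanh\bigl(\tfrac{\sqrt{\lambda_k}\,L}{2}\bigr),\ \sqrt{\lambda_k}\coth\bigl(\tfrac{\sqrt{\lambda_k}\,L}{2}\bigr)\ :\ k\ge 1\Bigr\}.$$ In particular, the eigenvalues associated with the mode $\varphi_k$ are of order $\sqrt{\lambda_k}$ when $\lambda_k$ is large, so making $\lambda_1(\Sigma_{\epsilon})$ large forces all non-constant modes to have large Steklov eigenvalues.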
This method cannot be used in dimension $d+1=3$, but the following example shows another, more elaborate, approach. \begin{ex}\label{example:Karpukhin}The authors would like to thank Mikhail Karpukhin for suggesting the following construction, which leads to Riemannian metrics $g$ on the ball $\mathbb{B}^3$ such that both $|\partial\mathbb{B}^3|_g=1$ and $|\mathbb{B}^3|_g=1$ are prescribed, while $\sigma_1(\mathbb{B}^3,g)$ is arbitrarily large. Let $M=\Sp^3$ be equipped with any Riemannian metric $g$ of volume one. The homogenisation construction of Girouard and Lagac\'e~\cite{GiLa2021} provides a family of domains $\Omega^\varepsilon\subset M$ such that $|\partial\Omega^\varepsilon|\xrightarrow{\varepsilon\to 0}|M|_g=1$ and $\sigma_1(\Omega^\varepsilon)\xrightarrow{\varepsilon\to 0}\lambda_1(M,g)$. See Theorem~\ref{thm:homoclosed}. The domains $\Omega^\varepsilon$ are obtained by removing a finite number of disjoint small balls $B(p_i,r_\varepsilon)$ from $M$. One then also removes a finite number of thin tubes connecting the boundaries of these balls to each other sequentially, leading to a family of connected domains $\Omega_\delta^\varepsilon\subset\Omega^\varepsilon\subset M$, where $\delta$ represents the width of the excised tubes. As $\delta\to 0$, the tubes collapse to curves meeting the boundary of the balls $B(p_i,r_\varepsilon)$ perpendicularly. It follows from the work of Fraser and Schoen~\cite[Theorem 1.2]{FrSc2019} that $\sigma_1(\Omega_\delta^\varepsilon)\xrightarrow{\delta\to 0}\sigma_1(\Omega^\varepsilon)$. The connecting tubes being chosen sequentially means that the corresponding graph, with vertices corresponding to the balls $B(p_i,r_\varepsilon)$ and with edges corresponding to the connecting tubes, is a linear graph. It follows that the excised region is itself diffeomorphic to a ball, and because $M=\Sp^3$, its complement $\Omega_\delta^\varepsilon$ is also diffeomorphic to the ball $\mathbb{B}^3$.
Using these diffeomorphisms to pull back the metric $g$ to the ball $\mathbb{B}^3$ and using a simple diagonal argument leads to a sequence of Riemannian metrics $g_m$ on $\overline{\mathbb{B}}^3$ such that $|\mathbb{B}^3|_{g_m}\xrightarrow{m\to+\infty}|M|_g=1$, $|\partial\mathbb{B}^3|_{g_m}\xrightarrow{m\to+\infty}|M|_g=1$ and $\sigma_1(\mathbb{B}^3,g_m)\xrightarrow{m\to+\infty}\lambda_1(M,g)$. Because the metric $g$ is arbitrary, the conclusion follows from the well-known fact that $\lambda_1$ is not bounded above on $\Sp^3$. See for instance the work of Bleecker~\cite{Bl1983}. \end{ex} Let us come back to the question of bounding Steklov eigenvalues for domains in a closed Riemannian manifold $M$ of dimension $d+1$. Theorem~\ref{asma2011} provides a good upper bound for the Karpukhin-M\'etras normalisation. \begin{ex}\label{example:KarpukhinMetrasNorm} In \cite{GiLa2021}, the authors consider a continuous density $\beta>0$ on $M$ and use Theorem~\ref{thm:homoclosed} to construct a family of domains $\Omega^\varepsilon\subset M$ with $$\sigma_1(\Omega^\varepsilon)\to\lambda_1(\beta^{-1}\Delta_g),\quad\vert\partial\Omega^\varepsilon\vert\to\int_M\beta\,dV_g,\quad\vert\Omega^\varepsilon\vert\to \vert M\vert.$$ In particular, for $\beta>0$ constant, this leads to \begin{gather*} \sigma_1(\Omega^\varepsilon)\vert\partial\Omega^\varepsilon\vert^{1/d}\to\beta^{-1+1/d}\lambda_1(M)\vert M\vert^{1/d},\\ \sigma_1(\Omega^\varepsilon)\vert \Omega^\varepsilon\vert^{1/(d+1)} \to \beta^{-1}\lambda_1(M)\vert M\vert^{1/(d+1)},\\ \sigma_1(\Omega^\varepsilon)\vert\partial\Omega^\varepsilon\vert\vert\Omega^\varepsilon\vert^{\frac{1-d}{1+d}}\to\lambda_1(M)\vert M\vert^{2/(d+1)}. \end{gather*} Using \cite[Theorem 1.2]{FrSc2019}, the domains $\Omega^\varepsilon$ can be chosen to be connected. See Example~\ref{exfraserschoen} and Example~\ref{example:Karpukhin} for other applications. This is another indication of the importance of the mixed normalisation proposed by Karpukhin and M\'etras \cite{KaMe2021}.
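The exponents appearing in these normalisations are dictated by scale invariance; the following elementary check records this. Under the homothety $g\mapsto t^2g$ one has $\sigma_1\mapsto t^{-1}\sigma_1$, $\vert\partial\Omega\vert\mapsto t^{d}\vert\partial\Omega\vert$ and $\vert\Omega\vert\mapsto t^{d+1}\vert\Omega\vert$, so that $$\sigma_1(\Omega)\,\vert\partial\Omega\vert^{a}\,\vert\Omega\vert^{b}\;\longmapsto\; t^{-1+ad+b(d+1)}\,\sigma_1(\Omega)\,\vert\partial\Omega\vert^{a}\,\vert\Omega\vert^{b},$$ and the functional is scale invariant exactly when $ad+b(d+1)=1$. The three normalisations above correspond to $(a,b)=(\frac{1}{d},0)$, $(0,\frac{1}{d+1})$ and $(1,\frac{1-d}{d+1})$ respectively.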
In fact, it is easy to show using the same method that the functional $\sigma_1(\Omega)|\partial\Omega|^a|\Omega|^b$ (where $ad+b(d+1)=1$ to obtain scaling invariance) is not bounded above for any other choice of normalisation. Indeed, for $\beta>0$ constant it follows as above that $$\sigma_1(\Omega)|\partial\Omega|^a|\Omega|^b\to\beta^{a-1}\lambda_1(M)|M|^{a+b},$$ so that $a=1$ is the only possibility keeping the functional bounded. \end{ex} \begin{ex}\label{JadeBrisson} The previous example is based on homogenisation theory. A simpler approach to obtaining large eigenvalues for domains in compact manifolds was developed by Brisson in \cite{Br2022}, where she considers a closed connected submanifold $N\subset M$ of positive codimension. Let $T_{\epsilon}$ be the tubular neighbourhood of $N$ defined by $T_{\epsilon}=\{x\in M: \mathrm{dist}(x,N)<\epsilon\}$, where $\mathrm{dist}$ denotes the Riemannian distance. For $\epsilon$ small enough, the boundary $\Sigma_{\epsilon}:=\partial T_{\epsilon}$ is a smooth submanifold. The author considers the domain $\Omega_{\epsilon}=M\setminus T_{\epsilon}$, whose boundary is $\Sigma_{\epsilon}$. She investigates the properties of $\sigma_k(\Omega_{\epsilon})$, in particular the asymptotics of $\epsilon\sigma_k(\Omega_{\epsilon})$. Among other things, the following result is proved \cite[Corollary 1.6]{Br2022}: if the dimension $d+1$ of $M$ is $\ge 3$ and the dimension $n$ of $N$ satisfies $0<n\le d-1$, then \begin{equation} \lim_{\epsilon \to 0} \vert \Sigma_{\epsilon}\vert^{1/d}\sigma_{1}(\Omega_{\epsilon})=\infty. \end{equation} \end{ex} \subsubsection{Surgery methods} \begin{ex}\label{exfraserschoen} In their papers~\cite{FrSc2019,FrSc2020}, among other things, Fraser and Schoen propose many very enlightening constructions having consequences for the optimisation problem. Roughly speaking, they give a way to perform surgery on a compact Riemannian manifold without affecting the Steklov spectrum too much.
We give below some typical results and a couple of applications, and invite the interested reader to consult the two papers. \begin{enumerate} \item The paper \cite{FrSc2019} mainly concerns the first nonzero eigenvalue in dimension higher than $2$. It shows in particular that it is not possible to generalise the Weinstock inequality to domains in Euclidean space of dimension higher than $2$, even with strong hypotheses on the topology of the domains: \begin{thm} \label{contractible} \cite[Theorem 1.1]{FrSc2019}. Let $\mathbb{B}$ denote the unit ball in the Euclidean space of dimension $\ge 3$. Then there exists a smooth contractible domain $\Omega$ with $\vert \partial \Omega\vert =\vert \partial \mathbb{B} \vert$ and $\sigma_1(\Omega)>\sigma_1( \mathbb{B})$. \end{thm} The proof of this result is hard, but the idea is simple and beautiful. First, in \cite[Section 3]{FrSc2019}, the authors study the spectrum of the annulus $\mathbb{B}_1\setminus \mathbb{B}_{\epsilon}$, where $\mathbb{B}_{\rho}$ is the ball of radius $\rho$ in $\R^{d+1}$, $d\ge 2$. They show that for every $k$ and all $\epsilon$ small enough $$ \sigma_k(\mathbb{B}_1)\vert \partial \mathbb{B}_1\vert^{1/d} < \sigma_k(\mathbb{B}_1\setminus \mathbb{B}_{\epsilon})\vert \partial (\mathbb{B}_1\setminus \mathbb{B}_{\epsilon})\vert^{1/d}. $$ In particular, for $k=1$, there exists an annulus with normalised first nonzero eigenvalue greater than that of the ball. But an annulus is not contractible. In order to obtain a contractible example, the authors perform a surgery: they remove from the annulus a thin cylinder connecting the inner sphere to the outer sphere, and the resulting domain is contractible. The main difficulty, addressed in \cite[Section 4]{FrSc2019}, is to show that this operation does not affect the spectrum too much. Some of the results were generalised in \cite[Section 3]{Ho2021} by Hong, who performed higher-dimensional surgery.
As a corollary, Hong generalises Theorem \ref{contractible}: \begin{thm}\cite[Corollary 1.2]{Ho2021}: Let $\mathbb{B}$ denote the unit ball in the Euclidean space of dimension $\ge 3$. Then for each $k$, there exists a smooth contractible domain $\Omega$ with $\vert \partial \Omega\vert = \vert \partial \mathbb{B}\vert$ and $\sigma_j(\Omega)>\sigma_j( \mathbb{B})$ for each $1\le j\le k$.\end{thm} \item In \cite{FrSc2020}, the authors further investigate the behaviour of the spectrum under surgery. For example, Theorem 1.1 of that paper says the following: let $\Omega_1,\dots,\Omega_s$ be compact $(d+1)$-dimensional Riemannian manifolds with boundary. Given $\epsilon>0$, there exists a Riemannian manifold $\Omega_{\epsilon}$, obtained by appropriately gluing $\Omega_1,\dots,\Omega_s$ together along their boundaries, such that $\lim_{\epsilon \to 0} \vert \partial \Omega_{\epsilon}\vert =\vert \partial(\Omega_1 \sqcup \dots \sqcup \Omega_s) \vert$ and $\lim_{\epsilon \to 0}\sigma_k(\Omega_{\epsilon})=\sigma_k(\Omega_1 \sqcup \dots \sqcup \Omega_s)$ for all $k$. \smallskip The way to glue two manifolds $\Omega_1$ and $\Omega_2$ together appropriately is described in a sequence of lemmas in \cite[Section 4.2]{FrSc2020}. Roughly speaking, the idea is to make a hole in the boundary of each manifold and to join these holes by a catenoid. The technical difficulty is to do this without perturbing the spectra of $\Omega_1$ and $\Omega_2$ too much. \item The authors also apply the previous construction to a single manifold $\Omega$ (\cite[Theorem 4.8]{FrSc2020}). They make two holes in the boundary of $\Omega$ and join them by a catenoid. They obtain a family of manifolds $\Omega_{\epsilon}$ such that for all $k$ $$ \lim_{\epsilon \to 0} \vert \partial \Omega_{\epsilon}\vert=\vert \partial \Omega\vert;\ \lim_{\epsilon \to 0}\sigma_k(\Omega_{\epsilon})=\sigma_k(\Omega).
$$ \end{enumerate} \end{ex} \subsection{Upper bounds: the case of domains in a Riemannian manifold.}\label{UpperDomains} \, Let us start with an example which probably gives the simplest available upper bound for Steklov eigenvalues. Consider a domain $\Omega\subset\Sp^{d+1}$ in the round sphere such that each coordinate function $\pi_i$ satisfies \begin{gather} \int_{\partial\Omega}\pi_i\,dA=0. \end{gather} In particular, any domain such that $-\Omega=\Omega$ satisfies this condition. It follows that the coordinate functions can be used as trial functions in Equation~\eqref{eq:Stek Rayleigh min max}, so that $$\sigma_1(\Omega)\int_{\partial\Omega}\pi_i^2\,dA\leq\int_{\Omega}|\nabla\pi_i|^2\,dV.$$ Summing over $i$ and using that $\sum_{i=1}^{d+2}\pi_i^2\equiv 1$ and $\sum_{i=1}^{d+2}|\nabla\pi_i|^2\equiv d+1$ leads to \begin{gather} \sigma_1(\Omega)|\partial\Omega|< (d+1)|\Omega|. \end{gather} Note that this does not violate scale invariance, because the size of the sphere $\Sp^{d+1}$ is fixed: there are no homotheties in the ambient space $\Sp^{d+1}$. In~\cite{FrSc2019}, Fraser and Schoen considered an arbitrary domain $\Omega\subset\R^{d+1}$. They precomposed the coordinate functions $\pi_i$ with an appropriate conformal diffeomorphism $\Omega\to\Sp^{d+1}$ and used H\"older's inequality to obtain \begin{gather}\label{ineq:FraserSchoenPreResult} \sigma_1(\Omega)|\partial\Omega|\leq (d+1)|\Sp^{d+1}|^{\frac{2}{d+1}}|\Omega|^{\frac{d-1}{d+1}}. \end{gather} Their argument also applies verbatim to domains $\Omega\subset\Sp^{d+1}$, and in that case inequality~(\ref{ineq:FraserSchoenPreResult}) is sharp. \begin{prop}\label{prop:bestupperboundDomainSphere} Let $\Omega\subset\Sp^{d+1}$ be a domain with smooth boundary.
Then $$\sigma_1(\Omega)|\partial\Omega||\Omega|^{\frac{1-d}{d+1}}\leq (d+1)|\Sp^{d+1}|^{\frac{2}{d+1}}.$$ This inequality is sharp: there exists a family $\Omega^\varepsilon\subset\Sp^{d+1}$ such that $$\sigma_1(\Omega^\varepsilon)|\partial\Omega^\varepsilon||\Omega^\varepsilon|^{\frac{1-d}{d+1}}\xrightarrow{\varepsilon\to 0} (d+1)|\Sp^{d+1}|^{\frac{2}{d+1}}.$$ \end{prop} The family $\Omega^\varepsilon$ is constructed using homogenisation techniques. See Theorem~\ref{thm:homoclosed}. Notice that for $d=1$ one recovers Kokarev's inequality (\ref{ineq:Kokarev}). \medskip For domains in an arbitrary closed Riemannian manifold $M$, uniform upper bounds for all eigenvalues $\sigma_k$ are obtained from Theorem~\ref{asma2011}. Indeed, inequality (\ref{secondformulation}) shows that for any domain $\Omega \subset M$, \begin{equation}\label{thirdformulation} \sigma_k(\Omega,g)\vert \partial \Omega \vert_g\vert \Omega\vert_g^{\frac{1-d}{d+1}} \le A_d \vert M\vert_{g_0}^{\frac{2}{d+1}}a^2+B_dk^{\frac{2}{d+1}}. \end{equation} In fact, the best upper bound for $\sigma_k(\Omega)|\partial\Omega||\Omega|^{\frac{1-d}{1+d}}$ can be expressed in terms of the best upper bound for the eigenvalues of the weighted Laplace operator $\beta^{-1}\Delta_g$. Let us introduce \begin{gather}\label{eq:bestdensityeigenvalues} \lambda_k^\#(M,g):=\sup_{0<\beta\in C^0(M)}\lambda_k(\beta^{-1}\Delta_g)\int_M\beta\,dV_g. \end{gather} This should be compared with the definition of conformal eigenvalues $\lambda_k^\star(M,[g])$ (see equation~\eqref{eq:defConformalEigenvalues}). For surfaces, it follows from the conformal invariance of the Laplace operator that $\lambda_k^\star(M,[g])=\lambda_k^\#(M,g)$, and the following result is a direct generalisation of Theorem~\ref{thm: GKL sharp}. It is proved in exactly the same way. \begin{thm}\label{thm: GKL higherd} Let $(M,g_0)$ be a closed Riemannian manifold of dimension $d+1$ with $Ric_{g_0}(M) \ge -a^2d$ for a constant $a\ge 0$. 
Then for each $g\in[g_0]$, each $k\in\N$ and each domain $\Omega\subset M$, \begin{gather} \sigma_k(\Omega)|\partial\Omega||\Omega|^{\frac{1-d}{1+d}}\leq\lambda_k^\#(M,g)|M|_g^\frac{1-d}{1+d}. \end{gather} Moreover, for each $k\in\N$, there exists a family of domains $\Omega^\varepsilon\subset M$ such that $$\sigma_k(\Omega^\varepsilon)|\partial\Omega^\varepsilon||\Omega^\varepsilon|^{\frac{1-d}{1+d}}\xrightarrow{\varepsilon\to0}\lambda_k^\#(M,g)|M|_g^\frac{1-d}{1+d}.$$ \end{thm} The following is then obtained from~\eqref{thirdformulation}. \begin{cor} Let $(M,g_0)$ be a closed Riemannian manifold of dimension $d+1$ with $Ric_{g_0}(M) \ge -a^2d$ for a constant $a\ge 0$. Then $$\lambda_k^\#(M,g)|M|_g^\frac{1-d}{1+d}\leq A_d \vert M\vert_{g_0}^{\frac{2}{d+1}}a^2+B_dk^{\frac{2}{d+1}}.$$ In particular, using the constant density $\beta\equiv 1$ leads to the following upper bound for the eigenvalues of the Laplace operator: \begin{equation} \lambda_k(M)\vert M \vert_g^{\frac{2}{d+1}} \le A_d \vert M\vert_{g_0}^{\frac{2}{d+1}}a^2+B_dk^{\frac{2}{d+1}}. \end{equation} \end{cor} \begin{remark} The reader is invited to look at the very recent papers \cite{KaSt2022} by Karpukhin and Stern and \cite{Pe2022} by Petrides for further investigation of links with harmonic maps to spheres. \end{remark} Let us come back to the question of bounding $\sigma_k(\Omega)$ for domains in complete manifolds. For a domain $\Omega \subset \R^{d+1}$, Fraser and Schoen combined~\eqref{ineq:FraserSchoenPreResult} with the classical isoperimetric inequality and obtained~\cite[Proposition 2.1]{FrSc2019}: \begin{equation}\label{eq:fraserschoen} \sigma_1(\Omega)\vert\Sigma \vert^{1/d}\le \frac{(d+1)^{1/d}\vert \Sp^{d+1}\vert^{\frac{2}{d+1}}}{\vert \mathbb{B}^{d+1}\vert^{\frac{d-1}{d(d+1)}}}.
\end{equation} More generally, if $\Omega$ is a domain in a complete manifold $(M,g_0)$ with non-negative Ricci curvature and $g$ is a Riemannian metric conformal to $g_0$, then Inequality \eqref{secondformulation} of Theorem \ref{asma2011} shows that $$ \sigma_k(\Omega,g)\vert \partial \Omega\vert_g \vert \Omega\vert_g^{\frac{1-d}{1+d}} \le B_dk^{\frac{2}{d+1}}. $$ Using the classical isoperimetric inequality, this implies that for domains $\Omega$ of the Euclidean space $\R^{d+1}$, of the hyperbolic space or of a hemisphere of the sphere $\Sp^{d+1}$, we have \begin{equation} \sigma_k(\Omega)\vert \Sigma\vert^{\frac{1}{d}}\le C_dk^{\frac{2}{d+1}}, \end{equation} where $C_d$ is a constant depending only on the dimension (see also \cite[Theorem 1.2]{CoElGi2011}). In these spaces, therefore, a \emph{large boundary} forces a \emph{small $\sigma_1$}. This leads to the question of whether these results can be generalised. In \cite[Examples 6.2, 6.3]{CoElGi2011}, it is shown that no such upper bound exists in full generality, for any of the normalisations. However, these examples are very specific constructions, and we can ask the following: \begin{ques}\label{question:maxdom2} Let $(M,g)$ be a complete Riemannian manifold of dimension $\ge 3$ of infinite volume, with Ricci curvature bounded from below. Can one construct domains $\Omega \subset M$ with arbitrarily large first nonzero normalised eigenvalue (for the different normalisations mentioned in this section)? \end{ques} In certain cases, by adding some hypotheses, one can get sharper inequalities for domains in Euclidean space, or more generally in non-compact rank-1 symmetric spaces such as the hyperbolic space, and one can also characterise the case(s) of equality. \medskip It is known that the Weinstock inequality \eqref{ineq:weinstock} fails for non-simply-connected domains of the plane and cannot be generalised without further conditions in higher dimensions.
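As an aside, the Steklov spectrum of a round planar annulus, the simplest non-simply-connected domain, can be computed explicitly by separation of variables: for each angular mode, the harmonic functions $ar^n+br^{-n}$ (times $\cos n\theta$ or $\sin n\theta$) reduce the Steklov conditions on the two boundary circles to a $2\times2$ generalised eigenvalue problem. The following numerical sketch is our own illustration, not taken from the papers cited above, and the function name is ours.

```python
import numpy as np

def steklov_annulus(eps, n_modes=10):
    """Steklov eigenvalues of the planar annulus {eps < |x| < 1},
    computed mode by mode via separation of variables.

    For the angular mode n >= 1, u = (a r^n + b r^(-n)) cos(n t)
    satisfies u_r = sigma * u on r = 1 and -u_r = sigma * u on r = eps
    iff (a, b) solves the 2x2 generalised eigenproblem A v = sigma B v.
    """
    sigmas = [0.0]  # constant functions always give sigma = 0
    # Radial mode n = 0: u = a + b log r.
    A0 = np.array([[0.0, 1.0], [0.0, -1.0 / eps]])
    B0 = np.array([[1.0, 0.0], [1.0, np.log(eps)]])
    sigmas += [s.real for s in np.linalg.eigvals(np.linalg.solve(B0, A0))
               if s.real > 1e-12]
    for n in range(1, n_modes + 1):
        A = np.array([[n, -n],
                      [-n * eps ** (n - 1), n * eps ** (-n - 1)]], dtype=float)
        B = np.array([[1.0, 1.0],
                      [eps ** n, eps ** (-n)]])
        for s in np.linalg.eigvals(np.linalg.solve(B, A)):
            sigmas += [s.real, s.real]  # cos and sin modes
    return sorted(sigmas)

# First nonzero eigenvalue of the annulus {1/2 < |x| < 1}
sigma1 = min(s for s in steklov_annulus(0.5) if s > 1e-9)
```

For $\epsilon=1/2$ this gives $\sigma_1\approx 0.438$, so $\sigma_1\vert\partial\Omega\vert\approx 4.13<2\pi$: this particular round annulus does not beat the disk in the Weinstock normalisation, so the failure of the inequality is witnessed by other multiply connected examples.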
Because Fraser and Schoen (Theorem~\ref{contractible}) showed the existence of contractible domains with larger first nonzero eigenvalue than the ball but with the same boundary area as the ball, it appears difficult to generalise the Weinstock inequality to domains in $\R^{d+1}$ ($d+1\ge 3$) that are not convex. For convex domains, however, Bucur, Ferone, Nitsch and Trombetti \cite[Theorem 3.1]{BFNT2021} were able to show that the inequality remains true. \begin{thm} Let $\Omega$ be a bounded convex domain in $\R^{d+1}$ and let $\Omega^*$ be the ball in $\R^{d+1}$ with $\vert \partial \Omega^*\vert =\vert \partial \Omega\vert$. Then \begin{equation}\label{weinstockconvex} \sigma_1(\Omega) \le \sigma_1(\Omega^*) \end{equation} with equality if and only if $\Omega$ is a ball. \end{thm} \begin{ques}\label{ques:weinstockhnsn} Is such an inequality also true for convex domains in hyperbolic space or in the sphere? \end{ques} A quantitative version of Inequality \eqref{weinstockconvex} was given in \cite[Theorem 1.1]{GLPT2020} by Gavitone, La Manna, Paoli and Trani. \medskip If we consider a normalisation by the volume $\vert \Omega\vert$, the situation is different. First, there is an inequality, due to F. Brock \cite{Br2001}, for domains of the Euclidean space $\R^{d+1}$ in all dimensions: \begin{thm} Let $\Omega \subset \R^{d+1}$ be a bounded domain and let $\Omega^{*}$ be the Euclidean ball such that $\vert \Omega\vert=\vert \Omega^*\vert$. Then \begin{equation} \label{thm:brock} \sigma_1(\Omega)\le \sigma_1(\Omega^*). \end{equation} Equality holds if and only if $\Omega$ is isometric to $\Omega^{*}$. \end{thm} A quantitative version of Inequality \eqref{thm:brock} was given in \cite{BrDeRu2012} by Brasco, De Philippis and Ruffini. \medskip Inequality \eqref{thm:brock} was extended by Raveendran and Santhanam to rank-1 symmetric spaces of noncompact type \cite[Theorem 1.1]{BiSa2014}.
\begin{thm}\label{binoysanthanam1} Let $M$ be a noncompact rank-1 symmetric space with sectional curvature $-4 \le K(M)\le -1$ (a typical example being the hyperbolic space). Let $\Omega \subset M$ be a bounded domain with smooth boundary. Then \begin{equation} \label{BiSa} \sigma_1(\Omega)\le \sigma_1(\Omega^*) \end{equation} where $\Omega^*\subset M$ is a geodesic ball such that $\vert \Omega\vert =\vert \Omega^*\vert$. Equality holds if and only if $\Omega$ is isometric to $\Omega^*$. \end{thm} Note that such a result is not true for rank-1 symmetric spaces of compact type. In \cite[Theorem 1.4]{CaRu2019}, Castillon and Ruffini construct a counterexample on the sphere. The domain $\Omega$ consists of the intersection of two antipodal geodesic balls, and for a ball $B$ with $\vert B\vert =\vert \Omega\vert$, they show that $\sigma_1(\Omega)>\sigma_1(B)$. \medskip \noindent \textbf{Doubly connected domains.} The study of eigenvalue problems for the Laplacian on doubly connected domains is a classical problem (optimal placement of an obstacle). Recently it has been studied for Steklov eigenvalues, most often for the first nonzero eigenvalue and with various boundary conditions, such as mixed Steklov-Dirichlet or Steklov-Neumann conditions. These problems are usually posed on annular domains whose hole is a round ball. Different boundary conditions can be imposed on the inner and outer boundaries, and different optimisation problems can be studied. \medskip A first result of this kind was obtained by Verma and Santhanam \cite[Theorem 1.1]{SaVe2020}. For $d+1>2$, they introduce a ball $B_1 \subset \R^{d+1}$ of radius $R_1$ and a ball $B_2 \subset \R^{d+1}$ of radius $R_2>R_1$ such that $\bar B_1\subset B_2$. They consider the mixed Steklov-Dirichlet problem on $B_2\setminus \bar B_1$ with Dirichlet boundary condition on $\partial B_1$ and Steklov boundary condition on $\partial B_2$.
Under these assumptions, they show that the first eigenvalue $\sigma_0^D$ of $B_2\setminus \bar B_1$ is maximised when the balls are concentric. In \cite[Theorem 1.2]{Ft2022}, Ftouhi gives another proof of this result, including in dimension $d+1=2$, which was left open in \cite{SaVe2020}. In the same paper (\cite[Theorem 1.1]{Ft2022}), he shows that for the first nonzero Steklov eigenvalue $\sigma_1(B_2\setminus \bar B_1)$, the maximum is also achieved uniquely when the balls are concentric. In \cite[Theorem 1]{Se2021}, Seo generalises the result of \cite{SaVe2020} to domains in rank one symmetric spaces. However, in the compact case, one needs to add the assumption that the radius $R_2$ of the larger ball $B_2$ is less than half the injectivity radius of the space. For example, for a sphere, this means that the domain is contained in a hemisphere. \medskip In this context, another natural question is to study the behaviour of the first eigenvalue when the center of $B_1$ moves radially away from the center of $B_2$ (with the Dirichlet boundary condition on $\partial B_1$). This question is studied by Hong, Lim and Seo~\cite[Theorems 1-4]{HoLiSe2022} for domains in $\R^{d+1}$. In Section 6, the authors perform numerical estimates and conjecture that $\sigma_0^D(B_2\setminus \bar B_1)$ decreases as $B_1$ moves towards the boundary. This was proved by Gavitone and Piscitelli \cite[Theorem 1.2]{GaPi2021}. In fact, their result is more general: instead of the ball $B_2$, the authors consider a domain $\Omega\subset \R^{d+1}$ which is connected and centrally symmetric with respect to a point. \medskip This last result leads to a natural question for doubly connected domains in $\R^{d+1}$: fix the inner ball $B_1$ (with Dirichlet boundary condition on $\partial B_1$) and replace $B_2$ by a domain $\Omega$. Under certain conditions, one may hope that $\sigma_0^D(\Omega \setminus B_1)$ is maximal when $\Omega$ is a ball.
This was investigated in \cite{PaPiSa2021} by Paoli, Piscitelli and Sannipoli, and in \cite{GPPS2021} by the same three authors together with Gavitone. In \cite[Definition 3.1]{PaPiSa2021}, the authors introduce the concept of a nearly spherical set, which intuitively means a set close to a ball. They show that, among all nearly spherical sets having the same volume, the first eigenvalue $\sigma_0^D$ is maximal exactly when $\Omega$ is a ball \cite[Theorem 3.2]{PaPiSa2021}. In \cite[Theorem 1.1]{GPPS2021}, a similar type of result is shown when $\Omega$ is a convex domain satisfying the following condition: the convex domain $\Omega$ has to be contained in another ball of radius $R$, where $R$ depends on the radius $R_1$ of the inner ball $B_1$. \begin{ques}\label{ques:manyquestionsdoublyconnected} These results lead to many new questions. \medskip For example, most of the above results concern the mixed Steklov-Dirichlet problem. What about the mixed Steklov-Neumann problem (for the first nonzero eigenvalue $\sigma_1$)? \medskip To what extent are the restrictions imposed on $\Omega$ in \cite{PaPiSa2021} and \cite{GPPS2021} necessary? \medskip In \cite{PaPiSa2021} and \cite{GPPS2021}, the authors consider the spectrum normalised by the volume of the domain. Is it possible to obtain similar results with a normalisation by the volume of the boundary of the domain? \medskip Is it possible to extend some of the results to domains in rank one symmetric spaces, as was done in \cite{Se2021} for the Steklov-Dirichlet problem on $B_2 \setminus B_1$? \end{ques} \medskip Recently, interesting inequalities involving a normalisation by the diameter of the domain have also appeared. We now discuss some of these. \medskip In \cite[Theorem 1.2]{BiSa2014}, the authors also obtain an inequality for domains in Cartan-Hadamard manifolds.
The estimate was made more explicit recently by Li, Wang and Wu in \cite[Theorem 1.1]{LiWaWu2020}: \begin{thm}\label{liwangwu} Let $(M,g)$ be a complete, simply-connected Riemannian manifold of dimension $d+1$ and $\Omega \subset M$ be a bounded domain with Lipschitz boundary. Let $M_{\kappa}$ be the $(d+1)$-dimensional space form of constant sectional curvature $\kappa \le 0$ and $\Omega^* \subset M_{\kappa}$ be a geodesic ball such that $\vert \Omega^*\vert =\vert \Omega \vert$. If the sectional curvature of $(M,g)$ is $\le \kappa$ and the Ricci curvature of $(M,g)$ is $\ge dKg$ with $K\le 0$, then \begin{equation} \label{LiWaWu} \sigma_1(\Omega)\le \left(\frac{sn_{K}(Diam(\Omega))}{sn_{\kappa}(Diam(\Omega))}\right)^{2d} \sigma_1(\Omega^*) \end{equation} where $sn_0(t)=t$ and for $k<0$, $sn_k(t)=\frac{1}{\sqrt{-k}}\sinh(\sqrt{-k}t)$. \end{thm} \begin{ques}\label{ques:diamterm} The term $\frac{sn_{K}(Diam(\Omega))}{sn_{\kappa}(Diam(\Omega))}$ may become very large when $Diam(\Omega)$ becomes large. Is it possible to establish a better estimate, or to construct an example of a domain $\Omega$ with large diameter and $\sigma_1(\Omega)$ large? \end{ques} \begin{ques}\label{ques:cartanhadamard} Can we get estimates like (\ref{LiWaWu}) for the other eigenvalues of domains in Cartan-Hadamard manifolds? The methods used in \cite{CoElGi2011} or \cite{Ha2011} do not seem to apply. \end{ques} \medskip In \cite[Proposition 4.3]{BoBuGi2017}, Bogosel, Bucur and Giacomini obtain an upper bound involving the diameter $Diam(\Omega)$ of the domain. \begin{thm} \label{diam1} There exists a constant $C_d>0$ such that for every $k \in \mathbb N$ and for every bounded connected Lipschitz open set $\Omega$ in $\R^{d+1}$ \begin{equation} \sigma_k(\Omega)\le C_d \frac{k^{\frac{d+3}{d+1}}}{Diam(\Omega)}. \end{equation} \end{thm} It would be interesting to investigate the optimality of the power of $k$. \medskip From this theorem, we deduce that large diameter implies small eigenvalues.
It is known that this is not the case for Riemannian manifolds: this is part (4) of Example \ref{exfraserschoen}. However, this leads to the following question: \begin{ques}\label{ques:diameterhnsn} Is it possible to get inequalities similar to those in Theorem \ref{diam1} for domains in the hyperbolic space or the sphere? \end{ques} Regarding the importance of the diameter, Al Sayed, Bogosel, Henrot and Nacry proved the following inequality in \cite[Proposition 2.2]{ABHN2021}: \begin{thm} \label{diam2} Let $\Omega$ be a convex domain of diameter $Diam(\Omega)$ in $\R^{d+1}$. Then, there exists an explicit constant $C = C(d, k)$ depending only on the dimension $d+1$ and on $k$ such that \begin{equation} \sigma_k(\Omega)\le C \frac{\vert \Omega \vert^{\frac{1}{d}}}{Diam(\Omega)^{\frac{2d+1}{d}}}. \end{equation} \end{thm} This shows that when the diameter is fixed, if the volume of a convex set tends to $0$ then all the eigenvalues tend to $0$. \begin{ques}\label{ques:specconvtozero} Let $\Omega_{\epsilon} \subset \R^{d+1}$ be a family of domains with fixed diameter (without convexity assumption). If the volume of $\Omega_{\epsilon}$ tends to $0$ as $\epsilon \to 0$, can one say that all the eigenvalues of $\Omega_{\epsilon}$ tend to $0$? \end{ques} In the same paper, the authors show that when considering domains in $\R^{d+1}$ with fixed diameter, the ball is never a maximum for the $k$th eigenvalue $\sigma_k$ \cite[Theorem 3.2]{ABHN2021}. \subsection{Metric upper bounds for Riemannian manifolds.}\label{UpperRiemannian} We will now present some recent metric estimates. Let us begin with an estimate in terms of diameter and injectivity radius in the spirit of \cite{Be1979} by Berger and \cite{Cr1980} by Croke. This is \cite[Theorem 5]{CoGi2022} by Colbois and Girouard. For a compact Riemannian manifold $\Omega$ of dimension $d+1$ with boundary $\Sigma$, and for $x,y \in \Omega$, we denote by $d_{\Omega}(x,y)$ the distance in $\Omega$.
The diameter of $\Sigma$ in $\Omega$ is defined by $$\text{Diam}_\Omega(\Sigma):=\sup\left\{d_{\Omega}(x,y)\,:\,x,y\in\Sigma\right\}.$$ Let $\Sigma_1,\cdots,\Sigma_b$ be the connected components of $\Sigma$. On each $\Sigma_j$, there is the extrinsic distance $d_{\Omega}$ and the corresponding diameter $\text{Diam}_\Omega(\Sigma_j)$. We will also consider the intrinsic distance $d_{\Sigma_j}(x,y)$ on $\Sigma_j$. \begin{thm}\label{thm:upperboundDiam} (\cite[Theorem 5]{CoGi2022}) Let $\Omega$ be a smooth connected compact Riemannian manifold of dimension $d+1$ with boundary $\Sigma$. Then, for each $j=1,\cdots,b$ and each $k\geq 1$, \begin{gather}\label{ineq:upperboundDiam} \sigma_k \leq K_d\frac{|\Omega|}{\text{Diam}_\Omega(\Sigma_j)^{2}}\left(\frac{1}{\text{inj}(\Sigma_j)}\right)^{d}k^{d+1}, \end{gather} where $K_d$ is an explicit constant depending on the dimension of $\Omega$. \end{thm} To obtain upper bounds of a metric nature which have optimal exponent of $k$, we need to introduce the metric concepts of packing and growth constant, as is done in \cite{GrNeYa2004}. These constants avoid the need to introduce restrictions on the curvature. We also need to introduce a constant measuring how the boundary $\Sigma$ of $\Omega$ is distorted in $\Omega$. This is another way to compare the intrinsic diameter $\text{Diam}(\Sigma_j)$ and the extrinsic diameter $\text{Diam}_\Omega(\Sigma_j)$. \medskip For $x\in \Omega$, let $$ B^{\Omega}(x,r)=\{y \in \Omega:d_{\Omega}(x,y)<r\}. $$ For $x\in \Sigma_j$, let $$ B^{\Sigma_j}(x,r)=\{y\in \Sigma_j:d_{\Sigma_j}(x,y)<r\}. $$ The \emph{growth constant} $A$ is the smallest value such that for each $x \in \Sigma_j$, $j=1,...,b$, and $r>0$, $\vert B^{\Sigma_j}(x,r)\vert \le Ar^{d}$.
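As a toy illustration of the growth constant (the setting and the function name below are ours, not from \cite{CoGi2022}): take $\Sigma_j$ to be the unit circle with its intrinsic distance, so $d=1$. The ball $B^{\Sigma_j}(x,r)$ is an arc of length $\min(2r,2\pi)$, the ratio $\vert B^{\Sigma_j}(x,r)\vert/r$ never exceeds $2$, and it equals $2$ for all $r\le\pi$, so $A=2$. A minimal numerical sketch:

```python
import math

# Intrinsic metric ball on the unit circle: an arc of length 2r,
# capped at the total length 2*pi of the circle.
def circle_ball_volume(r):
    return min(2 * r, 2 * math.pi)

# The growth constant A is the smallest value with |B(x, r)| <= A * r^d
# for all r > 0.  Here d = 1: the ratio |B(x, r)| / r equals 2 for
# r <= pi and decreases afterwards, so A = 2.
ratios = [circle_ball_volume(r) / r for r in (0.01, 0.1, 1.0, math.pi, 10.0)]
print(ratios)
```

The same computation with $r^d$ in place of $r$ gives the growth constant of a $d$-dimensional boundary component.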
The \emph{extrinsic packing constant} $N$ is the smallest value such that, for each $r>0$ and each $x\in \Sigma_j$, the extrinsic ball $B^{\Omega}(x,r)\cap \Sigma_j$ can be covered by $N$ extrinsic balls of radius $\frac{r}{2}$ centered at points $x_1,...,x_N \in \Sigma_j$. We measure the distortion of the boundary as follows: first, observe that for $x,y \in \Sigma_j$, $d_{\Omega}(x,y)\le d_{\Sigma_j}(x,y)$. Then, if $\Sigma_j$ is a connected component of $\Sigma$, the distortion $\Lambda_j$ of $\Sigma_j$ is $$ \Lambda_j=\inf\{c>0:d_{\Sigma_j}(x,y)\le cd_{\Omega}(x,y); x,y \in \Sigma_j\}. $$ If $b$ is the number of connected components of $\Sigma$, the distortion $\Lambda$ of $\Sigma$ is $$ \Lambda=\max\{\Lambda_1,...,\Lambda_b\}. $$ Note that one can express the packing constant $N$ in terms of the distortion $\Lambda$ and the intrinsic packing constant of $(\Sigma,d_{\Sigma})$ \cite[Lemma 13]{CoGi2022}. This is useful when discussing the sharpness of the main estimate below. \medskip With these definitions, we get \cite[Theorem 1]{CoGi2022}: \begin{thm} \label{globalestimate} Let $\Omega$ be a connected compact Riemannian manifold of dimension $d+1$ with boundary $\Sigma$. For each $k\ge 1$, \begin{equation}\label{ineq} \sigma_k(\Omega)\le 512b^2N^3A\Lambda^2\frac{\vert \Omega\vert}{\vert\Sigma\vert^{\frac{d+2}{d}}}k^{2/d}. \end{equation} Moreover, the exponent $2/d$ on $k$ is now optimal. \end{thm} \begin{remark} \label{rem: necessity} Of course, the estimates in Theorems \ref{thm:upperboundDiam} and \ref{globalestimate} are not sharp in the sense that there are no cases of equality, even for the first nonzero eigenvalue $\sigma_1$. This comes from the fact that we use metric constructions. However, the different quantities appearing in the right-hand side of these inequalities cannot be removed.
Precisely, for each such quantity, we construct a family of examples where all the other such quantities are constant or bounded, and where the first eigenvalue becomes arbitrarily large. \begin{enumerate} \item Example \ref{confdef} shows that the presence of the volume $\vert \Omega\vert$ is necessary in both theorems. \item Example \ref{JadeBrisson} shows that the injectivity radius is necessary in Theorem \ref{thm:upperboundDiam} and that the volume $\vert \Sigma\vert$ is necessary in Theorem \ref{globalestimate}. \item In Example \ref{CianciGirouard}, the boundary is fixed. This shows that the extrinsic diameter $\text{Diam}_\Omega(\Sigma_j)$ of the boundary component is necessary in Theorem \ref{thm:upperboundDiam}. This also shows that the distortion and the growth constant $A$ are necessary in Theorem \ref{globalestimate}. Note that the distortion and the growth constant (with respect to the extrinsic distance) are closely related. \item The number $b$ of connected components of the boundary is necessary in Theorem \ref{globalestimate}: in \cite{CoGi2014}, we constructed a sequence of compact surfaces $\Omega_i$ with $\vert \partial \Omega_i \vert \sigma_1(\Omega_i) \to \infty$. The surface was locally a product $S^1\times [0,1]$ near each boundary component, so that $\Lambda=1$. After renormalisation, we obtain a sequence with $\vert \partial \Omega_i\vert=1$, $\sigma_1(\Omega_i) \to \infty$, $\vert \Omega_i\vert$ uniformly bounded from below and above, $\Lambda=1$, and with the packing constant $N_{\partial \Omega_i}$ and the growth constant independent of $i$. This shows that $b$ has to go to $\infty$. We can adapt this construction to higher dimensional manifolds as well. \item In Theorem \ref{globalestimate}, the packing constant $N$ is necessary and the exponent $2/d$ is optimal. This last fact is surprising, because in comparison with the Weyl law, we could expect to have the exponent $1/d$. These two facts are consequences of Example \ref{example: cylinder}.
For each closed Riemannian manifold $M$ of dimension $d$, we consider the cylinder $\Omega_L=[0,L]\times M$. In this example, the distortion $\Lambda=1$ and the growth constant $A$ is fixed. For $L$ small enough, we have $$ \sigma_k(\Omega_L)= \sqrt{\lambda_k(M)}\tanh(\sqrt{\lambda_k(M)}\frac{L}{2}). $$ As $\vert\Omega_L \vert=L \vert M \vert$, Inequality \eqref{ineq} becomes $$ \sqrt{\lambda_k(M)}\tanh(\sqrt{\lambda_k(M)}\frac{L}{2}) \le 2048N^3A\frac{L\vert M\vert}{\vert M\vert^{\frac{d+2}{d}}}k^{2/d} $$ and, after division by $L$, $$ \sqrt{\lambda_k(M)}\frac{1}{L}\tanh(\sqrt{\lambda_k(M)}\frac{L}{2}) \le 2048N^3A\frac{\vert M\vert}{\vert M\vert^{\frac{d+2}{d}}}k^{2/d}. $$ Then we let $L$ tend to $0$. As $\lim_{L\to 0} \frac{1}{L}\tanh(\sqrt{\lambda_k(M)}\frac{L}{2})=\frac{\sqrt{\lambda_k(M)}}{2}$ (since $\tanh(x)\sim x$ near $0$), we get \begin{equation} \label{ineq1} \lambda_k(M) \le 4096N^3A\frac{\vert M\vert}{\vert M\vert^{\frac{d+2}{d}}}k^{2/d}. \end{equation} Now the Weyl law for $\lambda_k$ implies that the exponent of $k$ cannot be smaller than $2/d$. At the same time, we see that we need control of the packing constant $N$ of $\Omega$: if $N^3$ were bounded, Inequality \eqref{ineq1} would lead to a universal inequality for $\lambda_1(M)$, which is impossible, as on each closed manifold $M$ of dimension $d\ge 3$ it is possible to construct a family of Riemannian metrics $g_{\epsilon}$ on $M$, $0<\epsilon <1$, with $\vert (M,g_{\epsilon})\vert =1$ and $\lambda_1(M,g_{\epsilon})\to \infty$ as $\epsilon\to 0$ (see \cite{CoDo1994}). In fact, in this case, the intrinsic packing constant of $(M,g_{\epsilon})$ tends to $\infty$ as $\epsilon \to 0$, and this forces the extrinsic packing constant $N$ of $\Omega_L=[0,L]\times (M,g_{\epsilon})$ to tend to $\infty$ as $\epsilon \to 0$. \item The exponent $d+1$ of $k$ in Theorem \ref{thm:upperboundDiam} is probably not optimal.
\end{enumerate} \end{remark} \medskip In dimensions higher than $2$, we have obtained robust geometric estimates, where all the geometric and metric ingredients appearing in the inequalities are necessary. However, these estimates are too general to be sharp. The next subsection presents a setting where one can get sharp estimates: Riemannian manifolds of revolution and in particular hypersurfaces of revolution in Euclidean space. \subsection{Upper and lower bounds: the case of manifolds of revolution} \label{UpperRev} A manifold of revolution $\Omega$ of dimension $d+1$ is a warped product $\Omega= [0,L] \times \Sp^{d}$ with a Riemannian metric $g(r,p)=dr^2+h^2(r)g_{0}$ where $g_0$ is the usual canonical metric on $\Sp^{d}$. The function $h$ is smooth and satisfies $h(r)> 0$ in $[0,L[$. The situation where $h(L)>0$ corresponds to $\Omega$ being a topological cylinder, as expected from the product structure. If instead $h(L)=0$, then $\Omega$ is homeomorphic to a ball. In order to obtain a smooth manifold, we also need to impose that $h(L)=h^{(2k)}(L)=0$ for $k >0$ and $h'(L)=-1$. \medskip First, note that, to our knowledge, there are not many results on the spectrum of the Laplacian in the context of compact Riemannian manifolds of revolution. However, we can mention the paper \cite{AbFr2002} by Abreu and Freitas, where the authors study the first eigenvalue for $S^1$-invariant abstract Riemannian metrics on the sphere and compare it with the situation where the metric is realised as the pullback of the canonical Euclidean metric in $\R^3$ by some $S^1$-invariant embedding (see also \cite{CoDrEl2008} for some generalisations). A series of papers by Ariturk (\cite{Ar2014, Ar2016, Ar2018}) is very inspiring; these papers mainly concern the maximisation of the first eigenvalue of surfaces of revolution in Euclidean space with Dirichlet boundary conditions. However, the initial paper \cite{Ar2014} is not yet published to our knowledge.
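Before turning to the Steklov problem on manifolds of revolution, we record a quick numerical illustration of the cylinder formula $\sigma_k(\Omega_L)=\sqrt{\lambda_k(M)}\tanh\big(\sqrt{\lambda_k(M)}\frac{L}{2}\big)$ quoted in Remark \ref{rem: necessity}. This is only a sketch: the function name and the sample value of $\lambda_k(M)$ are ours.

```python
import math

# Steklov eigenvalue of the cylinder [0, L] x M attached to a Laplace
# eigenvalue lam of the closed factor M (formula quoted in the remark
# above, valid for L small enough); the function name is ours.
def sigma_cylinder(lam, L):
    return math.sqrt(lam) * math.tanh(math.sqrt(lam) * L / 2)

lam = 9.0  # sample Laplace eigenvalue of M (hypothetical value)

# Small L: sigma behaves like lam * L / 2, since tanh(x) ~ x near 0.
print(sigma_cylinder(lam, 1e-6) / 1e-6)  # approximately lam / 2 = 4.5

# Large L: sigma saturates at sqrt(lam).
print(sigma_cylinder(lam, 50.0))         # approximately sqrt(lam) = 3.0
```

The two regimes are exactly what drives the argument of the remark: dividing by $L$ and letting $L\to 0$ turns the Steklov bound into a bound on $\lambda_k(M)$ itself.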
\medskip Regarding the Steklov problem, for specific manifolds of revolution, such as the Euclidean ball or the cylinder $[0,L] \times \Sp^{d} $, it is possible to explicitly calculate the eigenvalues; see Examples in \cite{GiPo2017} and Example \ref{example: cylinder}. The hope is that for more general revolution manifolds, it remains possible to get sharp estimates on the Steklov eigenvalues. \medskip We discuss lower and upper bounds for the Steklov eigenvalues of such metrics. We will take $d+1 \ge 3$. The case of surfaces ($d=1$) is distinct, well-understood, and explained in \cite[Theorem 1.1]{FaTaYu2015} by Fan, Tam and Yu; we discuss this work further in Section \ref{applic.bounds}. They obtain the maximum of the $k$th normalised Steklov eigenvalue of all rotationally symmetric metrics on $[0,L] \times \mathbb \Sp^1$ for $k>2$, and the supremum for $k=2$. \medskip We will consider two situations: we will first consider the case of Riemannian metrics of revolution, and subsequently, the special case of Euclidean hypersurfaces of revolution. \subsubsection{Riemannian metrics of revolution} If $\Omega$ is a Riemannian manifold of revolution of dimension $d+1\ge 3$ and fixed boundary (with one or two connected components) and without other assumptions, one can find a family of revolution metrics with that boundary and arbitrarily small eigenvalues $\sigma_{k}$ and another family of revolution metrics with that boundary and arbitrarily large $\sigma_{1}$. It is even possible to construct these families so that they are conformal to any initial Riemannian metric $g$ given on $\Omega$. In order to show these facts, we just have to adapt \cite[Theorem 1.1 and Proposition 2.1]{CoElGi2019} (see Example \ref{confdef}). Roughly speaking, if $g(r,p)=dr^2+h^2(r)g_{0}$ is a revolution metric on $\Omega$ and $f=f(r)$ a positive smooth function, taking the value $1$ on $\partial \Omega$, we consider the conformal metric $g_f(r,p)=f^2(r)g(r,p)$. 
(Note that, after a change of variable, the Riemannian metric may be written $dr^2+\tilde h^2(r)g_0$.) The Rayleigh quotient $R(u)$ of a function $u$ on $\Omega$ is given by \begin{equation} R(u)=\frac{\int_{\Omega} \vert du\vert^2_gf^{d-1}dV_{(\Omega,g)}}{\int_{\Sigma}u^2dV_{(\Sigma,g)}}. \end{equation} Taking $f$ close to $0$ in the interior of $\Omega$ allows one to obtain as many arbitrarily small eigenvalues as one wishes. Taking $f$ large inside $\Omega$ leads to a large first nonzero eigenvalue $\sigma_1(\Omega, f^2g)$. Therefore, in order to get control over the Steklov spectrum, one has to add some geometric hypotheses. \medskip In \cite{Xi2022}, Xiong considers the case of a revolution metric $g(r,p)=dr^2+h^2(r)g_{0}$ on a ball with constraints on the Ricci curvature and on the convexity of the boundary. Because of the symmetries of revolution, the spectrum of $\Omega$ comes with multiplicity. In the sequel we will denote by $\sigma_{(k)}(\Omega)$ the Steklov eigenvalues of $\Omega$ counted without multiplicity, that is $$ \sigma_{(0)}=\sigma_0=0<\sigma_{(1)}(\Omega) < \sigma_{(2)}(\Omega) <... $$ The multiplicity of $\sigma_{(k)}$ is the multiplicity of the $k$th eigenvalue $\lambda_{(k)}$ of $\mathbb \Sp^d$. See \cite[Proposition 2]{CoVe2021}. In Example \ref{ex: ball}, it was shown that for the Euclidean ball of radius $R$ in $\R^{d+1}$, we have $\sigma_{(k)}=\frac{k}{R}$ with multiplicity expressed in terms of binomial coefficients, namely $C_{d}^{d+k}-C_{d}^{d+k-2}$ when $k\ge 2$. Note that for a metric of revolution with two boundary components, isolated examples with larger multiplicity may appear: the multiplicity of $\sigma_{(k)}$ is not always the same. \medskip With the above notations, Xiong shows: \begin{thm}\label{thm:xiong1} \cite[Theorems 2 and 3]{Xi2022} Suppose that $\Omega$ has nonnegative Ricci curvature and strictly convex boundary. Then $\sigma_{(k)}$ satisfies \begin{equation} \sigma_{(k)}(\Omega,g) \ge k\frac{-h'(0)}{h(0)}.
\end{equation} \medskip If $\Omega$ has nonpositive Ricci curvature and strictly convex boundary, then $\sigma_{(k)}$ satisfies \begin{equation} \sigma_{(k)}(\Omega,g) \le k\frac{-h'(0)}{h(0)}. \end{equation} Moreover, in each case, we have equality if and only if $h(r)=L-r$ (that is, $\Omega$ is isometric to the Euclidean ball of radius $L$). \end{thm} In \cite{Xi2021}, Xiong investigates the same problem, but for the difference and the ratio of successive eigenvalues. \begin{thm} \label{thm:xiong2}\cite[Theorems 2 and 5]{Xi2021} Suppose that $\Omega$ has nonnegative Ricci curvature and strictly convex boundary, then for $k\ge 0$ \begin{equation} \sigma_{(k+1)}(\Omega,g)- \sigma_{(k)}(\Omega,g)\ge \frac{-h'(0)}{h(0)} \end{equation} and for $k\ge1$ \begin{equation} \frac{\sigma_{(k+1)}(\Omega,g)}{\sigma_{(k)}(\Omega,g)}\le \frac{k+1}{k}. \end{equation} \medskip If $\Omega$ has nonpositive Ricci curvature and strictly convex boundary, then for $k\ge 0$ \begin{equation} \sigma_{(k+1)}(\Omega,g)- \sigma_{(k)}(\Omega,g)\le \frac{-h'(0)}{h(0)} \end{equation} and for $k\ge1$ \begin{equation} \frac{\sigma_{(k+1)}(\Omega,g)}{\sigma_{(k)}(\Omega,g)}\ge \frac{k+1}{k}. \end{equation} In all these situations, equality holds if and only if $h(r)=L-r$ (that is, $\Omega$ is isometric to the Euclidean ball of radius $L$). \end{thm} These results lead to different kinds of questions: \begin{ques}\label{ques:rev1} What can be said for revolution metrics with other geometric constraints like $\vert Ricci \vert \le a^2$? \end{ques} \begin{ques}\label{ques:rev2} Can we get similar results for revolution metrics on manifolds with two boundary components? \end{ques} \subsubsection{Euclidean hypersurfaces of revolution} A particular case of a revolution manifold is when $\Omega$ is a $(d+1)$-dimensional hypersurface of revolution in Euclidean space $\R^{d+2}$.
Without loss of generality, we fix the boundary to be $\mathbb \Sp^{d}\times \{0\} \subset \R^{d+1} \times \{0\}$ if the boundary has one connected component and $\mathbb \Sp^{d}\times \{0\} \cup \mathbb \Sp^{d}\times \{\delta\}$ if the boundary has two connected components. Note that we are assuming both boundary components are isometric to the unit sphere; in particular, they have the same volume. The fact that ${\Omega}$ is a hypersurface has a strong consequence for the induced Riemannian metric: consider the meridian curve $c$ of the hypersurface of revolution, that is, the curve in ${\Omega}$ cut by a 2-dimensional half-plane whose edge is the axis of revolution (i.e., the $x_{d+2}$ axis in the standard coordinates on $\R^{d+2}$). Denote by $L$ its length and introduce a parametrisation by arc-length $c(r)=(h(r),x_{d+2}(r))$, $0\le r \le L$, where $h(r)$ denotes the distance to the $x_{d+2}$ axis. Then, we can write the metric of $\Omega$ as $g(r,p)=dr^2+h^2(r)g_{0}$ where $g_0$ is the canonical metric on $\mathbb \Sp^{d}$ and $r \in [0,L]$. For each $r$ we have $\vert h'(r)\vert \le 1$. This implies \begin{equation} 1-r \le h(r) \le 1+r;\ h(L)-r \le h(L-r) \le h(L)+r. \end{equation} If $\Omega$ has one boundary component, $h(L)=0$ and $0\le h(L-r) \le r$. \smallskip If $\Omega$ has two boundary components, $h(L)=1$ and $1-r \le h(L-r)\le 1+r$. \medskip \noindent \textbf{Case with one boundary component}: This case is now well understood. For each $k\ge 1$, we have the sharp inequalities \begin{equation} k\le \sigma_{(k)}(\Omega) < k+d-1. \end{equation} Precisely, for lower bounds, Colbois, Girouard and Gittins show \cite[Theorem 1.8]{CoGiGi2019}: \begin{thm} Let $\Omega$ be a hypersurface of revolution in $\R^{d+2}$, ($d\ge 2$), with connected boundary $\mathbb \Sp^{d} \times \{0\}$. Then for each $k\ge 1$, $\sigma_{(k)}(\Omega) \ge \sigma_{(k)}(\mathbb B^{d+1})=k$, where $\mathbb B^{d+1}$ denotes the unit ball in $\R^{d+1}$.
For each given $k$, we have equality if and only if $\Omega =\mathbb B^{d+1} \times \{0\}$. \end{thm} For upper bounds, Colbois and Verma show~\cite[Theorem 1]{CoVe2021}: \begin{thm} \label{upperrev} Let $\Omega$ be a hypersurface of revolution in $\R^{d+2}$ with one boundary component $\mathbb \Sp^{d} \times \{0\}$. Then, for $d\ge 2$ and for each $k \ge 1$, we have \begin{equation} \sigma_{(k)}(\Omega) < k+d-1. \end{equation} Moreover, the result is sharp. For each $\epsilon >0$ and each $k \ge 1$, there exists a hypersurface of revolution $\Omega_{\epsilon}$ with one boundary component such that $\sigma_{(k)}(\Omega_{\epsilon}) > k+d-1-\epsilon$. \end{thm} In order to understand the geometry behind this estimate, we can consider Formulas (\ref{Dirichlet}) and (\ref{Neumann}) of Example \ref{annuli}. The hypersurfaces of revolution with $\sigma_k$ close to $k+d-1$ contain annuli $\Omega_L$ with $L$ arbitrarily large. The bracketing formula (\ref{ineq:compmixed}) shows that the $k$th eigenvalue $\sigma_{(k)}$ converges to $k+d-1$ as $L\to \infty$. \begin{remark} The situation for $d=1$ is also interesting: all surfaces of revolution that share the same connected boundary are Steklov-isospectral. Indeed, they are conformally equivalent to each other with conformal factor identically equal to one on the boundary, hence the observation follows from the variational characterisation \eqref{eq:Stek Rayleigh min max}. \end{remark} \medskip \noindent \textbf{Case with two boundary components}: The situation with two boundary components is more complicated. For lower bounds, we have a result comparable to the case with one boundary component \cite[Theorem 1.11]{CoGiGi2019}. \begin{thm} \label{lowerrev2} Let $\Omega$ be a hypersurface of revolution in $\R^{d+2}$ with boundary $\mathbb \Sp^{d} \times \{0\} \cup \mathbb \Sp^{d} \times \{\delta\}$. Let $L$ be the arc length of the meridian.
If $L\ge 2$, then for each $k\ge 1$, \begin{equation} \sigma_k(\Omega) \ge \sigma_k(\mathbb B^{d+1} \sqcup \mathbb B^{d+1} ). \label{inlowerrev2} \end{equation} In particular, if $\delta \ge 2$ this is always true. \smallskip For $\delta \ge 2$, Inequality \eqref{inlowerrev2} is sharp: for each $k$ and each $\epsilon >0$, there exists a connected hypersurface $\Omega_{\epsilon,k}$ such that $\sigma_{k}(\Omega_{\epsilon,k})\le \sigma_k(\mathbb B^{d+1} \sqcup \mathbb B^{d+1})+\epsilon$. \end{thm} Note that, if we do not require the hypersurface to be connected, it suffices to take the disjoint union of two unit balls to achieve sharpness. When we require the hypersurface to be connected, we consider the disjoint union of the two balls, perforated with a small hole around the origin, and join the punctured balls by a thin cylinder. \medskip When $\delta <2$, it appears to be more difficult to find a lower bound for $\sigma_{(k)}$. For $k=1$, the union of two balls gives the lower bound $0$. But for larger $k$, using Example \ref{example: cylinder}, it is easy to see that, when $\delta$ becomes small, the cylinder $[0,\delta] \times \mathbb \Sp^{d}$ has plenty of small eigenvalues and the union of two balls no longer gives the lower bound. \begin{ques}\label{ques:2bc1} Find a sharp lower bound for $\sigma_{k}(\Omega)$, where $\Omega$ is a hypersurface of revolution with boundary components $\mathbb \Sp^{d} \times \{0\} \cup \mathbb \Sp^{d} \times \{\delta\}$ and $\delta<2$. \end{ques} \begin{ques}\label{ques:2bc2} Find a sharp upper bound for $\sigma_{k}(\Omega)$, where $\Omega$ is a hypersurface of revolution with boundary components $\mathbb \Sp^{d} \times \{0\} \cup \mathbb \Sp^{d} \times \{\delta\}$. \end{ques} \begin{ques}\label{ques:2bc3} In both cases (one or two boundary components) can we obtain results as in Theorem \ref{thm:xiong2}, that is, sharp bounds for the difference $\sigma_{(k+1)}-\sigma_{(k)}$ and the ratio $\frac{\sigma_{(k+1)}}{\sigma_{(k)}}$?
\end{ques} \medskip We have seen that considering hypersurfaces of revolution of Euclidean space allows one to obtain some sharp estimates. An intermediate situation between general Riemannian manifolds and hypersurfaces of revolution is to consider submanifolds of Euclidean space, and this is the object of the next subsection. \subsection{Upper and lower bounds: the case of submanifolds of Euclidean space} \label{UpperSub} We consider the following situation: we fix a closed (not necessarily connected) submanifold $\Sigma$ of dimension $d$ of Euclidean space $\R^m$ and consider all possible $(d+1)$-dimensional submanifolds $\Omega \subset \R^m$ with boundary $\Sigma$. In \cite[Theorem 1.11]{CoGiGi2019}, Colbois, Girouard and Gittins prove the inequality \begin{equation} \sigma_k(\Omega) \le A_{\Sigma} \vert \Omega \vert k^{2/d} \end{equation} where $A_{\Sigma}$ depends on the geometry of $\Sigma$. Moreover, the authors leave open the question of constructing $\Omega$ with $\sigma_1$ arbitrarily large or small (with $\Sigma$ fixed). We can apply Inequality (\ref{ineq}) in this context, and this makes the dependence on the geometry of $\Sigma$ more explicit: \begin{equation} \sigma_k(\Omega)\le 512b^2N^3A\Lambda^2\frac{\vert \Omega\vert}{\vert\Sigma\vert^{\frac{d+2}{d}}}k^{2/d}. \end{equation} The distortion $\Lambda$ of $\Sigma$ in $\Omega$ is bounded from above by the distortion of $\Sigma$ in $\R^m$. For the same reason as for Riemannian manifolds (Theorem \ref{globalestimate} and Point 5 of Remark \ref{rem: necessity}), the exponent is optimal: we can also consider in $\R^m$ a product of the type $M\times [0,L]$ as in Remark \ref{rem: necessity}. Similarly, one needs to control $\vert \Sigma\vert$: the construction of Brisson~\cite{Br2022} and Example \ref{JadeBrisson} also works for a submanifold. The only question is about the presence of $\vert \Omega \vert$.
It does not seem possible to adapt the construction of \cite{CoElGi2019} in Example \ref{confdef} to submanifolds with fixed boundary, and it turns out that the question is open: \begin{ques}\label{open:arbitrarilylarge} Given a $d$-dimensional compact submanifold $\Sigma$ in $\R^m$, is it possible to construct a family of $(d+1)$-dimensional submanifolds $\Omega$ of $\R^m$ with boundary $\Sigma$ for which $\sigma_1(\Omega)$ becomes arbitrarily large? \end{ques} We can add different kinds of conditions in order to avoid the presence of $\vert \Omega\vert$. We can take advantage of being in Euclidean space to introduce the \emph{intersection index} already considered in \cite{CoDrEl2010}. For a compact immersed submanifold $N$ of dimension $q$ in $\R^{q+p}$, almost every $p$-plane $\Pi$ in $\R^{q+p}$ is transverse to $N$, meaning that the intersection $\Pi \cap N$ consists of a finite number of points. \begin{defn} \label{def:index} The intersection index of $N$ is \begin{equation} \label{eq:index} i_p(N) := \sup_{\Pi} \{\# (\Pi \cap N)\}, \end{equation} where the supremum is taken over the set of all $p$-planes $\Pi$ that are transverse to $N$ in $\R^{q+p}$. \end{defn} In \cite[Corollary 1.5]{CoGi2021}, the following result is proved by Colbois and Gittins: \begin{thm} \label{thm:3} Let $d \geq 2$. Let $\Sigma$ be a $d$-dimensional, closed, smooth submanifold of $\R^m$. Let $inj(\Sigma)$ denote the injectivity radius of $\Sigma$. There exist constants $A_d, B_d>0$ depending only on $d$ such that for any compact $(d+1)$-dimensional submanifold $\Omega$ of $\R^m$ with boundary $\Sigma$ and for $k \geq 1$, \begin{equation}\label{eq:T3} \sigma_k(\Omega) \leq A_d \frac{i(\Omega)}{inj(\Sigma)} + B_d \, i(\Omega)\left(\frac{i(\Sigma) k}{\vert \Sigma \vert}\right)^{1/d}, \end{equation} where $i(\Sigma) = i_{m-d}(\Sigma)$ and $i(\Omega) = i_{m-d-1}(\Omega)$.
\end{thm} In some sense, the index $i(\Omega)$ plays the role of the volume of $\Omega$, and it is also an open question to decide if we really need it. Note that the exponent of $k$ is optimal with respect to the Weyl asymptotics: to our knowledge, this is one of the few situations where one can get an optimal exponent in the upper bound without control of the curvature. By the results of \cite{Br2022} in Example \ref{JadeBrisson} (see also the example in \cite[Section 4]{CoGi2021}), we need to take the injectivity radius into account for this inequality. \medskip In contrast to the case of revolution manifolds, there do not exist many sharp inequalities in the case of submanifolds of the Euclidean space where we can characterise cases of equality. One notable exception is the paper \cite{IlMa2011} by Ilias and Makhoul. In order to get a sharp inequality, they have to take the mean curvature into account. We state here part of \cite[Theorem 1.2]{IlMa2011}. \begin{thm} Let $\Omega$ be an immersed compact submanifold of dimension $d+1$ in Euclidean space $\R^m$ with boundary $\Sigma$. Let $H$ denote the mean curvature of $\Sigma$. \begin{enumerate} \item If $m>d+1$, then \begin{equation} \label{ineqmean} \sigma_1(\Omega)\le (d+1)\frac{\vert \Omega\vert}{\vert \Sigma\vert^2}\int_{\Sigma} \vert H\vert^2dV_{\Sigma}. \end{equation} We have equality in (\ref{ineqmean}) if and only if $\Omega$ is a minimal submanifold of the ball $B(\frac{1}{\sigma_1})$ of radius $\frac{1}{\sigma_1}$ in $\R^m$, $\Sigma \subset \partial B(\frac{1}{\sigma_1})$, and $\Omega$ reaches the boundary orthogonally. \item If $m=d+1$, we have the same inequality, and we have equality if and only if $\Omega$ is a ball. \end{enumerate} \end{thm} The authors also observe that one can prove a similar result for submanifolds of the sphere and ask the question: \begin{ques}\label{ques:eucl1} Is it possible to get a similar inequality for submanifolds of hyperbolic space? 
\end{ques} Another natural question is \begin{ques}\label{ques:eucl2} Is it possible to generalise Inequality (\ref{ineqmean}) to other eigenvalues? Note that a similar question for the spectrum of the Laplacian was solved only recently (and partially) by Kokarev in~\cite[Theorem 1.6]{Ko2020}. \end{ques} \noindent For other results of this kind, see the two recent papers \cite{ChSh2018,ChSh2022} by Chen and Shi. \medskip The question of constructing manifolds $\Omega$ with prescribed boundary $\Sigma$ and with arbitrarily small Steklov eigenvalue $\sigma_1$, or even $\sigma_k$, is also interesting in this context. In general, it is easy to deform a Riemannian metric in order to construct small eigenvalues (see Proposition~\ref{prop: no lower bound}). However, one cannot realise the first construction of Proposition \ref{prop: no lower bound} if the boundary is prescribed (as a Riemannian manifold). Moreover, the second construction requires a conformal perturbation, and so is not accessible if we consider submanifolds. Colbois, Girouard and M\'etras~\cite{CoGiMe2020} proposed a construction of a family $\Omega_\varepsilon\subset\R^m$ of $(d+1)$-dimensional submanifolds such that $\partial\Omega_\varepsilon=\Sigma$ for each $\varepsilon$ and $\sigma_k(\Omega_\varepsilon)\xrightarrow{\varepsilon\to 0}0$ for each $k$. This should be compared with Open Problem~\ref{open:arbitrarilylarge}. The proof of the general case is rather technical, but its essence is captured by~\cite[Example 4.1]{CoGiMe2020}, which we reproduce here. \begin{ex} \cite[Example 4.1]{CoGiMe2020} We consider the unit circle $\mathbb \Sp^1 \subset \R^2$ as a submanifold of dimension $d=1$ in $\R^3$ and we want to fill it with a surface $\Omega$ with $\sigma_1(\Omega)$ small. We will construct $\Omega$ as the graph in $\R^3$ of a function $f$ on the unit disc $\overline{\mathbb D}$.
Given a smooth function $f:\overline{\mathbb D}\rightarrow\mathbb R$ vanishing on the circle $\mathbb{S}^1=\partial\mathbb D$, let $$\Omega_f:=\mbox{Graph of }f=\{(x,y,f(x,y))\,:\,(x,y)\in\overline{\mathbb D}\}.$$ As a test function we will use the restriction $u_f$ to $\Omega_f$ of the function defined by $\tilde{u}(x,y,z)=x$. Note that the norm of $u_f$ on $\mathbb S^1=\partial \Omega_f$ will be a positive constant independent of $f$. \smallskip The key point is that in order to obtain $\nabla u_f$, we have to find the orthogonal projection of $\nabla\tilde{u}=(1,0,0)$ on the tangent space of $\Omega_f$. The idea is to choose a family $f_n$ of functions such that this projection tends to $0$ almost everywhere as $n \to \infty$. Concretely, the Dirichlet energy of $u_f$ is given by $$\int_{\Omega_f}|\nabla u_f|^2=\int_{\mathbb D} \frac{1 + f_y^2}{\sqrt{1 + f_x^2+f_y^2}} \,dxdy.$$ For $n\in\mathbb N$, define $f=f_n:\overline{\mathbb D}\rightarrow\mathbb R$ by $$f(x,y)=\sin(nx)(1-x^2-y^2).$$ A direct computation shows that $$\lim_{n\to\infty}\int_{\Omega_{f_n}}|\nabla u_{f_n}|^2=0.$$ This implies that the Rayleigh quotient of $u_{f_n}$ on $\Omega_{f_n}$ tends to $0$ as $n\to \infty$, which means $\sigma_1(\Omega_{f_n})$ tends to zero as well. \end{ex} \section{Optimising eigenvalues and applications to minimal surfaces} \label{sec:existence} In this section we address both the existence of metrics that maximise normalised eigenvalues and geometric implications. Let $(M,g)$ be a closed Riemannian manifold and $(N,h)$ an arbitrary Riemannian manifold. Recall that a map $\tau:M\to N$ is harmonic if and only if it is a critical point of the Dirichlet energy $E$. Here $E$ is defined by \begin{equation}\label{eq:dir en}E(\tau)=\frac{1}{2}\int_M\,|d \tau|^2 \,dV_g. \end{equation} (We emphasise that $|d\tau|$ depends on both Riemannian metrics $g$ and $h$.
In local coordinates, it is given by $|d\tau|^2_p=\sum\,h_{kl}(\tau(p))\,g^{ij}(p)\,\partial_{x^i}(\tau_k)\,\partial_{x^j}(\tau_l)$.) A classical result of Eells and Sampson \cite{EeSa1964} says that an isometric immersion $\tau:(M,g)\to (N,h)$ is minimal (i.e., its mean curvature vanishes) if and only if $\tau$ is harmonic. In 1966 (only two years after the aforementioned result of Eells and Sampson) Takahashi \cite{Ta1966} observed for any closed Riemannian manifold $(M,g)$ that an isometric immersion $u: M\to \mathbb{S}^m\subset \R^{m+1}$ of $(M,g)$ into a sphere is minimal (equivalently harmonic) if and only if all the coordinate functions $u_i:M\to \R$ are Laplace eigenfunctions with the same eigenvalue. Thirty years later, Nadirashvili proved that, for a closed surface $M$, any smooth metric maximising the normalised first eigenvalue gives rise to such an isometric minimal immersion of $M$ into a sphere. El Soufi and Ilias later extended this result to higher dimensions and to more general critical metrics. (We will define the notion of ``critical metric'' used by El Soufi and Ilias after the statements of the two theorems below.) \begin{thm}\cite{Na1996}, \cite{ElIl2000}, \cite{ElIl2008}\label{thm.nada min} Let $M$ be a closed manifold of dimension $n$. Suppose that a smooth Riemannian metric $g_0$ on $M$ is a critical point of the functional $g\to \overline{\lambda}_k(M,g)=\lambda_k(M,g)|M|_g^{2/n}$. Then (after possibly rescaling $g_0$) there exist $m\in\mathbf Z^+$ and linearly independent $\lambda_k(M,g_0)$-eigenfunctions $u_1,\dots, u_{m+1}$ such that $\sum_{i=1}^{m+1}u_i^2\equiv 1$. The map $u=(u_1,\dots, u_{m+1}):M\to \mathbb{S}^{m}$ is an isometric minimal immersion.
Let $k$ satisfy either $\lambda_{k-1}(M,g_0)<\lambda =\lambda_k(M,g_0)$ or $\lambda_k(M,g_0)=\lambda<\lambda_{k+1}(M,g_0)$. Then $g_0$ is a critical point of the functional $g\to\lambda_k(M,g)|M|_g^{2/n}$. Theorem~\ref{thm.nada min} not only gives an important application of spectral theory to minimal surface theory but also yields applications in the opposite direction. Indeed, Nadirashvili proved this theorem as part of a program to seek a maximising metric for the first normalised eigenvalue on the 2-torus $T^2$ and the Klein bottle. El Soufi and Ilias showed that conformally critical metrics for $\overline{\lambda}_k$ (i.e., critical points for the restriction of $\overline{\lambda}_k(M,\cdot)$ to a conformal class of metrics) always yield harmonic maps $u$ into spheres (but $u$ need not be an isometric or even a conformal immersion): \begin{thm}\cite[Theorem 4.1]{ElIl2008}\label{thm: El harmonic} \begin{enumerate} \item Let $M$ be a closed manifold. Suppose that the Riemannian metric $g_0$ on $M$ is a $\overline{\lambda}_k$-conformally critical metric. Then there exist linearly independent $\lambda_k(M,g_0)$-eigenfunctions $u_1,\dots, u_{m+1}$ such that $\sum_{i=1}^{m+1}u_i^2\equiv 1$. The map $u=(u_1,\dots, u_{m+1}): M\to \mathbb{S}^{m}$ is harmonic with constant energy density $\frac{1}{2}|du|^2=\frac{1}{2}\lambda_k$. \item Conversely, suppose that $(M,g_0)$ admits a harmonic map $u:M\to \mathbb{S}^m$ whose coordinate functions are eigenfunctions of $(M,g_0)$ with the same eigenvalue $\lambda$. Let $k$ satisfy either $\lambda_{k-1}(M,g_0)<\lambda =\lambda_{k}(M,g_0)$ or $\lambda_{k}(M,g_0)=\lambda <\lambda_{k+1}(M,g_0)$. Then $g_0$ is a conformally critical metric for $\overline{\lambda}_k$.
\end{enumerate} \end{thm} El Soufi and Ilias defined the notion of critical metric used in Theorems~\ref{thm.nada min} and \ref{thm: El harmonic} as follows: They first showed \cite[Theorem 2.1(i)]{ElIl2008} that if $\{g_t\}_t$ is a family of Riemannian metrics on $M$ depending analytically on the parameter $t\in (-\epsilon,\epsilon)$, then the map $t\to \lambda_k(M,g_t)$ admits left and right derivatives at each $t$. They then defined a metric $g$ on $M$ to be $\lambda_k$-critical if for each volume-preserving deformation $\{g_t\}_t$ depending analytically on $t\in (-\epsilon,\epsilon)$ with $g_0=g$, one has \begin{equation}\label{eq:crit pt deriv }\left(\frac{d}{dt}_{t=0^-}\lambda_k(g_t)\right)\left(\frac{d}{dt}_{t=0^+}\lambda_k(g_t)\right)\leq 0.\end{equation} Equivalently, for each such deformation, either \begin{equation}\label{eq:crit pt def}\lambda_k(g_t)\leq \lambda_k(g)+o(t) \mbox{\,\,\,or\,else\,\,\,} \lambda_k(g_t)\geq \lambda_k(g)+o(t).\end{equation} \begin{remark}\label{rem:crit different}Recently, Karpukhin and M\'etras \cite{KaMe2021} verified that the results above remain valid using a notion of critical points (referred to in \cite{KaMe2021} as extremal points) that allows $C^\infty$ rather than only analytic deformations in~\eqref{eq:crit pt def}. \end{remark} Singularities can arise when one tries to find $\overline{\lambda}_k$-critical or conformally critical metrics. Kokarev \cite[``Regularity Theorem'']{Ko2014} addressed regularity properties of $\overline{\lambda}_k$-conformally critical metrics (and generalisations) in the case of surfaces. Under suitable hypotheses, he showed that the singularities are isolated conical singularities whose cone angles are multiples of $2\pi$. (See Remark~\ref{rem:con sing} for the definition of conical singularity.) The existence of a harmonic map $u$ as in Theorem~\ref{thm: El harmonic} continues to hold in this case but $u$ will have branch points.
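The one-sided derivatives in this definition are genuinely needed: along a deformation, $\lambda_k(g_t)$ is typically only the pointwise minimum of analytic eigenvalue branches and fails to be differentiable at a crossing. The following toy numerical sketch illustrates the criticality condition; the two branches $\mu_1,\mu_2$ are invented for illustration and this is not an actual eigenvalue computation.

```python
# Toy model of the El Soufi-Ilias criticality condition: along a metric
# deformation, lambda_k(g_t) is modelled as the minimum of two analytic
# branches crossing at t = 0 (branches mu1, mu2 are purely illustrative).
mu1 = lambda t: 1.0 - t        # hypothetical analytic branch
mu2 = lambda t: 1.0 + 2.0 * t  # hypothetical analytic branch

def lam(t):
    # Only piecewise smooth at the crossing t = 0.
    return min(mu1(t), mu2(t))

h = 1e-6
d_right = (lam(h) - lam(0.0)) / h    # one-sided derivative from the right
d_left = (lam(0.0) - lam(-h)) / h    # one-sided derivative from the left
# Criticality: the product of the one-sided derivatives is non-positive,
# i.e. lambda_k(g_t) <= lambda_k(g) + o(t) along this deformation.
assert d_left * d_right <= 0
```

Here the left derivative is $+2$ and the right derivative is $-1$, so $t=0$ is a critical point in the sense above even though $\lambda_k$ is not differentiable there.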
See also the work of Nadirashvili-Sire \cite{NaSi2015} and Petrides \cite{Pe2014}. Theorem~\ref{thm.nada min} gave impetus to the very challenging study of existence of optimising metrics. This study was deeply influenced by stunning developments in the Steklov setting, the focus of this section. The groundbreaking work \cite{FrSc2016} of Fraser and Schoen first established an analogue of Nadirashvili's theorem in the Steklov setting: metrics maximising the perimeter-normalised Steklov eigenvalues among all Riemannian metrics on a given surface give rise to free boundary minimal surfaces in Euclidean balls. The article went on to prove existence of Riemannian metrics maximising the first perimeter-normalised Steklov eigenvalue for \emph{all} orientable surfaces of genus zero and for the M\"obius band. Remarkably, at the time that Fraser and Schoen proved this result, the analogous question of existence of maximisers for the first area-normalised Laplace eigenvalue on closed surfaces had been resolved only for $\mathbb{S}^2$ (Hersch \cite{He1970}), $\R{\mathbf P}^2$ (Li and Yau \cite{LiYa1982}), the 2-torus \cite{Na1996} and the Klein bottle\footnote{In the case of the Klein bottle, Jakobson, Nadirashvili and Polterovich constructed the metric in \cite{JaNaPo2006} after existence was proved in \cite{Na1996}; El Soufi, Giacomini and Jazar then proved uniqueness in \cite{ElGiJa2006}.
Subtle parts in the arguments for the torus and Klein bottle were completed through the work of Girouard \cite{Gi2009} and of Cianci, Karpukhin and Medvedev \cite{CiKaMe2019}.} Influenced by the ideas in \cite{FrSc2016}, the existence of maximising metrics (smooth except for possible conical singularities) for the first normalised Laplace eigenvalue on all closed surfaces was fully realised in 2019 (see Petrides \cite{Pe2014} and Matthiesen and Siffert \cite{MaSi2019, MaSi2021}), and two years later, the existence of smooth maximising metrics for the first normalised Steklov eigenvalue was proved on all compact surfaces with boundary (\cite{Pe2019}, \cite{MaPe2020}). Fraser's expository article \cite{Fr2020} gives an excellent summary of \cite{FrSc2016} and related works, including the ideas behind the proofs and many applications. We also recommend \cite{FrSc2013} (which is a mix of exposition and then-new results) and the expository article \cite{Li2020} by Li. Concerning the problem of maximising normalised Laplace eigenvalues within a conformal class of metrics on closed surfaces, Petrides \cite{Pe2014} proved the existence of $\overline{\lambda}_1$-conformally maximal metrics (again with possible conical singularities) in every conformal class. A very different independent proof was given by Nadirashvili-Sire \cite{NaSi2015}. The question of existence of $\overline{\lambda}_k$-conformal maximisers for $k>1$ was addressed by Nadirashvili and Sire \cite{NaSi2015_2} and Petrides \cite{Pe2018}. Karpukhin, Nadirashvili, Penskoi and Polterovich \cite{KNPP2020} gave new proofs of these results both for $k=1$ and higher $k$ and provided excellent exposition. Fraser and Schoen \cite{FrSc2013} also provided a Steklov analogue of the first item in Theorem~\ref{thm: El harmonic} in the case of metrics that maximise the first normalised Steklov eigenvalue in a conformal class.
Recently, Karpukhin and M\'etras \cite{KaMe2021} obtained Steklov analogues of Theorems~\ref{thm.nada min} and \ref{thm: El harmonic} in all higher dimensions and for more general critical metrics. Petrides addressed the question of existence of $\overline{\sigma}_k$-conformal maximisers for all $k\geq 1$ in \cite{Pe2019}. This section is organised as follows: \begin{itemize} \item Subsection~\ref{fbms}: From maximising metrics to free boundary minimal surfaces. \item Subsection~\ref{max exist}: Existence of maximising metrics for the first normalised eigenvalue. \item Subsection~\ref{subsec.higher?}: Do maximising metrics exist for higher normalised eigenvalues? \item Subsection~\ref{subsec.karp-stern}: From Steklov to Laplace: asymptotics of free boundary minimal surfaces. \item Subsection~\ref{subsec.embedded}: Spectral index of embedded free boundary minimal surfaces. \item Subsection~\ref{applic.bounds}: Applications to Steklov eigenvalue bounds. \item Subsection~\ref{subsec: fbms higher dim}: Higher dimensions. \end{itemize} \subsection{From maximising metrics to free boundary minimal surfaces}\label{fbms} \begin{defn}\label{def.fbms}~ \begin{enumerate} \item Let ${\Omega}$ be a compact Riemannian manifold with boundary and let $\mathbb{B}^n$ be the unit Euclidean ball in $\R^n$ and $\mathbb{S}^{n-1}$ its boundary. Recall that a smooth map $\tau:{\Omega}\to \mathbb{B}^n$ is proper if $\tau({\Omega})\cap \mathbb{S}^{n-1} =\tau(\partial {\Omega})$. \item One says that a properly and isometrically immersed submanifold $\tau:{\Omega}\to \mathbb{B}^n$ is a \emph{free boundary minimal submanifold} of $\mathbb{B}^n$ if it is a critical point of the volume functional under all deformations $\{\tau_t\}$ of $\tau$ subject only to the constraint that $\tau_t(\partial{\Omega})\subset \mathbb{S}^{n-1}$. (The image of $\partial{\Omega}$ may vary freely within $\mathbb{S}^{n-1}$.)
Equivalently, $\tau$ satisfies the following two conditions: \begin{itemize} \item $\tau$ is harmonic on the interior of ${\Omega}$ (i.e., its component functions are harmonic); \item $\tau({\Omega})$ meets $\mathbb{S}^{n-1}$ orthogonally. \end{itemize} The second condition, together with the fact that $\tau$ is assumed to be an isometric immersion, says that for $p\in\partial{\Omega}$ and for $\nu_p\in T_p({\Omega})$ the outward pointing unit normal vector to $\partial{\Omega}$ at $p$, the image $d\tau_p(\nu_p)$ coincides with the outward unit normal to $\mathbb{S}^{n-1}$ in $T_{\tau(p)}(\mathbb{B}^n)$. When the isometrically immersed free boundary minimal submanifold ${\Omega}$ is viewed as a subset of $\mathbb{B}^n$, it is common to say simply that ${\Omega}$ is a free boundary minimal submanifold in $\mathbb{B}^n$, suppressing mention of the inclusion mapping. \item More generally, a proper smooth map $\tau:{\Omega}\to \mathbb{B}^n$ is said to be a \emph{free boundary harmonic map} if it is a critical point for the Dirichlet energy under deformations $\{\tau_t\}$ again subject only to the constraint that $\tau_t(\partial{\Omega})\subset \mathbb{S}^{n-1}$. Equivalently: \begin{itemize} \item $\tau$ is harmonic on the interior of ${\Omega}$; \item $\tau({\Omega})$ meets $\mathbb{S}^{n-1}$ orthogonally. \end{itemize} \end{enumerate} \end{defn} Throughout this section, we adopt the convention that all immersions and branched immersions will be understood to be proper, whether or not this is explicitly stated. Fraser and Schoen first observed a relationship between free boundary minimal surfaces in Euclidean balls and the Steklov problem analogous to the relationship identified by Takahashi between minimal surfaces in spheres and the Laplace eigenvalue problem.
\begin{prop}\label{fbms iff stek}\cite[Lemma 2.2]{FrSc2011} A properly immersed submanifold $\tau: {\Omega} \to \mathbb{B}^n$ with the induced Riemannian metric is a free boundary minimal submanifold if and only if for each $i=1,\dots, n$, the composition $u_i:=x_i\circ\tau$ is a Steklov eigenfunction of ${\Omega}$ with eigenvalue one. (Here $x_1,\dots, x_n$ are the standard coordinate functions of $\R^n$.) \end{prop} \begin{proof} Minimality of the immersion is equivalent to harmonicity of $x_i\circ\tau$, $i=1,\dots, n$. Since the outward unit normal to $\partial \mathbb{B}^n$ is the radial vector field $\mathbf{r}=\sum_{i=1}^n\,x_i\frac{\partial}{\partial x_i}$, the second condition in Definition~\ref{def.fbms} is equivalent to $$\partial_{\nu} (x_i\circ\tau) =(\partial_{\,\mathbf{r}} x_i)\circ\tau$$ i.e., $$\partial_{\nu} (x_i\circ\tau) = x_i\circ\tau \mbox{\,\,on\,\,}\partial{\Omega}.$$ \end{proof} We emphasise that the proposition places no assumptions on the dimension of ${\Omega}$. We now focus on the case that $({\Omega},g)$ is a surface. In this case, the property of harmonicity of a map $\tau: ({\Omega},g)\to \mathbb{B}^n$ depends only on the conformal class of $g$ due to conformal invariance of the Laplacian. A proper (possibly branched) conformal immersion $\tau:{\Omega} \to \mathbb{B}^n$ that is also harmonic on the interior of ${\Omega}$ is referred to as a (branched) conformal minimal immersion. If, moreover, the image $\tau({\Omega})$ meets $\mathbb{S}^{n-1}$ orthogonally, then $\tau({\Omega})$, with the metric induced by the Euclidean metric, will be a free boundary minimal surface in $\mathbb{B}^n$ with (branched) conformal cover $\tau:{\Omega} \to \tau({\Omega})$. \begin{nota}\label{nota.sig} Given a smooth (not necessarily orientable) compact surface ${\Omega}$ with boundary, let $$\sigma_k^*({\Omega})=\sup\{\sigma_k(g)|\partial {\Omega}|_g\}$$ where the supremum is over all smooth Riemannian metrics $g$ on ${\Omega}$.
Letting ${\Omega}_{\gamma,b}$ be the orientable surface of genus $\gamma$ with $b$ boundary components, we will write $$\sigma_k^*(\gamma,b):=\sigma_k^*({\Omega}_{\gamma,b}).$$ \end{nota} \begin{ex}\label{disk fbms} Let ${\Omega}$ be homeomorphic to a disk. By Weinstock's Theorem~\ref{thm:Weinstock}, $\sigma_1^*({\Omega})=2\pi$ and the maximum is achieved by the Euclidean metric $g_0$. Moreover, up to $\sigma$-isometry and rescaling, $g_0$ is the unique maximiser. (See Definition~\ref{sigmaisom} for the notion of $\sigma$-isometry.) Scaling the metric so that the disk has radius one, we have $\sigma_1({\Omega},g_0)=1$, and the coordinate functions $x_1$ and $x_2$ form an orthonormal basis of the $\sigma_1$-eigenspace. The map $(x_1,x_2)\to (x_1,x_2,0)$ isometrically embeds ${\Omega}$ into $\mathbb{B}^3$ as the equatorial disk, a free boundary minimal surface. Fraser and Schoen \cite[Theorem 2.1]{FrSc2015} proved moreover that if ${\Omega}$ is a topological disk and $u:{\Omega}\to \mathbb{B}^n$ is any proper branched conformal minimal immersion whose image meets $\mathbb{S}^{n-1}$ orthogonally, then the image $u({\Omega})$ is an equatorial plane disk. Note: Any free boundary minimal surface in $\mathbb{B}^n$ can also be viewed as a free boundary minimal surface in $\mathbb{B}^{n+1}$ by including $\mathbb{B}^n$ into $\mathbb{B}^{n+1}$ as the equatorial $n$-ball. When speaking of uniqueness, one usually means unique modulo such trivial inclusions. Thus the statement above is often expressed by saying that the equatorial plane disk in $\mathbb{B}^3$ is the unique free boundary minimal surface in any $\mathbb{B}^n$ that is the image of a proper branched conformal minimal immersion of a disk.
Suppose $g_0$ is a Riemannian metric on ${\Omega}$ satisfying $$\sigma_1(g_0)|\partial {\Omega}|_{g_0}=\sigma_1^*({\Omega}).$$ Then: \begin{enumerate} \item \cite[Proposition 5.2]{FrSc2016} The multiplicity $\operatorname{mult}\sigma_1(g_0)$ is at least three. After rescaling $g_0$ so that $\sigma_1(g_0)=1$, there exist linearly independent eigenfunctions $u_1,\dots, u_n$ for $\sigma_1(g_0)$ such that $u:=(u_1,\dots, u_n)$ is a proper branched conformal immersion of ${\Omega}$ into the unit ball $\mathbb{B}^n$ that restricts to an isometric immersion from $\partial\om$ into the unit sphere $\mathbb{S}^{n-1}$. The image $u({\Omega})$ is a free boundary minimal surface. \item \cite[Proposition 8.1]{FrSc2016} If, moreover, ${\Omega}$ is orientable and has genus zero, then $n=3=\operatorname{mult}\sigma_1(g_0)$ and $u: {\Omega} \to \mathbb{B}^3$ is an embedding. \end{enumerate} \end{thm} A priori, in the case of higher genus, $n$ may be strictly smaller than $\operatorname{mult}\sigma_1(g_0)$. \begin{remark}\label{rem: fs explanation} The fact that one obtains a branched \emph{conformal} immersion here as opposed to a branched \emph{isometric} immersion as in Theorem~\ref{thm.nada min} is due simply to the invariance of Steklov eigenvalues on surfaces under conformal changes of metric away from the boundary. The pullback by the branched immersion $u$ of the metric on $u({\Omega})$ induced by the Euclidean metric is another maximising metric $g$ for $\sigma_1^*({\Omega})$ that is $\sigma$-isometric to $g_0$. If $u$ has branch points, then $g$ will have conical singularities and the conformal factor will be singular at the cone points. As noted in Remark~\ref{rem:con sing}, these isolated singularities do not affect the conclusion that $\stek({\Omega},g)=\stek({\Omega},g_0)$. \end{remark} For an outline and key ideas of the interesting proof of the first statement, see Subsection 1.4.2 of Fraser's expository article \cite{Fr2020} referenced above.
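Before continuing, we note that Proposition~\ref{fbms iff stek} can be verified directly in the simplest case, the equatorial disk of Example~\ref{disk fbms}. The following plain-Python sketch samples the boundary circle and checks that the coordinate function $u(x,y)=x$ satisfies $\partial_\nu u = u$ there, i.e.\ that it is a Steklov eigenfunction with eigenvalue one.

```python
import math

# On the unit disk, u(x, y) = x is harmonic (it is linear), and its
# gradient is the constant vector (1, 0).  At a boundary point
# (cos t, sin t) the outward unit normal is (cos t, sin t), so the
# normal derivative du/dn = cos t agrees with the value of u there:
# u is a Steklov eigenfunction with eigenvalue 1.
for j in range(360):
    t = 2 * math.pi * j / 360
    px, py = math.cos(t), math.sin(t)
    du_dn = 1.0 * px + 0.0 * py        # grad u . outward normal
    assert abs(du_dn - px) < 1e-12     # du/dn = u on the boundary

# Weinstock normalisation for the disk: sigma_1 = 1 and |boundary| = 2*pi,
# so sigma_1 * |boundary| = 2*pi, which is sigma_1^*(0,1).
sigma1_times_length = 1.0 * 2 * math.pi
```

The same identity, applied to each coordinate function of a free boundary minimal immersion, is exactly the content of the proof of Proposition~\ref{fbms iff stek}.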
The second statement is a special case of \cite[Proposition 8.1]{FrSc2016}; we give the full statement of that proposition later in this subsection as Proposition~\ref{first implies embed}. Fraser and Schoen generalised the first statement of Theorem~\ref{thm.fs} to higher eigenvalues on all surfaces: \begin{thm}\label{thm.fs2}\cite[Proposition 5.2]{FrSc2016} Let ${\Omega}$ be a compact smooth surface with boundary. Suppose $g_0$ is a Riemannian metric on ${\Omega}$ satisfying $$\sigma_k(g_0)|\partial {\Omega}|_{g_0}=\sigma_k^*({\Omega}).$$ Then the multiplicity of $\sigma_k(g_0)$ is at least three. After rescaling $g_0$ so that $\sigma_k(g_0)=1$, there exist linearly independent eigenfunctions $u_1,\dots, u_n$ for $\sigma_k(g_0)$ such that $u:=(u_1,\dots, u_n)$ is a proper branched conformal minimal immersion of ${\Omega}$ into the unit ball $\mathbb{B}^n$ that restricts to an isometric immersion from $\partial\om$ into $\mathbb{S}^{n-1}$. The image $u({\Omega})$ is a free boundary minimal surface. \end{thm} We now comment on the size of $n$ in Theorems~\ref{thm.fs} and \ref{thm.fs2}. Necessarily $n\leq \operatorname{mult}(\sigma_k)$ where $\sigma_k$ is the eigenvalue under consideration. Karpukhin, Kokarev, and Polterovich \cite[Theorem 1.1]{KaKoPo2014} (see also Jammes \cite[Theorem 1.5]{Ja2014} and Fraser-Schoen \cite[Theorem 2.3]{FrSc2016}) obtained multiplicity bounds on the $k$th Steklov eigenvalue of any compact Riemannian surface of genus $\gamma$ with $b$ boundary components: \begin{equation}\label{eq.mult bound} \begin{cases} \operatorname{mult}(\sigma_k)\leq \min(4\gamma +2k +1, 4\gamma +2b+k)\hspace{1cm} {\Omega} \mbox{\,orientable,}\\ \operatorname{mult}(\sigma_k)\leq \min(2\gamma +2k +1, 2\gamma +2b+k)\hspace{1cm} {\Omega} \mbox{\,non-orientable.} \end{cases} \end{equation} In particular, for every Riemannian surface $({\Omega},g)$ of genus zero, one has $\operatorname{mult}(\sigma_1({\Omega},g))\leq 3$.
It was not previously known whether this maximum is attained (except in the case of the disk, where the answer is no). In the proof of Theorem~\ref{thm.fs}, the authors showed directly that the multiplicity has to be at least three for any metric realising $\sigma_1^*({\Omega})$. This fact, together with the existence result Theorem~\ref{fs max exist} below, proves that the multiplicity bound of three is indeed attained for every surface of genus zero with more than one boundary component. The following relationship between the area and the boundary length for 2-dimensional free boundary minimal surfaces $S$ in $\mathbb{B}^n$, proven in \cite[Theorem 5.4]{FrSc2011}, will be used several times later in this section: \begin{equation}\label{2A=L}|S|=\frac{1}{2} |\partial S|\geq \pi.\end{equation} Brendle \cite[Theorem 4]{Br2012} showed that the lower bound $\pi$ is attained only by a flat equatorial disk. Now suppose that ${\Omega}$ is a compact surface with boundary, $k\in\mathbf Z^+$, and $g_0$ is a Riemannian metric on ${\Omega}$ that realises $\sigma_k^*({\Omega})$, normalised so that $\sigma_k({\Omega},g_0)=1$. Theorem~\ref{thm.fs} or \ref{thm.fs2} yields a free boundary minimal surface $S:=u({\Omega})$ in $\mathbb{B}^n$ and a branched conformal cover $u: ({\Omega},g_0)\to S$ that restricts to a Riemannian cover $(\partial\om, g_0)\to \partial S$. Let $m$ be the order of this cover.
Since $\sigma_k^*({\Omega})=\sigma_k({\Omega}, g_0)|\partial\om|_{g_0}=|\partial\om|_{g_0}$, Equation~\eqref{2A=L} yields: \begin{prop}\label{prop.2A=mL} Under the hypotheses in the previous paragraph, we have $$|S|=\frac{1}{2} |\partial S|=\frac{1}{2m}\sigma_k^*({\Omega}).$$ In particular, if the branched immersion $u: {\Omega} \to \mathbb{B}^n$ is an embedding, i.e., if $S$ is homeomorphic to ${\Omega}$, then $$|S|=\frac{1}{2} |\partial S|=\frac{1}{2}\sigma_k^*({\Omega}).$$ \end{prop} We emphasise, however, that the existence of maximising metrics for $\sigma_k^*({\Omega})$ remains open for $k\geq 2$. We end this subsection by turning to conformally maximal metrics and stating Fraser and Schoen's partial analogue of Theorem~\ref{thm: El harmonic}. \begin{thm}\cite[Proposition 2.8]{FrSc2013}\label{thm: FrSc conf class} Let $({\Omega},g_0)$ be a compact Riemannian surface with boundary and suppose that $$\sigma_k({\Omega},g_0)|\partial{\Omega}|_{g_0}=\sup_{g\in [g_0]}\,\sigma_k({\Omega},g)|\partial{\Omega}|_g$$ where $[g_0]$ is the conformal class of $g_0$. Then there exist linearly independent $\sigma_k({\Omega},g_0)$-eigenfunctions $u_1,\dots, u_m$ such that $\sum_{j=1}^mu_j^2= 1$ on $\partial {\Omega}$. Thus $u:=(u_1,\dots, u_m):({\Omega},g_0)\to \mathbb{B}^m$ is a free boundary harmonic map. \end{thm} \begin{remark}\label{rem: eigenfunc to fb harm} It is immediate from Definition~\ref{def.fbms} that any proper map $u: ({\Omega},g_0) \to \mathbb{B}^m$ by $\sigma_k$-eigenfunctions is a free boundary harmonic map. However, given a Riemannian surface $({\Omega},g_0)$ and a free boundary harmonic map $\tau: ({\Omega},g_0)\to \mathbb{B}^m$, one \emph{cannot} conclude that the component functions are Steklov eigenfunctions since the map $p\mapsto |d\tau_p(\nu_p)|$ need not be constant on $\partial{\Omega}$. The question of a converse to Theorem~\ref{thm: FrSc conf class}, as well as analogues in higher dimensions, will be addressed in Subsection~\ref{subsec: fbms higher dim}.
\end{remark} \subsection{Existence of maximising metrics for the first eigenvalue}\label{max exist} The first general result concerning the existence of metrics realising $\sigma_1^*({\Omega})$ for a large class of surfaces ${\Omega}$ was proved by Fraser and Schoen: \begin{thm}\label{fs max exist}\cite[Theorem 1.1]{FrSc2016}. Let ${\Omega}$ be either a M\"obius band or an orientable surface of genus zero with smooth boundary. Then there exists a smooth Riemannian metric $g$ on ${\Omega}$ such that $$\sigma_1(g)|\partial{\Omega}|_g=\sigma_1^*({\Omega}).$$ (See Notation~\ref{nota.sig}.) \end{thm} Only the case of the disk was known previously. In the case of the annulus and the M\"obius strip, they were able to find the maximisers explicitly. (We will discuss these maximisers later in Example~\ref{ex.cat}.) See \cite[Theorem 1.4.4]{Fr2020} for an outline of the innovative proof of Theorem~\ref{fs max exist}. We make only brief comments here. Let $\{g_j\}$ be a sequence of Riemannian metrics on ${\Omega}$ such that $\sigma_1(g_j)|\partial{\Omega}|_{g_j}\to \sigma_1^*({\Omega})$. One must show (i) that the conformal classes of these metrics stay within a compact subset of the moduli space and (ii) that the boundary measures do not degenerate. The proof of (ii) involves a careful choice of the maximising sequence $\{g_j\}$. For (i), the following ``spectral gap'' result (and induction on the number of boundary components) plays a key role: \begin{prop}\label{fs gap}\cite[Proposition 4.3]{FrSc2016}. Let $b\in \{2, 3,\dots\}$. In the notation of Notation~\ref{nota.sig}, if there exists a smooth Riemannian metric realising $\sigma_1^*(0, b-1)$, then $$\sigma_1^*(0, b) > \sigma_1^*(0, b-1).$$ \end{prop} \smallskip (While Fraser and Schoen communicated a small gap in the proof of Proposition~\ref{fs gap} (see \cite[Appendix]{GiLa2021}), the result is valid and is generalised in Inequality~\eqref{eq.gap} below.)
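The strict inequality in Proposition~\ref{fs gap} for $b=2$ can be made plausible by a direct computation, which we sketch under the following assumptions: we only test the flat annuli $\{\varepsilon\le r\le 1\}\subset\R^2$, for which separation of variables reduces the Steklov problem to an explicit $2\times 2$ generalised eigenvalue problem per frequency $k$ (truncated here at a hypothetical cutoff \texttt{kmax}). One finds flat annuli with $\sigma_1\cdot|\partial{\Omega}|>2\pi=\sigma_1^*(0,1)$, consistent with $\sigma_1^*(0,2)>\sigma_1^*(0,1)$; this says nothing about the actual maximiser for the annulus, which is not flat (it is the critical catenoid metric of Example~\ref{ex.cat}).

```python
import math
import numpy as np

def flat_annulus_sigma1(eps, kmax=10):
    """Smallest positive Steklov eigenvalue of the flat annulus
    {eps <= r <= 1} in the plane, by separation of variables."""
    # k = 0: u = a + b*log r contributes one positive eigenvalue.
    candidates = [(1.0 / eps + 1.0) / (-math.log(eps))]
    for k in range(1, kmax + 1):
        # u = (a r^k + b r^-k) cos(k theta); imposing du/dn = s*u on both
        # circles (outward normals +d/dr at r=1, -d/dr at r=eps) yields
        # the 2x2 generalised eigenvalue problem M v = s N v.
        M = np.array([[float(k), -float(k)],
                      [-k * eps ** (k - 1), k * eps ** (-k - 1)]])
        N = np.array([[1.0, 1.0],
                      [eps ** k, eps ** (-k)]])
        mixed = (M[0, 0] * N[1, 1] + M[1, 1] * N[0, 0]
                 - M[0, 1] * N[1, 0] - M[1, 0] * N[0, 1])
        # det(M - s N) = det(N) s^2 - mixed * s + det(M)
        roots = np.roots([np.linalg.det(N), -mixed, np.linalg.det(M)])
        candidates += [r.real for r in roots
                       if abs(r.imag) < 1e-9 and r.real > 1e-12]
    return min(candidates)

# Perimeter-normalised first eigenvalue sigma_1 * |boundary| for a few
# inner radii; the perimeter of the flat annulus is 2*pi*(1 + eps).
vals = {eps: flat_annulus_sigma1(eps) * 2 * math.pi * (1 + eps)
        for eps in (0.1, 0.15, 0.3, 0.5)}
best = max(vals.values())
assert best > 2 * math.pi  # some flat annuli beat the disk value 2*pi
```

For instance, $\varepsilon=0.15$ gives $\sigma_1\cdot|\partial{\Omega}|\approx 6.81 > 2\pi$, while $\varepsilon=0.5$ gives roughly $4.13$; the normalised eigenvalue is not monotone in $\varepsilon$ and tends to $2\pi$ as $\varepsilon\to 0$.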
Theorems~\ref{thm.fs} and \ref{fs max exist} together imply: \begin{cor}\label{cor: fbms genus 0} Let ${\Omega}$ be an orientable surface of genus zero with smooth boundary. Then there exists an embedding of ${\Omega}$ in $\mathbb{B}^3$ as a free boundary minimal surface such that the metric $g_0$ on ${\Omega}$ induced by the Euclidean metric satisfies $$\sigma_1(g_0)|\partial{\Omega}|_{g_0}=\sigma_1^*({\Omega}).$$ \end{cor} In the notation of Notation~\ref{nota.sig}, Corollary~\ref{cor: fbms genus 0} yields for each $b\in \mathbf Z^+$ a (not necessarily unique) embedded free boundary minimal surface in $\mathbb{B}^3$ homeomorphic to ${\Omega}_{0,b}$. It was originally asserted in the proof of Theorem~\ref{fs max exist} that a sequence of these surfaces converges to a double disk as $b\to\infty$. Following communication from Fraser and Schoen regarding a subtle error in this assertion, Girouard and Lagac\'e \cite[Appendix]{GiLa2021} conjectured that these surfaces converge in the varifold sense to the boundary sphere $\mathbb{S}^2$. This conjecture was later affirmed along with a remarkable generalisation addressed in Subsection~\ref{subsec.karp-stern}. An early version of the article \cite{FrSc2016} containing Theorem~\ref{fs max exist} was first posted on the arXiv in 2012. The innovative ideas introduced there quickly ignited major developments on the existence of maximising metrics for the normalised first eigenvalue of surfaces, first in the Laplace setting, where the existence results caught up with and briefly overtook the Steklov setting, followed by corresponding results in the Steklov setting. We first address the Laplace setting.
Analogous to Notation~\ref{nota.sig}, for a closed surface $M$ we write \begin{equation}\label{nota.lam}\lambda_k^*(M)=\sup\{\lambda_k(g)|M|_g\}\text{ and } \lambda_k^*(\gamma)=\lambda_k^*(M_{\gamma})\end{equation} where the supremum is over all Riemannian metrics $g$ on $M$ and where $M_\gamma$ is the orientable closed surface of genus $\gamma$. The following theorem was proven through a sequence of three articles: \begin{thm}\cite{Pe2014, MaSi2019, MaSi2021}\label{thm: lap max exists} Let $M$ be a connected smooth closed surface. Then there exists a Riemannian metric $g_0$ on $M$, smooth except for possibly a finite number of cone points, such that $$\lambda_1(M,g_0)|M|_{g_0}=\lambda_1^*(M).$$ \end{thm} The sequence of three papers proceeded as follows: \begin{itemize} \item Petrides \cite[Theorem 2]{Pe2014} proved the theorem for orientable surfaces under the additional hypothesis that the following gap condition is satisfied when $\gamma>1$: \begin{equation}\label{eq.lagap}\lambda_1^*(\gamma)>\lambda_1^*(\gamma -1). \end{equation} \noindent (Colbois and El Soufi \cite{CoEl2003} earlier showed that the non-strict version of Inequality~\eqref{eq.lagap} always holds.) \item In \cite{MaSi2021} Matthiesen and Siffert extended Petrides' result to the non-orientable setting. In this case, the single gap condition~\eqref{eq.lagap} is replaced by two gap conditions. \item In \cite{MaSi2019} Matthiesen and Siffert addressed the gap condition(s) by proving the following: Let $g$ be a Riemannian metric on a surface $M$, smooth except for perhaps finitely many conical singularities. Suppose that $M'$ is obtained from $M$ by attaching a cylinder or cross-cap. 
Then there is a smooth Riemannian metric $g'$ on $M'$ such that \begin{equation}\label{Mat Sif ineq}\lambda_1(M',g')|M'|_{g'}> \lambda_1(M,g)|M|_g.\end{equation} Thus, for example in the orientable case, if one knows a maximising metric $g_0$ exists for $\lambda_1^*(M_{\gamma-1})$, then Inequality~\eqref{Mat Sif ineq} yields the gap inequality \eqref{eq.lagap} for $\gamma$, and one can proceed by induction to prove the theorem. (For the initial case of genus zero, one knows by Hersch's Theorem that the round metric attains $\lambda_1^*(\mathbb{S}^2)$.) \end{itemize} The actual maximiser is known in only a handful of cases: the four surfaces mentioned earlier ($\mathbb{S}^2$, $T^2$, $\R {\mathbf P}^2$ and the Klein bottle) and the orientable surface of genus two (a singular Bolza metric maximises, as conjectured in \cite{JLNNP2005} and confirmed by Nayatani and Shoda \cite{NaSh2019}). A general existence theorem for maximisers of the first normalised Steklov eigenvalue on arbitrary connected compact surfaces with boundary followed soon afterwards, following a similar sequence of steps. \begin{thm}\label{mp max exists}\cite[Theorem 1]{Pe2019}, \cite[Theorem 1.2]{MaPe2020} Let ${\Omega}$ be a compact surface with smooth boundary. Then there exists a smooth Riemannian metric $g$ on ${\Omega}$ such that $$\sigma_1(g)|\partial{\Omega}|_g=\sigma_1^*({\Omega}).$$ \end{thm} In the first \cite{Pe2019} of the two papers resulting in Theorem~\ref{mp max exists}, Petrides proved the theorem for orientable surfaces under the additional assumption that the following gap condition holds: \begin{equation}\label{eq.gap} \sigma_1^*(\gamma,b)> \max(\sigma_1^*(\gamma-1, b+1), \sigma_1^*(\gamma, b-1)). \end{equation} (On the right hand side, if either of the two quantities is of the form $\sigma_1^*(\gamma', b')$ with $\gamma'<0$ or $b' <1$, then we replace $\sigma_1^*(\gamma', b')$ by zero.) The non-strict version of Inequality~\eqref{eq.gap} always holds.
In the second article \cite{MaPe2020}, Matthiesen and Petrides completed the proof of Theorem~\ref{mp max exists} in the orientable case by proving that the gap conditions hold. In particular, they showed the following: Let $({\Omega},g)$ be a compact surface with smooth boundary, and let ${\Omega}'$ be a surface topologically obtained from ${\Omega}$ by gluing two opposite sides of the boundary of a strip $S$ to two disjoint portions of $\partial {\Omega}$. Then there exists a smooth Riemannian metric $g'$ on ${\Omega}'$ such that $$\sigma_1({\Omega}',g')|\partial{\Omega}'|_{g'}> \sigma_1({\Omega},g)|\partial{\Omega}|_g.$$ They then extended the proof to the non-orientable case. \begin{remark}\label{ks max exists} For each $\gamma\geq 0$, Karpukhin and Stern \cite[Theorem 1.8]{KaSt2020} gave an independent and elegant proof that the gap condition~\eqref{eq.gap} holds for a large class of surfaces. More precisely, they showed: For each $\gamma\geq 0$, there are infinitely many $b$ such that the gap condition~\eqref{eq.gap} holds for $(\gamma, b)$. To prove this assertion, they first proved $$\sigma_1^*(\gamma, b) < \lambda_1^*(\gamma)$$ for all $b$. (See Inequality~\eqref{eq:karp stern} in Section~\ref{section:surfaces}.) They then showed that failure of their assertion would contradict the Laplace gap inequality~\eqref{eq.lagap}. \end{remark} \begin{remark} Let $g_{\gamma, b}$ be a Riemannian metric on ${\Omega}_{\gamma, b}$ attaining $\sigma_1^*(\gamma,b)$. Theorems~\ref{thm.fs} and \ref{mp max exists} together yield a free boundary minimal surface $S$ in a ball $\mathbb{B}^n$ and a conformal branched covering $u:({\Omega}_{\gamma,b},g_{\gamma, b}) \to S$ that restricts to a regular Riemannian covering $\partial\om\to \partial S$. In case $\gamma=0$, these branched coverings are homeomorphisms (see Corollary~\ref{cor: fbms genus 0}) and thus the free boundary minimal surfaces obtained by varying $b$ are all distinct.
In contrast, for $\gamma\geq 1$, the free boundary minimal surfaces arising from $({\Omega}_{\gamma,b},g_{\gamma, b})$ for different choices of $b$ could coincide. However, Matthiesen and Petrides \cite[Theorem 1.1]{MaPe2020} showed that, for fixed $\gamma$, there can be only finitely many choices of $b$ for which $({\Omega}_{\gamma,b}, g_{\gamma, b})$ is a branched covering of a given minimal surface $S$. The proof is an elementary consequence of Equation~\eqref{2A=L} along with the existence of upper bounds for $\sigma_1^*(\gamma,b)$ that are independent of $b$ as in Remark~\ref{rem:has cgr}. They also obtained a somewhat weaker statement when $b$ is fixed and $\gamma$ is allowed to vary. \end{remark} \subsection{Do maximising metrics exist for higher normalised eigenvalues?}\label{subsec.higher?} For $k\geq 2$, we are not aware of examples of surfaces with boundary, respectively closed surfaces, for which the existence of maximisers for the $k$th normalised Steklov, respectively Laplace, eigenvalue has been established. In contrast to the case $k=1$, the following theorem proves non-existence of maximising metrics for the higher eigenvalues on the disk ${\Omega}_{0,1}$. \begin{thm}\label{no max}\cite[Theorem 2.3]{FrSc2020} For $k\geq 2$, the value $\sigma_k^*(0,1)$ is not attained by any smooth Riemannian metric. \end{thm} The theorem is an immediate consequence of Theorem~\ref{thm.fs2} and the second item of Example~\ref{disk fbms}. The case $k=2$ was proven earlier by Girouard and Polterovich \cite{GiPo2010}. \begin{remark}\label{rem: bubbles} Recall that the Hersch-Payne-Schiffer inequality states that $\sigma_k^*(0,1)\leq 2\pi k$. In the article just cited, Girouard and Polterovich \cite{GiPo2010} constructed for each $k\geq 2$ a family of simply-connected plane domains ${\Omega}_{\epsilon}$ such that $\sigma_k({\Omega}_{\epsilon})|\partial\om_{\epsilon}|\to 2\pi k$ as $\epsilon\to 0$.
The domains ${\Omega}_{\epsilon}$ converge as $\epsilon\to 0$ to a union of $k$ touching disks. As conjectured by Nadirashvili \cite{Na2002} in 2002 and recently proved by Karpukhin, Nadirashvili, Penskoi and Polterovich \cite{KNPP2021}, analogous behaviour occurs in the case of the Laplace eigenvalues of metrics on the 2-sphere: $\lambda_k^*(\mathbb{S}^2)=8k\pi$ and there exists a maximising sequence of metrics converging to the union of $k$ touching spheres each with the standard round metric. Moreover, for $k\geq 2$, the value $8k\pi$ is not achieved within the class of all metrics on $\mathbb{S}^2$ that are smooth except possibly for finitely many conical singularities. \end{remark} \begin{ques}\label{ques:any higher maximisers} Find examples of compact surfaces with boundary and integers $k>1$ for which a $\sigma_k$-maximising metric exists. \end{ques} For arbitrary orientable compact surfaces with smooth boundary and for each positive integer $k$, Petrides introduced an expression that we will refer to as $\gap_k({\Omega})$ (defined below). The role of the gap condition is to preclude the type of ``bubbling'' phenomenon just described in the case of the disk. Generalising the result described in the previous subsection, Petrides proved: \begin{thm}\label{pe gap thm}\cite[Theorem 1]{Pe2019} Let ${\Omega}$ be an orientable surface with smooth boundary. If $\gap_k({\Omega}) >0$, then there exists a smooth Riemannian metric on ${\Omega}$ realising $\sigma_k^*({\Omega})$. \end{thm} \begin{remark}\label{rem:pe2018} In \cite{Pe2018}, Petrides proved an analogous result in the Laplace setting, but we will focus here only on the Steklov case. \end{remark} The definition of $\gap_k({\Omega})$ appears in the right-hand side of inequality (0.2) in \cite{Pe2019}. Inequality~\eqref{eq.gapk} below corrects a typo in that expression. We wish to thank Petrides both for providing the corrected expression and for the following more intuitive definition of $\gap_k$.
\begin{defn}\label{def.gap} Let $\mathcal{S}$ be the collection of all surfaces $\hat{{\Omega}}$ that can be obtained from ${\Omega}$ by cutting along a non-empty finite collection of embedded curves $\tau:[0,1]\to {\Omega}$ with $\tau(]0,1[)\subset \operatorname{int}({\Omega})$ and $\tau(0),\tau(1)\in \partial{\Omega}$. We identify two surfaces if they are homeomorphic. Set $$\gap_k({\Omega})=\sigma_k^*({\Omega})-\max_{\hat{{\Omega}}\in \mathcal{S}} \sigma_k^*(\hat{{\Omega}}).$$ \end{defn} \begin{remark}\label{rem.gap} For ${\Omega}_{\gamma,b}$ as in Notation~\ref{nota.sig}, a cut along a single curve $\tau$ as above in ${\Omega}_{\gamma,b}$ will result in one of the following: \begin{enumerate} \item A surface with two components ${\Omega}_{\gamma_1,b_1}$ and ${\Omega}_{\gamma_2,b_2}$ satisfying $\gamma_1 +\gamma_2\leq \gamma$ and $b_1+b_2=b+1$. The curve $\tau$ appears as a boundary component of both. \item The connected surface ${\Omega}_{\gamma-1,b+1}$. \item The connected surface ${\Omega}_{\gamma, b-1}$. \end{enumerate} Cuts that yield surfaces of either of the first two types arise from curves $\tau$ both of whose endpoints lie on a single boundary component; for the third type, the endpoints of $\tau$ must lie on different boundary components. \end{remark} When $k=1$, the condition ``$\gap_1({\Omega}) >0$'' is equivalent to Inequality~\eqref{eq.gap}. The topological effect of cutting ${\Omega}={\Omega}_{\gamma,b}$ along a curve $\tau$ as in Definition~\ref{def.gap} to obtain a surface $\hat{{\Omega}}$ can be reversed by gluing two opposite sides of the boundary of a strip $S$ to two disjoint portions of $\partial \hat{{\Omega}}$, as was carried out by Matthiesen and Petrides in their proof of Theorem~\ref{mp max exists} when $k=1$.
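Here is a quick sanity check of the $k=1$ equivalence just stated (a sketch, using the labels of Remark~\ref{rem.gap}): if a cut disconnects the surface, the resulting $\hat{{\Omega}}$ satisfies $\sigma_1(\hat{{\Omega}},g)=0$ for every metric $g$, since the locally constant functions form a $0$-eigenspace of dimension at least two. Hence only the connected cuts contribute to the maximum in Definition~\ref{def.gap}:

```latex
% Disconnected cuts contribute zero, so only the connected types
% (2) and (3) of Remark rem.gap survive in the maximum:
\gap_1({\Omega}_{\gamma,b})
  = \sigma_1^*(\gamma,b)
    - \max\bigl(\sigma_1^*(\gamma-1,\,b+1),\ \sigma_1^*(\gamma,\,b-1)\bigr),
```

which is positive exactly when Inequality~\eqref{eq.gap} holds.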
\smallskip Beginning with a surface ${\Omega}={\Omega}_{\gamma, b}$, one sees from Remark~\ref{rem.gap} that the set $\mathcal{S}$ in Definition~\ref{def.gap} consists of all surfaces distinct from ${\Omega}_{\gamma,b}$ itself of the form ${\Omega}_{\gamma_1,b_1}\sqcup\dots \sqcup {\Omega}_{\gamma_s,b_s}$, where $s\in \mathbf Z^+$, $\gamma_1+\dots +\gamma_s\leq \gamma$ and $\gamma_1+\dots +\gamma_s+b_1+\dots +b_s\leq \gamma+b+s-1$. Inducting on the number of components and taking into account the effect of the length normalisation in computing $\sigma_k^*(\hat{{\Omega}})$ when $\hat{{\Omega}}$ is not connected, one can express the condition $\gap_k({\Omega}_{\gamma,b})>0$ as follows: \begin{equation}\label{eq.gapk} \sigma_k^*(\gamma,b) >\max_{\substack{i_1+\dots +i_s=k\\i_j\geq 1\,\,\,\forall j\\\gamma_1+\dots +\gamma_s\leq \gamma\\\gamma_1+\dots +\gamma_s+b_1+\dots +b_s\leq \gamma+b+s-1\\(\gamma_1,b_1)\neq (\gamma, b)\,\, \text{if}\,\,\,s=1 }}\, \sum_{q=1}^s\,\sigma_{i_q}^*(\gamma_q,b_q). \end{equation} By Theorem~\ref{pe gap thm}, one can answer Question~\ref{ques:any higher maximisers} by addressing the following: \begin{ques}\label{ques:gapk} For given $k$, which compact surfaces ${\Omega}$ satisfy $\gap_k({\Omega})>0$? \end{ques} \subsection{Existence results for conformally maximising metrics}\label{subsec:conf max} We begin with the case of the Laplacian. Given a closed surface $M$ and a conformal class $[g]$ of metrics on $M$, let \begin{equation}\label{eq;lak[g]} \lambda_k^*(M,[g])=\sup_{h\in [g]}\,\lambda_k(M,h)|M|_h. \end{equation} Nadirashvili and Sire first put forth an argument in \cite{NaSi2015} to prove existence of a metric realising $\lambda_1^*(M,[g])$ under the condition that $\lambda_1^*(M,[g])>8\pi$. In \cite{NaSi2015_2}, they similarly put forward an argument to realise $\lambda_k^*(M,[g])$ for $k\geq 2$ under the assumption of a gap condition.
Influenced by the results of Fraser and Schoen, Petrides \cite{Pe2014} gave a complete and very different existence proof for a maximiser of $\lambda_1^*(M,[g])$ without a gap hypothesis. His proof of existence of a conformal maximiser was the first step in his proof (modulo a gap condition) of Theorem~\ref{thm: lap max exists} discussed above. Similarly, as a step in the result cited in Remark~\ref{rem:pe2018}, he proved existence modulo a gap condition of metrics realising $\lambda_k^*(M,[g])$ for every $k\geq 2$. Karpukhin, Nadirashvili, Penskoi and Polterovich's resolution \cite{KNPP2020} of Nadirashvili's conjecture (see Remark~\ref{rem: bubbles} above) enabled them to give a simpler expression for the gap condition. They also completed the arguments put forward by Nadirashvili and Sire (using results of Grigor'yan, Nadirashvili and Sire \cite{GrNaSi2016}). We state here the version appearing in \cite{KNPP2020}. \begin{thm}\cite{NaSi2015, NaSi2015_2,Pe2014, Pe2018, KNPP2020}\label{thm:ex lap conf max} Let $M$ be a closed surface and let $[g]$ be any conformal class of metrics on $M$. Then: \begin{enumerate} \item For every $k\geq 1$, either there exists a metric $h\in[g]$ that is smooth except possibly for finitely many singularities such that \begin{equation}\label{eq:conf max 1}\lambda_k(M,h)|M|_h=\lambda_k^*(M,[g])>\lambda_{k-1}^*(M,[g])+8\pi\end{equation} or else \begin{equation}\label{eq:conf max 2}\lambda_k^*(M,[g])=\lambda_{k-1}^*(M,[g])+8\pi.\end{equation} \item For $k=1$, the first case always holds; i.e., there exists $h$ as above satisfying Equation~\eqref{eq:conf max 1}. \end{enumerate} \end{thm} The second statement follows from the first together with Petrides' result \cite[Theorem 1]{Pe2014} that \begin{equation}\label{eq:>8pi} \lambda_1^*(M,[g])>8\pi \end{equation} except when $M$ is diffeomorphic to a sphere.
The article \cite{KNPP2020} gives an informal interpretation of the first item in the theorem as follows: Either a metric realising $\lambda_k^*(M,[g])$ exists or else for some $j$ with $1\leq j\leq k$, a maximising sequence of metrics for $\lambda_k^*(M,[g])$ degenerates to a disjoint union of the surface $(M,h_j)$, where $h_j\in [g]$ is a maximising metric for $\lambda_{k-j}^*(M,[g])$, together with $j$ identical round spheres (``bubbles'') each of volume $\frac{8\pi}{\lambda_k^*(M,[g])}$. This informal statement is clarified in \cite[Subsection 5.1]{KNPP2020}. Note that the first item says that a metric realising $\lambda_k^*(M,[g])$ exists provided that the gap condition $\lambda_k^*(M,[g])>\lambda_{k-1}^*(M,[g])+8\pi$ is satisfied. The reader may find both this formulation of the first item and also the informal statement in the previous paragraph helpful in comparing Theorem~\ref{thm:ex lap conf max} with Theorem~\ref{thm:ex stek conf max} below. Next consider the Steklov problem on compact surfaces ${\Omega}$ with boundary. Given a conformal class $[g]$, let \begin{equation}\label{eq;stek[g]} \sigma_k^*({\Omega},[g])=\sup_{h\in [g]}\,\sigma_k({\Omega},h)|\partial\om|_h. \end{equation} As the first step in his proof, modulo a gap condition, of Theorem~\ref{mp max exists}, Petrides proved the following analogue of the first item in Theorem~\ref{thm:ex lap conf max}: \begin{thm}\label{thm:ex stek conf max}\cite[Theorem 2]{Pe2019} Let ${\Omega}$ be a connected compact surface with boundary and $[g]$ a conformal class of metrics on ${\Omega}$.
If \begin{equation}\label{eq: conf stek gap} \sigma_k^*({\Omega},[g])> \max_{1\leq j\leq k}\,\bigl(\sigma_{k-j}^*({\Omega},[g]) +2\pi j\bigr) \end{equation} (with the convention $\sigma_0^*({\Omega},[g])=0$), then there exists a smooth Riemannian metric $h\in [g]$ such that $$\sigma_k({\Omega},h)|\partial\om|_h=\sigma_k^*({\Omega},[g]).$$ \end{thm} As before, the stronger conclusion that the metric is smooth is due to the fact that any interior conical singularities can be conformally removed without affecting the spectrum. The proof uses a special choice of maximising sequence and a careful argument using the relationship between conformally critical metrics and harmonic maps. The gap hypothesis prevents disks from bubbling off. Observe that when $k=1$, existence of a maximising metric in $[g]$ would follow from an affirmative answer to the following question: \begin{ques}\label{ques:sig >2pi} Is $\sigma_1^*({\Omega},[g])>2\pi$ for every conformal class $[g]$ when the surface ${\Omega}$ is not diffeomorphic to a disk? \end{ques} The inequality in the question above would be the Steklov analogue of Inequality~\eqref{eq:>8pi}. \subsection{From Steklov to Laplace: asymptotics of free boundary minimal surfaces}\label{subsec.karp-stern} Let $M$ be a closed surface and let $M_b$ be the compact surface with boundary obtained by removing $b$ disjoint disks from $M$. Thus, for example, if $M=M_\gamma$, the closed orientable surface of genus $\gamma$, then $M_b={\Omega}_{\gamma, b}$, the orientable surface of genus $\gamma$ with $b$ boundary components. We've seen the following: \begin{enumerate} \item $\sigma_1^*(M_b) < \lambda_1^*(M)$ and $\lim_{b\to\infty}\, \sigma_1^*(M_b) = \lambda_1^*(M)$. (See Remark~\ref{rem:b to infty}. The fact that the inequality is strict follows from the gap condition~\eqref{eq.gap}.) \item There exists a (not necessarily unique) Riemannian metric $g$ on $M$ (possibly with conical singularities) attaining $\lambda_1^*(M)$ and an associated minimal surface $S$ in $\mathbb{S}^n$ for some $n$.
\item For each $b$, there exists a (not necessarily unique) smooth metric $g_b$ on $M_b$ attaining $\sigma_1^*(M_b)$ and an associated minimal surface $S_b$ in $\mathbb{B}^{n_b}$ for some $n_b$. \end{enumerate} The multiplicity of $\sigma_1(M_b, g_b)$, and thus the value of $n_b$ in item (3), is bounded above by a constant depending only on the genus of $M$. (See the multiplicity bound~\eqref{eq.mult bound}.) Let $N$ be the maximum of all the $n_b$ and of $n$. By the observation at the end of Example~\ref{disk fbms} and its analogue for minimal surfaces in spheres, the free boundary minimal surfaces in the third item above may be viewed as free boundary minimal surfaces in $\mathbb{B}^N$, and the minimal surface in the second item may be viewed as a minimal surface in $\mathbb{S}^{N-1}$. Karpukhin and Stern proved the following striking asymptotic result: \begin{thm}\label{thm.KS asymptotics}\cite[Theorem 1.1]{KaSt2021} Up to a choice of a subsequence $\{b_j\}$ of $\mathbf Z^+$, the free boundary minimal surfaces in $\mathbb{B}^N$ inducing the metrics attaining $\sigma_1^*(M_{b_j})$ converge in the varifold sense to a closed minimal surface in $\mathbb{S}^{N-1}$ inducing the metric on $M$ attaining $\lambda_1^*(M)$. As a consequence, their supports converge in the Hausdorff sense to that of the limit surface, and their boundary measures converge to twice the area measure of the limit surface. \end{thm} See Subsection 2.6 of \cite{KaSt2021} for a brief summary of the concept of convergence in the varifold sense. Compare the final statement of the theorem with Equation~\eqref{2A=L}. As an example (see \cite[Corollary 1.4]{KaSt2021}), if we let $M$ be the topological 2-sphere, then $M_b={\Omega}_{0,b}$. By Remark~\ref{rem:b to infty}, we have $$\lim_{b\to\infty}\sigma_1^*(0,b)=8\pi =\lambda_1^*(M)=\lambda_1(\mathbb{S}^2)|\mathbb{S}^2|$$ where $\mathbb{S}^2$ is the round unit 2-sphere.
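The final equality is a quick arithmetic check: on the unit round sphere the nonzero Laplace eigenvalues are $\ell(\ell+1)$, and the first of these ($\ell=1$) is $2$, so

```latex
% First nonzero Laplace eigenvalue and area of the unit round sphere:
\lambda_1(\mathbb{S}^2)=2,\qquad |\mathbb{S}^2|=4\pi,
\qquad\text{hence}\qquad
\lambda_1(\mathbb{S}^2)\,|\mathbb{S}^2|=2\cdot 4\pi=8\pi.
```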
By Theorem~\ref{thm.fs}, the maximising metric for the first Steklov eigenvalue on ${\Omega}_{0,b}$ is induced by an embedding of ${\Omega}_{0,b}$ as a free boundary minimal surface in $\mathbb{B}^3$ for each $b$. Theorem~\ref{thm.KS asymptotics} states in this case that as $b\to \infty$, the sequence of minimal surfaces converges in the varifold sense to the boundary sphere $\mathbb{S}^2$ itself, as conjectured in \cite{GiLa2021}. The following is one of many interesting open questions raised by Karpukhin and Stern in \cite{KaSt2021}. \begin{ques}\label{ques:ksoq3}\cite[Open Question 3]{KaSt2021}. In the setting of Theorem~\ref{thm.KS asymptotics}, if the limiting surface in $\mathbb{S}^{N-1}$ realising $\lambda_1^*(M)$ is embedded, does it necessarily follow that the minimal surfaces in $\mathbb{B}^{N}$ realising $\sigma_1^*(M_b)$ are embedded for all sufficiently large $b$? \end{ques} \subsection{Spectral index of embedded free boundary minimal surfaces}\label{subsec.embedded} Proposition~\ref{fbms iff stek} tells us that, for any free boundary minimal surface $S$ in $\mathbb{B}^n$, the coordinate functions in $\R^n$ restrict to Steklov eigenfunctions on $S$ with eigenvalue 1. However, it does not tell us where the value 1 occurs in the spectrum. \begin{defn}\label{spec.index} A free boundary minimal surface in $\mathbb{B}^n$ is said to have \emph{spectral index} $k$ if $k$ is the minimum element of $\mathbf Z^+$ such that $\sigma_k({\Omega})=1$. \end{defn} Knowledge of the spectral index has many applications, e.g., to obtaining eigenvalue bounds as we will discuss in the next subsection. A. Fraser and M. Li conjectured: \begin{conj}\label{fraser li conj}\cite[Conjecture 3.3]{FrLi2014} If ${\Omega}$ is a properly embedded free boundary minimal hypersurface in $\mathbb{B}^n$, then $\sigma_1({\Omega})=1$; i.e., ${\Omega}$ has spectral index one. \end{conj} This conjecture is analogous to a longstanding conjecture of S. T.
Yau \cite{Ya1982} asking whether every embedded minimal surface in $\mathbb{S}^3$ is embedded by first Laplace eigenfunctions. We emphasise that if the conjecture holds in the case $n=3$, then Proposition~\ref{prop.2A=mL} yields an upper bound on the area of free boundary embedded minimal surfaces in $\mathbb{B}^3$ of any given genus and number of boundary components, and Theorem~\ref{mp max exists} shows that the upper bound is attained in each case. In support of the conjecture, Fraser and Li \cite[Corollary 3.2]{FrLi2014} proved that every properly embedded free boundary minimal hypersurface in $\mathbb{B}^n$ satisfies $\sigma_1({\Omega})\geq \frac{1}{2}.$ Batista and Cunha \cite{BaCu2016} showed that this inequality is strict. McGrath \cite[Theorem 4]{Mc2018} affirmed the conjecture for free boundary minimal surfaces in $\mathbb{B}^3$ that satisfy certain symmetry conditions; these conditions are satisfied by large families of examples in the literature. Girouard and Lagac\'e \cite[Theorem 1.13]{GiLa2021} applied McGrath's result and other tools to verify the conjecture for all free boundary minimal surfaces in $\mathbb{B}^3$ of genus zero with tetrahedral, octahedral, or icosahedral symmetry and $b$ boundary components where $b\in \{4\}, \{6,8\}, \{8,12,20\}$, respectively. Lee and Yeon \cite{LeYe2021} presented new approaches to the conjecture. For example, they showed that the Gauss map of any embedded free boundary minimal annulus in $\mathbb{B}^3$ is one-to-one and re-interpreted the conjecture as the problem of determining the Gauss map. The first item of the following proposition yields a partial converse to the Fraser-Li conjecture: \begin{prop}\label{first implies embed}\cite[Proposition 8.1]{FrSc2016} Let ${\Omega}$ be a Riemannian surface of genus zero with more than one boundary component.
Suppose that $\tau:{\Omega}\to \mathbb{B}^3$ is a branched minimal immersion satisfying the free boundary condition and that $\tau({\Omega})$ has spectral index one. Then: \begin{enumerate} \item $\tau$ is an embedding; \item $\tau({\Omega})$ does not contain the origin; \item Any ray through the origin intersects $\tau({\Omega})$ in at most one point; \item ${\Omega}$ is a stable minimal surface with area bounded above by $4\pi$. \end{enumerate} \end{prop} The proof of the first three items uses the nodal domain theorem along with the maximum principle for harmonic functions. The area bound follows from Kokarev's bound $\sigma_1^*({\Omega})\leq 8\pi$ for all surfaces of genus zero (see Theorem~\ref{thm:KokarevGenus0}). As a consequence of Theorem~\ref{thm.fs} and the fact \cite{GiHeLa2021} that $\lim_{b\to\infty}\, \sigma_1^*(0,b)=8\pi$ (see Remark~\ref{rem:b to infty}), one has: \begin{cor} The bound $4\pi$ in Proposition~\ref{first implies embed} is sharp. \end{cor} The literature contains many constructions of embedded free boundary minimal surfaces in $\mathbb B^3$. In many cases the spectral index is not known. We give a sampling here: \begin{itemize} \item Folha, Pacard and Zolotareva \cite{FoPaZo2017} gave examples of surfaces of genus zero with $b$ boundary components for each large $b$, converging to the double disk as $b\to \infty$. Although their construction is a priori different from the construction by Fraser-Schoen \cite{FrSc2016} of free boundary minimal surfaces converging to the double disk, they conjectured that their surfaces are congruent to those of Fraser-Schoen. They also proved the existence for all large $b$ of free boundary minimal surfaces in $\mathbb B^3$ of genus one with $b$ boundary components converging as $b\to\infty$ to a double copy of the unit equatorial disk punctured at the origin. The convergence is uniform on compact subsets of $\mathbb B^3-\{0\}$.
McGrath in the work \cite{Mc2018} cited above verified that all the surfaces in both of these families have spectral index one. \item Ketover \cite{Ke2016} and also Kapouleas and Li \cite{KaLi2021} constructed surfaces of genus $\gamma$ with three boundary components converging as varifolds to the union of the disk and the critical catenoid (discussed in the next subsection) as $\gamma\to \infty$. \item Carlotto, Franz and Schulz \cite{CaFrSc2020} constructed examples of every genus with one boundary component and with dihedral symmetry. \end{itemize} \subsection{Applications of free boundary minimal surfaces to finding Steklov eigenvalue bounds and maximising metrics}\label{applic.bounds} Currently, the disk, annulus and M\"obius strip are the only surfaces with boundary for which explicit maximising metrics have been determined. The interplay with free boundary minimal surfaces played a critical role in the cases of both the annulus and the M\"obius strip. \begin{ex}\label{ex.cat}\label{ex.mobius}~ (i) Except for the equatorial disk, the earliest known example of a free boundary minimal surface in $\mathbb{B}^3$ is the so-called critical catenoid, the intersection with $\mathbb{B}^3$ of the unique catenoid about the $x_3$ axis centered at the origin that meets the unit sphere $\mathbb{S}^2$ orthogonally. See Figure~\ref{fig:catenoid}, drawn by Emma Fajeau and reprinted from Notices of the American Mathematical Society \cite{BMWE2018} where it originally appeared. In \cite[Theorem 6.2]{FrSc2016}, Fraser and Schoen showed that the critical catenoid is the \emph{unique}, up to congruence, free boundary minimal annulus embedded in $\mathbb{B}^3$ -- in fact in any $\mathbb{B}^n$ -- of spectral index one. (See Definition~\ref{spec.index} for the notion of spectral index.) Comparing this uniqueness statement with Theorems~\ref{fs max exist} and \ref{thm.fs}, they concluded that the critical catenoid does indeed realise $\sigma_1^*(0,2)$.
They were then able to sharpen the bound of $6\pi$ for $\sigma_1^*(0,2)$ in Theorem~\ref{thm:karp hersch} to approximately $\frac{4\pi}{1.2}$. \begin{figure} \includegraphics[width=3cm]{Figures/catenoid.pdf} \caption{Critical catenoid drawn by Emma Fajeau (originally appeared in \cite{BMWE2018}).} \label{fig:catenoid} \end{figure} (ii) Fraser and Schoen \cite{FrSc2016} explicitly constructed a free boundary minimal embedding of the M\"obius band into $\mathbb{B}^4$ that is invariant under an action of $\mathbb{S}^1$ by rotations. By showing (a) that it is the unique free boundary minimal M\"obius band in any $\mathbb{B}^n$ that is $\mathbb{S}^1$-invariant and (b) that any free boundary minimal M\"obius band in $\mathbb{B}^n$ whose coordinate functions are first eigenfunctions must be $\mathbb{S}^1$-invariant, they could conclude by Theorem~\ref{fs max exist} that the metric on the M\"obius band ${\Omega}$ induced by this embedding in $\mathbb{B}^4$ must realise $\sigma_1^*({\Omega})$. As a consequence, they found that $\sigma_1^*({\Omega})=2\pi\sqrt{3}$. \end{ex} It is rare that one can establish the type of uniqueness statements that were used in Example~\ref{ex.cat} to find the actual maximiser of $\sigma_k^*({\Omega})$. However, any embedding of ${\Omega}$ as a free boundary minimal surface of spectral index $k$ in a Euclidean ball $\mathbb{B}^n$ can be used to obtain a lower bound for $\sigma_k^*({\Omega})$, since by Equation~\eqref{2A=L} and Proposition~\ref{fbms iff stek}, we have $$2|{\Omega}|_g= |\partial {\Omega}|_g=\sigma_k({\Omega},g)|\partial {\Omega}|_g\leq\sigma^*_k({\Omega})$$ where $g$ is the metric on ${\Omega}$ induced by the Euclidean metric on $\mathbb{B}^n$. In particular, in view of the many examples in the literature of embedded free boundary minimal surfaces in $\mathbb{B}^3$, the Fraser-Li Conjecture~\ref{fraser li conj} has strong implications for eigenvalue bounds.
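As a concrete instance of the displayed chain (a quick check, not a new result): the Fraser-Schoen M\"obius band of Example~\ref{ex.mobius} has spectral index one, and its induced metric realises $\sigma_1^*({\Omega})=2\pi\sqrt{3}$, so the chain collapses to equalities:

```latex
% Spectral index one forces sigma_1 = 1, so by Equation 2A=L:
2|{\Omega}|_g = |\partial{\Omega}|_g
            = \sigma_1({\Omega},g)\,|\partial{\Omega}|_g
            = \sigma_1^*({\Omega}) = 2\pi\sqrt{3},
\qquad\text{so}\qquad |{\Omega}|_g = \pi\sqrt{3}.
```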
If the Fraser-Li Conjecture~\ref{fraser li conj} is affirmed, then the known examples of embedded free boundary minimal surfaces in $\mathbb{B}^3$ would yield lower bounds for $\sigma_1^*({\Omega})$ for surfaces of many topological types. \smallskip We now consider higher eigenvalues. Interesting results on maximisers have been obtained when one restricts to a special class of metrics. Given a surface that admits a circle action, let $$\sigma_k^{\mathbb S^1}({\Omega})=\sup_{g\in \mathcal{S}({\Omega})}\,\sigma_k({\Omega},g)|\partial {\Omega}|_g$$ where $\mathcal{S}({\Omega})$ denotes the set of $\mathbb S^1$-invariant smooth metrics on ${\Omega}$. Fan, Tam and Yu \cite{FaTaYu2015} considered rotationally symmetric Riemannian metrics on the topological annulus ${\Omega}_{0,2}$. We summarise their results: \begin{enumerate} \item $\sigma_2^{\mathbb S^1}({\Omega}_{0,2})=4\pi$. No metric in $\mathcal{S}({\Omega}_{0,2})$ attains this supremum. As $T\to\infty$, the normalised second eigenvalue on the cylinder $[0,T]\times \mathbb{S}^1$ converges to $4\pi$. \item For every $k>2$, they determine $\sigma_k^{\mathbb S^1}({\Omega}_{0,2})$ explicitly and exhibit metrics $g\in\mathcal{S}({\Omega}_{0,2})$ that attain it. \item For $k>2$ odd, the maximising metric for $\sigma_k^{\mathbb S^1}({\Omega}_{0,2})$ is induced by an immersion of ${\Omega}_{0,2}$ into $\mathbb{B}^3$ whose image is a free boundary minimal surface, the so-called critical $k$-catenoid. \item For $k>2$ even, the maximising metric is induced by an immersion of ${\Omega}_{0,2}$ into $\mathbb{B}^4$ whose image is a free boundary minimal M\"obius strip. \end{enumerate} Subsequently, Fraser and Sargent \cite{FrSa2021} classified all free boundary rotationally symmetric minimal annuli and M\"obius bands in $\mathbb{B}^n$ for arbitrary $n$. We will denote the topological M\"obius strip by $\text{M\"ob}$.
Some highlights of their results are: \begin{enumerate} \item For all $k\geq 1$, they show that $\sigma_{2k-1}^{\mathbb S^1}(\mob)=\sigma_{2k}^{\mathbb S^1}(\mob)$. Moreover, they compute this value explicitly and show that it is attained by a free boundary minimal embedding of $\mob$ in $\mathbb{B}^4$. \item Every rotationally symmetric free boundary minimal annulus or M\"obius band in any $\mathbb{B}^n$ is critical for some $k$-th normalised eigenvalue within the space of rotationally symmetric Riemannian metrics. \end{enumerate} \subsection{Higher dimensions}\label{subsec: fbms higher dim} In this final subsection, we address Karpukhin and M\'etras' extensions to arbitrary dimension of Theorems~\ref{thm.fs} and \ref{thm: FrSc conf class}. For motivation, we will first consider the setting of the Laplacian on closed manifolds. \subsubsection{Laplace conformally critical metrics and $n$-harmonic maps.} Let $(M,g)$ be a closed Riemannian manifold. By the variational formula for the Dirichlet energy, a map $u:(M,g)\to\mathbb{S}^m$ is harmonic if and only if $$\Delta_g u=|du|_g^2u.$$ \begin{defn}\label{def:eigenmap} A map $u: (M,g) \to \mathbb{S}^m$ is said to be a $\lambda$-\emph{eigenmap} if $u$ is harmonic and the component functions $u_1,\dots, u_{m+1}$ are Laplace eigenfunctions of $(M,g)$ with eigenvalue $\lambda$. Equivalently, $u$ is harmonic with constant energy density $|du|^2\equiv\lambda$. \end{defn} Now suppose that $\dim(M)=2$ and let $u:(M,g)\to \mathbb{S}^m$ be a smooth map. Since the Dirichlet energy is a conformal invariant in dimension two, the property of being harmonic depends only on the conformal class of $g$. We say that $u$ is non-degenerate if $|du|_g^2$ is nowhere zero. In this case we can define $$g_u=|du|^2_g g.$$ Since $\dim(M)=2$, we have \begin{equation}\label{eq:Delta gu} \Delta_{g_u}u = \frac{1}{|du|^2_g}\Delta_gu =u,\end{equation} so $u$ is an eigenmap with eigenvalue one with respect to $g_u$.
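The first equality in Equation~\eqref{eq:Delta gu} is the standard conformal covariance of the Laplacian in dimension two, visible in local coordinates (up to the sign convention for $\Delta$):

```latex
% For \hat g = e^{2f} g on a surface:
%   \sqrt{|\hat g|} = e^{2f}\sqrt{|g|} and \hat g^{ij} = e^{-2f} g^{ij},
% so the combination \sqrt{|g|}\, g^{ij} is conformally invariant and
\Delta_{\hat g}
  = \frac{1}{\sqrt{|\hat g|}}\,
    \partial_i\!\left(\sqrt{|\hat g|}\,\hat g^{ij}\,\partial_j\right)
  = e^{-2f}\,\Delta_g .
```

Taking $e^{2f}=|du|_g^2$ gives $\Delta_{g_u}=|du|_g^{-2}\Delta_g$, and then $\Delta_{g_u}u=|du|_g^{-2}\,|du|_g^2\,u=u$.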
Moreover, Theorem~\ref{thm: El harmonic} tells us that $g_u$ is $\overline{\lambda}_k$-conformally critical where $k$ is chosen so that either $\lambda_k(M,g_u)=1<\lambda_{k+1}(M,g_u)$ or $\lambda_{k-1}(M,g_u)<1=\lambda_{k}(M,g_u)$. An insight of Matthiesen~\cite{Ma2019} is that in any dimension $n\geq 2$, one can get a similar result by replacing harmonic maps by $n$-harmonic maps, defined as follows: \begin{defn} Let $(M,g)$ be a closed Riemannian manifold. The $n$-energy of a map $u:M\to \mathbb{S}^m$ is defined by $$E_n(u)=\int_M\,|d u|_g^n \,dV_g.$$ The map $u$ is said to be $n$-\emph{harmonic} if it is a critical point of $E_n$. Assuming $\dim(M)=n$, this condition is equivalent to \begin{equation}\label{eq:n harm}-\delta_g(|du|_g^{n-2}du) =|d u|_g^n u \end{equation} where $\delta_g$ is the formal adjoint of $d$. \end{defn} Note that when $n=2$, $E_n$ agrees with the Dirichlet energy (see Equation~\eqref{eq:dir en}) except for the missing coefficient $\frac{1}{2}$. (Some authors, including Karpukhin and M\'etras, include a coefficient of $\frac{1}{n}$ in the definition of $n$-energy. We follow the convention of \cite{Ma2019} here.) In particular, a 2-harmonic map is the same as a harmonic map. There are numerous references on $n$-harmonic maps; \cite{Ta1994} addresses the relationship with minimal immersions. The $n$-energy is a conformal invariant for $n$-dimensional manifolds $M$. Moreover: \begin{lemma}\cite{Ma2019}\label{lem: eigenmap} Let $(M,g)$ be an $n$-dimensional closed Riemannian manifold. \begin{enumerate} \item If $u:M\to \mathbb{S}^m$ is an eigenmap with respect to $g$, then $u$ is necessarily both $n$-harmonic and harmonic. \item If $u:(M,g)\to \mathbb{S}^m$ is a non-degenerate $n$-harmonic map, then $u$ is an eigenmap with respect to the conformally equivalent metric $g'=|du|^2_g g$.
\end{enumerate} \end{lemma} The first statement is immediate from Equation~\eqref{eq:n harm} since eigenmaps have constant energy density, and the proof of the second statement is similar to the case $n=2$ above. Karpukhin and M\'etras \cite{KaMe2021} observed that Lemma~\ref{lem: eigenmap} enables Theorem~\ref{thm: El harmonic} to be reformulated as in the corollary below, and also showed that the result remains valid with the notion of critical metric alluded to in Remark~\ref{rem:crit different}. \begin{cor}(See \cite[Theorem 1]{KaMe2021}.) \label{cor: KM Lap} Let $(M,g)$ be a compact $n$-dimensional Riemannian manifold. \begin{enumerate} \item Let $k\in \mathbf Z^+$. If there exists a $\overline{\lambda}_k$-conformally critical metric $g_0$ in $[g]$, then there exist $m\in \mathbf Z^+$ and a non-degenerate $n$-harmonic map $u: (M,[g])\to \mathbb{S}^m$. \item Conversely, if $u: (M,g)\to \mathbb{S}^m$ is a non-degenerate $n$-harmonic map, then there exists $k\in \mathbf Z^+$ such that $g_u =|du|^2_g g$ is a $\overline{\lambda}_k$-conformally critical metric and $u$ is a $\lambda_k(M,g_u)$-eigenmap. \end{enumerate} \end{cor} \subsubsection{Critical metrics for Steklov eigenvalues} Karpukhin and M\'etras had two key insights that enabled them to generalise Theorems~\ref{thm: FrSc conf class} and \ref{thm.fs} to higher dimensions: \begin{enumerate} \item For manifolds of dimension $d+1$, one must use the normalisation \begin{equation}\label{eq: KM normalisation}\overline{\sigma}_k({\Omega},g)=\sigma_k({\Omega},g)|{\Omega}|_g^{\frac{1-d}{d+1}}|\partial{\Omega}|_g.\end{equation} Indeed, they showed that this is the \emph{only} normalisation for which smooth critical metrics can exist. (Compare with Example~\ref{example:KarpukhinMetrasNorm}, which illustrates another instance in which this normalisation is distinguished by its special behaviour.) \item One must introduce densities and consider the weighted Steklov spectrum. 
\end{enumerate} The new normalisation that they introduced was discussed in Section~\ref{section:boundshigher}. Here we address the second insight above. To adapt Matthiesen's ideas to the Steklov case, one considers free boundary $(d+1)$-harmonic maps, where as usual $d+1 =\dim({\Omega})$. Free boundary $(d+1)$-harmonic maps $u: ({\Omega},g)\to \mathbb{B}^m$ are precisely the maps that satisfy the following: \begin{equation}\label{eq: fb harm ball} \begin{cases} -\delta_g(|du|_g^{d-1}du)= 0\,\,&\mbox{in}\,\,{\Omega},\\ \,\,\,\,\partial\nu_gu =|\partial\nu_{g}u|u\,\,\,&\mbox{on}\,\,\partial{\Omega}. \end{cases} \end{equation} Now assume that $u$ is non-degenerate. If we replace $g$ by the conformally equivalent metric $g_u=|du|_g^2 g$, then (\ref{eq: fb harm ball}) can be expressed as: \begin{equation}\label{eq: fb harm ball'} \begin{cases} \Delta_{g_u}u= 0\,\,&\mbox{in}\,\,{\Omega},\\ \partial\nu_{g_u}u =|\partial\nu_{g_u}u|u \,\,\,&\mbox{on}\,\,\partial{\Omega}. \end{cases} \end{equation} Equivalently, if one defines $\rho_u:\partial {\Omega} \to \R$ by \begin{equation}\label{eq:rho sub phi}\rho_u=|\partial\nu_{g_u}u|\end{equation} then \eqref{eq: fb harm ball'} says that the coordinate functions of $u$ are eigenfunctions with eigenvalue one of the weighted Steklov problem with density $\rho_u$ as in Example~\ref{ex:varweightedSteklov}. Denoting the eigenvalues by $\sigma_k({\Omega},g_u, \rho_u)$, one defines the \emph{spectral index} of $u$ to be the minimum $k$ for which $\sigma_k({\Omega},g_u, \rho_u)=1$. Moreover, Karpukhin and M\'etras \cite[Lemma 2]{KaMe2021} showed that the density $\rho_u$ is strictly positive. Thus the weighted Steklov problem is precisely the eigenvalue problem for the operator \begin{equation}\label{eq:dtn weight} \mathcal{D}_{g,\rho}:=\frac{1}{\rho}\mathcal{D}_g \end{equation} where $\mathcal{D}_g$ is the Dirichlet-to-Neumann operator of $({\Omega},g)$. 
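Explicitly, a function $\hat u$ on $\partial{\Omega}$ is an eigenfunction of $\mathcal{D}_{g,\rho}$ with eigenvalue $\sigma$ precisely when its harmonic extension (still denoted $\hat u$) satisfies $$\begin{cases} \Delta_{g}\hat u= 0\,\,&\mbox{in}\,\,{\Omega},\\ \partial\nu_{g}\hat u =\sigma\rho\, \hat u\,\,\,&\mbox{on}\,\,\partial{\Omega}, \end{cases}$$ so \eqref{eq: fb harm ball'} says exactly that the coordinate functions of $u$ solve this problem for the pair $(g_u,\rho_u)$ with $\sigma=1$.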
They introduced the notion of $\overline{\sigma}_k$-critical pairs and $\overline{\sigma}_k$-conformally critical pairs $(g,\rho)$ as follows: \begin{defn}\label{def: conf ext pair} Let ${\Omega}$ be a $(d+1)$-dimensional compact Riemannian manifold with boundary. Denote by $\mathcal{M}_{\Omega}$ the space of all Riemannian metrics on ${\Omega}$, and denote by $C^\infty(\Sigma)_{>0}$ the space of positive smooth densities on the boundary $\Sigma$ of ${\Omega}$. \begin{enumerate} \item Given a Riemannian metric $g$ and positive density $\rho$, denote by $\sigma_k({\Omega},g,\rho)$ the $k$-th eigenvalue of the associated weighted Steklov problem. Define the normalised weighted Steklov eigenvalues by \begin{equation}\label{eq: weight normalis}\overline{\sigma}_k({\Omega},g,\rho)=\sigma_k({\Omega},g,\rho)\,|{\Omega}|^{\frac{1-d}{d+1}} \,\|\rho\|_{L^1(\Sigma,g)}.\end{equation} Observe that this normalisation is the natural extension to the weighted Steklov problem of the normalisation~\eqref{eq: KM normalisation}. \item Given a functional $F:\mathcal{M}_{\Omega}\times C^\infty(\Sigma)_{>0}\to \R$, we say $(g,\rho)\in \mathcal{M}_{\Omega}\times C^\infty(\Sigma)_{>0}$ is an \emph{$F$-critical pair} if for each deformation $\{(g_t,\rho_t)\}_t$ depending smoothly on $t\in (-\epsilon, \epsilon)$ with $(g_0,\rho_0)=(g,\rho)$, either \begin{equation}\label{eq:extremal pt def}F(g_t,\rho_t)\leq F(g,\rho)+o(t) \mbox{\,\,\,or\,else\,\,\,} F(g_t,\rho_t)\geq F(g,\rho)+o(t).\end{equation} We say $(g,\rho)$ is an \emph{$F$-conformally critical pair} if \eqref{eq:extremal pt def} holds for all such deformations $(g_t,\rho_t)$ that satisfy $g_t\in [g]$ for all $t$. \item If $H: \mathcal{M}_{\Omega}\to \R$ is a functional, the notions of $H$-critical metric $g$, respectively $H$-conformally critical metric, used in \cite{KaMe2021} are defined analogously to the notions of $F$-critical pairs and $F$-conformally critical pairs, respectively. 
\end{enumerate} \end{defn} \begin{remark} Karpukhin and M\'etras use the language ``extremal'' rather than ``critical''. \end{remark} Karpukhin and M\'etras \cite{KaMe2021} observed that a modification of the proof of Hassannezhad's Theorem~\ref{asma2011} yields $$\overline{\sigma}_k({\Omega},g,\rho)\leq Ck^{2/n}$$ where $C$ is a constant depending only on the conformal class $[g]$. Note the following consequences of equations~\eqref{eq:dtn weight} and \eqref{eq: weight normalis}: \begin{itemize} \item For $\alpha\in \R^+$, we have $$\mathcal{D}_{\alpha g,\frac{\rho}{\sqrt{\alpha}}} =\mathcal{D}_{g,\rho}$$ so $\sigma_k(\alpha g, \alpha^{-1/2}\rho)=\sigma_k(g,\rho)$ for all $k$. \item For $\alpha, \beta\in \R^+$, we have $\overline{\sigma}_k(\alpha g, \beta \rho)=\overline{\sigma}_k(g,\rho)$ for all $k$. \item If $\dim({\Omega})=2$, we further have $$\mathcal{D}_{e^{2f}g, e^{-f}\rho}=\mathcal{D}_{g,\rho}$$ so $\overline{\sigma}_k(e^{2f}g, \beta e^{-f}\rho)=\overline{\sigma}_k(g,\rho)$ for all $f\in C^{\infty}({\Omega})$ and $\beta\in \R^+$. \end{itemize} \begin{nota}\label{nota:g rho equiv} Define an equivalence relation $\sim$ on $\mathcal{M}_{\Omega}\times C^\infty(\Sigma)_{>0}$ as follows: \underline{Case 1.} $\dim({\Omega})\geq 3$. $$(g,\rho)\sim (g',\rho') \iff (g',\rho')=(\alpha g,\beta\rho) \,\,{\rm for\, some} \,\,\alpha,\beta\in\R^+.$$ \underline{Case 2.}\,$\dim({\Omega})=2$. $$(g,\rho)\sim (g',\rho') \iff (g',\rho')=(e^{2f} g,\beta e^{-f}\rho) \,\,{\rm for\, some} \,\,f\in C^\infty({\Omega}), \beta\in \R^+.$$ \end{nota} Note that the set of $\overline{\sigma}_k$-(conformally) critical pairs is saturated by equivalence classes; i.e., any pair that is equivalent to a $\overline{\sigma}_k$-(conformally) critical pair is also $\overline{\sigma}_k$-(conformally) critical. 
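To illustrate how the exponents in the normalisation are tuned, the second identity in the list above can be checked directly (with the $L^1$-norm taken on the boundary $\Sigma$): since $\mathcal{D}_{\alpha g}=\alpha^{-1/2}\mathcal{D}_g$, we have $\sigma_k(\alpha g,\beta\rho)=\alpha^{-1/2}\beta^{-1}\sigma_k(g,\rho)$, while $$\vert{\Omega}\vert_{\alpha g}^{\frac{1-d}{d+1}}=\alpha^{\frac{1-d}{2}}\vert{\Omega}\vert_{g}^{\frac{1-d}{d+1}} \quad\mbox{and}\quad \Vert\beta\rho\Vert_{L^1(\Sigma,\alpha g)}=\beta\,\alpha^{\frac{d}{2}}\Vert\rho\Vert_{L^1(\Sigma,g)},$$ so the product of the scaling factors is $\alpha^{-\frac{1}{2}}\beta^{-1}\cdot\alpha^{\frac{1-d}{2}}\cdot\beta\,\alpha^{\frac{d}{2}}=1$ and $\overline{\sigma}_k(\alpha g,\beta\rho)=\overline{\sigma}_k(g,\rho)$.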
We now state Karpukhin and M\'etras' characterisation of conformally critical pairs: \begin{thm}\label{thm: KM stek conf extremals}\cite[Theorem 2]{KaMe2021} Let ${\Omega}$ be a compact manifold with boundary. We use the notation of Definition~\ref{def: conf ext pair} and Notation~\ref{nota:g rho equiv}. \begin{enumerate} \item Suppose that $(g,\rho)$ is $\overline{\sigma}_k$-conformally critical. Then there exist $m>1$ and a non-degenerate free boundary $(d+1)$-harmonic map $u:({\Omega}, [g])\to \mathbb{B}^m$ such that $(g,\rho)\sim( g_u, \rho_u)$. In particular, the coordinate functions of $u$ are $\sigma_k({\Omega},g,\rho)$-eigenfunctions. \item Conversely, let $u: ({\Omega}, [g])\to \mathbb{B}^m$ be a non-degenerate free boundary $(d+1)$-harmonic map. Then $(g_u,\rho_u)$ is a $\overline{\sigma}_k$-conformally critical pair, where $k$ is the spectral index of $u$. \end{enumerate} \end{thm} In the first part of Theorem~\ref{thm: KM stek conf extremals}, the map $u$ is harmonic as well as $(d+1)$-harmonic, since it is given by weighted Steklov eigenfunctions. \begin{remark}\label{rem: n harm and harm} We address the case $\dim({\Omega})=2$. The first statement of Theorem~\ref{thm: KM stek conf extremals} in this case is a reformulation of Theorem~\ref{thm: FrSc conf class}. Indeed, the equivalence class of any pair $(g,\rho)$ necessarily contains an element of the form $(g',1)$, and any such $g'$ will be a conformally critical metric. (While Fraser and Schoen stated Theorem~\ref{thm: FrSc conf class} only for conformally maximal metrics, Karpukhin and M\'etras observe that their proof is easily modified to address more general conformally critical metrics.) The second statement similarly guarantees that the coordinate functions of an arbitrary free boundary harmonic map $u: ({\Omega}, [g])\to \mathbb{B}^m$ can be realised as eigenfunctions for some critical metric $g_0$; one need only choose $g_0$ so that $(g_0,1)\sim (g_u,\rho_u)$. 
However, the critical metric may not coincide with $g_u$. Consider the case of an annulus. Conformal classes of metrics are in one-to-one correspondence with cylinders $[0,T]\times \mathbb{S}^1$ for $T\in \R^+$. The global maximising metric for $\overline{\sigma}_1$ identified by Fraser and Schoen lies in the conformal class for which $T$ is the unique solution of $T=\coth T$. For those conformal classes parametrised by larger values of $T$, Karpukhin and M\'etras \cite{KaMe2021} showed that there exists a rotationally symmetric free boundary harmonic map $u: ({\Omega},[g])\to \mathbb{B}^3$ whose image is the critical catenoid. However, $\rho_u$ takes on different constant values on the two boundary circles. \end{remark} The first statement of \cite[Theorem 9]{KaMe2021} yields the following characterisation of conformally critical metrics for the unweighted Steklov problem in dimensions greater than two: \begin{thm}\cite{KaMe2021}\label{thm: higher conf ext metric} Let ${\Omega}$ be a $(d+1)$-dimensional compact manifold with boundary, where $d+1\geq 3$. If the metric $g$ is $\overline{\sigma}_k$-conformally critical, then there exists a free boundary harmonic (and simultaneously $(d+1)$-harmonic) map $u=(u_1,\dots, u_m): ({\Omega},[g])\to \mathbb{B}^m$ for some $m$ such that the $u_j$ are linearly independent $\sigma_k({\Omega},g)$-eigenfunctions and $|du|_g^2$ is constant. (In particular, $g$ is a constant multiple of $g_u$ and $\rho_u$ is constant.) \end{thm} Next we consider critical pairs in the full space $\mathcal{M}_{\Omega}\times C^\infty(\Sigma)_{> 0}$. Since any $\overline{\sigma}_k$-critical pair $(g,\rho)$ is also $\overline{\sigma}_k$-conformally critical, Theorem~\ref{thm: KM stek conf extremals} and Remark~\ref{rem: n harm and harm} yield a free boundary $(d+1)$-harmonic map $u: ({\Omega},g)\to \mathbb{B}^m$ such that $(g,\rho)\sim (g_u, \rho_u)$. Karpukhin and M\'etras show that $u$ is moreover a \emph{conformal} map, and thus the image is a free boundary minimal submanifold. 
Denoting by $g_{\mathbb{B}^m}$ the Euclidean metric on $\mathbb{B}^m$, we thus have $$u^*g_{\mathbb{B}^m}=|du |^2_g g=g_u$$ and $\rho_u\equiv 1$ (see Equation~\eqref{eq:rho sub phi}). This argument implies the first statement of the following theorem: \begin{thm}\cite[Theorem 5]{KaMe2021}\label{thm:KM global} Let ${\Omega}$ be a compact manifold with boundary and let $(g,\rho)$ be a $\overline{\sigma}_k$-critical pair. Then there exist $m\in \mathbf Z^+$ and a free boundary minimal immersion $u:{\Omega}\to \mathbb{B}^m$ such that $(g,\rho)\sim (g_u, 1)$. Conversely, if $u:{\Omega}\to \mathbb{B}^m$ is a free boundary minimal immersion, then $(u^*g_{\mathbb{B}^m},1)$ is a $\overline{\sigma}_k$-critical pair, where $k$ is the spectral index of $u$. \end{thm} If $(g, 1)$ is a $\overline{\sigma}_k$-critical pair, respectively a $\overline{\sigma}_k$-conformally critical pair, then $g$ is a $\overline{\sigma}_k$-critical metric, respectively a $\overline{\sigma}_k$-conformally critical metric. In view of the results above, it is natural to ask whether the converse holds: \begin{ques}\label{ques: conf ext metric vs pair} If $g$ is a $\overline{\sigma}_k$-(conformally) critical metric, is $(g,1)$ necessarily a $\overline{\sigma}_k$-(conformally) critical pair? \end{ques} We wish to thank Karpukhin and M\'etras for clarifying the discussion below. Concerning conformally critical metrics, the second statement in Theorem~\ref{thm: KM stek conf extremals} and Theorem~\ref{thm: higher conf ext metric} together yield an affirmative answer to Question~\ref{ques: conf ext metric vs pair}, provided that $k$ coincides with the spectral index of the free boundary map in the latter theorem. 
In case $g$ is a $\overline{\sigma}_k$-conformally maximising metric, an affirmative answer would thus follow from an affirmative answer to the following open question: \begin{ques}\label{ques: conf spec simple} Given a compact manifold ${\Omega}$ with boundary and a conformal class $[g]$ of Riemannian metrics on ${\Omega}$, define $$\overline{\sigma}_k^*({\Omega}, [g])=\sup_{g'\in [g]}\, \overline{\sigma}_k({\Omega},g').$$ Is $$\overline{\sigma}_k^*({\Omega},[g])<\overline{\sigma}_{k+1}^*({\Omega}, [g]) \mbox{\,\,for all}\,\,k?$$ \end{ques} The analogous question for the Laplacian on closed manifolds was answered affirmatively by Colbois and El Soufi \cite{CoEl2003}. In the case of globally critical metrics, the conclusion of the first statement of Theorem~\ref{thm:KM global} remains valid if one only assumes the a priori weaker hypothesis that $g$ is a $\overline{\sigma}_k$-critical metric. Thus, as in the case of conformally critical metrics, we can conclude from Theorem~\ref{thm:KM global} that if $g$ is a $\overline{\sigma}_k$-critical metric satisfying $\sigma_k({\Omega},g)>\sigma_{k-1}({\Omega},g)$, then $(g,1)$ is a $\overline{\sigma}_k$-critical pair. \begin{remark} As we were completing this survey, the extensive article \cite{KaSt2022} appeared on the arXiv. This article contains many interesting results addressing harmonic maps and eigenvalue optimisation in dimension greater than two. \end{remark} \section{Discretisation and Steklov eigenvalues of graphs} \label{Section: discretisation} To facilitate the study of the eigenvalues of the Laplacian on a Riemannian manifold, numerous papers in the 1980s (for example \cite{Bu1982,Br1986, Ka1985}) have shown that it is very useful to introduce rough discretisations of manifolds. One can then compare the spectrum of the manifold (in particular the first nonzero eigenvalue) to the spectrum of the combinatorial Laplacian on a graph associated to the discretisation. 
This idea of rough discretisation has applications not only for the study of the spectrum but also for other geometric concepts. This is very well explained in the two books by Chavel \cite{Ch1993, Ch2001}. See in particular \cite[Chapter V]{Ch2001} for isoperimetric inequalities and \cite[Chapter VI]{Ch2001} for Sobolev inequalities. \medskip More generally, the question behind this approach is the following: if two metric spaces are ``close in large scale'', in some sense, what do they have in common? One potential meaning of ``close in large scale'' is the existence of a rough quasi-isometry: if $(X,d_X),(Y,d_Y)$ are two metric spaces, a map $\phi:X\rightarrow Y$ is a rough quasi-isometry if there exist constants $a\ge 1,b\ge 0$ and $\epsilon>0$ such that $$ a^{-1}d_X(x_1,x_2)-b\le d_Y(\phi(x_1),\phi(x_2))\le ad_X(x_1,x_2)+b $$ for all $x_1,x_2 \in X$ and $$\bigcup_{x\in X}B(\phi(x),\epsilon)=Y.$$ Observe that two compact metric spaces are always roughly quasi-isometric, so when we consider compact metric spaces, we will need more information. Throughout this section, the metric space $X$ will be a closed Riemannian manifold $M$ or a compact Riemannian manifold with boundary $\Omega$, and $Y$ will be a subset of points of $X$ equipped with a graph structure. The goal is to compare the spectrum of a differential operator on $X$ with the spectrum of a combinatorial operator on $Y$. It is important to note that this type of discretisation is different from those used in numerical analysis, which were extended to Riemannian manifolds by different authors (see the initial contribution by Dodziuk and Patodi \cite{DoPa1976} and the more recent approaches of Burago, Ivanov and Kurylev \cite{BuIvKu2015} and Aubry \cite{Au2013}). In the situation we will describe, the mesh $\epsilon>0$ of the discretisation $Y$ will be fixed. 
For the finite element method and its extension to the Riemannian setting, the mesh tends to $0$, and the quotient between the $k$-th eigenvalue of the differential operator on the Riemannian manifold and the $k$-th eigenvalue of its combinatorial approximation tends to $1$ as $\epsilon \to 0$ under suitable assumptions (\cite{DoPa1976}). In \cite{BuIvKu2015} and \cite{Au2013}, the authors make the rate of convergence explicit via universal functions depending on the geometry of the manifold $M$ (bounds on the sectional curvature, injectivity radius, diameter). \medskip The outline of this section is as follows. \begin{itemize} \item Subsection~\ref{Discretisation and spectrum}: Discretisation and spectrum. We will in particular recall the case of the Laplacian before looking at the case of the Steklov problem on graphs. \item Subsection~\ref{Discrete}: Study of the spectrum of the Steklov problem on a graph. \item Subsection~\ref{Diameter}: Lower bounds in terms of the diameter. \item Subsection~\ref{UpperGraphs}: Upper bounds for subgraphs. \end{itemize} \subsection{Discretisation and spectrum.} \label{Discretisation and spectrum} \subsubsection{Case of the Laplacian.}\label{subsec:disc lap} We will recall the case of the Laplacian, because this is what we imitate in the case of the Steklov problem on graphs. First, note that the study of the spectrum of the combinatorial Laplacian is a well-established subject with a large literature (see for example \cite{BrHa2012}), which is not the case for the Steklov problem on graphs. We use a coarse and uniform discretisation that is not sensitive to the local geometry. 
We fix a (large) family of compact Riemannian manifolds without boundary given by a set of geometric parameters: bounds on the curvature in order to avoid local deformations that are invisible to a discretisation, and a lower bound on the injectivity radius in order to guarantee that the volumes of small balls are comparable (this last point could be avoided by using weighted graphs). Then we choose a mesh (smaller than the injectivity radius) for the discretisation (the distance between the points of the discretisation) and discretise each manifold using the same mesh. The goal is then to establish a uniform comparison between the spectra of the smooth Laplacian on the manifolds and those of the combinatorial Laplacians on the corresponding graphs. We refer to the paper~\cite{Ma2005} by Mantuano, which we describe briefly below. \medskip We associate a graph to a Riemannian manifold following \cite[Section V.3.2]{Ch2001}. Let $(M^n,g)$ be a connected closed $n$-dimensional Riemannian manifold. We denote by $d_M$ the usual Riemannian distance. A discretisation of $M$, of mesh $\epsilon > 0$, is a graph $\Gamma = (V, E)$ such that the set $V$ of vertices is a maximal $\epsilon$-separated subset of $M$. That is, for any $v, w \in V$ with $v\neq w$, we have $d_M(v,w) \geq \epsilon$, and $\bigcup_{v \in V} B(v,\epsilon) = M$. Moreover, the graph structure of $\Gamma$ is entirely determined by the collection of neighbours, which we define as follows. For each $v \in V$, a point $w \in V$ is declared to be a neighbour of $v$ (that is, $v\sim w$ is an edge of $\Gamma$) if $ 0 < d_M(v,w) < 3 \epsilon$; see \cite[p.140]{Ch2001}. In the sequel, we denote by $N(v)$ the set of neighbours of $v$. \medskip We denote by $i$, $1\le i\le \vert V\vert$, the vertices of $\Gamma$. If there is an edge between $i$ and $j$, we denote this by $i\sim j$. By functions on $\Gamma$ we mean functions from $V$ to $\R$; the set of functions forms a vector space of dimension $\vert V\vert$. 
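To make the construction concrete, here is a minimal numerical sketch (our own illustration; the choice $M=\mathbb{S}^1$ and the mesh value are ours): it builds a discretisation of the unit circle with the $3\epsilon$ rule above and computes the low eigenvalues of the combinatorial Laplacian defined in the next paragraph (in matrix form, $D-A$, with $D$ the diagonal degree matrix and $A$ the adjacency matrix).

```python
import numpy as np

# Illustrative sketch (our own example): discretise the unit circle S^1 with
# mesh eps.  Equally spaced points at arc distance 2*pi/n >= eps form a
# maximal eps-separated set, and two vertices are joined by an edge when
# their arc distance lies in (0, 3*eps).
eps = 0.1
n = int(2 * np.pi / eps)          # number of vertices of the discretisation
arc = 2 * np.pi / n               # actual spacing (>= eps by construction)

A = np.zeros((n, n))              # adjacency matrix of the graph Gamma
for i in range(n):
    for j in range(n):
        d = min(abs(i - j), n - abs(i - j)) * arc   # arc distance on S^1
        if 0 < d < 3 * eps:
            A[i, j] = 1

L = np.diag(A.sum(axis=1)) - A    # combinatorial Laplacian, Delta = D - A
lam = np.sort(np.linalg.eigvalsh(L))

# lambda_0 = 0 and, by the rotational symmetry of the net, the nonzero
# eigenvalues come in pairs, mirroring the spectrum 0, 1, 1, 4, 4, ... of
# S^1 up to constants depending on eps.
print(lam[:5])
```

Taking a smaller mesh refines the graph but, for a fixed $\epsilon$, the comparison between the two spectra is only up to multiplicative constants.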
The Laplacian $\Delta u$ of a function $u$ on $\Gamma$ is given by $$ \Delta u(i)=\sum_{j\sim i}(u(i)-u(j)). $$ This defines a symmetric operator on the space of functions on $\Gamma$, with eigenvalues $$ 0=\lambda_0(\Gamma)< \lambda_1(\Gamma)\le ...\le \lambda_{\vert V\vert-1}(\Gamma). $$ \medskip We have the following important result (\cite[Theorem 3.7]{Ma2005}): if $\mathcal{M}(n, \kappa, r_{0})$ denotes the class of all closed $n$-dimensional Riemannian manifolds with Ricci curvature and injectivity radius uniformly bounded below (i.e. with $Ricci(M,g) \geq -(n-1) \kappa g$, $\kappa \geq 0$ and $Inj(M) \geq r_{0} > 0$) and if we fix $\epsilon < \frac{r_0}{2}$, there exist positive constants $c$, $C$ such that for all manifolds $M$ in $\mathcal{M}(n, \kappa, r_{0})$ and any discretisation $\Gamma$ of $M$ of mesh $\epsilon$, we have \begin{equation} \label{discretelaplacian} c \leq \frac{\lambda_{k}(M) }{\lambda_{k}(\Gamma)} \leq C \end{equation} for all $k < |V|$. The crucial point is this: the constants $c$ and $C$ depend on $n$, $\kappa$, $r_{0}$, and $\epsilon$, but not on the particular manifold or discretisation we consider. \begin{remark} Observe that $|V|$ behaves like the volume of $M$ for a \emph{fixed} mesh size $\epsilon$, for example $\epsilon=\frac{r_0}{4}$, in the sense that $|V|/|M|$ is bounded above and bounded away from zero by constants depending on the bounds on the geometry (curvature, injectivity radius) in the class $\mathcal{M}(n, \kappa, r_{0})$. To see this, we use the fact that the mesh is smaller than the injectivity radius and that the curvature is bounded, so the volumes of balls of radius $\epsilon$ are uniformly comparable. \end{remark} \subsubsection{The Steklov spectrum on a compact Riemannian manifold with boundary.} In \cite{CoGiRa2018}, Colbois, Girouard and Raveendran introduced a discretisation for the Steklov spectrum in the spirit of what was done for the Laplacian. 
In comparison with the case of the Laplacian, the two main difficulties are the presence of a boundary and the necessity to define what plays the role of the Steklov problem on graphs and of its spectrum. \medskip Regarding the Steklov spectrum, the definition considered in \cite{CoGiRa2018} differs slightly from the definitions commonly used more recently (as will be explained below). In \cite{CoGiRa2018}, a graph with boundary is a pair $(\Gamma,B)$ where $\Gamma=(V,E)$ is a finite, simple, connected graph, and $B\subset V$ is a non-empty set of vertices called the boundary. The eigenvalues of the Steklov problem on $(\Gamma,B)$ are defined by \begin{equation} \label{def:var1} \sigma_k(\Gamma,B)= \min_F \max_{u\in F,u\not=0}\frac{\sum_{i\sim j}(u(i)-u(j))^2}{\sum_{i\in B}u(i)^2} \end{equation} where the minimum is taken over all vector subspaces $F$ of dimension $k+1$ of the space of functions on $\Gamma$. That is, in order to define the eigenvalues, we imitate the usual min-max characterisation in the discrete setting, but \emph{without} defining a discrete Dirichlet-to-Neumann operator. Observe that there are exactly $\vert B\vert$ eigenvalues (counted with multiplicity) and that, in the particular case where $B=V$, we recover the spectrum of the usual combinatorial Laplacian. With this definition, a relation between the Steklov spectrum of a Riemannian manifold and the spectrum of its discretisation can be established as follows. 
For $\kappa\geq 0$ and $r_0\in (0,1)$, we introduce the space $\mathcal M(d,\kappa,r_0)$ of Riemannian manifolds $\Omega$ of dimension $d+1$ with non-empty boundary and with the following properties: \begin{itemize} \item The boundary $\Sigma$ admits a neighbourhood which is isometric to the cylinder $[0,1]\times\Sigma$, with the boundary corresponding to $\{0\}\times\Sigma$; \item The Ricci curvature of $\Omega$ is bounded below by $-d\kappa$; \item The Ricci curvature of $\Sigma$ is bounded below by $-(d-1)\kappa$; \item For each point $p\in \Omega$ such that $d(p,\Sigma)>1$, $\mbox{inj}_{\Omega}(p)>r_0;$ \item For each point $p\in \Sigma$, $\mbox{inj}_\Sigma(p)>r_0.$ \end{itemize} The first condition, requiring that a neighbourhood of the boundary be cylindrical, seems to be very strong. However, in many applications, it suffices to have a Riemannian manifold which is locally quasi-isometric (in the Riemannian sense) to a cylinder near the boundary. \medskip To discretise the elements $\Omega \in \mathcal M(d,\kappa,r_0)$ we first fix the mesh $\epsilon\in(0, \frac{r_0}{4})$. We choose a maximal $\epsilon$-separated set $V_{\Sigma}$ on the boundary $\Sigma$ of $\Omega$, that is, a discretisation of the boundary. Then we introduce a copy $V'_{\epsilon}$ of $V_{\Sigma}$ using the fact that $\Omega$ is cylindrical near the boundary: $$ V'_{\epsilon}=\{4\epsilon\} \times V_{\Sigma}. $$ We then introduce a maximal $\epsilon$-separated set $V_{I}$ on $\Omega \setminus ((0,4\epsilon)\times \Sigma)$ such that $V'_{\epsilon} \subset V_I$. The discretisation is then given by the set $V=V_{\Sigma} \cup V_I$. We associate to $V$ the structure of a graph with boundary $B=V_{\Sigma}$. The edges are of two types. First, as in Subsection~\ref{subsec:disc lap}, we join two vertices $v$ and $w$ by an edge if $d_{\Omega}(v,w)<3\epsilon$, where $d_{\Omega}$ denotes the Riemannian distance on $\Omega$. 
Secondly, we join each vertex $(4\epsilon,v)$ in $V'_{\epsilon}$ to the vertex $(0,v)\in B$. We thus obtain a graph $\Gamma=(V,E)$ with boundary $B$. With this definition, we obtain a uniform comparison between the first nonzero Steklov eigenvalue of $\Omega \in \mathcal M(d,\kappa,r_0)$ and the first nonzero eigenvalue of the Steklov problem on the associated graph with boundary $\Gamma=(V,E)$ \cite[Theorem 3]{CoGiRa2018}: given $\epsilon \in (0,\frac{r_0}{4})$, there exist two constants $A_1,A_2$ depending on $\kappa, r_0,d,\epsilon$ such that for any discretisation of a manifold $\Omega \in \mathcal M(d,\kappa,r_0)$, $$ A_1 \le \frac{\sigma_1(\Omega)}{\sigma_1(\Gamma)}\le A_2, $$ where $\Gamma$ is the graph associated to the discretisation. \medskip If we want to take all the eigenvalues into account, we have, for $1\le k \le \vert B\vert-1$ (with $B$ the boundary of $\Gamma$), \begin{equation} \label{discretesteklov} \frac{A_1}{k} \le \frac{\sigma_k(\Omega)}{\sigma_k(\Gamma)}\le A_2. \end{equation} We see that for large $k$, the inequality is weaker than one might expect in view of Inequality (\ref{discretelaplacian}) in the case of the discrete Laplacian. \smallskip This result, \cite[Theorem 3]{CoGiRa2018}, has three immediate applications (\cite[Theorems 4, 5 and 6]{CoGiRa2018}). In particular, Theorem 6 shows that one can construct a family $\Omega_l$ of surfaces with connected boundary $\Sigma_l$ such that $\lim_{l\to \infty} \sigma_1(\Omega_l)\vert \Sigma_l\vert=\infty$. \begin{ques}\label{ques:discretcomp} For all $1\le k \le \vert B\vert-1$, is it possible to obtain an inequality of the form $$ A_1 \le \frac{\sigma_k(\Omega)}{\sigma_k(\Gamma)}\le A_2? $$ \end{ques} \subsection{Study of the spectrum of the Steklov problem on a graph.} \label{Discrete} In the previous subsection we defined a possible concept of a graph with boundary, which was adapted to the discretisation of a compact Riemannian manifold with boundary. 
We did not define the Steklov problem on a graph; rather, we directly defined the eigenvalues via the min-max characterisation, using the natural definition of the Rayleigh quotient. We can extend this point of view to all finite graphs: we just have to choose the boundary of a given finite graph among the vertices of the graph. The two extreme situations are the case where the boundary consists of a single vertex, in which case the only eigenvalue is $0$, and the case where the boundary consists of all the vertices of the graph, in which case we recover the usual combinatorial Laplacian; this point of view was developed by Colbois and Girouard in \cite{CoGi2014}. A particular class of Riemannian manifolds with boundary consists of the manifolds obtained as subdomains of a larger manifold. There is a corresponding concept for graphs with boundary, which are obtained as subgraphs of a larger graph. We will consider these extensively in what follows. In this case, we cannot choose the boundary: it is precisely determined by the situation. Moreover, there are \emph{no} edges between any two vertices in the boundary. In this particular case, there is a definition of a Steklov problem on graphs, whose eigenvalues are the Steklov eigenvalues. \begin{defn} \label{def: graphsteklov} Let $G = (V,E)$ be a graph and let $\Omega$ be a finite domain of $V$ (that is, a finite \emph{connected} subset of $V$). The vertex boundary of $\Omega$ is $B=\delta \Omega=\{i\in V\setminus \Omega: \exists j\in \Omega,\ i\sim j\}$. The set of edges between two subsets $\Omega_1,\Omega_2 \subset V$ is $E(\Omega_1,\Omega_2)=\{i\sim j,\ i\in \Omega_1,j\in \Omega_2\}$. Let $\Omega \subset V$ with $\Omega\not=V$. We define a graph with boundary $\Gamma=(\bar \Omega , E')$ associated to $\Omega$ by $$ \bar{\Omega}=\Omega \cup \delta \Omega;\ E'=E(\Omega,\bar{\Omega}). $$ This defines a graph $\Gamma$ with vertex set $\bar{\Omega}$ and boundary $B=\delta \Omega$. By definition, there is no edge between any two elements of $B$. 
\end{defn} \begin{remark} Note that some authors do not assume $\Omega$ to be connected; see for example \cite{HuHuWa2017}, page 3. But in that case, the multiplicity of the eigenvalue $0$ is equal to the number of connected components of the graph (see \cite[Proposition 3.2]{HuHuWa2017}), and in order to obtain nontrivial lower bounds, $\sigma_1$ denotes the first nontrivial eigenvalue (\cite{HuHuWa2017}, page 4). \end{remark} \begin{ex}\label{ex: graph1} Let $G$ be the lattice $\mathbb Z^2$ with its usual graph structure (a vertex $(p,q)$ has the $4$ neighbours $(p+1,q)$, $(p-1,q)$, $(p,q+1)$, $(p,q-1)$). Let $\Omega$ be the subset of $\mathbb Z^2$ defined by $$ \Omega=\{(p,q)\in \mathbb Z^2: -1 \le p,q\le 1\}. $$ We have $$ \delta \Omega=\{(-2,q), (2,q), (p,-2), (p,2):-1 \le p,q\le 1\} $$ and $\bar \Omega= \Omega \cup \delta \Omega$. In this case, each point of $\delta \Omega$ is incident to exactly one edge. \end{ex} For domains in ambient graphs (as in Definition \ref{def: graphsteklov}), there is a natural notion of Steklov problem corresponding to the definition in the continuous case. As before, the Laplacian $\Delta u$ of a function $u$ is given by $\Delta u(i)=\sum_{j\sim i}(u(i)-u(j))$ and we introduce the normal derivative at a point $i\in B$ given by $\partial_{\nu}u(i)=\frac{\partial u}{\partial \nu}(i)=\sum_{j\sim i}(u(i)-u(j))$. Then the Steklov problem on the graph with boundary $(\Gamma,B)$ consists of finding $\sigma \in \R$ for which there exists a non-trivial solution of $$\begin{cases} \Delta u(i)=0&\text{if } i\in B^c,\\ \partial_{\nu}u(i)=\sigma u(i)&\text{if } i\in B. \end{cases}$$ The numbers $\sigma$ are the Steklov eigenvalues and we have $$ 0=\sigma_0< \sigma_1\le ...\le \sigma_{\vert B\vert-1}. 
$$ The eigenvalues have the variational characterisation \begin{equation} \label{def:var2} \sigma_k(\Gamma,B)= \min_F \max_{u\in F,u\not=0}\frac{\sum_{i\sim j}(u(i)-u(j))^2}{\sum_{i\in B}u(i)^2} \end{equation} where the minimum is taken over all vector subspaces $F$ of dimension $k+1$ of the space of functions on $\Gamma$. Note that we recover the notion of the Rayleigh quotient introduced in (\ref{def:var1}). \medskip In this situation, one can also define a discrete Dirichlet-to-Neumann operator (see for example \cite[(1.3)]{HuHuWa2017}, and \cite{Per2019} for a more general situation): $$ \Lambda: \mathbb R^{\delta \Omega}\rightarrow \mathbb R^{\delta \Omega} $$ given by $$ \Lambda v(i)= \partial_{\nu}u_v(i) $$ where $u_v$ denotes the harmonic extension of $v$ to $\Omega$. \subsubsection{A Cheeger-type inequality.} \label{cheegerdiscrete} To our knowledge, the first estimate for $\sigma_1$ in the discrete case was a Cheeger-type inequality obtained independently by Hassannezhad and Miclo in \cite{HaMi2020} (as a particular case of Theorem A) and by Hua, Huang and Wang in \cite{HuHuWa2017}. With the above notation, the authors consider a finite domain $\Omega \subset V$ and establish a Cheeger-type inequality following Escobar \cite{Es1997} and Jammes \cite{Ja2015}. \medskip Let $\bar \Omega =\Omega \cup \delta \Omega$. If $A\subset \bar \Omega$, we denote by $\partial A$ the edge boundary of $A$, that is, the set of edges $E(A,A^c)$, and by $\partial_{\Omega}A$ the subset $\partial A\cap E(\Omega,\bar \Omega)$. For example, in Example \ref{ex: graph1}, if we take $A\subset \bar \Omega$ given by $A=\{(0,2)\}$, then $\partial A$ consists of the $4$ edges emanating from $(0,2)$, but $\partial_{\Omega}A$ consists only of the edge between $(0,2)$ and $(0,1)$. 
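These counts are easy to verify by hand; the following small sketch (our own illustration) checks them, together with the description of $\delta \Omega$ in Example~\ref{ex: graph1}:

```python
from itertools import product

# Our own check of the edge-boundary counts for the example above:
# Omega = {(p,q) : -1 <= p,q <= 1} inside the ambient lattice Z^2, A = {(0,2)}.

def nbrs(v):
    """The 4 neighbours of a vertex of Z^2."""
    p, q = v
    return [(p + 1, q), (p - 1, q), (p, q + 1), (p, q - 1)]

Omega = set(product(range(-1, 2), repeat=2))
delta_Omega = {w for v in Omega for w in nbrs(v) if w not in Omega}
Omega_bar = Omega | delta_Omega

A = {(0, 2)}
# Edge boundary dA = E(A, A^c), stored as unordered pairs of vertices of Z^2.
dA = {frozenset((v, w)) for v in A for w in nbrs(v) if w not in A}
# E(Omega, Omega_bar): edges with one endpoint in Omega, the other in Omega_bar.
E_Omega = {frozenset((v, w)) for v in Omega for w in nbrs(v) if w in Omega_bar}
dA_Omega = dA & E_Omega

print(len(delta_Omega), len(dA), len(dA_Omega))   # -> 12 4 1
```

The same computation applies verbatim to any finite domain of $\mathbb Z^2$ and any subset $A\subset \bar\Omega$.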
As in the continuous case (see (\ref{ctescobar}) and (\ref{ctjammes})), besides the classical Cheeger constant $$ h_C(\bar \Omega)= \min_{\vert A\vert \le \frac{1}{2} \vert \bar \Omega\vert} \frac{\vert \partial_{\Omega}A \vert }{\vert A\vert} $$ one has to introduce two other constants: the Escobar-type Cheeger constant $$ h_E(\bar \Omega)=\min_{A \subset \bar \Omega: \vert A \cap \delta \Omega\vert \le \frac{1}{2}\vert \delta \Omega \vert} \frac{\vert \partial_{\Omega}A \vert}{\vert A\cap \delta \Omega \vert}, $$ and the Jammes-type Cheeger constant $$ h_J(\bar \Omega)=\min_{A \subset \bar \Omega: \vert A \vert \le \frac{1}{2}\vert \bar \Omega \vert} \frac{\vert \partial_{\Omega}A \vert}{\vert A\cap \delta \Omega \vert}. $$ \medskip The main result is a Cheeger-Jammes type inequality (\cite[Theorem 1.3]{HuHuWa2017} and \cite[Theorem A]{HaMi2020}): $$ \sigma_1(\bar \Omega) \ge \frac{1}{2} h_C(\bar \Omega) h_J(\bar \Omega). $$ This inequality has the same defect as in the continuous case: one can deform a graph far from the boundary and make $h_C(\bar \Omega)$ very small without significantly affecting the Steklov spectrum. However, there are situations where the inequality is asymptotically sharp \cite[Example 5.1]{HuHuWa2017}. \medskip In \cite[Proposition 1.4]{HuHuWa2017}, the authors also establish an upper bound involving the isoperimetric constant $h_E(\bar \Omega)$: $$ \sigma_1(\bar \Omega) \le 2 h_E(\bar \Omega). $$ In \cite[Example 4.2]{HuHuWa2017}, the authors give a situation where this inequality is sharp. In particular, one cannot avoid the factor $2$ in the inequality. \medskip Note that the results of \cite{HaMi2020} and \cite{HuHuWa2017} are more general than what we mention, because they are established for weighted graphs. \medskip In the same spirit as their work in the Riemannian (and measurable) cases, Hassannezhad and Miclo establish a higher order Cheeger-type inequality for finite graphs \cite[Theorem A]{HaMi2020}.
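The Cheeger-type constants above are computable by exhaustive search on small examples, which makes the two inequalities $\sigma_1 \ge \frac{1}{2} h_C h_J$ and $\sigma_1 \le 2 h_E$ easy to test numerically. The sketch below is an illustration only (not taken from \cite{HuHuWa2017} or \cite{HaMi2020}): it treats the path $0,1,\dots,d$ with $\delta\Omega=\{0,d\}$, whose first nonzero Steklov eigenvalue is $2/d$; on a path every edge has an interior endpoint, so $\partial_{\Omega}A$ is simply the edge boundary of $A$.

```python
import itertools
import numpy as np

d = 6                                    # path 0-1-...-d with boundary {0, d}
interior = list(range(1, d))
bdry = [0, d]
verts = interior + bdry
edges = [(i, i + 1) for i in range(d)]   # every edge has an interior endpoint

# sigma_1 via the Dirichlet-to-Neumann matrix (Schur complement of the
# interior block of the graph Laplacian).
pos = {v: k for k, v in enumerate(verts)}
L = np.zeros((len(verts), len(verts)))
for v, w in edges:
    i, j = pos[v], pos[w]
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1
nI = len(interior)
Lam = L[nI:, nI:] - L[nI:, :nI] @ np.linalg.solve(L[:nI, :nI], L[:nI, nI:])
sigma1 = np.sort(np.linalg.eigvalsh(Lam))[1]

# Brute-force Cheeger-type constants: here every edge lies in
# E(Omega, bar Omega), so |partial_Omega A| is the number of edges leaving A.
def cut(A):
    return sum(1 for v, w in edges if (v in A) != (w in A))

hC = hE = hJ = float("inf")
for r in range(1, len(verts)):
    for A in map(set, itertools.combinations(verts, r)):
        c, nb = cut(A), len(A & set(bdry))
        if 2 * len(A) <= len(verts):
            hC = min(hC, c / len(A))
            if nb > 0:
                hJ = min(hJ, c / nb)
        if 0 < nb and 2 * nb <= len(bdry):
            hE = min(hE, c / nb)
```

For $d=6$ this gives $\sigma_1 = 1/3 = 2/d$, $h_C = 1/3$ and $h_J = h_E = 1$, so the lower bound $\frac{1}{2}h_C h_J = 1/6$ and the upper bound $2h_E = 2$ both hold, with room to spare on the upper bound.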
Here also, as in the continuous case (see \ref{highercheegermanifold}), there is a term in $k^6$ in the denominator of the inequality. Hassannezhad and Miclo also obtain a better inequality when estimating $\sigma_{2k+1}$ with respect to the Cheeger constant of order $k+1$ \cite[Proposition 1]{HaMi2020}. \begin{ques}\label{ques:cheegerdiscrete} Same question as in the Riemannian case: is it possible to have a higher order Cheeger inequality without the term in $k$? \end{ques} \subsection{Lower bound via the diameter.} \label{Diameter} For Riemannian manifolds, there are only a few results concerning lower bounds for the first nonzero Steklov eigenvalue. In the context of the Steklov problem on graphs, lower bounds depending on the extrinsic diameter of the boundary of a graph are established by Perrin in \cite{Per2019}. Let $\Gamma=(V,E)$ be a finite graph and $B \subset V$ its boundary. Then, the diameter of a finite connected graph with boundary $(\Gamma, B)$ is the maximum distance between any two vertices of $(\Gamma, B)$, the distance between two adjacent vertices being $1$. The extrinsic diameter $d_B$ of the boundary $B$ of $\Gamma$ is the maximum distance in $\Gamma$ between any two vertices of $B$. \medskip In \cite[Theorem 1]{Per2019}, it is shown that, for a finite connected graph with boundary $(\Gamma, B)$, $$ \sigma_1(\Gamma,B)\ge \frac{\vert B\vert}{(\vert B\vert-1)^2d_B}. $$ Note that the bound is optimal for $\vert B\vert=2$ and that its proof is very easy. The simplest graph with $\vert B\vert=2$ of diameter $d_B$ is given by $d_B+1$ vertices $0,\dots,d_B$ with edges $i\sim i+1$, $i=0,\dots,d_B-1$, and boundary the two extremities $B=\{0,d_B\}$. The first nonzero eigenvalue $\sigma_1$ of this graph is $\frac{2}{d_B}$. \medskip In \cite[Theorem 2]{Per2019}, a stronger, but much more elaborate, estimate is proved: $$ \sigma_1(\Gamma,B)\ge \frac{\vert B\vert}{\lfloor \frac{\vert B\vert}{2}\rfloor \lceil \frac{\vert B\vert}{2} \rceil d_B}.
$$ Moreover, this bound is sharp for any $B$ as $d_B \to \infty$ in the following sense. In \cite[Lemma 2]{Per2019}, the author constructs for each given $b$ a family of graphs $H^b$ with $\vert B\vert=b$, diameter $d_B$ and first nonzero eigenvalue $\sigma_1(H^b)= \frac{\vert B\vert}{\lfloor \frac{\vert B\vert}{2}\rfloor \lceil \frac{\vert B\vert}{2}\rceil (d_B-2)+b}.$ \medskip Note that from \cite[Theorems 1 and 2]{Per2019} we can deduce a similar estimate for Riemannian manifolds with boundary that can be discretised as described in Section \ref{Discretisation and spectrum}. However, the estimate depends not only on the extrinsic diameter of the boundary, but also on the bounds on the Ricci curvature and on the injectivity radius that we impose. \subsection{Upper bounds for subgraphs.} \label{UpperGraphs} In their paper \cite{HaHu2020}, Han and Hua study the Steklov eigenvalue problem on subgraphs of integer lattices. One can compare this problem with the study of the spectrum of Euclidean domains in the continuous case. This paper leads naturally to the same type of questions for subgraphs of more general lattices. We will briefly describe some recent results about this question because they lead to different approaches to the problem. In \cite{HaHu2020}, the authors consider the lattice $\mathbb Z^n$ with its usual graph structure, and a finite subset $\Omega$ which defines a graph $\bar \Omega$ with boundary $B$. They prove a Brock-type inequality \cite[Theorem 1.2]{HaHu2020}: \begin{equation} \label{In:Brock1} \sum_{i=1}^n \frac{1}{\sigma_i(\bar \Omega)} \ge C_1 \vert \Omega \vert^{1/n}-\frac{C_2}{ \vert \Omega\vert}, \end{equation} with explicit constants $C_1=(64 n^3 \omega_n^{\frac{1}{n}})^{-1}$, where $\omega_n$ is the volume of the unit ball in $\R^n$, and $C_2=\frac{1}{32n}$.
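Since the constants are explicit, Inequality (\ref{In:Brock1}) can be tested numerically. The sketch below is an illustration only; it assumes the boundary convention of Example \ref{ex: graph1} (the edges of $\bar \Omega$ are the ambient edges of $\mathbb Z^n$ having at least one endpoint in $\Omega$) and checks the inequality for a $4\times 4$ square in $\mathbb Z^2$.

```python
import math
import numpy as np

n, m = 2, 4                      # dimension n = 2; Omega is an m x m square
interior = [(p, q) for p in range(m) for q in range(m)]
inset = set(interior)

def nbrs(v):
    p, q = v
    return [(p + 1, q), (p - 1, q), (p, q + 1), (p, q - 1)]

boundary = sorted({w for v in interior for w in nbrs(v) if w not in inset})
verts = interior + boundary
idx = {v: k for k, v in enumerate(verts)}
edges = {tuple(sorted([v, w])) for v in interior for w in nbrs(v)}

# Graph Laplacian and Dirichlet-to-Neumann (Schur complement) matrix.
N, nI = len(verts), len(interior)
L = np.zeros((N, N))
for v, w in edges:
    i, j = idx[v], idx[w]
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1
Lam = L[nI:, nI:] - L[nI:, :nI] @ np.linalg.solve(L[:nI, :nI], L[:nI, nI:])
sig = np.sort(np.linalg.eigvalsh(Lam))           # sig[0] ~ 0, then sig[1], ...

# Explicit constants of the Brock-type inequality of Han-Hua.
omega_n = math.pi                                # volume of the unit disc
C1 = 1.0 / (64 * n**3 * omega_n**(1.0 / n))
C2 = 1.0 / (32 * n)
lhs = sum(1.0 / s for s in sig[1:n + 1])         # 1/sigma_1 + ... + 1/sigma_n
rhs = C1 * len(interior)**(1.0 / n) - C2 / len(interior)
```

Here $\vert \Omega\vert = 16$ is already large enough for the right-hand side to be positive, and the inequality holds comfortably.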
\smallskip Note that in order for the right-hand side to be positive, one needs to have $\vert \Omega \vert > \left(\frac{C_2}{C_1}\right)^{n/(n+1)}$ which is satisfied if $\vert \Omega \vert$ is large enough in terms of $n$. As a corollary, the authors show (\cite[Corollary 1.4]{HaHu2020}) that \begin{equation}\label{In:Brock2} \sigma_1(\bar \Omega) \le \frac{n}{C_1 \vert \Omega \vert^{\frac{1}{n}}- \frac{C_2}{\vert \Omega\vert}}. \end{equation} In particular, as $\vert \Omega \vert \to \infty$, $\sigma_1(\bar \Omega)\to 0$, at least at rate $\frac{C(n)}{\vert \Omega\vert^{\frac{1}{n}}}$. The proof of the theorem is quite tricky. The authors associate to the subgraph a domain of $\R^n$, and they apply a weighted isoperimetric inequality due to Brock to this domain (see Inequality (\ref{thm:brock})). The difficulty is the construction of a convenient domain, which creates a lot of technical problems. \medskip In \cite{Per2021}, Perrin investigates the same type of problem, but with a more geometric viewpoint, in the sense that the author does not use classical results from analysis, such as the Brock inequality, but works directly on the graph itself. The author observes that the crucial point is to control the growth of balls in the graph, which need not be a Euclidean lattice. For example, the Heisenberg lattice $H_3(\mathbb Z)$ of the Heisenberg group $H_3(\mathbb R)$ is convenient. In \cite{Per2021}, the author considers a finite subgraph $\Omega$ of a Cayley graph $\Gamma$ of a group with polynomial growth of order $D$: there exists $C_0=C_0(\Gamma)$ such that, if $B(N)$ is any ball of radius $N$ in $\Gamma$, $$ C_0^{-1}N^D \le \vert B(N)\vert \le C_0N^D $$ for each $N \in \mathbb N^*$. For example, for the lattice $\mathbb Z^n$, $D=n$, but for the lattice $H_3(\mathbb Z)$, $D=4$.
In this setting, Perrin shows that there exists a constant $C(\Gamma)$ such that, if $B$ denotes the boundary of the finite subgraph $\Omega \subset \Gamma$, we have \cite[Theorem 1]{Per2021}: \begin{enumerate} \item If $D \le 2$, $\sigma_1(\bar \Omega) \le C(\Gamma) \frac{1}{\vert B\vert}$. \item If $D >2$, $\sigma_1(\bar \Omega) \le C(\Gamma) \frac{\vert \Omega\vert^{\frac{D-2}{D}}}{\vert B\vert}$. \end{enumerate} \medskip As a corollary \cite[Corollary 2]{Per2021}, Perrin observes that $\sigma_1(\bar \Omega) \le \frac{1}{\vert \bar \Omega\vert^{\frac{1}{D}}}$. \medskip The proof of Perrin's result is completely different from, and simpler than, the proof of Inequality (\ref{In:Brock2}). The idea is to look directly at the graph and to construct a test function for the Rayleigh quotient, using the fact that the growth of the balls is controlled. For the first nonzero eigenvalue $\sigma_1$, this result generalises Inequality (\ref{In:Brock2}), but with a less explicit constant $C(\Gamma)$, and it says nothing for the next eigenvalues: it does not seem possible to use the same method to estimate the higher eigenvalues. However, the higher eigenvalues may be estimated by other means, as in the recent paper \cite{Ts2022} by Tschanz. The main result (\cite[Theorem 5]{Ts2022}) is the following: for a subgraph $\Omega$ of a Cayley graph $\Gamma$ of polynomial growth of order $D\ge 2$, there exists a constant $C(\Gamma)>0$ such that for all $k\le \vert B\vert-1$, \begin{equation}\label{tschanz1} \sigma_k(\bar \Omega)\le C(\Gamma) \frac{1}{\vert B \vert^{\frac{1}{D-1}}}k^{\frac{D+2}{D}}. \end{equation} In particular, for a fixed $k$, we see that a large volume of $\Omega$ implies a small eigenvalue $\sigma_k(\bar \Omega)$ \cite[Corollary 6]{Ts2022}. This comes from the fact that there exists a constant $C=C(D)>0$ such that $\vert B\vert \ge C\vert \Omega\vert^{\frac{D-1}{D}}$ (\cite[Proposition 9]{Ts2022}).
\medskip The proof of the result uses a third method: as in \cite{HaHu2020}, the author does not work directly on the graph. The idea is to associate to $\Gamma$ a complete Riemannian manifold $M$ of dimension $D$ modelled on it and to $\Omega$ a domain $U$ of $M$. Then, one shows that the growth of balls in $M$ is polynomial of order $D$, and the author adapts the method of \cite{CoElGi2011} to obtain a result on the domain $U$. Using \cite{CoGiRa2018}, one can transfer the estimate from $U$ to $\Omega$. \medskip A further question is whether or not we need to have polynomial growth to get such results. This is not the case, as was shown by He and Hua in \cite{HeHu2022}. The authors study the case of subgraphs of a tree, and it is easy to produce trees where the volume of balls grows exponentially, for example the regular tree of degree $\ge 3$. The authors establish a variety of results (Theorems 1.1, 1.2 and 1.5). In particular, Theorem 1.5 says the following: consider a finite tree $\Gamma=(\Omega,E)$ with boundary $B=\delta \Omega$, where the boundary denotes the vertices of degree one. Suppose that $\Gamma$ has degree bounded by $D$. Then, for any $3\le k \le \vert B\vert-1$, \begin{equation}\label{ineghehua} \sigma_k(\bar \Omega)\le \frac{8(D-1)^2(k-1)}{\vert B\vert}. \end{equation} In \cite[Theorem 1.3]{HeHu2022}, the authors obtain an upper bound in terms of the diameter $L$ of the tree: $$ \sigma_1(\bar \Omega)\le \frac{2}{L}, $$ and they characterise the equality case. \medskip This result leads to another natural question: a tree may have exponential growth but is very different from a space of negative curvature like a Cartan-Hadamard manifold, for example the hyperbolic space. Precisely, there does not exist a rough quasi-isometry between a tree and the hyperbolic space. The question is to decide what can be said for subgraphs of Cayley graphs roughly quasi-isometric to a Cartan-Hadamard manifold.
A partial answer to this question is given by Tschanz in \cite{Ts20222} for a finite subgraph of a graph $\Gamma$ roughly quasi-isometric to the hyperbolic space. In order to construct such a graph $\Gamma$, the author uses a tiling of the hyperbolic space by triangles. The vertices $V$ of $\Gamma$ correspond to the triangles of the tiling, and two vertices are joined by an edge when the corresponding triangles are adjacent. By construction, $\Gamma$ is roughly quasi-isometric to the hyperbolic space. In \cite[Theorem 9]{Ts20222}, it is shown that there exists a constant $C(\Gamma)$ such that for all finite subgraphs $\Omega$ of $\Gamma$ with boundary $B$, one has \begin{equation} \label{inegtschanz2} \sigma_k(\bar \Omega) \le C(\Gamma)\frac{k^2}{\vert B\vert^2}. \end{equation} The first idea of the proof is, in some sense, comparable to the proof of \cite{HaHu2020}, that is, to associate a domain of the hyperbolic space to the subgraph. Then Tschanz uses \cite[Theorem 1.2]{CoElGi2011} to get an upper bound for the Steklov spectrum of the domain, and \cite[Theorem 3]{CoGiRa2018} to transfer the estimate from the domain to the graph. This leads to serious technical difficulties: the reason is that the domains are constructed by gluing together hyperbolic triangles, resulting in singularities along their boundaries. Thus the domains are not quasi-isometric in the Riemannian sense to a domain with cylindrical boundary. To solve this difficulty, the idea is to smooth out the domain near the boundary. \medskip There exist numerous Cayley graphs which are roughly quasi-isometric to the hyperbolic space. For example, to each compact surface with curvature $-1$, one can associate such a graph via its fundamental group. It seems natural that the behaviour of the spectrum of subgraphs of roughly quasi-isometric graphs should be comparable. This is not obvious and it leads to the following question. \begin{ques}\label{ques:discretequasiisom} Let $\Gamma_1,\Gamma_2$ be two infinite roughly quasi-isometric graphs.
If there exists a constant $C_1(\Gamma_1)$ such that for each finite subgraph $\Omega$ of $\Gamma_1$, $\sigma_1(\bar \Omega)\le \frac{C_1(\Gamma_1)}{\vert B\vert^{\alpha}}$, with $B$ the boundary of $\Omega$ and $\alpha>0$, is an analogous property also true for the finite subgraphs of $\Gamma_2$? \end{ques} \subsection{Other contributions} New contributions on the spectrum of the Steklov problem on graphs appear regularly, and we will not describe them all here. For example, the setting of weighted finite graphs satisfying the Bakry-\'Emery curvature-dimension condition $CD(K, n)$ with $K > 0$ and $n > 1$ is considered in \cite{ShiYu2022}. These authors also considered a discrete Dirichlet-to-Neumann operator for differential forms on graphs \cite{ShiYu20222}. For infinite graphs, see the preprint \cite{HuHuWa2018}. \section{Steklov spectrum on $p$-forms}\label{stek.forms} There are several different notions of a Dirichlet-to-Neumann map acting on the space $\mathcal{A}(\Sigma)$ of smooth differential forms on the boundary $\Sigma$ of a compact $(d+1)$-dimensional Riemannian manifold ${\Omega}$. The earliest such notion, introduced by Joshi and Lionheart \cite{JoLi2005}, is a pseudo-differential operator $\mathcal{A}^p(\Sigma)\times \mathcal{A}^{d+1-p}(\Sigma)\to \mathcal{A}^{d-p}(\Sigma) \times \mathcal{A}^{p-1}(\Sigma)$. Belishev and Sharafutdinov \cite{BeSh2008} introduced an operator $\Lambda:\mathcal{A}^p(\Sigma)\to\mathcal{A}^{d-p}(\Sigma)$. The article \cite{ShSh2013} by Sharafutdinov and Shonkwiler synthesises these two notions. These definitions are motivated not by spectral problems but rather by inverse problems such as the analogue for forms of Calder\'on's problem, which asks to what extent the Dirichlet-to-Neumann map of a Riemannian manifold with boundary determines the manifold. We will focus here on two notions of a Dirichlet-to-Neumann map for $p$-forms, both of which allow the Steklov spectrum to be generalised to the setting of $p$-forms.
The first, introduced by Raulot and Savo \cite{RaSa2012}, is a non-negative elliptic, self-adjoint, pseudo-differential operator of order one. The second, introduced by Karpukhin \cite{Ka2019}, coincides up to sign with the composition $*\circ \Lambda$, where $\Lambda$ is the operator defined by Belishev and Sharafutdinov and $*$ is the Hodge-$*$ operator of $\Sigma$. This operator is self-adjoint on the space of co-closed forms and has non-negative discrete spectrum on this space. The definition of the Dirichlet-to-Neumann map on functions relies crucially on the uniqueness of the harmonic extension to ${\Omega}$ of functions $f\in C^\infty(\Sigma)$. In defining a Dirichlet-to-Neumann operator on $p$-forms when $p>0$, one must take into account the non-uniqueness of harmonic $p$-forms on ${\Omega}$ that pull back to a given smooth $p$-form on $\Sigma$. Before introducing the operators, we recall the Hodge-Morrey-Friedrichs decomposition of the space $\mathcal{A}^p({\Omega})$ of smooth $p$-forms on ${\Omega}$. See \cite{Sc1995} for an extensive introduction to the Hodge-Morrey-Friedrichs decomposition and its applications to boundary value problems or \cite{Ka2019} for an overview of the aspects especially relevant here. Recall that the Hodge Laplacian acting on the space $\mathcal{A}^p({\Omega})$ of smooth $p$-forms on a compact Riemannian manifold ${\Omega}$ is given by $\Delta=(d\delta+\delta d)$. 
\begin{nota}\label{hodge_decomp} For any subspace ${\mathcal W}$ of $\mathcal{A}({\Omega})$, we denote by ${\mathcal W}_D$ and ${\mathcal W}_N$ the subspaces of elements of ${\mathcal W}$ satisfying Dirichlet and Neumann boundary conditions, respectively, i.e., $${\mathcal W}_D({\Omega})=\{\omega\in {\mathcal W}({\Omega}): \, i^*\, \omega =0\}$$ where $i: \Sigma\to {\Omega}$ is the inclusion, and $${\mathcal W}_N({\Omega})=\{\omega\in {\mathcal W}({\Omega}): \, \nu \lrcorner\, \omega =0\}$$ where $\nu$ is the outward pointing unit normal along $\Sigma$ and $\lrcorner$ denotes the interior product. Let $\delta$ denote the adjoint of the exterior differential operator. Set $$\mathcal{H}^p({\Omega})=\{\omega\in \mathcal{A}^p({\Omega}): d\omega=0=\delta\omega\}.$$ Elements of $\mathcal{H}^p({\Omega})$ are called harmonic fields. \end{nota} The Hodge-Morrey-Friedrichs decomposition says that \begin{equation}\label{hmf}\mathcal{A}^p({\Omega}) = d(\mathcal{A}_D^{p-1}({\Omega})) \oplus \mathcal{H}^p({\Omega}) \oplus \delta(\mathcal{A}_N^{p+1}({\Omega})).\end{equation} In contrast to the case of closed manifolds, the space of harmonic $p$-fields is properly contained in the space of harmonic $p$-forms and both are infinite-dimensional. However, $\mathcal{H}^p_N({\Omega})$ is finite-dimensional and the Hodge-Morrey-Friedrichs decomposition further says that every de Rham cohomology class is uniquely represented by an element of $\mathcal{H}^p_N({\Omega})$: $$H^p({\Omega};\R)\simeq \mathcal{H}^p_N({\Omega}).$$ Similarly, the relative cohomology classes of ${\Omega}$ are represented by $\mathcal{H}^p_D({\Omega})$: $$H^p({\Omega},\Sigma;\R)\simeq \mathcal{H}^p_D({\Omega}).$$ \subsection{Raulot and Savo's Dirichlet-to-Neumann operator and its spectrum} \begin{defn}\label{dtnp}\cite{RaSa2012} Let $\mathcal{A}^p(\Sigma)$ be the space of smooth $p$-forms on $\Sigma$.
Given $\eta\in \mathcal{A}^p(\Sigma)$, define the \emph{harmonic tangential extension} $\widehat{\eta}$ of $\eta$ to be the unique $p$-form on ${\Omega}$ satisfying $$\begin{cases} \lap \widehat{\eta}=0\\i^*\widehat{\eta}=\eta\\\nu\lrcorner\,\widehat{\eta}=0 \end{cases}$$ Then Raulot and Savo's Dirichlet-to-Neumann operator $\DtN^{\rs}_p:\mathcal{A}^p(\Sigma)\to \mathcal{A}^p(\Sigma)$ is defined by $$\DtN^{\rs}_p(\eta)=\nu\lrcorner \,d\widehat{\eta}.$$ \smallskip Observe that when $p=0$, $\DtN^{\rs}_p$ coincides with $\mathcal{D}: C^\infty(\Sigma)\to C^\infty(\Sigma)$. For arbitrary $p$, we will denote the eigenvalues of $\DtN^{\rs}_p$ by $$0\leq \sigma^{\rs}_{1,p}({\Omega})\leq \sigma^{\rs}_{2,p}({\Omega})\leq \dots.$$ \end{defn} \begin{remark}\label{sighatnota}~ (i) Zero occurs as an eigenvalue of $\DtN^{\rs}_p$ if and only if $H^p({\Omega};\R)\neq 0$. In this case the eigenvalue zero has multiplicity equal to the Betti number $\beta_p({\Omega})=\dim H^p({\Omega};\R)$. In particular, the indexing convention introduced above on the eigenvalues is inconsistent with the convention we have been using in the case of functions, where $\sigma_0({\Omega})=0$ and $\sigma_1({\Omega})$ is the first non-zero eigenvalue. In this case, we have $\sigma^{\rs}_{k,\,0}({\Omega})= \sigma_{k-1}({\Omega})$. We will denote the non-zero eigenvalues of $\DtN^{\rs}_p$ by \begin{equation}\label{hsig}\widehat{\sigma}^{\,\rs}_{k,p}({\Omega}):=\sigma^{\rs}_{\beta_p({\Omega})+k,\,p}({\Omega}).\end{equation} Under the assumption that ${\Omega}$ is connected, we have \begin{equation}\label{hat0}\widehat{\sigma}^{\,\rs}_{k,0}({\Omega})=\sigma_k({\Omega}).\end{equation} (ii) If one replaces $\lap\widehat{\eta}=0$ by $\lap\widehat{\eta}=\alpha\widehat{\eta}$ in Definition~\ref{dtnp}, where the parameter $\alpha$ lies in $\C \setminus [0,\infty)$, then one obtains a parametrised family of invertible pseudo-differential self-adjoint operators introduced earlier by Carron \cite{Ca2002}.
\end{remark} The variational characterisation of the eigenvalues is given by: \begin{equation} \sigma^{\rs}_{k,p}({\Omega})=\inf_{V}\,\sup_{0\neq\varphi\in V}\,\frac{\|d\varphi\|_{L^2({\Omega})}^2+\|\delta\varphi\|_{L^2({\Omega})}^2}{\|i^*\varphi\|_{L^2(\Sigma)}^2 }.\end{equation} Here $V$ runs over the collection of all $k$-dimensional subspaces of $\mathcal{A}_N^p({\Omega})$. (See \cite{RaSa2012}.) The articles \cite{Ka2017}, \cite{Kw2016}, \cite{Mi2021}, \cite{RaSa2012}, \cite{RaSa2014}, \cite{ShYu2016}, \cite{ShYu2017}, \cite{YaYu2017} and \cite{YaYu2017_2} contain many interesting geometric bounds, both from above and below, for the eigenvalues. We give only a sampling here. \medskip \subsubsection{Upper bounds in terms of the isoperimetric ratio.}~ A compact Riemannian manifold ${\Omega}$ with boundary $\Sigma$ is said to be \emph{harmonic} if the mean value of any harmonic function on ${\Omega}$ equals its mean value on $\Sigma$. \begin{thm}\label{rsisoper} Let ${\Omega}$ be a $(d+1)$-dimensional compact oriented Riemannian manifold with smooth boundary $\Sigma$. \begin{enumerate} \item {\rm (Raulot-Savo \cite{RaSa2012}.)} $$\sigma^{\rs}_{1,d}({\Omega})\leq \frac{|\Sigma|}{|{\Omega}|}.$$ If equality holds, then ${\Omega}$ must be harmonic. (As a partial converse, if ${\Omega}$ is harmonic, then $\frac{|\Sigma|}{|{\Omega}|}$ lies in the spectrum of $\DtN^{\rs}_d$.) \item {\rm (Raulot-Savo \cite{RaSa2014}.)} If ${\Omega}$ is a Euclidean domain then $$\sigma^{\rs}_{1,p}({\Omega})\leq \left(\frac{p+1}{d+1}\right)\frac{|\Sigma|}{|{\Omega}|}$$ for $p=1, \dots, d$. The inequality is strict when $p<\frac{d+1}{2}$. When $p\geq \frac{d+1}{2}$, equality holds iff ${\Omega}$ is a Euclidean ball.
\end{enumerate} \end{thm} For the equality statement in (1), the authors show that ${\Omega}$ is harmonic if and only if the ``mean-exit time'' function $E$ on ${\Omega}$, given as the solution of $\Delta E\equiv 1$ on ${\Omega}$ and $E=0$ on $\Sigma$, has constant normal derivative. In this case, $*dE$ is an eigenform with eigenvalue $\frac{|\Sigma|}{|{\Omega}|}$. Yang and Yu generalised the inequality in item (2), and Shi and Yu strengthened it further: \begin{thm}\label{yyisoper} Let ${\Omega}$ be a $(d+1)$-dimensional compact oriented Riemannian manifold with smooth boundary $\Sigma$. Suppose that the space of parallel exact 1-forms on ${\Omega}$ has dimension $m>0$. Then: \begin{enumerate} \item {\rm (Yang-Yu \cite[Theorem 1.2]{YaYu2017_2}.)} \begin{equation}\label{yyiso}\widehat{\sigma}^{\,\rs}_{k,p}({\Omega})\leq \frac{C_{m-1}^{p}}{C_m^{p+1}+1 -k}\,\frac{|\Sigma|}{|{\Omega}|}\end{equation} for $p=1,\dots,m-1$ and $k=1,2,\dots, C_m^{p+1}.$ Here $C_k^\ell$ denotes the binomial coefficient $\binom{k}{\ell}$. \item {\rm (Shi-Yu \cite[Theorem 1.3]{ShYu2016}.)} $\widehat{\sigma}^{\,\rs}_{1,p}({\Omega})+\dots +\widehat{\sigma}^{\,\rs}_{C_m^{p+1},p}({\Omega}) \leq C_{m-1}^p \frac{|\Sigma|}{|{\Omega}|}$ for $p=1,\dots,m-1$. \end{enumerate} \end{thm} \begin{ques}\label{ques:yangyusharp} Is Yang-Yu's inequality sharp when $k=1$ and $p<\frac{d+1}{2}$? Is it sharp when $k>1?$ \end{ques} \medskip \subsubsection{Generalising the Hersch-Payne-Schiffer inequality.}~ \begin{nota}\label{nota.lap} Given a compact Riemannian manifold $\Sigma$ with $b$ connected components, we denote the eigenvalues of the Laplace-Beltrami operator of $\Sigma$ by $$0=\lambda_0(\Sigma)\leq\lambda_1(\Sigma)\leq \dots $$ Note that the first non-zero eigenvalue is $\lambda_{b}(\Sigma)$.
\end{nota} The well-known Hersch-Payne-Schiffer \cite{HePaSc1975} inequality states for simply-connected plane domains ${\Omega}$ that \begin{equation}\label{eqhps} \sigma_j({\Omega})\sigma_k({\Omega})L(\partial{\Omega})^2\leq \lambda_{j+k}(S^1)=\begin{cases}\pi^2(j+k)^2, \,\,\,\,\,\, \, \,\,\,\,\,\,\,j+k\,\mbox{even}\\ \pi^2(j+k-1)^2, \,\,\, j+k\, \,\mbox{odd}\end{cases}\end{equation} The inequality was proven by the clever trick of considering the product $R(u)R(v)$ of the Steklov Rayleigh quotients of suitably chosen functions $u,v\in C^\infty(\Sigma)$ whose harmonic extensions $\widehat{u}$ and $\widehat{v}$ to ${\Omega}$ are harmonic conjugates. This method is of course special to dimension two. Surprisingly, by using the Steklov spectrum on differential forms, Yang and Yu generalised the Hersch-Payne-Schiffer inequality to higher dimensions: \begin{thm}\cite{YaYu2017}\label{yyhps} Let ${\Omega}$ be a compact, oriented $(d+1)$-dimensional Riemannian manifold with smooth boundary $\Sigma$. Then for every $j,k\in \mathbf Z^+$, we have $$\sigma_j({\Omega})\,\widehat{\sigma}^{\,\rs}_{k, d-1}({\Omega})\leq \lambda_{j+k+\beta_d -2}(\Sigma)$$ where $\beta_d$ is the $d$th Betti number of ${\Omega}$. \end{thm} Observe that this result agrees with Equation~\eqref{eqhps} when ${\Omega}$ is a 2-dimensional simply-connected plane domain. The key to Yang and Yu's generalisation is the observation that harmonic functions $\widehat{u}$ and $\widehat{v}$ on a planar domain are harmonic conjugates if and only if $$*d\widehat{u}=d\widehat{v}$$ (up to sign) where $*$ denotes the Hodge star operator. In higher dimensions, starting with a suitably chosen function $u\in C^\infty(\Sigma)$ and its harmonic extension $\widehat{u}$ to ${\Omega}$, the role of $\widehat{v}$ is played by a $(d-1)$-form $\widehat{\alpha}$ such that $*d\widehat{u}=- d\widehat{\alpha}$. (One of the defining conditions placed on $u$ guarantees that $*d\widehat{u}$ is exact.) 
The proof then follows the outline of Hersch, Payne and Schiffer's proof. Subsequently, Karpukhin \cite{Ka2017} applied these results to obtain a version of the Hersch-Payne-Schiffer inequality for surfaces of arbitrary genus $\gamma$ and $b$ boundary components. In this case, we have $d=1$ so $\widehat{\sigma}^{\,\rs}_{k, d-1}({\Omega})=\sigma_k({\Omega})$ and $\beta_1= 2\gamma +b-1$. Thus Theorem~\ref{yyhps} says that $$\sigma_j({\Omega})\sigma_{k}({\Omega})\leq \lambda_{j +k+ 2\gamma +b-3}(\Sigma).$$ By finding an upper bound for $\lambda_{p +q +2\gamma +b-2}(\Sigma)$ when $\Sigma$ is a disjoint union of $b$ circles, Karpukhin obtained the Steklov eigenvalue bound stated earlier in this survey as Theorem~\ref{thm:karp hersch}. \subsubsection{Geometric lower bounds.}~ \begin{nota}\label{nota_pcurv}~ \begin{enumerate} \item Let $({\Omega},g)$ be a compact $(d+1)$-manifold with smooth boundary $\Sigma$. For $x\in \Sigma$, let $$\kappa_1(x)\leq\kappa_2(x)\leq\dots\leq\kappa_d(x)$$ be the principal curvatures at $x$. Set $$c_p({\Omega})=\inf_{x\in\Sigma}\,(\kappa_1(x)+\dots+\kappa_p(x)).$$ ${\Omega}$ is said to be $p$-convex if $c_p({\Omega})\geq 0$. In particular, 1-convexity is equivalent to convexity. \item Let $W^{(p)}({\Omega})$ be the curvature term in Bochner's formula for the Laplacian on $p$-forms: $$\lap(\omega)=\nabla^*\nabla\omega + W^{(p)}({\Omega})\,\omega.$$ In particular, $W^{(1)}({\Omega})$ and $W^{(d)}({\Omega})$ coincide with the Ricci tensor. \end{enumerate} \end{nota} \begin{thm}\label{rsthm1_3}{\rm (Raulot-Savo \cite[Theorems 1 and 3]{RaSa2012}.)}\label{thm.rsc} Let $p\in \{1,\dots,d\}$. Assume that $W^{(p)}({\Omega})\geq 0$ and that $c_p({\Omega})>0$. Then: \begin{enumerate} \item If $p<\frac{d+1}{2}$, we have $$\sigma^{\rs}_{1,p}({\Omega})>\frac{d-p+2}{d-p+1}\,c_p({\Omega}).$$ \item If $p\geq\frac{d+1}{2}$, we have $$\sigma^{\rs}_{1,p}({\Omega})\geq\frac{p+1}{p}\,c_p({\Omega}).$$ Equality holds for the ball in Euclidean space $\R^{d+1}$.
If, moreover, $p>\frac{d+1}{2}$, then the ball is the only Euclidean domain for which equality holds. \end{enumerate} \end{thm} The proof used a Reilly-type formula for differential forms. \begin{remark}~ \begin{enumerate} \item Escobar's Conjecture, cited in Subsection~\ref{subsec.lowerboundgeom}, asks whether $\sigma_1({\Omega})\geq c_1({\Omega})$ under the hypotheses that $c_1>0$ and that ${\Omega}$ has non-negative Ricci curvature. Under the same hypotheses as Escobar's conjecture, the case $p=1$ of Theorem~\ref{thm.rsc} says that $\sigma^{\rs}_{1,1}({\Omega})>\frac{d+1}{d}c_1({\Omega})$. \item When $p=d$, we have $c_d({\Omega})=dH$, where $H$ is the minimum on $\Sigma$ of the mean curvature. Thus the case $p=d$ of the theorem says that if the Ricci curvature of ${\Omega}$ is non-negative and the mean curvature of the boundary is positive, then $\sigma^{\rs}_{1,d}({\Omega})\geq (d+1)H$. The authors further prove in this case that equality holds if and only if ${\Omega}$ is a Euclidean ball. \end{enumerate} \end{remark} While Theorem~\ref{rsthm1_3} assumes $p\geq 1$, Karpukhin \cite{Ka2017} applied this theorem along with Theorem~\ref{yyhps} to obtain new upper bounds, in terms of geometric data, for the Steklov spectrum on functions: \begin{thm}\label{kathmgeo} Assume that $W^{(2)}({\Omega})\geq 0$ and that $c_{d-1}({\Omega})> 0$. \begin{enumerate} \item If $\dim({\Omega})=d+1\geq 4$, then $$\sigma_k({\Omega})\leq \left(\frac{d-1}{c_{d-1}({\Omega}) d}\right)\,\lambda_{k-1}(\Sigma)$$ for all $k\geq 1$. \item If $\dim({\Omega}) =3$, then for all $k\geq b+2$, where $b$ is the number of boundary components of ${\Omega}$, one has $$\sigma_k <\left( \frac{2}{3c_{d-1}({\Omega})}\right)\,\lambda_{k-1}.$$ \end{enumerate} \end{thm} The short proof in the case that ${\Omega}$ is orientable combines Theorem~\ref{rsthm1_3}, applied with $p=d-1$ (after noting that non-negativity of $W^{(2)}$ yields the same for $W^{(d-1)}$), with Theorem~\ref{yyhps}.
The non-orientable case is proven by passing to a double cover. \medskip \subsubsection{Bounds in terms of geometry and the Hodge Laplace eigenvalues of the boundary.}~ \begin{nota}\label{nota.hodge}~ (i) Let $\lap_\Sigma^{(p)}$ denote the Hodge Laplacian acting on smooth $p$-forms on $\Sigma$. We denote the non-zero eigenvalues of $\lap_\sig^{\pp}$ by $$0<\widehat{\lambda}_{1,p}(\Sigma)\leq \widehat{\lambda}_{2,p}(\Sigma) \leq \dots$$ $\lap_\sig^{\pp}$ leaves invariant the subspaces of harmonic forms (the zero eigenspace of $\lap_\sig^{\pp}$), the exact forms, and the co-exact forms. We denote the non-zero eigenvalues of $\lap_\sig^{\pp}$ restricted to the exact forms by $$0<\lambda'_{1,p}(\Sigma)\leq \lambda'_{2,p}(\Sigma)\leq \dots$$ In particular, we have $\widehat{\lambda}_{k,p}\leq \lambda'_{k,p}.$ \smallskip (ii) When $p=0$, we will drop the second subscript and write $\widehat{\lambda}_k$ for the $k$th non-zero eigenvalue of the Laplace-Beltrami operator of $\Sigma$. In particular, we have $$\widehat{\lambda}_k(\Sigma)=\lambda_{k+b-1}(\Sigma)$$ for $\lambda_j$ defined as in Notation~\ref{nota.lap}. \end{nota} Under curvature and topological constraints, Raulot and Savo established a relationship between the lowest non-zero eigenvalue of the Hodge Laplacian $\lap_\Sigma^{(p)}$ and the eigenvalues $\sigma^{\rs}_{1,\,p-1}({\Omega})$ and $\sigma^{\rs}_{1,\,d-(p-1)}({\Omega})$. Their results were extended by Kwong \cite{Kw2016}. We begin the discussion here with the case $p=1$. Observe that $$\widehat{\lambda}_1(\Sigma)=\lambda'_{1,1}(\Sigma).$$ The following result was first proven by Raulot and Savo \cite{RaSa2012} when $R_0=0$. The extension to the case $R_0>0$ is due to Kwong \cite{Kw2016}. \begin{thm}\label{rasa9} \cite[Theorem 9]{RaSa2012}, \cite[Theorem 2.4]{Kw2016}. Assume that ${\Omega}$ has Ricci curvature bounded below by a non-negative constant $R_0$ and strictly convex boundary (i.e., $c_1({\Omega})>0$).
Then $$2\widehat{\lambda}_1(\Sigma)\geq R_0+ c_1({\Omega})\sigma^{\rs}_{1,d-1}({\Omega})+ dH\, \sigma_1({\Omega})$$ where $H$ is the minimum value of the mean curvature of $\Sigma$. If $d\geq 3$, then equality holds if and only if $R_0=0$ and ${\Omega}$ is a Euclidean ball. \end{thm} This theorem sharpens a result of Escobar \cite[Theorem 9]{Es1999}, who showed under the same hypotheses that $2\widehat{\lambda}_1(\Sigma)> dH\, \sigma_1({\Omega})$. We now consider arbitrary $p$. \begin{thm}\label{rasa8}\cite[Theorem 8]{RaSa2012}, \cite[Theorem 2.4 (part 4)]{Kw2016}. We use Notations~\ref{hodge_decomp}, \ref{nota_pcurv} and \ref{nota.hodge}. Assume that $H^p({\Omega},\Sigma;\R)=0$, that $W^{(p)}\geq R_0$, and that $c_p({\Omega})$ and $c_{d+1-p}({\Omega})$ are both non-negative. Then $$2\lambda'_{1,p}\geq R_0+c_p({\Omega})\sigma^{\rs}_{1,d-p}({\Omega})+c_{d+1-p}({\Omega})\sigma^{\rs}_{1,p-1}({\Omega}).$$ \end{thm} Again the case $R_0=0$ is due to Raulot and Savo, and the extension to arbitrary $R_0\in\R$ is due to Kwong. Theorem~\ref{rasa9} follows from Theorem~\ref{rasa8} and the fact that $H^1({\Omega},\Sigma;\R)=0$ when the Ricci curvature is non-negative and the mean curvature is positive; see \cite[Theorems 2.6.1 and 2.6.4, Corollary 2.6.2]{Sc1995}. \medskip \subsubsection{Manifolds isometrically immersed in $\R^m$}~ For compact $(d+1)$-dimensional Riemannian manifolds ${\Omega}$ with boundary that are isometrically immersed in $\R^m$ for some $m$, Michel \cite{Mi2021} compared $\sigma^{\rs}_{1,p}$ and $\sigma^{\rs}_{1,p-1}$ in the presence of geometric bounds.
\begin{nota}\label{nota.shape} Define a tensor field $T^{(p)}$ on ${\Omega}$ as follows: For $x\in {\Omega}$ and $\mathbf{n}$ a normal vector to ${\Omega}$ at $x$, the shape operator $S_{\mathbf{n}}$ gives rise to a self-adjoint endomorphism $S_{\mathbf{n}}^p$ of $\Lambda^p_x({\Omega})$ given by $$S_{\mathbf{n}}^p(\omega)(X_1,\dots, X_p)= \sum_{j=1}^p\,\omega(X_1, \dots, S_{\mathbf{n}}(X_j),\dots,X_p).$$ Define $T^{(p)}_x$ to be the self-adjoint non-negative endomorphism of $\Lambda^p_x({\Omega})$ given by $$T^{(p)}_x=\sum_{i=1}^r\, (S^p_{\mathbf{n}_i})^2$$ where $\{\mathbf{n}_1,\dots, \mathbf{n}_r\}$ is an orthonormal basis for the normal bundle to ${\Omega}$ at $x$. (The definition is independent of the choice of basis.) \end{nota} \begin{thm}\cite{Mi2021} Suppose that the $(d+1)$-dimensional compact Riemannian manifold ${\Omega}$ is isometrically immersed in $\R^m$. Let $p\in\{1,\dots, d\}$. In the notation of~\ref{nota_pcurv} and \ref{nota.shape}, if ${\Omega}$ has $p$-convex boundary and $W^{(p)}\geq T^{(p)}$, then $$\sigma^{\rs}_{1,p}({\Omega})\geq \sigma^{\rs}_{1,p-1}({\Omega}) + \frac{c_p({\Omega})}{p}.$$ If $m=d+1$ and $\frac{d+3}{2}\leq p\leq d$, then equality holds when ${\Omega}$ is the Euclidean unit ball. \end{thm} Note that when $m=d+1$, i.e., ${\Omega}$ is a Euclidean domain, the only relevant hypothesis is $p$-convexity since $W^{(p)}\equiv T^{(p)} \equiv 0$ in that case. \subsection{Karpukhin's Dirichlet-to-Neumann operator and its spectrum} In this subsection, ${\Omega}$ is assumed to be oriented. Karpukhin's notion of the Dirichlet-to-Neumann operator on forms is a modification of that of Belishev and Sharafutdinov and is motivated by Maxwell's equations.
\begin{defn}\label{dtnk}\cite{Ka2019} Given $\eta\in \mathcal{A}^p(\Sigma)$, let $\widetilde{\eta}\in \mathcal{A}^p({\Omega}) $ be any solution of \begin{equation} \label{eqteta}\begin{cases} \lap \widetilde{\eta}=0 \\ i^*\widetilde{\eta}=\eta\\\delta\widetilde{\eta}=0 \end{cases}\end{equation} Then Karpukhin's Dirichlet-to-Neumann operator $\DtN^K_p:\mathcal{A}^p(\Sigma)\to \mathcal{A}^p(\Sigma)$ is defined by $$\DtN^K_p(\eta)=\nu\lrcorner \,d\widetilde{\eta}.$$ \end{defn} Karpukhin proves that $\DtN^K_p$ is well-defined by first showing that the set of solutions of Equation~\eqref{eqteta} is an affine space with associated vector space $\mathcal{H}_D({\Omega})$. As was the case with $\DtN^{\rs}_0$, the operator $\DtN^K_0$ coincides with $\mathcal{D}: C^\infty(\Sigma)\to C^\infty(\Sigma)$. Note that the condition $\nu\,\lrcorner\,\widehat{\eta}=0$ in Definition~\ref{dtnp} has been replaced here by the condition that $\widetilde{\eta}$ be co-closed. Observe that $i^*\omega=0 = \nu\,\lrcorner\,d\omega$ for all $\omega\in d(\mathcal{A}_D^{p-1}({\Omega}))$, and the Hodge-Morrey-Friedrichs decomposition~\eqref{hmf} shows that $\mathcal{A}^p({\Omega})$ is the direct sum of $d(\mathcal{A}_D^{p-1}({\Omega}))$ with the space of co-closed $p$-forms. The Hodge decomposition and the fact that all harmonic forms on a closed manifold are co-closed show that $\mathcal{A}^p(\Sigma)$ is the vector space direct sum of the subspaces of exact $p$-forms and of co-closed $p$-forms. Karpukhin~\cite[Theorem 2.3]{Ka2019} showed: \begin{itemize} \item[(i)] $\DtN^K_p$ is identically zero on the space of exact forms on $\Sigma$. \item[(ii)] The restriction to the space of co-closed forms on $\Sigma$ is a non-negative operator with compact resolvent and discrete spectrum.
Moreover, the kernel of the restricted operator has dimension $I_p$ given by \begin{equation}\label{Ip}I_p:=\dim(i^*H^p({\Omega};\R)).\end{equation} \item[(iii)] $\DtN^K_d \equiv 0$; thus the operator is of interest only on forms of degree at most $d-1$. \end{itemize} We will denote the eigenvalues of $\DtN^K_p$ restricted to the space of co-closed forms by $$0\leq \sigma^K_{1,p}({\Omega})\leq \sigma^K_{2,p}({\Omega})\leq \dots$$ and the non-zero eigenvalues by $$\widehat{\sigma}^{K}_{k,p}({\Omega}):=\sigma^K_{k+I_p,p}({\Omega}).$$ \begin{ques}\label{ques: karpukhin pseudo} Is $\DtN^K_p$ a pseudo-differential operator? \end{ques} \begin{ex}\cite{RaSa2014}, \cite{Ka2019}\label{pball} Let ${\Omega}$ be the Euclidean unit ball of dimension $d+1$ and let $S^d$ be the unit sphere. Recall that the Dirichlet-to-Neumann operator on $C^\infty(S^d)$ and the Laplace-Beltrami operator of $S^d$ have the same eigenspaces, namely the restrictions to $S^d$ of the homogeneous harmonic polynomials of degree $k$ for $k=0,1,2,\dots$. In the case of $p$-forms, somewhat similar relationships exist between the eigenspaces of the Hodge Laplacian of $S^d$ (computed by Ikeda and Taniguchi in 1978) and those of both $\DtN^{\rs}_p$ and $\DtN^K_p$, as we now describe. Let $P_{k,p}$ denote the space of all homogeneous polynomial $p$-forms of degree $k$. (Here $\omega\in \mathcal{A}^p({\Omega})$ is said to be a homogeneous polynomial $p$-form of degree $k$ if, relative to the standard basis $\{dx^{i_1}\wedge\dots\wedge dx^{i_p}; \,1\leq i_1 <\dots< i_p\leq d+1\}$ of $\Lambda^p(\R^{d+1})$, all its coefficient functions are homogeneous polynomials of degree $k$.) Let $$P_{k,p}^{c\,c}= \{\omega \in P_{k,p}: \delta \omega =0\}.$$ We first recall Ikeda and Taniguchi's computation of the eigenspaces of the Hodge Laplacian on $S^d$.
Recalling Notation~\ref{hodge_decomp}, let $$H'_{k,p}=P_{k,p}^{c\,c}\cap \mathcal{H}^p(\R^{d+1})$$ and $$H''_{k,p}= P_{k,p}^{c\,c}\cap \mathcal{A}^p_N(\R^{d+1}).$$ For $1\leq p\leq d-1$, the collection of subspaces $\{ i^*H'_{k,p}, i^*H''_{k,p} : k=1,2,\dots\}$ is a complete eigenspace decomposition for the Hodge Laplacian on $\mathcal{A}^p(S^d)$. Moreover, the subspaces $i^*H'_{k,p}$ together span the space of exact forms, and the subspaces $i^*H''_{k,p}$ together span the co-exact forms. We first consider the co-exact subspaces. Let $\eta \in i^*H''_{k,p}$. Since $H''_{k,p}$ consists of tangential co-closed harmonic forms, we have $\widehat{\eta}=\widetilde{\eta} $ in the notation of Definitions~\ref{dtnp} and \ref{dtnk}. Thus $\DtN^{\rs}_p$ and $\DtN^K_p$ agree on this subspace. For both operators, $i^*H''_{k,p}$ is an eigenspace with eigenvalue $p+k$. Next consider the exact subspaces. It turns out that each $i^*H'_{k,p}$ is an eigenspace for $\DtN^{\rs}_p$ with eigenvalue $(k+p-1)\frac{d+2k+1}{d+2k-1}$. However, in contrast to the case of the co-closed forms, the tangential harmonic extensions of elements of $i^*H'_{k,p}$ are \emph{not} polynomial forms. On the other hand, for all $k$, elements $\eta\in i^*H'_{k,p}$ have co-closed polynomial harmonic extensions $\widetilde{\eta}$ lying in $H'_{k,p}$. Since $d\widetilde{\eta}=0$ and thus $\nu\,\lrcorner \,d\widetilde{\eta}=0$, the operator $\DtN^K_p$ is identically zero on the exact forms, in agreement with statement (i) above. Finally, for $p=d$, we have $\mathcal{A}^d(S^d) = d\,\mathcal{A}^{d-1}(S^d) \oplus \R\omega$, where $\omega$ is the volume form of $S^d$. All three operators vanish on $\omega$ while the behaviour on the exact forms is the same as in the case $1\leq p\leq d-1$. In particular, $\DtN^K_d\equiv 0$ in agreement with (iii).
\end{ex} The eigenvalues $\sigma^K_{k,p}$, $k=1,2,\dots$, satisfy the variational characterisation (see \cite[Theorem 2.3]{Ka2019}): \begin{equation}\label{kvar} \sigma^K_{k,p}({\Omega})=\max_{E}\,\min_{\substack{\eta\perp E\\i^*\varphi=\eta}}\,\frac{\|d\varphi\|^2_{L^2({\Omega})}}{\|\eta\|^2_{L^2(\Sigma)}}\end{equation} where $E$ varies over all $(k-1)$-dimensional subspaces of $\mathcal{A}^p(\Sigma)$ consisting of co-closed $p$-forms. The maximum is achieved when $E$ is spanned by the first $k-1$ eigenforms, $\eta$ is a $\sigma^K_{k,p}$-eigenform, and $\varphi$ satisfies Equation~\eqref{eqteta} (with $\varphi$ playing the role of $\widetilde{\eta}$). (Note that $d\varphi$ is independent of the choice of solution to Equation~\eqref{eqteta}.) \begin{thm}\label{kar_compare}\cite[Theorem 2.5]{Ka2019}. For $0\leq p\leq d-1$, one has $$\widehat{\sigma}^{\,\rs}_{k,p}({\Omega})\leq \widehat{\sigma}^{K}_{k,p}({\Omega})$$ for all $k=1,2,\dots$ \end{thm} The proof compares the variational characterisations of eigenvalues. Girouard, Karpukhin, Levitin and Polterovich proved the following analogue of Equation~\eqref{eq: sigma = lambda +O(1)}. Recall that the Hodge Laplacian $\lap_\Sigma^{(p)}$ on $\Sigma=\partial{\Omega}$ leaves invariant the subspace of co-closed $p$-forms in $\mathcal{A}^p(\Sigma)$. \begin{thm}\cite[Theorem 5.6]{GKLP2021}\label{thm:ksig hodge} Let $$0\leq \widetilde{\lambda}_0(\Sigma)\leq \widetilde{\lambda}_1(\Sigma)\leq \dots$$ denote the eigenvalues of $\lap_\Sigma^{(p)}$ restricted to the subspace of co-closed forms in $\mathcal{A}^p(\Sigma)$.
Then there exists a constant $C$ such that \begin{equation}\label{eq:karp p weyl}\left|\sigma^K_{k,p}({\Omega})-\sqrt{\widetilde{\lambda}_k(\Sigma)}\right|\leq C.\end{equation} \end{thm} As a consequence, they obtained the following Weyl law \cite[Theorem 5.8]{GKLP2021}: \begin{equation}\label{eq: Weyl Karp p} \#\{\sigma^K_{k,p}({\Omega},g)<\sigma\}=\binom{d-1}{p}\frac{|\mathbb{B}^d||\Sigma|_g}{(2\pi)^d}\,\sigma^d + o(\sigma^{d}) \end{equation} where $d+1=\dim({\Omega})\geq 2$. \begin{ques}\label{ques: Karp Weyl error} Can the error bound in the Weyl Law~\eqref{eq: Weyl Karp p} for the asymptotics of $\sigma^K_{k,p}$ be improved to $O(\sigma^{d-1})$? \end{ques} This would follow from an affirmative answer to Question~\ref{ques: karpukhin pseudo}. See \cite[Remark 5.9]{GKLP2021} for further comments. Karpukhin remarks that many of the eigenvalue inequalities for the Raulot-Savo eigenvalues $\sigma^{\rs}_{k,p}$ have analogues for $\sigma^K_{k,p}$ and in fact can be reproven using Theorem~\ref{kar_compare}. He illustrates with the following generalised and strengthened version of Theorem~\ref{yyhps}. \begin{thm}\label{khps}\cite[Theorem 2.7]{Ka2019} Let ${\Omega}$ be a compact, oriented $(d+1)$-dimensional Riemannian manifold with smooth boundary $\Sigma$. Then for every $j,k\in \mathbf Z^+$ and every $p\in\{0,1,\dots, d-1\}$, we have $$\widehat{\sigma}^{K}_{j,p}({\Omega})\widehat{\sigma}^{K}_{k, d-1-p}({\Omega}) \leq \lambda^{c\,c}_{I_p+j+k -1 +\beta_{d-p} }(\Sigma).$$ Here $0\leq \lambda^{c\,c}_{1,p}(\Sigma)\leq \lambda^{c\,c}_{2,p}(\Sigma)\leq\dots$ are the eigenvalues of the Hodge Laplacian of $\Sigma$ acting on the space of co-closed $p$-forms. If $j=k=1$, then equality holds for all $p\in\{0,1,\dots, d-1\}$ when ${\Omega}$ is a Euclidean unit ball. \end{thm} Observe that Theorem~\ref{yyhps} follows from the case $p=0$ of Theorem~\ref{khps} together with Theorem~\ref{kar_compare}. 
In view of the fact that the inequality in Theorem~\ref{khps} is sharp when $j=k=1$, it is interesting to ask whether other inequalities for the Raulot-Savo eigenvalues have sharp versions for Karpukhin's eigenvalues. \medskip \subsubsection{Conformally invariant case: $d=2p+1$}~ Suppose that the dimension $d+1$ of ${\Omega}$ is even and let $p=\frac{d-1}{2}$. Then, analogous to the case of the Steklov spectrum on functions in dimension two, the variational characterisation of eigenvalues~\eqref{kvar} shows that the eigenvalues $\sigma^K_{k,p}$ are invariant under conformal changes of metric provided the conformal factor is trivial on the boundary $\Sigma$. Motivated by the literature on optimizing Steklov eigenvalues on surfaces discussed in Section~\ref{sec:existence}, Karpukhin proposes: \begin{prob}\cite{Ka2019}\label{prob:karp} Given a closed oriented Riemannian manifold $(\Sigma,h)$ of dimension $d=2p+1$, let $[\Sigma,h]_m$ denote the collection of all orientable Riemannian manifolds $({\Omega},g)$ with $\partial {\Omega} =\Sigma$, $g_{|\Sigma}=h$ and $\beta_{d-p}({\Omega})=m$. For fixed $m$ and $k$, investigate $$\sup\,\{\sigma^K_{k,p}({\Omega},g): ({\Omega},g)\in [\Sigma,h]_m\}.$$ \end{prob} The following theorem shows that the supremum is always finite. \begin{thm}\cite[Theorem 2.11]{Ka2019}\label{khpsp} Suppose $d=2p+1$. Then with the notation and hypotheses of Theorem~\ref{khps}, we have $$\widehat{\sigma}^{K}_{k,p}({\Omega})^2\leq \lambda^{c\,c}_{I_p+2k -1 +\beta_{p+1} }(\Sigma).$$ Moreover, when $k\leq \frac{1}{2}\binom{2p+2}{p+1}$, equality holds for the Euclidean unit ball. \end{thm} The inequality is immediate from Theorem~\ref{khps}. \begin{conj}\label{conjs:karpuhkin}\cite{Ka2019}~ \begin{itemize} \item The inequality in Theorem~\ref{khpsp} is sharp for all $k$. \item When $k\leq \frac{1}{2}\binom{2p+2}{p+1}$, equality holds only for the Euclidean unit ball. 
\end{itemize} \end{conj} \section{Inverse problems: positive results}\label{sec:inv probs pos} In this section we discuss some recent ``positive'' inverse spectral results for the Steklov problem. By ``positive'' we mean that we are looking for geometric information that \emph{can} be recovered from the Steklov spectrum. The analogous problem for the Laplacian, either on a closed manifold or with Dirichlet, Neumann, or Robin conditions on a domain, has been well-studied. The traditional approach is to look for \emph{spectral invariants} -- quantities which can be determined from the spectrum of the Laplacian and which have some geometric meaning. With these invariants, one may then try to prove results such as spectral rigidity: proving that all elements of a certain restricted class of domains or manifolds are spectrally distinguishable. The hunt for spectral invariants of the Laplacian starts with Weyl's asymptotics. Weyl proved in 1911 (in the cases $d=1$ and $d=2$) that when $\Omega\subset\mathbb R^{d+1}$ is bounded and has sufficiently regular boundary, \[N(\lambda_k\leq\lambda) = C_d|\Omega|\lambda^{(d+1)/2} + o(\lambda^{(d+1)/2}),\] where $C_d$ is a dimensional constant \cite{We1911}. This estimate has been generalised to all dimensions and refined significantly over the last century. Notably, in 1980, Ivrii proved a \emph{two-term} asymptotic formula under the assumption that the set of all periodic billiard trajectories in $\Omega$ has measure zero \cite{Iv1980}: \[N(\lambda_k\leq\lambda) = C_d|\Omega|\lambda^{(d+1)/2} + C_d'|\partial\Omega|\lambda^{d/2} + o(\lambda^{d/2}).\] Similar asymptotics hold under similar assumptions in the case where $\Omega$ is a closed manifold (see for example Duistermaat and Guillemin \cite{DuGu1975}), or a domain in a complete manifold; we will not discuss them here.
These asymptotics are useful for inverse problems because the counting function is spectrally determined, and hence so are the coefficients of the expansion on the right-hand side. This shows that the volume of $\Omega$ may be determined from the spectrum of the Laplacian, and similarly the volume of $\partial\Omega$ may be recovered under the aforementioned dynamical assumptions. In order to obtain more spectral invariants, we may turn to \emph{heat} invariants. It was shown by McKean and Singer in 1967 \cite{McSi1967} that for a smooth manifold with boundary of dimension $d+1$, the trace of the heat kernel, $Tr H(t)$, has a short-time asymptotic expansion as $t\to 0$ given by \[Tr H(t) = \sum_{j=0}^{\infty}a_jt^{-(d+1)/2 + j/2}.\] Here the quantities $a_j$ may be computed recursively, and each is an integral, over $\Omega$, of a polynomial in the curvature of $\Omega$ and its derivatives, plus an analogous integral over $\partial\Omega$ involving the principal curvatures of the boundary. See the appendix of \cite{OPS1988a} and the references therein, including \cite{Me1983}. The key point is that since \[Tr H(t)=\sum_{k=1}^{\infty}e^{-\lambda_kt},\] the heat trace, and thus all of the coefficients $a_j$, are spectral invariants. We call the $a_j$ the ``heat invariants''. The first term $a_0$ is a dimensional constant times the volume of $\Omega$, so that volume is a spectral invariant. Likewise, it follows from examining $a_1$ that the volume of $\partial\Omega$ is a spectral invariant, even when the dynamical assumption needed to obtain a second term in Weyl's law is not satisfied. The coefficient $a_2$ was first studied by McKean and Singer \cite{McSi1967}, who showed that if $\Omega$ is a compact surface with boundary then $a_2$ is a nonzero multiple of the Euler characteristic, and therefore the Euler characteristic is a spectral invariant.
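The relation between the leading heat coefficient and the volume can be checked concretely in the simplest closed-manifold analogue (our own illustration, not the boundary-value setting discussed above): on the circle of circumference $2\pi$, the Laplacian has eigenvalues $n^2$, $n\in\mathbb Z$, and Poisson summation gives $Tr H(t)=\sqrt{\pi/t}+O(e^{-\pi^2/t})$, whose leading term is exactly $|S^1|(4\pi t)^{-1/2}$.

```python
import math

def heat_trace_circle(t, N=2000):
    # truncated heat trace sum_{n in Z} exp(-n^2 t) for the Laplacian on a
    # circle of circumference 2*pi (eigenvalues n^2, n in Z, each nonzero one doubled)
    return 1.0 + 2.0 * sum(math.exp(-n * n * t) for n in range(1, N + 1))

t = 0.01
print(heat_trace_circle(t))     # heat trace at small time t
print(math.sqrt(math.pi / t))   # leading term a_0 * t^{-1/2} = |S^1| * (4*pi*t)^{-1/2}
```

Because the correction from Poisson summation is of size $e^{-\pi^2/t}$, the two printed values agree to machine precision already at $t=0.01$.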
More can be done, though the calculation of the $a_j$ is recursive and some of the formulas for $a_j$, $j\geq 3$, are harder to interpret geometrically. See, for example, \cite{Gi2003} for a detailed discussion of these formulas. A more exotic approach to spectral invariants is to make use of the \emph{spectral zeta function}, defined for $\Re s$ large by \[\zeta(s):=\sum_{k=1}^{\infty}\lambda_k^{-s}.\] This function may then be meromorphically continued to the complex plane by writing \[\zeta(s)=\frac{1}{\Gamma(s)}\int_{0}^{\infty}t^{s-1}Tr H(t)\, dt\] and taking advantage of the short-time asymptotic expansion of the heat trace. Since the spectral zeta function is spectrally determined, so are all special values of the zeta function and its derivatives. A particularly useful special value is the \emph{determinant of the Laplacian}, defined by \[\det\Delta := e^{-\zeta'(0)}.\] In a series of celebrated papers in the late 1980s, Osgood, Phillips, and Sarnak used both the determinant and the heat invariants to show that any isospectral family of planar domains is compact in a natural $C^{\infty}$ topology \cite{OPS1988,OPS1988a}. Specifically, they used the determinant together with some calculus of variations and the first couple of heat invariants to prove compactness in the Sobolev space $H^s$ for some $s>0$ \cite{OPS1988}. They then used the heat invariants $\{a_j\}$ to bootstrap the argument and obtain compactness for every real $s$, with each successive heat invariant improving the Sobolev order by one \cite{OPS1988a}. Although they were not able to explicitly compute each heat invariant without recursion, they were able to isolate key terms which enabled them to prove compactness. See \cite[Appendix]{OPS1988a}. \subsection{Steklov spectral invariants} Many of the techniques used for the Laplacian are naturally used in the Steklov problem as well.
For example, there are Weyl asymptotics: \begin{equation}\label{eq:stekweyl} N(\sigma_k\leq\sigma) = C_d|\partial\Omega|\sigma^{d} + o(\sigma^{d}), \end{equation} where again $C_d$ is an explicit dimensional constant. The asymptotics \eqref{eq:stekweyl} are due to various authors under various assumptions on the smoothness of the boundary. In the case where $\Omega$ is a domain in Euclidean space and $\partial\Omega$ is $C^1$, it was proved by Agranovich in 2006 \cite{Ag2006}. \begin{ques}\label{ques:weylrough} Does \eqref{eq:stekweyl} hold whenever $\partial\Omega$ is Lipschitz? \end{ques} Recently, Karpukhin, Lagac\'e, and Polterovich have answered this affirmatively for $d+1=2$ \cite{KLP2022}. Observe that the exponent in \eqref{eq:stekweyl} is $d$ rather than $(d+1)/2$. At least in the case of smooth boundary, this is because the Steklov spectrum is the spectrum of a pseudodifferential operator of order 1 on a $d$-dimensional manifold, rather than the spectrum of a differential operator of order 2 on a $d+1$-dimensional manifold. For the same reason, the leading term is a geometric constant times the volume of the \emph{boundary} $\partial\Omega$, showing that $|\partial\Omega|$ is a Steklov spectral invariant. As with the Laplacian, there is sometimes a second term in these asymptotics, subject to certain assumptions concerning the dynamics of the geodesic flow on $\partial\Omega$. For example, if $\partial\Omega$ is smooth and its periodic geodesics form a set of measure zero in its cotangent bundle, then one recovers a two-term Weyl law from the results of \cite{DuGu1975}, see \cite[Remark 5.7]{PoSh2015}. However, as with the Laplacian, there are no further terms. Although the Weyl asymptotics are the best results available in dimension $d+1>2$, the case $d+1=2$ is quite different, due to the conformal invariance of the Laplacian. Suppose that $\Omega$ is a simply connected surface with smooth boundary.
In this case, it has been known since the 1970s that the Steklov spectrum has very precise asymptotic behaviour \cite{Sh1971,Ro1979}. Specifically, let $D_{|\partial \Omega|}$ be a disk with the same perimeter as $\Omega$. Then \[|\sigma_k(\Omega)-\sigma_k(D_{|\partial \Omega|})| = O(k^{-\infty}).\] In other words, the eigenvalues of $\Omega$ converge quite rapidly to a doubled arithmetic progression. \begin{remark} \label{rem:gpps2014} In 2014, Girouard, Parnovski, Polterovich and Sher investigated the $d+1=2$ case where $\Omega$ is not simply connected \cite{GPPS2014}. Suppose that $\Omega$ is a smooth surface with boundary, which has $N$ boundary components with lengths $\ell_1,\ldots,\ell_N$, and let $U$ be a union of $N$ disks with perimeters $\ell_1,\ldots,\ell_N$. Then \[|\sigma_k(\Omega)-\sigma_k(U)|=O(k^{-\infty}).\] Using some elementary number theory, they were able to prove that the multiset $\{\ell_1,\ldots,\ell_N\}$ of boundary lengths may be determined algorithmically from the Steklov spectrum $\sigma(\Omega)$ \cite{GPPS2014}. \end{remark} The other methods of obtaining spectral invariants have also been used in the Steklov setting. Polterovich and Sher made a systematic study of Steklov heat invariants \cite{PoSh2015}, proving among other things that the total mean curvature of $\partial\Omega$ is a Steklov spectral invariant. This was then used to show that a ball in $\mathbb R^3$ is uniquely determined by its Steklov spectrum among all domains in $\mathbb R^3$ \cite{PoSh2015}. The Steklov spectral zeta function may be defined similarly to the Laplace spectral zeta function. It was exploited in a notable series of papers by Julian Edward from the late 1980s and early 1990s. In his first paper he used the zeta function to show that a disk in $\mathbb R^2$ is uniquely determined among simply connected domains by its Steklov spectrum \cite{Ed1993}.
(This may also be deduced from the Weyl asymptotics and Weinstock's inequality, and is in fact true even among all domains, see \cite[Corollary 1.13]{GPPS2014}). In a follow-up work he investigated compactness of isospectral sets of simply connected planar domains along the lines of Osgood, Phillips, and Sarnak \cite{Ed1993a}. Although he was not quite able to prove compactness of isospectral sets in the $C^{\infty}$ topology, he was able to prove it in the $H^{5/2-\epsilon}$ topology for any $\epsilon>0$, thus carrying out the first step of the program of Osgood, Phillips, and Sarnak \cite{OPS1988,OPS1988a} in the Steklov setting. \subsection{Isospectral compactness} Recently, Alexandre Jollivet and Vladimir Sharafutdinov \cite{JoSh2018_2} have completed the program started by Edward. \begin{thm}\label{thm:joshmain} \cite[Theorem 1.3]{JoSh2018_2} Suppose that $\{\Omega_i\}$ is a sequence of Steklov isospectral, compact, simply connected, possibly multisheeted planar domains. Then there exists a subsequence of $\{\Omega_i\}$ which converges to a limit $\Omega$ in the $C^{\infty}$ topology. \end{thm} The proof is an interesting variation on the techniques of Osgood, Phillips, and Sarnak in \cite{OPS1988,OPS1988a} and we give a sketch of it here. The main problem is that in \cite{OPS1988a}, the authors used the Laplace heat invariants to perform their bootstrapping procedure. However, in two dimensions, all Steklov heat invariants beyond the first are \emph{non-local} and as such cannot be computed by the same recursive methods typically used to compute Laplace heat invariants. In general, as shown in \cite{PoSh2015}, only the first $d$ Steklov heat invariants are local. So there is no hope of being able to use a structure theorem for heat invariants to complete the proof. Nevertheless, Jollivet and Sharafutdinov are able to use a set of invariants drawn from the Steklov zeta function, which they call ``zeta-invariants'', to replace the role of the heat invariants.
These invariants were originally defined in a paper of Malkovich and Sharafutdinov \cite{MaSh2015}. To define these zeta-invariants, Jollivet and Sharafutdinov use the well-known fact that if $\Omega$ is a compact, simply connected planar domain, then the Steklov eigenvalue problem is equivalent to the eigenvalue problem for the operator $a\mathcal D_{\mathbb S^1}$, where \[a(z)=|\Phi'(z)|^{-1},\ \Phi:\mathbb D\to \Omega\textrm{ is biholomorphic.}\] In this translation, also used by Edward \cite{Ed1993,Ed1993a} and dating back much earlier \cite{Ro1979}, the inverse spectral problem is equivalent to the problem of determining $a(z)$ from the spectrum of $a\mathcal D_{\mathbb S^1}$. The zeta-invariants are defined by \[Z_m(a):=\textrm{Tr}[(a\mathcal D_{\mathbb S^1})^{2m} - (-ia\frac{d}{d\theta})^{2m}].\] Although Edward did not use the notation of zeta-invariants, his proof of isospectral compactness in $H^{5/2-\epsilon}$ uses the zeta-invariants $Z_1(a)$ and $Z_2(a)$ as well as the special values of the zeta function $\zeta(-1)$ and $\zeta(-3)$ \cite{Ed1993a}. To do this he computed a formula for $Z_1(a)$ in terms of the Fourier coefficients $\hat a_m$ of $a$: \[Z_1(a)=\frac 23\sum_{n=2}^{\infty}(n^3-n)|\hat a_n|^2.\] This formula was then generalised by Malkovich and Sharafutdinov to a formula for $Z_m(a)$ \cite{MaSh2015}. The key novel observation of Jollivet and Sharafutdinov is a lower bound for $Z_m(a)$, namely \begin{thm} \cite[Theorem 1.1]{JoSh2018_2}. Let $m\geq 1$ and let $b=a^m$. Then there is a constant $c_m$ depending only on $m$ such that for any smooth function $a$, \[Z_m(a)\geq c_m\sum_{n=m+1}^{\infty}n^{2m+1}|\hat b_n|^2.\] \end{thm} This lower bound builds on their prior work in \cite{JoSh2018} and turns out to be enough for them to prove Theorem \ref{thm:joshmain}. In essence it fills the same role that the structure theorem for higher heat invariants plays in \cite{OPS1988a}. 
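Edward's formula for $Z_1(a)$ is straightforward to evaluate numerically. The sketch below is our own illustration: the Fourier convention $\hat a_n=\frac{1}{2\pi}\int_0^{2\pi}a(\theta)e^{-in\theta}\,d\theta$, with the sum taken over positive frequencies $n\geq 2$ as displayed, is an assumption made to match the formula. For $a(\theta)=1+\varepsilon\cos 2\theta$ one has $\hat a_2=\varepsilon/2$ and all other coefficients with $n\geq 2$ vanish, so $Z_1(a)=\frac 23(2^3-2)(\varepsilon/2)^2=\varepsilon^2$.

```python
import numpy as np

def z1_from_samples(a_samples):
    # Edward's formula Z_1(a) = (2/3) * sum_{n>=2} (n^3 - n) |a_n|^2,
    # with a_n = (1/2pi) int_0^{2pi} a(theta) e^{-in theta} dtheta (convention assumed)
    N = len(a_samples)
    a_hat = np.fft.fft(a_samples) / N      # a_hat[n] approximates a_n for 0 <= n < N/2
    n = np.arange(2, N // 2)
    return (2.0 / 3.0) * np.sum((n**3 - n) * np.abs(a_hat[n]) ** 2)

theta = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
eps = 0.2
a = 1.0 + eps * np.cos(2.0 * theta)        # hat a_2 = eps/2; all other n >= 2 vanish
print(z1_from_samples(a))                  # (2/3)*(8-2)*(eps/2)^2 = eps^2 = 0.04
```

The same FFT-based evaluation extends to the Malkovich--Sharafutdinov formulas for $Z_m(a)$ once the corresponding coefficients are known.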
The authors use the estimate on $Z_m(a)$ to obtain control over the $H^{m+\frac 12}$-norms of an isospectral sequence $\{a_k\}$ and thus prove compactness in $H^{m+\frac 12-\epsilon}$ for each $m$. One might also consider the case of multiply connected domains. Indeed, Osgood, Phillips, and Sarnak were able to prove an isospectral compactness theorem in that setting as well \cite{OPS1989}. \begin{ques}\label{ques:multconnops} Are families of multiply connected, compact, Steklov isospectral planar domains necessarily compact in the $C^{\infty}$ topology? \end{ques} \begin{remark} For certain restricted classes of manifolds, much stronger results than isospectral compactness may sometimes be deduced. See, for example, the papers \cite{DaKaNi2021} by Daud\'e, Kamran and Nicolau and \cite{Gendron2020} by Gendron, in the setting of warped product manifolds. \end{remark} \subsection{Steklov spectral asymptotics with corners} As previously discussed, Steklov spectral asymptotics in two dimensions, for $\partial\Omega$ smooth, are straightforward: the spectrum converges extremely rapidly to a doubled arithmetic progression, or a union of such progressions in the case where $\partial\Omega$ has more than one connected component. However, it has been known for a long time that the presence of corners on $\partial\Omega$ changes the picture substantially. We highlight some recent developments on asymptotics in these settings and then discuss the inverse spectral applications. \subsubsection{Sloshing asymptotics} The first recent results on these asymptotics come in the case of a mixed Steklov-Neumann problem known as the \emph{sloshing problem}. In this problem, $\Omega$ is taken as a simply connected region in $\mathbb R^2$ whose boundary decomposes in a piecewise smooth fashion as $S\cup W$, where $S$ is a segment of the $x$-axis of length $L$ from point $A$ to point $B$, and $W$ is a curve in the lower half-plane from point $B$ to point $A$. 
We let $\alpha$ and $\beta$ be the interior angles of $\Omega$ at $A$ and $B$ respectively. We then consider the mixed Steklov-Neumann problem obtained by imposing Steklov conditions on $S$ and Neumann conditions on $W$. There is an analogous Steklov-Dirichlet problem as well. Computations of these eigenvalues can be done directly in some very specific cases such as the equilateral triangle. Based on these computations and on considerable numerical evidence, Fox and Kuttler conjectured in 1983 \cite{FoKu1983} that in the case where $\alpha=\beta$, \[\sigma_k\cdot L = \pi(k+\frac 12) - \frac{\pi^2}{4\alpha}+o(1).\] In 2017, Levitin, Parnovski, Polterovich, and Sher proved the following generalisation of the Fox-Kuttler conjecture \cite{LPPS2017}: \begin{thm}\label{thm:sloshingmain} \cite[Theorem 1.1]{LPPS2017} Suppose that $\Omega$ is a sloshing domain where $W$ is piecewise $C^1$ and the angles $\alpha$ and $\beta$ are less than $\pi/2$. Then \[\sigma_k\cdot L = \pi(k+\frac 12) -\frac{\pi^2}{8}(\frac 1{\alpha}+\frac 1{\beta}) + o(1).\] \end{thm} Observe that the angles produce a shift in the asymptotics of the eigenvalues. Moreover, if the domain is symmetric, the angles may be recovered from those eigenvalue asymptotics. \begin{remark} Some comments: \begin{itemize} \item There is a similar result for the Steklov-Dirichlet problem; all that changes is that the minus sign in front of the term with the angles becomes a plus sign. \item If $W$ is straight near the corners, the remainder estimate can be improved beyond $o(1)$. \item The angles $\alpha$ and/or $\beta$ may be allowed to equal $\pi/2$ under the assumption that $\Omega$ satisfies a local John's condition. \end{itemize} \end{remark} The proof of Theorem \ref{thm:sloshingmain} relies on a construction of quasimodes, that is, approximate Steklov eigenfunctions corresponding to approximate Steklov eigenvalues.
These quasimodes are constructed by using what physicists call the method of matched asymptotic expansions: namely, one may glue a traveling wave along $S$ to solutions of model problems near its endpoints. Conveniently, appropriate model solutions near the corners were originally given by Peters \cite{Pe1950}. The gluing construction, in the end, produces a sequence of quasimodes which are accurate enough to show that there is a real Steklov eigenvalue near each quasi-eigenvalue. A major challenge in the proof, and in any similar quasimode construction, is that it is difficult to show that this construction does not ``miss'' any Steklov eigenvalue -- in essence, that the set of quasimodes is complete. In this case this difficulty is handled by combining ODE techniques based on \cite{Na1967} with domain monotonicity. \subsubsection{Steklov spectral asymptotics for polygons} The methods used to tackle the sloshing problem in \cite{LPPS2017} were further developed by the same authors to give precise Steklov spectral asymptotics for curvilinear polygons \cite{LPPS2019}. Throughout this subsection, we let $\Omega$ be a curvilinear polygon, piecewise smooth, with finitely many nonzero interior angles $\alpha_j$ and side lengths $\ell_j$, arranged so that the $\ell_j$ appear in clockwise order and $\alpha_j$ is the angle between $\ell_j$ and $\ell_{j+1}$. \begin{thm}\label{thm:steklovpolymain} \cite[Theorem 1.4]{LPPS2019} Suppose that $\Omega$ is a curvilinear polygon all of whose interior angles are in $(0,\pi)$. Then there exists a sequence of quasi-eigenvalues $\{\hat\sigma_k\}$, which may be explicitly computed in terms of the side lengths and the angles, as well as an explicitly computable $\epsilon_0>0$, for which \[\sigma_k=\hat\sigma_k + O(k^{-\epsilon_0}).\] \end{thm} Although $\hat\sigma_k$ may be computed in all circumstances, giving a full account is notationally complicated.
Here we restrict to the case where no $\alpha_j$ has the form $\pi/n$ for $n\in\mathbb N$. In this case, we define matrices as follows: \[A(\alpha) = \begin{bmatrix} \csc(\frac{\pi^2}{2\alpha}) & -i\cot(\frac{\pi^2}{2\alpha}) \\ i\cot(\frac{\pi^2}{2\alpha}) & \csc(\frac{\pi^2}{2\alpha})\end{bmatrix};\quad B(\ell,\sigma) = \begin{bmatrix} e^{i\ell\sigma} & 0 \\ 0 &e^{-i\ell\sigma} \end{bmatrix};\quad C(\alpha,\ell,\sigma)=A(\alpha)B(\ell,\sigma).\] Then for any given $\Omega$ with angles $\vec\alpha=(\alpha_1,\ldots,\alpha_n)$ and side lengths $\vec\ell=(\ell_1,\ldots,\ell_n)$, we can define the parameter-dependent matrix $T(\vec\alpha,\vec\ell,\sigma)$ by \[T(\vec\alpha,\vec\ell,\sigma) = C(\alpha_n,\ell_n,\sigma)C(\alpha_{n-1},\ell_{n-1},\sigma)\dots C(\alpha_1,\ell_1,\sigma).\] With this, we define: \begin{definition} Suppose that no $\alpha_j$ is equal to $\pi/n$. Then the \emph{quasi-eigenvalues} $\hat\sigma_k$ of $\Omega$ are the non-negative values of $\sigma$ for which 1 is an eigenvalue of $T(\vec\alpha,\vec\ell,\sigma)$. \end{definition} The multiplicity of $\hat\sigma_k$ is defined to be the geometric multiplicity of 1 as an eigenvalue of $T(\hat\sigma_k)$, except that if $\hat\sigma_1=0$ it has multiplicity 1. Note also that 1 is an eigenvalue of $T(\sigma)$ if and only if the trace of $T(\sigma)$ equals 2, which assists in computations \cite{LPPS2019}. There are similar, if more notationally involved, expressions for $\hat\sigma_k$ when one or more of the angles $\alpha_j$ has form $\pi/n$. The condition that $T(\vec\alpha,\vec\ell,\sigma)$ have eigenvalue 1 should be viewed as a quantisation condition. Along each side of a polygon, a Steklov eigenfunction of sufficiently high energy $\sigma$ will look like a traveling wave with frequency $\sigma$. The matrices $A(\alpha)$ should be viewed as transmission matrices at the corners, and $B(\ell,\sigma)$ as matrices representing propagation along each side.
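The transfer-matrix formalism above is easy to experiment with numerically. The sketch below (with made-up angles and side lengths, not an example from \cite{LPPS2019}) builds $T(\vec\alpha,\vec\ell,\sigma)$ and brackets quasi-eigenvalues using the fact just quoted: since each factor $C$ has determinant $1$ and the product has real trace, $1$ is an eigenvalue of $T(\sigma)$ exactly when $\operatorname{tr}T(\sigma)=2$.

```python
import numpy as np

def A(alpha):
    # transmission matrix at a corner of interior angle alpha (alpha != pi/n assumed)
    mu = np.pi**2 / (2.0 * alpha)
    return np.array([[1.0 / np.sin(mu), -1j / np.tan(mu)],
                     [1j / np.tan(mu),   1.0 / np.sin(mu)]])

def B(ell, sigma):
    # propagation of a wave of frequency sigma along a side of length ell
    return np.diag([np.exp(1j * ell * sigma), np.exp(-1j * ell * sigma)])

def T(alphas, ells, sigma):
    # T = C_n ... C_1 with C_j = A(alpha_j) B(ell_j, sigma)
    M = np.eye(2, dtype=complex)
    for alpha, ell in zip(alphas, ells):
        M = A(alpha) @ B(ell, sigma) @ M
    return M

alphas = [0.6, 0.8, 1.1]   # made-up angles in (0, pi), none of the form pi/n
ells = [1.0, 1.3, 1.7]     # made-up side lengths
sigmas = np.linspace(0.01, 10.0, 20001)
f = np.array([T(alphas, ells, s).trace().real - 2.0 for s in sigmas])
# sign changes of tr T(sigma) - 2 bracket the quasi-eigenvalues
brackets = sigmas[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
print(brackets)
```

Each sign change brackets one quasi-eigenvalue, which can then be refined by bisection.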
The condition that 1 be an eigenvalue of $T$ is the condition that when the wave propagates all the way around the polygon, it comes back with the same phase. The proof of Theorem \ref{thm:steklovpolymain} is again a quasimode construction, gluing together traveling waves along each side with model solutions in each corner. The model solutions in the corner are obtained by doubling the Peters solutions used in the sloshing problem. There are a number of technical complications related to the curvilinear boundary, and proving completeness is again quite difficult, but all this can be overcome. A natural open question is the following: \begin{ques}\label{ques:polygonsbigangles} Can the condition that the angles of $\Omega$ are in $(0,\pi)$ be relaxed to $(0,2\pi)$? \end{ques} Based on numerical evidence the answer appears to be ``yes", but the proof techniques used here depend very strongly on the angles being less than $\pi$. \subsubsection{Inverse spectral problem for polygons} As discussed earlier, it is possible to recover the boundary lengths of a surface from its Steklov spectrum. This poses the natural question: what geometric information can be recovered from the Steklov spectrum of a polygon $\Omega$? In particular, can the side lengths and/or angles be recovered? The following theorem, due to Krymski, Levitin, Parnovski, Polterovich, and Sher \cite{KLPPS2021}, indicates that the answer is \emph{generically} yes, at least up to a natural symmetry involving the angles. \begin{thm} \cite[Theorem 1.9]{KLPPS2021} Suppose that $\Omega$ is a curvilinear polygon, with all interior angles in $(0,\pi)$, and with no interior angle equal to $\pi/n$ for any $n$. Suppose also that the lengths $\{\ell_1,\ldots,\ell_n\}$ are incommensurable over $\{-1,0,1\}$. Then: \begin{enumerate} \item The side lengths $\ell_j$, in order, may be recovered from the Steklov spectrum. 
\item The vector \[\cos(\vec\alpha):=(\cos\frac{\pi^2}{2\alpha_1},\ldots,\cos\frac{\pi^2}{2\alpha_n})\] may be recovered from the Steklov spectrum. \end{enumerate} Moreover, the procedure to recover these is constructive. \end{thm} The procedure has two steps. First, the Steklov spectrum is used to reconstruct $T(\vec\alpha,\vec\ell,\sigma)$, whose entries are trigonometric polynomials in $\sigma$, via the construction of an infinite product and use of the Hadamard-Weierstrass factorisation theorem. Second, the vectors $\cos(\vec\alpha)$ and $\vec\ell$ are recovered from $T(\vec\alpha,\vec\ell,\sigma)$. This second recovery is fully algorithmic and can be implemented by computer. \begin{remark} \begin{itemize} \item Observe that there is no hope of recovering $\vec\alpha$ from $T(\vec\alpha,\vec\ell,\sigma)$, as the definition of $T$ may be written so that it only involves $\cos(\vec\alpha)$. \item There are further results when one or more of the angles is $\pi/n$ for $n$ even; see \cite{KLPPS2021} for details. \item The condition about incommensurability is necessary, as without this assumption one may construct two different polygons with the same $T(\sigma)$. The same is true if one or more angles are $\pi/n$ for $n$ odd. See \cite[Subsection 3.3]{KLPPS2021}. \end{itemize} \end{remark} Many questions here are still open, for example: \begin{ques}\label{ques:recoverangles} Can the angles themselves, and not just $\cos(\pi^2/2\alpha)$, be recovered from the Steklov spectrum? \end{ques} In order to address this, one will need to use some sort of non-asymptotic information from the Steklov spectrum, as the asymptotic behaviour does not distinguish between two different angles with the same value of $\cos(\pi^2/2\alpha)$. \subsection{Inverse problems on orbifolds}\label{subsec:orb pos} Orbifolds are a generalisation of manifolds in which certain types of singularities may occur.
The simplest examples of orbifolds, the so-called ``good orbifolds'', are quotients $\mathcal{O}:=H\backslash M$ of a manifold $M$ by a discrete group $H$ acting smoothly and effectively with only finite isotropy. (``Effective'' means that the identity element of $H$ is the only element that acts as the identity transformation.) Points in $\mathcal{O}$ can be viewed as $H$-orbits of the action of $H$ on $M$, hence the name ``orbifold''. Letting $\pi: M\to \mathcal{O}$ be the projection, the boundary of $\mathcal{O}$ is given by \begin{equation}\label{eq.boundary}\partial\mathcal{O}=\pi(\partial M).\end{equation} Singularities occur at elements of $\mathcal{O}$ for which the corresponding orbit has non-trivial isotropy in $H$. The \emph{isotropy type} of the singularity is the isomorphism class of the isotropy group. Arbitrary $n$-dimensional orbifolds locally have the structure of good orbifolds. Every point in the interior of an orbifold has a neighborhood $U$ modelled on a quotient $\Gamma\backslash\widetilde{U}$ of an open set $\widetilde{U} \subset\R^n$ by an effective action of a (possibly trivial) finite group $\Gamma$. A neighborhood of each boundary point has a similar model but with the role of $\R^n$ played by a closed half-space of $\R^n$. Orbifolds are stratified spaces: the regular points form an open dense stratum, and the singular strata are connected components of singular points with a given isotropy type. \begin{ex}\label{2dorb} In the two-dimensional case, all singular strata are classified as follows: \begin{enumerate} \item Cone points $p$ of order $m$, $2\leq m\in \mathbf Z^+$: A neighborhood of $p$ is modelled on the quotient of a disk by the cyclic group of order $m$ generated by rotation through the angle $2\pi/m$. \item Reflectors: These are 1-dimensional singular strata. A neighborhood of a point in the reflector is modelled on the quotient of a disk by reflection across a diameter.
\item Dihedral points: These are points where two reflectors come together at an angle of $\frac{\pi}{m}$ for some $m\in\mathbf Z^+$. A neighborhood of the point is modelled on the quotient of a ball by a dihedral group of order $2m$. \end{enumerate} For $n$-dimensional orbifolds, one can have singular strata of any dimension $<n$. The only strata of co-dimension one are reflectors (modelled on quotients of a ball by reflection across a plane through the center). \end{ex} Riemannian metrics on a good orbifold $\mathcal{O}:=H\backslash M$ are defined by $H$-invariant Riemannian metrics on $M$. Letting $\pi:M\to \mathcal{O}$ be the projection, a function $f:\mathcal{O}\to\R$ is said to be $C^\infty$ if $\pi^*f\in C^\infty(M)$. Since the Laplacian of $M$ commutes with the isometric action of $H$, one can define a Laplace operator on $C^\infty(\mathcal{O})$ by $\pi^*(\Delta_\mathcal{O} f) =\Delta_M (\pi^*f)$, where $\Delta_M$ is the Laplace-Beltrami operator of $M$. On arbitrary orbifolds, Riemannian metrics are defined locally using orbifold charts, subject to a compatibility condition to obtain a global Riemannian metric. The Laplacian is similarly defined. The Dirichlet-to-Neumann operator and Steklov spectrum on orbifolds were introduced in \cite{ADGHRS2019} by Arias-Marco, Dryden, Gordon, Hassannezhad, Ray, and Stanhope. The Dirichlet-to-Neumann operator on compact Riemannian orbifolds is a pseudo-differential operator with discrete spectrum. The variational characterisation of eigenvalues~\eqref{eq:Stek Rayleigh min max} remains valid in the orbifold setting. \begin{remark}\label{paorb} Any orbifold $\mathcal{O}$ whose only singular strata are reflectors is a good orbifold. Moreover, the underlying topological space $\underline{\mathcal{O}}$ of $\mathcal{O}$ has the structure of a manifold with boundary. The boundary $\partial\underline{\mathcal{O}}$ is the union of the orbifold boundary $\partial\mathcal{O}$ and the reflector strata.
For example, a ``half-disk'' orbifold, the quotient of a disk by reflection, is pictured in Figure~\ref{fig:halfdisk}; the reflector stratum is indicated by a double line. \begin{figure} \includegraphics[width=3cm]{Figures/halfdisk.pdf} \caption{The half-disk orbifold; the double line indicates the reflector stratum.} \label{fig:halfdisk} \end{figure} Functions $f\in C^\infty(\mathcal{O})$ must have normal derivative zero on the reflectors since they pull back to reflection-invariant functions on the double $M$. Thus the Steklov eigenvalue problem on $\mathcal{O}$ is equivalent to a mixed Steklov-Neumann eigenvalue problem on the manifold $\underline{\mathcal{O}}$. \end{remark} For both the Laplace spectrum and the Steklov spectrum, it is natural to ask: \begin{itemize}\item \emph{Does the spectrum detect the presence of singularities? Equivalently, are Riemannian orbifolds with singularities spectrally distinguishable from Riemannian manifolds?} \end{itemize} This question remains open even in the case of the Laplace spectrum, although there are many partial results. The heat asymptotics for closed Riemannian orbifolds were developed by Donnelly \cite{Do1976} in the case of good Riemannian orbifolds and generalised to arbitrary closed Riemannian orbifolds by Dryden, Gordon, Greenwald, and Webb (\cite{DGGW2008} and errata \cite{DGGW2017}). From the heat asymptotics, one can show, for example, that the Laplace spectrum detects whether an orbifold contains any singular strata of odd co-dimension. In particular, the Laplace spectrum distinguishes orbifolds containing such strata from manifolds. In the setting of the Steklov spectrum, the question above can be split into two questions: \begin{ques}\label{ques: orb boundary} Does the Steklov spectrum detect the presence of singularities on the boundary of a compact Riemannian orbifold? \end{ques} \begin{ques}\label{ques: orb interior} Does the Steklov spectrum detect the presence of singularities in the interior of a compact Riemannian orbifold?
\end{ques} These questions were studied in the case of orbisurfaces, i.e., two-dimensional orbifolds, in \cite{ADGHRS2019}. Before stating the results, we note that every component of the boundary of an orbisurface is a closed one-dimensional orbifold and has one of the following two structures: \begin{itemize} \item Type I: a circle. This type has no singularities. \item Type II: a ``half-circle" $\mathbf Z_2\backslash S^1$, the quotient of a circle by a reflection. This type has two singularities, both of which are reflector points. \end{itemize} For example, the half-disk orbifold in Remark~\ref{paorb} has boundary of the second type. The following proposition and corollary generalise the results of Girouard, Parnovski, Polterovich and Sher discussed in Remark~\ref{rem:gpps2014}. \begin{prop}\cite[Theorem 1.2]{ADGHRS2019}\label{adghrs} Let $\mathcal{O}$ be a compact Riemannian orbisurface. Suppose that $\partial\mathcal{O}$ consists of $r$ type I components of lengths $\ell_1,\dots,\ell_r$ and $s$ type II components of lengths $\ell_1',\dots,\ell_s'$. Let $U$ be the disjoint union of $r$ disks of circumference $\ell_1,\dots, \ell_r$ and $s$ half-disk orbifolds of boundary length $\ell_1',\dots,\ell_s'$. Then $$|\sigma_k(\mathcal{O})-\sigma_k(U)| =O(k^{-\infty}).$$ \end{prop} \begin{cor}\label{cor:stek orbisurface} The Steklov spectrum of a compact Riemannian orbisurface determines: \begin{enumerate} \item the number of boundary components of each type and their lengths up to an equivalence. \item the number of singular points on the boundary. \end{enumerate} \end{cor} The equivalence relation in the statement of the corollary is generated by the following: a single circular boundary component of length $\ell_1$ together with a pair of type II boundary components each of length $\ell_2$ is equivalent to a single type I boundary component of length $2\ell_2$ together with two type II components each of length $\frac{1}{2}\ell_1$.
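The equivalence relation just described can be checked directly on the model spectra. A disk of boundary length $\ell$ has Steklov spectrum $\{0,\,2\pi/\ell,\,2\pi/\ell,\,4\pi/\ell,\,4\pi/\ell,\dots\}$, and the half-disk orbifold of boundary length $\ell$ (the reflection quotient of the disk of circumference $2\ell$, cf.\ Remark~\ref{paorb}) retains only the reflection-invariant eigenfunctions $r^k\cos k\theta$, giving the simple spectrum $\{\pi k/\ell : k=0,1,2,\dots\}$. The following Python sketch is an illustrative computation with these closed-form model spectra (not code from the cited papers); it verifies that equivalent boundary configurations have identical spectra:

```python
import numpy as np

def disk_spectrum(L, kmax):
    # Steklov spectrum of a round disk of boundary length L:
    # 0, then 2*pi*k/L with multiplicity 2 (eigenfunctions r^k cos/sin(k theta))
    return [0.0] + [2*np.pi*k/L for k in range(1, kmax+1) for _ in (0, 1)]

def half_disk_spectrum(L, kmax):
    # Half-disk orbifold of boundary length L: reflection quotient of the
    # disk of circumference 2L; only r^k cos(k theta) descends, so each
    # eigenvalue pi*k/L is simple.
    return [np.pi*k/L for k in range(kmax+1)]

def config_spectrum(disk_lengths, half_lengths, kmax):
    # sorted union (with multiplicity) of the model spectra of a
    # disjoint union of disks and half-disk orbifolds
    s = []
    for L in disk_lengths:
        s += disk_spectrum(L, kmax)
    for L in half_lengths:
        s += half_disk_spectrum(L, kmax)
    return sorted(s)
```

Here a disk of length $\ell_1=2$ with two half-disks of length $\ell_2=3$ matches a disk of length $2\ell_2=6$ with two half-disks of length $\frac{1}{2}\ell_1=1$.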
The first statement cannot be improved since the disjoint union of a disk of boundary length $\ell_1$ and two half-disk orbifolds each of boundary length $\ell_2$ is Steklov isospectral to the disjoint union of a disk and two half-disks with the equivalent boundary lengths. Next, consider Question~\ref{ques: orb interior} for orbisurfaces. By Example~\ref{2dorb}, there are three types of singularities to consider. The first of these, orbifold cone points, can be viewed as a special case of conical singularities. As discussed in Remark~\ref{rem:con sing}, they can be removed by a $\sigma$-isometry and thus are not detected by the Steklov spectrum. Dihedral points can similarly be removed; the two reflectors that meet at the dihedral point then become a single reflector. Thus every orbisurface is $\sigma$-isometric to an orbisurface whose only interior singularities are reflectors. It is not known whether the Steklov spectrum detects reflectors that do not intersect the orbifold boundary. In view of Remark~\ref{paorb}, this question is a special case of the following much more general question that we pose in arbitrary dimension: \begin{ques}\label{ques:mixed vs pure} Can a mixed Steklov-Neumann problem on a compact Riemannian manifold ${\Omega}_1$ and a pure Steklov problem on a compact Riemannian manifold ${\Omega}_2$ have the same spectrum? \end{ques} \smallskip In dimension greater than two, Questions~\ref{ques: orb boundary} and \ref{ques: orb interior} are completely open. One can also ask: \begin{ques}\label{ques.sing type} Among Riemannian orbifolds with singularities, to what extent does the Steklov spectrum recognise the types of singularities? \end{ques} As the heat asymptotics yielded some positive inverse spectral results for the Laplacian on orbifolds, one might start with the following: \begin{prob}\label{prob: orb heat} Develop Steklov heat asymptotics for compact Riemannian orbifolds with boundary.
\end{prob} \section{Inverse problems: negative results}\label{sec: isospectrality} In this section, we address constructions of Steklov isospectral Riemannian manifolds that are not isometric. Such constructions enable us to identify geometric or topological invariants that are not spectrally determined. Recall by Corollary~\ref{cor:sigmaisom} that $\sigma$-isometric Riemannian surfaces (as defined in Definition~\ref{sigmaisom}) are necessarily Steklov isospectral. Thus when working in dimension two, our interest will be in Steklov isospectral surfaces that are not $\sigma$-isometric. The current state of the art for constructing Steklov isospectral manifolds is almost identical to that for Laplace isospectrality. There are two general techniques: the Sunada technique (and various generalisations) and the torus action technique. These techniques were initially introduced to construct Laplace isospectral manifolds, with or without boundary. (In the case of non-trivial boundary, the methods generally produce manifolds that are both Neumann and Dirichlet isospectral; moreover, the boundaries of the manifolds are Laplace isospectral as well.) Both methods are quite robust in that they yield pairs or families of Riemannian manifolds that are simultaneously isospectral for a wide range of spectral problems. In particular, in the case of manifolds with boundary, the manifolds constructed by these methods are now known to be Steklov isospectral as well. Beyond these techniques, most constructions of either Laplace isospectral or Steklov isospectral manifolds in the literature are via ad hoc methods, e.g., direct computations of the spectra. There is a very large literature on Riemannian manifolds constructed via the Sunada and torus action techniques. While most of this literature pre-dated the application to the Steklov problem, it immediately yields a vast collection of examples of Steklov isospectral manifolds.
In particular, almost all known Laplace isospectral Riemannian manifolds with boundary are also Steklov isospectral. (Plane domains are a notable exception, as we will discuss later in this section.) However, it is disappointing that we do not currently have methods for isospectral constructions specific to the Steklov problem. In particular, the following questions remain open: \begin{ques}\label{ques.NotLapIsosp} Do there exist Steklov isospectral Riemannian manifolds whose boundaries are not Laplace isospectral? \end{ques} This question can be rephrased as: \emph{Does the Steklov spectrum of a Riemannian manifold determine the Laplace spectrum of its boundary?} \smallskip \begin{ques}\label{ques.NotNeumIsosp} Do there exist Steklov isospectral Riemannian manifolds that are not Neumann isospectral (other than $\sigma$-isometric surfaces)? \end{ques} The analogous question with ``Neumann'' replaced by ``Dirichlet'' is also open. Question~\ref{ques.NotLapIsosp} was first posed in the earlier survey \cite{GiPo2017}. We are hesitant to conjecture an answer to this question. Throughout this survey, we have seen many relationships between the Steklov spectrum of a Riemannian manifold $({\Omega},g)$ and the Laplace spectrum of $(\partial{\Omega},g)$, so this question is natural and interesting. On the other hand, there is no obvious connection between the Steklov spectrum and the Neumann or Dirichlet spectrum. Thus one would expect Question~\ref{ques.NotNeumIsosp} to have a positive answer. The fact that the question is open seems primarily indicative of the challenge of coming up with new methods for constructing Steklov isospectral manifolds. 
Furthermore, all known examples of Steklov isospectral manifolds satisfy the following notion of ``strong Steklov isospectrality'': \begin{nota}\label{nota.strongStek} Given a compact Riemannian manifold ${\Omega}$ with boundary and $\alpha\in \R$, consider the \emph{Steklov eigenvalue problem with potential} $\alpha$: \begin{equation}\label{eq.alpha} \begin{cases}\Delta u=\alpha\,u \mbox{\,\,in\,\,}{\Omega}\\ \partial_\nu u=\sigma u \mbox{\,\,on\,\,}\partial{\Omega} \end{cases} \end{equation} This is a well-defined problem with discrete spectrum, provided that $\alpha$ is not a Dirichlet eigenvalue for the Laplacian on ${\Omega}$. We will denote the spectrum by $\stek_\alpha({\Omega})$. One has an orthonormal basis of $L^2(\partial{\Omega})$ consisting of $\alpha$-Steklov eigenfunctions (more precisely, of the restrictions to $\partial{\Omega}$ of the $\alpha$-Steklov eigenfunctions). Observe that $\stek_0({\Omega})=\stek({\Omega})$. We will say that two compact Riemannian manifolds ${\Omega}$ and ${\Omega}'$ are \emph{strongly Steklov isospectral} if they have the same Dirichlet spectrum and $\stek_\alpha({\Omega})=\stek_{\alpha}({\Omega}')$ for all $\alpha$ not in their common Dirichlet spectrum. \end{nota} \begin{remark} If one fixes $\sigma$ and treats $\alpha$ as the unknown, then Equation~\eqref{eq.alpha} becomes the Robin eigenvalue problem with parameter $\sigma$. Thus ${\Omega}$ and ${\Omega}'$ are strongly Steklov isospectral if and only if they are isospectral for the Robin problem for every parameter $\sigma$. \end{remark} We note that except when $\alpha=0$, the spectrum $\stek_\alpha({\Omega},g)$ of a Riemannian surface $({\Omega},g)$ is \emph{not} invariant under $\sigma$-isometries. (See Definition~\ref{sigmaisom}.) \begin{ques}\label{stek not strong} Do there exist Steklov isospectral Riemannian manifolds that are not strongly Steklov isospectral (other than $\sigma$-isometric surfaces)?
\end{ques} There is no reason to expect the Steklov spectrum to determine all the $\alpha$-Steklov spectra. As in the case of Question~\ref{ques.NotNeumIsosp}, new methods are needed to produce counterexamples. The outline of this section is as follows: In Subsection~\ref{subsec.prod}, we will see that strong Steklov isospectrality behaves better than simple (i.e., not strong) Steklov isospectrality under Riemannian direct products. We will discuss the Sunada technique and the torus action technique in Subsections~\ref{subsec.sun} and \ref{subsec.torus}, respectively. In Subsection~\ref{ballquots}, we will see that the Steklov spectra on $p$-forms of different degrees encode different information about the manifold. \subsection{Product manifolds.}\label{subsec.prod} As noted in the discussion following Example~\ref{example: cylinder}, the earliest known non-trivial constructions of Steklov isospectral manifolds were pairs of cylinders $N\times I$ and $N'\times I$, where $N$ and $N'$ are any pair of non-isometric Laplace isospectral closed manifolds and $I$ is an interval. One easily shows that if ${\Omega}$ is any \emph{fixed} compact Riemannian manifold with boundary, then $N\times {\Omega}$ and $N'\times {\Omega}$ are also Steklov isospectral. (This in fact follows from the proof of Proposition~\ref{isosp_prod} below.) Strong Steklov isospectrality is preserved by much more general products: \begin{prop}\label{isosp_prod} Let $N$ and $N'$ be Laplace isospectral closed Riemannian manifolds and let ${\Omega}$ and ${\Omega}'$ be strongly Steklov isospectral compact Riemannian manifolds with boundary. Then $N\times {\Omega}$ is strongly Steklov isospectral to $N'\times {\Omega}'$. \end{prop} \begin{proof} Denote by $\spec(N)$ the Laplace spectrum of $N$ (and of $N'$). The definition of strong Steklov isospectrality in Notation~\ref{nota.strongStek} implies that ${\Omega}$ and ${\Omega}'$ have the same Dirichlet spectrum, which we denote by $\spec_D({\Omega})$.
We then have $$\spec_D(N\times {\Omega})=\spec_D(N\times {\Omega}') =\{\lambda +\beta: \lambda\in \spec(N), \,\beta\in \spec_D({\Omega})\}.$$ We fix $\alpha\not\in\spec_D(N\times {\Omega})$ and solve the eigenvalue problem~\eqref{eq.alpha} by separation of variables. For $f\in C^\infty(N)$ and $g\in C^\infty({\Omega})$, we find that $fg\in C^\infty(N\times {\Omega})$ is an eigenfunction for~\eqref{eq.alpha} if there exists $\lambda\in \spec(N)$ such that $\Delta_N f=\lambda f$ and such that $g$ is an eigenfunction of the Steklov eigenvalue problem on ${\Omega}$ with potential $\alpha-\lambda$. The $(\alpha-\lambda)$-Steklov eigenvalue problem is well-defined since $\alpha-\lambda$ is not in the Dirichlet spectrum of ${\Omega}$. Since the restrictions to $\partial(N\times{\Omega})$ of the resulting eigenfunctions span $L^2(\partial(N\times{\Omega}))$, we conclude that \begin{equation}\label{sepvar}\stek_\alpha(N\times {\Omega}) = \bigcup_{\lambda\in \spec(N)}\,\stek_{\alpha-\lambda} ({\Omega})\,=\, \stek_\alpha(N\times {\Omega}')\end{equation} where the eigenvalues are repeated according to multiplicity. \end{proof} Equation~\eqref{sepvar} shows that the Steklov spectrum of $N\times {\Omega}$ depends on the strong Steklov spectrum of ${\Omega}$ as well as the Laplace spectrum of $N$, so there is no analogue of Proposition~\ref{isosp_prod} for simple (i.e., not strong) Steklov isospectrality even if one assumes that $N=N'$. \subsection{Sunada's technique}\label{subsec.sun} Toshikazu Sunada \cite{Su1985} introduced an elegant and simple technique to construct pairs of Laplace isospectral compact manifolds with a common Riemannian covering. If the manifolds have boundary, the technique works with both Dirichlet and Neumann boundary conditions.
Moreover, the manifolds constructed by this technique are strongly isospectral: they are isospectral with respect to all natural self-adjoint elliptic differential operators; e.g., they have the same Hodge spectra on $p$-forms for all $p$. The technique remains the most widely used method for isospectral constructions. \begin{defn} Subgroups $H_1$ and $H_2$ of a finite group $G$ are said to be \emph{almost conjugate} in $G$ if each $G$-conjugacy class intersects $H_1$ and $H_2$ in the same number of elements. \end{defn} The following version of the Sunada Theorem has the same hypotheses as Sunada's original theorem (except of course that we require the boundary to be non-trivial). \begin{thm}\label{thm.sun}\cite[Theorem 2.3]{GoHeWe2021} Assume that $H_1$ and $H_2$ are almost conjugate subgroups of a finite group $G$. Suppose that $G$ acts by isometries on a compact $(d+1)$-dimensional Riemannian manifold ${\Omega}$ with boundary. Then: \begin{enumerate} \item $H_1\backslash {\Omega}$ and $H_2\backslash{\Omega}$ are strongly Steklov isospectral; \item $\stek_p^{\rs}(H_1\backslash {\Omega})= \stek_p^{\rs}(H_2\backslash {\Omega})$ and $\stek_p^{K}(H_1\backslash {\Omega})= \stek_p^{K}(H_2\backslash {\Omega})$ for every $p\in \{1,\dots, d\}$, where $\stek_p^{\rs}$ and $\stek_p^K$ denote the Steklov spectra on $p$-forms defined by Raulot-Savo and by Karpukhin, respectively. (See Section~\ref{stek.forms}.) \end{enumerate} \end{thm} The theorem becomes trivial if $H_1$ and $H_2$ are conjugate subgroups of $G$; i.e., the isospectral quotients are isometric in that case. When the subgroups are only almost conjugate, the quotient manifolds will usually be non-isometric, but one must always check. If $H_1$ and $H_2$ do not act freely on ${\Omega}$, then $H_1\backslash {\Omega}$ and $H_2\backslash{\Omega}$ will be orbifolds with singularities; otherwise they are smooth manifolds. There is a huge literature on isospectral manifolds based on Sunada's theorem.
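A classical concrete instance, going back to Gassmann, consists of two subgroups of $G=S_6$ of order four whose non-identity elements are all products of two disjoint transpositions. Since conjugacy classes in $S_6$ are exactly the cycle types, almost conjugacy reduces to counting cycle types, and non-conjugacy can be checked by brute force over all $720$ elements of $S_6$. The following self-contained Python sketch (an illustration with zero-based permutation conventions chosen here) verifies both properties:

```python
from collections import Counter
from itertools import permutations

def compose(p, q):
    # (p*q)(i) = p(q(i)); permutations stored as tuples of images of 0..n-1
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def cycle_type(p):
    # sorted tuple of cycle lengths; in S_n this determines the conjugacy class
    seen, ct = [False] * len(p), []
    for i in range(len(p)):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j], j, length = True, p[j], length + 1
            ct.append(length)
    return tuple(sorted(ct))

def perm(*cycles, n=6):
    # build a permutation of {0,...,n-1} from disjoint cycles
    p = list(range(n))
    for c in cycles:
        for i in range(len(c)):
            p[c[i]] = c[(i + 1) % len(c)]
    return tuple(p)

e = tuple(range(6))
# Gassmann's subgroups of S_6: each meets every conjugacy class
# (identity once, double transpositions three times) equally often.
H1 = {e, perm((0,1),(2,3)), perm((0,2),(1,3)), perm((0,3),(1,2))}
H2 = {e, perm((0,1),(2,3)), perm((0,1),(4,5)), perm((2,3),(4,5))}

almost_conjugate = Counter(map(cycle_type, H1)) == Counter(map(cycle_type, H2))
conjugate = any(
    {compose(compose(g, h), inverse(g)) for h in H1} == H2
    for g in permutations(range(6))
)
```

Every element of $H_1$ fixes the last two points, while the elements of $H_2$ have no common fixed point; since conjugation preserves the number of common fixed points, this is the structural reason the brute-force search finds no conjugating element $g$.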
Many of these use a delightful method introduced by Peter Buser \cite{Bu1986}, which constructs examples using the patterns of the Schreier graphs of the coset spaces $H_1\backslash G$ and $H_2\backslash G$, where $(G,H_1,H_2)$ satisfies the hypothesis of Sunada's theorem. Figure~\ref{fig:Buser_surfaces} (reprinted from \cite{GoWeWo1992}) illustrates Buser's construction of a pair of Neumann and Dirichlet isospectral flat surfaces embedded in $\R^3$. Theorem~\ref{thm.sun} implies that they are also Steklov isospectral. One can easily adjust the shape of the building block (a cross in this example) so that the resulting surfaces have smooth boundary. Sunada's technique enables one to identify global invariants that are not determined by the Steklov spectrum (as well as by the many other spectra to which the technique applies). For example, the following invariants are not spectrally determined: \begin{itemize} \item the diameter of the manifold and the intrinsic diameter of its boundary (as evident in Buser's example in Figure~\ref{fig:Buser_surfaces}); \item the fundamental group of the manifold. (Such examples can arise when the almost conjugate subgroups $H_1$ and $H_2$ used in the construction are non-isomorphic.) \end{itemize} \begin{figure} \includegraphics[width=7cm]{Figures/Buser_surfaces.pdf} \caption{Isospectral surfaces (reprinted from \cite{GoWeWo1992}).} \label{fig:Buser_surfaces} \end{figure} \begin{remark}\label{sun.orb}~ One can generalise Sunada's Theorem to the setting of variational eigenvalue problems associated with admissible Radon measures as follows: In addition to the hypotheses of Theorem~\ref{thm.sun}, assume that $\mu$ is a $G$-invariant admissible Radon measure on ${\Omega}$. The induced Radon measures on $H_1\backslash {\Omega}$ and $H_2\backslash{\Omega}$ will then have the same variational spectrum.
(See Subsection~\ref{subsection:introvareigenradon} and Section~\ref{Section:Radon}, in particular Definition~\ref{def:admissible} for the concept of variational eigenvalue problems associated with such measures.) \end{remark} We end our discussion of Sunada's Theorem by remarking on the following open question: \begin{ques}\label{ques:isospplanedomains} Do there exist non-isometric Steklov isospectral plane domains? \end{ques} Sunada's technique has been used to construct pairs of Neumann and Dirichlet isospectral plane domains. We have seen that Neumann and Dirichlet isospectral manifolds constructed via Sunada's technique are generally Steklov isospectral as well. However, there is a subtlety in the application of Sunada's technique in this case that does not allow us to conclude Steklov isospectrality. Consider the pair of orbifolds $\mathcal{O}_1$ and $\mathcal{O}_2$ in Figure~\ref{fig:planar} (reprinted from \cite{GoWeWo1992}), obtained as quotients by reflection of Buser's surfaces ${\Omega}_1$ and ${\Omega}_2$ shown in Figure~\ref{fig:Buser_surfaces}. \begin{figure} \includegraphics[width=9cm]{Figures/Plane_domains.pdf} \caption{Isospectral orbifolds (reprinted from \cite{GoWeWo1992}).} \label{fig:planar} \end{figure} The edges marked by double lines are reflectors. Let $D_1$ and $D_2$ be the underlying plane domains. In \cite{GoWeWo1992}, Gordon, Webb, and Wolpert used Sunada's technique, along with Remark~\ref{paorb}, to show that $D_1$ and $D_2$ are Neumann isospectral and also isospectral for the mixed Dirichlet-Neumann problem, with Neumann conditions on the edges that correspond to reflectors.
Dirichlet isospectrality of $D_1$ and $D_2$ was then proved by using the facts that (i) Buser's surfaces are Dirichlet isospectral, (ii) the domains are Dirichlet-Neumann isospectral, and (iii) the mixed Dirichlet-Neumann, respectively, Dirichlet, spectra of the domains correspond to the part of the Dirichlet spectra of Buser's surfaces for which the eigenfunctions lie in the subspace of reflection-invariant, respectively anti-invariant, functions. As noted in \cite{GoHeWe2021}, Sunada's Theorem along with Remark~\ref{paorb} does show that $D_1$ and $D_2$ are isospectral for the mixed Steklov-Neumann problem, with Neumann conditions on the edges corresponding to reflectors. Although not specifically mentioned in \cite{GoHeWe2021}, they are also isospectral for the analogous mixed Steklov-Dirichlet problem; this follows by the same argument used to prove Dirichlet isospectrality of $D_1$ and $D_2$ above. \subsection{Torus action technique}\label{subsec.torus} The torus action technique yields isospectral manifolds that differ in their local as well as global geometry, thus allowing one to identify curvature properties that are not spectral invariants. The method was first introduced for the Laplacian in a very special setting in \cite{Go1994} and then generalised in a series of papers by numerous authors. The most general version is due to Schueth \cite{Sc2001}. (See \cite{Sc2001} for further history and references.) The Steklov version below is the analog in the Steklov setting of Schueth's method. View \emph{tori} as compact, connected, abelian Lie groups. Let~$T$ be a torus acting effectively by isometries on a compact, connected Riemannian manifold~${\Omega}$. The union of those orbits on which $T$ acts freely is an open, dense submanifold of ${\Omega}$ that we will denote by $\widehat{{\Omega}}$; it carries the structure of a principal $T$-bundle. 
\begin{thm}\label{thm:Torus_method}\cite[Theorem 3.2]{GoHeWe2021} Let $T$ be a torus which acts isometrically and effectively on two compact, connected Riemannian manifolds $({\Omega},g)$ and $({\Omega}',g')$ with smooth boundary. For each subtorus $W\subset T$ of codimension one, suppose that there exists a $T$-equivariant diffeomorphism $F_W: {\Omega}\to {\Omega}'$ such that \begin{enumerate} \item\label{F_WVolPres} $F_W:{\Omega}\to {\Omega}'$ is volume-preserving with respect to the Riemannian volume densities; \item $F_W|_{\partial {\Omega}}: \partial{\Omega}\to \partial {\Omega}'$ is volume-preserving. (Here the volumes are measured with respect to the Riemannian metrics induced on the boundaries by $g$ and $g'$.) \item $F_W$ induces an isometry $\overline{F}_W: (W\backslash\widehat{{\Omega}}, g_W)\to(W\backslash\widehat{{\Omega}'},g'_W)$, where $g_W$ and $g'_W$ are the metrics induced by $g$ and $g'$ on the quotients. \end{enumerate} Then $({\Omega},g)$ and $({\Omega}',g')$ are strongly Steklov isospectral. \end{thm} The proof uses Fourier decomposition with respect to the $T$-action to decompose $C^\infty({\Omega})$ and $C^\infty({\Omega}')$ into a sum of subspaces $C^\infty({\Omega})^W$, respectively $C^\infty({\Omega}')^W$, where $C^\infty({\Omega})^W$ and $C^\infty({\Omega}')^W$ consist of all $W$-invariant elements of $C^\infty({\Omega})$, respectively $C^\infty({\Omega}')$. The pullback $F_W^*$ gives an isomorphism $C^\infty({\Omega}')^W\simeq C^\infty({\Omega})^W$, and one shows that the map $F_W^*$ preserves the Steklov-Rayleigh quotients, i.e., $$\frac{\|\nabla(F_W^*u)\|^2_{L^2({\Omega})}}{\|F_W^*u\|^2_{L^2(\partial {\Omega})}}=\frac{\|\nabla u\|^2_{L^2({\Omega}')}}{\|u\|^2_{L^2(\partial {\Omega}')}}.$$ Like Sunada's technique, the torus action technique is quite robust in the sense that the same method works for a wide range of operators.
For example, if $\mu$ and $\mu'$ are $T$-invariant admissible Radon measures on ${\Omega}$ and ${\Omega}'$, respectively, and if one replaces condition (2) in the theorem by the condition $F_W^*\mu'=\mu$, then one can conclude that $\mu$ and $\mu'$ have the same variational spectrum. On the other hand, because the theorem depends on the use of Riemannian submersions, the technique is not applicable to the Hodge Laplacian on $p$-forms or to the Dirichlet-to-Neumann operators on $p$-forms defined by Raulot-Savo and Karpukhin. \begin{ex}\cite{Go2001}, \cite{Sc2001}, \cite{GoHeWe2021} Let $B^{d+1}$ be a $(d+1)$-dimensional ball. For $d\geq 7$, there exist continuous families of non-isometric Riemannian metrics on $B^{d+1}$ that are mutually strongly Steklov isospectral. The metrics are also isospectral for both the Neumann and Dirichlet problems and the boundary spheres have the same Laplace spectra. For $d=5,6$, there exist pairs of such metrics satisfying the same properties. \end{ex} \begin{ex}\label{canthear} For each of the following geometric properties, the torus action method yields pairs of manifolds ${\Omega}$ and ${\Omega}'$ such that ${\Omega}$ and ${\Omega}'$ are simultaneously Steklov, Neumann, and Dirichlet isospectral and their boundaries $\Sigma$ and $\Sigma'$ are Laplace isospectral, but: \begin{itemize} \item ${\Omega}$ has constant Ricci curvature while ${\Omega}'$ has variable Ricci curvature (examples in dimension 10); \item $\Sigma$ has constant scalar curvature while $\Sigma'$ has variable scalar curvature (examples for ${\Omega}$ and ${\Omega}'$ of dimension 9); \item the curvature tensor of ${\Omega}$ is parallel, while the curvature tensor of ${\Omega}'$ is not (examples in dimension $4(\ell +1)$, $\ell\geq 2$). \end{itemize} The counterexamples appear in \cite{GoSz2002}, although the Steklov isospectrality was not mentioned there.
(The second item is stated in a different form in \cite{GoSz2002}: a pair of (non-product) isospectral metrics $g_1$ and $g_2$ is defined on the manifold ${\Omega}:=\mathbb{B}^6\times T^3$, where $T^3$ is a 3-torus. One of these metrics induces a metric of constant scalar curvature on $\Sigma=\partial {\Omega}$ and the other a metric of variable scalar curvature. The torus action method implies that the two metrics on ${\Omega}$ are Neumann, Dirichlet, and Steklov isospectral and that the metrics on $\Sigma$ are Laplace isospectral, although only the latter fact is actually stated in \cite{GoSz2002}.) \end{ex} Nevertheless, the gap between what one can glean from, say, spectral asymptotics of the Steklov spectrum as in Section~\ref{sec:inv probs pos} and what the counterexamples tell us remains huge. A sampling of the myriad open questions: \begin{ques}\label{quesgeq3} For manifolds of dimension $\geq 3$, can one tell from the Steklov spectrum: \begin{itemize} \item Whether the manifold has constant sectional curvature? \item Whether the induced metric on the boundary has constant sectional (or principal) curvature? \item Whether the induced metric on the boundary is Einstein? \end{itemize} \end{ques} While more can currently be gleaned from spectral asymptotics of the Laplacian than from the Steklov spectrum, the questions above are not answered for the Laplacian either. It is known, however, that any Riemannian metric $g_0$ of constant curvature $\kappa$ on a closed manifold $M$ is at least spectrally isolated for the Laplacian, in the sense that no sufficiently nearby Riemannian metric has the same Laplace spectrum; this was proven by S. Tanno for $\kappa>0$, R. Kuwabara for $\kappa=0$, and V. Sharafutdinov for $\kappa<0$.
\subsection{Quotients of Euclidean balls}\label{ballquots} The literature on the Laplace spectrum and also on the Hodge spectra on $p$-forms contains rich collections of examples of isospectral spherical space forms that exhibit interesting properties. The following elementary proposition shows that each such example yields a corresponding example of Steklov isospectral quotients of Euclidean balls. We will state the result in the language of the Steklov spectrum on $p$-forms introduced by Raulot and Savo (see Definition~\ref{dtnp}). Recall that when $p=0$, this spectrum coincides with the usual notion of Steklov spectrum on functions. \begin{prop}\label{prop.ballquot} Let $\Gamma_1\backslash \mathbb{S}^{d}$ and $\Gamma_2\backslash \mathbb{S}^{d}$ be spherical space forms. (Here $\Gamma_1$ and $\Gamma_2$ are finite subgroups of the orthogonal group of $\R^{d+1}$.) Let $\mathbb{B}^{d+1}$ be the unit ball in $\R^{d+1}$ and let ${\Omega}_i=\Gamma_i\backslash \mathbb{B}^{d+1}$, $i=1,2$. Let $p\in \{0,\dots, d\}$. Then the following are equivalent: \begin{itemize} \item[(i)] $\stek_p^{\rs}({\Omega}_1)=\stek_p^{\rs}({\Omega}_2)$; \item[(ii)] The Hodge Laplacians of $\Gamma_1\backslash \mathbb{S}^{d}$ and $\Gamma_2\backslash \mathbb{S}^{d}$ on $p$-forms are isospectral. \end{itemize} Moreover, when these equivalent conditions hold, we also have that \[\stek_p^{K}({\Omega}_1)=\stek_p^{K}({\Omega}_2).\] \end{prop} \begin{proof} As discussed in Example~\ref{pball}, the Hodge-$p$ Laplacian on $\mathbb{S}^d$ and the Dirichlet-to-Neumann operator $\mathcal{D}_p^{\rs}$ on $\mathbb{B}^{d+1}$ have the same eigenspaces in $\mathcal{A}^p(\mathbb{S}^d)$. We denote these eigenspaces by $E_k$, $k=1,2,\dots$. The Hodge-$p$ eigenspaces of $\Gamma_i\backslash \mathbb{S}^d$ correspond by pullback to $E_k^{\Gamma_i}$, $k=1,2,\dots$, where $E_k^{\Gamma_i}$ denotes the space of $\Gamma_i$-invariant $p$-forms in $E_k$.
Let $$\widehat{E}_k =\{\widehat{\eta}\in \mathcal{A}^p(\mathbb{B}^{d+1}): \eta\in E_k\}$$ where $\widehat{\eta}$ is the tangential harmonic extension of $\eta$ as in Definition~\ref{dtnp}. Observe that \begin{equation}\label{hetaeta} \eta\in E_k^{\Gamma_i}\iff \widehat{\eta}\in \widehat{E}_k^{\Gamma_i}.\end{equation} Indeed, the ``if'' statement is trivial. The ``only if'' statement follows from the observation that if $\eta$ is $\Gamma_i$-invariant, then averaging $\widehat{\eta}$ by the isometric action of $\Gamma_i$ gives another tangential harmonic extension of $\eta$, which must equal $\widehat{\eta}$ by uniqueness. We can now conclude that $\mathcal{D}_p^{\rs}({\Omega}_i)$ and the Hodge-$p$ operator of $\Gamma_i\backslash \mathbb{S}^d$ have the same eigenspaces $E_k^{\Gamma_i}$. The equivalence of (i) and (ii) follows. The final statement follows in a similar way. Although the co-closed harmonic extension $\widetilde{\eta}$ of $\eta$ used in Karpukhin's Definition~\ref{dtnk} of $\mathcal{D}_p^K$ is not unique, the fact that the operator is well-defined independent of the choice of such extension permits an easy modification of the observation~\eqref{hetaeta}. We cannot conclude the converse of the final statement, since the $0$-eigenspace of $\mathcal{D}_p^K(\mathbb{B}^{d+1})$ is the sum of many of the $E_k$. \end{proof} The proposition above in the case $p=0$ was observed in \cite[Example 6.1]{ADGHRS2022}. \begin{remark}\label{rem.orb ball quot} In Proposition~\ref{prop.ballquot}, ${\Omega}_i$, $i=1,2$, is necessarily an orbifold with a singularity at the point corresponding to the origin in $\mathbb{B}^{d+1}$. If $\Gamma_i$ doesn't act freely on $\mathbb{S}^d$, then ${\Omega}_i$ will also contain higher-dimensional singular strata. \end{remark} We give two applications of Proposition~\ref{prop.ballquot}.
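To make the mechanism concrete in the simplest case $p=0$, we record a standard computation (included as a sketch; it is not needed for the proof above). A spherical harmonic $\eta$ of degree $k$ on $\mathbb{S}^d$ is the restriction of a harmonic homogeneous polynomial $P$ of degree $k$ on $\R^{d+1}$, which is also its harmonic extension to $\mathbb{B}^{d+1}$, so
\[
\mathcal{D}_0^{\rs}\,\eta=\partial_r P\big|_{\mathbb{S}^d}=k\,\eta,
\qquad
\Delta_{\mathbb{S}^d}\,\eta=k(k+d-1)\,\eta.
\]
Thus the Steklov spectrum of $\Gamma_i\backslash\mathbb{B}^{d+1}$ consists of the non-negative integers $k$ with multiplicity $\dim E_k^{\Gamma_i}$, while the Laplace spectrum of $\Gamma_i\backslash\mathbb{S}^d$ consists of the numbers $k(k+d-1)$ with the same multiplicities; since $k\mapsto k(k+d-1)$ is strictly increasing, either spectrum determines the other.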
First we address the following question: \begin{ques}\label{ques: inverse prob forms} Do the Steklov spectra on $p$-forms for different choices of $p$ contain different geometric information? Equivalently, for $p\neq q$, do there exist compact Riemannian manifolds ${\Omega}_1$ and ${\Omega}_2$ with boundary such that $\stek_p^{\rs}({\Omega}_1)=\stek_p^{\rs}({\Omega}_2)$ but $\stek_q^{\rs}({\Omega}_1)\neq \stek_q^{\rs}({\Omega}_2)$? (One can ask the same question for $\stek_p^{K}({\Omega}_i)$.) \end{ques} Concerning the analogous question for the Hodge Laplacian on closed Riemannian manifolds, A. Ikeda \cite{Ik1989} showed that for every $p_0$, there exist pairs of lens spaces whose Hodge Laplacians on $p$-forms are isospectral for every $p\in \{0,\dots, p_0\}$ but not for $p=p_0+1$. The articles \cite{GoMc2006} (corrected in \cite{GoMc2019}) and \cite{La2019} contain many other examples of lens spaces that are $p$-isospectral for some (not necessarily consecutive) but not all values of $p$. By Proposition~\ref{prop.ballquot}, each of these examples yields examples of flat orbifolds of the form $\Gamma_i\backslash \mathbb{B}^{d+1}$ whose Steklov spectra on $p$-forms coincide for some but not all values of $p$. To our knowledge, there are no examples currently known of pairs of Riemannian manifolds, as opposed to orbifolds, exhibiting this property. \medskip Proposition~\ref{prop.ballquot} also has applications to Question~\ref{ques.sing type}, which asks the extent to which the Steklov spectrum of an orbifold $\mathcal{O}$ recognises the types of singularities in $\mathcal{O}$. Indeed, Shams, Stanhope and Webb \cite{ShStWe2006} and also Rossetti, Schueth and Weilandt \cite[Example 2.9]{RoScWe2008} used the Sunada technique to construct orbifolds $\Gamma_1\backslash S^d$ and $\Gamma_2\backslash S^d$ that are isospectral both for the Laplacian and for the Hodge Laplacian on $p$-forms for all $p$ but that have singularities of different isotropy types.
By Proposition~\ref{prop.ballquot}, the orbifolds $\Gamma_1\backslash B^{d+1}$ and $\Gamma_2\backslash B^{d+1}$ are Steklov isospectral and also have the same Steklov spectra on $p$-forms for all $p$ (with respect to both the Raulot-Savo and the Karpukhin notions). These orbifolds have different types of singularities both on their boundaries $\Gamma_1\backslash S^d$ and $\Gamma_2\backslash S^d$ and in their interiors. \section{Geometry of eigenfunctions}\label{sec:eigenfunctions} In this section we review some of the recent developments concerning the qualitative behaviour of Steklov eigenfunctions. As can be seen from Example \ref{example: cylinder} in particular, there is a general heuristic for the behaviour of Steklov eigenfunctions: \begin{itemize} \item Oscillation near the boundary, of frequency $\sigma$. \item Rapid decay into the interior of $\Omega$ as $\sigma\to\infty$. Equivalently, concentration of mass near the boundary $\partial\Omega$. \end{itemize} \subsection{Interior decay} In 2001, Hislop and Lutzer \cite{HiLu2001} showed that when the boundary $\partial\Omega$ is smooth, Steklov eigenfunctions decay super-polynomially into the interior: \begin{thm}\cite[Theorem 1.1]{HiLu2001} Suppose that $\Omega\subseteq\mathbb R^{d+1}$ has smooth boundary $\Sigma$. Let $K$ be a compact subset of the interior of $\Omega$. Then for any $j\in\mathbb N$, \begin{equation}\label{eq:hlbound} \|u_k\|_{H^1(K)} = O(k^{-j}), \end{equation} with the implied constants depending only on $K$ and $j$. \end{thm} The proof uses a Green's function (layer potential) representation for the harmonic extensions of the boundary eigenfunctions. It also uses the fact, which follows from the Calder\'on-Vaillancourt theorem for pseudodifferential operators, that $\mathcal D^j$ is bounded from $H^j(\Sigma)$ to $L^2(\Sigma)$ for any $j$. \begin{remark} The same result holds, with a virtually identical proof, if $H^1(K)$ is replaced by $H^m(K)$ or by $C^m(K)$ for any $m$. 
It also holds if $\mathbb R^{d+1}$ is replaced by any smooth $(d+1)$-dimensional ambient Riemannian manifold $M$. The proof is again nearly identical, using the smoothness of the Green's function on $M$ away from the diagonal. \end{remark} Hislop and Lutzer also conjectured in this paper that ``the decay is actually of order $e^{-\textrm{dist}(K,\Sigma)|m|}$ in the case of an analytic boundary.'' This was shown in \cite{PoShTo2019} for domains $\Omega\subseteq\mathbb R^2$. Galkowski and Toth have now proven the Hislop--Lutzer conjecture in all dimensions \cite{GaTo2019}. Their result is stated semiclassically and gives an extremely sharp description of the decay of Steklov eigenfunctions into the interior. Non-semiclassically, it translates as follows: \begin{thm}\cite[Theorem 1]{GaTo2019} Let $\Omega$ be a real-analytic Riemannian manifold of dimension $d+1$ with real-analytic boundary. There exists a small tubular neighborhood of $\Sigma\subseteq\Omega$ in which the Steklov eigenfunctions $u_k$ satisfy the estimate \[|u_k(x)|\leq Ck^{\frac d2-\frac 14}e^{-k(d(x,\Sigma)-cd^2(x,\Sigma))}.\] Here $C$ depends only on the size of the tubular neighborhood, $c$ is an explicit geometric constant, and $d(x,\Sigma)$ is the distance to the boundary. Similar bounds hold for derivatives of $u_k(x)$, with an additional factor of $k$ for each derivative. \end{thm} As Galkowski and Toth observe, this bound immediately implies the analogue of Hislop and Lutzer's result, by using the maximum principle: for any compact set $K$ contained in the interior of $\Omega$, \begin{equation}\label{eq:gtbound} |u_k(x)|\leq Ce^{-kc}\end{equation} for some positive constant $c$. It is not yet clear exactly which assumptions are necessary for which kind of decay. This motivates the following open questions. \begin{ques}\label{ques:interiordecaysmooth} Does \eqref{eq:gtbound} hold if $\Sigma$ and/or $\Omega$ are assumed smooth rather than real analytic?
\end{ques} The proof techniques of \cite{GaTo2019} are based on analytic microlocal analysis and thus use the analyticity hypotheses in extremely strong ways. So one would perhaps guess that the answer to Open Question \ref{ques:interiordecaysmooth} is ``no''. However, the example of a cylinder (Example \ref{example: cylinder}) shows that even if $\Sigma$ is not assumed real analytic, \eqref{eq:gtbound} holds for $\Omega=\Sigma\times[0,1]$. Similar bounds also seem to hold for certain warped products \cite{DaHeNi2021}. This suggests that some form of partial analyticity, perhaps in the direction normal to the boundary, might suffice. \begin{ques}\label{ques:decaynonsmooth} What kind of decay properties do the Steklov eigenfunctions possess if $\Sigma$ is not assumed to be smooth? \end{ques} In particular, it is natural to ask if we still have a bound of the form \eqref{eq:hlbound}, despite the fact that the pseudodifferential techniques involved in its proof do not go through when $\Sigma$ is not smooth. If not, perhaps one may replace $O(k^{-\infty})$ with $O(k^{-\alpha})$ for some $\alpha>0$ depending on the regularity of $\Sigma$. There are hints in \cite{LPPS2019} that \eqref{eq:hlbound} may \emph{not} hold in the case where $\Omega$ is a planar polygon. Curiously, this may be very sensitive to the arithmetic properties of the angles of the polygon, with decay rates depending on the angles, and super-polynomial decay if and only if all angles are certain rational multiples of $\pi$. We should also mention the question of concentration in the case of multiple boundary components. Specifically, do Steklov eigenfunctions generically concentrate on a single boundary component, or is their mass more often split evenly between multiple boundary components? This question has been recently studied by Daud\'e, Helffer, and Nicoleau in the setting of warped product manifolds \cite{DaHeNi2021}.
They observe that if the warped product is asymmetric (a generic assumption in this setting), then each Steklov eigenfunction localises near a single boundary component. It is thus natural to ask: \begin{ques}\label{ques:generic}Do Steklov eigenfunctions generically concentrate on a single boundary component? \end{ques} In dimension two, this question has been answered by Martineau, who showed that this concentration occurs under the (generic) assumption that the ratios of the lengths of the boundary components are neither rational nor Liouville numbers \cite[Theorem 3]{Ma2018}. It remains open in higher dimensions. \subsection{Nodal sets} There has also been recent progress on understanding the nodal sets of the Steklov eigenfunctions $u_k$ and the Dirichlet-to-Neumann eigenfunctions $\varphi_k$. \subsubsection{Size of nodal sets} As with eigenfunctions of the Laplacian, there is a version of Yau's conjecture for the size of the nodal set, stated as Open Problem 11 in \cite{GiPo2017}: \begin{ques}\label{ques:yau} Do there exist constants $c$ and $C$ depending only on $\Omega$ for which \begin{equation}\label{eq:yauint} c\sigma_k\leq\mathcal H_{d}(u_k)\leq C\sigma_k; \end{equation} \begin{equation}\label{eq:yaubound} c\sigma_k\leq\mathcal H_{d-1}(\varphi_k)\leq C\sigma_k? \end{equation} \end{ques} \begin{remark} Considering the example of a cylinder shows that \eqref{eq:yaubound} would imply Yau's conjecture for Laplace nodal sets. It is thus highly nontrivial. \end{remark} Much of the progress on Question \ref{ques:yau} has been concentrated in the case where $\Omega$ is real analytic. In that setting, the study of the boundary nodal sets was largely initiated by Bellova and Lin \cite{BeLi2015}, who proved an upper bound of the form \[\mathcal H_{d-1}(\varphi_k)\leq C\sigma_k^6\] for the size of the nodal sets of the boundary eigenfunctions $\varphi_k$.
This was then improved by Zelditch \cite{Ze2015}, who proved the optimal upper bound of $C\sigma_k$, again under the assumption that $\Omega$ is analytic. Results for the lower bounds are somewhat weaker; the state of the art is a result by Wang and Zhu \cite{WaZh2015} that \[\mathcal H_{d-1}(\varphi_k)\geq c\sigma_k^{\frac{3-d}{2}}.\] Concerning the study of the interior nodal sets, in the real analytic setting, the case $d=1$ of Question \ref{ques:yau} was answered in the affirmative in \cite{PoShTo2019} by using analytic microlocal analysis. Zhu then recently proved the upper bound in \eqref{eq:yauint} when $\Omega$ is a real analytic manifold of \emph{any} dimension \cite{Zh2020}. Zhu's proof uses a series of classical ideas including doubling properties and Carleman estimates. Together with Lin, Zhu has also used these ideas to obtain upper bounds on the size of the nodal sets for related eigenvalue problems \cite{LiZh2020}. If $\Omega$ is only assumed smooth rather than real analytic, the results are naturally weaker. There is a lower bound: it was proved by Sogge, Wang, and Zhu \cite{SoWaZh2016} that \[\mathcal H_{d}(u_k)\geq c\sigma_k^{1-\frac{d+1}{2}}.\] The sharpest known upper bound in this sort of generality is also due to Zhu \cite{Zh2016}, who showed that \[\mathcal H_{d}(u_k)\leq C\sigma_k^{3/2}.\] However, in the case where $\Omega$ is a domain in Euclidean space, there are notable recent improvements by Decio \cite{De2021, De2021_2}. In particular, Decio proved the following: \begin{thm}\cite{De2021_2} Suppose that $\Omega$ is a domain in Euclidean space with $C^2$ boundary. Then there exist positive constants $c$, $C$, and $e$ depending only on the domain such that \[c\leq \mathcal H_{d}(u_k)\leq C\sigma_k\log(\sigma_k + e).\] \end{thm} Observe that this is an improvement over the lower bound of \cite{SoWaZh2016} and the upper bound of \cite{Zh2016}, albeit in a slightly more restrictive setting.
Decio's methods use machinery developed by Logunov to study Yau's conjecture for Laplace eigenfunctions. \subsubsection{Density of nodal sets} It is well-known that nodal sets of Laplace eigenfunctions are dense on the scale $\lambda^{-1/2}$. In \cite[Open Problem 10]{GiPo2017}, Girouard and Polterovich asked the analogous question for both the Steklov and Dirichlet-to-Neumann eigenfunctions, namely whether they are dense on the scale $\sigma^{-1}$. They point out that this cannot always hold without some additional regularity assumptions, as for example Dirichlet-to-Neumann eigenfunctions on a rectangle, of arbitrarily high eigenvalue, may be constant and nonzero along the full length of a side. There has been significant recent progress on this question. For Steklov eigenfunctions, it was answered in the negative by Bruno and Galkowski \cite{BrGa2020}. In simplified and weakened form, their result reads: \begin{thm}\cite{BrGa2020} There exists a compact domain $\Omega\subseteq\mathbb R^2$ with analytic boundary, and a fixed value $r_1>0$, for which each Steklov eigenfunction of $\Omega$ has a nodal domain which contains a ball of radius $r_1$. \end{thm} Note that the results of Bruno and Galkowski are in fact significantly stronger. They show that $\Omega$ may be chosen arbitrarily close to any fixed domain $\Omega_0$ with analytic boundary. They also give upper bounds for the `oscillation' of Steklov eigenfunctions on larger subsets of $\Omega$. Despite these negative results, Decio has recently proved a positive density result, involving balls in $\Omega$ which are nonetheless centered at a point in $\partial\Omega$: \begin{thm}\cite{De2021} Suppose that $\Omega$ is a Lipschitz domain in Euclidean space. Then there is a constant $C=C(\Omega)$ for which any ball of radius $C/\sigma_k$ centered at a point of $\partial\Omega$ intersects the nodal set of $u_{k}$ nontrivially.
\end{thm} \subsection{Nodal count} Another natural question about nodal sets is to count nodal domains. This is inspired by the famous Courant nodal domain theorem, which states that the $k$th eigenfunction of the Laplacian on a compact manifold has at most $k$ nodal domains. It is an old result of Kuttler and Sigillito \cite{KuSi1969} that the same is true for Steklov eigenfunctions. In \cite{HaSh2021}, Hassannezhad and Sher have recently investigated the analogue for eigenfunctions of the Steklov problem with a potential $q$. \begin{thm}\label{thm:mainthmhash} \cite[Theorem 1.1]{HaSh2021} Let $\Omega$ be a Lipschitz domain in a smooth manifold $M$ and let $q\in L^{\infty}(\Omega)$ be a potential. Let $N_k$ be the number of nodal domains of a $k$th eigenfunction $u_k$ of the Steklov problem for $\Delta+q$ on $\Omega$. Then \[N_k\leq k + d,\] where $d$ is the number of non-positive Dirichlet eigenvalues of $\Delta + q$. \end{thm} This theorem is sharp, as the authors show via explicit examples. The idea of the proof is to use Steklov--Robin duality. This idea is originally due to Friedlander \cite{Fr1991}; see also \cite{Ma1991} and its sequels. Steklov eigenvalues and eigenfunctions, with the spectral parameter in the boundary condition, may be viewed alternatively as Robin eigenvalues and eigenfunctions, with the spectral parameter in the interior. After suitably generalising this set of ideas, the proof of Theorem \ref{thm:mainthmhash} is a direct consequence of a Courant-type theorem for Robin eigenfunctions. There are many open questions. First, if $N_k$ is a Laplace nodal count, it is a result of Pleijel \cite{Pl1956} that \[\limsup_{k\to\infty}\frac{N_k}{k}\leq\gamma<1,\] where $\gamma$ is an explicit constant. \begin{ques}\label{ques:pleijel1} Is there a Pleijel-type theorem for the nodal counts of Steklov eigenfunctions? \end{ques} Interestingly, very little seems to be known about the nodal counts of the \emph{Dirichlet-to-Neumann} eigenfunctions $\varphi_k$.
The same explicit examples show that a strict bound of $k$ is impossible, at least in the same generality considered in \cite{HaSh2021}. However one may ask: \begin{ques}\label{ques:pleijel2} Is there a theorem like Theorem \ref{thm:mainthmhash} for Dirichlet-to-Neumann eigenfunctions? If so, can it be improved to a Pleijel-type result? \end{ques} \subsection{Other properties of eigenfunctions} In the setting of the Laplacian, Uhlenbeck showed in 1976 \cite{uhlenbeck1976} that for \emph{generic} metrics on a fixed manifold $\Omega$, \begin{itemize} \item All eigenvalues are simple; \item All eigenfunctions have zero as a regular value; \item All eigenfunctions are Morse functions. \end{itemize} This work has recently been extended to the Steklov setting by Wang \cite{wang2022gen}. In particular, Wang shows analogues of each of the three results above, proving that the Steklov eigenvalues are generically simple. The latter two results are demonstrated for the Dirichlet-to-Neumann eigenfunctions, that is, the restrictions to the boundary of the Steklov eigenfunctions. \section{Variational eigenvalues of Radon measures: basic properties and continuity} \label{Section:Radon} Let $(\Omega,g)$ be a compact Riemannian manifold of dimension $n=d+1$ with boundary $\Sigma$. Let $\mu$ be a nonzero Radon measure on $\Omega$ and consider the induced Rayleigh--Radon quotient $$R_{\mu}(u):=\frac{\int_\Omega|\nabla u|^2\,dV_{g}}{\int_{\Omega}u^2\,d\mu},$$ which is initially defined for $u\in C^\infty(\Omega)$ with $\int_{\Omega}u^2\,d\mu\neq 0$. The variational eigenvalues $\lambda_k(\Omega,g,\mu)$ are then defined through the min-max variational formula~\eqref{def:VarEigenRadon}. (We will often write $\lambda_k(\mu)$ for $\lambda_k(\Omega,g,\mu)$, suppressing the name of the Riemannian manifold if it is fixed.)
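For the reader's convenience, we recall the expected shape of this min-max formula (a sketch, with the notation above; cf.~\eqref{def:VarEigenRadon}):
$$\lambda_k(\Omega,g,\mu)=\inf_{\substack{E\subset C^\infty(\Omega)\\ \dim E=k+1}}\;\sup\left\{R_\mu(u)\,:\,u\in E,\ \int_\Omega u^2\,d\mu\neq 0\right\},$$
where the infimum is taken over all $(k+1)$-dimensional subspaces $E$ of $C^\infty(\Omega)$.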
The goal of this appendix is to give enough background to state Theorem~\ref{thm:ContinuityRadon}, which provides continuity for these eigenvalues. See Section~\ref{subsection:introvareigenradon} for examples connecting variational eigenvalues of Radon measures to various classical eigenvalue problems. It follows from $R_\mu(1)=0$ that $\lambda_0(\Omega,g,\mu)=0$. For $\Omega$ connected, it is natural to expect that $\lambda_1(\Omega,g,\mu)$ should be positive. However, for an arbitrary Radon measure, the analogy between the variational eigenvalues $\lambda_k(\Omega,g,\mu)$ and the eigenvalues of usual elliptic operators can become very weak. \begin{ex} Let $p_1,\dots,p_\ell\in\Omega$ be distinct and consider the sum of delta-measures $\mu=\sum_i\delta_{p_i}$. Because the capacity of a point is 0, we see directly that $\lambda_0(\mu)=\lambda_1(\mu)=\cdots=\lambda_{\ell-1}(\mu)=0$. Moreover, since $L^2(\Omega,\mu)$ is only $\ell$-dimensional, the infimum in the definition of $\lambda_\ell(\mu)$ is over the empty set, so $\lambda_\ell(\mu)=+\infty$. \end{ex} To develop the theory beyond a mere observation and catalogue of examples, a sound functional setting is required. Define $H^1(\Omega,\mu)$ to be the completion of $C^\infty(\Omega)$ with respect to the norm given by \begin{equation} \label{eq:completion} \|u\|^2_{H^1(\Omega,\mu)} = \int_\Omega u^2\,d\mu + \int_\Omega |\nabla u|_g^2\,dv_g = \|u\|^2_{L^2(\Omega,\mu)} + \|{\nabla u}\|_{L^2(\Omega,g)}^2. \end{equation} Through this completion, the natural map $C^\infty(\Omega)\to L^2(\Omega,\mu)$ induces a bounded map $\tau^{\mu}:H^1(\Omega,\mu)\rightarrow L^2(\Omega,\mu)$, which we call the \emph{trace map} induced by $\mu$. For all measures $\mu$ that we will use, the space $H^1(\Omega,\mu)$ coincides (although the norm may differ) with the usual Sobolev space $H^1(\Omega,dV_g)$ and the map $\tau^\mu$ is explicitly identified.
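The guiding example to keep in mind, sketched here for a connected $\Omega$ with nonempty boundary, is the boundary measure $\mu=\iota_\star dV_\Sigma$, where $\iota:\Sigma\to\Omega$ is the inclusion. In this case the norm~\eqref{eq:completion} reads
$$\|u\|^2_{H^1(\Omega,\mu)}=\int_\Sigma u^2\,dV_\Sigma+\int_\Omega|\nabla u|_g^2\,dv_g,$$
which is equivalent to the usual $H^1(\Omega,dV_g)$ norm by the trace and Poincar\'e inequalities; $\tau^\mu$ is then the classical trace operator, $R_\mu$ is the Steklov--Rayleigh quotient, and $\lambda_k(\Omega,g,\mu)=\sigma_k(\Omega,g)$.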
Nevertheless, for an arbitrary Radon measure, the spaces $H^1(\Omega,\mu)$ could be very different from the usual ones. \begin{ex} Let $\mu=\delta_{p}$ be a delta-measure supported at $p\in\Omega$. Then $H^1(\Omega,\mu)$ is naturally identified with $\R\times\left(H^1(\Omega,dV_g)/\R\right)$ and $\tau^\mu(t,f)=t$. \end{ex} By limiting our attention to a more manageable class of Radon measures, we will recover many of the expected properties observed in classical eigenvalue problems. The following definition was proposed by Girouard, Karpukhin and Lagac\'e in~\cite[Definition 3.3]{GiKaLa2021}. \begin{defn}\label{def:admissible} A Radon measure $\mu$ on $\Omega$ is \emph{admissible} if the following conditions are satisfied: \begin{enumerate} \item[A1)] For each $p\in\Omega$, $\mu(\{p\})=0$; \item[A2)] The map $\tau^{\mu}:H^1(\Omega,\mu)\to L^2(\Omega,\mu)$ is compact; \item[A3)] There exists $K > 0$ such that for all $u \in C^\infty(\Omega)$ with vanishing $\mu$-average, \begin{equation}\label{Ineq:Poincare} \int_\Omega u^2\, d\mu \le K \int_\Omega |\nabla u|^2\, dV. \end{equation} \end{enumerate} \end{defn} One can show that the best constant in $(A3)$ is $\lambda_1(\Omega,g,\mu)^{-1}$. Moreover, the Poincar\'e-type inequality $(A3)$ implies a Poincar\'e--Wirtinger type inequality for all $u\in C^\infty(\Omega)$: $$\int_\Omega u^2\, d\mu \le K'\left( \int_\Omega |\nabla u|^2\, dV+\int_\Omega u^2\,dV\right).$$ In particular, the natural map $j:C^\infty(\Omega)\to L^2(\Omega,\mu)$ extends to a bounded linear map $T_\mu:H^1(\Omega,dV_g)\to L^2(\Omega,\mu)$. (We emphasize that $T_\mu$ and the map $\tau^\mu$ defined above have different domains.) \begin{lemma}\cite[Theorem 3.4]{GiKaLa2021}\label{Lemma:AdminissibleAlternative} Let $\mu$ be a nonzero Radon measure on $\Omega$ such that $\mu(\{p\})=0$ for each $p\in\Omega$. Then $\mu$ is admissible if and only if the natural map $j:C^\infty(\Omega)\to L^2(\Omega,\mu)$ extends to a compact linear map $T_\mu:H^1(\Omega,dV_g)\to L^2(\Omega,\mu)$.
\end{lemma} To complete this picture, it is useful to know that the Sobolev--Radon spaces associated to admissible Radon measures are isomorphic to each other. \begin{lemma}\cite[Theorem 3.5]{GiKaLa2021} If $\mu,\xi$ are two admissible measures, then the identity map on $C^\infty(\Omega)$ extends to a bounded invertible linear map $T_{\mu,\xi}:H^1(\Omega,\mu)\to H^1(\Omega,\xi)$. \end{lemma} In particular, since $dV_g$ is itself an admissible measure, $H^1(\Omega,\mu)\cong H^1(\Omega,dV_g)$ for any admissible $\mu$. \begin{remark} The literature on Sobolev spaces is replete with compactness criteria, which are useful in deciding which measures are admissible. See the book~\cite[Chapter 11]{Ma2011} by Maz'ya. In particular, if $\mu$ is admissible then any set $E\subset\Omega$ of vanishing capacity must satisfy $\mu(E)=0$. A very interesting geometric criterion for admissibility in terms of a quantity called the \emph{lower $\infty$-dimension of $\mu$} can also be deduced from the work of Hu, Lau, and Ngai~\cite{HuLaNg2006}. See also~\cite[Section 2]{Ko2014}. \end{remark} The following proposition shows that variational eigenvalues of admissible measures behave similarly to the eigenvalues of elliptic operators with compact resolvent. \begin{prop}\label{thm:SpectralTheoremMeasure} Let $\mu$ be an admissible measure on the compact connected Riemannian manifold $(\Omega,g)$. Then the variational eigenvalues $\lambda_k(\mu)=\lambda_k(\Omega,g,\mu)$ form an unbounded sequence $$ 0=\lambda_0(\mu)<\lambda_1(\mu)\leqslant\lambda_2(\mu)\leqslant\ldots\nearrow\infty. $$ A real number $\lambda$ is an eigenvalue if and only if there exists $0\neq u\in H^1(\Omega,\mu)$ such that \begin{equation} \label{eigenfunctions_measures:def} \int_\Omega \nabla u\cdot \nabla \phi \, dV = \lambda\int_\Omega \tau^{\mu}(u) \tau^{\mu}(\phi)\, d\mu,\qquad\qquad\text{for all } \phi\in H^1(\Omega,\mu). \end{equation} In that case we call $u$ an eigenfunction corresponding to $\lambda$.
Moreover, there exists a sequence $(u_j)\subset H^1(\Omega,\mu)$ of eigenfunctions corresponding to $\lambda_j(\mu)$ such that the functions $\tau^\mu(u_j)$ form an orthonormal basis of $L^2(\Omega,\mu)$. \end{prop} In~\cite{Ko2014} the proof of this result is described in terms of a classical recursive procedure for constructing the eigenvalues. However, it is interesting to observe that the eigenvalues $\lambda_k(\Omega,g,\mu)$ are the eigenvalues of a measure geometric Laplace operator $\Delta_\mu$ of Krein--Feller type. The whole theory could also be presented from that perspective. See~\cite{HuLaNg2006} and also the recent preprint~\cite{KeNi2022} of Kesseb\"ohmer and Niemann, where these operators are used to study spectral asymptotics on fractals. In particular, it is interesting to compare Proposition~\ref{thm:SpectralTheoremMeasure} with~\cite[Theorem 1.2]{HuLaNg2006}. \subsection{Continuity properties of variational eigenvalues} A sequence $(\mu_n)$ of Radon measures on $\Omega$ converges to $\mu$ in the weak-$\star$ topology if $\int_{\Omega}f\,d\mu_n\xrightarrow{n\to\infty}\int_{\Omega}f\,d\mu$ for each $f\in C^0(\Omega)$. In that case we write $\mu_n\xrightharpoonup{\star}\mu$. In~\cite{Ko2014}, Kokarev proved upper semicontinuity of the variational eigenvalues: $$\limsup_{n\to\infty}\lambda_k(\mu_n)\leq\lambda_k(\mu).$$ This holds for all Radon measures, whether they are admissible or not. \begin{ex}Let us consider a simple application of this upper semicontinuity. Let $M$ be a closed Riemannian manifold of dimension $d$. Let $\Sigma\subset M$ be a submanifold of dimension $n$, with $0<n\leq d-2$, equipped with its volume measure $dV_\Sigma$. The inclusion $\iota:\Sigma\to M$ allows the definition of the push-forward probability measure $\mu=|\Sigma|^{-1}\iota_{\star}dV_\Sigma$ on $M$. This measure is not admissible, since $\Sigma$ has vanishing capacity, and one can show that $\lambda_k(M,g,\mu)=0$ for each $k$.
Consider tubular neighborhoods $T_\varepsilon:=\{x\in M\,:\,d(x,\Sigma)<\varepsilon\}$, and let $\Omega_\varepsilon=M\setminus T_\varepsilon$ with corresponding inclusions still written $\iota:\partial\Omega_\varepsilon\to M$. The boundary probability measures $\mu_\varepsilon=|\partial T_\varepsilon|^{-1}\iota_{\star}dV_{\partial T_\varepsilon}$ are admissible. In fact we have seen in Example~\ref{example:transmission} that their variational eigenvalues are related to the transmission eigenvalues of $\Omega_\varepsilon$ in $M$: $$\lambda_k(M,g,\mu_\varepsilon)=\tau_k(M,\Omega_\varepsilon)|\partial\Omega_\varepsilon|\geq\sigma_k(\Omega_\varepsilon)|\partial\Omega_\varepsilon|.$$ It is clear from their definition that $\mu_\varepsilon\xrightharpoonup{\star}\mu$, and it follows from the upper semicontinuity property that $$\lim_{\varepsilon\to 0}\sigma_k(\Omega_\varepsilon)|\partial\Omega_\varepsilon|=\lim_{\varepsilon\to 0}\lambda_k(M,g,\mu_\varepsilon)=0.$$ This raises the question of understanding the asymptotic behaviour of $\sigma_k(\Omega_\varepsilon)$ as $\varepsilon\to 0$. For connected submanifolds $\Sigma$, this was studied by Brisson in~\cite{Br2022}, where she proved for instance that for $d\geq 3$ and $0<n\leq d-2$, and for each index $k\in\N$, \begin{gather*} \varepsilon\sigma_k(\Omega_\varepsilon)\xrightarrow{\varepsilon\to0}d-n-2. \end{gather*} See Example \ref{JadeBrisson}. \end{ex} In~\cite[Proposition 4.8]{GiKaLa2021}, Girouard, Karpukhin and Lagac\'e gave sufficient conditions for the continuity of $\lambda_k(\mu)$ under weak-$\star$ convergence. \begin{thm}\label{thm:ContinuityRadon} Let $(\Omega,g)$ be a compact Riemannian manifold (with or without boundary), and let $\Omega_n\subset\Omega$ be a sequence of domains such that $|\Omega\setminus\Omega_n|\to 0$. Let $\mu_n$ be a sequence of admissible Radon measures supported on $\overline{\Omega}_n$ and let $\mu$ be an admissible measure on $\Omega$.
Suppose that the following conditions are satisfied: \begin{itemize} \item[M1)] $\mu_n\xrightharpoonup{\star}\mu$; \item[M2)] There is a bounded family of extension maps $J_n:H^1(\Omega_n,\mu_n)\to H^1(\Omega,\mu)$. \end{itemize} Suppose moreover that the measures $\mu_n,\mu$ induce elements of $W^{1,1}(\Omega,dV_g)^\star$ such that $\mu_n\to\mu$ in $W^{1,1}(\Omega,dV_g)^\star$. Then for each index $k$, $$\lim_{n\to\infty}\lambda_k(\Omega_n,\mu_n)=\lambda_k(\Omega,\mu).$$ \end{thm} Here $W^{1,1}(\Omega,dV_g)$ is the usual Sobolev space of $L^1$-functions with weak gradient in $L^1$. \begin{remark} See also the paper~\cite{FrMi2020} by Freiberg and Minorics for related results in dimension 1, presented in terms of Krein--Feller operators. \end{remark} \begin{remark} Theorem \ref{thm:ContinuityRadon} provides a flexible tool to discuss eigenvalue convergence. It is the main technical tool behind the proof of Theorem~\ref{thm: GiKaLa 8pi} and Theorem~\ref{thm: GKL sharp}, via homogenisation by perforation. It also provides a convenient setting to discuss boundary homogenisation, as in the work of Bucur and Nahon~\cite{BuNa2021}. See Remark~\ref{rem:BucurNahon}. \end{remark} \section{Open problems}\label{app:open ques} We list here the open problems and questions that have been proposed throughout the paper. \begin{itemize} \item[\ref{ques:conformal}] Describe the class of all smooth compact surfaces $\Omega\subset\R^3$ with boundary $\partial\Omega=\partial\mathbb D$ that admit a conformal parametrisation $\Phi:\mathbb D\to\Omega$ such that $|\Phi'|\equiv 1$ on $\partial\mathbb D$. \item[\ref{ques:karpstern}.] If $M$ is a closed surface and $\Omega$ is a domain in that surface, does the strict inequality \[\sigma_k(\Omega,g)L(\partial\Omega)<\lambda_k^*(M,[g])\] hold for $k\geq 3$? (It is known for $k=1$ and $k=2$.)
\item[Conjecture \ref{conj:escobar}] (Escobar's Conjecture, \cite{Es1999}) Let $\Omega$ be a smooth compact connected Riemannian manifold of dimension $\ge 3$ with boundary $\Sigma=\partial \Omega$. Suppose that the Ricci curvature of $\Omega$ is non-negative and that the second fundamental form $\rho$ of $\Sigma$ is bounded below by $c>0$. Then $\sigma_1(\Omega) \ge c$, with equality if and only if $\Omega$ is the Euclidean ball of radius $\frac{1}{c}$. (Progress on this is discussed in subsection \ref{subsec.lowerboundgeom}). \item[\ref{question lower}] Under the hypotheses (H1)-(H3) of Theorem \ref{comparison}, is it possible to find an explicit lower bound for $\sigma_1(\Omega)$ in terms of geometric invariants of $\Omega$ and of its boundary $\Sigma$ even when $\Sigma$ is not assumed to be connected? \item[\ref{question cheeger}] Can one define a different Cheeger-type constant for which $\sigma_1$ satisfies a Buser-type inequality? \item[\ref{ques:logbase2}] Is the coefficient $\frac{C'}{\log^2(k+2)}$ of $i_{k+1}(\Omega)$ in the following estimate (estimate (\ref{better})) sharp? \begin{equation*} \sigma_{2k+1}(\Omega)\ge \frac{C'}{\log^2(k+2)} i_{k+1}(\Omega). \end{equation*} \item[\ref{ques:higherordercheeger}] Can one find a higher order Cheeger inequality without any dependence on $k$? \item[\ref{question:maxdom2}] If $(M,g)$ is a complete Riemannian manifold of dimension $\ge 3$ of infinite volume, with Ricci curvature bounded from below, can one construct domains $\Omega \subset M$ with arbitrarily large first nonzero normalised eigenvalue (for the different normalisations mentioned in subsection \ref{Upperexamples})? \item[\ref{ques:intrinsicextrinsic}] Let $\Omega$ be a compact Riemannian manifold of dimension $d+1\ge 3$ with (connected) boundary $\Sigma$.
Is it possible to construct a family $(g_{\epsilon})_{\epsilon>0}$ of Riemannian metrics on $\Omega$ that stays constant on $\Sigma$ and satisfies $\sigma_1(\Omega,g_{\epsilon}) \vert \Sigma\vert\vert(\Omega, g_{\epsilon})\vert^{\frac{1-d}{d+1}} \to \infty$ as $\epsilon \to 0$? \item[\ref{ques:weinstockhnsn}]Let $\Omega$ be a bounded domain in $\mathbb H^{d+1}$ or $\mathbb S^{d+1}$. Let $\Omega^*$ be a ball in the same space with $|\partial\Omega^*|=|\partial\Omega|$. Is it true that \begin{equation*} \sigma_1(\Omega) \le \sigma_1(\Omega^*) \end{equation*} with equality iff $\Omega$ is a ball? \item[\ref{ques:diamterm}] The term $\frac{sn_{K}(Diam(\Omega))}{sn_{\kappa}(Diam(\Omega))}$ in \eqref{LiWaWu} may become very large when $Diam(\Omega)$ becomes large. Is it possible to establish a better estimate or to construct an example of a domain $\Omega$ with large diameter and $\sigma_1(\Omega)$ large? \item[\ref{ques:cartanhadamard}] Can we get estimates like (\ref{LiWaWu}) for the other eigenvalues of domains in Cartan-Hadamard manifolds? The methods used in \cite{CoElGi2011} or \cite{Ha2011} do not seem to apply. \item[\ref{ques:diameterhnsn}] Is it possible to get inequalities similar to those in Theorem \ref{diam1} for domains in the hyperbolic space or the sphere? \item[\ref{ques:specconvtozero}] If the diameter of a domain $\Omega$ is fixed and its volume tends to $0$, can one say that all the eigenvalues of the domain tend to $0$? \item[\ref{ques:manyquestionsdoublyconnected}] Can the results on doubly connected domains at the end of Subsection \ref{UpperDomains} be improved? (See the text for the full question). \item[\ref{ques:rev1}] Can Xiong's results on upper and lower bounds for Steklov eigenvalues on manifolds of revolution, Theorems \ref{thm:xiong1} and \ref{thm:xiong2}, be improved by imposing a geometric constraint such as $\vert Ricci \vert \le a^2$? 
\item[\ref{ques:rev2}] Can Xiong's results (see previous question) be generalised to manifolds of revolution with two boundary components? \item[\ref{ques:2bc1}] Find a sharp lower bound for $\sigma_{k}(\Omega)$, where $\Omega$ is a hypersurface of revolution with boundary components $\Sp^{d} \times \{0\} \cup \Sp^{d} \times \{\delta\}$ and $\delta<2$. \item[\ref{ques:2bc2}] Find a sharp upper bound for $\sigma_{k}(\Omega)$, where $\Omega$ is a hypersurface of revolution with boundary components $\Sp^{d} \times \{0\} \cup \Sp^{d} \times \{\delta\}$. \item[\ref{ques:2bc3}] In the case of a hypersurface of revolution with either one or two boundary components, can we obtain results as in Theorem \ref{thm:xiong2}, that is, sharp bounds for the difference $\sigma_{(k+1)}-\sigma_{(k)}$ and the ratio $\frac{\sigma_{(k+1)}}{\sigma_{(k)}}$? \item[\ref{open:arbitrarilylarge}] Given a $d$-dimensional compact submanifold $\Sigma$ in $\R^m$, is it possible to construct a family of $(d+1)$-dimensional submanifolds $\Omega$ of $\R^m$ with boundary $\Sigma$ for which $\sigma_1(\Omega)$ becomes arbitrarily large? \item[\ref{ques:eucl1}] Is it possible to get a similar inequality to \eqref{ineqmean} for submanifolds of hyperbolic space? \item[\ref{ques:eucl2}] Is it possible to generalise Inequality (\ref{ineqmean}) to other eigenvalues? Note that a similar question for the spectrum of the Laplacian was solved only recently (and partially) by Kokarev in~\cite[Theorem 1.6]{Ko2020}. \item[\ref{ques:any higher maximisers}] Find examples of compact surfaces with boundary and integers $k>1$ for which a $\sigma_k$-maximising metric exists. \item[\ref{ques:gapk}.] For given $k$, which compact surfaces ${\Omega}$ satisfy $\gap_k({\Omega})>0$? (See Definition \ref{def.gap} for a definition of $\gap_k({\Omega})$). \item [\ref{ques:sig >2pi}.]
Is $\sigma_1^*({\Omega},[g])>2\pi$ for every conformal class $[g]$ when the surface ${\Omega}$ is not diffeomorphic to a disk? \item[\ref{ques:ksoq3}] \cite[Open Question 3]{KaSt2021}. In the setting of Theorem~\ref{thm.KS asymptotics}, if the limiting surface in $\mathbb{S}^{N-1}$ realising $\lambda_1^*(M)$ is embedded, does it necessarily follow that the minimal surfaces in $\mathbb{B}^{N}$ realising $\sigma_1^*(M_b)$ are embedded for all sufficiently large $b$? \item[Conjecture \ref{fraser li conj}] (Fraser and Li's Conjecture, \cite[Conjecture 3.3]{FrLi2014}) If ${\Omega}$ is a properly embedded free boundary minimal hypersurface in $\mathbb{B}^n$, then $\sigma_1({\Omega})=1$, i.e., ${\Omega}$ has spectral index one. \item[\ref{ques: conf ext metric vs pair}] If $g$ is a $\overline{\sigma}_k$-(conformally) extremal metric, is $(g,1)$ necessarily a $\overline{\sigma}_k$-(conformally) extremal pair? \item[\ref{ques: conf spec simple}] Given a compact manifold ${\Omega}$ with boundary and a conformal class $[g]$ of Riemannian metrics on ${\Omega}$, define $$\overline{\sigma}_k({\Omega}, [g])=\sup_{g'\in [g]}\, \overline{\sigma}_k({\Omega},g').$$ Is $$\overline{\sigma}_k({\Omega},[g])<\overline{\sigma}_{k+1}({\Omega}, [g]) \mbox{\,\,for all}\,\,k?$$ \item[\ref{ques:discretcomp}]For all $1\le k \le \vert B\vert-1$, is it possible to obtain an inequality of the form $$ A_1 \le \frac{\sigma_k(\Omega)}{\sigma_k(\Gamma)}\le A_2, $$ where $\Gamma$ is a discretisation of $\Omega$ defined as in subsection \ref{Discretisation and spectrum}? \item[\ref{ques:cheegerdiscrete}] In the discrete setting, is it possible to have a higher order Cheeger inequality without any dependence on $k$? \item[\ref{ques:discretequasiisom}] Let $\Gamma_1,\Gamma_2$ be two infinite roughly quasi-isometric graphs.
If there exists a constant $C_1(\Gamma_1)$ such that for each finite subgraph $\Omega$ of $\Gamma_1$, $\sigma_1(\bar \Omega)\le \frac{C_1(\Gamma_1)}{\vert B\vert^{\alpha}}$, with $B$ the boundary of $\Omega$ and $\alpha>0$, is an analogous property also true for the finite subgraphs of $\Gamma_2$? \item[\ref{ques:yangyusharp}] Is Yang-Yu's inequality \eqref{yyiso} for eigenvalues of the Raulot-Savo Dirichlet-to-Neumann operator sharp when $k=1$ and $p<\frac{d+1}{2}$? Is it sharp when $k>1?$ \item[\ref{ques: karpukhin pseudo}] Is Karpukhin's Dirichlet-to-Neumann operator $\DtN^K_p$ a pseudo-differential operator? \item[\ref{ques: Karp Weyl error}] Can the error bound in the Weyl Law~\eqref{eq:karp p weyl} for the asymptotics of $\sigma^K_{k,p}$ be improved to $O(\sigma^{d-1})$? (This would follow from an affirmative answer to Question~\ref{ques: karpukhin pseudo}. See \cite[Remark 5.9]{GKLP2021} for further comments). \item[\ref{prob:karp}] Given a closed oriented Riemannian manifold $(\Sigma,h)$ of dimension $d=2p+1$, let $[\Sigma,h]_m$ denote the collection of all orientable Riemannian manifolds $({\Omega},g)$ with $\partial {\Omega} =\Sigma$, $g_{|\Sigma}=h$ and $\beta_{d-p}({\Omega})=m$. For fixed $m$ and $k$, investigate $$\sup\,\{\sigma^K_{k,p}({\Omega},g): ({\Omega},g)\in [\Sigma,h]_m\}.$$ \item[Conjecture \ref{conjs:karpuhkin}]Some conjectures from Karpukhin \cite{Ka2019}: \begin{itemize} \item The inequality in Theorem~\ref{khpsp} is sharp for all $k$. \item When $k\leq \frac{1}{2}\binom{2p+2}{p+1}$, equality holds only for the Euclidean unit ball. \end{itemize} \item[\ref{ques:weylrough}] Can it be shown that the Steklov Weyl asymptotics \eqref{eq:stekweyl} hold whenever $\partial\Omega$ is Lipschitz? It has been done when $d+1=2$ \cite{KLP2022}. \item[\ref{ques:multconnops}] Are families of multiply connected, compact, Steklov isospectral planar domains necessarily compact in the $C^{\infty}$ topology? 
\item[\ref{ques:polygonsbigangles}] When obtaining Steklov spectral asymptotics for curvilinear polygons, can the condition that the angles of $\Omega$ are in $(0,\pi)$ be relaxed to $(0,2\pi)$? \item[\ref{ques:recoverangles}] Can the angles themselves, and not just $\cos(\pi^2/2\alpha)$, be recovered from the Steklov spectrum of a (possibly curvilinear) polygon? \item[\ref{ques: orb boundary}] Does the Steklov spectrum detect the presence of singularities on the boundary of a compact Riemannian orbifold? (Some results are known for orbisurfaces, see subsection \ref{subsec:orb pos}). \item[\ref{ques: orb interior}] Does the Steklov spectrum detect the presence of singularities in the interior of a compact Riemannian orbifold? (Some results are known for orbisurfaces, see subsection \ref{subsec:orb pos}). \item[\ref{ques:mixed vs pure}] Can a mixed Steklov-Neumann problem on a compact Riemannian manifold ${\Omega}_1$ and a pure Steklov problem on a compact Riemannian manifold ${\Omega}_2$ have the same spectrum? \item[\ref{ques.sing type}] Among Riemannian orbifolds with singularities, to what extent does the Steklov spectrum recognise the types of singularities? \item[\ref{prob: orb heat}] Develop Steklov heat asymptotics for compact Riemannian orbifolds with boundary. \item[\ref{ques.NotLapIsosp}] Do there exist Steklov isospectral Riemannian manifolds whose boundaries are not Laplace isospectral? Equivalently, does the Steklov spectrum of a Riemannian manifold determine the Laplace spectrum of its boundary? \item[\ref{ques.NotNeumIsosp}] Do there exist Steklov isospectral Riemannian manifolds that are not Neumann isospectral (other than $\sigma$-isometric surfaces)? \item[\ref{ques:isospplanedomains}] Do there exist non-isometric Steklov isospectral plane domains? \item[\ref{quesgeq3}] For manifolds of dimension $\geq 3$, can one tell from the Steklov spectrum: \begin{itemize} \item Whether the manifold has constant sectional curvature? 
\item Whether the induced metric on the boundary has constant sectional (or principal) curvature? \item Whether the induced metric on the boundary is Einstein? \end{itemize} \item[\ref{ques: inverse prob forms}] Do there exist compact Riemannian manifolds ${\Omega}_1$ and ${\Omega}_2$ with boundary such that we have $p$ and $q$ with $\stek_p^{\rs}({\Omega}_1)=\stek_p^{\rs}({\Omega}_2)$ but $\stek_q^{\rs}({\Omega}_1)\neq \stek_q^{\rs}({\Omega}_2)$? One can ask the same question for $\stek_p^{K}({\Omega}_i)$. (Note that answers to some similar problems for orbifolds seem to be ``yes", see subsection \ref{ballquots}). \item[\ref{ques:interiordecaysmooth}] Does exponential decay of Steklov eigenfunctions into the interior, for example \eqref{eq:gtbound}, hold if $\Sigma$ and/or $\Omega$ are merely assumed smooth rather than real analytic? \item[\ref{ques:decaynonsmooth}] More generally, what kind of decay properties do the Steklov eigenfunctions possess if $\Sigma$ is not assumed to be smooth? \item[\ref{ques:generic}] Do Steklov eigenfunctions generically concentrate on a single boundary component? (The case of dimension 2 has been answered in the affirmative by Martineau \cite{Ma2018}). \item[\ref{ques:yau}]\cite[Open Problem 11]{GiPo2017} Does the Steklov analogue of the nodal volume conjecture of Yau hold? That is, do there exist constants $c$ and $C$ depending only on $\Omega$ for which \begin{equation*} c\sigma_k\leq\mathcal H_{d}(u_k)\leq C\sigma_k\textrm{ and/or } c\sigma_k\leq\mathcal H_{d-1}(\varphi_k)\leq C\sigma_k? \end{equation*} \item[\ref{ques:pleijel1}] Is there a Pleijel-type theorem for the nodal counts of Steklov eigenfunctions? \item[\ref{ques:pleijel2}] Is there a theorem like Theorem \ref{thm:mainthmhash} for Dirichlet-to-Neumann eigenfunctions? If so, can it be improved to a Pleijel-type result? \end{itemize}
https://arxiv.org/abs/0708.0048
A renormalization approach to irrational rotations
We introduce a renormalization procedure which allows us to study in a unified and concise way different properties of the irrational rotations on the unit circle $\beta \mapsto \set{\alpha+\beta}$, $\alpha \in \R\setminus \Q$. In particular we obtain sharp results for the diffusion of the walk on $\Z$ generated by the location of points of the sequence $\{n\alpha +\beta\}$ on a binary partition of the unit interval. Finally we give some applications of our method.
\section{Introduction} \label{sec:intro} Irrational rotations on the unit circle $S^1\cong [0,1]/(0=1)$ are isometric transformations defined by $$[0,1) \ni \beta \mapsto \set{\alpha + \beta} \in [0,1)$$ where $\alpha \in \mathbb R \setminus \mathbb Q$ is the angle of rotation and $\set{\cdot}$ denotes the fractional part of a real number. It is well known that the Lebesgue measure on the unit interval is the unique (and thus ergodic) invariant probability measure for these transformations. However, some ergodic properties of the rotations, such as recurrence rates, waiting times and some limit laws, are known to depend on the arithmetic properties of the rotation angle $\alpha$ (see \cite{bgi}, \cite{chaz}, \cite{coelho}, \cite{kim}, \cite{ks}). These, in turn, are encoded in its continued fraction expansion and it is starting from this expansion that we introduce, borrowing it from dynamical systems, a {\sl renormalization procedure} which allows us to study several relevant properties of the orbit $(\set{n\alpha + \beta})$. The possibility of studying all these properties through the same approach is one of the main motivations of this paper. Indeed, the paper is self-contained and we obtain sharp results using efficient and concise arguments based solely on the renormalization procedure. A renormalization approach to circle maps was introduced in \cite{lanford}, and used for example in \cite{coelho} to estimate limit laws of entrance times for irrational rotations. We remark that the renormalization procedure we use is quite different from the one in \cite{lanford}, both in the construction and in the spirit. Moreover, we think that our approach can be used successfully to give sharp estimates on further ergodic properties and limit laws beyond those considered in this paper (see \cite{bon}). \noindent In this paper, we study the distribution properties of the sequences $x_n:=n\alpha + \beta$, $n=0,1,2,\dots$.
We recall that a sequence $(y_n)$ of real numbers is said to be {\sl uniformly distributed modulo $1$}, if for any real number $0<\gamma \leq 1$ we have $$ \lim_{n\to \infty} \frac 1 n \ \sum_{r=0}^{n-1} \ \chi_{[0,\gamma)} (\set{y_r})=\gamma $$ where $\chi_{A}$ denotes the indicator function of the set $A$. An interesting characterisation of the distribution properties of a sequence $(y_n)$ can be obtained as follows: take the partition of the unit interval given by $\set{[0,\frac 1 2),[\frac 1 2,1)}$ and construct the walk on $\mathbb Z$ which starts at the origin at time $0$ and at time $n+1$ moves one step to the right if $\set{y_n} \in [0,\frac 1 2)$, one step to the left otherwise. After $n+1$ steps, the position of the walker is given by \begin{equation} \label{def:somme} S_n=\sum_{r=0}^{n}s(\set{y_r}) \end{equation} $$s(y):=2\, \chi_{[0,\frac 1 2)}(\set{y})-1.$$ \noindent Clearly, if $(y_n)$ is uniformly distributed modulo $1$ then $|S_n|=o(n)$ as $n\to \infty$, and the better the uniform distribution of $(y_n)$, the slower the diffusion of the walk. In particular, for the ideal distribution for which $\frac 1 n \ \sum_{r=0}^{n-1} \ \chi_{[0,\gamma)} (\set{y_r})=\gamma$ for all $n$ we have $S_n=0$ for all $n$. In \cite{isola} the growth of the quantity $S_n$ has been studied for the sequence $(x_n)=(n\alpha + \beta)$ in the $L^\infty$ and $L^2$ norms. In this paper we give an explicit formula for $S_n$ and obtain as a corollary growth estimates for $\beta=0$, and a sharp result for the $L^\infty$ norm (see Corollary \ref{cor:l-inf}), improving results in \cite{isola}. \noindent For the sequence $x_n:= n\alpha + \beta$, with $\alpha$ irrational and $\beta\in (0,1)$, the indicator $S_n$ will be denoted by $S_n(\alpha,\beta)$ ($S_n(\alpha)$ if $\beta=0$) to stress its dependence on the arithmetical properties of the number $\alpha$. \noindent The paper is organized as follows.
Notations and the main ideas are set out in Section \ref{sec:rot}, which includes a preliminary analysis of several quantities that are needed for the renormalization procedure. In particular, we study how the different points of the sequence $(n\alpha)$ are organized according to their integer part. This depends on $a_1$, the first partial quotient in the continued fraction expansion of $\alpha$, and in particular on its parity. The main result of this section is Theorem \ref{cor:rkj}. \noindent In Section \ref{sec:diffusione} we give an iterative method to obtain an explicit expression for $S_n(\alpha)$ only in terms of the coefficients of the continued fraction of $\alpha$. This method is then used to obtain sharp estimates for $S_n(\alpha)$ that have significantly different expressions according to whether $a_1$ is even or odd. The main results are the algorithm described in Proposition \ref{prop:massimi} and its consequences described in Theorem \ref{cor:maxpari} as well as in the subsequent examples. We point out that we obtain results also for the minimum values of $S_n(\alpha)$, with the aim of giving hints on the returns to zero (this point is analysed in \cite{bon}). \noindent Next, we consider the general case of the sums $S_n(\alpha,\beta)$. Again we obtain an explicit expression for these sums (Theorem \ref{prop:gen-diff}) in terms of the coefficients of the continued fraction of $\alpha$ and of the coefficients of the expansion of $\beta$ introduced in Proposition \ref{prop:espansione}. In particular we obtain as a corollary a sharp estimate on the $L^\infty$ norm of $S_n(\alpha,\beta)$ (see Corollary \ref{cor:l-inf}). \noindent Finally, we give some applications of our approach to the Birkhoff Ergodic Theorem and to the \emph{discrepancy} (see (\ref{def:disc})) of the sequence $(n\alpha)$. It is remarkable how easily these results follow from our renormalization approach.
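Before turning to the renormalization machinery, the walk (\ref{def:somme}) can be computed directly from its definition. The following Python sketch (the function names are ours, and floating point replaces the exact irrational, which is harmless for the moderate values of $n$ used here) illustrates the slow diffusion for the golden ratio angle $\alpha=[1,1,1,\dots]$:

```python
import math

def s(y):
    # s(y) = 2*chi_[0,1/2)({y}) - 1: +1 if the fractional part of y
    # lies in [0, 1/2), and -1 otherwise
    return 1 if y % 1.0 < 0.5 else -1

def S(n, alpha, beta=0.0):
    # S_n(alpha, beta) = sum_{r=0}^{n} s(r*alpha + beta)
    return sum(s(r * alpha + beta) for r in range(n + 1))

golden = (math.sqrt(5) - 1) / 2   # the angle with partial quotients a_h = 1
# |S_n| = o(n): for this badly approximable angle the walk barely moves
print([S(n, golden) for n in (10, 100, 1000)])
```

By contrast, for an angle with a huge first partial quotient $a_1$ (so $\alpha$ is tiny), $\set{r\alpha}<\frac12$ for all $r<1/(2\alpha)$, and the same experiment shows a long ballistic stretch before the walk first turns around.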
\section{Continued fractions and return sequences} \label{sec:rot} \noindent For a given number $\alpha \in (0,1)$ let us consider its expansion in continued fraction \cite{pytheasfogg} \begin{equation} \label{eq:cont-frac} \alpha = {1\over \displaystyle a_1 + {1\over \displaystyle a_2 + {1\over\displaystyle a_3 +\cdots }}} \end{equation} which we denote by $\alpha= [a_1,a_2,a_3,\dots]$. The partial quotients $a_h$ are positive integers and the expansion terminates if and only if $\alpha$ is rational. If $\alpha$ is irrational its \virg{fast} rational approximants are the numbers \begin{equation} \frac{p_n}{q_n} := [a_1,\dots, a_n] \end{equation} which can also be defined recursively by \begin{equation} \label{eq:rel-approx} {p_0\over q_0}={0\over 1},\quad {p_1\over q_1}={1\over a_1} \quad\hbox{and}\quad {p_{n+1}\over q_{n+1}}={a_{n+1}\, p_n + p_{n-1}\over a_{n+1}\, q_n + q_{n-1}},\quad n\geq 1. \end{equation} In the following we shall consider also the positive numbers \begin{equation} \label{effenne} f_n:=(-1)^n (q_n \alpha - p_n),\qquad n\ge 0. \end{equation} To a given $\alpha \in (0,1)$ we associate the rotation $T_\alpha:X\to X$ of the unit circle $X=[0,1]/(0=1)$ given by \begin{equation} T_\alpha(\beta) := \set{\alpha+\beta}. \end{equation} One easily checks that the numbers $f_n$ determine the successive closest returns of the orbit of a point to the point itself (thus forming a monotonically decreasing sequence). Indeed, for all $n\ge 1$ and for all $\beta \in X$, it holds \begin{equation} q_n = \min \set{ r> q_{n-1} \ :\ |\beta-T_\alpha^r(\beta)| < |\beta-T_\alpha^{q_{n-1}}(\beta)|\,} \end{equation} and \begin{equation} f_n = |\beta-T_\alpha^{q_n}(\beta)|. \end{equation} The first numbers $f_n$ are given by $f_0=\alpha$, $f_1=1 -a_1 \alpha$, $f_2 = f_0 - a_2 f_1$ and more generally they satisfy the recursion \begin{equation} \label{recu} f_{n+1}=f_{n-1} - a_{n+1} f_n,\qquad n\ge 1.
\end{equation} Conversely, once the $f_n$ are known the partial quotients can be obtained as: \begin{equation} \label{recu1} a_{n+1} = \max \set{h \ge 1 \ :\ h f_n < f_{n-1}},\qquad n\geq 0, \end{equation} with the convention $f_{-1}=1$. This yields in particular $$a_1 = \max \set{h \ge 1 \ :\ h \alpha<1}$$ $$a_2 = \max \set{h \ge 1 \ :\ h(1-a_1 \alpha) < \alpha}.$$ \vskip 0.2cm \noindent Note that $\lfloor r\alpha \rfloor=0$ for all $0\le r \le a_1$ and $\lfloor (a_1+1)\alpha \rfloor=1$. In the sequel we shall study the behaviour of the sum $S_n(\alpha)$ by looking at the values of $s(r\alpha)$ with $\lfloor r\alpha \rfloor$ constant. To this end we introduce the following quantities. Set \begin{equation} \label{eq:rk} r_k := \min \set{r\ge 0 \ :\ \lfloor r\alpha \rfloor=k},\quad k\geq 0. \end{equation} In terms of $T_\alpha$ this is the least number of iterates of $0$ needed to make $k$ ``turns'' of the circle $X$. Set moreover \begin{equation} \label{eq:tk} t_k:= \# \set{r\ge 0 \ :\ \lfloor r\alpha \rfloor =k}, \quad k\geq 0, \end{equation} which is the number of $T_\alpha$-iterates of $0$ which all lie ``within the same circle'' after having turned the circle $k$ times. \noindent One sees that $t_0 = a_1 +1$, and it is not difficult to realize that for all $k\ge 1$, $t_k$ is equal to either $a_1$ or $a_1+1$. More precisely we have \begin{lem} \label{lemmino} $t_k$ is either equal to $a_1 +1$ or $a_1$, according to whether $\set{r_k \alpha}$ is smaller or larger than $f_1$, respectively. \end{lem} \noindent {\sl Proof.} Let $\lfloor r_k \alpha \rfloor=k$; then $0\le \{ r_k \alpha\} \le \alpha$ and $\{ r_k \alpha\} + (a_1 -1) \alpha \le a_1 \alpha < 1$, hence $t_k \ge a_1$. Moreover $\{ r_k \alpha\} + (a_1 +1) \alpha \ge (a_1+1)\alpha >1$, hence $t_k \le a_1+1$. Finally $t_k = a_1+1$ if and only if $\{ r_k \alpha\} + a_1 \alpha <1$, that is if and only if $\{ r_k \alpha\} < (1- a_1 \alpha)=f_1$.
$\Box$ \vskip 0.5cm \noindent Starting our analysis from $r=0$, we notice that $r_1=a_1+1$ and $\set{r_1 \alpha}=\alpha-f_1=f_0-f_1$. Then $t_1 = a_1+1$ if and only if $f_0-f_1 < f_1$, that is if and only if $f_2=f_0-f_1$, that is $a_2=1$ (cf.\ (\ref{recu1})). If instead $a_2>1$ then $\set{r_2 \alpha}= \set{r_1 \alpha}-f_1 = f_0-2f_1$ and proceeding recursively $\set{r_k \alpha}=f_0 -kf_1 > f_1$ for all $1\le k < a_2$, hence $t_k = a_1$ for all $1\le k < a_2$. On the other hand $\set{r_{a_2} \alpha}=f_0 - a_2 f_1 = f_2<f_1$, whence $t_{a_2}=a_1+1$ and $r_{a_2}=q_2$, the denominator of the second \virg{fast} rational approximant of $\alpha$. \noindent Let us denote by $(r_{k_j})$ the sub-sequence of $(r_k)$ such that $t_{k_j} = a_1+1$ for all $j\ge 0$. So far we have shown that $k_0=0$, $r_{k_0}=0$ and $k_1=a_2$, $r_{k_1}=q_2$. We now investigate the subsequent terms of $(r_{k_j})$. Let $(g_j)$ denote the sequence of \virg{gaps} between subsequent elements of $(r_{k_j})$: \begin{equation} \label{eq:gj} g_{j} := r_{k_j} - r_{k_{j-1}},\qquad j\geq 1. \end{equation} Given the irrational number $\alpha=[a_1,a_2,\dots]$ and $m\ge 1$, we denote by \begin{equation} \label{eq:alfam} \alpha_m := [a_{m+1}, a_{m+2}, \dots]=G^m(\alpha) \end{equation} the $m$-th iterate of $\alpha$ under the Gauss map $G:[0,1]\to [0,1]$ defined by $G(x)=\{1/x\}$ for $x\ne 0$ and $G(0)=0$. We denote by $p_n^{(m)}$, $q_n^{(m)}$ and $f_n^{(m)}$ the quantities corresponding to (\ref{eq:rel-approx}) and (\ref{effenne}) for $\alpha_m$. It holds \begin{equation} \label{decr} f_n^{(m)} =\prod_{k=0}^{n}\alpha_{k+m} \end{equation} and therefore \begin{equation} \label{eq:f2f1} \alpha_{r+m} = \frac{f_r^{(m)}}{f_{r-1}^{(m)}}, \qquad r\geq 0. \end{equation} Let moreover $T_\alpha^{(m)} : X\to X$ denote the rotation with angle $\alpha_m$ (so that $T_\alpha^{(0)} =T_\alpha$) and let $t_k^{(m)}$, $r_k^{(m)}$, $r_{k_j}^{(m)}$ and $g_j^{(m)}$ be the corresponding quantities.
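Everything defined so far is readily computable. The Python sketch below (the helper names are ours; floating point stands in for the exact irrational, so only moderately many Gauss-map iterations are reliable) computes the partial quotients via the Gauss map $G$, the convergents through the recursion (\ref{eq:rel-approx}), and the sequences $(r_k)$, $(t_k)$ straight from their definitions, so that Lemma~\ref{lemmino} can be checked numerically:

```python
from math import floor, sqrt

def partial_quotients(alpha, n):
    # first n partial quotients a_1, ..., a_n of alpha in (0,1),
    # computed with the Gauss map G(x) = {1/x}
    a, x = [], alpha
    for _ in range(n):
        a.append(floor(1 / x))
        x = 1 / x - floor(1 / x)
    return a  # a[0] corresponds to a_1 in the paper

def convergents(a):
    # p_n, q_n from the recursion: p_0/q_0 = 0/1, p_1/q_1 = 1/a_1,
    # p_{n+1} = a_{n+1} p_n + p_{n-1}, q_{n+1} = a_{n+1} q_n + q_{n-1}
    p, q = [0, 1], [1, a[0]]
    for ah in a[1:]:
        p.append(ah * p[-1] + p[-2])
        q.append(ah * q[-1] + q[-2])
    return p, q

def r_t_sequences(alpha, kmax):
    # r_k = least r >= 0 with floor(r*alpha) = k;
    # t_k = #{r >= 0 : floor(r*alpha) = k}
    r, t, rr = [], [], 0
    for k in range(kmax + 1):
        # floor(r*alpha) is nondecreasing and hits every integer (0 < alpha < 1)
        while floor(rr * alpha) < k:
            rr += 1
        r.append(rr)
        cnt = 0
        while floor((rr + cnt) * alpha) == k:
            cnt += 1
        t.append(cnt)
    return r, t

golden = (sqrt(5) - 1) / 2
a = partial_quotients(golden, 8)   # all partial quotients equal 1
p, q = convergents(a)              # the q_n are Fibonacci numbers
r, t = r_t_sequences(golden, 8)    # every t_k is a_1 = 1 or a_1 + 1 = 2
```

For $\alpha=(\sqrt5-1)/2=[1,1,1,\dots]$ one finds $q=(1,1,2,3,5,\dots)$ and $t_k\in\{1,2\}$ for all $k$, in agreement with Lemma~\ref{lemmino}.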
\begin{prop} \label{prop:rotren} The following relations hold for all $m\ge 0$: \begin{itemize} \item[(i)] for all $k\ge 0$ $$t_k^{(m)} = \left\{ \begin{array}{ll} a_{m+1}+1, & {\rm if}\ \{ r_k^{(m)} \alpha_m \} < f_1^{(m)} = 1- a_{m+1} \alpha_m\\[0.2cm] a_{m+1}, & {\rm otherwise} \end{array} \right.$$ \item[(ii)] for all $j\ge 1$ $$g_{j}^{(m)}=r_{k_j}^{(m)} - r_{k_{j-1}}^{(m)} = \left\{ \begin{array}{ll} q_2^{(m)}, & {\rm if}\ \{ (j-1) \alpha_{m+2} \} < (1- \alpha_{m+2})\\[0.2cm] q_2^{(m)}+q_1^{(m)}, & {\rm otherwise} \end{array} \right.$$ \item[(iii)] let $(j_h^{(m)})$ be the subsequence such that $g_{j_h}^{(m)} = q_2^{(m)}+q_1^{(m)}$ for all $h\ge 0$. Then $j_0^{(m)} = a_{m+3} +1$ and $j_h^{(m)}- j_{h-1}^{(m)}= t_{h}^{(m+2)}$ for all $h\ge 1$. \end{itemize} \end{prop} \noindent {\sl Proof.} For the sake of notational simplicity we show the results for $m=0$; the general case is identical. \noindent Point $(i)$ has been proved above. \noindent To prove point $(ii)$ we apply the {\it Three Gap Theorem} (see for example \cite{pytheasfogg}) to the interval $(0,f_1)$. According to this theorem the possible values of the gaps $g_j$ between two successive visits of the interval $(0,f_1)$ by the orbit $(j\alpha)$ are given by $$g_j=r_{k_j} - r_{k_{j-1}} = \left\{ \begin{array}{ll} q_2+q_1, & \mbox{with frequency }\ \frac{f_2}{f_1}, \\[0.3cm] q_2, & \mbox{with frequency }\ 1-\frac{f_2}{f_1} . \end{array} \right.$$ Let now $\{r_{k_{j-1}}\alpha\}$ be in $(0,f_1)$. We can repeat the same argument as for $r_{a_2}$ to prove that $g_j=q_2$ if and only if $\{r_{k_{j-1}}\alpha\}< f_1-f_2$. Indeed we have $\set{r_{k_{j-1}+1} \alpha} = \set{r_{k_{j-1}} \alpha} +f_0 - f_1$, and more generally we can write $$\set{r_{k_{j-1}+h} \alpha} = \set{r_{k_{j-1}} \alpha} +f_0 - h f_1$$ for all $h=1,2,\dots$ such that the r.h.s. remains non-negative.
This certainly happens until $h$ reaches the value $a_2$, as one readily checks, but for $h=a_2+2$ we have $\set{r_{k_{j-1}} \alpha} +f_0 - (a_2+2) f_1 = \set{r_{k_{j-1}} \alpha} + f_2 - 2f_1 < f_2 - f_1<0$ since $\set{r_{k_{j-1}} \alpha} < f_1$. This shows that $a_2\le k_j - k_{j-1} \le a_2+1$ and constructively determines the possible values of the gaps $g_j$. Now $k_j - k_{j-1}$ is equal either to $a_2$ or to $a_2+1$ (and the gap $g_j$ is equal to $q_2$ or to $q_2+q_1$, respectively) if and only if $\set{r_{k_{j-1}} \alpha} +f_0 - a_2 f_1$ is smaller or greater than $f_1$, respectively. Hence $g_j=q_2$ if and only if $\{r_{k_{j-1}}\alpha\}< f_1-f_2$. \noindent Let us now denote by $\tilde T$ the map that acts on $\tilde X := [0,f_1]/(0=f_1)$ as $\tilde T : \set{r_{k_j} \alpha} \mapsto \set{r_{k_{j+1}} \alpha}$, that is the first return map on the interval $[0,f_1]$ for the rotation $T_\alpha$. $\tilde T$ is isomorphic to the rotation $T_\alpha^{(2)}$ of $X$ through the angle $\alpha_2= \frac{f_2}{f_1} =[a_3,a_4,\dots]$. Starting from $r_{k_0}=0$, we need to follow the orbit of $0$ using the rotation $T_\alpha^{(2)}$ and determine the gaps $(g_j)$. We have shown that $g_j =q_2$ if and only if $(T_\alpha^{(2)})^{j-1}(0) < 1-\frac{f_2}{f_1}$ or, which is the same by (\ref{eq:f2f1}), $\{ (j-1) \alpha_2 \} < (1-\alpha_2)$. This proves $(ii)$. \noindent Point (iii) follows by repeating a similar argument for $T_\alpha^{(2)}$ on the interval $(1-\alpha_2, 1)$. Again the Three Gap Theorem yields the gaps between two successive visits of the interval $(1-\alpha_2, 1)$ by the orbit $((j-1)\alpha_2)$. Let $(j_h)$ be the subsequence such that $\{ (j_h -1) \alpha_2 \} > (1-\alpha_2)$; then for each $h\ge 1$, ${j_h} - {j_{h-1}}$ is equal either to $a_3+1$ or to $a_3$, with ${j_0}= a_3+1$. Indeed, $(j_0-1) \alpha_2 = a_3 \alpha_2 <1$ but $(a_3+1) \alpha_2 >1$, therefore $a_3 \alpha_2 > (1-\alpha_2)$.
This implies that $g_1=g_2=\dots=g_{a_3} = q_2$ and $g_{a_3+1}= q_2+q_1$. Let us remark that $\lfloor (j_{h-1}-1) \alpha_2 \rfloor =h-1$, so that $r_h^{(2)}=j_{h-1}$, where we recall that $r_h^{(2)}$ is defined as the smallest integer such that $\lfloor r_h^{(2)} \alpha_2 \rfloor =h$. Now $\{ r_h^{(2)} \alpha_2 \} + (a_3+1) \alpha_2 >1$ hence $(j_h -1)-(j_{h-1}-1) \le a_3+1$, and $\{ r_h^{(2)} \alpha_2 \} + (a_3-2) \alpha_2 < (a_3-1)\alpha_2 < 1-\alpha_2$ hence $(j_h - 1) - (j_{h-1} - 1) \ge a_3$. Moreover $\{ r_h^{(2)} \alpha_2 \} < f_1^{(2)}=1-a_3 \alpha_2$ if and only if $\{ r_h^{(2)} \alpha_2 \} + (a_3-1) \alpha_2 < 1-\alpha_2$, hence if and only if $(j_h -1)-(j_{h-1}-1) = a_3+1$. This shows that for all $h\ge 1$, ${j_h} - {j_{h-1}}$ is equal to $t_h^{(2)}$ and both are equal either to $a_3+1$ or to $a_3$. $\Box$ \vskip 0.5cm \noindent The proof given above brings out the {\sl renormalization argument} mentioned in the Introduction, which will be fully developed in the next section. According to the above discussion, since $j_0=a_3+1$, the values of the sequence $r_{k_j}$, with $1\le j \le a_3+1$, are given by $$0,\ q_2,\ 2q_2,\dots,\ a_3 q_2,\ a_3 q_2 + q_2 +q_1 = q_3+q_2.$$ Note that $k_{a_3+1}=p_3 + p_2$. To continue the determination of the numbers $r_{k_j}$ we have to use the knowledge of the subsequent $j_h$, and by point (iii) this is equivalent to repeating the argument above for the rotation $T_\alpha^{(2)}$. \noindent We need the \emph{Ostrowski representation} of an integer \cite{pytheasfogg}: given an irrational number $\alpha \in (0,1)$ with partial quotients $(a_h)$ and denominators $(q_h)$ of its rational approximants, any positive integer $r$ can be written in a unique way in the form \begin{equation} \label{eq:ost} r = \sum_{h=0}^N c_h \,q_h \quad \hbox{with}\quad 0\leq c_{h} \leq a_{h+1} \quad \hbox{and}\quad c_{h-1}=0\quad \hbox{if}\quad c_{h}=a_{h+1} \end{equation} for some integer $N$.
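The Ostrowski digits in (\ref{eq:ost}) can be produced by a greedy algorithm on the denominators $q_h$: one repeatedly subtracts the largest available multiple of the largest $q_h\le r$. A minimal Python sketch of this construction (the function names are ours; the partial quotients are passed as a list, here those of $\alpha=[2,2,2,\dots]=\sqrt 2-1$):

```python
def ostrowski(r, a):
    """Greedy Ostrowski digits c_h of r, so that r = sum_h c[h] * q[h].

    a[h] plays the role of the partial quotient a_{h+1}; the list a must be
    long enough for the given r.  The digit constraints 0 <= c_h <= a_{h+1},
    and c_{h-1} = 0 whenever c_h = a_{h+1}, come out of the greedy choice
    automatically.
    """
    q = [1, a[0]]                        # q_0 = 1, q_1 = a_1
    h = 1
    while q[-1] <= r:                    # grow the q_h until they exceed r
        q.append(a[h] * q[-1] + q[-2])   # q_{h+1} = a_{h+1} q_h + q_{h-1}
        h += 1
    c = [0] * len(q)
    for h in range(len(q) - 1, -1, -1):  # greedy, from the largest q_h down
        c[h], r = divmod(r, q[h])
    return c, q

digits, q = ostrowski(41, [2] * 30)      # alpha = sqrt(2) - 1 = [2,2,2,...]
```

For $r=41$ and $\alpha=\sqrt 2-1$ (so $q_h=1,2,5,12,29,\dots$) the greedy choice gives $41=29+12$, i.e.\ the digits $c_3=c_4=1$.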
We call $N$ the \emph{order} of the integer $r$, denoted as $N=ord(r)$. \begin{thm} \label{cor:rkj} Given a positive integer $r$, we have $r=r_{k_j}$ for some $j> 0$ if and only if in the Ostrowski representation of $r$ we have $c_0=c_1=0$ and the index $\min \{h : c_h >0\}$ is even. Moreover, a positive integer $r$ is of the form $r_k$, for some $k$, if and only if either $r=r_{k_j}$ for some $j> 0$ or $r=r_{k_j} + c_1 q_1 +1$ for some $j> 0$ and $1\le c_1 \le a_2$. \end{thm} \noindent {\sl Proof.} We have verified the first part of the claim for $r\le q_3+q_2$, finding $r_{k_j}$ with $j=1,\dots,a_3+1$. To continue, by Proposition \ref{prop:rotren}(iii) we have to study the sequence $t_h^{(2)}$, that is the sequence $t_h$ for the angle $\alpha_2=[a_3,a_4,\dots]$. The first $(a_4+1)$ values are $$a_3+1,\ a_3,\ a_3,\dots,\ a_3, a_3+1$$ as obtained by parts (i) and (ii) of Proposition \ref{prop:rotren} for $m=2$. This leads to the computation of $r_{k_j}$ up to $(q_4+q_3+q_2)$. What happens afterwards depends on whether $r_{k_2}^{(2)}$ is $q_2^{(2)}=(a_4 a_3+1)$ or $q_2^{(2)}+ q_1^{(2)}=(a_4 a_3 +a_3 +1)$. We already solved this problem for $r_{k_j}^{(0)}$ up to $j=a_3+1$. Hence in the same way we can solve the problem for $r_{k_j}^{(2)}$ up to $j=a_3^{(2)}+1=a_5+1$. This implies the claim up to $(q_5+q_2)$. \noindent The subsequent steps follow by repeating the same argument as before, where for all $i\ge 2$, the denominators $q_{2i}$ and $q_{2i+1}$ replace $q_4$ and $q_5$. The form of the integers $r_{k_j}$ then follows by induction on $i\ge 2$. \noindent To prove the result for $r_k$, simply notice that if $r_k$ is not $r_{k_j}$, then it is obtained from one of the $r_{k_j}$ by adding $q_1$ as many times as needed, since $t_k\ge q_1$, hence there are at least $q_1$ iterations before $\lfloor r\alpha \rfloor$ increases. Moreover, the iterations cannot be more than $q_1$ because $r_k$ is not $r_{k_j}$. This finishes the proof.
$\Box$ \vskip 0.5cm \noindent We finally point out the following relations between an integer $r_{k_j}$ and its index $j$: first, for $j>0$ we have \begin{equation} \label{eq:rkj-kj} r_{k_j} = \sum_{h\ge 2} c_h\, q_h \qquad \Rightarrow \qquad k_j= \sum_{h\ge 2} c_h\, p_h. \end{equation} Second, replacing $p_3$ and $p_2$ in $k_j$ with $a_3$ and $1$, respectively, and using the definition of the numbers $q_h^{(2)}$, one obtains inductively \begin{equation} \label{eq:kj-j} j= \sum_{h\ge 2} c_{h} q_{h-2}^{(2)} = \sum_{h\ge 2} c_{h} \left( q_h - a_1 p_h \right). \end{equation} \noindent In the following, besides $\alpha_m=G^m(\alpha)$ we will also need the numbers \begin{equation} \label{eq:alfamprimo} \bar \alpha_m := [a_{m+1}-1, a_{m+2},\dots] = \frac{G^m(\alpha)}{1-G^m(\alpha)}\, \cdot \end{equation} We remark that if $a_{m+1}=1$ then $\bar \alpha_m= \alpha_{m+2}$. Let us denote by $\bar T_\alpha^{(m)}:X\to X$ the rotation of angle $\bar \alpha_m$ and $\bar p_n^{(m)}, \bar q_n^{(m)}, \bar f_n^{(m)}$ the corresponding quantities (cf. (\ref{eq:rel-approx}) and (\ref{effenne})). A simple inductive argument shows that in the case $m=1$, if $r_{k_j}$ is defined as above, then \begin{equation} \label{eq:kjmenoj} k_j-j = \left\{ \begin{array}{ll} \sum_{h\ge 2} \ c_h \bar q_{h-1}^{(1)}, & \mbox{if } a_2 \not= 1, \\[5mm] \sum_{h\ge 3} \ c_h \bar q_{h-3}^{(1)}, & \mbox{if } a_2 = 1. \end{array} \right. \end{equation} For the sequence $(\bar f_n^{(m)})$, on the other hand, it holds for all $m\ge 1$ and all $n\ge 0$ that \begin{equation} \label{eq:effebar} \bar f_n^{(m)} = \left\{ \begin{array}{ll} \frac{f_{n+1}^{(m-1)}}{f_0^{(m-1)}-f_1^{(m-1)}}, & \mbox{if } a_2 \not= 1, \\[5mm] \frac{f_{n+3}^{(m-1)}}{f_2^{(m-1)}}, & \mbox{if } a_2 = 1. \end{array} \right.
\end{equation} \vskip 0.5cm \noindent We end this section by giving the following version of a standard expansion of a real number $\beta \in (0,1)$ in terms of the numbers $f_n$ defined in (\ref{effenne}) for a fixed irrational number $\alpha$ with partial quotients $(a_k)$ (see, e.g., \cite{pytheasfogg}, Sect. 6.4). \begin{prop} \label{prop:espansione} For all $\beta \in (0,1)$ there exists a unique sequence of integers $(b_k)$ such that: (i) $\beta= \sum_{k=0}^\infty b_k f_k$; (ii) $0\le b_k \le a_{k+1}$ for all $k\ge 0$; (iii) $b_k=a_{k+1}$ implies $b_{k+1}=0$. Moreover, the coefficients $(b_k)$ are eventually zero if and only if $\beta \in \mathbb Z + \alpha \mathbb Z$. \end{prop} \noindent \emph{Proof.} By definition, $(f_k)$ is a monotonically decreasing sequence of positive real numbers. The sequence $(b_k)$ is constructed by a greedy algorithm: let $$b_0 := \left\lfloor \frac \beta \alpha \right\rfloor, \qquad \qquad \beta_1 := \beta - b_0 \alpha,$$ where we recall $\alpha=f_0$. Note that $\beta < 1$ implies $b_0 \le a_1$ and $\beta_1 < f_0$. Then we can define by induction for all $k\ge 1$ \begin{equation} \label{eq:bk} b_k := \left\lfloor \frac{\beta_k}{f_k} \right\rfloor, \qquad \qquad \beta_{k+1} := \beta_k - b_k f_k = \beta - \sum_{i=0}^k b_i f_i. \end{equation} By definition of $b_k$, it holds that $\beta_k < f_{k-1}$, hence $\beta=\lim_k \sum_{i=0}^k b_i f_i$ and $b_k \le a_{k+1}$ (see equation (\ref{recu1})). Moreover, $b_k = a_{k+1}$ implies $\beta_{k+1} < f_{k-1} - a_{k+1} f_k = f_{k+1}$ (see equation (\ref{recu})), hence $b_{k+1} =0$. This proves parts (i), (ii) and (iii). \noindent Let now $b_k=0$ for all $k> \bar k$, for some integer $\bar k$. Then $\beta = \sum_{i=0}^{\bar k} b_i f_i$ and $f_k \in \mathbb Z + \alpha \mathbb Z$ for all $k\ge 0$ imply that $\beta \in \mathbb Z + \alpha \mathbb Z$. Conversely, let $\beta = t+\alpha s$ for $t,s \in \mathbb Z$. Since $\beta \in (0,1)$, if $t=0$ then $0\le s \le a_1$, hence $b_0=s$ and $\beta_1 =0$.
This implies $b_k=0$ for all $k\ge 1$. Let now $t>0$ so that $s<0$. If we let $m=\max \set{r \in \mathbb N \ :\ \lfloor r\alpha \rfloor =t-1}$, then we can write $\beta = t -m\alpha + (m-|s|)\alpha$, where $m-|s| \le a_1$. Using the expansion of equation (\ref{eq:miaforma}), we can write $m=r_{k_j}+R_1 q_1$ for some $r_{k_j}=\sum_{h=2}^N c_h q_h$ and $0\le R_1 \le a_2+1$. Notice that $R_0=0$ by the definition of $m$. From this, using equation (\ref{eq:rkj-kj}), we obtain $t= k_j +R_1$ and therefore $$\beta = \sum_{h=2}^N \ (-1)^{h+1} c_h f_h + R_1 f_1 + (m-|s|) f_0$$ From the definition of $b_k$ one immediately sees that $b_k=0$ for all $k> N$. The same argument works for the case $t<0$. $\Box$ \section{The growth of $S_n(\alpha)$} \label{sec:diffusione} We now use the sequence $(t_k)$ to study the behaviour of $S_n(\alpha)$ and hence the diffusive properties of the corresponding walk. \noindent We have shown that $t_k$ is equal either to $a_1$ or to $a_1+1$. Therefore, according to whether $a_1$ is even or odd, only the iterations for which $t_k=a_1+1$ or $t_k=a_1$, respectively, are important for the growth behaviour. \noindent Let us consider first of all the case $a_1$ even. In this case, if $t_k=a_1$ then $\sum_{i=r_k}^{r_k+a_1-1} s(i\alpha)=0$, hence we can neglect these terms, since the \virg{walker} associated with $S_n(\alpha)$ would simply take $a_1$ steps to start from $S_{r_k-1}(\alpha)$ and come back to the same point, after having reached the point $S_{r_k-1}(\alpha)+\frac{a_1}{2}$. Hence we can restrict ourselves to the study of the sequence $\set{r\alpha}$ with $r_{k_j}\le r \le r_{k_j}+a_1$, where we recall that the sub-sequence $(r_{k_j})$ corresponds to $t_{k_j} = a_1+1$. In these cases $\sum_{i=r_{k_j}}^{r_{k_j}+a_1} s(i\alpha)= \pm 1$, according to whether the number $\{ r_{k_j} \alpha + \frac{a_1}{2} \alpha \}$ is $< \frac 1 2$ or $> \frac 1 2$, respectively, that is whether $\set{r_{k_j} \alpha} < \frac 1 2 f_1$ or $> \frac 1 2 f_1$.
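This sign criterion is easy to check numerically. The sketch below does so for $\alpha=\sqrt 2-1=[2,2,2,\dots]$, so that $a_1=2$ is even and $f_1=3-2\sqrt 2$; all variable names are ours, and $s(x)$ is $+1$ or $-1$ according to whether $\{x\}<\frac12$ or not:

```python
import math

alpha = math.sqrt(2) - 1      # continued fraction [2,2,2,...], so a_1 = 2 is even
a1 = 2
f1 = 1 - a1 * alpha           # f_1 = 3 - 2*sqrt(2)

def s(x):
    """+1 on the first half of the circle, -1 on the second."""
    return 1 if x % 1.0 < 0.5 else -1

# r_k = smallest r with floor(r * alpha) = k, and the gaps t_k = r_{k+1} - r_k
r_of_k, r = [], 0
while len(r_of_k) < 1500:
    if math.floor(r * alpha) == len(r_of_k):
        r_of_k.append(r)
    r += 1
t = [r_of_k[k + 1] - r_of_k[k] for k in range(len(r_of_k) - 1)]

# at the indices k_j with t_k = a_1 + 1 the block sum over a_1 + 1 terms is
# +1 or -1 according to whether {r_{k_j} alpha} lies below or above f_1 / 2
pairs = []
for k, tk in enumerate(t):
    if tk == a1 + 1:
        rk = r_of_k[k]
        block = sum(s(i * alpha) for i in range(rk, rk + a1 + 1))
        predicted = 1 if (rk * alpha) % 1.0 < f1 / 2 else -1
        pairs.append((block, predicted))
```

Running the check confirms that every block sum agrees with the prediction made from the position of $\set{r_{k_j}\alpha}$ alone.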
In view of the analysis made in the previous section, given the first return map $\tilde T$ on the interval $(0,f_1)$, and its isomorphism with the rotation $T_\alpha^{(2)}$ on $X$, we conclude that $$\sum_{i=r_{k_j}}^{r_{k_j}+a_1} s(T_\alpha^i(0))= 1 \ \Longleftrightarrow \ (T_\alpha^{(2)})^j (0) < \frac 1 2\, \cdot$$ Using this fact we now study the relation between the sequences $S_n(\alpha)$ and $S_n(\alpha_2)$. We obtain that for all $r\ge 0$ it holds that \begin{equation} \label{eq:rin-rotpari} a_1 \mbox{ even} \ \Longrightarrow \ S_{r}(\alpha) = S_{j(r)}(\alpha_2) + \tilde S(r) \end{equation} where $j(r)$ and $\tilde S(r)$ are computed in the following way. Let us write $r$ in the form \begin{equation} \label{eq:miaforma} r= r_{k_j} + R_1 q_1 +R_0 \end{equation} with $R_1 q_1 + R_0 < r_{k_{j+1}}- r_{k_j}$, $0\le R_1 \le a_2+1$ and $0\le R_0 < q_1$. We remark that this can be different from the Ostrowski representation of $r$, since it may happen that $R_1=a_2+1$. However, the order of $r$ is equal to that of $r_{k_j}$ for $j>0$. We have \begin{equation} \label{eq:jr} j(r)=\max \set{\bar j\ge 0 \ :\ r_{k_{\bar j}} < r-R_0} = \max \set{j+ \mbox{sgn}(R_1)-1,0} \end{equation} and \begin{equation} \label{eq:sr} \tilde S(r)= \left\{ \begin{array}{ll} \sum_{i=r-R_0+1}^r s(\{ i\alpha \}) & \mbox{ if } R_0>0, \; R_1>0, \\[0.3cm] \sum_{i=r-R_0}^r s(\{ i\alpha \}) & \mbox{ if } R_0>0, \;R_1=0,\\[0.3cm] 0& \mbox{ if } R_0=0, \;R_1>0,\\[0.3cm] s(\{ r\alpha \})& \mbox{ if } R_0=0, \;R_1=0. \end{array} \right. \end{equation} We remark that using equations (\ref{eq:rkj-kj}) and (\ref{eq:kj-j}) it is possible to obtain $j$ from the knowledge of $r_{k_j}$. Moreover, $0\le \tilde S(r) \le 1 + \frac{a_1}{2}$ for all $r\ge 0$, hence the growth behaviour of $S_n(\alpha)$ only depends on that of $S_n(\alpha_2)$. \vskip 0.5cm \noindent The case $a_1$ odd is in some sense complementary to the previous one.
Indeed, in this case, we obviously have $\sum_{i=r_{k_j}}^{r_{k_j}+a_1} s(T_\alpha^i(0))= 0$, whereas for $k$ such that $t_k = a_1$ we have $\sum_{i=r_{k}}^{r_{k}+a_1-1} s(T_\alpha^i(0))= \pm 1$ according to whether $\{r_k \alpha + \frac{a_1-1}{2} \alpha \} < \frac 1 2$ or $>\frac 1 2$. We would like to construct an induced map on some interval of $X$, to connect the values of $\sum_{i=r_{k}}^{r_{k}+a_1-1} s(T_\alpha^i(0))$ to a suitable orbit of such an induced map. To this aim we notice that the point $\{ r_k \alpha + \frac{a_1-1}{2} \alpha \}$ belongs to the interval $J:=\left( \frac 1 2 - \frac 1 2 ( f_0 - f_1), \frac 1 2 + \frac 1 2 ( f_0 - f_1)\right)$ for all $k\ge 0$ such that $t_k =a_1$. This follows immediately from the following remarks: \begin{enumerate} \item $\set{r_k \alpha}> f_1$ and $f_1 + \frac{a_1-1}{2} \alpha = \frac 1 2 - \frac 1 2 ( f_0 - f_1)$; \item $\{ r_k \alpha + \frac{a_1-1}{2} \alpha \} < f_1 + \frac{a_1+1}{2} \alpha - f_1 $, since $f_1 + \frac{a_1+1}{2} \alpha > \{ r_{k_j} \alpha + \frac{a_1-1}{2} \alpha \}$ for all $k_j$, and $\frac{a_1+1}{2} \alpha = \frac 1 2 + \frac 1 2 ( f_0 - f_1)$. \end{enumerate} Moreover, the two estimates in 1.\ and 2.\ are sharp. \noindent From the definition of the interval $J$, it also follows that $\{ r_{k_j} \alpha + r \alpha \} \not\in J$ for all $r=1,\dots,a_1$, since $\set{r_{k_j} \alpha} \in (0,f_1)$ implies that $\{ r_{k_j} \alpha + \frac{a_1-1}{2} \alpha \} <\frac 1 2 - \frac 1 2 ( f_0 - f_1)$ and $\{ r_{k_j} \alpha + \frac{a_1+1}{2} \alpha \} > \frac 1 2 + \frac 1 2 ( f_0 - f_1)$.
Hence we can consider the first return map $\bar T$ of $T_\alpha$ to the interval $J$, and obtain that $\bar T$ is isomorphic to the inverse of the rotation $\bar T_{\alpha}^{(1)}$ on $X$, that is the rotation of angle $$- \bar \alpha_1=- \frac{f_1}{f_0-f_1}.$$ \noindent Let now $(k_i)$ be the sub-sequence such that $t_{k_i}=a_1$ for all $i\ge 1$; then $$\sum_{r=r_{k_i}}^{r_{k_i}+a_1-1} s(T_\alpha^r(0))= 1 \Longleftrightarrow \ (\bar T_\alpha^{(1)})^i (0) < \frac 1 2$$ We now want to give an analogue of equation (\ref{eq:rin-rotpari}). In this case we have to neglect $(\bar T_\alpha^{(1)})^0(0)=0$, since we start with $i=1$, hence we obtain that for all $r\ge 0$ \begin{equation} \label{eq:rin-rotdisp} a_1 \mbox{ odd} \ \Longrightarrow \ S_{r}(\alpha) = S_{i(r)}(-\bar \alpha_1)-1 + \tilde S(r) \end{equation} where $\tilde S(r)$ is the same as in equation (\ref{eq:rin-rotpari}) and $i(r)$ is computed in the following way. Let us write again $r$ as in equation (\ref{eq:miaforma}), then \begin{equation} \label{eq:ir} i(r)= k_j-j+\max\{(R_1-1),0\} \end{equation} where we recall equations (\ref{eq:rkj-kj}), (\ref{eq:kj-j}) and (\ref{eq:kjmenoj}). \noindent Again $\tilde S(r)$ is uniformly bounded, so that the diffusive properties of $S_n(\alpha)$ depend only on those of $S_n(\bar \alpha_1)$. Moreover, we note that for all $n\ge 0$ we have \begin{equation} \label{eq:utile-dispari} S_{n}(-\bar \alpha_1) -1 = - \left( S_n(\bar \alpha_1)-1 \right) \end{equation} \vskip 0.5cm \noindent In conclusion, we have shown that, as far as the diffusive properties are concerned, the walk $(S_n(\alpha))$ is equivalent to a \virg{renormalized} walk $(S_{R(n)}(\beta))$, where the values of $R(n)$ and $\beta$ depend on the parity of $a_1$, the first partial quotient of the number $\alpha$. \noindent Equations (\ref{eq:rin-rotpari}) and (\ref{eq:rin-rotdisp}) lead by iteration to an explicit expression for $S_n(\alpha)$ only in terms of the $(a_k)$.
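Identity (\ref{eq:utile-dispari}) only uses the symmetry $\set{-x}=1-\set{x}$ together with $s(0)=+1$, and it can be confirmed numerically for any irrational angle; a short sketch (notation as in (\ref{def:somme}), with the rotation by $-\gamma$ realised as the rotation by $1-\gamma$, and the angle $\sqrt 3-1$ an arbitrary choice of ours):

```python
import math

def s(x):
    """+1 on the first half of the circle, -1 on the second."""
    return 1 if x % 1.0 < 0.5 else -1

def S(n, theta):
    """S_n(theta) = sum_{r=0}^{n} s({r * theta})."""
    return sum(s(r * theta) for r in range(n + 1))

gamma = math.sqrt(3) - 1   # any irrational angle would do
ok = all(S(n, 1 - gamma) - 1 == -(S(n, gamma) - 1) for n in range(500))
```

The $r=0$ terms contribute $+1$ on both sides, while all other terms change sign under $\gamma\mapsto -\gamma$, which is exactly what the identity records.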
A tentative result in this direction was given in \cite{sos}. As for growth estimates, let us see how this argument leads to precise estimates on the behaviour of the maxima and minima of $S_n(\alpha)$. \begin{prop} \label{prop:massimi} Given $\alpha=[a_1,a_2,\dots] \in (0,1)$, let $r=r_{k_j}+R_1 q_1 + R_0$ for some $j\ge 0$ as in equation (\ref{eq:miaforma}). \noindent If $a_1$ is even then $$0 \le \max\limits_{0\le n\le r} S_n(\alpha) - \left( \max\limits_{0\le m\le j(r)} S_m(\alpha_2) +\frac{a_1}{2} \right) \le 1$$ $$\min\limits_{0\le n\le r} S_n(\alpha) = \min\limits_{0\le m\le j(r)} S_m(\alpha_2)$$ where $\alpha_2=[a_3,a_4,\dots]$ and $j(r)$ is given in (\ref{eq:jr}). If instead $a_1$ is odd then $$0\le \max\limits_{0\le n\le r} S_n(\alpha) - \left(1-\min\limits_{0\le m\le i(r)} S_m(\bar \alpha_1) + \frac{a_1-1}{2}\right) \le 1$$ $$\min\limits_{0\le n\le r} S_n(\alpha) = 1-\max\limits_{0\le m\le i(r)} S_m(\bar \alpha_1)$$ where $\bar \alpha_1 = [a_2-1,a_3,\dots]$ and $i(r)$ is given in (\ref{eq:rin-rotdisp}). \noindent Moreover, in the case $a_1$ even, the difference between maxima is equal to 1 only if $R_1=0$ and $R_0\ge \frac{a_1}{2}$. \end{prop} \noindent \emph{Proof.} Let us consider first the case $a_1$ even. The result is a direct consequence of equation (\ref{eq:rin-rotpari}) and the relation $0\le \tilde S(r) \le 1 + \frac{a_1}{2}$. \noindent For the case $a_1$ odd, the proof follows from equations (\ref{eq:rin-rotdisp}) and (\ref{eq:utile-dispari}). $\Box$ \vskip 0.5cm \noindent We point out that, since $j(r)$ and $i(r)$ are explicitly computable from $r$, one can iterate the renormalization argument in such a way that at the end of the process the maxima and minima of the walk $S_n(\alpha)$ will be explicitly computable linear combinations of the partial quotients of $\alpha$.
To this end we observe that in order to apply the argument to $S_{m}(\alpha_2)$ it is enough to notice that in equation (\ref{eq:kj-j}) the number $j$ is obtained as a linear combination of $a_3$ and $1$, which are nothing but $q_1^{(2)}$ and $q_0^{(2)}$ respectively (we are using the notations of Proposition \ref{prop:rotren}). Therefore $j$ can be expressed with respect to $\alpha_2$ in the form $r_{k_l}^{(2)} + R_1^{(2)} q_1^{(2)} + R_0^{(2)}$, and the iteration can proceed. We thus obtain the following. \begin{thm} \label{cor:maxpari} Let the partial quotients $(a_{2i+1})$ be even for all $i\ge 0$. If $r=r_{k_j}+R_1 q_1 + R_0$ for some $j\ge 0$ and $ord(r)=N$, then $$\frac{1}{2} \sum_{i=0}^{\frac{N-2}{2}} a_{2i+1} \le \max\limits_{0\le n\le r} S_n(\alpha) \le \frac N 2 + \frac{1}{2} \sum_{i=0}^{\frac{N-2}{2}} a_{2i+1}$$ $$\min\limits_{0\le n\le r} S_n(\alpha) =1$$ \end{thm} \noindent This theorem implies that the diffusion properties of $S_n(\alpha)$ depend only weakly on the partial quotients $(a_{2i})$. In particular, for all $\alpha$ with fixed partial quotients $(a_{2i+1})$, even for all $i\ge 0$, the sequence $S_n(\alpha)$ grows with the same rate, and what changes is the number of fluctuations. \noindent The situation is more cumbersome for numbers $\alpha$ with odd partial quotients in an odd position. This would require changing the kind of \virg{renormalization}, and partial quotients in even positions would also become important. However, we can make some computations for particular cases. \vskip 0.2cm \noindent {\sc Example.} Let $\alpha=[a,a,a,\dots]$ with $a$ odd. Then the first renormalization leads to $\bar \alpha_1 =[a-1,a,a,\dots]$. This fact implies that two different situations occur for $a=1$ and $a>1$. Hence the sequence $S_n(\alpha)$ with $\alpha$ the golden ratio $\frac{\sqrt{5}-1}{2}$ has peculiar properties. \noindent Let us first consider the case $a>1$.
From Proposition \ref{prop:massimi} it follows that $$\max\limits_{0\le n\le r} S_n(\alpha) \le 2 + \frac{a-1}{2} - \min\limits_{0\le m\le j(i(r))} S_m(\alpha) \le $$ $$\le 2+ (a-1) + \max\limits_{0\le n\le j(i(j(i(r))))} S_n(\alpha)$$ where $ord(j(i(j(i(r)))))=ord(r)-6$. Therefore, if $ord(r)=6k$, repeating the same argument (and the analogous one from below) we have $$\frac{(a-1)}{6}\ ord(r)\le \max\limits_{0\le n\le r} S_n(\alpha) \le \frac{(a+1)}{6}\ ord(r)$$ For example, if $a=3$ then $\alpha=\frac{\sqrt{13}-3}{2}$, and $$\frac{1}{3\ \log(\frac{\sqrt{13}+3}{2})}\le \limsup\limits_{r \to \infty}\ \frac{ \max\limits_{0\le n\le r} S_n(\frac{\sqrt{13}-3}{2})} {\log r} \le \frac{2}{3\ \log(\frac{\sqrt{13}+3}{2})}$$ Let us consider now $a=1$. From Proposition \ref{prop:massimi} it follows that $$\max\limits_{0\le n\le r} S_n\left( \frac{\sqrt{5}-1}{2}\right) \le 1+ \max\limits_{0\le m\le i(i(r))} S_m\left(\frac{\sqrt{5}-1}{2}\right)$$ and $ord(i(i(r)))=ord(r)-6$. Hence $$\limsup\limits_{r \to \infty}\ \frac{ \max\limits_{0\le n\le r} S_n\left(\frac{\sqrt{5}-1}{2}\right)} {\log r} \le \frac{1}{6\ \log(\frac{\sqrt{5}+1}{2})}.$$ \vskip 0.5cm \vskip 0.5cm \section{Generalisations to other orbits and applications} \label{sec:disc} We now use the formalism developed in the previous sections to analyse the behaviour of the following quantity \begin{equation} \label{def:altri-punti} d_n(\alpha,\beta) := \sum_{r=0}^{n-1} \ \chi_{[0,\beta)} (\set{r\alpha}) - \beta n \end{equation} for points $\beta \in (0,1)$. We call $d_n(\alpha,\beta)$ the \emph{relative discrepancy} of $\alpha$ with respect to $\beta$. The terminology is justified by the usual definition of the \emph{discrepancy} of the sequence $(n\alpha)$ as \begin{equation} \label{def:disc} D_n^*(\alpha):= \sup\limits_{\beta \in (0,1)} \left| \frac 1 n \ d_n(\alpha,\beta) \right| \end{equation} We first give an iterative argument to compute the relative discrepancies $d_n(\alpha,\beta)$.
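Both quantities can also be evaluated by brute force directly from (\ref{def:altri-punti}) and (\ref{def:disc}), which is convenient for testing the iterative formulas; a Python sketch (the replacement of the supremum by a finite grid, and all names, are choices of ours):

```python
import math

def d(n, alpha, beta):
    """Relative discrepancy d_n(alpha, beta), directly from the definition."""
    return sum(1 for r in range(n) if (r * alpha) % 1.0 < beta) - beta * n

def D_star(n, alpha, grid=2000):
    """Crude approximation of D_n^*(alpha): the sup over beta is replaced
    by a maximum over a finite grid of values of beta."""
    return max(abs(d(n, alpha, (j + 0.5) / grid)) / n for j in range(grid))

alpha = math.sqrt(2) - 1
disc = D_star(300, alpha)
```

Note that $d_n(\alpha,1)=0$ for every $n$, which gives a simple sanity check of the implementation.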
This iterative computation is useful for giving an explicit expression for the diffusion of the orbit of a general point $\beta\in (0,1)$ for a rotation $T_\alpha$, that is $$S_n(\alpha,\beta):= \sum_{r=0}^n s(\set{r\alpha + \beta})$$ (see (\ref{def:somme})). \noindent Given $\alpha=[a_1,a_2,\dots]$, we recall the notations $$\alpha_m=G^m(\alpha)=[a_{m+1},a_{m+2},\dots]$$ $$\bar \alpha_m=\frac{G^m(\alpha)}{1-G^m(\alpha)}=[a_{m+1}-1,a_{m+2},\dots]$$ where $G$ is the Gauss map, as well as the sequences $p_h$, $q_h$, $f_h$ and $r_{k_j}$ associated with $\alpha$. Let us fix $\beta \in (0,1)$ and recall its expansion $\beta=\sum_k b_k f_k$ as well as the numbers $\beta_m$ given in (\ref{eq:bk}). Let us define for all $m\ge 1$ \begin{equation} \label{eq:betabar} \beta^m := \frac{\beta_m}{f_m} \qquad \qquad \bar \beta^m:= \frac{\beta_m-f_m}{f_{m-1}-f_m} \end{equation} \begin{prop} \label{prop:discr1} For a given $n\in \mathbb N$, let us write $n-1=r_{k_j}+R_1 q_1 + R_0$ as in equation (\ref{eq:miaforma}) with $ord(n)=N$. Let $j(n-1)$ and $i(n-1)$ be defined as in equations (\ref{eq:jr}) and (\ref{eq:ir}) respectively, and let $\tilde S(n-1)$ be defined as in equation (\ref{eq:sr}). If we define $$S(n,\alpha,\beta)=\tilde S(n-1)-\beta (R_0+1 - \mbox{\rm sgn}(R_1))$$ and write $r_{k_j}=\sum_{h=2}^N c_h q_h$, then $$d_n(\alpha,\beta) = S(n,\alpha,\beta) + \left\{ \begin{array}{l} -\frac{(b_0-a_1 \beta)}{f_1} \ C(\alpha,n) + d_{j(n-1)+1}(\alpha_2,\beta^1) \\[0.5cm] \frac{(b_0-a_1 \beta+1-\beta)}{f_0-f_1} \ C(\alpha,n) + \bar \beta^1 + d_{i(n-1)+1}^c(\bar \alpha_1,\bar \beta^1) \end{array} \right.$$ where the first formulation is valid if $b_1=0$ and the second otherwise, the constant $C(\alpha,n)$ does not depend on $\beta$ and is given by $$C(\alpha,n):=\alpha\ \mbox{\rm sgn}(R_1)- R_1 f_1 + \sum_{h=2}^N \ (-1)^{h} c_h f_h $$ and $d^c(\cdot)$ means that in the definition of $d(\cdot)$ we use the indicator function of the interval $(1-\beta^{1},1]$.
\end{prop} \noindent \emph{Proof.} The main idea is to partition the sequence $\set{r\alpha}$ according to the sequence $(r_k)$. Indeed, as in the treatment of the diffusion, we have that for all $k\ge 0$ $$\sum_{r=r_k}^{r_{k+1}-1} \ \chi_{[0,\beta)}(\set{r\alpha}) = \left\{ \begin{array}{ll} b_0+1 & \mbox{if }\ \{r_k \alpha\} < \beta_1 \\[0.2cm] b_0 & \mbox{otherwise} \end{array} \right.$$ By Lemma \ref{lemmino}, if $b_1=0$, that is $\beta_1<f_1$, the first case is possible only if $k=k_j$ for some $j\ge 0$. In this case we have $$\sum_{r=0}^{n-1} \ \chi_{[0,\beta)}(\set{r\alpha})=\tilde S(n-1)+ b_0(k_j+R_1)+\sum_{j=0}^{j(n-1)} \ \chi_{[0,\beta^1)}(\set{j\alpha_2})$$ where the last term accounts for the relation $$\sum_{r=r_{k_j}}^{r_{k_j}+q_1} \ \chi_{[0,\beta)}(\set{r\alpha})= b_0 + \chi_{[0,\beta_1)}(\set{r_{k_j}\alpha})$$ and uses the isomorphism between the first return map to the interval $(0,f_1)$ and the rotation of angle $\alpha_2$ on the unit circle. Moreover, let us write $$\beta n = \beta (R_0+1 - \mbox{\rm sgn}(R_1)) + \beta (j+\mbox{\rm sgn}(R_1)) + \beta (r_{k_j}+R_1q_1-j)$$ where, we recall, $j(n-1)=j+\mbox{\rm sgn}(R_1)-1$ (cf. (\ref{eq:jr})) and $(r_{k_j}+R_1q_1-j)=a_1(k_j+R_1)$ (cf. (\ref{eq:rkj-kj}) and (\ref{eq:kj-j})). The claim now follows by evaluating $$k_j+R_1 - \frac{\alpha}{f_1} (j+\mbox{\rm sgn}(R_1))=-\frac{C(\alpha,n)}{f_1}.$$ A similar argument can be applied if $b_1>0$, using the fact that the first return map to the interval $(f_1, \alpha)$ is isomorphic to the inverse of the rotation on the unit interval with angle $\bar \alpha_1$. This leads to the term $d^c(\cdot)$. $\Box$ \vskip 0.5cm \noindent The previous result shows that to evaluate $d_n(\alpha,\beta)$ we have to repeat the same argument for $d_{j(n-1)+1}$ and $d_{i(n-1)+1}$, respectively.
To this end we point out that the same argument as before yields for general $\alpha$ and $\beta$ \begin{equation} \label{eq:disc-inv} d_n^c(\alpha,\beta)= S^c(n,\alpha,\beta) + \left\{ \begin{array}{l} -\frac{(b_0-a_1 \beta)}{f_1} \ C(\alpha,n) + d_{j(n-1)+1}^c(\alpha_2,\beta^1) \\[0.5cm] \frac{(b_0-a_1 \beta+1-\beta)}{f_0-f_1} \ C(\alpha,n) + \bar \beta^1 + d_{i(n-1)+1}(\bar \alpha_1,\bar \beta^1) \end{array} \right. \end{equation} where $S^c(n,\alpha,\beta)$ is obtained by using the indicator function of the interval $(1-\beta,1]$ in $\tilde S(n-1)$. \noindent Continuing in this way, we can repeat the same argument until the \virg{renormalized} rotation we obtain does not have enough iterates to be renormalized again. When this happens depends on the order of $n$ and on $\beta$, which together determine the \virg{renormalization path} that has to be followed. Let us see how the algorithm to choose the new angle of rotation and the new interval works. We have already seen that the first step is $$(\alpha,\beta) \longrightarrow \left\{ \begin{array}{ll} (\alpha_2,\beta^1) & \mbox{ if } b_1=0 \\[0.2cm] (\bar \alpha_1, \bar \beta^1)^c & \mbox{ if } b_1>0 \end{array} \right.$$ and that two subsequent $(\cdot)^c$ cancel out (since they are generated by two inversions of the rotations). By straightforward computation, one can readily verify that the general scheme is as follows: the starting point is always of the form $(\alpha_m,\beta^{m-1})$ or $(\bar \alpha_m,\bar \beta^m)$, and for all $m\ge 2$ we have \begin{equation} \label{eq:schema} (\alpha_m,\beta^{m-1}) \longrightarrow \left\{ \begin{array}{ll} (\alpha_{m+2},\beta^{m+1}), & \mbox{ if } b_{m+1}=0, \\[0.2cm] (\bar \alpha_{m+1}, \bar \beta^{m+1})^c, & \mbox{ if } b_{m+1}>0, \end{array} \right. \end{equation} the same holding true for $(\bar \alpha_m,\bar \beta^m)$.
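The branch taken at each step of (\ref{eq:schema}) is governed by the digits $(b_k)$ of $\beta$, which are easy to generate numerically by the greedy algorithm of Proposition \ref{prop:espansione}; a sketch, assuming the partial quotients of $\alpha$ are given as a finite list (the helper name and the test values $\beta=0.377$, $\alpha=[2,2,2,\dots]$ are ours):

```python
def beta_digits(beta, a):
    """Greedy digits b_k of the expansion beta = sum_k b_k f_k, using
    f_{-1} = 1, f_0 = alpha and f_{k+1} = f_{k-1} - a_{k+1} f_k.
    Returns the digits and the residual beta - sum_k b_k f_k; the
    vanishing pattern of the digits selects the branches of the scheme."""
    alpha = 0.0
    for ah in reversed(a):          # alpha from the truncated fraction [a_1, a_2, ...]
        alpha = 1.0 / (ah + alpha)
    f_prev, f_cur = 1.0, alpha      # f_{-1}, f_0
    digits, rest = [], beta
    for k in range(len(a)):
        b = int(rest // f_cur)      # b_k = floor(beta_k / f_k)
        digits.append(b)
        rest -= b * f_cur
        f_prev, f_cur = f_cur, f_prev - a[k] * f_cur
    return digits, rest

a = [2] * 20                        # alpha = [2,2,2,...] = sqrt(2) - 1
digits, rest = beta_digits(0.377, a)
```

The residual after $20$ digits is below $f_{19}$, and the digit constraints of Proposition \ref{prop:espansione} hold automatically for the greedy choice.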
\noindent To conclude, we remark that the constants $C(\alpha_m,n)$ and $C(\bar \alpha_m, n)$ are the same as in Proposition \ref{prop:discr1}, with the values $f_n^{(m)}$ and $\bar f_n^{(m)}$ computed using equations (\ref{decr}) and (\ref{eq:effebar}), respectively. Furthermore, the coefficients multiplying $C(\alpha_m,n)$ and $C(\bar \alpha_m,n)$ at each step satisfy the following \begin{lem} \label{lem:disc-limiti} For all $\alpha$ and $\beta$ it holds: \begin{itemize} \item[(i)] if $b_{m+1}=0$ then $-a_1^{(m)} \le \frac{(b_0^{m}-a_1^{(m)} \beta^{m-1})}{f_1^{(m)}} \le a_1^{(m)}$; \item[(ii)] if $b_{m+1}>0$ then $-a_1^{(m)} \le \frac{(b_0^{m}-a_1^{(m)} \beta^{m-1}+ 1- \beta^{m-1})}{{f_0^{(m)}-f_1^{(m)}}} \le a_1^{(m)}$; \item[(iii)] for all $m\ge 0$ and all $n$, it holds $0<C(\alpha_m,n)< \alpha_m$. \end{itemize} The same relations hold if we consider the corresponding quantities for $\bar \alpha_m$ and $\bar \beta^{m}$. \end{lem} \noindent Using the above algorithm we are able to obtain the actual values of $d_n(\alpha,\beta)$. To obtain growth estimates we write $$d_n(\alpha,\beta)={\bf \mathcal{C}}(n,\alpha,\beta)+{\bf \mathcal{S}}(n,\alpha,\beta)+ {\bf \mathcal{B}}(n,\alpha,\beta)$$ and give estimates for these three terms separately. These terms come from the expression of $d_n(\alpha,\beta)$ given in Proposition \ref{prop:discr1}. The first term arises by summing the sequence of constants $C(\alpha_m,n(m))$. The second comes from the summation of the terms $S(n,\alpha,\beta)$. The last term arises by adding the different $\bar \beta^m$ that we encounter when $b_{m}>0$.
In the appendix we give the proof of the following estimates. \begin{prop} \label{prop:disc-stime} Let $n$ be an integer such that $ord(n-1)=N$; then for all $\alpha=[a_1,a_2,\dots]$ and all $\beta$, the following hold: \begin{itemize} \item[(i)] $|{\bf \mathcal{C}}(n,\alpha,\beta)|< N$; \item[(ii)] $|{\bf \mathcal{B}}(n,\alpha,\beta)| < N$; \item[(iii)] $|{\bf \mathcal{S}}(n,\alpha,\beta)| \le \sum_{m=1}^{N+1}\ \left( 1+\frac{a_m}{4} \right)$. \end{itemize} \end{prop} \noindent We now compute the sum $S_n(\alpha,\beta)$ for a given $\alpha$. Let \begin{equation} S_n(\alpha,\beta) = S_n(\alpha) + R_n(\alpha,\beta) \end{equation} where the term $R_n(\alpha,\beta)$ accounts for the times that $s(\set{r\alpha}) \not= s(\set{r\alpha +\beta})$. We first remark that for all $\beta \in [0,\frac 1 2)$ it holds that $$S_n(\alpha, \beta +\frac 1 2) = - S_n(\alpha,\beta)$$ hence it suffices to study $R_n(\alpha,\beta)$ in the case $\beta < \frac 1 2$. \begin{thm} \label{prop:gen-diff} For all $\beta \in [0,\frac 1 2)$, $$S_n(\alpha,\beta) = S_n(\alpha) + R_n(\alpha,\beta)$$ where the term $R_n(\alpha,\beta)$ can be written as $$R_{n-1}(\alpha,\beta) = 2 \left[ d_n(\alpha, \frac 1 2 -\beta) - d_n(\alpha,1-\beta) - d_n(\alpha,\frac 1 2) \right]$$ \end{thm} \noindent {\bf Proof.} If we denote $$P_n := \set{0\le r \le n-1 : \frac 1 2 < \set{r\alpha} < 1 ,\ \ 1 < \beta + \set{r\alpha} < \frac 3 2 }$$ $$M_n := \set{0\le r \le n-1 : \set{r\alpha} < \frac 1 2 ,\ \ \frac 1 2 < \beta + \set{r\alpha} < 1 }$$ then $$R_{n-1}(\alpha,\beta) = 2(card(P_n) - card(M_n))$$ Introducing the notations $$A_a^b:=\set{0\le r \le n-1 : a < \set{r\alpha} < b}$$ $$B_a^b:=\set{0\le r \le n-1 : a < \beta + \set{r\alpha} < b}$$ we have $$P_n = B_1^{3/2} \cap A_{1/2}^1$$ $$M_n = B_{1/2}^1 \cap A_0^{1/2}= (B_{1/2}^\infty \cap A_0^{1/2}) \setminus (B_1^\infty \cap A_0^{1/2})$$ Now, since $0\le \beta < \frac 1 2$, it is easy to obtain $$B_1^{3/2} \cap A^1_{1/2} = B_1^{3/2}$$ $$B_1^\infty \cap A_0^{1/2}= \emptyset$$
hence $$P_n =B_1^{3/2}=A_{(1-\beta)}^1 = A_0^1 \setminus A_0^{(1-\beta)}$$ $$M_n = B_{1/2}^\infty \cap A_0^{1/2} = A_0^{1/2} \setminus A_0^{(1/2 -\beta)}$$ The claim now follows by writing $$card(A_a^b)= \sum_{r=0}^{n-1} \chi_{_{(a,b)}}(\set{r\alpha})$$ and using the definition of $d_n(\alpha,\beta)$ in (\ref{def:altri-punti}). $\Box$ \vskip 0.5cm \noindent Putting together Proposition \ref{prop:disc-stime} and Theorem \ref{prop:gen-diff}, one gets immediately \begin{cor} \label{cor:l-inf} For all irrational $\alpha=[a_1,a_2,\dots]$ it holds that $$\sup_\beta |S_n(\alpha,\beta)| \le |S_n(\alpha)| + 6 \sum_{m=1}^N\ \left(3+\frac{a_m}{4} \right)$$ where $ord(n)=N$. \end{cor} \subsection{Applications} \label{subsec:disc} We now give some applications of the estimates of Proposition \ref{prop:disc-stime}. We first consider the speed of convergence in the Birkhoff Ergodic Theorem. As stated in the Introduction, the Lebesgue measure is invariant and ergodic for the irrational translations on the unit circle. Hence for the rotation $T_\alpha(\beta):= \set{\alpha+\beta}$, the Birkhoff Ergodic Theorem implies that \begin{equation} \label{def:birkhoff} \lim\limits_{n\to \infty} \frac 1 n \ \sum_{k=0}^{n-1}\ \chi_{I} (\set{k\alpha +\beta}) = |I|:= \int_0^1\ \chi_{I} (x) \ dx \end{equation} for any interval $I\subset [0,1]$. We prove that \begin{thm} \label{teo:erg} For any interval $I\subset [0,1]$ it holds that \begin{equation} \label{eq:erg} \sum_{k=0}^{n-1}\ \chi_{I} (\set{k\alpha +\beta}) - n|I| = d_n(\alpha,\set{\delta-\beta})-d_n(\alpha,\set{\gamma-\beta}) \end{equation} from which it follows that \begin{equation} \label{eq:erg2} \left|\sum_{k=0}^{n-1}\ \chi_{I} (\set{k\alpha +\beta}) - n|I| \right| \le 2\ \sum_{m=1}^N\ \left(3+\frac{a_m}{4} \right) \end{equation} where $\alpha=[a_1,a_2,\dots]$ and $ord(n-1)=N$. \end{thm} \noindent \emph{Proof.} Let $I=[\gamma,\delta]$.
Then we can write $$\sum_{k=0}^{n-1}\ \chi_{I} (\set{k\alpha +\beta}) = \sum_{k=0}^{n-1}\ \chi_{[\gamma-\beta,\delta-\beta]} (\set{k\alpha})$$ By using equation (\ref{def:altri-punti}), we obtain (\ref{eq:erg}). The estimate (\ref{eq:erg2}) follows by Proposition \ref{prop:disc-stime}. $\Box$ \vskip 0.5cm \noindent One can easily generalize (\ref{eq:erg}) to functions of bounded variation, for which the analogue of estimate (\ref{eq:erg2}) is the Denjoy--Koksma inequality (see \cite{kn}). Indeed, estimate (\ref{eq:erg2}) can be considered a particular case of the Denjoy--Koksma inequality. \vskip 0.5cm \noindent As stated at the beginning of Section \ref{sec:disc}, the term $d_n(\alpha,\beta)$ is related to the \emph{discrepancy} $D_n^*(\alpha)$ of the sequence $(n\alpha)$ by equation (\ref{def:disc}). Some classical results are known for the discrepancy of a general sequence, and in particular for the sequence $(n\alpha)$ according to the arithmetical properties of $\alpha$ (see \cite{kn}). For more recent sharp results we refer to \cite{sch} and \cite{pinner}. We briefly show below how to obtain results similar to those in \cite{sch} and \cite{pinner} directly by our approach. Proofs can be found in the appendix. \noindent From Proposition \ref{prop:disc-stime} one gets immediately \begin{equation} \label{eq:come-pinner} nD^*_n(\alpha)=\sup_\beta |d_n(\alpha,\beta)| \le 1+ 3N +\frac 1 4 \ \sum_{m=1}^{N+1} \ a_m \end{equation} where $N=ord(n-1)$. This bound is of the same order as the results in \cite{pinner}. We will show that this is the best possible estimate for the general case. But first we shall obtain some lower bounds.
\begin{prop} \label{prop:discr2} For all irrational $\alpha$ there exist a number $\beta$ and two infinite subsequences $(n_k)$ and $(n_h)$ such that $$\sup\limits_\beta |{\bf \mathcal{S}}(n_k,\alpha,\beta)| \ge \sum_{\stackrel{m=1}{a_{2m-1}\ge 3}}^{\frac{N_k}{2}+1} \frac{a_{2m-1}-2}{4}$$ where $N_k=ord(n_k)$ and $$\sup\limits_\beta |{\bf \mathcal{S}}(n_h,\alpha,\beta)| \ge \sum_{\stackrel{m=2}{a_{2m}\ge 3}}^{\frac{N_h+1}{2}} \frac{a_{2m}-2}{4}$$ where $N_h=ord(n_h)$. \end{prop} \begin{thm} \label{teo:disc} Let $\alpha$ have unbounded partial quotients $(a_k)$ and denote $$\ell_e=\liminf\limits_{k\to \infty} \frac{\sum^k_{m=1\ even}\ a_m}{\sum_{m=1}^k\ a_m}$$ $$\ell_o=\liminf\limits_{k\to \infty} \frac{\sum^k_{m=1\ odd}\ a_m}{\sum_{m=1}^k\ a_m}$$ If $(\ell_e^2 + \ell_o^2) >0$ and \begin{equation} \label{eq:serve} \limsup\limits_{k\to \infty}\ \frac{k}{\sum_{m=1}^k\ a_m}=0 \end{equation} then \begin{equation} \label{eq:risultato} \frac{1}{4}\ \max \set{\ell_e, \ell_o}\ \le\ \limsup\limits_{n\to \infty}\ \frac{nD_n^*(\alpha)}{\stackrel{ord(n-1)+1}{\sum\limits_{m=1}} a_m} \le \frac 1 4 \end{equation} \end{thm} \vskip 0.5cm \noindent {\sc Example.} The conditions of Theorem \ref{teo:disc} are satisfied by $$\alpha=e-2 =[1,2,1,1,4,1,1,6,1, 1,8,1,1,10,1,1,12,\dots]$$ indeed $\ell_e=\ell_o=\frac 1 2$ and $\sum_{m=1}^k a_m \sim \frac 1 9 k^2$. \noindent In fact estimate (\ref{eq:risultato}) is the best possible in general, as can be shown by choosing for example $$\alpha:=[1,2,1,3,1,4,1,5,\dots,1,n,\dots]$$ for which $\ell_e=1$, $\ell_o=0$ and $\sum_{m=1}^k a_m \sim \frac 1 8 k^2$. $\lozenge$ \vskip 0.5cm \noindent We finish with few more remarks. First of all, we remark that by Proposition \ref{prop:espansione} one can easily prove the well known result that for a given $\alpha \in (0,1)$, if $\beta \in \mathbb Z +\alpha \mathbb Z$ then $d_n(\alpha,\beta)$ is bounded for all $n\ge 0$. 
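The expansion of $e-2$ used in the Example above can be verified numerically. The sketch below (our helper names; exactness is obtained by replacing $e$ with the rational truncation $\sum_{k\le 40}1/k!$, whose error is far smaller than the precision needed for the first terms) recovers the stated pattern of partial quotients:

```python
from fractions import Fraction
from math import factorial

def cf(x, nterms):
    # Partial quotients [a_1, a_2, ...] of x in (0, 1), by the
    # classical continued fraction algorithm (exact on Fractions).
    a = []
    for _ in range(nterms):
        x = 1 / x
        q = int(x)
        a.append(q)
        x -= q
        if x == 0:
            break
    return a

# e - 2 as an exact rational: e = sum 1/k!, truncated at k = 40.
e_minus_2 = sum(Fraction(1, factorial(k)) for k in range(41)) - 2
print(cf(e_minus_2, 17))
# -> [1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1, 12]
```

The partial sums of these quotients indeed grow quadratically, in agreement with the asymptotics $\sum_{m=1}^k a_m \sim \frac 1 9 k^2$ quoted in the Example.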
\noindent Finally, we dwell upon relations between the discrepancy and the sums $S_n(\alpha)$. To start with, let us notice that by definition $S_{n-1}(\alpha) = 2 d_n(\alpha,\frac 1 2)$, so that for any given function $F(n)\nearrow \infty$ we have \begin{equation} \label{eq:disc-somme} \limsup\limits_{n\to \infty} \frac{nD_n^*(\alpha)}{F(n)} \ge \limsup\limits_{n\to \infty} \frac{|S_{n-1}(\alpha)|}{2F(n)} \end{equation} The analogous relation for the limit inferior is not interesting since for all $\alpha$ it holds $|S_{q_k}(\alpha)| \le 2$ for all denominators $q_k$ (see \cite{isola}). \noindent However, in some cases one could get an equality in (\ref{eq:disc-somme}). Let us consider the case $\alpha=\sqrt{2}-1=[2,2,2,\dots]$. We find from Theorem \ref{cor:maxpari} $$\limsup\limits_{n\to \infty} \frac{nD_n^*(\sqrt{2}-1)}{\log n} \ge \limsup\limits_{n\to \infty} \frac{ord(n-1)}{4 \log n} \ge \frac{1}{4\ \log(\sqrt{2}+1)}$$ This relation is in fact an equality, as proved in \cite{ds}, and therefore $$\limsup\limits_{n\to \infty} \frac{S_n(\sqrt{2}-1)}{2 \log n}= \limsup\limits_{n\to \infty} \frac{nD_n^*(\sqrt{2}-1)}{\log n}$$ The same is shown in \cite{boris2} for $\alpha=\frac{\sqrt{3}-1}{2}=[2,1,2,1,\dots]$. Moreover, the author exhibits some other couples $(\alpha,\beta)$ for which a similar relation holds. He then conjectures that a similar relation holds for all couples $(\alpha,\beta)$ with $\alpha$ a quadratic irrational and $\beta \in \mathbb Q(\alpha)$ but $\beta \not\in \mathbb Z + \alpha \mathbb Z$. We show here that this is not the case.
Then $$\nu^*(\alpha) > \limsup\limits_{n\to \infty} \frac{|d_n(\alpha,\frac 1 2)|}{\log n}$$ \end{cor} \noindent \emph{Proof.} Applying the result of \cite{sch} mentioned above it follows that $$\limsup\limits_{n\to \infty} \frac{nD_n^*(\alpha)}{\log n}= \frac 1 4\ \max \set{\limsup\limits_{N\to \infty}\ \frac{kN}{\log q_N},\ \limsup\limits_{N\to \infty}\ \frac{N}{\log q_N}}$$ However from Theorem \ref{cor:maxpari} and equation (\ref{eq:disc-somme}), it follows that if $N=ord(n)$ then $$\limsup\limits_{n\to \infty}\ \frac{|d_n(\alpha,\frac 1 2)|}{\log n}\le \limsup\limits_{n\to \infty}\ \frac{N}{2\ \log n} \le \limsup\limits_{n\to \infty}\ \frac{N}{2\ \log q_N}$$ Hence, if $k>2$, the thesis is proved. $\Box$ \section{Appendix} \label{sec:app} \noindent \emph{Proof of Proposition \ref{prop:disc-stime}.} Part (i) is an easy consequence of Lemma \ref{lem:disc-limiti}, which implies that for all $\beta$ $$|{\bf \mathcal{C}}(n,\alpha,\beta)| \le \sum_{m=0}^{N-1} \ a_1^{(m)} \alpha_m$$ hence the thesis, since $a_1^{(m)} \alpha_m <1$. The same holds true if any of the terms of the sum is of the type $\bar a_1^{(m)} \bar \alpha_m$. \noindent The proof of part (ii) is even more immediate, since $${\bf \mathcal{B}}(n,\alpha,\beta) \le \sum_{m=1}^N \ \bar \beta^m$$ with $\beta^m <1$ for all $m$. Note that having an equality in the previous relation would mean that each step of the renormalization procedure is done using $\bar \alpha_m$ and $\bar \beta^m$. \noindent To study the behaviour of ${\bf \mathcal{S}}(n,\alpha,\beta)$ let us start writing $n-1=r_{k_j}+R_1q_1+R_0$ as usual. It is then easy to realise that $S(n,\alpha,\beta)$ can assume only a finite number of values. In particular, for each $\beta$, a short calculation shows that $$(b_0+1)(1-\beta) \ge S(n,\alpha,\beta) \ge \left\{ \begin{array}{ll} b_0-a_1 \beta-\beta, & \mbox{ if }\ b_1=0, \\[0.2cm] b_0-a_1 \beta, & \mbox{ if }\ b_1>0, \end{array} \right.$$ for all $n \in [r_{k_j}, r_{k_{j+1}}-1]$. 
A similar result holds for $S^c(n,\alpha,\beta)$, namely $$\left\{ \begin{array}{ll} b_0-a_1 \beta+1-\beta \ge S^c(n,\alpha,\beta) \ge -\beta (a_1+1-b_0), & \mbox{ if }\ b_1=0, \\[0.2cm] b_0-a_1 \beta+1 \ge S^c(n,\alpha,\beta) \ge -\beta (a_1-b_0), & \mbox{ if }\ b_1>0. \end{array} \right.$$ \noindent Let us first examine $|S(n,\alpha,\beta)|$. We have that if $b_1=0$ then $(b_0-a_1\beta -\beta)\ge -1$, whereas if $b_1>0$ then $(b_0-a_1\beta)\ge -a_1 \alpha$. Moreover, maximising $(b_0+1)(1-\beta)$ on $(0,1)$ yields for all $\alpha$ and $\beta$ $$|S(n,\alpha,\beta)| \le \left\{ \begin{array}{ll} \left( \frac{a_1}{2}+1 \right) \left( 1- \frac{a_1}{2}\ \alpha \right)\le 1 + \frac{a_1}{4}, & \mbox{ if $a_1$ is even}, \\[0.2cm] \frac{a_1+1}{2}\ \left( 1- \frac{a_1-1}{2}\ \alpha \right) \le 1 + \frac{a_1-1}{4}, & \mbox{ if $a_1$ is odd.} \end{array} \right.$$ For $|S^c(n,\alpha,\beta)|$, if $b_1=0$ then $(b_0-a_1 \beta+1-\beta) \le 1$, if $b_1>0$ then $(b_0-a_1 \beta+1) \le a_1 \alpha$. To maximise $|S^c(n,\alpha,\beta)|$ we have to consider separately the cases $b_1=0$ and $b_1>0$. \noindent If $b_1=0$, then maximising $\beta(a_1+1-b_0)$, for all $\alpha$ and $\beta$ we get $$\beta(a_1+1-b_0) \le \left\{ \begin{array}{ll} \left( \frac{a_1}{2}+1 \right)^2 \ \alpha \le 1+ \frac{a_1+a_2}{4}, & \mbox{ if $a_1$ is even}, \\[0.2cm] \frac{(a_1+1)(a_1+3)}{4}\ \alpha \le 1+ \frac{a_1+a_2}{4}, & \mbox{ if $a_1$ is odd.} \end{array} \right.$$ But if $b_1=0$, in the next step of the renormalization, the partial quotient $a_2$ will not appear (see the scheme (\ref{eq:schema})). \noindent If $b_1>0$, we have instead to maximise $\beta(a_1-b_0)$. For all $\alpha$ and $\beta$ it holds $$\beta(a_1-b_0) \le \left\{ \begin{array}{ll} \left( \frac{a_1}{2}+1 \right) \frac{a_1}{2}\ \alpha \le 1+ \frac{a_1}{4}, & \mbox{ if $a_1$ is even}, \\[0.2cm] \left( \frac{a_1+1}{2}\right)^2 \alpha \le 1+ \frac{a_1}{4}, & \mbox{ if $a_1$ is odd}. 
\end{array} \right.$$ Applying these inequalities to each renormalization step yields $${\bf \mathcal{S}}(n,\alpha,\beta) \le \sum_{m=0}^{N} \ \left( 1+\frac{a_1^{(m)}}{4} \right)$$ where we are using $\bar a_1^{(m)}=a_1^{(m)}-1< a_1^{(m)}$. The thesis of part (iii) follows using $a_1^{(m)}=a_{m+1}$, where $(a_k)$ are the partial quotients of $\alpha$. $\Box$ \vskip 0.5cm \noindent \emph{Proof of Proposition \ref{prop:discr2}.} We need to show the existence of a $\beta$ such that $d_n(\alpha,\beta)$ has the form we look for. Let $\beta$ satisfy $b_{2k+1}=0$ for all $k\ge 0$, then following the scheme (\ref{eq:schema}) one works with couples which are all of the form $(\alpha_{2m},\beta^{2m-1})$, for $m\ge 0$ (we denote $\beta\equiv \beta^{-1}$), and there are no inversions. Hence one has ${\bf \mathcal{S}}(n,\alpha,\beta) = \sum_{m=0}^{\frac{ord(n-1)}{2}+1} S(n(m),\alpha_{2m},\beta^{2m-1})$, where $n(m)$ denotes the number of iterates at each step, that is $n(0)=n$, $n(1)=j(n-1)+1$, and in general $n(m)=j(n(m-1)-1)+1$. We now remark that for all integers of the form $r_{k_j}+R_1 q_1$ there exists $\bar R_0$ such that if $n-1=r_{k_j}+R_1 q_1 +\bar R_0$ then $S(n(m),\alpha_{2m},\beta^{2m-1})=b_0^{(2m)}(1-\beta^{2m-1})$ for all $m$. Since this choice depends only on $R_0$, and changing $R_0$ does not change $j(n-1)$ (see equation (\ref{eq:jr})), we can choose a sequence $n_k$ such that at each renormalization step the term $S$ takes the chosen value. Moreover, we can show that if $a_1^{(2m)}\ge 3$ then $b_0^{(2m)}(1-\beta^{2m-1})$ is larger than $\frac{a_1^{(2m)}-2}{4}$ for all $m\ge 0$ if $b_0^{(2m)}=\frac{a_1^{(2m)}}{2}$ or $b_0^{(2m)}=\frac{a_1^{(2m)}-1}{2}$, according to whether $a_1^{(2m)}$ is even or odd, respectively. If $a_1^{(2m)}< 3$, we can choose $\bar R_0$ such that $S\ge 0$. The first part follows. \noindent For the second part of the proof, we modify the argument slightly, choosing $\beta$ such that $b_1>0$ and $b_{2k}=0$ for all $k\ge 1$.
Then by the scheme (\ref{eq:schema}), we use all the couples $(\alpha_{2m+1},\beta^{2m})$ for all $m\ge 1$, and they are all inverted, since there is only one inversion at the beginning. Hence ${\bf \mathcal{S}}(n,\alpha,\beta) = S(n,\alpha,\beta)+ \sum_{m=1}^{\frac{ord(n-1)}{2}+1} S^c(n(m),\alpha_{2m+1},\beta^{2m})$. As above, we can choose a subsequence $n_h$ such that for all $m\ge 1$ it holds $$S^c(n(m),\alpha_{2m+1},\beta^{2m}) = -\beta^{2m}(a_1^{(2m+1)}-b_0^{(2m+1)}-1)\le -\frac{a_1^{(2m+1)}-2}{4}$$ by choosing $b_0^{(2m+1)}=\frac{a_1^{(2m+1)}}{2}$ or $b_0^{(2m+1)}=\frac{a_1^{(2m+1)}-1}{2}$, according to whether $a_1^{(2m+1)}$ is even or odd respectively, if $a_1^{(2m)}\ge 3$. Otherwise, just choose $S^c\le 0$. $\Box$ \vskip 0.5cm \noindent \emph{Proof of Theorem \ref{teo:disc}.} The upper bound follows straightforwardly from (\ref{eq:come-pinner}). Moreover, from Propositions \ref{prop:disc-stime} and \ref{prop:discr2}, we obtain that if $(\ell_e^2 + \ell_o^2) >0$ then there exists a subsequence $(n_h)$ such that, denoting $N_h=ord(n_h)$, for all $\epsilon >0$ there exists $\bar h$ such that for all $h>\bar h$ $$n_h D^*_{n_h} \ge \sup_\beta |{\bf \mathcal{S}}(n_h,\alpha,\beta)|-2N_h \ge \frac{\max \set{\ell_e,\ell_o}-\epsilon}{4} \ \sum_{m=1}^{N_h+1} \ a_m - 3 N_h$$ where we have used the relation $$\sum_{\stackrel{m=1}{a_{m}\ge 3}}^k\ a_m\ge \sum_{m=1}^k \ a_m - 2k$$ for all $k\ge 1$, which holds also when the sum is restricted to even or odd indexes. From this it follows $$\limsup\limits_{n\to \infty}\ \frac{nD_n^*(\alpha)}{\stackrel{ord(n-1)+1}{\sum\limits_{m=1}} a_m} \ge \frac{\max \set{\ell_e,\ell_o}-\epsilon}{4}$$ for all $\epsilon >0$. $\Box$
https://arxiv.org/abs/0708.0048
A renormalization approach to irrational rotations
We introduce a renormalization procedure which allows us to study in a unified and concise way different properties of the irrational rotations on the unit circle $\beta \mapsto \set{\alpha+\beta}$, $\alpha \in \R\setminus \Q$. In particular we obtain sharp results for the diffusion of the walk on $\Z$ generated by the location of points of the sequence $\{n\alpha +\beta\}$ on a binary partition of the unit interval. Finally we give some applications of our method.
https://arxiv.org/abs/1002.0530
Integrability of Lie systems through Riccati equations
Integrability conditions for Lie systems are related to reduction or transformation processes. We here analyse a geometric method to construct integrability conditions for Riccati equations following these approaches. This approach provides us with a unified geometrical viewpoint that allows us to analyse some previous works on the topic and explain new properties. Moreover, this new approach can be straightforwardly generalised to describe integrability conditions for any Lie system. Finally, we show the usefulness of our treatment in order to study the problem of the linearisability of Riccati equations.
\section{Introduction} \indent The Riccati equation \begin{equation}\label{ricceq} \frac{dy}{dt}=b_0(t)+b_1(t)y+b_2(t)y^2, \end{equation} is the simplest non-linear differential equation \cite{CRL07,CarRamGra} and it appears in many different fields of Mathematics and Physics \cite{CL09,CLR10,CMN,Ch09,MH05,RM08,SHC07,DS08,PW}. It is essentially the only first-order ordinary differential equation on the real line admitting a non-linear superposition principle \cite{LS,PW} and, in spite of its apparent simplicity, its general solution cannot be described by means of quadratures except in some very particular cases \cite{AS64,CarRam,Ib08,Ib09II,Kamke,Ko06,Zh98,Zh99,Mu60,Na99,Na00,Pr80,Ra61,Ra62,RU68,RDM05,Stre}. In this paper we review the geometric approach to Riccati equations according to the results of the works \cite{CRL07,CarRam} with the aim of proving that integrability conditions of Riccati equations can be understood in a very general way from the point of view of the theory of Lie systems \cite{CR02,Gu93,Ib96,Ib99,Ib89,LS, Ve93,PW}. Furthermore, we recover various known results as particular cases of our approach and the method here derived can be applied to any other Lie system, e.g. \cite{CL09,CLR10}. Each Lie system is associated with a Lie algebra of vector fields, the so-called Vessiot--Guldberg Lie algebra \cite{CLR10,GGL08,Gu93,Ib96,Ve93}. This Lie algebra can be used to classify those Lie systems that can be integrated by quadratures \cite{CarRamGra}. For instance, it is a known fact that Lie systems related to solvable Vessiot--Guldberg Lie algebras, e.g. affine homogeneous systems or linear homogeneous systems, can be integrated by quadratures \cite{CarRamGra,Ib09II,Ib09}. Nevertheless, the general solution of Lie systems related to non-solvable Lie algebras, e.g.
Riccati equations, cannot be completely determined and frequently relies on the knowledge of certain special functions \cite{Ib09II}, the solution of other equations \cite{Na99,Na00,DS08}, etc. The method developed here allows us to determine integrable cases of Lie systems related to non-solvable Vessiot--Guldberg Lie algebras. Such a procedure is detailed for Riccati equations, which are associated with a non-solvable Vessiot--Guldberg Lie algebra isomorphic to $\mathfrak{sl}(2,\mathbb{R})$, but it can also be applied to other Lie systems related to Vessiot--Guldberg Lie algebras isomorphic to this one, for example, the Lie systems connected to Milne--Pinney equations \cite{CL09Milne}, Ermakov systems \cite{CLR08e}, harmonic oscillators \cite{CLR10}, etc. We finally analyse the linearisation of the Riccati equation by means of our new approach and recover a characterisation previously proved by Ibragimov \cite{Ib08}. Furthermore, we detail a result, new as far as we know, about the properties of linearisation of Riccati equations. The paper is organised as follows. For the sake of completeness, we report some known facts on the integrability of Riccati equations in Sec. 2 and we review the geometric interpretation of the general Riccati equation as a $t$-dependent vector field on the one-point compactification of the real line in Sec. 3. As a consequence of the latter, Riccati equations can be studied through equations on $SL(2,\mathbb{R})$. Sec. 4 is devoted to reporting some known results on the action of the group of curves in $SL(2,\mathbb{R})$ on the set of Riccati equations and how this action can be seen in terms of transformations of the corresponding equations on $SL(2,\mathbb{R})$, see \cite{CarRam05b}. In Sec. 5 we build up a Lie system describing the transformation process of Riccati equations through the action of curves of $SL(2,\mathbb{R})$ and then, in Sec.
6, we analyse the general characteristics of our approach to integrability conditions and how the transformation processes described by the previous Lie system can be used to give a unified approach to the results of \cite{CRL07,CarRamGra}. In Sec. 7 we develop a particular case of the procedures of Sec. 6 in order to recover some results found in the literature \cite{AS64,Ko06,Ra61,Ra62,RU68,RDM05}. Sec. 8 is devoted to analysing the theory of integrability through reduction from our new viewpoint. Finally, in Sec. 9 we describe how our Lie system for studying integrability conditions enables us to explain when certain linear fractional transformations allow us to linearise Riccati equations. As a particular instance we obtain a result given in \cite{Ib08,RDM05}. \section{Integrability of Riccati equations}\label{IntRicEqu} \indent In order to provide a first insight into the study of integrability conditions for Riccati equations, and for any Lie system in general, we review in this Section some known results about the integrability of Riccati equations. As a first particular example, Riccati equations (\ref{ricceq}) are integrable by quadratures when $b_2=0$. Indeed, in such a case these equations reduce to an inhomogeneous linear equation and two quadratures allow us to find the general solution. Additionally, under the change of variable $w=-1/y$ the Riccati equation (\ref{ricceq}) reads $$ \frac{dw}{dt}=b_0(t)\, w^2-b_1(t)\,w+b_2(t) $$ and if we suppose $b_0=0$ in Eq. (\ref{ricceq}), then the mentioned change of variable transforms the given equation into an integrable linear one. Another very well-known property concerning the integrability of Riccati equations is that, given a particular solution $y_1(t)$ of Eq. (\ref{ricceq}), the change of variable $y=y_1(t)+z$ leads to a new Riccati equation for which the coefficient of the term independent of $z$ is zero, i.e.
\begin{equation} \frac {dz}{dt}=[2\, b_2(t)\, y_1(t)+ b_1(t)] z+ b_2(t)\,z^2, \label{Bereq} \end{equation} and, as we pointed out before, it can be reduced to an inhomogeneous linear equation with the change $z=-1/u$. Therefore, given one particular solution, the general solution can be found by means of two quadratures. If not only one but two particular solutions, $y_1(t)$ and $y_2(t)$, of Eq. (\ref{ricceq}) are known, the general solution can be found by means of only one quadrature. In fact, the change of variable $z=(y-y_1(t))/(y-y_2(t))$ transforms the original equation into a homogeneous first-order linear differential equation in the new variable $z$ and therefore the general solution can immediately be found. Finally, given three particular solutions, $y_1(t),y_2(t),y_3(t)$, the general solution can be written, without making use of any quadrature, in the following way $$ y(t)=\frac{y_1(t)(y_3(t)-y_2(t))-ky_2(t)(y_1(t)-y_3(t))}{(y_3(t)-y_2(t))-k(y_1(t)-y_3(t))}. $$ This is a non-linear superposition rule studied in \cite{CMN} from a group theoretical perspective. The simplest case of Eq. (\ref{ricceq}), when it is an autonomous equation ($b_0$, $b_1$ and $b_2$ constants), has been fully studied (see e.g. \cite{CarRamdos} and references therein) and it is integrable by quadratures. This result can be considered as a consequence of the existence of a constant (possibly complex) solution enabling us to reduce the Riccati equation to an inhomogeneous linear one. Moreover, the separable Riccati equations of the form \begin{equation*} \frac{dy}{dt}=\varphi(t)(c_0+c_1\,y+c_2\,y^2), \end{equation*} with $\varphi(t)$ a non-vanishing function on a certain open interval $I\subset \mathbb{R}$ and $c_0$, $c_1$, $c_2$ real numbers, are integrable because a new time function $\tau=\tau(t)$ such that $d\tau/dt=\varphi(t)$ reduces the above equation to an autonomous one.
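The superposition rule quoted above can be tested numerically: solving the formula for $k$ shows that $k=\frac{(y-y_1)(y_3-y_2)}{(y-y_2)(y_1-y_3)}$ is a constant cross-ratio of the four solutions. The following sketch uses the explicitly solvable equation $\dot y=1+y^2$, whose solutions are $y(t)=\tan(t+c)$ (our choice of test equation, not taken from the paper), and checks that the formula with a fixed $k$ indeed produces a new solution:

```python
from math import tan, atan, isclose

def riccati_superposition(y1, y2, y3, k):
    # General solution of a Riccati equation from three particular
    # solutions and an arbitrary constant k (the formula quoted above).
    return (y1 * (y3 - y2) - k * y2 * (y1 - y3)) / ((y3 - y2) - k * (y1 - y3))

# dy/dt = 1 + y^2 (b0 = 1, b1 = 0, b2 = 1) has solutions y(t) = tan(t + c).
c1, c2, c3 = 0.1, 0.3, 0.7
k = 2.0
ts = [0.0, 0.1, 0.2, 0.3, 0.4]
ys = [riccati_superposition(tan(t + c1), tan(t + c2), tan(t + c3), k) for t in ts]

# The curve produced by the formula must itself be of the form tan(t + c):
c = atan(ys[0])
for t, y in zip(ts, ys):
    assert isclose(y, tan(t + c), abs_tol=1e-9)
```

Varying $k$ over the reals (together with $k=\infty$, which recovers $y_2$) sweeps out the whole one-parameter family of solutions.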
Furthermore, the above Riccati equations are also integrable as they admit, as in the autonomous case, a constant (possibly complex) solution. \section{Geometric approach to Riccati equations} \noindent Let us report in this Section some known results about the geometrical approach to the Riccati equation \cite{CRL07}. Such a point of view is used in the next Sections to investigate integrability conditions for these equations and, in general, for any Lie system. From the geometric viewpoint, the Riccati equation (\ref{ricceq}) can be considered as a differential equation determining the integral curves for the $t$-dependent vector field \cite{Car96} \begin{equation} X(t,y)=\left[b_0(t)+b_1(t)y+b_2(t)y^2\right]\frac{\partial}{\partial y}\ .\label{vfRic} \end{equation} This $t$-dependent vector field is a linear combination with $t$-dependent coefficients $b_0(t)$, $b_1(t)$ and $b_2(t)$ of the three vector fields \begin{equation} L_0 =\frac{\partial}{\partial y}\,, \quad L_1 =y\,\frac{\partial}{\partial y}\, , \quad L_2 = y^2\,\frac{\partial}{\partial y}\,, \label{sl2gen} \end{equation} with defining relations \begin{equation}\label{conmutL} [L_0,L_1] = L_0\,, \quad [L_0,L_2] = 2L_1\,, \quad [L_1,L_2] = L_2 \,, \end{equation} and therefore spanning a three-dimensional Lie algebra of vector fields $V$. Consequently, Riccati equations are Lie systems \cite{LS} and the Lie algebra $V$, the so-called Vessiot--Guldberg Lie algebra \cite{Gu93,Ve93}, is isomorphic to $\mathfrak{sl}(2,\mathbb{R})$, here considered as the Lie algebra of traceless $2\times 2$ matrices. A particular basis for $\mathfrak{sl}(2,\mathbb{R})$ is given by \begin{equation} M_0=\left(\begin{matrix} 0&-1\\0&0 \end{matrix}\right)\,, M_1=\frac{1}{2}\left(\begin{matrix} -1&0\\0&1 \end{matrix}\right)\,, M_2=\left(\begin{matrix} 0&0 \\1&0 \end{matrix}\right)\ .
\label{base_matrices} \end{equation} Moreover, it can be checked that the linear map $\rho:\mathfrak{sl}(2,\mathbb{R})\rightarrow V$ obeying $\rho(M_j)=L_j$, with $j=0,1,2$, is a Lie algebra isomorphism. Note that $L_2$ is not a complete vector field on $\mathbb{R}$. However, we can pass to the one-point compactification $\overline{\mathbb{R}}=\mathbb{R}\cup \{\infty\}$ of $\mathbb{R}$ and then $L_0$, $L_1$ and $L_2$ are complete vector fields on $\overline{\mathbb{R}}$. Consequently, these vector fields are fundamental vector fields corresponding to the action $\Phi:(A,y)\in SL(2,{\mathbb{R}})\times \overline{\mathbb{R}}\mapsto \Phi(A,y)\in\overline{\mathbb{R}}$ given by \begin{equation}\label{Action} \Phi(A,y)=\left\{ \begin{aligned} &{\frac{\alpha\, y+\beta}{\gamma\, y+\delta}}\quad &y&\neq-{\frac{\delta}{\gamma}},\,\,\,y\neq\infty,\\ &\frac{\alpha}{\gamma} \quad &y&=\infty,\\ &\infty \quad&y&=-\frac{\delta}{\gamma}, \\ \end{aligned} \right.\quad {\rm with}\quad A=\left(\begin{array}{cc}\alpha& \beta\\\gamma&\delta\end{array}\right)\in SL(2,\mathbb{R}). \end{equation} Denote by $X^{\tt R}_j$ and $ X^{\tt L}_j$, $j=0,1,2$, the right- and left-invariant vector fields on $SL(2,{\mathbb{R}})$ such that $X^{\tt R}_j(I)=X^{\tt L}_j(I)=M_j$. Moreover, these vector fields satisfy $X^{\tt R}_j(A)=M_j\cdot A$ and $X^{\tt L}_j(A)=A\cdot M_j$, with ``$\cdot$'' the usual matrix multiplication. A remarkable property is that if $A(t)$ is the integral curve for the $t$-dependent vector field $$X(t)=-\sum_{j=0}^2 b_j(t)\, X^{\tt R}_j\,, $$ starting from the neutral element in $SL(2,\mathbb{R})$, i.e. $A(0)=I$, then $A(t)$ satisfies the equation \begin{equation}\label{eLA} \dot{A}(t)A^{-1}(t)=-\sum_{j=0}^2b_j(t)M_j\equiv {\rm a}(t), \end{equation} and the solution of the Riccati equation (\ref{ricceq}) with initial condition $y(0)=y_0$ is given by $y(t)=\Phi(A(t),y_0)$ \cite{CR02}. Note that the r.h.s. in Eq.
(\ref{eLA}) is a curve in $T_ISL(2,\mathbb{R})$ that can be identified to a curve in the Lie algebra $\mathfrak{sl}(2,\mathbb{R})$ of left-invariant vector fields on $SL(2,\mathbb{R})$ through the usual isomorphism: we relate each left-invariant vector field $X^{\tt L}$ to the element $X^{\tt L}(I)\in T_ISL(2,\mathbb{R})$. From now on, we do not distinguish explicitly elements in $T_ISL(2,\mathbb{R})$ and its corresponding ones in $\mathfrak{sl}(2,\mathbb{R})$. In summary, the general solution of Riccati equations (\ref{ricceq}) can be obtained through solutions of an equation like (\ref{eLA}) starting from $I$. Consequently, we have reduced the problem of finding the general solution of Riccati equations to determining the solution of Eq. (\ref{eLA}) beginning at the neutral element of $SL(2,\mathbb{R})$. Note that, in a similar way, this procedure can be applied to any Lie system \cite{CL09}. \section{Transformation laws of Riccati equations}\label{TL} \noindent In this Section we briefly describe an important property of Lie systems, in the particular case of Riccati equations, which plays a very relevant r\^ole for establishing, as indicated in \cite{CarRam}, integrability criteria: {\it The group $\mathcal{G}$ of curves in a Lie group $G$ associated with a Lie system, here $SL(2, {\mathbb{R}})$, acts on the set of these Lie systems, here Riccati equations}. More explicitly, fixed a basis of vector fields on $\overline{\mathbb{R}}$, for instance $\{L_j\,|\,j=0,1,2\}$, which spans a Vessiot--Guldberg Lie algebra of vector fields isomorphic to $\mathfrak{sl}(2,\mathbb{R})$, each Riccati equation (\ref{ricceq}) can be considered as a curve $(b_0(t),b_1(t),b_2(t))$ in $\mathbb{R}^3$. The point now is that each element of the group of smooth curves in $SL(2, \mathbb{R})$, i.e. 
$\bar A\in \mathcal{G}\equiv{\rm Map}(\mathbb{R},\,SL(2,\mathbb{R}))$, transforms every curve $y(t)$ in $\overline{\mathbb{R}}$ into a new curve $y'(t)$ in $\overline{\mathbb{R}}$ given by $y'(t)=\Phi(\bar A(t),y(t))$. Moreover, the $t$-dependent change of variables $y'(t)=\Phi(\bar A(t),y(t))$ transforms the Riccati equation (\ref{ricceq}) into a new Riccati equation with new $t$-dependent coefficients, $b'_0,b'_1, b'_2$ given by \begin{equation}\label{trans} \left\{\begin{aligned} b'_2&={\delta}^2\,b_2-\delta\gamma\,b_1+{\gamma}^2\,b_0+\gamma {\dot{\delta}}-\delta \dot{\gamma}\ ,\\ b'_1&=-2\,\beta\delta\,b_2+(\alpha\delta+\beta\gamma)\,b_1-2\,\alpha\gamma\,b_0 +\delta \dot{\alpha}-\alpha \dot{\delta}+\beta \dot{\gamma}-\gamma \dot{\beta}\ , \\ b'_0&={\beta}^2\,b_2-\alpha\beta\,b_1+{\alpha}^2\,b_0+\alpha\dot{\beta}-\beta\dot{\alpha}, \end{aligned}\right. \end{equation} with $$ \bar{A}(t)=\left( \begin{matrix} \alpha(t)&\beta(t)\\ \gamma(t)&\delta(t) \end{matrix}\right). $$ The above transformation defines an affine action (see e.g. \cite{LM87} for the general definition of this concept) of the group $\mathcal{G}$ on the set of Riccati equations, see \cite{CarRam}. The group $\mathcal{G}$ also acts on the set of equations of the form (\ref{eLA}) on $SL(2,\mathbb{R})$. In order to show this, note first that $\mathcal{G}$ acts on the left on the set of curves in $SL(2,\mathbb{R})$ by left translations, i.e. given two curves $A(t)$ and $\bar A(t)$ in $SL(2,\mathbb{R})$, the curve $\bar A(t)$ transforms the curve $A(t)$ into a new one $A'(t)=\bar A(t) A(t)$. Moreover, if $A(t)$ is a solution of Eq. (\ref{eLA}), then the new curve $A'(t)$ satisfies a new equation like (\ref{eLA}) but with a different right hand side ${\rm a}'(t)$. 
Differentiating the relation $A'(t)=\bar A(t) A(t)$ with respect to time and taking into account the form of (\ref{eLA}), we get that the relation between the curves ${\rm a}(t)$ and ${\rm a}'(t)$ in $\mathfrak{sl}(2,\mathbb{R})$ is \begin{equation} {\rm a}'(t)=\bar A(t){\rm a}(t)\bar A^{-1}(t)+\dot{\bar{A}}(t)\bar A^{-1}(t) =-\sum_{j=0}^2b'_j(t)M_j\, \label{newricc} \end{equation} and such a relation implies the expressions (\ref{trans}). Conversely, if $A'(t)=\bar A(t) A(t)$ is the solution for the equation corresponding to the curve ${\rm a}'(t)$ given by the transformation rule (\ref{newricc}), then $A(t)$ is the solution of Eq. (\ref{eLA}). To sum up, we have shown that it is possible to associate each Riccati equation with an equation on the Lie group $SL(2,\mathbb{R})$ and to define an infinite-dimensional group of transformations acting on the set of Riccati equations. Additionally, this process can be easily derived in a similar way for any Lie system. In such a case, we must consider an equation on a Lie group $G$ associated with the corresponding Lie system and the group $\mathcal{G}$ of curves in $G$ acting on the set of curves in $G$ in the form $A'(t)=L_{\bar A(t)}A(t)$ instead of $A'(t)=\bar A(t)A(t)$. This action induces another action of $\mathcal{G}$ on the set of equations of the form (\ref{eLA}) but on the Lie group $G$. More explicitly, a curve $\bar A(t)\in \mathcal{G}$ transforms an equation on $G$ of the form (\ref{eLA}) determined by a curve ${\rm a}(t)\subset T_IG$ into a new one determined by the new curve ${\rm a}'(t)\subset T_IG$ given by \begin{equation}\label{TransConecc} {\rm a}'(t)={\rm Ad}_{\bar A(t)}{\rm a}(t)+R_{\bar A^{-1}(t)*\bar A(t)}\dot{\bar{A}}(t).
\end{equation} \section{Lie structure of an equation of transformation of Lie systems} \indent Our aim in this Section is to construct a Lie system describing the curves in $SL(2,\mathbb{R})$ relating two Riccati equations associated with a pair of equations in $SL(2,\mathbb{R})$ characterised by two curves ${\rm a}(t), {\rm a}'(t)\subset \mathfrak{sl}(2,{\mathbb{R}})$. By means of this Lie system we are going to explain in the next Sections the developments of \cite{CRL07, CarRamGra} and other works from a unified viewpoint. Let us multiply Eq. (\ref{newricc}) on the right by $\bar A(t)$ to get \begin{equation}\label{MatrixRicc} \dot{\bar{A}}(t)={\rm a}'(t)\bar A(t)-\bar A(t){\rm a}(t)\,. \end{equation} If we consider Eq. (\ref{MatrixRicc}) as a first-order differential equation in the coefficients of the curve $\bar A(t)$ in $SL(2,\mathbb{R})$, with $$ \bar A(t)=\left( \begin{matrix} \alpha(t) &\beta(t)\\ \gamma(t)& \delta(t) \end{matrix}\right)\,,\quad \alpha(t)\delta(t)-\beta(t)\gamma(t)=1, $$ then system (\ref{MatrixRicc}) reads \begin{equation}\label{FS} \left(\begin{matrix} \dot\alpha\\ \dot\beta\\ \dot\gamma\\ \dot\delta \end{matrix}\right) = \left(\begin{matrix} \frac{b'_1-b_1}{2}&b_2 &b'_0&0\\ -b_0& \frac{b'_1+b_1}{2}&0 &b'_0\\ -b'_2&0 &-\frac{b'_1+b_1}{2}& b_2\\ 0&-b'_2 &-b_0& -\frac{b'_1-b_1}{2} \end{matrix}\right) \left(\begin{matrix} \alpha\\ \beta\\ \gamma\\ \delta \end{matrix}\right). \end{equation} In order to determine the solutions $x(t)=(\alpha(t),\beta(t), \gamma(t),\delta(t))$ of the above system relating two different Riccati equations, we should check that the matrices $\bar A(t)$, whose entries are the components of $x(t)$, actually lie in $SL(2,\mathbb{R})$, i.e. we have to verify that $\alpha\delta-\beta\gamma=1$ at any time.
Nevertheless, we can drop such a restriction because it can be automatically implemented by a constraint on the initial conditions for the solutions and hence we can deal with the variables $\alpha,\beta,\gamma, \delta$ in the system (\ref{FS}) as being independent. Consider now the vector fields {\small \begin{equation*} \begin{array}{ll} N_0=-\alpha\dfrac{\partial}{\partial\beta}-\gamma\dfrac{\partial}{\partial\delta}, &N'_0=\gamma\dfrac{\partial}{\partial\alpha}+\delta\dfrac{\partial}{\partial\beta},\cr N_1=\frac 12\left(\beta\dfrac{\partial}{\partial\beta}+\delta\dfrac{\partial}{\partial\delta}-\alpha\dfrac{\partial}{\partial\alpha}-\gamma\dfrac{\partial}{\partial\gamma}\right), &N'_1=\frac 12\left(\alpha\dfrac{\partial}{\partial\alpha}+\beta\dfrac{\partial}{\partial\beta}-\gamma\dfrac{\partial}{\partial\gamma}-\delta\dfrac{\partial}{\partial\delta}\right),\cr N_2=\beta\dfrac{\partial}{\partial\alpha}+\delta\dfrac{\partial}{\partial\gamma},& N'_2=-\alpha\dfrac{\partial}{\partial\gamma}-\beta\dfrac{\partial}{\partial\delta},\nonumber \end{array} \end{equation*}} satisfying the non-vanishing commutation relations \begin{eqnarray*} &&\left[ N_0,N_1\right]=N_0, \qquad [N_0,N_2]=2 N_1, \qquad [N_1,N_2]=N_2,\cr &&[N'_0,N'_1]=N'_0, \qquad [N'_0, N'_2]=2 N'_1,\qquad [N'_1, N'_2]=N'_2\,.\nonumber \end{eqnarray*} Note that as $[N_i,N'_j]=0$, for $i,j=0,1,2$, the linear system of differential equations (\ref{FS}) is a Lie system on $\mathbb{R}^4$ associated with a Lie algebra of vector fields isomorphic to $\mathfrak{g}\equiv\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(2,\mathbb{R})$. This Lie algebra decomposes into a direct sum of two Lie algebras of vector fields isomorphic to $\mathfrak{sl}(2,\mathbb{R})$: the first one is spanned by $\{N_0,N_1,N_2\}$ and the second one by $\{N'_0,N'_1,N'_2\}$.
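Since each of the six vector fields above is linear, it is encoded by a $4\times 4$ matrix $M$ through $N(x)=Mx$, and the Lie bracket of two linear vector fields $x\mapsto Ax$ and $x\mapsto Bx$ corresponds to the reversed matrix commutator $BA-AB$. The commutation relations can therefore be verified by a short computation; the sketch below is our own encoding, with coordinates ordered as $x=(\alpha,\beta,\gamma,\delta)$ and $N'_j$ written as \texttt{P} with index $j$:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def bracket(A, B):
    # Matrix encoding of the Lie bracket of the linear vector fields
    # x -> Ax and x -> Bx: [X_A, X_B] = X_{BA - AB}.
    BA, AB = matmul(B, A), matmul(A, B)
    return [[BA[i][j] - AB[i][j] for j in range(4)] for i in range(4)]

def scal(c, A):
    return [[c * A[i][j] for j in range(4)] for i in range(4)]

Z = [[0.0] * 4 for _ in range(4)]

def mat(entries, diag=None):
    M = [row[:] for row in Z]
    for (i, j, v) in entries:
        M[i][j] = v
    if diag:
        for i, v in enumerate(diag):
            M[i][i] = v
    return M

# Coordinates ordered as x = (alpha, beta, gamma, delta).
N0 = mat([(1, 0, -1.0), (3, 2, -1.0)])
N1 = mat([], diag=[-0.5, 0.5, -0.5, 0.5])
N2 = mat([(0, 1, 1.0), (2, 3, 1.0)])
P0 = mat([(0, 2, 1.0), (1, 3, 1.0)])          # N'_0
P1 = mat([], diag=[0.5, 0.5, -0.5, -0.5])     # N'_1
P2 = mat([(2, 0, -1.0), (3, 1, -1.0)])        # N'_2

assert bracket(N0, N1) == N0
assert bracket(N0, N2) == scal(2.0, N1)
assert bracket(N1, N2) == N2
assert bracket(P0, P1) == P0
assert bracket(P0, P2) == scal(2.0, P1)
assert bracket(P1, P2) == P2
assert all(bracket(A, B) == Z for A in (N0, N1, N2) for B in (P0, P1, P2))
```

The same matrices also exhibit the linearity of the system (\ref{FS}), whose coefficient matrix is $\sum_{j}(b_j N_j + b'_j N'_j)$ in this encoding.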
If we denote $x\equiv\left(\alpha,\beta,\gamma,\delta\right)\in \mathbb{R}^4$, the system (\ref{FS}) is a differential equation on $\mathbb{R}^4$ \begin{equation*} \frac{dx}{dt}=N(t,x), \end{equation*} with $N$ being the $t$-dependent vector field \begin{equation*} N(t,x)=\sum_{j=0}^2\left(b_j(t)N_j(x)+b'_j(t)N'_j(x)\right). \end{equation*} The vector fields $\{N_0,N_1,N_2,N'_0,N'_1,N'_2\}$ span a regular involutive distribution $\mathcal{D}$ of rank three at almost every point of $\mathbb{R}^4$ and thus there exists, at least locally, a first-integral. We can check that the function $$I:x\equiv (\alpha,\beta,\gamma,\delta)\in\mathbb{R}^4\longrightarrow I(x)\equiv\det x\equiv\alpha\delta-\beta\gamma \in \mathbb{R}$$ is a first-integral for the vector fields in the distribution $\mathcal{D}$. Moreover, such a first-integral is related to the determinant of the matrix $\bar A$ with coefficients given by the components of $x=(\alpha,\beta,\gamma,\delta)$. Therefore, if we have a solution of the system (\ref{FS}) with an initial condition $\det x(0)=\alpha(0)\delta(0)-\beta(0)\gamma(0)=1$, then $ \det x(t)=1$ at any time $t$ and the solution can be understood as a curve in $SL(2,\mathbb{R})$. In summary, we have proved that: \begin{theorem}\label{THLS} The curves in $SL(2,\mathbb{R})$ transforming equation (\ref{eLA}) into a new equation of the same form but characterised by a new curve ${\rm a}'(t)=-\sum_{j=0}^2b'_j(t)M_j\,$ are described through the solutions of the Lie system \begin{equation}\label{Sys} \frac{dx}{dt}=N(t,x)\equiv\sum_{j=0}^2\left(b_j(t)N_j(x)+b'_j(t)N'_j(x)\right)\, \end{equation} such that $\det x(0)=1$. Furthermore, the above Lie system is related to a non-solvable Vessiot--Guldberg Lie algebra isomorphic to $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{sl}(2,\mathbb{R})$.
\end{theorem} \begin{corollary} \label{CorCur} Given two Riccati equations associated with curves ${\rm a}'(t)$ and ${\rm a}(t)$ in $\mathfrak{sl}(2,\mathbb{R})$, there always exists a curve $\bar A(t)$ in $SL(2,\mathbb{R})$ transforming the Riccati equation related to ${\rm a}(t)$ into the one associated with ${\rm a}'(t)$. If furthermore $\bar A(0)=I$, this curve is uniquely defined. \end{corollary} \begin{proof} Given a matrix $\bar A(0)\in SL(2,\mathbb{R})$ and the element $x(0)$ related to it, the theorem of existence and uniqueness of solutions of differential equations ensures that the system (\ref{Sys}), with the chosen ${\rm a}'(t)$ and ${\rm a}(t)$, admits a solution $x(t)$ with initial condition $x(0)$. As ${\rm det}(x(t))={\rm det}(x(0))=1$, such a solution, considered as a matrix $\bar A(t)$, is a curve in $SL(2,\mathbb{R})$ transforming the equation characterised by ${\rm a}(t)$ into the one characterised by ${\rm a}'(t)$. This proves the first statement of our corollary. If the curve $\bar A(t)$ connecting the two curves in $\mathfrak{sl}(2,\mathbb{R})$ satisfies $\bar A(0)=I$, it is given by the curve $x(t)$ in $\mathbb{R}^4$ solving system (\ref{Sys}) with initial condition $x(0)=(1,0,0,1)$, which is uniquely determined by the theorem of existence and uniqueness of solutions of systems of first-order differential equations. \end{proof} Even though we know that, given two equations on the Lie group $SL(2,\mathbb{R})$, there always exists a transformation relating them, in order to obtain such a curve we need to solve the Lie system (\ref{Sys}). Unfortunately, this Lie system is associated with a non-solvable Lie algebra, so it is not integrable by quadratures and the curve cannot be easily found in the general case. Nevertheless, we will explain many known properties and obtain new integrability conditions for Riccati equations by means of Theorem \ref{THLS}.
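The fact that $\alpha\delta-\beta\gamma$ is conserved by system (\ref{FS}), so that solutions starting at the identity stay in $SL(2,\mathbb{R})$, can be confirmed numerically. The sketch below (with arbitrarily chosen coefficient curves of our own, not taken from any example in the text) integrates (\ref{FS}) with a fourth-order Runge--Kutta scheme:

```python
import math

# Integrate system (FS) numerically and check that det = alpha*delta - beta*gamma
# is conserved.  The coefficient curves b_j, b'_j are illustrative choices.

def rhs(t, x):
    a, b, g, d = x
    b0, b1, b2 = math.cos(t), math.sin(t), 1.0       # curve a(t): arbitrary
    b0p, b1p, b2p = 1.0, 0.5, math.exp(-t)           # curve a'(t): arbitrary
    return [0.5 * (b1p - b1) * a + b2 * b + b0p * g,
            -b0 * a + 0.5 * (b1p + b1) * b + b0p * d,
            -b2p * a - 0.5 * (b1p + b1) * g + b2 * d,
            -b2p * b - b0 * g - 0.5 * (b1p - b1) * d]

def rk4(x, t0, t1, n=2000):
    """Classical fourth-order Runge-Kutta integration of x' = rhs(t, x)."""
    h, t = (t1 - t0) / n, t0
    for _ in range(n):
        k1 = rhs(t, x)
        k2 = rhs(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k1)])
        k3 = rhs(t + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k2)])
        k4 = rhs(t + h, [xi + h * ki for xi, ki in zip(x, k3)])
        x = [xi + h / 6 * (u + 2 * v + 2 * w + z)
             for xi, u, v, w, z in zip(x, k1, k2, k3, k4)]
        t += h
    return x

a, b, g, d = rk4([1.0, 0.0, 0.0, 1.0], 0.0, 2.0)   # start at the identity
det = a * d - b * g                                # should remain 1
```

Up to the integrator's error, the determinant stays equal to one along the whole trajectory, in agreement with the first-integral argument above.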
Furthermore, the procedure to obtain the Lie system (\ref{Sys}) can be generalised to deal with any Lie system related to a Lie group $G$ with Lie algebra $\mathfrak{g}$. In this general case, relation (\ref{TransConecc}) implies that \begin{equation*} \dot{\bar{A}}(t)=R_{\bar{A}(t)*I}{\rm a}'(t)-L_{\bar{A}(t)*I}{\rm a}(t). \end{equation*} As $X^{\tt R}(t,\bar A)=R_{\bar{A}*I}{\rm a}'(t)$ is a $t$-dependent right-invariant vector field on $G$ and $X^{\tt L}(t,\bar A)=-L_{\bar{A}*I}{\rm a}(t)$ a left-invariant one, the above system is the equation determining the integral curves of a time-dependent vector field with values in the linear space spanned by right- and left-invariant vector fields on $G$. Note that the family of left-invariant (right-invariant) vector fields on $G$ spans a Lie algebra isomorphic to $\mathfrak{g}$ and, as right- and left-invariant vector fields commute with each other, the set of vector fields spanned by both families is a Lie algebra of vector fields isomorphic to $\mathfrak{g}\oplus\mathfrak{g}$. In this way, we get that the above system, relating two Lie systems associated with curves ${\rm a}(t)$ and ${\rm a}'(t)$ in $\mathfrak{g}$, is a Lie system related to a Vessiot--Guldberg Lie algebra isomorphic to $\mathfrak{g}\oplus\mathfrak{g}$. \section{Lie systems and integrability conditions} \indent In this section some integrability conditions are analysed from the perspective of the theory of Lie systems with $SL(2,\mathbb{R})$ as associated Lie group, with the aim of giving a unified approach to the reduction and transformation procedures described in \cite{CRL07, CarRamGra}. More explicitly, these methods are related to conditions for the existence of a curve in a previously chosen family of curves in $SL(2,\mathbb{R})$ connecting a curve ${\rm a}(t)\subset\mathfrak{sl}(2,\mathbb{R})$ with a curve ${\rm a}'(t)$ in a solvable Lie subalgebra of $\mathfrak{sl}(2,\mathbb{R})$.
It is also shown that this viewpoint enables us to explain many of the previous results scattered in the literature about this topic and to prove other new properties. As was shown in Sec. \ref{TL}, if the curve $\bar A(t)\subset SL(2,\mathbb{R})$ transforms the equation on this Lie group defined by the curve ${\rm a}(t)$ into another one characterised by ${\rm a}'(t)$, and $A'(t)$ is a solution for the equation similar to (\ref{eLA}) for the primed system, i.e. characterised by ${\rm a}'(t)$, then $A(t)=\bar A^{-1}(t)A'(t)$ is a solution for the equation in $SL(2,\mathbb{R})$ characterised by ${\rm a}(t)$. Moreover, if ${\rm a}'(t)$ lies in a solvable Lie subalgebra of $\mathfrak{sl}(2,\mathbb{R})$, we can obtain $A'(t)$ in many ways, e.g. by quadratures or by other methods such as those used in \cite{CarRamGra}. Then, once $A'(t)$ is obtained, the knowledge of the curve $\bar A(t)$ transforming the curve ${\rm a}(t)$ into ${\rm a}'(t)$ provides the curve $A(t)$. Therefore, if we begin with a curve ${\rm a}'(t)$ in a solvable Lie subalgebra of $\mathfrak{sl}(2,\mathbb{R})$ and consider the solutions for the system (\ref{Sys}) in a subset of $SL(2,\mathbb{R})$, we can relate the curve ${\rm a}'(t)$, and therefore its Riccati equation, to other possible curves ${\rm a}(t)$, finding in this way a family of Riccati equations that can be exactly solved. Note that, if we do not restrict the solutions of the system (\ref{Sys}) to a subset of $SL(2,\mathbb{R})$, it is generally difficult to check whether a particular Riccati equation belongs to the family of integrable Riccati equations so obtained. Suppose we impose some restrictions on the family of curves that are solutions of the system (\ref{Sys}), for instance $\beta=\gamma=0$. Consequently, the system may not have solutions compatible with such restrictions, i.e. it may be impossible to connect the curves ${\rm a}(t)$ and ${\rm a}'(t)$ by a curve in $SL(2,\mathbb{R})$ satisfying the assumed restrictions.
This gives rise to some compatibility conditions for the existence of these special solutions, some of them algebraic and others differential, between the $t$-dependent coefficients of ${\rm a}'(t)$ and ${\rm a}(t)$. It will be shown later on that such restrictions correspond to integrability conditions previously proposed in the literature. Therefore, there are two ingredients to take into account: \begin{enumerate} \item {\it The equations on the Lie group characterised by curves ${\rm a}'(t)$ for which we can obtain an explicit solution}. We always suppose that ${\rm a}'(t)$ is related to a solvable Lie subalgebra of $\mathfrak{sl}(2,\mathbb{R})$ and we leave open other possible restrictions for further study. \item {\it The conditions imposed on the solutions of system} (\ref{FS}). We follow two principal approaches in the following sections, where the solutions of this system are related to curves in certain one-parameter or two-parameter subsets of $SL(2,\mathbb{R})$. \end{enumerate} Consider the following example: suppose we try to connect any ${\rm a}(t)$ with a final curve of the form ${\rm a}'(t)=-D(t)(c_0{\rm a}_0+c_1{\rm a}_1+c_2{\rm a}_2)$, where $c_0,c_1$ and $c_2$ are real numbers. In this way, the system (\ref{FS}) describing the curve $\bar A(t)\subset SL(2,\mathbb{R})$ connecting these curves is \begin{equation}\label{Lie2} \frac{dx}{dt}=\sum_{j=0}^2\left(b_j(t)N_j(x)+D(t) c_j N'_j(x)\right)=N(t,x). \end{equation} Now, as the vector field \begin{equation*} N'=\sum_{j=0}^2c_j N'_j, \end{equation*} is such that \begin{equation*} \left[N_j,N'\right]=0,\quad\quad j=0,1,2, \end{equation*} the Lie system (\ref{Lie2}) is related to a non-solvable Lie algebra of vector fields isomorphic to $\mathfrak{sl}(2,\mathbb{R})\oplus \mathbb{R}$. Hence, it is not integrable by quadratures and the solution cannot be easily found in the general case. Nevertheless, note that system (\ref{Lie2}) always has a solution.
In this way, we can consider some particular cases of the Lie system (\ref{Lie2}) for which the resulting system of differential equations can be easily integrated. As a first instance, take $x$ related to a one-parameter family of elements of $SL(2,\mathbb{R})$. Such a restriction implies that system (\ref{Lie2}) does not always have a solution, because sometimes it is not possible to connect ${\rm a}(t)$ and ${\rm a}'(t)$ by means of the chosen family of curves. This fact induces differential and/or algebraic restrictions on the initial $t$-dependent functions $b_j$, with $j=0,1,2$, that describe some known integrability conditions and possibly some new ones, developing the ideas of \cite{CRL07}. From this viewpoint we can obtain new integrability conditions that can be used, for instance, to obtain exact solutions. If instead we choose a two-parameter set for the restriction, we find in some cases that we need a particular solution of the initial Riccati equation to obtain the reduction of the given Riccati equation into an integrable one. This is the point of view shown in \cite{CarRamGra}, where integrability conditions were related to reduction methods. \section{Description of known integrability conditions}\label{DIC} \indent Let us first remark that Lie systems on $G$ of the form (\ref{eLA}) and determined by a constant curve, ${\rm a}=-\sum_{j=0}^2 c_j M_j$, are integrable and consequently the same happens for curves of the form ${\rm a}(t)=-D(t)\left(\sum_{j=0}^2 c_j M_j\right)$, where $D$ is any non-vanishing function, because a time-reparametrisation reduces the problem to the previous one. Our aim in this Section is to determine the curves $\bar A(t)$ in $SL(2,\mathbb{R})$ relating two equations on $SL(2,\mathbb{R})$ characterised by the curves ${\rm a}(t)$ and ${\rm a}'(t)=-D(t)(c_0M_0+c_1M_1+c_2M_2)$, with $D(t)$ a non-vanishing function and $c_0$, $c_1$ and $c_2$ real constants such that $c_0c_2\neq 0$.
As the final equation is integrable, the transformation establishing the relation to such a final integrable equation allows us to find by quadratures the solution of the initial equation and, therefore, the solution for its associated Riccati equation. In order to get such a transformation, we look for curves $\bar A(t)$ in $SL(2,\mathbb{R})$ satisfying certain conditions so that equation (\ref{Lie2}) becomes integrable. Nevertheless, under the assumed restrictions, we may obtain a system of differential equations admitting no solution. As an application, we show that many known results can be recovered and explained in this way. We have already shown that the Riccati equations (\ref{ricceq}) with either $b_0\equiv 0$ or $b_2\equiv 0$ are reducible to linear differential equations and therefore they are always integrable. Hence, they are not interesting in our study and we focus our attention on reducing a Riccati equation (\ref{ricceq}), with $b_0b_2\ne 0$ in an open interval in $t$, into an integrable one by means of the action of a curve in $SL(2,\mathbb{R})$. With this aim, we consider the family of curves in $SL(2,\mathbb{R})$ with $\beta=0$ and $\gamma=0$, i.e. we take curves of the form $$\bar A(t)=\left(\begin{matrix}\alpha(t)&0\\0&\delta(t)\end{matrix}\right)\in SL(2,\mathbb{R})\,,\quad\alpha(t)\delta(t)=1.$$ We already pointed out that a curve $\bar A(t)$ in $SL(2,\mathbb{R})$ induces a $t$-dependent change of variables in $\bar{\mathbb{R}}$ given by $y'(t)=\Phi(\bar A(t),y(t))$. In view of (\ref{Action}) and as $\alpha\delta=1$, we get that, in our case, such a change of variables is given by \begin{equation}\label{yprime} y'=\alpha^2(t)y=G(t)y\,,\quad G(t)\equiv \frac{\alpha(t)}{ \delta(t)}>0.
\end{equation} In view of the relations (\ref{trans}), the initial Riccati equation is transformed by means of the curve $\bar A(t)$ into the new Riccati equation with $t$-dependent coefficients $$b'_2=\delta^2\,b_2\,,\qquad b'_1=\alpha\,\delta\,b_1+\dot \alpha\,\delta-\alpha\,\dot \delta\,,\qquad b'_0=\alpha^2\, b_0.$$ Furthermore, the functions $\alpha$ and $\delta $ are solutions of system (\ref{FS}), which in this case reads \begin{equation}\label{RLFS} \left(\begin{matrix} \dot\alpha\\ 0\\ 0\\ \dot\delta \end{matrix}\right) =\left(\begin{matrix} \frac{b'_1-b_1}{2}&b_2 &b'_0&0\\ -b_0& \frac{b'_1+b_1}{2}&0 &b'_0\\ -b'_2&0 &-\frac{b'_1+b_1}{2}& b_2\\ 0&-b_2' &-b_0& -\frac{b'_1-b_1}{2} \end{matrix}\right)\left( \begin{matrix} \alpha\\ 0\\ 0\\ \delta \end{matrix}\right). \end{equation} The existence of particular solutions for the above system related to elements of $SL(2,\mathbb{R})$ and satisfying the required conditions determines integrability conditions for Riccati equations by the described method. Thus, let us analyse the existence of such solutions to get these integrability conditions. From some of the relations of the system (\ref{RLFS}), we get that $$-b_0\,\alpha+b'_0\, \delta=0\,,\qquad -b_2'\,\alpha+b_2\,\delta=0.$$ As $\alpha(t)\delta(t)= 1$, the above relations imply that $b_0b_2= b'_0b'_2$ and \begin{equation*} \alpha^2=\frac{b_0'}{b_0}=\frac{b_2}{b_2'}\equiv G>0\,. \end{equation*} Hence, the transformation formulas (\ref{trans}) reduce to \begin{equation} b'_2=\alpha^{-2}\,b_2\,,\qquad b'_1=b_1+2\frac{\dot \alpha}\alpha \,,\qquad b'_0=\alpha^2 b_0\,.\label{transfb} \end{equation} Then, for a $t$-dependent function $D$ and two real constants $c_0$ and $c_2$, with $c_0c_2\neq 0$, to exist such that $b'_2=Dc_2$ and $b'_0=Dc_0$, the function $D$ must be given by \begin{equation*} D^2c_0c_2=b_0b_2\Longrightarrow D=\pm\sqrt{\frac{b_0b_2}{c_0c_2}}\,, \end{equation*} where we have used that $b'_0b'_2=b_0b_2$.
On the other hand, as $b'_0/b_0=\alpha^2>0$, we have to fix the sign $\kappa$ of the function $D$ in order to satisfy this relation, i.e. ${\rm sg}(c_0D)={\rm sg}(b_0)$. Therefore, $$ \kappa={\rm sg}(D)={\rm sg}(b_0/c_0). $$ Also, as $b_0b_2=b'_0b'_2$, we get that ${\rm sg}(b_0b_2)={\rm sg}(c_0c_2D^2)={\rm sg}(c_0c_2)$. Furthermore, in view of the relations (\ref{transfb}), $\alpha$ is determined, up to a sign, by \begin{equation}\label{otroalfa} \alpha=\sqrt{\frac{Dc_0}{b_0}}=\left(\frac{c_0}{c_2}\,\frac{b_2}{b_0} \right)^{1/4}\,, \end{equation} and therefore the change of variables (\ref{yprime}) reads: \begin{equation}\label{Chang} y'=\frac{D(t)c_0}{b_0(t)}y\,. \end{equation} Finally, as a consequence of (\ref{transfb}), in order for $b'_1$ to be of the form $b'_1=c_1\, D$, we need \begin{equation}\label{eq10} b_1+2\, \frac{\dot \alpha}{\alpha}=\kappa c_1 \sqrt{\frac{b_0b_2}{c_0c_2}}\,. \end{equation} Using (\ref{otroalfa}) we get $$4\,\frac{\dot \alpha}{\alpha}=\frac 1{\alpha^4}\, \frac {d\alpha^4}{dt}=\frac{b_0}{b_2}\,\frac d{dt}\left(\frac{b_2}{b_0}\right)=\frac{b_0}{b_2}\,\, \frac{\dot b_2b_0-\dot b_0 b_2}{b_0^2}=\frac{\dot b_2}{b_2}-\frac{\dot b_0}{b_0}, $$ and replacing $2\dot\alpha/\alpha$ in (\ref{eq10}) by the value obtained above, we see that the required integrability condition is \begin{equation*} \sqrt{\frac{c_0c_2}{b_0b_2}}\left[b_1+\frac{1}{2}\left(\frac{\dot b_2}{b_2}-\frac{\dot b_0}{b_0}\right)\right]=\kappa c_1\,. \end{equation*} Conversely, it can be verified that if the above integrability condition holds and $D^2c_0c_2=b_0b_2$, then the change of variables (\ref{Chang}) transforms the Riccati equation (\ref{ricceq}) into $dy'/dt=D(t)(c_0+c_1y'+c_2y'^2)$, with $c_0c_2\neq 0$.
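The logarithmic-derivative identity $4\dot\alpha/\alpha=\dot b_2/b_2-\dot b_0/b_0$ used in this derivation is easy to check numerically. The sketch below uses illustrative coefficient functions of our own choosing ($b_0(t)=1+t^2$, $b_2(t)=e^t$, $c_0/c_2=2$), not any example from the text:

```python
import math

# Finite-difference check of the identity
#   alpha = ((c0/c2) * b2/b0)^{1/4}  =>  4 alpha'/alpha = b2'/b2 - b0'/b0,
# for illustrative positive coefficient functions.

b0 = lambda t: 1.0 + t * t          # arbitrary positive choice
b2 = lambda t: math.exp(t)          # arbitrary positive choice
ratio = 2.0                          # the constant c0/c2, arbitrary

alpha = lambda t: (ratio * b2(t) / b0(t)) ** 0.25

def check(t, h=1e-5):
    # left-hand side via a central finite difference
    lhs = 4 * (alpha(t + h) - alpha(t - h)) / (2 * h) / alpha(t)
    # right-hand side computed exactly for the choices above:
    # b2'/b2 = 1 and b0'/b0 = 2t/(1+t^2)
    rhs = 1.0 - 2 * t / (1.0 + t * t)
    return abs(lhs - rhs)

max_err = max(check(t) for t in [0.5, 1.0, 2.0, 3.0])
```

The error is at the level of the finite-difference truncation, confirming that the ratio $c_0/c_2$ drops out of the logarithmic derivative, exactly as used to pass from (\ref{eq10}) to the integrability condition.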
In summary: \begin{theorem}\label{TU} The necessary and sufficient conditions for the existence of a transformation \begin{equation*} y'=G(t)y,\quad G(t)>0, \end{equation*} relating the Riccati equation \begin{equation*} \frac{dy}{dt}=b_0(t)+b_1(t)y+b_2(t)y^2\,, \qquad b_0b_2\ne 0, \end{equation*} to an integrable one given by \begin{equation} \frac{dy'}{dt}=D(t)(c_0+c_1y'+c_2y'^2)\,,\quad c_0c_2\neq 0\label{eqDcs} \end{equation} where $c_0, c_1, c_2$ are real numbers and $D(t)$ is a non-vanishing function, are \begin{equation} D^2c_0c_2=b_0b_2,\qquad \left(b_1+\frac{1}{2}\left(\frac{\dot b_2}{b_2}-\frac{\dot b_0}{b_0}\right)\right)\sqrt{\frac{c_0c_2}{b_0b_2}}=\kappa c_1,\label{DinTh2} \end{equation} where $\kappa={\rm sg}(D)={\rm sg}(b_0/c_0)$. The transformation is then uniquely defined by \begin{equation*} y'=\sqrt{\frac{b_2(t)c_0}{b_0(t)c_2}}\,y\,. \end{equation*} \end{theorem} As a consequence of Theorem \ref{TU}, given a Riccati equation \begin{equation*} \frac{dy}{dt}=b_0(t)+b_1(t)y+b_2(t)y^2\,, \qquad b_0(t)b_2(t)\ne 0, \end{equation*} if there are real constants $c_0,c_1$ and $c_2$, with $c_0c_2\neq 0$, such that \begin{equation*} \sqrt{\frac{c_0c_2}{b_0b_2}}\left(b_1+\frac{1}{2}\left(\frac{\dot b_2}{b_2}-\frac{\dot b_0}{b_0}\right)\right)=\kappa c_1, \end{equation*} there exists a $t$-dependent linear change of variables transforming the given equation into an integrable Riccati equation of the form \begin{equation} \frac{dy'}{dt}=D(t)(c_0+c_1y'+c_2y'^2), \qquad c_0c_2\neq 0,\label{ricc1dim} \end{equation} and the function $D$ is given by (\ref{DinTh2}) with the sign determined by $\kappa$. From the previous results, the following corollary can be derived.
\begin{corollary}\label{CTU} A Riccati equation (\ref{ricceq}) with $b_0b_2\ne 0$ can be transformed into a Riccati equation of the form (\ref{ricc1dim}) by a $t$-dependent change of variables $y'=G(t)y$, with $G(t)>0$, if and only if \begin{equation} \frac{1}{\sqrt{|b_0b_2|}}\left(b_1+\frac{1}{2}\left(\frac{\dot b_2}{b_2}-\frac{\dot b_0}{b_0}\right)\right)=K, \label{resCor2} \end{equation} for a certain real constant $K$. In such a case, the Riccati equation (\ref{ricceq}) is integrable by quadratures. \end{corollary} According to Theorem \ref{TU}, if we start with the integrable Riccati Eq. (\ref{ricc1dim}), we can obtain the set of all Riccati equations that can be reached from it by means of a transformation of the form (\ref{yprime}). \begin{corollary}\label{C2TU} Given an integrable Riccati equation \begin{equation*} \frac{dy}{dt}=D(t)(c_0+c_1y+c_2y^2),\qquad c_0c_2\neq 0, \end{equation*} with $D(t)$ a non-vanishing function, the set of Riccati equations which can be obtained with a transformation $y'=G(t)y$, with $G(t)>0$, is formed by those of the form: \begin{equation*} \frac{dy'}{dt}=b_0(t)+\left( \frac{\dot b_0(t)}{b_0(t)}-\frac{\dot D(t)}{D(t)}+c_1D(t)\right) y'+\frac{D^2(t)c_0c_2}{b_0(t)}y'^2\,, \end{equation*} with $$G=\frac{b_0}{Dc_0}\,.$$ \end{corollary} Therefore, starting with an integrable equation we can generate a family of solvable Riccati equations whose coefficients are parametrised by a non-vanishing function $b_0$. Moreover, the integrability condition to check whether a Riccati equation belongs to this family can be easily verified. These results can now be used for a better understanding of some integrability conditions found in the literature. \medskip $\bullet$ {\it The case of Allen and Stein}: \medskip The results of the paper by Allen and Stein \cite{AS64} can be recovered through our general approach.
In that work, a Riccati equation (\ref{ricceq}) with $b_0b_2>0$ and $b_0$, $b_2$ differentiable functions satisfying the condition \begin{equation}\label{ALintegra} \frac{b_1+\frac{1}{2}\left(\frac{\dot b_2}{b_2}-\frac{\dot b_0}{b_0}\right)}{\sqrt{b_0b_2}}=C, \end{equation} where $C$ is a real constant, was transformed into the integrable one \begin{equation}\label{FREAS64} \frac{dy'}{dt}=\sqrt{b_0(t)b_2(t)}\left(1+Cy'+y'^2\right), \end{equation} through the $t$-dependent linear transformation \begin{equation*} y'=\sqrt{\frac{b_2(t)}{b_0(t)}}y\,. \end{equation*} If a Riccati equation satisfies the integrability condition (\ref{ALintegra}), it also satisfies the assumptions of Corollary \ref{CTU} and, therefore, the integrability condition given in Theorem \ref{TU} with \begin{equation*} c_0=1=c_2,\quad c_1=C,\quad D=\sqrt{b_0b_2}. \end{equation*} Consequently, the corresponding transformation given by Theorem \ref{TU} reads \begin{equation*} y'=\sqrt{\frac{b_2(t)}{b_0(t)}}y\,, \end{equation*} showing that the transformation in \cite{AS64} is a particular case of our results. This is not an unexpected result, because Theorem \ref{TU} shows that if such a time-dependent change of variables is used to transform a Riccati equation (\ref{ricceq}) into one of the form (\ref{eqDcs}), this change of variables must be of the form (\ref{Chang}) and the initial Riccati equation must satisfy the integrability conditions (\ref{DinTh2}).
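The Allen--Stein reduction can be exercised numerically on a concrete instance of our own choosing (not an example from \cite{AS64}): take $b_0=e^t$, $b_2=e^{-t}$, so $\sqrt{b_0b_2}=1$, and condition (\ref{ALintegra}) with $C=2$ forces $b_1=3$. Then $y'=\sqrt{b_2/b_0}\,y=e^{-t}y$ should satisfy $dy'/dt=1+2y'+y'^2$:

```python
import math

# Integrate the original Riccati equation and the transformed one from
# matching initial data, then check that y'(t) = e^{-t} y(t) indeed holds.

def rk4(f, y, t0, t1, n=4000):
    """Classical RK4 for the scalar ODE y' = f(t, y)."""
    h, t = (t1 - t0) / n, t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

orig = lambda t, y: math.exp(t) + 3 * y + math.exp(-t) * y * y  # b0, b1, b2 as above
target = lambda t, y: 1 + 2 * y + y * y                          # sqrt(b0 b2)(1+Cy'+y'^2)

T = 0.5
y_T = rk4(orig, 0.2, 0.0, T)      # y(0) = 0.2, hence y'(0) = e^0 * 0.2 = 0.2
yp_T = rk4(target, 0.2, 0.0, T)
diff = abs(math.exp(-T) * y_T - yp_T)   # should vanish up to integration error
```

The two trajectories agree through the change of variables to within the integrator's accuracy, as Theorem \ref{TU} predicts for this choice of constants.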
\medskip $\bullet$ {\it The case of Rao and Ukidave}: \medskip Rao and Ukidave stated in their work \cite{RU68} that the Riccati equation (\ref{ricceq}), with $b_0b_2>0$, can be transformed into an integrable Riccati equation of the form \begin{equation*} \frac{dy'}{dt}=\sqrt{cb_0b_2}\left(1-ky'+\frac{1}{c}{y'}^2\right), \end{equation*} through a $t$-dependent linear transformation \begin{equation*} y'=\frac{1}{v(t)}y, \end{equation*} if there exist real constants $c$ and $k$ such that the following integrability condition holds \begin{equation}\label{CondRU1} b_2=\frac{b_0}{cv^2}, \end{equation} with $v$ being a solution of the differential equation \begin{equation}\label{CondRU2} \frac{dv}{dt}=b_1(t)v+kb_0(t)\,. \end{equation} Note that, in view of (\ref{CondRU1}), necessarily $c>0$, and if the integrability conditions (\ref{CondRU1}) and (\ref{CondRU2}) hold with constants $c$ and $k$ and a negative solution $v(t)$, the same conditions hold for the constants $c$ and $-k$ and a positive solution $-v(t)$. Consequently, we can restrict ourselves to studying the integrability conditions (\ref{CondRU1}) and (\ref{CondRU2}) for positive solutions $v(t)>0$. In such a case, the previous method uses a $t$-dependent linear change of coordinates of the form (\ref{yprime}) and the final Riccati equations are of the type (\ref{eqDcs}) described in our work; therefore, the integrability conditions derived by Rao and Ukidave must be a particular instance of the integrable cases provided by Theorem \ref{TU}. Using the value of $v(t)$ in terms of the constant $c$ and the functions $b_0$ and $b_2$ obtained from formula (\ref{CondRU1}) and Eq. (\ref{CondRU2}), we get that \begin{equation*} \frac{1}{\sqrt{|b_0b_2|}}\left(b_1+\frac 12\left(\frac{\dot b_2}{b_2}-\frac{\dot b_0}{b_0}\right)\right)=-k\,{\rm sg}(b_0)\sqrt{c}. \end{equation*} Hence, the Riccati equations obeying conditions (\ref{CondRU1}) and (\ref{CondRU2}) satisfy the integrability conditions of Corollary \ref{CTU}.
Moreover, if we choose \begin{equation*} D^2=cb_0b_2,\quad c_0=1,\quad c_1=-k,\quad c_2=c^{-1}\,, \end{equation*} then $D=\sqrt{c b_0b_2}$ and the only possible transformation (\ref{yprime}) given by Theorem \ref{TU} reads \begin{equation*} y'=\alpha^2(t)y=\sqrt{\frac{cb_2(t)}{b_0(t)}}y, \end{equation*} and then \begin{equation*} \frac{1}{v}=\sqrt{\frac{cb_2}{b_0}}. \end{equation*} In this way, we recover one of the results derived by Rao and Ukidave in \cite{RU68}. \medskip $\bullet$ {\it The case of Kovalevskaya}: \medskip Kovalevskaya showed in the paper \cite{Ko06} that the Riccati equation \begin{equation*} \frac{dy}{dt}=F(t)+\left(L+\frac{\dot F(t)}{F(t)}\right)y-\frac{K}{F(t)}y^2, \end{equation*} where $K$ and $L$ are real constants, can be integrated through quadratures. It can be verified that the above family of Riccati equations satisfies the assumptions of Corollary \ref{CTU}. Indeed, taking $c_0=1$, $c_2=-K$, $c_1=L$ we get that $\kappa=1$, \begin{equation*} \sqrt{\frac{c_0c_2}{b_0b_2}}\left(b_1+\frac{1}{2}\left(\frac{\dot b_2}{b_2}-\frac{\dot b_0}{b_0}\right)\right)=L=c_1, \end{equation*} and $D=\sqrt{{b_0b_2}/{(c_0c_2)}}=1$. Therefore, Theorem \ref{TU} shows that the above family of Riccati equations can be integrated. Moreover, taking the above values of the constants $c_0$, $c_1$, $c_2$ and the function $b_0(t)=F(t)$, Corollary \ref{C2TU} reproduces the family of Riccati equations analysed by Kovalevskaya. \medskip $\bullet$ {\it The case of Hong-Xiang}: \medskip As a final example we can consider the Riccati equation \begin{equation*} \frac{dy}{dt}=- y^2-\left(2 b G(t)-\frac{\dot G(t)}{G(t)}\right) y-c G^2(t), \end{equation*} used to analyse a certain integrable linear differential equation in \cite{Ro07}, which was also analysed by Hong-Xiang \cite{HX82}. The above Riccati equation satisfies the integrability condition (\ref{resCor2}) and hence it can be integrated.
Indeed, we have $$ \left\{ \begin{aligned} b_0(t)&=-c G^2(t),\\ b_1(t)&=-\left(2bG(t)-\frac{\dot G(t)}{G(t)}\right),\\ b_2(t)&=-1, \end{aligned}\right. $$ and therefore we get that $$ \frac{b_1+\frac{1}{2}\left(\frac{\dot b_2}{b_2}-\frac{\dot b_0}{b_0}\right)}{\sqrt{|b_0b_2|}}={\rm const.} $$ In summary, many integrability conditions shown in the literature are equivalent to, or particular instances of, those given in our more general statements. \section{Integrability and reduction} \indent In this section we develop a procedure similar to the one derived in the previous sections, but here we consider solutions of system (\ref{FS}) in two-parameter subsets of $SL(2,\mathbb{R})$. In this case, we recover some known integrability conditions, e.g. a certain kind of integrability used in \cite{CarRamGra}. More specifically, we try to relate a Riccati equation (\ref{ricceq}) to an integrable one associated, as a Lie system, with a curve of the form ${\rm a}'(t)=-D(t)(c_0{\rm a}_0+c_1{\rm a}_1+c_2{\rm a}_2)$, with $c_2\neq 0$ and a non-vanishing function $D=D(t)$. Furthermore, we consider solutions of system (\ref{Sys}) with $\gamma=0$ and $\alpha>0$ related to elements of $SL(2,\mathbb{R})$, i.e. we analyse transformations $$y'=\frac{\alpha(t)}{\delta(t)}y+\frac{\beta(t)}{\delta(t)}=\alpha^2(t)\,y+ \alpha(t)\beta(t)\,.$$ In this case, using the expression in coordinates (\ref{FS}) of system (\ref{Sys}), we get that \begin{equation}\label{PC} \left(\begin{matrix} \dot\alpha\\ \dot\beta\\ 0\\ \dot\delta \end{matrix}\right)=\left( \begin{matrix} \frac{b'_1-b_1}{2}&b_2 &b'_0&0\\ -b_0& \frac{b'_1+b_1}{2}&0 &b'_0\\ -b'_2&0 &-\frac{b'_1+b_1}{2}& b_2\\ 0&-b_2' &-b_0& -\frac{b'_1-b_1}{2} \end{matrix}\right)\left( \begin{matrix} \alpha\\ \beta\\ 0\\ \delta \end{matrix}\right)\,, \end{equation} where $b'_j=D\,c_j$ and $c_j\in\mathbb{R}$ for $j=0,1,2$.
As we suppose $b'_2\neq 0$, the third equation of the above system implies \begin{equation*} \frac{\alpha}{\delta}=\frac{b_2}{b_2'}. \end{equation*} As $\alpha\delta=1$ must hold for a solution of (\ref{Sys}) to be related to an element of $SL(2,\mathbb{R})$, and $b_2'=D c_2$, we get \begin{equation}\label{Drelation} \alpha^2=\frac{b_2}{D c_2}. \end{equation} Hence, $\alpha$ is determined, up to a sign, by the values of $b_2(t), D$ and $c_2$. In this way, if we take $\alpha$ to be positive, the first differential equation of system (\ref{PC}) gives us the value of $\beta$ in terms of the related initial and final Riccati equations, i.e. $$ \beta=\frac{1}{b_2}\left(\dot \alpha-\frac{b'_1-b_1}{2}\alpha\right). $$ Taking into account the relation (\ref{Drelation}), the above expression is equivalent to the differential equation \begin{equation*} \frac{dD}{dt}=\left(b_1(t)+\frac{\dot b_2(t)}{b_2(t)}\right)D-c_1D^2-2b_2(t)D\beta \left(\frac{c_2D}{b_2(t)}\right)^{1/2}, \end{equation*} and, as $\alpha\delta= 1$, we can define $M=\beta/\alpha$ and rewrite the above expression as follows \begin{equation*} \frac{dD}{dt}=\left(b_1(t)+\frac{\dot b_2(t)}{b_2(t)}\right)D-c_1D^2-2b_2(t)MD. \end{equation*} Rewriting the differential equation for $\beta$ in terms of $M$, we get \begin{equation*} \frac{dM}{dt}=-b_0(t)+\frac{c_0c_2}{b_2(t)}D^2+b_1(t) M-b_2(t) M^2\,. \end{equation*} Finally, since $\alpha\delta=1$ is preserved by system (\ref{Sys}), if the system for the variables $M$ and $D$ and all the obtained conditions are satisfied, then $\delta=\alpha^{-1}$ satisfies its corresponding differential equation in system (\ref{PC}). To sum up, we have obtained the following result.
\begin{theorem}\label{FT2} Given a Riccati equation (\ref{ricceq}), there exists a transformation \begin{equation*} y'=G(t)y+H(t)\,,\qquad G(t)>0\,, \end{equation*} relating it to the integrable equation \begin{equation}\label{fequation} \frac{dy'}{dt}=D(t)(c_0+c_1y'+c_2y'^2), \end{equation} with $c_2\neq 0$ and $D$ a non-vanishing function, if and only if there exist functions $D$ and $M$ satisfying the following system \begin{eqnarray*} \left\{\begin{aligned} \frac{dD}{dt}&=\left(b_1(t)+\frac{\dot b_2(t)}{b_2(t)}\right)D-c_1D^2-2b_2(t)MD,\\ \frac{dM}{dt}&=-b_0(t)+\frac{c_0c_2}{b_2(t)}D^2+b_1(t) M-b_2(t) M^2. \end{aligned}\right. \end{eqnarray*} The transformation is then given by \begin{equation}\label{ChangeT3} y'=\frac{b_2(t)}{D(t)c_2}(y+M(t))\,. \end{equation} \end{theorem} Consider $c_0=0$ in Eq. (\ref{fequation}). Thus, the system determining the curve in $SL(2,\mathbb{R})$ performing the transformation of Theorem \ref{FT2} is \begin{equation} \left\{ \begin{array}{rcl} \dfrac{dD}{dt}&=&\left(b_1(t)+\dfrac{\dot b_2(t)}{b_2(t)}\right)D-c_1D^2-2b_2(t)MD,\\ \dfrac{dM}{dt}&=&-b_0(t)+b_1(t) M-b_2(t) M^2. \end{array}\label{RedSep}\right. \end{equation} On the one hand, this system does not involve any integrability condition because, as a consequence of the theorem of existence and uniqueness of solutions, there always exists a solution for every initial condition. On the other hand, such solutions can be as difficult to find as the general solution of the initial Riccati equation. Hence, in order to find a particular solution, we need to look for some simplifications. For instance, we can consider the case in which $M=b_1/b_2$. In this case, the first differential equation of the above system does not depend on $M$ and reads $$ \frac{dD}{dt}=\left(-b_1(t)+\frac{\dot b_2(t)}{b_2(t)}\right)D-c_1D^2 $$ and it is integrable by quadratures.
Its solution reads \begin{equation*} D(t)=\frac{\exp\left(\int_0^t A(t')dt'\right)}{C+c_1\int^t_0\exp\left(\int_0^{t''} A(t')dt'\right)dt''}\,,\qquad A(t)=\left(-b_1(t)+\frac{\dot b_2(t)}{b_2(t)}\right). \end{equation*} Meanwhile, the condition for $M=b_1/b_2$ to be a solution of the second equation in (\ref{RedSep}) is \begin{equation*} \frac{d}{dt}\left(\frac{b_1}{b_2}\right)=-b_0\,, \end{equation*} giving rise to an integrability condition. This summarises one of the integrability conditions considered in \cite{Ra62}. Next, we recover from this new viewpoint the well-known result that the knowledge of a particular solution of the Riccati equation allows us to solve the system (\ref{RedSep}). In fact, under the change of variables $M\mapsto -y$, system (\ref{RedSep}) becomes \begin{eqnarray}\label{eq8} \left\{\begin{aligned} \frac{dD}{dt}&=\left(b_1(t)+\frac{\dot b_2(t)}{b_2(t)}\right)D-c_1D^2+2b_2(t)yD,\\ \dfrac{dy}{dt}&=b_0(t)+b_1(t) y+b_2(t) y^2. \end{aligned}\right. \end{eqnarray} Note that each particular solution of the above system is of the form $(D_p(t),y_p(t))$, with $y_p(t)$ a particular solution of the Riccati equation (\ref{ricceq}). Therefore, given such a particular solution $y_p(t)$, the function $D_p=D_p(t)$, corresponding to the particular solution $(D_p(t),y_p(t))$ of system (\ref{eq8}), satisfies the integrable equation \begin{equation}\label{PS} \frac{dD_p}{dt}=\left(b_1(t)+\frac{\dot b_2(t)}{b_2(t)}+2b_2(t)y_p(t)\right)D_p-c_1D_p^2. \end{equation} Hence, the knowledge of a particular solution $y_p(t)$ of the Riccati equation (\ref{ricceq}) enables us to get a particular solution $(D_p(t),y_p(t))$ of system (\ref{eq8}) and, taking into account the change of variables $y\mapsto -M$, a particular solution $(D_p(t),M_p(t))=(D_p(t),-y_p(t))$ of system (\ref{RedSep}).
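The quadrature formula for $D$ quoted above can be spot-checked numerically. With the illustrative constant choices $b_1=1$, $b_2=1$ (so $A=-1$), $c_1=2$ and $C=3$ (ours, not from the text), the formula gives $D(t)=e^{-t}/(5-2e^{-t})$, which should satisfy $dD/dt=-D-2D^2$:

```python
import math

# Finite-difference verification that the closed-form D(t) solves the
# Bernoulli-type equation dD/dt = A(t) D - c1 D^2 for the choices above.

D = lambda t: math.exp(-t) / (5.0 - 2.0 * math.exp(-t))

def residual(t, h=1e-5):
    dDdt = (D(t + h) - D(t - h)) / (2 * h)       # central finite difference
    return abs(dDdt - (-D(t) - 2.0 * D(t) ** 2))  # compare with A*D - c1*D^2

max_res = max(residual(t) for t in [0.0, 0.5, 1.0, 2.0])
```

The residual is at the finite-difference truncation level, consistent with the quadrature formula.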
Finally, the functions $M_p(t)$ and $D_p(t)$ determine a change of variables (\ref{ChangeT3}), given by Theorem \ref{FT2}, transforming the initial Riccati equation (\ref{ricceq}) into another one related, as a Lie system, to a solvable Lie algebra of vector fields. In this way, we describe a reduction process similar to the one pointed out in \cite{CarRamGra}. Nevertheless, we here directly obtain a reduction to a Riccati equation related, as a Lie system, to a one-dimensional Lie subalgebra of $\mathfrak{sl}(2,\mathbb{R})$ through one of its particular solutions. There exist many ways to impose conditions on the coefficients of the second equation of (\ref{eq8}) so that one of its particular solutions can be obtained easily. We now give some particular examples. If there exists a real constant $c$ such that the time-dependent functions $b_0$, $b_1$ and $b_2$ satisfy $b_0+b_1 c+b_2 c^2=0$, then $c$ is a particular solution. This summarises some cases found in \cite{CarRamGra, Stre}. For instance: \begin{enumerate} \item $b_0+b_1+b_2=0$ means that $c=1$ is a particular solution. \item $c_1^2b_0+c_1c_2b_1+c_2^2b_2=0$ means that $c=c_2/c_1$ is a particular solution. \end{enumerate} In these particular instances, we can find $D$ through the first differential equation of (\ref{eq8}). As a first application of the first of these cases we can integrate the Riccati equation \begin{equation}\label{Hovy} \frac{dy}{dt}=-\frac{n}{t}+\left(1+\frac{n}{t}\right)y-y^2, \end{equation} related to Hovy's equation \cite{Ro07}. This Riccati equation admits the particular constant solution $y_p(t)=1$. Using such a particular solution in Eq. (\ref{PS}) and fixing, for instance, $c_1=0$, we can obtain a particular solution of Eq. (\ref{PS}), e.g. $D_p(t)=t^ne^{-t}$. Therefore, we get that $(t^ne^{-t},1)$ is a solution of the system (\ref{eq8}) related to Eq. (\ref{Hovy}) and $(t^ne^{-t},-1)$ is a solution of the system (\ref{RedSep}). 
In this way, Theorem \ref{FT2} states that the transformation (\ref{ChangeT3}), determined by $D_p(t)=t^ne^{-t}$ and $M_p(t)=-1$, of the form \begin{equation}\label{rel} y'=-t^{-n}e^tc_2^{-1}(y-1), \end{equation} relates the solutions of Eq. (\ref{Hovy}) to those of the integrable equation $$ \frac{dy'}{dt}=e^{-t}t^nc_2y'^2. $$ If we fix $c_2=-1$, the solution of the above equation is \begin{equation*} y'(t)=-\frac{1}{K+\Gamma(1+n,t)}, \end{equation*} where $K$ is an integration constant and $\Gamma(a,t)$ is the upper incomplete Gamma function \begin{equation*} \Gamma(a,t)=\int^\infty_t t'^{a-1}e^{-t'}dt'. \end{equation*} In view of the change of variables (\ref{rel}), the solutions $y(t)$ of the Riccati equation (\ref{Hovy}) and $y'(t)$ are related by the expression $y'(t)=t^{-n}e^t(y(t)-1)$. Therefore, substituting the general solution $y'(t)$ into this expression, we derive the general solution of the Riccati equation (\ref{Hovy}), that is, \begin{equation*} y(t)=1-\frac{e^{-t}t^n}{\Gamma(n+1,t)+K}. \end{equation*} Another approach that can be summarised by Theorem \ref{FT2} is the factorisation method developed in \cite{Ro07} to explain an integrability process for second-order differential equations. In that work, the following differential equation was analysed: \begin{equation}\label{SOE} \frac{d^2y}{dt^2}+2P(t)\frac{dy}{dt}+\left(\frac{dP}{dt}+P^2(t)-\frac{d\phi} {dt}-\phi^2(t)\right)y=0\,. \end{equation} Invariance under dilations leads us to consider an adapted variable $z$ such that $y=e^z$. Under this change of variables the equation obtained for $\psi=\dot z$ is the Riccati equation \begin{equation}\label{FRE} \frac{d\psi}{dt}=-\psi^2-2P(t)\psi-\left(\frac{dP}{dt}+P^2(t)-\frac{d\phi} {dt}-\phi^2(t)\right). \end{equation} This equation was integrated through a factorisation method in \cite{Ro07}. 
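The general solution of (\ref{Hovy}) obtained above can be verified numerically. For an integer value of $n$ the upper incomplete Gamma function is elementary; the Python sketch below takes $n=2$, where $\Gamma(3,t)=2e^{-t}(1+t+t^2/2)$, and checks by a finite-difference derivative that $y(t)$ satisfies (\ref{Hovy}):

```python
import math

# Check the closed-form solution of the Hovy-related Riccati equation
#   dy/dt = -n/t + (1 + n/t) y - y^2
# for the integer case n = 2, where Gamma(3, t) = 2 e^{-t} (1 + t + t^2/2).
n, K = 2, 1.0

def gamma3(t):                    # upper incomplete Gamma(3, t)
    return 2.0 * math.exp(-t) * (1.0 + t + t * t / 2.0)

def y(t):                         # the claimed general solution
    return 1.0 - math.exp(-t) * t ** n / (gamma3(t) + K)

def riccati_rhs(t):
    return -n / t + (1.0 + n / t) * y(t) - y(t) ** 2

h = 1e-6
for t in (0.5, 1.0, 3.0):
    dy = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dy - riccati_rhs(t)) < 1e-6
```

The same check works for any integer $n$ once $\Gamma(n+1,t)=n!\,e^{-t}\sum_{k=0}^{n}t^k/k!$ is substituted.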
Nevertheless, we can also integrate this equation by taking into account that $\psi_p(t)=\phi(t)-P(t)$ is a particular solution of the above differential equation and then applying the same procedure as for Eq. (\ref{Hovy}). Indeed, as $\psi_p(t)$ is a particular solution of the Riccati equation (\ref{FRE}), we can obtain a particular solution $D_p=D_p(t)$ of Eq. (\ref{PS}) and, by means of the functions $M_p(t)=-\psi_p(t)$ and $D_p(t)$, we can obtain the solution of the Riccati equation (\ref{FRE}). Finally, inverting the change of variables used to relate Eq. (\ref{SOE}) to (\ref{FRE}), we obtain the solution of Eq. (\ref{SOE}). \section{Linearisation of Riccati equations} \indent One can also study the problem of the linearisation of Riccati equations through the linear fractional transformations (\ref{yprime}). This set of time-dependent transformations is general enough to include many of the time-dependent or time-independent changes of variables already used to study Riccati equations; e.g. it allows us to recover the results of \cite{RDM05}. As a main result, we state in this Section integrability conditions under which a $t$-dependent Riccati equation can be transformed into a linear one by means of a diffeomorphism on $\overline{\mathbb{R}}$ associated with certain linear fractional transformations. As a first insight into the linearisation process, note that Corollary \ref{CorCur} shows that there always exists a curve in $SL(2,\mathbb{R})$, and then a $t$-dependent linear fractional transformation on ${\overline{\mathbb{R}}}$, transforming a given Riccati equation into any other one. In particular, if we fix $b_2'=0$ in the final Riccati equation, we obtain that there is a $t$-dependent linear fractional change of variables transforming any Riccati equation (\ref{ricceq}) into a linear one. 
Nevertheless, as the Lie system (\ref{Sys}) describing such a transformation is not related to a solvable Lie algebra of vector fields, it is not easy to find such a transformation in the general case. Let us try to relate a Riccati equation (\ref{ricceq}) to a linear differential equation by means of a linear fractional transformation (\ref{Action}) determined by a vector $(\alpha,\beta,\gamma,\delta)\in \mathbb{R}^4$ with $\alpha\delta-\beta\gamma=1$. In this case, establishing the existence of solutions of the system (\ref{FS}) performing such a transformation is an easy task, and we can look for integrability conditions to get the corresponding change of variables. Note that as $(\alpha,\beta,\gamma,\delta)$ is constant, we have $\dot\alpha=\dot\beta=\dot\gamma=\dot\delta=0$ and, in view of (\ref{FS}), the diffeomorphism on $\overline{\mathbb{R}}$ performing the transformation is related to a vector in the kernel of the matrix \begin{equation}\label{EM} B=\left(\begin{matrix} \frac{b'_1-b_1}{2}&b_2 &b'_0&0\\ -b_0& \frac{b'_1+b_1}{2}&0 &b'_0\\ 0&0 &-\frac{b'_1+b_1}{2}& b_2\\ 0&0 &-b_0& -\frac{b'_1-b_1}{2} \end{matrix}\right), \end{equation} where we assume $b_0b_2\neq 0$ in an open interval in the variable $t$. We leave out the study of the case $b_0b_2=0$ in an open interval because, as was shown in Sec. \ref{IntRicEqu}, this case is known to be integrable. The necessary and sufficient condition for a non-trivial $\ker B$ is $\det B= 0$ and, therefore, a short calculation shows that $\dim\,\, {\rm ker}\, B>0$ if and only if $(-b_1^2+b_1'^2+4 b_0 b_2)^2=0.$ Thus, $b_1'=\pm \sqrt{b_1^2-4 b_0 b_2}$ and $b_1'$ is fixed, up to a sign, by the values of $b_0$, $b_1$ and $b_2$. Let us study the kernel of the matrix $B$ in the positive and negative cases for $b'_1$. 
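Before doing so, the determinant condition can be checked numerically at sample frozen values of the coefficients (a Python sketch; the numerical values are arbitrary illustrations):

```python
import math

def det(m):
    # Laplace expansion along the first row (adequate for a 4x4 matrix)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def B(b0, b1, b2, b1p, b0p):
    # the matrix (EM), with b1p = b1' and b0p = b0'
    return [[(b1p - b1) / 2, b2, b0p, 0],
            [-b0, (b1p + b1) / 2, 0, b0p],
            [0, 0, -(b1p + b1) / 2, b2],
            [0, 0, -b0, -(b1p - b1) / 2]]

b0, b1, b2, b0p = 1.0, 3.0, 1.0, 2.0        # arbitrary sample values
b1p = math.sqrt(b1 * b1 - 4 * b0 * b2)      # the derived condition on b1'

assert abs(det(B(b0, b1, b2, b1p, b0p))) < 1e-12
# a generic b1' violating the condition gives det B != 0:
assert abs(det(B(b0, b1, b2, b1p + 1.0, b0p))) > 1e-3
```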
$\bullet$ Positive case: The kernel of the matrix (\ref{EM}) is given by the vectors \begin{equation*} \left(\delta\frac{b_0'}{b_0}+\beta\frac{b_1+\sqrt{b_1^2-4 b_0 b_2}}{2 b_0} ,\beta,-\delta\frac{-b_1+\sqrt{b_1^2-4 b_0b_2}}{2 b_0},\delta\right), \qquad \delta,\beta\in\mathbb{R}. \end{equation*} Recall that we only consider constant elements of $\ker B$; therefore, there must exist two real constants $K_1$ and $K_2$ such that \begin{eqnarray*} K_1=\delta\frac{b_0'}{b_0}+\beta\frac{b_1+\sqrt{b_1^2-4 b_0 b_2}}{2 b_0},\qquad K_2=\frac{-b_1+\sqrt{b_1^2-4 b_0b_2}}{2 b_0}. \end{eqnarray*} Moreover, in order to relate these vectors to elements in $SL(2,\mathbb{R})$ we have to impose that $\det (K_1,\beta,-\delta K_2,\delta)=\delta(K_1 +\beta K_2)=1$. The condition on $K_2$ imposes a restriction on the coefficients of the initial Riccati equation for it to be linearisable by a constant linear fractional transformation (\ref{Action}). If this condition is satisfied, we can fix $\beta$, $\delta$, $K_1$ and $b_0'$ to satisfy the other conditions. Thus, the only linearisation condition is the condition on $K_2$. $\bullet$ Negative case: In this case, $\ker\,B$ reads \begin{equation*} \left(\delta \frac{b_0'}{b_0}+\beta\frac{b_1-\sqrt{b_1^2-4 b_0 b_2}}{2 b_0} ,\beta,-\delta\frac{-b_1-\sqrt{b_1^2-4 b_0b_2}}{2 b_0},\delta\right),\qquad \delta,\beta\in\mathbb{R}, \end{equation*} and now the conditions reduce to the existence of two real constants $K_1$ and $K_2$ such that \begin{eqnarray*} K_1=\delta \frac{b_0'}{b_0}+\beta\frac{b_1-\sqrt{b_1^2-4 b_0 b_2}}{2 b_0},\qquad K_2= \frac{-b_1-\sqrt{b_1^2-4 b_0b_2}}{2 b_0}, \end{eqnarray*} with $\delta(K_1+\beta K_2)=1$. If the condition on $K_2$ is satisfied, we can proceed as in the positive case to obtain the transformation performing the linearisation of the initial Riccati equation. 
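The stated form of $\ker B$ in the positive case can be confirmed numerically (again with arbitrary sample values of the coefficients):

```python
import math

# Arbitrary sample values with b1^2 - 4 b0 b2 > 0.
b0, b1, b2, b0p = 1.0, 3.0, 1.0, 2.0
root = math.sqrt(b1 * b1 - 4 * b0 * b2)
b1p = root                                   # positive case: b1' = +sqrt(...)

B = [[(b1p - b1) / 2, b2, b0p, 0],
     [-b0, (b1p + b1) / 2, 0, b0p],
     [0, 0, -(b1p + b1) / 2, b2],
     [0, 0, -b0, -(b1p - b1) / 2]]

beta, delta = 1.0, 1.0                       # free parameters of the kernel
v = [delta * b0p / b0 + beta * (b1 + root) / (2 * b0),
     beta,
     -delta * (-b1 + root) / (2 * b0),
     delta]

for row in B:                                # check B v = 0
    assert abs(sum(r * x for r, x in zip(row, v))) < 1e-12
```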
In summary: \begin{theorem} The necessary and sufficient condition for the existence of a diffeomorphism on $\bar{\mathbb{R}}$ of linear fractional type associated with a transformation of $SL(2,\mathbb{R})$ transforming the Riccati equation (\ref{ricceq}) into a linear differential equation is the existence of a real constant $K$ such that \begin{equation}\label{IntCond} K=\frac{-b_1\pm\sqrt{b_1^2-4 b_0b_2}}{2 b_0}. \end{equation} \end{theorem} As a Riccati equation (\ref{ricceq}) satisfies condition (\ref{IntCond}) if and only if $1/K$ is a constant particular solution, we get the following corollary: \begin{corollary} A Riccati equation can be linearised by means of a diffeomorphism on $\overline{\mathbb{R}}$ of the form (\ref{Action}) if and only if it admits a constant particular solution. \end{corollary} Ibragimov showed that a Riccati equation (\ref{ricceq}) is linearisable by means of a change of variables $z=z(y)$ if and only if the Riccati equation admits a constant solution \cite{Ib08}. Additionally, we have proved that in such a case the change of variables can be described by means of a transformation of the type (\ref{Action}). Now, it can be checked that the example given in \cite{Ib08,RDM05} satisfies the above integrability condition. In these works, the differential equation \begin{equation*} \frac{dy}{dt}=P(t)+Q(t)y+k(Q(t)-kP(t))y^2, \end{equation*} was studied. The only interesting case is that with $k\neq 0$, because otherwise the equation is linear. In this case, $b_0(t)=P(t)$, $b_1(t)=Q(t)$ and $b_2(t)=k(Q(t)-kP(t))$. Hence, $b_1^2-4b_0b_2=(Q(t)-2kP(t))^2$ and, for an appropriate choice of sign, \begin{eqnarray*} \begin{aligned} \frac{-b_1\pm\sqrt{b_1^2-4 b_0b_2}}{2 b_0}=-k, \end{aligned} \end{eqnarray*} so the integrability condition (\ref{IntCond}) holds. Now we may fix $K_1=0$ and look for a solution of the condition $\det(K_1,\beta,-\delta K_2,\delta)=1$, which reads $k\delta\beta=-1$. As $k\neq 0$, we can take $\beta=-1/k$ to get from the above condition that $\delta=1$. 
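Before writing down the transformation, the algebra of this example can be confirmed numerically (a Python sketch; which sign of the square root yields $K=-k$ depends on the sign of $Q-2kP$, so both choices are tried):

```python
import math

# With b0 = P, b1 = Q and b2 = k(Q - kP): the discriminant is the perfect
# square (Q - 2kP)^2, one sign choice in (IntCond) gives the constant
# K = -k, and y = 1/K = -1/k is a constant particular solution.
def check(P, Q, k):
    b0, b1, b2 = P, Q, k * (Q - k * P)
    disc = b1 * b1 - 4 * b0 * b2
    assert abs(disc - (Q - 2 * k * P) ** 2) < 1e-9
    roots = [(-b1 + s * math.sqrt(disc)) / (2 * b0) for s in (1, -1)]
    assert min(abs(K + k) for K in roots) < 1e-9     # K = -k for one sign
    y = -1.0 / k                                     # candidate constant solution
    assert abs(b0 + b1 * y + b2 * y * y) < 1e-9
    return True

# sample frozen values with Q - 2kP of either sign
assert check(1.0, 5.0, 2.0) and check(2.0, 3.0, 2.0)
```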
Thus the transformation is the one associated with the vector $(0,-1/k,k,1)$, i.e. the linear fractional transformation \begin{equation*} y'=\frac{-1/k}{ky+1}\,, \end{equation*} which is the same as the one found in \cite{RDM05}. In this way we only have to obtain $b_0'$ from the condition $$ K_1=0=\delta\frac{b_0'}{b_0}+\beta\frac{b_1+\sqrt{b_1^2-4 b_0 b_2}}{2 b_0}, $$ to get the final linear differential equation, that is, \begin{equation*} \frac{dy'}{dt}=\frac{Q(t)-kP(t)}{k}+(Q(t)-2kP(t))y', \end{equation*} cf. \cite{CRL07}. \section{Conclusions and outlook} \indent It has been shown that previous works on the integrability of the Riccati equation can be explained from the unifying viewpoint of Lie systems. The transformations used in the study of the integrability conditions for these equations have been understood as induced by curves in $SL(2,\mathbb{R})$. We have investigated a Lie system characterising the $t$-dependent linear fractional transformations relating different Riccati equations associated, as Lie systems, with curves in $\mathfrak{sl}(2,\mathbb{R})$, and we have considered some simple instances of this system. These simplifications have been used to analyse known integrability conditions and to provide new ones. We have also shown that the system (\ref{FS}) is a good way to describe time-dependent linear fractional transformations, and we have found necessary and sufficient conditions for the linearisability or simplification of a Riccati equation through time-independent and time-dependent transformations obtained from curves in $SL(2,\mathbb{R})$. There are many ways of simplifying (\ref{Sys}), some of which have been developed here; other ways can be used to obtain new integrability conditions. Finally, the theory used here can be extended to any other Lie system to provide new, or recover known, integrability conditions. This will be developed in forthcoming works. 
\section*{Acknowledgements} \indent Partial financial support by research projects MTM2009-11154, MTM2009-08166-E and E24/1 (DGA) is acknowledged. JdL also acknowledges an F.P.U. grant from Ministerio de Educaci\'on y Ciencia.
https://arxiv.org/abs/1611.01182
Golden-Ratio-Based Rectangular Tilings
A golden-ratio-based rectangular tiling of the first quadrant of the Euclidean plane is constructed by drawing vertical and horizontal grid lines which are located at all even powers of $\phi$ along one axis, and at all odd powers of $\phi$ on the other axis. The vertices of the rectangles formed by these lines can be connected by rays starting at the origin having slopes that are odd powers of $\phi$. A refinement of this tiling results in the familiar one with horizontal and vertical grid lines at every power of $\phi$ along each axis. Geometric proofs of the convergence of several known power series in $\phi$ are provided.
\section{Introduction} Golden-ratio-based tilings that fill the first quadrant of the Euclidean plane have proven to be of interest, since they lead to some geometric methods for proving certain relationships obeyed by the golden ratio and the Fibonacci numbers. A tiling pattern introduced and discussed here provides an alternative to those that have already appeared in the literature \cite{bic,pos}. The pattern, albeit in a tilted and more skeletal form, first arose in a time series representation of spatial curvature oscillations occurring near the initial cosmological singularity of a Bianchi-IX vacuum spacetime. The golden ratio and its inverse arose directly from the Einstein equations, and surprisingly, the time series evolution of the curvature tensor components produced a pattern of self-similar golden rectangles \cite{bry}. In the present paper, the pattern has been rotated and extended to cover the first quadrant of the Euclidean plane. It could subsequently be reflected about the fundamental axes to fill the entire plane, but this paper discusses it as a first-quadrant tiling pattern. The tiling is formed from a grid of horizontal and vertical lines that intersect one axis at odd powers of $\phi$ and the other at even powers of $\phi$. It is called the AP$\phi$ tiling or the Alternating-Power-of-$\phi$ tiling. The pattern is called alternating because, when counting through integer (negative, zero and positive) powers of $\phi$, grid lines appear on one axis, then the other, in an alternating fashion. By adding extra lines so that each axis has a grid line at every integer power of $\phi$, the tiling can be subdivided to create a previously discussed tiling \cite{bic} (herein called the EP$\phi$ tiling or the Every-Power-of-$\phi$ tiling). In each of these tilings (AP$\phi$ and EP$\phi$), vertices can be connected by a concurrent family of rays (emanating from the origin) with slopes equal to integer powers of $\phi$. 
In this paper, these tilings (and extensions formed by further subdivisions) are used to prove the convergence of some known power series (in powers of $\phi$) and one formula relating $\pi$ and $\phi$. Two patterns provide visual illustrations of the breeding pattern of Fibonacci's rabbits \cite{fib}. \section{Some Formulas in $\phi$} The golden ratio, $\phi=\frac{1+\sqrt{5}}{2}$, has the unique property that $\phi^2=\phi+1$ (or $\phi^{-1}=\phi-1$). Multiplying through by any power of $\phi$ gives the recursion relation: \begin{equation} \label{PhiSplit}\phi^{n+2}=\phi^{n+1}+\phi^{n}\mbox{,} \end{equation} which will prove to be useful later. Four known power series formulas, \begin{eqnarray} \label{formula1} \phi^{n-1} + \phi^{n-3}+\phi^{n-5}+\cdots+ \phi^{n-2k+1}+\cdots & = & \phi^{n}\mbox{,} \\ \label{formula2} \phi^{n-1} + \phi^{n-2}+\phi^{n-3}+\cdots+\phi^{n-k}+\cdots & = & \phi^{n+1}\mbox{,} \\ \label{formula4} 1\phi^{n-1} + 1\phi^{n-3}+2\phi^{n-5}+\cdots+F_k\phi^{n-2k+1}+\cdots & = & \frac{1}{2} \phi^{n+2}\mbox{,} \\ \label{formula3} 1\phi^{n-1} + 2\phi^{n-2}+3\phi^{n-3}+\cdots+k\phi^{n-k}+\cdots & = & \phi^{n+3}\mbox{,} \end{eqnarray} where $k=1,2,3,\ldots$ and $F_k$ is the $k^{\textnormal {th}}$ Fibonacci number, will be proven geometrically. They are generalized versions of those discussed and geometrically proven in \cite{bic}. \section{Construction of the Tiling Pattern} Consider the first quadrant of 2D Euclidean space with an as yet unspecified origin. Construct a landscape-oriented golden rectangle $ABPQ$, such that point $B$ is closest to the origin and $\overline{BA}$ and $\overline{BP}$ are parallel to the $y$- and $x$-axes respectively and have lengths $\phi^{n}$ and $\phi^{n+1}$ respectively, as shown in Figure \ref{Construct}a. 
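As a numerical aside, formulas (\ref{formula1}), (\ref{formula2}), (\ref{formula4}) and (\ref{formula3}), together with the recursion (\ref{PhiSplit}), can be sanity-checked by truncating each series (a Python sketch):

```python
# Numeric sanity check of the four power series formulas (here with n = 0),
# truncating each series at enough terms for double precision.
phi = (1 + 5 ** 0.5) / 2
n, terms = 0, 200

fib = [1, 1]                      # F_1, F_2, ...
while len(fib) < terms:
    fib.append(fib[-1] + fib[-2])

s_odd   = sum(phi ** (n - 2 * k + 1) for k in range(1, terms))
s_all   = sum(phi ** (n - k) for k in range(1, terms))
s_fib   = sum(fib[k - 1] * phi ** (n - 2 * k + 1) for k in range(1, terms))
s_count = sum(k * phi ** (n - k) for k in range(1, terms))

assert abs(phi ** 2 - phi - 1) < 1e-12           # the defining relation
assert abs(s_odd - phi ** n) < 1e-9              # odd-step series
assert abs(s_all - phi ** (n + 1)) < 1e-9        # every-power series
assert abs(s_fib - 0.5 * phi ** (n + 2)) < 1e-9  # Fibonacci coefficients
assert abs(s_count - phi ** (n + 3)) < 1e-9      # counting coefficients
```

Multiplying through by a power of $\phi$ extends each check from $n=0$ to any other $n$.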
Extend $\overline{BA}$ and $\overline{PQ}$ in the positive y-direction for a distance $\phi^{n+2}$ to points $S$ and $R$ respectively, thus forming another golden rectangle $ASRQ$, sharing side $\overline{AQ}$ with $ABPQ$ as shown. \begin{figure} \begin{center} \includegraphics[width=10.0cm]{Construct.eps} \caption{Showing construction of a pattern which can be used to create the AP$\phi$ tiling pattern. The divider is shown as the thick zig-zag path.} \label{Construct} \end{center} \end{figure} Extend the two rectangle diagonals, $\overline{RA}$ and $\overline{QB}$ until they meet at $O$, as shown. The diagonals have slopes $\phi$ and $\phi^{-1}$ respectively. Now designate $O$ as the origin. Extend $\overline{AB}$ down to point $T$ on the $x$-axis. $\overline{OT}$ has some fixed but unknown length, $x$, and $\overline{TB}$ has length $x\cdot\phi^{-1}$. To find $x$, use the slope of $\triangle AOT$'s hypotenuse: \begin{equation} \frac{\phi^n+x\phi^{-1}}{x}=\phi\mbox{.} \\ \end{equation} Solving for $x$, one obtains \begin{equation} x=\frac{\phi^n}{\phi-\phi^{-1}}=\frac{\phi^n}{\phi-(\phi-1)}=\phi^{n}\mbox{.} \\ \end{equation} In Figure \ref{Construct}b, $x$ has been replaced with its now known value. Also shown in Figure \ref{Construct}b is an infinite zig-zag of horizontal and vertical line segments (hereafter called \emph{the divider}) descending towards the origin between the rays $\overrightarrow{OA}$ and $\overrightarrow{OB}$. To form the divider, start at $B$ and proceed horizontally leftward to meet $\overrightarrow{OA}$ at $C$, then downward to meet $\overrightarrow{OB}$ at $D$, then leftward to meet $\overrightarrow{OA}$ at $E$, and so on, down to convergence at the origin. The divider also alternates between $\overrightarrow{OA}$ and $\overrightarrow{OB}$ extending infinitely in the increasing direction. 
Using slope conditions on the right triangles ($\triangle ABC$, $\triangle BCD$, and so on) at the divider, it is clear that each segment of the divider is a factor of $\phi$ smaller than the one before when proceeding towards the origin. $\overline{OT}$ is the sum of the horizontal divider segments above it, proving (\ref{formula1}). Adding all the divider segments (horizontal and vertical) from $B$ down to $O$ gives \begin{equation} \phi^{n-1}+\phi^{n-2}+\phi^{n-3}+\cdots=\phi^{n-1}+\phi^{n}=\phi^{n+1}\mbox{,} \end{equation} proving (\ref{formula2}). The final step used (\ref{PhiSplit}). \begin{figure} \begin{center} \includegraphics[width=11.0cm]{APphi.eps} \caption{The AP$\phi$ tiling pattern (extended from Figure \ref{Construct}) showing power-of-$\phi$ grid lines alternating between the axes and the rays with odd power-of-$\phi$ slopes. The grid lines converge to the axes where details cannot be shown.} \label{APphi} \end{center} \end{figure} \section{The AP$\phi$ Tiling Pattern} Extending all divider line segments in Figure \ref{Construct}b to full first-quadrant size creates Figure \ref{APphi}, the AP$\phi$ tiling pattern. In each axis direction, grid lines are spaced every second power apart, as are lines $\overleftrightarrow{BP}$, $\overleftrightarrow{AQ}$ and $\overleftrightarrow{SR}$ in Figure \ref{Construct}a. This is shown by tick marks and spacings along each axis in Figure \ref{APphi}. The divider is again shown as a thick zig-zag of line segments. It is clear that in the AP$\phi$ tiling, all fundamental rectangles (formed by adjacent grid lines in each dimension) have integer-power-of-$\phi$ dimensions (i.e. are $\phi^j$ by $\phi^i$ rectangles, $i,j \in \Z$). Nestled underneath the divider is a series of golden rectangles with landscape orientation (labelled as $LG$ for Landscape Golden). 
These rectangles can be visualized as rectangular beads, strung by their diagonals, along the ray with slope $\phi^{-1}$, each bead being a factor of $\phi^2$ in linear measure larger than the one before it as one progresses away from the origin. Nestled above the divider is a similar set of golden rectangles strung in a similar manner along the ray with slope $\phi$. In this case they have portrait orientation (and so are labelled $PG$). Both sets together represent all golden rectangles whose side lengths are integer powers of $\phi$. Outside of these golden rectangles, the rays with slopes $\phi^{-3}$ and $\phi^{3}$ each exhibit the same string-of-beads pattern and together contain all rectangles with power-of-$\phi$ dimensions where the length-to-width ratio is $\phi^{3}$ (labelled appropriately as $LG^3$ and $PG^3$). This continues with the subsequent rays, which have rectangles with length-to-width ratios of $\phi^{5}$, $\phi^{7}$, and so on. Note that all rectangles below the divider have landscape orientation, while those above have portrait orientation. In the AP$\phi$ tiling, all rectangles with integer-power-of-$\phi$ dimensions for which the length-to-width ratio is a positive odd power of $\phi$ are represented exactly once, and those are the only ones represented. The AP$\phi$ pattern is invariant under even-power-of-$\phi$ dilatations about the origin and also under transformations that are a single composition of an odd-power-of-$\phi$ dilatation about the origin with a reflection about the ray $y=x$. \begin{figure} \begin{center} \includegraphics[width=9.0cm]{PiDiv4.eps} \caption{The beaded ray of portrait golden rectangles from the AP$\phi$ tiling, with markings and points for proof of a formula relating $\pi$ and $\phi$.} \label{PiDiv4} \end{center} \end{figure} \section{A Formula linking $\pi$ and $\phi$} In the AP$\phi$ tiling, the rays with slopes $\phi^3$ and $\phi^{-1}$ are separated by an angle of $45^\circ$, as the following shows. 
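This claim, equivalent to the identity $\pi/4=\arctan(\phi^3)-\arctan(\phi^{-1})$ established geometrically below, can first be confirmed numerically (a short Python check):

```python
import math

# Numeric check of the 45-degree separation between the slope-phi^3 and
# slope-phi^{-1} rays, i.e. pi/4 = arctan(phi^3) - arctan(phi^{-1}).
phi = (1 + 5 ** 0.5) / 2
assert abs(math.atan(phi ** 3) - math.atan(phi ** -1) - math.pi / 4) < 1e-12

# The key perpendicularity step used in the proof: a slope-(-phi) segment
# meets a slope-phi^{-1} segment at a right angle (slopes multiply to -1).
assert abs((-phi) * phi ** -1 + 1) < 1e-12
```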
The AP$\phi$ tiling in Figure \ref{PiDiv4} features the slope-$\phi$ ray whose beads are portrait-oriented golden rectangles. Consider the rectangle displaying diagonal $\overline{TR}$, which is an arbitrary one since the scaling factor, $n$, is unspecified. $\angle OQR=\angle RST=90^\circ$, $|\overline{OQ}|=|\overline{RS}|=\phi^{n+6}$ and $|\overline{QR}|=|\overline{ST}|=\phi^{n+5}\Rightarrow \triangle OQR \equiv \triangle RST$ (by the Side-Angle-Side congruence theorem) $\Rightarrow |\overline{OR}|=|\overline{RT}|$. $\overline{RT}$ has slope $-\phi$ and $\overline{OR}$ has slope $\phi^{-1}$, so the product of the slopes is $(-\phi) (\phi^{-1})=-1$, and hence $\overline{OR}\perp\overline{RT}$. Clearly $\triangle ORT$ is a 45-45-90 triangle and $\angle ROT = 45^\circ = \pi/4$ radians. Since $\angle ROT = \angle QOT - \angle QOR$, this implies \begin{equation} \frac{\pi}{4}=\arctan{(\phi^3)}-\arctan{(\phi^{-1})}, \end{equation} which is a formula relating $\pi$ and $\phi$. It is expected that there are geometric proofs for many other formulas hidden in the AP$\phi$ tiling. The cosmological time series pattern \cite{bry} that was the impetus to investigate the AP$\phi$ tiling was a tilted pattern where the slope-$\phi^{-1}$ ray coincided with the $x$-axis and the slope-$\phi^3$ ray coincided with the ray $y=x$. The self-similar golden rectangles produced by the time series are the shaded ones in Figure \ref{PiDiv4}. \section{Division of Power-of-$\phi$-Dimension Rectangles}\label{SectionRDivide} Figure \ref{RectSubDiv} shows a rectangle with arbitrary integer-power-of-$\phi$ dimensions ($\phi^i$ by $\phi^j$) being subdivided using the golden ratio relation, as given in (\ref{PhiSplit}). If the division is performed as shown (the smaller segment preceding the larger one), the rectangle is divided into the four sub-rectangles labelled $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$ and $\mathcal{D}$. 
It is easily seen that the diagonals of $\mathcal{B}$ and $\mathcal{C}$ have the same slope as the parent rectangle, whereas the diagonal of $\mathcal{A}$ ($\mathcal{D}$) has a slope that is a factor of $\phi$ greater (less) than that of the parent rectangle. Dividing up rectangles which have integer-power-of-$\phi$ dimensions produces sub-rectangles that also have integer-power-of-$\phi$ dimensions. If the rectangles are part of a tiling pattern, sub-patterns can be made by repeatedly subdividing rectangles. In general, the grid locations for the vertices of such sub-rectangles become more complicated due to the creation of sums of various integer powers of $\phi$ that do not conveniently simplify. Complicated but useful tiling patterns can be made in this way to aid in geometric proofs of the convergence of power series in $\phi$, as will be shown later in proofs of (\ref{formula4}). \begin{figure} \begin{center} \includegraphics[width=9.0cm]{RectSubDiv.eps} \caption{An arbitrary integer-power-of-$\phi$ rectangle (measuring $\phi^i$ by $\phi^j$) is divided according to the golden ratio in each dimension, with the smaller portion coming first in each instance. The sub-rectangles $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$ and $\mathcal{D}$ have diagonals with slopes of $\phi^{j-i+1}$, $\phi^{j-i}$, $\phi^{j-i}$ and $\phi^{j-i-1}$, respectively.} \label{RectSubDiv} \end{center} \end{figure} \section{The EP$\phi$ Tiling Pattern} \begin{figure} \begin{center} \includegraphics[width=11.0cm]{EPphi.eps} \caption{The EP$\phi$ tiling pattern has grid lines at every integer power of $\phi$ along each axis. This is produced by subdivision of the AP$\phi$ tiling (Figure \ref{APphi}) in the manner that Figure \ref{RectSubDiv} illustrates.} \label{EPphi} \end{center} \end{figure} Applying the division shown in Figure \ref{RectSubDiv} to every fundamental rectangle in the AP$\phi$ tiling produces the EP$\phi$ tiling (see Figure \ref{EPphi}). 
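The side-splitting rule and the sub-rectangle slopes listed in the caption of Figure \ref{RectSubDiv} can be checked numerically (a Python sketch with an arbitrary choice of $i$ and $j$):

```python
# Check the subdivision of a phi^i-by-phi^j rectangle: each side splits as
# phi^m = phi^(m-2) + phi^(m-1) (smaller part first), and the four
# sub-rectangles have the diagonal slopes listed in the caption.
phi = (1 + 5 ** 0.5) / 2
i, j = 3, -2                                  # an arbitrary sample rectangle

assert abs(phi ** i - (phi ** (i - 2) + phi ** (i - 1))) < 1e-12
assert abs(phi ** j - (phi ** (j - 2) + phi ** (j - 1))) < 1e-12

slope_A = phi ** (j - 1) / phi ** (i - 2)     # narrow, tall sub-rectangle
slope_B = phi ** (j - 1) / phi ** (i - 1)
slope_C = phi ** (j - 2) / phi ** (i - 2)
slope_D = phi ** (j - 2) / phi ** (i - 1)     # wide, short sub-rectangle

assert abs(slope_A - phi ** (j - i + 1)) < 1e-12
assert abs(slope_B - phi ** (j - i)) < 1e-12
assert abs(slope_C - phi ** (j - i)) < 1e-12
assert abs(slope_D - phi ** (j - i - 1)) < 1e-12
```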
It has grid lines located at each power of $\phi$ on each axis and is a pattern which commonly appears in the literature \cite{bic, pos}. If we consider any two adjacent rays of the AP$\phi$ tiling (Figure \ref{APphi}), they will have slopes $\phi^k$ and $\phi^{k+2}$ for the lower and upper rays respectively, for some odd $k$. Each ray can be visualized as supporting a sequence of self-similar rectangular beads with diagonals positioned along the ray. These beaded rays fit snugly together, one to the next. When these rectangular beads of Figure \ref{APphi} are subdivided (in the fashion of Figure \ref{RectSubDiv}), the $\mathcal{D}$ sub-rectangles from the upper-ray beads will have diagonal slopes of $\phi^{k+1}$ (one power less than the slope of the upper ray) and the $\mathcal{A}$ sub-rectangles from the lower-ray beads will also have diagonal slopes of $\phi^{k+1}$ (one power more than the slope of the lower ray). These sub-rectangles combine to form a beaded ray with slope $\phi^{k+1}$. \begin{figure} \begin{center} \includegraphics[width=10.0cm]{Colourful.eps} \caption{Showing the $\phi$-by-$\phi$ square next to the origin of the EP$\phi$ tiling, with shading appropriate for proving an instance of (\ref{formula3}).} \label{SequentCoefficients} \end{center} \end{figure} The EP$\phi$ tiling has the same beads-on-the-ray layout as the AP$\phi$ tiling, except that the EP$\phi$ tiling has beaded rays for every power of $\phi$, instead of only for the odd powers. Further subdivision of the rectangles of the EP$\phi$ tiling leads to $\mathcal{A}$ and $\mathcal{D}$ sub-rectangles with differing slopes (not aligning along a ray), so further subdivision patterns do not have the same beads-on-a-ray structure. The EP$\phi$ tiling has reflection symmetry about the ray $y=x$ and is invariant under any integer-power-of-$\phi$ dilatation about the origin. 
The beads along the $y=x$ ray consist of all possible squares that have integer-power-of-$\phi$ side-length, each size appearing exactly once. In the EP$\phi$ tiling, except for the squares, every other possible rectangle that has integer-powers-of-$\phi$ length and width occurs exactly twice, once in landscape orientation, and once in portrait orientation. Figure \ref{SequentCoefficients} (also seen in \cite{bic}) shows the EP$\phi$ tiling (with specific powers of $\phi$ on the axes), restricted to the $\phi$-by-$\phi$ section near the origin. The tiles have been coloured to illustrate a geometric proof of (\ref{formula3}) in the instance $n=-1$: \begin{equation} \label{SequentFormula} 1\phi^{-2} + 2\phi^{-3}+3\phi^{-4}+\cdots = \phi^{2}\mbox{.} \end{equation} It is sufficient to prove the formula for just one value of $n$, since multiplying through by a power of $\phi$ can then establish it for other $n$ values. The proof should be clear from the diagram, or one can refer to \cite{pos}. \section{The Fibonacci Relations} As discussed in Section \ref{SectionRDivide}, each tile in an integer-power-of-$\phi$ rectangular tiling pattern can easily be subdivided as many times as desired to form a customized tiling pattern in which all tiles have dimensions which are powers of the golden ratio. This is done in Figures \ref{Rabbtri} and \ref{Rabbzoid}, where self-similar patterns have been created to prove instances of (\ref{formula4}). For each of these figures, different tile sizes represent different terms of the series. To show that the number of tiles for each tile size is the correct Fibonacci number, Fibonacci's original concept of counting rabbit pairs has been employed \cite{fib}. In that paper, the Fibonacci numbers are the number of rabbit pairs in existence each month, where the first month has only a single pair at birth. Newly born pairs do not reproduce for their first month, but produce one more pair (capable of future breeding) in each month after that. 
It is assumed that all rabbits live forever. In each figure, the largest tile (labelled ``original pair at birth'') represents the rabbit pair that begins the breeding pattern at the start of Month 1. The rabbit pairs in each subsequent month are represented by tiles which are a factor of $\phi$ smaller than those of the month before. Since there is a one-to-one correspondence between rabbit pairs in the month and tiles of that month's size, in Month $n$ the total number of tiles of the appropriate size will be $F_n$, representing the number of rabbit pairs present in that month (counting all pre-existing and newly-born). The fact that each rabbit pair lives forever is represented by the series of tiles (lower edges aligned) to the left of each baby pair. Figure \ref{Rabbtri} illustrates the $n=-3$ instance of (\ref{formula4}): \begin{equation} \label{RabbtriFormula} 1\phi^{-4} + 1\phi^{-6}+2\phi^{-8}+\cdots+F_k\phi^{-2k-2}+\cdots = \frac{1}{2} \phi^{-1}\mbox{.} \end{equation} The tiles are all squares and correspond to rabbit pairs. For Month $k$, each square has area $\phi^{-2k-2}$ and there are $F_k$ of them, so the total area of Month $k$ squares is $F_k\phi^{-2k-2}$, the $k$th term of the LHS of (\ref{RabbtriFormula}); hence the LHS of (\ref{RabbtriFormula}) is the total area of all rabbit pair tiles in the Figure \ref{Rabbtri} tiling. Each of the square tiles in the pattern touches the $\triangle OQR$ hypotenuse, $\overline{OQ}$. As the breeding pattern progresses, the new points where $\overline{OQ}$ is touched are always proportionally spaced, so it is clear that the tiling pattern will, in its limit, completely fill $\triangle OQR$. $\triangle OQR$ has area $\frac{1}{2}|\overline{OR}||\overline{RQ}| = \frac{1}{2}(1)(\phi^{-1})=\frac{1}{2}\phi^{-1}$, which is the RHS of (\ref{RabbtriFormula}), completing the proof. The Figure \ref{Rabbtri} pattern appears in \cite{hut}, without mathematical analysis.
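As a numerical sanity check (not a substitute for the geometric proofs), the series (\ref{SequentFormula}) and (\ref{RabbtriFormula}) can be summed directly; the Python sketch below uses the Fibonacci convention $F_1=F_2=1$, matching the rabbit count:

```python
phi = (1 + 5 ** 0.5) / 2

# (SequentFormula): 1*phi^-2 + 2*phi^-3 + 3*phi^-4 + ... = phi^2
sequent = sum(k * phi ** (-k - 1) for k in range(1, 200))
assert abs(sequent - phi ** 2) < 1e-12

# (RabbtriFormula): F_1*phi^-4 + F_2*phi^-6 + ... + F_k*phi^(-2k-2) + ... = phi^-1 / 2
a, b = 1, 1          # F_1, F_2
rabbtri = 0.0
for k in range(1, 90):
    rabbtri += a * phi ** (-2 * k - 2)
    a, b = b, a + b
assert abs(rabbtri - 0.5 / phi) < 1e-12
print("both series agree with their closed forms")
```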
\begin{figure} \begin{center} \includegraphics[width=12.0cm]{RabbTri.eps} \caption{A tiling of squares that fills a triangular half of a golden rectangle, geometrically proving an instance of (\ref{formula4}) and providing a visualization of Fibonacci's rabbits.} \label{Rabbtri} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=12.0cm]{RabbTzoid.eps} \caption{A tiling of golden rectangles that fills a trapezoid, geometrically proving an instance of (\ref{formula4}) and providing a visualization of Fibonacci's rabbits.} \label{Rabbzoid} \end{center} \end{figure} Figure \ref{Rabbzoid} illustrates the $n=-2$ instance of (\ref{formula4}): \begin{equation} \label{RabbzoidFormula} 1\phi^{-3} + 1\phi^{-5}+2\phi^{-7}+\cdots+F_k\phi^{-2k-1}+\cdots = \frac{1}{2}\mbox{.} \end{equation} The tiles are all golden rectangles and correspond to rabbit pairs. For Month $k$, each rectangle has area $\phi^{-2k-1}$ and there are $F_k$ of them, so the total area of Month $k$ rectangles is $F_k\phi^{-2k-1}$, the $k$th term of the LHS of (\ref{RabbzoidFormula}); hence the LHS of (\ref{RabbzoidFormula}) is the total area of all rabbit pair tiles in the Figure \ref{Rabbzoid} tiling. Trapezoid $OPQR$ has area $\frac{1}{2}(|\overline{PQ}|+|\overline{OR}|)|\overline{RQ}| = \frac{1}{2}(1+\phi^{-1})(\phi^{-1})=\frac{1}{2}(\phi)(\phi^{-1}) = \frac{1}{2}$, which is the RHS of (\ref{RabbzoidFormula}). It remains to prove that the tiling pattern completely fills trapezoid $OPQR$. There is a sequence of baby-rabbit-pair tiles, where each tile in the sequence touches $\overline{PQ}$, the top side of the trapezoid. The sum of the widths of these tiles is $\phi^{-2}+\phi^{-4}+\phi^{-6}+\cdots=\phi^{-1}=|\overline{PQ}|$ (using (\ref{formula1}) with $n=0$), so the pattern fills the trapezoid along the top side, $\overline{PQ}$.
The sum of the widths of the tiles corresponding to the \emph{original rabbit pair} existing for every month is $\phi^{-2}+\phi^{-3}+\phi^{-4}+\cdots=\phi^{0}=1=|\overline{OR}|$ (using (\ref{formula2}) with $n=-1$), so the series of tiles representing the \emph{original pair} at all ages converges at $O$, meaning the pattern fills the base of the trapezoid, $\overline{OR}$. It remains to show that the tiling pattern fills the trapezoid up to the boundary, $\overline{OP}$. For each \emph{other rabbit pair} (i.e. not the original pair), there is also a series of tiles representing them at all ages. It starts at the lower right corner of the tile representing them as babies and extends leftward. The location of that initial corner depends on events that happened before the pair were born. The convergence point for a rabbit pair series is the same as the convergence point, $O$, of the original pair series, except that every time one of the rabbit pair's ancestors gives birth, there is (1) a rightward shift by \emph{the \textbf{width} of the ancestor tile for the month when the birth happened}, and (2) an upward shift by \emph{the \textbf{height} of the ancestor tile for the month when the birth happened}. The ratio of \emph{upward} shift to \emph{rightward} shift is always the golden ratio. The series convergence point for \emph{any rabbit pair} is shifted \emph{upward} and \emph{rightward} along the ray $y=\phi x$ whenever one of the rabbit pair's ancestors gives birth. But this ray is $\overline{OP}$. Therefore, the series for every rabbit pair always converges to a point on $\overline{OP}$, at the same height as the lower right corner of the tile representing the pair at birth. The vertical locations of the convergence points on $\overline{OP}$ are always proportionally spaced, so it is clear that the tiling pattern will, in its limit, completely fill trapezoid $OPQR$, so the proof is complete.
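The same kind of numerical sanity check (a Python sketch, separate from the geometric proof) confirms (\ref{RabbzoidFormula}) together with the two width sums used above:

```python
phi = (1 + 5 ** 0.5) / 2

# (RabbzoidFormula): F_1*phi^-3 + F_2*phi^-5 + ... + F_k*phi^(-2k-1) + ... = 1/2
a, b, area = 1, 1, 0.0
for k in range(1, 90):
    area += a * phi ** (-2 * k - 1)
    a, b = b, a + b
assert abs(area - 0.5) < 1e-12

# widths along the top side PQ: phi^-2 + phi^-4 + phi^-6 + ... = phi^-1
assert abs(sum(phi ** (-2 * k) for k in range(1, 90)) - 1 / phi) < 1e-12

# widths of the original pair's tiles: phi^-2 + phi^-3 + phi^-4 + ... = 1
assert abs(sum(phi ** (-k) for k in range(2, 90)) - 1.0) < 1e-12
print("trapezoid area and width sums verified")
```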
\section{Conclusion} Golden-ratio-based tiling patterns have been of interest from both an aesthetic and a mathematical point of view. The main pattern that has appeared in the literature (referred to as the EP$\phi$ tiling) is created by orthogonal grid lines appearing at every power of $\phi$ along both axes of the first quadrant of a Cartesian coordinate system. The AP$\phi$ tiling, introduced here, has grid lines at all powers of $\phi$, but they alternate between the two orthogonal axes, so one axis contains all the even power-of-$\phi$ locations and the other the odd. This results in a tiling where each tile is unique. The EP$\phi$ tiling is obtained from the AP$\phi$ tiling by subdivision of the AP$\phi$ tiles using the fundamental relation $\phi^2 = \phi +1$ (which is merely adding extra grid lines). The EP$\phi$ tiling has pairs of congruent tiles (one of each pair having landscape orientation and the other portrait), except for the set of self-similar $\phi^i \times \phi^i$ squares, where each square appears only once. This paper has noted that each of the two tilings has a discrete family of rays emanating from the origin and passing through the diagonals of the rectangular tiles. Each ray can be associated with a self-similar ``rectangular-beads-on-a-ray'' structure. The EP$\phi$ tiling consists of beaded-rays that have slopes of every integer power of $\phi$, while the AP$\phi$ tiling consists of beaded-rays with slopes of every odd integer power of $\phi$. The EP$\phi$ tiling pattern is useful for providing geometric proofs of the convergence of certain power series in $\phi$. The AP$\phi$ tiling's sparser structure (missing grid lines compared to the EP$\phi$ tiling) serves to highlight other properties and proofs. It facilitated recognition of the ``cosmological curvature pattern'', leading to a relation between $\pi$ and $\phi$.
By stepping back from the EP$\phi$ tiling and looking more generally at tile subdivisions, two self-similar tilings were obtained which aided visualization of the pattern of Fibonacci's breeding rabbit model and provided proofs of the convergence of a power series in $\phi$. It can be expected that other relationships remain hidden in the tilings, dividers and families of beaded-rays introduced here. It is hoped that the perspectives discussed in this manuscript will inspire others in the search for alternative mathematical relations that might have a basis in geometrically self-similar golden ratio patterns. \section{Acknowledgments} The authors wish to thank Holly Farrell (step-daughter of MB) for creating the rabbit pair images. One of us (DH) gratefully acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
https://arxiv.org/abs/1606.00269
New Analysis of Linear Convergence of Gradient-type Methods via Unifying Error Bound Conditions
This paper reveals that a common and central role, played in many error bound (EB) conditions and a variety of gradient-type methods, is a residual measure operator. On one hand, by linking this operator with other optimality measures, we define a group of abstract EB conditions, and then analyze the interplay between them; on the other hand, by using this operator as an ascent direction, we propose an abstract gradient-type method, and then derive EB conditions that are necessary and sufficient for its linear convergence. The former provides a unified framework that not only allows us to find new connections between many existing EB conditions, but also paves a way to construct new EB conditions. The latter allows us to claim the weakest conditions guaranteeing linear convergence for a number of fundamental algorithms, including the gradient method, the proximal point algorithm, and the forward-backward splitting algorithm. In addition, we show linear convergence for the proximal alternating linearized minimization algorithm under a group of equivalent EB conditions, which are strictly weaker than the traditional strongly convex condition. Moreover, by defining a new EB condition, we show Q-linear convergence of Nesterov's accelerated forward-backward algorithm without strong convexity. Finally, we verify EB conditions for a class of dual objective functions.
\section{Introduction} A standard assumption for proving linear convergence of gradient-type methods is strong convexity \cite{nesterov2004introductory}. In practice, however, strong convexity is too stringent. Moreover, various gradient-type methods for solving convex optimization problems have exhibited linear convergence in numerical experiments even when strong convexity is absent; see e.g. \cite{Hale2008fixed,Lai2012Augmented,xiao2013a}. Thus, one may wonder whether such a phenomenon can be explained theoretically, and whether there exist weaker alternatives to strong convexity that retain fast rates. A very powerful idea to address these questions is to connect error bound (EB) conditions with the convergence rate estimation of gradient-type methods. This idea has a long history dating back to 1963 when Polyak introduced an EB inequality as a sufficient condition for gradient descent to attain linear convergence \cite{Polyak1963Gradient}. In the same year, a wide class of inequalities, which includes Polyak's as a special case, was introduced by {\L}ojasiewicz \cite{lojasiewicz1963propriete}. In the recent manuscript \cite{Karimil2016linear}, the EB condition of Polyak-{\L}ojasiewicz's type was further developed for linear convergence of gradient and proximal gradient methods. The second type of EB conditions is due to Hoffman, who proposed an EB inequality for systems of linear inequalities \cite{Hoffman1952approximate} in 1952. Along this line, Luo and Tseng in the early 90's made several contributions connecting EB conditions of Hoffman's type with convergence analysis of descent methods \cite{Luo1990On,Luo1993Error}. Recently, global versions of EB conditions of Hoffman's type have attracted much attention \cite{So2013Non,Wang2014Iteration,Zhou2015A}. The third type of EB conditions is the quadratic growth condition (also called zero-order EB condition in \cite{Bolte2015From}), which might go back to the work \cite{zol1978on}.
It was recently rediscovered in the special case of convex functions, and widely used to derive linear convergence for many gradient-type methods as well \cite{Liu2014Asynchronous,Gong14linear,I2015Linear}. Moreover, a surge of interest has recently emerged in developing new EB conditions guaranteeing (global) linear convergence for various gradient-type methods. For example, the authors of \cite{Lai2012Augmented,zhang2013gradient,zhang2015restricted} proposed a restricted secant inequality (RSI), and developed the restricted strongly convex (RSC) property with parameter $\nu$ for analyzing linear convergence of (dual) gradient descent methods and Nesterov's restart accelerated methods; the authors of \cite{I2015Linear} proposed several relaxations of strong convexity that are sufficient for obtaining linear convergence for (projected and accelerated) gradient-type methods. Another line of recent work seeks connections between existing EB conditions. The authors of \cite{Drusvyatskiy2016Error} discussed the relationship between the quadratic growth condition and the EB condition of Hoffman's type (also called Luo-Tseng's type in \cite{Li2016Calculus}). Parallel to and partially influenced by the work \cite{Drusvyatskiy2016Error}, the author of this paper also established several new types of equivalence between the RSC property, the quadratic growth condition, and the EB condition of Hoffman's type in \cite{Zhang2015The}. The authors of \cite{Bolte2015From} showed the equivalence between the quadratic growth condition and the Kurdyka-{\L}ojasiewicz inequality. In addition, we note that works \cite{I2015Linear} and \cite{Karimil2016linear} also discussed the relationships among many of these EB conditions. Based on these two lines of recent developments, two natural questions arise. The first one is whether there is a unified framework for defining different EB conditions and analyzing connections between them.
The second one is whether these sufficient conditions guaranteeing linear convergence for gradient-type methods are also necessary. To answer these two questions, we will rely on a vital observation: a common and central role in many EB conditions and a variety of gradient-type methods is played by a residual measure operator. This observation immediately leads us to the following discoveries: \begin{enumerate} \item By linking the residual measure operator with other optimality measures, we define a group of abstract EB conditions. Then, we comprehensively analyze the interplay between them by means of the technique developed in \cite{Bolte2015From}, which plays a fundamental role in the corresponding error bound equivalences. The definition of abstract EB conditions not only unifies many existing EB conditions, but also helps us to construct new ones. The interplay between the abstract EB conditions allows us to find new connections between many existing EB conditions. \item By viewing the residual measure operator as an ascent direction, we propose an abstract gradient-type method, and then derive EB conditions that are necessary and sufficient for its linear convergence. The latter allows us to claim the weakest (that is, necessary and sufficient) conditions guaranteeing linear convergence for a number of fundamental algorithms, including the gradient method (applied to possibly nonconvex optimization), the proximal point algorithm, and the forward-backward splitting algorithm. The sufficiency of these EB conditions for linear convergence has been widely known; see e.g. \cite{Bolte2015From}. In contrast, necessity has received very little attention. \end{enumerate} In addition, we make the following contributions, concerning block coordinate gradient descent, Nesterov's acceleration, and the verification of EB conditions, respectively: \begin{enumerate} \item[3.]
We show linear convergence for the proximal alternating linearized minimization (PALM) algorithm under a group of equivalent EB conditions. It has recently been shown \cite{Shefi2016on,Hong2016iter,Li2016An} that PALM achieves sublinear convergence for convex problems and linear convergence for strongly convex problems. In this study, we show its linear convergence under strictly weaker conditions than strong convexity. \item[4.] By defining a new EB condition, we obtain Q-linear convergence of Nesterov's accelerated forward-backward algorithm, which generalizes the Q-linear convergence of Nesterov's accelerated gradient method, recently independently discovered in \cite{Karimi2016A} and \cite{Wilson2016A}. The new EB condition in some special cases can be viewed as a strictly weaker relaxation of strong convexity. In this sense, we show Q-linear convergence of Nesterov's accelerated method without strong convexity. Our proof idea is partially inspired by \cite{Attouch2015the} but might be of interest in its own right. \item[5.] We provide a new proof to show that a class of dual objective functions satisfy EB conditions, under slightly weaker assumptions, again by means of the technique developed in \cite{Bolte2015From}. The authors of \cite{Lai2012Augmented} gave the first proof for a special case of this class of functions, and the author of \cite{Frank2015linear} gave the first general proof by contradiction. \end{enumerate} The paper is organized as follows. In Section \ref{sec2}, we present the basic notation and some elementary preliminaries. In Section \ref{sec3}, we analyze necessary and sufficient conditions guaranteeing linear convergence for gradient descent. In Section \ref{sec4}, we define a group of abstract EB conditions, and analyze the interplay between them. In Section \ref{sec5}, we define an abstract gradient-type method, and derive EB conditions that are necessary and sufficient for guaranteeing its linear convergence.
In Section \ref{sec6}, we study linear convergence of the PALM algorithm. In Section \ref{sec7}, we study linear convergence of Nesterov's accelerated forward-backward algorithm. In Section \ref{sec8}, we verify EB conditions for a class of dual objective functions. Finally, in Section \ref{sec9}, we give a short summary of this paper, along with some discussion for future work. \section{Notation and preliminaries}\label{sec2} Throughout the paper, $\mathbb{R}^n$ will denote an $n$-dimensional Euclidean space associated with inner-product $\langle \cdot, \cdot\rangle$ and induced norm $\|\cdot\|$. For any nonempty $Q\subset \mathbb{R}^n$, we define the distance function by $d(x,Q):=\inf_{y\in Q}\|x-y\|.$ For a nonempty set $Q\subset \mathbb{R}^n$, we define the indicator function of $Q$ by \begin{eqnarray*} \delta_Q(x):= \left\{\begin{array}{lll} 0, &\textrm{if} ~~x\in Q; \\ +\infty, &\textrm{otherwise}. \end{array} \right. \end{eqnarray*} We say that $f$ is gradient-Lipschitz-continuous with modulus $L>0$ if \begin{equation*} \quad\forall x, y\in\mathbb{R}^n,~~\|\nabla f(x)-\nabla f(y)\|\leq L\|x-y\|, \label{Lip} \end{equation*} and $f$ is strongly convex with modulus $\mu>0$ if for any $\alpha\in [0,1]$, \begin{equation*} \quad\forall x, y\in\mathbb{R}^n,~~f(\alpha x+(1-\alpha)y)\leq \alpha f(x)+(1-\alpha)f(y)-\frac{1}{2}\mu\alpha(1-\alpha)\|x-y\|^2, \label{SC1} \end{equation*} or if (when it is differentiable) \begin{equation*} \quad\forall x, y\in\mathbb{R}^n,~~\langle \nabla f(x)-\nabla f(y), x-y\rangle \geq \mu \|x-y\|^2.\label{SC} \end{equation*} We will consider the following classes of functions. 
\begin{itemize} \item ${\mathcal{F}}^1(\mathbb{R}^n)$: the class of continuously differentiable convex functions from $\mathbb{R}^n$ to $\mathbb{R}$; \item ${\mathcal{F}}^{1,1}_{L}(\mathbb{R}^n)$: the class of gradient-Lipschitz-continuous convex functions from $\mathbb{R}^n$ to $\mathbb{R}$ with Lipschitz modulus $L$; \item ${\mathcal{S}}^{1,1}_{\mu,L}(\mathbb{R}^n)$: the class of gradient-Lipschitz-continuous and strongly convex functions from $\mathbb{R}^n$ to $\mathbb{R}$ with Lipschitz modulus $L$ and strongly convex modulus $\mu$; \item $\Gamma(\mathbb{R}^n)$: the class of proper and lower semicontinuous functions from $\mathbb{R}^n$ to $(-\infty, +\infty]$; \item $\Gamma_0(\mathbb{R}^n)$: the class of proper and lower semicontinuous convex functions from $\mathbb{R}^n$ to $(-\infty, +\infty]$. \end{itemize} Obviously, we have the following inclusions: $${\mathcal{S}}^{1,1}_{\mu,L}(\mathbb{R}^n)\subseteq {\mathcal{F}}^{1,1}_{L}(\mathbb{R}^n) \subseteq {\mathcal{F}}^1(\mathbb{R}^n),~~~\Gamma_0(\mathbb{R}^n)\subseteq\Gamma(\mathbb{R}^n).$$ It is convenient to denote by $\mathrm{Arg}\min f$ the set of optimal solutions of minimizing $f$ over $\mathbb{R}^n$, and to use ``$\arg\min f$'', if the solution is unique, to stand for the unique solution. If $\mathrm{Arg}\min f$ is nonempty, we let $\min f$ denote the minimum of $f$ over $\mathbb{R}^n$. The notion of a subdifferential plays a central role in (non)convex optimization. \begin{definition}[subdifferentials, \cite{Rockafellar2004Variational}] Let $f\in \Gamma(\mathbb{R}^n)$.
Its domain is defined by $${\mathrm{dom}} f:=\{x\in\mathbb{R}^n: f(x)<+\infty\}.$$ \begin{enumerate} \item[(a)] For a given $x\in {\mathrm{dom}} f$, the Fr\'{e}chet subdifferential of $f$ at $x$, written $\hat{\partial}f(x)$, is the set of all vectors $u\in \mathbb{R}^n$ which satisfy $$\liminf_{y\rightarrow x,\, y\neq x}\frac{f(y)-f(x)-\langle u, y-x\rangle}{\|y-x\|}\geq 0.$$ When $x\notin {\mathrm{dom}} f$, we set $\hat{\partial}f(x)=\emptyset$. \item[(b)] The (limiting) subdifferential of $f$ at $x\in \mathbb{R}^n$, written $\partial f(x)$, is defined through the following closure process $$\partial f(x):=\{u\in\mathbb{R}^n: \exists x^k\rightarrow x, f(x^k)\rightarrow f(x)~\textrm{and}~ \hat{\partial}f(x^k)\ni u^k \rightarrow u~\textrm{as}~k\rightarrow \infty\}.$$ \item[(c)] If we further assume that $f$ is convex, then the subdifferential of $f$ at $x\in {\mathrm{dom}} f$ can also be defined by $$\partial f(x):=\{v\in\mathbb{R}^n: f(z)\geq f(x)+\langle v, z-x\rangle,~~\forall ~z\in \mathbb{R}^n\}.$$ The elements of $\partial f(x)$ are called subgradients of $f$ at $x$. \end{enumerate} \end{definition} Denote the domain of $\partial f$ by ${\mathrm{dom}} \partial f:= \{x\in \mathbb{R}^n:\partial f(x)\neq \emptyset\}$. If $f\in \Gamma(\mathbb{R}^n)$ and $x\in {\mathrm{dom}} f$, then $\partial f(x)$ is closed (see Theorem 8.6 in \cite{Rockafellar2004Variational}); if $f\in \Gamma_0(\mathbb{R}^n)$ and $x\in {\mathrm{dom}} \partial f$, then ${\mathrm{dom}} \partial f\subset {\mathrm{dom}} f$ and $\partial f(x)$ is a nonempty closed convex set (see Proposition 16.3 in \cite{Bauschke2011Convex}). In the latter case, we denote by $\partial^0 f(x)$ the unique least-norm element of $\partial f(x)$ for $x\in {\mathrm{dom}} \partial f$, along with the convention that $\|\partial^0 f(x)\| =+\infty$ for $x\notin {\mathrm{dom}} \partial f$. Points whose subdifferential contains 0 are called critical points.
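A standard one-dimensional illustration of definition (c) (this example is not from the text): for $f(x)=|x|$ one has $\partial f(0)=[-1,1]$, so $0\in\partial f(0)$ makes the origin a critical point, and the least-norm element is $\partial^0 f(0)=0$. The membership test can be checked directly against the subgradient inequality:

```python
import numpy as np

# f(x) = |x|: v is a subgradient of f at 0 iff |z| >= f(0) + v*(z - 0) for all z,
# i.e. iff v lies in [-1, 1]; the least-norm element is 0.
zs = np.linspace(-5.0, 5.0, 1001)

def is_subgradient_at_zero(v):
    # test the convex subgradient inequality of definition (c) on a grid
    return bool(np.all(np.abs(zs) >= v * zs - 1e-12))

assert is_subgradient_at_zero(1.0) and is_subgradient_at_zero(-1.0)  # endpoints
assert is_subgradient_at_zero(0.0)   # 0 is a subgradient: the origin is critical
assert not is_subgradient_at_zero(1.5)
print("subdifferential of |x| at 0 is [-1, 1]")
```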
The set of critical points of $f$ is denoted by ${\mathbf{crit}} f$. If $f\in \Gamma_0(\mathbb{R}^n)$, then ${\mathbf{crit}} f=\mathrm{Arg}\min f$. Let $f\in\Gamma_0(\mathbb{R}^n)$; its Fenchel conjugate function $f^*:\mathbb{R}^n\rightarrow (-\infty, +\infty]$ is defined by $$f^*(x):=\sup_{y\in\mathbb{R}^n}\{\langle y, x\rangle -f(y)\},$$ and the proximal mapping operator by $${\mathbf{prox}}_{\lambda f}(x):=\arg\min_{y\in\mathbb{R}^n}\{f(y)+\frac{1}{2\lambda}\|y-x\|^2\}.$$ For each $x\in \overline{{\mathrm{dom}} f} $, it is well-known \cite{Brezis1973Op} that there is a unique absolutely continuous curve $\chi_x: [0, \infty)\rightarrow \mathbb{R}^n$ such that $\chi_x(0)=x$ and for almost every $t>0$, $$\dot{\chi}_x(t)\in -\partial f(\chi_x(t)).$$ We say that $\Omega\subset \mathbb{R}^n$ is $\partial f$-invariant if $$(\forall x\in \Omega\cap {\mathrm{dom}}~\partial f)(\textrm{for a.e. } t>0)~~\chi_x(t)\in \Omega.$$ This concept was proposed in \cite{Brezis1973Op} and recently used in \cite{Garrigos2017conv}. There are several types of $\Omega$ being $\partial f$-invariant; see Example 7.2 in \cite{Garrigos2017conv} and Section IV.4 in \cite{Brezis1973Op}. In Sections \ref{sec5} and \ref{sec8}, we will use the fact that the sublevel set $X_r:=\{x:f(x)\leq r\}$ is always $\partial f$-invariant for any function $f\in \Gamma_0(\mathbb{R}^n)$. Finally, we present some variational analysis tools. Let $\mathcal{T}$, ${\mathcal{E}}$, and ${\mathcal{E}}_i,$ $i=1,2$ be finite-dimensional Euclidean spaces. The closed ball around $x\in{\mathcal{E}}$ with radius $r>0$ is denoted by $\mathbb{B}_{\mathcal{E}}(x,r):=\{y\in{\mathcal{E}}:\|x-y\|\leq r\}$. The unit ball is denoted by $\mathbb{B}_{\mathcal{E}}$ for simplicity, and the open unit ball around the origin in ${\mathcal{E}}$ is denoted by $\mathbb{B}_{\mathcal{E}}^o$. A multi-function $S: \mathcal{E}_1\rightrightarrows\mathcal{E}_2$ is a mapping assigning each point in ${\mathcal{E}}_1$ to a subset of ${\mathcal{E}}_2$.
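Before turning to multi-functions, a brief illustration of the proximal mapping defined above (a standard example, not from the text): for $f(x)=|x|$ on $\mathbb{R}$, ${\mathbf{prox}}_{\lambda f}$ is the well-known soft-thresholding operator, which a brute-force minimization confirms:

```python
import numpy as np

lam = 0.7   # the lambda in prox_{lambda f} with f(x) = |x|

def prox_abs(x):
    # soft-thresholding: the known closed form of prox_{lam*|.|}
    return float(np.sign(x)) * max(abs(x) - lam, 0.0)

# brute-force check: minimize y -> |y| + (1/(2*lam)) * (y - x)^2 on a fine grid
ys = np.linspace(-5.0, 5.0, 200001)
for x in (-2.0, -0.3, 0.5, 3.0):
    obj = np.abs(ys) + (ys - x) ** 2 / (2 * lam)
    assert abs(ys[np.argmin(obj)] - prox_abs(x)) < 1e-3
print("prox of |.| is soft-thresholding")
```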
The graph of $S$ is defined by $$\verb"gph"(S):=\{(u,v)\in{\mathcal{E}}_1\times{\mathcal{E}}_2: v\in S(u)\}.$$ The inverse map $S^{-1}:\mathcal{E}_2\rightrightarrows\mathcal{E}_1$ is defined by setting $$S^{-1}(v):=\{u\in {\mathcal{E}}_1: v\in S(u)\}.$$ Calmness and metric subregularity have been considered in various contexts and under various names. Here, we follow the terminology of Dontchev and Rockafellar \cite{Dontchev2013Implicit}. \begin{definition}[\cite{Dontchev2013Implicit}, Chapter 3H] \begin{enumerate} \item [(a)] A multi-function $S: \mathcal{E}_1\rightrightarrows\mathcal{E}_2$ is said to be calm with constant $\kappa>0$ around $\bar{u}\in\mathcal{E}_1$ for $\bar{v}\in \mathcal{E}_2$ if $(\bar{u},\bar{v})\in \verb"gph"(S)$ and there exist constants $\epsilon, \delta>0$ such that \begin{equation}\label{calm1} S(u)\cap \mathbb{B}_{\mathcal{E}_2}(\bar{v},\epsilon)\subseteq S(\bar{u})+\kappa\cdot \|u-\bar{u}\|_2\mathbb{B}_{\mathcal{E}_2},~~\forall u\in \mathbb{B}_{\mathcal{E}_1}(\bar{u},\delta), \end{equation} or equivalently, \begin{equation}\label{calm2} S(u)\cap \mathbb{B}_{\mathcal{E}_2}(\bar{v},\epsilon)\subseteq S(\bar{u})+\kappa\cdot \|u-\bar{u}\|_2\mathbb{B}_{\mathcal{E}_2},~~\forall u\in \mathcal{E}_1. \end{equation} \item [(b)] A multi-function $S: \mathcal{E}_1\rightrightarrows\mathcal{E}_2$ is said to be metrically sub-regular with constant $\kappa>0$ around $\bar{u}\in\mathcal{E}_1$ for $\bar{v}\in \mathcal{E}_2$ if $(\bar{u},\bar{v})\in \verb"gph"(S)$ and there exists a constant $\delta>0$ such that \begin{equation}\label{mesub} d(u, S^{-1}(\bar{v}))\leq \kappa\cdot d(\bar{v},S(u)),~~\forall u\in \mathbb{B}_{\mathcal{E}_1}(\bar{u},\delta). 
\end{equation} \end{enumerate} \end{definition} Note that the calmness defined above is weaker than the local upper Lipschitz-continuity property \cite{Robinson1981some}: \begin{equation}\label{calm3} S(u) \subseteq S(\bar{u})+\kappa\cdot \|u-\bar{u}\|_2\mathbb{B}_{\mathcal{E}_2},~~\forall u\in \mathbb{B}_{\mathcal{E}_1}(\bar{u},\delta), \end{equation} which requires the multi-function $S$ to be calm around $\bar{u}\in{\mathcal{E}}_1$ with constant $\kappa>0$ for any $\bar{v}\in {\mathcal{E}}_2$. Recently, the local upper Lipschitz-continuity property \eqref{calm3} was employed in \cite{Frank2015linear} as a main assumption for verifying EB conditions of a class of dual objective functions. \section{The gradient descent: a necessary and sufficient condition for linear convergence}\label{sec3} In this section, we first identify the weakest condition ensuring that gradient descent converges linearly, and then we show that a number of existing linear convergence results can be recovered in a unified and transparent manner. This is a ``warm-up'' section for the forthcoming abstract theory in Sections \ref{sec4} and \ref{sec5}. We start by considering the following unconstrained optimization problem $$\Min_{x\in\mathbb{R}^n} f(x),$$ where $f: \mathbb{R}^n\rightarrow \mathbb{R}$ is a differentiable function achieving its minimum $\min f$ so that $\mathrm{Arg}\min f\neq \emptyset$. Note that $\mathrm{Arg}\min f$ is closed since $f$ is differentiable. For any $x\in\mathbb{R}^n$, the set of its projection points onto $\mathrm{Arg}\min f$, denoted by ${\mathcal{Y}}_f(x)$, is nonempty. Let $\{x_k\}_{k\geq 0}$ be generated by the gradient descent method \begin{equation}\label{gradmethod} x_{k+1}=x_k-h\cdot\nabla f(x_k), ~k\geq 0, \end{equation} where $h>0$ is a constant step size.
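As a concrete illustration of \eqref{gradmethod} (a hypothetical Python sketch, not from the text): for the quadratic $f(x)=\frac{1}{2}x^{T}Qx$ with $Q=\mathrm{diag}(\mu,L)$ and step size $h=2/(\mu+L)$, each iteration contracts the distance to the unique minimizer by exactly the factor $(L-\mu)/(L+\mu)$, the kind of linear decay quantified in the analysis that follows:

```python
import numpy as np

# f(x) = 0.5 * x^T Q x with Q = diag(mu, L); Argmin f = {0}, so d(x, Argmin f) = ||x||
mu, L = 1.0, 4.0
Q = np.diag([mu, L])
h = 2 / (mu + L)
rate = (L - mu) / (L + mu)   # contraction factor 0.6

x = np.array([1.0, 1.0])
for _ in range(25):
    x_new = x - h * (Q @ x)  # one gradient step
    # distance to the minimizer shrinks by exactly `rate` for this quadratic
    assert abs(np.linalg.norm(x_new) - rate * np.linalg.norm(x)) < 1e-12
    x = x_new
print("linear convergence at rate", rate)
```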
Observe that $d(x_k,\mathrm{Arg}\min f)$ measures how close $x_k$ is to $\mathrm{Arg}\min f$, and the ratio of $d(x_{k+1},\mathrm{Arg}\min f)$ to $d(x_k,\mathrm{Arg}\min f)$ measures how fast $x_k$ converges to $\mathrm{Arg}\min f$. Now, we analyze the ratio of $d(x_{k+1},\mathrm{Arg}\min f)$ to $d(x_k,\mathrm{Arg}\min f)$ as follows: \begin{align*} d^2(x_{k+1},\mathrm{Arg}\min f) &= \|x_{k+1}-x_{k+1}^\prime\|^2\leq \|x_{k+1}-x_k^\prime\|^2 \\ &= \|x_k-h\cdot\nabla f(x_k)-x_k^\prime\|^2 \\ & = d^2(x_k,\mathrm{Arg}\min f) -2h\langle \nabla f(x_k),x_k-x_k^\prime\rangle+h^2\|\nabla f(x_k)\|^2, \end{align*} where $x_{k+1}^\prime\in {\mathcal{Y}}_f(x_{k+1})$ and $x_{k}^\prime\in {\mathcal{Y}}_f(x_{k})$. To ensure that gradient descent converges linearly in the sense \begin{equation}\label{linearconv} d^2(x_{k+1},\mathrm{Arg}\min f) \leq \tau\cdot d^2(x_k,\mathrm{Arg}\min f),~k\geq 0, \end{equation} it suffices to require that for all $k\geq 0$ and $x_k^\prime\in {\mathcal{Y}}_f(x_{k})$, $$d^2(x_k,\mathrm{Arg}\min f) -2h\langle \nabla f(x_k),x_k-x_k^\prime\rangle+h^2\|\nabla f(x_k)\|^2\leq \tau\cdot d^2(x_k,\mathrm{Arg}\min f),$$ i.e., \begin{equation}\label{basic} \inf_{u\in {\mathcal{Y}}_f(x_{k})} \langle\nabla f(x_k),x_k-u\rangle \geq \frac{1-\tau}{2h}d^2(x_k,\mathrm{Arg}\min f)+\frac{h}{2}\|\nabla f(x_k)\|^2, ~k\geq0. \end{equation} It turns out that this sufficient condition is also necessary when the objective function $f$ belongs to ${\mathcal{F}}_L^{1,1}(\mathbb{R}^n)$ and the step size $h$ lies in some interval. \begin{proposition} \label{necesuff} Let $f$ be a differentiable function on $\mathbb{R}^n$ achieving its minimum $\min f$ so that $\mathrm{Arg}\min f\neq \emptyset$, and let $h>0$ and $\tau\in (0,1)$. \begin{enumerate} \item[(i)] If the condition \eqref{basic} holds, then the sequence $\{x_k\}_{k\geq0}$ generated by the gradient descent method \eqref{gradmethod} must converge linearly in the sense of \eqref{linearconv}.
\item[(ii)] Let $f(x)\in {\mathcal{F}}_L^{1,1}(\mathbb{R}^n)$. If the sequence $\{x_k\}$ generated by the gradient descent method \eqref{gradmethod} with $0< h\leq \frac{1-\sqrt{\tau}}{L}$ converges linearly as \eqref{linearconv}, then the condition \eqref{basic} must hold. \end{enumerate} \end{proposition} \begin{proof} Sufficiency was established in the derivation preceding the proposition. We now prove necessity. Pick $u_{k+1}\in {\mathcal{Y}}_f(x_{k+1})$ to derive that \begin{align}\label{add1} d(x_k, \mathrm{Arg}\min f) &\leq \|x_k-u_{k+1}\| \leq \|x_{k+1}-u_{k+1}\|+\|x_{k+1}-x_k\| \nonumber \\ &= d(x_{k+1}, \mathrm{Arg}\min f)+ h\|\nabla f(x_k)\|, ~k\geq0. \end{align} Combine \eqref{add1} and the linear convergence bound $$d(x_{k+1},\mathrm{Arg}\min f) \leq \sqrt{\tau} \cdot d(x_{k},\mathrm{Arg}\min f), ~k\geq0$$ to obtain \begin{equation}\label{rev1} (1-\sqrt{\tau})d(x_k, \mathrm{Arg}\min f) \leq h\|\nabla f(x_k)\|, ~k\geq0. \end{equation} According to Theorem 2.1.5 in \cite{nesterov2004introductory}, we know that $f(x)\in {\mathcal{F}}_L^{1,1}(\mathbb{R}^n)$ implies $$ \langle\nabla f(x_k),x_k-v_k\rangle \geq \frac{1}{L}\|\nabla f(x_k)\|^2, ~ v_k\in{\mathcal{Y}}_f(x_{k}), ~k\geq0.$$ For any $\alpha,\beta>0$ with $\alpha+\beta\leq 1$, we have that for any $ v_k\in{\mathcal{Y}}_f(x_{k})$, \begin{align*} \langle\nabla f(x_k),x_k-v_k\rangle &\geq \frac{\alpha}{L}\|\nabla f(x_k)\|^2+\frac{\beta}{L}\|\nabla f(x_k)\|^2\\ &\geq \frac{\alpha}{L}\|\nabla f(x_k)\|^2+\frac{\beta(1-\sqrt{\tau})^2}{Lh^2}d^2(x_k, \mathrm{Arg}\min f), ~k\geq0, \end{align*} where the last inequality follows by \eqref{rev1}. Thus, by letting $\frac{\alpha}{L}=\frac{h}{2}$ and $\frac{\beta(1-\sqrt{\tau})^2}{Lh^2}=\frac{1-\tau}{2h}$, we get the condition \eqref{basic}. Finally, we need $$ \alpha+\beta=\frac{Lh}{2}+\frac{Lh(1-\tau)}{2(1-\sqrt{\tau})^2}=\frac{hL}{1-\sqrt{\tau}}\leq 1,$$ which holds by the assumption $h\leq\frac{1-\sqrt{\tau}}{L}$. This completes the proof.
\end{proof} The condition \eqref{basic} means that if the steepest descent direction $-\nabla f(x)$ is well correlated with every direction towards optimality $u-x$, where $u\in {\mathcal{Y}}_f(x)$, then a linear convergence rate of the gradient descent method can be ensured. Conversely, if $f\in {\mathcal{F}}_L^{1,1}(\mathbb{R}^n)$, the gradient descent method converges linearly, and the step size lies in the interval $(0, \frac{1-\sqrt{\tau}}{L}]$, then $-\nabla f(x)$ must be well correlated with $u-x$. Now, we list some direct applications of this basic observation. In our first illustrative example, we consider functions in $S_{\mu, L}^{1,1}(\mathbb{R}^n)$. We first introduce an important property of such functions. \begin{lemma}[\cite{nesterov2004introductory}] If $f\in S_{\mu, L}^{1,1}(\mathbb{R}^n)$, then we have $$\quad\forall x, y\in\mathbb{R}^n,~~ \langle \nabla f(x)-\nabla f(y), x-y\rangle\geq \frac{\mu L}{\mu+L} \|x-y\|^2+\frac{1}{\mu+L}\|\nabla f(x)-\nabla f(y)\|^2.$$ \end{lemma} Let $x^*$ be the unique minimizer of $f\in S_{\mu, L}^{1,1}(\mathbb{R}^n)$; then $\mathrm{Arg}\min f=\{x^*\}$. Using the inequality above with $x=x_k, y=x^*$ and noting that $\nabla f(x^*)=0$ and $\|x_k-x^*\|= d(x_k,\mathrm{Arg}\min f)$, we obtain $$\langle\nabla f(x_k),x_k-x^*\rangle \geq \frac{\mu L}{\mu+L}d^2(x_k,\mathrm{Arg}\min f)+\frac{1}{\mu+L}\|\nabla f(x_k)\|^2, ~k\geq0.$$ To guarantee the condition \eqref{basic}, we only need $$\frac{\mu L}{\mu+L}\geq \frac{1-\tau}{2h}~~\textrm{and}~~\frac{1}{\mu+L}\geq \frac{h}{2},$$ which implies that $$ \frac{(1-\tau)(\mu +L)}{2\mu L}\leq h\leq \frac{2}{\mu+L},~~ \tau \geq \tau_0:=\left(\frac{L-\mu}{L+\mu}\right)^2.$$ The optimal linear convergence rate $\tau_0$ is obtained by setting $h=\frac{2}{\mu+L}$. This recovers the corresponding result in Nesterov's book; see Theorem 2.1.15 in \cite{nesterov2004introductory}. In our second illustrative example, we consider RSC functions \cite{zhang2013gradient,zhang2015restricted}.
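Before turning to it, the optimal rate in the first example admits a quick numerical sanity check. The sketch below (not from the paper; it assumes NumPy is available and all names are illustrative) runs gradient descent with $h=\frac{2}{\mu+L}$ on the quadratic $f(x)=\frac{1}{2}x^{\top}\mathrm{diag}(\mu,L)\,x\in S_{\mu,L}^{1,1}(\mathbb{R}^2)$; for this instance the squared distance to $\mathrm{Arg}\min f=\{0\}$ contracts by exactly $\tau_0$ at every step, since $|1-h\mu|=|1-hL|=\frac{L-\mu}{L+\mu}$.

```python
import numpy as np

# Sanity check (illustrative, not from the paper): gradient descent on the
# quadratic f(x) = 0.5 * x^T diag(mu, L) x, which lies in S_{mu,L}^{1,1}(R^2).
# With step size h = 2/(mu+L), both eigenvalue factors of I - h*H have
# absolute value (L-mu)/(L+mu), so d^2(x_k, Argmin f) contracts by exactly
# tau_0 = ((L-mu)/(L+mu))^2 per iteration.
mu, L = 1.0, 10.0
h = 2.0 / (mu + L)
tau0 = ((L - mu) / (L + mu)) ** 2

H = np.diag([mu, L])                 # Hessian of f; grad f(x) = H @ x
x = np.array([1.0, -2.0])
ratios = []
for _ in range(30):
    d2 = x @ x                       # d^2(x_k, Argmin f), since x* = 0
    x = x - h * (H @ x)              # gradient descent step
    ratios.append((x @ x) / d2)

assert all(abs(r - tau0) < 1e-12 for r in ratios)
```

The contraction is exact here only because both eigenvalues of $I-hH$ have the same modulus; for a general member of $S_{\mu,L}^{1,1}(\mathbb{R}^n)$ the rate $\tau_0$ is an upper bound.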
The following property can be viewed as a convex combination of the restricted strong convexity and the gradient-Lipschitz-continuity property; see Lemma 3 in \cite{zhang2015restricted}. \begin{lemma}[\cite{zhang2015restricted}]\label{comb} If $f\in {\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$ and $f$ is RSC with $0<\nu< L$, then for every $\theta\in [0,1]$ it holds that \begin{equation*} \quad\forall x \in\mathbb{R}^n,~~ \langle\nabla f(x), x-{x^\prime}\rangle \geq \frac{\theta}{L}\|\nabla f(x)\|^2+ (1-\theta)\nu d^2(x,\mathrm{Arg}\min f), \end{equation*} where $x^\prime$ is the unique projection point of $x$ onto $\mathrm{Arg}\min f$, which is well defined since $\mathrm{Arg}\min f$ is a nonempty closed convex set. \end{lemma} Similarly, to guarantee the condition \eqref{basic}, we only need $$(1-\theta)\nu \geq \frac{1-\tau}{2h}~~\textrm{and}~~\frac{\theta}{L}\geq \frac{h}{2},$$ which implies that $$\frac{1-\tau}{2(1-\theta)\nu}\leq h\leq \frac{2\theta}{L},~~ \tau \geq 1- \frac{4\theta(1-\theta)\nu}{L}\geq 1-\frac{\nu}{L}.$$ The optimal linear convergence rate $1-\frac{\nu}{L}$ is obtained at $\theta=\frac{1}{2}$ and $h=\frac{1}{L}$. This recovers the corresponding result in \cite{zhang2015restricted}, and the argument here is much simpler than that previously employed to derive the same result; see the proof of Theorem 2 in \cite{zhang2015restricted}. Our last example is a nonconvex minimization problem. The following definition can be viewed as a local version of Lemma \ref{comb}; it is therefore natural to expect local linear convergence under such a property. \begin{definition}[Regularity Condition, \cite{Candes2015Phase}] Let ${\mathcal{N}}$ be a neighborhood of $\mathrm{Arg}\min f$ and let $\alpha, \beta>0$.
We say that $f$ satisfies the regularity condition if $$ \quad \forall x\in{\mathcal{N}},~~\inf_{u\in{\mathcal{Y}}_f(x)}\langle \nabla f(x), x-u\rangle\geq \frac{1}{\alpha} d^2(x,\mathrm{Arg}\min f)+ \frac{1}{\beta}\|\nabla f(x)\|^2.$$ \end{definition} Again, to guarantee the condition \eqref{basic} locally, we only need $$\frac{1}{\alpha}\geq \frac{1-\tau}{2h}~~\textrm{and}~~\frac{1}{\beta}\geq \frac{h}{2},$$ which implies that $$\frac{(1-\tau)\alpha}{2}\leq h\leq\frac{2}{\beta},~~ \tau \geq \tau_0:=1-\frac{4}{\alpha\beta}.$$ The optimal linear convergence rate $\tau_0$ is obtained by setting $h=\frac{2}{\beta}$ and assuming $\alpha\beta>4$; the latter can usually be guaranteed, see e.g. the argument below Lemma 7.10 in \cite{Candes2015Phase}. Thus, we recover the corresponding result in \cite{Candes2015Phase}. The regularity condition provably holds for nonconvex optimization problems arising in phase retrieval and low-rank matrix recovery; interested readers may refer to \cite{Candes2015Phase} and \cite{Tu2016Low} for details. Observe that the right-hand side of \eqref{basic} has two terms. In order to better analyze such a condition, we decompose it into two parts: \begin{align*} &\inf_{u\in {\mathcal{Y}}_f(x_k)} \langle\nabla f(x_k),x_k-u\rangle \geq \theta_1\cdot d^2(x_k,\mathrm{Arg}\min f), \\ &\inf_{u\in {\mathcal{Y}}_f(x_k)} \langle\nabla f(x_k),x_k-u\rangle \geq \theta_2\cdot\|\nabla f(x_k)\|^2, \end{align*} where $\theta_1$ and $\theta_2$ are some positive parameters. This idea of separating the right-hand side of \eqref{basic} partially inspires us to consider new and abstract error bound conditions, which are the main content of the next section. \section{Abstract EB conditions: definition and interplay}\label{sec4} This section is divided into two parts. In the first part, we define a group of EB conditions in a unified and abstract way.
In the second part, we discuss the interplay among them, along with new connections among many existing EB conditions. \subsection{Definition of abstract EB conditions} The concept of a residual measure operator, given by the following definition, will play a key role in the forthcoming theory. \begin{definition} Let $\varphi\in \Gamma(\mathbb{R}^n)$ and $X\subset \mathbb{R}^n$. We say that $G_\varphi: X \rightarrow \mathbb{R}^n$ is a residual measure operator related to $\varphi$ and $X$ if it satisfies $$\{x\in X: G_{\varphi}(x)=0\}={\mathbf{crit}} \varphi.$$ In particular, if $\varphi$ is further assumed to be convex, the above condition can be written as $$\{x\in X: G_{\varphi}(x)=0\}=\mathrm{Arg}\min \varphi.$$ \end{definition} Now, we define a group of abstract EB conditions. \begin{definition}\label{maindef}Let $\varphi\in \Gamma(\mathbb{R}^n)$ be such that it achieves its minimum $\min \varphi$ and that its critical point set ${\mathbf{crit}} \varphi$ is nonempty and closed. Let $X\subset \mathbb{R}^n$, $\Omega\subset X$, and $G_\varphi$ be a residual measure operator related to $\varphi$ and $X$. Define the projection operator $\mathcal{P}_\varphi: \mathbb{R}^n\rightrightarrows \mathbb{R}^n$ onto ${\mathbf{crit}} \varphi$ by: $$\mathcal{P}_\varphi(x):={\mathrm{Arg}\min}_{u\in{\mathbf{crit}} \varphi}\|x-u\|.$$ We call $d(x,{\mathbf{crit}} \varphi)$ the point value error, $\varphi(x)-\min \varphi $ the objective value error, $\|G_{\varphi}(x)\|$ the residual value error, and $\inf_{x_p\in \mathcal{P}_\varphi(x)}\langle G_{\varphi}(x), x-x_p\rangle$ the least correlated error.
With these optimality measures, we say that \begin{enumerate} \item $\varphi$ satisfies the residual-point value EB condition with operator $G_\varphi$ and constant $\kappa>0$ on $\Omega$, abbreviated $(G_\varphi, \kappa, \Omega)$-(res-EB) condition, if: \begin{equation}\label{res-EB} \tag{res-EB} \forall~ x\in \Omega \cap {\mathrm{dom}} \varphi,~~\|G_{\varphi}(x)\| \geq \kappa\cdot d(x,{\mathbf{crit}} \varphi); \end{equation} \item $\varphi$ satisfies the correlated-point value EB condition with operator $G_\varphi$ and constant $\nu>0$ on $\Omega$, abbreviated $(G_\varphi, \nu, \Omega)$-(cor-EB) condition, if: \begin{equation}\label{cor-EB} \tag{cor-EB} \forall~ x\in \Omega \cap {\mathrm{dom}} \varphi,~~\inf_{x_p\in \mathcal{P}_\varphi(x)}\langle G_{\varphi}(x), x-x_p\rangle \geq \nu\cdot d^2(x,{\mathbf{crit}} \varphi); \end{equation} \item $\varphi$ satisfies the objective-point value EB condition with constant $\alpha>0$ on $\Omega$, abbreviated $(\varphi, \alpha, \Omega)$-(obj-EB) condition, if: \begin{equation}\label{obj-EB} \tag{obj-EB} \forall~ x\in \Omega \cap {\mathrm{dom}} \varphi,~~\varphi(x)- \min \varphi \geq \frac{\alpha}{2}\cdot d^2(x,{\mathbf{crit}} \varphi); \end{equation} \item $\varphi$ satisfies the residual-objective value EB condition with operator $G_\varphi$ and constant $\eta>0$ on $\Omega$, abbreviated $(G_\varphi, \eta, \Omega)$-(res-obj-EB) condition, if: \begin{equation}\label{res-obj-EB} \tag{res-obj-EB} \forall~ x\in \Omega \cap {\mathrm{dom}} \varphi,~~\|G_{\varphi}(x)\| \geq \eta\cdot\sqrt{\varphi(x)- \min \varphi }; \end{equation} \item $\varphi$ satisfies the correlated-residual value EB condition with operator $G_\varphi$ and constant $\beta>0$ on $\Omega$, abbreviated $(G_\varphi, \beta, \Omega)$-(cor-res-EB) condition, if: \begin{equation}\label{cor-res-EB} \tag{cor-res-EB} \forall~ x\in \Omega \cap {\mathrm{dom}} \varphi,~~\inf_{x_p\in \mathcal{P}_\varphi(x)}\langle G_{\varphi}(x), x-x_p\rangle \geq \beta\cdot 
\|G_{\varphi}(x)\|^2; \end{equation} \item $\varphi$ satisfies the correlated-objective value EB condition with operator $G_\varphi$ and constant $\omega>0$ on $\Omega$, abbreviated $(G_\varphi, \omega, \Omega)$-(cor-obj-EB) condition, if: \begin{equation}\label{cor-obj-EB} \tag{cor-obj-EB} \forall~ x\in \Omega \cap {\mathrm{dom}} \varphi,~~\inf_{x_p\in \mathcal{P}_\varphi(x)}\langle G_{\varphi}(x), x-x_p\rangle \geq \omega\cdot(\varphi(x)- \min \varphi ). \end{equation} \end{enumerate} We will refer to these EB conditions as global if $\Omega=\mathbb{R}^n$. For global EB conditions, we will omit $\Omega$ for simplicity. \end{definition} In order to gain some intuition for the abstract EB conditions, we point out their correspondences to existing notions: \eqref{res-EB} corresponds to the EB condition of Hoffman's type \cite{Luo1990On,Drusvyatskiy2016Error,Zhou2015A}, \eqref{res-obj-EB} to the Polyak-{\L}ojasiewicz type \cite{Bolte2015From,Karimil2016linear}, \eqref{obj-EB} to the quadratic growth condition \cite{Bolte2015From,Drusvyatskiy2016Error}, \eqref{cor-EB} to the RSI type \cite{zhang2013gradient}, and \eqref{cor-obj-EB} to the subgradient inequality for convex functions. The \eqref{cor-res-EB} condition, which will be used in Section \ref{sec5}, is a relaxation of the following property: $$\quad\forall x, y\in \mathbb{R}^n,~~\langle \nabla \varphi(x)-\nabla \varphi(y), x-y\rangle\geq \frac{1}{L}\|\nabla \varphi(x)-\nabla \varphi(y)\|^2,$$ which is equivalent to $\varphi\in{\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$; see Theorem 2.1.5 in \cite{nesterov2004introductory}. In our early manuscript \cite{zhang2016new}, we only gave rough global versions of the EB conditions in Definition \ref{maindef}. The definition above, obtained by incorporating the referee's comments and influenced by the recent work \cite{Garrigos2017conv}, is considerably more complete than the previous one.
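To make these optimality measures concrete, the following toy sketch (illustrative, not from the paper; it assumes NumPy) evaluates them for $\varphi(x)=\frac{1}{2}\|x\|^2$ with $G_\varphi=\nabla\varphi$, for which ${\mathbf{crit}}\,\varphi=\mathrm{Arg}\min\varphi=\{0\}$, and checks that all six EB conditions hold globally with the explicit constants indicated in the comments.

```python
import numpy as np

# Toy illustration: phi(x) = 0.5*||x||^2, crit phi = {0}, min phi = 0, and we
# take the residual measure operator G_phi = grad phi, i.e. G_phi(x) = x.
# All six abstract EB conditions then hold globally with the constants below
# (in fact with equality everywhere for this phi).
rng = np.random.default_rng(0)

phi  = lambda x: 0.5 * (x @ x)              # objective value (min phi = 0)
G    = lambda x: x                          # residual measure operator
dist = lambda x: np.linalg.norm(x)          # point value error d(x, crit phi)
corr = lambda x: G(x) @ x                   # least correlated error (x_p = 0)

kappa, nu, alpha, eta, beta, omega = 1.0, 1.0, 1.0, np.sqrt(2.0), 1.0, 2.0
for _ in range(100):
    x = rng.standard_normal(3)
    assert np.linalg.norm(G(x)) >= kappa * dist(x) - 1e-12        # res-EB
    assert corr(x) >= nu * dist(x) ** 2 - 1e-12                   # cor-EB
    assert phi(x) >= 0.5 * alpha * dist(x) ** 2 - 1e-12           # obj-EB
    assert np.linalg.norm(G(x)) >= eta * np.sqrt(phi(x)) - 1e-12  # res-obj-EB
    assert corr(x) >= beta * np.linalg.norm(G(x)) ** 2 - 1e-12    # cor-res-EB
    assert corr(x) >= omega * phi(x) - 1e-12                      # cor-obj-EB
```

For this strongly convex $\varphi$ every condition is tight; the point of the abstract definitions is that they remain meaningful when $\varphi$ is merely in $\Gamma(\mathbb{R}^n)$ and $G_\varphi$ is not a gradient.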
\subsection{Interplay between the EB conditions} We first show the interplay between the abstract EB conditions. The proof of equivalence will rely heavily on a technical result developed in \cite{Bolte2015From}. \begin{theorem}\label{mainresult} Let $\varphi\in \Gamma(\mathbb{R}^n)$ be such that it achieves its minimum $\min \varphi$ and that ${\mathbf{crit}} \varphi$ is nonempty and closed. Let $X\subset \mathbb{R}^n$, $\Omega\subset X$, and $G_\varphi$ be a residual measure operator related to $\varphi$ and $X$. Assume that the $(G_\varphi, \omega, \Omega)$-(cor-obj-EB) condition holds. Then, we have the following implications $$ \eqref{obj-EB}\Rightarrow \eqref{cor-EB}\Rightarrow \eqref{res-EB}\Rightarrow \eqref{res-obj-EB}.$$ One can take $\nu=\frac{\alpha\omega}{2}, \kappa=\nu, \eta=\sqrt{\kappa\omega}$ to show the above implications. If we further assume that $\varphi\in \Gamma_0(\mathbb{R}^n)$, $\Omega$ is $\partial \varphi$-invariant, and $G_\varphi$ satisfies \begin{equation}\label{assum1} \forall~ x\in \Omega \cap {\mathrm{dom}} \varphi,~~ \|G_{\varphi}(x)\|\leq \inf_{g\in\partial \varphi(x)}\|g\|, \end{equation} then we have the following equivalences $$\eqref{obj-EB}\Leftrightarrow \eqref{cor-EB}\Leftrightarrow \eqref{res-EB}\Leftrightarrow \eqref{res-obj-EB}.$$ For $\eqref{res-obj-EB}\Rightarrow\eqref{obj-EB}$, one can take $\alpha=\frac{1}{2}\eta^2$. \end{theorem} \begin{proof} We prove this theorem by showing the following implications $$\eqref{obj-EB}\Rightarrow \eqref{cor-EB}\Rightarrow \eqref{res-EB}\Rightarrow \eqref{res-obj-EB} \Rightarrow \eqref{obj-EB}.$$ Firstly, the implication $\eqref{obj-EB}\Rightarrow \eqref{cor-EB}$ follows from $$\inf_{x_p\in \mathcal{P}_\varphi(x)}\langle G_{\varphi}(x), x-x_p\rangle \geq \omega\cdot(\varphi(x)-\min{\varphi})\geq \frac{\alpha\omega}{2}\cdot d^2(x,{\mathbf{crit}} \varphi),$$ where the left inequality is \eqref{cor-obj-EB} and the right one is \eqref{obj-EB}.
Secondly, the implication $\eqref{cor-EB}\Rightarrow \eqref{res-EB}$ follows from a direct application of the Cauchy-Schwarz inequality to \eqref{cor-EB}. Thirdly, we show $\eqref{res-EB}\Rightarrow \eqref{res-obj-EB}$. By \eqref{cor-obj-EB} and \eqref{res-EB}, we derive that for all $x\in \Omega \cap {\mathrm{dom}} \varphi$, \begin{align*} \omega\cdot(\varphi(x)-\min \varphi ) &\leq \inf_{x_p\in \mathcal{P}_\varphi(x)}\langle G_{\varphi}(x), x-x_p\rangle \\ &\leq \inf_{x_p\in \mathcal{P}_\varphi(x)}\|G_{\varphi}(x)\| \|x-x_p\|=\|G_{\varphi}(x)\|\cdot d(x,{\mathbf{crit}} \varphi) \\ &\leq \kappa^{-1}\|G_{\varphi}(x)\|^2. \end{align*} Thus, it holds that for all $x\in \Omega \cap {\mathrm{dom}} \varphi$, $\|G_{\varphi}(x)\| \geq \sqrt{\kappa\omega}\cdot\sqrt{\varphi(x)-\min \varphi},$ which is just \eqref{res-obj-EB}. At last, we show $\eqref{res-obj-EB} \Rightarrow \eqref{obj-EB}$. The following is based on an argument used for proving Theorem 27 in \cite{Bolte2015From}; for the sake of completeness, we reproduce that proof in our particular case. First of all, take $x\in \Omega \cap {\mathrm{dom}} \varphi$ and recall that we have additionally assumed ${\mathbf{crit}} \varphi=\mathrm{Arg}\min \varphi$. Without loss of generality, we assume that $ \min\varphi =0$ and $x\notin \mathrm{Arg}\min \varphi$. According to the result about subgradient curves due to Br\'{e}zis \cite{Brezis1973Op} and Bruck \cite{Bruck1975Asymptotic}, recently used in \cite{Bolte2015From}, we can find a unique absolutely continuous curve $\chi_x:[0, +\infty)\rightarrow \mathbb{R}^n$ such that $\chi_x(0)=x$ and $$\dot{\chi}_x(t)\in -\partial \varphi(\chi_x(t))$$ for almost every $t>0$.
Moreover, $\chi_x(t)$ converges to some point $\hat{x}$ in $\mathrm{Arg}\min \varphi$ as $t\rightarrow +\infty$, the function $t \mapsto \varphi(\chi_x(t))$ is nonincreasing, and $$\lim_{t\rightarrow +\infty}\varphi(\chi_x(t))=\min \varphi = 0.$$ By the $\partial \varphi$-invariance of $\Omega$, we have $\chi_x(t)\in \Omega$, and hence $\chi_x(t)\in \Omega \cap {\mathrm{dom}} \varphi$ since $\varphi(\chi_x(t))$ is nonincreasing. Let $$T:=\inf\{t\in [0, +\infty): \varphi(\chi_x(t))=0\}.$$ We claim that $T>0$. Otherwise, $T=0$ and then, by the lower semicontinuity of $\varphi$, we can derive that $$ \varphi (x)=\varphi(\chi_x(0))\leq \liminf_{t\rightarrow 0^+}\varphi(\chi_x(t)) =0.$$ This contradicts $x\notin \mathrm{Arg}\min \varphi$. Now, combining \eqref{assum1} and \eqref{res-obj-EB}, we derive that $$\frac{\|\dot{\chi}_x(t)\|}{\sqrt{\varphi(\chi_x(t))}}\geq \frac{\inf_{g\in \partial \varphi(\chi_x(t))}\|g\|}{\sqrt{\varphi(\chi_x(t))}}\geq \frac{\|G_{\varphi}(\chi_x(t))\|}{\sqrt{\varphi(\chi_x(t))}}\geq \eta, ~~\forall t\in [0, T).$$ Observe that for $p, q\in [0, T)$ with $q\geq p$, \begin{align*} &\sqrt{\varphi(\chi_x(p))}-\sqrt{\varphi(\chi_x(q))}=\int_q^p \frac{d\sqrt{\varphi(\chi_x(t))}}{dt} dt\\ =&\frac{1}{2}\int_p^q \left(\varphi(\chi_x(t)) \right)^{-\frac{1}{2}}\|\dot{\chi}_x(t)\|^2 dt=\frac{1}{2} \int_p^q \frac{\|\dot{\chi}_x(t)\|}{\sqrt{\varphi(\chi_x(t))}} \|\dot{\chi}_x(t)\| dt\\ \geq &\frac{1}{2} \int_p^q \eta \|\dot{\chi}_x(t)\| dt=\frac{\eta}{2}\cdot \textrm{length}(\chi_x(t), p, q)\geq \frac{\eta}{2}\cdot\|\chi_x(p)-\chi_x(q)\|, \end{align*} where $\textrm{length}(\chi_x(t), p, q)$ stands for the length of the subgradient curve from $t=p$ to $t=q$.
By letting $p=0$ and $q\rightarrow +\infty$ if $T=+\infty$, and $q\rightarrow T$ if $T<+\infty$, we obtain $$\sqrt{\varphi(\chi_x(0))}=\sqrt{\varphi(x)}\geq \frac{\eta}{2}\cdot \|x-\hat{x}\|.$$ Therefore, for all $x\in \Omega \cap {\mathrm{dom}} \varphi$ we have $$\varphi(x)- \min \varphi \geq \frac{\eta^2}{4}\cdot \|x-\hat{x}\|^2\geq \frac{\eta^2}{4}\cdot d^2(x,\mathrm{Arg}\min \varphi)=\frac{\eta^2}{4}\cdot d^2(x,{\mathbf{crit}} \varphi),$$ which implies that \eqref{obj-EB} holds with $\alpha=\frac{\eta^2}{2}$. This completes the proof. \end{proof} As a direct consequence, we have the following corollary. \begin{corollary}\label{corr1} Let $\varphi\in \Gamma_0(\mathbb{R}^n)$ be such that it achieves its minimum $\min \varphi$ so that $\mathrm{Arg}\min \varphi \neq \emptyset$. Let $X\subset \mathbb{R}^n$, $\Omega\subset X$ be $\partial \varphi$-invariant, and $G^i_\varphi, i=1, 2$ be two different residual measure operators related to the same function $\varphi$ and the same subset $X$. We assume that $G^i_\varphi, i=1, 2$ satisfy \begin{equation}\label{assum10} \forall~ x\in \Omega \cap {\mathrm{dom}} \varphi,~~\|G^i_{\varphi}(x)\|\leq \inf_{g\in\partial \varphi(x)}\|g\|, \end{equation} and that the $(G^i_\varphi, \omega, \Omega)$-(cor-obj-EB) conditions hold. Then, we have \begin{center} $(G^1_\varphi, \kappa, \Omega)$-\eqref{res-EB}$\Leftrightarrow$ $(G^1_\varphi, \nu, \Omega)$-\eqref{cor-EB}$\Leftrightarrow$ $(G^1_\varphi, \eta, \Omega)$-\eqref{res-obj-EB}\\ $\Leftrightarrow$$(\varphi, \alpha, \Omega)$-\eqref{obj-EB} $\Leftrightarrow$ \\ $(G^2_\varphi, \kappa, \Omega)$-\eqref{res-EB}$\Leftrightarrow$ $(G^2_\varphi, \nu, \Omega)$-\eqref{cor-EB}$\Leftrightarrow$ $(G^2_\varphi, \eta, \Omega)$-\eqref{res-obj-EB}. \end{center} \end{corollary} Now, we list some cases where the equivalence between the EB conditions indeed holds.
\begin{corollary}\label{corr2} The EB conditions \eqref{cor-EB}, \eqref{res-EB}, \eqref{obj-EB}, and \eqref{res-obj-EB} are equivalent under each of the following situations: \begin{description} \item[case 1:] $\varphi \in{\mathcal{F}}^{1}(\mathbb{R}^n)$ achieves its minimum $\min \varphi$, $X=\mathbb{R}^n$ and $\Omega\subset X$ is $\nabla \varphi$-invariant, and $G_{\varphi} =\nabla \varphi$; \item[case 2:] $\varphi \in\Gamma_0(\mathbb{R}^n)$ achieves its minimum $\min \varphi$, $X={\mathrm{dom}} \partial\varphi$ and $\Omega\subset X$ is $\partial \varphi$-invariant, and $G_{\varphi} =\partial^0\varphi$; \item[case 3:] $\varphi =f +g $, where $f \in{\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$ and $g\in\Gamma_0(\mathbb{R}^n)$, achieves its minimum $\min \varphi$, $X=\mathbb{R}^n$ and $\Omega\subset X$ is $\partial \varphi$-invariant, and $G_{\varphi}=\mathcal{R}_t$, where $\mathcal{R}_t(x):=t^{-1}(x-x^+)$ with $t\in (0,\frac{1}{L}]$ and $x^+={\mathbf{prox}}_{tg}(x-t\nabla f(x))$. In addition, we assume that there exists a constant $0<\epsilon\leq \frac{2}{t}$ such that \begin{equation}\label{assum2} \|G_{\varphi}(x)\|^2\geq \epsilon (\varphi(x)-\varphi(x^+)). \end{equation} \end{description} \end{corollary} \begin{proof} First of all, in all the listed cases ${\mathbf{crit}} \varphi$ is nonempty since ${\mathbf{crit}} \varphi=\mathrm{Arg}\min \varphi\neq \emptyset$, and it is closed since $\varphi$ is a proper and lower semicontinuous function. Secondly, by the optimality conditions, one can easily verify that $G_\varphi$ is a residual measure operator in all the listed cases. We only need to verify the remaining assumptions in Theorem \ref{mainresult}. For both cases 1 and 2, the convexity of $\varphi$ implies the \eqref{cor-obj-EB} condition with $\omega=1$. In case 1, the assumption \eqref{assum1} holds trivially because $\partial \varphi(x)=\{\nabla \varphi(x)\}$. In case 2, the assumption \eqref{assum1} follows from the definition of $\partial^0\varphi(x)$.
Now, let us consider case 3. Since $f\in{\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$ and $g\in\Gamma_0(\mathbb{R}^n)$, the operator $G_{\varphi}$ satisfies the standard estimate $$~~\forall x, y\in\mathbb{R}^n,~~\varphi(x^+)\leq \varphi(y)+\langle G_{\varphi}(x), x-y\rangle -\frac{t}{2}\|G_{\varphi}(x)\|^2;$$ see e.g. Lemma 2.3 in \cite{beck2009fast} or Lemma 2 in the very recent work \cite{Attouch2017conv}. Since $\varphi$ also belongs to $\Gamma_0(\mathbb{R}^n)$, we can conclude that $\mathrm{Arg}\min \varphi$ is a nonempty closed convex set. Thus, by the projection theorem, there exists a unique projection point of $x$ onto $\mathrm{Arg}\min \varphi$, denoted by $x_p$. Using the inequality above with $y=x_p$ and the assumption \eqref{assum2}, we derive that \begin{align*} \langle G_{\varphi}(x), x-x_p \rangle & \geq \varphi(x^+)-\min \varphi +\frac{t}{2}\|G_{\varphi}(x)\|^2\\ &\geq \varphi(x^+)-\min \varphi +\frac{t\epsilon}{2}(\varphi(x)-\varphi(x^+)) \\ &= \frac{t\epsilon}{2}(\varphi(x)-\min \varphi)+(1-\frac{t\epsilon}{2})(\varphi(x^+)-\min \varphi)\\ &\geq \frac{t\epsilon}{2}(\varphi(x)-\min \varphi), \end{align*} from which the $(G_\varphi, \omega, \Omega)$-\eqref{cor-obj-EB} condition with $\omega=\frac{t\epsilon}{2}$ follows. The assumption \eqref{assum1} in this case was established in Theorem 3.5 in \cite{Drusvyatskiy2016Error} and Lemma 4.1 in \cite{Li2016Calculus}. This completes the proof. \end{proof} \begin{remark}\label{remarkadd} \begin{itemize} \item[(i)]In cases 1 and 2, from Theorem \ref{mainresult} we can see that if one only needs the implication $$\eqref{obj-EB}\Rightarrow \eqref{cor-EB}\Rightarrow \eqref{res-EB}\Rightarrow \eqref{res-obj-EB},$$ then the assumption on $\Omega$ can be removed.
\item[(ii)]In case 2, if $\Omega={\mathrm{dom}} \partial\varphi$, then $(\varphi, \alpha, {\mathrm{dom}} \partial\varphi)$-\eqref{obj-EB} is actually equivalent to $$ \forall~ x\in \mathbb{R}^n,~~\varphi(x)- \min \varphi \geq \frac{\alpha}{2}\cdot d^2(x, \mathrm{Arg}\min \varphi),$$ since ${\mathrm{dom}} \partial\varphi$ is a dense subset of ${\mathrm{dom}}\varphi$ according to Corollary 16.29 in \cite{Bauschke2011Convex} and $\varphi(x)=+\infty$ for $x\notin {\mathrm{dom}} \varphi$. \end{itemize} \end{remark} We note that while this work was under review, the authors of \cite{Karimil2016linear} independently obtained the equivalence of the EB conditions \eqref{cor-EB}, \eqref{res-EB}, \eqref{obj-EB}, and \eqref{res-obj-EB} for functions in ${\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$, and the authors of \cite{Garrigos2017conv} recently and independently obtained the equivalence of the EB conditions \eqref{res-EB}, \eqref{obj-EB}, and \eqref{res-obj-EB} for functions in $\Gamma_0(\mathbb{R}^n)$. The former is limited to ${\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$, while the latter mainly focuses on $\Gamma_0(\mathbb{R}^n)$ but does not consider \eqref{cor-EB}. Observe that the condition \eqref{assum2} is implied by the \eqref{res-obj-EB} condition, since \begin{equation*} \|G_{\varphi}(x)\|^2\geq \eta^2 (\varphi(x)-\min\varphi) \geq \eta^2 (\varphi(x)-\varphi(x^+)). \end{equation*} Note also that $\varphi=f+g\in \Gamma_0(\mathbb{R}^n)$ if $f \in{\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$ and $g\in\Gamma_0(\mathbb{R}^n)$. With a little effort, we can get the following result. \begin{corollary}\label{corr3} Let $\varphi=f+g$ with $f \in{\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$ and $g\in\Gamma_0(\mathbb{R}^n)$ achieve its minimum $\min \varphi$, and let $\Omega\subset \mathbb{R}^n$ be $\partial \varphi$-invariant and $t\in (0, \frac{1}{L}]$.
If the $(\mathcal{R}_t, \eta, \Omega)$-\eqref{res-obj-EB} condition holds, then each of the following conditions holds and hence they are all equivalent: \begin{center} $(\partial^0\varphi, \kappa, \Omega)$-\eqref{res-EB}$\Leftrightarrow$ $(\partial^0\varphi, \nu, \Omega)$-\eqref{cor-EB}$\Leftrightarrow$ $(\partial^0\varphi, \eta, \Omega)$-\eqref{res-obj-EB}\\ $\Leftrightarrow$$(\varphi, \alpha, \Omega)$-\eqref{obj-EB} $\Leftrightarrow$ \\ $(\mathcal{R}_t, \kappa, \Omega)$-\eqref{res-EB}$\Leftrightarrow$ $(\mathcal{R}_t, \nu, \Omega)$-\eqref{cor-EB}$\Leftrightarrow$ $(\mathcal{R}_t, \eta, \Omega)$-\eqref{res-obj-EB}. \end{center} \end{corollary} Based on the relationship established in Theorem 2 of \cite{Zhang2015The}, namely $(\varphi, \alpha, \Omega)$-\eqref{obj-EB} $\Leftrightarrow$ $(\mathcal{R}_t, \kappa, \Omega)$-\eqref{res-EB}$\Leftrightarrow$ $(\mathcal{R}_t, \nu, \Omega)$-\eqref{cor-EB}, together with case 2 of Corollary \ref{corr2}, we still have the following result even without taking the $(\mathcal{R}_t, \eta, \Omega)$-\eqref{res-obj-EB} condition as an assumption. \begin{corollary} Let $\varphi=f+g$ with $f \in{\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$ and $g\in\Gamma_0(\mathbb{R}^n)$ achieve its minimum $\min \varphi$, and let $\Omega\subset {\mathrm{dom}}\partial \varphi$ be $\partial \varphi$-invariant and $t\in (0, \frac{1}{L}]$. Then, we have \begin{center} $(\partial^0\varphi, \kappa, \Omega)$-\eqref{res-EB}$\Leftrightarrow$ $(\partial^0\varphi, \nu, \Omega)$-\eqref{cor-EB}$\Leftrightarrow$ $(\partial^0\varphi, \eta, \Omega)$-\eqref{res-obj-EB}\\ $\Leftrightarrow$$(\varphi, \alpha, \Omega)$-\eqref{obj-EB} $\Leftrightarrow$ $(\mathcal{R}_t, \kappa, \Omega)$-\eqref{res-EB}$\Leftrightarrow$ $(\mathcal{R}_t, \nu, \Omega)$-\eqref{cor-EB}. \end{center} \end{corollary} Note that the $(\mathcal{R}_t, \eta, \Omega)$-\eqref{res-obj-EB} condition is not involved in the equivalence above.
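As a concrete one-dimensional illustration of the residual $\mathcal{R}_t$ appearing above, consider (this is a sketch with illustrative names, not from the paper) $\varphi=f+g$ with $f(x)=\frac{1}{2}(x-2)^2$, so $L=1$, and $g(x)=|x|$, for which $\mathrm{Arg}\min\varphi=\{1\}$. The sketch checks that $\mathcal{R}_t(x)=t^{-1}\big(x-{\mathbf{prox}}_{tg}(x-t\nabla f(x))\big)$ vanishes exactly at the minimizer, as a residual measure operator must, and that the forward-backward iteration $x\leftarrow x^+$ converges linearly.

```python
# Illustrative 1-D example (not from the paper): phi = f + g with
# f(x) = 0.5*(x-2)^2 (gradient Lipschitz constant L = 1) and g(x) = |x|.
# For x > 0 the optimality condition reads x - 2 + 1 = 0, so Argmin phi = {1}.

def prox_abs(z, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    return (abs(z) - t if abs(z) > t else 0.0) * (1.0 if z >= 0 else -1.0)

def residual(x, t=0.5):
    """Prox-gradient residual R_t(x) = (x - x^+)/t with f'(x) = x - 2."""
    x_plus = prox_abs(x - t * (x - 2.0), t)
    return (x - x_plus) / t

assert abs(residual(1.0)) < 1e-12     # R_t vanishes at the minimizer x* = 1
assert residual(3.0) > 0.1            # and is nonzero away from it

x = 5.0
for _ in range(60):                   # forward-backward splitting, t = 1/(2L)
    x = prox_abs(x - 0.5 * (x - 2.0), 0.5)
assert abs(x - 1.0) < 1e-10           # convergence to Argmin phi
```

Near the minimizer the iteration map reduces to $x\mapsto \frac{1}{2}x+\frac{1}{2}$, so the distance to $\mathrm{Arg}\min\varphi$ is halved at each step, consistent with the linear rates discussed in the next section.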
The absence of the \eqref{res-obj-EB} condition from the above equivalence might explain why the assumption \eqref{assum2} can be avoided in existing related results. In all the corollaries above, the parameters involved in the different EB conditions can be set explicitly as in Theorem \ref{mainresult}; we omit the details here. \section{An abstract gradient-type method: linear convergence and applications}\label{sec5} In this section, we define an abstract gradient-type method by viewing the negative of the residual measure operator as a descent direction, and then establish a necessary and sufficient condition for linear convergence based on the abstract EB conditions defined before. The following main result generalizes Proposition \ref{necesuff}. \begin{theorem} \label{genecond} Let $\varphi\in \Gamma(\mathbb{R}^n)$ be such that it achieves its minimum $\min \varphi$ and that ${\mathbf{crit}} \varphi$ is nonempty and closed. Let $X\subset \mathbb{R}^n$, $\Omega\subset X$, and $G_\varphi$ be a residual measure operator related to $\varphi$ and $X$. Suppose that $\varphi$ satisfies the $(G_\varphi, \beta, \Omega)$-\eqref{cor-res-EB} condition. Define the abstract gradient-type method by $$x_{k+1}=x_k-h\cdot G_{\varphi}(x_k), ~k\geq 0,$$ with step size $h>0$ and arbitrary initial point $x_0\in\Omega$. Assume that $x_k\in \Omega, k\geq 0$. Let $\tau, \theta\in (0, 1)$. \begin{enumerate} \item[(i)] If $\varphi$ satisfies the $(G_\varphi, \nu, \Omega)$-\eqref{cor-EB} condition with $\nu<\frac{1}{\beta}$ and the following inequalities hold \begin{equation}\label{sizecond} \frac{1-\tau}{2\theta\nu}\leq h\leq 2(1-\theta)\beta,~~\tau\geq 1-4\theta(1-\theta)\beta\nu, \end{equation} then the abstract gradient-type method converges linearly in the sense that \begin{equation}\label{linearconv1} d^2(x_{k+1},{\mathbf{crit}} \varphi) \leq \tau\cdot d^2(x_k,{\mathbf{crit}} \varphi),~~k\geq 0. \end{equation} The optimal rate $\tau_0:=1-\beta\nu$ is obtained at $h=\beta$ and $\theta=\frac{1}{2}$.
\item[(ii)] Conversely, if the abstract gradient-type method converges linearly in the sense of \eqref{linearconv1}, then $\varphi$ satisfies the $(G_\varphi, \nu, \Omega)$-\eqref{cor-EB} condition with $\nu=\frac{\beta(1-\sqrt{\tau})^2}{h^2}$. \end{enumerate} \end{theorem} \begin{proof} First, we repeat the argument before \eqref{linearconv} to obtain that for $v_k\in \mathcal{P}_\varphi(x_k)$, $$d^2(x_{k+1},{\mathbf{crit}}\varphi) \leq d^2(x_k,{\mathbf{crit}}\varphi) -2h\langle G_{\varphi}(x_k),x_k-v_k\rangle+h^2\|G_{\varphi}(x_k)\|^2,~k\geq 0.$$ Take $\theta\in (0, 1)$ and then use a convex combination of the \eqref{cor-res-EB} and \eqref{cor-EB} conditions at $x=x_k$ to obtain $$ \inf_{v_k\in \mathcal{P}_\varphi(x_k)}\langle G_{\varphi}(x_k), x_k-v_k\rangle \geq \theta \nu\cdot d^2(x_k,{\mathbf{crit}}\varphi)+(1-\theta)\beta\cdot \|G_{\varphi}(x_k)\|^2, ~k\geq 0.$$ Therefore, we can derive that \begin{align*} d^2(x_{k+1},{\mathbf{crit}}\varphi) & \leq (1-2\theta \nu h)d^2(x_k,{\mathbf{crit}}\varphi)+(h^2-2h(1-\theta)\beta) \|G_{\varphi}(x_k)\|^2\\ &\leq \tau \cdot d^2(x_k,{\mathbf{crit}}\varphi), ~k\geq 0, \end{align*} where the second inequality follows from the condition \eqref{sizecond} on the step size. Obviously, the optimal linear convergence rate $\tau_0=1-\beta\nu$ can be obtained at $h=\beta, \theta=\frac{1}{2}$. Conversely, pick $u_{k+1}\in \mathcal{P}_\varphi(x_{k+1})$ to derive that \begin{align}\label{add2} d(x_k, {\mathbf{crit}} \varphi) &\leq \|x_k-u_{k+1}\| \leq \|x_{k+1}-u_{k+1}\|+\|x_{k+1}-x_k\|\nonumber \\ &= d(x_{k+1}, {\mathbf{crit}} \varphi)+ h\|G_{\varphi}(x_k)\|, ~k\geq0. 
\end{align} Combining \eqref{add2} with the linear convergence assumption $$d(x_{k+1},{\mathbf{crit}} \varphi) \leq \sqrt{\tau} \cdot d(x_{k},{\mathbf{crit}} \varphi), ~k\geq0,$$ we obtain $$(1-\sqrt{\tau})^2d^2(x_k,{\mathbf{crit}} \varphi)\leq h^2\|G_{\varphi}(x_k)\|^2, ~k\geq0.$$ Thus, together with the \eqref{cor-res-EB} condition, we can derive that $$\inf_{v_k\in \mathcal{P}_\varphi(x_k)}\langle G_{\varphi}(x_k), x_k-v_k\rangle \geq \beta\|G_{\varphi}(x_k)\|^2\geq\frac{\beta(1-\sqrt{\tau})^2}{h^2}d^2(x_k,{\mathbf{crit}} \varphi), ~k\geq0.$$ Observe that the starting point $x_0\in\Omega$ can be arbitrary. Therefore, the \eqref{cor-EB} condition with $\nu=\frac{\beta(1-\sqrt{\tau})^2}{h^2}$ holds. This completes the proof. \end{proof} With Theorem \ref{genecond} in hand, we now present the necessary and sufficient EB conditions that guarantee linear convergence of the gradient method, the proximal point algorithm, and the forward-backward splitting algorithm. These conditions, previously known to be sufficient for linear convergence (see e.g. Section 4 in \cite{Bolte2015From}), turn out to be necessary as well. We start with the gradient method, applied to possibly nonconvex optimization. \begin{corollary}\label{corr5} Let $f:\mathbb{R}^n\rightarrow\mathbb{R}$ be a gradient-Lipschitz-continuous function with modulus $L>0$. Assume that $f$ achieves its minimum $\min f$ and ${\mathbf{crit}} f=\mathrm{Arg}\min f \neq \emptyset$. Let $\epsilon>0$ be a fixed constant and set $\Omega=\{x: f(x)\leq \min f+\epsilon \}$. Let $\{x_k\}_{k\geq0}$ be generated by the gradient descent method \eqref{gradmethod} with $h=\frac{1}{L}$ and $x_0\in \Omega$. \begin{enumerate} \item[(i)] If $f$ satisfies the $(\nabla f, \nu, \Omega)$-\eqref{cor-EB} condition with $\nu<L$, then the gradient descent method \eqref{gradmethod} with $h=\frac{1}{L}$ converges linearly in the sense that \begin{equation}\label{linf} f(x_{k+1})-\min f\leq \left(1-(\frac{\nu}{L})^2\right)(f(x_k)-\min f),~k\geq 0.
\end{equation} \item[(ii)] If we further assume that $f$ is convex, then the gradient descent method \eqref{gradmethod} with $h=\frac{1}{L}$ attains the following linear convergence: \begin{equation}\label{linx} d^2(x_{k+1},\mathrm{Arg}\min f) \leq (1-\frac{\nu}{L})\cdot d^2(x_k,\mathrm{Arg}\min f),~k\geq 0. \end{equation} \item[(iii)] Conversely, if $f$ is convex and if starting from an arbitrary initial point $x_0\in\Omega$, the gradient descent method \eqref{gradmethod} with $h=\frac{1}{L}$ converges linearly like \eqref{linx} but replacing $1-\frac{\nu}{L}$ with $\tau$, then $f$ satisfies the $(\nabla f, \nu, \Omega)$-\eqref{cor-EB} condition with $\nu=L(1-\sqrt{\tau})^2$. \end{enumerate} \end{corollary} \begin{proof} We first show \eqref{linf} by modifying the argument due to Polyak \cite{Polyak1963Gradient} and recently highlighted in \cite{Karimil2015linear,Karimil2016linear}. The gradient-Lipschitz-continuity of $f$ implies \begin{equation}\label{Lipg} f(y)-f(x)-\langle \nabla f(x), y-x\rangle \leq \frac{L}{2}\|y-x\|^2, ~\forall ~x, y\in\mathbb{R}^n. \end{equation} Using this inequality with $y=x_{k+1}$ and $x=x_k$ and together with the update rule of gradient descent, we get \begin{equation}\label{inq001} f(x_{k+1})-f(x_k)\leq -\frac{1}{2L}\|\nabla f(x_k)\|^2, ~k\geq0, \end{equation} which implies ${x_k}\in \Omega, k\geq 0$. Using again the inequality \eqref{Lipg} with $y=x_k$ and $x=u_k\in \mathcal{P}_f(x_k)$, and noting that $u_k\in {\mathbf{crit}} f= \mathrm{Arg}\min f$ and hence $f(u_k)=\min f$ and $\nabla f(u_k)=0$, we have \begin{equation}\label{inq002} f(x_k)-\min f\leq \frac{L}{2}d^2(x_k,{\mathbf{crit}} f), ~k\geq0. 
\end{equation} Applying the Cauchy-Schwarz inequality to the $(\nabla f, \nu, \Omega)$-\eqref{cor-EB} condition, we obtain $$\forall x\in \Omega \cap{\mathrm{dom}} f, ~~\|\nabla f(x)\|\geq \nu \cdot d(x,{\mathbf{crit}} f).$$ Thus, combining the inequalities \eqref{inq001} and \eqref{inq002}, we have that $$f(x_{k+1})-f(x_k)\leq -\frac{1}{2L}\|\nabla f(x_k)\|^2\leq -\frac{\nu^2}{L^2}(f(x_k)-\min f), ~k\geq 0,$$ from which \eqref{linf} follows. Now, with the additional convexity assumption of $f$, we have $f\in {\mathcal{F}}^{1,1}_{L}(\mathbb{R}^n)$, which is equivalent to the following condition $$\langle \nabla f(x)-\nabla f(y), x-y\rangle\geq \frac{1}{L}\|\nabla f(x)-\nabla f(y)\|^2,~~x, y\in \mathbb{R}^n;$$ see Theorem 2.1.5 in \cite{nesterov2004introductory}. Using this inequality with $y\in \mathcal{P}_f(x)$, we obtain $$\inf_{y\in \mathcal{P}_f(x)} \langle \nabla f(x), x-y\rangle\geq \frac{1}{L}\|\nabla f(x)\|^2,~x\in \mathbb{R}^n,$$ which is just the $(\nabla f, \beta, \Omega)$-\eqref{cor-res-EB} condition with $\beta=\frac{1}{L}$. Therefore, the remaining results follow from Theorem \ref{genecond}. This completes the proof. \end{proof} \begin{remark} In Example 2 in \cite{zhang2015restricted}, we constructed a one-dimensional nonconvex function that satisfies all the conditions in Corollary \ref{corr5} ensuring \eqref{linf}. In this sense, \eqref{linf} is one of the few general results on global linear convergence for non-convex problems. We note that a similar phenomenon was observed by the authors of \cite{Karimil2016linear} under the Polyak-{\L}ojasiewicz condition. While ${\mathbf{crit}} f=\mathrm{Arg}\min f$ is a strong assumption, it is not the same as convexity but implies the weaker condition of invexity, which says that a function $f$ is invex if and only if every one of its critical points is a global minimizer. This assumption can be satisfied by some nonconvex optimization problems that have recently appeared in machine/deep learning; see e.g.
\cite{Yun2017global} and \cite{Zhou2017char}. \end{remark} Before we discuss the linear convergence of the proximal point algorithm (PPA), we introduce the following result. \begin{lemma}[\cite{Bauschke2011Convex,ruszczynski2006nonlinear}]\label{flamb} Let $f\in \Gamma_0(\mathbb{R}^n)$ and $\lambda>0$. Let the Moreau-Yosida regularization of $f$ be defined by $$f_\lambda(x):=\min_{u\in\mathbb{R}^n}\left\{f(u)+\frac{1}{2\lambda}\|x-u\|^2\right\}.$$ Then, \begin{itemize} \item $f_\lambda$ is real-valued, convex, and continuously differentiable and can be formulated as $$f_\lambda(x)=f({\mathbf{prox}}_{\lambda f}(x))+\frac{1}{2\lambda}\|x-{\mathbf{prox}}_{\lambda f}(x)\|^2;$$ \item Its gradient $$\nabla f_\lambda(x)=\lambda^{-1}(x-{\mathbf{prox}}_{\lambda f}(x))$$ is $\lambda^{-1}$-Lipschitz continuous. \item $\mathrm{Arg}\min f_\lambda=\mathrm{Arg}\min f$ and $\min f=\min f_\lambda$. \end{itemize} \end{lemma} Now, we are ready to present the result of linear convergence for PPA. \begin{corollary}\label{proxthm} Let $f\in \Gamma_0(\mathbb{R}^n)$ achieve its minimum $\min f$ and $\lambda>0$. Let $\epsilon>0$ be a fixed constant and set $\Omega=\{x: f(x)\leq \min f+\epsilon \}\cap{\mathrm{dom}} \partial f$. Starting from $x_0\in\Omega$, the PPA can be defined by $$x_{k+1}={\mathbf{prox}}_{\lambda f}(x_k)=x_k-\lambda\cdot\nabla f_\lambda(x_k),~k\geq 0.$$ \begin{enumerate} \item[(i)] If $f$ satisfies the $(f,\alpha, \Omega)$-\eqref{obj-EB} condition, then $f_\lambda$ satisfies the $(\nabla f_\lambda, \nu, \Omega)$-\eqref{cor-EB} condition with $\nu=\min\{\frac{\alpha}{4}, \frac{1}{4\lambda}\}$, and hence the PPA converges linearly in the sense that \begin{equation}\label{linppa} d^2(x_{k+1},\mathrm{Arg}\min f) \leq \left(1-\min\{\frac{\alpha\lambda}{4}, \frac{1}{4}\}\right)\cdot d^2(x_k,\mathrm{Arg}\min f),~k\geq 0. 
\end{equation} \item[(ii)] Conversely, if, starting from an arbitrary initial point $x_0\in\Omega$, the PPA converges linearly as in \eqref{linppa} but with the rate $1-\min\{\frac{\alpha\lambda}{4}, \frac{1}{4}\}$ replaced by a constant $\tau\in (0, 1)$, then $f$ satisfies the $(f, \alpha, \Omega)$-\eqref{obj-EB} condition with $\alpha= \frac{(1-\sqrt{\tau})^2}{2\lambda}$. \end{enumerate} \end{corollary} \begin{proof} First of all, we remark that \begin{equation}\label{fact1} {\mathbf{crit}} f=\mathrm{Arg}\min f=\mathrm{Arg}\min f_\lambda={\mathbf{crit}} f_\lambda. \end{equation} From Lemma \ref{flamb}, we have $f_\lambda\in {\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$ with $L=\lambda^{-1}$ and hence the $(\nabla f_\lambda, \beta,\Omega)$-\eqref{cor-res-EB} condition with $\beta=\lambda$ holds. We first prove that the $(f,\alpha, \Omega)$-\eqref{obj-EB} condition implies the $(f_\lambda,c, \Omega)$-\eqref{obj-EB} condition with $c=\min\{\frac{\alpha}{2}, \frac{1}{2\lambda}\}$. Indeed, letting $v={\mathbf{prox}}_{\lambda f}(x)$ and $v^\prime\in \mathcal{P}_f(v)$, for any $x\in \Omega \cap{\mathrm{dom}} f$ we can derive that \begin{align*} f_\lambda(x)-\min f_\lambda & = f({\mathbf{prox}}_{\lambda f}(x))+\frac{1}{2\lambda}\|x-{\mathbf{prox}}_{\lambda f}(x)\|^2-\min f \\ &\geq \frac{\alpha}{2}d^2({\mathbf{prox}}_{\lambda f}(x), {\mathbf{crit}} f)+\frac{1}{2\lambda}\|x-{\mathbf{prox}}_{\lambda f}(x)\|^2 \\ &= \frac{\alpha}{2}\|v-v^\prime\|^2 +\frac{1}{2\lambda}\|x-v\|^2\geq c\cdot(\|v-v^\prime\|^2+\|x-v\|^2)\\ &\geq \frac{c}{2}(\|v-v^\prime\|+\|x-v\|)^2\geq \frac{c}{2} \|x-v^\prime\|^2\geq \frac{c}{2} d^2(x,{\mathbf{crit}} f_\lambda), \end{align*} where the first inequality utilizes the fact that $f(v)+\frac{1}{2\lambda}\|v-x\|^2\leq f(x)$, which implies $v\in \Omega \cap{\mathrm{dom}} f$, and the last inequality follows from $v^\prime\in \mathcal{P}_f(v)\subset {\mathbf{crit}} f ={\mathbf{crit}} f_\lambda$.
From case 1 of Corollary \ref{corr2} and (i) in Remark \ref{remarkadd}, the $(f_\lambda,c, \Omega)$-\eqref{obj-EB} condition implies the $(\nabla f_\lambda, \nu, \Omega)$-\eqref{cor-EB} condition with $\nu=\min\{\frac{\alpha}{4}, \frac{1}{4\lambda}\}$. Therefore, \eqref{linppa} follows from Theorem \ref{genecond} and the fact \eqref{fact1}. Now, we turn to the necessity part. Invoking Theorem \ref{genecond} again, we conclude that $f_\lambda$ satisfies the $(\nabla f_\lambda, \nu, \Omega)$-\eqref{cor-EB} condition with $\nu=\frac{(1-\sqrt{\tau})^2}{\lambda}$, that is, \begin{equation}\label{cor-EB1} \forall~ x\in \Omega \cap {\mathrm{dom}} f_\lambda,~~\inf_{x_p\in \mathcal{P}_{f_\lambda}(x)}\langle \nabla f_\lambda(x), x-x_p\rangle \geq \nu\cdot d^2(x,{\mathbf{crit}} f_\lambda). \end{equation} Applying the Cauchy-Schwarz inequality and using the fact that ${\mathbf{crit}} f={\mathbf{crit}} f_\lambda$, we get \begin{equation}\label{res-EB1} \forall~ x\in \Omega \cap {\mathrm{dom}} f_\lambda,~~\|\nabla f_\lambda(x)\| \geq \nu\cdot d (x,{\mathbf{crit}} f). \end{equation} On the other hand, using the definition of $v={\mathbf{prox}}_{\lambda f}(x)$, which implies $\frac{1}{\lambda}(x-v)\in \partial f(v)$, and the monotonicity of $\partial f$, we obtain \begin{equation}\label{subff} \forall~ x\in {\mathrm{dom}} f,\forall~g\in \partial f(x),~~ \langle \frac{1}{\lambda}(x-v)-g, v-x\rangle \geq 0, \end{equation} which further implies that \begin{equation}\label{subed} \forall~ x\in {\mathrm{dom}} f,~~ \inf_{g\in \partial f(x)}\|g\|\geq \frac{1}{\lambda}\|x-v\|=\|\nabla f_\lambda(x)\|. \end{equation} Thus, combining \eqref{res-EB1} and \eqref{subed} and noting that ${\mathrm{dom}} f\subset {\mathrm{dom}} f_\lambda$ and $\|\partial^0 f(x)\| =+\infty$ for $x\notin {\mathrm{dom}} \partial f$, we obtain \begin{equation} \forall~ x\in \Omega \cap {\mathrm{dom}} f,~~\|\partial^0 f(x)\| =\inf_{g\in \partial f(x)}\|g\| \geq \nu\cdot d (x, {\mathbf{crit}} f).
\end{equation} This is just the $(\partial^0 f, \kappa, \Omega)$-\eqref{res-EB} condition with $\kappa=\nu$. Note that $\Omega$ is $\partial f$-invariant. Therefore, the $(f, \alpha, \Omega)$-\eqref{obj-EB} condition with $\alpha= \frac{(1-\sqrt{\tau})^2}{2\lambda}$ holds by case 2 of Corollary \ref{corr2}. \end{proof} \begin{remark} Linear convergence of the PPA was previously established based on different EB conditions, such as the {\L}ojasiewicz inequality (corresponding to \eqref{res-obj-EB}) in \cite{attouch2007on,attouch2013convergence,Bolte2015From}, the quadratic growth condition (corresponding to \eqref{obj-EB}) in Proposition 6.5.2 in \cite{bertsekas2011convex}, and the EB condition of Hoffman's type (corresponding to \eqref{res-EB}) in Theorem 2.1 in \cite{Luque1984Asy}. Our novelty here mainly lies in the necessity part, i.e., conclusion (ii). \end{remark} Finally, we discuss linear convergence of the forward-backward splitting (FBS) algorithm. Recall that $\mathcal{R}_{1/L}(x)=L\left(x-{\mathbf{prox}}_{\frac{1}{L}g}(x-\frac{1}{L}\nabla f(x))\right)$. \begin{corollary}\label{corr7} Let $\varphi =f +g $, where $f \in{\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$ and $g\in\Gamma_0(\mathbb{R}^n)$, achieve its minimum $\min \varphi$. Let $\epsilon>0$ be a fixed constant and set $\Omega=\{x: \varphi(x)\leq \min \varphi+\epsilon \}$. Starting from $x_0\in\Omega$, the FBS can be defined by $$x_{k+1}={\mathbf{prox}}_{\frac{1}{L}g}(x_k-\frac{1}{L}\nabla f(x_k))=x_k-\frac{1}{L}\cdot \mathcal{R}_{1/L}(x_k), ~k\geq0.$$ Denote $S_k:=\sum_{i=0}^\infty \|\mathcal{R}_{1/L}(x_{k+i})\|^2,~k\geq 0$.
\begin{enumerate} \item[(i)] If $\varphi$ satisfies the $(\mathcal{R}_{1/L}, \nu, \Omega)$-\eqref{cor-EB} condition with $\nu<2L$, then FBS converges linearly in the sense that \begin{equation}\label{linffb} \varphi(x_{k+1})-\min \varphi\leq (1-\frac{\nu}{2L}) (\varphi(x_k)-\min \varphi),~k\geq 0, \end{equation} \begin{equation}\label{linfb} d^2(x_{k+1},\mathrm{Arg}\min \varphi) \leq (1-\frac{\nu}{2L})\cdot d^2(x_k,\mathrm{Arg}\min \varphi),~k\geq 0, \end{equation} and \begin{equation}\label{lingb} S_{k+1}\leq (1-\frac{\nu}{2L})S_k ,~k\geq 0. \end{equation} \item[(ii)] Conversely, if, starting from an arbitrary initial point $x_0\in\Omega$, FBS converges linearly as in \eqref{linfb} but with $1-\frac{\nu}{2L}$ replaced by some $\tau\in(0,1)$, then $\varphi$ satisfies the $(\mathcal{R}_{1/L}, \nu, \Omega)$-\eqref{cor-EB} condition with $\nu=\frac{L}{2}(1-\sqrt{\tau})^2$. \end{enumerate} \end{corollary} \begin{proof} We rely on the following standard result (see again Lemma 2.3 in \cite{beck2009fast}): \begin{equation}\label{sand} \forall x, y\in\mathbb{R}^n,~~\langle \mathcal{R}_{1/L}(y), y-x\rangle \geq \varphi({\mathbf{prox}}_{\frac{1}{L}g}(y-\frac{1}{L}\nabla f(y)))-\varphi(x)+\frac{1}{2L}\|\mathcal{R}_{1/L}(y)\|^2. \end{equation} Using this result successively, first with $x=y=x_k$ and then with $y=x_k$ and $x=u_k\in\mathcal{P}_\varphi(x_k)$, together with the fact that $x_{k+1}={\mathbf{prox}}_{\frac{1}{L}g}(x_k-\frac{1}{L}\nabla f(x_k))$, we obtain the following sufficient decrease property \begin{equation}\label{sand0} \varphi(x_{k+1})-\varphi(x_k)\leq -\frac{1}{2L}\|\mathcal{R}_{1/L}(x_k)\|^2, ~k\geq 0, \end{equation} and $$\varphi(x_{k+1})-\min \varphi +\frac{1}{2L}\|\mathcal{R}_{1/L}(x_k)\|^2 \leq \langle \mathcal{R}_{1/L}(x_k), x_k-u_k\rangle, ~k\geq 0.$$ Note that \eqref{sand0} implies $x_k\in\Omega, k\geq 0$.
Applying the Cauchy-Schwarz inequality to the $(\mathcal{R}_{1/L}, \nu, \Omega)$-\eqref{cor-EB} condition, we obtain $$\forall~ x\in \Omega \cap {\mathrm{dom}} \varphi,~~\| \mathcal{R}_{1/L}(x) \|\geq \nu\cdot d(x,{\mathbf{crit}} \varphi),$$ from which the following inequality follows $$\langle \mathcal{R}_{1/L}(x_k), x_k-u_k\rangle\leq \frac{1}{\nu}\| \mathcal{R}_{1/L}(x_k)\|^2, ~k\geq 0.$$ Thus, we obtain \begin{equation}\label{sand1} \varphi(x_{k+1})-\min \varphi \leq (\frac{1}{\nu}-\frac{1}{2L})\|\mathcal{R}_{1/L}(x_k)\|^2, ~k\geq 0. \end{equation} Combining \eqref{sand0} and \eqref{sand1}, we get $$\varphi(x_{k+1})-\varphi(x_k)\leq -\frac{1}{2L}\left(\frac{1}{\nu}-\frac{1}{2L}\right)^{-1}(\varphi(x_{k+1})-\min \varphi), ~k\geq 0,$$ from which the announced result \eqref{linffb} follows. The convergence result \eqref{lingb} can also be derived from \eqref{sand0} and \eqref{sand1}. In fact, we first observe that for any integer $N>0$, it holds $$\varphi(x_{k+1})-\min \varphi\geq \sum_{i=1}^N(\varphi(x_{k+i})-\varphi(x_{k+i+1})), ~k\geq 0$$ and hence the sufficient decrease property \eqref{sand0} yields $$\varphi(x_{k+1})-\min \varphi\geq \sum_{i=1}^\infty(\varphi(x_{k+i})-\varphi(x_{k+i+1}))\geq \frac{1}{2L}\sum_{i=1}^\infty\|\mathcal{R}_{1/L}(x_{k+i})\|^2, ~k\geq 0.$$ Together with \eqref{sand1}, we derive that \begin{align*} \varphi(x_k)-\varphi(x_{k+1})&=\varphi(x_k)-\min \varphi-(\varphi(x_{k+1})-\min \varphi) \\ &\leq (\frac{1}{\nu}-\frac{1}{2L})\|\mathcal{R}_{1/L}(x_{k-1})\|^2-\frac{1}{2L}\sum_{i=1}^\infty\|\mathcal{R}_{1/L}(x_{k+i})\|^2, ~k\geq 1. \end{align*} Using \eqref{sand0} again, we obtain $$(\frac{1}{\nu}-\frac{1}{2L})\|\mathcal{R}_{1/L}(x_{k-1})\|^2\geq \frac{1}{2L}\sum_{i=0}^\infty\|\mathcal{R}_{1/L}(x_{k+i})\|^2, k\geq 1,$$ i.e., $$(\frac{1}{\nu}-\frac{1}{2L})(S_{k-1}-S_k)\geq \frac{1}{2L}S_k, k\geq 1,$$ from which the announced result \eqref{lingb} follows. 
Now, using the standard result \eqref{sand} with $x=y_p\in \mathcal{P}_\varphi(y)$, we get \begin{equation*} \langle \mathcal{R}_{1/L}(y), y-y_p\rangle \geq \varphi({\mathbf{prox}}_{\frac{1}{L}g}(y-\frac{1}{L}\nabla f(y)))-\varphi(y_p)+\frac{1}{2L}\|\mathcal{R}_{1/L}(y)\|^2. \end{equation*} Noting that $$\varphi({\mathbf{prox}}_{\frac{1}{L}g}(y-\frac{1}{L}\nabla f(y)))-\varphi(y_p)=\varphi({\mathbf{prox}}_{\frac{1}{L}g}(y-\frac{1}{L}\nabla f(y)))-\min \varphi\geq 0,$$ we obtain $$\forall y\in\mathbb{R}^n,~~\langle \mathcal{R}_{1/L}(y), y-y_p\rangle \geq \frac{1}{2L}\|\mathcal{R}_{1/L}(y)\|^2.$$ Thus, $\varphi$ satisfies the $(\mathcal{R}_{1/L},\beta, \Omega)$-\eqref{cor-res-EB} condition with $\beta=\frac{1}{2L}$. Therefore, the remaining results follow from Theorem \ref{genecond} and the fact that ${\mathbf{crit}} \varphi=\mathrm{Arg}\min \varphi$. \end{proof} \begin{remark} The results \eqref{linffb} and \eqref{linfb} were essentially shown in \cite{Drusvyatskiy2016Error} and \cite{Zhang2015The}, respectively, with different methods. We note that while this work was under review, the authors of \cite{Cruz2018on} improved these results under error bound conditions and weakened the assumptions on gradient Lipschitz continuity. Our novelty here lies in conclusion (ii), which was also recently and independently observed by the authors of \cite{Garrigos2017conv}. In addition, the result \eqref{lingb} also seems to be new and interesting. \end{remark} \section{Linear convergence of the PALM algorithm}\label{sec6} The PALM algorithm was recently introduced by the authors of \cite{bolte2014proximal} for a class of composite optimization problems in the general non-convex and non-smooth setting. The authors developed a convergence analysis framework relying on the Kurdyka-{\L}ojasiewicz (KL) inequality and proved that PALM converges globally to a critical point for problems with semi-algebraic data.
A global non-asymptotic sublinear rate of convergence of PALM for convex problems was obtained independently in \cite{Shefi2016on} and \cite{Hong2016iter}. Very recently, global linear convergence of PALM for strongly convex problems was obtained in \cite{Li2016An}. Note that PALM is called the block coordinate proximal gradient algorithm in \cite{Hong2016iter} and a cyclic block coordinate descent-type method in \cite{Li2016An}. In this section, we show linear convergence of PALM under EB conditions, which are strictly weaker than strong convexity. Let $x_{1:k}:=(x_1,x_2,\cdots, x_k)$ and denote $x_{1:(j-1)}^{(t+1)}:=(x_1^{(t+1)},\cdots, x_{j-1}^{(t+1)})$, $x_{(j+1):p}^{(t)}:=(x_{j+1}^{(t)},\cdots, x_{p}^{(t)})$, $\psi_j^{(t)}(x_j):=f(x_{1:(j-1)}^{(t+1)},x_j,x_{(j+1):p}^{(t)})$, and $\varphi_j^{(t)}(x_j):=\psi_j^{(t)}(x_j) +g_j(x_j).$ Starting from given initial points $\{x_j^{(0)}\}_{j=1}^p$, PALM generates $\{x_j^{(t+1)}\}_{j=1}^p$ by solving the collection of subproblems $$x_j^{(t+1)}=\arg\min_{x_j}\left\{\langle x_j-x_j^{(t)}, \nabla \psi_j^{(t)}(x_j^{(t)})\rangle +\frac{L_j}{2}\|x_j-x_j^{(t)}\|^2+g_j(x_j)\right\},~~j=1,\cdots, p,~t\geq 0.$$ The following is our main result in this section. \begin{theorem}\label{PALMthm} Consider the following composite convex nonsmooth minimization problem \begin{equation}\label{obj} \Min_{x\in \mathbb{R}^d} \varphi(x):=f(x_1,\cdots, x_p)+\sum^p_{j=1}g_j(x_j), \end{equation} where $\mathbb{R}^d\ni x=(x_1,\cdots, x_p)$ with the $j$-th block $x_j\in\mathbb{R}^{d_j}$, and $d=\sum^p_{j=1}d_j$. Set $g(x):=\sum^p_{j=1}g_j(x_j)$ so that ${\mathrm{dom}} g= \Pi_{j=1}^p{\mathrm{dom}} g_j$. With these notations, the objective function of \eqref{obj} reads as $\varphi=f+g$.
Assume that \begin{itemize} \item $f \in{\mathcal{F}}^{1,1}_L(\mathbb{R}^d)$, $g_j\in\Gamma_0(\mathbb{R}^{d_j}), j=1,\cdots, p$, and $\Omega \subset {\mathrm{dom}} \partial \varphi$; \item $f(x_{1:(j-1)},x_j,x_{(j+1):p})\in {\mathcal{F}}^{1,1}_{L_j}(\mathbb{R}^{d_j})$ for all $x_{1:(j-1)}$ and $x_{(j+1):p}$, $j=1,\cdots, p$; \item $\varphi=f+g$ achieves its minimum $\min \varphi$; \item $\varphi$ satisfies the $(\partial^0 \varphi, \eta, \Omega)$-\eqref{res-obj-EB} condition (or its equivalent conditions from case 2 of Corollary \ref{corr2}), which is strictly weaker than strong convexity. \end{itemize} Here, $L_j, j=1,\cdots, p$ and $L$ are positive constants. Let $\{x^{(t)}\}$ be generated by PALM and assume that $x^{(t)}\in \Omega, t\geq 0$. Then, PALM converges linearly in the sense that $$\varphi(x^{(t+1)})-\min \varphi\leq\left(\frac{\eta^2L_{\min}}{4pL^2+4L_{\max}^2}+1\right)^{-1}(\varphi(x^{(t)})-\min \varphi), ~~t\geq 0, $$ where $L_{\min}=\min_j L_j$ and $L_{\max}=\max_j L_j$. \end{theorem} \begin{proof} We divide the proof into three steps. \textbf{Step 1.} We prove that \begin{equation}\label{des1} \varphi(x^{(t)})-\varphi(x^{(t+1)})\geq \frac{L_{\min}}{2}\|x^{(t)}-x^{(t+1)}\|^2, ~t\geq 0. \end{equation} Let $G^{(t)}_j=L_j(x_j^{(t)}-x_j^{(t+1)})$.
By the definition of $x_j^{(t+1)}$ and Lemma 2.3 in \cite{beck2009fast}, we get $$\varphi_j^{(t)}(x_j^{(t)})-\varphi_j^{(t)}(x_j^{(t+1)})\geq\frac{1}{2L_j}\|G^{(t)}_j\|^2 =\frac{L_j^2}{2L_j}\|x_j^{(t)}-x_j^{(t+1)}\|^2= \frac{L_j}{2}\|x_j^{(t)}-x_j^{(t+1)}\|^2.$$ In addition, note that $$\sum^p_{j=1}\varphi_j^{(t)}(x_j^{(t)})=\sum^p_{j=1}\left(f(x^{(t+1)}_{1:(j-1)},x^{(t)}_{j:p})+g_j(x_j^{(t)})\right)$$ and $$\sum^p_{j=1}\varphi_j^{(t)}(x_j^{(t+1)})=\sum^p_{j=1}\left(f(x^{(t+1)}_{1:j},x^{(t)}_{(j+1):p})+g_j(x_j^{(t+1)})\right).$$ Thus, we derive that for $t\geq 0$, $$\varphi(x^{(t)})-\varphi(x^{(t+1)})=\sum^p_{j=1}\varphi_j^{(t)}(x_j^{(t)})-\sum^p_{j=1}\varphi_j^{(t)}(x_j^{(t+1)})\geq \sum^p_{j=1}\frac{L_j}{2}\|x_j^{(t)}-x_j^{(t+1)}\|^2,$$ from which \eqref{des1} follows. \textbf{Step 2.} The $(\partial^0 \varphi, \eta, \Omega)$-\eqref{res-obj-EB} condition at $x=x^{(t+1)}$ reads as $$\varphi(x^{(t+1)})-\min \varphi\leq \frac{\|\partial^0 \varphi(x^{(t+1)})\|^2}{\eta^2}.$$ At the $(t+1)$-th iteration, there exists $\xi_j^{(t+1)}\in\partial g_j(x_j^{(t+1)})$ satisfying the optimality condition: $$\nabla_j f(x_{1:(j-1)}^{(t+1)},x_j^{(t)},x_{(j+1):p}^{(t)})+L_j(x_j^{(t+1)}-x_j^{(t)})+\xi_j^{(t+1)}=0.$$ Here and below, we denote the partial gradient $\nabla_{x_j}f(x)$ by $\nabla_j f(x)$ for notational simplicity. Let $\xi^{(t+1)}=(\xi_1^{(t+1) },\cdots, \xi_p^{(t+1)})$. 
Then, $$\nabla f(x^{(t+1)})+\xi^{(t+1)}\in \partial \varphi(x^{(t+1)})$$ and hence $$\varphi(x^{(t+1)})- \min\varphi\leq \frac{\|\partial^0 \varphi(x^{(t+1)})\|^2}{\eta^2}\leq \frac{\|\nabla f(x^{(t+1)})+\xi^{(t+1)}\|^2}{\eta^2}.$$ Using the optimality condition and the fact of $f(x)\in {\mathcal{F}}^{1,1}_L(\mathbb{R}^d)$, we derive that \begin{align*} \|\nabla f(x^{(t+1)})+\xi^{(t+1)}\|^2 = & \sum^p_{j=1}\| \nabla_jf(x^{(t+1)})-\nabla_j f(x_{1:(j-1)}^{(t+1)},x_j^{(t)},x_{(j+1):p}^{(t)})-L_j(x_j^{(t+1)}-x_j^{(t)})\|^2\\ \leq &\sum^p_{j=1}2\| \nabla_jf(x^{(t+1)})-\nabla_j f(x_{1:(j-1)}^{(t+1)},x_j^{(t)},x_{(j+1):p}^{(t)})\|^2+\sum^p_{j=1}2L_j^2\|x_j^{(t+1)}-x_j^{(t)}\|^2\\ \leq &\sum^p_{j=1}2\| \nabla f(x^{(t+1)})-\nabla f(x_{1:(j-1)}^{(t+1)},x_j^{(t)},x_{(j+1):p}^{(t)})\|^2+ \sum^p_{j=1}2L_j^2\|x_j^{(t+1)}-x_j^{(t)}\|^2\\ \leq &\sum^p_{j=1}2L^2\| x_{j:p}^{(t+1)}-x_{j:p}^{(t)}\|^2+ \sum^p_{j=1}2L_j^2\|x_j^{(t+1)}-x_j^{(t)}\|^2\\ \leq &(2pL^2+2L_{\max}^2)\|x^{(t+1)}-x^{(t)}\|^2. \end{align*} Therefore, we obtain \begin{equation}\label{des2} \varphi(x^{(t+1)})-\min \varphi\leq \frac{(2pL^2+2L_{\max}^2)}{\eta^2}\|x^{(t+1)}-x^{(t)}\|^2. \end{equation} \textbf{Step 3.} Combining \eqref{des1} and \eqref{des2}, we derive that \begin{align*} \varphi(x^{(t)})-\min \varphi= &\left( \varphi(x^{(t)})- \varphi(x^{(t+1)})\right)+ \left(\varphi(x^{(t+1)})-\min \varphi\right)\\ \geq &\frac{L_{\min}}{2}\|x^{(t)}-x^{(t+1)}\|^2 +\left(\varphi(x^{(t+1)})-\min \varphi\right)\\ \geq &\left(\frac{\eta^2L_{\min}}{4pL^2+4L_{\max}^2}+1\right) \left(\varphi(x^{(t+1)})-\min \varphi\right), \end{align*} from which the claimed result follows. This completes the proof. \end{proof} On one hand, the $(\varphi, \alpha, \Omega)$-\eqref{obj-EB} condition is obviously weaker than strong convexity. On the other hand, we can easily construct functions that satisfy \eqref{obj-EB} but fail to be strongly convex. 
For example, the composition $f(Ax)$, where $f(\cdot)$ is strongly convex and $A$ is rank deficient, is such a function. This explains why we say that the $(\partial^0 \varphi, \eta, \Omega)$-\eqref{res-obj-EB} condition, which is equivalent to the $(\varphi, \alpha, \Omega)$-\eqref{obj-EB} condition, is strictly weaker than strong convexity. We note that the authors of \cite{Banjac2016on} very recently showed that the regularized Jacobi algorithm, a type of cyclic block coordinate descent method, achieves a linear convergence rate under conditions similar to those of Theorem \ref{PALMthm}. \section{Linear convergence of Nesterov's accelerated forward-backward algorithm}\label{sec7} This section is divided into two parts. In the first part, we introduce a composite optimization problem and then give a new EB condition. In the second part, we introduce Nesterov's accelerated forward-backward algorithm and show its Q-linear convergence. \subsection{Problem formulation and a new EB condition} Let $\{r_k\}_{k\geq 0}$ be a nonnegative real sequence. Following the terminology of \cite{Necedal1997numericall}, we say that $\{r_k\}$ converges: \begin{itemize} \item Q-linearly if there exists a constant $\tau\in (0,1)$ such that $\forall k\geq 0$, $r_{k+1}\leq \tau\cdot r_k$; \item R-linearly if there exists a sequence $\{s_k\}_{k\geq 0}$ Q-linearly converging to zero such that $\forall k\geq 0$, $r_k\leq s_k$. \end{itemize} It is well known that Nesterov's accelerated gradient method of the following form \begin{eqnarray}\label{acc1} \left\{\begin{array}{lll} y_k &= &x_k+\frac{\sqrt{L}-\sqrt{\mu}}{\sqrt{L}+\sqrt{\mu}}(x_k-x_{k-1}) \\\\ x_{k+1}&=&y_k-\frac{1}{L}\nabla f(y_k), \end{array} \right. \end{eqnarray} converges R-linearly for minimizing $f\in{\mathcal{S}}_{\mu,L}^{1,1}(\mathbb{R}^n)$ in the sense that $\{f(x_k)-\min f\}_{k\geq 0}$ converges R-linearly.
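As a quick numerical sanity check (not part of the analysis), the scheme \eqref{acc1} can be run on a small strongly convex quadratic; all problem data below are purely illustrative. The gap $f(x_k)-\min f$ decays at the accelerated rate, roughly governed by $1-\sqrt{\mu/L}$, whereas plain gradient descent with the same step size only contracts at roughly $1-\mu/L$.

```python
import numpy as np

# Illustrative test problem: f(x) = 0.5 * x^T diag(d) x, so that
# mu = min(d), L = max(d), f lies in S_{mu,L}^{1,1}, and min f = 0 at x* = 0.
d = np.array([1.0, 10.0, 100.0])
mu, L = d.min(), d.max()
grad = lambda x: d * x
f = lambda x: 0.5 * np.dot(d * x, x)

# Nesterov's scheme (acc1):
#   y_k     = x_k + (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu)) * (x_k - x_{k-1})
#   x_{k+1} = y_k - (1/L) * grad f(y_k)
alpha = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
x_prev = x = np.ones(3)
for _ in range(300):
    y = x + alpha * (x - x_prev)
    x_prev, x = x, y - grad(y) / L
acc_gap = f(x)  # f(x_k) - min f after 300 accelerated steps

# Baseline: plain gradient descent x_{k+1} = x_k - (1/L) grad f(x_k).
z = np.ones(3)
for _ in range(300):
    z = z - grad(z) / L
gd_gap = f(z)

print(acc_gap, gd_gap)  # the accelerated gap is dramatically smaller
```

With condition number $L/\mu=100$, the slowest coordinate contracts per step by about $0.9$ under \eqref{acc1} versus $0.99$ under gradient descent, which is exactly the $\sqrt{\mu/L}$ versus $\mu/L$ gap discussed above.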
Very recently, the following Q-linear convergence was independently discovered in \cite{Karimi2016A} and \cite{Wilson2016A} by quite different methods: \begin{equation}\label{qlin1} f(x_{k+1})-\min f +\frac{\mu}{2}\|w_{k+1}-x^*\|^2\leq\left(1-\sqrt{\frac{\mu}{L}}\right) \left(f(x_k)-\min f +\frac{\mu}{2}\|w_k-x^*\|^2\right), ~\forall k\geq 0, \end{equation} where $w_k=(1+\sqrt{\frac{L}{\mu}})y_k-\sqrt{\frac{L}{\mu}}x_k$. In Nesterov's book \cite{Rockafellar2004Variational}, by replacing the gradient with the gradient mapping, the accelerated scheme \eqref{acc1} was successfully extended to solve the following minimization problems: \begin{equation}\label{p1} \Min_{x\in Q} f(x), \end{equation} and \begin{equation}\label{p2} \Min_{x\in Q} f(x):=\max_{1\leq i\leq m} f_i(x), \end{equation} where $f, f_i\in {\mathcal{S}}_{\mu,L}^{1,1}(\mathbb{R}^n), i=1, \cdots, m$ and $Q$ is a nonempty closed convex set. Similarly, the accelerated scheme \eqref{acc1} can also be extended to solve \begin{equation}\label{p3} \Min_{x\in \mathbb{R}^n} \varphi(x):=f(x)+g(x), \end{equation} where $f \in{\mathcal{S}}_{\mu,L}^{1,1}(\mathbb{R}^n)$ and $g\in \Gamma_0(\mathbb{R}^{n})$. Nesterov's extended accelerated methods have been proved to achieve R-linear convergence. A natural question arises: does Q-linear convergence also hold for Nesterov's accelerated method applied to problems \eqref{p1}-\eqref{p3}? In order to study problems \eqref{p1}-\eqref{p3} in a unified way, we consider the following composite optimization problem: \begin{equation}\label{comp} \Min_{x} \varphi(x):=f(e(x))+g(x). \end{equation} This is a very general formulation covering many optimization problems, including problems \eqref{p1}-\eqref{p3}, as special cases; see \cite{Drusvyatskiy2016Error,Drusvyatskiy2016An}. Now, we introduce a new EB condition, commonly satisfied by many concrete examples of the form \eqref{p1}-\eqref{p3}; see Remark \ref{remark5} below.
Our forthcoming argument will heavily rely on this condition. \begin{definition} Let $\varphi :=f\circ e +g $ be such that $f:\mathbb{R}^m\rightarrow \mathbb{R}$ is a closed convex function, $g\in \Gamma_0(\mathbb{R}^{n})$, and $e: \mathbb{R}^n\rightarrow \mathbb{R}^m$ is a smooth mapping with its Jacobian given by $\nabla e(x)$. Let $L>0$ and define $$\ell(x;y):=g(x)+f(e(y) + \nabla e(y)(x-y)) +\frac{L}{2}\|x-y\|^2,$$ and \begin{align} \nonumber &p(y):=\arg\min_{x\in\mathbb{R}^n} \ell(x;y),\\ &G(y):=L(y-p(y)).\nonumber \end{align} We say that $\varphi$ satisfies the composite EB condition with positive constants $\mu, L$ obeying $\mu< L$ if \begin{equation}\label{composition-EB} ~~\forall x, y\in \mathbb{R}^n,~~\langle G(y), y-x\rangle \geq \varphi(p(y))-\varphi(x)+\frac{1}{2L}\|G(y)\|^2+\frac{\mu}{2}\|x-y\|^2. \end{equation} \end{definition} Let us give several comments on this definition. \begin{remark}\label{remark4} \begin{enumerate} \item Both $p(y)$ and $G(y)$ are well defined due to the strong convexity of $\ell(\cdot;y)$ for any $y\in\mathbb{R}^n$. Moreover, the operator $G$ is a residual measure operator related to $\varphi$ and $\mathbb{R}^n$. In fact, observe that the optimality conditions for the proximal subproblem $\mathrm{Arg}\min_{x\in\mathbb{R}^n} \ell(x;y)$ read as $$G(y)\in \partial g(p(y))+\nabla e(y)^T \partial f(e(y) + \nabla e(y)(p(y)-y)),$$ which implies $y\in {\mathbf{crit}} \varphi$ if $G(y)=0$. 
On the other hand, by the definition of $p(y)$ and using the convexity of $g$ and $f$, we derive that \begin{align}\label{ineqadd1} \varphi (y)&= \ell(y;y)\geq \ell(p(y);y) \nonumber\\ &= g(p(y))+f(e(y) + \nabla e(y)(p(y)-y)) +\frac{L}{2}\|p(y)-y\|^2 \nonumber \\ &\geq (g(y)+\langle z, p(y)-y\rangle) +(f(e(y))+\langle w, \nabla e(y)(p(y)-y)\rangle)+\frac{L}{2}\|p(y)-y\|^2 \nonumber \\ &= \varphi (y)+\langle z+\nabla e(y)^T w, p(y)-y\rangle +\frac{L}{2}\|p(y)-y\|^2, \end{align} where $z\in\partial g(y)$ and $w\in\partial f(e(y))$, and hence $z+\nabla e(y)^T w \in \partial \varphi(y)$. Thus, if $0\in \partial \varphi(y)$, then we can take some $z\in\partial g(y)$ and $w\in\partial f(e(y))$ such that $z+\nabla e(y)^T w =0$. Hence, the inequality \eqref{ineqadd1} implies that $G(y)=0$ if $y\in{\mathbf{crit}} \varphi$. Therefore, we have $\{y\in\mathbb{R}^n: G(y)=0\}={\mathbf{crit}}\varphi$, i.e., $G$ is a residual measure operator related to $\varphi$. \item The composite EB condition \eqref{composition-EB} can be viewed as a relaxation of strong convexity to some degree. This perspective is in the spirit of the work \cite{I2015Linear}. Indeed, in the case of $m=1$, $g(x)\equiv0$, $f(t)\equiv t, t\in\mathbb{R}$, and $e\in{\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$, \eqref{composition-EB} reads as \begin{equation}\label{rel1} ~\forall x, y\in \mathbb{R}^n,~~e(x)\geq \left( e(y-\frac{1}{L}\nabla e(y))+\frac{1}{2L}\|\nabla e(y)\|^2\right)+\langle \nabla e(y),x-y\rangle +\frac{\mu}{2}\|x-y\|^2. \end{equation} On the other hand, $e\in{\mathcal{F}}^{1,1}_L(\mathbb{R}^n)$ implies that $$~~\forall x, y\in \mathbb{R}^n,~~e(y)\geq e(y-\frac{1}{L}\nabla e(y))+\frac{1}{2L}\|\nabla e(y)\|^2.$$ Therefore, \eqref{rel1} is a relaxation of strong convexity in the following form: \begin{equation*}\label{SC1} ~\forall x, y\in \mathbb{R}^n,~~e(x)\geq e(y)+\langle \nabla e(y),x-y\rangle +\frac{\mu}{2}\|x-y\|^2.
\end{equation*} In the case of $f\circ e(x)\equiv 0$ and $g\in \Gamma_0(\mathbb{R}^n)$, \eqref{composition-EB} reads as \begin{equation}\label{rel2} ~\forall x, y\in \mathbb{R}^n, ~~g(x)\geq g_\lambda(y)+\langle \nabla g_\lambda(y),x-y\rangle +\frac{\mu}{2}\|x-y\|^2, \end{equation} where $\lambda=\frac{1}{L}$. Recall that $g_\lambda$ is the Moreau-Yosida regularization of $g$ and note that $g(x)\geq g_\lambda(x)$. We can see that \eqref{rel2} is a relaxation of strong convexity of $g_\lambda$. \item Although we have shown that \eqref{composition-EB} can be viewed as a relaxation of strong convexity, it is still a very strong property. Now, we construct an example to show that even strong convexity of $f$ is not enough to ensure that \eqref{composition-EB} holds. This example is obtained by setting $n=m=2$, $x=(x_1, x_2)^T$, $e(x)=(x_1, x_1)^T$, $f(x)=\frac{1}{2}x_1^2+\frac{1}{2}x_2^2,$ $g(x)\equiv 0$; then $\varphi(x)=f\circ e(x)=x_1^2$. It is obvious that $f$ is strongly convex. Let us show that in this special case \eqref{composition-EB} fails to hold. Actually, after some simple calculations, we can get $$p(y)=\begin{pmatrix}\frac{L}{L+2}y_1 \\ y_2 \end{pmatrix},~~~~G(y)=\begin{pmatrix}\frac{2L}{L+2}y_1 \\ 0 \end{pmatrix},$$ and therefore \eqref{composition-EB} reads as \begin{align*} \frac{2L}{L+2}y_1(y_1-x_1)\geq &(\frac{L}{L+2}y_1)^2-x_1^2+\frac{2L}{(L+2)^2}y_1^2\\ &+\frac{\mu}{2}(x_1-y_1)^2+\frac{\mu}{2}(x_2-y_2)^2, ~~\forall x_i, y_i\in \mathbb{R}, i=1,2. \end{align*} But, if we take $x_1=y_1\equiv 0$, then we should have $$0\geq \frac{\mu}{2}(x_2-y_2)^2, ~~\forall x_2, y_2\in \mathbb{R}.$$ Obviously, this is impossible for any positive constant $\mu$. \item Let $A\in\mathbb{R}^{m\times n}$ with $m<n$ be a given matrix and $b\in\mathbb{R}^m$ be a given vector. A well-known fact in the EB literature is that the quadratic function $\frac{1}{2}\|Ax-b\|^2$ is not strongly convex but satisfies EB conditions.
Unfortunately, this function fails to satisfy \eqref{rel1}. We show this by contradiction. It is enough to consider the case of $m=1$, $g(x)\equiv0$, $f(t)\equiv t, t\in\mathbb{R}$, and $e(x)=\frac{1}{2}x^Taa^Tx$ with $\|a\|^2=L$. In this case, \eqref{rel1} reads as $$\frac{1}{2}(a^Tx-a^Ty)^2 \geq \frac{\mu}{2}\|x-y\|^2, ~\forall x, y\in \mathbb{R}^n.$$ Let $h\neq 0$ be a vector orthogonal to $a$. Now, take $y-x=\lambda h, \lambda\in\mathbb{R}$. Then, we have $$0\geq \frac{\mu}{2}\lambda^2\|h\|^2, ~\forall\lambda \in\mathbb{R},$$ which is impossible for any positive constant $\mu$. \item In order to show that \eqref{rel2} can be \textsl{strictly} weaker than strong convexity, we now construct a one-dimensional example that satisfies \eqref{rel2} but fails to be strongly convex. Define the shrinkage operator by ${\mathcal{S}}(t):=\mathrm{sign}(t)\cdot\max\{|t|-1,0\}$ and the projection operator by $[x]_I^+:=\arg\min_{y\in I}\|x-y\|$, where $I$ is some closed interval. Now, we take $\lambda=1$, $I=[-2,2]$, and $g(x)=|x|+\delta_I(x)$. Obviously, such a $g$ is convex but not strongly convex. Using formula (14) in \cite{zhang2017proj} and Lemma \ref{flamb}, we have $$g_\lambda(x)=|[{\mathcal{S}}(x)]_I^+|+\frac{1}{2}(x-[{\mathcal{S}}(x)]_I^+)^2.$$ Here, $g_\lambda$ is the Moreau-Yosida regularization of $g$. Denote $\ell_{g_\lambda}(x;y):=g_\lambda(y)+\langle \nabla g_\lambda(y),x-y\rangle$. We have the following expression: \begin{equation} \ell_{g_\lambda}(x;y)=\left\{\begin{array}{ll} (y+2)x-\frac{1}{2}y^2+4,~~& ~~y\leq -3,\\ -x-\frac{1}{2}, & ~~-3\leq y\leq -1,\\ yx-\frac{1}{2}y^2, & ~~-1\leq y\leq 1,\\ x-\frac{1}{2},& ~~1\leq y\leq 3,\\ (y-2)x-\frac{1}{2}y^2+4,& ~~y\geq 3. \end{array}\right. \end{equation} Then, one can verify case by case that for any $\mu\in (0,\frac{1}{9}]$, \eqref{rel2} always holds.
For example, in the case of $y\leq -3$, we only need to verify that $$|x|\geq (y+2)x-\frac{1}{2}y^2+4+\frac{\mu}{2}(x-y)^2, ~~x\in [-2,2],$$ i.e., $\frac{1-\mu}{2}(x-y)^2 \geq \frac{1}{2}x^2+2x-|x|+4, ~~x\in [-2,2].$ Thus, it is sufficient to require that $$\mu\leq 1-\max_{y\leq -3, |x|\leq 2}\frac{x^2+4x-2|x|+8}{(x-y)^2}.$$ After some simple calculations, we get $\mu\leq \frac{1}{9}$; the maximum, $\frac{8}{9}$, is attained at $x=0$, $y=-3$. The other cases can be similarly verified; we omit the details here. This example shows that the composite EB condition \eqref{composition-EB} indeed holds for some non-strongly convex functions. \end{enumerate} \end{remark} Now, we explain why we say that the condition \eqref{composition-EB} is commonly satisfied by problems \eqref{p1}-\eqref{p3}, whose objective functions are clearly not in ${\mathcal{S}}^{1,1}_{\mu,L}(\mathbb{R}^n)$. \begin{remark}\label{remark5} \begin{itemize} \item[(i)] The minimization problem \eqref{comp} with $m=1$, $e(x)\in {\mathcal{S}}^{1,1}_{\mu,L}(\mathbb{R}^n)$, $f(t)\equiv t, t\in\mathbb{R}$, $g(x)=\delta_Q(x)$, and $Q$ being nonempty closed convex, corresponds to problem \eqref{p1}. The condition \eqref{composition-EB} holds in this setting; see Theorem 2.2.7 in \cite{nesterov2004introductory}. \item[(ii)] The minimization problem \eqref{comp} with $f(y)=\max_{1\leq i\leq m}\{y_i\}$, $f_i(x)\in{\mathcal{S}}^{1,1}_{\mu,L}(\mathbb{R}^n)$, $e(x)=(f_1(x), f_2(x),\cdots, f_m(x))$, $g(x)=\delta_Q(x)$, and $Q$ being nonempty closed convex, corresponds to problem \eqref{p2}. The condition \eqref{composition-EB} holds in this setting; see Corollary 2.3.2 in \cite{nesterov2004introductory}. \item[(iii)] The minimization problem \eqref{comp} with $m=1$, $e(x)\in {\mathcal{S}}^{1,1}_{\mu,L}(\mathbb{R}^n)$, $f(t)\equiv t, t\in\mathbb{R}$, and $g(x)\in \Gamma_0(\mathbb{R}^n)$, corresponds to problem \eqref{p3}. The condition \eqref{composition-EB} holds in this setting; see the inequality (4.36) in \cite{Chamb2016an}.
\end{itemize} \end{remark} Interestingly, we note that while this work was under review, the authors of \cite{Ma2017under} utilized the exact form \eqref{composition-EB} to construct underestimate sequences and proposed several first order methods for minimizing strongly convex smooth functions and for strongly convex composite functions. Based on the discussion in this section, one may expect the corresponding results in \cite{Ma2017under} to extend to the composite optimization problem \eqref{comp}. In general, we have to admit that it is difficult to verify the composite EB condition \eqref{composition-EB}, which therefore deserves further study in the future. \subsection{Q-linear convergence of Nesterov's acceleration} In this part, we show Q-linear convergence of Nesterov's acceleration under the composite EB condition \eqref{composition-EB}, which is more general than strong convexity. First, in light of Nesterov's accelerated scheme (2.2.11) in \cite{nesterov2004introductory}, Nesterov's accelerated forward-backward algorithm for solving the problem \eqref{comp} reads as: choosing $x_{-1}=x_0\in \mathbb{R}^n$, for $k\geq 0$, \begin{eqnarray*} \left\{\begin{array}{lll} y_k &= &x_k+\frac{\sqrt{L}-\sqrt{\mu}}{\sqrt{L}+\sqrt{\mu}}(x_k-x_{k-1}) \\\\ x_{k+1}&=&y_k-\frac{1}{L}G(y_k). \end{array} \right. \end{eqnarray*} Let $$\alpha=\frac{\sqrt{L}-\sqrt{\mu}}{\sqrt{L}+\sqrt{\mu}},~\beta=\frac{2\sqrt{\mu}}{\sqrt{L}+\sqrt{\mu}}, ~\gamma=\frac{1}{2L}(1+\sqrt{\frac{L}{\mu}}).$$ Let $$\Phi_k(x^*;\tau):=\varphi(x_k)- \min\varphi+\tau\cdot\|z_k-x^*\|^2,~k\geq 0,$$ where $x^*\in\mathrm{Arg}\min\varphi$ (assumed to be nonempty) and $$z_k=\frac{1}{2}(1+\sqrt{\frac{L}{\mu}})y_k+\frac{1}{2}(1-\sqrt{\frac{L}{\mu}})x_k, ~k\geq 0.$$ Now, we are ready to present the main result in this section. The idea behind the proof is partially inspired by the argument in \cite{Attouch2015the}, but might be of interest in its own right.
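Before stating the main result, the scheme above can be illustrated numerically (a sketch of my own, not part of the formal development): take $g\equiv 0$ and $\varphi(x)=\frac{1}{2}x^THx$ with the spectrum of $H$ contained in $[\mu, L]$, so that $G(y)=\nabla \varphi(y)$ and \eqref{composition-EB} holds with these constants (the setting of Remark \ref{remark5}(i) with the constraint set equal to all of $\mathbb{R}^n$). The Lyapunov quantity $\Phi_k(x^*;\tau)$ with $\tau=\frac{2L\mu}{(\sqrt{L}+\sqrt{\mu})^2}$ then decreases Q-linearly along the iterates:

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, L = 20, 1.0, 100.0

# phi(x) = 0.5 x^T H x with spectrum in [mu, L]; here g = 0, so G(y) = grad phi(y).
# (Illustrative instance of my own, not taken from the paper.)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = U @ np.diag(np.linspace(mu, L, n)) @ U.T
phi = lambda x: 0.5 * x @ H @ x          # min phi = 0, attained at x* = 0
grad = lambda x: H @ x

sL, sm = np.sqrt(L), np.sqrt(mu)
alpha = (sL - sm) / (sL + sm)            # momentum coefficient
tau = 2 * L * mu / (sL + sm) ** 2        # the tau appearing in the theorem below

def Phi(xk, yk):
    # Lyapunov function Phi_k(x*; tau) with x* = 0
    zk = 0.5 * (1 + sL / sm) * yk + 0.5 * (1 - sL / sm) * xk
    return phi(xk) + tau * np.dot(zk, zk)

x_prev = x = rng.standard_normal(n)      # x_{-1} = x_0
vals = []
for k in range(300):
    y = x + alpha * (x - x_prev)         # extrapolation step
    vals.append(Phi(x, y))
    x_prev, x = x, y - grad(y) / L       # forward(-backward) step

vals = np.array(vals)
ratios = vals[1:] / vals[:-1]
print(ratios.max(), vals[-1] / vals[0])  # every ratio < 1: Q-linear decrease
```

Every observed ratio $\Phi_{k+1}/\Phi_k$ stays strictly below one, as the theorem below predicts.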
\begin{theorem} Let $\varphi :=f\circ e +g $ be such that $f:\mathbb{R}^m\rightarrow \mathbb{R}$ is a closed convex function, $g\in \Gamma_0(\mathbb{R}^{n})$, and $e: \mathbb{R}^n\rightarrow \mathbb{R}^m$ is a smooth mapping with its Jacobian given by $\nabla e(x)$. Let $\varphi$ satisfy the composite EB condition \eqref{composition-EB} with positive constants $\mu, L$ obeying $\mu< L$. Assume that $\varphi$ achieves its minimum $\min\varphi$ so that $\mathrm{Arg}\min\varphi\neq \emptyset$. Then, there exists a unique vector $x^*$ such that $\mathrm{Arg}\min \varphi=\{x^*\}$, and Nesterov's accelerated forward-backward method converges Q-linearly in the sense that there exists a positive constant $\theta_0<1$ such that for any $\theta\in[\theta_0, 1)$ it holds \begin{equation}\label{qlin2} \Phi_{k+1}(x^*;\tau)\leq \rho \cdot \Phi_k(x^*;\tau), ~k\geq 0, \end{equation} where $\rho=\max\{\alpha, \theta\}<1$ and $\tau=\frac{\theta\beta}{2\rho\gamma}$. In particular, by taking $\theta= \max\{\theta_0, \alpha\}$, we have \begin{equation}\label{qlin3} \Phi_{k+1}(x^*;\frac{2L\mu}{(\sqrt{L}+\sqrt{\mu})^2})\leq \max\{\theta_0, \alpha\} \cdot \Phi_k(x^*;\frac{2L\mu}{(\sqrt{L}+\sqrt{\mu})^2}), ~k\geq 0. \end{equation} \end{theorem} \begin{proof} We first show the uniqueness of the optimal solution of $\varphi$. In fact, by statement (i) in Remark \ref{remark4} and the fact that $\mathrm{Arg}\min\varphi \subset {\mathbf{crit}} \varphi$, we have that $G(x^*)=0$ and $p(x^*)=x^*$, and hence \eqref{composition-EB} at $y=x^*$ reads as $$\varphi(x)-\min \varphi\geq \frac{\mu}{2}\|x-x^*\|^2, ~~\forall x\in \mathbb{R}^n,$$ which clearly implies that $\mathrm{Arg}\min \varphi=\{x^*\}$. Now, we analyze rates of linear convergence.
Using successively \eqref{composition-EB} at $x=x_k$ and $y=y_k$, and then at $y=y_k$ and $x=x^*$, together with the fact that $x_{k+1}=p(y_k)$, we obtain \begin{equation*} \varphi(x_{k+1})\leq \varphi(x_k)+\langle G(y_k),y_k-x_k\rangle -\frac{1}{2L}\|G(y_k)\|^2-\frac{\mu}{2}\|x_k-y_k\|^2 \end{equation*} and \begin{equation*} \varphi(x_{k+1})\leq \varphi(x^*)+\langle G(y_k),y_k-x^*\rangle -\frac{1}{2L}\|G(y_k)\|^2-\frac{\mu}{2}\|x^*-y_k\|^2. \end{equation*} Multiplying the first inequality by $\alpha$ and the second one by $\beta$, and then adding the two resulting inequalities, we obtain \begin{align*} \varphi(x_{k+1})\leq &\alpha \varphi(x_k)+\beta\varphi(x^*) + \langle G(y_k),\alpha (y_k-x_k)+\beta(y_k-x^*)\rangle\\ &-\frac{1}{2L}\|G(y_k)\|^2-\frac{\mu\alpha }{2}\|x_k-y_k\|^2-\frac{\mu\beta}{2}\|x^*-y_k\|^2. \end{align*} In order to estimate the right-hand side of the inequality above, we first note that \begin{equation}\label{eq1} \alpha (y_k-x_k)+\beta(y_k-x^*)=\beta(z_k-x^*). \end{equation} Second, using the expression $y_{k+1}=x_{k+1}+\alpha(x_{k+1}-x_k)$, we get \begin{equation}\label{zk} z_{k+1}=\frac{1}{2}(1+\sqrt{\frac{L}{\mu}})x_{k+1}+\frac{1}{2}(1-\sqrt{\frac{L}{\mu}})x_k. \end{equation} Then, substituting $x_{k+1}=y_k-\frac{1}{L}G(y_k)$ into formula \eqref{zk}, we obtain \begin{equation}\label{eq01} z_{k+1}-x^*=z_k-x^*-\gamma\cdot G(y_k). \end{equation} Using equality \eqref{eq01}, we derive that \begin{align*} \langle G(y_k),z_k-x^*\rangle &= \frac{1}{\gamma}\langle z_k-x^*-(z_{k+1}-x^*),z_k-x^*\rangle \\ &= \frac{1}{\gamma}\|z_k-x^*\|^2 -\frac{1}{\gamma}\langle z_{k+1}-x^*,z_k-x^*\rangle \\ &= \frac{1}{\gamma}\|z_k-x^*\|^2 -\frac{1}{\gamma}\langle z_{k+1}-x^*,z_{k+1}-x^*+\gamma\cdot G(y_k)\rangle\\ &= \frac{1}{\gamma}\|z_k-x^*\|^2 -\frac{1}{\gamma}\|z_{k+1}-x^*\|^2- \langle z_{k+1}-x^*, G(y_k)\rangle\\ &= \frac{1}{\gamma}\|z_k-x^*\|^2 -\frac{1}{\gamma}\|z_{k+1}-x^*\|^2-\langle G(y_k),z_k-x^*\rangle +\gamma\|G(y_k)\|^2.
\end{align*} Thus, we have \begin{equation}\label{gk} \langle G(y_k),z_k-x^*\rangle=\frac{1}{2\gamma}(\|z_k-x^*\|^2-\|z_{k+1}-x^*\|^2)+ \frac{\gamma}{2}\|G(y_k)\|^2. \end{equation} Combining formula \eqref{gk} and formula \eqref{eq1}, we derive that \begin{align*} \varphi(x_{k+1})\leq & \alpha \varphi(x_k)+\beta\varphi(x^*) +\frac{\beta}{2\gamma}(\|z_k-x^*\|^2-\|z_{k+1}-x^*\|^2) \\ &+(\frac{\beta\gamma}{2}-\frac{1}{2L})\|G(y_k)\|^2-\frac{\mu\alpha }{2}\|x_k-y_k\|^2-\frac{\mu\beta}{2}\|x^*-y_k\|^2\\ =&\alpha \varphi(x_k)+\beta\varphi(x^*) +\frac{\beta}{2\gamma}(\|z_k-x^*\|^2-\|z_{k+1}-x^*\|^2)-\frac{\mu\alpha }{2}\|x_k-y_k\|^2-\frac{\mu\beta}{2}\|x^*-y_k\|^2, \end{align*} where the term $\|G(y_k)\|^2$ is eliminated since $\frac{\beta\gamma}{2}=\frac{1}{2L}$. Note that \eqref{eq1} can be written as $$z_k-x^*=(y_k-x^*)+\frac{1}{2}(\sqrt{\frac{L}{\mu}}-1)(y_k-x_k),$$ with which we further derive that \begin{align*} \|z_k-x^*\|^2\leq & 2\|x^*-y_k\|^2+ \frac{1}{2}(\sqrt{\frac{L}{\mu}}-1)^2\|y_k-x_k\|^2\\ &\leq \max\left\{2, \frac{1}{2}(\sqrt{\frac{L}{\mu}}-1)^2\right\}(\|x^*-y_k\|^2+\|y_k-x_k\|^2). \end{align*} Denote $\eta_1:=\min\left\{\frac{\mu\alpha}{2},\frac{\mu\beta}{2}\right\}$ and $\eta_2:=\max\left\{2, \frac{1}{2}(\sqrt{\frac{L}{\mu}}-1)^2\right\}$. Then, we have \begin{align*} \varphi(x_{k+1})&\leq \alpha \varphi(x_k)+\beta\varphi(x^*) +\frac{\beta}{2\gamma}(\|z_k-x^*\|^2-\|z_{k+1}-x^*\|^2) -\eta_1(\|x^*-y_k\|^2+\|y_k-x_k\|^2)\\ &\leq \alpha \varphi(x_k)+\beta\varphi(x^*) +\frac{\beta}{2\gamma}(\|z_k-x^*\|^2-\|z_{k+1}-x^*\|^2) -\frac{\eta_1}{\eta_2}\|z_k-x^*\|^2. 
\end{align*} Rearranging the terms, we obtain $$\varphi(x_{k+1})-\varphi(x^*)+\frac{\beta}{2\gamma}\|z_{k+1}-x^*\|^2\leq \alpha( \varphi(x_k)-\varphi(x^*))+(\frac{\beta}{2\gamma}-\frac{\eta_1}{\eta_2})\|z_k-x^*\|^2.$$ Thus, there exists a positive constant $\theta_0<1$ (indeed, any $\theta_0\in(0,1)$ with $\theta_0\geq 1-\frac{2\gamma\eta_1}{\beta\eta_2}$ works) such that for any $\theta\in [\theta_0, 1)$ it holds $$\varphi(x_{k+1})-\varphi(x^*)+\frac{\beta}{2\gamma}\|z_{k+1}-x^*\|^2\leq \alpha( \varphi(x_k)-\varphi(x^*))+\frac{\theta\beta}{2\gamma}\|z_k-x^*\|^2.$$ Since $\rho=\max\{\alpha, \theta\}$, we have that $\rho<1$ and $\frac{\theta}{\rho}\leq 1$. Thus, we obtain \begin{align*} \varphi(x_{k+1})-\varphi(x^*)+\frac{\theta\beta}{2\rho\gamma}\|z_{k+1}-x^*\|^2& \leq \alpha( \varphi(x_k)-\varphi(x^*))+\frac{\theta\beta}{2\gamma}\|z_k-x^*\|^2\\ & \leq \rho\left(\varphi(x_k)-\varphi(x^*)+\frac{\theta\beta}{2\rho\gamma}\|z_k-x^*\|^2\right), \end{align*} i.e., $\Phi_{k+1}(x^*;\tau)\leq \rho \cdot \Phi_k(x^*;\tau)$ with $\tau=\frac{\theta\beta}{2\rho\gamma}$. This is just the announced result \eqref{qlin2}. It remains to show \eqref{qlin3}. In fact, if $\theta= \max\{\theta_0, \alpha\}$, then $\rho=\max\{\alpha, \theta\}=\max\{\theta_0, \alpha\}=\theta$ and hence $$\tau=\frac{\theta\beta}{2\rho\gamma}=\frac{\beta}{2\gamma}=\frac{2L \mu}{(\sqrt{L}+\sqrt{\mu})^2}.$$ This completes the proof. \end{proof} \begin{remark} It should be noted that we have only shown the existence of rates of linear convergence for Nesterov's accelerated forward-backward method. However, it is not clear whether one can derive an exact linear rate of $1-\sqrt{\frac{\mu}{L}}$, as obtained for Nesterov's accelerated gradient method. \end{remark} \section{A class of dual functions satisfying EB conditions}\label{sec8} Verifying EB conditions for functions with certain structure is a difficult topic. In this section, we consider a class of dual objective functions that have interesting applications in signal processing and compressive sensing \cite{Zhang2015A,Lai2012Augmented}.
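To make this concrete before the formal development, consider the following minimal numerical instance (my own illustration, not from the paper): take $g(y)=\frac{c}{2}\|y\|^2$, so that $g^*(z)=\frac{1}{2c}\|z\|^2$ and the dual objective $f(x)=g^*(A^Tx)-\langle b, x\rangle$ is an explicit quadratic. In this case the error bound $f(x)-\min f\geq \frac{\tau}{2}d^2(x,\mathrm{Arg}\min f)$ can be verified directly, even globally, with $\tau=\sigma^2(A)/c$, where $\sigma(A)$ denotes the minimal positive singular value of $A$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, c = 5, 8, 2.0                  # A in R^{n x m}, m <= n; g strongly convex, modulus c

A = rng.standard_normal((n, m))
y_bar = rng.standard_normal(m)
b = A @ y_bar                        # guarantees b lies in R(A)

# g(y) = (c/2)||y||^2, hence g*(z) = ||z||^2/(2c) and the dual objective is
# f(x) = ||A^T x||^2/(2c) - b^T x.
f = lambda x: np.dot(A.T @ x, A.T @ x) / (2 * c) - b @ x
M = A @ A.T / c                      # Hessian of f; its largest eigenvalue is ||A||^2/c

# Argmin f = {x : M x = b} = x_hat + null(A^T); lstsq gives a particular solution.
x_hat = np.linalg.lstsq(M, b, rcond=None)[0]
f_min = f(x_hat)

# Distance to Argmin f = norm of the projection of x - x_hat onto range(A).
Qr, _ = np.linalg.qr(A)              # orthonormal basis of range(A) (full column rank a.s.)
dist = lambda x: np.linalg.norm(Qr.T @ (x - x_hat))

sigma_min = np.linalg.svd(A, compute_uv=False).min()
tau = sigma_min ** 2 / c             # EB modulus for this quadratic instance

worst = min(
    (f(x) - f_min) - 0.5 * tau * dist(x) ** 2
    for x in rng.standard_normal((2000, n))
)
print(worst)                         # nonnegative up to rounding
```

The worst-case slack over random test points is nonnegative, confirming the bound for this quadratic instance; the general (non-quadratic) case is the subject of this section.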
We first describe the problem, along with some direct results. \begin{proposition}\label{prop1} Consider the linearly constrained optimization problem \begin{equation}\label{primal} \Min_{y\in \mathbb{R}^m} g(y),~~\textrm{subject to} ~~ Ay=b, \tag{P} \end{equation} where $g: \mathbb{R}^m \rightarrow \mathbb{R}$ is a real-valued and strongly convex function with modulus $c>0$, $A\in\mathbb{R}^{n\times m}$ is a given matrix with $m\leq n$, and $b\in R(A)$ is a given vector. Here, $R(A)$ stands for the range of $A$. The dual problem is \begin{equation}\label{dual} \Min_{x\in \mathbb{R}^n} f(x):= g^*(A^Tx)-\langle b, x\rangle. \tag{D} \end{equation} Then, we have that \begin{itemize} \item the primal problem \eqref{primal} has a unique optimal solution $\bar{y}$, \item the dual objective function $f$ belongs to ${\mathcal{F}}^{1,1}_{L}(\mathbb{R}^n)$ with $L=\frac{\|A\|^2}{c}$, and \item the set of optimal solutions of the dual problem, $$\mathrm{Arg}\min f:=\{x\in\mathbb{R}^n: A\nabla g^*(A^Tx)=b\},$$ is a nonempty closed convex set, and can be characterized by $\{x\in\mathbb{R}^n: A^Tx\in \partial g(\bar{y})\}$ or equivalently by $\{x\in\mathbb{R}^n: \nabla g^*(A^Tx)=\bar{y}\}$. \end{itemize} \end{proposition} \begin{proof} The first two statements are standard results which can be found in textbooks on convex analysis, so no proof is given here. Now, we prove the third statement. First, let the Lagrangian function be given by $L(y, x)=g(y)-\langle Ay-b, x\rangle.$ By the assumption that $b\in R(A)$ and the finiteness of the optimal value of the primal problem, according to Proposition 5.3.3 in \cite{bertsekas2011convex}, for any $\bar{x}\in \mathrm{Arg}\min f$ we have that $\bar{y}\in \mathrm{Arg}\min L(y, \bar{x}).$ Hence, $A^T\bar{x}\in \partial g(\bar{y})$ or equivalently $\nabla g^*(A^T\bar{x})=\bar{y}$ due to $(\partial g)^{-1}=\nabla g^*$, which holds by Corollary 23.5.1 in \cite{rockafellar1970convex}.
This implies that $\mathrm{Arg}\min f\subseteq \{x\in\mathbb{R}^n: \nabla g^*(A^Tx)=\bar{y}\}$. The inverse inclusion is obvious since $A\bar{y}=b$. Thereby, $$\mathrm{Arg}\min f=\{x\in\mathbb{R}^n: \nabla g^*(A^Tx)=\bar{y}\}=\{x\in\mathbb{R}^n: A^Tx\in \partial g(\bar{y})\}.$$ This completes the proof. \end{proof} Now, we state the main result of this section. \begin{theorem}\label{dualmain} Use the same setting as Proposition \ref{prop1}. Denote $X_r:=\{x\in\mathbb{R}^n: f(x)\leq \min f +r\}$ with $r\geq0$ and $V_r:={\mathrm{cl}}(A^TX_r)$, where ${\mathrm{cl}}(A^TX_r)$ stands for the closure of $A^TX_r$. If the following assumptions hold: \begin{itemize} \item[(a)] $\partial g$ is calm around $\bar{y}$ for any $\bar{z}\in V_0$, \item[(b)] the collection $\{\partial g(\bar{y}), R(A^T)\}$ is linearly regular with constant $\gamma>0$, that is $$d(A^Tx, \partial g(\bar{y}))\geq \gamma \cdot d(A^Tx, \partial g(\bar{y})\cap R(A^T)),~~\forall x\in \mathbb{R}^n,$$ \end{itemize} then we have that \begin{enumerate} \item[(i)] There exist positive constants $r_0, \tau$ such that the $(f, \tau, X_{r_0})$-\eqref{obj-EB} condition holds, that is \begin{equation}\label{mainqg} f(x)-\min f \geq \frac{\tau}{2}\cdot d^2(x, {\mathbf{crit}} f), ~~\forall x\in X_{r_0}. \end{equation} Specifically, if $\partial g$ is calm with constant $\kappa>0$ around $\bar{y}$ for any $\bar{z}\in V_0$, then \eqref{mainqg} holds for all $\tau\in(0, \kappa^{-1})$. \item[(ii)] For any sublevel set $X_r$, pick $r_1\in (0, r_0)$ and let $c_r:=\sqrt{\frac{r_1}{r}}$ and \begin{eqnarray*} \rho_r:= \left\{\begin{array}{lll} c_r, &\textrm{when} ~~r\geq r_0, \\ 1, &\textrm{when}~~r\leq r_0. \end{array} \right. \end{eqnarray*} Then, the $(\nabla f, \nu, X_{r})$-\eqref{cor-EB} condition with $\nu=\frac{\tau \rho_r^2}{8}$ holds. \end{enumerate} \end{theorem} \begin{proof} We first prove that $V_r$ is compact for any $r\geq0$. 
To this end, letting $f_r=\min f+r$ and using the fact $b=A\bar{y}$, we rewrite $X_r$ in the following form: $$X_r=\{x\in\mathbb{R}^n: g^*(A^Tx)-\langle \bar{y}, A^Tx\rangle \leq f_r\}.$$ Denote $$Y_r:=\{y\in\mathbb{R}^m: g^*(y)-\langle \bar{y}, y\rangle \leq f_r\}.$$ Obviously, $A^TX_r\subseteq Y_r$. Let $\tilde{g}(\cdot):=g^*(\cdot)-\langle \bar{y}, \cdot \rangle$. Then, $\tilde{g}^*(y)=g(y+\bar{y})$. Thus, ${\mathrm{dom}} \tilde{g}^*={\mathrm{dom}} g=\mathbb{R}^m$. This implies that $\tilde{g}$ is coercive (see Theorem 11.8 in \cite{Rockafellar2004Variational}) and hence $Y_r=\{y\in\mathbb{R}^m: \tilde{g}(y) \leq f_r\}$ is bounded. Furthermore, since $\tilde{g}$ is continuous, $Y_r$ is closed and hence compact. Thereby, $V_r={\mathrm{cl}}(A^TX_r)\subseteq Y_r$ is bounded and hence also compact. Second, we show that $V_0 \subseteq \partial g(\bar{y})$. Recall that we have shown that $X_0=\{x\in \mathbb{R}^n: A^Tx\in \partial g(\bar{y})\}$ in Proposition \ref{prop1}. Hence, $A^TX_0\subseteq \partial g(\bar{y})$. Since $g$ is a real-valued convex function, $\partial g(\bar{y})$ must be nonempty, closed, and bounded according to Theorem 23.4 in \cite{rockafellar1970convex} and Theorem 8.6 in \cite{Rockafellar2004Variational}. Therefore, $V_0={\mathrm{cl}}(A^TX_0)\subseteq \partial g(\bar{y})$. Now, since $\partial g$ is calm at $\bar{y}$ for any $\bar{z}\in V_0$ and $V_0 \subseteq \partial g(\bar{y})$ is compact, by Proposition 2 in \cite{Zhou2015A} we can conclude that there exist constants $\kappa, \epsilon>0$ such that \begin{equation}\label{unicalm} \partial g(y)\cap (V_0+\epsilon \mathbb{B}_\mathcal{E})\subseteq \partial g(\bar{y})+\kappa\cdot \|y-\bar{y}\|_2 \mathbb{B}_\mathcal{E}, ~~\forall y\in\mathcal{E}, \end{equation} where we denote $\mathbb{R}^m$ by $\mathcal{E}$ for simplicity. Pick $z\in V_0+\epsilon \mathbb{B}_\mathcal{E}$ and let $y=\nabla g^*(z)$.
Then, $z\in \partial g(y)$ due to $\partial g =(\nabla g^*)^{-1}$ and hence $z \in \partial g(y)\cap (V_0+\epsilon \mathbb{B}_\mathcal{E})$. By the inclusion \eqref{unicalm}, we obtain \begin{equation}\label{subreg} d(z, \partial g(\bar{y}))\leq \kappa \|y-\bar{y}\|_2=\kappa \cdot d(\bar{y}, \nabla g^*(z)) , ~~\forall z\in V_0+\epsilon \mathbb{B}_\mathcal{E}, \end{equation} which can be rewritten as \begin{equation}\label{subreg1} d(z, (\nabla g^*)^{-1}(\bar{y}))\leq \kappa\cdot d(\bar{y}, \nabla g^*(z)) , ~~\forall z\in V_0+\epsilon \mathbb{B}_\mathcal{E}. \end{equation} This implies that $\nabla g^*$ is metrically subregular at each $\bar{z}\in V_0$ for $\bar{y}$. Thus, by Theorem 3.1 in \cite{Drusvyatskiy2013Second}, for each $\bar{z}\in V_0$ there exists a neighborhood $\bar{z}+\epsilon(\bar{z})\mathbb{B}_\mathcal{E}$ and a positive constant $\alpha(\bar{z})$ such that \begin{equation}\label{quagrow} g^*(z)\geq g^*(\bar{z})-\langle \bar{y},\bar{z}-z\rangle+\frac{\alpha(\bar{z})}{2}\cdot d^2(z,(\nabla g^*)^{-1}(\bar{y})), ~~\forall z\in \mathcal{E}~~ \textrm{with}~~ \|z-\bar{z}\|_2\leq \epsilon(\bar{z}), \end{equation} where the constant $\alpha(\bar{z})$ can be chosen arbitrarily in $(0, \kappa^{-1})$. Note that $\{\bar{z}+\epsilon(\bar{z})\mathbb{B}^o_\mathcal{E}\}_{\bar{z}\in V_0}$ forms an open cover of the compact set $V_0$. Hence, by compactness of $V_0$, there exist finitely many points $\bar{z}_1,\cdots, \bar{z}_K\in V_0$ ($K\geq 1$) such that $$V_0\subseteq U:= \bigcup_{i=1}^K(\bar{z}_i+\epsilon(\bar{z}_i)\mathbb{B}^o_\mathcal{E}).$$ Let $\alpha=\min\{\alpha(\bar{z}_1), \cdots, \alpha(\bar{z}_K)\}$, which can be chosen arbitrarily in $(0, \kappa^{-1})$, and note that $\min f = g^*(\bar{z})-\langle \bar{y}, \bar{z}\rangle, ~\forall \bar{z}\in V_0$. From \eqref{quagrow}, we have \begin{equation*} g^*(z)-\langle \bar{y}, z\rangle \geq \min f +\frac{\alpha}{2}\cdot d^2(z,(\nabla g^*)^{-1}(\bar{y})), ~~\forall z\in U.
\end{equation*} Letting $r_0>0$ be small enough such that $V_{r_0}\subseteq U$ and using the fact that $(\nabla g^*)^{-1} = \partial g$, we obtain \begin{equation*} g^*(z)-\langle \bar{y}, z\rangle \geq \min f +\frac{\alpha}{2}\cdot d^2(z, \partial g(\bar{y})), ~~\forall z\in V_{r_0}, \end{equation*} and hence, \begin{equation}\label{qgoff} f(x) - \min f \geq \frac{\alpha}{2}\cdot d^2(A^Tx, \partial g(\bar{y})), ~~\forall x\in X_{r_0}. \end{equation} Using the linear regularity property of $\{\partial g(\bar{y}), R(A^T)\}$, we derive that \begin{align*} d(A^Tx, \partial g(\bar{y}))&\geq \gamma \cdot d(A^Tx, \partial g(\bar{y})\cap R(A^T)) = \gamma \cdot \min_{A^Tu\in\partial g(\bar{y})}\|A^Tx-A^Tu\|\\ &= \gamma \cdot\min_{y\in A^T X_0}\|A^Tx-y\|\geq \gamma \cdot \min_{y\in V_0}\|A^Tx-y\|= \gamma \cdot\|A^Tx- \hat{y}\|, \end{align*} where such $\hat{y}\in V_0$ exists due to the compactness of $V_0$. Now, we follow the argument in \cite{Frank2015linear} to finish the proof of (i). Since $\hat{y}\in V_0={\mathrm{cl}}(A^TX_0)$, we can find a sequence $\{x_n\}_{n=0}^{\infty}\subset X_0$ such that $A^Tx_n \rightarrow \hat{y}$ as $n\rightarrow +\infty$. Denote the null space of $A^T$ by $N(A^T)$ and the minimal positive singular value of $A$ by $\sigma(A)$. Using the fact that $\mathrm{Arg}\min f+N(A^T) \subseteq \mathrm{Arg}\min f$, we can derive that $$d(x, \mathrm{Arg}\min f)\leq \|x-(x_n+\mathcal{P}_{N(A^T)}(x-x_n))\|\leq \frac{1}{\sigma(A)}\|A^Tx-A^Tx_n\|,~n\geq 0,$$ where $\mathcal{P}_{N(A^T)}$ stands for the orthogonal projection operator onto $N(A^T)$. Thus, by letting $n\rightarrow +\infty$, we obtain \begin{equation}\label{more} d(x, \mathrm{Arg}\min f)\leq \frac{1}{\sigma(A)}\|A^Tx-\hat{y}\| \leq \frac{d(A^Tx, \partial g(\bar{y}))}{\gamma \cdot\sigma(A)}. \end{equation} Note that $\mathrm{Arg}\min f={\mathbf{crit}} f$. Thereby, in view of \eqref{qgoff} and \eqref{more}, the \eqref{obj-EB} condition follows with $\tau=\alpha\gamma^2\sigma^2(A)$. Let us prove (ii).
Without loss of generality, we assume that $\min f=0$ and $r\geq r_0$. Since for any $r>0$ the sublevel set $X_r$ is $\nabla f$-invariant, using \eqref{mainqg} together with the equivalence established in Corollary \ref{corr2}, we can conclude that $f$ satisfies the $(\nabla f, \eta, X_{r_0})$-\eqref{res-obj-EB} condition with $\eta=\sqrt{\frac{\tau}{2}}$, that is \begin{equation}\label{ro1} \forall~ x\in X_{r_0},~~\|\nabla f(x)\| \geq \eta\cdot\sqrt{f(x) }. \end{equation} Let $\varphi(t):=2\eta^{-1}t^{\frac{1}{2}}$. Then, the property \eqref{ro1} can be written as \begin{equation}\label{ro2} \forall~ x\in X_{r_0},~~\|\nabla f(x)\| \varphi^\prime (f(x))\geq 1. \end{equation} By applying Proposition 30 in \cite{Bolte2015From}, a globalization result for KL inequalities, to \eqref{ro2}, we have that for the given $r_1\in (0, r_0)$, the function given by \begin{eqnarray*} \phi(t):= \left\{\begin{array}{lll} \varphi(t), &\textrm{when} ~~t\leq r_1, \\ \varphi(r_1)+(t-r_1)\varphi^\prime (r_1), &\textrm{when}~~t\geq r_1, \end{array} \right. \end{eqnarray*} is desingularising for $f$ on all of $\mathbb{R}^n$, and hence it holds that \begin{equation}\label{ro3} \forall~ x\in X_{r},~~\|\nabla f(x)\| \phi^\prime (f(x))\geq 1. \end{equation} Thereby, we get $$\|\nabla f(x)\|\geq \eta \sqrt{r_1},~~\forall x\in X_r\cap X^c_{r_1},$$ where $X^c_{r_1}$ is the complement of $X_{r_1}$. By the definition of $c_r$, we further obtain $$\|\nabla f(x)\|\geq \eta c_r\sqrt{r}\geq \eta c_r \sqrt{f(x)},~~\forall x\in X_r\cap X^c_{r_1}.$$ Finally, recalling the expression of $\rho_r$ and combining it with \eqref{ro1}, for $r>0$ we have $$\|\nabla f(x)\|\geq \eta \rho_r \sqrt{f(x)},~~\forall x\in X_r,$$ which is just the $(\nabla f, \eta \rho_r, X_r)$-\eqref{res-obj-EB} condition. Thus, the $(\nabla f, \nu, X_{r})$-\eqref{cor-EB} condition follows from Corollary \ref{corr2}.
Using the relevant formulas in Theorem \ref{mainresult}, we have $$\nu =\frac{1}{4}(\eta \rho_r)^2=\frac{1}{4}\rho_r^2 \cdot \frac{\tau}{2}=\frac{1}{8}\rho_r^2\tau,$$ which completes the proof. \end{proof} \begin{remark} By directly invoking Corollary 4.3 in \cite{Artacho2013Metric}, we can derive \eqref{quagrow} with the constant satisfying $\alpha(\bar{z})\in (0, \frac{1}{4\kappa})$, which is slightly worse than $\alpha(\bar{z})\in (0, \kappa^{-1})$. \end{remark} \begin{remark} The author of \cite{Frank2015linear}, with slightly different assumptions, proved by contradiction that the dual objective function $f(x)= g^*(A^Tx)-\langle b, x\rangle$ satisfies the $(\nabla f, \nu, X_{r})$-\eqref{cor-EB} condition. While the author of \cite{Frank2015linear} requires that $\partial g$ is calm around $\bar{y}$ for any $\bar{z}\in \mathbb{R}^m$, i.e., the local upper Lipschitz-continuity property \eqref{calm3}, we only require that $\partial g$ is calm around $\bar{y}$ for any $\bar{z}\in V_0$. Our proof is by means of the KL inequality globalization technique developed in \cite{Bolte2015From}, and hence quite different from that of \cite{Frank2015linear}. \end{remark} \begin{remark} Verifying EB conditions for more general functions of the form $f(x):=h(Ax)+l(x)$ was studied recently in \cite{Drusvyatskiy2016Error,Zhou2015A,Li2016Calculus}. Specialized to the dual objective function $f(x)= g^*(A^Tx)-\langle b, x\rangle$, the existing theory usually requires $g^*$ to be strictly or strongly convex; see e.g., Corollary 4.3 in \cite{Drusvyatskiy2016Error} and Assumption 1 in \cite{Zhou2015A}. In contrast, our study, following the research line of \cite{Frank2015linear}, relies on exploiting the primal-dual structure, and is thus quite different from that in \cite{Drusvyatskiy2016Error,Zhou2015A,Li2016Calculus}.
\end{remark} \section{Discussion}\label{sec9} In this paper, we provide a new perspective for studying EB conditions and analyzing linear convergence of gradient-type methods. Under our theoretical framework, a number of new technical results are obtained. In particular, some EB conditions, previously known to be sufficient for linear convergence, are also necessary; and Nesterov's accelerated forward-backward algorithm, previously known to be R-linearly convergent, is also Q-linearly convergent. Finally, we close this paper with the following possible directions for future work: \begin{enumerate} \item We have defined a group of abstract EB conditions of ``square type". However, we do not know whether the underlying idea can be extended to general types by introducing so-called desingularizing functions \cite{Bolte2015From}, so that the other EB conditions discussed in \cite{Garrigos2017conv} can be included in a more general framework. \item Although we have shown sufficient conditions guaranteeing linear convergence for PALM and Nesterov's accelerated forward-backward algorithms, it is still unclear whether they are necessary. The very recent work \cite{Luke2017necessary} might shed light on this topic. \item Verifying EB conditions with \textsl{high probability} for non-convex functions has proven to be a very powerful approach for non-convex optimization; see e.g. \cite{Candes2015Phase,Tu2016Low,Liu2016On}. Thus, seeking or verifying new classes of non-convex functions satisfying EB conditions with high probability deserves further study. \item What are the optimal rates of linear convergence (that is, exact worst-case convergence rates) for gradient-type methods under general EB conditions? The method of performance estimation, originally proposed in \cite{Drori2014smooth} and further developed in \cite{Kim2016optimized,Taylor2016smooth,Taylor2017exact}, might be useful for this topic.
\item Ordinary differential equation (ODE) approaches have recently been used to study (accelerated) gradient-type methods \cite{Su2016A,Wilson2016A}. With the exception of \cite{Yang2016The}, existing analyses only consider general convexity and strong convexity, and do not address general EB conditions. It would be interesting to investigate whether the EB condition presented in this paper can be embedded in the ODE approaches to study linear convergence for gradient-type methods. \end{enumerate} \section*{Acknowledgements} I am grateful to the anonymous referees, the associate editor, and the coeditor Prof. Adrian S. Lewis for many useful comments, which allowed me to significantly improve the original presentation. I would like to thank Prof. Zaiwen Wen for his invitation and hospitality during my visit to the Beijing International Center for Mathematical Research, and to thank Prof. Dmitriy Drusvyatskiy for a careful reading of an early draft of this manuscript, and for valuable comments and suggestions. I also thank Profs. Chao Ding, Bin Dong, Lei Guo, Yongjin Liu, Deren Han, Mark Schmidt, Anthony Man-Cho So, and Wotao Yin for their time and many helpful discussions with me. Further thanks are due to my cousin Boya Ouyang, who helped me with my English writing, and to PhD students Ke Guo, Wei Peng, Ziyang Yuan, and Xiaoya Zhang, who looked over the manuscript and corrected several typos. While visiting the Chinese Academy of Sciences, I was particularly fortunate to be acquainted with Prof. Florian Jarre, who carefully read and polished this paper. This work is supported by the National Science Foundation of China (No.11501569 and No.61571008). \bibliographystyle{abbrv} \small